Master's Thesis Research

Temporal Contrastive Learning for Interpretable Facial Emotion Recognition

A self-supervised temporal contrastive learning framework for depression detection from facial video sequences with interpretable attention and saliency outputs.


2024 - 2026

Technical overview

This graduate research combines self-supervised temporal contrastive pretraining with a BiLSTM encoder and multi-head attention, adding interpretability modules, attention-weight visualizations and Grad-CAM saliency maps, for facial video analysis.
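The attention mechanism over BiLSTM hidden states can be illustrated with a minimal pure-Python sketch. This is a simplified single-head, dot-product attention pooling over per-frame hidden states (the thesis uses multi-head attention); the function names and the tiny 2-dimensional example are illustrative, not taken from the project code. The returned weights are exactly what an attention-weight visualization would plot per frame.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, query):
    """Dot-product attention over T hidden states (one per video frame).

    hidden_states: list of T hidden vectors (e.g. BiLSTM outputs).
    query: a learned query vector of the same dimension.
    Returns the attention-weighted pooled vector and the per-frame
    weights, which can be visualized for interpretability.
    """
    scores = [sum(q * h for q, h in zip(query, h_t)) for h_t in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    pooled = [sum(w * h_t[d] for w, h_t in zip(weights, hidden_states))
              for d in range(dim)]
    return pooled, weights

# Toy example: two frames, 2-D hidden states; the query attends to frame 0.
pooled, weights = attention_pool([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])
```

Frames whose hidden states align with the query receive larger weights, so plotting `weights` over time shows which segments of the video sequence drive the prediction.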

Uses self-supervised temporal contrastive pretraining with NT-Xent loss.
Models temporal dependencies across long facial video sequences.
Reported 100% depressed-class recall and an ensemble AUC of 0.72 on DAIC-WOZ in cross-validation.
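The NT-Xent (normalized temperature-scaled cross-entropy) objective used for pretraining can be sketched in a few lines of pure Python. This is a generic reference implementation of the standard loss over cosine similarities, not the thesis codebase; embeddings are arranged so that indices (2k, 2k+1) form the positive pair of augmented views, and the temperature default of 0.5 is an assumption.

```python
import math

def cosine(a, b):
    """Cosine similarity between two nonzero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nt_xent(embeddings, tau=0.5):
    """NT-Xent loss over 2N embeddings; (2k, 2k+1) are positive pairs.

    For each anchor i, the loss is -log of the softmax (over all other
    samples) assigned to its positive partner, with similarities scaled
    by the temperature tau.
    """
    n = len(embeddings)
    total = 0.0
    for i in range(n):
        pos = i + 1 if i % 2 == 0 else i - 1
        denom = sum(math.exp(cosine(embeddings[i], embeddings[k]) / tau)
                    for k in range(n) if k != i)
        num = math.exp(cosine(embeddings[i], embeddings[pos]) / tau)
        total += -math.log(num / denom)
    return total / n

# Aligned positive pairs yield a lower loss than mismatched ones.
aligned = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
mismatched = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
```

In the temporal setting, the two views of a pair would be differently augmented or temporally shifted clips from the same facial video, pulling their embeddings together while pushing apart clips from other videos.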