I explore how emerging, human-centric interfaces — particularly audio-based platforms — change the nature of social interaction and user profiling. My work includes:
- Studying audio-based social networks and extracting user insights from speech and voice interaction
- Modelling non-textual semantics, including ambient context and spoken cues
- Investigating how social signal processing can inform future AI systems that are more empathetic and context-aware
Themes:
- Audio-based Social Media Analytics
- X (Twitter) Spaces and Clubhouse
- Multimodal User Profiling and User Privacy
- Bias in Recommendation Engines