CeHCI investigates the problem of multimodal emotion modeling and empathic response modeling to build human-centered design systems. By multimodal, we mean recognizing a human's emotion from facial expressions, speech, and movement (such as posture and gait). Because we extend the computing system from software into a physical space, novel approaches to providing empathic responses are required. To achieve this, we combine emotion-based interaction with sensor-rich ubiquitous computing and ambient intelligence.
Research Team

Norshuhani Zamin
Lab Head
Ph.D. Information Technology, Universiti Teknologi Petronas

Judith J. Azcarraga
Ph.D. Computer Science, De La Salle University

Rafael Cabredo
Ph.D. Information Science and Technology, Osaka University

Gregory G. Cu
Ph.D. Candidate, Computer Science, De La Salle University

Merlin Teodosia C. Suarez
Ph.D. Computer Science, De La Salle University

Jocelynn W. Cu
Ph.D. Candidate, Computer Science, De La Salle University

Fritz Kevin S. Flores
Ph.D. Candidate, Computer Science, De La Salle University

Ryan A. Ebardo
D. Information Technology, De La Salle University
Research Projects
The TALA Empathic Space: Integrating Affect and Activity Recognition into a Smart Space
Advances in ambient intelligence are driving the trend towards innovative interaction with computing systems. In this paper, we present our…