Deep Learning AI Improves Facial Movement Tracking

A growing area of research in mobile data collection is the real-time tracking and communication of facial movements. While this presents another opportunity for human progress through data, it also raises technological challenges and serious privacy concerns. 

Research in this area has faced a variety of barriers: potential solutions have been too physically cumbersome, too invasive, too power-hungry, too inaccurate, or some combination of these. Researchers at Cornell University and Peking University sought to overcome these barriers using AI, producing a minimally invasive solution that fits inside current-style “earables” (wearable ear devices such as wireless earbuds) while delivering highly accurate results with low power consumption, as outlined in their paper, EarIO: A Low-power Acoustic Sensing Earable for Continuously Tracking Detailed Facial Movements.

As with all types of data collection and use of AI, there are serious privacy concerns as these kinds of wearable sensors proliferate and potentially extract more information about a person than they want to reveal. As a facial movement tracking system’s accuracy increases, notably with AI, potential misuse and its consequences become more of a concern. A system that can reconstruct facial expressions and interpret human emotions may gather enough information to infer the wearer’s emotional state or current circumstances. Such information could aid in malicious attacks or be used for manipulative purposes such as targeted advertising. In the case of “earables,” this information could be collected and utilized without the knowledge or consent of the wearer.

Indeed, this has become such a concern that “emotion recognition technologies” have been acknowledged by the UNHRC as an area of “emergent priority” in its adopted resolution recognizing a right to privacy in the digital age: