Research Activities

Applications
Articulatory-based speech synthesis

In collaboration with Gipsa-Lab, we develop speech synthesis methods that convert the movements of the main articulators of the vocal tract (lips, tongue, jaw, and velum) into acoustic speech parameters using artificial neural networks. These methods are compatible with real-time speech brain-computer interface applications.
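As an illustration only, such an articulatory-to-acoustic mapping can be sketched as a small feed-forward network applied frame by frame. All dimensions, variable names, and the randomly initialised weights below are assumptions for the sketch, not the lab's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ART = 12   # articulatory features per frame (assumed, e.g. sensor coordinates)
N_HID = 64   # hidden units (assumed)
N_AC = 25    # acoustic parameters per frame (assumed, e.g. spectral coefficients)

# Randomly initialised weights stand in for a trained network.
W1 = rng.normal(0.0, 0.1, (N_ART, N_HID))
b1 = np.zeros(N_HID)
W2 = rng.normal(0.0, 0.1, (N_HID, N_AC))
b2 = np.zeros(N_AC)

def articulatory_to_acoustic(frames):
    """Map a (T, N_ART) articulatory trajectory to (T, N_AC) acoustic frames."""
    h = np.tanh(frames @ W1 + b1)   # hidden layer
    return h @ W2 + b2              # linear acoustic output

# One second of movement data at an assumed 100 Hz frame rate.
trajectory = rng.normal(size=(100, N_ART))
acoustic = articulatory_to_acoustic(trajectory)
print(acoustic.shape)  # (100, 25)
```

Because each frame is processed independently, this kind of mapping is cheap enough to run in real time, which is what makes it compatible with speech brain-computer interfaces.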

Our articulatory-acoustic dataset is freely available for download from Zenodo.

Intraoperative recording of speech activity

In collaboration with the Grenoble University Hospital, we perform intraoperative recordings of cortical speech areas during awake surgeries for the resection of tumors or epileptic foci.

Speech decoding from neural activity

We develop decoding approaches to reconstruct speech from cortical neural signals. These methods are compatible with real-time speech brain-computer interface applications.
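A minimal sketch of one classical decoding strategy: a linear (ridge-regression) decoder fit on paired neural and acoustic features, then applied one frame at a time. Channel counts, feature dimensions, and the synthetic data are all assumptions for illustration; the lab's actual decoders are not specified here.

```python
import numpy as np

rng = np.random.default_rng(1)

N_CH = 64   # cortical recording channels (assumed)
N_AC = 25   # acoustic parameters per frame (assumed)
T = 500     # training frames (assumed)

# Synthetic training data: neural features X and target acoustic features Y.
X = rng.normal(size=(T, N_CH))
W_true = rng.normal(size=(N_CH, N_AC))
Y = X @ W_true + 0.1 * rng.normal(size=(T, N_AC))

# Closed-form ridge regression: W = (X^T X + lam * I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(N_CH), X.T @ Y)

def decode_frame(x):
    """Decode one frame of neural activity; a single matrix product per frame."""
    return x @ W

frame = rng.normal(size=N_CH)
decoded = decode_frame(frame)
print(decoded.shape)  # (25,)
```

The per-frame cost is a single matrix-vector product, which is what makes frame-by-frame decoding of this kind feasible in a real-time brain-computer interface loop.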
