Articulatory-based speech synthesis
In collaboration with the Gipsa-Lab, we develop speech synthesis methods that convert the movements of the main articulators of the vocal tract (lips, tongue, jaw, and velum) into acoustic speech parameters using artificial neural networks. These methods are compatible with real-time speech brain-computer interface applications.
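Below is a minimal sketch of such an articulatory-to-acoustic mapping: a small feed-forward network that turns a context window of articulator coordinates (as recorded, for instance, by electromagnetic articulography) into one frame of acoustic parameters. The sensor count, context length, network size, and acoustic parameterization here are illustrative assumptions, not the exact configuration used in our work.

```python
import torch
import torch.nn as nn

class ArticulatoryToAcoustic(nn.Module):
    """Feed-forward network mapping articulatory features to acoustic parameters.

    Input:  a context window of articulator sensor coordinates
            (lips, tongue, jaw, velum), flattened into one vector.
    Output: one frame of acoustic parameters (e.g., mel-cepstral
            coefficients) that a vocoder can turn back into a waveform.
    """
    def __init__(self, n_sensors=9, n_coords=2, context=11, n_acoustic=25):
        super().__init__()
        in_dim = n_sensors * n_coords * context  # flattened context window
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_acoustic),
        )

    def forward(self, x):
        return self.net(x)

# Toy training step on random tensors, standing in for aligned
# articulatory/acoustic frame pairs from a real recording.
model = ArticulatoryToAcoustic()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

ema = torch.randn(32, 9 * 2 * 11)   # batch of articulatory context windows
acoustic = torch.randn(32, 25)      # matching acoustic target frames
loss = loss_fn(model(ema), acoustic)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Keeping the mapping frame-by-frame, rather than over whole utterances, is what makes a model of this kind usable in a low-latency, real-time synthesis pipeline.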
Our articulatory-acoustic dataset is freely available for download from Zenodo.
Intraoperative recording of speech activity
In collaboration with the Grenoble University Hospital, we perform intraoperative recordings of cortical speech areas during awake surgeries for the resection of tumors or epileptogenic areas.
We develop decoding approaches to reconstruct speech from these cortical neural signals. These methods are compatible with real-time speech brain-computer interface applications.
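As a rough illustration, one common decoding pipeline extracts high-gamma band power from the cortical recordings and feeds it to a causal recurrent network that predicts acoustic frames. The sketch below follows that pattern; the sampling rate, channel count, frequency band, and model architecture are assumptions for illustration, not the exact method used in our studies.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, sosfiltfilt

def high_gamma_power(ecog, fs, band=(70.0, 150.0), frame=0.01):
    """Band-pass each channel in the high-gamma range and average the
    squared signal over short frames -- a common neural feature for
    speech decoding. `ecog` has shape (time, channels)."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, ecog, axis=0)
    hop = int(frame * fs)
    n_frames = filtered.shape[0] // hop
    power = filtered[: n_frames * hop] ** 2
    return power.reshape(n_frames, hop, -1).mean(axis=1)

class SpeechDecoder(nn.Module):
    """Recurrent decoder mapping neural feature frames to acoustic frames.
    Unidirectional, so every output depends only on past activity."""
    def __init__(self, n_channels=64, hidden=128, n_acoustic=25):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_acoustic)

    def forward(self, x):               # x: (batch, frames, channels)
        h, _ = self.rnn(x)
        return self.out(h)              # (batch, frames, acoustic params)

# Toy example on synthetic data standing in for an intraoperative recording.
fs = 1000                               # Hz, assumed sampling rate
ecog = np.random.randn(10 * fs, 64)    # 10 s of 64-channel cortical signal
features = torch.tensor(high_gamma_power(ecog, fs), dtype=torch.float32)
decoder = SpeechDecoder()
acoustic = decoder(features.unsqueeze(0))   # predicted acoustic trajectory
```

Because the recurrent model is unidirectional, each predicted acoustic frame depends only on past neural activity, which is the property required for real-time brain-computer interface use.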