Ten years ago, 10% of French people slept less than six hours a night; today, the figure is 30%, according to the VIFASOM unit (Vigilance, Fatigue, Sommeil et Santé Publique), in collaboration with Santé Publique France. This lack of sleep affects not only their physical and intellectual capacities, but also… their voice! These are the findings of a multidisciplinary French team bringing together an acoustician, an engineer and sleep clinicians.
Twenty-two women with no particular sleep problems slept just three hours a night for two days. They were monitored by the Vigilance, Fatigue, Sleep and Public Health (VIFASOM) research team, headed by Professor Damien Léger (Université Paris Cité), and were asked to read excerpts from a book, in this case Alexandre Dumas’ The Count of Monte Cristo, so that the effect of sleep deprivation on their voices could be assessed. “We first transformed these recordings into computer-usable sound representations, to simulate the way the brain processes sounds,” explains Prof. Etienne Thoret, a CNRS researcher at the Timone Neuroscience Institute (Aix-Marseille University/CNRS). “This provides a visualization in terms of spectra, frequencies and modulations.”
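The article does not detail the exact representation used, but the general idea can be sketched in a few lines of Python: compute a log-mel spectrogram of a recording, then take its two-dimensional Fourier transform as a rough spectro-temporal modulation spectrum, a crude stand-in for the auditory-inspired representation described above. The file name and parameters below are purely illustrative, not those of the study.

```python
# Hedged sketch: turn a speech recording into a spectro-temporal representation
# (log-mel spectrogram + its 2-D modulation spectrum). File name and parameters
# are assumptions for illustration only.
import numpy as np
import librosa

def modulation_representation(path, sr=16000, n_mels=64):
    # Load the recording and compute a log-mel spectrogram
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)
    # A 2-D Fourier transform of the log-mel spectrogram gives a rough
    # spectro-temporal modulation spectrum (temporal rate vs. spectral scale).
    mod = np.abs(np.fft.fftshift(np.fft.fft2(log_mel)))
    return log_mel, mod

# Example usage (assumed file name):
# log_mel, mod = modulation_representation("reading_excerpt.wav")
```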
“The aim was to determine whether artificial intelligence could recognize a sleep-deprived voice. Our intuition was right: what we were looking for was not absurd,” says an enthusiastic Prof. Daniel Pressnitzer, CNRS researcher at the Perceptive Systems Laboratory (CNRS/ENS – PSL). The AI recognizes the subjects’ lack of sleep with a success rate that is “above chance,” the scientist points out: “it doesn’t work in 100% of cases, but the machine gets it wrong less often than chance would.”
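To picture what “above chance” means in practice, here is a minimal, hypothetical sketch of the evaluation logic: a classifier is trained on vocal features labelled “rested” versus “sleep-deprived”, and its cross-validated accuracy is compared with the 50% expected by chance. The features, model and injected effect below are placeholders, not the study’s pipeline.

```python
# Minimal sketch of the classification idea, with synthetic placeholder data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))        # placeholder vocal feature vectors
y = rng.integers(0, 2, size=200)      # 0 = rested, 1 = sleep-deprived
X[y == 1, 0] += 1.0                   # inject a weak class difference, so accuracy
                                      # lands above chance without being perfect

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f} (chance = 0.50)")
```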
“We analyzed whether the AI was able to recognize, for each subject, whether the extract it heard was recorded before or after sleep deprivation,” adds Prof. Etienne Thoret. The results show that two dimensions of the voice are affected: prosody, the melody of the sentence (faster, slower, more jerky, etc.), and the timbre of the voice (nasality, raspiness, brightness, etc.). The results were published in February 2024 in PLoS Computational Biology¹.
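One way to picture how the two dimensions might be compared is sketched below, under assumptions: train the same classifier separately on prosody-related and timbre-related feature groups and compare the before/after accuracies. The feature groupings and synthetic effects are invented for illustration, not the study’s definitions.

```python
# Assumed sketch: compare how well prosody-like and timbre-like feature groups
# separate "before" from "after" recordings. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_excerpts = 120
prosody = rng.normal(size=(n_excerpts, 6))   # e.g. speech rate, pitch range (placeholder)
timbre = rng.normal(size=(n_excerpts, 12))   # e.g. spectral shape summaries (placeholder)
y = rng.integers(0, 2, size=n_excerpts)      # 0 = before, 1 = after sleep deprivation
prosody[y == 1] += 0.5                       # inject synthetic deprivation effects
timbre[y == 1, :4] += 0.4

for name, feats in [("prosody", prosody), ("timbre", timbre)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), feats, y, cv=5).mean()
    print(f"{name}: mean before/after accuracy = {acc:.2f}")
```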
A first paper, published in 2021, had already validated the method², which aims to “understand the causal relationship between sound parameters and decisions made by AI systems,” says Prof. Pressnitzer, “because our experiment enabled us to identify a vocal fingerprint of the effect of sleep deprivation on each subject’s voice. This generic study provides an approach that could be applied to anything that is reflected in the voice”, for example attention disorders, sleep deprivation during night shifts, or assessing the effectiveness of insomnia treatments. This work could lead to a rapid, objective test, notably to help prevent accidents linked to sleep deprivation.
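The interpretation step described here can be approximated, under assumptions, by a generic occlusion approach: mask parts of the sound representation, measure how much the classifier’s decision changes, and keep the resulting map as a “fingerprint” of the regions that drive the decision. This is not the authors’ exact 2021 method, only an illustration of the principle; `predict_proba` stands for any function returning the model’s probability of “sleep-deprived”.

```python
# Hedged sketch of a generic occlusion-based interpretation: which parts of the
# spectro-temporal representation matter most for the classifier's decision?
import numpy as np

def importance_map(representation, predict_proba, patch=8):
    """representation: 2-D array (e.g. a modulation spectrum);
    predict_proba: function mapping a representation to P(sleep-deprived)."""
    base = predict_proba(representation)
    rows, cols = representation.shape
    imp = np.zeros((rows // patch, cols // patch))
    for i in range(imp.shape[0]):
        for j in range(imp.shape[1]):
            masked = representation.copy()
            # Occlude one patch and record the drop in the model's confidence
            masked[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
            imp[i, j] = base - predict_proba(masked)
    return imp

# Example (hypothetical): fingerprint = importance_map(mod, model_probability)
```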