In recent years, Alzheimer’s disease has been on the rise worldwide, and it is rarely diagnosed at an early stage, when it can still be effectively managed. Using artificial intelligence, KTU researchers conducted a study to determine whether human-computer interfaces could be adapted to help people with memory impairments recognize an object visible in front of them.
Rytis Maskeliūnas, a researcher at the Multimedia Engineering Department of Kaunas University of Technology (KTU), notes that classifying the information visible on a face is an everyday human function: “In communicating, the face ‘tells’ us the context of the conversation, especially from an emotional point of view, but can we identify visual stimuli based on brain signals?”
Visual processing of the human face is complex. By analyzing faces, we can perceive information such as a person’s identity or emotional state. The aim of the study was to analyze a person’s ability to process contextual facial information and to detect how that person responds to it.
The face can indicate the first symptoms of the disease
According to Maskeliūnas, numerous studies demonstrate that brain diseases can potentially be analyzed by examining facial muscle and eye movements, since degenerative brain disorders affect not only memory and cognitive functions but also the cranial nerves associated with facial (especially eye) movements.
Dovilė Komolovaitė, a graduate of KTU’s Faculty of Mathematics and Natural Sciences who co-authored the study, explained that the research sought to clarify whether a patient with Alzheimer’s disease processes visible faces in the brain in the same way as individuals without the disease.
“The study uses data from an electroencephalograph, which measures electrical impulses in the brain,” says Komolovaitė, who is currently pursuing a master’s degree in artificial intelligence at the Faculty of Computer Science.
In this study, the experiment was carried out on two groups of participants: healthy individuals and those with Alzheimer’s disease.
“The brain signals of a person with Alzheimer’s disease are typically much noisier than those of a healthy person,” says Komolovaitė, pointing out that this correlates with the difficulty in concentrating and paying attention that a person experiences when showing symptoms of Alzheimer’s disease.
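Because raw EEG recordings are noisy, they are usually filtered and cut into stimulus-locked segments before any analysis. Below is a minimal sketch of that preprocessing step using the open-source MNE-Python library; the file name and event annotations are hypothetical placeholders, and this is an illustration of the general approach, not the authors’ pipeline.

```python
# Sketch: load, filter, and epoch an EEG recording with MNE-Python.
# "subject01.edf" and the event annotations are hypothetical placeholders.
import mne

# Load a raw EEG recording (hypothetical file).
raw = mne.io.read_raw_edf("subject01.edf", preload=True)

# Band-pass filter to suppress slow drifts and high-frequency noise,
# a common first step when signals are noisy.
raw.filter(l_freq=1.0, h_freq=40.0)

# Extract stimulus onsets from the recording's annotations.
events, event_id = mne.events_from_annotations(raw)

# Cut the continuous signal into epochs time-locked to each face stimulus:
# 200 ms before onset to 800 ms after, baseline-corrected.
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)

# Shape: (n_trials, n_channels, n_times) -- the input to a classifier.
X = epochs.get_data()
```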
Photos of people’s faces were shown during the study
The study recruited a group of older women over the age of 60: “Older age is one of the main risk factors for dementia, and since sex differences are noticeable in brain waves, the study is more accurate when only one sex group is chosen.”
During the study, each participant took part in experiments lasting up to an hour, during which photographs of human faces were shown. According to the researcher, these photos were selected according to several criteria: to analyze the influence of emotions, neutral and fearful faces were shown, while to analyze the familiarity factor, both known and randomly chosen faces were presented to the study participants.
To verify whether a person sees and processes a face correctly, study participants were asked to press a button after each stimulus to indicate whether the displayed face was inverted or upright.
“Even at this stage, an Alzheimer’s patient makes mistakes, so it is important to determine whether the errors are due to memory or to visual processing,” explains the researcher.
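The published study (see the journal reference below) classified these EEG responses with a deep convolutional neural network. As a rough illustration of what such a classifier can look like, here is a small PyTorch sketch; the architecture, layer sizes, and channel/time dimensions are assumptions for the example, not the authors’ model.

```python
# Sketch: a small 1D CNN over EEG epochs of shape (batch, channels, time).
# Layer sizes and input dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class EEGConvNet(nn.Module):
    def __init__(self, n_channels=32, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution applied across each EEG channel.
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):            # x: (batch, channels, time)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)    # logits, e.g. inverted vs. upright face

# Example: 8 epochs of 32-channel EEG, 250 samples each.
model = EEGConvNet()
logits = model(torch.randn(8, 32, 250))
print(logits.shape)  # torch.Size([8, 2])
```

Convolutions along the time axis are a common choice for EEG because the informative structure (such as event-related responses to a face) unfolds over time within each channel.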
Inspired by real-life interactions with patients with Alzheimer’s disease
Maskeliūnas reveals that his work on Alzheimer’s disease began with a collaboration with the Huntington’s Disease Association, which opened his eyes to what neurodegenerative diseases really look like.
The researcher has also been in direct contact with patients suffering from Alzheimer’s disease: “I have seen that the diagnosis is usually confirmed too late, when the brain is already irreversibly damaged. Detecting the disease earlier could give patients years of quality life.”
Today, we can see how human-computer interaction is being adapted to ease the lives of people with physical disabilities. Controlling a robotic hand by “thinking”, or a paralyzed person writing text by imagining letters, is not a new concept. Yet trying to understand the human brain is probably one of the most difficult tasks remaining today.
In this study, the researchers worked with data from standard electroencephalograph equipment; however, Maskeliūnas points out that, to create a practical tool, it would be better to use data collected from invasive microelectrodes, which can measure the activity of neurons more precisely. This would greatly improve the quality of the AI model.
“Of course, in addition to the technical requirements, there should be a community environment focused on making life easier for people with Alzheimer’s disease. Still, in my personal opinion, five years from now we will still see technologies focused on improving physical function, and attention to people affected by brain diseases in this area will come only later.”
Rytis Maskeliūnas, Researcher, Department of Multimedia Engineering, Kaunas University of Technology
According to master’s student Komolovaitė, a clinical examination with the help of colleagues in the field of medicine is needed, so this step of the process would take a lot of time: “If we want to use this test as a medical tool, a certification process is also necessary.”
Journal reference:
Komolovaite, D., et al. (2022). Classification of Visual Stimuli Based on a Deep Convolutional Neural Network Using Electroencephalography Signals from Healthy and Alzheimer’s Disease Subjects. Life, 12(3), 374. doi.org/10.3390/life12030374