'While It May Sound Futuristic, It Holds Great Promise': Olga Dragoy Shares Her Thoughts on Language Function Restoration and the Future of Neurotechnology
In the spring of 2023, the fifth strategic project of the Priority 2030 programme, 'Human Brain Resilience: Neurocognitive Technologies for Adaptation, Learning, Development and Rehabilitation in a Changing Environment,' was launched at HSE University. The strategic project brings together researchers from all campuses of HSE University. In her interview with the HSE News Service, Olga Dragoy, head of the strategic project and Director of the HSE Centre for Language and Brain, shares an overview of the advanced technologies neuroscientists are creating today, the underlying inspiration driving these efforts, and the operational dynamics of interdisciplinary applied projects.
From Conducting Basic Research to Helping Patients
I believe that all of us who have joined the large team of the strategic project this year have reached a point in our research where we are compelled to venture into creating tangible, applied solutions, such as tools that can be used immediately to benefit people. Our centre has already started developing such solutions: this summer, we successfully launched HSE University's first application on RuStore—it is designed for diagnosing language disorders in children (KORABLIK, a Russian acronym for Clinical Assessment of Basic Linguistic Competencies). It is the culmination of five years of foundational work, followed by several more years dedicated to translating the results into a user-friendly application.
The strategic project 'Human Brain Resilience' implemented by the Centre for Language and Brain comprises two major areas of focus. The first one concerns neural interfaces and digital tools for treating language disorders. During the ten years that the centre has been operating within HSE University, we have gained extensive insights into the mechanisms of various language disorders in both adults and children. Now, we are ready to propose tools for treating these disorders. It is undoubtedly a technological challenge, as we need to acquire new expertise and establish new partnerships. The second area of focus involves formulating clinical protocols to aid in the recovery of language function after stroke through a combination of brain stimulation and evidence-based speech therapy programmes.
EEG-Based Neural Interfaces
This year, we have started a collaboration with the Chen Frontier Lab for Applied Neurotechnology in China. The team of Dr Gerwin Schalk at the Chen Frontier Lab has developed an inexpensive and easy-to-use EEG device with four electrodes. By way of comparison, similar EEG systems typically incorporate 64 or more electrodes. A device the size of a mobile phone is affixed to a person’s head to read the electrical activity of their brain. Based on this technology, we have been developing a neural computer interface in Russian for the treatment of a specific language disorder: aphasia. A treatment scheme has been formulated, and corresponding linguistic material has been selected and programmed. Currently, we are integrating our software solutions with the EEG system.

This development will help medical practitioners and patients manage anomia—the most common symptom in all types of aphasia, where individuals have difficulty retrieving words for naming objects during spontaneous speech. For example, they may say 'box' instead of 'basket' or use a word that sounds similar to 'basket,' such as 'bicycle.'
Instead of cumbersome and costly installations that require patients to wear caps with numerous electrodes and involve complex algorithms, we will be able to offer a compact and relatively user-friendly device.
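To give a concrete sense of what working with such a compact device involves, here is a minimal Python sketch of reading a four-channel EEG stream and computing a simple band-power feature. It assumes the device exposes a Lab Streaming Layer (LSL) stream accessible via the pylsl library; the stream type, sampling rate, and choice of feature are illustrative assumptions rather than details taken from the interview.

```python
# A minimal sketch of reading a 4-channel EEG stream and computing a simple
# band-power feature, assuming the device exposes a Lab Streaming Layer (LSL)
# stream. Stream type, sampling rate, and the feature choice are illustrative
# assumptions, not details taken from the interview.
import numpy as np
from scipy.signal import welch
from pylsl import StreamInlet, resolve_stream

FS = 250          # assumed sampling rate, Hz
N_CHANNELS = 4    # four-electrode device described in the interview
WINDOW_SEC = 2    # analysis window length in seconds

def alpha_power(window, fs=FS):
    """Mean 8-12 Hz power per channel for one window of shape (n_samples, n_channels)."""
    freqs, psd = welch(window, fs=fs, axis=0, nperseg=min(len(window), fs))
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean(axis=0)

streams = resolve_stream('type', 'EEG')      # find the EEG stream on the network
inlet = StreamInlet(streams[0])

buffer = []
while True:
    sample, _ = inlet.pull_sample()          # one sample = list of channel values
    buffer.append(sample[:N_CHANNELS])
    if len(buffer) >= FS * WINDOW_SEC:
        window = np.asarray(buffer)
        print('alpha power per channel:', alpha_power(window))
        buffer = []
```

In a working interface, features of this kind would serve as input to whatever decoding logic drives the training protocol; the exact features and decoding approach used in the project are not specified in the interview.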
This winter, we will conduct tests on healthy volunteers. Next year, we plan to involve patients, and by the end of 2024, we expect to deliver a tangible outcome: a hardware and software system ready for use by any medical practitioner.
'While It May Sound Futuristic, It Holds Great Promise'
Individuals with specific neurogenic injuries may find themselves unable to speak aloud, yet they retain the ability to generate words and sentences internally. This indicates that their inner speech remains intact. The reasons for this condition vary and include, among others, disorders of the conducting pathways and articulatory atrophy.
One way to assist such patients is through the use of electrode arrays surgically implanted on the cerebral cortex. These arrays can register electrical signals in the brain, so that the signals may be subsequently translated into sounds or written text using language models.
Globally, efforts are underway to expand dictionaries for training machine learning models not just on individual words, but on entire sentences. Three years ago, researchers from the UK were the first to publish their results; their model demonstrated the ability to recognise 50 individual words. Mind reading is still a distant prospect, but an early step in this direction has been taken. Language models will contribute to expanding vocabulary and progressing from single-word reconstruction to generating complete sentences. In this area, we collaborate with a team of computational linguists at HSE University led by Olga Lyashevskaya, author of the frequency dictionary of the Russian language.
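As a rough illustration of how a language model can help move from isolated words to sentences, here is a toy Python sketch that combines hypothetical per-position word probabilities from a neural decoder with a simple bigram language-model prior and searches for the most probable word sequence. The vocabulary, decoder outputs, and bigram scores are all invented for illustration; this is not the actual pipeline used by any of the teams mentioned here.

```python
# A toy illustration of combining a neural decoder's per-position word
# probabilities with a bigram language-model prior to choose a sentence
# (Viterbi-style search). Vocabulary, decoder outputs, and bigram scores
# are invented; this is not the pipeline described in the interview.
import math

VOCAB = ['i', 'want', 'water', 'walker']

# Hypothetical decoder output: P(word | neural signal) at each sentence position.
decoder_probs = [
    {'i': 0.7, 'want': 0.1, 'water': 0.1, 'walker': 0.1},
    {'i': 0.1, 'want': 0.6, 'water': 0.1, 'walker': 0.2},
    {'i': 0.05, 'want': 0.05, 'water': 0.5, 'walker': 0.4},
]

# Hypothetical bigram LM: P(next word | previous word).
bigram = {
    ('<s>', 'i'): 0.8, ('i', 'want'): 0.7, ('want', 'water'): 0.6,
    ('want', 'walker'): 0.1,
}
def lm(prev, word):
    return bigram.get((prev, word), 0.01)   # small default for unseen pairs

def decode(decoder_probs):
    """Return the word sequence maximising the sum of log decoder and log LM scores."""
    best = {'<s>': (0.0, [])}               # best[word] = (score, sequence ending in word)
    for probs in decoder_probs:
        new_best = {}
        for word in VOCAB:
            score, seq = max(
                (s + math.log(lm(prev, word)) + math.log(probs[word]), prev_seq)
                for prev, (s, prev_seq) in best.items()
            )
            new_best[word] = (score, seq + [word])
        best = new_best
    return max(best.values())[1]

print(decode(decoder_probs))   # expected output: ['i', 'want', 'water']
```

The same principle scales up: a stronger language model constrains the decoder's noisy word hypotheses, which is what makes the move from small fixed vocabularies towards complete sentences plausible.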

This may sound like science fiction, but experiments are underway and some progress has been achieved. Until now, processing speed has been the primary challenge for all such technologies. Invasive EEG-based neuroprostheses are currently slow and challenging for routine practical use.
I can see enormous fundamental benefits to be gained from research in this area. The technology we are referring to does not require physical effort. Through the implantation of electrodes into the cerebral cortex, we can use specific algorithms to reconstruct the language function. Speech does not have to be confined to articulated sounds; it could be encoded in ones and zeros and transmitted to another person, virtually anywhere. I believe that it holds great promise, although it may sound futuristic. If successful, we will be able to transmit inner speech without uttering a word, creating a new communication channel.
I am not sure whether healthy individuals would consent to be implanted with electrodes. This innovative communication channel must first demonstrate its effectiveness in patients who need it. The horizon of my expectations regarding this technology lies within the next 10 to 20 years.
Preparing for Future Technologies: Language Mapping
Meanwhile, we are making preparations for the introduction of these futuristic technologies. We are learning to identify, accurately and promptly, the brain’s electrical activity in connection with various aspects of language function.
We are working on this in parallel with Alexey Ossadtchi's team at the Centre for Bioelectric Interfaces. Specifically, we have been conducting research on patients with electrode arrays implanted in their brains for medical reasons. These patients perform various language-related tasks, such as naming objects in pictures, repeating words, and listening to speech. At the same time, we map precisely where the electrical signals associated with these language activities occur. Our team of neurolinguists has been designing speech paradigms (linguistic tasks) that facilitate effective language mapping in different regions of the brain. We then compare language maps for each task to determine which aspect of language function is realised in different parts of the brain, such as the frontal, temporal, or parietal lobes.
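Schematically, this comparison of language maps can be thought of as follows. The Python sketch below marks, for each task, the electrodes whose high-gamma activity during the task reliably exceeds a pre-stimulus baseline, and then intersects the resulting maps. The data shapes, the high-gamma band, and the paired t-test criterion are assumptions made for the sake of the example, not the team's actual analysis pipeline.

```python
# A schematic sketch of electrode-level language mapping: compare high-gamma
# activity during a task window against a pre-stimulus baseline, separately
# for each task (naming, repetition, listening). Data shapes, band choice,
# and the statistical criterion are assumptions for illustration only.
import numpy as np
from scipy.stats import ttest_rel

def responsive_electrodes(task_power, baseline_power, alpha=0.05):
    """
    task_power, baseline_power: arrays of shape (n_trials, n_electrodes)
    containing mean high-gamma (e.g. 70-150 Hz) power per trial.
    Returns a boolean mask of electrodes with a significant task-related increase.
    """
    t, p = ttest_rel(task_power, baseline_power, axis=0)
    return (p < alpha) & (t > 0)

# Synthetic example: 40 trials, 64 implanted electrodes, three tasks.
rng = np.random.default_rng(0)
n_trials, n_elec = 40, 64
baseline = rng.normal(1.0, 0.2, (n_trials, n_elec))
maps = {}
for task in ['naming', 'repetition', 'listening']:
    task_power = baseline + rng.normal(0.0, 0.2, (n_trials, n_elec))
    task_power[:, rng.choice(n_elec, 10, replace=False)] += 0.5  # simulated responsive sites
    maps[task] = responsive_electrodes(task_power, baseline)

# For example, electrodes active during naming but not during listening:
naming_specific = maps['naming'] & ~maps['listening']
print('naming-specific electrodes:', np.flatnonzero(naming_specific))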
Therapy in a Game Format
Additionally, in collaboration with HSE linguists, we have developed a therapeutic tablet application—a digital game designed to help alleviate anomia (the word-retrieval difficulty I mentioned earlier) in post-stroke patients. A patient views pictures of objects on the tablet and is instructed to name them. When the answer is correct, an arrow displayed on the screen hits the target; if the answer is not quite accurate, the arrow falls near the target; and if the patient's answer is completely off the mark, the arrow misses the target.
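The feedback rule itself is simple enough to sketch in a few lines of Python. In the sketch below, an exact answer scores a hit, a close answer lands near the target, and anything else misses; string similarity and the threshold value stand in for whatever clinically validated scoring the real application uses.

```python
# A minimal sketch of the three-level feedback rule described above: an exact
# answer hits the target, a close answer lands near it, anything else misses.
# The real application presumably scores spoken responses with clinically
# validated criteria; here, text similarity and the threshold are illustrative.
from difflib import SequenceMatcher

def arrow_feedback(response: str, target: str, near_threshold: float = 0.6) -> str:
    """Map a patient's answer to one of three arrow outcomes."""
    response, target = response.strip().lower(), target.strip().lower()
    if response == target:
        return 'hit'                       # arrow hits the target
    similarity = SequenceMatcher(None, response, target).ratio()
    if similarity >= near_threshold:
        return 'near'                      # arrow falls near the target
    return 'miss'                          # arrow misses the target

# Example trials echoing the anomia errors mentioned earlier:
print(arrow_feedback('basket', 'basket'))    # 'hit'
print(arrow_feedback('basked', 'basket'))    # close form -> 'near'
print(arrow_feedback('box', 'basket'))       # 'miss'
```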

Such feedback training lies at the core of any well-designed therapy. However, even the most advanced clinical institutions offer such treatment for a maximum of three weeks, while our application can be downloaded and used for practice as much as one desires. As of now, it is the only gaming application worldwide designed to address language disorders.
Helping Stroke Patients through Brain Stimulation
A segment of our team, led by Olga Buivolova, has been working on restoring language function through non-invasive brain stimulation, using methods such as transcranial electrical stimulation (micropolarisation) and transcranial magnetic stimulation (TMS). Strokes affecting the left hemisphere, where language function is predominantly localised, are of particular interest, as damage to this area is what leads to aphasia.
A stroke causes an imbalance of activity between the hemispheres: the left hemisphere is suppressed, while the right hemisphere, quite naturally, becomes overexcited. Approximately 20 years ago, scientists proposed slowing down the right hemisphere in order to restore activity in the functioning areas of the left hemisphere. The idea is compelling and has been supported by the majority of scientists. In reality, however, it proves effective in only 50% of patients. The question is, why? Our hypothesis is that the success or failure of this approach depends on the amount of preserved brain matter in the left hemisphere: for lesions of a certain size, the approach can be productive and activity can be restored, while for lesions of a different size, it cannot.

To find an answer to this question, we are investigating the association between successful left hemisphere recovery and the residual neural substrate in language processing areas. The metric for this residual neural substrate is the volume of the arcuate fasciculus, a white matter bundle around which the entire language system is constructed.
The arcuate fasciculus connects the language processing areas in the frontal lobe with the temporal and parietal lobes. There is a hypothesis suggesting that individual language processing zones are precisely located where this bundle enters the cortex. In other words, the bundle enters there to facilitate the connection of different language processing zones, thereby ensuring the operation of the entire language network in the brain. This means that the structure reflects the function.
The anatomy of the arcuate fasciculus is well-known; therefore, by measuring its residual volume (in cubic millimetres) using tractography, we can reasonably predict whether it makes sense to use this type of therapy with a particular patient.
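The predictive logic can be sketched as a single-variable model: residual arcuate fasciculus volume in, estimated probability of benefiting from stimulation out. The Python example below fits a logistic model to entirely synthetic data; the volumes, outcomes, and any implied cut-off are invented, since establishing the real decision criteria is precisely the aim of the clinical trial described next.

```python
# A schematic sketch of the predictive idea: residual arcuate fasciculus (AF)
# volume from tractography as a single predictor of response to inhibitory
# stimulation. All numbers are synthetic; the real criteria are what the
# clinical trial is designed to establish.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: residual AF volume (mm^3) and observed response (1 = improved).
volumes_mm3 = np.array([[300], [450], [800], [1200], [1500], [2100], [2600], [3100]])
responded   = np.array([  0,     0,     0,      1,      0,      1,      1,      1  ])

X = volumes_mm3 / 1000.0                      # rescale to cm^3 for numerical stability
model = LogisticRegression().fit(X, responded)

def predicted_benefit(volume_mm3: float) -> float:
    """Estimated probability that inhibitory stimulation will help this patient."""
    return float(model.predict_proba([[volume_mm3 / 1000.0]])[0, 1])

for v in (500, 1500, 2800):
    print(f'residual AF volume {v} mm^3 -> predicted benefit {predicted_benefit(v):.2f}')
```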
We are currently conducting a clinical trial with groups of patients who have varying volumes of the arcuate fasciculus. One group is receiving speech therapy with stimulation, another is receiving therapy without stimulation, and the third is receiving only stimulation without therapy. This will enable us to understand the main factor in successful language function recovery—whether it is the amount of preserved brain matter for language processing in the left hemisphere, the application of TMS, or the application of speech therapy. Once we determine this, we can translate our findings into technology by implementing a clinical protocol for the treatment of aphasia, providing clear indications of who will and will not benefit from this treatment.