

AI vs AI: Scientists Develop Neural Networks to Detect Generated Text Insertions

A research team including Alexander Shirnin from HSE University has developed two systems for detecting AI-generated insertions in scientific texts. The AIpom system combines two types of model: a decoder and an encoder. The Papilusion system detects passages that neural networks have modified through synonym replacement or summarisation, using a single type of model: encoders. In the future, these systems could help verify the originality and credibility of scientific publications. The articles describing Papilusion and AIpom have been published in the ACL Anthology digital archive.

As language models such as ChatGPT and GigaChat become more widely used, it is increasingly difficult to distinguish original human-written text from AI-generated content. Artificial intelligence is already being used to write scientific publications and graduation theses. It is therefore crucial to develop tools capable of identifying AI-generated insertions in texts. A research team, including scientists from HSE University, presented their solutions at the SemEval-2024 and DAGPap24 international scientific competitions.

The AIpom system was used to identify the boundary between original and generated fragments in scientific papers; the proportion of machine-generated text to author-written text varied from paper to paper. To train the models, the organisers provided texts on a fixed set of topics. However, at the evaluation stage the topics changed, making the task more challenging.


'Models perform well on familiar topics, but their performance declines when presented with new topics,' according to Alexander Shirnin, co-author of the paper and Research Assistant at the Laboratory for Models and Methods of Computational Pragmatics, HSE Faculty of Computer Science. 'It's like a student who, having learned how to solve one type of problem, struggles to solve a problem on an unfamiliar topic or from a different subject as easily or accurately.'

To improve the system's performance, the researchers combined two models: a decoder and an encoder. At the first stage, a neural network decoder received an instruction together with the source text and output the fragment it considered to be AI-generated. Next, the point in the original text where the decoder predicted the generated fragment to begin was marked with a special <BREAK> token. The encoder then processed the text marked up at the first stage and refined the decoder's predictions: it classified each token (the smallest unit of text, such as a word or part of a word) as either written by a human or generated by AI. This approach improved accuracy compared to systems that used only one type of model, and AIpom ranked second at the SemEval-2024 competition.
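A minimal sketch of this two-stage pipeline is shown below. It is not the authors' code: the decoder and encoder calls are hypothetical stand-ins, and only the overall flow (decoder prediction, <BREAK> insertion, token-level refinement) follows the description above.

```python
# Sketch of the two-stage AIpom-style pipeline described above.
# `decoder_model` and `encoder_model` are hypothetical objects standing in
# for real instruction-tuned decoder and token-classification encoder models.

def decoder_stage(document: str, decoder_model) -> str:
    """Stage 1: the decoder gets an instruction plus the source text and
    returns the fragment it believes was generated by AI."""
    prompt = (
        "Find the part of the following text that was generated by AI "
        "and copy it verbatim:\n\n" + document
    )
    return decoder_model.generate(prompt)  # hypothetical call


def mark_boundary(document: str, predicted_fragment: str) -> str:
    """Insert a special <BREAK> token where the decoder predicts the
    generated fragment begins."""
    start = document.find(predicted_fragment)
    if start == -1:  # decoder output not found verbatim: leave the text unmarked
        return document
    return document[:start] + "<BREAK> " + document[start:]


def encoder_stage(marked_document: str, encoder_model) -> list[tuple[str, str]]:
    """Stage 2: the encoder refines the prediction by labelling every token
    as 'human' or 'machine'."""
    tokens = marked_document.split()  # toy whitespace tokenisation
    return [(tok, encoder_model.classify(tok)) for tok in tokens]  # hypothetical call
```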

The Papilusion system also distinguished human-written text from machine-altered text. Using Papilusion, sections of text were classified into four categories: written by a human, modified with synonyms, generated, or summarised by a model. The task was to identify each category correctly; both the number of categories present and the length of the insertions varied across texts.

In this case, the developers used three models of the same type: encoders. Each was trained independently of the others to predict one of the four categories for every token in the text. When a model made errors, it was penalised and then retrained with its lower layers frozen.

'Each model has a different number of layers, depending on its architecture. When training a model, we can leave the first ten or so layers unchanged and adjust only the parameters in the last two layers. This is done to prevent losing important data embedded in the first layers during training,' explains Alexander Shirnin. 'It can be compared to an athlete who makes an error in the movement of their hand. We only need to explain this part to them, rather than resetting their entire learning and retraining them, as they might forget how to move correctly overall. The same logic applies here. The method is not universal and may not work with all models, but in our case, it was effective.' 
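As a rough illustration of this layer-freezing trick, the sketch below prepares a token classifier for fine-tuning while keeping the embeddings and the first ten encoder layers fixed. The article does not name the encoders used, so the roberta-base checkpoint (12 layers) and the four-label setup are illustrative assumptions, not the authors' configuration.

```python
# A sketch of freezing lower layers before fine-tuning (illustrative only;
# the checkpoint and layer split are assumptions, not the authors' setup).
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "roberta-base",  # assumed encoder; roberta-base has 12 layers
    num_labels=4,    # human / synonym-replaced / generated / summarised
)

# Freeze the embeddings and the first ten encoder layers...
for param in model.roberta.embeddings.parameters():
    param.requires_grad = False
for layer in model.roberta.encoder.layer[:10]:
    for param in layer.parameters():
        param.requires_grad = False

# ...so that only the last two encoder layers and the classification head
# are updated during further training.
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(f"{len(trainable)} trainable parameter tensors remain")
```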

The three encoders independently assigned a category to each token (word), and the system's final prediction was the category that received the most votes. Papilusion ranked sixth out of 30 in the competition.
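The voting step can be pictured as follows. This is a minimal sketch rather than the Papilusion code; the label names and the toy predictions are made up for illustration.

```python
# Majority-vote ensembling of per-token labels from several encoders
# (a toy sketch; label names and predictions are illustrative).
from collections import Counter

def majority_vote(predictions_per_model: list[list[str]]) -> list[str]:
    """For each token position, return the label chosen by most models."""
    final_labels = []
    for token_labels in zip(*predictions_per_model):
        final_labels.append(Counter(token_labels).most_common(1)[0][0])
    return final_labels

# Three encoders labelling the same four tokens
predictions = [
    ["human", "generated", "generated", "human"],
    ["human", "generated", "human",     "human"],
    ["human", "synonyms",  "generated", "human"],
]
print(majority_vote(predictions))
# ['human', 'generated', 'generated', 'human']
```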

According to the researchers, current AI detection models perform reasonably well but still have limitations. Primarily, they struggle to process data beyond what they were trained on, and overall, there is a lack of diverse data to train the models effectively. 

'To obtain more data, we need to focus on collecting it. Both companies and laboratories have been doing this. Specifically for this type of task, it is necessary to collect datasets that include texts modified using multiple AI models and modification methods,' the researcher comments. 'Instead of continuing a text using just one model, more realistic scenarios should be created, such as asking the model to add to the text, rewrite the beginning for better coherence, remove parts of it, or generate a portion of the text in a new style using a different prompt. Of course, it is also important to collect data in different languages and on a variety of topics.' 
