PROBLEMS OF DATA UNRELIABILITY WHEN USING ARTIFICIAL INTELLIGENCE IN EDUCATIONAL ACTIVITIES
DOI: https://doi.org/10.31651/2524-2660-2025-3-127-136

Keywords: hallucinations, artificial intelligence, data reliability, education, critical thinking, large language models, LLM, academic integrity, data verification

Abstract
Summary. Problem (Introduction). The rapid integration of generative artificial intelligence (AI) into education has created unprecedented opportunities for personalised learning, yet it has also raised serious concerns about the reliability of AI-generated content. Large language models (LLMs) optimise for plausibility rather than truth and can fabricate facts, citations or even legal cases (When AI Gets It Wrong: Addressing AI Hallucinations and Bias - MIT Sloan Teaching & Learning Technologies, n.d.). Such hallucinations threaten academic integrity: students may unknowingly absorb falsehoods, while teachers could inadvertently reproduce inaccuracies in course materials. Empirical studies report that AI systems hallucinate in anywhere from under 1% to 15–40% of educational tasks, depending on the model and domain (Figure 1), and systematic reviews link over-reliance on AI dialogue systems to diminished critical thinking, increased technology dependence and the spread of misinformation (Zhai et al., 2024).
Purpose. This article aims to analyse the scope and causes of AI-generated misinformation in education and to develop evidence-based recommendations for mitigating these risks. It combines technical insights on model architecture and training data with pedagogical strategies to foster AI literacy. The goal is to ensure that AI enhances rather than undermines learning.
Methods. A systematic literature review of over 80 sources, including scientific articles, policy documents (the EU AI Act, UNESCO guidelines) and empirical studies, provided the theoretical foundation. Comparative analysis of hallucination rates across models (Makhno et al., 2025; Lelièvre et al., 2025) informed the quantitative assessment. The study also modelled mitigation strategies such as Retrieval-Augmented Generation (RAG) and Chain-of-Verification prompting, and evaluated pedagogical interventions such as lateral reading and AI literacy programmes.
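For readers unfamiliar with RAG, the following minimal Python sketch (not the study's implementation) shows the basic mechanism: answers are grounded in passages retrieved from a verified corpus rather than in the model's memory. The search_corpus retriever and the llm_complete helper are hypothetical placeholders for a real search index and a real LLM API.

    # Minimal RAG sketch. `llm_complete` is a hypothetical stand-in for any LLM
    # completion API; the toy keyword retriever is illustrative only and is not
    # the method used in the article.

    VERIFIED_CORPUS = [
        "Annex III of the EU AI Act lists AI used for student assessment as high-risk.",
        "Lateral reading means checking a claim against independent outside sources.",
    ]

    def search_corpus(query: str, k: int = 2) -> list[str]:
        """Toy retriever: rank verified passages by keyword overlap with the query."""
        words = set(query.lower().split())
        ranked = sorted(
            VERIFIED_CORPUS,
            key=lambda passage: len(words & set(passage.lower().split())),
            reverse=True,
        )
        return ranked[:k]

    def llm_complete(prompt: str) -> str:
        """Hypothetical wrapper around a real LLM completion endpoint."""
        raise NotImplementedError("Plug in an actual model call here.")

    def rag_answer(question: str) -> str:
        # Ground the model in retrieved passages instead of its parametric memory
        # and ask it to refuse when the sources are silent; this grounding is how
        # RAG reduces hallucinations.
        context = "\n".join(search_corpus(question))
        prompt = (
            "Answer using ONLY the sources below. If they do not contain the "
            "answer, say that you cannot verify it.\n\n"
            f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
        )
        return llm_complete(prompt)

In practice the retriever would be a vector or keyword search over vetted course materials; the explicit refusal instruction is what discourages the model from inventing unsupported claims.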
Results. The findings show that hallucinations stem from both internal factors (model architecture and context limitations) and external factors (biased or incomplete training data). Even top models misinform 1–3% of the time, whereas widely used free systems can err in 15–40% of cases when generating bibliographies or research proposals (Balch & Blanck, 2024). Hallucinations take various forms: logical errors, mathematical mistakes, fabricated sources and factual inaccuracies. Their educational consequences include diminished critical thinking, increased plagiarism (“AI-giarism”) and the risk of spreading disinformation. Regulatory frameworks treat educational AI as high-risk: Annex III of the EU AI Act lists AI systems used for admissions, assessment and monitoring as high-risk and sets obligations for accuracy, transparency and human oversight (Nguyen, 2025). Among mitigation strategies, RAG reduces hallucinations by grounding answers in retrieved sources, while Chain-of-Verification and self-consistency checks improve reliability. Pedagogically, teaching students lateral reading, updating academic policies, and redesigning assessments to require reflection and verification are essential.
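To illustrate the Chain-of-Verification and self-consistency strategies named above, the sketch below (again hypothetical, with llm_complete standing in for a real model call and illustrative prompts) shows the verification loop: draft an answer, generate independent fact-checking questions, answer them separately, and revise the draft against those checks; self-consistency instead samples several answers and keeps the most frequent one.

    # Hedged sketch of Chain-of-Verification and self-consistency; the prompts
    # are illustrative and `llm_complete` is a hypothetical placeholder.
    from collections import Counter

    def llm_complete(prompt: str) -> str:
        """Hypothetical wrapper around a real LLM completion endpoint."""
        raise NotImplementedError("Plug in an actual model call here.")

    def chain_of_verification(question: str) -> str:
        """Draft, plan fact-check questions, answer them independently, then revise."""
        draft = llm_complete(f"Question: {question}\nAnswer:")
        plan = llm_complete(
            "List short fact-checking questions, one per line, that would verify "
            f"this answer:\n{draft}"
        )
        # Answering each check in a fresh prompt keeps errors in the draft from
        # biasing the verification step.
        checks = [
            f"Q: {q}\nA: " + llm_complete(f"Answer concisely and factually: {q}")
            for q in plan.splitlines() if q.strip()
        ]
        return llm_complete(
            f"Original question: {question}\nDraft answer: {draft}\n"
            "Verification results:\n" + "\n".join(checks) + "\n"
            "Rewrite the answer, correcting anything the checks contradict."
        )

    def self_consistent_answer(question: str, samples: int = 5) -> str:
        """Sample several independent answers and keep the most frequent one."""
        answers = [
            llm_complete(f"Question: {question}\nAnswer briefly:").strip()
            for _ in range(samples)
        ]
        return Counter(answers).most_common(1)[0][0]

Both routines trade extra model calls for reliability, which matches the article's point that reducing hallucinations carries a cost that educational deployments must budget for.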
Originality. Unlike purely technical surveys or broad commentaries, this study bridges AI research with educational practice and policy, providing a holistic perspective tailored to Ukrainian higher education. It synthesises international findings with local realities, offers a taxonomy of AI errors, presents original visualisations (Table 1 and Figure 1), and proposes a multi-level framework combining technical, pedagogical and regulatory solutions. The article emphasises that AI hallucinations are not simply technical bugs but systemic challenges requiring cultural change.
Conclusion. To harness AI’s benefits in education, stakeholders must recognise and mitigate the problem of misinformation. Improving models (via RAG, verification chains), enhancing AI literacy, and adhering to high-risk regulatory standards will help ensure that AI supports, rather than sabotages, learning. Future research should focus on domain-specific hallucination rates, real-time fact-checkers for Ukrainian-language content, and longitudinal studies on AI’s cognitive impact. Ultimately, balancing technological innovation with human oversight and ethical principles will determine whether AI becomes a trustworthy educational ally or a source of confusion.
License
Copyright (c) 2025 Сергій МЕЛЬНИК

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.