
The Negatives of Artificial Intelligence in Education


Table of Contents:

I. Introduction

II. Widening the Digital Divide

III. Loss of Human Interaction and Personalized Attention

IV. Data Privacy and Security Concerns

V. Ethical Considerations and Bias

VI. The Limitations of AI-Written Works

VII. Conclusion


I. Introduction

In recent years, the integration of artificial intelligence (AI) into society has accelerated rapidly. AI has made its way into diverse fields including education, medicine, and engineering. The field of education in particular has seen a major emergence of AI-powered tools and systems aimed at revolutionizing teaching and learning processes. These rapid advancements have the potential to improve educational experiences by tailoring lesson plans to the individual and streamlining administrative tasks, saving educators significant time. However, it is important to acknowledge the negative implications that AI can have on education and educators.

The main purpose of this paper is to explore negative aspects associated with the use of artificial intelligence in education. In particular, this paper will address the following topics: the widening of the digital divide, the loss of human interaction and personalized attention, and the concerns surrounding data privacy, security, and ethics. In looking deeper into these topics, we can begin to further understand the risks that go along with integrating artificial intelligence in educational environments. The hope is that through the analysis presented in this paper, the implementation of AI in educational settings can be effectively navigated.

II. Widening the Digital Divide

Since AI is a digital technology, integrating it into educational settings risks exacerbating existing inequalities by widening the digital divide. The digital divide is the gap between people who have access to modern information and communication technology and those who do not, and it is shaped by factors including income, age, skills, and political engagement. One major concern is that resources will not be readily available to disadvantaged students. Many students, particularly those in remote or underserved areas, already face limited access to reliable internet connections and technology. If the digital divide widens through the use of AI, these disadvantaged students will face even worse situations, and it is not just students who will suffer: the digital divide also has negative implications for health, social inclusion, and economic development. When access to technology is unequal, opportunities for learning shrink as well, because those unable to afford or obtain AI tools will not receive their benefits, limiting the value of artificial intelligence.

As mentioned above, the insufficient availability of high-tech devices further adds to the digital divide. Disadvantaged students often struggle to afford or even access devices capable of supporting AI tools, and much of this divide comes down to differences in socioeconomic status. The vast disparity in ownership of devices such as computers and smartphones across socioeconomic backgrounds leads to unequal access to educational resources, including AI-powered software. Students of high socioeconomic status often have access to computers and AI tools that give them educational advantages over their less affluent peers. Those with the means to access AI are granted personalized education and enhanced learning experiences, perpetuating the existing gap between who can and cannot use artificial intelligence resources. As a result, the achievement gap between privileged and disadvantaged students widens, hindering efforts to promote equitable education.

If AI is used as a tool that personalizes education to the individual, then those who are unable to access AI will be the ones left behind. This marginalization would not only produce unequal academic performance but could also lead to more negative educational experiences overall, including increased mental health issues and lower graduation rates.

In addition to the access issues previously discussed, technical expertise and support are significant challenges to implementing artificial intelligence in schools. If AI is to be widely used in educational settings, both students and teachers will need training to fully understand how to use these tools. Because many artificial intelligence technologies are so new, teachers and students generally lack the training and digital literacy skills needed to use them effectively. Closing this gap will demand significant time, energy, and financial resources. A training plan of this magnitude would require substantial investment in infrastructure, financial support programs so that devices are affordable and accessible to all students regardless of economic status, and comprehensive training programs that enable teachers and students to use artificial intelligence tools effectively.

Overall, while the integration of artificial intelligence within educational settings seems promising in radically transforming the educational experience, it also presents many challenges. One of these is the widening of the digital divide, which, if not addressed appropriately, will continue to grow. Insufficient access to resources, the marginalization of disadvantaged students, and the reinforcement of socioeconomic disparities are all major concerns of implementing AI in the educational field. To ensure the responsible and equitable use of AI in education, it is imperative to bridge the digital divide through targeted initiatives, promote equal access to AI technologies, and address the socioeconomic barriers that hinder equitable educational opportunities.

III. Loss of Human Interaction and Personalized Attention

Integrating artificial intelligence in educational settings raises concerns about the potential loss of human interaction and personalized teacher-student attention, both of which are crucial for student success and effective learning. Face-to-face interactions between teacher and student foster rapport, trust, and a sense of community within the classroom, building an environment in which students are comfortable asking questions, engaging in meaningful discussions, and expressing their thoughts. For all of these reasons, artificial intelligence cannot fully replace the care that teachers provide, nor can it replicate human interaction.

One of the biggest drawbacks of artificial intelligence systems is their limited capacity to understand the emotional needs of students. AI may be able to provide automated feedback to students in need, but it lacks the empathy of human interaction. Students need emotional support to succeed, especially in education, where they are under stress and constantly learning. The absence of human interaction in scholarly settings could negatively impact student motivation, engagement, and overall well-being. Another major concern is the potential impact of artificial intelligence on the development of critical thinking skills and creativity. Artificial intelligence software often prioritizes efficiency over creativity, leading to an emphasis on memorization rather than true learning. Human teachers encourage open-ended questions that promote exploration of topics, problem-solving, and critical thinking. Increasing reliance on AI in the educational process may limit opportunities for students to engage in essential aspects of cognitive and creative development.

Looking forward, efforts to integrate artificial intelligence should complement teachers’ efforts rather than undermine or replace them. Collaborative projects, group activities, and class discussions could use artificial intelligence to foster teamwork and cooperation. However, teachers are better equipped to moderate discussions, curtail arguments, and work through disagreements between students; AI does not have this capability. It is important to explore ways that artificial intelligence can work with teachers instead of taking their place. This may require rethinking teachers’ roles in education by focusing on their unique human skills: caring for, mentoring, and nurturing their students’ emotional and physical well-being.

Additionally, one of the major proposed benefits of artificial intelligence is its ability to create personalized lesson plans. Yet because artificial intelligence relies largely on algorithms and standardized approaches, it may struggle to understand and respond to the unique needs of students. AI would need to make many adjustments, often quickly, to adapt to unique situations. With its current capabilities focused on patterns, AI may be limited in adapting to students’ unique needs and would therefore struggle to accommodate diverse learning styles and effectively address specific areas of learning difficulty. Teachers are better equipped to consider a student’s progress holistically and to provide the individualized support students need to reach their full potential.

In summary, the integration of artificial intelligence in education presents several challenges, including the loss of human interaction and personalized attention. The emotional support, tailored guidance, and individualized feedback that human teachers provide cannot be fully replicated by AI systems. AI’s inability to adapt to diverse learning styles and to foster creative thinking are major drawbacks of its use in schools. Moving forward, it is crucial that humans are not taken out of the equation in education, so that students receive the personalized care they need for their overall growth and academic success.

IV. Data Privacy and Security Concerns

The integration of artificial intelligence in education requires the collection and analysis of vast amounts of student data and information. While this data will be necessary to create personalized lesson plans for students, it also raises serious concerns regarding data privacy and security. Ensuring that student data is secured is crucial in order to protect student privacy, maintain trust within educational institutions, and prevent the potential misuse or breach of any of this information. AI systems often gather data including student demographics, academic performance, behavioral patterns, and more. This wide collection of personal information increases the risk of unauthorized access, data breaches, or potential misuse by third parties. For these reasons, it will be crucial that academic institutions have a robust privacy department to protect all student information.

Data breaches and privacy lapses can harm students in several concrete ways. Misuse of student data can have significant consequences, including identity theft, cyberbullying, and exposure to inappropriate content; if student information is not maintained securely and privately, many problems can arise. What is needed are stricter regulations, new policies, clear consent mechanisms, and transparency in data handling practices. To address these issues, institutions should implement robust data policies and safeguards, including encryption and secure data storage practices, regular audits and assessments of AI systems, and clear guidelines on data access and usage. In addition, ethical review boards and committees can help oversee the implementation and deployment of AI technologies, ensuring that they align with ethical principles and legal frameworks. Ongoing monitoring and auditing of AI systems will be crucial to identify and address potential vulnerabilities, and regular assessments should verify compliance with data protection regulations and ethical standards. Educational institutions must also establish protocols for promptly addressing data breaches and security incidents, including notifying affected parties and implementing measures to prevent future breaches.
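One secure-storage practice mentioned above, protecting direct identifiers before records reach an AI system, can be sketched concretely. The example below is a minimal illustration only, not an endorsed institutional design; the key, field names, and record values are all hypothetical. It shows pseudonymization: replacing a student ID with a keyed hash so records remain linkable for analysis but the original identifier cannot be recovered without the institution's secret key.

```python
import hashlib
import hmac

# Secret key held by the institution, never stored alongside the data.
# (Hypothetical value for illustration only.)
SECRET_KEY = b"institution-secret-key"

def pseudonymize(student_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    An AI system can still link records belonging to the same student,
    but cannot recover the original ID without the institution's key.
    """
    return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()

record = {"student_id": "S-1042", "grade": 91, "attendance": 0.97}
safe_record = {**record, "student_id": pseudonymize(record["student_id"])}

# The same student always maps to the same pseudonym,
# which is what makes longitudinal analysis possible.
assert pseudonymize("S-1042") == safe_record["student_id"]
```

A keyed hash (rather than a plain hash) matters here: without the key, an attacker who obtains the records cannot rebuild the mapping by hashing a list of known student IDs.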

Another important consideration is the potential for bias and discrimination against specific students based on the information AI collects; if AI systems gather sensitive data such as biometrics, this risk grows. AI systems are trained on large datasets, and if those datasets contain biases, the algorithms can replicate and amplify them, leading to discriminatory outcomes in areas such as grading, recommendations, and educational opportunities. It will be paramount to develop artificial intelligence algorithms using diverse and inclusive training data and to address any existing biases so that all students are treated equitably. Moving forward, institutions must clearly define what information AI systems will collect and how it will be used, and both parents and students should consent before AI uses and stores that information. This transparency builds trust and allows individuals to make informed decisions about their privacy.
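One simple form of the bias auditing described above is comparing outcome rates across demographic groups. The sketch below is a simplified illustration with invented data, not a complete fairness audit; it applies the common "four-fifths" disparate-impact heuristic to hypothetical AI grading outcomes.

```python
# Hypothetical AI grading outcomes: 1 = passed, 0 = failed, grouped by demographic.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1],   # 7/8 = 0.875 pass rate
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],   # 3/8 = 0.375 pass rate
}

def pass_rate(results):
    """Fraction of students in a group who passed."""
    return sum(results) / len(results)

def disparate_impact(outcomes_by_group):
    """Ratio of the lowest group pass rate to the highest.

    A ratio below 0.8 (the "four-fifths rule") is a widely used red flag
    that a system may be treating groups inequitably.
    """
    rates = [pass_rate(r) for r in outcomes_by_group.values()]
    return min(rates) / max(rates)

ratio = disparate_impact(outcomes)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.875 ≈ 0.43
if ratio < 0.8:
    print("Potential bias flagged: audit the model and its training data.")
```

A check like this only flags a disparity; it cannot explain it. That is why the paper's call for human review boards and regular audits matters: a flagged ratio should trigger investigation of the training data and model, not an automatic conclusion.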

Lastly, educating all students, teachers, and administrators about data privacy and security will be vital moving forward. Students should be informed of the importance of protecting their personal information and the risks of sharing that information online, while teachers and administrators should all receive training on data privacy practices. By promoting data literacy and cybersecurity awareness, educational institutions can empower individuals to take an active role in safeguarding their data. All of this training and information will be crucial to ensuring that AI is used to enhance the student’s learning experience in a positive way while keeping student information secure and safe.

In conclusion, integrating artificial intelligence in education raises serious concerns regarding data privacy measures and security. Protecting the information of students should be prioritized to promote safety for students and maintain trust within educational institutions. Transparent and robust data practices and policies, informed consent, regular monitoring, and collaboration amongst all stakeholders will all be critical to protect and secure student information. Additionally, testing and evaluating algorithms will be important to eliminate any potential cases of bias or discrimination that could arise. All of these measures will be necessary and require successful implementation to protect the well-being of all students.

V. Ethical Considerations and Bias

Integrating artificial intelligence in education raises serious ethical considerations, particularly around bias. Bias has been discussed throughout this paper because it is a major issue with potentially serious negative implications for students. Artificial intelligence algorithms are built by evaluating vast datasets; if those datasets contain inherent biases, the software can perpetuate them within educational settings, making things worse for many students.

One major area of concern regarding bias is grading and assignments. What happens if AI favors certain groups over others? If AI collects demographic information, it has the capability to evaluate work in light of those data, increasing the likelihood of bias. This could significantly affect students from marginalized communities, compounding already existing disparities. It will therefore be critical to regularly audit artificial intelligence grading systems to detect and prevent any biases, present or emerging, so that all students have equal opportunities. Bias can also be perpetuated by a lack of diversity among the personnel developing and training AI software systems. If those programming the software are not representative of the diversity of the student population, the algorithms may fail to address and understand the experiences, perspectives, and needs of marginalized communities. Encouraging diversity on artificial intelligence teams will be necessary to address these issues and keep bias out of educational settings.

To address some of these ethical considerations, a multipronged approach will be needed. Ethical review boards and committees should be created and implemented within institutions to ensure adherence to any and all ethical policies and principles. These committees could serve to provide oversight, evaluate any potential risks and benefits, and make recommendations for the equitable use of artificial intelligence in education. Regular audits, transparency in algorithm design, and ongoing training on ethical considerations are essential components of this approach. These committees can also facilitate effective communication in terms of the goals, capabilities, and limitations of artificial intelligence systems to help manage expectations and foster trust among students and their families.

Another ethical consideration is the potential impact on student autonomy. If artificial intelligence takes over from teachers and creates personalized lesson plans, it can shape not only students’ current learning but also their future learning. If not structured correctly, AI can influence decision-making processes such as course recommendations and career guidance. This could limit which future careers students explore and significantly interfere with their autonomy in making decisions about their futures. It is important to ensure that students have the freedom to make informed decisions and pursue their own interests and aspirations.

In summary, using artificial intelligence in educational institutions requires careful evaluation of ethical policies. Addressing bias, ensuring transparency, preserving student autonomy, and respecting data ownership and control are vital to creating an ethical framework for AI use in education. With ethical committees in place, these challenges can be navigated and managed effectively. The possibility of bias in grading, the lack of diversity in AI development, and the potential dehumanization of education are all major issues that must be addressed. If AI is to be implemented within educational systems, all ethical considerations must be carefully weighed to promote equitable learning environments.

VI. The Limitations of AI-Written Works

The following paragraphs highlight some of the key limitations of AI-written works: the absence of emotional intelligence, the inability to offer subjective opinions, and a tendency to provide surface-level information on complex or higher-level topics. The ChatGPT examples that follow illustrate these limitations in showing emotion, offering opinions, and diving deeply into complex subject matter.

The first example shows the inability of artificial intelligence to express any real emotion.

This piece of writing from ChatGPT serves as a direct reminder that AI lacks the emotional capacity to connect with students. Students’ development and growth require emotional and physical support, and AI has itself stated that it cannot provide this. Without the nurturing touch of human teachers, children in educational settings will not grow up with the care they need. This underscores the importance of human educators, who possess the empathy, understanding, and ability to connect with students on an emotional level. The unique ability of human teachers to promote well-being and growth cannot be replaced by artificial intelligence, highlighting the irreplaceable role of human interaction for young students.

The second example shows the inability of artificial intelligence to provide opinions, which can be an invaluable part of learning.

This writing sample from ChatGPT directly states that AI does not have the ability to state nor create personal opinions. While this may seem like a positive aspect of AI, where students will be presented with facts instead of opinions, it also takes away a major part of the learning experience. When students ask questions in class, they want to get an answer that is not just robotic and void of emotion. Additionally, not all questions in school are directly related to subject matter taught in class. Students want to get to know their teachers and form human connections with them. Without this critical aspect of learning, students will not receive the full experience that education should afford and become confined to what artificial intelligence tells them.

The last example concerns the inability of AI to deeply evaluate topics in higher education. AI largely detects patterns within datasets without truly understanding what the data mean. The exchange below was too long to capture in a single image, so it is reproduced as text.

My question: I am working on a public health issue for diabetics among the Pima American Indian tribe, how can I address this community's need for healthier food, proposing specific interventions.

ChatGPT’s answer:

I am currently a Master’s student in Public Health, so this topic comes from my past work. While this is not an inherently bad response from ChatGPT, it contains no real substance. This is a complex, higher-level question that requires knowledge of the group in question and an understanding of the environment we aim to help. The answer discusses generalities for addressing diabetes in communities: new educational materials, private gardens, and cooking classes so the community can learn to create healthier versions of traditional dishes. These are all reasonable suggestions for addressing diabetes in general, but they do not take into consideration the unique struggles of the Pima community. The Pima suffer from high rates of diabetes largely because of lifestyle changes that began around a century ago, driven mostly by the damming of the Gila River and the loss of the water that was critical to the Pima way of life. They once relied on flowing water to grow their own food, which reinforced an active lifestyle and kept diabetes at bay. Now that this lifestyle has been taken from them, they have been forced to buy the unhealthy foods that arrive in the few markets available to them. The issue runs far deeper than what can be obtained superficially from ChatGPT. This is why artificial intelligence software needs to be carefully evaluated before being used at any higher education institution.

VII. Conclusion

Artificial intelligence’s arrival in educational settings has brought with it several potentially negative implications. One major concern is the risk of widening the digital divide among students at all levels. AI tools require internet connections and smart devices to operate, and because not all students have the same opportunities, this can create disparities in educational opportunity. Students from disadvantaged backgrounds may lack access to these resources, exacerbating existing inequalities and further marginalizing them in the learning process.

Another major drawback of relying on artificial intelligence in education is the loss of human interaction. Teaching practices today emphasize the connection between students and teachers through face-to-face interaction, which allows teachers to understand their students’ strengths, weaknesses, and motivations and better help them succeed. AI tools cannot do this; they cannot empathize or provide the emotional support that humans offer in classrooms. Artificial intelligence also raises concerns about data privacy and security: educational systems using AI can collect vast amounts of student information that is susceptible to breaches, hacking, and misuse. Additionally, AI algorithms may perpetuate bias and discrimination, leading to unequal treatment of students based on race, gender, socioeconomic background, culture, and more.

In conclusion, while artificial intelligence tools may have the ability to revolutionize educational institutions, it is necessary to acknowledge their flaws. The widening of the digital divide, the loss of human interaction in schools, and concerns about data privacy are all negatives that must be carefully evaluated to ensure equitable use of AI in education. It will also be important to evaluate whether artificial intelligence tools can adequately interpret and understand the information they share; at this time, AI does not possess the means for high-level thinking nor emotions. In all, while artificial intelligence may have many positives, there are many negatives associated with its use in educational settings that need to be carefully evaluated before it can ever be implemented successfully.


