The following section is part of a broader research study published in: Ana Maria Costea, Ioana Roxana Melenciuc, “AI in education: a win-win or a zero-sum game?” in Alexandra Zbuchea, Florina Pinzaru, Cristian Vidu (Eds.), Changing the game. AI in Education, Tritonic, 2023.
Technological development is among the most debated issues at the national and international levels since it touches all aspects of our current society, from the individual level to the international one. The digitalization process, the expansion of internet coverage, the internet of things, and the development of AI and its usage in approximately all aspects of everyday life are already the status quo, whether intrinsically acknowledged by its users or not. The benefits of modern technology are huge, from the increased quality of life of individuals and reduced processing times or errors when using smart technology, to the development of high-level military operations that no longer rely on humans and are therefore less time-consuming and less costly (e.g., autonomous drones, automated missile shields, etc.). Among the most successful modern technologies we can pinpoint artificial intelligence (AI), which is used at the macro level not only by banks, the military sector (Nurkin & Siegel, 2023) in peace or wartime situations (Franke & Söderström, 2023), and social media platforms such as Facebook and Instagram (Clegg, 2023) or TikTok (n.d.), but also in education (Chen, Chen, & Lin, 2020). In this sector, several AI-based applications have been developed, from the famous ChatGPT (OpenAI, n.d.) to programs that detect plagiarism, robots, computers that use AI to develop simulations and predictions, and even programs that change the music according to the mood the AI senses from the people in the room at that time (Chen, Chen, & Lin, 2020). On the other hand, we can identify vulnerabilities, threats, and legal and ethical issues that arise from the increased usage of these technologies.
Thus, although AI is a new reality, according to a survey conducted by UNESCO in May 2023, only 13% of the 450 universities that participated offered formal guidance to their staff and students. Even the guides that have been issued on the topic vary considerably. “Only half have detailed instructions, the rest approve of AI but leave it up to users to decide how generative AI apps are applied. In 40% of cases, the guidance is not written, but only communicated orally” (Naujokaitytė, 2023).
Among the universities that have adopted such strategies, we will focus our attention on five of them: The University of Edinburgh (UK), The University of Ljubljana (Slovenia), The University of Tartu (Estonia), The University of Iowa (US), and The University of Arizona (US).
The University of Edinburgh (2021) has an official position on the ethical aspects of using AI, pointing towards the use of AI as a force for good and coordinating its research centers to take into consideration five core themes when conducting their activities: “Developing moral foundations for AI; Anticipating and evaluating the risks and benefits of AI; Creating responsible innovation pathways for the adoption of AI; Developing AI technologies that satisfy ethical requirements; Transforming the practice of AI research and innovation” (University of Edinburgh, 2021). At the same time, the strategy reads more like a general guideline than a clear-cut position on the vulnerabilities AI poses. Regarding ChatGPT specifically, in March 2023 the university issued its Guidance for students on the use of Generative AI (such as ChatGPT). Officially, the university does not impose a clear restriction on using AI tools but emphasizes the expectation that students deliver original ideas. The document also mentions the limitations of such tools and acknowledges that students are using them or, in some cases, are even advised to do so. The University of Edinburgh is thus among the few universities that not only allow the use of AI but also integrate this type of technology into the educational process, although without a clear procedure regarding these aspects (University of Edinburgh, 2023).
The University of Ljubljana published its guide in September 2023. As in the case of the aforementioned university, the institution acknowledges the use of AI, emphasizes its limitations, and encourages users to check the information that is generated rather than take it for granted. At the same time, it is very specific about situations such as using the tool as a copy/paste instrument or using it when the teacher forbids it. Thus, as we will see in other cases, the university gives professors the liberty to decide whether their students are allowed to use ChatGPT in classes or for assignments and exams (University of Ljubljana, 2023).
The University of Tartu developed its guide in April 2023. Besides stating general principles and acknowledging the use of AI in the educational process, it sets specific grounds for ChatGPT usage by its students. Compared with other universities that leave the choice to the professor, this university formally allows the use of AI, with the teacher deciding the degree to which students may use it. The university also accepts papers generated with AI as long as they include the appropriate references and the author establishes from the beginning that the text is the result of AI processing. Since not providing this information and presenting the paper as one’s own work would be considered academic fraud, the guide even includes ways to correctly cite information generated by AI (University of Tartu, 2023).
The University of Iowa adopted its Guidelines for the Secure and Ethical Use of Artificial Intelligence in September 2023. As in the above cases, it acknowledges the existence and use of AI tools such as ChatGPT, as well as their limitations, but does not integrate their use into the university’s practices. It also points to the policies of OpenAI (University of Iowa, 2023), which establish the ground rules for usage: the company disallows the use of its products for illegal or unethical purposes such as fraud, plagiarism, discrimination, etc. (OpenAI, 2023).
The University of Arizona has developed a student guide for using ChatGPT, thus integrating its use into an institutional framework (University of Arizona Student Guide). At the same time, the university does not have an established policy, leaving the decision to each professor (University of Arizona Student Guide – Integrity).
Therefore, there is no unity regarding how universities should react or adapt to AI technologies, not even at the level of the EU member states. A large number of universities have not even developed a guide regarding AI use by their staff and students, leaving the situation in a grey area in which, given the lack of an institutional framework, professors cannot fully forbid their students from using ChatGPT for their papers without proper references. Additionally, since there is still no program that can detect with certainty whether a piece of information was generated by AI, the student remains the deciding factor. From a rational point of view, returning to game theory, defection would be a winning strategy in the absence of norms, transforming AI in education into a zero-sum game in which time and high grades are won while critical thinking and analytical skills are lost.
Sources
Chen, L., Chen, P., & Lin, Z. (2020). Artificial Intelligence in Education: A Review. IEEE Access, 8, 75264-75278. https://doi.org/10.1109/access.2020.2988510
Clegg, N. (2023, June 29). How AI Influences What You See on Facebook and Instagram. Meta.
Franke, U., & Söderström, J. (2023). Star tech enterprise: Emerging technologies in Russia’s war on Ukraine. European Council on Foreign Relations. Retrieved from https://ecfr.eu/publication/star-tech-enterprise-emerging-technologies-in-russias-war-on-ukraine/
Naujokaitytė, G. (2023). Universities ready to take up generative artificial intelligence, but say guidelines are needed. Science Business. Retrieved from https://sciencebusiness.net/news/universities/universities-ready-take-generative-artificial-intelligence-say-guidelines-are
Nurkin, T., & Siegel, J. (2023). How modern militaries are leveraging AI. Atlantic Council. Retrieved from https://www.atlanticcouncil.org/in-depth-research-reports/report/how-modern-militaries-are-leveraging-ai/
OpenAI. (n.d.). Introducing ChatGPT. Retrieved from https://openai.com/blog/chatgpt
TikTok. (n.d.). About AI-generated content. TikTok. Retrieved from https://support.tiktok.com/en/using-tiktok/creating-videos/ai-generated-content
University of Arizona. (n.d.). Student Guide to ChatGPT – Is using ChatGPT considered cheating? Retrieved from https://libguides.library.arizona.edu/students-chatgpt/integrity
University of Arizona. (n.d.). Student Guide to ChatGPT. Retrieved from https://libguides.library.arizona.edu/students-chatgpt/use
University of Edinburgh. (2023, March). Guidance for students on the use of Generative AI (such as ChatGPT). Retrieved from https://www.ed.ac.uk/sites/default/files/atoms/files/universityguidanceforstudentsonworkingwithgenerativeai.pdf
University of Edinburgh. (2021). Ethical AI. Retrieved from https://www.ed.ac.uk/c/ethical-ai
University of Iowa. (2023). Guidelines for the secure and ethical use of Artificial Intelligence. Retrieved from https://itsecurity.uiowa.edu/guidelines-secure-and-ethical-use-artificial-intelligence
University of Ljubljana. (2023). Recommendations of the University of Ljubljana on the Use of Artificial Intelligence. Retrieved from https://www.uni-lj.si/news/news/2023092014431970/
University of Tartu. (2023). University of Tartu guidelines for using AI chatbots for teaching and studies. Retrieved from https://ut.ee/sites/default/files/2023-05/university_of_tartu_guidelines_for_using_ai_chatbots_for_teaching_and_studies_28_april_2023_pdf.pdf
The full chapter can be accessed here.
About the authors
Ana Maria Costea is a Lecturer at SNSPA, the Department of International Relations and European Integration. She holds a PhD in International Relations and European Studies and currently teaches MA courses such as Cybersecurity, International Regulations in Cyberspace, and Regional Security. She has published several articles and was the SNSPA manager of the international project “Building an Innovative Network for Sharing of Best Educational Practices, Incl. Game Approach, in the Areas of International Logistics and Transport” (2019-1-BG01-KA203-062602). Among the results of the project are the publication of academic articles and the development of courses for students and stakeholders. She is also one of the two founders of the “Security and Technology” MA program.
Ioana Roxana Melenciuc is a lecturer at SNSPA. Between 2020 and 2023 she was the Head of the Department of International Relations and European Integration. Since August 2023 she has been the President of the National Institute for Public Administration, while between 2017 and 2020 she was advisor to the Minister Delegate for European Affairs and Head of Unit on the Management of the National EU Affairs System within the Ministry of Foreign Affairs. For her contribution to the successful completion of Romania’s mandate at the Presidency of the EU Council, she was awarded the “Diplomatic Merit” medal. She has worked in consultancy and research on various projects, focusing on the evaluation of public policies, European policies, and Europeanisation. She was the institutional coordinator of a project developed in partnership with the Romanian Ministry of Foreign Affairs and the Romanian Agency for International Development Cooperation, aiming to improve the national public policy on international cooperation for development and humanitarian aid. She currently teaches courses on the Evaluation of EU Policies, Fundamentals of Evaluation, and EU Sectorial Policies. She is the author of the book “The Emergence of a European Evaluation Culture” (2016) and of various articles in her field of expertise.