
How to give a university exam that allows students to use AI

By Javier Cuervo – Co-founder of Proportione, professor of innovation and strategy at UNIE. PhD candidate in business innovation – Universidade de Aveiro

The initial resistance: fear of a new technological "trap"

Many teachers have greeted the arrival of generative artificial intelligence tools (such as ChatGPT) with concern. In the corridors of our faculties, familiar fears are voiced: "Students will cheat on exams with ChatGPT", "AI is going to make my job irrelevant", "How will we assess anything if we can't know who actually wrote the essays?". This reaction is understandable: generative AI challenges traditional teaching and assessment methods that have been in place for decades. Let's remember that this is not the first time a technology has aroused suspicion in the classroom. The pocket calculator was once seen as a threat to mathematical learning, and something similar happened with the personal computer and the Internet. Teachers feared that these tools would "do the work" for students, handing them answers without effort.

Illustration – the evolution of tools perceived as "traps" in education: from calculators, to personal computers, to generative AI.

However, over time we learned that banning the calculator was not the solution: we incorporated it into teaching, focusing on more complex problems and deep understanding rather than mechanical arithmetic. Similarly, the Internet stopped being seen solely as a source of plagiarism and became an indispensable repository of knowledge in the classroom. Each new disruptive technology has forced the educational community to adapt and rethink its strategies, not to entrench itself in old ways. Therefore, although the initial fear is natural, it is also misguided. Adopting a prohibitive stance on AI would miss the mark and forget the true purpose of education. After all, the mission of teaching is not to prevent students from using advanced tools, but to teach them to use those tools judiciously.

Understandable but wrong: why banning AI is not the solution

The reaction of many teachers to ChatGPT (blocking its use, trying to detect it as if it were plagiarism) is understandable in the short term. Generative AI appeared on the scene very quickly and took by surprise educational institutions that still rely on traditional exams, essays and coursework designed for a pre-AI era. But trying to put gates around an open field is futile. Tools that claim to detect "AI-written text" are unreliable, generate numerous false positives, and turning assessment into a technological witch hunt only adds uncertainty. More importantly, focusing on policing the use of AI means losing sight of what we actually seek to foster in our students.

We've seen it before: when attempts were made to restrict access to Wikipedia for fear that students would copy answers, the focus eventually shifted to recognizing that it was better to teach how to contrast sources and cite correctly, rather than deny students access to a global encyclopedia. Similarly, if a tool gives students a competitive advantage to do something better or faster, our duty as teachers is not to prevent them from using it, but to guide them so that they take advantage of it in the best possible way. A repressive approach to AI in the classroom risks missing valuable pedagogical opportunities.

Moreover, banning generative AI will not prepare learners for the real world either. Outside the classroom, these technologies are already boosting productivity across multiple industries. A Harvard Business School study with Boston Consulting Group showed that, in complex tasks, consultants using ChatGPT (GPT-4) completed their work 25% faster and with 40% higher quality than those who did not use it. Tellingly, the professionals with the lowest initial performance were the ones who improved the most thanks to AI (up to a 43% increase in their performance), demonstrating the leveling power of this technology. Don't we want to give that same advantage to our students, especially those who struggle the most? Ignoring or vetoing AI in education would condemn our students to compete at a disadvantage in their future working lives, where these tools are already everyday allies.

Learning from the past: evaluating judgment, not mechanical execution

If the "revolutions" of the calculator or the Internet taught us anything, it is that education must focus on what machines cannot do for students: thinking critically, providing criteria and creativity, and knowing how to apply knowledge . In the age of generative AI, it makes less sense than ever to grade the student on their ability to memorize facts or write without grammatical errors. An AI can generate a structured essay in seconds, but You don't know if that essay makes sense, if it answers the question correctly, or if it contains biases and errors . That's where human judgment comes in. Therefore, the emphasis of evaluation must shift to the ability to analyse, filter, correct and add value on an automatically generated basis.

In my experience, what needs to be evaluated is the student's judgment, not the mechanical execution. It matters less whether the final text was typed entirely by the student or produced with the help of an algorithm; the crucial thing is that the learner understands, validates and improves that result. What is relevant is not whether the student writes the text word for word, but that, upon receiving an answer, they understand that they should not simply copy it: they should review it carefully, adjust it where necessary and take responsibility for its final content. This work of active revision and correction reinforces learning more than the mere manual writing of the text. In fact, many teachers already notice that when students use AI assistants to polish their work (for example, to improve grammar in a second language), the result is positive: students produce more readable reports, and we can focus on the content and quality of their ideas rather than wasting time correcting minor details.

In short, we have to redefine what we mean by "cheating". Is it "cheating" for a student to use a legitimate tool at their disposal to obtain a better result? Not if that result is accompanied by their own understanding and intellectual work. On the contrary, it would be a mistake to penalize them for efficiently using the tools of the 21st century. As teachers, our responsibility is to teach how to use these tools ethically and effectively, integrating them into the training process.

A case study: integrating generative AI into real assessment

I would like to share a personal experience that illustrates how AI can become an ally in the classroom. In January 2022 I began experimenting with generative AI with my students (at the time, in the LEINN degree in Leadership, Entrepreneurship and Innovation). It was the students themselves who soon discovered multiple creative uses: they generated content for their websites and improved its SEO, and they went from text to images and even videos using different tools. I saw some create musical compositions with AI and incorporate them into presentations, or share their most effective prompts with each other for inspiration. Far from becoming a race to deceive the teacher, it became a healthy competition to see who achieved the most innovative results with these tools. Each new finding was enthusiastically shared with peers, and many times I ended up learning something new myself. The atmosphere in the classroom went from initial suspicion to a climate of constant collaboration, focused on how to improve the work with the help of AI instead of how to hide its use.

The result of this early incorporation of AI was an enriched learning process. The students not only learned the subject itself, but also developed the ability to understand the limitations and strengths of the different AIs, refining their queries and combining their own ideas with the suggestions generated. At the end of the course we did something perhaps unthinkable years ago: we celebrated the achievements obtained with the support of AI as much as those achieved by traditional methods. I can say that after three semesters using generative AI in that program, the experience was very positive, even a great success in terms of student performance and motivation. Many demonstrated greater depth of analysis, precisely because they had had to spend more time reviewing, contrasting and refining the information obtained automatically, instead of investing it solely in writing from scratch.

As a professor of innovation and strategy at UNIE University, I have taken this integration a step further, especially in assessment. We designed an exam system based on strategic role play and open case studies, where students' use of AI is mandatory and guided. What does it consist of? Before the final exam, each student is assigned a role (e.g., innovation director of a fictitious company, external consultant, competing startup, etc.) within a complex, open case study. They must prepare an innovation strategy for that case, and they are required to use generative AI tools during its preparation. We provide them with a guiding prompt script to explore different facets of the problem with AI (from SWOT analysis to creative solution proposals). On the day of the exam, students present their strategy and also deliver an annex documenting how they used AI: what queries they made, how they interpreted the answers, and how they incorporated (or discarded) them in their final solution.
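To make the idea of the annex concrete, here is a minimal sketch in Python of how such a usage log could be structured; the class and field names and the example entry are my own illustrative assumptions, not the actual template used in the course.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structure for the AI-usage annex: one entry per interaction,
# pairing each prompt with the student's interpretation and an explicit decision.
@dataclass
class AnnexEntry:
    prompt: str           # the query the student sent to the AI tool
    interpretation: str   # how the student read and judged the answer
    decision: str         # "incorporated", "adapted" or "discarded"
    justification: str    # why the output was kept, changed or rejected

@dataclass
class AIUsageAnnex:
    student_role: str                      # e.g. "external consultant"
    entries: List[AnnexEntry] = field(default_factory=list)

# Example entry illustrating the expected level of documentation
annex = AIUsageAnnex(student_role="external consultant")
annex.entries.append(AnnexEntry(
    prompt="Draft a SWOT analysis for the case's fictitious retailer entering e-commerce",
    interpretation="Useful structure, but the threats ignored the logistics constraint in the brief",
    decision="adapted",
    justification="Kept the SWOT skeleton, rewrote the threats around the logistics constraint",
))
print(len(annex.entries), annex.entries[0].decision)
```

In practice the annex can be a plain document; what matters in this sketch is the design choice that every AI interaction is paired with an interpretation and an explicit keep/adapt/discard decision that the student must justify.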

This approach accomplishes two things. First, it normalizes the use of AI as part of learning: students know that they are expected to use these tools, just as they are expected to know how to search databases or collaborate in teams. Second, it allows teachers to evaluate what really matters: students' judgment and decision-making. By reviewing the AI usage annex, I can see whether a student settled for the first answer generated or knew how to refine their questions, whether they checked the veracity of what the AI suggested, and how they reacted to possible errors in the model. In practice, it makes far more sense (and is far more assessable) to fail the student who asks poor questions or accepts dubious content from the AI than the student who used AI intelligently to raise the quality of their work. As Enrique Dans also points out, asking students to include the prompts they used allows us to assess whether they have made good use of the tool. It is about evaluating their strategy and judgment, not whether they typed every word of the report with their own fingers.
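As an illustration of how such judgment-centred grading could be operationalized, here is a small hypothetical rubric in Python; the criteria are drawn from the points above (refining questions, verifying AI output, handling errors, adding one's own value), but the names and weights are assumptions of mine, not the official grading scheme.

```python
# Hypothetical rubric: every criterion measures what the student did with the AI
# output, not how much text the AI produced. Weights are illustrative assumptions.
RUBRIC = {
    "refined_prompts": 0.25,   # iterated beyond the first generated answer
    "verified_claims": 0.30,   # checked the veracity of what the AI suggested
    "handled_errors": 0.20,    # detected and corrected model mistakes or biases
    "added_own_value": 0.25,   # turned AI output into an original, case-specific strategy
}

def score_annex(marks: dict) -> float:
    """Combine 0-10 marks per criterion into a weighted final grade."""
    return sum(weight * marks.get(criterion, 0.0)
               for criterion, weight in RUBRIC.items())

# Example: strong verification and prompting, weaker error handling
print(score_annex({"refined_prompts": 8, "verified_claims": 9,
                   "handled_errors": 6, "added_own_value": 7}))  # 7.65
```

The exact weights are secondary; the point of the sketch is that no criterion rewards or penalizes the sheer amount of AI-generated text, only the quality of the decisions made around it.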

The results have been remarkable. On the one hand, the average quality of the proposed solutions has risen, because no student gets stuck on a blank page; they always have a generated starting point that they then customize and improve. On the other hand, exam anxiety is reduced, since students feel they have "an assistant" and that the key lies in how they guide it. They learn to think critically about AI responses, to take nothing for granted without corroboration, and to combine human creativity with machine efficiency. In short, we are better preparing our students for a world where knowing how to use AI will be as basic as knowing how to use a web search engine or a spreadsheet.

From Resistance to Adoption: Managing Change in Educational Culture

Incorporating generative AI into the classroom is not just a technological issue; it is a process of cultural change in education. And like any profound change, it needs to be managed with intelligence and empathy. Many teachers need support to make this leap, which is entirely reasonable. According to one study, 70% of change initiatives in educational institutions fail to achieve their initial objectives due to a lack of support and inadequate management of the transition. In other words, if we simply "launch" AI in the classroom without training teachers or adjusting practices, we risk its adoption failing.

To avoid this, it is essential to design a change management strategy in our educational institutions. This involves several actions: clearly communicating the purpose and benefits of integrating AI (dispelling the idea that it is coming to replace the teacher), providing continuous training and spaces for experimentation for teachers, updating assessment and academic honesty policies, and creating a culture where error is seen as part of learning. Experts in organizational transformation insist that any evolution in the way of working requires proper change management: managing expectations, establishing a communication plan, fostering a receptive environment, and addressing resistance constructively. In teaching practice, this could translate into piloting the use of AI with a small group of pioneering teachers, sharing good practices and success stories (those "early adopters" who show that it is possible and beneficial), and involving departments and management teams to align the change with the institution's strategic vision.

It is also key to involve students in this process of change. After all, they will be the ones most affected (for the better) by the integration of AI into their learning. Explaining the reasons for the new dynamics, listening to their concerns (for example, some may fear that relying on AI will make them "think less", something we must counter with evidence), and even co-creating rules of responsible use with them can facilitate a smoother transition. Likewise, it will be necessary to talk with families and the educational community in general: many of them share the same doubts as teachers, and they need to understand that we are taking this direction not to lower the bar, but to raise it in a different sense, one more in line with the times.

Change management also means facing resistance openly. There will be teachers who remain skeptical or who feel that this threatens their comfort zone. It is important to show empathy: change can be perceived as a criticism of their traditional way of teaching, when in reality it is a necessary evolution of the profession. Offering mentoring and concrete examples and, above all, demonstrating results will help convince the doubters. In my case, sharing with colleagues how my students improved their deliverables and engagement thanks to the guided use of AI was more persuasive than any theory. When they see students who are more motivated, more participative and achieving tangible results, many teachers go from suspicion to curiosity, and from there to wanting to try it themselves.

A strategic ally for teachers and students

Far from destroying education, generative AI can elevate it to new heights of efficiency and reach. We are at an inflection point: we can cling to yesterday's methods, seeing AI as an enemy to contain, or we can integrate it strategically as an ally that enhances our capabilities. History teaches us that winning this battle does not mean defeating the machine, but bringing it over to our side.

In the business world, there is talk of "centaur" teams (AI-powered humans) achieving results that no human or machine could achieve separately. In education, we should aim for something similar: AI-augmented teachers who have more time for what matters (personalizing teaching, motivating, thinking of new ways to inspire their students) because they delegate certain operational or first-pass correction tasks to AI; and AI-augmented learners who can explore ideas and knowledge with a tireless helper, freeing up mental space for creativity and critical thinking.

Of course, this path is not without its challenges. Ethical issues (plagiarism, bias, misuse) will have to be addressed, equal access to these technologies will have to be ensured, and their impact will have to be closely monitored. But if there is one thing that is clear to me after my experience, it is that ignoring or fearing AI will not stop its advance; it would only leave us behind. The responsible alternative is learning to live and collaborate with AI. As Mayte Tortosa of Proportione recently reflected, "The traditional view of human-machine competition has become obsolete. Instead of fearing impersonation... it is imperative to explore new forms of collaboration". Instead of asking ourselves whether AI will replace teachers, we should ask ourselves how good the teachers who learn to take full advantage of it will become.

In my dual role as a strategic consultant and university professor, I see the arrival of generative AI in the classroom as a unique opportunity to modernize teaching and bring it closer to professional reality. It forces us to rethink what and how we teach, to leave the comfort zone of rote exams, and to focus on higher-order competencies: critical thinking, complex problem solving, creativity, continuous learning. All this with AI as a companion, not an adversary. Teaching has always been about preparing new generations for the future; today that future necessarily includes artificial intelligence. Let's make it our ally in the mission of educating, not our enemy. The beneficiaries will be both the students, who will learn more and better, and we teachers, who will renew our value proposition in the digital age.