Artificial Intelligence (AI) tools are increasingly being used in classrooms around the world. Last month, British universities were warned to “stress-test” all assessments after new research revealed that “almost all” undergraduates are using generative AI (GenAI) in their studies. Last year, a study by TeamLease EdTech revealed that over 61% of educators in India are using AI tools. All this has given rise to fears that students will begin accepting information at face value rather than critically analysing it. Does the use of AI in education affect critical thinking skills? Arun Kumar Tangirala and Arul George Scaria discuss the question in a conversation moderated by N. Sai Charan. Edited excerpts:
Should AI be permitted in college classrooms? If yes, to what extent?
Arun Kumar Tangirala: Yes, AI should be permitted. Since it has pervaded every aspect of our lives, prohibiting it is not a good idea. Even if you were to prohibit it, students would still use it, because it is in every home and on every device.
The extent to which it should be used, and who should be using it in the classroom, is contextual. It would depend on whether I am teaching a coding course, a technical course, a science course, or a humanities course. For example, if my aim is to impart cognitive skills, I would use AI minimally. But if I am teaching coding, it would be different. There has been a shift in skill sets in the industry. The ability to write code is no longer necessarily the primary skill; the ability to evaluate and validate code is increasingly the skill that matters. So I would use AI, because everybody uses AI to generate code. But it is important to make sure that students use AI in an ethical and responsible way.
There are no government regulations on usage yet. So institutes and instructors have to form their own rules, declare them clearly at the beginning of the course, and explain why they are being imposed. As long as things are done in a systematic, informed, ethical, and responsible manner, AI should be allowed.
Arul George Scaria: It is nearly impossible, and perhaps even futile, to prohibit AI in classrooms. Whether we like it or not, that is the reality. We might have to change our teaching and learning approaches according to this changing scenario, because AI is also getting more and more integrated into many of the applications we use daily. For example, Copilot is being integrated into Microsoft Word. Even when you open Adobe Reader, it offers to provide an AI-generated summary.
When we talk about AI usage in classrooms, we have to also understand that it is not just students who are using AI; teachers are using it too. School administrators want to bring AI into the classroom, and many policymakers believe that there should be greater use of AI in education. But in all these contexts, ethical and responsible AI usage policies are required. AI is helpful in many ways, but it is also a tool that needs to be used cautiously for a wide variety of reasons, including potential biases in its responses.
The decision on the extent of use should be clearly guided by the learning objectives of the courses. When I teach a comparative copyright law course in collaboration with Professor William Fisher at Harvard Law School, one of the assignments I give the students is to use different AI platforms to generate potentially copyright-infringing materials. Through this hands-on experience, the students get a better picture of the diverse issues in this particular area. They even get a better sense of whether it is possible to prevent the generation of potentially copyright-infringing materials and what kind of steps firms have put in place in this regard. So we need to evolve general guidelines for all stakeholders in an educational institution, but let the specific approaches for each course be developed as per the learning objectives of those courses.
Courses are being developed with AI. In that case, do you think AI will slowly be seen as a critical part of infrastructure?
Arun Kumar Tangirala: Yes. AI is going to be integral to every type of operation in an academic institution, company, or any other organisation. Therefore, preparations have to be underway to integrate AI in a seamless manner. A report published by the World Economic Forum throws light not only on the skills that will be required in the future, but also on how institutions should realign themselves. The Future of Jobs Report 2025, published in January, showed that the top skills learners require are analytical and cognitive thinking, AI-related skills, social connection, adaptability, and so on. Programming skills are lower down the list. If you compare it with earlier reports, the big difference you see is the arrival of AI and related skills. If AI-related skills have to be acquired, not only by users but also by employees, AI has to be integrated into the infrastructure. But there has to be a secure way of doing this. Unlike a calculator or a computer, AI tools such as ChatGPT, Perplexity, and other LLM-based applications take your data and send it back to their servers, which means that your personal and confidential information could be at stake if integration is not done properly. Every user has to be trained and made aware of the benefits and side effects.
Arul George Scaria: AI is becoming critical infrastructure in many different ways. The government and other stakeholders need to be mindful of this and take appropriate measures for regulation. For example, many State governments are suggesting the adoption of AI in schools. But has there been a safety audit of the AI tools suggested for incorporation in these schools? Has there been an audit of the potential biases that might exist in these systems? With respect to the training data, are we demanding disclosure? Are we mindful of the impacts?
With AI here to stay, do you think we should accept it in a regulated manner rather than being critical of it?
Arun Kumar Tangirala: There is apprehension and fear about the usage of AI. But we should start using it. For a long time, I too held back; my reason for eventually using it was to experience first-hand what the fears are about. There is no point in imagining what may happen. Start using AI in a limited way, and experience the benefits and the possible risks. Do it the way you would with an automobile.
The difference is that for automobiles, we have excellent regulations. For AI, these will take time. Some countries say we should forget about regulations for now because regulation is going to hamper the growth of this technology. I disagree. While the technology is evolving, discussions on regulation should also be happening. The European Union has been active in that respect. In India, more and more discussions are happening now, but it will take some time for actual regulations to kick in.
Arul George Scaria: It is clear that the state might take some time to frame regulations. To me, it is vital that every university initiates dialogues among faculty members and students on responsible AI usage. This will help in evolving appropriate and ethical usage guidelines as per the needs of the institution. We cannot have universal rules, but at least some general guidelines can be evolved at the institutional level. The most prominent global universities have a general AI policy and they have also left it to the faculty to frame specific policies with respect to their courses. That is the only way forward now.
There are concerns that students may become overly dependent on AI-generated responses. Are these valid?
Arun Kumar Tangirala: It’s true that many teachers fear this, but I don’t think it is a valid fear. It depends on what skills we want to impart. An academic institution has its own goals apart from preparing students for employment. Those goals may include training students to think deeply and in a scholarly manner. But at the same time, we need to be practical. We have to train our students so that they get jobs, not necessarily scholarly ones, but ones where they can implement what they have learned. So, in any course, we will probably have to ask to what extent we want to impart critical thinking skills vis-à-vis practical skills.
Arul George Scaria: I have a slightly different perspective. At a broader level, I fear that we are currently seeing an over-dependence on AI-generated responses among students, and sometimes even among many faculty members. So we need to educate everyone on how to be responsible AI users, particularly by understanding the limitations of the technology. This might even require re-imagining many components of our education. The kind of technology we are talking about here could have more negative impacts if we adopt it indiscriminately. Over time, AI technologies may become more mature. But as things stand now, my fear is that, most of the time, people overlook the limitations of the technology.