Blog
16.02.2023

ChatGPT in the Academic World: Disruptive tool or subversive collaboration?

ChatGPT is revolutionising all activities involving text. The impact it will have on science and university teaching is therefore the subject of much public debate. A culture of prohibition, or the reduction of (social) scientists to fact-checkers, underestimates the transformative and potentially liberalising effects that the language model can have on academia. We are now confronted with the question of which of our skills can never be replaced by AI. The answer will allow us to collaborate with the bot satisfactorily and productively.

ChatGPT, an AI tool for automatic language processing developed by OpenAI, is a milestone in the development of AI. The software is based on machine learning models that recognise complex patterns in huge data sets freely accessible on the internet, and it generates the answers most likely to match a user’s request. It is important to understand that ChatGPT reproduces patterns it has learned to observe in the training data. These patterns are shaped by concrete programming decisions about the legitimacy of correlations and by the (precarious) work of click-workers, who teach the system to screen out potentially illegal or toxic content. The machine learning involved is “deep learning”, which builds artificial neural networks inspired by the human nervous system. With many hidden layers, this creates the danger of “black boxes”: it is no longer comprehensible how the AI arrives at its results.
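To make this pattern-reproduction point concrete, here is a deliberately minimal sketch in Python. It is not OpenAI’s architecture (ChatGPT uses deep neural networks, not word counts); it merely illustrates the underlying principle of generating the statistically most likely continuation from patterns observed in training text.

```python
from collections import Counter, defaultdict

# Toy stand-in for the huge training data sets mentioned above.
corpus = (
    "the bot reproduces patterns it has seen . "
    "the bot predicts the next word . "
    "the model predicts the most likely word ."
).split()

# "Training": count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by always emitting the most likely next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
# -> "the bot reproduces patterns it has seen . the"
```

Even this toy makes the point visible: the output is fluent yet derivative, because the model can only recombine what its training data already contains.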

Only a few months after its release, ChatGPT’s success is groundbreaking (thanks also to its interface and free usability), and the tool is predicted to revolutionise most activities involving text in the foreseeable future. To improve its functions, Microsoft has announced an investment of 10 billion USD in OpenAI and the integration of the system into its own software products, such as Word, PowerPoint, and Outlook, as well as into its search engine Bing – which poses a real threat to Google Search.

In Germany, and all over the world, public discussion of the chatbot’s potentially heavy impact on science, academia, and higher education is in full swing. The language model can already write authentic university essays in the social sciences and humanities that can hardly be distinguished from those written by students. In addition, the software is able to write scientific abstracts that human reviewers can no longer reliably separate from “real” human-written abstracts. Moreover, the AI tool has already been listed as a co-author on a few research papers. But that’s not all: it also manages to produce complex philosophical statements. In an experiment, philosophical questions were posed both to the philosopher Daniel Dennett and to ChatGPT, with the aim of testing whether the bot would answer similarly to Dennett himself. The results show that even Dennett experts have difficulty distinguishing Dennett’s real answers from the alternatives generated by the bot.

It follows that the chatbot can “fool” professors, human reviewers, and scientific experts. This has given rise to acute fears, for which quick solutions are being sought. To avoid potential “ChatGPT-enabled cheating” by students, for example, some suggest that teachers first allow students to let the chatbot write their essays and then, in a second step, assign them the task of evaluating the output for accuracy. In this case, the students’ main task is to assess and weigh scientific evidence. Consequently, the end of the classic seminar paper as we know it has been announced. Others call for a transformation of the current examination system at universities, switching entirely from written to oral evaluation procedures. As for the writing of scientific abstracts and entire research papers, leading scientific journals have begun to update their editorial policies. To rule out the use of the AI tool as an author, they require authors to sign a form stating that they take responsibility for their contribution to the paper; as ChatGPT cannot do this, it cannot be an author. Others recommend that researchers rigorously double-check content produced by the chatbot when they use it as a supporting tool, so that the risk of mistakes and incorrect conclusions is minimised.

The question must be asked, however, whether a culture of prohibition and fact-checking provides an adequate or sufficient response to the continuous rise of this new technology. Do these debates fall short of fully grasping both the risks and the potential of the tool, for academia and for science in general?

To better understand its revolutionary impact, one needs to acknowledge that this tool is not a technology that “approaches science from the outside to ‘disrupt’ it”. Rather, it is an innovation that could only have been developed within the context of a specific scientific paradigm: the more than century-old paradigm of positivism, which has led the way to the more recent, twenty-year-old and narrower paradigm of “dataism”. Grounded in the epistemological claim that correlations of data (if the data basis is broad enough) can reflect society through neutral and objective facts, this paradigm reproduces – as Prof. Blayne Haggart explains – “a technician’s world view”. The paradigm is based on the imperative that data-based facts constitute “the scientific truth”. The task of scientists here is essentially to “reveal” these facts and to describe them in a neutral language, without normative evaluations. This also applies to the humanities, where analytical knowledge production (such as analytical philosophy) is deemed to prevail over other forms of knowledge generation. Such a technicist paradigm is to a certain degree interwoven with a neo-liberal economic paradigm, which demands high publication rates from researchers and cost-effective mass management of students from universities. In such a paradigm, an AI tool poses a real danger of replacing scientists and researchers, since it can correlate data and describe its output faster and more efficiently than a human scientist. Haggart argues that what we have now is a “correlations-based, dataist faith in big data”, in which the results of machine-learning pattern recognition are turned into “authoritative knowledge”, and he proclaims not only “the end of the university essay” but also “the death of science”.
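A small, purely illustrative Python experiment (using made-up random data, not a real study) shows why this faith is epistemically fragile: scan enough variables and strong correlations emerge from pure noise, ready to be mistaken for “facts”.

```python
import random
import statistics

random.seed(1)

# 200 made-up "indicators", each a series of 10 random observations.
# By construction, no series has any real relationship to any other.
series = [[random.gauss(0, 1) for _ in range(10)] for _ in range(200)]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Scan all ~20,000 pairs for the strongest correlation in the noise.
best = max(
    (abs(pearson(series[i], series[j])), i, j)
    for i in range(len(series))
    for j in range(i + 1, len(series))
)
print(f"strongest 'finding': r = {best[0]:.2f} "
      f"between series {best[1]} and {best[2]}")
# With this many pairs and so few observations each, correlations
# near or above 0.9 routinely appear by chance alone.
```

A correlation found this way is a pattern, not a truth; deciding whether it means anything is exactly the interpretive work the dataist paradigm tends to skip.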

For this reason, the measures mentioned above do not sufficiently address the current and upcoming challenges. When the focus lies mainly on improving researchers’ fact-checking capabilities, or on tools to prevent possible abuses, the potential of the new technology to transform the notion of good scientific practice and to make us reconsider the role of scientists in our society is overlooked. If researchers are taught to focus first and foremost on evaluating the bot’s scientific evidence, for instance, they risk being “replaced” over and over again by new software. This is shown, for example, by the release of new AI detectors (like GPTZero), which will soon be able to assess with sufficient reliability whether a text has been written by a bot; by the development of “fact-checking technologies”, which will – as far as this is possible – filter out misinformation; and by the establishment of safeguards to produce non-toxic outputs.
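How such detectors work is largely proprietary; GPTZero, for instance, is reported to score texts on “perplexity” and “burstiness” (variation across sentences). The following Python sketch is only a crude stand-in for that second signal, assuming nothing about GPTZero’s actual implementation: it measures how uneven a text’s sentence lengths are, on the folk observation that human prose tends to vary more than bot prose.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Crude proxy for "burstiness": how uneven are sentence lengths?

    Real detectors use model-based perplexity scores; this stand-in
    only measures length variation, so treat it as an illustration,
    not a working detector.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.fmean(lengths)

human_like = "Short. Then a much longer, winding sentence that wanders. Odd?"
bot_like = "This is a sentence. This is a sentence. This is a sentence."
print(burstiness(human_like), ">", burstiness(bot_like))
```

That such signals are statistical, not semantic, underlines the argument above: detection tooling can be automated, and so can the researcher who is reduced to operating it.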

In this sense, ChatGPT offers us not only the chance but also the necessity to ask the following fundamental questions: What is good scientific practice? What is the essential “mission” of universities? What are the skills and abilities that make a good (social) scientist or researcher? For, in order not to fear redundancy, researchers must strengthen their awareness of the specific skills, abilities, and responsibilities that cannot be replaced by any technology.

While an AI tool can excellently build correlations from available data, summarise the output, and provide helpful information and explanations in a comprehensible way, it does not have the capacity to create genuinely new ideas. It cannot critically engage with the world, establish relations between different levels of abstraction, or provide normative evaluations of social conditions. It cannot create new theories or research questions, for which it would need creativity, critical thinking, and self-reflection. Such skills in a human are the result of years of development and a variety of methods: the study of theories in the social sciences, which cannot be understood simply by summarising them but must be read hermeneutically; the forming of critical and also normative/ethical positions, developed in interactive discussions with peers and teachers; the reflection on and awareness of one’s own perspective, gained through interpersonal relations; the probing of interesting research questions through a curious approach to one’s social environment; the creation of future forecasts based on differentiated analyses; and the formulation and comprehension of different arguments, which requires much writing practice.

Prof. Robert Lepenies of Karlshochschule International University in Karlsruhe argues for making the university “a place of experience” where, for instance, the focus is put on the process of writing a university essay and not first and foremost on a fact-checking evaluation of its output.

The acquisition and emphasis of such skills lead to a critical investigation of the current paradigm’s idea that data-based facts constitute the scientific truth, by acknowledging that no (social-scientific) knowledge is absolutely objective, neutral, or decontextualisable. As a consequence, the paradigm’s “authoritative knowledge” can be deconstructed. This allows light to be shed on the programming decisions of individual data scientists and on the potential gender, political, or cultural biases of an AI system. Most of all, it permits a conception of the (social) sciences and humanities that not only critically reflects on and reveals their inherent ideals and hidden assumptions, but might also open up a scientific paradigm that again allows for more theory-driven or qualitatively led, creative, interdisciplinary research, less focused on producing fast publications. For facts only attain truth when they are embedded in a “sense-making” narrative that builds on a (possibly normative) argumentation.

In this sense, ChatGPT could have a liberalising effect on the (social) sciences and humanities, and researchers could enter into a kind of division of labour with the chatbot, delegating (with precaution) certain tasks that a bot can do faster and more efficiently. In this way, new doors can be opened for scientists, for academia, and for university education. The program can be used as a tool for the provision of information and can thus be of great help in expanding interdisciplinary work or in serving as a generator of ideas. Scholars can then process this knowledge by applying their own competencies of meaningful, critical, creative, and differentiated thinking. Moreover, as a “virtual writing assistant”, it can bring more ease back into writing and help to dissolve writer’s block. Prof. Doris Weßels, Germany’s leading expert on the use of AI in education, stresses in this respect that universities need to develop agile approaches in order to best recognise and apply the potential that the tool unveils. This also requires more resources for universities, close supervisory relationships, and more creative, cross-disciplinary collaboration.

Teaser photo by Wander Fleur on Unsplash.