Can we trust AI with research?

Artificial intelligence is still the talk of the town. As a scholar, I am very interested in AI, especially in what it can do rather than what it can't. In academia, many discussions about AI in education and research tend to be rather critical, claiming that AI can never be "trustworthy" or used for complex tasks where the end result matters. I agree with much of that, but in many cases where AI has rendered itself useless, the failure comes down to (a) poor prompting, (b) attempts to use it for tasks it was not developed for, or (c) unrealistic expectations.

The text you are reading right now was not generated by AI, but like much else on this blog, it has been edited with ChatGPT 4.0. The reasons for this are quite obvious: I am not a native speaker of English, and I would prefer not to sound like an idiot in writing. For this purpose, ChatGPT actually works quite well, but I need to be careful about what I prompt it with and what I expect from it. I usually write a prompt stating that I only want it to focus on grammar and spelling, without losing my tone or style. After that, I paste in the text and review its suggested changes. I often accept its grammar and spelling suggestions, but I frequently have to redo its comma placement and similar details, since these do not feel natural to me. I thus use AI as a smarter version of Word: not to save time (though it does save me time), but to polish a text I have already written.
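My exact wording varies from text to text, but a typical prompt reads something like this: "Check the following text for grammar and spelling errors only. Keep my tone, style, and word choices intact, and mark every change you make so I can review it." The last part matters, since I want to approve or reject each suggestion rather than accept a rewritten text wholesale.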

Another way I use AI is as a smart search engine. For example, I often ask it about texts I once read but can no longer find, or I use it to locate material online that relates to my research. A while ago, for instance, I needed a text on why certain religious artefacts are not supposed to be passed on to people outside a belief system, which makes them ethically unsuitable for inclusion in a digital collection. ChatGPT identified a number of texts making this argument, and I chose one, which I then read carefully. Hence, I did not simply "buy" ChatGPT's own interpretation, but used it to find sources suitable for the task, which I could then double-check.

A third way I use AI is to identify publishing opportunities. I usually describe the research I am about to do, and after some back and forth it helps me find a suitable journal. Once the journal has been identified, I also ask how articles in that field are typically structured, what referencing system the journal uses, and so on, which helps me get the formal requirements and structure right. I then double-check this against a selection of actual articles from the journal.

To summarize: AI is useful, but not in the way many people use it. You still have to be prepared to do the work yourself; what AI gives you are the tools to carry it out.
