The black box of code and AI in research
You would have to have been living under a rock for the past year not to have heard the praise of AI in contemporary society. Essentially, AI is described as a new tool with which we can process information faster than ever before and use it to make informed decisions. Many proponents of AI believe that there is not a single part of society that will be untouched by the so-called AI revolution, and perhaps they are right. Still, AI also brings significant challenges for the scientific community which I do not believe we have yet discussed at length. In this post I will highlight both how I utilize external AI (i.e. AI created solely to be AI, rather than embedded in common programmes such as Word or Chrome) and what I still would not trust it with.
As a scholar, my main tool for work is the written word. Since English is not my native tongue, this becomes complicated when working with the international scientific community. In my published writing I therefore utilize ChatGPT (primarily 4.0) for proofreading texts. I do not, however, copy-paste the proofread text into a Word document, since AI often has a tendency to change words and give texts a completely different meaning. Instead I utilize AI to find common language errors I make, such as confusing "where" and "were", and to check that the text maintains a consistent tense. This is all work that a human could do, but that would be complicated from two points of view. Firstly, translators are expensive, which does not often fit into a research budget. Secondly, utilizing AI makes it possible to maintain a coherent written text throughout my writing process, making changes while working with the text rather than having to produce a final version to be sent off for proofreading. As the technology has evolved it has also become better and better at giving feedback, but it is still nowhere near the response a human proofreader would give. Still, sometimes in life we have to settle for good enough.
The second utilization of AI in my research is transcribing interviews. The process of transcribing is often very time-consuming, where ten minutes of interview often takes 40–50 minutes to transcribe. By utilizing AI, this is basically done in the time it takes me to have a coffee or bathroom break. Still, it is rather clear that it is a tool for providing the structure of a transcription, since I still have to read the transcript through and correct the errors (and those are many) that the AI makes.
However, AI also comes with both legal and scientific challenges, which is also why I would never let it do the analysis for me. The first legal challenge is the utilization of third-party software, where interviews and texts are stored across the Atlantic. According to current European legislation (the GDPR), it is not legal to store sensitive information on a server outside the European Union. When working with transcripts I am therefore often bound to locally run software such as Whisper, which is not nearly as good as online-based platforms.
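For anyone facing the same constraint, a minimal sketch of such a local workflow with the open-source openai-whisper package might look as follows. The file name, model size and language setting are my illustrative assumptions, not a description of any particular project, and the timestamp formatting is simply one way of preparing the transcript for the manual correction pass described above:

```python
# Sketch: transcribing an interview locally with openai-whisper,
# so the audio never leaves the researcher's own machine.

def format_segments(segments):
    """Turn Whisper-style segments into timestamped lines for manual review."""
    def hms(seconds):
        s = int(seconds)
        return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}"
    return "\n".join(
        f"[{hms(seg['start'])} - {hms(seg['end'])}] {seg['text'].strip()}"
        for seg in segments
    )

def transcribe_locally(audio_path, model_size="small", language="sv"):
    # Imported here so the (large) model is only loaded when actually used.
    import whisper
    model = whisper.load_model(model_size)  # runs on the local machine
    result = model.transcribe(audio_path, language=language)
    return format_segments(result["segments"])

if __name__ == "__main__":
    # "interview.wav" is a placeholder path.
    print(transcribe_locally("interview.wav"))
```

The timestamps make it easier to jump back to the right spot in the recording when correcting the (many) errors the model makes.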
A second ethical challenge with AI is that whilst we can put data in and it can reach a conclusion, we have no idea how this conclusion was drawn. This is largely because many AI providers are private companies, which makes their algorithms a business secret. From a scientific point of view, AI thus presents a new challenge with regard to methodological transparency, where it is virtually impossible for a reviewer to fully understand the process. This can be seen as the "black box of code", which has to be overcome if AI is actually to hold any scientific analytical value. This is, for instance, the case when I provide a transcript of an interview and ask the AI to code it into certain analytical themes: whilst the themes might be correct, I have no way of double-checking how it chose those themes in the first place.
For a long time the humanities and social sciences have had a sharp division between so-called techno-optimists and techno-sceptics. I do not belong to either party, since I take the view that new technology will inevitably affect the way we work. Still, I think it is of great importance that the scientific community remains scientific in its approach to these new technologies. This will perhaps be one of the most pressing challenges for the discipline of history in the coming decade or so.