Algorithms, AI and towards a dystopia?

A few days ago, I attended an interesting seminar with scholars at Linnaeus University, where I am currently employed. The subject was cultural policy and how cultural politics are undergoing transformation, not least due to the ongoing war. Another theme that was explored concerned algorithms and AI, and as I left the seminar I began to reflect on how the world is currently being challenged by the rise of what might be called “everyday algorithms and AI”.

Almost all information that citizens encounter on social media today is curated by algorithms. This can sometimes lead to dangerous outcomes, such as the material one encounters gradually becoming more and more radical. This happens because algorithms optimize for engagement: they measure both the time users spend with a piece of information and how they interact with it. This is hardly surprising, and numerous scholars have warned about this development.

With the rise of AI, algorithms have also become an increasingly integral part of everyday life. We use AI to search for information on the web, thereby allowing it to decide what counts as important. We also use AI in our writing, for instance through spelling and grammar tools. In this way, AI is already not only shaping reality but also interfering in our lives without us necessarily noticing it. This becomes even more evident as more and more technology incorporates AI without explicitly declaring it.

I am a strong proponent of AI, as it has a wide range of applications, from medical diagnoses to writing and information retrieval. At the same time, I strongly support the regulation of AI, since I believe we should not allow machines to make formal decisions independently. Concepts such as “human in the loop”, where a human carefully monitors the system, are therefore crucial—especially when it comes to accountability in cases where AI-driven decisions can have concrete impacts on human lives.

Propaganda - perhaps more subtle than ever before.


However, as more and more technology becomes “AI by default”, the line between what we should and should not accept becomes blurred. Using AI to search the web, correct our language, and so forth may seem innocent, but by doing so we also give up part of our autonomy to algorithms. Here a more disturbing thought emerges: these algorithms are controlled by the companies that develop AI technologies. What they are actually selling is either the algorithms themselves or the outcomes they produce. This has led to a situation in which we increasingly relinquish our autonomy to private companies—and, in the worst case, even pay them to do so.

It is often emphasized that AI can be used to create deepfakes capable of destabilizing elections. Another, equally important question, however, concerns how AI monitors and filters the information citizens are given, and how this affects political outcomes. It is no coincidence that dictatorships often shut down the internet in order to gain control over the narrative. The question, then, is whether AI will ultimately serve the interests of its users—or the interests of those who finance it.
