Towards legislation on Artificial Intelligence
Given recent years' advances in artificial intelligence, and in particular so-called generative artificial intelligence, the EU has launched a series of regulations concerning the implementation of AI in everyday life. Given the vast number of dystopian ideas about what AI can do to society, this legislation is quite interesting from a humanities and historical perspective. The legislation opens by highlighting how AI can change our society for the better, for instance through better access to healthcare and changed conditions for manufacturing. However, it also presents some key areas in which the EU has to regulate the use of AI in society. Drawing inspiration from the field of policy analysis, the legislation itself can be seen as a product of the risks that decision-makers believe AI poses to society (Bacchi 2006), so let's start with the areas of legislation.
The policy document presents three forms of risk. The first concerns managing behaviour in society, where the document's own example is voice-activated toys that promote dangerous play behaviour in children. The second is social scoring, in which AI can be used to sort the population into low- and high-income earners and so forth. Lastly, it is also prohibited to use biometric data for identifying and categorizing people. Yet this last prohibition is qualified, as such biometric analysis can in some regards be legal when used to stop terrorist attacks or to enforce the law.
From a historical point of view it is especially the last two systems that are interesting. Social scoring has been utilized by a number of totalitarian regimes for authoritarian ends that challenged the very concept of human rights. The concern over biometric data stems from a similar notion: such data can likely be used both for ethnic cleansing and for categorizing citizens, developments that have also been seen in contemporary society. What is interesting, however, is that such systems may still be developed to keep the state secure in the context of law enforcement. In this regard it is hard not to think of Foucault's notion of security as something that trumps all other social values in contemporary Western society. This can be added to a long list of cases where certain individuals' security is given up in order to create a "safe" society; for the past twenty years we have, for example, seen how potentially sensitive data such as IP addresses has been saved in order to make legal action possible.
*European border regime: how notions of security can trump other social values.*
What I as a historian cannot help but note, however, is that the electronic products mentioned in the first section have long been utilized to affect people's behaviour. Many of us remember the Cambridge Analytica affair, in which profiles were built from potential voters' social media data in order to target strategies for managing their voting behaviour. Even though the aftermath of this scandal led to fines for Meta (Facebook), there have been few initiatives to actually prohibit similar companies and algorithms; on the contrary, big data is basically the talk of the town in the arena of political lobbying. It is hence rather curious that AI has been singled out as a major threat, when we should perhaps be a bit more concerned about the utilization of big data in general rather than its implications when combined with AI.