AI act - the nexus between law, technology and philosophy

In late August the European Parliament launched new legislation on AI, and scholarship will probably spend the coming decades interpreting it. Because the legislation is so new, it will also take time before we can see its full extent and application. This is quite problematic, since the world right now is screaming for more people with knowledge of AI, from its technical development to legislation and policy. When I was working out a new course that I taught at Lund University this fall, I therefore had to turn to the very slim literature on the subject, and I found two very interesting articles.

The first was written by Stefan Larsson, and it addressed the theme of legal pace. From this perspective, there is always a tension between law and technology, since the latter often develops faster than the former. Furthermore, AI comes with a huge amount of technological baggage, which creates several problems when we try to regulate it. A clear example is the term AI itself: we can't really define it, because we can't really define what intelligence is. For instance, we could define AI as a tool that adapts to the user's requirements and to the feedback it gets from its actions; that is, we could define intelligence as something that adapts its behaviour according to changes in its surroundings. The great problem with this definition is that it is too wide, since it would also cover thermostats and other tools that we normally would not classify as high-tech or AI. On the other hand, we could define AI as a certain set of algorithms, a definition that risks being too narrow and thus missing relevant systems.

A similar theme has been discussed by Luciano Floridi, who has written an article on the process of creating the AI act. One problem he addresses is that many members of the European Parliament did not actually understand the technology they were set to regulate. Rather than viewing AI as large language models, people seemed to believe that AI was the same as Skynet from the Terminator movies. In practice this means that legal frameworks for AI have met huge difficulties, and that the AI act can be seen as an attempt to mend the current situation.

Do these obvious problems make it a bad piece of legislation? I personally do not think so, since a framework is in most situations better than none. What I do worry quite a bit about is the gap between theory and practice. For instance, the act's classification of risks is rather vague, since the technology in itself does not bring risk. It is how a system is used and operated that makes it dangerous in some situations, not the system in itself.

Let's say that we create a tool for vetting job applicants for a security-graded position, and that this system is carefully monitored. In that case it does not pose a serious threat to anybody, since it more or less just automates a process which a human could do. If, on the other hand, it were used by a fertility clinic to screen out people deemed unworthy of procreation (an extreme task, but one that has occurred over the last century), it would pose a serious risk to mankind and to human rights. Perhaps this is also the reason why I believe that AI legislation should incorporate various groups - from programmers to philosophers and NGOs - rather than just people with technical understanding. I know that this has been the case with the AI act, but I am still really interested in how this legislation will be translated from theory to practice.
