This is a script I wrote for my opening statement on a panel about "Auditing generative AI models: identifying and mitigating risks to democratic discourse" at DisinfoCon 2024. You can watch the panel here. These ideas were developed into a longer piece for the AlgorithmWatch website.
The key term that’s informing how I’m currently thinking about these questions is ‘Jigsaw puzzle’. I’ll explain.
But to start with my first concern stemming from generative AI: the possible uses of these technologies are very broad, and thus the possible risks and harms are also very, very broad.
So, thinking about democracy. The example that is discussed a lot is that bad actors use these technologies to sway voters. That's worth thinking about, but I think it gets disproportionate attention. Other examples might be that citizens use chatbots as search engines and get wrong or biased answers, as we at AlgorithmWatch have recently researched; you can find that work on our website. We also shouldn't just think of elections. GenAI can be, and is, used to harass political actors, particularly those from underrepresented communities. Which means political participation can be harmed even before an election actually happens. And we also can't just think of these end products of GenAI.
We need to audit the entire value chain of AI creation. There's the input data, there are the ecological impacts. At this very conference last year, Richard Mathenge of the African Content Moderators Union spoke about the exploitative labour involved in de-toxifying models. So that's a lot to deal with, right? And that's the first problem.

My second concern is also kind of a solution. There are increasingly many legislative instruments, especially in Europe, that could theoretically provide protection. We probably think of the AI Act and the Digital Services Act, as well as institutions such as the AI Office and ECAT, which were set up for these laws. I'll leave Brando to talk about the AI Act, as I think you were somewhat involved in it.
But we don't need to think just about these tech regulations. The new Supply Chain Law could possibly help against human rights violations in model training. These other tools can also help to include other non-technical groups, such as human rights defenders, and we at AlgorithmWatch would like to think more about these possibilities. However, my concern is whether CSOs, NGOs, research communities, and so on have the resources to really keep up with all of this, to shape it, and to use it.
So, we have this wide range of risks from GenAI and a wide range of potential tools to address them. If that sounds like a complicated picture, it is. It's the world the tech companies have handed us. That is why I'm thinking about Jigsaw puzzles. The picture is complex, but there are many pieces, and different groups and different tools will fit in different places. We just need to know how to find them and fit them into the right places. A lot of the most important pieces are hoarded by the tech companies, and sometimes by regulators. A lot of the pieces exist outside the sorts of tech-focused spaces that think a lot about AI. And some of the pieces will be fought over a lot, while others are left lying on the ground. But I think this auditing by ecosystem, not by specialist individuals or firms, or by overly narrow or standardised processes, is the best way to approach the real complexity of the problem. We just need to work out how to make it happen.