Last month, Serbian politician Dragan Djilas took a seat across from the host of a popular TV show, ready to share his thoughts on his political journey and the future of his party. As the politician began to speak, though, he launched into a tirade against not only his own supporters but also his fellow party members.
Seen as a shocking display of political self-destruction, the opposition politician's speech left viewers dumbfounded. However, the man on the screen was not the real Dragan Djilas. Instead, the two-minute video, which had been featured in the news edition of local broadcaster TV Pink, was created using AI and deepfake technology by TV Pink's owner Zeljko Mitrovic.
The Serbian politician later announced that he would file lawsuits against TV Pink and its owner, possibly making this the first case in the CEE region involving the misuse of AI.
AI-generated images have raised similar red flags across Europe – in April, a picture of an elderly man being violently restrained by French riot police went viral, only to be debunked afterwards as the work of a digital artist.
In a world where the truth can be digitally distorted with ever greater ease, such incidents are a reminder of what AI can do in the wrong hands.
Is there an antidote to the misuse of AI?
On the one hand, AI has the potential to combat fake news and misinformation by rapidly analyzing vast amounts of information to verify facts. It can be a powerful tool for ensuring that the public is well-informed, a cornerstone of a healthy democracy.
However, the very same technology can be exploited to spread misinformation, casting a shadow over the democratic process, Belgrade-based AI researcher Ljubisa Bojic explains. The rise of generative AI that can mimic human speech and behavior has made it easier for malicious actors to flood social media platforms with convincing but false narratives.
“We need to enforce stringent regulations on the creation and dissemination of deepfakes and more proactive tools for detecting misinformation. For example, this might include forbidding automated AI posts on social media during election campaigns and harsh penalties for deepfakes. We would also need to introduce digital identity standards to verify the profile of each person,” Bojic suggests.
Such disinformation campaigns can also create echo chambers that undermine democratic ideals, leading people to question the value of democratic principles such as free speech, freedom of association, and free and fair elections. Here, however, public awareness plays a critical role as well.
“Being vigilant and considering the credibility of the source before sharing information can go a long way in combating misinformation,” Bojic tells The Recursive.
For cognitive scientist Justin Lane, co-founder of the Slovak AI platform CulturePulse, though, there's an often-overlooked aspect of AI's impact on democracy: hybrid threats. These threats, carried out by foreign actors, fall beneath the threshold of war but have the potential to erode the social cohesion vital for a functioning democracy.
When a society loses its cohesion, it can result in decreased tax compliance, a reluctance to engage in the democratic process, and diminished faith in the government, Lane argues.
“Finding ways for AI to better protect and serve in those key democratic functions that we don’t always think about is really important. When we think about democracy often we think about what our governments do for us. We don’t always think about democracy as an ideal and things like free speech and freedom of association and you know, freedom of economy, and the ability to have free and fair elections,” Lane tells The Recursive.
Rethinking AI’s role in democracy
To address such challenges, he argues that AI must be used differently. Traditional AI models, which focus on inputs and outputs and are trained on historical data, might not be suitable for the rapid adaptation required in democratic processes.
Democracies excel at responding swiftly to changing environments, but AI’s traditional approach may not capture the nuances of human behavior and values, he points out.
“So what we need to understand are the people that act in changing environments and how our environments are going to change, and how we act to do that. We need to fundamentally rethink artificial intelligence. It can’t just be input to output because then sometimes you get an issue of garbage in, and garbage out, as they say,” Lane adds.
The focus should also shift toward analyzing human psychology and understanding motivations, values, and morals. Rather than relying on AI to make crucial decisions about free speech, elections, or censorship, we should harness AI to better understand ourselves and our society, the scientist says.
“These are the issues of artificial intelligence that I think are really going to be more important as we move forward into 2024 which is an area where you have election cycles all over the world throughout the democratic world,” he tells The Recursive.
Therefore, in a world where AI stands to wield immense power, the key to preserving democracy lies in a thoughtful, human-led approach.
“We need to be very careful about how we utilize artificial intelligence. And the idea that AI should be making decisions about free speech, or decisions about elections, I think is very dangerous. Instead, what we need is to have humans making the most important decisions in our democracy because democracy ultimately is a process of people, a process of the people’s voice,” Lane concludes.