Imagine applying for a job and encountering an AI-based recruitment process. If such a system is trained on reliable data sets, you can expect a fair outcome. However, AI also has a dark side.
If the AI is trained on biased data sets, it could make decisions that perpetuate or even amplify existing inequalities. For example, if an ML algorithm is trained on data that reflects historical hiring practices, it might learn to prioritize male candidates over female ones, leading to gender discrimination.
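To make the mechanism concrete, here is a deliberately minimal sketch using fabricated data (the records, groups, and hire rates are all hypothetical, not from any real system). A naive "model" that simply learns historical hire rates per group will score two identically qualified candidates differently, purely because of the bias baked into its training data:

```python
# Hypothetical illustration: a naive "model" that learns per-group hiring
# rates from biased historical data and reproduces that bias in its scores.
from collections import defaultdict

# Fabricated historical records: (gender, hired). 80% of past hires were
# men -- reflecting biased past practice, not candidate quality.
history = [("M", 1)] * 80 + [("M", 0)] * 20 + [("F", 1)] * 20 + [("F", 0)] * 80

def train(records):
    totals, hires = defaultdict(int), defaultdict(int)
    for gender, hired in records:
        totals[gender] += 1
        hires[gender] += hired
    # The learned "score" is just the historical hire rate per group.
    return {g: hires[g] / totals[g] for g in totals}

model = train(history)
# Two identically qualified candidates get different scores purely
# because of the group statistics in the training data.
print(model["M"])  # 0.8
print(model["F"])  # 0.2
```

Real ML models are far more complex, but the failure mode is the same: when group membership correlates with outcomes in the training data, the model can learn that correlation as if it were merit.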
While these models are incredibly powerful and can make decisions that often outperform human experts, they come with a significant downside: they are largely opaque – so it can be difficult, if not impossible, to understand how they arrive at their decisions.
And these are only some of the milder examples illustrating the potential dangers the technology poses.
The Recursive had insightful conversations with a group of innovative AI startup founders about the dark side of AI, delving into their concerns about the revolutionary technology they’re crafting, as well as the sincere desire to educate people on how to skilfully navigate it.
The dangers of discriminatory and biased AI models
The loudest ethical concerns around large language models such as ChatGPT right now center on whether such models can discriminate and generate content that is simply dangerous. However, there’s a lot more to it, according to Croatian mathematician and entrepreneur Sinisa Slijepcevic.
“I’d go back to a story I’m fascinated by, and it comes from the UK. Back when COVID-19 happened, there was a situation when suddenly there were no A-Level exams, which are the basis for graduation and a precondition to enter any university. And then somebody in the UK Ministry of Education came up with a model that’s going to predict the final grade, based on all the grades that this person has received in the past, and based on everything else that we can think of,” Slijepcevic, who is also the CEO and founder of data analytics and ML startup Cantab PI, tells The Recursive.
What happened next is that the model turned out to be hugely discriminatory – if you went to a good school, the model pushed your grades up by default; if you went to a bad school, it pushed them down, the Croatian mathematician explains.
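A heavily simplified sketch shows why such a design discriminates by construction (this is an illustration of the mechanism Slijepcevic describes, not the actual algorithm or its parameters; the blending weight is an assumption for demonstration):

```python
# Simplified illustration (NOT the actual UK grading algorithm): a predictor
# that blends a student's own record with their school's historical average,
# mechanically pulling grades up at historically strong schools and down at
# historically weak ones, regardless of individual merit.
def predict_grade(student_avg, school_hist_avg, weight=0.5):
    """Blend individual performance with the school's past results."""
    return (1 - weight) * student_avg + weight * school_hist_avg

# Two students with identical records (70), at different schools:
print(predict_grade(70, 85))  # 77.5 -- boosted by a high-performing school
print(predict_grade(70, 55))  # 62.5 -- penalised by a low-performing school
```

The moment school-level history enters the prediction, individual outcomes depend on where you studied, which is exactly the discrimination that caused the public backlash.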
“This is super dangerous. So whoever had this idea – first, it was not thought through, and it wasn’t very good. Because this is not only about discrimination – it is about developing predictive models without standard use cases. You could do a lot of damage by deploying something that you do not fully understand,” he says.
Understanding what’s under the hood
According to Slijepcevic, while black box models can change the world if the technology is applied in a wise and responsible manner, the hype around them needs to be toned down, and the industry needs to assess the risks involved.
“This can be done by combining the deep understanding of what the problem is and capabilities that this technology has – so it’s not just to avoid discrimination, but also to prevent any kind of damage,” Slijepcevic adds.
People and companies also need to have a deep understanding of these products and services and what their limitations are, Croatian entrepreneur Mislav Malenica agrees.
“I see a lot of companies right now basing their products fully or solely on ChatGPT. And many, many people actually don’t understand what the advantages and limitations of GPT are. And then if you rely too much on something that was never the thing you’re actually selling – that’s a problem, because you can create the feeling that someone is taking care of people when that someone doesn’t really understand what’s going on,” Malenica tells The Recursive.
For Malenica, there’s also the question of the intentions of those behind such technology.
“It’s like a statistical parrot – it creates things or says things that actually sound smart. But they aren’t necessarily accurate and can even be harmful. But if you actually have expertise in the field, and if you are a responsible person trying to build a company, provide a service, and take responsibility, then this is a completely different story – something like ChatGPT can be a super valuable tool. Then again, you need to know its limitations,” Malenica adds.
Talking about the dark side of AI is then the first step in limiting potential threats.
Education and transparency as key to living with AI-based models
The rapid advancement of AI and machine learning capabilities has led to the proliferation of such models in many areas, from finance to healthcare to self-driving cars, to name a few. And in the years to come, learning to live with AI will become crucial for most people in these industries, Matija Nakic, CEO of financial planning cloud solution Farseer, tells The Recursive.
For Nakic, AI should be used as a supporting companion for decision-making – one that is there to help, not to fully replace the people vital to the process or to make decisions on its own.
“I think all levels of managers will have to become more data savvy and more, let’s say financial model savvy, so that they understand the basics of what’s driving a certain outcome within their business. And then if they get any kind of recommendation from the AI model, they will be able to deduce where it’s coming from, and what are the potential outcomes,” Nakic says.
Transparency is another key aspect which can help people understand how these models operate.
“If you give people some kind of a suggestion, prediction, whatever from the system, and you don’t explain why, the adoption will be very low. Besides transparency, another important thing is the ability for people to impact that outcome. People have intuition and knowledge and they have to work together with AI and together with software to come to the best kind of solution – so that’s how I expect things to go in the future – we will all have to learn how to work with AI,” Nakic concludes.
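The kind of transparency Nakic calls for can be sketched in a few lines (the feature names and weights below are invented for illustration, not from Farseer's product): instead of returning a bare score, the system returns per-feature contributions, so a user can see what drove the recommendation and experiment with the inputs:

```python
# Hypothetical sketch of an explainable prediction: alongside each score,
# surface per-feature contributions so users can see *why* -- and can
# change the inputs to see how the outcome moves.
weights = {"revenue_growth": 0.6, "churn_rate": -0.3, "headcount": 0.1}

def explain(features):
    """Return the score plus each feature's signed contribution to it."""
    contributions = {k: weights[k] * features[k] for k in weights}
    return sum(contributions.values()), contributions

score, why = explain({"revenue_growth": 10, "churn_rate": 5, "headcount": 20})
print(score)  # 6.5
# Largest drivers first, so the user sees what mattered most:
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.1f}")
```

This only works directly for simple additive models; for opaque ones, post-hoc attribution techniques play a similar role, but the design principle is the same: show the "why", and let people adjust the inputs rather than accept the output blindly.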