With a rich professional background in both data science and management consulting, Łukasz Borowiecki brings a wealth of experience to the AI industry in Poland.
As CEO of the consulting company 10 Senses and an AI sector expert at the Digital Poland Foundation, the leading NGO in Poland committed to advancing the discourse and application of AI, his expertise spans diverse sectors, including healthcare, telco, automotive, FMCG, and transportation.
In an interview with The Recursive, Borowiecki sheds light on how the AI industry can tackle the various data science and data-focused challenges it faces, the development of the industry in Poland and across the rest of the CEE, and much more.
The following interview was conducted as a part of The Recursive’s “State of AI in CEE” report. Download the full report with insights from 40+ experts and an analysis of 900 AI product companies from CEE here.
The Recursive: What are the most important considerations that institutions need to take into account when developing and deploying AI technologies? (e.g. fairness, privacy, accountability, human control, societal impact, environmental impact)
Łukasz Borowiecki: Data is the key thing. So they should have smart people organizing data and treat it as a long-term project, as a journey. What actually happens is that they hire and assign teams and follow the hype – these guys try to do something, they fail, and it’s done, the money is burned.
The key thing is to start with data. Have it well managed, and have it accessible. Those that want to have AI should start with data and analytics. Afterward, the AI will be easy. The second thing, regarding AI, is that I would say it makes sense not to develop it internally, but just to buy it.
As a company, we do AI and machine learning, but we put close to zero effort into promoting this. What we try to do and promote is business intelligence. However, when companies do ask us about AI, we talk to them.
In my view, however, selling machine learning or AI projects is difficult, because the obvious use cases will usually be done by the software houses. A lot of machine learning is very problem-specific, and it takes a long time to develop skills there.
So, it makes more sense to focus on a few small areas and develop them. We want to be very good at Explainable AI because we have this background in research. For me, Explainable AI is a new way of doing regression modeling. So we want to be good at this one particular problem and try to win projects in this area. When I think of AI, I always think about a particular problem and how to wrap it in a solution.
You can sell business intelligence like that, as a general thing. You can sell software development, app development, and so on, like the software houses do. But with AI, I would say it’s better to focus on particular small issues and problems from the get-go.
What is the process of creating a responsible AI strategy and how is Poland doing in this regard?
I would say pretty poorly. On the other hand, only a few countries have a proper strategy that actually works and makes sense – I would say Scandinavia, the northern Western European countries, and the US. And we are not alone in lacking a real, sensible strategy; Italy and Spain are in the same camp.
We have one, but it is something they created because they had to. It’s not clear how they want to finance it, and the responsibilities are not clear. The goals are there, but the KPIs aren’t. And these are major drawbacks.
Do you see enough collaboration between businesses, institutions or state actors when it comes to AI advances and if not – why?
I would say that in terms of companies like ours and big business, there is cooperation – if there is a good idea, the cooperation follows.
Regarding cooperation between the private sector and government, it’s always potentially a problem, especially in our region. It’s what the Scandinavian and Anglo-Saxon countries are good at. But here, we aren’t really that good.
What is your take on the most recent AI legislation developments in the EU and the US and have you noted any differences in ethical areas when it comes to Europe?
We don’t know what will happen there. Because if you read the AI Act, it is so rigid that it doesn’t make sense. It’s as if, when you do AI, you break the law. So we are waiting on the actual interpretation of this law.
Our idea with Explainable AI is exactly that: we are currently working on building tools to measure the effects of various variables within black-box models. This is important because you want to measure, let’s say, the effect of the gender variable being female on the scoring model.
So, if you want to measure this, it’s a measurement problem. So, that’s why we are looking at the AI Act. We think that these Explainable AI solutions will be a direct and very handy tool for addressing the topics from the AI Act.
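The measurement idea described above can be sketched with a simple counterfactual probe: hold every other attribute fixed, flip the sensitive attribute for the whole sample, and compare the average scores. This is a minimal illustration, not 10 Senses’ actual tooling; the `black_box_score` function and all feature names are hypothetical stand-ins for an opaque scoring model.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_score(X):
    # Hypothetical opaque scoring model. In practice we would only
    # see its predictions; this stand-in deliberately leaks a small
    # gender effect (-0.15) so the probe has something to detect.
    return 0.5 * X[:, 0] + 0.2 * X[:, 1] - 0.15 * X[:, 2]

# Synthetic applicant sample: columns are income, age, gender flag.
X = np.column_stack([
    rng.normal(1.0, 0.3, 1000),   # income (normalized)
    rng.normal(0.5, 0.1, 1000),   # age (normalized)
    rng.integers(0, 2, 1000),     # gender (1 = female)
])

# Counterfactual flip: score everyone as female, then as male,
# keeping all other attributes unchanged.
X_female = X.copy()
X_female[:, 2] = 1
X_male = X.copy()
X_male[:, 2] = 0

effect = black_box_score(X_female).mean() - black_box_score(X_male).mean()
print(f"average effect of gender=female on the score: {effect:.3f}")
# prints: average effect of gender=female on the score: -0.150
```

Because the probe only calls the model’s prediction function, the same approach works for any black box; with a non-linear model the per-row differences would vary, and one would typically also look at their distribution, not just the mean.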
But we don’t know what will happen – there are so many things in the AI Act that we are still waiting to be resolved. Each country will have to decide who takes on the functions of a regulatory body. One thing is that an AI model should not discriminate, so maybe it’s a unit within the body responsible for data privacy, or maybe a body responsible for finances.
The other thing is that you have products like toys, which can also contain AI, so this may fall under a different regulatory body – one that certifies products. Another thing is that if we want to have these rules around AI, then they should translate into the legal system. When there’s a case against an AI model in court, what procedures do we follow? So we don’t really know.
How can we take further steps to address such issues from the regulatory side?
Generally, it makes sense that AI should be regulated. People ask whether this will stem innovation.
But look at the regulations we have for developing drugs. In my opinion, that field might even be over-regulated, because there are hundreds of drugs that never got to market and thousands of lives that were never saved. But that’s what we have in drugs.
So it’s funny when they ask whether this will stem innovation, because we’re already killing innovation in so many fields. This is just a tiny bit of regulation compared to the pharma industry.
Which factors do you reckon will have the biggest impact on improving this outlook for CEE? (e.g. talent, government support, collaboration between academia and industry, access to funding, infrastructure development etc.)
One thing is the adoption of IT, data, and AI among companies. The other is the development of AI tools and solutions here in the region. So these are the two different things that might have an impact.