
The Big Open Question for AI: How to Tell the Truth in a World of Data

As a leading expert in the field, computer scientist Swarat Chaudhuri is taking on the challenge of creating a new era of AI systems that prioritize reliability, transparency, and security.

With the implications of language models like ChatGPT in mind, and the question of what happens when their outputs go horribly wrong, Chaudhuri's research focuses on exactly these failure modes. In his view, reliability and much more can be achieved by combining insights from programming languages, formal methods, and machine learning, ultimately yielding AI systems that are both powerful and trustworthy.

Chaudhuri holds a PhD in computer science from the University of Pennsylvania. Previously a faculty member at Pennsylvania State University and Rice University, he has taught a wide range of undergraduate and graduate courses in computer science. His accolades include an NSF CAREER Award, a Google Research Award, and the ACM SIGPLAN John Reynolds Doctoral Dissertation Award, among others.

Currently an Associate Professor of Computer Science at the University of Texas at Austin, Chaudhuri also leads a research lab called Trishul and studies problems at the interface of programming languages, logic and formal methods, and machine learning.

In an interview with The Recursive, Chaudhuri shares his perspective on the future of AI, discusses how his research is shaping the way we approach AI development and what trustworthiness means for AI systems, and offers his predictions about the next big trends in the industry – a topic he will also cover as a speaker at the upcoming INSAIT Series on Trends in AI & Computing event on February 16 in Sofia.

The Recursive: Can you explain your vision for this new class of intelligent systems and how they differ from contemporary AI?

Swarat Chaudhuri: About 10 years ago I started getting interested in machine learning. And it turned out that was a good area to be interested in, because over the last few years ML has really taken over the world. The question that interests me now is: how do we make machine learning more reliable and more trustworthy?

Specifically, if you think of software systems, historically these were entirely engineered by humans. However, now we are increasingly seeing machine learning components being introduced into software. So when you are using a sufficiently complex piece of software, there are certain pieces of it that are already using machine learning. And this trend is only going to continue.

If you fast forward to 2030, for example, real systems are going to have big chunks of machine-learned code inside them. And then there are also many tasks for which machine learning is just absolutely essential – think of computer vision or natural language processing. Historically, much of computing was confined to certain areas that only experts would handle.

But increasingly, computer programmes are everywhere – your phone is a more powerful computer than a large academic or industrial lab had access to 30 or 40 years ago. So, computing is basically everywhere.

Also, we are surrounded by data – and in order to make sense of this data and operate on images and sounds and text, machine learning is really essential. But again, there’s this question of how do we make sure that there is some order to this madness? How do we make sure that we’re not going to have things go horribly wrong?


So that’s where my research comes in – I’m interested in building machine learning technologies that are more trustworthy by construction. If you think of a system like ChatGPT – sometimes it produces things that are just absolutely amazing. But then sometimes it produces outputs that are horribly wrong.

And while I am very excited by the best-case scenario of ChatGPT, I'm also troubled by the worst-case scenario. So I'm imagining a world where we deploy these sorts of systems in real applications and they cause damage – for example, the system is being used to generate code, and the code turns out to have security flaws, takes down our software infrastructure, and so on.

Overall, my goal is to make machine learning more understandable, more trustworthy, and more reliable.

Can you provide examples of complex tasks that these systems can perform, and how they achieve these tasks?

One example goes back to this point about ChatGPT generating code that is potentially buggy. The question we studied a couple of years ago was: how do we get generative models of code – these sorts of neural networks that produce code, like ChatGPT, but it could be a whole variety of other systems – to produce code that is correct by construction and type-safe?

In languages like Java, Rust and so on, you have this strong notion of type safety, which makes sure that your programme is not going to do something absolutely wrong. Now, the question is, let’s say that I want to have a model like GPT, but it should only produce programmes that are type-safe.

So how do you make this happen? It turns out that existing methods just treat code as text. So if you think of how ChatGPT thinks about the world, it’s really that the whole world is just text. And all the code on GitHub is just text as well. And you are just collecting huge volumes of data, and then throwing large scale machine learning at this problem.

So our belief was that the machine learning model should be given some explicit information about what kinds of programmes are type-safe. One approach is to just throw a lot of examples of good code at the model and hope that it somehow magically learns. But what we found out by experimenting is that even the best machine learning models start making these sorts of little mistakes.

And then, as you generate more and more code, you start seeing these errors, or mistakes. However, in software, a small error in one place can actually be an extremely bad security hole that can be exploited to completely take over the system or make it crash.

So that’s why the idea of small mistakes is really important and there are certain kinds of mistakes that you just can’t have. Then the question is, how do you generate programmes that come with this sort of assurance?

The way to do this is to have a sort of mathematical proof that whenever you're using a variable, some value has been put into it, and that value has a meaning. So the idea is that we could take this sort of proof and expose it, along with the text of the programme, to the machine learning model – so GPT is not just seeing the tokens of the programme, but it's also seeing the tokens of the proof.
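
To make that idea concrete, here is a minimal sketch, in Python, of what "showing the model the proof alongside the programme" could look like: each line of a programme is paired with an annotation recording which variables have already been given a value at that point, and the two are interleaved into one training sequence. The helper names and the annotation format are invented for this illustration; it is not the exact representation used in Chaudhuri's work.

```python
# Illustrative sketch: pair each line of a programme with a simple
# "proof-like" annotation saying which variables are initialized there.
# The function names and the <ctx ...> format are invented for this example.

def scope_annotations(program_lines):
    """For each line, record the set of variables already defined.

    This plays the role of the symbolic 'proof' information: a claim
    that every variable used on a line has previously been given a value.
    """
    defined = set()
    annotated = []
    for line in program_lines:
        annotated.append((line, sorted(defined)))
        # Extremely naive definition detection, enough for the sketch:
        if "=" in line and not line.strip().startswith("#"):
            var = line.split("=", 1)[0].strip()
            if var.isidentifier():
                defined.add(var)
    return annotated


def to_training_sequence(program_lines):
    """Interleave programme tokens with annotation tokens.

    A language model trained on sequences like this sees both the code
    and the symbolic facts about it, instead of treating code as raw text.
    """
    parts = []
    for line, defined in scope_annotations(program_lines):
        parts.append(f"<ctx defined={','.join(defined) or 'none'}>")
        parts.append(line)
    return "\n".join(parts)


example = [
    "x = load_input()",
    "y = x * 2",
    "print(y)",
]
print(to_training_sequence(example))
```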


So this is an example of how you're not just doing pure neural learning over the raw code: you are constructing something using symbolic information and symbolic methods, and you are exposing the neural network to that as well. It shows how adding something on top of the purely neural method can be beneficial.
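
Another common way to inject symbolic information, sketched below, is to check candidate continuations at generation time and discard any that a symbolic validity checker rejects, so the sampled output stays well-formed by construction. This is a generic constrained-decoding pattern offered for illustration, with a toy stand-in checker; it is not a description of a specific system from the interview.

```python
import random

# Sketch of checker-guided decoding: at each step, only keep candidate
# tokens whose resulting prefix still passes a symbolic check.
# `prefix_is_valid` and the toy token vocabulary are stand-ins invented here.

def prefix_is_valid(prefix_tokens):
    """Toy stand-in for an incremental type/scope checker.

    Here it only enforces that a variable is used after it is defined.
    A real checker would track types, scopes, and much more.
    """
    defined = set()
    for tok in prefix_tokens:
        if tok.startswith("def:"):
            defined.add(tok.split(":", 1)[1])
        elif tok.startswith("use:"):
            if tok.split(":", 1)[1] not in defined:
                return False
    return True


def generate(candidate_tokens, max_len=6, seed=0):
    """Sample a token sequence, masking out candidates the checker rejects."""
    rng = random.Random(seed)
    out = []
    for _ in range(max_len):
        allowed = [t for t in candidate_tokens if prefix_is_valid(out + [t])]
        if not allowed:
            break
        out.append(rng.choice(allowed))
    return out


vocab = ["def:x", "def:y", "use:x", "use:y"]
print(generate(vocab))  # never contains "use:v" before the matching "def:v"
```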

What is the roadmap for developing and deploying these systems, and what are the key challenges you anticipate?

I would say that we have to start by identifying some application domains where there is a real need for trust. And we have already identified some of these domains – in particular, cyber-physical systems and robotics, for example. This is where you start to deploy machine learning, and we'll have to do it in a careful way.

Then there is critical system infrastructure – there are now machine learning components inside cloud infrastructures, computer networks, and increasingly even operating systems and so on. So if you want to build these sorts of real-world software systems that benefit from the power of machine learning but at the same time respect critical safety and security properties, how do you build them? That's another important question.

Critical software systems, then, are another place where we are very interested in seeing these methods work. And the third domain is science. In the scientific setting, we are really trying to understand how the world works, and as a result we need some of these symbolic pieces, because ultimately humans understand through this kind of symbolic language; but at the same time, we want to do empirical, data-driven learning, and that is what the statistical part is for.
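
As a toy illustration of pairing a symbolic representation with data-driven fitting, the sketch below searches a small, hand-written space of candidate formulas, fits each formula's constant to some observations, and keeps the best one; the formula is the human-readable symbolic piece, and the fitting is the statistical piece. The candidate formulas and the data are made up for this example.

```python
import math

# Toy example in the spirit of symbolic + statistical AI for science:
# choose, from a small symbolic hypothesis space, the formula that best
# fits observed data, and fit its constant by least squares.
# The candidate formulas and the synthetic data are invented for the sketch.

candidates = {
    "y = a * x": lambda x, a: a * x,
    "y = a * x**2": lambda x, a: a * x ** 2,
    "y = a * sqrt(x)": lambda x, a: a * math.sqrt(x),
}

# Synthetic observations generated from y = 3 * x**2 (no noise).
data = [(x, 3.0 * x ** 2) for x in range(1, 6)]


def fit_constant(f, data):
    """Least-squares estimate of 'a' for a model of the form y = a * g(x)."""
    num = sum(y * f(x, 1.0) for x, y in data)
    den = sum(f(x, 1.0) ** 2 for x, y in data)
    return num / den


def mean_squared_error(f, a, data):
    return sum((f(x, a) - y) ** 2 for x, y in data) / len(data)


best = min(
    ((name, f, fit_constant(f, data)) for name, f in candidates.items()),
    key=lambda t: mean_squared_error(t[1], t[2], data),
)
name, f, a = best
print(f"best formula: {name} with a = {a:.2f}")  # expect: y = a * x**2, a = 3.00
```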

We need both of these pieces in order to do AI for science effectively. That's the third domain. Now that we have identified these domains, what is the roadmap for actually getting this to work? In parallel with identifying the domains, we are working on the basic tooling: we are developing new algorithms, but we are also building software infrastructures in which these algorithms can be put together.

At some point, the algorithms are going to be scalable enough to be deployed at a large scale, and there is still some work that remains to be done there. But some methods are more practical than others. There's a range of approaches, and I think some of them can actually be deployed in the very short run – it's just that somebody has to take these ideas and try to build companies out of them.

2023 looks to be a big year for AI – what is your take on what is currently happening in the industry?

The last couple of years have been the years of large language models, and I think we are going to see more of that this year, because this is the space where we have a very clear roadmap: we have models like GPT, and now we want to scale them up to even more parameters. This is something we know how to do, so it's just a matter of doing it.


But there is this fundamental issue that these models don’t have a sense of truth – they don’t know what’s true or false. They just know that this is what the data in the world is, this is how people talk, and so on. So I think that this can cause a lot of problems. 

First of all, beyond a certain level of complexity, I think this approach just doesn't scale to complex tasks. So how do we go from machines that are able to repeat patterns seen in the real world to machines that actually construct genuinely new things and complex, sophisticated arguments, in a way that is logically sound?

And I don’t think that this problem is going to go away if you just throw more computing at it – it may get a little bit better, but fundamentally, I think the issue is that there is no understanding of the ground truth realities of the world.

So I think this is going to be the big open question for AI, not just for this year but for the decade, or the next 20 years. If we are ever going to get to general-purpose AI, then we need to solve this question of how you give these models a sense of what is true and what is false, and how you make them reason.

I don’t think we are anywhere close to solving this, and I think this will take decades to solve effectively. In the meantime, I would say that there are still going to be advances and we are still going to have a lot of impact in the real world.

We are going to see more of these systems; we are going to see more assistants for writing, coding, and photo or video editing. This will allow people to creatively explore new designs, but at the same time we are nowhere close to a world where these AI systems just start running off independently and taking over big parts of human work. I think these are going to be more like assistants that help humans, in the same way that web search, compilers, and programming languages have helped humans.

What are your expectations for INSAIT and how would you rate the potential that the region has when it comes to computing and AI?

This is a very exciting moment and I was absolutely thrilled when I heard about INSAIT. I think that ultimately human talent is distributed similarly everywhere, but if you look at where most of the AI research papers are coming from and where most of the big tech companies are based, they are concentrated in certain parts of the world.

And I don’t think that this has to be the case – we should create the circumstances so there is a Silicon Valley in Bulgaria, an Austin in North Macedonia, and so on.

So how do we make this happen? It requires a lot of things: world-class universities, government policies that support those universities, startup ecosystems, and regulatory frameworks that make it easier to create new businesses and innovations. I think this is all possible, and INSAIT is a great first step.

