He plays the guitar, practices tennis, and enjoys swimming. But he’s also a rising star in the world of AI research.
His name is Victor Kolev and he is an 18-year-old Stanford University student. He recently became one of the winners of the 32nd EU Contest for Young Scientists, securing a €7,000 grant to develop his scientific project in neural abstract reasoning. The project explores the capability of artificial intelligence (AI) to quickly grasp logical rules and learn abstract concepts. To test the AI, Victor and his team worked with many different logical puzzles for which the algorithm had to figure out the pattern. “It’s like an IQ test, really,” Victor shares with The Recursive. He has already published a scientific paper exploring the topic at the 2020 Research Science Institute.
Victor says that he has been involved with computer science since he was in second grade. During his high school years, he got involved with statistics, which in turn led him to the world of AI.
Today, The Recursive team gets into Victor Kolev’s notebook to learn more about his experience, the major developments surrounding his award-winning project, and the most pressing problems in the field of AI.
The Recursive: How did statistics lead you to AI? Where did the inspiration for your project come from?
Victor Kolev: It all happened because of my colleagues and peers – a fabulous team of researchers who have helped me develop and guided me on my research journey. It was one of my mentors who introduced me to my current topic. He showed me the problem and told me it was a very interesting one that almost no one had tackled before, with no known solution. He suggested that I try working on it and see what comes out. We basically discussed ideas every week or two, and I ran experiments on my own to see what works and what doesn’t. Eventually, after almost a year of iterations, we arrived at the work that I presented.
What does your solution look like and what does it do?
We have an ensemble of different machine learning models, as there was no single architecture that could fit all of our needs. We use a variety of methods, whether that’s attention mechanisms, which try to find similarities and differences between different objects, or external memory, which aims to store information and access it when needed. We also use different augmentations so that we maximize the information that we have. To test the neural networks, we used logical puzzles, which are kind of like IQ tests.
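To give a rough sense of the attention mechanism Victor mentions, here is a minimal sketch of generic scaled dot-product attention in NumPy. This is a standard textbook formulation used for illustration, not the team’s actual architecture; the array shapes are arbitrary.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each query scores its similarity to
    # every key, and the scores weight a mixture of the values.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # pairwise similarities
    weights = softmax(scores, axis=-1)  # normalize each row to sum to 1
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 query objects, 8-dim features
K = rng.normal(size=(6, 8))  # 6 key objects
V = rng.normal(size=(6, 8))  # one value per key
out = attention(Q, K, V)
print(out.shape)  # (4, 8): one similarity-weighted summary per query
```

The similarity-based weighting is what lets such a layer compare objects in a puzzle grid against one another, which is the “finding similarities and differences” role described above.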
What does the typical experiment look like and how do you assess results?
Everything starts in a notebook. You jot down an idea. You think about how and why this would help and how you can implement it in practice. Then you usually deploy the experiment on a server or run it on a laptop. You wait a few days and see what comes out.
Results are usually pretty self-explanatory: you leave the model to do its thing, and it gives you an accuracy, which is just a percentage. So you really just compare with your previous experiment and deduce why you got this result. When a result is better than the previous one, you try to understand what you did right, and vice versa. It is really important to remember that an experiment is a trial-and-error process, because AI is a new field that is very poorly understood. We have little theoretical grounding for why one thing works and another doesn’t. In fact, there are a lot of techniques that, theoretically, should be completely impossible but in practice work quite well, and we are still not entirely sure why.
What knowledge and expertise from different fields do you need to do the experiments and continue developing such an AI solution?
So you have a lot of mathematics, from different branches, including approximation theory and linear algebra. There is a lot of it. You also have, of course, computer science, especially when it comes to deciding how these models are implemented. And there is distributed and high-performance computing, because each experiment requires huge computational power, which often means you need to distribute the computational load across multiple machines.
When it comes to my project, even philosophy is involved. As my topic explored whether AI can think abstractly, I had to answer many philosophical questions: what does it mean for something to reason, what is abstract reasoning in the first place, can machines do it, or are we humans special in some way? The problems I was working with were logical puzzles. You have multiple examples, you need to figure out the pattern in the examples, and then apply it to a test sample. This is relatively easy – like an IQ test. But if we look at it from a purely theoretical perspective, there is an infinite array of patterns that are logically valid. For us humans, there is only one pattern, which we see intuitively. The problem I had to deal with was figuring out what this guiding intuition is and whether we can implement it in a machine learning model. The creator of this challenge calls this the concept of core knowledge, and it is still a profound topic of interest in the area.
What is the difference between human and AI abstract thinking?
What we derived from our framework was that, essentially, our intuition looks for the simplest pattern possible. Because we rarely think about something incredibly complex, we try to make our models as simple as possible. If we look at something like physics, early Newtonian mechanics is built from linear models. From a computational perspective, that’s as simple as it gets. That was the postulate on which we built everything. We formalized it through computer science frameworks, such as complexity theory, and worked our way up from there. But we again relied on this postulate that human intuition leans toward simpler answers. However, this is not necessarily true, since we often see simple events but overcomplicate and overthink them.
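One way to make “intuition looks for the simplest pattern” concrete is the minimum-description-length idea: among all candidate rules that explain the examples, prefer the one with the shortest description. The toy below is an illustration of that general principle, not the project’s actual formalism; the candidate rules and the string-length scoring are made up for the example.

```python
# Toy minimum-description-length selection: infinitely many rules can fit
# a few examples, but we prefer the shortest (simplest) description.
examples = [(1, 2), (2, 4), (3, 6)]  # (input, output) pairs

# Three hypothetical rules; all three reproduce every example exactly.
candidates = {
    "x * 2": lambda x: x * 2,
    "abs(x) + x": lambda x: abs(x) + x,
    "x**3 - 6*x**2 + 13*x - 6": lambda x: x**3 - 6*x**2 + 13*x - 6,
}

def fits(rule):
    # A rule "fits" if it reproduces all observed examples.
    return all(rule(x) == y for x, y in examples)

# Description length here is just the formula's string length --
# a crude stand-in for formal complexity measures.
consistent = [name for name, rule in candidates.items() if fits(rule)]
best = min(consistent, key=len)
print(best)  # "x * 2"
```

All three rules agree on the training examples but diverge on new inputs, which is exactly the ambiguity described above: the data alone cannot pick a winner, so a simplicity prior has to break the tie.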
What are the next steps towards ensuring the robustness of AI algorithms?
There are many aspects that need to be improved – from the data collection and refinement processes to the architecture of the AI algorithms themselves. I can give an example that became quite popular some time ago – scientists conducted an experiment with a stop sign to test the capabilities of an AI-powered self-driving vehicle. They placed a sticker on the stop sign, and the system then recognized it as a 50 km/h speed limit sign.
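The mechanism behind such attacks can be sketched with a toy linear classifier: many tiny, individually imperceptible changes, each pushed in the worst-case direction, add up and flip the decision. The “stop sign” model below is entirely made up for illustration; real attacks on vision systems work on deep networks via gradients, but the additive effect is the same in spirit.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=100)   # weights of a toy linear "stop sign" detector

# A clean input the model confidently labels "stop"
# (every feature agrees slightly with the weights).
x = 0.01 * np.sign(w)

def predict(v):
    return "stop" if w @ v > 0 else "speed limit"

# Fast-gradient-style perturbation: shift every feature by a small
# amount (0.05) in the direction that lowers the "stop" score. Each
# change is tiny, but 100 worst-case nudges flip the label.
eps = 0.05
x_adv = x - eps * np.sign(w)

print(predict(x))      # stop
print(predict(x_adv))  # speed limit
```

This is why robustness is hard: the perturbation is bounded and looks like noise (here, at most 0.05 per feature), yet it is precisely aligned with the model’s weaknesses.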
There are multiple approaches that currently promise to alleviate these concerns, but none of them fixes everything completely. Nonetheless, AI is perhaps the fastest-developing field. Its development is so rapid that it’s incredibly difficult to keep up with all the innovation, even for people who are actively doing research and are on the bleeding edge.
Things that were discovered two years ago are now ubiquitous. AI today looks nothing like AI 10 years ago, and there is no other field where you see such a huge leap in progress over 10 years. Given the speed at which it is developing, we really do not know where we will be in the next 10 years. We won’t have robots running the world, but it will be very interesting to follow how things develop and whether we’ll hit a point where progress becomes very difficult.
What is the most important personality trait an AI data scientist or engineer has to have?
Perseverance. Sometimes, things go perfectly well. Everything is phenomenal, and everything works. But that’s maybe 1% of the time. Usually, stuff doesn’t work and you don’t know why. And you need to find out. AI is a black box: you give it a set of data and expect results to come out, but you don’t know how that initial set is transformed into the result you obtain. Often you are left wondering why you got a particular result without a clear answer; you can only guess and test your hypotheses to eventually get to the bottom of it. You never know directly what went wrong, even though there is a lot of research in this area as well. So perseverance is incredibly important.
What does the Bulgarian education system lack to stimulate more young researchers like you?
I actually think we’re incredibly lucky in some regards. Especially in science, maths, and informatics, we have very deep traditions. My passion grew through a couple of programs, which are entirely Bulgarian-run. One of them is the Summer Research School of the High School Students Institute of Mathematics and Informatics. They were hugely influential in my development. And they actually sent me to a program in the US called RSI, the Research Science Institute, which is typically conducted for six weeks during the summer at the Massachusetts Institute of Technology. I also participated in other initiatives, including a young talents forum organized by the Ministry of Education and the InnoFair pre-college competition, made possible by the club of young scientists. In other fields, there is a summer camp for research in biology, and there are experimental physics competitions in which lots of people participate. What I mean to say is that we’re relatively lucky with the opportunities we have, because many other countries, including Western ones, do not have the same level of young-talent development. The resources are out there. They’re waiting to be utilized.