
OpenAI Board Member Shares Advice for AI Startups in CEE: Work Closer to Academia!

Image: Zico Kolter (credit: INSAIT)

How can the gap between industry and academia be bridged? While AI breakthroughs in industry often make headlines, these successes are just the tip of the iceberg. Industry increasingly controls the three critical elements driving modern AI research: computing power, large datasets, and a pool of highly skilled researchers. Sharing that expertise with the broader community is one way to support and strengthen this progress.

In October, Zico Kolter, a Professor of Computer Science and the head of the Machine Learning Department at Carnegie Mellon University, gave a lecture on AI Safety and Robustness as part of the INSAIT Tech Series, in front of almost 500 people at Sofia University. Kolter completed his Ph.D. in computer science at Stanford University in 2010 and was a postdoctoral fellow at MIT from 2010 to 2012.

The Recursive spoke to the professor about the importance of collaboration between academia and industry, the development of LLMs, and his new role on OpenAI’s board.

Zico Kolter emphasized that AI remains primarily a research-driven technology. “OpenAI, for example, is conducting what I would call very fundamental, academic-style research, but they are doing it in an industrial context. However, this is somewhat of an outlier – they have massive resources enabling them to do so,” he explained. 

Kolter pointed out that, for the most part, significant advancements in AI at other companies, particularly smaller ones and startups, are born out of breakthroughs in academia.

“Therefore, I believe the best way to drive innovation, especially for startups or industry players without the vast resources of companies like OpenAI or Google, is through genuine, close integration with fundamental academic research. These two areas are deeply interconnected, and time and again, we see the profound impact that basic research has on our current world.”

Kolter further explained that discussions are underway regarding potential formal collaborations between CMU and INSAIT. He also highlighted the strength of the regional talent pool: “I am aware that the culture here is incredibly strong when it comes to programming and math competitions. This is something many students take pride in, and it’s a characteristic that many tech companies actively seek. In that sense, the groundwork is absolutely solid for fostering collaborations in this region,” he added.


Zico Kolter’s role on the OpenAI Board

In August this year, Kolter took on the role of Chair of the Safety and Security Committee on OpenAI’s board. He is responsible for overseeing the safety policies OpenAI implements in its models.

In September, OpenAI announced that the committee would oversee the security and safety processes for the company’s AI model development and deployment. The change followed recommendations the committee made to OpenAI’s board, which were made public for the first time.

“OpenAI’s researchers are the ones conducting safety analyses and developing the technology, but the board provides guidance and oversight to ensure the process aligns with safety and security standards,” shared Kolter. “OpenAI is heavily investing in building models that are not only safer but also capable of assessing their own safety. My goal on the board is to ensure these efforts meet the highest standards,” he added.

Over the past two years, OpenAI has struggled to make its board work effectively, amid a power struggle between CEO Sam Altman and Ilya Sutskever, OpenAI’s former Chief Scientist, over AI safety, commercialization, and cultural divides.

Asked how LLMs could be kept from spreading misinformation, Kolter shared: “Current commercial models have guardrails that prevent the generation of overtly harmful content. For instance, if you ask for an article promoting false claims about vaccines, the system will refuse. However, tackling misinformation at a deeper level is harder. Even statements that sound plausible but are factually inaccurate can be a source of misinformation.”

Nevertheless, Kolter also noted that open-source models are especially vulnerable to misuse: a bad actor with sufficient resources can fine-tune these models or exploit them with adversarial attacks to generate disinformation. “We are already at a point where misinformation can be created relatively easily by those who intend to do so. Combating this requires continuous advancements in both the models’ guardrails and detection mechanisms,” he explained.

“To be fair, we’ve always been able to generate information easily – simply by writing it ourselves. However, these new technologies significantly increase the speed and scale at which information can be produced. This makes it a challenging problem to address.”

Kolter emphasized that, in some ways, people have reverted to relying on their communities, trusted networks, and preferred sources for information. “While this might sound pessimistic, it could simply reflect the way the world and the human condition function. Changing someone’s mind, whether with fake or real information, is incredibly difficult,” Kolter concluded.






Teodora Atanasova is a News Editor at The Recursive. She covers funding rounds, exits, startups expanding to international markets, big tech opening R&D centers in CEE, and partnerships meaningful for the ecosystem.