
Tackling the AI Safety Elephant, Bulgarian-founded LatticeFlow Introduces New Solution to Boost AI Model Reliability

Bulgarian-founded startup LatticeFlow introduces a new solution called Intelligent Workflows to help machine learning engineers find and fix model errors and ensure the reliability and robustness of AI models in production.
Image credit: LatticeFlow

Only have 1 minute? Here are 3 key takeaways from the piece:

 • LatticeFlow, a Bulgarian-founded startup, has launched Intelligent Workflows, a new solution that helps machine learning engineers find and fix model errors and keep AI models reliable and robust in production. 

 • The solution aims to eliminate blind spots early in the AI development cycle, enabling enterprise machine learning teams to improve model performance faster and mitigate risks.

 • The company’s AI platform targets a wide range of industries, including defense, manufacturing, and healthcare, among others.

AI safety and trustworthiness have been among the main topics of the EU AI Act, currently being debated by European lawmakers. And while the debate has highlighted the challenges and risks associated with various AI applications, startups and companies working in AI safety have been building solutions to tackle these issues. 

Bulgarian-founded AI startup LatticeFlow is one of them. Earlier this month, LatticeFlow announced its strategic expansion into the US market with the creation of LatticeFlow USA. Last year, the company raised a $12M investment after a year of rapid growth and widespread adoption of its platform for robust AI models, whose users include the US Army, Germany’s Federal Office for Information Security, and companies such as Siemens.

Now, the company has launched a new solution that, according to its CEO and co-founder Petar Tsankov, aims to close a significant gap: the stark difference between impressive AI demos and the underperforming AI models that follow in production. 

“This presents a dual challenge. On one hand, business leaders need clear guidelines to ensure model reliability before deploying business-critical AI, to prevent disruptions with high-stake AI deployments. On the other hand, machine learning teams struggle to systematically build and deploy accurate and robust AI models, hindered by the technology’s brittleness and complexity,” Tsankov tells The Recursive.


With the increased use of AI in business operations, integrating high-performing AI models into real-world applications has become increasingly critical. This is the challenge LatticeFlow’s new solution aims to tackle while making life easier for ML engineers. 

“Some perceived model errors aren’t truly errors – they result from inaccurately labeled data, a human error. ML engineers are currently tasked with the challenging job of manually resolving such issues, a daunting endeavor considering the vast scale and intricacy of today’s datasets and models. To address this, our team released what we termed Intelligent Workflows, simple-to-use steps that help machine learning engineers proactively find and fix such errors at scale,” Tsankov further explains. 
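LatticeFlow has not published the internals of Intelligent Workflows, but the core idea Tsankov describes, surfacing likely label errors at scale instead of hunting for them by hand, can be illustrated with a simple confidence-based check: flag samples where a trained model confidently disagrees with the annotated label. A minimal, hypothetical sketch in Python (the function name, threshold, and toy data are illustrative assumptions, not LatticeFlow’s actual method):

```python
import numpy as np

def flag_suspect_labels(probs: np.ndarray, labels: np.ndarray,
                        threshold: float = 0.9) -> np.ndarray:
    """Return indices of samples whose annotated label disagrees with a
    confident model prediction -- candidates for human re-labeling.

    probs:  (n_samples, n_classes) predicted class probabilities
    labels: (n_samples,) integer class labels as annotated
    """
    predicted = probs.argmax(axis=1)     # model's top class per sample
    confidence = probs.max(axis=1)       # how sure the model is
    disagrees = predicted != labels      # model vs. annotation
    return np.where(disagrees & (confidence >= threshold))[0]

# Toy example: sample 2 is labeled 0, but the model is 95% sure it's class 1.
probs = np.array([[0.8, 0.2], [0.3, 0.7], [0.05, 0.95]])
labels = np.array([0, 1, 0])
print(flag_suspect_labels(probs, labels))  # -> [2]
```

Flagged samples would then be routed to a human reviewer rather than treated as genuine model failures, which is the distinction Tsankov draws between perceived and real errors.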

How does it work in real life? Tsankov gives the example of diagnosing breast cancer, which affects millions of women annually. 

“Although these models often achieve high prediction accuracy, we have witnessed real-world instances where the model’s performance degrades by more than five times in seemingly benign scenarios, such as the presence of a bright line that occasionally appears in X-ray images. While this happens in only approximately 2% of X-ray images, it impacts the diagnosis in tens of thousands of breast cancer screenings,” Tsankov points out. 
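The arithmetic behind that claim is straightforward: even a rare artifact adds up to a large absolute number at screening scale. A quick back-of-the-envelope check (the annual screening volume below is an illustrative assumption, not a figure from LatticeFlow):

```python
# Rough sanity check of the "tens of thousands" claim.
annual_screenings = 2_000_000   # assumed screening volume -- illustrative only
artifact_rate = 0.02            # ~2% of X-ray images show the bright line
affected = annual_screenings * artifact_rate
print(f"{affected:,.0f} screenings affected per year")  # 40,000
```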

A key feature of Intelligent Workflows is that it can also understand and analyze custom AI models provided by users. 

“This means that users can integrate their own models, which are already tailored to and understand the specific tasks they are addressing. They can then utilize our intelligent workflows to identify and resolve data and model issues in a generic and intuitive manner,” Tsankov adds. 
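LatticeFlow has not documented this integration publicly, but the “bring your own model” pattern Tsankov describes is typically implemented as a thin adapter exposing a uniform prediction interface, so the same generic workflows can run against any architecture. A hypothetical sketch (the class and method names are assumptions, not LatticeFlow’s actual API):

```python
from typing import Protocol
import numpy as np

class ModelAdapter(Protocol):
    """Uniform interface a diagnostic workflow can rely on,
    regardless of the underlying framework or architecture."""
    def predict_proba(self, inputs: np.ndarray) -> np.ndarray: ...

class TorchAdapter:
    """Wraps a user's PyTorch classifier behind the common interface."""
    def __init__(self, model):
        self.model = model

    def predict_proba(self, inputs: np.ndarray) -> np.ndarray:
        import torch
        self.model.eval()
        with torch.no_grad():
            logits = self.model(torch.from_numpy(inputs).float())
            return torch.softmax(logits, dim=1).numpy()

# Any workflow written against ModelAdapter now accepts custom models:
def accuracy(model: ModelAdapter, x: np.ndarray, y: np.ndarray) -> float:
    return float((model.predict_proba(x).argmax(axis=1) == y).mean())
```

The design choice is what makes the workflows “generic”: the diagnostics only ever see the adapter, never the framework-specific model underneath.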

Many of LatticeFlow’s customers also worry that, when their models fail, they don’t know where or how to find the problem. As a result, organizations often struggle for years to figure out why their models aren’t working.

“This is why we’ve engineered our Model Diagnostics tool, which guides machine learning engineers through a series of intelligent workflows to detect and assess model blind spots, regardless of whether they use off-the-shelf AI models or custom architectures,” Pavol Bielik, CTO and Co-founder of LatticeFlow, added.
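Blind spots of the kind Bielik describes are commonly surfaced by slicing the evaluation set along metadata attributes and comparing per-slice performance to the overall score; slices that lag badly point to systematic weaknesses like the bright-line artifact above. A minimal sketch of that general technique (attribute names, threshold, and toy data are hypothetical, not the Model Diagnostics implementation):

```python
import numpy as np

def find_blind_spots(correct: np.ndarray, attributes: dict,
                     gap: float = 0.10) -> list:
    """Report (attribute, value, slice_accuracy) for data slices scoring
    at least `gap` below the overall accuracy."""
    overall = correct.mean()
    weak = []
    for name, values in attributes.items():
        for value in np.unique(values):
            mask = values == value
            slice_acc = correct[mask].mean()
            if slice_acc < overall - gap:
                weak.append((name, value, float(slice_acc)))
    return weak

# Toy data: accuracy collapses on images flagged with the bright-line artifact.
correct = np.array([1, 1, 1, 1, 0, 0, 1, 1])       # per-sample correctness
attributes = {"bright_line": np.array([0, 0, 0, 0, 1, 1, 0, 0])}
print(find_blind_spots(correct, attributes))
# reports the bright_line == 1 slice at 0.0 accuracy vs. 0.75 overall
```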


Encouraging businesses to follow AI safety procedures

According to Tsankov, discussions on AI safety have not directly slowed the pace of innovation; if anything, the impact of the debate has so far been positive.

“This has heightened the urgency of AI safety research and encouraged business leaders to proactively establish processes and frameworks for validating data quality and model safety. For all high-risk AI applications, these measures are prerequisites to widespread adoption,” Tsankov points out. 

When it comes to AI safety, crucial developments are expected in the “unacceptable risk” category, Tsankov explains.

“This category aims to prohibit AI applications that are deemed harmful to society, such as mass surveillance. This approach is similar to how society has managed other powerful technologies in the past. Emphasizing testing and validation before deploying AI in ‘high-risk’ categories is also wise. These categories significantly impact sectors like medical, insurance, and financial services, where AI decisions can have substantial effects on human lives,” he tells The Recursive. 

Furthermore, the US and UK are also actively participating in this global strategic debate, as demonstrated by US President Joe Biden’s executive order on AI safety and trustworthiness and the UK’s AI Safety Summit held this year, both of which underline the importance of ensuring safe and trustworthy AI on a global scale.

 



Bojan is The Recursive’s Western Balkans Editor, covering tech, innovation, and business for more than a decade. He’s currently exploring blockchain, Industry 4.0, and AI, and is always open to covering diverse and exciting topics across the Western Balkans. His work has been featured in global media outlets such as Foreign Policy, WSJ, ZDNet, and Balkan Insight.