
LatticeFlow’s Petar Tsankov on AI’s Role in Amplifying Human Capabilities

In an interview with The Recursive, LatticeFlow's Petar Tsankov discusses AI's societal benefits and its ability to scale healthcare expertise for the betterment of society.
Image credit: Freepik

The positive impacts of AI can be felt everywhere, especially when discussing the technology’s role in amplifying human capabilities and opening doors to previously infeasible tasks. For LatticeFlow’s CEO and co-founder Petar Tsankov, the healthcare sector offers a clear example, where AI’s potential can address the challenges posed by rising expenses and diminishing quality of care.

In an interview with The Recursive, Tsankov discusses the societal benefits, pointing to AI’s ability to scale healthcare expertise for the betterment of society. However, Tsankov also acknowledges the emergence of negative impacts, emphasizing the importance of regulatory boundaries.

The following interview was conducted as a part of The Recursive’s “State of AI in CEE” report. Download the full report with insights from 40+ experts and an analysis of 900 AI product companies from CEE here.

The Recursive: Can you share the biggest milestones LatticeFlow has achieved in the past year, and some of the biggest challenges your company has faced during the development of your products?

Petar Tsankov: Looking back over the past year, I’d like to highlight some crucial milestones for LatticeFlow. For context, it’s important to know that we’ve been actively operating in the market for about two and a half years now. Our notable achievement during this timeframe was the successful launch of our product at the close of the previous year. This marked a pivotal step as we transitioned into the production phase.

Speaking of the current year, our primary focus has revolved around internal processes and refining them. This involved extensive efforts in understanding our customer base more comprehensively and enhancing our capacity to provide robust customer support. Our journey shifted from primarily tackling technical hurdles to grappling with go-to-market challenges. 

These encompassed strategic decisions such as selecting the right partners and companies to collaborate with. It also entailed streamlining our internal workflows to ensure optimal support for our clients. This holistic approach ensures that we are not only addressing the technical aspects but also delivering tangible solutions and value that resonate with the companies we engage with.

Now, let me highlight a couple of significant achievements amidst these challenges. We’re immensely proud of our consecutive nominations for the CB Insights 2023 list. This prestigious recognition reaffirms our position as a top-tier AI company globally, an honor we’ve secured for the second year in a row. Furthermore, we recently unveiled groundbreaking outcomes resulting from our partnership with the US Army. We introduced a revolutionary concept termed ‘resilient AI models.’ 

This approach marks a critical stride toward deploying AI solutions that exhibit unparalleled reliability in real-world scenarios. It’s a pursuit of the ultimate goal – to ensure AI performs seamlessly in mission-critical contexts. It’s worth noting that the significance of this achievement extends beyond defense; it has applications in sectors ranging from medical care to manufacturing. It’s an imperative for any AI application pivotal to business operations.

Lastly, a noteworthy development we’re excited about but not yet public – we’re embarking on an expansion into the United States. Specifically, we’re establishing a presence in the Bay Area. This strategic move is underscored by the colossal significance of the US market in our domain. We’re in the process of assembling an adept team in that region, a step that further solidifies our commitment to growth and impact.

In terms of LatticeFlow’s collaboration with the U.S. Army to develop resilient AI systems for mission-critical applications – Are there plans or expectations for similar collaborations with armies from the CEE region? How might such partnerships influence the AI-driven defense capabilities in the region?

We’ve initiated partnerships with the German government, specifically centered around the development of secure AI. This is of significant importance for Germany due to its prominent automotive industry that’s venturing into AI for self-driving cars. Within the domain of secure and resilient AI, we’re collaborating with major German car manufacturers.

However, when we look at Central and Eastern Europe, our collaborations haven’t extended in the same vein. Our existing partnerships span countries such as Singapore and Germany, but not within the Central and Eastern European region. It’s worth mentioning that our primary Eastern European collaboration remains with INSAIT, a government-affiliated university partnership, albeit with a distinct focus.

What are currently your main markets to sell your products? And what are your company’s plans for product and business development in the next 12-24 months?

At present, our key focus is on forging close collaborations with businesses for which AI holds paramount importance, particularly those in sectors where AI plays a mission-critical role. This encompasses industries like manufacturing, defense, and insurance. These segments stand out as our primary targets due to the essential role AI plays in their operations.


Now, as we set our sights on the upcoming 12 to 24 months, I’ll address both the dimensions of product development and business expansion. In terms of our product evolution, we’re witnessing a remarkable expansion from our initial niche product, which centered on data and model assessment. This has organically evolved into a comprehensive platform designed to facilitate the creation and deployment of robust AI models. Our scope has expanded to encompass everything beyond the fundamental processes of data labeling and training. 

Essentially, LatticeFlow is transitioning into a holistic solution spanning the entire AI lifecycle. Our product development strategy is dedicated to addressing quality and reliability at every stage of this lifecycle, from development to deployment. This signifies a significant shift toward embracing the broader spectrum of AI domains. While we initially emphasized computer vision, we’re also integrating language and speech models to cater to a wider array of AI applications.

Shifting gears to business development, our primary focus is on amplifying our presence within the pivotal verticals we currently serve. While I won’t delve into precise numerical targets, our overarching strategy involves cultivating a more extensive footprint within our existing key sectors. Expanding our market share and influence in manufacturing, defense, and insurance is a central pursuit. Although I won’t provide specific figures, this direction aligns with our broader industry practices and aspirations.

How do you perceive the current AI regulatory landscape in Europe and the US, and how does it impact businesses like LatticeFlow?

The landscape of AI regulations is currently in the process of taking shape, and it’s important to acknowledge that it’s an ongoing development. One positive aspect is that it’s serving to raise awareness within the industry about the significance of responsible AI deployment. Personally, I hold the belief that this technology’s potential demands a collective understanding of its usage and implications. As for my perception of the current situation, I view these regulations with a positive lens. They’re particularly crucial in establishing clear boundaries – what I refer to as ‘red lines’ – delineating AI use cases that should be off-limits. This level of clarity is paramount.

One pivotal consideration, especially in Europe, centers around the concept of high-risk sectors. These sectors involve AI applications that come with specific, stringent requirements for companies operating within them. While this approach is well-intentioned, there’s a need to strike a balance to ensure that businesses aren’t unduly stifled. It’s a cautious path to tread, especially given that the full extent of the regulations isn’t yet finalized. Flexibility is key, ensuring that the regulations are adaptable enough to foster growth while maintaining necessary controls.

Reflecting on LatticeFlow’s experiences, we conducted preliminary assessments for German car manufacturers and Swiss banks even prior to the official regulations. Internally, these companies recognize the imperative of ensuring their AI models’ optimal performance, as subpar performance could translate into substantial business losses. What we’ve noticed is a considerable gap between regulatory standards and the practical realities of the AI landscape. The standards function as a baseline, providing inspiration, but the complex technical assessments we conduct on these models delve far deeper. It’s as if the standards serve as guideposts, while the assessments are comprehensive benchmarks.

The inherent risk lies in a potential misalignment between regulators and the dynamic real-world requirements for delivering dependable AI. Bridging this gap is paramount to ensuring that regulatory frameworks are not only relevant but also effective in fostering reliable AI deployment.

How do you think the upcoming EU AI regulatory framework will impact the adoption of AI products and services? Given the rapid advancements in AI, do you believe the current regulations in the EU are sufficient or is there a need for more stringent measures?

From my perspective, if the implementation of the EU AI regulatory framework is executed thoughtfully – and it seems to be progressing in that direction – it could serve as a catalyst rather than an impediment. This is particularly relevant for sectors subject to regulations, such as the medical field. For instance, in medical applications, a smoother adoption process is vital. Companies are often hesitant to adopt new technologies without a clear regulatory framework. Thus, if this framework is well-structured, it could potentially enhance global AI adoption rates in regulated industries. It’s important to underline that this hinges on a well-executed regulatory strategy.


Turning to the adequacy of the current regulations, I find the direction commendable. There’s no immediate call for more stringent measures. The emphasis on establishing clear ‘red lines’ and the focus on high-risk sectors are promising steps. However, it’s imperative to ensure that the regulations translate into actionable directives for companies. A potential challenge lies in avoiding regulations that are theoretically feasible but computationally unattainable. Let me illustrate with an example from the AI Act – consider copyright-related aspects. 

While they may appear sound on paper, they should also be practically enforceable. This is a critical juncture where missteps could occur. If regulations require actions that are computationally infeasible to carry out due to hardware limitations, it creates a quandary. We anticipate a period of iterative refinement to bridge the gap between legal language and the practical realities of AI implementation. This collaborative process will likely involve adjustments and recalibrations as lawyers and machine learning practitioners work in tandem to strike the right balance.

LatticeFlow aims to diagnose and improve AI vision models. How do you ensure that these models are free from biases, especially when they are used in critical sectors like healthcare? Can you share an instance where LatticeFlow identified a significant ethical concern in an AI model and how it was addressed?

The core predicament with these models lies in their evaluation, often relying on aggregate metrics like an impressive 90% accuracy. However, these metrics don’t capture the intricate nuances of performance across diverse data sets. This can inadvertently lead to biases, manifesting in disparities based on factors like age or gender. What sets LatticeFlow apart is its distinctive capability. We don’t solely assess known biases – we transcend that by unearthing previously unnoticed biases that can have significant ramifications. While not all biases are ethical concerns, they can wield substantial impact.

A tangible instance involves our audit of a medical model designed to detect breast cancer. In practice, medical images like X-rays are fed into the model. In some cases, the imaging process introduces artifacts, such as a vertical bright line inadvertently appearing in the image’s center. While not an ethical issue, this artifact caused a substantial bias as the model underperformed significantly whenever this artifact was present. This instance highlights a reliability concern. The solution was to adapt the model’s training. By altering the training data and ensuring that these artifacts didn’t distort the model’s learning process, we effectively mitigated the bias. This fix was achieved by addressing the model’s training data composition.
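Tsankov’s point about aggregate metrics masking subgroup failures can be sketched in a few lines of Python. The numbers and the “artifact” grouping below are hypothetical illustrations, not LatticeFlow’s actual tooling or data:

```python
# Illustrative sketch: an aggregate accuracy metric can hide a large
# performance gap between data slices (e.g. images with vs. without an
# imaging artifact). All figures here are invented for illustration.

def accuracy(pairs):
    """Fraction of (prediction, label) pairs that match."""
    return sum(p == y for p, y in pairs) / len(pairs)

# Hypothetical evaluation results, sliced by a metadata attribute.
results = {
    "no_artifact": [(1, 1)] * 95 + [(0, 1)] * 5,   # 95% correct
    "artifact":    [(1, 1)] * 6 + [(0, 1)] * 4,    # 60% correct
}

all_pairs = [p for group in results.values() for p in group]
print(f"aggregate accuracy:   {accuracy(all_pairs):.1%}")  # looks healthy

for name, pairs in results.items():
    print(f"{name:12s} accuracy: {accuracy(pairs):.1%}")   # reveals the gap
```

Because the artifact slice is small, the aggregate score stays above 90% even though the model fails badly on exactly those inputs; per-slice evaluation is what surfaces the problem, and retraining on corrected data is one way to fix it.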

Balancing AI’s potential with ethical considerations is crucial. How do you envision striking this balance, especially as AI continues to revolutionize industries?

Our approach revolves around neutrality, striving to remain net neutral in our stance. We refrain from imposing a predefined notion of ethics. Instead, our core mission is to empower the global community with the tools to construct dependable AI systems that align with their understanding of ethical principles. This, in itself, is an intricate endeavor. We opt for a neutral position, not solely due to our Swiss location, but because we acknowledge the diversity of opinions and perspectives that encompass ethical considerations. Defining this ethical landscape transcends the scope of our role.

The challenge lies in enabling the technological capability to create reliable AI systems while allowing the broader discourse on ethics to unfold naturally. It’s crucial to comprehend that this dialogue spans nations, regions, and governments, including entities in Europe and the US. We’re enthusiastic about participating in these discussions as contributors, but we refrain from dictating a one-size-fits-all ethical framework. Our clients, for instance, aren’t subjected to a singular ethical stance that we impose. This realm of ethical considerations is a collaborative pursuit involving a multitude of stakeholders.

As AI becomes increasingly integrated into our daily lives, what positive and negative societal impacts do you foresee? Philosophically, where should we draw the line in AI capabilities to ensure human values and ethics aren’t compromised?

Starting with the positive impacts, they’re quite discernible, as AI serves to amplify human capabilities. This technological augmentation opens doors to tasks that might have been previously infeasible or simply of great interest. A clear example would be the healthcare sector, where AI’s potential could address pressing challenges. Take the scenario of the US’s rising healthcare expenses accompanied by diminishing quality. AI has the potential to bridge this gap, ensuring more accurate and timely diagnoses, a critical aspect given the vast number of unexamined radiology images due to personnel limitations. Here, the positive societal impact is evident as AI helps scale healthcare expertise for the betterment of society.


However, the realm of negative impacts emerges where the regulatory boundaries of the AI Act come into play. The Act’s essence lies in establishing limits to prevent potential pitfalls. A notable example is mass surveillance, a consequence that we collectively wish to avoid. The AI Act’s purpose aligns with this, meticulously demarcating boundaries to avert such negative ramifications.

Reflecting on AI capabilities, my perspective aligns with the principle of evolution – relentless advancement. Restricting AI’s potential goes against this trajectory. As a result, I don’t believe in imposing arbitrary limits on its capabilities. If AI surpasses human capabilities, it’s an extension of natural progression. Hence, I advocate against hampering AI’s potential. It’s essential to establish boundaries in terms of application, wherein we wield AI responsibly to serve humanity. This approach maintains our autonomy while embracing the transformative potential of AI.

In your view, how does the AI innovation ecosystem in CEE compare to Western Europe? What are the key drivers of competitiveness for CEE-based companies, and what are the region’s unique strengths?

I’d be glad to share my insights on this matter, particularly in the context of Europe. On the affirmative side, there are discernible strengths and challenges. In Eastern Europe, specifically, there exists a favorable ecosystem conducive to innovation. The region boasts a reservoir of robust engineering talent, a critical asset in crafting superior products. Additionally, the business conditions, including tax structures, present an advantageous landscape. These factors have led us to establish an office in Sofia, acknowledging the potential that Eastern Europe offers.

Conversely, Western Europe contends with certain shortcomings. Labor laws and associated regulations create a less appealing environment for businesses, especially in comparison to the conditions in Eastern Europe. Personally, I would hesitate to consider opening an office in many Western European countries due to these constraints. This decision aligns with our strategic goal of capitalizing on the unique strengths of Eastern Europe while mitigating the challenges.

One notable challenge prevalent in Eastern Europe, however, pertains to talent. The pool of individuals well-versed in state-of-the-art AI technology remains somewhat limited. This limitation emanates from the educational framework and the level of research in the region. Addressing this discrepancy presents a more complex challenge with longer-term implications, one that requires thoughtful consideration and investment. Despite this drawback, it’s imperative that we navigate these dynamics with awareness and discernment.

Recognizing LatticeFlow’s connection to INSAIT, how do you view the role of research institutions like INSAIT in enhancing the competitive edge of the CEE region? In light of this, could you elaborate on the potential for increased collaboration between academia and businesses? 

Certainly, the relationship between LatticeFlow and INSAIT holds profound significance. To delve into this, it’s crucial to grasp the essential role that cutting-edge research institutions like INSAIT play in shaping the competitive landscape of the CEE region. Such institutions are pivotal for establishing a foothold as a leader in the AI domain. In concrete terms, this entails producing groundbreaking research that achieves recognition in top-tier academic conferences, such as ICML and NeurIPS. Without this foundation, becoming a frontrunner in the AI field is nearly unattainable. While it’s plausible to serve as an engineering hub, the absence of in-depth comprehension of the latest advancements, such as intricate language models, hinders comprehensive progress.

Transitioning to the topic of collaboration, it’s imperative to understand the benefits that institutions like INSAIT and academia bring to the table. The relationship transcends merely connecting industry products and academic research. Rather, academia’s primary objective revolves around nurturing world-class individuals who understand and advance state-of-the-art methods. In this context, INSAIT’s role becomes pivotal – producing talents who can not only grasp but also push the boundaries of the state-of-the-art AI landscape. The symbiosis between academia and business manifests as a conduit for developing top-tier professionals who can effectively bridge theory and application.

While collaboration is certainly valuable, the nucleus lies in fostering a pool of adept talent within Eastern Europe. This endeavor supersedes solely connecting academia and businesses. The ultimate aim is to equip the region with the cognitive prowess required to thrive in the AI landscape. Collaborative efforts can certainly contribute, but nurturing skilled individuals remains the crux. In essence, nurturing this talent pool is pivotal for the growth of impactful AI enterprises within the region.



Snezhana Simeonova is a creative marketer who believes that storytelling is the key to engaging customers and driving demand. Drawing inspiration from some of her favourite brands like Disney, Pixar, Marvel, and Star Wars, she specializes in crafting strategies and copy that capture the imagination and build lasting connections.