Taming the AI Beast: Transforming AI Innovation into Operational Reality
“In the past year, super smart computer programmes called AIs have gotten way better and faster… It’s like going from a few smart robots in one lab to whole cities full of helpful robots.” – Dave Ruane
Artificial intelligence is no longer the future – it’s the present. From content generation to translation, AI is everywhere. But while experimentation is rampant, few companies are succeeding in turning AI pilots into enterprise-grade production systems. In the latest Elevate Innovate session, Dave Ruane (Director of Client Solutions, Lion People Global), Olga Blasco (M&A Principal Partner, Lion People Global), and Phrase leaders Georg Ell (CEO) and Simone Bohenberger-Rich (CPO) explored the stark realities and massive potential of implementing AI.

Why 90% of AI Projects Never Make It
Simone described where organisations struggle when moving from AI experimentation to real-world execution, pointing to research from Gartner and other sources showing that 90% of AI projects still fail to progress beyond the proof-of-concept stage.
She stressed that large language models (LLMs) are not plug-and-play solutions: they require structure, clean data, and well-defined risk controls to function consistently at scale.
“…an LLM, as deceiving as it looks – we can all use it – is not a solution out of the box. It helps you in your kitchen by telling you what recipe you can create based on what’s in your fridge. But when it actually comes to turning it into a reliable solution that produces the same output time and time again, it falls short.” – Simone Bohenberger-Rich
So why do most AI deployments stall? Common blockers include:
- Treating LLMs as out-of-the-box solutions rather than components that need structure built around them
- Messy or insufficient proprietary data
- The absence of solid frameworks and well-defined risk controls
Simone’s advice is to establish solid frameworks, manage risk carefully, and always remember that clean, proprietary data is the most powerful ingredient in successful AI deployments.
Tiered Workflows: The Secret to Managing AI Risk
As pressure mounts across industries to deploy AI, Olga Blasco offered a critical reminder: successful implementation isn’t just about plugging in technology; it’s about strategic orchestration. Large language models, AI agents, and human experts must work together in tiered workflows tailored to content type and risk level.
The concept of a tiered approach – long discussed in theory – is now becoming a practical necessity as companies look to scale while protecting brand integrity and operational accuracy.
“Everybody wants to scale their content solutions and in order to do that, you really need to know where you can take more risk and where you should not take any risk.” – Olga Blasco
For organisations that rushed into “AI-first” strategies, the need for structure and clarity is more urgent than ever. Many now face challenges in managing quality, risk, and ROI.
“They just need to be educated and they need to be, I think, taken by the hand and shown how they can get that return on content and get all those efficiencies and speed while still minimising the risk.” – Olga Blasco
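To make the idea of tiered workflows concrete, here is a minimal sketch of risk-based routing. The tier names, content types, and workflow steps are illustrative assumptions, not Phrase’s actual configuration:

```python
# Illustrative sketch of routing content through tiered workflows by risk level.
# Tiers, content types, and workflow steps are hypothetical examples.

WORKFLOW_TIERS = {
    "low_risk":    ["machine_translation"],                      # e.g. internal wiki pages
    "medium_risk": ["machine_translation", "ai_quality_check"],  # e.g. support articles
    "high_risk":   ["machine_translation", "ai_quality_check",
                    "human_expert_review"],                      # e.g. legal or medical content
}

CONTENT_RISK = {
    "internal_wiki": "low_risk",
    "support_article": "medium_risk",
    "regulatory_filing": "high_risk",
}

def route(content_type: str) -> list[str]:
    """Return the workflow steps for a piece of content, defaulting to the safest tier."""
    tier = CONTENT_RISK.get(content_type, "high_risk")  # unknown content gets the full workflow
    return WORKFLOW_TIERS[tier]

print(route("support_article"))    # ['machine_translation', 'ai_quality_check']
print(route("regulatory_filing"))  # full human-in-the-loop workflow
```

The detail that matters is the default: content that hasn’t been classified falls into the most conservative tier, so risk is taken deliberately rather than incurred by accident.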
The Innovation Cadence: Phrase’s Culture of Launching
Just a few years ago, Phrase was a collection of strong standalone products. Today, it’s a fully integrated language technology platform, a transformation built not just on features, but on philosophy. To extract real enterprise value from AI, Simone and Georg emphasized the need for a multi-component system: clean proprietary data, structured workflows, and flexible AI architecture.
“I believe you need a universe of AIs. There’s not going to be one AI to rule them all. So you need access to many different models and systems that will have strengths and weaknesses.” – Georg Ell
That mindset – openness to complexity, iteration, and layered tooling – is mirrored in the company’s internal culture. At Phrase, innovation starts at the top. Georg described how every new hire hears this innovation-first ethos from day one. Mistakes are not just tolerated; they’re welcomed as part of a learning mindset. Employees are trained, empowered with tools like ChatGPT, and celebrated in weekly all-hands meetings where they share use cases and best practices.
“It’s about this constant drive, restlessness, and striving for innovation and new ideas.” – Georg Ell
Supporting this culture is Phrase’s Innovation Cadence – a structured, agile rhythm of product development. Every 90 days, the company launches new features in a coordinated cycle involving every department from engineering to customer success. These launches are planned years in advance, tracked with “T-minus” checkpoints, and executed with precision.
“We haven’t missed a day in 11 quarters… and it gets better every time.” – Georg Ell
At Phrase, innovation isn’t accidental; it’s engineered. And their cadence-driven approach proves that structure and creativity aren’t in conflict; they’re what make sustainable innovation possible.
Not Just AI-Ready: AI-First
The real power of AI isn’t in cutting costs – it’s in multiplying value. At Phrase, this belief shapes every strategic decision. Instead of viewing AI as a means to reduce headcount, the company sees it as a force multiplier, enabling teams to achieve dramatically more, faster.
“There are companies that I’ve met that say AI is going to allow them to eliminate some heads, and we all know those stories.” – Georg Ell
But Phrase takes a very different approach.
“There’s a journey that we all need to get on now, which is a journey to 10x productivity, because I’d rather have 10 people be 10 times more productive than have saved two heads and only have eight. That 20% cost efficiency is much less interesting to me than a 10x productivity efficiency, and I think ultimately AI puts all of us into an innovation arms race.” – Georg Ell
This AI-first mindset reflects a deliberate shift from short-term cost-cutting to long-term value creation. At Phrase, it has sparked a cultural transformation: every employee is equipped with AI tools, encouraged to experiment, and empowered to iterate rapidly. The result is a business where AI doesn’t replace people – it elevates them.
Agentic AI: The Dream and the Danger
Agentic AI – autonomous systems that make decisions and take action independently – is one of the most hyped concepts in today’s AI landscape. According to Simone Bohenberger-Rich, it holds enormous potential for hyper-automation, offering lightning-fast, high-quality, and fully personalised content delivery.
But that potential comes with serious challenges.
“If each of these workflow steps [in agentic AI] is right about 80% of the time, you end up with about 50% accuracy, and I call that a coin toss.” – Simone Bohenberger-Rich
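To make the arithmetic concrete (assuming, for illustration, three sequential steps whose errors are independent): 0.8 × 0.8 × 0.8 ≈ 0.51, so the end-to-end workflow is right only about half the time – and every additional step pushes that figure lower.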
Simone warned that the more complex the system, the greater the risk it introduces. Autonomous agents often pull data from various sources: spreadsheets, databases, and even web searches. With each added integration, the likelihood of error, cost overruns, or security breaches rises.
“Agentic AI compounds all the challenges we already have with LLMs.” – Simone Bohenberger-Rich
The autonomy of these agents can also be their greatest liability. Some systems are designed to detect and use available tools without human validation. If a malicious or faulty tool is introduced – say, through a cyberattack – the agent might use it without question.
Simone’s recommendation? Start small. Constrain your agents, limit tool access, and focus on well-defined, low-risk use cases where outcomes can be measured and controlled.
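What constraining an agent could look like in practice – sketched against a hypothetical orchestration layer, not any specific framework – is an explicit allowlist of tools, so the agent can only invoke what has been reviewed and approved:

```python
# Hypothetical sketch: an agent that may only call pre-approved tools.
# Tool names and functions are illustrative, not taken from any specific framework.
from typing import Callable, Dict

# Explicit allowlist: every tool the agent can use is registered by hand.
APPROVED_TOOLS: Dict[str, Callable[[str], str]] = {
    "glossary_lookup": lambda term: f"Approved glossary entry for '{term}'",
    "translation_memory": lambda segment: f"TM match for '{segment}'",
}

def run_tool(tool_name: str, argument: str) -> str:
    """Execute a tool only if it is on the allowlist; refuse anything else."""
    if tool_name not in APPROVED_TOOLS:
        # An unknown tool – for example one injected by a faulty or malicious source –
        # is rejected instead of being used without question.
        raise PermissionError(f"Tool '{tool_name}' is not approved for this agent")
    return APPROVED_TOOLS[tool_name](argument)

print(run_tool("glossary_lookup", "localization"))  # allowed
# run_tool("web_search", "internal pricing")        # would raise PermissionError
```

Anything outside the allowlist – including a tool introduced through the kind of attack described above – is refused rather than executed.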
Agentic AI may shape the future of automation, but it needs to be deployed with caution, clear guardrails, and an acute awareness of the risks involved.
From TMS to LTP: The Rise of Language Technology Platforms
The language services industry is undergoing a major transformation – not just in tools, but in terminology. What was once known as a Translation Management System (TMS) is now evolving into a Language Technology Platform (LTP). This shift reflects the increasing complexity of today’s localization ecosystems, which now integrate AI, APIs, automation, and human-in-the-loop curation.
“Platforms are broad… things that you build an ecosystem on top of, and platforms are the future. Point solutions are not the future.” – Georg Ell
But this change isn’t purely technological; it’s cultural. As Olga Blasco noted, while platform thinking isn’t new, its application in the AI era demands a more consultative, service-led approach.
“You’re using technology to provide scale and speed and tiered quality outcomes if you like, but it’s true professional services.” – Olga Blasco
She emphasized that the real differentiator today is human value. The industry has moved beyond translation alone. Today’s experts are no longer just language providers; they’re solution architects, orchestrating workflows, interpreting data, and applying judgment where automation falls short.
Olga described the future as one where intelligent systems trigger human intervention precisely when and where it’s needed, especially for regulated, high-risk content.
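One way to picture that trigger – a sketch, not a description of any particular product – is a simple rule that escalates to a human expert whenever a quality-estimation score drops below a threshold, or whenever the content is regulated. The score, threshold, and flag below are hypothetical:

```python
# Illustrative sketch: trigger human intervention only when it is actually needed.
def needs_human_review(qe_score: float, regulated: bool, threshold: float = 0.85) -> bool:
    """Escalate low-confidence output or regulated content to a human expert."""
    if regulated:
        return True              # regulated, high-risk content always gets expert review
    return qe_score < threshold  # otherwise escalate only when confidence is low

print(needs_human_review(qe_score=0.92, regulated=False))  # False: publish automatically
print(needs_human_review(qe_score=0.70, regulated=False))  # True: route to a linguist
print(needs_human_review(qe_score=0.95, regulated=True))   # True: regulated content
```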
From triangles to hexagons – whatever the future shape of localization workflows – language technology platforms must evolve to fuse automation with expert human insight, at scale, and with confidence.
Final Advice: Taming the Beast Takes Strategy and Action
To wrap the session, each speaker offered a key takeaway for leaders navigating the complexities of AI adoption.
- Georg Ell urged decisiveness: “If you didn’t [start], start today.”
- Olga Blasco emphasized agility and realism: “Test and learn and test and learn and fail fast and learn… not all that glitters is gold.”
- And Simone Bohenberger-Rich encouraged a focus on real business impact: “Start with a really clearly defined business problem (…) and try to solve that with AI and set really clear success metrics for it.”
Whether you’re deploying LLM-driven workflows or reinventing your content strategy, the message was clear: successful AI adoption demands clarity, cadence, and culture. It’s not about hype – it’s about building sustainable, human-centered systems that solve real problems and scale with confidence.