In 2026, the conversation about AI in large organizations has shifted from “this has great potential, we need to extract value from it” to a much more uncomfortable question: are we generating revenue, cost savings, or real competitive advantage… or are we stuck in pilots?
Executive committees have moved past the hype phase. GenAI has stopped being a flashy gimmick and has become just another line in the budget. And that line competes with many other priorities: cybersecurity, core modernization, mergers, talent. If AI does not demonstrate impact, it loses the battle.
AI, GenAI, and Intelligent Agents: “Show Me the EBITDA”
During the first years of the GenAI explosion, the logic was: experiment quickly, try many things, show prototypes. It was funded through innovation budgets, digital budgets, or even vague “technology exploration” lines.
But as we end 2025 and enter 2026, the logic changes. AI has started to be funded like any other business initiative: recurring OPEX, serious projects, clear return expectations. The narrative of “we have to learn” no longer works; the question is simpler: what have we gained or saved?
From the perspective of an executive committee, the evaluation is quite cold:
- Have we increased revenue attributable to AI?
- Have we reduced structural costs, not just one-off project costs?
- Have we reduced regulatory, operational, or reputational risks?
- What organizational capability are we building beyond the specific use case?
Here, AI agents emerge as a second wave that goes beyond the classic “model that predicts X.” Agents do not only generate text or code: they make decisions, execute tasks, orchestrate end-to-end workflows. The leap is not just one of accuracy, but of capacity for action.
But if that capability is not connected to business processes, indicators, and a clear narrative for the committee, it ends up being just another experiment. The challenge for companies in 2026 is not so much mastering the technology, but translating it into economic return and explaining it clearly to the people who make budget decisions.
Vertical Agents and DSLMs: Specialization as an Advantage
In the first wave of corporate GenAI, generalist or horizontal copilots dominated: RAG-based assistants that helped “a bit with everything,” writing emails, summarizing documents, and generating generic code. They were (and still are) useful and created a sense of progress, but they had a medium-term limitation: their understanding of the business was superficial.
The next stage is marked by a simple yet powerful idea: specialization wins. Instead of one agent trying to know everything, many agents that know a great deal about a very specific area:
- An agent that understands the product catalog, commercial rules, and margins, and can propose viable personalized offers.
- An agent that fully understands the claims process, applicable regulations, and decision history, and can automate most of the cycle.
- An agent that masters demand planning, logistical constraints, and inventory policy, making decisions that directly impact capital.
This specialization relies on domain-specific language models, the so-called DSLMs. They do not try to be the model that answers anything in the world, but the model that answers extremely well what matters in your business: your jargon, your regulations, your decision logic, your historical data. All of this is supported, of course, by high-quality, well-governed data.
The practical result is clear: fewer hallucinations, less ambiguity, more alignment with real processes, and much more reliable automation. It is not about abandoning generalist models, which still play an important role as an interface or a “general reasoning layer,” but about combining them with vertical agents that truly operate where value is generated.
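To make the combination more tangible, here is a minimal sketch in Python of a routing layer that sends each request either to a vertical agent (backed by a DSLM and its domain data) or to a generalist model used as the general reasoning layer. The domain names, agent functions, and request structure are illustrative assumptions, not a reference to any specific product or framework.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical sketch: a thin routing layer that sends each request either
# to a vertical (domain-specific) agent or to a generalist model.

@dataclass
class Request:
    domain: str   # e.g. "claims", "pricing", "demand_planning"
    text: str     # the user's question or task description

def generalist_answer(req: Request) -> str:
    # Placeholder for a call to a general-purpose LLM used as the
    # "general reasoning layer" described above.
    return f"[generalist] draft answer for: {req.text}"

def claims_agent(req: Request) -> str:
    # Placeholder for a vertical agent backed by a DSLM plus the
    # claims process data, rules, and decision history.
    return f"[claims agent] decision proposal for: {req.text}"

def pricing_agent(req: Request) -> str:
    # Placeholder for a vertical agent that knows catalog, rules, and margins.
    return f"[pricing agent] personalized offer for: {req.text}"

VERTICAL_AGENTS: Dict[str, Callable[[Request], str]] = {
    "claims": claims_agent,
    "pricing": pricing_agent,
}

def route(req: Request) -> str:
    # Prefer the specialist when one exists for the domain;
    # otherwise fall back to the generalist layer.
    handler = VERTICAL_AGENTS.get(req.domain, generalist_answer)
    return handler(req)

if __name__ == "__main__":
    print(route(Request(domain="claims", text="Assess claim #1234")))
    print(route(Request(domain="hr", text="Summarize this policy document")))
```

The point of the sketch is the design choice, not the code: the specialist handles the request whenever one exists for the domain, and the generalist remains the fallback and the interface.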
In 2026, the question is not “which big model do we use?” but “which combination of agents and models—generalist and domain-specific—do we need to succeed in our key processes?”
Strategy vs. Isolated Use Cases: The Importance of the Roadmap
Another very common trap in large organizations is the “infinite list” of use cases. Even in the best scenario, each business area identifies its own set of relevant use cases. All seem important, all have potential, and the result is often an overwhelming agenda that generates frustration.
The reality is tougher: not all use cases have the same business impact; the effort required to build them varies; and they are not always viable—sometimes due to lack of quality data. Moreover, not all of them help build reusable capabilities.
That is why it is essential to change the approach: less catalog and more roadmap. It is not just about selecting use cases, but about sequencing them smartly over time (a simple scoring sketch follows this list):
- Start with those cases where the economic impact is clear and the technical and organizational effort is reasonable.
- Prioritize those that, in addition to their impact, help build platform elements that can later be reused (connectors, domain models, base agents).
- Leave for later those cases that may be powerful but require deep changes in processes, regulation, or talent.
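As an illustration only, that impact-versus-effort sequencing can be reduced to a very simple scoring exercise. The use-case names, figures, and the platform-reuse bonus below are invented for the example; in practice the inputs come from real business cases and are debated with the people who own the numbers.

```python
# Illustrative only: sequencing use cases by estimated impact vs. effort.
# All names and figures are invented for the example.

use_cases = [
    {"name": "Personalized offers",   "impact_meur": 12.0, "effort_fte_months": 18, "builds_platform": True},
    {"name": "Claims automation",     "impact_meur": 25.0, "effort_fte_months": 40, "builds_platform": True},
    {"name": "Internal doc search",   "impact_meur": 1.5,  "effort_fte_months": 6,  "builds_platform": False},
    {"name": "Demand planning agent", "impact_meur": 30.0, "effort_fte_months": 90, "builds_platform": True},
]

def score(uc: dict) -> float:
    # Crude ratio of impact to effort, with a small bonus for use cases
    # that also create reusable platform components.
    ratio = uc["impact_meur"] / uc["effort_fte_months"]
    return ratio * (1.2 if uc["builds_platform"] else 1.0)

roadmap = sorted(use_cases, key=score, reverse=True)
for position, uc in enumerate(roadmap, start=1):
    print(f"{position}. {uc['name']} (score {score(uc):.2f})")
```

Real prioritization is obviously richer (viability, data quality, regulatory constraints), but even a crude score like this forces the conversation away from the catalog and toward the sequence.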
At Keepler, this is exactly the approach we use: impact vs. effort, but with real data and a realistic perspective, not as a theoretical exercise. Our methodology, applied to large organizations, does not stay on a canvas; it becomes an actionable plan combining quick wins, “anchor products,” and progressive capacity building.
The result is not just a pretty list. It is, for example, what has allowed our last three clients to link their AI product portfolio designed under this framework to a potential EBITDA impact of more than €100M. Not through a single “star project,” but through a well-sequenced portfolio.
The lesson for any leader is clear: without a roadmap, AI gets diluted; with a roadmap, it becomes a strategic lever managed with the same seriousness as any other significant investment.
The AI Center of Excellence: The Control Tower
When an organization moves from experimentation to industrialization, an almost inevitable need appears: someone has to direct the traffic. This someone is neither an individual hero nor an isolated department, but a well-designed AI Center of Excellence (AI CoE).
An effective CoE is not the “AI police” or the place where everything is centralized and slowed down. It is, rather, a control tower that ensures three things:
- That the organization shares a common language regarding data, models, and agents.
- That solutions that work are scaled up, rather than reinvented in each country or unit.
- That risk is controlled: there are no experiments in production that compromise regulation, security, or reputation.
In practice, the CoE defines the reference architecture, selects strategic tools, sets security and observability standards, drives component reuse, and acts as a business partner for teams wanting to advance with AI.
But two often-forgotten conditions are critical:
- Clear mandate from senior leadership: without explicit support, the CoE becomes a group of “friendly evangelizers” with little real decision power.
- Business KPIs, not only technical ones: it is not enough to measure deployed models or average latency; the CoE must also be accountable for value generated, efficiency, and risk mitigation.
When these conditions are met, the CoE becomes the place where AI stops being a mosaic of scattered initiatives and becomes a structural capability of the company.
People: The True Center of the Transformation
There is one part of this whole picture that is often treated as an appendix when it should be at the core: people.
Within Keepler’s ethical framework for AI, the Human-Centered approach is essential. If AI does not improve people’s professional lives, it is not a good solution. Technology should complement human capabilities, not replace them.

AI does not only automate tasks; it questions professional identities. It changes what it means to be “a good analyst,” “a good salesperson,” “a good operations manager.” It introduces new ways of working, new tools, new languages—and this inevitably generates a mix of excitement and resistance.
In addition, the “human-in-the-loop” approach to building AI products has proven valuable for gathering more context about the business logic the AI is meant to cover. It also allows systems to be validated and therefore improved, whether through retraining in classic AI projects or through agent refinement and re-prompting in generative AI projects.
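Here is a minimal sketch of what human-in-the-loop can look like in practice, assuming an agent that proposes a decision, a reviewer who validates it, and a feedback store that keeps every correction for later retraining or re-prompting. All class, field, and function names are hypothetical, chosen only to illustrate the pattern.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical human-in-the-loop gate: the AI proposes, a person validates,
# and every correction is stored as feedback for retraining or re-prompting.

@dataclass
class Proposal:
    case_id: str
    ai_decision: str
    human_decision: Optional[str] = None

@dataclass
class FeedbackStore:
    corrections: List[Proposal] = field(default_factory=list)

    def record(self, proposal: Proposal) -> None:
        # Keep only the cases where the reviewer disagreed with the AI:
        # these are the raw material for the next improvement cycle.
        if proposal.human_decision and proposal.human_decision != proposal.ai_decision:
            self.corrections.append(proposal)

def review(proposal: Proposal, reviewer_decision: str, store: FeedbackStore) -> str:
    # The human decision is always the one that gets executed.
    proposal.human_decision = reviewer_decision
    store.record(proposal)
    return proposal.human_decision

store = FeedbackStore()
proposal = Proposal(case_id="CLM-001", ai_decision="approve")
final_decision = review(proposal, reviewer_decision="reject", store=store)
print(final_decision, len(store.corrections))  # -> reject 1
```

The essential design choice is that the human decision is the one executed, and disagreements are not discarded: they become the input for the next retraining or re-prompting cycle.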
If AI is perceived as a threat, every possible “antibody” appears: distrust in results, silent resistance, preference for manual processes “the way they’ve always been done.” If, however, it is perceived as an ally that frees people from repetitive tasks and increases decision-making capacity, adoption accelerates.
This is why it is essential to design the transformation with people inside the picture, not relegated to an appendix.
This means:
- Involving real users from the design phase of each AI agent or product.
- Explaining with total transparency what is changing, which tasks are being automated, and what new opportunities are emerging.
- Creating new roles (AI product owners, agent designers, data specialists) and giving them legitimacy.
- Supporting this with an appropriate training plan, not just a one-off “look how cool this is” session.
In the end, we always ask the same question: do our teams trust these systems enough to rely on them when something important is at stake? If the answer is no, no architecture, model, or agent, no matter how sophisticated, will be enough.
Looking Ahead to 2026: From Projects to Structural Capability
At the same time, the regulatory context—with the European AI framework advancing, growing sensitivity around privacy, and social pressure for responsible use—means that it is no longer enough to “do things with AI.” They must be done well, in a governed and explainable way.
Taking everything together, the AI agenda for a large organization heading into 2026 can be summarized as follows:
- Demonstrate real economic impact, defensible before the executive committee
- Move from generalist models to an intelligent combination of vertical agents and domain models
- Build the foundations for scalability: data, platforms, governance, security
- Evolve from agent islands to an Agentic Mesh orchestrating a connected intelligence network
- Manage AI as a strategic portfolio with a prioritized roadmap and regular reviews
- Establish a Center of Excellence acting as control tower and accelerator
- Place people at the center—not the margins—of the transformation
The insights from the State of AI 2026 point clearly in this direction: the issue is no longer deciding whether to bet on AI but doing so in a way that truly pays off.
The companies that succeed will be those that stop seeing AI as a set of scattered projects and turn it into what it can truly be: a structural, agentic, and connected capability that amplifies the organization’s collective intelligence. The others will keep doing pilots.
And make no mistake: executive committees already know quite clearly which of the two groups they want to be in.