
No organisation operating in 2025 and heading into 2026 would say it is "not using AI." The perceived value of having AI in the organisation is now so high that no company can afford to be left behind. Dig a little deeper, though, and the most common findings are AI pilots, proofs of concept, chatbots, some internal tools, models, and slide decks full of ambition. All of that activity makes sense, but look closely and very few organisations can say AI has actually delivered sustained business value. Ask the leaders, and the gap between promise and impact is something they struggle to explain.
To put this into perspective, we came across interesting recent research presented at NorwAI by Alae Ajraoui (NTNU) and colleagues. The findings stood out not just because they suggest AI is overhyped, but because they clearly explain why organisations struggle to make AI work in practice.
The uncomfortable truth from this research is spot on: AI doesn't fail because the models are weak. It fails because organisations treat AI like a tool, not a transformation!
The starting point dilemma: most AI initiatives don't move the needle
The research is blunt in its findings: across industries, nearly 70% of companies report minimal impact from their AI investments, adding very little value. This isn't because companies aren't trying. It's mostly because they follow the same familiar pattern:
· A small AI team runs pilots
· Use cases are defined in isolation
· Business teams are “consulted” late
· Solutions don’t scale
· Adoption remains low
· The organisation quietly moves on to the next initiative
From the outside, this spells progress. From the inside, it feels frustrating and fragmented. Importantly, the research found this problem even in digitally mature environments, including the Norwegian companies studied by NTNU and NorwAI.
The real problem isn't AI, it's misalignment
One of the most important ideas in the research by Ajraoui and team is deceptively simple: every new technology starts misaligned with the organisation. This isn't new. History tells us that when technologies are introduced, they don't readily fit existing structures, incentives, workflows, and skills.
However, AI makes this even harder for one main reason: AI is not static. Unlike traditional systems such as ERP or automation tools, AI systems evolve continuously. They learn, change behaviour, and influence decisions over time. That means the organisation must adapt continuously too, not just once at implementation but repeatedly. Most companies underestimate this dynamic and assume AI can be deployed as a one-time project, that the changes can be contained within IT or data teams, and that business adoption will simply follow.
Why pre-LLM AI efforts stalled
The research highlights a striking pattern. Before large language models (LLMs) became widespread, AI initiatives were typically siloed. The findings describe the scenario:
· Technical teams defined use cases
· Solutions were built in isolation
· Business relevance was assumed, not validated
· Outputs were difficult to reuse across the organisation
The result: AI tools technically worked but were rarely used. This aligns perfectly with the paradox many organisational leaders have experienced: "Our AI models are good, so why isn't anyone using them?" The answer is that usefulness is organisational, not technological.
LLMs didn’t magically fix AI, they exposed the real problem
LLMs didn't suddenly turn everything around, fix every issue, and make organisations good at AI; what they did was expose the problem. Suddenly, AI was visible to everyone, not just data teams. Employees could experiment on their own. Leadership could no longer treat AI as a niche concern shared with select departments. The gap between experimentation and enterprise readiness became obvious.
According to the research findings of Ajraoui and team, this triggered a shift: in the post-LLM world, organisations realised that AI adoption requires coordinated change across multiple levels, not isolated departments or projects.
One of the most practical contributions of the NorwAI / NTNU work is how clearly it maps AI implementation across three organisational levels.
1. Strategic level: where intent is set
This is where leadership defines what AI is (and isn't) used for, allocates resources accordingly, and sets governance, ethics, and risk boundaries. The intent is to signal long-term commitment and lay the foundation for the transformation. Many companies stop here: they publish AI principles and guidelines and believe the job is done, that everything can now run on autopilot. It can't.
2. Tactical level: where AI either scales or stalls
This is the most overlooked layer. Here, organisations build data pipelines and infrastructure, decide which tools will be shared versus local, and invest in employee skills and enablement, all while coordinating between business units and technical teams so everyone stays aligned. Without strong tactical coordination, AI remains fragmented despite strong leadership intent.
3. Operational level: where value is actually created
This is where AI lives day-to-day. At its best, the operational level has domain experts collaborating with AI teams, use cases refined continuously based on real work, and solutions reused rather than reinvented, so that AI eventually becomes part of workflows, not just an add-on. Most AI failures in organisations happen because this level is treated as an afterthought.
Taken together, the most common reasons the research cites for AI failure come down to organisations tending to:
· Over-invest in tools
· Under-invest in coordination
· Treat AI as a project, not a capability
· Expect adoption without redesigning workflows
· Assume value will emerge automatically
As the research makes clear, resources alone don't create value. What matters is how organisations structure, combine, and enact those resources as capabilities. This is not a technology problem. It's an organisational one.
Where this matters for leaders right now
The lesson isn’t “slow down AI,” in fact it’s the opposite. AI is moving too fast to rely on ad-hoc pilots and isolated teams. Without the right structure, companies risk wasting investments, shadow AI usage, fragmented data, inconsistent decisions, increasing operational risk. What organizations really need isn’t to build more hype around AI usage, but it’s practical scaffolding around AI.
This is exactly where platforms like Deep Current focus their attention. At Deep Current, we see the same pattern the research describes. That's why products like Ada (Deep Current's AI tool that handles the inbox and manages client queries in real time) and Documus Prime (Deep Current's AI tool that handles the paperwork, double-checking logistics documents so teams can work smoothly without the usual manual hassle) aren't positioned as "AI magic" but as operational enablers.
This article is inspired by research presented at NorwAI by Alae Ajraoui (NTNU), Nhien Nguyen, and Alf Steinar Sætre. All interpretations are our own, with full credit to the original authors and NorwAI for advancing this work.

