Technology / Organisational AI

Most AI rollouts fail because they start from tools. We start from how the company actually works.

Custom AI stacks built around your workflows, your knowledge, your decisions — not around the things vendors found easy to demo.

Productivity gains are real. They are also bounded.

A company adopts Copilot, deploys a couple of internal chatbots, runs a round of training, and then plateaus. The reason is structural: generic tools sit on top of unstructured knowledge, undefined workflows and unmeasured decisions. The bump is real, but it tops out.

Durable advantage never shows up because nothing about the organisation has been re-architected. The shape of the work is the same. AI just makes the same work slightly faster.

Davenport and Ronanki (HBR, 2018) made the case that AI adoption belongs next to business capabilities, not next to technology hype. We extend that: AI adoption belongs next to the epistemic infrastructure of the organisation — what it knows, how it decides, where its knowledge breaks.

Before the stack, the map.

We map the organisation's epistemic state using the OBS framework from Project OIDA, the KVA-accelerated venture working on epistemic knowledge for the AI era. The map is the foundation. Tools come later.

01

Where knowledge is created, where it is stored, where it gets quietly lost.

02

Where decisions are actually made — and under what beliefs about the world.

03

Where contradictions, decay and version drift are slowly eroding decision quality.

04

Where AI can produce real epistemic leverage — not just another productivity bump.

Four steps. Sequenced, not parallel.

01

Audit

We map knowledge flows, decision points, time loss and quality breakdowns. Driven by the OBS framework, not by a deck of generic best practices.

02

Opportunity map

Prioritised use cases scored on epistemic impact, feasibility and strategic fit. Most maps come back with fewer items than the client expected — and stronger ones.
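The scoring above can be sketched as a simple weighted ranking. This is a minimal illustration only: the use-case names, the 1-5 scores and the weights are all hypothetical, not the actual scoring model used in an engagement.

```python
# Illustrative sketch of prioritising use cases on the three dimensions
# named above. All entries, scores (1-5) and weights are hypothetical.

USE_CASES = [
    # (name, epistemic_impact, feasibility, strategic_fit)
    ("Contract-review retrieval assistant", 5, 4, 5),
    ("Marketing copy generator",            2, 5, 2),
    ("Decision-log knowledge base",         5, 3, 4),
]

# Epistemic impact weighted highest, in line with the audit's focus.
WEIGHTS = {"impact": 0.5, "feasibility": 0.2, "fit": 0.3}

def score(impact: int, feasibility: int, fit: int) -> float:
    """Weighted sum across the three prioritisation dimensions."""
    return (WEIGHTS["impact"] * impact
            + WEIGHTS["feasibility"] * feasibility
            + WEIGHTS["fit"] * fit)

ranked = sorted(USE_CASES, key=lambda u: score(*u[1:]), reverse=True)
for name, *dims in ranked:
    print(f"{score(*dims):.2f}  {name}")
```

A shortlist produced this way tends to be short by construction: low-feasibility or low-impact ideas fall away instead of padding the roadmap.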

03

Stack design

Models, retrieval, knowledge architecture, agents, governance, integration points. Built around your workflows, not around our preferences or anyone’s vendor relationships.

04

Build, deploy, embed

Implementation, plus the organisational training the system needs to survive contact with real workflows. Deployment is not the finish line.

“Use GPT” is not analysis. It is a vendor pitch.

Inside our stack design phase we run task-specific benchmarks: model performance on the client's actual workflows, with the client's actual data, against quality and cost metrics that matter to the business. Not against an academic leaderboard.

Our benchmarking covers retrieval quality, agentic reliability, reasoning depth and cost-per-decision — calibrated to the use case, ranked on what actually moves the work.
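The cost-per-decision idea can be made concrete with a toy comparison. The model names, accuracy figures, token counts and prices below are invented for illustration; the point is the metric, not the numbers: a cheaper model per token can still lose once you divide spend by decisions that actually hold up.

```python
# Hypothetical comparison of two candidate models on one client workflow.
# All figures are invented for illustration.

candidates = {
    # model: (accuracy on the task set, avg tokens per decision, $ per 1K tokens)
    "model-a": (0.91, 6_000, 0.010),
    "model-b": (0.87, 2_500, 0.002),
}

def cost_per_decision(tokens: int, price_per_1k: float) -> float:
    """Raw spend to produce one decision, correct or not."""
    return tokens / 1_000 * price_per_1k

for name, (acc, tokens, price) in candidates.items():
    cpd = cost_per_decision(tokens, price)
    # Cost per *correct* decision: spend divided by the fraction that hold up.
    print(f"{name}: accuracy={acc:.0%}, "
          f"cost/decision=${cpd:.4f}, "
          f"cost/correct decision=${cpd / acc:.4f}")
```

Dividing by accuracy is the step a leaderboard never forces: it prices in the rework caused by wrong answers, which is where the business actually pays.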

Products, frameworks, engagements.

People & learning

TsunAI

AI for people management and organisational learning. Workforce development, learning paths, capability mapping, team composition. In commercial transition.

Compliance training

JyraIA

Course authoring with a compliance focus. Turns regulatory and policy text into structured, trackable, organisation-specific training paths — on the same Knowledge Object substrate as the rest of the stack.

Methodology

Sunai People Framework

AI applied to people, learning and organisational development. Integrates with TsunAI and with the broader KVA cognitive and agency assessment system (16 archetypes, 4 quadrants, separate management and specialist tracks).

Training

Corporate AI Academy

Training layer for organisations that need real AI literacy and adoption capability — sequenced after the assessment, never before. Skill follows clarity.

Discovery

AI Transformation Workshops

Leadership and team workshops that move people from generic AI curiosity to a real list of use cases. We use them as discovery instruments, not as a standalone product.

Engagement

Custom AI stacks

Marketing (LAMMS framework), sales, research, operations, HR, compliance, investment analysis, executive decision support. Designed and integrated end-to-end.

What an engagement actually produces.

01

Epistemic & process audit

02

AI opportunity map with prioritisation

03

Custom AI stack architecture

04

Model benchmarking on your workflows

05

Implementation, deployment, embedding

06

Capability building and training

Build a stack around your organisation. Not around the demo.

Engagements start with a focused audit. From there we design the stack, run the benchmarks, and embed it in the work.