
Internal Press · November 2025 · Portfolio: ShikAI · White Paper

By Giacomo Conti (Università di Torino) and Alberto Trivero (Kakashi Venture Accelerator)


AI Compliance, Simplified: How ShikAI Maps the Regulatory Maze for European Enterprises

The EU AI Act is now in force. Combined with GDPR, the Digital Services Act, and the Digital Markets Act, it creates a regulatory environment so dense that many companies are choosing not to innovate rather than risk non-compliance. ShikAI's first white paper cuts through the complexity — and offers a practical path forward.

↓ Download the full white paper (PDF)

The problem: regulation as innovation barrier

The European Union has produced the world's most comprehensive regulatory framework for artificial intelligence. That is both its strength and its challenge. The AI Act alone runs to hundreds of pages. It intersects with GDPR on data protection, with the DSA on platform transparency, and with the DMA on competition. For a mid-sized enterprise trying to deploy an AI-powered customer service tool or automate parts of its compliance workflow, the question is no longer “can we use AI?” but “can we afford the legal uncertainty of using AI?”

This is the problem ShikAI was built to address. The venture, part of the KVA portfolio, focuses on making AI compliance operationally tractable — not by simplifying the law, but by translating it into structures that organisations can actually follow.

The sheer volume of regulation, with its length, complexity, and scattered cross-references across different legislative acts, calls the principle of legal certainty itself into question and creates obstacles to innovation that push companies to abandon market opportunities altogether.

What the white paper covers

ShikAI's white paper takes an omnibus approach. Rather than treating the AI Act in isolation, it maps the combined regulatory surface that any AI user in Europe must navigate: AI Act, GDPR, Digital Services Act, and Digital Markets Act together. The paper is structured in three parts.

Part one: the regulatory landscape

The paper begins with a clear-eyed survey of the current state of EU regulation. The AI Act introduces a dual governance model — a European AI Office within the Commission for coordination, and national market surveillance authorities in each member state for enforcement. The legislative style is deliberately outcome-oriented: rather than prescribing specific technical measures, the Act sets objectives and leaves implementation to organisations. This flexibility is intentional — it avoids stifling technological progress — but it creates a practical dilemma: how do you prove compliance with a regulation that tells you what to achieve but not how to achieve it?

The consequences of getting it wrong are not abstract. Violations of the AI Act can result in fines of up to 7% of global annual turnover for the most serious infractions.

Part two: the three-tier risk system

The paper distils the AI Act's risk classification into a framework that non-specialists can work with:

Unacceptable risk

Prohibited outright. Social scoring, manipulative AI, real-time biometric surveillance (with narrow law enforcement exceptions).

High risk

Regulated heavily. Healthcare, finance, hiring, justice, education. Requires human oversight, risk assessment, documentation, audit trails.

Low risk

Lighter obligations. Chatbots, assistants, recommendation engines. Transparency disclosure required; no manipulation.
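The three tiers above can be read as a lookup from use case to obligations. The sketch below is purely illustrative — the tier names and examples follow the paper's summary, but the mapping and the `classify` helper are hypothetical simplifications, not a legal determination, which always requires case-by-case analysis:

```python
# Illustrative only: the AI Act's three-tier risk framework as summarised in
# the paper. Categories and examples mirror the text; the lookup logic is a
# hypothetical simplification, not a compliance tool.

RISK_TIERS = {
    "unacceptable": {
        "obligation": "prohibited outright",
        "examples": {"social scoring", "manipulative AI",
                     "real-time biometric surveillance"},
    },
    "high": {
        "obligation": "human oversight, risk assessment, documentation, audit trails",
        "examples": {"healthcare", "finance", "hiring", "justice", "education"},
    },
    "low": {
        "obligation": "transparency disclosure; no manipulation",
        "examples": {"chatbot", "assistant", "recommendation engine"},
    },
}

def classify(use_case: str) -> str:
    """Return the first tier whose example set contains the use case."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return tier
    return "low"  # default tier: transparency obligations still apply

print(classify("hiring"))         # high
print(classify("social scoring")) # unacceptable
```

Note that the default branch reflects the paper's point that even the lightest tier carries obligations; no use case falls outside the framework entirely.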

But the paper's real contribution is in showing that risk classification under the AI Act is only the beginning. Even low-risk AI users must comply with GDPR requirements on data minimisation, explicit consent, and the right to explanation of automated decisions. And if the AI service operates through a large digital platform, DSA and DMA obligations on algorithmic transparency and data sharing may also apply.

Part three: compliance checklists

The final section translates the regulatory analysis into practical tables and questions. These are not abstract governance principles — they are operational checklists designed to be used by compliance teams, legal departments, and IT leads. The paper addresses five specific risk categories:

The five risk domains

1. Robustness risks — system vulnerability to unexpected inputs or data drift
2. Theft and tampering — unauthorized access to models or training data
3. Adversarial attacks — intentional manipulation of AI behaviour
4. Unreliable or inappropriate outputs — erroneous, offensive, or illegal results
5. Illegal data collection and use — GDPR violations in training or operation

For each domain, the paper provides concrete questions that an organisation should be able to answer affirmatively: Does the company have a fallback plan if the AI system malfunctions? Are there systems to detect unauthorized access to AI models? Is there a protocol for validating and correcting erroneous outputs? Has the company verified that training data complies with GDPR?
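A checklist of this kind lends itself to a simple data model that surfaces any question not yet answered affirmatively. In the sketch below the questions are taken from the text, while the domain keys, `open_items` helper, and reporting logic are hypothetical illustrations, not ShikAI's product:

```python
# Hypothetical sketch of an operational compliance checklist. The questions
# come from the white paper's examples; the data model is illustrative only.

CHECKLIST = {
    "robustness": "Does the company have a fallback plan if the AI system malfunctions?",
    "theft_and_tampering": "Are there systems to detect unauthorized access to AI models?",
    "unreliable_outputs": "Is there a protocol for validating and correcting erroneous outputs?",
    "illegal_data_use": "Has the company verified that training data complies with GDPR?",
}

def open_items(answers: dict) -> list:
    """Return the questions that have not been answered affirmatively."""
    return [q for domain, q in CHECKLIST.items() if not answers.get(domain, False)]

# A partially completed assessment: two domains still need attention.
answers = {"robustness": True, "theft_and_tampering": True}
for question in open_items(answers):
    print("OPEN:", question)
```

Treating each unanswered or negative item as "open" mirrors the paper's framing: compliance is demonstrated question by question, not asserted wholesale.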

The compliance-by-design principle

A recurring theme in the paper is the concept of compliance by design: integrating regulatory conformity into AI systems from the earliest stages of development and deployment, rather than retrofitting it after the fact. The paper articulates three pillars of this approach:

1. Transparency and explainability: the AI system must be interpretable enough for both developers and end users to understand how decisions are reached.
2. Continuous monitoring and auditing: AI systems are not static; their performance must be tracked over time to detect drift.
3. Human oversight mechanisms: any decision with significant legal effects on individuals must ultimately be confirmed by a human.

The main question is this: how can you be sure that your use of an AI system is compatible with the law, while simultaneously protecting yourself from sanctions following inspections by national authorities?

Data sovereignty and server location

The paper also addresses a frequently overlooked dimension of AI compliance: where the data physically resides. The EU restricts the transfer of personal data outside the European Economic Area unless the destination country has received an adequacy decision from the European Commission. AI solutions hosted on cloud infrastructure in countries without such a decision therefore cannot lawfully process EU personal data unless additional safeguards are in place. Companies must verify that their AI providers comply with European data protection standards and, where necessary, adopt appropriate technical and contractual safeguards such as standard contractual clauses or data localisation.
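The decision logic described here reduces to three gates: EEA hosting, an adequacy decision, or contractual safeguards. The sketch below illustrates that logic only; the country sets are partial examples and may be out of date — the European Commission's current list of adequacy decisions is the authoritative source:

```python
# Illustrative data-residency gate under GDPR's transfer rules as described
# above. Country lists are PARTIAL EXAMPLES, not the Commission's official
# list; always check the current adequacy decisions before relying on this.

ADEQUACY_EXAMPLES = {"JP", "CH", "KR", "GB", "NZ", "IL", "UY", "AR"}
EEA_EXAMPLES = {"DE", "FR", "IT", "ES", "NL", "IE", "NO", "IS", "LI"}

def transfer_allowed(host_country: str, has_sccs: bool = False) -> bool:
    """Personal data may be processed if the hosting country is in the EEA,
    is covered by an adequacy decision, or the transfer is protected by
    safeguards such as standard contractual clauses (SCCs)."""
    return (host_country in EEA_EXAMPLES
            or host_country in ADEQUACY_EXAMPLES
            or has_sccs)

print(transfer_allowed("DE"))                  # True: hosted within the EEA
print(transfer_allowed("BR"))                  # False in this simplified model
print(transfer_allowed("BR", has_sccs=True))   # True: contractual safeguards
```

In practice the "safeguards" branch is where most cross-border deployments land, which is why the paper stresses contractual review of AI providers alongside technical controls.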

Why this matters for the KVA portfolio

ShikAI operates in the exact space identified in OIDA's end-to-end example as a contested market segment: B2B AI compliance for regulated industries. The white paper is not merely a thought-leadership exercise — it is the intellectual foundation for ShikAI's product thesis. By mapping the regulatory landscape with this level of precision, ShikAI positions itself as the operational layer between the law and the enterprise — translating regulatory complexity into auditable, repeatable compliance processes.

The timing is deliberate. With national market surveillance authorities beginning their implementation phase across EU member states, the window between regulation and enforcement is closing. Companies that invest in compliance infrastructure now will have a structural advantage over those that wait.

Key figures from the regulatory landscape

Up to 7% of global annual turnover in fines for the most serious AI Act violations
4 intersecting regulatory frameworks: AI Act, GDPR, DSA, DMA
2-tier governance: EU AI Office (central) + national market surveillance authorities (local)
3 risk levels determining the intensity of compliance obligations


Editorial note — This white paper was originally published under the venture's previous name, Shikamaru. The company now operates as ShikAI.

Download the white paper

AI Compliance, Simplified: How ShikAI Maps the Regulatory Maze

PDF · 22 pages

Download PDF

For enquiries about the venture or the compliance framework, contact the KVA team.

EU AI Act · GDPR · DSA / DMA · Compliance by design · Risk classification · Data sovereignty

Internal Press · Kakashi Ventures Accelerator Srl · Turin, Italy
Published November 2025