Internal AI Governance

Complaix governs its own AI
before it governs yours.

The Complaix System is applied internally before it is applied to any client. This page is the live record of how Complaix governs its own AI use - the AI Surface Registry, Decision Accountability Matrix, Exposure Score, and governance principles that apply to every AI tool in Complaix's operations.

This is not a policy document. It is an operational record, verified against the live platform codebase and internal tooling. It is updated quarterly.

Last reviewed: Q2 2026 (May 2026)
Next review: Q3 2026
7 tools registered
Governance Principles

How we govern AI at Complaix

Six non-negotiable principles that govern every AI tool in use across Complaix's operations and platform.

01

Every AI tool is registered

Complaix maintains a live AI Surface Registry of every AI tool in use across its operations. No AI tool is used without being documented in the registry. The registry records the tool, category, use case, data handling approach, and the accountable human owner. This registry is reviewed quarterly and updated whenever a new tool is adopted or an existing tool's capabilities change.
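The fields a registry entry records (tool, category, use case, data handling, accountable owner) together with the quarterly review cadence could be sketched as a simple data structure. The field names and the review-due helper below are illustrative assumptions, not the internal Complaix schema:

```python
# Illustrative sketch of an AI Surface Registry entry.
# Field names and the 92-day cadence are assumptions, not the Complaix schema.
from dataclasses import dataclass
from datetime import date


@dataclass
class RegistryEntry:
    tool: str                 # e.g. "Claude (Anthropic)"
    category: str             # e.g. "Language Model - Internal Use"
    use_case: str
    data_handling: str
    accountable_owner: str
    risk_band: str            # "Low" / "Medium" / "High"
    last_reviewed: date

    def review_due(self, today: date, cadence_days: int = 92) -> bool:
        """Flag entries not reviewed within roughly one quarter."""
        return (today - self.last_reviewed).days > cadence_days


entry = RegistryEntry(
    tool="Claude (Anthropic)",
    category="Language Model - Internal Use",
    use_case="Internal research, content drafting, strategic analysis",
    data_handling="No client PII; prompts anonymised before submission",
    accountable_owner="Lucas Daidimos, Founder & CEO",
    risk_band="Medium",
    last_reviewed=date(2026, 5, 1),
)
print(entry.review_due(date(2026, 9, 1)))  # True: past the quarterly window
```

A check like `review_due` is one way the quarterly cycle described above could be enforced mechanically rather than by calendar reminder.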

02

Every AI-influenced decision has a named human owner

No AI output is published, shared with clients, or used in a decision without a named human reviewing and approving it. The accountable owner for all AI-influenced decisions at Complaix is Lucas Daidimos, Founder & CEO. This is not a policy - it is the operational reality of a founder-led practice. As Complaix scales, this accountability structure will be formalised into a Decision Accountability Matrix covering each function.

03

Client data handling is transparent and documented

The Complaix OS platform uses AI to generate board reports and governance documents. These features process aggregated governance metrics (exposure scores, risk counts, compliance percentages) - not personal data or raw client documents. This is disclosed in the platform's Data Processing Agreement. For internal advisory work, no client PII is ever submitted to third-party AI tools. Prompts are anonymised before submission.

04

AI outputs are always reviewed before use

Complaix does not use autonomous AI publication, automated client communications, or AI-generated content without human review. Every AI output that reaches a client, appears on this website, or informs a governance recommendation has been reviewed and approved by a human. This applies to platform-generated documents, marketing content, regulatory analysis, and all other AI-assisted outputs.

05

Governance posture is reviewed quarterly

The AI Surface Registry, Decision Accountability Matrix, and Exposure Score are reviewed on a quarterly basis. New tools are assessed before adoption. Existing tools are re-evaluated as their capabilities and data handling practices evolve. This review is documented and available to clients on request. The current review cycle is Q2 2026, with the next scheduled for Q3 2026.

06

EU AI Act and ISO 42001 obligations assessed

Complaix has completed a self-assessment against the EU AI Act's four core obligations: risk management, human oversight, accountability chains, and technical documentation. All four were assessed as Compliant. No Complaix AI system meets the definition of a high-risk AI system under Annex III. ISO 42001 alignment is tracked in the Complaix OS compliance module.

AI Exposure Score

Complaix's current score

The AI Exposure Score is calculated across four dimensions using the Complaix AI Exposure Scoring methodology. A score of 48/100 reflects honest acknowledgement of the platform's AI dependency, offset by strong accountability controls and low data sensitivity. This is not a risk score - it is a governance posture indicator.

48 / 100 (Medium)

Score calculated using the Complaix AI Exposure Scoring methodology. Last reviewed: Q2 2026 (May 2026). Next review: Q3 2026.

Data Sensitivity: 32 / 100

Aggregated governance metrics (not PII) processed by platform AI. Internal advisory work uses anonymised prompts. Low-medium sensitivity.

Operational Dependency: 58 / 100

The platform's core AI features (board reports, document generation, regulatory feed) rely on an LLM. Human review is required before any output is used.

Regulatory Exposure: 35 / 100

EU AI Act self-assessment completed. No high-risk AI system classification. ISO 42001 alignment tracked. UK GDPR compliant.

Accountability Coverage: 92 / 100

Full human ownership of all AI-influenced decisions. Named accountable owner for every AI use case. No autonomous publication.
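The four dimension scores above feed the overall figure. The Complaix AI Exposure Scoring methodology (its weights and banding thresholds) is not published on this page, so the sketch below uses a plain unweighted mean and assumed band cut-offs purely for illustration; it will not reproduce the published 48/100, which reflects the actual weighting:

```python
# Illustrative only: the real Complaix methodology (weights, inversions,
# banding thresholds) is not published here. Unweighted mean + assumed bands.
from dataclasses import dataclass


@dataclass
class ExposureDimensions:
    data_sensitivity: int         # 0-100
    operational_dependency: int   # 0-100
    regulatory_exposure: int      # 0-100
    accountability_coverage: int  # 0-100


def overall_score(d: ExposureDimensions) -> float:
    """Unweighted mean of the four dimensions (illustrative, not the
    proprietary weighting that yields the published 48/100)."""
    return (d.data_sensitivity + d.operational_dependency
            + d.regulatory_exposure + d.accountability_coverage) / 4


def band(score: float) -> str:
    # Assumed thresholds for illustration only.
    if score < 40:
        return "Low"
    if score < 70:
        return "Medium"
    return "High"


q2_2026 = ExposureDimensions(32, 58, 35, 92)
print(overall_score(q2_2026), band(overall_score(q2_2026)))
```

Even under this simplified mean, the Q2 2026 dimensions land in the same Medium band as the published score.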

AI Surface Registry

Every AI tool in use at Complaix

The following registry documents every AI tool currently in use across Complaix's operations and platform. Each entry includes the use case, data handling approach, accountable human owner, and review process. Verified against the live platform codebase and internal tooling. Last updated: Q2 2026 (May 2026).

Platform-Embedded AI (Complaix OS)

Gemini 2.5 Flash (via Manus Forge)

Language Model - Platform Core

Medium Risk
Use Case

Powers the Complaix OS platform's AI features: regulatory intelligence feed, LinkedIn content generation, board report drafting, document generation (NDAs, MSAs, policies)

Data Handling

Board reports include aggregated governance metrics (exposure scores, risk counts, compliance percentages). No personal data, individual names, or raw client documents are included in prompts. Clients are informed via the platform DPA.

Accountable Owner

Lucas Daidimos, Founder & CEO

Review Process

All AI-generated documents reviewed before delivery. No autonomous publication. Clients review and approve all outputs.

Internal-Use AI (Complaix Team)

Claude (Anthropic)

Language Model - Internal Use

Medium Risk
Use Case

Internal research, content drafting, strategic analysis, and advisory preparation. Used by the Complaix team, not embedded in the client-facing platform.

Data Handling

No client PII processed. Prompts are anonymised before submission. Used for internal operational tasks only.

Accountable Owner

Lucas Daidimos, Founder & CEO

Review Process

All outputs reviewed before use. No autonomous publication or client-facing use without human approval.

ChatGPT / GPT-4o (OpenAI)

Language Model - Internal Use

Medium Risk
Use Case

Research synthesis, regulatory analysis, framework documentation, and internal content drafting. Used by the Complaix team, not embedded in the client-facing platform.

Data Handling

No client PII processed. Prompts are anonymised before submission. Used for internal operational tasks only.

Accountable Owner

Lucas Daidimos, Founder & CEO

Review Process

All outputs reviewed before use. No autonomous publication or client-facing use without human approval.

Manus AI

Agentic / Workflow Automation

Medium Risk
Use Case

Website development, internal workflow automation, and scheduled task execution (OS notifications, contract reminders, monthly governance reports, quarterly board packs).

Data Handling

No client PII processed in agentic tasks. Scheduled tasks operate on aggregated platform metrics only. All automation outputs are logged.

Accountable Owner

Lucas Daidimos, Founder & CEO

Review Process

All agentic outputs reviewed and approved before deployment or client delivery. Scheduled tasks are monitored and auditable.

Third-Party Integrations (Data Processing)

Attio (AI-enhanced CRM)

CRM / Sales Intelligence

Low Risk

Stores client contact data (name, email, company, deal stage). No sensitive governance data stored. Data processing covered by Attio's DPA.

Owner: Lucas Daidimos, Founder & CEO

PandaDoc

Document Automation / E-Signature

Low Risk

Processes client names, email addresses, and document content. No AI-generated content sent to clients without human review. Data processing covered by PandaDoc's DPA.

Owner: Lucas Daidimos, Founder & CEO

Resend

Transactional Email

Low Risk

Processes recipient email addresses and email content. No sensitive governance data in email bodies beyond what the recipient already holds. Data processing covered by Resend's DPA.

Owner: Lucas Daidimos, Founder & CEO

Registry last reviewed: Q2 2026 (May 2026). Next scheduled review: Q3 2026. To request the full Decision Accountability Matrix or governance documentation, contact Complaix directly.

Decision Accountability Matrix

Who is accountable for every AI-influenced decision

Every AI-influenced decision at Complaix has a named human owner. No AI output is used in a client-facing context, published externally, or used to inform a governance recommendation without human review and approval.

Decision Type | AI Tool Involved | Accountable Human Owner | Review Requirement | Status
Platform board report generation | Gemini 2.5 Flash | Lucas Daidimos | Client reviews and approves before use | Active
Platform document drafting (NDAs, MSAs, policies) | Gemini 2.5 Flash | Lucas Daidimos | Human review before delivery to client | Active
Regulatory intelligence feed | Gemini 2.5 Flash | Lucas Daidimos | Reviewed before display; fallback data if LLM unavailable | Active
LinkedIn content generation | Gemini 2.5 Flash | Lucas Daidimos | All posts reviewed and edited before publication | Active
Internal research and analysis | Claude / ChatGPT | Lucas Daidimos | All outputs verified against primary sources before use | Active
Internal content drafting | Claude / ChatGPT | Lucas Daidimos | All content reviewed before publication or client use | Active
Website development and automation | Manus AI | Lucas Daidimos | All deployments reviewed and approved before going live | Active
Scheduled platform notifications | Manus AI (scheduled) | Lucas Daidimos | Templates reviewed; individual sends logged and auditable | Active
Client contract creation and delivery | PandaDoc | Lucas Daidimos | All documents reviewed before sending for signature | Active
CRM data management | Attio | Lucas Daidimos | Data reviewed quarterly; AI enrichment features monitored | Active
Common Questions

Questions clients ask about our AI governance

Confirmed

Does Complaix use AI to make decisions about my organisation?

No. AI tools at Complaix are used to assist with drafting, analysis, and document generation. Every AI output that relates to your organisation is reviewed and approved by a human before it is used. No autonomous decisions are made about your governance posture, risk profile, or compliance status.

Confirmed

Does my company's data get sent to AI tools like ChatGPT or Claude?

No. For internal advisory work, no client PII or confidential business information is submitted to third-party AI tools. For the Complaix OS platform, board report generation uses aggregated governance metrics (exposure scores, risk counts, compliance percentages) - not personal data or raw documents. This is disclosed in the platform DPA.

Confirmed

Which AI model powers the Complaix OS platform?

The Complaix OS platform uses Gemini 2.5 Flash, accessed via the Manus Forge API, for AI-powered features including board report generation, document drafting, and the regulatory intelligence feed. The model is not used for any autonomous decision-making - all outputs require human review before use.

Confirmed

Is Complaix compliant with the EU AI Act?

Complaix has completed a self-assessment against the EU AI Act's four core obligations: risk management, human oversight, accountability chains, and technical documentation. All four were assessed as Compliant. No Complaix AI system meets the definition of a high-risk AI system under Annex III of the EU AI Act.

Confirmed

How do I know this registry is accurate?

This registry is verified against the live Complaix platform codebase and internal tooling. It is reviewed quarterly. The last verification was conducted in May 2026. Tools listed as 'not integrated' have been confirmed absent from the codebase. To request the full governance documentation package, contact Complaix directly.

Confirmed

What happens if Complaix adopts a new AI tool?

Any new AI tool must be assessed and registered in the AI Surface Registry before use. The assessment covers: use case, data handling approach, risk classification, accountable human owner, and review process. The registry is updated and this page is refreshed at the next quarterly review cycle.

Governance Changelog

What's changed

A live record of changes to Complaix's AI governance posture. Updated each quarter.

May 2026

AI Surface Registry updated - 7 tools verified

Full audit of AI tools in use across Complaix operations and platform. Registry updated to reflect the live codebase: Gemini 2.5 Flash (platform), Claude and ChatGPT (internal), Manus AI (agentic), Attio, PandaDoc, and Resend (integrations). Tools not found in the codebase were removed: Claude (platform), GPT-4o (platform), Perplexity AI, Notion AI.

May 2026

Exposure Score revised to 48/100 (Medium)

Score updated to reflect the platform's AI dependency for core features (board reports, document generation, regulatory feed). Data Sensitivity raised to 32 (aggregated governance metrics in LLM prompts, not PII). Operational Dependency raised to 58. Accountability Coverage maintained at 92. Overall: Medium.

May 2026

Data handling disclosure updated for board report AI

Governance principle 03 updated to accurately reflect that aggregated governance metrics (exposure scores, risk counts, compliance percentages) are processed by the platform LLM for board report generation. Confirmed: no personal data or raw client documents in prompts. Disclosed in platform DPA.

Apr 2026

Manus AI added to AI Surface Registry

Manus AI (agentic workflow tooling) formally registered in the AI Surface Registry. Use case: website development, document automation, internal workflow tooling, and scheduled task execution. Risk band: Medium. Accountable owner: Lucas Daidimos, Founder & CEO.

Apr 2026

EU AI Act self-assessment completed

Complaix completed its first EU AI Act self-assessment against the four core obligations: risk management, human oversight, accountability chains, and documentation. All four were assessed as Compliant. No high-risk AI system classification under Annex III.

Q1 2026

AI Exposure Score established at 38/100

First formal AI Exposure Score calculated across four dimensions: Data Sensitivity (28), Operational Dependency (45), Regulatory Exposure (35), Accountability Coverage (92). Overall: Low - Medium. Revised to 48/100 in Q2 2026 following platform AI feature expansion.

Q1 2026

Decision Accountability Matrix formalised

All AI-influenced decisions at Complaix formally assigned to a named human owner. Accountable owner: Lucas Daidimos, Founder & CEO. No autonomous AI publication policy implemented.

Q4 2025 / Q1 2026

AI Surface Registry initiated

Complaix's AI Surface Registry created with initial entries for Claude (Anthropic), GPT-4o (OpenAI), and Perplexity AI for internal use. Data handling protocols and review processes documented for each tool.

See how your organisation compares.

The free AI Accountability Assessment takes 10 minutes and produces your AI Exposure Score immediately. No commitment, no credit card.