Complaix governs its own AI
before it governs yours.
The Complaix System is applied internally before it is applied to any client. This page is the live record of how Complaix governs its own AI use - the AI Surface Registry, Decision Accountability Matrix, Exposure Score, and governance principles that apply to every AI tool in Complaix's operations.
This is not a policy document. It is an operational record, verified against the live platform codebase and internal tooling. It is updated quarterly.
How we govern AI at Complaix
Six non-negotiable principles that govern every AI tool in use across Complaix's operations and platform.
Every AI tool is registered
Complaix maintains a live AI Surface Registry of every AI tool in use across its operations. No AI tool is used without being documented in the registry. The registry records the tool, category, use case, data handling approach, and the accountable human owner. This registry is reviewed quarterly and updated whenever a new tool is adopted or an existing tool's capabilities change.
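The fields this principle names (tool, category, use case, data handling approach, accountable owner) amount to a simple record type. As an illustrative sketch only - the field names below are assumptions for illustration, not the actual Complaix registry schema:

```python
from dataclasses import dataclass

# Hypothetical field names; the real Complaix registry schema is internal.
@dataclass
class RegistryEntry:
    tool: str            # e.g. "Claude (Anthropic)"
    category: str        # e.g. "Language Model - Internal Use"
    use_case: str        # what the tool is used for
    data_handling: str   # what data reaches the tool, and how
    owner: str           # the named accountable human
    last_reviewed: str   # quarterly review cycle, e.g. "Q2 2026"

# Example entry, populated from the registry on this page:
entry = RegistryEntry(
    tool="Claude (Anthropic)",
    category="Language Model - Internal Use",
    use_case="Internal research, content drafting, advisory preparation",
    data_handling="No client PII; prompts anonymised before submission",
    owner="Lucas Daidimos, Founder & CEO",
    last_reviewed="Q2 2026",
)
print(entry.owner)
```

A structure like this makes the quarterly review mechanical: any entry with an empty field, or a `last_reviewed` older than the current cycle, is flagged.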
Every AI-influenced decision has a named human owner
No AI output is published, shared with clients, or used in a decision without a named human reviewing and approving it. The accountable owner for all AI-influenced decisions at Complaix is Lucas Daidimos, Founder & CEO. This is not a policy - it is the operational reality of a founder-led practice. As Complaix scales, this accountability structure will be extended through the Decision Accountability Matrix to cover each business function.
Client data handling is transparent and documented
The Complaix OS platform uses AI to generate board reports and governance documents. These features process aggregated governance metrics (exposure scores, risk counts, compliance percentages) - not personal data or raw client documents. This is disclosed in the platform's Data Processing Agreement. For internal advisory work, no client PII is ever submitted to third-party AI tools. Prompts are anonymised before submission.
AI outputs are always reviewed before use
Complaix does not use autonomous AI publication, automated client communications, or AI-generated content without human review. Every AI output that reaches a client, appears on this website, or informs a governance recommendation has been reviewed and approved by a human. This applies to platform-generated documents, marketing content, regulatory analysis, and all other AI-assisted outputs.
Governance posture is reviewed quarterly
The AI Surface Registry, Decision Accountability Matrix, and Exposure Score are reviewed on a quarterly basis. New tools are assessed before adoption. Existing tools are re-evaluated as their capabilities and data handling practices evolve. This review is documented and available to clients on request. The current review cycle is Q2 2026, with the next scheduled for Q3 2026.
EU AI Act and ISO 42001 obligations assessed
Complaix has completed a self-assessment against the EU AI Act's four core obligations: risk management, human oversight, accountability chains, and technical documentation. All four were assessed as compliant. No Complaix AI system meets the definition of a high-risk AI system under Annex III. ISO 42001 alignment is tracked in the Complaix OS compliance module.
Complaix's current score
The AI Exposure Score is calculated across four dimensions using the Complaix AI Exposure Scoring methodology. A score of 48/100 reflects honest acknowledgement of the platform's AI dependency, offset by strong accountability controls and low data sensitivity. This is not a risk score - it is a governance posture indicator.
Score calculated using the Complaix AI Exposure Scoring methodology. Last reviewed: Q2 2026 (May 2026). Next review: Q3 2026.
Data Sensitivity - Aggregated governance metrics (not PII) processed by platform AI. Internal advisory work uses anonymised prompts. Low-medium sensitivity.
Operational Dependency - The platform's core AI features (board reports, document generation, regulatory feed) rely on an LLM. Human review is required before any output is used.
Regulatory Exposure - EU AI Act self-assessment completed. No high-risk AI system classification. ISO 42001 alignment tracked. UK GDPR compliant.
Accountability Coverage - Full human ownership of all AI-influenced decisions. Named accountable owner for every AI use case. No autonomous publication.
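The Exposure Score's four dimensions (Data Sensitivity, Operational Dependency, Regulatory Exposure, Accountability Coverage, per the changelog below) could be combined in many ways, and the exact Complaix weighting is not published on this page. A purely illustrative sketch, assuming equal weights with Accountability Coverage inverted so that stronger accountability lowers the score. Note that under this assumption the Q1 2026 dimension values (28, 45, 35, 92) yield 29 rather than the published 38, so the real methodology evidently weights the dimensions differently; the sketch shows only the shape of the calculation.

```python
# Illustrative only: the Complaix AI Exposure Scoring weights are not published
# on this page. Assumption: equal weights, with Accountability Coverage inverted
# so that stronger accountability lowers the overall score.

def exposure_score(data_sensitivity: int, operational_dependency: int,
                   regulatory_exposure: int, accountability_coverage: int) -> int:
    """Combine four 0-100 dimensions into a single 0-100 posture indicator."""
    risk = [data_sensitivity, operational_dependency, regulatory_exposure]
    mitigation = 100 - accountability_coverage  # high coverage -> small contribution
    return round((sum(risk) + mitigation) / 4)

print(exposure_score(28, 45, 35, 92))  # -> 29 under this equal-weight assumption
```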
Every AI tool in use at Complaix
The following registry documents every AI tool currently in use across Complaix's operations and platform. Each entry includes the use case, data handling approach, accountable human owner, and review process. Verified against the live platform codebase and internal tooling. Last updated: Q2 2026 (May 2026).
Gemini 2.5 Flash (via Manus Forge)
Category: Language Model - Platform Core
Use case: Powers the Complaix OS platform's AI features: regulatory intelligence feed, LinkedIn content generation, board report drafting, and document generation (NDAs, MSAs, policies).
Data handling: Board reports include aggregated governance metrics (exposure scores, risk counts, compliance percentages). No personal data, individual names, or raw client documents are included in prompts. Clients are informed via the platform DPA.
Accountable owner: Lucas Daidimos, Founder & CEO
Review: All AI-generated documents reviewed before delivery. No autonomous publication. Clients review and approve all outputs.
Claude (Anthropic)
Category: Language Model - Internal Use
Use case: Internal research, content drafting, strategic analysis, and advisory preparation. Used by the Complaix team, not embedded in the client-facing platform.
Data handling: No client PII processed. Prompts are anonymised before submission. Used for internal operational tasks only.
Accountable owner: Lucas Daidimos, Founder & CEO
Review: All outputs reviewed before use. No autonomous publication or client-facing use without human approval.
ChatGPT / GPT-4o (OpenAI)
Category: Language Model - Internal Use
Use case: Research synthesis, regulatory analysis, framework documentation, and internal content drafting. Used by the Complaix team, not embedded in the client-facing platform.
Data handling: No client PII processed. Prompts are anonymised before submission. Used for internal operational tasks only.
Accountable owner: Lucas Daidimos, Founder & CEO
Review: All outputs reviewed before use. No autonomous publication or client-facing use without human approval.
Manus AI
Category: Agentic / Workflow Automation
Use case: Website development, internal workflow automation, and scheduled task execution (OS notifications, contract reminders, monthly governance reports, quarterly board packs).
Data handling: No client PII processed in agentic tasks. Scheduled tasks operate on aggregated platform metrics only. All automation outputs are logged.
Accountable owner: Lucas Daidimos, Founder & CEO
Review: All agentic outputs reviewed and approved before deployment or client delivery. Scheduled tasks are monitored and auditable.
Attio (AI-enhanced CRM)
Category: CRM / Sales Intelligence
Data handling: Stores client contact data (name, email, company, deal stage). No sensitive governance data stored. Data processing covered by Attio's DPA.
PandaDoc
Category: Document Automation / E-Signature
Data handling: Processes client names, email addresses, and document content. No AI-generated content sent to clients without human review. Data processing covered by PandaDoc's DPA.
Resend
Category: Transactional Email
Data handling: Processes recipient email addresses and email content. No sensitive governance data in email bodies beyond what the recipient already holds. Data processing covered by Resend's DPA.
Registry last reviewed: Q2 2026 (May 2026). Next scheduled review: Q3 2026. To request the full Decision Accountability Matrix or governance documentation, contact Complaix directly.
Who is accountable for every AI-influenced decision
Every AI-influenced decision at Complaix has a named human owner. No AI output is used in a client-facing context, published externally, or used to inform a governance recommendation without human review and approval.
| Decision Type | AI Tool Involved | Accountable Human Owner | Review Requirement | Status |
|---|---|---|---|---|
| Platform board report generation | Gemini 2.5 Flash | Lucas Daidimos | Client reviews and approves before use | Active |
| Platform document drafting (NDAs, MSAs, policies) | Gemini 2.5 Flash | Lucas Daidimos | Human review before delivery to client | Active |
| Regulatory intelligence feed | Gemini 2.5 Flash | Lucas Daidimos | Reviewed before display; fallback data if LLM unavailable | Active |
| LinkedIn content generation | Gemini 2.5 Flash | Lucas Daidimos | All posts reviewed and edited before publication | Active |
| Internal research and analysis | Claude / ChatGPT | Lucas Daidimos | All outputs verified against primary sources before use | Active |
| Internal content drafting | Claude / ChatGPT | Lucas Daidimos | All content reviewed before publication or client use | Active |
| Website development and automation | Manus AI | Lucas Daidimos | All deployments reviewed and approved before going live | Active |
| Scheduled platform notifications | Manus AI (scheduled) | Lucas Daidimos | Templates reviewed; individual sends logged and auditable | Active |
| Client contract creation and delivery | PandaDoc | Lucas Daidimos | All documents reviewed before sending for signature | Active |
| CRM data management | Attio | Lucas Daidimos | Data reviewed quarterly; AI enrichment features monitored | Active |
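The matrix's guarantee - every decision type has a named owner and a review requirement - lends itself to a mechanical check. A sketch over a cut-down copy of the rows above (the data structure is assumed for illustration, not taken from any Complaix tooling):

```python
# Each row mirrors the matrix columns: decision type, tool, owner, review requirement.
matrix = [
    ("Platform board report generation", "Gemini 2.5 Flash",
     "Lucas Daidimos", "Client reviews and approves before use"),
    ("Internal research and analysis", "Claude / ChatGPT",
     "Lucas Daidimos", "Outputs verified against primary sources"),
    ("Scheduled platform notifications", "Manus AI (scheduled)",
     "Lucas Daidimos", "Templates reviewed; sends logged and auditable"),
]

def unowned_decisions(rows):
    """Return decision types missing a named owner or a review requirement."""
    return [decision for decision, _tool, owner, review in rows
            if not owner.strip() or not review.strip()]

print(unowned_decisions(matrix))  # -> [] : an empty list means full coverage
```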
Questions clients ask about our AI governance
Does Complaix use AI to make decisions about my organisation?
No. AI tools at Complaix are used to assist with drafting, analysis, and document generation. Every AI output that relates to your organisation is reviewed and approved by a human before it is used. No autonomous decisions are made about your governance posture, risk profile, or compliance status.
Does my company's data get sent to AI tools like ChatGPT or Claude?
No. For internal advisory work, no client PII or confidential business information is submitted to third-party AI tools. For the Complaix OS platform, board report generation uses aggregated governance metrics (exposure scores, risk counts, compliance percentages) - not personal data or raw documents. This is disclosed in the platform DPA.
Which AI model powers the Complaix OS platform?
The Complaix OS platform uses Gemini 2.5 Flash, accessed via the Manus Forge API, for AI-powered features including board report generation, document drafting, and the regulatory intelligence feed. The model is not used for any autonomous decision-making - all outputs require human review before use.
Is Complaix compliant with the EU AI Act?
Complaix has completed a self-assessment against the EU AI Act's four core obligations: risk management, human oversight, accountability chains, and technical documentation. All four were assessed as compliant. No Complaix AI system meets the definition of a high-risk AI system under Annex III of the EU AI Act.
How do I know this registry is accurate?
This registry is verified against the live Complaix platform codebase and internal tooling. It is reviewed quarterly. The last verification was conducted in May 2026. Tools listed as 'not integrated' have been confirmed absent from the codebase. To request the full governance documentation package, contact Complaix directly.
What happens if Complaix adopts a new AI tool?
Any new AI tool must be assessed and registered in the AI Surface Registry before use. The assessment covers: use case, data handling approach, risk classification, accountable human owner, and review process. The registry is updated and this page is refreshed at the next quarterly review cycle.
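The five-point assessment described here works as a registration gate: a tool enters the registry only once every field is complete. A minimal sketch, with field names that are illustrative assumptions rather than the Complaix intake schema:

```python
# Hypothetical field names mirroring the five assessment points above.
REQUIRED_FIELDS = ("use_case", "data_handling", "risk_classification",
                   "accountable_owner", "review_process")

def ready_to_register(assessment: dict) -> bool:
    """A new tool may enter the registry only when all five fields are complete."""
    return all(assessment.get(field, "").strip() for field in REQUIRED_FIELDS)

draft = {"use_case": "Internal drafting", "data_handling": "No client PII"}
print(ready_to_register(draft))  # -> False: three fields still unassessed
```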
What's changed
A live record of changes to Complaix's AI governance posture. Updated each quarter.
AI Surface Registry updated - 7 tools verified
Full audit of AI tools in use across Complaix operations and platform. Registry updated to reflect live codebase: Gemini 2.5 Flash (platform), Claude and ChatGPT (internal), Manus AI (agentic), Attio, PandaDoc, and Resend (integrations). Tools not found in codebase removed: Claude (platform), GPT-4o (platform), Perplexity AI, Notion AI.
Exposure Score revised to 48/100 (Medium)
Score updated to reflect the platform's AI dependency for core features (board reports, document generation, regulatory feed). Data Sensitivity raised to 32 (aggregated governance metrics in LLM prompts, not PII). Operational Dependency raised to 58. Accountability Coverage maintained at 92. Overall: Medium.
Data handling disclosure updated for board report AI
Governance principle 03 updated to accurately reflect that aggregated governance metrics (exposure scores, risk counts, compliance percentages) are processed by the platform LLM for board report generation. Confirmed: no personal data or raw client documents in prompts. Disclosed in platform DPA.
Manus AI added to AI Surface Registry
Manus AI (agentic workflow tooling) formally registered in the AI Surface Registry. Use case: website development, document automation, internal workflow tooling, and scheduled task execution. Risk band: Medium. Accountable owner: Lucas Daidimos, Founder & CEO.
EU AI Act self-assessment completed
Complaix completed its first EU AI Act self-assessment against the four core obligations: risk management, human oversight, accountability chains, and documentation. All four were assessed as compliant. No high-risk AI system classification under Annex III.
AI Exposure Score established at 38/100
First formal AI Exposure Score calculated across four dimensions: Data Sensitivity (28), Operational Dependency (45), Regulatory Exposure (35), Accountability Coverage (92). Overall: Low - Medium. Revised to 48/100 in Q2 2026 following platform AI feature expansion.
Decision Accountability Matrix formalised
All AI-influenced decisions at Complaix formally assigned to a named human owner. Accountable owner: Lucas Daidimos, Founder & CEO. A no-autonomous-publication policy was implemented.
AI Surface Registry initiated
Complaix's AI Surface Registry created with initial entries for Claude (Anthropic), GPT-4o (OpenAI), and Perplexity AI for internal use. Data handling protocols and review processes documented for each tool.