AI Compliance Framework — EU AI Act, GDPR, and Enterprise AI Governance

Introduction

Deploying AI in regulated industries requires understanding the EU AI Act, GDPR, and emerging AI governance frameworks. The AI Act categorizes systems into four risk levels, each with escalating compliance requirements. GDPR's Article 22 restricts fully automated decisions that produce legal or similarly significant effects. Building compliant AI systems isn't just legal protection; it underpins customer trust and operational resilience. This guide covers practical compliance implementation.

EU AI Act Risk Categories

Understand the regulatory landscape:

type AIRiskCategory = 'prohibited' | 'high-risk' | 'limited-risk' | 'minimal-risk';

interface AISystemRiskAssessment {
  systemName: string;
  riskCategory: AIRiskCategory;
  riskFactors: string[];
  mitigations: string[];
  requiresNotification: boolean;
  requiresHumanOversight: boolean;
}

function assessAIRiskCategory(systemConfig: {
  purpose: string;
  usesPersonalData: boolean;
  affectsLegalRights: boolean;
  criticalInfrastructure: boolean;
  industrySector: string;
}): AIRiskCategory {
  // Prohibited practices: e.g. real-time remote biometric identification in public spaces, subliminal manipulation
  if (systemConfig.purpose.includes('real-time-biometric-identification')) {
    return 'prohibited';
  }

  // High-risk: Law enforcement, hiring, credit decisions, border control
  if (
    systemConfig.affectsLegalRights &&
    (systemConfig.purpose === 'law-enforcement' ||
      systemConfig.purpose === 'hiring' ||
      systemConfig.purpose === 'credit-decisions')
  ) {
    return 'high-risk';
  }

  // High-risk if critical infrastructure or essential public services
  if (systemConfig.criticalInfrastructure) {
    return 'high-risk';
  }

  // Limited-risk: Chatbots, content recommendation, automated decision support
  if (systemConfig.purpose === 'chatbot' || systemConfig.purpose === 'recommendation') {
    return 'limited-risk';
  }

  // Default to minimal-risk for others
  return 'minimal-risk';
}

const RISK_REQUIREMENTS: Record<AIRiskCategory, {
  documentationRequired: boolean;
  testingRequired: boolean;
  humanOversightRequired: boolean;
  dataMinimizationRequired: boolean;
  biasTestingRequired: boolean;
}> = {
  'prohibited': {
    documentationRequired: false, // Not allowed
    testingRequired: false,
    humanOversightRequired: false,
    dataMinimizationRequired: false,
    biasTestingRequired: false
  },
  'high-risk': {
    documentationRequired: true,
    testingRequired: true,
    humanOversightRequired: true,
    dataMinimizationRequired: true,
    biasTestingRequired: true
  },
  'limited-risk': {
    documentationRequired: true,
    testingRequired: true,
    humanOversightRequired: false,
    dataMinimizationRequired: true,
    biasTestingRequired: false
  },
  'minimal-risk': {
    documentationRequired: false,
    testingRequired: false,
    humanOversightRequired: false,
    dataMinimizationRequired: false,
    biasTestingRequired: false
  }
};
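
As a quick illustration, the two pieces above compose directly: classify a system, then look up its obligations. A minimal usage sketch for a hypothetical hiring screener:

// Sketch: classify a hypothetical hiring screener, then look up its obligations
const category = assessAIRiskCategory({
  purpose: 'hiring',
  usesPersonalData: true,
  affectsLegalRights: true,
  criticalInfrastructure: false,
  industrySector: 'recruitment'
});
// category === 'high-risk'

const obligations = RISK_REQUIREMENTS[category];
// obligations.humanOversightRequired === true
// obligations.biasTestingRequired === true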

High-Risk AI System Requirements

Document and test thoroughly:

interface HighRiskAISystemDocumentation {
  systemId: string;
  systemName: string;
  purpose: string;
  dataUsed: {
    sourceDatasets: string[];
    trainingData: { size: number; dateRange: string };
    personalDataCategories: string[];
  };
  riskMitigations: Array<{ risk: string; mitigation: string }>;
  humanOversightProcess: string;
  testingAndValidation: {
    accuracyResults: number;
    fairnessTestResults: Record<string, number>;
    adversarialTestingResults: Record<string, number>;
    coverageAnalysis: string;
  };
  monitoringPlan: {
    metricsTracked: string[];
    reportingFrequency: string;
    escalationProcedures: string;
  };
}

async function documentHighRiskSystem(
  systemConfig: any
): Promise<HighRiskAISystemDocumentation> {
  const systemId = generateId();

  return {
    systemId,
    systemName: systemConfig.name,
    purpose: systemConfig.purpose,
    dataUsed: {
      sourceDatasets: await listDataSources(systemConfig.id),
      trainingData: {
        size: await getTrainingDataSize(systemConfig.id),
        dateRange: await getTrainingDateRange(systemConfig.id)
      },
      personalDataCategories: identifyPersonalDataCategories(systemConfig)
    },
    riskMitigations: [
      {
        risk: 'Bias in hiring decisions',
        mitigation: 'Monthly fairness audits across demographic groups'
      },
      {
        risk: 'False positives affecting applicants',
        mitigation: 'All decisions reviewed by human recruiter'
      }
    ],
    humanOversightProcess: `
      All decisions made by the system are reviewed by a qualified human
      before being communicated to the applicant. Reviewers are trained
      quarterly on bias detection and fairness.
    `,
    testingAndValidation: {
      accuracyResults: 0.92,
      fairnessTestResults: {
        demographic_parity: 0.88,
        equalized_odds: 0.91,
        calibration: 0.89
      },
      adversarialTestingResults: {
        adversarial_examples_failed: 5,
        success_rate: 0.98
      },
      coverageAnalysis: 'Tested on 50k samples across 12 demographic groups'
    },
    monitoringPlan: {
      metricsTracked: ['accuracy', 'fairness', 'false_positive_rate', 'processing_time'],
      reportingFrequency: 'Monthly to governance board',
      escalationProcedures: 'Drift > 5% triggers immediate review'
    }
  };
}

Technical Documentation Requirements

Maintain comprehensive audit trails:

interface TechnicalDocumentation {
  systemArchitecture: string;
  modelCard: ModelCard;
  dataSheets: DataSheet[];
  riskRegister: RiskEntry[];
  changeLog: ChangeLogEntry[];
  auditTrail: AuditEntry[];
}

interface ModelCard {
  modelName: string;
  modelVersion: string;
  modelType: string;
  trainingDataDescription: string;
  intendedUse: string;
  limitations: string[];
  performance: {
    accuracy: number;
    precision: number;
    recall: number;
    f1Score: number;
  };
  fairnessEvaluation: {
    testedDemographics: string[];
    worstGroupAccuracy: number;
    disparateImpactRatio: number;
  };
  recommendations: string[];
}

interface DataSheet {
  datasetName: string;
  source: string;
  purpose: string;
  composition: {
    totalRecords: number;
    features: number;
    target_variable: string;
  };
  collection: {
    mechanism: string;
    timeFrame: string;
    preprocessingSteps: string[];
  };
  distribution: Record<string, any>;
  limitations: string[];
}

async function generateModelCard(modelId: string): Promise<ModelCard> {
  const modelMetrics = await getModelMetrics(modelId);
  const fairnessResults = await runFairnessTests(modelId);

  return {
    modelName: `Model_${modelId}`,
    modelVersion: '1.0',
    modelType: 'gradient_boosting',
    trainingDataDescription: 'Dataset of 100k credit applications from 2020-2024',
    intendedUse: 'Credit risk assessment for lending decisions',
    limitations: [
      'Trained on 2020-2024 data; performance may degrade with economic shifts',
      'Not suitable for consumers under age 18',
      'Poor performance on applications with incomplete data'
    ],
    performance: {
      accuracy: modelMetrics.accuracy,
      precision: modelMetrics.precision,
      recall: modelMetrics.recall,
      f1Score: modelMetrics.f1
    },
    fairnessEvaluation: {
      testedDemographics: ['gender', 'age_group', 'ethnicity', 'income_level'],
      worstGroupAccuracy: fairnessResults.worstGroupAccuracy,
      disparateImpactRatio: fairnessResults.disparateImpactRatio
    },
    recommendations: [
      'Review model decisions on high-impact cases (loans > $500k)',
      'Retrain quarterly with new data',
      'Monitor for demographic drift'
    ]
  };
}
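
The DataSheet interface can be populated the same way. A sketch, assuming a hypothetical getDatasetStats introspection helper; everything else mirrors the model card generator above:

// Sketch: populate a DataSheet for a training dataset.
// getDatasetStats is a hypothetical helper returning dataset metadata.
async function generateDataSheet(datasetId: string): Promise<DataSheet> {
  const stats = await getDatasetStats(datasetId);

  return {
    datasetName: stats.name,
    source: stats.source,
    purpose: 'Training data for credit risk assessment model',
    composition: {
      totalRecords: stats.recordCount,
      features: stats.featureCount,
      target_variable: stats.targetVariable
    },
    collection: {
      mechanism: stats.collectionMechanism,
      timeFrame: stats.timeFrame,
      preprocessingSteps: stats.preprocessingSteps
    },
    distribution: stats.classDistribution,
    limitations: stats.knownLimitations
  };
}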

async function generateAuditTrail(systemId: string): Promise<AuditEntry[]> {
  const entries = await db.query(`
    SELECT * FROM audit_logs WHERE systemId = ? ORDER BY timestamp DESC
  `, [systemId]);

  return entries.map(e => ({
    timestamp: e.timestamp,
    actor: e.actor,
    action: e.action,
    affectedRecords: e.affectedRecords,
    changeDetails: e.changeDetails,
    approvalStatus: e.approvalStatus
  }));
}

Human Oversight Mechanisms

Ensure humans stay in control:

interface HumanOversightWorkflow {
  systemId: string;
  oversightLevel: 'audit' | 'review' | 'approval' | 'intervention';
  decisionThresholds: {
    autoApproveIfConfidence: number;
    requireReviewIfConfidence: number;
    requireApprovalIfConfidence: number;
  };
  reviewQueue: QueuedDecision[];
}

interface QueuedDecision {
  id: string;
  systemOutput: any;
  confidence: number;
  risk_level: string;
  assignedReviewer?: string;
  reviewedAt?: Date;
  reviewerDecision?: 'approved' | 'rejected' | 'escalated';
  reviewerNotes?: string;
}

async function determineOversightLevel(
  systemOutput: any,
  confidence: number,
  riskLevel: string
): Promise<'auto' | 'review' | 'approval'> {
  if (confidence > 0.95 && riskLevel === 'low') {
    return 'auto'; // Auto-approve safe, confident decisions
  }

  if (confidence < 0.85 || riskLevel === 'high') {
    return 'approval'; // Require manager approval for risky decisions
  }

  return 'review'; // Standard review for moderate confidence
}

async function queueForHumanReview(
  decision: any,
  oversightLevel: string
): Promise<string> {
  const queuedDecision: QueuedDecision = {
    id: generateId(),
    systemOutput: decision,
    confidence: decision.confidence,
    risk_level: assessRiskLevel(decision),
    assignedReviewer: undefined,
    reviewedAt: undefined
  };

  await reviewQueueDb.insert('pending_reviews', queuedDecision);

  if (oversightLevel === 'approval') {
    // Escalate to manager
    await notifyApprover(queuedDecision);
  } else {
    // Route to standard reviewer
    const reviewer = await assignReviewer();
    await reviewQueueDb.update('pending_reviews', queuedDecision.id, {
      assignedReviewer: reviewer.id
    });
  }

  return queuedDecision.id;
}

async function submitReview(
  reviewId: string,
  decision: 'approved' | 'rejected' | 'escalated',
  notes: string
): Promise<void> {
  await reviewQueueDb.update('pending_reviews', reviewId, {
    reviewerDecision: decision,
    reviewedAt: new Date(),
    reviewerNotes: notes
  });

  // Log for audit trail
  await auditLog.insert({
    action: 'review_submitted',
    reviewId,
    reviewer: getCurrentUser(),
    decision,
    timestamp: new Date()
  });
}
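
Wiring these pieces together, every model output passes through the oversight gate before anyone acts on it. A minimal sketch, assuming a hypothetical autoApprove helper that records and releases a decision without review:

// Sketch: route each model output through the oversight gate.
// autoApprove is a hypothetical helper that records and releases the decision.
async function processDecision(decision: any): Promise<void> {
  const riskLevel = assessRiskLevel(decision);
  const level = await determineOversightLevel(decision, decision.confidence, riskLevel);

  if (level === 'auto') {
    await autoApprove(decision);
  } else {
    // 'review' and 'approval' both enter the human queue;
    // queueForHumanReview escalates 'approval' cases to a manager
    await queueForHumanReview(decision, level);
  }

  // Log the routing itself so the audit trail shows why a decision
  // did or did not receive human review
  await auditLog.insert({
    action: 'oversight_routing',
    decisionId: decision.id,
    oversightLevel: level,
    timestamp: new Date()
  });
}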

Bias and Accuracy Testing Requirements

Conduct thorough testing:

interface BiasAndAccuracyTestReport {
  systemId: string;
  testDate: Date;
  overallAccuracy: number;
  accuracyByDemographic: Record<string, number>;
  disparateImpactRatio: number;
  adversarialRobustness: number;
  findings: string[];
  recommendations: string[];
}

async function runComprehensiveFairnessTests(
  systemId: string,
  testDataset: any[]
): Promise<BiasAndAccuracyTestReport> {
  const predictions: any[] = [];

  for (const sample of testDataset) {
    const pred = await invokeModel(systemId, sample);
    predictions.push({ ...sample, prediction: pred });
  }

  // Measure overall accuracy
  const overallAccuracy = calculateAccuracy(predictions);

  // Measure accuracy by demographic group
  const demographicGroups = ['gender', 'age_group', 'ethnicity'];
  const accuracyByDemographic: Record<string, number> = {};

  for (const demographic of demographicGroups) {
    const groups = groupBy(predictions, demographic);
    const accuracies = Object.entries(groups).map(([group, samples]) => ({
      group,
      accuracy: calculateAccuracy(samples)
    }));

    // Record the worst-performing group's accuracy for this demographic
    accuracyByDemographic[demographic] = Math.min(
      ...accuracies.map(a => a.accuracy)
    );
  }

  // Measure disparate impact (adverse impact ratio)
  const disparateImpactRatio = calculateDisparateImpact(predictions);

  // Test adversarial robustness
  const adversarialRobustness = await testAdversarialExamples(systemId);

  const findings: string[] = [];
  if (Math.min(...Object.values(accuracyByDemographic)) < 0.85) {
    findings.push('Accuracy disparity detected across demographic groups');
  }

  if (disparateImpactRatio < 0.8) {
    findings.push('Disparate impact ratio below recommended threshold (0.8)');
  }

  return {
    systemId,
    testDate: new Date(),
    overallAccuracy,
    accuracyByDemographic,
    disparateImpactRatio,
    adversarialRobustness,
    findings,
    recommendations: [
      'Retrain model with balanced demographic representation',
      'Implement fairness-aware loss function',
      'Conduct human review of decisions affecting minority groups'
    ]
  };
}

function calculateDisparateImpact(predictions: any[]): number {
  // Adverse impact ratio = lowest group success rate / highest group success rate
  // Threshold: > 0.8 (80% rule)
  const groups = groupBy(predictions, 'protected_class');
  const successRates = Object.entries(groups).map(([group, samples]) => ({
    group,
    successRate: samples.filter(s => s.prediction === true).length / samples.length
  }));

  const maxRate = Math.max(...successRates.map(r => r.successRate));
  const minRate = Math.min(...successRates.map(r => r.successRate));

  return minRate / maxRate;
}
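
To make the 80% rule concrete: if one group's approval rate is 45% and another's is 70%, the ratio is 0.45 / 0.70 ≈ 0.64. That falls below the 0.8 threshold, so the report above would flag the system for adverse impact.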

GDPR Article 22 (Automated Decisions)

Implement rights for data subjects:

interface AutomatedDecisionCompliance {
  systemId: string;
  fullyAutomated: boolean;
  significantEffect: boolean;
  affectedRights: string[];
  safeguards: string[];
  subjectRights: {
    humanReviewAvailable: boolean;
    explanationRequired: boolean;
    rightsOfObjection: boolean;
  };
}

async function assessArticle22Compliance(
  systemConfig: any
): Promise<AutomatedDecisionCompliance> {
  const fullyAutomated = systemConfig.humanOversightLevel === 'none';
  const significantEffect = systemConfig.affectsLegalRights || systemConfig.affectsOpportunities;

  // Article 22 applies if:
  // 1. Decision is fully automated (no human involvement)
  // 2. Decision produces significant legal or similar effect

  if (!fullyAutomated || !significantEffect) {
    return {
      systemId: systemConfig.id,
      fullyAutomated,
      significantEffect,
      affectedRights: [],
      safeguards: [],
      subjectRights: {
        humanReviewAvailable: true,
        explanationRequired: false,
        rightsOfObjection: false
      }
    };
  }

  // Article 22 applies: implement safeguards
  return {
    systemId: systemConfig.id,
    fullyAutomated,
    significantEffect,
    affectedRights: [
      'Right to human review of decision',
      'Right to explanation',
      'Right to object to decision'
    ],
    safeguards: [
      'All automated decisions must be reviewed by human',
      'Provide clear explanation of decision logic',
      'Allow data subject to request human review within 30 days',
      'Implement system to track objections'
    ],
    subjectRights: {
      humanReviewAvailable: true,
      explanationRequired: true,
      rightsOfObjection: true
    }
  };
}
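
For example, a fully automated credit-scoring system with no human in the loop triggers Article 22, and the assessment returns the full set of safeguards. A usage sketch with hypothetical config values:

// Sketch: Article 22 applies here because the system is fully automated
// and affects legal rights (config values are hypothetical)
const compliance = await assessArticle22Compliance({
  id: 'credit-scorer-v2',
  humanOversightLevel: 'none',
  affectsLegalRights: true,
  affectsOpportunities: true
});
// compliance.safeguards now lists human review, explanation,
// and objection-handling requirements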

async function provideExplanation(
  decisionId: string,
  dataSubjectId: string
): Promise<string> {
  const decision = await getDecision(decisionId);

  const explanationPrompt = `
    Explain this AI decision to the affected person in plain language:

    System output: ${JSON.stringify(decision.systemOutput)}
    Input features: ${JSON.stringify(decision.inputFeatures)}
    Final decision: ${decision.decision}

    Requirements:
    - Explain in plain language (no jargon)
    - State the key factors that influenced the decision
    - Be transparent about limitations
    - Include information about their rights

    Return a 2-3 paragraph explanation.
  `;

  return llm.generate(explanationPrompt);
}

async function handleDataSubjectRights(
  requestType: 'explanation' | 'objection' | 'deletion',
  dataSubjectId: string,
  decisionId: string
): Promise<void> {
  const request = {
    id: generateId(),
    dataSubjectId,
    requestType,
    decisionId,
    createdAt: new Date(),
    requiredResponseDate: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000) // 30 days
  };

  await complianceDb.insert('data_subject_requests', request);

  // Route to appropriate handler
  if (requestType === 'explanation') {
    const explanation = await provideExplanation(decisionId, dataSubjectId);
    await sendExplanation(dataSubjectId, explanation);
  } else if (requestType === 'objection') {
    // Flag for human review
    await escalateForReview(decisionId, 'data_subject_objection');
  } else if (requestType === 'deletion') {
    // Delete decision record
    await deleteDecisionRecord(decisionId);
  }
}

Data Minimization for AI

Collect only necessary data:

async function assessDataMinimization(
  systemConfig: any
): Promise<{
  necessaryFeatures: string[];
  unnecessaryFeatures: string[];
  recommendations: string[];
}> {
  const allFeatures = systemConfig.trainingFeatures;
  const modelImportance = await calculateFeatureImportance(systemConfig.modelId);

  const necessaryFeatures: string[] = [];
  const unnecessaryFeatures: string[] = [];

  for (const feature of allFeatures) {
    const importance = modelImportance[feature] || 0;

    if (importance < 0.01) {
      unnecessaryFeatures.push(feature);
    } else if (importance > 0.05) {
      necessaryFeatures.push(feature);
    }
    // Features between 0.01 and 0.05 are borderline: neither flagged for
    // removal nor confirmed necessary, so review them manually
  }

  const recommendations = [
    `Remove ${unnecessaryFeatures.length} low-importance features to reduce data collection burden`,
    'Consider proxy-free features (avoid using race, ethnicity, or other protected classes)',
    'Implement data retention policy: delete training data after 2 years'
  ];

  return {
    necessaryFeatures,
    unnecessaryFeatures,
    recommendations
  };
}
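
The retention recommendation above can be enforced mechanically rather than by policy document alone. A minimal sketch, assuming a hypothetical trainingDataDb handle with the same parameterized-query interface used elsewhere in this guide:

// Sketch: purge training records older than the two-year retention window.
// trainingDataDb is an assumed database handle.
async function enforceRetentionPolicy(): Promise<void> {
  const cutoff = new Date();
  cutoff.setFullYear(cutoff.getFullYear() - 2); // two-year retention window

  await trainingDataDb.query(
    `DELETE FROM training_records WHERE collected_at < ?`,
    [cutoff]
  );
}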

Audit Trail Requirements

Maintain compliance-ready logs:

interface AuditLogEntry {
  timestamp: Date;
  actor: string;
  action: string;
  resourceId: string;
  changesBefore?: Record<string, any>;
  changesAfter?: Record<string, any>;
  approvalStatus?: 'pending' | 'approved' | 'rejected';
  justification?: string;
}

async function logAuditEvent(entry: AuditLogEntry): Promise<void> {
  // Immutable append-only log
  await auditLogDb.insert('audit_trail', {
    ...entry,
    timestamp: new Date(),
    hash: hashEntry(entry) // Prevent tampering
  });

  // Alert on suspicious activities
  if (entry.action === 'model_update' && entry.approvalStatus !== 'approved') {
    await alertCompliance('Unapproved model change detected');
  }
}
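
The hashEntry call above carries the tamper-evidence. One common approach, sketched here with Node's built-in crypto module, is to chain each entry's hash to its predecessor so any retroactive edit invalidates every hash that follows (threading previousHash through logAuditEvent is an assumption, not shown above):

import { createHash } from 'crypto';

// Sketch: hash each entry together with its predecessor's hash so that
// editing or deleting any past entry breaks the rest of the chain
function hashEntry(entry: AuditLogEntry, previousHash: string = ''): string {
  return createHash('sha256')
    .update(previousHash)           // link to the prior entry
    .update(JSON.stringify(entry))  // commit to this entry's contents
    .digest('hex');
}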

async function retrieveAuditLog(
  resourceId: string,
  startDate: Date,
  endDate: Date
): Promise<AuditLogEntry[]> {
  return auditLogDb.query(`
    SELECT * FROM audit_trail
    WHERE resourceId = ? AND timestamp BETWEEN ? AND ?
    ORDER BY timestamp ASC
  `, [resourceId, startDate, endDate]);
}

Incident Reporting

Respond to AI failures:

interface IncidentReport {
  id: string;
  systemId: string;
  incidentType: 'bias' | 'failure' | 'breach' | 'misuse';
  description: string;
  affectedDataSubjects: number;
  severity: 'critical' | 'high' | 'medium' | 'low';
  rootCause?: string;
  remediationSteps: string[];
  reportedToRegulator: boolean;
  regulatoryReference?: string;
}

async function reportIncident(
  systemId: string,
  incidentType: IncidentReport['incidentType'],
  description: string
): Promise<IncidentReport> {
  const report: IncidentReport = {
    id: generateId(),
    systemId,
    incidentType,
    description,
    affectedDataSubjects: await estimateAffectedDataSubjects(systemId),
    severity: assessIncidentSeverity(systemId, incidentType),
    remediationSteps: [],
    reportedToRegulator: false // flipped once a regulatory report is filed
  };

  await complianceDb.insert('incidents', report);

  // Notify compliance team
  await notifyCompliance(report);

  // Assess if reporting to regulator required
  if (report.severity === 'critical' || report.affectedDataSubjects > 10000) {
    await scheduleRegulatoryReporting(report);
  }

  return report;
}

async function scheduleRegulatoryReporting(report: IncidentReport): Promise<void> {
  // GDPR: notify regulator within 72 hours for data breaches
  const deadline = new Date(Date.now() + 72 * 60 * 60 * 1000);

  await complianceDb.insert('regulatory_reporting_queue', {
    incidentId: report.id,
    dueDate: deadline,
    status: 'pending'
  });
}

Compliance Checklist

Automated compliance verification:

async function generateComplianceChecklist(
  systemId: string
): Promise<{
  items: Array<{ item: string; status: 'complete' | 'pending' | 'failed' }>;
  overallCompliance: number;
}> {
  const system = await getSystem(systemId);
  const riskLevel = assessAIRiskCategory(system);

  const baseChecklist = [
    { item: 'System documented and registered', status: 'pending' as const },
    { item: 'Technical documentation complete', status: 'pending' as const },
    { item: 'Audit trail implemented', status: 'pending' as const },
    { item: 'Data minimization assessed', status: 'pending' as const }
  ];

  if (riskLevel === 'high-risk') {
    baseChecklist.push(
      { item: 'Fairness testing completed', status: 'pending' as const },
      { item: 'Human oversight implemented', status: 'pending' as const },
      { item: 'Impact assessment completed', status: 'pending' as const }
    );
  }

  // Verify each item
  const verifiedChecklist = await Promise.all(
    baseChecklist.map(async item => ({
      ...item,
      status: await verifyComplianceItem(systemId, item.item)
    }))
  );

  const completeCount = verifiedChecklist.filter(i => i.status === 'complete').length;
  const overallCompliance = completeCount / verifiedChecklist.length;

  return {
    items: verifiedChecklist,
    overallCompliance
  };
}

Checklist

  • Assess AI system risk under EU AI Act (prohibited/high/limited/minimal)
  • Document high-risk systems with comprehensive technical documentation
  • Implement human oversight for high-risk decisions
  • Run fairness and accuracy tests across demographic groups
  • Measure disparate impact ratio (target > 0.8)
  • Test adversarial robustness and edge cases
  • Assess Article 22 compliance for automated decisions
  • Provide explanations for decisions affecting data subjects
  • Implement data minimization: remove low-importance features
  • Maintain immutable audit trails for all decisions and changes
  • Set up incident reporting for AI failures and bias detection
  • Schedule regulatory reporting for critical incidents within 72 hours
  • Conduct quarterly compliance reviews
  • Train team on bias detection and fairness
  • Document all decisions, testing, and remediation

Conclusion

AI compliance isn't optional; it's the foundation of sustainable, trustworthy AI systems. The EU AI Act, GDPR, and emerging governance standards together define a blueprint for responsible deployment. Start by categorizing your systems by risk, documenting high-risk systems thoroughly, and implementing human oversight. Test rigorously for fairness and accuracy, maintain audit trails, and be transparent with affected users. As regulations evolve, strong documentation and monitoring practices position you to adapt quickly.