
DPIF in the Regulatory Landscape

Framework mapping current as of March 2026 · Reflects EU AI Act obligations in force and NIST AI RMF v1.0

DPIF was designed with regulatory interoperability as a core principle. It addresses a specific structural gap that existing frameworks leave open: deployment-level governance for AI-mediated digital representations of real persons.

Framework Alignment Status
EU AI Act · Aligned
NIST AI RMF 1.0 · Compatible
ISO/IEC 42001:2023 · Complementary
C2PA · Complementary
ISO/IEC JTC 1/SC 42 · Engagement Planned
IEEE Standards · Engagement Planned

DPIF v1.3 · March 2026 · CC BY 4.0


Positioning

Existing frameworks govern AI risk at the system level. None provide the deployment-level controls that digital representations of real persons require: the consent architecture, identity fidelity requirements, behavioural boundary specifications, and lifecycle governance that DPIF operationalises. DPIF fills this gap. It does not replace or supersede any of the frameworks it aligns with.

Alignment with Existing Frameworks

Each mapping below identifies areas of direct alignment, complementary coverage, and where DPIF extends beyond the scope of the referenced framework. The framing is consistent throughout: DPIF fills a specific gap these frameworks leave open — it does not render them redundant.

EU AI Act
Regulation (EU) 2024/1689 · In force August 2024, phased obligations to 2027

The EU AI Act entered into force in August 2024. Its obligations apply in phases: prohibited AI practices from February 2025, general-purpose AI model requirements from August 2025, and high-risk AI system obligations from August 2026. The Act establishes a risk-tiered framework for AI systems but does not address the deployment-level governance requirements that arise when AI mediates the identity and communications of specific, identifiable individuals.

DPIF's consent and authority controls map to Arts. 13–14 on transparency and human oversight. Its identity fidelity and semantic integrity requirements align with Art. 15's accuracy and robustness obligations. DPIF's four-tier context risk classification provides deployment-specific classification where the Act's risk categories operate at system level. For organisations deploying AI systems that represent real persons in high-risk contexts, DPIF offers concrete implementation evidence for Act compliance.

Direct Alignment
Arts. 13–14 transparency and human oversight; Art. 15 accuracy and robustness; Art. 9 risk management for high-risk AI systems
Complementary
DPIF's context risk tiers provide deployment-specific classification where the Act's risk categories operate at system level
DPIF Extension
Consent revocation protocols, identity drift monitoring, inter-deployment conflict resolution, and posthumous governance — none addressed in the Act's current scope
NIST AI Risk Management Framework
AI RMF 1.0 · NIST · January 2023

The NIST AI RMF provides a voluntary, non-prescriptive framework for managing AI risk through four core functions: Govern, Map, Measure, and Manage. Its broad applicability is a strength, but it intentionally does not prescribe domain-specific controls. For organisations deploying AI representations of real persons, the RMF leaves the specific control architecture to the implementing organisation.

DPIF's seven control categories and 18-control assessment structure directly support the Map function's emphasis on context and risk identification. DPIF's Scoring Rubric produces concrete Measure function artefacts for DRRP (digital representation of a real person) deployment contexts. The framework's lifecycle governance and audit requirements — including logging retention requirements calibrated by context risk tier — parallel the Manage function's focus on deployment monitoring.

Direct Alignment
Govern and Map functions; contextual risk identification and classification; deployment monitoring and audit under the Manage function
Complementary
DPIF's Scoring Rubric produces concrete Measure function implementation artefacts specific to DRRP deployment contexts
DPIF Extension
Principal-specific consent architecture, identity fidelity thresholds, and authority boundary enforcement — not addressed in the RMF's general scope
ISO/IEC 42001:2023
AI Management System Standard · ISO · December 2023

ISO/IEC 42001 establishes requirements for an artificial intelligence management system (AIMS), providing a risk-based governance approach consistent with other ISO management system standards. It addresses organisational governance of AI at the level of systems and processes. DPIF is narrower in scope: it addresses operational deployment controls for the case where AI mediates the identity and communications of an identifiable real person.

DPIF's instrument suite — scoring rubric, deployment lifecycle specification, conflict resolution procedures — provides concrete implementation artefacts that can support an organisation's ISO 42001 certification evidence. The governance pillars map to ISO 42001's risk-based approach, while DPIF's deployment-level controls extend into territory the standard does not prescribe.

Direct Alignment
Risk-based governance approach; organisational controls; documentation and audit trail requirements consistent with AIMS structure
Complementary
DPIF instruments provide domain-specific implementation artefacts that support ISO 42001 certification evidence in DRRP deployment contexts
DPIF Extension
ISO 42001 addresses management systems; DPIF addresses operational deployment controls — the deployment-level DRRP governance layer is outside ISO 42001's scope
C2PA
Coalition for Content Provenance and Authenticity · Technical specification

C2PA provides a technical standard for content provenance and authenticity, enabling cryptographic attribution of digital content to its origin. Its focus is on content-level provenance — establishing what was created, when, and by what system. DPIF's focus is on deployment-level governance — establishing who has consented to representation, within what boundaries, and under what conditions.

The two frameworks are architecturally compatible at the disclosure and attribution controls layer. DPIF's AC-2.3 (Output Attribution Traceability) and DC-4.1 (Contextual Disclosure Enforcement) controls map naturally to C2PA's provenance model. Formal interoperability guidance is planned.

Direct Alignment
Content attribution and provenance; disclosure mechanisms; audit trail requirements consistent with C2PA's cryptographic provenance model
Complementary
C2PA establishes content provenance at the technical layer; DPIF establishes consent and authority governance at the deployment layer
DPIF Extension
C2PA does not address consent architecture, identity fidelity, or lifecycle governance — the full DPIF deployment control set is outside C2PA's technical scope

The Gap

None of the frameworks above prescribe deployment-level controls for digital representations of specific, identifiable individuals. The consent architecture, identity fidelity requirements, behavioural boundary enforcement, and lifecycle governance that DPIF operationalises exist in no current international standard.

What Existing Frameworks Do Not Cover

The following governance capabilities are specific to DPIF. They are not addressed — even in principle — by the EU AI Act, NIST AI RMF, ISO/IEC 42001, or C2PA. Each addresses a structural risk that emerges when AI mediates an identifiable person's identity and communications at scale.

Identity Fidelity & Drift Monitoring
Pre-deployment validation of recognisable likeness, voice fidelity, and behavioural consistency. Mandatory revalidation when model or rendering engine updates occur. Absence of monitoring constitutes unacceptable risk to Presence Integrity under DPIF.
Consent Architecture & Revocation
Formal Scope of Use declaration specifying purpose, medium, audience, and duration. Revocation timeframes calibrated by context risk tier: 72 hours (Low) to 1 hour (Regulated). Consent modifications versioned and logged. Revocation is irreversible.
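The tiered revocation windows can be sketched as a simple lookup keyed by context risk tier. Only the endpoints are stated in the text (72 hours for Low, 1 hour for Regulated); the two intermediate tier names and their timeframes below are illustrative assumptions, not DPIF-normative values:

```python
from datetime import timedelta

# Revocation deadlines by context risk tier. Endpoints (low, regulated)
# follow the stated timeframes; the intermediate tiers are assumed.
REVOCATION_WINDOWS = {
    "low": timedelta(hours=72),
    "moderate": timedelta(hours=24),   # assumed intermediate value
    "high": timedelta(hours=4),        # assumed intermediate value
    "regulated": timedelta(hours=1),
}

def revocation_deadline(requested_at, tier):
    """Latest time by which deactivation must complete for a tier."""
    return requested_at + REVOCATION_WINDOWS[tier]
```

A deployment in the Regulated tier requesting revocation at 09:00 must therefore complete deactivation by 10:00 the same day.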
Delegated Authority Boundary Enforcement
Formally documented communicative scope for each deployment with explicit exclusions. System-level block on autonomous generation of policy commitments, legal positions, or binding representations beyond defined authority. Authority cannot be self-expanded.
Deployment Lifecycle Governance
Formal state machine: Provisioning → Active → Suspended → Revoked → Archived. Only Active deployments are eligible for certification. Revocation is permanent. Active-to-Archived transitions are prohibited — all deployments must pass through Suspended or Revoked.
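The lifecycle rules above admit a compact transition-table sketch. The table is inferred from the prose, not copied from normative DPIF text: the Suspended-to-Active reinstatement path in particular is an assumption the text does not state explicitly.

```python
# Lifecycle states: Provisioning -> Active -> Suspended -> Revoked ->
# Archived. Active -> Archived is prohibited; revocation is permanent.
ALLOWED_TRANSITIONS = {
    "provisioning": {"active"},
    "active": {"suspended", "revoked"},       # no direct path to archived
    "suspended": {"active", "revoked", "archived"},  # reinstatement assumed
    "revoked": {"archived"},                  # revocation is permanent
    "archived": set(),                        # terminal state
}

def transition(state, target):
    """Move to `target`, raising on any transition the table forbids."""
    if target not in ALLOWED_TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

def certifiable(state):
    """Only Active deployments are eligible for certification."""
    return state == "active"
```

Encoding the rules as a table makes the two prohibitions auditable: `"archived"` is absent from the Active row, and the Revoked row permits archiving only.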
Inter-Deployment Conflict Resolution
When multiple deployments of the same or related DRRP produce contradictory outputs or conflicting authority, DPIF provides a formal precedence hierarchy and resolution procedure. No current international standard addresses multi-deployment governance for a single principal.
Posthumous & Incapacitation Governance
Default suspension timelines in the absence of an Advance Consent Instrument: 14 days post-death, 30 days for permanent incapacity, 180 days for temporary incapacity. Advance Consent Instruments have a maximum 5-year sunset, renewable. No comparable provision exists in any current AI governance framework.
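The default timelines and the Advance Consent Instrument (ACI) sunset above can be expressed as configuration. A minimal sketch; the function and key names are illustrative, not DPIF-defined:

```python
from datetime import date, timedelta

# Default suspension timelines absent an Advance Consent Instrument,
# per the stated values: 14 days post-death, 30 days for permanent
# incapacity, 180 days for temporary incapacity.
DEFAULT_SUSPENSION = {
    "death": timedelta(days=14),
    "permanent_incapacity": timedelta(days=30),
    "temporary_incapacity": timedelta(days=180),
}
ACI_MAX_SUNSET_YEARS = 5  # renewable

def suspension_due(event_date, event):
    """Date by which the deployment must be suspended absent an ACI."""
    return event_date + DEFAULT_SUSPENSION[event]

def aci_valid(executed_on, on_date):
    """An ACI lapses 5 years after execution unless renewed."""
    try:
        sunset = executed_on.replace(year=executed_on.year + ACI_MAX_SUNSET_YEARS)
    except ValueError:  # ACI executed on 29 Feb, sunset year not a leap year
        sunset = executed_on.replace(
            year=executed_on.year + ACI_MAX_SUNSET_YEARS, day=28)
    return on_date <= sunset
```

For example, a death event recorded on 1 March triggers mandatory suspension by 15 March unless a valid, unexpired ACI directs otherwise.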

Standards Body Engagement

The Presence Authority is pursuing engagement with international standards bodies to position DPIF within the emerging global governance landscape for AI-mediated identity. DPIF's deployment-level specificity addresses a gap not currently covered by any active standards programme.

ISO/IEC JTC 1/SC 42
Artificial Intelligence · Subcommittee
ISO/IEC JTC 1/SC 42 is the primary international subcommittee for AI standardisation, covering AI concepts, terminology, trustworthiness, and governance. Its work programme includes AI risk management, bias mitigation, and explainability. DPIF's deployment-level controls for digital representations of persons address a use case not currently within scope of any active SC 42 work item.
The Presence Authority is preparing a formal contribution to ISO/IEC JTC 1/SC 42 proposing a new work item for deployment-level governance of AI-mediated representations of real persons.
IEEE Standards Association
IEEE SA · AI Ethics & Governance programmes
IEEE Standards Association's AI Ethics programme includes work on autonomous systems, data privacy, and algorithmic bias. The IEEE 7000 series and related initiatives address the ethical and societal implications of autonomous and intelligent systems. DPIF's governance architecture for consent, identity integrity, and lifecycle management of digital representations of persons is relevant to the IEEE SA's evolving AI governance portfolio.
The Presence Authority is preparing a formal contribution to IEEE Standards Association addressing consent, identity integrity, and lifecycle governance for AI-mediated representations of real persons.

Regulatory Consultation & Standards Alignment

The Presence Authority engages with regulators, policymakers, and standards bodies on DPIF's relevance to emerging governance requirements. If you are assessing DPIF's applicability to a regulatory consultation, standards contribution, or organisational compliance programme, we are available for direct technical discussion.

01
Regulatory mapping. Technical briefings on how DPIF maps to specific regulatory obligations, including the EU AI Act's phased requirements for high-risk AI systems.
02
Standards contribution. Structured support for organisations seeking to reference or cite DPIF in regulatory submissions, consultation responses, or standards contributions.
03
Advisory engagement. The Presence Authority is forming an advisory board with representation from AI governance, data protection law, digital identity standards, and regulated sector practice. Enquiries welcome.
04
DPIF is open. All normative instruments are published under CC BY 4.0 at github.com/PresenceAuthority/DPIF. Citation and adaptation are explicitly permitted under the licence terms.