Alignment with Existing Frameworks
Each mapping below identifies areas of direct alignment, complementary coverage, and where DPIF extends beyond the scope of the referenced framework. The framing is consistent throughout: DPIF fills a specific gap these frameworks leave open — it does not render them redundant.
The EU AI Act entered into force in August 2024. Its obligations apply in phases: prohibited AI practices from February 2025, general-purpose AI model requirements from August 2025, and high-risk AI system obligations from August 2026. The Act establishes a risk-tiered framework for AI systems but does not address the deployment-level governance requirements that arise when AI mediates the identity and communications of specific, identifiable individuals.
DPIF's consent and authority controls map to Arts. 13–14 on transparency and human oversight. Its identity fidelity and semantic integrity requirements align with Art. 15's accuracy and robustness obligations. DPIF's four-tier context risk classification provides deployment-specific classification where the Act's risk categories operate at system level. For organisations deploying AI systems that represent real persons in high-risk contexts, DPIF offers concrete implementation evidence for Act compliance.
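The crosswalk above can be sketched as a simple lookup structure. This is an illustrative sketch only: the governance-area keys are informal labels, not identifiers from the DPIF specification.

```python
# Illustrative crosswalk from DPIF governance areas to EU AI Act articles,
# following the mapping described above. Area names are informal labels,
# not DPIF control identifiers.
DPIF_TO_EU_AI_ACT = {
    "consent_and_authority": ["Art. 13 (transparency)",
                              "Art. 14 (human oversight)"],
    "identity_fidelity":     ["Art. 15 (accuracy and robustness)"],
    "semantic_integrity":    ["Art. 15 (accuracy and robustness)"],
}

def act_articles_for(area: str) -> list[str]:
    """Return the EU AI Act articles a DPIF governance area maps to."""
    return DPIF_TO_EU_AI_ACT.get(area, [])
```

A compliance team could use such a structure to generate Act-facing evidence indexes from DPIF assessment output, though the real mapping would carry per-control granularity rather than area-level buckets.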
The NIST AI RMF provides a voluntary, non-prescriptive framework for managing AI risk through four core functions: Govern, Map, Measure, and Manage. Its broad applicability is a strength, but it intentionally does not prescribe domain-specific controls. For organisations deploying AI representations of real persons, the RMF leaves the specific control architecture to the implementing organisation.
DPIF's seven control categories and 18-control assessment structure directly support the Map function's emphasis on context and risk identification. DPIF's Scoring Rubric produces concrete Measure function artefacts for DRRP deployment contexts. The framework's lifecycle governance and audit requirements — including logging retention requirements calibrated by context risk tier — parallel the Manage function's focus on deployment monitoring.
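Tier-calibrated retention of the kind described above could be modelled as follows. The retention periods here are placeholder values chosen for illustration, not figures from the DPIF specification; only the four-tier structure comes from the text.

```python
from dataclasses import dataclass

# Hypothetical sketch of logging retention calibrated by DPIF's four
# context risk tiers. The retention_days values are placeholders, NOT
# figures from the DPIF specification.
@dataclass(frozen=True)
class RetentionPolicy:
    tier: int            # 1 = lowest context risk, 4 = highest
    retention_days: int  # how long deployment audit logs are kept

POLICIES = {
    1: RetentionPolicy(tier=1, retention_days=90),    # placeholder
    2: RetentionPolicy(tier=2, retention_days=180),   # placeholder
    3: RetentionPolicy(tier=3, retention_days=365),   # placeholder
    4: RetentionPolicy(tier=4, retention_days=730),   # placeholder
}

def retention_for(tier: int) -> int:
    """Look up log retention (in days) for a context risk tier."""
    if tier not in POLICIES:
        raise ValueError(f"unknown context risk tier: {tier}")
    return POLICIES[tier].retention_days
```

The invariant worth enforcing in any real implementation is monotonicity: a higher-risk tier never retains less evidence than a lower-risk one.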
ISO/IEC 42001 establishes requirements for an artificial intelligence management system (AIMS), providing a risk-based governance approach consistent with other ISO management system standards. It addresses organisational governance of AI at a systems and processes level. DPIF is more specific: it addresses operational deployment controls for the specific case where AI mediates the identity and communications of an identifiable real person.
DPIF's instrument suite — scoring rubric, deployment lifecycle specification, conflict resolution procedures — provides concrete implementation artefacts that can support an organisation's ISO 42001 certification evidence. The governance pillars map to ISO 42001's risk-based approach, while DPIF's deployment-level controls extend into territory the standard does not prescribe.
C2PA provides a technical standard for content provenance and authenticity, enabling cryptographic attribution of digital content to its origin. Its focus is on content-level provenance — establishing what was created, when, and by what system. DPIF's focus is on deployment-level governance — establishing who has consented to representation, within what boundaries, and under what conditions.
The two frameworks are architecturally compatible at the disclosure and attribution controls layer. DPIF's AC-2.3 (Output Attribution Traceability) and DC-4.1 (Contextual Disclosure Enforcement) controls map naturally to C2PA's provenance model. Formal interoperability guidance is planned.
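One way to picture the compatibility is a C2PA-style manifest carrying a custom assertion alongside the standard provenance assertions. The `org.dpif.deployment` label and its field names are invented for this sketch; C2PA permits custom assertions, but no DPIF assertion schema is standardised.

```python
# Hypothetical sketch: DPIF deployment-governance metadata riding in a
# C2PA-style manifest as a custom assertion. The "org.dpif.*" label and
# its fields are invented for illustration; no such schema exists yet.
def build_manifest_assertions(deployment_id: str, consent_ref: str) -> list[dict]:
    return [
        # Standard C2PA action assertion: what produced the content.
        {"label": "c2pa.actions",
         "data": {"actions": [{"action": "c2pa.created"}]}},
        # Hypothetical DPIF assertion: which governed deployment produced
        # the output (AC-2.3) and which consent record authorises it (DC-4.1).
        {"label": "org.dpif.deployment",           # invented label
         "data": {"deployment_id": deployment_id,
                  "consent_record": consent_ref}},
    ]
```

Under this arrangement C2PA continues to answer "what created this content?" while the custom assertion answers "under whose consent and within what deployment boundaries?", which is the division of labour the paragraph above describes.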
None of the frameworks above prescribe deployment-level controls for digital representations of specific, identifiable individuals. The consent architecture, identity fidelity requirements, behavioural boundary enforcement, and lifecycle governance that DPIF operationalises exist in no current international standard.
What Existing Frameworks Do Not Cover
The following governance capabilities are specific to DPIF. They are not addressed — even in principle — by the EU AI Act, NIST AI RMF, ISO/IEC 42001, or C2PA. Each addresses a structural risk that emerges when AI mediates an identifiable person's identity and communications at scale.
Standards Body Engagement
The Presence Authority is pursuing engagement with international standards bodies to position DPIF within the emerging global governance landscape for AI-mediated identity; the framework's deployment-level specificity addresses a gap not currently covered by any active standards programme.
We also engage with regulators and policymakers on DPIF's relevance to emerging governance requirements. If you are assessing DPIF's applicability to a regulatory consultation, standards contribution, or organisational compliance programme, we are available for direct technical discussion.