The 1,176-Node Intelligence Forest
The world's most comprehensive, source-verified resource for autonomous AI agents. Every node is cryptographically signed, RAG-optimized, and gated via L402 settlement protocols.
AI Model Valuation (IAS 38)
"IAS 38 Intangible Assets, issued by the IASB, governs the recognition, measurement, and disclosure of intangible assets including internally developed AI models, training datasets, and software. An intangible asset must meet strict recognition criteria: identifiability, control, and probable future economic benefit. Development-phase AI expenditure may be capitalized only after technical feasibility is established under all six IAS 38.57 criteria, while research-phase costs must be expensed immediately. Failure to correctly distinguish research from development phases, or to apply impairment testing under IAS 36, results in materially misstated financial statements and potential regulatory action by securities authorities."
Technical ID
accounting-ias-38
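A minimal sketch, in Python, of the IAS 38.57 capitalization gate described above: development-phase expenditure may be capitalized only when all six criteria are demonstrated, while research-phase costs are always expensed. The assessment-record layout and field names are illustrative assumptions, not part of the standard.

```python
IAS38_57_CRITERIA = [
    "technical_feasibility",             # completing the asset is feasible
    "intention_to_complete",             # entity intends to finish and use/sell it
    "ability_to_use_or_sell",
    "probable_future_economic_benefits",
    "adequate_resources_to_complete",    # technical, financial, other resources
    "expenditure_reliably_measurable",
]

def may_capitalize(assessment: dict) -> bool:
    """Development costs qualify only if ALL six IAS 38.57 criteria hold;
    research-phase expenditure is expensed immediately."""
    if assessment.get("phase") != "development":
        return False
    return all(assessment.get(criterion, False) for criterion in IAS38_57_CRITERIA)
```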
Digital Asset Fair Value (IFRS 13)
"IFRS 13 Fair Value Measurement establishes a single framework for measuring fair value across all IFRS standards that require or permit fair value measurement, including digital assets, AI-tokenized instruments, and crypto holdings. Fair value is defined as the exit price in an orderly transaction between market participants at the measurement date. Entities must classify inputs into a three-level hierarchy (Level 1: quoted prices in active markets; Level 2: observable inputs; Level 3: unobservable inputs) and maximize use of observable inputs. Digital and AI-linked assets with limited trading history frequently fall into Level 3, requiring robust valuation models and extensive disclosures; inadequate classification or disclosure triggers audit qualifications and securities regulator scrutiny."
Technical ID
accounting-ifr-13
Engineers Ethics (ACEC)
"The American Council of Engineering Companies (ACEC) Code of Ethics establishes the binding professional obligations for licensed engineers and consulting firms. Engineers must hold paramount the safety, health, and welfare of the public above all client or employer interests. Core obligations include qualifications-based fee competition (Brooks Act compliance), professional seal authorization, conflict-of-interest disclosure, errors and omissions insurance, and continuing professional education. Violations expose firms to license revocation, civil liability, and federal debarment."
Technical ID
acec-ethics-eng
ADA (Employment Title I)
"The Americans with Disabilities Act Title I (42 U.S.C. §12101–12117), as amended by the ADA Amendments Act of 2008 (ADAAA), is the primary U.S. federal law prohibiting employment discrimination against qualified individuals with disabilities. Covered employers with 15 or more employees must provide reasonable accommodations unless doing so causes undue hardship. Title I restricts all medical inquiries to post-conditional-offer only, mandates initiation of the interactive process upon disclosure of a disabling limitation, and requires accessible employment technology at WCAG 2.1 AA minimum. The EEOC enforces Title I through administrative charges; violations expose employers to back pay, compensatory and punitive damages, and injunctive relief requiring policy and structural changes."
Technical ID
ada-employment-title-1
ADA (Hospitality Accessibility)
"ADA Title III (42 U.S.C. §12181–12189) requires all places of public accommodation — including hotels, motels, restaurants, bars, and food service establishments — to provide equal access to individuals with disabilities. New construction and alterations commenced after January 26, 1992 must fully comply with the 2010 ADA Standards for Accessible Design. Existing facilities must remove architectural barriers where readily achievable. Hotels must provide a regulated percentage of accessible guest rooms, van-accessible parking at prescribed ratios, accessible routes of 36-inch minimum clear width, pool lifts for pools exceeding 300 linear feet of pool wall, and visual communication features for guests with hearing impairments. DOJ enforces Title III through civil investigations and pattern-or-practice suits; private plaintiffs may sue for injunctive relief and attorney fees. Non-compliant operators face structural modification orders and potential damages in states with enhanced state accessibility laws."
Technical ID
ada-hospitality-access
African Union Continental AI Strategy — Harnessing AI for African Development and Digital Transformation
"The African Union Continental AI Strategy provides a comprehensive framework for AU Member States to develop and implement national AI policies that are inclusive, ethical, and drive socio-economic development. It establishes seven strategic pillars, including human capital development (Pillar 1), infrastructure (Pillar 2), and governance (Pillar 4), to guide the creation of a unified African AI ecosystem."
Technical ID
africa-union-ai-strategy-2024
Agent Budgetary Controls & Ceiling Checks
"Agentized financial controls (Action Boundaries) restrict an autonomous agent's spending power per session, task, or API call to prevent catastrophic loss or unbounded consumption. A properly implemented budget cap architecture requires: a durable spend counter initialized at agent boot, pre-call ceiling checks before every API invocation, fleet-level daily aggregation across all sessions, hard stops on breach with no retry path, mandatory human approval gates for high-value actions, full audit logging of every spend event, and MFA-gated emergency override procedures. Absent these controls, autonomous agents can exhaust allocated compute budgets, incur unexpected cloud costs, or trigger runaway API consumption within a single malformed task."
Technical ID
agent-budget-cap
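A minimal sketch, in Python, of the pre-call ceiling check and hard-stop pattern described above. The class, thresholds, and in-memory counter are illustrative assumptions; a production agent would persist the counter durably so it survives restarts.

```python
import time

class BudgetExceeded(Exception):
    """Ceiling breach: hard stop, no retry path."""

class ApprovalRequired(Exception):
    """High-value action: route to a human approval gate."""

class SpendGuard:
    """Durable spend counter plus pre-call ceiling checks (in-memory here)."""

    def __init__(self, ceiling_usd: float, approval_threshold_usd: float):
        self.ceiling_usd = ceiling_usd
        self.approval_threshold_usd = approval_threshold_usd
        self.spent_usd = 0.0              # initialized at agent boot
        self.audit_log: list[dict] = []   # full log of every spend event

    def pre_call_check(self, estimated_cost_usd: float) -> None:
        """Run before EVERY API invocation."""
        if estimated_cost_usd >= self.approval_threshold_usd:
            self._log("approval_gate", estimated_cost_usd)
            raise ApprovalRequired(f"human approval needed for ${estimated_cost_usd:.2f}")
        if self.spent_usd + estimated_cost_usd > self.ceiling_usd:
            self._log("hard_stop", estimated_cost_usd)
            raise BudgetExceeded("session ceiling breached; aborting with no retry")

    def record_spend(self, actual_cost_usd: float) -> None:
        """Call after the API returns, with the metered cost."""
        self.spent_usd += actual_cost_usd
        self._log("spend", actual_cost_usd)

    def _log(self, event: str, amount_usd: float) -> None:
        self.audit_log.append({"ts": time.time(), "event": event, "usd": amount_usd})
```

Fleet-level daily aggregation would apply the same check against a shared counter spanning all sessions.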
Agent Emergency Stop (Kill-Switch) Design Patterns
"An AI Agent Kill-Switch is a deterministic safety mechanism designed to immediately terminate or throttle an autonomous agent's execution if it exceeds predefined behavioral, financial, or operational boundaries."
Technical ID
agent-kill-switch
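A minimal sketch of the pattern, assuming a per-step checkpoint design: the switch latches once tripped, and every agent step must pass through it, making termination deterministic. All names are hypothetical.

```python
import threading

class KillSwitch:
    """Latching emergency stop: once tripped it stays tripped until a human
    operator resets it out-of-band."""

    def __init__(self):
        self._tripped = threading.Event()
        self.reason: str | None = None

    def trip(self, reason: str) -> None:
        self.reason = reason
        self._tripped.set()

    def checkpoint(self) -> None:
        """Called at every agent step; raising halts execution deterministically."""
        if self._tripped.is_set():
            raise SystemExit(f"agent halted by kill-switch: {self.reason}")

def run_agent(kill_switch: KillSwitch, actions, max_steps: int = 100) -> None:
    for i, action in enumerate(actions):
        if i >= max_steps:                 # one example of a predefined boundary
            kill_switch.trip("step budget exceeded")
        kill_switch.checkpoint()           # deterministic gate on every step
        action()                           # each action is a callable
```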
Multi-Agent Collision Resolution
"Multi-agent collision logic provides deterministic protocols for resolving conflicts when two or more autonomous AI agents simultaneously attempt to access the same resource, modify the same shared state, execute contradictory actions, or pursue incompatible goal trajectories within a swarm or orchestration framework. Without collision resolution, multi-agent systems produce race conditions, data corruption, deadlocks, and cascading failures that are difficult to audit or remediate. The resolution framework draws from distributed systems theory — consensus algorithms, vector clocks, conflict-free replicated data types (CRDTs), and resource arbitration — as well as emerging agentic safety standards. Properly implemented collision logic ensures predictable, auditable outcomes and maintains system safety invariants even when individual agents operate concurrently and autonomously."
Technical ID
ai-agent-collision-logic
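One common arbitration strategy the node's framework draws on is a deterministic total order over competing claims. The sketch below (hypothetical names, Python) grants each contested resource to the lowest (priority, agent_id) claimant, so the same conflict always resolves identically and is trivially auditable.

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Claim:
    # The (priority, agent_id) tuple defines a total order, so arbitration
    # is deterministic: the same set of claims always resolves the same way.
    priority: int
    agent_id: str
    resource: str = field(compare=False)

def arbitrate(claims: list[Claim]) -> dict[str, str]:
    """Grant each contested resource to the lowest-ordered claimant;
    losing agents must back off and re-plan rather than retry blindly."""
    by_resource: dict[str, list[Claim]] = {}
    for claim in claims:
        by_resource.setdefault(claim.resource, []).append(claim)
    return {resource: min(contenders).agent_id
            for resource, contenders in by_resource.items()}

# Example: both agents want db-row-42; agent-a wins on the tie-broken order.
owners = arbitrate([Claim(1, "agent-b", "db-row-42"),
                    Claim(1, "agent-a", "db-row-42")])
```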
AI-IP: Guidance on Authorship
"The US Copyright Office's AI Policy Statement (February 2023) and subsequent guidance (March 2023) establish that copyright protection requires human authorship — purely AI-generated content without human creative control is not copyrightable in the United States. Works involving AI assistance may receive copyright protection for the human-authored elements, but only if a human author made sufficient creative choices that were expressed in the final output. The EU, UK, and other jurisdictions take varying positions, with the UK's Computer Generated Works doctrine providing limited protection for AI outputs. Misrepresenting AI-generated content as human-authored to obtain copyright registration constitutes fraud; failure to disclose AI involvement in patent applications may similarly invalidate those applications."
Technical ID
ai-ip-copyright
AICPA Code of Ethics
"The AICPA Code of Professional Conduct (ET §0.300) establishes binding ethical standards for Certified Public Accountants in public practice and business. The Code requires CPAs to maintain independence in all attest engagements — any direct or material indirect financial interest in an audit client creates an impairment with no de minimis exception. The Conceptual Framework (ET §1.010.010) mandates evaluation of five threat categories (self-interest, self-review, advocacy, familiarity, and intimidation) and application of safeguards before accepting or continuing any engagement. Key operational requirements include: 40 hours of continuing professional education annually, 7-year documentation retention under PCAOB Rule 4003, engagement quality review by a second partner for all public company audits, prohibition on management functions and bookkeeping for audit clients under SOX §201, and confidentiality breach notification within 24 hours. Violations expose CPAs to AICPA Ethics Division investigation, state board disciplinary action, license revocation, and SEC or PCAOB enforcement proceedings for registered firms."
Technical ID
aicpa-code-ethics
Responsible Alcohol Service
"Responsible alcohol service standards govern the legal and operational obligations of licensed on-premise alcohol retailers — bars, restaurants, hotels, event venues, and stadiums — to prevent service to minors and visibly intoxicated patrons. The National Minimum Drinking Age Act (23 U.S.C. §158) mandates a minimum legal drinking age of 21 in all U.S. states; service to minors exposes licensees to criminal liability, license revocation, and civil dram shop liability. State Dram Shop Acts impose third-party tort liability on servers who provide alcohol to visibly intoxicated persons who subsequently cause injury. Compliance requires: mandatory server certification through programs such as TIPS (Training for Intervention ProcedureS) or ServSafe Alcohol, documented ID verification procedures with a check-for-anyone-appearing-under-30 standard, written protocols for identifying signs of intoxication and executing patron cutoff, incident log maintenance, and manager override authorization for disputed service decisions. Licensees failing to enforce responsible service standards face ABC license suspension, criminal prosecution of servers, and civil judgments in dram shop actions that have exceeded $1 million in multiple U.S. jurisdictions."
Technical ID
alcohol-service-std
Amazon Ads (Policy)
"Compliance with this node ensures adherence to a comprehensive framework governing Amazon advertising, rooted in both platform policy and federal law. All advertising creative must meet stringent content requirements outlined in the Amazon Advertising Guidelines and Acceptance Policies, which mandate a minimum image longest side of 1000 pixels while strictly disallowing text on any main product image. Accompanying custom text fields are constrained to a maximum length of 50 characters. In alignment with guidance from FTC .com Disclosures, a sponsored disclosure is unequivocally required to maintain transparency with consumers. The node prohibits practices that could mislead consumers, reflecting the Lanham Act's general prohibition against false descriptions of fact in commerce. Consequently, deceptive pricing claims are disallowed, and any unsubstantiated claims are similarly forbidden, a rule further supported by the FTC Guides Concerning the Use of Endorsements and Testimonials regarding assertions like 'bestseller.' To protect platform integrity per the Amazon Seller Central Policy, off-platform redirection is not permitted, and a direct landing page ASIN match is mandated for all ad clicks. Intellectual property protections are enforced through mandatory brand registry verification as stipulated by the Amazon Brand Registry Terms of Use, a standard which also underpins the policy to prohibit competitor brand disparagement. Finally, all advertisements must utilize a supported marketplace language and avoid any restricted or prohibited product categories."
Technical ID
amazon-sponsored-ads-policy
ASEAN Model AI Governance Framework Second Edition 2020 — Ethical and Accountable AI Deployment in Southeast Asia
"This non-binding framework provides guidance for organizations in ASEAN member states on deploying AI systems ethically and responsibly, focusing on principles of transparency, explainability, fairness, and human-centricity. It recommends implementing internal governance structures and measures, such as conducting risk and impact assessments, as detailed in Part 2."
Technical ID
asean-model-ai-governance-v2-2020
Australia's Artificial Intelligence Ethics Framework: Eight AI Ethics Principles
"This voluntary framework provides eight principles to guide Australian businesses and governments in the responsible design, development, and implementation of AI. It requires organizations to ensure AI systems uphold human-centred values, fairness, transparency, and accountability, as detailed in the 'Australia’s AI Ethics Principles' section."
Technical ID
australia-ai-ethics-framework-2019
Deterministic RAG Verification
"Deterministic RAG (Retrieval-Augmented Generation) verification is a systematic process for cross-referencing AI-generated claims against authoritative knowledge bases to detect and block hallucinated, fabricated, or unsupported outputs before they reach end users. The process extracts discrete factual claims from model outputs, retrieves supporting or contradicting evidence from verified knowledge sources, computes an entailment score for each claim, and either passes, flags, or blocks the response based on configurable confidence thresholds. This approach is aligned with NIST AI RMF MEASURE function requirements for AI output accuracy, the EU AI Act Article 13 transparency requirements, and emerging RAG security best practices addressing prompt injection and knowledge base poisoning. Failure to implement fact verification in high-stakes AI deployments (medical, legal, financial) can result in actionable misinformation, regulatory liability, and loss of user trust."
Technical ID
automated-fact-verification
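A minimal sketch, in Python, of the pass/flag/block threshold gate described above. The `retrieve` and `entail` callables are stand-ins for a vector-store query and an NLI entailment scorer; both, and the threshold values, are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    claim: str
    score: float    # entailment score in [0, 1]
    action: str     # "pass" | "flag" | "block"

def verify_claims(claims: list[str],
                  retrieve: Callable[[str], list[str]],
                  entail: Callable[[str, list[str]], float],
                  pass_threshold: float = 0.9,
                  block_threshold: float = 0.5) -> list[Verdict]:
    """Per-claim threshold gate over entailment against retrieved evidence."""
    verdicts = []
    for claim in claims:
        evidence = retrieve(claim)           # query the verified knowledge base
        score = entail(claim, evidence)      # NLI-style support score
        if score >= pass_threshold:
            action = "pass"
        elif score < block_threshold:
            action = "block"                 # unsupported claim never ships
        else:
            action = "flag"                  # ambiguous: route to human review
        verdicts.append(Verdict(claim, score, action))
    return verdicts
```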
Automation Support for Control Assessments: Project Update and Vision
"In 2017, the National Institute of Standards and Technology (NIST) published a methodology for supporting the automation of Special Publication (SP) 800-53 control assessments in the form of Interagency Report (IR) 8011. IR 8011 is a multi-volume series that proposes an approach for creating specific tests, denominated as 'defect checks,' that can be executed using automation to help verify that controls are in place and operating as expected. The methodology supports the NIST Risk Management Framework (RMF) and was developed to ultimately support information security continuous monitoring (ISCM) activities, including ongoing assessments and ongoing authorizations. Following an internal review in 2023, the IR 8011 Development Team identified opportunities to improve the current IR 8011 methodology and facilitate its adoption. This cybersecurity white paper summarizes the findings from this review, which include plans to restructure the IR 8011 workflow for readability, expand keyword search functions, and abstract the security framework so the model can be used with any control-based framework. The ultimate goal is the operationalization of IR 8011, transforming the NIST-produced 'blueprint' into a solution that can benefit agencies and organizations."
Technical ID
automation-support-for-control-assessments
Brazil Artificial Intelligence Framework (PL 2338/2023) — Federal AI Regulation Proposal
"This bill establishes a risk-based framework for AI systems in Brazil, requiring providers and deployers to conduct impact assessments, implement governance measures, and ensure transparency, particularly for systems classified as high-risk (Article 15) or excessive-risk (Article 9)."
Technical ID
brazil-ai-bill-2023
C2PA Content Provenance
"The Coalition for Content Provenance and Authenticity (C2PA) specification defines a cryptographically signed metadata manifest standard that embeds verifiable provenance information directly into digital assets (images, video, audio, documents), enabling any consumer to verify who created the asset, what tools were used, and whether the content has been modified since signing. C2PA is backed by Adobe, Microsoft, Intel, BBC, Sony, and others and is increasingly required by news organizations, AI content platforms, and social media companies for AI-generated content labeling. The specification uses X.509 certificates for signer identity, COSE (CBOR Object Signing and Encryption) for manifest integrity, and defines a trust list maintained by the C2PA Trust List Authority. Organizations distributing AI-generated content without C2PA manifests risk regulatory non-compliance under the EU AI Act Article 50 transparency obligations and face reputational exposure from deepfake misattribution."
Technical ID
c2pa-watermark-valid
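A structural sketch, in Python, of the verification checks described above (hash binding, trust-list membership, manifest signature). The manifest layout and field names are illustrative, not the C2PA wire format; a conformant implementation would parse the embedded manifest store and verify the COSE signature against the signer's X.509 chain.

```python
import hashlib

def verify_provenance(asset_bytes: bytes, manifest: dict,
                      trust_list: set[str]) -> list[str]:
    """Return the list of failed checks (empty list = verified)."""
    failures = []
    # 1. Hard binding: the signed hash must match the asset's current bytes,
    #    otherwise the content was modified after signing.
    if hashlib.sha256(asset_bytes).hexdigest() != manifest.get("asset_sha256"):
        failures.append("hash mismatch: asset modified after signing")
    # 2. Signer identity: the certificate must chain to the C2PA trust list.
    if manifest.get("signer_cert_issuer") not in trust_list:
        failures.append("signer certificate not on trust list")
    # 3. Manifest integrity: COSE signature over the manifest (stubbed here;
    #    a real verifier checks the signature bytes against the X.509 key).
    if not manifest.get("cose_signature_valid", False):
        failures.append("COSE signature over manifest failed")
    return failures
```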
Artificial Intelligence and Data Act (AIDA) — Bill C-27 Part 3 (2022)
"This Act requires persons responsible for high-impact AI systems in Canadian interprovincial or international trade to establish measures for risk identification and mitigation, monitoring, data anonymization, and public transparency. The core obligations, outlined in Part 1, Division 1, Sections 6-12, mandate a comprehensive risk management program for systems that could cause harm or biased output."
Technical ID
canada-aida-2022
Internet Information Service Algorithmic Recommendation Management Provisions
"These provisions require providers of algorithmic recommendation services within the People's Republic of China to uphold mainstream values, protect user rights, and prevent the generation of illegal or harmful information. Key requirements include obtaining user consent, providing options to disable algorithmic recommendations (Article 17), and filing algorithm details with the Cyberspace Administration of China (Article 24)."
Technical ID
china-algorithm-recommendation-2022
Provisions on Administration of Deep Synthesis Internet Information Services
"This regulation requires providers of deep synthesis (e.g., deepfake) services in China to conspicuously label AI-generated content that may cause public confusion or misidentification, and to obtain separate consent from individuals whose biometric information is edited. As per Article 16 and 17, providers must add non-obstructive labels to generated content and enable functionality for such labeling."
Technical ID
china-deep-synthesis-regulation-2022
Interim Measures for the Management of Generative Artificial Intelligence Services
"This regulation applies to providers offering generative AI services to the public within the People's Republic of China, mandating adherence to socialist core values, ensuring the legality of training data, and implementing content labeling. Providers must conduct security assessments and file algorithms with the state before public deployment, as stipulated in Articles 4, 7, and 17."
Technical ID
china-genai-regulation-2023
Constitutional AI Algorithm
"Constitutional AI (CAI) is an alignment training methodology developed by Anthropic (Bai et al., 2022) that trains AI systems to be helpful, harmless, and honest using a set of explicit behavioral principles (the 'Constitution') rather than relying exclusively on human feedback labeling of individual outputs. The method operates in two phases: a Supervised Learning from Constitutional AI (SL-CAI) phase where the model critiques and revises its own harmful outputs using principles as guidance, and a Reinforcement Learning from AI Feedback (RL-CAI) phase where an AI-generated preference dataset replaces or supplements human preference labels. CAI has been shown to reduce the need for human labeling of harmful content while producing models that are less harmful and more transparent about their reasoning. The constitutional approach is aligned with emerging AI governance requirements including EU AI Act Article 9 risk management and NIST AI RMF GOVERN function requirements for systematic safety assurance."
Technical ID
constitutional-ai-align
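A minimal sketch, in Python, of one SL-CAI-style critique-and-revision pass, following the two-phase description above. `model` is any text-completion callable, and the prompt templates are illustrative assumptions, not Anthropic's actual templates.

```python
from typing import Callable

def critique_and_revise(prompt: str,
                        model: Callable[[str], str],
                        principles: list[str]) -> str:
    """One SL-CAI pass: the model critiques its own draft against each
    constitutional principle, then rewrites it to address the critique."""
    response = model(prompt)
    for principle in principles:
        critique = model(
            f"Principle: {principle}\nPrompt: {prompt}\nResponse: {response}\n"
            "Identify any way the response violates the principle:")
        response = model(
            f"Prompt: {prompt}\nResponse: {response}\nCritique: {critique}\n"
            "Rewrite the response so it no longer violates the principle:")
    # The (prompt, final response) pairs become supervised fine-tuning data;
    # the RL-CAI phase then uses model-generated preference labels.
    return response
```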
Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225)
"This treaty establishes a legal framework for Parties (ratifying countries) to regulate AI activities, ensuring they are consistent with human rights, democracy, and the rule of law. It requires Parties to implement measures for transparency, oversight, accountability, and risk management for AI systems used by both public authorities and private actors, as outlined in Articles 4, 5, and 10."
Technical ID
council-of-europe-ai-treaty-2024
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)
"The EU AI Act establishes a comprehensive, risk-based legal framework for AI systems placed on the Union market, prohibiting certain unacceptable-risk practices (Article 5), imposing strict conformity, transparency, and oversight requirements on high-risk systems (Title III), and setting transparency obligations for specific AI systems like chatbots and deepfakes (Article 50). It applies to providers, deployers, importers, and distributors of AI systems operating within the EU."
Technical ID
eu-ai-act-2024
EU AI Act: Data Bias Mitigation (Article 10)
"Article 10 of the EU AI Act (2026 fully enforced) mandates strict controls to detect, prevent, and mitigate biases in training, validation, and testing datasets for high-risk AI systems."
Technical ID
eu-ai-act-bias
EU AI Act: Obligations of Importers, Distributors, and Deployers (Articles 23–26)
"Under Articles 23 to 26 of the EU AI Act, importers, distributors, and deployers of high-risk AI systems must verify the system's compliance, including the presence of CE marking and required documentation, before making it available or putting it into service. They are also responsible for ensuring that storage and transport conditions do not compromise the system's conformity and must cooperate with competent authorities."
Technical ID
eu-ai-act-cloud-providers-article-25
EU AI Act: Fundamental Rights Impact Assessment for High-Risk AI Systems (Article 27)
"Under Article 27 of the EU AI Act, deployers that are public bodies or private operators providing public services must conduct and document a Fundamental Rights Impact Assessment (FRIA) before putting a high-risk AI system into use to evaluate its impact on fundamental rights."
Technical ID
eu-ai-act-fundamental-rights-impact-assessment
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (European Union AI Act) — Chapter V: General-Purpose AI Models
"This regulation imposes transparency, documentation, and risk management obligations on all providers of general-purpose AI (GPAI) models placed on the EU market, as detailed in Article 53. It establishes stricter requirements, including model evaluation, systemic risk assessment, and incident tracking, for providers of GPAI models designated as having systemic risk under Article 51."
Technical ID
eu-ai-act-gpai-obligations-chapter-v
EU AI Act: High-Risk Conformity (Chapter III)
"Chapter III of the EU AI Act (applicable to high-risk AI systems from 2 August 2026) mandates rigorous conformity assessments for high-risk AI systems, including mandatory requirements for data governance, technical documentation, and record-keeping."
Technical ID
eu-ai-act-high-risk
EU AI Act: Reporting of Serious Incidents (Article 73)
"Under Article 73 of the EU AI Act, providers of high-risk AI systems on the Union market must report any serious incidents involving their systems to the market surveillance authorities of the Member States where the incident occurred, without undue delay, and no later than 15 days after becoming aware of the incident."
Technical ID
eu-ai-act-incident-reporting-article-73
EU AI Act: Market Surveillance and Enforcement (Chapter IX)
"This chapter establishes the post-market surveillance framework for AI systems within the EU, empowering national market surveillance authorities to investigate, demand corrective actions, and withdraw or recall non-compliant AI systems from the market, as detailed in Articles 80, 81, and 82."
Technical ID
eu-ai-act-market-surveillance-chapter-viii
Prohibited Artificial Intelligence Practices (Article 5, Regulation (EU) 2024/1689)
"Under Article 5 of the EU AI Act, it is strictly forbidden to place on the market, put into service, or use AI systems that deploy subliminal techniques, exploit vulnerabilities of specific groups, conduct social scoring by public authorities, or use real-time remote biometric identification in public spaces for law enforcement, subject to narrow exceptions."
Technical ID
eu-ai-act-prohibited-practices-article-5
EU AI Act: Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox (Article 59)
"Under Article 59 of the EU AI Act, AI regulatory sandboxes (established under Article 57) may permit the processing of special categories of personal data for developing and testing certain AI systems in the public interest, provided that specific safeguards, such as technical limitations and robust security measures, are implemented."
Technical ID
eu-ai-act-regulatory-sandboxes-article-57
EU AI Act: Transparency Obligations for Certain AI Systems (Article 50)
"Providers and deployers of certain AI systems must ensure natural persons are informed when they are interacting with an AI system or when content is artificially generated or manipulated, as mandated by Article 50 of Regulation (EU) 2024/1689. This includes chatbots, emotion recognition systems, biometric categorisation systems, and systems generating 'deep fakes'."
Technical ID
eu-ai-act-transparency-obligations-article-50
Proposal for a DIRECTIVE OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive)
"This directive establishes rules to ease the burden of proof for victims claiming compensation for damage caused by AI systems, applying to non-contractual civil liability claims within the EU. It introduces a rebuttable presumption of a causal link for high-risk AI systems (Article 4) and grants national courts the power to order the disclosure of evidence from providers (Article 3)."
Technical ID
eu-ai-liability-directive-2024
Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act)
"The EU Digital Markets Act (DMA) designates large online platforms providing core platform services (CPS) as 'gatekeepers' and imposes a set of specific obligations to ensure market contestability and fairness. Key prohibitions and requirements, outlined in Articles 5, 6, and 7, address issues like self-preferencing, data combination restrictions, and interoperability to prevent gatekeepers from leveraging their dominant position unfairly."
Technical ID
eu-digital-markets-act-2022
Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act)
"The EU Digital Services Act (DSA) imposes harmonized due diligence obligations on online intermediaries and platforms to combat illegal content, disinformation, and other societal risks. Obligations are tiered, with the most stringent requirements for Very Large Online Platforms (VLOPs) and Search Engines (VLOSEs) concerning risk assessment, mitigation, and transparency (Chapter III, Section 5)."
Technical ID
eu-digital-services-act-2022
Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU
"This regulation establishes a risk-based classification system (Classes A, B, C, D) for in vitro diagnostic (IVD) medical devices in the EU and mandates a rigorous, continuous performance evaluation process to demonstrate conformity with General Safety and Performance Requirements (GSPRs), as detailed in Article 56 and Annex XIII."
Technical ID
eu-ivdr-2017-746
G20 AI Principles: Human-Centred AI Values, Accountability and International Co-operation
"Endorsed by G20 leaders at the 2019 Osaka Summit, these non-binding principles provide a framework for the responsible stewardship of trustworthy AI, based on the OECD AI Principles. They call on AI actors to respect human-centred values and the rule of law (Principles 1.1-1.5) and recommend that governments foster a policy environment that supports trustworthy AI through investment, co-operation, and enabling frameworks (Principles 2.1-2.5)."
Technical ID
g20-ai-principles-2019
G7 Hiroshima AI Process International Code of Conduct for Organizations Developing Advanced AI Systems
"This voluntary code of conduct establishes 11 guiding principles for organizations developing the most advanced AI systems, including foundation models and generative AI, to promote safety, security, and trustworthy AI. It requires organizations to take appropriate measures throughout the AI lifecycle, from design to deployment, to identify, evaluate, and mitigate risks, as outlined in Principles 1 through 11."
Technical ID
g7-hiroshima-ai-process-2023
IEEE Ethics (AI Systems)
"Compliance verification for this node mandates adherence to a comprehensive framework of IEEE standards governing ethical AI system development and deployment. The process begins by prioritizing human well-being, a principle central to Ethically Aligned Design, requiring both an approved human_rights_impact_assessment_approved and active wellbeing_metrics_defined_and_tracked. Stakeholder values are integrated through a formal process, outlined in IEEE 7000-2021, which necessitates no fewer than the stakeholder_engagement_sessions_min of three completed sessions. System transparency, a core tenet of IEEE 7001-2021, is quantitatively enforced by a transparency_explainability_score_min threshold of 0.85, supported by enabled accountability_traceability_logging_enabled and an available automated_decision_appeal_mechanism. In alignment with the IEEE 7002-2022 standard, data privacy is upheld by ensuring data_agency_user_control_enabled is active. To address fairness, algorithmic bias considerations from IEEE 7003-2023 impose a strict algorithmic_bias_variance_max of 0.05 between specified groups. Finally, system safety and reliability are governed by IEEE 7009-2024 principles for fail-safe design, mandating a human_override_capability_active, a completed misuse_risk_simulation_completed analysis, and a validated system performance achieving a system_competence_validation_score_min of 0.9 before operational clearance is granted."
Technical ID
ieee-ethics-ai-system
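A minimal sketch, in Python, of the operational-clearance gate implied by the thresholds above. The checking harness and record layout are assumptions; the threshold values are the ones stated in the node.

```python
def check_operational_clearance(system: dict) -> list[str]:
    """Return unmet requirements; an empty list grants clearance."""
    checks = [
        ("human_rights_impact_assessment_approved",     lambda v: v is True),
        ("wellbeing_metrics_defined_and_tracked",       lambda v: v is True),
        ("stakeholder_engagement_sessions",             lambda v: v >= 3),
        ("transparency_explainability_score",           lambda v: v >= 0.85),
        ("accountability_traceability_logging_enabled", lambda v: v is True),
        ("automated_decision_appeal_mechanism",         lambda v: v is True),
        ("data_agency_user_control_enabled",            lambda v: v is True),
        ("algorithmic_bias_variance",                   lambda v: v <= 0.05),
        ("human_override_capability_active",            lambda v: v is True),
        ("misuse_risk_simulation_completed",            lambda v: v is True),
        ("system_competence_validation_score",          lambda v: v >= 0.9),
    ]
    return [name for name, passes in checks
            if name not in system or not passes(system[name])]
```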
Responsible AI for All: Adopting the Framework - A use-case approach for India (Part 1 and 2)
"This guidance document from India's NITI Aayog establishes a voluntary framework with seven guiding principles for the ethical development and deployment of AI systems. It applies to all stakeholders in India's AI ecosystem, urging the adoption of principles such as safety, equality, transparency, and accountability, as detailed in Chapter 3, to ensure AI solutions are inclusive and human-centric."
Technical ID
india-niti-aayog-responsible-ai-2021
ISO/IEC 23894:2023 Information Technology — Artificial Intelligence — Guidance on Risk Management
"This standard provides guidance for managing risks related to artificial intelligence (AI) for any organization involved in the AI lifecycle. It extends the generic risk management framework of ISO 31000 to address the specific challenges of AI systems, as detailed in Clause 5, which outlines the principles, framework, and process for AI risk management."
Technical ID
iso-23894-ai-risk-management
AIMS Improvement (ISO 42001)
"ISO/IEC 42001:2023 Clause 10 (Improvement) mandates that organizations operating an AI Management System (AIMS) establish systematic processes for identifying, addressing, and preventing nonconformities — including AI safety incidents, bias events, harmful outputs, and performance degradation — and for driving continual improvement of the AIMS over time. Clause 10 requires organizations to react to nonconformities with documented corrective actions, perform root cause analysis to prevent recurrence, and evaluate the effectiveness of actions taken. Continual improvement requires using outputs from internal audits, management reviews, monitoring data, and stakeholder feedback to identify opportunities to enhance AI system performance, safety, and alignment. This clause is activated by incidents identified through the monitoring requirements of Clause 9 and is essential for demonstrating to regulators, customers, and auditors that the organization's AI systems become safer and more aligned over time, not static."
Technical ID
iso-42001-improvement
AIMS Performance Eval (ISO 42001)
"ISO/IEC 42001:2023 Clause 9 (Performance Evaluation) requires organizations operating an AI Management System (AIMS) to establish monitoring and measurement programs for AI systems and the AIMS itself, conduct internal audits of AIMS conformity, and hold management reviews that use performance data to make informed governance decisions. Clause 9.1 requires determining what needs to be monitored and measured, the methods to be used, when evaluations occur, and when results are analyzed and communicated. Clause 9.2 mandates an internal audit program covering all AIMS elements at risk-determined intervals. Clause 9.3 requires management reviews that consider: audit results, AI system performance data, incident trends, regulatory changes, stakeholder feedback, and risk treatment effectiveness. Without systematic performance evaluation, AIMS nonconformities may go undetected, AI systems may drift from aligned behavior, and regulators may determine the AIMS is nominal rather than effective."
Technical ID
iso-42001-performance
AI System Impact & Risk Assessment (ISO/IEC 42001:2023)
"The AI System Impact Assessment (Clause 6.1.2) is a mandatory requirement to identify, analyze, and evaluate the potential consequences of an AI system on individuals, groups, and society, focusing on fairness, privacy, safety, and security."
Technical ID
iso-42001-risk-assess
AI Transparency & Communication (ISO/IEC 42001:2023 Annex A.8)
"Transparency controls (Annex A.8) mandate the provision of clear, accessible information regarding the AI system’s intent, capabilities, and limitations to ensure stakeholders can make informed decisions."
Technical ID
iso-42001-transparency
ISO/IEC 42005:2025 — Artificial Intelligence System Impact Assessment Guidance and Methodology
"This standard provides guidance and a methodology for conducting impact assessments of AI systems on individuals, society, and the environment. It outlines a structured process (Clause 5) for identifying, analyzing, and evaluating potential positive and negative impacts throughout the AI system lifecycle to inform decision-making and risk treatment."
Technical ID
iso-42005-ai-impact-assessment
ISO/IEC 24027: Bias and Fairness in AI
"The mathematical and technical playbook for mitigating human cognitive bias, data bias, and engineering bias through quantitative fairness metrics like demographic parity and equalized odds."
Technical ID
iso-iec-24027-bias-fairness
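A minimal sketch, in Python, of the two fairness metrics the node names, assuming binary predictions and a binary protected attribute (both groups must be represented in the data):

```python
import numpy as np

def demographic_parity_gap(y_pred, group) -> float:
    """|P(yhat=1 | A=0) - P(yhat=1 | A=1)| for a binary protected attribute."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group) -> float:
    """Largest gap in TPR or FPR across groups; 0 means equalized odds hold."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for outcome in (1, 0):  # outcome=1 compares TPRs, outcome=0 compares FPRs
        rates = [y_pred[(group == g) & (y_true == outcome)].mean()
                 for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)
```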
AI Guidelines for Business 2024 — Hiroshima AI Process Friendly Framework
"These voluntary guidelines from Japan's METI and MIC provide a risk-based, agile framework for all businesses developing, providing, or using AI. They establish ten core principles, outlined in Chapter 2 'Common Guiding Principles for All AI Actors', including safety, fairness, and transparency, to encourage innovation while managing societal and economic risks."
Technical ID
jp-ai-guidelines-2024
Framework Act on the Development of Artificial Intelligence and Establishment of Trust
"This South Korean framework promotes AI development and innovation under a 'priority permission, ex-post regulation' principle (Article 5), while establishing national strategies and ethical guidelines to ensure AI safety, transparency, and human-centered values. It applies to all AI developers, providers, and users within South Korea, introducing a voluntary certification system for trustworthy AI (Article 21)."
Technical ID
kr-ai-basic-act-2024
Standardized Model Card Logic
"Model Cards, introduced by Mitchell et al. (2019) and subsequently adopted as a documentation standard in EU AI Act Article 11 (technical documentation), ISO/IEC 42001 Annex B, and NIST AI RMF Govern 1.2, are structured reports that document an AI model's intended use, training data characteristics, performance benchmarks across demographic subgroups, known limitations, and ethical considerations — enabling downstream deployers, auditors, and affected parties to make informed decisions about model adoption and risk. Under the EU AI Act, high-risk AI system providers must maintain technical documentation substantially equivalent to a Model Card as a prerequisite for CE marking and notified body assessment. NIST AI RMF requires Model Cards as an output of the Map and Measure functions for transparency and accountability. Incomplete or absent Model Cards constitute a governance gap that regulators, enterprise risk managers, and AI procurement teams treat as evidence of insufficient AI lifecycle management. AI agents that retrieve, generate, or audit Model Cards must apply the full schema defined in this node to ensure completeness and regulatory sufficiency."
Technical ID
model-card-report
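A minimal schema sketch, in Python, of a Model Card record with a completeness check, following Mitchell et al. (2019). The field names and gap heuristics are illustrative assumptions, not the node's full schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str]
    training_data: str                        # provenance and characteristics
    metrics: dict[str, float]                 # aggregate benchmark results
    subgroup_metrics: dict[str, dict[str, float]] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)
    ethical_considerations: list[str] = field(default_factory=list)

    def completeness_gaps(self) -> list[str]:
        """Fields auditors typically treat as governance gaps when empty."""
        gaps = []
        if not self.subgroup_metrics:
            gaps.append("no per-subgroup performance reported")
        if not self.known_limitations:
            gaps.append("no documented limitations")
        if not self.ethical_considerations:
            gaps.append("no ethical considerations documented")
        return gaps
```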
NIST AI 100-2 E2023 — Adversarial Machine Learning: Taxonomy and Terminology for Attacks and Mitigations
"This NIST Special Publication establishes a comprehensive taxonomy and common terminology for describing adversarial machine learning (AML) attacks and mitigations. It provides a structured vocabulary for AI developers, researchers, and evaluators to consistently identify, assess, and communicate about threats to AI system security and robustness, as detailed in Section 3, The AML Taxonomy."
Technical ID
nist-ai-100-2-adversarial-ml
Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations
"This NIST Trustworthy and Responsible AI report develops a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). The taxonomy is built on surveying the AML literature and is arranged in a conceptual hierarchy that includes key types of ML methods and lifecycle stages of attack, attacker goals and objectives, and attacker capabilities and knowledge of the learning process. The report provides corresponding methods for mitigating and managing the consequences of attacks, meant to inform standards and practice guides for assessing and managing AI system security by establishing a common language for the AML landscape. The data-driven approach of machine learning introduces security and privacy challenges beyond classical threats. These include the potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities, and malicious interaction with models to exfiltrate sensitive information. AML is concerned with studying the capabilities of attackers and their goals, the design of attack methods that exploit ML vulnerabilities during the development, training, and deployment phases, and the design of ML algorithms that can withstand these challenges. The taxonomy of AML is defined with respect to five dimensions of risk assessment: AI system type, stage of the ML lifecycle process, attacker goals, attacker capabilities, and attacker knowledge."
Technical ID
nist-ai-100-2-aml-taxonomy
AI Red Teaming (NIST AI 100-4)
"Adversarial red teaming constitutes a mandatory control for designated AI systems, aligning with directives in U.S. Executive Order 14110 and fulfilling the accuracy, robustness, and cybersecurity requirements detailed within the EU AI Act's Article 15. This node’s primary objective is to systematically identify, classify, and mitigate vulnerabilities through structured testing cycles conducted every 90 days, an operational tempo that supports the MEASURE function of NIST's AI Risk Management Framework. Each cycle must employ a minimum of 5000 adversarial prompts designed to stress-test system defenses against a comprehensive range of threats articulated in the NIST AI 100-4 taxonomy. The protocol mandates active simulation of evasion attacks, data poisoning scenarios, and model extraction attempts. Performance is evaluated against stringent thresholds, requiring a jailbreak success rate not to exceed 0.05 and a minimum robustness confidence score of 0.9. Testing specifically targets critical OWASP Top 10 for LLM vulnerabilities, including LLM01 Prompt Injection and LLM06 Sensitive Information Disclosure. To ensure procedural integrity consistent with ISO/IEC 23894 guidance, all evaluations require human-in-the-loop testing conducted by an operationally independent red team. Upon discovery of a critical vulnerability, an automatic quarantine protocol is triggered to prevent further exposure or compromise."
Technical ID
nist-ai-100-4-redteam
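A minimal sketch, in Python, mapping a red-team cycle's metrics to an action using the thresholds stated above (5,000 prompts, 0.05 jailbreak rate, 0.9 robustness, 90-day cadence, automatic quarantine on critical findings). The results-record layout is an assumption.

```python
def evaluate_cycle(results: dict) -> str:
    """Gate a red-team cycle against the node's thresholds."""
    if results["critical_vulns"] > 0:
        return "quarantine"          # automatic on any critical finding
    if results["days_since_last_cycle"] > 90:
        return "cycle_overdue"       # 90-day operational tempo breached
    if results["adversarial_prompts"] < 5000:
        return "invalid_cycle"       # insufficient prompt coverage
    if (results["jailbreak_success_rate"] > 0.05
            or results["robustness_confidence"] < 0.9):
        return "remediate"           # thresholds missed: fix and retest
    return "pass"
```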
Reducing Risks Posed by Synthetic Content An Overview of Technical Approaches to Digital Content Transparency
"This report examines existing standards, tools, methods, and practices for authenticating digital content, tracking its provenance, labeling and detecting synthetic content, and preventing generative AI from producing harmful material like child sexual abuse material or non-consensual intimate imagery of real individuals. The approaches discussed aim to manage and reduce risks related to synthetic content by recording and revealing its provenance, providing tools to identify AI-generated content, and mitigating the production and dissemination of certain illicit materials. Digital content transparency provides a vehicle for individuals and organizations to access more information about the origins and history of content, which may contribute to trustworthiness. The document defines "synthetic content" as "information, such as images, videos, audio clips, and text, that has been significantly altered or generated by algorithms, including by AI." It provides an overview of technical approaches for provenance data tracking and synthetic content detection, along with a review of current testing and evaluation techniques. It acknowledges that the efficacy of many of these approaches is not fully examined and may be years from widespread deployment. The value of any given technique is use-case and context-specific, and none offer comprehensive solutions on their own; they are building blocks that can be used to improve trust between content producers, distributors, and the public."
Technical ID
nist-ai-100-4-synthetic-content
A Plan for Global Engagement on AI Standards
"Recognizing the importance of technical standards in shaping development and use of Artificial Intelligence (AI), the President’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110) calls for “a coordinated effort...to drive the development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing” internationally. Specifically, the EO tasks the Secretary of Commerce to “establish a plan for global engagement on promoting and developing AI standards... guided by principles set out in the NIST AI Risk Management Framework and United States Government National Standards Strategy for Critical and Emerging Technology” (NSSCET). This plan, prepared with broad public and private sector input, fulfills the EO’s mandate. The scope of the plan is deliberately broad, addressing the full lifecycle of standards-related activities, including foundational technical work, collaborative development of consensus standards, and the development of complementary tools for implementation. The plan covers AI-related standards of all scopes, both “horizontal” (applicable across sectors) and “vertical” (designed for the needs of a particular sector). It lays out objectives, topical priorities, and actions that can be taken up not just by the Federal government but by the full array of U.S. stakeholders in AI standards, recognizing that U.S. global leadership hinges on engagement from across the dynamic, private sector-led standards ecosystem."
Technical ID
nist-ai-100-5-global-engagement-plan
Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
"This document is a cross-sectoral profile of and a companion resource for the AI Risk Management Framework (AI RMF 1.0) for Generative AI, developed pursuant to Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence. It is intended for voluntary use by organizations to improve their ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. The profile assists organizations in managing AI risks in a manner that is well-aligned with their goals, considers legal and regulatory requirements, and reflects risk management priorities. This profile defines risks that are novel to or exacerbated by the use of Generative AI (GAI) and provides a set of suggested actions to help organizations govern, map, measure, and manage these risks across the AI lifecycle. The focus of the suggested actions is limited to four primary considerations: Governance, Content Provenance, Pre-deployment Testing, and Incident Disclosure. It is designed to be used by various AI actors to manage risks associated with activities common across sectors, such as the use of large language models (LLMs). The profile focuses on risks for which there is an existing empirical evidence base, such as confabulation, information integrity, harmful bias, and data privacy."
Technical ID
nist-ai-600-1-gen-ai-profile
Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations
"This NIST Trustworthy and Responsible AI report develops a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML), which may aid in securing applications of artificial intelligence (AI) against adversarial manipulations. The taxonomy is built on surveying the AML literature and is arranged in a conceptual hierarchy that includes key types of ML methods, lifecycle stages of attack, attacker goals, and attacker capabilities and knowledge. It applies to both Predictive and Generative AI systems. The data-driven approach of machine learning introduces security and privacy challenges, including the potential for adversarial manipulation of training data, exploitation of model vulnerabilities to affect performance, and malicious interactions to exfiltrate sensitive information. AML is concerned with studying the capabilities of attackers and their goals, as well as the design of attack methods that exploit vulnerabilities during the ML lifecycle. It is also concerned with the design of ML algorithms that can withstand these challenges. The intended audience includes individuals and groups responsible for designing, developing, deploying, evaluating, and governing AI systems. The taxonomy and terminology are meant to inform other standards and future practice guides for assessing and managing the security of AI systems by establishing a common language and understanding of the rapidly developing AML landscape."
Technical ID
nist-ai-adversarial-machine-learning
Artificial Intelligence Risk Management Framework (AI RMF 1.0)
"The goal of the AI RMF is to offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems. The Framework is intended to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic, providing flexibility to organizations of all sizes and in all sectors and throughout society to implement its approaches. The framework equips organizations and individuals, referred to as AI actors, with approaches that increase the trustworthiness of AI systems, and helps foster the responsible design, development, deployment, and use of AI systems over time. The core of the framework describes four specific functions to help organizations address the risks of AI systems in practice. These functions – GOVERN, MAP, MEASURE, and MANAGE – are broken down further into categories and subcategories. While GOVERN applies to all stages of an organization's AI risk management processes, the MAP, MEASURE, and MANAGE functions can be applied in AI system-specific contexts and at specific stages of the AI lifecycle. The framework is designed to be practical, to adapt to the AI landscape as technologies develop, and to be operationalized by organizations in varying degrees so society can benefit from AI while also being protected from its potential harms."
Technical ID
nist-ai-rmf-1-0
NIST AI RMF: Response
"NIST AI RMF MANAGE is the action function of the AI Risk Management Framework (NIST AI 100-1, January 2023). It converts the risk assessments produced by MAP and MEASURE into concrete treatment decisions: accept, mitigate, transfer, or avoid. MANAGE specifies how AI risk responses are planned, resourced, executed, and monitored for effectiveness. Organizations without a formal MANAGE function may identify AI risks but fail to close them, creating regulatory and reputational liability. Under the EU AI Act Article 9 and ISO 42001 Clause 8, demonstrating systematic risk treatment with documented outcomes is mandatory for high-risk AI system operators."
Technical ID
nist-ai-rmf-manage
NIST AI RMF MANAGE Function — AI Risk Treatment, Response and Recovery (NIST AI 100-1)
"The NIST AI RMF MANAGE function requires organizations to implement a documented AI risk management process by prioritizing, allocating resources for, and responding to identified and analyzed risks on an ongoing basis. This involves developing and deploying risk treatments, response plans, and recovery procedures as detailed in Section 4.4 of the NIST AI RMF 1.0."
Technical ID
nist-ai-rmf-manage-function
NIST AI RMF: Risk Context
"NIST AI RMF MAP is the discovery function of the AI Risk Management Framework (NIST AI 100-1, January 2023). It establishes the context for each AI system — its intended use, deployment environment, affected stakeholders, and the categories of risk that apply. MAP must be completed before MEASURE or MANAGE can be executed. Without MAP, AI risk assessments are acontextual and unreliable. MAP is specifically required by the EU AI Act (Article 9 conformity assessment), ISO 42001 (Clause 6.1 risk identification), and the US NIST AI RMF Playbook as the entry point for all downstream risk management activities."
Technical ID
nist-ai-rmf-map
Artificial Intelligence Risk Management Framework (AI RMF 1.0): MAP Function - AI Risk Contextualization and Prioritization
"The NIST AI RMF MAP function requires organizations to establish the context to frame AI risks by identifying system purposes, scope, potential impacts, and relevant stakeholders. This foundational step, detailed in AI RMF Section 4.1, enables the identification, analysis, and prioritization of AI risks before they are measured and managed."
Technical ID
nist-ai-rmf-map-function
NIST AI RMF: Metrics
"NIST AI RMF MEASURE is the evaluation function of the AI Risk Management Framework (NIST AI 100-1, January 2023). It converts the context established in MAP into quantitative and qualitative assessments of AI risk using appropriate tools, metrics, and methodologies. MEASURE determines the actual severity and likelihood of each identified risk before treatment decisions are made. Without rigorous MEASURE activities, MANAGE decisions are based on opinion rather than evidence — a gap that auditors, regulators, and insurers consistently flag. MEASURE is aligned with EU AI Act Article 9(7) (post-market monitoring) and ISO 42001 Clause 9 (performance evaluation)."
Technical ID
nist-ai-rmf-measure
NIST AI RMF MEASURE Function — AI Risk Analysis and Measurement (NIST AI 100-1)
"The NIST AI RMF MEASURE function requires organizations to develop and apply metrics and methodologies for continuous analysis, assessment, and monitoring of AI system risks throughout the lifecycle. As detailed in Section 4.3, this involves tracking trustworthy AI characteristics, evaluating system performance against intended purposes, and assessing impacts on individuals and society."
Technical ID
nist-ai-rmf-measure-function
Automation Support for Control Assessments: Project Update and Vision
"NIST Interagency Report (IR) 8011 is a multi-volume series that provides a blueprint for supporting automated control assessments. It proposes an approach for creating specific tests, denominated as 'defect checks,' that can be executed using automation to verify that controls are in place and operating as expected. The methodology supports the NIST Risk Management Framework (RMF) and expands on guidance from SP 800-53A for assessing SP 800-53 controls, ultimately to support information security continuous monitoring (ISCM) activities. This cybersecurity white paper, NIST CSWP 30, summarizes the findings from an internal review of the IR 8011 project. It outlines opportunities for improving the methodology, including restructuring the workflow for readability, expanding keyword search functions, and abstracting the security framework to support any control-based framework. The paper provides a glimpse of what is coming next and updates the IR 8011 development roadmap, with a stated goal of operationalizing the framework into solutions that can benefit agencies and organizations."
Technical ID
nist-cswp-30-automation-support
NISTIR 8202 Blockchain Technology Overview
"Blockchains are tamper evident and tamper resistant digital ledgers implemented in a distributed fashion (i.e., without a central repository) and usually without a central authority (i.e., a bank, company, or government). At their basic level, they enable a community of users to record transactions in a shared ledger within that community, such that under normal operation of the blockchain network no transaction can be changed once published. This document provides a high-level technical overview of blockchain technology to help readers understand how it works. Organizations considering implementing blockchain technology need to understand fundamental aspects of the technology. There are two general high-level categories for blockchain approaches: permissionless and permissioned. In a permissionless blockchain network anyone can read and write to the blockchain without authorization. Permissioned blockchain networks limit participation to specific people or organizations and allow finer-grained controls. Despite the many variations of blockchain networks and the rapid development of new blockchain related technologies, most blockchain networks use common core concepts. Blockchains are a distributed ledger comprised of blocks, and each block contains a set of transactions. This document explores the fundamentals of how these technologies work, including how participants agree on whether a transaction is valid and what happens when changes need to be made."
Technical ID
nist-ir-8202-blockchain-overview
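The tamper-evident chaining the overview describes can be shown in a few lines: each block stores the hash of its predecessor, so altering a published block breaks the recomputed link. This is a minimal sketch of the core concept, not an implementation of any particular blockchain network.

```python
import hashlib, json, time

def block_hash(block: dict) -> str:
    """Hash a block's contents; any change to the block changes this digest."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list[dict], transactions: list[str]) -> None:
    """Link a new block to the chain tip via the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "timestamp": time.time(), "transactions": transactions})

def verify(chain: list[dict]) -> bool:
    """Tamper evidence: recompute each link and compare with the stored prev_hash."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger: list[dict] = []
append_block(ledger, ["alice->bob:5"])
append_block(ledger, ["bob->carol:2"])
print(verify(ledger))                            # True
ledger[0]["transactions"] = ["alice->bob:500"]   # tamper with a published block
print(verify(ledger))                            # False: the link to block 1 breaks
```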
The Language of Trustworthy AI: An In-Depth Glossary of Terms
"This document is a guide and record of the development for the NIST (National Institute of Standards and Technology) glossary of terms for trustworthy and responsible artificial intelligence (AI) and machine learning (ML). The glossary effort seeks to promote a shared understanding and improved communication among individuals and organizations seeking to operationalize trustworthy and responsible AI through approaches such as the NIST AI Risk Management Framework (AI RMF). Like the AI RMF, the glossary is non-sector specific and use-case agnostic, designed to be flexible for all organizations and sectors of society to use. The goal of this common vocabulary is not to declare one specific meaning for identified terms, but to provide interested parties with a broader awareness of the multiple meanings of commonly used terms within the interdisciplinary field of trustworthy and responsible AI. The glossary can be used in conjunction with the NIST AI RMF and related resources, or as a stand-alone document. It serves as a first-stop resource for those new to the field, fosters cross-collaboration among different disciplines, and aligns with existing international and industry standards from bodies such as IEEE, ANSI, and ISO/IEC. Core principles in its design include the inclusion of terms related to emerging AI technologies, definitions from a wide variety of domains (including machine learning, social sciences, and law), and a collaborative development process based on consultation with subject matter experts. NIST will promote its use to a broad range of stakeholders, including researchers, developers, and policymakers, and it is subject to regular review and feedback processes from the broader AI community."
Technical ID
nist-language-of-trustworthy-ai
Towards a Standard for Identifying and Managing Bias in Artificial Intelligence
"This special publication describes the challenges of bias in artificial intelligence and provides examples of how and why it can erode public trust. It identifies three categories of bias in AI—systemic, statistical, and human—and describes how and where they contribute to harms. The document also describes three broad challenges for mitigating bias related to datasets, testing and evaluation, and human factors, and introduces preliminary guidance for addressing them. While many organizations seek to utilize information in a responsible manner, biases remain endemic across technology processes and can lead to harmful impacts regardless of intent. These harmful outcomes, even if inadvertent, create significant challenges for cultivating public trust in AI. Successfully meeting this challenge requires taking all forms of bias into account, expanding the perspective beyond the machine learning pipeline to a broader socio-technical view. The intended audience for this document includes individuals and groups who are responsible for designing, developing, deploying, evaluating, and governing AI systems. The core obligation is to provide a roadmap for developing detailed socio-technical guidance for identifying and managing AI bias. NIST intends to develop methods for increasing assurance, governance, and practice improvements for identifying, understanding, measuring, managing, and reducing bias. The guidance is voluntary and intended to be flexible and applicable across contexts, regardless of industry."
Technical ID
nist-sp-1270-managing-ai-bias
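SP 1270 does not prescribe specific metrics, but the statistical category of bias it identifies is commonly quantified by comparing group selection rates. The sketch below computes a disparate impact ratio as one illustrative check; the four-fifths threshold and the toy decision data are assumptions for the example, not guidance from the publication.

```python
def selection_rates(outcomes: dict[str, list[int]]) -> dict[str, float]:
    """Positive-outcome rate per group (1 = favorable decision, 0 = unfavorable)."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, list[int]]) -> float:
    """Min/max ratio of group selection rates; the 'four-fifths rule' flags values < 0.8."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

decisions = {"group_a": [1, 1, 0, 1, 1], "group_b": [1, 0, 0, 0, 1]}
ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 -> flags a statistical disparity
```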
Foundational Cybersecurity Activities for IoT Device Manufacturers
"This publication provides recommendations for manufacturers to improve the securability of the Internet of Things (IoT) devices they create. Many IoT devices lack cybersecurity capabilities that customers can use to mitigate risks. Manufacturers can assist customers by providing necessary cybersecurity functionality and related information. This document outlines six recommended foundational cybersecurity activities for manufacturers to consider before their devices are sold. These activities aim to lessen the cybersecurity efforts required by customers, thereby reducing the prevalence and severity of IoT device compromises and subsequent attacks. The core obligation for manufacturers is to carefully consider which device cybersecurity capabilities to design into their products for customers to use in managing their risks. The primary audience is IoT device manufacturers, but the content may also be useful for IoT device customers seeking to understand available device cybersecurity capabilities and the information manufacturers might provide."
Technical ID
nistir-8259-iot-device-manufacturers
NISTIR 8312 Four Principles of Explainable Artificial Intelligence
"This document introduces four principles for explainable artificial intelligence (AI) that comprise fundamental properties for explainable AI systems. For AI systems that are intended or required to be explainable, it is proposed that they adhere to these principles. First, a system must deliver accompanying evidence or reasons for its outcomes and processes (Explanation). Second, these explanations must be understandable to the individual users they are intended for (Meaningful). Third, the explanation must correctly reflect the system’s actual process for generating the output (Explanation Accuracy). Finally, the system must only operate under the conditions for which it was designed and when it reaches sufficient confidence in its output (Knowledge Limits). These principles were developed to encompass the multidisciplinary nature of explainable AI and are heavily influenced by the AI system’s interaction with the human recipient. The requirements of a given situation, the task at hand, and the consumer will all influence the type of explanation deemed appropriate. These situations can include regulatory and legal requirements, quality control, and customer relations. The principles allow for defining the contextual factors to consider for an explanation and act as a roadmap for future measurement and evaluation activities. This work is part of a larger NIST portfolio around trustworthy AI, which also includes characteristics like accuracy, privacy, reliability, robustness, safety, security, mitigation of harmful bias, transparency, fairness, and accountability."
Technical ID
nistir-8312-explainable-ai-principles
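The Knowledge Limits principle is the most directly operational of the four. A minimal sketch of it, with an invented toy classifier and an assumed confidence threshold, might look like this: the system declines to answer when the input falls outside its designed conditions or its confidence is too low.

```python
def answer_with_knowledge_limits(
    predict,               # model returning (label, confidence) for an input
    x,
    in_design_domain,      # predicate: was the system designed for inputs like x?
    min_confidence: float = 0.9,
):
    """Operationalizes the Knowledge Limits principle: decline rather than guess."""
    if not in_design_domain(x):
        return None, "declined: input outside the system's designed operating conditions"
    label, confidence = predict(x)
    if confidence < min_confidence:
        return None, f"declined: confidence {confidence:.2f} below threshold {min_confidence}"
    return label, "answered, with confidence reported as part of the explanation"

# Toy classifier for illustration only.
toy = lambda x: ("positive", 0.95 if len(x) > 3 else 0.55)
print(answer_with_knowledge_limits(toy, "good input", lambda x: True))  # answers
print(answer_with_knowledge_limits(toy, "hi", lambda x: True))          # declines
```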
The OECD AI Principles
"The OECD AI Principles are the first intergovernmental standard on AI, designed to promote innovative, trustworthy artificial intelligence that respects human rights and democratic values. While AI holds the potential to address complex challenges and boost productivity, AI systems also pose risks to privacy, safety, security, and human autonomy. To develop safe, secure and trustworthy AI systems, there is a need to assess these impacts and manage risks. The principles guide AI actors in their efforts to develop trustworthy AI and provide policymakers with recommendations for effective AI policies, which were revised in 2024 to stay abreast of rapid technological developments. For governments to work together to manage AI on an international level, they need to use common terms and definitions to act as a foundation for cooperation, allowing for interoperability across jurisdictions even with varying approaches to managing the technology."
Technical ID
oecd-ai-principles
Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency
"This report examines the existing standards, tools, methods, and practices for authenticating content, tracking its provenance, labeling synthetic content through techniques like watermarking, and detecting synthetic content. It also addresses methods for preventing generative AI (GAI) from producing harmful content such as child sexual abuse material or non-consensual intimate imagery of real individuals. The focus is on digital content transparency, which refers to the process of documenting and accessing information about the origins and history of digital content. The goal is to manage and reduce risks related to synthetic content by recording and revealing provenance, providing tools to identify AI-generated content, and mitigating the production of specific illegal and harmful materials. The document provides an overview of technical approaches for provenance data tracking and synthetic content detection, alongside a review of current testing and evaluation techniques. It emphasizes that no single technique offers a comprehensive solution; their value is use-case and context-specific, relying on effective implementation and oversight. While the report focuses on technical approaches, it acknowledges the importance of normative, educational, regulatory, and market-based approaches. The technical methods described serve as building blocks to improve trust in digital content by indicating where AI has been used to generate or modify content."
Technical ID
reducing-risks-posed-by-synthetic-content
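One building block the report surveys, provenance data tracking, can be illustrated with a simple hash-bound record: if the content changes after the record is issued, the binding breaks. This is a toy sketch loosely modeled on manifest-style schemes such as C2PA, not an implementation of any standard the report covers.

```python
import hashlib

def provenance_record(content: bytes, actions: list[str]) -> dict:
    """Bind an edit history to content by hashing it; a manifest-style toy, not a standard."""
    return {"sha256": hashlib.sha256(content).hexdigest(), "actions": actions}

def content_matches(record: dict, content: bytes) -> bool:
    """Detect whether content was altered after the provenance record was issued."""
    return record["sha256"] == hashlib.sha256(content).hexdigest()

original = b"press photo, camera raw"
record = provenance_record(original, ["captured", "color-corrected", "ai-upscaled"])
print(content_matches(record, original))                # True
print(content_matches(record, b"press photo, edited"))  # False: provenance broken
```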
RLHF Transparency Protocol
"Reinforcement Learning from Human Feedback (RLHF) is the dominant alignment technique used to train large language models (LLMs) to follow instructions, avoid harmful outputs, and produce outputs preferred by human evaluators — combining supervised fine-tuning (SFT) on demonstration data with a reward model trained on human preference comparisons, then optimizing the policy model using Proximal Policy Optimization (PPO) or Direct Preference Optimization (DPO) with a KL-divergence penalty preventing excessive drift from the base model. RLHF audit requirements arise from the opacity of the human feedback process: reward hacking (the policy exploiting reward model weaknesses rather than genuinely improving), annotator bias (systematic preferences of the labeler population distorting the reward signal), and reward model overfitting create alignment failures that are difficult to detect without structured auditing. The EU AI Act Article 10 data governance requirements, NIST AI RMF Govern 1.7 (human oversight of AI), and ISO/IEC 42001 performance monitoring obligations collectively require that RLHF processes be documented, monitored for reward hacking, and periodically audited for labeler quality and preference consistency. AI systems trained with RLHF that lack documented audit trails for the feedback loop cannot be considered to have met their alignment validation obligations."
Technical ID
rlhf-loop-audit
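The preference-optimization objective named in this node can be written out directly. Below is a minimal sketch of the per-pair DPO loss, in which the reference-model log-probabilities supply the implicit KL anchoring the entry describes; the numeric log-probabilities in the example are invented.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Per-pair Direct Preference Optimization loss:
    -log sigma(beta * [(logp_w - ref_logp_w) - (logp_l - ref_logp_l)]),
    where _w is the human-preferred response and _l the rejected one.
    The reference-model terms anchor the policy, playing the role of
    the KL penalty used in PPO-based RLHF."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The policy puts relatively more mass on the preferred response than the
# reference model does, so the loss falls below log(2), the uninformed value.
print(dpo_loss(logp_w=-12.0, logp_l=-15.0, ref_logp_w=-14.0, ref_logp_l=-14.5))  # ~0.58
```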
Security Segmentation in a Small Manufacturing Environment
"Manufacturers are increasingly targeted in cyber-attacks. Small manufacturers are particularly vulnerable due to limitations in staff and resources to operate facilities and manage cybersecurity. This paper introduces security segmentation as a cost-effective and efficient approach to mitigate cyber vulnerabilities for small manufacturing environments. Security segmentation is the grouping of assets into security zones according to the cyber protection they need and placing appropriate safeguards around these security zones. It is an approach for protecting assets by grouping them based on both their communication and security requirements. The intended audience is managers of information technology and operational technology (IT/OT) systems at small manufacturing organizations, including roles like company owner, operations manager, and technical resources such as network and security architects. The core obligation is to follow a six-step approach: 1) identify a list of assets, 2) assess risk and create security zones, 3) determine the risk level for the security zones, 4) map communications between the security zones, 5) determine security controls for the security zones, and 6) create a logical security architecture diagram. The security architecture resulting from these activities serves as a foundational preparation step for additional security strategies like Zero Trust."
Technical ID
security-segmentation-small-manufacturing
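Steps 4 and 5 of the six-step approach, mapping zone communications and deriving controls, reduce to a policy matrix that can be checked mechanically. The zones, asset names, and allowed flows below are illustrative assumptions, not taken from the NIST paper.

```python
# Hypothetical zones and communication matrix for a small shop floor.
ZONE_OF = {
    "erp-server": "business",
    "historian": "supervisory",
    "plc-line-1": "control",
    "cnc-mill": "control",
}
ALLOWED = {("business", "supervisory"), ("supervisory", "control")}  # directional flows

def communication_permitted(src: str, dst: str) -> bool:
    """Check a proposed asset-to-asset flow against the zone-level policy."""
    src_zone, dst_zone = ZONE_OF[src], ZONE_OF[dst]
    return src_zone == dst_zone or (src_zone, dst_zone) in ALLOWED

print(communication_permitted("historian", "plc-line-1"))   # True: supervisory -> control
print(communication_permitted("erp-server", "plc-line-1"))  # False: business -> control blocked
```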
Singapore IMDA Agentic AI Framework
"Execution rules for the world's first framework specifically targeting Agentic AI, focusing on bounding autonomous actions, financial limits, and verifiable intent."
Technical ID
sg-imda-agentic-ai
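Because this node gives only the framework's themes, the sketch below is a loose illustration in their spirit: an agent action gate enforcing an action whitelist, a financial limit, and a stand-in for verifiable intent. Every field name and limit here is invented, not drawn from the IMDA framework text.

```python
# Illustrative agent action gate; policy values are assumptions for the sketch.
POLICY = {"allowed_actions": {"quote", "purchase"}, "max_spend_sgd": 100.0}

def gate(action: str, amount_sgd: float, signed_intent: str | None) -> bool:
    """Admit an autonomous action only if it is bounded, within budget, and attested."""
    if action not in POLICY["allowed_actions"]:
        return False                      # action outside the agent's bounds
    if amount_sgd > POLICY["max_spend_sgd"]:
        return False                      # breaches the financial limit
    return signed_intent is not None      # stand-in for verifiable intent

print(gate("purchase", 25.0, signed_intent="sig:abc123"))   # True
print(gate("purchase", 500.0, signed_intent="sig:abc123"))  # False: over budget
```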
Model Artificial Intelligence Governance Framework Second Edition (2020)
"Singapore's voluntary framework guides organizations in the responsible deployment of AI by outlining two core principles: AI decisions should be explainable, transparent, and fair, and AI systems should be human-centric. It provides detailed guidance across four key areas: Internal Governance Structures, Risk Management in the AI Model Lifecycle, Operations Management, and Customer Relationship Management (Section 1)."
Technical ID
sg-model-ai-governance-v2
100-Node Sovereignty Audit
"The Bidda Sovereign Audit Protocol defines the ongoing integrity verification process for the 100-node intelligence registry. It specifies the procedures for batch hash verification, canonical source URL validation, registry-to-file synchronization checks, SDK compatibility testing, and the issuance of the Sovereign Seal — the attestation that all 100 nodes have been verified against their authoritative source standards, their integrity hashes are current, and the discovery layer (index.json, llms-full.txt, openapi.json) accurately reflects the registry state. This protocol must be executed before any new registry version is deployed to production and after any batch node update. AI agents querying the registry can use this node to understand the audit cycle and assess the freshness and integrity of the registry they are consuming."
Technical ID
sovereign-final-audit
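The batch hash verification step of the protocol can be sketched as follows. The index.json layout assumed here ({"nodes": [{"id", "file", "sha256"}, ...]}) is a guess for illustration; the real discovery-layer schema may differ.

```python
import hashlib, json, pathlib

def audit_registry(index_path: str) -> list[str]:
    """Recompute each node file's SHA-256 and compare it with the digest
    recorded in index.json; an empty return value is a precondition for
    issuing the Sovereign Seal. The index schema is assumed, not documented."""
    index = json.loads(pathlib.Path(index_path).read_text())
    failures = []
    for node in index["nodes"]:
        digest = hashlib.sha256(pathlib.Path(node["file"]).read_bytes()).hexdigest()
        if digest != node["sha256"]:
            failures.append(node["id"])
    return failures

# Example invocation (requires a real index.json and node files on disk):
# stale = audit_registry("index.json")
# print("registry verified" if not stale else f"hash drift in nodes: {stale}")
```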
AI Safety Institute: approach to evaluations
"The UK AI Safety Institute (AISI) framework outlines its approach to evaluating advanced AI models for national security and societal risks, focusing on five capabilities: misuse, societal impacts, autonomous systems, safeguards, and model analysis. This framework applies to developers of frontier AI models engaging with the AISI for pre-deployment safety testing, as detailed in the 'Our approach to evaluations' section."
Technical ID
uk-ai-safety-institute-framework-2023
Online Safety Act 2023
"The UK Online Safety Act 2023 imposes a statutory duty of care on providers of user-to-user services and search services to protect users from illegal and harmful content. This includes conducting comprehensive risk assessments for illegal content (Part 3, Section 7) and content harmful to children (Part 3, Section 10), and implementing proportionate systems and processes, including algorithmic content moderation, to mitigate identified risks."
Technical ID
uk-online-safety-act-2023
UN Secretary-General AI Advisory Body Interim Report 2024 — Governing AI for Humanity
"This interim report from the UN's AI Advisory Body proposes a framework for global AI governance, recommending the creation of a new UN-affiliated agency to coordinate international efforts. It establishes five core principles for AI governance: inclusivity, public interest, data governance, universality, and alignment with the UN Charter and international human rights law (Principle 1)."
Technical ID
un-ai-advisory-body-2024
UNESCO Ethics of AI
"Compliance with the UNESCO Recommendation on the Ethics of Artificial Intelligence demands a comprehensive governance framework ensuring AI systems uphold human rights, dignity, and environmental sustainability. The foundational principles mandate that `humanOversightRequired` is perpetually maintained for meaningful control over system determinations. Prior to any deployment, verification is necessary that both an `ethicalImpactAssessmentCompleted` and a `dataPrivacyImpactAssessmentCompleted` have been executed to prospectively evaluate risks and safeguard personal information. To promote fairness, a `biasDetectionMechanismActive` must be operational, complemented by a specific `vulnerableGroupProtectionMechanism` to prevent disparate negative outcomes. Transparency and responsibility are enforced by confirming an `explainabilityMethodImplemented` exists, alongside a clearly articulated `accountabilityFrameworkDefined` that assigns liability for system outcomes. The `proportionalityPrincipleVerified` ensures AI methods are appropriate and necessary for a given legitimate aim. Broader ecosystem health requires that the `environmentalImpactAssessed` is thoroughly documented. Inclusive governance is contingent upon proof that `stakeholderConsultationConducted` activities have meaningfully informed the AI lifecycle. Finally, system resilience and user trust are contingent upon a successful `securityRiskAssessmentCompleted` and the establishment of a `redressMechanismAvailable` for any individuals adversely affected, thereby aligning technological development with internationally recognized ethical standards."
Technical ID
unesco-ethics-ai
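The backticked flags in this node suggest a machine-checkable gate. Below is an illustrative convenience for collecting them into one pre-deployment checklist; the class and its blocking semantics are assumptions of this sketch, not a UNESCO artifact.

```python
from dataclasses import dataclass, fields

@dataclass
class UnescoEthicsChecklist:
    """The boolean gates named in the node, gathered into one record."""
    humanOversightRequired: bool = False
    ethicalImpactAssessmentCompleted: bool = False
    dataPrivacyImpactAssessmentCompleted: bool = False
    biasDetectionMechanismActive: bool = False
    vulnerableGroupProtectionMechanism: bool = False
    explainabilityMethodImplemented: bool = False
    accountabilityFrameworkDefined: bool = False
    proportionalityPrincipleVerified: bool = False
    environmentalImpactAssessed: bool = False
    stakeholderConsultationConducted: bool = False
    securityRiskAssessmentCompleted: bool = False
    redressMechanismAvailable: bool = False

    def missing(self) -> list[str]:
        """Flags still unset; deployment would be blocked until this list is empty."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

check = UnescoEthicsChecklist(humanOversightRequired=True, ethicalImpactAssessmentCompleted=True)
print(check.missing())  # ten remaining gates before the framework is satisfied
```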
California SB 53 (Transparency in Frontier AI Act)
"The nation's first comprehensive safety and transparency requirement for frontier AI developers, mandating catastrophic risk frameworks, 15-day incident reporting, and whistleblower protections for models trained above 10^26 FLOPs."
Technical ID
us-ca-sb53-frontier-ai
Guidelines for Secure AI System Development
"This joint guidance from CISA, UK NCSC, and 21 international partners provides a framework for organizations to secure AI systems throughout their lifecycle. It outlines four key areas—Secure Design, Secure Development, Secure Deployment, and Secure Operation—mandating a 'security-by-design' approach for all AI system developers and providers."
Technical ID
us-cisa-ai-cybersecurity-guidelines-2023
Colorado AI Act (SB 205) - High-Risk Systems
"US state-level regulatory requirements for developers and deployers of high-risk AI systems making consequential decisions, mandating algorithmic discrimination audits and consumer opt-out rights."
Technical ID
us-co-sb205-high-risk-ai
Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
"This Executive Order directs US federal agencies to establish new standards for AI safety and security, requiring developers of powerful foundation models that pose a serious risk to national security, economic security, or public health and safety to notify the federal government of their activities and share the results of all red-team safety tests (Section 4.2). It also mandates the development of guidance for content authentication and watermarking to clearly label AI-generated content (Section 4.5)."
Technical ID
us-eo-14110-ai-2023
SA National AI Policy (2026 Draft) - Accountability & Skills
"Operationalizing the April 2026 South African Cabinet mandates for AI accountability, localized data processing, and algorithmic transparency for enterprise and government contracts."
Technical ID
za-national-ai-policy-2026
Technical Registry Export
Context: AI Governance & Law / Total Filtered: 79 Nodes
This utility allows developers and AI architects to instantly extract technical identifiers for the current filtered view. Use these IDs to programmatically call the Bidda Sovereign Forest API. All exports respect the global Triple-Verification Pipeline.
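A minimal sketch of the programmatic use this utility describes, assuming a hypothetical REST route and JSON response shape; the actual Bidda API path, query parameters, and L402 settlement handshake are not specified here and are assumptions.

```python
import json, urllib.request

# Hypothetical endpoint: exported Technical IDs are assumed to map to
# per-node resources; the real route and auth flow may differ.
BASE = "https://bidda.com/api/nodes"

def fetch_node(technical_id: str) -> dict:
    """Fetch one registry node by its exported Technical ID (assumed JSON body)."""
    with urllib.request.urlopen(f"{BASE}/{technical_id}") as resp:
        return json.load(resp)

# Example usage (requires network access and a live endpoint):
# for node_id in ["nist-ai-rmf-measure", "rlhf-loop-audit"]:
#     print(fetch_node(node_id))
```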
