Scope & Methodology

This structured feed aggregates all identified material regulatory updates, consultations, proposed legislation, major enforcement actions, and official guidance on Artificial Intelligence released by key global regulators and other trusted sources, strictly within the timeframe of December 23, 2025 – January 22, 2026.

Each entry is mapped by geography, regulatory authority, sector, and risk domain, with official source links and actionable summaries.

United States

1. FDA & EMA: Joint Guiding Principles for AI in Drug Development

Regulator / Issuing Body: U.S. Food and Drug Administration (FDA), European Medicines Agency (EMA)

Jurisdiction & Region: United States, European Union

Status: Final (released January 2026)

Domain / Risk Area: Drug Development, Clinical AI/ML, Medicines Regulation, Ethics, Transparency

Key Obligation Summary:

  • FDA and EMA published 10 high-level, jointly developed principles establishing baseline expectations for AI and machine learning across all phases of drug development, from early research through post-market monitoring.

  • Key requirements include human-centric and ethical design, proportionate risk-based oversight, robust data governance, traceable decision-making, lifecycle management, and transparency of AI systems.

  • Organizations must ensure clear documentation, risk assessment proportionate to intended use, and that humans retain oversight (including review of key decisions).

  • Although non-binding, these principles are expected to shape future risk-based regulatory frameworks and industry best practices in both the U.S. and EU.

  • These guidelines were coordinated to avoid regulatory fragmentation and increase cross-Atlantic harmonization for AI in medicinal products.

Business Impact / Compliance Takeaway:

  • R&D, regulatory, and compliance teams must ensure AI and ML tools used during development are fully documented and audited, with clear records of data provenance and model decisions maintained.

  • The requirement for robust risk and data governance means additional process controls and possible audit requirements for regulated organizations.

  • Companies should review their internal processes, documentation, and risk controls for alignment with new expectations.

Source Links:

2. FDA: Relaxed Oversight of AI-Enabled Devices and Wearables

Regulator / Issuing Body: U.S. Food and Drug Administration (FDA)

Jurisdiction & Region: United States

Status: Final Policy Update (January 2026)

Domain / Risk Area: Digital Health, Medical Devices, Clinical Decision Support, AI Software

Key Obligation Summary:

  • The FDA announced a reduction in premarket review obligations for digital health products and AI-enabled devices that issue a single clinical recommendation, provided they meet all general statutory requirements.

  • Oversight will increasingly target only higher-risk devices, including autonomous or multi-function AI systems, while lower-risk products now face less regulatory friction.

  • Developers and manufacturers should reassess their products against these new FDA risk categories to determine whether and what kind of premarket submissions are required.

Business Impact / Compliance Takeaway:

  • Companies introducing new AI-enabled health technology may now gain faster market access if their products fall into the reduced oversight category, streamlining U.S. launch plans.

  • It remains critical for device makers to verify risk classification and continue meeting all baseline regulatory requirements.

Source Links:

European Union

3. EDPB / EDPS: Joint Opinion 1/2026 on Digital Omnibus (AI Act Amendments)

Regulator / Issuing Body: European Data Protection Board (EDPB) & European Data Protection Supervisor (EDPS)

Jurisdiction & Region: European Union

Status: Official Opinion/Consultation (adopted January 20, 2026)

Domain / Risk Area: Data Privacy, AI Governance, Digital Regulation, AI Act Implementation

Key Obligation Summary:

  • The Joint Opinion welcomes the Digital Omnibus's practical simplifications, such as regulatory sandboxes and lighter obligations for SMEs, but insists that fundamental rights, especially regarding personal data and high-risk AI, remain non-negotiable.

  • Recommends:

    • Direct Data Protection Authority (DPA) involvement in the operation of any AI sandboxes.

    • Strict oversight and robust safeguards for processing special-category (sensitive) personal data.

    • Retention of the mandatory registration of high-risk AI systems to ensure transparency and accountability.

    • Clear delineation of roles for supervisory authorities to prevent regulatory conflicts or gaps.

  • Cautions against weakening any existing protections in pursuit of administrative streamlining.

Business Impact / Compliance Takeaway:

  • Organizations developing or deploying AI in the EU must prepare for possible modifications to AI Act implementation (especially around sandboxes and SME obligations), and should anticipate stricter requirements for handling special-category personal data and mandatory registration of high-risk AI systems.

  • Continuous monitoring of EU-level consultations and the evolving delineation of roles among supervisory authorities is necessary.

Source Links:

Asia-Pacific (APAC)

4. South Korea: Framework Act on Artificial Intelligence in Force (January 2026)

Regulator / Issuing Body: National Assembly of the Republic of Korea, with oversight by designated ministries (e.g., Ministry of Science & ICT)

Jurisdiction & Region: South Korea (APAC)

Status: In force (January 2026)

Domain / Risk Area: AI Governance, AI Safety, National Digital Policy

Key Obligation Summary:

  • Institutes a statutory national framework for the regulation and management of AI.

  • Requires:

    • Establishment of a National AI Committee (policy direction, oversight)

    • Creation of an AI Safety Research Institute (risk assessment, safety standards)

    • Regulation of “high-impact” AI (risk/impact assessments, special obligations)

    • Mandatory labeling and disclosure for generative AI and transparency of outputs

    • Designation of a local representative for foreign-based AI providers offering services within Korea.

Business Impact / Compliance Takeaway:

  • All businesses supplying or operating AI systems in the Korean market must assess whether their offerings qualify as “high-impact” and, if so, undertake risk/impact assessments and meet the new disclosure and labeling obligations.

  • Foreign AI companies must appoint and disclose a local representative.

Source Link: CX Network
