Scope & Methodology
This structured feed aggregates all identified material regulatory updates, consultations, proposed legislation, major enforcement actions, and official guidance on Artificial Intelligence released by key global regulators and other trusted sources, strictly within the timeframe of October 25, 2025 – November 24, 2025.
Each entry is mapped by geography, regulatory authority, sector, and risk domain, with precise publication dates, official source links, and actionable summaries. Where no updates were issued by a mandated regulator, this is also stated with supporting search methodology.
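The per-entry field structure described above can be represented as a simple record. A minimal sketch follows; the field names mirror the labels used in each entry of this feed, but the class and example values are illustrative, not an official schema:

```python
from dataclasses import dataclass

@dataclass
class FeedEntry:
    # Field names mirror the labels used by entries in this feed.
    title: str
    date: str                 # publication date, e.g. "November 6, 2025"
    regulator: str
    jurisdiction: str
    status: str               # e.g. "Draft", "Consultation", "Final"
    domain_risk_area: str
    key_obligation_summary: str
    recommended_action: str
    primary_sources: list     # official source links, when available

# Example populated from the first entry below (abbreviated).
entry = FeedEntry(
    title="FDA DHAC - Generative AI in Mental Health Medical Devices",
    date="November 6, 2025",
    regulator="U.S. Food and Drug Administration (FDA)",
    jurisdiction="United States / Federal",
    status="Consultation, Draft",
    domain_risk_area="Healthcare, Digital Mental Health, Generative AI",
    key_obligation_summary="Risk-tiered TPLC approach; comments due December 8, 2025.",
    recommended_action="Review materials and submit comments before the deadline.",
    primary_sources=[],
)
```

A structure like this makes it straightforward to filter the feed by jurisdiction, status, or risk domain when tracking multiple regulators.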
United States
1. FDA Digital Health Advisory Committee – Generative AI in Mental Health Medical Devices
Date: November 6, 2025
Regulator: U.S. Food and Drug Administration (FDA)
Jurisdiction/Region: United States / Federal
Status: Public meeting and open consultation (Consultation, Draft)
Domain/Risk Area: Healthcare, Digital Mental Health, Generative AI, Patient Safety, Human Oversight
Key Obligation Summary:
The FDA’s Digital Health Advisory Committee (DHAC) held a public session focused on regulatory pathways for generative AI in digital mental health medical devices. Points of discussion included:
Application of a risk-tiered, Total Product Lifecycle (TPLC) regulatory approach.
Emphasis on rigorous trial design, robust human/clinician oversight protocols for higher-risk uses (including suicide prevention/escalation).
Solicitation of comments on device performance evaluation, post-market monitoring, and user safeguards.
Enforcement priorities articulated as focusing on high-risk AI deployments in mental health contexts over lower-risk applications.
Public comments are accepted through December 8, 2025.
Additionally, the FDA has updated its AI-Enabled Medical Devices List to highlight devices that incorporate large language models and foundation-model architectures; separately, the agency is expanding internal use of generative AI for reviews and workflow optimization.
Recommended Action:
All stakeholders with interests in digital mental health or generative AI medical devices should review the materials, assess impact on product development or clinical deployment, and submit comments before the deadline.
Primary Sources:
2. White House Draft Executive Order on Federal AI Law Preemption
Date: Mid-November 2025 (Draft, not signed as of November 24, 2025)
Regulator: The White House (coordinating the Department of Justice, Commerce Department, FTC, FCC)
Jurisdiction/Region: United States / Federal (direct state-level impact)
Status: Draft Executive Order (not in force)
Domain/Risk Area: Cross-sector AI, Preemption, Litigation, Consumer Protection
Key Obligation Summary:
Proposes federal preemption of state-level AI laws deemed “burdensome,” to be enforced via a newly formed Department of Justice AI Litigation Task Force. Key points include:
Directs the FTC to clarify its authority regarding state laws that mandate alteration or labeling of truthful AI outputs.
Instructs the Commerce Department to catalogue and report on state AI laws for possible federal challenge or funding restrictions.
Would direct the FCC to consider federal AI disclosure/reporting rules superseding inconsistent state requirements.
Directs federal agencies to actively oppose enforcement of conflicting state AI rules.
Catalysts include new California and Colorado AI statutes targeting transparency and anti-discrimination.
As of report date, this is a draft and not yet operative; status remains closely watched and likely to trigger legal contest if signed.
Recommended Action:
Monitor for the final published version and its legal effect. If enacted, organizations should be prepared for a rapid transition from varying state compliance strategies to a single federal regime.
Primary Sources:
3. State Attorneys General AI Task Force Launch
Date: Early November 2025
Regulator/Issuing Body: North Carolina & Utah Attorneys General (in collaboration with additional states and major AI developers, e.g., OpenAI, Microsoft)
Jurisdiction/Region: United States / Multi-state Coalition
Status: Task force launch (Active)
Domain/Risk Area: AI Safety, Child Protection, Consumer Safeguards
Key Obligation Summary:
Establishment of a bipartisan AI task force that collects best practices and frames policy and enforcement strategies for consumer protection and safety, particularly regarding generative and consumer-facing AI. Focus areas include:
Child safety in AI-mediated contexts, along with mitigation of bias and related harms.
Use of existing consumer protection and anti-discrimination statutes to regulate harmful AI uses, as demonstrated by a recent Texas settlement with a healthcare AI vendor.
Enhanced cooperation with technology firms for both policy input and technical expertise.
Recommended Action:
AI companies operating in U.S. states should closely track new task force outputs and be prepared for changing enforcement expectations or harmonized multi-state compliance requirements.
Primary Sources:
4. FTC, FCC AI Preemption Policy Directions (as referenced in Draft EO)
Date: Mid-November 2025
Regulator: Federal Trade Commission (FTC) and Federal Communications Commission (FCC), by White House directive
Jurisdiction/Region: United States / Federal
Status: Policy directives described in draft order (Draft proposal, not independently enacted)
Domain/Risk Area: AI Regulation, Consumer Protection, Transparency, Litigation
Key Obligation Summary:
If and when the Executive Order is signed, both the FTC and FCC would be expected to issue or update standards that preempt state-imposed AI requirements for output modification and disclosure. This would create unified national frameworks for AI system transparency and operational reporting.
Recommended Action:
Prepare for a shift from fragmented state to unified federal standards for AI outputs and disclosures, pending finalization of the EO and federal rulemaking.
Primary Sources:
European Union / EEA / United Kingdom
5. EU Digital Omnibus Package – Proposed Amendments to the AI Act and Digital Regulation
Date: November 19, 2025
Regulator: European Commission / EU AI Office
Jurisdiction/Region: European Union
Status: Legislative proposal (Consultation / Draft, not law)
Domain/Risk Area: Cross-sector AI Risk, Data Privacy, Digital Economy
Key Obligation Summary:
This major omnibus proposal seeks to overhaul elements of the AI Act and the wider digital regulatory framework. Key provisions include:
Extending compliance deadlines for certain high-risk AI system categories, potentially postponing enforcement until 2027/28.
Introducing grace periods for legacy generative AI developed before August 2026, with deferred transparency requirements.
Eliminating mandatory registration for AI systems deemed not high-risk, while retaining documentation requirements for audit purposes.
Centralizing supervision of general-purpose AI (GPAI) under the European Commission’s AI Office.
Proposing GDPR updates to permit “legitimate interest” as a legal basis for personal data processing in AI, subject to minimum safeguards and documentation.
The proposal is accompanied by critical reviews from civil society stakeholders concerned about the dilution of original safeguards and compliance clarity.
Recommended Action:
All businesses subject to the AI Act should review the proposed roadmap extension and assess eligibility for compliance grace periods, documentation implications, and audit process impacts. They should engage in the ongoing consultation process and monitor for possible scope amendments before enactment.
Primary Sources:
6. EDPS (European Data Protection Supervisor): Updated Generative AI Guidelines for EU Bodies
Date: Early November 2025
Regulator: European Data Protection Supervisor (EDPS)
Jurisdiction/Region: European Union (Institutional focus)
Status: Finalized regulatory guidelines
Domain/Risk Area: Data Privacy, Public Sector Generative AI Use, Accountability
Key Obligation Summary:
The EDPS released detailed guidelines and compliance checklists to clarify the lawful deployment of generative AI in EU agencies and institutions. Specifically, the guidelines address:
Lawful bases for processing personal data in AI.
Rights management for data subjects.
Data minimization and accountability documentation.
The guidance aims to ensure that EU public sector AI projects align with both the AI Act and data protection obligations.
Recommended Action:
All EU institutions, agencies, and their contractors utilizing or piloting generative AI must promptly review the checklist and update internal data handling policies.
Primary Sources:
7. United Kingdom: Government Outlines Blueprint for AI Regulation
Date: Late October/Early November 2025
Regulator: UK Government (Department for Science, Innovation and Technology)
Jurisdiction/Region: United Kingdom
Status: Policy blueprint (Pre-legislative, strategic, under consultation)
Domain/Risk Area: Cross-sector AI Regulation, Regulatory Sandboxes, Innovation Enablement
Key Obligation Summary:
The blueprint sets out the government’s medium-term strategy to:
Launch “AI Growth Labs” as regulatory sandboxes for real-world AI testing under regulator supervision.
Develop new licensing and certification systems for high-risk AI.
Prioritize sectoral pilots for healthcare, finance, and transport.
Propose changes to copyright law to address fair compensation for artists whose works are used in AI training (a subject currently under separate consultation).
Recommended Action:
UK businesses should follow sectoral calls for pilot participation in AI Growth Labs, monitor ongoing legislative and copyright consultations, and prepare for potential changes to sectoral rules on AI use and risk management.
Primary Sources:
Asia-Pacific (APAC) & Rest of World
8. India – Draft Rules for Mandatory Labeling of AI-Generated Content
Date: November 2025
Regulator: Ministry of Electronics and Information Technology
Jurisdiction/Region: India
Status: Draft rules (Consultation, not final)
Domain/Risk Area: Content Regulation, Platform Liability, Transparency
Key Obligation Summary:
The proposed rules require:
All AI-generated content on digital platforms to include permanent, machine-embedded labeling.
Platforms/intermediaries to face statutory liability for failing to remove or flag such content.
Additionally, broader “AI Governance Guidelines” have been released in parallel (non-binding), advocating for responsibility, human-centric design, and broad sectoral oversight.
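The draft rules do not prescribe a labeling format, but the idea of a permanent, machine-readable label can be sketched as follows. This hypothetical example binds an AI-origin record to an asset's hash and signs it with an HMAC so tampering is detectable; the key, field names, and record layout are this sketch's own assumptions, not anything specified by the ministry:

```python
import hashlib
import hmac
import json

# Placeholder key for illustration only; a real deployment would use a
# securely managed platform signing key.
SECRET_KEY = b"platform-signing-key"

def make_ai_label(content: bytes, generator: str) -> dict:
    """Build a machine-readable label binding the asset hash to its AI origin."""
    record = {
        "ai_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_ai_label(content: bytes, record: dict) -> bool:
    """Check that the label matches the asset and has not been altered."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if claimed.get("sha256") != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

In practice such a record could be embedded in the asset's metadata or carried as a sidecar manifest; either way, verification fails if the content or the label is modified after signing.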
Recommended Action:
Digital platforms, content providers, and social media intermediaries should prepare technical plans for content labeling and compliance systems and engage in the open consultation process prior to rule finalization.
Primary Sources:
9. Malaysia, Vietnam, South Korea, China – Legislative Progress and Enforcement Rulemaking
Date: Updates through November 2025
Regulators: Respective national digital authorities and ministries
Jurisdiction/Region: Malaysia, Vietnam, South Korea, China (APAC)
Status: Draft laws, enforcement decrees, and policy proposals (Not yet final)
Domain/Risk Area: High-risk AI, AI Content Labeling, Biometric Regulation, Transparency
Key Obligation Summary:
Malaysia: Advanced legislative proposal for a risk-based AI law with mandatory compliance and incident-reporting for “high-risk” AI systems.
Vietnam: Ongoing consultation on a risk-tiered AI law with ambiguity in risk classifications, slated for introduction in early 2026.
South Korea: Enforcement rules for the AI Basic Act include plans for mandatory watermarking of AI-generated content, with grace periods for industry adaptation.
China: As of September 2025, compulsory labeling of AI-generated content has been introduced along with new rules for facial recognition and biometric AI in healthcare/retail.
Recommended Action:
Multinational firms should actively map jurisdiction-specific compliance requirements, especially regarding upcoming content-labeling and biometric controls, as final rules and enforcement deadlines approach or are published.
Primary Sources:
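A jurisdiction-mapping exercise of the kind recommended above can start as a simple structured register. The sketch below paraphrases the draft measures summarized in this entry; the class, field names, and obligation strings are illustrative simplifications, not legal characterizations:

```python
from dataclasses import dataclass, field

@dataclass
class JurisdictionRule:
    jurisdiction: str
    status: str                        # e.g. "draft", "in force"
    obligations: list = field(default_factory=list)

# Register paraphrasing the APAC measures tracked in this entry.
APAC_AI_RULES = [
    JurisdictionRule("Malaysia", "draft",
                     ["risk-based compliance", "incident reporting for high-risk AI"]),
    JurisdictionRule("Vietnam", "draft",
                     ["risk-tiered obligations (classification pending)"]),
    JurisdictionRule("South Korea", "draft enforcement rules",
                     ["mandatory watermarking of AI-generated content"]),
    JurisdictionRule("China", "in force",
                     ["mandatory AI-content labeling",
                      "facial recognition / biometric rules"]),
]

def obligations_for(jurisdiction: str) -> list:
    """Look up tracked obligations for one jurisdiction (empty if untracked)."""
    for rule in APAC_AI_RULES:
        if rule.jurisdiction.lower() == jurisdiction.lower():
            return rule.obligations
    return []
```

Keeping the register in a machine-readable form makes it easy to diff as draft rules are finalized and enforcement deadlines are announced.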