AI Regulation

AI has attracted significant attention over the past few years, and governments around the world are at various stages of regulating it.

China

The Interim Measures for the Management of Generative Artificial Intelligence Services is a set of temporary guidelines designed to regulate generative AI. These measures aim to address immediate concerns related to the deployment of AI systems that create content, such as text, images, or videos.

Key aspects include:

  1. Safeguards to prevent misuse or harmful outputs.
  2. Mechanisms for monitoring and accountability.
  3. Adherence to core socialist values.
  4. Advance review of models by the state.

The interim measures are intended to provide a framework for managing generative AI while more comprehensive regulations are developed. The regulation took effect August 15, 2023.

EU

The EU AI Act is a comprehensive regulatory framework adopted by the European Union to govern artificial intelligence technologies. The Act classifies AI applications into categories based on their risk levels and establishes requirements for each category. High-risk AI systems, such as those used in critical infrastructure or law enforcement, face stricter regulations and oversight. The Act also emphasizes transparency, requiring AI systems to provide clear information about their functionality and limitations.

The regulation took effect August 1, 2024.

US

The US is approaching AI regulation in a decentralized manner. California's AB 3211 bill is at the forefront and would require:

  1. Digital watermarks for AI-generated photos, videos, and audio clips.
  2. Large online platforms (ex. Facebook, Instagram, X) to label AI-generated content in a way that consumers can understand.
  3. Generative AI providers to conduct adversarial testing exercises.

Adobe, Microsoft, and OpenAI originally opposed this bill back in April, but they support the newly revised version. The bill's author, Assemblymember Buffy Wicks, should be commended for her leadership and her willingness to work with AI companies to refine the bill over the past year.

Conclusion

AI is rapidly advancing and regulators are struggling to keep up.

Key aspects that I believe regulators should be addressing:

AI Companies:

  1. Risk-based approach
    • Mitigating controls should be proportional to the impact and potential damage
  2. Assessment by regulators
    • New models should be provided to regulators prior to release for risk assessments
  3. Safeguards to prevent misuse or harmful outputs
    • Guardrails should be implemented to prevent malicious actors from misusing legitimate AI technologies
  4. Transparency and explainability
    • AI technologies should disclose to users their capabilities, limitations, and biases, and explain how an output was generated
  5. Mechanisms for monitoring and accountability
    • Appropriate audit logging and labeling of AI-generated media (provenance) should be implemented by AI companies
  6. Data security
    • Training datasets and data entered by users should be regulated for privacy and intended use
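
Item 5 above combines two ideas: audit logging and machine-readable labels that tie a piece of media back to the model that produced it. As a minimal sketch of the labeling half, the snippet below builds a provenance record containing a content hash, so any alteration of the media invalidates the label. The field names and record structure are my own illustration, not taken from any regulation; real implementations would follow an industry standard such as C2PA.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content: bytes, model_name: str) -> dict:
    """Build a minimal provenance record for AI-generated media.

    The fields are illustrative only; production systems would follow
    a standard scheme (ex. C2PA) rather than this ad-hoc layout.
    """
    return {
        "generator": model_name,                        # which model produced the content
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),  # ties the record to the exact bytes
        "ai_generated": True,                           # the consumer-facing label
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that a record matches the content it claims to describe."""
    return record.get("sha256") == hashlib.sha256(content).hexdigest()

if __name__ == "__main__":
    image_bytes = b"...generated image bytes..."
    record = make_provenance_record(image_bytes, "example-model-v1")
    assert verify_provenance(image_bytes, record)           # untouched content passes
    assert not verify_provenance(image_bytes + b"x", record)  # any edit breaks the label
    print(json.dumps(record, indent=2))
```

A hash-based record like this only proves integrity; binding the label to the media as it moves between platforms (the harder problem AB 3211 targets with watermarks) requires embedding the signal in the media itself.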

Other Companies:

  1. Risk-based approach
    • Mitigating controls should be proportional to the impact and potential damage
  2. Social networks (ex. Facebook, Instagram, X, Reddit)
    • Social network companies should label AI-generated content or prevent its distribution
    • Social network companies should prevent AI-driven content interaction that manipulates sentiment and impacts content impressions
  3. Online games (ex. gambling sites)
    • Online gaming companies should prevent AI-driven bots from using their platforms, including bots the platform itself condones (ex. fake users that create content, poker bots that increase the number of active games and create liquidity)

For each AI model, support for regulations that align with the above items is evaluated and incorporated into the Governance category of my scoring.