Steering the AI tide: Unpacking the new international guidelines

Ray Fleming writes about the emerging situation around the world:

Rapidly evolving guidelines and regulations for the use of AI systems are being published in different countries. In the same way that companies running online services have to comply with privacy legislation in the countries and regions of their users, it's likely that the situation will become similar for AI services.

European Union

The European Union unveiled a comprehensive set of regulations in 2021. These regulations underscore the balance between leveraging AI's potential and safeguarding the public interest. They propose a legal framework that categorises AI systems by risk level, imposing stricter scrutiny on 'high-risk' AI while fostering innovation in less risky applications. Noteworthy is the emphasis on transparency, accountability, and data protection, aligning with Europe's robust data privacy standards.

These developments prompt a re-evaluation of AI strategies among organisations, urging adherence to the outlined ethical and legal standards. It's a nudge towards responsible AI, ensuring that technology serves humanity while spurring economic growth.

More on the European Regulations here

United Kingdom

The recent unveiling of AI guidelines by the UK's Competition and Markets Authority (CMA) is a proactive step towards ensuring a balanced and consumer-centric AI landscape. These guidelines aim to foster the responsible development and use of Foundation Models* (FMs) - like the ones underpinning ChatGPT and Microsoft's Copilots - underlining the importance of consumer protection and competitive fairness. The guidelines foresee a vibrant economy spurred by AI, provided it's harnessed responsibly. Here’s a look at the proposed principles:

  • Accountability: Ensures that developers and deployers are answerable for the outputs provided to consumers.

  • Access: Promotes ongoing ready access to key inputs without undue restrictions.

  • Diversity: Encourages a range of business models, both open and closed.

  • Choice: Provides businesses the autonomy to decide how to use FMs.

  • Flexibility: Allows for switching and/or using multiple FMs based on need.

  • Fair dealing: Forbids anti-competitive conduct including self-preferencing, tying, or bundling.

  • Transparency: Ensures consumers and businesses are informed about the risks and limitations of FM-generated content.

More on the UK proposals here

What does this mean?

Companies developing AI for international use will need to navigate varying regulatory landscapes, adhering to both the EU's stringent regulations and other distinct frameworks like the UK's. This might necessitate designing flexible AI systems that can be easily adjusted to comply with different regional laws. It could also spur a global move towards more transparent and ethical AI practices, fostering trust among users and authorities. The varied regulations may pose challenges but also opportunities for companies to showcase responsible AI deployment.

* A quick note on terminology: "Foundation Models" or FMs are broad AI systems adaptable to a variety of purposes. They differ from "Large Language Models" (LLMs), which focus primarily on processing and generating human language, and from "Generative AI", a broader term for systems that generate new content based on patterns learned from data.
