Global Pursuit of Ethical Artificial Intelligence Regulation

Artificial intelligence (AI) systems are advancing rapidly and becoming entrenched in everyday life, from virtual assistants to content recommendation algorithms. This proliferation raises critical ethical questions around bias and accountability, and with them the need for regulation to ensure AI is deployed safely and responsibly. Countries around the world are scrambling for answers, taking markedly different approaches to governing AI.

In Africa, progress has been steady but limited. No country has yet adopted comprehensive AI legislation, though many address AI risks through data protection laws. These laws, now in force in over half of African countries, typically give individuals safeguards against solely automated decision-making that significantly affects them. Continental bodies like the African Union (AU) have also taken steps towards collective governance, establishing working groups to develop common African positions. Individual countries such as Mauritius, Egypt and Rwanda have adopted national AI strategies and policies, though legislative and policy responses overall remain sparse.

Globally, approaches differ significantly across regions:

The European Union’s proposed AI Act takes a sweeping approach, aiming to create a harmonized legal framework for the development and use of AI across the EU. It classifies AI systems by risk level and attaches obligations accordingly, including transparency requirements, cybersecurity rules, and fines for non-compliance; high-risk systems must also be registered in an EU database. Certain practices, such as the use of ‘subliminal’ manipulative techniques or real-time remote biometric identification in public spaces, are prohibited outright.

In contrast, regulation in the United States has been more fragmented and sector-specific, led by federal agencies such as the FTC. Rather than binding laws, the Biden administration has focused on voluntary industry commitments to address AI risks. Strategy documents such as the White House’s Blueprint for an AI Bill of Rights also stress principles like transparency, fairness and human oversight. Proposed legislation such as the Algorithmic Accountability Act would require companies to assess automated systems for bias and other impacts.

China’s approach aligns AI governance with state ideology and control. Rules require that generative AI services “adhere to the core values of socialism” while banning content that could threaten state power or social stability. China also mandates licenses and security assessments for providers of high-risk systems.

Despite these differences, common themes emerge in AI governance worldwide:

Transparency and explainability: Most efforts require or encourage AI systems to be transparent about how they work so users can interpret outputs appropriately. Systems should be able to explain the reasoning behind significant decisions.
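To see what explainability can mean concretely, consider the simplest case: in a linear scoring model, each feature’s contribution to a decision is just its weight multiplied by its value, so the system can report exactly why a score came out as it did. The sketch below is purely illustrative; the feature names, weights and values are hypothetical, and explaining complex models requires far more elaborate methods.

```python
# Illustrative sketch of per-decision explainability for a linear scoring
# model. All feature names, weights and values are hypothetical.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.2, "years_employed": 0.3}
BIAS = -0.5

def explain(applicant: dict) -> dict:
    """Return the score plus each feature's signed contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return {"score": BIAS + sum(contributions.values()),
            "contributions": contributions}

print(explain({"income": 1.0, "debt_ratio": 0.4, "years_employed": 2.0}))
# roughly: {'score': 0.42, 'contributions': {'income': 0.8,
#           'debt_ratio': -0.48, 'years_employed': 0.6}}
```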

Preventing discrimination: There is widespread concern about algorithmic bias and AI perpetuating historical patterns of discrimination. Rules typically aim to ensure fairness and prohibit unjustified discrimination based on protected characteristics like race, gender or age.
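Auditors often operationalize such fairness requirements as measurable statistics. One common, deliberately simplified example is the demographic parity gap: the difference in positive-outcome rates between groups. The group labels and decisions below are hypothetical, and real audits weigh multiple, sometimes conflicting, metrics.

```python
# Illustrative sketch of a simplified fairness check: the demographic
# parity gap, i.e. the spread in positive-outcome rates across groups.
# Group labels and decisions are hypothetical.
def positive_rate(outcomes: list[int]) -> float:
    """Fraction of decisions that were positive (1 = approved)."""
    return sum(outcomes) / len(outcomes)

def parity_gap(outcomes_by_group: dict[str, list[int]]) -> float:
    """Largest gap in positive rates across groups; 0.0 means parity."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

decisions = {"group_a": [1, 1, 0, 1], "group_b": [1, 0, 0, 0]}
print(parity_gap(decisions))  # 0.75 - 0.25 = 0.5
```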

Safety and reliability: Regulations often mandate testing, risk assessments and ongoing monitoring to ensure systems function reliably and safely throughout their lifespan. Safety requirements cover both external threats such as hacking and direct physical or psychological harms.
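A minimal sketch of what ongoing monitoring can look like in practice: track a rolling error rate after deployment and flag the system for human review when it drifts past a threshold. The window size and threshold below are hypothetical, not drawn from any regulation.

```python
# Illustrative sketch of post-deployment monitoring: flag the system for
# review when its rolling error rate exceeds a threshold. The 100-decision
# window and 5% threshold are hypothetical.
from collections import deque

class ErrorMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.outcomes = deque(maxlen=window)  # 1 = error, 0 = ok
        self.threshold = threshold

    def record(self, error: bool) -> bool:
        """Log one outcome; return True if the system needs review."""
        self.outcomes.append(1 if error else 0)
        return sum(self.outcomes) / len(self.outcomes) > self.threshold

monitor = ErrorMonitor(window=10, threshold=0.2)
alerts = [monitor.record(err) for err in [False, False, True, True, True]]
print(alerts[-1])  # True: 3 errors in 5 decisions exceeds the 20% threshold
```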

Data privacy: Strong data protections are widely viewed as essential, given AI’s dependence on vast troves of data. Rules aim to limit data collection and retention to what is strictly necessary and prevent abusive practices.
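Retention limits of this kind often reduce to simple automated purging rules. A minimal sketch, assuming a hypothetical 30-day retention window and record layout:

```python
# Illustrative sketch of a data-retention rule: keep only records still
# inside a fixed retention window. The 30-day window is hypothetical.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

def enforce_retention(records: list[dict], now: datetime) -> list[dict]:
    """Drop records collected longer ago than the retention window."""
    return [r for r in records if now - r["collected_at"] <= RETENTION]

now = datetime.now(timezone.utc)
records = [{"id": 1, "collected_at": now - timedelta(days=5)},
           {"id": 2, "collected_at": now - timedelta(days=90)}]
print([r["id"] for r in enforce_retention(records, now)])  # [1]
```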

Human oversight: Most governance frameworks stress that humans must remain accountable for and in control of AI systems, not the other way around. Companies and developers should remain liable for the harms their systems cause.

Registration/licensing: Some laws propose requiring AI providers to register high-risk systems in a public database and obtain licenses to operate, facilitating oversight.

Beyond binding legislation, voluntary industry commitments, soft law instruments like guidelines and principles, and multi-stakeholder collaboration are also shaping the AI governance landscape. For instance, major AI companies recently made voluntary safety and security commitments at the urging of the US government. Groups like the OECD and UNESCO have developed influential but non-binding principles on AI ethics.

However, key questions remain unsettled in the fast-moving field of AI regulation:

  • How should regulators balance precaution with support for innovation? Overly burdensome rules may stifle progress, but laissez-faire approaches abdicate responsibility.
  • Should new centralized regulators be created to oversee AI? Europe leans towards dedicated oversight bodies, while other jurisdictions spread responsibility across existing agencies.
  • What’s the best way to operationalize ethical principles like transparency and fairness? Abstract ideals must become practical policies.
  • How can laws keep pace with rapid AI advances? Today’s rules may be obsolete tomorrow.

For Africa, critical next steps include:

  • Adopting clear national policies and strategies on AI governance
  • Building the capacity of policymakers, lawmakers and regulators to understand and govern AI
  • Ensuring governance approaches are tailored for local contexts
  • Participating actively in global AI norm-setting bodies

The quest to responsibly govern AI has begun worldwide, but it remains complex. By understanding regulatory trends and advocating strategic reforms tailored to local needs, Africa can help lead the way. Constructive policy engagement on AI governance will only grow in importance as these systems become further entrenched.