Wednesday, January 1, 2025

South Africa: Artificial Intelligence Regulation in South Africa

Next year’s G20 Summit could be an opportunity for leaders to address AI concerns regarding transparency, accountability and human oversight.

Just months after the European Union’s (EU) landmark Artificial Intelligence (AI) Act came into force in July 2024, South Africa’s roadmap for harnessing the benefits of AI was published in October. Now the public has been invited to participate in consultations.

In this context, South Africa urgently needs local regulation to guide the use of AI applications in a realistic and achievable way.

President Cyril Ramaphosa announced the establishment of a Presidential Commission on the Fourth Industrial Revolution in 2020, with AI technologies forming part of its mandate. He predicted that by 2030, South Africa would have ‘fully harnessed the potential of technological innovation to grow our economy and uplift our people.’

As the continent’s second-largest economy, South Africa might be expected to lead on AI governance, but Nigeria, Mauritius and Rwanda are already developing their own AI strategies or policies. South Africa’s National AI Policy Framework sets the scene for how policy- and lawmakers want to use AI to enhance social and economic prosperity.

Continentwide, Africa is expected to generate US$1.2 trillion from AI innovation by 2030, representing a 5.6% rise in the continent’s GDP.

For South Africa, much will depend on good governance. So far the country hasn’t taken steps to ban AI technologies it doesn’t consider safe, and local advocacy organisations have raised concerns about minority groups being targeted with technology that’s largely imported from other countries.

From February 2025, the EU will ban numerous AI practices, including AI-based predictive risk assessments for crime and the use of AI systems for biometric identification in public places.

AI technology advances in both the private and public sectors raise questions of privacy, many of which are addressed by legislation such as South Africa’s Protection of Personal Information Act. Yet there are also issues of sovereignty, including who owns the data, as much of it resides in data centres outside of Africa.

Africa is also the site of intense geopolitical competition for access to its data – a valuable commodity – and is seen largely as an ‘untapped market’. Guardrails are needed to ensure global power politics do not overshadow African citizens’ interests.

Rwanda has taken steps to ensure data sovereignty by designating much of its open data, including public, financial and social security records, as a national asset as part of its Vision 2025 strategy.

Data is considered ‘essential for fuelling digital progress and advancing the country’s social and economic goals,’ and is seen as necessary to achieve the United Nations’ Sustainable Development Goals. However, Rwanda still needs the means to store that data locally.

Along with ownership issues, there’s the question of AI applications’ impact on human security. Will AI reinforce inequalities based on class, race and other divides? How do you minimise biases baked into AI technology that originates elsewhere? And how do you ensure transparency in AI applications?

For example, in the case of government data collection, clarity on how and how much data is collected, and for what purpose, serves as an important check against executive overreach. Myanmar, where Amnesty International claimed Facebook’s algorithms ‘contributed to the atrocities perpetrated by the Myanmar military against the Rohingya people,’ should serve as a warning to policymakers across Africa.

Data rights groups fear that AI could be weaponised to attack the weakest in society or undermine democracy in fragile states.

South Africa will also need the know-how and locally relevant research and development to understand the impact of AI technology and to ensure its use mitigates existing societal and economic divisions. A white paper on South Africa’s AI planning sheds light on the lack of research continentwide, and highlights the need to understand impact more clearly.

The chart below shows how prepared African states are, based on infrastructure, human capital, technological innovation and legal frameworks.

One of Africa’s biggest challenges is accessing data, says South Africa-based media and internet policy expert Guy Berger – especially data on social media platforms. He points to the African Commission on Human and Peoples’ Rights’ recent adoption of an African Alliance for Access to Data resolution to allow more scrutiny of how social media platforms in particular work.

‘In Africa and South Africa, we would be naïve to treat AI as tech on its own and not consider where it is coming from, particularly with so much coming from the US,’ says Berger. ‘We cannot sit here in Africa and assume the AI tools we can access are neutral. These tools in both their foundation models and applications have a linguistic bias, algorithmic bias and cultural bias.’

AI technology requires access to vast troves of training data, and that data must reflect the geography and context in which the technology is used. But much of the data behind AI is generated outside of Africa, so governments, universities and other institutions should push for data access and support local innovation to enable relevant AI to be developed.

Increasingly, some AI applications are seen as a threat to democratic principles because they are used to support so-called information operations. Amani Africa recently reported that, alongside AI’s benefits for peacebuilding, deepfakes are being used to impersonate political figures and propagate false information, posing existential risks to fragile states.

In Sudan, for example, AI-generated voice recordings purporting to be of a military leader ordering the killing of civilians were viewed by hundreds of thousands of accounts on social media platforms – disinformation that could have real-world consequences.