31 Oct 2023

Shaping the Future: Lawyers at the Vanguard of AI and Sustainable Development

On 1 and 2 November, the first-ever global artificial intelligence (AI) Safety Summit will be held in the UK. Its focus, according to the Prime Minister, Rishi Sunak, is to “fully embrace the extraordinary opportunities of AI” and “to ensure it develops safely in the years ahead”. An essential prerequisite to success appears to be the need to work closely with leading nations to agree a shared approach to harnessing AI for international development.

Certainly, as AI continues to advance at an extraordinary pace, understanding its likely impact is paramount, and even more so in countries of the Global South. For this reason, the role of the law and of lawyers in shaping this transformation is of the utmost importance, and it is needed urgently.

AI is transforming our world, offering solutions to complex challenges. One of the most pressing global initiatives is the UN Sustainable Development Goals (SDGs), which represent a shared vision for a better future. These 17 goals aim to address issues ranging from poverty and hunger to climate change and gender equality. SDG 17, “Partnerships for the Goals,” encompasses two vital targets, 17.7 and 17.8, emphasising the necessity of sustainable technologies in developing nations. Moreover, SDG 9, which focuses on “Industry, Innovation and Infrastructure,” includes target 9.c, which emphasises universal access to information and communications technology as a cornerstone for ensuring equal access to information and financial markets, in turn facilitating job creation and resilient infrastructure. This critical role makes technology a linchpin for attaining all the SDG targets by 2030.

AI’s Current Role in SDG Implementation

In a world witnessing an unprecedented pace of technological advancement, AI has undoubtedly taken centre stage. The proliferation of AI, facilitated by user-friendly platforms like ChatGPT, has made it more accessible than ever. Consequently, it is unsurprising that AI is increasingly seen as the solution to achieving the SDGs. Predictive AI, for example, is being rapidly deployed in the Global South for agriculture, healthcare and infrastructure as a means of reducing inequality in resource-constrained environments. In areas where a shortage of skilled personnel hampers efforts to tackle the socio-economic and political problems perpetuating inequality, AI can be harnessed to assess causes and solutions.

One salient application of AI is in drone technology. Drones have been deployed in the Global South to deliver essential medical and agricultural supplies to remote areas that are hard to reach by traditional means. They are also used for aerial observation, improving agricultural methods, enhancing crop productivity, and providing farmers with real-time data on their crops. AI technology opens doors to rapid development opportunities for the Global South, bypassing otherwise costly and time-consuming processes. However, as reliance on AI grows, so do concerns about its potential negative consequences.

The Ethical and Regulatory Challenges

Alongside the development sector, AI has attracted interest and investment from international businesses, raising ethical concerns. The absence of a comprehensive regulatory body overseeing AI’s development, deployment and use has given rise to worries about the influence of corporate interests.

AI algorithms, designed by humans, have been found to perpetuate biases that already exist in society. Reports have shown that AI used in courts displayed racial biases, affecting the risk assessment of prisoners. Similarly biased systems are used in crime hotspot identification, potentially resulting in disproportionate police presence in predominantly Black areas. Furthermore, AI used on recruitment platforms such as LinkedIn has shown preferences for male names in searches. All of this has serious implications for equality and justice.

AI software is rapidly entering decision-making processes for asylum cases, which are governed by international law and involve life-or-death decisions. AI must be free from bias to uphold the principles of non-refoulement and non-discrimination. Unfortunately, there have already been cases in which claimants were denied asylum status on the basis of poor AI translations.

Whilst the European Parliament has prohibited the use of facial recognition surveillance technology, it failed to ban ‘discriminatory profiling and risk assessment systems to control border movements’ for refugees and asylum seekers. Amnesty International has called for a ban on the use of such technology in asylum cases, arguing that there is ‘no human rights compliant way to use remote biometric identification’.

The hoarding of such data exposes migrants to security breaches, which could raise serious safety concerns if the data falls into the hands of the actor or state from which they are seeking refuge. In 2021, thousands of Rohingya refugees had their personal biometric data passed on to the Bangladeshi government, which in turn passed it on to the authorities in Myanmar. For many, the result of this data breach has been involuntary repatriation to Myanmar. While everyone has the right to the protection of personal data, refugees and asylum seekers are particularly vulnerable to breaches of this right, given their unfamiliarity with EU systems and their willingness to comply in the pursuit of asylum.

The collection of data using AI is a widely debated topic. While wealthy nations have put policies in place governing AI-collected data, countries in the Global South often lack the same protections. Moreover, AI developed in the Global North and deployed in the Global South may echo colonial structures of power, given the imbalance of representation in its development and in decision making. UK tech firms such as Cambridge Analytica have already set a poor precedent in this area by working with foreign governments during election campaigns. Using data mined through Facebook, Cambridge Analytica played a large role in the election campaign of Kenya’s former president, Uhuru Kenyatta. In doing so, Cambridge Analytica broke UK and EU data privacy laws; however, because Kenya had less stringent laws in place, it was able to employ these methods there. Not only does this pose problems for the protection of individuals’ data, it also raises questions about how Global North companies can exploit limited AI awareness in the Global South to interfere in its politics.

The Environmental Footprint

In addition to ethical implications, unregulated AI use has environmental consequences that remain relatively unexplored. The data storage centres on which AI relies consume a staggering amount of power and water, with some reports suggesting that ChatGPT emits 8.4 tons of carbon dioxide per year and that 700,000 litres of fresh water were used during its development. While these figures are not in themselves cause for alarm, AI’s projected growth suggests its environmental footprint could become a major concern.

Notably, while the Global North reaps the benefits of AI development, it is the Global South that bears the brunt of climate-related disasters, which may worsen as AI expands.

The Role of Lawyers and Advocacy

Amid these challenges, lawyers and activists have a unique role to play. While AI is still in its infancy, they can help shape the regulations that protect the privacy and interests of the most vulnerable.

In conclusion, the 2023 AI Safety Summit presents a significant opportunity to address the multifaceted impact of AI on the Global South. While AI holds the potential to uplift economies, revolutionise healthcare and education and ensure food security, it also brings the risks of economic disruption, ethical concerns, technological dependence, and environmental challenges. The key to realising the benefits of AI in the Global South lies in responsible and equitable development, guided by international cooperation and ethical principles. While AI raises significant questions and challenges, it also offers promise and hope to the development sector.

Finally, the role of the law in shaping this landscape cannot be overstated. Legal frameworks have the potential to safeguard ethical AI practices, protect vulnerable populations, promote economic growth and reduce environmental harm. Lawyers have the opportunity to lead the way in shaping the ethical and regulatory landscape of AI to ensure its responsible and equitable use. Advocates for International Development (A4ID) is committed to being at the forefront of the AI dialogue and urges the legal profession to join us in pursuing research into the use and regulation of AI technology through our SDG Legal Initiative, a knowledge hub for lawyers dedicated to sustainable development. Together we can harness AI’s potential while safeguarding the rights and wellbeing of all, and in so doing make this vision a reality.
