
Building AI That Africa Can Trust: The Case for Explainable AI

  • Writer: Masakhane Mtshali
  • Sep 11
  • 4 min read

Building AI that humans can trust

The other day, I was chatting with a friend in tech about AI. One of those conversations that starts casual and then suddenly you’re debating whether robots will save us or ruin us.

The point that turned our debate into an argument: “AI can be bad… and should be regulated.” Two hours later, with both egos shattered, we still couldn’t come to a conclusion. That statement stuck with me because, honestly, the answer isn’t simple.


The Mojo Jojo Problem

Remember The Powerpuff Girls? (If you don’t, stick with me, you’ll get the idea.) Mojo Jojo, the iconic villain, wasn’t some genius inventor creating groundbreaking tools. What he did was take something good, something meant to help or improve the world, and twist it for his own ends. He leveraged power and intelligence, but without ethics or foresight, the outcomes were destructive.

This, for me, is a perfect metaphor for AI. Artificial Intelligence itself is neutral. It’s not inherently good or evil. It’s lines of code, massive datasets, and computing power. On its own, it doesn’t have intent. But the decisions made by the people who design, deploy, and govern AI determine whether it serves society or causes harm.


The Promise and the Warning

If you watch Jensen Huang, CEO of NVIDIA, speak about AI, you’ll hear unshakable optimism. He sees AI as a catalyst for innovation across industries: healthcare breakthroughs, smarter energy systems, more efficient infrastructure. For Huang, AI is the next great leap in human progress.

Contrast that with Mo Gawdat, former Chief Business Officer at Google X, who offers a sobering counterpoint. He warns that the next 15 years could be turbulent, what he calls “hell before heaven.” In his view, AI will disrupt jobs, reshape societies, and exacerbate inequalities before we fully learn how to manage it.

Between these perspectives lies the reality: AI is powerful. It will shape economies and societies whether we’re ready or not. Which means Africa cannot afford to be a bystander in this conversation.


Why Africa’s Context Matters

In Africa, the stakes are particularly high. On one hand, AI has the potential to address deep systemic challenges by improving access to healthcare in rural areas, enhancing agricultural productivity, tailoring education to individual learners, and accelerating entrepreneurship. On the other hand, there are significant risks. AI trained on non-African datasets may misrepresent local realities, producing biased outcomes. Weak privacy protections could lead to data exploitation or surveillance. And because digital adoption is uneven across the continent, there is a real danger of AI widening the gap between those with access and those without.

There is also the issue of trust. Too often, technologies have been introduced to African markets without transparency, accountability, or equitable benefit. This history of extraction and one-sided “innovation” means that new tools are often met with skepticism. If AI is to succeed here, it must earn trust, not demand it.


Why Explainability Matters

AI today often feels like a black box. An algorithm tells you whether your loan is approved, a chatbot answers your query, or a fraud detection model flags a transaction, but do you really know why? For many, and especially for me, the lack of transparency creates anxiety, and in regions where trust in institutions is already fragile, this can stall adoption entirely.

Explainable AI (XAI) is about opening that black box. It ensures that when AI makes a decision, people can see the reasoning behind it. In healthcare, this could mean showing doctors not just a diagnosis but also the data patterns that informed it. In banking, it could mean explaining why one loan application was approved and another denied.
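To make the banking case concrete, here is a minimal sketch in Python of what an explainable decision can look like. Everything in it is illustrative: the applicant features (income, debt, years banked) and the training data are invented for this example, and a real lender would use far richer models plus dedicated XAI tooling such as SHAP or LIME. The point is simply that an interpretable model can show its reasoning in terms a person can follow.

```python
# A minimal, illustrative sketch: an interpretable loan-approval model
# whose reasoning can be printed in plain language. All features and
# data below are synthetic, invented purely for this example.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicant features: [monthly income, existing debt, years banked]
X = [
    [12000, 2000, 5],
    [3000, 2500, 2],
    [8000, 3000, 1],
    [2500, 1500, 0],
    [15000, 1000, 8],
    [4000, 3500, 6],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = approved, 0 = denied

# A shallow decision tree is deliberately simple enough for a human to read.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned decision rules as readable if/else logic,
# so an applicant (or a regulator) can see why a decision was made.
print(export_text(model, feature_names=["income", "debt", "years_banked"]))
```

The printed rules read something like “income <= 6000 → denied”. That’s not a full audit trail, but it is the difference between a verdict and an explanation, and that difference is exactly what builds trust.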


What Ethical AI Should Look Like

So what does it mean to build AI that Africa can trust? First, it means transparency. People should be able to understand how AI systems make decisions, rather than being left in the dark by opaque algorithms. It also means fairness. AI must not entrench the biases of the societies it reflects. If unchecked, systems can amplify inequality, reinforcing existing patterns of exclusion. Privacy and data sovereignty are equally critical. African citizens and governments need assurance that data generated here is governed here, and that it is not simply exported for profit elsewhere.

Finally, ethical AI requires accountability. When an AI system causes harm or produces unintended consequences, someone must take responsibility. And beyond accountability, inclusivity is essential. AI built without African voices risks becoming irrelevant or, worse, harmful to African realities.


The Role of Regulation

Here’s where the debate between my good friend and me got heated. Some argue that too much regulation will stifle innovation. Others insist that without strong oversight, AI will spiral into misuse. The truth is that both perspectives carry weight.

Africa needs regulation, but it needs smart regulation. Heavy-handed rules that mirror Western frameworks may not fit local contexts. At the same time, a lack of rules altogether risks undermining public trust. The way forward lies in balanced governance: policies that protect citizens while encouraging responsible innovation.

That could mean regional collaboration, where bodies like the African Union establish shared principles for AI ethics and data protection. It could mean creating oversight mechanisms that bring together governments, private sector players, and civil society. And it should definitely include widespread public education, so people understand both the potential and the limitations of AI.


Responsible AI as Brand Leadership

This is where companies like Confer play a crucial role, because AI isn’t just about algorithms; it’s also about marketing and communication. By shaping transparent and responsible narratives around AI solutions, organisations can become leaders in trustworthy and credible innovation. Thought leadership isn’t only about showcasing the latest tools; it’s about showing users, regulators, and society that technology is being built with responsibility, fairness, and accountability at its core.


The case for explainable AI in Africa isn’t just technical; it’s cultural, ethical, and social. We need systems that aren’t just smart but also human-centred, bridging the gap between complexity and clarity. When AI is explainable, it empowers people to engage with it, question it, and ultimately trust it.

If we get this right, Africa won’t just adopt AI; it will shape it. We’ll avoid the Mojo Jojo problem by building AI that is powerful, understandable, and designed for people first. That’s how we ensure that innovation doesn’t just happen to Africa but with Africa.
