In our latest Q&A, AI ethics advocate Titilola Olojede, from the National Open University of Nigeria, highlights the pressing need for more diversity and thoughtful design in the development of AI. She discusses the wide-ranging challenges the field faces—from gaps in education and questions surrounding academic integrity, to the vital role of Indigenous AI systems. Recognised as one of the 100 Brilliant Women in AI Ethics 2024, she believes that “pedagogy must drive technology.”

Can you tell us what inspired your journey into AI ethics?
I don’t actually come from a technical background, such as computer science or engineering. My training is in philosophy. I focused on bioethics for both my undergraduate and master’s dissertations, and I’ve always felt at home with ethical theories. I was teaching ethics, both theoretical and applied, as well as the history of ethics, when conversations around AI ethics began gaining momentum. I saw numerous calls for input and thought, ‘this is something I can contribute to.’ That’s how I got started, and the rest, as they say, is history.
We hear the term all the time, but what exactly is generative AI?
Generative AI, such as ChatGPT, is an example of weak AI: it produces text, speech, images, and video in response to prompts. AI refers to machines that mimic or simulate certain human cognitive abilities. Weak (or narrow) AI is designed to handle specific tasks, whereas strong AI would match or exceed human capability and intelligence. We don’t yet have strong AI.
Why is it important to have more diversity in AI ethics?
In the AI space, there’s this idea of the “wisdom of the crowd” or collective wisdom, which suggests that the more diverse the data, the better the output of an AI system. This means we need more representation in the data used to train AI, so that the algorithms are improved and biases are minimised.
It is important to build diversity into AI systems from the very beginning, not as an afterthought. Diverse stakeholders should be involved at every stage, from conceptualisation and design to development and deployment. We also need to audit and review AI systems to ensure the data is diverse and of high quality throughout the process, with an ethics review board that can assess each system and certify its fairness.
What steps can be taken to challenge Western-centric approaches, particularly the dominance of white male perspectives, in AI research?
There is a need to build indigenous AI systems. In Africa, we need to build our own AI systems, to ensure these technologies reflect our own values, cultures and needs, rather than just popular Western perspectives. This approach will help create AI systems that are truly representative, not just of global norms, but of the diverse values we want to share with the world.
Starting with small, local systems can make a big difference in shifting the narrative. Currently, in Nigeria, a number of AI systems are being developed, such as Awarri (derived from the Yoruba word “Awari”, meaning “seek and find”), Bunce (a customer engagement application), and Farmspeak, which supports poultry farmers. We also have new centres being developed, like the National Centre for Artificial Intelligence and Robotics (NCAIR), and the Three Million Tech Talent initiative, a Nigerian government programme to train three million people in digital and tech skills by 2027.
There have also been collaborations with big tech companies such as Google. While this is promising, these partnerships must be equitable and mutually beneficial rather than exploitative, so that historical colonialism is not rehashed as digital colonialism.

You gave a talk at the youth AI conference on how AI can enhance social justice. Do you have any examples of this, and of the potential risks that exist? How can AI both alleviate and exacerbate social inequalities?
People are using AI for language assistance and interpretation; for example, AI tools help language learners practise and improve more effectively. AI can also help people with disabilities such as dyslexia by assisting with spelling and grammar.
Another advantage is AI’s constant availability: it can be accessed 24 hours a day, seven days a week, provided there is internet access and data. This can be especially valuable for accessing information and basic education, offering opportunities that might otherwise be out of reach.
There are also various use cases in healthcare, such as AI-powered telemedicine and remote patient monitoring. These can bring healthcare to underserved or remote areas and to places with limited health infrastructure, potentially reducing geographical disparities, widening access, enabling early detection of ailments, and cutting costs. More generally, AI-driven predictive analytics is invaluable in many sectors for enhancing efficiency.
Nonetheless, the bane of AI is bias and a lack of diversity in training data, which pose serious risks to its use. For instance, facial recognition used in criminal justice is known to discriminate against people with darker skin and against women, and to be less accurate for African Americans and Asians. Its unreliability for people of certain skin tones led big tech companies like IBM, Amazon, and Microsoft to place moratoriums on the technology. Further, in resource-constrained settings, unstable internet and electricity can limit access, widening the digital divide and undermining social justice.
AI can, however, potentially be an equaliser if limited access to infrastructure, bias, and discrimination are addressed. Without representative, inclusive data, AI risks reproducing or even amplifying existing biases and undermining social justice.
What strategies/tools can be implemented to ensure ethical accountability in AI systems, and how can we prevent algorithmic biases, especially against women?
Worldwide, women make up only about 22% of the artificial intelligence workforce, compared with 78% for men. You can read more about this lack of representation in my paper, Reflecting on Diversity and Gender Equality in Artificial Intelligence in Africa. We definitely need more women in the space. Incentives, grants, a community of practice, and continuous training targeted at women are necessary to encourage more women to remain in the field, thereby increasing representation. Women’s groups such as Women in AI Africa, She Shapes AI and Women in AI Ethics are doing an excellent job of driving initiatives for equity and inclusion through their work. But there need to be more organisations like these.

How is the world of education changing in response to the realities of AI systems? Do you have an AI policy at your university?
As a faculty member at the National Open University of Nigeria, I can see that the world of education is undergoing tremendous changes in teaching, learning, and assessment. The prevalence of AI challenges how we do things. What many educationists are doing to make assessments more authentic is to mirror real-life experiences, as well as situations that may arise in the workplace.
Another change brought about by AI is the development of policy guidelines for AI use. These policies are not just about punishment or enforcement; they offer faculty, students, and administrators support and clarity on how to use AI responsibly and sustainably in their various contexts, when not to use it, and the extent to which it can be used. Faculty and students alike are also being upskilled in critical AI literacy.
The draft AI policy at the National Open University of Nigeria will soon be unveiled and made available for all.
How can we uphold academic integrity?
There are so many pressing questions that we need to address: What does research integrity look like in the age of AI? How do we redefine concepts like fabrication, falsification, and even plagiarism when AI tools are involved? Is transparency enough?
Many publishers call for clear disclosure of AI use, but even that seems inconsistent. A colleague had a paper rejected recently because they disclosed AI use, while another journal accepted mine with full transparency. These contradictions expose a growing grey area. It brings up the question: to what extent can we take ownership of a paper in which AI was used to generate it?
We need open, thoughtful discussion to shape what academic integrity should look like in this new era. For instance, is it acceptable to use synthetic data generated by AI? And if so, where do we draw the line?
So, do you think we need to rethink the “publish or perish” culture in the context of AI advancements?
We may also be moving beyond “publish or perish,” as AI enables near-constant output. But I don’t subscribe to the “visible or vanish” mindset either. It’s not about quantity—it’s about quality. Quality research is what truly resonates with society and serves future generations.
My question is: what does quality look like in the age of AI? We need to keep asking this, with critical reflection guiding the way.
Do you have any specific tips for academics and researchers to navigate their own digital data landscape? What advice would you give them?
I have four pieces of advice. Firstly, pedagogy should drive technology, rather than the other way around. Start by asking: What do my students need to learn to achieve their goals? Then, you can choose the technology and tools to support those goals.
My second point is that AI use should be intentional and strategic. Teachers are central to AI use, but they need to be trained in how to use technology to enhance pedagogy and to understand its uses and boundaries, as AI may not be suitable for every task.
Students also need to build core cognitive skills first: the ability to create, to innovate, to synthesise ideas, and even to paraphrase. When you paraphrase by yourself, without AI, you tend to retain more and gain a better understanding of the idea.
Finally, continuous monitoring, review, and evaluation of AI systems in our educational system is critical. We need to assess whether AI is enhancing pedagogy or proving inimical to it.
So, how do we ensure privacy in AI?
Firstly, as an individual, I do not recommend subscribing to all AI tools, as that means sharing your data with new technologies, and this is something we must continue to be critical of and careful about. Also, make sure to check the settings. Many AI systems opt you into training their models with your data by default, instead of asking you to ‘opt in’.
Institutions must develop clear guidelines for data usage. They also need to ensure that they are collecting only necessary data.
As teachers, we should use AI with caution. For example, if you are allowed to submit students’ work to AI for assessment, remove the students’ personal details first.
Students also need to be trained in anonymisation. One way of doing this is by removing all personal identifiers to safeguard their privacy while using generative AI.
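As an illustration, here is a minimal Python sketch of what this kind of pre-submission anonymisation could look like. The identifier patterns and the student-ID format are assumptions for demonstration only; real de-identification would need more robust tooling, since simple patterns will not catch names, for instance.

```python
import re

# Hypothetical patterns for common personal identifiers; the student-ID
# format below is an assumed example, not a real institutional format.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "STUDENT_ID": re.compile(r"\bNOU\d{6,}\b"),
}

def anonymise(text: str) -> str:
    """Replace matched personal identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    sample = "Jane Doe (NOU1234567, jane.doe@example.com, +234 801 234 5678) submitted late."
    print(anonymise(sample))
    # Jane Doe ([STUDENT_ID], [EMAIL], [PHONE]) submitted late.
```

A script like this would run over an assignment before it is pasted into a generative AI tool; note that names such as “Jane Doe” slip through, which is why training students in manual anonymisation still matters.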
My next question refers to your paper ‘Africa dreams, artificial intelligence’. How can AI drive sustainable development innovation across Africa? And what key challenges must we overcome to ensure that these technologies benefit all communities?
AI is already being used for sustainable development, for example, to analyse big data, predict trends, and optimise resource management.
There is a civic organisation in Nigeria called BudgIT that uses an AI-powered tool to simplify budget data so that government expenditure can be tracked. There is also a low-cost AI-powered tool called Kuzi, used to predict the breeding and migration of locusts and mitigate crop devastation in East Africa and the Horn of Africa.
However, for AI to drive sustainable development effectively, we must consider its environmental impact. It consumes a significant amount of energy and water to cool the data centres that power these systems. Recently, OpenAI CEO Sam Altman remarked that even saying ‘thank you’ or ‘please’ to ChatGPT consumes a great deal of electricity. That made me reconsider how I use AI and become more careful with it.
We face significant infrastructural challenges in Africa, such as persistent issues with electricity, the high cost and quality of internet access, and limited bandwidth. These constraints are compounded by the high costs of computing infrastructure and a lack of representative data.
These challenges seriously hinder our ability to leverage AI for sustainable development. Yet, despite these barriers, many individuals and institutions are still forging ahead, determined to mitigate challenges and ensure that Africans can benefit from AI. If these barriers are reduced, they could open the door for Africa and other countries in the Global South to fully participate in and benefit from the global digital evolution that is currently underway.
Can you tell us what’s taking place in Nigeria and other AI hubs in the African AI landscape?
In Nigeria, over the past year, we have seen the development of a National AI Strategy, which I was a part of. When it is eventually released, it will, of course, be a giant stride for the country, as it details how AI will be used nationally and in education.
The National Universities Commission has also constituted a committee to look into how AI could be integrated into the Nigerian university system. There is also a Service-Wise GPT in the pipeline for the civil service, to streamline access to critical government information, draft policies, and enhance administration.
Kenya and Rwanda are hubs for AI in sub-Saharan Africa. Egypt and Morocco are also leaders. Africa is catching up; it is implementing various measures regarding regulations and governance to leverage AI effectively. Rwanda and Ghana are leveraging AI for governance and digital transformation, while Morocco and Tunisia are scaling up AI research and policy development.
If we continue to improve these systems and ensure that discussions like this one inform their development, we can make AI more representative and ultimately beneficial to humanity.
Further Reading
Peters & Olojede, H.T. (2025). Influence of Generative Artificial Intelligence (GenAI) in Nigerian Higher Education. Agidigbo.
Olojede, H.T. (2024). Reflecting on Diversity and Gender Equality in Artificial Intelligence. The Thinker, 101(4).
Olojede, H.T. & Etaoghene, P. (2024). In Praise of Normative Science: Arts and Humanities in the Age of AI. International Journal of Social Sciences and Humanities, Africa Research Corps Network.
Olojede, H.T. & Olakulehin, F.K. (2024). Africa Dreams of AI: A Critical Analysis of Its Limits in Open and Distance Learning. Journal of Ethics in Higher Education.
Olojede, H.T. & Fadahunsi, A. (2024). On Decolonising AI. Agidigbo, 12(2).
Olojede, H.T. (2024). Techno-solutionism a Fact or Farce? A Critical Assessment of GenAI in Open and Distance Education. Journal of Ethics in Higher Education, (4), 193-216.
Olojede, H.T. (2023). Towards African Artificial Intelligence Ethical Principles. AI4D Lab, IEEE.
Olakulehin, F.K. & Olojede, H.T. (2025). From Knowledge to Action: Awareness and Utilisation of Artificial Intelligence (AI) for Open, Distance and eLearning in West Africa. In Tijani, H.I. & Akinwale, A.A. (eds.), Global Footprints: Leading the Future and Transforming Distance Learning in the Digital Age. A Festschrift in Honour of Professor Olufemi Peters.