
Technology for Social Good with Mala Kumar

Mala Kumar is a globally recognised leader in technology for social good, with expertise in UX research design, open-source software, and the evolving field of artificial intelligence and machine learning. Her work has taken her across continents, particularly to sub-Saharan Africa, where she has driven innovative solutions for social change. In this Q&A, Mala discusses the transformative power of technology in advancing Sustainable Development Goals (SDGs), the potential role of AI and how her career has influenced her creative writing.

*Note: this interview took place before the Trump Administration’s executive orders dismantling the overwhelming majority of USAID programs. Some information below may have changed since the interview.


Can you tell me a bit about your professional background?

My career began in a field called ‘Tech for International Development’, which looks at how technology can transform international development efforts. A classic example: if you had a project aimed at reducing maternal mortality rates, you might have started with a health communications campaign on physical posters in clinics, but then moved your comms to an SMS-based platform. A key question that might have followed was: what if the only person in the family with a phone was the husband? How would that change the comms strategy?

I spent a decade in the area, mostly focusing on UX research and design. My role was to design, deploy, and implement tools for global initiatives, often with the United Nations (UN). After that, I moved into big tech, where I became the Director of Tech for Social Good at GitHub, which is now owned by Microsoft. If you are using the internet, chances are high that some of the code behind what you are interacting with is hosted on GitHub. It’s one of the most widely used platforms for software developers in the world.

In 2023, I left GitHub and went back to the UN for a year as a Senior Advisor at the World Health Organization. Last year, I worked for an organisation called MLCommons, where we built an AI safety benchmark. I was the Director of Program Management and led one of the working groups.

And could you tell me a bit more about your role at GitHub?

That was a highlight of my career. I led five programs focused on using GitHub’s tools, communities, products, and networks to support international development, humanitarian work, and other social good initiatives. A lot of this work aligned with the UN’s Sustainable Development Goals (SDGs), and that’s how we tracked our impact.

Most people are probably familiar with the SDGs, but just to give a quick background: in 2000, the Millennium Development Goals (MDGs) were introduced, which were the first real attempt to quantify basic human development – things like food security, public health, education, and gender equality. Then in 2015, the SDGs came into place, the key difference being the addition of goals around climate change and environmental sustainability.

We aligned our work with the SDGs to describe how our efforts positively contributed to social good objectives. I started a qualitative research project on open-source software for social good. We looked at the challenges, opportunities, and the growing ecosystem of using open-source software to advance the SDGs, particularly in public health.

I was at GitHub during the COVID-19 pandemic. We helped the World Health Organization (WHO) revamp how they built software and launched the first-ever open source programme office within the UN, under WHO. We also started a skills-based volunteering programme – it grew so much I had to hire a Programme Manager. This initiative engaged around 10% of GitHub employees to help build digital products – mainly for the UN but also for other NGOs.

We also did a lot of work around Digital Public Goods. This concept involves open-source software that not only serves social good, but also meets certain sustainability criteria.

What do you think the future of AI could look like, and what could the role of AI be with SDGs?

I have a series of videos that explain this in more depth, but in short, we’re still in the very early days of AI, and there are different AI disciplines. These days, when most people talk about AI – especially people who don’t work in artificial intelligence (AI) or machine learning (ML) – they are usually referring to generative AI. That’s basically predictive text powered by neural networks, where the model is trained to “fill in the blank.”
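At its core, “fill in the blank” prediction can be illustrated with a toy model that simply counts which word tends to follow another. This is a sketch only – real generative AI uses neural networks trained on vastly larger datasets – but the underlying task is the same:

```python
from collections import Counter, defaultdict

# Toy "fill in the blank" model: count which word follows each word in
# a tiny corpus, then predict the most likely continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" – it follows "the" most often here
```

A neural network replaces these raw counts with learned patterns over billions of examples, but it is still being trained to guess the most plausible continuation.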

But AI is much broader than that. There are other disciplines, like computer vision, which I think are more tested and proven. They focus on much narrower machine learning problems – like training a model to recognise cats, which is the classic example. Right now, computer vision is making real strides, not necessarily in public health, but in medical fields – things like patient case management and diagnosing malignant tumours through image analysis. That’s where AI is already being applied in a meaningful way.

I think AI and ML have the capacity and could do a lot of good, but mostly for rich people. That’s the reality. It’s incredibly expensive to train, deploy, test, and ensure these systems aren’t just making things worse. AI works best with high volumes of data, and for languages like English – so-called “high-resource” languages – there are massive datasets available to train on. So, for hospitals that can afford the infrastructure, or elite universities that can quickly adapt pedagogy around it, AI will probably have a big impact in the next decade.

But when it comes to AI actually advancing the Sustainable Development Goals? That’s still to be determined. Historically, digital technology projects in international development have been severely underfunded. I was just on a podcast talking about this (Humanitarian Frontiers in AI). Even in the ICT4D and “tech for good” space, organisations like the UN, the World Bank, and major INGOs are still chasing the new, innovative product. Tech has been completely productised – everyone wants to fund the first flashy version, but no one wants to invest in the second, third, fourth, or fifth iteration.

And that’s the problem – because it takes years of iterating (and, honestly, sometimes a lot of wasted money) to reach a point where a product is actually usable, scalable, and widely adopted – the continuous development cycle just doesn’t exist in tech for social good, and with increasingly complex technologies like generative AI, we’re seeing the consequences of that.

Anyone can take a large language model, slap an interface on it, and call it a chatbot. But what happens when they realise – six months or a year later – that it’s only producing factually accurate information 50% of the time? Then what’s the point? At that stage, you’ve basically created a pile of junk – something that’s not reliable, not tested, and not grounded in reality – because you didn’t do the work needed to make it actually deployable.

I’m not optimistic that we’ll see any major positive impact from AI and ML in the next few years. I do think there will be a ton of experimentation, and a lot of big claims about what’s coming. But historically, these kinds of technologies take at least five to ten years before they’re meaningfully adopted. Of course, there will be exceptions – some organisations will figure out great use cases and do really well. But overall? I don’t think that’s going to be the norm.

Could you explain a bit more about what open source software for good is?

So, first let me explain what “open source software” means in simple terms: if you use an app on your phone, visit a website, or interact with anything in the digital world, you’re using software – essentially code that’s packaged together to perform a specific task. There are two main types of software in terms of licensing: proprietary and open source.

With proprietary software, you can’t access the code behind it, download it, modify it, or share it. Open source software is governed by a set of licenses managed by an organisation called the Open Source Initiative (OSI). These licenses allow you to download, distribute, modify, and even repurpose the software. The idea is that, once you make changes or improvements, you ideally contribute those revisions back to the original project.

Open source software dates back as far as the 1950s and 1960s in academia, particularly in the United States, but it really started picking up steam in the 1980s. Unfortunately, as often happens with early tech movements, open source wasn’t immune to issues like sexism, anti-Semitism, and other forms of bigotry. But despite that, it thrived in certain academic and early tech spaces.

Startups like GitHub were born from the idea of organising open source software, providing a central place where people could easily find it, see the license, and contribute back. Microsoft’s acquisition of GitHub in 2018 was a huge moment in the world of open source software, and over the past decade big companies like Microsoft have come to recognise the potential of open source.

The biggest contributors to open source software are now corporate tech companies. But alongside that, a branch of open source software developed with a particular focus on social good. At GitHub, we called this “open source for good.” The idea was to use open source software to advance things like the SDGs.

The area where this has been most impactful is public health, which ties into SDG 3: good health and well-being. Many public health organisations have developed open source software tools, and just as importantly, interoperable data standards. For example, if you’re running public health clinics globally and you want to track disease outbreaks, you need standardised ways to count and report cases. Open source tools make it possible for different organisations, governments, and NGOs to share data in a consistent, actionable way.
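As a sketch of why interoperable data standards matter, imagine every clinic emitting case reports in one agreed shape, so different organisations can aggregate them directly. The field names below are purely illustrative, not drawn from any real standard:

```python
from dataclasses import dataclass

# Hypothetical shared case-report shape. If every clinic reports in the
# same structure, counting cases across organisations becomes trivial;
# without a standard, each dataset needs bespoke translation first.
@dataclass
class CaseReport:
    disease_code: str     # from an agreed code list
    clinic_id: str        # illustrative identifier
    week: str             # ISO week, e.g. "2020-W14"
    confirmed_cases: int

reports = [
    CaseReport("A90", "clinic-ng-001", "2020-W14", 12),
    CaseReport("A90", "clinic-ke-007", "2020-W14", 5),
]

# Because the shape is shared, cross-organisation aggregation is one line.
total = sum(r.confirmed_cases for r in reports)
print(total)  # 17
```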

Can you tell me a bit more about your work with the United Nations?

I’ve done a lot of really cool work at the UN. My first internship was with UNFPA (United Nations Population Fund) in Senegal, where I focused on semi-nomadic populations and the underutilisation of community health centres. Even though I was in northeast Senegal, one of the poorest regions in a low-income country, there were public health facilities set up for migrant groups that were being underused. These transhumant communities spend half the year in their village d’attache – their home base – and the other half migrating with their cattle and other livestock to ensure their animals have enough to eat. My project with UNFPA focused on understanding why they were not accessing the health services available to them. That was my first real exposure to public health. Throughout graduate school, I was more drawn to the technology side of things. But I don’t find tech interesting unless it serves a social good, and at the same time, I don’t find a lot of SDG-related work compelling unless it has a digital component. Pretty quickly, I realised that was my sweet spot.

Thankfully, my first full-time role at the UN let me dive deep into that intersection. I worked as a programme officer on an initiative called the African Risk Capacity, which was more than just a project – it was a $150 million initiative under the World Food Programme (WFP). It was a disaster risk insurance mechanism at the national level. Microinsurance was popular then – farmers could buy policies to receive payouts if their crops failed, covering expenses or reinvestment.

The problem with microinsurance is that it puts all the risk on the individual. People had to choose between using their limited money to support their family or paying for insurance that might never pay out. 

So, the African Risk Capacity transferred the risk to the national level rather than placing the risk on individuals. Governments would purchase insurance plans. If a country hit a certain threshold – like a major drought – the government would receive a payout and could quickly distribute funds to prevent people from selling off their productive assets, pulling kids out of school, or abandoning the progress they’d made over generations. The system was parametric, meaning it was based on data. We pulled datasets from the National Oceanic and Atmospheric Administration (NOAA), crop data from WFP, and other sources to build a software platform that helped governments make these decisions.
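A parametric trigger can be sketched in a few lines. This is a hypothetical simplification assuming a single rainfall index, whereas the actual African Risk Capacity platform combined NOAA, WFP, and other datasets into far richer models:

```python
# Hypothetical parametric payout: the payout depends only on how far an
# observed index falls below an agreed trigger, not on assessing
# individual losses. All parameters here are illustrative.
def parametric_payout(rainfall_index, trigger_threshold, max_payout):
    """Return the payout owed for a season's observed rainfall index."""
    if rainfall_index >= trigger_threshold:
        return 0.0  # conditions never crossed the agreed drought trigger
    # Scale the payout by the severity of the shortfall below the trigger.
    shortfall = (trigger_threshold - rainfall_index) / trigger_threshold
    return round(min(shortfall, 1.0) * max_payout, 2)

# A severe drought season: index 40 against an agreed trigger of 100.
print(parametric_payout(rainfall_index=40, trigger_threshold=100,
                        max_payout=30_000_000))  # 18000000.0
```

The appeal is speed and objectivity: because the payout is a pure function of published data, governments can receive funds as soon as the index is known, without a loss-assessment process.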

I travelled a lot for this project, and we’d sit with government ministers – Ministers of Finance, Agriculture, etc. – asking them to select parameters for their country’s insurance coverage. And we were making them do this on a software platform that was absolutely terrible to use. We were asking them to make $30 million decisions – huge chunks of their national budgets – on a tool that was barely usable. So, obviously, that was a big problem.

I started running usability tests and eventually secured a partnership with Google to overhaul the interface. That was my introduction to UX research and design, and it was a lightbulb moment for me: if the UN is going to keep building software and essentially forcing people to use it to access funding or make critical decisions, the software has to be good. It’s that simple. People need to understand what they are looking at. That realisation shaped my career.

I became one of the first people in the UN system to really focus on UX. Beyond the World Food Programme, I worked at UNICEF (United Nations International Children’s Emergency Fund) on the gender team for a few years, designed apps for an NGO in London, and created UNICEF’s first online Evidence Gap Map – a flagship research tool that helped country offices understand the corpus of relevant literature on a topic. That project was so successful that multiple teams within UNICEF ended up replicating the work.

Could you tell me a bit more about the evidence gap map itself? And how is that being replicated and used now?

An evidence gap map is a visual snapshot of a literature review. You define two axes based on whatever methodology you’re using, and then you map out the state of existing research at their intersections.
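As a rough sketch, an evidence gap map boils down to counting studies at the intersection of two axes. The axis labels below are made up for illustration, not taken from UNICEF’s actual map:

```python
from collections import Counter

# Each study is categorised on two axes; here, (intervention, outcome).
# The labels are invented examples, not a real categorisation.
studies = [
    ("cash transfers", "school enrolment"),
    ("cash transfers", "school enrolment"),
    ("school feeding", "nutrition"),
]

cell_counts = Counter(studies)
interventions = sorted({s[0] for s in studies})
outcomes = sorted({s[1] for s in studies})

# Render the grid: a zero cell is a "gap" – a pairing no study covers.
for intervention in interventions:
    row = [cell_counts.get((intervention, o), 0) for o in outcomes]
    print(intervention, row)
```

The value of the visualisation is exactly those zero cells: they show at a glance where evidence is missing, which is much harder to see in a flat list of papers.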

I didn’t develop the methodology for categorising the research – that was done by someone with a PhD and a lot of experience in that kind of work – but I did design the actual digital interface. This was around 2015, so several  tools we have now for quickly visualising data just didn’t exist back then. We had to get really creative with how we built it.

I was responsible for the  information architecture, digital categorisation, and ensuring it rendered well across different devices. A lot of teams had never gone through proper UX research and design before. They focused heavily on what the research papers said but hadn’t really thought about how people would actually find and engage with them. So, for many people on the team, it was eye-opening.

You’re also a creative writer and the author of two novels, The Paths of Marriage and What it Meant to Survive. How has your work in tech impacted your creative writing practice?

Honestly, writing has been my creative outlet away from tech. That said, there are actually a lot of practical overlaps between writing books and working in tech. For example, after I finished my first draft of my second novel, I sent it to my development editor, and she came back with a full list of edits. Typically, a development editor writes a letter outlining what they liked, what they didn’t like, plot holes, character gaps, and areas that need improvement.

So, what I did was take that letter and broke the edits down into categories. Then I sat down with my wife – who had already read the draft – and we just generated ideas for each category. We wrote down as many ideas as possible for each issue my editor pointed out. We then did a card sort – which is something I do in tech projects all the time. We sorted all the ideas, discussed them, and then I picked two to three solutions for each of the main issues.

After that, I went back to the draft, made the changes, and sent it off as the second draft. My editor came back and said, ‘this is great, you don’t need any more major revisions’. That was just one draft. So, in terms of organisation and workflow, my tech background definitely helped streamline that process.

And of course, writing a book is not just about writing – it’s also about promoting the book. That means organising events, managing social media, and creating digital assets. My book cover was created by a designer friend of mine – she did an amazing job – and we worked on the final cover together. This was possible because I have enough of a graphic design background to understand what was feasible and what was a ridiculous request.

Then there’s the whole digital marketing side – posting, organising events, etc. So while writing itself is separate from my tech work, my background has been hugely useful in the organisational aspects, the design elements, and making sure everything is running smoothly behind the scenes.

We’ll be posting Part Two of Mala’s Q&A in the coming weeks where she shares more about her creative writing practice – watch this space!

Mala Kumar LinkedIn / Mala Kumar TikTok


Please note that the Hub operates under the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International license and our posts can be republished in print and online platforms without our permission being requested, as long as the piece is credited correctly.