What do we learn when we listen to the Global Majority on AI?
Perspectives on AI Gathered Through Participatory Practices in 16 Global Majority Countries
By Jeremy Boy, with inputs from Mirko Ebelshaeuser and Drasko Draskovic.
The 2025 UNDP Human Development Report posits that making Artificial Intelligence (AI) a driver of human development is a matter of choice. The question is, who gets a say in making these choices?
For the past five years, the UNDP Accelerator Labs Network has been actively engaged in participatory data innovation and AI action research throughout Global Majority countries, whether it be through collective intelligence design; through national consultations and dialogues on AI with a wide variety of stakeholders, ranging from government to civil society; or through the use of generative AI to facilitate participatory processes.
Following work we did last year with graduate students at Columbia’s School of International and Public Affairs (SIPA) to design a framework for conducting national dialogues on frontier technologies, we teamed up with our colleagues in UNDP’s Human Development Report Office to consult Labs in 16 countries and learn what they had picked up through their work about people’s perceptions of AI. All had concrete, practical experience of engaging with national stakeholders on the topic of AI, whether from government, the private sector, or civil society.
Awareness of AI is growing, but deliberate use and understanding often remain limited
All Labs reported a general awareness of AI among the many people they interacted with through their work, whether through reference to popular applications like ChatGPT, in terms of strategic business or institutional priorities, or simply as a concept. However, many also pointed to a limited depth of technical understanding, limited knowledge of when AI was embedded in commonly used systems or applications, and limited awareness of the concrete possibilities AI offers. Issues raised included low digital literacy and a perception of AI as a “black box”. Labs also sometimes had to temper unrealistic expectations that AI could be applied to almost anything.
Most Labs further reported that people, from TikTok content creators in Bangladesh to rural communities in Trinidad and Tobago, generally had mixed feelings about the deployment of AI, with perceptions ranging from techno-optimism to considerable skepticism. The Lab in Belarus highlighted a general positivity towards AI: in a nationwide survey the Lab conducted, reaching 11,000 people, almost 50% of respondents affirmed that they trusted AI, albeit in most cases on the condition that they understood how the algorithms worked. The Lab in Trinidad and Tobago described cautious optimism tempered by healthy skepticism: in a survey the Lab conducted in six locations around the country, reaching 211 people, 62% of respondents reported mixed feelings about AI, and there was a clear divide on whether to trust AI to make decisions that could affect human lives (46% trust versus 42% distrust). Finally, the Lab in Ecuador noted that small shop owners perceived AI as largely irrelevant to their day-to-day lives.
This is partly due to the challenge of discussing AI as a monolithic technology: doing so can lead to misrepresentations, unrealistic expectations, and misplaced trust or distrust. There is a dire need for clarity about the possibilities AI systems can offer, as well as their limitations, in different contexts and for different populations, sectors, and institutions. Conversations about AI need to be situated and grounded in concrete examples that resonate with people’s experiences.
It is not just a matter of AI literacy
One way to anchor conversations about AI is to pay attention to the specific applications or use cases people refer to, and to use these as proxies for the broader socio-technical system. This can lead to insightful conversations about the perceived or anticipated implications of such applications. For example, the Lab in Ukraine, citing the Ministry of Digital Transformation’s White Paper on AI Regulation in Ukraine, mentioned that 57% of the population were familiar with chatbots; the Lab in Paraguay noted that civil society often associated AI with targeted ads; and the Lab in Bosnia and Herzegovina evoked some public awareness of more specialized applications like AI-assisted radiology and automated customer service. Conversely, the Lab in Guinea stated that individuals with limited digital literacy tended to refer to interacting with AI assistants as “talking to robots,” revealing a fear of not knowing who might be controlling them. Many Labs also reported concerns about AI-facilitated deepfakes and threats to information integrity, about data privacy, and about job replacement or displacement.
This suggests that increasing people’s agency over AI is not just a matter of literacy. It is also a matter of demystifying how we collectively refer to and engage with the socio-technical system. This can enable people to seize opportunities, as well as engage in matters of concern and advocate for desirable technological futures.
There is not one AI divide, but a multitude of intersecting AI divides
The Labs highlighted important variations in awareness and understanding of AI, as well as inequities in access and opportunity, within countries and across socio-economic groups. These include geographic divides, for example, between rural and urban areas; generational divides, with younger populations often appearing to be more informed and optimistic about AI; and gender divides, with women expressing less positivity about AI than men, as well as greater concerns about the potential misuse of their images to generate deepfakes.
Just as it is problematic to discuss AI in monolithic terms, it is unrealistic to expect that it will uniformly permeate and impact different economies, societies, and demographics around the world. It is important to critically assess where AI can truly make a positive impact on development, and for whom. Frontier technologies like AI need to be considered from national and even local perspectives, not just a global one.
Incentives behind the development and deployment of AI need to be addressed
Many Labs confirmed that the private sector is moving faster than the public sector can adopt or regulate AI. Several Labs also discussed infrastructural issues, as well as perceptions of colonial legacy, foreign influence, and new dependencies. Others provided a reality check, suggesting that widespread adoption of AI remained a distant prospect given the current digital landscape in their countries.
For example, the Lab in Panama mentioned that academics focused mostly on the use of AI for productivity gains in the private sector, as opposed to its deployment in public services. The Labs in Belarus, Bosnia and Herzegovina, and North Macedonia noted that data and AI governance was a work in progress in their countries, and that regulations were taking time to be established. The Lab in North Macedonia further pointed out that discussions about AI were driven mainly by business, not by the government or civil society. The Lab in Colombia also shared the view that private companies were shaping the narrative, and highlighted that while the country had an Ethical Framework for AI (Marco Ético para la IA), it lacked mechanisms for verifying and enforcing compliance. The Lab in Ecuador further observed that balancing sustainability with digital growth was a major challenge in the country.
These international dynamics between the private and public sectors raise important questions about balancing economic incentives, predominantly those of large Global North conglomerates, against the global public good. There is a need to guarantee that the advantages of AI are distributed equitably, ensuring that those who contribute much of the primary resources, such as data, also benefit from the economic returns.
Towards being more democratic about technology
Building trust and enabling greater agency over AI are critical endeavors. As UNDP actively works on re-imagining trust and safety for AI, we remain committed to including a plurality of voices from the Global Majority, as indeed, our collective technological futures should be a matter of choice. To this end, discrete consultations and engagements are necessary, but not enough. They need to be multiplied across all regions of the globe, even in places that may not be considered particularly AI-ready, and they should be grounded in “participation as justice”: a more continuous, long-term form of engagement based on mutual benefits, reciprocity, equity, and justice. We encourage you to check out our framework for conducting national dialogues on frontier technologies, and to use it to foster more participation at your level, wherever you are.
In doing so, we hope to move away from the paradigm of “democratizing technology” to one of being more democratic about technology.
Full list of Labs represented: Belarus, Bosnia and Herzegovina, North Macedonia, Kazakhstan, Ukraine, Argentina, Barbados and the Eastern Caribbean, Colombia, Ecuador, Panama, Paraguay, Trinidad and Tobago, Guinea, Kenya, Morocco, Bangladesh.
Methodological note: The insights reported in this blogpost are derived from subjective, experience-based observations. They should be read as signals rather than as representative views held in any given country. Our goal in sharing them is to contribute to a plurality of perspectives on AI that, importantly, come from places generally far from the centers of AI research, development, and funding.
Acknowledgements: We would like to acknowledge the insightful contributions of everyone in the UNDP Accelerator Labs Network who took part in this work, as well as of our colleagues in the Human Development Report Office with whom we led the learning circle discussed in this post: Josefin Pasanen, Prachi Paliwal, and Antonio Gonzalez.