“We Want AI to Serve Humanity, Not Just Markets”: Ginette Azcona on Governing AI Without Exclusion

Capacity, Power, and Participation

Artificial intelligence is no longer arriving. It is already embedded.

Across healthcare systems, classrooms, farms, welfare offices, and labor markets, algorithmic systems are shaping access, outcomes, and opportunity. They influence who receives care, who qualifies for support, which risks are priced as insurable, and whose labor is valued. Yet the rules governing these systems are still being negotiated as if their effects were abstract, technical, or confined to the future.

This gap between deployment and governance is no longer theoretical. It is structural.

For Ginette Azcona, the problem is not that artificial intelligence lacks principles. It is that governance has remained detached from capacity, participation, and power.

“The conversation around AI governance often stays at the level of values and principles,” Azcona said in conversation with Danilo McGarry on the AI: Alternative Intelligence podcast. “But principles alone do not tell us who gets to participate, who has the tools to engage, or whose realities are reflected when systems are designed.”

Azcona’s work sits precisely at that fault line.

From Measuring Inequality to Designing Infrastructure

Before turning her attention to artificial intelligence, Azcona spent more than a decade inside the global development system. At UN Women, she led quantitative research on gender equality and served as lead author of the SDG Gender Snapshot, one of the most widely cited annual assessments of progress under the 2030 Agenda.

The work taught her a recurring lesson. Inequality is rarely invisible. It is simply not counted.

“What data exists, what data is missing, and who decides what gets measured,” she explained on the podcast, “these are political choices, not technical ones.”

Over time, that insight expanded. Data gaps were not only shaping development outcomes. They were shaping the design of emerging technologies. As artificial intelligence systems increasingly relied on large datasets, the same exclusions that distorted global indicators were being encoded into automated decision-making.

The problem, Azcona realized, was no longer just measurement. It was governance.

Why AI Governance Has Struggled to Deliver

The last several years have produced no shortage of AI ethics frameworks, declarations, and voluntary commitments. Transparency, accountability, and fairness appear regularly in communiqués issued by governments, corporations, and multilateral bodies.

Yet on the ground, outcomes remain uneven.

According to Azcona, this is not a failure of intent. It is a failure of institutional design.

“AI governance has largely been shaped by national security concerns and industry priorities,” she noted in her discussion with McGarry. “Public interest considerations often enter later, if at all.”

This imbalance matters. Governance regimes that privilege security and market competitiveness tend to concentrate power among actors who already possess technical capacity, capital, and proximity to decision-making spaces. Those most affected by AI systems, particularly in the Global South, are left responding to rules they did not help shape.

The result is fragmentation. Ethical principles exist without enforcement. Consultations occur without continuity. Inclusion is discussed without infrastructure to support it.

The Missing Question: Who Gets to Participate?

For Azcona, participation is not symbolic. It is operational.

Artificial intelligence is already transforming healthcare delivery, education systems, agriculture, and labor markets across low- and middle-income countries. These are not experimental deployments. They are real systems affecting real lives.

Yet the communities navigating these realities are often absent when global standards, procurement norms, and governance frameworks are negotiated.

“That disconnect is dangerous,” Azcona said. “If those living with the consequences of AI systems are not part of the conversation, governance will default to perspectives that are distant from impact.”

This is where capacity becomes decisive. Participation requires more than invitations. It requires access to tools, shared language across disciplines, and sustained collaboration rather than one-off consultations.

The KAIA Network

To address this gap, Azcona is building the KAIA Network, an initiative anchored at The New School that treats AI governance as connective infrastructure.

KAIA stands for Knowledge and AI for All. Its purpose is straightforward but ambitious: to link social scientists, AI practitioners, policymakers, and funders across regions that are typically excluded from global technology governance.

KAIA is not a think tank. It is not an advocacy platform. It is not another forum for abstract debate.

Instead, it functions as a collaboration layer: a system that enables co-creation across disciplines and geographies, allowing those with contextual knowledge to work directly with those building technical systems.

On the Alternative Intelligence podcast, Azcona described KAIA as an attempt to move beyond proximity-based advantage. “Right now, who gets to build AI is heavily shaped by where you are, who you know, and what resources you can access,” she said. “KAIA is about changing that equation.”

From Fragmentation to Connection

One of the defining challenges in the AI-for-social-good space is fragmentation. Researchers operate in silos. Practitioners lack access to technical collaborators. Funders struggle to identify credible, context-aware projects. Policymakers consult without mechanisms for follow-through.

KAIA is designed to reduce this friction.

By creating structured pathways for collaboration, the network aims to make participation practical rather than performative. Social scientists bring contextual insight. AI practitioners bring technical expertise. Institutions provide legitimacy and scale. Funders support execution rather than pilots that end at proof of concept.

The goal is not consensus. It is capacity.

Reframing AI Governance Around the Public Interest

Central to Azcona’s work is a reframing of what AI governance should prioritize.

Security and competitiveness matter. Industry innovation matters. But governance that excludes public interest considerations by default will reproduce existing inequities at greater speed and scale.

“AI governance needs to center people,” Azcona said. “Not as an afterthought, but as a starting point.”

This does not mean abandoning rigor. It means recognizing that accountability requires institutions capable of enforcing standards, auditing systems, and supporting participation over time.

It also means moving beyond aspirational language. Concrete outcomes matter more than statements of intent.

Why Now?

The global architecture of AI governance is still being formed. Standards are emerging. Procurement norms are being established. Institutional roles are solidifying.

Once embedded, these structures will be difficult to change.

That is why Azcona views the current moment as pivotal. Not because AI is new, but because its governance pathways have not yet hardened into infrastructure.

“There is still space to design systems that are inclusive by default,” she observed. “But that window is closing.”

From Conversation to Construction

Governing AI without exclusion does not require perfect foresight. It requires deliberate design.

It requires institutions that connect rather than fragment. Capacity that enables participation. Governance frameworks that reflect lived realities rather than abstract ideals.

Through KAIA and her work at The New School, Ginette Azcona is focused on building those conditions. Quietly, deliberately, and with an understanding that power in AI resides not only in models or code, but in the systems that decide who gets to build them.

Governance, after all, is already happening. The only remaining question is whether it will be inclusive by design or exclusionary by default.

