The future of AI governance

Thursday 20th March 2025 03:07 EDT

Divya Siddarth, co-founder and executive director of the Collective Intelligence Project (CIP), is working to create democratic, community-driven governance models for AI through innovative platforms and research.

At CIP, she is pioneering democratic governance models for AI, ensuring public interest remains at the core of technological advancements. Through initiatives like GlobalDialog.AI, she is gathering global insights on AI policies, challenging Western-centric narratives, and exploring how different societies prioritise AI’s risks and benefits.

Her team also evaluates AI models for their societal impact, cultural competence, and ethical considerations. Additionally, CIP’s Community Models platform empowers local groups—from book clubs to content moderators—to co-create AI tailored to their specific needs.

In this exclusive interview, Siddarth discusses global AI policies, the future of the technology and how her work at CIP is helping to shape a future where AI reflects the needs of people worldwide.

As someone deeply involved in AI, what do you see as the biggest risks AI poses today, and what are its most significant benefits, particularly in the context of governance?

The biggest risk is the concentration of power—where a small group of people gain disproportionate control over decision-making, affecting everyone else. Another major concern is the gradual erosion of human agency. As we increasingly delegate decision-making and productive work to AI models, we risk reaching a point where humans are no longer in charge of their own destiny. Even if some control remains, it may be limited to only a select few.

However, there are ways to address these risks. Building collective governance structures, ensuring equitable distribution of AI’s benefits, and making sure AI models are unbiased and reflective of diverse global perspectives are crucial. If AI is to be used in critical areas, it must fairly represent all people. That said, AI has immense potential. It can generate beautiful writing and art, revolutionise healthcare and education, and drive incredible advancements. The world is changing rapidly—I just hope it evolves in a way that preserves human freedom and agency.

How can we ensure that emerging economies, underrepresented voices, and diverse perspectives are better integrated into these global discussions?

I believe coalition-building plays a crucial role in ensuring smaller voices are heard. Essentially, when underrepresented groups band together, they gain greater influence. India is one of the largest and most powerful voices in AI policy, yet even for a country of that size, forming coalitions is key. For example, Global South nations could recognise their shared interests and negotiate collectively, such as purchasing computing power as a bloc rather than individually to secure better rates. Additionally, highlighting AI’s impact and investing in domestic development can be transformative.

AI lowers the barrier to entry across industries, offering brilliant but under-resourced individuals a chance to succeed. If we can identify and support these talents, we can foster stronger domestic AI capacity. Ultimately, success lies in building alliances, clearly defining goals, and presenting a united front in global discussions to advocate for fair and equitable AI policies.

The recent AI Summit has been a major talking point. In your view, what were the most significant outcomes of the summit?

I actually helped organise the first AI Safety Summit, held at Bletchley Park in the UK. That summit had a very clear goal: to bring together a few powerful countries, such as the US, China, the UK, and key European nations, to acknowledge the major risks posed by AI. While that framing may not have been the best, it was a specific and focused one.

Things have evolved in some ways since then. The French Summit was much more global, with deep representation from around the world, something that was notably lacking at Bletchley Park. That is a crucial improvement. However, the French Summit’s goals were more diffuse. Was it about investing in AI applications? Addressing catastrophic risks? Labour issues? Bias? National AI strategies? It covered many topics but lacked a singular focus.

Some tangible outcomes from the French Summit included global commitments from various countries to prioritise AI governance and the establishment of AI safety institutes. However, much of the most valuable progress seemed to happen on the sidelines—through new research collaborations, coalition-building among experts on AI risks, and informal agreements. While the summit itself may not have had a clear overarching objective, it did create one of the most globally inclusive spaces for AI governance discussions, which is a significant achievement in its own right.

How would you describe your experience as part of the founding team of the AI Task Force and the AI Safety Institute? Given the UK’s ambition to lead in AI safety, how successful do you think the AI Safety Institute has been so far?

It was an honour to be part of the founding team of the AI Task Force and the AI Safety Institute. Speaking as impartially as I can, I genuinely believe that the UK AI Safety Institute is an incredible accomplishment. It’s rare for a government to build frontier technical capacity so quickly, something governments are famously not great at doing. Yet the UK AI Safety Institute has emerged as one of the best AI safety organisations in the world. Given that other leading organisations in this space have billions in venture capital funding and can move much faster, the UK’s achievement is particularly impressive, and I’m grateful to have been part of it.

Recently, they announced a name change to the AI Security Institute, likely due to the shifting political landscape. This seems to reflect a narrowing focus, moving away from some of the societal impact work I led—such as research on anthropomorphisation, human interaction with language models, and associated risks—toward national security applications. While I understand the political reasoning behind this shift, I find it somewhat disappointing, as I believe governments should also prioritise broader public interest concerns.

That being said, the AI Safety Institute and the ecosystem built around it—especially in such a short time—are very impressive. The UK has a strong domestic AI landscape, and it will be interesting to see where things go from here.

With so much happening with AI right now, what more needs to be done?

I see two key aspects to consider. First, there is still immense untapped potential in the technology we already have. AI could significantly improve people’s lives in ways we haven’t fully explored yet. However, many talented individuals worldwide who could build incredible AI-driven solutions lack the access, funding, or opportunities to contribute. Expanding access to AI is a huge opportunity that remains unrealised.

On the other hand, I am deeply concerned that we are not economically prepared for the rapid shifts in the labour market. The automation of jobs, even in wealthier nations, is happening faster than our systems can adapt. In an ideal world, automating difficult jobs would be universally beneficial, but we don’t live in an ideal world; we live in one marked by inequality. The economic adjustments required to support displaced workers have not been adequately addressed.

Additionally, I worry about the implications for democracy. A fundamental reason democracies have persisted is that governments are incentivised to keep people employed and satisfied. While this may not be the most inspiring reason for democracy to exist, it has played a crucial role in its survival. If large-scale automation disrupts this balance, it could have serious consequences. This is one of the reasons I am so passionate about ensuring AI development prioritises human agency and well-being.

