For a long time, Big Tech companies have painted a shiny picture of how AI and its technological advancements can massively improve people’s lives. But who benefits, and how much, have become secondary conversations. The safety of these AI advancements is barely mentioned at all.
In the Big Tech world, every new technology is assumed to be safe and for the greater good, until it isn’t. But for high-risk physical AI systems such as self-driving vehicles and robotics, we cannot trust to luck and hope everything will “play out fine” in the name of innovation. The consequences can be life-changing, or worse, life-threatening. Equally, we cannot afford to create inequalities in our society by restricting access to such potentially beneficial AI systems to a chosen few.
This week, as the world’s AI stalwarts descend on New Delhi for the India AI Impact Summit 2026, under the leadership of Hon'ble Prime Minister Narendra Modi Ji, the narrative has broadened from AI safety to AI usage. India has thoughtfully, and in a timely manner, chosen the summit’s theme as “Sarvajana Hitaya, Sarvajana Sukhaya”, meaning “welfare for all, happiness for all”. This places the democratisation and inclusivity of AI’s benefits at the heart of this year’s summit and its discussions.
There is no denying that the democratisation of AI is a powerful ambition with huge potential benefits. But with power comes responsibility. Democratisation gives the masses access to AI’s power; it also puts the onus on policymakers and the AI ecosystem to ensure that this power is used safely. The larger the scale, the higher the risk of negative outcomes if left unchecked — for example, deepfakes enabling fraud or spreading misinformation.
To tackle this, governments worldwide have adopted diverse tactics to regulate AI and its use. The approaches span the intervention continuum, from voluntary guidelines to stringent regulation (e.g. the EU AI Act). Regardless of the approach, the fundamental questions remain unanswered: how do you regulate, or intervene in, a fast-moving AI technology landscape, and can you drive global consensus in this endeavour?
The speed of AI’s technological development has often been used as an excuse, or as a pressure tactic, to stop or dilute regulation and guidelines. The EU Commission faced pressure from the US and tech giants to dial down its AI Act, on the claim that it would delay the development of cutting-edge AI technologies.
Some of the justification for delaying the EU AI Act has merit: the processes it mandates can prove onerous for start-ups and small businesses, which may not have the resources of large organisations. However, if they or the Big Tech companies want to develop high-risk AI systems, the leeway must be limited. Yes, that means the barrier to entry into the market is high, but so is the cost of a life!
Physical AI systems like self-driving vehicles will hit UK roads in the next 6–12 months. Instead of de-regulating AI, I suggest that regulators consider proportional regulation. To enable this, assessing the datasets used for training and testing AI systems should be one mandatory requirement: it would help ensure not only safety (in part) but also fairness and the absence of bias, which are key principles of the EU AI Act and a theme of the summit.
As I travel to Delhi to speak at the AI Impact Summit, in a session on AI’s trustworthiness, I am left thinking that safety is not a choice, no matter how pioneering AI’s technological innovation might be.