AI surveillance is still in its nascent phase
The AI Safety Summit recently held near London concluded with 28 countries, including the US, China and India, along with the European Union, signing the Bletchley Declaration, agreeing to work together to address the risks associated with AI and to test advanced AI models. This was followed by the United Nations announcing that it would support the setting up of an expert AI panel along the lines of the Intergovernmental Panel on Climate Change.
Some leaders, like Tesla’s Elon Musk, have spoken of the existential risk that the advent and advancement of AI could pose to the human race. Others feel the short-term risks of data privacy, bias creep and model ownership are more pressing and must be addressed immediately. Some countries in the EU have framed their own laws to protect the wellbeing of their citizens and of society at large. The US has stated that safety standards will be set by its National Institute of Standards and Technology. The UK, on the other hand, currently has no plans for regulation, stating that it would come in the way of innovation.
It therefore appears that AI oversight is still in its nascent phase, with nations moving forward at their own pace in defining regulations. An editorial in Nature (October 31, 2023) highlights the need both to cultivate innovation and to establish a regulatory framework for AI. It stresses the principles on which regulation should rest: transparency in data and modelling, and legally binding standards for monitoring, compliance and liability. The AI solutions being created have implications for multiple stakeholders at every stage of design, development, testing, implementation and usage. Interdisciplinary participation, including that of social scientists, is therefore necessary to create a framework of international standards and regulations that safeguards everyone’s interests.
Home to the world’s largest number of coders and digital practitioners, and supporting major multinational corporations in building and maintaining their digital solutions, India should take the lead in propagating the significance of universal standards. Further, as the world’s most populous country, encompassing a wide range of diversities, it has a responsibility to protect its people from the biases and discrimination that may be built into AI solutions.
Several nations have witnessed the dangers of elections and citizens’ uprisings being influenced by AI models. As Chair of the Global Partnership on Artificial Intelligence (GPAI), whose summit is to be held in New Delhi in December 2023, India has maintained that AI should be guided by the principles of safety and trust for users and accountability for platforms. We should seize this opportunity to build a platform for collaboration with other nations, one that would lead to defining the boundaries and standards for the AI models of the future and safeguard human interests.
Originally appeared in Financial Express