Artificial Intelligence: Risks, Regulation, and Global Responses

Ongoing developments in AI

According to Goldman Sachs Research, ongoing advancements in Artificial Intelligence (AI) are expected to generate significant economic impacts. The forecast suggests that AI will begin to have a measurable influence on US Gross Domestic Product (GDP) by 2027, with subsequent effects on the growth trajectories of other global economies. This outlook rests on AI's potential to automate approximately 25% of labor tasks in advanced economies and 10-20% in emerging economies, as highlighted in a report by Goldman Sachs economists Joseph Briggs and Devesh Kodnani. Their projections indicate that AI will boost GDP growth by an estimated 0.4 percentage points in the US, 0.3 percentage points on average in other Developed Markets (DMs), and 0.2 percentage points on average in advanced Emerging Markets (EMs) by 2034.

In other emerging markets, the anticipated impact of AI is likely to be more gradual, with only a modest boost forecast due to slower adoption rates and lower AI exposure.

Artificial intelligence benefits

Artificial Intelligence (AI) is revolutionizing multiple sectors, bringing unprecedented advancements: 

  • In healthcare, AI enhances diagnostic capabilities, personalizes treatment plans, and accelerates drug discovery.
  • Businesses leverage AI for data-driven decision-making, streamlined operations, and improved customer experiences.
  • In criminal justice, AI supports law enforcement with predictive policing, forensic analysis, and the management of large datasets, with the aim of a more effective and equitable legal system.
  • The military sector benefits from AI in strategic planning, risk assessment, and autonomous systems that enhance situational awareness while minimizing human risk.
  • In education, AI is transforming learning through personalized tutoring, adaptive assessments, and data-driven insights that tailor teaching to individual students.

Artificial intelligence risks

As the integration of Artificial Intelligence (AI) continues to reshape many industries, it is important to recognize the associated risks: 

  • Lack of transparency poses a challenge: as AI systems grow more complex, it becomes increasingly difficult to understand their decision-making processes.
  • Privacy concerns: AI systems often require vast amounts of personal data, raising questions about how this information is collected, stored, and used. 
  • Ethical considerations arise because decisions made by machines may lack a human sense of morality, sparking debate over the implications of AI applications.
  • The potential for job displacement is a significant worry, especially in industries where automation can replace specific human tasks, potentially leading to unemployment or shifts in the job market. 
  • The rise of misinformation facilitated by AI, through deepfakes or algorithmic manipulation, poses risks to public discourse and trust.
  • Lastly, the economic implications must be carefully monitored, as AI adoption may create disparities and inequalities, disadvantaging some sectors while benefiting others.

Global responses and calls for rapid regulation

AI's potential benefits and risks have led to widespread calls for governments to adapt quickly to the changes AI is already delivering and to the potentially transformative changes still to come.

Leading figures

Sundar Pichai, the CEO of Google, is among the leading technology figures who have voiced concerns about the potential negative effects of AI and advocated for establishing a suitable regulatory framework.

US

US President Joe Biden recently issued an executive order requiring AI developers to provide the federal government with an assessment of their applications' vulnerability to cyberattacks, the data used to train and test the AI, and its performance measurements.

China

Meanwhile, China's AI regulations focus substantially on generative AI and on protections against deepfakes (synthetically produced images and videos that mimic the appearance and voice of real people but depict events that never happened). Chinese regulations ban fake news and, to safeguard the public from unsound recommendations, restrict companies from employing dynamic pricing based on mined personal data for essential services. Additionally, these regulations specify that all automated decision-making processes must be transparent to those directly affected.

UK

In March 2023, the UK released a white paper detailing its approach to regulating AI. The document is based on five fundamental principles:

- Safety and security: AI systems should operate securely and safely, with a continuous process of identifying, assessing, and managing risks.

- Transparency: AI systems must be transparent and explicable, facilitating comprehension and interpretation of their outputs.

- Fairness: AI systems should not discriminate, generate an unfair outcome, or undermine legal rights. 

- Accountability: Governance measures should establish effective oversight of AI systems and clear lines of accountability.

- Contestability: Users should be able to contest AI decisions or outcomes.

Spain

The Spanish AEPD, the independent supervisory authority responsible for ensuring that organizations respect the right to privacy and data protection when they process personal data in Spain, has implemented Audit Requirements for Personal Data Processing Activities involving AI.

France

The French CNIL, the independent administrative regulatory body whose mission is to ensure that data privacy law is applied to the collection, storage, and use of personal data, has created a department dedicated to AI, offering open self-evaluation resources for AI businesses.

EU

In April 2021, the European Commission released the AI Act, a legislative proposal setting out rules for the governance of AI within the European Union. The AI Act introduces a new approach that categorizes AI into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. The rules governing AI systems within the EU vary according to the level of risk they pose to fundamental rights.

  • Unacceptable risk:

AI systems threatening individuals' safety, livelihoods, and rights will be outright banned. This includes applications such as governmental social scoring and voice-assisted toys that promote dangerous behavior.

  • High risk:

AI systems identified as high-risk include technologies used in critical infrastructure (transport), educational or vocational training (exam scoring), safety components of products (AI in robot-assisted surgery), employment and worker management (CV-sorting software), essential private and public services (credit scoring that denies loans), law enforcement (evaluating evidence reliability), migration and border control (document authenticity verification), and the administration of justice and democratic processes. Strict requirements must be met before high-risk AI systems can enter the market.

  • Limited risk:

AI systems that are considered limited risk are subject to specific transparency obligations. Users interacting with AI systems, such as chatbots, must be informed that they are engaging with a machine, allowing them to make informed decisions about continuing or disengaging.

  • Minimal or no risk:

The proposal permits the unrestricted use of minimal-risk AI, covering applications like AI-enabled video games or spam filters. Most AI systems currently employed in the EU fall into this low-risk category.

In 2023, the EU Council and the European Parliament made changes to the initial draft legislation, prompted by concerns over emerging technologies such as ChatGPT, which introduce many potential applications and present new levels of associated risk.

Accordingly, Members of the European Parliament amended the previous list of systems deemed to pose an unacceptable risk to people's safety.

The changes include bans on intrusive and discriminatory uses of AI, such as "real-time" remote biometric identification systems in public spaces. The scope of high-risk areas has been broadened to cover potential harm to people's health, safety, fundamental rights, or the environment.

Additionally, AI systems influencing voters in political campaigns and recommender systems employed by social media platforms have been added to the high-risk category.

The AI Act is expected to be finalized in late 2023 or early 2024.

The way forward

The primary goal of these regulatory efforts is to strike a balance between the need to regulate AI's development, especially its impact on citizens' daily lives, and the need to avoid stifling innovation or burdening companies with overly rigid laws.

Assessing the potential success of these regulatory efforts will take time and effort, as addressing such matters demands specific technical competence to understand both what is being regulated and what should be regulated.

Companies employing AI across diverse sectors will also encounter challenges in implementing a consistent and sustainable global approach to AI governance and compliance due to varying regulatory standards. 

Prompt action is therefore essential, given that regulators worldwide have issued comprehensive guidance on AI regulation. Businesses using AI should navigate these regulatory frameworks and strike a balance between compliance and innovation. They should also evaluate the implications of the regulations for their activities and determine whether their existing governance strategies align with the proposed principles.

Overall, the success of regulatory efforts hinges on collaboration with regulators, transparent communication, and proactivity. 
