Alphabet’s Google (GOOGL) will no longer hold back on how it leverages artificial intelligence. The tech giant has shelved an initial policy that barred it from using the revolutionary technology in harmful applications. The change of tack means the company is now open to using AI in weapons and surveillance, reversing its earlier stance.
Google AI Policy Change
Previously, Google stated that it would not work on technologies that gather or use information for surveillance in violation of internationally accepted norms. Now the company says it will stay within widely accepted principles of international law and human rights even as it applies AI to potentially harmful applications.
The policy change comes as the tech giant acknowledges stiff competition for AI leadership in an increasingly complex geopolitical world. In a blog post, Google insists that democracies should be at the forefront of AI development to safeguard core values like freedom, equality and respect for human rights.
Google established its AI principles in 2018 after declining to extend a government contract that used AI to analyze and interpret drone footage. The policy came into place after thousands of workers signed a petition opposing the contract before it was terminated, and dozens of them quit in protest of Google’s involvement. Additionally, the company stated at the time that it “couldn’t be sure” the work would be in line with its AI principles, which is one of the reasons it withdrew from bidding on a $10 billion Pentagon cloud contract.
Google’s updated principles reflect the company’s expanding goal of making its AI technology and services available to more users and clients. The shift is also consistent with growing rhetoric about a winner-take-all AI race between the United States and China.
European AI Act
Meanwhile, the European Union has begun enforcing its landmark artificial intelligence law, opening the door to stringent regulations and steep penalties for violators. Companies must now abide by the law’s restrictions or risk fines if they don’t.
The new law covers certain AI applications that pose an unacceptable risk to citizens. They include manipulative AI tools, social scoring systems, real-time facial recognition and other biometric identification methods that classify individuals based on their sexual orientation, sex life, race, and other characteristics.
Businesses that violate the EU AI Act risk fines of up to $35.8 million (€35 million) or 7% of their global annual sales, whichever is higher. The size of the penalty will depend on the nature of the infringement and the size of the company found liable. Consequently, violators stand to pay much higher fines than are possible under the EU’s digital privacy law, the GDPR.
The implementation of the new AI law has elicited mixed reactions. While many tout it as a crucial measure to ensure the technology is used for the betterment of society, others are disgruntled, with growing concerns that the AI Act will end up strangling innovation in the nascent sector.