In a significant shift, Google has removed its commitment not to use artificial intelligence (AI) to develop technologies that could cause "overall harm," including weapons and surveillance. The decision marks a notable departure from a pledge the company made seven years ago in response to employee concerns rooted in its original motto: "Don't be evil." This article examines the implications of the change, exploring the ethical considerations and the potential future impact on both warfare and surveillance technologies.
The initial pledge not to develop AI for harmful purposes came in response to internal pressure from Google employees, who argued that the company's involvement in projects such as drone technology for the Pentagon directly contradicted its ethical stance. The removal of the "Don't be evil" motto from Google's code of conduct foreshadowed this week's announcement: Google officials now say they can no longer promise to refrain from developing AI weapons.
In a blog post, Google executives James Manyika and Demis Hassabis explained that in today's "increasingly complex geopolitical landscape," democracies should lead in AI development, guided by values like freedom and human rights. They further stated that collaboration between companies, governments, and organizations sharing these values is crucial to create AI that protects people, promotes global growth, and supports national security. This pivot suggests a prioritization of national security interests over previously stated ethical concerns.
Previously, Google explicitly pledged to avoid AI applications in weapons, in surveillance that violates internationally accepted norms, in technologies likely to cause overall harm, and in technologies whose purpose contravenes international law and human rights. Now, those commitments have been replaced with a pledge to implement "appropriate human oversight," to align the work with international law and human rights, and to use testing and monitoring to mitigate harmful outcomes.
The removal of the AI ethics pledge has sparked widespread criticism. Margaret Mitchell, former co-leader of Google's ethical AI team, told Bloomberg that the move erases the work of ethical AI advocates and signals that Google may now deploy technology directly capable of harming people.
Sarah Leah Whitson, a human rights advocate, went further, labeling Google a "corporate war machine" and pointing to its donations to Donald Trump's inaugural committee and CEO Sundar Pichai's presence at the inauguration alongside Elon Musk.
Parul Koul, a Google software engineer, underscored the "long-standing employee sentiment that the company should not be in the business of war."
This decision raises critical questions about the future of AI and its role in society, particularly in relation to warfare and surveillance.
In an era when truth and ethical considerations face manipulation and suppression, organizations must commit to upholding the principles of truth, accessibility, and community. This is especially crucial as government and corporate influence grows, threatening independent journalism and media.
This is why, now more than ever, support for independent media is vital, particularly for journalism committed to community-first models.
Google's decision underscores the urgent need for a broader societal conversation about the ethical implications of AI.
By prioritizing ethical considerations, promoting transparency, and engaging in open dialogue, we can ensure that AI is used to benefit humanity, rather than to cause harm.