
Google to continue working with military, but will avoid nukes and spying

Polina Godz  Jacobin

The document, titled "Artificial Intelligence at Google: Our Principles", doesn't go into specifics about the company's involvement in the drone project, but it firmly states that Google will not develop AI weapons. However, it says the company will continue to work with the military in "many other areas".

Following the anger, Google decided not to renew the "Maven" AI contract with the US Defense Department after it expires in 2019. Despite Google's commitment not to use AI to build weapons, employees questioned whether the principles would explicitly prohibit Google from pursuing a government contract like Maven in the future. The Maven contract had officially turned Google into a defense contractor, a company that provides products or services to the US military or US intelligence agencies.

Google lists seven core principles for its AI work, along with several applications that are off-limits. One of the listed principles is that Google's AI technologies should "be made available for uses that accord with these principles". However, Google could improve by adding more public transparency and by working with the United Nations to reject autonomous weapons, he said.

Over 4,000 Google employees protested Google's involvement with the Pentagon, saying in an open letter that Google should not be in the "business of war".

In a blog post this morning, Google CEO Sundar Pichai outlined the principles that will govern the company's military work going forward. The principles also state that the company will work to avoid "unjust impacts" in its AI algorithms, such as the injection of racial, sexual or political bias into automated decision-making.

Another of the listed principles is to "uphold high standards of scientific excellence".

Google is one of the leading technology companies in artificial intelligence, which landed it a juicy government contract a year ago to work on "Project Maven".

CNBC also noted that Pichai's vow to "work to limit potentially harmful or abusive applications" is less explicit than previous Google guidelines on AI.

It's interesting that Google mentioned international human rights law here: just recently, the United Nations' Special Rapporteur called on technology companies to build international human rights standards into their products and services by default, instead of applying their own filtering and censorship rules, or even the censorship rules of certain local governments.

The document also pledges: "We will incorporate our privacy principles in the development and use of our AI technologies."

"While this means that we will not pursue certain types of government contracts", Greene wrote, "we want to assure our customers and partners that we are still doing everything we can within these guidelines to support our government, the military and our veterans".

The new principles follow months of debate inside Google over AI technology it had developed for the US military to analyze drone footage as part of what was known as Project Maven.

"These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe", he wrote.