Google CEO Sundar Pichai recently called for an international regulatory approach to artificial intelligence technology in an op-ed published in the Financial Times.
“Now there is no question in my mind that artificial intelligence needs to be regulated,” he wrote. “It is too important not to. The only question is how to approach it.”
He said that government regulation should come in addition to ethical development principles within technology companies, citing Google’s own efforts that include conducting independent human-rights assessments of new products.
Noting that the EU and the US are already starting to develop regulatory proposals, he called for international alignment of such regulations built upon an agreement on core values. “Companies such as ours cannot just build promising new technology and let market forces decide how it will be used,” he wrote. “It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.”
Pichai also ticked off a few ethical concerns with AI that have been raised by others, including deepfakes and “nefarious uses of facial recognition.” While work is being done to address such concerns, he added, “there will inevitably be more challenges ahead that no company or industry can solve alone.”
Tesla’s Elon Musk and other tech luminaries have been worrying publicly for years about unbridled AI and its negative consequences, but Pichai’s commentary takes the discussion to another level by calling for regulatory involvement. Pichai doesn’t describe any specific regulations, but he does suggest that Europe’s General Data Protection Regulation (GDPR) can serve as a strong foundation. He added that good regulations will consider safety, explainability, fairness and accountability. “Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities,” Pichai wrote.
Pichai’s commentary also gave him a chance to tout Google’s work in AI, including helping doctors spot breast cancer in mammograms with higher accuracy. That work was published in Nature on Jan. 1 and involved Google, Northwestern Medicine and two screening centers in the UK.
That research team used mammograms along with follow-up exams to train a deep-learning AI model to identify breast cancer in screening images. The findings showed an absolute reduction in false negatives of up to 9% and a reduction in false positives of up to 5%. Similar studies have used AI training to spot lung cancers.
In addition to deep-learning software, Google is one of several large technology companies building AI hardware at the chip level. In 2019, Google Cloud began offering liquid-cooled pods that link together its Tensor Processing Unit (TPU) chips for faster performance in the training phase of AI systems.
Other companies that are building AI chips or augmenting existing chips include Intel, Nvidia, Qualcomm and Apple. Tesla is building Model S, X and 3 cars with its own AI processor for self-driving capabilities. AI inference work can happen on edge devices with some of these chips, but AI training usually takes place on massive systems in the data centers of cloud providers such as Google, Amazon, IBM and Microsoft.
Pichai's commentary comes at a time when US lawmakers and federal authorities are scrutinizing the activities of major tech companies like Facebook and YouTube, especially for how their social engagement platforms shape public opinion.