Amazon to warn customers on limitations of its AI

By Jeffrey Dastin and Paresh Dave


LAS VEGAS (Reuters) - Amazon.com Inc is planning to roll out warning cards for software sold by its cloud-computing division, in light of ongoing concern that artificially intelligent systems can discriminate against different groups, the company told Reuters.

Akin to lengthy nutrition labels, Amazon's so-called AI Service Cards will be public so its business customers can see the limitations of certain cloud services, such as facial recognition and audio transcription. The goal would be to prevent mistaken use of its technology, explain how its systems work and manage privacy, Amazon said.

The company is not the first to publish such warnings. International Business Machines Corp, a smaller player in the cloud, did so years ago. The No. 3 cloud provider, Alphabet Inc's Google, has gone further, publishing details on the datasets it used to train some of its AI.

Yet Amazon's decision to release its first three service cards on Wednesday reflects the industry leader's attempt to change its image after a public spat with civil liberties critics years ago left an impression that it cared less about AI ethics than its peers did. The move will coincide with the company's annual cloud conference in Las Vegas.

Michael Kearns, a University of Pennsylvania professor and since 2020 a scholar at Amazon, said the decision to issue the cards followed privacy and fairness audits of the company's software. The cards would address AI ethics concerns publicly at a time when tech regulation was on the horizon, said Kearns.

"The biggest thing about this launch is the commitment to do this on an ongoing basis and an expanded basis," he said.

Amazon chose software touching on sensitive demographic issues as a start for its service cards, which Kearns expects to grow in detail over time.

SKIN TONES

One such service is called "Rekognition." In 2019, Amazon contested a study saying the technology struggled to identify the gender of individuals with darker skin tones. But after the 2020 murder of George Floyd, an unarmed Black man, during an arrest, the company issued a moratorium on police use of its facial recognition software.

Now, Amazon says in a service card seen by Reuters that Rekognition does not support matching "images that are too blurry and grainy for the face to be recognized by a human, or that have large portions of the face occluded by hair, hands, and other objects." It also warns against matching faces in cartoons and other "nonhuman entities."

In another warning card seen by Reuters, on audio transcription, Amazon states, "Inconsistently modifying audio inputs could result in unfair outcomes for different demographic groups." Kearns said accurately transcribing the wide range of regional accents and dialects in North America alone was a challenge Amazon had worked to address.

Jessica Newman, director of the AI Security Initiative at the University of California at Berkeley, said technology companies were increasingly publishing such disclosures as a signal of responsible AI practices, though they had a way to go.

"We shouldn't be dependent upon the goodwill of companies to provide basic details of systems that can have enormous influence on people's lives," she said, calling for more industry standards.

Tech giants have wrestled with making such documents short enough that people will read them yet sufficiently detailed and up to date to reflect frequent software tweaks, a person who worked on nutrition labels at two major enterprises said.

(Reporting By Jeffrey Dastin in Las Vegas and Paresh Dave in Oakland; Editing by Bradley Perrett)
