Top 6 Famous AI Disasters: When Technology Went Awry

Data has become one of the most valuable resources of the modern era. Plenty of organizations across the globe are investing, and continuing to invest, in data and analytics. But every coin has two sides, and data and analytics have a darker side too. According to a recent report, 26% of IT professionals said that machine learning and AI are driving the most investment in the IT sector.

Machine learning algorithms have both pros and cons: they can give an organization a strong competitive advantage, but they can also make mistakes whose consequences can be severe for the organization in terms of revenue and reputation.

Hence, it is important to first understand your data and the conclusions it leads you to, to know the tools associated with it, and to keep the values of your organization in mind. In this blog, we discuss the top 6 famous AI disasters and look at what went wrong in each case.

01. Air Canada paid damages for incorrect information provided to a passenger

Air Canada's virtual assistant provided incorrect information to a passenger, and the airline was ordered to pay for the damage the chatbot caused.

In 2023, after his grandmother died, a passenger named Jake Moffatt asked the virtual assistant about bereavement fares. The assistant told him he could buy a ticket from Vancouver to Toronto and then apply for the bereavement discount within 90 days of purchase. He followed the advice and bought the ticket, but when he claimed the refund, the airline rejected his request, saying that bereavement fares cannot be claimed once tickets have been purchased.

Moffatt took the matter to a tribunal, claiming that Air Canada had misled him through its chatbot. In its defence, Air Canada argued that it could not be held liable for information given by its virtual assistant.

Tribunal member Christopher Rivers sided with Moffatt, stating that Air Canada is responsible for the accuracy of its chatbot. He ordered the airline to pay CA$812.02, including CA$650.88 in damages caused by the virtual assistant.

02. AI-generated articles were published in Sports Illustrated

The online magazine Futurism reported that AI-generated articles had been published in Sports Illustrated. According to Futurism, the articles were attributed to "fake authors" who did not exist, and the author headshots in question were found listed on a website that sells AI-generated portraits.

Sports Illustrated's publisher, The Arena Group, denied the accusation, stating that the content was licensed from a third party named AdVon Commerce and that AdVon had assured them the articles were written by humans.

However, the articles published under pseudonyms have since been removed from the website, and The Arena Group has been asked to answer the allegations raised against it.

03. Gannett stopped using an AI writing tool

The newspaper chain Gannett announced that it would stop using an AI writing tool after the content it generated proved repetitive, poorly written, and missing key details.

One example, highlighted by CNN, opened with: "The Worthington Christian defeated the Westerville North in an Ohio boys soccer game on Saturday." The AI tool wrote similar articles for other papers in the chain as well.

After being mocked on social media, Gannett decided to stop using the tool and to correct the problems it had caused.

04. iTutor Group's AI recruiting software rejected applicants because of their age

The tutoring company iTutor Group, which provides remote tutoring services to students in China, used AI-powered recruiting software. The tool automatically rejected female applicants aged 55 and older and male applicants aged 60 and older.

The US Equal Employment Opportunity Commission (EEOC) said that more than 200 qualified candidates were rejected automatically by the software. EEOC Chair Charlotte A. Burrows called discrimination based on age "unjust" and "unlawful" and said that the employer is responsible for the discrimination even when it is carried out by technology.

05. A healthcare AI tool failed to identify Black patients

A study published in Science in 2019 described the failure of an AI tool in the healthcare industry. The tool was used by hospitals and insurance companies across the US to identify patients in need of high-risk care management programs, but it failed to flag Black patients.

In high-risk care management programs, trained nursing staff care for and monitor chronically ill patients in order to prevent serious complications. But the AI tool worked far better for white patients and failed to flag Black patients with the same level of need.

The researchers concluded that the lower-quality care received by Black patients was tied to income: Black patients are, on average, poorer than white patients, so less money is spent on their healthcare and they are less able to access the best facilities. Because the tool relied on these spending patterns, it underestimated how sick Black patients actually were.

06. Amazon's AI recruiting tool preferred male candidates

In 2014, Amazon began developing AI software to help its HR function screen applications for the best candidates. But a problem arose when the tool showed a preference for male candidates.

Much as shoppers rate products on Amazon, the tool rated candidates on a scale of 1 to 5 stars.

The trouble arose because the ML models penalized resumes containing the word "women's" and even downgraded candidates from women's colleges. The models had been trained on ten years' worth of resumes submitted to Amazon, most of which, unsurprisingly, came from men.

Amazon said the tool was never used to evaluate or hire any candidate. Although the company tried to make the tool neutral, it failed to do so; unable to guarantee that the tool would not discriminate in other ways, Amazon ultimately ended the project.
