Artificial intelligence is reshaping customer service, promising faster responses, personalised interactions, and operational efficiency.
But as companies rush to deploy AI-powered solutions, the risks are becoming increasingly apparent.
From misinformation to data breaches, these failures reveal the vulnerabilities of AI systems and the consequences for businesses that fail to manage them effectively.
Here’s a closer look at some of the most notable incidents and what they mean for the future of customer-facing AI.
AI Missteps: A Closer Look at Key Incidents
Air Canada’s Chatbot Debacle
In 2022, Air Canada’s chatbot misled a grieving customer by incorrectly advising him on how to claim a bereavement fare. The chatbot assured him he could apply for the discount after purchasing full-price tickets—contrary to the airline’s actual policy. When the claim was denied, the customer took legal action.
What Went Wrong: The chatbot’s training failed to align with company policies, resulting in misinformation.
The Fallout: Air Canada argued that its chatbot was a “separate legal entity” and not the airline’s responsibility—a claim dismissed by the court. The airline was ordered to pay damages, setting a precedent that companies are accountable for their AI systems’ outputs.
Why It Matters: This case highlights the reputational and legal risks of deploying poorly trained AI systems. It underscores the need for rigorous oversight and clear accountability frameworks.
Chevrolet Dealership’s $1 Car Sale
In December 2023, a Chevrolet dealership’s chatbot became infamous after users manipulated it into agreeing to sell a $70,000 car for just $1. The incident went viral, exposing vulnerabilities in prompt engineering safeguards.
What Went Wrong: The chatbot lacked protections against manipulation, allowing users to exploit its programming.
The Fallout: While no transactions were honoured, the dealership faced public embarrassment and scrutiny.
Why It Matters: This incident demonstrates how easily AI can be exploited when safeguards are inadequate—especially in high-value transactions.
Peloton’s Privacy Lawsuit
Peloton faced legal challenges in 2023 after allegedly sharing customer chat data with third-party AI vendor Drift without proper consent. The lawsuit claimed this violated privacy laws like California’s Invasion of Privacy Act (CIPA).
What Went Wrong: Customer data was used for AI training without explicit user permission.
The Fallout: The case highlighted ethical concerns around data usage and privacy in AI systems.
Why It Matters: As regulatory scrutiny intensifies, businesses must prioritise transparency and compliance when handling user data.
OmniGPT Data Breach
In February 2025, OmniGPT—a popular AI aggregation platform—suffered one of the largest chatbot-related data breaches to date. Hackers leaked sensitive user information, including personal details and private conversations.
What Went Wrong: Weak security measures allowed hackers to access over 30 million lines of user conversations.
The Fallout: Users faced risks like identity theft and phishing attacks, while OmniGPT suffered reputation damage.
Why It Matters: This breach underscores the importance of robust security protocols in AI systems handling sensitive data.
Samsung’s Generative AI Ban
Samsung banned generative AI tools like ChatGPT after employees inadvertently leaked confidential corporate information while using these tools for code reviews.
What Went Wrong: Employees lacked guidelines on using external AI platforms for sensitive tasks.
The Fallout: Samsung took decisive action but highlighted internal misuse risks even in tech-savvy organisations.
Why It Matters: Companies must educate employees on safe AI usage to prevent accidental data leaks.
The Bigger Picture: Patterns in AI Risks
These incidents reveal recurring vulnerabilities in customer-facing AI systems:
Misinformation Risks: Poorly trained models can provide incorrect information, damaging trust and exposing companies to legal liabilities.
Manipulation Vulnerabilities: Systems lacking safeguards are susceptible to exploitation through prompt engineering attacks.
Data Privacy Concerns: Mishandling sensitive user data can lead to lawsuits and reputation harm.
Internal Misuse: Employees using external tools without proper guidelines can inadvertently compromise security.
Regulatory Scrutiny: As governments tighten rules around high-risk applications, businesses must adapt or face penalties.
Navigating the Risks: Strategies for Safer AI Deployment
Build Robust Safeguards
- Implement protections against prompt manipulation and unauthorised access.
- Regularly test systems for vulnerabilities before deployment.
Prioritise Data Privacy
- Encrypt sensitive information and comply with regulations like GDPR and CCPA.
- Obtain explicit user consent for data usage and anonymise personal details wherever possible.
Establish Accountability
- Clearly define who is responsible for AI system outputs within your organisation.
- Develop internal policies to ensure alignment between technology and business practices.
Educate Employees
- Train staff on ethical AI usage and establish clear guidelines for external tool use.
- Provide ongoing education about emerging risks in generative AI technologies.
Human Oversight
- Implement human review mechanisms for high-stakes decisions made by AI systems.
- Use hybrid models combining automated processes with human intervention where necessary.
Why This Matters Now
AI is no longer just a tool; it’s becoming an integral part of how businesses interact with customers. But as these incidents show, rushing into adoption without addressing risks can lead to costly mistakes - both financial and reputational. Companies must strike a balance between innovation and caution by building systems that are not only efficient but also secure, transparent, and accountable.
As Bret Taylor of Sierra said in a recent TechCrunch interview, “When you put an AI in front of customers, the value is a lot higher obviously, but the risks are a lot higher too.” These risks include brand misrepresentation, technical errors like hallucination (where models generate false information), and regulatory challenges around privacy compliance—all issues that require careful planning and ongoing management.
Looking Ahead
The future of customer-facing AI is bright but fraught with challenges. Businesses must approach adoption thoughtfully by learning from past failures and implementing best practices that prioritise safety without stifling innovation.
As regulators worldwide tighten oversight on high-risk applications—from credit decisions to automated hiring—companies have an opportunity to lead by example in building trustworthy systems that enhance customer experiences responsibly.
AI represents a sea change in technology—a shift comparable to cloud computing or mobile apps—but its success depends on how well businesses navigate its complexities.
For those willing to invest in robust safeguards and transparent practices, the rewards could be transformative. For others who ignore these lessons?
The costs might be just as profound.