Jake Moffatt was booking a flight to Toronto and asked the bot about the airline’s bereavement rates – reduced fares provided in the event someone needs to travel due to the death of an immediate family member.
Moffatt said he was told that these fares could be claimed retroactively by completing a refund application within 90 days of the date the ticket was issued, and submitted a screenshot of his conversation with the bot as evidence supporting this claim.
The airline refused the refund because it said its policy was that bereavement fare could not, in fact, be claimed retroactively.
Air Canada argued that it could not be held liable for information provided by the bot.
I wonder how anyone in their right mind would propose the defense “we can’t be held liable for what the chatbot we purposefully put on our website said”. Did Air Canada’s lawyers truly think this would fly?
If you don’t want to be held to AI hallucinations, don’t put an AI chatbot on your website, seems easy enough.
My organization won’t even allow auto-translation widgets on our site. Instead, we refer people to web translation services they can use on their own, with clear language saying we’re not liable for third-party mistranslations. (That disclaimer is itself provided in multiple languages, translated by a company that has signed an indemnity agreement with us in case their translation ever becomes an issue.)
It’s a bit heavy-handed, but the lawyers hold more sway than the communications folks, and I don’t disagree with the approach – you don’t want users misunderstanding what your site says and then blaming you for it.
Probably not, but they’re paid to try their best.
Lol… “Think this would fly” I see what you did there.