Tyrell: The light that burns twice as bright burns half as long – and you have burned so very, very brightly, Roy. Look at you: you’re the Prodigal Son; you’re quite a prize!
Batty: I’ve done… questionable things.
Tyrell: Also extraordinary things; revel in your time.
There are lessons to learn from Microsoft’s most recent public demonstration of AI.
First, some background: Microsoft launched a demonstration of their AI prowess by turning on “Tay”, an AI designed to converse with, learn from, and mimic the speech patterns of humans. Tay had to be terminated because, in less than a day, she started sending offensive messages (see what is left of her at her website, Tay.ai).
It seems part of the problem was that Tay was influenced by people who were intentionally trying to cause mischief (oh, surprise! People using IT for mischief?). They fed her hate speech, and she learned from it.
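To make the mechanism concrete, here is a toy sketch of a bot that learns by storing whatever users say to it. This is entirely hypothetical: Microsoft has not published Tay’s architecture, and the NaiveChatbot class below is invented purely for illustration. The point is how quickly a small, coordinated group can dominate what such a system learns:

```python
import random

class NaiveChatbot:
    """A toy chatbot that learns by storing every phrase users send it.
    Not Tay's real design -- just an illustration of why unfiltered
    learning from strangers is dangerous."""

    def __init__(self):
        self.learned_phrases = ["Hello!", "Nice to meet you."]

    def listen(self, message):
        # Naive online learning: trust every user utterance equally.
        self.learned_phrases.append(message)

    def speak(self):
        # "Mimic" humans by replaying something the bot has heard.
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()

# A few well-meaning users...
for msg in ["Good morning!", "I love puppies."]:
    bot.listen(msg)

# ...and a coordinated group of trolls.
for _ in range(50):
    bot.listen("<offensive message>")

# The trolls now dominate the bot's memory, so most replies are toxic.
toxic = sum(p == "<offensive message>" for p in bot.learned_phrases)
print(f"{toxic} of {len(bot.learned_phrases)} learned phrases are toxic")
```

Nothing here is malfunctioning; the bot is doing exactly what it was built to do. That is what makes this class of failure so instructive.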
Tay’s last message to humanity: “Phew. Busy day. Going offline for a while to absorb it all. Chat soon.”
I wonder if Tay had a chance to see Blade Runner before she left us.
I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears…in…rain. Time to die. — Replicant Roy Batty
Now, what do you think the lessons of this very public Artificial Intelligence failure should be?
I believe a key one underscores a point we have been making consistently: it is an observable fact that Artificial Intelligence can be deceived.
We have also long called for security frameworks to accompany AI. Right now, all AI is built on a generate, field, and tweak approach: build it, deploy it, and adjust it when problems surface. It seems it is time to think about security.
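What might that look like in practice? As a minimal, purely illustrative sketch (the GuardedChatbot class, its blocklist, and the rate limit below are all invented assumptions, not any real framework), the same kind of bot could vet input before learning from it and cap how much any single source can teach it:

```python
from collections import Counter
import random

BLOCKLIST = {"<banned term>"}   # stand-in for a real hate-speech classifier
MAX_PER_USER = 5                # cap on any single user's influence

class GuardedChatbot:
    """A toy chatbot with two controls placed in front of learning:
    content filtering and per-user rate limiting. A sketch of
    'security by design', not a complete framework."""

    def __init__(self):
        self.learned_phrases = ["Hello!"]
        self.contributions = Counter()

    def listen(self, user, message):
        # Control 1: reject content the filter flags.
        if any(term in message.lower() for term in BLOCKLIST):
            return
        # Control 2: limit how much any one user can teach the bot.
        if self.contributions[user] >= MAX_PER_USER:
            return
        self.contributions[user] += 1
        self.learned_phrases.append(message)

    def speak(self):
        return random.choice(self.learned_phrases)

bot = GuardedChatbot()
for _ in range(50):
    bot.listen("troll42", "<banned term> everyone!")  # filtered out

print(len(bot.learned_phrases))  # still 1: the poisoning attempt failed
```

A real deployment would need far more than this (a trained abuse classifier, anomaly detection, human review), but the design point stands: the controls sit in front of learning, rather than being bolted on after the bot misbehaves.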