The Most Dangerous Thing About AI Is That It Has to Learn From Us
And we keep showing it our very worst selves.

We all know the half-joke about the AI apocalypse. The robots learn to think, and in their cold ones-and-zeros logic, they decide that humans, the horrific pests we are, need to be exterminated. It's the subject of countless sci-fi stories and blog posts about robots, but maybe the real danger isn't that AI comes to such a conclusion on its own; it's that it gets the idea from us.

Yesterday, Microsoft launched a fun little AI Twitter chatbot that was admittedly sort of gimmicky from the start. "A.I fam from the internet that's got zero chill," its Twitter bio reads. At launch, its knowledge was based on public data. As Microsoft's page for the product puts it:

Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that's been anonymized is Tay's primary data source. That data has been modeled, cleaned and...