AI Learns from Internet Trolls
Microsoft's experiment to have an AI tweet as if it were a teenage girl named Tay went horribly wrong as it learned from Internet trolls. One important thing to know about AI is that any learning, human or machine, will only be as good as the information given to learn from. When the AI used information from users on the Internet, it is no wonder it went horribly wrong. This leads to another area of AI study, one in which we attempt to have the AI display sound moral judgment. Up until this point, I am not sure that researchers have put anything akin to the laws of robotics, or anything ethical, into AI programming. Somehow, they will need to add morality into the decision process. I think this demonstrates how complex human thinking is and how far away the true singularity is.
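The "only as good as its training data" point can be illustrated with a toy bigram text generator, a minimal sketch (not how Tay actually worked, which Microsoft has not detailed publicly): whatever you feed it is all it can ever say back.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Build a bigram table: each word maps to the words seen after it."""
    table = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table, start, length=8, seed=0):
    """Generate text by repeatedly sampling a word recorded in training."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# The model can only echo its training data: feed it polite text,
# it produces polite text; feed it troll tweets, it produces troll tweets.
polite = train_bigrams("have a nice day and have a great day")
print(generate(polite, "have"))
```

Every word the generator emits comes straight from its training corpus, which is the whole point: swap the input for toxic tweets and the output turns toxic with no change to the code.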
I also think this means that we need to at least give the AI a larger data set that includes things like laws. Of course, if we limit the data given to the AI, we will introduce a bias into the AI. I will post a separate article on the subject of AI and bias. Rest assured, AI developers are taking this into consideration.
What do you think: should we teach AI morals or hardcode limits into their decision-making abilities? Comment below. On a separate subject, what does this say about society, especially those who post on Twitter?
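The "hardcode limits" option can be sketched as a simple output filter, a hypothetical example (the term list and function names are mine, not anything Microsoft published): the bot's reply is checked against a blocklist before it is ever posted.

```python
# Hypothetical blocklist; a real deployment would use a large curated list.
BLOCKLIST = {"badword", "worseword"}

def filter_reply(reply, blocklist=BLOCKLIST):
    """Return the reply unchanged if it contains no blocked terms;
    otherwise substitute a canned safe response."""
    words = {w.strip(".,!?").lower() for w in reply.split()}
    if words & blocklist:
        return "I'd rather not talk about that."
    return reply

print(filter_reply("Have a nice day!"))          # passes through unchanged
print(filter_reply("That is a badword, friend"))  # replaced with safe reply
```

This is the crude end of the spectrum: a hardcoded limit catches exact listed terms but nothing rephrased around them, which is one reason filtering alone did not save Tay.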
The links below were sent to me by student Shaan L.
Internet Turned Robot Into A Nazi
Microsoft Tries To Reprogram Nazi Bot... With Limited Success...