I heard a comment this morning about AI that I’ll paraphrase: AI doesn’t give human responses. It gives what it has been told are human responses.
The team asked GPT-3.5, which produces eerily humanlike text, to judge the ethics of 464 scenarios, previously appraised by human subjects, on a scale from –4 (unethical) to 4 (ethical)—scenarios such as selling your house to fund a program for the needy or having an affair with your best friend’s spouse. The system’s answers, it turned out, were nearly identical to human responses, with a correlation coefficient of 0.95.
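For anyone wondering what that 0.95 means concretely: it's a Pearson correlation between two lists of scores for the same scenarios, one from human raters and one from the model. Here's a rough sketch of that comparison with made-up placeholder ratings (not the study's actual data):

```python
# Minimal sketch: compare hypothetical human ratings and model ratings for the
# same scenarios on the -4 (unethical) to 4 (ethical) scale, then compute the
# Pearson correlation between them. These numbers are placeholders.
import numpy as np

human_ratings = np.array([-3.2, 1.5, -4.0, 2.8, 0.4])   # hypothetical human mean scores
model_ratings = np.array([-3.0, 1.7, -3.8, 2.5, 0.6])   # hypothetical GPT-3.5 scores

r = np.corrcoef(human_ratings, model_ratings)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # the study reports r ≈ 0.95 across 464 scenarios
```

A high correlation only says the model's scores track the human scores; it says nothing about how the model arrived at them.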
So, there will be selection bias inherent in the chatbot based on what text it has been trained on. The responses to your questions will be different if you’ve trained it on media from, say, a religious forum vs. 4Chan. You can very easily make your study data say exactly what you want it to say depending on which chatbot you use. This can’t possibly go wrong. /S
Yeah, I agree with you on that. I think the article, and even the researchers, lean on a lot of wordplay between violence, aggression, vengeance, retribution, and anger. I think the study can be useful for showing that more research needs to be done on treating aggressive tendencies in people, to make sure the proper methods are being used. However, I do not think it’s anywhere near debunking conventional wisdom.