Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation. On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers. “Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs,” write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke’s Fuqua School of Business.
The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled “Evidence of a social evaluation penalty for using AI,” reveal a consistent pattern of bias against those who receive help from AI. What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn’t limited to specific groups.