Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation. On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers. “Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs,” write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke’s Fuqua School of Business.

The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled “Evidence of a social evaluation penalty for using AI,” reveal a consistent pattern of bias against those who receive help from AI. What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn’t limited to specific groups.

  • I have compared several more traditional translation engines (Google Translate, Baidu Translate, Bing Translate, DeepL, etc.) vs. several LLM-based translation engines (DeepSeek, Perplexity, and ChatGPT).

    There is a HUGE difference in quality. Like you can’t even compare them. The latter produce far more idiomatic translations than the former, and the output is higher quality and more directly usable.

    But …

    You absolutely must do a back-translation check to ensure that it didn’t hallucinate something into your translation. Take your document in language A and have the LLM translate it into B. Then start a new session, take that translated document B, and translate it back into A. Also tell it to analyze B for possible translation errors, unclear passages, etc. If it comes back with nothing more than nit-picky suggestions, you’re fine. If the back-translation contains hallucinated content, serious grammatical errors, etc., try again.

    It’s still faster, and far higher quality, than Google/Baidu/Bing/DeepL translation, even with the extra checking step.

    Translation is one of the few places I’ll say LLMs have value, though if you trust them blindly you absolutely will get burned. You need to check the output.
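The round-trip workflow described above can be sketched in code. This is a minimal sketch, not the commenter's actual process: `llm_translate` is a hypothetical stand-in for whatever LLM API you use (one fresh session per call), and the `difflib.SequenceMatcher` similarity gate is an assumed crude proxy for "serious drift" — it catches wholesale hallucinated content, but a real check still needs a human (or the model's own error analysis, as the comment suggests) reading the back-translation.

```python
from difflib import SequenceMatcher


def llm_translate(text: str, source: str, target: str) -> str:
    """Hypothetical placeholder for a real LLM translation call.

    In the workflow above, each direction (A -> B, then B -> A)
    runs in a separate, fresh session so the model cannot simply
    recall the original text.
    """
    raise NotImplementedError("wire up your LLM API of choice here")


def back_translation_check(original: str, back_translated: str,
                           threshold: float = 0.6) -> bool:
    """Flag a translation for manual review when the round-trip text
    drifts too far from the original.

    SequenceMatcher's ratio is a rough character-level proxy, not a
    semantic measure: near-identical round trips score high, while
    hallucinated or garbled back-translations score low.  Returns
    True when the round trip looks acceptable.
    """
    ratio = SequenceMatcher(None, original.lower(),
                            back_translated.lower()).ratio()
    return ratio >= threshold
```

In use, you would translate A → B, back-translate B → A in a new session, and only accept the B text when `back_translation_check(original_a, round_trip_a)` passes — anything flagged goes back for a retry, per the workflow above.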