As far as I understand this, they seem to think that AI models trained on data from affluent Westerners with unknown biases can be told to “act like [demographic] and answer these questions.”

It sounds completely bonkers not only from a moral perspective, but scientifically and statistically as well. This is basically just making up data and hoping everyone is too impressed by how complicated the data faking is to care.

  • eladnarra@beehaw.org
    1 year ago

    It also said it would pay realistic premiums for certain product attributes, such as toothpaste with fluoride and deodorant without aluminum.

    Most toothpastes in the US have fluoride - it’s the ones that don’t which likely cost more (ones with “natural” ingredients, ones with hydroxyapatite…).

    The startup Synthetic Users has set up a service using OpenAI models in which clients—including Google, IBM, and Apple—can describe a type of person they want to survey, and ask them questions about their needs, desires, and feelings about a product, such as a new website or a wearable. The company’s system generates synthetic interviews that co-founder Kwame Ferreira says are “infinitely richer” and more useful than the “bland” feedback companies get when they survey real people.

    It amuses me greatly to think that companies trying to sell shit to people will be fooled by “infinitely richer” feedback. Real people give “bland” feedback because they just don’t care that much about a product, but I guess people would rather live in a fantasy where their widget is the next big thing.

    Overall, though, this horrifies me. Psychological research already has plenty of issues with replication and with changing methodologies and/or metrics mid-study, and now they’re trying out “AI” participants? Even if it’s just used to create and test surveys that eventually go out to humans, it seems ripe for bias.

    I’ll take an example close to home: studies on CFS/ME. A lot of people on the internet (including doctors) think CFS/ME is hypochondria, or malingering, or due to “false illness beliefs” — so how is an “AI” trained on the internet and tasked with thinking like a CFS/ME patient going to answer questions?

    As patients, we know what to look for when it comes to insincere/leading questions. “Do you feel anxious before exercise?” — the answer may be yes, because we know we’ll crash, but a question like this usually means the researchers think resistance to activity is an irrational anxiety response that should be overcome. An “AI” would simply answer yes with no qualms or concerns, because it literally can’t think or feel (or withdraw from a study entirely).