The 21 million includes everyone, not just registered voters. Until 2015, I couldn’t vote because I wasn’t a citizen. Still had to live with the shitty policies that Floridian politicians passed into law.
Quite a few folks have mentioned Outer Wilds, so I’ll add the DLC soundtrack. The titular song (Echoes of the Eye) that plays at the end of the DLC makes me burst into tears every time I hear it. But in a good way, haha.
It also said it would pay realistic premiums for certain product attributes, such as toothpaste with fluoride and deodorant without aluminum.
Most toothpastes in the US have fluoride - it’s the ones that don’t which likely cost more (ones with “natural” ingredients, ones with hydroxyapatite…).
The startup Synthetic Users has set up a service using OpenAI models in which clients—including Google, IBM, and Apple—can describe a type of person they want to survey, and ask them questions about their needs, desires, and feelings about a product, such as a new website or a wearable. The company’s system generates synthetic interviews that co-founder Kwame Ferreira says are “infinitely richer” and more useful than the “bland” feedback companies get when they survey real people.
It amuses me greatly to think that companies trying to sell shit to people will be fooled by “infinitely richer” feedback. Real people give “bland” feedback because they just don’t care that much about a product, but I guess people would rather live in a fantasy where their widget is the next best thing.
Overall, though, this horrifies me. Psychological research already has plenty of issues with replication and changing methodologies and/or metrics mid-study, and now they’re trying out “AI” participants? Even if it’s just used to create and test surveys that eventually go out to humans, it seems ripe for bias.
I’ll take an example close to home - studies on CFS/ME. A lot of people on the internet (including doctors) think CFS/ME is hypochondria, or malingering, or due to “false illness beliefs” - so how is an “AI” trained on the internet and tasked with thinking like a CFS/ME patient going to answer questions?
As patients, we know what to look for when it comes to insincere/leading questions. “Do you feel anxious before exercise?” - the answer may be yes, because we know we’ll crash, but a question like this usually means researchers think resistance to activity is an irrational anxiety response that should be overcome. An “AI” would simply answer yes with no qualms or concerns, because it literally can’t think or feel (or withdraw from a study entirely).
I’m so glad they recommended it for all ages. Get boosted and wear an N95 mask, folks! This surge isn’t looking good.