This is the sort of thing machine learning algorithms are pretty good at.
Coupled with however many millions of interactions a day, you'd have no problem correlating changes in your algorithm with increases in revenue.
But. It’s often not that impressive. Humans are equally good at noticing patterns.
All it takes is one person at FB seeing their wife or daughter delete a post, asking "why did you delete that post?", and taking the answer "it made me look fat" to mean "there's a new targeted ad signal that'll get me a bonus".
In a similar vein, 80% of your bank's anti-fraud system isn't deep learning models that detect fraudulent behaviour. Instead it's "if the user is based in Russia, add 80 points, and if the account is at a branch within 10km of Heinersdorf, Berlin, add another 50… We're pretty sure a Russian scammer goes on holiday every 6 months and opens a bunch of accounts there, we just don't know which ones".
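That kind of rule-based scoring is trivially simple under the hood. Here's a minimal sketch of what such a system might look like; the field names, thresholds, and point values are invented for illustration, not taken from any real bank:

```python
# Hypothetical points-based fraud scoring, as described above.
# All fields and thresholds are made up for illustration.
def fraud_score(account: dict) -> int:
    score = 0
    if account.get("country") == "RU":
        score += 80
    # distance_to_suspect_branch_km: assumed to be precomputed elsewhere
    if account.get("distance_to_suspect_branch_km", float("inf")) <= 10:
        score += 50
    return score

# An account matching both hand-written rules accumulates both scores.
suspicious = fraud_score({"country": "RU", "distance_to_suspect_branch_km": 4})
clean = fraud_score({"country": "DE"})
```

Accounts above some threshold then get flagged for a human to review — no model training required, just institutional hunches written down as `if` statements.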
I'd bet on it being algorithmic from Facebook because leaning into algorithms is part of that company's culture. A bunch of manual tweaks require maintenance, though it wouldn't surprise me if someone was thinking about this when deciding that a deleted selfie should be a different signal to the algorithm than a deleted picture of a cat.
The most generous assumption is that they use statistics to determine correlations like this (e.g., deleted selfies resulted in a high CTR for beauty ads so they made that a part of their algo). The least generous interpretation is exactly what you’re thinking: an asshole came up with it because it’s logical and effective.
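The "generous" version above is just a segment comparison: measure click-through rate on beauty ads for users who recently deleted a selfie versus everyone else, and promote the behaviour to a ranking signal if the lift is positive. A rough sketch, with entirely invented event data:

```python
# Hypothetical A/B-style CTR comparison; the events are invented.
def ctr(events: list) -> float:
    """Click-through rate on beauty ads within a list of ad impressions."""
    shown = [e for e in events if e["ad"] == "beauty"]
    if not shown:
        return 0.0
    return sum(1 for e in shown if e["clicked"]) / len(shown)

# Impressions served to users who recently deleted a selfie...
deleted_selfie = [{"ad": "beauty", "clicked": True},
                  {"ad": "beauty", "clicked": True},
                  {"ad": "beauty", "clicked": False}]
# ...versus a control group of everyone else.
control = [{"ad": "beauty", "clicked": False},
           {"ad": "beauty", "clicked": True},
           {"ad": "beauty", "clicked": False},
           {"ad": "beauty", "clicked": False}]

lift = ctr(deleted_selfie) - ctr(control)  # positive lift -> candidate signal
```

Nothing in that loop knows *why* the correlation exists; it just surfaces it, which is exactly why the ethics question doesn't go away even in the generous interpretation.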
Either way, ethics needs to be a bigger part of a programmer's education. And we, as a society, need to make algorithms more transparent (at least social media algorithms). Reddit's trending algorithm used to be open source during the good ole days.
Who the fuck comes up with this stuff?
Can you make the algorithm open source that determined it was ok for you to murder Tuvix tho
if (ugly) { kill_child(child_name); } else { ( ͡° ͜ʖ ͡°) }
JANEWAY DID WHAT SHE HAD TO DO
People who traded morals for money.
The kind of person whose past probably includes more than a few vivisected animals.