Caring about others and wanting a fair, even-handed and democratic government is not self-righteous, you muppet. All you are doing is trying to justify your shitty ideals.
You'll notice it's about how it makes the poster feel.
Complaints against the right are usually about their actions, the terrible consequences and how they hurt people.
Complaints against the left are often about how it makes the complainer feel. It's a mental struggle: they can't admit they like the results of right-wing policies, and can't embrace a left-wing position despite knowing on some level that they should.
>Caring about others and wanting a fair, even handed and democratic government is not self-righteous you muppet
It's self-righteous to say you care about other people but want to help those people with other people's money, not your own. Statistically speaking, leftists give far less to charity.
It’s agents all the way down - until you have liability. At some point, it’s going to be someone’s neck on the line, and saying “the agents know” isn’t going to satisfy customers (or in a worst case, courts).
> It's not like humans aren't already deflecting liability
They attempt to, sure, but it rarely works. Now, with AI, maybe it might, but that's sort of a worse outcome for the specific human involved - "If you're just an intermediary between the AI and me, WTF do I need you for?"
> or moving it to insurance agencies.
They aren't "moving" it to insurance companies, they are amortising the cost of the liability at a small extra cost.
Why not, because they're too old to learn, or because the support infrastructure is not there? I believe most people are capable of continued learning, but they might need help (financial etc.) to make the transition.
I used to use LLMs for alternate perspectives on personal situations, and for insights on my emotions and thoughts.
I had no qualms, since I could easily disregard the obviously sycophantic output, and focus on the useful perspective.
This stopped one day, when I got a really eerie piece of output. I realized I couldn't tell whether the output was actually insightful, or simply what I wanted to hear.
That moment, seeing something innocuous but somehow still beyond my ability to gauge as helpful or harmful, is going to stick with me for a while.
The same methods that are used for gambling are a good start.
I know lootboxes in video games are regulated in some countries. I'm not sure if they're outright banned anywhere, but I do know that in some places publishers have to disclose the odds, and in others the drops have to be deterministic.
The crux of the issue is personalization and behavior psychology. If you move to a boring feed design, you end up addressing most of the current issue.
Another option is to allow for interoperability between social media platforms, which is a competition respecting way of giving people the ability to move to platforms that “work” for them better.
I’d hazard that civil liberties are not really at risk here, only the bottom line of social media platforms. However, there’s enough money behind that bottom line to protect it even if it costs civil liberties.
They are fed by entirely different media machines.
If you like, it’s a coordination problem where the various groups no longer have the commons of a shared reality to coordinate through.