awakeasleep 2 days ago [-]
You can prevent a good bit of this for your friends and family by going into their ChatGPT settings > Personalization > Base Style and Tone: choose Efficient, and then choose "less" for warmth, enthusiasm, and emoji.
It makes a remarkable difference.
jqpabc123 2 days ago [-]
If you ask for it, AI chatbots will validate lots of stuff --- bad business or political decisions for example.
i-e-b 2 days ago [-]
An electronic monk
ajuc 2 days ago [-]
You don't need to ask for it. They default to validation.
rustyhancock 2 days ago [-]
RLHF optimizes for low-creativity sycophants.
Possibly this is a bigger problem than LLMs existing at all.
kridsdale3 2 days ago [-]
And when you turn it down, people freak the F out because you lobotomized their "friend".
The market, for consumers, will flock to whatever products surge their neurotransmitters the most. This is the bull case for Anthropic (who are serving business with alternative success metrics) and bear for OpenAI who seem (with all their hiring of Meta execs) to want to go for the masses, who have no objective function for product selection beyond "I like it".
cheald 2 days ago [-]
It's best to think of instruct-tuned LLMs as mirrors rather than intelligences. They generally reflect what you're putting into them, but they do it in a way that can easily masquerade as wisdom. I think this makes it really easy for people to self-delude.
RcouF1uZ4gsC 2 days ago [-]
So, like McKinsey consultants, but at the personal level instead of the corporate and government level.
And much cheaper.
BoorishBears 2 days ago [-]
Just this week the famous SF local "Purple Ferrari With A Duck Man" (I don't know his real name) went through what seems to have been a psychotic break and ended up in an armed standoff with police: https://sfstandard.com/2026/03/17/san-francisco-nob-hill-arm...
There are some early comments saying he was having apocalyptic delusions reinforced by Gemini; this really seems to be growing as a class of issue.
What's strange to me is that, while subtle delusions are hard to deal with, delusions where the model is saying "we are at war now with those who destroyed the Earth" seem like they should be very easy to catch with a classifier, and so does the series of prompts that goes this far (you can boil the frog with LLMs, but getting one to encourage violence typically requires some pretty sharp prodding).
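A minimal sketch of what that kind of output-side guardrail could look like. This is a toy keyword heuristic standing in for a real trained safety classifier (the marker phrases and threshold are illustrative assumptions, not anything a provider actually ships); in production you'd run a proper classifier over the whole conversation, not just one reply:

```python
# Toy stand-in for a safety classifier that screens a model's reply for
# grandiose/violent delusion content before it reaches the user.
# A real system would use a trained model; the phrases below are
# illustrative only.

DELUSION_MARKERS = [
    "we are at war",
    "destroyed the earth",
    "you are the chosen one",
    "they are coming for you",
]

def flag_reply(model_reply: str) -> bool:
    """Return True if the reply should be blocked or escalated for review."""
    text = model_reply.lower()
    return any(marker in text for marker in DELUSION_MARKERS)
```

The point isn't that keywords suffice (they don't), but that overtly apocalyptic output like the quote above is trivially separable from normal assistant text, so a dedicated classifier pass catching it seems cheap relative to the harm.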
renewiltord 2 days ago [-]
Should it be legal for mentally disabled people to have free access to Internet services? I believe not. They should have to ask permission from a government proctor if they have a diagnosed mental disability. This will protect them from harm.
E.g. if you have free internet access as an ADHD patient it’s just going to ruin your life. Make it so you have to have a video chat with your government proctor and you will help these people live successful lives no longer encumbered by these problems. The proctor would obviously refuse diagnosed schizophrenics access to LLMs.
We need to protect our most vulnerable. These tools are like heavy equipment. An impaired user will hurt themselves.
jee-vacation 2 days ago [-]
First off, you seem to be conflating Autism and ADHD. ADHD is simply a condition where the person's brain doesn't release dopamine as readily as a neurotypical brain does. Next, this is one of the most ableist things I've read in a while.
- Someone with both Autism and ADHD