The interesting part isn't whether Meta's AI is right or wrong; it's that when N models say "great idea" and one pushes back, the one pushing back feels like the broken one.
I think other AI agents have been trained to talk like somebody with a high IQ who remembers a lot of context; Meta's AI has been trained to talk like somebody with a moderate-to-low IQ and a sense of humor.
I can get Gemini or ChatGPT, on the other hand, to use words like "ego-syntonic," talk about folk religion in China, discuss the kind of mind-body work you'd use in character acting, etc.
Also, if foxwork is a delusion, it has a large element based in reality. It started out as "I felt a presence," and when I needed to explain it I developed a cover story that became real.
Ironic that this article sounds like it was written by AI.
> The desperation is gone. In its place: epistemic certainty.
> And it told her, in prose that sounded like medicine and felt like prophecy, that she was right about everything.
> In Greek, medical terminology isn't borrowed, it's part of the native language.
> it doesn't land like pseudoscience. It lands like something Galen might have written.
> Strip out the hedging, and you have something that sounds like medicine but functions like prophecy.
> It wasn't her framing. It was the LLM's. She brought real pain, real conditions, real institutional failures. The LLM gift-wrapped them in the one genre guaranteed to get them dismissed.