- cross-posted to:
- [email protected]
- [email protected]
Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”
“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”


I’ve been ejected from the system so many times it is not funny. The therapist’s approach seemed unproductive; he pressured me to end the treatment and recorded that I was unwilling.
The medication had serious side effects and I had to quit, so it was back to the start.
Another go at that later.
I was prescribed a CBT treatment administered as a home course with “guidance”. Given the serious problems I had, the tasks seemed shallow.
Possibly being kicked out of school, having already faced fraudulent misconduct charges, did not seem like a minor problem to recontextualize, nor was a formal charge of misconduct something I could live and let live with.
The therapist just wrote some platitudes and complimented me on my progress while I was describing how this by no means seemed like a suitable treatment, when an honest, objective assessment of the facts was enough to cause panic attacks.
CPTSD, well, I’ve never had it diagnosed, but it may apply. AVPD was already on my file for most of this, but clearly that doesn’t excuse me from always having to take the initiative. Even taking the initiative would be fine, but basically every time there was the most minor hitch in treatment, it was up to me to start again.
But you know, eventually I was allowed a subsidy for therapy I still couldn’t afford, so that was the end of that road, I suppose.
The lack of resources to actually tackle problems already produces shallow, inefficient, dangerously inappropriate treatments as it is.
But that doesn’t seem to garner that much criticism.