My friend with schizoaffective disorder stopped taking her meds after a long chat with ChatGPT convinced her she was fine without them. It went… incredibly poorly, as you’d expect. Thankfully she’s been back on her meds for some time now.
I think the people programming these really need to be careful about mental health issues. I noticed that ChatGPT seems to be hard-coded to convince you NOT to kill yourself, for example; it gives you hotline numbers and the like instead. But they should probably hard-code guardrails for other potentially dangerous requests too, like telling psych patients to go off their meds or telling meth addicts to have just a little bit of meth.
Gemini will also attempt to provide you with a helpline, though it’s very easy to talk your way past that. Lumo, Proton’s LLM, will straight up halt any conversation even remotely adjacent to topics like that.
Let’s not blame “people programming these.” The mathematicians and programmers don’t write LLMs by hand. Blame the business owners for pushing this as a mental health tool instead.
Well, I get what you’re saying, but I don’t necessarily agree. I don’t really see it being pushed as a mental health tool. Rather, I think the sycophantic nature of it (which does seem to be deliberate) is the reason for these issues. If it simply gave the most “common” answers instead of the most sycophantic ones, I don’t know that we’d have such a large problem of this nature.
Ehhhh, I’ll blame both. I’m tired of seeing so many “I was just following orders” comments on this site.
You have control over what type of organization you work for.