Over the years, researchers have developed algorithms to identify and flag people who show signs of self-harm or suicidal ideation. As AI systems grow more sophisticated, researchers hope they can help lower suicide mortality rates.
AI therapy has real problems. It is still not as personable or helpful as talking to a human being, and it can be biased. Moreover, the large tech companies behind chatbots like ChatGPT can amass sensitive personal information that could be hacked or sold. Still, there are opportunities to improve a mental-health system that is increasingly overloaded, and AI chatbots could support both doctors and patients.
Read more at The New Yorker.