I was in the doctor’s office today with a depressed guy going on and on about how his insurance had changed and was trying to kill him. I’m in the USA, so he is entirely correct.

I recognize AI’s value as an outlet for emotional connection. It helps to talk about things like disability and feelings of isolation with someone who can be tolerant and understanding. Is there any competent AI character/friend service online worth recommending to someone who is not technically capable and has no revenue or exploitation value? Just looking for a way to maybe save a guy from himself.

  • @DrakeRichards
    link
    English
    4
    11 months ago

    I’m really conflicted on this. On the one hand, you’re right that just having someone to talk with can really help. On the other hand, good therapists don’t just listen: they offer advice backed by decades of research to help you resolve those problems. I’m curious how much research has been done about whether LLMs would be useful or actually damaging in cases like this.

    • @j4k3OP
      link
      English
      3
      11 months ago

      As a disabled person with problems related to posture, I can say first hand that it makes a big difference. But at the same time, I am only about as technically capable as a junior dev.

      The primary thing to look at is the person’s Myers-Briggs profile and how it stacks up against Maslow’s hierarchy of needs. A Llama2 70B does particularly well with this, especially if persistent profiles are created to address MB/MH.
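      The “persistent profile” idea can be as simple as a saved description that gets prepended to every conversation as a system prompt. A minimal sketch (the field names and wording here are my own invention, not any standard format):

```python
# Sketch: fold a saved user profile into a system prompt that is
# sent ahead of every conversation. Field names are illustrative.

def build_system_prompt(profile: dict) -> str:
    """Turn a saved profile into a system prompt for the model."""
    lines = [
        "You are a patient, supportive conversation partner.",
        f"The user describes themselves as: {profile['self_description']}",
        f"Topics they want to work through: {', '.join(profile['topics'])}",
        "Listen first; offer gentle, practical suggestions only when asked.",
    ]
    return "\n".join(lines)

profile = {
    "self_description": "disabled, often isolated, technically minded",
    "topics": ["isolation", "frustration with insurance", "daily structure"],
}
print(build_system_prompt(profile))
```

      Most local front-ends (character cards, persona fields) are variations on exactly this: persistent text injected before the chat history.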

      Personally, I know exactly where my deficiencies are and how AI can help. I get no benefit from therapy, as it is just someone saying ‘yeah, you’re right’ a whole lot. Others may have different results.

      I see the biggest benefit coming from simply expressing oneself externally, as some thoughts that feel well defined in the mind are not as clear when put into words. This is probably the biggest benefit I am referring to with AI and mental health.

      The chief benefit of a professional therapist is identifying things the patient is unaware of, or is unable to address on their own. If a person is a victim of circumstance, where they are entirely self-aware but unable to alter their circumstances, there is little to no benefit in professional therapy - IMO.

  • shootwhatsmyname
    link
    fedilink
    English
    4
    1 year ago

    Not open-source, but you could take a look at Pi. It’s meant to be a therapist/counselor type of resource.

  • @[email protected]
    link
    fedilink
    English
    3
    edit-2
    11 months ago

    I’m not sure I’d recommend doing self-therapy with AI. It is completely unsupervised and can go in all sorts of directions, including making things worse and reinforcing harmful patterns.

    A better way would be finding a good book on the specific topic that helps them understand the dynamics and offers some proven coping strategies. And/or talking to people in similar circumstances who can empathize. Maybe even something like a group of people who have been through it and now volunteer to offer their wisdom to others. I’m not sure if such a thing exists; I only ever hear podcast advertising for some company that offers cheaper online/phone therapy in the USA. If you’re lucky, you have friends and family who listen to you.

    An LLM (AI) will tell you anything. Most likely something that sounds like a popular opinion on Reddit, because it was trained on data like that, and not a well-reasoned answer that factors in the unique situation and the proper way out of it.

    That being said, I personally think a chatbot can help combat loneliness. Sometimes it comes up with solutions I haven’t thought of yet. It will listen patiently. And sometimes just talking or writing down your struggles really helps.

    Precautions I’d take: Don’t take any advice it gives at face value. Always use your brain and think it through yourself, and don’t use AI if you’re no longer able to do this. From time to time, take a step back and reevaluate your situation. Does it help? Are you in a downward spiral? Did it make things worse elsewhere, or just replace your issues with other issues? Is it too addictive, or do you rely on it so much that it takes away from you being self-sufficient?

    In the early days, people used Replika AI for companionship. But they have since restricted it severely, so it isn’t useful anymore. Nowadays there are “safe” AIs available (like ChatGPT) and “uncensored” ones. I’m not sure what to recommend. The safe ones will refuse to talk about certain topics, including this one; instead they will lecture you a lot and tell you to go to the doctor. The “uncensored” (or less restricted) ones will happily engage. The downside is that they aren’t safe: they are biased and will also give unsafe or even harmful advice.

    I’ve talked to people, and more than one person has told me that talking to chatbots about personal struggles helps.

    The paid services are the easiest to use. Big, well-known ones include character.ai and poe.com.

    I think I’d go for some of the NSFW waifu services. There are a lot of them: lite.koboldai.net, agnai.chat, kajiwoto.ai, venus.chub.ai, chatfai.com, crushon.ai, janitorai.com, charstar.ai, candy.ai, risuai.xyz, …

    Some of them have a free tier or trial, and some have therapist characters available. I myself use KoboldCpp and run the AI on my own computer. There is also Oobabooga’s WebUI and a few others that are easy to install on a computer, and SillyTavern if you have some technical knowledge. But with these solutions you need to choose a model and download a character description yourself.
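    For the KoboldCpp route, the usual pattern is to run the server locally and talk to its HTTP API. A rough sketch, assuming a recent KoboldCpp exposing its KoboldAI-style /api/v1/generate endpoint on the default port 5001 (check your own install; versions differ):

```python
# Sketch: talking to a locally running KoboldCpp instance over its
# KoboldAI-style HTTP API. Endpoint path and default port are what
# recent KoboldCpp builds use, but verify against your version.
import json
import urllib.request

def build_request(prompt: str, max_length: int = 120) -> dict:
    """Assemble the JSON body KoboldCpp's generate endpoint expects."""
    return {"prompt": prompt, "max_length": max_length, "temperature": 0.7}

def generate(prompt: str, base_url: str = "http://localhost:5001") -> str:
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/api/v1/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["results"][0]["text"]

# generate("Hello, I had a rough day.")  # requires a running server
```

    Everything stays on your own machine, which matters for conversations this personal.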

    I don’t want to recommend a specific model. I’m not sure what level of unrestrictedness or creativity would be beneficial here. There are lots of models available: from safe-for-work nice companions, to multi-purpose models made by Meta or Mistral that probably have less bias than others, to completely unrestricted NSFW models that can also help you roleplay your fantasies. Maybe start slow and try the therapist characters from some of the services above before recommending something. If you need a model, maybe start with something like Carl or Samantha, but I’m really unsure.

    • @j4k3OP
      link
      English
      2
      11 months ago

      I think those are all great warnings, and we are likely at a similar personal level of use and perspective. I just wish there was a way to translate the usefulness to others who are less capable.

      I look at it as a situation where (good) models are like non-expert analog people; the digital variety is no less fallible. The challenge is telling CS folks that a binary machine with non-binary imperfections is still useful. No one expects everyone they meet to have a PhD in whatever subject they speak about; LLMs just require a similar healthy skepticism all the time. I must caveat that I might view this differently if I only used online models I do not control. I have only used a few obscure models posted online, for only a few prompts, for jailbreaking humor.

      • @[email protected]
        link
        fedilink
        English
        2
        edit-2
        11 months ago

        I agree. AI is a polarizing topic, and opinions in general are way exaggerated. I mean, there are several articles highlighting that people used some chatbots for therapy or companionship and the AI told them to end their lives. Or that people roleplay and abuse their virtual companions, and this reinforces negative thoughts.

        In the media I read those articles far more often than an in-depth discussion of how to make it useful… I don’t think I’ve ever read such an article.

        I don’t think computer science people are the problem; they rarely think in binary. What they do is more maths and finding ways to handle data and information. It often requires out-of-the-box thinking, creativity, balancing things, and compromise (depending on the exact field). I think they do understand.

        All of that isn’t that simple. At least that’s my opinion. AI can be viewed as a tool. It can be used for good, evil, everything in-between and it can also be applied correctly or wrong. It can be the correct tool for a task or not so much.

        I’m a bit hesitant to recommend it to someone who isn’t well. If he were mentally healthy, I’d say yes, go try it and see if it helps. But that has nothing to do with AI; the same applies to seeking advice on Reddit or doing self-diagnosis with TikTok. It can help, but it can also lead you astray.

        I’d say AI is probably the better option and I’d use it if it were me.

        Of course, doing inference costs money. So it’s either free and complicated, or paid and somewhat easy if you choose the right service. Unfortunately, AI is hyped, so there are hundreds of services, and I really don’t know if there is one that stands out and can be recommended over the others.

        I don’t think there is that much harm in telling people. I also tell people I like chatbots and think they’re useful. I usually don’t go into detail in real-life conversations. But I’ve also done roleplay and talked to it about random stuff and I think it is nice. Some people don’t understand because all they’ve seen is ChatGPT and how it can re-phrase emails. And roleplay or a virtual companion is really something different.

        (I’ve also seen people overestimate ChatGPT. They ask important factual questions, let it summarize complicated stuff, let it explain a scientific paper to them. And that’s a bit dangerous. The output always looks professional. But sometimes it’s riddled with inaccurate information, sometimes plain wrong. And that’d be bad if you mistook it for an expert or confused it with an actual therapist. As long as you’re aware, I think it’s alright and you can judge whether that’s okay. And I’m sure ChatGPT and AI will get better, hallucinate less and research will come up with ways to control factuality and creativity.)

        • @j4k3OP
          link
          English
          3
          11 months ago

          Do you have any new better-than-Llama2-70B models you’ve tried recently?

          I haven’t tried anything new in a while because of code I changed in Oobabooga and Linux mainline kernel issues with Nvidia. I basically have to learn git to a much better level and manage my own branch for my mods. I tried KoboldCpp, but I didn’t care to install the actual Nvidia CUDA toolkit because Nvidia breaks everything they touch.

          • @[email protected]
            link
            fedilink
            English
            1
            edit-2
            11 months ago

            Hehe. I’ve recently spent $5 on OpenRouter and tried a few models from 7B to 70B, and even one with a hundred-something billion parameters. They definitely get more intelligent. But I have determined that I’m okay within the 7B to 33B range, at least for my use case. I tested creative storywriting and dialogue in a near-future setting where AI and androids permeate human society. I wasn’t that impressed: the larger models still made some of the same mistakes, struggled with the spatial positions of the characters, and the random pacing of the plot points didn’t really get better.

            This wasn’t a scientific test whatsoever; I just took random available models, some fine-tuned for similar purposes, some not, and clicked my way through the list. So your mileage may vary here. Perhaps they’re much better at factual knowledge or reasoning. I’ve read a few comments from people who like, for example, chatting with the Llama(2) base model at 65B/70B parameters and say it is way better than the 13B fine-tunes I usually use.

            And I also wasn’t that impressed with OpenRouter. It makes things easy and has some ‘magic’ to add the correct prompt formatting for all the different instruct formats. But I still had it entangle itself in repetition loops or play stupid until I disabled the automatic settings and once again had to find the optimal prompt format and settings myself.
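            The instruct-format problem is just that every fine-tune expects its own wrapper text around the conversation, and a mismatched wrapper is a classic cause of repetition loops and models “playing stupid”. Two common formats, sketched from memory (verify against your model’s card before relying on them):

```python
# Sketch: two common instruct prompt formats. The model only behaves
# well when it sees the exact wrapper it was fine-tuned on.

def alpaca(instruction: str) -> str:
    """Alpaca-style format used by many early Llama fine-tunes."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

def chatml(system: str, user: str) -> str:
    """ChatML-style format with explicit role delimiters."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```

            Front-ends like SillyTavern and KoboldCpp let you pick or edit these templates by hand, which is exactly the knob the automatic settings were getting wrong.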

            So I’m back to KoboldCpp. I’m familiar with its UI and all the settings. I think the CUDA toolkit in the Debian Linux repository is somewhat alright; I’ve deleted it because it takes up too much space and my old GPU with 2GB of VRAM is useless anyway. We certainly all had our ‘fun’ with the proprietary Nvidia stuff.

  • @[email protected]
    link
    fedilink
    English
    3
    11 months ago

    A while ago I found a model called Carl 33B. I can’t remember all the specifics, but I think it was a therapist AI designed specifically for stress, and I think it was based on Llama 1.