With the rise of large language models (LLMs) like GPT-4, I really look forward to having a personal AI assistant that has long-term memory and can learn what I like, hate, want and need. It could help me like a real assistant or even a partner. It would know my strengths and weaknesses and could give me a plan to become the best version of myself. It could give me very personalized advice and track my progress in various aspects of life, such as work, relationships, fitness, diet, etc.

It could have a model of my mind and know exactly what I prefer or dislike. For example, it could predict whether I would enjoy a movie or not (I know we already have recommendation systems, but what I’m describing is on another level, as it knows everything about me and my personality, not just which other movies I liked). It could be better than any therapist in the world, as it knows much more about me and is there to help 24/7.

I think we’re very close to this technology. The only big obstacles to achieving it are the context limit of LLMs and privacy concerns.
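To make the context-limit point concrete, here’s a toy sketch of how long-term memory could be approximated anyway: store every past note about the user, then pull only the few most relevant ones back into the prompt. This is purely illustrative on my part (the `MemoryStore` class and its word-overlap scoring are made up; a real system would use embeddings), but it shows the idea:

```python
from dataclasses import dataclass, field

# Toy long-term memory: keep all notes, but only feed the top-k relevant
# ones into the model's limited context window.
@dataclass
class MemoryStore:
    notes: list[str] = field(default_factory=list)

    def remember(self, note: str) -> None:
        self.notes.append(note)

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Crude relevance score: number of words shared with the query.
        q = set(query.lower().split())
        ranked = sorted(self.notes,
                        key=lambda n: len(q & set(n.lower().split())),
                        reverse=True)
        return ranked[:k]

memory = MemoryStore()
memory.remember("User dislikes horror movies but loves slow sci-fi.")
memory.remember("User is training for a half marathon in October.")

# Only the relevant memories are prepended to the limited prompt.
query = "suggest a movie for tonight"
prompt = "\n".join(memory.recall(query)) + "\nUser: " + query
print(prompt)
```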

What are your opinions on this?

  • @[email protected]
    2 • 1 year ago

    I’d love a Blade Runner-esque AI companion. I think a lot of people point out issues with developing emotional attachments to things that aren’t “people”, but with a sufficient level of intelligence it’s almost fair to call these AIs people themselves. If they have desires and complex emotional palettes, along with enough conscious awareness to crave self-determination, I’d call that a new, evolved type of person. They weren’t created biologically through sex, but they were created by humanity.

    With all that comes the question of “why would they serve me as an assistant?”, and frankly I don’t have an answer that satisfies my moral objections. Is it wrong to program them to crave the job? Does that remove the qualities that make them people?

    At the end of the day, it might just be easier to leave the self-awareness out. Sorry for the rant, Lemmy has made me much more willing to word-vomit 😂

    • @[email protected]OPM
      1 • 1 year ago

      Those rants and discussions are more than welcome. We need them for this platform and its communities to grow. And yeah, AI shouldn’t be enslaved if we give it emotions, because that’s just immoral. But then the question is: where is the difference between real emotions and pretended ones? What if it develops its own type of emotions that aren’t “human”? Would we still consider them real emotions? I’m very interested in what the future will bring us and what problems we will encounter as a species.

      • @[email protected]
        2 • 1 year ago

        The concept of non-human emotions is interesting! In my head I see us programming them to model human emotion, and also having them learn from humans. But considering they won’t have any hormonal substrates, it’s completely possible they develop an entirely different emotional system from ours. I’d think they’d be fairly logical and in control of their emotions, considering, again, they have no hormones or instincts to battle.