For example, if someone creates something new that is horrible for humans, how will AI understand that it is bad if it doesn’t have other horrible things to relate it to?

  • dedale
    2
    edit-2
    1 year ago

    I mean most of the time we act based on what we perceive to be socially acceptable, not by following an ethical law gained through our own experience.
    If you move people to a different social environment, they’ll adapt to fit unless actively discouraged.
    The social context is the AI prompt.
    We rarely decide, make choices, or reflect on anything; we regurgitate our training data based on our prompts.

    • Maeve
      1
      1 year ago

      Excellent, thank you! I’m wondering if something was lost in translation or my interpretation. When I think “context,” I consider something along the lines of: “Water is good.”

      Is it good for a person drowning? What if it’s contaminated? What about during a hurricane/typhoon? And so forth.

      • dedale
        1
        1 year ago

        Yeah, sorry about that; sometimes things that feel evident in my head are anything but when written.
        And translation adds a layer of possible confusion.
        I’d rather drown in clean water given a choice.

        • Maeve
          1
          1 year ago

          No worries, friend! I’m the same way; when questioned, upon rereading my post, even I wonder what on earth I was thinking when I wrote it!

          I hear you. Sadly, we’re often not given a choice, wrt water.