I've been enjoying Alex O'Connor's ChatGPT explanation videos and ChatGPT-related experiments.

Alex O'Connor makes content about philosophy and religion, but in addition to this video I particularly enjoyed one where he gaslights ChatGPT with moral dilemmas.

In this video he explains why it is so hard to get ChatGPT to generate an image of a completely full glass of wine. Short answer: most images of wine you find show glasses that are empty or only partially full, because who fills their wine to the brim?

  • billwashere
    32 days ago

And this is a prime example of why these trained models will never be AGIs. A model only knows what it's been trained on and can't make inferences or extrapolations. It's not really generating an image so much as very quickly photoshopping and merging images it already knows about.

    • TheoOP
      12 days ago

It's just patterns of pixels. It recognizes an apple as a bunch of reddish pixels, etc. Then, when given an image of a similarly colored red ball or something, it is corrected until it ceases to recognize something that isn't an apple as an apple. It never knew what an apple looks like to begin with. It's like declaring a variable: the computer doesn't know what the variable really means, just what to equate it to.
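The variable analogy in that comment can be sketched in a few lines. This is a toy illustration, not how an actual image model works: the pixel values, the `looks_like` function, and the tolerance numbers are all made up for the example.

```python
# A "label" is just a name bound to data, like a variable: the program
# attaches no meaning to the word "apple", only to the pixel values.
apple = [(203, 40, 35), (190, 52, 41), (210, 48, 30)]     # reddish pixels
red_ball = [(205, 42, 33), (198, 50, 44), (208, 45, 32)]  # also reddish

def looks_like(a, b, tolerance=15):
    """Naive 'recognition': two pixel lists match if every channel is close."""
    return all(
        abs(x - y) <= tolerance
        for pa, pb in zip(a, b)
        for x, y in zip(pa, pb)
    )

# Going only by color patterns, the red ball "is" an apple...
print(looks_like(apple, red_ball))               # True
# ...until a correction tightens the match criterion, after which the
# very same pixels stop counting as an apple.
print(looks_like(apple, red_ball, tolerance=5))  # False
```

Nothing in this loop ever touches the concept "apple"; the name could be swapped for `x` and the program would behave identically, which is the commenter's point.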