• kronisk
    8 months ago

    when WE hallucinate, it’s because our internal predictive models are flying off the rails, filling in the blanks based on assumptions rather than referencing concrete sensory information, and generating results that conflict with reality.

    Is it really? You make it sound like this is a proven fact.

    • Cosmic Cleric
      8 months ago

      Is it really? You make it sound like this is a proven fact.

      I believe that’s the direction the scientific community is moving in, based on watching this Kyle Hill video.

        • @Dasus
          8 months ago

          I know I’m responding to a bot, but… how does a PipedLinkBot mangle “Kyle Hill” into “Kyke Hill”? More AI hallucinations?

    • KillingTimeItself
      8 months ago

      i mean, idk about the assumptions part of it, but if you asked a psych or a philosopher, i’m sure they would agree.

      Or they would disagree and immediately have about 3 pages’ worth of thoughts to exclaim; otherwise they would feel uneasy about their statement.

    • @UmeU
      8 months ago

      Better than one of those pesky unproven facts