• Ignotum
    15 months ago

    Tried the same prompt:

    I have a riddle for you. With a boat, how can a man and a goat get across the river?

    This riddle is reminiscent of classic river-crossing puzzles. The exact solution depends on the additional conditions or constraints that might be implied (like if there are other characters involved, such as a wolf or cabbage). Assuming it’s just the man and the goat, here’s a straightforward solution:

    1. The man takes the goat across the river.
    2. The man leaves the goat on the other side.
    3. The man returns alone to the original side to get the boat. So, in short, the man and the goat cross together in the boat, and then the man returns to the original side to get the boat. If there are additional details or constraints to consider, please let me know!

    The goal is to get both to the other side, not just the goat

    I see! If the goal is to get both the man and the goat to the other side of the river, here’s how it can be done:

    1. The man and the goat both get into the boat.
    2. The man rows the boat with the goat across the river.
    3. They both disembark on the other side of the river. In this way, both the man and the goat successfully reach the other side of the river together.

    Asking questions because you know the dataset is biased towards a particular solution isn’t showing a fault in the system, much like asking a human a trick question isn’t proving humans are stupid. If you want to test its logical reasoning, you should try questions it is unlikely to have ever seen before, where it needs to actually reason on its own to reach the answer.

    And I guess people with anterograde amnesia cannot be intelligent, are incapable of cognition, and are worthless, since they can’t form new memories

    • @LANIK2000
      5 months ago

      It’s not much of a trick question if it’s absolutely trivial. It’s cherry-picked to show that the AI tries to associate things based on what they look like, not based on the logic and meaning behind them. If you gave the same prompt to a human, they likely wouldn’t even think of the original riddle.

      Even in your example it starts off with absolute nonsense, and only after you correct it by spelling out the result does it finally manage, yet it still presents the answer in the format of the original riddle.

      Notice that in my example I intentionally avoid telling it what to do and instead just question the bullshit it made; rather than thinking “I did something wrong, let’s learn,” it just spits out more garbage with absolute confidence. It doesn’t reason. Try regenerating the last answer, but instead ask it why it sent the man back. Don’t do any of the work for it — treat it like a child you’re trying to teach something, not a machine you’re guiding towards the correct result.

      And yes, people with memory issues immediately suffer on the intelligence side; their lives are greatly impacted by it and it rarely ends well for them. And no, they are not worthless. I never said that they or AI are worthless, just that “machine learning” in its current state (as in how the technology works) doesn’t get us any closer to AGI. Just like a person with severe memory loss wouldn’t be able to do the kind of work we’d expect from an AGI.