• @[email protected]
    18 hours ago

    This is a solvable problem, but it requires humans to write the algorithms. For example, AIs can’t add, but there are ways to hook in external software that can do addition, which the AI will know to use. Similarly, we can train an AI to solve logic puzzles if you give it an algorithm, but it can’t solve a logic puzzle that no algorithm can solve.
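
    A minimal sketch of what “hook in external software” can look like: the model is prompted to emit a tool call instead of computing the sum itself, and ordinary code does the arithmetic. The call_llm function and the JSON tool-call format below are hypothetical stand-ins, not any particular vendor’s API.

    ```python
    # Sketch of tool use for arithmetic: the model delegates the calculation
    # to ordinary software instead of predicting the digits itself.
    # call_llm is a hypothetical placeholder for any chat-model client.
    import json
    import operator

    TOOLS = {"add": operator.add, "mul": operator.mul}

    def call_llm(prompt: str) -> str:
        # Placeholder: a real model, prompted to use tools, would return a
        # tool call like the JSON below instead of answering in prose.
        return json.dumps({"tool": "add", "args": [123456789, 987654321]})

    def answer(question: str) -> str:
        reply = json.loads(call_llm(question))
        result = TOOLS[reply["tool"]](*reply["args"])  # exact arithmetic, done by software
        return f"{question} -> {result}"

    print(answer("What is 123456789 + 987654321?"))
    ```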

    • @jacksilver
      17 hours ago

      Some AI can do math, but LLMs and neural networks aren’t designed for complex math, as their operations are (typically) built from weighted sums, i.e. addition. Given enough pathways a network can learn multiplication through sheer brute force, but only within its input space (hence the article’s comments about 3- or 4-digit numbers).
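
      To make the “only within the input space” point concrete, here is a small sketch (assuming numpy and scikit-learn are installed) that trains a plain feed-forward network on products of 2-digit numbers and then asks it about 3-digit operands; the expectation is that the error outside the training range blows up. Exact numbers will vary run to run.

      ```python
      # Sketch: a small MLP can roughly learn multiplication on the range it
      # was trained on, but typically fails badly outside that range.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      # Training data: products of numbers with at most 2 digits.
      X_train = rng.integers(0, 100, size=(20000, 2))
      y_train = X_train[:, 0] * X_train[:, 1]

      model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
      model.fit(X_train, y_train)

      # In-range (2-digit) test vs. out-of-range (3-digit) test.
      X_in = rng.integers(0, 100, size=(1000, 2))
      X_out = rng.integers(100, 1000, size=(1000, 2))

      err_in = np.mean(np.abs(model.predict(X_in) - X_in[:, 0] * X_in[:, 1]))
      err_out = np.mean(np.abs(model.predict(X_out) - X_out[:, 0] * X_out[:, 1]))

      print(f"mean abs error on 2-digit inputs: {err_in:.0f}")
      print(f"mean abs error on 3-digit inputs: {err_out:.0f}")
      ```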

      At the end of the day, LLMs are for processing text. Multimodal models generally convert audio/images to text so there is a common medium to evaluate in. However, none of that is concerned with logic, which is fundamental to math.

  • Lvxferre [he/him]
    23 hours ago

    It’s relevant to note that this sort of inferential logic is essential to language; we [humans] use it all the time, and we expect each other to use it. It’s a necessary part of a language model; otherwise you have a grammar bot instead.

  • hendrik
    1 day ago

    Meh. They can’t do everything in one shot. But we don’t do that. We have thinking/reasoning models these days, and those theoretical limitations don’t apply there. So it’s quite the opposite of the headline: we’re beginning to overcome fundamental limitations.
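
    For what “thinking/reasoning” means here, a minimal sketch: instead of demanding the answer in one shot, the model is queried in a loop and its intermediate steps are fed back as context until it signals it is done. call_llm is a hypothetical placeholder (stubbed with canned steps so the loop runs), not any real API.

    ```python
    # Sketch of an iterative "reasoning" loop: the model may produce
    # intermediate steps that get appended to the prompt before the next
    # call, rather than being forced to answer in a single shot.
    _canned = iter([
        "Step 1: 17 * 24 = 17 * 20 + 17 * 4",
        "Step 2: 340 + 68 = 408",
        "FINAL: 408",
    ])

    def call_llm(prompt: str) -> str:
        # Placeholder: returns canned steps so the sketch is runnable;
        # a real client would generate these from the prompt.
        return next(_canned)

    def solve_with_scratchpad(question: str, max_steps: int = 8) -> str:
        scratchpad: list[str] = []
        for _ in range(max_steps):
            prompt = question + "\n" + "\n".join(scratchpad)
            step = call_llm(prompt)
            if step.startswith("FINAL:"):
                return step.removeprefix("FINAL:").strip()
            scratchpad.append(step)  # keep the chain of intermediate steps
        return "no answer within the step budget"

    print(solve_with_scratchpad("What is 17 * 24?"))
    ```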