• @[email protected]
    5 · 1 month ago

    Well, it does make sense in that the time during which we have AGI would be pretty short because AGI would soon go beyond human-level intelligence. With that said, LLMs are certainly not going to get there, assuming AGI is even possible at all.

    • @[email protected]
      3 · 1 month ago

      We’re never getting AGI from any current or planned LLM or ML framework.

      These LLMs and ML programs can exceed human intelligence, but only within narrow, limited domains.