Courtesy of Reddit user /u/TheBlueRefinery29

Interesting experimental logical AI has promising implications for AI Safety.

The developers claim to have created a language that lets software and AI reason over their own future versions.

Original post: https://x.com/TauLogicAI/status/1841813606154793354

Abstract summarizing their process and the language tech: https://tau.net/Logical-AI-Software-Specification-Reasoning-GSSOTC.pdf

Full paper: https://tau.net/Theories-and-Applications-of-Boolean-Algebras-0.25.pdf

The full paper is super long and goes over my head; the abstract is much easier to digest.

  • @just_another_person
    link
    English
    11 months ago

    It’s a nice thought, but none of the major players trying to make money and create companies on this will sign on.

    A good frame of reference for where we are right now in the “AI” battle, from the corporate standpoint, is where graphics companies were when OpenGL was being pushed. Nobody wanted to use it, and the hardware makers were all pushing their own hardware abstraction interfaces (remember 3dfx?).

    First, there needs to be a MAJOR contraction and failure in the “AI” industry. Then, there needs to be a bunch of consolidation in the FOSS community to rally around a few different projects that make integration better/easier. THEN there needs to be a major reason not to work with the companies that are still alive.

    This is a cyclical process that comes around about every decade for whatever new thing people are hyped about, and plays out the exact same way every time, like reading a book. 10 years ago it was all the non-AI “assistant” bullshit in homes and on mobile, and now it’s this stupid thing. It will play out again the exact same way.