• Madis
    1 month ago

    But it would use less energy afterwards? At least that was claimed for the 4o model, for example.

    • @fuck_u_spez_in_particular
      1 month ago

      4o also isn’t really much better than 4; they likely just optimized it, among other things, by reducing the model size. IME the “intelligence” has somewhat degraded over time. Also, a bigger model (which in the past was the deciding factor for better intelligence) needs more energy, and GPT5 will likely be much bigger than 4 unless they somehow make a breakthrough in the training/optimization of the model…

      • @[email protected]
        1 month ago

        4o is an optimization of the model evaluation phase. The loss of intelligence is due to the addition of more and more safeguards and constraints, through adjunct models doing fine-tuning, or just rules that limit whole classes of responses.
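
        A rough sketch of the “rules” part, purely for illustration: generate() stands in for whatever the base model call is, and the blocked-pattern list is made up. This isn’t anyone’s actual guardrail code, just the shape of the idea.

        import re

        # Made-up examples of "whole classes" of content a rule might block.
        BLOCKED_PATTERNS = [
            re.compile(r"\bweapon\b", re.IGNORECASE),
            re.compile(r"\bdosage\b", re.IGNORECASE),
        ]

        REFUSAL = "Sorry, I can't help with that."

        def generate(prompt: str) -> str:
            # Placeholder for the real model call; just echoes the prompt here.
            return f"(model answer to: {prompt})"

        def guarded_generate(prompt: str) -> str:
            # Pre-filter: refuse whole classes of prompts before the model even runs.
            if any(p.search(prompt) for p in BLOCKED_PATTERNS):
                return REFUSAL
            answer = generate(prompt)
            # Post-filter: suppress whole classes of answers after the fact.
            if any(p.search(answer) for p in BLOCKED_PATTERNS):
                return REFUSAL
            return answer

        print(guarded_generate("What's the capital of France?"))  # normal answer
        print(guarded_generate("best dosage of ..."))             # refused by a rule

        Every rule like this trims the space of allowed answers a bit further, which is one way a model can start to feel dumber even when the weights never change.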