Writing a 100-word email using ChatGPT (GPT-4, latest model) consumes one 500 ml bottle of water and 140 Wh of energy, enough for 7 full charges of an iPhone Pro Max
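
As a quick sanity check on that arithmetic (the battery capacity and charger efficiency below are assumptions, not sourced figures):

```python
# Sanity check of the headline's energy claim; all figures are
# illustrative assumptions, not measurements.
email_energy_wh = 140          # claimed energy for one 100-word email
claimed_charges = 7            # claimed number of full phone charges

implied_wh_per_charge = email_energy_wh / claimed_charges
print(f"Implied energy per charge: {implied_wh_per_charge:.1f} Wh")  # 20.0 Wh

# An iPhone Pro Max battery stores roughly 17 Wh; with ~85% wall-to-battery
# charging efficiency assumed, one full charge draws about 20 Wh, so the
# headline's numbers are at least internally consistent.
battery_wh = 17.0
charger_efficiency = 0.85
print(f"Energy drawn per charge: {battery_wh / charger_efficiency:.1f} Wh")
```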

  • zerozaku
    2 months ago

    I have read the comments here, and all I understand with my small brain is that this huge, unnecessary power consumption happens because we are using big online models for simple tasks.

    So, can the on-device NPUs we are getting on flagship mobile phones solve these problems, as we can do most of those simple tasks offline on-device?

    • @[email protected]
      2 months ago

      I’ve run an LLM on my desktop GPU and gotten decent results, albeit not nearly as good as what ChatGPT will get you.

      Probably used less than 0.1 Wh per response.
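
      For anyone curious how you'd estimate that, here's a rough sketch using llama-cpp-python for local inference and NVIDIA's NVML bindings to sample GPU power draw (the model file is a placeholder, and polling whole-GPU power like this includes idle draw, so treat the result as a ballpark, not a calibrated measurement):

      ```python
      # Ballpark per-response energy for a local LLM (sketch, not a benchmark).
      import time
      import threading

      import pynvml
      from llama_cpp import Llama

      pynvml.nvmlInit()
      gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

      samples = []
      stop = threading.Event()

      def sample_power():
          # nvmlDeviceGetPowerUsage returns milliwatts; poll every 100 ms.
          while not stop.is_set():
              samples.append(pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0)
              time.sleep(0.1)

      llm = Llama(model_path="models/llama-7b.Q4_K_M.gguf")  # placeholder path

      t = threading.Thread(target=sample_power)
      start = time.time()
      t.start()
      llm("Write a 100-word email declining a meeting.", max_tokens=200)
      stop.set()
      t.join()
      elapsed = time.time() - start

      avg_watts = sum(samples) / len(samples)
      energy_wh = avg_watts * elapsed / 3600.0  # W x s -> Wh
      print(f"{elapsed:.1f}s at {avg_watts:.0f}W ≈ {energy_wh:.3f} Wh")
      pynvml.nvmlShutdown()
      ```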

      • @Monsieurmouche
        2 months ago

        Is this for inference only, or does it include training?

        • @[email protected]
          2 months ago

          Inference only. I’m looking into doing some fine-tuning. Training from scratch is another story.

        • @[email protected]
          2 months ago

          Training is a one-time thing. The more it gets used, the less energy per query it will take.
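
          As a back-of-the-envelope illustration of that amortization (the training-energy and query-count numbers below are made-up placeholders, not figures for any real model):

          ```python
          # Amortizing a one-time training cost over queries (all numbers hypothetical).
          train_energy_wh = 50e9        # pretend the training run cost 50 GWh
          inference_wh_per_query = 0.3  # pretend each query costs 0.3 Wh

          for num_queries in (1e9, 10e9, 100e9):
              total = train_energy_wh / num_queries + inference_wh_per_query
              print(f"{num_queries:.0e} queries -> {total:.2f} Wh per query")
          # The training share shrinks from 50 Wh to 0.5 Wh per query as usage grows.
          ```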

          • @Monsieurmouche
            2 months ago

            Good point. But considering the frequent retraining, the environmental impact can only be spread over a finite number of queries.

            • @[email protected]
              2 months ago

              They have already reached diminishing returns on training, so it will become much less frequent soon; retraining on the same data is useless unless there is a better training method. I think the resources consumed per query should only include those actually used for inference. The rest can be dismissed as bad-faith argumentation.

    • Avieshek
      2 months ago

      Yes, kind of… when the businesses making money from subscriptions are willing to ship models with the OS for free, which is something only Apple has the luxury to do, unlike OpenAI, which doesn’t ship hardware or an operating system (like Windows), just an app that’s less than 100 MB. Servers would still be needed, but not for general cases like “help me solve this math problem” or translation. Stable Diffusion or Flux is one example where you only need an internet connection to download a particular model, just as you wouldn’t download every game in the world before the intention to play one arises.
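
      To illustrate that download-once-then-run-offline pattern, here’s a minimal sketch using Hugging Face’s diffusers library (the checkpoint name is just one public example; the first call fetches the weights to a local cache, after which generation runs entirely on your own GPU):

      ```python
      # One-time model download, then fully local image generation (sketch).
      import torch
      from diffusers import StableDiffusionPipeline

      # The first run downloads the weights to the local cache; subsequent
      # runs need no internet connection at all.
      pipe = StableDiffusionPipeline.from_pretrained(
          "stabilityai/stable-diffusion-2-1",  # one commonly used public checkpoint
          torch_dtype=torch.float16,
      )
      pipe = pipe.to("cuda")

      image = pipe("a watercolor fox in a snowy forest").images[0]
      image.save("fox.png")
      ```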