• @[email protected]
    -42 points · 1 month ago

    I’m not going to parse this shit article. What does interference mean here? Please and thank you.

    • @filisterOP
      42 points · 1 month ago

      That’s a very toxic attitude.

      Inference is, in principle, the process of generating the AI's response. So when you run an LLM locally, you are only using your GPU for inference.
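
      For example, here is a minimal sketch of what local inference looks like, assuming the Hugging Face transformers library and an illustrative small checkpoint ("gpt2" is just a placeholder, not a recommendation):

      ```python
      # Minimal local-inference sketch: load a model and generate a response.
      # "gpt2" is only an illustrative checkpoint.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")
      device = "cuda" if torch.cuda.is_available() else "cpu"  # use the GPU if present
      model.to(device)
      model.eval()  # inference mode: the weights are fixed, nothing is trained

      inputs = tokenizer("What does inference mean?", return_tensors="pt").to(device)
      with torch.no_grad():  # no gradients: the GPU only computes the response
          output_ids = model.generate(**inputs, max_new_tokens=50)
      print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
      ```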

    • @xodoh74984
      19 points · 1 month ago

      Training: Creating the model
      Inference: Using the model
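
      A toy sketch of that split, assuming PyTorch and a made-up one-layer model with fabricated data:

      ```python
      # Training vs. inference in miniature (toy model, fabricated data).
      import torch

      model = torch.nn.Linear(4, 1)                        # illustrative model
      optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
      loss_fn = torch.nn.MSELoss()
      x, y = torch.randn(8, 4), torch.randn(8, 1)          # fake training data

      # Training: creating the model by updating its weights from data.
      model.train()
      optimizer.zero_grad()
      loss = loss_fn(model(x), y)
      loss.backward()      # compute gradients
      optimizer.step()     # update the weights

      # Inference: using the model, with no gradients and no weight updates.
      model.eval()
      with torch.no_grad():
          prediction = model(torch.randn(1, 4))
      ```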