• @[email protected]
    11 years ago

    But I don’t know if Google cares enough about privacy to bother training individual models to avoid cross-contamination. Each model takes years’ worth of supercomputer time, so the fewer they need to train, the lower the cost.

    • Natanael
      11 years ago

      Extending an existing model (retraining) doesn’t take years; it can be done in far less time.

      • @[email protected]
        11 years ago

        Hmm, I thought one of the problems with LLMs was that they’re pretty much baked in during the training process. Maybe that was only with respect to removing information?

        • Natanael
          11 years ago

          Yeah, it’s hard to remove data that’s already been trained into a model. But you can retrain an existing model to add capabilities, so if you make multiple copies of one trained on public data and then retrain each copy on a different set of private data, you can save a lot of work.
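
          That copy-then-retrain idea can be sketched with a toy example (plain Python, with a tiny one-feature linear model standing in for an LLM; the datasets and step counts here are invented purely for illustration, not anything Google actually does):

```python
# Toy sketch: train one "base" model on public data once (the expensive part),
# then cheaply fine-tune independent copies on separate private datasets.
import copy
import random

random.seed(0)

def train(weights, data, steps, lr=0.1):
    """Plain SGD on a one-feature linear model y = w*x + b."""
    w, b = weights
    for _ in range(steps):
        x, y = random.choice(data)
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err
    return [w, b]

def loss(weights, data):
    """Mean squared error of the model on a dataset."""
    w, b = weights
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# "Public" data follows y = 2x; each "private" set adds its own offset.
xs = [0.1 * i for i in range(10)]
public    = [(x, 2 * x)     for x in xs]
private_a = [(x, 2 * x + 1) for x in xs]
private_b = [(x, 2 * x - 1) for x in xs]

# Expensive base training happens once, on public data only.
base = train([0.0, 0.0], public, steps=2000)

# Copies of the base are fine-tuned briefly, one per private dataset,
# so no private data ever mixes between the copies.
model_a = train(copy.deepcopy(base), private_a, steps=200)
model_b = train(copy.deepcopy(base), private_b, steps=200)

print("base loss on private_a:", round(loss(base, private_a), 3))
print("fine-tuned loss on private_a:", round(loss(model_a, private_a), 3))
```

          The point of the sketch is just the structure: the 2000-step base run is shared, while each private adaptation only needs a short 200-step pass, and the per-user models stay isolated from each other’s data.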