• @cyd · 11 points · 1 year ago

    The context is that LLMs need a big up-front capital expenditure to get started, because of the processor time needed to train these giant neural networks. This is a huge barrier to the development of a fully open source LLM. Once such a foundation model is available, building on top of it is relatively cheap; one can then envision an explosion of open source models targeting specific applications, which would be amazing.
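
    As an aside on why the “build on top” step is so much cheaper: the usual route is parameter-efficient fine-tuning, which freezes the foundation model and trains only a small adapter. A rough sketch using the Hugging Face transformers and peft libraries, where the model name and LoRA settings are placeholders rather than a recommendation:

    # Sketch of LoRA fine-tuning on top of an existing foundation model.
    # "some-open-foundation-model" is a placeholder, not a real checkpoint.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("some-open-foundation-model")
    lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
    model = get_peft_model(base, lora)

    # Only the small adapter matrices are trainable; the foundation weights stay
    # frozen, so the compute needed is a tiny fraction of pretraining.
    model.print_trainable_parameters()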

    So if the bulk of this €300M could go into training, it would go a long way toward plugging that gap. But in reality, a lot of that sum will be dissipated on other expenses, so far less than €300M will be left for actual training.

    • @interceder270 · 5 points · 1 year ago

      Is there any way we can decentralize the training of neural networks?

      I recall something being released a while ago that let people use their computers for scientific computations. Couldn’t something similar be done for training AI?

      • @[email protected] · 4 points · 1 year ago

        There is a project (AI Horde) that lets you donate compute for inference. I’m not sure why the same doesn’t exist for training; I think the RAM/VRAM requirements just can’t be lowered or split across machines.
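
        To put rough numbers on that intuition (the figures below are assumptions: a 7B-parameter model, fp16 inference, mixed-precision Adam training): inference only needs the weights, while training also has to hold gradients, master weights, and optimizer moments, and that whole state would need to be kept in sync between machines constantly.

        # Back-of-envelope VRAM estimate. All numbers are assumptions, not
        # measurements: 7B parameters, 2 bytes/param for fp16 inference, and
        # roughly 16 bytes/param for mixed-precision Adam training (fp16
        # weights + fp16 gradients + fp32 master weights + two fp32 moments),
        # before activations are even counted.
        params = 7e9

        inference_gb = params * 2 / 1e9    # ~14 GB: fp16 weights only
        training_gb = params * 16 / 1e9    # ~112 GB: weights + grads + optimizer state

        print(f"inference: ~{inference_gb:.0f} GB, training: ~{training_gb:.0f} GB")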

        Another way to contribute is by helping with training data. LAION, which created the dataset behind Stable Diffusion, is a volunteer effort. Stable Diffusion itself was developed at a tax-funded public university in Germany; however, the compute costs for training were covered by a single rich guy.

      • @Sanyanov · 1 point · 1 year ago

        Btw yes! Why not include such a project in something like BOINC and let people help train free AI?
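
        If someone did wire this into a BOINC-style project, the usual technique is federated averaging: each volunteer trains a local copy of the model on its own data, and only the weights get merged from time to time. A toy PyTorch sketch, with a placeholder model and random data just to show the shape of it:

        # Toy federated averaging (FedAvg): volunteers train locally, a
        # coordinator averages the returned weights. Model, data, step counts,
        # and learning rate are all placeholders.
        import copy
        import torch
        import torch.nn as nn

        def local_update(global_model, data, targets, steps=5, lr=0.01):
            """One volunteer: copy the global model, run a few local SGD steps."""
            model = copy.deepcopy(global_model)
            opt = torch.optim.SGD(model.parameters(), lr=lr)
            loss_fn = nn.MSELoss()
            for _ in range(steps):
                opt.zero_grad()
                loss_fn(model(data), targets).backward()
                opt.step()
            return model.state_dict()

        def federated_average(state_dicts):
            """Coordinator: element-wise average of the volunteers' weights."""
            return {
                key: torch.stack([sd[key].detach() for sd in state_dicts]).mean(dim=0)
                for key in state_dicts[0]
            }

        # Toy global model plus three simulated volunteers with private data.
        global_model = nn.Linear(10, 1)
        volunteers = [(torch.randn(32, 10), torch.randn(32, 1)) for _ in range(3)]

        for _ in range(10):  # communication rounds
            updates = [local_update(global_model, x, y) for x, y in volunteers]
            global_model.load_state_dict(federated_average(updates))

        The catch, as noted above, is that every volunteer still has to fit the entire model in memory, so this spreads the compute bill but not the VRAM requirement.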

      • Dojan · -3 points · 1 year ago

        Folding@home.

        I dunno. I wouldn’t lend my spare power to put people out of a job.