• @[email protected]
    13 • 2 months ago

    I’m sure it being so much better is why they charge 100x more for using this than they did for 4ahegao, and that it’s got nothing to do with the well-reported gigantic hole in their cashflow, the extreme costs of training, the likely-looking case of this being yet more stacked GPT3s (implying more compute in aggregate per use), the need to become profitable, or anything else like that. nah, gotta be how much better the new model is

    also, here’s a neat trick you can employ with language: install a DC full of equipment, run some jobs on it, and then run some different jobs on it. same amount of computing resources! amazing! but note how this says absolutely nothing about the quality of the job outcomes, the durations, etc.

    • @tee9000
      0 • 2 months ago

      Their proposed price increases are insane, and yeah, even though they’re getting lots of funding right now, they aren’t covering their expenses with subscriptions. I can’t imagine they could successfully charge regular users that much without kicking 99% of them off the platform. Now that would be dystopian to me… pricing out regular users when the model uses the same computing power.

      You’re saying they’re overstating their model’s ability? My understanding of the claim is that the model just makes fewer simple arithmetic mistakes. I’ve still noticed hallucinations and mistakes when it assists with my code, but to be fair the language I’m using has limited documentation. I don’t see their claims as exaggerated yet, but I’d be lying if I said I’ve used the new preview model enough to understand it. It’s certainly slower…