• @grue
    link
    English
    8
    11 months ago

    How do you “leak” something that’s open-source?

  • @Brcht
    link
    English
    8
    11 months ago

    I have tried the version available on HuggingChat and it’s almost GPT-4 level for programming tasks, but not so much for general knowledge

    • rezz
      link
      English
      3
      edit-2
      10 months ago

      What’s the best overall in your opinion for programming tasks?

      I’ve had a pretty flawless experience with the free ChatGPT 3.5 for simple scripts but don’t generally stray beyond that.

      • @[email protected]
        link
        fedilink
        English
        3
        11 months ago

        I’ve had reasonably good results with deepseek 6.7b

        It struggles and hallucinates once you get into more niche languages, but it’ll spit out a Snake game in Python or write file I/O functions pretty well - I’ve used it to read/write huge JSON in a buffer with good results.

        I haven’t tried gpt4, but compared to 3.5 it’s pretty great

      • @Brcht
        link
        English
        1
        11 months ago

        I have GPT-4 with Copilot at work, so I mostly use that. It is pretty good for programming, but you see its limitations when you start relying on it a lot for autocompletions, maybe because the surrounding code itself is not too coherent and GPT gets confused.

        Maybe chat tasks are simpler because it writes the whole thing from scratch, though I mostly ask it for limited functionality, e.g. write a function that takes x and returns y, or use a certain ORM to change a value.

        I would suggest you try both GPT-3.5 and the free Hugging Face Mistral 7B version (you can probably run it on a PC too, it’s not huge) for programming tasks and see for yourself. For general knowledge though, GPT wins hands down over Mistral 7B

    • @[email protected]
      link
      fedilink
      English
      3
      11 months ago

      I can’t find miqu-1-70b in huggingchat. Do I need to enable anything special anywhere?

    • @bassomitron
      link
      English
      2
      11 months ago

      Do you know if there are any plans to quantize it? I’d love to test it, but my 3090 can’t handle 70B models without quantization, unfortunately.

      • midnight
        link
        fedilink
        3
        edit-2
        11 months ago

        There are quantized versions on Hugging Face. There’s a q2 version, but idk how well that performs

      • ffhein
        link
        English
        2
        11 months ago

        Only quantized versions of the model were leaked. If you see an unquantized version of it, then it’s something that was recreated from those quants, not the original model. People have also re-quantized it from GGUF to EXL2 and probably other formats too.
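        As a rough back-of-the-envelope sketch of why quantization matters here (assuming a plain 70B-parameter model and ignoring the KV cache and runtime overhead, so real usage is somewhat higher):

        ```python
        # Approximate weight-storage size for a 70B-parameter model at
        # different precisions. Round numbers only; KV cache and framework
        # overhead are ignored, so actual VRAM use will be higher.

        PARAMS = 70e9  # assumed parameter count

        def vram_gb(bits_per_weight: float) -> float:
            """Gigabytes needed just to hold the weights."""
            return PARAMS * bits_per_weight / 8 / 1e9

        for name, bits in [("fp16", 16), ("q8", 8), ("q4", 4), ("q2", 2)]:
            print(f"{name}: ~{vram_gb(bits):.1f} GB")
        ```

        At fp16 the weights alone are ~140 GB, far beyond a 24 GB RTX 3090, while a q2 quant is around 17.5 GB, which is why only quantized builds are practical on a single consumer card.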

  • @[email protected]
    link
    fedilink
    English
    4
    11 months ago

    Note that he did not confirm it was Mistral Medium. He says it’s a retrained Llama-2-70B model, but hints that it is not the fully trained one. Sounds a bit like damage control, but it is not a 100% confirmation of the claim.