Would you like to see plugins that integrate with local/self-hosted AI instead of sending your data to ChatGPT? Or do you not care about privacy there as long as the results are good?

You might be interested in GPT4All (https://gpt4all.io/index.html), which is easy to download as a desktop GUI. Simply download a model (like Nous Hermes, about 7.5 GB) and run it even without a GPU, right on your CPU (albeit somewhat slowly).
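
If you'd rather script it than use the GUI, GPT4All also ships Python bindings. Here's a minimal sketch, assuming the gpt4all package is installed; the model filename is illustrative, not a recommendation:

```python
# Minimal sketch using the gpt4all Python bindings (pip install gpt4all).
# The model filename is illustrative -- pick any model from the GPT4All
# catalog; it is downloaded on first use and runs entirely on the CPU.
from gpt4all import GPT4All

model = GPT4All("nous-hermes-llama2-13b.Q4_0.gguf")  # hypothetical filename
print(model.generate("Summarize the zettelkasten method in two sentences.",
                     max_tokens=128))
```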

It’s amazing what’s already possible with local AI instead of relying on large-scale, expensive, corporate-dependent AIs such as ChatGPT.

  • @dethb0y
    10 points · 1 year ago

    I’ve not thought of a good use case for the technology myself, but if I were to use it, I’d prefer it be local just for convenience’s sake.

  • @[email protected]
    3 points · 1 year ago

    I am generally interested in giving an LLM more context about my data and hobby project code, but I’d never give someone else such deep access. GPT4All sounds great, and it makes me hopeful that we won’t have to rely entirely on commercial GPTs in the future. It’s to AI what Linux and FreeBSD are to OSs.

    But that still leaves the question of what to do with it. I see 2 main purposes:

    • Asking the GPT questions about your material
    • Writing more content for your vault

    Both seem useful at first, but I don’t think they’re really necessary. A good fuzzy search like Obsidian has, combined with a good vault and note structure, makes the first point pretty irrelevant.

    Also, writing more content is really two things:

    • Text generation/completion
    • Research

    I think a plugin might be a nice, user-friendly UI for the first point, but research is much better done in a chat-like environment. And for that I don’t need an integration, as I probably have a web browser open anyway.
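
    For what it’s worth, here’s a rough sketch of what the “asking the GPT questions about your material” point could look like with a local model. It’s purely illustrative: it assumes the gpt4all Python bindings and a plain Markdown vault, and it uses naive keyword overlap instead of real fuzzy search (vault path and model filename are made up):

    ```python
    # Illustrative only: rank notes by naive keyword overlap with the question,
    # then hand the best matches to a local model as context.
    from pathlib import Path

    from gpt4all import GPT4All

    def top_notes(vault: Path, question: str, k: int = 3) -> list[str]:
        words = set(question.lower().split())
        scored = []
        for note in vault.rglob("*.md"):
            text = note.read_text(encoding="utf-8", errors="ignore")
            scored.append((sum(w in text.lower() for w in words), text))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [text for _, text in scored[:k]]

    vault = Path("~/vault").expanduser()                 # made-up vault location
    question = "What did I decide about my project's data model?"
    context = "\n---\n".join(top_notes(vault, question))
    model = GPT4All("nous-hermes-llama2-13b.Q4_0.gguf")  # hypothetical filename
    print(model.generate(f"Context:\n{context}\n\nQuestion: {question}",
                         max_tokens=256))
    ```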

    • gelberhut
      2 points · 1 year ago

      For me, a significant part of note-taking’s value is the fact that I create the text of the note myself. This process also helps me remember things better. An AI-generated note is mostly useless to me.

      Extracting data… probably, but it’s hard to imagine something more impressive than search on steroids.

    • @[email protected]OP
      2 points · 1 year ago

      “hopeful that we won’t have to rely entirely on commercial GPTs in the future.”

      This is very much the case. People have been calling for an LLM equivalent of what Stable Diffusion was to DALL·E 2, and since LLaMA was leaked a few months back, the improvements in open-source LLMs have been impressive. It took only 3 weeks for the open-source LLaMA adaptations to catch up to Google’s Bard!

      Check out this well-written internal Google memo: https://www.semianalysis.com/p/google-we-have-no-moat-and-neither

      I find these developments impressive. I’m looking forward to seeing how far local, private, open-source models will have come just a year from now!

      • @[email protected]
        1 point · 1 year ago

        Yeah, I am quite excited to try out GPT4All or something similar as soon as time allows, integrated as a plugin or not. To me it’s especially impressive that small models are inching up on the big ones just by being trained on better/GPT-generated data. Seeing them run on modern and even older consumer hardware is mind-boggling. This all seemed so far away before ChatGPT, and still quite far away even with ChatGPT.

  • @DrakeRichards
    3 points · 1 year ago

    How is this possible? I thought nearly all local LLMs required ludicrous amounts of VRAM, but I don’t see anything about system requirements on their website. Does this run in the cloud?

    • @[email protected]OP
      3 points · 1 year ago

      It actually runs locally! I tried it just two days ago; it’s amazing!

      It’s all based on research by many people who wanted to make LLMs more accessible, because gating them behind heavy computational requirements isn’t really fair/nice.

    • @[email protected]
      1 point · 1 year ago

      There are some smaller models (7B parameters) that you can just about get running on a MacBook Air. It’s been a wild west of innovation as people figure out how to encode the weights in progressively fewer bits.

      One model I’ve seen averages about 2.56 bits/weight.
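
      Back-of-the-envelope (my arithmetic, not from the thread): at 2.56 bits/weight, the weights of a 7B-parameter model fit in roughly 2 GiB, which is why a laptop can hold them:

      ```python
      # Rough weight-memory estimate for an aggressively quantized model.
      params = 7e9             # 7B-parameter model
      bits_per_weight = 2.56   # average quoted above
      gib = params * bits_per_weight / 8 / 2**30
      print(f"{gib:.2f} GiB")  # ~2.09 GiB of weights -- MacBook Air territory
      ```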

      I’m not sure how it will fit into my particular use of Obsidian. One thing I keep in mind is that I want to be very sure to segregate AI-generated text from my own.
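
      One hedged way to enforce that in a script; the marker convention here is my own invention, not an Obsidian or GPT4All standard:

      ```python
      # Illustrative convention (my own, not a GPT4All feature): wrap model
      # output in an Obsidian callout so AI text stays visually segregated.
      def tag_ai_text(text: str, model_name: str) -> str:
          quoted = text.replace("\n", "\n> ")
          return f"> [!quote] AI-generated ({model_name})\n> {quoted}"

      print(tag_ai_text("A model-written summary would go here.", "gpt4all"))
      ```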

  • @[email protected]
    1 point · 1 year ago

    I would love this! And thanks for the info on GPT4All; I hadn’t heard of that local LLM before.