• NVIDIA released a demo version of a chatbot that runs locally on your PC, giving it access to your files and documents.

• The chatbot, called Chat with RTX, can answer queries and create summaries based on personal data fed into it.

• It supports various file formats and can integrate YouTube videos for contextual queries, making it useful for data research and analysis.

    • @Kyrgizion
4
9 months ago

      2xxx too. It’s only available for 3xxx and up.

    • @anlumo
2
9 months ago

      The whole point of the project was to use the Tensor cores. There are a ton of other implementations for regular GPU acceleration.

    • @CeeBee
2
9 months ago

      Just use Ollama with Ollama WebUI
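For context on the suggestion above, a minimal sketch of the Ollama command-line workflow (the model name and prompt here are illustrative; assumes Ollama is installed locally):

```shell
# Pull a model's weights, then chat with it from the terminal.
ollama pull llama2
ollama run llama2 "Summarize the main points of this text: ..."

# Ollama also serves a local HTTP API (default port 11434),
# which front-ends such as the web UI mentioned above build on:
curl http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Hello", "stream": false}'
```

The web UI is typically run as a separate service pointed at that local API endpoint.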

      • Dojan
19
9 months ago

        There were CUDA cores before RTX. I can run LLMs on my CPU just fine.

      • halfwaythere
        6
        9 months ago

This statement is so wrong. I have Ollama running the llama2 model decently on a 970 card. Is it super fast? No. Is it usable? Yes, absolutely.