I’ve only seen RAG work well with small, curated files for retrieval - a Claude project with only 10% of its knowledge capacity used, or a NotebookLM project with related docs. With those projects, your custom instructions can spell out the scope of the bot’s knowledge and how to handle prompts outside that scope. With internet search RAG, there is no “out of scope,” and I’ve yet to see an implementation that doesn’t hallucinate too much. I still use Perplexity from time to time, but I have to follow up on the primary sources it links. ChatGPT search doesn’t link its sources as often, and in the use case I just tested, ALL of the links were to unrelated sites.