Like most of you, I used reddit as my sole source for finding information. Looking to hear your thoughts on this topic, and hopefully you can explain and share some knowledge in a more sophisticated manner than I can. (also, I hope this is an appropriate place to post?)
I have run into this discussion a few times across the fediverse, but I can’t for the life of me find those threads and comments lol
I believe that a non-corporate-owned platform with user-generated information is optimal, like wikipedia. I don’t know the technicalities, but I feel like AI can’t replace answers drawn from human experience - humans who are enthusiasts and care about helping each other rather than making money. This is one of those things where I feel like I know the “best” way to find information, but I don’t know the deeper reasons why, or what makes the other platforms worse (aside from the obvious ads, bloatware, and corporate greed)
I don’t know much about this topic, but I’m curious if you guys have actual real answers! Thread-based services like this and stack overflow (?) vs chatgpt vs bing vs google, etc.
EDIT: Wow, all your responses are fantastic. I’m not very knowledgeable about the subject, so I can’t really turn everyone’s responses into a discussion, but I love and appreciate the insight in this thread! I’ll try to think of some follow-up questions :)
The worst part about ai as a search engine is that it doesn’t (or at least can’t reliably) give you the original source. It can tell you lots of stuff but there’s no link to a news article or wiki page where it got it from. A traditional search engine can give you unreliable results, but at least you can look at them yourself and decide if they’re reliable or not. An AI search engine has you just take what it says at face value, true or not.
Bing AI cites its sources in its replies
Deep learning randomly recombines its input (e.g., text OpenAI found crawling Reddit and Stack Overflow) based on how frequently various groups of words are used together in its input. So its output is not “information” in the ordinary sense, or if you like, the only information you can glean from it is a vague sense of how other people have used words before, in general.
As a result, its credible use cases are pretty slim. If you need large quantities of extremely banal text, ChatGPT is your man. If you want to learn something, look elsewhere.
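To make the “recombines input by word frequency” idea concrete, here’s a toy sketch (my own illustration, not how GPT actually works): a bigram model that picks each next word purely by how often it followed the previous word in some training text. Real LLMs are vastly more sophisticated, but the statistical flavor is similar.

```python
import random
from collections import Counter, defaultdict

# Made-up miniature "training corpus" for illustration only.
training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog"
)

# Count how often each word follows each other word.
words = training_text.split()
follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Emit up to `length` words, each sampled by bigram frequency."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # dead end: no known successor
            break
        choices, weights = zip(*options.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # fluent-looking, but purely frequency-driven
```

The output reads like plausible English precisely because it mirrors the co-occurrence statistics of the input, which is the point being made above: fluency, not knowledge.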
Though I will mention the use case I’m exploring at the moment, with… moderate success: using it as a Spanish chat bot, since I really need to learn Spanish.
To generate answers is not to search answers. If I need a search engine, I want a search engine. If I need a text generation model I want a text generation model.
I really like this. It’s so true and I feel like you really hit the nail on the head!
Can you elaborate on what defines a text generation model or give examples? Is it literally just like “write me a story about x”? I can’t think of other real examples a TGM could be useful for
It’s ironic, though. Google’s search engine is pretty horrendous so we literally just use it for searching within reddit (since reddit’s in-platform search is also poor, but for other reasons)
I think people are way too quick to dismiss AI on the basis that it’s not always factual. Searching for stuff and adding Reddit is a great way to get non-factual information as well. Everyone who has great insight into a subject knows how horrible many highly upvoted comments are.
Whether you use AI, Reddit or Google, you have to do a quick analysis of how credible it seems. I use all three of them, but more and more AI for niche searches that are hard to get good results for.
Yea, I sort of agree - that’s kind of why I think that doing research yourself by looking across dozens of sources, posts, and comments, then making your own judgment call is the way. Idk I guess maybe it’s just my experience, but I usually find that a comment with misinformation is downvoted to oblivion with responses as to why it’s wrong, and the most helpful solution is usually upvoted, with replies like “you’re a life saver” or “this is the real answer. thank you!!”. Obviously I don’t mean like 100% of content is like this, there will be bad content everywhere, but I take every solution with a grain of salt while looking at other solutions, and decide for myself (or maybe I misunderstood you lol)
There are sooooooo many times, way more than I can count, that I had an INCREDIBLY niche problem (usually tech related) and bam, someone on reddit had the same issue, and either figured it out & posted the solution, or the wack solution is in the comments. I never ever find this information on other random articles or “official help threads”. This happened so much that I didn’t google a single piece of information without adding “reddit” to it
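For what it’s worth, the same trick can be made stricter with Google’s `site:` operator, which restricts results to a single domain (the queries below are made-up examples):

```
instead of appending the word "reddit":
  usb-c dock not detected linux reddit

restrict results to reddit.com only:
  usb-c dock not detected linux site:reddit.com
```

The plain “reddit” keyword casts a wider net (it also matches pages that merely mention reddit), while `site:reddit.com` guarantees every result is actually from there.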
I feel like AI would be better for simple things like an excel formula, or ordinary information? For now, at least. But, I am trying to learn more about this subject and truly see the legitimate capabilities of AI
Then again, what do I know lol
Machine learning seems to be very good at generating believable persuasive writing, and not at all good at determining truth from fiction, even worse than people. This is an absolutely deadly combination and our rush to use it in this capacity is profoundly stupid.
I’m not against these algorithms mind you. I think they have a lot of useful potential. It’s just that the first things people have dived for to use it seem to me to be the absolute most foolhardy ways to apply it.
I completely agree. It makes sense that AI is not good at determining truth vs fiction. I think it’s more important for us as users to just search for information on our own, then determine the “end answer” with our own judgement after reviewing different sources and experiences (taking each individual answer with a grain of salt)
That’s why, I personally think AI search engine won’t be the best all-rounder for all types of information that’s not niche, deep searching which is IMO better found on forum-like platforms where people (enthusiasts) share sources, their experiences, what worked, what didn’t work, and why. For AI, maybe just simple bland information, like an excel formula, or how to hot wire a car, is better
yeah, AI does perform very well when given a specific and goal-oriented task. I think the coolest use I’ve seen for it was an emergency doc who was getting it to write explanatory documents for patients. Like “Please write a friendly, empathetic, simple-english explanation for why CPR would not be effective on a frail person with severe osteoporosis and advanced dementia” and things. This allows the doc to give the patient more detail than they’d have time to present, but it can be very closely tailored to the scenario, and it’s the sort of information AI shines at producing.
I personally do not like the idea of AI-powered “search” engines since AI has been known in the past to absolutely make stuff up and cite fake articles that don’t actually exist.
I don’t remember the exact article, but I do remember the story of either a lawyer or law professor (I can’t remember which) who asked an AI chatbot about himself and it came up citing a fake news article about him having sexual relations with a student of his (if I am remembering this all correctly).
Also, I prefer a traditional search where I am given a ton of varying links to different web pages displayed in a listed order, so that I can open a link and, if I don’t find what I’m looking for, just close it and try another one. Compare that to any time I’ve used the Perplexity chatbot, where at most, at the end of each response, I’m given a few different links that may or may not contain the answer I’m looking for, if they’re even legitimate.
Yeah I don’t either. Do you know if it makes stuff up because it searches the internet for answers, and then comes up with its own answer? Or is its answer purely based on its poor ability to find accurate information, which leads to nonsense?
How does it cite fake articles that don’t exist? I had thought that it doesn’t even provide sources. Or do you mean it would say something vague like “according to a NY times article”, or cite articles that do exist but are just completely filled with incorrect information?
Same here, I feel like traditional searching will always be superior. I don’t think AI will ever be able to give organic responses, because in order to do that, it’ll have to either have its own experiences in every subject, or know how to pull from valid sources correctly and efficiently. Like I want to look up results and feedback about something from a real person who’s experienced it and used it, not answers given out just for profit. Same reason why I avoid basically any “official source” article at all costs. Anytime I go to a recipe website I go straight to the bottom and read the comments, but honestly I stopped going to those and used reddit instead lol
Bad. Chatbots can and have given out wrong, nonsensical, and potentially dangerous info. All they do is synthesize info, and that includes the same bad info that made search engines less useful in the first place.
I’ve used a few here that were suggested by others in this thread - some of them gave sources and actually pointed to reddit. I had 2 recent questions I was trying to find information on this past week, and it passed 1/2 questions. But if I have a niche question, especially tech related, forget it lol
It seems to just give a list of all the possible answers for the question. But it doesn’t seem to pinpoint (unless I’m mistaken) arguably the most “optimal” method or answer (which users point out or endorse), and I assume wouldn’t also explain why or the experience. Because of that, I don’t think it’ll replace the value of finding information via thread-based platforms (especially if user-controlled)
This is probably a bit of a pessimistic take, but it feels like Google and some of the other search engines are already essentially giving you AI results in the form of the top content they display. For many searches, what you’ll see are a variety of pages either written with AI or so heavily SEO-optimized that it’s clear they’re written to maximize ad revenue, not to help people find real answers. I think that sort of thing is inevitable with the monetization issues we have today, so I’m not sure what the answer is. Personally I don’t ever use generative AI to give me a trustworthy answer. I think it’s better employed for coming up with ideas or spurring creativity. Folks using it for fact checking should probably look elsewhere.
I do agree with you that a forum of answers from real people, something like Reddit became, is probably the ideal. And I think there are some industry-specific sites that achieve this reasonably well, like G2 for software and business reviews.
Edit: As an aside, information literacy is truly one of the great social problems of the day. For example, I can’t count the number of times I’ve seen folks screenshot the blurb from Google that “answers” a question and use it to try to prove their point in an online argument. Yes, that works fine in some instances, but the reliance on that snippet is what’s concerning to me.
It can be challenging and time-consuming to find real information, and the state of current search engines only exacerbates the problem.
Without solving the “hallucination problem”, it’s very risky to let this become mainstream. It’s also extremely expensive, since running LLM prompts costs much more energy compared with simple searches. Right now, running smaller models locally seems more interesting.
Also, it seems more useful not as a chatbot, but as a voice assistant.
I mostly have experience with Bing. And it’s because they keep forcing their shitty AI search splash page on me every time I want to do a normal web search. I turned it off in the Edge browser but what do you know, it keeps coming back.
Any new feature a company repeatedly forces on me is going to be starting from a hole it has to dig out of. The bigger the corporation, the more immediately resistant I will be to it. “ChatGPT” and “AI” as the latest buzzphrases grate on me.
Outside the big corporations, I’m keen to tinker around with it some. I’ve done some machine learning stuff in years past, but this is a large step change in what is available to hobbyists.
I’m with you on this one. Personally, there are a myriad of issues with replacing search engines with AI-generated answers:
- the accuracy. Without going into what is truth or falsehood, can you trust AI generated answers? I use Brave Search occasionally, and it has an AI summary text at the top. A lot of the time it strings multiple conflicting answers together into a paragraph and the result is laughably bad.
When I look something up that isn’t trivial, I typically use multiple search results and make the call myself. This step is removed if you use AI, unless one explicitly asks it to list all the top conflicting answers (along with sources) so the user can decide for themselves. However, as far as I know, its amalgamated answer is treated as a source of truth, even if the content has nuanced conflicts a human can easily spot. This alone deters me from AI search in general.
- I feel like doing this will degrade my reading/skimming comprehension and research skills, and can lead to blindly trusting direct and easy-to-access answers.
- In the context of technical searches like programming or whatnot, I’m not that pressed for time to take shortcuts. I don’t mind working stuff out from online forums and documentation, purely because I enjoy it and it’s part of the process.
- Sometimes, looking things up yourself means you can also discover great blogs and personal wikis from niche communities, and related content that you can save and look back on later.
- Centralizing information makes the internet bland, boring and potentially exploitative. If it becomes normalized to pay a visit to one or two Big AI search engines instead of actually clicking on human-made sources, then the information-providing part of the internet will become lost to time.
There are also problems with biases, alignment, training AI on AI-generated content, etc. Make of that what you will, but it sounds worse than spending a couple of minutes selecting sources for yourself. Top results are already full of generic, AI-generated stuff. The internet, made by us, for us, must prevail.
Anecdotally, I’ve used ChatGPT once or twice when I was really pressed for time with something I couldn’t find anywhere, and because my university professor wasn’t replying to my email regarding the topic. I was somewhat impressed at its performance, but this was after 6 or 7 prompts, not a single search away.
Maybe the next generation of AI search users, who’ve never looked a thing up manually, will grimace at the thought of pre-AI search engines.
I’ve been trying out an IaC service’s (Pulumi) chatbot to answer questions about how to spin up architecture. It’s really bad. It totally makes up properties that don’t exist and at times spins up code that doesn’t even make sense syntactically. Not to mention that the code it generates has the potential to cost not-insignificant amounts of money.
Definitely not a replacement for stack overflow, github, forums, or random blog posts. Not for a service that spins up critical infrastructure. Like, you have to know to some degree how that stuff works. And if you know how that stuff works, what’s the point of the service? Saving a few minutes typing stuff out and looking at documentation?
I use ChatGPT sometimes for work. We’re building some new digital infrastructure.
Mostly when I’m stuck on a weird bug I do a quick search in ChatGPT to get an idea of where to look and use that in Google search.
Another use is to create some simple Powershell scripts or commands. I of course test them in a test environment before I use them in the production environment we’re creating.
Yes sometimes it can come in handy to give you an idea or help you on the way. But you need to be careful and take answers with a grain of salt.
AI-generated search is a huge improvement over what we had before. Before, when you searched a question or topic that didn’t have a Wikipedia page or easy answer, you’d get a ton of SEO spam. Stack Overflow is still a great resource for programming, but for most general knowledge, AI-generated results are so much more useful.
I’ve been using Brave Search and the AI summarizer is pretty good, and I get to avoid loading shitty websites. For specific questions that aren’t easy for search engines to answer, I really like how ChatGPT is conversational and lets you ask followup questions. Another thing I’ve been using is perplexity.ai, it can actually search the internet and cite sources.
Overall, AI has been a big help to me, and lets me avoid going to other websites which are usually just awful now. Most websites are full of ads, trackers, cookie notices, “Checking your connection”, I’m done with dealing with that. Search engines are there when I forget the domain for a service or link to a GitHub project.
By “what we had before”, you probably mean normal google searching, right? If so, yeah I feel like AI searching would be better than that. I honestly don’t know what happened to google or when it stopped having quality. It would be interesting to watch a documentary about how google changed over time, what changed it, and why its search experience is so poor now
Do you think that thread-based platforms with user-generated content are the “best” source for information? (With the exception of SO, which I agree is probably superior for programming)
I haven’t heard of perplexity.ai before, but I just tried it with the first recent issue that I had that came to mind, and I’m already not super impressed - I know this is literally one question so far, but it gave me a source for steam deck which my question wasn’t about
This is an example of a question that someone would ask and need to provide info on what they already tried, setup details, more info about the situation, etc - and then people would comment likely with follow-up questions, or instantly know the solution from having to deal with it elsewhere. Obviously, this would go in the “troubleshooting” category, which seems that AI would have to redirect the user to rather than having the definitive answer to; at which point we’re probably better off searching ourselves
Another recent thing I had to look up was various tricks for pulling a stripped screw out of the back of my mechanical keyboard, which we couldn’t get out. Here’s what I get when using perplexity:
Pretty good, my engineering S/O knows all sorts of tricks for this, and a few of them are actually listed here. However, I feel that I need to validate the legitimacy of the AI’s response by going to the sources directly - which seems like an extra step that defeats the purpose of AI searching
Either way - it’s interesting! And I think could be a useful tool in some ways. Also 100% agree on being done with the bullshit that is ads, bloatware, etc. Sorry for the wall btw lol
I definitely think ai search engines are the next step. The way most people use Google is already a human-readable prompt, which gpt handles very well. We just need to improve the results and figure out a way for it not to steal content from and suppress traffic to the websites it pulls from.
Interesting, I think I agree with you on this. It could be better than traditional searching, but only if it is able to pull accurate organic content with sources. I think only then would it be more accurate and efficient than looking through forum-like platforms.
Discussions and comments are super important too so I guess it would have to pull sources that include that, which I guess could work? That’s super important for probably everything, because you might see comments that say
"add 1/4 c flour instead of 1/3 and it was perfect"
or "I used this and it caused a spark in my usb port, here's what I did and my setup, take caution"
or "if you use a 3 monitor setup though, be careful using 2 hdmi and 1 dp, for these reasons, 3 dp is better for these reasons"
or "if you want a more efficient way to farm this item, talk to this npc and do this quest instead"
etc (I just made those up for examples) - but the point is that people comment on posts with tweaks, improvements, warnings, positive feedback, negative feedback, etc. That’s super valuable for making a final decision on your own about the problem, which is partially why I don’t think AI will ever be the most successful way to find information, because I don’t know if it can achieve this more efficiently than forum-like platforms