- cross-posted to:
- aboringdystopia
- [email protected]
Alternative title: “We found a random Substack and decided to correlate it with market fluctuations because it seemed like a very good idea to drive clicks!”
It’s a sure sign of a healthy non-bubble economy when a random Substack post can cause a stock market crash.
Is the stock market crash in the room with us?
Really? Looks like a normal day at the office to me:

Do you not simply believe the assumption that AI will be super powerful any day now, titled “AI 2027”, sorry, “The 2028 Global Intelligence Crisis”? It’s gonna happen for real this time.
Cryptocurrency. NFTs. AI and cryptocurrency will upend the market with how incredible they have been. Then, agentic commerce, coupled with stablecoins, gets rid of transaction fees and upends the business models of payment processors like Mastercard and card-focused banks like American Express.
/s
Wait a minute. Is the DOW not ***OVER 50,000***?! “IT’S OVER 50,000!”
I’m pretty sure the 1% drop is attributable to Trump pushing more tariffs.
2 to 4 years out, guys, I’m serious this time. Gaaaaaawwwww.
Good, fuck the stock market. Let it all crash and get people hungry. Maybe then we’ll finally eat the rich. If that doesn’t do it, I don’t know what will.
You are lazy
Silly, AI doesn’t even work, devs are being rehired to fix their mistakes, blows the whole theory up before it starts.
AI is a broad category of systems, not any one thing. “AI doesn’t work” is like saying “plants taste bad”
You know exactly what we’re talking about when we look at this article and say “AI doesn’t work.” If you want to feign outrage, save it for the tech companies that muddy the waters.
Even if someone’s inaccurately using “AI” as a synonym for LLMs, that claim would still be false - because LLMs work. You can use one right now.
One spitting out false information isn’t a sign they’re not working. That’s not what LLMs are designed for. They’re chatbots - not generally intelligent systems. They don’t think - they talk.
If you can understand that the sentence “AI doesn’t work” is about LLMs, surely you can also understand that “not working” is synonymous with returning incorrect outputs.
I have literally no idea what else you’d be arguing. Its ability to generate words? Everybody knows it can do that.
The vast majority of people aren’t educated on the correct terminology here. They don’t know the difference between AI, LLM, AGI, ASI, etc. That makes it near impossible to have real discussions about AI - everyone’s constantly talking past each other and using the same words to mean completely different things.
My original comment wasn’t even challenging their claim that “AI doesn’t work.” I was just pointing out that AI and LLM aren’t synonymous. It’s my one-man fight against sloppy, imprecise use of language. I’d rather engage with what people are actually saying, not with what I assume they’re saying.
When it comes to LLMs, it’s not just a “word generator.” It’s a system that generates natural-sounding language based on statistical probabilities and patterns. In other words: it talks. That’s all. Saying an LLM “doesn’t work” because it spits out inaccurate info is like saying a chess bot doesn’t work because it can’t play poker. No - that’s user error. They’re trying to use the tool for something it was never designed to do.
When they’re pushing any plant, for everything, everywhere, disregarding concerns for quality or appropriateness, then yeah, plants taste bad, and will probably kill a few people.
From AI 2027 to this, it seems that even hyped AI predictions are getting humbled.