- cross-posted to:
- [email protected]
The onrushing AI era was supposed to create boom times for great gadgets. Not long ago, analysts were predicting that Apple Intelligence would start a “supercycle” of smartphone upgrades, with tons of new AI features compelling people to buy them. Amazon and Google and others were explaining how their ecosystems of devices would make computing seamless, natural, and personal. Startups were flooding the market with ChatGPT-powered gadgets, so you’d never be out of touch. AI was going to make every gadget great, and every gadget was going to change to embrace the AI world.
This whole promise hinged on the idea that Siri, Alexa, Gemini, ChatGPT, and other chatbots had gotten so good, they’d change how we do everything. Typing and tapping would soon be passé, all replaced by multimodal, omnipresent AI helpers. You wouldn’t need to do things yourself; you’d just tell your assistant what you need, and it would tap into the whole world of apps and information to do it for you. Tech companies large and small have been betting on virtual assistants for more than a decade, to little avail. But this new generation of AI was going to change things.
There was just one problem with the whole theory: the tech still doesn’t work. Chatbots may be fun to talk to and an occasionally useful replacement for Google, but truly game-changing virtual assistants are nowhere close to ready. And without them, the gadget revolution we were promised has utterly failed to materialize.
In the meantime, the tech industry allowed itself to be so distracted by these shiny language models that it basically stopped trying to make otherwise good gadgets. Some companies have more or less stopped making new things altogether, waiting for AI to be good enough before it ships. Others have resorted to shipping more iterative, less interesting upgrades because they have run out of ideas other than “put AI in it.” That has made the post-ChatGPT product cycle bland and boring, in a moment that could otherwise have been incredibly exciting. AI isn’t good enough, and it’s dragging everything else down with it.
Archive link: https://archive.ph/spnT6
Generations* Let’s not forget we produce 3 or 4 models of phones a year, per manufacturer. That’s an alarming amount of e-waste for the planet, and we don’t have the raw materials to keep up this pace forever.
I would argue that they moved to LLMs because they had run out of ideas on actually improving cellphones. It wasn’t that they were distracted by them. They are trying to distract us because they need to cell new phones every year and nothing they’ve come up with is really justifying shelling out $1200 for a phone that’s virtually the same as the previous 3-5 iterations.
This “new phone every year” is the worst consumer crapfest we have going. AI features feel like clutching at straws when seemingly everyone hates the battery life on every single phone. Slap a larger battery in there? Well now you get shit AI that burns whatever extra capacity was gained. I can’t name a single quality on an iPhone model from the last 6 years that I truly wanted, other than the size of my 13 mini. It works fine and it fits in my pocket. Now make one that stays on for a full 24 hours and doesn’t need a battery replacement every 2 years.
Blame the isheep for purchasing every crap offered.
Me breathing a sigh of relief for still using my S10.
It makes calls, sends texts, and I can read Lemmy with the app. What more do I need?
There are plenty on Android as well and they also existed before smartphones.
I’m not sure if that’s a typo or brilliant. They need to “cell” new phones every year, indeed.
Celling cell phones is indeed profitable.
It’s more boring than this, I think. The AI FOMO is real, so they cram it in, clumsily and ultimately pointlessly. But there were so many missed opportunities on Apple and Samsung flagships this year, and it boils down to the capitalistic urge to save money while charging customers the same, and to having no real competition. OPPO, OnePlus, and Vivo all have better devices, but importing them and getting them to work on US carriers is basically not possible. Not to mention the incentives the carriers throw at you to keep you locked in to that manufacturer.
I’ve been using a Sunbeam flip phone for a year or so. Paid for the phone up front, and pay $3/mo for use of maps, speech recognition, and continued bugfixes.
Even if phones never got new features, dev time still needs to be committed to security updates, and services (like Siri) need to be paid for. The model of getting 100% of your revenue from new phone sales is starting to break. If I could pay $3/mo for Siri or whatever and never have my phone go obsolete, I think that’d be a good deal.
What the heck are you on about? That’s the worst possible solution to this. Are you some sort of masochist?
If Siri is something that needs to be paid for, don’t bundle it with the system. Charge extra from the start, and people can opt in to that shit.
Also, they run a massively profitable software store, and THAT is what justifies and pays for the bug fixing and security patches to the overall OS.
The “cell a year” practice isn’t to cover development costs, it’s to bring in massive profit by milking the consumeristic herd that buys their crap.
I’m curious what the opinion of AI will be in 10 years.
Blockchain 10 years ago was hyped like AI now.
Probably the same as we have now, “be neat if and when it eventually arrives”.
I’m betting the same opinion we have today about 3D TVs
Honestly yeah, none of the crap being made right now is going to appear relevant in the future, just like 3d tvs
3d tvs is my favorite analogy. Easiest way to illustrate the bubble of hype.
That’s the saddest part, I loved my 3dtv until they stopped making media for it. It was a fun gimmick, but I was definitely not “most consumers” lol
I’ve heard it put very well that AI is either having a Napster moment, in which case we will not recognise the world 10 years from now, or it’s having an iPhone moment and it will get marginally better at best but is essentially in its final form.
I personally think it’s more like 3D movies and in 20 years when it comes back around we’ll look at this crap like it was Red and Blue glasses.
I think it’s the iPhone stage. We’ve had predictive text in some form or other for a long time now, but that’s just LLMs. Can’t speak for the image/video generators, but I expect those will become another tool in the box that gets better but does the same thing.
I just can’t see a whole lot of improvement in these products making any changes to how we use them already.
Transformer based LLMs are pretty much at their final form, from a training perspective. But there’s still a lot of juice to be gotten from them through more sophisticated usage, for example the recent “Atom of Thoughts” paper. Simply by directing LLMs in the correct flow, you can get much stronger results with much weaker models.
How long until someone makes a flow that can reasonably check itself for errors/hallucinations? There’s no fundamental reason why it couldn’t.
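For what it’s worth, the basic shape of such a self-checking flow is simple: draft an answer, have the model critique it, and revise until the critique passes. Here’s a minimal sketch; `call_llm` is a hypothetical stand-in for whatever chat-completion API you’d actually use, stubbed out here so the loop is runnable:

```python
# Hypothetical sketch of a self-checking LLM flow: draft an answer,
# ask the model to critique it, and revise until the critique passes.
# call_llm is a stub standing in for a real chat-completion API call.

def call_llm(prompt: str) -> str:
    # Toy stand-in model: flags hedged drafts (containing "maybe")
    # as failing the critique, otherwise approves them.
    if prompt.startswith("CRITIQUE:"):
        return "FAIL" if "maybe" in prompt else "PASS"
    if prompt.startswith("REVISE:"):
        # "Revision" just strips the hedge for this toy example.
        return prompt.removeprefix("REVISE:").replace("maybe ", "")
    return "maybe 42"  # first draft comes back hedged

def answer_with_self_check(question: str, max_rounds: int = 3) -> str:
    draft = call_llm(question)
    for _ in range(max_rounds):
        verdict = call_llm(f"CRITIQUE:{draft}")
        if verdict == "PASS":
            break
        draft = call_llm(f"REVISE:{draft}")
    return draft

print(answer_with_self_check("What is 6 * 7?"))  # → 42
```

The hard part, of course, isn’t the loop; it’s whether the critique step can actually detect errors, which is exactly the open question.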
> … a flow that can reasonably check itself for errors/hallucinations? There’s no fundamental reason why it couldn’t.
Turing Completeness maybe?
I read that as “if you do the thinking for them, LLMs are quite good”
When an LLM fabricates a falsehood, that is not a malfunction at all. The machine is doing exactly what it has been designed to do: guess, and sound confident while doing it.
When LLMs get things wrong they aren’t hallucinating. They are bullshitting.
source: https://thebullshitmachines.com/lesson-2-the-nature-of-bullshit/index.html
> guess, and sound confident while doing it.
Right, and that goes for the things it gets “correct” as well, right? I think “bullshitting” can give the wrong idea that LLMs are somehow aware of when they don’t know something and can choose to turn on some sort of “bullshitting mode”, when it’s really all just statistical guesswork (plus some preprogrammed algorithms, probably).
Detecting a hallucination programmatically is the hard part. What is truth? Given an arbitrary sentence, how does one accurately measure the truthfulness of it? What about the edge cases, like a statement that is itself true but misrepresents something? Or what if a statement is correct in a specific context, but generally incorrect?
I’m an AI optimist but I don’t see hallucinations being solved completely as long as LLMs are statistical models of languages, but we’ll probably have a set of heuristics and techniques that can catch 90% of them.
I mean, in the end, I think it’s literally an unsolvable problem of intelligence. It’s not like humans don’t “hallucinate” ourselves. Fundamentally your information processing is only as good as the information you get in, and if the information is wrong, you’re going to be wrong. Or even just mistakes. We make mistakes constantly, and we’re the most intelligent beings we know of in the universe.
The question is what issue exactly we’re attempting to solve regarding AI. It’s probably more useful to reframe it as “The AI not lying/giving false information when it should know better/has enough information to know the truth”. Though, even that is a higher bar than we humans set for ourselves
Yeah, like, have you ever met one of those crazy guys who think the pyramids were literally built by aliens? Humans can get caught in a confidently wrong state as well.
We used to call those the AI winters. Barely any progress for years, until someone has a great idea and suddenly there is a new form of AI and a new hype cycle, again ending in an AI winter.
In a few years, somebody will find a way that leaves LLM in the dust but comes with its own set of limitations.
AI image generation is pretty cool. If it’s used in moderation and as a test bed. It’s a tool, not a complete piece of work imo.
I could see text gen being useful for some things. But I feel like it can very easily and sloppily become a crutch. If it were used in the same spirit as a spreadsheet, I’d feel better about it.
LLMs are just ridiculous to me.
I haven’t gotten anything of use from Apple Intelligence. Even just using it is difficult, and Siri is possibly dumber than she was before.
Siri has not been integrated with AI yet. They pushed that to 2026.
Based on what I’ve seen of my partner’s phone, it provides an assessment of text messages. Why would someone want that?
I’ve used the “writing tools” extensively for minor changes, like changes to capitalization on a large block of text. It makes the phone a little less of a consumption-only device.
I’ve also found the image editing tools handy from time to time, and the automatic calls to ChatGPT on the more complex natural-language questions can sometimes be handy, even if you need to wait a while for the response.
The notification summaries are sometimes very handy and sometimes absurdly incorrect and misleading.
I’m really looking forward to Siri being less frustratingly stupid, but we’ve got a while to wait for that, and we probably shouldn’t set our expectations too high. I do respect that they’ve not shipped it rather than shipping something broken, though.
I just don’t get why they haven’t put AI in the already established “assistants” yet.
Why aren’t Siri or Google Home integrated? Why make new things instead of improving the tech you already have?
If I had to guess, it’s either because of branding, or because they know it doesn’t work that well yet. Probably both.
This has been a huge letdown. I thought at the very least home assistants, which are marginally useful, could become less infuriating with an intelligence boost, but not at all. At this point I’d be happy if I could simply upload a damn 64 KB thesaurus to my Alexa so she would not ignore everything I say when I don’t remember the exact right commands.
Sounds like you should check out Home Assistant.
Yeah, maybe. Switching infrastructure would be a headache and expensive, though. Last I checked, the off-the-shelf versions, which is how I would want to start at least, didn’t have wifi capability. Is there a turnkey version that does now?
Yeah, ODROID partnered with them to create an off-the-shelf product. It’s pretty pricey, though, but honestly you could run it on a Pi 3B+ for pretty cheap.
- It doesn’t work that well.
- If they do that, then they can’t trick everyone into buying new devices, thus helping recoup the untold billions dumped into LLM-based content theft.
A whole generation of basically disposable devices at that
My iPhone 14 Pro has no AI and still works as wonderfully well as it did the first day I bought it. And I know that on iOS, you can simply disable the AI element.
But, yeah, the “promise of AI” was always bullshit.
AI is about as useful as the movement to take away human assistants for troubleshooting issues and replace them all with centralized hubs. These hubs are built on the assumption that they will answer anything and everything people have a concern about. Their fundamental flaw, however, is that they don’t cover every base, and people are left with limited options: they can forget it and just live with it, or they can jump through a few more hoops until they’re talking with a human.
And this kind of over-reliance on AI is what will turn people off from it. I’m seeing AI implemented in places where nobody asked for it, while there are missed opportunities where AI could actually help but isn’t being used, for some reason.
AI in and of itself isn’t an entirely bad thing. It is just, once again, another great idea ruined by blind executors in big tech who just don’t get it.
Someone should just sue Apple for false advertising at this point. Apple “Intelligence” is utterly useless in its current form.
But it can make custom emoji! 🫨 That alone is worth the hundreds of millions Apple spent.
Even for devices that will stand the test of time on their own, they’re still being unnecessarily modified by the addition of extra nonsense to support AI boondoggles.
I was talking to our company’s account manager from one major PC manufacturer, and he agreed that a generation of laptops with a likely-to-be-useless-in-future Copilot button permanently emblazoned on their keyboards will really date this era.
The computers themselves will be fine - they have some extraneous hardware but that doesn’t really detract from their usability - but there’s a better than even chance that logo will exist as a reminder long after memories of what it was supposedly for begin to fade.