I see a lot of discussion here about over-hyped AI, and then I see the huge AI bubble at my workplace, in the news, in PR statements, etc.
Are there folks who work at companies – I’m especially interested in those in tech – that have a reasonable handle on AI’s practical uses and its limitations?
Where I work, there’s:
- a dashboard of AI usage by team and individual, which will definitely not affect performance review in any way
- a mandate last month to use one AI tool, and a new mandate this month to abandon it and adopt a different one
- quarterly goals where almost every one includes some form of “with AI”
- letters from the CEO asking which teams are using AI to implement features straight from ticket descriptions, or (inspired by the news) to run flocks of agents, asking for positives with no mention of negatives
- a team creating a review pipeline for AI-generated output in our product, planning to review the quality of the output… using AI
- teammates writing code and designs and sending them for review without checking functionality or pruning irrelevant portions, despite a statement that everyone is responsible for reviewing AI output
Is all the resistance to overuse of AI grassroots, and is the pressure for rampant adoption uniform among executives/investors? Or are some companies or verticals not drinking the Kool-Aid?
If you look at the comments on YC Hacker News, it’s a relatively sane group of people re: AI. Mostly skeptical early adopters with experience in the industry. It’s worth your time.
I worked at one that actually wasn’t too bad, except we had a peer review system for client reports, and I was horrified to see how many people had such a poor grasp of English grammar that they just assumed the AI output was always correct and better than what the human wrote.
And I don’t mean people whose second language was English. I mean native English speakers were giving me AI-generated feedback to change sentences in ways that would completely change the context, or horribly maim phrases into the past tense where the tense of the subject was very much important.
I could easily ignore the changes from coworkers, but a handful of managers would then give performance feedback telling me to utilize AI and Grammarly to improve my report quality, even though all of their report feedback was utter garbage lol.
On a related note, Grammarly can also go screw itself. That joke of a software suite still doesn’t hold a candle to Word 2007’s editor.
I fucking hate Grammarly. And the modern Outlook webmail suggestions can go eat a bag of dicks as well.
I’m too old for this shit - too old for the original show, I mean, but for some reason, my brain wants to make that title work:
Who works at a (tech) company that’s not delirious about AI?
SPONGE! BOB! SQUARE! PANTS!
It completely doesn’t work.
I’m not a lyricist, but this is at least closer…
Who works for a place that licks AI’s taint
Well, you put way more into it than I had, so I feel I have a refinement to give back as thanks - it just needs a single extra syllable. Perhaps:
Who works for a place that just licks AI’s taint
Now it scans. :)
I haven’t seen the whole show but I have been under the impression that SpongeBob and intelligence don’t cross paths very often.
who works at a tech company that’s not delirious about AI?
-- OP
I work at Tech Company that loves AI
-- people with poor reading comprehension replying to this thread
I required an outlet to bitch regardless of my ability to reed werds gud.
I’m sure I’m not the only one : D
Honestly, fair
The one I work at went “all in” about a month ago. I started noticing a dramatic increase in garbage/nonsensical code at the end of last week. I didn’t make the connection between the two until Tuesday.
I’ve got a manager who usually listens, and they asked me to try it and take notes because they know I’ll tell them the truth. … I’ve got a lot of examples prepped for our next meeting.
The hard part is definitively blaming LLMs, because I don’t have time to track down every single commit and analyze it for LLM usage, but there’s 100% a correlation.
Yeah, I wish git blame could highlight the lines written by Claude/Codex. Usually when I ask my colleagues ‘so did you use AI much for this one?’ they will say yes. But it makes code review that much harder, especially when they then take my PR comments and feed them to the LLM, so I’m coding by playing telephone with a bot.
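You can get partway there without real tooling. Here’s a rough sketch, assuming the assistant leaves a Co-authored-by trailer in the commit message (Claude Code adds one by default; plenty of setups strip it or never add one), so treat the counts as a floor rather than proof:

```python
# Rough tally of commits per author that carry an AI co-author trailer.
# Assumption: the coding assistant appends "Co-authored-by: ..." to its
# commit messages. Anything pasted in by hand won't show up here.
import subprocess
from collections import Counter

AI_TRAILERS = ("co-authored-by: claude", "co-authored-by: github copilot")  # assumed names

def ai_commit_tally(repo_path="."):
    # %x1f / %x1e are unit/record separators so commit bodies with newlines stay intact
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%H%x1f%an%x1f%B%x1e"],
        capture_output=True, text=True, check=True,
    ).stdout
    tally = Counter()
    for record in log.split("\x1e"):
        record = record.strip()
        if not record:
            continue
        _commit, author, body = record.split("\x1f", 2)
        if any(t in body.lower() for t in AI_TRAILERS):
            tally[author] += 1
    return tally

if __name__ == "__main__":
    for author, count in ai_commit_tally().most_common():
        print(f"{count:4d}  {author}")
```

It won’t give you line-level blame, but a per-commit count at least gives you something concrete to put next to the correlation you’re seeing.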
Unfortunately they’ll never do that because they’re owned by Microslop and they can’t allow any marring of AI’s reputation
We have offshore devs that I think found the copilot button in vscode recently…seeing lots of em dashes in code review today 🫠
I work at a small software company. There is a push to use AI, but I would say in a reasonable way. It does speed up some tasks, but no one is vibe coding and pushing things without proper review. So far no one is tracking usage or pushing us to use it more. It’s just a new tool we’re encouraged to be familiar with and use reasonably.
Software company here. There’s a strong external push for us to shove AI into every corner of our UI, but so far we’ve largely kept it out.
The one place we are using it is a pretty strong use case (essentially sentiment analysis). We’ve had a chatbot in dev for a while, but are struggling to find a valid use case for it. I think most of us are hoping the AI craze dies down and our lack of AI suddenly stops being a marketing point our competitors use against us.
Advertise your lack of AI; it will draw customers who are sick of the slop.
Not in tech, but LLMs have been great for my safety and compliance consulting business. I can honestly say LLMs have made me thousands of euros.
Before LLMs, I would spend quite a bit of my regular workday on creating safety plans and coming up with systems to improve conditions and ensure compliance.
Now, with the power of LLMs, management can generate those plans themselves. So instead of me spending my normal workday on it, I get to bill my emergency rate when the hallucinated slop gets rejected and they need something actually legal at the last minute.
I sometimes have to get involved with writing safety protocols. Not my favourite task, but I’ve always been super nervous about using AI to assist, because it’s such a specific, rigid and important thing that needs to be expressed as simply as possible, all of which AI is bad at. Care to share how you use it?
They don’t; they said their thing is charging emergency rates to bail out other idiots who do use it and trust the output blindly.
That’s on me for not reading. Thanks. I gotta learn that pre-coffee commenting should be double-checked.
You had me in the first half
urge to downvote rising… rising…
…calm
rising… rising… falling… rising
AI slop clean up is the new highest paying job.
oh, got it! going to found a startup for AI slop cleanup. we could use an LLM to automate…
And probably a lot of meh paying ones too, eventually, when the bubble bursts and people realise they’ll never actually be able to trust LLMs.
My company is approaching AI like it’s been approaching anything for the past 40 years: with extreme caution. It’s coming alright, but the engineers are carefully evaluating it for coding, and it certainly isn’t being rolled out recklessly.
I’m one of several die-hards who flat-out refuse to use it - not so much because it’s AI, but because it’s provided by an American company - and my choice is respected. Our CEO sees old-timers like me as the fallback if AI ends up shitting the company’s bed.
I work at a renowned tech company that frequently reminds its employees that AI hallucinates. We do a lot of work for the army; a mistake caused by a hallucinating AI would be a disaster.
Like blowing up a girls’ school, or worse, like the 9/11 sequel John has planned?
Meanwhile we’re just waiting until Hegseth accidentally turns a Bethesda-area Target into a smoking crater because he was drunk-Grokking and fucked up ordering an airstrike to cheer himself up after the mainstream librul media hurt his fee-fees.
Work in a big multinational company. Not a software company, but I’m on an engineering team.
Leadership makes a lot of noise about AI.
The engineers can’t even use git competently. I’ve quietly suggested maybe we should focus on learning software fundamentals instead of chasing dreams, but no one here listens to me.
I run a tech company that doesn’t use any AI:
https://sciactive.com/human-contribution-policy/
We make an email service, and we have a hard stance against any AI in our product.
Y’all hiring? I’m tired of my place being like “AI IS BEST, YOU SHOULD ALL USE IT”
Not yet, but hopefully soon. :)
If you’re willing to hire a fully remote Brit, lemme know in a DM.
My wife’s at a major video game company that, oddly enough, hasn’t gone crazy over AI. Since she’s in localization, she uses DeepL, which has some machine learning behind it but isn’t really an LLM, and LLMs aren’t really being pushed on her since they’d be a downgrade. From what I can tell, their dev team is also just keeping things human-made, although they’re in Japan so that might contribute.
They aren’t saints (they did try to union-bust a few years back), but their stance on AI, along with their creativity-first mentality and recent pay raise guarantees and whatnot, kinda shows they’re paying attention.
Government here - great at research, terrible at generation. If you ask it to find and summarise laws and regulations, it does a great job: it quotes info and can even generate reasonable overviews with a bit of handholding.
Ask it to generate anything that isn’t directly quoted in a specific doc and it goes WILD. Even with some solid training in prompt engineering, it makes you work for focused outputs unless you give it absolutely everything (data, prompt, target template, revision and scoring process). But once the workflow has been solidly validated a few times, I’d rate it “usable”.
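For what it’s worth, the “give it everything” bundle boils down to something like the sketch below. The field names and the call_llm helper are placeholders for whatever client you actually use, not a real API or an official template:

```python
# Illustrative sketch of the "give it clear everything" workflow:
# data, target template and scoring rubric all ride along in one prompt,
# followed by a single self-review pass. call_llm is a hypothetical
# str -> str client, not a real library call.

def build_prompt(source_docs: str, target_template: str, scoring_rubric: str, task: str) -> str:
    return (
        f"TASK: {task}\n\n"
        "SOURCE MATERIAL (quote only from this, cite section numbers):\n"
        f"{source_docs}\n\n"
        "TARGET TEMPLATE (fill every heading, leave none blank):\n"
        f"{target_template}\n\n"
        "SCORING RUBRIC (score your draft against this and revise once before answering):\n"
        f"{scoring_rubric}\n"
    )

def draft_with_revision(call_llm, **prompt_parts) -> str:
    # One drafting pass, one self-critique pass, one revision pass.
    draft = call_llm(build_prompt(**prompt_parts))
    critique = call_llm("Score this draft against the rubric and list concrete fixes:\n\n" + draft)
    return call_llm("Apply these fixes to the draft:\n\n" + critique + "\n\nDRAFT:\n\n" + draft)
```

The point is just that the data, the template and the revision/scoring step all travel in the same prompt; leave any one of them out and, as above, it goes WILD.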
Medical device industry here. Some of our software and electrical engineers are using Claude as a sounding board for ideas, or as a starting point to find possible paths forward when they get stuck with a hard problem. Nobody trusts the model to give an accurate answer. Nobody is being encouraged to use AI models. At the end of the day, all work committed to a project is done by real humans with the normal review processes.
Management is cautiously looking at potential uses for AI in our products, but there is a healthy dose of skepticism all around. If your machine is displaying diagnostic data to a doctor, there cannot be any question as to whether the machine is hallucinating.
Honestly, this is probably the best use case for LLMs.
Tom Scott did something like this 2-3 years ago, where he fed a bunch of his video titles into an LLM and had it come up with 100 new titles in a similar style. Most of the output sucked, a handful he had already done, and a few more sounded plausible but didn’t exist. But he got 8-10 that he could have turned into actual videos (doing all the work himself), and he even did so for a couple.
The hallucination of AI can be used to help a human (artist, programmer, designer, scientist, etc.) make a new connection they couldn’t before, and they can then use that new connection to implement their new idea. But LLMs generally suck for anything more than that, and over-reliance on them slowly erodes people’s ability to think and create.