image description (contains clarifications on background elements)

Lots of different seemingly random images in the background, including some fries, mr. krabs, a girl in overalls hugging a stuffed tiger, a mark zuckerberg “big brother is watching” poster, two images of fluttershy (a pony from my little pony), one of them reading “u only kno my swag, not my lore”, a picture of parkzer from the streamer “dougdoug”, and a slider gameplay element from the rhythm game “osu”. The background is made light so that the text can be easily read. The text reads:

i wanna know if we are on the same page about ai.
if u disagree with any of this or want to add something,
please leave a comment!
smol info:
- LM = Language Model (ChatGPT, Llama, Gemini, Mistral, ...)
- VLM = Vision Language Model (Qwen VL, GPT4o mini, Claude 3.5, ...)
- larger model = more expensive to train and run
smol info end
- training processes for current AI systems are often
clearly unethical and very bad for the environment :(
- companies are really bad at selling AI to us and
giving it a good purpose for average-joe-usage
- medical ai (e.g. protein folding) is almost only positive
- ai for disabled people is also almost only positive
- the idea of some AI machine taking our jobs is scary
- "AI agents" are scary. large companies are training
them specifically to replace human workers
- LMs > image generation and music generation
- using small LMs for repetitive, boring tasks like
classification feels okay
- using the largest, most environmentally taxing models
for everything is bad. Using a mixture of smaller models
can often be enough
- people with bad intentions using AI systems results
in bad outcomes
- ai companies train their models however they see fit.
if an LM "disagrees" with you, that's the training's fault
- running LMs locally feels more okay, since they need
less energy and you can control their behaviour
I personally think more positively about LMs, but almost
only negatively about image and audio models.
Are we on the same page? Or am I an evil AI tech sis?

IMAGE DESCRIPTION END


i hope this doesn’t cause too much hate. i just wanna know what u people and creatures think <3

  • @[email protected]

    this matches my experience too. good IDEs or editors with LSP support allll the way.

    also wanna add that it’s weird to me that we turn to LLMs to generate mountains of boilerplate instead of… y’know, fixing our damn tools in the first place (or using them correctly, or to their fullest) so that said boilerplate is unnecessary. abstractions have always been a thing. it seems so inefficient.

      • Badabinski

        I also 100% agree with you. My work has a developer productivity team that tries to make sure we have access to good tools, and those folks have been all over AI like flies on shit lately. I’ve started to feel a bit like a crazy Luddite because I do not feel like Copilot increases my productivity. I’m spending like 90% of my time reading docs, debugging and exploring fucked up edge cases, or staring off into space while contemplating if I’m about to introduce some godawful race condition between two disparate systems running in kubernetes or something. Senior developers usually do shit that would take hours to properly summarize for a language model.

        And yeah, if I have to write a shitload of boilerplate then I’m writing bad code and probably need to add or fix an abstraction. Worst case, there’s always vim macros or a quick shell oneliner to generate that shit. The barrier to progress is useful because it warns me that I’m being a dummy. I don’t want to get rid of that when the only benefit is that I get to context switch between code review mode and system synthesis mode.
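
        Just to make that last bit concrete, this is the kind of throwaway thing I mean (Python standing in for the shell one-liner, and the field names are made up):

        ```python
        # dumb boilerplate generator: print simple getters for a list of fields,
        # so there's no need to ask an LLM to type them out
        fields = ["name", "email", "created_at"]  # made-up example fields

        for f in fields:
            print(f"    def get_{f}(self):")
            print(f"        return self._{f}")
            print()
        ```

        Paste the output into the class and you’re done; a vim macro gets you the same result without even leaving the editor.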

        • @[email protected]

          Yeah, with seniors it’s even more clear how little LMs can help.

          I feel you on the AI tools being pushed thing. My company is too small to have a dedicated team for something like that, buuuut… As of last week, we’re wasting resources on an internal server hosting Deepseek on absurd hardware. Like, far more capable than our prod server.

          Oh, and we pride ourselves on being soooo environmentally friendly 😊🎉