• 0 Posts
  • 87 Comments
Joined 1 year ago
Cake day: March 3rd, 2024



  • ya know, it’s kind of funny here on the other side of the LLM epoch. 10 years ago, scraping data from wherever and however you could get it and training models on whatever compute you could get your hands on was cool and edgy, with a sort of pirate hacker vibe. kind of like pirating movies: the ethics were murky, but word things the right way and it turns out the copyright system itself is the problem, and we’re just underdogs taking advantage of a broken system. turn a few pages and now it’s not researchers and bedroom hackers but multibillion dollar corporations who bought up all the pirates and turned them into privateers. from the perspective of the ones building the pipelines, little has changed except the scale of the resources; what’s really changed is where those resources come from and who is driving the policy.

    not a rebuttal. just a thought.



  • i worked in Android development for years. i used Kotlin and Jetpack Compose back when they were in alpha. their docs weren’t perfect, but they existed. in my experience good documentation evolves with the project and isn’t tacked on as an afterthought.

    otherwise i get your point and really don’t know enough about the drama to pick a side. i just want my software to work 🤷‍♂️

    ETA: honestly, documentation as a first-class primitive in Nix is a selling point. if only that consistency could be applied to the high-level docs


  • The Nix project has long intended to release version 3.0 when flakes are stable. With Determinate Nix 3.0, we’ve fulfilled that promise

    i noticed this language recently as well. i’m glad Nix upstream is defending itself, but honestly, the place where Nix “3rd party” tooling shines is documentation. i swear to god the #1 thing holding back Nix adoption is piss poor documentation. and i love the idea of Nix, to be clear, but if the official docs for installing popular user space software like CUDA and the Rust toolchain are years out of date, or recommend solutions that are nonstandard or otherwise clunky, then it’s silly to recommend it for my work. and also to be clear, i could pull strings and make this happen at my company (we’ve done it for Rust), but i will not stick my neck out for this kind of tribalism.

    on one hand tho, Determinate Systems provided clear install instructions for flakes (an important feature for a lot of maintainers) and made it clear what the differences were (some of which are clearly better defaults), even if the verbiage is a bit aggressive. i honestly don’t know what it will take. i’m slowly but surely becoming competent in the ecosystem, but i get the vibe from forum posts (which i’m forced to read in lieu of docs) that there’s this “why don’t you already get this” attitude from the established community, and maintainers act like there’s no reason for these “soft forks” to exist. Nix is not straightforward, and, no, the language isn’t simple enough to learn in an hour. adoption requires good docs.




  • i’d say generally you’re right to keep them so that you don’t have to install them again on every update. it depends on how heavy the dependencies are, how often you update, whether you’re planning on removing the package soon, etc. it’s tough to make a recommendation without knowing your situation, but personally i’d be on the lookout for a binary distribution or other more efficient install options. barring that, i’d probably keep them as long as they aren’t overriding another system library.




  • this is just combining existing data scraping tools with LLMs to create a pretty flimsy and superfluous product. they use the data to do exactly what they say they do. if they wanted to scrape data on you, they could already do that. all they get from this is your interest and maybe some other PII like your email address. the LLM is just incidental here. it’s honestly not even as bad privacy-wise as a “hot or not” or personality quiz.



  • chrash0 to Technology · Steve Ballmer was an underrated CEO. · 5 months ago

    you have to do a lot of squinting to accept this take.

    so his wins were copying competitors, and even those products didn’t see success until they were completely revolutionized (Bing in 2024 is a Ballmer success? .NET becoming widespread is his doing?). one thing Nadella did was embrace the competitive landscape and open source, with key moves like acquiring GitHub and open sourcing .NET. i honestly don’t have the time to fully rebut this hot take, but i don’t think the Ballmer haters are totally off base here. even if some of the products started under Ballmer are now successful, it feels disingenuous to attribute their success to him. it’s like an alcoholic dad taking credit for his kid becoming an actor. Microsoft is successful despite him.


  • chrash0 to Linux@lemmy.ml · What desktop environment do you use and why? · 5 months ago

    these days Hyprland but previously i3.

    i basically live in the terminal unless i’m playing games or in the browser. these days i use most apps full-screen and switch between workspaces, and i launch apps with wofi/rofi. this has all become very specialized over the past decade, and it almost has a “security by obscurity” effect: it’s not obvious how to do anything on my machines unless you have my muscle memory.

    not that i necessarily recommend this approach generally, but i find value in mostly using a keyboard to control my machines and minimizing visual clutter. i don’t even have desktop icons or a wallpaper.




  • All programs were developed in Python language (3.7.6). In addition, freely available Python libraries of NumPy (1.18.1) and Pandas (1.0.1) were used to manipulate data, cv2 (4.4.0) and matplotlib (3.1.3) were used to visualize, and scikit-learn (0.24.2) was used to implement RF. SqueezeNet and Grad-CAM were realized using the neural network library PyTorch (1.7.0). The DL network was trained and tested using a DL server mounted with an NVIDIA GeForce RTX 3090 GPU, 24 Intel Xeon CPUs, and 24 GB main memory

    it’s interesting that they’re using pretty modest hardware (i assume they mean 24 cores not CPUs) and fairly outdated dependencies. also having their dependencies listed out like this is pretty adorable. it has academic-out-of-touch-not-a-software-dev vibes. makes you wonder how much further a project like this could go with decent technical support. like, all these talented engineers are using 10k times the power to work on generalist models like GPT that struggle at these kinds of tasks, while promising that it would work someday and trivializing them as “downstream tasks”. i think there’s definitely still room in machine learning for expert models; sucks they struggle for proper support.