• @fuck_u_spez_in_particular
    17 points · 2 days ago

    I get the tech, and I still agree with the poster above. I’d even go so far as to say it probably makes a lot of things worse right now, as it generates a lot of bullshit that sounds great on the surface but in reality is just regurgitated stuff the AI has no clue about. For example, I’m tired of reading AI-generated text when a hand-written version would be much more precise and would at least have some character…

    • @[email protected]
      1 point · 23 hours ago

      If you blindly ask it questions without any grounding resources, you’re eventually going to get nonsense, unless the questions are really simple.

      They aren’t infinite knowledge repositories. The training method is lossy when it comes to memory, just like our own memory.

      Give it documentation or some other context and ask it questions about that; it can summarize pretty well and even link things across documents or other sources.
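
      To illustrate what I mean by grounding, here’s a minimal sketch using the OpenAI Python client; the model name, file path, and question are placeholders, not anything from this thread:

      ```python
      # Ground the question by pasting the documentation into the prompt.
      # Model name, file path, and question are placeholders.
      from openai import OpenAI

      client = OpenAI()  # expects OPENAI_API_KEY in the environment

      docs = open("docs/setup-guide.md").read()  # the grounding resource

      response = client.chat.completions.create(
          model="gpt-4o",
          messages=[
              {"role": "system",
               "content": "Answer only from the provided documentation. "
                          "If the answer isn't in it, say so."},
              {"role": "user",
               "content": f"Documentation:\n{docs}\n\nQuestion: How do I configure TLS?"},
          ],
      )
      print(response.choices[0].message.content)
      ```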

      The problem is that people are misusing the technology, not that the tech has no use or merit, even if it’s just from an academic perspective.

      • @fuck_u_spez_in_particular
        1 point · 15 hours ago

        Yes, I know. I’ve tried all kinds of inputs and ways of querying it, including feeding it full code-bases. Long story short: I’m faster just not bothering with AI (at the moment). As I said elsewhere here, I have a theoretical background in this area.

        That said, I really should try training or fine-tuning a DeepSeek model on our code-bases, to see whether it becomes a good alternative to something like the dumb GitHub Copilot (which I’ve also disabled, because it produces a lot of garbage that I don’t want to waste my attention on…). Maybe it’s now finally usable, at least for completion, when it knows details about the whole code-base (not just the snippets GitHub Copilot sees).
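
        If I do, it’d probably be LoRA fine-tuning on the raw source files, along these lines (a sketch, not a recipe: the model id is a real small DeepSeek coder model, but the paths and hyperparameters are pure guesses on my part):

        ```python
        # Hypothetical sketch: LoRA fine-tuning a small DeepSeek coder model
        # on a local code-base. Paths and hyperparameters are made up.
        import glob
        from datasets import Dataset
        from peft import LoraConfig, get_peft_model
        from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                  DataCollatorForLanguageModeling, Trainer,
                                  TrainingArguments)

        model_id = "deepseek-ai/deepseek-coder-1.3b-base"
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        tokenizer.pad_token = tokenizer.eos_token  # so the collator can pad
        model = AutoModelForCausalLM.from_pretrained(model_id)

        # Wrap the base model with small trainable LoRA adapters.
        model = get_peft_model(model, LoraConfig(
            r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
            task_type="CAUSAL_LM"))

        # Treat every source file as one training document.
        sources = [open(p).read() for p in glob.glob("src/**/*.rs", recursive=True)]
        dataset = Dataset.from_dict({"text": sources}).map(
            lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
            remove_columns=["text"])

        Trainer(
            model=model,
            args=TrainingArguments(output_dir="codebase-lora",
                                   per_device_train_batch_size=1,
                                   num_train_epochs=1, learning_rate=2e-4),
            train_dataset=dataset,
            data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
        ).train()
        ```

        Whether that actually beats plain Copilot on our code is exactly what I’d want to find out.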

    • @[email protected]
      4 points · 2 days ago

      Try getting a quick PowerShell script from Microsoft help or Spiceworks, and then do the same with GPT.

      • @fuck_u_spez_in_particular
        2 points · 1 day ago

        So it’s an unreliable boilerplate generator that you need to debug?

        Right, I’ve seen that it’s somewhat nice for quickly generating bash scripts and the like.

        It can certainly produce quick-and-dirty scripts as a starting point. But the code quality is often subpar (and often outright incorrect), which triggers my perfectionism to make it better, at which point I should have just written it myself…

        That said, I agree it can often serve well for exploration, and sometimes you learn new stuff (at least if you weren’t already an expert in the topic, and you should always validate whether it’s correct).

        But actual programming in e.g. Rust is a catastrophe with LLMs (more common languages like JS work better, though).

            • @[email protected]
              link
              fedilink
              English
              223 hours ago

          I use C# and PS/CMD for my job. I think you’re right: it can create a decent template for setting things up, but it trips over its own dick with anything more intricate than simple two-step commands.

    • @surph_ninja
      -9 points · 2 days ago

      It’s one thing to be ignorant. It’s quite another to be confidently so in the face of overwhelming evidence that you’re wrong. Impressive.

      • @fuck_u_spez_in_particular
        3 points · 2 days ago

        > confidently so in the face of overwhelming evidence

        That, I’d really like to see. And I mean more than the marketing bullshit the AI companies are putting out…

        For the record, I was one of the first to jump on the AI hype-train (as a programmer and computer scientist with a machine-learning background), following the development of GPT-1 through GPT-4 and getting excited about writing less boilerplate code, getting help with rough ideas, etc. GPT-4 came close to actually being helpful (same for o1 and Anthropic’s models). Still, I seldom use AI these days (and I observe the same with colleagues and other people I know), because it actually slows me down or gives me wrong ideas that I have to argue against, only to watch it saturate yet again at a local minimum (i.e. it doesn’t get better, no matter what input I try), so that I end up doing the work myself (which I should have done in the first place…).

        The same is true on the image-generation side (first with GANs, now with diffusion-based models).

        I can go into more detail about transformer/attention-based models and their current plateau phase (i.e. more hardware doesn’t actually make things significantly better; it gets exponentially more expensive to make things only slightly better) if you really want…
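
        To put rough numbers on that plateau (a back-of-the-envelope using the published Chinchilla scaling-law fit from Hoffmann et al. 2022, not anything specific to current models):

        ```python
        # Chinchilla fit (Hoffmann et al. 2022): L(N, D) = E + A/N^alpha + B/D^beta
        # N = parameter count, D = training tokens; constants from the paper.
        E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

        def loss(n_params, n_tokens):
            return E + A / n_params**alpha + B / n_tokens**beta

        print(loss(70e9, 1.4e12))   # ~1.94, roughly Chinchilla scale
        print(loss(700e9, 14e12))   # ~1.81, with ~100x the training compute
        ```

        A hundred times the compute for about six percent lower loss: that’s the wall I mean.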

        I hope we get a breakthrough, of course, a model that actually learns to reason. But I fear that will take time, and it might even mean we need a different type of hardware.

        • @surph_ninja
          0 points · 1 day ago

          Any other AI company, and most of that would be legitimate criticism of the overhype used to generate more funding. But how does any of that apply to DeepSeek, and the code & paper they released?

          • @fuck_u_spez_in_particular
            1 point · 1 day ago

            > DeepSeek

            Yeah, it’ll certainly be exciting to see where this goes, i.e. whether it really develops into a useful tool. Though I’m slightly cautious nonetheless: it’s not doing anything fundamentally different (it’s still an LLM); it’s just a lot cheaper and more efficient to train, and open for everyone (which is great).

            • @surph_ninja
              1 point · 1 day ago

              What’s this “if” nonsense? I loaded up a light model of it and have already put it to work.

              • @fuck_u_spez_in_particular
                0 points · 1 day ago

                Have you actually read my text wall?

                Even o1 (which AFAIK is roughly on par with R1-671B) wasn’t really helpful for me. I often (actually, all the time) need correct answers to complex problems, and LLMs just aren’t capable of delivering that.

                I still need to try out whether it’s possible to train it on my/our codebase, so that it’s at least usable as something like GitHub Copilot (which I also don’t use, because it just isn’t reliable enough and generates bugs too often). Also, I’m a fast typist: by the time the answer arrives and I’ve parsed/read/understood the code, I’d already have written a better version myself.

                • @surph_ninja
                  0 points · 22 hours ago

                  Ahh. It’s overconfident neckbeard stuff then.

                  • @fuck_u_spez_in_particular
                    0 points · 18 hours ago

                    You’re just trolling, aren’t you? Have you used AI while coding for an extended period and then tried going without it for a while? I currently don’t miss it… Keep in mind that you still have to check whether all the generated code is correct, and writing code isn’t what usually takes the most time for me anyway: it’s debugging, and finding architecturally sound, good solutions to the problem. And AI is definitely not good at that (even if you’re not that experienced yourself).