• The Giant Korean
    link
    English
    137
    1 year ago

    Does it tell you to Google the problem and then downvote you?

    • @[email protected]
      link
      fedilink
      English
      33
      1 year ago

      Hence recursion since Google just takes you back, which leads to stack overflow because there is no exit condition.
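The joke can be sketched in a few lines of Python (function names are made up for illustration): mutual recursion with no exit condition grows the call stack until it overflows.

```python
# Toy illustration: a "search" loop with no base case.
def ask_stack_overflow(question):
    # The answer just tells you to Google it...
    return google(question)

def google(question):
    # ...and Google sends you right back. No exit condition,
    # so the call stack grows until Python gives up.
    return ask_stack_overflow(question)

try:
    ask_stack_overflow("How do I exit vim?")
except RecursionError:
    overflowed = True
```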

      • The Giant Korean
        link
        English
        8
        1 year ago

        Which would be especially messed up if your original question was about recursion.

      • @[email protected]
        link
        fedilink
        English
        6
        1 year ago

        This bullshit happens too often lmao

        “Googles problem, finds post”

        “Why are you asking this use Google”

        Gee, thanks

    • AnonymousLlama
      link
      fedilink
      5
      1 year ago

      “to keep the quality of answers high, we may arbitrarily close questions, regardless of how many upvotes it gets and how helpful it is” - stackoverflow

  • @[email protected]
    link
    fedilink
    English
    47
    1 year ago

    That would be pretty easy.

    return "Why are you even trying to do it this way?\n$link_to_language_spec\nThis should be closed.";

    • wagesj45
      link
      fedilink
      15
      1 year ago

      I thought the point was a mental BDSM exercise where you come to others for help and are instead punished for your ignorance.

  • @[email protected]
    link
    fedilink
    English
    25
    1 year ago

    It really puts their stance on “no AI generated answers” in a different light.

    Basically, “no AI generated answers unless we do it”.

    • @[email protected]
      link
      fedilink
      English
      1
      1 year ago

      Well, using AI-generated answers to train their own AI would bring down the quality of answers, and worse quality means less money. Don’t you want them to make any money??!!

  • Carlos Solís
    link
    fedilink
    English
    24
    1 year ago

    Stack Overflow is unique as a site, in the sense that its contributions are under a license that allows reuse (Creative Commons Attribution-ShareAlike) as long as the individual users are properly credited. Does this mean that OverflowAI keeps the credit metadata and knows who wrote each individual part of an answer?

    • @[email protected]
      link
      fedilink
      English
      14
      1 year ago

      AI doesn’t work that way. No one wrote “part of the answer.” It’s more like each contributor cast a vote on what the next token should be, and it randomly picks one of the top ten voted tokens. (Very, very roughly.)

      • Carlos Solís
        link
        fedilink
        English
        2
        1 year ago

        Fair enough, but at least there should be a way for OverflowAI to list which contributors had the strongest link to the given answer, right?

        • @[email protected]
          link
          fedilink
          English
          14
          edit-2
          1 year ago

          Edit: definitely read the other responses because apparently there are some techniques I wasn’t aware of and don’t understand nearly as well as I understand the underlying AI technology - and I’m only an enthusiast layman.

          I don’t think there is any way of doing that. AI is like a huge matrix that says ‘if (’ is followed by

          ’ x’: 60%

          ’ foo’: 19%

          ’ person’: 9%

          Etc.

          And then it does it all over again for the next token based on randomly selecting one of the tokens and then saying ‘if ( person’ is followed by

          ‘.id’: 30%

          ‘.name’: 27%

          Etc.

          So even a simple ‘if person.name.startsWith(“foo”) {’ is the aggregate result of thousands of contributors - really, pretty much every author of every code snippet ingested as training material.

          There is no single author even if the code matches existing code token for token. The only exception would be code that is so esoteric that there is only a single author writing code that does a particular thing. But even in that case, there is nothing in the probability matrix to indicate that a particular sequence of tokens is unique to a certain author. Best you could do is full text search a line of code to see if it matches anything in the training data and if there is a very small set of authors to whom credit might be assigned. That might be possible, but it would be an add-on (and significant performance hit) to the actual AI itself. Sort of like how browser integrated AI just runs a search and feeds the result into the context to make the output more likely to contain information in the top results.
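The token-by-token sampling described above can be sketched roughly in Python. The context strings and percentages are the toy numbers from the comment, not real model output; this is top-k sampling in miniature.

```python
import random

# Hypothetical toy "probability matrix": for each context, the
# aggregated vote shares for the next token (illustrative numbers).
next_token_probs = {
    "if (": {" x": 0.60, " foo": 0.19, " person": 0.09, " y": 0.12},
    "if ( person": {".id": 0.30, ".name": 0.27, ".age": 0.43},
}

def sample_next(context, top_k=10):
    # Keep only the top-k highest-probability tokens, then sample
    # one of them proportionally to its weight.
    probs = next_token_probs[context]
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    tokens, weights = zip(*top)
    return random.choices(tokens, weights=weights)[0]

token = sample_next("if (")
```

Note there is no author field anywhere in the table: the weights are the aggregate of the whole training set, which is why per-token attribution is lost.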

        • ShustOne
          link
          fedilink
          English
          6
          1 year ago

          Check out the article and feature video. It does appear to link to answers it pulled from. Bing and Bard do the same. Posters saying it’s impossible are mistaken.

          • Carlos Solís
            link
            fedilink
            English
            4
            1 year ago

            Thanks for the TLDW - I could ogle a bit of the article but since I was at work, I couldn’t just play the video out loud.

          • wagesj45
            link
            fedilink
            4
            1 year ago

            Posters aren’t saying that it’s impossible to put search results through an LLM and ask it to cite the source it reads. They’re saying that the neural networks, as used today in LLMs, do not store token attribution in the vocabulary or per node. You can implement a system around the neural network that provides it the proper input (search results) and prodding (a prompt that encourages the network to bias toward citation), but a single LLM can’t conceptualize that on its own.

          • @[email protected]
            link
            fedilink
            English
            2
            edit-2
            1 year ago

            If it’s doing a search for the code, pulling it into the context, and then spitting it back out in slightly modified form, then it can attribute the source it pulled in. That’s very different from attributing the training data, because code pulled into the context by a search has a strong influence on the output. The output is still generated the same way, but it would be reasonable to credit the author of the code that is pulled in. However, the code in the training data cannot be credited. How you would pull in just the right piece of code in the first place, though, is a bit of a mystery to me.
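A rough sketch of the search-then-attribute flow described here (all names and the prompt format are hypothetical, not OverflowAI's actual API): snippets retrieved by a search go into the context alongside their authors, and the prompt asks the model to cite them.

```python
# Hypothetical retrieval-augmented flow: retrieved snippets carry
# their authors, so attribution comes from the search step, not
# from the model's weights.
def build_prompt(question, search_results):
    context_parts = []
    for i, result in enumerate(search_results, start=1):
        context_parts.append(
            f"[{i}] (author: {result['author']})\n{result['snippet']}"
        )
    context = "\n\n".join(context_parts)
    return (
        "Using the snippets below, answer the question and cite the "
        "snippet numbers you relied on.\n\n"
        f"{context}\n\nQ: {question}"
    )

results = [
    {"author": "alice", "snippet": "if person.name.startswith('foo'): ..."},
]
prompt = build_prompt("How do I check a name prefix?", results)
```

The model still generates token by token as before; the citation is only as good as the retrieval step that chose what to put in the context.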

    • @[email protected]
      link
      fedilink
      English
      9
      1 year ago

      Then I’m guilty of breaking the license. I have always been stealing code from Stack Overflow. Well, since I’m a senior dev right now I steal only from answers.

    • ShustOne
      link
      fedilink
      English
      5
      1 year ago

      It does seem to do that in the feature video. It appears to link to all the answers it pulled from.

  • @TwinTurbo
    link
    English
    20
    1 year ago

    No users to answer questions? No problem…

  • @gbuttersnaps
    link
    English
    18
    1 year ago

    The only answer you ever get is “Closed: Marked as duplicate question.”

  • @genericnickname
    link
    English
    16
    1 year ago

    I’m not liking the announced changes to search. It sounds like we will be losing lexical search and, in exchange, getting the same technology that allows Google to answer questions different from the one we asked.

    How many minutes after starting to use OverflowAI until we get something like “As a large language model trained by the Stack Exchange Network, I cannot answer duplicated questions”?

    • @[email protected]
      link
      fedilink
      1
      1 year ago

      That’s when I go back to ChatGPT or Google Bard. They’ve helped me with problems with less aggravation than SO.

    • @[email protected]
      link
      fedilink
      English
      8
      1 year ago

      I use ChatGPT frequently for programming and I’ve found it to be pretty good.

      The key is using its conversational nature, as this gets better results.

      Start simple and expand. You can’t just ask it to write huge chunks of code.

      • @[email protected]
        link
        fedilink
        English
        2
        1 year ago

        Yeah, it works well as long as the code is rather simple and occurred often in the training set. But I seldom use it currently (I’ve got slightly more complex stuff going on). It’s good, though, for finding new stuff (it often introduces a library I didn’t know yet). But actual code… I write myself (I’ve tried it often, and the quality just isn’t there… and I think it even got worse over the last couple of months, as studies also suggest).

    • @[email protected]
      link
      fedilink
      English
      3
      1 year ago

      Agreed. I asked ChatGPT to convert Python code to JavaScript and got back a code sample with new bugs.

      • @fidodo
        link
        1
        1 year ago

        I’ve found it great for asking documentation questions. It saves me a ton of time having to search through documentation myself. The problem is that when it encounters something it doesn’t have information on, it’ll just confidently make shit up, and if you’re not enough of an expert to recognize when that happens, you can be misled. It still saves me time, but I use it as a recall tool to get me started when I’m learning to do something new, and I’d never use the code it puts out without reading through it line by line. I’m also experienced enough to know when it’s wrong and how to refactor its examples. People new to programming could get set down the wrong path by over-relying on GPT to teach them.

    • @fidodo
      link
      2
      1 year ago

      I’ve gotten really good results asking ChatGPT for programming help. The problem is that it’s wrong maybe 10% of the time, and when it’s wrong it’s very confidently incorrect. That wasn’t a problem for me because I knew when it was wrong, could course-correct it, and still saved time getting to the right solution. But if someone who’s still getting started is trying to use ChatGPT to learn, they could easily be misled because they won’t know when its output is wrong.

        • @fidodo
          link
          2
          1 year ago

          Definitely depends on the type of question. I find that for documentation-type questions, like how to do something with this library, I get the 90% good answers, which makes sense because that library’s documentation is probably in the training data. But for more open-ended questions, like how to solve this problem, I see similar performance to what you’re saying. I think it’s a good retrieval and synthesis tool that can save a ton of time if you already have a high-level plan of action and just use it to fill in some specific details.

    • @[email protected]
      link
      fedilink
      -1
      1 year ago

      The code it gives me generally just throws me into the debug stage, skipping right over the me writing buggy code stage.

  • @[email protected]
    link
    fedilink
    English
    13
    1 year ago

    I get the whole community resource and all that hoorah, but what bothers me the most is that there’s some C*O somewhere padding his bonus and CV, waiting for the ship to sink so he can move on to the next thing where he can sing praises to the AI revolution.

  • JackbyDev
    link
    fedilink
    English
    13
    1 year ago

    I feel like a better solution would be to post a generative-AI community answer on every new question and have folks upvote or downvote it like normal.

    • @alehc
      link
      English
      1
      1 year ago

    I don’t think this is it. I wouldn’t want newbies to have to wait for the community to tell them that running sudo rm -Rf / or some other useless/dangerous command is a bad idea…

  • @[email protected]
    link
    fedilink
    English
    11
    1 year ago

    I understand Google and Microsoft getting into it, as it makes sense as a “better” Google search, but for Stack Overflow it sounds like they have just given up on their current platform.