• Chozo

    “Man who works 10 hours per year tells underlings to work 60 hours per week.”

  • @[email protected]

    I highly recommend Kara Swisher’s recent book “Burn Book” for insights into the tech lads like Brin, etc., as she’s known most of them since the ’90s.

    Really helps contextualize the crazy cocktail of engineering/commercial power with general naivety a lot of these guys have going.

  • @rottingleaf

    So he’s saying they’ve exhausted the applicant pool so badly that they can’t just replace the 60-hour weeks with normal work weeks and 150% or maybe 200% as many Googlers?

    Power and fame break a man. Even if he wasn’t broken from the beginning.

  • @shortrounddev

    Who gives a fuck what Sergey Brin thinks

  • @[email protected]

    Thought this was an Onion article!

    Hey plebs! I demand you work 50% more to develop AGI so that I can replace you with robots, fire all of you, and make myself a double-plus plutocrat! Also, I want to buy an island, a small city, a bunker, a spaceship, and/or something.

  • arthurpizza

    I guess we don’t need it then.

  • @[email protected]

    AGI requires a few key components that no LLM is even close to.

    First, it must be able to discern truth based on evidence, rather than guessing it. Can’t just throw more data at it, especially with the garbage being pumped out these days.

    Second, it must ask questions in the pursuit of knowledge, especially when truth is ambiguous. Once that knowledge is found, it needs to improve itself, pruning outdated and erroneous information.

    Third, it would need free will. And that’s the one it will never get, I hope. Free will is a necessary part of intelligent consciousness. I know there are some who argue it does not exist but they’re wrong.

    • @[email protected]

      The human mind isn’t infinitely complex. Consciousness has to be a tractable problem imo. I watched Westworld so I’m something of an expert on the matter.

  • @Phegan

    He can fuck all the way off.

  • Tony Bark

    They warn us about AGI while simultaneously attempting to sell it to us.

    • @rottingleaf

      Black PR is still PR. Warnings about future weapons, combat robots, and dystopia worked as an advertisement for many people: they want that exact future.

      I think it’s the same with AGI. People think Skynet is cool and want Skynet, because they think it’s the future.

      Except it’s a bit less than that, the way real fascism doesn’t look like Warhammer, just like a criminal district ruled by a gang, scaled up to a country.

  • Kairos

    Or just hire 50% more engineers? Or wait 50% longer?

    • @[email protected]

      With “hire more” you do run up against the “nine women can’t have a baby in one month” limit, but in this case it’s likely to help.

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮

    For how many years? Cuz y’all ain’t anywhere near AGI. You can’t even get generative AI to not suck compared to your competition in that market (which is a pretty low bar) lol

  • @oakey66

    AGI is not in reach. We need to stop this incessant parroting from tech companies. LLMs are stochastic parrots. They guess the next word. There’s no thought or reasoning. They don’t understand inputs. They mimic human speech. They’re not presenting anything meaningful.
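
    The “guess the next word” mechanism the comment describes can be sketched with a toy bigram model (a hypothetical illustration, not a real LLM — actual models use learned neural networks over subword tokens, but the generation loop is conceptually similar: score candidates, pick one, repeat):

    ```python
    from collections import Counter, defaultdict

    # Count which word follows which in a tiny corpus, then always emit
    # the most frequent continuation. No understanding is involved at any
    # point -- only frequency statistics over the training text.
    corpus = "the cat sat on the mat and the cat slept".split()

    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def generate(word, steps):
        out = [word]
        for _ in range(steps):
            if word not in bigrams:
                break  # dead end: this word never had a successor
            word = bigrams[word].most_common(1)[0][0]  # greedy: top candidate
            out.append(word)
        return " ".join(out)

    print(generate("the", 3))  # -> "the cat sat on"
    ```

    The output is fluent-looking word salad stitched from the corpus, which is the point the comment is making, writ small.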

    • @raspberriesareyummy

      I feel like I have found a lone voice of sanity in a jungle of brainless fanpeople sucking up the snake oil and pretending LLMs are AI. A simple control loop is closer to AI than a stochastic parrot, as you correctly put it.
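
      The “simple control loop” comparison can be made concrete with a hypothetical thermostat (names and dynamics invented for illustration). Unlike a single LLM forward pass, it senses the world, compares against a goal, and acts, with its output feeding back into its next input:

      ```python
      # Minimal closed-loop controller: bang-bang thermostat with hysteresis.
      def thermostat_step(temp, setpoint, heater_on):
          # Hysteresis band of +/- 1 degree around the setpoint to avoid
          # rapid on/off switching right at the threshold.
          if temp < setpoint - 1:
              return True
          if temp > setpoint + 1:
              return False
          return heater_on  # inside the band: keep the current state

      temp, heater = 15.0, False
      for _ in range(20):
          heater = thermostat_step(temp, 20.0, heater)
          temp += 0.5 if heater else -0.3  # toy room dynamics
      # temp settles near the 20-degree setpoint
      ```

      The feedback arrow from action back to observation is what the comment is pointing at: even this trivial loop pursues a goal against a changing environment, which next-token prediction alone does not.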

      • @SinningStromgald

        There are at least three of us.

        I am worried what happens when the bubble finally pops because shit always rolls downhill and most of us are at the bottom of the hill.

        • @raspberriesareyummy

          Not sure we need that particular bubble to pop for us to be drowned in a sea of shit, looking at the state of the world right now :( But Silicon Valley seems to be at the core of this clusterfuck, as if all the villains are from there or flock there…

    • @Jesus_666

      That undersells them slightly.

      LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They’re good at that. Need something summarized? They can do that, too. Need a question answered? No can do.

      LLMs can’t generate answers to questions. They can only generate text that looks like answers to questions. Often enough that answer is even correct, though usually suboptimal. But they’ll also happily generate complete bullshit answers, and to them there’s no difference from a real answer.

      They’re text transformers marketed as general problem solvers because a) the market for text transformers isn’t that big and b) general problem solvers are what AI researchers have always been trying to create. They have their use cases, but certainly not ones worth the kind of spending they get.