Article summarized by AI below: The article argues that artificial intelligence (AI) is not a threat to humanity, but a powerful tool to solve global challenges such as climate change, poverty, disease, and inequality. It gives examples of how AI is already being used to improve health care, education, agriculture, and energy efficiency. It also discusses the ethical and social implications of AI, and how we can ensure that it is aligned with human values and goals. The article concludes that AI will save the world if we use it wisely and responsibly.

  • ddh

    This article all but dismisses the ethical and social implications of AI.

    The climax reads: “The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not.”

    Then it exhorts us to barrel ahead at full speed with zero concern for anything other than ‘beating’ China.

    It’s so one-sided it’s actually kind of adorable.

  • @realitista

    Not that I entirely disagree with the case he's putting forth, but it's very telling that the examples he chose of things people once thought would destroy the earth, automobiles and electricity, are exactly the technologies now fueling runaway global warming and killing millions every year. Not to say we didn't get great benefit out of them, but the fact that this obvious contradiction is lost on him suggests his blinders are on a bit too tight:

    Historically, every new technology that matters, from electric lighting to automobiles to radio to the Internet, has sparked a moral panic – a social contagion that convinces people the new technology is going to destroy the world, or society, or both.

  • @[email protected]

    The article concludes that AI will save the world if we use it wisely and responsibly.

    That’s a big if.

  • @[email protected]

    I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.

    First red flag: does this guy genuinely think that war is waged with the intent to minimize casualties? That almost seems unserious at face value, but I'll keep reading.

    But the good news doesn’t stop there. We also get higher wages. This is because, at the level of the individual worker, the marketplace sets compensation as a function of the marginal productivity of the worker. A worker in a technology-infused business will be more productive than a worker in a traditional business. The employer will either pay that worker more money as he is now more productive, or another employer will, purely out of self interest. The result is that technology introduced into an industry generally not only increases the number of jobs in the industry but also raises wages.

    What???

    And besides, why isn't he arguing that AI will take your jobs? The compelling argument for AI here is that AI will take your job and that will be a good thing. That AI will do all of the tasks necessary to keep humans alive and thriving and you will still have to work is extremely dystopian. He justifies this with a quote from Milton "I am in favor of cutting taxes under any circumstances and for any excuse, for any reason, whenever it's possible" Friedman.

    Next up is his completely intellectually toothless argument that the owners of AI will not keep all of the profits from AI because factory owners didn’t keep the profits of their factories either???

    As it happens, this was a central claim of Marxism, that the owners of the means of production – the bourgeoisie – inevitably steal all societal wealth from the people who do the actual work – the proletariat. This is another fallacy that simply will not die no matter how often it’s disproved by reality.

    You don’t have to be a Marxist to look around and realize that owning the thing allows you to take as much of the money from it as you want to. Is he seriously claiming that being a CEO or a major shareholder doesn’t make you richer?

    The flaw in this theory is that, as the owner of a piece of technology, it’s not in your own interest to keep it to yourself – in fact the opposite, it’s in your own interest to sell it to as many customers as possible. The largest market in the world for any product is the entire world, all 8 billion of us. And so in reality, every new technology – even ones that start by selling to the rarefied air of high-paying big companies or wealthy consumers – rapidly proliferates until it’s in the hands of the largest possible mass market, ultimately everyone on the planet.

    […]

    In short, everyone gets the thing – as we saw in the past with not just cars but also electricity, radio, computers, the Internet, mobile phones, and search engines.

    And he can't see that everyone gets a car but nobody gets a car factory? This seems glaring, especially with companies fighting so aggressively against the right to repair, clawing every last bit of ownership of the production and maintenance of products into their own hands.

  • modulus

    Pretty awful article. I’ll just point out a few issues.

    So in practice, even when the Baptists are genuine – and even when the Baptists are right – they are used as cover by manipulative and venal Bootleggers to benefit themselves.

    If so, and if the "Baptists" are right, the solution is still not to ignore them.

    First, recall that John Von Neumann responded to Robert Oppenheimer’s famous hand-wringing about his role creating nuclear weapons – which helped end World War II and prevent World War III…

    Nuclear weapons didn't help end World War II, and if they helped prevent World War III, a proposition yet to be determined, it wasn't thanks to von Neumann, who said: "With the Russians it is not a question of whether but of when. If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o'clock, I say why not one o'clock?" Forgive me, but taking lessons on existential risks from von Neumann may not be the most advisable thing.

    Once a framework for restricting even egregiously terrible content is in place […] a shockingly broad range of government agencies and activist pressure groups and nongovernmental entities will kick into gear and demand ever greater levels of censorship and suppression of whatever speech they view as threatening to society and/or their own personal preferences. […] This cycle in practice can run apparently forever, with the enthusiastic support of authoritarian hall monitors installed throughout our elite power structures. This has been cascading for a decade in social media and with only certain exceptions continues to get more fervent all the time.

    Aside from the overwrought free-expression discourse, the original text links the phrase "certain exceptions" to Twitter. Considering Twitter an exception in this regard is simply not serious.

    AI isn’t just being developed in the relatively free societies of the West, it is also being developed by the Communist Party of the People’s Republic of China.

    And this is a bad thing why?

    The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not.

    Why?

    This is the best way both to offset the real AI risks and to ensure that our way of life is not displaced by the much darker Chinese vision.

    Ah, the… much darker Chinese vision. Ok, yeah, that sure convinced me.

    In summary, while AI may not be risky in the ways many AI doomers claim, this article makes a poor case for it, and the one risk it does concern itself with isn't a risk at all.