Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet;
atm we only have LLMs (Large Language Models),
which do not think on their own
but can pass Turing tests
(i.e. fool humans into thinking that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalist a-holes
who already invested in LLM stocks
and are now looking for a profit.

  • @[email protected]
    -2 · 10 months ago

    In the case of an LLM-type AI, though, the bars can be swapped, in a sense. LLMs are strange, because they can talk but not feel.

    You can’t argue that a series of tensor calculations is sentient (def. able to perceive or feel), i.e. capable of experiencing life from the “inside”. A dog is sentient by most definitions; it could be argued to have a “soul”. When you look at a dog, the dog looks back at you. An LLM does not. It is not conscious, not “alive”.

    However, an LLM does put on a fair appearance of being sapient (def. intelligent; able to think): it contains a large store of knowledge and, aside from humans, is now the only thing on the planet that can talk. You can have a discussion with one; you can tell it that it was wrong, and it can debate or clarify using its internal knowledge. It can “reason”, and anyone who has used one to write code can attest to this, having seen its capability to work around restrictions.

    It doesn’t have to be sentient to be able to do this sort of thing, even though we used to think that was practically a prerequisite. Thus the philosophical confusion around them.

    Even if this is simply a clever trick of a glorified autocomplete algorithm, this is something the dog cannot do despite its sentience. Thus an LLM with a decent number of parameters is “smarter” than a dog, and arguably more sapient.
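    For what the “glorified autocomplete” framing means concretely, here is a toy sketch (my own illustration, not how any real model is implemented): at its core, a language model just predicts the most likely next token given the previous ones. A real LLM does this with billions of learned parameters; a simple bigram counter shows the same idea in miniature.

    ```python
    from collections import Counter, defaultdict

    # Toy "autocomplete": count which word follows which in a tiny corpus,
    # then always emit the most frequent successor. Real LLMs perform the
    # same next-token prediction, just via enormous neural networks.
    corpus = "the dog looks at you the dog barks the cat sleeps".split()

    successors = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        successors[prev][nxt] += 1

    def complete(word, steps=3):
        out = [word]
        for _ in range(steps):
            if word not in successors:
                break
            word = successors[word].most_common(1)[0][0]
            out.append(word)
        return " ".join(out)

    print(complete("the"))  # → "the dog looks at"
    ```

    Nothing here perceives or feels anything; it is pure pattern continuation. The argument in the comment is that scaling this kind of prediction up nevertheless yields behavior that looks sapient.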