The most succinct way I can make this argument to the layperson is that “AI”, as it exists today, is terrifyingly good at mimicry. But that’s all it can do. Attributing anything more to these synthetic neural networks makes about as much sense as saying a parrot understands grammar and syntax because it can perfectly reproduce a few words in the right context, or with the right prompt.
From this vantage point, we can clearly see how this technology is severely limited. It can be asked to synthesize new outputs, but those are merely extrapolations of the training set. While that isn’t all that different from what people can do, and often do, it isn’t a fully rational intelligence that can solve problems outside that framing. For that, one needs a general intelligence, capable of extracting meaning from context and generating genuinely novel concepts.
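To make the mimicry point concrete, here is a toy sketch in Python (entirely illustrative; the corpus and seed word are made up, and no real system is this crude): a bigram “parrot” that can only ever emit word transitions it has already seen in its training text, however fluent the result looks.

```python
import random
from collections import defaultdict

def train(corpus):
    """Map each word to every word that ever followed it in the corpus."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, seed, length=10):
    """Walk the model; it can only emit transitions it has observed."""
    out = [seed]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:  # no observed continuation: the parrot falls silent
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the parrot repeats the words the parrot has heard"
model = train(corpus)
print(generate(model, "the"))  # fluent-looking output, zero understanding
```

Every sentence it produces is a recombination of its input; nothing it says originates outside the training data.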
Moreover, if you want an AI to generate something, you first need to define the general ballpark for the right answer(s). Data gathering, cleaning, and categorization (tagging) are a big labor problem that feeds into the AI itself. So there are also a lot of real-world problems that don’t fit this model, for a whole bunch of reasons: there may be no working dataset at all, the information may not digitize well, or the domain may be too small to properly feed this process in the first place. People function just fine in those spaces, so again, we can see a gap that is not easily closed.
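As an illustration of that labeling bottleneck, here is a deliberately tiny sketch (the examples and tags below are invented for the purpose): a toy classifier that can only ever answer with tags a human put into its training set, and has nothing to say about inputs that fall outside that data.

```python
# Hand-tagged training data: the model's entire universe of possible answers.
training_data = [
    ("invoice overdue payment", "billing"),
    ("server down error 500", "outage"),
    ("password reset locked out", "account"),
]

def classify(text):
    """Score each tagged example by word overlap; best match wins."""
    words = set(text.split())
    best_tag, best_score = None, 0
    for example, tag in training_data:
        score = len(words & set(example.split()))
        if score > best_score:
            best_tag, best_score = tag, score
    return best_tag  # None when the input shares nothing with the dataset

print(classify("error on the server"))  # 'outage' -- overlaps the tagged data
print(classify("my llama escaped"))     # None -- no dataset covers this space
```

A person handles the second query without blinking; the model has no dataset to lean on, which is exactly the gap described above.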
I agree.