For example if someone creates something new that is horrible for humans, how will AI understand that it is bad if it doesn’t have other horrible things to relate it with?
What you're saying doesn't exist is an Artificial General Intelligence, something approaching the conscious human mind. You're right, that doesn't exist.
AI doesn’t just mean that though.
What we're dealing with right now is the computer equivalent of growing mouse brain cells in a petri dish, plugging them into inputs and outputs & getting them to do useful things for us.
The way you describe ChatGPT not being creative is also, theoretically, how our own brains work in the creative process. If you study story structure & mythology you'd find that ALL successful stories boil down to a very minimalist set of archetypes & types of conflict.
While technically true, when people colloquially say AI, they basically mean AGI. This distinction just confuses people. The OP, and normal people asking about AI, generally just mean AGI.
Unfortunately, people label ChatGPT etc. as "AI" because it's technically true, but that's basically the reason confusion like this post exists.
What we're dealing with is sampling options at random from a weighted probability distribution. The only thing intelligent about that is what you've chosen as the data set used to generate that distribution.
And that intelligence lies outside of the machine.
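To make that concrete, here's a minimal Python sketch of "sampling from a weighted distribution." The tokens and scores are made up for illustration; real models do this over tens of thousands of tokens with learned weights, but the sampling step itself is this simple:

```python
import math
import random

# Toy "next token" scores (logits). These numbers are invented
# for illustration and come from no real model.
logits = {"cat": 2.0, "dog": 1.0, "pizza": 0.1}

# Softmax turns raw scores into a weighted probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# "Generation" is just drawing one option from that distribution.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(next_token)
```

All the apparent intelligence is baked into the weights before this step ever runs; the step itself is a dice roll.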
There's really no need to buy into tech bros' delusions of grandeur about this stuff.