Someone in the Chinese Room would not know anything about their input or output. Sure, you memorized that a certain set of symbols means your output should contain another set of symbols, but what do you actually “know” about these symbols?
But you have no idea what it’s about. Is it a greeting? A recipe for some pasta? Instructions to build a bomb?
Could be anything.
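To make that concrete, here's a minimal sketch (the symbols and rules are made up for illustration, not taken from Searle or anyone in this thread) of the room as pure symbol manipulation: a lookup from input strings to output strings. Whoever applies these rules can produce fluent-looking replies without ever having access to what any of the symbols mean.

```python
# Hypothetical rule book: input symbol strings mapped to output symbol strings.
# The operator following it never learns whether an entry is a greeting,
# a recipe, or anything else.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",
    "怎么做面条": "先烧水，然后下面条",
}

def room(symbols: str) -> str:
    # Apply the memorized rule; fall back to a stock symbol string otherwise.
    return RULE_BOOK.get(symbols, "请再说一遍")

if __name__ == "__main__":
    # Produces a plausible reply whose meaning the operator never sees.
    print(room("你好吗"))
```

The point of the thought experiment is that, from the inside, this is all the operator is doing, no matter how convincing the output looks from the outside.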
I’m pretty well steeped in this question, from both a technological and philosophical perspective.
And it’s funny to see all of these posters, who are upvoting comments that expose a fundamental lack of understanding about how LLMs and AI work, acting like the book is already closed on the answer.
Don’t know why you got downvoted; an LLM is essentially a Chinese Room, and whether such a room “knows” anything is still an open question.
No, it fucking isn’t. (See the postscript in the linked article.)
Thanks for that read.
why the visceral reaction?
Good god it’s a hydra
don’t know why you got banned
???