I often see people with an outdated understanding of modern LLMs.
This is probably the best interpretability research to date, from the leading team in the field.
It’s worth a read if you want a peek behind the curtain on modern models.
Yes, but people forget that our brains, and therefore our minds, are also “simply” statistics, albeit very complex.
Yeah, I find this kind of reductionist talk pushes people to overlook the emergent properties of the system, which is where the meat of the topic is. It’s like looking at a living cell and saying “yeah, well, this is just chemistry”.