I often see people with an outdated understanding of modern LLMs.
This is probably the best interpretability research to date, from the leading team in the field.
It’s worth a read if you want a peek behind the curtain on modern models.
Yes, good topic, good research…
(You have a few typos: intobthe → into the, predicr → predict, im → I am.)