“The new device is built from arrays of resistive random-access memory (RRAM) cells… The team was able to combine the speed of analog computation with the accuracy normally associated with digital processing. Crucially, the chip was manufactured using a commercial production process, meaning it could potentially be mass-produced.”
Article is based on this paper: https://www.nature.com/articles/s41928-025-01477-0


But it only does 16x16 matrix inversion.
Oh noes, how could that -possibly- scale?
To a billion-parameter matrix inverter? Probably not too hard, though maybe not at those speeds.
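FWIW, composing a fixed-size inverter into bigger inversions is textbook blockwise (Schur-complement) inversion. A rough NumPy sketch of that idea — `base_inv` with `np.linalg.inv` is just a stand-in for a hypothetical 16x16 analog primitive, not anything from the paper:

```python
import numpy as np

BASE = 16  # size of the hardware primitive: a 16x16 inverter

def base_inv(A):
    # Stand-in for the analog 16x16 inverter (hypothetical interface).
    assert A.shape[0] <= BASE
    return np.linalg.inv(A)

def block_inv(A):
    """Invert A recursively via the Schur complement,
    using only inversions of size <= BASE."""
    n = A.shape[0]
    if n <= BASE:
        return base_inv(A)
    # Split point: a multiple of BASE, roughly half the matrix.
    k = BASE * max(1, n // (2 * BASE))
    P, Q = A[:k, :k], A[:k, k:]
    R, S = A[k:, :k], A[k:, k:]
    Pi = block_inv(P)
    Si = block_inv(S - R @ Pi @ Q)  # inverse of the Schur complement of P
    return np.block([[Pi + Pi @ Q @ Si @ R @ Pi, -Pi @ Q @ Si],
                     [-Si @ R @ Pi,              Si]])

n = 64
A = np.random.randn(n, n) + n * np.eye(n)  # well-conditioned test matrix
print(np.allclose(block_inv(A) @ A, np.eye(n)))  # True
```

Of course this says nothing about whether the analog error accumulates acceptably across recursion levels, or whether the interconnect cost eats the speedup — just that the math composes.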
To a GPU, or even just the functions used in GenAI? We don’t even know whether those are possible with analog computers to begin with.
@TheBlackLounge @kalkulat LLM inference is definitely theoretically possible on analog chips. They just may not scale :v