Use code tldrnews at the link below to get an exclusive 60% off an annual Incogni plan: https://incogni.com/tldrnews

Last week, Nvidia saw the biggest single-...
But the uses of massively parallel math are still in their infancy: scientific compute, machine learning, all kinds of simulations. Nvidia has been setting itself up for all of it with CUDA for years. At least until we get better options for physically replicating neurons (primarily how interconnected they are in a brain), GPUs, and CUDA specifically, are how most AI is going to happen. And as compute power increases, the ability to run increasingly complex physics simulations of increasingly complex phenomena is going to become more and more relevant. Right now it's stuff like protein folding, fluid dynamics, whatever. But there's way more coming. And all of it is going to use GPUs.
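To make "massively parallel math" concrete: the canonical GPU pattern is one thread per data element, millions at a time. A minimal CUDA sketch of that idea (a standard SAXPY-style vector update, not anything from the video; the sizes and launch configuration here are illustrative choices) looks like this:

```cuda
#include <cstdio>

// Each thread handles exactly one element: the "massively parallel math"
// pattern. The same kernel shape underlies ML kernels and physics sims.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // ~1M elements, all updated in parallel
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // 2.0*1.0 + 2.0 = 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The point of the sketch: a CPU would loop over those million elements one (or a few) at a time, while the GPU dispatches them across thousands of cores at once, which is why the same hardware serves training runs and fluid-dynamics solvers alike.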
Do bubbles burst?
LLMs are a bubble.