Gaming is one thing; a lot of it is GPU bound anyway, and the same probably goes for "physical modelling".
But you cannot tell me your "data processing" would not be greatly sped up by a newer processor (assuming it isn't also GPU bound). Sure, it works, but if it takes 2 hours to process now versus under 30 minutes on something newer, that's a waste of time, resources, and money. It's incredibly inefficient.
On the flip side, if all your work is GPU bound, no wonder a 3rd-gen processor from 2012 is keeping up lol
My modelling is CPU bound, as it's a model written in Fortran by physicists (me included). The fact is that I wouldn't get a 4x boost, and a model that runs overnight would still be running overnight. When I actually need performance I use a 1000-core compute cluster for multiple days, so that workload would never run on any consumer CPU anyway.
For the data processing, the real bottleneck is disk access and my scripting speed, so the CPU doesn't really need to be amazing.
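For what it's worth, the CPU-bound vs I/O-bound distinction being argued here is easy to check empirically: compare how much CPU time a job actually accumulates against the wall-clock time it takes. A minimal Python sketch (the function and job names are just illustrative, not anything from this thread):

```python
# If a job's accumulated CPU time is only a small fraction of its
# wall-clock time, it spent most of its run waiting (disk, network),
# and a faster CPU won't help much. If the fraction is near 1.0 per
# core, the job is CPU bound.
import time

def cpu_utilisation(job):
    """Run `job` and return the fraction of wall time spent on the CPU."""
    wall_start = time.perf_counter()   # wall-clock timer
    cpu_start = time.process_time()    # CPU-time timer for this process
    job()
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    return cpu / wall if wall > 0 else 0.0

# A CPU-heavy toy job: utilisation should come out close to 1.0.
def crunch():
    sum(i * i for i in range(2_000_000))

# An I/O-style toy job (sleep stands in for waiting on the disk):
# utilisation should come out close to 0.0.
def wait():
    time.sleep(0.2)

print(f"crunch: {cpu_utilisation(crunch):.2f}")
print(f"wait:   {cpu_utilisation(wait):.2f}")
```

If your pipeline looks like `wait` rather than `crunch`, a newer processor buys you almost nothing, which is exactly the point being made about disk-bound data processing.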
Gaming, working (data processing, physical modelling).
The trick is to use a lower overhead OS than Windows.