filister@lemmy.world to Technology@lemmy.world · English · 2 days ago
DeepSeek R1 Is Reportedly Running Inference On Huawei's Ascend 910C AI Chips, Showing China's Growing AI Capabilities (wccftech.com)
14 comments
aaron@lemm.ee · 2 days ago
I'm not going to parse this shit article. What does interference mean here? Please and thank you.

filister@lemmy.world (OP) · 2 days ago
That's a very toxic attitude. Inference is, in principle, the process of generating the AI's response. So when you run an LLM locally, you are using your GPU only for inference.

aaron@lemm.ee · 2 days ago
Yeah, I misread because I'm stupid. Thanks for replying, non-toxic man.

xodoh74984@lemmy.world · 2 days ago
Training: creating the model
Inference: using the model

RobotToaster@mander.xyz · 2 days ago
Inference? It's the actual running of the AI when you use it, as opposed to training.

aaron@lemm.ee · 2 days ago
Sorry. I forgot to mention that I'm dumb.
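The training/inference split the commenters describe can be sketched with a toy perceptron in plain Python. This is purely illustrative (all names are made up for the example; a real LLM differs enormously in scale, but not in this basic division of labor): training adjusts the weights, inference applies the frozen weights to new inputs.

```python
# Toy illustration of training vs. inference (not how an LLM works
# internally, just the same two-phase split at miniature scale).

def train(samples, labels, epochs=20, lr=0.1):
    """Training: repeatedly adjust weights to fit the data (compute-heavy, done once)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def infer(w, b, x1, x2):
    """Inference: apply the frozen weights to a new input (what you run per query)."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Learn the rule "is x2 greater than x1?" from a few labeled examples.
X = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
y = [1, 0, 1, 0, 1, 0]
w, b = train(X, y)          # training phase
print(infer(w, b, 0, 3))    # inference phase → 1
print(infer(w, b, 3, 0))    # inference phase → 0
```

When you run a model locally on your GPU, you are only doing the `infer` step: the weights were already produced by someone else's (much more expensive) `train` step.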