there’s a lot to be excited for, but
ew.
That’s every company right now
Ew.
It’s so weird, I read this in a bunch of job listings nowadays. How the fuck is it a requirement?!?! You should be fluent in CPP, but also please outsource your brain and encourage the team to do so as well. People are weird, man.
It means that the parent company has major investors in the LLM space.
CDPR is a major john in LLM?
GOG isn’t under the CDPR umbrella anymore.
It’s a publicly traded company, isn’t it? Most likely there is some investor in the CEO’s ear asking him to push this down on all staff… so they come up with bright ideas like putting silly “requirements” like this in their job descriptions as well. And in any case, AI investors are so desperate these days, chances are that they’re doing everything they can to create general LLM FOMO in a similarly desperate push to increase adoption.
That’s what I’m guessing at least. Even to me it sounds a little like a conspiracy theory, but then again these people have a lot of influence.
GOG is now owned by Michał Kiciński, one of the original founders. He can do whatever he wants.
And no, it’s not that he’s using his staff in a secret evil plot to gain third-hand investment returns by investing in the current hype cycle and then hiring staff to make use of that investment…
The future looks to involve a mixture of AI and traditional development. There are things I do with AI that I could never touch the speed of with traditional development. But the vast majority of dev work is just traditional methods with maybe an AI rubber duck and then review before opening the PR to catch the dumb mistakes we all make sometimes. There is a massive difference between a one-off maintenance script or functional skeleton and enterprise code that has been fucked up for 15 years and the AI is never going to understand why you can’t just do the normal best practice thing.
A good developer will be familiar enough with AI to know the difference, but it’ll be a tool they use a couple times a month (highly dependent on the job) in big ways and maybe daily in insignificant ways if they choose.
Companies want a staff prepared for that state, not dragging their heels because they refuse to learn. I’ve been at this for thirty years and I’ve had to adapt to a number of changes I didn’t like. But like a lot of job skills we’ve had to develop over the years — such as devops — it’ll be something that you engage for specific purposes, not the whole job.
Even when the AI bubble does burst, AI won’t go away entirely. OpenAI isn’t the only provider and local AI is continuing to close the gap in terms of capability and hardware. In that environment, it may become even more important to know when the tool is a good fit and when it isn’t.
I am aware of that. I occasionally use AI for coding myself if I see fit.
Just the fact that active use of AI tools is listed as a job requirement, and that I’ve seen that in more than a few job listings, rubs me the wrong way, and it would definitely be my first question in the interview to clarify what the extent of that is. I just don’t wanna deal with pipelines that break because they partially rely on AI, or a code base nobody knows their way around because nobody actually wrote it themselves.
Frankly that’s why I think it’s important for AI centrists to occupy these roles rather than those who are all in. I’m excited about AI and happy to apply it where it makes sense and also very aware of its limitations. And in the part of my role that is encouraging AI adoption, critical thinking is one of the things I try my hardest to communicate.
My leadership is targeting 40-60% efficiency gains. I’m targeting 5-10% with an upward trajectory as we identify the kinds of tasks it is specifically good at within this environment. I expressed mild skepticism about that target to my direct manager during my interview (and he agreed) but also a willingness to do my best and a proven track record of using AI successfully.
I would suggest someone like yourself is perhaps well-suited to that particular duty — though whether the hiring manager sees it that way is another issue.
Yeah, what does GOG know?
The real source of wisdom is social media users who approach a topic with bad-faith, outrage-farming framing. I mean, just look at the upvotes and you can easily tell how right you are; it’s basically science.
I’m sorry the only way you know how to write code is with an LLM holding your hand, but I believe if you really devote yourself to it you could learn to be a real programmer. Good luck!
Why did you attack the commenter personally? Are you not able to defend the idea without stooping so low?
Clearly you didn’t read the conversation, because they were less insulting and dumb than the person they replied to. Why are you so interested in defending trolls?
The irony here is rich.
Yea it is, mr troll.
And we open the book of troll arguments to chapter 1: Ad hominem
Keep going, it really makes you look like the rational one.
Maybe try a red herring next, or a straw man; those are always popular.
Bruh, your only “rebuttal” was a straw man and an appeal to authority. Make a better argument before you go accusing people of being trolls.
Oh ok.
‘The job listing does not say anything about outsourcing your brain.’
But everyone knows that, because it’s obvious on its face.
The subtext, as always, isn’t about commenting on the subject of the article or even making any kind of cogent point that could actually be rebutted. Much like the top comment, it’s just running ‘ai bad’ through an LLM so that it fits the post.
Would you honestly say that the comment that I responded to was made in good faith?
It’s lemmy. Average user is more technical than the average investor.
Also we all know by “AI tools” they just mean chatbots, and they are a known scam by now.
We’ve been past chatbots in programming for a while now. It’s LLMs with tool-calling capabilities now, working in an agentic loop. LLMs are extrapolators, so the input context is important (extrapolating from missing information leads to hallucinations). With this workflow the LLM can construct its own context by using tools, which leads to better results.
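For anyone wondering what “agentic loop” means in practice, here’s a rough sketch. It’s purely illustrative, not any real vendor API - call_model() and list_dir() are toy stand-ins - but it shows the shape of the workflow: when the model is missing information it requests a tool, the tool result goes back into the context, and only then does it answer.

```python
import os

def list_dir(path="."):
    # Example tool: real agents expose things like file reads, shell commands, test runs.
    return ", ".join(sorted(os.listdir(path)))

TOOLS = {"list_dir": list_dir}

def call_model(messages):
    # Toy stand-in for the LLM: first it "decides" it is missing information and
    # requests a tool call, then it answers once the tool result is in the context.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "list_dir", "args": {"path": "."}}
    return {"content": "Files I can see: " + messages[-1]["content"]}

def agent_loop(task, max_steps=10):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if reply.get("tool") in TOOLS:
            # Instead of extrapolating from missing information (hallucinating),
            # the model fetches what it needs and the result goes back into its context.
            result = TOOLS[reply["tool"]](**reply.get("args", {}))
            messages.append({"role": "tool", "content": result})
            continue
        return reply["content"]  # no tool request -> treat it as the final answer
    return "step limit reached"

print(agent_loop("What files are in the current directory?"))
```

The real thing is obviously messier (multiple tools, retries, token limits), but that loop is the whole trick.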
I haven’t met a lot of people who actually understand machine learning who say things like LLMs are ‘a known scam’.
I agree that the industry is massively overhyping the future capabilities of this kind of software in order to maintain their valuations… but the framing that AI (neural network-based machine learning) is useless is social media brain rot, not an accurate survey of the state of machine learning.
I have upvotes disabled, so I don’t know how many upvotes it got. I just pointed out that it’s weird that it’s under the requirements, which sounds like they would require you to use training wheels. That’s normally not something you say there. I don’t understand what your problem is.
They know some things, I’ll give you that. But pattern recognition tells me that for this example it’s more likely they’re wrong.
Maybe. We can’t say; there’s zero information there that even hints at how, or how much, they use AI.
It isn’t like they’re saying something specific like ‘Must be able to use Cursor, Mercurial and be able to direct multi-agent workflows’.
That bullet point reads like it’s there more to include a hot keyword for job-search sites than as an actual specification that describes the job.
It’s kind of like including the word in your comment, so that you grab all of the bot upvotes and can farm outrage in a way that is objectively off-topic and unrelated to the actual post, which is about GOG moving to support Linux and not about AI.
It’d be one thing if there was something specific about the job related to AI, or if anyone involved in these comments had actually said anything of substance other than, literally, ‘ew’.
So, to my pattern recognition, this looks like every other ‘ai bad’ thread shoehorned into posts and full of toxic attacks while being light on actual discussion of the topic in the OP.
hell nah
It’s sad that this is basically everywhere these days, and employers will weigh your performance review based on whether you’re using AI and how well you’re using it. It’s terrible.
This is a “big part” of my job. In five months, what I’ve accomplished is adding AI usage tracking to Jira, along with a way to indicate how many story points it wound up saving or costing. Let’s see how this plays out.
If AI collapses as many expect it to, this job will still be there without that requirement.
I hope the bubble pops soon, and only smaller and more sustainable models stay
Yeah, self-hosted open-source models seem okay, as long as their training data is all from the public domain.
Hopefully RAM becomes cheap as fuck after the bubble pops and all these data centers have to liquidate their inventory. That would be a nice consolation prize, if everything else is already fucked anyway.
Unfortunately, server RAM and GPUs aren’t compatible with desktops. Also, Nvidia have committed to releasing a new GPU every year, making the existing ones worth much less. So unless you’re planning to build your own data centre with slightly out-of-date gear - which would be folly, since the existing operators will be desperate to recoup any investment and will be selling cheap - it’s all just destined to become a mountain of e-waste.
Maybe that surplus will lay the groundwork for a solarpunk blockchain future?
I don’t know if I understand what blockchain is, honestly. But what if a bunch of indie co-ops created a mesh network of smaller, more sustainable server operations?
It might not seem feasible now, but if the AI bubble pops, Nvidia crashes spectacularly, data centers all need to liquidate their stock, and server compute becomes basically viewed as junk, then it might become possible…
I’m just trying to find a silver lining, okay?
Like AI, blockchain is a solution in search of a problem. Both have their uses but are generally part of overcomplicated, expensive solutions which are better done with more traditional techniques.
Maybe I didn’t mean blockchain, cause I’m still not really certain what it is. I mean like the fediverse itself, or a mesh network, where a bunch of hobbyists self-hosting their own servers can federate as a system of nodes for a more distributed model.
Instead of all the compute being hoarded in power-hungry data centers, regular folks, hobbyists, researchers, indie devs, etc., would be able to run more powerful simulations, meta-analyses, renderings, etc., and then pool their data and collaborate on projects, ultimately creating a more efficient and intelligently guided use of the compute instead of simply “CEO says generate more profit! 24/7 overdrive!!!”
At the very least, a surplus of cheap RAM would expand the computing capabilities of everyone who isn’t a greedy corporation with enough money to buy up all the expensive RAM.
I wonder if the server GPUs can be used for tasks other than computing LLMs.
I would imagine any program running simulations, rendering environments, analyzing metadata, and similar tasks would be able to use it.
It would be useful for academic researchers, gamers, hobbyists, fediverse instances. Basically whatever capabilities they have now, they would be able to increase their computing power for dirt cheap.
Someone could make a fediverse MMO. That could be cool, especially when indie devs start doing what zuck never could with VR.
Google Stadia wasn’t exactly a resounding success…
From a previous job in hydraulics: the computational fluid dynamics / finite element analysis that we used to do would eat all your compute resources and ask for more. Split your design into tiny cubes, simulate all the flow / mass balance / temperature exchange / material stress calculations for each one, gain an understanding of how the part would perform in the real world. Very easily parallelizable, a great fit for GPU calculation. However, it’s a ‘hundreds of millions of dollars’ industry, and the AI bubble is currently ‘tens of trillions’ deep.
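To give a feel for why that maps so well onto GPUs: in the simplest explicit schemes, each tiny cell is updated from nothing but its neighbours’ values at the previous time step, so every cell can be computed independently. A minimal, hypothetical sketch (plain NumPy heat diffusion on a 2D grid, nothing to do with any actual commercial solver):

```python
# Toy explicit heat-diffusion step on a 2D grid: every interior cell reads only
# the *old* values of its four neighbours, so the whole update is embarrassingly
# parallel across cells (the property that makes this class of solver GPU-friendly).
import numpy as np

def diffusion_step(temp, alpha=0.1):
    """One explicit time step; boundary cells are held fixed."""
    new = temp.copy()
    new[1:-1, 1:-1] = temp[1:-1, 1:-1] + alpha * (
        temp[:-2, 1:-1] + temp[2:, 1:-1] +
        temp[1:-1, :-2] + temp[1:-1, 2:] -
        4 * temp[1:-1, 1:-1]
    )
    return new

grid = np.zeros((100, 100))
grid[0, :] = 100.0              # one hot edge
for _ in range(500):            # march the simulation forward in time
    grid = diffusion_step(grid)
print(round(float(grid[50, 50]), 3))  # temperature near the centre after 500 steps
```

Swap the NumPy slicing for a “one thread per cell” GPU kernel and you have the basic picture; real CFD/FEA packages are vastly more sophisticated, but the parallel structure is the same.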
Yes, they can be used for other tasks. But we’ve just no use for the amount that’s been purchased - there’s tens of thousands of times as much as makes any sense.
So there would be an enormous surplus and a lot of e-waste. That’s a shame, but that’s going to happen anyway. I’m only saying that the silver lining is that it means GPU and RAM would become dirt cheap (unless companies manufacture scarcity like the snakes they are).
Industrial applications aren’t the only uses for it. Academic researchers could use it to run simulations and meta-analyses. Whatever they can do now, they could do more powerfully with cheap RAM.
Gamers who self-host could render worlds more powerfully. Indie devs could add more complex dynamics to their games. Computer hobbyists would have more compute to tinker with. Fediverse instances would be able to handle more data. Maybe someone could even make a fediverse MMO. I wonder if that would catch on.
Basically, whatever people can do now, more people would be able to do more powerfully and for cheaper. Computations only academia and industry can do now would become within reach of hobbyists. Hobbyists would be able to expand their capacities. People who only have computers to tinker with now would be able to afford servers to tinker with.
“Trickle-down” is a bullshit concept, as everything gets siphoned to the top and hoarded. But when that cyst bursts, and those metaphorical towers come crashing down, there’s gonna be a lot of rubble to sift through. It’s going to enable the redistribution of RAM on a grand scale.
I’m not pretending it’ll solve everyone’s problems, and of course it would have been better if they had left the minerals in the ground and data centers had never grown to such cancerous proportions. But when the AI bubble bursts and tech companies have to liquidate, there’s no denying that the price of RAM would plummet. It’s not a magic bullet, just a silver lining.
I read, I think just last week but for sure within the last month, that someone has created an AI card that lowers power usage by 90%. (I know that’s really vague and leaves a lot of questions.) It seems likely that AI-specific hardware and graphics hardware will diverge — I hope.
I think it’s called an inferencing chip. I read about it a few months ago.
Basically, the way it was explained, the most energy-intensive part of AI is training the models. Once training is complete, it requires less energy to make inferences from the data.
So the idea with these inferencing chips is that the AI models are already trained; all they need to do now is make inferences. So the chips are designed more specifically to do that, and they’re supposed to be way more efficient.
I kept waiting to see it in devices on the consumer market, but then it seemed to disappear and I wasn’t able to even find any articles about it for months. It was like the whole thing vanished. Maybe Nvidia wanted to suppress it, cause they were worried it would reduce demand for their GPUs.
At one point I had seen a smaller-scale company listing laptops for sale with their own inferencing chips, but the webpage seems to have disappeared. Or at least the page where they were selling it.
Agreed, AI has uses, but C-suite execs have no idea what they are and are paying millions to get their staff using it in hopes of finding out what those uses are. In reality they’re making things worse with no tangible benefit, because they’re all scared that someone else will find this imaginary golden goose first.
agreed
I mean yes, but maybe if you can interview in good faith, that’s not what becomes part of the job.
“I saw here that the use of AI is required. I’m willing to compromise and use AI for some workflows, but I’m skeptical of wide-scale adoption. I think it’s potentially bad for long-term code base maintenance and stability, which is what GOG is founded on. If I find that it’s truly helpful in code writing, then I’ll continue to work it into my larger workload, but do keep in mind that the Linux community as a whole is more technical than other OS consumers and this will be bad PR.”
No wonder it’s just one headcount.
Oh. So it’s going to get vibe-coded. Well… Fuck.
They’ll change their tune when a few of their new workflows go rogue, auto-commit PRs they shouldn’t, and cause build issues.
If this is possible then your AI workflows are catastrophically broken. Even my dumbass company knows AI needs human supervision at all times.
Reddit and lemmy are so extreme on this topic it’s impossible to express a nuanced opinion on the issue. AI is an undeniably powerful tool for any good programmer, but it needs to be used properly.
People being this irresponsible with it must work on software where there are no legal consequences if it breaks. As brainwashed as my company is on AI they would never allow us to create a process that releases unreviewed code.
Oh they are lol. Our company was full steam on it and is just now pumping the brakes as they’ve seen the chaos.
Don’t get me wrong. I think Gen AI can be, gasp, useful! It’s great in small pockets where you can handhold it and verify the output. It’s great for cutting through the noise that Google and others have failed to address. It’s good at summarizing text.
I’m not so high on it being this massive reckoning that’s going to replace people. It’s just not built for that. Text prediction can only go so far and that’s all GenAI is.
We’ve had multiple instances of AI slop being automatically released to production without any human review, and some of our customers are very angry about broken workflows and downtime, and the execs are still all-in on it. Maybe the tune is changing to, “well, maybe we should have some guardrails”, but very slowly.
The incident I mentioned above was the final straw, but I’ve slowly seen the enthusiasm for LLMs start to whittle away.
It’s still the shiny new toy that everyone must play with, but we went from “drop your entire roadmap for AI” to “eh, maybe we don’t scrap all UIs just yet.”
I have a feeling it’s gonna drop more from there.
You are not competitive as a programmer if you disown LLM autocomplete today. But for a lead dev, it’s not that important. Nobody actually vibe codes professionally, but equally nobody on the lean street disowns the CLI LLM tools.
It’s hard for me to take these comments seriously. If people genuinely think AI is useless it’s because they’re using a bad model or they haven’t actually tried and are just making stuff up.
It’s extremely helpful for many different tasks, you just need to supervise it.
Whether you (or I) like it or not, Pandora’s box has been opened. There is no future in software development without the use of LLMs.
While this might be true, there’s a big difference between using LLMs for auto-completions, second-opinion PR reviews, and maybe mocking up some tests, and using them to write actual production code. I don’t see LLMs going away as a completion engine, because they’re really good at that. But I suspect companies that are using them to write production code are realizing, or will soon realize, that they might have security issues, and that for a human to work on that codebase it would likely have to be thrown away entirely and redone; so using slop only cost them time and money without any benefit. But we’ll see how that goes. Luckily I work at a company where the managers used to be programmers, so there’s not much push for us to use it to generate code.
I appreciate your opinion, but I don’t believe you.
K 👍
Thanks for making the job market slightly easier for the rest of us.
Enjoy.
Say hi to the PMs and QA for me.
100% correct