• 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: June 9th, 2023




  • aard@kyu.de to Technology@lemmy.world · The decline of Intel..
    2 months ago

    Not just that - Intel did dual-core CPUs as a response to AMD doing just that, by gluing two cores together. Which is pretty funny when you look at Intel’s 2017 campaign of discrediting Ryzen by calling it a glued-together CPU.

    AMD’s Opteron was wiping the floor with Intel’s offerings for years - but not every vendor offered systems with it, as they were getting paid off by Intel. I remember helping a friend build a kernel for one of the first available Opteron setups - that thing was impressive.

    And then there’s the whole 64-bit thing, which Intel eventually had to license from AMD.

    Most of the big CPU innovations (at least in x86 space) of the last decade came from AMD - and the chiplet design of Ryzen is just another one.


  • One fascinating example is an owner who replaced the DC barrel jack with a USB-C port, so they could use USB-PD for external power.

    Oddly enough, that’s also an example of bad design in that notebook: the barrel jack is soldered in. With a module that plugs into the board it would be significantly easier to replace - and it would also provide strain relief against power-jack abuse. All my old ThinkPads were trivial to move to USB-C PD because they use a separate power jack with an attached cable.

    The transparent bottom also isn’t very functional - it is pretty annoying to remove and put back, due to the large number of screws required. For a notebook designed for tinkering I’d have wanted some kind of quick release for that. Also annoying is the lack of USB ports on the board - there’s enough space to integrate a USB hub, and doing that on the board and providing extra ports would have been way more sensible.

    The CPU module is also a bit of a mixed bag - it is pretty much designed around the first module they developed, and later modules don’t have full support for the existing ports. I was expecting that, though - many projects trying to offer that kind of modular upgrade path run into this sooner or later, and for a small project with all its teething problems ‘sooner’ was to be expected. It is still very interesting for some prototyping needs - but that’s mostly companies or very dedicated hackers, not the average Linux user.






  • Not entirely sure about that. I have a bunch of systems with the current 8cx, and that’s pretty much 10 years behind Apple performance-wise, while being similar in heat and power consumption. It is perfectly fine for the average office and web-browsing workload, though - a 10-year-old mobile i7 is still an acceptable CPU for that nowadays, and the more problematic area of IO speed is better on the Snapdragon. (That’s also the reason Apple gets away with the 8GB thing - despite the performance impact it still makes for a usable system for the average user. The lie is not that it doesn’t work - the lie is that it doesn’t have an impact.)

    From the articles I’ve seen about the Snapdragon Elite, it seems to have roughly double the multicore performance of the 8cx - a nice improvement, but still quite a bit away from catching up with the Apple chips. You could have a large percentage of office workers use them and be happy - but for demanding workloads you’d still need to go Intel/AMD/Apple. I don’t think many companies will go for Windows on ARM when they can’t switch everybody over. Plus, the deployment tools for ARM are not very stable yet - big parts of what you’d need for doing deployments in an organization have only been available for ARM for a few months now (I’ve been waiting for that, but haven’t had time to evaluate whether they work).


  • It is also perfectly fine for running compile cycles a few minutes long - without running into thermal throttling. I guess hour-long jobs might eventually become an issue - but generally the CPUs available in the Airs seem to be perfectly fine with passive cooling, even for longer peak loads. Definitely usable as a developer machine, if you can live with the low memory (16GB for the M1, which is what I have).

    I bought some Apple hardware for a customer project - which was pretty much the first time I had seriously touched Apple stuff since the 90s, as I’m not much of a fan of theirs - and was pretty surprised by the performance as well as the lack of heat. That thing is now running Linux, and it made me replace my aging ThinkPad X230 with a MacBook Pro - where active cooling clearly is required, but you also get a lot of performance out of it.

    The really big thing is that they managed to scale power usage nicely over the complete load range. For the Max/Ultra variants you get performance (and power draw/heat) under high load comparable to the top mobile Ryzens - but under low load you still get a responsive system at significantly less power draw than the Ryzens.

    Intel is playing a completely different game - they did manage to catch up a bit, but their chips generally still run hot and are power hogs. Currently it’s just a race between Apple and AMD - and AMD is gimped by nobody building proper notebooks around their CPUs. The prices Apple charges for RAM and SSDs are insane, though - they do get additional performance out of their design (unlike pretty much all x86 notebooks, where soldered RAM offers the same throughput as a socketed one), but having an M.2 slot for a slower extra SSD would be very welcome.







  • aard@kyu.de to Lemmy@lemmy.ml · How Lemmy's Communist Devs Saved It
    6 months ago

    A major difference is how they interact with feedback - the main reason I never ran my own Mastodon instance is the developers’ attitude. “We’re not interested in helping you because you didn’t set it up exactly as in the guide” was (and maybe still is) all over the Mastodon bug tracker.

    That was the first thing I looked for when Lemmy became popular - and I found they were taking deployment issues seriously, even on the most absurd setups.

    Additionally, they take suggestions seriously - even if they personally think an idea is stupid - and even implement some of them. Pretty much no chance of any of that happening with Mastodon.



  • Ethernet is awesome. Super fast, doesn’t matter how many people are using it,

    You wanted to say “Switched Ethernet is awesome”. The big problem with Ethernet before that was the large collision domain, which made things miserable under high load. What Ethernet had going for it back then was the low price - which is why you commonly saw 10base2 setups in homes, while companies often preferred something like Token Ring.


  • It wasn’t really a replacement - Ethernet was never tied to a specific medium, and various cabling standards coexisted for a long time. For about a decade you had 10baseT, 10base2, 10base5 and 10baseF deployments in parallel.

    I guess when you mention coax you’re thinking of 10base2 - the thin black cables with T-pieces and terminator plugs, common in home setups - which only arrived shortly before 10baseT. The first commercially available cabling was 10base5 - those thick yellow cables you’d attach a system to via AUI transceivers. Those were still around as backbone cables in some places until the early 00s.

    The really big change in network infrastructure was the introduction of switches instead of hubs - before, you had a single collision domain spanning the complete network; afterwards the collision domain was reduced to two devices. That improved the responsiveness of loaded networks to the point where many started switching over from Token Ring - which in later years was also commonly run over twisted pair, so in many cases the switch was possible without touching the cables.
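
    Why shrinking the collision domain helps can be sketched with a bit of probability - this is a hypothetical back-of-the-envelope model (a simple slotted approximation, not a faithful CSMA/CD simulation), but it shows how the chance of a collision grows with the number of stations sharing one domain:

    ```python
    def collision_prob(n: int, p: float = 0.1) -> float:
        """Chance that at least one of the other n-1 stations in the same
        collision domain transmits in the same slot, given each station
        attempts with probability p (both values are illustrative)."""
        return 1 - (1 - p) ** (n - 1)

    # Shared hub/coax: one big collision domain containing every station.
    for n in (2, 10, 50):
        print(f"{n:2d} stations: collision chance {collision_prob(n):.2f}")

    # With a switch, every link is its own 2-device collision domain
    # (or collision-free with full duplex), so the chance stays at the
    # n=2 value no matter how large the LAN grows.
    ```

    In this toy model the per-attempt collision chance climbs from 10% with two stations to well over 90% with fifty - which matches the experience of loaded 10base2 or hub networks becoming miserable.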