![](https://lemmy.blahaj.zone/pictrs/image/7yBccNRuwS.png)
![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
How much of that is cached state, based on the percentage of RAM available?
Yeah, I completely forgot about the consumer side of things. I was expecting Cisco IOS/FRR router configs, not a full web dashboard.
As someone who works with 100Gbps networking:
Nah, just grab the domain and redirect it to X. Watch him explode.
How good are the RISC-V vector instruction implementations IRL? I’ve never come across them. My experience with ARM is that even on certain data center chips the performance gains are abysmal (even when using highly optimized libraries such as DPDK).
Harder to write compilers for RISC? I would argue that CISC is much harder to design a compiler for.
That being said, there’s a lack of standardized vector/streaming instructions in out-of-the-box RISC-V that may hurt performance. But compiler-design-wise, it’s much easier to write a functional compiler for RISC-V than for the nightmare that is x86.
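For what it’s worth, the vector story has improved: the RVV 1.0 extension is ratified, just optional rather than part of the base ISA. Here’s a minimal sketch of what its C intrinsics look like, assuming a toolchain with RVV 1.0 intrinsics support (e.g. a recent GCC/Clang with `-march=rv64gcv`; intrinsic names have shifted between spec drafts, so treat the exact spellings as an assumption):

```c
// Vector add using the RISC-V Vector (RVV) 1.0 C intrinsics.
#include <riscv_vector.h>
#include <stddef.h>

void vec_add(float *out, const float *a, const float *b, size_t n) {
    // Strip-mined loop: vsetvl asks the hardware how many elements
    // it can process this iteration, so the same binary runs
    // unchanged on narrow and wide vector units.
    for (size_t i = 0; i < n;) {
        size_t vl = __riscv_vsetvl_e32m8(n - i);
        vfloat32m8_t va = __riscv_vle32_v_f32m8(a + i, vl);
        vfloat32m8_t vb = __riscv_vle32_v_f32m8(b + i, vl);
        __riscv_vse32_v_f32m8(out + i, __riscv_vfadd_vv_f32m8(va, vb, vl), vl);
        i += vl;
    }
}
```

The vector-length-agnostic model is also part of why it’s comparatively friendly to compilers: no fixed register width is baked into the code, unlike SSE/AVX on x86.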
Oh nice! A new tool! Do you happen to know how this compares to win10privacy?
Vanced got taken down due to trademark violations.
They’d need something more substantial to take down ReVanced, especially since it’s only a set of binary patches and there’s no redistribution of YT source code.
I assert that this tech is biased towards bears and raccoons.
This probably sounds pedantic, but based on this, the issue isn’t that the software is Russian; it’s that the software is under the regulation of an authoritarian government (which, in this case, is Russia).
Nginx is 2-clause BSD, which I would argue is more “open source” than Arch Linux (the official repos contain proprietary components such as Discord, Steam, and multimedia codecs). You could argue that the majority of it (and its build system) is open source, but “Arch Linux” as a whole is probably not fully open source.
Out of curiosity, what do you think of Nginx, which was Russian-based and used to have its main offices in Russia (offices that also got raided by Russian police)? Or Arch Linux, where one of the main packagers (responsible for up to 30% of official packages) is Felix Yan, who I believe is a Chinese citizen? Where is the line drawn? Is it only for-profit companies, security software, or something specific?
It was always there, but we’ve long ignored the warnings. It invented the Internet, which we took for granted. It wasn’t until Gore seeped through a series of tubes that we realized, but by then it was too late.
It had already taken over the windmills.
You can build a RISC core using an FPGA. Plenty of people have done that.
Performance will probably be an issue.
Bitwarden has TOTP support with a Premium subscription. Or you can just self-host (using Vaultwarden) and get all the features instead.
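(For anyone wondering what’s behind that feature: TOTP is just RFC 6238, an HMAC over a 30-second time counter, so any client that holds the shared secret generates the same codes. A minimal sketch in C using OpenSSL; it assumes the secret is already base32-decoded to raw bytes, uses RFC 6238’s published test key, and needs linking with `-lcrypto`. The one-shot HMAC() call is deprecated in OpenSSL 3 but still works.)

```c
// Minimal RFC 6238 TOTP sketch: HMAC-SHA1 over the big-endian
// 30-second time counter, then dynamic truncation to 6 digits.
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint32_t totp(const unsigned char *secret, size_t secret_len, time_t now) {
    uint64_t counter = (uint64_t)now / 30;   // 30-second time step
    unsigned char msg[8];
    for (int i = 7; i >= 0; i--) {           // serialize counter big-endian
        msg[i] = counter & 0xff;
        counter >>= 8;
    }

    unsigned char mac[20];                   // SHA-1 output is 20 bytes
    unsigned int mac_len = 0;
    HMAC(EVP_sha1(), secret, (int)secret_len, msg, sizeof msg, mac, &mac_len);

    int off = mac[19] & 0x0f;                // dynamic truncation (RFC 4226)
    uint32_t code = ((uint32_t)(mac[off] & 0x7f) << 24) |
                    ((uint32_t)mac[off + 1] << 16) |
                    ((uint32_t)mac[off + 2] << 8) |
                    (uint32_t)mac[off + 3];
    return code % 1000000;                   // keep 6 digits
}

int main(void) {
    // RFC 6238's published test secret ("12345678901234567890").
    const unsigned char secret[20] = "12345678901234567890";
    printf("%06u\n", (unsigned)totp(secret, sizeof secret, time(NULL)));
    return 0;
}
```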
The argument is that processing data physically “near” where it is stored (also known as NDP, near-data processing, as opposed to traditional architectures, where data is stored off-chip) is more power-efficient and lower-latency for a variety of reasons (interconnect complexity, pin density, lane charge rate, etc.). Someone came up with a design that can do complex computations much faster than before using NDP.
Personally, I’d say traditional computer architecture is not going anywhere, for two reasons. First, these esoteric new architecture ideas, such as NDP, SIMD (probably not esoteric anymore, since GPUs and vector instructions both do this), and in-network processing (where your network interface does compute), are notoriously hard to work with. It takes a CS master’s level of understanding of the architecture to write a program in the P4 language (which doesn’t allow loops, recursion, etc.). No matter how fast your fancy new architecture is, it’s worthless if most programmers on the job market can’t work with it. Second, there are too many foundational tools and applications that rely on traditional computer architecture. Nobody is going to port their 30-year-old stable MPI program to a new architecture every 3 years; it’s just way too costly. People want to buy new hardware, install it, compile existing code, and see big numbers go up (or down, depending on which numbers).
I would say the future is one where you have a mostly von Neumann machine with some of these fancy new toys (GPUs, memory DIMMs with integrated co-processors, SmartNICs) as dedicated accelerators. Existing application code probably will not be modified. However, the underlying libraries will be able to detect these accelerators (e.g. GPUs, DMA engines, etc.) and offload supported computations to them automatically to save CPU cycles and power. Think of your standard memcpy() running on a dedicated data mover on the memory DIMM if your computer supports it, as in the sketch below. This way, your standard 9-to-5 programmer can still work like they used to and leave the fancy performance-optimization stuff to a few experts.
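To make that last point concrete, here’s a minimal sketch of the dispatch idea in C. Everything named `dma_engine_*` is a hypothetical stand-in for a vendor driver API (stubbed out here so it compiles standalone), not a real library; the point is that the offload hides behind a memcpy-shaped call.

```c
// Sketch of transparent accelerator offload behind a memcpy-shaped
// call. The dma_engine_* names are hypothetical stand-ins for a
// vendor driver API, stubbed out so this builds on its own.
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

// Stubbed "driver": a real system would link the vendor library.
static bool dma_engine_available(void) { return false; }
static int dma_engine_copy(void *dst, const void *src, size_t n) {
    (void)dst; (void)src; (void)n;
    return -1; // "no engine", forces the CPU fallback below
}

// Below this size the offload's setup cost outweighs the win,
// so small copies stay on the CPU.
#define OFFLOAD_THRESHOLD (64 * 1024)

void *smart_memcpy(void *dst, const void *src, size_t n) {
    static int has_dma = -1; // probe once, cache the answer
    if (has_dma < 0)
        has_dma = dma_engine_available();

    if (has_dma && n >= OFFLOAD_THRESHOLD && dma_engine_copy(dst, src, n) == 0)
        return dst;             // the copy ran on the data mover

    return memcpy(dst, src, n); // fallback: plain CPU copy
}

int main(void) {
    char a[128] = "hello", b[128];
    smart_memcpy(b, a, sizeof a);
    return 0;
}
```

An application calling this (or a libc that does the same internally) never has to change; the probe and threshold are where the “few experts” tuning lives.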
Not sure about GreaseMonkey, but V8 compiles JS to an IL.
Node.js has a debugging flag to see the emitted IL: passing the V8 flag `--print-bytecode` (e.g. `node --print-bytecode script.js`) dumps the bytecode as it’s generated.