Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 96 Posts
  • 4.24K Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • I think that the question of whether an industry would benefit is a hard one. It depends on your perspective and which benefits one is aiming for.

    I think that if I had to choose one category, I’d do CAD.

    So, this covers a wide range of different industries and roles. 3D and 2D mechanical engineering. Chip and circuit board design. Designing 3D objects for 3D printers.

    There is open-source CAD software out there, of varying degrees of sophistication and for different purposes. But in general, I kind of expected to stumble into a huge wealth of world-beating software. I mean, it’s a field with a lot of technically-oriented people who mostly don’t compete on software as their core competency. I could see a lot of people wanting to scratch itches, and the situation being kinda like it is for mathematics software, with strong open-source entrants. But that isn’t the case. There’s very much usable stuff, depending upon what you want to do. But the big boys in the field are proprietary.

    There’s FreeCAD. I use OpenSCAD to do code-oriented design of objects for 3D printers. I wouldn’t call Blender a CAD package, more a modeler, though it’s adjacent to the field and there is some CAD-related add-on stuff. There’s QCAD. I don’t know how practical BRL-CAD is today, but it’s out there.



  • I mean, Valve could explicitly say that they have some trusted hardware and software stack or something and let games know whether the environment’s been modified.

    That’d require support from Valve, and it’d be about the only way that you could have both a locked-down mode for multiplayer games where addressing cheating is a problem (and where I think the “closed console system” model is probably more appropriate, and the “open PC” model is at best kludged into kinda-sorta working like a console) and also let the system still run in an “open mode”.

    My own approach is just to not play most multiplayer competitive games on PCs. I’ve enjoyed them in the past, but for anything seriously reflex-oriented like FPSes, your reflexes go downhill with age anyway. And they come with all kinds of issues, even on a locked-down system that successfully avoids cheating. People griefing. You can’t generally pause the game to use the toilet, deal with a screaming kid, or answer the door. The other players, unlike game AIs, aren’t necessarily going to be optimized to play a “fun” game for me. Single-player, by contrast, doesn’t need an Internet connection, so being in a remote area isn’t a limiting factor.

    I think that the future is gonna be shifting towards better game AIs. Hard technical problems to solve there, but it’s a ratchet — we only get better over time.


  • I have a Bluetooth Ultimate. (Keep in mind that 8BitDo makes a wide range of “Ultimate” controllers with extremely-confusingly-similar names, which don’t have the same hardware and span a wide range of prices, so be very careful when buying to ensure that you’re getting what you want. For example, when I bought mine, the “Bluetooth Ultimate” had Hall effect thumbsticks and the “Ultimate” did not. The “Bluetooth Ultimate” didn’t have an Xbox-style face button layout available, just a Nintendo one; you could remap this in software, of course, but the gamepad itself couldn’t do the mapping. Then there’s an “Ultimate C”, and it sounds like also an “Ultimate 2”.)

    I’m fine with its ergonomics.

    But, then…I’m also fine with the ergonomics of a bunch of other gamepads that I have.

    My own take is that pretty much all controller ergonomics are fine. The only gamepad I’ve ever used that I’d call outright bad was the original rectangular NES gamepad from the 1980s. It had a hard, squared-off D-pad that will absolutely kill your thumb with enough use.

    Probably dishonorable mention goes to a wired Logitech controller dating to the 1990s, and to a lesser extent, a later Logitech controller; these had a D-pad that rolled to the diagonal too easily.

    All modern controllers that I’ve used are noticeably more-comfortable for extended use than gamepads from the '80s and '90s.

    I’ve owned a wide range of PlayStation, Xbox, third-party, etc controllers, not to mention joysticks and other game control devices, and I’ve always been generally pretty happy with the ergonomics. That doesn’t mean that they don’t differ, but it’s pretty doable to adapt to the differences. Symmetric PlayStation-style thumbstick layout versus asymmetric Xbox-style layout. Some are heavier than others, but none enough to really bug me. Nintendo face button layout versus Xbox face button layout can be remapped in software. I’ve been able to adapt to different trigger pull force levels. Clicky face buttons that are popular on some new controllers versus no-tactile-feedback buttons. Controller bodies of slightly different size and shape. A new, different controller might feel weird at first, but in general, I’ve found that the brain is pretty good at bridging the differences.

    Some have more buttons, and in recent years I’ve had enough bad luck with stick drift that I’ve moved to Hall effect thumbsticks. Some don’t have rumble motors. Some have RGB lights. One could prefer a gamepad over another for various functionality reasons, but…I think that on ergonomics, vendors have pretty much done a good job.


  • I mean, you can run a Linux phone now:

    [email protected]

    Downside is that you aren’t going to have a large software library optimized for touchscreen use. The hardware options are pretty disappointing compared to Android. Not all hardware functionality may be supported, especially if it’s a repurposed Android phone. Android and iOS software is mostly designed to expect that it’s on a fast WiFi connection some of the time and on a slow/limited mobile data link some of the time, and to act accordingly; most GNU/Linux software is not. Battery life is often not fantastic.

    I still haven’t been pushed over the edge, but I’m definitely keeping my eye on it. I’m just not willing to develop software for Android. I know that GNU/Linux phones will stay open. I am not at all sure that Android won’t wind up locked down by Google at some point, and over the years, it’s definitely shifted in the locked-down direction.

    My current approach is to carry around a Linux laptop and try to shift my usage towards using the Android phone as a tethering device for the laptop, to get Internet access everywhere. That’s not always reasonable — you need to sit down to use the laptop — but then the only thing that the phone really has to be used for is dealing with text messages and calls. If you really wanted to, as long as the laptop was on, you could run a SIP client on the laptop to get VoIP service from an Internet provider over the phone’s data connection, and not rely on the phone’s calling functionality at all. The laptop isn’t really set up to idle at very low power the way a phone is, though, or to wake up when a call comes in, so it’s not really appropriate for incoming calls.

    If I need to access something one-handed without sitting down, I can fall back to using the phone.

    And it does have some nice benefits, like having a real keyboard, a considerably more-powerful system, a much larger library of software, a better screen and speakers, a 3.5mm headphone jack (all those phone space constraints go away on a laptop!) and so forth. You can move the phone to somewhere where its radio has good reception and just have it relay to the laptop, which isn’t an option if you’re using the phone itself as the computing device.

    You can, though I don’t, even run Android software on the laptop via Waydroid.
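
    If you did want to try that, the basic Waydroid setup is something like the following sketch (it needs a Wayland session, and the APK path here is just a placeholder):

    # Download an Android image and set up the container (one-time).
    sudo waydroid init
    # Start the container session and bring up the Android UI.
    waydroid session start
    waydroid show-full-ui
    # Install an APK from the Linux side.
    waydroid app install ~/Downloads/some-app.apk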

    I don’t presently use it in this role, but there’s a software package, KDE Connect, that lets one interface a phone and a Linux desktop (well, laptop in this case), and do things like happily type away in text message conversations on the laptop, if one has the laptop up and running.
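
    As a sketch of what that looks like from the command line (kdeconnect-cli ships with KDE Connect; the device ID and phone number here are placeholders):

    # List paired and reachable devices, with their IDs.
    kdeconnect-cli --list-devices
    # Pair with the phone.
    kdeconnect-cli --pair --device abc123def456
    # Send a text message through the phone, from the laptop.
    kdeconnect-cli --device abc123def456 --send-sms "On my way" --destination 5551234567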

    I’m thinking that that approach also makes it easier to shift my use to a GNU/Linux phone down the line, since mostly, all I absolutely need from a GNU/Linux phone then is to act as a tethering device and handle phone calls and texts. It’s sorta the baby-steps way to move off Android: get my dependence down to the point where moving is no big deal.


  • [continued from parent]

    Here’s an example firejail profile that I use with renpy (a software package that runs visual novels) on Wayland. Note that this won’t run everything, especially since one is using a different version of renpy than a game ships with, but generally, with this in place, one can just go to a renpy game’s directory and type firejail renpy . and it’ll run. This doesn’t isolate RenPy games against each other, but it does keep them from mucking with the rest of the system:

    renpy firejail profile
    # whitelist profile for RenPy (game)
    noblacklist ${HOME}/.renpy
    
    # standard firejail includes that blacklist common sensitive paths
    include disable-common.inc
    include disable-programs.inc
    include disable-devel.inc
    
    # drop all capabilities, cut off the network, and constrain privileges
    caps.drop all
    net none
    nogroups
    nonewprivs
    noroot
    seccomp
    
    # log blacklist violations to syslog
    tracelog
    
    # minimal /dev and a private /tmp
    private-dev
    private-tmp
    
    # only ~/.renpy (saves and persistent data) is visible in $HOME
    mkdir     ~/.renpy
    whitelist ~/.renpy
    
    # All Renpy games need to be stored under here.
    whitelist ${HOME}/m/restricted-game/
    read-only ${HOME}/m/restricted-game/
    read-write ${HOME}/m/restricted-game/renpy
    
    # no DVD, TV, or U2F devices; block non-native syscall ABIs
    nodvd
    notv
    nou2f
    seccomp.block-secondary
    

    It’s more of a tool for letting one run non-packaged software in isolation…but one generally needs to set up the profiles oneself. For example, that profile blocks network access for renpy games…but there are games that will fail if they can’t access the network (though you could say that this is desirable, if you don’t want those games phoning home).


  • There are a couple of routes to doing this, and what’s appropriate depends on what one is doing. One tends to do this if one is concerned about software potentially being malicious, or wants to limit the scope of harm if non-malicious software is compromised in some way.

    Virtual Machines

    I guess the most-straightforward is to basically create a virtual machine. You’re creating another “computer” that runs atop your own. You install an operating system on it, then whatever software you want. This “guest” computer runs on your “host” computer, and from its standpoint, the “host” computer doesn’t exist. Software running in the “guest” computer can’t touch the “host” computer.

    Pros:

    • It’s pretty hard to make mistakes and expose the host computer to the guest computer.

    • As long as you know how to install an operating system and software on the thing, you know most of what’s involved to set this up. Mostly, you just need to learn how to use whatever software manages the guest.

    • You can run a different operating system. I sometimes run a Windows VM on my Linux machine to run isolated Windows software.

    • You can (usually at the cost of performance) run software designed for a different architecture.

    • Software running in the guest can’t eat up all the memory on the host.

    • It’s pretty safe, hard to accidentally let malicious software in the guest touch the host.

    Cons:

    • While things have gotten better here, because you’re running another operating system, it tends to be relatively heavyweight. Running many isolated VMs uses more memory. Disk space adds up, because you’re having to install whole operating systems, and their filesystems typically need to live on a “disk image”, a file on the host computer that stores the entire contents of what looks like a disk drive to the guest.

    • Networking can be more complicated, since one traditionally has what looks like an entire separate computer. For some applications, one can have the host computer do network address translation for the guest, in the same sort of way that a consumer broadband router makes all the computers on a home network appear to come from one IP address, by intercepting their outbound connections to the Internet and opening connections on their behalf. But it can be kind of obnoxious to, say, run a server on the guest; port forwarding, as in the QEMU sketch below, is one workaround.

    • Without adding special “paravirtualization” software that “breaks the walls” between the guest and the host — and bugs in that software might create holes where software in the guest might affect the host — things like transferring files between the guest and host, or altering the amount of memory allocated to the guest, can be pretty inefficient.

    • Traditionally, and I believe still in 2025 (though I haven’t looked recently), on Linux there isn’t really a great way to share GPU hardware on the host with the guest, to create a “virtual 3D video card”. This means that this isn’t a great route for running 3D games on the guest. There are some ways to “pass through” hardware directly to a guest, so one could allocate a whole physical 3D video card to a guest.

    One open-source software package to do this on Linux is QEMU (which you’ll sometimes see referred to as KVM, after a second piece of software used to accelerate its execution on Linux). A graphical program to create virtual machines and interact with them on the desktop is virt-manager. An optional paravirtualization package is virtio.
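
    As a sketch of what that looks like in practice with QEMU (the file names, sizes, and forwarded port here are just placeholders):

    # Create a 64 GB copy-on-write disk image for the guest.
    qemu-img create -f qcow2 guest.qcow2 64G
    
    # Boot the installer ISO with KVM acceleration, 8 GB of RAM, and 4 CPUs.
    # The virtio disk and NIC are the optional paravirtualized devices;
    # hostfwd forwards host port 2222 to guest port 22, one answer to the
    # run-a-server-on-the-guest problem mentioned above.
    qemu-system-x86_64 \
        -enable-kvm -m 8G -smp 4 -cpu host \
        -drive file=guest.qcow2,if=virtio \
        -cdrom installer.iso -boot d \
        -nic user,model=virtio-net-pci,hostfwd=tcp::2222-:22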

    I’d typically use this as a reliable way to run a single piece of potentially-sketchy Windows software on Linux without being able to get at the host.

    Containers

    These days, Linux can set up a “container” — a sort of isolated environment where particular pieces of Linux software can run without being able to see software outside the “container”.

    Pros:

    • Efficient. Unlike virtual machines, this uses essentially no more resources than running the software directly on the host.

    • Not too complicated. Depending upon what one’s doing, this does require spending some time to learn the software involved with the containerization.

    • You can typically run other Linux distros in the “guest”, everything except their kernel, which is shared with the host; there’s software to help assist in this.

    • Disk space usage can be more-efficient than a virtual machine, since it’s pretty straightforward to share part of a directory hierarchy on the host with the guest. By the same token, file interchange can be efficient.

    • The same is generally true for memory — it’s easy for the kernel to efficiently share a limited amount of (or all) the host memory with software running in the container.

    • Using the network is pretty straightforward, if one wants to run a server and wants it to look like it’s running on the host.

    Cons:

    • You can’t run other operating systems or other kernels, since they’re all sharing the host kernel. This is good for running (most) Linux software, but not useful for running other operating systems.

    • The main “window” between the host and the guest is the Linux kernel. This is a relatively large piece of software, with a larger “edge” than with VMs — different kernel APIs that might all have security holes and let malicious “guest” software break out.

    • I understand that it’s possible to do some level of GPU sharing (this is of interest for people running potentially-malicious generative AI software, where a lot of software is being rapidly written and shared these days). But in general, it’s probably going to be a pain to do things like run a typical game under a container.

    This has been increasingly popular as a way to efficiently run server software in isolation.

    While the Linux kernel provides the underlying containerization primitives (namespaces and cgroups, which tools like lxc use fairly directly), it’s common to use higher-level software on top of them to provide some additional functionality.

    Docker. This has been popular as a way to distribute servers that come with enough of a Linux distribution that they can run without regard for the distribution that the host is running. It can efficiently store “images” — one can start with an existing, mini Linux distro, make a few changes, and then just distribute the changes over the network. A newer, mostly-drop-in replacement is podman.
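
    For a flavor of it (the image choice and paths here are just placeholders):

    # Run a throwaway Debian container: no network access, a host
    # directory mounted read-only at /data, and removed on exit.
    docker run --rm -it \
        --network none \
        -v "$PWD/data:/data:ro" \
        debian:stable bash
    
    # podman accepts the same syntax, without needing a root daemon:
    podman run --rm -it --network none debian:stable bash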

    Another system is flatpak. This internally uses bubblewrap, and is aimed at running desktop software in isolation. Notably, one can run Steam (and all games it runs) in a flatpak; I have not done this. Typically one expects the software provider to provide a flatpak.
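
    For the Steam case, it looks something like this (com.valvesoftware.Steam is the application ID on Flathub; as said, I haven’t tried this myself):

    # Install Steam from Flathub and run it; games it launches live
    # inside the flatpak sandbox too.
    flatpak install flathub com.valvesoftware.Steam
    flatpak run com.valvesoftware.Steam
    
    # Inspect the sandbox permissions an app was granted:
    flatpak info --show-permissions com.valvesoftware.Steam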

    firejail

    Probably this is best-referred to as a containerized route, but I’ll split it out. This uses Linux namespaces, seccomp, and a range of other techniques to set up an isolated environment for software. It’s more oriented towards simply letting you run a piece of software that you would normally run on the host in an isolated environment, sharing a number of resources from the host. I’ve found this useful in the past for running 2D games that would normally run on the host and aren’t packaged by anyone else. It’s a nice way, if you know what you’re doing, to simply remove access to things like the filesystem or the network, or to make parts of the filesystem accessible read-only.

    Pros:

    • Outside of maybe flatpaked Steam, probably the most-practical route to run arbitrary games that you’d normally run on the host. And I believe that it should be able to run 3D games via Wayland, though I haven’t done this myself.

    • Efficient.

    • One doesn’t need to have an existing package, like a Docker image or flatpak downloaded from the network, or go to the work of generating one oneself — this is oriented towards a minimal-setup way to run software already on the host in isolation.

    Cons:

    • “By default insecure”. That is, normally all host resources are shared with the guest — software can access the filesystem and everything. This is kind of a big deal, since if one makes an error in restricting resources, one might let software run unsandboxed in some aspect.

    • Takes some technical knowledge to set up and diagnose any problems (e.g. a given software package doesn’t like to run with a particular directory read-only).

    • There are “profiles” set up for a small number of software packages that ship with firejail, but in general, it’s aimed at you creating a profile yourself, which takes time and work.
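
    As an illustration of the ad-hoc end of it (the game paths here are just placeholders):

    # Run a game with no network access and a throwaway home directory;
    # anything it writes to $HOME is discarded when the sandbox exits.
    firejail --net=none --private ./some-game
    
    # Or keep your real home, read-only except for one save directory:
    firejail --read-only=${HOME} --read-write=${HOME}/.local/share/some-game ./some-game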

    [continued in child comment]