• 1 Post
  • 27 Comments
Joined 1 year ago
Cake day: June 14th, 2023


  • emerges from a brand you’ve probably never heard of

    Writing this on a Tuxedo Pulse 14 / gen 3 as we speak. Great little laptop. I’d wanted something with a few more pixels than my previous machine, and there’s a massive jump from bog-standard 1080p to extremely expensive 4K screens. Three megapixel screen at a premium-but-not-insane price, compiles code like a champion, makes an extremely competent job of 3D gaming, came with Linux and runs it all perfectly.

    “Tuxedo Linux”, which is their in-house distro, is Ubuntu + KDE Plasma. Seemed absolutely fine, although I replaced it with Arch btw since that’s more my style. Presumably they’re using Debian for the ARM support on this new one? This one runs pretty cold most of the time, but you definitely know that you’ve got a 54 W processor in a very thin mobile device when you try e.g. playing simulation games - it gets a bit warm on the knees. “Not x64” would be a deal-breaker for my work, but for most uses the added battery life would be more valuable than the inconvenience.




  • Stephen King’s books tend to be both very long and contain a lot of internal monologue. That’s very much not film-friendly. “Faithful” adaptations tend to drag and have a lot of tell-don’t-show, which makes for a “terrible” film. Unfaithful ones tend to change and cut a lot, which makes them “terrible” adaptations. For instance, “The Shining” film has very little to do with the book, but is an absolutely phenomenal movie. King hated it.

    “IT” - the Tim Curry version - has Tim Curry in it, who was absolutely fantastic. A lot of material from the book was cut out - I’m thinking it could be 80% or more. That includes the scene where the children have a gang bang in the sewer. Out of nowhere, with no foreshadowing, and it’s never mentioned again if I remember correctly. That might make it a “terrible” unfaithful adaptation, but you know something? I’m alright without seeing that.


  • Yeah.

    There are a couple of ways of looking at it: general-purpose computers generally implement ‘soft’ real-time functionality. It’s usually a requirement for music and video production; if you want to keep to a steady 60 fps, then you need to update the screen and the audio buffer absolutely every 16 ms. To achieve that, the AV thread runs at a higher priority than any other thread. The real-time scheduler doesn’t let a lower-priority thread run until every higher-priority thread has finished. Normally that means worse performance overall, and in some cases it can softlock the system - if the AV thread gets stuck in a loop, your computer won’t even respond to keyboard input.
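    As a rough sketch of what “runs at a higher priority” means on Linux: a program can ask the kernel for the SCHED_FIFO real-time policy. The function name and priority value below are my own choices for illustration; note that the request needs root or CAP_SYS_NICE, so this sketch falls back to ordinary scheduling rather than crashing.

```python
import os

# Frame budget for a steady 60 fps: one frame every 1/60 s, about 16.7 ms.
FRAME_BUDGET_MS = 1000 / 60

def try_realtime_priority(priority=50):
    """Request soft real-time (SCHED_FIFO) scheduling for this process.

    Returns True if the kernel granted it, False if we lack privilege
    (or aren't on Linux) and must stay with best-effort scheduling.
    """
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except (AttributeError, PermissionError, OSError):
        return False

rt = try_realtime_priority()
print(f"frame budget: {FRAME_BUDGET_MS:.1f} ms, real-time granted: {rt}")
```

    When the request succeeds, no lower-priority thread runs while this one wants the CPU - which is exactly why a stuck real-time thread can softlock the machine.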

    Soft real-time is appropriate when no-one will die if a timeslot is missed. A video stutter won’t kill you. Hard real-time is for things like industrial control. If the anti-lock brakes in your car are meant to evaluate your wheels one hundred times a second, then taking 11 ms to evaluate that is a complete system failure, even if the answer is correct. Note that it doesn’t matter whether it gets the right answer in 1 ms or 9 ms, as long as it never, ever takes more than 10. Hard real-time performance does not mean good performance; it means predictable performance.
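    The 10 ms figure follows directly from the rate: one hundred evaluations per second means a 10 ms slot each. A general-purpose OS can’t guarantee the deadline, but it can at least measure whether it was missed - the sketch below (my own names, with a dummy stand-in for the actual control step) does exactly that, and it is the measuring, not the average speed, that matters.

```python
import time

DEADLINE_MS = 10.0  # 100 evaluations per second -> a 10 ms slot each

def control_step():
    # Stand-in for reading the wheel sensors and deciding brake pressure.
    return sum(range(1000))

def run_cycles(n=100):
    """Run n control cycles; return the worst-case step time in ms and
    whether every slot met the hard deadline.  On a desktop OS this can
    only *detect* an overrun after the fact - it cannot prevent one."""
    worst_ms = 0.0
    for _ in range(n):
        start = time.perf_counter()
        control_step()
        elapsed_ms = (time.perf_counter() - start) * 1000
        worst_ms = max(worst_ms, elapsed_ms)
    return worst_ms, worst_ms <= DEADLINE_MS
```

    A hard real-time system is one where the second value is True by construction, for every slot, forever - not just on the runs you happened to measure.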

    When we program up PLCs in industrial settings, we’ll use processor interrupts for our ‘critical sections’, so that we know our code will absolutely run in time. We use specialised languages as well - no loops, no recursion - that don’t let you do things that can’t be checked for an upper time bound. Lots of finite state machines! But when we’re done, we know that we’ve got code that won’t miss a time slot in the next twenty years of operation.
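    The reason finite state machines suit this so well: one step is a single table lookup, with no loops or recursion, so its worst-case execution time is trivially boundable. A toy sketch (the valve states and events are entirely made up, not from any real PLC program):

```python
# Transition table for a hypothetical valve controller.
TRANSITIONS = {
    ("closed", "open_cmd"):  "opening",
    ("opening", "at_limit"): "open",
    ("open", "close_cmd"):   "closing",
    ("closing", "at_limit"): "closed",
}

def step(state, event):
    # One dictionary lookup per scan - no loops, no recursion, so the
    # upper time bound is easy to establish.  Unknown events leave the
    # state unchanged, a common convention in PLC code.
    return TRANSITIONS.get((state, event), state)

state = "closed"
for event in ["open_cmd", "at_limit", "close_cmd", "at_limit"]:
    state = step(state, event)
# One full open/close cycle brings the valve back to "closed".
```

    Languages like IEC 61131-3 Structured Text enforce this style at the language level; here the discipline is only by convention.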

    That does mean, ironically, that my old Amiga was a better music computer than my current desktop, despite being millions of times less powerful. OctaMED could take over the whole CPU whenever it liked. Whereas a modern desktop might always have to respond to a USB device or a hard drive, leading to a potential stutter at any time. Tiny probability, but not an acceptable one.


  • addie@feddit.uk to linuxmemes@lemmy.world · btw · edited, 1 month ago

    I don’t think that even 8 years ago, the ‘business’ choices would have been SUSE / Fedora / Debian. If you’re paying for support, then you’d be paying for RHEL, and the second choice would have been Centos, not Fedora. Debian in third place maybe, as it was the normal choice for ‘webserver’ applications, and then maybe SUSE in fourth.





  • Really? If it’s a big enough treatment works to warrant a SCADA, then I doubt an automation engineer with the experience to set it all up would be asking this question, but here goes. You’ve a couple of obstacles:

    • every contract I’ve ever seen for industrial automation has either specified which control plane they want directly, or they’ll have a list of approved suppliers which you must use. Someone after you will have to maintain this. Those maintainers will only accept the things that they have been trained on. Those things are Windows PCs running Windows software. They will reject anything else. The people running network security on those machines will have a very short list of the acceptable operating systems for running SCADA systems. That list will be a couple of versions of Windows Server. They will also reject anything else.

    • that’s not nearly enough information to make a recommendation. Which PLCs? Allen Bradley, Siemens, Mitsubishi, …? I can’t think of a job I’ve ever been on where the local HMI hasn’t matched the PLCs. The SCADA software almost invariably matches the PLCs used in the main motor control centre, with perhaps a couple of oddball PLCs for proprietary panels and such like. Could maybe ask the supplier if they’ve a Linux alternative? Siemens will laugh at you and Mitsi won’t understand the question, but AB just might.

    Sorry - I’m a Linux evangelist, but I don’t think it’s a good fit here. SCADA performance generally isn’t bad because of Windows Server - it’s fine, does what it’s intended to - but because e.g. STEP 7 is an appallingly slow and bloated piece of software which would bring a mainframe to its knees. Which is bizarre - the over-the-wire protocol connecting the machines is generally a short binary blob described in the PLC configuration - these bits are the drive statuses, these bits are an int or a float for an instrument readout - and it shouldn’t be at all slow to update it all, but slow it is.
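    To show why decoding such a blob shouldn’t be slow: here’s a sketch of unpacking a hypothetical 8-byte frame of the kind described - status bits plus an int and a float. The layout is invented for illustration; it is not the real S7 or EtherNet/IP wire format.

```python
import struct

# Hypothetical frame: 2 bytes of drive status bits, an int16 setpoint,
# and a float32 instrument readout, all big-endian.  NOT a real protocol.
FRAME = struct.Struct(">Hhf")

def decode(blob):
    status_bits, setpoint, readout = FRAME.unpack(blob)
    drives_running = [bool(status_bits >> i & 1) for i in range(16)]
    return drives_running, setpoint, readout

# Round trip: drives 0 and 3 running, setpoint -5, readout 21.5.
blob = FRAME.pack(0b1001, -5, 21.5)
drives, setpoint, readout = decode(blob)
```

    A modern CPU gets through millions of these per second - the sluggishness lives somewhere above the protocol layer.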



  • There are, but it’s complicated. Doom (2016), for instance, doesn’t handle the very large Vulkan swapchain that’s possible on some modern graphics cards, and crashes on start-up. Someone patched Proton around that time so that Doom would start; the patch was later reverted since it broke other games. Other games based on that engine - a couple of Wolfensteins, Doom Eternal - have the problem fixed in their binaries, and so run on up-to-date Proton, but depending on your hardware, only a few specific old versions of Proton will do for Doom.

    Regressions get fixed - that’s okay. Buggy behaviour which depended on regressions that got fixed - that’s a problem.



  • Spot-on advice. I’d observe that media files tend to be quite large, and if all the disk has been used for is copying these files onto it, then they’re likely to be both relatively unfragmented and at the start of the disk, so the reduction in partition size isn’t going to be as slow as it usually is. (Which is very slow.)

    Since media files are relatively infrequently read, I’d probably want to use a filesystem that checks against bit rot instead of ext4 - make sure that they’ve not become corrupt when you want to use them. But that’s Linux holy war territory, so I’ll leave it alone.
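    Staying out of the filesystem holy war: a filesystem-agnostic way to get the same bit-rot detection (though not the self-healing) is a checksum manifest, rebuilt and compared on a schedule. A minimal sketch, with function names of my own invention:

```python
import hashlib
from pathlib import Path

def manifest(root):
    """Map every file under root to its SHA-256 digest.  A later run can
    compare manifests to detect silent corruption, with no help from
    the filesystem."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(root).rglob("*")) if p.is_file()
    }

def changed_files(old, new):
    # Files present in both manifests whose contents no longer match.
    return [f for f in old if f in new and old[f] != new[f]]
```

    Hashing whole files into memory is fine for a media collection read in full anyway; for very large files you’d hash in chunks instead.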





  • addie@feddit.uk to Linux@lemmy.ml · Why Are Arch Linux Users So TOXIC? · 8 months ago

    I’d kind of hope that we’re not all like that - I use Arch btw. for my computers at home, but at work we use a combination of Redhat, Centos Stream and Amazon Linux, and I spend a lot of my day helping people with ‘Linux admin’ issues even though that’s not strictly my job.

    When getting started with Linux, there’s a certain ‘glossary gap’ if you’ve come from Windows - not even knowing the right term to search for. New starts will complain that “waa! Linux is terrible, it doesn’t recognise my hardware”, to whom invoking a couple of udev commands seems like magic. Some people get irritated when answering the same question in the same way for the tenth time and just post a link; really, those people need to step back and let someone else pick it up. The Arch wiki is fantastic, but it’s particularly fantastic if you already know what all the words mean and just need your memory prodded a bit. Having someone able to interpret a page for you is a huge benefit over having to fall down a wiki hole, which is very dispiriting when you’re trying to learn.

    And yes, Arch is great for gaming - latest version of everything, look at my extra frames - but really, it’s only a tiny bit better than e.g. Pop!_OS, and that’s a much better choice for someone who’s never used Linux before.