• 0 Posts
  • 13 Comments
Joined 1 year ago
Cake day: June 23rd, 2023


  • Jajcus@kbin.social to linuxmemes@lemmy.world · Htop too
    81 up · 5 months ago

    Well-behaved programs give control back to the kernel as soon as they are done with what they are doing. If they don’t, control is forcefully taken away from them after some assigned time.

    It looks something like this:

    Something happens – e.g. a key is pressed – a process waiting for this event is woken up and gets, say, 100 ms to do its stuff. If it can handle the key press in 50 ms, the kernel notes that it used 50 ms of CPU time and can give control to another process that is waiting for an event or busy with other work. If the key press triggered a long computation, the process won’t be done within its 100 ms; the kernel notes that it used 100 ms of CPU time and gives control to other processes with pending events or other work to do.
    After one second the kernel may have noted:

    Process A: used 50ms, then nothing, then 100ms, another 100ms and another 100ms
    Process B: was constantly busy doing something, so it got allocated 6 * 100ms in that one second
    Process C: just got one event and handled it in 50ms
    Process D: was not woken at all

    So a total of 1000 ms was used – the CPU was 100% busy.
    Of that, 60% was process B, 35% process A and 5% process C.
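    In code, that bookkeeping is nothing more than a per-process tally over the sampling window. A minimal Python sketch using the numbers from the example above:

    ```python
    # Per-process CPU time tally over a one-second sampling window,
    # using the figures from the example above.
    used_ms = {
        "A": 50 + 100 + 100 + 100,  # woken a few times, sometimes used a full slice
        "B": 6 * 100,               # busy the whole time, preempted every 100 ms
        "C": 50,                    # handled one event quickly
        "D": 0,                     # never woken
    }

    total = sum(used_ms.values())   # 1000 ms -> the CPU was 100% busy
    for name, ms in used_ms.items():
        print(f"Process {name}: {100 * ms / total:.0f}%")
    ```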

    And then that information is read from the kernel by top and displayed.
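    For the “read from the kernel” part, here is a rough sketch of the idea on Linux – sample a process’s cumulative CPU ticks from /proc/<pid>/stat twice and compare the delta to wall-clock time. It is a simplification, not top’s actual code; real tools parse /proc more carefully.

    ```python
    # Estimate this process's CPU% roughly the way top-like tools do:
    # sample cumulative CPU ticks from /proc/<pid>/stat twice.
    # Linux-only; the naive split() assumes the process name has no spaces.
    import os
    import time

    def cpu_ticks(pid):
        with open(f"/proc/{pid}/stat") as f:
            fields = f.read().split()
        # Fields 14 and 15 (1-based) are utime and stime, in clock ticks.
        return int(fields[13]) + int(fields[14])

    pid = os.getpid()
    hz = os.sysconf("SC_CLK_TCK")            # clock ticks per second

    before, t0 = cpu_ticks(pid), time.monotonic()
    while time.monotonic() - t0 < 1.0:       # burn CPU for about a second
        pass
    used = (cpu_ticks(pid) - before) / hz    # CPU seconds consumed in the interval
    print(f"{100 * used / (time.monotonic() - t0):.1f}% CPU")
    ```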

    How does the OS even yank the CPU away from the currently running process?

    Interrupts. The CPU has a way of triggering an interrupt at a specific time. An interrupt means the CPU stops whatever it is doing and runs a selected piece of kernel code. That piece of kernel code can save the current state of the user process’s execution and do something else, or restore the saved execution state of another process.
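    A loose user-space analogy (not the real mechanism – the kernel relies on a hardware timer interrupt, this uses a Unix signal): a timer set in advance forcibly diverts a busy loop into a handler, much like a timer interrupt diverts the CPU into the kernel.

    ```python
    # User-space analogy of a timer interrupt (Unix-only): the busy loop
    # below is diverted into on_timer() when the timer fires, without the
    # loop ever checking for it.
    import signal
    import time

    def on_timer(signum, frame):
        print("timer fired – the busy loop was interrupted")

    signal.signal(signal.SIGALRM, on_timer)
    signal.setitimer(signal.ITIMER_REAL, 0.1)   # fire once, after 100 ms

    start = time.monotonic()
    while time.monotonic() - start < 0.3:
        pass                                    # 'busy work'; the handler still runs
    ```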




  • What is more interesting, Google arrived as a friendlier alternative, with drastically fewer ads than Alta Vista and the other search engines of that time. Today’s Google is only now approaching the amount of ads on the search results page that those had. It is just a bit smarter about mixing ads with actual search results, and the ads are more targeted (which is not necessarily a good thing).



  • Jajcus@kbin.social to Selfhosted@lemmy.world · Introducing Raspberry Pi 5
    89 up · 3 down · 9 months ago

    Doesn’t sound like the ‘cheap small computer you can run your hobby electronics project on’ that the original Pi used to be. It is not as cheap, and it is a power-hungry beast, though still small. More and more like a PC, and less and less like a small, cheap embedded platform. For some people that is a plus (I guess for most people here), for some not so much.

    I tend to build my projects on the Raspberry Pi Pico now, but sometimes I need something more powerful, and the Raspberry Pi 5 would be too much.


  • The idea is you package the software once and it works forever, because all its dependencies are provided in exactly the right versions. And the dependencies may include things that would not be included in the base system (like super new versions of some important libraries).

    That is true, but that is also the problem: both the package and all its dependencies may never get updated.

    In a traditional Linux distribution, like Debian, every package must be built within the same system, which usually means specific versions of all the key libraries. When the key libraries are upgraded, some packages compiled for the older versions won’t work, and a package might not even compile against the newer versions. It is often not possible or practical to provide multiple different versions of libraries (or of other shared system components). The result is that distribution developers have a lot of hard work updating all the packages. When there is no one to fix a package for the next release of the distribution, the package gets removed from it. That happens when the package is not maintained upstream and/or no one cares enough to maintain it in the distribution. In that case – is it worth keeping?

    Snap makes packaging applications much easier and keeps them more decoupled from the operating system ‘core’. Less maintenance is needed… but that also means less maintenance will be done, which is not necessarily good.

    On the other hand, Snap allows an application to be updated more rapidly than the distro core – in that case it can make things safer: fixes to applications and their dependencies can ship faster than they could through the normal Debian release process. But that depends on the maintainers of the specific snap and of its dependencies.





  • The differences between 2.4 and 2.6 were quite big; I don’t think any later kernel release brought a change that big. But that was also the time when Linux was transitioning from a hobby project (already useful for serious stuff) to a serious, professional operating system – the last moment for major refactoring.

    The Linux kernel is still changing and being constantly refactored, but now the changes tend to be more gradual and version numbers matter much less.