  • Writing code is itself a process of scientific exploration; you think about what will happen, and then you test it, from different angles, to confirm or falsify your assumptions.

    What you confuse here is doing something that can benefit from applying logical thinking with doing science. For example, arithmetic is part of mathematics and mathematics is a science. But summing numbers is not necessarily doing science. And if you roll, say, octal dice to see if the result happens to match an addition task, that is certainly not doing science, and no, the dice still can’t think logically and certainly don’t do math even if the result sometimes happens to be correct.

    For the dynamic vs static typing debate, see the article by Dan Luu:

    https://danluu.com/empirical-pl/

    But this is not the central point of the above blog post. The central point is that, because LLMs by their very nature produce statistically plausible output, self-experimenting with them subjects one to very strong psychological biases via the Barnum effect. Therefore it is, first, not even possible to assess their usefulness for programming by self-experimentation(!), and second, it is even harmful, because these effects lead to self-reinforcing and harmful beliefs.

    And the quibbling about what “thinking” means just shows that the pro-AI arguments have degraded into a debate about belief. The argument has become “but it seems to be thinking to me”, even though it is neither technically possible nor observed in reality that LLMs apply logical rules: they cannot derive logical facts, cannot explain their output by reasoning, are not aware of what they ‘know’ and don’t ‘know’, and cannot optimize decisions against multiple complex and sometimes contradictory objectives (which is absolutely critical to any sane software architecture).

    What would be needed here are objective, controlled experiments testing whether developers equipped with LLMs can produce working and maintainable code any faster than ones not using them.

    And the very likely result is that the code they produce using LLMs is never better than the code they write themselves.


  • Are you saying that it is not possible to use scientific methods to systematically and objectively compare programming tools and methods?

    Of course it is possible, in the same way that one can investigate which methods are most effective in teaching reading, or whether brushing teeth helps prevent caries.

    And the latter has been done, for example to compare statically vs. dynamically typed languages. Only, the result so far is that there is no conclusive advantage either way.


  • What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

    Some people firmly believe LLMs are helpful. But programming is a logical task, and LLMs can’t think - they only generate statistically plausible patterns.

    The author of the article explains that this creates the same psychological hazards as astrology or tarot cards - psychological traps that have been exploited by psychics for centuries - and even very intelligent people can fall prey to them.

    Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models can create working software any faster. Given the multi-billion dollar investments, and that there has been more than enough time to carry out controlled experiments, this should raise loud alarm bells.





  • So, how many users of Debian would even think about creating their own packages?

    I already have a hunch what went wrong: they were probably trying to package software that has no standard build system. This is painful, because the standard tools - like GNU autotools for C programs, CMake, or setuptools and its newer siblings for Python - make sure that the right commands are used to build a package on whatever platform, and, importantly, that its components are installed into the right places. Software that doesn’t use them is a problem to package for any standard distribution.

    Guix has support for all the major build systems (otherwise, it could not support building 50,000 packages).
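
    To illustrate: when upstream uses a standard build system, the Guix definition is mostly declarative boilerplate. Here is a minimal sketch; the package name, URL, and hash are hypothetical placeholders, not a real package:

    ```scheme
    ;; Minimal sketch of a Guix package definition for autotools-based
    ;; software; hello-tool, its URL, and the hash are made up.
    (define-module (my-packages hello-tool)
      #:use-module (guix packages)
      #:use-module (guix download)
      #:use-module (guix build-system gnu)
      #:use-module ((guix licenses) #:prefix license:))

    (define-public hello-tool
      (package
        (name "hello-tool")
        (version "1.0")
        (source (origin
                  (method url-fetch)
                  (uri (string-append "https://example.org/hello-tool-"
                                      version ".tar.gz"))
                  ;; placeholder; obtain the real hash with `guix download'
                  (sha256
                   (base32 "0000000000000000000000000000000000000000000000000000"))))
        ;; gnu-build-system runs the usual ./configure && make && make install
        (build-system gnu-build-system)
        (synopsis "Hypothetical example tool")
        (description "Sketch of a package that uses GNU autotools.")
        (home-page "https://example.org")
        (license license:gpl3+)))
    ```

    Swapping gnu-build-system for cmake-build-system or pyproject-build-system is usually all it takes for the other standard build systems.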



  • Yes, Nix solves the same problem. The main difference is that Nix’s package-description language is less attractive to some developers than the one Guix uses, which is Guile Scheme. Guile is very mature, well documented, and has good performance.

    I think that will give Guix an advantage in the long run, since a successful distribution needs a large body of packages, and for this, volunteers need to write package definitions and maintain them. Guix makes it easier to write definitions.

    Clearly the strict focus on FLOSS will prevent some packages, like the NVidia drivers, from appearing there. But on the other hand, this gives you a system which you will still be able to compile completely from source in 10 years’ time.


  • Guix is really making fantastic progress and is a good alternative in the space between stable, fully FOSS distributions like Debian and more up-to-date distributions like Arch.

    And one interesting thing is that the number of packages is now so large that one can frequently install additional, more recent packages on a Debian system, or ones that Debian does not package at all.

    For example, I run Debian stable as the base system, Guix as an extra package manager (and Arch in a VM for trying out the latest software for programming).

    The thing is, Guix now often provides more recent packages than Debian, such as many Rust command line tools, where Debian is lagging a bit. There are many interesting ones, and most are recent because Rust is progressing so fast. Using Guix, I can install them without touching a language package manager, regardless of whether a tool is written in Rust, Go, or Python 3.13.

    Or: today I read an article about improvements in spaced-repetition learning algorithms. It mentioned that the FLOSS software Anki provides them, and I became curious and wanted to have a look at Anki. Well, Debian has no “anki” package - and it is written, among other languages, in Python and Rust, so good luck getting it onto Debian stable. With Guix, I only had to run “guix install anki” and had it installed.

    This works a tad slower than apt-get … but it still saves time compared to installing stuff and dependencies manually.
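
    A related convenience: the same set of tools can be declared in a manifest file and installed in one go. A minimal sketch in Guile Scheme; “ripgrep” and “fd” are Guix package names for two such Rust tools, picked here just as examples:

    ```scheme
    ;; manifest.scm -- declarative package list for `guix package -m'
    (specifications->manifest
     '("ripgrep"   ; Rust replacement for grep
       "fd"        ; Rust replacement for find
       "anki"))    ; the spaced-repetition program mentioned above
    ```

    Then `guix package -m manifest.scm` installs exactly this set, and the same file reproduces it on any other machine.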


  • I don’t get why people constantly complain that the Guix project does not distribute or actively support distribution of binary, proprietary software. That is like complaining that Apple does not sell their laptops with Linux, that Microsoft does not sell Google’s Chromebooks, or that Amazon does not distribute free eBooks from Project Gutenberg, ScienceHub, or O’Reilly.

    And users can of course use the nonguix channel to get their non-free firmware or whatever, but they should not complain and demand that volunteers of other projects do more unpaid work. Instead, they should donate money or volunteer to do it themselves.

    But guess what? I think these complaints come in good part from companies which want to sell their proprietary software. Valve and Steam show that a company can very well sell software for Linux, with mutual benefit - but not by freeloading on volunteer work.

    And one more thing: Guix allows exactly what Flatpaks etc. promise. Any company - as well as any lone coder, team of scientists, or small FLOSS project - can build their own packages on a stable Guix base system, with libraries and everything, binary or from source, and distribute them from their own website in a company channel - just like any Emacs user can distribute their own self-written Emacs extensions from a web page. And thanks to the portability of the Guix package manager, this software can be installed on any Linux system, resting on a fully reproducible base.
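
    For the curious, pointing Guix at such a channel is a few lines of Guile Scheme. A sketch; the channel name and URL are hypothetical placeholders:

    ```scheme
    ;; ~/.config/guix/channels.scm -- subscribe to a (made-up) extra
    ;; channel in addition to the default Guix channel.
    (cons (channel
            (name 'acme-tools)
            (url "https://example.com/acme-guix-channel.git"))
          %default-channels)
    ```

    After the next `guix pull`, packages from that channel install like any other.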





  • If you like screen or tmux, you might like a tiling window manager such as i3 or sway, or GNOME with the PaperWM extension. It can have real advantages for older folks (like me) who no longer have perfect vision, because it is much more conservative with screen space. After a few days of learning, switching windows and desktops also becomes really fast. This is not black-or-white: the desktop WMs do have keyboard shortcuts and window layouts which mimic tiling WMs, and tiling WMs have a few desktop features. The former are a bit more convenient and easier for beginners, while the latter are blazingly fast.



  • For me, this result is also not too surprising:

    1. If allowing / using Undefined Behavior (UB) enabled systematically better optimizations, Rust programs would be systematically slower than C or C++ programs, since (safe) Rust does not allow UB. But this is not the case. Rather, sometimes the Rust programs are faster, and sometimes the C/C++ ones. A good example is the Debian Benchmark Game comparison.

    2. Now, one could argue that in the cases where the C/C++ programs turned out to be faster than the Rust programs, at least there exploiting UB gave an advantage. But if you examine these programs in the Debian benchmark game (or in other places), this is not the case either. They do not rely on clever compiler optimizations based on assumptions around UB. Instead, they make heavy use of compiler and SIMD intrinsics, manual vectorization, inline assembly, manual loop unrolling, and in general micro-managing what the CPU does. In fact, these implementations are long and complex and not at all idiomatic C/C++ code.

    3. Thirdly, one could say that while these benchmark examples are not idiomatic C code, one at times needs the ability to fall back to things like inline assembly, and that this is a characteristic capability of C and C++.

    Well, Rust supports inline assembly as well, though it is rarely used.



  • Another big plus of that approach: if your laptop or PC breaks, you can move the VM image with Windows to the new hardware exactly like any other file you have backed up (you do backups, don’t you?) and use it as before. This especially sidesteps the problem of forced OS upgrades when the new hardware does not support the old Windows version, or you no longer have the installer and license keys, but the new Windows version does not support your old documents, media formats, or peripherals like scanners.

    Also, before you modify your Windows install in a way that might break it, you can make a snapshot of the VM image - which is just a copy of a file - and restore it when needed.



  • A few more thoughts here:

    • For a first distribution, Ubuntu is fine, too. Also, you could ask the people around you what they know best and whether they would like to help you. For example, Debian is a bit harder to install but is rock solid once it runs.
    • If you are concerned about security, you should practice a strict separation between trusted software installed by you and untrusted data presented to you via web, mail, or the Internet. Never run untrusted code. Windows blurs that line, and this is fatal.
    • With respect to hardware support: most standard PC hardware will work very well with Linux, even old scanners that no longer have Windows driver support. NVidia is the bad exception, and the bad rap is still justified because of Wayland, the new graphics display server. If you can afford it, consider buying something else. The hardware support landscape is different for laptops: here, refurbished Lenovo ThinkPad or Dell laptops are the first choice, and also the best value for the money.