Just an ordinary myopic internet enjoyer.

Can also be found at lemmy.dbzer0, lemmy.world and Kbin.social.

  • 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: July 4th, 2023


  • I’m probably one of those weirdos who use VSCode, Kate, Nano, and sometimes KWrite all in their different niches.

    I do most of my programming work in VSCode, but most of my shell scripting in Kate. When I edit configuration files, I’m usually on the command line and thus use Nano (sorry, I’m too stupid to use either Emacs or Vim, let alone Vi). When I’m just looking at text files (or doing a quick edit) via my file manager, I use KWrite. With the exception of VSCode, they’re all provided in my installation by default.

    Having said that, trying out different editors will let you pick the one that best fits your requirements. Kate is more powerful than I need for what I use it for, but since it’s already there, the additional features are nice to have. I actually had to explore a bit before I settled on VSCode for my programming work, and while there’s probably an editor that fits my needs better, my workflow has already adapted to what I currently have.


  • Isn’t that making the problem worse though? If you have a tool that resolves your problem for you, wouldn’t that make you dependent on it, and thus leave you even more helpless when moving to another ecosystem (like, yeah, Arch)?

    Arch is built for a particular kind of Linux user, though. It’s probably the worst choice for a “not a computer person” to move into, issues of dependency hell aside.


  • Having learned how to use computers via MS-DOS, then growing up mostly using Windows machines, and then moving to daily-driving Linux in the past handful of years, I think the problem is more about context. If I see an error message, it’s not that I don’t read it. Rather, if I lack the context to understand what it’s trying to tell me (and more importantly, what I can do to resolve the problem I’m having), I’m out of luck and have to ignore it.

    It was when I switched to Linux that I picked up the habit of searching the error message online and then browsing the various pages (mostly Stack Overflow, sometimes Arch Linux wiki pages) that might or might not lead me to the context behind the error message. If I get lucky, I find a clue to resolving my problem on top of understanding what the error message is about. Other times, I end up even more confused and give up.

    And then there’s the monstrosity that is the logs. I’m pretty much illiterate when it comes to them, and reading them might as well be reading arcane records of eldritch daemons keeping my machine working (in a way, they indeed are). Copy-pasting some snippets from them into an online search is a crapshoot. I may find something that fits my context, but a lot of times, it’s for a different problem. It might not even be for my OS/distro/package/version.


  • I was actually thinking of something like markdown or HTML forming the base of that standard. But it’s almost impossible (is it?) to do page layout with either of them.

    But yeah! What I was thinking when I mentioned a LaTeX-based standard is a base set of “modules” (for lack of a better term) that everyone would have and that would guarantee interoperability: that it’s possible to create a document with the exact layout one wants using just the base standard functionality, and that things won’t break when opening a document in a different editor.

    There could be additional modules to facilitate things, but nothing like the ’90s proprietary IE tags. The way I’m imagining this, the additional modules would build on the base modules, making things slightly easier, but they would ultimately depend on the base functionality.

    IDK, it’s really an idea that probably won’t work upon further investigation, but I just really like the idea of an open standard for documents based on LaTeX (kinda like how HTML has been for web pages), where you could work on it as a text file (with all the tags) if needed.





  • Ah, yay is an AUR helper, though I personally see it as a pacman helper as well. Link here. Some of the flags and options that can be used with pacman can also be used with yay; thus, some of the flags in the aliases I use are actually pacman flags. Anyway, on to the breakdown.

    alias yy='yay -Y --needed --norebuild --nocleanafter --nodiffmenu --noredownload --nocleanmenu --removemake --sudoloop'

    This one is what I use to look up packages. Running yy «search term» lists the packages matching the search term and prompts the user on which package(s) to install.

    Flag: Description
    -Y: performs yay-specific operations
    --needed: (pacman) do not reinstall up-to-date packages
    --norebuild: skip package build if in cache and up to date
    --nocleanafter: do not remove package sources after a successful build
    --noredownload: skip PKGBUILD download if in cache and up to date
    --nodiffmenu: don’t show diffs for build files
    --nocleanmenu: don’t clean build PKGBUILDs
    --removemake: remove makedepends after install
    --sudoloop: loop sudo calls in the background to avoid timeout
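
    For example (search term purely illustrative), running the following lists matching repo and AUR packages, then asks which of them to install:

    yy plasma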

    alias ya='yay -S --needed --norebuild --nocleanafter --nodiffmenu --noredownload --nocleanmenu --removemake --sudoloop'

    This one is what I use for installing packages. Useful when I already know which package I want to install.

    Flag: Description
    -S: (pacman, extended by Yay to cover AUR as well) Synchronize packages. Packages are installed directly from the remote repositories, including all dependencies required to run the packages.
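
    For instance (package name purely illustrative), if I already know the exact package I want, I can install it directly with:

    ya firefox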

    alias yu='yay -R --recursive --nosave'

    This one is what I use when uninstalling packages. I usually check the package name with something like yay -Qi «package-name-guess» beforehand.

    Flag: Description
    -R: (pacman, extended by Yay to also remove cached data about devel packages) Remove package(s) from the system.
    --recursive: (pacman) Remove each target specified including all of their dependencies, provided that (A) they are not required by other packages; and (B) they were not explicitly installed by the user. This operation is recursive and analogous to a backwards --sync operation.
    --nosave: (pacman) Instructs pacman to ignore file backup designations. (This avoids the removed files being renamed with a .pacsave extension.)
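
    Put together, a removal usually looks like this for me (package name purely illustrative): check the installed package first, then remove it along with its no-longer-needed dependencies.

    yay -Qi firefox
    yu firefox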

    I actually don’t know much about either yay or pacman myself, since the aliases were just passed on to me by the same friend who helped me (re-)install my system (long story) and set up the aliases. Having looked all these up, however, I might make a few changes (like changing the --nocleanafter and --nocleanmenu options to their “clean” counterparts), as sketched below.
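
    For reference, a sketch of what those adjusted aliases might look like (untested on my end, but --cleanafter and --cleanmenu should be the corresponding yay options):

    alias yy='yay -Y --needed --norebuild --cleanafter --nodiffmenu --noredownload --cleanmenu --removemake --sudoloop'
    alias ya='yay -S --needed --norebuild --cleanafter --nodiffmenu --noredownload --cleanmenu --removemake --sudoloop'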


  • This is a separate reply since I didn’t know that you can include shell functions here.

    I made this little function read_latest_log() because I just want to “read the latest log file” in a directory full of timestamped log files. I made a helper function separator_line_with_text() to help with the output, basically setting off the file-info portion (just the filename for now) from the file contents.

    # # separator_line_with_text
    # # Centers text in a separator line
    # #
    # # Usage:
    # # separator_line_with_text «separator_char» «text»
    separator_line_with_text() {
      local separator_char="$1"
      local contents_str="$2"

      # Calculate how many separator characters are needed around the text
      local separator_length=$(( $(tput cols) - 2 - ${#contents_str} ))

      # Calculate the width of the left and right parts of the separator line
      local half_line_width=$(( separator_length / 2 ))

      # Construct the separator line using the $separator_char and $contents_str
      for ((i = 0; i < half_line_width; i++))
      do
        echo -n "${separator_char}"
      done

      echo -n "${contents_str}"

      for ((i = 0; i < half_line_width; i++))
      do
        echo -n "${separator_char}"
      done

      echo ""
    }
    
    # # read_latest_log
    # # Reads the latest log file with a timestamp in the filename.
    # #
    # # Usage:
    # # read_latest_log [[«name_filter»] «extension»] «separator» «timestamp_field_number»
    read_latest_log () {
      # Check if the function has sufficient parameters
      if [[ $# -lt 2 ]]; then
        echo "Error: insufficient parameters."
        echo "Usage: read_latest_log [[«name_filter» = *] [«extension» = log] «separator» «timestamp_field_number»"
        return 1
      fi
    
      # Supposing only two parameters are provided
      # «name_filter» parameter is "*"
      # «extension» parameter is "log"
      if [[ $# -eq 2 ]]; then
        local name_filter="*"
        local extension="log"
        local separator="$1"
        local field="$2"
      fi
    
      # Supposing only three parameters are provided,
      # assume that the «name_filter» parameter is "*"
      if [[ $# -eq 3 ]]; then
        local name_filter="*"
        local extension="$1"
        local separator="$2"
        local field="$3"
      fi
    
      # If all parameters are provided, assign them accordingly
      if [[ $# -eq 4 ]]; then
        local name_filter="$1"
        local extension="$2"
        local separator="$3"
        local field="$4"
      fi
    
      # Find all log files with the specified extension, sort them based on the separator and field
      local log_files=$(find . -type f -name "${name_filter}.${extension}" | sort -n -t "${separator}" -k "${field}")
    
      # If no log files are found, display a message and return
      if [[ -z "$log_files" ]]; then
        echo "No log files found."
        return 0
      fi
    
      # Get the latest log file and its full path
      local latest_log_file=$(echo "$log_files" | tail -1)
      local full_path=$(realpath "$latest_log_file")
    
      # Define the strings for the separator line
      local contents_str=" Contents "
      local separator_char="—"

      separator_line_with_text "${separator_char}" ""
      separator_line_with_text " " "${full_path}"
      separator_line_with_text "${separator_char}" "${contents_str}"
      cat "$latest_log_file"
    }
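
    # # Example (filenames purely hypothetical): for logs like backup_1688428800.log
    # # in the current directory, where "_" separates the name from the timestamp
    # # (making the timestamp field 2), the newest one could be printed with:
    # # read_latest_log "backup*" log "_" 2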
    

    Sorry for all the edits; for some reason, anything that looks like an HTML tag gets erased.


  • Some QoL stuff my good friend set up for me.

    # ALIASES -- EXA
    alias ls='exa --group-directories-first --color=auto -h -aa -l --git'
    
    # ALIASES -- YAY
    alias yy='yay -Y --needed --norebuild --nocleanafter --nodiffmenu --noredownload --nocleanmenu --removemake --sudoloop'
    alias ya='yay -S --needed --norebuild --nocleanafter --nodiffmenu --noredownload --nocleanmenu --removemake --sudoloop'
    alias yu='yay -R --recursive --nosave'
    
    # ALIASES -- CP
    alias cp="cp --reflink=auto -i"
    

    And then there’s a bunch of stuff from the output of alias; most of them are git aliases. Those which aren’t git-related are listed below:

    -='cd -'
    ...=../..
    ....=../../..
    .....=../../../..
    ......=../../../../..
    1='cd -1'
    2='cd -2'
    3='cd -3'
    4='cd -4'
    5='cd -5'
    6='cd -6'
    7='cd -7'
    8='cd -8'
    9='cd -9'
    _='sudo '
    cp='cp --reflink=auto -i'
    egrep='grep -E --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn,.idea,.tox}'
    fgrep='grep -F --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn,.idea,.tox}'
    history=omz_history
    l='ls -lah'
    la='ls -lAh'
    ll='ls -lh'
    ls='exa --group-directories-first --color=auto -h -aa -l --git'
    lsa='ls -lah'
    md='mkdir -p'
    rd=rmdir
    run-help=man
    which-command=whence
    




  • Agreed. Though in the context of trying to convince someone to commit to Linux, suggesting that they buy new hardware would make them have second thoughts (about Linux, given that their stuff works okay with Windows).

    Sure, it’s better for them to have a clear idea of what they’re getting into (NVIDIA and Linux mix like oil and water), but that might be better stated as “If you’re intending to upgrade your hardware, better stay away from NVIDIA.” (Or something along those lines.)

    Now that I’ve made the jump, I’m way more willing to switch to hardware that’d play nice with Linux. In fact, if I had the money, I would have already ditched my graphics card for something better (looking at getting an RX 6650 XT).


  • I have the same graphics card as the OP, due to circumstances beyond my control, and I can see where you’re coming from (I’m looking to replace this video card sooner rather than later with an AMD one; seeing how troublesome NVIDIA is with Linux, I just don’t want to support them). However, this “just buy better hardware, lol!” line of reasoning is counter-productive to convincing someone to make the jump to Linux.

    One of the things that convinced me to make the jump is the argument that Linux can run on any junk machine destined for e-waste. Seeing the argument about buying better hardware, or buying the right brand of hardware, just pains me (despite it being true to some extent).


  • Looks great, and it works great on my desktop. I also quite like its simplicity.

    However, after playing around with it for a bit, I noticed one glaring flaw. It stores its playlists in one (potentially) huge playlists.json file. That’s great if you’re manually creating playlists from scratch. However, I have several playlists compiled from my time in iTunes and then MediaMonkey, all of which are now in m3u format. I can play them in Harmonoid, though it only shows song info for the first song in the playlist, even though it plays the rest of the songs just fine.

    Meanwhile, since Harmonoid also has a mobile version, I played around with that too. My playlists worked better over there, as it shows the track information for all tracks, not just the first one. I haven’t dug up the files to see how they’re being stored, though.

    I guess a feature like “playlist import/export” could be requested. Personally though, looking at the JSON data within the playlist.json file, IDK:

    spoiler
    {
        "playlists": [
            {
                "name": "History",
                "id": -2,
                "tracks": [
                    {
                        "uri": "file:///home/user/Music/ArtistName/AlbumName/TrackNumberAndTrackName.mp3",
                        "trackName": "TrackName",
                        "albumName": "AlbumName",
                        "trackNumber": 1,
                        "discNumber": 1,
                        "albumLength": 1,
                        "albumArtistName": "AlbumArtistName",
                        "trackArtistNames": [
                            "TrackArtistNames"
                        ],
                        "genres": [
                            "Genre"
                        ],
                        "timeAdded": 1554270035,
                        "duration": 0,
                        "bitrate": 0
                    },
                    // More tracks…
                ]
            },
            {
                "name": "Liked Songs",
                "id": -1,
                "tracks": []
            }
        ]
    }
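
    Just to illustrate (purely a sketch, assuming jq is installed and that the structure really is what’s shown above), pulling one playlist back out of that file into a plain m3u could look something like:

    # Extract the "History" playlist’s file paths into an m3u (stripping the file:// prefix)
    jq -r '.playlists[] | select(.name == "History") | .tracks[].uri' playlists.json | sed 's|^file://||' > History.m3u

    Going the other way (m3u into that JSON) is what I’d actually want, though, and that looks a lot less pleasant to do by hand.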
    

  • The intent’s great, but I agree with the sentiment that if a beginner has to ask which distro is good for them, that questionnaire would only cause them more trouble through choice paralysis.

    I answered it in the mindset I had when I was first installing my Linux daily-driver, and I got a lot of results, with Linux Mint, Zorin OS, and Elementary OS being the top three. I haven’t really gone through the distro-hopping phase (nor do I think I’d have the patience to), but I’m intrigued by the other two. It also says something that I use Arch (btw) yet apparently “gravitate” towards Ubuntu-based distros (or at least, that’s what the results seem to be telling me).


  • The following sums up my experience with Linux thus far: “It’s never been easier for the newb to jump right in, but heavens help them if they ever stray from the straight path”.

    There’s been a lot of effort to make things easier for a newb (used to Windows and all that shit) to do what they need to do in most cases. There are all sorts of GUI-based tools, which means that for the ‘average’ user, there’s really no need to interact with the command line. That’s all well and good until you need to do something that wasn’t accounted for by the devs or contributors.

    All of a sudden, not only do you have to use the command line, you may also have to consult one of the following:

    • Well-meaning, easy-to-understand, but ultimately unhelpfully shallow help pages (looking at you, LibreOffice), or the opposite: deep, dense, and confusing (Arch) Wiki pages.
    • One of the myriad forum pages, each telling the user to RTFM, “program the damned thing yourself”, “go back to Windows”, all of the above, or something else that delivers the same unhelpful message.
    • Ultra-dense and technical man pages of a command that might possibly be of help.

    And that’s already assuming you’ve got a good idea of what the problem is, or what it is you’re supposed to do. Troubleshooting is another thing entirely. While it’s true that Linux has tons of ways to make troubleshooting a lot easier, such as logs, reading through them is a skill a lot of us don’t have, and it can’t be expected of some newb coming from Windows.

    To be fair to Linux though, 90% of the time, things are well and good. 9% of the time, there’s a problem here and there, but you’re able to resolve it with a little bit of (online) help, despite how aggravating some of that “help” might be. 1% of the time, however, Linux will really test your patience, tolerance, and overall character.

    Unfortunately, it’s that 10% that gives Linux its “hard to use” reputation, and the 1% gives enough scary stories for people to share.