Currently studying CS and some other stuff. Best known for previously being top 50 (OCE) in LoL, expert RoN modder, and creator of RoN:EE’s community patch (CBP).

(header photo by Brian Maffitt)

  • 1 Post
  • 13 Comments
Joined 1 year ago
Cake day: June 17th, 2023




  • From the submission:

    Not a rival, just an alternative

    The realization that led us to develop PeerTube is that no one can rival YouTube or Twitch. You would need Google’s money, Amazon servers’ farms… Above all, you would need the greed to exploit millions of creators and videomakers, groom them into formatting their content to your needs, and feed them the crumbs of the wealth you gain by farming their audience into data livestock.

    Monopolistic centralized video platforms can only be sustained by surveillance capitalism.

    Even though we cannot pinpoint the exact budget Framasoft spent on PeerTube since 2017, our conservative estimate would be around 500 000 €

    With these two perspectives in mind, it seems to be doing well, even if it can’t / won’t entirely displace the major players.


  • Which is exactly how the real world works. Harm has to be identified before solutions can be suggested.

    According to the submission, some harms have been identified, and some solutions have been suggested [that could reduce the occurrence of the same and similar harms for new and existing users] (but mostly it sounds like a “more work needs to be done” thing).

    I imagine your perspective on the issues being discussed is different from the author’s. The helicopter parent analogy makes sense in a low-danger environment; I think what the author has suggested is that some people don’t feel like it’s a low-danger environment for them to be in (though I of course – not being the author or one such person – may be mistaken).

    Edit: [clarified] because I realised it might seem contradictory if read literally.




  • Unless you’re also throwing money at YouTube Premium (etc.), isn’t this by definition unsustainable? So it’s not really a viable long-term strategy either.

    Like don’t get me wrong, I don’t want all the tracking and stuff either, but somebody has to pay those server bills. If it’s not happening through straight cash then it’s going to be through increasingly aggressive monetization and cost-cutting strategies.



  • Yes, though just Nitro Basic. Discord doesn’t show ads and claims not to sell my data. While I can afford to do so, I’d much rather pay a few bucks a month to keep it that way.

    The number of people in this thread aggressively against a free-to-use service having any kind of way to pay employees and server bills makes me fucking depressed, and helps to explain why most free services I enjoy never seem to stay afloat with just an optional payment-based membership thing.

    Edit: To people suggesting less corporate-based (whether FOSS or not) alternatives, that’s totally cool! Just remember that the people behind these projects need some way to pay the bills the same way the corporate ones do, so I encourage you to contribute to them, whether that’s through e.g., code improvements (which don’t pay the bills but are still helpful!) or plain old donations.



  • Hahaha, I think you’re giving me a bit too much credit - I was just curious enough to run some tests on my own, then share the results when I saw a relevant post about it!

    My interest in image compression is only casual, so I lack both breadth and depth of knowledge. The only “sub-field” where I might qualify as almost an actual expert is exactly what I posted about - image compression for sharing digital art online. For anything else (compressing photos, compressing for the purpose of storage, etc.) I don’t really know enough to give recommendations or the same level of insight!

    Edit: fixed typo and clarified a point.


  • It depends a lot on what’s being encoded, which is also why different people (who’ve actually tested it with some sample images) give slightly different answers. On “average” photos, there’s broad agreement that WebP and MozJpeg are close. Some will say WebP is a little better, some will say they’re even, some will say MozJpeg is still a little better. It seems to mostly come down to the samples tested, what metric is used for performance, etc.

    I (re)compress a lot of digital art, and WebP does really well most of the time there. Its compression artifacts are (subjectively) less perceptible at the level of quality I compress at (fairly high quality settings), and it can typically achieve slightly to moderately better compression than MozJpeg in doing so as well. Based on my results, it seems to come down to WebP being able to optimize for low-complexity areas of the image much more efficiently, such as a flatly/evenly shaded area (which doesn’t happen in a photo).

    One thing WebP really struggles with by comparison is the opposite: grainy or noisy images, which I believe is a big factor in why different sets of images seem to produce different results favoring either WebP or JPEG. Take this (PNG) digital artwork as an extreme example: https://www.pixiv.net/en/artworks/111638638

    This image has had a lot of grain added to it, and so both encoders end up with a much higher file size than typical for digital artwork at this resolution. But if I put a light denoiser on there to reduce the grain, look at how the two encoders scale:

    • MozJpeg (light denoise, Q88, 4:2:0): 394,491 bytes (~10% reduction)
    • WebP (light denoise, Picture preset, Q90): 424,612 bytes (~29% reduction)
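
    Those byte counts came from encoder settings along the lines listed above; here’s a minimal sketch (Python wrapping the command-line encoders) of how such a comparison can be reproduced - not necessarily my exact pipeline - assuming libwebp’s cwebp and mozjpeg’s cjpeg are on PATH, with a hypothetical input.png standing in for the (optionally denoised) source image:

      # Hedged sketch: rough reproduction of the settings quoted above, not an exact pipeline.
      import subprocess
      from pathlib import Path

      src = "input.png"  # hypothetical stand-in for the exported source image

      # MozJpeg: quality 88 with 4:2:0 chroma subsampling (-sample 2x2).
      # Recent mozjpeg builds of cjpeg accept PNG input; older ones may need PPM/BMP instead.
      subprocess.run(["cjpeg", "-quality", "88", "-sample", "2x2",
                      "-outfile", "out_moz.jpg", src], check=True)

      # WebP: "picture" preset at quality 90 (-preset goes first, since it resets other options).
      subprocess.run(["cwebp", "-preset", "picture", "-q", "90",
                      src, "-o", "out.webp"], check=True)

      # Compare the resulting file sizes, as with the byte counts listed above.
      for f in ("out_moz.jpg", "out.webp"):
          print(f, Path(f).stat().st_size, "bytes")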

    Subjectively I have a preference for the visual tradeoffs on the WebP version of this image. I think the minor loss of details (e.g., in her eyes) is less noticeable than the JPEG version’s worse preservation of the grain and more obvious “JPEG compression” artifacts around the edges of things (e.g., the strand of hair on her cheek).

    And you might say “fair enough, it’s the bigger image”, but now let’s take more typical digital art that hasn’t been doused in artificial grain (and was uploaded as a PNG): https://www.pixiv.net/en/artworks/112049434

    Subjectively I once again prefer the tradeoffs made by WebP. Its most obvious downside in this sample is [struck out, see second edit: “the small red-tinted particles coming off of the sparkler being less defined”] probably the slightly blockier background gradient, but I find this to be less problematic than e.g., the fuzz around all of the shooting star trails… and all of the aforementioned particles.

    Across dozens of digital art samples I tested on, this paradigm of “WebP outperforms for non-grainy images, but does comparable or worse for grainy images” has held up. So yeah, depends on what you’re trying to compress! I imagine grain/noise and image complexity would scale in a similar way for photos, hence some of (much of?) the variance in people’s results when comparing the two formats with photos.


    Edit: just to showcase the other end of the spectrum, namely no-grain, low complexity images, here’s a good example that isn’t so undetailed that it might feel contrived (the lines are still using textured [digital] brushes): https://www.pixiv.net/en/artworks/112404351

    I quite strongly prefer the WebP version here, even though the JPEG is 39% larger!

    Edit2: I’ve corrected the example with the sparkler - I wrote the crossed-out section from memory from when I did this comparison for my own purposes, but when I was doing that I was also testing MozJpeg without chroma subsampling (4:4:4 - better color detail). With chroma subsampling set to 4:2:0, the MozJpeg version’s improved definition of the sparkler particles doesn’t really apply anymore and is certainly no longer the “most obvious” difference from the WebP image!
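
    For reference, the 4:4:4 (no chroma subsampling) MozJpeg encode mentioned in this edit differs from the sketch further up only in the -sample flag; another hedged snippet under the same assumptions (mozjpeg’s cjpeg on PATH, hypothetical input.png):

      # MozJpeg without chroma subsampling (4:4:4): -sample 1x1 instead of 2x2.
      import subprocess
      subprocess.run(["cjpeg", "-quality", "88", "-sample", "1x1",
                      "-outfile", "out_moz_444.jpg", "input.png"], check=True)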