• 0 Posts
  • 87 Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • Usually I sympathize with sentiments like this (“people use X because of uncontrolled circumstances”), but browsers are not one of those cases.

    If you have a website that requires the use of Chrome, then just use Chrome for that website! It’s not an either-or thing – you can install both browsers and use Firefox as the primary one.

    “And some people will want to stay on Chrome.”

    And that’s what makes this statement so problematic. You don’t earn anything by staying exclusively on Chrome, when both it and Firefox can work alongside each other.


  • Last time I asked around about this question, the answer was surprisingly “probably not much”! When a low-power x86 chip (like those mobile chips) is idling (which is pretty much all the time if all you are doing is hosting a server on it) it consumes very little power, about the same level as an idling Pi. It is when the frequency ramps up that performance-per-watt gets noticeably worse on x86.

    Edit: My personal test showed that my x86 laptop fared slightly worse than my Pi 3 in idling power (~2 watts higher it seems), but that laptop is oooooooold.
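    For anyone who wants to repeat the laptop side of that test, here is a rough Python sketch of how idle draw can be sampled. It assumes an unplugged laptop that exposes power_now (in microwatts) under /sys/class/power_supply/BAT0; the battery name and the averaging window are assumptions, and a Pi would need an external power meter instead.

    ```python
    # Rough sketch: sample a laptop's battery discharge rate while it sits idle.
    # Assumes an unplugged laptop exposing power_now (microwatts) via sysfs;
    # the BAT0 name and the averaging window are assumptions, adjust as needed.
    import time
    from pathlib import Path

    POWER_NOW = Path("/sys/class/power_supply/BAT0/power_now")

    def average_idle_draw(samples: int = 30, interval_s: float = 2.0) -> float:
        """Return the mean draw in watts over samples * interval_s seconds."""
        readings = []
        for _ in range(samples):
            microwatts = int(POWER_NOW.read_text().strip())
            readings.append(microwatts / 1_000_000)
            time.sleep(interval_s)
        return sum(readings) / len(readings)

    print(f"Average idle draw: {average_idle_draw():.2f} W")
    ```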



  • One of the issues at hand is that X11, the predecessor of Wayland, does not have a standardized way to tell applications what scale they should use. Applications on X11 get the scale from environment variables (completely bypassing X11), or from Xft.dpi, or by providing in-application settings, or they guess it using some unorthodox means, or simply don’t scale at all. It’s a huge mess overall.

    It is one of the more-or-less fundamentally unfixable parts of the protocol, since it wants everything to be on the same coordinate space (i.e. 1 pixel is 1 pixel everywhere, which is… quite unsuitable for modern systems.)

    Wayland does operate the way you describe, and applications that support Wayland will work properly in HiDPI environments.

    However, a lot of users and applications are still on X11 for various reasons.
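    To make the mess a bit more concrete, here is a rough Python sketch of the kind of scale guessing an X11 application ends up doing. GDK_SCALE, QT_SCALE_FACTOR and Xft.dpi are the real mechanisms involved, but the fallback order here is illustrative rather than what any particular toolkit actually does:

    ```python
    # Rough sketch of the fragmented scale discovery an X11 app ends up doing.
    # The fallback order is illustrative, not taken from any one toolkit.
    import os
    import subprocess

    def guess_x11_scale() -> float:
        # 1. Environment variables that bypass X11 entirely.
        for var in ("GDK_SCALE", "QT_SCALE_FACTOR"):
            value = os.environ.get(var)
            if value:
                return float(value)

        # 2. The Xft.dpi X resource, queried via xrdb (96 dpi == 1.0x scale).
        try:
            out = subprocess.run(["xrdb", "-query"], capture_output=True,
                                 text=True, check=True).stdout
            for line in out.splitlines():
                if line.startswith("Xft.dpi:"):
                    return float(line.split(":")[1]) / 96.0
        except (OSError, subprocess.CalledProcessError, ValueError):
            pass

        # 3. Give up and don't scale at all.
        return 1.0
    ```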



  • Yeah I get the display server part. What I meant was that 200% scaling gets you 1920x1080 logical resolution on HiDPI applications – LoDPI applications continue to be blurry just as if you set your actual resolution to 1080p, but HiDPI applications will enjoy the enhanced visual acuity.

    Even on smaller screens like the 14" ones, the quality of very high resolution (e.g. 4K) is still quite visible IMO, especially when it comes to text rendering. But it could very well just be my eyes.
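    The arithmetic behind that, as a trivial sketch (the 4K panel size is just an example, not OP’s actual screen):

    ```python
    # Trivial sketch of the scaling arithmetic: 200% on a 4K panel yields a
    # 1920x1080 logical resolution, while HiDPI apps keep rendering at 3840x2160.
    physical_w, physical_h = 3840, 2160   # example panel, not OP's actual screen
    scale = 2.0
    print(physical_w / scale, physical_h / scale)  # 1920.0 1080.0
    ```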




  • Agreed. HiDPI is the way to go and we should appreciate Framework for putting that in their laptops instead of continuing to use shitty 1366x768 screens.

    Xorg is the reason why OP is facing the scaling issues. OP, try to force the apps to run on native Wayland if they support it but don’t default to it. The Wayland page on the Arch wiki has instructions on that. It immensely improved my HiDPI experience.
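    For reference, a hedged sketch of what “forcing native Wayland” boils down to: launching the app with the usual backend-selection variables set. These variables come from the Arch wiki’s Wayland page; which ones an app actually honors depends on its toolkit and version, and the Python wrapper itself is just an illustration:

    ```python
    # Sketch: launch an app with the usual "prefer native Wayland" hints set.
    # Which variables an app honors depends on its toolkit and version.
    import os
    import subprocess
    import sys

    WAYLAND_HINTS = {
        "GDK_BACKEND": "wayland,x11",            # GTK apps: try Wayland, fall back to X11
        "QT_QPA_PLATFORM": "wayland;xcb",        # Qt apps: same idea
        "MOZ_ENABLE_WAYLAND": "1",               # Firefox
        "ELECTRON_OZONE_PLATFORM_HINT": "auto",  # newer Electron apps
    }

    def launch_on_wayland(cmd: list[str]) -> int:
        env = {**os.environ, **WAYLAND_HINTS}
        return subprocess.call(cmd, env=env)

    if __name__ == "__main__":
        sys.exit(launch_on_wayland(sys.argv[1:] or ["firefox"]))
    ```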



  • Modern electronic devices are far more railroaded than they were back in the day tho.

    Want to download an application? There’s the App Store. No need to download random .exes from sketchy websites (and learn what a “computer virus” is the hard way)

    Downloaded a picture? It’s instantly inside your gallery. Back then we needed to find a folder called “Download” or “My Documents” using something called the Explorer!

    iPhone and Android made a lot of things dumber and easier to take in, but I feel like that has had a detrimental effect on digital literacy.


  • orangeboats@lemmy.world to Linux Gaming@lemmy.world · “Just Switch Over” · edited 3 months ago

    The “quit having fun” meme is ironically becoming as cringey as the thing it was originally complaining about.

    You will help the community more by telling non-Linux people why Linux gaming is better, and this meme is doing the exact opposite of it – “oh Linux can’t play some games, yada yada. But we are still better! Switch over!” – like what’s the logic of it?

    What’s the purpose of this meme other than circlejerking?

    Disclaimer: I am a Linux user myself; I started with Debian and am now using Arch Linux.

    I will share some advantages I experienced in Linux gaming:

    1. Alt-tabbing old fullscreened games won’t mess with my monitor.

    2. The compatibility of Wine when it comes to some older games is wild. SimCity 4 actually crashed less when I played it on Linux.

    3. Better performance across the board. Granted, it’s a mere 5% difference, but I’ll take it, why not.



  • orangeboats@lemmy.world to Programmer Humor@programming.dev · “JavaScript” · 5 months ago

    This is why I try my damnedest not to write in weakly typed languages.

    string + object makes no logical sense, but the language will be like “no biggie, you probably meant string + string, so let’s convert the object to a string”! And so all hell breaks loose when the language’s assumption is wrong.
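    A tiny illustration of the contrast, using Python as the strongly typed counterexample (the silently coerced JavaScript result is shown in the comments; the order variable is made up):

    ```python
    # Python as the strongly typed counterexample; "order" is a made-up value.
    order = {"id": 42}

    # JavaScript: "Your order: " + order  ->  "Your order: [object Object]"
    # The object is silently coerced to a string and the bug hides until later.

    # Python refuses the nonsensical operation immediately:
    try:
        message = "Your order: " + order
    except TypeError as err:
        print(err)  # can only concatenate str (not "dict") to str
    ```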




  • “I have a 64-bit computer, it can address up to 18.4 exabytes, but my computer only has 32GB, so I will never use the vast majority of that address space. Am I ‘wasting’ it?”

    You are using the addressing bits in the form of virtual memory. Right now. Unless you run a unikernel system; in that case you could be right, but I doubt it.

    Anyway, this is apples and oranges. IP addresses are hierarchical by design (so you have subnets of subnets of subnets of …), memory addresses are flat for the most part, minus some x86 shenanigans.

    “Yes they are all ‘used’ but you don’t need them. We are not using 2^128 IP addresses in the world.”

    But we do need them! The last 64 bits of your IPv6 address are randomized for privacy purposes; it’s either that, or your MAC address is used for them. We may not be using those addresses simultaneously but they certainly are used.

    Despite that, there is still plenty of empty space in IPv6, that’s true. But it will still be used in the future should the opportunity arise. Any “wastage” is artificial, not a built-in deficiency of the protocol, whereas if we had restricted the space to 40 bits, there would be 24 bits wasted forever no matter what.
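    A small sketch of what “the last 64 bits are used” means in practice: with privacy extensions the interface identifier is effectively random within the advertised /64. The prefix below is from the documentation range, and using secrets.randbits as a stand-in for the actual RFC 4941 algorithm is a simplification:

    ```python
    # Sketch of a privacy-style IPv6 address: routed /64 prefix plus a
    # randomized 64-bit interface identifier (simplified stand-in for RFC 4941).
    import ipaddress
    import secrets

    prefix = ipaddress.IPv6Network("2001:db8:1234:5678::/64")  # routed, hierarchical part
    interface_id = secrets.randbits(64)                        # host part, randomized
    addr = ipaddress.IPv6Address(int(prefix.network_address) + interface_id)
    print(addr)  # e.g. 2001:db8:1234:5678:xxxx:xxxx:xxxx:xxxx
    ```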


  • “You’re not ‘wasting’ them if you just don’t need the extra bits.”

    We are talking about addresses, not counters, and an inherently hierarchical space at that (i.e. allocation goes from the top bits down, using all of them). If you don’t use the bits, you are actually wasting them.

    “you can gradually make the other bits available in the form of more octets”

    So why didn’t we make more bits available for IPv4 gradually? Yeah, same issue: forwards compatibility. If you meant that this “IPv5” standard should mandate 64-bit support from the very beginning, then why arbitrarily restrict the use of some bits in the first place?

    “If you’re worried about wasting registers it makes even less sense to switch from a 32-bit addressing space to a 128-bit one in one go.”

    All the 128 bits are used in IPv6. ;)
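    To illustrate the “all bits are used, top to bottom” point, here is a small sketch of the allocation hierarchy using Python’s ipaddress module; the /32 is from the documentation range and the split sizes are just typical, not mandated:

    ```python
    # Sketch of the hierarchy: an ISP's /32 splits into /48 sites, each site
    # into /64 subnets, and every subnet still carries 2**64 host bits.
    import ipaddress
    import itertools

    isp = ipaddress.IPv6Network("2001:db8::/32")
    sites = isp.subnets(new_prefix=48)            # 65,536 customer sites
    first_site = next(sites)
    lans = first_site.subnets(new_prefix=64)      # 65,536 LANs per site
    for lan in itertools.islice(lans, 3):
        print(lan)   # 2001:db8::/64, 2001:db8:0:1::/64, 2001:db8:0:2::/64
    ```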


  • Every time there’s a “just add an extra octet” argument, I feel some people are completely clueless about how hardware works.

    Most hardware comes with 32-bit or 64-bit registers. (Recall that IPv6 came out just a year before the Nintendo 64.) By adding only an extra octet, thus having 40 bits for addressing, you are wasting 24 bits of a 64-bit register. Or wasting 24 bits of a 32-bit register pair. Either way, this is inefficient.

    And there’s also the fact that the modern internet is actually reaching the upper limits of a hypothetical 64-bit IPv5: https://lemmy.world/comment/10727792. Do we want to spend yet another two decades just to transition to a newer protocol?
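    Back-of-the-envelope, the register argument looks like this (a quick sketch; the 40-bit “IPv5” is hypothetical, as above):

    ```python
    # Sketch of the padding argument: addresses get padded up to a machine word
    # (or word pair), and a hypothetical 40-bit "IPv5" leaves 24 bits unused
    # no matter the register size, while IPv4 and IPv6 fit word sizes exactly.
    def padding_waste(addr_bits: int, word_bits: int = 64) -> int:
        words = -(-addr_bits // word_bits)   # ceiling division
        return words * word_bits - addr_bits

    print(padding_waste(32))    # IPv4   -> 32 here, but 0 in a 32-bit register
    print(padding_waste(40))    # "IPv5" -> 24 wasted, whatever the register size
    print(padding_waste(128))   # IPv6   -> 0, fills two 64-bit registers exactly
    ```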