  • It’s not a matter of knowledge; it’s a matter of what they want.

    One may desire to be advantaged over or superior to some others, and it’s particularly nice and easy if race or gender is a convenient shorthand for knowing who is ‘in’ and who is ‘out’, as long as you are in the ‘in’ group, of course.

    So life is just plain easier if women are simply supposed to sit there and please them. If the ‘natural order’ justifies that convenience, then one may be attracted to that thought. To the extent fairness and equality make their lives harder, they are inclined to be upset at that obstacle. It’s convenient if the legal and labor world gives their race preferential treatment, and other groups are left desperate enough to do whatever work they need done but don’t want to do themselves, and scared enough of the government not to get “uppity”.

    Sometimes it’s overt evil; sometimes it’s more subconscious, manifesting as being very receptive to narratives that correlate with those feelings.



  • Well, we got to see something like this roughly play out with the xz thing, in which case only Red Hat was going to be impacted, because they were the only ones to patch ssh that way.

    Most examples I can think of only end up affecting one slice or another of the Linux ecosystem, so a Linux-based market would likely be more heterogeneous than this.

    Of course, this was a relative nothingburger for companies that used Windows but not CrowdStrike, including my own. Well, except for a whole lot fewer emails from clients today compared to a typical Friday…





  • Related, I predict Windows on ARM will be a massive failure, again.

    Windows is Windows because a critical mass of its market is terrified of being even vaguely incompatible with any software they use today. Wine will never give them enough confidence, just as x86 emulation on ARM will never give them confidence.

    Extra bizarre, from what I’ve seen the Windows device vendors are treating the ARM variants as premium models and charging more for them, despite having no real compelling story for customers. You can either have an x86 offering that is, from all appearances, just as capable overall and absolutely able to run your software today, or an ARM offering that is more expensive and maybe a bit less compatible, with maybe better battery life (whether real or merely believed).

    Mac was able to force the issue because the hardware and software sides both wanted to make ARM happen and forced it. With Windows on ARM, only Qualcomm really cares; Microsoft and all the device vendors would prefer to hedge their bets, and in that case the tie goes to the incumbent.



  • First, this is not really science so much as it is science-themed philosophy or maybe “religion”. That being said, to make it work:

    • We don’t have any way of knowing the true scale and “resolution” of a hypothetical higher-order universe. We think the universe is big, we think the speed of light is supremely fast, and we think the subatomic particles we measure are impossibly fine grained. However, the inhabitants of a hypothetical simulation that is self-aware but not aware of our universe might conclude that some sluggish limitation of the physics engine is supremely fast, that triangles are the fundamental atoms of the universe, and that texture pixels are their equivalent of subatomic particles. They might try to imagine building a simulation engine out of in-simulation assets and conclude it’s obviously impossible, without ever being able to even conceive of volumetric reality with atoms, subatomic particles, and computation devices far beyond anything that could be constructed out of in-engine assets. Think about people who build ‘computers’ out of in-game mechanics and how absurdly ‘large’ and underpowered they are compared to what we are used to. Our universe could be “Minecraft” level as far as a hypothetical simulator is concerned; we have no frame of reference to gauge any absolute complexity of our perceived reality.

    • We don’t know how much of what we “think” is modeled is actually real. Imagine you are in the Half-Life game as a miraculously self-aware NPC. You’d marvel at the impossibly complex physics of the experiment gone wrong. Those of us on the outside know it’s just a superficial model made of props to serve the narrative, but every gadget the NPC sees “in-universe” is in service of saying “yes, this is a real, deep phenomenon, not merely some superficial flashes”. For all you know, nothing behind you is modeled in anything but the vaguest way, every microscope view is just a texture, and every piece of knowledge about the particle colliders is just “lore”. All those experiments showing impossibly complex phenomena could just be props in service of a narrative, if the point of the simulation has nothing to do with “physics” and just needs some placeholder physics to be plausible. The simulation could be five seconds old, with all your memories prior to that just baked-in “backstory”.

    • We have no way of perceiving “true” time; it may take a day of “outside” time to execute a second of our time. We don’t even have “true” time within our observable universe, thanks to relativity being all weird.

    • Speaking of weird, this theory has appeal because of all the “weird” stuff in physics. Relativity and quantum physics are so strange: when you get down to subatomic resolution, things start getting kind of “glitchy”, and we have this hard-coded limit on relative velocity, with time and length getting messed up as you approach that limit. These sound like the sort of artifacts we’d end up with if we tried simulating a universe, so it’s tempting to imagine a higher-order universe with less “weirdness”.


  • Define “the OS package manager”. If the distro ships Flatpak and dnf as equals, and both are invoked by the generic “get updates” tooling, then both could count as “the” update manager. They both check all apps for updates.
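
    As a toy sketch of that “generic tooling” idea (my own illustration, just shelling out to the real CLIs; this is not how GNOME Software or KDE Discover are actually implemented):

    ```python
    #!/usr/bin/env python3
    """Toy update checker that treats dnf and Flatpak as equal backends."""
    import subprocess

    def pending(cmd):
        # `dnf check-update` exits with status 100 when updates exist,
        # so deliberately don't raise on a nonzero exit code here.
        result = subprocess.run(cmd, capture_output=True, text=True)
        return [line for line in result.stdout.splitlines() if line.strip()]

    rpm_updates = pending(["dnf", "check-update", "-q"])
    flatpak_updates = pending(["flatpak", "remote-ls", "--updates"])
    print(f"{len(rpm_updates)} rpm and {len(flatpak_updates)} Flatpak updates pending")
    ```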

    Odd to advocate for Docker containers: they always put the app provider on the hook for all dependencies, because dependencies are inherently bundled. If a library gets a critical bug fix, your Docker-style containers are stuck without it until the app provider gets around to rebuilding, and app providers are highly unreliable on Docker Hub. Besides, update discipline among docker/podman users is generally atrocious, and given how relatively tedious it is to follow updates in that ecosystem, I’m not surprised. Even in the best case, Docker-style deployment uses more disk space and more memory than any other option, apart from a VM.
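
    To make the “tedious to follow updates” point concrete, here’s a rough sketch (my own illustration, using real docker CLI commands) that flags locally pulled images by age, since age is about the only generic signal that a bundled library fix may be missing:

    ```python
    #!/usr/bin/env python3
    """Flag local images that haven't been rebuilt recently. Image age is a
    crude proxy; there is no general way to ask an image whether a given
    library fix landed, which is the point being made above."""
    import json
    import subprocess
    from datetime import datetime, timedelta, timezone

    MAX_AGE = timedelta(days=30)  # arbitrary threshold

    image_ids = subprocess.run(
        ["docker", "image", "ls", "-q"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    for image_id in set(image_ids):
        meta = json.loads(subprocess.run(
            ["docker", "image", "inspect", image_id],
            capture_output=True, text=True, check=True,
        ).stdout)[0]
        # "Created" is an RFC 3339 timestamp; trim to seconds before parsing.
        created = datetime.fromisoformat(meta["Created"][:19]).replace(tzinfo=timezone.utc)
        if datetime.now(timezone.utc) - created > MAX_AGE:
            tags = ", ".join(meta.get("RepoTags") or ["<untagged>"])
            print(f"{tags}: built {created:%Y-%m-%d}, may be missing library fixes")
    ```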

    As for never having to worry about bundled dependencies with rpm/deb: third-party packages bundle or statically link all the time. And when they don’t, they sometimes overwrite the OS-provided dependency with an incompatible one that breaks OS packages, if the dependency is obscure enough that they don’t notice the other usage.



  • You don’t need the distro to package your software through their package management systems, though. Apt and dnf repositories are extensible; anyone can publish one. If you go through COPR or a PPA you get a little extra help too, without involving the distro maintainers.

    The headache comes when you add enough third-party repositories that they start conflicting with each other, despite their best efforts. That scenario is where Flatpak starts to be needed: it can, for example, concurrently provide multiple distinct versions of a library that traditionally would conflict with each other. This doesn’t mean the application has to bundle the dependency; the dependency can still be external to the package and independently updated. It just means conflicts can be handled gracefully.
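
    For instance, here is a small sketch of my own (assuming the flatpak CLI; its `--columns` output is tab-separated when piped) that lists runtimes installed at several branches simultaneously:

    ```python
    #!/usr/bin/env python3
    """List Flatpak runtimes installed at more than one branch at once --
    the side-by-side versioning described above."""
    import subprocess
    from collections import defaultdict

    output = subprocess.run(
        ["flatpak", "list", "--runtime", "--columns=application,branch"],
        capture_output=True, text=True, check=True,
    ).stdout

    branches = defaultdict(set)
    for line in output.strip().splitlines():
        app, branch = line.split("\t", 1)
        branches[app].add(branch)

    for app, versions in sorted(branches.items()):
        if len(versions) > 1:
            # e.g. org.freedesktop.Platform at both 22.08 and 23.08
            print(f"{app}: branches {sorted(versions)} installed concurrently")
    ```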







  • Yep, and I see evidence of that overcomplication in some ‘getting started’ questions, where people ask about really convoluted design points and others reinforce it by doubling down or suggesting yet more exotic stuff, when the asker might be served by a checkbox in a ‘dumbed down’ self-hosting distribution on a single server, by installing a package and just letting it run, or, for some apps, by running a single podman or docker command. If they are struggling with complicated networking and scaling across a set of systems, they are going way beyond what makes sense for a self-hosting scenario.


  • Based on what I’ve seen, I’d also say a homelab is often needlessly complex compared to what I’d consider a sane approach to self-hosting. People throw in all sorts of complexity to imitate what they’re asked to do professionally, and those things are either actually bad but propped up by hype and marketing, or do bring value but only at scales beyond a household’s hosting needs, where far simpler setups suffice and are nearly zero-touch day to day.