• 8 Posts
  • 169 Comments
Joined 2 years ago
Cake day: June 12th, 2023


  • Good point, will do. Currently I’m focusing just on the lights themselves. Dimming, even with an off-the-shelf constant-current module, would most likely be enough: I could set it once to a suitable level and leave it at that. But of course it would be nice to extend into more adaptive lighting while I’m working on it anyway.

    The other thing I’ve been wondering is whether I should go full RGBW and just set colour temperature and intensity as a whole to something nice, but that would mean more wires and a more complex setup overall, and I’m not sure that’s worth the effort. It’s a sauna after all, not a disco.
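
    If I do go RGBW, the “colour temperature plus intensity” control could be as simple as blending two presets. A minimal sketch, assuming hand-picked warm and cool endpoints (the preset values below are made up, not calibrated):

    ```python
    # Minimal sketch: map a colour temperature (Kelvin) and intensity (0-1)
    # to RGBW duty values by blending two hand-picked presets.
    # The preset values are made up for illustration, not calibrated.

    WARM = (255, 147, 41, 255)   # ~2700 K-ish: amber tint, white channel high
    COOL = (201, 226, 255, 180)  # ~6500 K-ish: bluish tint, less white

    def rgbw_for(kelvin: float, intensity: float) -> tuple[int, ...]:
        """Linearly blend WARM..COOL over 2700-6500 K, then scale by intensity."""
        t = min(max((kelvin - 2700) / (6500 - 2700), 0.0), 1.0)
        return tuple(
            round((w + (c - w) * t) * intensity)
            for w, c in zip(WARM, COOL)
        )

    print(rgbw_for(3000, 0.4))  # dim, warm light for a sauna evening
    ```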


  • In our company we (at least the IT department) get to choose our own bags (within reason). I have some generic Lenovo backpack they had lying around when I started and it’s decent enough. Maybe a bit on the smaller side of what I’d like, but it carries my laptop, headset, random cables, power supply, notepad and stuff like that just fine. And it doesn’t have any kind of visible logo on it at all, unless you count the Think® colour scheme on the zipper tabs.

    And it’s also a security thing. Should someone steal my backpack, it doesn’t have any logos to pinpoint which company it belongs to, unless I’ve left my lanyard with my RFID tag in the pocket. And of course if you open the laptop it has the AD forest name on there, so it’s pretty trivial to figure out, but at least I’m not advertising ‘steal my things if you want access to this company’ everywhere.


  • Is my current setup secure, assuming strong passwords were used for everything?

    Network security is a complicated beast to manage. If the general public can access your services over the internet, that’s a threat you need to mitigate. Strong passwords are a good start, but they don’t account for flaws or bugs in the services you’re running. Also, if you have external users, they might reuse their passwords, and a leak of those might pose a threat too, especially if there are privilege-escalation bugs in the software you’re running.

    And so on; it’s too wide a field to cover in a short comment here. But when you’re building your stuff, what maybe most distinguishes a good professional from a not-so-good one is thinking ahead and preparing for every imaginable scenario where something goes wrong. Every time you add a way to access your network, no matter how minuscule, think about what happens if that way gets compromised and what it might mean in the very worst case.

    Maybe you want to add another access point to your network since your terrace isn’t properly covered. That’s nice to have, but now everyone within 100 meters of your house/apartment might have access to your stuff if they can break your wifi security. Maybe you set up a reverse proxy or Tailscale on the stack. Now the whole internet can at least probe your stuff and try to find vulnerabilities, try stolen credentials, and even try to social-engineer their way in. Or maybe you made a mistake and left something open that shouldn’t be.
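
    A cheap sanity check is to probe your own public IP now and then to see what actually answers. A rough sketch with just the Python standard library (the address and port list are placeholders; only probe hosts you own):

    ```python
    # Rough self-audit sketch: check which common ports answer on your own
    # public IP. The address and port list here are placeholders.
    import socket

    HOST = "203.0.113.7"          # placeholder: your public IP
    PORTS = [22, 80, 443, 8080, 25565]

    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2)
            is_open = s.connect_ex((HOST, port)) == 0
            print(f"{port}: {'OPEN' if is_open else 'closed/filtered'}")
    ```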

    I’m not trying to scare you out of anything. Go ahead and play with your stuff, break things, learn how to fix them, have fun while doing it. Just remember to think through the worst-case scenarios, weigh their risks, and then go on. Learn about DNAT, reverse proxies, VPNs, network layers and whatever else you come across on your adventure, but keep in mind that shit will hit the fan at some point. Learn to accept that, learn from your mistakes and do better next time.



  • I’ve seen some shit. But I’m also old enough not to care. I’m a freaking system administrator, not a surgeon. No one dies if their email is unreachable for an hour or two. Shit happens, then you deal with it and that’s all. The difference between a junior and a seasoned veteran is that the old guy with battle scars knows that something will break, shit will hit the fan and everything might turn into chaos, and he plans accordingly. Juniors will either endure and learn along the way or crumble.

    When you’ve been in the business for a few decades, it’s not that big of a deal to cause an outage. You know how to fix your shit, you know how to work with a severely crippled environment and you know how to build the whole circus from the ground up. And you also know that no matter how disappointed or loud the C-suite gets, they’ll calm down once you get them out of the hole.

    Just today I had a meeting discussing what to do if some obscure edge case ruins our ~5k-user, few-continents-wide AD tree. Sure, if that happened, it would most definitely suck balls to get back up, it would hurt the company’s bottom line and it would mean a few nights with very little sleep, but still no one would die, and our team is up to the task of building the whole crap out of nothing if needed. So it’s just business as usual. All of us have been in the business long enough to know how to avoid the common pitfalls, and we trust each other enough that should the shit hit the fan in a big way, we could still recover the whole situation.

    And even if the whole thing goes up in flames, I’ve got the experience and skill set under my belt, which will be valuable to some other business entity. I just don’t care if the main office building is on literal fire. It’s not my problem to fix immediately, and when it is, it’s still just work. I put in the hours they pay me for and do whatever I can, but when I’m off the clock the employer doesn’t really exist in my world.





  • Don’t know what Elmo’s minions are doing, but I’ve written code at least as inefficient. It was quite a few years ago (the code was written in Perl) and I at least want to think I’m better now (though I’m not paid to code anymore). The task was to pull in data from a CSV (or something like that; as I mentioned, it’s been a while) and convert it to XML (or something similar).

    The idea behind my code was that you could just configure which fields you want from arbitrary source data and where to place them in whatever supported destination format. I still think the basic idea behind that project is pretty neat: throw in whatever you happen to have and get something completely different out the other end. And it worked as it should. It was just stupidly hungry for memory. 20k entries would eat up several gigabytes of RAM on a workstation (and back then it was a premium to have even 16 GB around) and it was also freaking slow to run (like 0.2 to 0.5 seconds per entry).
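
    For what it’s worth, the same configurable-mapping idea can be done streaming, so memory stays flat no matter how many entries go through. A minimal sketch in Python (the field names and mapping are made up):

    ```python
    # Minimal sketch of the configurable field-mapping idea, done streaming:
    # read one CSV row at a time, write one XML element at a time, so memory
    # use stays flat regardless of entry count. Names here are made up.
    import csv
    import xml.sax.saxutils as sx

    MAPPING = {"name": "customer", "email": "contact", "total": "amount"}

    def csv_to_xml(src_path: str, dst_path: str) -> None:
        with open(src_path, newline="") as src, open(dst_path, "w") as dst:
            dst.write("<entries>\n")
            for row in csv.DictReader(src):
                dst.write("  <entry>")
                for src_field, dst_tag in MAPPING.items():
                    dst.write(f"<{dst_tag}>{sx.escape(row[src_field])}</{dst_tag}>")
                dst.write("</entry>\n")
            dst.write("</entries>\n")

    csv_to_xml("orders.csv", "orders.xml")
    ```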

    But even then I didn’t need to tweet that my hard drive was overheating. I understood well that my code was just bad, and I even improved it a bit here and there, but it was still very slow and used ridiculous amounts of RAM. The project was pretty neat, and when you had a few hundred items to process at a time it was even pretty good; there were companies that relied on that code and paid for support. It just totally broke down with even slightly bigger datasets.

    But, as I already mentioned, my hard drive didn’t overheat on that load.


    1. VM running on a Proxmox host. Tip: make sure your backups are in a state you can actually restore data from (see the sketch after this list for at least a freshness check).
    2. Nightly backup via Proxmox to a Hetzner storage box with 2-day retention. I’d like a local copy too, but I don’t currently have the hardware for it.
    3. Don’t know. Personally I have a DNAT rule on the firewall and my instance is directly open to the internet. You might not want that and I wouldn’t necessarily recommend it, but right now, for me, it works. I’d need to look into a VPN solution for Android to replace the current ‘open for all’ situation.
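
    Something like this can watch that the nightly dumps actually land on the storage box. A sketch only; the host, credentials and path are placeholders, and freshness isn’t restorability, so do test restores too:

    ```python
    # Sanity-check sketch: is the newest Proxmox dump on the storage box
    # fresh? Host, credentials and path are placeholders, and freshness
    # is not restorability: do test restores as well.
    import os
    import time
    import paramiko

    HOST, USER, KEY = "uXXXXX.your-storagebox.de", "uXXXXX", "~/.ssh/id_ed25519"
    BACKUP_DIR = "/backups/dump"
    MAX_AGE_HOURS = 36

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, key_filename=os.path.expanduser(KEY))
    sftp = client.open_sftp()

    newest = max(sftp.listdir_attr(BACKUP_DIR), key=lambda a: a.st_mtime)
    age_hours = (time.time() - newest.st_mtime) / 3600
    print(f"{newest.filename}: {age_hours:.1f} h old")
    if age_hours > MAX_AGE_HOURS:
        raise SystemExit("backup is stale, go check the job!")
    ```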


  • How much RAM does your system have? ZFS is pretty hungry for memory, and if you don’t have enough it’ll have quite a significant impact on performance. My Proxmox box had 7x4TB drives in a ZFS pool, and with 32 gigs of RAM there was practically nothing left for the VMs under heavy I/O load.
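
    Most of that goes to the ARC, which you can cap with the zfs_arc_max module option on ZFS on Linux. A quick sketch for seeing what the ARC is actually holding (assumes /proc/spl/kstat/zfs/arcstats exists, i.e. ZFS on Linux):

    ```python
    # Quick look at how much RAM the ZFS ARC is holding right now,
    # parsed from /proc/spl/kstat/zfs/arcstats (ZFS on Linux only).
    def arcstat(name: str) -> int:
        with open("/proc/spl/kstat/zfs/arcstats") as f:
            for line in f:
                fields = line.split()
                if fields and fields[0] == name:
                    return int(fields[2])   # columns: name, type, data
        raise KeyError(name)

    size = arcstat("size")
    limit = arcstat("c_max")
    print(f"ARC: {size / 2**30:.1f} GiB used of {limit / 2**30:.1f} GiB max")
    ```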

    I switched the whole setup to software RAID, but that’s not officially supported by Proxmox, so managing it is not quite trivial.



  • The exchanged mails between the IMAP host and the MTA need a unique identifier to organize contents of the DB, and this would not be possible or automatic if you switched the upstream MTA.

    It sure is possible. I’ve copied maildirs across different software, different servers, local copies back to the server and so on. And if you run your own IMAP server, the upstream doesn’t matter anyway, since fetchmail (or whatever you choose to use) talks to each host over its preferred protocol.

    Obviously there’s a tradeoff, since now you’re responsible for your backups and for maintaining your server, but it can sit nicely on your private LAN, reachable only locally or via VPN, without direct access to the internet. And you don’t need an MTA to run an IMAP server in the first place.
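
    As a rough illustration of the “fetchmail or whatever” part: pulling from an upstream IMAP server into a local maildir only needs the client side of the protocol. A bare-bones sketch with the Python standard library (the host and credentials are placeholders; real fetchmail does far more):

    ```python
    # Bare-bones fetchmail-style sketch: pull unseen mail from any upstream
    # IMAP server into a local Maildir. Host/credentials are placeholders;
    # real fetchmail handles far more (dedup, error recovery, polling, ...).
    import imaplib
    import mailbox

    UPSTREAM, USER, PASSWORD = "imap.example.com", "me", "secret"
    box = mailbox.Maildir("/var/mail/me/Maildir", create=True)

    with imaplib.IMAP4_SSL(UPSTREAM) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            box.add(msg_data[0][1])   # raw RFC822 bytes into the maildir
    ```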





  • The Commodore 64 is a home computer released in 1982. Modern expansions allow the thing to actually have a TCP/IP stack and run things like telnet, but your single Mastodon server, compared to what was available in the 1980s, is pretty much equal to the whole bandwidth and storage of the internet (or ARPANET, depending on how you want to time things).

    A Mastodon server requires (roughly) at least 2 gigabytes of memory and 20 gigabytes of storage. And with that it needs at least a dual-core 2 GHz CPU.

    The Commodore 64 ran at 1 MHz. A million hertz sounds like a big number, but we’re talking (at minimum) about two processor cores running at 2 000 million hertz each. Also, the C=64 had roughly 64 000 bytes of memory, while the absolute minimum to run a Mastodon instance is 2 000 000 000 bytes.

    And then there’s the storage. Your minimum Mastodon instance should have at least 20 GB of it. The 1541 drive used 5.25″ floppy disks which could store up to 170 kilobytes each. So you’d need someone to change disks as needed on an over-400-meter-tall tower of floppy disks.
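
    The back-of-the-envelope numbers, in one place (the ~3.4 mm per disk is my own rough guess for media plus jacket):

    ```python
    # Back-of-the-envelope check of the numbers above. The ~3.4 mm disk
    # thickness (media plus jacket) is my own rough assumption.
    mastodon_ram  = 2_000_000_000   # bytes, minimum-ish
    c64_ram       = 64_000          # bytes
    mastodon_disk = 20 * 10**9      # bytes
    floppy_bytes  = 170 * 10**3     # bytes per 1541 disk
    disk_mm       = 3.4             # assumed thickness per disk

    print(f"RAM ratio: {mastodon_ram // c64_ram:,}x")       # ~31,250x
    disks = mastodon_disk / floppy_bytes
    print(f"Floppies needed: {disks:,.0f}")                 # ~117,647
    print(f"Tower height: {disks * disk_mm / 1000:.0f} m")  # ~400 m
    ```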

    So, please tell me again where to get disk images to run a Mastodon server on a C=64, and how you just know that plain old email is garbage and old people don’t know what they’re talking about.