Just some Internet guy

He/him/them 🏳️‍🌈

  • 1 Post
  • 412 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • The issue DNS solves is the same one the phone book solves. You could memorize everyone’s phone number/IP, but it’s a lot easier to memorize a name, or even guess the name. Want the website for Walmart? walmart.com is a very good guess.

    Behind the scenes the computer looks it up using DNS and it finds the IP and connects to it.

    The way it started, people were maintaining and sharing hosts files. A new system would come online, and people would take its IP and add it to their hosts file. It quickly became clear that this doesn’t scale: you might want to talk to dozens of computers, and you’d have to hunt down the IP for every one of them. So DNS was developed as a central directory service any computer can query, with a hierarchy to distribute the load and delegate authority. And it worked, really well, so well we still use it extensively today. The desire to delegate directory authority is how the TLD system was born. The hosts file didn’t use TLDs, just plain names as far as I know. (There’s a tiny sketch of the lookup below.)
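
    As a minimal sketch of that behind-the-scenes step, here’s the lookup from a program’s point of view in Rust, using only the standard library (which just hands the name to the system resolver, which in turn does the DNS query):

    ```rust
    use std::net::ToSocketAddrs;

    fn main() -> std::io::Result<()> {
        // ":443" is only there because to_socket_addrs() wants a port
        // along with the host name; the name-to-IP part is what DNS does.
        for addr in "walmart.com:443".to_socket_addrs()? {
            println!("{}", addr.ip());
        }
        Ok(())
    }
    ```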


  • There’s definitely been a surge in speculation on domain names. That’s part of the whole dotcom bubble thing. And it’s why I’m glad TLDs are still really hard to obtain, because otherwise they would all be taken.

    Unfortunately there’s just no other good way to deal with it. If there’s a shared namespace, someone will speculate on the good names.

    Different TLDs can help with that a lot by having their own requirements. .edu, for example: you have to be a real school to get one. For most ccTLDs you have to be a citizen or have a company operating in the country. If/when it becomes a problem, I expect to see a shift to new TLDs with stronger requirements to prove you’re serious about your plans for the domain.

    It’s just a really hard problem when millions of people are competing to get a decent globally recognized short name; you’re just bound to run out. I’m kind of impressed at how well it’s holding up overall despite the abuse. I feel like it’s still relatively easy to get a reasonable domain name, especially if you avoid the big TLDs like com/net/org/info. You can still get a .xyz for dirt cheap, and sometimes there are even free ones, like .tk and .ml were for a while. There are also several free short-ish ones: I used max-p.fr.nf for a while because it was free and still looks like a real domain, a lot like a .co.uk or something.


  • Because if they’re not owned, then how do you know who is who? How do we independently conclude that yup, microsoft.com goes to Microsoft, without some central authority managing who’s who?

    It’s first come, first served, which is a bit biased towards early adopters, but I can’t think of a better system where you type google.com and reliably end up at Google. If everyone had a different idea of where that should send you it would be a nightmare; we’d be back to passing IP addresses on post-it notes to our friends to make sure we end up on the same youtube.com. When you type an address you expect to end up on the site you asked for, and nothing else. You don’t want to end up on Comcast YouTube because your ISP decided that’s where youtube.com goes; you expect and demand the real one, the same as everyone else.

    And there’s still the massive server cost of running a directory for literally the entire Internet for all of that to work.

    A lot of the time, when asking those kinds of questions, it’s useful to think about how you would implement it yourself such that it would work. It usually answers the question.


  • In case you didn’t know, domain names form a tree. You have the root ., you have TLDs com., then usually the customer’s domain google.com., then subdomains www.google.com.. Each level of dots typically hands the rest of the lookup over to another server. So in this example, the root servers tell you to go ask the .com servers at this IP, you ask .com where Google is and it gives you the IP of Google’s DNS server, then you query Google’s DNS server directly. Any subdomain under Google only involves Google; the public DNS infrastructure isn’t involved at that point, significantly reducing load. Your ISP only needs to resolve Google once, then it knows how to get *.google.com directly from Google (there’s a sketch of that first referral step below).

    You’re not just buying a name that by convention ends with a TLD. You’re buying a spot in that chain of names, the tree that is used to eventually go query your server and everything under it. The fee to get the domain contributes to the cost of running the TLD.
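
    To make that first referral concrete, here’s a rough Rust sketch that hand-builds a DNS query for www.google.com and sends it to one of the root servers, with recursion turned off so we see the raw delegation. The bytes follow the standard DNS wire format; the query ID is arbitrary:

    ```rust
    use std::net::UdpSocket;

    fn main() -> std::io::Result<()> {
        // Hand-built DNS query for "www.google.com", type A. Recursion
        // desired is left off: we want the root's referral, not an answer.
        let mut q: Vec<u8> = vec![
            0x12, 0x34, // arbitrary query ID
            0x00, 0x00, // flags: standard query, no recursion
            0x00, 0x01, // QDCOUNT: 1 question
            0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // no other records
        ];
        for label in ["www", "google", "com"] {
            q.push(label.len() as u8);
            q.extend_from_slice(label.as_bytes());
        }
        q.push(0); // end of the name
        q.extend_from_slice(&[0x00, 0x01, 0x00, 0x01]); // QTYPE=A, QCLASS=IN

        let sock = UdpSocket::bind("0.0.0.0:0")?;
        sock.send_to(&q, ("198.41.0.4", 53))?; // a.root-servers.net
        let mut buf = [0u8; 1500];
        let (n, _) = sock.recv_from(&mut buf)?;

        let count = |at: usize| u16::from_be_bytes([buf[at], buf[at + 1]]);
        // Expect 0 answers: the root only hands back NS records for com.
        // (authority section) plus their IPs (additional section).
        println!(
            "answers: {}, authority: {}, additional: {} ({n} bytes)",
            count(6),
            count(8),
            count(10)
        );
        Ok(())
    }
    ```

    Run it and you should see 0 answers but populated authority/additional counts: that’s the “go ask .com over there” referral, not a final answer.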


  • Mostly because you need to be able to resolve the TLD. The root DNS servers need to know about every TLD, and it would quickly become a nightmare if they had to store hundreds of thousands of records vs the handful of TLDs we have now. The root servers are hardcoded; they can’t easily be scaled or moved or anything. Their job is solely to tell you where .com is, where .net is, etc. You’re supposed to query those once and then hold on to your cached reply for like 2+ days. Those servers have to serve the entire world, so you want as few queries hitting them as possible.

    Hosting a TLD is a huge commitment, and so requires a lot of capital and a proper legal company to contractually commit to its maintenance and compliance with regulations. TLDs get a ton of traffic, and users getting their own TLDs would shift the sum of all gTLD traffic onto the root servers, which would be way too much.

    With the gTLDs and ccTLDs we have, at least there’s a decent amount of decentralization going on: .ca is managed by Canada for example, and only Canada has jurisdiction over that domain, just like only China can take away your .cn. If everyone got TLDs, the namespace would be full already; all the good names would be squatted by people waiting to sell them for as much as possible, like already happens with the .com and .net TLDs.

    There have been attempts at a replacement, but so far they’ve all been crypto scams and the dotcom bubble all over again: speculating on the cool names to sell to the highest bidder.

    That said, if you run your own DNS server and configure your devices to use it, you can use any domain you want. The problem is gonna be getting the public Internet at large to recognize it as real. (See the toy example below.)
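
    As a toy illustration of that last point, here’s a sketch of a fake DNS server in Rust that answers every A query with one made-up address (192.168.1.10 and port 5353 here are arbitrary). Point a test device at it and any name you invent will “resolve”, for that device only:

    ```rust
    use std::net::UdpSocket;

    fn main() -> std::io::Result<()> {
        let sock = UdpSocket::bind("0.0.0.0:5353")?; // real port 53 needs root
        let mut buf = [0u8; 512];
        loop {
            let (n, from) = sock.recv_from(&mut buf)?;
            if n < 17 { continue; } // too short to be a DNS query
            // The question name starts at offset 12; labels end with a 0 byte.
            let mut i = 12;
            while i < n && buf[i] != 0 { i += 1 + buf[i] as usize; }
            let qend = i + 1 + 4; // 0 terminator + QTYPE + QCLASS
            if qend > n { continue; }

            // Response = same header + question, flags flipped to "response".
            let mut resp = buf[..qend].to_vec();
            resp[2] = 0x84; // QR=1 (response), AA=1 (authoritative)
            resp[3] = 0x00; // RCODE = 0 (no error)
            resp[6] = 0; resp[7] = 1;   // ANCOUNT = 1
            resp[8] = 0; resp[9] = 0;   // NSCOUNT = 0
            resp[10] = 0; resp[11] = 0; // ARCOUNT = 0 (drop any EDNS record)

            // One A record pointing whatever was asked at our made-up IP.
            resp.extend_from_slice(&[
                0xC0, 0x0C,             // name: pointer back to the question
                0x00, 0x01, 0x00, 0x01, // TYPE=A, CLASS=IN
                0x00, 0x00, 0x01, 0x2C, // TTL = 300 seconds
                0x00, 0x04,             // RDLENGTH = 4
                192, 168, 1, 10,        // the answer
            ]);
            sock.send_to(&resp, from)?;
        }
    }
    ```

    You can poke at it with dig @127.0.0.1 -p 5353 anything.example and watch it make up an answer for any name you throw at it.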



  • That should mostly be the default. My secondary Vega 64 is reporting using only 3W, which would be worth chasing on a laptop, but I doubt 3W affects your electricity bill. It’s nothing compared to the overall power usage of the rest of the desktop, or the monitors. Pretty sure even my fans use more.

    The best way to address this would be to first take proper measurements. Maybe get a Kill A Watt and measure usage with and without the card installed to get the true usage at the wall. Also maybe get a baseline with as little hardware as possible. With that data you can calculate roughly how much it costs to run the PC and how much each component costs (rough math below), and from there it’s easier to decide if it’s worth it.
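
    Here’s that back-of-envelope math as a sketch; every number is a placeholder to swap for your own meter readings and electricity rate:

    ```rust
    fn main() {
        // Hypothetical readings from a wall meter, in watts.
        let with_card = 220.0_f64;    // PC draw with the GPU installed
        let without_card = 185.0_f64; // same PC with the GPU removed
        let rate_per_kwh = 0.15;      // your electricity rate, $/kWh

        let card_watts = with_card - without_card;
        // kWh per month if the machine runs 24/7.
        let kwh_per_month = card_watts / 1000.0 * 24.0 * 30.0;
        println!(
            "card draws ~{card_watts} W -> ~{:.1} kWh/month -> ~${:.2}/month",
            kwh_per_month,
            kwh_per_month * rate_per_kwh
        );
    }
    ```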

    Just the electric bill being higher isn’t a lot to go on. It could just be that it’s getting cold, or hot. Little details can really throw expectations off. For example, mining crypto during the winter is technically cheaper than not mining for me, because I have electric heat: between 500W going into a heating strip and 500W going into mining, both produce the same amount of heat in the room, but one of them also made me a few cents as a byproduct. You have to consider that sort of thing when optimizing for cost rather than maximizing battery life on a laptop.





  • Everyone’s approaching this from the privacy aspect, but the real reason isn’t that the cashier thought you were weird; they’re just underpaid and under a lot of pressure from management to ask multiple times, and in some cases they even get written up for not doing it, because it’s deemed part of their job. They hate it just as much as you. Same when you try to cancel your cable subscription or whatever: the calls are recorded and their performance is monitored, and they make damn sure they try at least 3 times to upsell you, even when it’s painfully obvious you’re done with them.

    Just politely decline until they’ve asked however many times they’re required to ask, and move on.


  • With Docker, the internal network is just a bridge interface. The reason most firewall rules don’t apply is a combination of:

    • Containers have their own namespaces, including a network namespace, so each container has a blank iptables ruleset just for itself.
    • Container-to-container communication goes through the FORWARD chain, not the INPUT/OUTPUT ones.
    • Docker adds its own rules to ensure that this works as expected.

    The only thing that should be affected by the host firewall is the proxy service Docker uses to listen on a host port and forward the traffic into the container.

    When using Docker, each container acts like an independent machine, and your host gets configured to act as a router. You can absolutely firewall Docker containers; the rules just need to be in the right place to work (that’s what the DOCKER-USER chain is for).



  • Also, series F but they’re only deploying on one server? Try scaling that to a real deployment (200+ servers) with millions of requests going through and see how well that goes.

    And there’s no way their process passes ISO/SOC 2/PCI audits. CI/CD isn’t just “make it do things”: it’s also the process, the logs, all the checks done, the mandatory peer reviews. You can’t just deploy without the audit logs of who pushed what, when, and who approved it.





  • You have to keep in mind that when you write JavaScript, there’s an entire runtime written in C++ running it under the hood, with some crazy optimizations to make it reasonably performant. What kind of language do you use to write that runtime? A systems programming language like Rust or C++.

    You don’t have to use Rust if you don’t like it. Not everything must be written in Rust. The whole “pick a language” question also involves a lot of picking your tradeoffs. Picking an interpreted/JIT language for speed of development is a perfectly valid tradeoff, but not one you can universally make. Sometimes the performance cost becomes really expensive in actual currency, where you can save thousands of dollars on server costs simply by having a more efficient application that only needs a fraction of the hardware to run. Even in JavaScript, a fair chunk of the libraries you use end up calling into native C++ code because pure JavaScript would be too slow. Sometimes the tradeoff is picking the popular language so it’s easier to hire, for cheaper.

    Even at the dawn of time, most computers shipped with a variant of BASIC so people could write simple applications easily. But if you wanted to squeeze out every bit of power in your Apple II or C64, you sure did reach for assembly. Assembly sucks so we made C, then C++. Rust is still a language that’s made to eventually compile to assembly/binary and have the same performance as if you wrote it in assembly.

    And low-spec hardware still exists: the regular Pis have gotten pretty fast, but if you run on an RP2040 then suddenly you’re back in 133MHz dual-core land with pitiful amounts of memory, so you do need to write optimized, fast code for those.

    Rust’s type system is actually really, really good. Most of the time, if it compiles, it runs. It eliminates a ton of errors beyond memory safety: the type system is so powerful you can straight up make invalid state unrepresentable. You can’t forget to close a connection, you can’t pass the wrong data, you can’t forget to unlock a lock. It does a lot to enforce the correctness of a program well beyond memory safety (see the sketch below).
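
    A tiny sketch of the “invalid state unrepresentable” idea, using the typestate pattern; the connection type here is made up for illustration:

    ```rust
    // A connection whose open/closed state lives in the type system:
    // there is simply no query() method on a closed connection.
    struct Closed;
    struct Open;

    struct Conn<State> {
        _state: State,
    }

    impl Conn<Closed> {
        fn new() -> Self {
            Conn { _state: Closed }
        }
        // Opening consumes the closed connection and returns an open one.
        fn open(self) -> Conn<Open> {
            Conn { _state: Open }
        }
    }

    impl Conn<Open> {
        fn query(&self, q: &str) {
            println!("running: {q}");
        }
        // close() takes self by value, so the compiler won't let you
        // touch the connection again afterwards.
        fn close(self) -> Conn<Closed> {
            Conn { _state: Closed }
        }
    }

    fn main() {
        let conn = Conn::new();
        // conn.query("SELECT 1"); // compile error: no query() on Conn<Closed>
        let conn = conn.open();
        conn.query("SELECT 1");
        let conn = conn.close();
        // conn.query("SELECT 1"); // compile error again: it's closed now
        let _ = conn;
    }
    ```

    The “can’t forget to unlock” part is the same trick in the standard library: std::sync::Mutex hands you a guard that unlocks in its destructor, so dropping it is unlocking it.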