• 4 Posts
  • 353 Comments
Joined 1 year ago
Cake day: June 20th, 2023

  • I’ll look at the wiki article again but I can pretty much promise that Ada doesn’t have dependent types. They are very much a bleeding-edge language feature (Haskell will get them soon, so I will try using them then), and Ada is quite an old-fashioned language, derived from Pascal. SPARK is basically an extra-safe subset of Ada with various features disabled, designed to work with verification tools to prove properties of programs. My understanding is that the proof methods don’t involve dependent types, but maybe in some sense they do.

    Dependent types effectively require the type system to do arbitrary computation at compile time, so you can have a type like “prime number” and prove number-theoretic properties of functions that operate on such numbers. Apparently that is unintentionally possible with C++ template metaprogramming (which turns out to be Turing-complete), so C++ is listed in the article, but actually trying to use C++ that way is totally insane and impractical.

    I remember looking at the wiki article on dependent types a few years ago and finding it pretty bad. I’ve been wanting to read “The Little Typer” (thelittletyper.com) which is supposed to be a good intro. I’ve also played with Agda a little bit, but not used it for real.



  • solrize@lemmy.world to Programming@programming.dev · Safe C++

    Dependent types only make sense in the context of static typing, i.e. compile time. In a dependently typed language, if you have a term with type {1,2,3,4,5,6,7} and the program typechecks at compile time, you are guaranteed that there is no execution path through which that term takes on a value outside that set. You may need to supply a complicated proof to help the compiler.

    In Ada you can define an integer type with range 1 .. 7 and it is no big deal. There is no static guarantee like dependent types would give you. Instead, the runtime raises an exception (Constraint_Error) if an out-of-range value lands in an object of that type. It’s simply a matter of the compiler generating extra code to do these checks.
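
    As a minimal sketch of what that looks like (the type and procedure names are made up, not from the post):

        --  The range constraint is enforced at run time, not proven statically.
        procedure Range_Demo is
           type Day_Number is range 1 .. 7;
           D : Day_Number := 7;
        begin
           D := D + 1;  --  compiles, but raises Constraint_Error when executed
        end Range_Demo;

    Nothing stops this from compiling; the guarantee only kicks in when the assignment actually runs, and it disappears entirely if checks are suppressed.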

    There is a separate Ada-related toolset called SPARK (a restricted subset of the language plus proof tools) that lets you statically guarantee that the value stays in range. The verification method doesn’t involve dependent types, and you’d use the tool somewhat differently, but the end result is similar.
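
    For comparison, a rough sketch of the SPARK flavour, assuming the GNATprove tool and SPARK 2014 aspect syntax (names invented for illustration):

        --  With SPARK_Mode on, GNATprove tries to discharge every potential
        --  Constraint_Error, so Next_Day is shown to stay within 1 .. 7
        --  before the program ever runs.
        package Days with SPARK_Mode is
           type Day_Number is range 1 .. 7;
           function Next_Day (D : Day_Number) return Day_Number is
             (if D = 7 then 1 else D + 1);
        end Days;

    If the range can’t be proven, GNATprove flags the check as unproved, so you find out at analysis time rather than in production.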




  • In Ada? No dependent types; you just declare how to handle overflow, like declaring int16 vs int32 or similar. Dependent types mean something entirely different, and they are checked at compile time. SPARK uses something more like Hoare logic. Regular Ada uses runtime checks.


  • In Ada, the overflow behaviour is determined by the type declaration. You can also sometimes use SPARK to statically guarantee the absence of overflow in a program. In Rust, as I understand it, you can control the overflow behaviour of a particular arithmetic operation by wrapping a function or macro call around it, but that is ugly and too easy to omit.

    For ordinary integers, an arithmetic overflow is similar to an OOB array reference and should be trapped, though you might sometimes choose to disable the trap for better performance, similar to how you might disable an array subscript OOB check. Wraparound for ordinary integers is simply incorrect. You might want it for modular arithmetic and that is fine, but in Ada you get that by specifying it in the type declaration. Also in Ada, you can specify the min and max bounds, or the modulus in the case of modular arithmetic. For example, you could have a “day of week as integer” ranging from 1 to 7 that traps on overflow.
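
    Here is roughly how those two behaviours are spelled in Ada (type names invented for illustration):

        type Byte        is mod 256;       --  modular: 255 + 1 wraps to 0 by definition
        type Day_Of_Week is range 1 .. 7;  --  bounded signed integer: a result
                                           --  outside 1 .. 7 raises Constraint_Error

    Wraparound is something you opt into with the modular declaration; everything else traps, assuming the checks haven’t been suppressed.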

    GNAT imho made an error of judgment by disabling the overflow check by default, but at least you can turn it back on.

    The RISC-V architecture designers made a harder-to-fix error by making everything wrap around, with no flags or traps to catch unintentional overflow, so you have to generate extra code for every arithmetic op you want checked.








  • It was ok at the time, and if it isn’t ok now, that means you want to run something that is too bloated for its own good.

    Really though, special hardware for this doesn’t make much sense. A Raspberry Pi with two Ethernet interfaces would be great, but if you can live with Ethernet plus wifi, the current RPis will do it. Otherwise there are lots of similar boards that really do have two Ethernet ports.

    I have not really felt much use for self-hosted server hardware at home. I use VPSes for that and it’s less hassle. Maybe it doesn’t count as completely self-hosted, but conceptually it’s a miniature colo box.



  • Your public key block is a cumbersome thing, and it’s enough to just append its fingerprint, if you consider email to be trusted against forgery but not against eavesdropping. The other person can then use the fingerprint to authenticate your key when they get it some other way (or they could just ask you to email it).

    Back in the day, lots of nerds would have their PGP key fingerprint (32 hex digits) printed across the bottom of their business cards. So if someone got a card in person, they could use the fingerprint to authenticate a key that they later received by email.

    Your post doesn’t ask about signing your emails without a good reason, but some commenter seems to think you are asking about that. Signing can be good, bad, or both, since it means that anyone who gets a copy of the message, including attackers, can now authenticate that the message came from you. Anything that gives attackers capabilities they didn’t already have must be examined critically. Dan Bernstein came up with a clever authenticator scheme designed to prevent this exact attack, but PGP doesn’t implement it and I actually don’t know of any software that does.

    Finally, at least some of the old-time PGP community now thinks that PGP solved, to some extent, the wrong problem. It not only made no attempt to conceal metadata, but it actually advertised it, in the form of key servers and key signatures connected with keys. Even if the attackers couldn’t read the encrypted messages, they could still tell who was talking to who, which is almost as bad. Remailer and broadcatch systems tried to solve this, with mixed success. A quote by cryptographer Silvio Micali has stuck with me for a long time: “a good disguise does not reveal the person’s height”. I.e. don’t just try to conceal the message contents from attackers while letting them collect other information. Rather, don’t give them ANY information.

    It’s possible to get rather “Spy vs Spy” about this type of stuff but it can help you think about security. As always, “Security Engineering” by Ross Anderson is a fantastic book if you’re interested in the general topic of how to be paranoid. Or to quote the proverb, it’s not paranoia if they really are out to get you ;). The book is here, 1st and 2nd editions downloadable as pdfs: https://www.cl.cam.ac.uk/~rja14/book.html


  • These days all those closed containers are virus spreaders, so it’s better to fly and get less exposure time. Wear an N95 either way.

    In the old days I did some long bus and train trips and it wasn’t so bad unless there were noisy or otherwise annoying people on board. Basically bring a long book to read, sleep when you can, and enjoy the scenery when there is something to look at. A travel pillow can be a help, and also warm clothes or a blanket. On international trains (EU) you can meet interesting people too.




  • Sure. Normally I think of visiting a site in a browser as navigating to that site on purpose. If Mozilla sells placement in the browser so that the browser navigates to that site automatically (unless you disable that), it’s invasive. That said, I do remember that Mozilla sets the default start page to something annoying, and I had to reconfigure it to about:blank when I set up the system.