Thanks (grazie?)! I was looking for something similar and kanidm looks great feature-wise and simple to deploy!
I struggled with this for a long time, and then I just decided to use Synology Photos.
It has albums, tagging, geolocation, and sharing, plus phone picture backup. It’s also inherently a backup, since it lives on my NAS and I back that data up again.
I want to keep the thing I really care about the most friction-free, and also not too dependent on me, so that I can still experiment.
I didn’t try PiGallery2 though, maybe I will have a look!
Super cool project, thanks for sharing! I think I will try to integrate it with my static sites.
Did it sound cold? Because I didn’t mean that, I just meant to actually answer the question from my PoV. Just for the record, I also did not down vote you.
So yeah, use whatever footgun you prefer, I don’t judge :)
Or rustic! It is compatible with restic but has some nice additions, for example the fact that it supports a config file. It makes operations a bit easier IMHO (I am currently using both).
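Roughly, the idea is something like this (a sketch from memory: paths are placeholders and the exact key names should be double-checked against the rustic docs):

# create a profile once; rustic picks it up from its default config location
mkdir -p ~/.config/rustic
cat > ~/.config/rustic/rustic.toml <<'EOF'
[repository]
repository = "/srv/backups/repo"
password-file = "/etc/rustic/repo-password"
EOF

# with the profile in place, no need to repeat repo/password flags on every call
rustic backup /home
rustic snapshots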
I really thought swarm was dead :)
To be honest, some kubernetes distributions make cluster operations minimal (I use k0s managed via ansible)!
Either way, the moment you go from N containers on one box to N containers on M boxes, you need to start considering how to handle stateful applications, load balancing, etc. And that in general requires knowledge in a domain that is different from simply running applications wrapped in containers locally.
Yeah, ultimately every container has its own veth interface, so you can do shaping using tc on those.
Edit: I had a look at docker-tc. It does what you want, BUT: unless your use case is complex, I would really think twice about running a tool written in bash that has access to the docker socket (i.e., trivial node escape) and runs with the NET_ADMIN capability.
That’s a lot of power to do something you can also do with a few lines of code executed after you start the container. Again, provided that your use case is not complex.
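To give an idea, a minimal sketch of those “few lines” (assuming the default bridge networking, a container interface named eth0, and a made-up container name and rate; shaping the host-side veth egress limits the traffic flowing towards the container):

CID=mycontainer   # hypothetical container name
# the container's eth0 iflink matches the ifindex of its host-side veth peer
IFINDEX=$(docker exec "$CID" cat /sys/class/net/eth0/iflink)
VETH=$(grep -l "^${IFINDEX}$" /sys/class/net/veth*/ifindex | awk -F/ '{print $5}')
# simple token bucket filter on that veth
sudo tc qdisc add dev "$VETH" root tbf rate 10mbit burst 32kbit latency 400ms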
Cgroups have the ability to limit TCP and total network bandwidth. I don’t know off the top of my head whether this can be configured at runtime (i.e., via docker run), but you can specify at runtime the cgroup parent to use. This means you can pre-create the cgroup, set the limits, and start the container with that parent cgroup.
You can also run a hook script that adds the PID to a cgroup every time the container is launched, or possibly use tc.
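As a rough sketch of the hook-script idea (it assumes a unified cgroup hierarchy at /sys/fs/cgroup and a cgroup named "limited" that you pre-created and configured yourself; the container name is made up):

CID=$(docker run -d --name limited-app nginx)      # hypothetical container
PID=$(docker inspect -f '{{.State.Pid}}' "$CID")    # container's init PID on the host
# move the container's init process into the pre-created cgroup
# (processes it forks afterwards will inherit it)
echo "$PID" | sudo tee /sys/fs/cgroup/limited/cgroup.procs
# alternatively, docker run --cgroup-parent=... starts the container under a pre-created parent cgroup directly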
I am not aware of the ability to only limit uplink bandwidth, but I have not researched this.
I think k8s is a different beast that requires way more domain-specific knowledge beyond basic server/Linux administration. I do run it, but it’s an evolution of a need, specifically when you want to manage a fleet of machines running containers.
Because the lxc way is inherently different from the docker/podman way. It’s aimed at running full systems, rather than mono-process containers. It has its use cases, but they are not as common IMHO.
You have a bunch of options:
kubectl run $NAME --image=$IMAGE
this just creates a pod running the specified image. If you kill the pod, or it terminates, it won’t be run again. In general though, you probably want to do some customization before running (maybe you need volumes, secrets, env, ports, labels, securityContext, etc.) and for that you can let kubectl generate the boilerplate YAML and then make some edits:
kubectl run $NAME --image=$IMAGE --dry-run=client -o yaml > mypod.yaml
# edit mypod.yaml
kubectl create -f mypod.yaml
You can do the same with a deployment or statefulset:
kubectl create deployment $NAME -n $NAMESPACE [...] --dry-run=client -o yaml > deployment.yaml
In case you don’t need anything fancy, the kubectl create subcommand allows you to create simple workloads, so probably that’s the answer to your question.
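For instance, on a reasonably recent kubectl (names and values here are just placeholders):

# a simple deployment plus a service in front of it, no YAML needed
kubectl create deployment myapp --image=nginx:1.27 --replicas=2 --port=80
kubectl expose deployment myapp --port=80 --type=ClusterIP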
Docker can run rootless too, see https://docs.docker.com/engine/security/rootless/
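From memory, the setup from that page boils down to something like this (check the linked docs for the prerequisites, e.g. uidmap):

# as your regular (non-root) user
dockerd-rootless-setuptool.sh install
systemctl --user enable --now docker
# point the CLI at the rootless daemon (the setup tool prints the exact value to use)
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker run --rm hello-world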
I would say Docker. There is no substantial benefit in running podman, while docker is a widely adopted tool (which means more tooling in the ecosystem, and it’s easier to find answers to questions, etc.). The difference is not huge tbh; some time ago the biggest advantage of podman was being able to run rootless, while docker was stuck with a root daemon. This is not the case anymore (docker can run rootless), so unless you have some specific reason to use podman, I would stick with docker.
I am curious about the details of that conversation, because I remember reading Dev’s comments in some post on Lemmy where they mentioned this option.
One thing I have never understood and keep repeating in this context: Beehaw has a >$7k balance. If they really have a few issues whose fix would solve 90% of the problems, why not put a $500/1000/2000 bounty on those features?
Just to add to this, Valve is a company with a very peculiar organization, in which the structure is very horizontal and that does its own thing (the structure is not without problems, but it’s still very interesting). They also have a surprisingly small number of employees for such a company! Estimates vary between 350 and 1000!
You’re just focusing on strict technical definitions and completely ignoring the context of things. I described before how TLS is useful in the context of some SMTP e2e encrypted solution.
Yes, mentioning things that have nothing to do with e2ee. Anything that is encrypted with TLS is not e2ee in the context of emails. You talked about metadata, but the server has access to those because it terminates the connection, therefore they are not e2ee. It’s a protection against leakage between you and the server (and between server and other server, and between server and the destination of your email), not between you and the destination, hence irrelevant in the context of e2ee. Metadata such as the destination can obviously never be e2ee, otherwise the server wouldn’t know where to send the email, and since it needs access to it, it’s not e2ee, whether you use TLS or not.

TLS in this context doesn’t contribute at all to end-to-end encryption. Your definition is wrong: e2ee is a technical definition, not an abstract thing. E2ee means that only the two ends of a conversation have access to the encrypted data. TLS is by definition between you and your mail server, hence it doesn’t provide any benefit in the context of e2ee. It is useful, but for other reasons that have nothing to do with e2ee.
never questioned that. With SMTP you can do true e2ee using PGP and friends
Exactly, and this is what Proton does. You simply don’t accept that Proton decided to write another client that is tightly coupled with their mail service, which is absolutely not malicious or vendor-locky, compared to using an already made client. Proton is simply PGP + SMTP.
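“PGP + SMTP” in practice is nothing more exotic than this kind of flow (addresses and filenames are made up, and it assumes the recipient’s public key is already imported):

# encrypt the body on the sender's machine; only the recipient can decrypt it
gpg --armor --encrypt --recipient alice@example.com --output body.asc message.txt
# body.asc (ciphertext) is what actually travels over SMTP; the mail servers never see the plaintext

Proton’s clients do the equivalent of that first step for you, transparently, before the mail ever reaches their servers.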
with CalDAV/WebDAV you’ll need to have some kind of middle man doing the encryption and decryption - it’s a fair trade off for a specific problem.
Yes, and this middle-man is the Proton client, which sits on the client’s side. I am glad you understood how the only way to have e2ee with *DAV automatically, technically, prevents you from using “whatever server”. If anybody else but the client does the encryption/decryption, you lose the end-to-end part. I am not saying e2ee in this context is absolutely necessary; you might not care and instead value more the possibility to plug in other *DAV servers, good. Proton is not for you in that case.
but about SMTP where you can have true e2ee
Yes, you can, using a PGP client like OpenPGP, or Proton webmail, or Proton bridge. You need stuff on top of SMTP.
Your previous comment of “SMTP requires server to access the data” is simply wrong.
Nope, you are simply misinterpreting it. In SMTP the server requires access to the data because it’s the one delivering it. PGP is built so that the data is ciphertext and not plaintext, so the server can’t see the actual content of the mail, but it still needs to have the data and ship it, in contrast for example to a p2p protocol. PGP is however on top of SMTP and requires a client doing it for you. OpenPGP or Proton do exactly this. There is no way to support SMTP “natively” and offer e2ee. You would like Proton not to do e2ee and leave the responsibility to the client to do the PGP part, with the freedom of picking whatever client you want? Well, that’s exactly the opposite of their business model, since what they aimed for is to make PGP de facto transparent to users, so that it’s available even to people who are not advanced users.
Do you have any proof that they use CalDAV/CardDAV?
https://github.com/search?q=org%3AProtonMail+CalDAV&type=code (you can dig into the code yourself if you are curious to understand).
In the same way they don’t do IMAP/SMTP.
I already sent you a GitHub search of their clients for SMTP; look for yourself in the code. Do you think it makes any sense at all for them to reinvent the wheel and come up with ad-hoc protocols when all they need is a client? You can also have a look at the job offers they post: https://boards.eu.greenhouse.io/proton/jobs/4294852101 You can see SMTP mentioned, and experience with Postfix in production. It’s very likely that they are running that in the background.
Get over yourself and your purist approaches, when a company provides a service that is standardized in a specific set of protocols and they decide to go ahead and implement their own stuff it is, at least, a subtle, form of vendor lock-in. End of story.
No it’s not. Vendor lock means:
In economics, vendor lock-in, also known as proprietary lock-in or customer lock-in, makes a customer dependent on a vendor for products, unable to use another vendor without substantial switching costs.
Proton uses open standards, and just builds clients that wrap them. This means, emails are in a format that can easily be imported elsewhere, same for Calendar and Contacts. You are now watering down the definition of vendor lock to try to make your claim less wrong, but it is wrong. I repeat, and you are welcome to prove me wrong:
This means that I can change vendor easily without significant cost, hence I am not locked-in.
What you actually mean is that while using Proton you can’t interoperate easily with other tools, and this is a by-design compromise to have e2ee done the way they wanted to make it, available to the mainstream population. You disagree with their approach? Absolutely legitimate. You prefer to use OpenPGP and handle keys and everything yourself? Then for sure, Proton is not worth it for you, as you can choose the tools you want if that’s important for you. But there is no vendor lock; they simply bundled together the email client with the PGP client, so that you don’t have the full flexibility of separating the two.
You disagree with this definition of vendor lock? Awesome, give me your definition and link some source that uses that definition. Because if you keep moving the goalposts and redefining what vendor lock means, there is no point in discussing.
So tell me, why is it that Proton simply didn’t go for OpenPGP + SMTP + IMAP and simply build a nice web / desktop client for it and kept it compatible with all the other generic solutions out there? What’s the point?
How is this relevant? I don’t know and I don’t care why they picked this technical solution.
It isn’t and I hope I’m never proven right about Proton for your own sake.
It is, and you have been proven wrong. Either that, or you completely misuse or, worse, misunderstand what vendor lock is.
Yes you move if you can.
It’s not if. You can.
It has all to do with vendor lock-in and it is already explained.
Yes, you explained interoperability, which has nothing to do with vendor lock. They are two. different. things.
If a company uses shady stuff that restricts interoperability by definition it is a form of vendor lock-in as you won’t be able to move to another provider as quickly and fast as you might with others.
False. Again. Interoperability is a property that has to do with using the application. Interoperable applications can still totally vendor lock you. Lemmy interoperates with Mastodon, but vendor locks you because you cannot export everything and port all your content away. Your definition is wrong. Just admit you misused the term and move on; there is no need to double down.
And it is, because there’s specific metadata that might get leaked on email headers if not minimized by other techniques that will get protected with TLS between servers.
They use TLS. TLS is useful for transport security. Proton uses TLS. TLS doesn’t have anything to do with e2ee in the context of emails because TLS is always terminated by the server. Therefore it is by definition not an e2ee protocol in this context. It is in the context of web, because there the two “ends” are your browser and the web server. It’s not in the context of messaging where the other “end” is another client.
It isn’t perfect
This has nothing to do with perfection, you are simply misunderstanding fundamentally what e2ee is in this context.
it’s better than having email servers delivering PGP mail over plain text connections
And in fact Proton doesn’t do that.
You should be ashamed of yourself for even suggesting that there’s no usefulness whatsoever for this use case.
I am not ashamed because I understand TLS, and I understand that it’s useless in the context of email e2ee. You simply don’t understand the topic but feel brave enough to evangelize on the internet about something you don’t fully understand.
here’s how decent companies deal with that: they encrypt the data in transit (using TLS) and when stored on their servers by using key store and that is itself encrypted with your login password / hash / derivation or some similar technique.
JFC. Proton uses TLS for transit connections. E2EE means that the server does not have access to the data. If the server has the key, in whatever form, and can perform a decryption, it’s not e2ee. The only way to have e2ee for these protocols is that the client(s), and only the clients, do the encryption/decryption operations. This is exactly what Proton clients do. They use DAV protocols but extend them by implementing encryption on the client side. Therefore, naturally, by design, they are not compatible with servers which instead expect unencrypted data so they can serve it, unencrypted (protected only by TLS, which again is a transport protocol and has nothing to do with application data), to other clients.
Ironically, when saying what “decent companies” do, you have described what Proton does: they use your client key to encrypt data on the client side. Then they transfer this data via a secure channel (TLS). The server has no keys and sees only encrypted data, and serves such data to other clients (Proton web, Android, etc.) that do the decryption/encryption operations back. Underneath, it’s still CalDAV/WebDAV.
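Just to make the mechanism concrete, a toy illustration of “encrypt on the client, then speak plain WebDAV” (not Proton’s actual implementation; the server, credentials and filenames are made up):

# encrypt a calendar item locally, then upload only the ciphertext via a normal WebDAV PUT
gpg --armor --encrypt --recipient me@example.com --output event.ics.asc event.ics
curl -T event.ics.asc -u user:password https://dav.example.com/calendars/personal/event.ics.asc
# the DAV server stores and serves opaque ciphertext; only clients holding the key can read it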
You clearly have bought into their propaganda but oh well think whatever you want, you’re the one using the service and it’s your data that will be hostage after all.
I don’t need to buy propaganda, I am a security professional and do this stuff for a living. I also understand what vendor lock is, because all the companies I ever worked with had forms of vendor lock, and I am aware of what Proton actually offers.
Maybe you should really stop, reflect, and evaluate whether you really have the competence to make certain claims on the internet. I understand nobody is there keeping score and there are no consequences, but you are honestly embarrassing yourself and spreading false information due to a clear lack of understanding of concepts such as e2ee, transport security, vendor locking, etc.
There are lots of ways to do e2e encryption on e-mail over SMTP (OpenPGP, S/MIME etc.)
Yes, and that requires using a client. The JS code of the webclient and the bridge are clients for PGP.
SMTP itself also supports TLS for secure server-to-server communications (or server-to-client in submission contexts) as well as header minimization options to prevent metadata leakage.
TLS is completely pointless in this conversation. TLS is a point-to-point protocol and it’s not e2e where the definition of the “ends” are message recipient and sender (i.e., their client applications), it only protects the transport from your client to the server, then the server terminates the connection and has access to the plaintext data. Proton also uses TLS, but again, it has no use whatsoever for e2ee.
And Proton decided NOT to use any of those proven solutions and go for some obscure propriety thing instead because it fits their business better and makes development faster.
They didn’t do anything obscure, they have opensource clients that do PGP encryption similar to how your web client would do. Doing encryption on the client is the only way to ensure the server can’t have access to the content of the emails. It just happens that the client is called “proton bridge” or “proton web” instead of OpenPGP.
only exists until they allow it to exist.
It’s their official product, and anyway it’s not a blocker for anything. They stop giving you the bridge? You move in less than 1h to another provider.
You don’t know if there are rate limits on the bridge usage and other small details that may restrict your ability to move large amounts of email around.
Do you know that there are, or are we arguing on hypotheticals?
Decent providers will give you an export option that will export all your e-mail using industry standard formats such as mbox and maildir.
True. You can still get the data out, whether or not they do it in a “best practice” way. It’s not vendor lock.
Proton mail is so closed that you can’t even sync your Proton mail contacts / calendars with iOS or Android - you can only use their closed source mail client to access that data or the webui.
https://github.com/ProtonMail. All the mail clients are opensource.
Also, WebDAV, CardDAV, CalDAV do not support e2ee. You need once again a client that extends it, which is what Proton also does!
So the question is very simple: do you prefer e2ee, or do you prefer native plain CalDAV/WebDAV/CardDAV? If the answer for you is the latter, Proton is simply a product that is not for you. If you prefer the former, then Proton does it. Either way, again, this is not vendor lock. They allow you to export contacts and calendars in a standard format, and you can move to a new provider.
Proton doesn’t respect the open internet by not basing their services on those protocols and then they feed miss-information (like the thing about e2e encryption being impossible on SMTP) and by using it you’re just contributing to a less open Internet.
SMTP does not allow e2ee by definition. I am not sure whether you don’t understand SMTP or how e2ee works, but SMTP is a protocol based on the server having access to the content. The only way you can do e2ee is using a client that encrypts the content, like PGP (which is what Proton uses), before sending it to the server. This is exactly what happens with Proton: the webclients use SMTP to talk to the Proton server, but before that they do client-side encryption (using PGP), exactly like you would do with any other client (see https://github.com/search?q=repo%3AProtonMail%2FWebClients smtp&type=code).
Now, you made a claim, which is that Proton vendor locks you:
So your claim that you are vendor locked is simply false, deal with it.
You made some additional claims about Proton not using plain standard protocols. That’s true. None of those protocols support e2ee, so they wrote clients that extend them. All clients are open source, including the bridge. This has nothing to do with being vendor locked anyway, which in fact you did not explain at all. You talked about interoperability at most, which is not related to vendor lock.
You also made additional uninformed or false claims:
I personally really like the gazillion bangs that are also available, the personal up/down-ranking and blocking of websites, and their quick answer is often fairly good (I mostly use it for documentation lookup). The lenses are definitely the best feature though, especially coupled with bangs. I converted even my wife, who really loves it.