

This is why when an app pops up that permission dialog, you always say no. The number of permissions Meta apps ask immediately upon startup is a red flag on its own.
Can’t collect and upload what it doesn’t have.
For all its flaws and mess, NFS is still pretty good and used in production.
I still use NFS to file share to my VMs because it still significantly outperforms virtiofs, and since the network is just a local bridge, latency is non-existent.
The thing with rsync is that it’s designed to quickly compute the least amount of data to transfer to sync over a remote (possibly high-latency) link. So when it comes to backups, that’s pretty much exactly what it was designed for.
The only cool new alternative I can think of is to use btrfs or ZFS and btrfs/zfs send | ssh backup btrfs/zfs recv, which is the most efficient and reliable way to back up, because the filesystem is aware of exactly what changed and can send exactly that set of changes. And obviously all special attributes are carried over: hardlinks, ACLs, SELinux contexts, etc.
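For instance, an incremental ZFS send over SSH might look roughly like this (just a sketch; tank/home, the snapshot names, and the backup host/pool are placeholders):
zfs snapshot tank/home@today
zfs send -i tank/home@yesterday tank/home@today | ssh backup zfs recv backuppool/home
btrfs has the equivalent with btrfs subvolume snapshot and btrfs send -p old new | btrfs receive.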
The problem with backups over any kind of network share is that if you’re gonna use rsync anyway, the latency will be horrible and it will take forever.
Of course you can also mix multiple things: rsync laptop to server periodically, then mount the server’s backup directory locally so you can easily browse and access older stuff.
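A rough sketch of that combo (host names and paths are made up): rsync with the attribute-preserving flags, then mount the backup directory over SSH to browse it:
rsync -aHAX /home/ backupserver:/srv/backups/laptop/home/
sshfs backupserver:/srv/backups ~/backups
The -H, -A and -X flags keep hardlinks, ACLs and xattrs (which is where SELinux contexts live), matching what the filesystem-level send/recv gives you for free.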
Technically it wasn’t really designed with megainstances that swallow the entire fediverse in mind.
My instance has no problem whatsoever keeping up, and storage is well under control. But there are only a few of us here, subscribed to a subset of the available communities, so my instance isn’t 90% filled with content I don’t care about and will never look at. It also reduces the moderation burden, because things are slow enough that I can actually see mostly everything that comes through.
Lemmy itself is also pretty inefficient in that regard; you could very much make software that pulls instead and backfills a local cache as needed.
Even my Reddit subscriptions would be pretty easy on my instance.
One thing to keep in mind is that ActivityPub isn’t exactly made for social media in the sense most people use it nowadays. It’s intended to be more like RSS feeds: you’re supposed to subscribe to stuff like news sites and be able to bring it all into a content aggregator. Seen that way, its design makes a lot of sense.
It kinda works well for public microblogging as well. It’s when you start involving moderation, voting, sharing, boosting that things get kinda weird.
I’ll add some of my comments to that discussion.
You guys have basically been describing Aether and Nostr
The main issue is that when your instance starts federating, accounts are created with a key pair that you will lose when changing software, and generally a whole bunch of URLs will no longer be valid. The actor ID of your user is https://feddit.org/u/buedi, not just buedi. Mastodon might make it https://feddit.org/@buedi instead. As per the spec, that is the canonical URL for the user/actor.
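If you’re curious what other servers actually see, you can fetch the actor document yourself; Lemmy (and Mastodon) should return the ActivityPub JSON when asked for it, and its id field is that canonical URL:
curl -H 'Accept: application/activity+json' https://feddit.org/u/buedi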
Other instances will still try to push content to your instance assuming it runs the software it originally registered with. So you may continue to receive data for Lemmy communities, which Mastodon has no clue what to do with.
You can host the API/frontend on a different domain no problem, but the actual ActivityPub service should be on a dedicated subdomain to avoid the issues.
That said, I believe that after a couple of days/weeks it should eventually sort itself out, as your instance keeps erroring out, gets dropped, and re-registers with the new software.
That seems like a good way to get vendors to go back to only shipping firmware updaters that run on Windows.
If that works, something with the PipeWire state might be weird. Try deleting the pipewire/wireplumber directories in ~/.local/state, followed by a reboot (or restarting pipewire and wireplumber). That should reset it.
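Something along these lines should do it (assuming a default systemd user setup and the usual state paths):
rm -rf ~/.local/state/pipewire ~/.local/state/wireplumber
systemctl --user restart wireplumber pipewire pipewire-pulse
Or just reboot instead of the restart.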
Every other company is free to sponsor FOSS development too. You’re saying that like Microsoft and Windows aren’t already a de facto monopoly. Charge a FOSS contribution fee instead of the Windows license and you’re done: Linux development sponsored by the manufacturers is solved, and they too get to not be steamrolled by Microsoft.
A lot of it is FOSS so no, they can’t take it back even if GabeN dies and a hostile takeover happens. They can stop giving out updates, which would be disappointing but far from the end of it.
They might not be benevolent, but they’re still a major breath of fresh air compared to basically every other gaming company out there.
No, just the one at the start.
Makes it look like this
There’s YaCy. I ran a node for a while, but it ended up filling my server’s drive just indexing the German Wikipedia, and the results were terrible.
And it’s still not private because you have to broadcast the query across the network.
The email address used to be used to send you notices if your cert wasn’t renewed, plus other communications. They’ve since discontinued that feature, so the email isn’t super important.
It’s a good idea to provide a valid email address, but it’s not that important and doesn’t really matter for the purpose of issuing a certificate. It’s not part of the problem you’re having.
Ubuntu 7.10 so late 2007, but I guess the nerd part came when I installed Arch in 2011. Still running that very same install.
My own personal example: https://www.reddit.com/r/linux4noobs/s/8FM1ZvXi68
It just doesn’t look great nor serious nor welcoming.
The guy gives off a lot of “I don’t care about anyone’s use cases except mine” vibes too. He also called GNOME and KDE “teletubby DEs” when I mentioned XComposite being an important feature, and treats the widely known Xorg issues around multi-monitor vsync and mismatched resolutions as basically not real problems.
XLibre is 100% a political fork, because the guy claims Xorg is deprecated by a big tech conspiracy pushing inferior software onto users. There’s nothing wrong with wanting to continue Xorg’s legacy, but come on, we don’t have to pretend Xorg is this perfect thing that always works. Xorg has been hated for decades for a reason. This xkcd exists for a reason: https://xkcd.com/963/
That kind of makes sense? Aren’t the labs what they use for A/B testing or benchmarking new features before general release, toggling random people’s settings in the process? I vaguely recall some drama around that.
If I turn off telemetry I want those off too; it makes sense they’re linked. If you want a new feature there’s always nightly + about:config, but I don’t want it downloading random config toggles, especially if it’s not reporting back that it broke my stuff. The code should be what I installed, not some random lab blob downloaded off their servers at runtime.
It’s derived from both a key held in the TEE and the PIN/password.
The reason for that is so you need both the user’s correct password, and the TEE to agree to hand out the key, which it may refuse to do if there’s been too many attempts. When you factory reset it just generates a new key, instantly making all the previous data permanently inaccessible. The TEE will also wipe the key if you unlock the bootloader or try to break in the wrong way.
It’s still only roadblocks though: extract the key from the TEE and you have unlimited attempts on what are usually weak 4-6 digit PINs, which isn’t a lot of tries. Then you’d better hope you had a good password.
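As a purely illustrative sketch (not Android’s actual scheme, just the shape of the idea): the disk key only falls out when the hardware-held secret and the PIN are combined, so you need both, but once the TEE secret is extracted a 6-digit PIN is only a million candidates to try:
tee_secret=$(openssl rand -hex 32)   # stand-in for the secret the TEE refuses to release after too many failed attempts
pin=123456
printf '%s:%s' "$tee_secret" "$pin" | sha256sum   # stand-in for the real KDF that produces the disk key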
You need to set up your PC to be on that IP address first; TFTP doesn’t magically listen on a particular IP, the PC has to be configured with it.
ip link set eth0 up
ip addr add 10.10.10.3/24 dev eth0
ip addr add 10.10.10.1/24 dev eth0
Then you can start the TFTP server on the interface:
dnsmasq -d --port=0 --enable-tftp --tftp-root=/path/to/tftp/root -i eth0
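To sanity-check that it’s actually serving, you can pull a file back with curl, which speaks TFTP (assuming your curl build has TFTP support; the filename is just a placeholder for something in your tftp root):
curl -o /tmp/test tftp://10.10.10.3/somefile.bin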