And that seems entirely reasonable to me. Unless I am missing something
I think you are obligated to share your entire known_hosts file to prove this.
The Nextcloud snap is the best and easiest way to self-host Nextcloud.
I said it. Fight me.
The “coreutils” that macOS uses by default are all older, shittier BSD versions. I discovered this when half of my scripts and commands didn’t work properly.
Silly me thought I could just bring my bash scripts over and not have any major issues (I’m not doing anything crazy). But even something as simple as grep didn’t work right, because the old version Mac comes with couldn’t recursively search directories.
All of the GNU versions are much better, and you can install them with Homebrew.
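If anyone wants to do the same, it’s roughly this (package names are the common ones; adjust to taste):

```bash
# Install the GNU replacements alongside the BSD tools
brew install coreutils grep gnu-sed findutils

# Homebrew prefixes them with "g" (ggrep, gsed, gls) so they don't
# shadow the system versions. To get the plain GNU names, put the
# gnubin directories first in your PATH:
export PATH="$(brew --prefix)/opt/coreutils/libexec/gnubin:$PATH"
export PATH="$(brew --prefix)/opt/grep/libexec/gnubin:$PATH"
```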
Would you mind educating us plebs, then? I had a similar question to OP’s, and I can assure you I definitely don’t understand local auth services the way I probably should.
It might be worth taking a step back and looking at your objective with all of this and why you are doing it in the first place.
If it’s for privacy, then unfortunately that ship has sailed when it comes to email. It’s the digital equivalent of a postcard. It’s inherently not private. Nothing you do will make it private. Even services like Proton Mail aren’t private unless you only email other people on Proton.
I appreciate wanting to control your own destiny with it, but there are much more productive things you could be spending your time on to improve your privacy.
My thoughts on it are: as a developer, if you flag the issue for your management, and they want to move forward, then you’ve done your part.
Maybe put an extra comment in the code for posterity’s sake.
It’s not ultimately your problem and what else are you going to do? Work unpaid nights and weekends to fix it for some guy who might run into a problem 8 years from now?
If it’s working again all of a sudden, I would lean towards fail2ban (f2b). I don’t know what your “timeout” is, but if f2b got tripped, it would explain why you couldn’t get in yesterday but today it works (assuming your ban expires in 24 hours or so).
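If you want to confirm rather than guess, fail2ban will tell you (assuming the standard sshd jail; swap in your jail name and IP):

```bash
# Show which IPs are currently banned in the sshd jail
sudo fail2ban-client status sshd

# Lift a ban manually instead of waiting for it to expire
# (203.0.113.5 is a placeholder; use your own address)
sudo fail2ban-client set sshd unbanip 203.0.113.5
```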
A GPU with a ton of VRAM is what you need, BUT
An alternate solution is something like a Mac mini with an M-series chip and 16GB of unified memory. The neural cores on Apple silicon are actually pretty impressive, and since they use unified memory, the models have access to whatever RAM the system has.
I only mention it because a Mac mini might be cheaper than a GPU with tons of VRAM by a couple hundred bucks.
And it will sip power comparatively.
A 4090 with 24GB of VRAM is $1900; an M2 Mac mini with 24GB is $1000.
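Rough napkin math if you’re sizing this stuff (a common rule of thumb, not gospel): memory needed ≈ parameters × bytes per parameter, plus about 20% overhead for context and buffers.

```bash
# 13B model at 4-bit quantization: 13 * 4/8 * 1.2 ≈ 7.8GB (fits in 16GB unified memory)
echo "scale=1; 13 * 4 / 8 * 1.2" | bc

# 70B model at 4-bit quantization: 70 * 4/8 * 1.2 ≈ 42GB (needs the big VRAM/RAM tier)
echo "scale=1; 70 * 4 / 8 * 1.2" | bc
```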
Like he was saying, it’s more than just power loss. It’s a way of “sanitizing” the power as it comes in. This is “usually” not a problem, but dirty power is arguably worse than power outages. If the voltage fluctuates or sags for whatever reason, that puts a big strain on your power supplies.
This could happen because you run a vacuum on the same circuit in an old house, because the guy down the street electrocutes himself, or because the power coming in from the electric company is ‘dirty’ due to an issue with a transformer or something upstream. It can be imperceptible to you, but your tech notices.
I switched to the snap package and it’s been rock solid and pain-free the entire time.
I welcome any and all comments on why snap is Satan.
When I read your post I was expecting something much worse than what you linked to.
It wasn’t really all that bad. IMO, it was 1 or 2 emails too many.
Maybe I’m a little biased because I love the product so much. It’s a fantastic search engine again and all of the AI extras add value and aren’t obnoxious like everyone else’s.
I am still happily paying for kagi even if the CEO emails people that write shitty blogs about them.
I use vimwiki and wrote a bash script that pulls all of the TODO items from across my wiki and puts them in a single file with TODO and IN PROGRESS sections.
I have a keybind that pulls up the list and runs the script to refresh it.
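The script is basically just grep. A stripped-down sketch of the idea (the paths, extension, and markers here are illustrative, not exactly what I run):

```bash
#!/usr/bin/env bash
# Gather TODO / IN PROGRESS lines from across a vimwiki tree into one file.
WIKI_DIR="$HOME/vimwiki"
OUT="$WIKI_DIR/todos.wiki"

{
  echo "== TODO =="
  grep -rn "TODO" --include="*.wiki" --exclude="$(basename "$OUT")" "$WIKI_DIR"
  echo
  echo "== IN PROGRESS =="
  grep -rn "IN PROGRESS" --include="*.wiki" --exclude="$(basename "$OUT")" "$WIKI_DIR"
} > "$OUT"
```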
It’s not linked to any calendar though. I keep my to-do list and calendar separate.
I use Gmail and its calendar for my personal stuff. At work I am forced to use Outlook.
Unless it’s the newest of the new Nvidia RTX cards, it’s generally a wash.
You trade off one set of issues for another.
I had a 3070 Ti that I “upgraded” to a 6900 XT, and I kind of regretted it. I fell for the “AMD is king on Linux” hype.
Nvidia is way better than people let on and AMD isn’t nearly as great as people let on.
That’s my two cents.
That last part isn’t a fact. We don’t have a ruling and we don’t know how they will vote.
(God knows how this court will vote but don’t spread misinformation)
“rocinante” for my proxmox host.
“awkward, past his prime, and engaged in a task beyond his capacities.” From Don Quixote’s Wikipedia page.
It seemed fitting considering it is a server built from old PC parts…engaged in tasks beyond its abilities.
The rest of my servers (mostly VMs) are named for what they actually do or which vlan they are on (e.g., vm15), and they aren’t fun or exciting names. But at least I know that if I am on that VM, it has access to that vlan (or that it’s segregated from my other networks).
I totally believe you can hit the RAM limit on these. I was just saying I’ve surprisingly managed to be fine with 8GB.
Android emulators are notoriously memory hungry, and there are certain tasks that just flat out require more RAM regardless of how well it’s managed.
The advice I heard about these a while back is: if you know 8GB isn’t enough for you, then you aren’t the market segment they are targeting with the basic models.
That said, no “pro” model should come with just 8GB. It just waters down what “pro” means.
I’ve been using a MacBook Air with 8GB of RAM since they came out. It was on sale at Costco and I had a gift card; I think I paid $500 out of pocket.
I was worried that 8GB would limit me but it was the one on sale so I rolled with it. I can say that after several years, the only time it’s limited me was when I tried to run an AI model that was 8GB. Obviously, that becomes an issue at that point.
But for all I do with my Air, including creating a 1GB ramdisk for a script/automation ML job I run, I have never felt limited by the RAM.
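(The ramdisk is just the stock macOS diskutil/hdiutil trick, in case anyone wants it; the size argument is in 512-byte sectors, so 1GB = 2097152:)

```bash
# Create and mount a 1GB ramdisk (ram://<number of 512-byte sectors>)
diskutil erasevolume HFS+ "RAMDisk" $(hdiutil attach -nomount ram://2097152)
```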
I open a bajillion Firefox tabs and never close windows, etc. It’s an Air, after all, not a workstation substitute, so my use cases aren’t overly taxing in the grand scheme of things. I’m not editing 4K video or doing rendering with it. But RAM hasn’t been an issue outside of the AI workload on the 8GB model, and tbh that’s only an issue because of the ML cores. They absolutely scream vs the 1080 Ti that’s in my server. My M1 with 8GB of RAM runs circles around my 24-core, 128GB-RAM server with a 1080 Ti.
I did just get a MacBook Pro for work, for which I requested 128GB of RAM. But that’s because I wanted it for bigger AI models (and work is paying, not me).
I’m blown away that you’ve managed to be offended by this.
I know that ploum blog post gets cited way too often on Lemmy, but this is a situation where I think Google has either intentionally or inadvertently executed a variation of the “embrace, extend, extinguish” playbook that Microsoft created.
They embraced open source, extended it until they practically cornered the market on browser engines, and now they are using that position to extinguish our ability to control our browsing experience.
I know they are possibly facing a break-up with the latest ruling against them.
It would be interesting to see if they force divestiture of Chrome from the ad business. The incentives are perverse when you do both with such dominance, and it’s a massive conflict of interest.