Nope, it’s all done on device with local machine learning models.
Smart studios will keep their writers for the main NPCs and use AI for the random filler NPCs that wander around cities and the like. We’ll see how many studios are smart, I guess.
I just bought a drive from them last month (from Canada) and just received a $60 duty bill. The time before that I got nothing. YMMV
I think that’s my main complaint with the game. Once you find a way to beat the boss, you just go for that build every time. It’s so punishing, and the path to get there is so long, that it’s a massive disincentive to try new things.
I’m currently using Unraid for pretty much everything you listed, and I love it so much. I really appreciate being able to set up almost everything through the web interface. It makes my hobbies feel fun rather than just an extension of my day job.
That said, I bought the licence before they switched to a subscription model. So if I were starting over I might look into free alternatives.
You can still buy a lifetime subscription for Unraid, it’s just a lot more expensive.
The Firefox example is actually the reverse: Firefox funds the Mozilla Foundation. This is a case of an open source project successfully monetising through search referrals (mostly from Google).
You do if third party clients aren’t possible? You have control over what client the receiving end is using.
But apparently third party clients are possible, so it’s moot.
Of course, I fully agree! My point was just that you can eliminate the risk of poorly implemented cryptography at the endpoints. Obviously there are a thousand and one other ways things could go wrong. But we do the best we can with security.
Anyway apparently third party clients are allowed after all? So it’s a moot point.
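
To make that endpoint risk concrete, here’s a minimal sketch (hypothetical toy code, not any real messenger’s client; the `buggy_encrypt` helper and the hardcoded nonce are invented for illustration). The cipher itself is sound, but one careless client choice leaks message structure to anyone on the wire:

```python
# Hypothetical sketch: a sound cipher undermined by a buggy client.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

key = os.urandom(32)        # 256-bit key, generated properly
fixed_nonce = b"\x00" * 16  # BUG: nonce hardcoded instead of unique per message

def buggy_encrypt(plaintext: bytes) -> bytes:
    # Reusing (key, nonce) makes ChaCha20 emit the identical keystream every call.
    enc = Cipher(algorithms.ChaCha20(key, fixed_nonce), mode=None).encryptor()
    return enc.update(plaintext)

p1, p2 = b"attack at dawn!!", b"retreat at nite!"
c1, c2 = buggy_encrypt(p1), buggy_encrypt(p2)

# XOR of the two ciphertexts equals XOR of the two plaintexts: the keystream
# cancels out, so an eavesdropper learns message structure without the key.
assert bytes(a ^ b for a, b in zip(c1, c2)) == bytes(a ^ b for a, b in zip(p1, p2))
```

With only first-party clients, the vendor can at least audit and fix this class of bug on both ends; with arbitrary third-party clients, you inherit the weakest implementation your contacts happen to run.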
Excellent point! If I’m sending someone information that could get me killed if it were intercepted by the state, I’d sure as hell want some guarantees about how the other side is handling my data. Disallowing third party clients gives me at least one such guarantee.
The mere fact this technology exists gives legislators a tool in their toolbox. I could imagine a future where the EU mandates use of PPA in certain circumstances.
You may not care about financial shit, but that doesn’t change the reality of the situation. My point is precisely that the financial costs are so prohibitive that the most likely scenario is that no one will be capable of stepping up long term.
As soon as there’s another Spectre-level security incident that requires a massive rewrite of the engine, any rendering engine developers with sub-100M budgets are sunk. Frankly, 100M is probably optimistic.
Cool makes sense, thanks for the reply! And yeah, I don’t think I’m quite there yet.
Out of curiosity, what’s the benefit of splitting those?
I’ve been meaning to try Caddy, but I just can’t even imagine something simpler than NginxProxyManager.
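
For comparison, here’s roughly what an entire Caddy config for one proxied service can look like (a hedged sketch; the hostname and port are placeholders I made up). Caddy obtains and renews the site’s TLS certificate automatically:

```
# Caddyfile
media.example.com {
    reverse_proxy 127.0.0.1:8096
}
```

No GUI, but also nothing to click through: one file, two lines of substance.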
I too am skeptical.
Mozilla cares a lot about performance. It is monitored obsessively and there are entire teams dedicated to squeezing out every last drop of performance. Heaven and earth would be moved for a 30% perf boost. I’m guessing either there are some very severe tradeoffs to these prefs, or setting them somehow breaks the methodology used to obtain this number.
Edit: also, benchmarks can be notoriously misleading. I don’t have any opinions or knowledge on Basemark (the benchmark used to get this 30% number), but Speedometer 3 is the most state-of-the-art and generally agreed-upon benchmark for performance these days.
That doesn’t mean the 30% number is bogus… Just that it should be followed by “…on Basemark” rather than implying it’s representative of overall performance.
This is the best take I’ve seen on the whole kerfuffle so far.
It’s just like all the streaming services. They’ll look the other way for a time, but then crack down whenever it makes the most financial sense.