This is more complicated than some corporate infrastructures I’ve worked on, lol.
I usually just use VS Code to do full-text searches, and write down notes in a note taking app. That, and browse the documentation.
Nah, LLMs have severe context window limitations. It starts to get wackier after ~1000 LOC.
Python is quite slow, so it will use more CPU cycles than many other languages. If you’re doing data-heavy stuff, it’ll probably also use more RAM than, say, C, where you can control the types and memory layout of structs.
That being said, for services, I typically use FastAPI, because it’s just so quick to develop stuff in Python. I don’t do heavy stuff in Python; that’s done by packages that wrap binaries compiled from C, C++, Fortran, or CUDA. If I need tight loops, I either switch entirely to a different language (Rust, lately), or I write a library and interact with it via ctypes.
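For anyone curious, the ctypes pattern is only a few lines. A minimal sketch, using libm’s `sqrt` as a stand-in for your own compiled library (the library lookup below assumes a typical Linux/glibc or macOS setup):

```python
# Sketch of the ctypes pattern: call a compiled C function from Python.
# libm's sqrt stands in here for your own .so built from C/Rust.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double     # declare the C signature:
libm.sqrt.argtypes = [ctypes.c_double]  # double sqrt(double)

print(libm.sqrt(2.0))  # 1.4142135623730951
```

Declaring `restype`/`argtypes` up front matters; without them ctypes guesses `int` and silently mangles floats.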
C# is actually pretty nice. Ecosystem, not so much, but D doesn’t really have one anyways :)
Yeah, the image bytes are random because they’re already compressed (unless they’re bitmaps, which is not likely).
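You can see this with a quick stdlib sketch: measure byte entropy before and after compression (made-up redundant “word soup” standing in for image data):

```python
# Toy illustration: well-compressed data looks close to random noise.
import math
import random
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 = indistinguishable from random)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Fake redundant data (made-up word soup, fixed seed for repeatability)
random.seed(0)
words = ["pixel", "frame", "block", "codec", "chunk", "byte", "scan", "tile"]
raw = " ".join(random.choice(words) for _ in range(5000)).encode()
packed = zlib.compress(raw, 9)

print(byte_entropy(raw))     # low: repetitive text
print(byte_entropy(packed))  # near 8 bits/byte: looks random
```

The intuition: compression squeezes out the statistical patterns, and whatever patterns remain are more bytes the compressor could have removed.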
This is good, IMO. People don’t have to smoke as much, so less damage is done to their lungs. Vapes, edibles, and concentrates that are not combusted are probably even less damaging.
Both times I’ve received ChipDrops, the loads were an entire dump truck; ~20 cubic yards. I just used a wheelbarrow, a many-tined pitchfork, and a garden rake; it’s about a full day’s work to handle it all. I now have multiple large mulched beds and a small pile in my back yard, use some to cover food scraps in my compost bin, and spread some in my vegetable beds/paths. I think ChipDrop also lets you notify other users that you’re giving some away if you can’t use it all, or you could try something like Craigslist.
If you’re talking about naive Bayes filtering, it most definitely is an ML model. Modern spam filters use more complex ML models (or at least I know Yahoo Mail used to ~15 years ago, because I saw a lecture where John Langford talked a little bit about it). Statistical ML is an “AI” field. Stuff like anomaly detection is also usually done with ML models.
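The core of naive Bayes filtering fits in a few lines. A toy stdlib sketch with made-up training messages and Laplace smoothing (real filters train on huge corpora and use far richer features):

```python
# Minimal naive Bayes spam classifier sketch (toy data, equal class priors).
import math
from collections import Counter

spam = ["win free money now", "free pills click now", "claim your free prize"]
ham  = ["meeting at noon tomorrow", "lunch with the team", "project status update"]

def train(docs):
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts

spam_counts, ham_counts = train(spam), train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts):
    total = sum(counts.values())
    score = 0.0
    for w in msg.split():
        # Laplace smoothing so unseen words don't zero out the probability
        score += math.log((counts[w] + 1) / (total + len(vocab)))
    return score

def classify(msg):
    # Classes are balanced here, so priors cancel out
    spam_score = log_likelihood(msg, spam_counts)
    ham_score = log_likelihood(msg, ham_counts)
    return "spam" if spam_score > ham_score else "ham"

print(classify("free money prize"))    # spam
print(classify("team meeting update")) # ham
```

The “naive” part is the independence assumption: word probabilities just multiply (sum in log space), which is wrong for real language but works surprisingly well for spam.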
OSMC’s Vero V looks interesting. A Pi 4 with OSMC or LibreELEC could work. I’m probably going to do something like this pretty soon. I just set up an *arr stack last week, and I’m just using my smart TV with the Jellyfin app installed ATM.
My PC running the Jellyfin server can’t transcode some videos though; I’m probably going to put an Arc A310 in it.
Democracy was a failed form of government about 2000 years ago.
I’ve been using last.fm for, I guess, decades now. Looking at what my “neighbors” are listening to is the most helpful.
I’ve used them as a proxy for a web app at the last place I worked. Was just hoping they’d block unwanted/malicious traffic (not sure if it was needed, and it wasn’t my choice). I, personally, didn’t have any problems with their service.
Now, if you take a step back, and look at the big picture, they are so big and ubiquitous that they are a threat to the WWW itself. They are probably one of the most valuable targets for malicious actors and nation states. Even if Cloudflare is able to defend against infiltration and attacks in perpetuity, they have much of the net locked-in, and will enshittify to keep profits increasing in a market they’ve almost completely saturated.
Also, CAPTCHAs are annoying.
Yeah, torrents usually run 100-300 KiB/s. I guess that’s not too bad for smaller files; about one to three hours per GB.
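The back-of-the-envelope math, for the curious (treating 1 GB as 10^9 bytes):

```python
# Rough numbers behind the "one to three hours per GB" estimate.
def hours_per_gb(kib_per_s: float) -> float:
    return 1e9 / (kib_per_s * 1024) / 3600

print(round(hours_per_gb(300), 1))  # ~0.9 hours at 300 KiB/s
print(round(hours_per_gb(100), 1))  # ~2.7 hours at 100 KiB/s
```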
I mean, you can be sued for anything, but it will get thrown out. Like, I guess the MPAA could offer a movie for download, then try to sue the first hop they upload a chunk to, but that really doesn’t make any sense (because they offered it for download in the first place). Furthermore, the first hop(s) aren’t the people that are using the file, and they can’t even read it. If people could successfully sue nodes, then ISPs and postal services could be sued for anything that passes through their networks.
I think similar, and arguably more fine-grained, things can be done with TypeScript, traditional OOP (interfaces, and maybe the Facade pattern), and perhaps dependency injection.
Onion-like routing. It takes multiple hops to get to a destination. Each hop can only decrypt the next destination to send the packet to (i.e. peeling off a layer of the onion).
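A toy sketch of the layering, with made-up relay names/keys and a hash-based XOR keystream standing in for the public-key crypto a real network (Tor, I2P) would use (this is NOT secure, just illustrative):

```python
# Toy onion layering: each relay peels one layer and learns only the next hop.
import hashlib
import json

def keystream(key: bytes, n: int) -> bytes:
    """Deterministic pseudo-random bytes from a key (toy cipher, not secure)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def wrap(message: bytes, route, destination: str) -> bytes:
    """Encrypt in layers; route = [(hop_name, hop_key), ...], first hop outermost."""
    # The layer readable by hop i names only hop i+1, never the full route.
    next_hops = [name for name, _ in route[1:]] + [destination]
    packet = message
    for (_, key), nxt in zip(reversed(route), reversed(next_hops)):
        packet = xor(json.dumps({"next": nxt, "data": packet.hex()}).encode(), key)
    return packet

def relay(packet: bytes, key: bytes):
    """One hop peels one layer: it learns the next hop, nothing else."""
    layer = json.loads(xor(packet, key))
    return layer["next"], bytes.fromhex(layer["data"])

route = [("relay-a", b"key-a"), ("relay-b", b"key-b"), ("exit", b"key-c")]
packet = wrap(b"hello", route, "destination")

hop, packet = relay(packet, b"key-a")  # relay-a learns only "relay-b"
hop, packet = relay(packet, b"key-b")  # relay-b learns only "exit"
hop, packet = relay(packet, b"key-c")  # exit learns the destination + payload
print(hop, packet)  # destination b'hello'
```

The key property: no single relay can link the sender to the destination, because each one only ever sees one layer.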
For the things you mentioned, the vegan and gluten-free options are processed much more. Beef, for example, is arguably a “whole food.”
Gluten-free isn’t healthier unless you have specific conditions. Most people can handle gluten fine, and some vegan foods are primarily gluten (such as seitan).
Vegan isn’t inherently healthy, especially if you’re eating mostly processed foods. A primarily whole-food vegan diet is likely healthier and cheaper than most people’s diets though.
Hmm, so it looks like around 100 kB/s. That’s about what I remember (100-300 kB/s).
I’ve recently been trying out Tribler, and it’s much faster than the last time I tried it (I’ve seen 2 MB/s on popular torrents, but around 500 kB/s on less popular ones). Not sure if there are simply more exit nodes with more bandwidth now or more people on the Tribler network seeding.
Haven’t tried Gemini; it may work. But in my experience with other LLMs, even if the text doesn’t exceed the token limit, they start making more mistakes and behaving strangely more often as the context grows.