• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • S410@kbin.social to Privacy@lemmy.ml · Android Microphone Snooping
    8 months ago

    Android is sending a ton of data, though, even if you’re not doing anything internet related. It also kinda reacts to “okay, google”, which wouldn’t really be possible if it weren’t listening.

    Now, it obviously doesn’t keep a continuous, lossless audio stream from the phone to some google server. But, it could be sending text parsed from audio locally, or just snippets of audio when the thing detects speech. Relatively normal stuff to collect for analytics purposes, actually.

    Now, data like that could “easily” get “misplaced”, of course, and end up in the ad-shoveling machine… Not necessarily at Google’s hands: could be any app, really. Facebook, TikTok, a random free-to-play Candy Crush clone, etc. But if that data gets into the interwoven clusterfuck of advertising might, it will likely end up having an effect on the ads shown to the user.



  • Simply disabling registration of new accounts using Tor/VPN should be sufficient and won’t affect existing users.

    Although, requiring verification of accounts made via those would be a better approach. Require captchas to prevent automated posting. Automatically mark posts made from new accounts and/or via Tor or a VPN for moderation review (roughly as sketched below).

    There are ways to mitigate spam that aren’t as blunt and overreaching as blanket banning entire IP ranges. This approach is the dumbest, least competent way of ensuring any kind of security, and, honestly, awfully close to being needlessly discriminatory. Fuck everyone from countries with draconian internet censorship, I guess?
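
    A minimal sketch of that flag-don’t-ban idea, assuming hypothetical names (Account, Post, via_tor_or_vpn) rather than any particular platform’s API:

    ```python
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    # All names here are illustrative placeholders, not any real forum's data model.
    @dataclass
    class Account:
        created_at: datetime
        verified: bool = False

    @dataclass
    class Post:
        author: Account
        via_tor_or_vpn: bool  # e.g. the source IP matched a Tor exit node or known VPN range

    NEW_ACCOUNT_AGE = timedelta(days=7)

    def needs_review(post: Post, now: datetime) -> bool:
        """Flag for moderation review instead of blocking outright."""
        is_new = now - post.author.created_at < NEW_ACCOUNT_AGE
        return (is_new or post.via_tor_or_vpn) and not post.author.verified
    ```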



  • You pull up. Get out. Put the nozzle in. Then you go inside. There, you wait in line for 5 minutes, because the dick from another pump decided to buy a fucking coffee and a sandwich, and the only employee is busy making those for him, instead of operating the pumps. Then you actually pay and get the gas flowing. By the time you’re back at the car, it’s already finished pumping.

    So, there can be a time gap of several minutes with multiple actions and distractions during it. Is it really that surprising people forget to pull the thing out, occasionally?


    You’re linking a post… from 2010. AMD replaced the old radeon driver with their open source AMDGPU driver in 2015. That’s what pretty much any AMD GPU released in the last 10 years uses now.

    Furthermore, AMDGPU is an in-tree driver, and AMD actively collaborates with the kernel maintainers and with the developers of other graphics-related projects.

    As for Nvidia: their kernel modules are better than nothing, but they don’t contain a whole lot in terms of actual implementation. If before we had a solid black box, now, with those modules, we know that this black box has around 900 holes and what comes in and out of those.

    Furthermore, if you look at the page you’ve linked, you’ll see that “the GitHub repository will function mostly as a snapshot of each driver release”. While the possibility of contributing is mentioned… Well, it’s Nvidia. It took them several years to finally give up trying to force EGLStreams and implement GBM, which had already been adopted as the de facto standard by literally everybody else.

    The modules are not useless. Nvidia tends not to publish any documentation whatsoever, so they’re probably better than nothing and of some use to the nouveau driver developers… But it’s not like Nvidia came out and offered to work on nouveau to bring it up to par with their proprietary drivers.


  • k, so for the least used hardware, linux works fine.

    Yeah, basically. Which raises a question: how can companies with much smaller market share justify providing support, but Nvidia, a company that dominates the GPU market, can’t?

    The popular distros are what counts.

    Debian supports several DEs with only Gnome defaulting to Wayland. Everything else uses X11 by default.

    Some other popular distros that ship with Gnome or KDE still default to X11 too. Pop!_OS, for example. Zorin. SteamOS too, technically. EndeavourOS and Manjaro are similar to Debian, since they support several DEs.

    Either way, none of those are Wayland exclusive and changing to X11 takes exactly 2 clicks on the login screen. Which isn’t necessary for anyone using AMD or Intel, and wouldn’t be necessary for Nvidia users, if Nvidia actually bothered to support their hardware properly. But I digress.

    Worked well enough for me to run into the dozen of other issues that Linux has

    Oh, it’s in no way perfect. Never claimed it is.

    I like most people want a usable environment. Linux doesn’t provide that out of the box.

    This depends both on the distro you use and on what you consider a “usable environment”.

    If you extensively use Office 365 or OneDrive, need Active Directory, have portable storage encrypted with BitLocker, etc., then, sure, you won’t have a good experience with any distro out there. Or, even if you don’t, if you grab a geek-oriented distro (e.g. Arch or Gentoo) or a barebones one (e.g. Debian), you, again, won’t have the best experience.

    A lot of people, however, don’t really do a whole lot on their devices. The most widely used OS in the world, at this point in time, is Android, of all things.

    If all you need to do is use the web and, maybe, edit some documents or pictures now and then, Linux is perfectly capable of that.

    Real-life example: I’ve switched my parents onto Linux. They’re very much not computer savvy, and Gnome, with its minimalistic mobile-device-like UI and very visual app-store-like program manager, is significantly easier for them to grasp. The number of issues they ask me to deal with has dropped by… A lot. Actually, every single issue this year was the printer failing to connect to the Wi-Fi, so I don’t suppose that counts as a technical issue with the computer, does it?

    wacom tablets

    I use Gnome (Wayland) with an AMD GPU. My tablet is plug and play… Unlike on Windows. Go figure.






  • I use Arch + Gnome with VRR patches on my main PC.

    I find it actually easier to use than e.g. Fedora or Ubuntu due to better documentation and way more packages available in the repos… With many, many more being in the AUR!

    By installing all the stuff commonly found on other distros (and which many consider bloat), you’ll get basically the same thing as, well, any other distro. I have all the “bloat” like NetworkManager, Gnome, etc. which is known to work together very well and which tries to be smart and auto-configure a lot of stuff. Bloat it may be, but I am lazy~

    Personally, I think it’s better to stick to upstream distros whenever possible. For example, Nobara, which is being recommended in this thread quite a lot, is maintained by a single person. In reality, it’s not much more than regular Fedora with a couple of tweaks and optimizations. The vast majority of those you could do yourself on the upstream distro and avoid being dependent on that one person. It’s a single point of failure, after all.


  • Do you expect to find a company that sells a calendar-only subscription? “Calendar - 49c/month”?

    I’ve been looking a lot at all kinds of services, and most start their pricing at around 5 USD/month, regardless of how many features they actually provide.

    I’d say your best bet is NextCloud. You can rent a hosted instance, self-host, or use a free instance (there’s a couple around).

    Personally, I’m self-hosting stuff on a VPS. For a whopping 5 USD/month I’m getting things I’d be paying 50 for, if not more, if they were offered as separate products by your average service-providing companies.



  • Not once did I claim that LLMs are sapient, sentient or even have any kind of personality. I didn’t even use the overused term “AI”.

    LLMs, for example, are something like… a calculator. But for text.

    A calculator for pure numbers is a pretty simple device, all the logic of which can be designed by a human directly.

    When we want to create a solver for systems that aren’t as easily defined, we have to resort to other methods. E.g. “machine learning”.

    Basically, instead of designing all the logic entirely by hand, we create a system which can end up in a finite, yet still near-infinite, number of states, each of which defines different behavior. By slowly tuning the model using existing data and checking its performance, we (ideally) end up with a solver for something a human mind can’t even break up into building blocks, due to the sheer complexity of the given system (such as a natural language).

    And like a calculator that can derive that 2 + 3 is 5, despite the fact that the number 5 is never mentioned in the input, or that that particular formula was not part of the suite of tests used to verify that the calculator works correctly, a machine learning model can figure out that “apple slices + batter = apple pie”, assuming it has been tuned (aka trained) right.
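
    To make the “tuning” part concrete, here’s a deliberately tiny sketch, a two-weight linear model rather than an LLM, of what slowly adjusting a model against existing data looks like. The numbers and the 2 + 3 hold-out are made up purely for illustration:

    ```python
    # Toy "calculator" learned from examples: fit y = a*x1 + b*x2 by trial and error,
    # while deliberately never showing it the pair (2, 3).
    import random

    train = [(x1, x2, x1 + x2) for x1 in range(10) for x2 in range(10)
             if (x1, x2) != (2, 3)]

    a, b = random.random(), random.random()   # start from arbitrary "weights"
    lr = 0.001                                # how strongly each example nudges them

    for _ in range(5000):                     # "slowly tuning the model using existing data"
        x1, x2, y = random.choice(train)
        err = (a * x1 + b * x2) - y           # "checking its performance"
        a -= lr * err * x1                    # nudge the weights to shrink the error
        b -= lr * err * x2

    print(round(a * 2 + b * 3))               # ~5, even though 2 + 3 was never in the training data
    ```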


  • Learning is, essentially, “algorithmic copy-paste”. The vast majority of things you know, you’ve learned from other people or other people’s works. What makes you more than a copy-pasting machine is the ability to extrapolate from that acquired knowledge to create new knowledge.

    And currently existing models can often do the same! Sometimes they make pretty stupid mistakes, but they often do, in fact, manage to end up with brand new information derived from old stuff.

    I’ve tortured various LLMs with short stories, questions and riddles, which I’ve written specifically for the task and which I’ve asked the models to explain or rewrite. Surprisingly, they often get things either mostly or absolutely right, despite the fact it’s novel data they’ve never seen before. So, there’s definitely some actual learning going on. Or, at least, something incredibly close to it, to the point it’s nigh impossible to differentiate it from actual learning.


  • It’s illegal if you copy-paste someone’s work verbatim. It’s not illegal to, for example, summarize someone’s work and write a short version of it.

    As long as overfitting doesn’t happen and the machine learning model actually learns general patterns, instead of memorizing training data, it should be perfectly capable of generating data that’s not copied verbatim from humans. Whom, exactly, is a model plagiarizing if it generates a summarized version of some work you give it, particularly if that work is novel and was created or published after the model was trained?
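
    That train-versus-unseen distinction is also roughly how memorization gets caught in practice: a model that merely memorized its training data looks perfect on that data and falls apart on anything new. A toy illustration, with made-up names and numbers:

    ```python
    # A "model" that literally memorized its training pairs, with no general rule behind it.
    train_set = [((2, 3), 5), ((4, 4), 8)]
    held_out  = [((6, 1), 7), ((9, 5), 14)]   # data the model never saw

    memorized = dict(train_set)
    model = lambda x: memorized.get(x, 0)     # unseen inputs get a useless default answer

    def mean_abs_error(model, dataset):
        return sum(abs(model(x) - y) for x, y in dataset) / len(dataset)

    print(mean_abs_error(model, train_set))   # 0.0  -> looks flawless on training data
    print(mean_abs_error(model, held_out))    # 10.5 -> the classic overfitting signature
    ```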