I doubt the school administrators who would be buying this thing or the people trying to make money off it have really thought that far ahead or care whether or not it does that, but it would definitely be one of its main effects.
By attaching to your inflatable butt plug (you remembered to buy one, no?) and reading changes in pressure, this device detects the pelvic floor contractions that happen just before and during orgasm in most people. By summing the magnitude and frequency of these contractions, it can detect and prevent orgasm.
$497.00
Sold out
wow
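For what it’s worth, the detection step that listing describes boils down to something like this rough Python sketch. The sensor interface, thresholds, weighting, and window size below are all invented for illustration; the listing specifies none of them:

```python
# Hypothetical sketch of the described logic: watch for pressure spikes,
# sum their size and rate over a recent window, and flag an impending
# orgasm when the combined score crosses a threshold.
from collections import deque

BASELINE = 1.0          # assumed resting pressure (arbitrary units)
SPIKE_THRESHOLD = 0.2   # pressure rise counted as a contraction (made up)
WINDOW = 50             # number of recent samples to score over (made up)
SCORE_THRESHOLD = 2.0   # trigger level for the combined score (made up)

recent = deque(maxlen=WINDOW)

def on_pressure_sample(pressure: float) -> bool:
    """Return True when summed contraction activity suggests imminent orgasm."""
    spike = max(0.0, pressure - BASELINE)
    recent.append(spike if spike > SPIKE_THRESHOLD else 0.0)
    magnitude = sum(recent)                      # total contraction strength
    frequency = sum(1 for s in recent if s > 0)  # contraction count in window
    return magnitude + 0.1 * frequency > SCORE_THRESHOLD  # invented weighting
```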
I think it’s less that they have found an “excuse” to raise prices (companies always want more money, that’s what companies do) and more that they have acquired the leverage to do so. Fast food restaurants have accumulated brand recognition and customers who are psychologically attached to their products. People are less used to cooking their own food and have less time in which to do it. We are poorer in relative wealth terms, companies are richer and more vertically integrated, and we are in a worse negotiating position.
I wonder if part of the reason for supporting this is that they like the secondary effect that all this information is now also available to governments
Can’t track mouse movements on mobile though
Unless it’s an emergency, or you’re trying to contact a company/government entity that will stonewall you with template emails otherwise, I think this is fine. If someone just called me on the phone I’d hate it, and I don’t want to inflict that on others.
Obligatory: LLMs see tokens, not letters.
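You can see this for yourself with OpenAI’s tiktoken library; with the cl100k_base encoding, “strawberry” comes through as a few multi-character chunks rather than ten letters (other encodings split differently):

```python
# LLMs consume sequences of token IDs, not characters, which is why
# letter-level questions trip them up: the letters aren't in the input.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era tokenizer
tokens = enc.encode("strawberry")
print(tokens)                             # a short list of integer IDs
print([enc.decode([t]) for t in tokens])  # chunks like 'str', 'aw', 'berry'
```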
The profit they get from the sale of the television should be enough that they don’t have to make it shit for slightly more profit. Why do people even buy these?
But televisions cost hundreds of dollars at least
“each new connected TV platform user generates around $5 per quarter in data and advertising revenue.”
Sounds like a pathetic amount of money for betraying your customers with a shitty, ad-infested smart TV
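For scale: $5 a quarter is $20 a year, so assuming a TV lasts 5–7 years (an assumption, not a figure from the article), that’s roughly $100–140 in data/ad revenue per set over its life, still small next to a sale price in the hundreds of dollars.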
So this is a request made in the context of emails sent out gifting streamers promotional copies of the game, not a condition of paid promotional streams, am I getting that right?
How different it is to self-motivate versus having the environment in front of you chain together to determine what you do. I feel like the way people are trained to live by school and financial obligations is very limited, and probably few people are really in control of their own lives in any meaningful way.
tbf the article only assumes he told them no because of how implausible the task seems; the actual details of what, if anything, was discussed and what happened are unknown.
Overall, it isn’t yet. Reddit has more developed niche subs, more in-depth posts and comments, and enough content even if you filter out the low effort stuff. Where Lemmy is better is that it is decentralized and not run like a corporate dictatorship with zero respect for its users the way Reddit is.
Implying that it was worse and has gotten better, or will get better to the point where data hoarding is unnecessary. I guess it would be nice if things turned out that well.
A good point, but from the article it sounds like the demographic for which this would be a problem is 300 lbs and up. The proportion of people meeting the criteria for being overweight is in the same ballpark, but I wonder if the distribution is skewed, with far fewer people overweight enough to exceed the safety margin of a standard bicycle.
Privacy means personal agency: freedom from people, whether individuals, companies, or the government, controlling you with direct or implied threats or more subtle manipulation, which they can do because they have your dox and because information is power.
A lack of privacy adds fuel to the polycrisis because if we can’t act in relative secrecy that basically means we can’t act freely at all, and nothing can challenge whoever runs the panopticon.
The output for a given input cannot be independently calculated as far as I know, particularly when random seeds are part of the input.
The system gives a probability distribution for the next word based on the prompt, which will always be the same for a given input. That meets the definition of deterministic. You might choose to add non-deterministic RNG to the input or output, but that would be a choice, not something inherent to how LLMs work. Random ‘seeds’ are normally used as part of deterministically repeatable RNG. I’m not sure what you mean by “independently” calculated: you can calculate the output if you have the model weights, and you likely can’t if you don’t, but that doesn’t affect whether it is deterministic.
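To make that concrete, here’s a toy sketch in plain Python (the “model” and its distribution are invented stand-ins, not a real LLM): the forward pass is a pure function, and even the sampling step repeats exactly once the seed is fixed.

```python
# Toy model of the point above: the forward pass is a pure function
# (same prompt -> same next-word distribution), and any sampling on top
# is only as random as the seed you choose to feed it.
import random

def next_word_distribution(prompt: str) -> dict[str, float]:
    # Stand-in for an LLM forward pass; values invented for illustration.
    return {"the": 0.5, "a": 0.3, "cat": 0.2}

def generate(prompt: str, seed: int) -> str:
    dist = next_word_distribution(prompt)  # deterministic given the input
    rng = random.Random(seed)              # seeded RNG: repeatable by design
    return rng.choices(list(dist), weights=list(dist.values()))[0]

# Same input and same seed -> same output, every run.
assert generate("once upon a", seed=42) == generate("once upon a", seed=42)
```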
The “so what” is that trying to prevent certain outputs based on moral judgements isn’t possible. It wouldn’t really be possible even if you could get in there with code and change things, unless you could write code for morality, and it’s doubly impossible given that you can’t.
The impossibility of defining morality in precise terms, or even coming to an agreement on what correct moral judgement is, obviously doesn’t preclude all potentially useful efforts to apply it. For instance, since there is a general consensus that people being electrocuted is bad, electrical cables are normally made with their conductive parts encased in non-conductive material, a practice that succeeds in reducing how often people get electrocuted. Why would that sort of thing be uniquely impossible for LLMs? Just because they are logic-processing systems that are more grown than engineered? Because they are sort of anthropomorphic but aren’t really people? The reasoning doesn’t follow. What people are complaining about here is that AI companies are not making these efforts a priority, and it’s a valid complaint, because these systems are not going to be equally dangerous no matter how they are made or used.
They are deterministic though, in the literal sense; it’s their behavior that is undefined. And yes, an LLM is not a person, and it’s not quite accurate to talk about them knowing or understanding things. So what, though? Why would that be any sort of evidence that research efforts into AI safety are futile? This is at least as much an engineering problem as a philosophy problem.
Even while it was happening, much of the response was to try to pretend it wasn’t happening.