• 0 Posts
  • 31 Comments
Joined 1 year ago
Cake day: August 4th, 2023

  • I can understand that Valve doesn’t want to give false impressions that a game runs perfectly when there are imperfections as mentioned

    Idk, I disagree with this. It means games are being labeled “not verified” over things that don’t really hamper what players actually care about - the keyboard popping up for naming your character, or seeing “A” in a green circle, isn’t going to make people go “oh no, this doesn’t work well on my Steam Deck, I’m not playing it”. Does it look unprofessional? Sure. But that’s not what people care about when looking at a compatibility rating. They just want to know if the game is going to run well.

    These systems are all about trust and evaluating the right metrics. Having the right button icons matters to Valve, but not to the player. Once players play games that aren’t verified and run fine, and games that are verified but still have performance hitches in places, the rating system loses its credibility and becomes meaningless.

    On top of this, developers are already shunning verification and just not bothering. Some of the things Valve asks for don’t directly affect the playability of their game. It’s an extra hoop for the developer to jump through, and if people don’t trust the badge, there’s no point in chasing it. Valve is undermining its own system from both sides by doing this.

    There are already people in this thread touting ProtonDB as a better evaluation. That’s exactly what will keep happening, and it will keep undermining the rating system, until Valve aligns verification with what users actually care about.


  • I’d actually bet it’s something different…

    It’s less that you game on a Steam Deck because it’s portable, and more that because it’s portable, you can game. There are people here and there who are like “yeah, I have a Steam Deck so I use that instead”, but the sentiment I see more often is “I wouldn’t be able to game at all if it wasn’t portable - I can’t sit down for that long, I only have time on the train, I need to be near my kids”, etc.

    And this changes the dynamic. It’s less that these people have “desktop gaming” and “portable gaming” and are choosing to play the AAA games while portable. They only have portable gaming. And they choose to play the same good games everyone else is playing. The only gaming they do is on their deck. And they’re not going to be like “oh, why play a good game like BG3 if I can play a shitty portable game like xyz”.

    These are just people’s primary gaming devices now. And if they can, they will choose to play the same good games everyone else is choosing to play. It doesn’t matter if it only runs OK - playing a good game with OK graphics is still better than playing a shitty game.




  • As an introvert, as much as I feel weird around people, I feel even weirder video chatting with people I’ve never met in person. In that situation, I have no idea how to read people, and the expectations are way harder to meet. That makes meetings even worse until I’ve actually met them.

    While I agree that forced daily in-person work is insane, the OP is complaining about meeting people in person once after many years, which feels equally ridiculous. IMO, even for widely dispersed teams, meeting a few times a year seems ideal.



  • I’m actually shocked to find how many people agree with the OP’s sentiment, but maybe that’s down to the demographics of who’s using a FOSS Reddit alternative. I’m not saying everyone is wrong or has something wrong with them, but I entirely agree with the people who find this valuable, so maybe I can answer the OP’s question here.

    I’ve been working remotely since long before the pandemic, for multiple companies and in different environments. I am extremely introverted and arguably antisocial. I’d rather hang out with many of my friends online than in person. But that doesn’t mean I think there’s no advantage to meeting at all. To be honest, when I first started remote work, I thought the in-person thing was total bullshit. After a few meetings, my opinion drastically changed.

    I’ve pushed (with other employees, of course) to get remote employees flown in at least a few times a year at multiple companies. There are vastly different social dynamics in person than over video. Honestly, I don’t understand how people feel otherwise, especially if they’ve experienced it. I’ve worked with many remote employees over the years and asked about this, and most people have agreed with me. Many of these people are also introverted.

    I think one of the big things here is people harping on the “face” thing. Humans communicate in large part through body language - it’s not just faces. There’s also a lot of communication in microexpressions that compressed, badly lit video doesn’t capture. So much of communication just isn’t there on video.

    Secondly, in my experience, online meetings are extremely transactional. You meet at the scheduled time, you talk about the thing, then you close the meeting and move on. In person, people slowly mosey over to meetings, and after the meeting ends, they tend to hang around a bit and chat. When you’re working in an office, you grab lunch with people, or bump into them by the kitchen. There’s a TON more socializing happening in person, where you actually run into other people and talk to them as people, not just cogs in the machine that get your work done.

    I find in person interactions drastically change my relationships with people. Some people come off entirely different online and it’s not until meeting them in person that I really feel like I know them. And then I understand their issues and blockers or miscommunications better and feel more understanding of their experiences.

    Maybe things are different if you work jobs with fewer interdependencies or more solo work. I’ve always worked jobs that take a lot of cooperation between different people in different roles. And those relationships are just way more functional with people I’ve met and have a real relationship with. That comes from things that just don’t happen online.

    I’m honestly really curious how anyone could feel differently. The other comments just seem mad about being required to attend, and claim the same stuff happens online - but it just doesn’t. I do wonder if maybe it has to do with being younger and entering the workplace more online. But I’ve worked with hundreds of remote employees and never heard a single one say the in-person stuff was useless. And I’ve heard many say exactly the opposite.




  • No, this does actually sound like a solution. But it’s a solution that should be scattered throughout the process and checked at multiple steps along the way. The fact that this wasn’t there to begin with is a bigger problem than the “client library failure”, as it shows Wyze’s security practices are fucking garbage. And adding “one layer” is not enough. There should be several.

    To give a bit more context - which is me guessing, reading between the lines of their vague description plus first-hand experience with these types of systems…

    Essentially, your devices all have unique IDs, and your account has an account/user ID. They’re essentially “random numbers” that are unique within each set, but there appear to be devices that have the same ID as some user’s user ID.

    When the app wants to query for video feeds, it asks the server “hey, get me the feed for devices A, B, and C. And my user ID is X”. The server receives this and should check whether that user has access to those devices. But that server is just the first, external-facing step. It then likely delegates the request through multiple internal services, which go look up the feeds for those device IDs and return them.

    The problem is that somewhere in there they had an “oopsie” and passed along “get me devices X, X, X for user ID X”. And for whatever reason, all the remaining steps were like “yup, device X for user X, here you go”. At MULTIPLE points along that chain, they should be rechecking this and saying “woah, user X only has access to devices A, B, and C, not X. Access denied.”

    The fact that they checked this ZERO times, and are only now adding “a layer” of verification, is a huge issue imo. This should never have been running in production without multiple steps in the chain validating it. Otherwise, they’re prone to both bugs and hacks.
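
    To make that concrete, here’s a rough sketch of what defense in depth could look like - hypothetical TypeScript with made-up names and stores, not Wyze’s actual code:

    ```typescript
    // Every layer re-checks "does user X own device D?" instead of trusting
    // whatever an upstream service (or cache) handed it.

    type UserId = string;
    type DeviceId = string;

    // Assumed ownership store; in a real system this would be a database lookup.
    const ownership = new Map<UserId, Set<DeviceId>>([
      ["user-X", new Set(["device-A", "device-B", "device-C"])],
    ]);

    function assertOwnership(userId: UserId, deviceId: DeviceId): void {
      if (!ownership.get(userId)?.has(deviceId)) {
        throw new Error(`access denied: ${userId} does not own ${deviceId}`);
      }
    }

    // Layer 1: the external-facing gateway checks before delegating.
    function gatewayGetFeed(userId: UserId, deviceId: DeviceId): string {
      assertOwnership(userId, deviceId);
      return feedService(userId, deviceId);
    }

    // Layer 2: the internal feed service re-checks instead of trusting the gateway.
    function feedService(userId: UserId, deviceId: DeviceId): string {
      assertOwnership(userId, deviceId);
      return storageLayer(userId, deviceId);
    }

    // Layer 3: the storage layer checks one last time before handing over bytes.
    function storageLayer(userId: UserId, deviceId: DeviceId): string {
      assertOwnership(userId, deviceId);
      return `feed-bytes-for-${deviceId}`;
    }

    gatewayGetFeed("user-X", "device-A"); // OK
    // gatewayGetFeed("user-X", "device-X"); // throws at layer 1, and would throw
    //                                       // again at layers 2 and 3 regardless
    ```

    A bug that swaps a user ID into the device slot then fails loudly at three separate points instead of silently serving someone else’s camera.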

    But no, they clearly weren’t verified to view those events. Their description implies that somewhere in the chain the request got scrambled, and there were no further verifications after that point. Which is a massive issue.


  • It doesn’t even need to go that far. If some cache mixes up user IDs and device IDs, then when those mixed-up IDs go to request a video feed, the serving authority should be like “woah, YOU don’t have access to that device/user”. Even when you fucking mix these things up, there should be multiple places in the chain where this gets checked and denied. This is a systemic/architectural issue, not “one little oopsie in a library”. That oopsie simply exposed the problem.
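
    Even the last hop alone should catch it - another hypothetical sketch, with made-up names:

    ```typescript
    // The serving authority resolves the device's real owner itself and denies
    // anything that doesn't match, no matter what upstream claimed.

    const deviceOwners = new Map<string, string>([
      ["device-A", "user-X"], // assumed ownership record
    ]);

    function serveFeed(requestingUser: string, deviceId: string): string {
      if (deviceOwners.get(deviceId) !== requestingUser) {
        throw new Error(`access denied: ${requestingUser} may not view ${deviceId}`);
      }
      return `feed for ${deviceId}`;
    }

    serveFeed("user-X", "device-A"); // OK
    // serveFeed("user-X", "user-Y"); // a swapped-in user ID is simply not a known
    //                                // device, so the request gets denied
    ```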

    I don’t care if I was affected or how widespread this is. This just shows Wyze can’t be trusted with anything remotely “private”. This is a massive security failing.








  • So, I gather that what happened was iPhones and changes to coding languages (HTML5) which didn’t require an extra on the system (a plug-in) to do its thing.

    … Sort of. That's a bit of an oversimplification and iPhone-centric, but generally the right idea.

    I'd slightly shift this and say it's more that Flash and Java applets had many known problems and were janky solutions to the limits of the HTML of the day. Browsers kept supporting them because they were needed for certain tasks beyond games that were actually important. Games were just a secondary thing allowed to exist because the tech was there for other problems.

    At the time, more "serious" games were mostly local installs outside your browser, and browser games were more "casual", aimed at the less technically inclined general audience. The main exceptions were RuneScape and a couple of others like Wizard101.

    But then smartphones started becoming more popular, and they just could not run Flash/Java effectively. Both were inefficient, and smartphones were so far behind in performance that it just didn't work well. In the early days, many Android phones would run bits of Flash/Java, sometimes requiring custom browsers, but it just wasn't very performant.

    Then HTML5 came along, solved most of the gaps in existing HTML tech, and the need for Flash and Java greatly decreased. Because of the performance problems and security vulnerabilities, the industry as a whole basically gave up on them. There was no need beyond supporting games, since the functional shortcomings were now covered. HTML5 did somewhat support the same game tech, but getting back there would take massive rewrites, and there was basically no tooling: Adobe had spent over a decade building Flash tools, and people were being dumped onto lower-level tech with zero years of tooling development behind it. Then came WebGL and some other tech… but nothing really got a good grip on the market.
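
    For a sense of what "plugin-free" meant in practice, here's a minimal, hypothetical sketch of an HTML5 canvas animation loop - the kind of thing people previously reached for Flash to do:

    ```typescript
    // Minimal HTML5 canvas animation - no Flash, no Java applet, no plugin.
    // Assumes a page with <canvas id="game" width="320" height="240"></canvas>.

    const canvas = document.getElementById("game") as HTMLCanvasElement;
    const ctx = canvas.getContext("2d")!;

    let x = 0;

    function frame(): void {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.fillStyle = "tomato";
      ctx.fillRect(x, 100, 40, 40); // a square sliding across the screen
      x = (x + 2) % canvas.width;
      requestAnimationFrame(frame); // browser-native frame timing
    }

    requestAnimationFrame(frame);
    ```

    The raw capability was there; what was missing for years was anything like Adobe's tooling on top of it.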

    Unity and some other projects allowed easier compilation to HTML5 and WebGL over the years, so this was definitely still possible, but interest was plummeting at the same time, so there wasn't much point.

    Much of the popularity of web-based games back in the day was that you could just tell someone a URL and they could go play the game on their home computer. Their allure was their accessibility, not the tech. High-tech gaming had already been won by standalone desktop games, but those were harder to find, and required going to a store, making a purchase, bringing a CD home, installing said game, having the hardware to run it, etc.

    But by the time Flash and Java died, everyone carried a smartphone. They all had app stores, so people could just search the store, install the game, and have it easily accessible on their device, running at native performance. Console gaming had become commonplace. PC gaming was fairly common, with pre-built gaming PCs being a thing. Steam existed now, so you didn't have to go to a store or understand install processes. Every tech competing with web games was way more accessible. Smartphone tech better covered "gaming for the general populace".

    What would be the point of a web game at that point? Fewer people have desktops, so your market is smaller. If you're aiming for people's smartphones, building natively for the two platforms is higher performance and easier to deal with. Console gaming is more common. PC gaming is a stable market. On top of that, there's way less money in web-based gaming. Stores like Steam and the console stores come with the expectation of spending money and an easy way to do so. Smartphones have native IAP support to make it easy to spend money on microtransactions. The web has… entering your payment information into whatever payment processor that website had to integrate, which feels less safe to the user and requires more work from the developer than the alternatives on console/PC/mobile.

    There's just no market for web-based gaming anymore when people have so many easier-to-access options - what's the purpose of building a web game at that point?


  • A bunch of these are also utter bullshit. “Purchase history” sounds like they can go through and read your Amazon purchases or something - they can’t. Diagnostic data sounds scary, but I’d rather use an app that collects diagnostic data, because the alternative is a buggy mess. Them tracking what you do in their app helps way more than it endangers. Stuff like device IDs is also likely only pulled for that reason, or to confirm your purchases with them, etc.