
  • Yeah that’s what we did last time. I implemented a basic framework on top of a very widespread system in our codebase, which would allow a number of requested minor features to be implemented the same way, with minimal boilerplate, leaving the bulk of the work to implementing the actual meat of each request.

    These requests were completely independent and so could be parallelized easily. The “framework” I implemented was also incredibly thin (basically just a helper function and a human instruction of the form “do this for this use case”) on top of a system that is preexisting knowledge. My expectation was to have to bring someone up to speed on certain things and then let them loose on this collection of tasks, maybe having to answer some questions a couple of times a day.
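
    To give an idea of just how thin that layer was, here’s a hypothetical sketch (the names and the registry mechanism are made up for illustration, not our actual code): one helper function, and each minor feature reduces to one registration call plus its actual logic.

    ```cpp
    #include <functional>
    #include <string>
    #include <unordered_map>

    // Hypothetical sketch of the "framework": a single registry plus one helper.
    using Handler = std::function<void(const std::string& payload)>;

    std::unordered_map<std::string, Handler>& featureRegistry() {
        static std::unordered_map<std::string, Handler> registry;
        return registry;
    }

    // The one helper function; the accompanying human instruction was
    // essentially "call this with your use case, put the real work in the handler".
    void registerFeature(const std::string& useCase, Handler handler) {
        featureRegistry()[useCase] = std::move(handler);
    }

    // Each requested minor feature then boils down to something like this,
    // where the lambda body is the only non-boilerplate part.
    void registerCsvExport() {
        registerFeature("csv-export", [](const std::string& payload) {
            // ... the actual meat of the request goes here ...
        });
    }

    int main() {
        registerCsvExport();
        featureRegistry().at("csv-export")("example payload");
    }
    ```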

    Instead, since the assigned colleague is basically just a copilot frontend, I had to spend 80% or more of my days explaining exactly what needed to be done (I would always start with the whys of things, since the whats are derived from them, but this particular colleague seems uninterested in that).

    So I was basically spending my time programming a set of features by proxy, while I was ostensibly working on a different set of features.

    So yeah, splitting work only works if you also have people capable of doing it in the first place. Of course, I couldn’t not help this colleague either; that’s a bad mark on the performance review, you know. Even when a colleague has no intention of learning or being productive in any way (I live in a country with strong employee protections, so almost nobody can be fired for anything concerning actual work performance, and this particular colleague doesn’t hide that they don’t care about doing a good job, except to managers, so they still get pay raises for “improving”).

    Yeah, you can tell I’m unhappy


  • who is actually stopping them from dealing with it?

    Management. Someone in management sets idiotic deadlines, then someone tells you “do X”, you estimate and come up with “it will take T amount of time”, and production simply tells you “that’s too long, do it faster”.

    they don’t care about the details or maintenance

    They don’t, they care about time. If there are 6 weeks to implement a feature that requires reworking half the product, they don’t care to know half the product needs to be reworked. They only care to hear you say that you’ll get it done in 6 weeks. And if you say that’s impossible, they tell you to do it anyway

    you have to include the cost of managing technical debt

    I do, and when I get asked why my time estimates are so much longer than my colleagues’, I say that I include the known costs required to develop the feature, as well as a buffer for known unknowns and unknown unknowns which, historically, has been necessary 100% of the time and never included. Leaving it out has caused us development difficulties, cost and schedule overruns, delays and quality issues, internal unhappiness, sometimes mandatory overtime, and usually a crappy product that the customers are unhappy with. That’s me doing a good job, right? Except I got told to ignore all of that and only include the minimum time to get all of the dozens of tiny pieces working. We went over time, over cost, and each tiny piece “works” when taken in isolation but doesn’t really mix with everything else, because there was no integration time, so each feature kinda just exists there on its own.

    Then we do retrospectives in which we highlight all the process mistakes we ran into, only to make them all again next time. And I get blamed come performance-review time because I was stressed and wasn’t at the top of my game this last year, due to being chronically overburdened, overworked, and underpaid.


  • Thank you for the explanation, now I understand the context of the original message. It’s definitely an entirely different environment, especially the kind of software that runs on a bunch of servers.

    I built business software before becoming a game dev, though still the kind that runs on-device rather than on a server. Even then, I always strove to write the most correct and performant code. Of course, I still wrote bugs, like the time a release broke the app for a subset of users because one of the database migrations didn’t apply to some real-world use case. Unfortunately, that one was due to us not having access to real-world databases or good enough surrogates, because of customer policy (we were writing a unification software of sorts; up until this project, every customer could give different meanings to each database column, as they were just freeform text fields. Some customers even changed the schema). The migrations ran perfectly on every test database we did have access to, but even then I did the obvious: rolled the release back, added another test database that replicated the failing real-world use case, fixed the failing migrations, and re-released.

    So yeah, from your post it sounds like the company is either bad at hiring, bad at teaching new hires, or simply has a culture of “lol who cares, someone else will fix it”. You should probably talk to management. In the majority of cases it won’t change anything, but it’s the only way change can actually happen.

    Try to schedule a one-on-one session with your manager every 2 to 3 weeks to assess which systematic errors in the company are causing issues. 30-minute sessions, just to make them aware of which parts of the company need fixing.


  • Sorry, this comment is causing me mental whiplash, so I am either ignorant, subject to non-standard circumstances, or both.

    My personal experience is that developers (the decent ones at least) know hardware better than IT people. But maybe we mean different things by “hardware”?

    You see, I work as a game dev so a good chunk of the technical part of my job is thinking about things like memory layout, cache locality, memory access patterns, branch predictor behavior, cache lines, false sharing, and so on and so forth. I know very little about hardware, and yet all of the above are things I need to keep in mind and consider and know to at least some usable extent to do my job.
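
    As a made-up illustration of the kind of thing I mean (not a benchmark, just the mechanism): two threads bumping counters that happen to share a cache line will invalidate each other’s line on every write, and simply padding the counters apart avoids it.

    ```cpp
    #include <atomic>
    #include <thread>

    // False sharing: both counters almost certainly sit on one cache line,
    // so each thread's writes invalidate the other core's copy of that line.
    struct Packed {
        std::atomic<long> a{0};
        std::atomic<long> b{0};
    };

    // Aligning each counter to a typical 64-byte cache line keeps the two
    // threads from stepping on each other.
    struct alignas(64) Padded {
        std::atomic<long> value{0};
    };

    int main() {
        Padded counters[2];
        std::thread t1([&] { for (int i = 0; i < 1'000'000; ++i) counters[0].value.fetch_add(1); });
        std::thread t2([&] { for (int i = 0; i < 1'000'000; ++i) counters[1].value.fetch_add(1); });
        t1.join();
        t2.join();
    }
    ```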

    IT, meanwhile, is mostly concerned with keeping the idiots from shooting the company in the foot, by rolling out software that lets them diagnose, reset, and install or uninstall things on entire fleets of computers at once. It also just so happens that this software is often buggy and pegs 99% of your CPU in spin loops (they had to roll that one back, of course), or the antivirus rules don’t apply on your system for whatever reason, so the antivirus scans all the object files generated by the compiler even though they’re generated in a whitelisted directory, turning a 10-minute rebuild into an hour-long one.

    They are also the ones that force me to change my (already unique and internal) password every few months for “security”.

    So yeah, when you say that developers often have no idea how the hardware works, the chief questions that come to mind are

    1. What kinda dev doesn’t know how hardware works to at least a usable extent?
    2. What kinda hardware are we talking about?
    3. What kinda hardware would an IT person need to know about? Network gear?

  • Seems like a bad idea unless she’s very familiar with the projects she would help document. Documentation is notoriously not something that can be produced by a newcomer, because it requires experience that a newcomer doesn’t have.

    I guess the best way for a newcomer to help would be to try to use the product and ask every little question they have, making sure they receive correct answers and context; by the end of that process, they’d have gained enough knowledge to contribute at least one piece of documentation. But the bulk of the knowledge would still come from people who already know the product, so in terms of efficiency it’s way worse than having the authors write it.

    Of course, if the authors are unwilling or unable to write good (or any, even) documentation, having someone that has the will to gather the scattered information into a central place and work on it so it’s digestible and high quality is still unbelievably useful.

    But yeah, my point being that documentation is far trickier than it seems as far as open source contributions go.


  • Ah, no idea about live streams as I don’t watch those. I would imagine they use a different format there, since two ads every 2 to 5 minutes wouldn’t work for a live stream.

    Now that I think about it, it may be because I don’t have an account, so maybe Google has less data to harvest and sell, and so I get more ads. Presumably they think this will make me think “I should make an account” or “I should buy YouTube Premium”. Instead, I just think I need to avoid that place as much as possible.


  • It is genuinely infuriating, to the point that I simply uninstalled YouTube from my iPhone and switched to web-based alternatives. And yes, no need to lecture me on Apple, I only have an iPhone for reasons. I’d rather have a Linux phone instead.

    2 ads play every time you start a video. Maybe you’re watching a playlist and realize 5 seconds into the video that you already watched this one, so you click the button to go to the next video.

    Two more ads, no matter that you got two ads literally 5 seconds ago.

    Looking for a specific video that you don’t quite remember the title of? That’s right, two ads every time you go “hmm no, it wasn’t this one”.

    Two more ads are also guaranteed to play within the first two minutes, usually just after the 60-second mark. So that’s a minimum of 4 ads within the first minute or two of any video you watch. After that, the frequency varies, but in my experience it’s never less than two ads every 5 minutes, and they trigger at random points.

    So at least every 5 minutes you get 10-20 seconds of advertisements, often in the middle of a sentence. Wanna go back 10 seconds to recover the context lost to the jarring interruption? No problem, have 2 more ads. Sometimes as many as 3 times in a row.

    The worst offender I had was a 30-ish minute video where, and I swear this is neither exaggeration nor hyperbole, two ads would play every two minutes for the whole video (it’s also the video where I got two ads playing when I scrolled back 10 seconds, 3 times in a row). So overall, on that 30-minute video I must have gotten around 45 to 55 ads: 2 at the start, 2 every 2 minutes for roughly 28-30 more, and 2 almost every time I scrolled back 10 seconds.


  • What “it” is configurable? If the code is indented with 4 spaces, it is indented with 4 spaces. You can configure your editor to indent with 1 space if you want, but then your code is not going to respect the 4 spaces of indentation used by the rest of the code.

    I repeat: the only accessible indentation option is using tabs. This is not an opinion, because every other option forces extra painful steps on those with vision issues (including, but not limited to, having to reformat the source files to tabs so they can work on them, and then reformat them back to spaces in order to commit them).
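
    To illustrate the point (a contrived function, but the mechanism is exactly this): with tabs, the bytes in the file never change while every reader chooses their own display width.

    ```cpp
    #include <vector>

    // The function body below is indented with tab characters. A low-vision
    // developer using a very large font can render each tab as 1-2 columns,
    // a teammate can render it as 4 or 8 -- same bytes on disk, nothing to
    // reformat before or after committing.
    int sum(const std::vector<int>& values) {
    	int total = 0;
    	for (int v : values) {
    		total += v;
    	}
    	return total;
    }

    int main() { return sum({1, 2, 3}) == 6 ? 0 : 1; }
    ```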


  • Looks to me like the ruling is saying that the output of a model trained on copyrighted data is not copyrighted in itself.

    By that logic, if I train a model on Marvel movies and get something that is exactly the same as an existing movie, that output is not copyrighted.

    It’s a stretch, for sure, and the judge did say that he didn’t consider the output similar enough to the copyrighted source material, but it’s unclear where “similar enough” ends.

    What if my model is trained on Star Wars and outputs a story that is novel, with different characters and different voices? Is that not copyrighted then, despite the model being trained exclusively on copyrighted data?