archomrade [he/him]

  • 5 Posts
  • 127 Comments
Joined 1 year ago
Cake day: June 20th, 2023

  • Browsing their coms can be a pretty unique experience, especially if you go in with a preformed idea of what their communities are like. There’s a huge spread of interests and experiences, and sometimes you can be browsing a niche community and forget that these were the people posting BPB on lemmy.world threads a year ago.

    Knowing the academic writings and history they’re referencing helps a lot with understanding where they are coming from, even if you may not agree with all of it.


  • This is the most reasonable response.

    A lot of people here have long since made up their minds about hexbear, based both on repeated meta posting on the topic and possibly on a bad experience or two with them on a topic they assumed was uncontested but that turns out to be a landmine for communists of a particular bent.

    I’ve personally never had a bad experience with hexbears, possibly because I’m more empathetic to their perspective, but more likely because I know when it’s time to disengage. There are users on lemmy who feel strongly about a certain topic that’s abrasive to hexbear users and who dig in their heels when jeered at (or maybe feel a personal responsibility to stand them down); those are usually the users here with the most complaints, because the standard reaction from hexbear (both the users and the mods) is irreverence.

    Unlike a lot of liberals coming from reddit, communists often don’t have delusions about the neutrality of moderation, so they’ll ban you on a whim if they think you’re there to stir shit. They use the ban hammer liberally, even with users on their own instance. That’s often the biggest complaint with both hexbear and lemmy.ml.




  • It depends on the attack vector. Typically you’re right, but malicious .lnk files are often paired with other malicious methods to infect machines. Sometimes they’re configured as a worm that copies and spreads when a flash drive is connected, sometimes they’re configured to download a remote payload when another script or program is started. The problem is that it’s a type of file that’s often overlooked because it seems innocent.

    It isn’t necessarily the case that the Trojan needs to be interacted with by the user in order to execute the malicious code. Just having the file on your machine opens the door for all kinds of attacks (especially if you’re using a headless setup: you wouldn’t necessarily know you have the .lnk file in the system unless you’re manually unpacking your downloads yourself). All it needs is for another piece of infected code to run and look for that file, and it can open the door for more traditional malicious code.


    Edit: just as a for-instance - if I were a black hat and wanted to spread some malicious code, I could include this .lnk file in a torrent (innocuous enough to slip by most people/unscrupulous pirates unnoticed), and then place a line of code in a jellyfin plugin or script that looks for that file and executes it if it’s found. Because the attack isn’t buried in the plugin or script itself (most people wouldn’t think much of a line of code that simply points to a temp file already on your system), it could theoretically go unnoticed long enough to catch a few hundred or thousand machines.
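    For anyone on a headless setup who isn’t unpacking downloads by hand, a minimal defensive sketch (the `~/Downloads` path is just an assumption; point it at wherever your torrents land) that walks a directory tree and reports any stray Windows shortcut files:

```python
import os

def find_lnk_files(root):
    """Walk a directory tree and collect any Windows shortcut (.lnk) files.

    On a headless Linux box these files serve no purpose, so any hit
    is worth a closer look before some other process can act on it.
    """
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".lnk"):
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    for path in find_lnk_files(os.path.expanduser("~/Downloads")):
        print("suspicious shortcut:", path)
```

    Running something like this on a cron schedule won’t stop a determined attacker, but it removes the “overlooked because it seems innocent” advantage the file type relies on.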





  • Lots of good suggestions here

    I’m a bit surprised by your budget. For something just running Plex and Nextcloud, you shouldn’t need a $6k or even $3k system. I run my server on found parts adding up to just $600-$700, including (used) SAS drives. It runs probably a dozen docker containers, a DNS server, and Home Assistant. I don’t even remember what CPU I have because it was such a small consideration when I was finding parts.

    I’d recommend keeping your Synology as a simple NAS (maybe Nextcloud too, depending on how you’re using it) and then getting a second box with whatever you need for Plex. Unless you’re transcoding multiple 4K videos at once, your CPU/GPU really don’t need much power. I don’t even have a dedicated GPU in mine, though that means I’m basically unable to do live 4K transcodes (which is fine for me).


  • I used to think the same thing, but I did an effort post about this about a year ago (here’s the link)

    The article you linked to says something similar to my own understanding: basically, DRM circumvention for personal use is officially not allowed under the DMCA and could absolutely be used against you in court, though the likelihood is low. The exceptions the author mentions are pretty nebulous, and the Library of Congress actually addresses the most common cases in its discussions and publications and affirms that they are not allowed.

    I don’t personally agree with their interpretation, but I think more people ought to know that it’s officially not legal to circumvent DRM for personal use.




  • Right now, the model in most communities is banning people with unpopular political opinions or who are uncivil. Anyone else can come in and do whatever they like, even if a big majority of the community has decided they’re doing more harm than good.

    You don’t need a social credit tracking system to auto-ban users if there’s a big majority of the community that recognizes the user as problematic: you could manually ban them, or use a ban voting system, or use the bot to flag users that are potentially problematic to assist on manual-ban determinations, or hand out automated warnings… Especially if you’re only looking at 1-2% of problematic users, is that really so many that you can’t review them independently?
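    To make the flag-for-review idea concrete, here’s a minimal sketch (all names and thresholds here are hypothetical, not anything the actual bot uses) of a bot that only surfaces users for a human moderator instead of auto-banning:

```python
from dataclasses import dataclass

@dataclass
class UserStats:
    name: str
    comments: int
    downvoted_comments: int  # comments that ended net-negative

def flag_for_review(users, ratio_threshold=0.5, min_comments=20):
    """Return names a human moderator should look at.

    Nobody is banned automatically: crossing the threshold only puts
    a user on a review queue, so disagreement alone can't remove anyone.
    """
    flagged = []
    for u in users:
        if u.comments >= min_comments:
            if u.downvoted_comments / u.comments >= ratio_threshold:
                flagged.append(u.name)
    return flagged
```

    With a 1-2% flag rate, the queue this produces stays small enough that the final call can stay with a person.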

    Users behave differently in different communities… Preemptively banning someone for activity in another community is already problematic because it assumes they’d behave the same way in this one, but now it’s for activity that is ill-defined and aggregated over many hundreds or thousands of comments. There’s a reason each community has its rules clearly spelled out in the sidebar: they each have different expectations, and users need those expectations spelled out if they’re to have any chance of following them.

    I’m sure your ranking system is genius and perfectly tuned to the type of user you find the most problematic - your data analysis genius is noted. The problem with automated ranking systems isn’t that they’re bad at what they claim to be doing, it’s that they’re undemocratic and dehumanizing and provide little recourse for error, and when applied at large scales those problems become amplified and systemic.

    “You seem to be convinced ahead of time that this system is going to censor opposing views, ignoring everything I’ve done to address the concern and indicate that it is a valid concern.”

    That isn’t my concern with your implementation; it’s that it limits the ability to defend opposing views when they occur. Consensus views don’t need to be defended against aggressive opposition, because they’re already presumed to be true; a dissenting view will nearly always be met with hostile opposition (especially on a charged political topic), and by penalizing defenses of those positions you allow consensus views to remain unopposed. I don’t particularly care to defend my own record, but since you provided them, it’s worth pointing out that all of the penalized examples you listed from my user were in response to hostile opposition and character accusations. The positively ranked comments were within the consensus view (like you said), so of course they rank positively. I’m also tickled that one of them was a comment critiquing exactly the kind of arbitrary moderation policy you’re defending now.

    “If you see it censoring any opposing views, please let me know, because I don’t want it to do that either.”

    Even if I wasn’t on the ban list and could see it I wouldn’t have any interest in critiquing its ban choices because that isn’t the problem I have with it.




    “The problem is that somehow you wind up in long heated arguments with ‘centrists’ which wander away from the topic and get personal”

    I’m not surprised I was identified by the bot, but it’s worth pointing out that ending up in heated arguments happens because people disagree. Those things are related. If someone is getting into lots of lengthy disagreements that are largely positive but devolve into the unwanted behavior, doesn’t that at least give legitimacy to the concern that dissenting opinions are being penalized simply because they attract a lot of impassioned disagreement? Even if both participants in that disagreement are penalized, that just means any disagreement that may already be present isn’t given opportunity to play out. Your community would just be lots of people politely agreeing not to disagree.

    I have no problem with wanting to build a community around a particular set of acceptable behaviors - I don’t even take issue with trying to quantify that behavior and automating it. But we shouldn’t pretend as if doing so doesn’t have unintended polarizing consequences.

    A community that allows for disagreement but limits argumentation isn’t neutral - it gives preference to status-quo and consensus positions by limiting the types of dissent allowed. If users aren’t able to resolve conflicting perspectives through argumentation, then the consensus view is left uncontested (at least not meaningfully contested). That isn’t a problem if the intent of the community is to enforce decorum so that contentious argumentation happens elsewhere, but if a majority of communities utilize a similar moderation policy, then of course it is going to result in siloing.

    I might also point out that an argument drawn out over dozens of comments and ending in that ‘unwanted’ behavior you’re looking for isn’t all that visible to most users; if you’re someone trying to avoid ‘jerks’, then I would think the relative nested position/visibility of that activity should matter. I’m not sure how your bot weighs activity against that visibility, but I think even that doubt brings the effectiveness of this strategy into question.
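    One way to picture that visibility point: if scoring discounted each comment by how visible it actually is (the 1/(depth + 1) falloff here is purely hypothetical, not how the bot works), a nasty exchange buried twenty replies deep would barely register against top-level behavior:

```python
def visibility_weighted_score(comments):
    """Sum comment scores, discounting deeply nested replies.

    `comments` is a list of (net_score, nesting_depth) pairs. A
    hypothetical 1/(depth + 1) falloff models the fact that few
    readers ever expand a thread dozens of replies deep.
    """
    return sum(score / (depth + 1) for score, depth in comments)

# A top-level +10 comment far outweighs a -10 buried 19 replies deep:
# visibility_weighted_score([(10, 0), (-10, 19)]) == 10 - 0.5 == 9.5
```

    A bot that ignores depth treats those two comments as equally representative of a user, which is exactly the doubt raised above.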

    Again, I’m not challenging the specific moderation choices the bot has made, just pointing out the problem of employing this type of moderation at a large scale. How it has been employed in this particular community is interesting, though.


  • I know this will ring hollow, considering I am (predictably) on the autoban list, but:

    I don’t know how this isn’t a political echo-chamber any% speedrun. People downvote posts and comments for a lot of reasons, and a big one (maybe the biggest in a political community) is general disagreement/dislike, or even simple abstract mistrust. This is basically just crowdsourced, vibes-based moderation.

    Then again, I think communities are allowed to moderate/gatekeep their own spaces however they like. I see little difference between this practice and .ml or lemmygrad preemptively banning users based on comments made in other communities. In fact, I expect the same bot deployed on .ml or hexbear would end up banning the most impassioned centrist users from .world and kbin, and it would result in an accelerated siloing of the fediverse if applied at scale. Each community has a type of user it finds the most disagreeable, and the longer this automod is allowed to run, the more each space will end up being defined by that perceived opposition.

    Little doubt I would find the consensus-view unpalatable in a space like that, so no skin off my nose.