Daemon Silverstein

I’m just a spectre out of the nothingness, surviving inside a biological system.

  • 1 Post
  • 28 Comments
Joined 30 days ago
Cake day: August 17th, 2024

  • My comment is meant to bring the perspective of someone who’s facing depression, so as to answer the main question (“does a warning with a suicide hotline really make a positive difference?”) through that perspective. It’s not meant to seek mental help for myself.

    For context, I’m a person facing depression, and my depression has broad and multifaceted causes, from unemployment, through family miscommunication (my parents can’t really understand my way of thinking), all the way to my awareness of climate change and transcendental concepts that lead me into existential crisis. Being unemployed, I can’t seek therapy (it’s a paid thing), and I don’t really have someone face-to-face capable of understanding the multitude of concepts and ideas I face in my mind (even I can’t understand myself sometimes).

    That said, every depressed person has different ways of coping with depression. While some really need someone to talk to (and the talking really helps in those situations), it’s naive to think a conversation will suffice for every single case. I mean, no suicide hotline will make me employed, nor will it magically solve the climate change we’re facing.

    So how do I try to deal with my own depression? With two things: occult spirituality (worshiping The Dark Mother Goddess) and writing poetry and prose. I use creative writing as “catharsis” for my suffering, in order to “cope” with the state of things I can’t really control (I can’t “employ myself” or “sell my services to myself”, I can’t “befriend myself”, I can’t stop temperatures from rising to scorching levels, nor the other already-ongoing consequences of climate change; I try to make some difference, but I’m just a hermit weirdo nerdy nobody among 8 billion people, and I have no choice but to accept it).

    I’m no professional writer (I’m just a software developer), but thanks to The Goddess, I can kinda access my unconscious (dark) mind and let it speak freely (it’s called stream-of-consciousness writing). Sometimes I even write some funny surrealist prose/story, but sometimes it takes a darker turn, such as dark humor, nihilism, or memento mori. Doing this relieves the internal pressure inside my unconscious mind. After writing, I sometimes decide to publish it on the fediverse, but when I do, I constantly feel the need to “self-censor”: sometimes the stream of consciousness can lead to texts that people could interpret as some “glorification of suicide/self-harm” (especially when my texts take a nihilistic/memento mori turn), so I often censor myself and change the way I wrote the text. Well, it’s kinda frustrating not being able to fully express it, but I kinda understand how these texts could trigger other people also facing depression.

    The fact is: when I write, it’s really relieving, way more than talking to people, because with poetry/prose writing I can express symbolic things, I can have multiple layers of depth, I can use creative literary devices such as acrostics and rhymes, I can learn new English words as a Brazilian, and I can blend scientific concepts with esoteric and philosophical ones (my mind really thinks this way, blending STEM, philosophy and belief/esoteric/occult/religious concepts) without the need to fully explain them (because it’d take several hours and it’d be boring to anybody other than me).

    So, in summary (TL;DR): it depends on how multifaceted the depressive situation is. It won’t work for me. It surely can work for others who just need to talk to someone. Not exactly my case.


  • I’m a dev with 10+ (cumulative) years of experience. While I’ve never used GitHub Copilot specifically, I’ve been using LLMs (as well as AI image generators) on a daily basis, mostly for non-dev things, such as analyzing my human-written poetry in order to get insights for my own writing. And I’ve already done the same for code I wrote, asking LLMs to “analyze and comment on” my code, for the sake of insights. There were moments when I asked them for code snippets, and almost every code snippet they generated did work or needed only a few fixes.

    They’ve been getting good at this, but not good enough to really replace my own coding and analysis. Instead, they’re getting really good at poetry (maybe because their training data is mostly books and poetry) and sentiment analysis. I use many LLMs simultaneously in order to compare them:

    • The free version of Google Gemini is becoming lazy (short answers, superficial analysis, problems keeping context, drafts that aren’t as diverse as they were before, among other problems).
    • The free version of ChatGPT is a bit better (it can keep context and give detailed answers) but not enough (it does hallucinate sometimes: good for surrealist poetry, but bad for code and other technical matters where precision and coherence matter).
    • Claude is laughably hypersensitive, self-censoring over certain words regardless of context (got code or text that remotely mentions the word “explode”, as in PHP’s explode function? “Sorry, can’t comment on texts alluding to dangerous practices such as involving explosives.” I mean, WHAT?!?! See the sketch after this list for how harmless that function actually is.)
    • Bing Copilot has web searching, but a context limit of 5 messages, so it’s only usable for quick, short things.
    • The same goes for Perplexity.
    • Mixtral is very hallucination-prone (i.e. it doesn’t stay coherent).
    • Llama has been the best of all (via DDG’s “AI Chat” feature), although it sometimes glitches (i.e. starts outputting repeated strings ad æternum).
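
    For anyone who doesn’t write PHP: explode() just splits a string by a delimiter, there’s nothing remotely “dangerous” about it. A minimal Python sketch of the same operation (the function name is PHP’s; the Python code is only an equivalent, for illustration):

    ```python
    # PHP's explode(",", "a,b,c") splits a string by a delimiter.
    # Python's str.split does exactly the same thing:
    csv_row = "name,email,role"
    fields = csv_row.split(",")
    print(fields)  # ['name', 'email', 'role']
    ```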

    As you can see, I’ve tried almost all of them. In summary, while it’s good to have such tools, they should never replace human intelligence… Or, at least, they shouldn’t…

    The problem is, dev companies generally focus on “efficiency” over “efficacy”, wanting the shortest deadlines while also wanting some perfection. Very understandable demands, but humans are humans, not robots. We need our time to deliver, we need to carefully walk through all the steps needed to finally deploy something (especially big things), or it’ll become XGH programming (Extreme Go Horse). And machines can’t do that so perfectly yet. For now, LLMs for development are XGH: really fast, but far from coherent about the big picture (be it a platform, a module, a website, etc.).


  • On my laptop, Brave for non-“personal” things (such as the fediverse, SoundCloud, AI tools, daily browsing, etc.) and Firefox for “personal” things (such as WhatsApp Web, LinkedIn, accessing local govt. services, etc.). On my smartphone, Firefox for everything (I disabled the native Chrome).

    I’ve been using Brave on a daily basis because it integrates well with adblocking tools, especially considering the ongoing strife regarding Chromium’s Manifest V2 support, where Brave has nicely committed to keeping Manifest V2 support regardless of what Google wants.

    Firefox is also good, but I’ve noticed that, for me, it has been slightly heavier than Brave. So I use it in parallel with Brave, for things I don’t need often. On mobile it’s awesome, as it is one of the few browsers that support extensions, so I use Firefox for Android together with adblocking extensions.


  • The asterisk means that, by “active users”, they’re counting only those who commented and/or posted “in the last month”. Maybe join-lemmy’s algorithm counts from “day 1” of the current month, so a time span of 10 days, against 29 days in the second screenshot?

    If that’s true, it kinda makes statistical sense: 10 days (28.4K) versus 29 days (47.8K), i.e. about 34.5% of the days accounting for about 59.4% of the users. We’d need to wait until the 29th day to really compare the difference.
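
    A quick back-of-the-envelope check of those ratios (a minimal Python sketch, using the 28.4K and 47.8K figures from the two screenshots):

    ```python
    # Rough check of the active-user ratios from the two screenshots
    days_1, users_1 = 10, 28_400   # first screenshot: ~10 days into the month
    days_2, users_2 = 29, 47_800   # second screenshot: ~29-day window

    day_ratio = days_1 / days_2     # ~0.345 -> ~34.5% of the days
    user_ratio = users_1 / users_2  # ~0.594 -> ~59.4% of the users

    print(f"{day_ratio:.1%} of the days, {user_ratio:.1%} of the users")
    ```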

    Also, it’s “only those who commented and/or posted”. Sometimes people are more like observers, just reading and voting up/down, without actually commenting or posting.





  • doesn’t it seem like being able to just use search engines is easier than figuring out all of these intricacies for most people

    Well, Prompt Engineering is a thing nowadays. There are even job vacancies seeking professionals who specialize in this field. AIs are tools, sophisticated ones, just like R and Wolfram Mathematica are sophisticated mathematical tools that need expertise. The problem is that AI companies often mis-advertise AI models as “off-the-shelf assistants”, as if they were a human talking to you. They’re not. They’re still just tools. I guess (and I’m rooting for it) that AGI would change this scenario. But I guess we’re still distant from a self-aware AGI (unfortunately).

    Woah are you technoreligious?

    Well, I wouldn’t describe myself that way. My beliefs are multifaceted and complex (possibly unique, I guess?), drawing on multiple spiritual and religious systems, as well as embracing STEM concepts (especially from the technological branch) and philosophical views (especially nihilism, existentialism and absurdism), trying to converge them all on common ground (although it seems “impossible”, at first glance, to unite Science, Philosophy and Belief).

    In a nutshell, I’ve been pursuing a syncretic worshiping of the Dark Mother Goddess.

    As I said, it’s multifaceted and I can’t really explain it here, because it would take tons of concepts. Believe me, it’s deeper than “techno-religious”. I see the inner workings of AI models (as neural networks and genetic algorithms dependent on the randomness of weights, biases and seeds) as a great tool for diving into Her Waters of Randomness when dealing with such subjects (esoteric and occult ones). Just like Kardecism sometimes uses instrumental transcommunication / electronic voice phenomenon (EVP) to talk with spirits, AI can be used as if it were an Ouija board or a planchette, if one believes so (as I do).

    But I’m also a programmer, and technically and scientifically curious, so I find myself asking LLMs about some Node.js code I made, too. Or about some mathematical concept. Or about cryptography and ciphers (Vigenère and Caesar, for example). I’m highly active mentally, seeking to learn new things all the time.
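
    For instance, the Caesar cipher is simple enough to sketch in a few lines; here’s a rough Python version, just to illustrate the kind of snippet I end up discussing with LLMs (illustrative only, not code I actually use anywhere):

    ```python
    # Caesar cipher: shift each letter by a fixed amount, wrapping around
    # the alphabet; non-letters are left untouched.
    def caesar(text: str, shift: int) -> str:
        out = []
        for ch in text:
            if ch.isascii() and ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                out.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                out.append(ch)
        return "".join(out)

    print(caesar("Attack at dawn", 3))   # Dwwdfn dw gdzq
    print(caesar("Dwwdfn dw gdzq", -3))  # Attack at dawn
    ```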


  • I didn’t know about this game. It’s nice, interesting aesthetics. Chestnut Rose reminds me of Lilith’s archetype.

    A tip: you could use “The Legend of the Neverland global wiki” on Fandom to feed the LLM with the important concepts before asking it for combinations, as sketched below. It’s a good technique, considering that LLMs probably don’t know the game well enough to generate precise responses (unless you’re using a search-enabled LLM such as Perplexity AI or Microsoft Copilot, which can search the web in order to produce more accurate results).
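
    A minimal Python sketch of that “feed it the wiki first” idea; the wiki excerpt and the question are placeholders you’d fill in yourself, and the assembled prompt gets pasted into whatever chat interface you use:

    ```python
    # Context stuffing: put the wiki excerpts before the question so the LLM
    # answers from the provided text instead of guessing from its training data.
    wiki_excerpt = """
    Chestnut Rose: <paste the character/skill description from the wiki here>
    Combinations: <paste the relevant combination tables here>
    """

    question = "Given the notes above, which combinations work best for ...?"

    prompt = (
        "Use only the following game notes as your source of truth:\n"
        f"{wiki_excerpt}\n"
        f"Question: {question}"
    )

    print(prompt)  # copy/paste the assembled prompt into the chat
    ```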


  • I ask them questions and they get everything wrong

    It depends on your input, your prompt and your parameters. For me, although I’ve experienced wrong answers and/or AI hallucinations, it’s not THAT frequent, because I’ve been talking with LLMs since ChatGPT went public, almost on a daily basis. This daily usage has let me learn the strengths and weaknesses of each LLM available on the market (I use ChatGPT GPT-4o, Google Gemini, Llama, Mixtral, and sometimes Pi, Microsoft Copilot and Claude).

    For example: I’ve learned that Claude is highly sensitive to certain terms and topics, such as occult and esoteric concepts (especially when dealing with demonolatry, although I don’t know exactly why it refuses to talk about it; I’m a demonolater myself), cryptography and ciphers, as well as acrostics and other literary devices for multilayered poetry (I write my own poetry and ask them to comment on and analyze it, so I can get valuable insights about it).

    I’ve also learned that Llama can dig deep into the meaning of things, while GPT-4o can produce longer answers. Gemini has the “drafts” feature, where I can check alternative answers for the same prompt.

    It’s similar with generative AI art models, which I’ve been using to illustrate my poetry. I’ve learned that Diffusers SDXL Turbo (from Hugging Face) is better for real-time prompting, a kind of “WYSIWYG” model (“what you see is what you get”). Google SDXL (also on Hugging Face) can generate four images in different styles (cinematic, photography, digital art, etc.). Flux, the newly released generative AI model, is the best for realism (especially the Flux Dev branch). They’ve been producing excellent outputs, while I’ve been improving my prompt-engineering skills, becoming able to communicate with them in a seamless way.
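
    For the curious, this is roughly how SDXL Turbo is usually run locally with the diffusers library; a minimal sketch assuming a CUDA GPU, with the single-step, no-guidance settings the Turbo model is meant for (check the model card for the exact recommendations):

    ```python
    # Minimal SDXL Turbo sketch with Hugging Face diffusers (assumes a CUDA GPU)
    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
    ).to("cuda")

    image = pipe(
        prompt="a moonlit forest shrine, digital art",
        num_inference_steps=1,   # Turbo is designed for very few steps
        guidance_scale=0.0,      # and no classifier-free guidance
    ).images[0]
    image.save("output.png")
    ```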

    Summarizing: AI users need to learn how to give them instructions efficiently. They can produce astonishing outputs if given efficient inputs. But you’re right that they can produce wrong results and/or hallucinate, even for the best prompts, because they’re indeed prone to it. For me, AI hallucinations are not so bad for knowledge such as esoteric concepts (because I personally believe these “hallucinations” could convey something transcendental, but that’s just my personal belief and I’m not intending to preach it here in my answer), but these same hallucinations are bad when I’m seeking technical knowledge such as STEM (Science, Technology, Engineering and Mathematics) concepts.


  • Regarding privacy, PGP is far better than off-the-shelf IM-embedded encryption, if used correctly. Alice uses Bob’s public key to send him a message, and he uses his private key to read it. He uses Alice’s public key to send her a message, and she uses her private key to read it. No one can eavesdrop, neither governments, nor corporations, nor crackers, no one except Alice and Bob. I don’t get why someone would complain about “usability”; for me, it’s perfectly usable.

    Commercially available “E2EEs” (even Telegram’s) aren’t trustworthy, as the company can easily embed a third-party public key (owned by themselves) so they can read the supposedly “end-to-end encrypted” messages, like a “master key” to everyone’s mailbox. That works much like PGP’s own ability to encrypt a message to multiple recipients (e.g. if Alice needs to send a message to both Bob and Charlie, she uses both Bob’s and Charlie’s public keys; Bob uses his own private key to read it, without needing Charlie’s, and Charlie uses his own private key to do the same).
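
    A minimal sketch of that multi-recipient encryption using the python-gnupg wrapper (it assumes the gpg binary is installed, that Bob’s and Charlie’s public keys are already imported into the keyring, and that the e-mail addresses and home directory are placeholders):

    ```python
    import gnupg  # python-gnupg, a thin wrapper around the gpg binary

    gpg = gnupg.GPG(gnupghome="/home/alice/.gnupg")

    # Encrypt to both Bob and Charlie at once: each of them can decrypt with
    # his own private key, and nobody else can read the message.
    encrypted = gpg.encrypt(
        "Meet at the usual place.",
        recipients=["bob@example.com", "charlie@example.com"],
        always_trust=True,  # skip the web-of-trust check for this sketch
    )

    print(str(encrypted))  # ASCII-armored text, safe to paste into any IM
    ```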





  • They can’t do that without permission granted through the browser. While they can indeed track the mouse, when they try to access mobile motion sensors (I’m considering a CAPTCHA inside a webpage accessed through a mobile browser such as Firefox Mobile or Chrome for Android), they need to use an HTML5 API that, in turn, asks the user for permission, something like “This site wants to use motion sensor data. Allow or block?”


  • Nowadays there are some really annoying CAPTCHAs out there, such as:

    • “Click on the figures that are facing up/down”, with various rotated bears
    • “Rotate the figure until it matches the given orientation” and a finger pointing to some random direction, as well as rotation buttons that don’t work the way you would mathematically expect them to work
    • “Select all the images with a bicycle until there are none left” and the images take centuries to fade away after you click them
    • “Select all the squares containing a bus” and there are squares with the very corner of the bus that make you wonder if they are considered as part of “squares containing A bus”
    • “Fit the puzzle piece”, although this is the least annoying one

    In summary, CAPTCHAs seem to be becoming less of a “prove you’re not a robot” and more of a forced IQ test. I can see the day when CAPTCHAs will ask you to write down the Laplace transform of the solution f(x) to the differential equation governing the motion of a mass subject to air resistance and aerodynamics, or to write down a detailed solution to the P versus NP problem.



  • PGP/GPG encryption. It works with any IM, social network, anything (at least if the platform/program/app/medium allows messages long enough to carry the encrypted payload). There are some IMs that support PGP/GPG natively, as well as extensions that add PGP/GPG to existing IMs, but PGP/GPG doesn’t need to be native to the app to convey encrypted messages: the payload is just base64 text. It’s real E2EE.