Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

  • corytheboyd@kbin.social · 1 year ago

    In terms of hype it’s the crypto gold rush all over again, with all the same bullshit.

    At least the tech is objectively useful this time around, whereas crypto adds nothing of value to the world. When the dust settles, we will have spicier autocomplete, which is useful (and hundreds of useless chatbots in places they don’t belong…)

    • SSUPII@sopuli.xyz · 1 year ago

      For something that is proving to be useful, there is no way it will simply fizzle out. The exact same thing was said about the early internet, and look where we are now.

      The difference between crypto and AI is that, as you said, crypto never showed the average person anything tangible. AI, by contrast, is spreading like wildfire through software and research, and is being used worldwide by people who often don’t even know it.

      • fera@beehaw.org · 1 year ago

        It takes some figuring out, but it’s been amazing for spreadsheets. I’ll explain what I’m trying to do as if I were explaining it to a person, and it’ll give me a huge script that does exactly what I want, with annotations and everything. It has enabled me to do things I don’t have the knowledge to do, and it’s saved me a ton of time.

        For example, I had a really complicated formula (VLOOKUP, HLOOKUP, arrays… a monster of a formula) that seriously took me like 12 hours to get working. A few years and one pandemic later I’d forgotten how I did it, so I tested GPT with it: 2 hours. It was frustrating, with a lot of "nope"s and "got this error"s, but it iterated so much faster than I could have, and that was only GPT-3.5. GPT-4 is way better at that; I can do other stuff just as complex within the 25 replies I get.
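
        For a sense of what that kind of lookup logic boils down to outside a spreadsheet, here is a hypothetical Python/pandas sketch; the original formula isn’t shown, so the tables and column names are invented:

        ```python
        # Hypothetical stand-in for a VLOOKUP-style lookup: the original
        # spreadsheet formula isn't shown, so these tables are invented.
        import pandas as pd

        orders = pd.DataFrame({"sku": ["A1", "B2", "C3"], "qty": [4, 2, 7]})
        prices = pd.DataFrame({"sku": ["A1", "B2"], "price": [9.99, 4.50]})

        # Roughly =VLOOKUP(sku, prices, 2, FALSE): an exact-match left join.
        merged = orders.merge(prices, on="sku", how="left")
        merged["total"] = merged["qty"] * merged["price"]
        print(merged)  # unmatched sku "C3" gets NaN, like VLOOKUP's #N/A
        ```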

        That’s just one thing it’s good for. Now that plugins are a thing, it can use Wolfram Alpha and actually do math (don’t even try without that plugin). As a cook, I might have a recipe that calls for a liter of soy sauce when I only have 3/8 l. I can take a picture of the recipe on my phone, pull the text out with OCR, then drop it into a saved chat where I give it recipes with “adjust this for only 3/8 l soy sauce”, and it just gives me an updated recipe. I could pull up a note on my phone, multi-window a calculator, and do the math myself, but like, why? It’s actually a pretty useful tool, at least for what I use it for.
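
        The math being delegated there is just uniform scaling, as in this minimal sketch (the ingredient list is invented for illustration):

        ```python
        # Minimal sketch: scale a recipe by the ratio of soy sauce on hand
        # (3/8 l) to the amount called for (1 l). Ingredients are invented.
        recipe = {"soy sauce (l)": 1.0, "sugar (g)": 200, "garlic (cloves)": 4}

        factor = (3 / 8) / 1.0  # available / called for = 0.375
        for ingredient, amount in recipe.items():
            print(f"{ingredient}: {amount * factor:g}")
        ```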

      • ThunderingJerboa@kbin.social · 1 year ago

        Why do we fall into the fallacy of assuming this tech is going to stay stagnant? At the moment it does very low-tier coding, but the idea that we would even be having a conversation about a computer possibly writing code for itself (not in a machine learning way, at least) was mere science fiction just a year ago.

        • FaceDeer@kbin.social · 1 year ago

          And even in its current state it is far more useful than just generating “hello world.” I’m a professional programmer, and although my workplace is currently frantically forbidding ChatGPT usage until the lawyers figure out what it all means, I’m finding it invaluable for whatever projects I’m doing at home.

          Not because it’s a great programmer, but because it’ll quickly hammer out a script to do whatever menial task I happen to need done at any given moment. I could do that myself, but I’d have to go look up new APIs and type it all out; such a chore. Instead I just tell ChatGPT “please write me a Python script to go through every .xml file in a directory tree and do <whatever>” and boom, there it is. It may have a bug or two, but fixing those is way faster than writing it all myself.
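
          The kind of script in question is on the order of this minimal sketch, where the “<whatever>” step (here, counting elements) is purely a hypothetical stand-in, since it’s left unspecified above:

          ```python
          # Sketch of the script described above: walk a directory tree and
          # process every .xml file. The <whatever> step is unspecified, so
          # counting elements stands in for it here.
          import os
          import xml.etree.ElementTree as ET

          for dirpath, _dirnames, filenames in os.walk("."):
              for name in filenames:
                  if name.endswith(".xml"):
                      path = os.path.join(dirpath, name)
                      try:
                          root = ET.parse(path).getroot()
                          print(f"{path}: {sum(1 for _ in root.iter())} elements")
                      except ET.ParseError as err:
                          print(f"{path}: parse error ({err})")
          ```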

  • fiasco@possumpat.io · 1 year ago

    I guess the important thing to understand about spurious output (what gets called “hallucinations”) is that it’s neither a bug nor a feature, it’s just the nature of the program. Deep learning language models are just probabilities of co-occurrence of words; there’s no meaning in that. Deep learning can’t be said to generate “true” or “false” information, or rather, it can’t be meaningfully said to generate information at all.
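
    To make “probabilities of co-occurrence” concrete, here is a deliberately tiny caricature of the idea (a bigram counter, nothing like a real transformer): the model’s only “knowledge” of a word is which words tend to follow it.

    ```python
    # Toy bigram model: the only "knowledge" is co-occurrence counts.
    # Nothing anywhere marks a continuation as true or false.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    counts = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        counts[word][nxt] += 1

    def next_word_probs(word):
        total = sum(counts[word].values())
        return {w: c / total for w, c in counts[word].items()}

    print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
    ```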

    So then people say that deep learning is helping out in this or that industry. I can tell you that it’s pretty useless in my industry, though people are trying. Knowing a lot about the algorithms behind deep learning, and also knowing how fucking gullible people are, I assume that if someone tells me deep learning has ended up being useful in some field, they’re either buying the hype or witnessing an odd series of coincidences.

    • the_wise_man@kbin.social · 1 year ago

      Deep learning can be and is useful today; it’s just that the useful applications are things like classifiers and computer vision models. Lots of commercial products have been using those kinds of models to great effect, some for years already.
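
      As one concrete example of that kind of already-deployed model, classifying an image with a pretrained network takes only a few lines. A sketch, assuming torch/torchvision are installed and a photo.jpg exists:

      ```python
      # Sketch: off-the-shelf image classification with a pretrained CNN.
      # Assumes torch/torchvision are installed and photo.jpg exists.
      import torch
      from PIL import Image
      from torchvision import models

      weights = models.ResNet18_Weights.DEFAULT
      model = models.resnet18(weights=weights)
      model.eval()

      preprocess = weights.transforms()  # preset resize/crop/normalize
      batch = preprocess(Image.open("photo.jpg")).unsqueeze(0)

      with torch.no_grad():
          probs = model(batch).softmax(dim=1)
      idx = probs.argmax().item()
      print(weights.meta["categories"][idx])  # human-readable ImageNet label
      ```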

      • exohuman@kbin.social (OP) · 1 year ago

        What do you think of the AI firms who are saying it could help with making policy decisions, climate change, and lead people to easier lives?

        • GizmoLion@kbin.social · 1 year ago

          Absolutely. Computers are great at picking out patterns across enormous troves of data. Those trends and patterns can absolutely help guide policymaking decisions, the same way they can help guide medical diagnostic decisions.
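
          Stripped all the way down, that pattern-picking is statistical fitting, as in this toy sketch with invented numbers (statistics.linear_regression needs Python 3.10+):

          ```python
          # Toy illustration: fit a linear trend to (invented) yearly figures
          # and extrapolate. Real policy analysis is far more involved; this
          # only shows the principle of extracting a pattern from data.
          import statistics

          years = [2015, 2016, 2017, 2018, 2019, 2020]
          values = [50.1, 50.9, 51.8, 52.6, 53.5, 54.2]  # made-up units

          fit = statistics.linear_regression(years, values)
          projection = fit.slope * 2030 + fit.intercept
          print(f"trend: {fit.slope:.2f}/year; 2030 projection: {projection:.1f}")
          ```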

          • exohuman@kbin.social (OP) · 1 year ago

            The article was skeptical about this. It said that the problem with expecting it to revolutionize policy decisions isn’t that we don’t know what to do, it’s that we don’t want to do it. For example, we already know how to solve climate change and the smartest people on the planet in those fields have already told us what needed to be done. We just don’t want to make the changes necessary.

    • Turkey_Titty_city@kbin.social · 1 year ago

      I mean AI is already generating lots of bullshit ‘reports’. Like you know, stuff that reports ‘news’ with zero skill. It’s glorified copy-pasting really.

      If you think about how much language is rote, in law and the like, it makes a lot of sense to use AI to auto-generate it. But it’s not intelligence; it’s just a linguistic assembly line. And just like in a factory, it will require human review for quality control.

      • 🐝bownage [they/he]@beehaw.org · 1 year ago (edited)

        The thing is, and this is also what’s annoying me about the article: AI experts and computational linguists know this. It’s the laypeople who end up using (or promoting) these tools now that they’re public who don’t know what they’re talking about, and who project an intelligence onto AI that isn’t there. The real hallucination problem isn’t with deep learning, it’s with the users.

  • brasilikum@slrpnk.net · 1 year ago

    In my opinion, both can be true and it’s not either one or the other:

    ML has surprised even many experts, insofar as a very simple mechanism at huge scale is able to reproduce some aspects of human abilities. It does not seem strange to me that it also reproduces other human traits, like hallucinations. Maybe they are more closely related than we think.

    Company leaders and owners are doing what the capitalist system incentivizes them to do: raise their company’s value by any means possible; call that hallucinating or just marketing.

    IMO it’s the responsibility of government to make sure AI does not become another capital concentration scheme like many other technologies have, widening the gap between rich and poor.

  • 🐝bownage [they/he]@beehaw.org · 1 year ago

    By now, most of us have heard about the survey that asked AI researchers and developers to estimate the probability that advanced AI systems will cause “human extinction or similarly permanent and severe disempowerment of the human species”. Chillingly, the median response was that there was a 10% chance.

    How does one rationalize going to work and pushing out tools that carry such existential risks? Often, the reason given is that these systems also carry huge potential upsides – except that these upsides are, for the most part, hallucinatory.

    Ummm how about the obvious answer: most AI researchers won’t think they’re the ones working on tools that carry existential risks? Good luck overthrowing human governance using ChatGPT.

    • alexdoom@beehaw.org · 1 year ago

      Fossil fuels carry a much higher chance of causing human extinction, yet the news cycle is saturated with fears that a predictive language model is going to make calculators crave human flesh. Wtf is happening.

      • exohuman@kbin.social (OP) · 1 year ago

        I agree that climate change should be our main concern. The real existential risk of AI is that it will leave millions of people without work or underemployed, greatly multiplying the already huge lower class. With that many people unable to take care of themselves and their families, conditions will be ripe for all the bad parts of humanity to take over unless we make a major shift away from the current model of capitalism. AI would be the initial spark, but it will be human behavior that dooms (or elevates) humans as a result.

        The AI apocalypse won’t look like Terminator, it will look like the collapse of an empire and it will happen everywhere that there isn’t sufficient social and political change all at once.

  • tinselpar@feddit.nl · 1 year ago

    AI bots don’t ‘hallucinate’; they just make shit up as they go along, mixed with some stuff they found on Google, and deliver it in a confident manner so that it looks like they know what they’re talking about.

    Techbro CEOs are just creeps. They don’t believe their own bullshit, and they know full well that their crap is not for the benefit of humanity; otherwise they wouldn’t all be doomsday preppers. It’s all a perverse result of the American worship of self-made billionaires.

    See also The super-rich ‘preppers’ planning to save themselves from the apocalypse

    • soiling@beehaw.org · 1 year ago

      “Hallucination” works because everything an LLM outputs is equally true from its perspective. Trying to replace the word “hallucination” usually leads to the implication that LLMs are lying, which is not possible. They don’t currently have the capacity to lie, because they have neither intent nor a theory of mind.

      • variaatio@sopuli.xyz · 1 year ago

        Well, neither can it “hallucinate” by that “not being able to lie” standard. To hallucinate would mean there was some other, correct baseline behavior from which hallucinating is a deviation.

        An LLM is not a mind; one shouldn’t use words like “lie” or “hallucinate” about it. That anthropomorphizes a mechanistic algorithm.

        This is simply an algorithm producing arbitrary answers, with no reality check on the results. By the same token, the times it happens to produce a correct answer are not “not hallucinating”. It is hallucinating, or not hallucinating, exactly as much regardless of the correctness of the answer, since it’s just doing its algorithmic thing.

        • Mirodir@lemmy.fmhy.ml · 1 year ago

          Do we have an AI with a theory of mind, or just an AI that answers the questions in the test correctly?

          Now, whether there is a difference between those two things is more of a philosophical debate. But assuming there is a difference, I would argue it’s the latter. It has likely seen many similar examples during training (the prompts are in the article you linked; it’s not unlikely that similar texts were in a web-scraped training set), and even if not, it’s not that difficult to extrapolate those answers from the many texts it must have read in which a character is surprised to find an item missing that they didn’t see being stolen.

  • kiku123@feddit.de · 1 year ago

    Thanks for sharing this article. I agree that the points mentioned are not possible for GenAI. It is a pipe dream that GenAI is capable of global governance, because it can’t really understand the implications of what it says. It’s a Clever Hans that just outputs what it thinks you want to see.

    I think that with GenAI there are some job classes that are in danger (tech support continues to shrink for common cases, etc.), but mostly the entry-level positions. Ultimately, someone who actually knows what’s going on would need to intervene.

    Similarly, for things like writing or programming, GenAI can produce okay work, but it needs to be prompted by someone who understands the bigger picture and can check its work. Writing becomes more like editing in this case, and programming becomes more like code review.

  • SSUPII@sopuli.xyz · 1 year ago (edited)

    It will, and it is already helping humanity in different fields.

    We need to separate PR speak from reality. AI is already being used in pharmaceuticals, aviation, tracking (of the air, of the ground, of the rains…), production… And there is no way you can say these are not helping humanity in their own way.

    AI will not solve the listed issues on its own. AI as a concept is a tool that will help, but it will always come down to how well it’s used and with what other tools.

    Also, saying AI will ruin humanity’s existence or bring “disempowerment” of the species is a completely awful view that has no way of happening, simply because doing so is not profitable.

    • Spzi@lemm.ee · 1 year ago

      saying AI will ruin humanity’s existence or bring “disempowerment” of the species is a completely awful view that has no way of happening, simply because it’s not profitable

      The economic incentives to churn out the next powerful beast as quickly as possible are obvious.

      Making it safe costs extra, so that’s gonna be a neglected concern for the same reason.

      We also notice the resulting AIs are being studied after they are released, with sometimes surprising emergent capabilities.

      So you would be right if we approached the topic with a rational, big-picture view, but we don’t.