I guess the important thing to understand about spurious output (what gets called “hallucinations”) is that it’s neither a bug nor a feature, it’s just the nature of the program. Deep learning language models are just probabilities of co-occurrence of words; there’s no meaning in that. Deep learning can’t be said to generate “true” or “false” information, or rather, it can’t be meaningfully said to generate information at all.
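To make the "probabilities of co-occurrence" point concrete, here's a toy bigram sketch in Python (my own illustration with a made-up corpus, obviously not how a real transformer model is built): it counts which word follows which and samples from those counts, with zero notion of whether the output is true.

```python
import random
from collections import Counter, defaultdict

# Toy corpus; the model only ever sees which word follows which.
corpus = "the cat sat on the mat . the dog sat on the cat .".split()

# Count co-occurrences of adjacent words.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=8):
    """Sample next words purely by observed frequency -- no facts involved."""
    word, out = start, [start]
    for _ in range(n):
        candidates = follows.get(word)
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

Nothing in that loop can be "right" or "wrong" about the world; it can only be more or less probable given the training text.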
So then people say that deep learning is helping out in this or that industry. I can tell you that it's pretty useless in my industry, though people are trying. Knowing a lot about the algorithms behind deep learning, and also knowing how fucking gullible people are, I assume that if someone tells me deep learning has ended up being useful in some field, they're either buying the hype or witnessing an odd series of coincidences.
Deep learning can be and is useful today; it's just that the useful applications are things like classifiers and computer vision models. Lots of commercial products have been using those kinds of models to great effect, some for years now.
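For what it's worth, this is the kind of thing I mean by "classifier": a minimal scikit-learn sketch on synthetic data (purely illustrative, not tied to any particular product).

```python
# A minimal supervised classifier: learn a decision boundary from labelled
# examples, then score it on held-out data. This is the well-trodden,
# genuinely useful end of machine learning.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```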
What do you think of the AI firms who are saying it could help with making policy decisions, tackling climate change, and leading people to easier lives?
Absolutely. Computers are great at picking out patterns across enormous troves of data, and those trends and patterns can help guide policymaking decisions the same way they can help guide medical diagnostic decisions.
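By "picking out patterns" I mean things like unsupervised clustering over big record sets; here's a toy sketch on synthetic data (purely illustrative, no real dataset implied).

```python
# Unsupervised pattern-finding: group unlabelled records into clusters.
# The algorithm finds the structure; humans still decide what to do with it.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

records, _ = make_blobs(n_samples=1000, centers=4, random_state=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(records)
print(labels[:10])  # cluster id per record, for an analyst to interpret
```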
The article was skeptical about this. It said that the problem with expecting it to revolutionize policy decisions isn't that we don't know what to do, it's that we don't want to do it. For example, we already know how to solve climate change; the smartest people on the planet in those fields have already told us what needs to be done. We just don't want to make the changes necessary.
I mean AI is already generating lots of bullshit ‘reports’. Like you know, stuff that reports ‘news’ with zero skill. It’s glorified copy-pasting really.
If you think about how much language is rote, in fields like law, it makes a lot of sense to use AI to auto-generate it. But it's not intelligence. It's just creating a linguistic assembly line. And just like in a factory, it will require human review for quality control.
The thing is - and this is also what's annoying me about the article - AI experts and computational linguists know this. It's the laypeople who end up using (or promoting) these tools now that they're public who don't know what they're talking about, and who project intelligence onto AI that isn't there. The real hallucination problem isn't with deep learning, it's with the users.