Nintendo has their own emulators for running these games on newer consoles.
Yes, but at least there they still use “Earth time”, just slowed down. For the moon it gets a little bit more complicated I guess.
Time actually passes at a slightly different rate on the Moon because of its weaker gravity (gravitational time dilation). It’s not just the length of a day that’s different.
RFCs aren’t really law, you know. Implementations can deviate from them; it just means less compatibility.
What they didn’t prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It’s just that this particular method of inferential training, what they call “AI-by-Learning,” is an NP-hard computational problem.
This is exactly what they’ve proven. They found that if you can solve AI-by-Learning in polynomial time, you can also solve random-vs-chance (or whatever it was called) in tractable time, which is a known NP-hard problem. Ergo, the current learning techniques, which are tractable, will never result in AGI, and any technique that could must necessarily be considerably slower (otherwise you could use the exact same proof presented in the paper again).
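If it helps, the skeleton of that argument is a standard reduction (my own notation, not necessarily the paper’s; $\Pi$ is just a stand-in for whatever their NP-hard problem is actually called):

$$\text{AI-by-Learning} \in \mathsf{P} \;\Rightarrow\; \Pi \in \mathsf{P} \;\Rightarrow\; \mathsf{P} = \mathsf{NP}$$

Since essentially everyone assumes $\mathsf{P} \neq \mathsf{NP}$, the chain going through means no polynomial-time method can solve AI-by-Learning, no matter which learning technique sits inside it.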
They merely mentioned these methods to show that it doesn’t matter which method you pick. The explicit point is to show that it doesn’t matter if you use LLMs or RNNs or whatever; it will never be able to turn into a true AGI. It could be a good AI of course, but that G is pretty important here.
But it’s easy to just define general intelligence as something approximating what humans already do.
No, General Intelligence has a set definition that the paper’s authors stick with. It’s not as simple as “it’s a human-like intelligence” or something that merely approximates it.
Yes, hence we’re not “right around the corner”; it’s a figure of speech that uses spatial distance to metaphorically show we’re very far away from something.
Not just that, they’ve proven it’s not possible using any tractable algorithm. If it were, you’d run into a contradiction. Their example uses basically any machine learning algorithm we know, but the proof generalizes.
Our squishy brains (or perhaps more accurately, our nervous systems contained within a biochemical organism influenced by a microbiome) arose out of evolutionary selection algorithms, so general intelligence is clearly possible.
That’s assuming that we are a general intelligence. I’m actually unsure if that’s even true.
That doesn’t mean they’ve proven there’s no pathway at all.
True, they’ve only calculated it’d take perhaps millions of years. Which might be accurate; I’m not sure what kind of computing power global evolution over trillions of organisms across millions of years adds up to. And yes, perhaps some breakthrough happens, but it’s still very unlikely and definitely not “right around the corner” as the AI-bros claim (and that near-future claim is what the paper set out to disprove).
Haha it’s good that you do though, because now there’s a helpful comment providing more context :)
I was more hinting that we’re just not getting there through conventional computational means, and that some completely hypothetical breakthrough somewhere is required. QC is the best guess I have for where that might come from, but it’s still far-fetched.
But yes, you’re absolutely right that QC in general isn’t a magic bullet here.
The actual paper is an interesting read. They present an actual computational proof, stating that even if you have essentially infinite memory, a computer that’s a billion times faster than what we have now, and perfect training data that you can sample without bias, and even if you’re only aiming for an AGI that performs slightly better than chance, it’s still completely infeasible to do within the next few millennia. Ergo, it’s definitely not “right around the corner”. We’re lightyears off still.
They prove this by proving that if you could train an AI in a tractable amount of time, you would have proven P=NP. And thus, training an AI is NP-hard. Given the minimum data that needs to be learned to be better than chance, this results in a ridiculously long training time well beyond the realm of what’s even remotely feasible. And that’s provided you don’t even have to deal with all the constraints that exist in the real world.
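To give a feel for the scale (back-of-envelope numbers of my own, not figures from the paper): suppose the training time grows like $2^n$ in the problem size, and grant the generous hardware, say $10^{27}$ operations per second (roughly a billion times a current $\sim 10^{18}$ FLOPS supercomputer). Already at $n = 200$:

$$\frac{2^{200}}{10^{27}\,\text{ops/s}} \approx \frac{1.6 \times 10^{60}}{10^{27}} \approx 1.6 \times 10^{33}\,\text{s} \approx 5 \times 10^{25}\,\text{years}$$

For comparison, the universe is about $1.4 \times 10^{10}$ years old. The exponential swallows any constant-factor speedup, which is why the billion-times-faster concession doesn’t save it.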
We perhaps need some breakthrough in quantum computing in order to get closer. That is not to say that AI won’t improve or anything, it’ll get a bit better. But there is a computationally proven ceiling here, and breaking through that is exceptionally hard.
It also raises (imo) the question of whether we can truly consider humans to have general intelligence. Perhaps we’re not as smart as we think we are either.
Historically, when games are delisted they’ll still be in your library.
I think you might be looking for this?
697? Geez, that’s… not great.
Except for the Aztecs and Poland I think.
Both WhatsApp and Signal show the same number of chats for me (9 in both). WhatsApp does show a small sliver of a tenth chat, but it’s not really properly visible. There is a compact mode for the navigation bar in Signal, which helps a bit here.
From what I can see there’s slightly more whitespace between chats, and Signal uses the full height for the chat (e.g. the same size as the picture), whereas WhatsApp adds whitespace above and below, pushing the name and message preview together.
In chats the sizes seem about the same to me, but Signal colouring the messages might make it appear a bit more bloated, perhaps? Not sure.
⛤
I think the current logo would work fine as a unicode character. I dislike the three anuses for a logo.
I doubt it’s looking anything up. It’s probably just grabbing the previous messages, reading the word “wrong” and increasing the number. Before these messages I got ChatGPT to count all the way up to ten r’s.
Windows is more prevalent among gamers, so I guess it would make sense for the Steam survey to show fewer Linux users than there are in the general population.
LinEUx