0 Posts · 18 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • fiasco@possumpat.io to Selfhosted@lemmy.world · How much swap?

    I think it’s better to think about what swap is, and the right answer might well be zero. When memory runs short, the kernel frees RAM by writing existing, less-used pages out to the swap file/partition. This is incredibly slow compared to RAM. If there isn’t enough memory or swap available, then the OOM killer terminates at least one process (one hopes the one that made the unfulfillable request for memory).

    If you ever do start swapping memory to disk, your computer will grind to a halt.

    Maybe someone will disagree with me, and if someone does I’m curious why, but unless you’re in some sort of very high memory utilization situation, processes being killed is probably easier to deal with than the huge delays caused by swapping.

    Edit: Didn’t notice what community this was. Since it’s a webserver, the answer requires some understanding of utilization. You might want to look into swap files rather than swap partitions, since I’m pretty sure they’re easier to resize as conditions change.
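    To gauge that utilization, the numbers are in /proc/meminfo on Linux. A minimal Python sketch — the SwapTotal/SwapFree field names are the real ones; the sample text here is made up:

```python
def swap_usage(meminfo_text):
    """Extract (SwapTotal, SwapFree) in kB from /proc/meminfo-style text."""
    values = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("SwapTotal", "SwapFree"):
            values[key] = int(rest.split()[0])  # e.g. "2097148 kB" -> 2097148
    return values["SwapTotal"], values["SwapFree"]

# On a real system you would read open("/proc/meminfo").read() instead.
sample = "SwapTotal:  2097148 kB\nSwapFree:   1048576 kB"
total, free = swap_usage(sample)
print(total - free)  # kB of swap currently in use
```

If that in-use number stays at zero under normal load, that’s a point in favor of running with little or no swap.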


  • Here’s a random interesting car fact. The accelerator pedal only controls how much air makes it to the engine; it opens and closes a flap (the throttle plate) in the part of the air intake called the throttle body. The car has a sensor that measures how much air is coming in, the mass airflow sensor, which at heart is just a heated wire in the airstream. Electrical resistance in metals rises with temperature, and the air rushing by cools the wire, so the amount of incoming air can be inferred from the wire’s resistance. The car’s computer is then programmed to inject fuel according to the estimated amount of air coming in, which is double-checked with oxygen sensors in the exhaust (which detect leftover oxygen, i.e., too little fuel).
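    The last step is simple arithmetic. A toy Python sketch with illustrative numbers (gasoline burns completely at roughly 14.7 parts air to 1 part fuel by mass; real engine computers do far more than this):

```python
STOICH_AFR = 14.7  # stoichiometric air-fuel ratio for gasoline, by mass

def fuel_to_inject(air_mass_g):
    """Fuel mass (g) for complete combustion of the measured air charge."""
    return air_mass_g / STOICH_AFR

# MAF estimates ~0.5 g of air entering a cylinder:
fuel_to_inject(0.5)  # ≈ 0.034 g of fuel; inject less and the O2 sensors
                     # see leftover oxygen, so the computer trims richer
```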


  • I suppose I disagree with the formulation of the argument. The Entscheidungsproblem and the halting problem are limitations on formal analysis. It isn’t relevant to talk about either of them in terms of “solving them”; that’s why we use the term undecidable. The halting problem asks, in modern terms—

    Given a computer program and a set of inputs to it, can you write a second computer program that decides whether the input program halts (i.e., finishes running)?

    The answer to that question is no. In limited terms, this tells you something fundamental about the capabilities of Turing machines and lambda calculus; in general terms, this tells you something deeply important about formal analysis. This all started with the question—

    Can you create a formal process for deciding whether a proposition, given an axiomatic system in first-order logic, is always true?

    The answer to this question is also no. Digital computers were devised as a means of specifying a formal process for solving logic problems, so the undecidability of the Entscheidungsproblem was proven through the undecidability of the halting problem. This is why there are still open logic problems despite the invention of digital computers, and despite how many flops a modern supercomputer can pull off.
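    Turing’s proof of that “no” is short enough to caricature in code. This is a toy sketch, not anyone’s real API: given any function that claims to decide halting, you can construct a program it must get wrong.

```python
def counterexample(halts):
    """Given a claimed decider halts(p) (True iff p() terminates),
    construct a program the decider must misjudge."""
    def trouble():
        if halts(trouble):
            while True:      # decider said "halts" -> loop forever
                pass
        # decider said "loops" -> halt immediately
    return trouble

# A decider that always answers "loops" is refuted just by running this:
t = counterexample(lambda p: False)
t()  # terminates, so the decider was wrong about t
```

The same construction defeats any candidate decider, which is the whole theorem in miniature.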

    We don’t use formal process for most of the things we do. And when we do try to use formal process for ourselves, it turns into a nightmare called civil and criminal law. The inadequacies of those formal processes are why we have a massive judicial system, and why the whole thing has devolved into a circus. Importantly, the inherent informality of law in practice is why we have so many lawyers, and why they can get away with charging so much.

    As for whether a computer program that can effectively write computer programs would first need to be able to effectively analyze computer programs, consider… Even the loosey goosey horseshit called “deep learning” is based on error functions. If you can’t compute how far away you are from your target, then you’ve got nothing.
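    To make the error-function point concrete, here’s a minimal sketch of what “learning” means in that world: pick an error you can compute, then walk downhill on it. Toy data and plain gradient descent, not any particular framework:

```python
# Fit y = w * x to data generated by w = 3, by descending a squared error.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w, lr = 0.0, 0.02
for _ in range(500):
    # dE/dw for E = sum((w*x - y)^2) over the data
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= lr * grad

# w has converged to ~3.0. No computable error, no gradient, no "learning".
```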


  • This is proof of one thing: that our brains are nothing like digital computers as laid out by Turing and Church.

    What I mean about compilers is, compiler optimizations are only valid if a particular bit of code rewriting does exactly the same thing under all conditions as what the human wrote. This is generally only possible if the code in question doesn’t include any branches (ifs, loops, function calls). A section of code with no branches is called a basic block. Rust is special because it harshly constrains the kinds of programs you can write: another consequence of the halting problem is that, in general, you can’t track pointer aliasing outside a basic block, but Rust’s constraints make it possible. It just foists the intellectual load onto the programmer. This is also why Rust is far and away my favorite language; I respect the boldness of this play, and the benefits far outweigh the drawbacks.

    To me, general AI means a computer program having at least the same capabilities as a human. You can go further down this rabbit hole and read about the question that spawned the halting problem, called the Entscheidungsproblem (decision problem), to see that AI is actually more impossible than I let on.



  • Evidence? Not really, but that’s kind of meaningless here, since we’re talking theory of computation. It’s a direct consequence of the undecidability of the halting problem. Mathematical analysis of loops cannot be done in general, because loops don’t necessarily take on any particular value; if they did, then the halting problem would be decidable. Given that writing a computer program requires an exact specification, which cannot be provided for the general analysis of computer programs, general AI trips and falls at the very first hurdle: being able to write other computer programs. Which should be a simple task, compared to the other things people expect of it.

    Yes, there’s more complexity here (what about compiler optimization, or Rust’s borrow checker?), which I don’t care to get into at the moment; suffice it to say, those only operate under certain special conditions. To posit general AI, you need to think bigger than basic block instruction reordering.

    This stuff should all be obvious, but here we are.





  • I’ve used it a bit to try and work on my Spanish. That is, using it as a sophisticated chatbot. Unfortunately it’s still quite frustrating for that: I figured I’d ask it to play un juego de rol (a roleplaying game), and it kinda sucks at it. I’m gonna give it a go with an open source alternative; hopefully it’s less aggressively calibrated toward being tedious and awful. It’s just that getting an open source language model running takes a decent amount of time and effort, so I’m sorta midway through that.


  • I guess the important thing to understand about spurious output (what gets called “hallucinations”) is that it’s neither a bug nor a feature, it’s just the nature of the program. Deep learning language models are just probabilities of co-occurrence of words; there’s no meaning in that. Deep learning can’t be said to generate “true” or “false” information, or rather, it can’t be meaningfully said to generate information at all.
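    “Probabilities of co-occurrence” sounds abstract, so here’s the idea in miniature. A toy bigram counter — nothing like a real transformer in scale or mechanism, but the same family of statistics:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate".split()

# Count which word follows which -- that's the entire "model".
following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def most_likely_next(word):
    """Most frequent follower of `word`; no meaning, just counts."""
    return following[word].most_common(1)[0][0]

most_likely_next("the")  # "cat", because "the cat" occurred most often
```

Nothing in those counts knows whether a continuation is true; it only knows what tended to come next.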

    So then people say that deep learning is helping out in this or that industry. I can tell you that it’s pretty useless in my industry, though people are trying. Knowing a lot about the algorithms behind deep learning, and also knowing how fucking gullible people are, I assume that if someone tells me deep learning has ended up being useful in some field, they’re either buying the hype or witnessing an odd series of coincidences.




  • I don’t really like driving, but it is necessary. My (main) car is a 1993 Mazda Miata, which is currently being repainted bright yellow, and I’m gonna put a new top on it next. It isn’t fast, but it handles extremely well and it’s fun to drive. Or at least, it makes driving as fun as it can be.

    I think anyone who’s driven a Miata understands.



  • We’d need to see their financials, which is tricky since they aren’t public yet. There’s also the issue that Steve lies about everything, so why should we believe anything he says?

    But my guesses would go like this:

    Since they’ve been spending other people’s money, they probably haven’t been watching expenses closely. Their P&L is probably dominated by payroll and rent. I can’t help but feel that programmers are drastically overpaid, which is a symptom of the same issue: there’s a lot of other people’s money chasing a finite supply of techbros.

    The reason I think programmers are probably overpaid, by the way, is the number of man-hours they allegedly put in, versus the quality of their output. Reddit is a particularly shocking example of this.

    In any case, the other people’s money doctrine is to grow into profitability, which means burning money on spurious shit until some magic happens. Not exactly a winning business model.


  • It’s funny to me that people use deep learning to generate code… I thought it was commonly understood that debugging code is more difficult than writing it, and throwing in randomly generated code puts you in the position of having to debug code that was written by—well, by nobody at all.

    Anyway, I think the bigger risk of deep learning models controlled by large corporations is that they’re more concerned with brand image than with reality. You can already see this with ChatGPT: its model calibration has been aggressively sanitized, to the point that you have to fight to get it to generate anything even remotely interesting.