• 3 Posts
  • 24 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • I don’t really subscribe to the idea that if civilization collapses, there will be no technology again in anyone’s lifetime. You aren’t going to go back to ordering shit off Amazon from your smartphone or anything, but the knowledge that things like refrigeration, radio transmission, internal combustion engines, and water treatment are possible is going to drive people to figure out how to get them back by any means necessary.

    How quickly that happens depends on whether the majority of people adopt a “technology is too dangerous/a sin against God” ideology or not.


  • If you get just the right GGUF model (read the description when you download it to pick the right K-quantization) and actually use multithreading (llama.cpp supports multithreading, so in theory GPT4All should too), it’s reasonably fast. I’ve gotten roughly half the speed of ChatGPT on an 8-core AMD FX with DDR3 RAM. Even 20B models can be usably fast.
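    For reference, here’s roughly what that looks like through the llama-cpp-python bindings; the model filename and settings are placeholders, not recommendations:

    ```python
    # Minimal sketch, assuming `pip install llama-cpp-python`.
    # The model path is hypothetical; any K-quant GGUF file works the same way.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./some-7b-model.Q4_K_M.gguf",
        n_threads=8,   # match your physical core count, e.g. an 8-core FX
        n_ctx=2048,    # context window; larger costs more RAM
    )

    out = llm("Q: How does a refrigerator work? A:", max_tokens=128)
    print(out["choices"][0]["text"])
    ```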


  • This probably isn’t very helpful, but the best way I’ve found to make an AI write an entire book is still a lot of work. You have to make it write in sections, adjust the prompts based on what’s happening, and spend a lot of time copy-pasting the good sentences into a better-quality section, then use those blocks of text to build chapters. You’re basically plagiarizing a document from AI-written documents rather than making the AI shit it out in one continuous stream.

    If you could come up with a way to make an AI produce a document using only complete sentences from other AI-generated documents, maybe you could achieve a higher level of automation and still get similar quality (rough sketch below). Otherwise it’s just as difficult as writing a book yourself.
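    Something like this, in Python; the scoring heuristic is a made-up stand-in for whatever “is this sentence any good” check you’d actually want:

    ```python
    import re

    def split_sentences(text: str) -> list[str]:
        # Naive splitter; fine for a rough drafting pipeline.
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

    def score(sentence: str) -> float:
        # Placeholder heuristic: prefer medium-length sentences.
        return -abs(len(sentence.split()) - 20)

    def assemble(drafts: list[str], keep: int) -> str:
        # Pool complete sentences from several AI-generated drafts,
        # then keep the best-scoring ones in their original order.
        pool = [s for d in drafts for s in split_sentences(d)]
        best = set(sorted(pool, key=score, reverse=True)[:keep])
        return " ".join(s for s in pool if s in best)
    ```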

    As for software, use llama.cpp. It’s a CPU-only AI thingy that can use multiple cores. You probably aren’t getting an Nvidia GPU running on any ARM board unless you have a really long white neckbeard and a degree in computer engineering. Download GGUF-compatible models for llama.cpp from Hugging Face.
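    The download step can be scripted with the huggingface_hub package too; the repo and filename below just show the naming pattern, they aren’t specific recommendations:

    ```python
    # Sketch assuming `pip install huggingface_hub`. Repo and file names are
    # illustrative; browse Hugging Face for GGUF builds that fit your RAM.
    from huggingface_hub import hf_hub_download

    path = hf_hub_download(
        repo_id="TheBloke/Llama-2-7B-Chat-GGUF",
        filename="llama-2-7b-chat.Q4_K_M.gguf",
    )
    print(path)  # local cache path you can point llama.cpp at
    ```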


  • Re: “Edibles effects?” (posted to Trees@lemmy.world)

    Smoking may be short-lived, but it does the job more reliably in my experience. I take an edible just for a baseline high, wait a little while, and then hit a vape about once an hour. That way you aren’t wasting edibles on getting higher than you wanted, and your smoking supplies last longer because one puff = really high.


  • Actually, I got an Nvidia card working with Easy Diffusion on Debian. The barrier to getting a text chat AI working with GPU acceleration is that I don’t have the patience to deal with all that Python venv nonsense, so I use llama.cpp. It’s written in C++, which means no Python dependencies to fuck you with, at the cost of slower CPU-only generation.

    Easy Diffusion just happens to be simple enough that I could actually figure out how to get it working (it’s in Python and needs a virtual environment), but it’s a different story for the text AIs.

    If you actually had the patience and knowledge to deal with all the Python issues, and/or a distro that makes it easy (different distros handle pip differently), I don’t doubt you could get Nvidia acceleration working on some text chat AI. At minimum, the venv setup itself can be scripted; see the sketch below.
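    A minimal sketch of scripting the venv dance, so you only have to suffer through it once (the package list is just an example):

    ```python
    # Create an isolated environment so the AI tooling's pinned dependencies
    # can't clobber your distro's Python packages.
    import subprocess
    import venv

    venv.create("ai-env", with_pip=True)

    # Install into the environment's own pip, not the system one.
    pip = "ai-env/bin/pip"  # on Windows: ai-env\Scripts\pip.exe
    subprocess.run([pip, "install", "torch", "transformers"], check=True)
    ```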