• 0 Posts
• 10 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • It’s cheaper if you don’t have a constant load, since you only pay for the resources you are actively using. Once you have a constant load, you’re paying a premium for flexibility you don’t need.

    For example, I did a cost estimate of porting one of our high-volume, high-compute services to an event-driven, serverless architecture, and it came out to literally millions of dollars a month versus tens of thousands a month rolling our own solution on EC2 or ECS instances (a rough back-of-the-envelope version of that math is sketched after this comment).

    Of course, self-hosting in our own data center is even cheaper: we can buy new hardware and run it for years at a fraction of the cost of even the most cost-effective cloud options, as long as we have the people to maintain it.
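    For reference, here’s a very rough back-of-the-envelope sketch of how that kind of comparison works out. Every number in it (request volume, per-request compute time, per-million-invocation and GB-second rates, instance count and hourly price) is a made-up placeholder loosely in the ballpark of published cloud pricing, not our actual workload or a real quote.

    ```python
    # Rough break-even sketch: pay-per-request serverless vs. an always-on fleet.
    # All figures are illustrative placeholders, not real pricing or real load.

    REQUESTS_PER_MONTH = 100_000_000_000         # hypothetical high-volume service
    AVG_MS_PER_REQUEST = 500                     # hypothetical compute time per request
    MEMORY_GB = 2.0
    SERVERLESS_PER_MILLION_INVOCATIONS = 0.20    # $ per 1M invocations (placeholder)
    SERVERLESS_PER_GB_SECOND = 0.0000167         # $ per GB-second (placeholder)

    INSTANCE_HOURLY = 1.00                       # $ per hour per instance (placeholder)
    INSTANCES_FOR_PEAK = 100                     # fleet sized for constant load (placeholder)
    HOURS_PER_MONTH = 730

    def serverless_cost() -> float:
        invocations = REQUESTS_PER_MONTH / 1_000_000 * SERVERLESS_PER_MILLION_INVOCATIONS
        gb_seconds = REQUESTS_PER_MONTH * (AVG_MS_PER_REQUEST / 1000) * MEMORY_GB
        return invocations + gb_seconds * SERVERLESS_PER_GB_SECOND

    def always_on_cost() -> float:
        return INSTANCES_FOR_PEAK * INSTANCE_HOURLY * HOURS_PER_MONTH

    print(f"serverless: ${serverless_cost():,.0f}/month")   # ~ $1.7M with these placeholders
    print(f"always-on : ${always_on_cost():,.0f}/month")    # ~ $73k with these placeholders
    ```

    The exact numbers don’t matter; the point is that per-invocation pricing scales linearly with traffic, while an always-on fleet is a flat monthly cost once it’s sized for your peak.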



  • I have, and I use Calibre with LazyLibrarian instead; it still requires a lot of hand-holding and manual grooming to get a clean library.

    My big issue with Readarr is that it had a hard time fetching data for various popular and/or prolific authors. If I wanted to grab all the books by a particular author, there was a high likelihood it wouldn’t actually pull in the book data needed to do so.


  • brandon@lemmy.world to Selfhosted@lemmy.world · Jellyseer for ebooks? · edited 7 months ago

    I prefer LazyLibrarian over Readarr, but it still leaves a lot to be desired for end-user usability. One of the big issues with ebooks is that the metadata is a mess: each book has a billion different editions with spotty metadata support, which makes it hard to tell what is what (a toy example of that matching problem is sketched after this comment).

    Goodreads seemed like a decent source of data for these types of projects, but they shut off new API access a couple of years ago and legacy access can go away at any moment. Hardcover looks like a promising API alternative, but I’m not sure if anyone has started integrating with them yet. Manga and comics seem to be in a better state, with a more rabid fanbase maintaining the data, but it’s still nowhere near what’s available for movies and TV.
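    To give a feel for why the edition mess bites tools like Readarr and LazyLibrarian, here’s a toy Python sketch that groups edition records under a normalized work key. The sample records and field names are invented for illustration; real metadata feeds are far messier than this.

    ```python
    import re
    from collections import defaultdict

    # Invented, simplified edition records as they might come back from a
    # metadata source; real feeds have far more variation than this.
    editions = [
        {"title": "Dune",                    "author": "Frank Herbert",  "isbn": "9780441013593"},
        {"title": "Dune (40th Anniversary)", "author": "Herbert, Frank", "isbn": "9780340960196"},
        {"title": "DUNE",                    "author": "Frank  Herbert", "isbn": None},
    ]

    def normalize(text: str) -> str:
        """Crude normalization: lowercase, drop parentheticals and punctuation, sort words."""
        text = re.sub(r"\(.*?\)", "", text.lower())
        text = re.sub(r"[^a-z0-9 ]", " ", text)
        return " ".join(sorted(text.split()))  # word sort also handles "Herbert, Frank"

    # Group editions under a single "work" key; this is the kind of fuzzy matching
    # an ebook manager has to do when upstream metadata is inconsistent.
    works = defaultdict(list)
    for edition in editions:
        key = (normalize(edition["author"]), normalize(edition["title"]))
        works[key].append(edition)

    for key, grouped in works.items():
        print(key, "->", len(grouped), "edition(s)")
    ```

    Even this trivial normalization is lossy (word sorting can collide distinct titles), which is roughly why a reliable upstream source like Goodreads was so valuable for these projects.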


  • Unlikely to be feasible for gaming, as you will run into latency and overhead issues. If you want 60 fps, you have roughly 16-17 ms to render each frame (the arithmetic is sketched after this comment).

    At the bare minimum, you are probably going to lose a couple of milliseconds to network latency, even with the best home networking setups.

    Then there is the extra overhead of maintaining state in real time between multiple systems, as well as coordinating what work each system can actually do in parallel. The full set of textures and other data will almost certainly need to live on both machines, since a shared memory pool across the network would be unfeasible. As a result, you will most likely have the same memory constraints, especially on the GPU, on each machine as you would using a single machine.
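    For a sense of how quickly that frame budget evaporates, here’s the arithmetic in a small Python sketch; the network and synchronization figures are assumptions for illustration, not measurements.

    ```python
    # Frame-budget arithmetic for the latency argument above.
    # The network and sync figures are illustrative assumptions, not measurements.

    TARGET_FPS = 60
    frame_budget_ms = 1000 / TARGET_FPS          # ~16.7 ms per frame

    network_rtt_ms = 2.0      # assumed round trip on a very good wired home LAN
    state_sync_ms = 3.0       # assumed cost of serializing and merging shared state

    remaining_ms = frame_budget_ms - network_rtt_ms - state_sync_ms
    print(f"frame budget : {frame_budget_ms:.1f} ms")
    print(f"left for rendering after network + sync: {remaining_ms:.1f} ms "
          f"({remaining_ms / frame_budget_ms:.0%} of the budget)")
    ```

    With those assumptions you have already given up roughly a third of the budget before either machine renders anything, and that is before the duplicated memory footprint comes into play.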