  • the idea of it improving battery life is that generating frames is less computationally intensive than rendering at that framerate (e.g. a game capped at 60 fps with frame gen doubling it to 120 consumes less power than rendering the same game natively at 120 fps). though it's less practical than it sounds, because frame generation only makes sense when the base framerate is already high enough (ideally above 60) to avoid a lot of screen artifacting. So in practical use, this only “saves battery” in the context that you have a 120 Hz+ screen and choose to cap the framerate to 60-75 fps.

    If one is serious about min-maxing battery against performance at a realistic target, cap the screen at 40 Hz: in frame time, 40 fps sits exactly halfway between 30 and 60 fps, but only requires 10 more fps than 30, which is a very realistic minimum performance target on a handheld (quick arithmetic below).
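
    a minimal sketch of that frame-time arithmetic (plain Python, no assumptions beyond the framerates named above):

    ```python
    # frame time in milliseconds for a given framerate
    def frame_time_ms(fps: float) -> float:
        return 1000.0 / fps

    for fps in (30, 40, 60):
        print(f"{fps} fps -> {frame_time_ms(fps):.1f} ms per frame")
    # 30 fps -> 33.3 ms, 40 fps -> 25.0 ms, 60 fps -> 16.7 ms

    # 25.0 ms is the exact midpoint of 33.3 ms and 16.7 ms, so 40 fps
    # buys half the latency gap to 60 fps for only 10 extra frames.
    print((frame_time_ms(30) + frame_time_ms(60)) / 2)  # 25.0
    ```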



  • the PS5 Pro uses a 60 CU RDNA 4-based GPU, so if you want to match that, buy the supposedly rumored 8800 XT that AMD is trying to pump out in volume as they reportedly forgo the top end this generation (basically similar to the RX 480 and RX 5700 XT generations)

    keep in mind, console and PC sales and costs differ because of where each focuses on making money. Sony, for example, makes money off accessory sales (the PS5 Pro has no disc drive and no vertical stand), on top of never addressing the rampant stick drift problem the DualSense has, on top of paid online. None of that is a significant factor on PC, which generally speaking is more front-loaded in cost but over time has lower costs in games, services and such.


  • devs on PC have to decide which set of hardware to optimize for, a choice they make based on hardware adoption trends. There is always a point where a game is so hardware-demanding that it would greatly hinder sales. With a fixed hardware platform, devs have a single concentrated point of hardware adoption to target.

    For instance, say you developed a game where the minimum hardware requirement was slightly higher than a Steam Deck. If enough Steam Deck sales exist, the dev has an incentive to optimize the game further just to get access to that market.


  • basically how I see it, this only makes sense from a consumer standpoint if the decreased developer cost is ALSO decreasing the upfront cost of buying the game, as the worst policy that Valve has on Steam is that a game's base price has to be the same on all storefronts.

    however in reality, most developers do not pass any of that savings to consumers and just take the bigger cut for themselves. So devs are basically trading the future benefit of growing a larger consumer base on a different platform for more upfront profit (see the rough numbers below).

    basically most of the investment money that Epic throws around goes to development and developers, and outside of free games, basically none of it is thrown back into making the platform better for consumers. Developers can complain however much they want that Steam has a “consumer monopoly” (while ignoring the fact that other companies like Riot, and mobile game companies with PC clients like miHoYo, do fine without Steam). This will continue to happen until Epic reinvests some of that money into their client, or devs actually use the benefit of taking a lower cut, bite the bullet, and regularly pass some of it off to consumers.

    regardless of the situation, it is ultimately developers stopping developers if they want to break the “consumer monopoly”
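
    a rough sketch of that cut arithmetic, using Steam's standard 30% cut, Epic's 12% cut, and a purely illustrative $60 price:

    ```python
    # hypothetical $60 game sold under both storefront cuts
    price = 60.00
    steam_cut, epic_cut = 0.30, 0.12  # standard revenue shares

    dev_net_steam = price * (1 - steam_cut)  # 42.00
    dev_net_epic = price * (1 - epic_cut)    # 52.80

    # price the dev could charge on Epic while still netting
    # the same amount as a full-price Steam sale
    break_even = dev_net_steam / (1 - epic_cut)
    print(f"{dev_net_steam:.2f} {dev_net_epic:.2f} {break_even:.2f}")
    # 42.00 52.80 47.73
    ```

    in other words, the dev could knock roughly $12 off the Epic price and still pocket what a Steam sale pays; that gap is the savings that mostly isn't being passed on.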



  • also, to make a rudimentary comparison:

    a CPU is a few very complicated cores; a GPU is thousands of dumb cores.

    it's easier to make something that runs a small set of simple instructions (GPU) faster than something that has a shit ton of complex instructions (CPU), due to, like you mention, branch prediction.

    modern CPU performance gains focus more on parallelism and, in the case of efficiency cores, on scheduling to optimize for performance.

    GPU-wise, it's really something as simple as GPUs typically being memory bottlenecked. Memory bandwidth (memory speed × bus width, with a few caveats, since cache hits lower how much bandwidth is actually needed) is the major indicator of GPU performance. Bus width is fixed by the hardware chip design, so the simplest method to increase general performance is raising clocks (rough arithmetic below).
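
    a minimal sketch of that bandwidth formula; the GDDR6 data rates here are illustrative examples, not tied to any specific card:

    ```python
    # peak memory bandwidth = effective data rate per pin x bus width
    def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
        return data_rate_gbps * bus_width_bits / 8  # bits -> bytes

    # e.g. 16 Gbps GDDR6 on a 256-bit bus
    print(bandwidth_gb_s(16, 256))  # 512.0 GB/s

    # bus width is fixed in silicon, so faster memory clocks are
    # the remaining lever: same 256-bit bus, 18 Gbps modules
    print(bandwidth_gb_s(18, 256))  # 576.0 GB/s
    ```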