Nvidia vs. AMD graphics on laptops

No mental gymnastics to perform here, I did the research and came to my conclusion based on evidence.

No, that is wrong, stop spreading bull.
Screen-space means only using data available in the view (like SSRTGI and similar shaders), which is not what the modern CryEngine "ray-tracing" implementation does (well, not REALLY ray-tracing, that's just a buzz term for people to understand; it's actually path-tracing combined with some voxel magic). Some of its effects are proximity-based, but not screen-space, although obviously you can use screen-space effects instead of rendering path-traced stuff in real time if you want - the engine allows that.
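To make that distinction concrete, here's a tiny hypothetical toy (not CryEngine or ReShade code, just an illustration) of why screen-space techniques are limited: the "ray" can only test against the depth buffer, so anything off-screen or occluded simply doesn't exist for it:

```python
# Toy 1D illustration of the screen-space limitation: a "ray" marched through
# the depth buffer can only hit geometry visible in the current view.
# Everything off-screen or occluded does not exist for the tracer.
# (Hypothetical toy code, not any engine's actual source.)

def trace_screen_space(depth_buffer, start_px, step_px,
                       ray_depth_start, ray_depth_step, max_steps=64):
    """March a ray across the depth buffer; return the hit pixel or None."""
    px, ray_depth = start_px, ray_depth_start
    for _ in range(max_steps):
        px += step_px
        ray_depth += ray_depth_step
        if px < 0 or px >= len(depth_buffer):
            return None          # ray left the screen: no data, no hit
        if ray_depth >= depth_buffer[px]:
            return px            # ray went behind a visible surface: report a hit
    return None

# A tiny "scene": one depth value per pixel column. Anything behind the camera
# or outside this buffer can never be hit, which is why pure screen-space
# GI/reflections miss off-screen light and show edge artifacts.
depth = [5.0, 5.0, 4.0, 3.0, 2.5, 2.5, 6.0, 6.0]
print(trace_screen_space(depth, start_px=0, step_px=1, ray_depth_start=1.0, ray_depth_step=0.5))  # hits pixel 4
print(trace_screen_space(depth, start_px=6, step_px=1, ray_depth_start=1.0, ray_depth_step=0.5))  # walks off-screen -> None
```

The second ray simply walks off the screen and returns nothing - that's exactly the data you don't have in screen space.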

Thanks for the history lesson, but how is that comparable to what they're doing now?
You don't see any bleeding in their full-scene demo (unless you go inside the dev tools) - again, it's easy to check by running it yourself.

Anyway, it's beside the point - it doesn't matter exactly what CryEngine does, it just sets a precedent for how this should be done properly and shows why RTX is a dead end as an evolutionary mechanism.
One direction is performance, independence and quality; the other is a money grab and monopolization built on brute-force ray-tracing (wink-wink, RTX).

Yes, that is exactly my point, although engine-based is obviously much better than ReShade in terms of realism and in freeing game devs from having to think about this stuff.

In my view software path-tracing is the future of "ray-tracing" (we're talking broadly here), unless NoVydia buys everyone's brains or something :rofl:

1 Like

They have a lot of similarities; if you watch the DF video you can see how the Crytek and similar methods work and their limitations/nuances.

You can see bleeding and loss of detail in certain scenarios, but you have to pixel-peep to do it, because Crytek does a more accurate close-up RT effect while distant objects are done with voxels/cubes in a low-complexity representation. SSAO, for example, uses depth data and a lower-resolution version of the scene to generate approximations of AO; Crytek and SSRTGI obviously do something different, but they use performance techniques similar to SSAO to accomplish better AO results and to pull off RT effects.
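If the SSAO reference isn't familiar, here's a heavily simplified hypothetical toy of the idea - approximating occlusion from nothing but nearby depth values. Real SSAO works in view space with hemisphere samples, normals and range checks, but the spirit is the same:

```python
# Toy SSAO-style estimate: a pixel is darkened based on how many nearby
# depth samples are closer to the camera than it is. No scene geometry,
# no rays into the world - just the depth buffer. Purely illustrative.
import random

def toy_ssao(depth_buffer, x, y, radius=2, samples=8, bias=0.05):
    h, w = len(depth_buffer), len(depth_buffer[0])
    center = depth_buffer[y][x]
    occluded = 0
    for _ in range(samples):
        sx = min(max(x + random.randint(-radius, radius), 0), w - 1)
        sy = min(max(y + random.randint(-radius, radius), 0), h - 1)
        # A neighbour noticeably closer to the camera counts as an occluder.
        if depth_buffer[sy][sx] < center - bias:
            occluded += 1
    return 1.0 - occluded / samples   # 1.0 = fully lit, 0.0 = fully occluded

random.seed(0)
# Flat floor (depth 5.0) with a box (depth 2.0) next to the probed pixel.
depth = [[5.0] * 8 for _ in range(8)]
for y in range(3, 6):
    for x in range(3, 6):
        depth[y][x] = 2.0
print(round(toy_ssao(depth, x=6, y=4), 2))  # pixel beside the box comes out partially occluded
```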

If you're really interested there are many technical blogs you can read regarding RTing and the different techniques. I'm not aware of your level of knowledge regarding shaders, etc., so you may need to learn a bit as some of them get very technical.

In some very general, ballpark sense, maybe…
But they're still very different, especially when it comes to rendering stuff from behind and around the full scene.

It’s a bit like saying that Crysis uses similar 3D techniques to Wolfenstein 3D :joy:

I read a lot of stuff and know a thing or two about shaders.
So sure - share some blogs if you have some in mind, maybe there are some I've missed.

1 Like

My point is that they use similar performance optimization techniques to accomplish their results, not that they do the same thing. They accomplish different things using almost the same kinds of performance optimizations. You're just being pedantic here, man.

Here is a blog I like that has been discussing RTing for over a decade now; the guy basically lives and breathes RTing.

Here is one looking at SSRTGI in Unigine (similar to ReShade's).

Here is a quote from MM's Patreon regarding ReShade's RTGI. It operates within screen space only, having access only to SS data, while using minimal rays to allow for better performance, plus likely some clever denoising to prevent it from looking awful lol

my implementation uses neither voxels nor does it access or store data outside the screen. Even though these restrictions are pretty tight, the feature itself is coming along nicely!

Major constraint at this point is performance (ray tracing, duh) - the way ray/path tracing works cannot be cut short in any way so I have to find a solution that requires as few rays as possible while not being noisy and also temporally stable.
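To illustrate the "as few rays as possible while staying temporally stable" constraint he describes, here's a minimal sketch (my own toy, not MM's actual RTGI code) of the simplest temporal denoiser: blend the current noisy 1-ray estimate with the previous frame's result:

```python
# Sketch of the "few rays + temporal stability" trade-off: a noisy
# 1-ray-per-pixel estimate is blended with the previous frame's value
# (an exponential moving average), one of the simplest temporal denoisers.
# Purely illustrative, not any shipping implementation.
import random

def noisy_gi_sample(true_value=0.6, noise=0.4):
    """Stand-in for a 1-sample-per-pixel path-traced GI estimate."""
    return max(0.0, true_value + random.uniform(-noise, noise))

def temporal_accumulate(frames=60, alpha=0.1):
    history = noisy_gi_sample()
    for _ in range(frames):
        current = noisy_gi_sample()
        # Keep 90% of the history, blend in 10% of the new noisy sample.
        history = (1.0 - alpha) * history + alpha * current
    return history

random.seed(1)
print(round(temporal_accumulate(), 3))  # converges near the true value 0.6 with very few rays per frame
```

The catch, as he hints, is that heavier history blending means more ghosting under motion, which is why the denoising has to be clever rather than just aggressive.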

Here is Ogre3D talking about their VCT.

And obviously Crytek, but they're a bit vague on the details.

The only one I haven't read so far is Andrew Pham, thx.

You can also read the paper on VCT from Nvidia linked in Pham's blog.

Something to keep in mind too is that VCT also uses SS; here is Nvidia's example from the paper:

We rely on a compact Gaussian-Lobes representation to store the filtered distribution of incoming light directions, which is done efficiently in parallel by relying on a screen-space quad-tree analysis. Our voxel filtering scheme also treats the NDF (Normal Distribution Function) and the BRDF in a view-dependent way. Finally, we render the scene from the camera. For each visible surface fragment, we combine the direct and indirect illumination. We employ an approximate cone tracing to perform a final gathering [Jen96], sending out a few cones over the hemisphere to collect illumination distributed in the octree
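Roughly what that final gathering means in code: a handful of cones instead of hundreds of rays, each cone sampling pre-filtered radiance whose level grows with the cone footprint. A hypothetical toy with made-up numbers, not the paper's implementation:

```python
# Toy cone-traced final gather: a few wide cones over the hemisphere, each
# stepping through pre-filtered (mip-mapped / octree-averaged) radiance.
# The coarser the cone footprint at a given distance, the coarser the level
# sampled. Illustrative only; the real thing walks an actual sparse octree.
import math

def sample_prefiltered_radiance(distance, cone_angle, mip_levels):
    """Pick the mip level whose footprint matches the cone diameter at this distance."""
    diameter = 2.0 * distance * math.tan(cone_angle / 2.0)
    level = min(int(math.log2(max(diameter, 1.0))), len(mip_levels) - 1)
    return mip_levels[level]

def cone_trace(mip_levels, cone_angle=math.radians(60), steps=8, step_size=1.5):
    radiance, transmittance, distance = 0.0, 1.0, 1.0
    for _ in range(steps):
        sample = sample_prefiltered_radiance(distance, cone_angle, mip_levels)
        radiance += transmittance * sample * 0.25   # accumulate front-to-back
        transmittance *= 0.75                       # fake occlusion along the cone
        distance += step_size
    return radiance

# Pre-filtered radiance per mip level (coarser level = blurrier, dimmer average).
mips = [1.0, 0.8, 0.5, 0.3, 0.1]
# "A few cones over the hemisphere" instead of many individual rays:
indirect = sum(cone_trace(mips) for _ in range(6)) / 6
print(round(indirect, 3))
```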

Crytek goes a slight step beyond that paper's approach: in their implementation, VCT and cubemaps are used at distance, while actual low-ray-count tracing is done in the immediate space around the camera, with a fairly short distance for actual mesh-traced reflections etc., at a low resolution to keep performance from tanking. Crytek's implementation is 3/4 screen space and 1/4 the immediate zone around the player/camera, which gets low-ray-count tracing somewhat similar to RTX but with a lower ray count, lower resolution, and a need for more clever de-noising.

This allows Crytek to accomplish some nice self-reflections and other near-camera tricks at decent enough FPS, while VCT and cubemaps handle less important distant objects that won't get as much close-up scrutiny and should be good enough in most situations not to be noticeable, unless you're really picky about watching when they pop in.
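Conceptually, the hybrid boils down to picking a technique per distance band. A toy dispatcher with made-up thresholds (the real engine exposes its own tunables, these numbers are not Crytek's):

```python
# Hypothetical dispatcher for the hybrid scheme described above: expensive
# low-ray-count tracing only in a small zone around the camera, voxel cone
# tracing at medium range, cubemaps for the far distance. Thresholds invented
# for illustration.

NEAR_TRACE_DISTANCE = 15.0    # metres of "real" traced reflections around the camera
VOXEL_DISTANCE = 120.0        # beyond this, fall back to cubemaps

def pick_reflection_technique(distance_from_camera: float) -> str:
    if distance_from_camera <= NEAR_TRACE_DISTANCE:
        return "low-ray-count mesh tracing (needs heavy denoise)"
    if distance_from_camera <= VOXEL_DISTANCE:
        return "voxel cone tracing (coarse cube representation)"
    return "cubemap lookup (cheapest, least accurate)"

for d in (3.0, 40.0, 500.0):
    print(f"{d:>6.1f} m -> {pick_reflection_technique(d)}")
```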

One example of things that are not RT in the demo: you don't see the distant building in the puddle reflections - it's only the immediate vicinity, for obvious performance reasons. Lights and near-camera objects are reflected close by, but if you really scrutinize the demo you can also find some issues with reflections of things behind the camera/viewport, but that's whatever.

Nvidia's 2012 presentation on the topic: https://on-demand.gputechconf.com/gtc/2012/presentations/SB134-Voxel-Cone-Tracing-Octree-Real-Time-Illumination.pdf

It's WAY more clever than throwing hardware at a more classic ray tracing using plain old ray marching like AMD/Nvidia hardware does. While hardware-accelerated RT is more accurate, higher resolution, etc., is the die space worth it? I feel not, but we're stuck with it now.

I would say that the time for an OSS-driver Nvidia system is still a ways off. Your best bet is either Intel or AMD for graphics, assuming you don't need any of Nvidia's features. If you get an Intel CPU with an AMD GPU (idk if many of those are a thing atm) you could have the Intel encoder, which is stellar, plus AMD gaming prowess on Linux. If that's not an option, just go with the best AMD system you can, if you're mainly concerned with having a more OSS-friendly system and don't care about encoders, compute, etc.

1 Like
  • "More accurate" is a very blanket term: it might be more accurate depending on the implementation of the software you compare it with; however, since they use ray-tracing instead of path-tracing, it technically cannot simulate some physical effects at all (without resorting to other fakery)

  • Higher resolution of what? All real-time ray-tracing uses denoising, including NoVydia's RTX. However, RTX uses low resolution + denoise for lighting/shadows, while path-tracing + denoise (and similar-in-spirit techniques like voxels/cubemaps etc.) uses much more sophisticated techniques, which let it bounce significantly fewer rays without compromising anything visually, so on the same GPU it actually produces higher visual quality and significantly better performance than NoVydia RTX, especially if you turn DLSS or other forms of upscaling off (see the sketch below).

For example, modern top-tier GPUs can handle the CryTek demo at 4K/60 fps without any form of resolution upscaling, while RTX crawls on its knees if you turn upscaling off. And if you do wish to use upscaling - it will outperform RTX as well.
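The underlying reason ray budgets matter so much: noise in a stochastic lighting estimate only falls off as roughly 1/sqrt(N) rays, so brute-forcing quality with more rays gets expensive fast, while pre-filtering/denoising claws the quality back far more cheaply. A quick toy demonstration (my own numbers, not a benchmark of any engine):

```python
# Minimal Monte Carlo illustration of the ray-count argument: the error of a
# noisy lighting estimate shrinks only about as fast as 1/sqrt(N) rays.
# Quadrupling the rays merely halves the noise, which is why smarter
# pre-filtered / denoised approaches beat brute force. Toy example only.
import random, statistics

def estimate_lighting(num_rays, true_value=0.6, noise=0.5):
    """Average of num_rays noisy 'ray' samples around the true radiance."""
    return sum(true_value + random.uniform(-noise, noise) for _ in range(num_rays)) / num_rays

random.seed(0)
for rays in (1, 4, 16, 64, 256):
    errors = [abs(estimate_lighting(rays) - 0.6) for _ in range(2000)]
    print(f"{rays:>4} rays -> mean abs error {statistics.mean(errors):.4f}")
```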

That is true, however CryEngine lets you tweak that "distance of compromise" via config, for as long as your hardware can handle it. Since its release in 2019, modern top GPUs can handle a much larger distance for those cubemaps, and if you also upscale the resolution, it will certainly be far enough that you won't be able to notice.

1 Like

My last too-off-topic reply, just to clarify my stance: I'm perfectly happy with this kind of reflection @keybreak - this looks gorgeous with the mountain and stuff in the ice on my Switch, so I'm pretty easy to please

mh

1 Like

I skipped many comments here.

A short bit of info:

Wayland improved graphics benchmarks in 2022 when using a full AMD system (AMD CPU and AMD GPU driver) with native Wayland support. That does not mean Xwayland.

If a game supports native Wayland, you will notice the performance difference between Xorg and Wayland.

1 Like

Nobody wins…it’s all a game. :rofl:

1 Like

Looking at recent laptops, I commonly see a Ryzen CPU with an AMD GPU, but haven't found an Intel CPU paired with an AMD GPU. I am also wondering about coding and compilation. Core i7s are blazing fast; are Ryzen CPUs up to par? It looks like the 12th-gen Core i7 has 12 cores and the AMD Ryzen 9 6000 series commonly has 8 cores. That may be another factor, but it's probably worth another thread of discussion.

If your primary concern is the CPU, you really can't go wrong with Ryzen 6000.

1 Like

I am the oddity here: my experience with Intel & Nvidia on a laptop is mostly fine, so Optimus is not a total hellscape.

Better to get Ryzen with AMD Graphics.

Edit: There are also lots of Ryzen laptops with Nvidia graphics from different manufacturers.

1 Like