Nvidia vs. AMD graphics on laptops

Yes, that is exactly the way to do this… unfortunately, such models are very rare.

Another option, back in the day when laptops shipped with a simpler BIOS instead of the UEFI garbage: you could modify or reverse-engineer the BIOS on some laptop models (like certain Sony VAIOs) to get a proper GPU switcher inside the BIOS…

P.S. That's not to say it's impossible to make an Optimus laptop work decently (in fact, on Linux it is simpler, in my view); it just won't be flawless, because the whole on-the-fly GPU-switching concept is absolute insanity.

1 Like

You do live on another planet. Ryzen, man… Ryzen! First of all, I would never buy a laptop to do gaming on. I'm a desktop user, and if I were a gamer, that's what I would be using. No hybrid GPU crap for me. On laptops, I'll stick with onboard graphics.

2 Likes

My advice would be to just get a gaming desktop with top-tier AMD graphics. You would pay less for it and have a much better time than on any laptop.

I find laptops to be more trouble than they're worth, especially high-end laptops you'll pay a fortune for.

Optimus graphics is just rubbish, I wouldn't even consider that. And NoVidea tends to drop support for older hardware, so proprietary drivers become too much trouble to maintain. After a while, they get dropped from the repos, and soon you're stuck with nouveau – which, while a valiant effort, is also utter rubbish. This is a much worse problem on a laptop than on a desktop, because you can't give your old NoVidea graphics card to Tim Cook's giraffe to step into and just get a newer one (or, even better, an AMD one, as you'll probably be pretty pissed off at NoVidea when that happens).

The only advantage a laptop has over a desktop is its mobility – having a laptop is nice when you're travelling or to bring with you to work, but for anything else, a desktop is far superior. In fact, the money you'd save by getting a really good desktop, compared to a much crappier laptop, would be enough for you to get a cheaper laptop as well, which you can use to visit this forum while you're away from home and discuss how bad NoVidea is (that's pretty much the only thing I use my laptop for). :rofl:

2 Likes

You may not like its proprietary nature, but RTX is far from garbage and much more sophisticated than SSRTGI.

  1. Nvidia also absolutely trashes AMD when it comes to accelerated encoding/decoding.

  2. Nvidia has better OpenCL support, and obviously CUDA. You can't even get OpenCL support that matters in a Flatpak with AMD, but with Nvidia it's easy.

  3. The render path for Blender with Nvidia is much better. Even with HIP, the 6900 XT is only at about RTX 3060 level in Blender.

  4. Nvidia-based laptops are generally better, with more options than AMD, even when paired with an AMD CPU.

The only scenario where Nvidia loses to AMD on Linux is OSS drivers (and gaming being generally better on AMD because of that); on literally any other metric, Nvidia flat-out wins.

This is very true; hybrid graphics are a big headache on Linux, less so on Windows, but still a pain point unless you have a MUX switch.

Not far; it's utter garbage compared to engine-based path tracing (SSRTGI is a different story, but it works everywhere).
The implementation really sucks AND it's proprietary.

What I mean is: read the Future of Ray Tracing section to get some ideas.

1 Like

Screen-space effects are by nature less accurate, prone to artifacts, and rather noisy.

Again, you may not like RTX's nature, but to say it's garbage is a plain lie. RTX is far beyond SSRTGI, though I prefer universal options for gaming like SSRTGI.

It's not a lie; it's your ignorance.

Software engine-based implementations, like the CryEngine example, are not screen-space, if you actually read the stuff. :man_facepalming:

Basically, if you use the power of the human brain to implement calculations similar to SSRTGI while using all the available data around you (hence you need the game engine to do that), there are no problems, and such techniques even have some benefits, like being much more performant (since you can use not only the CPU but the GPU) and being cross-platform, without resorting to an insane brute-force proprietary approach like NoVydia RTX.
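As a hedged illustration (this is not CryEngine's code – the voxel grid and all the numbers here are invented for the example), a minimal sketch of why engine-side GI sees data a screen-space shader never can: the gather below reads from a voxelized light grid covering the whole scene, including geometry behind the camera.

```python
# Hypothetical sketch: engine-side GI gather from a voxelized scene.
# Unlike a screen-space effect, the voxel grid covers the WHOLE scene,
# so off-screen geometry still contributes light.
import numpy as np

GRID = 32                      # voxels per axis (toy resolution)
CELL = 1.0                     # world-space size of one voxel

# Radiance injected into the grid by the engine each frame
# (random placeholder data standing in for emissive/lit voxels).
rng = np.random.default_rng(0)
radiance = rng.random((GRID, GRID, GRID)).astype(np.float32) * 0.1

def sample_voxel(p):
    """Return stored radiance at world position p (clamped to the grid)."""
    i = np.clip((p / CELL).astype(int), 0, GRID - 1)
    return radiance[i[0], i[1], i[2]]

def gather_gi(pos, normal, steps=8, rays=16):
    """March a few rays through the voxel grid and average what they see.
    Rays can point anywhere -- including behind the camera."""
    total = 0.0
    for _ in range(rays):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, normal) < 0:          # keep rays in the upper hemisphere
            d = -d
        p = pos.copy()
        for _ in range(steps):
            p += d * CELL
            total += sample_voxel(p)
    return total / (rays * steps)

print(gather_gi(np.array([16.0, 16.0, 16.0]), np.array([0.0, 1.0, 0.0])))
```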

The only areas where you might need that stupid, insane RTX stuff are 3D modeling (and that's only because of how 3D modeling software currently implements ray tracing) and maybe rendering full-scene ray-traced games, but we're far away from even considering that, performance-wise.

Besides, it's very easy to see why it's garbage if you know anything about path tracing.
Even if they had made exactly the same stupid decisions but gone with path tracing instead of ray tracing, we would already have full-scene traced games playable worldwide at 60 FPS, even in 4K.

1 Like

Screen space is literally in the name lol (SSRTGI)

It doesn't matter, because under the hood it's not really screen-space.

I mean this, if you’re too lazy

It's similar: they use voxels for light calculations, for example, which is visually indistinguishable from RTX ray tracing.
Fire up the CryEngine scene in the engine and compare – see for yourself.

1 Like

You've got some serious bias to perform those mental gymnastics, friend.

Both Crytek and ReShade perform the ray tracing in viewport/screen space, and part of the performance improvement comes from only considering things in screen space, but also from being approximations. This is how Crytek originally handled ambient occlusion and brought it to the masses in the form of SSAO. Much like SSAO, they also operate at lower resolutions, generally 1/2 or 1/4, and objects at a distance lose clarity and can suffer from some bleeding, but it's generally not a problem.
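As a hedged toy of that half/quarter-resolution trick (the `expensive_effect` function is a stand-in, not any real engine pass): compute the costly effect on a downsampled buffer, then upsample back to full resolution.

```python
# Toy version of the classic low-resolution trick used by SSAO-style
# effects: evaluate the expensive pass at 1/4 resolution, then upsample.
import numpy as np

def expensive_effect(buf):
    """Stand-in for a costly per-pixel pass (e.g. AO or a SS ray march)."""
    return np.sqrt(buf)          # placeholder math; cost scales with pixels

def render_low_res(full, factor=4):
    low = full[::factor, ::factor]          # downsample by skipping pixels
    low_fx = expensive_effect(low)          # pay the cost on ~1/16 the pixels
    # Nearest-neighbour upsample back to full size (real engines use
    # bilateral/depth-aware upsampling to hide the seams).
    return np.repeat(np.repeat(low_fx, factor, axis=0), factor, axis=1)

frame = np.random.default_rng(1).random((1080, 1920)).astype(np.float32)
out = render_low_res(frame)
print(out.shape)                 # (1080, 1920): full-res result, 1/16 the work
```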

RTX uses ray marching, casting (at this moment) a finite number of rays, which is the only reason it suffers from the noise it does. As the hardware improves, it is, from a technical perspective, far superior to SS effects due to being more accurate, higher detail, and able to consider things outside of screen space. While I don't feel RTX or hardware-based RT is the way to go (much like when tessellation became a thing, I thought it was eh), it is the technically superior method from a quality standpoint.
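As a hedged, vendor-neutral toy of that noise problem: estimating the same incoming light with different ray budgets shows the error shrinking roughly as 1/√N, which is why low-ray-count tracing leans so hard on denoisers.

```python
# Monte Carlo noise vs. ray count: the standard deviation of the
# estimate falls roughly as 1/sqrt(N), so few rays => visible noise.
import numpy as np

rng = np.random.default_rng(2)

def incoming_light(direction):
    """Toy 'sky': brightness depends on how much the ray points up."""
    return max(direction[2], 0.0)

def estimate(n_rays):
    total = 0.0
    for _ in range(n_rays):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        total += incoming_light(d)
    return total / n_rays

for n in (4, 16, 64, 256):
    samples = [estimate(n) for _ in range(200)]
    print(f"{n:4d} rays: mean={np.mean(samples):.3f}  noise={np.std(samples):.3f}")
```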

I much prefer what CryEngine, ReShade, and Unreal Engine do, allowing for RT-style effects without needing specific hardware to accomplish them.

Here is an analysis DF did a while ago of this sort of technique; it's really a fun way to accomplish it and is incredibly clever. It has its limitations, though.

1 Like

No mental gymnastics to perform here; I did the research and came to my conclusion based on evidence.

No, that is wrong; stop spreading bull.
Screen-space means using only the data available in the view (like SSRTGI and similar shaders), which the modern CryEngine "ray tracing" implementation (well, not REALLY – it's just a buzzword for people to understand; it is actually path tracing combined with some voxel magic) is not. Some of its effects are proximity-based, but not screen-space, although obviously you can use screen-space effects if you want instead of rendering path-traced stuff in real time – the engine allows that.

Thanks for the history lesson, but how is that comparable to what they're doing now?
You don't see any bleeding in their full-scene demo (unless you're inside the dev tools); again, it's easy to check by running it.

Anyway, it's beside the point; it doesn't matter what exactly CryEngine does – it just sets a precedent for how this should properly be done and shows why RTX sucks as a dead-end evolutionary mechanism.
It shows which direction leads to performance, independence, and quality, and which is a money grab built on monopolization and brute-force ray tracing (wink-wink, RTX).

Yes, that is exactly my point, although engine-based is obviously much better than ReShade in terms of realism and helping game devs not think about this stuff.

In my view, software path tracing is the future of "ray tracing" (we're talking broadly here), unless NoVydia buys everyone's brains or something :rofl:

1 Like

They have a lot of similarities; if you watch the DF video, you can see the how, the limitations, and the nuance of the Crytek and similar methods.

You can see bleeding and loss of detail in certain scenarios, but you have to pixel-peep to do it, because Crytek does a more accurate close-up RT effect while distant objects are done with voxels/cubes in a low-complexity representation. SSAO, for example, uses depth data and a lower-resolution version of the scene to generate approximations of AO; Crytek and SSRTGI obviously do something different, but they use performance techniques similar to SSAO's to achieve better AO results and to accomplish RT effects.
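As a hedged toy of that depth-data trick (the buffer and thresholds are invented; this is the general idea, not a shipping SSAO shader): darken a pixel based on how many nearby depth samples sit in front of it.

```python
# Toy depth-buffer AO: for each pixel, count how many neighbours are
# closer to the camera; many closer neighbours => pixel sits in a crease.
import numpy as np

rng = np.random.default_rng(3)
depth = rng.random((64, 64)).astype(np.float32)   # fake depth buffer

def ssao_like(depth, radius=2, bias=0.02):
    h, w = depth.shape
    ao = np.ones_like(depth)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            window = depth[y - radius:y + radius + 1, x - radius:x + radius + 1]
            occluders = np.count_nonzero(window < depth[y, x] - bias)
            ao[y, x] = 1.0 - occluders / window.size   # more occluders => darker
    return ao

print(ssao_like(depth).mean())
```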

If you're really interested, there are many technical blogs you can read about RT and the different techniques. I'm not aware of your level of knowledge regarding shaders, etc., so you may need to learn a bit, as some can get very technical.

In some very general, ballpark sense, maybe…
But they're still very different, especially when it comes to rendering stuff behind and around the full scene.

It’s a bit like saying that Crysis uses similar 3D techniques to Wolfenstein 3D :joy:

I read a lot of stuff and know a thing or two about shaders.
So sure – share some blogs if you have any in mind; maybe there are some I've missed.

1 Like

My point is that they use similar performance-optimization techniques to accomplish their results, not that they do the same thing. They accomplish different things using almost the same kinds of performance optimizations. You're just being pedantic here, man.

Here is a blog I like that has been discussing RT for over a decade now; the guy basically lives on RT.

Here is one looking at SSRTGI in Unigine (similar to ReShade's).

Here is a quote from MM's Patreon regarding ReShade's RTGI. It operates within screen space, only having access to SS data, while using minimal rays for better performance, plus likely some clever denoising to prevent it from being awful lol
https://www.patreon.com/mcflypg/posts?filters[tag]=ray+tracing

my implementation uses neither voxels nor does it access or store data outside the screen. Even though these restrictions are pretty tight, the feature itself is coming along nicely!

Major constraint as this point is performance (ray tracing, duh) - the way ray/path tracing works cannot be cut short in any way so I have to find a solution that requires as few rays as possible while not being noisy and also temporally stable.
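A hedged sketch of one generic way to get "as few rays as possible" while staying temporally stable (temporal accumulation – a common technique, not necessarily what RTGI itself does): blend each frame's noisy few-ray estimate into a running average, trading a little lag for much less noise.

```python
# Temporal accumulation: blend each noisy few-ray frame into a running
# average. Noise drops over time at the cost of slight temporal lag.
import numpy as np

rng = np.random.default_rng(4)
TRUE_IMAGE = np.full((8, 8), 0.5, dtype=np.float32)   # the converged result

def noisy_frame():
    """One frame rendered with very few rays: right on average, but noisy."""
    return TRUE_IMAGE + rng.normal(scale=0.2, size=TRUE_IMAGE.shape)

ALPHA = 0.1                       # how much of the new frame to trust
history = noisy_frame()
for _ in range(60):
    history = ALPHA * noisy_frame() + (1 - ALPHA) * history

print("per-pixel error after 60 frames:", np.abs(history - TRUE_IMAGE).mean())
```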

Here is Ogre3D talking about their VCT:
https://www.ogre3d.org/2019/08/05/voxel-cone-tracing

And obviously Crytek, but they're a bit vague on the details.

So far, the only one I haven't read is Andrew Pham; thanks.

You can also read the paper on VCT from Nvidia linked in Pham's blog.

Something to keep in mind, too, is that VCT also uses SS; in Nvidia's example in the paper:

We rely on a compact Gaussian-Lobes representation to store the filtered distribution of incoming light directions, which is done efficiently in parallel by relying on a screen-space quad-tree analysis. Our voxel filtering scheme also treats the NDF (Normal Distribution Function) and the BRDF in a view-dependent way. Finally, we render the scene from the camera. For each visible surface fragment, we combine the direct and indirect illumination. We employ an approximate cone tracing to perform a final gathering [Jen96], sending out a few cones over the hemisphere to collect illumination distributed in the octree.
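As a hedged toy of what that "approximate cone tracing" final gather means (a generic illustration, not the paper's actual algorithm): march a few cones and sample coarser prefiltered levels as each cone's footprint widens.

```python
# Toy approximate cone tracing: instead of many rays, march a few cones
# and sample coarser prefiltered levels as the cone footprint widens.
import numpy as np

rng = np.random.default_rng(5)

# Prefiltered radiance "mip chain": level 0 fine, higher levels coarser.
# (Placeholder data; a real engine filters its voxel octree into these.)
mips = [rng.random((32 >> lvl,) * 3).astype(np.float32) for lvl in range(4)]

def sample_mip(level, p):
    size = mips[level].shape[0]
    i = np.clip((p * size).astype(int), 0, size - 1)   # p in [0,1)^3
    return mips[level][i[0], i[1], i[2]]

def trace_cone(origin, direction, aperture=0.3, steps=8):
    total, dist = 0.0, 0.05
    for _ in range(steps):
        radius = dist * aperture              # cone footprint grows with distance
        level = min(int(np.log2(max(radius * 32, 1.0))), len(mips) - 1)
        total += sample_mip(level, origin + direction * dist)
        dist *= 1.6                           # step size grows with the cone
    return total / steps

# Final gather: a few cones over the hemisphere instead of hundreds of rays.
cones = [np.array(v) / np.linalg.norm(v) for v in
         ([0, 0, 1], [0.7, 0, 0.7], [-0.7, 0, 0.7], [0, 0.7, 0.7], [0, -0.7, 0.7])]
print(sum(trace_cone(np.array([0.5, 0.5, 0.1]), c) for c in cones) / len(cones))
```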

Crytek goes a slight step beyond that paper's approach: in their implementation, VCT and cubemaps are used at a distance, and actual low-ray-count tracing is done in the immediate space around the camera, with a fairly short range for actual mesh-traced reflections, etc., at a low resolution to prevent performance from tanking. Crytek's implementation is roughly 3/4 screen space and 1/4 the immediate zone around the player/camera, which gets low-ray-count tracing somewhat similar to RTX but with a lower ray count, lower resolution, and a need for more clever denoising.

This allows Crytek to accomplish some nice self-reflections and other near-camera tricks at decent enough FPS, while letting VCT and cubemaps handle less important distant objects that won't face as much close-up scrutiny and should, in most situations, be good enough not to be noticeable, unless you're really picky about watching when they pop in.
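A hedged toy of that near/far split (the threshold and return values are invented for the illustration, not Crytek's actual numbers): trace properly only within a near radius, and fall back to cheap prefiltered lookups beyond it.

```python
# Toy hybrid GI: expensive tracing only near the camera, cheap
# prefiltered fallbacks (voxels/cubemaps) beyond a distance threshold.
import numpy as np

NEAR_RADIUS = 10.0         # invented threshold, tunable per hardware budget

def trace_rays(hit_point):
    """Stand-in for low-ray-count mesh tracing (expensive, accurate)."""
    return 0.9

def voxel_or_cubemap_lookup(hit_point):
    """Stand-in for a prefiltered voxel/cubemap fetch (cheap, approximate)."""
    return 0.6

def shade(camera, hit_point):
    dist = np.linalg.norm(hit_point - camera)
    if dist < NEAR_RADIUS:
        return trace_rays(hit_point)            # close-up: real tracing
    return voxel_or_cubemap_lookup(hit_point)   # far away: cheap approximation

cam = np.zeros(3)
print(shade(cam, np.array([3.0, 0.0, 0.0])))    # near -> traced
print(shade(cam, np.array([50.0, 0.0, 0.0])))   # far  -> voxel/cubemap
```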

One example of things not ray-traced in the demo: you don't see the distant building in the puddle reflections; it's only the immediate vicinity, for obvious performance reasons. Lights and near-camera objects are reflected close by, but if you really scrutinize the demo, you can also find some issues with reflections of things behind the camera/viewport, but that's whatever.

demo

Nvidia's 2012 presentation on the topic: https://on-demand.gputechconf.com/gtc/2012/presentations/SB134-Voxel-Cone-Tracing-Octree-Real-Time-Illumination.pdf

It's WAY more clever than throwing hardware at a more classic ray tracing using plain old ray marching like AMD/Nvidia hardware does. While hardware-accelerated RT is more accurate, higher resolution, etc., is the die space worth it? I feel not, but we're stuck with it now.

I would say that the time for an OSS-driver Nvidia system is still a ways off. Your best bet is either Intel or AMD for graphics, assuming you don't need any of Nvidia's features. If you get an Intel CPU with an AMD GPU (idk if many of those are a thing atm), you could have the Intel encoder, which is stellar, plus AMD's gaming prowess on Linux. If that's not an option, just go with the best AMD system you can, if you're mainly concerned with having a more OSS-friendly system and don't care about encoders, compute, etc.

1 Like
  • "More accurate" is a very blanket term; it might be more accurate depending on the implementation of the software you compare it with. However, since they use ray tracing instead of path tracing, it technically cannot simulate some physical effects at all (without using other fakery).

  • Higher resolution of what? All real-time ray tracing uses denoising, including NoVydia's RTX. However, RTX uses low resolution + denoise for lighting/shadows, while path tracing + denoise (and similar-in-spirit techniques like voxels/cubemaps, etc.) uses much more sophisticated techniques, which allow bouncing significantly fewer rays without compromising anything visually. So on the same GPU, it actually produces higher visual quality and significantly more performant results than NoVydia's RTX, especially if you turn DLSS or other forms of upscaling off.

For example, modern top-tier GPUs can handle the CryTek demo at 4K / 60 FPS without any form of resolution upscaling, while RTX will crawl on its knees if you turn upscaling off. And if you do wish to use upscaling, it will outperform RTX as well.

That is true; however, CryEngine lets you tweak that "distance of compromise" via config for as long as your hardware can handle it. Since its release in 2019, modern top GPUs can handle a much larger distance for those cubemaps, and if you're willing to upscale the resolution, that distance can certainly be pushed far enough that it won't be possible to notice.

1 Like