Hi,
I noticed my GPU often goes to 99% and the fan runs wild when gaming. It doesn’t even seem to depend on the complexity of the game: it doesn’t matter whether it is Baldur’s Gate 3, Cities: Skylines 1/2, Satisfactory or Rebel Galaxy Outlaw, all of them go to 99%. It doesn’t happen in Dead Cells, so my guess is it mostly happens in 3D games.
As I am pretty new to Linux and only noticed I had forgotten to install some drivers when Cities: Skylines 2 looked weird, it may well be an error on my side, so I wanted to ask for experiences, for help, or whether this is just normal under Linux. I can’t remember my GPU fan going all out on Windows. I use CoreCtrl as a workaround to lower the power limit and reduce the fan work. The games still run smoothly regardless, which shows the extra power isn’t really needed. It happens with Steam games and HGL games alike, so it doesn’t depend on the platform.
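A quick sketch of how the load and power draw can be watched directly from the amdgpu sysfs files, outside of CoreCtrl (this assumes the card shows up as card0; on some kernels the power file is called power1_input rather than power1_average):

#!/usr/bin/env bash
# Rough monitoring sketch: print GPU load, power draw and active core clock once per second.
dev=/sys/class/drm/card0/device
hwmon=$(echo "$dev"/hwmon/hwmon*)
while sleep 1; do
    busy=$(cat "$dev/gpu_busy_percent")                                          # utilization in percent
    uw=$(cat "$hwmon/power1_average" 2>/dev/null || cat "$hwmon/power1_input")   # microwatts
    sclk=$(grep '\*' "$dev/pp_dpm_sclk")                                         # '*' marks the active SCLK level
    echo "load: ${busy}%  power: $((uw / 1000000)) W  sclk: ${sclk}"
done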
I run a Ryzen 3700X with a Radeon 5700 XT. inxi -Gaz (I did some research beforehand and uninstalled the closed-source AMD drivers) shows:
Graphics:
Device-1: AMD Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT]
vendor: Sapphire driver: amdgpu v: kernel arch: RDNA-1 code: Navi-1x
process: TSMC n7 (7nm) built: 2019-20 pcie: gen: 4 speed: 16 GT/s
lanes: 16 ports: active: DP-3 empty: DP-1,DP-2,HDMI-A-1 bus-ID: 0a:00.0
chip-ID: 1002:731f class-ID: 0300
Device-2: ARC Camera driver: snd-usb-audio,uvcvideo type: USB rev: 2.0
speed: 480 Mb/s lanes: 1 mode: 2.0 bus-ID: 5-2.1:4 chip-ID: 05a3:9331
class-ID: 0102 serial: <filter>
Display: wayland server: X.org v: 1.21.1.13 with: Xwayland v: 24.1.2
compositor: kwin_wayland driver: X: loaded: amdgpu
unloaded: modesetting,radeon alternate: fbdev,vesa dri: radeonsi
gpu: amdgpu display-ID: 0
Monitor-1: DP-3 res: 2560x1440 size: N/A modes: N/A
API: EGL v: 1.5 hw: drv: amd radeonsi platforms: device: 0 drv: radeonsi
device: 1 drv: swrast gbm: drv: kms_swrast surfaceless: drv: radeonsi
wayland: drv: radeonsi x11: drv: radeonsi
API: OpenGL v: 4.6 compat-v: 4.5 vendor: amd mesa v: 24.1.6-arch1.1
glx-v: 1.4 direct-render: yes renderer: AMD Radeon RX 5700 XT (radeonsi
navi10 LLVM 18.1.8 DRM 3.57 6.10.6-arch1-1) device-ID: 1002:731f
memory: 7.81 GiB unified: no display-ID: :1.0
API: Vulkan v: 1.3.279 layers: 10 device: 0 type: discrete-gpu name: AMD
Radeon RX 5700 XT (RADV NAVI10) driver: mesa radv v: 24.1.6-arch1.1
device-ID: 1002:731f surfaces: xcb,xlib,wayland
lspci -k | grep -A 3 -E "(VGA|3D)" says:
0a:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT] (rev c1)
Subsystem: Sapphire Technology Limited Radeon RX 5600 XT
Kernel driver in use: amdgpu
Kernel modules: amdgpu
Which makes me wonder, as I thought those were the drivers I had uninstalled; after that I ran an update, hoping this would make sure the right drivers are used.
I’d appreciate any pointers or things to check. Thanks in advance.
I mean… that’s kind of supposed to happen? You’re using your graphics card’s power to play a game, which is an intensive graphics application. Your GPU being used to 99% means it is doing its job; that’s what it is supposed to do. If you’re worried that its temperature is high, the maximum operating temperature of your card is 110 °C, according to this forum post on the AMD forums.
The driver installation process for AMD cards, according to the Arch Wiki, is:
- install mesa and the lib32 version of the same package;
- install amdvlk or vulkan-radeon (not both at the same time) and the matching lib32 version of the package; either one is fine, by the way. A minimal install command is sketched below.
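A minimal sketch of those steps on Arch, assuming the multilib repository is enabled and you go with vulkan-radeon rather than amdvlk:

# Open-source OpenGL (mesa) and Vulkan (vulkan-radeon) drivers plus their 32-bit counterparts for Proton/Wine
sudo pacman -S mesa lib32-mesa vulkan-radeon lib32-vulkan-radeon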
As for the Cities: Skylines 2 problem, it seems that the game is just weird with Proton. Maybe this works?
I can’t compare apples to oranges, but gaming is very complicated, and it depends on your game settings as well as your drivers/kernel and your hardware.
I have a Radeon RX 7600 XT and am using the in-kernel drivers. I also have a Ryzen 7 7700 CPU. Games running full tilt (including BG3) will cause the fans to run for sure, probably close to full speed. My CPU and GPU don’t peg, but games use the majority of my resources. Temps on the CPU climb to the lower 80s °C, and the GPU goes to about 75 °C. Oh, I should throw in that I’m running 3440x1440 at 165 Hz.
If you want less resource consumption and less heat, lower the game settings.
Well, I wouldn’t complain about high-end games with top-notch graphics. As I said, the consumption wasn’t that bad on Windows, and I can lower the power target to 120 W (from 180 W) without any impact on the game, reducing the load on the card. I wouldn’t say anything if I were using Blender and my card were rendering an image.
I’m talking about 100% in Rebel Galaxy Outlaw, which isn’t exactly top-notch, not Elden Ring or something like that. I’m just wondering, as I never experienced such high load with this type of game and never felt the need to regulate my card.
Sorry, I don’t have that game, so I don’t know. Maybe someone else does?
For me, it’s just simply a miracle to be able to run Windows games on Linux
It really is. GPU utilization doesn’t really bother me; GPUs are designed to be used at 99% and to withstand high temps.
I’m old school, so temperatures of 95 °C bother me. But what bothers me more is the GPU drawing 180 W when it could run at 120 W or perhaps less; the extra power does nothing but turn electricity into heat and drive my bill up.
I hear you there… having measured the amount of heat my rig produces (in an informal, “I can feel the winter heating effect” way), it’s amazing that so much heat by-product is necessary.
Who needs a space heater!
Have you checked your framerate limits? Maybe try turning VSync on. Unless something is limiting the output, the GPU will use all its capabilities to deliver as much as it can. This could be why Windows behaves differently, if it is using VSync in cases where Linux isn’t.
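If you want to test a cap without digging through each game’s settings, a couple of generic knobs exist; as a sketch (assuming MangoHud is installed and the game runs through DXVK/Proton), the Steam launch options could look like this:

# Cap a Proton/DXVK title at 60 FPS via DXVK's built-in limiter
DXVK_FRAME_RATE=60 %command%
# Or use MangoHud's limiter, which also overlays load, power and temperature
MANGOHUD=1 MANGOHUD_CONFIG=fps_limit=60 %command%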
Modern GPUs are designed to run at 100% constantly, not even considering throttling unless the game itself asks for less or the card hits its thermal throttle limit (110 °C in most cases). I personally appreciate this, as I would rather make use of all the performance we are paying no small amount of money for.
It’s also worth pointing out that the “old school” cards got this hot too; they just didn’t have the sensors to detect it. They’d measure the edge temps (the 70-90 °C readings you are used to) and throttle based on those, while the internals were easily exceeding 100 °C. Modern cards have sensors on the internals and can measure and throttle more accurately based on those.
I want the power to be used when it is needed, like in Blender or a graphically demanding game where the extra power actually gives me an advantage. But keeping 60 FPS in a more modern Wing Commander 2-style game at 2/3 of the power, and not getting anything better at full power, comes across as a waste of resources to me. No matter whether I play Planet Zoo, Rebel Galaxy Outlaw, C:S 2 (okay, that one is a bit wonky overall and still being worked on) or BG3, the load is the same while the graphics level is not. I want my card to use what is needed and to spare resources (which will only be turned into heat anyway) when they are not needed.
RGO delivers 60 FPS at 120 W and 60 FPS at 180 W with the same settings, so why use 180? I will check VSync with the other games, though, and see if there is something in common. Dead Cells doesn’t drive the card to maximum for some reason, but that one is a 2D game instead of 3D like the rest. So I thought perhaps I am using the wrong driver or am missing something; that’s why I posted the output asked for in other graphics card questions.
Can you try limiting the framerate in-game to 60 or 90? If I play with unlimited FPS and the game is old, so it skyrockets to something like 200-300 FPS, the temps are usually much higher than with a frame limit of around 60-90.
As someone who used to have the RX 5700 XT and always finds himself wanting to undervolt AMD GPUs: if you don’t mind potentially losing some performance to save on electricity costs (in my games I barely noticed a difference anyway), you can set the GPU power handling to manual and power-cap it with amdgpu-clocks, then apply an undervolt curve to see if you can bring it down to a point where it draws less power and produces less heat without games starting to crash. Do make sure to read up on how it operates.
This is the old config I used on my RX 5700 XT. As much as I wanted to lower the max voltage to something like 1.050 V, as I’ve seen in undervolting guides, my GPU was already crashing at 1.075 V, so I had to settle for this.
# For Navi (and Radeon7) we can only set highest SCLK & MCLK, "state 1":
OD_SCLK:
0: 800MHz
1: 1999MHz
OD_MCLK:
1: 875MHz
# More fine-grain control of clocks and voltages are done with VDDC curve:
OD_VDDC_CURVE:
0: 800MHz @ 750mV
1: 1399MHz @ 838mV
2: 1999MHz @ 1090mV
# Force power limit (in micro watts):
FORCE_POWER_CAP: 190000000
FORCE_PERF_LEVEL: manual
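For reference, a rough sketch of what that config boils down to on the raw sysfs interface (run as root; paths assume the dGPU is card0, and the hwmon number can differ on your system):

# Switch power handling to manual so overdrive settings are accepted
echo manual > /sys/class/drm/card0/device/power_dpm_force_performance_level
# Highest SCLK ("s 1 <MHz>"), highest MCLK ("m 1 <MHz>") and a VDDC curve point ("vc <point> <MHz> <mV>")
echo "s 1 1999"       > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "m 1 875"        > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "vc 2 1999 1090" > /sys/class/drm/card0/device/pp_od_clk_voltage
echo "c"              > /sys/class/drm/card0/device/pp_od_clk_voltage   # commit the changes
# The power cap goes through hwmon and is given in microwatts (190 W here)
echo 190000000 > /sys/class/drm/card0/device/hwmon/hwmon0/power1_cap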
I currently own an RX 6800 XT; sadly, Navi 2 has even more limited controls, confined to minimum and maximum frequency plus an undervolt, with no curve at all. Power capping also doesn’t seem to work with these settings, so I had to choose auto.
Side note: yes, forcing the power level to the “low” setting exists, but that was not reliable and sometimes kept the card running only at its lowest frequency.
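That “low” forcing is just another value of the same sysfs knob, by the way; a quick way to try it and to get back to automatic management (again assuming card0):

# Pin the card to its lowest clocks (can get stuck there, as noted above)
echo low > /sys/class/drm/card0/device/power_dpm_force_performance_level
# Return to automatic clock and power management
echo auto > /sys/class/drm/card0/device/power_dpm_force_performance_level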