RTX 3060 is never using full power

I use mangohud to monitor my hardware stats, and I have been watching them for several days since I got this new machine, in all sorts of different scenarios.

Now I know that on this machine, the RTX 3060 is rated for a maximum of 130W, but it seems to be capped at 80W. No matter the power mode I use, it always stays around 79-80W.

Any idea how I can get it to run at its full potential?
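In case it is useful, the enforced limit can also be cross-checked with nvidia-smi (just a sanity check using the driver’s own tool, not a fix):

$ nvidia-smi --query-gpu=power.draw,power.limit,power.max_limit --format=csv
$ nvidia-smi -q -d POWER    # more verbose breakdown of the current/default/min/max limits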

System:
  Kernel: 5.19.8-arch1-1 arch: x86_64 bits: 64 compiler: gcc v: 12.2.0
    parameters: BOOT_IMAGE=/vmlinuz-linux
    root=UUID=881825ad-972b-491a-b2ce-6d2722e5e85e rw rootfstype=ext4
    loglevel=3 ibt=off
  Desktop: GNOME v: 42.4 tk: GTK v: 3.24.34 wm: gnome-shell dm: GDM v: 42.0
    Distro: Arch Linux
Machine:
  Type: Laptop System: LENOVO product: 82JM v: Legion 5 17ITH6H
    serial: <superuser required> Chassis: type: 10 v: Legion 5 17ITH6H
    serial: <superuser required>
  Mobo: LENOVO model: LNVNB161216 v: NO DPK serial: <superuser required>
    UEFI: LENOVO v: H1CN49WW date: 08/16/2022
Battery:
  ID-1: BAT0 charge: 83.5 Wh (100.0%) condition: 83.5/80.0 Wh (104.4%)
    volts: 17.5 min: 15.4 model: Celxpert L20C4PC2 type: Li-poly
    serial: <filter> status: full
CPU:
  Info: model: 11th Gen Intel Core i7-11800H bits: 64 type: MT MCP
    arch: Tiger Lake gen: core 11 level: v4 built: 2020 process: Intel 10nm
    family: 6 model-id: 0x8D (141) stepping: 1 microcode: 0x40
  Topology: cpus: 1x cores: 8 tpc: 2 threads: 16 smt: enabled cache:
    L1: 640 KiB desc: d-8x48 KiB; i-8x32 KiB L2: 10 MiB desc: 8x1.2 MiB
    L3: 24 MiB desc: 1x24 MiB
  Speed (MHz): avg: 2345 high: 3696 min/max: 800/4600 scaling:
    driver: intel_pstate governor: performance cores: 1: 2300 2: 2300 3: 2300
    4: 2300 5: 1639 6: 2300 7: 2300 8: 2300 9: 2300 10: 2300 11: 2300
    12: 2300 13: 2300 14: 2300 15: 3696 16: 2300 bogomips: 73744
  Flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx
  Vulnerabilities:
  Type: itlb_multihit status: Not affected
  Type: l1tf status: Not affected
  Type: mds status: Not affected
  Type: meltdown status: Not affected
  Type: mmio_stale_data status: Not affected
  Type: retbleed status: Not affected
  Type: spec_store_bypass mitigation: Speculative Store Bypass disabled via
    prctl
  Type: spectre_v1 mitigation: usercopy/swapgs barriers and __user pointer
    sanitization
  Type: spectre_v2 mitigation: Enhanced IBRS, IBPB: conditional, RSB
    filling, PBRSB-eIBRS: SW sequence
  Type: srbds status: Not affected
  Type: tsx_async_abort status: Not affected
Graphics:
  Device-1: NVIDIA GA106M [GeForce RTX 3060 Mobile / Max-Q] vendor: Lenovo
    driver: nvidia v: 515.65.01 non-free: 515.xx+ status: current (as of
    2022-08) arch: Ampere code: GAxxx process: TSMC n7 (7nm) built: 2020-22
    pcie: gen: 1 speed: 2.5 GT/s lanes: 16 link-max: gen: 4 speed: 16 GT/s
    bus-ID: 01:00.0 chip-ID: 10de:2560 class-ID: 0300
  Device-2: Syntek Integrated Camera type: USB driver: uvcvideo
    bus-ID: 3-6:2 chip-ID: 174f:2459 class-ID: fe01 serial: <filter>
  Display: x11 server: X.org v: 1.21.1.4 with: Xwayland v: 22.1.3
    compositor: gnome-shell driver: X: loaded: nvidia unloaded: modesetting
    alternate: fbdev,nouveau,nv,vesa gpu: nvidia display-ID: :1 screens: 1
  Screen-1: 0 s-res: 1920x1080 s-size: <missing: xdpyinfo>
  Monitor-1: DP-4 res: 1920x1080 hz: 144 dpi: 128
    size: 382x215mm (15.04x8.46") diag: 438mm (17.26") modes: N/A
  Message: Unable to show GL data. Required tool glxinfo missing.
Audio:
  Device-1: Intel Tiger Lake-H HD Audio vendor: Lenovo driver: snd_hda_intel
    v: kernel bus-ID: 00:1f.3 chip-ID: 8086:43c8 class-ID: 0403
  Device-2: NVIDIA GA106 High Definition Audio driver: snd_hda_intel
    v: kernel pcie: gen: 1 speed: 2.5 GT/s lanes: 16 link-max: gen: 4
    speed: 16 GT/s bus-ID: 01:00.1 chip-ID: 10de:228e class-ID: 0403
  Sound Server-1: ALSA v: k5.19.8-arch1-1 running: yes
  Sound Server-2: JACK v: 1.9.21 running: no
  Sound Server-3: PulseAudio v: 16.1 running: yes
  Sound Server-4: PipeWire v: 0.3.58 running: yes
Network:
  Device-1: Intel Tiger Lake PCH CNVi WiFi driver: iwlwifi v: kernel
    bus-ID: 00:14.3 chip-ID: 8086:43f0 class-ID: 0280
  IF: wlan0 state: down mac: <filter>
  Device-2: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet
    vendor: Lenovo driver: r8169 v: kernel pcie: gen: 1 speed: 2.5 GT/s
    lanes: 1 port: 3000 bus-ID: 58:00.0 chip-ID: 10ec:8168 class-ID: 0200
  IF: enp88s0 state: up speed: 1000 Mbps duplex: full mac: <filter>
Bluetooth:
  Device-1: Intel AX201 Bluetooth type: USB driver: btusb v: 0.8
    bus-ID: 3-14:4 chip-ID: 8087:0026 class-ID: e001
  Report: rfkill ID: hci0 rfk-id: 3 state: up address: see --recommends
Drives:
  Local Storage: total: 1.16 TiB used: 585.47 GiB (49.3%)
  SMART Message: Unable to run smartctl. Root privileges required.
  ID-1: /dev/nvme0n1 maj-min: 259:0 vendor: Samsung model: SSD 970 EVO Plus
    250GB size: 232.89 GiB block-size: physical: 512 B logical: 512 B
    speed: 31.6 Gb/s lanes: 4 type: SSD serial: <filter> rev: 2B2QEXM7
    temp: 35.9 C scheme: GPT
  ID-2: /dev/nvme1n1 maj-min: 259:3 vendor: SK Hynix model: HFS001TDE9X084N
    size: 953.87 GiB block-size: physical: 512 B logical: 512 B
    speed: 31.6 Gb/s lanes: 4 type: SSD serial: <filter> rev: 41010C22
    temp: 37.9 C scheme: GPT
Partition:
  ID-1: / raw-size: 232.38 GiB size: 227.68 GiB (97.97%) used: 160.42 GiB
    (70.5%) fs: ext4 dev: /dev/nvme0n1p2 maj-min: 259:2
  ID-2: /boot raw-size: 511 MiB size: 510 MiB (99.80%) used: 87.6 MiB
    (17.2%) fs: vfat dev: /dev/nvme0n1p1 maj-min: 259:1
Swap:
  Kernel: swappiness: 60 (default) cache-pressure: 100 (default)
  ID-1: swap-1 type: zram size: 4 GiB used: 0 KiB (0.0%) priority: 100
    dev: /dev/zram0
Sensors:
  System Temperatures: cpu: 39.0 C mobo: N/A
  Fan Speeds (RPM): N/A
Info:
  Processes: 324 Uptime: 8m wakeups: 11 Memory: 15.48 GiB used: 2.85 GiB
  (18.4%) Init: systemd v: 251 default: graphical tool: systemctl
  Compilers: gcc: 12.2.0 clang: 14.0.6 Packages: pm: pacman pkgs: 1241
  libs: 421 tools: gnome-software,pamac,yay pm: flatpak pkgs: 0 Shell: Bash
  v: 5.1.16 running-in: gnome-terminal inxi: 3.3.21

I would suggest looking at it with GWE (Green With Envy); you can change many settings with it. You may need to create a partial xorg.conf or similar to be able to control what you want. There are several very good guides on the Arch Wiki.
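As a rough sketch only (the exact Coolbits value depends on which controls you want to unlock, and hybrid-graphics laptops can be fussy, so treat the Arch Wiki as the authority), the partial config usually ends up looking something like this:

# /etc/X11/xorg.conf.d/20-nvidia.conf  (illustrative example, not tested on your machine)
Section "Device"
    Identifier "NVIDIA Card"
    Driver     "nvidia"
    # 28 = manual fan control + overclocking + overvoltage; keep only the bits you need
    Option     "Coolbits" "28"
EndSection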

I’m stuck at this error and I can’t even get it to launch.

$ gwe
Traceback (most recent call last):
  File "/usr/bin/gwe", line 53, in <module>
    from gwe import __main__
  File "/usr/lib/python3.10/site-packages/gwe/__main__.py", line 52, in <module>
    locale.setlocale(locale.LC_ALL, locale.getlocale())
  File "/usr/lib/python3.10/locale.py", line 620, in setlocale
    return _setlocale(category, locale)
locale.Error: unsupported locale setting
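From the traceback, gwe is just feeding whatever Python’s locale.getlocale() returns straight back into setlocale(), so this one-liner shows the value it is choking on:

$ python -c 'import locale; print(locale.getlocale())'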

My locale -a output:

C.UTF-8
en_GB.utf8utf8
en_US.utf8
POSIX

What is wrong with it?

Should that be

en_GB.utf8

Changing the locale to anything other than what it is now somehow breaks gnome-terminal, which refuses to launch after running locale-gen.

Sounds like you need to regenerate your locale. I remember posts on the forum about that…
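If that is indeed the problem, the usual Arch procedure is roughly the following (double-check the exact entry in /etc/locale.gen first; en_GB.UTF-8 is just taken from your locale -a output):

# uncomment the wanted line in /etc/locale.gen, e.g. "en_GB.UTF-8 UTF-8"
$ sudo locale-gen
$ echo 'LANG=en_GB.UTF-8' | sudo tee /etc/locale.conf
# then log out and back in (or reboot) so the session picks it up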

Can you monitor temperatures while gaming? If temps are high, the GPU may simply be refusing to pull more power.
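If you want a second readout next to mangohud, something like this (plain nvidia-smi, refreshing every second) shows temperature, draw and the enforced limit side by side:

$ nvidia-smi --query-gpu=temperature.gpu,power.draw,power.limit,clocks.sm --format=csv -l 1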

I can and I do constantly monitor temps. The GPU stabilizes at around 70°C, rarely if ever going over that, while the CPU hovers between 60°C and 85°C. As far as I understand, temps like these are quite normal, perhaps even on the lower side.

This is apparently a known issue and it also appears to have been addressed (sort of).

While reading this discussion I came upon this article.

TLDR:

systemctl enable nvidia-powerd.service
systemctl start nvidia-powerd.service

One reboot later and my GPU can now draw up to 100W. Whether it can reach the Lenovo-advertised 115W + 15W (Dynamic Boost) remains to be seen, but this is still progress and I’ll take it.

EDIT: This only appears to work on Intel-based systems.
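For anyone trying the same thing, this is how I would check that it actually took effect (same stock tools as above, nothing exotic):

$ systemctl status nvidia-powerd.service    # should show active (running)
$ nvidia-smi -q -d POWER                    # the limits reported here should now sit above 80W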
