NVIDIA/Tips and tricks - ArchWiki
Fixing terminal resolution
Since NVIDIA#fbdev is enabled by default, the Linux console should use the native monitor resolution without additional configuration.
If you have disabled fbdev or use an older version of the driver, the resolution may be lower than expected. As a workaround, you can set the resolution in your boot loader configuration.
For GRUB, see GRUB/Tips and tricks#Setting the framebuffer resolution for details. [1] [2]
For systemd-boot, set console-mode in esp/loader/loader.conf. See systemd-boot#Loader configuration for details.
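As a sketch, assuming the ESP is mounted at /boot (adjust the path and the entry names to your setup), the loader configuration might look like:

```
# /boot/loader/loader.conf
default arch.conf
timeout 3
console-mode max
```

console-mode max selects the highest resolution mode the firmware offers; a specific mode number or keep can be used instead.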
For rEFInd, set use_graphics_for +,linux in esp/EFI/refind/refind.conf. [3] A small caveat is that this will hide the kernel parameters from being shown during boot.
Tip
If the above methods do not fix your terminal resolution, it may be necessary to disable Legacy BIOS mode entirely (often referred to as Compatibility Support Module, CSM, or Legacy Boot) in your UEFI settings. Before proceeding, make sure that all of your devices are configured to use UEFI boot.
Using TV-out
See Wikibooks:NVIDIA/TV-OUT.
X with a TV (DFP) as the only display
The X server falls back to some "default" screen resolution (usually 640x480) if no monitor is automatically detected. This can be a problem when using a DVI/HDMI/DisplayPort connected TV as the main display, and X is started while the TV is turned off or otherwise disconnected.
To force NVIDIA to use the correct resolution, store a copy of the EDID somewhere in the file system so that X can parse the file instead of reading EDID from the display.
To acquire the EDID, start nvidia-settings. It will show some information in tree format; ignore the rest of the settings for now and select the GPU (the corresponding entry should be titled GPU-0 or similar), click the DFP section (again, DFP-0 or similar), click on the Acquire EDID... button and store it somewhere, for example, /etc/X11/dfp0.edid
If no mouse and keyboard are attached to the machine, the EDID can be acquired using only the command line. Run an X server with enough verbosity to print out the EDID block:
$ startx -- -logverbose 6
After the X server has finished initializing, close it and extract the EDID block from the Xorg log file using nvidia-xconfig:
$ nvidia-xconfig --extract-edids-from-file ~/.local/share/xorg/Xorg.0.log --extract-edids-output-file ./dfp0.bin
Edit the Xorg configuration by adding to the Device section:
/etc/X11/xorg.conf.d/20-nvidia.conf
Option "ConnectedMonitor" "DFP"
Option "CustomEDID" "DFP-0:/etc/X11/dfp0.bin"
The ConnectedMonitor option forces the driver to recognize the DFP as if it were connected. The CustomEDID option provides EDID data for the device, meaning that it will start up just as if the TV/DFP was connected during the X process.
This way, one can automatically start a display manager at boot time and still have a working and properly configured X screen by the time the TV gets powered on.
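Putting the two options together, the complete Device section might look like the sketch below (the Identifier value is an assumption; keep whatever identifier your configuration already uses):

```
/etc/X11/xorg.conf.d/20-nvidia.conf

Section "Device"
    Identifier "NVIDIA Card"
    Driver "nvidia"
    Option "ConnectedMonitor" "DFP"
    Option "CustomEDID" "DFP-0:/etc/X11/dfp0.bin"
EndSection
```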
Headless (no monitor) resolution
In headless mode, the resolution falls back to 640x480, which is then used by VNC or Steam Link. To start in a higher resolution, e.g. 1920x1080, specify a Virtual entry in the Display subsection of the Screen section in xorg.conf:
Section "Screen"
[...]
SubSection "Display"
Depth 24
Virtual 1920 1080
EndSubSection
EndSection
Tip
Using headless mode may be tricky and error-prone. For instance, in headless mode, desktop environments and nvidia-utils do not provide a graphical way to change the resolution. To facilitate setting the resolution, one can use a DisplayPort or HDMI dummy adapter, which simulates the presence of a monitor attached to that port. The resolution can then be changed normally using a remote session such as VNC or Steam Link.
Check the power source
The NVIDIA X.org driver can also be used to detect the GPU's current source of power. To see the current power source, check the 'GPUPowerSource' read-only parameter (0 - AC, 1 - battery):
$ nvidia-settings -q GPUPowerSource -t
Listening to ACPI events
NVIDIA drivers automatically try to connect to the acpid daemon and listen to ACPI events such as battery power, docking, some hotkeys, etc. If the connection fails, X.org will output the following warning:
~/.local/share/xorg/Xorg.0.log
NVIDIA(0): ACPI: failed to connect to the ACPI event daemon; the daemon
NVIDIA(0): may not be running or the "AcpidSocketPath" X
NVIDIA(0): configuration option may not be set correctly. When the
NVIDIA(0): ACPI event daemon is available, the NVIDIA X driver will
NVIDIA(0): try to use it to receive ACPI event notifications. For
NVIDIA(0): details, please see the "ConnectToAcpid" and
NVIDIA(0): "AcpidSocketPath" X configuration options in Appendix B: X
NVIDIA(0): Config Options in the README.
While completely harmless, you may get rid of this message by disabling the ConnectToAcpid option in your /etc/X11/xorg.conf.d/20-nvidia.conf:
Section "Device"
...
Driver "nvidia"
Option "ConnectToAcpid" "0"
...
EndSection
If you are on a laptop, it might be a good idea to install and enable the acpid daemon instead.
Displaying GPU temperature in the shell
There are three methods to query the GPU temperature. nvidia-settings requires that you are using X; nvidia-smi and nvclock do not. Also note that nvclock currently does not work with newer NVIDIA cards such as GeForce 200 series cards, as well as embedded GPUs such as the Zotac IONITX's 8800GS.
nvidia-settings
To display the GPU temperature in the shell, use nvidia-settings as follows:
$ nvidia-settings -q gpucoretemp
Attribute 'GPUCoreTemp' (hostname:0[gpu:0]): 49.
'GPUCoreTemp' is an integer attribute.
'GPUCoreTemp' is a read-only attribute.
'GPUCoreTemp' can use the following target types: GPU.
The GPU temperature of this board is 49 °C.
In order to get just the temperature for use in utilities such as rrdtool or conky:
$ nvidia-settings -q gpucoretemp -t
49
nvidia-smi
nvidia-smi can read temperatures directly from the GPU without the need to use X at all, e.g. when running Wayland or on a headless server. To display the GPU temperature in the shell, use nvidia-smi:
$ nvidia-smi
Wed Feb 28 14:27:35 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.14 Driver Version: 550.54.14 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce GTX 1660 Ti Off | 00000000:01:00.0 On | N/A |
| 0% 49C P8 9W / 120W | 138MiB / 6144MiB | 2% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 223179 G weston 120MiB |
+-----------------------------------------------------------------------------------------+
Only for temperature:
$ nvidia-smi -q -d TEMPERATURE
==============NVSMI LOG==============
Timestamp : Wed Feb 28 14:27:35 2024
Driver Version : 550.54.14
CUDA Version : 12.4
Attached GPUs : 1
GPU 00000000:01:00.0
Temperature
GPU Current Temp : 49 C
GPU T.Limit Temp : N/A
GPU Shutdown Temp : 95 C
GPU Slowdown Temp : 92 C
GPU Max Operating Temp : 90 C
GPU Target Temperature : 83 C
Memory Current Temp : N/A
Memory Max Operating Temp : N/A
In order to get just the temperature for use in utilities such as rrdtool or conky:
$ nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader,nounits
49
nvclock
Install the nvclock (AUR) package.
Note
nvclock cannot access thermal sensors on newer NVIDIA cards such as GeForce 200 series cards.
There can be significant differences between the temperatures reported by nvclock and nvidia-settings/nv-control. According to this post by the author (thunderbird) of nvclock, the nvclock values should be more accurate.
Overclocking and cooling
Warning
Overclocking might permanently damage your hardware. You have been warned.
Enabling overclocking in nvidia-settings
Note
Some overclocking settings cannot be applied if the Xorg server is running in rootless mode. Consider running Xorg as root. You may also need to run nvidia-settings as root.
Enabling DRM kernel mode setting may cause overclocking to become unavailable, regardless of the Coolbits value.
Depending on the driver version, some overclocking features are enabled by default. Some unsupported overclocking features need to be enabled via the Coolbits option in the Device section:

Option "Coolbits" "value"
Tip
The Coolbits option can be easily controlled with nvidia-xconfig, which manipulates the Xorg configuration files:

# nvidia-xconfig --cool-bits=value
The Coolbits value is the sum of its component bits in the binary numeral system. The component bits are:

8 (bit 3) - Enables additional overclocking settings on the PowerMizer page in nvidia-settings. Available since version 337.12 for the Fermi architecture and newer. [4]
16 (bit 4) - Enables overvoltage using nvidia-settings CLI options. Available since version 346.16 for the Fermi architecture and newer. [5]

If you use an unsupported version of the driver, you may also need to use these bits:

1 (bit 0) - Enables overclocking of older (pre-Fermi) cores on the Clock Frequencies page in nvidia-settings. Removed in version 343.13.
2 (bit 1) - When this bit is set, the driver will "attempt to initialize SLI when using GPUs with different amounts of video memory". Removed in version 470.42.01.
4 (bit 2) - Enables manual configuration of GPU fan speed on the Thermal Monitor page in nvidia-settings. Removed in version 470.42.01.
To enable multiple features, add the Coolbits values together. For example, to enable overclocking and overvoltage of Fermi cores, set Option "Coolbits" "24".
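As a sanity check on the arithmetic, the value can be derived from the bit positions (a minimal sketch; the constant names are made up for illustration):

```python
# Each Coolbits feature corresponds to one bit; the option value is their sum.
PRE_FERMI_OC = 1 << 0  # bit 0: pre-Fermi overclocking (removed in 343.13)
SLI_MISMATCH = 1 << 1  # bit 1: SLI with mismatched video memory (removed in 470.42.01)
FAN_CONTROL  = 1 << 2  # bit 2: manual fan speed control (removed in 470.42.01)
POWERMIZER   = 1 << 3  # bit 3: PowerMizer overclocking settings
OVERVOLTAGE  = 1 << 4  # bit 4: overvoltage via the nvidia-settings CLI

# Overclocking and overvoltage of Fermi cores: 8 + 16
print(POWERMIZER | OVERVOLTAGE)  # prints 24
```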
The documentation of Coolbits can be found in /usr/share/doc/nvidia/html/xconfigoptions.html and here.
Note
An alternative is to edit and reflash the GPU BIOS, either under DOS (preferred) or within a Win32 environment, by way of nvflash and NiBiTor 6.0. The advantage of BIOS flashing is that not only can voltage limits be raised, but stability is generally improved over software overclocking methods such as Coolbits. See the Fermi BIOS modification tutorial.
Setting static 2D/3D clocks
Use kernel module parameters to enable PowerMizer at its maximum performance level (VSync will not work without this):
/etc/modprobe.d/nvidia.conf
options nvidia NVreg_RegistryDwords="PerfLevelSrc=0x2222"
Lowering GPU boost clocks
With Volta (NV140/GVXXX) GPUs and later, clock boost works in a different way, and maximum clocks are set to the highest supported limit at boot. If that is what you want, then no further configuration is necessary.
The drawback is the lower power efficiency. As the clocks go up, increased voltage is needed for stability, resulting in a nonlinear increase in power consumption, heating, and fan noise. Lowering the boost clock limit will thus increase efficiency.
Boost clock limits can be changed using nvidia-smi, run as root:
List supported clock rates:
$ nvidia-smi -q -d SUPPORTED_CLOCKS
Set GPU boost clock limit to 1695 MHz:
# nvidia-smi --lock-gpu-clocks=0,1695 --mode=1
Set Memory boost clock limit to 5001 MHz:
# nvidia-smi --lock-memory-clocks=0,5001
To optimize for efficiency, use nvidia-smi to check the GPU utilization while running your favorite game. VSync should be on. Lowering the boost clock limit will increase GPU utilization, because a slower GPU will use more time to render each frame. Best efficiency is achieved with the lowest clocks that do not cause the stutter that results when the utilization hits 100%. Then, each frame can be rendered just quickly enough to keep up with the refresh rate.
As an example, using the above settings instead of default on an RTX 3090 Ti, while playing Hitman 3 at 4K@60, reduces power consumption by 30%, temperature from 75 to 63 degrees, and fan speed from 73% to 57%.
Saving overclocking settings
Typically, clock and voltage offsets inserted in the nvidia-settings interface are not saved and are lost after a reboot. Fortunately, there are tools that offer an interface for overclocking under the proprietary driver, and are able to save the user's overclocking preferences and automatically apply them on boot.
Some of them are:

gwe (AUR) - graphical, applies settings on desktop session start
nvclock (AUR) and systemd-nvclock-unit (AUR) - graphical, applies settings on system boot
nvoc (AUR) - text based, profiles are configuration files in /etc/nvoc.d/, applies settings on desktop session start
Otherwise, the GPUGraphicsClockOffset and GPUMemoryTransferRateOffset attributes can be set in the command-line interface of nvidia-settings on startup. For example:

$ nvidia-settings -a "GPUGraphicsClockOffset[performance_level]=offset"
$ nvidia-settings -a "GPUMemoryTransferRateOffset[performance_level]=offset"
Where performance_level is the number of the highest performance level. If there are multiple GPUs on the machine, the GPU ID should be specified:

[gpu:gpu_id]GPUGraphicsClockOffset[performance_level]=offset
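For illustration, the attribute strings can be assembled programmatically before being handed to nvidia-settings -a (a sketch; the GPU id, performance level, and offset values below are hypothetical):

```python
# Hypothetical values: GPU 0, highest performance level 4,
# +100 MHz graphics clock offset and +200 MHz memory transfer rate offset.
gpu_id, perf_level = 0, 4
core_offset, mem_offset = 100, 200

attrs = [
    f"[gpu:{gpu_id}]GPUGraphicsClockOffset[{perf_level}]={core_offset}",
    f"[gpu:{gpu_id}]GPUMemoryTransferRateOffset[{perf_level}]={mem_offset}",
]
for attr in attrs:
    # Each entry would be passed as: nvidia-settings -a "<attr>"
    print(attr)
```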
Custom TDP limit
The factual accuracy of this article or section is disputed.
Reason: It seems that not all cards support this. Among the 3 cards available to me: a desktop 3080 Ti, a mobile 1650 MaxQ, and a mobile 500 Ada, this only worked on the 3080 Ti; on the two laptops I got "not supported for GPU" warnings. Is this feature unavailable for mobile GPUs? (Discuss in Talk:NVIDIA/Tips and tricks)
Modern NVIDIA graphics cards throttle frequency to stay in their TDP and temperature limits. To increase performance it is possible to change the TDP limit, which will result in higher temperatures and higher power consumption.
For example, to set the power limit to 160.30W:
# nvidia-smi -pl 160.30
To set the power limit on boot (without driver persistence):
/etc/systemd/system/nvidia-tdp.timer
[Unit]
Description=Set NVIDIA power limit on boot
[Timer]
OnBootSec=5
[Install]
WantedBy=timers.target
/etc/systemd/system/nvidia-tdp.service
[Unit]
Description=Set NVIDIA power limit
[Service]
Type=oneshot
ExecStart=/usr/bin/nvidia-smi -pl 160.30
Now enable the nvidia-tdp.timer.
Set fan speed at login
The factual accuracy of this article or section is disputed.
Reason: This will not work because manual configuration of GPU fan speed requires running nvidia-settings as root (even if Xorg itself is running as root). (Discuss in Talk:NVIDIA/Tips and tricks)
You can adjust the fan speed on your graphics card with the nvidia-settings console interface. First ensure that your Xorg configuration has enabled bit 2 in the Coolbits option.
Note
GeForce 400/500 series cards cannot currently set fan speeds at login using this method. This method only allows for the setting of fan speeds within the current X session by way of nvidia-settings.
Place the following line in your xinitrc file to adjust the fan when you launch Xorg. Replace n with the fan speed percentage you want to set.

nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUTargetFanSpeed=n" &

You can also configure a second GPU by incrementing the GPU and fan number.

nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUTargetFanSpeed=n" \
                -a "[gpu:1]/GPUFanControlState=1" -a "[fan:1]/GPUTargetFanSpeed=n" &
If you use a login manager such as GDM or SDDM, you can create a desktop entry file to process this setting. Create ~/.config/autostart/nvidia-fan-speed.desktop and place this text inside it. Again, change n to the speed percentage you want.

[Desktop Entry]
Type=Application
Exec=nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUTargetFanSpeed=n"
X-GNOME-Autostart-enabled=true
Name=nvidia-fan-speed
Note
Before driver version 349.16, GPUCurrentFanSpeed was used instead of GPUTargetFanSpeed. [6]
To make it possible to adjust the fan speed of more than one graphics card, run:
$ nvidia-xconfig --enable-all-gpus
$ nvidia-xconfig --cool-bits=4
Note
On some laptops (including the ThinkPad X1 Extreme and P51/P52), there are two fans, but neither is controlled by NVIDIA.
Simple overclocking script using NVML
The NVIDIA Management Library (NVML) provides an API that can manage the GPU's core and memory clock offsets and power limit. To utilise this, install python-nvidia-ml-py and then use the following Python script with your desired settings. This script needs to be run as root after every restart to re-apply the overclock.
#!/usr/bin/env python
from pynvml import *
nvmlInit()
# This sets the GPU to adjust - if this gives you errors or you have multiple GPUs, set to 1 or try other values
myGPU = nvmlDeviceGetHandleByIndex(0)
# The GPU clock offset value should replace "000" in the line below.
nvmlDeviceSetGpcClkVfOffset(myGPU, 000)
# The memory clock offset should be **multiplied by 2** to replace the "000" below
# For example, an offset of 500 means inserting a value of 1000 in the next line
nvmlDeviceSetMemClkVfOffset(myGPU, 000)
# The power limit can be set below in mW - 216W becomes 216000, etc. Remove the below line if you don't want to adjust power limits.
nvmlDeviceSetPowerManagementLimit(myGPU, 000000)
Undervolting with NVML
The NVML API also allows undervolting a GPU, which reduces power consumption and temperatures with minimal performance loss or even a slight gain. This might be especially desirable for laptop users.
Note
Extreme undervolting can cause major instability issues, and especially if configured directly in the firmware, may render the computer unbootable. Because of this, some motherboards come with an undervolt protection setting, which must be disabled before proceeding (for example, this is the case with the Alienware m16 R1 laptop). Your mileage may vary depending on the motherboard's brand and model.
Moreover, undervolting, like overclocking, should be done in small incremental steps, while testing the system's stability in between.
Install python-nvidia-ml-py, create the following script, and make it executable:

/usr/local/sbin/nvidia-undervolt.py
#!/usr/bin/env python
from pynvml import *
from ctypes import byref
nvmlInit()
# This sets the GPU to adjust - if this gives you errors or you have multiple GPUs, set to 1 or try other values.
myGPU = nvmlDeviceGetHandleByIndex(0)
##print(f"myGPU value: {myGPU}")
# Get the minimum and maximum power values allowed.
##min_power, max_power = nvmlDeviceGetPowerManagementLimitConstraints(myGPU)
##print(f"Allowed range: {min_power} mW to {max_power} mW")
# The power limit can be set below in mW - 216W becomes 216000, etc.
# This value must be within the minimum and maximum allowed power limits.
# Remove or comment out the line below if you do not want to adjust power limits.
nvmlDeviceSetPowerManagementLimit(myGPU, 000000)
# Define the minimum and maximum clocks allowed.
# The clocks supported by your GPU can be verified with:
# nvidia-smi -q -d SUPPORTED_CLOCKS
nvmlDeviceSetGpuLockedClocks(myGPU, 210, 2340)
####################################
# ============ P0 State ============
####################################
# ============ Memory ============
# Uncomment and edit this section if desired.
# Note: The memory clock offset should be **multiplied by 2**.
# E.g. a desired offset of 500 means inserting
# a value of 1000 in the clockOffsetMHz line.
##infoMemP0 = c_nvmlClockOffset_t()
##infoMemP0.version = nvmlClockOffset_v1
##infoMemP0.type = NVML_CLOCK_MEM
##infoMemP0.pstate = NVML_PSTATE_0
##infoMemP0.clockOffsetMHz = 2000
### This offset is simply how much faster your memory will run.
### E.g. instead of running at 8000 MHz,
### the memory will run at 8000 + (2000 / 2) = 9000 MHz.
##nvmlDeviceSetClockOffsets(myGPU, byref(infoMemP0))
# ============ Graphics =============
infoGraphicsP0 = c_nvmlClockOffset_t()
infoGraphicsP0.version = nvmlClockOffset_v1
infoGraphicsP0.type = NVML_CLOCK_GRAPHICS
infoGraphicsP0.pstate = NVML_PSTATE_0
infoGraphicsP0.clockOffsetMHz = 270
## What this offset means is: The frequency-voltage
## curve is lifted up by 270 MHz.
## E.g. the voltage value originally assigned to 2070 MHz
## will now be used at 2070 + 270 = 2340 MHz.
nvmlDeviceSetClockOffsets(myGPU, byref(infoGraphicsP0))
nvmlShutdown()
The details of the functions used can be read in Section 5.18 of the NVML API documentation. This Reddit post explains why the undervolting is done this way, with a clock offset.
This script only applies the undervolt to the GPU's highest P-state. If you want to configure other P-states aside from P0, check this Reddit post for advice.
Note
Unfortunately, some GPUs do not support changing the power limit via the NVML API. This can be tested with nvidia-smi -pl 000 as root. If this happens to you, remove or comment out the power limit section of the script, and as a last resort, you can configure Dynamic Boost.
Run the script manually as root to apply the settings to your GPU and test your configuration. Do not apply your settings permanently unless you have tested them and made sure no problems occur, i.e. your configuration is stable.
After finding a good setup, you need to re-apply it at every boot. One way to do this is with a systemd service:
/etc/systemd/system/nvidia-undervolt.service
[Unit]
Description=Undervolt the NVIDIA GPU
[Service]
Type=oneshot
ExecStart=/bin/python /usr/local/sbin/nvidia-undervolt.py
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=graphical.target
Finally, enable the service so your settings are applied every time the system boots up.
Kernel module parameters
Some options can be set as kernel module parameters; a full list can be obtained by running modinfo nvidia or looking at nv-reg.h. See Gentoo:NVidia/nvidia-drivers#Kernel module parameters as well.
For example, enabling the following will enable the PAT feature [7], which affects how memory is allocated. PAT was first introduced in Pentium III [8] and is supported by most newer CPUs (see wikipedia:Page attribute table#Processors). If your system can support this feature, it should improve performance.
/etc/modprobe.d/nvidia.conf
options nvidia NVreg_UsePageAttributeTable=1
On some notebooks, to enable any NVIDIA settings tweaking you must include this option; otherwise it responds with "Setting applications clocks is not supported" and similar errors:
/etc/modprobe.d/nvidia.conf
options nvidia NVreg_RegistryDwords="OverrideMaxPerf=0x1"
Note
As per Kernel module#Using modprobe.d, you will need to regenerate the initramfs if using early KMS.
Preserve video memory after suspend
By default the NVIDIA Linux drivers save and restore only essential video memory allocations on system suspend and resume. Quoting NVIDIA:
The resulting loss of video memory contents is partially compensated for by the user-space NVIDIA drivers, and by some applications, but can lead to failures such as rendering corruption and application crashes upon exit from power management cycles.
Introduced as an "experimental" interface (originally named NVreg_PreserveVideoMemoryAllocations in the 430-590 series drivers), it enables saving all video memory (given enough space on disk or RAM). With 595+ drivers, it has been succeeded by the NVreg_UseKernelSuspendNotifiers=1 kernel module parameter, which needs to be set to save and restore all video memory contents.
While NVIDIA does not set these by default, Arch Linux does so for the supported drivers, making preserve work out of the box.
To verify that NVreg_UseKernelSuspendNotifiers is enabled, execute the following:

# sort /proc/driver/nvidia/params

The output should contain the line UseKernelSuspendNotifiers: 1, and also TemporaryFilePath: "/var/tmp", which you can read about below. Drivers prior to 595 should have a line PreserveVideoMemoryAllocations: 1 for the same.
In the older 430-590 series drivers, the services nvidia-suspend.service, nvidia-hibernate.service, and nvidia-resume.service are required and enabled by default, as per upstream requirements.
The aforementioned services are disabled by default on 595+ drivers, as per upstream requirements: video memory preservation is now handled by kernel suspend notifiers, making the NVIDIA suspend/hibernate services unnecessary.
See NVIDIA's documentation for more details.
Note
When using early KMS, i.e. when the loading of the nvidia module happens in the initramfs, it has no access to NVreg_TemporaryFilePath, which stores the previous video memory: early KMS should not be used if hibernation is desired.
As per Kernel module#Using modprobe.d, you will need to regenerate the initramfs if using early KMS.
The video memory contents are by upstream default saved to /tmp, which is a tmpfs. NVIDIA recommends using another filesystem to achieve the best performance. This is also required if the tmpfs size is not sufficient for the amount of video memory. Arch Linux thus sets nvidia.NVreg_TemporaryFilePath=/var/tmp by default on supported drivers.
The chosen file system containing the file needs to support unnamed temporary files (e.g. ext4 or XFS) and have sufficient capacity for storing the video memory allocations (i.e. at least 5 percent more than the sum of the memory capacities of all NVIDIA GPUs). Use the command nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits to list the memory capacities of all GPUs in the system.
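As a sketch of the sizing arithmetic (the VRAM figures below are hypothetical; in practice they would come from the nvidia-smi query above):

```python
# Minimum temporary-file capacity: at least 5% more than the total VRAM
# of all NVIDIA GPUs in the system.
vram_mib = [6144, 8192]  # hypothetical per-GPU memory.total values, in MiB

total_mib = sum(vram_mib)
needed_mib = int(total_mib * 1.05)  # 5% headroom

print(total_mib, needed_mib)  # prints 14336 15052
```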
When using the 430-590 series drivers, nvidia-resume.service is marked as required by NVIDIA, but it can be considered optional, as its functionality is also provided by a systemd-sleep(8) hook (/usr/lib/systemd/system-sleep/nvidia), which is invoked automatically. Note that GDM with Wayland, however, explicitly requires nvidia-resume.service to be enabled.
Dynamic Boost
Dynamic Boost is a system-wide power controller which manages GPU and CPU power according to the workload on the system. [9] It can particularly improve performance in GPU-bound applications by raising the power limit accordingly.
The main requirement is a laptop with an Ampere (or newer) GPU.
See CPU frequency scaling#nvidia-powerd for detailed instructions.
Tip
It would especially help those unable to manually set the power limit; see NVIDIA Optimus#Low power usage (TDP).
Driver persistence
NVIDIA has a daemon that can be optionally run at boot. In a standard single-GPU X desktop environment the persistence daemon is not needed and can actually create issues. [10] See the Driver Persistence section of the NVIDIA documentation for more details.
To start the persistence daemon at boot, enable nvidia-persistenced.service. For manual usage, see the upstream documentation.
Forcing YCbCr with 4:2:0 subsampling
If you are facing limitations of older output standards that can still be mitigated by using YUV 4:2:0, the NVIDIA driver has an undocumented X11 option to enforce that:

Option "ForceYUV420" "True"
This will allow higher resolutions or refresh rates, but will have a detrimental impact on image quality.
Configure applications to render using GPU
See PRIME#Configure applications to render using GPU.