“Microprocessors from Intel, AMD, and other companies contain a newly discovered weakness that remote attackers can exploit to obtain cryptographic keys and other secret data travelling through the hardware, researchers said”.
Earlier papers have already shown that, if you know the code being executed, you can infer the processed values from the power consumption. That can be “fixed” relatively easily by not exposing the power-consumption measurement interfaces.
But it turns out: the power consumption determines the waste heat, the waste heat determines the boost frequency, the boost frequency determines the processing speed, and the processing speed can be observed even over the network.
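That chain is what makes the leak remotely observable: the attacker only ever sees request timings. Here is a minimal toy simulation in Python (not the attack from the paper; all numbers are invented for illustration) of the attacker's statistical viewpoint — a secret-dependent frequency drop shifts the mean response time, and with enough samples the two distributions separate from the measurement noise:

```python
import random
import statistics

random.seed(42)

# Toy model (all numbers are illustrative assumptions):
BASE_US = 1000.0   # baseline response time in microseconds
SIGNAL_US = 50.0   # assumed secret-dependent shift from a lower boost clock
NOISE_US = 10.0    # jitter under quiet, "laboratory" network conditions

def measure(secret_bit: int, n: int) -> list[float]:
    """Simulate n timed requests against a server whose clock frequency
    (and hence processing speed) depends on a secret bit."""
    shift = SIGNAL_US if secret_bit else 0.0
    return [random.gauss(BASE_US + shift, NOISE_US) for _ in range(n)]

fast = measure(0, 1000)  # secret bit 0: CPU stays boosted
slow = measure(1, 1000)  # secret bit 1: CPU throttles, requests get slower

# The attacker only sees timings, but the sample means separate cleanly:
print(statistics.mean(fast), statistics.mean(slow))
```

With 1000 samples the standard error of each mean is well under a microsecond here, so a 50 µs shift is trivially detectable — under these quiet-network assumptions.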
In the paper, they demonstrate this on a post-quantum cryptographic scheme that advertises itself as constant-time, precisely to prevent side channels. Constant-time code was previously considered the gold standard for side-channel freedom.
If you don't use boost frequency, though, this apparently isn't an issue, according to Intel and the article. What's described here is exactly what you do when overclocking a CPU to a fixed frequency.
Is there a workaround?
Technically, yes. However, it has an extreme system-wide performance impact.
In most cases, you can prevent Hertzbleed by disabling frequency boost. Intel calls this feature “Turbo Boost”, and AMD calls it “Turbo Core” or “Precision Boost”. Disabling frequency boost can be done either through the BIOS or at runtime via the frequency scaling driver. In our experiments, when frequency boost was disabled, the frequency stayed fixed at the base frequency during workload execution, preventing leakage via Hertzbleed. However, this is not a recommended mitigation strategy as it will very significantly impact performance. Moreover, on some custom system configurations (with reduced power limits), data-dependent frequency updates may occur even when frequency boost is disabled.
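On Linux, the runtime toggle that the mitigation text refers to is exposed through sysfs by the frequency-scaling driver. A sketch, assuming the `intel_pstate` driver on Intel and a cpufreq driver with a `boost` knob (e.g. `acpi-cpufreq` on AMD) — the exact paths vary by kernel and driver, and writing them requires root:

```shell
# Intel (intel_pstate driver): 1 = turbo disabled
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo

# AMD / acpi-cpufreq: 0 = boost disabled
echo 0 | sudo tee /sys/devices/system/cpu/cpufreq/boost

# Check that the reported clock stays at the base frequency under load:
grep MHz /proc/cpuinfo
```

As the FAQ notes, this pins the whole system to its base clock, so the performance cost applies to every workload, not just the crypto code.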
I can't read that unfortunately (looks to be German).
That remote timing attack seems pretty much impossible to exploit in practice.
It took them 36 and 89 hours, respectively, to extract the key, and that was under laboratory conditions where the network latency was predictable and the CPU was doing pretty much nothing besides running that crypto library. The timing measurements need to be very precise for this attack.
In a real-world scenario you'd have bigger deviations in network latency, and servers usually run tons of processes/threads in parallel → that would affect the frequency and the processing time of the crypto library as well, and thus make it practically impossible for the attacker to infer anything from timing alone.
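The noise argument can be made concrete with a back-of-the-envelope calculation: to distinguish two timing means separated by Δ against jitter with standard deviation σ, you need very roughly n ≈ (2zσ/Δ)² samples per guess, so the required sample count grows with the *square* of the jitter. All numbers below are illustrative assumptions, not values from the paper:

```python
import math

def samples_needed(sigma_us: float, delta_us: float, z: float = 3.0) -> int:
    """Rough sample count to separate two timing means delta_us apart
    when each measurement has jitter sigma_us (z sigmas of confidence)."""
    return math.ceil((2 * z * sigma_us / delta_us) ** 2)

DELTA_US = 50.0  # assumed secret-dependent timing shift

lab = samples_needed(sigma_us=10.0, delta_us=DELTA_US)        # quiet lab LAN
wan = samples_needed(sigma_us=2000.0, delta_us=DELTA_US)      # WAN jitter + busy server

# 200x the jitter -> on the order of 40,000x the samples (n scales as sigma^2)
print(lab, wan)
```

Under these assumptions a 36-hour lab attack balloons into something entirely impractical once internet-scale jitter and a loaded server are in the picture — which matches the intuition above, even before accounting for other processes perturbing the frequency itself.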