Ok, let me shed some light on the latest release of the CloudLinux 7 and CloudLinux 6 kernels with the MDS vulnerability patch.
MDS vulnerability explanation
Over the last three days, we’ve received a whole bunch of questions like "Should I disable Hyper-Threading or not?" and "How can disabling Hyper-Threading impact performance?" So, here is some important information on the subject.
But what is the problem? A CPU with Hyper-Threading has two execution threads per physical core. Both threads share the same resources inside the core, which means a sibling thread can see the same data the primary thread can.
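You can see which logical CPUs are sibling threads of the same physical core via the sysfs topology files, for example (output shown is from a hypothetical 8-core/16-thread machine; yours will differ):
# cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
0,8
Here logical CPUs 0 and 8 are the two hardware threads of physical core 0.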
So what?
The problem enables several types of attack:
- Kernel-to-userspace attacks.
- Userspace-to-userspace attacks between threads running on the same physical core.
- Virtual-machine-to-virtual-machine attacks.
Each attack vector needs a different mitigation:
- If you have a trusted userspace, CPU buffers need to be flushed on exit from the kernel so that applications can’t see kernel data. That’s what the microcode update provides: the ability to perform this flush. If your CPU is supported by the microcode update, you will see the message "Mitigation: Clear CPU buffers" or "Mitigation: Clear CPU buffers; SMT vulnerable" in the /sys/devices/system/cpu/vulnerabilities/mds file or in the output of dmesg | grep MDS.
- A virtual-machine-to-virtual-machine (VM) attack is different from the previous one. Two VMs can share the same CPU core, so they can share CPU data. The kernel adds a CPU buffer flush in this case too, in the same way as the first point.
- But resources aren’t completely isolated within a physical core: the primary thread and its sibling share some resources at run time, so different applications on the host, or applications in different VMs, can access the same data. Intel CPUs offer varying protection against this type of attack: some have limited protection, others none at all. If you want to guarantee this attack can never happen, append ",nosmt" to the mds kernel parameter, for example mds=full,nosmt. This adds an additional check and enables Hyper-Threading only when it is safe; in that case you will see the message "Mitigation: Clear CPU buffers" in the sysfs file or in the dmesg output. Currently, only Atom-series CPUs have this protection.
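To check which mitigation is currently active, read the sysfs file mentioned above or grep the kernel log (illustrative output; yours depends on your CPU, microcode, and kernel):
# cat /sys/devices/system/cpu/vulnerabilities/mds
Mitigation: Clear CPU buffers; SMT vulnerable
# dmesg | grep MDS
MDS: Mitigation: Clear CPU buffers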
The problem could also be mitigated by a CPU scheduler change: the scheduler would have to avoid balancing load between sibling vCPUs. But this is a very large change, and it is not available in the Linux kernel yet.
What CPUs can have their microcode updated?
Intel doesn’t provide a microcode update for all CPUs; only some newer ones can be updated at the moment.
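Before and after updating, you can check which microcode revision your CPU is currently running; it is reported per logical CPU in /proc/cpuinfo (the revision shown is just an example, yours will differ):
# grep -m1 microcode /proc/cpuinfo
microcode	: 0x42e
On CloudLinux and RHEL-style systems, the updates themselves are normally shipped in the microcode_ctl package.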
Product Names | CPUID | CPUID Intel format | Platform ID |
---|---|---|---|
Xeon Scalable Gen2 | 06-55-7 | 50657 | bf |
Core Gen2 | 06-2a-7 | 206a7 | 12 |
Core Gen3 | 06-3a-9 | 306a9 | 12 |
Core Gen4 | 06-3c-3 | 306c3 | 32 |
Core Gen5 | 06-3d-4 | 306d4 | c0 |
Core Gen3 X Series; Xeon E5 v2 | 06-3e-4 | 306e4 | ed |
Xeon E7 v2 | 06-3e-7 | 306e7 | ed |
Core Gen4 X series; Xeon E5 v3 | 06-3f-2 | 306f2 | 6f |
Xeon E7 v3 | 06-3f-4 | 306f4 | 80 |
Core Gen4 | 06-45-1 | 40651 | 72 |
Core Gen4 | 06-46-1 | 40661 | 32 |
Core Gen5 | 06-47-1 | 40671 | 22 |
Core Gen6 | 06-4e-3 | 406e3 | c0 |
Xeon Scalable | 06-55-4 | 50654 | b7 |
Xeon D-21xx | 06-55-4 | 50654 | b7 |
Xeon D-1520/40 | 06-56-2 | 50662 | 10 |
Xeon D-1518/19/21/27/28/31/33/37/41/48, Pentium D1507/08/09/17/19 | 06-56-3 | 50663 | 10 |
Xeon D-1557/59/67/71/77/81/87 | 06-56-4 | 50664 | 10 |
Xeon D-1513N/23/33/43/53 | 06-56-5 | 50665 | 10 |
Pentium N/J4xxx, Celeron N/J3xxx, Atom x5/7-E39xx | 06-5c-9 | 506c9 | 3 |
Core Gen6; Xeon E3 v5 | 06-5e-3 | 506e3 | 36 |
Atom Processor C Series | 06-5f-1 | 506f1 | 01 |
Pentium Silver N/J5xxx, Celeron N/J4xxx | 06-7a-1 | 706a1 | 01 |
Core Gen8 Mobile | 06-8e-9 | 806e9 | 10 |
Core Gen7 Mobile | 06-8e-9 | 806e9 | c0 |
Core Gen8 Mobile | 06-8e-a | 806ea | c0 |
Core Gen8 Mobile | 06-8e-b | 806eb | d0 |
Core Gen8 Mobile | 06-8e-d | 806ed | 94 |
Core Gen7; Xeon E3 v6 | 06-9e-9 | 906e9 | 2a |
Core Gen8 Desktop, Mobile, Xeon E | 06-9e-a | 906ea | 22 |
Core Gen8 | 06-9e-b | 906eb | 02 |
Core Gen9 | 06-9e-c | 906ec | 22 |
Core Gen9 Mobile | 06-9e-d | 906ed | 22 |
Microcode updates for some other CPUs are planned for the future.
Product Names | CPUID | Platform ID |
---|---|---|
Intel® Atom® Processor C2750, C2730, C2550, C2530, C2350 | 406D8 | 1 |
Intel® Core™ Processor Extreme Edition i7-3960X, i7-3970X; Intel® Core™ Processor i7-3820, i7-3930K | 206D7 | 6D |
Intel® Xeon® Processor E5-2620, E5-2630, E5-2630L, E5-2640, E5-2650, E5-2650L, E5-2660, E5-2667, E5-2670, E5-2680, E5-2690 | 206D6 | 6D |
Intel® Xeon® Processor E5-1428L, E5-1620, E5-1650, E5-1660, E5-2403, E5-2407, E5-2418L, E5-2420, E5-2428L, E5-2430, E5-2430L, E5-2440, E5-2448L, E5-2450, E5-2450L, E5-2470, E5-2603, E5-2609, E5-2620, E5-2630, E5-2630L, E5-2637, E5-2640, E5-2643, E5-2648L, E5-2650, E5-2650L, E5-2658, E5-2660, E5-2665, E5-2667, E5-2670, E5-2680, E5-2687W, E5-2690, E5-4603, E5-4607, E5-4610, E5-4617, E5-4620, E5-4640, E5-4650, E5-4650L; Intel® Pentium® Processor 1405 | 206D7 | 6D |
Intel® Atom® Processor Z3770, Z3740, Z3770D, Z3740D, Z3680 | 30673 | 2 |
Intel® Pentium® Processor J2900, J2850; Intel® Pentium® Processor N3520, N3510; Intel® Celeron® Processor J1900, J1850, J1800, J1750; Intel® Celeron® Processor N2920, N2910, N2820, N2815, N2810, N2806, N2805 | 30673 | 0C |
To determine the CPUID, use the command:
# a=$(head -3 /proc/cpuinfo | tail -1 | awk '{print $4}'); b=$(head -4 /proc/cpuinfo | tail -1 | awk '{print $3}'); c=$(head -6 /proc/cpuinfo | tail -1 | awk '{print $3}'); printf "%02x-%02x-%x\n" $a $b $c
To determine the CPUID in Intel format, use the command:
# a=$(head -3 /proc/cpuinfo | tail -1 | awk '{print $4}'); b=$(head -4 /proc/cpuinfo | tail -1 | awk '{print $3}'); c=$(head -6 /proc/cpuinfo | tail -1 | awk '{print $3}'); cpuid=$(printf "%02x-%02x-%x" $a $b $c); printf ${cpuid:3:1}${cpuid:0:2}${cpuid:4:1}${cpuid:6:2}"\n"
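As a worked example (assuming a Xeon E5 v2 host, i.e. family 0x06, model 0x3e, stepping 4), the two commands print:
06-3e-4
306e4
which you can look up in the "CPUID" and "CPUID Intel format" columns of the tables above.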
There are several CPUs that will not receive updated microcode. See the detailed list here: https://www.intel.com/content/dam/www/public/us/en/documents/corporate-information/SA00233-microcode-update-guidance_05132019.pdf.
For these processors, it is recommended to disable Hyper-Threading.
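If your kernel supports the SMT control interface (RHEL- and CloudLinux 7-based kernels backport it as part of the L1TF mitigations; verify that the file below exists on your system), Hyper-Threading can be disabled at runtime. A minimal sketch:
# echo off > /sys/devices/system/cpu/smt/control
# cat /sys/devices/system/cpu/smt/active
0
This setting does not survive a reboot; to make it permanent, add the nosmt kernel parameter or disable Hyper-Threading in the BIOS/UEFI.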
How disabling Hyper-Threading can impact performance
We have not received any reports about the performance impact of these MDS mitigations. However, Red Hat reports that there is one, and that the “impact will be felt more in applications with high rates of user-kernel-user space transitions. For example, system calls, NMIs, and interrupts.”
They have conducted several tests to evaluate the impact on the following workloads:
- Applications that spend a lot of time in user mode tended to show the smallest slowdown, usually in the 0-5% range.
- Applications that did a lot of small block or small packet network I/O showed slowdowns in the 10-25% range.
- Some microbenchmarks that did nothing other than enter and return from user space to kernel space showed higher slowdowns.
As Red Hat specialists have said, “MDS mitigation can be fully enabled, with SMT also disabled, by adding the “mds=full,nosmt” flag to the kernel boot command line.
MDS mitigation can be fully disabled by adding the “mds=off” flag to the kernel boot command line.
There is no way to disable it at runtime.”
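On a grub2-based CloudLinux or RHEL-style system, one common way to add the flag persistently is with grubby (a sketch, assuming grubby is installed; swap in mds=off if that is the behavior you want):
# grubby --update-kernel=ALL --args="mds=full,nosmt"
# reboot
After the reboot, you can confirm the flag is active with cat /proc/cmdline.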
More reading
- You can find a complete review of the MDS vulnerability from Red Hat here: https://access.redhat.com/security/vulnerabilities/mds.
- For the performance impact of disabling Hyper-Threading, see the “Disabling Hyper-Threading” section at https://access.redhat.com/security/vulnerabilities/L1TF-perf.
- If you are using KernelCare, please visit https://blog.kernelcare.com/zombieload where you can find the MDS vulnerability patch release schedule, instructions on how to mitigate the MDS vulnerability, and also watch the video with insights regarding MDS from our CEO, Igor Seletskiy.