Commit be7ef41

KVM: arm64: Disable MPAM visibility by default and ignore VMM writes
jira LE-4649
Rebuild_History Non-Buildable kernel-5.14.0-570.60.1.el9_6
commit-author James Morse <james.morse@arm.com>
commit 6685f5d
Empty-Commit: Cherry-Pick Conflicts during history rebuild. Will be included
in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-5.14.0-570.60.1.el9_6/6685f5d5.failed

commit 011e5f5 ("arm64/cpufeature: Add remaining feature bits in
ID_AA64PFR0 register") exposed the MPAM field of AA64PFR0_EL1 to guests,
but didn't add trap handling. A previous patch supplied the missing trap
handling.

Existing VMs that have the MPAM field of ID_AA64PFR0_EL1 set need to
be migratable, but there is little point enabling the MPAM CPU
interface on new VMs until there is something a guest can do with it.

Clear the MPAM field from the guest's ID_AA64PFR0_EL1 and, on hardware
that supports MPAM, politely ignore the VMM's attempts to set this bit.

Guests exposed to this bug have the sanitised value of the MPAM field,
so only the correct value needs to be ignored. This means the field
can continue to be used to block migration to incompatible hardware
(between MPAM=1 and MPAM=5), and the VMM can't rely on the field
being ignored.

Signed-off-by: James Morse <james.morse@arm.com>
Co-developed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20241030160317.2528209-7-joey.gouly@arm.com
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
(cherry picked from commit 6685f5d)
Signed-off-by: Jonathan Maple <jmaple@ciq.com>

# Conflicts:
#	arch/arm64/kvm/sys_regs.c
1 parent 105daa8 commit be7ef41
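
The rule the message describes (accept the host's sanitised MPAM value from
the VMM but ignore it, reject anything else) is small enough to model in
isolation. Below is a minimal, self-contained C sketch of that rule, not the
kernel code itself; the real implementation is set_id_aa64pfr0_el1() in the
diff further down. MPAM_MASK, the validate() stub and the pretend host value
are illustrative assumptions for the demo.

/*
 * Illustrative model of the MPAM write-tolerance rule from this commit.
 * NOT the kernel implementation: names and masks are simplified stand-ins
 * for set_id_aa64pfr0_el1()/set_id_reg() in arch/arm64/kvm/sys_regs.c.
 */
#include <stdio.h>
#include <stdint.h>

/* ID_AA64PFR0_EL1.MPAM occupies bits [43:40]; hard-coded here for the demo. */
#define MPAM_MASK	(0xfULL << 40)

/* Pretend the host's sanitised register reports MPAM=1. */
static const uint64_t hw_val = 1ULL << 40;

/* Stand-in for set_id_reg(): KVM exposes MPAM as 0, so any MPAM bits that
 * survive the filter below are an unsupported feature and are refused. */
static int validate(uint64_t user_val)
{
	return (user_val & MPAM_MASK) ? -1 /* -EINVAL */ : 0;
}

/* The commit's rule: a write of exactly the host's sanitised MPAM value is
 * politely ignored; any other value must pass validation on its own. */
static int set_pfr0(uint64_t user_val)
{
	if ((hw_val & MPAM_MASK) == (user_val & MPAM_MASK))
		user_val &= ~MPAM_MASK;
	return validate(user_val);
}

int main(void)
{
	printf("MPAM=0: %d\n", set_pfr0(0));		/* accepted */
	printf("MPAM=1: %d\n", set_pfr0(1ULL << 40));	/* ignored, accepted */
	printf("MPAM=5: %d\n", set_pfr0(5ULL << 40));	/* rejected */
	return 0;
}

Run standalone, this prints 0, 0, -1: writing the host's own sanitised value
is silently dropped, while restoring a saved MPAM=5 value onto an MPAM=1 host
is refused, which is exactly the migration-blocking property the message
calls out.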

File tree

1 file changed: +137 −0 lines changed
ciq/ciq_backports/kernel-5.14.0-570.60.1.el9_6/6685f5d5.failed
Lines changed: 137 additions & 0 deletions

@@ -0,0 +1,137 @@
KVM: arm64: Disable MPAM visibility by default and ignore VMM writes

jira LE-4649
Rebuild_History Non-Buildable kernel-5.14.0-570.60.1.el9_6
commit-author James Morse <james.morse@arm.com>
commit 6685f5d572c22e1003e7c0d089afe1c64340ab1f
Empty-Commit: Cherry-Pick Conflicts during history rebuild.
Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-5.14.0-570.60.1.el9_6/6685f5d5.failed

commit 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits in
ID_AA64PFR0 register") exposed the MPAM field of AA64PFR0_EL1 to guests,
but didn't add trap handling. A previous patch supplied the missing trap
handling.

Existing VMs that have the MPAM field of ID_AA64PFR0_EL1 set need to
be migratable, but there is little point enabling the MPAM CPU
interface on new VMs until there is something a guest can do with it.

Clear the MPAM field from the guest's ID_AA64PFR0_EL1 and on hardware
that supports MPAM, politely ignore the VMMs attempts to set this bit.

Guests exposed to this bug have the sanitised value of the MPAM field,
so only the correct value needs to be ignored. This means the field
can continue to be used to block migration to incompatible hardware
(between MPAM=1 and MPAM=5), and the VMM can't rely on the field
being ignored.

Signed-off-by: James Morse <james.morse@arm.com>
Co-developed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20241030160317.2528209-7-joey.gouly@arm.com
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
(cherry picked from commit 6685f5d572c22e1003e7c0d089afe1c64340ab1f)
Signed-off-by: Jonathan Maple <jmaple@ciq.com>

# Conflicts:
#	arch/arm64/kvm/sys_regs.c

diff --cc arch/arm64/kvm/sys_regs.c
index d97ad622075f,7dc4a5ce5292..000000000000
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@@ -1553,7 -1544,12 +1553,8 @@@ static u64 __kvm_read_sanitised_id_reg(
  		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTEX);
  		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_DF2);
  		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_PFAR);
 +		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MPAM_frac);
  		break;
 -	case SYS_ID_AA64PFR2_EL1:
 -		/* We only expose FPMR */
 -		val &= ID_AA64PFR2_EL1_FPMR;
 -		break;
  	case SYS_ID_AA64ISAR1_EL1:
  		if (!vcpu_has_ptrauth(vcpu))
  			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) |
@@@ -1826,6 -1845,42 +1834,45 @@@ static int set_id_dfr0_el1(struct kvm_v
  	return set_id_reg(vcpu, rd, val);
  }

++<<<<<<< HEAD
++=======
+ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
+ 			       const struct sys_reg_desc *rd, u64 user_val)
+ {
+ 	u64 hw_val = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+ 	u64 mpam_mask = ID_AA64PFR0_EL1_MPAM_MASK;
+
+ 	/*
+ 	 * Commit 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits
+ 	 * in ID_AA64PFR0 register") exposed the MPAM field of AA64PFR0_EL1 to
+ 	 * guests, but didn't add trap handling. KVM doesn't support MPAM and
+ 	 * always returns an UNDEF for these registers. The guest must see 0
+ 	 * for this field.
+ 	 *
+ 	 * But KVM must also accept values from user-space that were provided
+ 	 * by KVM. On CPUs that support MPAM, permit user-space to write
+ 	 * the sanitizied value to ID_AA64PFR0_EL1.MPAM, but ignore this field.
+ 	 */
+ 	if ((hw_val & mpam_mask) == (user_val & mpam_mask))
+ 		user_val &= ~ID_AA64PFR0_EL1_MPAM_MASK;
+
+ 	return set_id_reg(vcpu, rd, user_val);
+ }
+
+ static int set_id_aa64pfr1_el1(struct kvm_vcpu *vcpu,
+ 			       const struct sys_reg_desc *rd, u64 user_val)
+ {
+ 	u64 hw_val = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
+ 	u64 mpam_mask = ID_AA64PFR1_EL1_MPAM_frac_MASK;
+
+ 	/* See set_id_aa64pfr0_el1 for comment about MPAM */
+ 	if ((hw_val & mpam_mask) == (user_val & mpam_mask))
+ 		user_val &= ~ID_AA64PFR1_EL1_MPAM_frac_MASK;
+
+ 	return set_id_reg(vcpu, rd, user_val);
+ }
+
++>>>>>>> 6685f5d572c2 (KVM: arm64: Disable MPAM visibility by default and ignore VMM writes)
  /*
   * cpufeature ID register user accessors
   *
@@@ -2365,19 -2430,15 +2412,31 @@@ static const struct sys_reg_desc sys_re

  	/* AArch64 ID registers */
  	/* CRm=4 */
++<<<<<<< HEAD
 +	{ SYS_DESC(SYS_ID_AA64PFR0_EL1),
 +	  .access = access_id_reg,
 +	  .get_user = get_id_reg,
 +	  .set_user = set_id_reg,
 +	  .reset = read_sanitised_id_aa64pfr0_el1,
 +	  .val = ~(ID_AA64PFR0_EL1_AMU |
 +		   ID_AA64PFR0_EL1_MPAM |
 +		   ID_AA64PFR0_EL1_SVE |
 +		   ID_AA64PFR0_EL1_RAS |
 +		   ID_AA64PFR0_EL1_GIC |
 +		   ID_AA64PFR0_EL1_AdvSIMD |
 +		   ID_AA64PFR0_EL1_FP), },
 +	ID_WRITABLE(ID_AA64PFR1_EL1, ~(ID_AA64PFR1_EL1_PFAR |
++=======
+ 	ID_FILTERED(ID_AA64PFR0_EL1, id_aa64pfr0_el1,
+ 		    ~(ID_AA64PFR0_EL1_AMU |
+ 		      ID_AA64PFR0_EL1_MPAM |
+ 		      ID_AA64PFR0_EL1_SVE |
+ 		      ID_AA64PFR0_EL1_RAS |
+ 		      ID_AA64PFR0_EL1_AdvSIMD |
+ 		      ID_AA64PFR0_EL1_FP)),
+ 	ID_FILTERED(ID_AA64PFR1_EL1, id_aa64pfr1_el1,
+ 		    ~(ID_AA64PFR1_EL1_PFAR |
++>>>>>>> 6685f5d572c2 (KVM: arm64: Disable MPAM visibility by default and ignore VMM writes)
  				       ID_AA64PFR1_EL1_DF2 |
  				       ID_AA64PFR1_EL1_MTEX |
  				       ID_AA64PFR1_EL1_THE |
* Unmerged path arch/arm64/kvm/sys_regs.c
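
For context on how this behaviour reaches user-space: a VMM restoring ID
registers on a migration target writes them with the KVM_SET_ONE_REG ioctl.
The sketch below is a hedged user-space illustration, assuming vcpu_fd is an
already-initialised arm64 KVM vCPU file descriptor (all VM and vCPU creation
boilerplate is omitted); KVM_SET_ONE_REG, struct kvm_one_reg and
ARM64_SYS_REG are the standard arm64 KVM uapi, while restore_pfr0() is a
hypothetical helper name.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* ID_AA64PFR0_EL1 is encoded as op0=3, op1=0, CRn=0, CRm=4, op2=0. */
#define REG_ID_AA64PFR0_EL1	ARM64_SYS_REG(3, 0, 0, 4, 0)

/* Write a saved ID_AA64PFR0_EL1 value into a vCPU, as a migration target
 * would. With this patch, a value whose MPAM field matches the host's
 * sanitised register is accepted (KVM quietly ignores the field); a
 * mismatched non-zero MPAM field makes the ioctl fail with EINVAL, which
 * is what blocks migration between MPAM=1 and MPAM=5 hosts. */
static int restore_pfr0(int vcpu_fd, uint64_t saved_val)
{
	struct kvm_one_reg reg = {
		.id   = REG_ID_AA64PFR0_EL1,
		.addr = (uintptr_t)&saved_val,
	};

	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}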
