Commit 83f44e4

iommu/vt-d: Support enforce_cache_coherency only for empty domains
jira LE-1907
Rebuild_History Non-Buildable kernel-5.14.0-427.18.1.el9_4
commit-author Lu Baolu <baolu.lu@linux.intel.com>
commit e645c20
Empty-Commit: Cherry-Pick Conflicts during history rebuild.
Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-5.14.0-427.18.1.el9_4/e645c20e.failed

The enforce_cache_coherency callback ensures DMA cache coherency for
devices attached to the domain.

Intel IOMMU supports enforced DMA cache coherency when the Snoop
Control bit in the IOMMU's extended capability register is set.
Supporting it differs between legacy and scalable modes.

In legacy mode, it's supported page-level by setting the SNP field
in second-stage page-table entries. In scalable mode, it's supported
in PASID-table granularity by setting the PGSNP field in PASID-table
entries.

In legacy mode, mappings before attaching to a device have SNP
fields cleared, while mappings after the callback have them set.
This means partial DMAs are cache coherent while others are not.

One possible fix is replaying mappings and flipping SNP bits when
attaching a domain to a device. But this seems to be over-engineered,
given that all real use cases just attach an empty domain to a device.

To meet practical needs while reducing mode differences, only support
enforce_cache_coherency on a domain without mappings if SNP field is
used.

Fixes: fc0051c ("iommu/vt-d: Check domain force_snooping against attached devices")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20231114011036.70142-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
(cherry picked from commit e645c20)
Signed-off-by: Jonathan Maple <jmaple@ciq.com>

# Conflicts:
#	drivers/iommu/intel/iommu.h
1 parent 8ce643f commit 83f44e4

File tree

1 file changed: +87 −0 lines
Lines changed: 87 additions & 0 deletions
@@ -0,0 +1,87 @@
iommu/vt-d: Support enforce_cache_coherency only for empty domains

jira LE-1907
Rebuild_History Non-Buildable kernel-5.14.0-427.18.1.el9_4
commit-author Lu Baolu <baolu.lu@linux.intel.com>
commit e645c20e8e9cde549bc233435d3c1338e1cd27fe
Empty-Commit: Cherry-Pick Conflicts during history rebuild.
Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-5.14.0-427.18.1.el9_4/e645c20e.failed

The enforce_cache_coherency callback ensures DMA cache coherency for
devices attached to the domain.

Intel IOMMU supports enforced DMA cache coherency when the Snoop
Control bit in the IOMMU's extended capability register is set.
Supporting it differs between legacy and scalable modes.

In legacy mode, it's supported page-level by setting the SNP field
in second-stage page-table entries. In scalable mode, it's supported
in PASID-table granularity by setting the PGSNP field in PASID-table
entries.

In legacy mode, mappings before attaching to a device have SNP
fields cleared, while mappings after the callback have them set.
This means partial DMAs are cache coherent while others are not.

One possible fix is replaying mappings and flipping SNP bits when
attaching a domain to a device. But this seems to be over-engineered,
given that all real use cases just attach an empty domain to a device.

To meet practical needs while reducing mode differences, only support
enforce_cache_coherency on a domain without mappings if SNP field is
used.

Fixes: fc0051cb9590 ("iommu/vt-d: Check domain force_snooping against attached devices")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20231114011036.70142-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
(cherry picked from commit e645c20e8e9cde549bc233435d3c1338e1cd27fe)
Signed-off-by: Jonathan Maple <jmaple@ciq.com>

# Conflicts:
#	drivers/iommu/intel/iommu.h
diff --cc drivers/iommu/intel/iommu.h
index 7dac94f62b4e,ce030c5b5772..000000000000
--- a/drivers/iommu/intel/iommu.h
+++ b/drivers/iommu/intel/iommu.h
@@@ -592,6 -600,11 +592,14 @@@ struct dmar_domain
  					 * otherwise, goes through the second
  					 * level.
  					 */
++<<<<<<< HEAD
++=======
+ 	u8 dirty_tracking:1;	/* Dirty tracking is enabled */
+ 	u8 nested_parent:1;	/* Has other domains nested on it */
+ 	u8 has_mappings:1;	/* Has mappings configured through
+ 				 * iommu_map() interface.
+ 				 */
++>>>>>>> e645c20e8e9c (iommu/vt-d: Support enforce_cache_coherency only for empty domains)
  
  	spinlock_t lock;		/* Protect device tracking lists */
  	struct list_head devices;	/* all devices' list */
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 4c3707384bd9..744e4e6b8d72 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -2204,6 +2204,8 @@ __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
 			attr |= DMA_FL_PTE_DIRTY;
 	}
 
+	domain->has_mappings = true;
+
 	pteval = ((phys_addr_t)phys_pfn << VTD_PAGE_SHIFT) | attr;
 
 	while (nr_pages > 0) {
@@ -4309,7 +4311,8 @@ static bool intel_iommu_enforce_cache_coherency(struct iommu_domain *domain)
 		return true;
 
 	spin_lock_irqsave(&dmar_domain->lock, flags);
-	if (!domain_support_force_snooping(dmar_domain)) {
+	if (!domain_support_force_snooping(dmar_domain) ||
+	    (!dmar_domain->use_first_level && dmar_domain->has_mappings)) {
 		spin_unlock_irqrestore(&dmar_domain->lock, flags);
 		return false;
 	}
* Unmerged path drivers/iommu/intel/iommu.h
