
Commit 2d1684a

matt-auld authored and lucasdemarchi committed
drm/xe/uapi: loosen used tracking restriction
Currently this is hidden behind perfmon_capable() since this is
technically an info leak, given that it is a system wide metric.
However, the granularity reported here is always PAGE_SIZE aligned,
which matches what the core kernel is already willing to expose to
userspace when querying how many free RAM pages there are on the
system, and that doesn't need any special privileges. In addition,
other drm drivers seem happy to expose this.

The motivation here is with oneAPI, where they want to use the system
wide 'used' reporting, not the per-client fdinfo stats. This has also
come up with some perf overlay applications wanting this information.

Fixes: 1105ac1 ("drm/xe/uapi: restrict system wide accounting")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Joshua Santosh <joshua.santosh.ranjan@intel.com>
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: <stable@vger.kernel.org> # v6.8+
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://lore.kernel.org/r/20250919122052.420979-2-matthew.auld@intel.com
(cherry picked from commit 4d0b035fd6dae8ee48e9c928b10f14877e595356)
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
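For context, below is a minimal userspace sketch (not part of this commit) of how the affected counters are reached through DRM_XE_DEVICE_QUERY_MEM_REGIONS. The two-call size/fill pattern and the struct and field names follow include/uapi/drm/xe_drm.h; the error handling, helper name and assumption of an already-open xe DRM node are illustrative only.

/*
 * Sketch only: read per-region memory usage through the xe query uapi.
 * Assumes 'fd' is an open xe DRM node (e.g. a render node) and that
 * <drm/xe_drm.h> from the kernel uapi headers is available.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <drm/xe_drm.h>

static void print_mem_regions(int fd)
{
	struct drm_xe_device_query query = {
		.query = DRM_XE_DEVICE_QUERY_MEM_REGIONS,
	};
	struct drm_xe_query_mem_regions *regions;
	uint32_t i;

	/* First pass: size == 0, the kernel reports how many bytes to allocate. */
	if (ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query))
		return;

	regions = calloc(1, query.size);
	if (!regions)
		return;

	/* Second pass: the kernel fills in the region array, including 'used'. */
	query.data = (uintptr_t)regions;
	if (!ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query)) {
		for (i = 0; i < regions->num_mem_regions; i++)
			printf("region %u: total %llu bytes, used %llu bytes\n", i,
			       (unsigned long long)regions->mem_regions[i].total_size,
			       (unsigned long long)regions->mem_regions[i].used);
	}

	free(regions);
}

With this patch the 'used' (and VRAM 'cpu_visible_used') fields are populated for any caller; previously they read back as zero unless the caller was perfmon_capable().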
1 parent 6982a46 commit 2d1684a

File tree

1 file changed (+6, -9 lines)


drivers/gpu/drm/xe/xe_query.c

Lines changed: 6 additions & 9 deletions
@@ -276,8 +276,7 @@ static int query_mem_regions(struct xe_device *xe,
 	mem_regions->mem_regions[0].instance = 0;
 	mem_regions->mem_regions[0].min_page_size = PAGE_SIZE;
 	mem_regions->mem_regions[0].total_size = man->size << PAGE_SHIFT;
-	if (perfmon_capable())
-		mem_regions->mem_regions[0].used = ttm_resource_manager_usage(man);
+	mem_regions->mem_regions[0].used = ttm_resource_manager_usage(man);
 	mem_regions->num_mem_regions = 1;
 
 	for (i = XE_PL_VRAM0; i <= XE_PL_VRAM1; ++i) {
@@ -293,13 +292,11 @@ static int query_mem_regions(struct xe_device *xe,
 			mem_regions->mem_regions[mem_regions->num_mem_regions].total_size =
 				man->size;
 
-			if (perfmon_capable()) {
-				xe_ttm_vram_get_used(man,
-						     &mem_regions->mem_regions
-						     [mem_regions->num_mem_regions].used,
-						     &mem_regions->mem_regions
-						     [mem_regions->num_mem_regions].cpu_visible_used);
-			}
+			xe_ttm_vram_get_used(man,
+					     &mem_regions->mem_regions
+					     [mem_regions->num_mem_regions].used,
+					     &mem_regions->mem_regions
+					     [mem_regions->num_mem_regions].cpu_visible_used);
 
 			mem_regions->mem_regions[mem_regions->num_mem_regions].cpu_visible_size =
 				xe_ttm_vram_get_cpu_visible_size(man);
