
Conversation

@tianrengao (Contributor) commented on Oct 28, 2025

This resolves a performance concern with the previous matmul bwd implementation. In the previous PR #748, matmul bwd was implemented as a dedicated kernel run in two passes; since matmul fwd is already fully optimized, we can instead simply call it twice, as @ngimel pointed out. In this PR, matmul_autograd and addmm_autograd are updated to use two matmul fwd calls in place of the dedicated matmul_bwd and addmm_bwd kernels, and benchmark/run.py now exercises only these updated backward paths.

However, the @helion.kernel annotation does not allow calling another function (such as matmul_fwd) inside the kernel definition, so the original matmul_bwd and addmm_bwd kernels are preserved purely as examples in examples/matmul.py; they are no longer used in the benchmark run.
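A minimal sketch of the pattern this PR adopts, assuming a torch.autograd.Function wrapper; torch.matmul stands in for the optimized Helion forward kernel so the sketch runs, and the class and variable names are illustrative rather than the PR's exact code:

import torch

matmul = torch.matmul  # stand-in for the optimized Helion fwd kernel

class MatMulAutograd(torch.autograd.Function):
    @staticmethod
    def forward(ctx, mat1, mat2):
        ctx.save_for_backward(mat1, mat2)
        return matmul(mat1, mat2)  # one optimized fwd call

    @staticmethod
    def backward(ctx, grad_out):
        mat1, mat2 = ctx.saved_tensors
        # For out = mat1 @ mat2, the gradients are themselves matmuls:
        # grad_mat1 = grad_out @ mat2.T and grad_mat2 = mat1.T @ grad_out,
        # so the backward reuses the already-optimized fwd kernel twice.
        grad_mat1 = matmul(grad_out, mat2.T)
        grad_mat2 = matmul(mat1.T, grad_out)
        return grad_mat1, grad_mat2

# Usage: out = MatMulAutograd.apply(a, b); out.sum().backward()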

meta-cla bot added the CLA Signed label (managed by the Meta Open Source bot) on Oct 28, 2025
@tianrengao force-pushed the tianren/addmm_bwd_fix_impl branch from 7301838 to ad73fba on November 5, 2025 05:02
@tianrengao marked this pull request as ready for review on November 5, 2025 18:47
@tianrengao requested review from ngimel and yf225 on November 5, 2025 18:52
@tianrengao changed the title from "use matmul direclty in bwd for performance" to "Use matmul fwd directly in autograd for performance" on Nov 5, 2025
Comment on lines +97 to +101
# grad_mat1 = grad_out @ mat2.T
grad_mat1 = matmul(grad_out, mat2.T)

# grad_mat2 = mat1.T @ grad_out
grad_mat2 = matmul(mat1.T, grad_out)
Contributor

You only need to compute these if requires_grad is set on the inputs.
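A minimal sketch of that guard using ctx.needs_input_grad, which mirrors requires_grad of each forward input (this slots into the autograd.Function's backward; matmul again names the Helion fwd kernel):

@staticmethod
def backward(ctx, grad_out):
    mat1, mat2 = ctx.saved_tensors
    grad_mat1 = grad_mat2 = None
    # Launch each matmul only if the corresponding input needs a grad.
    if ctx.needs_input_grad[0]:
        grad_mat1 = matmul(grad_out, mat2.T)
    if ctx.needs_input_grad[1]:
        grad_mat2 = matmul(mat1.T, grad_out)
    return grad_mat1, grad_mat2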

Comment on lines +158 to +165
# grad_bias = beta * grad_out
grad_bias = beta * grad_out

# grad_mat1 = alpha * (grad_out @ mat2.T)
grad_mat1 = alpha * matmul(grad_out, mat2.T)

# grad_mat2 = alpha * (mat1.T @ grad_out)
grad_mat2 = alpha * matmul(mat1.T, grad_out)
Contributor

This results in extra kernels; you should define an epilogue function to fold the scaling into the matmul kernel.

Also, same issue as above.
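A hedged sketch of the suggested fusion. It assumes the Helion matmul kernel accepts an epilogue callable applied to the accumulator tile before the store; the epilogue keyword and its (acc, tile) signature are assumptions, not verified against this PR's kernel:

# Hypothetical: fuse the alpha scaling into the matmul's epilogue so no
# separate elementwise kernel is launched just for the scaling.
grad_mat1 = matmul(grad_out, mat2.T, epilogue=lambda acc, tile: acc * alpha)
grad_mat2 = matmul(mat1.T, grad_out, epilogue=lambda acc, tile: acc * alpha)
grad_bias = beta * grad_out  # cheap elementwise; left unfused here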
