[core][GPU][StaticMatrix] Introduce GPU backend for NuMojo! #276
Draft: shivasankarka wants to merge 174 commits into Mojo-Numerics-and-Algorithms-group:gpu-dev from shivasankarka:gpu_ndarray
Conversation
…and-Algorithms-group#275)

## Pull Request Overview (From Copilot)

This PR enhances ComplexNDArray functionality by adding comparison operators, trait methods, statistical/reduction methods, and array manipulation capabilities. It also introduces temporary Int conversions for strides/shape operations and implements SIMD load/store methods for vectorized calculations.

### Key Changes

- Added trait implementations (ImplicitlyCopyable, Movable) and conversion methods (__bool__, __int__, __float__) for ComplexNDArray
- Implemented magnitude-based comparison operators (__lt__, __le__, __gt__, __ge__) for complex arrays
- Added statistical methods (all, any, sum, prod, mean, max, min, argmax, argmin, cumsum, cumprod) and array manipulation methods (flatten, fill, row, col, clip, round, T, diagonal, trace, tolist, resize)
- Changed internal buffer types from `UnsafePointer[Int]` to `UnsafePointer[Scalar[DType.int]]` in NDArrayShape, NDArrayStrides, and Item structs
- Added SIMD load/store methods (load, store, unsafe_load, unsafe_store) for Item, Shape, and Strides

<details>
<summary>Show a summary per file</summary>

| File | Description |
| ---- | ----------- |
| numojo/routines/indexing.mojo | Added Int conversions for stride operations in compress function |
| numojo/routines/creation.mojo | Removed duplicate import statements |
| numojo/core/ndstrides.mojo | Changed buffer type to Scalar[DType.int], updated __setitem__ validation, added SIMD load/store methods |
| numojo/core/ndshape.mojo | Changed buffer type to Scalar[DType.int], updated __setitem__ validation, added SIMD load/store methods, modified size_of_array calculation |
| numojo/core/ndarray.mojo | Added Int conversions for stride/shape buffer accesses throughout |
| numojo/core/item.mojo | Changed buffer type to Scalar[DType.int], removed Item.__init__(idx, shape) constructor and offset() method, added SIMD load/store methods |
| numojo/core/complex/complex_simd.mojo | Added ImplicitlyCopyable and Movable traits to ComplexSIMD |
| numojo/core/complex/complex_ndarray.mojo | Added comparison operators, conversion methods, power operations, statistical methods, and array manipulation methods; added Int conversions for stride operations |

</details>

---------

Co-authored-by: ZHU Yuhao 朱宇浩 <dr.yuhao.zhu@outlook.com>
Follow-up commits address `DType.index` errors and formatting errors.
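The magnitude-based comparison operators described in the overview above can be illustrated with a small standalone sketch. `ComplexPair` and `norm` below are hypothetical stand-ins for illustration only, not NuMojo's `ComplexSIMD`/`ComplexNDArray` API.

```mojo
from math import sqrt

# Illustrative stand-in type; NOT NuMojo's ComplexSIMD/ComplexNDArray.
struct ComplexPair:
    var re: Float64
    var im: Float64

    fn __init__(out self, re: Float64, im: Float64):
        self.re = re
        self.im = im

    fn norm(self) -> Float64:
        # |z| = sqrt(re^2 + im^2)
        return sqrt(self.re * self.re + self.im * self.im)

    fn __lt__(self, other: Self) -> Bool:
        # Complex numbers have no natural total order, so comparisons
        # here (as in the PR's ComplexNDArray) are defined on magnitudes.
        return self.norm() < other.norm()


def main():
    var a = ComplexPair(3.0, 4.0)   # |a| = 5
    var b = ComplexPair(6.0, 8.0)   # |b| = 10
    print(a < b)  # True: 5 < 10
```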
This PR introduces initial GPU support for NuMojo (#273).

It adds unified device and storage abstractions, a basic matrix representation for GPU computations, and several core GPU kernels (elementwise add/sub/mul, matmul, fill, and a block-level reduction). This work lays the foundation for using Mojo GPU features to accelerate array operations. The design is inspired by PyTorch's `Tensor` while keeping NumPy-like API choices where possible.

### Notes

- `StaticMatrix` is still a very basic structure with only some getter and setter functions, meant to showcase a proof of concept of a GPU backend in NuMojo. We will expand it in the future to include all features of the `Matrix` type.
- `StaticMatrix`, with its compile-time shape and strides, would help optimize a lot of the loops and GPU kernels! It is a `Matrix` type that takes advantage of Mojo's compile-time capabilities as much as possible. We will modify the API to support further compile-time optimisations in future updates. A rough sketch of the idea follows below.
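Below is a minimal, hypothetical sketch of how a compile-time-shaped matrix could be parameterized in Mojo. The field and method names are illustrative assumptions and may not match the actual `StaticMatrix` in `numojo/core/staticmatrix.mojo`.

```mojo
from memory import UnsafePointer

# Hypothetical compile-time-shaped matrix; NOT the actual StaticMatrix
# from this PR. Shape (and hence strides) are struct parameters, so
# offsets and loop bounds are known at compile time.
struct StaticMatrix[dtype: DType, rows: Int, cols: Int]:
    var _buf: UnsafePointer[Scalar[dtype]]

    fn __init__(out self):
        # rows * cols is a compile-time constant.
        self._buf = UnsafePointer[Scalar[dtype]].alloc(rows * cols)

    fn __getitem__(self, i: Int, j: Int) -> Scalar[dtype]:
        # Row-major offset; `cols` folds to a constant at compile time.
        return self._buf[i * cols + j]

    fn __setitem__(mut self, i: Int, j: Int, value: Scalar[dtype]):
        self._buf[i * cols + j] = value

    fn __del__(owned self):
        self._buf.free()


def main():
    # A 3x4 Float32 matrix; the shape is part of the type.
    var m = StaticMatrix[DType.float32, 3, 4]()
    m[1, 2] = 42.0
    print(m[1, 2])
```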
### What’s Included

**Device & context abstraction**
- `numojo/core/gpu/device.mojo` — device and context primitives to target the GPU

**Unified storage**
- `numojo/core/gpu/storage.mojo` — unified CPU/GPU memory management for buffers

**Matrix primitives**
- `numojo/core/staticmatrix.mojo` — adds a `StaticMatrix` struct to prototype GPU usage before extending to N-D arrays

**GPU kernels**
- `numojo/core/gpu/matrix_kernels.mojo` — implements `add`, `mul`, `fill` (and `sub`), plus `matrix_reduce_sum_kernel` (a per-block reduction); a kernel sketch follows this list

**Other**
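To make the kernel list above concrete, here is a minimal, hypothetical sketch of an elementwise add kernel using Mojo's `gpu` module. The name `add_kernel` and its signature are assumptions for illustration; the actual kernels in `numojo/core/gpu/matrix_kernels.mojo` may differ.

```mojo
from gpu import block_dim, block_idx, thread_idx
from memory import UnsafePointer

# Illustrative elementwise add kernel (one thread per element).
# The real kernels in numojo/core/gpu/matrix_kernels.mojo may use
# different names, layouts, and signatures.
fn add_kernel(
    output: UnsafePointer[Float32],
    a: UnsafePointer[Float32],
    b: UnsafePointer[Float32],
    size: Int,
):
    # Global element index computed from block and thread ids.
    var i = Int(block_idx.x) * Int(block_dim.x) + Int(thread_idx.x)
    # Guard against threads past the end of the buffer.
    if i < size:
        output[i] = a[i] + b[i]
```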
**Example**
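The PR's original example is not reproduced here. As a stand-in, the following hypothetical sketch shows how the `add_kernel` above could be launched with Mojo's `DeviceContext` from `gpu.host`; NuMojo's own wrappers in `device.mojo` and `storage.mojo` may expose a different API.

```mojo
from gpu.host import DeviceContext

def main():
    # Hypothetical launch of the add_kernel sketched above, using Mojo's
    # DeviceContext directly rather than NuMojo's own wrappers.
    alias SIZE = 1024
    alias THREADS_PER_BLOCK = 256
    alias BLOCKS = (SIZE + THREADS_PER_BLOCK - 1) // THREADS_PER_BLOCK

    var ctx = DeviceContext()
    var a = ctx.enqueue_create_buffer[DType.float32](SIZE).enqueue_fill(1.0)
    var b = ctx.enqueue_create_buffer[DType.float32](SIZE).enqueue_fill(2.0)
    var out = ctx.enqueue_create_buffer[DType.float32](SIZE).enqueue_fill(0.0)

    # One thread per element, enough blocks to cover SIZE elements.
    ctx.enqueue_function[add_kernel](
        out.unsafe_ptr(),
        a.unsafe_ptr(),
        b.unsafe_ptr(),
        SIZE,
        grid_dim=BLOCKS,
        block_dim=THREADS_PER_BLOCK,
    )
    ctx.synchronize()

    # Map the result back to host memory and check one element.
    with out.map_to_host() as host_out:
        print(host_out[0])  # 1.0 + 2.0 = 3.0
```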