Conversation

@avik-pal
Collaborator

module @reactant_sum attributes {mhlo.num_partitions = 1 : i64, mhlo.num_replicas = 1 : i64} {
  func.func @main(%arg0: tensor<?x2xf32> {enzymexla.memory_effects = [], tf.aliasing_output = 1 : i32}) -> (tensor<f32>, tensor<?x2xf32>) attributes {enzymexla.memory_effects = []} {
    %cst = stablehlo.constant dense<0.000000e+00> : tensor<f32>
    %0 = stablehlo.transpose %arg0, dims = [1, 0] : (tensor<?x2xf32>) -> tensor<2x?xf32>
    %1 = stablehlo.reduce(%0 init: %cst) applies stablehlo.add across dimensions = [0, 1] : (tensor<2x?xf32>, tensor<f32>) -> tensor<f32>
    %2 = stablehlo.transpose %0, dims = [1, 0] : (tensor<2x?xf32>) -> tensor<?x2xf32>
    return %1, %2 : tensor<f32>, tensor<?x2xf32>
  }
}

@avik-pal avik-pal marked this pull request as draft November 11, 2025 22:06
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
@avik-pal
Collaborator Author

Generally, most ops won't work with unbounded dynamism. One thing we can do is store the trace and, once a concrete input arrives, run shape refinement.
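
To illustrate the shape-refinement idea on the module above: if the stored trace is replayed for a concrete input, say a 4×2 array (the shape here is hypothetical, purely for illustration), StableHLO's shape-refinement pass can rewrite every `tensor<?x2xf32>` into a fully static type, after which the normal op lowerings apply. A hand-written sketch of what the refined module would look like:

```mlir
module @reactant_sum attributes {mhlo.num_partitions = 1 : i64, mhlo.num_replicas = 1 : i64} {
  // After refinement against a concrete 4x2 input, every dynamic
  // dimension (?) in the traced module becomes static.
  func.func @main(%arg0: tensor<4x2xf32>) -> (tensor<f32>, tensor<4x2xf32>) {
    %cst = stablehlo.constant dense<0.000000e+00> : tensor<f32>
    %0 = stablehlo.transpose %arg0, dims = [1, 0] : (tensor<4x2xf32>) -> tensor<2x4xf32>
    %1 = stablehlo.reduce(%0 init: %cst) applies stablehlo.add across dimensions = [0, 1] : (tensor<2x4xf32>, tensor<f32>) -> tensor<f32>
    %2 = stablehlo.transpose %0, dims = [1, 0] : (tensor<2x4xf32>) -> tensor<4x2xf32>
    return %1, %2 : tensor<f32>, tensor<4x2xf32>
  }
}
```

The trade-off is that refinement must run once per distinct input shape, so each new shape triggers a recompile of the cached trace.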
