
lower group convolution failed #3745

Open
SihangZhu opened this issue Sep 29, 2024 · 1 comment

SihangZhu commented Sep 29, 2024

Code as follows:

func.func @aten_cat_test(%arg0: !torch.vtensor<[120,2,32,16],f16>, %arg1: !torch.vtensor<[32,1,3,3],f16>, %arg2: !torch.vtensor<[32],f16>) -> !torch.vtensor<[120,32,32,16],f16> {
  %false = torch.constant.bool false
  %int1 = torch.constant.int 1
  %int0 = torch.constant.int 0
  %int2 = torch.constant.int 2
  %359 = torch.prim.ListConstruct %int1, %int1 : (!torch.int, !torch.int) -> !torch.list<int>
  %360 = torch.prim.ListConstruct %int1, %int1 : (!torch.int, !torch.int) -> !torch.list<int>
  %361 = torch.prim.ListConstruct %int1, %int1 : (!torch.int, !torch.int) -> !torch.list<int>
  %362 = torch.prim.ListConstruct %int0, %int0 : (!torch.int, !torch.int) -> !torch.list<int>
  %363 = torch.aten.convolution %arg0, %arg1, %arg2, %361, %359, %360, %false, %362, %int2 : !torch.vtensor<[120,2,32,16],f16>, !torch.vtensor<[32,1,3,3],f16>, !torch.vtensor<[32],f16>, !torch.list<int>, !torch.list<int>, !torch.list<int>, !torch.bool, !torch.list<int>, !torch.int -> !torch.vtensor<[120,32,32,16],f16>
  return %363 : !torch.vtensor<[120,32,32,16],f16>
}
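As a sanity check on the result type `[120,32,32,16]`: with stride 1, padding 1, dilation 1, and a 3x3 kernel, the spatial dimensions are preserved. A minimal sketch of the standard convolution output-size formula (the helper name is mine, not part of torch-mlir):

```python
def conv_out_size(size, kernel, stride=1, pad=1, dilation=1):
    """Spatial output size: floor((H + 2p - d*(k-1) - 1) / s) + 1."""
    return (size + 2 * pad - dilation * (kernel - 1) - 1) // stride + 1

# Spatial dims from the reproducer: H=32, W=16, 3x3 kernel, pad=1, stride=1.
print(conv_out_size(32, 3), conv_out_size(16, 3))  # 32 16
```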

Lowering with the command:

torch-mlir-opt --convert-torch-to-linalg demo.mlir -o res.mlir

This fails with:

error: 'linalg.depthwise_conv_2d_nchw_chw' op inferred input/output operand #1 has shape's dimension #0 to be 2, but found 32
  %363 = torch.aten.convolution %arg0, %arg1, %arg2, %361, %359, %360, %false, %362, %int2 : !torch.vtensor<[120,2,32,16],f16>, !torch.vtensor<[32,1,3,3],f16>, !torch.vtensor<[32],f16>, !torch.list<int>, !torch.list<int>, !torch.list<int>, !torch.bool, !torch.list<int>, !torch.int -> !torch.vtensor<[120,32,32,16],f16>
         ^
demo.mlir:10:10: note: see current operation: 
%94 = "linalg.depthwise_conv_2d_nchw_chw"(%57, %93, %88) <{dilations = dense<1> : vector<2xi64>, operandSegmentSizes = array<i32: 2, 1>, strides = dense<1> : vector<2xi64>}> ({
^bb0(%arg3: f16, %arg4: f16, %arg5: f32):
  %108 = "arith.extf"(%arg3) : (f16) -> f32
  %109 = "arith.extf"(%arg4) : (f16) -> f32
  %110 = "arith.mulf"(%108, %109) <{fastmath = #arith.fastmath<none>}> : (f32, f32) -> f32
  %111 = "arith.addf"(%arg5, %110) <{fastmath = #arith.fastmath<none>}> : (f32, f32) -> f32
  "linalg.yield"(%111) : (f32) -> ()
}) {linalg.memoized_indexing_maps = [affine_map<(d0, d1, d2, d3, d4, d5) -> (d0, d3, d1 + d4, d2 + d5)>, affine_map<(d0, d1, d2, d3, d4, d5) -> (d3, d4, d5)>, affine_map<(d0, d1, d2, d3, d4, d5) -> (d0, d3, d1, d2)>]} : (tensor<120x2x34x18xf16>, tensor<32x3x3xf16>, tensor<120x32x32x16xf32>) -> tensor<120x32x32x16xf32>
zjgarvey self-assigned this Oct 4, 2024

zjgarvey (Collaborator) commented Oct 4, 2024

Ah, if I remember correctly, linalg depthwise convolution doesn't support having unequal input and output channels. It might be possible to support this path via grouped convolution.
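To spell out what the reproducer asks for: input `[120,2,32,16]` with `groups = 2` and weight `[32,1,3,3]` is a grouped convolution in which each of the 2 groups maps one input channel to 16 output channels (a depthwise convolution with channel multiplier 16). Per the error, `linalg.depthwise_conv_2d_nchw_chw` takes a `[C,KH,KW]` weight and keeps the channel count fixed, so it cannot express this case. A minimal NumPy sketch of grouped-convolution semantics (the helper is mine, not torch-mlir code; padding is omitted and spatial sizes shrunk to keep it fast):

```python
import numpy as np

def grouped_conv2d(x, w, b, groups):
    """Naive grouped 2D convolution (NCHW input, OIHW weight, stride 1, no padding).

    x: (N, C_in, H, W); w: (C_out, C_in // groups, KH, KW); b: (C_out,).
    Each group convolves C_in/groups input channels into C_out/groups output
    channels. With groups == C_in this is a depthwise convolution whose
    channel multiplier is C_out // C_in.
    """
    n, c_in, h, wd = x.shape
    c_out, c_in_g, kh, kw = w.shape
    assert c_in % groups == 0 and c_out % groups == 0
    assert c_in_g == c_in // groups
    oh, ow = h - kh + 1, wd - kw + 1
    c_out_g = c_out // groups
    out = np.zeros((n, c_out, oh, ow), dtype=x.dtype)
    for g in range(groups):
        xs = x[:, g * c_in_g:(g + 1) * c_in_g]   # this group's input channels
        ws = w[g * c_out_g:(g + 1) * c_out_g]    # this group's filters
        for i in range(oh):
            for j in range(ow):
                patch = xs[:, :, i:i + kh, j:j + kw]   # (N, C_in/g, KH, KW)
                out[:, g * c_out_g:(g + 1) * c_out_g, i, j] = np.einsum(
                    "ncij,ocij->no", patch, ws)
    return out + b[None, :, None, None]

# Channel configuration from the issue (2 in, 32 out, groups=2);
# batch and spatial dims reduced for speed.
x = np.random.rand(2, 2, 8, 8).astype(np.float32)
w = np.random.rand(32, 1, 3, 3).astype(np.float32)
b = np.zeros(32, dtype=np.float32)
y = grouped_conv2d(x, w, b, groups=2)
print(y.shape)  # (2, 32, 6, 6)
```

Output channels 0..15 depend only on input channel 0, and channels 16..31 only on input channel 1, which is exactly the cross-group independence a grouped-convolution lowering would have to preserve.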
