
[luci/service] Handle CircleNode as reshape's shape #14128

Merged — 3 commits merged into Samsung:master, Sep 30, 2024

Conversation

jongwonyang (Member)

This commit supports handling a CircleNode as reshape's shape.
This is part of the new shape inference policy for reshape.

Draft: #14127
Related: #13927

ONE-DCO-Signed-off-by: Jongwon Yang [email protected]

This commit supports handling a CircleNode as reshape's shape.
This is part of the new shape inference policy for reshape.

ONE-DCO-Signed-off-by: Jongwon Yang <[email protected]>
@jongwonyang jongwonyang added labels PR/ready for review, SSAFY on Sep 30, 2024
@jongwonyang jongwonyang requested review from a team September 30, 2024 03:19
```cpp
shape.rank(node->rank());
for (uint32_t r = 0; r < node->rank(); ++r)
auto shape = loco::must_cast<luci::CircleNode *>(node->shape());
shape_by_input.rank(shape->dim(0).value());
```
Contributor

I don't understand this line. Please explain.

jongwonyang (Member, Author), Sep 30, 2024

If node->shape() is cast to CircleNode, shape->rank() always returns 1.

This is because the TensorShape mixin only contains information about the shape of the node itself.

For example, say some node (not a CircleConst) holds the new shape info [2, 3, 4] as its value.
Then the shape of the node itself is [3], which is what TensorShape can retrieve.
In that case, rank() returns 1 and dim(0) returns 3.

So we decided to return [?, ?, ?] (rank 3) in the CircleNode case, because the element count ([3]) is the only information we can retrieve.

Related: #13927 (comment)

seanshpark (Contributor), Sep 30, 2024

When node->shape() is not a CircleConst, we want to use its output tensor value. This tensor should be rank 1 and, from your example, hold a value like [2, 3, 4].

When we call shape->dim(0).value(), this will be 3, from [2, 3, 4].

As a code reader, the variable names keep causing confusion...

jongwonyang (Member, Author)

> As a code reader, the variable names keep causing confusion...

Sorry for the confusion. I will take care to make the variable names clear and understandable.

> When node->shape() is not a CircleConst, we want to use its output tensor value. This tensor should be rank 1 and, from your example, hold a value like [2, 3, 4].
>
> When we call shape->dim(0).value(), this will be 3, from [2, 3, 4].

For now, I'm afraid there is no way to get the value [2, 3, 4] through the CircleNode type.
This is because CircleNode is every node's base class, and it doesn't inherit the mixins that could retrieve that value ([2, 3, 4]).

I think this is a current limitation, and the [?, ?, ?] approach is the best we can do until a new mixin that exposes the tensor's value is introduced.

seanshpark (Contributor)

> there's no way to get the value [2, 3, 4]

As I wrote, I meant the "tensor" value when we run this model in an interpreter or runtime. It was just an example for better understanding.

jongwonyang (Member, Author)

> As I wrote, I meant the "tensor" value when we run this model in an interpreter or runtime. It was just an example for better understanding.

Now I get it. There was some miscommunication on my side.

I meant that there is no way to deliver the tensor's value in the code, because it cannot be accessed.

This commit makes the variable names more readable and the logic more understandable.

ONE-DCO-Signed-off-by: Jongwon Yang <[email protected]>

Co-authored-by: SaeHie Park <[email protected]>
Comment on lines 105 to 109
```cpp
for (uint32_t r = 0; r < shape_by_input.rank(); ++r)
{
  // TODO remove this copy from `use_own(node);`
  // Shape inference rules in this file did not consider unknown dimensions.
  // If some node has an unknown dimension, 0 is inserted and wrong shape
  // inference is done as a result.
  // To fix this, a new shape inference algorithm is being implemented.
  // Until the new inference algorithm is fully implemented, an unknown
  // dimension is represented as 1, following the TFLite convention.
  shape.dim(r) = node->dim(r).known() ? node->dim(r).value() : 1;
  shape_by_input.dim(r).unset();
}
shape_by_input = shape;
is_static_shape = false;
```
Contributor

I think the for (...) loop with shape_by_input.dim(r).unset(); can be removed.
There was another PR, by SSAFY, that used this pattern:
shape_by_input.rank(num_elements); by itself will set all dims to unknown (same as unset).

jongwonyang (Member, Author)

Thanks for the information :)
I'll make the change.

This commit removes the unnecessary unset loop.
Setting the rank does the same thing.

ONE-DCO-Signed-off-by: Jongwon Yang <[email protected]>
zetwhite (Contributor) left a comment

LGTM

seanshpark (Contributor) left a comment

LGTM

@seanshpark seanshpark merged commit bf49fed into Samsung:master Sep 30, 2024
9 checks passed