
[ Layers ] Bug Fix for NHWC Support #2686

Open · wants to merge 1 commit into base: main
Conversation

jijoongmoon
Collaborator

This PR fixes a bug in the in-place layers when channel-last (NHWC) is enabled.
Previously, the input var_grad tensor's format was not updated in in-place layers.

Self evaluation:

  1. Build test: [X]Passed [ ]Failed [ ]Skipped
  2. Run test: [X]Passed [ ]Failed [ ]Skipped

@taos-ci
Collaborator

taos-ci commented Jul 26, 2024

📝 TAOS-CI Version: 1.5.20200925. Thank you for submitting PR #2686. Please follow the one-commit-per-PR policy (1 commit / 1 PR) to get comments from reviewers quickly. Your PR must pass all verification processes of cibot before the reviewers start a review. If you are a new member joining this project, please read the manuals in the documentation folder and the wiki page. To monitor the progress of your PR in more detail, visit http://ci.nnstreamer.ai/.

Collaborator

@taos-ci taos-ci left a comment


@jijoongmoon, 💯 All CI checkers are successfully verified. Thanks.

This PR fixes the bug about the inplace layers when the channel last
is enabled.
Previously, the input var_grad tensor's format is not changed in
inplace layers.

**Self evaluation:**
1. Build test:	 [X]Passed [ ]Failed [ ]Skipped
2. Run test:	 [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>

Member

@DonghakPark DonghakPark left a comment


LGTM!

Contributor

@djeong20 djeong20 left a comment


overall, LGTM

@@ -779,17 +779,21 @@ NetworkGraph::finalizeContext(const std::shared_ptr<LayerNode> &lnode,
TensorSpecV2::RequestType::READ_ONLY_VIEW;
if (lnode->getType() == IdentityLayer::type) {
s.variable_spec.reference_name = inputs[i]->getName();
s.variable_spec.dim.setFormat(inputs[i]->getDim().getFormat());
Contributor


how about using getFormat() directly instead of calling getDim()?

Suggested change
s.variable_spec.dim.setFormat(inputs[i]->getDim().getFormat());
s.variable_spec.dim.setFormat(inputs[i]->getFormat());

using Tensor::getFormat() could also have a memory advantage since getDim() returns a copy of the TensorDim.

Member

@skykongkong8 skykongkong8 left a comment


I will try testing on top of this PR!

Comment on lines +60 to +62
target_shape = "target_shape=" + std::to_string(target_dim[1]) + ":" +
std::to_string(target_dim[2]) + ":" +
std::to_string(target_dim[3]);
Member


How about adding a getter like getShape() to tensor_dim?

Suggested change
target_shape = "target_shape=" + std::to_string(target_dim[1]) + ":" +
std::to_string(target_dim[2]) + ":" +
std::to_string(target_dim[3]);
target_shape = "target_shape=" + target_dim.getShape()

In tensor_dim, MAXDIM is set to 4.

It would be beneficial if the getShape function could create and return a string that matches the dimensions.

Contributor

@EunjuYang EunjuYang left a comment


LGTM

7 participants