[ Layers ] Bug Fix for NHWC Support #2686
base: main
Conversation
📝 TAOS-CI Version: 1.5.20200925. Thank you for submitting PR #2686. Please follow the 1 commit/1 PR (one commit per PR) policy to get comments from reviewers quickly. Your PR must pass all verification processes of cibot before the review process by reviewers starts. If you are a new member joining this project, please read the manuals in the documentation folder and the wiki page. To monitor the progress of your PR in more detail, visit http://ci.nnstreamer.ai/.
Force-pushed from 814d0a3 to 5b43d99
@jijoongmoon, 💯 All CI checkers are successfully verified. Thanks.
This PR fixes a bug in in-place layers when channel-last (NHWC) format is enabled. Previously, the input var_grad tensor's format was not updated in in-place layers.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <[email protected]>
Force-pushed from 5b43d99 to 01fe193
@jijoongmoon, 💯 All CI checkers are successfully verified. Thanks.
LGTM!
overall, LGTM
@@ -779,17 +779,21 @@ NetworkGraph::finalizeContext(const std::shared_ptr<LayerNode> &lnode,
        TensorSpecV2::RequestType::READ_ONLY_VIEW;
      if (lnode->getType() == IdentityLayer::type) {
        s.variable_spec.reference_name = inputs[i]->getName();
+       s.variable_spec.dim.setFormat(inputs[i]->getDim().getFormat());
How about using `getFormat()` directly instead of calling `getDim()`?
Suggested change:
- s.variable_spec.dim.setFormat(inputs[i]->getDim().getFormat());
+ s.variable_spec.dim.setFormat(inputs[i]->getFormat());
Using `Tensor::getFormat()` could also have a memory advantage, since `getDim()` returns a copy of the TensorDim.
I will try testing on top of this PR!
target_shape = "target_shape=" + std::to_string(target_dim[1]) + ":" +
               std::to_string(target_dim[2]) + ":" +
               std::to_string(target_dim[3]);
How about adding a getter like getShape() to tensor_dim?
Suggested change:
- target_shape = "target_shape=" + std::to_string(target_dim[1]) + ":" +
-                std::to_string(target_dim[2]) + ":" +
-                std::to_string(target_dim[3]);
+ target_shape = "target_shape=" + target_dim.getShape();
In tensor_dim, MAXDIM is set to 4.
It would be beneficial if the getShape function could create and return a string that matches the dimensions.
LGTM