
Support Convolution&Batchnorm Fusing for Optimized Inference Mode #2646

Open
DonghakPark opened this issue Jun 19, 2024 · 3 comments
Labels
enhancement New feature or request

Comments

@DonghakPark
Member

One way to accelerate NNTrainer in inference mode is to fuse operations.
We already apply this kind of fusion when exporting to TensorFlow Lite.
By applying it inside NNTrainer itself, we can improve inference speed.

Many deep learning models use a Batchnorm layer immediately after a Conv layer:

[figure: Conv layer followed by a Batchnorm layer]

At inference time, we can fuse these ops as below:

[figure: the fused Conv layer, with Batchnorm folded into the weights and bias]
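The fusion above works by folding the BatchNorm statistics and affine parameters into the convolution's weights and bias, since at inference time both ops are linear. A minimal NumPy sketch of the folding, assuming NCHW-style weights of shape `(out_ch, in_ch, kh, kw)` and per-channel BN parameters (`fuse_conv_bn` is a hypothetical helper name, not NNTrainer's actual API):

```python
import numpy as np

def fuse_conv_bn(conv_w, conv_b, bn_gamma, bn_beta, bn_mean, bn_var, eps=1e-5):
    """Fold BatchNorm parameters into the preceding convolution.

    conv_w: (out_ch, in_ch, kh, kw), conv_b: (out_ch,)
    bn_gamma, bn_beta, bn_mean, bn_var: per-output-channel, shape (out_ch,)
    """
    # BN(conv(x)) = gamma * (W*x + b - mean) / sqrt(var + eps) + beta
    scale = bn_gamma / np.sqrt(bn_var + eps)        # per-channel scale factor
    fused_w = conv_w * scale[:, None, None, None]   # scale each output filter
    fused_b = (conv_b - bn_mean) * scale + bn_beta  # fold mean/beta into bias
    return fused_w, fused_b
```

The fused convolution produces the same output as Conv followed by BatchNorm (in eval mode, using the running statistics), so the BN layer can be dropped entirely from the inference graph.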

@taos-ci
Collaborator

taos-ci commented Jun 19, 2024

:octocat: cibot: Thank you for posting issue #2646. The person in charge will reply soon.

@DonghakPark DonghakPark added the enhancement New feature or request label Jun 19, 2024
@lhs8928 lhs8928 changed the title Support Convolution&BAtchnorm Fusing for Optimized Inference Mode Support Convolution&Batchnorm Fusing for Optimized Inference Mode Jul 4, 2024
@lhs8928
Contributor

lhs8928 commented Jul 4, 2024

Fusing the operations at inference time means the inference graph differs slightly from the graph used to train the model.
To support the fusing operation, we might need two graphs for one model (one for training and the other for inference).

@DonghakPark
Member Author

Fusing the operations at inference time means the inference graph differs slightly from the graph used to train the model. To support the fusing operation, we might need two graphs for one model (one for training and the other for inference).

That's right!! When we save() & load() the model, we check the [INFERENCE|TRAIN] mode and build the corresponding graph.
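One way to realize the two-graphs-per-one-model idea is to branch on the execution mode when the graph is built, fusing adjacent Conv+Batchnorm pairs only for the inference graph. A minimal sketch with a toy layer list (`build_graph` and the string layer names are illustrative assumptions, not NNTrainer's actual API):

```python
def build_graph(layers, mode):
    """Build a per-mode graph from an ordered layer list.

    TRAIN keeps the original Conv + Batchnorm pair so BN statistics
    can still be updated; INFERENCE replaces the pair with one fused op.
    """
    if mode == "TRAIN":
        return list(layers)  # unchanged training graph

    fused = []
    i = 0
    while i < len(layers):
        # Fuse a Conv immediately followed by a Batchnorm into one op.
        if i + 1 < len(layers) and layers[i] == "conv" and layers[i + 1] == "batchnorm":
            fused.append("fused_conv")
            i += 2
        else:
            fused.append(layers[i])
            i += 1
    return fused
```

With this shape, save()/load() can keep a single set of trained parameters and simply construct the fused inference graph (folding BN into the Conv weights) when the model is loaded in inference mode.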


3 participants