Python code to use onnx-mlir in existing python env #2528
Conversation
Signed-off-by: chentong319 <[email protected]>
I see that it's quite similar to PyCompileAndRunSession: https://github.com/onnx/onnx-mlir/blob/main/docs/mnist_example/mnist-runPyCompileAndRuntime.py
Yes, it just wraps the core functionality in a Python class so that we can record and manage status in a Python env.
Great contribution; I think it's great to have an interface that mimics ORT. That will really help our users. I believe @tungld referred specifically to the … What is your thinking around compiler options? ORT does not need them, but our compiler does benefit from setting the right options.
To my understanding, the current … Alternatively, we can just define extra interfaces on OMCOmpilerRunExecusionSession to mimic onnxruntime.
Got it, the approach in your PR is better for integration. How about optimizations: would it be OK to add an "optimization" flag somewhere, or try to deduce the default given the host machine? How do you envision this?
We can add an options argument like torch.compile. We can also add basic flags for target hardware (like the Provider argument of onnxruntime).
Even better: if we can gather the info from an ORT argument and parse it for our compiler, that seems to be a great way to get this without forcing the user to do anything special.
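The idea of deriving compiler settings from an onnxruntime-style argument could be sketched as follows. This is illustrative only: the provider name and the mapping are assumptions, not part of the PR.

```python
# Illustrative sketch: map an onnxruntime-style `providers` list to a
# target for the onnx-mlir compiler. "NNPAExecutionProvider" is a
# hypothetical provider name used for illustration, not a real ORT name.
def target_from_providers(providers):
    if "NNPAExecutionProvider" in providers:  # hypothetical provider name
        return "zAIU"  # would enable --maccel=NNPA in the compile step
    return "cpu"

print(target_from_providers(["NNPAExecutionProvider", "CPUExecutionProvider"]))
```

A session constructor could accept such a providers list and translate it internally, so users migrating from onnxruntime would not need to learn compiler flags first.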
LGTM with the added optimization flag
LGTM as the first step. In the future we may want to have a separate Python file for the Python stuff.
For brevity, is it reasonable to change onnxmlirrun to OM or something else? I feel it's troublesome to type two r's.
raise ImportError('Looks like you did not build the PyRuntime target, build it by running `make PyRuntime`. You may need to set ONNX_MLIR_HOME to `onnx-mlir/build/Debug` since `make PyRuntime` outputs to `build/Debug` by default')
def compile(self):
It's good if users can pass compiler options into this function.
compile() will not be directly used by the user. Compiler options can be passed when the session is initialized.
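The division of labor described here, options given at session construction and compile() kept internal, could look roughly like this. Class, attribute, and method names are assumptions for illustration, not the PR's exact API.

```python
import tempfile

class InferenceSession:
    """Minimal sketch (names are assumptions, not the PR's exact API):
    compiler options are stored at construction and consumed by the
    internal compile step, so users never call compile() directly."""

    def __init__(self, model_path, options=""):
        self.model_path = model_path
        self.options = options.split() if options else []
        self.temp_dir = tempfile.TemporaryDirectory()  # holds compiled lib
        self.compiled = False

    def _compile_command(self):
        # Users do not call this; run() would trigger it on first use.
        return ["onnx-mlir"] + self.options + [self.model_path]

sess = InferenceSession("model.onnx", options="-O3")
print(sess._compile_command())
```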
Signed-off-by: chentong319 <[email protected]>
utils/onnxmlirrun.py (Outdated)
# name for the compiled library in temporary directory
self.temp_lib_name = 'model'
if not os.environ.get('ONNX_MLIR_HOME', None):
If it's not easy for a Python script to set an environment variable, would it be a good idea to pass it as an optional parameter, like you did for options?
Added an optional parameter for the compiler path. We may be able to include the compiler in the Python package in the future.
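One way this fallback could look is sketched below. The function and parameter names are assumptions; the PR's actual code may differ.

```python
import os

def resolve_onnx_mlir_home(explicit_path=None):
    """Sketch (names assumed): prefer an explicitly passed compiler path,
    otherwise fall back to the ONNX_MLIR_HOME environment variable."""
    home = explicit_path or os.environ.get("ONNX_MLIR_HOME")
    if not home:
        raise RuntimeError(
            "Pass the compiler path or set ONNX_MLIR_HOME "
            "(e.g. onnx-mlir/build/Debug)")
    return home
```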
utils/onnxmlirrun.py (Outdated)
    self.temp_lib_name)
command_str += ['-o', output_path]
if self.target == 'zAIU':
    command_str += ['--maccel=NNPA']
FYI, target zAIU should also trigger -mcpu=z16. We also strongly recommend using -O3 by default.
Added
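The suggested flag expansion (zAIU implying -mcpu=z16, with -O3 as the recommended default) could be sketched as:

```python
def compile_flags(target):
    # Sketch of the reviewer's suggestion, not the PR's exact code:
    # always apply the recommended -O3 default, and expand target zAIU
    # into both the NNPA accelerator flag and -mcpu=z16.
    flags = ["-O3"]
    if target == "zAIU":
        flags += ["--maccel=NNPA", "-mcpu=z16"]
    return flags

print(compile_flags("zAIU"))
```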
Signed-off-by: chentong319 <[email protected]>
Signed-off-by: chentong319 <[email protected]>
Jenkins Linux s390x Build #12916 [push] Python code to use onnx-... started at 17:25
Jenkins Linux ppc64le Build #11909 [push] Python code to use onnx-... started at 17:33
Jenkins Linux amd64 Build #12892 [push] Python code to use onnx-... started at 16:25
Jenkins Linux amd64 Build #12892 [push] Python code to use onnx-... failed after 1 hr 10 min
Jenkins Linux s390x Build #12916 [push] Python code to use onnx-... passed after 1 hr 24 min
Jenkins Linux ppc64le Build #11909 [push] Python code to use onnx-... passed after 1 hr 45 min
@chentong319 thanks for all the updates, much appreciated
utils/RunONNXModel.py is intended to compile and run an ONNX model as a standalone tool, to be used for performance measurement and debugging. Some users want to use onnx-mlir in an existing Python env in a similar way to onnxruntime.
This PR reuses the core code of RunONNXModel.py to implement an onnxruntime-like interface. A session class is defined as the inference entity for a model; session.run may be called with inputs.
This PR does not tackle the packaging issue. The following example assumes that onnxmlirrun can be imported through its Python path.
The following example can be run correctly.
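Since the example itself is not reproduced above, here is a hedged sketch of the onnxruntime-style calling pattern the PR describes. FakeSession stands in for the PR's session class (the exact class and argument names are assumptions); the real class would compile the model with onnx-mlir and dispatch run() to the compiled library. Here run() simply adds the two inputs so the pattern can be shown end to end.

```python
# Stand-in for the PR's session class; names are assumptions.
class FakeSession:
    def __init__(self, model_path, options="", target="cpu"):
        self.model_path = model_path  # compiled lazily in the real class
        self.options = options

    def run(self, output_names, inputs):
        # The real session would call the compiled model's entry point;
        # this stand-in adds the two inputs elementwise.
        a, b = inputs["a"], inputs["b"]
        return [[x + y for x, y in zip(a, b)]]

session = FakeSession("test_add.onnx", options="-O3")
outputs = session.run(None, {"a": [1.0, 2.0, 3.0], "b": [10.0, 10.0, 10.0]})
print(outputs[0])  # [11.0, 12.0, 13.0]
```

In real usage the inputs would be numpy arrays, mirroring onnxruntime's `InferenceSession.run(output_names, input_feed)` signature.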