
Why is the time cost of inferring resnet50-v2-7 with tvm-cuda much smaller than when using tensorrt on x86? #7894

Triggered via issue on July 10, 2023, 05:27
Status: Cancelled
Total duration: 2s
Artifacts: none

Workflow file: tag_teams.yml
Trigger: on: issues
Job: tag-teams
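The summary above shows only that the workflow fires on issue events and runs a single tag-teams job. A minimal sketch of how such a trigger is declared follows; only the trigger (on: issues) and the job name are taken from the run summary, while the workflow name, runner, and step body are illustrative assumptions and not the actual contents of tag_teams.yml:

```yaml
# Hypothetical sketch of an issue-triggered workflow.
# Only "on: issues" and the job name "tag-teams" come from the run summary;
# everything else is assumed for illustration.
name: Teams
on: issues

jobs:
  tag-teams:
    runs-on: ubuntu-latest
    steps:
      - name: Tag relevant teams on the triggering issue
        run: echo "Tagging teams for issue #${{ github.event.issue.number }}"
```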

Annotations
1 error
Teams: Canceling since a higher priority waiting request for 'Teams--15276' exists
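This single error accounts for the 2-second cancelled run: GitHub Actions emits this message when a queued or in-progress run is superseded by a newer run in the same concurrency group. A minimal sketch of a concurrency block that could produce a group key like 'Teams--15276' is shown below, assuming the number is the triggering issue's number; the actual tag_teams.yml may build its group key differently:

```yaml
# Hypothetical concurrency block. With a shared group key, at most one run per
# group waits in the queue; an older pending run is cancelled with the
# "higher priority waiting request" message when a newer run arrives, and
# cancel-in-progress: true additionally cancels a run that has already started.
concurrency:
  group: Teams--${{ github.event.issue.number }}
  cancel-in-progress: true
```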