
Peoplenet not working with Jetson Inference #1878

Open
gerbelaSICKAG opened this issue Jul 23, 2024 · 6 comments

Comments

@gerbelaSICKAG

Hey,
I want to use PeopleNet to detect people in images, but somehow it is not working correctly. I am using the Docker container shown in the tutorials. I also tried the same image with the "SSD-Mobilenet-v2" network, and there the detection works. Can you help me figure out where the error is? The log file from my attempt is attached.
logFile.txt
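
For reference, a minimal repro along the lines of the Hello AI World tutorial (the image path here is a hypothetical placeholder):

import jetson_inference
import jetson_utils

# load the built-in PeopleNet detection model (downloaded on first use)
net = jetson_inference.detectNet("peoplenet", threshold=0.5)

# run detection on a test image ("people.jpg" is a placeholder path)
img = jetson_utils.loadImage("people.jpg")
detections = net.Detect(img)

for d in detections:
    print(net.GetClassDesc(d.ClassID), d.Confidence, d.Left, d.Top, d.Right, d.Bottom)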

@AkshatJain-TerraFirma

I am running JetPack 6 on a Jetson Orin Nano and trying to use the built-in peoplenet model, but I am getting the following errors. What is confusing is that ssd-mobilenet-v2 works perfectly while other models do not.

My detection Python script:

import jetson_inference
import jetson_utils
from control_cmds import send_command_by_key  # local helper module
# load the built-in PeopleNet detection model (downloaded on first use)
net = jetson_inference.detectNet("peoplenet", threshold=0.5)

Terminal log when I first run this:

jetson.inference -- detectNet loading custom model '(null)'
[TRT] running model command: tao-model-downloader.sh peoplenet_deployable_quantized_v2.6.1
ARCH: aarch64
reading L4T version from /etc/nv_tegra_release
L4T BSP Version: L4T R36.3.0
[TRT] downloading peoplenet_deployable_quantized_v2.6.1
resnet34_peoplenet_int8.etlt 100%[======================================================================>] 85.02M 12.4MB/s in 6.9s
resnet34_peoplenet_int8.txt 100%[======================================================================>] 9.20K --.-KB/s in 0s
labels.txt 100%[======================================================================>] 17 --.-KB/s in 0s
colors.txt 100%[======================================================================>] 27 --.-KB/s in 0s
[TRT] downloading tao-converter from https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.22.05_trt8.4_aarch64/files/tao-converter
tao-converter 100%[======================================================================>] 128.62K --.-KB/s in 0.06s
detectNet -- converting TAO model to TensorRT engine:
-- input resnet34_peoplenet_int8.etlt
-- output resnet34_peoplenet_int8.etlt.engine
-- calibration resnet34_peoplenet_int8.txt
-- encryption_key tlt_encode
-- input_dims 3,544,960
-- output_layers output_bbox/BiasAdd,output_cov/Sigmoid
-- max_batch_size 1
-- workspace_size 4294967296
-- precision int8
./tao-converter: error while loading shared libraries: libcrypto.so.1.1: cannot open shared object file: No such file or directory
[TRT] failed to convert model 'resnet34_peoplenet_int8.etlt' to TensorRT...
[TRT] failed to download model after 2 retries
[TRT] if this error keeps occuring, see here for a mirror to download the models from:
[TRT] https://github.com/dusty-nv/jetson-inference/releases
[TRT] failed to download built-in detection model 'peoplenet'
Traceback (most recent call last):
File "/home/akshat/terrafirma/v2/operator_station/vehicle_control/detect.py", line 8, in
net = jetson_inference.detectNet("peoplenet", threshold=0.5)
Exception: jetson.inference -- detectNet failed to load network

My log when I try to run it again:

jetson.inference -- detectNet loading custom model '(null)'

detectNet -- loading detection network model from:
-- prototxt
-- model networks/peoplenet_deployable_quantized_v2.6.1/resnet34_peoplenet_int8.etlt.engine
-- input_blob 'input_1'
-- output_cvg 'output_cov/Sigmoid'
-- output_bbox 'output_bbox/BiasAdd'
-- mean_pixel 0.000000
-- class_labels networks/peoplenet_deployable_quantized_v2.6.1/labels.txt
-- class_colors networks/peoplenet_deployable_quantized_v2.6.1/colors.txt
-- threshold 0.500000
-- batch_size 1

[TRT] TensorRT version 8.6.2
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Registered plugin creator - ::BatchTilePlugin_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::CoordConvAC version 1
[TRT] Registered plugin creator - ::CropAndResizeDynamic version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DecodeBbox3DPlugin version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_Explicit_TF_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_Implicit_TF_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::GenerateDetection_TRT version 1
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 2
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::ModulatedDeformConv2d version 1
[TRT] Registered plugin creator - ::MultilevelCropAndResize_TRT version 1
[TRT] Registered plugin creator - ::MultilevelProposeROI_TRT version 1
[TRT] Registered plugin creator - ::MultiscaleDeformableAttnPlugin_TRT version 1
[TRT] Registered plugin creator - ::NMSDynamic_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::PillarScatterPlugin version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::ProposalDynamic version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::ROIAlign_TRT version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::ScatterND version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::VoxelGeneratorPlugin version 1
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - engine (extension '.engine')
[TRT] loading network plan from engine cache...
[TRT] failed to load engine cache from
[TRT] failed to load
[TRT] detectNet -- failed to initialize.
Traceback (most recent call last):
File "/home/akshat/terrafirma/v2/operator_station/vehicle_control/detect.py", line 8, in
net = jetson_inference.detectNet("peoplenet", threshold=0.5)
Exception: jetson.inference -- detectNet failed to load network
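
Two things stand out in these logs. First, tao-converter is linked against OpenSSL 1.1 (libcrypto.so.1.1), which the Ubuntu 22.04 base of JetPack 6 no longer ships, so the conversion step cannot succeed as-is. Second, the empty path in "failed to load engine cache from" suggests the failed first run left the model directory behind without a built engine, so later runs try to load a file that does not exist. A small sketch to confirm both (the paths are assumed from the logs above, not verified):

import os
import shutil
import subprocess

NET_DIR = os.path.expanduser(
    "~/jetson-inference/data/networks/peoplenet_deployable_quantized_v2.6.1")
ENGINE = os.path.join(NET_DIR, "resnet34_peoplenet_int8.etlt.engine")

# list the shared libraries tao-converter cannot resolve
out = subprocess.run(["ldd", os.path.join(NET_DIR, "tao-converter")],
                     capture_output=True, text=True)
print([line.strip() for line in out.stdout.splitlines() if "not found" in line])

# if no engine was ever built, remove the stale download so the next
# detectNet("peoplenet") call starts from a clean state
if not os.path.isfile(ENGINE):
    shutil.rmtree(NET_DIR, ignore_errors=True)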

@dusty-nv
Owner

dusty-nv commented Jul 27, 2024 via email

@gerbelaSICKAG
Author

Somehow another person posted his own topic in my issue, and you only answered his question. Can you take a look at my question as well?

@AkshatJain-TerraFirma

AkshatJain-TerraFirma commented Jul 30, 2024

Appreciate your response @dusty-nv. I don't think they have released the TAO converter for TensorRT 8.6 yet (which is what I am running):

[screenshot of available tao-converter versions]

I instead downloaded PeopleNet directly, and it is also in ONNX format. These are the contents of the downloaded folder:
labels.txt nvinfer_config.txt resnet34_peoplenet_int8.txt resnet34_peoplenet.onnx status.json

When I run the following script:

import jetson_inference

# paths to the manually downloaded ONNX model and its labels
model_path = "/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx"
labels_path = "/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/labels.txt"
threshold = 0.7
net = jetson_inference.detectNet(model_path, labels=labels_path, threshold=threshold)

I get the following error:

[TRT] 4: [network.cpp::validate::3162] Error Code 4: Internal Error (Network has dynamic or shape inputs, but no optimization profile has been defined.)
[TRT] device GPU, failed to build CUDA engine
[TRT] device GPU, failed to load /home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx
[TRT] detectNet -- failed to initialize.
Traceback (most recent call last):
File "/home/akshat/terrafirma/v2/operator_station/vehicle_control/detect.py", line 16, in
net = jetson_inference.detectNet(model_path, labels=labels_path, threshold=threshold)
Exception: jetson.inference -- detectNet failed to load network

Is this an issue with the parameters passed into the detectNet method, or does the model need to be optimized into a .engine format first?

(I am a complete beginner, so sorry if these questions are silly.)
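
The "dynamic or shape inputs" error means the ONNX export has a dynamic batch dimension, and the engine builder was given no optimization profile for it. One workaround is to pre-build the engine with the TensorRT Python API, pinning the shape reported in the earlier logs (3x544x960, batch 1). This is a sketch, not a verified recipe: the input tensor name "input_1" is assumed from the log above and should be checked against the actual ONNX file.

import tensorrt as trt

ONNX_PATH = "resnet34_peoplenet.onnx"
ENGINE_PATH = "resnet34_peoplenet.onnx.engine"

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("failed to parse ONNX model")

config = builder.create_builder_config()
profile = builder.create_optimization_profile()
# pin the dynamic batch dimension to 1; H and W come from the earlier logs
profile.set_shape("input_1", (1, 3, 544, 960), (1, 3, 544, 960), (1, 3, 544, 960))
config.add_optimization_profile(profile)

engine_bytes = builder.build_serialized_network(network, config)
with open(ENGINE_PATH, "wb") as f:
    f.write(engine_bytes)

The resulting .engine file can then be passed to detectNet as the model, together with explicit layer names (see the sketch at the end of the thread).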

@gerbelaSICKAG
Author

Can you please open your own issue and not comment on mine?

@OliverBJ01

Hi Dusty,
I get exactly the same errors running peoplenet from detectNet as reported by AkshatJain-TerraFirma above:

  • the first error is: [TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
  • then: [TRT] failed to load engine cache from

Regards, Bernard
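
For what it's worth, the FlattenConcat_TRT line is a duplicate-registration warning that also appears in working runs; the real failure is the empty path in "failed to load engine cache from". Once a valid engine actually exists on disk, loading it with explicit layer names would look roughly like this (paths and layer names are taken from the logs above, not verified on JetPack 6):

import jetson_inference

net = jetson_inference.detectNet(
    model="networks/peoplenet_deployable_quantized_v2.6.1/resnet34_peoplenet_int8.etlt.engine",
    labels="networks/peoplenet_deployable_quantized_v2.6.1/labels.txt",
    input_blob="input_1",
    output_cvg="output_cov/Sigmoid",
    output_bbox="output_bbox/BiasAdd",
    threshold=0.5,
)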
