
Run Object Tracking mode on server / Run Object Tracking mode on prefilmed video instead of camera #340

Open
RoBoFAU opened this issue Mar 5, 2023 · 5 comments
Labels
help wanted Extra attention is needed

Comments

@RoBoFAU

RoBoFAU commented Mar 5, 2023

Hello everybody,

I have two questions about the Object Tracking mode:

My first question:
Is it possible to evaluate the camera stream on a computer/server instead of on the smartphone? That is, the input for the neural network comes from the smartphone, the object detection runs externally on the server, and the output of the evaluation is sent back to the smartphone. I have read that the Autopilot and Free Roam modes work with a computer, but in those cases the car is controlled by the computer. For object detection the car should drive itself; only the object detection runs externally on the server.

My second question:
Is it possible to use a pre-recorded video instead of the live camera to test the object detection in the app? In that case it obviously makes no sense to let the car drive; the point is just to demonstrate the performance of the pre-trained network.

@RoBoFAU RoBoFAU added the help wanted Extra attention is needed label Mar 5, 2023
@thias15
Collaborator

thias15 commented Mar 5, 2023

For running on a video, it should be easy enough to write a Python script that loads one of the detection models and feeds images through it. Sending video to a computer and controls back to the robot is already supported; what's missing is running the network on the server to predict the controls.
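As a minimal sketch of such an offline script, assuming one of the exported TFLite detection models (the model path, video path, and output decoding below are placeholders, not the actual OpenBot code; check the real model's input/output details):

```python
# Sketch: load a TFLite detector and feed video frames through it offline.
# "detector.tflite" and "test_drive.mp4" are placeholder paths.
import cv2
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
height, width = inp["shape"][1], inp["shape"][2]

cap = cv2.VideoCapture("test_drive.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    img = cv2.resize(frame, (width, height))
    if inp["dtype"] == np.float32:  # float models expect normalized input
        img = img.astype(np.float32) / 255.0
    interpreter.set_tensor(inp["index"], np.expand_dims(img, 0))
    interpreter.invoke()
    out = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
    print(out.shape)  # raw detections; decoding depends on the model's output format
cap.release()
```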

If you just want to test YOLOv5 on videos, you can directly use the provided script; check the section "Inference with detect.py" in the YOLOv5 README. It works with images, videos, etc.
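For reference, the invocation documented in the YOLOv5 README looks like this (the video path is a placeholder):

```bash
python detect.py --weights yolov5s.pt --source path/to/video.mp4
```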

@RoBoFAU
Author

RoBoFAU commented Mar 5, 2023

> For running on a video, it should be easy enough to write a Python script that loads one of the detection models and feeds images through it. Sending video to a computer and controls back to the robot is already supported; what's missing is running the network on the server to predict the controls.

You mean this part, right? Or the Node.js controller? https://github.com/isl-org/OpenBot/tree/master/controller/python

> If you just want to test YOLOv5 on videos, you can directly use the provided script; check the section "Inference with detect.py" in the YOLOv5 README. It works with images, videos, etc.

Yes, I know. But there is a difference between running the script on the computer and using the OpenBot app with this neural network...

@RoBoFAU
Author

RoBoFAU commented Mar 6, 2023

I did everything to set up the Python controller.
[screenshot of the Python controller terminal output]

But now it sits waiting for a connection and doesn't connect, even after several minutes.
Phone and laptop are connected to the same network (a hotspot from a second phone).
The robot app is in Free Roam mode.
The phone is selected as the controller.

Neither works, with or without --video.

Any idea what the problem is?
Do I have to install something other than requirements.txt to show the incoming video from the robot app?

@khoatranrb

Did you fix it?

@sanjeevitinker

Hello @khoatranrb @RoBoFAU ,

I'm glad to hear that the issue has been resolved!

For anyone who encounters the same problem: this was addressed and fixed in Issue #351 (Python controller cannot connect). The solution is to update the Python code to use port 8081 instead of 19400, so that it matches the port configuration in the OpenBot robot app.
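To illustrate the fix (a minimal sketch only, not the actual controller code; the IP address below is a placeholder for the phone's address on your network):

```python
# Sketch of the port fix from Issue #351: connect on 8081, not 19400.
import socket

ROBOT_IP = "192.168.1.50"  # placeholder: the phone's IP on the shared network
PORT = 8081                # was 19400; must match the OpenBot robot app

with socket.create_connection((ROBOT_IP, PORT), timeout=10) as sock:
    print("Connected to", sock.getpeername())
```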

You can find more details and the fixed code in this link: Issue #351.
