
Thoughts on how to express higher level goals/constraints #85

Open
davidrusu opened this issue Jul 24, 2020 · 2 comments

Comments


davidrusu commented Jul 24, 2020

From my reading of the code, the language we currently have for defining an agent's behaviour in this library is:

  1. initial position
  2. terminal position
  3. obstacles with trajectories
  4. a set of options to tune obstacle avoidance

It's not clear how I would implement a goal like: if there is a red circle in front of me, I want to move around it on the left; otherwise, move around it on the right.

PDDL is a neat way to write high-level goals and constraints on the behaviour of some agent. With a PDDL-like language, you'd write something like this (pseudocode):

(obstacle ?obs)
(vehicle-loc ?loc)
(implies (is-red ?obs) (is-left-of ?loc ?obs))
(implies (not (is-red ?obs)) (not (is-left-of ?loc ?obs)))
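Taken together, the two implications say the vehicle should be left of an obstacle exactly when that obstacle is red. A toy Python sketch of evaluating that goal (the `Obstacle` class and the `is_red`/`is_left_of` predicates are invented here for illustration; the 1-D notion of "left of" is a stand-in for real geometry):

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    x: float
    color: str

def is_red(obs):
    return obs.color == "red"

def is_left_of(loc, obs):
    # toy 1-D notion of "left of": smaller x coordinate
    return loc < obs.x

def satisfies_goal(loc, obstacles):
    # (implies (is-red ?obs) (is-left-of ?loc ?obs))
    # (implies (not (is-red ?obs)) (not (is-left-of ?loc ?obs)))
    # together: left-of must hold exactly when the obstacle is red
    return all(is_left_of(loc, obs) == is_red(obs) for obs in obstacles)
```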

An alternative to using a formal language like PDDL could be to expose a cost-function interface:

i.e. users pass in a Python function that can be queried by the planner to aid its search.

def left_of_red_obs_cost(vehicle, world):
  for obs in world.obstacles:
    vehicle_is_on_left = is_vehicle_on_left(vehicle, obs)
    # we want the vehicle on the left exactly when the obstacle is red
    if is_red(obs) != vehicle_is_on_left:
      return 10 # vehicle is in a no-good situation, return high cost
  return 0 # this is what we want

vehicle.attach_cost_function(left_of_red_obs_cost)
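To illustrate how a planner might consume such a function, here is a self-contained toy in which the planner scores candidate maneuvers and keeps the cheapest. The `World`/`Obstacle` stubs and the `is_red`/`is_vehicle_on_left` helpers are invented for this example, not omg-tools API:

```python
# Toy stand-ins, not omg-tools types: each obstacle records its color and
# which side of it the candidate maneuver puts the vehicle on.
class Obstacle:
    def __init__(self, red, vehicle_on_left):
        self.red = red
        self.vehicle_on_left = vehicle_on_left

class World:
    def __init__(self, obstacles):
        self.obstacles = obstacles

def is_red(obs):
    return obs.red

def is_vehicle_on_left(vehicle, obs):
    return obs.vehicle_on_left

def left_of_red_obs_cost(vehicle, world):
    for obs in world.obstacles:
        # we want the vehicle on the left exactly when the obstacle is red
        if is_red(obs) != is_vehicle_on_left(vehicle, obs):
            return 10  # wrong side of some obstacle
    return 0

# The planner queries the user-supplied cost for each candidate maneuver
# and prefers the cheapest one.
candidates = {
    "pass-left": World([Obstacle(red=True, vehicle_on_left=True)]),
    "pass-right": World([Obstacle(red=True, vehicle_on_left=False)]),
}
best = min(candidates, key=lambda name: left_of_red_obs_cost(None, candidates[name]))
```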

Thoughts?

jgillis (Contributor) commented Jul 24, 2020

OMGtools solves a succession of motion plans with a fixed time horizon.
For those motion plans you have a prediction of how you yourself and your environment will change, and we use generic gradient-based nonlinear optimization to find a (locally) optimal plan of action.
Introducing logical decision variables into this optimization, or allowing logic in the objective, is non-trivial.

For the case you propose, the higher-level 'cascader' will have to assume the color for the planning horizon.
According to this knowledge, it could then activate one or the other force field in the objective, or mandate that the trajectory intersect a line segment on either the left or the right side, or it could leave the optimization problem alone and simply pick a different initial guess for the motion plan, produced using some sampling-based planner running on top of omg-tools.
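As a sketch of the first option, the cascader could commit to the color assumption for the horizon and add a smooth one-sided penalty to the objective, nudging the gradient-based solver toward the desired side. Nothing here is omg-tools API; the signed-offset convention and weight are invented for illustration:

```python
def lateral_penalty(signed_offset, want_left, weight=10.0):
    # Invented convention: signed_offset > 0 means the vehicle is left
    # of the obstacle. A one-sided quadratic hinge penalizes only the
    # wrong side and stays smooth, so a gradient-based solver can use it.
    wrong_side = -signed_offset if want_left else signed_offset
    return weight * max(0.0, wrong_side) ** 2

def make_objective_term(obstacle_is_red):
    # The cascader fixes the color assumption for this planning horizon
    # and activates the matching "force field" in the objective.
    want_left = obstacle_is_red
    return lambda signed_offset: lateral_penalty(signed_offset, want_left)
```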

jgillis commented Jul 24, 2020

Our group is open for academic collaborations.
