Motion_Planning - RicoJia/notes GitHub Wiki
========================================================================
========================================================================
- Front End - Path Planning (High Dimension -> Rough Low Dimension Solution)
- Discrete Space
- Methods:
- A*, Dijkstra
- Jump Point Search (based on A*; in most cases it's better, but there are exceptions)
- Sampling Based Method
- PRM
- RRT (randomly sample points, connect each sample to the nearest node in the tree, and hope for convergence; not optimal)
- Optimal Sampling-based Methods (RRT*)
- Informed RRT*
- Kinodynamic Path Finding: consider kinematic constraints
- state, state boundary value optimal control problem
- State Lattice Approach (high dimension A*)
- Kinodynamic RRT* (Common)
- Hybrid A* (Common Method)
- Back End - Trajectory Generation & Optimization
- Continuous Space
- Methods
- Minimum SNAP Trajectory Generation
- Differential Flatness?
- Closed form solution to SNAP
- Time Allocation
- Soft and Hard Constrained Trajectory Optimization. What constraints? Dynamic constraints
- Soft: a penalty with a tendency to satisfy the constraint
- Hard: strictly enforcing it.
- Markov Decision Process based (MDP)
- Uncertainties & cost,
- Minimax Cost
- Value iteration, Real time Dynamic Programming
- MPC (Model Predictive Control)
- Linear MPC
- Non-linear MPC
- Other Navigation Methods:
- End-to-End Navigation: directly from sensor information to navigation commands. Active research, but not practical yet
- Navigation based on neural networks
- Now: real-time 3D point generation, estimating the cost to get to a point
- Basic Requirements:
- collision free
- Kinematically executable
- Smooth
- A good roboticist:
- If the problem doesn't exist, it's not a problem
- Must be an engineer first, know how things are implemented
- Hot topics are just hot, don't follow them blindly
- Don't just do things on simulation, do it in real life
- Prefer simple yet effective solutions
- Be honest, face problems
- Don't do it, or do it well
- Before Criticizing, try it.
========================================================================
========================================================================
- C-space is the space that contains all robot configurations.
  - a robot is a point in C-space.
  - need to represent the obstacles in C-space too. They can be simplified as balls (conservative, with inflation).
- Graph Search
  - Setup: a map is a bunch of nodes with connectivity (undirected, cyclic, weighted)
  - So finding a path is searching a huge graph, yielding a search tree. The tree might be too big, so we need heuristics to prune the graph
  - Then backtrack from the goal node
- Completeness: the planner can find a path if at least one exists.
- BFS, DFS: use a queue/stack, add nodes into it one by one, and pop nodes FIFO (BFS) / LIFO (DFS)
- But what if edge costs are not all 1? Use heuristics (cost to goal) in Best-First-Search algorithms: Greedy Best-First Search, Dijkstra, A*
  - Cost has to do with energy and terrain
  - Open List, Closed List (nodes that have been examined)
- Open list can use: `std::multimap`, `std::priority_queue`, or `std::make_heap`
- Greedy Best-First Search
  - Not optimal, because it doesn't consider the cost already paid to get to the current node.
  - But it can be complete with a closed list, so it doesn't get stuck in a loop
- Procedure (a minimal sketch follows this list)
  - Initialize the open list with the start node, and an empty closed list.
  - Loop until the open list is empty
    - Pop the first node from the priority queue,
    - mark it as visited
    - and put it in the closed list
    - If the node is the goal, quit
    - For each neighbor node, if "unvisited":
      - mark as to_visit
      - calculate `h(x)`, and update it with `min(h(x), h(x)_0)`
      - push onto the priority queue
  - Back track from the goal node in the closed list
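A minimal Python sketch of this loop, assuming hypothetical `neighbors(n)` and `h(n)` helpers. Swapping the priority key `h(x)` for `g(x)` gives Dijkstra, and for `g(x) + h(x)` gives A*:

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Greedy Best-First Search: expand purely by h(x), the cost-to-goal guess."""
    open_list = [(h(start), start)]        # priority queue keyed on h(x)
    parent, closed = {start: None}, set()
    while open_list:                       # loop until the open list is empty
        _, node = heapq.heappop(open_list)
        if node in closed:
            continue
        closed.add(node)                   # mark as visited
        if node == goal:                   # back track through parents
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nb in neighbors(node):
            if nb not in closed and nb not in parent:
                parent[nb] = node          # remember how we reached nb
                heapq.heappush(open_list, (h(nb), nb))
    return None                            # open list empty: no path
```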
- Dijkstra: all nodes will be explored. Like BFS.
  - Uses only the cost to reach the current node, `g(x)`; it can't see how far we are from the goal
  - Procedure: similar to greedy, except `h(x) -> g(x)` (the existing cost-to-come)
- A*: same as Greedy, except `h(x) -> h(x) + g(x)`
  - As long as the heuristic < actual cost (admissible), we get the optimal path
    - L2 norm, L3 norm, ... and 0 distance (Dijkstra) are all admissible, but they're not close to the true cost
    - In practice, designing the heuristic is what we need to spend time on.
  - A* is more goal-oriented
  - Weighted A*: `epsilon * h(x)`. Can be orders of magnitude faster than A*.
- Heuristics:
  - There's a closed-form "best heuristic": the cost when there is no obstacle (a.k.a. diagonal distance)
- Tie breaker: prefer a straight line over big turns (similar triangles when straight); one possible implementation follows
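One common way to implement this (an assumption here, not necessarily this wiki's exact scheme): add a tiny penalty proportional to the cross product between the node-to-goal and start-to-goal vectors, which is zero exactly on the straight start-goal line:

```python
def tie_broken_h(node, start, goal, h):
    """Prefer nodes near the straight start-goal line (similar triangles)."""
    dx1, dy1 = node[0] - goal[0], node[1] - goal[1]
    dx2, dy2 = start[0] - goal[0], start[1] - goal[1]
    cross = abs(dx1 * dy2 - dx2 * dy1)  # 0 when node lies on the start-goal line
    return h(node) + 0.001 * cross      # tiny weight barely perturbs h(x)
```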
- Jump Point Search: truly outperforms the tie breaker by not expanding neighbors that lead to the same cost `g(x)`. The biggest difference from A* is the neighbor list: "jump neighbors"
  - Philosophy:
    - Why choose a point as a neighbor if there's a cheaper / equally expensive way to get to it?
    - We only focus on the horizontal, vertical, and diagonal directions, because the other rows, columns, and diagonal cells will be visited later
    - Prefer straight lines: horizontal, diagonal, vertical
    - We may visit more points, but pushing/popping the queue is `O(log n)`
    - Reality: many times slower than A* if the map is big
- Forced neighbors: neighbors for which we HAVE TO go through the current node first, because an obstacle blocks every cheaper alternative. Therefore, a node with at least one forced neighbor must be evaluated in the future.
- Two types of nodes with forced neighbors:
  - For completeness, the goal is treated as a node with at least one forced neighbor
  - found by pattern matching
- Add the node with a forced neighbor to the OL.
  - if we hit an obstacle/wall head-on, it's not considered a node with forced neighbors
- Now you must be wondering: are we missing something if we don't add the obstacle/wall? Nope. Think: when there's no obstacle, the goal node will be added in one scan as a "node with a forced neighbor". Then it gets evaluated, popped, and the path will be a simple straight line!
- Our goal is to find a node with a forced neighbor, in 8 directions, as a new jump-point neighbor
- For one quadrant (1 out of 4):
  - From the current "look-ahead" base point, look diagonally by one cell. If that cell has a forced neighbor, add it to the open list
  - From the current "look-ahead" base point, look straight. We only look straight, and not at the row above/below us, because those are cheaper to reach later (after we move 1 cell diagonally)
  - Before this step, if the current look-ahead base point has a forced neighbor, add it to the OL. Else, if still nothing is found, move diagonally
-
-
The BEST tutorial I found is actually this, with pseudocode
- From the current state, steer at different angles
- Disadvantages:
  - The graph is too big; it takes too long
- Basic Principle:
  - Maintain the grid. But in each grid cell:
    - find the best state lattice in that cell
    - use that state lattice
- Illustrations
-
========================================================================
========================================================================
- PRM (early): randomly sample points, remove redundant points, and hope for convergence
- OMPL example: https://ompl.kavrakilab.org/OptimalPlanning_8cpp_source.html
- Sample a new point: not occupied, goal biased (select goal with a certain probability)
- Find nearest in KD tree
- `X_new` is the point in `X_sample`'s direction, but `x_dist` apart
- Add `X_new` to the KD tree if the segment `X_new - X_nearest` is collision free
  - Should save `parent` to the node
- If `X_new` is close to the goal, back track (a minimal sketch of the whole loop follows)
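A minimal 2D sketch of that loop, under assumptions: `is_free(p, q)` is a hypothetical segment collision checker, the world is 10x10, and a linear nearest-neighbor scan stands in for the KD tree:

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_bias=0.1, max_iter=5000):
    nodes, parent = [start], {start: None}
    for _ in range(max_iter):
        # goal-biased sampling: pick the goal itself with some probability
        sample = goal if random.random() < goal_bias else (
            random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0.0:
            continue
        # steer: X_new lies in X_sample's direction, `step` away from X_nearest
        new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
               nearest[1] + step * (sample[1] - nearest[1]) / d)
        if not is_free(nearest, new):    # keep only collision-free extensions
            continue
        nodes.append(new)
        parent[new] = nearest            # save parent for back-tracking
        if math.dist(new, goal) < step:  # close to the goal: back track
            path, n = [], new
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None
```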
- Disadvantages: in cluttered space, it might take a long time to sample there.
- nice example: https://github.com/AtsushiSakai/PythonRobotics#optimal-trajectory-in-a-frenet-frame
  - Frenet-Serret frame planning?
  - Ego Graph
- Working Principle: when adding a new point, 1. find the minimum-total-cost vertex within a radius; 2. rewire the graph
- Consider the nearest x neighbors' total costs to goal: if f(a_new) < f(b_new) < f(c_new), then a connects to the new node
- Rewire: is the cost through the new node better? If so, reconnect
- After you find a solution, keep iterating.
- Will converge to an optimal path
- Disadvantages:
  - Vertex search and rewiring take a lot of time!
- Advantages:
  - Draw an ellipse around the existing path and keep shrinking it; better than RRT*
  - Grow trees from both the goal and the start.
- Question:
  1. how do the two ends connect to each other?
  2. If in cluttered space, how do they meet?
<p align="center">
<img src="https://user-images.githubusercontent.com/39393023/141825998-8ffd1db5-b1e9-4f16-9313-77de546d1987.JPEG" height="600" width="width"/>
</p>
- Minimum snap - Vijay Kumar: getting waypoints, flying through hoops
-
Working Principle?
- After generating a trajectory using the path waypoints, if a segment has a collision, then insert the midpoint of the two waypoints.
========================================================================
========================================================================
- Kinematics: position, collision with obstacles. Dynamics: force, acceleration, velocity. Why kinodynamic?
  - The old-school approach is coarse path planning, then fine trajectory optimization. But if we can be a bit finer in path planning, trajectory optimization will be easier
- Consider this scenario:
-
Car Models
-
Kinodynamic RRT*:
- Path added as a curve.
- In RRT, a straight-line edge might be in collision. But in Kinodynamic RRT*, the curved edge may not be.
-
A lattice graph is not just grid + theta - each node is (x,y), but an edge is an Action. Two ways to do it: sample in control space, sample in state space.
- Sampling in control space. Easy to implement, but you need to forward-simulate each state.
  - For every state in the search tree, sample a control `u`
  - Integrate the motion, and see if there are collisions
  - Add the motion as an edge to the graph
- E.g., drones take acceleration or jerk (the derivative of acceleration) as input. The whole model is an "integrator". Since A is nilpotent, its powers eventually become zero (sketch below).
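A small NumPy sketch of why nilpotency helps, using a triple integrator (position, velocity, acceleration, with jerk as input): since A^3 = 0, the series for the matrix exponential terminates and the state-transition matrix is exact with three terms:

```python
import numpy as np

A = np.array([[0., 1., 0.],    # pos' = vel
              [0., 0., 1.],    # vel' = acc
              [0., 0., 0.]])   # acc' = jerk (the input), so A is nilpotent
t = 0.1
# exp(A t) = I + A t + A^2 t^2 / 2, exactly -- higher powers of A vanish
Phi = np.eye(3) + A * t + (A @ A) * (t ** 2 / 2)
print(Phi)
```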
- Disadvantages:
  - Not goal-oriented: you just randomly take actions
-
- Sampling in state space: higher efficiency, but needs to solve a "boundary value problem" (BVP)
  - Given the start state and interval T, for every neighbor, solve for the control.
  - Repeat the same for the outer neighbors.
  - Question: why two layers if the next waypoint is not known yet?
-
========================================================================
========================================================================
- Why do we still need to optimize the path? Because Hybrid A* might not give you the best path
  - We need a smooth path, i.e., the velocity needs to be continuous.
- Differential Flatness - for a system `x_dot = f(x,u)`, if an algebraic combination of some x, u and their derivatives can represent the whole system, then that subset of (x,u) is called the flat outputs
- Quadrotor example of differential flatness
  - 12 states, including xyz, RPY, and their rates in the body frame. Newton's formula describes the net force, and Euler's equation the net torque. Note the rotor forces must be perpendicular to the rotor plane
  - Notations: `[w_x, w_y, w_z]` represents the axis of rotation in the BODY frame; `zw = [0, 0, 1]`; `zB` = the z column of `R_WB`; the body frame `[xB, yB, zB]` is expressed in the world frame
in world frame -
From Newton's eqn, we can get
[xB, yB, zB]
-
-
Getting
w x Zb
: we can see they can be represented using (x,y,z) and their derivatives, too-
Here, u1 is the input force that we know.
a_dot
isx'''
, zb = Z axis of the Body frame in world frame Sow x Zb
can be represented by (x,y,z,theta) and its derivatives of different orders -
Represent
wx, wy
in body frame using (x,y,z, theta) -
Represent
wz
in body frame using (x,y,z, theta)
-
-
-
Differential Flatness: the drone is an underactuated system, so we can represent the whole system using 4 parameters.
- Main controller layout: note we plan in the space `(x,y,z)`, its first and second derivatives, and yaw (theta above, psi here). Our job in trajectory optimization is to figure out these 4 sets of values
- For a 2D example, if we are given `p,v,a` as the start and end conditions, we need a 5th-order trajectory for each segment (6 boundary conditions need 6 coefficients). If given `p,v,a,j`, we need 7th order. Also note we can have a non-zero velocity here
- Minimum snap: from above, we know angular acceleration can be represented by snap and lower-order terms. So the cost function `J` (see below) is the integral of (snap)^2. This is why it's called minimum snap
- Smoothness: N = 5 for one segment, so that we can minimize jerk. A mistake below: the constraints should be x, x', x''. Then we can write out x''' in terms of p and t, and minimize it.
- Important note: for numerical stability, we should solve each segment starting from T0
- We define the cost `J` as the integral over time of the squared 4th derivative of the trajectory; eventually, we want to minimize this. Written out below:
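For one segment with x(t) = sum_i p_i t^i, this is a standard derivation (not specific to this wiki) and the cost is quadratic in the coefficients:

```latex
J=\int_0^T \big(x^{(4)}(t)\big)^2\,dt=\mathbf{p}^\top Q\,\mathbf{p},\qquad
Q_{ij}=
\begin{cases}
\dfrac{i(i-1)(i-2)(i-3)\,j(j-1)(j-2)(j-3)}{i+j-7}\,T^{\,i+j-7}, & i,j\ge 4\\[1ex]
0, & \text{otherwise}
\end{cases}
```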
- The constraints (TODO) come in two kinds: derivative and continuity; derivative constraints can represent continuity implicitly
- Derivative: when velocities and accelerations are given, we can write the following matrix for one trajectory segment (say [0, T])
- Continuity: when velocities and accelerations are not given, we take the above and apply this constraint:
-
-
GOAL: This is a convex optimization problem:
- J definition: for ALL trajectories:
- And now we are trying to optimize J over all parameters P, eventually.
-
-
A few challenges:
- Challenge 1: numerical instability. Solution: convert P to `x_dots`, such as velocities and accelerations
- We use a conversion matrix C to convert the params to `x_dots`, which also allows us to enforce constraints (p, v, a at the start and end, and p at each waypoint)
  - Note what C is, and that we can enforce the continuity constraints here.
- Then J can be written in this form:
- Challenge 2: there are fixed variables we don't care about. So we separate the fixed variables from the variables being optimized.
- So we can separate J into a fixed part (Xf) and a free part (Xp). Then we take the derivative of this quadratic form, and get:
-
-
This optimization is applied to all waypoints, so it minimizes the whole cost.
- If the trajectory hits an obstacle, add another waypoint in the middle and replan. The reasoning: once enough points have been inserted, the trajectory will eventually be close to the (collision-free) path. But there's no guarantee on efficiency, etc.
-
Convex Solution:
- OOQP: a good QP problem solver! Better than MOSEK. CVX is a MATLAB wrapper
- Minimum-snap setup - because we're trying to optimize `J = integral(x''''^2)`, and `x''''` is snap!
- Q is still defined as above. Here is its implementation:
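(The original implementation here was an image; below is a minimal NumPy sketch under the assumption that coefficients are ordered low-to-high, x(t) = sum_i p_i t^i.)

```python
import numpy as np

def snap_cost_matrix(n_order, T):
    """Hessian Q of J = integral_0^T (x'''')^2 dt for one segment."""
    n = n_order + 1
    Q = np.zeros((n, n))
    for i in range(4, n):
        for j in range(4, n):
            k = i + j - 7  # power of T after integrating t^(i-4) * t^(j-4)
            Q[i, j] = (i * (i - 1) * (i - 2) * (i - 3)
                       * j * (j - 1) * (j - 2) * (j - 3) / k) * T ** k
    return Q

Q = snap_cost_matrix(7, 1.0)  # 7th-order polynomial, unit segment time
```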
- Constraints
  - The polynomials and their derivatives
  - "Positional" constraints (derivative constraints)
  - Continuity constraints (necessary if the derivative constraints don't specify that the start and end points of two neighboring segments are the same!)
- So to sum up, `A_eq` is finally:
-
- get Q: the biggest thing is the M matrix:
========================================================================
========================================================================
- MDP Probability Model Formalization
- There is uncertainty in the sensors and in execution.
  - Including where to go when exploring an unknown space. Use deep reinforcement learning
- Depending on the robot's knowledge of this uncertainty, there are two types of MDP:
  - Probabilistic: there is a noise model
  - Non-deterministic: there is no noise model
- Defined with: `X` (state), `theta` (uncertainty), `U` (action), `L` (cost). Our objective is to minimize the total cost.
- The state transition is the sum over all possible noise values.
  - In reality, p(X_k+1 | x_k, theta_k, u_k) is set to 1.
- Normally we compute the "cost to goal", which can be considered the opposite of rewards
-
E.g.
-
- Non-deterministic: want to minimize the max cost-to-goal (minimax)
  - Dijkstra:
    - Initialize costs `G` to inf, and `G(goal)` to 0
    - Push the goal to the open list
    - while the start is not found:
      - pop the first item from the OL
      - Evaluate each neighbor:
        - `G_temp(k) = max_theta { L(x_k, theta_k, u_k) + G(k+1) }`; `max` here refers to the "worst case noise"
        - `G(k) = min(G(k), G_temp(k))`
    - Illustration
-
Will we ever get stuck in a loop? No, because costs only add up. Once we find `G_temp > G`, we stop adding the neighbor and quit the loop
- A*: just add an admissible heuristic toward the start
-
Disadvantages:
- only applies to graphs, not to continuous space
- Overly pessimistic
-
-
Probabilistic
- Since we have an uncertainty model, we can estimate the expectation of the cost-to-goal (Bellman Optimality Equation)
- If we knew the true cost-to-goal, we could choose the route easily. But we don't know it yet
- One method to estimate the true cost-to-goal is "value iteration" (a minimal sketch follows this list):
  - Initialize every cost-to-goal -> 0
  - Define a random evaluation order over the states
  - While not converged:
    - for all states:
      - `G_temp(k) = min_{u_k} { E[ L(x_k, theta_k, u_k) + G(x_{k+1}) ] }`
      - `G(k) = min(G, G_temp)`
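A minimal sketch of standard value iteration, with hypothetical helpers: `transition(x, u)` returns a list of `(prob, x_next)` pairs and `cost(x, u)` is the stage cost L (here the backup assigns the Bellman minimum directly):

```python
def value_iteration(states, actions, transition, cost, goal, tol=1e-6):
    G = {x: 0.0 for x in states}   # initialize every cost-to-goal to 0
    while True:
        delta = 0.0
        for x in states:           # sweep the states in some fixed order
            if x == goal:
                continue
            # Bellman optimality backup: min over u of E[L + G(next)]
            best = min(
                sum(p * (cost(x, u) + G[xn]) for p, xn in transition(x, u))
                for u in actions(x)
            )
            delta = max(delta, abs(best - G[x]))
            G[x] = best
        if delta < tol:            # stop once the values have converged
            return G
```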
- Minimal example
-
- E.g., value iterations
- Calculations of the iterations
- Because the cost is positive, an infinite loop is not the optimal option.
-
-
But we don't want to do this for every state. So we have "Real-Time Dynamic Programming":
- Initialize each state with its distance to the goal
- until total error < THRE:
  1. Choose a route based on epsilon-greedy
  2. for all states along it:
    - `G_temp(k) = min_{u_k} { E[ L(x_k, theta_k, u_k) + G(x_{k+1}) ] }`
    - `G(k) = min(G, G_temp)`
- For example
  - Iteration 1
  - Iteration 2
  - Iteration 3
-
-
-
Value iteration (VI) convergence proof
- This is called the "undiscounted case". We need all costs to be non-negative. Then we know we won't have an infinite loop, since such a loop's cost would be infinite. So we must have a finite solution, and finite state values.
- `BV(x_k)` is the new value after an iteration; it is <= the value obtained with any fixed `u_k`.
- Then, because all costs are positive, going from a child state to a parent state, the child state's cost must be > any parent state's cost * probability
- Then we know `BV(x_k)` must be closer to `V*(x_k)` than `V(x_k)` is
-
========================================================================
========================================================================
- It's also known as "receding horizon control" (RHC). Receding horizon means "moving time horizon". So at each step, you optimize a cost that considers control effort and the final state, under disturbances and constraints. It used to be slow, but now there is code generation for RHC solvers.
  - The model has a system model (like F=ma) and a problem model (like min(dist))
  - Prediction: what is F like in 3 s?
  - Control: choosing the best policy (like choosing the best F at each time)
  - Summary: we have an associate controller, a robust controller. Then we have a trajectory generator, which uses a receding horizon (predictive model) to minimize a cost (say, the SSE of all states). No disturbance is considered here yet. Output = reference trajectory. Linear MPC and PSO (Particle Swarm Optimization)
-
Problem Model: final state and process
- However, at each time, how do we know the state?
  - zero-order hold (discretization)
  - polynomial
  - neural network
- Tools - non-convex optimization: Sequential Quadratic Programming (SQP), or Particle Swarm Optimization (PSO)
- To optimize quickly, "tube-based MPC" was invented, which has:
  - a mathematical "nominal system" model without disturbances.
  - plus an "associate controller" running at a higher frequency, to fight the disturbances
  - It's called "tube-based MPC" because it works as long as its error stays within a range (the tube)
  - Also, the MATLAB MPC toolbox can generate C++ code
- In MPC, there are 4 major steps:
  - Build the prediction model over the time horizon
    - Matrix representation
  - Set up the optimization: this is the analytical solution. Note: when adding a tracking reference signal r(t) at a given time, Bp becomes Bp - r(t)
  - Solve the optimization
    - Note: quadprog() only accepts `Ax < b`, so the constraints need to be converted (see the snippet below)
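A tiny NumPy illustration of that conversion (hypothetical numbers): a `>=` constraint becomes `<=` by negating both sides:

```python
import numpy as np

# quadprog-style solvers accept only A x <= b. The constraint x1 >= -1,
# i.e. [1, 0] x >= -1, is flipped by negating both sides:
A_ge = np.array([[1.0, 0.0]])
b_ge = np.array([-1.0])
A_ub, b_ub = -A_ge, -b_ge   # now [-1, 0] x <= 1, the same feasible set
```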
- Recover the trajectory: if there is a constraint a > -1 and initially a < -1, soft constraints pull it back instead of returning "no solution"
- Note: there is overshoot because we want to minimize total jerk while ...
-
Soft Constraints
-
-
Jerk-Limited Trajectory (JLT)
- Solving for the trajectory is very time-consuming. Another way is to treat this as a Boundary Value Problem. One efficient way is to use bang-bang control to generate optimal trajectories, then pick the highest-scoring one
- Jerk feels like a change in force.
- JLT example
  - Just a simple model: a 3rd-order integrator
  - We can do bang-bang on jerk, whose duration is determined by the acceleration time (sketch below)
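A sketch of such a profile on the 3rd-order integrator, with an assumed shape (+j_max, coast at constant acceleration, then -j_max); `t1` is the jerk-on time and `t2` the coast time:

```python
import numpy as np

def bang_bang_jerk(j_max, t1, t2, dt=0.01):
    ts = np.arange(0.0, 2 * t1 + t2, dt)
    jerk = np.where(ts < t1, j_max,
                    np.where(ts < t1 + t2, 0.0, -j_max))
    acc = np.cumsum(jerk) * dt  # integrate jerk -> acceleration
    vel = np.cumsum(acc) * dt   # integrate acceleration -> velocity
    pos = np.cumsum(vel) * dt   # integrate velocity -> position
    return ts, jerk, acc, vel, pos
```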
-
One application is the Safety Corridor:
- plan a route from a to b
- Use binary search to find the point from which we can plan a route to c
- e.g.,
-
-
JLT + MPC
- This is local planning. A costmap is built; the cost is the distance to the closest obstacle. We generate a bunch of JLT trajectories as guesses, then use the MPC's cost function to evaluate each trajectory.
- How do we generate those trajectories? Use PSO (Particle Swarm Optimization). PSO is a gradient-free optimization method
- PSO: eventually, the particles will converge to a single point in (x,y) space
-
PSO algo (a generic sketch follows this list)
- Initialize particles (theta); each particle is a guess of the `(v, w)` at the end point of a trajectory after time tf. The initial `(v, w)` values are random
  - Note: a particle is always a guess of parameters.
- For each iteration:
  - For each particle:
    - Given the current `v, w, current_pos` and the guessed `v, w`, generate a trajectory with 40 points `(x, y)`
      - A neural network can be used for prediction: `[theta_init, theta_end, v_init, v_end]`
    - Get the accumulated cost of the trajectory
      - In the costmap, the global goal is already represented by the cell with 0 cost. The cost also encompasses the distance to the nearest obstacle, like `log(dist_to_nearest_obs)`
      - Therefore, the `(v, w)` with the lowest cost must be collision free, and will lead to the goal
    - If `cost < cost*`, choose this traj as the best traj; if `cost < historical_cost`, save the state of the current particle
    - Save the historical best cost and the current global best velocity
    - Update the particle: get its direction to the global best as vf1, its direction to its historical best as vf2, and its current velocity as vf3, then `vf = rand1 * vf1 + rand2 * vf2 + rand3 * vf3`
  - Move the robot using the best `(v, w)` for 0.1 s
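A generic, self-contained PSO sketch; the toy quadratic cost below stands in for the trajectory cost described above, and the three random weights mirror the `vf` update rule:

```python
import random

def pso(cost, n_particles=30, iters=50, bounds=((-1.0, 1.0), (-1.0, 1.0))):
    dim = len(bounds)
    P = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in P]                        # per-particle historical best
    pbest_cost = [cost(p) for p in P]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # global best
    for _ in range(iters):
        for i, p in enumerate(P):
            for d in range(dim):
                r1, r2, r3 = random.random(), random.random(), random.random()
                # vf = rand1*(to global best) + rand2*(to own best) + rand3*(inertia)
                V[i][d] = r1 * (gbest[d] - p[d]) + r2 * (pbest[i][d] - p[d]) + r3 * V[i][d]
                p[d] += V[i][d]
            c = cost(p)
            if c < pbest_cost[i]:                    # update historical best
                pbest[i], pbest_cost[i] = p[:], c
                if c < gbest_cost:                   # update global best
                    gbest, gbest_cost = p[:], c
    return gbest, gbest_cost

# Usage with a toy cost standing in for the trajectory cost:
best, best_cost = pso(lambda p: (p[0] - 0.5) ** 2 + (p[1] + 0.2) ** 2)
```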
-
Example:
-
========================================================================
========================================================================
- 2.5D map: https://github.com/anybotics/grid_map
  - Fixed camera angle
  - every 2D pixel has an elevation
  - The grid can be densely subdivided; the grid map can be queried in O(1)
- A tree can be used to represent 3D object positions (octomap):
  - each node has 8 children
  - So one 3D position leads to this "sparse" constellation
  - Indexing time: you need O(log n) time to query the tree, O(n log n) to index n points (see the sketch below)
-
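A sketch of the per-level descent that gives the logarithmic query time: one bit per axis picks which of the 8 children contains the point:

```python
def octree_child_index(p, center):
    """Index (0-7) of the child octant containing point p, given the node center."""
    return (int(p[0] >= center[0])
            | (int(p[1] >= center[1]) << 1)
            | (int(p[2] >= center[2]) << 2))
```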
-
Voxel Hashing
- Most sparse, used for RGBD reconstruction
- principle
-
Point Cloud: Point Cloud Library (PCL)
- unordered
- no index query
- A rotational camera might be cheaper
-
Distance Field maps: TSDF (Truncated Signed Distance Function), ESDF (Euclidean Signed Distance Function)
- positive distance from the camera to the obstacle; from the obstacle out to infinity the distance is negative
- But it only cares about distance within a certain (truncation) area; outside the area it's NA. So not very useful
- ESDF instead has all the values. Fiesta
-
Voronoi Diagram Map: at the center of each polygon stands a cell tower. Points within a polygon have that center as their closest cell tower
- Skeleton Map? Extracts a "skeleton" out of ESDF and does planning on a Voronoi diagram map?
-
-
Compliant control
- Nice explanation: https://robotics-explained.com/impedancecontrol