Cognitive Robotics Formula Sheet

My formula sheet for the Cognitive Robotics module

#Robotics #MastersProgram #Miscellaneous #Studies

Cognitive Robotics – Formula Sheet

1. Probability Theory

Conditional Probability

p(x \mid y) = \frac{p(x,y)}{p(y)} \qquad p(x,y) = p(x \mid y)\,p(y)

Law of Total Probability

Discrete:

p(x) = \sum_y p(x \mid y)\,p(y)

Continuous:

p(x) = \int p(x \mid y)\,p(y)\,dy

Bayes' Rule

p(x \mid y) = \frac{p(y \mid x)\,p(x)}{p(y)} = \eta \cdot p(y \mid x)\,p(x)

Complement Rule

p(\neg A) = 1 - p(A)

Entropy

How uncertain are we about a random variable X?

(The entries of X form a discrete set of states, and the belief is a distribution over those states. A distribution (kitchen, bedroom, hallway) = (1, 0, 0) has low entropy, while (0.33, 0.33, 0.34) has high entropy.)

H(X) = -\sum_x p(x) \log p(x)
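As a quick numeric check of the two example distributions above, here is a minimal sketch (base-2 logarithm chosen here, so entropy is in bits; the formula leaves the base open):

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution; 0 * log 0 is treated as 0."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

print(entropy([1.0, 0.0, 0.0]))      # peaked belief: 0 bits
print(entropy([0.33, 0.33, 0.34]))   # near-uniform: close to log2(3) ≈ 1.585
```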

2. Gaussian Distributions

Univariate Gaussian

\mathcal{N}(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right)

Multivariate Gaussian

\mathcal{N}(x; \mu, \Sigma) = \frac{1}{\sqrt{(2\pi)^n |\Sigma|}} \exp\left( -\frac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu) \right)

Linear Transformation

x' = A x + b \qquad \mu' = A \mu + b \qquad \Sigma' = A \Sigma A^T
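A small sketch of pushing a Gaussian belief through a linear map (the matrices and numbers below are made up for illustration):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # e.g. a constant-velocity step
b = np.array([0.5, 0.0])
mu = np.array([0.0, 1.0])
Sigma = np.diag([0.1, 0.2])

mu_new = A @ mu + b               # mu' = A mu + b
Sigma_new = A @ Sigma @ A.T       # Sigma' = A Sigma A^T
print(mu_new)                     # [1.5 1. ]
print(Sigma_new)
```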

3. Bayes Filter

Belief Definition

Bel(x_t) = p(x_t \mid z_{1:t}, u_{1:t})

Markov Assumptions

p(x_t \mid x_{0:t-1}, u_{1:t}) = p(x_t \mid x_{t-1}, u_t) \qquad p(z_t \mid x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t \mid x_t)

Recursive Bayes Filter

Bel(x_t) = \eta \, p(z_t \mid x_t) \int p(x_t \mid u_t, x_{t-1}) \, Bel(x_{t-1}) \, dx_{t-1} \qquad \eta = \frac{1}{p(z_t \mid z_{1:t-1}, u_{1:t})}

Prediction

\overline{Bel}(x_t) = \int p(x_t \mid u_t, x_{t-1}) \, Bel(x_{t-1}) \, dx_{t-1}

Correction

Bel(x_t) = \eta \, p(z_t \mid x_t) \, \overline{Bel}(x_t)
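A minimal sketch of the correction step for a discrete belief, assuming a static state (so the prediction step is the identity); the two-state example and the sensor likelihoods are made-up values:

```python
# Discrete correction step: Bel(x) = eta * p(z|x) * Bel_bar(x).
def correct(bel, likelihood):
    unnorm = [l * b for l, b in zip(likelihood, bel)]
    eta = 1.0 / sum(unnorm)          # normalizer eta
    return [eta * u for u in unnorm]

bel = [0.5, 0.5]                     # uniform prior over {open, closed}
bel = correct(bel, [0.6, 0.3])       # assumed likelihoods p(z | x)
print(bel)                           # ≈ [0.667, 0.333]
```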

4. Kalman Filter

Linear System Model

x_t = A x_{t-1} + B u_t + \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, R)

R: motion noise covariance

Measurement Model

z_t = C x_t + \delta_t, \qquad \delta_t \sim \mathcal{N}(0, Q)

Q: measurement noise covariance

Prediction

\bar{x}_t = A_t x_{t-1} + B_t u_t \qquad \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R

Kalman Gain

K_t = \bar{\Sigma}_t C_t^T \left( C_t \bar{\Sigma}_t C_t^T + Q \right)^{-1}

Correction

x_t = \bar{x}_t + K_t (z_t - C_t \bar{x}_t) \qquad \Sigma_t = (I - K_t C_t) \bar{\Sigma}_t

1D Kalman Filter

Prediction: The mean is transformed linearly and the variances add up.

\mu_1 = \mu_0 + u \qquad \sigma_1^2 = \sigma_0^2 + \sigma_u^2 \qquad \sigma_1 = \sqrt{\sigma_0^2 + \sigma_u^2}

Correction: The Kalman gain is the ratio of the prediction variance to the total variance (prediction + measurement). The innovation z - \mu_1, weighted by the Kalman gain, is added to the predicted mean. The variance is reduced by the factor (1 - K).

K = \frac{\sigma_1^2}{\sigma_1^2 + \sigma_z^2} \qquad \mu = \mu_1 + K(z - \mu_1) \qquad \sigma^2 = (1 - K)\sigma_1^2
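The 1D equations above fit in a few lines; here is a sketch with made-up numbers:

```python
def kf_predict(mu, var, u, var_u):
    """1D prediction: the mean shifts by u, the variances add."""
    return mu + u, var + var_u

def kf_correct(mu, var, z, var_z):
    """1D correction with Kalman gain K = var / (var + var_z)."""
    K = var / (var + var_z)
    return mu + K * (z - mu), (1 - K) * var

mu, var = kf_predict(0.0, 1.0, u=2.0, var_u=1.0)   # mu = 2, var = 2
mu, var = kf_correct(mu, var, z=3.0, var_z=2.0)    # K = 0.5
print(mu, var)                                     # 2.5 1.0
```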

5. Extended Kalman Filter (EKF)

Nonlinear Models

x_t = g(u_t, x_{t-1}) + \epsilon_t \qquad z_t = h(x_t) + \delta_t

Jacobians

G_t = \frac{\partial g(u_t, x)}{\partial x}\Big|_{x=\mu_{t-1}} \qquad H_t = \frac{\partial h(x)}{\partial x}\Big|_{x=\bar{\mu}_t}

EKF Prediction

\bar{\mu}_t = g(u_t, \mu_{t-1}) \qquad \bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^T + R

EKF Correction

K_t = \bar{\Sigma}_t H_t^T (H_t \bar{\Sigma}_t H_t^T + Q)^{-1} \qquad \mu_t = \bar{\mu}_t + K_t (z_t - h(\bar{\mu}_t)) \qquad \Sigma_t = (I - K_t H_t) \bar{\Sigma}_t
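A sketch of the EKF prediction step for an assumed unicycle-like state [x, y, theta] (the motion model g and all numbers are illustrative, not from the lecture):

```python
import numpy as np

def g(u, x):
    """Assumed nonlinear motion model: drive distance d, then turn by dtheta."""
    d, dtheta = u
    return np.array([x[0] + d * np.cos(x[2]),
                     x[1] + d * np.sin(x[2]),
                     x[2] + dtheta])

def jacobian_G(u, x):
    """Analytic Jacobian of g with respect to the state x."""
    d, _ = u
    return np.array([[1.0, 0.0, -d * np.sin(x[2])],
                     [0.0, 1.0,  d * np.cos(x[2])],
                     [0.0, 0.0,  1.0]])

mu = np.array([0.0, 0.0, 0.0])
Sigma = np.eye(3) * 0.01
R = np.eye(3) * 0.001
u = (1.0, 0.0)

mu_bar = g(u, mu)                    # mean goes through the nonlinear model
G = jacobian_G(u, mu)
Sigma_bar = G @ Sigma @ G.T + R      # covariance uses the linearization
print(mu_bar)                        # [1. 0. 0.]
```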

6. Unscented Kalman Filter (UKF)

Weights

w_0^{(m)} = \frac{\lambda}{n+\lambda} \qquad w_0^{(c)} = \frac{\lambda}{n+\lambda} + (1 - \alpha^2 + \beta) \qquad w_i^{(m)} = w_i^{(c)} = \frac{1}{2(n+\lambda)} \quad (i = 1, \dots, 2n)

Sigma Points

χ0=μ\chi_0 = \mu χi=μ+((n+λ)Σ)i\chi_i = \mu + (\sqrt{(n+\lambda)\Sigma})_i χi+n=μ((n+λ)Σ)i\chi_{i+n} = \mu - (\sqrt{(n+\lambda)\Sigma})_i λ=α2(n+κ)n\lambda = \alpha^2(n+\kappa) - n

Mean

\mu = \sum_i w_i^{(m)} \chi_i

Covariance

\Sigma = \sum_i w_i^{(c)} (\chi_i - \mu)(\chi_i - \mu)^T
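A sigma-point sketch using a Cholesky factor as the matrix square root and the standard scaled-unscented weights; the parameter values alpha, beta, kappa below are assumed for illustration. The weighted sigma points reproduce the original mean and covariance exactly:

```python
import numpy as np

def sigma_points(mu, Sigma, alpha=0.5, beta=2.0, kappa=0.0):
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * Sigma)    # matrix square root
    pts = [mu] \
        + [mu + S[:, i] for i in range(n)] \
        + [mu - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return np.array(pts), wm, wc

mu = np.array([1.0, 2.0])
Sigma = np.diag([0.5, 0.25])
X, wm, wc = sigma_points(mu, Sigma)
print(wm @ X)     # recovers mu: [1. 2.]
```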

7. Particle Filter

Importance Weight (General)

w_t^{(i)} \propto \frac{ p(z_t \mid x_t^{(i)}) \, p(x_t^{(i)} \mid u_t, x_{t-1}^{(i)}) }{ q(x_t^{(i)} \mid x_{t-1}^{(i)}, u_t, z_t) }

or recursively:

w_t^{(i)} = w_{t-1}^{(i)} \frac{ p(z_t \mid x_t^{(i)}) \, p(x_t^{(i)} \mid u_t, x_{t-1}^{(i)}) }{ q(x_t^{(i)} \mid x_{t-1}^{(i)}, u_t, z_t) }

Bootstrap Filter

w_t^{(i)} \propto p(z_t \mid x_t^{(i)})

Normalization

w_t^{(i)} = \frac{w_t^{(i)}}{\sum_j w_t^{(j)}}

Effective Sample Size

N_{\text{eff}} = \frac{1}{\sum_i \left( w_t^{(i)} \right)^2}
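A sketch of bootstrap-filter weighting: weights proportional to the measurement likelihood, then normalized; the effective sample size flags weight degeneracy (the likelihood values are made up):

```python
def normalize(w):
    s = sum(w)
    return [wi / s for wi in w]

def n_eff(w):
    """Effective sample size of normalized weights; N_eff << N suggests resampling."""
    return 1.0 / sum(wi * wi for wi in w)

likelihoods = [0.9, 0.1, 0.1, 0.1]   # p(z | x^(i)) per particle, assumed
w = normalize(likelihoods)
print(n_eff(w))                      # ≈ 1.71 of 4 particles: one dominates
```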

8. Differential Drive & Motion Models

Wheel Velocities

v = \frac{v_r + v_l}{2} \qquad \omega = \frac{v_r - v_l}{L}

Instantaneous Turning Radius

R = \frac{v}{\omega}

Velocity Motion Model

x' = x - \frac{v}{\omega}\sin\theta + \frac{v}{\omega}\sin(\theta + \omega \Delta t) \qquad y' = y + \frac{v}{\omega}\cos\theta - \frac{v}{\omega}\cos(\theta + \omega \Delta t) \qquad \theta' = \theta + \omega \Delta t

Special Case: \omega = 0 (straight-line motion)

x' = x + v \Delta t \cos\theta \qquad y' = y + v \Delta t \sin\theta \qquad \theta' = \theta
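Both cases of the velocity motion model in one sketch, falling back to the straight-line formulas when omega is (near) zero:

```python
import math

def motion_model(x, y, theta, v, omega, dt, eps=1e-9):
    if abs(omega) < eps:                 # straight-line special case
        return x + v * dt * math.cos(theta), y + v * dt * math.sin(theta), theta
    r = v / omega                        # instantaneous turning radius
    x_new = x - r * math.sin(theta) + r * math.sin(theta + omega * dt)
    y_new = y + r * math.cos(theta) - r * math.cos(theta + omega * dt)
    return x_new, y_new, theta + omega * dt

# Quarter circle: v = 1, omega = pi/2 for 1 s from the origin, theta = 0,
# which ends at (2/pi, 2/pi) with heading pi/2.
print(motion_model(0.0, 0.0, 0.0, 1.0, math.pi / 2, 1.0))
```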

9. Occupancy Grid Mapping

Binary Bayes Update

Assumption: Cells are independent given measurements.

p(m_i \mid z_{1:t}, x_{1:t}) = \frac{ p(z_t \mid m_i, x_t) \, p(m_i \mid z_{1:t-1}, x_{1:t-1}) }{ p(z_t \mid z_{1:t-1}, x_{1:t}) }

Log-Odds Representation

l_t = \log \frac{p(m_i \mid z_{1:t}, x_{1:t})}{p(\neg m_i \mid z_{1:t}, x_{1:t})}

Log-Odds Update

l(m_i \mid z_{1:t}, x_{1:t}) = l(m_i \mid z_t, x_t) + l(m_i \mid z_{1:t-1}, x_{1:t-1}) - l(m_i)
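A log-odds update sketch for a single cell; the inverse sensor model value (0.7 for a "hit") and the uniform prior are assumed numbers:

```python
import math

def prob_to_logodds(p):
    return math.log(p / (1 - p))

def logodds_to_prob(l):
    return 1 - 1 / (1 + math.exp(l))

l_occ = prob_to_logodds(0.7)     # inverse sensor model for a hit, assumed
l_prior = prob_to_logodds(0.5)   # uniform prior -> log-odds 0

l = 0.0                          # start the cell at the prior
for _ in range(3):               # three consistent "occupied" measurements
    l = l + l_occ - l_prior      # the log-odds update from above
print(round(logodds_to_prob(l), 3))   # 0.927
```

Because the update is additive, repeated consistent measurements simply stack, and free-space measurements (log-odds below the prior) can later undo them.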

10. EKF-SLAM

State Vector

\mu = \begin{pmatrix} x \\ m \end{pmatrix} \qquad \dim(\mu) = 3 + 2N \qquad \Sigma \in \mathbb{R}^{(3+2N)\times(3+2N)} \qquad \Sigma = \begin{pmatrix} \Sigma_{xx} & \Sigma_{xm} \\ \Sigma_{mx} & \Sigma_{mm} \end{pmatrix}

11. FastSLAM (Rao-Blackwellization)

p(x_{1:t}, m \mid z_{1:t}, u_{1:t}) = p(x_{1:t} \mid z_{1:t}, u_{1:t}) \prod_j p(m_j \mid x_{1:t}, z_{1:t})

12. ICP (Least Squares Alignment)

Minimize:

E(R, t) = \sum_i \| y_i - (R x_i + t) \|^2

Centroids:

\bar{x} = \frac{1}{N} \sum_i x_i \qquad \bar{y} = \frac{1}{N} \sum_i y_i

Covariance:

H = \sum_i (x_i - \bar{x})(y_i - \bar{y})^T

SVD:

H = U \Sigma V^T \qquad R = V U^T \qquad t = \bar{y} - R \bar{x}
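One least-squares alignment step with known correspondences (the inner loop of ICP), sketched with an assumed 2D example that recovers a known rotation and translation; the sign flip guards against reflections:

```python
import numpy as np

def align(X, Y):
    """Find R, t minimizing sum ||y_i - (R x_i + t)||^2 (points as rows)."""
    xb, yb = X.mean(axis=0), Y.mean(axis=0)
    H = (X - xb).T @ (Y - yb)              # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                         # R = V U^T
    if np.linalg.det(R) < 0:               # reflection -> flip last axis
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = yb - R @ xb
    return R, t

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
R_true = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation
Y = X @ R_true.T + np.array([2.0, 1.0])
R, t = align(X, Y)
print(np.allclose(Y, X @ R.T + t))   # True
```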

13. A*

Admissibility: h(n) \le h^*(n), i.e. the heuristic never overestimates the true cost to the goal.

Consistency: h(n) \le c(n, n') + h(n') for every successor n' (consistency implies admissibility).

f(n) = g(n) + h(n)

Without a heuristic (h(n) = 0), A* reduces to Dijkstra's algorithm:

f(n) = g(n)
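A minimal grid A* sketch with a Manhattan-distance heuristic, which is admissible on a 4-connected grid with unit step cost (the grid and coordinates are made up):

```python
import heapq

def astar(grid, start, goal):
    """Return the cost of the shortest path on a 0/1 occupancy grid, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]          # entries are (f, g, node)
    best_g = {start: 0}
    while open_set:
        f, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if (0 <= nb[0] < len(grid) and 0 <= nb[1] < len(grid[0])
                    and grid[nb[0]][nb[1]] == 0
                    and g + 1 < best_g.get(nb, float("inf"))):
                best_g[nb] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nb), g + 1, nb))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # 6 (around the obstacle row)
```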

14. Hough Transform (Line)

\rho = x \cos\theta + y \sin\theta

15. Precision / Recall / F1

\text{Precision} = \frac{TP}{TP + FP} \qquad \text{Recall} = \frac{TP}{TP + FN} \qquad F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
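Direct computation from confusion-matrix counts (the counts are made-up example numbers):

```python
def prf1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = prf1(tp=8, fp=2, fn=4)
print(p, r, round(f1, 3))   # 0.8 0.6666666666666666 0.727
```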

16. Distance Point to Line

For the line through points A and B and a point C:

d = \frac{\| (B - A) \times (C - A) \|}{\| B - A \|}
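In 2D the cross product reduces to a scalar whose magnitude is twice the area of triangle ABC; dividing by the base length |B - A| gives the height, i.e. the distance:

```python
def point_line_distance(A, B, C):
    """Distance from point C to the (infinite) line through A and B, in 2D."""
    cross = (B[0] - A[0]) * (C[1] - A[1]) - (B[1] - A[1]) * (C[0] - A[0])
    base = ((B[0] - A[0]) ** 2 + (B[1] - A[1]) ** 2) ** 0.5
    return abs(cross) / base

print(point_line_distance((0, 0), (2, 0), (1, 3)))   # 3.0
```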

17. DWA Navigation Function

NF = \alpha \cdot \text{vel} + \beta \cdot \text{nf} + \gamma \cdot \Delta\text{nf} + \theta \cdot \text{goal}

18. Types of Robotics

  • Classical Robotics (e.g. industrial robots)
    • Exact models of the world
    • No sensing necessary
  • Reactive Paradigm (e.g. Didabot)
    • No world model at all
    • Relies heavily on sensing
  • Hybrid Systems (e.g. rovers)
    • Model-based at higher levels, e.g. for navigation
    • Reactive at lower levels, e.g. for obstacle avoidance
  • Probabilistic Robotics (e.g. self-driving cars)
    • Integrates models and sensing with an explicit representation of uncertainty (e.g. Bayesian filters)
    • Models and sensors are inaccurate
  • Cognitive Robotics (e.g. humanoid robots)
    • Cognitive functions normally associated with people or animals: acting autonomously to achieve goals and coping with unpredictable situations
    • Can interpret various kinds of sensor data
