11/30/2017

Normalized Least Mean Square

$\vec{a}:\text{the unknown vector of the system parameters}$
$\vec{x}:\text{the vector of the input signal }$
$y:\text{the output signal }$

$y=\vec{a}\cdot\vec{x}$

$\vec{a}^{\prime}:\text{the vector of the prior estimated parameters}$

$\text{the estimated output}: y^{\prime}=\vec{a}^{\prime}\cdot\vec{x}$

$\text{error}:e=y-y^{\prime}$

$\vec{a}^*:\text{the vector of the posterior estimated parameters}$

$\text{assuming } y=\vec{a}^*\cdot\vec{x} \text{ and } \vec{a}^*=\vec{a}^{\prime}+\mu\vec{x}$

$e=(\vec{a}^{\prime}+\mu\vec{x})\cdot\vec{x} - \vec{a}^{\prime}\cdot\vec{x}$
$e=\mu\vec{x}\cdot\vec{x}=\mu{\lVert\vec{x}\rVert}^2$

$\mu=\dfrac{e}{{\lVert\vec{x}\rVert}^2}$

$\vec{a}^*=\vec{a}^{\prime}+\dfrac{e}{{\lVert\vec{x}\rVert}^2}\vec{x}$
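The update rule above can be turned into a small Python sketch (my own illustration, not from any particular library; the `step` and `eps` parameters are common practical additions, namely a step size between 0 and 2 and a small regularizer to avoid dividing by a zero-norm input, and are assumptions beyond the derivation above):

```python
import numpy as np

def nlms_update(a_prior, x, y, step=1.0, eps=1e-8):
    """One NLMS step: a* = a' + step * (e / ||x||^2) * x."""
    e = y - a_prior @ x                 # error e = y - y'
    return a_prior + step * e / (x @ x + eps) * x

# Identify an unknown 3-tap system a from input/output pairs.
rng = np.random.default_rng(0)
a_true = np.array([0.5, -1.0, 2.0])
a_est = np.zeros(3)
for _ in range(200):
    x = rng.standard_normal(3)
    y = a_true @ x                      # noiseless system output
    a_est = nlms_update(a_est, x, y)
print(a_est)                            # approaches a_true
```

With `step=1` and no noise, each update makes the posterior estimate reproduce the current output exactly, which is precisely the choice of $\mu$ derived above.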



6/22/2017

Technology and Business Skills


$\text{Result} = \text{Technology}^{\text{Business Skills}}$

Technology   Business Skills   Result
0            *                 0
1            0                 1
1            1                 1
1            2                 1
2            0                 1
2            1                 2
2            2                 4
2            3                 8

4/28/2017

Estimation - Kalman Filter II

From the point of view of the measurement, we can make a prediction of the measurement ${\textbf{z}_1}'=\textbf{H}_1{\textbf{x}_1}'$ from ${\textbf{x}_1}'$, with covariance $\textbf{H}_1{\textbf{P}_1}'\textbf{H}_1^T$, so we get $({\textbf{z}_1}', \textbf{H}_1{\textbf{P}_1}'\textbf{H}_1^T)$.

The measurement is $(\textbf{z}_1, \textbf{R}_1)$, and the estimate is

$\textbf{z}_1^* = (\textbf{I}-\textbf{G}_1){\textbf{z}_1}'+\textbf{G}_1\textbf{z}_1$

a good estimate comes with

$\textbf{G}_1=\textbf{H}_1{\textbf{P}_1}'\textbf{H}_1^T(\textbf{H}_1{\textbf{P}_1}'\textbf{H}_1^T+\textbf{R}_1)^{-1} = \textbf{H}_1\textbf{K}_1$

$\textbf{z}_1^* = (\textbf{I}-\textbf{H}_1\textbf{K}_1)\textbf{H}_1{\textbf{x}_1}'+\textbf{H}_1\textbf{K}_1\textbf{z}_1$

$ = \textbf{H}_1(\textbf{I}-\textbf{K}_1\textbf{H}_1){\textbf{x}_1}'+\textbf{H}_1\textbf{K}_1\textbf{z}_1$

$\textbf{H}_1\textbf{x}_1^* = \textbf{H}_1(\textbf{I}-\textbf{K}_1\textbf{H}_1){\textbf{x}_1}'+\textbf{H}_1\textbf{K}_1\textbf{z}_1$

so $\textbf{K}_1$ can give a good estimate for $\textbf{z}_1^*$; it also seems to imply that
$(\textbf{I}-\textbf{K}_1\textbf{H}_1){\textbf{x}_1}'+\textbf{K}_1\textbf{z}_1$ can give a good estimate for $\textbf{x}_1^*$.

Estimation - Probability Distribution

Here is the chart of the probability distribution of a 1D example on the (prediction, measurement, estimate) for the normal distribution case.


As we can see, both the measurement and the prediction are unbiased, but have different probability distributions.  By adjusting the weight, we can find a better unbiased estimation.

4/27/2017

Estimation - Kalman Filter

Some ideas about the Kalman filter.

Say we have a dynamic system that has the internal state $\textbf{x}$ and the control input $\textbf{u}$.  Although we don't know the internal state $\textbf{x}$, we can observe it and have the measurement $\textbf{z}$.

Then how can we estimate the internal state $\textbf{x}$?

At first, we may have an initial state estimate and its covariance $(\textbf{x}_0^*, \textbf{P}_0^*)$.  We use this to make a prediction based on the dynamics of the system and get $({\textbf{x}_1}', {\textbf{P}_1}')$.

At the time $t_1$, we also do a measurement and get $(\textbf{z}_1, \textbf{R}_1)$.

So we use the prediction and the measurement to have an estimation of the internal state at time $t_1$, which is $({\textbf{x}_1^*}, {\textbf{P}_1^*})$.

From the previous examples, we know that finding the weight that makes an unbiased estimation is the key to minimizing the covariance or variance of the estimation.

The Kalman gain $\textbf{K}$ in the Kalman filter invented by Rudolf E. Kalman can give us a good estimation of the internal state $\textbf{x}$ for a linear system.

We can view $\textbf{K}_1$ as a function of ${\textbf{P}_1}'$ and $\textbf{R}_1$.

Following the naming convention of wiki:Kalman filter,

$\textbf{K}_1={\textbf{P}_1}'\textbf{H}_1^T(\textbf{H}_1{\textbf{P}_1}'\textbf{H}_1^T+\textbf{R}_1)^{-1}$

Then $({\textbf{x}_1^*}, {\textbf{P}_1^*})$ is
$\textbf{x}_1^*=(\textbf{I}-\textbf{K}_1\textbf{H}_1){\textbf{x}_1}'+\textbf{K}_1\textbf{z}_1$
$\textbf{P}_1^*=(\textbf{I}-\textbf{K}_1\textbf{H}_1){\textbf{P}_1}'$

So for the estimation of the internal state $\textbf{x}$ at the time $t$:
1. use  $({\textbf{x}_{t-1}^*}, {\textbf{P}_{t-1}^*})$ to get a prediction $({\textbf{x}_t}', {\textbf{P}_t}')$.
2. take a measurement at the time $t$ and get $(\textbf{z}_t, \textbf{R}_t)$.
3. calculate $\textbf{K}_t$
4. use  $\textbf{K}_t$ to get $({\textbf{x}_t^*}, {\textbf{P}_t^*})$.
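The four steps above can be sketched for a scalar system where the state is a constant and $\textbf{H}=1$ (the numbers below, including the measurement variance and the vague initial estimate, are illustrative assumptions):

```python
import numpy as np

def kalman_step(x_est, p_est, z, r, q=0.0):
    """One predict/update cycle for a scalar constant state (H = 1)."""
    x_pred, p_pred = x_est, p_est + q        # 1. predict (x', P')
    k = p_pred / (p_pred + r)                # 3. Kalman gain K
    x_new = (1 - k) * x_pred + k * z         # 4. x* = (1 - K H) x' + K z
    p_new = (1 - k) * p_pred                 #    P* = (1 - K H) P'
    return x_new, p_new

rng = np.random.default_rng(1)
true_x, r = 10.0, 4.0
x_est, p_est = 0.0, 100.0                    # vague initial estimate
for _ in range(50):
    z = true_x + rng.normal(0.0, np.sqrt(r)) # 2. measure (z, R)
    x_est, p_est = kalman_step(x_est, p_est, z, r)
print(x_est, p_est)                          # near 10 with small variance
```

As more measurements arrive, $\textbf{P}^*$ shrinks, so the gain puts less and less weight on each new measurement.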

The following diagram shows a simple flow chart of the process.




4/26/2017

Estimation - Data Fusion

From the previous example, we know that if we want to merge two pieces of data, we also need their standard deviations or variances in order to make a better estimation.  Following convention, we will use variance for the derivations below.

Say the data fusion process is defined as: $d_f = \mathcal{F}(d_1,d_2)$, $d_i = (x_i,\sigma_i^2)$

A good estimate means an unbiased $x_f$ that at the same time minimizes $\sigma_f^2$.

Say $x_f = \alpha_1x_1+\alpha_2x_2$, and $\alpha_1+\alpha_2=1$

Then $\sigma_f^2 = \alpha_1^2\sigma_1^2+\alpha_2^2\sigma_2^2+2\alpha_1\alpha_2\mathcal{C}(d_1,d_2)$, where $\mathcal{C}(d_1,d_2)$ is the covariance of the data.

If the two data are uncorrelated, $\sigma_f^2 = \alpha_1^2\sigma_1^2+\alpha_2^2\sigma_2^2$.

In the uncorrelated case, let $\alpha_2=1-\alpha_1$, and

$ \frac{\partial}{\partial \alpha_1} \sigma_f^2 = 2\alpha_1\sigma_1^2+(2\alpha_1-2)\sigma_2^2=0$

then $\alpha_1 = \sigma_2^2/(\sigma_1^2+\sigma_2^2) $, $\alpha_2 = \sigma_1^2/(\sigma_1^2+\sigma_2^2) $
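These weights can be checked with a short sketch (the input values are made up for illustration):

```python
def fuse(x1, var1, x2, var2):
    """Variance-weighted fusion of two uncorrelated estimates."""
    a1 = var2 / (var1 + var2)                # weight on x1
    a2 = var1 / (var1 + var2)                # weight on x2
    x_f = a1 * x1 + a2 * x2
    var_f = a1**2 * var1 + a2**2 * var2      # equals var1*var2/(var1+var2)
    return x_f, var_f

x_f, var_f = fuse(10.0, 1.0, 12.0, 3.0)
print(x_f, var_f)                            # 10.5 0.75
```

Note that the fused variance $0.75$ is smaller than either input variance, and the more certain input ($\sigma_1^2=1$) receives the larger weight.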

4/25/2017

Estimation - Measurement

Let's say we want to measure the resistance of a resistor with an ohmmeter.  The measured value is $z$ with a standard deviation $\sigma$.  Assume the true resistance is $x$.

The measurement is like $z=x+v$ and $v$ is the measurement noise with mean $0$ and standard deviation $\sigma$.

If we only take one measurement, we get an estimate $z_1$ for $x$.  The standard deviation of the error is $\sigma$.

If we take two measurements, we get $z_1,z_2$.  We can use $(z_1+z_2)/2$ as the estimate for $x$.  Then what is the standard deviation of the error?

If two measurements are independent and uncorrelated, the standard deviation would be $\sqrt{1/2}\sigma$.

What if we use different weights instead, say $(z_1+2z_2)/3$?  What is the standard deviation of the error in this case?  It would be $\sqrt{5/9}\,\sigma$, which is greater than $\sqrt{1/2}\,\sigma$.

We can prove that the weights $(1/2,1/2)$ give the minimum standard deviation of the error.

Now say we have already made two measurements and have the estimate $(z_1+z_2)/2$.

We want to do another measurement and get $z_3$.  Then how are we going to merge the data?

We know the previous two measurements give us the data $((z_1+z_2)/2,\sqrt{1/2}\sigma)$, and the new information is $(z_3,\sigma)$.  If we want to merge these two and minimize the standard deviation of the error, the weights would be $2/3$ for the first piece and $1/3$ for the new data.

The result would be like $((z_1+z_2+z_3)/3,\sqrt{1/3}\sigma)$.

As we can see, for estimation, the standard deviation of the error of the previous estimate is quite useful in this case.  Of course, in this simple example, we could also keep track of the number of measurements and compute the new weights directly.
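Merging the running estimate with each new measurement, as described above, can be sketched as follows (a toy illustration with made-up readings):

```python
def merge(est, var, z, var_z):
    """Fuse the running estimate (est, var) with a new measurement
    (z, var_z) using inverse-variance weights."""
    w = var_z / (var + var_z)                # weight on the old estimate
    return w * est + (1 - w) * z, var * var_z / (var + var_z)

sigma2 = 1.0                                 # measurement variance
readings = [3.0, 5.0, 7.0]
est, var = readings[0], sigma2               # first measurement
for z in readings[1:]:
    est, var = merge(est, var, z, sigma2)
print(est, var)                              # 5.0 0.333..., the plain average
```

After three equal-variance measurements the result matches $((z_1+z_2+z_3)/3,\sigma^2/3)$, as derived above.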


4/06/2017

How to Prepare for the TOEFL and GRE (Listening and Reading)

Listening
   In the early stages, I suggest practicing with live streams on American shopping websites, such as QVC, because the English there is fairly standard and the hosts keep repeating the same topic.

   Once you have improved enough, you can try news channels such as CNN, financial sites such as Bloomberg, or educational programs such as Discovery.

   Of course, if in the end you can understand TV shows and movies, that is already quite good.


Reading
   Find a book on speed reading to learn how to increase your reading speed.

   The basic principle is eye control.  At the start, you can practice with a ruler: use it to guide your reading line by line, covering the text below the current line.

   While reading, move your eyes from left to right, taking in three to four English words at a time, and finish each line at a steady rhythm.  Avoid moving your eyes back and forth within a line.

   If possible, try not to subvocalize; in general, subvocalizing slows down your reading speed.