ISMT

Bayesian inference and time series analysis

Deck details

Cards 14
Language English
Category Mathematics
Level University
Created / Updated 02.10.2022 / 05.02.2024
Weblink
https://card2brain.ch/box/20221002_ismte136

[Statistics]

Given

\(E[X]=\sum_i x_i f(x_i)\)

\(E[X^2] = \text{?}\)

 

\(E[X^2]=Var[X]+E[X]^2\)

We can see that \(E[X^2] \neq E[X]^2 \text{ if } Var[X]\neq 0\)
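For example, for a fair six-sided die, \(E[X] = 3.5\) and \(E[X^2] = \frac{1+4+9+16+25+36}{6} = \frac{91}{6} \approx 15.17\), while \(E[X]^2 = 12.25\); the difference is exactly \(Var[X] = \frac{35}{12} \approx 2.92\).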

[Time series analysis]

If an AR(1) process is causal and given by

\(x_t=\phi x_{t-1}+w_t\text{, where }w_t\sim wn(0, \sigma_w^2)\)

\(Var[x_t]= \text{?}\)

\(\begin{align} Var[x_t] &= Var(\phi x_{t-1}+w_t) \\&= \phi^2 Var(x_{t-1}) + Var(w_t) &&\text{| } x_{t-1} \text{ and } w_t \text{ are uncorrelated, by causality} \\&= \phi^2 Var(x_{t-1})+\sigma_w^2 \\&= \phi^2 Var(x_t)+\sigma_w^2 &&\text{| as the series is stationary}\end{align}\)

Solving \(Var[x_t] = \phi^2 Var[x_t] + \sigma_w^2\) for \(Var[x_t]\) gives

\(Var[x_t]=\dfrac{\sigma_w^2}{1-\phi^2}\)
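A quick simulation check (a minimal Python/NumPy sketch; \(\phi = 0.7\) and \(\sigma_w = 1\) are arbitrary choices with \(|\phi| < 1\)):

import numpy as np

rng = np.random.default_rng(0)
phi, sigma_w = 0.7, 1.0            # arbitrary causal parameters, |phi| < 1
n = 200_000

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(0.0, sigma_w)

print(x[1000:].var())              # sample variance after burn-in, ~1.96
print(sigma_w**2 / (1 - phi**2))   # theoretical value: 1.9607...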

[Statistics]

\(cov(aX + bY, cW + dV) = \text{?}\)

\(cov(aX + bY, cW + dV) = ac \cdot cov(X, W) + ad \cdot cov(X, V) + bc\cdot cov(Y, W) + bd\cdot cov(Y, V)\)
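A numerical sanity check of this bilinearity (a minimal Python sketch; the variables, their dependence structure, and the coefficients are all arbitrary):

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
X, Y = rng.normal(size=(2, n))
W = X + rng.normal(size=n)   # make W correlated with X
V = Y + rng.normal(size=n)   # make V correlated with Y
a, b, c, d = 2.0, -1.0, 0.5, 3.0

def cov(u, v):
    return np.mean(u * v) - u.mean() * v.mean()

lhs = cov(a*X + b*Y, c*W + d*V)
rhs = a*c*cov(X, W) + a*d*cov(X, V) + b*c*cov(Y, W) + b*d*cov(Y, V)
print(lhs, rhs)              # agree up to sampling noise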

[Statistics]

\(cov(X, Y) = \text{?}\)

\(cov(X, Y)=E[XY]-E[X]E[Y]\)

[Time series analysis]

If an AR(1) process is causal and given by

\(x_t=\phi x_{t-1}+w_t\text{, where }w_t\sim wn(0, \sigma_w^2)\)

(a) Stationary solution?

(b) \(E(x_t)= \text{?}\)

(c) \(\gamma(h)= \text{?}\)

(d) \(\rho(h)= \text{?}\)
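For reference, the standard results for a causal AR(1) with \(|\phi| < 1\):

(a) \(x_t = \sum_{j=0}^{\infty} \phi^j w_{t-j}\) (the causal linear-process representation)

(b) \(E(x_t) = 0\)

(c) \(\gamma(h) = \dfrac{\sigma_w^2\,\phi^{|h|}}{1-\phi^2}\)

(d) \(\rho(h) = \phi^{|h|}\)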

[Time series analysis]

If an MA(1) process is given by

\(x_t=\theta w_{t-1}+w_t\text{, where }w_t\sim wn(0, \sigma_w^2)\)

(a) \(E(x_t) = \text{?}\)

(b) \(\gamma(h)= \text{?}\)

(c) \(\rho(h)= \text{?}\)
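For reference, the standard MA(1) results:

(a) \(E(x_t) = 0\)

(b) \(\gamma(h) = \begin{cases} (1+\theta^2)\sigma_w^2 & h = 0 \\ \theta\sigma_w^2 & |h| = 1 \\ 0 & |h| > 1 \end{cases}\)

(c) \(\rho(h) = \begin{cases} 1 & h = 0 \\ \dfrac{\theta}{1+\theta^2} & |h| = 1 \\ 0 & |h| > 1 \end{cases}\)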

[Time series analysis]

In general, the autocorrelation of any (stationary) time series can be calculated as...?

\(\rho(h)=\frac{\gamma(h)}{\gamma(0)}\)

Note:

  •  \(\gamma(0)\) is the variance of the series
  •  This simplification only works because of stationarity: the variance is constant over time, so the general Pearson correlation coefficient \(\frac{cov(X, Y)}{\sigma_X\sigma_Y}\) reduces to \(\frac{\gamma(h)}{\gamma(0)}\)
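The corresponding sample quantity can be computed directly (a minimal NumPy sketch; statsmodels also provides this as statsmodels.tsa.stattools.acf):

import numpy as np

def sample_acf(x, max_lag):
    # rho_hat(h) = gamma_hat(h) / gamma_hat(0), with the usual 1/n estimator
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    n = len(x)
    gamma = [np.dot(xc[h:], xc[:n - h]) / n for h in range(max_lag + 1)]
    return np.array(gamma) / gamma[0]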

[Statistics]

\(Var(aX) = \text{?}\)

\(Var(aX) = a^2Var(X)\)

 

Easy proof:

\(Var(aX)=Cov(aX,aX)=E[aXaX]-E[aX]E[aX]\)

\(=a^2E[X^2]-a^2E[X]E[X]=a^2\underbrace{\left[E[X^2]-E[X]E[X]\right]}_{Var(X)}=a^2Var(X)\)

[Probability theory]

Mean of a r.v.?

\(E(X) = \sum_{i=1}^{n} x_i p(x_i)\) (discrete case)

\(E(X) = \int_{-\infty}^{\infty} x f(x) \, dx\) (continuous case)

[Probability theory]

Variance of a r.v.?

\(\text{Var}(X)=E([X-E(X)]^2)\)

\(\text{Var}(X) = \sum_{i=1}^{n} (x_i - \mu)^2 p(x_i)\) (discrete case)

\(\text{Var}(X) = \int_{-\infty}^{\infty} (x - \mu)^2 f(x) \, dx\) (continuous case)

where \(\mu = E(X)\).

[Probability calculation]

Calculate E[X] for a continuous uniform probability distribution.

A continuous uniform distribution over the interval [a, b] has a probability density function (pdf) given by:

\(f(x) = \frac{1}{b-a} \quad \text{for} \, a \leq x \leq b\)

and \(f(x) = 0\) outside this interval. 

Calculation of the expected value E(X)

The expected value of X for a continuous uniform distribution is calculated using the integral:

\(E(X) = \int_{-\infty}^{\infty} x f(x) \, dx\)

Given the pdf \(f(x) = \frac{1}{b-a} \quad \text{for } a \leq x \leq b\), the integral simplifies to:

\(E(X) = \int_{a}^{b} x \frac{1}{b-a} \, dx\)

because f(x) = 0 outside [a, b].

Now, let's solve this integral:

\(\begin{align} E(X) &= \frac{1}{b-a} \int_{a}^{b} x \, dx \\&= \frac{1}{b-a} \left[ \frac{x^2}{2} \right]_{a}^{b} \\&= \frac{1}{b-a} \left( \frac{b^2}{2} - \frac{a^2}{2} \right) \\&= \frac{b^2 - a^2}{2(b-a)} \\&= \frac{(b-a)(b+a)}{2(b-a)} \\&= \frac{b+a}{2}\end{align}\)

So, the expected value E(X) of a continuous uniform distribution over the interval [a, b] is the midpoint of the interval, which is:

\(E(X) = \frac{a + b}{2}\)

This result shows that for a continuous uniform distribution, the expected value is simply the average of the lower and upper bounds of the distribution's interval.
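A one-line Monte Carlo check (a sketch; a = 2 and b = 5 are arbitrary choices):

import numpy as np

rng = np.random.default_rng(3)
a, b = 2.0, 5.0
print(rng.uniform(a, b, 1_000_000).mean())   # ~3.5 = (a + b) / 2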

[Probability theory]

Let X and Y be two random variables. The conditional expectation of X given Y = y is defined as?

\(E[X|Y = y] = \sum_{x} x \, p_{X|Y}(x|y)\) (discrete case)

\(E[X|Y = y] = \int_{-\infty}^{+\infty} x \, f_{X|Y}(x|y) \, dx\) (continuous case)
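As a small worked example (with a made-up joint pmf): let \(X, Y \in \{0, 1\}\) with \(p(0,1) = 0.1\) and \(p(1,1) = 0.4\) (the remaining mass on \(y = 0\)). Then \(p_Y(1) = 0.5\), so \(p_{X|Y}(1|1) = 0.4/0.5 = 0.8\) and \(E[X|Y = 1] = 0 \cdot 0.2 + 1 \cdot 0.8 = 0.8\).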

[Probability theory]

Let X and Y be two random variables. Assuming expectations exist, what are helpful formulas for the expected value of X and the variance of X?

\(E[X]=E[E[X|Y]]\)

\(Var(X)=Var(E[X|Y])+E[Var(X|Y)]\)
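Both identities can be checked by simulation (a minimal Python sketch; the two-group mixture below is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
y = rng.integers(0, 2, n)            # Y picks one of two groups
mu = np.where(y == 0, -1.0, 2.0)     # E[X | Y]
sd = np.where(y == 0, 0.5, 1.5)      # sd(X | Y)
x = rng.normal(mu, sd)

print(x.mean(), mu.mean())                  # E[X] = E[E[X|Y]], both ~0.5
print(x.var(), mu.var() + (sd**2).mean())   # Var(X) = Var(E[X|Y]) + E[Var(X|Y)], both ~3.5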

[Statistics]

Let X and Y be two random variables. Then h(X) that minimizes

\(E\left[(Y-h(X))^2 \right]\)

is given by?

\(h(X)=E[Y|X]\)

provided the expectations exist. In other words, the conditional expectation is the best predictor of Y in the mean-squared-error sense, among all functions of X (not just linear ones); the best linear predictor coincides with it only in special cases, e.g. when X and Y are jointly Gaussian.