Flashcards

Cards: 88
Learners: 1
Language: English
Level: University
Created / Updated: 21.07.2018 / 27.08.2018
Licensing: Not specified
Answer types: 0 exact answers, 88 text answers, 0 multiple choice answers

Estimators: What are the formulas for the sample mean, the sample variance (with unknown and with known mean), and the sample standard deviation?

sample mean: \(\bar{x}=\widehat{\mu}=\frac{1}{N}\sum\limits^{N}_{i=1}x_i\)

sample variance: \(\widehat{Var}(x)=\frac{1}{N-1}\sum\limits^{N}_{i=1}(x_i-\bar{x})^2 \)

sample variance with known \(\mu\): \(\widehat{Var}(x)=\frac{1}{N}\sum\limits^{N}_{i=1}(x_i-\mu)^2 \)

standard deviation: \(\widehat{s}=\sqrt{\widehat{Var}(x)}\)
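These formulas translate directly into code. A minimal NumPy sketch (the data vector `x` and the known mean `mu` below are arbitrary examples):

```python
import numpy as np

# arbitrary example data
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
N = len(x)

# sample mean
mean = x.sum() / N

# sample variance, mean unknown (1/(N-1) normalization)
var = ((x - mean) ** 2).sum() / (N - 1)

# sample variance with known mu (1/N normalization); assume mu = 5.0 is known
mu = 5.0
var_known = ((x - mu) ** 2).sum() / N

# sample standard deviation
s = np.sqrt(var)
```

The `1/(N-1)` version matches NumPy's `np.var(x, ddof=1)`; the known-mean version uses `1/N` because no degree of freedom is spent estimating the mean.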


Given independent random variables X and Y with expected values \(\mu_X\) and \(\mu_Y\) and variances \(\sigma^2_X\) and \(\sigma^2_Y\).

How do you calculate the expected value and variance of Z if:

  1. \(Z=\alpha+X\)
  2. \(Z=\alpha X\)
  3. \(Z=X+Y\)
  4. \(Z=X*Y\)

  1. \(Z=\alpha+X\)
     • \(\mu_Z=\alpha+\mu_X,\;\;\;\sigma_Z^2=\sigma_X^2\)
  2. \(Z=\alpha X\)
     • \(\mu_Z=\alpha\mu_X,\;\;\;\sigma_Z^2=\alpha^2\sigma^2_X\)
  3. \(Z=X+Y\)
     • \(\mu_Z=\mu_X+\mu_Y,\;\;\;\sigma_Z^2=\sigma^2_X+\sigma^2_Y\)
  4. \(Z=X*Y\)
     • \(\mu_Z=\mu_X\mu_Y,\;\;\;\sigma_Z^2=\sigma^2_X\sigma^2_Y+\mu^2_X\sigma^2_Y+\mu^2_Y\sigma^2_X\) (for independent X and Y)

 

Note that the density function and the cumulative distribution function of composed random variables (such as X+Y or XY) are in general not easy to determine, even though their mean and variance are.
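A Monte Carlo sanity check of the rules above, sketched with Gaussian X and Y (the distribution choice and all parameter values are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
alpha = 3.0
mu_x, mu_y = 2.0, -1.0
sd_x, sd_y = 1.5, 0.5

# independent X and Y (Gaussian is just an example choice)
X = rng.normal(mu_x, sd_x, n)
Y = rng.normal(mu_y, sd_y, n)

Z1 = alpha + X  # mu_Z = alpha + mu_X,  var_Z = sd_X^2
Z2 = alpha * X  # mu_Z = alpha * mu_X,  var_Z = alpha^2 * sd_X^2
Z3 = X + Y      # mu_Z = mu_X + mu_Y,   var_Z = sd_X^2 + sd_Y^2
Z4 = X * Y      # mu_Z = mu_X * mu_Y    (uses independence of X and Y)
```

With a million samples the empirical means and variances land close to the predicted values, even though the distribution of, say, `Z4` has no simple closed form.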


What is the estimator of the probability density function?

The histogram, where each bar height is the relative frequency divided by the bin width.
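A sketch of this estimate with NumPy, computing the relative frequency per bin width by hand (the sample is an arbitrary Gaussian example):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 10_000)  # arbitrary example sample

counts, edges = np.histogram(x, bins=30)
width = edges[1] - edges[0]

# relative occurrence divided by the bin width
density = counts / (len(x) * width)

# as a density estimate it integrates (sums) to 1 over the bins
area = (density * width).sum()
```

For equal-width bins this is exactly what NumPy's `np.histogram(..., density=True)` returns.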


Choice of the number of bins K for a histogram

Non-trivial; common rules of thumb:

  1. square-root choice: \(k=\lceil\sqrt{n}\,\rceil\)
  2. Sturges' formula (assumes Gaussian data): \(k=\lceil\log_2 n\rceil+1\)
  3. Rice rule: \(k=\lceil2n^{1/3}\rceil\)
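The three rules can be sketched as small helper functions (the function names are made up for illustration):

```python
import math

def bins_sqrt(n):
    """Square-root choice: k = ceil(sqrt(n))."""
    return math.ceil(math.sqrt(n))

def bins_sturges(n):
    """Sturges' formula (assumes roughly Gaussian data): k = ceil(log2(n)) + 1."""
    return math.ceil(math.log2(n)) + 1

def bins_rice(n):
    """Rice rule: k = ceil(2 * n^(1/3))."""
    return math.ceil(2 * n ** (1 / 3))
```

For n = 100 samples these give 10, 8, and 10 bins respectively, so the rules often roughly agree.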

When is an estimator consistent?

The estimator \(\widehat{\Theta}\), as a function of the random variable X, is again a random variable. Therefore every estimator has an expected value and variance. 

 

An estimator is called consistent if:

\(P(|\widehat{\Theta}-\Theta|>\epsilon)\rightarrow0\;\;\text{for}\;\;N\rightarrow\infty\)

for all \(\epsilon >0\)

 

Example: The estimator for the expected value \(\widehat{\Theta}=\widehat{\mu}\) (the sample mean) is consistent (law of large numbers).

\(\widehat{\mu}=\frac{1}{N}\sum\limits^N_{i=1}x_i\)
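Consistency can be illustrated empirically: the probability that the sample mean misses \(\mu\) by more than \(\epsilon\) shrinks as N grows. A sketch (Gaussian data and all parameter values are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, eps, trials = 0.0, 0.1, 2000  # arbitrary example values

def miss_prob(N):
    # estimate P(|mu_hat - mu| > eps) by repeating the experiment many times
    means = rng.normal(mu, 1.0, (trials, N)).mean(axis=1)
    return np.mean(np.abs(means - mu) > eps)

p_small = miss_prob(10)    # few samples: the sample mean often misses by > eps
p_large = miss_prob(1000)  # many samples: it almost never does
```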


What is the Mean Squared Error (MSE) and Variance of an estimator?

\(MSE(\widehat{\Theta})=E[(\widehat{\Theta}-\Theta)^2]\)

The MSE is also called the risk.

\(Var(\widehat{\Theta})=E[(\widehat{\Theta}-E(\widehat{\Theta}))^2]\)


What is the bias of an estimator?

Bias: 

\(B(\widehat{\Theta})=E(\widehat{\Theta})-\Theta\)

An estimator is called unbiased if and only if 

\(B(\widehat{\Theta})=0\)
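A sketch illustrating bias, assuming Gaussian data: the variance estimator with 1/N normalization (and estimated mean) has expected value \((N-1)/N\cdot\sigma^2\), i.e. bias \(-\sigma^2/N\), while the 1/(N-1) version is unbiased. All parameter values below are arbitrary examples:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2, N, trials = 4.0, 5, 200_000  # arbitrary example values

samples = rng.normal(0.0, np.sqrt(sigma2), (trials, N))

var_1_over_n = samples.var(axis=1, ddof=0)  # 1/N normalization, mean estimated
var_unbiased = samples.var(axis=1, ddof=1)  # 1/(N-1) normalization

bias_1_over_n = var_1_over_n.mean() - sigma2  # theory: -sigma2/N
bias_unbiased = var_unbiased.mean() - sigma2  # theory: 0
```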


What is the relationship between Mean Square Error, Bias and Variance of an estimator?

\(MSE(\widehat{\Theta})=Var(\widehat{\Theta})+(B(\widehat{\Theta}))^2\)

so for an unbiased estimator it holds that:

\(MSE(\widehat{\Theta})=Var(\widehat{\Theta})\)
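The decomposition can be checked numerically; for the empirical moments of a set of estimates it even holds as an algebraic identity. A sketch using the biased 1/N variance estimator on Gaussian data (all parameter values are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2, N, trials = 4.0, 5, 500_000  # arbitrary example values

samples = rng.normal(0.0, np.sqrt(sigma2), (trials, N))
theta_hat = samples.var(axis=1, ddof=0)  # a biased estimator of sigma2

mse = np.mean((theta_hat - sigma2) ** 2)
var = theta_hat.var()
bias = theta_hat.mean() - sigma2

# mse equals var + bias**2 up to floating-point rounding
```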