QM HS 17/18

Probability Function

The probability function for a discrete random variable X is a function p(x) that assigns a probability to each value of the random variable. Each of these probabilities must be greater than or equal to zero, and the probabilities of all individual outcomes must sum to 1.
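In symbols, the two conditions can be written as

p(x_i) \ge 0 \ \text{for all } i, \qquad \sum_i p(x_i) = 1.

For example, a fair six-sided die has p(x) = 1/6 for each of the six outcomes, and these six probabilities sum to 1.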

Probability Density Function

The probability density function corresponds to the probability function, but for the continuous case. It is defined as the derivative of the cumulative distribution function (CDF). In the continuous case the probability of any single realization is always zero, so probabilities are expressed by integrating the density over an interval, i.e. as the area under the curve.
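In symbols, with CDF F(x) and density f(x), this reads

f(x) = \frac{d}{dx} F(x), \qquad P(a \le X \le b) = \int_a^b f(x)\, dx.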

Cumulative Distribution Function

The cumulative distribution function (CDF) F(x) = P(X ≤ x) indicates the probability that X takes at most the value x. It is called cumulative because it accumulates the individual probabilities. Consequently, the y-value at the highest x-value must always be 1.
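Written out for both cases, the CDF accumulates the probability function (discrete) or the density (continuous):

F(x) = P(X \le x) = \sum_{x_i \le x} p(x_i) \qquad \text{or} \qquad F(x) = \int_{-\infty}^{x} f(t)\, dt,

with F(x) \to 1 as x \to \infty.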

Joint Distribution Function

A joint distribution function F(xi, yk) = P(X ≤ xi, Y ≤ yk) indicates the probability that X takes at most the value xi and Y at most the value yk. Unlike the previous functions, which are based on a single random variable, this function (and the following two) is used when two random variables are considered.
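In symbols, for the discrete case,

F(x_i, y_k) = P(X \le x_i, Y \le y_k) = \sum_{x \le x_i} \sum_{y \le y_k} p(x, y).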

Conditional Distribution Function

A conditional distribution function f(xi | Y = yk) describes the distribution of a variable X given the outcome of another variable Y. It is equal to the joint probability of the two variables divided by the marginal probability of the conditioning variable.
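As a formula,

f(x_i \mid Y = y_k) = \frac{f(x_i, y_k)}{f_Y(y_k)}, \qquad f_Y(y_k) > 0,

where f(x_i, y_k) is the joint probability and f_Y(y_k) is the marginal probability of the given value of Y.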

Marginal Distribution Function

The marginal distribution function fX(xi) = P(X = xi) indicates the probability of X = xi regardless of the value of Y.
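In symbols, it is obtained by summing the joint probabilities over all values of Y:

f_X(x_i) = P(X = x_i) = \sum_k f(x_i, y_k).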

Explain the Central Limit Theorem and its Importance in Inductive Statistics

If we take n independent random variables, each with mean µ and variance σ², then for large n the sum of these random variables is approximately normally distributed with mean nµ and variance nσ². Thus, even though we might not know the shape of the distribution of the entire population, the central limit theorem says that we can treat the sampling distribution as if it were normal. For this conclusion to hold, the sample size must be large enough; a common rule of thumb is n ≥ 30.

This is important in inductive statistics because many procedures, such as hypothesis tests and confidence intervals, make assumptions about the population the data were obtained from. One assumption typically made from the outset is that the populations we work with are normally distributed; the central limit theorem justifies relying on this assumption for sums and sample means even when the underlying population is not normal.
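A minimal simulation sketch in Python (the exponential population and the sample sizes are illustrative choices, not part of the card) shows how the distribution of sample means approaches normality as n grows, even though the population itself is strongly skewed:

import numpy as np

rng = np.random.default_rng(42)
n_repeats = 10_000   # number of simulated samples per sample size
pop_scale = 1.0      # exponential(1) population: skewed, mean 1, standard deviation 1

for n in (2, 5, 30, 200):
    # draw n_repeats samples of size n and compute each sample's mean
    means = rng.exponential(scale=pop_scale, size=(n_repeats, n)).mean(axis=1)
    # by the central limit theorem the means are approximately N(mu, sigma^2 / n)
    print(f"n={n:3d}  mean of means={means.mean():.3f}  "
          f"std of means={means.std():.3f}  sigma/sqrt(n)={pop_scale / np.sqrt(n):.3f}")

The empirical standard deviation of the sample means tracks σ/√n more and more closely, and a histogram of the means looks increasingly bell-shaped as n increases.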

Explain the role of test statistics

The test statistic is used in hypothesis testing. It is calculated from the sample, and its value is compared with the distribution it would follow under the null hypothesis in order to decide whether the null hypothesis about the population should be rejected.
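As a sketch with made-up numbers (assuming a one-sample t-test of H0: µ = 100; neither the data nor the hypothesis comes from the card), the test statistic standardizes the distance between the sample mean and the hypothesized population mean and is then compared with its distribution under the null hypothesis:

import numpy as np
from scipy import stats

sample = np.array([102.3, 98.7, 105.1, 99.4, 103.8, 101.2, 97.9, 104.5])  # hypothetical observations
mu_0 = 100.0  # population mean claimed by the null hypothesis

# test statistic: how many estimated standard errors the sample mean lies from mu_0
t_stat = (sample.mean() - mu_0) / (sample.std(ddof=1) / np.sqrt(len(sample)))

# under H0 the statistic follows a t distribution with n - 1 degrees of freedom
p_value = 2 * stats.t.sf(abs(t_stat), df=len(sample) - 1)
print(f"t = {t_stat:.2f}, two-sided p = {p_value:.3f}")

If the resulting p-value is below the chosen significance level (e.g. 5 %), the null hypothesis is rejected.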