# Flashcards

11 cards · 0 learners · English · University · 24.01.2019 / 25.01.2019 · Licensing: not specified
0 exact answers · 11 text answers · 0 multiple-choice answers

What is least-squares adjustment, what are redundant observations, and how can they be used in adjustments?

1. Least-squares adjustment estimates the unknown parameters from redundant observations (more observations available than necessary to determine the unknowns) using mathematically well-defined rules, which allows objective quality control.
2. Random errors are dealt with directly in least-squares adjustment; it can help discover and deal with unknown systematic errors by adding additional unknowns; and it can help discover and remove blunders.
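As a minimal sketch of redundancy (all numbers are made up for illustration): a single distance observed three times gives n = 3 observations for u = 1 unknown, hence r = n − u = 2 redundant observations.

```python
import numpy as np

# Illustrative numbers only: one unknown distance, observed three times,
# so n = 3, u = 1 and the redundancy is r = n - u = 2.
L = np.array([100.02, 99.98, 100.03])   # observations [m]
A = np.ones((3, 1))                     # design matrix of L = A @ x
P = np.eye(3)                           # equal weights

# Least-squares estimate x_hat = (A^T P A)^{-1} A^T P L
x_hat = np.linalg.solve(A.T @ P @ A, A.T @ P @ L)
v = A @ x_hat - L                       # residuals reveal the random errors
print(x_hat[0])                         # with equal weights, simply the mean (~100.01)
```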

What kinds of errors can occur in surveying, and what are their features?

1. Random errors: small; positive and negative errors of the same magnitude are equally probable; inherent and cannot be removed completely; dealt with by least-squares adjustment.
2. Systematic errors: dangerous because they accumulate; avoided by adequate instrument calibration, compensation, etc.; if the errors are known, correct the observations before the adjustment, or model them in the adjustment by adding additional unknowns.
3. Blunders: large errors due to carelessness; avoided by careful observation; can be discovered and removed by the adjustment.

Difference between accuracy and precision?

• Accuracy: closeness of the observations to the true value; affected by systematic errors, random errors, and blunders
• Precision: closeness of repeated observations to their sample mean; affected only by random errors
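A small simulation can make the distinction concrete (the true value, the bias, and the noise level below are all assumed for illustration): a constant systematic error shifts every observation, so precision stays good while accuracy degrades.

```python
import numpy as np

# Illustrative simulation: the true value is 100.00 m, but an assumed
# constant systematic error of +0.05 m shifts every observation.
rng = np.random.default_rng(0)
obs = 100.00 + 0.05 + rng.normal(0.0, 0.01, size=1000)

precision = obs.std(ddof=1)    # spread about the sample mean: still ~0.01
bias = obs.mean() - 100.00     # offset from the truth: ~0.05, accuracy suffers
print(precision, bias)
```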

What components does a least-squares adjustment include?

Two equally important components, which together yield quality-controlled observations and parameters with their accuracies:

• stochastic model: describes the precision; the observations are uncorrelated random variables (diagonal variance-covariance matrix, off-diagonal covariances equal to 0) carrying only random errors, assumed normally distributed
  • cofactor matrix: the variance-covariance matrix scaled by the a-priori variance of unit weight
  • weight matrix: the inverse of the cofactor matrix
• mathematical model: expresses mathematically the relations between the observations, and between the observations and other quantities of interest (the parameters, or unknowns, of the adjustment)
  • most models are nonlinear; nonlinear functions must first be linearized
  • 3 models:
    • mixed adjustment: observations and parameters are related by an implicit nonlinear function
    • observation equation: observations are explicitly related to the parameters
      • big advantage: each observation generates one equation
    • condition equation: the parameters are eliminated entirely
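The stochastic-model relations above can be sketched numerically (the standard deviations and the a-priori variance of unit weight are assumed values):

```python
import numpy as np

# Hypothetical stochastic model: three uncorrelated observations with
# standard deviations 2, 3 and 5 (variance-covariance matrix is diagonal).
# The a-priori variance of unit weight sigma0^2 = 4 is an assumed choice.
sigma0_sq = 4.0
C = np.diag([4.0, 9.0, 25.0])   # variance-covariance matrix
Q = C / sigma0_sq               # cofactor matrix (C scaled by sigma0^2)
P = np.linalg.inv(Q)            # weight matrix = inverse of the cofactor matrix
print(np.diag(P))               # weights sigma0^2 / sigma_i^2
```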

How does the law of variance-covariance propagation work for a linear function?


Since the total probability is 1, a smaller standard deviation makes the density function narrower and higher.
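The law itself: for a linear function y = A x + c with input variance-covariance matrix C_xx, the output matrix is C_yy = A C_xx A^T. A small sketch (the matrix values are made up for illustration):

```python
import numpy as np

# Linear function y = A x + c: the propagation law gives C_yy = A C_xx A^T.
# Example values are illustrative: y1 = x1 + x2, y2 = x1 - x2.
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
C_xx = np.diag([0.04, 0.09])    # uncorrelated inputs with sigmas 0.2 and 0.3
C_yy = A @ C_xx @ A.T
print(C_yy)                     # off-diagonal covariances appear in the outputs
```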

How is propagation of uncertainty used in trigonometric heighting?
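The card records no answer here; as a hedged sketch, the propagation law could be applied to the model h = d·tan(α) (horizontal distance d, elevation angle α); the formula choice and all numbers below are assumptions, not from the card:

```python
import math

# Height difference h = d * tan(alpha); the function is nonlinear, so it is
# linearized first, then the propagation law for uncorrelated inputs gives
#   sigma_h^2 = (tan(alpha))^2 * sigma_d^2 + (d / cos(alpha)^2)^2 * sigma_alpha^2
d = 100.0                            # horizontal distance [m] (assumed)
alpha = math.radians(30.0)           # elevation angle (assumed)
sigma_d = 0.01                       # [m]
sigma_alpha = math.radians(0.001)    # [rad], about 3.6 arc-seconds

dh_dd = math.tan(alpha)              # partial derivative wrt d
dh_dalpha = d / math.cos(alpha)**2   # partial derivative wrt alpha
sigma_h = math.hypot(dh_dd * sigma_d, dh_dalpha * sigma_alpha)
print(sigma_h)                       # standard deviation of the height difference
```

Note how the angle term dominates at long distances, since its partial derivative grows with d.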


The process of the mixed adjustment model

1. Linearization: the model is linearized around a chosen point of expansion.
2. Minimization: based on minimizing VᵀPV, where V is the vector of observation residuals and P is the weight matrix, the inverse of the cofactor matrix Q. The minimum is found by introducing Lagrange multipliers and setting the partial derivatives to zero. Order of computation: estimated parameters X -> Lagrange multipliers K -> residuals V -> adjusted parameters and adjusted observations.
3. Cofactor matrices: by the law of variance-covariance propagation, Qw -> Qx -> Qv -> QLa.
4. A-posteriori variance of unit weight: σ₀² = VᵀPV/(r − u), where r − u is the degree of freedom, equal to the number of redundant observations. SigmaX -> SigmaV -> SigmaLa.
5. Iterations: let the adjustment converge properly by driving both Vi and Xi to zero; the iteration has converged when |(VᵀPV)ᵢ − (VᵀPV)ᵢ₋₁| < a small positive number.
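The steps above can be sketched numerically on a tiny mixed (Gauss-Helmert) problem: fitting a line y = a·x + b where both coordinates are observations. The data set and variable names are illustrative assumptions:

```python
import numpy as np

# Illustrative mixed-model adjustment: F_i(L_a, X_a) = y_i - a*x_i - b = 0,
# 8 observations (4 points), 2 unknowns (a, b), equal weights.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.1, 0.9, 2.1, 2.9])
L = np.concatenate([xs, ys])
Q = np.eye(8)                        # cofactor matrix, P = Q^{-1}
X = np.array([1.0, 0.0])             # point of expansion (a0, b0)
v = np.zeros(8)

for _ in range(20):
    a, b = X
    xa, ya = L[:4] + v[:4], L[4:] + v[4:]
    # 1. Linearization: A = dF/dX, B = dF/dL at the current point
    A = np.column_stack([-xa, -np.ones(4)])
    B = np.hstack([-a * np.eye(4), np.eye(4)])
    w = (ya - a * xa - b) - B @ v    # misclosure of A dX + B v + w = 0
    # 2. Minimization of v^T P v via Lagrange multipliers K
    M = B @ Q @ B.T                  # cofactor of the misclosures (Qw)
    dX = -np.linalg.solve(A.T @ np.linalg.solve(M, A),
                          A.T @ np.linalg.solve(M, w))
    K = -np.linalg.solve(M, A @ dX + w)
    v = Q @ B.T @ K                  # residuals
    X = X + dX
    # 5. Iterate until the parameter corrections vanish
    if np.linalg.norm(dX) < 1e-12:
        break

a, b = X
P = np.linalg.inv(Q)
# 4. A-posteriori variance of unit weight, r - u = 4 - 2 = 2
sigma0_sq = (v @ P @ v) / 2
print(a, b, sigma0_sq)
```

At convergence the adjusted observations satisfy the implicit condition F(L_a, X_a) = 0 to machine precision.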

The important equations of the three adjustment models.

• Mixed adjustment model: the observations and the parameters are implicitly related
• Observation equation model: the observations are related explicitly to the parameters
• Condition equation model: the observations are related by a nonlinear function without the use of additional parameters
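In common textbook notation (the exact symbols F, A, B, v, w, δ, l are an assumed convention), the three models and their linearized forms can be written as:

```latex
% Mixed adjustment model (implicit relation):
F(X_a, L_a) = 0 \quad\rightarrow\quad A\delta + Bv + w = 0
% Observation equation model (explicit relation):
L_a = F(X_a) \quad\rightarrow\quad v = A\delta - l
% Condition equation model (parameters eliminated):
F(L_a) = 0 \quad\rightarrow\quad Bv + w = 0
```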