Geoprocessing and parameter estimation
One additional course for Geomatic MSc ETHz
Set of flashcards Details

Flashcards | 11 |
---|---|
Language | English |
Category | Geography |
Level | University |
Created / Updated | 24.01.2019 / 25.01.2019 |
Licencing | Not defined |
Weblink | https://card2brain.ch/box/20190124_geoprocessing_and_parameter_estimation |
What is least-squares adjustment (what are redundant observations), and how can it be used in adjustments?
- carries out objective quality control to estimate the unknown parameters from redundant observations, using mathematically well-defined rules (more observations are available than are necessary to determine the unknowns)
- random errors are dealt with in least-squares adjustment; it can help discover and handle unknown systematic errors by adding additional unknowns; it can help discover and remove blunders
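A minimal sketch of least-squares with redundant observations: one unknown distance observed four times, so r = 4 − 1 = 3 observations are redundant. The numbers are illustrative, not from the course.

```python
import numpy as np

l = np.array([100.02, 99.98, 100.01, 99.99])  # four observations of one distance
A = np.ones((4, 1))                           # design matrix: l + v = A x
P = np.eye(4)                                 # equal weights

# Normal equations: x_hat = (A^T P A)^{-1} A^T P l
x_hat = np.linalg.solve(A.T @ P @ A, A.T @ P @ l)
v = A @ x_hat - l                             # residuals of the observations
print(x_hat[0])                               # the weighted mean, 100.0
```

With equal weights the estimate reduces to the arithmetic mean; unequal weights in P would pull the estimate toward the more precise observations.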
What kinds of errors can occur in surveying, and what are their features?
- random errors: small; positive and negative errors of the same magnitude are equally probable; inherent and cannot be removed completely; dealt with by least-squares adjustment
- systematic errors: dangerous because they accumulate; avoided by adequate instrument calibration, compensation, etc.; if the errors are known, correct them before the adjustment or model them in the adjustment by adding additional unknowns
- blunders: large errors due to carelessness; removed by careful observation, or discovered and removed by the adjustment
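A hypothetical sketch of how an adjustment can flag a blunder: after fitting, a blunder stands out as one residual much larger than the rest, while random errors stay small. The observation values are made up for illustration.

```python
import numpy as np

l = np.array([10.01, 9.99, 10.00, 12.00])    # last observation is a blunder
A = np.ones((4, 1))                          # one unknown, observed four times
x_hat = np.linalg.lstsq(A, l, rcond=None)[0]
v = A @ x_hat - l                            # residuals

suspect = int(np.argmax(np.abs(v)))          # largest residual flags the blunder
print(suspect)                               # index 3
```

In practice the flagged observation would be checked, removed or re-measured, and the adjustment repeated.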
What components does a least-squares adjustment include?
two equally important components, yielding quality-controlled observations, and parameters with their accuracies:
- stochastic model: describes the precision; uncorrelated (diagonal variance-covariance matrix, with the off-diagonal covariances equal to 0) random variables (affected only by random errors) under the normal distribution (the observations belong to N)
- cofactor matrix: scaled variance-covariance matrix (scale factor: a-priori variance of unit weight)
- weight matrix: inverse of the cofactor matrix, P = Q⁻¹
- mathematical model: expresses mathematically the relations between observations, and between observations and other quantities of interest (the parameters or unknowns of the adjustment)
- mostly nonlinear; nonlinear functions must first be linearized
- 3 models:
- mixed adjustment: observations and parameters are related by an implicit nonlinear function
- observation equations: observations are explicitly related to the parameters
- big pro: each observation generates one equation
- condition equations: total elimination of the parameters
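A sketch of the observation-equation model described above: each observation contributes exactly one row of the design matrix A. Here the two parameters of a line l = a + b·t are estimated from five observations (values are illustrative and chosen to fit exactly).

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
l = np.array([1.0, 3.0, 5.0, 7.0, 9.0])    # consistent with l = 1 + 2 t
A = np.column_stack([np.ones_like(t), t])  # one equation per observation
P = np.eye(len(l))                         # weight matrix (equal weights)

# x_hat = (A^T P A)^{-1} A^T P l
x_hat = np.linalg.solve(A.T @ P @ A, A.T @ P @ l)
print(x_hat)                               # [a, b] = [1.0, 2.0]
```

The "big pro" is visible in the code: adding a sixth observation just appends one more row to A and one entry to l, without changing the estimation formula.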
The process of the mixed adjustment model
- Linearization: the model is linearized around the chosen point of expansion
- Minimization: based on minimizing the function VᵀPV, where V contains the residuals of the observations and P is the weight matrix, the inverse of the cofactor matrix Q. Minimization is achieved by introducing Lagrange multipliers and setting the partial derivatives to zero. Estimated X -> Lagrange multipliers K -> residuals V -> adjusted parameters and adjusted observations
- Cofactor matrices Q: by the law of variance propagation, Qw -> Qx -> Qv -> QLa
- Posteriori variance of unit weight: σ₀² = VᵀPV/(r − u), where r − u is the degree of freedom, which equals the number of redundant observations. ΣX -> ΣV -> ΣLa
- Iterations: let the adjustment converge properly by letting both Vᵢ and Xᵢ converge, i.e. the iteration has converged if |(VᵀPV)ᵢ − (VᵀPV)ᵢ₋₁| < a small positive number
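A hedged sketch of the iterative scheme above, shown for the simpler nonlinear observation-equation case rather than the full mixed (implicit) model: a point (x, y) is estimated from distances to three known stations, relinearizing at each step and stopping when VᵀPV stops changing. All coordinates and observations are invented for illustration; the observed distances are consistent with the point (3, 4).

```python
import numpy as np

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
l = np.array([5.0, np.sqrt(65.0), np.sqrt(45.0)])  # exact distances to (3, 4)
P = np.eye(3)                                      # weight matrix

x = np.array([1.0, 1.0])        # point of expansion (initial guess)
vpv_prev = np.inf
for _ in range(20):
    d = np.linalg.norm(stations - x, axis=1)   # f(x0): modeled distances
    A = (x - stations) / d[:, None]            # Jacobian of the distance function
    dl = l - d                                 # reduced observations
    dx = np.linalg.solve(A.T @ P @ A, A.T @ P @ dl)
    x = x + dx                                 # update point of expansion
    v = A @ dx - dl                            # residuals of the linearized model
    vpv = v @ P @ v
    if abs(vpv_prev - vpv) < 1e-12:            # |(VtPV)_i - (VtPV)_{i-1}| < eps
        break
    vpv_prev = vpv

r, u = 3, 2
sigma0_sq = vpv / (r - u)      # posteriori variance of unit weight, VtPV/(r-u)
print(np.round(x, 6))          # converges to [3. 4.]
```

Because the data are exactly consistent, VᵀPV and hence σ₀² go to zero here; with real (noisy) observations σ₀² estimates the variance of unit weight and can be compared against its a-priori value.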
The important equations of the three adjustment models.