Geoprocessing and parameter estimation

One additional course for Geomatic MSc ETHz


What is least-squares adjustment (what are redundant observations), and how can they be used in adjustments?

  1. It carries out objective quality control and estimates the unknown parameters from redundant observations using mathematically well-defined rules (more observations are available than are necessary to determine the unknowns).
  2. Random errors are dealt with directly in least-squares adjustment; it can help discover and deal with unknown systematic errors by adding additional unknowns; and it can help discover and remove blunders.
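
A minimal sketch of the idea in Python (a hypothetical leveling network, not from the course): three observed height differences determine two unknown heights, so one observation is redundant and the residuals expose the inconsistency.

```python
import numpy as np

# Hypothetical leveling example: two unknown heights H1, H2 (benchmark H0 = 0),
# three observed height differences -- one more than needed, so redundancy = 1.
# Observations: l1 = H1 - H0, l2 = H2 - H1, l3 = H2 - H0
A = np.array([[1.0, 0.0],    # l1 = H1
              [-1.0, 1.0],   # l2 = H2 - H1
              [0.0, 1.0]])   # l3 = H2
l = np.array([10.02, 5.01, 15.06])   # observed values [m]

# Least-squares estimate x = (A^T A)^{-1} A^T l  (equal weights assumed)
x, *_ = np.linalg.lstsq(A, l, rcond=None)
v = A @ x - l                 # residuals: nonzero only because of redundancy
print("H1, H2 =", x)
print("residuals =", v)
```

Without the third observation there would be no residuals at all, and hence no way to check the quality of the result.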

What kinds of errors can occur in surveying, and what are their features?

  1. Random errors: small; positive and negative errors of the same magnitude are equally probable; inherent and cannot be removed completely; dealt with by least-squares adjustment.
  2. Systematic errors: dangerous because they accumulate; avoided by adequate instrument calibration, compensation, etc.; if the errors are known, correct them before the adjustment, or model them in the adjustment by adding additional unknowns.
  3. Blunders: large errors due to carelessness; prevented by careful observation; discovered and removed in the adjustment.

Difference between accuracy and precision?

  • Accuracy: closeness of observations to the true value; affected by systematic errors, random errors, and blunders
  • Precision: closeness of repeated observations to the sample mean; related only to random errors
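
A tiny simulated illustration (hypothetical numbers): a systematic offset degrades accuracy but leaves precision untouched.

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 100.0
# Measurements with a systematic offset (+0.05) and small random noise
obs = true_value + 0.05 + rng.normal(0.0, 0.01, size=1000)

precision = obs.std(ddof=1)                                 # spread about the sample mean
accuracy_rmse = np.sqrt(np.mean((obs - true_value) ** 2))   # closeness to the truth
print(f"precision (std about mean):  {precision:.4f}")
print(f"accuracy (RMSE about truth): {accuracy_rmse:.4f}")  # worse: includes the bias
```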


What components does least-squares adjustment include?

Two equally important components, which together yield quality-controlled observations and parameters with their accuracies:

  • stochastic model: describes the precision; the observations are treated as uncorrelated random variables (diagonal variance-covariance matrix, with the off-diagonal covariances equal to 0) carrying only random errors and following the normal distribution
    • cofactor matrix: scaled variance-covariance matrix (scale factor: the a-priori variance of unit weight)
    • weight matrix: the inverse of the cofactor matrix, P = Q^-1 (see the sketch after this list)
  • mathematical model: expresses mathematically the relations between observations and between observations and other quantities of interest (the parameters, or unknowns, of the adjustment)
    • most models are nonlinear; nonlinear functions must first be linearized
    • 3 models:
      • mixed adjustment: observations and parameters are related by an implicit nonlinear function
      • observation equation: observations are explicitly related to the parameters
        • big pro: each observation generates one equation
      • condition equation: total elimination of the parameters
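
A minimal sketch of the stochastic model in Python, assuming three uncorrelated observations and a hypothetical a-priori variance of unit weight:

```python
import numpy as np

# Hypothetical stochastic model for three uncorrelated observations:
# standard deviations in metres, hence a diagonal variance-covariance matrix.
sigma = np.array([0.002, 0.003, 0.002])
Sigma_ll = np.diag(sigma ** 2)   # variance-covariance matrix (off-diagonals = 0)

sigma0_sq = 0.002 ** 2           # a-priori variance of unit weight (assumed)
Q = Sigma_ll / sigma0_sq         # cofactor matrix: scaled variance-covariance matrix
P = np.linalg.inv(Q)             # weight matrix P = Q^-1
print(np.diag(P))                # more precise observations get larger weights
```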

How does the law of variance-covariance propagation work for a linear function?

For a linear function y = A*l + c of random variables l with variance-covariance matrix Sigma_ll, the law gives Sigma_yy = A * Sigma_ll * A^T. Since the total probability is 1, a smaller standard deviation makes the density function narrower and higher.
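
A short numerical check of the law, with hypothetical input variances:

```python
import numpy as np

# y = A l + c: variance-covariance propagation for a linear function.
A = np.array([[1.0, 1.0],      # y1 = l1 + l2  (e.g. a sum of two measurements)
              [1.0, -1.0]])    # y2 = l1 - l2  (a difference)
Sigma_ll = np.diag([0.01**2, 0.02**2])   # assumed uncorrelated inputs

Sigma_yy = A @ Sigma_ll @ A.T   # the propagation law
print(Sigma_yy)
# Off-diagonals are nonzero: y1 and y2 are correlated even though l1, l2 are not.
```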

Propagation of uncertainty used in trigonometrical heighting

Take the total differential with respect to each variable; use the squared partial derivatives as coefficients, multiply each by the variance of the corresponding input variable, and sum the products to obtain the variance of the quantity sought.
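
A sketch of this rule for trigonometrical heighting, assuming the model dh = s*tan(alpha) (s = horizontal distance, alpha = vertical angle) and hypothetical measurement precisions:

```python
import numpy as np

s, alpha = 100.0, np.deg2rad(10.0)                   # [m], [rad]
sigma_s, sigma_alpha = 0.005, np.deg2rad(5 / 3600)   # 5 mm, 5 arc-seconds (assumed)

# Partial derivatives (the "total differential" of the rule above)
dh_ds = np.tan(alpha)                  # d(dh)/ds
dh_dalpha = s / np.cos(alpha) ** 2     # d(dh)/d(alpha)

# Squared partials times the input variances, summed:
var_dh = dh_ds**2 * sigma_s**2 + dh_dalpha**2 * sigma_alpha**2
print(f"dh = {s * np.tan(alpha):.3f} m, sigma_dh = {np.sqrt(var_dh) * 1000:.2f} mm")
```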

The process of Mixed adjustment model

  1. Linearization: the model is linearized around a chosen point of expansion.
  2. Minimization: based on minimizing the function V^T*P*V, where V is the vector of observation residuals and P is the weight matrix, the inverse of the cofactor matrix Q. The minimum is found by introducing Lagrange multipliers and setting the partial derivatives to zero. Order of solution: estimated parameters X -> Lagrange multipliers K -> residuals V -> adjusted parameters and adjusted observations (see the sketch after this list).
  3. Cofactor matrices: by the law of variance-covariance propagation, Qw -> Qx -> Qv -> QLa.
  4. A-posteriori variance of unit weight: sigma0^2 = V^T*P*V / (r - u), where r - u is the degree of freedom, equal to the number of redundant observations. SigmaX -> SigmaV -> SigmaLa.
  5. Iterations: let the adjustment converge properly, with the corrections to X and the changes in V going to zero; i.e. the iteration has converged if |(V^T*P*V)_i - (V^T*P*V)_(i-1)| < a small positive number.
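
A compact sketch of these steps, assuming equal weights (P = Q = I) and a hypothetical example: fitting a line when both coordinates are observed, so each point gives one implicit condition f(l, x) = y - a*x - b = 0 in observations and parameters.

```python
import numpy as np

xs = np.array([0.0, 1.1, 1.9, 3.2])
ys = np.array([0.1, 2.0, 4.1, 6.0])
l = np.concatenate([xs, ys])             # observation vector
Q = np.eye(l.size)                       # cofactor matrix (identity assumed)

a, b = 1.0, 0.0                          # approximate parameters (expansion point)
v = np.zeros_like(l)                     # residuals
for _ in range(10):                      # step 5: iterate until convergence
    xa, ya = l[:4] + v[:4], l[4:] + v[4:]
    # Step 1, linearization: A = df/d(params), B = df/d(observations), w = misclosure
    A = np.column_stack([-xa, -np.ones(4)])
    B = np.hstack([np.diag(-a * np.ones(4)), np.eye(4)])
    w = ya - a * xa - b - B @ v
    # Step 2, minimization of V^T P V with Lagrange multipliers: X -> K -> V
    M = B @ Q @ B.T
    dx = -np.linalg.solve(A.T @ np.linalg.solve(M, A),
                          A.T @ np.linalg.solve(M, w))
    k = np.linalg.solve(M, A @ dx + w)   # Lagrange multipliers K
    v = -Q @ B.T @ k                     # residuals V
    a, b = a + dx[0], b + dx[1]          # adjusted parameters

r, u = 4, 2                              # conditions and unknowns
sigma0_sq = v @ np.linalg.inv(Q) @ v / (r - u)   # step 4: a-posteriori variance
print(f"a = {a:.4f}, b = {b:.4f}, sigma0^2 = {sigma0_sq:.5f}")
```

Step 3 (the cofactor matrices Qx, Qv, QLa) would follow from the same A, B, M matrices by variance-covariance propagation; it is omitted here for brevity.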

The important equations of the three adjustment models.

  • Mixed adjustment model: f(La, Xa) = 0; the observations and the parameters are implicitly related
  • Observation equation model: La = f(Xa); the observations are related explicitly to the parameters
  • Condition equation model: f(La) = 0; the observations are related by a nonlinear function without the use of additional parameters

Examples of calculating the inner angles of a triangle for the 3 different models (see the sketch below).
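
A worked sketch for two of the models, with hypothetical observed angles and equal weights (the mixed model reduces to the condition model here, since the parameters can be fully eliminated); both routes give the same adjusted angles.

```python
import numpy as np

# Three observed inner angles of a triangle; they must sum to 180 degrees.
l = np.array([60.02, 59.99, 60.05])   # [deg]; sum = 180.06, misclosure 0.06

# Observation equation model: parameters x1, x2 (two angles); l3 = 180 - x1 - x2,
# so every observation generates one equation l = A x + f.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, -1.0]])
f = np.array([0.0, 0.0, 180.0])
x, *_ = np.linalg.lstsq(A, l - f, rcond=None)
print("observation model:", A @ x + f)

# Condition equation model: parameters eliminated; the single condition
# v1 + v2 + v3 + w = 0 with w = sum(l) - 180 gives v_i = -w/3 for equal weights.
w = l.sum() - 180.0
v = np.full(3, -w / 3.0)
print("condition model:  ", l + v)    # identical adjusted angles
```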

What is the ellipse of standard deviation?

  • The standard deviation ellipse is not a standard deviation curve: the standard deviation in a certain direction is equal to the projection of the standard ellipse onto that direction
  • The standard deviation increases rapidly as the direction moves away from the minor axis. Therefore an extremely narrow ellipse is not desirable if the overall accuracy for the station positions is important.
  • For angle = 0 or 90 degrees, the semi-axes a and b equal the maximum and minimum standard deviations, respectively.
  • The points of contact between the ellipse and the enclosing rectangle are functions of the correlation coefficients.
  • Source: https://blog.csdn.net/allenlu2008/article/details/47780405
    • Properties: the semi-major axis indicates the direction of the data distribution, the semi-minor axis indicates the spread, and the area indicates the extent of the distribution
      • Direction: the larger the difference between the semi-axes (the flattening), the more pronounced the directionality
      • Spread: the longer the semi-minor axis, the more dispersed the data; the shorter, the more concentrated
      • Different standard-deviation levels can be specified, determining the proportion of the data the generated ellipse contains
        • 1 sigma: 68%, 2 sigma: 95%, 3 sigma: 99%
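
A small sketch that recovers the ellipse elements from a hypothetical 2x2 point covariance matrix: the semi-axes are the square roots of the eigenvalues, and the standard deviation in any chosen direction follows directly from the covariances.

```python
import numpy as np

# Hypothetical point covariance matrix [mm^2]
Sigma = np.array([[4.0, 1.5],
                  [1.5, 1.0]])

eigval, eigvec = np.linalg.eigh(Sigma)   # eigenvalues in ascending order
a = np.sqrt(eigval[1])                   # semi-major axis = maximum SD
b = np.sqrt(eigval[0])                   # semi-minor axis = minimum SD
theta = np.degrees(np.arctan2(eigvec[1, 1], eigvec[0, 1]))  # major-axis bearing

print(f"a = {a:.2f} mm, b = {b:.2f} mm, orientation = {theta:.1f} deg")

# SD in an arbitrary direction t, from the covariance matrix itself:
t = np.deg2rad(30.0)
u = np.array([np.cos(t), np.sin(t)])
print(f"sigma(30 deg) = {np.sqrt(u @ Sigma @ u):.2f} mm")
```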

2 steps of blunder detection, 3 ways of flagging observations

  • 1st: check the field notes to confirm there are no recording errors
  • 2nd: break the network down into smaller networks and adjust each network separately. Alternatively, the observations can be added sequentially, one at a time, until the blunder is found. The sum of the squared normalized residuals is inspected for unusually large deviations
    • Blunder detection in the adjustment is based on the analysis of the residuals (prerequisite: redundant observations)
    • the problem with least-squares adjustment is that it tends to hide blunders or reduce their impact, distributing their effects more or less throughout the entire network
  • Flag observations to locate troublesome observations and thus avoid unnecessary searching of the whole dataset (a sketch follows this list)
    • The tau test helps decide whether or not to flag an observation for possible rejection by comparing the test statistic with the critical value from the tau tables
    • Data snooping assumes the existence of only one blunder in the set of observations and applies a series of one-dimensional tests, testing all residuals consecutively
    • Changing the weights of observations: reweight and readjust by examining the residuals of each observation during each iteration; if the magnitude of a residual is outside a certain range, reduce its weight
      • big pro: locates and eliminates blunders automatically
      • efficient when there are many redundant observations
      • one possible danger: if the initial approximations of the parameters are inaccurate, the weights of many correct observations may be reduced. To avoid unnecessary rejection and reweighting, do not change the weights until the residuals exceed three times the SD
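
A sketch of data snooping with normalized residuals, assuming an observation-equation model with uncorrelated observations of equal a-priori precision, and a hypothetical network where the 4th observation carries a blunder. Consistent with the single-blunder assumption above, only the largest residual exceeding the critical value is flagged.

```python
import numpy as np

A = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, 1.0],
              [1.0, 0.0], [0.0, 1.0]])
l = np.array([10.02, 5.01, 15.03, 10.25, 15.02])   # 4th value carries a blunder
sigma0 = 0.01                                      # a-priori SD of unit weight (assumed)

N = A.T @ A
x = np.linalg.solve(N, A.T @ l)
v = A @ x - l                                      # residuals

# Cofactor matrix of the residuals: Qvv = Q - A N^{-1} A^T  (here Q = I)
Qvv = np.eye(len(l)) - A @ np.linalg.solve(N, A.T)
w = np.abs(v) / (sigma0 * np.sqrt(np.diag(Qvv)))   # normalized residuals

critical = 3.29                                    # ~99.9% two-sided normal quantile
print("normalized residuals:", np.round(w, 1))
if w.max() > critical:
    # Test all residuals, flag only the largest; remove it, readjust, and repeat.
    print(f"flag observation {w.argmax() + 1} for rejection")   # -> observation 4
```

Note how the blunder inflates the neighboring residuals as well (the hiding/distributing effect described above), which is why only the single largest normalized residual is removed per pass.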