Parameters, Estimators and Estimates

Written on September 8, 2016

The causal inference class today covered some very basic ideas that I frequently have problems with. I ‘know’ what estimates and estimators are, but I could not clearly explain them to a lay person (which probably reflects some holes in my knowledge). Today’s class cleared up some of these basics.

  1. A distribution captures some properties of the world.

  2. We might be interested in specific properties within this distribution, say its mean $\mu$ or its variance $\sigma^2$. These are fixed but unknown properties.

  3. These properties of interest are either parameters of our distribution (e.g. $\mu$ and $\sigma^2$) or functions of those parameters; they are also known as estimands.

  4. Typically there are too many parameters to measure directly, so we need to come up with functions or algorithms that will give us approximate values of the parameters we are interested in. These functions are estimators (a minimal code sketch follows this list).

  5. The input to an estimator is a sample from the distribution, i.e. we have to draw a representative sample from the distribution which will serve as the input to our estimator function. This sample $X$ is a matrix of dimensionality $n \times p$, where $n$ is the number of samples drawn and $p$ is the number of variables in one sample.

  6. Now an estimand, say $\Theta$, can be approximately computed, or estimated, using an estimator $\hat{\Theta}(X)$. Note: the input to $\hat{\Theta}$ is $X$, which is a random sample, and therefore the estimate $\hat{\Theta}(X)$ is also a random variable. Note: this estimator makes assumptions about how the samples were generated; specifically, it assumes the samples are iid.

  7. A good estimator has no bias, $\mathbb{E}[\hat{\Theta}(X)] - \Theta = 0$, and low variance, $\mathrm{Var}(\hat{\Theta}(X)) = \mathbb{E}\big[(\hat{\Theta}(X) - \mathbb{E}[\hat{\Theta}(X)])^2\big]$. Note that $\Theta$ does not show up in the variance; it is just a measure of how different the output of the estimator is with different samples as input (the second sketch below estimates both quantities empirically).

  8. We can combine the properties of bias and variance into a single measure, the mean squared error $\mathrm{MSE}(\hat{\Theta}) = \mathbb{E}\big[(\hat{\Theta}(X) - \Theta)^2\big]$. The MSE can be decomposed as $\mathrm{MSE}(\hat{\Theta}) = \mathrm{Bias}(\hat{\Theta})^2 + \mathrm{Var}(\hat{\Theta})$. This can be seen by adding and subtracting $\mathbb{E}[\hat{\Theta}(X)]$ inside the square (a short derivation is given at the end of the post).
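
To make items 4–6 concrete, here is a minimal Python sketch. The particular distribution (a normal with mean 2.5), the estimand (that mean), the estimator (the sample mean), and the helper names `draw_sample` and `estimator` are all my own choices for illustration, not something prescribed by the class.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "world": a distribution with a fixed but unknown property.
# Here the estimand Theta is taken to be the mean of a normal distribution.
true_theta = 2.5

def draw_sample(n, p=1):
    """Draw an iid sample X of shape (n, p) from the distribution."""
    return rng.normal(loc=true_theta, scale=1.0, size=(n, p))

def estimator(X):
    """An estimator: a function that maps a sample X to an estimate of Theta.
    Here it is simply the sample mean of the first (and only) variable."""
    return X[:, 0].mean()

X = draw_sample(n=100)      # one representative sample
estimate = estimator(X)     # one estimate of the estimand
print(f"true Theta = {true_theta}, estimate = {estimate:.3f}")
```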
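
Because the input sample is random, re-running the estimator on fresh samples gives different estimates; bias, variance, and MSE describe the distribution of those estimates. The second sketch below keeps the same hypothetical setup and estimates all three by simulation, then checks the decomposition from item 8 numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
true_theta = 2.5            # same hypothetical estimand as above
n, n_repeats = 100, 20_000

# Repeatedly draw a fresh iid sample and apply the estimator (sample mean):
# because the input is random, the resulting estimate is a random variable.
estimates = np.array([
    rng.normal(loc=true_theta, scale=1.0, size=n).mean()
    for _ in range(n_repeats)
])

bias = estimates.mean() - true_theta          # E[Theta_hat] - Theta
variance = estimates.var()                    # spread of the estimator's output
mse = np.mean((estimates - true_theta) ** 2)  # E[(Theta_hat - Theta)^2]

print(f"bias     = {bias:+.4f}")
print(f"variance = {variance:.4f}")
print(f"MSE      = {mse:.4f}  vs  bias^2 + variance = {bias**2 + variance:.4f}")
```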
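
Finally, the decomposition in item 8 follows by adding and subtracting $\mathbb{E}[\hat{\Theta}]$ inside the square (writing $\hat{\Theta}$ for $\hat{\Theta}(X)$):

$$
\begin{aligned}
\mathrm{MSE}(\hat{\Theta})
  &= \mathbb{E}\big[(\hat{\Theta} - \Theta)^2\big] \\
  &= \mathbb{E}\big[(\hat{\Theta} - \mathbb{E}[\hat{\Theta}] + \mathbb{E}[\hat{\Theta}] - \Theta)^2\big] \\
  &= \mathbb{E}\big[(\hat{\Theta} - \mathbb{E}[\hat{\Theta}])^2\big]
     + 2\,\big(\mathbb{E}[\hat{\Theta}] - \Theta\big)\,\mathbb{E}\big[\hat{\Theta} - \mathbb{E}[\hat{\Theta}]\big]
     + \big(\mathbb{E}[\hat{\Theta}] - \Theta\big)^2 \\
  &= \mathrm{Var}(\hat{\Theta}) + \mathrm{Bias}(\hat{\Theta})^2,
\end{aligned}
$$

where the cross term vanishes because $\mathbb{E}\big[\hat{\Theta} - \mathbb{E}[\hat{\Theta}]\big] = 0$.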