State-space modeling workshop

Lab 1 and 2 comments

One of the things you've seen is that separating process and non-process error is not so simple. Using this estimation method (maximizing the CDA fit with the Kalman filter), I find that approximately half the time the ML estimates put all of the variance into either process or non-process error. This is for 20-year time series. While this may seem distressing, keep in mind the following:

1. Sometimes the local optimum with both process and non-process error non-zero will be evident. This is especially so when the process error estimate is large (i.e., it matters).

2. There are other estimation methods that work better by making some assumptions about the error structure in the actual data. If your species is not famous for boom-bust or predator-prey cycles, then assuming that measurement error dominates the non-process error term is probably reasonable. In this case, Restricted Maximum Likelihood (REML) will often give better estimates and is less likely to put either the process or non-process error at zero.

3. Sometimes monitoring is occurring at multiple sites. Estimating simultaneously from multiple sites can significantly reduce the problem of separating process and non-process error. Estimates from the same site, but with different sampling techniques, could serve the same purpose.

4. Your risk metric might not require precise separation of process and non-process error (or you might deliberately choose metrics that aren't sensitive to it). mu, s2p, and s2np are essentially 'nuisance' parameters: in a PVA, you're not really concerned with them; rather, you are interested in a risk metric that is some function of mu, s2p, and s2np. Perhaps the data provide little information about s2p except that it is small, but your risk metric may be rather insensitive to the precise value of s2p -- as long as it is small. We explore this in the afternoon lab.

5. Finally, the estimates of mu, s2p, and s2np, which define estimated long-term trends and extinction risks, explicitly include uncertainty about where the variance in the data is coming from. We could get tighter estimates by attributing all variance to measurement error, say, but this would underestimate the true uncertainty. The uncertainty in the trend estimate for the wolves (the last exercise) is disagreeable (see note below); however, it is, I would argue, an honest assessment of what underlying trends are consistent with the observed data. A quick and dirty approach would be to fit a linear trend to the logged data (which essentially assigns all variability to non-process error), but this would seriously underestimate uncertainty in the long-term trend. Optimal management given uncertainty is an area of active research, but first we need an accurate estimate of uncertainty.
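The estimation method referred to above -- maximizing the likelihood with a Kalman filter -- can be sketched for the univariate exponential-growth state-space model. This is a minimal illustration, not the workshop's lab code: the function names, initialization choice, and simulated data are my own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def kalman_negloglik(params, y):
    """Negative log-likelihood of the univariate state-space model
        x_t = x_{t-1} + mu + w_t,  w_t ~ N(0, s2p)   (process error)
        y_t = x_t + v_t,           v_t ~ N(0, s2np)  (non-process error)
    computed with the Kalman filter; variances are optimized on the
    log scale so they stay positive."""
    mu = params[0]
    s2p, s2np = np.exp(params[1]), np.exp(params[2])
    # treat the first (noisy) observation as the initial state estimate
    x, P = y[0], s2np
    nll = 0.0
    for t in range(1, len(y)):
        x_pred, P_pred = x + mu, P + s2p           # predict
        S = P_pred + s2np                          # innovation variance
        e = y[t] - x_pred                          # innovation
        nll += 0.5 * (np.log(2 * np.pi * S) + e * e / S)
        K = P_pred / S                             # Kalman gain
        x, P = x_pred + K * e, (1.0 - K) * P_pred  # update
    return nll

# simulate a 20-year log-abundance series and fit by maximum likelihood
rng = np.random.default_rng(1)
T, mu_true, s2p_true, s2np_true = 20, -0.05, 0.02, 0.04
x_true = np.cumsum(rng.normal(mu_true, np.sqrt(s2p_true), T))
y = x_true + rng.normal(0.0, np.sqrt(s2np_true), T)

fit = minimize(kalman_negloglik, x0=[0.0, np.log(0.01), np.log(0.01)],
               args=(y,), method="Nelder-Mead")
mu_hat, s2p_hat, s2np_hat = fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])
print(mu_hat, s2p_hat, s2np_hat)
```

If you rerun this with different random seeds, you will often see the optimizer drive either s2p_hat or s2np_hat toward zero -- the boundary behavior described above for 20-year series.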

Note: In an actual PVA, a wide variety of other information on the risks facing a population would be used to make a risk assessment. Also, the population trajectories of animals with strong group formation/dissolution dynamics, such as wolves and lions, can show threshold dynamics and strong Allee effects. Using a CDA for this kind of population structure has yet to be tested. Researchers have used group dynamics models for such populations, although estimation of those models requires extensive data. In fact, I've only seen this done for populations where every individual in the population has been tracked over many years (30+).
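On point 4 above: one standard risk metric built from the 'nuisance' parameters is the diffusion-approximation probability of ever declining from a current size n0 to a quasi-extinction threshold ne. The formula below is the usual Dennis-et-al.-style expression; the function name and the specific numbers are my own illustration. Note that it uses only mu and s2p, and that for a clearly positive mu it barely changes across a range of small s2p values -- which is the insensitivity argued for above.

```python
import numpy as np

def p_quasi_extinction(mu, s2p, n0, ne):
    """Diffusion-approximation probability that a population with
    log-trend mu and process variance s2p, starting at n0, ever
    declines to the threshold ne.  s2np does not enter the metric."""
    d = np.log(n0 / ne)              # log-distance to the threshold
    if mu <= 0:
        return 1.0                   # declining (or flat) trend: hit eventually
    return float(np.exp(-2.0 * mu * d / s2p))

# for a clearly increasing population, the metric stays near zero
# across an order of magnitude of small s2p values
for s2p in (0.001, 0.005, 0.01):
    print(s2p, p_quasi_extinction(mu=0.05, s2p=s2p, n0=1000, ne=100))
```

So even when the data cannot pin s2p down precisely, a risk metric like this can be stable -- as long as the data support s2p being small.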

Created on May 7, 2007 at 02:40:25 PM by eli
