Earthquake Forecasting: Should we base it on activation, quiescence, or something else?

Speaker: John Rundle

Many years ago, Gardner and Knopoff (1974) asked the question: “Is the sequence of large earthquakes, with aftershocks removed, Poissonian?” Their one-word abstract was “Yes.” They reached this answer by removing earthquakes from the Southern California catalog until the remaining event occurrences were consistent with Poisson statistics. Since that time, there has been a vigorous debate over whether the probability of earthquake occurrence is highest during periods of smaller-event activation or highest during periods of smaller-event quiescence.

The physics of the activation model is based on an idea from the theory of nucleation: a small event has a finite probability of growing into a large earthquake, so more small events imply a higher probability for the occurrence of a large earthquake. An objection to this model has been stated as: “the greatest probability for a large earthquake is the moment after it occurs.” Examples of this type of model are the ETAS and STEP models, which are statistical models built on the Omori and Gutenberg-Richter laws.

The physics of the quiescence model is based on the idea that the reduced occurrence of smaller earthquakes may be due to a mechanism such as critical slowing down, in which fluctuations in systems with long-range interactions tend to be suppressed prior to large nucleation events. An example of such a model is the seismic gap model. Other models incorporate both effects: the Pattern Informatics model, for example, looks only for deviations from the average rate (activation or quiescence) and weights both equally.

In this talk we use this background to discuss both previous and new models that illustrate these points. We also discuss whether the time since the last large earthquake should play a role in earthquake probabilities, as it does in the elastic rebound model.
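To make the Gardner-Knopoff question concrete: for a stationary Poisson process, inter-event times are exponentially distributed, so one simple check on a declustered catalog is a Kolmogorov-Smirnov test of the inter-event times against a fitted exponential. The Python sketch below illustrates this using a synthetic catalog, not real Southern California data; the sample size and significance level are illustrative assumptions.

    # Minimal sketch: are declustered event times consistent with a Poisson
    # process? Inter-event times of a Poisson process are i.i.d. exponential,
    # so we KS-test them against an exponential with the fitted rate.
    # Note: estimating the rate from the same data biases the test slightly
    # (a Lilliefors-type correction would be more careful).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Hypothetical declustered catalog: event times (days) over ~50 years.
    # Here we simulate a Poisson process, so the test should not reject.
    event_times = np.sort(rng.uniform(0.0, 365.25 * 50, size=400))

    dt = np.diff(event_times)          # inter-event times
    rate = 1.0 / dt.mean()             # maximum-likelihood rate estimate

    # KS test against the fitted exponential (loc=0, scale=1/rate).
    stat, p_value = stats.kstest(dt, "expon", args=(0.0, 1.0 / rate))
    print(f"estimated rate: {rate:.4f} events/day")
    print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
    print("consistent with Poisson" if p_value > 0.05 else "rejects Poisson")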
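The activation idea is encoded in the ETAS conditional intensity, which adds to a constant background rate an Omori-law aftershock contribution whose productivity grows exponentially with magnitude, in Gutenberg-Richter fashion. The sketch below implements this standard form; all parameter values (mu, K, alpha, c, p, m0) are illustrative placeholders, not fitted values.

    # Sketch of the ETAS conditional intensity:
    #   lambda(t) = mu + sum_{t_i < t} K * exp(alpha*(m_i - m0)) / (t - t_i + c)**p
    # combining Omori-Utsu temporal decay with magnitude-dependent productivity.
    import numpy as np

    def etas_intensity(t, event_times, event_mags,
                       mu=0.02,    # background rate (events/day), assumed
                       K=0.05,     # productivity constant, assumed
                       alpha=1.5,  # magnitude scaling, assumed
                       c=0.01,     # Omori time offset (days), assumed
                       p=1.1,      # Omori decay exponent, assumed
                       m0=3.0):    # completeness magnitude, assumed
        """Conditional intensity lambda(t) given all events before time t."""
        past = event_times < t
        dt = t - event_times[past]
        productivity = K * np.exp(alpha * (event_mags[past] - m0))
        return mu + np.sum(productivity / (dt + c) ** p)

    # Toy usage: an M5.5 event at t=0 followed by two smaller aftershocks.
    # The rate is elevated just after the events and decays per Omori's law.
    times = np.array([0.0, 0.5, 2.0])   # days
    mags = np.array([5.5, 3.8, 3.2])
    for t in [0.1, 1.0, 10.0, 100.0]:
        lam = etas_intensity(t, times, mags)
        print(f"t = {t:6.1f} d   lambda = {lam:.4f} events/day")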
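Finally, a highly simplified sketch of the Pattern Informatics idea: score each grid cell by the squared, normalized deviation of its recent seismicity rate from its long-term mean, so that activation and quiescence are weighted equally. The published method involves base-period averaging and normalization steps not reproduced here; the function name and threshold below are hypothetical.

    # Conceptual core of a PI-style hotspot map: squared rate deviations flag
    # cells that are either unusually active or unusually quiet.
    import numpy as np

    def pi_hotspots(rates_history, rates_recent, threshold=1.0):
        """rates_history: (n_windows, n_cells) rates over past time windows;
        rates_recent: (n_cells,) rate in the change window."""
        mean = rates_history.mean(axis=0)
        std = rates_history.std(axis=0) + 1e-12   # avoid division by zero
        # Normalized deviation: positive = activation, negative = quiescence.
        dev = (rates_recent - mean) / std
        # Squaring weights activation and quiescence equally.
        score = dev ** 2
        return score, score > threshold ** 2

    # Toy usage: 10 past windows over 5 cells; cell 2 activates, cell 4 quiets,
    # and both should receive comparably high scores.
    rng = np.random.default_rng(0)
    hist = rng.poisson(5.0, size=(10, 5)).astype(float)
    recent = np.array([5.0, 4.0, 12.0, 6.0, 0.0])
    score, hot = pi_hotspots(hist, recent)
    print(np.round(score, 2), hot)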