User:Vipul/Makridakis Competitions

From Wikipedia, the free encyclopedia

The Makridakis Competitions (also known as the M Competitions or M-Competitions) are a series of competitions organized by teams led by forecasting researcher Spyros Makridakis and intended to evaluate and compare the accuracy of different forecasting methods.[1][2][3][4]

The competitions

Summary

| No. | Informal name for competition | Year of publication of results | Number of time series used | Number of methods tested | Other features |
|-----|-------------------------------|--------------------------------|----------------------------|--------------------------|----------------|
| 1 | M Competition or M-Competition[5][1] | 1982 | 1001 (a subsample of 111 was used for methods too difficult to run on all 1001) | 15 (plus 9 variations) | Not real-time |
| 2 | M-2 Competition or M2-Competition[6][1] | 1993 | 29 (23 from collaborating companies, 6 from macroeconomic indicators) | ? | Real-time; many collaborating organizations; competition announced in advance |
| 3 | M-3 Competition or M3-Competition[1] | 2000 | 3003 | 24 | |

First competition in 1982

The first Makridakis Competition, held in 1982 and known in the forecasting literature as the M-Competition, used 1001 time series and 15 forecasting methods (with another nine variations of those methods included).[5][1] According to a later paper by the authors, the main conclusions of the M-Competition were as follows:[1]

  1. Statistically sophisticated or complex methods do not necessarily provide more accurate forecasts than simpler ones.
  2. The relative ranking of the performance of the various methods varies according to the accuracy measure being used.
  3. Combining the forecasts of several methods outperforms, on average, the individual methods being combined, and does very well in comparison to other methods.
  4. The accuracy of the various methods depends on the length of the forecasting horizon involved.

The findings of the study have been verified and replicated through the use of new methods by other researchers.[7][8][9]
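The combination result (conclusion 3) can be illustrated with a minimal sketch. The data, the two simple methods (naive and drift), and the equal-weight average below are all hypothetical choices for illustration, not the methods or series used in the competition; MAPE is one of several accuracy measures used in this literature.

```python
# Illustrative sketch: combining two simple forecasting methods by averaging,
# then comparing accuracy with MAPE. All numbers here are hypothetical.

def naive_forecast(history, horizon):
    """Naive method: repeat the last observed value."""
    return [history[-1]] * horizon

def drift_forecast(history, horizon):
    """Drift method: extrapolate the average historical change."""
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return [history[-1] + slope * h for h in range(1, horizon + 1)]

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

history = [100, 104, 109, 113, 118, 121]   # hypothetical observed series
actual = [126, 130, 135]                   # hypothetical future values

f1 = naive_forecast(history, 3)
f2 = drift_forecast(history, 3)
combined = [(a + b) / 2 for a, b in zip(f1, f2)]  # equal-weight combination

for name, f in [("naive", f1), ("drift", f2), ("combined", combined)]:
    print(f"{name}: MAPE = {mape(actual, f):.2f}%")
```

By the triangle inequality, the error of the equal-weight combination is never worse than the average of the individual errors, which is the sense in which combining helps "on average"; in any single case one component method may still beat the combination.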

Newbold (1983) was critical of the M-competition, and argued against the general idea of using a single competition to attempt to settle such a complex issue.[10]

Second competition, published in 1993

The second competition, called the M-2 Competition or M2-Competition, was conducted on a grander scale. A call to participate was published in the International Journal of Forecasting, announcements were made at the International Symposium on Forecasting, and a written invitation was sent to all known experts on the various time series methods. The M2-Competition was organized in collaboration with four companies, included six macroeconomic series, and was conducted on a real-time basis. The data were from the United States.[1] The results of the competition were published in a 1993 paper.[6] The results were claimed to be statistically identical to those of the M-Competition.[1]

Fildes and Makridakis (1995) argue that despite the evidence produced by these competitions, the implications continued to be ignored by theoretical statisticians.[11]

Third competition, published in 2000

The third competition, called the M-3 Competition or M3-Competition, was intended to both replicate and extend the features of the M-Competition and M2-Competition, through the inclusion of more methods and researchers (particularly researchers in the area of neural networks) and more time series.[1] A total of 3003 time series were used. The paper documenting the results of the competition was published in the International Journal of Forecasting[1] in 2000, and the raw data were also made available on the International Institute of Forecasters website.[4] According to the authors, the conclusions from the M3-Competition were similar to those from the earlier competitions.[1]

A number of other papers have been published with different analyses of the data set from the M3-Competition.[2]

References

  1. ^ a b c d e f g h i j k Makridakis, Spyros; Hibon, Michele (October–December 2000). "The M-3 Competition: results, conclusions, and implications" (PDF). International Journal of Forecasting. International Institute of Forecasters and Elsevier. doi:10.1016/S0169-2070(00)00057-1. Retrieved April 19, 2014.
  2. ^ a b Koning, Alex J.; Franses, Philip Hans; Hibon, Michele; Stekler, H. O. (July–September 2005). "The M3 competition: Statistical tests of the results". International Journal of Forecasting. International Institute of Forecasters in collaboration with Elsevier. doi:10.1016/j.ijforecast.2004.10.003.
  3. ^ Hyndman, Rob J.; Koehler, Anne B. (October–December 2006). "Another look at measures of forecast accuracy". International Journal of Forecasting. 22 (4). International Institute of Forecasters in collaboration with Elsevier.
  4. ^ a b "M3-competition (full data)". International Institute of Forecasters. Retrieved April 19, 2014.
  5. ^ a b Makridakis, Spyros; et al. (April–June 1982). "The accuracy of extrapolation (time series) methods: results of a forecasting competition". Journal of Forecasting. 1 (2): 111–153. doi:10.1002/for.3980010202.
  6. ^ a b Makridakis, Spyros; et al. (April 1993). "The M-2 Competition: a real-time judgmentally based forecasting study". International Journal of Forecasting. 9: 5–22. doi:10.1016/0169-2070(93)90044-N.
  7. ^ Geurts, M. D.; Kelly, J. P. (1986). "Forecasting demand for special services". International Journal of Forecasting. 2: 261–272.
  8. ^ Clemen, Robert T. (1989). "Combining forecasts: A review and annotated bibliography" (PDF). International Journal of Forecasting. 5. International Institute of Forecasters: 559–583.
  9. ^ Fildes, R.; Hibon, Michele; Makridakis, Spyros; Meade, N. (1998). "Generalising about univariate forecasting methods: further empirical evidence". International Journal of Forecasting. 14: 339–358.
  10. ^ Newbold, Paul (1983). "The competition to end all competitions". Journal of Forecasting. 2: 276–279.
  11. ^ Fildes, R.; Makridakis, Spyros (1995). "The impact of empirical accuracy studies on time series analysis and forecasting" (PDF). International Statistical Review. 63: 289–308.