The NFL theorems have stimulated a great deal of subsequent work, with over 2,500 citations of the original NFL paper alone by spring 2012, according to Google Scholar. However, arguably much of that research has missed the most important implications of the theorems. As stated in that paper, the primary importance of the NFL theorems for search is what they tell us about “the underlying mathematical ‘skeleton’ of optimization theory before the ‘flesh’ of the probability distributions of a particular context and set of optimization problems are imposed”. So in particular, while the NFL theorems have strong implications if one believes in a uniform distribution over optimization problems, in no sense should they be interpreted as advocating such a distribution. In this short note I elaborate on this perspective on what is really important about the NFL theorems for search. I then discuss how the fact that there are NFL theorems for both search and supervised learning is symptomatic of the deep formal relationship between those two fields. Once that relationship is disentangled, it suggests many ways that we can exploit practical techniques first developed in supervised learning to help us do search. I summarize some experiments that confirm the power of search algorithms developed in this way. I end by briefly discussing the various free lunch theorems that have been derived, and possible directions for future research.
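For context, the core result being referenced can be stated in its standard form (following the notation of the original Wolpert–Macready formulation, in which a search algorithm $a$ applied to an objective function $f : X \to Y$ generates a sequence $d^y_m$ of $m$ sampled cost values): for any pair of search algorithms $a_1$ and $a_2$,

$$
\sum_{f} P\left(d^y_m \mid f, m, a_1\right) \;=\; \sum_{f} P\left(d^y_m \mid f, m, a_2\right),
$$

where the sum runs over all possible objective functions $f$. In words: averaged uniformly over all problems, all search algorithms have identical performance. The point of the note above is that this uniform average is a structural baseline, not an endorsement of the uniform distribution as a model of real-world problems.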
Title: What the No Free Lunch Theorems Really Mean; How to Improve Search Algorithms
David H. Wolpert
SFI WP 12-10-017