It is well established in I/O psychology that cognitive ability is one of the single best predictors of job performance across a vast array of occupations. As such, cognitive ability tests are commonly created and used as a personnel selection tool by organizations. Such tests are typically validated using a criterion-related validity strategy, meaning that the test's usefulness for predicting subsequent job performance is assessed. However, content validity, another important "type" of validity, which refers to the extent to which a test adequately samples the domain of interest, is often ignored in the validation process. In a recent article, Frank Schmidt argues that both types of validity can (and should) be assessed when creating a new cognitive ability measure.