Information about Norms and Data Sets

No norms are available on the IPIP website, for reasons explained below.

One should be very wary of using canned "norms" because it isn't obvious that one could ever find a population of which one's present sample is a representative subset. Most "norms" are misleading, and therefore they should not be used.

Far more defensible are local norms, which one develops oneself. For example, to give feedback to members of a class of students, one should relate each individual's score to the means and standard deviations derived from the class itself. To maximize informativeness, one can provide the students with the frequency distribution for each scale, based on these local norms, so that individuals can find (and circle) their own scores on the relevant distributions.
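
For concreteness, here is a minimal sketch (in Python) of how one might compute such local norms and a frequency distribution from a class's scale scores. The scores, variable names, and bin width below are illustrative assumptions, not part of any IPIP materials.

    from statistics import mean, stdev
    from collections import Counter

    def local_norms(class_scores):
        """Return the class mean and standard deviation (the local norms)."""
        return mean(class_scores), stdev(class_scores)

    def frequency_distribution(class_scores, bin_width=5):
        """Tally scores into bins so each student can locate (and circle) their own score."""
        bins = Counter((score // bin_width) * bin_width for score in class_scores)
        return dict(sorted(bins.items()))

    # Hypothetical scale scores for a class of 12 students.
    class_scores = [28, 34, 31, 40, 25, 37, 33, 29, 36, 30, 38, 27]
    m, sd = local_norms(class_scores)
    print(f"Class mean = {m:.1f}, SD = {sd:.1f}")
    print(frequency_distribution(class_scores))

An individual student's standing relative to the class can then be expressed, if desired, as (score - class mean) / class SD.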

That said, some researchers might still be interested in comparing their data to existing data sets or in reanalyzing an existing data set. 

Data from surveys administered to the Eugene-Springfield Community Sample (ESCS) are the basis for many of the statistical properties of IPIP scales reported on the IPIP website. These data can now be accessed from the Harvard Dataverse at https://dataverse.harvard.edu/dataverse/ESCS-Data. Those wishing to access this archive are strongly encouraged to follow the first link, labeled "(0) Documentation and Sample Demographics," and to read the file labeled "Read Me First.txt" before trying to access any of the data at this site.

There are also two other known data archives on the Web, neither of which is officially associated with the IPIP project. Use these data at your own risk.

An archive at https://openpsychometrics.org/_rawdata/ contains raw data collected online from a number of personality scales, including the 50-item IPIP inventory of Big-Five factor markers. Those interested in these data should note that the site mistakenly used the NEO PI-R labels for the five factors instead of the labels for the lexical factors. A printable copy of the site's version of the test repeats these incorrect labels and offers an unconventional scoring key. Users should refer to the IPIP website for the appropriate labels and scoring methods.
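
To make the intended contrast concrete, here is a minimal sketch (in Python) of the conventional scoring approach for IPIP scales: responses fall on a 1-5 scale, positively keyed items are scored as given, negatively keyed items are reversed (6 minus the response), and the item scores are summed. The item numbers and keys below are hypothetical placeholders; the actual key for each scale is listed on the IPIP website.

    def score_scale(responses, keyed_items):
        """responses: item number -> response (1-5);
        keyed_items: item number -> '+' or '-'."""
        total = 0
        for item, key in keyed_items.items():
            r = responses[item]
            total += r if key == '+' else 6 - r  # reverse negatively keyed items
        return total

    # Hypothetical four-item scale keyed +, +, -, -
    keyed_items = {1: '+', 2: '+', 3: '-', 4: '-'}
    responses = {1: 4, 2: 5, 3: 2, 4: 1}
    print(score_scale(responses, keyed_items))  # 4 + 5 + (6 - 2) + (6 - 1) = 18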

Those interested in raw data for the 300-item IPIP representation of the NEO PI-R or Johnson's (2014) 120-item IPIP-NEO can access an archive of such data at Johnson's data repository on the Open Science Framework.