Likert scale


A Likert scale (/ˈlɪkərt/ LIK-ərt,[1][note 1]) is a psychometric scale named after its inventor, American social psychologist Rensis Likert,[2] which is commonly used in research questionnaires. It is the most widely used approach to scaling responses in survey research, such that the term (or more fully the Likert-type scale) is often used interchangeably with rating scale, although there are other types of rating scales.

Likert distinguished between a scale proper, which emerges from collective responses to a set of items (usually eight or more), and the format in which responses are scored along a range. Technically speaking, a Likert scale refers only to the former.[3][4] The difference between these two concepts has to do with the distinction Likert made between the underlying phenomenon being investigated and the means of capturing variation that points to the underlying phenomenon.[5]

When responding to a Likert item, respondents specify their level of agreement or disagreement on a symmetric agree-disagree scale for a series of statements. Thus, the range captures the intensity of their feelings for a given item.[6]

A scale can be created as the simple sum or average of questionnaire responses over the set of individual items (questions). In so doing, Likert scaling assumes distances between each choice (answer option) are equal. Many researchers employ a set of such items that are highly correlated (that show high internal consistency) but also that together will capture the full domain under study (which requires less-than perfect correlations). Others hold to a standard by which "All items are assumed to be replications of each other or in other words items are considered to be parallel instruments".[7]: 197  By contrast, modern test theory treats the difficulty of each item (the ICCs) as information to be incorporated in scaling items.[8]
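The summing and internal-consistency checks described above can be sketched in a few lines. The response matrix below is made-up illustrative data, and Cronbach's alpha is used here as one common measure of internal consistency (it is not the only choice):

```python
import numpy as np

# Hypothetical responses: 6 respondents x 4 Likert items, coded 1-5.
responses = np.array([
    [4, 5, 4, 4],
    [2, 1, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [1, 2, 1, 1],
    [4, 4, 5, 4],
])

# Scale score per respondent: the simple sum over the items.
scale_scores = responses.sum(axis=1)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
# of the total score). Values near 1 indicate high internal consistency.
k = responses.shape[1]
item_vars = responses.var(axis=0, ddof=1)
total_var = scale_scores.var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Averaging instead of summing (`responses.mean(axis=1)`) yields the same ordering of respondents, just on the original 1-to-5 metric.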

Composition

An example questionnaire about a website design, with answers as a Likert scale

A Likert scale is the sum of responses on several Likert items. Because many Likert scales pair each constituent Likert item with its own instance of a visual analogue scale (e.g., a horizontal line, on which the subject indicates a response by circling or checking tick-marks), an individual item is itself sometimes erroneously referred to as being or having a scale, with this error creating pervasive confusion in the literature and parlance of the field.

A Likert item is simply a statement that the respondent is asked to evaluate by giving it a quantitative value on any kind of subjective or objective dimension, with level of agreement/disagreement being the dimension most commonly used. Well-designed Likert items exhibit both "symmetry" and "balance". Symmetry means that they contain equal numbers of positive and negative positions whose respective distances apart are bilaterally symmetric about the "neutral"/zero value (whether or not that value is presented as a candidate). Balance means that the distance between each candidate value is the same, allowing for quantitative comparisons such as averaging to be valid across items containing more than two candidate values.[9]

The format of a typical five-level Likert item, for example, could be:

  1. Strongly disagree
  2. Disagree
  3. Neither agree nor disagree
  4. Agree
  5. Strongly agree
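A conventional numeric coding of these five labels, as used in the scoring discussed below, can be written as a simple mapping (the specific integers are a convention chosen by the researcher, not intrinsic to the labels):

```python
# Conventional 1-5 coding of the five-level agree-disagree format.
codes = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}
score = codes["Agree"]  # 4
```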

Likert scaling is a bipolar scaling method, measuring either positive or negative response to a statement. Sometimes an even-point scale is used, where the middle option of "neither agree nor disagree" is not available. This is sometimes called a "forced choice" method, since the neutral option is removed.[10] The neutral option can be seen as an easy option to take when a respondent is unsure, and so whether it is a true neutral option is questionable. A 1987 study found negligible differences between the use of "undecided" and "neutral" as the middle option in a five-point Likert scale.[11]

Likert scales may be subject to distortion from several causes. Respondents may:

  • Avoid using extreme response categories (central tendency bias), especially out of a desire to avoid being perceived as having extremist views (an instance of social desirability bias). This effect may appear early in a test due to an expectation that questions which the subject has stronger views on may follow, such that on earlier questions one "leaves room" for stronger responses later in the test. This expectation creates bias that is especially pernicious in that its effects are not uniform throughout the test and cannot be corrected for through simple across-the-board normalization;
  • Agree with statements as presented (acquiescence bias), for example, agreeing with both Statement A and its opposite. This effect is especially strong among children, people with developmental disabilities, elderly people, and individuals who are subjected to a culture of institutionalization that encourages and incentivizes eagerness to please;
  • Disagree with sentences as presented out of a defensive desire to avoid making erroneous statements and/or avoid negative consequences that respondents may fear will result from their answers being used against them, especially if misinterpreted and/or taken out of context;
  • Provide answers that they believe will be evaluated as indicating strength or lack of weakness/dysfunction ("faking good");
  • Provide answers that they believe will be evaluated as indicating weakness or presence of impairment/pathology ("faking bad");
  • Try to portray themselves or their organization in a light that they believe the examiner or society to consider more favorable than their true beliefs (social desirability bias, the intersubjective version of objective "faking good" discussed above);
  • Try to portray themselves or their organization in a light that they believe the examiner or society to consider less favorable/more unfavorable than their true beliefs (norm defiance, the intersubjective version of objective "faking bad" discussed above).

Designing a scale with balanced keying (an equal number of positive and negative statements and, especially, an equal number of positive and negative statements regarding each position or issue in question) can obviate the problem of acquiescence bias, since acquiescence on positively keyed items will balance acquiescence on negatively keyed items. Defensive, central tendency, and social desirability biases, however, are somewhat more difficult to counter.
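With balanced keying, negatively keyed items are typically reverse-coded before summing, so that higher numbers consistently indicate the same pole of the construct. A minimal sketch of the usual transformation on a 1-to-5 coding:

```python
# Reverse-code a negatively keyed item on a 1-5 scale:
# new = (min + max) - old, i.e. 6 - old here, so that higher
# values always indicate agreement with the underlying construct.
negatively_keyed = [1, 2, 3, 4, 5]
reverse_coded = [(1 + 5) - r for r in negatively_keyed]
# reverse_coded == [5, 4, 3, 2, 1]
```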

Scoring and analysis


After the questionnaire is completed, each item may be analyzed separately or in some cases item responses may be summed to create a score for a group of items. Hence, Likert scales are often called summative scales.

Whether individual Likert items can be considered as interval-level data, or whether they should be treated as ordered-categorical data is the subject of considerable disagreement in the literature,[12][13] with strong convictions on what are the most applicable methods. This disagreement can be traced back, in many respects, to the extent to which Likert items are interpreted as being ordinal data.

There are two primary considerations in this discussion. First, Likert scales are arbitrary. The value assigned to a Likert item has no objective numerical basis, either in terms of measure theory or scale (from which a distance metric can be determined). The value assigned to each Likert item is simply determined by the researcher designing the survey, who makes the decision based on a desired level of detail. However, by convention Likert items tend to be assigned progressive positive integer values. Likert scales typically range from 2 to 10 points, with 3, 5, or 7 being the most common.[14] Further, this progressive structure of the scale is such that each successive category is treated as indicating a 'better' response than the preceding one. (This may differ in cases where reverse ordering of the Likert scale is needed.)

The second, and possibly more important, point is whether the "distance" between each successive item category is equivalent, as is traditionally inferred. For example, in the above five-point Likert item, the inference is that the 'distance' between categories 1 and 2 is the same as between categories 3 and 4. In terms of good research practice, an equidistant presentation by the researcher is important; otherwise a bias in the analysis may result. For example, a four-point Likert item with categories "Poor", "Average", "Good", and "Very Good" is unlikely to have equidistant categories, since only one category can receive a below-average rating. This would arguably bias any result in favor of a positive outcome. On the other hand, even if a researcher presents what he or she believes are equidistant categories, the respondent may not interpret them as such.

A good Likert scale, as above, will present a symmetry of categories about a midpoint with clearly defined linguistic qualifiers. In such symmetric scaling, equidistant attributes will typically be more clearly observed or, at least, inferred. It is when a Likert scale is symmetric and equidistant that it will behave more like an interval-level measurement. So while a Likert scale is indeed ordinal, if well presented it may nevertheless approximate an interval-level measurement. This can be beneficial since, if it were treated merely as an ordinal scale, valuable information could be lost if the 'distance' between Likert items were not available for consideration. The important idea here is that the appropriate type of analysis depends on how the Likert scale has been presented.

The validity of such measures depends on the underlying interval nature of the scale. If interval nature is assumed for a comparison of two groups, the paired samples t-test is appropriate.[4] If non-parametric tests are to be performed, the Pratt (1959)[15] modification to the Wilcoxon signed-rank test is recommended over the standard Wilcoxon signed-rank test.[4]
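Both approaches can be sketched with SciPy, whose `wilcoxon` function implements the Pratt treatment of zero differences via its `zero_method` parameter; the paired responses below are made-up illustrative data:

```python
from scipy.stats import ttest_rel, wilcoxon

# Hypothetical paired responses (same respondents, two occasions), coded 1-5.
before = [3, 4, 2, 5, 3, 4, 2, 3]
after_ = [4, 4, 3, 5, 4, 5, 3, 3]

# Parametric comparison, valid if an interval interpretation is assumed.
t_stat, t_p = ttest_rel(before, after_)

# Non-parametric comparison; zero_method="pratt" keeps zero-difference
# pairs in the ranking (the Pratt 1959 modification) rather than
# discarding them as the standard Wilcoxon procedure does.
w_stat, w_p = wilcoxon(before, after_, zero_method="pratt")
```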

Responses to several Likert questions may be summed, providing that all questions use the same Likert scale and that the scale is a defensible approximation to an interval scale, in which case the central limit theorem allows treatment of the data as interval data measuring a latent variable.[citation needed] If the summed responses fulfill these assumptions, parametric statistical tests such as the analysis of variance can be applied. A typical cutoff for treating this approximation as acceptable is a minimum of four, and preferably eight, items in the sum.[5][13]
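As a sketch of the parametric route, a one-way analysis of variance can be run on summed scale scores; the group scores below are made-up illustrative data for a scale of eight 1-to-5 items (so each sum lies between 8 and 40):

```python
from scipy.stats import f_oneway

# Hypothetical summed scale scores for three independent groups.
group_a = [28, 31, 25, 30, 27]
group_b = [22, 24, 21, 26, 23]
group_c = [33, 35, 30, 34, 36]

# One-way ANOVA: tests whether the group means differ.
f_stat, p_value = f_oneway(group_a, group_b, group_c)
```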

To model binary Likert responses directly, they may be represented in a binomial form by summing agree and disagree responses separately. The chi-squared, Cochran's Q, and McNemar tests are common statistical procedures used after this transformation. Non-parametric tests such as the chi-squared, Mann–Whitney, Wilcoxon signed-rank, and Kruskal–Wallis tests[16] are often used in the analysis of Likert scale data.
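The binomial transformation and a subsequent chi-squared test can be sketched as follows; the counts are made-up illustrative data for two groups, with neutral responses excluded:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts after collapsing 5-point responses to binary form:
#            agree  disagree
table = [[40, 10],   # group 1
         [25, 25]]   # group 2

# Chi-squared test of independence between group and agreement.
chi2, p, dof, expected = chi2_contingency(table)
```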

Alternatively, Likert scale responses can be analyzed with an ordered probit model, preserving the ordering of responses without the assumption of an interval scale. The use of an ordered probit model can prevent errors that arise when treating ordered ratings as interval-level measurements.[17] Consensus-based assessment (CBA) can be used to create an objective standard for Likert scales in domains where no generally accepted or objective standard exists, and to refine or even validate generally accepted standards.[citation needed]
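The core of the ordered probit model is that a latent continuous variable is cut by thresholds into the observed ordered categories. A minimal sketch of the category probabilities, with made-up thresholds and a made-up linear predictor (a fitted model would estimate both):

```python
import numpy as np
from scipy.stats import norm

# Ordered probit: P(Y = k) = Phi(tau_k - x*beta) - Phi(tau_{k-1} - x*beta),
# with tau_0 = -inf and tau_K = +inf. Thresholds and the linear
# predictor below are hypothetical values for illustration.
thresholds = np.array([-np.inf, -1.5, -0.5, 0.5, 1.5, np.inf])  # 5 categories
linear_predictor = 0.3  # x*beta for one respondent

probs = (norm.cdf(thresholds[1:] - linear_predictor)
         - norm.cdf(thresholds[:-1] - linear_predictor))
# probs holds the probabilities of the five ordered categories; they sum to 1.
```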

Latent variable models


A common practice for analyzing responses to collections of Likert scale items is to summarize them via a latent variable model, for example using factor analysis or item response theory.

Rasch model


Likert scale data can, in principle, be used as a basis for obtaining interval level estimates on a continuum by applying the polytomous Rasch model, when data can be obtained that fit this model. In addition, the polytomous Rasch model permits testing of the hypothesis that the statements reflect increasing levels of an attitude or trait, as intended. For example, application of the model often indicates that the neutral category does not represent a level of attitude or trait between the disagree and agree categories.

Not every set of Likert scaled items can be used for Rasch measurement. The data have to be checked thoroughly to ensure they satisfy the strict formal axioms of the model. However, the raw scores are the sufficient statistics for the Rasch measures, a deliberate choice by Georg Rasch, so if the raw scores are accepted as valid, the Rasch measures can be accepted as valid as well.

Visual presentation of Likert-type data


An important part of data analysis and presentation is the visualization (or plotting) of data. The subject of plotting Likert (and other) rating data is discussed at length in two papers by Robbins and Heiberger.[18] In the first they recommend the use of what they call diverging stacked bar charts and compare them to other plotting styles. The second paper[19] describes the use of the Likert function in the HH package for R, and gives many examples of its use.
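The defining feature of a diverging stacked bar chart is that each item's bar is anchored so that the midpoint of the neutral category sits at zero, with disagreement extending left and agreement extending right. A minimal sketch of that offset computation, using made-up percentages:

```python
# Hypothetical percentages for one item's five categories:
# [Strongly disagree, Disagree, Neutral, Agree, Strongly agree].
pcts = [10, 20, 20, 35, 15]

# Diverging layout: negative categories plus half of the neutral
# category extend left of zero; the rest extend right.
left_extent = pcts[0] + pcts[1] + pcts[2] / 2   # extent of the "disagree" side
right_extent = pcts[4] + pcts[3] + pcts[2] / 2  # extent of the "agree" side
```

In the HH package for R, the `likert` function performs this layout automatically; the sketch above only illustrates the underlying geometry.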

Level of measurement


The five response categories are often believed to represent an interval level of measurement. However, this can only be the case if the intervals between the scale points correspond to empirical observations in a metric sense. Reips and Funke (2008)[20] show that this criterion is much better met by a visual analogue scale. In fact, phenomena may appear that call even the ordinal scale level of Likert scales into question.[21] For example, in a set of items A, B, and C rated with a Likert scale, circular relations such as A > B, B > C, and C > A can appear. This violates the axiom of transitivity for the ordinal scale.
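The circular pattern described above can be made concrete with a small check on hypothetical majority preferences from paired comparisons:

```python
# Hypothetical majority preferences among items A, B, C, where each
# tuple (x, y) means "x was preferred over y". This set is circular.
prefs = {("A", "B"), ("B", "C"), ("C", "A")}

# Transitivity would require: if A > B and B > C, then A > C.
violates_transitivity = (
    ("A", "B") in prefs and ("B", "C") in prefs and ("A", "C") not in prefs
)
# violates_transitivity is True for this circular pattern.
```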

Research by Labovitz[22] and Traylor[23] provides evidence that, even with rather large distortions of perceived distances between scale points, Likert-type items perform closely to scales that are perceived as equal intervals. Thus, these items and other equal-appearing scales in questionnaires are robust to violations of the equal-distance assumption that many researchers believe is required for parametric statistical procedures and tests.

Pronunciation


Rensis Likert, the developer of the scale, pronounced his name /ˈlɪkərt/ LIK-ərt.[24][25] Some have claimed that Likert's name "is among the most mispronounced in [the] field",[26] because many people pronounce the name of the scale as /ˈlaɪkərt/ LY-kərt.


Notes

  1. ^ Commonly mispronounced as /ˈlaɪkərt/ LY-kərt

References

  1. ^ Wuensch, Karl L. (October 4, 2005). "What is a Likert Scale? and How Do You Pronounce 'Likert?'". East Carolina University. Retrieved December 16, 2023.
  2. ^ Likert, Rensis (1932). "A Technique for the Measurement of Attitudes". Archives of Psychology. 140: 1–55.
  3. ^ Spector, Paul E (1992). Summated Rating Scale Construction. Sage.
  4. ^ a b c Derrick, B; White, P (2017). "Comparing Two Samples from an Individual Likert Question". International Journal of Mathematics and Statistics. 18 (3): 1–13.
  5. ^ a b Carifio, James; Perla, Rocco (2007). "Ten Common Misunderstandings, Misconceptions, Persistent Myths and Urban Legends about Likert Scales and Likert Response Formats and their Antidotes". Journal of Social Sciences. 3 (3): 106–116. doi:10.3844/jssp.2007.106.116.
  6. ^ Burns, Alvin; Burns, Ronald (2008). Basic Marketing Research (Second ed.). New Jersey: Pearson Education. pp. 245. ISBN 978-0-13-205958-9.
  7. ^ van Alphen, A.; Halfens, R.; Hasman, A.; Imbos, T. (1994). "Likert or Rasch? Nothing is more applicable than good theory". Journal of Advanced Nursing. 20 (1): 196–201. doi:10.1046/j.1365-2648.1994.20010196.x. PMID 7930122.
  8. ^ Rusch, Thomas; Lowry, Paul B.; Mair, Patrick; Treiblmaier, Horst (2017). "Breaking free from the limitations of classical test theory: Developing and measuring information systems scales using item response theory" (PDF). Information & Management. 54 (2): 189–203. doi:10.1016/j.im.2016.06.005.
  9. ^ Burns, Alvin; Burns, Ronald (2008). Basic Marketing Research (Second ed.). New Jersey: Pearson Education. pp. 250. ISBN 978-0-13-205958-9.
  10. ^ Allen, Elaine; Seaman, Christopher (2007). "Likert Scales and Data Analyses". Quality Progress. pp. 64–65.
  11. ^ Armstrong, Robert (1987). "The midpoint on a Five-Point Likert-Type Scale". Perceptual and Motor Skills. 64 (2): 359–362. doi:10.2466/pms.1987.64.2.359. S2CID 145705789.
  12. ^ Jamieson, Susan (2004). "Likert Scales: How to (Ab)use Them" (PDF). Medical Education. 38 (12): 1217–1218. doi:10.1111/j.1365-2929.2004.02012.x. PMID 15566531. S2CID 42509064.
  13. ^ a b Norman, Geoff (2010). "Likert scales, levels of measurement and the "laws" of statistics". Advances in Health Sciences Education. 15 (5): 625–632. doi:10.1007/s10459-010-9222-y. PMID 20146096. S2CID 6566608.
  14. ^ "Likert Scale Explanation - With an Interactive Example". SurveyKing. Retrieved 13 August 2017.
  15. ^ Pratt, J. (1959). "Remarks on zeros and ties in the Wilcoxon signed rank procedures". Journal of the American Statistical Association. 54 (287): 655–667. doi:10.1080/01621459.1959.10501526.
  16. ^ Mogey, Nora (March 25, 1999). "So You Want to Use a Likert Scale?". Learning Technology Dissemination Initiative. Heriot-Watt University. Retrieved April 30, 2009.
  17. ^ Liddell, T.; Kruschke, J. (2018). "Analyzing ordinal data with metric models: What could possibly go wrong?". Journal of Experimental Social Psychology. 79: 328–348. doi:10.1016/j.jesp.2018.08.009. hdl:2022/21970.
  18. ^ Robbins, N. B.; Heiberger, R. M. (2011). "Plotting Likert and Other Rating Scales" (PDF). JSM Proceedings, Section on Survey Research Methods. American Statistical Association. pp. 1058–1066.
  19. ^ Heiberger, R. M.; Robbins, N. B. (2014). "Design of Diverging Stacked Bar Charts for Likert Scales and Other Applications". Journal of Statistical Software. Vol. 57. American Statistical Association. pp. 1–32. doi:10.18637/jss.v057.i05. S2CID 61139330.
  20. ^ Reips, Ulf-Dietrich; Funke, Frederik (2008). "Interval level measurement with visual analogue scales in Internet-based research: VAS Generator". Behavior Research Methods. 40 (3): 699–704. doi:10.3758/BRM.40.3.699. PMID 18697664.
  21. ^ Johanson, George A.; Gips, Crystal J. (April 12–16, 1993). Paired Comparison Intransitivity: Useful Information or Nuisance? (PDF). The Annual Meeting of the American Educational Research Association. Atlanta, GA.
  22. ^ Labovitz, S. (1967). "Some observations on measurement and statistics". Social Forces. 46 (2): 151–160. doi:10.2307/2574595. JSTOR 2574595.
  23. ^ Traylor, Mark (October 1983). "Ordinal and interval scaling". Journal of the Market Research Society. 25 (4): 297–303.
  24. ^ Babbie, Earl R. (2005). The Basics of Social Research. Belmont, CA: Thomson Wadsworth. p. 174. ISBN 978-0-534-63036-2.
  25. ^ Meyers, Lawrence S.; Guarino, Anthony; Gamst, Glenn (2005). Applied Multivariate Research: Design and Interpretation. Sage Publications. p. 20. ISBN 978-1-4129-0412-4.
  26. ^ Latham, Gary P. (2006). Work Motivation: History, Theory, Research, And Practice. Thousand Oaks, Calif.: Sage Publications. p. 15. ISBN 978-0-7619-2018-2.