Computer user satisfaction (CUS) is the systematic measurement and evaluation of how well a computer system or application fulfills the needs and expectations of individual users. The measurement of computer user satisfaction studies how interactions with technology can be improved by adapting it to psychological preferences and tendencies.
Evaluating user satisfaction helps gauge product stability, track industry trends, and measure overall user contentment.
Fields like User Interface (UI) Design and User Experience (UX) Design focus on the direct interactions people have with a system. While UI and UX often rely on separate methodologies, they share the goal of making systems more intuitive, efficient, and appealing.
The Problem of Defining Computer User Satisfaction
In the literature, a variety of terms denote computer user satisfaction (CUS): "user satisfaction", "user information satisfaction" (UIS), "system acceptance",[1] "perceived usefulness",[2] "MIS appreciation",[3] "feelings about information systems",[4] and "system satisfaction".[5] This article uses CUS, or simply user satisfaction. Ang and Koh (1997) describe user information satisfaction as "a perceptual or subjective measure of system success."[6] The meaning and significance of CUS therefore depend on the author's definition: users who are satisfied with a system according to one definition and measure may not be satisfied according to another, and vice versa.
According to Doll and Torkzadeh, CUS is defined as the opinion of the user about a specific computer application that they use. Ives and colleagues defined CUS as "the extent to which users believe the information system available to them meets their information requirements."[7]
Several studies have investigated whether particular factors influence CUS. Yaverbaum found that people who use their computers irregularly tend to be more satisfied than regular users.[8]
Mullany, Tan, and Gallupe claim that CUS is chiefly influenced by prior experience with the system or an analogue. Conversely, motivation, they suggest, is based on beliefs about the future use of the system.[9]
Applications
Using findings from CUS research, product designers, business analysts, and software engineers anticipate change and prevent user loss by identifying missing features, shifting requirements, opportunities for general improvement, and needed corrections.
Satisfaction measurements are most often employed by companies or organizations to make their products more appealing to consumers, identify practices that could be streamlined,[10] harvest personal data to sell,[11] and determine the highest price users will accept for a given level of quality.[12] For example, based on satisfaction metrics, a company may decide to discontinue support for an unpopular service. CUS may also be extended to employee satisfaction, where similar motivations apply. CUS surveys can also serve the ulterior motive of pacifying the group being surveyed, since they give respondents an outlet to vent frustrations.
In Doll and Torkzadeh's definition of CUS, the term "user" can refer both to the user of a product and to the user of a device used to access a product.[7]
The CUS and the UIS
Bailey and Pearson's 39-factor Computer User Satisfaction (CUS) questionnaire and the User Information Satisfaction (UIS) instrument are both multi-quality surveys; that is, each asks respondents to rank or rate several categories. Bailey and Pearson asked participants to judge 39 qualities on five scales each: four favorability ratings and one importance ranking. From the importance rankings, they found that their sample of users rated as most important "accuracy, reliability, timeliness, relevancy, and confidence", and as least important "feelings of control, volume of output, vendor support, degree of training, and organizational position of EDP" (the electronic data processing, or computing, department). The CUS instrument therefore requires 39 × 5 = 195 responses.[13]

Ives, Olson, and Baroudi, among others, argued that so many responses could produce attrition errors: respondents may fail to return a questionnaire simply because it is long.[14] This can reduce sample sizes and distort results, since those who do return long questionnaires may differ psychologically from those who do not. Ives and colleagues developed the UIS to address this. It asks the respondent to rate only 13 metrics on two scales each, yielding 26 individual responses. Islam, Koivulahti-Ojala, and Käkölä later argued that measuring CUS in industrial settings is difficult because response rates often remain low, so an even simpler version of the CUS measurement method is needed.[15]
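The response-burden arithmetic above (39 × 5 = 195 responses for the Bailey–Pearson CUS versus 13 × 2 = 26 for the UIS) can be illustrated with a small scoring sketch. The scoring scheme, scales, and example factors below are simplified stand-ins for illustration, not the published instrument's exact procedure:

```python
# Illustrative scoring for a Bailey-and-Pearson-style instrument.
# Assumed scales (hypothetical): four favorability ratings per factor
# on -3..+3, and one importance weight per factor on 1..7.

def factor_score(favorability, importance):
    """Mean of the favorability ratings, weighted by importance."""
    return (sum(favorability) / len(favorability)) * importance

def overall_cus(responses):
    """Importance-weighted mean score across all factors."""
    total = sum(factor_score(fav, imp) for fav, imp in responses)
    weight = sum(imp for _, imp in responses)
    return total / weight

# Two of a full instrument's 39 factors, with invented ratings:
responses = [
    ([2, 3, 2, 3], 7),    # e.g. "accuracy": favorable, rated most important
    ([-1, 0, -1, 0], 2),  # e.g. "vendor support": mildly unfavorable, unimportant
]
print(round(overall_cus(responses), 3))  # → 1.833
```

Even in this toy form, each factor costs five responses, which is why a 39-factor instrument demands 195 answers from every respondent.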
The Problem With Dating of Metrics
An early criticism of these measures was that the surveys would become outdated as computer technology evolved. This led to the development of new metric-based surveys. Doll and Torkzadeh, for example, produced a metric-based survey for the "end user", whom they define as someone who tends to interact with a computer interface alone, without the involvement of operational staff.[7] McKinney, Yoon, and Zahedi developed a model and survey for measuring web-customer satisfaction.[16]
Grounding in Theory
Another difficulty with most of these surveys is their lack of grounding in psychological theory. Exceptions are the model of website design success developed by Zhang and von Dran[17] and the measure of CUS with e-portals developed by Cheung and Lee.[18] Both models drew on Herzberg's two-factor theory of motivation,[19] so their qualities were designed to measure both "satisfiers" and "hygiene factors". However, Herzberg's theory has itself been criticized as vague, particularly for failing to distinguish between terms such as motivation, job motivation, and job satisfaction.[20]
Cognitive style
A study showed that over the life of a system, user satisfaction on average increases as users gain experience with the system.[21] The study found that users' cognitive style (their preferred approach to problem solving) was not an accurate predictor of actual CUS; the same held for the system developers who participated. However, a strong correlation between cognitive style and CUS was observed between 85 and 652 days of system use: users' manner of thinking and their attitudes toward a product became increasingly correlated over time. Some researchers have hypothesized that familiarity with a system may lead users to mentally adapt to accommodate it. Mullany, Tan, and Gallupe devised an instrument, the System Satisfaction Schedule (SSS), which uses user-generated qualities and so avoids the problem of dated qualities.[21] They define CUS as the absence of user dissatisfaction and complaint, as assessed by users who have at least some experience of using the system; motivation, conversely, is based on beliefs about the future use of the system.[9]: 464
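The kind of association reported in such studies is typically expressed as a Pearson correlation between a cognitive-style measure and a satisfaction score. The sketch below computes that coefficient from synthetic data invented for illustration; the actual studies used validated instruments and real samples:

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

# Synthetic data: larger analyst-user cognitive-style differential
# paired with lower measured satisfaction.
style_diff = [1, 2, 3, 4, 5]
satisfaction = [9, 7, 6, 4, 2]
print(round(pearson(style_diff, satisfaction), 2))  # → -0.99
```

A coefficient near −1 or +1 indicates a strong linear association, while a value near 0 (as the studies found for early system use) indicates that cognitive style alone does not predict satisfaction.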
Future developments
Scholars and practitioners continue to experiment with other measurement methods and with refinements to the definition of CUS. Some are replacing structured questionnaires with unstructured ones, in which the respondent is simply asked to write down or dictate everything about a system that satisfies or dissatisfies them. One problem with this approach, however, is that it tends not to yield quantitative results, making comparisons and statistical analysis difficult.
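One crude way to extract a quantitative signal from such unstructured responses is keyword counting, sketched below. The keyword lists and comments are invented for illustration; real analysis would require systematic coding or natural-language processing:

```python
# Naive satisfier/dissatisfier keyword tally over free-text comments.
# Keyword sets are hypothetical examples, not a validated lexicon.
SATISFIERS = {"fast", "reliable", "accurate", "easy"}
DISSATISFIERS = {"slow", "crashes", "confusing", "unreliable"}

def crude_score(comment):
    """Count of satisfier mentions minus dissatisfier mentions in one comment."""
    words = {w.strip(".,!?") for w in comment.lower().split()}
    return len(words & SATISFIERS) - len(words & DISSATISFIERS)

comments = [
    "fast and reliable but sometimes confusing",
    "slow, crashes often",
]
print([crude_score(c) for c in comments])  # → [1, -2]
```

The obvious weakness, which mirrors the criticism in the text, is that such scores discard context, negation, and intensity, so they support only the roughest comparisons.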
References
- ^ Igersheim, Roy H. (1976-06-07). "Managerial response to an information system". Proceedings of the June 7–10, 1976, national computer conference and exposition. AFIPS '76. New York, NY, USA: Association for Computing Machinery: 877–882. doi:10.1145/1499799.1499918. ISBN 978-1-4503-7917-5.
- ^ Larcker, David F.; Lessig, V. Parker (1980). "Perceived Usefulness of Information: A Psychometric Examination". Decision Sciences. 11 (1): 121–134. doi:10.1111/j.1540-5915.1980.tb01130.x. ISSN 1540-5915.
- ^ Swanson, E. Burton (1 October 1974). "Management Information Systems: Appreciation and Involvement". Management Science. 21 (2): 178–188. doi:10.1287/mnsc.21.2.178. ISSN 0025-1909 – via InformsPubsOnLine.
- ^ Maish, Alexander M. (March 1979). "A User's Behavior toward His MIS". MIS Quarterly. 3 (1): 39–52. doi:10.2307/249147. ISSN 0276-7783 – via JSTOR.
- ^ Khalifa, Mohamed; Liu, Vanessa (2004-01-01). "The State of Research on Information System Satisfaction". Journal of Information Technology Theory and Application (JITTA). 5 (4). ISSN 1532-4516.
- ^ Ang, James; Koh, Stella (June 1997). "Exploring the relationships between user information satisfaction and job satisfaction". International Journal of Information Management. 17 (3): 169–177. doi:10.1016/S0268-4012(96)00059-X.
- ^ a b c Doll, William J.; Torkzadeh, Gholamreza (June 1988). "The Measurement of End-User Computing Satisfaction". MIS Quarterly. 12 (2): 259–274. doi:10.2307/248851.
- ^ Yaverbaum, Gayle J. (March 1988). "Critical Factors in the User Environment: An Experimental Study of Users, Organizations and Tasks". MIS Quarterly. 12 (1): 75–88. doi:10.2307/248807. ISSN 0276-7783. Retrieved 8 January 2025 – via JSTOR.
- ^ a b Mullany, Michael J.; Tan, Felix B.; Gallupe, R. Brent (July 2007). "The Impact Of Analyst-User Cognitive Style Differences On User Satisfaction". PACIS 2007 Proceedings: 462–476.
- ^ "What Is a Customer Satisfaction Survey?". Salesforce. Retrieved 2025-01-08.
- ^ "Privacy Policy". Government Executive. 16 January 2024. See the subsection "Other Information you Choose to Provide" under "How We Collect Data" and the subsection "For Other Purposes" under "Who We Share Your Data With". Retrieved 2025-01-08.
- ^ "How to use Pricing Surveys". SurveyMonkey. Retrieved 2025-01-08.
- ^ Bailey, James E.; Pearson, Sammy W. (May 1983). "Development of a Tool for Measuring and Analyzing Computer User Satisfaction". Management Science. 29 (5): 530–545. doi:10.1287/mnsc.29.5.530.
- ^ Ives, Blake; Olson, Margrethe H.; Baroudi, Jack J. (1 October 1983). "The measurement of user information satisfaction". Commun. ACM. 26 (10): 785–793. doi:10.1145/358413.358430.
- ^ Islam, A.K.M. Najmul; Koivulahti-Ojala, Mervi; Käkölä, Timo (August 2010). "A lightweight, industrially-validated instrument to measure user satisfaction and service quality experienced by the users of a UML modeling tool". AMCIS 2010 Proceedings.
- ^ McKinney, Vicki; Yoon, Kanghyun; Zahedi, Fatemeh “Mariam” (September 2002). "The Measurement of Web-Customer Satisfaction: An Expectation and Disconfirmation Approach". Information Systems Research. 13 (3): 296–315. doi:10.1287/isre.13.3.296.76.
- ^ Zhang, Ping; von Dran, Gisela M. (October 2000). "Satisfiers and dissatisfiers: A two-factor model for website design and evaluation". Journal of the American Society for Information Science. 51 (14): 1253–1268. doi:10.1002/1097-4571(2000)9999:9999%3C::AID-ASI1039%3E3.0.CO;2-O.
- ^ Cheung, C. M. K.; Lee, M. K. O. (2005). "The Asymmetric Effect of Website Attribute Performance on Satisfaction: An Empirical Study". Proceedings of the 38th Annual Hawaii International Conference on System Sciences. Big Island, HI, USA. pp. 175c. doi:10.1109/HICSS.2005.585.
- ^ Herzberg, Frederick (1972). Work and the nature of man (reprint ed.). London: Staples Press. ISBN 978-0286620734.
- ^ Islam, A.K.M. Najmul (July 2011). "Information Systems Post-Adoption Satisfaction And Dissatisfaction: A Study In The E-Learning Context". PACIS 2011 Proceedings.
- ^ a b Mullany, Michael John (2006). The Use of Analyst-User Cognitive Style Differentials to Predict Aspects of User Satisfaction with Information Systems (PhD thesis). Auckland University of Technology.
Further reading
- Bargas-Avila, Javier A.; Lötscher, Jonas; Orsini, Sébastien; Opwis, Klaus (November 2009). "Intranet satisfaction questionnaire: Development and validation of a questionnaire to measure user satisfaction with the Intranet". Computers in Human Behavior. 25 (6): 1241–1250. doi:10.1016/j.chb.2009.05.014.
- Baroudi, Jack J.; Orlikowski, Wanda J. (Spring 1988). "A Short-Form Measure of User Information Satisfaction: A Psychometric Evaluation and Notes on Use". Journal of Management Information Systems. 4 (4): 44–59. doi:10.1080/07421222.1988.11517807.
- Delone, William H.; McLean, Ephraim R. (March 1992). "Information Systems Success: The Quest for the Dependent Variable". Information Systems Research. 3 (1): 60–95. doi:10.1287/isre.3.1.60.
- Delone, William H.; McLean, Ephraim R. (January 2002). "Information systems success revisited". Proceedings of the 35th Annual Hawaii International Conference on System Sciences. Los Alamitos, CA: IEEE Computer Society Press. pp. 238–248. doi:10.1109/HICSS.2002.994345.
- Herzberg, Frederick; Mausner, Bernard; Snyderman, Barbara B. (1959). The Motivation to Work (2nd ed.). New York: John Wiley and Sons. p. 257. ISBN 0-471-37389-3.
- Herzberg, Frederick (January–February 1968). "One more time: How do you motivate employees?". Harvard Business Review. 46 (1): 53–62.