Attribute MSA with Minitab

Use Minitab to Implement an Attribute MSA

Data File: “Attribute MSA” tab in “Sample Data.xlsx” (an example in the AIAG MSA Reference Manual, 3rd Edition).
Steps in Minitab to run an attribute MSA:

Step 1: Reorganize the original data into four new columns (i.e., Appraiser, Assessed Result, Part, and Reference).

  1. Click Data → Stack → Blocks of Columns.
  2. A new window named “Stack Blocks of Columns” pops up.
  3. Select “Appraiser A,” “Part,” and “Reference” as block one.
  4. Select “Appraiser B,” “Part,” and “Reference” as block two.
  5. Select “Appraiser C,” “Part,” and “Reference” as block three.
  6. Select the “New worksheet” radio button and name the new worksheet “Data.”
  7. Check the box “Use variable names in subscript column.”
  8. Click “OK.”
  9. The stacked columns are created in the new worksheet named “Data.”
  10. Name the four columns from left to right in worksheet “Data”: Appraiser, Assessed Result, Part, and Reference.
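
If you prefer to prepare the stacked data outside of Minitab, the same reshaping can be sketched in Python with pandas. This is a minimal sketch only, assuming the “Attribute MSA” tab holds one ratings column per appraiser named “Appraiser A,” “Appraiser B,” and “Appraiser C,” plus “Part” and “Reference” (these column names are taken from the steps above, not confirmed against the file).

    # Minimal pandas sketch of Step 1 (column names are assumptions, see above)
    import pandas as pd

    wide = pd.read_excel("Sample Data.xlsx", sheet_name="Attribute MSA")

    blocks = []
    for appraiser in ["Appraiser A", "Appraiser B", "Appraiser C"]:
        blocks.append(pd.DataFrame({
            "Appraiser": appraiser,               # plays the role of the subscript column
            "Assessed Result": wide[appraiser],   # that appraiser's ratings
            "Part": wide["Part"],
            "Reference": wide["Reference"],
        }))

    # Equivalent of Data > Stack > Blocks of Columns into a new "Data" worksheet
    data = pd.concat(blocks, ignore_index=True)
    print(data.head())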

Step 2: Run an attribute MSA in Minitab

  1. Click Stat → Quality Tools → Attribute Agreement Analysis.
  2. A new window named “Attribute Agreement Analysis” pops up.
  3. Click in the blank box next to “Attribute column” and the variables appear in the list box on the left.
  4. Select “Assessed Result” as the “Attribute column.”
  5. Select “Part” as “Sample.”
  6. Select “Appraiser” as “Appraisers.”
  7. Select “Reference” as “Known standard/attribute.”
  8. Click the “Options” button, and another window named “Attribute Agreement Analysis – Options” pops up.
  9. Check both the “Calculate Cohen’s kappa if appropriate” and “Display disagreement table” boxes.
  10. Click “OK” in the “Attribute Agreement Analysis – Options” window.
  11. Click “OK” in the “Attribute Agreement Analysis” window.
  12. The MSA results appear in the newly generated window and in the session window.
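
As a rough cross-check of the session-window numbers, the appraiser-versus-standard agreement and Cohen’s kappa can also be computed in Python from the stacked data built earlier. This sketch uses scikit-learn’s cohen_kappa_score and assumes the `data` frame from the pandas sketch above; Minitab’s full report additionally includes within-appraiser and between-appraiser statistics, which rely on the repeated trials and are not reproduced here.

    # Rough cross-check of the Minitab output, assuming the stacked frame `data`
    # (columns: Appraiser, Assessed Result, Part, Reference) from the earlier sketch.
    from sklearn.metrics import cohen_kappa_score

    for appraiser, group in data.groupby("Appraiser"):
        # Percent of assessments that match the known standard
        agreement = (group["Assessed Result"] == group["Reference"]).mean()
        # Cohen's kappa of this appraiser against the standard
        kappa = cohen_kappa_score(group["Assessed Result"], group["Reference"])
        print(f"{appraiser}: agreement vs. standard = {agreement:.1%}, kappa = {kappa:.3f}")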

The within-appraiser scores show how consistently each rater agrees with his or her own repeated assessments. Appraiser A, for instance, agreed with himself on 84% of the measurements he made.


The key number is the overall agreement: of the 50 total measurements performed, the appraisers agreed with both themselves and the other appraisers on 39, or 78%.
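
That overall figure can be checked from the stacked data as well. This is a sketch under the same assumptions as above (the 39-of-50 expectation comes from the text, and the check simply counts parts on which every stacked assessment matches the reference):

    # Count parts on which every appraiser's assessment matches the reference
    all_agree = (
        data.assign(match=data["Assessed Result"] == data["Reference"])
            .groupby("Part")["match"].all()
    )
    print(all_agree.sum(), "of", all_agree.size, "parts")   # expected: 39 of 50 (78%)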

The Kappa statistic is a coefficient indicating how far the observed agreement exceeds the agreement expected by chance; it is computed as Kappa = (observed agreement − chance agreement) / (1 − chance agreement). Kappa ranges from −1 (perfect disagreement) to 1 (perfect agreement): when the observed agreement is less than the chance agreement, Kappa is negative, and when it is greater than the chance agreement, Kappa is positive. Rule of thumb: if Kappa is greater than 0.7, the measurement system is acceptable; if Kappa is greater than 0.9, the measurement system is excellent.
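
As a quick worked example of the formula (with illustrative, made-up numbers, not values from the AIAG data set):

    # Worked example of Kappa = (p_o - p_e) / (1 - p_e) with made-up numbers
    p_o = 0.90   # observed agreement
    p_e = 0.50   # agreement expected by chance
    kappa = (p_o - p_e) / (1 - p_e)
    print(kappa)   # 0.8 -> acceptable by the rule of thumb above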


Summary: in all cases, the Kappa values indicate that the measurement system is acceptable.

About Michael Parker

Michael Parker is the President and CEO of the Lean Sigma Corporation, a management consulting firm and online Six Sigma training, certification, and courseware provider. Michael has over 25 years of experience leading and executing Lean Six Sigma programs and projects. As a Fortune 50 senior executive, Michael led oversight of project portfolios as large as 150 concurrent projects exceeding $100 million in annual capital expenditures. Michael has also managed multi-site operations with accountability for over 250 quality assurance managers, analysts, and consultants. He is an economist by education, earning his Bachelor of Science degree from Radford University while also lettering four years as an NCAA Division I scholarship athlete. Michael earned his Six Sigma Master Black Belt certification from Bank of America and his Black Belt certification from R.R. Donnelley & Sons.
