Assessing Student Learning in a Multidisciplinary Undergraduate Honors Program

Kylie King Goodell, QUEST Honors Program, UMCP

Jeffrey W. Herrmann, Mechanical Engineering and Institute for Systems Research, UMCP

1. The QUEST Honors Program

The QUEST Honors Program is a multidisciplinary honors program for undergraduate students at UMCP. Students are selected from three colleges: the Robert H. Smith School of Business, the A. James Clark School of Engineering, and the College of Computer, Mathematical, and Natural Sciences. Approximately 90 students enter the program every academic year. Students in the program learn to apply quality management tools, improve processes, and design systems. In addition to two electives, QUEST students take three required courses (BMGT/ENES 190H, 390H, and 490H) that incorporate a variety of learning activities, including team projects in which students generate, evaluate, and recommend solutions to real-world problems in industry and government.

Like other program directors, we are interested in knowing and demonstrating that our curriculum is effective. Essentially, are QUEST students able to apply quality management tools, improve processes, and design systems? The desire to answer this question led us to develop learning outcomes.

Table 1: Map of Learning Outcomes and Assessment Mechanisms in Required Courses

In 2010, we organized and hosted a workshop, sponsored by the National Science Foundation, with other multidisciplinary engineering, technology, and management (METM) programs. After we drafted some initial learning outcomes during that workshop, additional discussion and editing led to eight learning outcomes (listed in Table 1), and for each one we developed four elements. These outcomes and elements were also influenced by Bloom’s taxonomy [1] and Anderson and Krathwohl’s revised taxonomy [2]. For example, outcomes relating to “knowledge” were revised to represent higher orders of thinking or doing. We mapped each learning outcome (LO) to one or more of our required courses and developed assessments and rubrics for the elements. As illustrated in Table 1, five of these learning outcomes are assessed in more than one course, and all but one are assessed through more than one instrument (more than one exam, paper, or presentation).

 

2. Learning Outcomes

Each learning outcome has four elements that describe specific skills that the students should be able to perform. We use rubrics to define the level of proficiency on each element; Table 2 lists the rubrics for Learning Outcome 1 as an example. The following items describe the elements of each learning outcome (a brief data-structure sketch follows the list):

  • Learning Outcome 1: the process for selecting a tool or approach for a problem, the appropriateness of the selected tool or approach, the ability to use the selected tool or approach, and the ability to evaluate a solution.
  • Learning Outcome 2: the ability to define a specific problem, construct a prototype, generate a novel solution (innovation), and define a clear market for the innovation.
  • Learning Outcome 3: the ability to use qualitative techniques to analyze a problem, use quantitative techniques to analyze solutions, synthesize both qualitative and quantitative techniques to develop more insight, and choose an appropriate methodology.
  • Learning Outcome 4: the ability to understand client needs, select the most appropriate methodology, analyze data, and make an appropriate recommendation.
  • Learning Outcome 5: the ability to define roles within a team and be accountable, document tasks and transfer information, identify and address conflict, and define a team’s mission.
  • Learning Outcome 6: the ability to articulate thoughts and ideas clearly and correctly, convey enthusiasm, communicate concisely, directly, and logically, and describe technical concepts.
  • Learning Outcome 7: the ability to understand and decompose a complex task, define the scope of a project, allocate resources efficiently, and anticipate and mitigate project risks.
  • Learning Outcome 8: the ability to listen, understand, and reflect a message, communicate with respect, maintain appropriate personal appearance, recognize ethical issues, and act on ethical principles.
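As noted above, one convenient way to keep the outcomes, elements, and rubric levels organized for electronic assessment forms is a small data structure. The sketch below is a hypothetical representation (not the program’s actual system); the element descriptions for Learning Outcome 1 are paraphrased from the list above.

```python
from dataclasses import dataclass, field

# Rubric levels used across all elements (4 = highest).
RUBRIC_LEVELS = {4: "Advanced", 3: "Proficient", 2: "Developing", 1: "Unacceptable"}

@dataclass
class Element:
    """One of the four elements of a learning outcome."""
    description: str

@dataclass
class LearningOutcome:
    """A learning outcome with its four elements."""
    number: int
    title: str
    elements: list = field(default_factory=list)

# Example: Learning Outcome 1 (element descriptions paraphrased from the list above).
lo1 = LearningOutcome(
    number=1,
    title="Apply quality management tools, improve processes, and design systems",
    elements=[
        Element("Process for selecting a tool or approach for a problem"),
        Element("Appropriateness of the selected tool or approach"),
        Element("Ability to use the selected tool or approach"),
        Element("Ability to evaluate a solution"),
    ],
)
```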

 

Table 2: Rubric for Learning Outcome 1: Apply quality management tools, improve processes, and design systems.

3. Assessment Process

The assessment process evaluated the 32 elements of the learning outcomes in a variety of ways, including evaluations of presentations and papers and surveys of faculty advisors and representatives from our corporate partners. These evaluations were conducted by faculty and staff members, students who served as team mentors, and alumni who attended class sessions to give feedback on presentations. The assessments themselves were not used for grading, although some materials (such as presentations and reports) served as the basis for both grading and learning outcomes assessment.

We are currently using the following process for learning outcomes assessment (LOA). Before each semester, the program leadership and course instructors meet to review the LOA plan for each course and determine the timing of assessments. Assessments are assigned to faculty and staff members and to students on the program’s Curriculum Review Committee (CRC). At the beginning of the semester, the CRC meets to review the LOA plan and assignments. During the semester, the CRC members complete the assessments as students complete activities, presentations, exams, and reports. Electronic forms based on the rubrics automate the data collection task. Each month, the program leadership and course instructors meet to discuss the assessments completed so far and to identify areas where student performance is weak, so that material can be reviewed or additional practice added before the end of the semester. At the end of the semester, the CRC meets to review the assessment data and course activities and to discuss opportunities to enhance the courses and improve the LOA plan for the next semester.
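As an illustration only, the sketch below shows how the electronic-form records might be accumulated and scanned during the semester to flag weak elements. The record layout, field names, and the 50% threshold are assumptions for this sketch, not a description of the program’s actual forms or criteria.

```python
from collections import defaultdict

# Each completed electronic form yields one record:
# (course, learning outcome number, element index, rubric score 1-4).
assessments = [
    ("BMGT/ENES 190H", 1, 1, 4),
    ("BMGT/ENES 190H", 1, 2, 3),
    ("BMGT/ENES 190H", 1, 3, 2),
    ("BMGT/ENES 190H", 1, 4, 1),
    # ... records accumulate as students complete activities, exams, and reports
]

def weak_elements(records, threshold=0.5):
    """Return (outcome, element) pairs where at least `threshold` of the
    ratings so far are Developing (2) or Unacceptable (1)."""
    low = defaultdict(int)
    total = defaultdict(int)
    for _course, lo, element, score in records:
        total[(lo, element)] += 1
        if score <= 2:
            low[(lo, element)] += 1
    return [key for key in total if low[key] / total[key] >= threshold]

print(weak_elements(assessments))  # [(1, 3), (1, 4)] for the sample records above
```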

 

4. Results

During the Spring 2014 semester, the elements of the learning outcomes were evaluated using rubrics like those presented in Table 2. For each type of document or presentation, we created a histogram that shows, for each element, the number of evaluations at each level of performance: (4) Advanced, (3) Proficient, (2) Developing, and (1) Unacceptable. Because these evaluations use an ordinal scale, we summarized the data using counts instead of means. Figure 1 displays one such chart, the evaluation of the final presentations in BMGT/ENES 190H.

Figure 1. Assessment of the elements of Learning Outcome 1 from the BMGT/ENES 190H final presentation.
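To illustrate how a chart like Figure 1 can be built from the rubric scores, the sketch below counts evaluations per element at each level and draws a grouped bar chart with matplotlib. The counts are invented for illustration and do not reproduce the actual Spring 2014 data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative counts: rows are the four elements of Learning Outcome 1,
# columns are the rubric levels Unacceptable (1) through Advanced (4).
levels = ["Unacceptable", "Developing", "Proficient", "Advanced"]
counts = np.array([
    [0, 3, 8, 4],   # Element 1: process for selecting a tool or approach
    [1, 4, 7, 3],   # Element 2: appropriateness of the selected tool or approach
    [0, 5, 6, 4],   # Element 3: ability to use the selected tool or approach
    [2, 6, 5, 2],   # Element 4: ability to evaluate a solution
])

x = np.arange(len(levels))
width = 0.2
for i, row in enumerate(counts):
    plt.bar(x + i * width, row, width, label=f"Element {i + 1}")
plt.xticks(x + 1.5 * width, levels)
plt.ylabel("Number of evaluations")
plt.legend()
plt.show()
```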

The relative performance on the 32 elements was determined by first aggregating all of the assessments on each learning outcome to determine, for each element, the number of evaluations in which performance was Advanced (which we denote as n_a), the number of evaluations in which performance was Proficient (n_p), and the total number of evaluations (n_t). Note that the total number of evaluations also includes the evaluations in which performance was Developing and the evaluations in which performance was Unacceptable.

Then, for each element, the following two quantities were computed: r_1 = (n_a + n_p) / n_t and r_2 = n_a / (n_a + n_p), the latter defined when n_a + n_p > 0. The first ratio indicates the fraction of all evaluations (on that element) that were Advanced or Proficient. The second ratio indicates the fraction of the Advanced and Proficient evaluations that were Advanced. If, for some element, all of the evaluations were Advanced, then both ratios would equal 1. If, for some element, all of the evaluations were Proficient, then the first ratio would equal 1, but the second would equal 0. If, for some element, one-fourth of the evaluations were Advanced, another one-fourth were Proficient, and the remainder were Developing or Unacceptable, then the first ratio would be 0.5, and the second ratio would be 0.5.
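As a concrete check of these definitions, the short sketch below computes the two ratios from the counts for one element and reproduces the worked example in the text (one-fourth Advanced, one-fourth Proficient, the remainder lower).

```python
def element_ratios(n_a, n_p, n_t):
    """Return (r_1, r_2) for one element.

    r_1 = (n_a + n_p) / n_t : fraction of evaluations that were Advanced or Proficient
    r_2 = n_a / (n_a + n_p) : fraction of those that were Advanced (undefined if none)
    """
    r_1 = (n_a + n_p) / n_t
    r_2 = n_a / (n_a + n_p) if (n_a + n_p) > 0 else None
    return r_1, r_2

# Worked example from the text: 4 Advanced, 4 Proficient, 8 Developing/Unacceptable.
print(element_ratios(4, 4, 16))  # (0.5, 0.5)
```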

Figure 2 depicts the performance of all 32 elements using these two ratios. The abscissa (horizontal axis) measures the first ratio (r_1), and the ordinate (vertical axis) measures the second ratio (r_2). The symbols represent the different learning outcomes. The results show that the elements for Learning Outcomes 2 and 3 had the lowest values of r_1, which means that fewer teams (as a proportion of those evaluated) had evaluations that were Advanced or Proficient.

 

Figure 2. The relative performance of the 32 elements of the learning outcomes (LO1 to LO8) based on the assessments collected during the Spring 2014 semester.
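Figure 2 is, in effect, a scatter plot of the first ratio against the second for the 32 elements, with a different marker for each learning outcome. The sketch below shows one way such a chart could be drawn; the (r_1, r_2) values are placeholders, not the actual assessment results.

```python
import matplotlib.pyplot as plt

# Placeholder (r_1, r_2) pairs keyed by learning outcome; real values come
# from aggregating the semester's assessments as described above.
ratios_by_lo = {
    "LO1": [(0.90, 0.60), (0.85, 0.50), (0.80, 0.55), (0.90, 0.40)],
    "LO2": [(0.50, 0.30), (0.55, 0.25), (0.45, 0.35), (0.60, 0.20)],
}

markers = ["o", "s", "^", "D", "v", "P", "X", "*"]
for (lo, points), marker in zip(ratios_by_lo.items(), markers):
    r1_vals, r2_vals = zip(*points)
    plt.scatter(r1_vals, r2_vals, marker=marker, label=lo)
plt.xlabel("r_1: fraction Advanced or Proficient")
plt.ylabel("r_2: fraction of those that were Advanced")
plt.legend()
plt.show()
```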

When we examined the change in the performance of the elements from one academic year to the next, we saw that the performance of the elements in Learning Outcome 1 increased (both ratios increased), but the performance of the elements in Learning Outcome 2 decreased (both ratios decreased). We believe that these changes reflected changes to BMGT/ENES 190H that emphasized the use of quality management tools (Learning Outcome 1), which reduced the time available for product development (Learning Outcome 2).
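One simple way to track this kind of year-over-year change is to difference each element’s pair of ratios between academic years. The sketch below uses invented values for two elements to show the idea; the element labels and numbers are placeholders.

```python
# Invented (r_1, r_2) values per element for two academic years.
previous_year = {"LO1-E1": (0.70, 0.40), "LO2-E1": (0.65, 0.35)}
current_year  = {"LO1-E1": (0.85, 0.55), "LO2-E1": (0.50, 0.25)}

for element in previous_year:
    d_r1 = current_year[element][0] - previous_year[element][0]
    d_r2 = current_year[element][1] - previous_year[element][1]
    print(f"{element}: change in r_1 = {d_r1:+.2f}, change in r_2 = {d_r2:+.2f}")
# LO1-E1: both ratios increased; LO2-E1: both decreased (matching the pattern
# described in the text, but with made-up numbers).
```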

 

5. Future Plans

Comprehensively assessing its learning outcomes is necessary for a program to identify shortcomings and improve curricula and other activities [3, 4]. Because students in the QUEST Honors Program learn about quality management and process improvement, it is particularly appropriate that we have a data-driven quality management system to guide curriculum improvement.

After conducting these assessments and considering their effectiveness, we have identified some opportunities to improve our assessment process. Some evaluators may have assessed students relative to their expected performance (that is, a reviewer may hold lower standards of proficiency for sophomores in BMGT/ENES 190H than for seniors in BMGT/ENES 490H). Evaluators need to understand the purpose of learning outcomes assessment and how it differs from grading student work, and they need a common understanding of what level of performance corresponds to each level on each element. The rubrics also need to be clearer so that different evaluators produce consistent evaluations.

The assessment process used material from all three required courses and involved a variety of evaluators, including students, alumni, faculty, and staff. We have developed assessment techniques that can be enhanced and used again, and we have selected ways for analyzing and reporting the results. These results demonstrate that our students have the skills that they have practiced and provide insights into how we can improve our curriculum.

Acknowledgements

This article is based on material previously published in Goodell and Herrmann [5]; that material will also appear in Goodell and Herrmann [6].

The ideas, support, and assistance provided by our colleagues (especially Nicole Coomber and Joe Bailey) and the program students and alumni are greatly appreciated. The METM program workshop was funded by the National Science Foundation (grant DUE-0958700).

 

References

  1. Bloom, B.S., Krathwohl, D.R., and Masia, B.B., 1956, Taxonomy of Educational Objectives: The Classification of Educational Goals, D. McKay, New York.
  2. Anderson, L.W., and Krathwohl, D.R., 2001, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives, Longman, New York.
  3. Nitko, A.J., and Brookhart, S.M., 2007, Educational Assessment of Students, Pearson Education, Inc., Upper Saddle River, New Jersey.
  4. Royse, D., Thyer, B.A., and Padgett, D.K., 2006, Program Evaluation: An Introduction, Wadsworth, Cengage Learning, Belmont, California.
  5. Goodell, K.K., and Herrmann, J.W., 2014, Assessing the Learning Outcomes of a Multidisciplinary Undergraduate Honors Program, Proceedings of the 2014 Industrial and Systems Engineering Research Conference, Y. Guan and H. Liao, eds., Montreal, Canada, June 1-3, 2014.
  6. Goodell, K.K., and Herrmann, J.W., 2014, Learning Outcomes for a Multidisciplinary Undergraduate Honors Program: Development, Measurement, and Continuous Improvement, under review for Quality Approaches in Higher Education.