Quality Improvement With Automated Engineering Program Evaluations
In this paper, we present examples of quality improvement efforts to enhance student learning in engineering education by employing a novel program evaluation methodology that automates the measurement and analysis of ABET Student Outcomes (SOs) data, based on the classification of specific performance indicators across Bloom's three learning domains and their learning levels. The learning levels are further categorized by a 3-Level Skills Grouping Methodology that groups together learning levels of related proficiency. Program evaluations use aggregate values of ABET SOs as an overall performance index. These values are calculated by assigning weights to measured specific performance indicators according to the Frequency-Hierarchy Weighting-Factors Scheme, which incorporates the hierarchy of measured skills, the course levels in which they are measured, and the number of assessments implemented for their measurement. The number of assessments processed for performance indicators associated with the three skill categories across multiple course levels is counted to calculate the percentage learning distribution at the elementary, intermediate, and advanced levels for the three learning domains. Learning distributions obtained for measured ABET SOs are compared to ideal models to verify standards of achievement for the required skill types and proficiency levels, and to align engineering curriculum delivery toward the highest levels of holistic learning.
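To make the aggregation concrete, the following is a minimal sketch of how a weighted SO performance index and a percentage learning distribution could be computed from classified performance indicators. The abstract does not give the exact weighting formula, so the multiplicative weight below, along with all class names, field names, and sample values, are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (NOT the paper's actual implementation): aggregate an
# ABET Student Outcome from specific performance indicators using weights
# that combine skill-group hierarchy, course level, and assessment counts,
# then compute the percentage learning distribution across the three groups.
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical labels for the 3-Level Skills Grouping Methodology.
SKILL_GROUPS = ("elementary", "intermediate", "advanced")

@dataclass
class Indicator:
    score: float          # measured attainment on this indicator, 0..1
    skill_group: str      # one of SKILL_GROUPS
    course_level: int     # e.g. 100, 200, 300, 400
    n_assessments: int    # assessments implemented for its measurement

def weight(ind: Indicator) -> float:
    """Assumed Frequency-Hierarchy-style weighting factor: a higher skill
    group, a higher course level, and more assessments all raise the weight."""
    hierarchy = 1 + SKILL_GROUPS.index(ind.skill_group)  # 1, 2, 3
    level = ind.course_level / 100                       # 1..4
    return hierarchy * level * ind.n_assessments

def aggregate_so(indicators: list[Indicator]) -> float:
    """Weighted mean of indicator scores -> overall SO performance index."""
    total_w = sum(weight(i) for i in indicators)
    return sum(weight(i) * i.score for i in indicators) / total_w

def learning_distribution(indicators: list[Indicator]) -> dict[str, float]:
    """Percentage of all processed assessments falling in each skill group."""
    counts: dict[str, int] = defaultdict(int)
    for i in indicators:
        counts[i.skill_group] += i.n_assessments
    total = sum(counts.values())
    return {g: 100 * counts[g] / total for g in SKILL_GROUPS}

# Hypothetical sample data for one Student Outcome.
indicators = [
    Indicator(0.85, "elementary", 100, 4),
    Indicator(0.75, "intermediate", 300, 3),
    Indicator(0.60, "advanced", 400, 2),
]
print(round(aggregate_so(indicators), 3))
print(learning_distribution(indicators))
```

Under these assumptions, the resulting distribution would then be compared against an ideal model per learning domain to check whether the curriculum exercises each proficiency level in the intended proportions.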
Published in: 2016 IEEE Frontiers in Education Conference (FIE)