Evaluation methodology and tools

1.        Introduction: evaluation objectives and methodology

The E2LP project included the development, implementation and evaluation of an advanced learning platform for computer and embedded systems engineering education. Beyond the hardware and software, the project also produced an inventory of 65 experiments and lab assignments for students at three levels:

  • Exercises
  • Problems
  • Projects

The evaluation program, which was led by one of the nine partners in the consortium, aimed at collecting data from teachers and students about the practical implementation of the project in engineering courses at six universities and colleges in Serbia, Croatia, Germany, Poland and Israel. Over two academic years, the evaluation program addressed 25 courses, such as Logic Design of Computer Systems, Digital Signal Processing, Real-Time System Software, and Embedded Systems, which were taken by about 1,100 students (some students were counted in more than one course).

Regarding the evaluation methodology, it is common to distinguish between internal and external evaluation, and between formative and summative evaluation (Scriven, 1967). In the present study, we conducted an internal formative evaluation. According to Owens (2007), this evaluation approach should:

  • Involve the staff as much as possible;
  • Strive for consensus on the evaluation plan; and
  • Use findings to reflect on the program aspects under review.

The present study adopted a mixed-methods approach, combining quantitative tools such as closed-ended questionnaires with qualitative methods such as interviews with students and teachers, as described in the following sections.

 

2.        Self-Regulated Learning (SRL) – the main conceptual framework for the evaluation program

It is widely agreed that education in general, and engineering education in particular, is not just about imparting specific knowledge to students, but also about developing students' thinking and learning skills, as well as their motivation to learn. To achieve this end in the E2LP project, we adopted a broad conceptual framework for curriculum development and for the evaluation of teaching and learning within the E2LP platform – the Self-Regulated Learning (SRL) model. This model refers to an active and constructive process that involves the students' active, goal-directed self-control of behavior, motivation and cognition for academic tasks (Pintrich, 2004; Zimmerman and Schunk, 2001; Winne and Hadwin, 2008). Self-Regulated Learning refers to three main aspects: cognition, meta-cognition and motivation, as detailed below.

  • Cognition refers to the conscious mental processes by which knowledge is accumulated and constructed, such as understanding, problem solving and creative thinking.
  • Metacognition refers to ‘thinking about thinking,’ knowledge of general strategies that might be used for different tasks, and the process of selecting, controlling or regulating cognitive processes such as learning and problem solving before, during or immediately after executing a task (Flavell, 1979; Pintrich, 2004). The metacognition dimension also includes reflective practice - the process of learning from experience, asking questions about what we know and how we came to know it, and ‘learning to learn’ (Schön, 1996).
  • Motivation includes aspects such as interest in learning, intrinsic motivation, extrinsic motivation, and self-efficacy beliefs (Alexander, Ryan and Deci, 2000; Bandura, 1997).

According to Pintrich (2004, p. 401) "self-regulated learning does provide a conceptual model of college student motivation and regulation that is based in a psychological analysis of academic learning. In addition, there is fairly wide empirical support from both laboratory and field-based studies for SRL models of this type." In recent years, the SRL model has caught the attention of a growing number of researchers in engineering and science education (Barak and Shachar, 2008; Lawanto, Santoso and Yang, 2012; Haron and Shaharoun, 2010; Schraw, Crippen and Hartley, 2006).

 

3.        The Motivated Strategies for Learning Questionnaire (MSLQ)

The main tool used for measuring aspects of cognition, meta-cognition and motivation according to SRL theory was the Motivated Strategies for Learning Questionnaire (MSLQ), which the students filled in at the end of each course. The full version of this questionnaire, which is well known in the educational literature (Pintrich et al., 1991; Artino, 2005; Vogt, 2008; Lawanto, Santoso and Yang, 2012), includes 81 items. We selected 31 items from the questionnaire in the following categories, which are particularly relevant to the context of engineering education (a short scoring sketch follows the list):

  • Intrinsic goal orientation
  • Extrinsic goal orientation
  • Task value
  • Control of learning beliefs
  • Self-efficacy for learning and performance
  • Metacognitive self-regulation

An example from the MSLQ is shown in Figure 1.

Category: Intrinsic goal orientation


Figure 1. Example of an item from the MSLQ.

 

Other studies in which only relevant sub-scales from the original questionnaire were used have been presented by Bassili (2008) and Al Khatib (2010). Artino (2005), who examined in detail the history of the MSLQ and its use, wrote that "this instrument is completely modular, and thus the scales can be used together or individually, depending on the needs of the researcher, instructor, or student" (p. 4).

The internal consistency of the answers in each category was tested using the Ordinal Alpha (α) coefficient, which is the equivalent of Cronbach's Alpha for ordinal data (Gadermann, Guhn and Zumbo, 2012). According to Kline (2000) and Nunnally (1978), α values are interpreted as follows: α ≥ 0.9 = excellent; 0.7 ≤ α < 0.9 = good; 0.6 ≤ α < 0.7 = acceptable; 0.5 ≤ α < 0.6 = poor; α < 0.5 = unacceptable.
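
As a minimal illustration, ordinal alpha can be obtained by applying the usual Cronbach formula to a polychoric (rather than Pearson) inter-item correlation matrix (Gadermann, Guhn and Zumbo, 2012). The Python sketch below assumes the polychoric correlation matrix has already been estimated with an external routine; the function names are ours.

    import numpy as np

    def alpha_from_corr(R: np.ndarray) -> float:
        # Cronbach's alpha from a k x k inter-item correlation matrix.
        # Passing a polychoric correlation matrix yields ordinal alpha.
        # trace(R) = k for any correlation matrix, hence the simplified form.
        k = R.shape[0]
        return (k / (k - 1)) * (1 - k / R.sum())

    def interpret_alpha(a: float) -> str:
        # Verbal labels following the Kline (2000) / Nunnally (1978) ranges above.
        if a >= 0.9:
            return "excellent"
        if a >= 0.7:
            return "good"
        if a >= 0.6:
            return "acceptable"
        if a >= 0.5:
            return "poor"
        return "unacceptable"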

The students were also asked to explain their answers or give examples in a text box, as illustrated in the example from the MSLQ shown in Figure 1.

The MSLQ was administered during two academic years (2013/2014, 2014/2015) in 12 computer engineering courses that used the E2LP platform, for example, Logic Design of Computer Systems, Real-Time System Software, Embedded Systems, and Computer Architecture, and was filled in by a total of 429 students.

Due to the limited scope of this document, we present here only one example of findings from the MSLQ in the second year – the Logic Design of Computer Systems course (n=111), as seen in Figure 2. In this case, the reliability (internal consistency) expressed by the Ordinal Alpha coefficients for the six sub-scales in the questionnaire ranged from 0.74 to 0.96.

 


Figure 2. Mean value of answers to sub-scales of the MSLQ for the Logic Design of Computer Systems course (n=111).

 

The findings presented in Figure 2 show that the students' average scores for all six sub-scales of the MSLQ were quite high. The highest scores were obtained for the Control of learning beliefs category (for example, awareness of how I learn) and the Self-efficacy for learning category (for example, confidence in my abilities to succeed in this course). The lowest mean score was found for the Extrinsic goal orientation category (for example, course grades). These findings are encouraging because students' intrinsic motivation (rather than extrinsic motivation) is considered to be one of the most important aspects of learning (Alexander, Ryan and Deci, 2000).

 

4.        Obtaining ongoing feedback from the students on the lab work: the Lab Feedback Questionnaire (LFQ)

In order to receive immediate feedback from the students about their interest, success and difficulties in carrying out the lab experiments, we developed the Lab Feedback Questionnaire (LFQ), which included the following eight questions (the updated version, second year); a small scoring sketch follows the list:

  1. Clarity of theoretical background – documentation, explanations.
  2. Clarity of technical instructions – exercises and problems.
  3. Total time and effort required.
  4. Ease of use – the environment.
  5. Ease of use – the platform.
  6. Assistant support – was the lab assistant helpful.
  7. Value of what you learned.
  8. Overall satisfaction.
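
The following minimal Python sketch shows one way the per-question feedback could be summarized across students; the response data here are made up for illustration, and the variable names are ours.

    import numpy as np

    # Hypothetical LFQ responses: rows = students, columns = the eight
    # questions above, each answered on a 1-10 scale (np.nan = no answer).
    lfq = np.array([[9.0, 8.0, 7.0, 8.0, 6.0, 9.0, 8.0, 9.0],
                    [8.0, 9.0, 6.0, 7.0, 5.0, 8.0, 9.0, 8.0]])

    question_means = np.nanmean(lfq, axis=0)       # mean score per question
    needs_attention = np.flatnonzero(question_means < 7) + 1  # 1-based question numbers
    print(question_means, needs_attention)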

During two academic years (2013/2014, 2014/2015), the LFQ was administered in 16 courses in computer engineering education, and answered by 368 students who used the E2LP platform. For example, in the Logic Design of Computer Systems course, the students filled in the LFQ online five times while performing 12 experiments. One example from the findings is illustrated in Figure 3.

  


Figure 3. An example of findings from the LFQ for the Sequential Circuits experiment (n=56) in the Logic Design of Computer Systems course.

 

In the example shown in Figure 3, the students gave positive feedback (scores of 8-9 on a 1-10 scale) on most of the questions, except for the question on ease of use of the platform.

 

5.        Measuring students’ viewpoints about the computerized learning environment: the Computer System Usability Questionnaire (CSUQ)

The E2LP is a sophisticated computer-based learning environment. In order to measure users' satisfaction with a system (hardware and software), we used the Computer System Usability Questionnaire (CSUQ), which was developed at IBM (Lewis, 1995, 2012) and is cited extensively in the literature. The CSUQ was designed to gather subjective and objective data in realistic scenarios-of-use. Subjective data are, for example, measures of participants' opinions or attitudes concerning their perception of usability. Objective data are measures of participants' performance, such as scenario completion time and successful scenario completion rate. The CSUQ consists of 19 items in four categories, as shown in Table 1. A full version of the questionnaire is presented in Appendix 1.

Table 1. CSUQ structure.

Category                Items
Overall satisfaction    1, 19
System usability        2, 4, 6, 7, 9, 11, 17, 18
Information quality     5, 8, 10, 12, 13, 15, 16
Interface quality       3, 14

 

The CSUQ is a Likert-type questionnaire based on a seven-point scale in which 1 stands for totally agree and 7 for totally disagree. The students were also asked to explain or give examples, as illustrated in the example shown in Figure 4.

 


Figure 4. Example of an item from the CSUQ.

 

The students were asked to fill in the CSUQ once per course after completing the lab work. Since the CSUQ uses an opposite scale (1=very high, 7=very low), we reversed the answers in the data analysis in order to present the findings on a common scale (1=very low, 7=very high). The Ordinal Alpha coefficient was used to check the reliability (internal consistency) of students’ answers to each category of the questionnaire.
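
A minimal Python sketch of this reversal step, assuming the raw answers are stored as a NumPy array (the data and variable names here are ours, for illustration only):

    import numpy as np

    # Raw CSUQ answers on a 1..7 scale (1 = totally agree); np.nan marks N/A.
    raw = np.array([[1.0, 2.0, np.nan],
                    [3.0, 1.0, 6.0]])

    # On a 1..7 scale the reversal is 8 - x, so 1 maps to 7 (very high)
    # and 7 maps to 1 (very low); np.nan propagates, keeping N/A missing.
    reversed_scores = 8 - raw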

The questionnaire was introduced into the evaluation program in the third project year (2014/2015). It was administered in six courses and answered by 87 students who learned with the E2LP platform. In one of the courses (n=26), the Ordinal Alpha coefficients (internal consistency) for the four categories of the questionnaire were: Overall satisfaction – 0.40; System usability – 0.91; Information quality – 0.97; Interface quality – 0.96. Since the Ordinal Alpha value for the first category (Overall satisfaction) was too low, this category could not be represented by a single score; its two items (items 1 and 19) are therefore presented separately in Figure 5.

 


Figure 5. Students’ answers to the CSUQ in the Logic Design of Computer Systems course (1 = completely disagree; 7 = completely agree).

 

From the questionnaire results presented in Figure 5, it can be seen that in general, the students were reasonably satisfied with the E2LP platform and software. The students were less satisfied with the information quality in the system.

 

6.        Qualitative evaluation: analyzing students' open-ended answers in the questionnaires, interviews and observations in classes

In addition to the quantitative evaluation tools described above, the present study also involved qualitative tools, mainly semi-structured interviews with students and teachers, and analysis of students' comments in the open-ended text boxes of the LFQ, MSLQ and CSUQ. Schutt (2012, p. 348) writes that "Conducting qualitative interviews can often enhance the value of a research design that uses primarily quantitative measurement techniques. Qualitative data can provide information about the quality of standardized case records and quantitative survey measures, as well as offer some insight into the meaning of particular fixed responses." Authors such as Olds, Moskal and Miller (2005), and Borrego, Douglas and Amelink (2009) addressed the application of mixed research methods in the context of engineering education.

The aim of the interviews was to shed light on the processes of teaching, learning, achievement and motivation, exposing cases of success or difficulty that cannot be captured in the quantitative data. The interviews also aimed at collecting data about the technical difficulties teachers and students had in using the E2LP boards and other project systems, for example, bugs they found in the hardware and software. The interviews were conducted by a member of the evaluation team from Israel, who visited the participating universities in Novi Sad (Serbia), Zagreb (Croatia) and Freiburg (Germany); they were held in focus groups of 5-15 participants. Over two academic years, interviews were held with 136 students and 42 teachers and assistants. All the data were transcribed and analyzed in three consecutive rounds:

-       Identifying three to four main categories that arose in the discussion

-       Identifying three to four sub-categories of each main category

-       Identifying examples for each category

The outcomes of this process for the student interviews are presented in Figure 6.


Figure 6. Identifying the main categories, sub-categories and examples from data collected in the student interviews.

 

7.        Summary and conclusions

The main evaluation objectives and tools that were applied in the present research are summarized in Table 2. 

Table 2. Evaluation objectives and tools used.

Evaluation objective: Interest, success and difficulties in carrying out the lab work
Methods/Tools: a. Lab Feedback Questionnaire; b. Interviews with students; c. Interviews with teachers

Evaluation objective: Intrinsic motivation, extrinsic motivation, control of learning beliefs, self-efficacy, task value, and metacognitive self-regulation
Methods/Tools: Motivated Strategies for Learning Questionnaire

Evaluation objective: User satisfaction with the system hardware and software; gathering subjective and objective data in realistic scenarios-of-use
Methods/Tools: Computer System Usability Questionnaire

 

The qualitative and quantitative data collected in the evaluation process provided rich and significant information about the effectiveness and usability of the E2LP platform developed by the consortium from students' and teachers' perspectives. These data were presented to partners in the consortium, and served as an important source for improvements made in the system during the work. In comparing the quantitative and qualitative findings of the third year (2014/2015) to those of the previous year (2013/2014), there were indications of slight improvements in students' feedback on using the E2LP platform, as well as an increase in learners' achievements in the lab, as reported in Deliverable D2.5.

In summary, we believe that the evaluation methodology and tools developed in this research were not just an important ingredient in the E2LP project, but could also contribute a meaningful layer to the literature and practice of engineering education, especially in the context of developing and evaluating new technology-based curricula.

 

8.        References

Alexander, P., Ryan, R., & Deci, E. (2000). Intrinsic and extrinsic motivations: classic definitions and new directions. Contemporary Educational Psychology, 25(1), 54-67.

Al Khatib, S. A. (2010). Meta-cognitive self-regulated learning and motivational beliefs as predictors of college students’ performance. International Journal for Research in Education, 27, 57-72.

Artino, A. R. J. (2005). Review of the motivated strategies for learning questionnaire. University of Connecticut, ERIC Number: ED499083. http://eric.ed.gov/?q=artino&pg=2&id=ED499083.

Bandura, A. (1997). Self-efficacy: the exercise of control, New York: WH Freeman and Company.

Barak, M. & Shachar, A. (2008). Project in technology and fostering learning: the potential and its realization. Journal of Science Education and Technology, 17(3), 285-296.

Borrego, M., Douglas, E. P., & Amelink, C. T. (2009). Quantitative, qualitative, and mixed research methods in engineering education. Journal of Engineering Education, 98(1), 53-66.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: a new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906-911.

Gadermann, A. M., Guhn, M., & Zumbo, B. D. (2012). Estimating ordinal reliability for Likert-type and ordinal item response data: a conceptual, empirical, and practical guide. Practical Assessment, Research and Evaluation, 17(3), 1-13.

Haron, H. N., & Shaharoun, A. M. (2010). Self-regulated learning, students’ understanding and performance in engineering statics. IEEE Global Engineering Education Conference (EDUCON) – Learning Environments and Ecosystems in Engineering Education, April 4 - 6, Amman, Jordan.

Kline, P. (2000). The handbook of psychological testing (2nd ed.). London: Routledge.

Lawanto, O., Santoso, H. B., & Yang, L. (2012). Understanding the relationship between interest and expectancy for success in engineering design activity in grades 9-12. Journal of Educational Technology & Society, 15(1), 152-161.

Lewis, J. R. (1995). IBM computer usability satisfaction questionnaires: psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction, 7(1), 57-78.

Lewis, J. R. (2012). Usability testing. In: G. Salvendy (Ed.), Handbook of human factors and ergonomics (4th ed., pp. 1267-1312). New York, NY: Wiley.

Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill.

Olds, B. M., Moskal, B. M., & Miller, R. L. (2005). Assessment in engineering education: evolution, approaches and future collaborations. Journal of Engineering Education, 94(1), 13-26.

Owens, J. M. (2007). Program evaluation. New York, NY: Guilford Press.

Patton, M. (2002). Qualitative research and evaluation methods. Thousand Oaks, Calif: Sage Publications.

Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16, 385-407.

Pintrich, P. R., Smith, D. A., Garcia, T., & McKeachie W. J. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ). National Center for Research to Improve Postsecondary Teaching and Learning. Ann Arbor: University of Michigan.

Rotgans, J. I., & Schmidt, H. G. (2010). The motivated strategies for learning questionnaire: a measure for students' general motivational beliefs and learning strategies? Asia-Pacific Education Researcher, 19(2), 357-369.

Schön, D. A. (1996). Educating the reflective practitioner: toward a new design for teaching and learning in the professions, San Francisco: Jossey-Bass.

Schraw, G., Crippen, K. J., & Hartley, K. (2006). Promoting self-regulation in science education: metacognition as part of a broader perspective on learning. Research in Science Education, 36, 111-139.

Schutt, R. K. (2012). Investigating the social world: the process and practice of research. Thousand Oaks, Calif: Sage Publications.

Scriven, M. (1967). The methodology of evaluation. In: R. W. Tyler, R. M. Gagne, & M. Scriven (Eds.), Perspectives of curriculum evaluation (pp. 39-83). Chicago, IL: Rand McNally.

Thomas, J. W. (2000). A review of research on project-based learning. San Rafael, CA: Autodesk. Retrieved March 15, 2009, from http://www.bie.org/files/researchreviewPBL.pdf

Vogt, C. M. (2008). Faculty as a critical juncture in student retention and performance in engineering programs. Journal of Engineering Education, 97(1), 27-36.

Winne, P. H., & Hadwin, A. F. (2008). The weave of motivation and self-regulated learning. In: D. H. Schunk & B. J. Zimmerman (Eds.), Motivation and self-regulated learning: theory, research, and application (pp. 297-314). New York, NY: Routledge.

Zimmerman, B. J., & Schunk, D. H. (2001). Self-regulated learning and academic achievement: theoretical perspectives. Mahwah, N.J.: L. Erlbaum.

 

Appendix 1: The Computer System Usability Questionnaire (CSUQ)

 

Dear Student, Professor, Assistant,

This questionnaire (which starts on the following page) gives you an opportunity to express your satisfaction with the usability of the E2LP system. Your responses will help us understand what aspects of the system you are particularly concerned about and the aspects that satisfy you.

Think, to as great a degree as possible, about all the tasks that you have done with the system while you answer these questions.

Please read each statement and indicate how strongly you agree or disagree with the statement by circling a number on the scale. If a statement does not apply to you, circle N/A. Pay attention to the scoring principle – the lower you score, the higher your degree of agreement.

Whenever appropriate, please write comments to explain your answers.

Thank you!