World Transactions on Engineering and Technology Education
© 2009 WIETE
Vol.7, No.1, 2009
A case study of a method for hybrid peer-evaluation in engineering education
Susan Vasana† & Albert D. Ritzhaupt‡
University of North Florida, Jacksonville, Florida, United States of America†
University of North Carolina Wilmington, Wilmington, North Carolina, United States of America‡
ABSTRACT: This case study addresses the call for alternative methods of assessment in engineering education by
providing a systematic, hybrid Peer-Evaluation (PE) model that can be tailored to fit within virtually any engineering
course. Seventeen students enrolled in a senior-level undergraduate digital signal processing class participated in a three-phase PE process designed to formatively and summatively evaluate peers’ course projects using well-understood
criteria and reliable measures. The results show strong and positive correlations between the student PEs and the
instructor evaluations. Further, the significance of a project proposal when formatively evaluated was positively related
to structure, completeness, and overall quality of the final project report when summatively evaluated. Students were
generally satisfied with the hybrid PE model and gained practice in giving professional feedback and constructive criticism. This article adds to our understanding of the PE method and its use in engineering education.
INTRODUCTION
The necessity for quality instructional practice and assessment in engineering education is of paramount concern [1]. In
particular, there has been a call in engineering education for more alternative (a.k.a. authentic) methods of assessment
(e.g. portfolios), as opposed to traditional methods (e.g. exams) which have been rightfully criticised for a lack of
validity in measuring students' ability to apply knowledge to real-world situations [2]. Alternative methods of
assessment attempt to overcome this limitation by developing authentic contexts in which students can demonstrate their mastery. A method often overlooked in engineering education is peer-evaluation (PE), a mechanism through which students provide formative assessment to improve the quality of their peers’ work, as well as summative assessment of that work according to guidelines that meet course objectives [1]. Indeed, meta-analytic studies have verified that PE closely resembles teacher assessment in higher education when judgments are made on well-understood criteria [1].
The term PE can have different meanings to different people, and PE practice varies widely by educational context: it can refer to the differential grading of group projects based on peer judgments [3][4], or to the assessment of peers’ work according to pre-defined guidelines [5][1]. Further, PE can be formative [6][1] and/or summative [1] in nature. This research adopts the definition that PE is the individual evaluation of a peer’s work according to specific guidelines, with the dual purposes of improving the quality of the work and providing an overall quality judgment of that work.
The validity of PE is a major concern for educators. Across 48 different published manuscripts on PE, researchers found that peer marks closely resembled teacher marks when judgments were based on well-understood criteria and when academic products and processes, as opposed to professional practice, were evaluated [1]. Further, multiple ratings were found to be better measures than single ratings [1]. Research also suggests that PE is of adequate reliability and validity in a wide variety of applications in higher education [7], and that PE has positive formative effects on student achievement and attitudes, in that the information is provided to peers in a feedback loop and the evaluations are as good as or better than teacher evaluations [7].
Educational outcomes from PE can be both positive and negative. Positive outcomes include the improved quality of
student work, more responsible and reflective learning experiences, diplomatic or constructive criticism, a heightened
sense of professionalism, and the development of transferable skills to different contexts [6][8][9]. Students not only
have the opportunity to learn from their peers by receiving their peers’ constructive commentary of their work, but
students also have the opportunity to learn from the peers’ work they evaluate [10]. Some research also suggests that
students find the PE process time-consuming, dislike awarding a grade to their peers, and find it intellectually challenging and sometimes socially uncomfortable [6][9].
This article addresses the call for alternative assessment methods by developing and systematically evaluating a hybrid
PE model. The purpose of this article is to provide a systematic model for hybrid PE in engineering education and to
examine the relationships between the model and the quality of students’ work, demonstrate the reliability of the model,
and gauge students’ satisfaction with the model.
HYBRID PEER-EVALUATION MODEL
The PE method has transferred to the online realm, which provides substantial benefits over the traditional paper-and-pencil process. At a minimum, online PE allows students to submit and evaluate their peers’ work on the web, gives teachers access to the PEs at any point in the process, and decreases the costs associated with copying projects for peers to evaluate [11]. This research adopted both traditional and online PE methods to support the process; hence the term hybrid peer-evaluation. The goals of the hybrid model included creating a generic PE process that could be tailored to fit in any engineering course with a project component, balancing the amount of time students would spend evaluating their peers’ work with the amount of time they would spend on their own projects, and modelling an environment in which constructive criticism could be practised. Using online PE models as a baseline [12][13], this research employed a three-phase implementation. Figure 1 illustrates the model in terms of modality, evaluation type, evaluation criteria, and deliverables.
Figure 1: Hybrid peer-evaluation model.
In contrast to other PE models, the evaluation criteria used for the proposal, project report and project presentations
were not the same. For a proposal, the important characteristics were the relatedness of the proposal to the course, the
significance of the proposal, the specificity of the purpose and resources that would be used, clarity of written language,
scope of the project given the timeline of the course, and the overall quality. For the project report, the relevant characteristics were the comprehensiveness of the literature review, the accuracy of the results, the organisation and clarity of the prose, the completeness in terms of addressing the items outlined in the proposal, and the overall quality of the course project. Finally, for the student presentations, the relevant characteristics included the presentation organisation, use of visual aids, preparedness of the students, their ability to respond to questions, extra effort or creativity, and the overall quality of the presentation.
RESEARCH METHODOLOGY
Participants and Course
The context of this hybrid PE case study is within a senior level digital signal processing course taught at a public,
masters-large, south-eastern university in the United States. The course had N = 17 students enrolled during a summer semester. The course is required in an Accreditation Board for Engineering and Technology (ABET) accredited electrical engineering undergraduate program. A major deliverable within this course (30% of the final grade) is an individual course project. The project serves dual purposes: 1) it provides individual students an opportunity to investigate a topic in depth and 2) it provides all students a breadth of knowledge across a diverse set of topics related to the course. All projects required students to prepare three deliverables: 1) a two-page project proposal, 2) a project report
that details the work, and 3) a project presentation to peers and the instructor.
All students were provided with examples of potential topics, such as completing a simulation of a system or researching a new technology. Further, all projects were approved by the instructor before students proceeded. Each student was given time in class to present their project ideas. The projects were diverse in topic, yet all related to digital signal processing. For example, one student implemented a simulation of computer-generated sound effects using MATLAB, while another conducted a literature review on cochlear implant devices and their designs. To avoid the potential unfairness of differential grading (e.g. one peer rating more conservatively than another), only the summative presentation evaluation, in which all students evaluated every project at the presentation phase, was used to provide 50% of the course project grade. The instructor assigned the remaining 50% of the grade using the summative PEs and comments and the instructor’s independent assessment of the projects. Students received 10% of the course grade for submitting all the evaluations on time.
Procedures
The online PEs were deployed using the Blackboard course management system. The project proposal and project
report were made public; however, the student evaluations were only visible to the instructor and the student receiving the feedback. This practice was implemented to keep the evaluations confidential and to prevent peers from simply
repeating the comments of their peers as part of their own critical evaluations. Students were assigned to three different
peer projects to review on three occasions. Students first completed their project proposals and posted the proposals
online. Next, students were instructed to review their peers’ project proposal online using the formative evaluation
instrument and to provide constructive comments on the project’s direction, and how to improve the work. The
instructor reviewed the survey responses and released the anonymous PEs to the students to incorporate the feedback
into their final project reports. After the completion of course projects, students posted their final project reports online.
Then, peers used the summative project PE instrument to evaluate the same three projects. Unlike the formative phase, students were instructed to evaluate the projects in a summative fashion and to justify their marks as feedback to their peers.
The final phase of the PE process was executed during the project presentations. Students were evaluated anonymously
by all of their peers using the summative presentation PE instrument. The implementation of the hybrid PE model
involved three rounds, as illustrated in Figure 1: the formative proposal review (online), a summative report evaluation
(online), and a summative presentation grading (face-to-face). These cycles were followed by a satisfaction survey
(face-to-face) designed to measure a student’s satisfaction with the PE process. In total, the hybrid PE process took
approximately six weeks to complete.
Measurements and Data Analysis
This case study employed five separate measurements to address the purposes of this research: a project proposal
formative evaluation, a project report summative evaluation, a project presentation summative evaluation, a student
satisfaction survey, and the frequency of each type of student feedback provided in the PEs. The satisfaction survey was
based on previous research [14] and modified to meet the needs of this research. The evaluation instruments were
developed by the research team and traced to the project and course objectives. The data was analysed using descriptive
and inferential statistics. Reliability was measured using internal consistency reliability for the satisfaction scales and
generalisability theory for each of the PE instruments. Repeated-measures Analysis of Variance (ANOVA) was used to
test the differences between the formative and summative evaluations in light of the assumptions of the procedure.
Pearson correlations were used to measure relationships.
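For readers who wish to replicate this analysis pipeline, a minimal sketch of the two inferential procedures is given below in Python. The long-format data layout, file name, and column names are illustrative assumptions, not the authors’ original scripts.

```python
# Illustrative analysis sketch (not the authors' original script).
# Assumes a long-format CSV of peer-evaluation scores with hypothetical
# columns: student, phase ('formative' or 'summative'), and score (1-5).
import pandas as pd
from scipy.stats import pearsonr
from statsmodels.stats.anova import AnovaRM

scores = pd.read_csv("peer_evaluations.csv")  # hypothetical file name

# Pearson correlation between each student's mean formative and summative scores
wide = scores.pivot_table(index="student", columns="phase", values="score").dropna()
r, p = pearsonr(wide["formative"], wide["summative"])
print(f"r = {r:.2f}, p = {p:.3f}")

# Repeated-measures ANOVA with phase as the within-subjects factor,
# averaging multiple evaluations per student and phase
result = AnovaRM(scores, depvar="score", subject="student",
                 within=["phase"], aggregate_func="mean").fit()
print(result)
```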
The formative and summative comments were coded using an established framework for categorising feedback, which
includes didactic, corrective, suggestive and reinforcing comments [15]. The comments provided on the formative and
summative PEs were independently coded by two members of the research team until inter-rater agreement exceeded
85%, indicating sufficient reliability [13].
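The percent-agreement criterion used to check the coding can be computed directly; a minimal sketch follows, assuming the two coders’ category labels are stored as parallel Python lists (the example labels are hypothetical).

```python
# Percent inter-rater agreement between two coders (illustrative sketch).
# Each list holds one feedback category per coded comment; labels are hypothetical.
coder_a = ["reinforcing", "suggestive", "corrective", "reinforcing", "didactic"]
coder_b = ["reinforcing", "suggestive", "didactic", "reinforcing", "didactic"]

matches = sum(a == b for a, b in zip(coder_a, coder_b))
agreement = matches / len(coder_a)
print(f"inter-rater agreement = {agreement:.0%}")  # coding continued until > 85%
```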
Table 1: Type of peer feedback analysed.

| Type | Definition | Example |
|------|------------|---------|
| Corrective | If a student provides incorrect information or formatting, then a peer can provide feedback to point out and correct the mistake. This type of feedback should improve the accuracy of a student’s project. | “Proposal does not have a timeline.” “The paper needed to focus in the main paragraphs.” “Try using ‘50’ instead of fifty. If the number is greater than 10, don’t spell it out.” |
| Reinforcing | Reinforcing feedback is provided to peers when the information is accurate and the goal is positive recognition to support the actions of a student in their project. | “The proposal is very well written and structured.” “The proposal is well formatted and addresses a topic with a rapidly increasing demand for superior performance and cost effective implementation.” |
| Didactic | Didactic feedback refers to lengthy explanations that may serve to explain the inadequacy of information or relevance of a topic. This type of feedback attempts to direct peers in the right direction. | “Any idea what Digital Signal Processor you are going to research? Seemed like most of the proposal was just a reiteration of the guide for the proposal, you probably could have just cut that part out - fine the way it is without it.” |
| Suggestive | Suggestive feedback makes recommendations on how to improve the project or what could have been included. A peer may point out a problem without providing a direct solution to the problem. | “Make sure not to focus too much on the physics and mostly on the DSP [Digital Signal Processing] side.” “You should not use the word ‘the’ so much.” “The first paragraph was a repeat of the outline given to us and unnecessary.” |
Table 1 illustrates the types of feedback, provides operational definitions [13][14], and provides examples from the dataset. On some occasions, feedback could be classified into more than one category. For example: “This project has a nice scope, and needs to zero on a specific subject of the wireless system, and not focus so much on the communications part.” In this case, the feedback was counted as both reinforcing and suggestive.
CASE STUDY RESULTS
One student did not complete one of the formative evaluations, and thus, there were only 50 formative PEs (as opposed
to 17 x 3 = 51). Consequently, one student only received two PEs during this phase. During the summative report
evaluations phase, one student did not complete the summative report evaluations at all, and thus, three students
received only two summative evaluations each. All other PEs were completed by students as assigned by the research
team. The data were evaluated for normality on each dimension across the PE instruments; no severe departures from normality were observed (|kurtosis| < 3; |skewness| < 2).
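This screening rule is straightforward to apply per dimension; the sketch below uses scipy with made-up ratings, and assumes the kurtosis bound refers to excess (Fisher) kurtosis, which scipy reports by default.

```python
# Illustrative normality screen for one PE dimension (hypothetical data).
# Assumes the |kurtosis| < 3 and |skewness| < 2 bounds refer to excess kurtosis.
from scipy.stats import kurtosis, skew

ratings = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4, 5, 4, 2, 4, 5, 4, 4]  # example scores
g1, g2 = skew(ratings), kurtosis(ratings)  # Fisher (excess) kurtosis by default
print(f"skewness = {g1:.2f}, kurtosis = {g2:.2f}")
print("acceptable" if abs(g1) < 2 and abs(g2) < 3 else "severe departure")
```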
Formative and Summative Analysis
Generalisability coefficients, which measure the variability across the PEs, were used to estimate the reliability of the PE measurements [16]. The generalisability coefficients for the formative evaluation, summative evaluation, and summative presentation evaluation instruments were 0.53, 0.84, and 0.90, respectively, for these data. The estimates were more than acceptable for the summative project and presentation PE measures. However, for these data, the formative PE reliability would only reach an acceptable level (> 0.7) if the number of proposals each student had to evaluate were increased to seven. This level of internal control must be balanced against the practical nature of the hybrid PE model: demanding seven or more evaluations from each student, as executed in some research [13], may not be a practical instructional intervention for engineering courses with heavy workloads.
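The seven-evaluation figure can be reconstructed with the standard Spearman-Brown prophecy formula, assuming the reported 0.53 behaves as a three-rater composite reliability (our reconstruction, not a calculation reported in the paper):

$$\rho_k=\frac{k\,\rho_1}{1+(k-1)\,\rho_1},\qquad \rho_1=\frac{\rho_3}{3-2\rho_3}=\frac{0.53}{3-2(0.53)}\approx 0.27,\qquad \rho_7\approx\frac{7(0.27)}{1+6(0.27)}\approx 0.72>0.7.$$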
Table 2 shows the descriptive statistics for the formative and summative PE evaluation criteria. Across projects and PEs, the average scores are greater than 4.0. The composite scores for the formative proposal, summative report, and summative presentation PEs are M = 4.37 (SD = 0.73), M = 4.23 (SD = 0.83), and M = 4.39 (SD = 0.70), respectively. Two evaluation criteria were consistent from the formative proposal evaluation to the summative project evaluation: clarity and overall quality. Clarity declined slightly from M = 4.27 (SD = 0.75) to M = 4.14 (SD = 0.72), and overall quality also declined slightly from M = 4.35 (SD = 0.74) to M = 4.17 (SD = 0.84). The data were entered into a repeated-measures ANOVA, and the results show no significant differences in clarity or overall quality, F(1, 45) = 1.39, p = .24 and F(1, 45) = 1.03, p = .32, respectively. A likely explanation for these declines is that the formative PE was based on a two-page project proposal, while the summative PE was based on the final project report, which included substantially more information.
Table 2: Formative and summative descriptive statistics.

| Dimensions | Min | Max | M | SD |
|---|---|---|---|---|
| Formative proposal peer-evaluations | | | 4.37 | 0.73 |
| Relatedness | 3 | 5 | 4.72 | 0.53 |
| Significance | 3 | 5 | 4.59 | 0.57 |
| Specificity | 2 | 5 | 4.09 | 0.83 |
| Clarity | 2 | 5 | 4.27 | 0.75 |
| Scope | 1 | 5 | 4.24 | 0.95 |
| Overall quality | 2 | 5 | 4.35 | 0.74 |
| Summative project peer-evaluations | | | 4.23 | 0.83 |
| Relevant literature | 2 | 5 | 4.26 | 0.90 |
| Accuracy | 2 | 5 | 4.34 | 0.76 |
| Structure | 2 | 5 | 4.16 | 0.86 |
| Clarity | 3 | 5 | 4.14 | 0.72 |
| Completeness | 1 | 5 | 4.32 | 0.91 |
| Overall quality | 2 | 5 | 4.17 | 0.84 |
| Summative presentation peer-evaluations | | | 4.39 | 0.70 |
| Organisation | 2 | 5 | 4.42 | 0.67 |
| Use of visual aids | 2 | 5 | 4.36 | 0.78 |
| Preparedness | 3 | 5 | 4.43 | 0.69 |
| Answers to questions | 2 | 5 | 4.20 | 0.80 |
| Extra effort/creativity | 2 | 5 | 4.46 | 0.61 |
| Overall quality | 3 | 5 | 4.46 | 0.64 |
Type of Student Feedback
Table 3 shows the types of feedback that emerged from both the formative and summative PEs. Students provided reinforcing and suggestive comments most frequently. The data were entered into a repeated-measures ANOVA with the change from the formative to the summative peer-evaluation serving as a within-subjects condition. Results show no significant differences for corrective, reinforcing, and didactic feedback from the formative to summative PEs, F(1, 49) = 0.06, p = .81; F(1, 49) = 2.96, p = .09; and F(1, 49) = 0.03, p = .86, respectively. However, there was a significant decline in the frequency of suggestive feedback, F(1, 49) = 6.22, p = .02. A plausible explanation for this finding is that the second evaluation was purposefully summative in nature, and thus, students were less likely to make suggestions to improve the projects.
Table 3: Type of feedback for formative and summative peer-evaluations (F = formative; S = summative).

| Type | F: Min | F: Max | F: Total | F: M | F: SD | S: Min | S: Max | S: Total | S: M | S: SD |
|---|---|---|---|---|---|---|---|---|---|---|
| Corrective | 0 | 4 | 30 | 0.60 | 1.01 | 0 | 5 | 28 | 0.56 | 1.05 |
| Reinforcing | 0 | 12 | 81 | 1.62 | 2.11 | 0 | 7 | 111 | 2.22 | 1.79 |
| Didactic | 0 | 2 | 16 | 0.32 | 0.55 | 0 | 2 | 15 | 0.30 | 0.58 |
| Suggestive | 0 | 4 | 54 | 1.08 | 1.07 | 0 | 4 | 32 | 0.64 | 0.92 |
Relationships among Measurements
The correlation between the scores of the formative and the summative PEs is statistically significant and moderately positive, r = .32, p = .03. Table 4 shows the relationships between the evaluation criteria from the formative and the summative PEs. As can be gleaned, the significance of the project proposal from the formative PE was significantly related to the structure (r = .35, p = .02), completeness (r = .34, p = .02), and overall quality (r = .35, p = .02) of the final project report. The quality of the literature review in the project report was significantly related to the specificity (r = .29, p = .047) and scope (r = .31, p = .03) ratings of the formative PEs. Additionally, the relatedness of the project proposal topic was significantly related to the completeness of the final project report, r = .31, p = .03.

Another consideration was the relationship between the instructor’s evaluation of a student’s project and the summative report and presentation PE scores. The average summative PE score assigned to each project was correlated with the instructor’s final assessment: the summative report score correlated at r = .58, p = .01, and the summative presentation score at r = .70, p < .01. Both correlations are strong and positive, which lends credence to the validity of the model and is consistent with previous findings [1][7].
Table 4: Relationships between the formative and summative evaluation criteria.

| Formative \ Summative | Rel. literature | Accuracy | Structure | Clarity | Completeness | Over. quality |
|---|---|---|---|---|---|---|
| Relatedness | .14 | .19 | .20 | .25 | .31* | .18 |
| Significance | .14 | .18 | .35* | .25 | .34* | .35* |
| Specificity | .29* | .16 | .05 | .05 | .22 | .12 |
| Clarity | .03 | .19 | .25 | .29* | .17 | .23 |
| Scope | .31* | .18 | .00 | .09 | .12 | .18 |
| Over. quality | .11 | .28 | .15 | .23 | .24 | .10 |

* p < .05
Student Satisfaction with Hybrid Peer-Evaluation Model
Table 5 provides the student satisfaction descriptive statistics from the anonymous satisfaction survey administered to
students upon completion of the PE process. The measure demonstrated a high degree of internal consistency reliability
at α = .90. The results demonstrate that 94% of the students were generally satisfied (somewhat satisfied to very satisfied) with having to evaluate three separate projects and with the level of ease associated with the process. Further, 100% were satisfied with the amount of time it took to complete the PEs and with the online surveys and criteria used to evaluate their peers. Far fewer students (47%) were satisfied with the quality of feedback received from their peers. This indicates that students were satisfied with the hybrid PE model itself, but less satisfied with the quality of the feedback they received. Most of the students (71%) either agreed or strongly agreed that they learned from their peers’ projects as part of the PE process. More than half of the students reported enjoying the PE experience (53%) and feeling more confident in providing constructive criticism (53%) as a result. However, only 47% of the students agreed that most courses would benefit from the PE process.
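For reference, a minimal sketch of the internal-consistency calculation follows; the respondents × items matrix here is placeholder data, not the study’s responses.

```python
# Cronbach's alpha for the satisfaction scale (illustrative sketch).
# `items` is a hypothetical respondents x items matrix of 1-5 ratings.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

items = np.random.default_rng(0).integers(1, 6, size=(17, 12))  # placeholder data
print(f"alpha = {cronbach_alpha(items):.2f}")
```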
Table 5: Satisfaction item response frequencies and descriptive statistics.

First section (response columns are percentages; 1 = Not at all satisfied; 2 = Somewhat dissatisfied; 3 = Neutral/Don’t know; 4 = Somewhat satisfied; 5 = Very satisfied):

| Item | M | SD | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|---|
| The number of projects you had to review | 4.47 | 0.62 | 0 | 0 | 6 | 41 | 53 |
| The level of ease of the review process | 4.35 | 0.61 | 0 | 0 | 6 | 53 | 41 |
| The quality of the feedback you received from your peers | 3.35 | 1.17 | 6 | 18 | 29 | 29 | 18 |
| The amount of time it took to complete the review process | 4.35 | 0.49 | 0 | 0 | 0 | 65 | 35 |
| The surveys you used to evaluate your peers | 4.53 | 0.51 | 0 | 0 | 0 | 47 | 53 |

Second section (response columns are percentages; 1 = Strongly disagree; 2 = Mildly disagree; 3 = Neutral; 4 = Mildly agree; 5 = Strongly agree):

| Item | M | SD | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|---|
| The peer-evaluation process was a good learning experience | 3.41 | 1.28 | 12 | 6 | 35 | 24 | 24 |
| I learned from my peers’ projects | 4.00 | 1.22 | 6 | 6 | 18 | 24 | 47 |
| The feedback provided by my peers was helpful | 3.12 | 1.41 | 18 | 12 | 35 | 12 | 24 |
| The peer-evaluation process was clearly stated and easy to follow | 4.47 | 0.87 | 0 | 6 | 6 | 24 | 65 |
| Most courses would benefit from using peer evaluations | 3.47 | 1.28 | 6 | 18 | 29 | 18 | 29 |
| I enjoyed this experience | 3.65 | 1.32 | 12 | 0 | 35 | 18 | 35 |
| I feel more confident in providing constructive criticism | 3.71 | 1.21 | 6 | 6 | 35 | 18 | 35 |

M = mean; SD = standard deviation.
Anecdotal Observations on Classroom Behaviours
The same instructor has taught this course at the same institution for several semesters. The instructor noted several key differences from previous semesters as a result of implementing the hybrid PE process. Perhaps the most important observation is that students put more time and effort into their course projects and, as a direct consequence, the average project quality increased relative to previous semesters. Further, because of the rigid deadlines associated with a formal PE process, more students attended classes regularly and submitted their projects on time to the course management system. A final important observation was that students were generally more prepared for their project presentations. Though these observations are anecdotal rather than scientific, they are important notes for instructional practice and, potentially, for future research efforts.
DISCUSSION
The current article adds to the understanding of the PE process by providing a systematic, hybrid model that employs
both traditional and online PEs as an alternative method of assessment. The hybrid model was designed to be generic
enough to fit in any engineering course with a project component while balancing the time-constraints associated with
the PE process. Of course, the results of this study must be interpreted in light of the limitations. This case study has
been conducted using data that were collected during one semester with a relatively small sample size. Further, the
degree of accuracy of the measures may also be questionable, since the items are self-reported and since the PE process was not truly anonymous: students may or may not have shared their identities. Last but not least, for most students this was their first experience giving PEs and providing comments on engineering projects; the instructor had to point out a few notably inappropriate evaluations and comments, such as casual and unprofessional wording, during the second stage of the PE process.
In evaluating the formative and summative peer-evaluations, students generally assigned their peers high marks in each phase of the process, as evidenced by the high composite scores (> 4). Though the analysis did not detect any
significant differences on clarity or the overall quality from the formative proposal PE to the summative project PE, a
slight decline was noted. A probable explanation is that the final project reports were substantially longer and more
comprehensive than the project proposals. The most common types of feedback provided to peers were reinforcing and
suggestive in nature in both the formative and summative evaluations.
The analysis also revealed that the instructor’s evaluation of a final project report was strongly and positively correlated
with the summative PEs. This finding is consistent with previous research that suggests PEs closely approximate the
evaluations of the instructors when clear guidelines are available [1][7]. Further, the finding also demonstrates the
robustness of PE as an alternative method of assessment in engineering courses. In contrast to a traditional method of
assessment (e.g. exams), the hybrid PE model encouraged constructive criticism as a goal for modelling real-world
activities in the practice of engineering. Thus, the hybrid model of PE presented addresses the call for more alternative
methods of assessment and reinforces the practice of engineering instructors using the PEs to partially determine grades.
Students were generally satisfied with the hybrid PE model. Specifically, the results show that students were satisfied
with the PE process, the instruments employed, and the number of proposals each student had to evaluate. These are
important considerations when attempting to implement practical and replicable instructional practices. However, the
results also showed that only 47% of the students were satisfied with the quality of feedback received from their peers.
There may be a sense of cognitive dissonance in that a student may have put ample effort into providing critical
feedback to their peers and received feedback that may not have been as critical in nature. Another observation is that
some students provided casual and negative feedback which might have discouraged the students who received those
comments. In future practice, the instructor could review and potentially edit the comments before releasing them to the
students. Further, the instructor should provide examples of professional and constructive comments at the beginning of
the evaluation process, as students may lack such experience.
In closing, the authors believe engineering educators should be mindful of a student’s perception of the assessment
methods employed within their courses, and can take careful steps in the integration of alternative methods based on
sound empirical evidence. While this research has provided evidence to demonstrate the instructional value of a novel,
hybrid PE model, more work needs to be done. For instance, replicating the hybrid method in different classroom environments, both inside and outside of engineering, is necessary to generalise the findings and develop best practices.
REFERENCES

1. Falchikov, N. and Goldfinch, J., Student peer-evaluation in higher education: a meta-analysis comparing peer and teacher marks. Review of Educational Research, 70, 3, 287-322 (2000).
2. Wellington, P., Thomas, I., Powell, I. and Clarke, B., Authentic assessment applied to engineering and business undergraduate consulting teams. Inter. J. of Engng. Educ., 18, 2, 168-179 (2002).
3. Kennedy, G.J., Peer-evaluation in group projects: is it worth it? Proc. Australasian Conference on Computing Education, Newcastle, New South Wales, Australia, 59-65 (2005).
4. Gatfield, T., Examining student satisfaction with group projects and peer assessment. Assessment & Evaluation in Higher Educ., 24, 4, 365-377 (1999).
5. MacAlpine, J.M.K., Improving and encouraging peer assessment of student presentations. Assessment & Evaluation in Higher Educ., 24, 1, 15-25 (1999).
6. Topping, K.J., Smith, E.F., Swanson, I. and Elliot, A., Formative peer assessment of academic writing between postgraduate students. Assessment & Evaluation in Higher Educ., 25, 2, 149-169 (2000).
7. Topping, K.J., Peer assessment between students in colleges and universities. Review of Educational Research, 68, 3, 249-276 (1998).
8. Dochy, F., Segers, M. and Sluijsmans, D., The use of self-, peer- and co-assessment in higher education: a review. Studies in Higher Educ., 24, 3, 331-350 (1999).
9. Falchikov, N., Peer feedback marking: developing peer assessment. Innovations in Education and Teaching International, 32, 2, 175-187 (1995).
10. Boud, D., Cohen, R. and Sampson, J., Peer Learning and Assessment. In: Boud, D., Cohen, R. and Sampson, J. (Eds.), Peer Learning in Higher Education: Learning from & with Each Other. London: Kogan Page, 67-81 (2001).
11. Lin, S.S.J., Liu, E.Z.F. and Yuan, S.M., Web-based peer assessment: feedback for students with various thinking-styles. J. of Computer Assisted Learning, 17, 420-432 (2001).
12. Kali, Y. and Ronen, M., Design principles for online peer-evaluation: fostering objectivity. Proc. Computer Support for Collaborative Learning Conference, Taipei, Taiwan, 247-251 (2005).
13. Tseng, S.C. and Tsai, C.C., On-line peer assessment and the role of the peer feedback: a study of high school computer course. Computers & Education, 49, 4, 1161-1174 (2007).
14. Ritzhaupt, A.D. and Gill, T.G., A hybrid and novel approach to teaching computer programming in MIS curriculum. In: Negash, S., Whitman, M., Woszczynski, A., Hoganson, K. and Mattord, H. (Eds.), Handbook of Distance Learning for Real-Time and Asynchronous Information Technology Education. Hershey, PA: Idea Group Reference, 259-281 (2008).
15. Chi, M.T.H., Constructing self-explanations and scaffolded explanations in tutoring. Applied Cognitive Psychology, 10, 33-49 (1996).
16. Crocker, L.M. and Algina, J., Introduction to Classical and Modern Test Theory. Belmont, CA: Wadsworth Group/Thomson Learning (1986).