Assessment and Feedback

Day 1: Why peer assessment and feedback?

Welcome to Day 1!  

We hope you have a great start to this coffee course by browsing through our thoughts and links to resources on peer assessment below in your coffee breaks today. Of course, if you try to read everything linked here, you will need longer than a 15-minute coffee break! Please bear in mind that it is up to you how much material you read and digest from this and subsequent posts. The materials will remain available on this page after the course is over, so you can take your time and explore at your leisure.

What is peer assessment?

In this course, we will be examining peer assessment in detail, and exploring some of the key things to think about when designing peer assessment activities, including how to prepare students to assess others, how to ensure the reviews are reliable, and which technologies can facilitate it.

To get started, let’s establish what we mean by peer assessment and why it might be valuable as a teaching strategy.

According to the literature, peer assessment is “an arrangement in which individuals consider the amount, level, value, worth, quality, or success of the products or outcomes of learning of peers of similar status” (Topping, 1998, p. 250). When used in teaching, it is “where students judge and make decisions about the work of their peers against particular criteria” (Adachi et al., 2017, p. 295).

Watch this video from Reading University, with Martha Marie Kleinhans giving tips on peer assessment.

Variations in types of peer assessment

As you might imagine, there are many different types of peer assessment that can be conducted (Falchikov, 1995). There are legitimate purposes for all of these variations, and we have summarised them for you here:

Summative or formative

Will the assessment affect the final grade, or just give a student guidance for improvement in the next task?

Determinative or informative

Is the peers’ assessment treated as conclusive, or will it just inform the judgement of an expert marker?

Independent or collective

Are the peers making independent judgements, or are they seeking agreement?

Anonymous or accountable

Are the judgements anonymous, or can students question their peer assessors?

If you are interested, these and other types are set out in a typology by Topping (1998, p. 252).

Differences and overlaps between peer assessment, peer review and peer feedback

Peer assessment and peer review are terms that are often used interchangeably. Generally, though, peer review is more about reflective feedback, which may or may not contribute a small participation mark to the final grade. Peer assessment is usually more structured, and students’ scoring of their peers’ work typically contributes in some way to the final grade.

Both peer assessment and peer review can provide opportunities for students to give and receive feedback to and from their peers. Peer review generally implies a much more detailed and reflective style of feedback, whereas peer assessment may be far more structured and more focused on scoring against a detailed rubric.

Watch this video on peer review from Dr. Sigi Jottkandt at UNSW:

Why Peer Assessment?

You have already heard some reasons from the academics in the recordings above. The peer-reviewed higher education literature discusses a wide range of advantages of peer assessment. These are summarised by Adachi et al. (2017) as follows:

  • Development of soft, transferable skills
  • Authentic assessment
  • Cultivating life-long learning
  • Promoting active learning
  • More varied feedback
  • Skills for giving and receiving feedback
  • Less time and input by teachers (although this has been contested)

Another advantage that has been discussed is that peer assessment encourages deep rather than surface learning and critical thinking; in other words, it is conducive to constructivist learning (see Boud et al., 1999).

Some issues and tensions

Boud et al. (1999) discussed the tensions they observed at that time in reconciling the newly emerging constructivist approach to assessment, which requires a more active role for students along with authentic assessment approaches, with existing traditional approaches to assessment. How do constructivism and authentic assessment sit with mandated examinations, tests, and summative reports and essays?

We hope to explore these issues and critiques further in a later coffee course around authentic assessment.

Discussion 1

What do you think of these tensions and dilemmas described by the authors cited above?  Have you noticed these conflicting goals and models when assessing your students?

Why not peer assessment?

Part of the tension with traditional approaches to assessment is that the risks of changing assessment, or of unusual practice, need to be managed. Anxiety and pushback from students and academics may be expressed as concerns about validity, reliability or fairness (see Days 2 and 3). Unmanaged anxieties may result in student errors and opposition, which can overload teachers, and alienated students may be tempted to collude and to “warn” future cohorts. In Day 2 we look at how to minimise such risks by preparing students to accept, and succeed at, peer assessment and peer feedback.

Discussion 2

  Think of a time when you or a colleague used peer assessment. From the material covered today, can you describe what type of peer assessment it was, and its purpose (e.g., summative or formative, group or individual)? Can you pass on any “lessons learned” from this experience?

References

  1. Chie Adachi, Joanna Hong-Meng Tai & Phillip Dawson (2017) Academics’ perceptions of the benefits and challenges of self and peer assessment in higher education, Assessment & Evaluation in Higher Education, 43:2, 294-306. DOI: 10.1080/02602938.2017.1339775
  2. Stephen Bostock (2000) Student peer assessment, Higher Education Academy. Available: http://www.reading.ac.uk/web/files/engageinassessment/Student_peer_assessment_-_Stephen_Bostock.pdf. Accessed 1/6/2018.
  3. David Boud, Ruth Cohen & Jane Sampson (1999) Peer learning and assessment, Assessment & Evaluation in Higher Education, 24:4, 413-426. DOI: 10.1080/0260293990240405
  4. Nancy Falchikov (1995) Peer feedback marking: Developing peer assessment, Innovations in Education and Training International, 32:2, 175-187. DOI: 10.1080/1355800950320212
  5. Keith Topping (1998) Peer assessment between students in colleges and universities, Review of Educational Research, 68:3, 249-276. JSTOR: http://www.jstor.org/stable/1170598
  6. Michael Wride (2017) Guide to Peer Assessment, University of Dublin, Trinity College. Available: https://www.tcd.ie/CAPSL/assets/pdf/Academic%20Practice%20Resources/Guide%20to%20Student%20Peer%20Assessment.pdf. Accessed 1/6/2018.

Additional resources

Video: Student thoughts on Peer Review


28 thoughts on “Day 1: Why peer assessment and feedback?”

  1. I’ve used peer assessment mostly in a formative way for low-value assessments. The reason for this is that it can be difficult to manage and that it takes some effort and skill to give good feedback even for experienced tutors – we have training for tutors on how to do this, so why would we expect our students to be able to do this with little or no training?

    What do you think of these tensions and dilemmas described by the authors cited above? Have you noticed these conflicting goals and models when assessing your students?

    I’m glad that the article starts off with the pragmatic reasoning that a lot of peer assessment is brought in because classes are getting bigger, so there is a financial need to do it rather than an educational need. I’ve had some students point out that we are effectively delegating our work to them rather than doing it ourselves. While there may be some educational benefits to doing it (as the video points out), I think we need to take care that the pragmatic doesn’t encroach too much on the educational value that comes with good feedback (which students rarely do as well as staff).

    Think of a time when you or a colleague used peer assessment. From the material covered today, can you describe what type of peer assessment it was, and its purpose? (e.g., summative or formative, group or individual etc). Can you pass on any “lessons learned” from this experience?

    Mostly I use individual assessments which are small in value and formative. I have used summative group feedback where the student feedback doesn’t count towards any mark per se; i.e. students give feedback to the group presenting, but this feedback is purely for learning and doesn’t affect their mark whatsoever.

    My personal tip when the feedback counts towards marks is to have very structured templates for feedback, where it would be difficult for students to deviate from the criteria and there is little room for subjectivity. Otherwise you run the risk of students complaining that they got an easy marker, or that people were playing favourites and allocating more marks to their friends.

    1. Hi David, thanks for your thoughts and for sharing your tips. These tips are very valuable in my opinion, especially to anyone new to teaching in higher education. It is safer to keep peer assessment to low stakes assessments, where the mark is a small proportion of the total, or to formative assessments that are not given a mark towards the final result. Your other tip on having a very structured feedback template where the assignment does count towards final results is also great. There is quite a science in constructing clear and easy guidelines or rubrics for peer assessment.

      I agree wholeheartedly with your comment that it will not be a good development if peer assessment is implemented at a large scale simply due to the number of students and lack of staff, rather than from a solid pedagogical rationale.

  2. I was skeptical about peer assessment, until I had to do it myself as a student. I found it useful, especially when feedback came along with the assessment. Provided there is a stage where the instructor reviews the peer assessment, I don’t see a problem with it.

    I now run a course with weekly anonymous individual summative peer assessment. I found the students’ marking broadly matches my own, except that the students mark about 10% lower than I do. The peer assessment also has about an 80% correlation with final results for the course. The students have no difficulty with the use of peer assessment, but I only use it for 10% of their overall marks and do reassure them that I check it. Also, I use a very simple marking scheme of 0, 1, or 2. I use Moodle to add up and average the marks given by each student for each student each week.

    Another course where I tutor uses group peer feedback with summative group peer assessment of the feedback. Students are also individually peer assessed on their teamwork. The group assessment works fine, but the individual peer assessment causes tension, because the teams are small. However, this is an authentic form of assessment, reflecting the real-world conflicts of teamwork. One problem with that course is that a complex bespoke system is used for collecting the feedback, and there are occasional breakdowns with the system, resulting in delays. Feedback, especially formative feedback accompanying assessment, has to be prompt so it can be acted on.

    1. Hi Tom, thanks for sharing your examples of using peer assessment. It sounds like you are using it in the best way, and successfully. I am interested in the reason for the breakdown in the complex marking process for the group peer review – is this a Moodle issue, i.e. in Gradebook?

      1. Jill, I have found Moodle works very well for peer assessment. I use the Moodle forums for this. Students rate each other’s postings and Moodle finds the average.
        The problems I have had are in a course which doesn’t use Moodle and for which bespoke software was written. There are bugs which stop it working occasionally, and the interface has a few problems. If you have just a few instructors doing all the marking, then it is possible to cope with bugs and quirks in the software, but when you have hundreds of students all marking each other it can be chaos.

        As an example of one quirk, there is a scale for marking students, with a frown face pictograph at one end and a smile at the other, with numbers in between. I spent a whole semester clicking on the numbers to rate students, without realizing that the pictographs were also click-able and represented minimum and maximum possible grades. I spent the whole time marking on a five point scale, when it was actually a seven point scale. I suspect some of the students were also doing this.

  3. Hi David, you said you use peer assessment “mostly in a formative way for low-value assessments”. I like the idea of using peer work for low-value assessment, but how do you use it in a “formative way”?

    1. So the ones I use are more like “milestone” assessments. I get students to give other students zero for not completed, 1 for completed to a reasonable standard, and 2 for completed to a high standard. When it comes to 1 or 2, I usually give them a list of tick-box criteria, and if they get more than, say, 80% of the tick boxes then they get full marks. Most of these formative assessments are related to whether a student has done components of an assessment. For example, if the summative assessment is to do a report on X, I might give them a formative assessment to do a much shorter and simpler report on Y. The peer review checklist would have items such as: do they have a cover page, do they have an index, do they have an intro, does the intro tell you the report’s aim, etc. When they get the feedback from the peer-reviewed report on Y, they can then use it to help them when they do the summative report on X.

  4. As a tutor, I haven’t had an opportunity to use peer assessment yet. However, I have been a student in courses that used peer assessment. All courses were formative, independent, and anonymous, and structured as a combination of both peer assessment and peer review. There were rubrics, but also the opportunity to provide reflective feedback. Interestingly, in every instance, the majority of students simply ticked boxes in the rubric and chose not to engage in the reflective feedback.

    The lesson I have drawn from these experiences, as well as marking assessments as a tutor, is that students largely seem to care more about the grade than the feedback. This is something that I am yet to get my head around. As a student, I have never particularly cared about grades. I detested receiving a paper where the “feedback” was ‘tick, tick, question mark, tick, __%, “Well done!”‘. This “feedback” does not provide any opportunity to learn or develop. Perhaps this is why students prefer peer assessment over peer reviews; they are rarely exposed to reflective feedback themselves. (This speaks to a larger issue about marking expectations, but that can of worms can keep for another day).

    1. Hi Bhavani, an interesting insight into student responses, thank you. It is interesting that students are more interested in marks than feedback but as you say this is no doubt a reflection on the whole institutional value system in higher education, where in essence, students are competing in a race to get a piece of paper that will get them the best job. I guess the job of the tutor or academic is to try to overcome this obsession and get students genuinely engaged in learning – a tough ask!

    2. Bhavani, that’s an intriguing issue. Rubrics or checklists usually don’t generate tips (“advice for future learning”) for future tasks which have instrumental value. But even if they did, it would be reasonable for students to be skeptical of “advice” from peers. I think this changes as students become more senior and have a stronger sense of ownership of the notion of “quality” in their discipline. I am speculating that peer-review becomes more meaningful as students get closer to being peers with professionals.

  5. I haven’t implemented peer assessment or peer review personally, although there was some peer review used in a class I taught into this year. As a student, however, I have been involved in peer assessment while completing MOOCs. In this framework, the assessment was very tightly controlled, with students only able to give ‘pass’ or ‘fail’ against a series of criteria (alongside text to explain their decision if desired) for anonymous assignments. I think the act of providing the assessment was itself also worth some component of the final mark for that assignment (i.e., so students were driven to do it). I found this process really useful, particularly for seeing the quality of other students’ work, as was outlined above.
    I’m not too bothered by the sorts of dilemmas outlined – at least in science, peer review is a necessary skill to learn. But, as Tom hints, you’d need to find a way for the process of peer review to be formative for the students so that they can provide valuable contributions to their peers – I’m sure this doesn’t come naturally to some!

    1. Hi Angela, thanks for your interesting point about your MOOC experience. Obviously with large mass courses like that, peer assessment is probably the easiest and most effective way to assess – and it sounds like the MOOC you were involved with had created some fool-proof tools and methods for this. Also your point about science being a discipline in which peer review is essential is a great one. The rationale for using peer assessment is often that this is a valuable skill to learn for the real world.

  6. Thank you for this interesting topic.

    In my opinion, peer assessment (PA) is a powerful tool, when combined with traditional marking, and can alleviate the concerns of any assessor: being fair and consistent. In other words, combining expert marking with PA can provide the assessor with some feedback from students on the work being marked, which in turn assists in keeping unmanaged biases at bay and ensures fair treatment.

    With this said, I believe PA also comes with some challenges, including peers’ lack of critical judgement, feedback that is too harsh or too lenient, and subjectivity. However, these challenges can be addressed by having a clear marking rubric, giving clear instructions on the expected assessment methodology, and motivating the students to be responsible and honest. Moreover, I have met many students who hate the idea of being assessed by peers, which has a cultural aspect to it.

    1. Hi Zohair, thanks for your thoughts! I think the last point you made is particularly interesting, relating to some of the personal and cultural issues some students face when undertaking peer assessment. Is this something you have found in your own experience as a teacher or student? I’d love to hear more about the particular concerns the students had – we will look at this issue more in-depth tomorrow as well!

  7. I’ve tutored a course a few times now that uses peer assessment for presentations. In this assignment, all the students watching complete a rubric with feedback which then is collated and averaged, and then the lecturer/tutor makes the final judgement. All going well, there are a number of students in each class who can give the presenter/s feedback on their assignment. By the typology above, that makes this summative, informative, collective and anonymous. In general, I think it has worked well, although I think it disadvantages the students at the start more than the ones at the end of the presentation schedule. Some students have complained based on perceived unfairness in their classmates and perhaps a misunderstanding of how the process works.

    I am a fan of peer review for formative purposes; I really like the idea of being able to get feedback on a written piece and then being able to refine it. Something similar has caused a problem with some of my students before, when one plagiarised from another in the final submission. I can see some students not being comfortable sharing their work with other students, as it can be a very stressful process. I think the most valuable part of peer assessment is not in getting feedback from other students, but actually performing the assessment itself. Being able to take the marking criteria and apply them to someone’s work, and then give feedback on it, gives students a real understanding of how the marking process works, and what markers are looking for. Then they understand how important it is to be clear, and to consider readers while writing. Personally, my writing has improved so much from marking and giving feedback on students’ and peers’ work. As was mentioned in the video, however, giving examples of good feedback and scaffolding the process is absolutely necessary to make it work well.

    1. Thanks for your comments, Lauren. I particularly like your point about peer assessment being valuable to students as a process they have to go through, which teaches them what it is like to be an assessor and therefore hones their understanding of what is required of them in the assessments they complete.

    2. Great comment Lauren! That’s an often overlooked feature of peer assessment I think, that students get to see what it is like to mark an assignment and give it feedback, and some of the difficulties in doing so. I think in some cases they are not used to giving a professional judgement on a work, and it can be a steep learning curve. But definitely something that is constructive for the workplace in the future. I know peer review and feedback is an essential part of our team’s coffee course writing, for example!

  8. I have had no direct experience of peer assessment as described here either as a teacher or a learner, but having spoken with a few people who have used it, I’ve become interested in how I might apply it in my own teaching. I have done some undergraduate teaching, but mostly I teach postgraduate students in small seminar groups where the teaching is more a form of research and skills training (so not content based) and there is no formal assessment or grading. The students are international students from an educational system that has a very top-down approach to learning, so peer learning and assessment are alien concepts. Students present their research and then we invite questions and comments just like a regular seminar. What I’ve noticed (and here I’m echoing Lauren’s comment) is how much the students learn from commenting on the work of others. We spend a lot of time going over aspects of research design such as constructing an argument and devising research questions, but students often find these easier to identify and critically evaluate when it is one step removed from their own work. Over time, we see these skills develop and become reflected in the work that the students then present in seminars and in papers. So I guess this is a type of formative, informative and collective peer assessment.

    My experience of this kind of teaching leads me to believe that there are real benefits to peer assessment and feedback that it would be great to implement in a more systematic manner and also for undergraduate students. I can see that it would take a bit of thinking through and careful implementation to assuage the fears of some students as mentioned in some of the previous comments, but that the benefits in terms of encouraging participation, active learning, and collaboration would make it worthwhile.

    1. Hi Sally! I think you’ve made a great distinction there where the sorts of discussions and learnings that happen in small seminars and tutorials are themselves a form of formative and collective peer assessment – I like that a lot! In Day 2 of the course we are going to look closely at how to support students to do peer assessment to try and mitigate some of the anxieties around it in particular.

  9. I used peer review as part of tutoring a course, for feedback on student abstracts. It was mostly formative, informative, and independent, and I thought this worked well to give students experience in grading, but also to have external eyes (mine, as the tutor’s) on the abstracts to provide additional feedback and ensure a certain standard. I believe this method works well, as inconsistency can result from having student feedback determine a great deal of the grade, although that could be avoided by training students up front and perhaps working through example assignments together.

  10. I’ve used informal, formative peer review as a high school teacher, but in the HE context I’ve been involved in supporting peer assessment activities only twice. Both were anonymous, individual, low-stakes assessments; in one, students used the Workshop tool in Wattle, and in the other, they used hard-copy assessment ‘forms’ within tutorials. In both cases, the course convenors had made an effort to scaffold the students through the task, providing instructions and rubrics for students to use.
    In the Adachi et al article, the challenges of peer assessment found were a negative perception of students’ judgement skills and their expertise in making judgements, the potential for disruption of student/teacher power relations, and time constraints. There were also concerns about students’ engagement with the assessment, and the (lack of) incorporation of the feedback into their future assessments.
    In my own College context, I feel that many of the concerns regarding peer assessment (which do tend to reflect those listed above) could be mitigated if we simply used it more. If students are given only two chances through their entire degree to assess their peers’ work, then they naturally may struggle with evaluating and giving feedback. Both instances I’ve been involved in were well-designed, ‘feed-forward’ tasks with clear assessment instructions and outcomes, and rubrics for students to use when marking. If we incorporated tasks like these strategically throughout a degree, there potentially would be more opportunity to embrace the benefits of peer assessment, and overcome some of the perceived (and real) tensions.
    I will concede that time constraints in regards to moderation are a real concern – as far as I understand, moderation can take about as much time as if the marker had done the assessing in the first place (but again, this might be improved with students having more opportunity to do this kind of task). Other ‘lessons learned’ would be to ensure that the assessment is clear, meaningful (and preferably ‘feed-forward’), linked to course learning outcomes, and that the rubric students use is straightforward, easily understandable, with not too many criteria, and given to students beforehand. I think a ‘practice run’ or some kind of modelling is also ideal, so students can see the way that their teacher is interpreting the assessment and rubric criteria, and can ask any questions they might have before they start. This would also be an ideal time to talk about empathy and constructing useful feedback.

  11. I have never used peer assessment in my classes, but I use peer review quite a bit. I’ve really grown to appreciate how it allows me to provide feedback on a large number of student drafts in a very short period of time. I would never be able to do it by myself. On the other hand, many students do find it challenging, while others do not value their peers’ feedback. I’ve tried a number of different techniques: individual vs group, in person and online, with different types of scaffolding, but I’m never quite fully satisfied with the outcome. I hope I will get some good ideas from this Coffee Course.
    As for peer assessment, I might steal Lauren’s idea to use it for presentations.

  12. I use peer feedback as much as I can, ranging from discussions about their emerging ideas for assignments to reflection on completed work that ‘counts’ towards final work. I also think it’s a great idea to familiarise students with rubrics/marking schemes through the experience of either self-assessment or assessing anonymous samples. I would not use peer assessment towards a final grade, though. I think that peer feedback, even with the best of scaffolding and oversight, raises issues of fairness. It’s tempting to think of it as one thing: peer feedback. Actually it’s 40 ‘things’ if you have 40 students assessing each other, even if you’ve done your best to guide them. So in my opinion it’s super useful as assessment within the learning process (known as ‘assessment for learning’), but inappropriate if it counts towards final grades, even a little bit (known as ‘assessment of learning’).

  13. I haven’t used peer assessments in my tutorials. My last peer assessment experience was in a PTD training session. The aim was to reveal the importance of clear marking criteria. Two tutors formed a group to make an insect with clay. Then the group travelled around the classroom to mark other groups’ work and leave feedback. The assessment was formative, anonymous, informative and collective.
    Although the statistics showed we ruined the experiment, I had a deeper understanding of the learning objective. More critical thinking was required. It is an effective way to engage the students. I would like to try peer assessment in my future tutorials.

  14. I’m going to share my experience as a learner who has experienced peer assessment. My undergraduate degree is in Fine Arts, majoring in Visual Communication (Advertising), and peer assessment was very common in our program. Some teachers structured it better than others, but I’ve done both summative and formative. The purposes were varied – to get more varied feedback, develop active learning, develop skills for giving and receiving feedback, and develop soft, transferable skills. It was never anonymous.
    I remember that at that time, the majority of us in the class (me included!) were more concerned about the final grade than the feedback that we got. But I can see real value in doing peer assessments, because giving and receiving feedback were necessary skills in the field of creative advertising, where we work with creative teams and clients. It was also a good opportunity for me to reflect on my own work as I was going through the rubrics. It also made me realise how extremely difficult it is to assess and give feedback on a piece of work and articulate that to a fellow learner.

    On the part of the teacher, I agree with Zohair’s comment that it can alleviate the concerns of any assessor about being fair and consistent.

    I think what was lacking in our program’s implementation of peer assessment was the care factor for the students’ well-being – like how they would approach such a confronting exercise. Our teachers always pointed out that it was supposed to motivate us to do better, but some ended up crying or feeling very bad about their work afterwards. There was also no scaffolding done to guide the students in giving the feedback.

  15. My experience with peer assessment was a real stand-out (in a positive way) and I still clearly remember the conversation it led to and the person I spoke with, even though it has now been several years. The context was my application for associate fellowship of the HEA through ANU. My written application was reviewed by a colleague at CHELT, and the discussion we had not only helped me strengthen my application (which was well received, with only minor changes required for my accreditation to be formalised), but also left a lasting impression on me, such that I went on to mentor for the program later. Linking in to the terminology above, it was formative, and done in two ways – prior to meeting, my “feedbacker” annotated my application. During our face-to-face meeting, rather than going through those comments, my “feedbacker” instead asked me a series of questions which prompted me to remember details and think more clearly, in such a way that I knew exactly how to complete my application. It was a truly rich conversation and a very positive interaction, embodying the principles of the HEA process itself, notably reflectivity.

  16. I use peer assessment in a summative way. I use the Workshop tool. A brief description is below:
    Using a standard 5-step risk assessment scaffolding, students are asked to pick a topic of their choosing to write an essay on – this is worth 30%. They are then asked to create a PowerPoint presentation of their essay of no more than 8 minutes, with a voice-over, for their peers, and at the same time to upload a discussion question related to their topic in the discussion forum. Each of them is given a rubric and asked to assess two of their peers’ presentations and engage with the questions that these peers have posed. This assessment is worth 30%. I have found this useful mainly because:
    1. Some students write well and some speak well. By creating an assessment task that weights the two equally, I feel this is better taken into account, particularly for international students. Allowing them to do a voice-over also prevents them from presenting in front of people and getting marked down for engagement and style. In fact, one international student I had this year, who didn’t score so well in the written essay and hardly ever spoke in class, was given an average of about 95% for her presentation by her peers. As mentioned above, the student feedback has been positive, with many saying they had no idea of environmental health issues in other countries and that this has really broadened their perspective. The discussion forum engagement has really thrown me – the posts are of such high quality, and there was for the first time real conversation between the in-person students and the online students (something I have really struggled with).

    The only downside for me is the marking. I would be very keen to hear thoughts on the phenomenally high marks they award each other. The only solution I could come up with is weighting down that assessment, but is there a better way?

    1. Hi Aparna, thanks for sharing this example. I like this idea of using the Workshop tool in a way that helps students demonstrate their strengths in different areas, particularly with speaking/listening and writing, and for students who are not native speakers of English. I think this is a really inclusive way of structuring a peer assessment! Great question about how to manage the marks that students give each other – has anyone else had a similar issue? I wonder if the rubric can be adjusted to help with this, as you suggest? (Although it’s nice that the students are giving each other good marks!)
