Feature Showcase
Peer Evaluations: Gathering Student Feedback for Grading

While group projects are essential for developing career-ready graduates, evaluating each student's performance for grading can be tricky.

That's why our business simulations feature a peer evaluation tool. Peer evaluations help promote accountability, mitigate social loafing, and gather essential feedback to assess the contribution and effort of each student.

Watch Eric Smith, Capsim's Director of Client Relationship Services, provide a walkthrough of the peer evaluation tool and best practices to get the most out of the activity.

Video Transcript

"Hello, everyone, my name is Eric Smith. I am the director of Client Services here at Capsim, and today we're going to be taking a look at the peer evaluation tool available in all of Capsim's platform simulations.

I suppose a good starting point would be to take a look at what a peer evaluation is. So if we go to peer evaluations on the left menu and click on overview, this window is a little tough to see. So what I've done is I've downloaded this document and opened it up here in my browser so we can take a look at it a little more easily.

I'm sure most of you are familiar with why we use peer evaluations. But if you're using a team-based activity like the simulation in class and you want to promote individual-level accountability or measure individual performance within the team, a peer evaluation is a great way to get that information. You can then use that information toward their participation grade.

Now, in Capsim's peer evaluation, there are three components. There is a self-management and accountability portion, where participants measure themselves on their preparation, their meeting attendance, and their ability to communicate within the group. Then there are quality of work and quantity of work, which are assessed for the other team members. You can see the eight items that go into quality of work here, but just a few examples: Were they open to hearing others' opinions? Did their presence on the team contribute to improving the team's performance? Did they put forth good effort? And so on. Quantity of work is a little more subjective, right, but it can give you some insights into the overall contribution of each team member.

I suppose the biggest thing to keep in mind here is that we're asking students to split 100 points across the team to measure each individual's contribution. So, just to be clear: if we have a team of five, the average score is going to be 20 for each team member. If we have a team of four, the average score would be 25 for each team member. So, you want to remember the group size when you're looking at those individual contributions. Make a mental note of that when grading.

So, if we decide to use a peer evaluation, the first thing we need to do is schedule the assessment. Now, I've got one peer evaluation in practice round one and another peer eval scheduled for competition round eight. We're going to go ahead and delete this evaluation and reschedule it, just so you can see the process.

The practice round one evaluation, we're going to look at the results of together and talk about some best practices we can apply early in the course to get ahead of potential problems. In competition round eight, really what I'm after is the overall performance of each group member. That one is going to be more impactful on the participation grade than the first peer eval. But I like getting that early insight into whether or not there are problems within the team, so it gives me an opportunity to intervene. And in the end, I like getting the information I need for grading. So, as a best practice, I would say use at least two peer evals: one at the end of the practice rounds, and one at the end once the simulation is wrapped up. You could make a case for more or fewer; whatever works best for your course is fine.

But the most common question I get is, should we show the results to students? Personally, I would say no. Even though we present the results in an aggregate format, so students only see the averages of what their other group members submitted for them, it can still ruffle some feathers, hurt some feelings, and cause drama in the class. And if I don't show them the results, I don't have to deal with that. Selfishly. There are some professors who say students should be aware of how they're perceived within their group, that these are things they need to work on. And I'm fine with sharing that report with students if that's your preference. But it's really up to you whether you want the students to be able to see the results or not.
We always want to publish the assignment; that's what makes it visible. We put in the corresponding round we want it to show up in, then we just plug in our start date and time and end date and time, and we hit the schedule evaluation button. Now we're all set, right? We've got our peer eval. This one's already been completed, so we don't see the dates here, but the competition round eight evaluation would happen next month, at the end of this fictitious course.

Okay, so let's say the first peer eval has been completed, and we want to take a look at the results. I'd go back to peer evaluations, click on reports, and then continue. Now, there's an option to view the report over here on the right. We're going to view the peer evaluation from practice round one. Here, I can see the students' names and the evaluation status, meaning did they do the assignment or not; that's probably the most important thing we're looking at. And then we can see team names, along with how they did, out of five, in terms of self-management and accountability, the quality of their work, and then the much more subjective quantity of work. Now, in this course, it's four teams of three, so we get three students each on Andrews, Baldwin, Chester, and Digby.

What I do is export this to Excel, which I've already done here, and we can open it up and look at it together. We still see the students' names, the evaluation status, and their scores, but we also get the comments submitted about each student by their other team members. So, we know Joey's been an amazing teammate, but Sarah has never reached out to her group mates about the project. She's completely removed from the group. There are some issues there that we might want to get out in front of. Let's see if anything else stands out. Ed is not a good teammate, and then the other teammate said, 'Who is Ed?' as if they've never met him, even though he's part of their group. This is another one where I might want to step in and make sure this team has a plan for moving forward.

So, that first peer eval is really about getting some insights into the group. In this case, I know team Andrews and team Digby each have a low-performing team member, and it may be beneficial for us to step in and give them some coaching. There are a lot of different ways you could go about doing this. Personally, I like to have the team complete a team charter. I have an example here, and if you would like something like this, we're happy to share it with you. You could just email support@capsim.com and say, 'Hey, do you have a team charter I could give my students after the peer eval?' We have all sorts of resources we're happy to share if you need them. But a team charter basically gets the team to agree on a number of things. First, they share their information and do a sort of skill inventory: What are our strengths and contributions? What are some areas each individual needs to develop? But most importantly, they set some goals. They think about the potential barriers to meeting those goals, and then they set some ground rules for their meetings, along with how they're going to manage conflict.

That's about it, so here's a quick recap. If we're going to do a peer evaluation, first, we need to schedule it. We should always publish the assignment so it shows up for students. I typically say no, students should not be able to see the results. The first peer eval, at the end of practice, I'll use as a way of identifying problems early enough that we can still take action and get the group back on track. And then, at the end of competition, I'll use this feedback toward their participation grades.

So, I hope this video was helpful. If anybody has any questions, you can always feel free to reach out to me. My email is eric.smith@capsim.com, just like the website. Or you can reach out to my wonderful support team at support@capsim.com, and we're always happy to assist you. Have a good day. Bye now."