9 Common Questions Surrounding Accreditation
Every month Capsim hosts a webinar as part of its Accreditation and Assessment Webinar Series. The goal of this series is to provide useful information, tips, and tools to help deans and professors with their business school's accreditation process. Since accreditation is a cumbersome and often confusing process, it's no wonder we receive so many additional questions about it.
We decided to catch up with Erich Dierdorff, an associate professor of management at DePaul University’s Driehaus College of Business, and ask him some of the most common questions surrounding accreditation. Erich is an expert in behavioral assessment, individual learning and team training and has helped numerous universities with accreditation and assessment.
Below is an edited summary of our conversation. Enjoy!
Q: What do you see as the biggest challenge facing learning assessment for accreditation?
It really stems from how schools balance the needs of the school and the needs of the student.
By this, I mean schools need learning assessment data for accreditation, while students need feedback for development.
Balancing these needs is a significant challenge for business schools today because, with learning assessments, we often focus on capturing the most readily available measures – the things that are easiest to collect, usually coming from compliant faculty. Unfortunately, the most readily available measures aren't necessarily optimal for student development.
Q: In your experience helping schools with learning assessment, where do they seem to struggle the most?
Assessing and developing both knowledge and skills in our students.
Business schools are quite adept at instilling and assessing knowledge, but far less so when it comes to skills. Some refer to this as the 'knowing-doing gap' that business schools must address.
To give you an example, a large-scale research study found that only about 20% of the knowledge our students acquire can be skillfully applied.
Moreover, this knowing-doing gap exists across the variety of learning goals that we often have in business schools – ranging from those that are technical or business functional in nature, to those that are more interpersonal in nature.
Q: What are some ways to bridge the knowing-doing gap in assessment?
There are many ways to do this, and you need look no further than the management education literature for options.
There are clearly a lot of educational techniques that we would call experiential or hands-on in nature, and they do a much better job of not only ensuring students are gaining knowledge, but ensuring they can skillfully apply that knowledge.
Many of these instructional techniques are familiar to us. Think service learning, where students do consulting projects with real-life organizations, or business simulations, where students are forced to make cross-functional business decisions.
The key point is that no matter the instructional approach we take, we need to build in meaningful and valid assessment metrics that capture learning as our students proceed through these experiences.
This is the only way we can truly develop knowledge and skills in our students while also capturing that important knowing-and-doing information for accreditation purposes.
Q: Why is there a struggle to assess soft skills?
One of the major reasons is that the methods typically brought to bear to assess soft skills – things like case analyses, self-reflection exercises, and so forth – are simply not optimal for assessing or developing soft skill competencies in our students.
Thus, the methods we apply often fall well short, both for assessing these types of skills for learning assessment purposes and for providing the critical developmental feedback our students need.
Q: Why should schools care about developing soft skills?
There is a wealth of evidence on this question. Across all key business school stakeholders (faculty, alumni, recruiters, students, and employers), the evidence overwhelmingly points to the need for skills such as collaboration, leadership, teamwork, and communication – all soft skills that have been identified as important but underdeveloped in our graduates.
Just to provide a more concrete example, the Center for Creative Leadership conducted a 20-year study trying to figure out some of the major reasons that managerial careers fail. Among these top reasons were things such as the inability to develop healthy relationships, inability to adapt, and ineffectiveness in building and leading a team.
So there is clear evidence that the market demands these types of soft skills in our students, and that the long-term success of our students as alumni hinges on developing them.
Q: Are schools meeting the needs of students with regard to soft skills?
Unfortunately, the evidence is not very favorable.
Look no further than the corporate recruiter and employer surveys by Bloomberg and the Graduate Management Admission Council – they clearly indicate that our students lack the skills employers find most desirable: leadership, collaboration, working in teams, and communication.
I think business schools are doing better now than they did 10 years ago, but there is still a lot of room for improvement.
Q: What methods should be used to instill soft skills?
The literature on leadership development shows that methods such as assessment centers and multisource feedback tools are incredibly effective at instilling these types of soft skills in our students.
In terms of effectiveness and efficiency in a business school context, multisource feedback tools provide the most flexibility and ease of implementation.
Here at DePaul for example, I use a 360 developmental tool, Capsim360, for my MBA students. It captures important workplace information from their peers, boss, and direct reports on several skills domains that relate to effective management. They then utilize that feedback for their own self-development and we can use the data for accreditation purposes.
I also use an online peer evaluation tool, TeamMATE, whenever my students are assigned group projects. It captures information on how they function as an individual working within a team environment and provides critical information for their own development, as well as for accreditation data useful for learning goals pertaining to soft skills.
What's interesting about these kinds of multisource feedback tools is that despite their wide usage outside academia by training and development professionals, and despite ample evidence that they are valid, reliable, and effective for developing soft skills, they are grossly underutilized in business schools today.
Q: What are some of the benefits and costs to using these underutilized tools?
There are tradeoffs with any assessment that we might use in business schools.
Assessment centers, although a highly precise and valid way to assess soft skills, tend to be quite time consuming to implement.
Peer evaluations, meanwhile, require well-designed team-based projects that also provide actionable feedback to students on their performance within the team and their individual contributions.
360s are optimal for working professionals because they capture real workplace information.
Q: How would a school go about integrating peer evaluations and group projects?
It's actually quite easy, depending on the type of tool you use. I use TeamMATE, primarily because it's flexible, standalone, and participant-driven – meaning there is far less administrative burden on me as the instructor.
In short, I generally run the TeamMATE assessment tool twice during any team-based project. The first assessment occurs around the midpoint of the project and captures information about how the team is functioning and how individuals are functioning within it. This gives the team critical feedback on how to improve going forward. I like TeamMATE because it offers developmental tactics to both the team and the individuals on how to improve.
The follow-up assessment is near the end of the project and serves two purposes:
- Allows me to use peer evaluations when grading the project, and
- Provides feedback to the students on how much they have improved over the course of the project
As an institution we can feed this information into our learning assessment processes to capture how well our students are acquiring the teamwork competencies that are so valuable in today’s marketplace.
If you would like to learn more about how Capsim can help meet your various accreditation needs and improve your reporting processes, schedule a chat at a time convenient for you by clicking the following link: https://calendly.com/capsim_welcome/capsim-chat-accreditation