(Be An Extrovert)
According to Section 1.3 of Victoria’s Assessment Handbook, assessments should “provide an accurate and consistent measure of student performance”. Assessments should “provide every student with an equitable opportunity to demonstrate their learning”. A good assessment “measures what it purports to assess”. It’s a cute idea, that we are examined by machines focusing only on what matters. It is also, of course, a myth.
We are examined by people, people with blind spots and imperfect judgments and lazy heuristics. Crouching unseen behind the University’s rhetoric of equity and consistency is a bias: if you are extroverted and well-liked—if you seem confident and in control of your material—you will get good grades. If not—if you are an introvert—you will be punished. Markers know this. The University knows this. And despite having the machinery to fix the bias, the University has ignored it.
Let’s start with the psychology. The “anchoring effect” is how psychologists describe our propensity to answer questions with numbers that are already in our head. In a classic experiment, people saw a wheel of fortune being spun and were then asked to guess the percentage of UN countries that were in Africa. On average, those who saw the wheel land on the number 65 guessed 45 per cent, while those who saw the wheel land on 10 guessed 25 per cent. (For the curious: it’s actually 28 per cent.)
In another experiment, participants were asked to guess the average price of a German midsize car. Some participants had been asked to write down every number between 10,150 and 10,199 beforehand; others had been asked to write down every number between 29,150 and 29,199. Those who’d written down the low numbers guessed on average that the cars were worth €18,459; those who’d written down the high numbers guessed €22,139. Anchoring tells us that irrelevant numbers can have a very relevant effect on our beliefs.
We know anchoring affects grading. Last year, German researchers asked university students to mark psychology assignments. These assignments had—supposedly—been marked by someone else, and the participants were able to see these first grades. In truth, these old marks were placed by the experimenters as anchors. Even when the participants were told that the first marker was unqualified, the anchors substantially shifted the assignments’ grades.
Anchoring operates by forming the presumptions against which evidence is assessed. We have a number waiting in the unseen spaces of our minds, and when solving a problem we test that number first. If the result isn’t plausible, we adjust it until it is. In the case of grading, we anchor by forming an expectation that a student will produce work of a certain quality, and when we mark, we are testing whether that presumption is true. It is much easier to meet someone’s expectations than to exceed them—if your marker thinks you’re smart they will read between your lines, seeing the analysis you’re hinting at. If your marker thinks you’re dumb then they will see that too, and you’ll have to do a lot more to prove them wrong.
Anchoring doesn’t care for introverts. In most courses, your opportunity to form presumptions is in labs, tutorials and lectures. If you don’t create the presumption within your tutor’s mind that your work will be high quality then they won’t necessarily see that it is. And if the idea of speaking up in a tutorial terrifies you then you won’t have the opportunity to create that presumption at all.
Anchoring is reinforced by the halo effect. The halo effect comes from a 1920 study by Edward Thorndike, an American psychologist. Thorndike asked army officers to rate soldiers’ physique, intelligence, leadership skills and character. He found the officers’ appraisals were unrealistically highly correlated—the soldiers described as physically fit were also those described as intelligent, good leaders and of good character. The halo effect causes us to presume that people with one good trait will have other good traits—such people are lit from the light of their own halo. The halo effect has been supported by numerous experiments—attractive people are considered kinder and more intelligent, politicians with deep, clear voices are considered more competent.
Again, we know that the halo effect has a real impact on student assessment. A 2007 paper examined the grading of final-year research projects. Each project was assessed on nine criteria by two academics. By comparing the assessments of the two academics, the researchers were able to deduce whether academics who rated a project more highly in one criterion tended to rate it well in other criteria. As it happens, they did. The halo effect is real, and it has a real impact on your grades.
Tutors like students they can bounce off, students who sound interested and engaged. The halo effect tells us that markers will extrapolate from strength in the tutorial room—extroversion, eloquence—to the strength of essays and reports. Some students will entertain their tutorials with clever quips and debating technique, but will lack the discipline or research skills to produce good assignments. The halo effect tells us that markers will give these students better grades than they deserve. Other students will produce exceptional assignments, but tutors will punish them for their poor performance in a sport they aren’t told they are playing. Introverts may produce insightful work, but it won’t glow if not lit from the light of a halo.
Introverts shouldn’t just be afraid of the implicit, subconscious biases. Marking is difficult—there’s no fundamental law of reality that dictates whether your POLS206 essay deserved that B+. Markers’ rough expectations can hopefully separate Ds from As, but when they dither between adjacent grades, tutorial engagement provides a crutch—one that lecturers encourage tutors to rely on.
A friend who tutored a 300-level arts paper told me about his experiences. “I’ve definitely been asked to take tutorial participation (and attendance also) into consideration when grading essays, primarily when an essay was proving difficult to mark. Course coordinators would say something like ‘oh well, how often does she participate in tutorials? If they’re quite good, you may as well give them a higher mark,’ or, ‘if they don’t really talk or seem to know what’s going on I’d be inclined to just go lower’.”
He also told me that tutorial participation was taken into account when determining final grades. “If a student is on the cusp of two grades, how a tutor feels about their participation levels, or sometimes even just whether or not they are liked in general by a course coordinator, will usually decide whether they go up or down a grade in the final result.”
His experiences are not unique. A tutor of a 100-level commerce paper said she’d been told that if a student is “on the cusp of two grades and they’ve been at tutorials and you know they’re trying hard you can bump them up”. A third tutor told me he “tried to grade blindly in the hopes that it would be fairer”, though this wasn’t required. Despite this, he admitted one lapse. He was undecided about whether to award a student an A or an A+ for an in-class test. He showed the test to the course lecturer, who confirmed it was either an A or an A+. As he was doing so, he saw the student’s name. In his words, “one of the things that made me decide it was A+ was that I knew the student, and could tell he understood the material and was bright from tutorial participation.”
Given the subjectiveness of marking, it might seem fair to give bright, engaged students the benefit of the doubt. But to reward the extroverts is to punish the introverts, who will find themselves slipping to the back of the bell-curve. When tutors are encouraged to take classroom participation into account when marking, they are encouraged to punish those students who don’t have the classroom confidence to make themselves stand out.
The University’s policies are explicit—using class participation as an aid to marking is not permitted. Subsection 2.2.6 of the Assessment Handbook tells us that class participation can only be assessed “on clearly defined tasks and not on vague impressions of the quantity or quality of a student’s contribution to class discussion”. Where participation is assessed, the handbook requires that “criteria for assessing the in-class performance of students are clearly specified in a form that students can translate into action or behaviour”. Unless the University’s Academic Committee approves an exception, such assessment cannot account for more than 10 per cent of the assessment of a course. Using class participation as a crutch may be widespread, but it is supposed to be against University rules.
Given the widespread rule breaking—and the likelihood that, subconsciously, tutorial participation matters more than we admit—it seems odd that the University hasn’t done anything about it. It’s especially odd given that an easy solution exists. Blind marking—not allowing markers to know the identity of assignments’ authors until after the assignments have been marked—would prevent both explicit and implicit biases. Markers could have access to students’ names after marking to allow individual feedback. Some courses already require this, including all of the courses run by the Law School, so presumably the practical issues aren’t insurmountable.
I thought it was a bit odd that the University didn’t require all courses to blind mark, so I approached Allison Kirkman, the University’s Vice-Provost for Academic and Equity. We met in a small meeting room with stained glass windows on the second floor of the Hunter building. Allison told me that the room had previously been a storage cupboard. I remarked that there couldn’t be many storage cupboards left in the world with stained glass windows. She didn’t seem to find this interesting. Instead of talking further about storage cupboards, we talked about blind marking. The first thing she told me was that using “blind marking” as a term “tends to minimise what it means to be blind”, and that she preferred the term “non-identifying marking”. I thought this reasonable, if a little pedantic, and that it at least showed that the issue was getting some official attention.
Unfortunately, terminology seemed to be the limit of this attention. Allison wasn’t aware of the University completing any research into the impact of subconscious biases or into the impact of non-identifying marking. “Before I thought about research in an area, I would want to know that there was a reason why we were doing the research, so I would want some evidence to support undertaking research on that particular topic,” she told me. I wasn’t quite sure how one would collate such evidence without undertaking research, but I guess that’s the function of student magazines.
We talked for quite a while—about half an hour—primarily because I wanted to tease from her some substantive reason why a lecturer wouldn’t blind mark. She didn’t provide one. She told me that blind marking wasn’t perfect—that some tutors may be “able to recognise somebody’s handwriting”. This was, I guess, a fair point, but it also seemed fairly unlikely. She told me that if the University were to consider tighter regulations of assessment, she would “want to look at the whole spectrum rather than just focusing on one aspect”. She told me that there were bureaucratic hurdles, that if someone were to propose mandatory blind marking “there is a process through which that would go and that process would involve talking about [it] within the whole University community and coming to a decision after we’ve been through that process”. These points would have been more plausible had the University not already published the aforementioned Assessment Handbook, 54 pages long.
Allison used the word “trust” a lot. (I was going to go through my recording and count the number of times but I realised doing so would make me sound like a bit of a dick.) She believed academics could be trusted to mark fairly. I told her that academics had told tutors to take tutorial engagement into account when marking. “I have no evidence to believe this—it could be an urban myth,” she insisted. Tutors should have read the Assessment Handbook during training, and tutors had an “individual responsibility” to approach superiors if they believed the handbook was being contravened. She had never been approached by a protesting tutor. She assumed this meant there wasn’t any problem to protest. (One tutor I talked to had approached his superiors about the course breaching the Assessment Handbook. In his words, their response had been “oh, that’s something the University wrote, we don’t have to follow that.”)
We can’t expect the University to be omniscient—they have limited resources, and it’s understandable that they haven’t produced specific research on how assessment policy can treat introverts fairly. But to hide behind a rhetoric of trust and responsibility is to fall back on laziness. Academics cannot be trusted to grade fairly—we know that subconscious biases are much too powerful and we know that academics are currently contravening assessment policy. Until evidence is pushed into their hands, the University will continue to ignore the discrimination against introverts. To expect more isn’t to expect omniscience, it’s to expect a very minimal standard of fairness.
When I mentioned to friends that I was writing a Salient article on assessment regulations, they laughed. Assessment regulations don’t make for easy clickbait. This is a pity. As much as we pretend otherwise, our grades matter. For many of us, a random quirk of our personality is causing us to receive worse grades than we deserve.
We should be offended when course outlines don’t assure us that the course will be blind marked. We should be offended when lecturers require us to write our name on our essay’s cover page. When that happens, we should know that academics are either too lazy or too arrogant to avoid the traps of subconscious biases—or they’re relying on those biases explicitly. When that happens we should protest—we should tell the academics that we’re offended, and that we deserve fairer treatment.
Except, of course, if you think that could leave a bad impression. Until the University changes its policy, we will be at the mercy of the impressions we make. That means forced confidence in our tutorials and offering our opinions whenever they’re asked for. For the introverts among us, that means sitting quiet, suffering for our failures in a game we shouldn’t be made to play.