You wouldn’t think it’d be too bloody difficult. Grades, they constantly tell us, are very important things. When a lecturer changes a student’s grades after their essays have been assessed and their exams have been examined, you’d think the hierarchy would want to be informed.
Four weeks ago, we submitted an Official Information Act request to the University, asking for their data on lecturers’ grade-scaling. We were curious whether our own grades were due more to academic discretion than our derivation of maximum-likelihood estimators (we’re cool like that). Also, we figured it would be hilarious to see which courses had been scaled (I told you we were cool). Of course the Uni would have the data, so why not ask if we could have a look?
Except they didn’t. Victoria had no idea how often its lecturers were scaling. But not to worry. The University’s solicitor would contact the University’s 28 schools. She would ask them for their own data, collate their responses, and get back to us. We would have our answers soon enough.
Four weeks later, 11 schools have responded in full, another eight in part. Nine of Victoria’s schools have told us nothing. The school administrators insist that asking them at such short notice is unfair. Why would they know the scaling habits of their lecturers? The school responsible for Maths and Statistics could give us no statistics; Economics and Finance has economised on its data. We heard nothing from Engineering and Computer Science, from Design, Te Kura Māori, Modern Letters, Music, Biology or Geography. This year, the University has made fairly significant changes to its marking schedule. You’d think it’d matter if this had changed the way academics graded. But our beloved Victoria didn’t want to know.
It’s impossible to know whether the scaling we were told about represents the scaling throughout the University. That said, most scaling we were told about was small and justifiable – often just a few marks given to those on the cusp of passing, or five per cent added to compensate for the new marking schedule. While academics have the option to scale their courses downwards, none admitted to doing so.
But not everyone was content with working at the margins. Four courses scaled either an assessment or a final grade by more than two letter grades. Fifteen percentage points were added to one paper’s final grades – and no, it wasn’t PHYS 307 or LAWS 340. Students in MARK 301 whose assessments gave them a C– were given a B–; to get an A+ on your transcript required only a B+ in raw marks. As one MARK 301 student described his exam performance: “I was fucked as. I didn’t even answer enough questions to pass so I was sketching the whole time but then I got results and I got a C.” We asked the course coordinator if we could discuss her scaling methods. Unfortunately, she was unable to meet us.
Further, scaling was sometimes based entirely on class engagement. LAWS 320 reported that “some students were scaled up based on exceptional contribution to class participation.” We can’t see what “exceptional” participation required, how many students were scaled, or by how much their grades were changed. But lecturers are not meant to be examining extroversion.
Two papers used scaling to get consistency between markers. MGMT 101 added three marks to selected exam grades to bring a tough marker into line with others. PSYC 232 added 0.3 of a mark to the raw grades of one of its assessments. Of course, students should be assessed consistently, but it’s unclear how sophisticated this consistency was.
Five schools told us they didn’t scale at all. These included English, Film, Theatre and Media Studies, and Linguistics and Applied Language Studies. Any teacher will tell you that essay-marking is deeply subjective, so maybe in situations when a Physics paper would scale, an English paper would just mark more leniently. Victoria’s Graduate School of Nursing, Midwifery and Health also didn’t scale, but we didn’t even know Vic had a nursing school, so who knows what we should make of that.
Most frustrating were the schools that disclosed only some of their courses. With one exception, the schools that scaled never said whether their undisclosed courses had gone unscaled, or whether they were simply courses the school administrators didn’t know about.
In total, we were told about the scaling of 102 papers from 14 schools. Even counting the five schools which don’t scale at all, that leaves most papers unaccounted for. In case you’re wondering, none of our own papers reported their scaling. All that effort was an exercise in futility.
According to Victoria’s Assessment Handbook, it’s only when course grades are “seriously out of line with expectations” that scaling should be considered. After looking at the number of courses being scaled, it seems lecturers’ expectations could do with an adjustment. The Handbook also insists that “where possible scaling should be applied to individual assessment rather than to final grades.” Of the scaled courses we were told about, only 31 per cent scaled individual assessments.
Lecturers write exams weeks in advance, so expectations will always be rough. According to the lecturers we chatted with, whether a course will need to be scaled often isn’t obvious until marking time. And often, exams don’t end up assessing what they need to – maybe one question was too ambiguous, or another penalised a stupid mistake too heavily. Changing the marking schedule and remarking when you’ve already looked at 40 exam scripts is a lot of work. Scaling might be the only fair option.
Scaling is often a good idea. Often, it makes assessments less arbitrary and grading more fair. But of course, it’s sometimes misused. And when it is, Victoria doesn’t want to know.
Jan Stewart manages Victoria University’s Student Learning Support Service. When we chatted to her last week, she told us she’d seen no grade-scaling abuse, though she believed “some people do and some people think it’s the norm.” She didn’t believe the solution was central moderation of grade scaling. Instead, she believed schools should be “supervising at a higher level – programme coordinators or heads of schools looking at distributions”. Given that many schools couldn’t tell us whether their lecturers had scaled at all, Jan’s comments cannot be interpreted as a ringing endorsement. She also believed students should be better informed. “How many people know if their mark has been scaled? Lecturers need to be honest.”
But there’s a reason honesty is hard to find. Universities are under pressure to pass more students, and grade-scaling achieves that. One Victoria University academic we spoke to – he asked not to be named – explained how the University encourages academics to engineer their pass rates. “There is pressure, but they aren’t stupid about it, they’re not explicit. They try and get moderation systems in place, and I guess they communicate through the moderation systems to try and get the pass rates up. They’re obviously feeling the incentives.”
By incentives, he’s talking about the Government’s Student Achievement Component funding. Of the University’s income last year, we estimate about $5 million was contingent on pass rates – about half of its operating surplus. Of course, even without performance-based funding, the University would want students to be passing. And so the University moderates the grade distributions of courses, ensuring academics are teaching their courses well.
Well, that’s what they say. Our academic friend recalled a meeting “where it was very clear that they wanted higher pass rates in certain classes. They never said mark easier – they always phrased it in terms of teaching more effectively.” But he believed compelling lecturers to lecture better was impossible. He pointed out that even without scaling, assessment is often fairly subjective. “If you get slapped around for passing too many people or passing not enough, you can change the way you mark the assessment. Changing the way you teach would be a pretty indirect approach to the problem.”
The academic wasn’t sure whether this had resulted in an easier system overall. “There are informal complaints every so often that the standards are low compared to ‘when I was a boy’ – but no one spends the time to check that, and I know of no non-subjective measures that are preserved over time.” Nonetheless, he believed some standards were slipping. “Some moderation is kind of invisible to me – I don’t necessarily see what’s happening in some other classes – but there are other courses where the moderation is much more transparent. And then I would say that there definitely has been a change in marking. It’s much easier to get high grades in some courses now.”
There’s no way for the University to tell whether its academics are teaching well – and given the flaws with student feedback, neither can the academics. But demands must be met, and sometimes there are few options. And so they scale. Don’t ask, don’t tell. By monitoring students’ grades – but not monitoring whether those grades are the result of good learning or good mathematics – the University maintains its statistics while ignoring the manipulations behind them.
Scaling happens in many ways, across many faculties, and for many different reasons. But no one knows what it looks like overall. From what we’ve been told, most scaling is pretty minor, though there are some outrageous exceptions. The deeper problem is a university incapable of better teaching and facing steep incentives to pass more students. The solution is to change assessment, but when that fails, grade-scaling is the only option. Mention completion rates often enough and lecturers get the right idea. The reason so many schools didn’t know whether their lecturers were scaling is that the schools didn’t want to know. But the solution isn’t more central moderation – that could discourage good scaling and replace scaling with easy assessments. The solution is a university that cares about the integrity of the grades it insists matter so much.
For those interested, our raw data can be found here.