This strikes me as a very whiny article, not worthy of publishing in the Chronicle. A lifetime of hurt feelings and resentments bubbling up into a rant full of straw men and frankly irresponsible misinformation about the role student evaluations play in instructor assessment.
It’s too bad, too, because there certainly are criticisms to be made, and this article hints at some of them. But a well designed evaluation instrument along with critical interpretation of the results and comments can be an invaluable tool for instructors who want to improve their teaching.
But unfortunately this article also spreads untruths about the weight given to student evaluations, and dismisses common-sense security protocols like having a student deliver the results instead of the instructor (not perfect of course, but the student has much less incentive to tamper) as demeaning rather than a sensible precaution.
The article would have been much better if the author had presented any evidence whatsoever to support her claims linking student evals to grade inflation (possibly related, and I’m sure studies have been done, but evals are very far from the only obvious contributing factor), or even made a passing attempt to explain what percentage of weight is given to raw eval scores in actual tenure considerations (very little in the broad scheme of things). She should also have cut the junk about universities seeing students as customers being a new thing. It’s not new at all, as any skim of the history of universities as an institution will make clear; and someone at a US state university should realize that it’s skyrocketing tuition rates, not whiny students, that give students a bigger sense of entitlement.
> a well designed evaluation instrument along with critical interpretation of the results and comments can be an invaluable tool for instructors who want to improve their teaching.
An instructor who actually wants to improve their teaching solicits and engages with feedback from the very first day of the term and doesn't wait until end of term evaluations roll in. Why? Because end of term evaluations, by definition, cannot help the students who wrote them.
That's not easy at all. Instructors design their course typically before the first class starts. Trying to incorporate suggestions from students -- students who have no experience teaching -- is just going to lead to disaster.
Feedback isn't for suggestions, it's for understanding problems that your students are having. If you aren't normalizing the process of students telling you when they are struggling, then they either won't tell you when they're struggling at all or you'll only find out too late. It could be pedagogical stuff like they don't understand your explanations, or it could be emotional things like they're having panic attacks and don't know what to do, or it could be simple technical details like what fonts you use on your slides. And if you know right away then you can do something right away and actually help someone.
nitpick: An instructor may genuinely want to improve their teaching but may not think to solicit feedback until the end of term, or may default to end-of-term evals because they have no go-to system for soliciting feedback iteratively.
(Unless you define "want" as "what the behavior of this system would lead to" and thereby pretend that no human act is an error)
You're right that they may want to do something without knowing where to start. I amend my "will do" into "should do" with the caveat that perhaps a person who doesn't know this shouldn't be teaching yet.
How does a sense of entitlement result in grade inflation? Well, the obvious way is if you pervasively require students evaluations, students systematically evaluate professors based on the grades they received, and these evaluations are used to make staffing and compensation decisions by the administration, incentivizing strategic grading behavior by professors.
Solution: stop the evaluations. Even most entitled students won't make unsolicited complaints. And of the ones who do? As long as student complaints aren't used to rank every professor relative to each other, but rather only to weed out the most egregious instructors, there's significantly less pressure for grade inflation. It's when you provide the administration a regular, systematized, "objective" stream of data points that the temptation to assess professors this way becomes irresistible for the administration.
Hahahaha... not in the US, no way. Course-wide averages for many of the classes I taught were 85-90. The idea of a 'C' being an acceptable grade is out of the question nowadays.
Some of this probably has to do with internal requirements. For example, if you say that a student needs a 75 or better to count this course as a prereq for the next course, then do you really want half the students to have to retake the course every time? Some of this also has to do with societal expectations of getting As and Bs.
I can say, however, that the idea that a 75 is the class average is definitely not true at most US colleges.
If 75 is a failing grade, then perhaps the average is merely shifted where you are compared to me. The thrust of my comment wasn't the precise value of the average, but that deviations from this average would be detected and reprimanded. Would you say this is accurate? Would you receive notice from administration if the average was found to be, say, 95%?
This is admittedly a more likely explanation for Harvard's higher grades than the one I offered.
75 isn't failing, it is a "C", however it is a typical cutoff for using a course as a prerequisite for some other course.
<60 is failing. Very few students actually fail courses.
In general, in my experience, grades for the courses weren't a symmetric normal distribution with a mean of 75, as is often assumed. The mean was more like 83, and the distribution was skewed toward higher grades.
To answer your final question, no, I don't think the administration would care if your class average was 95%, quite the opposite. I had one professor in my undergrad who was more "old-school" - challenging coursework, no spoon-feeding, your grade was your grade and that was that. He'd regularly complain about visits from the "Center for Student Success" who complained that he was grading too harshly and fought (on behalf of specific students who went to them with complaints) for higher grades. According to my classmates, they were actually able to get their grades overridden at some level higher than he could view from his system.
> The average mark is generally fixed at around 75%
You just made that up.
> So a professor that systematically inflates grades will be called out.
Yyyyyyyeaaaahh...where?
From experience, the average mark given to Harvard humanities undergraduates is an A regardless of quality. Students literally cry in the middle of class if they get an A- because they can't string together a coherent argument from evidence.
> Perhaps universities like harvard have lower standards.
Believe me, I'll be first in line to shame their academic standards. However...
> In any case, consider that this system is in fact a solution to the proposed problem.
It might. I think that it creates other sinister problems in the process. You have to ask whether you think that the purpose of going to school is to better yourself versus competing against your neighbors. If a person does poorly in class because of someone else's performance, that's pretty fucked up. Likewise, if someone does well in class because of someone else's performance, that's also fucked up. The system you appear to be describing perversely encourages sabotage and cheating, because learning is secondary to "winning", because you're fucked by other people succeeding. My intuition is that systems that treat grades like a competition produce a combination of accidental failures and assholes who treat other people poorly.
>the purpose of going to school is to better yourself versus competing against your neighbors
It must be both. In order to better yourself, you must challenge yourself. Your peers have similar capabilities to you, because you both met roughly the same standard in order to get into the school you got into, as opposed to a better or worse one. Competing with them will therefore be challenging, but not too challenging.
Note that I don't propose the average is set at precisely 75%, nor that professors change the weights after the fact to make it so. Just that when they are setting the course - choosing what material to cover, what to leave out, setting the exam - they bear in mind the aptitudes of their students and choose appropriately. If the grades come in too high, meaning the students found it too easy, they might in future terms increase the pace or the difficulty of questions to compensate. This means that as a student you aren't competing with your class, but rather with the body of students that came before. In turn, you gain nothing from doing poorly, because you will still be awarded a proportionately low grade, and only make the course easier for future students.
Why this isn't implemented at the schools you attended I could only guess, but perhaps these schools historically determined standards based on more objective criteria, and have ceased doing so more recently without replacing their system for determining standards.
The reasoning provided here is exactly why we often explicitly told students that they were not graded on a normal distribution. I was told that, historically, when students were graded on a normal distribution, it typically led to bimodal distributions of grades - a few students continued to try, but most were convinced that if they all did poorly then they would all still do fine, and so they just tried to be "good enough" and chance it. By the time I was teaching it was simply departmental policy not to grade this way.
Even without this historical evidence, however, it does seem unfair to grade this way for exactly the reasons in BugsJustFindMe's response. It might make sense in larger first-year classes, but in the more advanced smaller classes (say, 30 students) it's likely that you're already down to a set of students who try really hard and potentially all deserve an A. There's a lot of correlation between (perceived) difficulty of the course material and the students who take the class, so there's no reason to expect that you have the same distribution of students from those beginning first-year courses in your "Advanced Stochastic Processes" course.
While I think it's true that the article overemphasized the weight placed on student evaluations somewhat, I think this does get to the real problem. There's nothing wrong with students evaluating instructors. The problem comes when this is the only method used to evaluate teaching effectiveness. I'm a newly appointed professor, but I've heard many stories from colleagues in the past about students thanking them for being demanding in their course as what they learned was helpful in the future. Demanding classes tend to have negative impact on student evaluations, but that's obviously not capturing the whole picture.
I have kept in touch with many of my professors over 20 years. On a whim I checked out their online reviews not too long ago and mostly agreed.
In particular, a very close friend of mine is a tenured lecturer. He is a really good guy, but good lord is he a terrible instructor. The student reviews for him agree.
Student reviews are also ridiculously sexist in the average case. In a recent study, the students rated identical teachers differently enough by gender that actual teaching performance was lost in the noise:
https://academic.oup.com/jeea/advance-article-abstract/doi/1...
Another pair of studies showed that identical courses were rated higher when students thought the instructor was male than when they thought the instructor was female, and the qualitative answers were also very different:
https://www.insidehighered.com/news/2018/03/14/study-says-st...
I'm grateful to be a computer scientist, where the consolation prize for not getting tenure is a cushy industry job, but I feel deeply for my pre-tenure colleagues in the life sciences who have to work twice as hard for their evaluations while still maintaining their research and service requirements.
Did you actually read critically the papers you cite?
[1] claims to be comparing "identical teachers", but its sole evidence that the courses taught by male and female instructors were comparable is that "neither students’ grades nor self-study hours are affected by the instructor’s gender". Clearly those are hardly the only factors relevant to teacher quality. Moreover, [1] claims to use an "objective measure of the instructors’ performance", which includes -- I kid you not -- the "self-reported number of hours students spent studying for the course".
[2] is similarly vague, and claims that "the courses were identical: all lectures, assignments, and content were exactly the same in all sections" only to state in the next sentence that the "only aspects of the course that varied between Dr. Mitchell’s and Dr. Martin’s sections were the course grader and contact with the instructor". Well, isn't "contact with the instructor" significant?
Both [1, 2] use p-values [3], which doesn't increase confidence in the results.
As an aside, neither paper discusses potential bias the authors might have, in particular their own social desirability bias [4].
If it was possible to objectively determine teacher quality, student evaluation probably would not exist in the first place. The reason we ask people's opinion is in order to quantify the subjective. Double edged sword, because we tend to conflate quantified & objective.
That's half the theme here. Erm... people's opinions are subjective.
Part of the problem with "student-as-consumer" is that students aren't always the real consumer.
> Please don't insinuate that someone hasn't read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."
This has nothing to do with the article. This has to do with sources the poster chose as evidence to support a position. It is definitely relevant to question whether a poster has read and understood their own citations, particularly when those citations are not very good or contradict the poster.
It's very relevant whether the OP read the articles or not. It makes the argument stronger or weaker, because the articles are used as the basis for it. So 'Did you even read the article?' is a valid question.
> [2] is similarly vague, [...] isn't "contact with the instructor" significant?
Well, they did compare online courses.
The article doesn't detail how their online courses are structured, but when I've done free udacity/coursera/edx courses, contact with the instructor has been nonexistent.
Seems to me it'd be very difficult to design a test to investigate this that wouldn't have some valid methodological criticisms. Even if you had lecturers, delivering online lectures with fake names, using voice-changing software and online-only office hours, you could still criticise that as being unrepresentative of real college lecturing, and having confounding factors if the courses ran at different schools, years, or times.
From what I gathered from the mobile version (which may be pared down), the difference was not that significant, particularly in the analysis of evaluations and RMP submissions. Would have liked to find more on class composition, and would be curious how reproducible the relatively minor absolute differences would be, but "ridiculously sexist" seems a bit hyperbolic. Do you have a specific take on those studies that informs your opinion? I could very well be working with not enough information here.
I'm pretty skeptical of the claims in this - it'd be interesting to see actual data.
Evaluations are one of the extremely few levers students actually have to pull when dealing with a terrible teacher. I'd expect that given an entire class' worth of evaluations you'd be able to strip outliers and get some genuine useful responses.
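As a rough sketch of what stripping outliers could look like (the scores and the trimmed-mean approach here are purely illustrative, not any university's actual procedure):

```python
def trimmed_mean(scores, trim_frac=0.1):
    """Drop the lowest and highest trim_frac of scores, then average.

    Dampens the effect of grudge 1s and fan-club 10s without
    discarding the bulk of honest responses.
    """
    s = sorted(scores)
    k = int(len(s) * trim_frac)
    kept = s[k:len(s) - k] if k > 0 else s
    return sum(kept) / len(kept)

# A hypothetical class of 20 evaluations on a 1-10 scale,
# including two grudge ratings and one fan rating:
evals = [7, 8, 6, 7, 9, 8, 7, 1, 8, 7, 6, 8, 7, 10, 8, 7, 1, 9, 8, 7]
print(trimmed_mean(evals))  # 7.375, versus a plain mean of 6.95
```

Even this crude version pulls the score back toward what the bulk of the class actually reported; anything more careful (reading the written comments, weighting by attendance) only helps further.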
Though the worst professor I had at RPI held our grades hostage until the evaluations were in, so they're not perfect.
In America teachers get tenure so your feedback has no impact on them improving.
Also, if you look at literally any study of them, TAs for harder courses get harsher ratings, and they get harsher ratings from weaker students.
I also spent a number of years as a TA, and I can say anecdotally that this was my exact first-hand experience: good reviews in easy courses like graphics, atrocious scores in design of concurrent systems. In fact, for concurrent systems it was well known that the TA for the course would literally always get the lowest review rating in the department, independent of their rating in any other courses they TAd.
TAs also get punished if the actual instructor is bad - because students blame everyone in the course.
I mean, the American system has its own slew of problems: lecturers run the tests themselves, and they by design don’t provide prior exams (hint: if knowing the prior year's exam questions gives a meaningful advantage, your exams are bad and you should feel bad).
Very few professors are granted tenure now, compared with the past; times have changed. There are far more adjunct professors today. Universities are run more like businesses than in decades past.
I don’t have data, but I think most tenure-track professors get tenure (if not at their current university, then at another).
Adjunct professors are not tenure track, so there is really no expectation that they would get tenure. Though I have seen the numbers that say the number of adjuncts is increasing.
Almost every lecturer my wife has is tenure track or already tenured.
One of them just plays videos of himself in his lectures. He doesn't actually go to them. He is actually around during less than half of his scheduled office hours. He complains to (and at least once has shouted at) the TAs about how students keep asking him questions. He has tenure.
As a former TA for one of the "bad" professors, you can somewhat rescue your rating, but you basically have to be a hero, and the professor needs to not be malicious (if the prof wants to fail everyone, not a lot you can do).
You have to attend every lecture (I did anyway), redirect / fix during discussion / lab, help them learn what they need to know to pass the exams, and if you have time, what they should remember when the term is over for their future academic / work careers.
Only bad ratings I recall were from students who only attended the last discussion section; it's hard to teach 10 weeks of theoretical CS in 50 minutes.
Not my experience - being a TA for a bad professor or bad course involved more work but gets one better evaluations. You're the "good guy". If the professor is good, the standards are higher and you can't leave their shadow.
The way to ensure that they’re different is to require all prior assessment be publicly available - if someone having access to an old exam gives them an unfair advantage over people who don’t then that’s bad design.
Oh sorry I interpreted it as a hypothetical or something :)
My interpretation of the way US lecturers work is that they may reuse entire exams. Especially as a lot just get exams from book publishers.
In NZ (at Canterbury at least) all exams are archived and available in the libraries, and most also as PDFs. All of them. Shelf after shelf stretching back decades. At least back when I was there - maybe it’s different now? Publishers selling exam material as part of their text books seems to be moderately recent
I’ve found that the best professors tend to come with higher standards for grading, more involved assignments, and just generally more work involved. This is great if you’re actively interested in learning the subject; most students are not. In which case, it’s far worse than the shitty teacher, and it’s reflected in the feedback.
From the perspective of most students, the ideal professor has low grading standards, “good enough” teaching, and most importantly, a “fun” class (primarily by means of humor). Most students are there to get a degree, to get a job. The best teacher according to students is the one that best fits that model (which is generally at odds with the goals of classical academia).
One of the best classes I had was very hard in the homework but easy on the grading. It was a literature and art class taught by a lawyer. I felt that I grew up a bit after that class. The class was also rated highly by students.
I was thinking primarily of my history with STEM (really, specifically the T and E); I feel like the arts are still more classically oriented, because a degree isn’t anywhere near as required and there’s a weaker guarantee of an income stream, such that they've been relatively unaffected by the university trying to fulfill the role of trade schools. The same would be true of mathematics, PhDs in general, and probably most of the hard sciences.
Undergrad feedback I wouldn’t trust at all regardless of major, Masters is major-dependent (CE, CS are particularly screwed), and PhD candidates are probably fine. Simply look at why the general population is even in the major (e.g. ask an undergrad, and the reason is probably not actual interest in the subject: parents, jobs, money, dropout-major, couldn’t decide, best grades in HS, etc). Their feedback will naturally reflect the misaligned incentives.
If your interest in feedback is to make sure the teachers are actually good at teaching, anyways.
Tying them directly indeed seems like a bad idea (like pretty much any other naive reputation score), but that doesn’t mean you have to give up on the feedback mechanism.
Just set a threshold - such as “scores an average of < 4/10”. If the score is below that threshold, invest a little more time and effort into getting more detailed evaluations and figuring out why the score is low and whether it’s a genuine problem with the professor vs something like their having high standards and lazy students. Then train or fire as appropriate.
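A minimal sketch of that two-stage idea (the names, scores, and the 4/10 cutoff are all made up for illustration):

```python
REVIEW_THRESHOLD = 4.0  # average score below this triggers a closer look

def flag_for_review(avg_scores, threshold=REVIEW_THRESHOLD):
    """Return instructors whose average evaluation falls below the threshold.

    The flag only triggers further investigation (detailed evaluations,
    course-difficulty context), never direct punitive action.
    """
    return [name for name, avg in avg_scores.items() if avg < threshold]

scores = {"prof_a": 7.9, "prof_b": 3.6, "prof_c": 5.1}
print(flag_for_review(scores))  # ['prof_b']
```

The point of the design is that the raw number never ranks anyone; it only decides where human attention gets spent.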
Strict thresholds don't usually work. My school had public metrics, and which classes teachers taught made a huge difference. The teachers who taught the harder classes (component design rings a bell) always got significantly lower scores than the ones teaching easier classes (physics I, circuits for mechanical engineers).
That’s why you don’t use the threshold to trigger anything more punitive than further investigation. And sure, tune the threshold to the department. But also consider that maybe teaching is better on average in some departments than others. (For instance, the drop-off in teaching quality from high school to college in my experience was enormous in math and arts but slight in both sciences and humanities. Which seems to me more like a problem with the math and art departments’ approaches to hiring or teaching than with math or arts as subjects.)
> That’s why you don’t use the threshold to trigger anything more punitive than further investigation.
And by "you", you mean "no one ever". Teacher evaluations have immediate and universal impact. There is essentially no filter and no interpretation on this. It's like tweets. Once a social signal is out-there, public, everyone is acting on the implications regardless of hypothetical mitigating factors.
A lot of people talk about this stuff in the abstract, as some hypothetical, like we'll do this and we'll have safeguards in place. The news is, this is how things have been for college teachers for a while. You get an evaluation and it has an impact and there's no mitigation and no contextualizing.
> Evaluations are one of the extremely few levers students actually have to pull when dealing with a terrible teacher. I'd expect that given an entire class' worth of evaluations you'd be able to strip outliers and get some genuine useful responses.
Now that I think about that.. I did have a professor in my undergrad who held grades until the evaluations were done. Suffice to say, I regret the evaluation I gave.
To be honest, I'm not sure how teachers get and keep their jobs, but as students are graded on their studying skills, I feel like teachers should be graded every few years on their teaching skills.
My girlfriend is studying at a distance university, so she basically never meets any of her teachers. This year she has to do a practicum at a company of her choice, and she was assigned a teacher who is supposed to "help" her. Her study guides say the teacher has to hold classes with her every week. After first contact, the teacher said she would give all information by email (or other online communication) and wouldn't waste the students' precious time with classes. At first this sounded awesome, because my girlfriend has a lot to do in her last year. BUT... two months have since passed, and the teacher has been repeatedly asked to hold at least a few classes, because things are not moving forward. She ignores messages for days, answers in very short sentences, and although holding the classes is mandatory for her, she is always "busy" at those times and doesn't offer any other dates, so it is impossible to meet her. My girlfriend spends crazy hours combing through PDFs hoping to find information on how to do her practicum, what she should prepare, how to write her work diary, etc.
And the other day, when she tried to contact the teacher's higher-ups to do something about this, one guy basically shouted at her on the phone, saying that she was making her teacher look bad even though said teacher works really hard, and that she should just listen to the teacher and let her be. After some of this preaching he just hung up.
It has been a really frustrating experience for us.
> To be honest, I'm not sure how teachers get and keep their jobs, but as students are graded on their studying skills, I feel like teachers should be graded every few years on their teaching skills.
Well, to be honest, I really wish everyone who wants to put one more hurdle in front of teachers would understand the correlation between bad teaching and teaching being a low-paid, unrewarding and effectively abusive job.
Lousy teachers keep their job because no one highly competent wants these jobs. Lousy teachers keep their job because a huge array of bureaucratic song-and-dance exists and people good at that can be bad at teaching.
I'm sorry you had that experience but you might consider looking at the system. Corporate customer support is terrible, let's all call for more test-of-competence for these low paid flunkies too. That will address the problem.
Hey, this sounds like exactly why I'm working on moving out of (secondary) teaching. I didn't mind the pay, being single on a low CoL area and the benefits were good... But the bullshit is killing me, as is the lack of respect from parents in a highly anti-intellectual county (I never realized how bad it was until I returned as a teacher) who demand that Little Johnny pass when he just wants to play Fortnite all class period and never turn in work.
There is something to what you are saying but it is also true that many many aspiring teachers wash out because they can't even get their first job after getting their teaching degree. It _is_ a competitive market in many places but the market _isn't_ functioning as it should because of dysfunctional relationships with unions and tenure.
> It _is_ a competitive market in many places but the market _isn't_ functioning as it should because of dysfunctional relationships with unions and tenure.
Unions? Really? Not administrators and for-profit institutions cutting college teachers' salaries to the bone, creating a situation where adjunct professors literally starve in the process. No, unions. And tenure is your other complaint? The job guarantee that no longer applies to most college teachers today. Why not complain that the average teacher is still able to sleep at night and has a roof over their head?
Okay then.
Unions have their flaws but it was that way well before the current disastrous regime.
And sure, teaching is a competitive market because, even with all the horrors, lots of people still want to teach; it's something to believe in. And the bureaucracy, in which unions are at best a junior partner, does make it hard.
Are you talking universities or K12? Because everything the parent said is true in K12. Spend 10 minutes at a Jersey City School Board meeting and you’ll see just how aggressive the union is if you even hint at reform. Can’t speak to higher ed, except that tenure for classroom instructors is a real problem, since they have absolutely zero incentive to be good teachers and instead engage only in activities that bring in research money for their area of interest.
The OP is about college teaching and what I've written is applicable to that. Folks seem to have segued to K12 for some anti-union speechifying in the meantime.
As far as that goes, look at the condition of teachers in Kentucky if you think unions are the problem.
Unions have a fairly inflexible response to most "reform" plans, but the problem is that most reform plans, as can be seen here, aim to make teachers more insecure, more constrained, more "flexible", but without compensation for that flexibility. Sure, LA, Oakland or New Jersey might look bad, but compared to strongholds of non-union teaching, they are paradise.
It would be great to come up with a reform plan that empowered teachers. The unions might fight that, but individual teachers probably wouldn't. But the kinds of reform that most often come from people's lips -- more testing and less job security -- are rightly resisted by both unions and individual teachers.
Yup. If a reform plan included things like "let's actually pay teachers more", "let's give them more freedom to apply good teaching practices", "let's give them saner working hours" and "let's reduce the red-tape a bit", I doubt any union would complain.
> It _is_ a competitive market in many places but the market _isn't_ functioning as it should because of dysfunctional relationships with unions and tenure.
Whatever your overall view, it seems rather unbalanced to blame only unions for the teaching job market. Moreover, how is the job market supposed to work? In the places without unions, you have teachers literally living in cars.
FWIW, I too am pro-union but have also seen issues at the K-12 level. Some are even well intentioned. Example: schools would cut their more senior teachers to save money (because they had the highest salaries). So the unions made contracts that protect seniority. Which means the longer you survive, the safer your job, without respect to your quality.
Experienced quality teachers are impossible to match...but experienced jaded teachers are easier to find (it is grueling to try and improve students away from dumb mistakes before you can even teach your material, only to start from scratch semester after semester.)
Standardized testing attempts to fix this, but....let's just say the math teachers (or anyone with an understanding of statistics) did not get a say in how they are used. Teachers that get a "bad class" (higher number of poor or unwilling learners) face serious impacts to their career success.
It’s a very cyclical market as cohorts of graduates coming out of school wax and wane due to birth rates and the economy. All government or teaching jobs are that way — their fiscal woes lag the economy and they tend to pick up better employees when companies blow up.
Blaming unions is like citing the boogeyman. It’s almost always the lazy answer that doesn’t pan out.
I didn't dismiss other variables as flippantly as you are doing, and I never blamed unions; I said "dysfunctional relationship". I'm a big defender of unions actually, but that doesn't mean they are perfect.
> as students are graded on their studying skills, I feel like teachers should be graded every few years on their teaching skills
If a teacher grades a student poorly, there can be repercussions. They have an incentive to grade fairly and accurately. The same isn't true of students, who can grade on feelings and grudges.
Teachers are also subject matter experts in the material being graded. Students aren't trained as educators, so why would they be any good at evaluating their teachers?
Student evaluations lack even the smallest part of the rigor that they should contain. They are practically worthless.
> If a teacher grades a student poorly, there can be repercussions.
Yes, but the above poster was talking about TEACHING poorly. It is easy enough to expect students to know things at the end of class, but harder to actually help them learn that material.
> Teachers are also subject matter experts in the material being graded.
Some are. Maybe even most. But definitely not a universally true statement, particularly in areas where teacher pay is dramatically out of line with industry wages.
> Students aren't trained as educators, so why would they be any good at evaluating their teachers?
Fair point. But as pointed out by others, there are few other forms of evaluation/accountability. And while a student may not be able to identify WHY they struggled to learn, they can certainly evaluate IF they did. If many students agree, that's a problem best addressed quickly.
I teach a university class (just one, my day job is coding) and I've been utterly amazed at the lack of accountability applied to me and my peers. Meanwhile I have friends that teach teens, and the bureaucracy and policies they must follow seem just as bad, but in the opposite direction.
If you have a good alternative to student evaluations, please share, otherwise I'll agree with the flaws you listed but still find them better to have than not.
>> Teachers are also subject matter experts in the material being graded.
> Some are. Maybe even most. But definitely not a universally true statement, particularly in areas where teacher pay is dramatically out of line with industry wages.
So you say that if we find out which teachers are incompetent and actually kick them out, we would need to raise teacher wages to the point of being competitive with the industry?
Students are the only people who can really evaluate the quality of teaching. Someone with a pre-existing understanding of the material cannot tell whether the class can successfully communicate it to a fresh mind.
A few negative evaluations based on personal disputes can be expected. If there are grudges or strong feelings among a large enough proportion of students to meaningfully influence the aggregates, then something really is wrong.
If other teachers prepared the tests from a shared curriculum, and grading were done by the book (the book being a living document, amended as edge cases arise), then teaching performance could be assessed via the test scores.
I have seen this work well for the end-of-high-school exams (prepared every half year). I participated on the mailing list purely as an observer, where teachers discussed edge cases; there was/is a process to get official guidance from the test-writer group, but not directly, which provides a bit of blinding against biases.
As long as we try to let teachers cope by themselves (and maybe give them a few TAs in higher ed), and don't make their work testable, we're bound to have wild theories fueled by anecdata.
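A toy sketch of what "making the work testable" could look like, with entirely made-up scores and teacher names: compare each class's mean on a shared, externally graded exam against the cohort mean. This is an illustration of the idea, not anyone's actual assessment system.

```python
from statistics import mean

# Hypothetical scores on a shared, externally graded exam (0-100),
# grouped by the teacher whose class took it.
scores_by_teacher = {
    "teacher_a": [72, 81, 64, 90, 77],
    "teacher_b": [55, 61, 48, 70, 66],
    "teacher_c": [78, 85, 69, 91, 74],
}

# Cohort-wide mean across every student, regardless of teacher.
all_scores = [s for scores in scores_by_teacher.values() for s in scores]
cohort_mean = mean(all_scores)

# Each class mean relative to the cohort: a crude signal, not a verdict.
# (A real system would also control for incoming student ability.)
for teacher, scores in sorted(scores_by_teacher.items()):
    delta = mean(scores) - cohort_mean
    print(f"{teacher}: class mean {mean(scores):.1f} ({delta:+.1f} vs cohort)")
```

Even this crude version surfaces the "bad class" objection raised above: without adjusting for who walks in the door, a below-cohort mean could reflect the students as much as the teacher.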
They should be a data point. We had a professor let go due to the student evals, and that was a good thing for the university. As a former educator, I can see where the article and the OP are coming from, but there does need to be a feedback loop.
The story on our professor: she could not communicate ideas well, and procedures worse. This was a second- or third-year accounting class. She would present some procedure for calculating a complex set of ratios. Students would be confused and ask for clarification on where different values came from (they seemed to originate from nowhere). She would pause, look sideways at us, ask if we knew how to add and subtract, shake her head, ignore the questions, and move on. Evaluations were the lever that students had to help correct the system.
UK: when I was involved with teacher training at a local university, I had to act as placement tutor for around 15 training teachers - all graduates with good UK first degrees in a range of subjects. I had to observe each of the training teachers half a dozen times, and that process included checking the planning of their lessons (before the lesson!), providing feedback on the lesson itself and identifying issues to work on for the next lesson. Each training teacher had a placement mentor as well whose classes the training teacher taught under supervision initially. I was also responsible for supporting the placement mentor in the mentoring role.
Everything got recorded in an online system: the training teachers did their reflective writing etc., and I read their blog posts (private blogs!) and tried to support them. Managers at the University (relevant ones) could access the blogs and help out with any more serious issues. I had direct contact with the mentors as well in case they had issues to discuss.
Probably the hardest teaching job I did in a 30 year career. But almost all of the training teachers are working in local colleges and schools.
A big issue with education in the UK is that university lecturers generally get zero teacher training. Consequently, teaching quality at undergraduate level is widely variable.
Teacher training is subject to OFSTED every two years (or was) and so things are pretty tight. Teaching standards in general undergraduate courses are being slowly pulled up - but I know what you mean.
> I feel like teachers should be graded every few years on their teaching skills.
They are, or at least used to be. At least at high schools in Poland, teachers would get occasional visits from higher ups who observed their lessons and would give them feedback.
Let me add a wrinkle to the mix, specifically in terms of grade inflation.
The value of most coursework, and of any learning gleaned from it, has dropped significantly.
The best value from entering education institutions is instead from job opportunities after. Unfortunately, these opportunities are often based on grades.
If the grades are more important than the coursework, then of course students will optimize for it.
Students should choose institutions & teachers based on both quality of opportunities and ease of coursework.
Going a step further, teachers that don't understand the changing interests of their students should be reprimanded.
At my university, most major recruiting for internships and such happened in one semester. Now, I specifically remember the dichotomy of two of my professors' approach to this.
Prof John - If you miss a class/deadline/exam due to recruiting, you get a 0.
Prof George - I understand recruiting is happening, reach out to me ASAP if you feel overwhelmed or need a date changed.
Now, the coursework has largely been forgotten. But, I will never forget John's idiocy.
The grading just gets kicked down the road. Now, because we don't trust your credentials, we give you a coding exam, which is effectively a grade.
Likewise, an open secret is that college work is a simulation of the workplace. Those college students who develop through practice the ability to turn in work on time that makes their teachers happy, will also be able to do the same in the workplace. If you give up the chance to learn it while it's available to you in college, where your only risk is a Zero on an assignment, you will learn it later in the school of hard knocks. Your performance review will be a grade. There are people who come to work knowing how to get good grades, and others who don't.
This is coming from one who struggled to get good grades in college. A goose egg on an assignment is rarely an isolated occurrence.
Knowing educators, and having kids in high school, I know that the kids who miss a deadline during recruiting tend to have already missed many deadlines over the years.
I strongly believe that while credentialism and college brand recognition are real and probably regrettable, if those are your only reasons for attending college, then you are wasting your money and your life, whereas someone who has other reasons for attending the same college is getting a lot more value out of it. But colleges assume they get the same money either way, so it's really your choice.
> Likewise, an open secret is that college work is a simulation of the workplace.
I don't think this is the case, and it hasn't been for a long time. Assuming you're not looking at graduate classes, college is largely a game of keeping your head down, doing the work you need to when you need to, and minimizing effort other than that. Cramming is a totally viable option because the metrics for success in college are totally different from those of the workplace. The skills one learns to survive in college are extremely maladaptive to a long-term functional career.
If the criteria for success are orthogonal, how can one be a simulation of the other?
In my view, treating college as largely a game whose metric is sheer survival is wasting money, or more likely, paying to support the students who are getting something else out of it. This is not necessarily a personal judgment, because there may be people who are predisposed to see everything as a game, but who deserve to get the best education possible anyway.
Here's my stupid parable. Give two people shovels and send them into a mine. One of them comes back with a bag of gold. The other comes back with an intense hangover. What's broken, the shovel or the mine?
Colleges, despite the image of paternalism, are actually designed to let you fail. People will leak clues on how to survive, but are less likely to tell you explicitly how to succeed. But everybody has the same chance of getting a degree in something. As a result, the people who benefit from the signaling and branding of the college outnumber the people who got a good education out of it, creating the impression that college is largely about signaling and branding. But college is really about deciding what you want to get out of it.
There are doubtlessly many things wrong with college education today, but in spite of that, taking college at face value and pursuing it in a relatively straightforward way may still be a better strategy than trying to figure out what game to play in order to survive.
I think you'll come to regret this attitude when you reach your mid-30s with no skills other than the concrete procedural skills you picked up on the job. I use my education constantly, and in ways I wouldn't have anticipated. I often see people doing things the hard way because they don't know there's any other way, or passing on opportunities to do something interesting because they don't realize it's even possible.
The comment you're responding to isn't an attitude that the poster has, it's an explanation of how students are responding to changes in incentives.
As the total number of college graduates grows, they're competing in job markets that are smaller than the total number of college graduates. This ratio is unbalanced and becoming more unbalanced with every year. The people hiring college graduates are developing stricter and stricter filters in order to decide what body of people they're even going to allow to compete in the job market; the biggest, easiest filter to sort out who's truly competitive for positions is grades, and so it makes sense that students compete to get the best grades, rather than competing to be the most competent.
Most grads are leaving college in tremendous debt with the understanding that the longer they go without employment, the less likely it is they'll end up employed. This means that while they're in college, rather than focusing on the material they're learning, they're focusing on whatever will allow them to have the largest advantage when they enter the job market. In order for this trend to reverse, the cost of college needs to be dramatically lessened; I think this could be accomplished by reducing the total percentage of high school graduates that attend college immediately, increasing the prevalence of online schools, and reducing university overhead by cutting services to students and administrative positions.
> "A teacher (also called a school teacher or, in some contexts, an educator) is a person who helps others to acquire knowledge, competences or values."
The target audience is a crucial part of the feedback loop. Removing the students from the equation sounds counter intuitive. What are the proposed alternative metrics? Who delivers the feedback? And what is the feedback based on, if not on the direct opinions of the people on the receiving end of the service?
I think the main idea here is that student evaluations only make sense if we assume that students are actually interested in acquiring knowledge, competences, or values. However, if their main aim in a course is a good grade, then their evaluations suddenly hold little value, since they will rate "easy" instructors highest.
When I was a college instructor I found that when I tried hard - putting a lot of thought into pedagogy, having weekly (open notes) quizzes, assigning challenging but fair homework - I got terrible evaluations, even though the average grade in my class was higher than other teachers who were teaching the same course. When I phoned it in, didn't really try, was very lax with the homework and often made the quizzes take-home, I got excellent reviews but the average grade in the class was worse.
> "When I was a college instructor I found that when I tried hard ... I got terrible evaluations ... When I phoned it in, didn't really try ... I got excellent reviews but the average grade in the class was worse."
It must've been a frustrating experience.
At the same time - you're making a very broad statement here based on a rather personal experience. You went from a certain regimen yielding certain results, to a different regimen yielding different results. There are way too many parameters here to draw conclusions.
Sure, but that's the sort of data we have to deal with in this arena. Who's going to run a large randomized trial where students are purposely assigned to different classes (keeping in mind that schedule conflicts already add additional constraints to this which may bias these assignments) and then, furthermore, have the instruction fixed apart from how easy the assignments are? Is it even fair to the students to knowingly assign students to relatively poorer teaching? Clearly it happens all the time, every department has that professor who is known for being a bad teacher yet they still have to assign classes.
In speaking with other grad students this seemed to be a well-known phenomenon, to the point that most other grad students intentionally didn't put much time into their teaching and basked in the positive reviews as a result. It was suggested many times to me that I was spending too much time thinking about my teaching. In my case, the lax teaching was not intentional, I simply was overcommitted that semester and had less time to prepare.
There are a bunch of ratings 1-10, and then a place to optionally add comments. It's hard to consolidate information from 200+ comments, but, then again, most students do not fill out the comment section anyway.
(In a sense, I guess the whole thing is optional - after all, it is anonymous, and we don't check that every student has filled the whole thing out. It also switched from in-class to an online form during the time I was teaching, and the response rate went down quite a bit as a result, but this was after the situation I mentioned above.)
Isn’t it up to the universities to provide a majority of their value in the form of education rather than in the form of a credential?
Not only do such institutions seem unconcerned about cancerously growing credentialism, they’ve embraced it as a money-making scheme. See e.g. the explosion of terminal master's programs.
> "isn’t it up to the universities to provide a majority of their value in the form of education"
I'm not sure how this statement addresses my claims.
How do we measure/quantify the quality of education? I claim that the way students feel about the staff and the institution should be taken into account, rather than dismissed.
You say that the evaluations of students that are only looking for good grades are useless. What colleges are selling and what students are buying, at very high prices, are credentials.
If colleges, and those that work at them, want to be in the education business rather than the credential-selling business, they ought to take a good long look at where and how they went wrong instead of constantly trying to push the blame onto people who, again, are just buying what they are selling.
An interesting way to evaluate professors would be to contact alumni who have graduated and ask them about the professors that they remember. Chances are that the really good and the really bad will stand out. I still remember my calculus professor and statistics professor who both did a great job. I remember some other professors on the other hand who were bad at teaching undergraduates. My opinions on my college professors seems independent of the grade I received or how hard the class was.
Oftentimes, time and experience give a different perspective on how valuable something was.
This is the age-old doctor question. You go to the doctor's office because you have a medical problem.
Doctor 1 comes in on-time, she's smart, capable, friendly, and spends a good amount of time concerned about you and your situation.
Doctor 2 comes in late, she's irascible, pedantic, hurried, and only listens long enough to tell you what to do. She also insults you and your family in various ways and doesn't seem to care.
After the visit, you go to Yelp. Which one was the better doctor?
You have no idea! But do you think that's going to stop somebody from forming and posting an opinion anyway?
You can make the argument that these other qualities are good to have in a doctor. I agree; I want a doctor like the first one. But people go to doctors first and foremost to get better, not to make new social acquaintances. What if the first doc is incompetent and hurting people, and the second doc is a genius whose attitude actually causes more people to comply with what's best for them, perhaps out of anger at her style? That's the thing. Who the hell cares if the outside looks cool/sexy/friendly or not?
So when we ask for reviews from students who have never worked in the field they're studying for, they're like that Yelp doctor reviewer. I don't see how this situation is good for anybody. In fact, it'll probably lead to a bunch of friendly, good-looking morons with great people skills entertaining a bunch of kids who aren't learning anything. That's the only natural consequence here.
It's not like there can't be an accurate way to measure this.
Pre-register patient problems, and see how fast doctors solve problems on average/median. Then it doesn't matter how jovial/amicable the doc is.
With enough data you can fish out potential signals (and then test them properly), maybe bedside manner really doesn't matter, maybe being on time is more important. (Because then people spend less time in the clinic among other sick people.)
And as others mentioned the teacher eval problem can be solved by splitting the lecturer, exam writer and grader roles. (Grading should be done by a book written as edge cases accumulate, edge cases should be handled without names to prevent bias, as much as possible.) And then aggregate stats can be published about each class and school.
Sorry no, it doesn't work like that. In either case.
Doctors, like professors, are evaluated by former patients based on a mix of factors.
In your doctor example, doctors don't "solve problems". They see patients. Many times these patients present with conflicting and hard-to-resolve symptoms. I imagine with the internet this has gotten worse. They use a mix of Occam's Razor, deductive reasoning, and social psychology for the patient's long-term health.
You don't want a doctor that finds the answers to complex problems and gives out morphine. You want a doctor that over decades has patients with the best outcomes. In my example, the one doctor may have ticked you off so much you decide never to go back -- and quit smoking just to prove her wrong. That's a win for both you and the doc, although there's no problem being solved.
You can speculate various ways to go at this. Perhaps the doctor is evaluated by other doctors working at that location. Or perhaps it's a matrix of various things. As long as you're just pulling stuff out of the air, you can invent any kind of system you'd like. And tech will deliver it for you.
Human interventions are messy affairs, and thank goodness for that. We are not robots. Your proposal might do a great job of sorting out professors who teach students well enough to take the test, but heck if I see where that is the end of the analysis. That's just the beginning. We're just getting started talking about professors who can lead a class to mostly pass a test. That should be a baseline for any professor.
This is the problem we see in software development. You can come up with all sorts of metrics to stand in place of evaluating whether a team/person is good or not, but nothing you come up with really works, and the measurement has very bad effects on those being measured (with the possible exception of business-value feedback-loop time).
I don't think this is tractable in the way you seem to want it. If I were picking a college or professor, I might look at five or six variables and weigh them in various ways depending on my personal needs and goals at the time -- which might change tomorrow. I might want to go interview them. Meet some current and former students.
There is a severe and onerous "platform danger" in tech. It starts with the conceit that we can build platforms for anything. It ends up with people not-as-smart-as-they-used-to-be having Google tell them where to go for dinner. Or Wally in Dilbert coding himself up a minivan. [0]
I'm not saying the metric definitions and accumulations aren't interesting. They deserve to be considered. I'm saying that reducing human judgment to a single vector makes us as a species dumber. That's one of those things that sound intuitively good but are in reality quite bad.
Over decades, yes, but that is moving the goalpost.
The vast majority of healthcare problems depend on using the right known technique (medication, surgery, etc.), and of course the problem is finding that technique, i.e. diagnosis. Handing out morphine is not a solution in the general case.
Long-term health is sadly largely independent of acute healthcare providers' work. It depends on genetics, environmental factors, behavior (exercise, diet, lifestyle) and a stochastic factor of acute problem solving (as the body breaks down more and more depends on doctors, and the fast and correct identification and application of the best healthcare technique).
So healthcare has two categories. The first covers the common flu, "I have some gastro issues", referrals to specialists, and managing chronic illnesses (diabetes, pain, low/high blood pressure). The other is usually the specialists' problem: when they have no idea, or the wrong idea, of what's going on, an acute problem becomes a years-long trail of tears.
Now, that said, I'm not advocating ignoring the edge cases and just using one simple magic formula to decide who can play doctor. But we are still stuck in the dark, because we focus on the complexity, the long term, the human component, the expertise, yet fail to do the most basic things to get at least somewhat reliable data.
And I'm aware of the problem of fishing for signals in data, but without it we can't even start the trial and error process, we can't differentiate between the two cases, and on average let's say we end up doing nothing.
Furthermore, we know that the obvious solution is to refactor the system. But that rarely presents a real possibility for education and human long term health. (You know, the small stuff :)) It'd be easy to ban smoking and alcohol and give less damaging stuff for people instead and spend more to treat addicts, and introduce negative income tax as a form of basic income to help poverty to reduce stress and help mental health to reduce number of smokers. Similarly it'd be easy to reform schools, and do all of the above to help parents to have more time for their kids so they get a better nurturing youth overall, which makes them ambitious and motivated to learn, study, explore and understand. And it'd be easy to do all this, but that's not likely. Hence the proposal for simple but maybe vastly effective changes.
Our evals led to the dismissal of a professor. She was absolutely gorgeous, and a delight to talk to, and couldn't convey accounting procedures worth a damn. That is what the students cared about in our case.
As a teacher, I disagree strongly with the conclusions of this article. Anonymous student evaluations are an invaluable tool to improve my teaching. The students say so many fair and useful things!
It is my suspicion that the attitude of American students towards their professors is perhaps a function of the privatization of schools. Since students are paying for their education, they feel entitled to a good grade regardless of performance. I would like to know others' thoughts on this. I don't know much about post-secondary education in Europe, but it seems different from the American collegiate system somehow.
Anecdotally, I've seen people who only started taking school seriously (studying, not goofing off, etc) once they actually had to pay for it.
Certainly feeling entitled to a good grade is a possible consequence of having to pay for it yourself, and there will always be those who react that way, but I think it is a socialized response that depends on other factors.
I have studied in Europe (Italy and Germany) and now I am in academia in the USA. Having students evaluate professors has been standard practice for many years in Europe. I also do not think that American students have a worse attitude towards their professors than their EU counterparts. In my limited experience, I have actually found the opposite to be true: I found American students to be more actively involved in their education and less passive than Italian students (on average).
Over the last several decades, US public colleges and universities have generally seen a declining share of funding from the chartering governments, relying more on business activity plus student fees.
The student fees and tuition are mostly paid for by government backed student loans. There really isn't any private education in the US past high school.
> The student fees and tuition are mostly paid for by government backed student loans.
Which the student is required to repay, but in any case are not from the chartering government.
> There really isn't any private education in the US past high school
There are plenty of private beyond-high-school educational institutions that don't qualify for government financial aid and thus rely on purely private financing, so even if you consider “accepting government issued loans” as making an institution not-private, there is plenty of private education.
I don't see the government paying for my college... In my experience a majority of my friends have or are on track to have student debt from having to pay for college.
Technically you wouldn't see it even if it was there. For most public universities, it comes as a direct allocation from the state legislature to the university.
1) I would have used "corporatization" of schools: running universities as if they were businesses, framing students as customers, etc.
2) Tuition has become a bigger share of the budget for public universities as federal funding has been stagnant and state funding has been cut, which feeds the same problem.
No shit. Basing compensation on reviews by people who have an overt bias to blame the instructor is asking for trouble.
The purpose of the review should be purely to provide feedback to the instructor and TA. It should not be available to the administration - the purpose is to help the TA and instructors improve their teaching. Especially if the instructors have tenure and so can’t be fired for failing to teach.
Ratings should also be scaled to control for biases: what are the average ratings given by each student, how do they compare to those of other students in their classes, and how do they relate to the student's course grade?
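One minimal way to sketch that kind of scaling, with made-up ratings and names: normalize each student's ratings to z-scores so that a uniformly harsh or lenient rater contributes the same signal as anyone else, then average the normalized scores per instructor. This is an illustration of the idea only, not an actual evaluation system.

```python
from statistics import mean, stdev

# Hypothetical raw ratings: student -> {instructor: rating on a 1-10 scale}
ratings = {
    "s1": {"prof_a": 9, "prof_b": 8, "prof_c": 9},  # lenient rater
    "s2": {"prof_a": 5, "prof_b": 3, "prof_c": 4},  # harsh rater
    "s3": {"prof_a": 7, "prof_b": 4, "prof_c": 6},
}

def zscores(scores):
    """Normalize one student's ratings so their personal leniency cancels out."""
    vals = list(scores.values())
    mu, sigma = mean(vals), stdev(vals)
    if sigma == 0:
        return {k: 0.0 for k in scores}
    return {k: (v - mu) / sigma for k, v in scores.items()}

# Aggregate: average each instructor's normalized score across students.
normalized = [zscores(s) for s in ratings.values()]
instructors = {k for s in normalized for k in s}
adjusted = {i: mean(s[i] for s in normalized if i in s) for i in instructors}

for instructor, score in sorted(adjusted.items()):
    print(f"{instructor}: {score:+.2f}")
```

Note that after normalization, s1 and s2 agree that prof_b is their weakest instructor even though s1's raw rating for prof_b (8) is higher than s2's rating for anyone.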
In the US, at least, you're essentially expected to be a TA as part of most PhD programs. So if your ability to complete your PhD is dependent on the goodwill of students whom you might be causing to fail (or you might just not be a good teacher), why would you not do that?
One of the main problems with the educational system is that students are evaluated by the same person who teaches them. If the teacher weren't the one grading, the mean grade of the students would be a decent evaluation of the teacher's performance.
If another professor wrote the exams for a class they didn't sit in, you'd have the students screaming in unison, "this wasn't discussed in class!" Not exactly disagreeing with the idea, but students have been normalized to the idea that, with enough studying and concentration, any exam can be aced. When they feel like they've been cheated by the system they get discouraged, making a harmful learning experience.
> If another professor wrote the exams for a class they didn't sit in, you'd have the students screaming in unison, "this wasn't discussed in class!"
In standard testing at other levels, this is solved by a syllabus.
For example, in electronic engineering I would expect a second-year class in switch mode power supply design at any university to look extremely similar.
Admittedly there may be classes in cutting-edge research, opinion-based subjects or extremely niche topics, where you can't find two academics in the country who could agree on a year's syllabus and set of answers. In my degree I'd say less than 25% of courses were that way, though.
I went to a college where all three professors who taught Chem 101 would collaborate on writing one exam that students from all three classes would then take at the same time in an auditorium setting. This kept the testing fair over the entire course and forced the professors to teach material based on agreed upon in advance topics to certain standards.
This is done in Denmark. At the end of each semester, high school teachers are recruited to read exams and assign grades. Teaching professors are excluded from grading, so professors and students become collaborators: each supports the other's success by pursuing their own.
This is a short commentary piece, not exactly much there. There are some interesting comments; the one at the top right now says
"They are no doubt imprecise, but for a majority of the faculty the scores form a tight bunch around a mean. Newer faculty are likely to have lower scores, so it is often times useful to sit down with them and review the feedback and talk about ways to improve. A few faculty members continually score higher than average, and I want to know why--perhaps they can help do some mentoring with the lower scoring faculty."
Of course this comment assumes that higher scores means better teaching.
FWIW, my experience both as a prof and as an evaluator of profs is that when students are reasonably good, their comments, in general, are reasonably useful. After all, they are in the room and they see what is happening.
The problem is that when students diverge from reasonably well-prepared and reasonably hard-working, their responses diverge from valuable. It is a kind of Dunning–Kruger, where you get evaluations that are hard to reconcile with any kind of learning goals standard.
All this would be mitigated if there were multiple measures. But there are not, at least in practice that I've seen.
It is a problem. As an evaluator you want to give people credit for doing a good job. But it can be hard to tell.
As a prof for many years, I know how to increase evaluations. (I don't agree that it is grades; I recently had a fifth year review and calculated the correlation between my grades and class evaluations for those years and r^2 was basically zero.) But the things that increase evaluations are not very tied to increasing learning and are certainly not tied to increasing the amount of material covered.
I had taught two semesters of a basic pharmacy course at a local community college. Of the 20 or so student reviews I received, 80% of them mentioned my 'ta-tas'. Enough said.
Lots of students are under the impression that commenting on their dress is appropriate. A large chunk of those comments suggest they dress in more visually interesting ways. Some reference their appearance directly (usually "positively").
One particularly egregious comment speculated on who she slept with to get her position.
Tons of gendered expectations in the language: expectations to "be more nurturing" and things like that. Far more comments about them being "young", "inexperienced" and "new" than in my own evaluations, despite their being considerably more experienced than I am (and no, there was no way my class was just more polished; it was thrown together at the last minute).
That would make for a rather interesting study if it looked at gender expectations for teachers overall. Swedish studies have made it clear that male teachers leave the profession significantly more often and earlier than female teachers, and the same goes for male students in the teacher master's program in higher education. The numbers are very similar, if not almost identical, to master's programs in STEM with the genders reversed, a fact that is rather unexplored in gender studies but noted in a somewhat recent government study.
Just looking at evaluations, I wonder how height, build and wealth symbols (expensive car, clothing, jewelry) impact the ratings of male teachers. Do non-typical male traits contribute more negatively to the score for male teachers than non-typical female traits do for female teachers? Same question for typical male and female traits. Is it correlated with leaving the profession, and is there a difference in abuse tolerance?
Are we now pretending that having an attractive teacher doesn't affect attention spans? The more attractive the teacher, the more time I spent daydreaming as a kid, for "obvious" reasons.
There's a much better way to hold educators to account - have other educators sit in on classes. But this is often very strongly resisted, apparently it's 'unprofessional'.
As a TA I often attended lectures just to be able to refer to what students were just told. Often this information had not been processed or remembered very well by students.
I always thought that lecturers, if there were multiple for a single course, would not attend one another's lectures under the pretense that they were busy, but with the actual reason that they didn't want to hold one another accountable for inefficient teaching methods.
"I don't care what you do, you won't care what I do, the typos from last year's slides repeat."
This works well but is subject to the same initial issues that pair programming has - it feels weird, you feel watched, and it is just uncomfortable.
Eventually you get used to it and some even prefer it - I know I really get a lot out of pair programming with someone close to me in skill but with a much different background - but getting over that first hump is tough.
While I can't find any specific sources in the field of education, I was taught in engineering school that engineers reviewing the work of another engineer, unless their job specifically required them to review work, was against the code of ethics in Ontario. It looks like this is true as well in the USA which has this tidbit in their code of ethics.[1]
"a. An Engineer in private practice will not review the work of another engineer for the same client, except with the knowledge of such engineer, or unless the connection of such engineer with the work has been terminated."
This is different though, because of the "except with the knowledge of such engineer" line. This code is trying to prevent clients from, for example, getting a "second opinion" on a project if they are unhappy that the first engineer deemed it unsafe. This isn't really an issue in teaching. I think it would be impolite to just drop in on a colleague's class unannounced, especially if it is a big lecture where you might go unnoticed, but I don't see why a prearranged visit would be unprofessional per se. (Perhaps because it would cause students to doubt the quality of their instructor?)
FWIW my understanding is that at my university the response to poor evals is a visit from a colleague, to assess whether the teacher really is weak or if the material is just difficult. Depending on that, the instructor may get additional coaching in teaching technique. This seems like a sensible approach to me.
Well, forcing another teacher into the class with the intent of supervising the teacher is obviously a terrible idea. Having two teachers in every class collaborating during the lecture, or one acting as an aide to supervise the students, can be an obviously good idea.
Generally speaking, in my opinion, teaching is a very complex and personal interaction and micromanaging only makes it worse.
We pushed very hard at my alma mater to publicize course/teacher evaluations. We eventually got the University to accept, provided a minimum participation rate was met (on a per-course basis, so it wasn't all or nothing: we'd get a subset of the data when enough participation occurred. We considered this reasonable, since if very few students participate, the data isn't all that meaningful).
Regarding professors wanting to keep review data tightly sealed: in my view, if you can't abide public disclosure of your evaluation, then you either don't feel you're meeting expectations, or have no desire to improve in areas where students feel improvement could be made.
Also, the biases pointed out in these reviews aren't unique to academia. Gender and age biases exist everywhere. This article sounds like it's just pushing the idea that students should have less influence in the hiring and promotion decisions of professors. You know, the very people that the teachers first and foremost serve at a university.
I used to teach in academia. Student reviews are REALLY easy to game. Give everyone As for doing nothing, and most will give you As for just being entertaining. It doesn’t matter if they learn anything, or worse, if they learn the wrong things.
But why do you let the students grade the course after they've already gotten a grade? That's insane. In my university, you'd get a paper for the teacher and course review after handing in the final exam. This means you can comment on the quality of the exam, but you don't know your grade yet.
Theoretically, if you grade students several times throughout the course, you can correct those who are on course for a poor grade (due to misunderstanding the content, or misunderstanding how much studying they need to do) while there's still time for feedback to improve their grade.
(At my school many teachers were slow to mark and return assignments, so this benefit wasn't really achieved - but it might have been had feedback been more timely)
It is 2018; these patterns are _easy_ to detect. Honestly, at a fundamental level, bailing on optimizing teaching performance and student outcomes through feedback loops seems nuts to me. This is how all high-functioning dynamic systems work (OK, maybe not all).
You can't throw out the baby with the bathwater here just because there are things to figure out. Most of the reasons people have cited here seem like lame excuses, all easily addressable if you could get sane people aligned behind a well-designed system. Unfortunately, not everyone involved in these conversations is acting sanely (or objectively in the best interests of the community), and therefore getting everyone aligned is practically impossible.
This whole conversation, BTW, reminds me of the similar debate around public access to doctor and surgeon outcome data, where some doctors are amazing and others have horrendously bad success rates for specific surgeries, but if you go to that doctor for that surgery you are almost never given access to that history. A large portion of the medical community is very hostile to the idea of things being otherwise, which, IMO, is indefensible from a public health POV and only makes sense if you are a crappy, unscrupulous doctor seeking to avoid accountability.
The same thing happens with doctors when you make satisfaction/results stats public: surgeons, for example, are more likely to refuse to take on difficult cases, preferring easy ones that will increase their patient satisfaction; or the best surgeons get stuck with the hardest cases, which tanks their relative outcomes.
> Regarding professors wanting to keep review data tightly sealed: in my view, if you can't abide public disclosure of your evaluation, then you either don't feel you're meeting expectations, or have no desire to improve in areas where students feel improvement could be made.
There is also the third option, which the author is arguing for: that the professors do not think the ratings are an accurate reflection of their teaching skills.
Perhaps that describes the engineering statistics professor I met who bragged that no one ever got an A on his final, because teaching was a competition between him and the students. And the government professor I had who was inordinately proud of the fact that his course was required because three soldiers from Texas stayed in China after the Korean War, and who spent the rest of the first class going around the room having students introduce themselves and then mocking them. (I dropped the class the next day.) And a number who were just disorganized and incompetent, but protected by their relationships with other faculty. And the professor who was given tenure for political reasons, after threatening to fail an entire class (of a required course) of computer science undergrads because they weren't great students like the electrical engineering ones.
I bet the adjunct whose English was so bad that she would just answer something kind of close to what you asked, as quickly as possible, and try to move on, would too.
She also answered any questions asked in Mandarin, in Mandarin. When asked to translate an exchange for the rest of the class, she blushed and said 'it was complicated'. At the end of the semester, the class was so far behind, 1/3 of the materials for the course exam were delivered during an optional study session.
And so would the electrophysiology professor who spent over an hour of a grad seminar explaining how to use a floppy disk. Because he had trouble with computers.
I've seen a lot of cases where the professors thought exactly that. Unfortunately they were in denial because their teaching skills were abysmal.
I've seen courses where the students were memorising MATLAB scripts by rote for exams because almost none of them had sufficient understanding to have any chance of recreating them in the exam. Why? Because rather than teaching this stuff, the lecturer spent several lectures explaining the details of the representation of floating point numbers (this was a first-year class for math majors, not people doing CS).
But it was hard as student representatives for us to prove any of this, because we didn't have access to feedback responses.
I accidentally cut myself a little too deep while doing dishes a couple years ago (opaque, soapy water and sharp surfaces don't mix well). I couldn't get the bleeding to stop (without constant pressure held for about an hour), so I went to urgent care. I knew that I needed stitches or that "glue" they use to seal wounds. I didn't care which solution the Doctor picked, but I knew I needed something.
Students have some sense of what a generally good instructional experience looks like. It's better to collect this feedback, and have people that are experts in instruction examine the reviews, synthesize with their own knowledge of what makes instruction great, and use that information to improve.
But saying we should shut students out entirely because "they don't know what they need" removes a fruitful data source for determining where professors can improve.
Sadly, the only thing many students feel they need from a university education is a degree. This is not surprising, given our society's constant refrain of telling everyone, "You need a college degree to get a good job".
Many students are there not because they want to learn but because they want that piece of paper that is the ticket to a successful career. A teacher forcing them to learn, to spend time and effort studying and working, is not seen as a positive if your goal is simply to get a degree. The ideal teacher for them is someone who just gives you an A no matter what. That would be the quickest and easiest way to guarantee achieving that goal.
This is the true issue here - the disconnect between what many students want from a university education and what that education is actually supposed to be.
Studies show that the pay differential between "all but completed degree" and "completed degree" is stark. Researchers have said that this indicates that a large fraction of the "value" of college is just signalling (intelligence, aptitude, diligence), not education.
If that's the case, many of the students in your post are right -- just giving them the degree could save them (and the school) a lot of time and money. Maybe your art history class is a cost-effective way to get a world class education in late Italian Romantic oil paintings, but as a way of proving you're smart enough to work in a law firm (or even "broaden your horizons and become a better citizen through rounded education") I can't imagine it's terribly efficient.
It's in every student's individual interests for their school's standards for admission and grading to be low, so they can obtain the credential easily.
But it's in the student body's _collective_ interests for the school's standards to be high, so the credential retains and improves its signalling value.
There's a reason a C- student at Harvard doesn't transfer to a deprived community college where they'd be at the top of every class :)
The thing is that most university grading systems have a very high rate of false negatives. For example, in the UK it is rarely possible to retake a course. And this means that messing up a single exam can cause grade problems, even if the material is well understood.
In my experience, exams are often full of ambiguous questions, questions testing knowledge that is not part of the course syllabus, etc.
If the grading was fair, I would agree with you, but it rarely is. And IMO this is what students really hate: they put in a hell of a lot of work. They actually get to grips with the material, and for stupid reasons out of their control they end up with no credit for it.
If the doctor had chosen one sealing method over the one you preferred, and you gave a lower score because of it, that still would not be an indication of the quality of the treatment you received. You wouldn't have been qualified to determine the best method.
When you review feedback collected (in any situation analogous to what you mention), you take that into account.
What if the feedback also said the doctor wasn't personable? Maybe the doc could be a bit warmer with his patients... and a patient would be absolutely qualified in determining whether or not that bedside manner is present.
Like all end user feedback, what they claim they need isn't always going to be what they actually need, but the pain points they reveal can inform where improvement is necessary and valuable.
Yeah, that's another angle I didn't touch on. You don't have to publish all the feedback data. You can choose to go with just the quantitative results.
Comments are tricky since they're both qualitative and might need to be scrubbed for anything personally-identifying.
When they are paying the bills, they have a right to provide feedback on the product they are receiving. Students go into deep debt to pay schools; they are the customers, and their feedback ought to matter.
I was a student representative on my course at one point. I strongly considered running my own evaluation (with the same questions as the official one), and organising a boycott of the official one because of this kind of bullshit.
How the university thinks it can claim ownership of data that comes from students, I will never understand.
If anything, you would think professors would welcome reviews. They already get reviewed anyway; it's just by word of mouth. At least with published reviews a student could see a much wider range of opinions versus one person's experience.
I can understand that perspective about it being used as a metric for the dean, but as a student I would have loved access to that info. It's kind of like any other review system: a couple of bad reviews among many high ones, you take that into account; but given a lot of bad reviews, I'm probably going to avoid that professor, restaurant, product, service, etc.
It should not be used as a precise metric of performance, but as a general trend overall. Instead of one friend giving a bad review for whatever reason, now I have access to the hundreds of bad experiences or the 99/100 good experiences.
As a student I really don't care about the dean professor relationship. I am a paying customer of the institution. I want to know if that professor is worth taking their class.
Also if you don't care about students giving praise/negative reviews to a friend, that kind of says something.
> Also if you don't care about students giving praise/negative reviews to a friend, that kind of says something.
Note I didn't say I didn't care what those reviews were. Honestly, they're probably more nuanced than student evals and thus more useful in choosing classes.
We made our own review system, to compete with the contracted-out system they used for their siloed data we didn't have access to.
We agreed to not launch the system, in exchange for limited access to the data, as I described above.
Professors en masse opposed opening up this data. That opposition alone made me feel we were doing good work. If they had to stand by their public reviews, hopefully they could stand by the quality of the instructional experience provided.
I ended up making a startup to deal with issues like this, selling SaaS solutions directly to student governance groups rather than to institutions. Those groups control fairly significant budgets (at mid-to-large institutions they have 6-to-7-figure budgets, easily enough to afford enterprise pricing). My platforms are student-first. It's more a passion project than an attempt to reach Zuckerberg levels of wealth.
The things shared by word of mouth aren't things that the professor would consider anyway:
personality, difficulty, ability to teach (relatively), understanding of the students' pressures, etc.
When I say difficulty, I mean either easiness (people take the class to improve their GPA) or difficulty in the sense that the professor wants to make an example.
Student's pressures: Does the professor demand more than a reasonable amount of time from the student?
----
All of this being said: I haven't seen a pragmatically taught course before (as in, the professor treats the course as a means and acts as the person who facilitates the student through it). I think that's what students really look for.
American colleges more resemble four-year all-inclusive resorts than they do rigorous institutions of higher learning.
Serving someone well doesn’t necessarily mean making them happy. Students — people who by enrolling at an institution in a course of study admit their ignorance of that domain — do not seem to me good judges of an instructor qua instructor.
Students aren't reviewing the quality of their campus on-site gym when they review their professor.
There are genuine concerns that deserve to be heard. One example is slowness in grading homework and exams. Professors get backlogged, multiple assignments pile up, and the student misses out on understanding some concept they are later graded on. If the student had regular grade updates, they would know where to focus to master the course material. Instead, those misunderstandings snowball, and you end up doing worse on the final exam because you didn't know which course concepts you had understood correctly and which you had missed.
I agree broadly with the holistic assessment of what American colleges have become. But the review process is still germane: classes are the core offering of college.
I remember with fondness a professor of mine who drove some of my classmates mad by basically only grading papers on demand, due undoubtedly to some of what you've mentioned.
Classes may be what students take when they go to college, but I always thought of them more as ice-breakers for the forming of relationships with instructors and fellow students and a way to foster a community of intellectual enquiry.
I’ll take your word for it, but the ads for Sandals resort I saw on tv twenty years ago sure seem a lot like what I observe walking through NYU’s campus every day.
There is a universal baseline expectation that everything the instructor says will be correct, so the student's lack of knowledge isn't holding them back. What they have to evaluate is how good the professor was at teaching, and they are the foremost experts on that subject.
A possible solution would be to split the job, to avoid conflicts of interest: one teacher teaches, another one grades. When I was studying in France in "prépa", all students wanted a real evaluation of their level, because the really important exam, to enter the best universities, was organized nationwide. But this is hard to organize in private universities where students are also clients. Which parent would pay $40k/yr to be told that their kid does not study well? Maybe here, too, the job can be split to avoid conflicts of interest: one university teaches, another organization grades.
What exactly are universities selling in that case?
From an individual standpoint, sure, a student is always well-advised to take charge of their own learning process. But if you can do that, what do you need a university for?
At this point I'm pretty sure the answer boils down to "a piece of paper and a dating pool."
Honestly, I wouldn't be surprised if uni for tech becomes a thing of the past. Uni didn't teach me anything I didn't already know from working on personal projects, and I'm fairly sure my public git repos had far more effect in job interviews than uni did. I have talked to others who have finished, and they hardly know anything compared to those I know who are self-taught.
In my experience, uni was a place of memorizing endless lists of pointless rules. Better reserve an hour a day just combing PDFs and the website to work out which font to use, or whether the marker prefers camel case or snake case. Just focusing on learning won't get you anywhere, because you can lose almost all of your marks by missing a few of the formatting rules in the complex 40-page spec sheet for a Python game of hangman, or by failing to comment every single line.
In my experience this has been good for weeding out really bad profs. At my grad school they turfed a prof halfway through who was truly bad.
That said - it's perverse at the other end and I do believe that there is unconscious sexism.
We had a really smart bunch of people, and some of the female teachers were a little more apprehensive in front of us, which I think registers with people subconsciously.
Another way: the feedback has to be interpreted.
I suggest that the profs should not be 'graded' by students - rather there should just be an open ended opportunity for feedback.
I'm a CS student in Germany and, contrary to the author's claim, we actually do evaluate teachers (to be fair, that might not have been commonplace when she was teaching).
This is obviously anecdotal but in my experience these ratings seem to reflect my perceived quality of the lectures quite well.
Sure, evaluations are flawed, but it appears that universities are unable or unwilling to develop other quality measures. For example, this semester I had a networking professor who would get halfway into explaining something interesting and then state that he didn't care about the topic, so he hadn't prepared to talk about it and we'd be moving on to something else.
I got an A in that class, but I wouldn't say that I learned much from it due to that attitude. I don't think that the CS department here has any way to tell that something like that is happening without evaluations.
Right. I think anybody reading comments in these review sites can see if students are ranking professors well for being "easy" or if they're ranking them well for being great educators.
The universities I went to for my undergrad and graduate programs both had an informal rating system for professors and I definitely encountered some of both.
One of the most highly rated professors I encountered was also one of the best educators at the school. He had a reputation as being a brutally hard grader. But most of the students who left his classes with some bruises and (what would have been in any other class) a mediocre grade, ached to take more from him because you felt the immense value of his teaching.
I had other professors who taught poorly, tolerated cheating, or gave non-sequitur exams on material they never covered and so on. There was absolutely no recourse or way of providing feedback on these professors and even ones who received universally poor reviews at the end of the semester stayed on and even attained tenure in some cases.
The only real recourse was for students to rank professors amongst themselves and simply starve out poor professors with lack of registrations. That's the only signal universities seem to respond to.
I got the impression that the author was talking about department-run evaluation sheets rather than review sites. Student evaluations shouldn't be seen as an evaluation metric, but as a smoke detector indicating that a staff member might need some more direction.
Teaching is an art, just because someone isn't doing it well doesn't mean they can't do it well. In fact in some cases I imagine they are doing poorly because they would rather be working on their research area. Providing the opportunity to course correct rather than starve out is good.
Do you have student-staff meetings? It seems some schools do that, and others have never thought of it.
Basically the idea is that a representative of a group of students comes together with other such representatives and school staff (I don’t know the English words, basically the managers of a school) and discusses the good and the bad of what’s going on. I have heard some positive experiences with this method.
If you’re not doing these things, perhaps pitch it to people. Or poll other students and ask someone at the school for an appointment.
There could be sexism at work or not, but how can you even define "identical" teachers? I never saw two teachers that were identical (even if they had the same gender, ethnicity, age and sexual orientation).
If the school would force teachers to grade on a curve, the problem would go away. Teachers would have only a limited budget of high grades to give away.
Grading on a curve is not without problems, particularly when class sizes are small, but it has a lot of benefits. It makes it easier to compare students across schools when the schools use the same curve.
That is a horrible idea. Then students will try to take courses with weak students. Oh, that smart group is taking CS340 next semester? Guess I'll wait until the following semester. Do not turn it into even more of a competition than it already is.
Why should someone else's ability influence my grade???
Grades are relative to other people. That's the point of places asking for GPA. If you were just evaluating your effort / learning, your grades could just say "learned a lot" or "made a good effort". But GPA is a way to discriminate between potential candidates for schools/businesses, and thus needs to be quantifiable and comparable.
Let's say you are right. You can only compare students from the same class if a curve is used. And then it doesn't tell you anything about competency, just whether they were better or worse within that small cohort.
If a class is filled with C and D students (objectively), the curve will spread them out and those C's become A's. If a different class is filled with A and B students, they are spread out, and those B's can fail or become C's. Now the first group's curved A's are not as good as the second group's A's and are, in fact, worse than the second group's F's or C's.
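That distortion can be made concrete. A hypothetical sketch (the quartile cutoffs and scores are invented for illustration): rank each class internally, hand both cohorts the same letter budget, and the weak cohort's best C-level score comes out as an A while the strong cohort's weakest A-level score comes out as a D.

```python
def curve(scores):
    """Assign letters purely by rank within the class:
    top quarter A, next quarter B, then C, bottom quarter D."""
    ranked = sorted(scores, reverse=True)
    n = len(scores)
    letters = {}
    for s in scores:
        frac_above = ranked.index(s) / n  # fraction of the class ranked higher
        if frac_above < 0.25:
            letters[s] = "A"
        elif frac_above < 0.50:
            letters[s] = "B"
        elif frac_above < 0.75:
            letters[s] = "C"
        else:
            letters[s] = "D"
    return letters

weak_class   = [45, 50, 55, 60]   # objectively C/D-level work
strong_class = [80, 85, 90, 95]   # objectively A/B-level work

print(curve(weak_class)[60])    # A: best of a weak cohort
print(curve(strong_class)[80])  # D: worst of a strong cohort
```

The 60 and the 80 swap places in the letter ordering even though the 80 is objectively the stronger performance, which is the whole objection.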
I often set the curve in my classes. I was offered money not to do my best on exams because of how it affected others' grades. My ability, or lack thereof, should not affect another's grade, which should reflect their demonstrated mastery of the subject.
Some schools have found how useless GPAs are and have adopted narrative evaluations [1].
The grading in my course is not relative. I set a rubric that I assess each individual on. Sure, within the class you can compare grades, but I wouldn't say it is particularly meaningful. Comparing between classes or universities is completely meaningless.
People just want a simple number that can accurately represent someone's knowledge/learning/skill/effort, but it doesn't exist.
No, not at all. GPA tells the difference between "mastered the material" and "did well enough not to get expelled" which can be a useful signal for an employer, but universities and professors don't view it as their responsibility to sort students for industry's needs.
For starters, the quality of students is not uniform across universities. Middle of the pack at Harvard is a very different thing from middle of the pack at a party school. To get a useful ordering, you'd need to give every CS graduate the same standardized test, like the bar exam.
Grades are relative to other people in your school and program; it's why you mention the institution on the resume. And IME you don't often communicate your grade but your class rank, most notably if you're at the very top (valedictorian, etc); even with grade inflation schools still must be able to differentiate the very top students from everybody else so they can actually provide these honors. You can't graduate an entire class of "top" students, no matter how earnestly they studied.
With few exceptions there is no mastering the material, or at least there shouldn't be. Not in college. There's always more complex material you can use to differentiate students. When you grade on a curve the test should be difficult enough that even the best students will typically get at least one question wrong, with most students falling along a nice bell distribution. (I say most because it's probably better to err on the side of a handful of students clustering at the top rather than at the bottom, especially if the bottom means flunking out.)
This can be more difficult for smaller, seminar-style classes. One solution is giving the professor leeway to shift the curve up and tighten it so they're not forced to give Fs or Ds. These classes usually come in the later years of a program, particularly in undergraduate study, where it's less important to winnow students out and ensure a challenging curriculum.
That latter point is important. If you don't enforce some sort of statistical distribution, how can you gauge the quality of your curriculum? If everybody is getting As, is that because all your students are smart and disciplined, or because the material is too simple? If you require a bell distribution, then the curriculum will by necessity be a good fit for your cohort of students. In this way a university can maintain a quality curriculum without having to resort to external metrics or comparisons with other schools.
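The mechanics of enforcing a distribution can be sketched in a few lines: grades are assigned by rank within the class rather than by raw score, so the shape of the grade distribution is fixed no matter how easy or hard the exam was. The percentile cutoffs below are illustrative assumptions, not any institution's actual policy.

```python
def curve_grades(scores):
    """Map raw scores to letter grades by rank within the class.

    Illustrative cutoffs: top 20% -> A, next 30% -> B,
    next 30% -> C, bottom 20% -> D (an A-D curve with no Fs,
    as discussed above). Ties are broken by position, which a
    real policy would want to handle more carefully.
    """
    n = len(scores)
    # Indices of students, best score first.
    ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)
    grades = [None] * n
    for rank, i in enumerate(ranked):
        pct = rank / n  # fraction of the class ranked above student i
        if pct < 0.20:
            grades[i] = "A"
        elif pct < 0.50:
            grades[i] = "B"
        elif pct < 0.80:
            grades[i] = "C"
        else:
            grades[i] = "D"
    return grades

# Even with uniformly high raw scores, the distribution holds:
print(curve_grades([95, 88, 75, 70, 60]))  # ['A', 'B', 'B', 'C', 'D']
```

Note the trade-off this makes explicit: the same raw score earns different grades in different cohorts, which is exactly the property that makes curved grades useful for monitoring curriculum difficulty and useless for comparing students across schools.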
I'm aware that this philosophy of education and grading exists, and it may well be superior, but it's not how things worked in any part of my education. Grades indicated how well you met the instructor's expectations. Most classes gave mostly As and Bs: if you weren't going to do well relative to the course, you wouldn't let it get to the point of a letter grade.
Instead the differentiation was between departments and courses. It was well understood that X majors were Y majors who couldn't hack it, and that within a major, certain classes were for high-achieving masochists while others were schedule padding. (This was crucial information, not just dick-measuring. You could easily put yourself in crisis by taking a class beyond your abilities, or multiple hard classes at once. Friends helped each other avoid such nightmares). If you asked me to evaluate a classmate's transcript, I wouldn't even look at their grades, only at the classes they allowed themselves to have grades in.
No one has ever asked me that, or to see my transcript.
You’re saying something like Cravath would improve the situation and be a fairer assessment than grading on absolute scores. I was in a class with no curve at all: 30% of the class scored above 90%, and 45% scored below 60%. What would be “fair” then?
Perhaps methods to decouple evaluation scores from student grades should be explored further, without antagonizing anyone. Perhaps scores collected after the first week could be compared with scores at mid-term and again at the end.
Maybe that whole temporal approach is flawed too and we need to try stranger things. A weirder option: randomize which evaluations are handed back to which teachers, have each teacher pick out the ones that seem to be theirs, and give some credit to professors who identify their own accurately. Essentially it would test whether a teacher can recognize their own teaching style's strengths and faults in contrast with other teachers', and it would encourage diversity of styles among the faculty. If a teacher wants to grade hard, I think they should be free to do so if it's shown to be in students' best interests. This is vaguely similar to how classifiers work in machine learning training, but with a very different goal in mind.
The answer to your question is that the class in question is a bad class, and needs to be fixed.
If 45% of students are failing, then either the class needs to be changed, or more effort needs to be made to prevent students from taking a class that they are likely to fail.
Yes, except that fixed number could be 0. Often a curve might be A-D, not A-F. And there are many shapes of curves; you choose the shape based on what you're trying to achieve. In this day and age a curve that requires as many Ds as As might not be tenable, but that doesn't mean you can't enforce some statistical distribution to help manage curriculum quality.
The problem with grade inflation isn't just that it becomes more difficult to judge student competency; similarly, it becomes more difficult to judge curriculum quality. If everybody is getting As when the curriculum is challenging, they'll continue getting As if it degenerates. Enforcing a distribution helps you to maintain a stable relationship between the abilities of your student body and the quality of the curriculum.
I don't think it would make it any easier to compare schools. I'm fairly certain a 30th percentile student at Caltech could go to Directional State University and be at the top of the equivalent class.
People trying to judge students based on grades would then have to have a rough sense of how the institution works and how a grade should be interpreted.
But this is what they have to do anyway, curve or not. For example, employers already know that a B- at an Ivy League institution is a somewhat poor grade. (At Harvard, 75% of all grades given are B+ or above.)
Admittedly I think the Ivy League system (which effectively is grading on a curve, just a very narrow one with the mean set to an A-) is pretty good. There's certainly little incentive for students to want easy classes, because in terms of grading there is basically no difference between an easy class and a hard class. It's as close as you can come to just not giving grades at all, but doesn't mark you out as a weird hippy school.
This is also a problem for developer bootcamps, since they get a lot of "sales" from reviews. They can often cater to students to the detriment of the learning experience - sometimes it's just about creature comforts, but it can bleed over into weakening the instruction itself.
And especially in bootcamps, since things are so fast-paced and often emotional, the reviews are not objective; many are too positive or too negative.
Teachers should be judged on the longitudinal performance of their students. Khan Academy and BlackBoard must have some utterly fascinating data in this regard.
As someone who had a tenured teacher with the absolute worst reviews on every website, who recorded his lectures once and now shows up only to answer questions (which he barely does - sometimes he just points back to the lecture video), I strongly feel evaluating teachers is a necessary thing.
Is something preventing you from communicating your concerns directly to the administration (in person or anonymously) outside the context of a perfunctory, after-the-fact review?
I would imagine a truly poor teacher would stand out for the number of unsolicited complaints.
Truthfully, I have no idea what he even does with the required review papers that are given to us at the end of the class. Lots of people have reviewed him on Askmyprofessor.
A possible alternative: hire and train students to be observers, both covert and overt. Put them into classes as if they were ordinary students, and have them evaluate the professor. It could work especially well across colleges, where the observers would never naturally take that professor's classes.
I would also assume that the students, taken as an average, would also be some of the worst qualified people to evaluate educators, pretty much by definition.
I was a visiting professor in a local university for a while. The student evaluations were directly proportional to the grades - easier courses = better grades. I refused to see the evaluations and refused to discuss them with my supervisor, it was pointless. BTW my long-term relationship with ex-students is great, at least with the ones I care about.