Monday, August 11, 2014

How Not to be an Asshole: Peer Review Edition

I am somewhat late to the “here’s everything that’s wrong with the peer review process” party. This is in part because I have spent the last several weeks busting my ass to complete revisions requested by a reviewer of an article I submitted quite literally years ago. Following the Wordsworthian injunction that the “spontaneous overflow of powerful feelings” should be “recollected in tranquility,” I didn’t want to write about peer review when I was feeling Hulk-smash-ragey about the peer review process. But I’m feeling much better now. Since peer reviewers are so kind as to provide us all with feedback on our work, it seems only right that we should reciprocate by providing some suggestions for them. So, to the gatekeepers in my readership (I think there might be one of you, maybe even two!), I humbly submit my suggestions for revision of peer review practice.

Try to respond in a timely fashion.


I understand that peer review is generally thankless work; it is unremunerated, and at best it earns you some favors from the editor and an entry in the “service” column of an application for promotion. Mostly, it’s just service you do to the scholarly community because you have an investment in that community–this is very benevolent of you, and you probably deserve more thanks than you get. But a quick look at the editorial boards of all the big journals indicates that most of the people doing peer review work do so as tenured faculty: you folks have lots of pressures on your time, but you also have a position of security. By contrast, a lot of us who send essays out to journals and edited collections do so with a big fat clock ticking over our heads. For graduate students and adjuncts looking for full-time employment, it’s the job-search clock, which ticks down the minutes until the next market cycle. For them, having an article accepted for publication before the CVs go out to search committees can mean the difference between advancing to the interview pool and not. For junior faculty, that ticking you hear is the sound of the tenure clock. If we don’t have enough publications when the buzzer sounds, we get to start looking for a new career. So when an article lands on your desk, it would be extremely helpful to us if you could read it and return it with feedback ASAP.

We can debate what a reasonable timeframe would be for returning essays to their authors, but we can probably all get behind the notion that, if you’re measuring in half-years or years, then it’s taking too long. I am not the only person I know to have had an article sit on someone’s desk for over two years, and that’s really not okay, both for the reasons I just mentioned and because sitting on an essay for that long shows serious disrespect toward the effort and time the author put into writing the work in the first place. The problem here isn’t that it takes such a long time to read and comment on a 25-page article; the problem is that the work of reviewing doesn’t get prioritized by the reviewers in this situation, which says to the author, “Your work is not important to me.” Again, there are reasons why the work of reviewing might get de-prioritized that I mentioned above. But if you take on the work, then you should commit to doing the work. It’s okay if you’re too busy: just decline the invitation to review, and let the editor find someone who isn’t, at present, swamped. The fact is that the author of the article is also, more likely than not, a tremendously busy person with similar if not greater pressures on their time. The difference is that, when you accept the invitation to review an essay, that author is now depending on you for their work to mean anything. It’s simple professional courtesy to hit your deadline and respond in a timely fashion. And if, as is likely, that author is lower than you on the academic totem pole, then anything less than a timely response is a failure of stewardship for your profession, insofar as you are potentially shutting the door in junior scholars’ faces without giving them the opportunity to succeed.

Help, don’t hurt


I remember one reader’s report I received that stated, “The author is clearly unaware of the large body of scholarship on X.” This was somewhat surprising to me, because I’d actively looked for things written on X, but hadn’t found much, and so I assumed it was a peripheral issue rather than something a lot of scholars talked about. The problem with the reviewer’s comment from my perspective–which is to say from the perspective of someone who sincerely wanted to make this the best essay I could–was that the comment was both dismissive and unhelpful. The phrasing was condescending insofar as it was a claim about my ignorance, rather than about an omission in the essay (my favorite on this score was one that said I had “only a light grasp” of the scholarship in the field), and it offered me no direction for fixing the problem. I’d already searched on this topic and found nothing; now I was being told that there was, indeed, a “large body” of stuff somewhere out there, but I was given no guidance in how to find it. By contrast, the most useful reader’s reports I have received–ones that eventually led to publication–made recommendations rather than accusations. The first reader for my book noted that the draft wasn’t addressing a body of scholarship that I hadn’t realized existed, and recommended that I look into work by a couple of specific scholars. Those recommendations helped me find other material on the subject as well, so that I was able to address the omission in the draft substantively and, I think, produce a much stronger piece of work. Instead of calling me deficient and leaving it at that, my book’s reader told me what was missing, then pointed the way to where I might find what I needed.

If the point of peer review were just to say “yay” or “nay” to pieces of scholarship, we wouldn’t bother with reader’s reports: readers would just give a thumbs-up or thumbs-down, and the journal would give the author a yes or no, without further comment. The fact that we write reports that get passed to the author at all suggests that part of the purpose of peer review is to help authors improve their work. If that’s the case, then when you are writing up a report you should ask yourself about every comment you make, “Will this help?” You have to justify your recommendation to publish or not, but if your recommendation is “no,” then providing guidance for how the essay could be brought up to snuff should be more of a priority than making sure the author knows you think they’re stupid. We do this all the time when commenting on drafts of student essays. The point isn’t to tear the student down; where there are problems, we point out those problems and offer suggestions for how to address them–and we don’t (or we shouldn’t) say, “Your work is woefully inadequate” without further comment.

I’ll be honest, though: when, after trying to research a topic as thoroughly as I can, I see a reader’s report that alludes to a “large body of scholarship” I’ve missed without mentioning a single scholar or scholarly work that belongs to that body, I’m suspicious. It reads a little bit too much like undergraduate appeals to “what everyone knows,” and I have occasionally wondered whether this “large body of scholarship” actually existed, or whether the reader was simply half-remembering a couple of essays they’d read that mentioned the topic. At best, this kind of criticism is lazy, a gesture with no substance. At worst, it’s intellectually dishonest. If you can’t come up with one citation off the top of your head, you might need to rethink whether this body of scholarship is as large and as important as you think it is.

Not everything is about you


I considered titling this section, “Don’t be a narcissist.” There are a lot of different bad reviewer behaviors that fall into this category. Perhaps the most obnoxious–and most dangerous to the profession–is the one that doesn’t distinguish between the tasks of deciding whether a submitted work is good on its own merits and whether the reviewer actually likes what the work has to say. My very first reader’s report was also the most vicious I’ve received in my (still relatively short) career, because it came from a reader who seemed unable to tell the difference between an argument and a personal attack. The reader opened by saying that the essay was well-written, that it was adequately grounded in the relevant scholarship, and that the argument was well-structured and well-reasoned. And then they recommended that it not be published. I reread the report as many times as I could stomach, trying to figure out why they didn’t recommend publication, but all I could come up with was that the reader didn’t like my argument. Mostly, the report just criticized the politics of the position, and at one point the reader weirdly compared me–not the essay, but me–to George W. Bush. This went on for three pages, single-spaced. Luckily for me, the editor overruled the reader and gave me a revise-and-resubmit. I had to go a second round with the reader, but eventually they conceded with the comment, “perhaps I am just worn down from engaging with a well-crafted argument with which I vehemently disagree.”

This reader wanted to stand between my article and publication not because my article was a bad piece of scholarship–they admitted more than once that it was a good piece of scholarship–but because they didn’t like what I had to say. If this is the criterion all readers use for deciding whether something will be published, then there will be no innovation, no opportunity for argument or critique, and no advancement. We will simply clone ourselves. Which, as at least one person has pointed out, is pretty much what the humanities are already doing, suggesting that my experience with my first reader is not anomalous.

One of my favorite (by which I mean least favorite) comments I’ve ever seen from a reviewer was on a report a close friend received, which told him that, “If we accept the author’s position, then that closes down other avenues of interpretation.” Well, yes, that is what an argument–any argument worth the name–does: it makes the case for one way or set of ways of reading, and in doing so implicitly argues against others. What the reader really meant was that my friend’s position argued against the reader’s preferred way of reading the text, and the reader didn’t like that (so the reader recommended not publishing the piece; it was later published in a better journal with no substantial changes). This kind of gatekeeper is more like the bouncer manning the velvet rope at an exclusive club: they only let through the “right” kind of people, which is to say those who fit the tastes and preferences of the gatekeepers, rather than those who are simply producing good scholarship. This is obviously not good for the profession of academic inquiry.

If you’re reviewing an essay submitted for publication, please try to remember that not everything is about you. If your preferred position on a topic you care about is being assailed, that’s not a personal attack on you and your career; it’s just an attempt to create a different way of looking at the problem. Similarly, not everything you read has to cite you, your friends, and everything you’ve read recently. After doing a Twitter poll of ridiculous responses from peer reviewers, Rebecca Schuman provided a succinct summary of what she saw:
So many readers’ reports can be boiled down to: “Why wasn’t this article exactly the one I would have written?” (Or: “Why wasn’t I cited enough?”)
This accurately describes my own experiences with peer review, and I suspect I'm not the only one. Asking authors to cite your own work more is just tacky, unless you’ve written the definitive treatise on the topic (and by “definitive” I mean others have recognized it as such–you don’t get to give yourself that designation). You may have written something on the topic, but you really shouldn’t treat your position as gatekeeper as your own little citation generator.

More often, though, reviewers seem to think that any work they review should have the same frame of reference they do. It’s one thing to read an essay and say, “Well, I’ve just read this other essay that seems like it might be useful to the author–the author might want to take a look at that.” It’s quite another to say, “The author neglects the work of [person I was just reading]; this is necessary to cite.” About a year ago, a reviewer told me it was “necessary to cite” an essay that was (1) at best tangentially related to all the main claims of my essay, and (2) NOT EVEN IN PRINT YET. This speaks to an argument I made long ago about over-citation; but it also suggests a mind that isn’t distinguishing between the reviewer’s work and ideas and the author’s. Unsurprisingly, this same reviewer also wanted me to cite their own work several times, including an essay that had just come out in print. When I went to look at the new essay, lo and behold, my reviewer had cited this “necessary” study that was still in manuscript. My reviewer wanted me to reproduce their own research.

Look, I get that we all necessarily work from our own frames of reference, drawing on our own bodies of knowledge to judge things. And I get that it’s a lot easier to see the faults in an argument we disagree with than in one we agree with. But if part of the purpose of academic inquiry is the disinterested pursuit of truth, then at the very least those of us serving as gatekeepers have a responsibility to try to be aware of the limits of our perspectives and biases and to correct for them as best we can. If you’re reviewing a work and you want to say “no” or “make these revisions,” ask yourself first where you stand on the topic being discussed, and consider seriously whether your position is potentially skewing your recommendations. Ask yourself whether your recommendations are enforcing conformity with pre-existing research. Ask yourself also whether these recommendations are designed to help improve the work or to hurt the author of the work. And, for the love of all that’s holy, don’t take three years to do it.

Sunday, July 27, 2014

Solidarity and the Future of Academe

The word “solidarity” has been making the rounds in discussions of the future of higher education lately. The adjunct union movement has been talking about it for some time, seeking to find allies and develop support as its members band together to fight casualization in what is, to my mind, the single most hopeful force in higher education at present. Among ladder faculty, however, the meaning of solidarity is a bit more contested. In particular, A.W. Strouse’s recent piece in the Chronicle of Higher Education attacking the work of Rebecca Schuman has tried to draw lines in the sand that would distinguish “an ethics of solidarity” from harsher forms of critique—forms which, according to Strouse, are more destructive than constructive (see also Rosemary Feal’s attempt to draw the same line over the angry responses to the MLA task force report on graduate programs). Karen Kelsky has already examined the extent to which “tone-policing” resembles the discourse of white privilege (Kelsky’s target is Claire Potter, but we can see the same thing happening in Strouse’s essay and Feal’s defense of the MLA report, and in countless other places–just look through responses to criticism of the MLA report, or responses to pretty much anything Schuman writes); and Marc Bousquet has pointed out the patent contradiction in calling for “solidarity” while taking a hatchet to one of the strongest voices in the fight to improve higher education. What I want to add to this discussion is more a comment about strategy, to make the case for a more inclusive definition of solidarity.

There’s a storm a-brewin’ in higher education. More and more articles are being published on the crisis of student debt and on out-of-control administrator salaries. And now the government is increasingly trying to look like it’s taking on the problem of student debt (a crisis the government in fact perpetuated, but that’s for another day), while noise about rethinking accreditation and the rules for receiving federal funding is building in the background. The more information becomes available about how badly university funds have been mismanaged—that is, about how administrators and boards have continuously raised tuition to ridiculous levels and used that money for purposes other than the educational mission of the university—the closer we move to the moment when parents and students stand up and demand accountability from these high-priced institutions. And when that happens, faculty have a choice: we can show that we are part of the problem, or we can be part of the solution. That choice will determine the fate of the academy, and the role of the professor within it.

The only line we should be drawing in the sand is between those who want to uphold the mission of the university as a place where students can learn and where new knowledge is produced, and those who want to interfere with that mission. Anyone seeking to diminish the quality of university education by, for instance, not giving faculty the resources they need to do their jobs (like, you know, office space, or a living wage), or by looking for ways to cut research funding, or by looking to pit faculty against each other in a war that no one wins (except administrators)—these people are the enemy. If and when the shit hits the fan and higher education gets restructured, these are the people who should be held accountable, by which I mean that these are the people who should be fired. The question for faculty, it seems to me, is this: if and when this happens, do we want to be standing next to those who have turned higher education into a business at the expense of the university’s mission, or do we want to be on record as having fought vigorously against and publicly criticized those people and their efforts to undermine the university?

How we answer this question matters. Reform, if it comes, will be driven by a rhetoric of fiscal accountability and responsibility to students and to the public good the university is supposed to serve. The problem is that some of the people most adept at wielding this rhetoric are politicians and corporate “reform” types, all folk who at best don’t know what they’re talking about, and at worst know exactly what they’re talking about and are skilled at getting people to believe that up is down. Only if we as faculty have a clear, demonstrable track-record of fighting against initiatives that interfere with our educational mission will we convince a broader audience that we, the faculty, (1) know what we’re talking about, and (2) are truly on the side of the students and the public good. Let’s face it: the popular image of the college professor isn’t one that the average Joe warms to. And to people outside the university, we’re likely to be seen more as part of this broken system than as beleaguered heroes struggling to shield our students from the worst depredations of management. We have to prove that we’re on the right side of history here if we’re going to convince the forces pushing for reform that we deserve to stick around. And we should have to prove it; because, if we can’t, then maybe we really are part of the problem.

Going easy on the problem isn’t going to save us; it’s much more likely to end us as a profession. If we want to survive to do our work, we’re going to need to be vocal about criticizing power—including, for us ladder faculty, our own. Instead of trying to police the tactics of the least powerful among us, we’d be much better served to think about the ways in which our own behavior helps enable the exploitation of our colleagues and our students, and to strategize ways to fight back. And by throwing our support behind the noisy critics of this unsustainable system, we can demonstrate to the rest of the world—and most importantly to our students, who ultimately foot the bill for these problems—that we are not willing participants in the perversion of the university system. This, it seems to me, is what solidarity would look like—and not like trying to redefine “solidarity” so that it encompasses only that critique that makes us comfortable (which is to say only that critique that doesn’t threaten our own power).

Monday, July 21, 2014

Academic Decision Making

I have in recent posts been defending the ability of university faculty to participate more in the running of the university. While I continue to believe strongly in our ability to do so, recent conversations about the difficulty faculty often seem to have making group decisions have reminded me that the work of thinking about problems and the work of solving them are not necessarily the same thing, and that faculty must therefore, if we are to take a greater role in the day-to-day of our institutions, perhaps retrain ourselves somewhat to be better, more efficient problem-solvers. I suspect that the reluctance of faculty to get more involved in university management in part grows naturally out of negative experiences with committee work and faculty meetings. Understanding the difference between the skills that serve us academically and those that will serve us in the practical business of running a department, division, or institution might help us find our power again.

Anyone who has spent much time in faculty meetings has had the experience of interminable conversations about a problem that doesn’t seem that difficult but that never seems to get solved. After the requisite period of railing against the problem itself—either bemoaning the fact that the problem exists or denouncing the notion that the problem is, in fact, a problem (“back in my day…”)—discussion moves on to possible solutions, and from there things devolve quickly. A proposal is made. Someone responds immediately with reasons why it will never work. People argue over those reasons. Another proposal is made, radically different from the first. The same thing happens. Eventually (this is like Godwin’s Law for academics in faculty meetings) someone “goes meta”: “I don’t think we can even have this conversation about how to address low enrollments in our Hamster Fur Weaving minor without first having a conversation about what Hamster Fur Weaving really is as a discipline.” Then half the room starts arguing over the state of the discipline, while the other half debates whether the conversation really needs to take place. Anytime the room moves too close to consensus, someone will note, “Well, I agree, but someone who didn’t agree with this might say…,” and then everyone has to deal with the objection that no one in the room actually wants to pose. This goes on for several months or even years, before everyone finally—and begrudgingly—settles on a proposal that looks remarkably like the one first proposed. I’ve seen this cycle at multiple institutions and among faculty from several different disciplines within the liberal arts (broadly construed). I don’t know if scientists have these problems, but among humanists even very functional departments (like the one I currently work in) seem to fall into this cycle of indecision whenever important decisions need to be made.

It’s this experience that can give us, as academics, the impression that faculty are just bad at making decisions. It’s too much like herding cats, the argument goes: put too many independent-minded people in a room (particularly when some of those people have some pretty impressive egos), and it all goes to hell. I’ve occasionally thought this myself, buying into the academic-as-hopelessly-impractical-eccentric fiction that keeps the outside world from taking us seriously. Recently, however, I was reading the book Getting to Yes: Negotiating Agreement Without Giving In at the recommendation of a colleague, and one of the authors’ insights into how negotiations can stall out helped me see that the issue isn’t that academics are too independent; it’s that we’re trained in skills that get in the way of our problem-solving ability. One of their arguments is that negotiations get stuck when the parties involved think they must choose one of a very limited number of options, rather than looking for ways to rethink the problem and develop new possibilities. They note, “Nothing is so harmful to inventing as a critical sense waiting to pounce on the drawbacks of any new idea. Judgment hinders imagination.”

If there’s one thing all of us in the humanities have (or should have), it’s a “critical sense waiting to pounce.” In the humanities, we are trained primarily in critique. Our job is to question assumptions, look for problems, put pressure on ideologies, find opposing viewpoints, etc., and to teach others to do the same. This is, in most situations, a virtue. Because we are trained in critique, we are great at the vital skill of diagnosing problems, and at teaching our students the one thing we usually point to as the most valuable take-away from a college education: critical thinking skills. But when I read the quotation above, it dawned on me that our finely honed critical abilities actually get in the way of attempts to solve the problems we diagnose. At the end of the day, few solutions in the real world are perfect: in most cases, we can find something wrong or potentially wrong, either practically or philosophically, with just about any proposal we can make. And because critique is what we do, we automatically move straight from any proposal to finding those problems.

This is bad for a variety of reasons. First, it means that we tend to devote a lot more energy to shooting down possibilities than we do to inventing them. For any proposal, we can come up with multiple critiques, so more time gets spent discussing why we can’t do something than how we can. Second, speaking at least for my own experience, the tendency towards critique rather than invention encourages a lot of self-censoring. If I come up with a possible solution to the problem, my own critical faculties can usually find problems with the idea before I can put it on the table; and if I can see problems with my idea, I’m not likely to say anything, both because I don’t want to offer a flawed proposal, and because, as an academic who trades in ideas, I don’t want to seem like the purveyor of bad ideas to my colleagues. Even if I don’t immediately pick up on the possible problems with my proposal, however, I’m still going to be hesitant to speak the idea aloud if I know that it will immediately be set upon by others’ critical faculties. Criticism of our ideas is something we all live with, and part of being successful as an academic means learning to take that criticism in stride. Still, if every time I go to open my mouth, I know my idea will get shot down, there’s very little incentive there for me to keep speaking—or to keep trying to come up with solutions. I don’t imagine I’m the only one who has had this experience. In the end, the effect of our cumulative critical sensibilities is that we end up with fewer ideas to work with, because our approach discourages us from offering all but the most iron-clad (and, often, safe) of proposals.

The issue, then, is that we are trained as good problem-finders, but not as good problem-solvers. Happily, this diagnosis offers its own prescription: if the problem is that our critical sense gets in the way of our ability to invent solutions, then the solution to that problem is to find ways to side-line that critical sense, or to re-direct it towards something constructive. As the authors of Getting to Yes put it, we need to “separate inventing from deciding.” Their suggestion is to have brainstorming sessions in which there is no critique: any proposal can go up, and no one is allowed to say anything about why it can’t work. The idea here is both to offer a variety of possibilities to start working with and—perhaps more importantly for the academic context—to make space for the creative faculties to kick in, to give our imaginations some room to flex their muscles. The next step Getting to Yes proposes, after a period of brainstorming, is not to start eliminating possibilities, but instead to start focusing on the most promising possibilities and developing them further. What would it take to make this happen? What modifications can we make to it to make it a stronger proposal? Once several of the best proposals have been developed, then the critical faculties can be re-engaged to decide which among several options is the best to pursue.

In this approach, the focus is on how can we make this work? rather than why won’t this work?, which strikes me as a much more positive, and thus more emotionally rewarding, way to problem-solve than what we typically see happening in faculty meetings. And by first engaging our creative abilities—which are also part of our training but which usually take a back seat to our critical sense—we empower ourselves by demonstrating to ourselves that we are, in fact, capable of coming up with lots of solutions to a problem. In committee contexts, it’s too easy for us to experience our hard-earned intelligence as a disabling cynicism—we can see the problems, and we can see the problems behind the problems, and we know no solution will solve all of the problems. By purposefully silencing the critical faculty, though only temporarily, to ask what could we do?, we can, I think, discover that our big brains can build just as well as they can deconstruct. In learning to make space for creativity that is safe (again, temporarily) from critique, we might be able to retrain ourselves in more constructive and efficient forms of decision-making by using our considerable mental resources to seek out new possibilities and imaginative solutions.

Just to be clear, I’m not suggesting a Pollyanna-ish notion that we’d all be happier and more productive if we’d just stop being so negative all the time. The cynicism we experience as faculty is come by honestly; it is based in dismal realities that are not going to go away through a few creative brainstorming sessions. In the big picture, our ability to see and critique the problems facing higher education is vital for any attempt to start remedying those problems. But if critique is necessary to the process of improving the university, it is not sufficient. Critique can tell us what we want to change and why, but it takes creativity to offer us ways to change it. What I am suggesting is that, in order to become better problem-solvers in the practical work of running a program, department, or university, we need to make room to develop our creative abilities so that they match our critical ones. By making a conscious effort to focus on what we can do in the process of making decisions, and by learning to maximize the number of possible avenues for action before we start making choices, we just might be able to transform faculty meetings into a much more productive, and therefore much more empowering, experience. Knowing that we can work together to solve problems, and showing administrators, students, and parents that we are capable of doing so, would be a good first step towards taking back our universities.

Tuesday, June 24, 2014

Flexibility, the Alton Brown Principle, and the Teacher-Scholar-Admin

To justify the use of poorly paid, contingent faculty, administrators often sing a chorus about the need for flexibility: in these uncertain times, the refrain goes, schools need to be able to adjust quickly to ebbs and flows in enrollment, and to changes in the job markets into which we send our students. As Noam Chomsky has pointed out, where "flexibility" means making it "easier to hire and fire people," it's "just another standard technique of control and domination"; after all, he notes, no one ever says that we need the flexibility to hire and fire administrators in the same way as faculty. This doesn't mean, however, that "flexibility," construed in a different way, isn't a good thing. Chomsky notes the case of course reassignments: if a class under-enrolls, the faculty member assigned to that course gets reassigned to teach something where there is greater student demand. We can do that, because all of us are capable of teaching more than one thing. We might also look at curricular development for an example of "good" flexibility. It's normal to retool curricula, both at the university level and within different departments, at regular intervals to deal with the fact that our disciplines change and the needs of our students change. This requires faculty to be flexible in the kinds of classes we offer, and to be willing to learn new ways of doing things to be able to address changes in our fields and in the world around us. While we can probably all think of examples of colleagues who aren't as "flexible" in this way as we'd like, the fact is that most of us have had to learn new things and make changes in our classrooms and research, and we've risen to the occasion. Just look at how much English or Modern Language departments and curricula have changed in the last 30 years, and you'll see how flexible we can be.

In pretty much any job, the ability to do more than one thing is an asset. I'll call this "the Alton Brown principle." On his show Good Eats, he frequently preached that, except in very rare circumstances, it's wasteful and inefficient to buy kitchen implements that you will only use for one thing. Multi-purpose tools are better because it means that you will require fewer tools overall, which is both cost-effective and creates less clutter (and thus less inefficiency) in the kitchen. For universities looking to increase their flexibility while simultaneously trimming down to cut expenses (and thereby keep tuition prices lower), this might be a good principle. Creating positions for people who can fill one role and one role only in the university is going to create inefficiencies and inflexibility. Following the Alton Brown principle, hires should be multi-purpose, capable of moving into whatever areas need the most attention at the moment, and capable of being reassigned when needs change.

For example: maybe we hire people to study enrollment trends and help us get our enrollments up. But if those people are successful and enrollment goes up to the maximum sustainable level, that's going to create less of a need for their position and more of a need for faculty to teach these larger incoming classes. Traditionally, universities deal with surges in student enrollment by hiring off the tenure-track, either bringing in faculty part-time for poverty-level wages or for short-term, full-time contingent positions. The ability to fire these faculty easily is what administrators call "flexibility," but this approach is bad for students and bad for faculty morale, creating a two-tiered system that benefits absolutely no one. In addition, hiring temporary faculty doesn't address the fact that the people hired to work on bringing up enrollments don't have much to do when enrollments are high. Firing these administrators when they've accomplished their missions seems like a bad idea: enrollments may dip again in another decade, and hiring new people later on down the line to reinvent the wheel is wasteful, both in terms of the time lost in getting new people up to speed and in terms of the resources lost in searching for and hiring new people. But keeping them on the payroll when there's not much for them to do is also a waste of money. The best case scenario—the one which demonstrates the most flexibility and creates the least inefficiency—is one in which those people working on enrollment can transition into the classroom once enrollment ceases to be a problem, and can transition back if the problem recurs.

Or, to turn the problem around somewhat, we might imagine a scenario in which a school responds to an enrollment problem by transitioning some of the faculty out of the classroom to work on the issue, and then back into the classroom once the problem is solved (and back again if it recurs). The fact is that universities are already, without doing any outside hiring, chock-full of experts in a wide variety of fields that are relevant to the administrative challenges universities face. Sociologists, statisticians, accounting experts, communication experts, multi-media experts, computer scientists, to name just a few--not to mention the huge number of people across all disciplines highly trained in complex problem solving: the university employs faculty whose expertise equips them to deal with everything from budgets and enrollment problems to internal communication channels and information technology. So why, when problems arise, does a school need to hire administrators to find a solution, when, without hiring anyone new, it could draw on the expertise of its own faculty to fix the problem?

So here's a proposal (that is by no means a new idea): follow the lead of Iowa State University in paring down the administrative ranks and pouring that money into faculty lines (which, as it turns out, does not in fact lead to armageddon). When problems arise that mean fewer students at the school, faculty with expertise in the relevant disciplines can leave the classroom for a while—since there are fewer students to teach—and instead allocate those hours to administrative work focused on solving the problems at hand. When enrollments go back up, the full-time faculty are already there to teach the increased number of students. No one loses a job. No one is superfluous. People move around to deal with the needs of the institution. This is real flexibility. (I think this is also what used to be called "faculty governance.")

It may be that some faculty might object to having to re-allocate more time to administrative work, rather than focusing exclusively on the work that usually gets us into the profession in the first place—research and (in some cases, "or") teaching. But, as I argued in a previous post, we can't hope to make the university the place it should be—a place where students can learn and a site for the production of knowledge—if we don't take an active role in the shaping and running of our institutions. Expanding the concept of faculty from a "teacher-scholar" model to one that encompasses all aspects of our job—that is, to a "teacher-scholar-admin" model—would give faculty back control over the university. At the same time, it would reduce the need for single-function, highly-paid administrators, freeing up money to expand the faculty (and stop relying on poorly-paid contingent positions that undermine the value-for-money our universities supposedly provide). 

This is not to say that all faculty should be able to do all things: we have our unique skill-sets and specializations that make us better suited for some tasks than for others. (I am probably not someone you want to bring in on a budget problem, for instance; but if you need someone who can conduct research into how people read university websites and rewrite copy for our own site, I can do that.) But, following the Alton Brown principle, we should look to hire faculty who show promise at multiple kinds of tasks across all three parts of the job—teaching, scholarship, and service/admin. And we should recognize that most of the faculty at our universities (including—perhaps especially—contingent faculty) already fit this profile, that we are capable of taking on a lot of the problems for which our universities currently hire out. Having a faculty that can move in and out of the classroom, research responsibilities, and administrative responsibilities as need dictates would be more cost-effective and reduce "clutter" in the university. It would provide tremendous flexibility without the ridiculous inefficiency of the cycle of hiring-and-firing. And it would mean that administrative decisions would be made by those who carry out the mission of the university: the faculty.

Sunday, May 25, 2014

Upstairs, Downstairs in the Academy

I've been deeply gratified lately to see increasing noise in the media about administrative salaries--and particularly presidential salaries--in higher education. For a long time, it seemed like the conversation about student debt and sky-rocketing tuition was focused entirely on faculty salaries, as if tenure-stream faculty were all living lives of outrageous luxury, sipping cognac from snifters in the fancy libraries of our old Victorian mansions. As someone who makes less than $60k (base salary) per year in a discipline where that's just about the average for my kind of institution and rank, I've found the blame put on faculty salaries a bit galling. (For the record, I do not have a Victorian mansion, and my "library" is a small room filled with Ikea bookcases. I don't think I even own a snifter.) What is starting to come to light now is what many faculty have known for years: universities don't have more overhead these days because of growth in the faculty ranks, but because of explosive growth of administrations and administrator salaries. Given their spending priorities, it seems like these new administrative behemoths are badly out of step with the mission and purpose of the university--at the expense of those most vital to the university's existence: faculty and students.

When confronted about the large sums paid out to high-level administrators, boards of trustees and PR departments always say the same thing: if you want to get the "best people" for the job, you have to pay out. It seems to me that there are two problems with this logic. First, this implies that the "best people" for the job of running a university are also the kind of people who will pursue the biggest pay-day, or who at the very least wouldn't consider doing the job for less than half a million dollars a year (before bonuses and other perks). Maybe this is crazy, but I'm not sure someone looking to get rich running a non-profit really understands what non-profit work is about. The fundamental mission of the university is about service: students and faculty come together to learn and to produce new knowledge for the benefit of our local, national, and global societies. While I won't pretend to understand everything that goes into running a university, I don't think it's unreasonable to expect that the "best people" for the job would have a deep understanding of and appreciation for this kind of service, and would therefore be happy to be paid less for the opportunity to help such a beneficial institution thrive. Faculty, after all, have made just this sacrifice. We say all the time, "I didn't go into this for the money." Relative to our educations, most of us are underpaid--the majority of us radically so. But those of us in the minority who are paid a livable wage think the lower salary is okay because we believe in the work we do. My goal isn't to make a lot of money, but to make a difference in the lives of my students, so as long as I make enough to live and to get some modest enjoyment out of life, I'm happy to be paid less than my "worth" to be able to do what I love. Dedication to the mission of the university should matter more than making a lot of money. That should be at least as true at the top as it is at the bottom.

But we can turn this logic around, too, and find a second problem in the "you get what you pay for" justification. I might be willing to accept the argument that universities looking for top talent to run their schools need to be willing to pay top-dollar, if that argument were also applied to the people who actually execute the mission of the university: the faculty. The faculty are the ones who teach students--which is ostensibly why tuition-paying students attend university in the first place--and we are the ones contributing to the production of knowledge in our fields. If salaries were determined by a motivation to get the best people to help the university thrive, then every institution should be in a salary-war with every other to get and retain the best faculty they can find. If you get what you pay for, and if universities are interested in providing a top-quality education to students, faculty salaries should be inflating at the same rate as administrative salaries. Instead, the opposite has happened. Among tenure-stream faculty, salaries have stagnated, with last year being the first year in the last 5 in which average faculty raises have outpaced the rate of inflation. More importantly, university administrations are paying considerably less on average per faculty member than they used to, because, instead of responding to increased student enrollments with a proportional rise in the number of tenure-track faculty lines, they have instead opted to have classes taught by part-time faculty getting paid poverty-level wages. Nowhere is this more true than at universities with the highest-paid presidents.

I am of course oversimplifying somewhat, ignoring "market realities" that drive down faculty salaries because of the large numbers of PhDs looking for faculty positions. But it's one thing to say that a university can hire an excellent professor of Latin literature at $59k per year, and another thing entirely to suppose that you can get the same quality instruction paying someone with the same credentials $3k per course, which is what happens when administrators look to free up some cash and "create flexibility" by hiring more and more part-time, contingent faculty. This is not to say that adjuncts are worse teachers than tenure-track faculty--they aren't, due to the extreme dedication and benevolence of educators--but that the university can't really expect to hire and retain "the best" faculty if they're going to pay them poverty-level wages. Yet this is what they do to half of their faculty.

So it seems like there are conflicting logics at work. On the one hand, we're told that universities need to pay upper-level administrators very large sums of money in order to attract the best people to do the work of maintaining and finding ways to improve the university. On the other, these "best people" turn around and hire faculty as cheaply as they can possibly manage. So when students come in, apparently they are getting top-rate administrators, but bargain-basement faculty. Which might be okay, if the students were coming to our institutions for the administrators. But they're not: they're coming here to get an education, which comes from the faculty. And most of our students are paying a lot of money--and going into a lot of debt--to do so. How is this model of paying administrators as much as possible and faculty as little as possible a responsible allocation of funds?

Of course, the ones who decide how to allocate funds are themselves administrators (with a certain amount of oversight from boards of trustees--boards which too often have no faculty on them and which tend to be well-connected with administration). Since apparently we believe in hiring administrators who want to make a lot of money, it's perhaps unsurprising that administrators give themselves the largest raises. But these spending priorities aren't simply a matter of greed: they reflect the tendency among administrations to treat themselves as more important to the existence of the university than faculty. At one of the institutions I've worked at, the school once hit a "crisis" of unexpectedly low enrollments that led to a substantial budget shortfall. The administration's response was to freeze all faculty hiring, which seemed reasonable enough; but at the same time they created three or four more administrative positions to "help deal with the crisis." That institution is hardly the only university where this happens. The administrator-brain thinks that the solution to a problem of enrollment is fewer (comparatively inexpensive) faculty and more (comparatively expensive) administrators: this tells us what we need to know about their priorities, and what personnel they deem most important to the continued existence of the university. And therein lies the real problem.

Parents and students should be angry about this, because outrageous administrative salaries correlate with the rise in student debt. Thanks to increasing media coverage of this correlation, perhaps the anger will come, and universities will be pressured into administrative reform (at least at public institutions). But at the end of the day, the money is one symptom of a larger problem, which is that the people running our institutions of higher education don't prioritize education--quality education, not "how many kids can we get to pay to sit in one classroom and how little can we pay the instructor who stands in front of them?" When I once described to a friend outside the academy my frustration at my administration's refusal to involve faculty in important decisions, my friend connected administrative attitudes to the general tendency in the U.S. to paint educators as lazy and incompetent. "It must be hard," she said, "when your own bosses treat you like that." Well, yes, actually, it is. As faculty, we work hard to create environments in which students can learn at a high level, and to participate actively in our fields so that we are able to provide our students with cutting-edge information and insights. When our work is devalued, not just by our social world, but by our own employers, it can be difficult to find the motivation to keep working hard. We keep going because we care about our students and about our fields; it would be nice if our administrators shared that dedication with us.

Thursday, May 22, 2014

Blue Collar Academia

The reality of working in the academic world these days is that, more often than not, the rumors and announcements trickling out of administration buildings are bad news. Budget cuts, enrollment drops, hiring and salary freezes--I've been in my current position only 4 years, and it seems like the sky is perpetually falling. Last week was no different: a significant drop in freshman deposits from the previous year, and with it the usual rumblings of Bad Things to come. It used to be that these announcements filled me with terror and dread. What kind of job did I have, and what kind of institution did I work at, that the winds of change could shake us so terribly? Do I need to look for another job? At my previous job, I was always looking for higher ground, because it seemed to me that the position I occupied would soon fall out from under me (not, as it turned out, an irrational fear--my previous employer just announced a bunch of tenure-track firings, including 11 in the division I used to work in). I went on the market every year in search of an elusive "good job." When my present institution offered me a position, I took it because, though I wasn't sure if it was a "good job," I was fairly certain that it was a "better job." But soon after, the intimations of dark days ahead began again. Initially, it panicked me: I'd moved to higher ground, but maybe it wasn't high enough? As time has gone on, however, I've started to recognize that my expectations for academic work simply didn't match the reality. The "good job" wasn't elusive: it was illusory. What I wanted for a job simply didn't exist anymore (if it ever did). More importantly, I realized that what I wanted was ultimately not very desirable in the first place--in fact, the illusion may be contributing to the worst of the problems in higher education.

As a graduate student setting out for the wasteland of the job market for the first time, my image of what it meant to be a professor was fairly narrow. I assumed that being a professor meant teaching your classes (in your field, of your own design) and doing your research. The proportion of time dedicated to each pursuit would differ based on the institution, and at some places there would be more or less funding for research support, and more or less "service" teaching, more or less release time from teaching obligations, etc.; but basically I would be able to do the same things I'd done in graduate school--which is to say that I'd be able to do my own thing. While I had a vague understanding that I would also have service responsibilities, my sense was that these were just an undesirable side-effect of the profession, obligations no one really wanted to do but that simply came with the job. I wasn't averse to the idea of service; I just didn't understand it as a real part of the job. The job--the real job--was engaging with the world of ideas with students and, via research, with colleagues. Like a lot of academics, particularly in the humanities, I was attracted to the job partly because of the independence it offered: I would be master of my classroom, and I would direct my own research, and--outside of scheduled teaching time--I would dictate my own hours. If I could get work as a professor, I would be an island, entire of myself. And if I could just get to tenure, no one would ever be able to take that away from me.


Thus when I was lucky enough to land my first tenure-track job, I was distressed to discover how deeply endangered this way of life had become, and how intent administrations seemed to be on hunting the tenured professor to extinction. At the time I was unaware of the statistics--that, for instance, nation-wide only about one-third of faculty are tenure-track--but it was clear to me that, at least at the kind of tuition-dependent private university I found myself at, administrators had no interest in the world of ideas, and were determined to make their faculty do as much teaching and service as possible with as little salary and research time as possible. I realized quickly that the freedom I'd expected to enjoy was extremely precarious, that administration could change the terms of my employment at pretty much any time (indeed, I found out later that I'd come very close to losing my job when the 2008 crash happened and admin seized the opportunity to take some faculty lines back from the liberal arts). All of this indicated to me that I had landed in a "bad job"--though I recognized that there were worse out there--and that I needed to start looking for something better. What I discovered when I changed jobs, however, was that even "better" schools face many of the same challenges (this is admini-speak) as the less well-ranked. Dependence on tuition for revenue meant our fortunes would rise and fall with enrollments; and, as I spoke to colleagues at other schools and read more about the state of the academy in the U.S., I learned that the tendency toward administrative bloat, which tilts university budgets away from instruction-related spending (i.e., away from spending on faculty and academic departments), is a national problem that the majority of universities are facing.
The threat of salary freezes, of increased teaching loads, of diminished department and program budgets, of shrinking tenure-track lines and increased casualization: these are the daily reality for American university faculty.


What I wanted was to find a job where I didn't have to worry about my position, where I could simply focus on my teaching and research and not worry about how the money was managed or where the students would come from (isn't that what administration is for?). What I found was that everything I want to do, as a scholar and as a teacher, is under attack pretty much everywhere, due in large part to the unsustainable structure of modern higher education. There are no places (or very few) at which I would be "safe" just to do my own thing. But as I came to this realization, slowly another started to emerge, as well: that finding a "safe" position was the wrong thing to want. That, if quality teaching and scholarship are under attack at the American university, then part of my job should be to defend those things, to work to make the university--or at the very least my university--a place where those things are supported. Furthermore, if the causes of these attacks on quality teaching and research are administrative bloat combined with (and intimately connected with) the increasing corporatization of the university, a model that treats students as customers to be bilked of their funds (with the collusion of the government student loan system), then no one--not me nor my senior colleagues nor the new faculty majority working on contingent, mostly part-time contracts--will be "safe" unless all of us work to reverse these trends.


The problem with my original concept of the profession, I realized, was that no part of it was devoted to "defending the guild," as my spouse likes to say. I expected that "the university"--a vague entity given life by my naive faith in its "higher purpose"--would simply make a space for me to work, that I could study literature and help other people do the same, without me ever needing to participate in the actual functioning of that entity, without me needing to take responsibility for keeping it alive. I expected the university simply to function the way I thought it should, without me or my colleagues having to do much to make it work that way. This isn't just a bad way to think of a job: it's a way of thinking that, if it's widespread, perhaps accounts for why American universities have fallen on such hard times. If we don't think it's part of the job of a professor to fight for the university--the university as a place for education, exchange of ideas, production of new ones, and participation with others across a range of interests and backgrounds in whatever projects we can imagine--then it's perhaps unsurprising that people with other ideas about what the university should be (specifically people who see it as a business) have taken over.


Part of what caused me to rethink my notion of what it means to be a professor has been my increased involvement in what had always seemed like the throw-away part of my job: service to the campus community. The experience of this kind of work is very different from what I was trained to believe. When I was a graduate student, my mentors rarely talked about service as part of faculty life, and, when they did, it was overwhelmingly in a negative context. When I was a new faculty member, my senior colleagues represented service in the same light. Bitching about committee work is a surer mark of the (tenure-track) professoriate than corduroy jackets and glasses. To an extent, the bitching is justified: much committee work is inefficient, and sometimes by design--administrations seem to love forming committees just to be able to ignore their recommendations. But even in our way of describing service work--that is, as "service"--our profession seems to denigrate participation in the administrative side of academic life. Teaching serves students; research serves our larger community of scholars. Yet participation in the decision-making processes of our institutions is singled out as "service," as if this were the menial, laboring part of our job, compared to the more noble activities of teaching and scholarship (we even call the teaching we don't like to do "service teaching"). I remember, as I was heading off to my first department meeting as a tenure-track professor, senior colleagues rolling their eyes at me: "Now you get to see the fun part of the job," one commented sarcastically. What they didn't understand, however, was that I was excited to be at a department meeting, because it was the first time I would have a voice in deciding how things would work in my workplace--at least in the limited context of the department.
I've seen this scene repeated every time a new faculty member arrives (and I may, I'll admit, have been guilty of playing the role of the jaded senior colleague once or twice myself). Worse, I've seen tenured faculty members telling non-tenure-track members of the department that they're lucky they don't have to--or can't--go to meetings, as if it's a privilege to be entirely disenfranchised in one's workplace. (We should start calling service the "tenured-professor's burden.") The overwhelming tendency among tenure-stream faculty, at least in my experience, is to treat service as a waste of our time and "beneath" us. The prevailing attitude is that this is not our real work.


Yet this is precisely the work that is most important for faculty members to engage in at this point in the history of the American university. By now it should be obvious that we cannot leave the running of our universities to administrators, boards of trustees, or legislatures. If we want the university to be a place we want to work, we are the ones who have to make that happen. The good news is that, if we change our attitude about this kind of work, we are more than up to the task. What I've discovered from committee work is that, while sometimes time is wasted and solutions are poorly implemented (or just ignored), generally speaking when you put a bunch of smart people in a room to figure something out, they are in fact capable of figuring it out. And I expect that we'd get better at it with practice (and perhaps by using our well-developed research and learning skills to find out and teach ourselves a bit about negotiating, collective decision-making, and administration). This is work we can do, rather than letting those with a more business-like model of the university do it for us.


This is also work that requires us to understand ourselves not as little islands of intellectual productivity and contemplation, but as part of a larger community working together for the benefit of students and of our social world. This means developing a sense of solidarity with the others working towards the same goal--which includes not just tenured or tenure-track faculty, but also students, adjunct faculty, our administrative assistants (who often do most of the meaningful work of keeping our departments running), librarians, janitorial staff--everyone who does the work on the ground of helping students to learn more about their world and of supporting the production of knowledge. Having this sense of solidarity means getting away from the notion that some of the activities of the university are "beneath" us. I have known tenured faculty who could not be bothered to copy their own syllabi or to fill out their own post-conference paperwork, who expected others to take care of these things so that they could focus on more "important" work. This is the ivory-tower mentality. I used to think that this generalization about the profession was bullshit (I function very well in the "real world" despite having a Ph.D., thank you very much), but as I've come to understand why the university system is in so much trouble, I've also noticed that we tenure-stream faculty tend to avoid grappling with the reality of our own working conditions. Our ivory towers don't keep us separate from the rest of the world; we locked ourselves in these towers to avoid dealing with the rest of our own institutions, leaving it to the people in the castle below us to make sure that the towers didn't crumble. 
When we hold ourselves apart from all the non-academic activities of the university in this way, we both let others suffer for our sakes (for instance, allowing an army of precarious, contingent faculty into the servants' quarters to do our "service" teaching), and we fail to guard the gate against those who want to raze the castle and build an industrial park in its place. If we want to start to undo the damage and rebuild, we have to come down from these decaying towers, roll up our sleeves, and stand shoulder to shoulder with everyone else toiling in the day-to-day of our institution's functioning.


My point here is not that we tenure-stream faculty need to involve ourselves in more committee work, though committee work can be worthwhile to the extent that committees are empowered to make and implement decisions. We might instead look to the example of adjuncts, who have taken it upon themselves through unionization and information campaigns to fight the casualization of our profession. We can also look to the examples set by many students, such as those at the University of Southern Maine, who have likewise organized themselves and taken their complaints to administration and to the trustees with enough noise and enough numbers that they cannot be ignored. It takes very little searching to find examples of university members banding together to create positive change within their campus communities, whether it's students uniting against a commencement speaker or faculty uniting against the unilateral decision of a board of trustees. The important thing is that we be willing to do the work, and that we be willing to do it together.


If being a tenured professor ever meant living a life of leisure, it certainly can't any more. There is work to be done--a lot of it--to save American universities from attacks on education and the production of knowledge. Instead of being tower-dwellers within the university community, we need a more blue-collar ethic, one that doesn't disdain the kinds of work necessary to take back our universities and then to run them the way they should be run, and one that recognizes our place in a community and our solidarity with those community members. As I start to recognize this fact, and as I look around at all the people already engaged in the fight to take back campuses for the purposes of education and knowledge production, I find that I'm much less anxious about the future than I was when I spent all my energies trying to scramble for higher ground (without stopping to think what would happen to the people on the ledges below); and I realize it's because, by recognizing my solidarity with others, I find I'm not alone. For years, I made the mistake of thinking that getting a "good" job meant getting a position that would leave me alone to do my own thing. What I've learned is that being alone is precarious, and that a good workplace isn't simply something that will be given to me: it's something we have to fight for, to create, together.


Tuesday, April 8, 2014

Confessions of a Boss

I'm nearing the end of my first year with new administrative responsibilities that give me oversight of a campus-wide program and the significant number of adjunct faculty and graduate students who teach in that program. As the semester draws to a close, I've found that I'm becoming paranoid about whether I'm doing a good job with this program. The problem isn't that I've been getting negative feedback: the problem is that I get almost no feedback at all, particularly from the people most affected by the decisions I make--the adjuncts and graduate students. A former chair once said to me that the hardest part of her position was that it changed the way people spoke to her--specifically, it made people less likely to do so. The revelation here isn't that people are reluctant to speak to their bosses; it's that this makes it difficult for the boss to do her job. I'm starting to understand now just how problematic the silence that surrounds power--in this case, power over people's employment situations and the conditions of work--can be. And this is revealing to me how such hierarchies, without strong protections in place for the workers, can make for bad managers.

Since I have taken over this managerial position ("administrative," in academic parlance, but that's just a way to obscure the management-worker relationship that is in fact the reality of academic labor just as with all other kinds of labor), I have heard very little from the adjunct faculty I oversee. Though I have tried to emphasize that I see mine as a support role--my job is to help secure for the instructors in the program the resources and support they need to do their jobs well--I rarely hear from anyone about problems they are facing in the classroom. Initially, I didn't understand why I heard from my faculty so rarely, and I was shocked and somewhat appalled when I realized that I'd heard about only a single case of plagiarism in the program in my first semester (statistics tell me that plagiarism is happening; either it isn't being caught, or when it is caught it isn't being reported). But when I stopped to think about why this might be happening, I realized that, however "open" and supportive I might try to be, the structure of my position relative to my instructors makes me all but inaccessible. I've certainly in the past, both as a graduate student and as a junior faculty member on the tenure track, been nervous about bringing problems to my superiors, because I was worried about how those problems reflected on me and my abilities as a teacher. Maybe this rash of plagiarism is somehow my fault; or maybe this student is disruptive because I've mismanaged things somehow. And, even if that's not true, what if my supervisor thinks that's the case? Will my abilities be judged if I ask for help? I've asked these questions from positions with a certain amount of job security: as a graduate student, I knew I wasn't going to lose my fellowship, and as a junior faculty member on the tenure track, I knew that my department would need stronger inducements to get rid of me than a few problems in the classroom (academic job searches are exhausting, after all).
I can only imagine how much more worried I would be about bringing problems to my supervisor's attention were I an adjunct faculty member, able to be fired (or rather, not re-hired) at the end of each semester for any reason--or no reason. The real miracle is that anyone ever comes to me at all.

The precariousness of my instructors' employment situation thus creates a problem that I can't overcome simply by telling everyone that I'm on their side. At the start of my first semester, I made the effort to drop by people's (shared) offices to say hello, but I stopped soon after because I worried I was creating the impression more of surveillance than of helpfulness: "The boss is checking up on me--what did I do?" It doesn't matter how "nice" or well-meaning I may be: I can't escape the fact that one of my roles is evaluative--I'm supposed to make sure that everyone in the program is doing their jobs--and I can't pretend that there isn't a power imbalance that separates me from the people I oversee and that colors every interaction I have with those people. That power imbalance encourages instructors to hide problems from me, rather than seeking out my help. Even worse, it likely silences any criticism my part-time faculty might have of the program, or of my management of it. No one likes to tell the boss she's wrong; but when the boss has the power to dismiss you without cause at any point--regardless of whether she's likely to exercise that power--the inducement to stay silent in cases of mismanagement is going to be overwhelming.

As a result, my contact with the people I oversee and support is limited, which is not ideal. If part of my role is evaluative, more of it is formative and supportive. The goal of my program is to provide the best quality instruction to our students that we can; it's thus my job as manager of that program to give the people I work with the resources to excel at their jobs, to provide useful, formative feedback, and to support them in trying out new things and tinkering with their instructional design. In an ideal world, instructors would leave the program better teachers than they come in, and my job would be to facilitate that. This supportive role is extremely difficult to perform, however, when the power dynamics of the situation discourage instructors from ever showing weak spots. Moreover, if no one can tell me when I make a bad choice, how do I fix my mistakes? How do I get better as an administrator if no one can offer me feedback on the success of my attempts at management?

Frequent feedback is how we identify strengths and weaknesses and thereby improve; it's why the faculty in my program are observed at least yearly. So why is there no structure in place to observe my performance and provide feedback? We say that we want to provide for our students the best instruction possible; but if the quality of instruction is in part tied to the support for that instruction--that is, to the management of the program--then why don't we put the same energy into assessing the bosses as we do the instructors? This lack of oversight for those in administrative roles is particularly galling when we consider that student evaluations remain a common metric for evaluating teaching. We ask people who, particularly in lower-division classes, are just beginning to enter adulthood to judge our performance at a task they have never engaged in and have no training or expertise in; and the bosses think this is a valid assessment tool. So why then are there no evaluations of administrators by their faculty? Why don't we ask grown, professional adults to assess the performance of bosses who are, in terms of credentials and often in terms of experience, their colleagues?

The answer to the question is obvious: people in positions of power don't tend to create mechanisms that might limit that power. But this is a really bad thing. If it's good for the program for me to offer formative feedback to my colleagues working in the program, then it's also good for me to receive formative feedback in return, to find out from my instructors both what I've done right and what I can improve on. If a decision I make is a bad decision, I need to know that, and know why, so that I can rectify the situation and avoid making a similar mistake again. But the current structure makes the part-time faculty much more likely just to grin and bear the weight of any mismanagement on my part, rather than to point out the problem. This is not a recipe for successful management, or a successful program.

My ideal solution to this problem is simply to have a world without bosses, one in which programs are run by the faculty who teach in them, and in which no single member of the organization is granted so much power that they become difficult to criticize. I'm guessing the bylaws of most universities make that an impossibility (though bylaws can always be rewritten), so another solution that fits with the current structure of most institutions is to provide part-time faculty with (1) sufficient job security that they know they will not be subject to capricious sanctions by their boss, so that they can feel safer bringing problems, whether within the classroom or the program, to the boss's attention, and (2) a mouthpiece capable of approaching the boss with the concerns of the faculty without identifying particular faculty. In real-world terms, one way to achieve this solution--one way that adjunct faculty members are achieving this solution--is with a union. A union contract generally requires that, when people are fired, they are fired for cause, and bargaining units get to elect stewards, whose job it is to serve as a liaison between management and employees. As a result, the union structure gives employees the kind of voice and support they need to be able to provide feedback to the boss--feedback that, if the boss is doing her job, helps improve the program by improving the training and support that program can offer to its faculty to help them teach as well as possible.

Recently, Volkswagen came out in support of unionizing one of its U.S. factories, recognizing that it's good for business to enable employees to provide more input into decision making, and that a union structure can enable them in this way. If this is true for a car manufacturer, whose first priority is to maximize profit, it seems to me much more true for educational institutions, where the mission of the institution--to provide an education to its students--is carried out entirely by the people at the bottom of the hierarchy. The people on the front lines know best how decisions made by the bosses affect the students, and affect the faculty's ability to educate students. Being a good boss in this situation means having access to feedback from the front lines; and getting feedback means providing faculty with mechanisms for providing that feedback without fear of reprisal. Silence is--or should be--a boss's worst nightmare. What we want, if we want to provide our students a good education, is to have a conversation that travels the length of the academic hierarchy; and that only happens if there's more power in the hands of the faculty.


Wednesday, March 26, 2014

The Net that Enmeshes Us All


So will I turn her virtue into pitch,
And out of her own goodness make the net
That shall enmesh them all. (Othello 2.3.355-57)

I have been troubled by a recent conversation in which I found myself trying to persuade a more junior faculty member that she should not propose to team-teach because it would increase her workload (since, at our school, a team-taught class wouldn't count as a full class, and there's no way to get a reduction for 1/2 classes). She asked, "But what if the reasons for it are really good?"--in this case, improving the student experience and giving an adjunct faculty member better teaching opportunities. Here was a fairly new faculty member trying to take onto herself the burden of improving teaching and learning conditions at a university to compensate for the fact that the administration has recently taken several steps to raise faculty workload and make teaching conditions for the adjunct faculty less rewarding. It was a kind, self-sacrificing gesture. And here I was, telling her it was a bad idea.

I've had a lot of conversations recently about the fact that education in this country--including higher education--only continues to function, despite continued attacks from governments, private corporations in the education "business," and think-tanks, because of the intense dedication and massive self-sacrifice of most educators. To put it simply: if we all just "worked to the contract," it would all fall apart. If teachers really were as lazy and as self-serving as union-busting politicians want to claim, very few students would learn anything.

To give just one example: recently the IRS set guidelines for calculating how many hours adjunct faculty work per course. They recommended counting all instructional hours and all office hours, plus 1.25 hours for each hour in the classroom. In my program, this means that the IRS thinks that a faculty member works 9.75 hrs/week for each course taught. Speaking for myself, 9.75 hours covers the amount of time it takes to teach, hold office hours, prep for class, and read minor assignments (journals, pre-writing exercises before major papers, etc.). Any time a paper is due, however, 9.75 isn't going to cover it. Given the number of students in a section, multiplied by the amount of time it takes to grade a single essay, a grading week requires 8-12 more hours of my time. I'll admit that I'm a slow grader (I struggle to grade a paper in less than 30 minutes), but I also provide my students with a lot of quality feedback on their work. If I were to teach only the hours the IRS believes a college faculty member works for each course, I'd have to resort to grading the way many of my professors did when I was an undergraduate: a grade plus a two-word comment ("Very nice," "Needs work," "See me," etc.). Or I'd have to stop assigning papers, and instead teach through multiple choice, Scantron exams--which is not a great way to teach or assess the critical thinking and communication skills central to my discipline. Instead, of course, I just work more hours, because I don't see the point in teaching if I'm not going to try to do it well.
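For readers who like to see the arithmetic spelled out, here is a back-of-the-envelope sketch of the numbers above. The 1.25 prep-hour multiplier and the 9.75 total come from the post; the 3-hour course, 3 weekly office hours, and 24-student section are my own illustrative assumptions (one plausible combination that yields the IRS figure).

```python
# Sketch of the IRS per-course hours calculation described above.
# Assumed values (not from the post): a course meeting 3 hrs/week,
# 3 weekly office hours, a 24-student section.

CLASSROOM_HOURS = 3.0    # assumed weekly contact hours for one course
OFFICE_HOURS = 3.0       # assumed weekly office hours
PREP_MULTIPLIER = 1.25   # IRS guideline: 1.25 hours per classroom hour

weekly_hours = CLASSROOM_HOURS + OFFICE_HOURS + CLASSROOM_HOURS * PREP_MULTIPLIER
print(weekly_hours)  # 9.75 -- matches the IRS figure cited above

# In a grading week, the IRS figure falls well short:
students = 24            # assumed section size
hours_per_paper = 0.5    # ~30 minutes per essay, per the post
grading_hours = students * hours_per_paper
print(grading_hours)     # 12.0 -- at the top of the 8-12 extra hours cited
```

The point of the sketch is simply that the grading weeks alone can more than double the hours the guideline credits.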

Or another example: in response to efforts by the school's adjunct faculty to unionize, the president of Point Park University asserted that adjunct faculty don't require access to office space because, he claimed, they aren't required to hold office hours (a claim some of the school's adjuncts dispute). Imagine if the adjuncts at Point Park were then to teach according to the letter of their contracts (as far as their president understands them). Want help on a draft of a paper? Tough, kid; according to the president of your university, your professor's job doesn't involve seeing you outside of class. Of course, regardless of whether it's required, the adjunct faculty at Point Park generally hold office hours (for which, apparently, they are not remunerated); they want to help their students succeed, and that means giving more of their time.

These underestimates and misrepresentations of the amount of work that faculty are required to do attempt to justify the wild underpayment of academia's most vulnerable workers, adjunct faculty. On the other side, administrations continuously raise the workloads of full-time faculty, both tenure- and non-tenure-track, to extract more labor without a commensurate raise in pay. At my institution, expectations for research productivity are higher than they've ever been; at the same time, the administration just raised the teaching load of the faculty in my division. Since the service demands haven't diminished, there are really only two options for the faculty: simply work longer hours (again, without a raise in pay), or find ways to cut corners, to save time in either research or teaching activities. Realistically (since it's hard to cut corners in research), this would mean assigning fewer papers, meeting with students outside of class less, teaching the same courses again and again instead of designing new ones, refusing to take on independent studies, refusing to advise student groups, etc. I don't know anyone on the faculty who has taken this approach. When news of the workload increase first came down, we threatened (among ourselves) to stop assigning papers and to do multiple choice exams, to resign from our service commitments, to stop taking on independent studies: but no one actually did these things, because none of us are willing to sacrifice our students' education for something that is not, in fact, the students' fault. We won't let the sons pay for the sins of the fathers.

And this is what administrations are counting on. They know that our dedication means that we will work harder, that we will sacrifice personal lives and family and leisure, to minimize the impact of administrative decisions on the students. They know that they can undervalue our work while demanding more of us, and that people like my new colleague will find creative ways to do more with less--at the expense of her time and ability to advance at the university. They know, in short, that our love of the work we do (not necessarily the same thing as a love for our "jobs" but too easily conflated) makes us eminently exploitable.

This brings me back to the quotation from Othello at the beginning of this post. The speaker here is Iago, one of Shakespeare's most notorious villains, and he's plotting to use Desdemona's good pity towards a friend to engineer the downfall of Desdemona, her husband Othello, and his lieutenant Cassio. In other words, he will use Desdemona's virtue--her desire to help a friend--to destroy her and everyone around her. If there's a better definition of "evil" than this, I don't know what it would be. Iago is semi-demonic in the play; his early announcement of his malignant duplicity--"I am not what I am" (1.1.64)--travesties God's "I am what I am" in Exodus (3:14), and situates Iago as a kind of Satan-figure in the play. His intention to trap Desdemona and Othello in a deadly net made of her virtue, to take what is good and use it to evil ends, is the hallmark of this demonic nature, and will be reprised in Milton's Satan 60 years later:

If then [God's] Providence
Out of our evil seek to bring forth good,
Our labour must be to pervert that end,
And out of good still to find means of evil. (Paradise Lost 1.162-65)

The theological aspect of this dynamic aside, it seems to me obvious that the will to take what is best in people and use it to hurt them, to "enmesh" them in damaging, exploitative structures that are sustained only by the very virtue they twist and destroy, is the highest and most disturbing form of villainy. It is, as Milton's Satan knew, perversion, an attempt not just to exploit, but to convert virtues like dedication to others into damaging forces--that is, to destroy virtue itself, by destroying its ability to effect positive change in the world.

It is thus not, I think, over-dramatic to say that the fight against the devaluation of the work of educators is a fight between good and evil. When we fight for the support to do our jobs well--that is, when we fight for the resources we need to help our students learn--we are at the same time setting a powerful example to our students by refusing to accept the perversion of our dedication into a mechanism of exploitation. We are fighting to show our students that virtue should be rewarded, that the desire to help others is a worthy--not a worthless, devalued--endeavor.