19 May 2008 - 0:12
In Silicon Valley for the summer
Yes, I know the average time between my blog posts is quite long, but I tend to post longer entries infrequently rather than daily brain-dumps (or anything tending toward Twitter-style updates). Anyway, I’ve been preparing a conference paper as well as getting ready to leave for San Jose, CA. I’ll be working for IBM Research at their Almaden Research Center on a GPFS-related project, Panache. See “Panache: a parallel WAN cache for clustered filesystems” in ACM SIGOPS Operating Systems Review from January 2008 for a basic idea. I’ve never been to Silicon Valley, so I’m excited to see the area.
I’ve spent most of the past seven or so months working on my thesis proposal and preparing a conference submission. On the topic of CS conferences in my area (Systems), I wanted to highlight a USENIX-sponsored meta-workshop I found serendipitously — WOWCS ’08: Workshop on Organizing Workshops, Conferences, and Symposia for Computer Systems. The WOWCS ’08 PC and accepted-author list reads like a roster of seasoned veterans in systems research. Given that, and the improvement-focused aims of the venue, many of the papers detail what some see as “broken” in current systems academic venues (reviews, PC meetings, etc.).
One paper in particular that confirmed some disheartening truths about the nature of conference and workshop reviewing is “Overcoming Challenges of Maturity” by Ken Birman — Ken is an ACM Fellow, well known for his work in systems and networking. Some of his gripes come from his experience chairing SOSP (in 2005), one of the most prestigious (and oldest) systems venues. Ken said,
Overwhelmed by the huge numbers of submissions, most PCs have turned to multi-round processes in which the first-round reviews are farmed out, often to students who may do an erratic reviewing job.
Most of us are learning to write papers in a manner calculated to appear to those beleaguered first-round reviewers. To get into SOSP or SIGCOMM a paper has to survive two thresholds: it must get past the two randomly selected students, and then must get past the six or so PC members who are most knowledgeable about the topic.
a PC chair today assigns some paper to PC member X, who then randomly hands it to students Y and Z, producing completely random reviews from people who have never been a part of the community and who are naturally inclined to be overly critical and to overly favor work in their own areas of interest: our mature researchers have long since shed these flaws of youth.
These excerpts encourage a somewhat cynical view of publishing in such academic venues. Ken points out that the quality of the top conferences isn’t really diminished by these schizophrenic, semi-random first-round eliminations, because enough good papers remain to fill the program. But it’s still somewhat unfair and very frustrating to authors. He also said,
Who hasn’t had papers that were rejected in the first round of reviews at a top conference, with just two reviews, one or both of which seemed almost completely clueless? Who hasn’t expressed anger at the system? Here are two little “factoids” to illustrate the depth of the issue: when I sent out the SOSP reviews, we discovered that in one case, a rejected paper had missed the initial cut on the basis of a review that was clearly written about some other paper.
Anyway, I’ve had pretty good experiences so far with my current primary research work, but some projects I’ve collaborated on as a secondary/advisory participant have received treatment like that (“completely clueless” reviews). Sometimes it could be chalked up to clarity issues in the paper, but other times it left me wondering whether certain reviewers actually read the paper or just the abstract. Oh well… it helps to know that even highly regarded, established researchers experience this sort of thing too.
Tags: Research Content