2004
In over 40 years in the computer field, I've screened a whole lot of programmers' resumes, interviewed a lot of people, and hired a bunch. Nearly half my hires were foreign born and educated (none had an H-1B visa). Many of my hires had college degrees, and I didn't hold that against them, but high grades in school, or prestige of the institution, didn't convince me to hire anyone.
The companies I worked at used highly paid experts to screen out inappropriate resumes. I still examined ten to twenty resumes for every one that I chose for telephone screening. About half of these were worth interviewing. About half of the people interviewed got an offer.
One time I was asked to "work the booth" at a Silicon Valley job fair. That was when I saw the real candidate population, unfiltered. I stood behind a table and talked briefly to each candidate in a long line. There were all kinds of nice people. They deserved to have good jobs. But most were light years from being able to work in a software company. One example: a high school shop teacher. Really pleasant person. Anybody who can do that job is organized, mature, and able to tolerate BS. He'd taken a course in BASIC at a community college and wanted to become a programmer. Cool, good idea; but he'd have to learn a huge amount before he could contribute to the role I was filling. I could meet my deadlines with less cost and risk by hiring somebody else. I told him he needed more courses and more experience.
The high-tech companies I know about expect new hires to learn on the job. They want to choose people who'll succeed at this, and lots of people can't. There is a 10 to 1 range in programming ability (that's an old result), and there's another range of 10 to 1 in "performance attributes" like being able to work in teams, making and keeping commitments, communicating clearly and openly, and learning new skills.
At the Honeywell Multics group in the 70s and 80s, as part of a serial interview, we used to say something like, "Let's take a break from talking to people. Why don't you have a seat in this empty office, and write a small program. Use any language you want to. The program can do anything you'd like. I'll be back in about 30 minutes, and ask you to explain the program to me."
Notice that this wasn't really an unspecified task: the assignment was to write a program worth discussing, one that would show off the skills of the candidate, within the time given. For our group, the skill of being able to create new programs and discuss them coherently was the center of the job. We were then designing and building a new system, much larger than anything we had done before, on new hardware, in a high-level language that was still changing. Nobody else was writing operating systems in PL/I, so we didn't want to test for specific language knowledge. Our tools were also unique; we couldn't expect candidates to be able to use them.
The way our process worked was that Multics team members were expected to find a problem, design a solution, code their design, and assure, release, and support their code. We were looking for people with a combination of focused creativity and the ability to produce high quality code. Our programmers didn't have people to design for them; the designers had to code their own designs.
It seemed reasonable, if the job was programming, to ask people how they felt about actually doing some. And sure, it caused interview stress. We allowed for that in our evaluation; but the job was going to be stressful at times too, and we needed people who could enjoy it.
The important thing was not what the candidate wrote, but the account he or she gave of it. People who asked a lot of questions (what should I write? what language?) were probably not what we wanted; they needed too much hand-holding. People who only knew assembly code weren't ruled out, if they were high-level thinkers. People who chose a high-level language and used it for bit twiddling weren't really showing what we were looking for either. And if candidates came up with an interesting but unfinished program, this would lead the subsequent discussion toward their feelings about finishing and shipping. The interesting thing wasn't whether candidates, say, commented their code or not; it was why they made the choices they did and what they could say about those choices. We might ask, "Do you think this program has any bugs in it?" and see what they understood about software engineering.
And you'd be surprised how many people couldn't do it. Couldn't write a simple program and talk sensibly about it. They'd huff, and bluster, and make excuses, and change the subject, rather than actually write some code. "Oh, I think of myself as more an architect than a coder."
(When I interviewed at Honeywell CISL, I was in a funny position, because I had been working on Multics for years on the MIT side. Even though there was tons of my code already in the system, John Gintell still asked me to write a program. My reaction was enthusiasm: I chose to write a small sort program, and we talked about whether there were bugs in it, how it performed for different sizes of input, how it could be generalized and whether it should be, and so on. I enjoyed this part of the interview, though I think my program did have a bug, and when John pointed it out in a kind way, it was stressful. But I recovered, and they did hire me.)
Over the years, we tried different versions of the "programming test." The original version was suggested by our leader, Corby, in the 60s. In later years at CISL, instead of asking for "any program," we asked the candidate to write a small sort program, or print the first prime number greater than an input argument.
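For concreteness, here is the sort of thing a candidate might produce for the second task, the first prime greater than an input argument. This is my own sketch in Python, not anything from the actual interviews; a candidate of that era would more likely have reached for PL/I, BASIC, or assembly.

    # Illustrative sketch only: print the first prime greater than an
    # input argument, read from the command line.
    import sys

    def is_prime(n):
        # Trial division; plenty fast for a 30-minute interview program.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def first_prime_after(n):
        # Return the first prime strictly greater than n.
        m = n + 1
        while not is_prime(m):
            m += 1
        return m

    if __name__ == "__main__":
        print(first_prime_after(int(sys.argv[1])))

The program itself is nothing special; as described above, the value was in the conversation it opened: what happens on bad input, whether there are bugs, how you'd test it, how it performs for large arguments.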
Some companies ask candidates to solve puzzles during an interview. Microsoft is famous for this: they supposedly ask whether the water level rises or falls when you throw a rock out of a boat floating in a pool, stuff like that. Other companies hand you a piece of code and ask what it does, or whether there's a bug in it. There's a good website about this.
After I left the Multics group and joined Tandem Computers, I took a wonderful three-day course about 1983, titled "Hiring Outstanding People," created in-house by Paul Witt. This course had some real intellectual content: besides the usual mechanics of interviewing, EEO forms, and so on, it was based on the idea of analyzing the job to be done into two areas: the technical skills and knowledge needed to do the job, and the "performance attributes" needed. For example, some performance attributes were "creativity," "perseverance and endurance," and "tolerance of ambiguity." Paul provided a list of about 37 attributes, and you could make up your own if you wanted.
The idea was similar to management decision theory: the technical skills were like MUSTs, and we first screened out people who didn't have the objective ability we needed and didn't plan to teach. The performance attributes were WANTs, and we tried to find the best match we could, recognizing that nobody's perfect. To do this second part, we chose the top five or so performance attributes for the job, and for each one, came up with specific examples of how it applied to the job in question. So, for instance, for a QA job I was trying to fill, I might have taken each of the top attributes and written down a concrete example of how it would show up in that particular job.
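As a toy sketch of that two-step screen (entirely illustrative: the attribute names come from Paul's list quoted above, but the MUST items, weights, and scoring are invented here, not my actual criteria for that QA job):

    # Toy illustration of a MUST/WANT screen; the MUST items, weights,
    # and candidate fields are made up for this sketch.
    MUSTS = {"can design and code a solution", "has shipped production software"}

    # Top performance attributes for a hypothetical QA opening, with weights.
    WANTS = {"perseverance and endurance": 5,
             "tolerance of ambiguity": 4,
             "creativity": 3}

    def screen(candidate):
        # candidate["skills"] and candidate["strengths"] are sets of strings.
        # Reject anyone missing a MUST; otherwise score the WANT match.
        if not MUSTS <= candidate["skills"]:
            return None   # screened out on the objective requirements
        return sum(weight for attribute, weight in WANTS.items()
                   if attribute in candidate["strengths"])

The point of the sketch is only the shape of the process: an all-or-nothing filter on objective skills first, then a best-match comparison on attributes where nobody scores perfectly.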
The course also taught us a technique called "Behavioral Interviewing" that helps an interviewer ask questions that produce rich answers. Using the performance attributes relevant to the job, Paul showed us how to ask questions that would evaluate a candidate relative to those characteristics. For example, instead of asking, "Can you get along with others?" I could ask "Tell me about a time when someone didn't agree with a proposal of yours." After the candidate answered, I might follow up with, "Tell me more," "Then what?" "Did you take any action against him?" "What did you write to his boss?" And so forth. An interviewer can learn a lot about a candidate quickly with this kind of questioning. Asking about specific events in the past is useful because it focuses the discussion on actual behavior, not intentions; one can also ask former colleagues about the events during reference checking.
Incidentally, one thing I learned in Hiring Outstanding People was that the kind of "programming test" that Honeywell CISL used is risky. A candidate who didn't get hired might sue. And the only kind of "test" that the courts say you can give is one that is both objective and job-related, and has been validated by academic studies, something like the old IBM programming aptitude test. (Well, I think this is dumb, but the HR people were very definite about it, so I didn't give programming tests when I was hiring at Tandem.)
Tying the questions to the performance attributes helps ensure job-related questions that can be justified. In practice, I concentrated on skill and knowledge questions during my phone screens. If the candidate looked promising, I would decide during the phone call whether to interview. If I decided to bring a candidate in, the team had a group discussion ahead of time and divided up areas of questioning: some interviewers would take one or two performance attributes, others would concentrate on selling the company, and so on.
One rule of thumb at Tandem was stated by VP Jerry Held: "hire better than yourself." This rule made sense in a dynamic, growing company environment where people had lots of opportunities.
After the interview, I tried to debrief the interviewers right away, either in person or via mail. If the candidate looked good at that point, then I did reference checking, talking to former co-workers and asking similar behavioral questions. One thing I learned the hard way was not to hire if I had the slightest doubt about a candidate. Dave Mackie at Tandem stated an even stronger rule, known as the "Mackie Test." He suggested the manager ask, when deciding whether or not to hire, "If I make an offer, and this person turns the job down, will I be disappointed?" And if the answer is no, then don't offer.
As time went on at Tandem, my group began to get tired of the process: we were spending a lot of time interviewing people, and it was disappointing if an interviewer talked to somebody, gave a positive report, and then found that the person wasn't hired, or gave a negative report and found the person did get hired. So I began to beef up my phone screening, and to trust my own evaluation more: I would meet the candidate informally, say for lunch, and decide whether to hire. If I decided yes, I'd bring the candidate in for half a day to meet the senior team members and management, and unless one of them disagreed strongly with me, I'd make an offer.
In the 90s, some social scientists studied interviewing, to see if behavioral interviewing really worked. They had interviewers write down their initial impressions of people and then go through an interview process, and then write another assessment. Some interviewers were trained in behavioral interviewing and some weren't. What they found was unexpected: almost all of the time, people didn't change their assessment from their initial impression, no matter what interviewing technique they used.
As described in the May 29, 2000 issue of The New Yorker by Malcolm Gladwell in "The New-Boy Network: What Do Job Interviews Really Tell Us?", researchers at Harvard and the University of Toledo found that people size up strangers in just a few seconds, and rarely change their opinions after that.
If "it's all over in 30 seconds," then spending a lot of time interviewing people and stressing them isn't contributing to the decision process, though it may have other purposes within the team.
In more recent jobs, I've tried to streamline the hiring process by using this observation. Analyzing the job and knowing what I want is still important. Checking on objective skills and checking references is still necessary. But the interviewing process itself can be speeded up, perhaps by having the candidate meet people for shorter interviews.
Joel Spolsky has written a great article about how he interviews programmers.
Bill Venners collected good advice on hiring from several senior folks.
Copyright (c) 2004 by Tom Van Vleck