
Thoughts on University Recruiting, Fall 2019

October 19, 2019

Author: Bianca Yang
Email: ipacifics@gmail.com

I have now completed all three university career fairs I signed up for this year: UCSD, Stanford, and Caltech.

My coworkers felt like UCSD did not yield great candidates. Most of the people coming to talk to us were interested in doing machine learning or data science, roles we are not currently hiring for. We are looking for software engineers at the new grad and college internship level. I did see a few strong grad students from UCSD, though, people who did impressive ML work during their internships or their research and were now looking to broaden their software engineering expertise. This was a massive fair, even with the 1-hour shift system the university imposed on students. We had a line for a good portion of the four hours.

My coworkers, who had mostly gone to Berkeley’s fair, felt that Stanford was higher yield. This is likely because we went to the CS-specific fair at Stanford. It’s a far more expensive fair ($24k vs. under $1k for the other fairs), so it’s unlikely we got our money’s worth. Stanford’s fair was very small, maybe only 40 companies total in the tent. Nvidia had a massive line for the whole five hours of the fair (we were right next to them). Bloomberg did, too. We never had a line. In fact, we spent a lot of time standing around, waiting for people to take an interest in our “AI-Driven analytics” poster. I poached a few people from the Nvidia line, but that wasn’t super effective because of the mismatch between the backgrounds and interests of the people in that line and the kinds of people we were looking for.

Caltech was the most draining fair I went to. I don’t know if I was tired of doing career fairs at that point or if it really was a wasted effort. The career fair felt much sparser than usual. When I was a student, the career fair was packed to maximum capacity; I had to squeeze and elbow my way through hordes of people in every row of tables. This year, there was lots and lots of space to spare. The students we saw were either freshmen and sophomores, too young for our internship -> full-time pipeline, or simply not very impressive. My suspicion is that many students have already received offers with expiration dates in November. If they interviewed with us now, they’d probably have to give up the original offer and begin the job hunt in earnest, not a fun position to be in when you have Caltech levels of work to get done.

I appreciate that my company gave me the opportunity to be on the other side of the hiring table this year. I am still highly uncertain about what I should be looking for in candidates. I’ve seen some pretty good candidates, some pretty bad candidates, and some “maybe ok, maybe not, does anybody really know” candidates. Even the pretty good candidates I hesitate to rate as such, because I don’t trust my instinct. Instinct is the key word here. People don’t have good metrics by which to judge talent. Even if they have heuristics, they don’t apply them consistently. The result is an arbitrary evaluation scheme where people can get rejected for flubbing one small thing, like “smart pointers”, or “they didn’t indent with 4 spaces”, or “they don’t follow the same OOP principles as me”, which are utter BS reasons to turn a candidate away. I’m not sure the examples I just gave are a systemic issue, because most people tend to be pretty fair in their evaluations. What is clear, though, is that people do not normally have good rubrics for candidate evaluation, which means evaluation is inconsistent and sometimes arbitrary. (Update 10/24/19: This issue for sure extends throughout a company. If hiring depends on engineers individually coming up with good criteria for judging a candidate, there is certainly no top-down direction about what kind of candidate matches the culture, technical aptitude, areas of expertise, and so forth.)

There was also no top-down direction about the kinds of people we want to be hiring. The assumption seems to be that we are smart engineers who can figure out some good-enough way to evaluate whether other engineers are smart. The assumption also seemed to be that any curation of the candidate pool can happen once candidates get their offers; the people doing the groundwork at these fairs just need to build a big enough pipeline to make the downstream work effective. I think this is a pretty reasonable policy.

One thing that bothers me about recruiting is the tendency of recruiters to approach hiring the way colleges approach admissions: apply lots of broad, dumb heuristics that will get you safe, homogeneous hires but will likely miss out on great, eccentric talent. (I admit I don’t have particularly high respect for recruiters. That’s partly because they don’t have tech backgrounds and thus don’t seem to have the chops to think deeply about what great technical hires look like. They also often don’t seem to know much about what it’s like to be an engineer or what engineers really care about. And they’re often called upon to stand between a candidate and the engineers or managers, the future coworkers whose opinions, management styles, visions, etc., matter more to the candidate than salary or fringe benefits do. Do these things matter to other people or am I just weird?)

GPA is the biggest filter that I find meaningless. Stop putting it on your profiles. It’s low information and tells me pretty much nothing about how interesting of a technologist/engineer/whatever-you-want-to-call-yourself you are. More thoughts on what’s important to me, i.e. advice for candidates, here. Classes you’ve taken matter somewhat, but I don’t think they’re a particularly good signal either, because you tend not to do anything particularly independent or interesting in a class. Sam Altman has some pretty good thoughts on this issue. Sam makes a good point about having high success in evaluating people in 10-minute interviews. I haven’t gone through enough examples to say I can evaluate people that well, but I think the company should move toward a model where we get a chance to talk to everybody. Then again, Y Combinator is looking for more of a maverick than we are (mavericks are a bit of a risk to a company that is trying to shift out of “we’re a startup” and into “we’re stable, growing fast, and will be a giant force like FAANG”). For now, we will stick with using HackerRank screens to separate those who are full of hot air from those who have some provable technical ability.

So, what can companies do better in hiring? (Most of these thoughts will feel pretty non-sequitur to the rest of the post.)

Update 10/22/19: ThoughtSpot just hired Bob Baxley as our senior VP of design and experience. Of course, this man is brilliant. I just finished reading the transcript of his talk on design management, where he includes a ridiculously clear outline for how to run on-site interviews. He lays out a schedule for the entire day’s interview and even includes a clear rubric for evaluating a designer’s portfolio presentation. After reading the transcript, I started to wonder why technical interviews don’t proceed this way. Why don’t we have candidates present their portfolio and then work collaboratively with other engineers on a neutral but interesting problem?

The current “best practice”, if you can even call it that, is to use LeetCode or HackerRank questions to filter candidates. I’m pretty sure most people think those questions are of marginal relevance, and yet we ask so many of them over an interviewing cycle (up to 8, if I’m counting 2-3 phone screens plus up to 5 on-site sessions correctly). The fact that the industry thinks we need 8 questions to determine a candidate’s suitability must mean each question is low signal. This approach also largely ignores a candidate’s previous experience and does not do a particularly good job of teasing out a candidate’s passion for building good software; it just tests how well they’ve mastered thinking like a competitive programmer.
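
For concreteness, here’s a sketch of the flavor of question these screens lean on. The specific problem (a classic “two sum”) and the code are my own illustrative pick, not an actual question from our pipeline; it rewards exactly the hash-map reflex that competitive programming drills:

```python
# A classic "two sum" style screen question (an illustrative pick, not an
# actual question from our screens): given a list of numbers and a target,
# return the indices of two numbers that sum to the target.

def two_sum(nums, target):
    """Return indices (i, j) with nums[i] + nums[j] == target, else None."""
    seen = {}  # value -> index where we first saw it
    for j, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return seen[complement], j
        seen[value] = j
    return None

assert two_sum([2, 7, 11, 15], 9) == (0, 1)
assert two_sum([3, 3], 6) == (0, 1)
assert two_sum([1, 2], 7) is None
```

Spotting the one-pass dictionary trick takes the solution from O(n^2) to O(n), and that recognition, rather than any experience actually building software, is what the question measures.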

Update 10/24/19: One argument in favor of Google’s use of coding questions for candidates at every level is that the company needs to maintain a baseline level of coding ability. It’s a fair point, but I don’t think interviewers are necessarily careful enough in selecting the right questions. This may even be a higher-level issue: the hiring manager, or whoever selected the interviewing panel, didn’t give the panel clear enough instructions about what profile to test for. Anyway, a common rubric is the theme I want to push.
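
To make “common rubric” concrete, here’s a minimal sketch of what a shared rubric could look like if it were actually written down for the whole panel. The dimensions and weights are hypothetical, my own invention rather than anything my company or Google uses; the point is only that every interviewer scores the same axes:

```python
# A hypothetical shared interview rubric: every interviewer scores the same
# dimensions on a 1-4 scale, so feedback is comparable across the panel.
# The dimensions and weights are illustrative, not any real company's rubric.

RUBRIC = {
    "problem decomposition": 0.30,  # breaks the problem into tractable pieces
    "coding fluency": 0.25,         # writes idiomatic, working code unprompted
    "communication": 0.25,          # explains trade-offs, asks clarifying questions
    "testing instinct": 0.20,       # probes edge cases before being told to
}

def weighted_score(scores):
    """Combine per-dimension scores (1-4) into a single weighted number."""
    assert set(scores) == set(RUBRIC), "score every dimension, skip none"
    return sum(RUBRIC[dim] * scores[dim] for dim in RUBRIC)

print(round(weighted_score({
    "problem decomposition": 3,
    "coding fluency": 4,
    "communication": 2,
    "testing instinct": 3,
}), 2))  # 3.0
```

A written rubric doesn’t make the judgment easier, but it forces every interviewer to answer the same questions about a candidate, which is exactly the consistency the evaluations I described above were missing.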

I’m in favor of moving toward an interviewing model like the one Bob Baxley has laid out. I’m also highly in favor of adopting the schedule he proposes, where the candidate, not the company, is the bottleneck in scheduling next steps. The only question that remains is: what do designers ask on the phone screen?