How we did technical interviews (spoiler: no whiteboard coding)

Dmytro Mishkin
4 min read · Feb 19, 2020


I have been criticizing whiteboard interviews for a while. Now it is time to tell how we approached interviewing in our startup, Clear Research.

First, the setup.

We were based in Kyiv, Ukraine, in 2014. There are lots of talented engineers there, but no university was teaching computer vision or machine learning at the time (now things are changing). Computer vision projects were not widely done in other companies either (this has already changed as well), and we did not have the financial resources to hire super-senior developers.

Yet I needed to quickly assemble a small but talented team of research engineers to iterate fast and build prototypes for our computer vision project. The approach described below allowed me to hire a team of 4 engineers, who successfully built a computer-vision photo-to-shop app.

If you are interested in the startup itself: at the time of hiring, we hadn’t yet figured out what exactly we were going to do. Later we created the photo-to-shop app Madora, which allowed you to buy a variety of shoes, bags and accessories by taking a single photo.

Demo of our photo-to-shop app, Madora

Besides the computer vision, we ran a full pipeline, from buying goods from local producers in China to delivery in the USA. Clear Research doesn’t exist anymore, as we failed at the business side of it.

So, how did I, and later we (the whole team became involved in the process), run interviews with candidates?

After the first 15 minutes, during which the candidate talks about themselves, we proceed to the main stage:

  1. We play a “client” who approaches our company and wants us to build some computer vision project for them. The “client” gives a brief and somewhat vague description of the task.
  2. The candidate may, and should, ask the “client” any questions they want: about limitations, purpose, time and money budget, whatever. We (the “client”) answer directly if we can, or give some direction and explain why we cannot be more specific.
  3. After the candidate is done with questions, they have an hour, two, or three (no limit from our side) to think, draft, google, and do whatever they want as final preparation.
  4. Finally, the candidate presents how they would tackle the problem (the more alternative approaches, the better), along with the milestones our company should deliver, a time and resource budget, the motivation/justification for each approach, and so on.

Important details:

  • We make sure that the “task” the “client” requests is not something you could simply git-clone, but a research topic: unsolved in the general case, although several approaches to narrower versions of the problem surely already exist.
  • We present the candidate with several different options for the topic, in order to adapt to their prior experience, e.g. multi-modal matching, captcha solving, etc.

What we were looking at:

  • The questions the candidate asks. Use cases? Datasets? Limitations? How to measure the quality of the solution? And so on.
  • If they googled, where did they look? Papers? Blog posts? Tutorials?
  • How realistic would the proposed approach be to implement? Does it sound technically reasonable? Does the candidate actually understand how the methods they propose to use work?
  • If not (and it is OK if they acknowledge it themselves), were they able to quickly grasp the advantages and limitations of the method?

Why was it OK for us if the candidate did not fully understand the method they proposed? Because the optimal solution for some task might be something the candidate has no experience with yet, both at the interview and at the real job itself.

And as we are talking about an unexplored area, it is much more important to be able to learn new things quickly than to be experienced in some specific area, which might become irrelevant or obsolete.

What we didn’t consider as very important:

  • Proficiency in specific programming languages, as we were not tied to any particular one and used whatever was suitable (we ended up with a mix of C++, CUDA, Python and PHP).
  • Knowledge of specific algorithms.

Why didn’t we do coding interviews?

  • First, no coding interview doesn’t mean we didn’t check anything related to coding. We asked about previous experience, difficulties and so on, so people who had never written a line of code in their life were easy to detect.
  • Second, and probably the most honest reason: I believe the test setup should be as close as possible to the actual work we will be doing. Anything I could think of as a coding-interview task was quite far from what we did in practice, be it experimenting or writing a fast, custom C++ library for similarity search. Moreover, in real life we code with the help of documentation, google and stackoverflow. None of these things really fit a time-limited and stressful interview setup.
  • At the same time, the approach I have described is similar enough to the things we were actually doing, except for the implementation itself. It is also quite an uncommon format, so we did not need to be afraid of people who are trained to pass coding interviews but nothing else.

To sum up, this was more a result of my personal beliefs than some “scientifically grounded” or widely accepted opinion.

It worked quite well in terms of precision (I don’t know about the recall). Out of the 5 people we hired, I was happy with the performance of 4. The last one gave us a rather mixed impression, so we agreed on a trial period, which cleared things up.

I hope that such an approach might be useful for others. See you later :)

P.S. Thanks to Amy Tabb, Jeremy Howard, Sergii Shelpuk, Olha Mishkina and one anonymous friend for reading the first versions of the post and their feedback.
