I recently interviewed for a company called Rxdata.net here in New York.
The task was a data-ETL problem: transforming natural language text into structured fields. I opted to use pyparsing to build a recursive descent parser that picks out the relevant bits of text.
At that point I was wary of completing the parser for every case found in the database table, so I decided to build it for only a few base cases, which I captured in a unit test. I reasoned I could still demonstrate my skills without giving them much free work.
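For context, pyparsing lets you declare a grammar from combinators and parses it top-down. What follows is only a minimal sketch of that approach, not the actual interview code: the grammar and the "Take 2 tablets twice daily" sig format are hypothetical, since the real data isn't shown here.

```python
# Hypothetical grammar sketch -- the real interview data and grammar
# are not shown in this post.
from pyparsing import CaselessKeyword, Word, nums, oneOf

quantity = Word(nums)("qty")                                   # e.g. "2"
form = oneOf("tablet tablets capsule capsules",
             caseless=True)("form")
frequency = oneOf("once twice", caseless=True)("freq")

# One "base case" of the grammar: TAKE <qty> <form> <freq> DAILY
sig = (CaselessKeyword("take") + quantity + form
       + frequency + CaselessKeyword("daily"))

# The kind of base case you would pin down in a unit test:
result = sig.parseString("Take 2 tablets twice daily")
assert result["qty"] == "2"
assert result["form"] == "tablets"
assert result["freq"] == "twice"
```

Inputs outside the handled base cases raise `ParseException`, which is presumably what Joe hit when running against the full table.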
The solution worked well, and I submitted it to Joe, the CTO, for review. A few hours later Joe contacted me, complaining that the code crapped out when he ran it against their database table. He then offered me a GitHub branch to continue working on the problem. I sent him the nicest e-mail I could muster, explaining that I felt the code adequately demonstrated my skills and that the error was due to the parser not handling all cases in the problem text.
Joe responded with "I am only trying to evaluate your work, not just on how the code looks, which is clean and well organized, but on whether it works correctly on the data as well. I asked advice since the program crashed on the 5th package, which didn't give me enough data to verify, and I wasn't sure if it was just a system-related issue or something quickly fixed."
Lastly, I responded with "Sorry for any confusion, I was just giving you a sample of my work. The system error you mentioned is a result of the code being a sample. The cases that do work are covered in the test."
Naturally, they haven't gotten back to me, even though my code more than adequately met the challenge. Perhaps Joe the CTO did not understand that the remaining cases would be trivial to implement. If so, perhaps I've dodged a bullet; a CTO should know better.
I'm wondering if anyone else has had similar experiences. How did you handle it? How did the interviewers respond?