No idea what Pinterest are using, but I led a team building the same thing using (mostly) commodity search kit in 2008.
Feature extraction was done with standard Java libs (proprietary algorithms though). Queries were initially performed using a vector space model, but I moved that to an inverted index (Lucene) because in our use case the image queries were usually combined with free text and parametric search params.
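To make that concrete, the combined query is basically one Boolean query where quantised image features ("visual words") sit alongside the text and parametric clauses in the same index. A rough sketch below using Lucene's current API (not what the 2008 code looked like, the API has changed a lot since then), with field names like "visual_word", "caption" and "price_cents" made up for illustration:

```java
import java.util.List;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class CombinedImageQuery {

    /**
     * Combine quantised visual features, free text and a parametric
     * constraint into one Boolean query against the same inverted index.
     */
    static Query build(List<String> visualWords, String freeText, int maxPriceCents)
            throws Exception {
        BooleanQuery.Builder q = new BooleanQuery.Builder();

        // Each visual word is just another term in a dedicated field;
        // SHOULD clauses so images matching more words rank higher.
        for (String word : visualWords) {
            q.add(new TermQuery(new Term("visual_word", word)), Occur.SHOULD);
        }

        // The free-text part goes through a normal analyzer.
        if (freeText != null && !freeText.isEmpty()) {
            QueryParser parser = new QueryParser("caption", new StandardAnalyzer());
            q.add(parser.parse(freeText), Occur.MUST);
        }

        // Parametric constraint, e.g. a price ceiling, as a non-scoring filter.
        q.add(IntPoint.newRangeQuery("price_cents", 0, maxPriceCents), Occur.FILTER);

        return q.build();
    }
}
```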
The main issue we faced was scaling search with a large number of query parameters, since a naive implementation created something like 300 query terms for each visual search. We did various things to optimise that, from distributing the index to using index statistics to pick the optimum terms to query. I submitted some optimisation code (a modified MoreLikeThisQuery with an LRU term cache) back to Lucene; not sure what happened to it, I think the JIRA issue is still open.
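The "pick the optimum terms" part is essentially what MoreLikeThis does: weight candidate terms by tf-idf using the index's document frequencies and keep only the top few, so you query 20-30 discriminative terms instead of all ~300. This isn't the patch I submitted, just a sketch of the idea (field name and cutoff are arbitrary); the docFreq() lookups are the natural thing to put behind an LRU cache, since popular visual words recur across queries:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;

public class TermSelector {

    record ScoredTerm(Term term, double score) {}

    /**
     * Score each candidate visual word by tf-idf using the index's document
     * frequency, then keep only the top-k most discriminative terms.
     */
    static List<Term> topTerms(IndexReader reader, Map<String, Integer> termFreqs,
                               String field, int k) throws Exception {
        int numDocs = reader.numDocs();
        PriorityQueue<ScoredTerm> best =
                new PriorityQueue<>(Comparator.comparingDouble(ScoredTerm::score));

        for (Map.Entry<String, Integer> e : termFreqs.entrySet()) {
            Term t = new Term(field, e.getKey());
            int df = reader.docFreq(t);          // index statistic; cacheable
            if (df == 0) continue;               // term not in the index, skip
            double idf = Math.log((double) numDocs / df);
            double score = e.getValue() * idf;   // simple tf-idf weight

            best.add(new ScoredTerm(t, score));
            if (best.size() > k) best.poll();    // drop the weakest seen so far
        }
        return best.stream().map(ScoredTerm::term).toList();
    }
}
```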
Caffe - deep learning feature computation and model training
OpenCV - local + other features
Zookeeper - service discovery (rough sketch after this list)
Cascading - batch processing jobs
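For the service-discovery piece, the usual ZooKeeper pattern is that each server registers an ephemeral znode under a well-known path and clients list that path's children to find live instances; the node disappears automatically when the server goes away. This is just that generic pattern, not Pinterest's code, and the path and connect string are made up:

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ServiceRegistration {

    /**
     * Register this server instance as an ephemeral znode so clients can
     * discover it by listing /services/visual-search. Assumes the parent
     * path already exists.
     */
    public static void register(String host, int port) throws Exception {
        ZooKeeper zk = new ZooKeeper("zk-host:2181", 15000, event -> {});
        String path = "/services/visual-search/" + host + ":" + port;
        zk.create(path, new byte[0],
                  ZooDefs.Ids.OPEN_ACL_UNSAFE,
                  CreateMode.EPHEMERAL);   // removed automatically if the session dies
    }
}
```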
We’ve built infrastructure around some of these libraries to operate at scale. For example, we’ve built an incremental feature extraction pipeline that uses Caffe + OpenCV at its core for the feature extraction (more details on how this works are in the paper).
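Roughly, "incremental" here means keying work by (image, feature version) and only computing what isn't already stored, so a rerun after new images arrive or a model upgrade only touches the missing entries. The sketch below is just that skeleton under my own assumptions, not the actual pipeline; the FeatureExtractor interface and the in-memory store stand in for the real Caffe/OpenCV calls and persistent storage (see the paper for the real design):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IncrementalFeaturePipeline {

    /** Stand-in for the real Caffe/OpenCV extraction call. */
    interface FeatureExtractor {
        float[] extract(byte[] imageBytes);
        String version();   // bump when the model or parameters change
    }

    // Completed work keyed by (imageId, feature version); in a real pipeline
    // this would be a persistent store, not an in-memory map.
    private final Map<String, float[]> featureStore = new ConcurrentHashMap<>();

    /**
     * Compute features only for images that don't already have them at the
     * current feature version; everything else is skipped on reruns.
     */
    public void run(Map<String, byte[]> images, FeatureExtractor extractor) {
        for (Map.Entry<String, byte[]> e : images.entrySet()) {
            String key = e.getKey() + "#" + extractor.version();
            featureStore.computeIfAbsent(key, k -> extractor.extract(e.getValue()));
        }
    }
}
```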