We're looking for a COMPUTER VISION ENGINEER to join our world class CV team.
Job posting and details here: http://grnh.se/emdq87
Placemeter is building a real-time data layer measuring activity in the physical urban environment, like how many people are walking through an intersection or how fast cars are speeding down your block. We use computer vision at massive scale, on a large number of rich and ubiquitous video feeds, to understand what is going on in the physical world in real time. We measure how busy places are, what people do, how fast cars go, and much more. We offer that data to developers, citizens, cities, and retailers, radically changing the way they interact with the physical world.
We built our platform around privacy. We never store any video and we do not identify people. We also make sure no one can reverse-engineer our data to identify anyone. We are backed by top NYC & Silicon Valley VCs, are alumni of TechStars (Spring 2013), and are actively plugged into its vibrant ecosystem of mentors and alumni.
We need creative and flexible minds with a complete commitment to building nothing but perfect software and systems. Make a real impact on your city, the NYC tech community, and a fast-growing startup. Put your mark on the truly disruptive, slightly crazy, and ambitious platform we are building.
Placemeter is in a phase of rapid expansion, and we want you to join us.
APPLY NOW: http://grnh.se/emdq87
About our stack
Our system is full stack in a way rarely seen before: from low-level embedded processing to computer vision algorithms to mobile applications, and everything in between, including machine learning, data analytics, prediction models, and geospatial intelligence.
If you want to build the next big thing in machine learning, computer vision, sensing, and prediction, and if you like huge, scalable, impactful systems, you will fit right in. You will encounter some of the biggest tech challenges you have ever seen. Get ready to earn some serious tech street cred.
We are a paradise for video and data geeks. Using our own optimized code base, we detect moving objects, classify them, and track their positions. We then use trajectory information to estimate speed as well as location occupancy and traffic. Today, our computer vision stack runs continuously on close to 1,000 available video feeds, collecting 8 million data points per day on average. We extract insights and predictions from these points, and we have millions of ground-truth data points for building and optimizing our algorithms. We analyze all these data points by comparing, normalizing, and correlating them with external factors to give our users clean, real-time data. We are about to grow dramatically, adding a couple of orders of magnitude to our current scale.
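To give a flavor of the detect-track-measure loop described above, here is a minimal, illustrative sketch in plain Python. Placemeter's actual pipeline is proprietary and far more sophisticated; the function names, the greedy nearest-neighbor matching, and every threshold below (`max_dist`, `fps`, `metres_per_pixel`) are assumptions for illustration only.

```python
import math

def track_objects(frames, max_dist=50.0):
    """Greedy nearest-neighbor tracking over per-frame centroid detections.

    `frames` is a list of lists of (x, y) centroids, one list per video frame.
    Returns a dict mapping track_id -> list of (frame_index, (x, y)).
    Hypothetical sketch, not Placemeter's production tracker.
    """
    tracks = {}   # track_id -> trajectory so far
    next_id = 0
    for frame_idx, detections in enumerate(frames):
        unmatched = list(detections)
        for history in tracks.values():
            last_idx, last_pos = history[-1]
            if last_idx != frame_idx - 1 or not unmatched:
                continue  # track already lost, or nothing left to match
            # Extend each live track with its nearest new detection.
            best = min(unmatched, key=lambda p: math.dist(p, last_pos))
            if math.dist(best, last_pos) <= max_dist:
                history.append((frame_idx, best))
                unmatched.remove(best)
        for det in unmatched:  # leftover detections start new tracks
            tracks[next_id] = [(frame_idx, det)]
            next_id += 1
    return tracks

def average_speed(history, fps=25.0, metres_per_pixel=0.05):
    """Estimate average speed (m/s) from one track's trajectory."""
    if len(history) < 2:
        return 0.0
    pixel_dist = sum(
        math.dist(a[1], b[1]) for a, b in zip(history, history[1:])
    )
    seconds = (history[-1][0] - history[0][0]) / fps
    return pixel_dist * metres_per_pixel / seconds
```

In production, greedy matching like this would be replaced by more robust data association, and the pixel-to-metres scale would come from per-camera calibration rather than a constant.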
We work in a data-driven environment where every new algorithm is first defined by data sets and ground truth - we have a lot of data floating around. Our regression and quality tests guarantee that each improvement on one camera improves our quality and performance overall.
We highly value testing and continuous integration. For critical interactions between major components we maintain integration tests, and for our core algorithms we maintain quality and regression tests. Good test coverage is key to keeping our bug count low. It also builds internal confidence to work on any piece of code without fear of breaking existing functionality.
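The quality and regression tests described above might look something like the following hypothetical example: score an algorithm's counts against hand-labelled ground truth and fail the build if accuracy drops below a fixed bar. The function names, numbers, and 90% threshold are all illustrative assumptions, not our actual test suite.

```python
def count_accuracy(predicted, ground_truth):
    """Accuracy of predicted counts vs. labelled ground truth.

    Illustrative metric: 1 minus total absolute error over total true count.
    """
    errors = [abs(p - g) for p, g in zip(predicted, ground_truth)]
    total = sum(ground_truth)
    return 1.0 - sum(errors) / total if total else 1.0

def test_pedestrian_counts_do_not_regress():
    # Hand-labelled pedestrian counts per time interval (hypothetical data).
    ground_truth = [12, 7, 30, 18]
    # Counts produced by the current algorithm on the same clips.
    predicted = [11, 8, 29, 18]
    # Regression bar: any change that drops accuracy below 90% fails CI.
    assert count_accuracy(predicted, ground_truth) >= 0.90
```

A real suite would run the algorithm on stored video fixtures rather than hard-coded numbers, but the shape is the same: a fixed data set, a fixed metric, and a threshold that every change must clear.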
Our tech team comes from varied backgrounds, and we function as a flat team where everybody knows what everyone else is working on. This creates an environment where you can easily learn from your peers and significantly grow your tech turf.