We just released a touch sensor we're very excited about: one that finally simplifies touch sensing for robotics.
Our most exciting result: learned visuotactile policies for precise tasks like USB insertion and credit card swiping that work out of the box when you replace the sensor skins! To the best of our knowledge, this has never been shown before with any existing tactile sensor.
Why is this important? For the first time, you can collect data and train models on one sensor and expect them to generalize to new copies of that sensor, opening the door to the kind of large foundation models that have revolutionized vision and language reasoning.
Would love to hear the community's questions, thoughts and comments!
This makes advanced touch sensors more like machine-cut screws than bespoke hand-forged nails.