The nice thing about smartphones for robots is that they have almost everything you'd want for a robot: GPS, IMU, 2 cameras, battery, battery charging circuitry, a high-res display, touch interface. And ALL of that for $60 -- for, say, a used Pixel 1. You simply cannot get that amount of hardware for $60 with an RPi-based robot.
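Even a plain web page can get at most of that hardware. Rough sketch below (not code from my robot, just the standard Geolocation and DeviceOrientation/DeviceMotion APIs; permission prompts vary by browser and OS):

    // GPS: stream position updates.
    navigator.geolocation.watchPosition(
      (pos) => {
        console.log('lat', pos.coords.latitude, 'lon', pos.coords.longitude,
                    'accuracy(m)', pos.coords.accuracy);
      },
      (err) => console.error('geolocation error', err),
      { enableHighAccuracy: true }
    );

    // IMU: fused orientation plus raw accelerometer/gyro rates.
    window.addEventListener('deviceorientation', (e) => {
      // alpha/beta/gamma are rotations (degrees) around the Z/X/Y axes.
      console.log('yaw', e.alpha, 'pitch', e.beta, 'roll', e.gamma);
    });

    window.addEventListener('devicemotion', (e) => {
      const a = e.accelerationIncludingGravity;  // m/s^2
      const r = e.rotationRate;                  // deg/s
      console.log('accel', a.x, a.y, a.z, 'gyro', r.alpha, r.beta, r.gamma);
    });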
I actually wrote all the logic in an HTML5 app running on the phone, which talks to the motors over Bluetooth using the experimental Web Bluetooth API. I also wrote a "roslite.js", which gives a ROS-node-like (but async) abstraction inside a single monolithic JavaScript file:
https://github.com/dheera/roslite.js
This wouldn't work for more advanced robots, of course, but for simple telepresence and educational robots it works fine and is much easier to code than an Android app.
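In case anyone wants to try the same approach, the Web Bluetooth side is only a handful of calls. Minimal sketch below; the UUIDs and the two-byte wheel-speed command are made-up placeholders rather than my robot's actual protocol, and it only works in browsers that support Web Bluetooth (notably Chrome on Android), from an HTTPS page, after a user gesture:

    // Hypothetical BLE service/characteristic for a motor controller.
    const MOTOR_SERVICE = '0000ffe0-0000-1000-8000-00805f9b34fb';
    const MOTOR_CHAR    = '0000ffe1-0000-1000-8000-00805f9b34fb';

    let motorChar = null;

    async function connectMotors() {
      // Browser shows a device picker, filtered to our (hypothetical) service.
      const device = await navigator.bluetooth.requestDevice({
        filters: [{ services: [MOTOR_SERVICE] }],
      });
      const server  = await device.gatt.connect();
      const service = await server.getPrimaryService(MOTOR_SERVICE);
      motorChar     = await service.getCharacteristic(MOTOR_CHAR);
    }

    async function setWheelSpeeds(left, right) {
      // One signed byte per wheel, -100..100 percent (made-up command format).
      await motorChar.writeValue(new Int8Array([left, right]));
    }

    // e.g. wired to a button in the HTML5 app:
    // connectBtn.onclick = () => connectMotors().then(() => setWheelSpeeds(50, 50));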
I remember my oldest son built a mobile platform with a tablet on it so he could bother his sister at college remotely! He put his face on it, and the tablet was on a stick that could rotate as well. That was over 5 years ago; he was about 15 at the time. Is there anything particularly novel about this platform compared to other phone/tablet robot projects?
The GitHub repository is mostly empty of code. There is an (empty) project for Arduino firmware, and it looks like they are using an Arduino Nano as the device controller. There's also an (empty) project for the Android code. I'd love to see actual code.
My questions would be:

1) How does the Android device communicate with the Arduino? My guess would be Wi-Fi or Bluetooth.

2) What image recognition software did they use in the Android code?

3) What other sensors on the Android device did they actually use, and how?

4) Is there any way to port this to iOS?

5) Is it generalizable to other hardware builds?
The hardware looks very simple. As a software developer I would want to know a lot more about how it works from the coding side.
Now if someone would just make a large Lego-type kit with all the robot parts, so you could plug and play and build robots using smartphones as the control unit.
Buy some wheel cubes and a couple of basic cubes for a base, plug them together, plug in the cell phone via USB, and go.
Instant robots everywhere. Offer arm cubes, sensor cubes, lidar cubes, all plug and play.
If you're asking about the 3D-printed part alone, you could find something similar on Amazon [1] for a reasonable price and hoist the smartphone with any old phone stand you might have lying around and a couple of rubber bands.