Pixelopolis, a self-driving car demo from Google I/O built with TensorFlow-Lite (tensorflow.org)
149 points by baylearn on July 14, 2020 | 47 comments



Damn it... Just before the pandemic I had started piling up hardware to build something similar (I was intending to use a Raspberry Pi). Since I couldn't go look for parts in person (I'm not that crafty with my hands, so I genuinely need to see a part in person to tell whether it's going to work), I postponed it. And today I see this... The feeling kind of sucks, even though I can't really explain why.


You should still do it! I'm sure you'll have a lot of fun, and it's perfectly normal to build something that has already been done before; it happens all the time. Your solution doesn't have to be better, but it will be different and interesting.


I've got other things lined up at the moment, for better or worse; I might go back to it at some point. It's just that "the Simpsons did it" kind of feeling, you know...


If you think it sucks that your idea was done before you by... Google ... you need to cut yourself some more slack.


I'm just stunned at how much effort they put into carving and painting the wood.


Gotta use that army of UX Designers for something...


You can create a more robust version using an NVIDIA Jetson Nano/Xavier instead of a Raspberry Pi.


Amazon has been selling its self-driving AWS DeepRacer car [1] and has been promoting race tournaments for a while now.

Has anyone got one? Is it worth the price for testing self-driving algorithms, in the hope of making it to a full-sized vehicle one day, or are we better off building our own self-driving toy car with an NVIDIA Jetson?

[1] https://aws.amazon.com/deepracer/


I personally don't, but there's a DIY Robocars event near me where people compete in races using little self-driving cars. Some people were using the AWS one, but more popular in the "stock" division was the Donkeycar:

https://diyrobocars.com/

https://github.com/autorope/donkeycar

I haven’t made one, I just go with my kids to show them how cool it is to be an engineer, but Donkeycar seems popular and performs well in the contests.


Thank you, the Donkeycar seems very approachable.


I want to, but it seems too expensive. I'm not too interested in assembling a 'donkey car' either; I just want to play with the software for now.


Many robot competitions include a simulation league, where people who don't want to buy physical hardware can compete with a simulation of it.

And being simulated means you never have a flat battery, and you can single-step your control code without crashing into something :)


They were trying very hard to use TensorFlow. It's amusing that their first attempt at pre-training failed because they trained only on straight roads. Direct camera-to-steering is possible, but not really the best approach.
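To make "direct camera to steering" concrete: the idea is a single forward pass from a camera frame to a steering value. This isn't Google's actual code, just a minimal sketch of what that looks like with the TensorFlow Lite Interpreter on Android; the model file, input shape, and output meaning are assumptions:

    // Hypothetical sketch of camera-to-steering inference with TensorFlow Lite on Android.
    // Assumes a model that takes a [1][H][W][3] float image and outputs one steering value.
    import org.tensorflow.lite.Interpreter
    import java.io.File

    class SteeringModel(modelFile: File) {
        private val interpreter = Interpreter(modelFile)

        // frame: pre-processed pixels, shape [1][height][width][3], values in 0..1
        // returns: a normalized steering value, e.g. -1 (full left) .. 1 (full right)
        fun predictSteering(frame: Array<Array<Array<FloatArray>>>): Float {
            val output = Array(1) { FloatArray(1) }
            interpreter.run(frame, output)
            return output[0][0]
        }
    }

The failure mode described above is then easy to picture: if the training frames are mostly straight roads, the network can learn to output roughly zero steering regardless of the input.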

Micromouse competition, 2019 [1]. On the first run it's learning the maze; the second run is a speed run.

[1] https://youtu.be/sKFIBQ64_zs


Well yeah, the whole point was to demonstrate TensorFlow.


105.41 MB transferred

Maybe think twice before putting a 40MB GIF on a website?


Well how else are you going to do video on the web?!


Animated PNG?



Now that's a billion dollar startup.


Billion dollar startup, or billions in bandwidth costs?


yes


Embedded YouTube?


Wonderful. Are there any other "robotics" projects that use Android phones instead of a Raspberry Pi or microcontrollers?

Imagine that in the future there are generic robots. Rather than trusting a robot's custom AI/processing, I could connect my phone to it; the phone gains a hardware/robotic extension, and the robot is personalised for me.

It's like an Iron Man suit for my phone :)


FIRST uses Android phones, but they drive a hardware board that handles low-level stuff like motors.


Does anyone know how the Android phone interfaces with the rest of the car hardware?

I did not know that Android phones could do real-time I/O like this. Is this "Android Things" or the "Android Accessory Protocol"?


It's in TFA: "the Pixel 4 also controls the motors and other electronic components via USB-C".

There's an STM32 microcontroller which presumably acts as a USB slave and controls the servos.


What SDK is this? I have hunted and not found any easy way to do serial I/O over USB-C in the Android SDK.


Just google "arduino android"; the interface has been used by many projects for years.
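For the raw SDK route (as opposed to an Arduino helper library), host-mode USB I/O goes through android.hardware.usb. A rough sketch, assuming the phone is the USB host, permission for the device has already been granted, and a made-up 4-byte command format:

    // Rough sketch: write a steering/throttle command to an attached USB device
    // (e.g. a microcontroller board) using Android's USB host APIs.
    // The wire format here is invented purely for illustration.
    import android.content.Context
    import android.hardware.usb.UsbConstants
    import android.hardware.usb.UsbManager

    fun sendDriveCommand(context: Context, steering: Int, throttle: Int) {
        val usbManager = context.getSystemService(Context.USB_SERVICE) as UsbManager
        val device = usbManager.deviceList.values.firstOrNull() ?: return

        val iface = device.getInterface(0)
        // Find a bulk OUT endpoint to write command bytes to.
        val outEndpoint = (0 until iface.endpointCount)
            .map { iface.getEndpoint(it) }
            .firstOrNull {
                it.direction == UsbConstants.USB_DIR_OUT &&
                it.type == UsbConstants.USB_ENDPOINT_XFER_BULK
            } ?: return

        val connection = usbManager.openDevice(device) ?: return
        connection.claimInterface(iface, true)

        val packet = byteArrayOf('D'.code.toByte(), 'R'.code.toByte(),
                                 steering.toByte(), throttle.toByte())
        connection.bulkTransfer(outEndpoint, packet, packet.size, 100 /* timeout ms */)

        connection.releaseInterface(iface)
        connection.close()
    }

Libraries like usb-serial-for-android wrap these same calls for common CDC/FTDI serial chips, which is probably what most of those Arduino-Android projects are using.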


Google "USB motor controller"


It doesn't quite say in the article, but I'd guess they're using a USB connection with the accessory protocol, based on:

- Their chosen controller doesn't have Bluetooth.

- The images show the phone cradle having a cable connection to the phone.

- There's an offhand remark in the motor segment that they made the external board power the phone, implying they were connected anyway.

- It's the easiest way and supported by Android APIs.

Android Things as a project is dead, and it never ran on phones anyway - it was always meant to power IoT devices with screens.


Maybe via Bluetooth? I'd imagine that would be the easiest way.


One thing I have always failed to understand about self-driving tech -- what's the motivation for each car to do its own self-driving computation in busy metropolitan areas?

Are there technologies that integrate with some kind of existing infrastructure like beacons, etc. that would just tell the car where the streets are? If not, why not?


Smart roads have been attempted for years, usually with some sort of antenna or wireless/light transmitter embedded in the road itself. The cost per mile was just too high; it's far simpler to use what's already there with computer vision and map metadata. Now that cars have GPS, A-GPS, and RTK, the need to tailor roads for autonomous vehicles has diminished.


> The cost per mile was just too high.

That's why I specified "in large metropolitan areas". I can see that it's impractical to outfit anything outside dense cities with such beacons, but inside the city they could augment, if not replace, autonomous driving systems. Or so I imagine.

> GPS, A-GPS, and RTK

Thanks for telling me about RTK, I did not know about this!


It's cheaper to buy a self-driving car than to augment every road the customer might want to drive on.

Maybe if we were immediately jumping to 50% self-driving adoption it would be easier to make smart roads, but that's not the world we're living in yet.


In addition to the other replies, is path finding even enough? I think a lot of the hard part is dealing with arbitrary obstacles in the road which won't be playing nice with the centralized system.


GPS is already beacon infrastructure that tells cars where they are on the streets :)

It cleverly got around our allergy to spending on infrastructure by being cold-war-era military technology.


> GPS is already beacon infrastructure that tells cars where they are on the streets :)

Yeah, but it doesn't say where the streets are, where the traffic lights are, or (officially) what the speed limit is on this stretch of road, etc. All of that comes from a layer applied by a private company, and it may differ substantially from how the city itself intends to direct traffic.


You are right, GPS alone doesn't. But it's augmented with both specialised maps (from the private companies you mentioned) and sensor data. Some self-driving cars use detailed 3D maps containing information about e.g. traffic light positions and have technology to read road signs and lights. Just like human drivers do.

I used to work for TomTom and at the time they were just getting into the self-driving cars sector. I don't know where they are with it now, but four years ago they were already building high-definition 3D maps for autonomous cars. And they had road sign reading technology.

You can see their HD 3D map in a video [1] where they also explain how it's constructed using their mapping vehicles.

EDIT: To make my point clear, we don't really need to invest in smart roads - the cars and the supporting tech are already smart enough.

[1] https://www.youtube.com/watch?v=ga5fW-QSXp0


>what's the motivation for each car to do its own self-driving computation in busy metropolitan areas?

There are a few reasons I can think of: 1) latency - the same argument as against a data-center approach; 2) security - we can't trust other computers, because once the hardware is owned the security battle is lost; 3) it's both unnecessary and insufficient - there are human drivers on the road, and there are cases where there's no other self-driving car around.


There is a definite resemblance to Duckietown [1], a robotics class/competition developed at MIT.

[1] https://www.duckietown.org/


We had something similar at DTU in Denmark, albeit long before the times of TensorFlow and other “modern” ML.

Lane keeping was a solved problem back then.


Could this be scaled up into building many small cities for training self-driving cars? Could it be used at real scale? Kind of like how GTA V is used for training.


“Please excuse the crudity of this model. I didn’t have time to paint it or build it to scale” — Doc Emmett Brown (Decidedly not Google)


Interesting that they went for expensive microcontroller-based servos; I guess they consider Realtek-chip RC cars to be just toys.


Probably to make steering easier with angle control.


RC servos don't have encoders, so you would have to calibrate them. Using off-the-shelf servos with encoders is much easier.
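For what it's worth, the calibration issue comes from how plain RC servos are commanded: open loop, by pulse width, with no angle reported back. A minimal sketch of the usual mapping, assuming the common 1000-2000 µs range for 0-180° (real servos drift from those endpoints, which is exactly what you'd have to calibrate per servo):

    // Open-loop RC servo command: angle -> pulse width in microseconds.
    // Assumes the conventional 1000..2000 µs range for 0..180° at a 50 Hz frame rate;
    // real hobby servos deviate from these endpoints, hence per-servo calibration.
    fun angleToPulseWidthUs(
        angleDeg: Double,
        minUs: Double = 1000.0,
        maxUs: Double = 2000.0
    ): Double {
        val clamped = angleDeg.coerceIn(0.0, 180.0)
        return minUs + (clamped / 180.0) * (maxUs - minUs)
    }

    // Example: 90° (straight ahead) maps to roughly 1500 µs.
    // println(angleToPulseWidthUs(90.0))  // 1500.0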



