
Self-driving cars. Now that the hype is over and the fake-it-til-you-make-it crowd has tanked, there's progress. Slowly, the LIDARs get cheaper, the radars get more resolution, and the software improves.

UE5's rendering approach. They finally figured out how to use the GPU to do level of detail. Games can now climb out of the Uncanny Valley.

The Playstation 5. 8 CPUs at 3.2GHz each, 24GB of RAM, 14 teraflops of GPU, and a big solid state disk. That's a lot of compute engine for $400. Somebody will probably make supercomputers out of rooms full of those.

C++ getting serious about safety. Buffer overflows and bad pointers should have been eliminated decades ago. We've known how for a long time.

Electric cars taking over. The Ford F-150 and the Jeep Wrangler are coming out in all-electric forms. That covers much of the macho market. And the electrics will out-accelerate the gas cars without even trying hard.

Utility scale battery storage. It works and is getting cheaper. Wind plus storage plus megavolt DC transmission, and you can generate power in the US's wind belt (the Texas panhandle north to Canada) and transmit it to the entire US west of the Mississippi.




> Self-driving cars. Now that the hype is over and the fake-it-til-you-make-it crowd has tanked, there's progress. Slowly, the LIDARs get cheaper, the radars get more resolution, and the software improves.

Still don't see fully automated self-driving cars happening any time soon:

1) Heavy steel boxes running at high speed in built-up areas will be the very last thing that we trust to robots. There are so many other things that will be automated first. It's reasonable to assume that we will see fully automated trains before fully automated cars.

2) Although a lot is being made of the incremental improvements to self-driving software, there is a lot of research about the danger of part-time autopilot. Autopilot in aircraft generally works well until it encounters an emergency, in which case a pilot has to go from daydreaming/eating/doing-something-else to dealing with a catastrophe in a matter of seconds. Full automation or no automation is often safer.

3) The unresolved/unresolvable issue of liability in an accident: is it the owner or the AI that is at fault?

4) The various "easy" problems that remain somewhat hard for driving AI to solve in a consistent way. Large stationary objects on motorways, small kids running into the road, cyclists, etc.

5) The legislative issues: at some point legislators have to say "self driving cars are now allowed", and create good governance around this. The general non-car-buying public has to get on board. These are non-trivial issues.


You could be right.

My alternative possible timeline interpretation is that two forces collide and make self-driving inevitable.

The first force is the insurance industry. It's really hard to deny that humans are more fallible than even today's self-driving setups, and at some point the underwriters will take note and start premium-blasting human drivers into the history books.

The second force is the power of numbers; as more and more self-driving cars come online, it becomes more and more practical to connect them together into a giant mesh network that can cooperate to share the roads and alert each other to dangers. Today's self-driving cars are cowboy loners that don't play well with others. This will evolve, especially with the 5G rollout.


This reminds me that Tesla itself is starting to offer insurance, and it can do so at a much lower rate. I assume this is because:

1) Teslas crash much less often, mostly due to autopilot.

2) Tesla can harvest an incredible amount of data from each of their cars, and so they can calculate risk better.


How much does a Tesla know about the state of its driver, e.g. can it detect distraction, tiredness or intoxication?

Does Tesla see when you speed and increase your premiums?


Having high-speed steel boxes carrying human lives and who knows what else react to messages from untrusted sources. Hmm. What could go wrong?


I'm going to ignore the snark and pretend as though this is a good faith argument, because we're on Hacker News - and I believe that means you're a smart person I might disagree with, and I'm challenging you.

I want to understand why being in a high-speed steel/plastic box with humans (overrated in some views) controlled by a computer scares you so much. Is it primal or are you working off data I do not have? Please share. I am being 100% sincere - I need to understand your perspective.

To re-state in brief: (individual) autonomous self-driving tech today tests "as safe as" ranging to "2-10x safer" than a typical human driver. This statistic will likely improve reliably over the next 5-10 years.

However, I am talking about an entire societal mesh network infrastructure of cars, communicating in real-time with each other and making decisions as a hive. As the ratio flips quickly from humans to machines, I absolutely believe that you would have to be quantifiably unsane to want to continue endangering the lives of yourself, your loved ones and the people in your community by continuing to believe that you have more eyes, better reactions and can see further ahead than a mesh of AIs that are constantly improving.

So yeah... I don't understand your skepticism. Help me.


The risk is that a bad actor could hack into this network and control the cars.

Security-minded thinking dictates that we should move forward with the assumption that it will happen. The important outcome is not "we can't do anything as a society because bad men could hurt us" but "how do we mitigate and minimize this kind of event so that progress can continue".

Look: I don't want my loved ones in the car that gets hacked, and I'm not volunteering yours, either. Sad things are sad, but progress is inevitable and I refuse to live in fear of something scary possibly happening.

It is with that logic that I can fly on planes, ride my bike, deposit my money in banks, have sex, try new foods and generally support Enlightenment ideals.

I would rather trust a mesh of cars than obsess over the interior design of a bunker.


Totally agree.

If all the cars in the area know one of the cars is about to do something and can adjust accordingly then it will be so much safer than what we have now it is almost unimaginable.

It would seem at some point in the future, people are not going to even want to be on the road with a human driver who is not part of the network.


The hype around self-driving cars is still very much around. I tend to view any debate about fully autonomous cars (level 5) as unserious if it assumes less than a 15-20 year time horizon.


In 2014 top humans could give a good Go-playing AI 4 stones (a handicap large enough that the game is no longer between comparable players).

In 2017 AlphaGo could probably give a world champion somewhere between 1 and 3 stones.

From an algorithmic perspective the range between "unacceptably bad" and superhuman doesn't have to be all that wide and it isn't exactly possible to judge until the benefit of hindsight is available and it is clear who had what technology available. 15-20 years is realistic because of the embarrassingly slow rate of progress by regulators, but we should all feel bad about that.

We should be swapping blobs of meat designed for a world of <10 km/h for systems that are actually designed to move quickly and safely. I've lost more friends to car accidents than to any other cause - there needs to be some acknowledgment that humans are statistically unsafe drivers.


When you're mentioning AlphaGo, you're committing a fallacy that's so famous that it has a name and a wikipedia page (https://en.wikipedia.org/wiki/Moravec%27s_paradox). The things that are easy for humans are very different from those that are easy for robots.


I don’t disagree that computers are better drivers under certain conditions, but that’s not the point.

I can drive myself home relatively safely in conditions where the computer can’t even find the road. We’re still infinitely more flexible and adaptable than computers.

It will be at least 20 years before my car will drive me home on a leaf or snow covered road. Should I drive on those roads? Most likely not, but my brain, designed for <10 km/h speeds, will cope with the conditions in the vast majority of cases.



> It's reasonable to assume that we will see fully automated trains before fully automated cars.

https://en.wikipedia.org/wiki/Paris_M%C3%A9tro_Line_14

Fully automated since 1998, and very successful.


There were automated railways 30 years before that too. https://en.m.wikipedia.org/wiki/Automatic_train_operation


I've lived in Washington DC long enough to remember back when our subway was allowed to run in (mostly) automated mode. There was a deadly accident that wasn't directly the fault of the Automatic Train Control (the human operator hit the emergency brake as soon as she saw the parked train ahead of her) but it still casts light on some of the perils of automation.

Another hard problem for AI is to "see" through rain


That's hard for humans too. I think we need to give up on the idea that fully autonomous driving will be perfect.


I'm obviously talking about matching human performance and this is the hard problem


There is also an easy solution of just staying put.

I have driven in snow a few times when I was not sure I was even on the road. Or the only way I knew I was going the right direction was because I could vaguely see the brake lights of the car going 15mph in front of me through the snow.

That is an easy problem to solve, though, because I simply should not have been driving in those conditions.



Humans are pretty terrible at driving in rain and snow as well.

We already have fully automated trains. The DLR in London.

I am optimistic about solving those problems. Regulation always comes after the tech is invented. Cars have more opportunity to fail gracefully in an emergency: pull off onto the shoulder and coast to a stop, or bump into an inanimate object.


Of course the owner is to blame.


What if it's a rental or a lease? In a fully automated car, that's basically a taxi. I don't think I should bear the responsibility of my taxi driver.

If/when we get fully automated cars, this kind of driverless Uber will become extremely common. Who bears the risk then? This is a complicated situation that can't be boiled down to "Of course the owner is to blame"


That is the most puzzling thing to me. Not from a technical but from a societal ( https://en.wikipedia.org/wiki/Tragedy_of_the_commons ) point of view. Compare with public mass transit: except in Singapore and Japan, it is mostly dirty, in spite of cleaning staff working hard and other people being around. In a taxi/Uber you have the driver watching, and other rentals are usually inspected after, and immediately before, the next rent-out, just to make sure.

Not so in car-sharing pools, and there it's already materializing as a problem. How do you solve that with your 'robo-cab'? Tapping on dirty/smelly in your app, send it back to the garage? What if you notice it only 5 minutes after you started the trip, already robo-riding along? What if you have allergies to something the former customer had on/around it? Or they were so high on opioids that even a touch of the skin could make you drop? As can, and did, happen. How do you solve for that without massive privacy intrusions? Or will they be the "new normal" because of all that Covid-19 trace app crap?


Counterpoint: In a fully-autonomous situation, of course the AI is to blame.


I think we need to consider that case when/if it happens. For the foreseeable future there needs to be a responsible driver present.

To go contrary to this is to invite outright bans of the tech.


> The Playstation 5. 8 CPUs at 3.2GHz each, 24GB of RAM, 14 teraflops of GPU, and a big solid state disk. That's a lot of compute engine for $400. Somebody will probably make supercomputers out of rooms full of those.

Mmm, this sounds like exactly what people said at the time the PS3 was going to be released, and I can only recall one example where the PS3 was ever used in a cluster, and that probably was not very useful in the end.


This exactly.

The PS5 and Xbox One X are commodity PC hardware, optimized for gaming, packaged with a curated App Store.

Sony also won’t just sell you hundreds or thousands of them for some kind of groundbreakingly cheap cluster. They will say no, unless you’re GameStop or Walmart.

Everyone with a high-mid-range PC already has more horsepower than a PS5 and it’s not doing anything particularly innovative or groundbreaking.

The PS5 is going to be equivalent to a mid-range $100 AMD CPU, something not as good as an RTX 2080 or maybe even an RTX 2070, and a commodity NVMe SSD (probably cheap stuff like QLC) that would retail for about the same price as a 1TB 2.5” mechanical hard drive. It is not unique.

Data center servers optimize for entirely different criteria and game consoles do not make sense for anything coming close to that sort of thing. For example, servers optimize for 24/7 use and high density. The PS4 doesn’t fit in a 1U rack. It doesn’t have redundant power. Any cost savings on purchase price is wasted on paying your data center for the real estate, no joke. Then when the console breaks you have to pay your technician $100/hour in compensation, benefits, and taxes to remove and replace it.


I think you're vastly understating current hardware prices.

An 8 core 2nd generation Zen chip appears to retail for $290. The PS5 reportedly has a custom GPU design, but for comparison a Radeon 5000 series card with equivalent CU count (36) currently retails for $270 minimum. Also, that GPU only has 6GB GDDR6 (other variants have 8GB) but the PS5 is supposed to have 16GB. And we still haven't gotten to the SSD, PSU, or enclosure.

Of course it's not supposed to hit the market until the end of the year - perhaps prices will have fallen somewhat by then? (Also I don't expect Sony to be making any money off the hardware at those prices, so I agree that they're unlikely to sell them to anyone who won't buy games for them.)


Ryzen 2nd-gen 2700 is out of stock currently, but it used to go for as low as $135-150, it's absolutely not a $290 CPU (perhaps you're looking at 3rd gen ryzen? 3700x?).

I haven't looked at what a GPU equivalent would be, but by the time the PS5 hits the market, I doubt it's going to be anywhere near $270.

As long as there aren't any supply chain disruptions (as there are now).

It appears that the real killer is the hardware-accelerated decompression block pulling the data straight from SSD into CPU/GPU memory in the exact right location/format without any overhead, which isn't available on commodity PC hardware at the moment.


Ack my bad! I wrote "2nd generation Zen" but I meant to write "Zen 2" which is (confusingly) the 3rd generation.

I found some historical price data and I'm surprised - the 2700 really was $150 back in January! Vendors are price gouging the old ones now, and the 3700X is currently $295 on Newegg.

As far as the GPU goes, an 8GB from the 500 series (only 32 CU, released 2017) is still at least $140 today. And noting the memory again, that's 8GB GDDR5 versus (reportedly) 16GB GDDR6 so I'm skeptical the price will fall all that much relative to the 6GB card I mentioned.


Zen2 = Ryzen 3rd, not 2nd.

> Also I don't expect Sony to be making any money off the hardware at those prices, so I agree that they're unlikely to sell them to anyone who won't buy games for them.

I think console hardware cost is generally budgeted at a slight loss (or close to break-even) at the beginning of a console generation, and then drops over the ~7 year lifespan.


> Everyone with a high-mid-range PC already has more horsepower than a PS5 and it’s not doing anything particularly innovative or groundbreaking.

The fact that it can stream 5.5 GB/s from disk to RAM says otherwise. Commodity hardware, even high-end M.2 drives, can't match that.

* It's my understanding that it directly shares RAM between the CPU and the GPU, which means way less latency and more throughput.


There are high-end drives on the PC market that can match and surpass that, but they are like $2000+ :) Linus talked about that topic last week: https://youtu.be/8f8Vhoh9Y3Q?t=1607


Watching some of that, and doing a bunch of reading on the PS5, it seems that “some drives” can kind of get close, but the PS5 physically has custom, dedicated hardware that can move data from the SSD straight into shared CPU-GPU memory with minimal input/work from the CPU, and that's a fundamental architectural advantage PCs don't have (yet).

I would sure like to see some architectural upgrades like this in the PC/server world though: I’d love an ML workstation where the CPU-GPU RAM is shared and I can stream datasets directly into RAM at frankly outrageous speeds. That would make so many things so much easier.


While the individual components might not be as fast as a high-end PC's, the way the system is architected and the components are connected to each other (e.g. super high bandwidth from SSD to CPU/GPU memory) gives it some advantages, especially for gaming. For the price it certainly is impressive.


New console releases don't need to be particularly innovative or groundbreaking. They greatly increase the resources available to game devs, and game development is console-centric in the first place. Usually, after a new console launches, game visual quality jumps quite noticeably within a couple of years. It's beneficial for everyone, even if you are not a console gamer yourself.


> Then when the console breaks you have to pay your technician $100/hour in compensation, benefits, and taxes to remove and replace it.

No, you pay your minimum wage junior IT assistant to unplug the broken one and plug in a new one. That's the point of commodity hardware - it's cheaper to buy and cheaper to support.


Faster consoles are good if you're a PC gamer too, since games end up deployed for all three platforms and consoles are what hold back progress.

GTA 6 with the hardware in the new consoles will likely be spectacular.


Are you referring to the time the US Air Force built a cluster out of 2000 PS3s? Seems good.


PS3s were used as Debian clusters at my university, and would have been on a larger scale if not for a) the huge cost in my country at launch, b) limited availability, c) the "other systems" fiasco.

There was significant interest in the grid computing research community.


Well, that just goes to show that you shouldn't trust hearsay, even if that hearsay is your own vague recollection of something. There is a Wikipedia page dedicated to the ways the PS3 was used as a cheap HPC cluster:

https://en.wikipedia.org/wiki/PlayStation_3_cluster

The only reason that stopped happening was because Sony killed it on purpose:

> On March 28, 2010, Sony announced it would be disabling the ability to run other operating systems with the v3.21 update due to security concerns about OtherOS. This update would not affect any existing supercomputing clusters, due to the fact that they are not connected to PSN and would not be forced to update. However, it would make replacing the individual consoles that compose the clusters very difficult if not impossible, since any newer models with the v3.21 or higher would not support Linux installation directly. This caused the end of the PS3's common use for clustered computing, though there are projects like "The Condor" that were still being created with older PS3 units, and have come online after the April 1, 2010 update was released.

And in case you were wondering, the reason Sony killed it was that they sell their consoles at a loss and make up for that through game sales (which indirectly is what made it so affordable for people interested in cluster computing). If the PS3 is merely bought for building computing clusters, they end up with a net loss (Nintendo is the only console maker that sells consoles at a profit).


The key differentiator is x86 vs PPC and 1 TB/s bus.


PPC was ok, the killer was that you had to write code specifically for the Cell (co)processors and their limited memory addressing if you wanted the promised compute performance.


> 1 TB/s bus

Is that the new marketing term for shared VRAM?


Most of that power came from the Cell processor, which was awesome but supposedly hard to develop for. I assume they’ve learned that lesson.


If by “learned” you mean changed focus from making it (uniquely) awesome and instead making it easier to develop for: yes.

And if by “learned”, you also mean “were convinced by Mark Cerny“ (who is still leading design of the PS5), then also yes.


> The Playstation 5. 8 CPUs at 3.2GHz each, 24GB of RAM, 14 teraflops of GPU, and a big solid state disk. That's a lot of compute engine for $400. Somebody will probably make supercomputers out of rooms full of those.

That seems like a straight waste of time for lightly customised hardware you'll be able to get off the shelf. And unless they've changed since, the specs you quote don't match the official reveal of 16GB and 10 teraflops. Not to mention the price hasn't been announced; the $400 price point is a complete guess (and a pretty weird one, given the XbX is guessed at 50% more… for a very similar machine).


GPU-solved LOD won't save video games from the uncanny valley. In some cases it will make it worse. It makes for nice statues and static landscapes though.


> C++ getting serious about safety. Buffer overflows and bad pointers should have been eliminated decades ago. We've known how for a long time.

The crowd that uses C++ needs raw pointers sometimes, and you can't really prevent bad pointers and buffer overflows when they are used. There is a reason why Rust, whose goal is to be a safer C/C++, supports unsafe code.

Smart pointers are a very good thing to have in the C++ toolbox, but they are not for every programmer. Game programmers, if I am not mistaken, tend to avoid them, as well as other features that make things happen between the lines, like RAII and exceptions.

The good thing about that messiness that is modern C++ is that everything is here, but you can pick what you want. If you write C++ code that looks like C, it will run like C, but if you don't want to see a single pointer, you have that option too.
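
To make the "pick what you want" point concrete, here is a minimal sketch (my own, assuming nothing beyond the standard library) of the same buffer handled C-style and with owning, RAII-style types:

    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>
    #include <memory>

    // C-style: manual allocation and cleanup, easy to get wrong.
    void c_style(std::size_t n) {
        int* buf = static_cast<int*>(std::malloc(n * sizeof(int)));
        if (!buf) return;
        for (std::size_t i = 0; i < n; ++i) buf[i] = static_cast<int>(i);
        std::printf("last = %d\n", buf[n - 1]);
        std::free(buf);  // forget this and you leak; free it twice and you crash
    }

    // Modern style: ownership is explicit and cleanup is automatic (RAII).
    void modern_style(std::size_t n) {
        auto buf = std::make_unique<int[]>(n);  // freed when buf goes out of scope
        for (std::size_t i = 0; i < n; ++i) buf[i] = static_cast<int>(i);
        std::printf("last = %d\n", buf[n - 1]);
    }

    int main() {
        c_style(8);
        modern_style(8);
    }

Both styles compile in the same program; nothing forces you onto either path, which is exactly the trade-off being described.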


Maybe it was a purposeful reference, but PlayStations have indeed been linked to create a supercomputer: https://phys.org/news/2010-12-air-playstation-3s-supercomput...


Even before that link. The PS2 Linux kit was used back in 2003.

https://web.archive.org/web/20041120084657/http://arrakis.nc...


> C++ getting serious about safety. Buffer overflows and bad pointers should have been eliminated decades ago. We've known how for a long time.

Would love some links to read over weekend. Thanks!


Things like:

- std::string_view

- std::span

- std::unique_ptr

- std::shared_ptr

- std::weak_ptr (non owning reference to shared_ptr, knows when the parent is free'd)

- ranges

- move semantics

- move capture in lambdas

- std::variant

- std::optional

To be honest, learning Rust has made me a better C++ programmer as well. Having to really think about lifetimes and ownership from an API perspective has been really neat. It's not so much that I wasn't concerned about it before, more that I now strive to express these constraints in code.
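
For anyone who wants a feel for a few of those, here is a small self-contained sketch (the names and values are made up for illustration):

    #include <iostream>
    #include <optional>
    #include <string>
    #include <string_view>
    #include <variant>

    // Return a value that may be absent, instead of a sentinel or an out-pointer.
    std::optional<int> parse_port(std::string_view s) {
        if (s.empty() || s.size() > 5) return std::nullopt;
        int value = 0;
        for (char c : s) {
            if (c < '0' || c > '9') return std::nullopt;  // not a number
            value = value * 10 + (c - '0');
        }
        if (value > 65535) return std::nullopt;
        return value;
    }

    // A closed set of alternatives, handled via std::visit.
    using Endpoint = std::variant<int, std::string>;

    int main() {
        if (auto port = parse_port("8080")) std::cout << "port " << *port << '\n';

        Endpoint e = std::string("localhost");
        std::visit([](const auto& v) { std::cout << v << '\n'; }, e);
    }

Note that std::string_view here is a non-owning view of the caller's characters, which is cheap but comes with the lifetime caveat discussed further down the thread.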


Seconded that dipping a toe in to Rust has changed how I think about C++ and object ownership. Loose pointers and copy constructors now make me feel un-clean! Move ftw.

However I feel like most of the heavy-lifting features came with C++11. Span, optional, variant and string_view are nice additions to the toolkit, but more as enhancements than the paradigm shift of C++11 (move, unique_ptr, lambdas et al.).


> Seconded that dipping a toe in to Rust has changed how I think about C++ and object ownership. Loose pointers and copy constructors now make me feel un-clean! Move ftw.

It's funny, because while it's certainly become more influential lately, that subculture existed as a niche in the C++ world before Rust and before C++11. So much so that when I first heard about Rust I thought "these are C++ people."


The original (and long dead) Rust irc channel used to be full of C++ people chatting with OCaml people. Those were the days :)


That entirely matches my idea of how Rust came to be, some sort of pragmatic co-development across two different philosophical camps. In many ways, Rust is a spiritual successor to both languages, if only it was easier to integrate with C++.

lol i start most of my big objects by deleting the copy constructor and adding a clone member func :P


string_view, span and ranges are not conducive to safety, quite the opposite.


Yeah, if anything, C++ is getting less serious about safety by piling features over features. Just write Rust instead.


can you explain why you think that?


Things like "It is the programmer's responsibility to ensure that std::string_view does not outlive the pointed-to character array."

"string_view" is a borrow of a slice of a string. Since C++ doesn't have a borrow checker, it's possible to have a dangling string_view if the string_view outlives the underlying string. This is a memory safety error.

Rust has educated people to recognize this situation. Now it's standard terminology to refer to this as a borrow, which helps. Attempting to retrofit Rust concepts to C++ is helping, but often they're cosmetic, because they say the right thing, but aren't checked. However, saying the right thing makes it possible to do more and more static checking.
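
A minimal sketch of the dangling-view hazard being described (my own example; the undefined-behaviour lines are commented out so it still compiles and runs cleanly):

    #include <iostream>
    #include <string>
    #include <string_view>

    int main() {
        // Dangles immediately: the temporary std::string dies at the end of this
        // statement, but sv still points at its (now freed) buffer.
        std::string_view sv = std::string("temporary");
        (void)sv;
        // Using sv here would be undefined behaviour:
        // std::cout << sv << '\n';

        // The same bug across a scope boundary:
        std::string_view word;
        {
            std::string s = "hello world";
            word = std::string_view(s).substr(0, s.find(' '));
            std::cout << word << '\n';  // fine: s is still alive here
        }
        // word now refers to memory owned by the destroyed string s.
        // Using it would be undefined behaviour; Rust's borrow checker rejects
        // the equivalent code at compile time.
        // std::cout << word << '\n';
    }

The compiler accepts both versions without complaint, which is exactly the gap between "says the right thing" and "is checked".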


But surely it's a step towards more safety. Compare to passing char * around or ref/ptr to string.

Sure, C++ doesn't have a borrow checker, but these types encourage the idea of "reifying" lack of ownership rather than keeping it ad hoc.


I have always used Pascal for memory safe strings. Reference counted, mutable xor aliased, bound checked: their safety is perfect.

Unfortunately there is no string view, so you need to copy the substrings or use pointers/indices. I tried to build a string view, but the freepascal compiler is not smart enough to keep a struct of 2 elements in registers.


You don't infer potential ownership from a C++ ref. Likewise for char* strings unless it is to interface with a C api, in which case you will keep it anyway.


Wow. I hadn't read up much on string_view but I guess I assumed it required a shared_ptr to the string. Odd decision not to.


Rust hasn’t "educated people" about "borrowing".

Lifetime management has always been there for any developer dealing with resources in any language. Over the years, languages and specialized extensions and tools have offered different solutions to help with that problem.

What Rust has brought is checked, explicit lifetime tracking embedded into a mainstream language.


Static analysis tools like PVS Studio are amazing. Software verification like CompCert where the compilation includes a certificate of correctness are farther away for C++ but will someday be usable for it.


I never really paid attention to consoles (not a gamer in any way) but the ps5 sounds impressive. Shame Sony have a very Apple-like approach to their products and lock everything up. If they bundled up that hardware with linux support, sales would go through the roof and into orbit. I'd personally get a bunch of these and build myself a cluster.


Sony is selling them with little to no profit, as they expect to earn on games. Guess why their capable and cheap hardware is locked down to prevent using it for anything except playing bought games ;)

Anyway, you can jailbreak a PS4 on 5.0.5 firmware, and there are unpublished exploits in existence that are waiting for the PS5 to be released.


Looks like I found that "home server" to replace my over-use of cloud resources that I've been looking for!


Well, let me recommend something else: check ASRock mini-ITX motherboards with an on-board CPU. You can get those for ~150 euros, throw in some RAM (~60 euros) and some disk (~100 euros) + a chassis (a Phenom mini-ITX, for instance, ~100 euros). For a home server this will work great :)

I have been running a home server (100% self-hosted, including email) on a J1900-ITX motherboard with 20TB of disk space (ZRAID) for years. No need to bother with a PS4/5.


Well, the bundle you describe would be over €400, whereas you can purchase a used PS4 for at least half that price, or even cheaper.


Yes, but the PS4 is a gaming rig and you will have to jailbreak it on every reboot. It depends on what you intend to run; a Raspberry Pi 4 and an SD card could be more than enough for some people. Those prices were rough estimates: my motherboard with CPU has been in there since 2014 and is now $60 while still being more than enough, and by going minimal (RAM, chassis, disk - with a PS4 you will get 1TB at most) you can pull it off under the PS4 price. In the end, if you divide those 400 euros by 6 years, you are at a price of 5.55 euros/month (not to mention you can reuse the chassis and disks when upgrading), and it is a low-power setup (measured at 33 watts with 4 disks).

Jailbreaking could be nice for other <wink> unnamed purposes.


I recently bought an Ivy Bridge low-power CPU + motherboard for $35 and 8 GiB of RAM for $25. No need to buy new hardware if you can make do with old.

Maybe they sell the H/W at a loss (especially considering R&D + marketing spend) - and the real strategy is to turn a profit on PS Plus, licensing and taking a cut of game distribution. If that's the case... you or me building a Linux cluster will actually hurt them =)


Not maybe, that's exactly what they do.


The PS3 had dual-boot support for Linux early on, for a couple years after launch. It was removed in a software update a week or two after I decided to try it. I don't see Sony doubling back on that one, but you never know.



> That's a lot of compute engine for $400.

So excited for this as a PC gamer; hardware prices are going to have to plummet. I don't think supercomputers are likely: the PS2 was a candidate because there was [initially] official support for installing Linux on the thing. Sony terminated that support, and I really can't imagine them reintroducing it for the PS5.


Sony's only interest is to do a single deployment, using a customized OS and firmware, and then get as many articles out of the project as possible.

They have zero incentive to subsidize supercomputers. They're in the business of trading hardware for royalty, store, and subscription payments.


And if they do it would be wise not to trust them, because dropping support for advertised features with hardware-fused irreversible software updates is SOP at this point. FFS, they even dropped support for my 4K screen in an update and I wound up playing the back half of Horizon Zero Dawn in 1080p as a result.


What? How and why did they drop support for your screen?


Yes, really. They up and dropped an entire HDMI mode used by "older" 4K displays.

A cynic would say they wanted to boost sales on newer displays, but it seems more likely that a bug of some kind came up in a driver (I was unaware of any problems, but that's hardly proof of anything) and they just decided it was easier to cut support of those displays than to fix the problem.

Support forums filled with complaints by the dozens of pages, but Sony didn't care, because why should they? I'm sure somebody did the calculation that said we weren't a big enough demographic to matter.


> Somebody will probably make supercomputers out of rooms full of those.

So I learnt very recently that the PS5 has a cool approach where all memory is shared directly between the CPU and the GPU (if this is wrong, someone please correct me). It would be really interesting to see how well the GPU in this could handle DL-specific workloads, and if necessary, could it be tweaked to do so?

Because if so, that could be an absolute weapon of a DL workstation. If it does turn out to be feasible, I think it could be very easily justifiable to buy a few of those (for less than it would cost you to rent a major cloud provider's GPU-equipped instance for a couple of months) and have a pretty capable cluster. Machines get outdated or cloud provider cost comes down? Take them home and use them as actual gaming consoles. Win win.


This is how APUs (which is what the PS4/PS5/Xbox have) use memory: the RAM is shared between the graphics and compute units. This can be an advantage since memory is quickly shared between the two (for example when loading textures, etc.).

This is also useful in computers, since adding more RAM also adds more VRAM.


Self-driving cars: Yes, but only if they really work - now would be the perfect time to sell them if they did, for those of us who normally use public transport but don't currently like the thought of sitting in a petri dish for 2 hours.

Utility-scale battery storage: Yes, but it needs tech improvements to store LOTS of energy; flow batteries might do it if the hype is true. But currently the UK wholesale electricity price is £-28/MWh due to a wind/solar glut and a quiet weekend, so if anyone wants to get paid to store that energy, the opportunity is there.

As for C++ safety: I find modern C++ hard to read - are they going to manage safety but end up with something that's actually harder to use/read than Rust?


Can't help but laugh whenever I read self-driving car predictions like this, sorry.

My GPS can hardly navigate most of the world, so I'm not really excited, and if the only criterion for a self-driving car is self-driving on a highway, then color me uninterested.

I don't think self-driving cars will be able to traverse the majority of the world's traffic anytime soon. The roads are just too difficult to maintain for human-free driving, with the exception of a few major grid cities in America, which makes the whole ordeal pretty boring.


Self-driving cars don’t need to be 100% autonomous in all possible scenarios in order to be useful. Self-driving reliably on the highway? Hell yes I’d take that (just think of trucks - having a driver only for the stretch to and from the highway is so much cheaper than having someone drive cross-country). Self-driving reliably in a few major cities? Oh, you mean a cheap robotic taxi?


Spot on. This is our approach at Ghost Locomotion - L3 is pretty darn good, and highways are actually pretty standards driven, unlike local roads or cities.

https://medium.com/ghost-blog/the-long-ignored-most-obvious-...

https://medium.com/ghost-blog/the-future-of-transportation-i...


I agree with you; it's just as I said - it's not what we've been sold, and autopilot on the highway is kinda boring.


I think it's incorrect to view it as either full self-driving or nothing at all. We are getting incremental benefits from this already: cars are correcting and preventing driver errors. They make instant trajectory corrections or come to a complete stop and prevent huge crashes. With time they will get better and better at recognising traffic lights, road signs, sudden unforeseen situations and so on, and in that way driving safety will improve dramatically even before full self-driving capability arrives.


Nice post. I think the PS5 read might be a little off though. The Pro edition is likely to be 600ish USD and come in a little lower than 14 teraflops.

Why do they need lidar in the first place? Humans do fine with stereoscopic vision

“Fake it till you make it” is precisely how it will be solved


“Fake it till you make it” strategy works when you know how to make something but haven't made it yet. The strategy falls apart when people try to fake having solved hard open research problems.


> 8 CPUs at 3.2GHz each

8 CPU cores at 3.2GHz each?


Tesla covering 3/6... stock price is definitely still low


I would never trust any self driving car that didn't use LiDAR. It's an essential sensor for helping to fix issues like this:

https://www.youtube.com/watch?v=1cSw4fXYqWI&feature=emb_logo

And it's not contrived since we've seen situations of Telsa Autopilot behaving weirdly when it sees people on the side of billboards, trucks etc.


LIDAR vs camera is a red herring. The fact that Elon and his fan club fixate on this shows you how little they understand about self driving. The fundamental problem is that there is no technology that can provide the level of reasoning that is necessary for self driving.

Andrej Karpathy's most recent presentation showed how his team trained a custom detector for stop signs with an "Except right turn" text underneath them [0]. How are they going to scale that to a system that understands any text sign in any human language? The answer is that they're not even trying, which tells you that Tesla is not building a self-driving system.

[0] https://youtu.be/hx7BXih7zx8?t=753


A surprising number of human drivers would also not be able to 'detect' that 'except right turn' sign. Only 3 states offer driver's license exams solely in English; California, for example, offers the exam in 32 different languages.

Even so, it is quite possible to train for this in general. Some human drivers will notice the sign and will override autopilot when it attempts to stop, this triggers a training data upload to Tesla. Even if the neural net does not 'understand' the words on the sign, it will learn that a stop is not necessary when that sign is present in conjunction with a stop sign.


They have hired much of the industry's talent, so I think it's quite silly to claim they understand so little about this. In my opinion nobody has more knowledge of this field than Tesla and Waymo.


Why does it need to work in any human language? It isn't as if self driving cars need to work on Zulu road signs before they can be rolled out in California. I'd be surprised if they ever needed to train it on more than 4 languages per country they wanted to roll out to.


If I were driving I'd definitely stop for the person in the road projection at https://youtu.be/1cSw4fXYqWI?t=85

LiDAR also isn't a silver bullet. Similar attacks are possible, such as simply shining a bright light that overwhelms the sensor, as well as more advanced attacks such as spoofing an adversarial signal.


I don't think it's attacks we need to worry about (there's even an XKCD about dropping rocks off of overpasses). The issue is that without good depth and velocity data (so probably LiDAR) there are lots of fairly common situations that an ML algorithm is likely to have trouble making sense of.


I use autopilot every day. It stops for stoplights and stop signs now.


Sometimes it also stops when on the freeway behind a construction truck with flashing lights.


It is misleading. Driving on the highway is by far the easiest part of self-driving.

Going from 3 nines of safety to 7 nines is going to be the real challenge.


There aren't stoplights on the highway. I'm talking about in-city driving.


Humans don’t need LiDAR to recognize billboards


Self driving cars can't rapidly move their cameras in multiple spatial directions like humans do on a continuous basis.

Also we have a pattern and object detection computer behind our eyes that nothing on this planet even remotely comes close to.


People don't have eyes in the back of their heads. Self-driving cars don't get drunk or distracted by cell phones. Comparing humans with AVs is apples & oranges. The only meaningful comparison is in output metrics such as Accidents & Fatalities per mile driven. I'd be receptive to conditioning this metric on the weather... so long as the AV can detect adverse conditions and force a human to take control.


Chimps have us beat when it comes to short-term visual memory (Humans can't even come close).

Mantis shrimp have us beat when it comes to quickly detecting colors, since they have twelve types of photoreceptors vs. our three.

Insects have us beat when it comes to anything in the UV spectrum (we're completely blind to it). Many insects also cannot move their eyes but still have to use vision for collision detection and navigation.

Birds have us beat when it comes to visual acuity. Most of them also do not move their eyeballs in spatial directions like we do but still have excellent visual navigation skills.


Humans have visual processing which converts the signals from our three types of cones into tens to hundreds of millions of shades of color. Mantis shrimp don't have this processing. Mantis shrimp can only see 12 shades.

Human color detection is about six orders of magnitude greater than mantis shrimp's.


Right, but the theory is that they have us beat when it comes to speed since they are directly sensing the colors whereas we are doing a bunch of post-processing.


I think the point was that brains are the best pattern and object detection computers, not necessarily just human brains.


> Also we have a pattern and object detection computer behind our eyes that nothing on this planet even remotely comes close to.

Not defending those who say that LIDAR isn't useful/important in self-driving cars, but this assertion is only marginally true today and won't be true at all for much longer. See https://arxiv.org/pdf/1706.06969 (2017), for instance.


Humans have only about a 2° field of sharp vision. Computers with wide-angle lenses don't have to oscillate like the eyes do.


Humans are underrated.


On driving? I would posit that most humans are vastly overrated.

I suspect if you crunch the numbers, accidents are going to be above normal for a while after Covid-19 reopenings.

Anecdotally, I'm seeing people doing mind-blowingly stupid things on the roadways right now. It seems like people have forgotten how to drive. I suspect the issue is that people rely too much on other cars to cue them on how to behave, and right now the concentration of other cars is too low.

(It could also be that a constant accident rate cleans off the worst of the drivers with regularity as they get into accidents and then wind up out of circulation. I really hope that isn't why ... that would be really depressing.)


No, they're underrated. We all know the stats; driving isn't the safest activity. Having said that, there's a lot of wishful thinking that the current state of ML could do any better if we were to just put it on the roads today as-is.


You are right, for example, humans don't need anywhere near the amount of training data that AIs need.


I learned to drive a car when I was 13. My older cousin took me to warped tour, got hammered and told me I had to drive home. I didn’t know what a clutch was, let alone a stick shift. After stalling in the parking lot a couple of times, I managed to drive us from Long Beach all the way back to my parents house in Pasadena. Love to see an AI handle that cold start problem.


Cold start? You had 13 years!


Self-driving cars could work more like a hive mind. Humans can share ideas, but not reflexes and motor memory. So we practice individually, and we're great at recognizing moving stuff, but we never get very good at avoiding problems that rarely happen to us.

And we know we shouldn't drive tired or angry or intoxicated but obviously it still happens.


Exactly. The way to improve performance on a lot of AI problems is to get past the human tendency to individualistic AI, where every AI implementation has to deal with reality all on its own.

As soon as you get experience-sharing - culture, as humans call it, but updateable in real time as fast as data networks allow - you can build an AI mesh that is aware of local driving conditions and learns all the specific local "map" features it experiences. And then generalises from those.

So instead of point-and-hope rule inference you get local learning of global invariants, modified by specific local exceptions which change in real time.


It seems to me that humans require and get orders of magnitude more training data than any existing machine learning system. High "frame rate", high resolution, wide angle, stereo, HDR input with key details focused on in the moment by a mobile and curious agent, automatically processed by neural networks developed by millions of years of evolution, every waking second for years on end, with everything important labelled and explained by already-trained systems. No collection of images can come close.


Depends on how you quantify data a human processes from birth to adulthood.


You're forgetting the million years of evolution


But at the end of that video they state they were able to train a network to detect these phantom images. So this is something that can be fixed and has been proven to work. It's only a matter of time before it's in commercial cars.


That same video said they trained a CNN to recognize phantoms using purely the video feed and achieved high accuracy, with AUC ~ 0.99.


30%+ downvotes - seems like there is no consensus around this issue.


I have an AP 2.5 Model 3. It will never be fully self-driving. It still has trouble keeping its lane when the stripes are not simple. It still phantom-brakes.


WRT the F-150:

I am so upset with the state of the auto market when it comes to pricing.

Manufacturing margins are enormous when it comes to cars.

The F150 is no different.

A two seater (effectively) vehicle stamped out of metal and plastic should never cost as much as those things do.

I hate car companies and their pricing models.


Look up the chicken tax, a bill that passed a few decades ago and basically stopped foreign car manufacturers from selling pickups in the US. That's why trucks are so much more expensive than other types of cars.



Also why you have huge F-series trucks and not more reasonably sized ones like the Hilux.


Because small trucks require more fuel- and emissions-efficient engines than larger ones.




