UE5's rendering approach. They finally figured out how to use the GPU to do level of detail. Games can now climb out of the Uncanny Valley.
The PlayStation 5. 8 CPUs at 3.2GHz each, 24GB of RAM, 14 teraflops of GPU, and a big solid state disk. That's a lot of compute engine for $400. Somebody will probably make supercomputers out of rooms full of those.
C++ getting serious about safety. Buffer overflows and bad pointers should have been eliminated decades ago. We've known how for a long time.
Electric cars taking over. The Ford F-150 and the Jeep Wrangler are coming out in all-electric forms. That covers much of the macho market. And the electrics will out-accelerate the gas cars without even trying hard.
Utility scale battery storage. It works and is getting cheaper. Wind plus storage plus megavolt DC transmission, and you can generate power in the US's wind belt (the Texas panhandle north to Canada) and transmit it to the entire US west of the Mississippi.
Still don't see fully automated self-driving cars happening any time soon:
1) Heavy steel boxes running at high speed in built-up areas will be the very last thing that we trust to robots. There are so many other things that will be automated first. It's reasonable to assume that we will see fully automated trains before fully automated cars.
2) Although a lot is being made of the incremental improvements to self-driving software, there is a lot of research about the danger of part-time autopilot. Autopilot in aircraft generally works well until it encounters an emergency, in which case a pilot has to go from daydreaming/eating/doing-something-else to dealing with a catastrophe in a matter of seconds. Full automation or no automation is often safer.
3) The unresolved/unresolvable issue of liability in an accident: is it the owner or the AI that is at fault?
4) The various "easy" problems that remain somewhat hard for driving AI to solve in a consistent way. Large stationary objects on motorways, small kids running into the road, cyclists, etc.
5) The legislative issues: at some point legislators have to say "self driving cars are now allowed", and create good governance around this. The general non-car-buying public has to get on board. These are non-trivial issues.
My alternative possible timeline interpretation is that two forces collide and make self-driving inevitable.
The first force is the insurance industry. It's really hard to argue that humans are less fallible than even today's self-driving setups, and at some point the underwriters will take note and start premium-blasting human drivers into the history books.
The second force is the power of numbers; as more and more self-driving cars come online, it becomes more and more practical to connect them together into a giant mesh network that can cooperate to share the roads and alert each other to dangers. Today's self-driving cars are cowboy loners that don't play well with others. This will evolve, especially with the 5G rollout.
1) Teslas crash much less often, mostly due to autopilot.
2) Tesla can harvest an incredible amount of data from one of their cars, and so they can calculate risk better.
Does Tesla see when you speed and increase your premiums?
I want to understand why being in a high-speed steel/plastic box with humans (overrated in some views) controlled by a computer scares you so much. Is it primal or are you working off data I do not have? Please share. I am being 100% sincere - I need to understand your perspective.
To re-state in brief: (individual) autonomous self-driving tech today tests "as safe as" ranging to "2-10x safer" than a typical human driver. This statistic will likely improve reliably over the next 5-10 years.
However, I am talking about an entire societal mesh network infrastructure of cars, communicating in real-time with each other and making decisions as a hive. As the ratio flips quickly from humans to machines, I absolutely believe that you would have to be quantifiably insane to want to continue endangering the lives of yourself, your loved ones and the people in your community by continuing to believe that you have more eyes, better reactions and can see further ahead than a mesh of AIs that are constantly improving.
So yeah... I don't understand your skepticism. Help me.
Look: I don't want my loved ones in the car that gets hacked, and I'm not volunteering yours, either. Sad things are sad, but progress is inevitable and I refuse to live in fear of something scary possibly happening.
It is with that logic that I can fly on planes, ride my bike, deposit my money in banks, have sex, try new foods and generally support Enlightenment ideals.
I would rather trust a mesh of cars than obsess over the interior design of a bunker.
If all the cars in the area know one of the cars is about to do something and can adjust accordingly, then it will be so much safer than what we have now that it is almost unimaginable.
It would seem at some point in the future, people are not going to even want to be on the road with a human driver who is not part of the network.
In 2017 AlphaGo could probably give a world champion somewhere between 1 and 3 stones.
From an algorithmic perspective the range between "unacceptably bad" and superhuman doesn't have to be all that wide and it isn't exactly possible to judge until the benefit of hindsight is available and it is clear who had what technology available. 15-20 years is realistic because of the embarrassingly slow rate of progress by regulators, but we should all feel bad about that.
We should be swapping blobs of meat designed for a world of <10 km/h for systems that are actually designed to move quickly and safely. I've lost more friends to car accidents than any other cause - there needs to be some acknowledgment that humans are statistically unsafe drivers.
I can drive myself home relatively safely in conditions where the computer can’t even find the road. We’re still infinitely more flexible and adaptable than computers.
It will be at least 20 years before my car will drive me home on a leaf or snow covered road. Should I drive on those roads? Most likely not, but my brain, designed for <10 km/h speeds, will cope with the conditions in the vast majority of cases.
Fully automated since 1998, and very successful.
I have driven in snow a few times when I was not sure I was even on the road. Or the only way I knew I was going the right direction was because I could vaguely see the brake lights of the car going 15 mph in front of me through the snow.
That is an easy problem to solve, though, because I simply should not have been driving in those conditions.
I am optimistic about solving those problems. Regulation always comes after the tech is invented. Cars have more opportunity to fail gracefully in an emergency: pull off onto the shoulder and coast to a stop, or bump into an inanimate object.
If/when we get fully automated cars, this kind of driverless Uber will become extremely common. Who bears the risk then? This is a complicated situation that can't be boiled down to "Of course the owner is to blame"
Not so in car-sharing pools, and there it's already materializing as a problem. How do you solve that with your 'robo-cab'? Tapping 'dirty/smelly' in your app and sending it back to the garage? What if you only notice it 5 minutes after you started the trip, already robo-riding along? What if you're allergic to something the former customer had on or around them? Or they were so high on opioids that even skin contact could make you drop? As can, and did, happen.
How do you solve for that without massive privacy intrusions? Or will they be the "new normal" because of all that Covid-19 trace app crap?
To go contrary to this is to invite outright bans of the tech.
Mmm, this sounds like exactly what people said when the PS3 was about to be released, and I can only recall one example where the PS3 was ever used in a cluster, and that probably was not very useful in the end.
The PS5 and Xbox Series X are commodity PC hardware, optimized for gaming, packaged with a curated app store.
Sony also won’t just sell you hundreds or thousands of them for some kind of groundbreakingly cheap cluster. They will say no, unless you’re GameStop or Walmart.
Everyone with a high-mid-range PC already has more horsepower than a PS5 and it’s not doing anything particularly innovative or groundbreaking.
The PS5 is going to be equivalent to a mid-range $100 AMD CPU, something not as good as an RTX 2080 or maybe even an RTX 2070, and a commodity NVMe SSD (probably cheap stuff like QLC) that would retail for about the same price as a 1TB 2.5” mechanical hard drive. It is not unique.
Data center servers optimize for entirely different criteria and game consoles do not make sense for anything coming close to that sort of thing. For example, servers optimize for 24/7 use and high density. The PS4 doesn’t fit in a 1U rack. It doesn’t have redundant power. Any cost savings on purchase price is wasted on paying your data center for the real estate, no joke. Then when the console breaks you have to pay your technician $100/hour in compensation, benefits, and taxes to remove and replace it.
An 8 core 2nd generation Zen chip appears to retail for $290. The PS5 reportedly has a custom GPU design, but for comparison a Radeon 5000 series card with equivalent CU count (36) currently retails for $270 minimum. Also, that GPU only has 6GB GDDR6 (other variants have 8GB) but the PS5 is supposed to have 16GB. And we still haven't gotten to the SSD, PSU, or enclosure.
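Just to make the arithmetic explicit, a throwaway sketch (using the retail figures above, which are my own estimates from current listings, not Sony's actual bill of materials):

```cpp
#include <cstdio>

int main() {
    // Commenter's retail estimates (assumptions, not Sony's BOM):
    const int cpu_usd = 290;  // 8-core Zen 2
    const int gpu_usd = 270;  // 36-CU Radeon 5000-series card (and only 6GB GDDR6)
    const int parts_so_far = cpu_usd + gpu_usd;
    // Already $560 before the extra GDDR6, SSD, PSU, and enclosure.
    std::printf("CPU + GPU at retail: $%d vs rumored console price: $400\n", parts_so_far);
}
```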
Of course it's not supposed to hit the market until the end of the year - perhaps prices will have fallen somewhat by then? (Also I don't expect Sony to be making any money off the hardware at those prices, so I agree that they're unlikely to sell them to anyone who won't buy games for them.)
I haven't looked at what a GPU equivalent would be, but by the time the PS5 hits the market, I doubt it's going to be anywhere near $270.
As long as there aren't any supply chain disruptions (as there are now).
It appears that the real killer is the hardware-accelerated decompression block pulling the data straight from SSD into CPU/GPU memory in the exact right location/format without any overhead, which isn't available on commodity PC hardware at the moment.
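For contrast, here's a minimal sketch of what the conventional PC path looks like today: SSD into system RAM, decompress on the CPU, then copy to the GPU. The `decompress` and `upload_to_gpu` helpers are hypothetical stubs, not any real engine's API; the point is just that every stage is an extra hop the PS5's dedicated block is said to skip.

```cpp
#include <fstream>
#include <iterator>
#include <vector>

// Hypothetical stubs standing in for a real codec (zlib, Kraken, ...) and a real
// graphics-API upload (Vulkan/D3D12 staging copy). Stubbed so the sketch compiles.
std::vector<char> decompress(const std::vector<char>& compressed) { return compressed; }
void upload_to_gpu(const std::vector<char>&) { /* staging buffer + PCIe copy go here */ }

// The conventional PC asset path: three hops, each one a copy and a stall.
void load_asset_conventionally(const char* path) {
    // 1) SSD -> system RAM, through the filesystem and page cache
    std::ifstream file(path, std::ios::binary);
    std::vector<char> compressed((std::istreambuf_iterator<char>(file)),
                                 std::istreambuf_iterator<char>());
    // 2) Decompress on the CPU: burns cores and produces a second copy
    std::vector<char> raw = decompress(compressed);
    // 3) System RAM -> VRAM over PCIe: a third copy
    upload_to_gpu(raw);
}

int main() { load_asset_conventionally("level0.pak"); }  // "level0.pak" is a made-up name
```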
I found some historical price data and I'm surprised - the 2700 really was $150 back in January! Vendors are price gouging the old ones now, and the 3700X is currently $295 on Newegg.
As far as the GPU goes, an 8GB from the 500 series (only 32 CU, released 2017) is still at least $140 today. And noting the memory again, that's 8GB GDDR5 versus (reportedly) 16GB GDDR6 so I'm skeptical the price will fall all that much relative to the 6GB card I mentioned.
I think console hardware cost is generally budgeted at a slight loss (or close to break-even) at the beginning of a console generation, and then drops over the ~7 year lifespan.
The fact that it can stream 5.5 GB/s from disk to RAM says otherwise. Commodity hardware, even high-end M.2 drives, can't match that.
* it’s my understanding that it directly shares RAM between the CPU and the GPU, which means way less latency and more throughput.
I would sure like to see some architectural upgrades like this in PC/server world though: I’d love an ML workstation where my CPU-GPU ram is shared and I can stream datasets directly into RAM at frankly outrageous speeds. That would make so many things so much easier.
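The closest thing you can play with on commodity hardware today is probably CUDA managed ("unified") memory: one pointer visible to both CPU and GPU, though under the hood it's driver-managed paging over PCIe rather than the physically shared GDDR6 pool the consoles reportedly have. A minimal sketch, assuming an NVIDIA GPU and the CUDA runtime:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t n = 1 << 20;
    float* data = nullptr;

    // One allocation, one pointer, usable from both the CPU and GPU.
    // The driver migrates pages on demand -- not the consoles' physically
    // unified memory, but a similar programming model.
    cudaMallocManaged(&data, n * sizeof(float));

    // Build/stream the dataset on the CPU...
    for (size_t i = 0; i < n; ++i) data[i] = static_cast<float>(i);

    // ...then hint the runtime to move it to GPU 0 before a kernel touches it.
    cudaMemPrefetchAsync(data, n * sizeof(float), /*dstDevice=*/0);
    cudaDeviceSynchronize();

    std::printf("first element: %f\n", data[0]);
    cudaFree(data);
}
```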
No, you pay your minimum wage junior IT assistant to unplug the broken one and plug in a new one. That's the point of commodity hardware - it's cheaper to buy and cheaper to support.
GTA 6 with the hardware in the new consoles will likely be spectacular.
There was significant interest in the grid-computing research community.
The only reason that stopped happening was because Sony killed it on purpose:
> On March 28, 2010, Sony announced it would be disabling the ability to run other operating systems with the v3.21 update due to security concerns about OtherOS. This update would not affect any existing supercomputing clusters, due to the fact that they are not connected to PSN and would not be forced to update. However, it would make replacing the individual consoles that compose the clusters very difficult if not impossible, since any newer models with the v3.21 or higher would not support Linux installation directly. This caused the end of the PS3's common use for clustered computing, though there are projects like "The Condor" that were still being created with older PS3 units, and have come online after the April 1, 2010 update was released.
And in case you were wondering, the reason Sony killed it is that they sell their consoles at a loss and make up for it through game sales (which, indirectly, is what made it so affordable for people interested in cluster computing). If the PS3 were merely bought for building cluster computers, they would end up with a net loss. (Nintendo is the only console maker that sells consoles at a profit.)
Is that the new marketing term for shared VRAM?
And if by “learned”, you also mean “were convinced by Mark Cerny“ (who is still leading design of the PS5), then also yes.
That seems like a straight waste of time for lightly customised hardware you'll be able to get off the shelf. And unless they've changed since, the specs you quote don't match the official reveal of 16GB and 10 teraflops. Not to mention the price hasn't been announced; the $400 price point is a complete guess (and pretty weird given the XbX is guessed to be 50% more… for a very similar machine).
The crowd that uses C++ needs raw pointers sometimes, and you can't really prevent bad pointers and buffer overflows when they are used. There is a reason why Rust, whose goal is to be a safer C/C++, supports unsafe code.
Smart pointers are a very good thing to have in the C++ toolbox, but they are not for every programmer. Game programmers, if I am not mistaken, tend to avoid them, as well as other features that make things happen between the lines, like RAII and exceptions.
The good thing about the messiness that is modern C++ is that everything is there, but you can pick what you want. If you write C++ code that looks like C, it will run like C, but if you don't want to see a single pointer, you have that option too.
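To make the "pick what you want" point concrete, a toy sketch (my own example, not from any real codebase):

```cpp
#include <cstdlib>
#include <memory>

struct Texture { unsigned char pixels[64 * 64]; };

// C-style: raw pointer, manual lifetime. Runs like the C it looks like, and
// nothing stops you from forgetting the free() or using the pointer afterwards.
void c_style() {
    Texture* t = static_cast<Texture*>(std::malloc(sizeof(Texture)));
    // ... use t ...
    std::free(t);
}

// Modern style: unique_ptr + RAII. The release is generated for you on every
// exit path, including exceptions, with essentially no runtime overhead.
void modern_style() {
    auto t = std::make_unique<Texture>();
    // ... use t ...
}   // freed here automatically

int main() { c_style(); modern_style(); }
```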
Would love some links to read over weekend. Thanks!
- std::weak_ptr (non-owning reference to a shared_ptr; knows when the parent is freed)
- move semantics
- move capture in lambdas
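A contrived few lines showing all three (just to illustrate the shapes, nothing more):

```cpp
#include <cstdio>
#include <memory>
#include <string>
#include <utility>
#include <vector>

int main() {
    // std::weak_ptr: a non-owning observer that knows when the owner is gone.
    std::weak_ptr<int> observer;
    {
        auto owner = std::make_shared<int>(42);
        observer = owner;
        if (auto locked = observer.lock()) std::printf("alive: %d\n", *locked);
    }   // owner destroyed here
    std::printf("expired: %s\n", observer.expired() ? "yes" : "no");

    // Move semantics: transfer the buffer instead of copying a million ints.
    std::vector<int> big(1'000'000, 7);
    std::vector<int> stolen = std::move(big);   // 'big' is now empty but valid

    // Move capture in a lambda (C++14): the lambda owns the string outright.
    std::string name = "ps5";
    auto greet = [msg = std::move(name)] { std::printf("hello %s\n", msg.c_str()); };
    greet();
}
```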
To be honest, learning Rust has made me a better C++ programmer as well. Having to really think about lifetimes and ownership from an API perspective has been really neat. It's not so much that I wasn't concerned about it before; it's more that I now strive to express these conditions in code.
However I feel like most of the heavy lifting features came with C++11.
Span, optional, variant and string_view are nice additions to the toolkit, but more as enhancements than the paradigm shift of C++11 (move, unique_ptr, lambdas et al.).
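For anyone who hasn't touched them yet, here's roughly what a few of those additions look like in practice (C++17/20 toy example; string_view gets its own treatment a bit further down):

```cpp
#include <cstdio>
#include <optional>
#include <span>
#include <variant>
#include <vector>

// optional: "maybe a value", without sentinel values or out-parameters.
std::optional<int> parse_port(const char* s) {
    if (!s || !*s) return std::nullopt;
    return 8080;  // toy parse
}

// span: a non-owning view over contiguous memory, whatever container owns it.
int sum(std::span<const int> xs) {
    int total = 0;
    for (int x : xs) total += x;
    return total;
}

int main() {
    std::vector<int> v{1, 2, 3};
    std::printf("sum=%d\n", sum(v));

    // variant: a type-safe union.
    std::variant<int, double> id = 3.5;
    std::printf("holds double? %s\n", std::holds_alternative<double>(id) ? "yes" : "no");

    if (auto port = parse_port("8080")) std::printf("port=%d\n", *port);
}
```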
It's funny, because while it's certainly become more influential lately, that subculture existed as a niche in the C++ world before Rust and before C++11. So much so that when I first heard about Rust I thought "these are C++ people."
"string_view" is a borrow of a slice of a string. Since C++ doesn't have a borrow checker, it's possible to have a dangling string_view if the string_view outlives the underlying string. This is a memory safety error.
Rust has educated people to recognize this situation. Now it's standard terminology to refer to this as a borrow, which helps. Attempting to retrofit Rust concepts to C++ is helping, but often they're cosmetic, because they say the right thing, but aren't checked. However, saying the right thing makes it possible to do more and more static checking.
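Concretely, this is the bug (toy example): it compiles cleanly, and it's exactly the kind of thing a borrow checker rejects at compile time.

```cpp
#include <string>
#include <string_view>

// Returns a view into a std::string that dies when the function returns:
// the "borrow" outlives the thing it borrows from.
std::string_view first_word() {
    std::string line = "hello world";
    return std::string_view(line).substr(0, 5);   // view into 'line'
}   // 'line' destroyed here; the returned view now dangles

int main() {
    std::string_view w = first_word();
    // Using 'w' here would read freed memory -- undefined behaviour. Rust's
    // borrow checker refuses the equivalent; C++ accepts it silently (though
    // some compilers and sanitizers can catch simple cases like this one).
    (void)w;
}
```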
Sure, C++ doesn't have a borrow checker, but these types encourage the idea of "reifying" lack of ownership rather than keeping it ad hoc.
Unfortunately there is no string view, so you need to copy the substrings or use pointers/indices. I tried to build a string view, but the Free Pascal compiler is not smart enough to keep a struct of 2 elements in registers.
Lifetime management has always been there for any developer dealing with resources in any language. Over the years, languages and specialized extensions and tools have offered different solutions to help with that problem.
What Rust has brought is compiler-checked, explicit lifetime tracking embedded into a mainstream language.
Anyway, you can jailbreak a PS4 on 5.0.5 firmware, and there are unpublished exploits in existence that are waiting for the PS5 to be released.
I have been running a home server (100% self-hosted, including email) on a J1900 ITX motherboard with 20 TB of disk space (ZFS RAID-Z) for years. No need to bother with a PS4/PS5.
Jailbreaking could be nice for other <wink> unnamed purposes.
So excited for this as a PC gamer; hardware prices are going to have to plummet. I don't think supercomputers are likely: the PS3 was a candidate because there was [initially] official support for installing Linux on the thing. Sony terminated that support, and I really can't imagine them reintroducing it for the PS5.
They have zero incentive to subsidize supercomputers. They're in the business of trading hardware for royalty, store, and subscription payments.
A cynic would say they wanted to boost sales on newer displays, but it seems more likely that a bug of some kind came up in a driver (I was unaware of any problems, but that's hardly proof of anything) and they just decided it was easier to cut support of those displays than to fix the problem.
Support forums filled with complaints by the dozens of pages, but Sony didn't care, because why should they? I'm sure somebody did the calculation that said we weren't a big enough demographic to matter.
So I learnt very recently that the PS5 has a cool approach where all memory is shared directly between the CPU and the GPU (if this is wrong someone please correct me).
It would be really interesting to see how well the GPU in this could handle DL-specific workloads, and, if necessary, could it be tweaked to do so?
Because if so, that could be an absolute weapon of a DL workstation. If it does turn out to be feasible, I think it could be very easily justifiable to buy a few of those (for less than it would cost you to rent a GPU-equipped instance from a major cloud provider for a couple of months) and have a pretty capable cluster. Machines get outdated or cloud provider costs come down? Take them home and use them as actual gaming consoles. Win win.
This is also useful in computers, since adding more RAM also adds more VRAM.
As for C++ safety; I find modern C++ hard to read - are they going to be able to do safety but end up with something that's actually harder to use/read than Rust?
My GPS can hardly navigate most of the world, so I'm not really excited, and if the only criterion for a self-driving car is driving itself on a highway, then color me uninterested.
I don't think self-driving cars will be able to traverse the majority of the world's traffic anytime soon. The roads are just too difficult to maintain for human-free driving, with the exception of a few major grid-layout cities in America, which makes the whole ordeal pretty boring.
8 CPU cores at 3.2GHz each?
And it's not contrived, since we've seen situations of Tesla Autopilot behaving weirdly when it sees people on the sides of billboards, trucks, etc.
Andrej Karpathy's most recent presentation showed how his team trained a custom detector for stop signs with an "Except right turn" text underneath them. How are they going to scale that to a system that understands any text sign in any human language? The answer is that they're not even trying, which tells you that Tesla is not building a self-driving system.
Even so, it is quite possible to train for this in general. Some human drivers will notice the sign and will override autopilot when it attempts to stop, this triggers a training data upload to Tesla. Even if the neural net does not 'understand' the words on the sign, it will learn that a stop is not necessary when that sign is present in conjunction with a stop sign.
LiDAR also isn't a silver bullet. Similar attacks are possible, from simply shining a bright light that overwhelms the sensor to more advanced attacks such as spoofing an adversarial signal.
Going from 3 nines of safety to 7 nines is going to be the real challenge.
Also we have a pattern and object detection computer behind our eyes that nothing on this planet even remotely comes close to.
Mantis shrimp have us beat when it comes to quickly detecting colors, since they have twelve types of photoreceptors vs. our three.
Insects have us beat when it comes to anything in the UV spectrum (we're completely blind to it). Many insects also cannot move their eyes but still have to use vision for collision detection and navigation.
Birds have us beat when it comes to visual acuity. Most of them also do not move their eyeballs the way we do, but they still have excellent visual navigation skills.
Human color detection is about six orders of magnitude greater than mantis shrimp's.
Not defending those who say that LIDAR isn't useful/important in self-driving cars, but this assertion is only marginally true today and won't be true at all for much longer. See https://arxiv.org/pdf/1706.06969 (2017), for instance.
I suspect if you crunch the numbers, accidents are going to be above normal for a while after Covid-19 reopenings.
Anecdotally, I'm seeing people doing mind-blowingly stupid things on the roadways right now. It seems like people have forgotten how to drive. I suspect the issue is that people rely too much on other cars to cue them how to behave, and the concentration of cars on the road is too low right now.
(It could also be that a constant accident rate cleans off the worst of the drivers with regularity as they get into accidents and then wind up out of circulation. I really hope that isn't why ... that would be really depressing.)
And we know we shouldn't drive tired or angry or intoxicated but obviously it still happens.
As soon as you get experience-sharing - culture, as humans call it, but updateable in real time as fast as data networks allow - you can build an AI mesh that is aware of local driving conditions and learns all the specific local "map" features it experiences. And then generalises from those.
So instead of point-and-hope rule inference you get local learning of global invariants, modified by specific local exceptions which change in real time.
I am so upset with the state of the auto market when it comes to pricing.
Manufacturing margins are enormous when it comes to cars.
The F150 is no different.
A two seater (effectively) vehicle stamped out of metal and plastic should never cost as much as those things do.
I hate car companies and their pricing models.