Ask HN: What startup/technology is on your 'to watch' list?
1006 points by iameoghan 13 days ago | 669 comments
For me, it's a couple of interesting technology products that help me in my day-to-day job:

1. Hasura
2. Strapi
3. Forest Admin (super interesting, although I can never get it to connect to a Hasura backend on Heroku ¯\_(ツ)_/¯)
4. Integromat
5. Appgyver

There are many others that I have my eye on, such as NodeRed [6], but have yet to use. I do realise that these are all low-code related; however, I would be super interested in being made aware of other cool & upcoming tech that is making waves.

What's on your 'to watch' list?

[1]https://hasura.io/

[2]https://strapi.io/

[3]https://www.forestadmin.com/

[4]https://www.appgyver.com/

[5]https://www.integromat.com/en

[6]https://nodered.org/


Self-driving cars. Now that the hype is over and the fake-it-til-you-make-it crowd has tanked, there's progress. Slowly, the LIDARs get cheaper, the radars get more resolution, and the software improves.

UE5's rendering approach. They finally figured out how to use the GPU to do level of detail. Games can now climb out of the Uncanny Valley.

The Playstation 5. 8 CPUs at 3.2GHz each, 24GB of RAM, 14 teraflops of GPU, and a big solid state disk. That's a lot of compute engine for $400. Somebody will probably make supercomputers out of rooms full of those.

C++ getting serious about safety. Buffer overflows and bad pointers should have been eliminated decades ago. We've known how for a long time.

Electric cars taking over. The Ford F-150 and the Jeep Wrangler are coming out in all-electric forms. That covers much of the macho market. And the electrics will out-accelerate the gas cars without even trying hard.

Utility scale battery storage. It works and is getting cheaper. Wind plus storage plus megavolt DC transmission, and you can generate power in the US's wind belt (the Texas panhandle north to Canada) and transmit it to the entire US west of the Mississippi.


> Self-driving cars. Now that the hype is over and the fake-it-til-you-make-it crowd has tanked, there's progress. Slowly, the LIDARs get cheaper, the radars get more resolution, and the software improves.

Still don't see fully (fully automated) self driving cars happening any time soon:

1) Heavy steel boxes running at high speed in built-up areas will be the very last thing that we trust to robots. There are so many other things that will be automated first. It's reasonable to assume that we will see fully automated trains before fully automated cars.

2) Although a lot is being made of the incremental improvements to self-driving software, there is a lot of research about the danger of part-time autopilot. Autopilot in aircraft generally works well until it encounters an emergency, in which case a pilot has to go from daydreaming/eating/doing-something-else to dealing with a catastrophe in a matter of seconds. Full automation or no automation is often safer.

3) The unresolved/unresolvable issue of liability in an accident: is it the owner or the AI who is at fault?

4) The various "easy" problems that remain somewhat hard for driving AI to solve in a consistent way. Large stationary objects on motorways, small kids running into the road, cyclists, etc.

5) The legislative issues: at some point legislators have to say "self driving cars are now allowed", and create good governance around this. The general non-car-buying public has to get on board. These are non-trivial issues.


You could be right.

My alternative possible timeline interpretation is that two forces collide and make self-driving inevitable.

The first force is the insurance industry. It's really hard to argue that humans are less fallible than even today's self-driving setups, and at some point the underwriters will take note and start premium-blasting human drivers into the history books.

The second force is the power of numbers; as more and more self-driving cars come online, it becomes more and more practical to connect them together into a giant mesh network that can cooperate to share the roads and alert each other to dangers. Today's self-driving cars are cowboy loners that don't play well with others. This will evolve, especially with the 5G rollout.


This reminds me that Tesla itself is starting to offer insurance, and it can do so at a much lower rate. I assume this is because:

1) Teslas crash much less often, mostly due to autopilot.

2) Tesla can harvest an incredible amount of data from one of their cars and so they can calculate risk better


how much does a Tesla know about the state of its driver, e.g. to detect distraction, tiredness or intoxication?

Does Tesla see when you speed and increase your premiums?


Having high-speed steel boxes carrying human lives and who knows what else react to messages from untrusted sources. Hmm. What could go wrong?

I'm going to ignore the snark and pretend as though this is a good faith argument, because we're on Hacker News - and I believe that means you're a smart person I might disagree with, and I'm challenging you.

I want to understand why being in a high-speed steel/plastic box with humans (overrated in some views) controlled by a computer scares you so much. Is it primal or are you working off data I do not have? Please share. I am being 100% sincere - I need to understand your perspective.

To re-state in brief: (individual) autonomous self-driving tech today tests "as safe as" ranging to "2-10x safer" than a typical human driver. This statistic will likely improve reliably over the next 5-10 years.

However, I am talking about an entire societal mesh network infrastructure of cars, communicating in real-time with each other and making decisions as a hive. As the ratio flips quickly from humans to machines, I absolutely believe that you would have to be quantifiably unsane to want to continue endangering the lives of yourself, your loved ones and the people in your community by continuing to believe that you have more eyes, better reactions and can see further ahead than a mesh of AIs that are constantly improving.

So yeah... I don't understand your skepticism. Help me.


The risk is that a bad actor could hack into this network and control the cars.

Security-minded thinking dictates that we should move forward with the assumption that it will happen. The important outcome is not "we can't do anything as a society because bad men could hurt us" but "how do we mitigate and minimize this kind of event so that progress can continue".

Look: I don't want my loved ones in the car that gets hacked, and I'm not volunteering yours, either. Sad things are sad, but progress is inevitable and I refuse to live in fear of something scary possibly happening.

It is with that logic that I can fly on planes, ride my bike, deposit my money in banks, have sex, try new foods and generally support Enlightenment ideals.

I would rather trust a mesh of cars than obsess over the interior design of a bunker.


Totally agree.

If all the cars in the area know one of the cars is about to do something and can adjust accordingly, then it will be so much safer than what we have now that it is almost unimaginable.

It would seem at some point in the future, people are not going to even want to be on the road with a human driver who is not part of the network.


The hype around self driving cars is still very much around. I tend to view any debate about fully autonomous cars (level 5) as unserious if it works with less than a 15-20 year time horizon.

In 2014 top humans could give a good Go playing AI 4 stones (a handicap that pushes games outside of being between comparable players).

In 2017 AlphaGo could probably give a world champion somewhere between 1 and 3 stones.

From an algorithmic perspective the range between "unacceptably bad" and superhuman doesn't have to be all that wide and it isn't exactly possible to judge until the benefit of hindsight is available and it is clear who had what technology available. 15-20 years is realistic because of the embarrassingly slow rate of progress by regulators, but we should all feel bad about that.

We should be swapping blobs of meat designed for a world of <10 km/h for systems that are actually designed to move quickly and safely. I've lost more friends to car accidents than any other cause - there needs to be some acknowledgment that humans are statistically unsafe drivers.


When you mention AlphaGo, you're committing a fallacy so famous that it has a name and a Wikipedia page (https://en.wikipedia.org/wiki/Moravec%27s_paradox). The things that are easy for humans are very different from those that are easy for robots.

I don't disagree that computers are better drivers under certain conditions, but that's not the point.

I can drive myself home relatively safely in conditions where the computer can't even find the road. We're still infinitely more flexible and adaptable than computers.

It will be at least 20 years before my car will drive me home on a leaf or snow covered road. Should I drive on those roads? Most likely not, but my brain, designed for <10 km/h speeds, will cope with the conditions in the vast majority of cases.



> Its reasonable to assume that we will see fully automated trains before fully automated cars.

https://en.wikipedia.org/wiki/Paris_M%C3%A9tro_Line_14

Fully automated since 1998, and very successful.


There were automated railways 30 years before that too. https://en.m.wikipedia.org/wiki/Automatic_train_operation

I've lived in Washington DC long enough to remember back when our subway was allowed to run in (mostly) automated mode. There was a deadly accident that wasn't directly the fault of the Automatic Train Control (the human operator hit the emergency brake as soon as she saw the parked train ahead of her) but it still casts light on some of the perils of automation.

Another hard problem for AI is to "see" through rain

That's hard for humans too. I think we need to give up on the idea that fully autonomous driving will be perfect.

I'm obviously talking about matching human performance and this is the hard problem

There is also an easy solution of just staying put.

I have driven in snow a few times when I was not sure I was even on the road. Or the only way I knew I was going in the right direction was that I could vaguely see the brake lights of the car going 15mph in front of me through the snow.

That is an easy problem to solve though because I simply should not have been driving in that.



Humans are pretty terrible at driving in rain and snow as well.

We already have fully automated trains. The DLR in London.

I am optimistic about solving those problems. Regulation always comes after the tech is invented. Cars have more opportunity to fail gracefully in an emergency; pull off onto the shoulder and coast to a stop, or bump into an inanimate object.


Of course the owner is to blame.

What if it's a rental or a lease? In a fully automated car, that's basically a taxi. I don't think I should bear the responsibility of my taxi driver.

If/when we get fully automated cars, this kind of driverless Uber will become extremely common. Who bears the risk then? This is a complicated situation that can't be boiled down to "Of course the owner is to blame"


That is the most puzzling thing to me. Not from a technical, but from a societal ( https://en.wikipedia.org/wiki/Tragedy_of_the_commons ) point of view. Compare with public mass transit: except in Singapore and Japan, it is mostly dirty, in spite of cleaning staff working hard and other people being around. In a Taxi/Uber you have the driver watching, and other rentals are usually inspected after, and immediately before the next rent-out, just to make sure.

Not so in car-sharing pools, and there it's already materializing as a problem. How do you solve that with your 'robo-cab'? Tapping on dirty/smelly in your app and sending it back to the garage? What if you notice it only 5 minutes after you started the trip, already robo-riding along? What if you have allergies against something the former customer had on/around them? Or they were so high on opioids that even a touch of the skin could make you drop? As can, and did, happen. How do you solve for that without massive privacy intrusions? Or will they be the "new normal" because of all that Covid-19 trace app crap?


Counterpoint: In a fully-autonomous situation, of course the AI is to blame.

I think we need to consider that case when/if it happens. For the foreseeable future there needs to be a responsible driver present.

To go contrary to this is to invite outright bans of the tech.


> The Playstation 5. 8 CPUs at 3.2GHz each, 24GB of RAM, 14 teraflops of GPU, and a big solid state disk. That's a lot of compute engine for $400. Somebody will probably make supercomputers out of rooms full of those.

Mmm, this sounds like exactly what people said at the time the PS3 was going to be released, and I can only recall one example where the PS3 was ever used in a cluster, and that probably was not very useful in the end.


This exactly.

The PS5 and Xbox Series X are commodity PC hardware, optimized for gaming, packaged with a curated App Store.

Sony also won’t just sell you hundreds or thousands of them for some kind of groundbreakingly cheap cluster. They will say no, unless you’re GameStop or Walmart.

Everyone with a high-mid-range PC already has more horsepower than a PS5 and it’s not doing anything particularly innovative or groundbreaking.

The PS5 is going to be equivalent to a mid-range $100 AMD CPU, something not as good as an RTX 2080 or maybe even an RTX 2070, and a commodity NVMe SSD (probably cheap stuff like QLC) that would retail for about the same price as a 1TB 2.5" mechanical hard drive. It is not unique.

Data center servers optimize for entirely different criteria and game consoles do not make sense for anything coming close to that sort of thing. For example, servers optimize for 24/7 use and high density. The PS4 doesn’t fit in a 1U rack. It doesn’t have redundant power. Any cost savings on purchase price is wasted on paying your data center for the real estate, no joke. Then when the console breaks you have to pay your technician $100/hour in compensation, benefits, and taxes to remove and replace it.


I think you're vastly understating current hardware prices.

An 8 core 2nd generation Zen chip appears to retail for $290. The PS5 reportedly has a custom GPU design, but for comparison a Radeon 5000 series card with equivalent CU count (36) currently retails for $270 minimum. Also, that GPU only has 6GB GDDR6 (other variants have 8GB) but the PS5 is supposed to have 16GB. And we still haven't gotten to the SSD, PSU, or enclosure.

Of course it's not supposed to hit the market until the end of the year - perhaps prices will have fallen somewhat by then? (Also I don't expect Sony to be making any money off the hardware at those prices, so I agree that they're unlikely to sell them to anyone who won't buy games for them.)


The Ryzen 2nd-gen 2700 is out of stock currently, but it used to go for as low as $135-150; it's absolutely not a $290 CPU (perhaps you're looking at 3rd gen Ryzen? The 3700X?).

I haven't looked at what a GPU equivalent would be, but by the time the PS5 hits the market, I doubt it's going to be anywhere near $270.

As long as there aren't any supply chain disruptions (as there are now).

It appears that the real killer is the hardware-accelerated decompression block pulling the data straight from SSD into CPU/GPU memory in the exact right location/format without any overhead, which isn't available on commodity PC hardware at the moment.


Ack my bad! I wrote "2nd generation Zen" but I meant to write "Zen 2" which is (confusingly) the 3rd generation.

I found some historical price data and I'm surprised - the 2700 really was $150 back in January! Vendors are price gouging the old ones now, and the 3700X is currently $295 on Newegg.

As far as the GPU goes, an 8GB from the 500 series (only 32 CU, released 2017) is still at least $140 today. And noting the memory again, that's 8GB GDDR5 versus (reportedly) 16GB GDDR6 so I'm skeptical the price will fall all that much relative to the 6GB card I mentioned.


Zen2 = Ryzen 3rd, not 2nd.

> Also I don't expect Sony to be making any money off the hardware at those prices, so I agree that they're unlikely to sell them to anyone who won't buy games for them.

I think console hardware cost is generally budgeted at a slight loss (or close to break-even) at the beginning of a console generation, and then drops over the ~7 year lifespan.


> Everyone with a high-mid-range PC already has more horsepower than a PS5 and it’s not doing anything particularly innovative or groundbreaking.

The fact that it can stream 5.5 GB/s from disk to RAM says otherwise. Commodity hardware, even high end M.2 drives, can't match that.

* it’s my understanding that it directly shares RAM between the CPU and the GPU which means way less latency and more throughput.


There are high end drives on the PC market that can match and surpass that, but they are like $2000+ :) Linus talked about that topic last week: https://youtu.be/8f8Vhoh9Y3Q?t=1607

Watching some of that, and doing a bunch of reading on the PS5, it seems that "some drives" can kind of get close, but the PS5 physically has custom, dedicated hardware that can directly move data from the SSD straight into shared CPU-GPU memory with minimal input/work from the CPU, and that's a fundamental architectural advantage PCs don't have (yet).

I would sure like to see some architectural upgrades like this in PC/server world though: I’d love an ML workstation where my CPU-GPU ram is shared and I can stream datasets directly into RAM at frankly outrageous speeds. That would make so many things so much easier.


While the individual components might not be as fast as a high end PC, the way the system is architected and the components are connected to each other (eg. super high bandwidth from SSD to CPU/GPU memory) gives it some advantages, especially for gaming. For the price it certainly is impressive.

New console releases don't need to be particularly innovative or groundbreaking. They greatly improve the amount of resources available to game devs, and game development is console-centric in the first place. Usually, after a new console launches, game visual quality jumps quite noticeably within a couple of years. It's beneficial for everyone, even if you are not a console gamer yourself.

> Then when the console breaks you have to pay your technician $100/hour in compensation, benefits, and taxes to remove and replace it.

No, you pay your minimum wage junior IT assistant to unplug the broken one and plug in a new one. That's the point of commodity hardware - it's cheaper to buy and cheaper to support.


Faster consoles are good if you're a PC gamer though, since games end up deployed for all three and consoles are the brake on progress.

GTA 6 with the hardware in the new consoles will likely be spectacular.


Are you referring to the time the US air force built a cluster out of 2000 PS3s? Seems good.

Well, that just goes to show that you shouldn't trust hearsay, even if that hearsay is your own vague recollection of something. There is a Wikipedia page dedicated to the ways the PS3 was used as a cheap high-performance computing cluster:

https://en.wikipedia.org/wiki/PlayStation_3_cluster

The only reason that stopped happening was because Sony killed it on purpose:

> On March 28, 2010, Sony announced it would be disabling the ability to run other operating systems with the v3.21 update due to security concerns about OtherOS. This update would not affect any existing supercomputing clusters, due to the fact that they are not connected to PSN and would not be forced to update. However, it would make replacing the individual consoles that compose the clusters very difficult if not impossible, since any newer models with the v3.21 or higher would not support Linux installation directly. This caused the end of the PS3's common use for clustered computing, though there are projects like "The Condor" that were still being created with older PS3 units, and have come online after the April 1, 2010 update was released.

And in case you were wondering, the reason Sony killed it was that they sell their consoles at a loss and make up for that through game sales (which indirectly is what made it so affordable for people interested in cluster computing). If the PS3 is merely bought for creating cluster computers, they would end up with a net loss. (Nintendo is the only console maker that sells consoles at a profit.)


The PS3 was used for Debian clusters at my university, and would have been on a larger scale if not for a) the huge cost in my country at launch, b) limited availability, and c) the "OtherOS" fiasco.

There was significant interest in the grid computing research community.


The key differentiator is x86 vs PPC and 1 TB/s bus.

PPC was ok, the killer was that you had to write code specifically for the Cell (co)processors and their limited memory addressing if you wanted the promised compute performance.

> 1 TB/s bus

Is that the new marketing term for shared VRAM?


Most of that power came from the Cell processor, which was awesome but supposedly hard to develop for. I assume they’ve learned that lesson.

If by “learned” you mean changed focus from making it (uniquely) awesome and instead making it easier to develop for: yes.

And if by “learned”, you also mean “were convinced by Mark Cerny“ (who is still leading design of the PS5), then also yes.


> The Playstation 5. 8 CPUs at 3.2GHz each, 24GB of RAM, 14 teraflops of GPU, and a big solid state disk. That's a lot of compute engine for $400. Somebody will probably make supercomputers out of rooms full of those.

That seems like a straight waste of time for lightly customised hardware you'll be able to get off the shelf. And unless they've changed since, the specs you quote don't match the official reveal of 16GB and 10 teraflop. Not to mention the price hasn't been announced, the $400 pricepoint is a complete guess (and pretty weird given the XbX is guessed for 50% more… for a very similar machine).


GPU-solved LOD won't save video games from the uncanny valley. In some cases it will make it worse. It makes for nice statues and static landscapes, though.

> C++ getting serious about safety. Buffer overflows and bad pointers should have been eliminated decades ago. We've known how for a long time.

The crowd that uses C++ needs raw pointers sometimes, and you can't really prevent bad pointers and buffer overflows when they are used. There is a reason why Rust, whose goal is to be a safer C/C++, supports unsafe code.

Smart pointers are a very good thing to have in the C++ toolbox, but they are not for every programmer. Game programmers, if I am not mistaken, tend to avoid them, as well as other features that make things happen between the lines, like RAII and exceptions.

The good thing about that messiness that is modern C++ is that everything is here, but you can pick what you want. If you write C++ code that looks like C, it will run like C, but if you don't want to see a single pointer, you have that option too.


Maybe it was a purposeful reference, but PlayStations have indeed been linked to create a supercomputer: https://phys.org/news/2010-12-air-playstation-3s-supercomput...

Even before that link: the PS2 Linux kit was used back in 2003.

https://web.archive.org/web/20041120084657/http://arrakis.nc...


> C++ getting serious about safety. Buffer overflows and bad pointers should have been eliminated decades ago. We've known how for a long time.

Would love some links to read over weekend. Thanks!


Things like:

- std::string_view

- std::span

- std::unique_ptr

- std::shared_ptr

- std::weak_ptr (non-owning reference to a shared_ptr; knows when the parent is freed)

- ranges

- move semantics

- move capture in lambdas

- std::variant

- std::optional

To be honest, learning rust has made me a better c++ programmer as well. Having to really think about lifetimes and ownership from an API perspective has been really neat. It's not so much that I wasn't concerned about it before, more that I strive to be more expressive of these conditions in code.
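
For anyone curious, here's a minimal sketch of how a few of those pieces fit together (purely illustrative names, not from any real codebase), just to make it concrete:

    #include <iostream>
    #include <memory>
    #include <optional>
    #include <string>
    #include <string_view>
    #include <vector>

    // Ownership is explicit: the vector owns the Widgets via unique_ptr,
    // and they are freed automatically when the vector goes out of scope.
    struct Widget {
        std::string name;
    };

    // string_view is a cheap, non-owning view -- fine as a parameter,
    // as long as it never outlives the string it points into.
    std::optional<std::size_t> find_widget(
            const std::vector<std::unique_ptr<Widget>>& widgets,
            std::string_view name) {
        for (std::size_t i = 0; i < widgets.size(); ++i) {
            // optional<size_t> instead of the old "-1 means not found" trick
            if (std::string_view(widgets[i]->name) == name) return i;
        }
        return std::nullopt;
    }

    int main() {
        std::vector<std::unique_ptr<Widget>> widgets;
        widgets.push_back(std::make_unique<Widget>(Widget{"alpha"}));
        widgets.push_back(std::make_unique<Widget>(Widget{"beta"}));

        if (auto idx = find_widget(widgets, "beta")) {
            std::cout << "found at index " << *idx << "\n";
        }
        // No delete, no raw owning pointers anywhere.
    }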


Seconded that dipping a toe in to Rust has changed how I think about C++ and object ownership. Loose pointers and copy constructors now make me feel un-clean! Move ftw.

However I feel like most of the heavy lifting features came with C++11. Span, optional, variant and string_view are nice additions to the toolkit but more as enhancements rather than the paradigm shift of C++11 (move, unique_ptr, lambdas et-al).


> Seconded that dipping a toe in to Rust has changed how I think about C++ and object ownership. Loose pointers and copy constructors now make me feel un-clean! Move ftw.

It's funny, because while it's certainly become more influential lately, that subculture existed as a niche in the C++ world before Rust and before C++11. So much so that when I first heard about Rust I thought "these are C++ people."


The original (and long dead) Rust irc channel used to be full of C++ people chatting with OCaml people. Those were the days :)

That entirely matches my idea of how Rust came to be, some sort of pragmatic co-development across two different philosophical camps. In many ways, Rust is a spiritual successor to both languages, if only it was easier to integrate with C++.

lol i start most of my big objects by deleting the copy constructor and adding a clone member func :P

string_view, span and ranges are not conducive to safety, quite the opposite.

Yeah, if anything, C++ is getting less serious about safety by piling features over features. Just write Rust instead.

can you explain why you think that?

Things like "It is the programmer's responsibility to ensure that std::string_view does not outlive the pointed-to character array."

"string_view" is a borrow of a slice of a string. Since C++ doesn't have a borrow checker, it's possible to have a dangling string_view if the string_view outlives the underlying string. This is a memory safety error.

Rust has educated people to recognize this situation. Now it's standard terminology to refer to this as a borrow, which helps. Attempting to retrofit Rust concepts to C++ is helping, but often they're cosmetic, because they say the right thing, but aren't checked. However, saying the right thing makes it possible to do more and more static checking.
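
To make that concrete, here's a tiny, deliberately broken example of the dangling case. It compiles, and reading the view is undefined behaviour; some compiler warnings and sanitizers catch simple cases like this, but the language itself doesn't require it:

    #include <iostream>
    #include <string>
    #include <string_view>

    // A dangling "borrow": the returned view points into a std::string
    // that is destroyed when the function returns.
    std::string_view greeting() {
        std::string s = "hello, world";
        return s;                // implicitly converts to string_view; s dies here
    }

    int main() {
        std::string_view v = greeting();
        std::cout << v << "\n";  // undefined behaviour: v points at freed memory
    }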


But surely it's a step towards more safety. Compare to passing char * around or ref/ptr to string.

Sure, C++ doesn't have a borrow checker, but these types encourage the idea of "reifying" lack of ownership rather than keeping it ad hoc.


I have always used Pascal for memory safe strings. Reference counted, mutable xor aliased, bound checked: their safety is perfect.

Unfortunately there is no string view, so you need to copy the substrings or use pointers/indices. I tried to build a string view, but the freepascal compiler is not smart enough to keep a struct of 2 elements in registers.


You don't infer potential ownership from a C++ ref. Likewise for char* strings unless it is to interface with a C api, in which case you will keep it anyway.

Wow. I hadn't read up much on string_view but I guess I assumed it required a shared_ptr to the string. Odd decision not to.

Rust hasn’t "educated people" about "borrowing".

Lifetime management has always been there for any developer dealing with resources in any language. Over the years, languages and specialized extensions and tools have offered different solutions to help with that problem.

What Rust has brought is a system of explicit lifetime tracking, checked by the compiler, embedded in a mainstream language.


Static analysis tools like PVS-Studio are amazing. Software verification like CompCert, where compilation includes a certificate of correctness, is farther away for C++ but will someday be usable for it.

I never really paid attention to consoles (not a gamer in any way) but the ps5 sounds impressive. Shame Sony have a very Apple-like approach to their products and lock everything up. If they bundled up that hardware with linux support, sales would go through the roof and into orbit. I'd personally get a bunch of these and build myself a cluster.

Sony is selling them with little to no profit as they expect to earn on games. Guess why their capable and cheap hardware is locked down to prevent using it for anything except playing bought games ;)

Anyway, you can jailbreak a PS4 on 5.0.5 firmware, and there are unpublished exploits in existence that are waiting for the PS5 to be released.


Looks like I found that "home server" to replace my over-use of cloud resources that I've been looking for!

Well, let me recommend something else: check ASRock mini-ITX motherboards with an on-board CPU. You can get those for ~150 euros, throw in some RAM (~60 euros) and some disk (~100 euros) + some chassis (Phenom mini-ITX for instance, ~100 euros). For a home server this will work great :)

I have been running a home server (100% self hosted, including email) on a J1900-ITX motherboard with 20TB of disk space (zraid) for years. No need to bother with a PS4/PS5.


Well, the bundle you described would be over 400€, and you can purchase a used PS4 for at least half that price, or even cheaper.

Yes, but the PS4 is a gaming rig and you will have to jailbreak it on every reboot. It depends on what you intend to run; a Raspberry Pi 4 and an SD card could be more than enough for some people. Those prices were rough estimates; my motherboard with CPU has been in there since 2014 and is now $60 while still being more than enough, and by going minimal (RAM, chassis, disk - with a PS4 you will get 1TB at most) you can pull it off under the PS4 price. In the end, if you divide those 400 euros by 6 years, you are at a price of 5.55 euro/month (not to mention you can reuse the chassis and disks when upgrading), and it is a low power setup (measured with 4 disks it was 33 watts).

Jailbreaking could be nice for other <wink> unnamed purposes.


I recently bought a low power Ivy Bridge CPU + motherboard for $35 and 8 GiB of RAM for $25. No need to buy new hardware if you can make do with old.

Maybe they sell the H/W at a loss (especially considering R&D + Marketing spend) - and the real strategy is to turn a profit on PSPlus, licensing and taking a cut out of game distribution. If that's the case... you or me building a linux cluster will actually hurt them =)

Not maybe, that's exactly what they do.

The PS3 had dual-boot support for Linux early on, for a couple years after launch. It was removed in a software update a week or two after I decided to try it. I don't see Sony doubling back on that one, but you never know.


> That's a lot of compute engine for $400.

So excited for this as a PC gamer, hardware prices are going to have to plummet. I don't think supercomputers are likely; the PS2 was a candidate because there was [initially] official support for installing Linux on the thing. Sony terminated that support and I really can't imagine them reintroducing it for the PS5.


Sony's only interest is to do a single deployment, using a customized OS and firmware, and then get as many articles out of the project as possible.

They have zero incentive to subsidize supercomputers. They're in the business of trading hardware for royalty, store, and subscription payments.


And if they do it would be wise not to trust them, because dropping support for advertised features with hardware-fused irreversible software updates is SOP at this point. FFS, they even dropped support for my 4K screen in an update and I wound up playing the back half of Horizon Zero Dawn in 1080p as a result.

What? How and why did they drop support for your screen?

Yes, really. They up and dropped an entire HDMI mode used by "older" 4K displays.

A cynic would say they wanted to boost sales on newer displays, but it seems more likely that a bug of some kind came up in a driver (I was unaware of any problems, but that's hardly proof of anything) and they just decided it was easier to cut support of those displays than to fix the problem.

Support forums filled with complaints by the dozens of pages, but Sony didn't care, because why should they? I'm sure somebody did the calculation that said we weren't a big enough demographic to matter.


> Somebody will probably make supercomputers out of rooms full of those.

So I learnt very recently that the PS5 has a cool approach where all memory is shared directly between the CPU and the GPU (if this is wrong someone please correct me). It would be really interesting to see how well the GPU in this could handle DL-specific workloads, and if necessary, could it be tweaked to do so?

Because if so, that could be an absolute weapon of a DL workstation. If it does turn out to be feasible, I think it could be very easily justifiable to buy a few of those (for less than it would cost you to rent a GPU-equipped instance from a major cloud provider for a couple of months) and have a pretty capable cluster. Machines get outdated or cloud provider costs come down? Take them home and use them as actual gaming consoles. Win win.


This is how APUs (which is what the PS4/PS5/Xbox have) use memory: the RAM is shared between the graphics and compute units. This can be an advantage since memory is quickly shared between the two (for example when loading textures, etc).

This is also useful in computers since adding more RAM also adds more VRAM


Self driving cars: Yes, but only if they really work - now would be the perfect time to sell them if they did, for those of us who normally use public transport but don't currently like the thought of sitting in a petri dish for 2 hours.

Utility scale battery storage: Yes, but it needs tech improvements to store LOTS of energy; flow batteries might do it if the hype is true - but currently the UK wholesale electricity price is £-28/MWh due to a wind/solar glut and a quiet weekend, so if anyone wants to get paid to store that energy, the opportunity is there.

As for C++ safety; I find modern C++ hard to read - are they going to be able to do safety but end up with something that's actually harder to use/read than Rust?


Can't help but laugh whenever I read self-driving car predictions like this, sorry.

My GPS can hardly navigate most of the world, so I'm not really excited, and if the only criterion for a self-driving car is driving itself on a highway, then color me uninterested.

I don't think self driving cars will be able to traverse the majority of the world's traffic anytime soon. The roads are just too difficult to maintain for human-free driving, with the exception of a few major grid-planned cities in America, which makes the whole ordeal pretty boring.


Self driving cars don't need to be 100% autonomous in all possible scenarios in order to be useful. Self-driving reliably on the highway? Hell yes I'd take that (just think of trucks - having a driver only for the stretch to the highway is so much cheaper than having someone drive cross-country). Self-driving reliably in a few major cities? Oh, you mean a cheap robotic taxi?

Spot on. This is our approach at Ghost Locomotion - L3 is pretty darn good, and highways are actually pretty standards driven, unlike local roads or cities.

https://medium.com/ghost-blog/the-long-ignored-most-obvious-...

https://medium.com/ghost-blog/the-future-of-transportation-i...


I agree with you, it's just as I said - it's not what we've been sold, and autopilot on the highway is kinda boring.

I think it's incorrect to view it as either full self-driving or none at all. We are getting incremental benefits from this already: cars are correcting and preventing driver errors. They make instant trajectory corrections or come to a complete stop and prevent huge crashes. With time they will get better and better at recognising traffic lights, road signs, sudden unforeseen situations and so on, and that way driving safety will improve dramatically even before full self-driving capabilities.

Nice post. I think the PS5 read might be a little off though. The pro edition is likely to be 600ish USD and come in a little lower than 14 teraflops.

Why do they need lidar in the first place? Humans do fine with stereoscopic vision

“Fake it till you make it” is precisely how it will be solved

“Fake it till you make it” strategy works when you know how to make something but haven't made it yet. The strategy falls apart when people try to fake having solved hard open research problems.

> 8 CPUs at 3.2GHz each

8 CPU cores at 3.2GHz each?


Tesla covering 3/6... stock price is definitely still low

I would never trust any self driving car that didn't use LiDAR. It's an essential sensor for helping to fix issues like this:

https://www.youtube.com/watch?v=1cSw4fXYqWI&feature=emb_logo

And it's not contrived, since we've seen situations of Tesla Autopilot behaving weirdly when it sees people on the side of billboards, trucks etc.


LIDAR vs camera is a red herring. The fact that Elon and his fan club fixate on this shows you how little they understand about self driving. The fundamental problem is that there is no technology that can provide the level of reasoning that is necessary for self driving.

Andrej Karpathy's most recent presentation showed how his team trained a custom detector for stop signs with an "Except right turn" text underneath them [0]. How are they going to scale that to a system that understands any text sign in any human language? The answer is that they're not even trying, which tells you that Tesla is not building a self-driving system.

[0] https://youtu.be/hx7BXih7zx8?t=753


A surprising number of human drivers would also not be able to 'detect' that 'except right turn' sign. Only 3 states offer driver's license exams solely in English; California, for example, offers the exam in 32 different languages.

Even so, it is quite possible to train for this in general. Some human drivers will notice the sign and will override autopilot when it attempts to stop, this triggers a training data upload to Tesla. Even if the neural net does not 'understand' the words on the sign, it will learn that a stop is not necessary when that sign is present in conjunction with a stop sign.


They have hired most of the industry's talent, so I think it's quite silly to claim they understand little about this. In my opinion, nobody has more knowledge of this field than Tesla and Waymo.

Why does it need to work in any human language? It isn't as if self driving cars need to work on Zulu road signs before they can be rolled out in California. I'd be surprised if they ever needed to train it on more than 4 languages per country they wanted to roll out to.

If I were driving I'd definitely stop for the person in the road projection at https://youtu.be/1cSw4fXYqWI?t=85

LiDAR also isn't a silver bullet. Similar attacks are possible such as simply shining a bright light at the sensor overwhelming the sensor as well as more advanced attacks such as spoofing an adversarial signal.


I don't think it's attacks we need to worry about (there's even an XKCD about dropping rocks off of overpasses). The issue is that without good depth and velocity data (so probably LiDAR) there are lots of fairly common situations that an ML algorithm is likely to have trouble making sense of.

I use autopilot every day. It stops for stoplights and stop signs now.

Sometimes when on the freeway behind a construction truck with flashing lights.

It is misleading. Driving on the highway is by far the easiest part of self driving.

Going from 3 nines of safety to 7 nines is going to be the real challenge.


There aren't stoplights on the highway. I'm talking about in-city driving.

Humans don’t need LiDAR to recognize billboards

Self driving cars can't rapidly move their cameras in multiple spatial directions like humans do on a continuous basis.

Also we have a pattern and object detection computer behind our eyes that nothing on this planet even remotely comes close to.


People don't have eyes in the back of their heads. Self-driving cars don't get drunk or distracted by cell phones. Comparing humans with AVs is apples & oranges. The only meaningful comparison is in output metrics such as Accidents & Fatalities per mile driven. I'd be receptive to conditioning this metric on the weather... so long as the AV can detect adverse conditions and force a human to take control.

Chimps have us beat when it comes to short-term visual memory (Humans can't even come close).

Mantis shrimp have us beat when it comes to quickly detecting colors since they have twelve photoreceptors vs. our three.

Insects have us beat when it comes to anything in the UV spectrum (we're completely blind to it). Many insects also cannot move their eyes but still have to use vision for collision detection and navigation.

Birds have us beat when it comes to visual acuity. Most of them also do not move their eyeballs in spatial directions like we do but still have excellent visual navigation skills.


Humans have visual processing which converts the signals from our three types of cones into tens to hundreds of millions of shades of color. Mantis shrimp don't have this processing. Mantis shrimp can only see 12 shades.

Human color detection is about six orders of magnitude greater than mantis shrimp's.


Right, but the theory is that they have us beat when it comes to speed since they are directly sensing the colors whereas we are doing a bunch of post-processing.

I think the point was that brains are the best pattern and object detection computers, not necessarily just human brains.

> Also we have a pattern and object detection computer behind our eyes that nothing on this planet even remotely comes close to.

Not defending those who say that LIDAR isn't useful/important in self-driving cars, but this assertion is only marginally true today and won't be true at all for much longer. See https://arxiv.org/pdf/1706.06969 (2017), for instance.


Humans have about 2° field of sharp vision. Computers with wide angle lenses don't have to oscillate like the eyes do.

Humans are underrated.

On driving? I would posit that most humans are vastly overrated.

I suspect if you crunch the numbers, accidents are going to be above normal for a while after Covid-19 reopenings.

Anecdotally, I'm seeing people doing mind-blowingly stupid things on the roadways right now. It seems like people have forgotten how to drive. I suspect the issue is that people rely too much on other cars to cue them how to behave and the concentration is too low.

(It could also be that a constant accident rate cleans off the worst of the drivers with regularity as they get into accidents and then wind up out of circulation. I really hope that isn't why ... that would be really depressing.)


No they’re underrated. We all know the stats. Driving isn’t the safest activity. Having said that there’s a lot of wishful thinking that the current state of ML can do any better if we were to just put them on the roads today as-is.

You are right, for example, humans don't need anywhere near the amount of training data that AIs need.

I learned to drive a car when I was 13. My older cousin took me to warped tour, got hammered and told me I had to drive home. I didn’t know what a clutch was, let alone a stick shift. After stalling in the parking lot a couple of times, I managed to drive us from Long Beach all the way back to my parents house in Pasadena. Love to see an AI handle that cold start problem.

Cold start? You had 13 years!

Self-driving cars could work more like a hive mind. Humans can share ideas, but not reflexes and motor memory. So we practice individually, and we're great at recognizing moving stuff, but we never get very good at avoiding problems that rarely happen to us.

And we know we shouldn't drive tired or angry or intoxicated but obviously it still happens.


Exactly. The way to improve performance on a lot of AI problems is to get past the human tendency to individualistic AI, where every AI implementation has to deal with reality all on its own.

As soon as you get experience-sharing - culture, as humans call it, but updateable in real time as fast as data networks allow - you can build an AI mesh that is aware of local driving conditions and learns all the specific local "map" features it experiences. And then generalises from those.

So instead of point-and-hope rule inference you get local learning of global invariants, modified by specific local exceptions which change in real time.


It seems to me that humans require and get orders of magnitude more training data than any existing machine learning system. High "frame rate", high resolution, wide angle, stereo, HDR input with key details focused on in the moment by a mobile and curious agent, automatically processed by neural networks developed by millions of years of evolution, every waking second for years on end, with everything important labelled and explained by already-trained systems. No collection of images can come close.

Depends on how you quantify data a human processes from birth to adulthood.

You're forgetting the million years of evolution

But at the end of that video they state they were able to train a network to detect these phantom images. So this is something that can be fixed and has been proven to work. It's only a matter of time before it's in commercial cars.

That same video said they trained a CNN to recognize phantoms using purely video feed and achieved a high accuracy with AUC ~ 0.99.

30%+ downvotes seems like there is not a consensus around this issue

I have an AP 2.5 Model 3. It will never be fully self driving. It still has trouble keeping lanes when the stripes are not simple. It still does phantom braking.

WRT the F-150:

I am so upset with the state of the auto market when it comes to pricing.

Manufacturing margins are enormous when it comes to cars.

The F150 is no different.

A two seater (effectively) vehicle stamped out of metal and plastic should never cost as much as those things do.

I hate car companies and their pricing models.


Look up the chicken tax, a bill passed a few decades ago that basically stopped foreign car manufacturers from selling pickups in the US. That's why trucks are so much more expensive than other types of cars.


Also why you have huge F-series trucks and not more reasonably sized ones like the Hilux.

Because small trucks require more fuel- and emissions-efficient engines than larger ones.


Web Assembly

It's interesting in a bunch of ways, and I think it might end up having a wider impact than anyone has really realized yet.

It's an ISA that looks set to be adopted in a pretty wide range of applications, web browsers, sandboxed and cross platform applications, embedded (into other programs) scripting, cryptocurrencies, and so on.

It looks like it's going to enable a wider variety of languages on the web, many more performant than the current ones. That's interesting on its own, but not the main reason why I think the technology is interesting.

Both mobile devices and cryptocurrencies are places where hardware acceleration is a thing. If this is going to be a popular ISA in both of those, might we get chips whose native ISA is WebAssembly? Once we have hardware acceleration, do we see wasm chips running as CPUs someday in the not too distant future (CPU with an emphasis on Central)?

A lot of people seem excited about the potential for RISC-V, and ARM is gaining momentum against x86 to some extent, but to me wasm actually seems best placed to take over as the dominant ISA.

Anyways, I doubt that thinking about this is going to have much direct impact on my life... this isn't something I feel any need to help along (or a change I feel the need to try and resist). It's just a technology that I think will be interesting to watch as the future unfolds.


I want to believe... I always thought WebAssembly had a lot of potential, however, in practice it doesn't seem to have turned out that way.

I remember the first Unity demos appearing on these orange pages at least 4 or 5 years ago, and promptly blowing me away. But, after an eternity in JavaScript years, I still don't know what the killer app is, technically or business-wise. (Side note - I encourage people to prove me wrong, in fact I'd love to be! That's what's so engaging about discussions here. I'd love to see examples of what WebAssembly makes possible that wouldn't exist without it.)


I can tell you about a WebAssembly killer app for a small niche. lichess uses WebAssembly to run a state-of-the-art chess engine inside your browser to help you analyze games [1]. Anyone who wants to know what a super-human chess player thinks of their game can fire it up on their desktop, laptop, or even phone (not highly recommended, it's rough on your battery life).

Obviously very serious chess players will still want to install a database and engine(s) on their own computer, but for casual players who just occasionally want to check what they should have done on move eleven to avoid losing their knight, it's a game changer.

[1] https://lichess.org/analysis


I think chess.com has something similar too, but not sure if it's powered by wasm.

If it's not, I'd be interested to see a speed and feature comparison between the two.


I think there might be killer apps that companies aren't publicizing, because it's part of their competitive advantage.

Example of WASM being used in a major product:

https://www.figma.com/blog/webassembly-cut-figmas-load-time-...

You can infer from this that it's making them 3x faster than anything a competitor can make, and probably inspired a lot of those 'Why is Figma so much more awesome than any comparable tool?' comments I remember reading on Twitter months back.


Agreed - Figma is a very good example. I stand corrected.

I read that a few days ago and just realized why Figma runs better than Miro/RealTimeBoard. I wish the Miro team were also looking to port to WASM / boost performance. I don't think it's easy though; Figma's effort started in 2017.

Adobe XD uses some Wasm also.

An example I can give:

I use WebAssembly for a few cross-platform plugins, e.g. an AR 3D rendering engine in C++ and OpenGL. With very little effort it is working in the browser. No bespoke code, same business logic, etc. Saved a lot of time vs creating a new renderer for our web app.

For me it allows a suite of curated plugins which work cross-platform. The web experience is nearly just as nice as the native mobile and desktop experience. This in turn increases market growth as more of my clients prefer web vs downloading an app (which is a large blocker for my users). I also enjoy the code reuse, maintainability, etc, :)
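
To give a flavour of what "very little effort" means in practice, the glue usually looks something like the sketch below (hypothetical names, not our actual engine code): you expose a small C ABI from the shared C++ core and let Emscripten compile it to wasm.

    // plugin.cpp -- hypothetical minimal example of exposing shared C++ logic to the web.
    // Build with something like: emcc plugin.cpp -O2 -o plugin.js
    #include <emscripten/emscripten.h>

    extern "C" {

    // The same business logic the native apps link against; EMSCRIPTEN_KEEPALIVE
    // stops the optimizer from stripping the export so JS can call it.
    EMSCRIPTEN_KEEPALIVE
    float blend(float a, float b, float t) {
        return a + (b - a) * t;  // trivial stand-in for the real rendering maths
    }

    }  // extern "C"

From JavaScript it can then be called via Module.ccall('blend', 'number', ['number', 'number', 'number'], [0, 10, 0.25]) or wrapped once with cwrap (depending on build flags); the OpenGL side is largely handled by Emscripten's OpenGL-ES-to-WebGL support.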

Another:

This year Max Factor (via Holition Beauty tech) won a Webby award for in-browser AI and AR. This was used to scan a user's face, analyse their features, and advise them on what makeup, etc., would suit them, after which the user can try it on. This would have been impossible without WebAssembly.

This tech is also used by another makeup brand's beauty advisors (via WebRTC) to call a customer and advise them in real time on their makeup look, etc.

Is this tech necessary? Probably not, but it is a lot nicer than having to go to a store. Especially when we are all in lockdown :)

1) https://www.holitionbeauty.com/

2) https://winners.webbyawards.com/?_ga=2.215422039.1334936414....

3) https://www.maxfactor.com/vmua/


I built a slower version of something with the same idea 13-14 years ago in Flash for http://www.makeoversolutions.com which most of these makeup companies licensed back then.

I moved on from that a decade ago but it was a neat project at the time.

But I deployed my first integration of WASM about a month ago for PaperlessPost.com. It is a custom h264 video decoder that renders into a canvas and manages timing relative to other graphics layers over the video. This code works around a series of bugs we've found with the built-in video player. It went smoothly enough that we are looking into a few other hot spots in our code that could also be improved with WASM.

One avenue for WASM might be simply polyfilling the features that are not consistently implemented across browsers.


I feel like I am looking in a mirror!

Ten years ago I did the same but in Java and JOGL (before Apple banned OpenGL graphics within Java applets embedded within a webpage). It was used for AR watch try-on within https://www.watchwarehouse.com and eBay. The pain of Flash and applets still wakes me up at night.

I'm also building something very similar but with the ability for custom codecs (https://www.v-nova.com/ is very good). Probably the same issues too! Could I know more about your solution?


This is really great work and exactly the kind of response I was hoping for - thank you. I wonder why tech like this is not being more widely used, for example on Amazon product pages, especially with the well-known reluctance, as you mentioned, of people to download apps.

Thanks, much appreciated!

I think WebAssembly is more used than it appears, just difficult to see/tell.

A few years ago I actually tried integrating AR via WebAssembly with Amazon. We couldn't get approval due to poor performance on Amazon Fire devices (which have low end hardware). It is a shame but it is what it is.

What is disappointing/annoying is - as a CTO - it is near impossible to hire someone with WebAssembly skills. It requires an extra-curious engineer with a passion for native and web. Training is always important for a team, but when going down the WebAssembly route you need to be extra focused and invest more than what a typical engineer would be allocated (e.g. increase training from 1 day a week to 2-3). I suppose this may put people off?


> I'd love to see examples of what WebAssembly makes possible that wouldn't exist without it.

I've been playing with WebAssembly lately and the moment where it clicked for me how powerful it was was building an in-browser crossword filler (https://crossword.paulbutler.org/). I didn't write a JS version for comparison, but a lot of the speed I got out of it was from doing zero memory allocation during the backtracking process. No matter how good JS optimization gets, that sort of control is out of the question.

I also think being able to target the browser from something other than JS is a big win. 4-5 years is a long time for JS, but not a long time for language tooling; I feel like we're just getting started here.
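
For a rough idea of the zero-allocation pattern I mean, here's an illustrative sketch in C++ (not the actual filler code): keep all search state in fixed-size buffers that get mutated and rolled back during backtracking, so the hot loop never touches the allocator.

    #include <array>
    #include <cstddef>

    constexpr std::size_t kSlots = 16;

    // All search state lives in pre-sized arrays; backtracking mutates them
    // in place and undoes the change on the way back out -- no heap allocation
    // anywhere in the hot path.
    struct State {
        std::array<int, kSlots> choice{};
        std::array<bool, kSlots> used{};
    };

    bool fill(State& s, std::size_t slot) {
        if (slot == kSlots) return true;               // every slot assigned
        for (int c = 0; c < static_cast<int>(kSlots); ++c) {
            if (s.used[c]) continue;
            s.choice[slot] = c;                        // try a candidate...
            s.used[c] = true;
            if (fill(s, slot + 1)) return true;
            s.used[c] = false;                         // ...and roll it back
        }
        return false;
    }

    int main() {
        State s{};
        return fill(s, 0) ? 0 : 1;                     // 0 on success
    }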


Great work.. This is amazing! Thanks for sharing

This is brilliant, thank you!

If you’re looking for a real world example of Webassembly being used in production at a large scale for performance gains, check out Figma. Their editor is all wasm based, and is really their secret sauce.

Thank you! I just checked them out, and I stand corrected. Really an excellent design tool and very responsive. I see now that for certain applications WASM is indeed the right tool for the job.

Speedy client-side coordinate conversion in geospatial apps, thus avoiding the round-trip to the server.

I agree! WASM is very interesting. Blazor is an exciting example of an application of Web Assembly - it's starting out as .net in the browser, but you can imagine a lightweight wasm version of the .net runtime could be used in a lot of places as a sandboxed runtime. The main .net runtime is not really meant to run unprivileged. It would be more like the UWP concept that MS made to sandbox apps for the windows App Store, but applicable to all OSes.

One thing I haven't heard much about is the packaging of wasm runtimes. For example, instead of including all of the .net runtime as scripts that need to be downloaded, we could have canonical releases of major libraries pre-installed in our browsers, and could even have the browser have pre-warmed runtimes ready to execute, in theory. So if we wanted to have a really fast startup time for .net, my browser could transparently cache a runtime. Basically like CDN references to JS files, but for entire language runtimes.

This would obviate the need for browsers to natively support language runtimes. It's conceptually a way to get back to something like Flash or SilverLight but with a super simple fallback that doesn't require any plugin to be installed.


I look forward to in browser DLL hell /s

I'm cautiously optimistic about blazor, it definitely makes streaming data to the Dom much easier


Blazor seems like the only application of WASM at the moment that goes in the completely wrong direction.

People are already whining about JS bundle size and even the small .net runtimes are >60kb.

Yew on the other hand seems to fit right into what WebAssembly was made for.


The download size does make it hard to use for a "public" site, like a webshop. But it is a different story for an application, like an intranet solution or an app like Figma. A first-time download of a few MB is not a problem when you use it regularly, like a desktop application.

It is the first time you can develop a full stack application (client and backend) in one language in one debugging session. For C#, the last time that was possible was with Silverlight.

Small companies (like mine) that deliver applications and have full stack engineers can have some amazing productivity!

So for my needs I'm really excited with something like Blazor, and this was only the first release.


I understand the appeal for .net devs.

I just don't think it's a good idea in general.


For every person whining about 60 KB of JS there are 10 creating a 10 MB web app.

Cautionary tale: we’ve been here before with JVM CPUs like Jazelle. They didn’t take over the world.

Absolutely, but there's been plenty of technologies where the time wasn't right the first time around, but it was the second, or third, or fourth.

See https://vintageapple.org/byte/ and search on the page for "Java chips" or download the PDF directly at https://vintageapple.org/byte/pdf/199611_Byte_Magazine_Vol_2...

I remember being really excited at the concept. Of /course/ we needed Java co-processors!


Even closer to home. Palm, RIM, Microsoft, Apple and Google have all said at one point that web apps were the answer for mobile apps....

I mean, modern Google was half-built on the back of the Gmail web app...

Gmail was introduced after Google was already popular. The Google home page’s claim to fame was always its simplicity and fast load time.

To an average user, Google in 2003 was a search page. In 2004+, it was essential internet infrastructure.

That's a pretty big difference.


Gmail is popular but in the grand scheme of things it’s not that popular for email. I’m sure that most people get most of their utility from email from their corporate email. Their personal email is mostly used for distant relationship type communications. Most personal interactions these days happen via messaging and social media. AKA “Email is for old people”.

Also, a lot of computer use is via mobile these days and I doubt too many people are using the web interface on mobile for gmail.


It's pretty popular for email, at 25%+ market share [1]. That's a LOT of information to mine.

And point taken about conversations moving to post-email protocols, but email is certainly still up there with HTTP as a bedrock standard that everyone eventually touches.

Without pushing JavaScript and a full-featured web client, it's fair to say Google wouldn't have grown as quickly or be nearly as dominant today.

As for their move to full mobile app, I think it's a bit of a different calculation when you happen to own the OS that powers ~75% of all mobile phones [2]. ;)

Suffice to say, I don't think Google has the same troubles as other developers. (Exception to security policy, for my first party app? Sure!)

[1] https://www.statista.com/chart/17570/most-popular-email-clie...

[2] https://www.statista.com/topics/3778/mobile-operating-system...


The question is not about how many people use Gmail - and that still doesn’t take into account corporate users. It’s about how many people use the web interface as opposed to using a mobile app.

Yes. And we're both clear that there wasn't always a mobile app version of Gmail, right?

To say that Gmail had much to do with Google's growth doesn't really hold up today: there was only a relatively small window in which email was the most popular form of personal communication (as opposed to corporate mail and spam), and that window closed over ten years ago, before mobile started taking over.

True, just like we were here before with devices like the Palm Pilot and Apple Newton, which is why the iPhone and iPad never took over the world ;)

I'd argue somewhat the opposite. Because WebAssembly is abstract but low-level, each platform can optimize it specifically for its own hardware, so instead of creating a need for specialized platforms, it'll allow more diverse systems to run the same "native" blobs.

That potential has been there for many, many years; I don't see 'the thing' that provides the critical mass necessary to make it work in reality.

WebAssembly is one of the more misunderstood technologies in terms of its real, practical application.

At its core, it crunches numbers in a limited memory space. So it can provide some 'performance enhancements', possibly, for running some kinds of algorithms. It also means you can write those in C/C++, or port them; Autodesk does this for some online viewers. This is actually a surprisingly narrow area of application, and it still comes with a lot of complexity.

WA is a black box with no access to anything, and how useful is that, really?

Most of an app is drawing 'stuff' on the screen, storage, networking, user event management, fonts, images, videos - that's literally what apps are. The notion of adding a 'black box for calculating stuff more quickly' is a major afterthought.

At the end of the day, JS keeps improving quite a lot and does pretty well; it might make more sense to have a variation of it that can be optimized even further than to build something new from the ground up.

WASI - the standard WA system interface - is a neat project, but I feel it may come with some serious security headaches. Once you 'break out of the black box' ... well ... it's no longer a 'black box'.

WA will be a perennially interesting technology, and maybe the best example of something that looks obviously useful but in reality isn't. WA actually serves as a great instructional example for Product Managers to articulate 'what things actually create value and why'.

It will be interesting to see how far we get with WASI.


I think you're underestimating WASI. With projects like CloudABI, where an existing app is compiled against a libc with strong sandboxing, really cool things happen.

Thanks, but the same thing was said about WASM and asm.js.

For 5 years we've been hearing about how great they are, except nobody is really using them.

So now it's 'the next thing' that will make it great? Except that next thing isn't there: it's not agreed upon or implemented, and there's a lot we don't know about it.

Like I say, this is a textbook example of tech hype for things that probably aren't as valuable as they appear.

If (huge if) WASI were 'great, functional, widespread, smoothly integrated', I do agree there's more potential. But whether that will really happen is questionable, and whether it will be valuable even if it does happen is also questionable.


I don't like seeing wasm replace native for stuff like development tooling and desktop apps.

JITs may approach native performance in theory - but the battery consumption and memory consumption are not very good. ("Better than JS" is a low bar).

As hardware becomes stronger, I would like to do more with it, and when it comes to portable devices, I want more battery life. Nothing justifies compiling the same code again and again, or downloading pages again and again like "web apps" do.

I understand where developer productivity argument comes from. But we can have both efficiency and developer productivity - it is a problem with webshit-grade stacks that are used today that you can't have both.

I personally think the Flutter model is the future. You need not strive for "build once, run anywhere". You can write once in a cross-platform HLL and build anywhere, and that's better.

As for sandboxing, maybe it's that your OS sucks (I say this as a Linux user); Android/iOS have sandboxing with native code. You shouldn't need to waste energy and RAM for security. IMO enforcing W^X along with permission-based sandboxing is better than the WebAssembly bullshit being pushed.

And WebAssembly itself seems to be a rudimentary project with over-ambitious goals. The JS bridge being so slow and the lack of GC support (still in a "to be designed" state) make it unusable for many purposes. Outside the HN echo chamber, not many web people want to write Rust or even C++.


When dealing with health or military systems, installing or updating a native application can result in months of delays (e.g. quarterly OS image update cycles). Running within Chrome, Firefox, and other typically preinstalled software, implementation becomes a matter of days.

Without WebAssembly I wouldn't have been able to ship 2 products pro-bono within intensive care units and operating theatres directly helping with COVID.

I understand your dislike of WebAssembly (and of web-stack, flavour-of-the-month-esque development trends). I am not the largest fan of modern web development either. Nevertheless, the love for WebAssembly is not due to developer productivity. After shipping 20+ WebAssembly products (alongside native counterparts) I have yet to meet an engineer who enjoyed the WebAssembly/Emscripten/Blazor pipeline. However, what WebAssembly has achieved for me comes down to one question: do people use your app? Within certain markets it allowed me to grow, do good, and answer yes. That is the only real reason someone should go down this route.


> I don't like to see wasm replacing native for stuff like development tooling, and desktop apps.

Wasm is like the JVM or CLR in that regard. It's not the future - it's the past.


Yes. Even if there were a number of dominant architectures, install-time compilation would be better than run-time compilation for frequently used software. The RAM and energy overhead just isn't worth it.

Wasm was not designed to be a hardware-accelerated ISA. It was designed as an IL/bytecode target like the JVM and .NET.

Even if it were, there is an extremely high bar to meet for actual new ISAs/cores. There is no chance for Wasm to compete with RISC-V, Arm or x86.


Aaah so we have come full circle from Java applets.

> It's an ISA that looks set to be adopted in a pretty wide range of applications, web browsers, sandboxed and cross platform applications, embedded (into other programs) scripting, cryptocurrencies,

Imagine if the crowd didn't fall for the HODL hypers and called these things cryptolotteries or something like that -- they are a betting game after all -- how ridiculous would it look to include them in every discussion like this.


What are you adding to the discussion? This is a technical forum, the least you could do is comment on the use of Web Assembly in Ethereum or maybe anything of substance. There's a bunch of technically interesting topics to bring up but somehow I doubt you know anything about them.

I speak up against cryptocurrency because it's a cancer. It's a hype adding to climate change without any real world use case whatsoever.

Have you looked deeper than just hodl memes and Bitcoin? Ethereum is a highly technical project that doesn't really care about money, and lots of people here on Hacker News find interesting topics in it. WebAssembly will be its base programming platform, for example, which is one of the reasons it was included in that list.

If you read about the Baseline protocol (EY, Microsoft, SAP etc building neutral interconnections between consortiums), ENS/IPFS, or digital identity systems you might find something that interests you and is more relevant than the mindless hodl ancaps. It's actually a pretty exciting field to be in as a computer scientist with almost no end of boundary pushing experiments and cryptographic primitives to play with and build on top of.


Thank you for your input, but this is not TechCrunch. We understand the problems with PoW, and we also know that a lot of interesting research is being done on top of Ethereum. For your reference, Ethereum is moving away from PoW.

Most new cryptocurrencies are moving away from PoW because a) it's a massive waste of electricity and b) it's not actually secure anyway, because we've seen a consolidation of mining power among major ASIC customers with cheap power costs (notably in China). Ethereum is moving to PoS in 2020 or 2021, and EOS, Stellar, Tezos, Cardano, etc. are already PoS or derivatives.

Have the security issues with PoS been worked out yet?

Materialize https://materialize.io/ Incremental update/materialization of database views with joins and aggregates is super interesting. It enables listening to data changes not just at the row level, but at the view level. It's an approach that may completely solve the problem of cache invalidation for relational data. Imagine a memcache server, except it now also guarantees consistency. In addition, being able to listen to changes could make live-data applications trivial to build, even with filters, joins, whatever.

Similarly, someone is developing a patch for Postgres that implements incrementally updated/materialized views[1]. I haven't tried it, so I can't speak to its performance or the state of the project, but according to the Postgres wiki page on the subject [2] it seems to support some joins and aggregates, though probably not something that would be recommended for production use.

[1] https://www.postgresql-archive.org/Implementing-Incremental-... [2] https://wiki.postgresql.org/wiki/Incremental_View_Maintenanc...
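As a rough sketch of what this looks like from an application (assuming a local Materialize instance and made-up orders/order_items sources; Materialize speaks the Postgres wire protocol, so an ordinary Postgres client works):

    use tokio_postgres::NoTls;

    #[tokio::main]
    async fn main() -> Result<(), tokio_postgres::Error> {
        // Materialize listens on 6875 by default and speaks the Postgres protocol.
        let (client, conn) =
            tokio_postgres::connect("host=localhost port=6875 user=materialize", NoTls).await?;
        tokio::spawn(conn);

        // Keep a joined aggregate incrementally up to date as the sources change.
        client.batch_execute(
            "CREATE MATERIALIZED VIEW order_totals AS
             SELECT o.customer_id, sum(oi.price) AS total
             FROM orders o JOIN order_items oi ON oi.order_id = o.id
             GROUP BY o.customer_id",
        ).await?;

        // Reads see the maintained result; no hand-rolled cache invalidation needed.
        for row in client.query("SELECT customer_id, total FROM order_totals", &[]).await? {
            println!("{}: {}", row.get::<_, i64>(0), row.get::<_, f64>(1));
        }
        Ok(())
    }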


+1, very excited about this.

They're marketing it in the OLAP space right now, but at some point I'd like to try integrating it with a web framework I've been working on.[1][2] It'd be a more powerful version of Firebase's real-time queries. Firebase's queries don't let you do joins; you can basically just filter over a single table at a time. So you have to listen to multiple queries and then join the results by hand on the frontend. That doesn't work if you're aggregating over a set of entities that's too large to send to the client (or that the client isn't authorized to see).

[1] https://findka.com/blog/migrating-to-biff/ [2] https://github.com/jacobobryant/biff


Thanks for the vote of confidence! One thing: We're not marketing it in the OLAP space. Our existing users very much are building new applications.

Initially we went for the metaphor of "what if you could keep complex SQL queries (e.g. 6-way joins and complex aggregations, the kinds of queries that today are essentially impossible outside a data warehouse) incrementally updated in your application within milliseconds? What would you build?"

We're moving away from that metaphor because it seems it's more confusing than helpful. Tips always appreciated!


Ah, thanks for the correction. In any case I'm looking forward to trying it out eventually--got a number of other things ahead in the queue though.

My suggestion would be to consider comparing it to Firebase queries. Firebase devs are already familiar with how incrementally updated queries can simplify application development a lot. But, despite Firebase's best marketing attempts, the queries are very restrictive compared to SQL or Datalog.


I've always wanted to take the time to try to build this. It's been possible in PG for a while to use a foreign data wrapper to do something like directly updating an external cache via a trigger, or pubsub-ing it to something that can do it for you.

Making it easy here would be absolutely fascinating.



Materialize is based on differential dataflow, which is based on timely dataflow. The abstraction works like magic: distributed computation, ordering, consistency, storage, recalculation, invalidations... All those hard-to-solve problems are handled naturally by the computing paradigm. Maybe the product is similar, but the principles behind it are not.

Principles only matter to hackers, but the end result for end users is identical.

It's just very unfortunate that Materialize has a much, much bigger marketing team than the Datomic people.


Materialize is streaming; Datomic is poll-based.

How are they close?

This looks great - I've been looking into Debezium for a similar idea, but it doesn't natively support views, which makes sense from a technical POV but is rather limiting. There are a few blog posts on attaching metadata/creating an aggregate table, but it involves the application creating that data, which seems backwards.

It would be huge if Materialize supports this out of the box. I believe it's a very useful middle ground between CRUD overwriting data and event sourcing. I still want my source of truth to be an RDBMS, but downstream services could use a data stream instead.


This is exactly what we do! This is a walkthrough of connecting a db (these docs are for mysql, but postgres works and is almost identical) via debezium and defining views in materialize: https://materialize.io/docs/demos/business-intelligence/

That's super interesting. Will need to read a lot more about it though.

Doesn't Hasura or Postgraphile do this better? They give a GraphQL API over Postgres with support for subscriptions, along with authentication, authorization etc.

You could shoehorn Hasura into this use case, but those tools are primarily intended for frontend clients to subscribe to a schema you expose.

Change data capture allows you to stream database changes to a message bus or stream which has much better support for backend service requirements. Example: if a downstream service goes down, how would it retrieve the missed events from Hasura? Using Kafka or a buffered message bus, you'd be able to replay events to the service.

Never mind having to support WebSockets in all your services :/


Cool! It's interesting!

It looks similar to CouchDB?

Oxide Computer Company

https://oxide.computer/

“True rack-scale design, bringing cloud hyperscale innovations around density, efficiency, cost, reliability, manageability, and security to everyone running on-premises compute infrastructure.”

Corey Quinn interviewed the founders on his podcast "Screaming in the Cloud", where they explain the need for innovation in that space.

https://www.lastweekinaws.com/podcast/screaming-in-the-cloud...

Basically, on-premises hardware is years behind what companies like Facebook and Google have in-house; it may be time to close that gap.

They also have a podcast, "On The Metal", which is such a joy to listen to. Their last episode with Jonathan Blow was really a treat.

https://oxide.computer/podcast/

It's mostly anecdotes about programming for the hardware-software interface, if that's your thing ;).


And for people wondering why anyone would care about on-premises hosting when you have the cloud: a few weeks ago there was a thread about why you would choose the former over the latter. It shows that a lot of people are in fact still on-premises, and for good reasons, which makes a good case for a company like Oxide to exist.

https://news.ycombinator.com/item?id=23089999


Also see this meta comment which summed up other top-level comments by their arguments: https://news.ycombinator.com/item?id=23098654

8 of the 10 are cost related.


Wow, I had never heard of Oxide before this. I work at a huge company that is nearly finished with its cloud transformation, which was frankly more a way to differentiate itself from the competition than anything else, and a huge cost sink.

This probably would've accomplished the same goal, with a lot less overhead.


I’d second that podcast recommendation. The episode with Jon Masters is an incredible conversation.

Rust lang - Memory safety through zero-cost abstractions as a way to eliminate a large class of errors in systems languages is interesting. Especially if it allows more people to write systems programs.

WASM - Mostly as a compile target for Rust, but I think this changes the way software might be deployed. No longer as a website, but as a binary distributed across CDNs.

ZK-SNARKS - Zero knowledge proofs are still nascent, but being able to prove you know something while not revealing what it is has specific applicability for outsourcing computation. It's a dream to replace cloud computing as we know it today.

Lightning Network - A way to do micropayments, if it works, will be pretty interesting.

BERT - Newer models for NLP are always interesting because the internet is full of text.

RoamResearch - The technology for this has been around for a while, but it got put together in an interesting way.

Oculus Quest - Been selling out during COVID. I sense a behavioral change.

Datomic - Datalog seems to be having a resurgence. I wonder if it can fight against the tide of editing in-place.


Datomic .. not just because of Datalog, but because it's hands down the best implementation of an AWS Lambda-based workflow I've seen (Datomic Ions). It's such a peach to work with.

W.r.t. Datomic, there's also another Clojure DB using Datalog called Crux that's pretty interesting. I built my most recent project on that.

Rust is awesome and very eye-opening, and it's a great alternative for almost any Golang use case. I just hope they prioritize improving compilation times if possible.

> Lightning Network - A way to do micropayments, if it works,

You can stop the tape right there. You know it doesn't and it can't.


Genuinely curious, what’s wrong with the lightning network?

I don't know why the parent comment talked in such absolute terms, but these recent problems may be relevant:

https://news.bitcoin.com/hidden-lightning-network-bug-allowe...

https://news.bitcoin.com/mishap-sees-user-lose-30000-btc-on-...


Bitcoin.com isn't a neutral source on LN-related material. The parent company (St Bitts LLC) invests directly in Bitcoin Cash startups that compete with Bitcoin itself.

The bug has already been patched and had a limited userbase. The story of the user who supposedly lost all his Bitcoin ended up not being true; the vast majority was recovered. It's also worth noting that the user deliberately went against various UI warnings that funds could be lost.

https://github.com/lightningnetwork/lnd/issues/2468


Linking to a Bitcoin.com article about anything BTC is like linking to a Fox News opinion article on Obama.

For starters, the whitepaper concludes that a 133 MB base block size is needed for it to work at scale. Bitcoin currently has a 1 MB block size limit, which it will never increase.

It's not the lightning network -- it's micropayments.

Three days ago: https://news.ycombinator.com/item?id=23232978



> RoamResearch - The technology for this has been around for a while, but it got put together in a interesting way.

Just checked out the website; how is it any different from Dynalist or Workflowy?


Never tried dynalist. Used Workflowy.

Workflowy is strictly an outliner. It's like Gopher--hierarchical, unlike the Web, which is a graph.

Feature-wise, Roam is more like a graph. You can really easily link to other concepts and rename pages (and everything renames). It also has a page generated daily for things you want to write down.

Feeling-wise, you get to write things and collect them, and organize them later. I think it's more conducive to how people think and research. You might have a piece of data, but you're not sure where to put it yet. Most other note-taking systems force you to categorize first.


I'm surprised people are still looking forward to the Lightning Network. Layer 2 has missed the boat because of all the politics and contention between the Bitcoin factions. Decentralized finance is already happening on Ethereum. We have stablecoins like Dai that underpin loans.

> It's a dream to replace cloud computing as we know it today.

Perhaps you may be interested in Golem project devoted to distributed computing: https://golem.network/


BTW, as of two weeks ago the official Oculus Quest store is no longer sold out (although it might be sold out again; I haven't checked since it came back in stock).

Oxford Nanopore sequencing. If a few problems can be figured out (mainly around machine learning and protein design), then it will beat every other biological detection, diagnosis, and sequencing method by a massive amount (not 10x, but more like 100x-1000x).

It's hard to explain how big nanopore sequencing is if a few (hard) kinks can be figured out. Basically, it has the potential to completely democratize DNA sequencing.

Here is an explanation of the technology - https://www.youtube.com/watch?v=CGWZvHIi3i0


Best part is the Oxford devices are _actually affordable_. Illumina has had such a stranglehold on the market - devices start at around $35k and go up into "this is a house now" territory. Meanwhile the Flongle [0] is $99 and the main Oxford device can be had for $1k.

[0] https://store.nanoporetech.com/us/flowcells/flongle-flow-cel...


> Illumina has had such a stranglehold on the market - devices start at around 35k and go up into “this is a house now” territory.

You cannot effectively sell this kind of device under $25K--support costs simply eat your profit margin.

This is a constant across industries. You either have a $250 thneed (and you ignore your customers) or a $25K thneed (and you have shitty customer support) or a $250K thneed (and you have decent customer support).


Depends what you mean by affordable - low barrier to entry, yes. But bases/$ is still orders of magnitude below what's needed to displace Illumina for sequencing of large genomes (e.g. human).

Can this be used to make faster coronavirus tests? If so, maybe this is the time to Manhattan-project this technology.

Generally, yes absolutely. I’ve been doing a project called “NanoSavSeq” (Nanopore Saliva Sequencing) in my free time. It’s published on dat right now since the raw files for Nanopore are really big (got too big for hashbase). There is one company doing it as well, but my version is completely open source and I’ve optimized it for affordable automation.

To give you a sense, you can buy one for $1k and do as much detection as a $64k device, and it's small enough to fit in a backpack. One device should be able to do 500-1000 tests per 24 hrs at a cost of about $10 per test, not including labor.


Is this with multiplexing? Or are you extending the flowcell life?

Multiplexing. I use barcoded primers to amplify the sample, then pool and sequence

Would love to know more. This is fascinating.

The dat website is at dat://aaca379867bff648f454337f36a65c8239f2437538f2a4e0b4b5eb389ea0caff

You can visit it with the Beaker browser, or share it through dat so it won't ever go down.

You can also visit it at http://www.nanosavseq.com/ (DNS is not up yet, http://167.172.195.83/book/index.html is direct)

It's embarrassingly barren right now, mainly since I've encountered some big problems with getting my DNA quantifier out of storage to start doing a lot more experiments. I'm getting that on Tuesday, so will be updating site then.


The book / documentation is very clean and presented in a fantastic way. May I ask what engine you are using for presenting this book?

mdbook! By the folks making the Rust docs. I love their formatting.

Would you like to work together on this? This is very interesting stuff.

Would love to. Feel free to email me at koeng101<at>gmail.

The Oxford Nanopore people announced that they are in the 'advanced stages' of developing their own Covid-19 test called LamPORE

https://twitter.com/nanopore/status/1263711292868694021

Press release: https://nanoporetech.com/about-us/news/oxford-nanopore-techn...

'Oxford Nanopore is planning to deploy LamPORE for COVID-19 in a regulated setting initially on GridION and soon after on the portable MinION Mk1C.'

The GridION is still expensive and not affordable for a small business or private person; a MinION definitely is.


Thanks for those links! I knew it was only a matter of time

There are lots of folks working on LAMP in the DIYbio community. The kinda cool thing is that you can just have a colorimetric read-out, so you don't even need Nanopore sequencing. I'm guessing that the reason Nanopore is nice there is that it eliminates false positives. I'm more a fan of this approach -

https://www.genomeweb.com/business-news/clear-labs-raises-18...

Because you can recover full genomes as a by-product of diagnostic tests (which is useful for tracing infection, for example https://nextstrain.org/)


Hell yes
