The issue with that reasoning is that it fails to take into account that risk is a commodity now. It's often more profitable to go for short-term profit and offload your risk to an insurer who amortizes that monetary risk across a pool spanning a bunch of other industries.
For critical services like food production, that's a problem. "Well, we don't have food, but it's okay because screw production went well" doesn't make sense socially, but our system makes it so monetarily.
I'm not sure in what sense you mean risk is a commodity, and why it's a problem. I'm also unsure what changed to make it so now, as opposed to having ever been so.
Those who actually took risk into account and planned accordingly have profited wonderfully. Those who did not take risk into account lost their bet. Eggs are priced higher for some, but are pretty much available everywhere still, and have not dipped below some sort of minimal level of availability. In California, past shortages were far, far worse than this one, and even then the egg shortages were in no way catastrophic to the economy or the health of humans.
Of all the times in history, ever, we are at the lowest possible risk of famine. Instead, our abundance of high calorie food is the biggest risk to the health of Americans.
So I would like to understand your point a bit more if you have the time to elaborate.
I think the assumption they're making is that we want to guarantee a certain reliability of the food supply, and that even if we have perfect insurance that pays out when there isn't enough food, we'd just have money and no food.
That's a theoretical problem that could occur, but it's extremely unlikely. The worst we'll see is what we have now (eggs are spendy) or a certain type of food disappearing for a while (one year tomatoes were gone from almost all fast-food places).
If we have to substitute one food for another for a year or two, that's an inconvenience. But preventing famine by trying to guarantee that the price of eggs doesn't go up is likely far, far down the list. Better that money be spent on improving supply chains and, if necessary, bulk storage of long-lasting caloric sources (cheese and flour reserves, perhaps).
Here's an idea. Let's get a large proportion of our calories from inefficient animal sources. Then if there is a widespread crop failure we can eat the breeding stock and then the animal feed.
That's generally what happens in Africa. It doesn't work as well in North America because consumers here are too rich to switch to barley and oats when wheat is expensive.
Yes, ethanol is the American equivalent. If we ever have a food shortage due to widespread extreme weather or similar, the president can nix the ethanol mandate to eliminate the food shortage.
The world does not have caloric food insecurity. We might be insecure in terms of specific nutrients or specific foods, but the modern world is not insecure in terms of human food calories.
> Those who actually took risk into account and planned accordingly have profited wonderfully.
I don't know why you're saying this. Imagine I'm investing.
If I "take risk into account" and select stocks anyways, I may lose a bunch of money one year. But I expect to make more on average than bonds.
Looking at a year where bonds excel compared to stocks doesn't mean that I failed to "take risk into account."
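A toy sketch with made-up numbers (my own illustration, not the commenter's) of that point: a strategy with the higher expected return can still trail the "safe" option in plenty of individual years, so a single bad year tells you little about whether risk was taken into account.

```python
# Hypothetical return distributions, purely for illustration.
import random

random.seed(0)
YEARS = 10_000
BOND_RETURN = 0.03                                               # assumed safe yearly return
stock_returns = [random.gauss(0.07, 0.15) for _ in range(YEARS)] # assumed risky yearly returns

losing_years = sum(r < BOND_RETURN for r in stock_returns)
print(f"stocks trail bonds in {losing_years / YEARS:.0%} of years,")
print(f"yet average {sum(stock_returns) / YEARS:.1%} vs the bonds' {BOND_RETURN:.0%}")
```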
Likewise, a conventional producer of eggs that has now had a significant downturn in production may be having a bad year, but this doesn't mean that they're not following a profit-maximizing strategy or not taking risk into account.
> Of all the times in history, ever, we are at the lowest possible risk of famine.
I think this is making the same kind of mistake: looking at today's outcome and assuming that reflects the risk picture.
We're not observing much famine right now. But we could certainly have more of a risk of the most catastrophic possible famines now because of things like monoculture, critical links in production, climate risk, etc. Just looking around and saying "all is great today" or "conventional egg producers are having trouble today" or "stocks are down 15% for the year" does not capture the picture of risk, particularly for rare events.
The best we can do is try to interpret sentinel events like this one and think about what else can happen.
You make the distinction between something being in the standard library vs an arbitrary external package sound like a minor detail.
It's not. It makes a world of difference. Having a hard guarantee that you are not dragging in any transitive dependency is often a big deal. Not to mention the maintenance guarantees.
The whole "tries all possible combinations" thing is a very misleading oversimplification in the first place.
Instead, think of it more like a completely different set of operations from those of classical computers, such that if you were to try to replicate/simulate them using a classical computer, you would have no choice but to try all possible combinations in order to do so. Even that is oversimplifying, but I find it at least doesn't hint at "like computers, but faster", and it's about as close to making the parallelism POV "correct" as you're going to get.
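To make that concrete, here's a toy sketch (my own, not something from the comment) of what the naive classical simulation looks like: the state of n qubits is a vector of 2**n amplitudes, and even a single gate is a linear operation over that entire vector, so the cost doubles with every qubit you add.

```python
# Naive state-vector simulation: every gate is a matrix over all 2**n amplitudes.
import numpy as np

def apply_hadamard(state: np.ndarray, qubit: int, n: int) -> np.ndarray:
    """Apply a Hadamard gate to one qubit by building the full 2**n x 2**n operator."""
    h = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, h if q == qubit else np.eye(2))
    return op @ state

n = 10                               # 10 qubits -> 1,024 amplitudes; 40 qubits -> ~10**12
state = np.zeros(2**n)
state[0] = 1.0                       # start in the all-zeros basis state
state = apply_hadamard(state, 0, n)  # one gate still touches the whole vector
print(state.shape)                   # (1024,) -- and it doubles with each extra qubit
```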
What these operations do is pretty exotic and doesn't really map onto any straightforward classical computing primitives, which puts a pretty harsh limit on what you can ask them to do. If you are clever enough, you can mix and match them to do some useful stuff really quickly, much faster than you ever could with classical computers. But that only goes for the stuff you can make them do in the first place.
That's pretty much the extent to which I believe someone can "understand" quantum computing without delving into the actual math of it.
Isn't "deep learning" specifically referring to the breakthroughs that finally started addressing the issues preventing architectures with more than a single hidden layer from achieving anything?
Unless I'm getting all of that wrong, it doesn't sound like that bad of a term. I get that it has since been used and abused to absurdity. But it's not like it came out of nowhere at first.
> 2) it addresses a key part of their workflow, but the primary process exists in some other system so they are not willing to add a new system,
This cannot be stressed enough. If you are planning a B2B product, especially anything middleware-ish, not planning for this upfront is just begging for failure.
Yeah, there are so, so many B2B products that don't seem to realize that being 10% to 20% better than the existing X in a large company is close to meaningless.
In practice it has to be more like 200% better, at least, to be a viable competitor that has realistic prospects of being adopted.
Yeah, the way I think of it is that most big organizations are willing to burn the cost of a whole employee or a whole team to avoid adopting new/risky software, as long as the cost is kinda spread out and not directly visible to management.
So yeah, just being 20% better doesn't mean anything, because most big orgs are much more inefficient than that anyway, almost across the board.
People have to remember that the initial glut of SaaS sales was because they were transferring old and very broken software models into one they could charge a monthly fee for.
Most younger folks won't have experience of enterprisey stuff like remoting into a terminal server so you could run a fat client over a shitty, slow network connection, or having the most awkward tech stack installed on your local computer.
Moving to "its just a website" was way more than a 200% improvement on many things, even with the tradeoffs that javascript gave, moving people to the next level of more streamlined workflows doesn't have nearly the sales pitch.
No, these are the proper laureates (for that topic anyways, whether the topic is appropriate in the first place is another matter). LeCun and Bengio's works are undoubtedly immensely impactful, but there's no denying that they are standing on the shoulders of giants.
Good point. I suppose if one is going to not win the Nobel Prize, a decent "consolation prize" is at least being referenced in the prize announcement for whoever did win.
The original value proposition for Chrome was: the more people browse the web in general, the more Google profits. So the mere existence of a good, free, fast, and safe browser would inherently benefit Google at large. And that rationale is why we get to have Chromium at all.
Obviously things have evolved quite a bit since then, but I think the general pitch, that Chrome is primarily a value multiplier for the org at large rather than a direct value generator, is still broadly the case, and it's really not clear to me that it can exist as anything else without a fundamental reassessment of what it's trying to accomplish.
That's likely not the real rationale. If people browse with Chrome, then Google is the default. That is immensely valuable to Google, as Google's payments to Apple, Mozilla, and Android manufacturers show.
In theory Chrome could exist as an independent business if it were allowed to take bids for the search default from Google. But if the US govt broke up Google, they would likely also ban the sort of deal that would let Chrome be a viable business on its own.
By the time Chrome came out, Google was already in the position where everyone knew to set their default homepage to Google in IE, the same way they automatically go to install Chrome now.
I'm surprised no one has mentioned the pseudo-control Chrome gives them over web standards. They can implement experimental APIs in Chrome and immediately use them in their webapps.
Correct. Google's real concern was not that people wouldn't use them as their favorite search engine; it was that relying on Apple and Microsoft to be stewards of access to the web was a huge business risk because, hypothetically, if Apple or Microsoft decided to block google.com at the browser level (or pick your favorite equivalent scenario, like failing to implement a standard that Google absolutely was going to be relying upon to provide service), Google was screwed.
You typically want a mix of UDP and TCP (or sometimes a weird TCP-like monstrosity rebuilt on top of UDP).
Taking great care to design your data streams to be self-correcting (e.g., transmitting the world location a character is heading towards instead of which direction it is currently heading) can go a long way towards saving a ton of syncing headaches, and enables very efficient networking.
One-off events go over the TCP-like channel, but constant streaming data that naturally self-correct over time, like the example above, can benefit from being on the UDP channel.
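A minimal sketch of that split, with all of the message names invented for illustration: the per-tick movement update carries an absolute target position (so a dropped packet is simply superseded by the next one), while one-off events are routed to the reliable, TCP-like channel.

```python
# Hypothetical message types and routing rule; not a full transport implementation.
from dataclasses import dataclass

@dataclass
class MoveUpdate:
    """Streamed every tick over UDP. Absolute target, not a velocity delta,
    so late or dropped packets never accumulate error."""
    entity_id: int
    target_x: float
    target_y: float
    tick: int

@dataclass
class DoorOpened:
    """Fired once and must not be lost, so it rides the reliable channel."""
    door_id: int

RELIABLE_TYPES = (DoorOpened,)

def pick_channel(msg) -> str:
    """Route a message to the 'reliable' (TCP-like) or 'unreliable' (raw UDP) channel."""
    return "reliable" if isinstance(msg, RELIABLE_TYPES) else "unreliable"

assert pick_channel(DoorOpened(door_id=7)) == "reliable"
assert pick_channel(MoveUpdate(entity_id=1, target_x=10.0, target_y=4.5, tick=120)) == "unreliable"
```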