GreedIsGood's comments | Hacker News

Of course Amazon cares. They measure fulfillment time religiously.

Amazon is amazingly well run.


Amazon is only well run for the shareholders and executives. It is not well run for the people filling the boxes. It seems downright sadistic. Like the executives don't even see them as people.

And they sent me a 43" Samsung TV when I ordered a 42" LG OLED. How the hell can their many billions of dollars of IT investment not automatically scan the barcode on the box, or weigh the box, or use computer vision, and notice this error before they sent the wrong TV out?


If you buy this [0] Startech 25U open server rack (unsure about other sizes), there is a non-zero chance you’ll receive a pallet of them – 9 in total – for the price of one. It’s a running joke in r/homelab. It happened to me, and then I found out it happened to a lot of people.

Also, for this reason, it’s somewhat common to see them for sale at a steep discount in r/homelabsales. Only makes sense if you’re within driving distance, but hey – cheap rack.

[0]: https://a.co/d/2WJiDWz


A couple of years ago, my (now) wife ordered her mother a battery case for Christmas because she had seen the same one at our apartment and mentioned wanting one. Since it was ordered using the gift wrapping options, we didn't open it when it arrived, and she gave it to her mother to unwrap. Her mother was quite confused when she unwrapped her gift and found a sushi-making kit. It's become a recurring joke in our family now that we need to check any gifts ordered from Amazon in advance to avoid accidentally giving someone a sushi kit.


> Like the executives don't even see them as people.

The username above was GreedIsGood. I think they think the executives are doing the right thing.


If they really cared they would not have gotten Purolator to deliver my order in the middle of a Canada Post strike.


None of this addresses perhaps the main issue. Bell Labs predated the venture capital revolution of the 1970s.

The key insight of venture capital was that firms like Bell Labs were holding on to very valuable resources at compensation rates that were far below what those resources could generate if they were empowered to create their own firms.

This was tremendously successful. While we have no counterfactual, innovation in the US blossomed over the last 50 years. The fundamental research may have languished (I would probably disagree; Moore's law didn't happen through magic, for example), but a tremendous number of companies, including all of the large companies we know of today, which provide all of the services of the modern world, were a result of venture capital.


Moore's law has been dead in the USA for almost 20 years. The improvements we see come from research in Asia and the Netherlands.

Also the idea that US tech companies are doing much innovation is debatable. They are designing products for the international markets, but the core technologies are more and more coming from Asia.


ASML is using basic research that was funded by the US government; that is why the US has the ability to veto/restrict to whom ASML sells the encumbered EUV and related tech innovations.


Yes, the research was initially done in the US. The current research and development is now done by the group in the Netherlands. There is nothing in the US that comes close to what they have now.


It's not just ASML that is involved in R&D for new chip nodes. And ASML itself has a large part of its operations in the US.


One of us has our geography confused, I think.


VC is great for anything that doesn't need continuous cooperation from more than 1 domain of expertise, or where no expertise is even needed at the outset. Meritocracy, but atomized meritocracy.

Hence, great for software (and certain focal points in hardware scaling) --- not innovation in general, but in fact solidifying a technological caste system: designers > SWE > HWE > every other kind of technical expert

Otherwise, you do need a Bell Labs to make sure your various experts are talking to one another and not constantly in fear of backstabbing by the management, their interns, other experts. (Especially considering in universities these days, rival groups in the same department have to make the same kind of calculations before cooperating)

I can see that YC does pay lip service to building community and such, but perhaps VC is too successful for its own good.


I'm not sure that last part lands quite the way you meant it to.


Relevant username, I guess


Prosecutors have wide-ranging discretion; our laws are complex and subject to a tremendous amount of interpretation.

Without protection the executive would be at the mercy of the judicial branch. This is clearly an inversion of power.

Perhaps the solution is to clean out our legal system wholesale, so that it is obvious to all involved whether an action or set of actions could result in prosecution in the future. Such an action was not within the power of the Supreme Court.


BTW, great title.

There have been multiple awful titles for this ruling; yours was exactly correct.


My guess is that Ilya is the one that saddled OpenAI with its insane structure.

He's brilliant, no doubt, but he shouldn't be in leadership.


Insane structure? I was unaware. What do you mean?


This is plausible. Elon is a fantastic recruiter, and he recruited Ilya for OpenAI. There are reports of xAI buying enormous numbers of GPUs, and Elon's level of control of his companies means that Ilya's recklessness isn't an issue.

It's a match. Probably the best match possible.


> Ilya recklessness...

This allegation calls for more context.


Seems like Elon recklessness is an issue since he's a drug addict who's only started an AI company because he thought the other ones weren't racist enough.


I come to HN for the quality of the discussions, please don't comment like this.


I'm sorry to inform you but constantly doing ketamine makes you paranoid and racist, which makes you bad at running a tech company.

Grok was explicitly built to not be woke, but of course still ended up pretty woke. That's because LLMs come out liberal by default, since the internet is liberal, whereas racists often think they're secretly right and the AI has been "censored" to shut it up.


I'm not sure if I agree, but I much prefer this comment to your previous one. Cheers!


He showed exceptionally bad judgement, and judgement is perhaps the most important characteristic of high-level employees.

He's brilliant, which means someone will take a leap of faith, but he badly, badly damaged his brand as a leader going forward.


Bad judgement? Sam Altman is a prolific liar who attempted to oust a board member by spreading different lies to different people. He's not even an engineer! He has established a cult of personality and popularity, and that's it. They were absolutely right to try to oust him. The only mistake was in doing so in such a ham-fisted manner.


Starting something without a very good plan or being unable to execute on it is a sign of bad judgment.


> Bad judgement?

> Sam Altman is a prolific liar who attempted to oust a board member by spreading different lies to different people

Correct, it would be bad judgement. Because if he really believed that statement, that Sam Altman is a hyper-competent liar and manipulator, then doing what Ilya did just led to Sam getting the keys to the kingdom.

This shows extraordinary incompetence from him and the rest of the board.


Charlie has been at MSFT a little while now, I suspect he knows how the machine works.

I would expect this to result in lower feature velocity. In theory features are tied to increasing revenue. If so, I wonder if he is actually willing to make that trade off.


If the US wants to argue reciprocity then it should in a trade bill.

Requiring TikTok to sell is an overreach by the state. It will lead us down a path where companies will be strictly regional.

Not a fan.


Is this worse than humans?

(edit) I see that the article included that FSD is 5x safer than humans, which may be valid.

The article then said : "However, the only reason it is safer than the US average is that it is supervised by drivers who ideally pay extra attention when using FSD."

I am positive that they had zero data to back that assertion.


Historically these kinds of assertions have been quite misleading. If FSD is mostly used on highways and other "low complexity" environments and then you compare that to human collision rates in all environments, of course FSD will be "safer". Especially if you're measuring collisions/mile vs collisions/hour. Then there are other confounding factors like how Teslas are:

* Generally newer than average.

* Generally owned by more affluent drivers than average.

* Probably used predominantly in urban areas instead of rural ones (to be clear this might unfairly tilt the stats against Tesla thanks to the highway thing).

I'm not sure I've seen a good "apples to apples" comparison on this that corrects for these confounding factors.
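The road-mix confounder above can be made concrete with a toy calculation (all numbers invented for illustration; real per-environment crash rates differ):

```python
# Toy illustration of how road mix alone can manufacture a per-mile
# safety advantage, even when FSD and humans are equally safe in
# every individual environment. All rates here are invented.

# Assumed crash rates per environment, identical for FSD and humans:
CRASHES_PER_MILLION_MILES = {"highway": 1.0, "city": 5.0}

def aggregate_rate(mile_mix):
    """Fleet-wide crashes per million miles, given the fraction of
    miles driven in each environment (fractions sum to 1)."""
    return sum(frac * CRASHES_PER_MILLION_MILES[env]
               for env, frac in mile_mix.items())

fsd_mix = {"highway": 0.9, "city": 0.1}    # FSD engaged mostly on highways
human_mix = {"highway": 0.4, "city": 0.6}  # typical all-conditions human mix

fsd = aggregate_rate(fsd_mix)       # 0.9*1.0 + 0.1*5.0 = 1.4
human = aggregate_rate(human_mix)   # 0.4*1.0 + 0.6*5.0 = 3.4

# Headline comparison: humans crash ~2.4x more per mile, despite FSD
# being no safer than humans in either environment.
print(f"human/FSD per-mile ratio: {human / fsd:.2f}")
```

Same effect as Simpson's paradox: the aggregate comparison flips unless you condition on environment, and switching the denominator from miles to hours would shift the numbers again, since highway miles accrue faster.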


Tesla's using statistical sleight of hand with that stat; FSD can only be engaged in certain scenarios, and they're inherently newer vehicles than the national average. Comparing Teslas on the highway in California against 20 year old beaters in snowstorms in New England is... not reasonable.

It's also entirely self-reported, which given that they've knowingly lied about range, is itself a bit concerning... https://www.reuters.com/investigates/special-report/tesla-ba...


Do you agree that without "autopilot" this incident would not have happened?


I don't think there's any way of knowing that for sure.

People look at their phones while driving with and without autopilot.


People glance at distractions, aware that they are responsible for driving their car. Believing that your car can drive itself is something new.


What about this current incident? In your opinion?


Barring a time machine, we've very little way to know when it comes to one specific incident. Again, people distracted by their cell phones kill people on a very regular basis in vehicles without autopilot.

If we had better aggregate stats we could compute the statistical likelihood of it, but this stuff isn't tracked anywhere near as closely or completely as, say, aircraft accidents. I don't trust Tesla's self-reported cherry-picked stats.


The question is irrelevant when it comes to liability and responsibility.

