formercoder's comments | Hacker News

Because utilities are commodities and natural monopolies. CSPs are neither.


You’re correct. MCP is just a defined way of mapping string descriptions to functions.
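
A minimal sketch of that idea, assuming a hypothetical registry rather than any real MCP SDK (all names below are made up for illustration):

  # Toy illustration, not a real MCP implementation: a "tool" is
  # just a natural-language description mapped to a callable.
  from typing import Callable

  TOOLS: dict[str, tuple[str, Callable]] = {}

  def tool(description: str):
      def register(fn: Callable) -> Callable:
          TOOLS[fn.__name__] = (description, fn)
          return fn
      return register

  @tool("Send an email to the given address with the given body.")
  def send_email(to: str, body: str) -> str:
      return f"queued mail to {to}"

  # The model picks a tool by reading the string descriptions; the
  # host then looks up and invokes the matching function.
  desc, fn = TOOLS["send_email"]
  print(fn(to="a@example.com", body="hi"))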


I thought about it, and I think I know where the confusion might come from.

To me, postmark-mcp is not a part of MCP; it's a black box that talks MCP on one end. And its behavior is a software trust and distribution issue, not an MCP issue, and not specific to MCP (just like running any executable from a random source). I guess others may see it differently.


Right, but you have a good security posture and hygiene. MCP as a use case (not as a protocol) encourages risky usage by less security-minded people.


Humans drive without LIDAR. Why can’t robots?


Because human vision has very little in common with camera vision and is a far more advanced sensor, on a far more advanced platform (ability to scan and pivot etc), with a lot more compute available to it.


I don't think it's a sensors issue - if I gave you a panoramic feed of what a Tesla sees on a series of screens, I'm pretty sure you'd be able to learn to drive it (well).


yeah, try matching a human eye on dynamic range and then on angular speed and then on refocus. okay forget that.

try matching a cat's eye on those metrics. and it is much simpler than the human one.


Who cares? They don't need that. The cameras can have continuous attention on a 360 degree field of vision. That's like saying a car can never match a human at bipedal running speed.


I'm curious, in what ways is a cat's vision simpler?


less far sight, dichromatic color vision, over-optimized for low light.

a cursory glance did not find studies on cat peripheral vision, but I'd assume it's worse than a human's, if only because cats rely more on hearing


The human sensor (eye) isn't more advanced in its ability to capture data -- and in fact cameras can have a wider range of frequencies.

But the human brain can process the semantics of what the eye sees much better than current computers can process the semantics of the camera data. The camera may be able to see more than the eye, but unless it understands what it sees, it'll be inferior.

Thus Tesla spontaneously activating its windshield wipers to "remove something obstructing the view" (happens to my Tesla 3 as well), whereas the human brain knows that there's no need to do that.

Same for Tesla braking hard when it encountered an island in the road between lanes without clear road markings, whereas the human driver (me) could easily determine what it was and navigate around it.


Why tie your hands behind your back?

LIDAR-based self-driving cars will always massively exceed the safety and performance of vision-only self-driving cars.

Current Tesla cameras + computer vision are nowhere near as good as humans. But LIDAR-based self-driving cars already have way better situational awareness in many scenarios. They are way closer to actually delivering.


And what driver wouldn't want extra senses, if they could actually meaningfully be used? The goal is to drive well on public roads, not some "Hands Tied Behind My Back" competition.


Because any active sensor is going to jam other such sensors once there are too many of them on the road. This is sad but true.


And birds fly without radar. Still, we equip planes with it.


The human processing unit understands semantics much better than the Tesla's processing unit. This helps avoid what humans would consider stupid mistakes, but which might be very tricky for Teslas to reliably avoid.


Even if they could: Why settle for a car that is only as good as a human when the competitors are making cars that are better than a human?


Cost, weight, and reliability. The best part is no part.

No part costs less, it also doesn't break, it also doesn't need to be installed, nor stocked on every dealership's shelf, nor can a supplier hold up production. It doesn't add wires (complexity and size) to the wiring harness, or clog up the CAN bus message queue (LIDAR is a lot of data). It also does not need another dedicated place engineered for it, further constraining other systems and crash safety. Not to mention the electricity used, a premium resource in an electric vehicle of limited range.

That's all off the top of my head. I'm sure there's even better reasons out there.


These are all good points. But that just seems like it adds cost to the car. A manufacturer could have an entry-level offering with just a camera and a high-end offering with LIDAR that costs extra for those who want the safest car they can afford. High-end cars already have so many more components and sensors than entry-level ones. There is a price point at which the manufacturer can make them reliable, supply spare parts & training, and increase the battery/engine size to compensate for the weight and power draw.


We already have that. Tesla FSD is the cheap camera-only option and Waymo is the expensive LIDAR option that costs ~$150K (last time I heard). You can't buy a Waymo, though, because the price is not practical for an individually owned vehicle. But eventually I'm sure you will be able to.


LIDAR does not add $150K to the cost. Dramatically customizing a production car and adding everything it needs costs $150K. LIDAR can be added for hundreds of dollars per car.


> LIDAR can be added for hundreds of dollars per car.

Surprisingly, many production vehicles have a manufacturer profit under one thousand dollars. So that LIDAR would eat a significant portion of the margin on the vehicle.


But that’s sort of the point of the business model. Getting safe fully-self driving vehicles appears to require a better platform, given today’s limitations. You can achieve that better platform financially in a fleet vehicle where the cost of the sensors can be amortized over many rides, and the “FSD” capability translates directly into revenue. You can’t put an adequate sensor platform into a consumer vehicle today, which is what Tesla tried to promise and failed to deliver. Maybe someday it will be possible, but the appropriate strategy is to wait until that’s possible before selling products to the consumer market.


Not with Teslas. There are almost no options on a Tesla - it's mostly just colours and wheels once you've selected a drivetrain.


Teslas use automotive Ethernet for sensor data, which has much more bandwidth than CAN bus.


But also higher latency. Teslas also use a CAN bus.

But LIDAR would probably be wired more directly to the computer rather than going over a packet protocol.


Because our eyes work better than the cheap cameras Tesla uses?


problem is, expensive cameras that Tesla doesn't use don't work either.


They cost $20-60 per camera to make, depending on the vehicle year and model. They also charge $3,000 per camera to replace them…


I think his point was even if you bought insanely expensive cameras for tens of thousands of dollars, they would still be worse than the human eye.


They charge $3000 for the hours of labor to take apart the car, pull the old camera out, put the new camera in, and put the car back together, not for the camera. You can argue that $3000 is excessive, but to compare it to the cost of the camera itself is dishonest.


Fender camera is like $50 and requires 0 skill to replace. Next.


Chimpanzees have binocular color vision with similar acuity to humans. Yet we don't let them drive taxis. Why?


Chimpanzees are better than humans given a reward structure they understand. The next battlefield evolution is chimpanzees hooked up to intravenous cocaine modules, running around with .50 cals.


There are laws about mistreating animals. Driving a taxi would surely count as inhumane torture.


they can't understand how to react to what they see the way humans do

it has to do with the processing of information and decision-making, not data capture


This is plainly untrue, see e.g. https://www.youtube.com/watch?v=sdXbf12AzIM


I drove into the setting sun the other day and needed to shift the window shade and move my head carefully to avoid having the sun directly in my field of vision. I also had to run the wipers to clean off a thin film of dust that made my windshield difficult to see through. And then I still drove slowly and moved my head a bit to make sure I could see every obstacle. My Tesla doesn’t necessarily have the means to do all of these things for each of its cameras. Maybe they’ll figure that out.


Here's a good demonstration why LIDAR SHOULD be implemented instead of what Tesla tries to sell: https://www.youtube.com/watch?v=IQJL3htsDyQ


I wouldn't trust a human to drive a car if they had perfect vision but were otherwise deaf, had no proprioception and were unable to walk out of their car to observe and interact with the world.


And yet deaf people regularly drive cars, as do blind-in-one-eye people, and I've never seen somebody leave their vehicle during active driving.


I didn't mean that a human driver needs to leave their vehicle to drive safely, I mean that we understand the world because we live in it. No amount of machine learning can give autonomous vehicles a complete enough world model to deal with novel situations, because you need to actually leave the road and interact with the world directly in order to understand it at that level.


> I've never seen somebody leave their vehicle during active driving.

Wake me up when the tech reaches Level 6: Ghost Ride the Whip [0].

[0] https://en.wikipedia.org/wiki/Ghost_riding


They can. One day. But nobody can just will it to be today.


We crash a lot.


that's (usually) because our reflexes are slow (compared to a computer), or we are distracted by other things (talking, phone, tiredness, sights, etc. etc.), not because we misinterpret what we see


Well these robots can’t.


Wouldn’t this make the $200k drop below EBIT and thus increase accrual accounting profitability? Sure it could be less cash efficient but generally capitalizing expenses is net favorable.
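
A rough sketch of the mechanics, with made-up numbers and taxes ignored, under one common treatment (capitalize, then depreciate over a useful life):

  # Hypothetical figures only. Expensing $200k hits operating income
  # in full right away; capitalizing it spreads the hit out as
  # depreciation over the asset's useful life.
  revenue = 1_000_000
  spend = 200_000
  useful_life_years = 5

  ebit_if_expensed = revenue - spend                          # 800,000
  ebit_if_capitalized = revenue - spend // useful_life_years  # 960,000

  print(ebit_if_expensed, ebit_if_capitalized)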


I work at Google but not on this. We do offer Gemini with Google Search grounding which is similar to a search API.


??????


How much do you pay people to use this?


Pretty much the Reserved Instance Marketplace


Did not know about that! Can you recommend an approach, any cautionary tales? Do clouds beyond AWS have similar?


Think the massive increase in demand is due to mass inference of open source LLMs? Or is the transformer architecture driving mass inference of other models too?


You might be thinking too far into this. The biggest customers are bulk buyers that are either training on a private cluster (eg. Meta or OpenAI), or selling their rack-space to other businesses. These are the people that are paying money and increasing the demand for GPU hardware; what happens to the businesses they provide for almost doesn't even matter as long as they pay for the compute. The "driver" for this demand is the hype. If people were laser-focused on the best-value solution, then everyone would pay for OpenAI's compute since it's cheaper than GPU hosting.

The real root of the problem is that GPU compute is not a competitive market. The demand is less for GPUs and more for Nvidia hardware, because nobody else is shipping CUDA or CUDA-equivalent software. Thus the demand is artificially raised beyond whatever is reasonable since buyers aren't shopping in a reactive market. Basically the same story as what happened to Nvidia's hardware during the crypto mining rush.


> then everyone would pay for OpenAI's compute since it's cheaper than GPU hosting.

This is absolutely not true. The gap is narrowing as providers (Google, Anthropic, DeepSeek) introduce cross-request KV caching, but it's definitely not true for OAI (yet).
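
Back-of-the-envelope of why caching narrows the gap (the per-token rates below are placeholders, not any provider's real pricing):

  # Hypothetical rates, in dollars per input token.
  full_rate = 3.00 / 1_000_000
  cached_rate = 0.30 / 1_000_000  # assume ~90% discount on cache hits

  shared_prefix_tokens = 50_000  # system prompt + context reused per request
  new_tokens = 2_000             # unique suffix per request

  cost_uncached = (shared_prefix_tokens + new_tokens) * full_rate
  cost_cached = shared_prefix_tokens * cached_rate + new_tokens * full_rate

  print(f"${cost_uncached:.4f} vs ${cost_cached:.4f} per request")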


“Own” is likely a very simplistic term for the complex agreement in place between SpaceX and NASA. I’m sure SpaceX retains all of the developed IP, but NASA can “do whatever they want” with the actual ship.


Third try always


I’m a Googler who doesn’t do anything remotely close to setting capital return policy. Just remember Alphabet has been buying back tens of billions for a while. Dividends are just a different capital return mechanism.


Importantly, dividends devalue issued unvested RSUs.


I thought this was the case, but Googlers will receive Dividend Equivalent Units on unvested GSUs on each dividend payment date.


Could you please expand on this? If a person got issued RSUs but they have not vested yet, wouldn't they be happy that the RSUs are going to be worth more by the time they get vested?


No, because at vest the shares would be worth that amount less the dividends paid out during the vesting period.


In a vacuum where precise numbers do not exist, maybe.

In this real scenario, if someone's GOOG shares were vesting at an earlier value (taking a rough glance-average of GOOG YTD, say $145), they will have lost a year's worth of dividends at $0.20 per share. However, the current share price is $175.

So, through this maneuver, a person holding N GOOG shares will lose at most 3 quarters of dividends: N × $0.20 × 3 = N × $0.60.

But they will have gained whatever the stock has appreciated, which at this moment in time works out to: N × ($175 − $145) = N × $30.

What am I missing which would make the scenario above result in OP's claim of "dividends devalue issued unvested RSUs"?

EDIT: This also fails to take into account "Dividend Equivalents (DEs)", which are not factored above, and would yield extra income to the person that owns unvested shares.
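
The same arithmetic as a quick script, using the figures from the comment above:

  # Per-share figures from the comment: $0.20 quarterly dividend,
  # shares assumed to vest at ~$145 and trade at $175 today.
  dividend_per_quarter = 0.20
  quarters_missed = 3
  vest_price, current_price = 145.0, 175.0

  drag_per_share = dividend_per_quarter * quarters_missed  # $0.60
  gain_per_share = current_price - vest_price              # $30.00

  print(f"dividend drag ${drag_per_share:.2f} vs appreciation ${gain_per_share:.2f}")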


Corporate finance theory says that when a dividend is issued, the price of the stock goes down by an equivalent amount.
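
In the idealized no-arbitrage model (taxes and market noise ignored), that looks like:

  # On the ex-dividend date the share price drops by the dividend, so
  # a holder who doesn't receive the cash (or an equivalent unit, like
  # a DEU) simply loses that value.
  cum_dividend_price = 175.00
  dividend = 0.20
  ex_dividend_price = cum_dividend_price - dividend
  print(ex_dividend_price)  # 174.80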

