Hacker News | fsh's comments

This problem was solved more than a decade ago by radar sensors (standard on many mid-range cars at the time). They detect imminent collisions with almost perfect accuracy and very few false positives. Having better sensor data is always going to beat trying to massage crappy data into something useful.

Radars are not as good as you think. They generally can't detect stationary objects, have problems with reflections, most of them are VERY low resolution, and so on.

The "with almost perfect accuracy and very little false positives" part is not true.

If you look at Euro NCAP data, you'll see that most cars are not close to 100 in the Safety Assist category (and Teslas, with just vision, are among the top). And these Euro NCAP tests are fairly easy and idealized. So it's clearly not a solved problem, as you portray.

https://www.euroncap.com/en/ratings-rewards/latest-safety-ra...


> They generally can't detect stationary objects, have problems with reflections, most of them are VERY low resolution, and so on.

Radar can absolutely detect a stationary object.

The problem is not, "moving or not moving", it's "is the energy reflected back to the detector," as alluded to by your second qualification.

So something that scatters or absorbs the transmitted energy is hard to measure with radar because the energy doesn't get back to the detector completing the measurement. This is the guiding principle behind stealth.

And, as you mentioned, things with this property naturally occur. For example, trees with low hanging branches and bushes with sparse leaves can be difficult to get an accurate (say within 1 meter) distance measurement from.


They can detect stationary objects, yes. But there's so much clutter from things like overpasses, road signs, and other objects that confuse the radar that, for things like adaptive cruise control, stationary objects are often intentionally filtered out or assigned much lower priority. So you detect moving objects (which stand out because of their Doppler shift).

Well, if we're talking about radar in a moving car, it's the other vehicles moving at the same speed that appear stationary.

Non-moving vehicles are seen as approaching you at whatever speed you are moving at. Along with all the other things you mentioned.

So they all have a Doppler shift, but the "stationary" things approaching your car at your speed actually have a much higher shift than the traffic around you.
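A back-of-envelope sketch of that effect, using the standard two-way Doppler formula with illustrative numbers (77 GHz carrier, 30 m/s own speed; none of these values come from the thread):

```python
# Two-way Doppler shift seen by a radar on a moving car.
# Illustrative values only: 77 GHz automotive-band carrier, 30 m/s (~108 km/h).
C = 3.0e8       # speed of light, m/s
F0 = 77e9       # carrier frequency, Hz
V_CAR = 30.0    # own speed, m/s

def doppler_hz(closing_speed):
    """Two-way Doppler shift for a target closing at `closing_speed` m/s."""
    return 2.0 * closing_speed * F0 / C

# A truly stationary object (bridge, stalled car) closes at your full speed:
print(doppler_hz(V_CAR))        # ~15.4 kHz
# Traffic ahead doing 28 m/s closes at only 2 m/s:
print(doppler_hz(V_CAR - 28.0)) # ~1.0 kHz
```

So the stalled car "lights up" at more than ten times the shift of surrounding traffic, which is exactly why Doppler filtering also sweeps up overpasses and road signs.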


Imagine if you could only see in a very narrow portion of the visible light spectrum, like only green or something. That's kind of how radar "sees" (I'm grossly over-simplifying here but point is it doesn't see the way we do).

It's hard to detect something sticking off the back of a truck, or a motorcycle behind a vehicle, without false positives triggering off other stuff and panic braking at dumb times, something early systems (generically, not any particular OEM) were known for, which is why they were mostly limited to warnings rather than actual braking.

And while one can make bad-faith arguments all day about that not technically being the fault of the system doing the braking, allowing such systems to proliferate would be a big class-action lawsuit (and maybe even a revision of how liability is handled) waiting to happen.


I guess it boils down to this.

Earlier in the thread people were saying removing lidar was bad because multiple sensor types are good, presuming the cameras stay either way; nobody is proposing replacing cameras with radar. I agree with this. It's usually trivially easy to defeat any one sensor type with a corner case, as your example shows, regardless of the sensor type. They all have one weakness or another.

That's why things like military systems have many sensor types. They really don't want to miss the incoming object so they measure it many different ways. Defeating many different sensor types is just way harder and therefore more unlikely to occur naturally.

And yes, control systems can absolutely reliably combine the input of many sensors. This has been true for decades.

Frankly, I'm surprised more of these systems don't take advantage of sound. It's crazy cheap, and society has been adding sound alerts to driving for a long time (sirens, car horns, train horns, etc.)


> Imagine if you could only see in a very narrow portion of the visible light spectrum, like only green or something.

No, that's how lidar works. Lidars have a single frequency and a very narrow bandwidth. Automotive radars have a bandwidth of 1-5 GHz. They operate around 80 GHz, which is very well reflected by water (including people) and moderately reflected by things like plastic. 80 GHz is industrially used to measure levels of plastic feedstock.

Compare TSA scanner images, which are ~300 GHz: https://www.researchgate.net/figure/a-Front-and-back-millime...

You are correct that most automotive radars, like the Bosch units [1], are very low detail though. Most of them don't output images or anything; they run proprietary algorithms that identify the strongest returns (usually a limited number of them) and calculate the direction and distance to each. Unlike cameras and lidars, they don't return raw data, so naturally, when building driver assistance, companies relied on cameras and lidar instead. Radar progress was instead driven by the sensor manufacturers, and with smaller incentives the progress has been slower.

[1]: https://www.bosch-mobility.com/en/solutions/sensors/front-ra...


I was talking about radar

Backing this up: automotive radar uses a band at ~80 GHz. The wavelength is ~3.7 millimeters, which lets you get incredible resolution. Not quite as good as the TSA airport scanners that can count your moles through your shirt, but good enough to see anything bigger than a golf ball.

For a long, long time automotive radar was a pipe dream technology. Steering a phased array of antennas means delaying each antenna by 1/10,000s of a wave period. Dynamically steering means being able to adjust those timings![1] You're approaching picosecond timing, and doing that with 10s or 100s of antennas. Reading that data stream is still beyond affordable technology. Sampling 100 antennas 10x per period at 16-bit precision is 160 terabytes per second, 100x more data than the best high-speed cameras. Since the Fourier transform is O(n log n), that's tens of petaflops to transform. Hundreds of 5090s, fully maxed out, before even running object recognition.
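Those throughput figures check out with simple arithmetic (using the same assumed numbers: 100 antennas, 10 samples per 80 GHz carrier period, 16-bit samples):

```python
# Back-of-envelope data rate for a fully sampled phased array.
# All figures are the assumptions stated above, not measured values.
F0 = 80e9                  # carrier frequency, Hz
ANTENNAS = 100
SAMPLES_PER_PERIOD = 10
BYTES_PER_SAMPLE = 2       # 16-bit precision

sample_rate = F0 * SAMPLES_PER_PERIOD              # per antenna: 8e11 samples/s
total_rate = ANTENNAS * sample_rate * BYTES_PER_SAMPLE
print(total_rate / 1e12)   # 160.0 terabytes per second
```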

Obviously we cut some corners instead. Current techniques way underutilize the potential of 80 GHz. Processing power trickles down slowly and new methods are created unpredictably, but improvement is happening. IMO radar has the highest ceiling potential of any of the sensing methods, it's the cheapest, and it's the most resistant to interference from other vehicles. Lidar can't hop frequencies or do any of the things we do to multiplex radar.

[1]: In reality you don't scan left-right-up-down like that. You don't even use just an 80 GHz wave, or even just a chirp (a pulsing wave that oscillates between 77-80 GHz). You direct different beams in all different directions at the same time, and more importantly you listen from all different directions at the same time.


This isn't true. You can try using adaptive cruise control with lane-keeping on a radar-equipped car on an undivided highway. Radar is good at detecting distance and velocity, but can't see lane lines. In order to prevent collisions, you would need to know precisely the road geometry and lane positions, which may come from camera data, and combine that information with the vehicle information.

But what if you were on Ketamine and thought you could resolve it with a camera?

Radar is great for detecting cars, not as great for detecting pedestrians.

Are we sure Teslas don't have radars? We know they don't have lidars, but that's irrelevant.

Yes, they removed radar from their vehicles in 2021: https://www.tesla.com/support/transitioning-tesla-vision

(Also I wouldn't say it's _irrelevant_ that they don't have lidar, as if they did it would cover some of the same weaknesses as radar.)


Tesla Model 3's don't have radars. They had them at the beginning of the run, and removed them.

Yes. We are sure.

Indeed, the business model of selling a worse and much more expensive version of something that everybody already has [1] is a bit questionable.

[1] https://en.wikipedia.org/wiki/RDRAND


They now sometimes dub the audio track. The result is about as horrible as one would imagine. Whoever decided to turn this on by default clearly didn't give a damn.

It would be interesting to compare the accident statistics with European climbing gyms where belay tests are not common.

The coach in the video has some of the worst belay technique I have ever seen. Unfortunately, this is somewhat common among older climbers who learned on the first-generation Grigri in the 90s. The technique Petzl recommended back then is very safe (essentially using the Grigri like an ATC) but does not allow giving slack quickly. This made it completely useless for any kind of ambitious sport climbing, and people started coming up with often extremely dangerous workarounds. Petzl upgraded their recommendations a long time ago, but some people are resistant to change ("it never failed for me"...). Hopefully this video can convince at least some of them to finally adopt the proper technique.


I thought it was interesting when I looked up US gyms that they require a belay test.

In Austria, at the gyms I went to, you just had to sign a form saying that you know how to climb top-rope, lead, and belay.


I’m not sure why people are making a big deal of it. At my gym it took maybe 3 minutes. You tie a knot and show you know how to take up slack. And it only needs to be done once.

There is a second test for lead but most people take a class and get the lead card during the class.


In Norway there is a lead climbing certification. You attend and pass a weekend course including a final test, then you get a card. In order to be allowed to belay/lead climb in a gym you have to present this card. You can bring friends and let them top rope without the card, at least in some gyms, but the belayer needs to have the card. I think you can also climb on autobelay without the card.


I have climbed in many gyms in Norway without this card. YMMV. But tbh there is so much good climbing around Oslo with a variety of rock as well. I don’t know why I ever climbed in a gym when I lived there.

The US is extremely sue happy - US courts will often not recognize the ‘of course it’s obviously dangerous’ defense without extensive warning in writing - and even then, there is a significant amount of due diligence that needs to happen.

Most of the rest of the world goes ‘meh, don’t be so obviously dumb then’ and kicks the lawsuits out.


>The US is extremely sue happy - US courts will often not recognize the ‘of course it’s obviously dangerous’ defense without extensive warning in writing - and even then, there is a significant amount of due diligence that needs to happen.

What does that have to do with US gyms requiring belay tests, which are a bunch of steps that don't involve "extensive warning in writing"?


Because they have evidence "in writing" that everyone at the gym had to pass a test that proves they know how to properly and securely belay a climber. In the event that there is an accident, the liability falls completely on the belayer and/or the climber and not the gym itself for allowing someone to participate in something that is "obviously dangerous" without demonstrating they have the ability to do it properly.


What U.S. courts will look for is an industry standard for safety, even if implicit, and then see if you are meeting or exceeding that standard.

In the U.S., for climbing gyms, part of the standard is a belay test. In the article, Mayfield talks about trying to get the industry self-regulating before the government steps in. This is basically how that works: industry founders or leaders establish some procedures for safety, prove them out over time, then insurance companies implicitly adopt them and everyone else follows.


> What does that have to do with US gyms requiring belay tests, which is a bunch of steps that doesn't involve "extensive warning in writing".

It lowers insurance bills. If you don't let anyone climb without having done a belay test and putting that paper in a cabinet for 10+ years, then you can get cheaper insurance.

It's the same reason why some gun ranges won't let random people in without joining up and going through a safety intro thing - cheaper insurance.


Insurance.

Which requires due diligence.

Which means there is some guarantee that people belaying at the facility meet some basic standard of skill, so that people are not being dropped all the time and then turning around and suing the facility for negligent supervision/creating a dangerous environment.


QKD also needs key distribution to prevent MITM attacks. It behaves almost exactly like a classical stream cipher, except that it cannot be packet-routed, is many orders of magnitude slower, and is prone to side-channel attacks.


This section from the marketing blurb doesn't sound too promising:

> When atmospheric conditions disrupt the light, our adaptive rate and hybrid architecture maintains the connection, with minimal downtime.

In the long run, all these wireless technologies (satellite or optical/microwave terrestrial links) will have a very hard time competing with simply laying down some optical fiber.


Some of the use cases they are crowing about on their site cover temporary things: backhaul for major-but-temporary events, tethered-drone-mounted units for emergency disaster recovery where a cell site is taken out, etc. Those are the sorts of things where laying 20 km of fibre for just a day or two of use just isn't going to happen, but a temporary laser link you can get up and running in an hour or two would be great.


What kind of data rates and distances are they talking about that isn't served by existing products? For example, you can buy a 20km range, 2Gbps wireless point to point link for a flat $3000 today: https://store.ui.com/us/en/category/wireless-airfiber-ptp/pr...

What they mention in the article is up to 20 Gbps, but they'd have to be pretty dang cheap to outcompete just buying 10 of the existing options.


The issue is that you can't put 10 of your 2 Gbps wireless links next to each other. You quite possibly end up with < 2 Gbps as interference kills your signals (unless you put the transceivers so far apart from one another that you sort of defeat the purpose). That said there are other wireless solutions that can get you > 10 Gbps over > 20 km already (not sure about 20 Gbps, but I wouldn't be surprised). The issue is available spectrum, i.e. you can't just setup the link, because the spectrum doesn't belong to you. Not a problem for optics.


Elsewhere in the thread it suggests ~$30k for one link. Which is exactly in line with buying 10 of the ubiquiti devices.

But I think you would need 20 of them, 10 on each end? Plus extra install, networking equipment, etc. Which would make Taara significantly better.


Competition is a good thing. Perhaps now those $3000 devices will need to be less expensive to remain competitive?


That's not the market they're going for though. They're more of a competitor to Starlink

There are also obvious applications in places where weather is more predictable. There are plenty of areas and small towns in the Great Basin region that have basically no internet. This would be a quick and easy way to set those places up with internet, with more reliability than something like Starlink.


But why would these not be places already served by terrestrial wireless internet service providers? It seems like it would be much easier and generally more attractive to serve locations like this using, for example, 5 GHz.

Normally, the lack of (near) line-of-sight is one of the biggest limiting factors for those sorts of deployments, but that would also have to be solved for any place being served with FSO.


The problem with selling inferior technologies is that sooner or later people are going to stop using them (even in the Great Basin region). Not exactly a recipe for success.


What are you labelling inferior here? Both Taara and Starlink?

Yes, both of those are inferior to wired infrastructure, but they clearly have their use cases. If you were to start a mining project that lasts a few months or even a few years but is expected to eventually be finished and packed up, Taara seems like exactly what you need.

It can also outcompete Starlink because it won't require constantly replacing decaying satellites. That also means less space debris pollution, which is increasingly becoming a huge concern.


A point-to-point terrestrial bridge, a large piece of equipment that costs $5,000+, needs a professional to install it, and works on either free-space optics or V-band or E-band radio, is not in any way a competitor to Starlink. It's more a place to take a 1 to 10 Gbps Ethernet connection as a link between two towers or roofs that can 'see' each other, an alternative where laying fiber may be cost-prohibitive or would take too long to build (or both).

Assuming this thing doesn't utterly fail in rain at a moderate distance, this would be something you use to feed a POP which then redistributes service to end users by some totally other technology (5/6 GHz band PTMP radio system, GPON, XGSPON, G.fast on copper, docsis3/docsis3.1, etc)


It's interesting at 36 to look back at what I think would disrupt connectivity a decade ago:

- Google Fiber (it wasn't possible to do it cheaper than the incumbents, so it devolved into a standard incumbent; why would a 40%-margin company invest billions to get Comcast's peak profit margin of ~15%?)

- Starry Internet (too expensive to build out, I have it and it's good, but the company certainly didn't scale)

- 5G in general (strictly inferior to incumbent, speed isn't faster, latency is higher, not as reliable)

It's hard for me to wrap my mind around why this would work at all, sounds like a more-susceptible-to-bad-conditions version of Starry.

I keep wondering how people make Starlink work, my understanding is the connection degrades then stops then reconnects every...idk, 5 minutes? as the satellites go overhead.


The key breakthrough for 5G was allowing ~10x the number of devices to connect to a node compared to 4G. 5G is what allowed the toppling of data caps that was by far the #1 consumer complaint for years. 4G just couldn't handle heavy loads well, so data caps were needed to constrain demand.

Telecos aren't going to say this out loud, but it's the real reason they were so celebratory about 5G, despite it coming off as just a renamed 4G to the average user.


Why would they not be loud about it? I think "We've built out 5G so we can get rid of your data caps!" is a message any telecom would want to broadcast out, unless I'm missing something


They don’t want to get rid of your data caps. They want to get rid of their data bandwidth limitations.


> 4G just couldn't handle heavy loads well, so data caps were needed to constrain demand.

In many parts of the world uncapped data has been the norm since around GPRS.


Could that be because they aren’t as densely populated by users so even if everyone with a phone has no data cap, they won’t overload the network? Which countries were that for example?


Basically every European country? They've all had much larger data caps than North America for years preceding 5G, and most are quite densely inhabited.


I’m from Germany and I don’t know a single person with unlimited mobile data. That’s very rare here.


And yet probably everyone you know in the EU has cheaper Internet per GB than folks in the US. I have 2 SIM cards; one provider charges me $10/GB, while the other has a 2 GB package for $6.


In Finland I pay 20€/mo for unlimited data (bandwidth capped at 200 Mbps). With some shopping around it can be cheaper/have more bandwidth. The pricing has been similar at least since 3g. And I recall having a similar deal in the UK five years ago.

There's also 28 GB EU roaming per month included, and 2.23€/GB after that.


Both of those prices are considerably more expensive than what I pay for service in the US. Even the cheaper one is more than 2x more expensive than what I pay per gig, including unlimited calls and texts + roaming to a lot of North America.


Who's your provider if you don't mind me asking?


Mint. 15GB for $20/mo works out to $1.33/GB while your 2GB plan is $3/GB.

But there are other MVNOs out there like tello which also have a 2GB/$6 plan in the US, and other MVNOs which offer unlimited data for like $25-30/mo like visible and US Cellular.

Plenty of cheap MVNOs out there these days.


Tello is actually what I use for my secondary data, Fi is my main (mostly because I travel somewhat and the data costs the same in all the destinations I care about without having to juggle SIM cards).

I'm not a good case study because I rarely use more than 2gb in a month, so Mint would come closer to $10 a gig... :)


£10/month pay-as-you-go SIM for 30 GB here in the UK, and I'm sure there are better offers.


As I deployed Starlink in an extremely obstructed spot last year for a few weeks, where multi-second dropouts were quite common... it impressed me JUST HOW MANY satellites they have up there, and just how usable my dish was despite only having ~60% of its field of view clear. It's switching satellites much more often than every five minutes.

The built-in obstruction mapping tool quickly demonstrated that though each satellite represents a tiny slice of sky... over the course of the day you're seeing a vast number of satellites at a high variety of spatial angles and orbits.

I wouldn't recommend that obstructed situation to anyone (and it's going in a much clearer location this coming summer), but the users I was supporting reported it as a far, far better solution than the 4G LTE they'd been depending on before. Not a patch on fiber, but a great solution for an awkwardly remote property.


> I keep wondering how people make Starlink work, my understanding is the connection degrades then stops then reconnects every...idk, 5 minutes? as the satellites go overhead.

That is not a correct understanding for how the Starlink network behaves today[0]. While I can't speak for using it outside of the U.S., I have not faced any interruptions outside of a few times during very severe weather.

[0] in the early days of the constellation, there were sub-second or a few second drops when there was no satellite overhead. But this dropped off very quickly once the constellation size increased.


I see, tyty (been wondering for quite some time)


For Starlink the User Terminal (antenna a.k.a. "Dishy") is a phased array. It tracks the satellite as it passes from west to east. Each satellite is in view for around 15 seconds - the phased array instantly flips from east to west and acquires the new in-view satellite in microseconds. There's no degradation in almost all 'flips' especially if the U.T. has an unobstructed view of the sky.


From my perspective, Google Fiber 100% disrupted connectivity - it woke the incumbents up and made them offer competitive Fiber. In that sense, they succeeded! My last three connections from my last three ISPs have all been gigabit (one of which was Google Fiber, easily the best internet I've ever had). I think they're expanding again, too, though I wish they had stayed as aggressive with rollout as they started.


That's a really good point, back home, Verizon didn't bother with Fios investment until then.


Ironically, Google Fiber purchased a wireless provider - Webpass - back in 2016 which is deployed in parallel to their fiber offerings.


> 5G in general (strictly inferior to incumbent, speed isn't faster, latency is higher, not as reliable)

I'm guessing this is a US thing? In Europe, 5G is definitely faster while latency is on par with 4G. YMMV between EU countries though.


You're right, it's definitely better than 4G; my wording was unclear. I meant it more in the sense of "Would I make this my home ISP?" than "how did 5G go?" (I would have thought cell providers would have 20-30% of the market by now. Ah, the follies of youth...)


TBH I think a lot of it is many people still don't understand the product or misunderstand their actual needs/usage. Plenty of "normie" households can easily meet all their needs with a decent 5G fixed wireless install. As we see more cord cutting we'll probably see continued growth in fixed wireless.

FWIW, most other ISP types are treading water in terms of overall subscribers while the only real growth overall in new subscribers is fixed wireless. Your gut probably wasn't wrong that fixed wireless will probably grab 20-30%+, but just timescale-wise off a bit.

https://www.opensignal.com/2024/06/06/5g-fixed-wireless-acce...


5G home internet is preferred in Australia where fibre isn't present.

Even at my house, where I have FTTH, my mobile 5G connection is consistently faster; both bandwidth and latency are superior on my phone at my home location.

Of course, the pricing is structured so you're better off paying for both: either fixed internet plus a mobile phone plan, or fixed 5G and a mobile phone plan, depending on what is available at any specific location, but typically not all three options.

Thank you centrally planned infrastructure.


It's crazy that almost every house is able to be attached to a pipe carrying high pressure water that will flood if it is broken or attached wrong, thick wire carrying high current that will shock you, a pipe containing explosive gas, and a six inch cast iron pipe full of poop, but adding one more connection to a tiny thin strand of glass wrapped in plastic is too expensive.


A lot of the houses that don't currently have modern high speed internet access also don't have water pipes and sewer pipes. They have wells or water collection/delivery and septic tanks.

Electricity and twisted pair phone line is really all that's been pulled to their property.


"... very hard time competing with simply laying down some optical fiber."

You end up learning this in your own home. Some things are fine with a wired ethernet connection, it's really only my laptop and phone that use wifi.


You can say the same thing about running wired ethernet to your TV in the living room. It's simpler and more reliable than wifi. But wifi is much easier and quicker to install. Which one do most people use?


For most users (me included), there is zero difference in user experience between using wifi or Ethernet for their TV. Otherwise, running wired Ethernet would probably be a lot more popular.


So you think. You may be right, but most users won't even realize that a good chunk of their "buffering" / "Internet is slow today" / "Netflix is broken today" problems might just be a WiFi issue, and it would go away if they used a wired connection.


Optical fiber is absolutely the simplest and best option for almost any form of long distance connectivity. Maybe this technology will become cost/performance competitive in about 15 years after the HFT firms have invested billions trying to extract an extra cent out of our financial markets.


The economic burden usually falls on governments, so, like Starlink, Alphabet is probably hoping for some of that sweet, sweet government subsidy/grant money for military applications.


GPS signals are atomic clock signals. The receiver computes its position from the relative time delays between signals originating from different satellites. The receiver itself doesn't require a good clock, since its own clock offset is just one more unknown solved alongside position.
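A toy version of that position solve, in 2D with made-up satellite coordinates; the receiver's clock offset is just an extra unknown (expressed here in meters, as c times the bias, to keep the system well-conditioned):

```python
# Toy 2D multilateration: recover receiver position + clock bias from
# pseudoranges. All positions and values are illustrative, not from any
# real GPS implementation.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

sats = np.array([[0.0,    2.0e7],
                 [1.5e7,  1.8e7],
                 [-1.5e7, 1.8e7],
                 [5.0e6,  2.2e7]])   # hypothetical satellite positions, m
true_pos = np.array([1000.0, 2000.0])
true_bias = 1e-6                     # receiver clock error: 1 microsecond

# Pseudorange = geometric range + clock bias expressed in meters
pseudo = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias

# Gauss-Newton for (x, y, c*bias)
est = np.zeros(3)
for _ in range(10):
    ranges = np.linalg.norm(sats - est[:2], axis=1)
    residual = pseudo - (ranges + est[2])
    J = np.hstack([-(sats - est[:2]) / ranges[:, None],
                   np.ones((len(sats), 1))])
    est += np.linalg.solve(J.T @ J, J.T @ residual)

print(est[:2])      # ≈ [1000, 2000]
print(est[2] / C)   # ≈ 1e-6 s clock offset recovered
```

Note that four pseudoranges pin down three unknowns here; in 3D you need a fourth satellite for the same reason.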


And you can even update your clock info from the GPS signal. So the only dependency is GPS or similar.

But would Iranian missiles even use GPS? Isn't accuracy limited for civilian use for precisely this reason?


No. The US stopped degrading civilian GPS accuracy in 2001[1]. Although the US retains the ability to degrade civilian GPS in specific target areas.

Regardless, if you’re building a long range missile, you need some ability for it to navigate. If you’re not using GPS, then what would you use instead? Additionally there’s nothing preventing you from using multiple navigation systems in tandem and fusing the results together, which is almost certainly what these missile do.

Sensor fusion reduces the impact of things like GPS jamming, but certainly doesn't eliminate it. The overall system will be less accurate with fewer inputs, and if you're the one faced with a high-speed missile flying at you, I suspect you'll take every edge you can get, regardless of how small the impact might be.

[1] https://en.m.wikipedia.org/wiki/Error_analysis_for_the_Globa...


>Regardless, if you’re building a long range missile, you need some ability for it to navigate. If you’re not using GPS, then what would you use instead?

US ICBMs and submarine-launched ballistic missiles use a combination of inertial and celestial navigation: in space of course there are no clouds to obscure the stars:

https://en.wikipedia.org/wiki/Celestial_navigation#:~:text=I...


Many cruise missiles use terrain contour mapping. In principle at least it seems like it should work for airplanes too.


Doesn't really work if you're trying to navigate over an ocean. Cruise missiles also operate over relatively short distances (1,000 km - 5,000 km, most being 1,000 km - 3,000 km) compared to long-haul flights, plus they fly at very low altitudes (which helps them avoid enemy radar), which makes terrain contour mapping easier.

Not to mention, if you're designing a cruise missile, you're not that bothered about how your navigation system might interfere with other aircraft or ground systems in the area. I doubt having thousands of planes flying around shooting radar straight down at the ground would work particularly well.


An error correction technique I learned as a young land surveying assistant is to put a GPS antenna on a known, fixed point. The delta between the fixed point's known position and its measured position is canceled out to get a more accurate read at the point of measurement.

We did this to trial some new (at the time) surveying equipment when the primary equipment was optical. It would save time for really long measurements through forest and mountainous terrain.
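The correction itself is just a vector subtraction. A minimal sketch with made-up local easting/northing coordinates in meters:

```python
# Differential correction in miniature: the base station sits on a surveyed
# point, so its measured-minus-known offset captures error sources
# (atmosphere, satellite clocks) shared with the nearby rover.
# All coordinates are illustrative.
base_known = (1000.00, 2000.00)   # surveyed fixed point
base_gps   = (1001.30, 1998.75)   # what the base receiver reports there
rover_gps  = (1250.80, 2101.15)   # raw rover measurement

# Error common to both receivers cancels:
err = (base_gps[0] - base_known[0], base_gps[1] - base_known[1])
rover_corrected = (rover_gps[0] - err[0], rover_gps[1] - err[1])
print(rover_corrected)  # ≈ (1249.5, 2102.4)
```

This only works because the error is spatially correlated: the farther the rover strays from the base station, the less of the error is actually shared.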


You can even subscribe to services which do this for you! There are a few companies with large-scale networks of fixed receivers, and you can get the observed offset from a node near you via the internet, usually via "NTRIP".

Getting correction data from a node a few dozen kilometers away isn't quite as good as having your own fixed base station a stone's throw away, but it's way more convenient and for a lot of applications plenty accurate.


GPS Accuracy used to be limited, but that ended decades ago.

There are rules about GPS hardware that say it should cease working above certain speeds and altitudes, to prevent use in guided missiles. But that is a firmware issue. I'm sure the Iranians have figured that out, if they are even using off-the-shelf hardware.


Yes, they use superposition of the most "classical" quantum states (coherent states). These are called (Schrödinger) cat states, since his thought experiment was about a quantum superposition of a very classical object (a cat): https://en.wikipedia.org/wiki/Cat_state
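Written out, the even/odd cat state is a normalized superposition of two opposite-phase coherent states (the normalization accounts for their overlap, since coherent states are not orthogonal):

```latex
|\mathrm{cat}_{\pm}\rangle
  = \frac{|\alpha\rangle \pm |-\alpha\rangle}
         {\sqrt{2\left(1 \pm e^{-2|\alpha|^{2}}\right)}},
\qquad \langle \alpha | -\alpha \rangle = e^{-2|\alpha|^{2}}
```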


It is theorized that CERN is powered by the bodies of dead physicists that turn in their graves every time someone brings up the Schroedinger cat to an audience that doesn't even know complex numbers.

There's also a smaller power section filled with computer scientists that turn when someone says that the quantum computer offers exponential speedups.


Guess why they named their chip Ocelot which is a big cat


Ocelots are small cats. Big small cats but they purr https://youtu.be/HpwenyMq0Os?si=Ic9g_zuR4e99wrTB


Most cats purr. Cheetahs even meow.


The distinction between "big cats" and "small cats" is whether or not they purr. Ocelots and cheetahs are wild "small cats" that can purr. Lions, tigers, and panthers are "big cats" and don't purr. There's a taxonomic distinction.


With an optimistic 10% annual return, this would amount to 1/5 of Cornell's budget.


Then they need to find some cuts, because Uncle Sam has a maxed-out credit card and can't keep making up the difference, whether he wants to or not.


Uncle Sam doesn’t have a credit limit: Uncle Sam has chosen to take on debt so rich people can avoid paying taxes. If we had rich people pay at the same rates they paid a few decades ago, didn’t have caps on the maximum amount of taxable income for social security, etc. we could return to the balanced budget we had at the turn of the century before the Republicans lowered taxes for the express political goal of forcing program cuts.


[flagged]


Can you really not think of any ways that our economy is different from those? For example, were either of them the largest economy in the world operating the global benchmark currency? Were their debts voluntary, incurred solely to allow the richest people to pay less in taxes?


Do you honestly believe that either of those two examples are actually comparable to the USA’s credit status?


Uncle Sam is intending to go even further into debt for tax cuts.


Sovereign debt is not the same as your household budget.


How about Uncle Sam starts taking money back from rich people instead of foisting more debt on workers and slashing my benefits so that they can buy another yacht?


As a European, I cannot make any sense of the low-contrast pictograms in "modern" software (VS Code, Obsidian, etc.).


Ironically, Visual Studio introduced monochrome icons in the early 2010s, and the protestations were so deafening that they brought the colours back.

Fast-forward a decade, and nobody even bats an eye. I also struggle visually navigating around VSCode when I'm not using a keyboard, but we seem to be a minority.

