
Wow, it's amazing how much better the Subaru's automatic braking system is.

I worry that hitting a pedestrian at night is the most likely way I'd seriously hurt somebody, and I want to encourage automakers to prioritize the safety of pedestrians and other road users, so Subarus will be high on my list the next time I'm shopping for a car.


Subaru is just casually shipping better vision-only TACC than any other car company (I include Tesla in this comparison, when just activating TACC), and nobody is paying attention to the fact that front radar simply isn't needed.

Subaru has gone back and forth between vision-only and radar-assisted setups, and has also cycled through suppliers and project structures for its EyeSight-branded systems. The current camera unit is supplied by Veoneer in Sweden, slightly older ones were outsourced to Hitachi Astemo, and before that they were mostly internal R&D, and so on.

The current latest-gen Subaru has a forward radar.


So all mainstream cars are vision only? No ranging like lidar?

Just about everyone ships radar, not only for collision avoidance but also for adaptive cruise control.

Honda and Mazda use radar as well.

I agree. Weirdly, perhaps one of the most harmful parts of e-waste is how valuable it can be: https://www.npr.org/sections/goats-and-soda/2024/10/05/g-s1-....


One important factor this article neglects to mention is that modern text embedding models are trained to maximize the distance between dissimilar texts under a specific metric. This means that the embedding vector is not just a latent representation plucked from the last layer of a model; it is specifically trained to be used with a particular distance function, which is cosine distance for all the models I'm familiar with.

You can learn more about how modern embedding models are trained from papers like Towards General Text Embeddings with Multi-stage Contrastive Learning (https://arxiv.org/abs/2308.03281).
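To make the "use the metric the model was trained for" point concrete, here is a minimal sketch in plain NumPy, with random vectors standing in for real model outputs (no particular embedding model or API is assumed):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # The similarity most text embedding models are trained against.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 768-dim embeddings; in practice these come from the model.
doc_a = np.random.default_rng(0).normal(size=768)
doc_b = np.random.default_rng(1).normal(size=768)

# Many models return unit-normalized vectors, in which case cosine similarity
# reduces to a plain dot product; normalizing yourself is cheap insurance.
doc_a /= np.linalg.norm(doc_a)
doc_b /= np.linalg.norm(doc_b)

print(cosine_similarity(doc_a, doc_b))
print(float(np.dot(doc_a, doc_b)))  # identical once the vectors are unit length
```

The practical takeaway: score with cosine (or a dot product on normalized vectors) rather than Euclidean distance on raw activations, because that is what the training objective optimized.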


Yes, exactly. It’s one of the reasons that an autoencoder’s compressed representation might not work that well for similarity. You need to explicitly push similar examples together and dissimilar examples apart, otherwise everything can get smashed close together.

The next level of understanding is asking how “similar” and “dissimilar” are chosen. As an example, should texts about the same topic be considered similar? Or maybe texts from the same user (regardless of what topic they’re talking about)?
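A rough sketch of what that contrastive objective looks like (an InfoNCE-style loss with in-batch negatives, written in PyTorch; the function name and temperature are illustrative, not any specific paper's recipe):

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(queries: torch.Tensor,
                              positives: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    # queries, positives: (batch, dim); row i of `positives` is the text the
    # training data declares "similar" to row i of `queries`. Every other row
    # in the batch serves as an implicit "dissimilar" negative.
    q = F.normalize(queries, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = q @ p.T / temperature          # scaled cosine similarities
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)  # pull diagonal pairs together, push the rest apart
```

Which pairs end up in `queries` and `positives` is exactly where the "what counts as similar" decision gets made: same topic, same user, query/answer pairs, and so on.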


It turns out homeowners with veto rights are a lot more powerful than capital in the US.


Could you buy land from existing lower voltage lines? It seems like a solvable problem.


This is wild! What have you found it most useful for?

Have you tried a more straightforward approach that follows the ChatGPT model of being able to fork a chat thread? I could use something like this where I can fork a chat thread and see my old thread(s) as a tree, but continue participating in a new thread. Your model seems more powerful, but also more complex.


This is my daily GPT driver, so I use it for almost anything, from research to keeping my snippets tidy and well organized. I use voice input a lot so I can take my time forming my thoughts and requests, and text-to-speech to listen to the answers too.


Here's an example of the backend code:

https://github.com/redplanetlabs/twitter-scale-mastodon/blob...

It certainly looks like it does a lot with their DSL, but as a newcomer it's really hard to parse.


I suggest starting with the tutorial and the heavily commented examples in rama-demo-gallery, linked below.

https://redplanetlabs.com/docs/~/tutorial1.html

https://github.com/redplanetlabs/rama-demo-gallery


Any examples of the Clojure version?


All the examples in rama-demo-gallery have both Java and Clojure versions, including tests. There's also the introductory blog post for the Clojure API which builds a highly scalable auction application with timed listings, bids, and notifications in 100 LOC.

https://blog.redplanetlabs.com/2023/10/11/introducing-ramas-...


This trial made no sense. Try stepping on your accelerator and brake at the same time and see what happens: https://podcasts.apple.com/us/podcast/blame-game/id111938996....


1. There’s an identified mechanism by which the software could accelerate without the pedal depressed.

2. In my car, if I step gently on both pedals, they both take effect. (Which is reasonable: starting uphill is a thing.) If I step harder on both pedals, the car chimes at me and the motor output is reduced automatically.


In every car, the brakes are much, much stronger than the engine, so the accelerator is easily overpowered by braking. Every one of these cases was simply a person who accidentally stepped on the gas, thinking it was the brake. If you ever feel like this is happening to you, pick your foot up and try again.


... and put your car in neutral and kill the engine by holding the start button.


I’ve been using baseten (https://baseten.co) and it’s been fun and has reasonable prices. Sometimes you can run some of these models from the hugging face model page, but it’s hit or miss.


But you still need to page in the weights from disk to the GPU at each layer, right?


You should only need the weights for the experts you want to run. The experts clock in at around 400 MB each (based on the 800 GB figure given elsewhere). A 24 GB GPU could fit around 60 experts, so it might be usable with a couple of old M40s.
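As a back-of-the-envelope sketch of that memory budget (the sizes are just the estimates from this thread, not measured values):

```python
# Rough arithmetic only; 800 GB total and ~400 MB per expert are assumptions
# taken from elsewhere in the thread, and 24 GB is one M40's VRAM.
total_expert_bytes = 800 * 1024**3
bytes_per_expert = 400 * 1024**2
vram_bytes = 24 * 1024**3

print(total_expert_bytes // bytes_per_expert)  # ~2048 experts in the full model
print(vram_bytes // bytes_per_expert)          # ~61 experts resident on one GPU
```

Anything beyond the resident set still has to be streamed from disk or host RAM whenever routing picks a cold expert, which is where the real latency cost shows up.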


Can you explain why? I write web apps, and I typically want to know an instant in time when something occurred and then localize the time for display. Why would I need to store a time zone for that?


Storing UTC for past events is totally fine because it is unambiguous.

However, if you are writing something like a calendar app and storing timestamps for future events, you run into the issue that time zones have the annoying habit of changing.

Let's say you have a user in Berlin, and they want to schedule an event for 2025-06-01 08:00 local time. According to the current timezone definition Berlin will be on CEST at that moment, which is UTC+2, so you store it as 06:00 UTC. Then Europe abandons summer time, and on 2025-06-01 Berlin will now be on CET, or UTC+1. Your app server obviously gets this new definition via its regular OS upgrades.

Your user opens the app, 06:00 UTC gets converted back to local time, and is displayed as 07:00 local time. This is obviously wrong, as they entered it as 08:00 local time!

Storing it as TIMESTAMP WITH TIME ZONE miiiight save you, but if and only if the database stores the timezone as "Europe/Berlin". In practice there's a decent chance it is going to save it as a UTC offset anyways...
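A minimal sketch of that failure mode in Python (using zoneinfo; the hypothetical rule change itself can't be reproduced with real tzdata, so it lives in the comments):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# The user schedules 2025-06-01 08:00 local time in Berlin.
local = datetime(2025, 6, 1, 8, 0, tzinfo=ZoneInfo("Europe/Berlin"))

# Storing only the UTC instant bakes in today's offset (CEST, UTC+2):
stored_utc = local.astimezone(timezone.utc)
print(stored_utc)   # 2025-06-01 06:00:00+00:00

# If Berlin later moves to permanent CET (UTC+1), converting 06:00 UTC back
# with the *new* tzdata displays 07:00 local -- not the 08:00 the user typed.
# Storing the wall-clock time plus the zone name sidesteps this:
stored_wall_clock = ("2025-06-01T08:00", "Europe/Berlin")
```

With the wall-clock form, you re-resolve to an instant at read time against whatever tzdata is current, which is what the user actually meant.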

