underyx's comments

I had the exact opposite experience with support, possibly the best one yet with any company.

I opened the live chat on the battery life support page and the chatbot asked (paraphrased) "Is your battery bad? Put in your email address". I put my email in and the next automated message was "Seems like your battery is underperforming, where should we send a replacement ring?"

I got my replacement a few days later.


Chiming in to say the same. Best support experience I’ve ever had with a company. Battery was lasting less than 3 days; I talked to the bot for 2 mins, and confirmation that a new ring was being shipped arrived a day later.

I had the exact same pleasant experience. I had a Gen 2 ring with battery issues, and support was fast and painless with a new ring on its way in a couple of days.

Same here. Got a replacement right away

Same experience here. Support was effortless and new ring came quickly.

I have the same experience

Title:

> 300μs typo correction for 1.3M words

Article:

> 300μs for correctly spelled queries and ~5ms/word for misspellings

Aren't these contradictory?


The important number that changed here was the drop from 30ms to 300μs for correctly spelled queries. 30ms for typos wasn't something we considered a problem, and improving that to ~5ms was really a side benefit of fixing things for correct spellings.

Additional latency for actual typos seems OK, but degradation for correctly spelled queries felt really bad. The thing that was really damaging to the UX for us was latency spiking on search queries where it didn't need to.


Sure, but why does the title claim 300μs typo correction?


That's a good point, so we did a s/correct/detect/ on the title above. Thanks!


The title says 300μs typo detection.

How? If the query doesn't finish in 300μs, return error :)


Agreed. Confirming correct spellings is the base case of the system and what the title is referring to, but that's not clear.


Also, 300μs _per word_, looking up a word in a corpus of 1.3M words. Not 300μs for correcting 1.3M words!
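For intuition, the shape being described is something like this (a hypothetical sketch, not the article's actual code): a correctly spelled word costs one hash-set lookup, and only a miss falls through to edit-distance candidate generation.

    // Hypothetical sketch: fast path = one Set lookup per word,
    // slow path = edit-distance-1 substitutions, only on a miss.
    const dictionary = new Set<string>(["hello", "world" /* ... 1.3M words ... */]);
    const alphabet = "abcdefghijklmnopqrstuvwxyz";

    function suggest(word: string): string[] {
      // Fast path: correctly spelled words cost a single hash lookup.
      if (dictionary.has(word)) return [word];

      // Slow path: only actual misspellings pay for candidate generation.
      // (A real system would also try insertions, deletions, transpositions.)
      const candidates: string[] = [];
      for (let i = 0; i < word.length; i++) {
        for (const c of alphabet) {
          const edited = word.slice(0, i) + c + word.slice(i + 1);
          if (edited !== word && dictionary.has(edited)) candidates.push(edited);
        }
      }
      return candidates;
    }

    console.log(suggest("hello")); // fast path: ["hello"]
    console.log(suggest("hellp")); // slow path: ["hello"]

That per-word fast path is also why 1.3M entries and 300μs can coexist: dictionary size barely affects a hash lookup.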


There's a blog post from 6-8 years ago that I can no longer find anywhere, where someone made node.js produce similar ambient sounds, comparing it to listening to hard drives do mechanical work to understand what the computer was doing. Apparently it was genuinely useful to be able to hear when, for example, a request failed due to an authentication problem, which had its own distinct sound emerging from the pattern of code execution on that branch.

Does anyone have a link to what I'm talking about?


I recall something like that. You might be talking about baudio: https://www.npmjs.com/package/baudio . I also recall seeing some website demonstrating the idea of sonifying a stream of incoming metrics/events so you could listen to it and your brain could sift through and automatically pick out patterns, changes, and anomalies. I think there was a demo using the Twitter firehose as an example? But for the life of me I can't remember anything more than that.
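From memory, baudio's shape is roughly this (a sketch, so treat the details loosely): you hand it a function of elapsed seconds that returns a sample in [-1, 1], and it plays the result.

    // Sketch from memory of baudio's API (npm install baudio; playback
    // shells out to sox's `play`). Event wiring here is made up for demo.
    const baudio = require("baudio");

    let failing = false; // in real use, flipped by your event stream
    setInterval(() => { failing = !failing; }, 3000);

    const b = baudio((t: number) => {
      const hum = 0.2 * Math.sin(2 * Math.PI * 220 * t);   // steady hum = healthy traffic
      const alarm = 0.4 * Math.sin(2 * Math.PI * 880 * t); // higher tone = e.g. auth failures
      return failing ? hum + alarm : hum;
    });
    b.play();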


This:


How do y'all deploy fine-tuned models? I have separate projects for staging and prod, but it doesn't seem like a fine-tune can be shared across projects.

Am I wrong to split projects by env? Am I expected to run fine-tunes separately per env (surely not)? Am I missing an option to share fine-tunes across projects?


Are you talking about fine-tuned models that you host yourself, or fine-tuned models from a hosted provider like OpenAI?

What do you mean by an "env" here?


I mean OpenAI-hosted fine-tunes (same as referenced in the OP).

I have a staging deployment and a production deployment. Ideally anything that I roll out to production, I can try on staging first — including switching from gpt-4o to a fine-tuned gpt-4o. I don't want the production API key to be accessible by my staging app, so I have two separate projects within the OpenAI dashboard. One is called my-app-prod, and the other is my-app-staging.

To illustrate the problem further, I also have infrastructure to eval models before changing what production is running. The eval infrastructure also has its own OpenAI project, so that I can set separate billing limits on running evals. Any fine-tuned model needs to be benchmarked first, but again, I'm not sure how to make the same fine-tune available to both the eval project and the production app project.
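To make it concrete, our wiring is shaped roughly like this (a sketch; the env var names and model ids are placeholders). Each env has its own project key, so the fine-tuned model has to be visible to every project that calls it:

    // Sketch of the per-env split (keys and ids are placeholders).
    import OpenAI from "openai";

    const ENVS = {
      staging: { apiKey: process.env.OPENAI_KEY_STAGING!, model: "gpt-4o" },
      prod: { apiKey: process.env.OPENAI_KEY_PROD!, model: "ft:gpt-4o:my-org::abc123" },
    } as const;

    const env = ENVS[(process.env.APP_ENV ?? "staging") as keyof typeof ENVS];
    const client = new OpenAI({ apiKey: env.apiKey });

    // This call fails if the ft: model isn't visible to this project.
    const completion = await client.chat.completions.create({
      model: env.model,
      messages: [{ role: "user", content: "ping" }],
    });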


Hey, engineer on the OpenAI fine-tuning team here. We know this is something of a pain, and we’re trying to come up with a way to allow you to share / move models across projects. If you really need to share a model across projects right now, the best way to do it is to train the model in the default project; it will then be available for use in all other projects. That’s not an ideal solution, obviously, but it is the only mechanism currently available.
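Concretely, the workaround looks something like this (a sketch, with a placeholder training-file id): kick the job off with a default-project key, and the resulting ft: model id should then resolve from your other projects' keys too.

    // Sketch of the workaround: train from the default project so the
    // resulting model is visible to other projects. Ids are placeholders.
    import OpenAI from "openai";

    const defaultProject = new OpenAI({ apiKey: process.env.OPENAI_KEY_DEFAULT! });
    const job = await defaultProject.fineTuning.jobs.create({
      model: "gpt-4o-2024-08-06",
      training_file: "file-abc123", // an already-uploaded JSONL training file
    });
    // Once the job completes, its fine_tuned_model field holds an "ft:..." id,
    // which per the above should be callable from staging/prod/eval keys too.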


Gotcha, thanks for the answer! We're a pretty heavy user of OpenAI (my company's called Semgrep) and I'd love to know if you ever run private betas or work with development partners for features like project permissioning. If so, feel free to email me at bence@semgrep.com — otherwise, appreciate your work on improving this!


The very first sentence of the article says

> sharp drop […] as the COVID-era crime wave recedes.


Their policy from an email sent out Oct 2023:

> Many of our riders choose Waymo for the clean and consistent vehicle we offer. To ensure every rider gets this experience, we’ll be applying a vehicle cleaning fee for riders who leave a mess behind in the vehicle, such as vomit, excessive trash, and smoking odors.

> For those that self-report their mess during their ride (not including smoking), the fee will be $50. For issues that go unreported, we’ll charge riders $100 for the first violation and increase the fee for subsequent violations. Repeat trash and smoking related violations may also impact your account standing.


That’s an awesome policy. Compare to car share services (in SF, so apples to apples) such as Gig Car and Getaround, which allow unsupervised general access (i.e. no driver there to witness car treatment)… those are generally pigsties. Blunt ash all over the dashboard, used kleenexes in the door handles and cupholders, trash on the floor. It always blew my mind that the perpetrators weren’t fined into the dirt. Good for Waymo.


Considering that I’m in a social bubble of considerate people, none of whom would leave so much as a bottle cap in a car, this makes me despair at what people must be like outside my bubble.


I'd say the vast majority of people don't litter or trash things, but the few that do ruin things for everyone. It only takes a few bad apples (or in this case trashy people) to ruin something for everyone else.

It's like living with 4 roommates where one of them leaves their shit everywhere. It can make the whole place look unkempt for everyone else.

And because we generally don't want to pick up other people's shit/mess, things may be left trashy for a long time before anything gets cleaned.


A very small percentage of people will trash a clean environment.

A much larger percentage of people will add trash to an already trashed environment.


Plus there's a camera in the car. Pretty easy to hold riders accountable and resolve any dispute.

Compare that to other car sharing options (e.g. Zipcar, where cars are usually a mess, but there's no way to hold users accountable).


Hmm, Zipcars in my area (London) are usually clean.


I wish it was the same here. In NYC (Brooklyn specifically) it's 50/50.


I think in the US kids use buses (school buses) at a higher rate than adults. The bus in the image is also colored yellow, just like school buses. Perhaps it has more to do with kids just having more recent experiences solving the same problem in real life?


Any source for the valuation number? The article doesn't mention one.



Usually it is a proportion: $100k for 10% of the company implies a $1M valuation.


Yes, but that does not explain why better text prediction is worth $6B to these people.


> that does not explain why better text prediction is worth $6B to these people

Same reason Cohere has a weirdly-high valuation relative to revenue: it's a national champion.


Any source for the 10% number?


https://www.saas-capital.com/blog-posts/private-saas-company...

edit: If you're that curious, just call Hemant up and ask him yourself?


This does not mention Mistral at all. Are all replies to me in this thread from LLM agents?



That sounds like an example to demonstrate the concept.


Oh, well that didn’t answer my original question at all then.


Any source for that is an example?


If someone pays $100k for 10%, the valuation is $100k * 10 = $1M. There's not really a source for that; that's just how market valuations work.


Well of course they'll be using the latest technologies, and therefore the tiling will be done with that new shape from last year that never repeats patterns (the aperiodic "einstein" monotile).

