Hacker News | unixpickle's comments

The `try_roots` example here is actually a _counterexample_ to the author's main argument. They explicitly ignore the "negative discriminant" case. What happens if we consider it?

If we take their "parse" approach, then the types of the arguments a, b, and c have to somehow encode the constraint `b^2 - 4ac >= 0`. This would be a total mess--I can't think of any clean way to do this in Rust. It makes _much_ more sense to simply return an Option and do the validation within the function.

In general, I think validation is often the best way to solve the problem. The only counterexample, which the author fixates on in the post, is when one particular value is constrained in a clean, statically verifiable way. Most of the time, validation is used to check (possibly complex) interactions between multiple values, and "parsing" isn't at all convenient.
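As a concrete sketch of the validation approach (a hypothetical `try_roots` signature, not the article's exact code): the negative-discriminant check happens inside the function, and the caller just handles an `Option`.

```rust
// Hypothetical try_roots: validation lives inside the function, returning
// Option rather than encoding `b^2 - 4ac >= 0` in the argument types.
fn try_roots(a: f64, b: f64, c: f64) -> Option<(f64, f64)> {
    let disc = b * b - 4.0 * a * c;
    if a == 0.0 || disc < 0.0 {
        // Not a quadratic, or no real roots: reject at runtime.
        return None;
    }
    let sqrt_disc = disc.sqrt();
    Some(((-b + sqrt_disc) / (2.0 * a), (-b - sqrt_disc) / (2.0 * a)))
}

fn main() {
    // x^2 - 3x + 2 = 0 has roots 2 and 1.
    assert_eq!(try_roots(1.0, -3.0, 2.0), Some((2.0, 1.0)));
    // x^2 + 1 = 0 has a negative discriminant: no real roots.
    assert!(try_roots(1.0, 0.0, 1.0).is_none());
}
```

The point is that the constraint couples all three arguments, so there is no single argument type that could carry it.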


I was thinking a similar thing when reading the article. Often, the validity of the inputs depends on the interaction between some of them.

Sure, we can follow the advice of creating types that represent only valid states, but then we end up with `fn(a: A, b: B, c: C)` transformed into `fn(abc: ValidABC)`.
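A minimal sketch of what that transformation looks like (the names `A`, `B`, `C`, and `ValidABC` are placeholders from the comment above, and the constraint is a hypothetical example):

```rust
#[allow(dead_code)]
struct A(f64);
#[allow(dead_code)]
struct B(f64);
#[allow(dead_code)]
struct C(f64);

// "Valid by construction": the only way to obtain a ValidABC is through
// the constructor that checks the cross-argument constraint.
#[allow(dead_code)]
struct ValidABC {
    a: A,
    b: B,
    c: C,
}

impl ValidABC {
    // The constraint (here, hypothetically, b^2 >= 4ac) couples all three
    // values, so it can only be checked once they are all in hand.
    fn new(a: A, b: B, c: C) -> Option<ValidABC> {
        if b.0 * b.0 - 4.0 * a.0 * c.0 >= 0.0 {
            Some(ValidABC { a, b, c })
        } else {
            None
        }
    }
}

fn main() {
    assert!(ValidABC::new(A(1.0), B(3.0), C(2.0)).is_some());
    assert!(ValidABC::new(A(1.0), B(0.0), C(1.0)).is_none());
}
```

Which is to say: the "parse" version still performs the same runtime check, just moved into a constructor that produces a bundled type.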


To optimize for fast nearest neighbors, I chose 256 dims. Notably, this actually hurt some of the pre-training classification losses pretty severely compared to 2k dims, so it definitely has a quality cost.

The site uses cosine distance. The code itself implements Euclidean distance, but I decided to normalize the vectors at the last minute out of FUD that some unusually small vectors would appear as neighbors for an abnormal number of examples.
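The reason normalizing makes this work: on unit-normalized vectors, squared Euclidean distance is a monotonic function of cosine distance (`||u - v||^2 = 2 - 2*cos(u, v)`), so nearest neighbors under either metric agree. A small sketch of the identity (hypothetical helper names):

```rust
// Normalize a vector to unit length.
fn normalize(v: &[f64]) -> Vec<f64> {
    let norm = v.iter().map(|x| x * x).sum::<f64>().sqrt();
    v.iter().map(|x| x / norm).collect()
}

// Squared Euclidean distance between two vectors.
fn euclidean_sq(u: &[f64], v: &[f64]) -> f64 {
    u.iter().zip(v).map(|(a, b)| (a - b) * (a - b)).sum()
}

// Cosine distance, assuming u and v are already unit-normalized.
fn cosine_dist(u: &[f64], v: &[f64]) -> f64 {
    1.0 - u.iter().zip(v).map(|(a, b)| a * b).sum::<f64>()
}

fn main() {
    let u = normalize(&[1.0, 2.0, 3.0]);
    let v = normalize(&[2.0, 1.0, 0.5]);
    // On unit vectors: ||u - v||^2 == 2 * (1 - cos(u, v)).
    assert!((euclidean_sq(&u, &v) - 2.0 * cosine_dist(&u, &v)).abs() < 1e-12);
}
```

So a Euclidean nearest-neighbor index over normalized vectors returns the same ranking as a cosine-distance one.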


The "shop for random products" direction was actually fun for me too. Reminds me of amazon.com/stream a bit.


Probably the same complaint as

https://news.ycombinator.com/item?id=43375415


You definitely highlighted a shortcoming of the feature vector model in this case. Indeed it's quite a small model trained on a single Mac for about a week, so it's not very "smart".

I'd expect that this is a problem that could be solved by using larger off-the-shelf models for image similarity. For this project, I thought it would be cooler to train the model end-to-end myself, but doing so has negative consequences for sure.


I think it would be a useful feature. For the sake of being a fun project, I didn't use CLIP because I only wanted to use models that I trained myself on a single Mac. However, to make this more useful, text search would be quite helpful.


Yup, it's a small model I trained on my Mac mini! The model itself just classifies product attributes like keywords, price, retailer, etc. The features it learns are then used as embeddings.


Ideally, pose and lighting wouldn't matter as much as they currently do.

I think using a better model to produce feature vectors could achieve this, or perhaps even finetuning the feature model to match human preferences.


This should just be called "why VPNs are useful", I think?


This seems to be pretty much exactly a standard Bayesian deep learning approach, albeit with a heavily engineered architecture.

