
Yeah, in case anyone else has the misfortune of having to work with multi-dimensional data in MATLAB, I'd recommend the Tensor Toolbox, Tensorlab, or the N-way Toolbox.

It is not, except sometimes in the case of equal contribution. The first and sometimes second authors are the most important, and the last author is often the advisor/senior researcher supervising the work.


This is not accurate; it depends on the subfield. As a rule, the more theoretical the subfield, the more likely that alphabetical order is used. See e.g. papers from a theoretical conference like STOC vs. a systems conference like HotOS.


If you look at the papers of the third author [1], almost all of them seem to be alphabetical by last name.

[1] https://arxiv.org/search/cs?searchtype=author&query=Kuszmaul...


Interesting! I didn't realize it varied between sub-disciplines of CS, I guess.

Theoretical computer science and cryptography both typically do alphabetical. Maybe because of their adjacency to pure math?


He also permanently damaged his sense of smell/taste, so I feel like he might not be the most reliable source...


Has anyone actually tried any of these AI interview tools? From their website, it looks like the interviews are extremely short (max duration of 20 minutes?), even if you pay $55+/mo, which really limits its utility.


I can explain why we intentionally keep interviews to 20 minutes - research and our experience show that's the sweet spot for productive technical interviews. Longer sessions often lead to mental fatigue and diminishing returns, while our focused format helps candidates maintain peak performance and get actionable feedback. Plus, you can do multiple sessions to practice different skills, making it much more effective than one long, exhausting interview.


Try this one - it's free (you just pay for tokens) with unlimited-length interviews: https://funapps.ai/app/live-interview-help


The downside of this approach is that it can affect the search results returned. But I found that if you add " -fuck" or " -fucking" to your search term, it disables the AI summary without significantly affecting your search results (unless you happen to be looking for content of a certain kind).
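
If you wanted to script it, here's a quick sketch (Python; search_url is just a made-up helper, and whether the trick keeps working is anyone's guess):

    from urllib.parse import urlencode

    def search_url(query, kill_switch="-fucking"):
        # Appending an excluded profanity suppresses the AI summary;
        # the "-" also excludes pages that actually contain the word.
        return "https://www.google.com/search?" + urlencode(
            {"q": f"{query} {kill_switch}"}
        )

    print(search_url("how to fix a bike tire"))
    # https://www.google.com/search?q=how+to+fix+a+bike+tire+-fucking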


You will miss out on the category of strongly worded but helpful content.


You can probably find some other term that disables the AI but is unlikely to occur naturally in the articles you'd like to find, e.g.: "react swipeable image carousel -coprophilia".


Try it with "-tiananmen -square"


Good idea, you can make it even better (i.e. less accidental filtering) by quoting the phrase and appending a random number, ex:

    -"tiananmen square 1902481358"
This way it won't interfere if you ever happen to actually want results that mention the place.

Hmm, now I'm not sure about my testing - even with innocuous searches the AI thing isn't coming back. Maybe something I did scared it off.


> it can affect the search results returned

Will it still work if "fuck" is part of a quoted phrase? If so, you could avoid it by constructing a phrase that contains the term but isn't going to match anything, ex: -"fuck 5823532165".


What if you take the George Carlin approach by inserting fuck in the middle of normal words?


If you're looking for that kind of content, you could remove the minus sign?


Well, yes. You'll probably find some very niche kink videos though, depending on your search.


archive footage of the Queen of Fuc's husband?


I want to know if it invokes Rule 34.


Is there distributed server support? I see it on the list of new features with (currently PoC) next to it, but is the code for the PoC available anywhere?

Also, would there be any potential issues if the index was mounted on shared storage between multiple instances?


The code for the distributed search cluster is not yet stable enough to be published, but it will be released as open-source as well.

As for shared storage, do you mean something like NAS, or rather Amazon S3? Cloud-native support for object storage and separating storage and compute are on our roadmap. The challenges will be maintaining low latency and the need for more sophisticated caching.


S3 support would be absolutely killer.


The content and idea of the article are interesting, but the writing is so terrible that I couldn't bring myself to finish it. The article was clearly written or augmented by AI, judging by how many meaningless paragraphs there are that convey absolutely zero information.


I thought I was super clear. I will take your comments into consideration next time; it's the first time I've heard someone say that, out of all the comments. I really appreciate the feedback - getting feedback is the only way someone gets better.


Yeah, this is the main issue with the suggestion. Embeddings can only be compared to each other if they are in the same space (e.g., generated by the same model). Providing embeddings of a specific kind would require users to use the same model, which can quickly become problematic if you're using a closed-source embedding model (like OpenAI's or Cohere's).
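
To make that concrete, here's a minimal sketch (Python with made-up vectors, not output from any real model):

    import numpy as np

    def cosine_similarity(a, b):
        # Only meaningful if a and b come from the same embedding space.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Two embeddings from the SAME (hypothetical) model: comparable.
    doc = np.array([0.12, -0.40, 0.88])
    query = np.array([0.10, -0.35, 0.90])
    print(cosine_similarity(doc, query))  # high: same space, similar meaning

    # A vector from a DIFFERENT model lives in a different space; even if
    # the dimensions happen to match, the axes mean different things, so
    # the score below is a number but not a meaningful similarity.
    other = np.array([0.91, -0.02, 0.13])
    print(cosine_similarity(doc, other))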


Had the same thought - it's also annoying to update a PDF once links die, so I doubt updates will happen often. I guess it might be helpful if you want it as a coffee table book...


Would love it if it were available and open source so people could use it in their own projects (or on their own hardware), instead of only being available on Intel's AI Cloud. But cool idea and execution nevertheless!


Yeah, would love to see built-in support for this in PyTorch or TF.

