Yeah, in case anyone else has the misfortune of having to work with multi-dimensional data in MATLAB, I'd recommend the Tensor Toolbox, Tensorlab, or the N-way Toolbox.
It is not, other than sometimes in the case of equal contribution. The first and sometimes second authors are the most important, and the last author is often the advisor/senior researcher supervising the work.
This is not accurate; it depends on the subfield. As a rule, the more theoretical the subfield, the more likely that alphabetical order is used. See e.g. papers from a theoretical conference like STOC vs. a systems conference like HotOS.
Has anyone actually tried any of these AI interview tools? From their website, it looks like the interviews are extremely short (max duration of 20 minutes?), even if you pay $55+/mo, which really limits their utility.
I can explain why we intentionally keep interviews to 20 minutes: research and our experience show that's the sweet spot for productive technical interviews. Longer sessions often lead to mental fatigue and diminishing returns, while our focused format helps candidates maintain peak performance and get actionable feedback. Plus, you can do multiple sessions to practice different skills, making it much more effective than one long, exhausting interview.
The downside of this approach is that it can affect the search results returned. But I found that if you add " -fuck" or " -fucking" to your search term, it disables the AI summary without significantly affecting your search results (unless you happen to be looking for content of a certain kind).
You can probably find some other term that disables the AI but is unlikely to occur naturally in the articles you'd like to find, e.g.: "react swipeable image carousel -coprophilia".
Will it still work if "fuck" is part of a quoted phrase? If so, you could avoid it by constructing a phrase that contains the term but isn't going to match anything, ex: -"fuck 5823532165".
Is there distributed server support? I see it on the list of new features with (currently PoC) next to it, but is the code for the PoC available anywhere?
Also, would there be any potential issues if the index was mounted on shared storage between multiple instances?
The code for the distributed search cluster is not yet stable enough to be published, but it will be released as open-source as well.
As for shared storage, do you mean something like NAS, or rather Amazon S3?
Cloud-native support for object storage and separating storage from compute is on our roadmap. The challenges will be maintaining low latency and the need for more sophisticated caching.
The content and idea of the article are interesting, but the writing is so terrible that I couldn't bring myself to finish it. The article was clearly written or augmented by AI, given how many meaningless paragraphs (that convey absolutely zero information) there are.
I thought I was super clear, but I'll take your comments into consideration next time. This is the first time I've heard someone say that, out of all the comments. I really appreciate the feedback; getting feedback is the only way someone gets better.
Yeah, this is the main issue with the suggestion. Embeddings can only be compared to each other if they are in the same space (e.g., generated by the same model). Providing embeddings of a specific kind would require users to use the same model, which can quickly become problematic if you're using a closed-source embedding model (like OpenAI's or Cohere's).
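To make the point concrete, here's a toy sketch (made-up 3-dimensional vectors, not real model output) of why similarity scores are only meaningful within one model's embedding space:

```python
import math

def cosine(u, v):
    # Cosine similarity: only meaningful when u and v come from the same model.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors standing in for embeddings from a single model (same space):
cat = [0.9, 0.1, 0.3]
kitten = [0.8, 0.2, 0.35]
car = [0.1, 0.95, 0.0]

# Within one space, relative comparisons work as expected:
assert cosine(cat, kitten) > cosine(cat, car)

# A vector from a different model may not even have the same dimensionality,
# and even if it did, its axes encode different things, so the score is noise.
# (Here zip() would silently truncate and produce a meaningless number.)
cat_other_model = [0.2, 0.7, 0.1, 0.4]  # e.g., a 4-dim space from another model
```

So a vendor shipping "pre-computed embeddings" is implicitly locking consumers into whichever model produced them.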
Had the same thought - it's also annoying to update the PDF once links die, so I doubt that'll happen often. I guess it might be helpful if you want it as a coffee table book...
Would love it if it was available and open source so people could use it in their own projects (or on their own hardware), instead of only being available on Intel's AI Cloud. But cool idea and execution nevertheless!