A decent fraction of research is open access these days (it's highly dependent on the field, though). That fraction is also growing. We're building systems to access full texts whenever possible.
Hadn't seen that; it looks very complementary. We don't address staying up to date or pull in community value metrics (other than total citations). Broadly gathering emerging ideas and staying informed is a somewhat different goal.
Can you be more specific? Like break it out into AND and OR statements? Or just more iteration back and forth? We find people more familiar with the system learn better strategies than the LLM can suggest.
I did see that, and maybe I should expand my comment. From the perspective of someone doing long-term research: Undermind is a startup and is subject to the vagaries of VC funding. Currently, AI is in fashion among VCs, but my guess is that the fashion will be shorter-lived than the usefulness of the search results. So having tried out the product and found a nice literature list in a relatively new area for me, my first instinct was to store it among my org files, because the probability that your company will disappear (or be severely degraded after an acquisition) is high.
Anyway, I do not mean to detract from the accomplishment -- and I liked the product! So I hope you take the above feedback/nitpicking in the spirit in which it is intended.
Hi Tom, I will agree that printing your current UI to PDF on the user's browser side is not ideal (and not ideal for you either, as it breaks the site's branding and links, at least in my case).
In our use case, too, I wanted to forward an offline-readable version of your result page and had to save the webpage with a browser extension.
I think giving users (or maybe just paid users) a "Save as PDF" option for the search result page would be a very easy feature to monetize.
I understand that "share the link" is a much more appealing call to action for your startup, but you're probably not going to change the mindset of the entire industry away from PDFs overnight, and you do stand to gain by making this a (possibly paywalled) feature.
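For what it's worth, the server-side version of this can be fairly small. Here's a rough sketch using Playwright's Chromium PDF export; I have no idea what your actual stack looks like, so the function name, URL, and parameters below are just placeholders:

```python
# Hypothetical sketch of a server-side "Save as PDF" export.
# Uses Playwright's Chromium PDF rendering; not tied to Undermind's real stack.
from playwright.sync_api import sync_playwright

def render_result_page_to_pdf(result_url: str, output_path: str) -> None:
    """Render a (hypothetical) search-result page to a PDF that keeps the site's styling."""
    with sync_playwright() as p:
        # page.pdf() is only supported in headless Chromium.
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(result_url, wait_until="networkidle")
        page.pdf(path=output_path, format="A4", print_background=True)
        browser.close()

render_result_page_to_pdf("https://example.org/search/123", "search-123.pdf")
```

Rendering it on your side (rather than relying on each user's print dialog) also means the branding and layout come out the same for everyone.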
I think we associate learning/discovery with those moments because they happen together, not because they're causally related.
I think this is somewhat equivalent to how we used to have to learn 100 different integrals and derivatives in calculus. That's partly useful. I learned to see patterns in math that way, the same way I learned a decent bit by browsing irrelevant abstracts and following citation trails. But physically memorizing hundreds of integrals is mostly a waste, and so are the irrelevant abstracts. You'll be much better at math (and hopefully science) if you can learn ~10 key integrals or read ~10 abstracts, and then spend the rest of your time understanding the high-level patterns and implications by talking with an expert. Just like I can now ask GPT-4 to explain why some integral formula is true, which ones are related, and so on.
And that's the last point: these literature search tools aren't developing in isolation. We'll have a local "expert" to discuss what we find with. That changes the cost-benefit analysis too.
Thanks! That's helpful to hear. Honestly, we just went with numbers because the LLM has no trouble keeping track of which is which, and it's easier to programmatically parse the citations out to build hyperlinks (compared to names/years, where little variations creep in).
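To illustrate what I mean (purely hypothetical names, not our actual code), turning numbered markers into links is basically one regex, whereas author-year citations need fuzzy matching to survive formatting variations:

```python
import re

# A marker like [12] is unambiguous; "Smith et al., 2021" vs "Smith et al. (2021)"
# would need normalization before it could be matched to a reference list.
CITATION_RE = re.compile(r"\[(\d+)\]")

def link_citations(text: str, reference_urls: dict[int, str]) -> str:
    """Replace [n] markers in LLM output with HTML links to the matching reference."""
    def to_link(match: re.Match) -> str:
        n = int(match.group(1))
        url = reference_urls.get(n)
        # Leave the marker untouched if we don't know the reference.
        return f'<a href="{url}">[{n}]</a>' if url else match.group(0)
    return CITATION_RE.sub(to_link, text)

print(link_citations("This effect is shown in [3].", {3: "https://example.org/paper-3"}))
```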
Yeah, I get it, but if someone said "check out undermind", I could easily hear it as "check out undermined", since the two are pronounced identically by most English speakers.