A new old kind of R&D lab (answer.ai)
289 points by jph00 11 months ago | 90 comments



Hi folks -- Jeremy from Answer.AI (and fast.ai!) here. Happy to answer any questions you have about this new thing that Eric and I are building.

One thing I'm particularly keen to explore is working closely with academic groups to help support research that might make AI more accessible (e.g. requiring less data or compute, or becoming easier to use). This includes the obvious stuff like quantization, fine-tuning adaptors, model merging, distillation, etc, but also new directions and ideas which might otherwise be hard to fund (since academia tends to build on established research directions).
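As a concrete example of the accessibility angle, here's roughly what 4-bit quantization looks like in practice today -- a minimal sketch, assuming the Hugging Face transformers + bitsandbytes stack (the model name and settings are illustrative, not an Answer.AI recipe):

    # Load Llama-2-7B in 4-bit so it fits on a single consumer GPU.
    # Assumes `transformers`, `accelerate`, and `bitsandbytes` are installed,
    # and that you have access to the gated model weights.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",              # NormalFloat4, as used by QLoRA
        bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16, store in 4-bit
    )
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",
        quantization_config=bnb_config,
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")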

I've opened my DMs at https://twitter.com/jeremyphoward for a while so feel free to ping me there, or also on Discord ('jeremyhoward').


Something I've learned as I've dived into fine-tuning is that it's really hard. I have what I think is good data, and a lot of it. But as a non-ML engineer who just wants to use these tools as good APIs (and someone able to ensure I have a coherent data pipeline), I wouldn't know how to proceed if I weren't already working with an ML engineer who's been doing this sort of stuff for a while.

Specifically, I fine-tuned gpt-3.5 and llama-2-7b on some real-world usage data, and I can't tell any difference in quality between their outputs and those of "base" gpt-3.5 given the same requests. Moreover, if I attempt to remove a lot of the "static" sections of a prompt (after all, there are 2k+ lines of data it was fine-tuned on where this is all present), both models just go completely off the rails.

I'd love to get into a world where I can fine-tune a bunch of different models. But it's so, so much harder than just calling OpenAI's API and getting really good results with that and some prompting work. If you're able to help crack that nut then there's a lot of people like me who would pay money to have their problems solved.
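For context, the kind of setup I mean on the open-model side is roughly this -- a minimal LoRA sketch using Hugging Face PEFT (the hyperparameters are common defaults, not something tuned to my data):

    # Minimal LoRA fine-tuning setup for llama-2-7b (illustrative only;
    # assumes `transformers` and `peft` are installed).
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
    lora = LoraConfig(
        r=8,                                  # rank of the low-rank update
        lora_alpha=16,                        # scaling factor
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # typically well under 1% of the 7B weights
    # ...then train with a standard transformers Trainer on your own data.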


One thought: if you want to be able to remove the static part, consider fine-tuning without the static part. If you fine-tune with it, you're teaching the model that the desired behavior occurs only in the presence of the static part (hence the going off the rails).
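Concretely, that just means changing what goes into each training record. A hypothetical before/after, shown as Python dicts in OpenAI's documented chat fine-tuning layout (one such JSON object per JSONL line):

    # Hypothetical training records. STATIC_PREAMBLE stands in for the
    # boilerplate you currently repeat in every prompt.
    STATIC_PREAMBLE = "...2k tokens of instructions, examples, etc..."

    # Before: every example carries the static part, so the model learns
    # that good outputs only happen in its presence.
    with_static = {"messages": [
        {"role": "system", "content": STATIC_PREAMBLE + "\nTask-specific bits"},
        {"role": "user", "content": "real user input"},
        {"role": "assistant", "content": "desired output"},
    ]}

    # After: drop the static part, so fine-tuning bakes the behavior into
    # the weights and inference no longer needs the long preamble.
    without_static = {"messages": [
        {"role": "system", "content": "Task-specific bits"},
        {"role": "user", "content": "real user input"},
        {"role": "assistant", "content": "desired output"},
    ]}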


2k lines is (probably) not a lot of data.


That may be the issue. I can get about 6k lines of data today and in time, could have a bunch more. Each line represents a full request (user input, model output, full prompt, etc.) so I would intuitively think that 2k of these is good enough.

And it's actually good enough for llama-2-7b, insofar as it transforms this model from "terrible with my prompt in production today" to "this could be workable, actually". But my bar is to be better than gpt-3.5 and require less text in my prompt, but no fine-tuning I've done so far has achieved that.


It’s not reasonably possible (currently?) to get the same performance from a 7 billion parameter model as a 175 billion parameter model with just an additional 6000 lines of finetuning data.


It gets very similar performance, so maybe it is possible. But also when I fine-tune 3.5, its response quality is indistinguishable from when I use the base model.

All of this is to say: this shit is way harder than it needs to be. I'm not an ML engineer but I do know my data and how to get it. Why is it still so hard to specialize a model?


Because we don't have a clear understanding of how these things work yet.


Yeah it really needs to be much easier. Hopefully we can help with that.


I like your tutorials so much! I think you're a gift to the ML community. How did you guys secure $10m in VC funding while putting this on the webpage? I'm not criticizing or anything—just want to know how one can pitch an uncertain idea and yet receive generous funding.

> We don’t really know what we’re doing
>
> If you’ve read this far, then I’ll tell you the honest truth: we don’t actually know what we’re doing. Artificial intelligence is a vast and complex topic, and I’m very skeptical of anyone that claims they’ve got it all figured out. Indeed, Faraday felt the same way about electricity—he wasn’t even sure it was going to be of any import:

> “I am busy just now again on Electro-Magnetism and think I have got hold of a good thing but can’t say; it may be a weed instead of a fish that after all my labour I may at last pull up.” (Faraday, 1831 letter to R. Phillips)

> But it’s OK to be uncertain. Eric and I believe that the best way to develop valuable stuff built on top of modern AI models is to try lots of things, see what works out, and then gradually improve bit by bit from there.

> As Faraday said, “A man who is certain he is right is almost sure to be wrong.” Answer.AI is an R&D lab for people who aren’t certain they’re right, but they’ll work damn hard to get it right eventually.

> This isn’t really a new kind of R&D lab. Edison did it before, nearly 150 years ago. So I guess the best we can do is to say it’s a new old kind of R&D lab. And if we do as well as GE, then I guess that’ll be pretty good.


That's a very fair question! The key is to find aligned investors. In this case, we found investors who believe in the fundamental opportunity, have the patience for us to figure out how to capture it, and believe that we're the right people to do it.


I like this approach; it could turn out to be quite Edisonian.

Along the way you could really uncover some things that others would not.


They were betting on the reputation and track record of the principals, not a specific business plan they’ve got right now.


If I understand correctly, your focus will be on smaller models and what can be built on top of them? How relevant do you think fine-tuning small models will be once AGI is widely accessible? Do you think smaller models can co-exist in an AGI future?


I plan to write a longer article about this, in fact! In short, I think we already have AGI, but I'm not sure it's going to get dramatically better quickly (i.e. I don't think there's a clear path to ASI).

I think AGI is actually less important than we all expected it to be, and that it doesn't replace more tightly focused domain-specific models.


> I think we already have AGI,

This is the only time I have seen anyone claim this. Can you elaborate?


Computers that can reliably write programs / handle higher-order logic have long been one of my big fundamental bars for useful AGI. It will clearly get better, but for a lot of what knowledge workers already do... the bar has been cleared, and now engineering is playing a much easier game of catch-up.

(Hi Jeremy!)


Yes, I'm inclined to agree, it is very flawed AGI, but AGI nevertheless.

When you ask chatgpt4, "tell me the pH and electric conductivity of a 0.5% solution of sodium 2-ethylhexanoate", and it writes a python program to calculate it, that is pretty much AGI-level ability in my book.
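For the curious, the program it writes is back-of-the-envelope chemistry along these lines -- a sketch with my own assumed constants (pKa ~ 4.8 for 2-ethylhexanoic acid, rough limiting molar conductivities), so treat the numbers as illustrative:

    import math

    # ~0.5% w/v sodium 2-ethylhexanoate.
    MW = 166.2       # g/mol, assumed molar mass of the salt
    conc = 5.0 / MW  # 5 g/L -> ~0.030 mol/L

    # pH from hydrolysis of the weak-acid anion (assumed pKa ~ 4.8).
    Ka = 10 ** -4.8
    Kb = 1e-14 / Ka
    oh = math.sqrt(Kb * conc)          # [OH-] in mol/L
    pH = 14 + math.log10(oh)           # ~8.6

    # Conductivity from assumed limiting molar conductivities (S*cm^2/mol):
    # ~50 for Na+, ~25 for a bulky carboxylate anion.
    kappa = (50 + 25) * conc / 1000    # S/cm -> ~2.3e-3, i.e. ~2.3 mS/cm
    print(f"pH ~ {pH:.1f}, conductivity ~ {kappa * 1000:.1f} mS/cm")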



stay tuned


Conversely, how relevant do you think 2023 techniques will be at all in a hypothetical AGI future? Doesn't the relevance of these questions embed some pretty strong assumptions?


Great to see this effort! Couldn't think of a better duo to experiment with it!

>with academic groups to help support research that might help make AI more accessible

Jeremy, are you already talking with the Sky Computing Lab? I recall they had an interesting project, SkyPilot, which seemed helpful.

https://sky.cs.berkeley.edu/ https://github.com/skypilot-org/skypilot


No I'm not, but I'm aware of their work and it looks great - I should certainly chat to them!


I'm super keen to at the very least keep an eye on this! Good luck.

Question: the world is basically one big R&D lab right now, with hourly releases of ideas, lots of cross-pollination, all focused on discovering value a bit behind the bleeding edge of the hyperscalers. This seems to be playing in exactly that space, but obviously you feel there's a need and a reason why this model would succeed. Could you expand on that?

(Thank you both for all you've done.)


I'm looking forward to seeing what projects you work on at Answer.ai! I'm a big fan of your fast.ai courses and all the work you've done in the AI industry.


This is a great initiative! Given what you want to achieve, I presume you will have a wide range of (atypical) profiles in your team.


Without a doubt


Congrats on launching! This initiative looks awesome, and I'd definitely love to take you up on that and chat about it for a bit to understand the vision and what kind of engineering culture you're hoping to build. Sent a friend request on Discord!


"team of deep-tech generalists—the world’s very best, regardless of where they live, what school they went to, or any other meaningless surface feature."

How are you planning on hiring?


We're trying to figure that out -- we'll probably try a few somewhat non-traditional approaches to find the kinds of people that would be a good fit for our team.


Open a space to spitball ideas for future models. You never know when a stray idea might spark a revolution. After all, not too long ago, "Attention was all we needed".


Do you have an Answer.AI discord? Could you start one please?


Looks great! What kinds of products might the lab build?


Very interesting read. My name is Emmanuel Lusinchi, and together with my colleague Georg Zoeller, we represent Omnitool.ai, a fledgling platform in the AI landscape with a mission that resonates deeply with the ethos of Answer.AI.

Firstly, congratulations on the launch of Answer.AI and the vision you've set forth. We have been following your work with great admiration - I took the fast.ai course what feels like an eternity ago (3 years) and I have been recommending it to anyone interested in foundational AI ever since. But more to your post's point: your commitment to harnessing AI's potential to create practical end-user products is not only inspiring but aligns with our own philosophy.

We have developed an open-source "AI lab-in-a-box": a platform that seamlessly integrates a multitude of AI models, both local and cloud-hosted, through a single unified interface. The aim is to simplify access to the latest developments in AI, both on the technical side (the knowledge needed to run AI models and connect them together) and on the financial side (access to GPUs). We believe this is useful for accelerating experimentation and iteration, but also for teaching AI: it gives teachers a simple, consistent tool, and gives students practical experience with the latest models so they can experience first-hand complex and often too-abstract concepts such as bias. By reducing friction and lowering barriers to entry, our platform aims to democratize access to the latest AI technologies, providing almost anyone with the tools and flexibility needed to push the boundaries of what's possible with AI.

And we do believe that our platform could serve as a valuable tool in your R&D processes, speeding up Answer.AI's ability to rapidly prototype and refine applications that leverage foundational research breakthroughs.

Moreover, we share your concern about the widening gap in understanding AI's capabilities and its implications. We believe that transparency, education, and open-source collaboration are key to bridging this gap, ensuring that AI's benefits are widely distributed and its risks are responsibly managed.

We are reaching out to explore potential avenues for collaboration. Whether it's helping you evaluate and perhaps integrate our platform into your R&D workflow, co-developing new tools, or simply engaging in a dialogue to share insights, we are eager to contribute to the incredible work you're undertaking at Answer.AI.

We would be honored to discuss this further with you. Please find more information about our platform and its capabilities on our GitHub: https://github.com/omnitool-ai/omnitool. We are also open to setting up a demonstration or a meeting at your convenience to explore synergies between our organizations.

Warm regards,

Emmanuel Lusinchi, Co-founder, Omnitool.ai (emmanuel@omnitool.ai)
Georg Zoeller, Co-founder, Omnitool.ai (georg@omnitool.ai)

P.S. A few links:

Intro video we made for Replicate: https://www.youtube.com/watch?v=DbKVUhWCYOI
The interesting part is around 0:40, showing how any Replicate model (https://replicate.com/explore) can be added to the platform and connected to others in two clicks.

Github: https://github.com/omnitool-ai/omnitool

Website: https://omnitool.ai/


Eric Ries here, happy to answer questions about Answer.AI or any of the related themes Jeremy talked about in the announcement post: rapid iteration, R&D, startup governance, long-term thinking, etc.

Excited to see what comes out of this new lab. And if you're interested in joining the cause, please do get in touch. Both Jeremy and I are on this thread and generally reachable.


How do you look at hiring "experienced people" vs. "enthusiastic interns" on something like this? More generally, how quickly do you think the team will grow, and what the ratio should be between the "old" and the "young"?


Very hard to guess how it might all shake out. I would say that both Jeremy and I have an almost fanatical belief in the power of uncredentialed outsiders. So I would guess we will be looking for curious, open-minded generalists more than any specific age or experience level. I do expect we will grow headcount rather slowly, but that doesn't mean we will launch infrequently.


Thanks!

What are your thoughts on this model of promoting breakthrough innovation?

How to fund Breakthrough Innovations in Science (Puja Ohlhaver @ DeSci.Berlin) https://www.youtube.com/watch?v=guLDNMAOn24

Puja has a few talks on such things, many very related and worth listening to imho. But most relevant: she's been working on a mechanism design that uses quadratic funding in an existing hierarchy to move funding power from funders to the on-the-ground researchers who best predict "breakthrough research" areas -- i.e. at which intersections it will appear. Here, "breakthrough innovation" is objectively measured and rewarded as "research that becomes highly cited, and which draws together disparate source citations that have never before appeared together."

So the idea is that in successive funding rounds, funding power slowly accrues in the people who best predict where research innovation will appear. Even if that turns out to be *gasp* grad students.
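(If "quadratic funding" is unfamiliar, here's a toy version of the standard matching rule -- my own illustration, not Puja's exact mechanism. The point is that many small backers attract more matching than one whale:)

    import math

    # Toy quadratic funding: match = (sum of sqrt(contributions))^2 - total,
    # per the standard CLR formula (before normalizing to a matching pool).
    def qf_match(contributions):
        s = sum(math.sqrt(c) for c in contributions)
        return s * s - sum(contributions)

    print(qf_match([100]))      # 0.0    -- one whale, no matching
    print(qf_match([1] * 100))  # 9900.0 -- 100 small backers, big matching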

(I'm particularly interested to see Polis, a "wiki survey" tool I've been using since 2016, used as one of the signals in such a system. It can help make the landscape of beliefs and feelings that people bring to the process more legible, especially at the collective level. That matters, because high-dimensional "feeling data", when placed out-of-scope in other systems, is often a reason why we get trapped in local minima of innovation that inhibit the recombination of ideas.)


I was going to link to Polis after I read the first part of your answer, but I see you’ve beaten me to it. And in so doing you’ve pretty much answered your own question. Thanks!


heh thanks for the reply :)

I am probably a bit too enthusiastic about applications of Polis-like tools (in the "when you have a hammer" sense), but there's a bit more to the system's mechanism design than just Polis -- it's just one signal of many during a full-day event format.

I expect some form of the system she describes to be the basis of much research funding in the coming years (following prototypes in more nimble cryptocurrency/governance communities).

There's an upcoming pilot with real funding in late Feb that I'm excited to be supporting! If you have time to watch her video and find it interesting, you should def get in touch with her after that.


Hi Eric, I’m a professor of human centered design in the Netherlands and I help train design students to prototype and design new AI user experiences. Could you share some ideas for AI experiences that you don’t have time to pursue but wish other people would explore?

We’ve prototyped many different tools before. However, the space is frankly disorienting because there is so much opportunity. Any suggestions to inspire engineering students to develop useful explorations?


Sure, just some ideas at random, but the most important advice is just to try new things and see what feels good:

- dashboards or other reports that call you when something changes, so you don't have to log in to see what's changed

- extremely personalized settings that remember exactly who you are and what you like to do with the interface, to the point that it basically uses it for you

- rapid prototyping interfaces, doing things like the "make it real" demo

- extremely simple apps that use AI in the backend to do amazing things. How about a camera app that just sends everything it sees to GPT4-v (see the sketch after this list)? Think how much easier that would be than loading up a translator app, taking a picture of a menu, uploading the picture, etc. Just figure out what I might want to do based on the fact that I took a photo.

- artistic/musical/creative apps that require only your phone and that you can noodle on while you have 5m of idle time. maybe the AI works on it silently in the background and then the user gives notes or feedback whenever they have time. end product is a pro-level artistic work that reflects the user's taste level but the AI's mastery of technique
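On the camera idea above, the backend really could be a few lines -- a minimal sketch against OpenAI's current vision endpoint (the prompt and model name are placeholders, and a real app would add error handling):

    # Send a photo to GPT-4V and let it guess the intent.
    # Assumes the v1 `openai` Python SDK and OPENAI_API_KEY in the environment.
    import base64
    from openai import OpenAI

    client = OpenAI()

    def describe_photo(path: str) -> str:
        b64 = base64.b64encode(open(path, "rb").read()).decode()
        resp = client.chat.completions.create(
            model="gpt-4-vision-preview",
            messages=[{"role": "user", "content": [
                {"type": "text",
                 "text": "Guess what I want from this photo and do it "
                         "(e.g. translate a menu, identify a plant)."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ]}],
            max_tokens=500,
        )
        return resp.choices[0].message.content

    print(describe_photo("menu.jpg"))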


If you're visiting London and feel inspired by the "make it real" demo, people in that circle routinely demo at Maggie Appleton's rad Future of Coding events[1], as do many other talented people building UIs and interfaces.

Here's a video of https://twitter.com/hturan 's interface for doing high-dimensional explorations of model latent space using the browser hand gesture recognition API: https://imgur.com/gallery/mxEhVZ1

[1]: https://lu.ma/foclondon


Are you open to working with other companies that are already working in the field? Or are you limiting participants to individuals?


I expect quite a bit of partnering to make sense, though nothing concrete to share at this time. We explicitly designed this to be non-competitive with the best companies in the field (who have the things they do well covered).


For those who aren't familiar with Jeremy Howard:

Jeremy was a cofounder and chief scientist of Kaggle (a competitive ML platform).

Jeremy also started fast.ai with Rachel Thomas. fast.ai is one of the best ways to learn deep learning, even today.

Jeremy is a great teacher and has been a consistent voice debunking AI paranoia and pushing back against closed models.

Really rooting for Jeremy!


He is also the cofounder of FastMail, an email provider popular with many here.


Hi Jeremy & Eric, great to see your newest endeavor. I hope that Answer.AI builds on the success and impact that fast.ai has already enjoyed.

Given new developments in hardware (by companies not named NVIDIA), I'm wondering if you are keen on exploring the next generation of model architectures and optimization procedures that might exploit newer hardware. In other words, will research directions pivot based on the hardware lottery?[1] Are you in conversations with companies developing these alternative chips?

[1] https://arxiv.org/abs/2009.06489


Yes I think it's a big opportunity too, although I'm mainly looking at leveraging the work Modular will be doing to harness diverse hardware, rather than handle that ourselves.


Is there really a shortage of companies that are trying to do foundational AI research, and also build the results of that research into end-user products? Off the top of my head, the list of such companies would include... you know, literally every large tech company. If the idea here is that they can do it at a much smaller scale, and more cheaply, that's great. But it's not clear to me from this article what the radical new approach is that will enable that.

I wonder if putting out effectively a press release before actually doing the work is the right approach. If they launch a product or two and they flop, people will say this approach was doomed from the start. It would be better to create a compelling product in stealth, successfully launch it, then reveal how it was done. That would create more buy-in to the idea that such small R&D labs can work.


Very excited to see how the Lean Startup guy applies his own ideas!


Sounds like a lab I was born for.

I think there’s a big group of individuals out there that are misfits both for regular jobs and science. Entrepreneur-ish generalists.


Will you still be working on educational content on the side? (e.g. updating fast.ai and/or making one off lectures like https://www.youtube.com/watch?v=jkrNMKz9pWU)

Either way thank you for all the amazing free content you've already put out and good luck on the new endeavor!


Yes I think so, although I haven't decided quite how to best do that yet.


Building on top of what exists makes sense. There's science, and then there's engineering, and we need more engineering. I am curious, though, how far the $10M will go and what the plan is. Building on top still needs some kind of training, and the money won't go very far for anything large-scale. I know they know this; I'm just interested to know the plan.


Yeah we actually have to make money! We can't just spend.


Here's a short thread on the announcement, from Jeremy Howard, one of the founders: https://twitter.com/jeremyphoward/status/1734606378331951318


Thank you for fast.ai. I'm glad the sweet VC money is moving to smaller players to increase the diversity of ideas, instead of it all being captured by OpenAI.

The world needs more players doing tinkering. AI is only going to work for everyone if everyone knows how it works and can tinker with it, instead of only the big corps and governments having access to it.

We need all of Mistral, Anthropic, OpenAI, Meta FAIR, MS Research, Deepmind, Tinycorp, Answer AI, Perplexity, Stability AI, Huggingface and 1000 others to be GPU rich and idea rich.

We need NVidia, AMD, and hundreds of other chip makers. We need Huawei, Xiaomi, Samsung and tons of others to be players.

AI only works if it is highly distributed and battle tested by many. We die when power is held by the few to oppress the masses.


Some questions, if you are still around to answer, because this is exciting!

1) Have you read about Vannevar Bush, and what he has written, and his body of work?! :)

2) What sort of people are you looking for / to work with?

3) Would it need to be full-time? Are you looking to hire people full time (the generalists you mention), or are you comfortable working with people who are happy not cashing a cheque from you because they have jobs and other commitments / priorities, but still believe in what you are building and would like to invest significant time in supporting / driving the mission forward for some limited (or no) financial compensation? Because I’d like to check if I fit! :)

(I’ve also spammed you on twitter with a dm, but with more personal details, etc.)

Thank you!


Yes I'm reasonably familiar with Vannevar Bush's amazing body of work. We're likely to hire passionate and interesting outsiders on the whole, but really anyone who is extremely curious, pragmatic, and happy to get their hands dirty on everything from the lowest levels of implementation detail on up could be a good fit.

It's easiest to work with people full-time, but I wouldn't set any hard and fast rules.


fast.ai (the first iteration) was what got me into ML, when I was a sophomore undergrad. It played a big role in my career choice and progression!

To this day it still is my first recommendation for those learning. Congrats on the launch, excited about the future!


Is there a way of contacting these people? If you're in this thread, could you email me please (address in my profile)? I can show you some cool ideas and what I am building!


Maybe DM one of the founders (who has DMs open to everyone): https://twitter.com/jeremyphoward


good idea


Hi Jeremy, thanks for fast.ai and Kaggle and that refreshingly honest & open interview on Latent Space. It sounds like you and Lattner are on good terms. Any plans to partner up with Modular?


I love Chris and his work is amazing. I certainly hope we can work closely with Modular, although nothing specific in place yet. I was at ModCon last week and I think that Mojo might just be the future...


I'm excited by this, and I wish y'all the best!


thank you!


> deep-tech generalists

What is meant by this?


> a new kind of AI R&D lab which creates practical end-user products based on foundational research breakthroughs

This isn't new and if anything it's the de facto standard for just about every AI research lab these days. OpenAI is the obvious example of an AI lab with tightly coupled product and research roadmaps and ChatGPT is the most prominent example of a successful research-driven AI product. A few years ago it could be argued that DeepMind and (fka) FAIR were siloed off from their respective orgs, but these days they are littered with product teams and their research roadmaps reflect this influence as well.

They do try to claim that what they are doing is different from OpenAI because they are focused on applications of AI whereas OpenAI is focused on building AGI, which is a laughable mischaracterization of OpenAI's current roadmap. I personally have a hard time believing that the path to AGI runs through the GPT store.

Accomplished researchers in AI can fundraise on their reputations alone, and Jeremy is no exception. The primary differentiator of any new startup in this space is the caliber of its researchers and engineers. But this post is really grasping at straws to claim that their value is from some new approach to R&D, which is a totally unnecessary framing.


I’m just glad this exists! Very Wright Brothers vibe.


"but they were also on their way to being controlled and understood by a tiny exclusive slither of society."

I think that should be "sliver".


I was going to say they're synonymous; I checked Wiktionary, which calls 'slither' nonstandard for 'sliver', though common in the UK (where I am from and live), blaming 'th-fronting'.

I assume that's the name for, ahem, 'that is just anovver word for it'.


I'm Australian so I didn't know about this -- I guess I'll switch to "sliver" since it seems it's more broadly understood.

Thanks for letting me know!


No wuckers ;)

I didn't know either, I only really checked because I was curious if they had a completely different etymology and only happened to be spelt^ and used similarly.

(^or if, like spelt and spelled, the same root had just come to be used in two ways for the same.)

So, note to self, slither is not a noun! (Except to mean limestone rubble apparently, but I think I can ignore that.)


Note to self, "a sliver of hope" is not to be confused with "the silver lining".


In American English, "slither" is more frequently associated with the movement of snakes, specifically. A snake "slithers" by "moving smoothly over a surface with a twisting or oscillating motion." [0]

[0] https://www.google.com/search?q=define%3A+slither


Depends on your opinion of the reptile nature of those doing the controlling.


An animal muppet would know.


So many people writing their pitches start by talking about scientists from the 1800s, these days. It's a pretty big red flag to me.


Not sure why it's a flag. We have lots to learn from how science was done in the past, and from the actors who did science.

Recent science is pretty objectively at a low point in "breakthrough innovation" research, proportional to overall output. It's possible that specialization is to blame, as it reduces the intersectionality of fields.

Details here:

‘Disruptive’ science has declined — and no one knows why https://www.nature.com/articles/d41586-022-04577-5


How so?


Scope issues, mostly. I'm left unsure of what products to expect, so I'm less likely to follow up or check in later. A lot of time is spent on the analogy, too, which may mean there's not much substance to say yet. It may have been worth waiting to announce until they had something specific to present. I don't know! This is just one outsider's/random person's perspective.


> which may mean there's not much substance to say yet

They don't need to talk about substance. Jeremy Howard + Eric Ries give more than enough just with their names.


Elon has taught me not to believe in that idea.


I don't think much of Elon either, but if he was starting a new company it's probably a good bet it's going to succeed (despite the Twitter debacle).

I don't think there is a better predictor of future success than past success. Or at least I can't think of one - can you?


Past success... at what? The relevance of past success is likely a lot narrower than you may think. Neither person here has "past success" at pushing a vague idea-less organization through to value and profitability. In fact, I would wager both would caution against such an idea, absent their personal involvement.


While Faraday discovered induction, wasn't it Maxwell that unified electricity and magnetism? Given what answer.ai is attempting to do, Edison seems like a great example since he was both a brilliant inventor and an absolutely shrewd businessman.

I am excited for more research in this area, since there is currently a huge gap between foundational model research and practical applications of AI.


As someone guilty of the same: it makes your startup seem grander by tracing a lineage from great historical figures to yourself. Of course most of these comparisons are overinflated... but you need to be a little ambitious to try to start something. If you live your life without trying to be a part of history, you have a much lower chance of affecting it.


When will the hype curve finally end...



