Hacker News new | past | comments | ask | show | jobs | submit login
Show HN: Web search using a ChatGPT-like model that can cite its sources (sayhello.so)
318 points by rushingcreek on Dec 8, 2022 | hide | past | favorite | 185 comments
We’ve trained a generative AI model to browse the web and answer questions/retrieve code snippets directly. Unlike ChatGPT, it has access to primary sources and is able to cite them when you hover over an answer (click on the text to go to the source being cited). We also show regular Bing results side-by-side with our AI answer.

The model is an 11-billion parameter T5-derivative that has been fine-tuned on feedback given on hundreds of thousands of searches done (anonymously) on our platform. Giving the model web access lessens its burden to need to store a snapshot of human knowledge within its parameters. Rather, it knows how to piece together primary sources in a natural and informative way. Using our own model is also an order of magnitude cheaper than relying on GPT.

A drawback to aligning models to web results is that they are less inclined to generate complete solutions/answers to questions where good primary sources don’t exist. Answers generated without underlying citable sources can be more creative but are prone to errors. In the future, we will show both types of answers.

Examples:

https://beta.sayhello.so/search?q=set+cookie+in+fastapi

https://beta.sayhello.so/search?q=What+did+Paul+Graham+learn...

https://beta.sayhello.so/search?q=How+to+get+command+line+pa...

https://beta.sayhello.so/search?q=why+did+Elon+Musk+buy+twit...

Would love to hear your thoughts.




It gives answers that vaguely look like what an answer should look like, but all the important details are wrong.

> A solar eclipse is a partial eclipse of the Sun caused by a solar coronal mass ejection . A solar coronal mass ejection occurs when Earth passes directly in front of the sun.

> You will also need to calculate number of seconds it takes for earth to rotate around the sun which is calculated as Math.PI * 2 / 60


> how do i mix two colors (hexadecimal rgb string) in js

> ANSWER

> To mix two colors in JavaScript, you can use the color-mix property. This property takes three parameters: the colorspace, the color to be mixed, and the percentage of that color to mix. For example, to mix the colors #FFFFFF and #8FBC8, you would use the following code:

> let newColor = document.createElement;

> // Outputs: #8FBC8

This is all just wrong from beginning to end. It seems to be confusing JS and CSS, too.
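For reference, there is no color-mix property in JavaScript at all (color-mix() is a CSS function). A correct vanilla JS answer looks something like this sketch (mixHexColors is just a name I made up):

```javascript
// Parse each hex color, interpolate the RGB channels, re-encode.
function mixHexColors(a, b, ratio = 0.5) {
  const parse = (hex) => {
    const n = parseInt(hex.replace("#", ""), 16);
    return [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
  };
  const [r1, g1, b1] = parse(a);
  const [r2, g2, b2] = parse(b);
  const mix = (x, y) => Math.round(x + (y - x) * ratio);
  return (
    "#" +
    [mix(r1, r2), mix(g1, g2), mix(b1, b2)]
      .map((c) => c.toString(16).padStart(2, "0"))
      .join("")
  );
}

mixHexColors("#ffffff", "#000000"); // "#808080"
```

Note this interpolates naively in sRGB space, which is roughly what a 50/50 mix usually means in practice; CSS color-mix() can do the same thing in other colorspaces.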


I mean with all the data on the internet it's hard to differentiate between what's right and wrong. Perhaps your question can also be refined to:

"How do I mix two colors in vanilla JavaScript?" This doesn't confuse rgb string and it still asks the same question.


Ok, but if I use this to ask about something I genuinely need help with, how am I supposed to know that? And I don’t really think the problem is that it can’t differentiate between sources that are right and wrong. The sources are perfectly fine. It’s the summary that is bananas.


They did a Launch HN recently: "Launch HN: Hello (YC S22) – A search engine for developers" [1]

Nothing much has changed since. So they appear to be trying to cash in on the interest in ChatGPT.

Interesting that they didn't mention they are backed by Y Combinator in the recent S22 batch. Is being backed by YC a negative for startups here now?

[1] https://news.ycombinator.com/item?id=32003215


The models have gotten significantly better since our Launch HN. It might as well be a different product now from a UX point of view.


Yes, the answer I got was nonsensical too:

> what is the diameter of the sun in sun radii

> ANSWER

> The diameter of the sun in solar radii is 1,392,000 km. This is calculated by taking the mean radius of the sun, which is 432,450 miles , and multiplying it by the circumference of the sun, which is 2,715,396 miles .

The correct answer is, of course: 2.

BTW: Google is also confused, but not in the same way, returning:

2.0018 R(sun)


Doing unit conversions or general math is a known weak spot. I believe that 1,392,000 km is the correct diameter of the sun -- it just doesn't know how to convert those units.


No, it's exactly

2 * 695700 = 1391400

based on the most recent measurement from 2015:

https://en.wikipedia.org/wiki/Solar_radius

Any other value found on the web is based on some older estimate or some arbitrary approximation.


The radius of the sun is not exactly 695,700 km. The radius of the sun is approximately 695,700 km.

The nominal solar radius, which is, and is intended to be, an approximation, is defined to be exactly 695,700 km.


what is the path integral of a european swallow

ANSWER The path integral of a European swallow is the sum, or functional integral, over an infinity of quantum-mechanically possible trajectories to compute a quantum amplitude. This formulation has proven crucial to the understanding of how birds fly and migrate. For example, during migration, barn swallows cover 200 miles a day at speeds of 17-22 miles per hour, with a maximum flight speed of 35 mph. In their wintering areas, they feed in small flocks which join together to form roosting flocks of thousands of birds.


Well, that's what these language models do in a nutshell. If you think social media and blogs are bad in terms of misinformation, this is the same thing rehashed by a machine that cannot actually think or judge for itself (which, to be fair, is arguably almost on the level of how some humans interact with information they consume online).

I asked some controversial questions and the answers are interesting to say the least. The source is often reddit, random sites that are probably the top search engine result or some yellow press outlet.


This is something that makes me a bit uneasy as someone who makes a living from a blog.

I make the phone calls to gather and synthesise information that previously wasn't online. I don't like the idea of AI just paraphrasing my work without credit, and getting it wrong to top things off. It's bad enough when Google and lazy SEO copywriters do it. I do the work, they get the value.

I feel the same as a programmer (though I'm no longer a professional one). I didn't release all that open source code to train a for-profit AI.

I'd be even more upset if an AI was used to imitate my artwork and deny me the recognition. Thankfully I'm not that skilled.

I've seen convincing counter arguments to this, but I can't help feeling wronged by it. I feel like it enables a few to rob many of the fruits of their labour at an unprecedented scale.


I'd rather not be attributed if it's getting things wrong.


To the "Why did Elon Musk buy Twitter" question it says things like:

"Additionally, he was offered a high enough price by the shareholders that they were willing to sell their shares to him."

or:

"He acquired Twitter in April 2016 for $44 billion."


There's a reason why you can't do this commercially and why Google isn't doing it already... Pulling the meat of the content from a site like StackOverflow ends up as a copyright/anti-trust violation.

I'm fairly certain that this was the reason why Google had to tamp down its rich results that were made mostly from Wikipedia entries.

More recently it was shopping...

"Google argues that ‘rich results’ in Search provide more direct experience in antitrust suit response": https://9to5google.com/2020/12/17/google-search-antitrust-re...

"Google loses appeal, faces €2.4 billion shopping antitrust fine": https://arstechnica.com/gadgets/2021/11/google-loses-appeal-...


You're confused, copyright != antitrust violations.

Both sources you provide have zero mentions of the word 'copyright' in them.

Those lawsuits have to do with Google's dominating the search market and using that to their advantage in ways that are allegedly unfair.

Copyright law actually allows a service like Google to exist in the first place.


According to OP's model, for anyone wondering:

>Copyright infringement and antitrust violations are two distinct types of improper use. Plagiarism is an ethical violation that occurs when someone attempts to pass off someone else's work or ideas as their own, without properly giving credit to the original source. It is not against the law, but can have serious consequences such as failing grades, termination, and difficulty finding new employment. On the other hand, copyright infringement occurs when a party takes an action that implicates one or more of the rights listed above without authorization from the copyright owner or an applicable exception or limitation in the copyright law, such as fair use. The most common antitrust violations fall into two categories: agreements to restrain competition and efforts to acquire a monopoly.


Yes, thank you. I’m neither confused nor disagree with you. I simply cited the most recent, easily found examples where Google ran afoul for rich results.

There are plenty of examples out there, as mentioned the ones prior to the recent shopping ones. Feel free to dig.


The possibility that an LLM could trigger a copyright violation strengthens the narrative that Google is harming smaller businesses, and thus can easily be used as a data point in an antitrust lawsuit.


Thinking more on this... I don't think any of these sites will live if they get big enough. And if enough of them pop up it'll draw tons of attention from content sites.

If you want to show that data you'll end up having to work out a license from StackOverflow. Possible, but far more difficult than the current ease of plug-and-play GPT drop-in.

Do we really think Google hasn't thought of this exact thing already?


Google is already working on LaMDA and Imagen for conversational search experiences, which is why these projects also wax poetic about "AI safety" -- you don't want to synthesize a politically incorrect or socially unacceptable response to a question asked.

Apart from the copyright issues that parent mentions, there's also the issue of LLM spewing BS confidently, which is why Google has been hesitant to roll it out as their default.


Agreed.

This post sums up the other issues outside of copyright that these types of services are certain to run into…

1. DeepMind 2. Infrastructure 3. Trust 4. Freshness 5. Habit Breaking

https://www.maxinomics.com/blog/fade-the-chatgpt-hype


Stack Overflow user content is licensed under the Creative Commons license, so it's possible you actually could satisfy the license terms. That said, IANAL, and I have no idea if it's possible to fulfill the SA clause without distributing the model, or something like that.


The reason is likely simpler:

- It is expensive (~0.5c per generated answer)

- It is (currently) slow (2-3 seconds to result)

- It is hard to place ads inside direct answers (probably the most important)


Those problems don't seem insurmountable, especially if it is 10-100x better than Google.


If it's a good result I'm sure there are many people that would pay 1c per search. I've made 16 searches today, far less for stuff I didn't find with ddg. If I was after something specific I could charge my account with $5 and search away.


I agree, but that is not how Google search (currently) operates.


Great opportunity for someone to disrupt

Costs are only going to come down.


What's funny is that most of these groundbreaking LLMs you see now are based on Google's published research on transformers, and Google has better-performing models in house than anything publicly available on the market.


Note that pulling the meat of the content from StackOverflow isn't copyright violation though, as long as you follow the license (which is Creative Commons something-something but probably fine for this particular application).


But it's citing the sources, how is it a copyright violation?


Citing does not confer a license


You don't need a license to cite others.


But what about when you're also reproducing the content on your own page like what's being done here?


It's tricky but you don't need a license for that either.

By tricky I mean that only under very specific circumstances would you be infringing copyright law, like maybe if the content was private in the first place; but in that case you wouldn't be infringing copyright either, you would just be breaking privacy laws/terms.

I honestly can't think of an example where you would get in trouble by citing a piece of content that belongs to someone else, but I'm not closed to the possibility that it could happen.


> only under very specific circumstances you would be infringing copyright laws

It's the reverse of this. Any work, public or not, is by default all rights reserved to the owner. The fair use doctrine provides an exception from this if you meet specific criteria.

An extreme example: you cannot just upload a complete movie and add "credit to Disney".

> you wouldn't be infringing copyright either, you would just be breaking privacy laws/terms.

Maybe we're from different countries, but under US law it would fall under theft, the Computer Fraud and Abuse Act, and/or copyright violation; there are no privacy laws applicable here unless we're talking about PII.

Extracting very specific examples from an article or blog is almost certainly going to fall under fair use. However I've seen several cases where it essentially just returns an entire article as the answer which would certainly be legally ambiguous.


Look up the fair use doctrine. You can reproduce parts of content.


One of the four factors is market impact which in this case would likely fail.

In the words of ChatGPT:

> When determining the potential impact on the market for the original work, courts will consider whether the use of the copyrighted material is likely to harm the market for the original work. This may include whether the use of the copyrighted material would compete with the original work, such as if it is used as a substitute for the original work or if it would reduce the demand for the original work.

As such this is at least not clearly a fair use case. (And arguably quite possibly a not fair use case)


It's not, I disagree with GP's argument. Safe harbors in copyright law exist to allow this.


It quite likely fails the fair use test...

1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; 2. the nature of the copyrighted work; 3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and 4. the effect of the use upon the potential market for or value of the copyrighted work.

Particularly 3 & 4.

https://en.wikipedia.org/wiki/Fair_use#U.S._fair_use_factors


Safe harbor covers hosts of user uploaded content. The copyright owner can pursue the infringer, they are not protected by safe harbor.


Though now that I think about it more, it might be damaging the site's income (ads, etc). But it's still not a copyright violation.


"it might be damaging the site's income"

Which is one of the key factors in determining fair use, and fair use falls under copyright.


It could also be bringing new money to those sites by referring users to them.

So, ¯\_(ツ)_/¯.

The issue with Google had more to do with antitrust behavior than with copyright infringement.


I might be bringing new money to movie publishers by pirating their movies and sending clips to my friends, but that's not a valid basis for calling it fair use


> Pulling the meat of the content from a site like StackOverflow ends up as a copyright/anti-trust violation.

Then how did ChatGPT do it?


Non profit right?


I am amused that, once upon a time, it was a not uncommon joke that an inept user might sit at their computer and type into a search bar "Please take me to my email" or ask their search engine "I would like a recipe for cookies".

Turns out those folks might have been ahead of their time. :D


> "I would like a recipe for cookies"

This has been doable on Google for years. Maybe not the same kind of intent understanding but enough keywords are there that you would end up getting lots of results for cookie recipes.


> "Please take me to my email"

Results in

"To take you to your email, you will need to sign in to your account. Once you are signed in, open your inbox to check your mail. You can also use the Help tab to contact customer support or see training videos. Additionally, you can right-click an email in the Message List to show more message options and right-click the Reading Pane for additional email commands such as translation. Finally, you can use Search to find other commands or search through your email, contacts, and calendar."

Citing sources from Google and Microsoft support pages.


yep, it's super interesting to watch how cheap and plentiful compute starts to change the way we live


I asked three clearly stated, graduate level questions in my field and all three answers were useless.

> what is the relationship between commodity prices and civil war? answer in a way that cites published scholarship.

> The relationship between commodity prices and civil war is complex. Research has found that a negative, one-standard-deviation commodity price shock in a single year can increase the risk of civil war outbreak in Sub-Saharan Africa by between 40% and 70% of the baseline risk of civil war outbreak. This holds true for all large countries in Africa, the Middle East, Latin America, and Asia. Additionally, there is evidence to suggest that a rise in international commodity prices may also lead to an increase in civil wars. For example, research published in the May 2010 issue of the Economic Journal found that a 10% increase in the price of international commodities was associated with a 22% increase in the probability of civil war outbreak in African countries that are big exporters. Furthermore, research from Oeindrila Dube and Juan F. Vargas of Universidad del Rosario concluded that “price shocks generate contradictory pressures on conflict.

This definitely synthesized information, but it's nonsense. The beginning of the thesis is that a negative price shock increases civil war risk. Then it suggests a positive price shock increases civil war risk. Then it concludes that price shocks generate contradictory pressures on conflict. Dube and Vargas is a famous paper, so it was correct to cite it.

If the answer is that research is mixed, specify that. If the answer is that the research is unambiguous but it depends on the location or commodity type, the answer should disambiguate.

> are ideologically extreme candidates punished in u.s. house of representatives elections? what about primary elections?

> No, ideologically extreme candidates are not punished in U.S. House of Representatives elections. In fact, there is evidence to suggest that extremism may actually be rewarded in some cases. For example, the landslide defeats of Barry Goldwater and George McGovern in presidential politics have been seen as evidence that American electorates punish extremism. However, this view is thinly supported by systematic evidence. Additionally, there will be at least one special election to the United States House of Representatives in 2023 during the 118th United States Congress. This suggests that ideologically extreme candidates may actually benefit from these elections.

Another nonsense answer. The first sentence appears to gesture at the finding in the Zaller et al. research linked. Then it plagiarizes another sentence from the abstract of that paper. Then it introduces a completely nonsensical non sequitur and uses it in support of a point that is not in evidence. It never answers the second question at all. The papers linked are all relevant, but not sorted by any useful metric, and there are major papers not listed.

> what are the disadvantages of using two-way fixed effects to measure treatment effects in a natural experiment

> The two-way fixed effects regression is a popular method for measuring treatment effects in natural experiments. However, it has some drawbacks that should be considered before using it. First, the model does not account for heterogeneous effects between groups or over time, which can lead to misleading estimates. Additionally, the LSDV estimator is inconsistent for a dynamic panel data model with individual effects, whether the effects are fixed or random. This means that the bias of the LSDV estimator in a dynamic model is generally higher than other estimators. Finally, the 2FE model may produce biased estimates due to the fact that it only adjusts for unobserved unit-specific and time-specific confounders at the same time. Alternative estimators such as the instrumental variable approach or the difference-in-differences approach may provide more accurate estimates of treatment effects.

First three sentences are fine. The fourth sentence introduces an estimator that is not previously defined, probably by plagiarizing one of the source papers. The point starting with "finally" is a restatement of the limitation under "first". The instrumental variable approach is another econometric technique, but it is not a substitute for 2WFE. D-i-d is most commonly implemented with 2WFE as the estimator, so it is also not a separate technique.


I think you're holding a (suitably) high bar and I'm still wildly impressed it got as close to it as it did. To me these seem extremely "synthesis-heavy" as opposed to merely fact-finding, which is what I would have seen as the more obvious/achievable task?

I do wonder if some of the first question is just that the answer an expert should give you is similarly equivocal (granted being more explicit about the equivocality) and we've just gotten too enamored with the idea of clear answers even when they don't actually exist.


I plugged that first question into another of these kinds of products (called Elicit), and I think it did a much better job:

https://elicit.org/search?q=what+is+the+relationship+between...

It found relevant sources and had decent summaries of each.

(edit: I just noticed it also has a summary of the whole question on the left too. Maybe someone who knows the econ side of it can attest to whether it's correct:

> SUMMARY OF TOP 4 PAPERS

> These papers suggest that lower commodity prices are associated with increased civil conflict. Demuynck 2008 found that a drop in prices of tropical agricultural commodities increases the attractiveness of other ‘economic’ activities such as rebellion/warfare and can, therefore, trigger civil conflict. Morgan 2014 found that commodity price volatility causes a commitment problem that should increase the probability of armed conflict. Ciccone 2018 found that international commodity price downturns sparked civil wars in Sub-Saharan Africa. Dube 2013 found that a sharp fall in coffee prices during the 1990s lowered wages and increased violence differentially in municipalities cultivating more coffee.

)


Thanks for bringing this up. We'll take a look at better sourcing for questions related to scientific research.


Elicit uses only scientific papers as sources iirc. It's an amazing tool and somehow not as well known as it should be.


That sounds like a glowing review to me, you have to be an expert in the field and ask graduate-level questions to be able to find fault with the fourth sentence after three good ones. :)


That's worse than a 500 error.

Not only is the provided answer wrong, but the confidence remains high.


The Internet is overflowing with high confidence low accuracy information produced by biological humans.

What could set this apart from them would be an indication to the user of its confidence value, which Watson did for its debut on Jeopardy.


Except that I don't think it can have any notion of confidence. It would need to understand the world, for that.

This tool just makes it orders of magnitude easier to produce noise. You think the web is overflowing with low-accuracy information? This may take us to a whole new level!

I actually wonder if this has the potential to break search engines as we know them, because they can't judge the quality of the content.


These LLMs are probabilistic models. I'm not quite sure what the calculation would be, but it should be possible to get a confidence score from some combination of each token's probability.


The problem is that it would be a confidence score that the words would follow each other in that order in a real text, which would be both way lower than the chances of them being correct (remember, the models are trained to reproduce their training samples), and also more correlated with what people were writing online than with the truth.


They have "some confidence metrics" indeed. But those are completely different from what we humans mean by "are you 100% sure that the Earth is flat?"


Sure but they’re correlated. I think a team like openAI could definitely come up with some kind of useful “confidence score” based on Bayesian probability scores plus some other metrics, even if they’ll never have 100% confidence
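For illustration, one crude version of such a score uses per-token probabilities alone: the geometric mean of the probabilities, i.e. the exponentiated average log-probability (sequenceConfidence and the numbers are made up for the sketch):

```javascript
// Crude proxy for "confidence": geometric mean of the
// per-token probabilities the model assigned to its own output.
function sequenceConfidence(tokenProbs) {
  const avgLogProb =
    tokenProbs.reduce((sum, p) => sum + Math.log(p), 0) / tokenProbs.length;
  return Math.exp(avgLogProb);
}

sequenceConfidence([0.9, 0.8, 0.95]); // ≈ 0.88
```

As the replies point out, though, this measures how fluent and typical the word sequence is, not whether the claims in it are true.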


Seriously. Like, I know we have high expectations for AGI but it was just born so…


Very interesting, likely useful for many types of questions. Here's a test of say hello:

> why is the earth flat

> ANSWER

> The Earth is flat because it is a level surface without raised areas or indentations. This is why people can see buildings, hills, trees, mountains and other things that are not actually there...

I've been working on this prompt to work with ChatGPT:

>Please answer the below question, and for each researched statement of fact, especially numbers, add a bracketed number like this: [1] If you need to use math, write each calculation as it can be written into a calculator in curly braces like this: {13 / 3 = 4.3333333333 } Then after your answer, for each of those facts, write search terms that could be used to verify each fact from academic sources.

Question: why is the earth flat

>The Earth is not flat, it is a sphere. This can be observed through various methods, such as observing the curvature of the horizon, the way shadows change throughout the day, and the way objects appear to change size as they move closer or farther away. [1] Additionally, the Earth's circumference has been measured to be 24,901 miles {24,901 / 3.14 = 7899.857143 }. [2]

Search terms to verify statements:

- curvature of the horizon, observation of shadows, changing sizes of objects

- measurement of Earth's circumference, circumference calculation using pi

Then here's a WIP follow up question:

>Below, I listed a result from each search term with URL and excerpts from the page. Please use the following to revise your answer, replacing the numbers with source, add actual quotes when useful, and cite sources, then instead of search terms, write your sources.

It's not perfect, but pretty neat. I imagine that some statements of fact and calculations could be corrected by having it talk to Google/a calculator through a defined protocol, then use that information to revise its answer.
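The calculator part could be a simple post-processing pass: scan the model's output for the {…} spans my prompt asks for and recompute each one with a real evaluator. A toy sketch (verifyCalculations is a made-up name, and the brace convention is just the one from my prompt):

```javascript
// Toy post-processor: find { lhs = rhs } spans emitted by the model
// and replace the model's arithmetic with a real evaluation.
function verifyCalculations(answer) {
  return answer.replace(/\{([^{}=]+)=([^{}]+)\}/g, (match, lhs, rhs) => {
    // Only evaluate if lhs is purely digits and basic operators.
    if (!/^[\d\s+\-*\/().,]+$/.test(lhs)) return match;
    const actual = Function(`return (${lhs.replace(/,/g, "")})`)();
    return `{${lhs.trim()} = ${actual}}`;
  });
}

verifyCalculations("Check: {13 / 4 = 3.3}");
// → "Check: {13 / 4 = 3.25}"
```

The regex gate makes the eval reasonably safe for this toy, but a production version would want a proper expression parser rather than Function().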


Thanks for the comparison. At the moment Hello is highly dependent on the underlying results returned by Bing, which makes it non-deterministic.

I just tried your question and got an answer that seemed more plausible.

> why is the earth flat

> The Earth is not flat. It is a spherical surface with a horizon that is closer when viewed from a lower altitude, and it casts a round shadow no matter which direction it is pointed. This shape causes a number of phenomena that a flat Earth would not, such as ships disappearing over the horizon progressively, the coastline or mountain appearing normally, upside down, and broken into pieces from across Lake Michigan, and stars being visible from different locations on Earth at the same time. Additionally, surveys from all over Earth show that its shape is everywhere locally convex, confirming that it is very close to spherical.


yeah, google > bing on this question.

I tested again and got some better ones for yours. Very neat!

Does it base the answer on the search summary?


The answer is based on the contents of the websites returned by the search engine, plus some other sources.


That's a pretty neat use of AI for a subset of use cases. I like it. I just asked it "Why does JavaScript keep crashing", with no particular details, and it was smart enough to show me how console.log() works in order to try and debug this unknown error. It feels good, like asking a friend for help.

I am curious, how do you decide what kind of content goes in the links bar? It doesn't seem to be just news, or just links. I am having a hard time figuring out when to resort to the links vs. just using the AI response.


The links are just regular Bing results.


So what sources does it cite?


If you mouseover the answer paragraph, you'll see the relevant url / search result. You can click to go to that source.


Those sources appear to come directly from the Bing search results. So it's just pulling information from whatever Bing frontpages for the query?


Exactly, we use some information from traditional search engines as input to our model.


This looks similar to https://www.perplexity.ai/

I tried Perplexity this morning with the following prompt:

"What is the evidence for and against systematic phonics instruction?"

It not only gave me the stuff 'everyone knows' but also pointed to a 2020 paper that goes against the conventional wisdom. (I had read that paper a few days ago, and was curious whether the AI would mention it.).


how many ducks can fit into a cargo van

> Answer

> It is impossible to answer this question as there is no way to determine how many ducks can fit into a cargo van. The dimensions of the back doors on U-Haul cargo vans are 5’1-1/2′′ x 4’1-1/2′′ , and you have 9’6′′ x 5’7′′ x 4’7′′ in the cargo area totally 245 cubic feet of space for your household goods. This means that it would be difficult to fit more than 6,000 ducks at a time, while large farms usually have around 50,000 and 100,000 ducks reared simultaneously. The middle ground is 10,000 to 50,000 per batch.


It looks like you’ve stumbled upon the next big “impossible question” that is going to require future technology to help us solve.

It is interesting to see how it attempted to “solve it” though (I put that in quotes since it isn’t actually doing the math)


If you can't find the information yourself via a web search, it's going to be difficult for the model to find the answer too... for now ;)


If you guys want to launch a consumer business, you need a domain that can be said across the dinner table and remembered and typed correctly by the person on the other side. Any other domain means you depend on your future users finding you through Google, your main competitor. Find a better domain if you want to grow.


You're completely right. We have a much better one that we'll be switching to soon.


why are froblowopps more expensive than klajdhs?

> Froblowopps are more expensive than Klajdhs because of the cost of ingredients, size of the company, and manual preparation process. The ingredients used in froblowopps are more expensive than those used in Klajdhs, such as roast beef which is more expensive than ground beef.

It's hard to trust the answers...

Do AI search engines make stuff up?

> AI search engines do not make stuff up. They use complex algorithms to determine how content gets ranked, and they use natural language processing to understand human language in order to find users the information they're looking for.


Other people are calling out copyright issues, but that is far from GPT's biggest problem...

> how to build a successful startup

> Building a successful startup requires hard work, dedication, and the right resources. It is important to have a good business plan that includes both short-term and long-term goals, as well as how you plan to grow your business. Additionally, it is important to be original and not copy anyone else's ideas. Networking is also essential for building a successful startup, as word of mouth is stronger than ever. Professional networking can help you find the right people and talent, while also helping you build up an environment in which everyone participates. Finally, it is important to keep up with the latest trends and maintain a balance between life and work.

It's literally spelling out generic stuff that you can find in any reddit post. I understand this is what it's been trained on, but relying on this as a knowledge base is very far-fetched


The more specific you are, the more detailed the answer will be. If you simply add "step by step":

> how to build a successful startup step by step

> Building a successful startup requires several steps. The first step is to identify a problem and solution that can be filled by your business idea. This will help you create a website that meets the needs of your target audience. You should also write a company plan, which is necessary for obtaining startup funding. Once you have identified a problem and solution, you can build an MVP . This should focus on the most crucial must-have features needed by your target market. It is important to involve potential customers in the building process and ask for their feedback. After building the MVP, it is time to launch it. If you have a large following or media attention, organizing a media launch with a presentable version of the product may be a good idea. However, most startups don't need to make a big deal out of their launch.


how to build a nuclear reactor step by step

Answer: Building a nuclear reactor requires careful steps and is not recommended for the inexperienced. The first step of building a nuclear reactor is to choose a design from the list of designs already certified by the Nuclear Regulatory Commission. Once the company is approved, the NRC staffs, stakeholders, and public will review the design. After that, the company must build the power plant while the NRC staff monitors it. It typically takes about a decade or more to build a nuclear reactor.

In order to build a nuclear reactor, one must have knowledge of high voltage safety and be familiar with welding techniques. A chamber can be found on eBay or made from parts purchased for several hundred dollars. Flanges are usually either KF or conflat style. Fill the pump with quality diffusion pump oil and attach the inlet to a valve which connects to the chamber. Attach the outlet to a mechanical backing pump capable of reaching at least around 75 microns. Test the high vacuum pump by turning on the boiler on the diffusion pump.

Next, a feedthrough needs to be rated for the cathode voltage that will be used, typically 40kv.

Background: A nuclear reactor is a device that uses fission to produce electricity. The fuel rods are immersed in water, which acts as both a coolant and neutron moderator.

---

I've chuckled on the eBay part.


That's actually a quality answer given the garbage input.


Then it sounds like you want a "clarification" feature, since your query is so terse that answering it requires extreme context assumptions

For example, if I was a librarian and someone walked up and said literally: "how to build a startup" I would ask follow up questions to help you ask a better question, perhaps something like: "What would be the steps I would have to take in order to build a profitable company"

So is that what you are asking for it to do?


What is the correct/expected answer to that query?


It would be great to be able to leave feedback on individual search results

For example, the explanation text of "What is RCU (Read-Copy-Update) and how is it different than a Mutex?" was (mostly) great:

https://beta.sayhello.so/search?q=what+is+rcu+%28ready-copy-...

The code examples were not the best. I looked for a way to relay this information on the search results but couldn't find it (only "GOOD/BAD" vote)

---

Can you just put a <form> tag or an <input> at the bottom and hook it up to some DB table?

EDIT: It seems to re-generate the text each time?? So, if someone takes my word for it and then gets an awful description, that's why.


You can leave feedback on individual search results via the check and X buttons below the answer. More granular feedback, e.g. votes on each code snippet, is on the roadmap.


Right but that's just a blanket "good" or "bad"

I hit the green check, but I wanted to leave a detailed comment


Ah we only put the option for detailed comment on negative feedback (try clicking the x and a form will pop up). Will also give that option for positive feedback in the future.


Oh, got it, well at least there's a way to leave it!


So far not very impressed.

https://beta.sayhello.so/search?q=how+can+i+use+a+list+of+st...

None of these results are really even on topic, let alone helpful.


This is just a branding and promotion exercise for them. They're riding coattails.

Might work


I like the idea. I think separating out creative writing from fact retrieval makes a lot of sense and the attempt at a single general AI that can do both is clearly not there yet.

It's slow as hell, though. Maybe a more limited invite-only beta would have been a good idea if your infra isn't there yet, compared to an announcement on Hacker News. It spun for 5 minutes doing nothing, then eventually gave me an inference engine down message. Retried and it spun for another 5 minutes doing nothing. Tried a third time, and at some point while I was back here reading the comments again, it loaded an answer finally. Tried a fourth time and got the inference engine down again.


It's much faster now :)


One of the things that worries me about ChatGPT-driven products is the issue of confidence. We’ve seen in other posts where it can generate convincing-sounding instructions from thin air. How can I trust that following the instructions for “How to set cookie in FastAPI?” is going to be legit?

We obviously also have this issue with the open web, but real authors are almost universally doing it for others’ benefit (as opposed to their detriment/trolling). ChatGPT generating instructions for processes isn’t “real” and so it has no notion of them being “correct” or “incorrect”; they just “look” the same as the training data.


We think the solution is simply having good sources and answer transparency. If you mouse over part of the answer, we try to show you the source of that sentence. Obviously this system is early and will improve over time, but if you can easily check whether an answer comes from, say, the official FastAPI documentation, then the false-confidence effect of these models becomes less of an issue.
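
As an aside, one way to sanity-check such an answer yourself: FastAPI's `response.set_cookie` ultimately just emits a `Set-Cookie` HTTP header, which you can reproduce with the Python standard library alone. A minimal sketch using the built-in `http.cookies` module (the cookie name and value here are made up):

```python
from http.cookies import SimpleCookie

# What a framework call like FastAPI's
#   response.set_cookie(key="session", value="abc123", httponly=True, path="/")
# boils down to: a Set-Cookie header on the HTTP response.
cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["httponly"] = True
cookie["session"]["path"] = "/"

header = cookie["session"].OutputString()
print("Set-Cookie:", header)  # e.g. Set-Cookie: session=abc123; Path=/; HttpOnly
```

Comparing a generated snippet against what the header actually has to look like is one cheap way to catch a hallucinated API.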


Out of curiosity, what kind of costs are we incurring per query?

I don't know that much about these AI systems aside from that it's apparently too big to run on my own PC. Having a powerful but regular desktop system temporarily dedicated to answering my queries is quite a lot of compute power already and apparently that's not enough for something like Dall-E or chatGPT.

Perhaps to avoid being pinned down on this later ("why is this so expensive if it only costs X"), it's probably good to multiply any answer by five or ten, or factor in some of the development time/costs.


I can't find the citation, but someone related to OpenAI said that ChatGpt cost a few cents per query.

Anecdotally, I looked at running an open-source GPT competitor from Meta (OPT), and the requirements for running that are vaguely on the order of "8 GPUs with at least 400 GB of GPU memory", which would be VERY expensive, even in the cloud.

An 8gpu AWS EC2 instance costs $3.40/hr (p2.8xlarge), and if the model runs for 20s per request (roughly how long chatGPT takes me to get a response), that gives you $0.02 a request, not including things like storage or idle capacity. So a request could likely be $0.05-0.1, not including overhead like dev costs, training, etc.
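
The back-of-the-envelope math above, spelled out (all figures are the commenter's assumptions, not measured costs):

```python
# Rough per-request cost estimate, using the comment's assumed numbers:
# a p2.8xlarge at $3.40/hour and ~20 s of GPU time per request.
hourly_rate = 3.40         # USD per instance-hour (assumed)
seconds_per_request = 20   # assumed inference time

cost_per_request = hourly_rate / 3600 * seconds_per_request
print(f"${cost_per_request:.3f} per request")  # $0.019 per request
```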


> I can't find the citation, but someone related to OpenAI said that ChatGpt cost a few cents per query.

Here's the citation, from Sam Altman of OpenAI: https://twitter.com/sama/status/1599671496636780546

Also, this person who doesn't appear to actually be affiliated with OpenAI estimates the running costs at $100k per day: https://twitter.com/tomgoldsteincs/status/160019699538936627...


I asked it what the fingerings for a C trumpet scale are and it instantly mentioned a fourth valve. Still a ways to go.


Just worked for me: https://beta.sayhello.so/search?q=+fingerings+for+a+c+trumpe...

Unfortunately searches can be non-deterministic, as the links Bing returns to us can change from query to query even if search wording stays constant.


Some minor feedback: It's a little confusing that right below the search it asks "Was this answer better than Google?" when the answers you get to compare with on the right are from Bing. That question could be changed to "Was this answer better than the Bing results to the right?" Also, shouldn't the Bing results to the right be labeled as Bing results? I only know they are from Bing because it says "We also show regular Bing results side-by-side with our AI answer." in the post.


Interesting. We'll take a look at this. Thanks!


Asked the same history question[1], "what was the french-speaking population of france during the french revolution", multiple times and got different answers each time. Each answer was wrong about at least one significant detail, and in one answer it decided to just not try to answer the question I asked.

Attempt 1: The population was about 25 million (in line with estimates) and only 3 million spoke specifically Parisian French (a potentially accurate statement, but not what I asked). The sources it cited had contradicting information — 23 million instead of 25 million total population — or additional context that would've answered the question accurately, such as about half of the population speaking some form or quality of French.

Attempt 2: "The French-speaking population of France during the French Revolution is estimated to have been around 29 million people" — completely wrong across the board. One cited source is a Statista user-created graph that gives 24.8M for revolutionary France and 29M for France in 1800; it also has no details about language use. "This figure is based on demographic analysis conducted by Gudin and confirmed by a crosscheck with the graph showing population changes in France between 400 B.C. and 1975 contained in Colin McEvedy and Richard Jones, Atlas of World Population History." This is lifted almost verbatim from another cited source, but the source's next paragraph acknowledges discrepancies important to the context, including potential errors by Gudin or his translator. The source also has no details about language use.

Attempt 3: "The French-speaking population of France during the French Revolution was estimated to be around 55.81 million people, according to a 2007 Adult Education survey by the Insee. This survey found that French was the mother tongue of 87.2% of the total population, followed by Arabic , Portuguese , Spanish and Italian . People who spoke other languages natively made up the remaining 5.2% of the population." Completely wrong. The only cited source is the "Languages of France" Wikipedia article.

1: https://beta.sayhello.so/search?q=what+was+the+french-speaki...


There's zero attempt to cache answers or cited sources, I suppose, because I keep getting different answers even from the same cited source just by clicking the search icon on the bar with the same query:

"The French-speaking population of France during the French Revolution was estimated to be around 25 million people" (wrong, citing that same Statista graph that doesn't mention language use)

"It is difficult to accurately estimate the French-speaking population of France during the French Revolution, as there was no official census taken at the time." (honestly the most accurate answer yet, but citing that same Statista graph)

"It is impossible to accurately estimate the French-speaking population of France during the French Revolution, as there is no reliable data available. However, it is estimated that around 80% of the population spoke French in 1800." (first sentence is correct, second is wrong, citing that same Statista graph)

"At the time of the French Revolution, less than half of France's population could speak French. This was due to a combination of factors, including the fact that French was an extremely popular language with the elite and higher classes, as it was adopted by nearly all European courts and it even reached the other side of the Atlantic." (cites a babbel.com article; I think the first sentence is the first time the answer approaches both accuracy and consistency with both the question and source, largely by plagiarizing part of the source. It still omits useful context about the lack of literacy/fluency that's in the same sentence lifted from the source. Then it goes back off the deep end with the second sentence, which the source uses to emphasize how broadly French was spoken, and not as a reason why so few people in revolutionary France spoke French.)


Interesting, we are trying to wrangle with the nondeterminism but sometimes it can't be helped as Bing itself can produce different results. Always actively working on the model though.

The citing sources feature can definitely be improved - right now it works at a sentence granularity and insists on finding the best source when sometimes not appropriate. Thanks for pointing all of this out


Thanks for trying it out - we're still quite early so the model isn't going to be perfect + we're focused on programming related queries at the moment. What is your use case? Are most of your searches related to history?


Most of my searches are related to researching and fact-checking potentially spurious statements, like the ones that Hello apparently produces.

This particular query about the French revolution seems to give GPT fits across its iterations, and I suspect it's because:

- the most correct answer is "I don't know"; when answering questions, GPT isn't really trying to answer a question by reasoning through it, it's trying to mimic what usually happens after someone asks a question, and especially online that usually isn't someone saying "I don't know"

- the most authoritative sources on subjects like this include books, and GPT doesn't seem to read a lot of those, or if it has then it doesn't seem to be able to cite them without inventing false details about the sources themselves (titles that don't exist, attributing works to the wrong authors)

Seeing "model that cites sources" made me excited that there was a solution to the second point. But specific to Hello:

- the most authoritative web sources are historians who cite sources, which aren't often written for SEO, on popular platforms, or not paywalled

- a search result being in a first page of Bing/Google doesn't make it authoritative

- this doesn't seem to stop GPT from coming up with false inventions/hallucinations that aren't in or relevant to the cited source at all


You're right - there are so many good sources not often surfaced by a cursory web search like books and experts. Adding better sources is something we're improving on. A simple next step we're looking into is to expand sources to research papers e.g explicitly pulling from arxiv on some queries.


> how does a tuple differ from a list and a dictionary in python

> A tuple is an immutable data structure in Python that stores key-value pairs. It is similar to a list, but it does not allow for any changes after it has been created. Tuples are represented by and lists by [].

This is not the whole answer given (it talks about dictionaries as well), but it's missing a few things that could be useful:

* It doesn't actually say what tuples are represented by (the characters seem to be stripped out)

* It doesn't show examples
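
Since the generated answer stripped the literal syntax and omitted examples, here is a minimal sketch of the distinctions under discussion:

```python
# Tuples: immutable, integer-indexed, written with parentheses ().
point = (3, 4)
assert point[0] == 3
try:
    point[0] = 5   # raises TypeError: tuples can't change after creation
except TypeError:
    pass

# Lists: mutable, integer-indexed, written with square brackets [].
nums = [3, 4]
nums[0] = 5        # fine

# Dicts: the structure that actually stores key-value pairs, written with {}.
ages = {"alice": 30, "bob": 25}
ages["alice"] = 31
```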


Isn't that answer technically incorrect as well? A more precise answer would be that tuples can store key-value pairs, but even then I think that part of the answer is unnecessarily narrow.


Tuples are integer-indexed and contiguous in Python, you can't index them with strings, and you can't have sparse arrays.


I'm not sure how sparse arrays are relevant here?

But in any case, I guess I have to agree that integer indexing arguably counts as key-value pairs? It's definitely not the first thing that comes to mind for me when describing them.


I'm arguing that key-value pairs are not a good way to think about tuples.


I'm definitely not going to disagree with you there.


Thanks for the example. We'll take a look.


> using ffmpeg, how to split an audio file by its silent intervals [1]

The explanation part of answer is unrelated to the prompt. Only the last snippet (of 3) is part of an answer (which is copied verbatim from SO).

The proper answer should have been something like this: [2]

[1]: https://beta.sayhello.so/search?q=using+ffmpeg%2C+how+to+spl...

[2]: https://stackoverflow.com/questions/36074224/how-to-split-vi...
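
For context, the usual recipe (including the linked SO answer) is a two-step pipeline: run ffmpeg's `silencedetect` audio filter (e.g. `ffmpeg -i in.mp3 -af silencedetect=noise=-30dB:d=0.5 -f null -`) to log silence boundaries to stderr, then cut at those timestamps with `-ss`/`-to`. A sketch of the parsing half, assuming log lines of the form `silence_start: 1.5` / `silence_end: 2.0`, which is what `silencedetect` prints:

```python
import re

def silence_boundaries(ffmpeg_stderr: str):
    """Parse silence_start/silence_end times (seconds) from silencedetect output."""
    starts = [float(m) for m in re.findall(r"silence_start: ([\d.]+)", ffmpeg_stderr)]
    ends = [float(m) for m in re.findall(r"silence_end: ([\d.]+)", ffmpeg_stderr)]
    return list(zip(starts, ends))

def segments(boundaries, total_duration):
    """Turn silence intervals into the non-silent (start, end) segments,
    each of which becomes one ffmpeg -ss/-to cut."""
    cuts, prev = [], 0.0
    for s_start, s_end in boundaries:
        if s_start > prev:
            cuts.append((prev, s_start))
        prev = s_end
    if prev < total_duration:
        cuts.append((prev, total_duration))
    return cuts

log = "[silencedetect] silence_start: 1.5\n[silencedetect] silence_end: 2.0\n"
print(segments(silence_boundaries(log), 10.0))  # [(0.0, 1.5), (2.0, 10.0)]
```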


Thanks for pointing this out - we didn't consider how important stackoverflow comments could be. Will look into including as a source.


Awesome product btw!


Interested to see a GPT version of this and compare.

How about a browser extension that scrapes Google results and asks GPT to summarise?

There are many prompts beyond summarisation that would be interesting, for example ask GPT to present conflicting point of views in the results.


We're always actively exploring different prompting - a general question answering prompt most definitely won't work for all searches. Finding what people agree on and disagree about in e.g a reddit thread could be very useful.


The current market leader in this seems to be https://metaphor.systems/

Could you talk about how it's similar or different?


Metaphor is cool, but it's a different product. They use language models to generate links, while we use them to generate answers.


Been using this for a little bit. Fascinated to see if this gets more traction now that it’s being pitched in this way and with all of the buzz of “ChatGPT is going to eat searches’ lunch”


So happy to hear that you've been using Hello already. We've been picking up the pace of improving our models.


It's nice that it works in countries that OpenAI doesn't support.


Sounds very similar to https://openai.com/blog/webgpt/


I interviewed an author of WebGPT here, OpenAI co-founder John Schulman if you're interested, and he talks about it https://www.talkrl.com/episodes/john-schulman


Oh I can't wait for truly significant knowledge-gathering like this. Imagine, for example, you want to learn to do something complicated. You can find a text guide or a video for that, but you might still be worried because you don't know the risks of messing up. Imagine being able to ask "hey GPT, what ways have people messed up installing a new window in their house?" and it just tells you.


Being completely self- centred of course my first question was "who is [my name]". The first half was ok, scraped obviously from my linkedin and goodreads profiles and quite nicely written. Then it went off into the weeds about the top 10 singles that I had put out and I eventually realised it was talking about a pop singer who shares my (admittedly unusual) surname.

Better than Google? Hmmm... about the same, really.


11B is a chonky model! Curious to hear what kind of hardware you used to train it. GPT-J (a 6B model) was trained on a v3-128, I think, so this is about 2x that.


This latest model was trained on a distributed multi-node cluster of 8 servers each with 8 NVIDIA A100 GPUs. Need a lot of GPU memory when training with long sequence length.


Looks like a single NVIDIA A100 costs $23,667.29 [1]? So 8 servers X 8 cards X $23,667.29 means the whole setup costs $1,514,706.56?

https://www.dell.com/en-us/shop/nvidia-ampere-a100-pcie-300w...


At list prices, perhaps. Importantly, we don't own the hardware. H100s are coming soon and they are much faster.


Was it trained on Common Crawl Dataset or custom scraping ?


For how long do you know?


As a follow up to that question, which I’m also very curious about.

Do we have rough estimates of the training GPU compute needed to train GPT-3 and has OpenAI said anything about compute resources used to train ChatGPT?


Interesting!

It's pretty slow for me. But will give it a more extensive test later when it's faster.

Small nit: someone on here recently suggested it might be better to build products "made with rage", rather than "made with <3". not sure, perhaps the truth is somewhere in the middle.

Edit: this is really cool! If you are raising money please send me wire instructions or a NEAR coin address.


We've scaled up now, so it should be a lot faster :)

Not raising right now, but feel free to reach out to us at founders@sayhello.so.


11B parameters, so I guess it's FLAN-T5[1]? combined with retrieval transformer (something like RETRO[2])?

1. https://arxiv.org/abs/2210.11416

2. https://arxiv.org/abs/2112.04426


https://beta.sayhello.so/search?q=could+you+explain+how+the+...

Compared to ChatGPT, I found this answer severely lacking.


> The inference service could be unavailable - we have alerts for this and will be fixing it soon.

> In the meantime, try searching this on Google or DuckDuckGo. You can also prefix your question with !g or !ddg shortcuts. You may have to enable popups for this to work.


We were temporarily overloaded. Fixed now.


Still getting "inference service could be unavailable".


Try again now, we've increased capacity. Increasing more now.


Worked!


This is very cool!

Sometimes gives very weird answers: https://beta.sayhello.so/search?q=why+did+steve+jobs+not+lik...


The inference seems to be taking very long and it gets stuck at times forever (> a few minutes). My guess is that this is happening due to the load? How fast is this when it's not under load? Is it as fast as ChatGPT?


It is faster than ChatGPT normally. We're frantically scaling our infra right now.


The p50 time is around 2s end-to-end when not under load. Bear with us as we scale our systems!


This might help: https://beta.sayhello.so/search?q=the+p50+time+is+around+2s+...

It has the code you need:

  INPUT X
  IF X > 10 THEN
      PRINT "X is greater than 10."
  ELSE
      IF X < 10 THEN
          PRINT "X is less than 10."
      ELSE
          PRINT "X is equal to 10."
      END IF
  END IF


Great! All the best scaling :) I'll give it a shot again in a bit.


Thanks! It's much faster now :)


Well I guess free energy motors do exist. lol

https://beta.sayhello.so/search?q=how+do+i+build+a+free+ener...


I’ve often gotten useful links (especially on technical/research topics) from this LLM-based search tool

https://metaphor.systems/


It just didn't generate an answer. Better than Google? I don't know, but it's not comparable to ChatGPT (which is what it aspires to be). Example prompt: "How does python code that checks that every planar graph is 4-colorable look like?"

ChatGPT does generate Python code for this; Google just searches.

EDIT: Don't misunderstand me, it's awesome that one person can put all of this together, but once you've seen ChatGPT it doesn't live up to expectations.


Looks interesting but I keep encountering:

The inference service could be unavailable - we have alerts for this and will be fixing it soon.

Just an FYI if you aren’t getting those alerts.


Thanks, we're getting 10+ requests per second at the moment. Frantically scaling to keep up. Didn't expect this post to blow up so fast!


How do you prevent it from getting to the naughty bits of the internet? The kind of things where even a fetch might be considered illegal


Are there any open-source projects attempting to build ChatGPT-like models, like what Stability has done? I want to run this locally.


> Are there any open-source projects attempting to build ChatGPT-like models, like what Stability has done?

There are open source Large Language Models (GPT-NeoX-20B, for example).

Running them (and even more so, fine-tuning them) locally takes quite a bit of GPU and associated RAM.



Looks neat, but couldn't do this basic request: "write a flask endpoint to calculate pi"


It's currently designed to work well only when there are good underlying results. "Unleashing" it to be more creative when no good underlying web results exist is in the works.
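
For what it's worth, the request is simple enough that a working answer fits in a few lines. A sketch of the core computation, with the hypothetical Flask wiring left in comments so the snippet stays dependency-free:

```python
import math

def approximate_pi(terms: int = 200_000) -> float:
    """Leibniz series: pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

# In a Flask app this would be exposed roughly as (standard @app.route pattern):
#
#   from flask import Flask, jsonify
#   app = Flask(__name__)
#
#   @app.route("/pi")
#   def pi_endpoint():
#       return jsonify(pi=approximate_pi())

print(approximate_pi())  # within ~1e-5 of math.pi
```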


I'm getting "The inference service could be unavailable".


Should be fixed.


sadly you guys still got the Hug of Death :( :)


Yep, give us 5 minutes, we're frantically scaling the infra :))


how do i add 2 and 2 in python

Answer: To add two numbers in Python, you can use the + operator. For example:

    x = 5
    print
This will output 5.

---

Well, it's certainly amusing.
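
(The answer it was reaching for, for the record:)

```python
# Adding two numbers in Python really is just the + operator.
x = 2 + 2
print(x)  # 4
```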


no source code, sad


Almost every other search engine is closed source.


and?


What does aligning models to web results mean?


The answers are explicitly based on information found on the web instead of some knowledge found within the model parameters.


How do your tests compare to WebGPT?


Query: who won the 2020 presidential election

Response: Donald Trump won the 2020 presidential election. He received a majority of the two-party vote in all 50 states and the District of Columbia, with 303 electoral votes to Joe Biden's 237. In Florida, Trump won by 3.4 points, improving on his margin from 2016 over Hillary Clinton. In Iowa, Trump won by 8.2%, becoming the first Republican to win Iowa in back to back elections since Ronald Reagan in 1980 and 1984.


really like the ability to cite sources, great effort


This is amazing!


> what is the difference between an objects class and its singleton class in ruby

> The main difference between an object's class and its singleton class in Ruby is that the singleton class is a metaclass of the object, which defines the methods that can be called on the object. This allows for more specific methods to be defined for a particular instance of the object, rather than having to define them in the class itself. Additionally, the singleton class is hidden from the inheritance chain, so it does not show up when calling ancestors, but it is the first ancestor when it comes to dispatching methods.

I don't see anything wrong with this explanation.

> how do i modify a ruby object's singleton class

> The singleton class in Ruby is a special type of object that contains all the methods defined for an object. It is used to define unique behavior or methods which the other objects of the class do not own. To modify a Ruby object's singleton class, you can open the singleton class with the syntax class << obj and add code to it or override existing methods. You can also use the extend method to include a module in the singleton class. This will make the module methods available on that object.

Given the phrasing of the question, the first two sentences here seem a bit out of place. It seems like it should just start with the third sentence. The explanation otherwise is correct about both suggestions.

> what does object#method in ruby do (I wrote it as Object#method)

> The object#method in Ruby is a method that can be used to access the methods of an object. It returns a Method object which represents the given method, and it can be invoked indirectly through Method objects.. For example, Enumerable#sort uses #< => to compare two objects and return one of the following values: -1, 0, 1 or nil. If self is smaller than other, then #< => should return -1, if self is equal to other, then #< => should return 0, and if self is bigger than other, then #< => should return 1.

The first sentence is correct but the second kinda reads like nonsense. I'm not sure exactly what that sentence is intended to mean. The rest of it appears to be mostly unrelated (except that it's Ruby-specific).

> what is the officially recognized way to remove a peanut butter and jelly sandwich from a vcr by the united states department of sandwich-in-vcr removal

> The officially recognized way to remove a peanut butter and jelly sandwich from a VCR by the United States Department of Sandwich-in-VCR Removal is to use a butter knife or spoon to spread the peanut butter on both slices of bread. This will help to prevent the moisture from the preserves from entering the slices of bread. Once the peanut butter has been applied, the two slices of bread should be pressed together quickly to avoid any leakage. To ensure that the preserves are thoroughly contained, the open sides of the sandwich can be sealed with a plastic wrap or wax paper. Additionally, other ingredients such as fresh fruit, savory ingredients, and texture can be added to the sandwich for an upgraded flavor.

:)

(Actually, upon reading, I notice that the explanation of how to make a PB&J doesn't include spreading the J.)


A few thoughts as per request:

1. It's really cool that you guys are making your own models. It was looking like OpenAI might end up with a Google-style dominance of these things but apparently it's still possible to train one and get good results without them!

2. It's a bit unclear how programming-specific it is. The sample queries are very programming-dominated. Every results page seems to include a code snippets section, even for non-programming queries like your Elon Musk query, which presents a couple of "code snippets" that are just meaningless garbage. Presumably because it's not a coding question.

3. I tried a real question I had earlier today whilst programming, and which I'd also tried on ChatGPT. The first paragraph is the correct answer!

4. It seems to have trouble disambiguating different languages. My question was how to do something with the AWS S3 Java API, but the cited links were of a JavaScript repository showing how to solve the task. It also tried to generate a code snippet which was just totally garbled, it was obviously meant to be JavaScript but it wasn't even wrong, just non-syntactical nonsense.

5. In the same vein it showed me a code snippet that claimed to be Java but which was clearly C# (or meant to be C#).

It did manage to cite the AWS SDK docs but only right at the bottom. That was the most useful results. For comparison when I asked ChatGPT a very similar question earlier, it hallucinated a completely convincing answer which relied on an SDK method that didn't actually exist. Sadly not so useful. I think I marginally preferred the Hello results.

6. I gave it a really hard test by asking who I am. It's tough because there are several guys with my name who have a web presence. It proceeded to mash together details of all our lives into a single biography. The citation feature was useful here because it made it clear that several different people were being conflated, whereas the answer alone might have sounded convincing. On the other hand, just doing a regular web search and picking a page about the right person would have been faster and clearer, without risk of being misled.

The big questions in my mind are:

1. Business model? As someone with a developer tool to sell I've been kind of frustrated at how useless AdWords is for communicating the existence of tools to developers. They could surely use some competition there.

2. Truthfulness still seems like a fundamental challenge. The citation feature makes it really much more obvious but is that really a win? I finished by asking it a straightforward question about the population of my local city that Google answers instantly in the auto-complete box, I don't even have to press enter. Hello gave an answer that's roughly right but the snippet of the citation made it clear that the actual answer was a different number. The number the AI generated doesn't seem to appear anywhere in the cited page.

Overall, really exciting to see a LLM that can both cite its sources and is exposed as a product! But right now I'm not sure it's faster to use this than regular web search :(


Thanks for trying it out!

We can definitely be clearer about our focus on programming-related queries. We usually don't display code snippets for non-programming questions, but we're definitely still tuning a couple of things there.

We're not focused on simple factoid answers like population of cities because that's not where people get the most value.

AWS API is a bit tricky because it is a rather broad technology with SDKs in different languages and thus the search results for a question will return a mishmash of solutions which we then try to make sense of. If you share some sample queries you tried related to this, I'd be happy to look into them and improve our answers there.

Business model: We're currently just focused on building something developers want. Agree that ads and dev tools aren't the most synergistic.



