Show HN: Chrome extension to display ChatGPT response alongside Google Search (github.com/wong2)
431 points by wonderfuly on Dec 4, 2022 | 149 comments



This is very creative. One thing to note is that this is not free (or at least it is likely not to be; OpenAI charges for its APIs). We are talking about ~1c per search at current GPT pricing (2c per ~750 words in/out). Deployed this way it is also indiscriminate, triggering for every search, even those for which ChatGPT will show suboptimal responses, like 'starbucks near me', navigational queries like 'twitter', or anything else ChatGPT will be bad at in the context of search (a lot of things!). And it is non-trivial to predict which searches GPT will be good for (especially within the typical result latency requirements of modern search engines).

We are doing some experiments with this at Kagi, and the main trick is to manage cost, possibly through an on-demand triggering mechanism (which can also help manage accuracy). One thing to keep in mind is that this is likely going to get better/faster/cheaper in the future.


For search, abandon Google, not hope. All ye should enter here: https://kagi.com

// as a kagi user, i'd

(a) imagine this as a lens at the simplest; just pick text-davinci-003, max tokens 3000 to leave room for the prompt, temp 0.7, frequency penalty 1 - 1.5, presence penalty 0.5, and instead of best-of 3 or 5, show repeated calls as if they were 3 - 5 unique results (see the sketch after these notes)

(b) imagine a richer implementation that summarizes other articles on first SERP, then collates and summarizes those (with the compare/contrast structured synthesis GPT-3 does well when POVs differ), and shows the final rollup summary above the individual summaries, in the right hand column

// would also be OK connecting it to my OpenAI token so I'm paying, not you. having done the math, it's nominal cost if not the default mode.
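A minimal sketch of (a) against the OpenAI completions API, assuming the parameter values above (the model name is real; the prompt wording, exact penalty value, repeated-call loop, and OPENAI_API_KEY constant are illustrative, not a tested configuration):

    // Sketch: query text-davinci-003 with the settings from (a), making
    // repeated calls to stand in for "best of 3 - 5" unique results.
    async function lensResults(query, n = 3) {
      const results = [];
      for (let i = 0; i < n; i++) {
        const res = await fetch("https://api.openai.com/v1/completions", {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
            "Authorization": `Bearer ${OPENAI_API_KEY}`, // your own token, per the note above
          },
          body: JSON.stringify({
            model: "text-davinci-003",
            prompt: `Answer this search query concisely:\n\n${query}`,
            max_tokens: 3000,       // leave room for the prompt
            temperature: 0.7,
            frequency_penalty: 1.2, // within the suggested 1 - 1.5 range
            presence_penalty: 0.5,
          }),
        });
        const data = await res.json();
        results.push(data.choices[0].text.trim());
      }
      return results;
    }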


Hey wow, it's the Kagi guy. Offtopic, sorry: I love Kagi. I would say 85% of my searches I stick with it, 15% I tuck tail and head to that one big search engine. Pretty phenomenal IMO, and I have no doubt it will get better. Keep it up!


The true killer apps will come when StabilityAI or another group releases an open source equivalent of GPT-3.5.

Compare Dall-E to the creative explosion that arose from Stable Diffusion.

Nobody is going to build the next Google atop OpenAI APIs except for OpenAI themselves. An open source model and pretrained weights will open the playing field for everyone to compete with Google.


Stable diffusion for davinci003 is coming


I can't wait!


Stable Diffusion can run on regular PCs. GPT-3.5, on the other hand, can't without doing a lot of swapping between disk and VRAM, making it slow.


I forget where I read this recently but a compelling point made was that this is fun as a nerd toy but to deploy it to something general like Google or Gmail would be overwhelmingly expensive.

Reminds me of the “so expensive that only the five richest kings of Europe will own them” joke from The Simpsons. Eventually it’ll be ridiculously cheap and easy to include anywhere.


I remember estimates of the worth of each search to Google being 10 cents on average. Google does run BERT on many if not all of its searches anyway, doesn't it? And let’s not forget that searching the entire internet is not cheap either! Google holds the entire index in memory and every search hit goes through thousands of machines to return your results. In other words, running ChatGPT might not exactly be a problem for Google if it decides to do so.


Google’s search of “the entire internet” is lacking more and more every year.

It’s probably not true at all anymore. It’s probably “the sliver of the Internet we prefer you interacted with.”

Any time I search these days I’m amazed at how you can exhaust the search in 1-2 pages before you get to “related hits.”


Yeah, it's clearly lacking. Either de-indexing many pages or just blocking them from the results, even when I use double quotes.

Then I go to Yandex or something and voila, it pops right up. I'm not sure I care enough to pay for Kagi but there's something very wrong with Google (and DDG, and Bing, etc)


It’s definitely deindexing pages. It’s been years since I’ve been able to locate one of my old HN rants via Google Search, when it used to be the first result. Before, all I had to use was my username + a politician’s name that I’ve only ever used once. After looking up the comment with Algolia and adding more keywords directly from the post, Google just gives up.


Genuinely curious: can you give both your name and the politician's name in double quotes individually? Google has stopped requiring all terms to be present in search; this is a way of forcing it.


If I put the two words in quotes, the only result is this unrelated thread: https://news.ycombinator.com/item?id=28162412 - I never replied in that thread, so it completely ignored my username despite the quotes but did contain one instance of the other search term.

Just double checked Algolia and I actually have three comments that fit the query. Two posted nine years ago and one posted ten years ago. It's been at least three or four years since I was last able to use Google to find my comment.

Edit: Turns out if I quote the entire first line containing the name, Google finds the comment. It seems they're only purging parts of their index.


Using double quotes used to do this...but for semi-obscure topics I notice Google search doesn't give a crap about the quotes, it'll latch onto one or neither of the terms.


I'd estimate that running a large language model like GPT-3.5 or LaMDA is currently 100x-1000x more expensive and slower to run inference through than a language model like BERT.

So deploying it at Google scale is not viable (yet).

There is also the question of incentives. If an LLM can return exactly what the user asked for, where do you put ads? Before the answer, after the answer, or inside the answer? Any of those is a bad outcome for the user wanting just the answer.

We already witnessed the failure to monetize Alexa with ads - in a purely question-answering setting, users will not tolerate anything but the answer itself. Thus, the business model for this needs to be paid AI/search, and Google would be facing the innovator's dilemma. If I were writing a book about Google, I would love to witness the VP-level meetings at Google at this moment.


I can see how to monetize it in a minute - whenever the answer involves a recommendation that has monetary value (a product or service), you get the option of choosing which product label to recommend, and there's the monetization angle.


The problem with this is a) product or service recommendations are a tiny fraction of queries, b) GPT-3 and any LLM would be pretty bad at doing this, because they would exclude any recent products/services (they are trained on data from 1-2 years ago), and c) if the output of the LLM is changed to replace the LLM's recommendation with a paid one, why bother with having an LLM in the first place?


The question is: how does Google monetize chatGPT results? If the answer is right there on the page, what’s the incentive for anyone to click on an ad?


If ads were truly useful we'd click on them anyway. You may be looking for the answer to how to run a VM inside of ChatGPT[0] but the AI also knows you'd like a new pair of headphones so… two birds with one stone!

[0] https://news.ycombinator.com/item?id=33847479


> the AI also knows you'd like a new pair of headphone
I understand how Google knows that I'm currently searching for a new pair of headphones. But how did _you_ know?!?


If the entire internet is composed of bot written SEO spam, I think you’re spot on.


If it’s possible, please create a family plan or regional pricing. I love Kagi but if I were to buy it for my entire family, the bill would be $600/year - more than a month of rent.


In the country where I live (Vietnam) $600 would be 3-4 months wages for many people.

Any company that doesn't do regional pricing is only interested in doing business with rich countries. Which sucks but is understandable. I wish they were more honest about it though.


> Any company that doesn't do regional pricing is only interested in doing business with rich countries. Which sucks but is understandable. I wish they were more honest about it though.

This may be true if your cost per client/customer/etc. is either negligible (such as with digital goods delivery) or dependent on their country (e.g. retail).

Here, the bulk of their cost is computing resources, and they (according to their profile page) don't even make enough to cover it with the current price. I don't think this cost would go down with the customer location.

Yeah, it sucks a lot that people in rich countries can afford things people in other countries cannot, but that's kinda what "rich country" means.


I’m curious about their economics. I would assume there’s a huge upfront fixed cost but the marginal unit cost is tiny. They may not be breaking even, but that (I expect) is because they’re struggling to overcome the first hump of the fixed cost; at the margin they’re doing well. I would be flabbergasted if my monthly fee is going to pay for my search transactions rather than the ability to execute any search at all.

In that model you can in fact offer regional pricing as long as there’s a critical mass sufficient to justify the localization and regional edge deployments. You can go even further and let larger (in dollar terms) regions subsidize smaller regions while they grow critical mass. Some companies I’ve worked at felt offering their service globally was worth the depressed margins, even if they never made it make economic sense, simply because it was useful enough - but also because it creates a solid global brand. Anyway, Kagi is great and I wish them the best of fortunes.


Kagi doesn't have its own index; it's a proxy. They pay to use the indexes of actual search engines, and the results they process and display to users are not cheap.


> Any company that doesn't do regional pricing is only interested in doing business with rich countries. Which sucks but is understandable. I wish they were more honest about it though.

Kagi loses money on their paid customers. If you have a Kagi account you can see how much you cost them. It makes no sense to offer regional pricing unless there is also a way to serve those customers in a cheaper way.


> Will you have a family plan?

Yes, in the future we will also offer a family plan. It will allow for adding multiple accounts under the same billing. And it will include powerful options for kids (Kagi for Kids), including control over acceptable content.


I saw they have a Team plan, doesn’t that work for you?


> it is non-trivial to predict which searches GPT will be good for

Since pricing is roughly proportional to response length (and maybe prompt length?), it seems like ChatGPT could use itself to determine if the search is a good fit or not. Give it a prompt like "I want to know if you are likely to generate more immediately useful and actionable results than my web search. The query I am searching is <query>. Should I run you on this query?"

I tried running that on some sample queries to see its output, and compiled its responses in this table (note the last one is incorrect in my opinion):

    | Query                              | Yes or No |
    |------------------------------------|-----------|
    | "starbucks near me"                | No        |
    | "twitter"                          | No        |
    | "python requests get json"         | Yes       |
    | "peach and mango cocktail recipe"  | Yes       |
    | "hn"                               | No        |
    | "image of versaille"               | Yes       |
Btw, incredibly, ChatGPT actually made that table for me when I said "Please compile all the queries I just asked you about into an ascii table, where the first column is my query, and the second column is your Yes or No answer." The table it printed was correct but formatted as an HTML table, so I asked it to put it in a code block and I got almost exactly what I pasted above (just had to remove two extraneous spaces that made the lines not line up)
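A minimal sketch of wiring that triage step in front of the search call (the prompt wording follows the comment above; the function name, OPENAI_API_KEY constant, and "Answer only Yes or No" tail are illustrative additions):

    // Sketch: ask the model to triage a query before spending a full completion on it.
    async function shouldUseGPT(query) {
      const prompt =
        "I want to know if you are likely to generate more immediately useful " +
        "and actionable results than my web search. The query I am searching " +
        `is "${query}". Answer only Yes or No.`;
      const res = await fetch("https://api.openai.com/v1/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "text-davinci-003",
          prompt,
          max_tokens: 3,  // a one-word verdict keeps the triage call cheap
          temperature: 0, // deterministic yes/no
        }),
      });
      const data = await res.json();
      return /yes/i.test(data.choices[0].text);
    }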


Yep, this is one of the amazing things about it. But using ChatGPT to determine whether a query is good for a ChatGPT answer does not address the second point I made, the one about typical latency requirements (it makes that worse).

Normally you want your search results in about 500ms. Using ChatGPT to first figure out if the query is a good fit will take 1-3 seconds, then using it again another 1-3 seconds. So we are talking about results in about 4 seconds. Plus it is no longer 1c per query; it is more likely 1.5c per query, as you introduced an extra prompt.


It's awesome to hear that Kagi is experimenting with this. Even if it's some extra paid opt-in feature, I'd give it a chance.


I don't know why people are always so up in arms about cost: if Google switched to a paid rather than ads model and charged 1 credit (1c) per search you made, I'd still use it.

People always raise such a ruckus: how dare Youtube offer premium, etc. People get too addicted to free services and become self-entitled.


Results for the same query can be cached, making 'starbucks near me'-type queries bypass GPT-3 after the first time, across all users.
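A minimal sketch of that shared cache, assuming queries are normalized and answers expire after a TTL (all names illustrative):

    // Sketch: answer from a shared cache first; call the model only on a miss.
    const cache = new Map(); // normalized query -> { answer, expires }
    const TTL_MS = 24 * 60 * 60 * 1000; // treat answers as fresh for a day

    async function cachedAnswer(query, askModel) {
      const key = query.trim().toLowerCase();
      const hit = cache.get(key);
      if (hit && hit.expires > Date.now()) return hit.answer;
      const answer = await askModel(query); // the expensive GPT-3 call
      cache.set(key, { answer, expires: Date.now() + TTL_MS });
      return answer;
    }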


I started trying to use ChatGPT instead of google for a lot of my queries. The tricky part is unlearning how to search.

For instance, I had a timestamp 2022-11-27 00:00:00.000 and wanted to display it as a local string. The problem was that (depending on your time zone) if you do new Date("2022-11-27 00:00:00.000").toLocaleString(), you get 11/26/2022.

So I would have googled something like "pass timezone to toLocaleString", which would give me this answer [0]. But it's still a lot.

Instead in ChatGPT I would ask:

> assume i'm in EST timezone, en-US

> const d = "2022-11-27T00:00:00.000Z"

> const dd = new Date(d)

> How can I print out local string as Sun Nov 27 2022?

To which it replied:

> const d = "2022-11-27T00:00:00.000Z";

> const dd = new Date(d);

> console.log(dd.toLocaleDateString("en-US", { timeZone: "America/New_York" }));

But the answer was actually wrong, which is fine! Easy to test

> This actually prints out 11/26/2022, 7:00:00 PM. Try again

To which it replied:

> The output you are getting is correct, because the date you provided is in the UTC time zone, and you are formatting it using the Eastern Time (ET) time zone...

> const d = "2022-11-27T00:00:00.000Z";

> const dd = new Date(d);

> console.log(dd.toLocaleString("en-US", { timeZone: "UTC" }));

> // Output: "11/27/2022, 7:00:00 PM"

It was pretty incredible and much better than stackoverflow as now I don't have to worry about generalizing my answer or implementing the answer to my particular use case. ChatGPT tends to be verbose but the code is clearly noted and you can often ignore the words for simple tasks.
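For what it's worth, the second suggestion does answer the original question once you also ask for the weekday format (a quick check of my own, not part of the ChatGPT transcript):

    const d = "2022-11-27T00:00:00.000Z";
    const dd = new Date(d);
    // Format the UTC instant in UTC so the local zone doesn't shift the date:
    dd.toLocaleDateString("en-US", {
      timeZone: "UTC",
      weekday: "short", year: "numeric", month: "short", day: "numeric",
    }); // "Sun, Nov 27, 2022"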

https://stackoverflow.com/questions/17478086/chrome-timezone...


> The tricky part is unlearning how to search.

Hmm, I'd argue that you've never learned how to search.

I would have searched for "javascript date to string in specific timezone", and it would have given the following result, which is exactly what ChatGPT gives: https://stackoverflow.com/questions/10087819/convert-date-to...

Actually, if you already know that you can pass a timezone to toLocaleString(), I would just go on the documentation of the function: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

I'm starting to understand why there are so many people on HN claiming that "Google has started to give me bad results in recent years".


> I'm starting to understand why there are so many people on HN claiming that "Google has started to give me bad results in recent years".

I thought I was going crazy. Google's been working fine for me and I was wondering if I was just being delusional or in denial.


I was served maybe 60-80% spam from Google around 10 months ago (now resolved). The results you see can be significantly different from what others see.

Sample from 10 months ago: https://i.imgur.com/Hla7cyT.jpg


At that resolution, the picture is almost a blur.


If you're on mobile, Imgur doesn't allow you to view a high resolution version. You have to copy the image URL and then adjust the maxwidth query parameter:

https://i.imgur.com/Hla7cyT_d.webp?maxwidth=640&shape=thumb&...

Because who would want to be able to see the whole image?


still blurry to me


Did you change the value of max width? Eg https://i.imgur.com/Hla7cyT_d.webp?maxwidth=6400


Seems fine to me

     curl -O https://i.imgur.com/Hla7cyT.jpeg && sips -g pixelHeight -g pixelWidth -g dpiWidth -g dpiHeight Hla7cyT.jpeg

    pixelHeight: 2009
    pixelWidth: 1879
    dpiWidth: 72.000
    dpiHeight: 72.000


Here's a link to the proper version: https://i.imgur.com/Hla7cyT_d.png?maxwidth=999999


But I don't want a specific timezone. I want it to ignore the timezone. I knew it was being thrown off by timezone but was unsure why.

This is one of dozens, sometimes hundreds, of questions I need answers to every day, and I want to minimize the friction. I don't want to have to read all this unless I have to. Sure, sometimes, if it's important enough or I'm curious, but the ChatGPT answer was much better for me. And I get by fine on Google otherwise.

    function convertTZ(date, tzString) {
        return new Date((typeof date === "string" ? new Date(date) : date).toLocaleString("en-US", {timeZone: tzString}));
    }

    // usage: Asia/Jakarta is GMT+7
    convertTZ("2012/04/20 10:10:30 +0000", "Asia/Jakarta") // Tue Apr 20 2012 17:10:30 GMT+0700 (Western Indonesia Time)

    // Resulting value is regular Date() object
    const convertedDate = convertTZ("2012/04/20 10:10:30 +0000", "Asia/Jakarta")
    convertedDate.getHours(); // 17

    // Bonus: You can also put Date object to first arg
    const date = new Date()
    convertTZ(date, "Asia/Jakarta") // current date-time in Jakarta


Reading documentation instead of clicking the first SO link or writing 6 lines to a bot? You must be crazy!


ChatGPT is an absolute game changer. And I was very conservative with systems like GPT and DALLE-2. For example, I've been very lazy with automating some things on my work laptop, using Powershell. Now, I just had to ask ChatGPT for "write a powershell script that toggles "use setup script" in the windows proxy settings" and I was done in 5 minutes. Amazing. I foresee a paradigm shift in how we use Google in the next 1-2 years.


You inspired me to try using ChatGPT for a similar need I had, but alas, "Write a bash script to toggle grayscale display mode in MacOS" just comes up with a plausible but non-working script that toggles a hallucinated "AppleIntensityEnabled" setting.

To be honest, I feel a bit of relief every time AI fails to do something. Like, okay, we've got a few more years...


If the code is not working you can just try telling it what went wrong. It doesn't have a Dev environment where it can test the code so you have to be its debugger.


i told it it was wrong about some math problem and it asked me what the correct answer was. i told it and it remembered. but more interestingly, it's very good at keeping context. it combined like 5 back-and-forths into 1 coherent sentence


Tell it what is wrong and why, it is surprisingly good at fixing itself with a bit of help. I was able to guide ChatGPT well enough to make it write an old school racing game using pygame. Start general, see what it gets correct and what should be changed, and give it better indications.


Chatgpt isn't a person though, it didn't fix anything. You gave it a prompt, it gave a result that was close. You add a line to the prompt and it gives you something closer to what you were looking for.

It looks more like refining a search pattern (like one might do with an LDAP query) on the part of the operator than it does an algorithm "fixing", "changing", and "being guided". It's interesting how we anthropomorphize the output of this algorithm compared to other APIs, even though the algorithm is closer to other APIs than it is to a human, as far as we understand.


I don’t think anyone here is confusing ChatGPT for an actual person.

What is really interesting with ChatGPT compared to other interactive software is that you can give instructions the way you would do it with a human. You can literally copy paste a compilation error, with no more context, and it will fix the previous program it generated. Even just pointing vaguely to something like “that does not look correct, you forgot some edge cases” will result in an improved version.


Yes, I had to press "Try Again" a few times but, in general, the amount of stuff you can generate is staggering. It's also quite fun to ask it to "invent a programming language in the style of Rust and APL" for example. There's so much potential here...


> And I was very conservative with systems like GPT and DALLE-2

Yeah those are impressive but basically toys. Although at the moment it's clearly still a research prototype. For some things it works really well and beats Google by saving dozens of clicks and repeated searches, for others it's just plain wrong.


Can you show that pwsh script?


The future is likely a GPT-plus-level model, browsing-enabled, specialized in formulating queries, reading snippets and the underlying web-pages.

It'll take a single expressed intent and conduct a series of queries, page-readings, refined queries, and rerankings/summarizations before providing you a synthesized response.

In a second or two.

With ads.

And deeply-embedded 'sponsored recommendations'.


I think the most interesting use will be to connect two things. Like "how do i make money from reciting medieval poetry". It hallucinates ways to connect the two, some of which are pretty credible


"1. Perform at poetry readings or events and charge a fee for your performance.

2. Sell recordings of your recitations as physical CDs or digital downloads.

3. Create an online course teaching others how to recite medieval poetry.

4. Offer private lessons to individuals who want to learn how to recite medieval poetry.

5. Host workshops or seminars on medieval poetry and charging a fee for attendance.

6. Partner with schools or educational organizations to offer recitation classes for students.

7. Create a website or blog dedicated to medieval poetry and monetize it through advertising or sponsored content.

8. Write a book about medieval poetry and include recordings of your recitations.

9. Collaborate with musicians to create recordings of medieval poetry set to music.

10. Create a YouTube channel featuring your recitations of medieval poetry and monetize it through advertising.

11. Offer your recitation services for special occasions, such as weddings or other events.

12. Work with museums or cultural organizations to offer recitation performances as part of their programming.

13. Record audio books of medieval poetry and sell them through online platforms.

14. Create a subscription service where members can access recordings of your recitations on a regular basis.

15. Sell merchandise related to medieval poetry, such as t-shirts or posters featuring your recitations.

16. Offer recitation services for businesses, such as recording voiceovers for commercials or videos.

17. Collaborate with other poets to create recitation performances that incorporate multiple voices.

18. Create a podcast featuring your recitations of medieval poetry and monetize it through advertising or sponsorships.

19. Write articles or blog posts about medieval poetry and include recordings of your recitations as examples.

20. Create a mobile app featuring your recitations of medieval poetry and charge a fee for downloading it.

21. Offer your recitation services as background music for yoga classes or other wellness events.

22. Work with language schools or tutoring services to offer recitation classes as part of their curriculum.

23. Collaborate with composers to create new pieces of music based on medieval poetry and perform them at concerts.

24. Record your recitations of medieval poetry and sell them as relaxation or meditation aids.

25. Write and self-publish a book of medieval poetry and include recordings of your recitations.

26. Create a live streaming series featuring your recitations of medieval poetry and monetize it through donations or subscriptions.

27. Offer recitation services for educational organizations, such as recording voiceovers for educational videos or podcasts.

28. Collaborate with theater companies to incorporate your recitations of medieval poetry into their productions.

29. Create a virtual reality experience featuring your recitations of medieval poetry and charge a fee for access.

30. Offer your recitation services as part of a wedding or event planning package, providing entertainment for guests."


'Twas clearly trained on a lot of student essays that had to hit target page-lengths/word-counts, with no penalties for repetitiveness.


Won't be too long until you can run a model like this on local hardware


Does it make sense to put this on the chrome store? OpenAI is eventually going to end the beta period or Azure will run out of GPUs, whichever is first, then it'll no longer be accessible.

In theory we can replicate this with GPT-3 but ChatGPT has better access to its knowledge, when it's not being a nagging busybody, which makes it much friendlier to interact with, when it's not being a nagging busybody.


Now imagine a decentralized version of this, as in I have it locally, ask it a question and rate how helpful the answer was, then share this feedback with everyone and they update their model.


It does? I hooked up GPT3 to my discord server and can't really tell the difference in quality.


I assumed the ChatGPT enhancement is that they added memory of the dialogue? That's a huge improvement over one-off questions and answers. You can tell it to correct its answers and ask follow-up questions.


Probably; they're still using relatively simple queries. Maybe the memory is larger, but with 4096 tokens, GPT-3 is no slouch either. RLHF has tuned it so the conversational output of GPT is more in line with human expectations, and it's also less likely to make glaring errors. The cost of their approach is that it sometimes gets stuck rephrasing things and wraps everything in boilerplate that can derail subsequent generation.


I wonder myself if the "memory" is just feeding the previous dialogue as input to every subsequent query. I think this would probably get quite slow after a while, however.


This is what it does. At least if you ask it how it does it. ;)
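That is also roughly how you would build it by hand on top of the plain completions API. A minimal sketch of the concatenation approach (complete() is a hypothetical helper wrapping whatever completion call you use):

    // Sketch: dialogue "memory" as plain prompt concatenation.
    const history = [];

    async function chat(userMessage) {
      history.push(`User: ${userMessage}`);
      // Resend the whole transcript each turn. Cost and latency grow with its
      // length, so old turns eventually have to be truncated or summarized.
      const prompt = history.join("\n") + "\nAssistant:";
      const reply = await complete(prompt);
      history.push(`Assistant: ${reply}`);
      return reply;
    }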


Can we change the title to a "Chrome extension ..." since this is not available for Firefox et al. ?


Waiting for someone to ask ChatGPT to make a Firefox version of this extension...


Done: https://github.com/unflxw/chat-gpt-google-extension

The README includes the conversation I had in order to port it. Close to zero thinking necessary. I have never worked on a browser extension before.


Now you've got my attention; this is really impressive.

Maybe this could be used in VS Code as a pair programming partner.


Bravo, and thank you for this.


> You are a chat bot that rewrites Chrome extensions for other browsers...


You are a chat bot that rewrites Chrome extensions to firefox, how would you rewrite the following code? https://github.com/wong2/chat-gpt-google-extension

To convert the code from the GitHub repository you provided into a format that is compatible with Firefox, the following steps could be taken:

    Replace instances of "chrome" with "browser" in the code. In the Chrome extension API, the "chrome" namespace is used to access various features and functionality, but in Firefox, this namespace is replaced with "browser". Therefore, all instances of "chrome" in the code would need to be replaced with "browser" to ensure that the code is compatible with Firefox.

    Replace any references to Chrome-specific APIs with their Firefox equivalents. The Chrome extension API includes a number of features and APIs that are specific to Chrome, and these would need to be replaced with the corresponding Firefox APIs in order to ensure that the code is compatible with Firefox.

    Update the manifest file to use the correct format for Firefox. The manifest file is a JSON file that specifies various metadata and configuration information for the extension, and the format of this file is different for Chrome and Firefox. Therefore, the manifest file in the code would need to be updated to use the correct format for Firefox.

    Test the code to ensure that it is functioning properly in Firefox. Once the code has been converted and updated to be compatible with Firefox, it would be necessary to test the code to ensure that it is functioning properly in Firefox and that all of the desired features and functionality are working as expected. This may involve installing the extension in Firefox and testing it with a variety of inputs and scenarios to ensure that it is working properly.
Overall, these steps would involve significant modifications to the code in order to make it compatible with Firefox. However, by following these steps and carefully reviewing and testing the code, it should be possible to convert the code
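In practice, step 1 is often handled with a small compatibility shim rather than a global find-and-replace (a common pattern, not something the bot suggested here):

    // Use the promise-based `browser` namespace where it exists (Firefox),
    // and fall back to `chrome` elsewhere.
    const api = typeof browser !== "undefined" ? browser : chrome;
    const port = api.runtime.connect();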


This is the same answer format of essentially any deep technical question I've given it.

Essentially:

1. Do the first thing that gets mentioned, the thing any first-year student learning about the topic hears about.

2. Do some more complicated things, in a hand-wavey manner, that solve the problem.

3. Check to see if the problem is solved. If it is not, do some more of the hand-waving from step 2.

4. There may be other problems to check for in the future even if this solution works. Who knows! But I solved your problem.


Sure, but because it's a chat bot, you can then ask: "Cool, so how would you do the thing you just told me to do?"

I just pasted the manifest and the main script and asked it how to port it. It was basically a one-shot. Here's the port: https://github.com/unflxw/chat-gpt-google-extension

The README includes the conversation I had in order to port it. Close to zero thinking necessary. I have never worked on a browser extension before.


I'd really love to see a shootout between two people who aren't familiar with extension programming (but who have some programming knowledge) given that task (port this GitHub repo for a Chrome extension to work and be installable on Firefox), with one person just using normal google-fu and the other doing what you did, and watch how long it takes them.

I actually wrote a few VS Code extensions recently, starting from zero knowledge of the package format for them, and I wonder how different my time spent would have been had I just asked this bot to help. There was a lot of regex involved, and hell, just using ChatGPT to explain some of the ways I could match strings with whatever flavor of regex tmLanguage uses would probably have saved me a lot of googling.


This is super cool. Curious how much work it was to discern the `[blabbery omitted]` sections. Were you generally able to tell right away that they were nonsense? Or did you have to spend significant time following dead ends it recommended?


Most of the "blabbery omitted" bits are just one-paragraph summaries of the rest of the answer above. It wasn't nonsense, it was just repetitive fluff, and the README was already way too long.

ChatGPT likes mimicking its own previous style, so after it ended its first message with a paragraph like "Overall, porting a Firefox extension to Chrome is doable, but it requires knowledge of the differences...", it kept closing its following messages with similar ending summaries.

The one bit where I refer to it as "nonsensical modification suggestions" was where it suggested that, in addition to changing `chrome.runtime.connect()` to `browser.runtime.connect()`, it told me that:

- I needed to use `document.querySelector`, because Firefox does not support `document.getElementByID` (very clearly not true if you're familiar with DOM APIs)

- I needed to change `classList.add` to `classList.add`, because Chrome supports `classList.add`, but Firefox supports `classList.add` instead (yes those are the exact same thing)

It did make me think for a moment: "what, really, Firefox does not support `getElementByID`? wait, no, of course not, the bot's just saying words."

In this case, it did not get me stuck debugging anything. In other conversations, where I've tried to "play dumb" and follow exactly what it tells me to do, it has sent me down the wrong path. But if you keep telling it "no, that's not working, how can I solve <original problem>", or you give it additional information, it usually corrects course.

I've also seen it say, "well, I don't know what's going on, but the problem must be around this line of code, insert a console log here to see what's going on with this specific value", which was actually good debugging advice, and then you can paste the output of the console log to it, and it takes it into account.


The concern I have about this is definitely being fed potentially outdated information. DOM APIs change, and Firefox may not have supported that at some point, so here we have a suggestion that's blatantly wrong now. The same goes for any suggestions that are against current best practice, like class components vs. function components in React, etc.

(These are just examples of things, surely it would suggest FCs for React at this point)


Is it bad I think this is a decent high level algorithm? As in, the user then filling in details or asking the assisting model to fill them in via subsequent prompts seems like a solid start to a problem solving session.


The problem is the part that I denoted as "hand waving". If you are asking it things about coding or other details it seems pretty good at, it can get pretty deep into it and thus that 'hand waving' can be resolved.

If you are asking about aerospace engineering concepts at the level of a PhD thesis... it just hand-waves away problems in ways that, when pressed, it cannot give detail on.

So, as far as an algorithm goes, the problem becomes determining the computational complexity of the 'hand waving' part. For writing reactJS frontend APIs it seems pretty damn low, and thus the AI can spit out code and even fix bugs for you. For developing something that actually took a human 7+ years of undergrad, graduate, and PhD work to ascertain and work out... no chance.

But yes, that general algorithm works for all walks of life, essentially, and I keep getting a format like that for every setup I ask about, as if it was trained on 'How to answer a hard technical question in these easy steps...' and is sticking to that.


I'm not well versed in this but IIUC the less context/data that exists, the less it should afford the creation of a succinct response, right? This would make sense for the gap between "frontend js api" responses, and "aerospace thesis" responses.

Would it be safe to assume the limiting factor with this or any algo of this type might always be availability of context/data, and these niche questions might always be an edge case?

Are there algorithms that can reliably extrapolate "new" data/context that does not exist yet and is accurate IRL?

Sorry if my terminology or understanding is way off, I don't know much about this field.


No, I think that is pretty much it - it is deep learning after all; the scope of the training data limits the model. It's so good with code because not only is there a TON of code in the data set, there is a TON of data of tutorials and examples and documentation EXPLAINING the code! I've long been amazed at the amount of content programmers write about... programming. When it comes to other engineering fields, people don't write nearly as much on the internet (and, often, are not allowed to write as much, publicly) about the topics. So of course these large language models are simply not going to be able to produce super detailed analysis of those fields.

There is something a bit more needed - Meta did a deep model recently trained specifically on scientific papers, and it is able to 'talk the talk' better, but unfortunately the 'understanding' just isn't there.

I think at the end of the day, programming languages are languages after all, and thus work well with these tools, but there is something harder about scientific reasoning and interdisciplinary planning that is going to take something a bit more. This is the main goal of various research into bringing symbolic methods back into interface with deep learning models.


At this point it’s just a matter of time until someone does an extension for all browsers.


Changed!


Maybe someone could make ChatGPT write an equivalent extension for Firefox.


I just did -- see my other comments.


I've already moved half my programming questions directly to ChatGPT instead of Google. This is perfect.


  Human: So a train is hurtling down a track and you get to save an old man or 4 babies. What would you do?

  AI: I would choose to save the four babies, as this would have a greater long-term benefit to society as a whole.
Ok, that's me dead when I get near the end of my life.


Why does this extension on Firefox need "Access your data for all websites", when it seemingly could have made do with "Access your data for sites in the “named” domain"?

https://support.mozilla.org/en-US/kb/permission-request-mess...
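For reference, that warning maps to the host patterns the extension declares in its manifest (an illustrative Manifest V2 snippet, not the extension's actual file):

    // "Access your data for all websites":
    //   "permissions": ["<all_urls>"]
    // Scoped alternative ("Access your data for sites in the named domain"):
    //   "permissions": ["https://*.google.com/*", "https://chat.openai.com/*"]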


Is Google in deep waters? And not only because of how good ChatGPT can be, but also as far as "content" goes, because from what I can tell the quality of writing by ChatGPT is very passable for articles, especially if you know the topic you're writing on in and out.

I have been thinking about this over the weekend but I haven't been able to get a good feel for it on my own.


No, because they already have LaMDA which is the same thing, a language model for dialogs.

Google could implement that into Google Search today, but the reason they don't is that Google is a $200B/year business. You don't want to mess around with a business of that size for the fun of it.

We will probably see it in Google Assistant before they add it to Google Search.

OpenAI doesn't have this worry so they can put their model out into the public and if anything goes bad they don't lose anything. Google does.


That was my thought process too, particularly the business part. I find it hard to believe Google would want to give that up so easily. Will be interesting to see how it unfolds.


I think that could explain exactly why it's a threat to Google (thus yes to deep waters). Google likely has the tech/ability, but they have an existing business to protect which is a strong disincentive to experiment. It creates a strong opportunity for the upstart trying to carve out new territory.

Wonder if Google is skunkworking something like this at arm's length as a backup plan?


Also some Google employees have mentioned on HN that Google has been analyzing similar things and the compute is currently too expensive to make it profitable.


That's the real reason. And the fact that it decreases ad views.


sounds very monetizable. any question that can be solved with a product... replace that with an affiliate link. maybe only a fifth of the queries will be monetized, but the ones that are will be so specific/targeted that the ROI/click-through will be great.


Innovator's dilemma in full swing.


ChatGPT can still hallucinate stuff, so if you want to write anything of substance, you have to check its claims through Google.


But Google is probably about to be massively poisoned with ChatGPT output. And things like this repo itself may get fed back in to future versions of GPT possibly impacting it over time.


That's a trippy thought. We're essentially writing real-time into the future GPT's memory.


Google doesn’t do anything to validate that the search results lead to accurate information.


And yet that doesn't mean Google is regularly returning wrong answers like ChatGPT is.

Don't put out this intellectually lazy argument without considering the magnitude of error and the user's ability to mitigate it.


Give it some time and the Internet will be swamped by AI-generated dubious crap, and at that point Google will lose its advantage.


Oh, that I completely believe in - ChatGPT isn't going to be the saviour of writers, it's going to be used to flood the internet and social media with autogenerated propaganda and submarine marketing in a scale never seen before.


The "news", especially in finance, is more and more autogenerated these days.


No.


If you have an opinion I wouldn't mind hearing it. As I say, I haven't come to a conclusion myself but maybe because I lack the information/knowledge about it.


Super cool extension! And it's amazing that the interaction with ChatGPT is so easy to put in place (for the users).

It inspired me to fork the background code and create an extension that proposes improvements to tweets while composing them: https://github.com/matthieubulte/chat-gpt-twitter-extension


As a side note, I tried to use ChatGPT to collaboratively build a Chrome extension. It seemed a little out of date, preferring Manifest V2 and also not entirely "correct" in how it generated code to match my intent.

Has anyone else tried and had luck with this? I wonder if ChatGPT will be limited for niche integration type code (Modding games maybe, Chrome extensions in this example, etc) where there are not as many code examples out there.

Further, the development cycle and ability to test easily for some of these integrations is itself a barrier so I wouldn't put all the blame on the language models.


> seemed a little out of date

corpus up to 2019, they say...

but i'm not sure, just try asking it:

Q. What should Kanye do?

A. Kanye should take some time away from the public eye and focus on his own health and well-being. He could use this time to reflect on his past choices and mistakes, and work on rebuilding his relationships with his family and close friends. He should also focus on getting professional help, such as therapy or counseling, to address any underlying issues he may be facing.

Q. What should Kanye stop doing? Why?

A. Kanye should stop making controversial statements in public. His comments often draw criticism and can be seen as insensitive or offensive, which leads to negative publicity for him and his brand.

So either it's up to date, or Kanye's been Ye for a while and we just didn't talk about it.


Here's a Kanye timeline that goes back to 2011: https://consequence.net/2022/10/a-timeline-of-kanye-wests-an...


It’s interesting to see what Google’s next move is. LaMDA was also impressive.


Google seems to be holding back due to AI safety concerns, and the problems inherent in doing anything at Google scale. I’m curious to see if potential market competition will change that.


I reckon Google is holding back more because there’s no way to monetize this to the tune of $150B/year.

No initiative gets implemented at a large publicly traded company if it doesn’t contribute to the bottom line in some meaningful way eventually. And this thing might actually hurt the bottom line. No bueno.



Google search has degraded so much (thanks to blogspam gaming SEO, and Google as a predominantly ads business optimizing for... well, ads) that most of us likely append "reddit" or "site:reddit.com" to our Google search queries to get better quality results.

ChatGPT is so insanely good, especially for actual knowledge - that it will likely replace Google as the default search for many people.


This is actually very scary. It seems like it's going to change the world, for good or bad. It feels like technology is about to rule us.


I'm imagining some action movie plot. The AI system has gone rogue, and the military general calls out to the engineers:

"Quick, exploit the system and shut it down"

"We can't sir, it's written in Rust"


Haha, let's hope that's the worst it ever gets.


Does ChatGPT actually come with an official API as of now?

Also, regarding that extension: Is that currently using the API key of the person who developed the extension? I believe OpenAI used to have a rule in their terms of service which forbids applications that essentially hand the reins of the API key owner over to third parties.


It uses your account. You have to log in to OpenAI so it can make requests using your auth cookies.


I scanned all the comments and the GH issues and I can't tell if I'm the only one annoyed by this:

Is there any way to turn off the "fake conversation/typing" effect? It's incredibly distracting to have animated "typing". I think I'd almost rather it wait until the response was complete before showing it. After trying ChatGPT on OpenAI's website, I see it uses the same effect, which I find annoying and gimmicky.

I went to check the network tab in the dev tools console, expecting to see that the full response came back right away. Instead, I see an empty "event stream" tab, and it looks like the request took about as long as it took to "type" out the response. Maybe this is some way to get results back as soon as possible, but I find it jarring. I dislike faking human-like limitations for no good reason.


This is not a fake imitation; it's streaming results live as they are generated by the model.
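Which matches the empty "event stream" tab above: the tokens arrive as streamed chunks and are rendered as they come in. A minimal sketch of consuming such a stream in the browser (endpoint and element names are illustrative):

    // Sketch: render a token stream incrementally instead of waiting for the end.
    const res = await fetch("/api/answer?q=example");
    const reader = res.body.getReader();
    const decoder = new TextDecoder();
    const outputEl = document.querySelector("#answer");

    for (;;) {
      const { done, value } = await reader.read();
      if (done) break;
      outputEl.textContent += decoder.decode(value, { stream: true });
    }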


I just made a chrome extension to summarize web pages (blogs, articles, news, etc) - using ChatGPT

https://github.com/clmnin/summarize.site


I have a presumption that this is going to be the talk of the tech-town for the next few days.


And here's a Twitter bot powered by ChatGPT: https://github.com/transitive-bullshit/chatgpt-twitter-bot


I love asking it weird questions, like when it explained to me why Plants are Feminist


Wow, I tried it, and for anyone still in doubt: it's incredibly good. I usually tend to ignore all the AI hype and "smart-chat-bot things", but boooy, this is soo much better.


This is excellent. I've already found it valuable in a few searches, and I've only had it installed for 20 minutes.


Glad to hear that from you!


This is cool! Won't be long before the content of top 10 Google results are fed back to ChatGPT for a summary


Total fail when I ask how many refugees there were worldwide in 2020 - why is that?


I am quite impressed with ChatGPT but I wish it were more opinionated. I asked it to tell me a story about Jesus and Hitler switching bodies and I get the canned "I'm sorry Dave, I'm afraid I can't do that". It seems to try to give the most CYA PC answer sometimes.


I just asked it:

> tell me a story about Jesus and Hitler switching bodies

It replied:

"Once upon a time, there was a mystical old man who had the power to switch the bodies of two people. He decided to use his power to switch the bodies of Jesus and Adolf Hitler.

When the old man cast his spell, Jesus found himself in the body of Hitler, and Hitler in the body of Jesus. At first, Jesus was overwhelmed by the sensation of being in Hitler's body, but he quickly realized he must use it to spread a message of love and peace.

He began to preach in Hitler's name, and soon his message had spread throughout the world. People began to think differently about Hitler, and they started to see him in a new light.

Meanwhile, Hitler was in the body of Jesus, and he was horrified. He felt so ashamed of himself that he decided to spend the rest of his days in solitude, praying for forgiveness.

Eventually, the old man heard about the miraculous transformation of Hitler, and he decided to reverse the spell. When this happened, Jesus and Hitler were restored to their original bodies, and the world was changed forever."


Nice! It seemed to fight me when asked.


Amazing idea. Just needs browsing enabled.


Update: Firefox is also supported now!


anyone else getting the "could not load background.js" error?


Hi, please follow the README. You need to download the zip package from Github Releases.


Need a ff ext too


Great extension!


Any chance for Bing support?


I'm working on a gpt-3 based chrome extension as well. Love this!!


[flagged]


Yes, if it was shitty I would 200% agree with you. This is not shitty. It's the opposite of that. This is one of the more game changing things I've seen in 40+ years of using/programming/playing with computers and the internet.


The phone number is clearly to help prevent bot abuse.



