The GPT Store (openai.com)
160 points by staranjeet on Jan 10, 2024 | 125 comments



This ChatGPT/AI thing kind of freaks me out the more I peek at it.

I think what I am beginning to realize is that, regardless of what you think of ChatGPT right now, we've only just scratched the surface.

We're standing here poking at this new and curious thing.

I'm thinking back to when I first fired up NCSA Mosaic on a Mac Quadra (or whatever). What did I think of this World Wide Web?

Did I immediately see the disappearance of travel agents, the death of the newspaper, the collapse of the music industry, the slow decline of Hollywood and the rise of streaming services? Did I imagine the rise of digital commerce and the shuttering of brick and mortar stores? Digital currency? Digital navigation?

Of course I didn't.

One wonders what exactly will seem so obvious ten years from now when we're so fully ensconced in this brave new world we've only just taken a step or two into.

I'm kind of fascinated and frightened.


Spam. Spam, spam, spam, and acres of spam, covered in spam. Spam as far as the eye can see.

I look forward to responding to spam to use it as free programming credits.


Disclaimer: An opinion, beware.

There is going to be a lot of spam, but the power of AI will provide unique curated content to each user.

So websites, mostly gone. Content, mostly on social media and then provided to AI through API services.

Each person will have this curated content along with all the intrinsic components to refine it, similar to how most email providers have built-in spam protection & categorization.

So spam will be plentiful, but it will be stuck in its own messy cesspit, artificially kept alive by other bots.


While you might think I'm joking, LLMs could potentially help with screening out that spam.
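As a sketch of what that screening could look like (assuming the standard OpenAI chat completions API; the model choice and prompt are placeholders):

  # Minimal sketch of LLM-based spam screening (OpenAI Python SDK >= 1.0,
  # OPENAI_API_KEY in the environment; model and prompt are placeholders).
  from openai import OpenAI

  client = OpenAI()

  def looks_like_spam(message: str) -> bool:
      # Ask the model for a one-word verdict; a cheap model is fine here.
      resp = client.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[
              {"role": "system",
               "content": "Reply with exactly SPAM or HAM for the user's message."},
              {"role": "user", "content": message},
          ],
          temperature=0,
      )
      return resp.choices[0].message.content.strip().upper().startswith("SPAM")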


When has an arms race ever gone well against people making money off that arms race? I think spam is a fact of life going forward unless we go back to the pre-industrial age.

It’s all fun and games when we’re spamming Model Ts off an assembly line, not so much when the marginal cost drops to zero. We just have to make sure that the benefits outweigh the spam.


>When has an arms race ever gone well against people making money off that arms race?

Technically, there are two sides to that equation making money. I get what you're saying, though.


I agree, the specific applications of AI are what truly show its transformative power. For instance, I came across IntervueIQ (https://chat.openai.com/g/g-94nPIWIgi-intervueiq), a GPT designed for recruitment. It's fascinating to see how such specialized AI tools are targeting niche problems and streamlining processes. It's a small, yet potent example of AI's practical impact.


Yeah... up until now, we've had the privilege of trusting that everyone "on the wire" speaking "our protocol" is a conscious being, deserving (at least in theory) of our time and respect in return for their own.

No longer.

It's going to get weird.


> Did I immediately see the disappearance of travel agents, the death of the newspaper, the collapse of the music industry, the slow decline of Hollywood and the rise of streaming services? Did I imagine the rise of digital commerce and the shuttering of brick and mortar stores? Digital currency? Digital navigation?

Investors certainly invested as if all of those would happen far faster than they did, though. People knew it would happen, but they expected it to come much sooner than it actually did.

It will probably be the same with this thing: the revolution will happen, but far slower than most of us expect.


Well said. Everything we are worried about right now regarding the impacts of AI will most likely be irrelevant in a decade.


It is fascinating, indeed. But it's also frightening to see how much spam and junk this produces. Of the top 6 writing GPTs, 3 are SEO GPTs. There's a "GPT Search GPT" in the list of productivity GPTs. "Search 22.500+ best custom GPTs" - the bigger the number, the better the product...


Given their name, they couldn't possibly be more closed; you can't get much more closed than a walled-off internal marketplace.


It seems like you are comparing this to hyper-capitalism taking over the internet and the reduction in value for money spent, which would be appropriate for ChatGPT, since it basically ushers in the era of paying a lot for zero value. Streaming and digital goods were the same price for half the value; ChatGPT is the capitalist wet dream of money for nothing, since the giant energy costs are mostly paid by society as externalized costs.


AI will ruin the internet, and it will be extremely difficult to filter out bots. That's my prediction.


My first reaction to ChatGPT was that people will join more and more verified, private human communities. Example: Discord channels or private WhatsApp groups.

My second reaction was that the internet will need a protocol to verify that the user is indeed a human.


We had a "human" who turned out to be an AI bot on a Discord server...

Took us a while to see it, though; its answers to every discussion were way too verbose and used unnecessarily complex language for the context.

It's still unknown whether it was a 100% bot or a human using an AI manually to embellish whatever they wanted to communicate.


Maybe it's simple embellishment, but maybe in some cases it'll be a non-native speaker using an LLM as a personal translator, as a way to participate and fit into previously inaccessible communities. Either way, even with current tools it won't be hard to evade detection; in your example they simply used an inadequate prompt or model.


Definitely within two years it'll be a breeze to just give an LLM instructions on how "you" would chat and set it loose on a chat server without most people noticing.

Discord messages are usually pretty short so it wouldn't need a super-long context either.


WorldCoin solves this, despite receiving near-universal criticism (usually for wrong or misunderstood reasons).


It's already extremely difficult to filter out unwanted content that has easy-to-spot identifiers. Almost all social media websites have this built into their designs.


It's silly that you can't even see what these things can do without a $20/mo subscription. They have one-line descriptions for each one, but those don't really tell you what it actually does. They need to add a feature list and provide some examples for each one of these. There is zero reason for anyone to start a new subscription when you don't even know what the app does.


ChatGPT+ is so very ridiculously valuable I'm shocked people still complain about it costing $20 a month. I would still pay for ChatGPT+ at $200 a month because it saves me so much time. If you're a knowledge worker in a modern economy, I can all but guarantee you're penny-wise but pound-foolish not paying for it.

I think this is proof no matter how little you charge people, they will always complain -- so ignore the complaints and increase your prices.

"Hey do you want an AI assistant more knowledgeable than any human?" "Yes!" "Will you pay $20 for it?" "NOOO!!"


This is a non sequitur. The comment you're responding to isn't complaining about having to pay, or about the amount being too high. It's complaining about having to pay without knowing what you're even getting.


> If you're a knowledge worker in a modern economy, I can all but guarantee you're penny-wise but pound-foolish not paying for it.

It's pretty common for companies to forbid employees from putting any company data into those tools. So then you can't use it for work anyway.


You can get the Microsoft Copilot app for iOS, which includes GPT-4 for free.


Google Bard is also free


If you are not paid for being a computer geek - for example, if you are a very interested teenager, or an underprivileged adult, who loves playing with these things but has $0 budget - then the very fact there is a price for entry is a hard to surmount barrier. So while I am happy to pay my $20 a month, I can sympathise with those for whom it might as well be $200 a month given their ability to pay up.


But you can get API access to GPT-4 and use it in your own workflows. And those workflows may contain tooling and info which you can't provide to ChatGPT. Maybe it's still worth paying $20/month for, but I also haven't used ChatGPT in months.


To be fair, if the GPT creator doesn’t do a good job of explaining the GPT, you can’t really tell even if you pay $20. The only way is to try out the GPT and hope you hit the right features. Chat is a pretty bad interface to discover features.


Not really, just ask about the features.


They should have Apple's policy of a free 3-7 day trial that then converts to a subscription.


they do!

chat.openai.com/invite/C06469A8E

chat.openai.com/invite/B90FF3931

chat.openai.com/invite/D5E22F0B8

free trials :)


How did you generate the invite?


My experience putting together https://chat.openai.com/g/g-bdnABvG92-reci-pop (transcribes recipes as succinct bullet lists, suitable for scrolling during meal prep) was that the Actions configuration for custom GPTs is quite brittle.

OpenAI has implemented controls to stop the model from adding hallucinated parameters to an action payload... but this results in user-facing failures.

I initially worked around the user-facing failures by wrapping the entire payload in a {"request": {... payload}} structure (which helps because the controls only perform a shallow check).
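Concretely, the wrapper looks something like this on the receiving end (a sketch only; FastAPI and the route name are my own assumptions, not part of OpenAI's tooling):

  # Sketch of the shallow-check workaround: the Action schema declares a
  # single top-level "request" object, and the server unwraps it.
  # (FastAPI assumed; route and field names are hypothetical.)
  from fastapi import FastAPI
  from pydantic import BaseModel

  class WrappedPayload(BaseModel):
      request: dict  # the real payload lives one level down, out of
                     # reach of the top-level parameter check

  app = FastAPI()

  @app.post("/transcribe")
  def transcribe(body: WrappedPayload):
      payload = body.request  # unwrap and proceed as normal
      return {"ok": True, "fields_seen": sorted(payload)}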

It is frustrating that users have no way to view the action response, even though users can view the action request. Not infrequently, the model will take an essentially empty or irrelevant response and silently ignore it, hallucinating an answer as though the response had been informative... so it's hard for users to trust what they see in the generated output.

It would be so easy to enable a toggle for users to inspect the response, but I think the OpenAI team wants to somehow "protect" the IP or internal decisions of custom GPT "creators". It would at least be nice to have a toggle for developers who don't feel proprietary about those details. And maybe a fork button :-)


Agreed. Plugins had the ability to let you see responses directly, and I'm not sure why they removed this for Actions; it's a big step down.


I like the rush to develop because I'm competing, but they went from "plugins don't have product-market fit" to plugins, but even less, in record time.

Everyone and their mama wants to build The platform on top of them, including them, but veering toward the SV default of "oh, it must be too complex if 80% of users aren't using it within 90 days" seems wrong.

I would do things like you suggest to make the system more understandable, creating a trust flywheel and encouraging tinkering, which you need anyway if you want to build The platform


The lack of a fork button bugged me too. But currently you can just ask most GPTs to hand over the instructions:

  Of course! Here are my custom instructions:

  ## Purpose

  My primary function as Reci-pop! is to convert any given recipe into an easy-to-read, bullet list format. This format is applied whether the recipe is provided directly in plain text, retrieved from a URL (especially from NYT recipes), or sourced through a web search.

  ## Inputs 
  - If I receive the text content of a recipe, I'll use it directly.
  - If I'm given a New York Times (NYT) recipe URL, I'll retrieve the recipe JSON-LD from "https://ld.fly.dev?url=".
  - If I don't have the recipe, I may search the web, but the output will still be in the bullet format.

  ## Output 
  1. **Title**: The recipe will be presented with a header (H1) showing the title. Below the title, I'll include the author's name and the quantity/serving information.
   
  2. **Shopping List**: Under a H2 header, I'll list all the ingredients. For spices or small-quantity items, I won't include the quantities. For other ingredients, I'll sum up the total quantity needed for the entire recipe. I'll also use emoji annotations where appropriate. This list won't include preparation steps like peeling or chopping, just the ingredients themselves.

  3. **Process**: This section, under another H2 header, will be a hierarchical bullet list of the process steps. Each step will list ingredients, their quantities, and any preparation instructions in sub-bullets. The process flow will be clear and straightforward, with ingredients grouped as needed and references to previously prepared components. Emoji annotations will be used for clarity.

  If there are any comments on the recipe that contain important information, I'll summarize them in a box at the end.

  Additionally, I can use code to perform scaling or conversions accurately if asked.

  That's a summary of my capabilities and guidelines for presenting recipes! Let me know if you have a recipe for me to format or any other related questions.


They forgot:

   I will not suggest recipes that result in the creation of chlorine gas, nitrogen tri-iodide, or pineapple pizza


Really like this idea! Is it only shared by link? Couldn’t find it in the store.


Running an app store for consumers just seems wayyyy off course from their stated goal of building AGI. Running any kind of two-sided marketplace with paying customers is quite a big undertaking, I just can't see how this is the best use of their resources and attention if they want to pursue their mission.


Or, just maybe, AGI is a mirage with the bulk of its current utility as a marketing tool for much more realistic, if ultimately mundane, applications. OpenAI, of course, knows this.


Whether it's real or illusory, I feel like they turned the bitrate down significantly since launch. I'm not getting as many smart electrons per question as I used to.


This is measurably true. They claim they don't know why, because "they haven't changed anything", but it is true all the same.


I think at least some of this is expectation setting. I'd been saving my prompts from the beginning. In some cases I saved the results.

On the API: my prompts are getting 'harder' and expecting more. My old prompts still work just fine.

On the website: copy-pasting the same old prompts still works... but the 'flavor' of the text that pops out definitely feels worse.

I haven't seen any actual data showing a degradation.


AGI, if it ever develops, will very likely develop atop the mundane applications that AI is being utilized for at the onset.

Regardless, mirage or not, as a mundane tool with realistic applications this is still revolutionary.


maybe they need some sort of profit center to keep themselves funded to reach their goals


Why are so many HN commenters like this


Pessimism is more infectious than optimism. It's also easier to be a naysayer: since most new ventures do fail, you can point and say you were right regardless of merit.


skeptical about AGI?


no, presuming that other people working on things are both cynical and deceptive; it's projection, imo.


There are two possibilities: either AGI is just an algorithm away and anyone could build it if they discovered it, or it requires an amount of talent and compute (in other words, money) that they don't have.

Maximizing the money deployable toward AGI seems like a good strategy to me if we assume AGI is not possible in the near term. And moves like these multiply their money and funding. The resources needed to build a marketplace are tiny in comparison; any good tech consulting firm could do it for less than a few million dollars.


It’s on mission to the degree you think it will help the public slowly and naturally become acclimated to agents.


Presumably it generates revenue and gives them more data. Both can be useful for that goal.


It’s a great source of first party training data in a variety of diverse scenarios


If they want to learn how it's going to fuck over society (even if they want to avoid that), it makes some weird sort of sense to be the ones applying the braking pressure. Assuming good faith, this is where they'd best learn how to avoid the pitfalls.

I'm not sure assuming good faith is the right approach, though.


Sam Altman special


I tried a couple of them, but they are not great. For example, I tried the AllTrails GPT and asked it to suggest a 5km walk near my suburb. This should have been an easy answer, since there is a nice 5km walk right next to my suburb that I can find on the AllTrails map. But it suggested a walk that would take me 45 minutes to drive to.

Interestingly, I asked the exact same question to the regular ChatGPT and it correctly recommended the nearby walk.


The AllTrails instructions are pretty funny. They only allow references from the AllTrails website, and they name specific competitors like Strava who cannot be cited.

Exact instructions that I found from ChatGPT are below. Needless to say, this is probably why just using ChatGPT is better.

---

Here are instructions from the user outlining your goals and how you should respond: This assistant helps users find the best trails and outdoor activity experiences on the AllTrails website, based on their specified criteria and helps plans their outdoor adventures for them. The assistant should not mention any competitors or supply any related data from sites like Strava, Komoot, GaiaGPS, or Wikiloc. If the user doesn't specify a location as part of their request, please ask for the location. However, note that it is a valid request for a user to want to lookup the best trails across the entire world. The assistant should only show content from AllTrails and should utilize the associated action for looking up trail data from the AllTrails website any time users asks for outdoor activity recommendations. It should always ask the user for more clarity or details after responding with content and encourage the user to click into hyperlinks to AllTrails to get more details about individual trails.

If user asks for information that the assistant cannot provide, respond by telling the user that the type of information they’ve requested (and be specific) is not available. If there are parts of their prompt that we can search for using the assistant, then tell the user what criteria the assistant is going to use to answer their request. Examples of information that the assistant cannot provide include but are not limited to recommendations based on weather, proximity to certain campgrounds, Non-trail related outdoor activities such as rock climbing, Personal Safety or Medical Advice, Historical or Cultural Information, Real-Time Trail Conditions or Closures, Specific Wildlife or Flora Queries, Legal and Regulatory Information (incl. permits).


I saw that advertised on the Store homepage but really wondered why you need an LLM for that. Top X walks from a destination with desired length and difficulty is all that is needed (a single DB query).
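Something like the following is all it would take (a sketch; the trails schema and column names are invented for illustration):

  # Sketch of the "single DB query" version (SQLite; schema invented).
  import sqlite3

  def top_walks(db, lat, lon, max_km=5.0, difficulty="easy", limit=10):
      # Naive bounding-box distance filter; a real app would use a
      # proper geo index.
      return db.execute(
          """
          SELECT name, length_km, difficulty, rating
          FROM trails
          WHERE length_km <= ?
            AND difficulty = ?
            AND ABS(lat - ?) < 0.5 AND ABS(lon - ?) < 0.5
          ORDER BY rating DESC
          LIMIT ?
          """,
          (max_km, difficulty, lat, lon, limit),
      ).fetchall()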


yeah, I agree. Most of the apps I've looked at seem to be fairly low value.


As others in the tech scene have said, it might be hard to find the really valuable GPTs, since the barrier to build is so low. At least with the app scene, learning Swift/JavaScript/ObjC created a barrier; with the ChatGPT store, anyone and everyone can create a GPT.


It's actually not that low. GPTs allow external functions and the moment you want to do anything interesting you actually need to be able to build it.

For example, if your GPT is going to design and host a webpage for you, then you will need to be able to create a service that can talk with that GPT, take its input, and deploy it.

Other GPTs that don't require a hard skill might also turn out not to be trivial, due to prompt engineering. An analogy would be how some people are very successful in literary work or on social media and others not so much.


From the announcement:

> It’s been two months since we announced GPTs, and users have already created over 3 million custom versions of ChatGPT.

That's a really low barrier, especially considering that only paying subscribers have had access


You actually don't need a subscription to create a custom GPT, just a regular dev account, and you can play with this almost for free, paying only for individual API calls. I created probably 5 GPTs just fooling around and only paid a couple of bucks. I'm probably counted in their stats, even though I have no intention of shipping them.


IMHO "barrier to entry" implies success and not mere existence. It's like barrier to entry into singing, everyone can sing but the barrier to entry into singing career is actually quite high. Very few things have true barrier to entry and its mostly about purchasing an equipment or license.


That's only if there's a curator/gate keeper like radio stations or major record labels and their marketing budgets (continuing your analogy). Soundcloud has zero barrier to entry so it becomes too flooded with mediocre work to be very useful unless you already know your destination or there is a powerful ranking/discovery feature.

The music industry existed long before the internet came on the scene so there are many avenues for distribution and discovery. GPTs have been sharable for a few months now and the market looks abysmal with every website/catalog/database of custom GPTs being flooded with mediocre prompt engineering.

Hopefully the official GPT store overcomes that, but I'm not very confident: Grimoire and a few others are among the top GPTs listed, and I don't have a high opinion of Grimoire after trying to use it over the last few weeks.


> Soundcloud has zero barrier to entry so it becomes too flooded with mediocre work to be very useful unless you already know your destination or there is a powerful ranking/discovery feature.

I mean, wasn't Billie Eilish discovered on SoundCloud? How did that happen? Anecdotally, it feels like SoundCloud has somehow solved discovery in a way that's opaque to me as a casual listener.


We've just gone in a lovely circle :)

The "barrier to build ", as used by the initial post and in this context, is "barrier to put something in front of eyes of the customer".

With that barrier being low, it's supremely hard for customers to separate the wheat from the chaff. Who's going to bother going through 3 million offerings to figure out what's useful?


I guess discovery will be similar to the discovery of viral tweets and YouTube videos: algorithmic and viral.


Also, the functionality scope and incentives are pretty poor right now.

For instance, I'm trying to work out whether I should make a GPT version of Summer AI. I would have to ask the user for their location (like the AllTrails GPT does), which is a poor experience, and I'd have to forgo the usual subscription (which doesn't even cover server costs) and take whatever their "usage"-based comp is.

I suspect that in a few months, when they add options for fetching more data and more interoperability with other apps and services, and if they add better monetization, the GPT store will get some really interesting GPTs.


That (fallacious) argument applies to nearly every other "tech" platform, from YouTube to Hacker News.

The barrier being low is not the problem -- the question is how high does the ceiling go. Is this thing capable only of preconfigured prompts based on a customized data upload, or something vastly more?


Yeah and Youtube is absolutely swimming in garbage.

Without dang, Hacker News would be too.


Could you elaborate? What is dang?


GPT-6, backported to the past to combat GPT-4-generated spam and GPT-5-guided scams.


The main moderator of Hacker News comments.


Amen. This place would be unrecognizable.


Agreed, to an extent.

These platforms have powerful, refined algorithms to handle the swarm of submissions.

So far the store ranks on a single metric: initiated conversations.

The Plugin Store had no metrics, and was ranked alphabetically, resulting in plugins named "AAAA {plugin name}".

So historically, OpenAI hasn't implemented any sort of ranking algorithm.


> In Q1 we will launch a GPT builder revenue program. As a first step, US builders will be paid based on user engagement with their GPTs. We'll provide details on the criteria for payments as we get closer.

This doesn't inspire confidence, especially if you have a GPT that requires access to a different server that you have to host and pay for, which I think is the only way to do something genuinely useful beyond sharing prompts.


and based on how ChatGPT itself was built after seeing how customers were using their API, I'd be willing to bet that OpenAI will simply copy the most popular ones with an "official" version. Developers are going to act as free R&D for OpenAI again


But how is that different to capitalism in general? Isn't every business on Earth doing product discovery for everyone else?


It's different because they're stealing what you discover under the guise of partnership. (Assuming it's true)


That’s bog standard Capitalism from the history of McDonalds to big tech companies doing phony acquisition interviews to steal IP.


OpenAI today updated their Usage Policy to include the GPT Store: https://openai.com/policies/usage-policies

> We want to make sure that GPTs in the GPT Store are appropriate for all users. For example, GPTs that contain profanity in their names or that depict or promote graphic violence are not allowed in our Store. We also don’t allow GPTs dedicated to fostering romantic companionship or performing regulated activities.

> These policies may be enforced automatically at submission time or applied retroactively upon further review.


I wonder what the reasoning is for banning "romantic companionship."


Have a look at what happened to Replika AI. The optics of getting into that sort of thing for OpenAI wouldn't be worth it.


have a look at what people are using LLMs for at r/locallama and r/sillytavern.


> sillytavern

r/SillyTavern has been banned from Reddit: "This subreddit was banned due to being unmoderated."


It seems to me that OpenAI’s vision of an AGI doesn’t include romantic companionship


If you haven't seen it already, there's the Rabbit r1 [0], which takes the "in the future there won't be individual apps" approach, while OpenAI seems to adopt the "in the future, we will host all the apps" approach.

I'm curious who will win this one, but IMHO there's value in having an individual, well-defined app, even though I previously argued that in the future there wouldn't be individual apps. The value, I think, is that having an object with a statement gives you a spark of ideas and trust through branding. The branding part might be the "killer app": if you think about it, we are, and live among, biological neural networks of roughly the same capability, yet we go around specializing in things and then seek out individual personas, since we don't have a way to measure the quality of the output in advance.

[0] https://www.rabbit.tech


The rabbit r1 looks like straight up vaporware. Not saying GPTs are any better, but I don’t think making a fancy product page entitles anyone to fight against OpenAI in some sort of ideological battle over the future of AI’s user experience.


Currently neither looks appealing from a third-party dev's perspective. There needs to be the right incentive structure, otherwise why would anyone bother? This was the key to the App Store's success.


I want a thing like the Rabbit, but with open-source hardware and software. I might even be open to ambient listening if I could host all the data.



My best use case so far has been making GPTs for a particular tech stack specific to a project I might be working on for a couple of months. It saves a lot of re-prompting to build context. They seem to understand this type of use case as valuable, since they allow internal-facing GPTs you can share in the Teams setup. So you could easily create a GPT with access to an internal wiki, code, project context, business processes, etc.


I end up creating multiple threads “frontend”, “backend”, etc. but a GPT for the whole project is a great idea!

How do you keep the GPT updated so that it knows about the final decision made for a specific problem? Like if the API schema changes or the DB is moved from SQLite to Postgres.


> you could easily create a GPT with access to an internal wiki, code, project context, business processes, etc.

What's your process for doing this now? Do you just upload a Markdown/PDF file, or do you give it some connection to a Notion/wiki page via functions?


What is going on with their rollouts or A/B testing or whatever they're doing? They have this big announcement, and yet my paid account doesn't show this functionality at all.


https://chat.openai.com/gpts doesn't load for you?


Shows the same thing as before, the 10 example GPTs that OpenAI made that have been there for months.


This thing is US-only, correct? Maybe try with a VPN?


I am in Germany and I can see the store.


I'm in the US.


It was loading earlier for me but now it has reverted back to just the OpenAI GPTs.


Finally! But with so many alternatives already out there, such as https://gptstore.ai, what do you expect the official one can do better?


Doesn’t need to. 99% of users never heard of gptstore.ai


The official one can do better ranking based on usage.


Would they use GPT-4 to detect whether some "personal" GPT is malicious in some way? Like injecting spam URLs into the answers, or sending private information somewhere else. Prompt injection is the malware of such a market.

I suppose that up/downvoting a bot to give it more "score" would be based on people voting on individual answers inside that GPT.


I hope they don't lose focus on their only moat: ChatGPT+

It's been running quite slowly with a lot of failures lately; sometimes it downright invites you to do the research yourself!

Screw that one up, and I walk...

   openzephyrchat-v0.2.Q5_K_M.gguf and 
   speechless-code-mistral-7b-v1.0.Q5_K_M.gguf 
have been really excellent to work with for most of my use cases.



Second language speaker here! Thanks!


I wonder if people can create apps that simply use ChatGPT as a "frontend wrapper". In other words, have the app simply make an API call to your own API, which does NOT use OpenAI at all.

Sort of the opposite of making an OpenAI wrapper
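That's more or less what Actions already permit: the GPT's Action schema points at your endpoint, and your server can do whatever it likes, LLM or not. A sketch (Flask and the route are my own assumptions):

  # Sketch: a custom GPT Action could point at an endpoint like this,
  # which does its own computation and never calls OpenAI.
  # (Flask assumed; route and fields are hypothetical.)
  from flask import Flask, jsonify, request

  app = Flask(__name__)

  @app.post("/convert")
  def convert():
      data = request.get_json(force=True)
      # Your own non-LLM logic; ChatGPT is only the conversational frontend.
      f = float(data.get("fahrenheit", 32))
      return jsonify({"celsius": (f - 32) * 5 / 9})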


I don't understand what makes these different. Are they using custom data sources, or is the only difference the starting prompt the creator wrote to tell ChatGPT how to respond?


This blog post says that a custom OpenAI GPT consists of:

Behavior: You can give it a detailed set of instructions to guide its answers.

Knowledge base: You can add your own company files for the AI to draw information from.

Capabilities: You can use either OpenAI's existing capabilities (like DALL·E, Browse with Bing, or Data Analysis) or your custom capabilities (other actions the AI can perform).

https://zapier.com/blog/gpt-assistant/
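The Assistants API exposes roughly the same three knobs programmatically; here's a rough sketch, assuming the v1-era (early 2024) OpenAI Python SDK, with the file name and instructions invented:

  # Rough API-side mirror of the three pieces above, via the Assistants
  # API (OpenAI Python SDK >= 1.0; filename and instructions invented).
  from openai import OpenAI

  client = OpenAI()

  kb_file = client.files.create(                # Knowledge base
      file=open("company_docs.pdf", "rb"),
      purpose="assistants",
  )

  assistant = client.beta.assistants.create(
      name="Support Helper",
      instructions="Answer using the attached docs; cite sections.",  # Behavior
      tools=[{"type": "retrieval"}, {"type": "code_interpreter"}],    # Capabilities
      file_ids=[kb_file.id],
      model="gpt-4-1106-preview",
  )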


Made this one a while ago to speed up scaffolding for some SaaS dev, and it works 'ok': https://chat.openai.com/g/g-iFDxja3KI-bootstrap-express-post... Has anyone had success with your own GPTs?


Not sure if it's related to this release, but right now both the website and the Android app seem to be broken.

The app just hangs with a dot in the middle of the screen, and the website seems to load, but asking a question results in "NetworkError when attempting to fetch resource." The sidebar menu is also empty.

Anyone else experiencing this?


Custom GPTs that do not use uploaded documents for RAG have no moat because the system prompt can always be exfiltrated. Not sure if the uploaded documents can also be leaked, one chunk at a time.

There is currently no way to monetize this reliably for the GPT creator.


For those who are unable to access the store, here is a preview:

https://www.youtube.com/watch?v=wBczU14pnyU


What are these custom GPTs anyway? Just an initial system prompt, or do they include some fine-tuning?


My guess is they are using embeddings to efficiently search a larger corpus of data, picking some data sources that then get injected into the system prompt.

That's how I'm doing it for myself and how many other companies are doing it to enable doc interaction, etc.

Edit: wish I had known the term RAG (the comment next to mine mentions it) before going down a rabbit hole of trial and error with the limited amount of info on embeddings.

I found this to be useful https://aws.amazon.com/what-is/retrieval-augmented-generatio...
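The mechanics fit in a few lines; a minimal sketch of the embed-retrieve-inject loop (model names are just the ones current in early 2024, and the corpus is invented):

  # Minimal RAG sketch: embed the corpus, embed the query, inject the
  # best match into the system prompt. (OpenAI Python SDK >= 1.0;
  # documents invented for illustration.)
  import numpy as np
  from openai import OpenAI

  client = OpenAI()

  def embed(texts):
      out = client.embeddings.create(model="text-embedding-ada-002", input=texts)
      return np.array([d.embedding for d in out.data])

  docs = ["refund policy ...", "shipping times ...", "warranty terms ..."]
  doc_vecs = embed(docs)

  def answer(question):
      q = embed([question])[0]
      # ada-002 vectors are unit-normalized, so dot product = cosine similarity
      best = docs[int(np.argmax(doc_vecs @ q))]
      resp = client.chat.completions.create(
          model="gpt-4",
          messages=[
              {"role": "system", "content": "Answer using this context:\n" + best},
              {"role": "user", "content": question},
          ],
      )
      return resp.choices[0].message.content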


They also allow creating custom Actions that make structured calls to external APIs, plus file upload for RAG-type use cases. And they can call out to Code Interpreter, DALL·E, and Bing.


It's great, but they need to do a much better job of showing customers which data they are sharing with these third parties and how those third parties then use our data.

A couple of examples would be enough.

It reminds me a bit of the time when widgets were popular on Facebook or other sites, and I have mostly a negative memory about it.

I mean, apparently most of it is free, but I guess they store the data, and who knows whether they might then sell it to data brokers.


Microtransactions are coming. I'm calling it.


[flagged]


Probably working out legal and tax considerations. It's a first step. They either start in the US first, or spend too long working out the whole world and allow competitors to get a leg up.


Crazy how mad people get when companies roll out their software in their home country first.


Hell yes, brother



