
I agree that some parts of the process now seem more “open”, but there is definitely a lot more magic in the new API. Namely, threads can have arbitrary length, and OpenAI automatically handles context window management for you. Their API now also handles retrieval of information from raw files, so you don’t need to worry about embeddings.

Lastly, you don’t even need any sort of database to keep track of threads and messages. The API is now stateful!

I think that most of these changes are exciting and make it a lot easier for people to get started. There is no doubt in my mind though that the API is now an even bigger black box, and lock-in is slightly increased depending on how you integrate with it.
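
For anyone who hasn't tried it yet, the thread-based flow looks roughly like the sketch below (TypeScript, using the Node SDK's beta namespace as it shipped at launch; treat the model and tool names as placeholders that may change):

    // Sketch of the stateful Assistants flow; no local DB and no embeddings to manage.
    import OpenAI from "openai";

    const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

    async function main() {
      // One-time setup: an assistant with retrieval enabled, so files are handled server-side.
      const assistant = await openai.beta.assistants.create({
        model: "gpt-4-1106-preview",
        instructions: "You answer questions about the attached files.",
        tools: [{ type: "retrieval" }],
      });

      // Threads live on OpenAI's side; they can grow arbitrarily long and the API
      // takes care of fitting them into the context window.
      const thread = await openai.beta.threads.create();
      await openai.beta.threads.messages.create(thread.id, {
        role: "user",
        content: "Summarize the uploaded document.",
      });

      // Runs are asynchronous: kick one off, poll until it finishes, then read the reply.
      let run = await openai.beta.threads.runs.create(thread.id, { assistant_id: assistant.id });
      while (run.status === "queued" || run.status === "in_progress") {
        await new Promise((resolve) => setTimeout(resolve, 1000));
        run = await openai.beta.threads.runs.retrieve(thread.id, run.id);
      }

      const messages = await openai.beta.threads.messages.list(thread.id);
      console.log(messages.data[0]);
    }

    main();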


I wouldn't say the black box issue is unique to OpenAI. I suspect nobody could explain certain behaviors, including them.

As for lock-in, agreed completely.


Indeterminate context, unknown/hidden values and a stateful API are usually reasons for me to look elsewhere for a solution.


Looks super clean! Great work. The idea of always having ChatGPT a shortcut away is very neat.

I have a similar toy app called Chitty (https://chitty-app.com), and recently decided to pull it from the App Store and offer direct downloads instead, because Apple kept throwing wrenches my way. Most notably, in a random app update review I was told that I could no longer reference ChatGPT or OpenAI, and that I was not allowed to let users "unlock" functionality by providing their own OpenAI API key. Note that I don't monetize the app in any way, so asking people to enter their own OpenAI key is the only way to make the app work without causing a major bill for me.

It seems like your app gets around the API key issue by offering in-app purchases, which is good! I would recommend also making sure not to reference ChatGPT and OpenAI too much in the app or on the App Store page, though the details may depend on the specific app reviewer you're assigned :)


Also, thanks! Very nice to hear that someone appreciates my work, means a lot! Sad to hear Apple didn't allow you to offer the "bring your own keys" model. I was considering it, since it would be an attractive option for more tech-savvy users to avoid the Apple tax. As app developers, we could still charge for the app, but not for usage. For more professional use cases like coding with GPT-4, that would make a noticeable difference in what end users spend.


It really seems like the Mac App Store is doing nobody any good, neither users nor app creators. Every time some new app points me to the store I get annoyed. It frequently hangs and is just glitchy in general. Just give me the damn DMG and let me go on my way…


My offline LLM app (which does not use ChatGPT or any OpenAI APIs) was rejected 3 times in review, also during a random app update. With every rejection, they'd ask me to rename the app. Ultimately, I appealed and they relented.


That sounds like a bad time! Glad you were able to resolve it in the end.

I do feel that being in the App Store can give a lot of value (and downloads), but it definitely comes at a cost (mainly uncertainty).


I was also rejected for the app description, but I decided to argue against their ruling. I changed the description slightly, and when submitting a new build I pointed out that all my materials are written in accordance with OpenAI's brand guidelines. The app then successfully passed review.


Downloaded Chitty, entered API key, clicked "Alan Turing", crash. Do you want me to send the crash report somewhere?


Oh no, sorry to hear that. If you could send it to manuel<at>manuelt.de, that would be lovely! Thanks.


Alright, sent! I haven't had a problem since relaunching it after the crash. It's nice and snappy! As a convenience feature, I'm missing syntax highlighting for code snippets. But maybe that is what makes the other apps so slow? :)

And as far as features go, I don't see a way to set the temperature.


Awesome! Thanks a lot for the feedback. Syntax highlighting sounds like a great idea, and I wouldn’t expect it to be a big performance hit.

Exposing more API options (such as temperature) is definitely on the list! I just added custom function support (running small JS scripts, Wolfram Alpha and shell commands), and more options are next!
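
At the API level, custom functions and temperature are just extra parameters on the chat completion call. A rough TypeScript sketch of the idea (the Swift code in the app looks different, and "run_shell" is made up for illustration):

    // Sketch: temperature plus a custom function/tool on the chat completions endpoint.
    import OpenAI from "openai";

    const openai = new OpenAI();

    const completion = await openai.chat.completions.create({
      model: "gpt-4",
      temperature: 0.2, // the kind of setting the app could expose per assistant
      messages: [{ role: "user", content: "List the files in my home directory." }],
      tools: [
        {
          type: "function",
          function: {
            name: "run_shell", // illustrative name, not the app's actual function
            description: "Run a shell command and return its output.",
            parameters: {
              type: "object",
              properties: { command: { type: "string" } },
              required: ["command"],
            },
          },
        },
      ],
    });

    // If the model chose to call the function, the app executes it and sends the result back.
    console.log(completion.choices[0].message.tool_calls);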


I built https://chitty-app.com/, which is a free client for macOS (available via the App Store). It supports GPT-3.5 and GPT-4 in a simple iMessage-like UI.

Chitty also allows you to create custom assistants. It comes pre-loaded with Abraham Lincoln, Alan Turing, Jane Austen, etc., but it's easy to add your own. I added myself as a custom assistant, which allows for ultimate rubber ducking :D

The app supports streaming responses and markdown (incl. multi-line code blocks).
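
The streaming part is just the standard streaming mode of the chat completions endpoint. Roughly (TypeScript shown for brevity, even though the app itself is Swift):

    // Sketch of streaming tokens as they arrive, which is what keeps the chat UI responsive.
    import OpenAI from "openai";

    const openai = new OpenAI();

    const stream = await openai.chat.completions.create({
      model: "gpt-4",
      messages: [{ role: "user", content: "Explain rubber duck debugging." }],
      stream: true,
    });

    for await (const chunk of stream) {
      // Each chunk carries a small delta of the reply; append it to the message bubble as it arrives.
      process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
    }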


Haha, I try to not be too serious about it all. Lots of stuff happening in this space. I went back and forth on the name quite a bit, and in the end I opted for this tongue-in-cheek but short and potentially memorable name.


This is my first ever SwiftUI app, and I had a lot of fun building it. I'm usually a web person, but Electron just didn't feel right for this particular use case. Because I'm using the latest version of SwiftUI, I had to require macOS 13.1, a fairly recent version of the OS.

I also considered ways to not require users to enter their own OpenAI API key. For friends and family I would be happy to just bake my own key into the app (or proxy calls through my own API), but I don't see how that would be feasible on a larger scale, while remaining free.

I'm here to answer comments, in case someone has questions!
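
For the "proxy calls through my own API" idea mentioned above, a minimal sketch of what such a relay could look like (Express, with a hypothetical endpoint; the missing auth and rate limiting are exactly why it doesn't scale while remaining free):

    // Minimal sketch of proxying chat requests through your own server so the key never ships in the app.
    // Endpoint name is hypothetical; this is not what Chitty actually does.
    import express from "express";
    import OpenAI from "openai";

    const app = express();
    app.use(express.json());

    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

    app.post("/v1/chat", async (req, res) => {
      // In a real deployment you'd authenticate callers and rate-limit per user here,
      // otherwise this quickly turns into the "major bill" problem mentioned above.
      const completion = await openai.chat.completions.create({
        model: "gpt-3.5-turbo",
        messages: req.body.messages,
      });
      res.json(completion.choices[0].message);
    });

    app.listen(3000);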


This looks great! Congratulations. I particularly like the "shortcuts" to common actions like "Insert JS/CSS". That's very useful!

I want to also give a shoutout to https://proxyman.io/. Proxyman is a native Mac app that also works as a local proxy and is a pleasure to use. I've been using it for similar workflows and can highly recommend it over Charles (the SSL handling alone is 100x simpler).


Insert JS is one of the most used features. Requestly also allows adding multiple scripts, which are loaded sequentially, so you can basically add a library URL like jQuery and then write a code block that depends on jQuery. Proxyman is a very good tool, no doubt about it. SSL handling is a challenge, and I just don't like the way we have to set things up for mobile app debugging; that was the trigger for building our native SDK.
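
To illustrate the sequential-scripts point: with a rule whose first script is the jQuery CDN URL, the second script can simply assume $ exists. A made-up snippet of that second script:

    // Illustrative second script in an "Insert Script" rule; it relies on the
    // previous script in the same rule having already loaded jQuery.
    declare const $: any; // provided by the jQuery script loaded just before this one

    $(function () {
      // Add a small banner to every matched page, just to show the dependency works.
      $("body").prepend('<div style="background:#fde68a;padding:8px">Injected by Requestly rule</div>');
    });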


I too used to use the paid version of Charles and then moved to Proxyman. I particularly like the scripting ability in Proxyman.


Interesting! What was the trigger point to move to Proxyman? How did you port the configurations you already had in Charles? Did you have to recreate them?


Proxyman looks like something I always wanted: does it allow you to say "when any browser/webpage requests this URL {url}, return this {content} instead of the original one"? And if it works like that, does anyone know something like that for Windows? Thanks!


Yes, you can do this with the Requestly desktop app. You can use the Modify Response feature to specify a URL, the status code you need (like 400 or 500), and the content you want to return.

Requestly also lets you write a simple JS script to change something in the existing content. Here are some references for you - About Modify Response (https://requestly.io/feature/modify-response/), Change Status Code (https://stackoverflow.com/questions/50923170/simulate-fake-4...).

And yes, Requestly is available on Windows too.
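
For the JS-script route, the programmatic variant boils down to a small function that gets the original response and returns the replacement; roughly like below, though see the Modify Response link above for the exact argument names:

    // Rough shape of a programmatic Modify Response script (argument names may differ slightly).
    function modifyResponse(args) {
      const { url, response } = args;

      // Return a canned payload for one specific endpoint, pass everything else through untouched.
      if (url.includes("/api/feature-flags")) {
        return JSON.stringify({ newCheckoutFlow: true });
      }
      return response;
    }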


Have you looked into Cypress (https://www.cypress.io/)? If I understand you correctly, it may actually do what you are describing here.


The combination of cypress and something like percy (https://percy.io/) has made me completely lose interest in testing components independently. The components that make up a UI are an implementation concern that I don't really care about testing independently.

I can write automated tests to verify that a component in a relevant context still functions, and can have a check at PR time to verify that the UI hasn't changed unexpectedly.
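
Concretely, the PR-time check can be as small as a Cypress spec that drives the page and takes a Percy snapshot. A sketch, assuming @percy/cypress is set up, with made-up spec and selector names:

    // cypress/e2e/checkout.cy.ts -- file and selector names are made up for illustration.
    // Requires @percy/cypress to be imported in the Cypress support file.
    describe("checkout page", () => {
      it("still works and still looks right", () => {
        cy.visit("/checkout");

        // Functional check: the component behaves in its real context.
        cy.get("[data-testid=pay-button]").should("be.enabled");

        // Visual check: Percy flags any unexpected UI change at PR time.
        cy.percySnapshot("checkout page");
      });
    });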


Yes, cool, but I want to be able to run the tests on the command line like I currently do with Jest, where everything is mocked (even the DOM) so it runs super fast during CI. But then I also just want to run the tests in a browser for my development workflow.

Running all tests in Cypress would result in slow CI runs.
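
That split is mostly a config question: Jest with the mocked jsdom environment for the fast CI run, Cypress opened locally when you actually want a browser. The Jest side is just a couple of lines (paths illustrative):

    // jest.config.ts -- the fast, fully mocked run used in CI.
    import type { Config } from "jest";

    const config: Config = {
      // jsdom mocks the DOM so tests run headless and fast, no browser involved.
      testEnvironment: "jsdom",
      testMatch: ["**/*.test.ts?(x)"],
    };

    export default config;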


It's actually a nice feature that has been available on AirPods Pro as well. The feature makes it feel like the sound is actually coming out of the iPhone / iPad / etc. When you turn your head, the sound source stays the same.

Note that the AirPods don't actually know where your device is located. They simply set a "base orientation" when the sound starts. This base orientation is updated when you keep your head in a new position for a longer time ("resetting" the approximated position of your device).
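
As a toy illustration of that heuristic (not Apple's actual algorithm, just the behavior described above, with an arbitrary threshold):

    // Toy illustration of the "base orientation" reset described above; numbers and names are made up.
    type Orientation = { yaw: number; pitch: number; roll: number };

    let baseOrientation: Orientation | null = null; // set when playback starts
    let stableSince: number | null = null;

    function onHeadOrientation(current: Orientation, now: number, isStable: boolean) {
      if (baseOrientation === null) {
        baseOrientation = current; // treat the starting pose as "facing the device"
        return;
      }
      if (isStable) {
        stableSince = stableSince ?? now;
        // Head held in a new position long enough: assume the device moved and re-center on it.
        if (now - stableSince > 7000) { // arbitrary threshold, milliseconds
          baseOrientation = current;
          stableSince = null;
        }
      } else {
        stableSince = null;
      }
      // Audio is then rendered relative to the offset between current and baseOrientation.
    }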


But why? Surely the point of headphones is that the sound is right in your ears and not coming out of a small device in a specific direction


When you play back Dolby Atmos material, this feature sort of emulates having a surround sound system. So even though you only have two earbuds, sounds do seem to be around you (coming from behind, etc.). And if you move your head, the sound sources stay put, enhancing the illusion that there are five speakers around you: as you move, what you hear changes accordingly.

The emulation is convincing and well executed, but it is still just a gimmick that only really works when playing back Dolby Atmos movies.


What "the point" of headphones is, is of course subjective. But I'm going to guess that for most people, the point of using headphones is have their audio portable and/or to not disturb other people. I don't think many people use them because they explicitly like the experience of audio playing "between the ears".

As to "why", people might actually prefer the (emulated) experience of a point sound source, as it could closer resemble talking to someone physically present (calls) or listening to a live performance.


Could it be that you were using type-aware linting rules? If you have these rules enabled, ESLint basically has to compile all your TS files to check if the rules are followed. Note that most of the ESLint rules are _not_ type aware, but following the setup guides quickly has you turning them on “by accident”. For more details, see: https://github.com/typescript-eslint/typescript-eslint/blob/...
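
For reference, the difference is roughly this (legacy .eslintrc style; the commented-out lines are the ones that make linting type-aware and therefore slow):

    // .eslintrc.cjs -- sketch only; adjust to your setup.
    module.exports = {
      parser: "@typescript-eslint/parser",
      plugins: ["@typescript-eslint"],
      extends: [
        "plugin:@typescript-eslint/recommended",
        // Type-aware rules: enabling the line below makes ESLint effectively type-check the project.
        // "plugin:@typescript-eslint/recommended-requiring-type-checking",
      ],
      // parserOptions: { project: "./tsconfig.json" }, // required (and costly) for type-aware rules
    };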


Interesting. Wouldn’t that allow someone to sign up for service Y with an email address associated with an account in your system using service X, in order to get access to the account in your system?

Maybe there’s something I’m not seeing, but it seems dangerous to rely on the identity provider’s email address to authenticate the user.


Your local account is associated with X, you attempt to sign in with Y, the Y authentication was successful but there is no local account associated with Y.

Some heuristic (such as email address matching) means you can indicate to the user that perhaps they meant to try X? They sign in with X, and now you have authentications from X as well as Y for the user.

You use the authentication from X to authenticate, and you associate provider Y with the account as well. From this point forward, either X or Y can be used. You might also indicate these on a user profile page, possibly with other options - the user may decide they want to either revoke authentication from X or Y or add on authentication with Z.

You also have a similar behavior with multiple authenticators if you are implementing Web Authentication/FIDO, however these are "pure" authentication with no attributes so your heuristics for this sort of pre-login suggestion would be limited.
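
A sketch of that flow in TypeScript (all names hypothetical; note that the email heuristic only ever suggests a provider, it never signs the user in by itself):

    // Hypothetical sketch of the provider-linking flow described above.
    type Provider = "X" | "Y" | "Z";

    interface Account {
      id: string;
      email: string;
      linkedProviders: { provider: Provider; externalId: string }[];
    }

    // Placeholder lookups; in a real system these hit your user store.
    declare function findByProviderIdentity(provider: Provider, externalId: string): Account | null;
    declare function findByVerifiedEmail(email: string): Account | null;

    function handleSuccessfulAuth(provider: Provider, externalId: string, email: string) {
      // 1. Normal case: this provider identity is already linked to a local account.
      const account = findByProviderIdentity(provider, externalId);
      if (account) return { kind: "signed-in", account } as const;

      // 2. Heuristic only: an account with the same email exists via another provider.
      //    Suggest that provider; do NOT sign the user in based on the email match alone.
      const candidate = findByVerifiedEmail(email);
      if (candidate) {
        const providers = candidate.linkedProviders.map((l) => l.provider);
        return { kind: "suggest-provider", providers } as const;
      }

      // 3. Otherwise this is a brand-new user.
      return { kind: "create-account" } as const;
    }

    // Called only after the user has authenticated with an already-linked provider (e.g. X):
    function linkProvider(account: Account, provider: Provider, externalId: string) {
      account.linkedProviders.push({ provider, externalId });
      // From here on, either provider can be used to sign in, and either can be revoked later.
    }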


Exactly this.


It's assumed that, if you're signed up for a service with an email address, you control that email address.

This is generally a reasonable thing to assume, and can be verified for whatever account providers you support.

