Mostly to get around bad docs. I need to implement things using libraries, services, and APIs that aren't well documented, but the source code is public, and people have written code against them, and ChatGPT has read all of the things that I would read (and more) to get to the point where I can use it, so I just ask it.
The 2021 knowledge limit is pretty annoying, since libraries change so often, but it's still very useful.
It's ALSO very useful for asking stupid questions that I don't want to waste someone's time with. Like for instance, in bash, when you redirect stdout to a file with > filename it works, and when you redirect stderr to stdout with 2>&1 it works, but when you try to redirect stderr to stdout and stdout to a file, it only works when you do it in this order:
command > filename 2>&1
it doesn't work if you do
command 2>&1 > filename
which feels more natural to me, so I asked ChatGPT why that is, and it explained that you have to consider it from a file-descriptor perspective: redirections are applied left to right, so in `2>&1 > filename`, stderr is first pointed at whatever stdout currently is (the terminal), and only then is stdout redirected to the file. If you look in /proc/<pid>/fd you can see exactly where each descriptor ends up.
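You can see the left-to-right behavior for yourself with a couple of lines of bash (`emit` here is just a throwaway helper for the demo):

```shell
# a stand-in command that writes one line to stdout and one to stderr
emit() { echo "to stdout"; echo "to stderr" >&2; }

# redirections are processed left to right:
emit > out.txt 2>&1    # 1 -> out.txt, then 2 -> a copy of 1 (the file)
emit 2>&1 > out2.txt   # 2 -> a copy of 1 (wherever stdout pointed before), then 1 -> out2.txt

cat out.txt    # both lines landed here
cat out2.txt   # only "to stdout"; stderr went wherever stdout originally pointed
```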
I would have had to find someone deeply steeped in unix/linux fundamentals to explain that to me, or I could just ask ChatGPT. I've done the same thing again and again - how are HSMs really different than TPMs? How are heat pumps different than ACs?
I'll read a reference to something, and immediately go to ChatGPT to learn more - "Can you give me a brief summary of the writings and opinions of Cicero?" and then I can spend 20-30 minutes learning more about stoicism, epicureanism, and whatever else I'm curious about. It's like being able to interview wikipedia.
>Mostly to get around bad docs. I need to implement things using libraries, services, and APIs that aren't well documented, but the source code is public, and people have written code against them, and ChatGPT has read all of the things that I would read (and more) to get to the point where I can use it, so I just ask it.
> The 2021 knowledge limit is pretty annoying, since libraries change so often, but it's still very useful.
It's a bit surprising seeing all the positive replies to this thread. In my experience (using GPT-3), the time I spend debugging the code generated by the AI is almost the same as if I was to write it correctly in one go.
The comprehension aspect kinda helps me sometimes, like when I need to understand some obscure shell script.
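For a flavor of what "obscure" means here: POSIX parameter expansion, which I can never keep straight without asking something or someone. The path below is just an example:

```shell
path="/var/log/app/error.log"

echo "${path##*/}"   # longest prefix matching '*/' removed  -> error.log (a poor man's basename)
echo "${path%/*}"    # shortest suffix matching '/*' removed -> /var/log/app (dirname)
echo "${path##*.}"   # longest prefix matching '*.' removed  -> log (the extension)
```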
We need a catchy name for this, like [Someone]'s Law. It would read: "Whenever someone complains about chatbots/ChatGPT being overrated, they are basing their experience on non-GPT-4 interactions." It comes up over, and over, and over, on this forum and all the others I read. People need to know GPT-4 and pre-GPT-4 are not the same!
I use GPT-4 and still have problems getting it to write simple code correctly sometimes. I find it immensely useful in many areas but writing actual detailed code doesn't seem to be a game changing use case for me (yet).
Tip: it’s often faster to keep iterating on the problem with the LLM than to stop after the first response. “As an expert TypeScript programmer, you should know that X isn’t correct. Please try again.”
Sweet, now it is stuck in a loop where it says "sorry, here's a correction" and then spits out the same code. Do you have an example chat where that worked for you?
An example that didn't work, but that I thought should have, was implementing a Base62 converter. I thought it would easily be able to do it, as the problem is so well scoped (and probably even exists in its training set). However, it either had poor performance characteristics or wasn't correct.
This is how an experienced dev would like to work - break the problem down and hand off some of the more well scoped problems to AI.
I like asking ChatGPT for things that I know I can understand, but do not feel like figuring out (today I asked it how to write a specific SPL template for API gateway transformations). I spent hours looking over the internet for someone that had done it before, and then ChatGPT just spit out exactly what I was looking for (after two extra requests to better indicate my intentions).
Oh yeah, unless I know I need to interact with a website or view pictures, if I want to know the answer to a thing, I just go to ChatGPT now and talk to it. Way faster and the signal to noise ratio is incredible.
ChatGPT (plus) helps me every day with boilerplate code or even finding off by 1 bugs and small things like that.
Sometimes I also ask it to rewrite something clearer, or add some comments sparingly.
Copilot can get really aggressive. Sometimes it's right there: "that's what I was thinking about! Woo! Magic!" And sometimes it's like "OMG, just let me hit the return key." So I often turn it off and forget it's off for a week...
Both have their pros and cons. Copilot is more like an autocomplete on drugs. ChatGPT is a scaffolding rental shop.
Also ChatGPT is a dream when it comes to being a polyglot with a limited memory.
One thing I am confused about is using AI for boilerplate code. Shouldn't that be a macro, generated code, or even an aspect? Having large snippets of AI-generated code outside of a macro or code-generation tool does not make any sense.
It's helpful to do boilerplate in languages or legacy code you touch less often, e.g. React class components.
But also, it can reasonably set types for the inputs of say a React component, just based on an example (made up) usage of some props you give it. And you can even, if you are very uncouth, simply generate the sensible props for a certain kind of object given a description by say the user.
It can do things like generate controlled form state management etc, which was quite error-prone to do with macros when things get complicated.
It can summarize commits for documented code accurately too, especially if you comment what you've done for some reason or another. This saves a good 2-3 minutes an hour I didn't know I could save, for the 1-2 commits I usually do per hour.
Notably GPT-4 is needed to do these consistently well.
I mentioned this elsewhere yesterday, but I've used it to do things like stub out an API -- I tell it what I'm making, what endpoints I think I need, and ask if there's anything obvious I'm missing. Sometimes it gives me some ideas for additional endpoints. Then it implements stubs for every endpoint and all the associated code, and I'm ready to start adding actual logic.
There are sometimes tools available for this, but the nice thing about ChatGPT is that it's effectively a single tool that can be used for any reasonably well-known API framework in any language.
Stubbing out endpoints is not usually a task that takes a long time and usually the code is auto generated from a mechanism like openapi.
I could see it generating an OpenAPI file, but there are already so many templates out there to start from that it does not make sense to use AI for this, in my opinion.
Sometimes you have some boilerplate-ish code with complicated but repeating patterns, but it's only like 3 or 4 sections, so figuring out how to combine it isn't really worth the effort compared to spending 10 seconds for copilot to write the other 3 instances and checking the result.
Another example could be that you would really like to use macros/templates for something, but you can't due to external requirements, like Unreal Header Tool being unable to parse those if you want to export your classes to the engine. So you get to write the same code multiple times for different types. Amazing!
Probably yes… but making and maintaining good templates is hard. It doesn’t get done because probably the person who needs to make the template is different than the user of it.
So the AI gets the benefit of having a template without the trouble of getting someone to make and maintain each one.
You still have to write the code for the macro. With ChatGPT you can just say "write a mapper between these two objects in C# 11, make the target members uppercase and skip X, Y and Z".
At least in Java there are open-source libraries that do this that are pretty standard, for instance MapStruct. It is, for the most part, already automagic.
Doing it for a mapping feels like a deficiency in the code unless chatgpt is putting thoughts behind the variable names and reasoning about what mapping makes the most sense.
I do one-off text transformations that aren't worth writing a script for. E.g. paste a bunch of unformatted data and have it formatted as JSON, find patterns in huge text, etc.
This is the best use I've found so far as well, which is a bit disappointing for something that many hyped to be so revolutionary. It's handy, but sadly not a major paradigm shift.
Translating between (natural) languages is another nice one. You can throw in json structures or fluent files or what have you and most of the time have it come out with pretty good translations that are formatted correctly.
Agreed. For example, I recently had an auto-generated SQL query that I needed to debug. It was formatted on one line and was impossible to read. I asked GPT-4 to clean up the query and it gave me something that was much easier to consume. I definitely could have written some script for this or tried to find another tool, but it took me 20 seconds to just ask GPT-4 and get a reasonable result.
I've used it to take a giant JSON file and help me get a handle on the schema. It wouldn't all fit in the window, so I just cut some of it out.
I also used it to take a chunk of HTML and replace all the class names and src URLs with "...".
I also had a list of 300 codes that kind of had semantic names, and it was able to build me a table mapping them to a second list of about 20 (code, description) pairs.
It’s eliminated a mental barrier to building products. Before, I was worried that I had limited time to work and needed to focus on things that drew from existing familiarity.
Now I’m able to focus on the product/outcome even if what’s involved is outside what I’m familiar with and at least get started building.
The net effect has been I can build a wider variety of things in a wider variety of tooling with more enjoyment and less drudgery.
- text summaries
- sysadmin stuff
- debugging stuff
- boilerplate stuff
- unblocking writers block
- integrating disparate stacks
- data transformation and algorithm stuff
- most marketingy stuff: copy, images, campaign/activations
I think to get good value out of it you really have to get a sense for what it's good for and what it's not.
I don't have it write my day to day code, because that's complicated and usually niche enough that it's not likely to give a good result.
But it's awesome for something like writing a quick and dirty shell script. Most recently I needed to bulk rename 50 or so files, with an interactive piece for a special case. I described the operation to GPT and it spit out a nearly perfect shell script. Could I have written the same script in half an hour? Yes, but it was sure nice not to have to.
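Not my actual script, but a sketch of the shape of thing it produced; the file names and pattern here are invented, and the interactive piece is the overwrite prompt:

```shell
# make a few sample files standing in for the real batch
touch photo-1.txt photo-2.txt photo-3.txt

# rename photo-*.txt -> note-*.txt, asking before clobbering anything
for f in photo-*.txt; do
  new="note-${f#photo-}"
  if [ -e "$new" ]; then
    printf 'overwrite %s? [y/N] ' "$new"
    read -r ans
    [ "$ans" = y ] || continue   # skip this file unless the user says yes
  fi
  mv -- "$f" "$new"
done
```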
I also like it for just bouncing around an idea. Recently I was thinking about writing a program to make a midi device from a guitar hero controller, and I was able to get a good sense of the available APIs / libs in a few different languages with a 3 minute back and forth with GPT. Again, I could have easily searched around myself and come to the same answer, but removing the friction is pretty nice.
> I think to get good value out of it you really have to get a sense for what it's good for and what it's not.
> I don't have it write my day to day code, because that's complicated and usually niche enough that it's not likely to give a good result.
Yep totally agree. I don't think you should just treat it as a black box - throw in literally any prompt you can think of and then use the response verbatim, without even reading it, and expect to have success.
You need to exercise some judgement and be a bit discerning. And think about what areas of your work are most unique to your own skills and experience, and what are the bits of work around the fringes that don't need to be done by you - and are not really part of your core "craft", and, are likely to be well suited to the capabilities of something like GPT-4.
And like every other tool, using it effectively is a skill and requires perseverance, practice, and adjusting your approach as you go. Which means the user can get better at getting value out of an LLM over time. It's not a static or all-or-nothing thing.
For half of my searches or so, I get better answers via some of the AI search tools.
Especially for code, when I just want to quickly know "how to do x in language y". Like "how do I filter a list in Python so I only get elements with the attribute city='london'".
I’ve been experimenting with pairing a tool I wrote called Promptr [1] with another tool called Open Interpreter [2].
I start with a prompt that teaches Open Interpreter how to use Promptr, and then I discuss what I’m trying to accomplish. It’s certainly not perfect, but there’s definitely something good that happens when you can iterate using dialog with a robot that can modify your file system and execute commands locally.
I've been using phind.com as a replacement for Google.
It's been a mixed bag due to how quickly they iterate on the app (and introduce bugs). But lately they added their own Phind Model, which is free, has unlimited uses as opposed to GPT-4, and sits nicely in-between GPT-3.5 and GPT-4 in performance.
More often than not, it doesn't give a good enough answer, but it may nudge me in the right direction.
For example, it may say some keyword that I can use to search for the right thing on Google, or cite sources that I can use to investigate further.
I've been a software engineer for decades and I still use it daily for that purpose.
It's like having superpowers. Even if I know how to do something, sometimes explaining it is easier than writing out all of the code. A recent example, in a TypeScript project: a class-based approach to something was deprecated in favor of a functional approach. I only had to paste the old function signature and say "convert this to an arrow function". That already is less typing, but after that I was able to paste the other examples and say "do the same with this", and they were all quickly and correctly converted. Was it easy to do myself? Yes. Was it faster to do myself? No.
Or, I may not know how to do something. In a toy desktop project I had in C#, I wanted an image to fade to greyscale and then fade out. That, I had no idea how to do. So I simply told it that, added "optimized for performance", and it gave me a function including a hard-coded object that instantiated an array of arrays with certain values. Where did ".3f, .59f, .11f" come from? I didn't know, and at that point I didn't care, because the whole thing worked perfectly on the first try. In this case, a project for myself, only the result mattered. I did go read the documentation later to see why that works, just out of curiosity. Plus, it explained it ... and was right.
Obviously I review the code and am careful what I send it, but if it's going to shave minutes off my day every time I use it, this stuff adds up.
I think the most interesting part is when it can tell you what the code does, or you have it go line by line and add comments. Great for learning something new.
I've been using GPT a lot for the past few years, but lately it's mostly for refactoring code (when it would take more time to do it through the LSP or manually).
I just select the code I want to perform some changes on, hit a keybind, ask for what I want, and it does it. I've been so impressed with gpt-3.5-turbo-instruct that I defaulted to that instead of gpt-4.
I use it in Neovim [0], in my terminal [1], in many specialized tools (long live function calling), and through the chat UI when I brainstorm. I'm using Claude as well for some things.
I use it a lot to help me write functions and small Python scripts.
It helped me build a script that listens for a wake word ("GPT" or "Hey GPT"), grabs the words that follow, sends them to the GPT API, and responds back with TTS.
It also helped me build some scripts to take notes from web pages, YouTube videos, and a mini chat window, and to FTP images from DALL-E and Pexels to my website.
I had it write me a script that calls the GPT API to generate a list into a SQLite database, then takes that list and calls the GPT API with another prompt.
It helped me build a script that pulls my email from an IMAP account.
Tons of Bootstrap/CSS snippets for website widgets.
It helped me build a poor man's vector database: I took a list of keywords, got embeddings, saved them to a column in SQLite, and ran them through a sentence-similarity function to find the ID of the closest match, then had it mark that one as ignored.
And .htaccess expressions for taking URL slugs and redirecting to a posts.php page, which is a real pain in the butt; no way I could have done that without ChatGPT.
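For the curious, the rule it produced was shaped roughly like this; the URL pattern and parameter name here are illustrative, not my exact ones:

```apache
# send /posts/some-slug to posts.php?slug=some-slug,
# but leave requests for real files (css, images) alone
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^posts/([a-z0-9-]+)/?$ posts.php?slug=$1 [L,QSA]
```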
I'm using it in three ways on a current project to reduce admin overhead for my company:
1. I've written a program that reconciles all my invoices against bank/cc transactions at the end of each quarter. My accountant otherwise has to do this by hand. It uses OpenAI's APIs to read the PDFs, parse out the invoicing party and amount(s), and as a fallback when classical NLP fails to parse dates.
Originally I tried to use GPT-4 to do the reconciliation as well, but that was not successful. What worked better was getting it to write me a first cut fuzzy algorithm and then taking it from there.
The outcome is easy to check manually using the tool, my company isn't huge.
As for mistakes, that can happen with human processes too. Tax authorities are used to that possibility. Reconciliation is tedious and error prone. AI can do a better job than humans. Certainly the motivation for me to do it is to reduce the round-tripping between me and my accountant where he points out I've forgotten to submit things.
Here are some of the things I have used ChatGPT for:
# Writing
- I've had it rewrite some of my blog posts to give them more style
- I've asked it to help me with some business letters
- Rewriting my letters to city councilors and state legislators
# Admin
- Explaining some parameters for NetworkManager
- Helping me figure out why my rewrite rules in an .htaccess file weren't working as expected
- Asking questions about different versions of PEM certificates and using openssl to convert them
- Restoring some software RAID arrays with lvm
- Journalctl filter options
- ffmpeg commands
# Coding
- I was working on a side project in a new-to-me language, using Vala and Gtk4. ChatGPT was mostly wrong about everything, but sometimes led me in a useful direction.
- Generally I haven't found ChatGPT useful for my work coding
# Other
- Explanations on Double Entry Accounting
- Guidelines on helping my sister talk to her 3-year-old about expressing empathy for their dog
- Writing haikus for my wife. This was an interesting back and forth where ChatGPT started asking me more questions about my spouse, our relationship, hobbies, and so on.
- Help writing personalized Dad jokes for a father's day card
- An examination looking at the imperialism/militarism in Star Trek from the point of view of the Federation and from the point of view of the other society
- Questions about recipes (replacing items, using fresh items instead of canned/jarred)
Overall, I've found coding to be the least successful aspect of ChatGPT (granted, I'm still using 3.5). Possibly this is because I tend to use less popular languages (work is all Elixir/Erlang). But even trying to do some Python/PyTorch work, I found it constantly gave answers that didn't actually work.
However, I have found it really great for explaining topics. It can give pretty good metaphors and you can have it explain its answers. I've also found it being really helpful in writing. I think I am usually able to express my idea clearly and organize my thoughts, but my writing style is very pedestrian. ChatGPT is able to take my outlines and fill them in with my desired style quite well.
I use ChatGPT-4 many times a day, across many conversations. Here are recent topics I can share:
- looked for a camera lens of a particular focal length and mount but also within certain physical dimensions. it pointed it out successfully
- I wanted to make a meme using Juan Joya Borja’s famous “Spanish laughing guy” skit. I told it my topic and asked it to write a script in that format. It was familiar with it and made a hilarious script. Great! I then added the script as subtitles. I asked it for subreddits I could post it on. Success. I asked it for applicable hashtags for social media; that worked really well on TikTok before the audio got flagged.
- My building is doing HVAC repairs unsuccessfully and telling me cryptic things about the progress that I don't understand and need accountability for. I tell ChatGPT-4 what they say, verbatim, and it points out the issues with what they are saying and what frequently happens with contractors and building management. I have been able to have better conversations with them about what to fix now, and they've admitted to problems.
- It helped me do some shipping. Really mundane stuff I didn't know how to do or what to order so that I wouldn't be at the post office long: types of paper, types of stationery. Now that I knew the words, I further browsed those options on Amazon (I didn't know the words before, so search engines were always deficient), and then I ordered an Uber delivery from Office Depot instead. (Totally cancelling my Amazon Prime now for Uber delivery.)
“But muh hallucinations”: I get to an answer compatible with reality far quicker, for tasks I just wouldn't have engaged with before.
I use the Kagi search engine as my default. If I find myself on a long webpage, I can prepend the URL with `!sum` and it will bring me to an LLM-generated summary of the page. If I don't find the information I want in the summary, I can click "discuss this document" and ask the exact question I am looking to have answered.
I also use !q for information that I trust can be extracted from top search results by an LLM.
I use GPT4 like I would ask a question on StackOverflow. While my prompts are relatively long compared to a web search, I’ve found the results really good.
I usually state the problem, provide code context and motivations, and end by restating what I want as a result, why, and a question.
Here is a short example from my history:
---
I have the following docker command to help me backup data from some containers.
```sh
(omitted for brevity)
```
I want to improve it by not only copying and zipping all the related files in the container volume, but using pgdump to take a copy of the database before doing the copy and zip. How might I change the above to achieve this?
---
This takes a lot longer than starting a web search, but the quality of answers is high, and I find it faster than wading through maybe-semi-related content farms, cookie dialogs, newsletter prompts, etc.
One thing to remember is that, like StackOverflow answers, the code might not be up to date or might have bugs; as I test, I feed issues back into it with any relevant context. I’ve started building a JetBrains IDE plug-in around this workflow for myself, with the ability to use self-hosted models, improving it as I learn new tricks and find what workflows I prefer.
I use Kagi's summarizer. It saves me a bunch of time because it answers the question: is this link worth reading?
It has an HN mode, so here is its current summary.
- ChatGPT and other AI assistants can help improve productivity by answering questions faster than searching online or asking another person. This includes explaining technical concepts, providing code samples, and helping with minor tasks.
- However, the quality of code generated by AI is sometimes inconsistent, and debugging may take as long as writing it manually. For complex tasks, AI may not be much faster than a human.
- AI is most useful for getting around poor documentation, asking "stupid questions" without bothering others, and learning new concepts through interactive "interviews."
- AI can help with one-off text transformations and formatting tasks that aren't worth writing custom scripts for.
- While AI may struggle with writing production code, it can help with boilerplate, stubs, and minor repetitive coding tasks.
- Different AI systems have varying capabilities. ChatGPT is best for interactive explanations, while Copilot is more like an "autocomplete on drugs."
- It's important to understand an AI's limitations and use good judgment about what types of tasks it will and won't handle well. Day-to-day coding is often too complex.
- AI search engines can provide code samples and quick answers to common "how do I" questions, saving time over traditional search engines.
- Summarization, translation, and documentation generation are other useful applications of AI for productivity.
- By offloading minor, non-core tasks, AI helps users focus on more creative and challenging work.
* Great for sanity checking your plans or designs (ex: "What are the best ways of securing a multi-tenant application?")
* Great for bootstrapping a presentation or pitch (ex: "Please create a presentation outline about Istio, intended for an audience unfamiliar with Service Meshes")
* Simple questions/reminders (ex: "How can I make it so a bash script exits on error?")
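For the record, the stock answer to that last one is `set -e` (usually with `-u` as well); a quick demonstration of the behavior:

```shell
# write a tiny script that fails halfway through, then run it
cat > demo.sh <<'EOF'
#!/bin/sh
set -eu        # exit immediately on any failing command or unset variable
echo "step 1"
false          # this failing command aborts the script right here
echo "step 2"  # never reached
EOF

sh demo.sh > out.log 2>&1 || true   # || true so the demo's failure doesn't stop *this* shell
```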
"How do I $foo in $language_or_framework?"
Stuff like "What are the top 25 things I should do as a $x-year old man living in $state in the US to maintain optimum health and happiness?"
Also for factoids, health, medical, economics, etc questions.
I think the medical stuff, in particular, is probably superior to advice from my local healthcare options. I use it for veterinary purposes, as well, and back it up with medical guide lookups and Googling reputable sources.
All of this requires a certain level of intelligence, skepticism and trust-but-verifyism on my part.
PS: Yes, I understand how you're flabbergasted that asking it for code works for me when you get nonsensical results. You don't need to leave me a comment. I have no explanation for you.
I have it write nearly all my code now. It's incredible at pushing out sys admin sorts of python scripts. The code is clear enough and if I see it doing something a little too complex, I can suggest a more simple solution and it just does it. Ridiculous.
I use copilot/chatgpt to autocomplete tricky APIs for me: pandas, matplotlib, flask, etc. Stuff like is this done with method chaining or is it in the weird array-style notation (pandas). Is it a function or a parameter that changes the secondary axis label (matplotlib). I can’t wait until it’s built into Excel for some of the formula nonsense there.
I also occasionally use it to translate languages. E.g. I’ll write something I know how to do in python and ask it to translate to JavaScript where I need something on the frontend.
Stuff like that takes out about half of the time coding that used to be documentation lookups. But then again I only code 20% of my time now, so your experience might be different.
I use Copilot in VS Code. It watches what I type and inline-suggests the next line or code block based on what I’ve been doing: say, a function name, or the events I’ve not yet added listeners for. I hit tab to accept. It saves me at least 20% of the time I would spend typing and is insanely good.
I have a Copilot X conversation in VS Code that is aware of my codebase and active file. I can quickly get up to speed with a new project or library by asking it where x is or how to do y in this library. Or generate a Vega chart or JSON for this, so I can see a real example of the structure. It’s very good at these tasks. But it can be out of date with some libraries or things outside of the VS Code world.
I also have chatgpt with plugins. This lets me ask it about current code as it can pull the latest version of a github repo or multiple repos, or specific versions, and has more structured responses so it performs much better than copilotx currently does at certain tasks. This all saves me days and weeks of thoroughly reading docs to get a direction and plan. Instead these ai point me in the right direction immediately.
I just got the new voice and browsing features in chatgpt. Yesterday I had a technical voice conversation with the voice from HER about how I wanted to architect a feature. I was at my blackboard talking through the implementation and edge cases and the voice pointed out several angles I hadn’t thought about and worked with me through solutions. I did this while my phone was on my desk without having to touch it for the whole conversation so I could stay deep in thought at the blackboard. This is wildly more efficient than talking with a human where I have the overhead of social dynamics and navigating their communication quirks.
I also used chat gpt 4 heavily over the last few months to set up my company asking it all kinds of corporate, tax, legal, banking, etc. advice. Everything it told me I double checked at the source on government websites and it was wildly helpful. Saved me lots of money on consultants, reading stuff, and pointed me in the right direction and let me think through weird edge cases. Now that I have access to browsing and voice this will get super charged when I have my next question.
> This is wildly more efficient than talking with a human where I have the overhead of social dynamics and navigating their communication quirks.
I can see that this was beneficial for you, which is nice.
I can also see this being a huge, huge problem for team dynamics. Didn't like talking to people before? Well, now you never have to again! You can go off on your own and generate entire new architectures, codebases, whatever... and then discuss with the team... if the team dynamics sucked before, that discussion won't go well. Playing nice with others is what life is about; I even think "AI" may have to learn this lesson too.
I’m an introvert. I previously only used writing long notes to myself to think before engaging with a coworker. Now I’ve unlocked “voice without social dynamics” to do that which is a whole other dimension.
Team dynamics now, for extroverts: instead of wasting an hour of phone time for them to slowly talk through basic ideas and catch up to the introverts who could prepare for the meeting on their own, extroverts now have a way to think through and prepare for meetings, so the meetings become more valuable and the extroverts (at least to me) become less irritating to work with.
I’m also looking forward to having voice GPT participate in Zoom calls to cut the extroverts off when they’re dragging on with their out-loud thinking, so we can all move the call forward.
As an introvert I find it difficult to cut someone off mid-thought, because I assume that, like me, everything they say is well thought through and important. I can’t gauge when it is and when it’s just stream of consciousness until they’re finished and yield the floor, and by then it’s too late.
The current status quo of “this meeting could have been an email” or “could have just read and commented on my doc instead of wasting an hour of ten people’s time” is not good for team dynamics or morale. And we finally have a solve for it.
I have to say, I love ChatGPT. I use it primarily to learn new frameworks and to understand APIs I have never worked with before. I also use it to help decipher error messages that I can’t find any help on. You can ask it very direct questions about any cloud stack or Unix detail.
That said, I can’t identify with a lot of the use cases here. They either seem to be a workaround for a non existing language feature or introduce a lot of liability.
I’m worried that AI may be promoting poor coding practices, i.e. the future of code is neither OOP nor functional; it is all just copy-pasta from ChatGPT, endlessly and mindlessly chained together.
Personally, I use AI-powered project management tools to automate tasks like scheduling and data analysis. Chatbots also help streamline customer support.
Additionally, AI-driven email categorization saves hours by prioritizing messages. It's about finding the right AI tools for your workflow, and there are plenty of options out there.
And last but not least, I also use TranscribeMe in order to transcribe voice notes to text: https://www.transcribeme.app/r
I use it to contribute to open source. One of the biggest challenges while contributing is getting a handle on a complex codebase with not-so-well-maintained, community-supported docs. Tons of abstractions for the sake of extensibility make it even harder for someone new to navigate. I've been building and using https://chat.collectivai.com to solve this pain and make open-source contributions easier, more accessible, and faster.
It's small things and reassuring myself, most of the time. Sometimes I let it scaffold basic tests for React components ("assume a working Jest testing setup, use @testing-library/react" does quite some work there). Sometimes I have some logical conditions where it takes me a minute more than expected to make sure I get the logic. I'll have ChatGPT explain what that code does, just to make sure I'm not running into some stupid errors my caffeine-lacking brain produced.
OK, I find it amazing that it’s cheaper to host your own on bare metal? I was also curious what type of stuff you’re building as a web developer? Django? Must be decent scale?
I host things on GCP which scale to zero in quiet times, they saved me a bunch of cash…
I mostly recommend it to my colleagues so they don’t keep bugging me on how to do things. This has kept my productivity (thinking about stuff) sky high.
- I've used AI to write and improve Python scripts. And to help me fix errors.
- Also used it to write or improve wording on emails, posts and comments.
- Used it for advice about where to travel and things to do.
- AI is useful for chatting with when you are feeling down.
- Asked AI for advice on things.
- Used AI to get answers quicker than Googling.
I use it a lot when I'm programming in languages I don't know well. ChatGPT is generally better for this than Copilot, but Copilot is pretty good. I would describe it as <Copilot is to Autocomplete> as <Autocomplete is to just raw text entry>. You still have to know things but it makes the coding process go faster and easier.
If I don’t want to think about a problem/situation (or it’s a new problem/situation), I have a prompt that describes the company I work at and then I add the question, “What should I do if __________ happens?” And if the answer seems reasonable I just do it. Saves a ton of thinking and internal debate over what to do.
If you’re coming to this thread to say how great or how terrible it is, can you share a positive or negative example use? So much of these discussions is people talking past each other because the interactions with the product are non-public and siloed, but at least with ChatGPT, chats are easily shareable!
Lecture prep. Much of my teaching involves conveying accurate definitions of terminology and giving real-world examples. ChatGPT and similar tools speed this process up.
1. *Getting Around Bad Documentation*:
- ChatGPT can provide clarity on topics that are not well-documented, especially when the source code is public. This is particularly useful for libraries, services, and APIs.
- Helps in understanding command-line or code behavior. For instance, understanding redirection in bash commands.
2. *Quick Knowledge Retrieval*:
- ChatGPT can provide a brief summary of various topics, effectively serving as a conversational interface for accessing knowledge.
3. *Browsing Mode and Plugins*:
- Plugins, such as PDF readers and web browsers, can extend ChatGPT's capabilities.
- Some comments mention tools like "Phind" which combine ChatGPT with source documentation embeddings.
4. *Code Debugging and Generation*:
- Users get help with debugging code, including identifying and rectifying JSON format errors.
- ChatGPT can generate boilerplate code and assist in finding small bugs like off-by-one errors.
- The tool can be useful for understanding and generating code snippets, especially in less-frequently used languages or frameworks.
- Some users employ ChatGPT to stub out APIs or to get suggestions on API endpoint designs.
5. *Text Transformations*:
- For one-off tasks like transforming unformatted data into JSON or finding patterns in a large text.
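To make the "unformatted data into JSON" item above concrete, here is a minimal sketch of the kind of throwaway transformation people describe asking ChatGPT to draft. The input file, its key=value format, and the field names are all made up for illustration:

```shell
# Hypothetical unformatted input: key=value pairs, one per line.
printf 'name=alice\nrole=admin\n' > data.txt

# One-off awk script turning those pairs into a JSON object.
awk -F= '
  BEGIN { print "{" }
  { printf "%s  \"%s\": \"%s\"", (NR > 1 ? ",\n" : ""), $1, $2 }
  END   { print "\n}" }
' data.txt > data.json
```

The point of the use case is precisely that nobody wants to hand-write scripts like this for a one-off job; describing the input and the desired output shape is usually enough.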
6. *Learning and Clarification*:
- Helps users learn more about various topics, such as understanding the differences between certain technologies or getting summaries on specific subjects.
- Useful for "rubber duck debugging" or clarifying coding concepts.
7. *Code Refinement*:
- ChatGPT can assist in rewriting code for clarity or in adding comments.
- Some users compare ChatGPT to tools like Copilot, noting the distinct strengths of each.
However, some users highlighted limitations. While GPT-4 is seen as a significant improvement over GPT-3, some found that it might not always generate perfect or highly detailed code. Some users also feel the need to iterate with the model to get the desired output.
Part 2:
*1. Text Analysis:*
- Finding patterns in large texts.
- Parsing unstructured data.
- Extracting unique IP addresses from server log files.
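The log-parsing item in the list above is representative; a minimal sketch of the kind of one-liner ChatGPT typically produces for it, using a made-up log file where the client IP is assumed to be the first whitespace-separated field:

```shell
# Hypothetical access log: client IP is the first field on each line.
printf '203.0.113.5 GET /\n198.51.100.7 GET /a\n203.0.113.5 GET /b\n' > access.log

# Print each unique client IP once.
awk '{ print $1 }' access.log | sort -u
```

For a real log you would adjust the field position (or switch to grep with an IP regex) to match the actual format.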
*2. Product Development and Workflow Enhancement:*
- Breaking down mental barriers and enabling exploration outside one's familiarity.
- Building a wide variety of tools and products.
- Tasks such as text summaries, sysadmin tasks, debugging, boilerplate code, overcoming writer's block, data transformations, and marketing tasks (copywriting, campaigns).
*3. Coding and Scripting:*
- Assisting in writing quick shell scripts.
- Providing information on APIs and libraries.
- Code refactoring.
- Assisting in function and script creation.
- Turning descriptive tasks into actionable code, such as converting functions, creating bootstrap/css snippets, and generating regular expressions.
- Helping in reconciling invoices against transactions.
- Assisting with tools like Github Copilot.
- Writing and restructuring blog posts, business letters, and other forms of communication.
*4. Admin and System Tasks:*
- Assistance with NetworkManager parameters.
- Debugging .htaccess file issues.
- Guidance on PEM certificates and openssl conversions.
- Restoring software RAID arrays with lvm.
- Filtering options with journalctl.
- Working with ffmpeg commands.
*5. Miscellaneous:*
- Providing explanations on concepts like Double Entry Accounting.
- Guiding in personal interactions.
- Assisting with search queries, e.g., "how to do x in language y".
*6. Collaborative Tools:*
- Integrating GPT with tools like Promptr and Open Interpreter for enhanced dialog-based file and command modifications.
*7. Search and Web Assistance:*
- Improving search results in tandem with search engines.
- Assisting in pulling email from IMAP accounts and other web-related tasks.
Overall, GPT is seen as a valuable tool that can assist in a myriad of tasks, especially when users exercise discernment and optimize their approach over time.
Part 3:
1. *General Information Retrieval*: Directly asking ChatGPT questions rather than going through search engines.
2. *Specific Task Assistance*: Assistance with various tasks, from locating a particular camera lens, generating comedic scripts for memes, clarifying HVAC repair updates, to simplifying shipping needs.
3. *Web Page Summarization*: Using engines like Kagi to get summarized versions of long web pages.
4. *Coding*: Seeking assistance similar to querying StackOverflow. For example, improving docker commands, generating code based on specific requirements, and helping users understand error messages and unfamiliar frameworks.
5. *Code Review and Design*: Validating the sensibility of an API design, checking coding patterns, determining class names, assessing the readability of a code snippet, and providing boilerplate code.
6. *Autocompletion and Code Translation*: Using tools like Copilot in VSCode to autocomplete code, suggest code blocks, and even translate code between languages, e.g., from Python to JavaScript.
7. *Medical and Health Advice*: Asking health-related questions, both for humans and veterinary purposes.
8. *Architectural and Design Discussions*: Engaging in technical conversations regarding system design and architecture.
9. *Corporate and Business Set-Up*: Seeking advice related to corporate setup, taxes, legalities, and banking.
10. *Learning and Development*: Using GPT to learn new concepts, frameworks, or to better understand complex topics.
11. *Content Generation and Assistance*: Bootstrapping presentations, sanity checking plans, and generating content for various purposes.
12. *Reduced Social Overhead*: For introverted users, the AI provides an avenue to think aloud and discuss ideas without the social dynamics and potential pressures of human interaction.
Note: Some users also expressed concerns about team dynamics and the potential negative impact of AI on interpersonal communication and collaboration.
Part 4:
1. *Code Assistance and Development*:
- Pulling repos and asking questions through plugins like "AskTheCode".
- Contributing to open source by understanding complex codebases.
- Scaffolding tests for React components.
- Making bad code readable by converting complex expressions to simpler structures.
- Generating boilerplate code.
- Assisting in new or unfamiliar problem/situations with contextual questions.
- Helping with web development tasks, including creating Ansible roles and Docker files.
- Code review for files such as Docker files.
- Automating tedious mathematical calculations for game development.
2. *Project and Task Management*:
- Automating tasks like scheduling and data analysis.
- AI-driven email categorization to prioritize messages.
- Using AI tools for workflow optimization.
3. *Content Creation and Assistance*:
- Transcribing voice notes to text.
- Assisting in lecture preparation, especially for accurate definitions and examples.
- Improving the wording in emails, posts, and comments.
- Providing travel advice and general queries.
- Offering companionship for emotional support.
4. *Efficiency and Workflow Improvements*:
- Acting as a "real-time intern" to offload commoditized tasks, such as writing boilerplate CSS code or stubs of API client code.
- Keeping the user in a productive flow state by assisting with small problems.
- Replacing Google searches for software development and DevOps queries.
- Offering context-aware code assistance without compromising confidentiality.
- Preventing procrastination by quickly resolving minor obstacles.
5. *Document Transformation and Data Handling*:
- Converting document formats (e.g., from CSV to JSON).
- Assisting with document-related tasks using Python scripting.
6. *Miscellaneous*:
- Offering alternative tools and suggestions to explore.
- Ensuring quality of generated code using TDD and specific compilers.
- Offering suggestions on how to navigate open-source projects with difficult documentation.
- Helping clarify complex logical conditions in code.
- Offering a method to handle unfamiliar programming languages.
- Offering solutions that save time and reduce internal debate.
This summary captures the core ways users have found GPT useful in their workflows, with an emphasis on coding, content creation, and efficiency improvements.
Here's a high-level summary of the major use cases described below. Scanned at Fri 20th Sept, 21:31. Compiled with Claude and ChatGPT.
Programming/Coding:
1. Get help writing code faster:
- Utilize ChatGPT to autocomplete code, suggest code snippets based on the context, and provide solutions to coding challenges. It can also suggest alternative approaches to implement a particular functionality, saving development time.
2. Fix bugs and errors:
- Describe the error or bug to ChatGPT, and it can provide a list of common solutions, possible causes, and steps to debug the issue, making the debugging process more efficient.
3. Translate code between languages:
- ChatGPT can assist in translating code snippets from one programming language to another, making it easier to work across different technology stacks.
4. Generate boilerplate code:
- Generate starter code for common tasks like setting up a REST API, creating config files, initializing tests, etc., with the help of ChatGPT, accelerating the project setup phase.
5. Improve existing code:
- Request ChatGPT to review, refactor, and optimize your code. It can suggest improvements such as better variable names, code structure, optimization techniques, and adding comments for better code readability and performance.
Writing/Content Creation:
1. Summarize long articles/documents:
- ChatGPT can scan through lengthy texts, extract key points, and present a concise summary, enabling quicker assimilation of information.
2. Translate content to other languages:
- Provide text to ChatGPT, and it can translate it to various languages, aiding in global communication and content dissemination.
3. Write first drafts:
- Outline your ideas, and ChatGPT can help draft initial versions, saving time and effort in the early stages of content creation.
4. Expand on ideas:
- With a starting sentence or paragraph, ChatGPT can develop a more comprehensive piece, aiding in brainstorming and content expansion.
Research/Learning:
1. Answer questions on demand:
- ChatGPT can provide immediate answers to a range of queries, offering a faster alternative to manual search.
2. Explain complex topics:
- It can break down complex topics into simpler terms, providing a clearer understanding and personalized learning experience.
3. Find code examples:
- ChatGPT can supply code samples for particular implementations, facilitating hands-on learning and problem-solving.
4. Get alternate perspectives:
- It can present different viewpoints on a topic, fostering a more well-rounded understanding and critical thinking.
Administrative Tasks:
1. Schedule meetings and appointments:
- Allow ChatGPT to handle scheduling, thus freeing up your time and mental energy for other tasks.
2. Fill out forms/paperwork:
- ChatGPT can pull information from databases to quickly fill out templated documents, saving time on routine paperwork.
3. Track tasks and todos:
- Utilize ChatGPT for task management to stay organized and focused on important work, delegating task tracking to the AI.
I use it (LLMs / GPT-4) as a sort of "realtime intern" to offload the more commoditised tasks that I don't really need to do - and that I don't contribute unique value to by doing myself. So some boilerplate CSS layout code, sketching out unit tests, initial stubs of API client code. Lots of "filling in the gaps" - which frees me up to spend more time thinking about higher level architecture, structure, business problems, abstraction/refactoring, planning, design etc.
Due to the way my brain seems to work, it also keeps me from getting stuck on small distracting problems which would previously create some resistance or procrastination - causing a break in my flow of work. It essentially keeps me in a productive flow state for longer periods than I could sustain without it. Any problem that is not in my "critical path" that I want to be focused on in the moment, and that could be easily solved by an LLM, gets "outsourced" as such.
This is in addition to it replacing about 80% of my software development / devops related Google searches. Because I work across quite a wide range of disciplines, I'm often looking for quick answers to questions about some technology stack that I'm not using daily. It's perfect for that. And I have enough familiarity with what I'm working with to sense-check/QA the responses.
I believe you do need some subject matter knowledge and experience to get the best out of LLMs though. I think many people are verbatim copy/pasting code out and complaining when it doesn't work. I very rarely find I waste any time debugging or correcting problems - because I either spot them and correct them in real time - still saving me a lot of time regardless - or I structure my prompts in a way that avoids these problems in the first place - by breaking the request down into granular enough parts that I can pretty much predict how accurate the response will be (most of the time; very).
And in the scenarios where there is a bit of back and forth, trying different ideas and debugging in realtime - this is almost always a much faster (net) process than if I had done the same iteration myself.
As a point on usage and confidentiality, I don't use integrated coding assistants like Copilot - everything I do is sandboxed - so nothing confidential goes into the LLM. Specific details in my prompts are "anonymised" as I enter them (as in, I self-censor) - so I get the benefit of a lot of assistance from LLMs but with no sharing of any information that I would deem confidential. I plan to experiment with tighter integration into my workflow (eg. Copilot type assistance) with a private LLM instance at some point, but I'm comfortable with the balance of productivity and confidentiality at this point.
I do also have a Hammerspoon shortcut that will take the currently highlighted text in any app, and send it directly into OpenAI's endpoint. So I can highlight a mixture of comments and/or code in my IDE and immediately send them to GPT-4 and have that highlighted text replaced (or appended to) by the response. This gives me contextual assistance without having a constant live feed into a proprietary LLM à la Copilot.
The 2021 knowledge limit is pretty annoying, since libraries change so often, but it's still very useful.
It's ALSO very useful for asking stupid questions that I don't want to waste someone's time with. Like for instance, in bash, when you redirect stdout to a file with > filename it works, and when you redirect stderr to stdout with 2>&1 it works, but when you try to redirect stderr to stdout and stdout to a file, it only works when you do it in this order:
command > filename 2>&1
it doesn't work if you do
command 2>&1 > filename
which feels more natural to me, so I asked ChatGPT why that is, and it explained that you have to consider it from a file-descriptor perspective: redirections are processed left to right, so in the second form `2>&1` points stderr at whatever stdout is attached to at that moment (the terminal), and only afterwards does `>` point stdout at the file. If you look in /proc/pid/fd you can see that stderr is still attached to the terminal.
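The ordering is easy to demonstrate with a throwaway command that writes one line to each stream (the filenames here are just for illustration):

```shell
# Redirections are applied left to right by duplicating file descriptors.

# Correct order: fd 1 -> file first, then fd 2 -> (a copy of) fd 1,
# so both streams end up in the file.
sh -c 'echo out; echo err >&2' > both.txt 2>&1

# Reversed order: fd 2 -> fd 1's current target (the terminal),
# then fd 1 -> file; stderr still goes to the terminal.
sh -c 'echo out; echo err >&2' 2>&1 > only_stdout.txt
```

After running this, both.txt contains both lines, while only_stdout.txt contains only the stdout line.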
I would have had to find someone deeply steeped in Unix/Linux fundamentals to explain that to me, or I could just ask ChatGPT. I've done the same thing again and again - how are HSMs really different from TPMs? How are heat pumps different from ACs?
I'll read a reference to something, and immediately go to ChatGPT to learn more - "Can you give me a brief summary of the writings and opinions of Cicero?" - and then I can spend 20-30 minutes learning more about Stoicism, Epicureanism, and whatever else I'm curious about. It's like being able to interview Wikipedia.