Tell HN: GPT copilots aren’t that great for programming
193 points by swman 12 months ago | 135 comments
For context, I'm an experienced SWE working on some fairly complex things. I've been using programming GPT copilots for 6-8 months now, and lately I've been using them less and less.

I think for complete beginners or casual programmers, GPT might be mind-blowing and cool because it can create a for loop or recommend some solution to a common problem.

However, for most of my tasks, it has usually ended up being a total waste of time and a source of frustration. Don't get me wrong, it is useful for those basic tasks for which in the past I'd do the google -> Stack Overflow route. However, for anything more complex, it falls flat.

Just a recent example from last week - I was working on some dynamic SQL generation. To be fair, it was a really complex task and it was 5pm so I didn't feel like whiteboarding (when in doubt, always whiteboard and skip gpt lol). I thought I'd turn to GPT and ended up wasting 30 minutes while it kept hallucinating and giving me code that wasn't even valid. It skipped some of the requirements and missed things like a GROUP BY, which made the generated query not even work. When I told it that it missed this, it regenerated some totally different code that had other issues.. and I stopped.

When ChatGPT first came out I was using it all the time. Within a couple of weeks, though, it became obvious that it's really limited.

I thought I'd wait a few months and maybe it would get better, but it hasn't. Who exactly are copilots for if not beginners? I really don't find them that useful for programming because 80% of the time the solutions are a miss or, worse, don't even compile or work.

I enjoy using it to write sci fi stories or learn more about some history stuff where it just repeats something it parsed off wikipedia or whatever. For anything serious, I find I don't get that much use out of it. I'm considering canceling my subscription because I think I'll be okay using 3.5 or whatever basic model I'd get.

Am I alone here? Sorry for the ramble. I just feel like I had to put it out there.




I think of them as a more intelligent autocomplete. I don’t lean on them too heavily but I find that they make my life 5% easier by autocompleting based on style and known names of things versus relying wholly on the LSP. (Copilot)

On the GPT-4 side I’ve had great luck with dealing with complex SQL/BigQuery queries. I will explain a problem, offer my schema or a psql trigger and my goals on how to augment it and it’s basically spot on every time. Helps me when I know what I want to do but don’t know precisely how to achieve it.


I think the real power will come when generative models get combined with e.g. refinement types, which are more or less analogous to contracts. Imagine, you decompose a problem into some functions with some contracts, and you get implementations for free. Plus, they will be guaranteed to match the specification!
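As a rough illustration of the idea in plain Python (no real refinement-type checker here, and binary_search is just an invented example): the human writes only the signature and an informal contract, and the generative model would be asked to produce a body that is then checked against that contract.

    def binary_search(xs: list[int], target: int) -> int:
        """Contract (informal):
        requires: xs is sorted in ascending order
        ensures:  result == -1 or xs[result] == target
        """
        # The body below is what a model would be asked to generate and
        # verify against the contract, rather than something you hand-write.
        lo, hi = 0, len(xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if xs[mid] == target:
                return mid
            if xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    # Ordinary tests can spot-check the contract even without a formal prover.
    assert binary_search([1, 3, 5, 7], 5) == 2
    assert binary_search([1, 3, 5, 7], 4) == -1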

I was discussing this with GitHub when they were hiring for Copilot, but understandably they wanted to get the basic functionality right first. I think it is the next step, and a very interesting topic for a startup or OpenAI et al. to tackle. It has the potential to make programming both more robust and faster, possibly bringing us closer to the correctness levels of classical engineering disciplines.


Similarly, Stephen Wolfram elaborated in a Lex Fridman interview on combining GPT with a computational language (like Wolfram Alpha).


This feels more or less in line with what my team has found. We gave everyone a copilot seat in our Github org, and anecdotally speaking it feels like we've seen roughly a 5-10% increase in productivity. This is of course self-reported and not measured against any metrics. Assuming we're right about that, it's an easy sell when we account for what our internal hourly rate is.

We also found the same as the OP: it's good for simple problems or boilerplate, not great for more complex problems.


I use copilot and write a whole lot of code every day. It's very very hard for me to believe anyone could get a 5-10% boost in productivity. It's often incorrect and when it is correct, it's often trivial.

It is a value add, but I'd put it closer to 0.5% if that. Over, say, 8 hours of coding time, it might save me a couple of minutes total.

Which, from a company expense perspective, is still worth it, but an order of magnitude less than your anecdote.

It's very hard for me to envision how copilot could save someone that kind of time.


I'm not diagnosed but I almost certainly have something like ADHD. It feels like my brain will take the most insignificant excuse to start paying attention to something else. Copilot boosts my productivity by 25-30%, easily, by removing many of these "excuses".

I am an experienced programmer, but sometimes I'll have a clear intention in my head of what I want to do, and the knowledge/experience of what code would accomplish that thing, and I still back away from it and start looking at something else, whether work, social media, whatever. Writing this, I know it might sound ridiculous or lazy, but it's true.

Copilot bridges this gap, often miraculously. The gap between thought and code is shortened, and the windows of time where I might lose focus seem to be drastically shortened.

For me, the key is that I do know how to write most of the code that Copilot writes for me, I'm just not good at actually writing it, or at least doing so in a sustained, consistent way.


I feel like a lot of this is just that ADHD brains (and brains in general) want to work at the right level of abstraction. If I have to add headers to 20 word docs, it’s probably a bit quicker to do that manually than to write a script to do it, but I write the script anyway because the alternative is too boring. A lot of code is like that too, and “getting the AI to understand what I’m trying to get it to do” is more interesting.


Super interesting perspective and insight. I really appreciate you chiming in. Hadn't considered something like this.


I can relate to that experience.


Interesting, the experience is very different for me. What language are you working with? I find it to be pretty strong in my Python and JS projects.


Many, including python + js. What does it provide? How do you use it?


What is the cost of analyzing the proposed solution and fixing it so it does what you want, and how does that compare to not using AI and writing the code yourself?


That's an excellent question. I'm typically able to stay focused on a task in spite of distractions, but working with generated nonsense while coding something new has derailed me quite a few times.

The novelty has definitely worn off for me, at least.


Me too. I'm writing a compiler for fun and it's extremely helpful to have it auto complete entire simple functions, like converting AST nodes to a string representation for example.


Doesn't it defeat the purpose of writing a compiler for fun if ChatGPT is going to write most of it?


I don't think the fun part is actually pressing on the keyboard.


Never done it myself, but I imagine understanding how the components all fit together is more valuable/fun than dealing with the minutiae of every function.


I find the inline suggestions of copilot distracting in all but the most mundane of cases. I know you can disable them, but it still feels like it should be enabled instead of hotkeyed


A more intuitive search as well. I am going to fly to Paris in April and wanted to know what time it would be in Denver when I landed. ChatGPT was better suited to the task than a multi-step duckbingle search


A Google search could do that 20 years ago.


I personally will never use it for flight time-zone related queries anymore. It's almost like it didn't know how time zones worked.


Is it worth paying $20 for a 5% improvement?


Unless your time is borderline worthless, obviously yes.


Yeah I think this captures my experience well. Not so much copilot as administrative assistant for my editor.


I've stopped trying to use GPTs for complex tasks like what you describe, but I find them to be invaluable for getting a lot of grunt work done on my hobby projects.

As a concrete example: GitHub Copilot has been absolutely life-changing for working on hobby programming language projects. Building a parser by hand consists of writing many small, repetitive functions that use a tiny library of helper functions to recursively process tokens. A lot of people end up leaning on parser generators, but I've never found one that isn't both bloated and bad at error handling.

This is where GitHub Copilot comes in—I write the grammar out in a markdown file that I keep open, build the AST data structure, then write the first few rules to give Copilot a bit of context on how I want to use the helper functions I built. From there I can just name functions and run Copilot and it fills in the rest of the parser.
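For illustration, a minimal, self-contained sketch (the toy grammar and every name here are made up) of what those hand-written "seed" rules can look like: a tiny token-stream helper plus a couple of small, repetitive rule functions. Once a few of these exist, Copilot tends to complete the remaining rules from their names and the grammar file.

    from dataclasses import dataclass

    @dataclass
    class Parser:
        tokens: list
        pos: int = 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def expect(self, tok):
            # Helper used by every rule: consume the expected token or fail.
            if self.peek() != tok:
                raise SyntaxError(f"expected {tok!r}, got {self.peek()!r}")
            self.pos += 1
            return tok

    def parse_atom(p: Parser):
        # atom := any single token (toy stand-in for a real expression rule)
        tok = p.peek()
        p.pos += 1
        return ("atom", tok)

    def parse_return(p: Parser):
        # return_stmt := "return" atom ";"
        p.expect("return")
        value = parse_atom(p)
        p.expect(";")
        return ("return", value)

    def parse_if(p: Parser):
        # if_stmt := "if" "(" atom ")" return_stmt
        p.expect("if")
        p.expect("(")
        cond = parse_atom(p)
        p.expect(")")
        return ("if", cond, parse_return(p))

    print(parse_if(Parser(["if", "(", "1", ")", "return", "2", ";"])))
    # -> ('if', ('atom', '1'), ('return', ('atom', '2')))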

This is just one example of the kind of task that I find GPTs to be very good at—tasks that necessarily have a lot of repetition but don't have a lot of opportunities for abstraction. Another one that is perhaps more common is unit testing—after giving Copilot one example to go off of, it can generate subsequent unit tests just from the name of the test.

Is it essential? No. But it sure saves a lot of typing, and is actually less likely than I am to make a silly mistake in these repetitive cases.


Agreed, GitHub Copilot is fantastic if you give it something to work off of and outsource repetitive tasks to it. You still need to babysit it a little bit (it made a very subtle bug once that took a while to figure out), but it does a great job of generating code based on the other things you've been doing. It's a great little assistant to make coding less tedious.


On the other hand, if you are writing repetitive code - are you missing an easy abstraction? Or if not, some easy code gen? If I find myself starting to write repetitive code, it usually means I did something wrong.


This mentality has been taken entirely too far in software development. People think if they have to write the same single line more than once then they are doing it wrong. That could not be further from the truth.

Unless you find yourself writing the exact same block of code, without any modifications, 3 or more times, it’s better to have the “duplicated” code. Until you hit 3+ times you are just guessing at future usage, and in my experience developers, myself included, are terrible at guessing the future.

I’ve watched “DRY” code turn into a monster when someone, like myself, tries to force a bunch of use cases into a single flow in order to be DRY. You end up with confusing code that’s trying to do too many things in a single function/block littered with if/else such that stepping through it (in your head) is complicated and error-prone.

I regularly ask the developers who work under me to first duplicate the code and use it a few different times before going back and deciding “can this be made generic without standing on our heads?”.

Recently I had to deal with an item list component that was made to display 2 vastly different types of data. The component was responsible for rendering out the items themselves since they had some UI similarities. The code is a nightmare with a ton of input properties to tweak the display/functionality based on the type of item you want to render out. All in the name of DRY, the “well these look similar so we better abstract this and use it in both places”.

When I was a younger developer I thought this way and wrote code this way. It’s unmaintainable, entirely too “clever” (that’s a bad thing), and hard to reason about. Nowadays I value readability and ease of understanding over “DRY for DRY’s sake”.


My experience exactly. I think the DRK (Don’t Repeat Knowledge) acronym (not my invention) is a better target. Too often people mistake similar structures of code as “repetition”.

Instead, if you focus on not repeating the important “knowledge” of your code (algorithms, business rules, etc), it’s easier to avoid the trap of over-abstracting.


Agreed. I’m fine if you want to extract sub-parts of the function “generateId”, “hashX”, “lookupY” and have 2 functions that call 2 out of 3 (different 2) of the shared helpers/functions, just don’t create a single function that if/else’s the 2 paths.

Better put, if you have logic A, B, C, D and 2 code paths:

Code path 1: ABD

Code path 2: ACD

Then you should have:

Function 1: ABD

Function 2: ACD

Not

MegaFunction: A (if X then B) (if !X then C) D

Too many people see a common “A” and “D” and rush to have a common function that if/else’s the B/C.
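A toy sketch of the contrast (all names invented):

    def step_a(x): return x + "A"
    def step_b(x): return x + "B"
    def step_c(x): return x + "C"
    def step_d(x): return x + "D"

    # Two plain code paths, each calling the shared helpers directly:
    def code_path_1(x): return step_d(step_b(step_a(x)))   # ABD
    def code_path_2(x): return step_d(step_c(step_a(x)))   # ACD

    # The "MegaFunction" being argued against, branching internally:
    def mega_function(x, is_path_1):
        a = step_a(x)
        mid = step_b(a) if is_path_1 else step_c(a)
        return step_d(mid)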


In case you're not aware of it, the name for what you like is WET ("Write everything twice/thrice") rather than DRY.


This is why I specifically identified that there are cases where there is no reasonable abstraction. DRY has become so much of a mantra in the industry that it's taken for granted that any repetition must be unnecessary, but there are times where that's just wrong.

The first example I gave is parsers—even after you've factored out as many helper functions as you can, you'll eventually hit a floor where you have to use those helper functions and piece the results together. That floor is necessarily repetitive—you end up with a bunch of 5 to 10 line functions calling the abstractions and feeding the results together.

Unit tests is another example: I find that when people get too DRY in unit tests it just ends up obfuscating what the test is testing and makes changing the program later harder. Same as with parsing, there is a floor for how much abstraction is reasonable and once you have hit that floor you still have to go through the rote repetition of stringing functions together and asserting things about the results.

With a parser you actually have to do this, but with unit tests people will as often as not give up on the tedium and just not cover all the edge cases. Copilot enables developers to stop writing the repetitive code and spend their unit testing time thinking of edge cases.


This has more to do with the fact that copilot is by nature additive and doesn't edit your code. There was a recent report about how code quality is dropping in codebases because of the repetitive code being introduced by copilot.

The next evolution, as you are saying, would be to detect if it's repetitive code and modularise or refactor it.


You can do this today with the VS Code extension. There’s a context menu for copilot to do things like “explain this code”. I asked it to make my current class more DRY and it refactored validation and parsing for a dozen or so arguments. It had one minor misunderstanding of my intent which I fixed by hand.


Copilot is the easy code gen in this case. It's even surprisingly good at Rust macros.


Yes! It's surprisingly good at Rust in general, and does a better job with macros than intellisense does.


Threads like these often feel like reading a lot of "Does anyone else feel like your Phillips screwdriver just isn't very good at hammering in nails? It kind of works, but I'd rather not use it for that."

I really have zero desire to ever be programming without Copilot again, and have been writing software for over a decade.

It just saves so much time on all the boring stuff that made me want to tear my hair out wondering why I even bother doing this line of work at all.

Yeah, you're right, it's not as good as I am at the complex more abstracted things I actually enjoy solving.

So it does the grunt work like writing the I/O calls, logging statements, and the plethora of "nearly copy/paste but just different enough you need to write things out" parts of the code. And then I do the review of what it wrote and the key parts that I'm not going to entrust to a confabulation engine.

My favorite use is writing unit tests, where if my code is in another tab (and ideally a reference test file from the same package) it gets about 50% there with a full unit test, and suddenly my work is no longer writing the boilerplate but just narrowing the tailored scaffolding down to what I actually want it to test.

It's not there to do your job, it's there to make it easier in very specific ways. Asking your screwdriver to be the entire toolbox is always going to be a bad time.


I'm a salesman, not a SWE. I use GPT-4 to write simple scripts and excel macros that would otherwise be impractical for me to figure out manually. At first I was trying to use GPT-4 to write emails, but I'm very picky on tonality and was rarely satisfied with the results.


Interesting! What do those scripts do?


Yeah, I am gonna even judge anyone who says copilot does not make them productive. Like what code could you possibly be writing that copilot is not autocompleting you properly? Yeah, if you don't know what to write then copilot can't autocomplete you.


It seems like you’re disinclined to believe people’s personal experience as being different from your own, so you judge them negatively.

But have you thought about it in the reverse? What does it say about your work that it is so easily replaced/supplemented while others report that theirs is not?


Never said I am above judgement; judge away. It's as basic a right as free speech.

Copilot is about autocomplete. You write one word and it writes the 99 others because the rest is predictable. If it's not working for you, it means you don't know that first word, in my opinion, and instead of fighting back you should upskill


No need to fight anything. When it is actually useful, I’ll be happy to use it.

Is “upskill to prompt ai” the new “learn to code”?


Lmao could very well be.


100% agree with you. Copilot does the really boring stuff for me so I can work on actually solving the interesting problems. Copilot was down for a few hours the other week and I was so sick of typing out all the boilerplate stuff it usually does for me that I moved onto non-coding tasks until it was back online.


Serious question: why is anyone writing "boilerplate" when we have templating engines and macro systems or even simply higher-order functions? I feel like I haven't had to write any appreciable amount of boilerplate in at least 20 years as I can automate that garbage with better abstractions.


I believe the mistake people make with copilot is that they attempt to write large projects or functions when they lack the knowledge base of the underlying technology.

I prefer to use it as more of an autocomplete on a line per line basis when writing new code.

Typically, I use it for small and concise chunks of code that I already fully understand, but save me time. Things like "Here's 30 lines of text, give me a regex that will match them all" or "Unroll/rewrite this loop utilizing bit shifting".

I also use copilot as a teacher. Like to quickly grok assembly code or code in languages that I do not use everyday. Or having a back and forth conversation with copilot chat on a specific technology I want to use and don't fully understand. Copilot chat makes an excellent rubber duck when working through issues.


It's pretty amazing how much of the boring parts of programming can be abstracted away with intelligent temporary comments and enough neighboring context using Copilot.

Maybe they need to do a better job at teaching users how to be productive with the tool.


He’s not using copilot though, and I think that is part of the problem.

Copilot gives me what I need to scaffold everything I am building.

Asking ChatGPT questions is good for kicking around ideas, but little more.


Having a conversation with it is kind of amazing, perhaps underrated.

A very long series of questions can totally brief you on tech you don’t understand or have a base in.


Unfortunately the hallucinations make it problematic. Yesterday I was working in a chart library I had never used and Copilot was adamant in convincing me to use non-existing methods. You can question it, but when it's backtracking you have no idea if you triggered it or if the information was wrong.

At least with practical implementations you can still verify the output through tests or trial and error but it becomes even more fragile when asking about facts or knowledge.


I'm talking about something a little different. When you get into the level of method names and so on then yes you're definitely right.

But I have had incredibly useful conversations on broad strokes stuff. A question like "what are some of the various options for scoring the similarity between news stories for the purpose of showing similar content to a user" or something can be really really useful, much more so than trying to find that perfect blog post about it, because you can ask follow up questions and sort of evaluate the various key words and concepts you'll want to eventually learn.

Stuff like that.


> I'm talking about something a little different.

Hallucinations are still a problem. The AI doesn't actually know "what options are useful for scoring ...", it's just regurgitating what someone else told it and when you ask it follow up questions it's unable to reason.

AI's main feature is the first thing we disqualify people for in interviews. Its knowledge is pure buzzwords and hype.

Example: I just asked chatGPT to tell me why framework X is better than framework Y. I am familiar with both frameworks. After the list chatGPT gives a disclaimer that starts with "However, it's essential to note that Y also has its own strengths". It then proceeds to list a few features that both frameworks share, a few features that are subjectively better and a few features that are actually bugs/code smell.

That's not the AI's comparison of the two frameworks, that is just someone's blog post that has been regurgitated to me in a way that gets around copyright.


I was writing a sudoku solver today and there was a bug that took me a while to track down (can't remember exactly how long - could be a few minutes to a couple of hours).

I asked ChatGPT to find the bug and it didn't find it. I also asked GPT4-Turbo to find the bug and it also couldn't find it. In the end I found the bug manually using tracing prints.

After I found the bug, I wondered if GPT4 could have found it so I gave the buggy code to GPT4 and it found the line with the bug instantly.

To me this shows that GPT4 is much better than GPT4-Turbo and GPT-3.5.


What’s the difference between GPT4 and GPT4-Turbo?



In short, turbo supports 128k tokens vs non-turbo's 8k. So if your input fits into 4k (leaving 4k for output), there's no difference.


It's like the story of the amazing singing dog. Not amazing because he sings well (he doesn't) but because he sings at all.


> Who exactly are copilots for if not beginners?

The thing is: in software engineering, you're very often "a beginner" when using new technology or operating outside your familiar domain. In fact, you need to learn constantly just to stay in the business.


This is an important point.

I’m not a beginner per se - I started writing Objective-C and Python more than a decade ago and I’ve written a depressingly large amount of SQL in that same period. But when my current employer decided I was going to be a web developer, I needed to start from the ground up with Django.

Copilot has been a godsend for me. I still need books and Stack Overflow, but the conversations I’ve had with Copilot about architectural decisions, project structure, external library choices, syntax, etc., have saved me a ton of time that I would have otherwise spent reading ad-riddled Medium articles to learn.

As a not-beginner beginner, it’s been a huge productivity boost for me.

Agree with op though that it’s pretty bad with SQL. Other than reminders about basic syntax, conversions from T-SQL to Oracle SQL syntax, or mindless column aliasing, I don’t bother much with it.


This is the essence of my experience with LLMs. I don't need their help to walk my talks. But they help me immensely with skipping phases like "let's climb this curve for 30min to create some config and forget most of what you did before, then load it back and forget what you learned". They broaden your knowledge, not deepen it. It's a vague-memories extension with everything in it.


In my experience:

- For basic autocompletion it's OK; on lazy days I even find myself thinking "why is the ai not proposing a solution to this stupid method yet?".

- For complicated coding stuff it's worthless; I've lost a lot of time trying to fix AI-generated code only to end up writing the stuff again, so I rely on google/stackoverflow for that kind of research.

- For architectural solutions or research like looking for the right tool to do something, I found it quite useful as it often presents options I didn't consider or didn't know about in the first place, so I can take them into consideration as well.


I get a tremendous amount of value from ChatGPT. Like you said, for things where I would previously have to google -> stack overflow it's incredibly useful. It works as an insanely good search/autocomplete and that is worth a ton. I love being able to sketch a function with an example input/output and have it return something correct, or at least close, 95% of the time. As an experienced dev it's easy for me to look at it and get to 100%.

It's also so helpful to be able to just ask questions of the documentation on popular projects, whether it be some nuance of the node APIs or a C websockets library, it saves me countless hours of searching and reading through documentation. Just being able to describe what I want and have it suggest some functions to paste into the actual documentation search bar is invaluable.

Similarly I find it's really helpful when trying to prototype things. The other day I needed to drop an image into a canvas. I don't remember off the top of my head exactly how to get a blob out of an .ondrop (or whatever the actual handler is), and I could find it with a couple of minutes of google and MDN/SO, but if I ask ChatGPT "write me a minimal example for loading a dropped image into a canvas" I get the exact thing I want in 10 seconds and I can just copy paste the relevant stuff into MDN if I need to understand how the actual API works.

I think you're just using it wrong, and moreover I think it's MUCH MUCH more useful as an experienced engineer than as a beginner. I think I get way more mileage out of it than some of my more junior friends/colleagues because I have a better grasp on what questions to ask, and I can spot it being incorrect more readily. It feels BAD to be honest, like it's further stratifying the space by giving me a tool that puts a huge multiplier on my experience allowing me to work much faster than before and leaving those who are less experienced even further behind. I fear that those entering the space now, working with ChatGPT will learn less of the fundamentals that allow me to leverage it so effectively, and their growth will be slowed.

That's not to say it can't be an incredibly powerful learning tool for someone dedicated to that goal, but I have some fear that it will result in less learning "through osmosis" because junior devs won't be forced into as much of the same problem solving I had to do to be good enough, and perhaps this will allow them to coast longer in mediocrity?


"I worry our Copilot is leaving some passengers behind" https://news.ycombinator.com/item?id=39411912


I have been using Cody (sourcegraph.com/cody) for about 6 months now and it's completely changed the way I write code. But, there was an adjustment period to learn how to work with the tool. Expecting a code copilot to just give you working code 100% of the time is unrealistic today, we may get there eventually though.

I've been writing code for close to 20 years now across the full stack. I have written a lot of bad code in my life, I have seen frameworks come and go, so spotting bad code or bad practices is almost second nature to me. With that said, using Cody, I'm able to ship much faster. It will sometimes return bad answers, I may need to tweak my question, and sometimes it just doesn't capture the right context for what I'm trying to do, but overall it's been a great help and I'd say has made me 35-40% more efficient.

(Disclaimer: I work for Sourcegraph)


A lot depends on what you use them for.

I don’t find them that great at large scale programming and they couldn’t do the hard parts of my work, but a lot of what I do doesn’t need to be “great.”

There's the core system design and delivery of features; that it struggles with. Anything large seems to be a struggle.

But generating SQL for a report I do sporadically on demand from another team?

Telling me what to debug to get Docker working (which I am rarely doing as a dev)? Anything Shell or Nginx related (again, infrequent, so I am a beginner in those areas)

Generating infrequently run but tedious formatting helper functions?

Generating tests?

Basically, what would you give a dev with a year of experience? I would take ChatGPT/Copilot over me with 1 year of experience.

The biggest benefit to me is all the offloaded non-core work. My job at least involves a lot more than writing big features (maybe yours does not).


It’s incredible for my use case.

I have been involved in software and implementing technical things since the late 90s and from time to time have been pretty good at a few things here and there but I am profoundly rusty in all languages I sort of know and useless in ones I don’t.

But I’m technical. I understand at sort of a core level how things work, jargon, and like the key elements of data structures and object oriented code and an MVC model and whatever else. Like I’ve read the right books.

Without ChatGPT I am close to useless. I’m better off writing a user story and hiring someone, anyone. Yes I can code in rails and know SQL and am actually pretty handy on the command line but like it would take me an entire day and tons of googling to get basic things working.

Then they launched GPT and I can now launch useful working projects that solve business problems quickly. I can patch together an API integration on a Sunday afternoon to populate a table I already have in a few minutes. I can take a website I’m overseeing and add a quick feature.

It’s literally life changing. I already have all the business logic in my head, and I know enough to see what GPT is spitting out and if it’s wrong and know how to ask the right questions.

Unlike the OP I have no plans to do anything complex. But for my use cases it’s turned me from a project manager into a quick and competent developer and that’s literally miraculous from where I’m standing.


I'm the beginner you are talking about, I have mixed feelings about coding with AI.

I'm not a programmer, I'm a student in acc/fin, to use a weird analogy, if you are a chef, I'm a stereotypical housewife, and we think differently about knives (or GPTs).

I differentiate between tuples, lists and dictionaries not by the definition, but by the type of brackets they use in Python. I use Python because it's the easiest and most popular tool, and I use Phind and other GPT tools because programming is just a magic spell for me to get to what I want, and the less effort I have to spend the better.

But it doesn't mean that GPTs don't bring their own headaches too. As I get more proficient, I now realise that GPTs are now giving me bad or inefficient advice.

I can ask a database related question and then realise, hang on, despite me specifying this is for Google BigQuery, it's giving me an answer that involves some function I know is not available on it. Or I read the code it recommends for pandas and realise, hang on, I could combine these two lines into one.

I still use GPT heavily because I don't have time to think about code structure, I just need the magic words to put into the Jupyter cell, so I can get on with my day.

But you don't, and you actually think about these things, and you are realising the gaping flaws in the knife's structure. That's life. You have a skill, and pros and cons come with it.

Like a movie reviewer who can no longer just go to the cinema and enjoy something for the sake of it... you also can't just accept some code from a GPT and use it, you can't help but analyse it.


GPT-4 was addictive for me. I subscribed to replace online language classes and it was an excellent instructor. After passing the exam I unsubscribed and I'm living fine with 3.5. My screen time definitely lowered :) I bought $5 of GPT-4 API credits to use when 3.5 really can't do the job, but it rarely happens. Asking MS Copilot is also another great way to use GPT-4 for free.

On the job I mostly use GH Copilot for code completion and it's great, as it provides suggestions that are in line with the code style of my team. On serious tasks all chat bots hallucinate, and I also feel I'm spending as much time correcting them as if I had studied the topic from scratch, because I (want to?) believe what they say but end up wasting a lot of time fixing their suggestions.

I'm also thinking about SQL: today it suggested an `UPDATE table JOIN another_table SET column` and I was surprised I could use JOINs in update statements, but the bot was so sure it was the right keyword. I tried to understand where _else_ the syntax error could be, until I turned back to the official postgres documentation and verified there's no JOIN, only FROM, just like I remembered.


Between your post and Air Canada’s learning they have to honor policies their chat bot hallucinates and relays to customers, it seems like the zeitgeist is starting to comprehend the inherent limitations and risks of LLMs.

I find that kind of heartening, honestly.

But it’s by no means a death sentence for AI. Plenty of dimensions for massive improvement.


It wouldn’t surprise me if ChatGPT is an improvement in Air Canada customer service anyway in terms of information provided.

It is just that the bot in this case wrote it down, which made AC liable.

I’m an Air Canada elite and am part of several Facebook groups of similar people. It is notoriously difficult to get clear information on Air Canada policies for anything. Even concierge (for Air Canada’s top tier loyalty members) staff are often giving contradictory information.

Their rules for everything are extremely complicated and they have a fairly large back office constantly fixing even addition errors in terms of points allocation and status progression. They literally aren’t adding up spend totals correctly.

It is quite possible that Air Canada just didn’t tell the bot anything about bereavement fares.


Now if only the C*O class would learn that they can help but certainly won't replace developers. Then we can get back to a more normal hiring market. This downturn in tech is very bizarre and much worse than the dotcom bubble.


That’s a very interesting idea: that the layoffs and hiring freezes (and the resulting reality that morale is at an all-time low among every one of the dozens of engineers I interact with on a frequent basis) are a consequence of leadership banking on the idea that they can run their humans into the ground and then replace them with automation.

There used to be the tiniest bit of restraint when the only available replacements were also sentient meatbags which would need to be trained.

It makes sense that this rampant, gleeful, wholesale exploitation has been enabled by the idea that AI can replace folks faster than attrition brings things to a halt.

Sick, but feels truthy


While I’m more on the infrastructure side of things, I see a similar issue. Like you mentioned, it’s great for lookups of API documentation and getting examples etc. I have also used it for things like templates and boring boilerplate. I have come to look at it as a lookup tool and something that converts my thoughts into code. I could see myself sitting at home and doing a lot of coding by voice with a VR headset in the future if the tools continue to develop. At the moment I think we just need to come up with a better way of integrating it into our workflow. I’m starting to wonder if something like visual programming could work well, with the “ai” “figuring” out the content of the blocks we connect and basically letting us influence the generated code by the IO. That could be a solution to coding on tablets and phones with minimum typing.


Not alone. I was pretty grouchy about it a few months ago. It seems to be getting better, though.

I code all over the stack, usually some bizarre mix of python, pyspark, SQL, and typescript.

TS support seems pretty nice, and it can optimize and suggest pretty advanced things accurately.

Py was hopeless a few months ago, but my last few attempts have been decent. I've been sent down some rabbitholes though, and been burned -- usually due to my not paying attention and being a lazy coder.

PySpark is just the basics, which is fine if I am distracted and just want to do some basic EMR work. More likely, though, I'll rummage my own code snippets instead.

The speed of improvement has been impressive. I'm getting enthused about this stuff more and more. :)

Plus, who doesn't enjoy making random goofy stuff in Dall-E while waiting for some progressbar to advance? That alone is worth the time investment for me.


My personal reason for not using GitHub Copilot / etc:

I was testing ChatGPT-3.5 with F# in 2023 and saw some really strange errors. Turns out it was shamelessly copying from GitHub repos that had vaguely related code to what I was asking - this was easy to discover because there's not much F# out there. In fact the relative sparsity of F# is precisely why GPT-3.5 had to plagiarize! It did not take long to find a prompt that spat out ~300 lines verbatim from my own F# numerics library. (I believe this problem is even worse for C numeric programmers, whose code and expertise is much more valuable than anything in .NET.) OpenAI's products are simply unethical, and I am tired of this motivated reasoning which pretends automated plagiarism is a-okay as long as you personally find it convenient.

But even outside of plagiarism I am really nervous about the future of software development with LLMs. So often I see people throwing around stats like "we saw a 10% increase in productivity" without even mentioning code quality. There are some early indications that productivity gains in LLM code assistance are paid for by more bugs and security holes - nothing that seems catastrophic, but hardly worth dismissing entirely. What is frustrating is that this was easily predictable, yet GitHub/OpenAI rushed to market with a code generation product whose reliability (and legality) remains completely unresolved.

The ultimate issue is not about AI or programming so much as software-as-industrial-product. You can quickly estimate increases in productivity over the course of a sprint or two: it's easy to count features cleared and LoC written. But if there are dumb GPT "brain fart" errors in that boilerplate and the boilerplate isn't adequately reviewed by humans, then you might not have particularly good visibility of the consequences until a few months pass and there seem to be 5-10% more bug reports than usual. Again, I don't think the use of Copilot is actually a terrible security disaster. But it's clearly a risk. It's a risk that needs to be addressed BEFORE the tool becomes a de facto standard.

I certainly get that there's a lot of truly tedious boilerplate in most enterprise codebases - even so I suspect a lot of that is better done with a fairly simple deterministic script versus Copilot. In fact my third biggest irritation with this stuff is that deterministic code generation tools have gotten really good at producing verifiably correct code, even if the interface doesn't involve literally talking to a computer.


I have been programming for over 40 years, mostly in fairly verbose languages but nowadays mostly in JavaScript and clojure, both of which can be very concise.

I find I spend most of my time thinking about the problem domain and how to model it in logic, and very little time just banging out boilerplate code. When I want to do the kind of task a lot of people will ask gpt for, I find it's often built into the language or available as an existing library - with experience you realise that the problem you're trying to solve is an instance of a general problem that has already been solved.


You are not alone.

At the core, AI/ML is giving you answers that have a high probability of being good answers. But in the end this probability is based on averages. And the moment you are coding stuff that is not average, AI does not work anymore because it cannot reason about the question and 'answer'.

You can also see this in AI generated images. They look great, but the average component makes them all look the same and kind of blurry.

For me the biggest danger of AI is that people put too much trust in it.

It can be a great tool, but you should not trust it to be the truth.


I've found that they offer poor solutions but good starting points. I learned a lot more about using Leaflet and d3 for a personal project because GPT4-Turbo gave me a solution, which I took as a starting point. It gave me insight that illuminated the documentation for each that was previously opaque to me. As such, I find value in GPTs as accelerators for learning. They're non-judgmental so you needn't worry about whether or not you asked your question correctly on SO.


In my experience, if I use github copilot chat as a natural language interface to programming it's basically flawless. That is, I tell it exactly what I want the code to be like, but in english, and the llm just translates that into code for me. When using the inline github copilot, I find that whenever I am making changes that are obvious, it gets things 100% right. If there are no obvious mistakes or additions to my code, it makes up completely useless noise. I just ignore that.

I have had some experiences where I did not know the language or library that I wanted to work with, and the results were disastrous. It hallucinates exactly the api that I want, but unfortunately it doesn't exist. As an aside -- that api SHOULD exist. The api design is very obviously missing this extremely important feature. I think llms may help people to figure out what kind of apis and features should be intuitively expected and increase software quality. But it still wasted a ton of my time.

My conclusion is actually that llm copilots will be bad for beginners because they don't know enough to specify the request in enough detail, and they don't know enough to see any errors and be able to understand why they were generated.


Yeah it's obviously useless for anyone doing advanced programming work. I'm writing compilers (for AI models of all things), and it's just not there. It has some amazing wins. I have fed it a description of ISAs for proprietary chips and had it generate correct kernels, and such. That is impressive, and it's very useful for references for quick tutorials on common enough things (but do check the docs). However, it's not able to do abstract thinking really yet.

On the other hand? If I want to add payments to my web app (not a web dev, but I mess around with side projects for fun), and I don't want to read the stripe library docs for example, it's pretty good at getting me started installing the right library and generating the boilerplate. Same with various boilerplate for code generation.

On the other hand, ChatGPT (which has been clearly trained on stuff I've read) has an uncanny ability to describe my esoteric side projects much better than I can, so there is that. I have some DSLs written in Haskell and it's actually really good at providing answers with it, and explaining things succinctly.


Mathematical reasoning only exists associatively in any pure language model, so inherently any task that requires any sort of mathematical complexity isn't one you should be approaching without at least defining the logic, algo, interfaces, inputs, expected outputs, tests etc. yourself first IMO.

With that being said, IMO, you can optimize your dev work to get the most use from GPT4 by providing better context, examples etc. these don't really fit into the co-pilot workflow but can fit into instruction-tuned model workflows. (GPT4 / Mixtral / Mistral / CodeLLAMA etc.)

E.g. use the first 50k tokens in the context window to pre-prompt with relevant context from your code repo/sdk documentation/examples whatever prior to actually prompting with your task. And demonstrate a test in your prompt that displays what you want. Taking a "few-shot" approach with specified validation (in your test) results in considerable improvements in accuracy for programming (and other) tasks.
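As a rough illustration of that layout (the helper, the file contents, and the example task are all hypothetical): context excerpts fill the front of the window, then one worked example with a validating test, then the actual task.

    def build_prompt(context_snippets: list[str], task: str) -> str:
        # Relevant repo/SDK/doc excerpts go first, filling the early context window.
        context = "\n\n".join(context_snippets)
        # One worked example with a validating test demonstrates the expected shape.
        example = (
            "Example task: parse an ISO date string.\n"
            "def parse_date(s): ...\n"
            "def test_parse_date():\n"
            "    assert parse_date('2024-02-01').year == 2024\n"
        )
        return (
            f"Context from the codebase and docs:\n{context}\n\n"
            f"{example}\n"
            f"Now, in the same style, complete this task and include a test:\n{task}"
        )

    prompt = build_prompt(
        ["<excerpt from your SDK docs>", "<relevant module from your repo>"],
        "Write a function that retries a request with exponential backoff.",
    )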

(YMMV)


I learned Haskell recently, and I thought ChatGPT was a great support for that effort. It helped me with definitions, standard recipes and disambiguation. What it wasn't very good at was coding: it couldn't solve anything I did not know how to solve, but it was quite good for things I did know how to solve but couldn't be bothered to. For example, I could ask it to re-write a messy snippet of maps and filters as a list comprehension, and it would. The comprehension it produced may not work, but I could still glean enough from the approach that it would be useful.

I'd also ask it to re-write existing code "more canonically" or "more succinctly". Again, it fails to do this almost every time, but often it uses something that I didn't know existed whilst doing it, which I found to be valuable.


As a newb to programming I find it helpful for boilerplate and “smart” autocomplete tasks (which makes sense, as an LLM is simply autocomplete on steroids). I have found it helpful for Regex as well, which is a topic that still catches me. It can also be interesting (albeit typically somewhat useless) to bounce ideas off of it and have it attempt to solve problems. It rarely helps, but in my experience humans haven’t been much better outside of my professor.

I’ve found it’s best at completing repetitive but simple tasks, such as organizing data or making simple scripts. Anything more complex or that cannot be easily explained in a way it “understands” will run into problems. For instance, telling it to program a custom variant of Dijkstra’s Algorithm for my specific input data? No go, it doesn’t work. However, a simple script to replace any text from column 1 in a CSV with its neighbor text in column 2? Works first try.
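For illustration, a guess at what that second script might look like (the exact behaviour and argument names are assumed, not taken from the comment): read (column 1, column 2) pairs from the CSV and apply each replacement to a text file.

    import csv
    import sys

    def replace_from_csv(csv_path: str, text_path: str) -> str:
        # Each CSV row maps text in column 1 to its neighbor text in column 2.
        with open(csv_path, newline="") as f:
            pairs = [(row[0], row[1]) for row in csv.reader(f) if len(row) >= 2]
        text = open(text_path).read()
        for old, new in pairs:
            text = text.replace(old, new)
        return text

    if __name__ == "__main__":
        sys.stdout.write(replace_from_csv(sys.argv[1], sys.argv[2]))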


Once CoPilot can understand the entire workspace and have a deep understanding of your project, it will likely seem a lot smarter than it does now.

All I use it for is to avoid repetitive stuff as it's exceptionally good at guessing my next step.

The autocomplete bits feel wrong most of the time, and as fast as API updates happen it's mostly a wash in terms of productivity.


1) Some areas where it was a great success and saved me a lot of time:

* For my current gig, I recently had to play the best computer game of all time: Linux. ChatGPT was invaluable here. "How do I figure out where the network connections are managed on this device?"

* Also ChatGPT managed to assist me in creating a task breakdown for a prototype that I was building

2) Area where it was of some help. I think using traditional methods would have been just as good in these cases:

* Sample code for the usage of TS clients for MS Graph and notion

* generating an almost working OPC UA dummy server in TS using "node-opcua"

3) Areas where it was of no help and mostly a one way box where I could enter my thoughts. Not sure if using ChatGPT here was a win or a loss, though - you could still think of the process as some form of notetaking with feedback:

* Working on some complex tree recursion things

* trying to come up with the interface for an infrastructure-as-code framework


Having some "AI" that generates code in your IDE is roughly equivalent to copy-pasting code from the first google hit. Great for search/autocomplete/filling out obvious patterns. The grunt work. But terrible for doing anything novel or interesting - which presumably is the entire reason you have a software job.


It being useful for the basic Google/Stack Overflow tasks makes it worth it for me. It saves me a lot of time being able to simply bring up the exact answer to whatever small "how-do-I-do-this-again" problem that fits into my code without rewriting it.

You're right though that it isn't very useful for complex problems. Complex SQL queries in particular are hopeless. But I probably only spend 10-20% of my time on stuff where I really have to think about what I'm doing anyway.

The autocompletion is the best part for me. When I start writing a function called `mapUsersToIds` or whatever and it autocompletes the 3 or 4 lines of code that I now don't need to write, that saves me a ton of time - I'd say I'm 30% more productive now that I don't need to spend time typing, even if the autocompleted code is exactly what I was going to write in the first place.


I used Copilot, but the code it generated was either bad or it made up things that didn't exist. The same goes for ChatGPT.

Both tools work if you don't care about the quality of the code and are working with boring tech, otherwise, it's a total disaster.


I agree, they aren't great at programming, but I've found value from LLMs for rubber-ducking and for quickly getting into a new framework / library / language by using your existing knowledge and letting the LLM augment that for you. "How would I do X using Y", "Is this performant, and does it have these pitfalls". It works surprisingly well: even though the output may not be 100% correct, it will get you started faster / usually point you in the correct direction. I also used chatgpt to generate marketing text, which I absolutely can't write. [1]

1: https://mouselocker.cloudef.pw/


Wow, not a Mac user, but thanks for the use-case idea! Absolutely missed it before.


As someone building in the AI space I can give my unbiased opinion. I do not think AI is the big productivity boost we expect it to be yet. I have been a software engineer for around 6 years now, and the only AI tooling I prefer and really cannot live without today is autocomplete (a better and smarter LSP).

When it comes to making a more complicated change, things are never easy because of the limited context window and the general lack of reasoning and inherent knowledge of the codebase I am working with.

Having said this, GPT4 has been really good for the one off questions I have about either syntax or if I forget how to do "the thing I know is possible I am just not sure" or the mundane things like docker commands or some other commands which I need help with.

But... if you guys have seen Gemini-1.5 Pro, I was seriously mind-blown and I think it's the first time I felt an LLM is better than me, and that has to do with code search. I have had my fair share of navigating large codebases and spending time understanding implementations (clicking go-to-reference, go-to-definition) and keeping a mental model... the fact that this LLM can take a minute to understand and answer questions about a codebase does feel like a game changer.

I think the right way to think about AI tooling for programming is not to ask it to go and build this insane new feature which will bring lots of money for you, but how it can help you get that edge in your daily workflows (small quality of life changes which compound over time, just like how the LSP is taken for granted in the editor nowadays).

Another point to mention here, which I believe is a major miss, is that these copilots write code without paying attention to the various tools we as humans would use when writing code (LSP, linters, compilers etc). They are legit writing code like they would on a simple notepad, and that is another reason why the quality is oftentimes pretty bad (but copilot has proved that with a faster feedback loop and the right UX it's not too big a hassle)

We are still very early in this game and with many people building in this space and these models improving over time I do think we will look back and laugh how things were done pre-AI vs post-smart-AI models.


I tried it once. It kept lying to me and misunderstanding trivial sentences. Maybe it's better these days, but I hate it with a passion.

me: Can you tell me how to implement XYZ without using the standard library? gpt: use function y from the standard library.

throws computer out of window

It can't even properly understand basic language. It remembers occurrences and does statistical analysis... it just pretends it understands, in all cases, even if it's accidentally right. A multi-billion dollar project to get answers from a toddler blurting out random words it's heard around it...


Anecdotally people are saying it may actually have gotten significantly worse recently. So it’s possible that the results you might have got from GPT-4 a month ago are no longer anything like as accurate or useful.


It can help with complicated tasks. When prompting via the chat interface, it comes more naturally to prompt with a full description of the problem.

With too much assumed context, it only does a good job of spitting out the answer to a common problem, or implementing a mostly correct version of the commonly written task similar to the one requested.

When you use copilot, are you shaping your use to its workflows? Adding preceding comments to describe the high-level goal, the high-level approach, and other considerations for how the code soon to follow interacts with the rest of the codebase?


I haven't used copilot much at all, but I use ChatGPT occasionally. One of the queries I find it does the best with is: "What's an idiomatic way to do XYZ in language ABC".


Similar experience. I really only still use Copilot to:

- generate short blocks of low-entropy code (save some keystrokes)

- get me off the ground when using a new library (save some time combing through documentation)


I’ve had good success with ChatGPT and with Copilot, but I’ve found it really depends on how I use it and my expectations.

It is NOT good at generating complex functions or code, only simple cases, and I do not use copilot for “comment driven development” as my experience with that wasn’t good.

I use it as a fancy autocomplete, I use it to explain code, I use it for refactoring code (eg it’s good at tasks like “change all variables back to snake_case”), I use it to provide api usage examples for functions, I use it to generate basic docstrings (which I then hand edit to add the detail I need), and I use it to point out flaws in code (it’s bad at telling me if code is good, but it’s pretty good at spotting mistakes if the code is bad).

I’ve also used it to generate functions or algorithms that I wouldn’t have been able to do by myself and that includes complex SQL (TimescaleDB windowed queries), in an iterative process. I find the best results are when you generate short snippets at a time with as few requirements as possible but with as much detail on what you’re asking for as possible, and then it’s an iterative process. You also need to guide it, it works if you already know how you want to approach it and just want the AI to figure out the nitty gritty specific details. I’ve used this to generate a concurrent bitset-based memory allocator. I don’t think I could have done it by myself.

I’ve also had success generating little Python scripts in their entirety, although to get the exact behaviour I wanted was a process of repeatedly pointing out problems and eventually noticing it was not where the LLM was editing. So it’s important you understand every bit of code that it generates and don’t accept code until you’re certain you understand it.

As an example of a recent script it generated for me, it reads a TOML file containing a table, it looks for a key that you specify as a command line arg. It then generates a CSV for all keys it finds. Eg it turns this:

    [test.a]
      x = 1
      y = 2
    [test.b]
      x = 3
      z = 4
Into this:

    x,y,z
    1,2,
    3,,4
It did this a lot faster than it would have taken me to do it myself.
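A minimal sketch of that kind of script (names assumed; Python 3.11+ for tomllib): collect every sub-table under the table named on the command line and print one CSV row per sub-table, leaving blanks for missing keys.

    import csv
    import sys
    import tomllib

    def main() -> None:
        toml_path, table_name = sys.argv[1], sys.argv[2]
        with open(toml_path, "rb") as f:
            data = tomllib.load(f)

        rows = list(data.get(table_name, {}).values())  # e.g. the tables under [test.*]
        headers = sorted({key for row in rows for key in row})

        writer = csv.DictWriter(sys.stdout, fieldnames=headers, lineterminator="\n")
        writer.writeheader()
        for row in rows:
            writer.writerow(row)  # DictWriter leaves missing keys blank

    if __name__ == "__main__":
        main()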

I’ve also had good experience using ChatGPT to do pros and cons lists of approaches or to do some basic research, e.g. into new algorithms (now that it can search the web).


You're not alone, I have the exact same experience.

I already gave up on anything complex, but it also fails at relatively simple things.

It goes like this: The first answer does something useful, but is not the full solution or contains a bug.

When told, it apologizes and gives code that does not even compile. Then, when I try to nudge it in the right direction, it gets worse and worse.

Then it hallucinates a non-existing library that should solve the problem.

And in the end I end up writing the code myself...


Better to restart the discussion than to keep asking for corrections; the context window gets filled with confusion.


yes, they are only good for simple cases, but at least for me those are not satisfying to solve anyway

so letting it write some boilerplate (copilot) or solve a simple problem that I don't want to waste my time on provides enough value to be worth spending money on

also I find it very useful for filling out tests, where it mostly correctly fills in the other test cases given a single example - again, not hard but time consuming

for harder problems I often find it useful when I'm debugging something I'm not familiar with; it usually gives a lot of very generic advice but often suggests some valid starting points for the investigation (eg recently I had to debug an openvpn server and I hadn't touched one in years, the issue manifested as an error about truncated packets and gpt correctly pointed me at TLS negotiation)

sometimes I also use it to rewrite/refactor code to see if maybe I would like a different style/approach - this is rather rare; at the beginning I found it useful, but now I think gpt has started to often corrupt code or replace sections with a comment, which makes it much less useful


I cancelled Copilot after trying it for a few months because sometimes it would take forever to suggest code. Now I use Duet AI and it is code suggestion on steroids. Sometimes it suggests such good code that I feel terrible for not thinking of that approach myself. But I think we are progressing fast with these AI things and it won't be long before everyone can code with these tools.


I feel you. In my experience the paid ChatGPT's answers do not outperform the basic google/stackoverflow route for solved problems - which is not surprising considering how those models are created. For more complicated stuff, in my case, for example, weird Java dependency injection use cases, it gave me answers that were just as wrong as the ones I could have found on the internet anyway. Only faster.


I don't know how to code. I wish I did, but it is a skill I have never been able to pick up. I am hoping that eventually, at some point, AI will be able to act as a zero-code interface so I can create something, with help of course.

I can't code, but I understand coding concepts like variables, Booleans, etc.


> or learn more about some history stuff where it just repeats something it parsed off wikipedia or whatever.

Be careful with that too; it will also spit out whatever urban legend it read on a subject without distinguishing between the facts it got from Wikipedia and the bullshit it read elsewhere.

Keep in mind, they are language models, not knowledge models.


I've had times where the AI saved me lots of minutes of typing. Example: I have a piece of data that needs to be mapped to an almost identical piece of data, but the types differ just enough that I need to type out the conversion for each field. Having it autocomplete all of it (tens of lines) at once is such a relief.
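Something like this, roughly (made-up types and field names, just to illustrate the kind of mechanical mapping it autocompletes well):

    from dataclasses import dataclass

    @dataclass
    class ApiUser:       # hypothetical source type
        id: str
        age: str         # arrives as a string
        email: str

    @dataclass
    class DbUser:        # hypothetical target type, almost the same shape
        id: str
        age: int         # but stored as an int
        email: str

    def to_db_user(u: ApiUser) -> DbUser:
        # the tedious field-by-field conversion Copilot can fill in at once
        return DbUser(
            id=u.id,
            age=int(u.age),
            email=u.email,
        )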


it’s for satisfying the wet dreams of management in large companies dreaming of replacing developers with AI.

the issue is that it will take a long time before they wake up and scramble to re-hire the people they laid off. in the meantime i predict a lot of developers will have pivoted to other careers, retired, become AI devs etc.


You're not alone. This has been the experience of myself and nearly all the people on my team. Basically, everyone has stopped bothering with using LLMs for this purpose. In our experience, using these tools doesn't increase productivity, but does decrease code quality.


Since I don't see it mentioned here at all, I highly recommend trying out https://cursor.so - GPT4 powered and absolutely helpful with refactoring code and adding new features, not just finishing the current line.


What an odd decision to fork VSCode and ask developers to switch instead of extending VSCode.


Not sure where, but I had read it’s because they wanted to integrate with functionality that VSCode doesn’t provide by extensions. So they had to fork it to realize their vision.

Edit: https://news.ycombinator.com/item?id=38055805


I agree, but I think of it more as doing the dirty work for me, like repeating switch cases (instead of me copy-pasting and editing them by hand). I'd say it's worth the 3-4% benefit it gives me.


I haven't used GitHub Copilot in a while, but I find ChatGPT is decent at generating the boilerplate for a new project; it does degrade the more I use it on that project, though. Often I have to either give up using it or start a new chat and give it my latest code.


I'm using it happily in all projects I'm involved in, but I'm not writing code often.

It helps tremendously.

Also, look at what Google is doing internally: gpt might not be perfect right now, but I would try to keep up with testing/using it regularly enough to not lose sight of it.


There are two use cases that work for me:

1. Complete tedious / repetitive code

2. Help with new/rarely used things. For instance, I'd ask it: write a function that fetches the price of the Apple stock from Bloomberg for the year 2023. And then I can use that and tweak it as I want (rough sketch of the idea below).
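Illustrative only; I'm using yfinance here as a stand-in rather than a real Bloomberg call, and the exact code it generates will differ:

    import yfinance as yf  # stand-in data source, not Bloomberg

    def apple_prices_2023():
        """Fetch daily AAPL prices for 2023; returns a pandas DataFrame."""
        return yf.download("AAPL", start="2023-01-01", end="2023-12-31")

From a starting point like that, tweaking the ticker, date range, or output format is quick.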


The real advantages come when you move off of copilot or GPT onto something without their restrictions. Sure it can’t go pull code from github but it can give you more accurate results than the dumbed down prompts the SaaS providers give you.


Co-pilot is junk. The only thing that works passably for coding is ChatGPT using GPT-4, with prompts where you ask it to write and then implement a software spec, stopping after each code section to ask for review and revisions.


have you tried codeium? it's pretty good for a free tool, it also does RAG against your whole project


Does codeium let you ask questions about the whole codebase, such as "which function handles foo?" or "what are the files under folder bar collectively responsible for?"


I asked it for help solving an issue with a small C function a few days ago; it suggested that instead of using `head.prev` I use `head.prev`. Not a typo, it just echoed my own code back as the solution.


It's hit or miss. Sometimes, I give it relatively complex requirements, and it spits out a running program. Worst case, it at least points me in the right direction.

I save well over an hour a month with it, so it's worth it.


If I have a linux problem and copy/paste relevant parts of the log into an AI, the result is more often than not useful and quick. It's a real help for admin tasks. Is it programming though? :)


I can't tell if I'm imagining it, but I feel like the quality of ChatGPT has dropped for software dev.

I used to get very helpful responses but recently find myself correcting it a lot.


Did you try GPT-4? That isn't my experience though. It can explain fairly complex machine learning models to me with high accuracy.


It’s funny, I’ve been able to use gpt 4.5 to write 100% of my database migrations with little coaxing for the past few months.


Yeah, but database migrations are the kind of grunt work I'd classify as a good use case for it, provided you review them carefully. I think OP was saying it's fairly limited for actual problem solving.


I find it useful for one-liners and scaffolding things like building mappers from one object to another.

Other than that don't trust it.


I think once you know what you're doing, they are actually great for the boilerplate stuff.


These tools are simply not useful. I have been saying this right from the beginning.


i turned off copilot when i did a trial of an alternative, and when the trial ran out i just stopped using either. i still use chatgpt over stackoverflow, but i don't miss the proactive prompts.


This is just another facet of the current problem with 3-month boot camp graduates thinking they are software engineers and technically illiterate managers hiring them for SWE positions. Of course, for that level of competence, GPT copilots are true marvels because the code quality is the same or even better.


I mean, most of the time in coding is not spent writing code, or writing it fast. It's spent finding bugs or weird edge cases.

some days i write 2 lines of code, or just try to find stuff. all this copilot talk feels like it's never about real software development


For elite engineers, it is a 100x’er for productivity.


this is nonsensical. For beginners, probably.


Welcome to the other side of the bubble )



