
For the past 6 or 7 years I've had a side gig as an external examiner for CS students, and I've gotten GPT to pass most of the programming questions they ask the students in the programming exam. It's things like building a doubly linked list, sorting a binary tree and so on, and it's very good at it. On the flip side, I've yet to get it to do anything useful for me in my real job that isn't writing documentation. It's likely because it has a lot of data on how to pass CS courses, but not so much data on how to get Entity Framework, Asp.Versioning and OData to work together. Probably because very few people have been crazy enough to attempt the latter.
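To give an idea of the level, the exam exercises are roughly on the order of this (a minimal Python sketch of a doubly linked list with an append operation; the actual questions aren't tied to any particular language):

    class Node:
        def __init__(self, value):
            self.value = value
            self.prev = None
            self.next = None

    class DoublyLinkedList:
        def __init__(self):
            self.head = None
            self.tail = None

        def append(self, value):
            # link a new node in after the current tail
            node = Node(value)
            if self.tail is None:
                self.head = self.tail = node
            else:
                node.prev = self.tail
                self.tail.next = node
                self.tail = node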

I think it will be very interesting to see what the teachers, and institutions, do about it though. It's not like students couldn't find these things on the internet before GPT, but it's so much easier now. Overall I personally think that a nice shake-up of the way we educate people might turn out to be a good thing. That is of course easy for me to say, as someone who doesn't actually have to do any of the shaking-up.




Students found ways to cheat on exams before too. What's great about ChatGPT is that it's a great tool for a student who actually wants to understand and learn. With Google you can find an answer, and maybe a good explanation, but with a good chat AI you can ask back for details, paste your broken solution, and ask the AI to spot the problem and explain it to you.

I see that some teachers will no longer allow projects to be handed in printed on paper; they need to be written by hand to force the student to read the content. I think this is bad for subjects the student doesn't care about: they will just instantly forget what they wrote.


> CS students, and I've gotten GPT to pass most of the programming questions they ask the students in the programming exam.

High-school-level questions aren't an issue for a bot trained specifically to answer such questions. A Google search could pass with flying colours.


I’ve gotten ChatGPT to write a lot of mostly correct code for me. Mostly around the AWS SDK in different languages. There is so much sample code out there it helps.

It can also write CloudFormation, convert existing CF to Terraform or to idiomatically correct CDK code in Python, convert code from Python to all of the target languages I've given it, and convert Python AWS SDK (boto3) code to bash/AWS CLI.
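As a rough illustration of the boto3-to-CLI conversions I mean (the bucket name is made up; the boto3 call and the CLI equivalent are the standard ones):

    import boto3

    # list object keys under a prefix with boto3...
    s3 = boto3.client("s3")
    response = s3.list_objects_v2(Bucket="my-example-bucket", Prefix="logs/")
    for obj in response.get("Contents", []):
        print(obj["Key"])

    # ...which translates to roughly this AWS CLI call:
    # aws s3api list-objects-v2 --bucket my-example-bucket --prefix logs/ --query "Contents[].Key"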


Maybe you are much better at prompting it than me, but I've gotten absolutely no value out of it with regard to my professional coding. Not for lack of trying either. I'll give you a few examples of what I mean.

Friday we decided to split our EF migration into multiple context files, something we should've done from the beginning to be fair. Anyway, we deploy with Azure DevOps, and we migrate for our more permanent testing, staging and production slots through a VM agent on our VNET. We don't have a DevOps engineer, and we don't completely know what we're doing, but it works. I figured I could copy-paste the code four times, giving us five, one for each context, but I didn't want to do that. So I asked ChatGPT, and it gave me an answer involving Bash, which doesn't work in our DevOps pipeline, as in, it doesn't exist in that context. I told GPT this and fed it more information, and it proceeded to give me the exact same code four times before it gave me something different, but equally unusable. That's one of the good examples of it failing, because it made things up and they simply didn't work, so no harm done aside from the wasted time.

A few months back we tasked it with writing something for our OData client library, and it did. It also worked, only it didn't reaaaaalllyyy work, because it was flawed in a way you might not notice if you weren't an experienced developer. Basically, what it had built wouldn't scale effectively, or at least not in a cost-efficient manner. It would have run for a long time without anyone noticing, but it would have been rather expensive, and it would've broken down completely under "normal" loads if we ever grow to twice the size we currently are. But it worked and it looked fine, and that's sort of the scary scenario. Or maybe it isn't scary, because we'll likely be well paid to clean up those messes for years to come.

I do think it’s likely to become more and more useful as time passes. I mentioned documentation in my GP, and I am genuinely amazed at how good it is at writing code documentation. You obviously don’t feed it sensitive code, but it’s been able to accurately describe functions from their name, input and return values alone. Seriously, it’s taken a function name and written a piece of JSDoc that was miles better than what I had written, accurately guessing complex context (which it only saw maybe 10% of) from the function and variable names alone.


I haven’t done any “development” projects since ChatGPT became a thing. It’s mostly DevOps these days. My specialty in consulting is combining App Dev + DevOps + cloud. I’ve mostly been using it for helper scripts and JSON and YAML wrangling, like this one:

https://news.ycombinator.com/item?id=34566980


I’ll be sure to check it out. To be more specific, it’s about using dotnet EF Core to generate a .sql artifact and then executing it with the MSSQL server tooling.

I wanted to just generate one big .sql instead of one per context, but maybe you just can’t.
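For anyone curious, the per-context flow I'm describing looks roughly like this (the context, server and database names are made up; the dotnet-ef and sqlcmd options are the standard ones):

    # one idempotent .sql artifact per DbContext
    dotnet ef migrations script --idempotent --context OrdersContext --output orders.sql
    dotnet ef migrations script --idempotent --context BillingContext --output billing.sql

    # executed later against SQL Server with the command-line tool
    sqlcmd -S myserver.example.com -d MyDatabase -i orders.sql
    sqlcmd -S myserver.example.com -d MyDatabase -i billing.sql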


It's an interesting and legitimate question. But just as it takes businesses time to pick up on new technologies and adapt, it might take institutions a while as well. I've seen Reddit posts of people getting their writing flagged as AI-written when it wasn't. The question is how long until they start embracing it as a tool to write with.

Many of these institutions have billions of dollars in reserve; for Harvard, for example, Google tells me it's 53 billion. So I don't think they are just going to go away, they will adapt somehow.


The solution is trivial. Have students write the algorithms in pseudocode in class by hand, on paper.


No, thanks. The coding exams I've been in were either handwritten or in a text field which loses focus every time you press Tab. Both were a miserable experience and absolutely pointless.

How about this: test actual understanding instead of desperately making trivial tasks needlessly challenging.


Or how about: stop spoiling students and make them pick up a pencil and write down some damn code.

I took tests by hand all my life and even in college I think most computer science tests that weren’t “implement this big project” were all done by hand. Professors didn’t give a fuck about fancy test software: bring pencil, bring paper, here’s a test, good luck.

Your hand isn’t going to cramp up writing doubly linked list or binary heap functions. Plus you’re going to have to do this anyway for whiteboard interviews to get your first job.

If a CS student can’t do this by hand, then just get out of the field. We do not need more intellectually weak engineers, especially as the amount of money one earns in tech seems to increasingly attract money hungry brogrammers that write shitty code. And I’m sure ChatGPT will empower an entire generation of slackers who think software engineering is just writing the right prompts in plain English.


> Plus you’re going to have to do this anyway for whiteboard interviews to get your first job.

Or they could improve interviews so they test more than memorization.

> If a CS student can’t do this by hand, then just get out of the field.

Why? Should construction job applicants also dig holes using old shovels to prove they aren't weaklings depending on modern machinery? Completely pointless.

> We do not need more intellectually weak engineers

But that implies current "intellectually weak" engineers were good enough for handwritten whiteboard interviews, so what's your point again?


I had exams in university that involved writing short pieces of code (usually pseudocode; one exam asked us to write C) on paper. This was pre-GPT or anything like that, done just because all our exams were on paper to simplify logistics.

It was fine. It tested actual understanding of the problems at hand.

No one should be asking someone to write long pieces of code by hand, but a dozen lines of code to implement a simple algorithm is fine.


I think the point was that some students might substitute ChatGPT's understanding for their own.



