I understand the problem here and I sympathize with professors in these circumstances. Learning the fundamentals will remain important. But as others have pointed out: if students are unwilling to learn, or are taking shortcuts, it will ultimately hurt them in the long run.
One thing, though, is that things like Copilot give the lie to the hypothesis (propagated mostly by the Google-style job interview) that intensely coding clever for-loops to perform algorithmic magic (for things usually already in the standard library) is the best measure of the competence of a software developer. If it can literally be done by a machine, maybe we should be measuring based on something else. Especially since this kind of thing is best done on the job by looking in a good textbook or using the standard library. Grading at the university level probably needs to take this into account as well.
I haven't used Copilot. I doubt I'm its intended audience. After 30 years, actually writing code is perhaps the easiest part of my job. The mechanics are the easy part. The big-picture thinking and figuring out how to get it all together into a system is the hard part (and honestly there are plenty of people far better at it than I am). Now, if we get a Copilot for that ... then our profession is in trouble.
> if students are unwilling to learn, or are taking shortcuts, it will ultimately hurt them in the long run
That's a very limited and dismissive view. If you allow students to use copilot, you're handing out certificates or diplomas to people who can't code. That will very quickly erode the value of your institute's certification, and with it, that of the other students. Otherwise, why not give everyone an MSc in CS? "It'll only hurt themselves in the long run".
> coding clever for-loops to perform algorithmic magic
A for loop for searching or summation is not algorithmic magic.
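For scale, here's the kind of loop we're talking about, in Python; the hand-rolled version and the one-liner the standard library already provides do the same thing (the names are just illustrative):

    # Hand-rolled: the interview version.
    def smallest(xs):
        best = xs[0]
        for x in xs[1:]:
            if x < best:
                best = x
        return best

    # On the job: the standard library already covers it.
    def smallest_stdlib(xs):
        return min(xs)

    assert smallest([3, 1, 2]) == smallest_stdlib([3, 1, 2]) == 1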
> maybe we should be measuring based on something else
No, why? Students have to understand the fundamentals. What those fundamentals are depends on the field. Once you've shown mastery, you can use more advanced tools. Teaching has always been like this, and for a good reason.
I'll be honest: has anyone here ever looked at a resume with an MSc in CS and said "Yep, I bet this person knows how to write code well"? Surely there are some research level jobs where a PhD in CS is necessary, but for most coding jobs, an MSc is just 2 years of missed experience where you're learning bad habits and antipatterns by never having your code used by other people in any strenuous way.
Certainly there are a lot of pros and cons to getting a BA in CS. But "learning how to write code" is not the major advantage of a university education, nor should it be.
Let's face it: there is so much theory crammed into some CS degrees, bachelor's and master's alike, that students do not get to code much. Real coding is often learned on one's own, in side projects or on the job. If students do not engage in side projects, how can we expect them to be proficient computer programmers out of university?
However, I see this as a failure of CS education. Our work is with computers. If we do not teach how to use the tool we work with properly, then we have failed. I know CS is not only about developing software but also largely about theory; still, it should not exclude practice with the most commonly used tool, the one graduates will actually use: the computer.
Often things are even taught wrongly (do not get me started on the "every noun is a class" style of teaching OOP), by people who have not developed a single larger code base in decades.
That said, the industry has at least caught up a bit in that regard and realized that, fresh from university, most people do not write good code immediately.
Your argument is akin to saying "there's so much theory crammed into Astronomy that students do not get to look through a telescope much"
CS fundamentally is not about computers. It's about math and the flow of information between theoretical machines. Computers just happen to be a physical implement.
The idea that universities should teach to develop industry-fit individuals is just an extension of commodifying university education. Why not let the industry handle training for what they need their people to do, and let universities focus on their intended goal: create more university professors/researchers.
What about moving the responsibility (and financial risk) off students' backs and to the corporations that want to exploit the knowledge gained in a degree through apprenticeship and learning to code through experience?
We don't do it because it means trade schools for developers. Which, in truth, is exactly what 90% of what we do in the industry needs. But it's not glamorous; it's not a degree. Trade schools are for proles, and we aren't proles, now, how could we possibly be?
If taken seriously by universities, this would be a really hypocritical stance (on the universities' part).
CS departments are getting more students and funding in recent years precisely _because_ they are seen to be providing valuable education for would-be programmers and practitioners in the software industry. At the same time, some people like you perpetuate the myth that only the theory matters in a CS degree, and it shouldn't have any bearing at all with the demands of industry.
If what you propose becomes reality, expect CS departments to shrink to 10% of their current size and high school graduates to skip university and take on apprenticeships. That's actually fine by me, but I doubt it's what you're envisioning.
The fact is that most universities these days _pretend_ that they can prepare somebody to become a software engineer by providing CS courses. That works to some extent, but as you say, the degree is generally more geared towards training researchers and academics. There's no fundamental reason why universities shouldn't provide a more practically-oriented "software engineering" degree of sorts, much like how they provide other engineering degrees (eg. Civil Engineering) as opposed to pure sciences. _Except that they generally aren't capable of doing so_, so they happily accept the sub-optimal arrangement that CS is the de-facto degree for prospective software engineers, while maintaining whatever perceived high ground they get from insisting on teaching theoretical, impractical stuff, because most of the faculty don't know a thing about developing software in modern environments. Basically they want to have their cake and eat it.
Software engineering has become such an important part of modern society that IMHO it should at least be given the recognition and respect due such a discipline. Imagine somebody arguing that universities shouldn't provide Civil Engineering courses because, you know, it's just physics and applied maths, and civil engineers should just do a Bachelor of Science in Physics with a theoretical focus, leaving the practical knowledge of how to build a building to apprenticeships, because universities are too busy mentoring the next Einstein. Would you want to live in a society like that?
> Why not let the industry handle training for what they need their people to do, and let universities focus on their intended goal: create more university professors/researchers.
Which would result in universities eventually going out of business. An outcome I'm not opposed to, provided what replaces them functions better at training young people for real life.
> Your argument is akin to saying "there's so much theory crammed into Astronomy that students do not get to look through a telescope much"
I doubt that a computer, with its endless possibilities for use and programming, is comparable to a telescope. Also, I did note that CS is not only about computer programming. So I am fully aware of the following:
> CS fundamentally is not about computers. It's about math and the flow of information between theoretical machines. Computers just happen to be a physical implement.
However, this does not mean that universities should forgo teaching computer programming properly.
The alternative is that we acknowledge that first we study CS, and then employers finance year-long computer programming classes for new employees, taught by people who are actually good at this: the kind of people who have lots of experience with different systems, programming languages, paradigms, and teaching in general. That'll be expensive.
> What about moving the responsibility (and financial risk) off students' backs and to the corporations that want to exploit the knowledge gained in a degree through apprenticeship and learning to code through experience?
Yes indeed, that is one alternative.
I am saying that employers should not expect, and often do not expect, good code to come out of people who recently graduated from university with their BSc or MSc. I don't think we are in disagreement about that point.
However, realistically this might lead to some employers being too far up in the clouds, only ever accepting employees who were taught elsewhere, and to students having a really hard time finding _any_ job at all, because no one is willing to expend the resources to teach them. Furthermore, the teaching will probably be very one-sided, geared towards what that particular employer uses in their tech stack. It is also highly unlikely that most employers will have teachers who are really that good, let's say up to Abelson and Sussman standards, if we may wish for something.
Depends on the school and depends on the person. The same way many companies in the US assume any grad from MIT is a good engineer, there are companies who look at resumes w/ an MSc and bump them up. From personal experience, MScs are particularly well-regarded in Brazil, for instance.
My question wasn't "do companies think that an MSc is good" but rather "Have you, personally, ever noticed that a candidate with an MSc was better than a candidate with equivalent experience without an MSc".
I think we can all agree that whether a company thinks something is good is completely unrelated to whether that thing actually improves your programming ability.
I don't think it would be held against you? Especially if it was "going back" for it.
I think the uncharitable view people take of a Bachelors followed immediately by a masters but with no PhD is that they presumably weren't planning to go for a PhD, and given that, had the choice between a) doing 2 more years of school for very tenuous benefit and b) going directly into a job where entry level pays 1.5x+ the median household income in many states. And they chose a).
So I think the uncharitable interpretation is then that the person was afraid of entering the adult world or felt like they couldn't hack it.
> has anyone here ever looked at a resume with an MSc in CS and said "Yep, I bet this person knows how to write code well"
A Masters in CS doesn't tell me the person knows how to code well, but it does tell me that they are _more likely_ to
- Have had experience with a wider variety of algorithms, data structures, and general coding concepts
- Have written code to perform more complex things
- Have written code for someone "outside of a classroom"
Certainly the last two are more directly accomplished by actually being in the workforce. However, an educational environment is a far better source for the first one. Now, I'm not saying having a Masters is necessarily _better_ for a developer but
- Given two developers with no work experience, one with a Masters and one with a Bachelors; I would generally assume the one with the Masters is a better developer.
- Given two developers with a similar amount of work experience, one with a Masters and one with a Bachelors; I would generally assume the one with the Masters is a better developer.
- Given two developers, one with a Masters and no work experience, one with a Bachelors a couple years of work experience; I wouldn't make any assumptions about their skill (they're certainly different, but I wouldn't assume one is a better developer than the other).
The same logic holds true for me when people try to convince me that a degree in CSci offers no benefit at all, and that it's better to just self study and start working. There's nothing you can learn during a degree that _can't_ be learned by self study; but there's a lot of things that are much more likely to be learned in an academic environment.
I'll generally rate an MSc lower than a BA/BSc. (The exceptions would be if you needed the MSc to meet some visa requirement, or if you had a BSc in some weakly-related field beforehand.)
From the downvoters: I would genuinely love to hear a defense of an MSc per se. What university has their shit together enough to actually accelerate/compress 2-4 years of work experience into 1-2 years of academic projects, which is what's always the claim? If all you're doing is the courses, those are accessible to 4th-year undergraduates in every university I know.
To a person, everyone I've interviewed who did an MSc for the coursework had to because they finished their BSc with minimum effort and no one was interested in hiring them. Even worse is when you see 4y BSc, 2y work, 1y MSc.
This just sounds like you’re punishing candidates because you have an inferiority complex.
Many master’s holders started out thinking they might be interested in research and decided industry was a better fit after seeing how the sausage is made. Your “rule of thumb” punishes undergrads who were inspired to push the boundaries of knowledge in the field.
Cool personal insult - who has the inferiority complex here?
Your explanation can be reframed as "many MSc holders did not invest any attention to how the academy works during their BSc years" - why should I prefer them? My rule of thumb punishes such people compared to those with a broader view and better self-knowledge, yes.
> holders did not invest any attention to how the academy works during their BSc years" - why should I prefer them?
More inferiority complex. Some people give a shit about research and want to give it a try to see if they can make it work. There is no way you understand the nuances of how NSF or DoD funding works for your specific areas of research until you spend a lot of time in the weeds of grant proposals and reading reviews.
> My rule of thumb punishes such people compared to those with a broader view
This is a joke. Most BScs have no clue at all how academia works because they don’t care. They are absolutely not more enlightened to the nuances of career prospects in research in their areas of interest.
There are wide gulfs in the funding available depending on the subfield of CS. Do security research and you’ll have plenty of money and opportunities at national labs, research arms of corporations, and academia. Go the P/L route and it gets dry pretty quickly.
> Cool personal insult - who has the inferiority complex here?
If you’re gonna try to insult, at least get it right. It would be a superiority complex if you wanted to view my post through the lens of pushing “how smart” grad students are. Yours is an inferiority complex because you’re taking out your lack of institutional credentials on anyone who has them with some convoluted logic about how people are stupid if they spend extra time in academia.
Graduate degrees aren’t meant to simulate work experience or just take more classes; they’re intended for research. If your MSc lists no research and you’re applying for some sort of boilerplate frontend job, then I understand the hesitation, but normally one would take that experience into a cutting-edge field which focuses on innovation rather than making web apps as fast as possible. In jobs with an element of R&D, university research experience is very useful.
I mean that's the theory, and it still seems true for PhD programs. (So I guess I'd include "ABD" as another possible exception if the program also granted an MSc but it feels like this is much less common than it was when I graduated - the many problems of grad school are now better communicated to undergrads.)
But for MScs, nah. The more focused coursework is directly accessible to undergraduates, and the projects are often like "port this 20yo tool to a new kernel version" or "build me a DB schema" or a few other things you'll have also done but 3x over in your first year of a real job.
For most university CS labs, "research experience" comes down to 1) writing code in an overcomplicated way (because it's theoretically cooler) and 2) with no thought to future maintainability (because once the paper's published, the code gets thrown in the garbage). There are a few CS jobs where those skills come in handy! But in most jobs, even jobs doing "R&D", they're bad habits.
One does not typically return to undergraduate study after gaining a first BS degree. If you have a BS (or in my case a MS as well) in a not-directly related technical field and wish to study CS, you get a graduate degree. This often requires extensive additional coursework, in my case about 12 additional semester hours. Thanks for ranking people like me lower than someone with less education; I doubt I would enjoy working for you anyway.
I think an accelerated bs/ms might be another exception you should consider. (This being when a bs student can do another year immediately following to get their ms.) It makes sense on ROI as many companies have HR departments more likely to get you into the interview process, and then it’s often a slightly higher starting role. Also, from a knowledge standpoint, some graduate classes help practical abilities only. A 500-level databases course, for example, isn’t research-oriented. It just teaches more advanced topics that might actually matter in the workplace (an intuitive understanding of how/where to use an index, more complicated examples of projects with database connections, etc.). I fall into the case of your first exception, so I’m not actually sure undergraduate courses don’t teach these things. I’ve had a few 500-level classes that had absolutely no research orientation (“object oriented design and development” is another example of a class where the projects are just super-complicated examples of what you experience in industry). Pursuing an accelerated degree shows that the student wasn’t careless in thinking research was something they wanted to do.
Undergraduates in CS can fill their senior year with 500 level courses easily. Motivated ones also much of their junior year. Why would I take the applicant who did it five over the ones who did it in four? Worrying about the “MSc” label at the end for the same work is just credentialism.
Your confusion in this thread stems from the assumption that there is only actually enough CS to cover 4 years for a good student and 5 for a mediocre bs+ms. This is complete bullshit because the truly motivated undergrad would spend the senior year doing 500s and then the next year doing a bunch more for more breadth.
At any of the large CS universities, there are usually 4+ masters tracks you can go down (security, hpc, p/l, HCI, algorithms, etc).
Someone who quits with just a BS has no interest in going further theoretically. They absolutely did not cover all of the topics available - let alone get into research.
Credentialism is bad, yes. It’s important to get over prejudice that less education is worse, yes. Saying that an ms student with the same gpa (adjusted for graduate grade inflation) from the same university as a rule is worse than a bs is ridiculous. You’ve gotten things twisted past overcoming prejudice. It’s great that programming as an industry has lots of room for people who go without education, but your logic is delusional. Of course a 4 year student who accomplished the same as another who took 5 is better, but that doesn’t justify your statement that started this.
I agree with you on the value of a master's, but not about some of the assumptions you make about the people who get them.
The most legitimate use of a MS degree is to move from one field to an adjacent field (eg math to CS). You know the basics, and you just need the advanced coursework in the other field.
A lot of people I have seen use a master's as a pure signaling tool, getting a high-profile University on their resume. For example, I have seen a lot of resumes with "BS from Unknown University" and "MS from Stanford/Harvard/MIT." In interviews, these students have not performed nearly as well as students with a BS from Stanford/Harvard/MIT, so I tend to discount the MS if I see this pattern.
Another common reason for the MS that I have seen is as an immigration tool. These people are usually pretty smart.
I got my MS directly after my BS without looking for work. In my case, your claim is that experience was actually detrimental to my knowledge, somehow erasing part of the value of my BS.
Maybe that's why you're getting downvoted. (Not by me, for the record.)
But my options aren't ever "candidate A with BSc" and "candidate A with MSc", it's "candidate A with BSc" and "candidate B with MSc". I don't assign negative value to the MSc per se.
It’s also stating the obvious to some extent. Extrapolating “master’s” to “PhD”: a company designing and selling hardware verification software would be better off hiring someone with an algorithms PhD to work on their SAT solver. Plenty of self-taught folk are better programmers, but experience parsing the academic literature defining cutting edge SAT techniques is valuable here. This isn’t even really an “R&D” role in which the research experience is necessary. If you care about ad hominem aspects of the discussion, I identify as a code monkey.
I'm not the OP, but I find a lot of the time the University program isn't too relevant and it's really the work experience that matters.
For example, I used to work in our IT department that handled the CS/Math departments at our University. We are a fairly well regarded school in the computing fields. However when we did co-op hiring of students for roles we had in the department there was one thing that stood out. People who only had programming experience through the school were not good programmers. It was the people who either had past work experience or had their own side projects who actually knew their stuff well.
At the end of the day I think CS degrees often go heavy on the theoretical and I find much of the theory is not applicable to the majority of CS/SoftEng jobs out there. I feel like someone who is motivated can gain the skills to be hirable in many software companies out there without going to school. I view things like co-pilot not as a problem. This will hopefully force the professors to make assignments that teach people more complex and work applicable content, rather than having them write generic intro to programming style cookie cutter programs.
Yes, and I hire lots of people without CS degrees. But I do see guided study accelerating the process. People who come in with no CS background need some years to backfill missing knowledge, or they've put some years of self-study on top of a 4-year degree.
I don't see an MSc accelerating anything; in general one year of MSc seems worse than one year of work experience or one year of motivated self-study, and the students who do it were not the top tier of BSc students.
As my previous reply to your comments says, I disagree here. It depends on the nature of the courses. I think some graduate coursework falls into the category you’ve mentioned of “accelerating learning” of stuff that’s very applicable in industry. Sure, a research-oriented course about neural networks isn’t going to help someone write SQL. You could even say that an academic introduction to AI is a waste of time, but a survey of software engineering methodologies and the resources surrounding them (SWEBOK, CMMI 1.3) is going to help a student in industry over someone who hasn’t invested that time (and being guided in an academic setting is only going to help).
Nonsense? If you know what distinguishes these things as nonsense, why didn’t they consult you when compiling these sources? Surely you could have helped them produce value. /s
Seriously though, you clearly need a reality check. SWEBOK is by no means necessary; far from it. Saying it's a complete waste of time shows that you have poor judgement on the subject. To your earlier point, having someone guide you through the pertinent aspects of CMMI 1.3 is exactly the kind of value added by paying a teacher (where you would otherwise have to sift through piles of less valuable material). I only preach out of a sense of trying to help you and the others you have influence over. If you've actually worked yourself into a position where you hire people, and you're not just pretending on the internet, you could do yourself a favor by being a little more open-minded. Maybe consider that your original reply has been downvoted on a forum with "hacker" in its name.
Yes (in the field of CS, in other fields it might vary). That said, for me the most important difference was a thing around expectations: I felt fine to spend all the time I wished on the thing that became my thesis, to the exclusion of everything else, while at the same time not feeling the obligation to do so.
> If you allow students to use copilot, you're handing out certificates or diplomas to people who can't code. That will very quickly erode the value of your institute's certification, and with it, that of the other students.
Very true. Like it or not, the reputation of educational institutions plays a significant role in candidate selection processes at many companies. If a company brings in 10 graduates from a local college and discovers that 7/10 of them can’t actually write basic code, they’re going to become more hesitant to bring in other graduates from that college. Meanwhile, someone who has Stanford or Harvard on their resume is virtually guaranteed an interview. Reputation matters far more than I ever thought it would.
>Meanwhile, someone who has Stanford or Harvard on their resume is virtually guaranteed an interview.
The funny thing is though - the CS programs at Stanford or Harvard are more research based than anything and hardly teach students to code. You are forced to take maybe 1 or 2 "learn to code" courses and everything else is just math/theory or assumes you can code anyways. And that shouldn't be a surprise; it doesn't take 4 years to learn to code; there are plenty of 15 year olds who can pass leetcode style interviews. Everything else you learn is far more valuable (although it's not a given that you will get a chance to apply it).
Given the month of preparation that normal candidates do? Sure. That’s basically what they do every time they have to take a test about a subject that doesn’t interest them.
Computer science is to programming what materials science is to construction. It produces really interesting new building blocks that actual practitioners can put to amazing uses.
But research scientists aren't necessarily great engineers, and engineers aren't necessarily great research scientists. That's OK though, it's why we have separate disciplines.
The CS program at my university, and at those of most people I've spoken to, wasn't research oriented. At least not until the graduate level. Maybe at really respected places like MIT. Yes, there were algorithms classes and such, but it was more about memorization than actually studying how they were made or how you would make a novel one. Final projects were always about building a real-world-ish application on a team.
But I'd say about 20% of the students that graduated still didn't understand things like references very well. They'd write their code with "foo.bar" and when it didn't compile they'd randomly change "." to "->" or add random asterisks around until it compiled.
(Classes were primarily in C++. We did some classes in Java as well, and in my 3rd year Microsoft came in with a big sponsorship to change all the Java classes to C# classes.)
I mean, it's more like "woodworker" vs "carpenter." We don't ask people framing houses to plane their own wood. Most practitioners in our industry don't apply much fundamental CS knowledge.
After graduating most CS grads seem to forget foundational CS stuff anyways.
I never got a CS degree. So I trained myself, while working, and also during much of a year when I was unemployed during the .com crash.
Which only made me more frustrated to find that most jobs are just "you nail these two boards together" and most programmers are clueless about things like what the "relation" in "relational DBMS" means.
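(In case it saves anyone a lookup: the "relation" is the table itself, a set of tuples over named attributes, not the links between tables. A toy sketch in Python, purely illustrative:)

    # A relation is a set of tuples; here each tuple is (id, name, age).
    employees = {
        (1, "Ada", 36),
        (2, "Grace", 45),
    }

    # Relational selection: the subset of tuples satisfying a predicate.
    def select(relation, predicate):
        return {t for t in relation if predicate(t)}

    print(select(employees, lambda t: t[2] > 40))  # {(2, 'Grace', 45)}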
In construction - in the UK - there are first and second fix "chippies". First fix do the big stuff and second fix do everything after the plaster is dry(ish).
There are also cabinet makers and furniture makers, turners and several more wood related trades. Here, woodworker is a synonym for carpenter and carpenter is rarely used as a job title. Generally a specialisation is used.
I happen to own an IT firm and I prefer to see engineering qualifications on CVs (resumes). I'm a rather shabby Civ. Eng myself! I graduated just before the shit hit the fan in the UK in 1991 - recession, >10% unemployment (in Devon and Cornwall) etc. Can't recall what inflation was up to at the time but I think it was worse than now.
While it would likely be prohibitively expensive, I really wish there were an independent board that certified software engineers and required an oral defense in the same style as a PhD.
I recognize that it's kind of gatekeeping, but it feels like more and more diploma mills are just handing out degrees, not to mention all the coding camps out there.
As someone who has excelled in non-credentialed industries and seen enough credentialed professionals coast by, I much prefer that companies do their due diligence in hiring and firing, rather than outsourcing it to a central regulatory body destined to ossify.
I hope, for the sake of your software industry, that this only relates to calling yourself an engineer, and not to the general practice of software development. If you have to be a licensed engineer to work in software in Italy then that'd be a colossal drag on anything interesting happening in that industry.
No, that's not mandatory. Actually, only the big, shitty consultancy shops (Accenture, Deloitte, etc.) look strictly for licensed engineers, mostly because, funnily enough, they're usually hired for much less than the average SWE. There's actually kind of a shared feeling in the industry that licensed software engineers are dumbasses.
They want licensed engineers so they can hire them as consultants and pay no sick leave or vacations.
Becoming licensed is mostly needed if you want to work on your own. You pay less taxes but have ZERO safety, so in theory you should earn much more and work for many customers. But those shitty consultancy companies hire consultants and pay them very little.
Just raise the bar, rather than getting students to reverse an array, get them to do something more complex that is difficult even with the use of modern AI tools. We currently hand out degrees to people who couldn't write a program in raw assembly but it doesn't matter anymore. The world has moved on from that being a required skill.
I’ve used Copilot to turn an RFC into code. It wasn’t perfect; the code did need to be changed a bit, but it was surprisingly close. I ended up rewriting only 4-5 lines that were totally wrong and 3-4 just for clarity, out of about 40 lines of generated code.
Great. If an undergrad can find and rewrite those 10% of lines that are wrong, that is exactly what I care about in an effective programmer. Especially since a good chunk of programming is reading code by former programmers (including self()) and interpreting and debugging it. If they can use Copilot with confidence to be more productive in getting to that point, even more bonus points if I were a manager.
tbf I dropped out of a CE degree (and college) because I couldn't pass discrete math so I might not be the best logic programmer... I doubt I could pass leetcode interviews, I mostly work on the fringes of software dev.
A lot of CS education is limited by students' ability to code.
If everyone was great at coding after a year or two, you could do a lot more algorithm analysis, theory, engineering principles, mathematics and other sciences, etc.
The problem an earlier comment expressed is that the student will sabotage themselves.
What else did you think that means besides exactly what you just said?
They sail through the beginning and splat in the middle, because they didn't have to learn anything in the beginning.
My comment is responding to one that says the problem is that students get diplomas without learning.
I am saying that diplomas and certs are not a problem, not that there is no problem. The problem is what the earlier comment said in the first place.
Actually I'm not even sure that's a problem either in the long term, but it is the immediate problem. It may just be during a transition period while schools have not yet adapted to reality and are still worried about calculators in math classes. But at this moment, it's a problem.
Tools of abstraction like Copilot raise the threshold for what can be considered "fundamentals." Yes, knowing how to add, subtract, multiply, and divide is important. However, now that calculators exist, we can teach people to do so much more than simple arithmetic. So, too, with Copilot.
> If you allow students to use copilot, you're handing out certificates or diplomas to people who can't code.
Many would argue that this is already happening. Moreover, we have no plan to stop it, so it's no good saying that we should not lose progress toward that goal.
Software engineering experience can be gained at a code school, or by working on open source projects, if you already know some coding from one of the many free online resources.
Copilot doesn't change the fact that trivial questions (such as 'write a Fibonacci function') can already be googled.
What I don't know (I can't try the 'free' Copilot without providing a credit card), is how well it understands unusual constraints and separate pieces of C++ classes that make the real programs we write.
For instance, a simple test would be: "write a fibonacci function but skip the number 5". Does it do the right thing?
Another challenge would be a C++ class with several fields. That's what our real programs look like: composite data objects, mixing several different algorithms together to implement complex behavior.
Maybe my data structure has a hash table of items, as well as a direct link to the largest item. When I say: "write the function to insert a new item in the list, and remember to update the largest item if it is larger than the current one", would Copilot do the right thing? Each step is easy in itself (adding an element to a hash, comparing an item to another one).
> Maybe my data structure has a hash table of items, as well as a direct link to the largest item. When I say: "write the function to insert a new item in the list, and remember to update the largest item if it is larger than the current one", would Copilot do the right thing? Each step is easy in itself (adding an element to a hash, comparing an item to another one).
In general it won't solve all your problems, but it's helpful for automating simple things like this (but you still need to test edge cases).
With this prompt in Python (which I'm more familiar with):
    from dataclasses import dataclass
    from typing import TypeVar

    T = TypeVar('T')

    @dataclass
    class MaxDict:
        items: dict[T, float]
        max_value_item: T

        def add_item(
It completed:
    def add_item(self, item: T, value: float):
        if value > self.items[self.max_value_item]:
            self.max_value_item = item
        self.items[item] = value
This was my second attempt; first I called it `max_item` and the completion did something about comparing the key.
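(One of those edge cases to test, for what it's worth: the completion indexes `self.items[self.max_value_item]`, so it raises a KeyError if the dict starts out empty. A guarded version you might end up with after review, assuming an empty dict is possible:)

    def add_item(self, item: T, value: float):
        # Guard the empty-dict case the completion missed.
        if not self.items or value > self.items[self.max_value_item]:
            self.max_value_item = item
        self.items[item] = value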
As a human, I’m having trouble understanding the problem description. What does it mean to “skip 5”? I’ve always understood the definition of a Fibonacci sequence to be, starting with 2 numbers (usually 1 and 1) the next number in the sequence is the sum of the previous 2.
So starting with 1 and 1, we get 2. 2 and 1 is three. 3 and 2 is five. What do we do from here? Do I not add 5 and 3? If not, then what do I add? 3 and 2 again? Do I repeat 3 and get 6? What does it mean to “skip 5”?
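One plausible reading is "generate the sequence as usual, but omit 5 from the output". A sketch of that interpretation in Python (my guess at what was meant, nothing more):

    def fib_skip_5(n):
        # First n Fibonacci numbers, with 5 filtered out of the output.
        out, a, b = [], 1, 1
        while len(out) < n:
            if a != 5:
                out.append(a)
            a, b = b, a + b
        return out

    print(fib_skip_5(8))  # [1, 1, 2, 3, 8, 13, 21, 34]

The other readings (e.g. dropping 5 from the recurrence itself, so the later sums change) produce different sequences, which is exactly the ambiguity.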
I've been writing code for nearly as long and man, you have to try it. It's more like autocomplete on roids than something that generates Fibonacci methods automagically.
The completions are often more “similar to stuff I’ve typed before” rather than “generate this working function”, and it’s not an order of magnitude improvement over just regular intellij completion.
…but I find it’s generally better.
Up to you if you consider it worth the cost of paying for it, and the macro completion is so-so, but "it's Rust so Copilot can't do it" isn't really true.
I guess any "abuse" is the fact that any code reproduced verbatim doesn't carry the license under which it was originally published. But the reality is that what Copilot is most useful for is (semi) intelligent autocomplete, and generating functions that almost certainly exist as StackOverflow answers. It won't do any thinking for you when it comes to the bigger picture.
Someone can still mirror your stuff on GitHub. I wonder if they should make a special open source license that disallows use of the source code for the purpose of training something like Copilot.
> I wonder if they should make a special open source license that disallows use of the source code for the purpose of training something like Copilot.
Since Microsoft uses material for copilot outside of licensing on the basis that it is Fair Use, that would probably have no effect in practice on whether or not the material is used in training something like Copilot. For that to matter, you’d first have to win a lawsuit on the basis that training something like Copilot requires permission of the copyright owner of the training material, to invalidate the premise of Microsoft’s action.
I don't care about all the handwaving here, it comes down to this.
The moment a fragment of code from a GPL'd or AGPL'd project shows up almost verbatim in someone's closed source or non-copylefted, etc. project, and someone proves it, sparks are going to fly.
And it's probably already happening, just people haven't discovered it yet.
How many years did the Oracle/Google lawsuit go on for? And in the end about a handful of a lines of code only tangentially related to the issue at hand?
Part of the lesson from that must be: employers should be telling their workers to stay TF away from Copilot or things like it. And be careful in general when browsing source. License literacy is critical.
I don't touch it because I need to feed my kids. I don't need my career exploded. Overcautious? Maybe. I'll let someone else find out. I make a living in and around open source software.
> The moment a fragment of code from a GPL'd or AGPL'd project shows up almost verbatim in someone's closed source or non-copylefted, etc. project, and someone proves it, sparks are going to fly.
I'm hoarding popcorn and can't wait for that, honestly.
> License literacy is critical.
It's beyond critical, but most people I have talked to say that they feel they have the right to copy and use any code they see online. They don't care.
Building this open source corpus was not easy, and we need to defend it too. This is a culture.
Anything generated by Copilot that is a derivative work is not something Copilot can hold the copyright on.
From the auditor's perspective it doesn't matter if you copied it out of Stack Overflow, from some GitHub search, or from Copilot. You, the human, didn't check the license / plagiarism detector. It is you, the human, claiming copyright on the work you are creating, which may incorporate material from other sources.
Copilot isn't claiming fair use.
You could argue that the model that copilot runs from is a derivative work (and this is going to be interesting when it gets to the courts, because, frankly no one will come out the 'winner' on this when trying to explain it to a judge) - but that's not the code that a human is claiming to be their creative work and is ultimately the license violation.
Personally, I (not a lawyer) believe that Copilot is on OK ground, but anyone using it needs to do their due diligence in verifying that the code they've incorporated is licensed appropriately, just as if they'd copied something from Stack Overflow: who knows where that copy was copied from.
I have less concerns with identifiable code from copilot than humans not caring about the licenses of their source material in creating human generated content.
It's not really an issue when you're a large software corporation; you already have mechanisms in place to check for license compliance in everything that ships, including F/OSS plagiarism checks.
I think that's the part that people who don't think it's worth the money aren't getting. This kind of system is godsend for the likes of Infosys, TCS etc. So the immediate threat is to the jobs there - but the side effect is that it'll make it all even cheaper, so we'll see more "outsourcing to the cloud", so to speak. Often to the obvious detriment of quality, but that doesn't seem to matter in this market.
There is a common trend of devs on HN getting angry that their work has been "stolen" to train Copilot, while none of them raised the same concerns when everything else, like art, music, and literature, was used to train other models. Now that it affects them, it's a real issue.
Yes. It's especially amusing given that much of the other things you note are actually intended for commercial use (i.e. sales) from the start, unlike open source software.
I just don't understand the OSS community sometimes. "Software should be open and free (libre) for me to study and modify" includes what Github did for copilot. If you don't want your software to be free (in either sense), don't host it on an open source platform, especially one that makes it available gratis to the public.
There's possibly a valid argument that any private repo code that was used for copilot doesn't fit the proper definition of "open" (or gratis). But I haven't actually read the Github license around this, so I don't know.
Your argument fails to distinguish between "open source" and "free software".
Copyleft, free software, GPL style licenses do not have their source open purely for the purpose of studying and modifying. Their licenses also require that derivative works also be free and that such modifications be distributed.
Copilot does not comply with this. And so violates the spirit of those licenses, and probably also the letter of the law.
In what sense? Copilot isn’t a derivative work in the sense these licenses usually are understood to mean. And given that they’re open source code bases I expect licenses to explicitly disallow things, and consider anything not explicitly disallowed as permitted.
> Copilot isn’t a derivative work in the sense these licenses usually are understood to mean
The phrase “derived work” is, IIUC, a phrase from copyright law. And you’d have a hard time convincing me that Copilot-generated code is not a derived work from its training data.
> And given that they’re open source code bases I expect licenses to explicitly disallow things, and consider anything not explicitly disallowed as permitted.
That is very much not how copyright and licences work. Copyright law gives the copyright holder the exclusive right to make copies of the work, making derived works, (and to do some other related things, like making a public performance of it, etc.), so to do any of those things, you need explicit permission, i.e. a license from the copyright holder to do it. A license is not a list of things you are forbidden to do; on the contrary, it is a list of things you are permitted to do, which you would not otherwise be legally allowed to do according to copyright law.
Sure, but there are things you can do without a license because they're not copyright violations. You can read the work, learn from it, and sometimes make quotations under fair use.
This is a novel scenario. It seems unclear how the courts will interpret it? Never mind what we think, will they decide it's a derivative work, or is it a transformative use?
“Fair use” is, technically, not actually permitted by copyright law. ISTR that “fair use” is only a defense you can use when you are being sued for copyright violation.
Suppose we create a new AI image generator, and use as training input every image ever made of a Disney character (official images by Disney, that is, no fan art), including every frame of every Disney movie. Could we just use the output images of that AI however we wanted to? (Not withstanding trademarks.)
Looks like there is case law that fictional characters are protected if they are "sufficiently delineated." I don't see how that applies to code, though.
This is unclear. I have never seen an open source license that was explicit about this. Seems like a grey area.
It's not even clear how often training machine learning algorithms on code results in copyright violations. CoPilot does have a setting to detect and disallow direct copying, but how well does it work?
This legal uncertainty is enough that I wouldn't advise using it, but maybe people who use it will be fine?
I found carefully reviewing the suggestions it gave me more work than actually writing the code myself. Granted, I only used it for a day, but many of the suggestions were subtly wrong, needlessly inefficient, or used outdated/deprecated paradigms or standard-library constructs.
I only used it for a language I'm very familiar with. I'd be a lot more hesitant using it for a language I'm less familiar with because I won't be able to spot the problems so easily.
I’ve really found no use for it at all. It doesn’t understand the codebase it’s being used in. I can’t tell it to write a service that gets data from another internal microservice, oh and make sure you do it in the same way the other services are implemented so that this passes code review… it can cough up slightly wrong answers to leetcode problems, but who has a job where that’s useful?
I've found it extremely helpful in writing highly repetitive code that's too complicated for a Regex find/replace. For example, I used it when writing a recursive descent parser in Rust for a hobby project.
I wrote the grammar in a comment at the top of the file, wrote and imported the AST enum, and wrote the first production. After that, I just prompted Copilot and it worked its way down the grammar, producing the parser functions one at a time. The CLion integration was able to consider the imported data structures as part of the prompt, so it even stored everything in the correct AST nodes.
For something like that, it's easy to verify that it did it correctly (through visual inspection and testing), and it allowed me to write the entire parser in about 2-3 seconds per rule.
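For anyone curious what that workflow looks like, here's a minimal sketch of the same prompt structure, transposed from Rust to Python; the grammar, AST types, and the single hand-written production are illustrative stand-ins, not the actual hobby-project code:

    # Grammar (the comment that seeds the completions):
    #   expr   := term (('+' | '-') term)*
    #   term   := factor (('*' | '/') factor)*
    #   factor := NUMBER | '(' expr ')'
    from dataclasses import dataclass

    @dataclass
    class Number:
        value: float

    @dataclass
    class BinOp:
        op: str
        left: object
        right: object

    class Parser:
        def __init__(self, tokens):
            self.tokens, self.pos = tokens, 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def advance(self):
            tok = self.tokens[self.pos]
            self.pos += 1
            return tok

        # The one production written by hand; the tool is then prompted to
        # continue down the grammar, one rule at a time.
        def expr(self):
            node = self.term()
            while self.peek() in ('+', '-'):
                node = BinOp(self.advance(), node, self.term())
            return node

        def term(self):
            node = self.factor()
            while self.peek() in ('*', '/'):
                node = BinOp(self.advance(), node, self.factor())
            return node

        def factor(self):
            if self.peek() == '(':
                self.advance()
                node = self.expr()
                self.advance()  # consume ')'
                return node
            return Number(float(self.advance()))

    print(Parser("2 * ( 3 + 4 )".split()).expr())

Each production mirrors one grammar rule, which is what makes the pattern so predictable for a completion model, and so easy to verify by eye.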
I don’t think they mean that it can’t, just that the better way to think about the advantages it gives an experienced engineer is more along the lines of “autocomplete v2”, i.e. a keystroke-saver.
Yes, because oftentimes, while I'm unable to recall the exact idiosyncratic keyword incantation that I need, Copilot will retrieve it automatically. This saves me a context switch to the MSDN docs or Stack Overflow.
It automates a great deal of boilerplate crap that you have to do especially in web frameworks such as angular.
I can picture autogen boilerplate being collected and distributed in versioned "community expansion packs" to popular languages, and I'm not looking for fragmentation like that in my tools. I really don't want my IDE involved like this.
I'd rather see Copilot used to expand existing libraries. Pulling potential additions to your own library off of Copilot would be an interesting twist on the situation. A hacktoberfest-alike based off this would be weird.
Nobody is forcing you to use it... Others like myself find it useful and to be a huge timesaver. If it doesn't benefit you, then just don't use it. Why must people be so vocal about not liking something? I get that you don't think it would be useful for yourself, but it sounds like you've never used it, yet are against it enough to come bash on it in a thread.
I think you should give it a try and see what you think about it. I was hesitant about it at first but was very surprised at how much time it could save me from having to look up things on Google. I've found it especially useful when I'm switching to a language I may be less familiar with. I can understand the basic logic of what I want to do, but would have to spend time looking up how to do it in this specific language. Orrr... I just have copilot help me out and generate such a solution.
It's obviously not going to be a tool that is applicable to everyone. Just like how many manual labour oriented contractors have a bunch of tools, each of them may have their own set of tools that slightly differs from the other person. That is okay, and there would be no need to try and bring someone else down for their choice to use a certain tool.
This is a good example of why I dread Copilot: even if Go specifically couldn't express this any more concisely, there is a language that can and Copilot's very existence makes it less likely for that other language to be used as much as it deserves.
Besides, the generated example seems to be missing code to gracefully handle the case where len(filtered) is zero. Maybe there's a precondition that prevents that from happening or maybe a division by zero is exactly what you'd want, but at face value it looks like the bot did a rush job.
Zero is gracefully handled; the mean of a zero-sized set is best represented by NaN, and this would be idiomatic in most languages' IEEE754-style handling.
Saturation is not. This is what really bugs me: If I'm going to drag in a billion GPUs of external computation (or a dependency, which is basically the same thing but with human brains), I want it to provide the hard algorithm I can't write, not the easy one I can. I am not limited by typing speed.
Agreed about saturation and the choice of variable name, but the code would trigger a division by zero and not result in NaN: https://go.dev/play/p/vYm4tSNEJ7M
(Also, in, say, Ruby and JavaScript, 1.0/0.0 is Infinity and not NaN.)
Your playground link shows a build error. 0.0/0.0 at runtime will be NaN. And in basically every language, 1.0/0.0 is Infinity. But we're talking about 0.0/0.0.
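(Side note: Python, the language of the MaxDict example upthread, is one of the exceptions; float division by zero raises instead of following the IEEE754 NaN/Infinity convention, so an empty-mean bug surfaces as an exception there:)

    import math

    try:
        0.0 / 0.0
    except ZeroDivisionError:
        print("Python raises instead of returning NaN")

    nan = float("nan")      # NaN itself is still constructible
    print(math.isnan(nan))  # True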
Both good examples of coding where you should be thinking instead, though.
I think if anything there is far too much thinking going on here, for the tiny example I copied from the window I literally already had open with the function I was working on.
For what it's worth, Copilot (correctly) inferred a loop variable called "detection", I imagine based on similar usage earlier in the function. And there is already a conditional in place to prevent invalid operations; if I remove it I see a new suggestion:
if len(filtered) > 0 {
This tool is far from perfect, but it very much sounds like you folks haven't used it. If that's the case, I would encourage you to research it like any tooling and draw informed conclusions about its applicability instead of making assumptions.
Not until there's a setting that can guarantee the automcomplete is based on verifiably license-unencumbered source that is automatically tracked in some kind of sourcemap that tells me which parts of "the code I didn't write" comes from which other project and file inside that project.
Until then, copilot is a giant liability that ensures I can't use it for code that my company ends up owning, nor can I contribute code I write with it to literally any open source project because in a very real sense: I didn't write it. I just assembled it from parts unknown, and those parts may end up being lawsuits.
As a hypothetical, what would that case actually look like? I'm suing you because I have a strong belief that part of the codebase of your personal project was assembled from code I wrote and didn't license permissively, so now I'm claiming ownership?
Obviously IANAL so this is largely conjecture, but until we _actually_ see how this would play out in court, I'm leaning towards this being less of a legal issue than folks here act like. For personal projects, I'd say the likelihood of some other engineer reading your code, noticing a similarity or duplication, and dragging you to court for it is near 0.
- "you were hired to write code for us, not to use an autocomplete service that makes us liable for both copyright and patent lawsuits, I hope you like getting fired."
- "as per this project's license, we can only take code on board that you contributed under our license, but an audit shows that your PR/MRs contain tons of GPL/MIT/Whatever licensed code instead. We're going to have to back all of that out, and we're going to revoke your contributor status"
- etc.
If you don't know where the code in your autocomplete comes from (and Copilot can autocomplete large swathes of code), then literally anything that comes out of "you didn't write this code" may apply: from fraud (depending on what contract you signed), to trademark infringement, to license violations, to even just simply misrepresenting your skills to an employer. As with all things, it's a sliding scale, but just because the majority of incidents will be on the benign part of the spectrum doesn't mean the litigating part doesn't exist, and that's what your legal department plans for.
Work for a big company? Good bet you're not allowed to use copilot. And depending on the company, not even "for your personal projects" because you might accidentally read someone else's license encumbered code that you would not have come up with yourself and may now open your employer up to "you stole our ideas instead of properly crediting/paying for licenses".
I would also be comfortable with a service that was willing to broadly indemnify me as the customer from copyright and patent claims arising from code generated by their service. I doubt that will ever happen either.
The grind you have to go through in a "Google-style" interview has nothing to do with the actual work you normally do. It's done to show 1) you will do large amounts of boring work without questioning the need for it, and 2) you have the fundamentals, and when using something you can grok what's going on under the covers. Nobody needs to implement a sorting algorithm until they do; the risk of NIH and reinventing the wheel is high, and when you do, you need to have a good understanding of why.
Nobody at Google wants you to do boring work without questioning the need for it. That would be a waste of money and time.
Hard interview questions are asked for the same reason hard questions are asked in SAT tests. The assumption is that if you can do a hard thing in the interview (implement a mutex using a semaphore or whatever) then you can probably write simple code too.
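For reference, that exercise is small; a minimal sketch in Python, where a binary semaphore is essentially a mutex already (the class name is mine):

    import threading

    class SemaphoreMutex:
        # A semaphore initialized to 1 admits exactly one holder at a time,
        # which is all a mutex is.
        def __init__(self):
            self._sem = threading.Semaphore(1)

        def __enter__(self):
            self._sem.acquire()
            return self

        def __exit__(self, *exc):
            self._sem.release()

    counter = 0
    mutex = SemaphoreMutex()

    def bump():
        global counter
        with mutex:
            counter += 1  # critical section

    threads = [threading.Thread(target=bump) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert counter == 8

The interview follow-ups usually go the other way (ownership tracking, or building a counting semaphore from mutexes and condition variables), which is where it stops being trivial.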
A few years ago I was working somewhere that had a low hiring bar. We were using an opensource graphql library to interact with some service. The graphql library didn’t expose some queries we needed, so I forked the library and added those queries.
After I left I heard through the grapevine that they were very unhappy with my work. They didn’t have anyone on the team who was skilled enough to maintain the code I wrote, so I left them in a bad place.
In retrospect I’m still not sure what else I could have done to solve the team’s graphql problems. Not that, but what?
I’ve seen many instances where a low-skill programming place either lives with a bug, or alternatively, adopts a solution that involves meticulously copy&pasting a workaround into 300 places.
Having said that, I know great, intelligent programmers, some of whom worked on compilers, who are simply bad in a whiteboard setting. Having a filter is necessary; I’m just not convinced a Google-style interview filters for exactly the right thing.
The Google-style interviews are certainly filtering for something, though. An unfounded theory of mine is that Google's unfortunate eagerness to sunset its products [1] might be a side effect of hiring for people who would rather solve exciting coding problems and further their careers by using X new language/framework than maintain legacy systems and keep backwards compatibility, which might not sound so exciting.
Aren't all FAANG using these style of coding interviews? And if you look at Amazon, they have almost the opposite problem of pushing out too many (AWS) products.
I'm going to respectfully disagree with you on the first point. Do you really need a PhD to maintain an internal JavaScript framework with a handful of users? (True story.) A lot of these hires are made to starve the competition of good people.
The low hiring bar is relative. If all you do is put together CRUD apps, mostly stitching things together, you might not even need a formal CS education. In fact, a lot of people I know are wildly successful because they found their niche. They would have an exceptionally bad time in a whiteboard setting.
Leetcode interviews have maybe just become an example of https://en.wikipedia.org/wiki/Goodhart%27s_law - as long as people aren't aware of them, they're a great signal, but once people catch on, they become a measure of something else.
> After 30 years, actually writing code is perhaps the easiest part of my job. The mechanics is the easy part. The big picture thinking and figuring how to get it all together into a system is the hard part
This is the thing that you won't be learning about in a bootcamp. Coding is easy. Knowing what to code is hard, sometimes very hard. As is keeping it simple.
I code for a living and Copilot can't do the least part of my job, though it is possible it could find uses here and there around the fringes (perhaps not even then).
I'm coding audio DSP, along the lines (real example here) of: "Given that the Dolby noise reduction system is deeply part of the sound of classic seventies and eighties recording, what parts of it can be generalized into DSP processing that produces the same general effects on the sound, and having done so, how should this be tweaked and adjusted to optimize for pleasurable results?"
This doesn't exist on Copilot. Damn near nothing I do for work exists on Copilot, and even if my work (being an MIT-licensed public code library) gets slurped up into Copilot, Copilot has absolutely no way to determine whether a 0.8273 in the algorithm should lean towards 0.83 to brighten the sound, why it would do that, whether you'd want brighter rather than darker, or whether you are right in wanting this at all or would do better to go darker, knowing that other things will happen to the listening experience.
Copilot's inability to do these things is much like DALL-E's inability to do art: there is much visualization and a striking absence of purpose or intention. Copilot will do all the boring or common stuff for you. If that's your job, maybe you're not aiming at the strengths of humans; rather, you are aiming at things too easy to automate…
> the hypothesis (propagated mostly by the Google-style job interview) that intensely coding clever for-loops to perform algorithmic magic (for things usually already in the standard library) is the best measure of the competence of a software developer
I think the hypothesis is, rather, that people who can successfully run that gauntlet are smart and motivated. The details of the problems they’re solving along the way are almost incidental.
In my master's program in CS there was a guy unfamiliar with the concept of "find the minimum number in an array". Turns out he had just paid someone to do all of his assignments in his bachelor's.
This is why I care more that you can tell me about the algorithm at a high level, and reasons to use it. I don't care about you writing code outside of a computer. I don't write code on paper and then write it in my editor. I open up my browser, look up the documentation and refresh my understanding of the algorithm to make sure I am not overlooking anything. I do some research to figure out if I even need it, maybe there's some even superior way to do what I'm trying to do in my programming language at that moment.
This happened in mathematics about a decade ago when Wolfram Alpha came out. Lecturers started complaining that Wolfram Alpha could solve assignment problems with worked steps.
In both cases I think this is a real opportunity; we can let students get more quickly to bigger problems and systems thinking by leveraging these tools. It requires professors to start thinking innovatively about how to teach and assess these subjects.
> After 30 years, actually writing code is perhaps the easiest part of my job. The mechanics is the easy part.
Actually Copilot can really help people like you.
Even if the mechanics is the easy part, knowing how to touch type and knowing an editor like vim can help you get your thoughts from your mind into code faster.
Copilot can do the same thing. It can act as a super powerful text editor augmentor to get your thoughts into code faster.
Just because the straight ways of a racing course aren’t the most challenging, doesn’t mean that going faster on them won’t improve your racing performance.
I had a C class in the first year of my bachelor's, and the exam was a paper exam where we had to write code by hand. I thought that was dumb, but now it sort of makes sense.
> Especially since this kind of thing is best done on the job by looking in a good textbook or using the standard library.
The way that job interviews continue to fixate on having a huge amount of information stored in your head, rather than on your ability to be resourceful, is so frustrating. Oh well, maybe it's about time I go back to Leetcode and see how quickly I can grind out a recursive DFS...
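(For reference, the kind of thing the grind produces - a minimal recursive DFS sketch in Python, over a made-up adjacency-list graph:)

    def dfs(graph, node, visited=None):
        # Depth-first traversal of a graph given as a dict of adjacency lists.
        if visited is None:
            visited = set()
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                dfs(graph, neighbor, visited)
        return visited

    # e.g. dfs({"a": ["b", "c"], "b": ["c"], "c": []}, "a") -> {"a", "b", "c"}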
I was skeptical of copilot, but since trying it for two months, I'm really impressed. It's a smarter auto complete. For $10/month it pays for itself in one day.
You have to be careful with the output it produces, as it often looks sensible but is wrong. Especially if it contains numeric constants. But on the whole it speeds things up, the same way good code auto complete does.
There is a Copilot for that. I think you're making programming seem more intricate than it is. It can build a CRUD app. It can build distributed systems. It can write web3 contracts. There's not a whole lot besides that, except the rare structural project that doesn't come up very often.
> I doubt I'm its intended audience. After 30 years, actually writing code is perhaps the easiest part of my job.
I think that’s exactly the intended audience. Copilot just cannot write code for me, but it’s really good at inferring patterns and giving me the next step.
I was not sure about Copilot, but it's amazing. You can put your pseudocode up in comments and it fills out the boilerplate. For things like plumbing info from the db to the front end, it's amazing how much time it saves.
It saves me more time than the monthly cost of the service. I've never tried things like Jedi or copilot before - I definitely enjoy it more than I thought I would.
While it is true Copilot is quite amazing and can produce some very decent snippets, if the exercise you give is specific enough (using a particular database schema, file format, or API), with objectives that are not common (ask the user specific questions and draw conclusions from the answers, or react to events), then it cannot produce a fully working program. I use Copilot regularly now, and it does speed up my work, but I can't do anything with it without understanding the problem at hand.
There is nothing wrong with the student assembling and fixing parts of code that already exist. Doing that requires understanding those parts, how they interact, and what you can do with them. It's not killing tests. Plus, as professionals, we all do that anyway. Heck, I learned by doing that. I'm paid to do that.
However, there is something wrong with the academic exercises I see in the wild: things that are very abstract, requiring far more than programming knowledge, or tasks assigned completely out of context. Students are simultaneously understimulated and given bites too big to chew. It's a terrible way to learn, and to be tested, IMO.
Tests nowadays don't test for knowledge; they test for compliance, or act as a filter instead of a feedback loop. This is not education.
In fact, I've yet to see a school IT course that I didn't find deeply flawed, and I completely understand the students cheating at them so that they suffer as little as possible. They are not given a chance to prove themselves or progress; they are not respected. They are force-fed junk and asked to spit it back all shiny.
I remember when I was a student; I went to 11 different schools because of my twisted life. They all sucked. And I say that as someone who spent half his student life at the top of the class grade ladder.
As I moved as well, I experienced three different Canadian universities' Computer Science Program.
There was only one professor and course that I still remember 20 years later: CS408, Software Engineering, Professor Wortman, UofToronto.
Class project was in four phases, cumulative (basic functionality for the application in phase 1, progressively additional functionality in other phases, frequently strongly interacting with previous phase code).
Here's the kicker: After each phase, you had to swap your code with another team. So you had to pick up somebody else's code, figure it out, and then build up on it.
The few of us who had real-world working experience loved the course and flourished in it. This is what we are training for! This is what programming is like! You are taking real code and building a real thing with it!
About 250 other students signed a petition to the Dean on how this is unfair and awful and they will not put up with it. They were just too used to / spoiled by 16 years of 5 assignments per semester with abstract, entirely separate questions of 1.a), 1.b), 1.c), etc.
All I could think of - if you did not like this course, you are about to not enjoy next 4-5 decades of your life :D
Other than this one course, I can say that I'm a prolific, enthusiastic, life-long learner, and yet my university experience was the absolute dumps - it was far less about learning, and far more about bureaucratic hoops and artificial constraints and restrictions. I was top of the class in some hard courses (generative compilers etc), mid-pack in some of the meh courses, but in retrospect, my life opened up when I was done with academia and could work and learn in the 'real world'.
> Here's the kicker: After each phase, you had to swap your code with another team. So you had to pick up somebody else's code, figure it out, and then build up on it.
This is hilarious and is a good example of how sw engineering works in the real world. Love it.
There are a few issues with this kind of assignment.
The first issue is that it's really hell on the TAs with all of the administrative work needed to do the swapping between phases. Software engineering is already one of the more annoying classes to TA, and this makes it even less palatable.
The bigger issue is one of fairness: your grade in later phases is very heavily dependent on factors outside your control. If you get code from someone who just didn't complete the assignment - and that is probably going to happen once a semester - you are at a disadvantage. Even if there is an option to appeal to the TAs and get the unacceptable code base replaced with an acceptable one, that's still likely a few days' worth of work lost to learning the codebase, discovering its incompleteness, and starting over.
I'll also point out that it's not exactly real-world experience. It is extremely rare that someone will be dumped on a codebase without a prior author remaining on to help them ramp up on the code.
I would expect that code you get from Stage N-1 is functional across a range of tests that are agreed upon upfront.
For the fairness aspect, I would also let students pick any Stage N-1 that is not their own previous N-1. That would also teach them that not all code is created equal (and probably surface approaches/ideas that they can incorporate into their own work).
That is exactly what happened. You got 1% extra mark if somebody picked your code.
Seemed trivial at the beginning of the semester, right?
...did I mention my team had previous real life experience?
We put extensive code and documentation and FAQs online (this was like '99 or 2001, I don't remember, but well before GitHub etc.), and committed to 7-day support of our code base. We got 18 out of 22 picks after phase 1.
Of course the competition was much stiffer in later phases, but that too meant about three valuable real-world lessons learned for everybody!
Yeah, that statement was the only thing that stuck out to me in an otherwise pretty good writeup that I mostly agree with.
The only times I can think of where the previous author was able to help were either during onboarding (where you are expected to be given smaller-scope tasks, with the context of that code predefined for you) or during work on certain specific singular projects as a team.
But at any big tech company, you are probably going to be jumping around more than a few different codebases (more often than not large, and worked on for years), even if you work on a single team/product. And most of the time, you are expected to figure out as much as possible on your own, only reaching out to people when you get stuck or hit a specific issue. Other than that, you are expected to be able to build at least a janky proof of concept with as little help as possible.
Most of the time, the help I urgently needed was quite targeted and not really about the code itself overall. It would be something that is extremely difficult to guess on your own without being explicitly told (absent docs/code comments) - things like "everything looks fine, but I get stuck at this auth step, what's wrong there?" - "oh yeah, you gotta auth as a part of this specific group, so you need to be added to this security group in this config file".
I am not trying to say that I am some genius who can easily build the structure and overall mental model of how a codebase works all on my own, with only the specific "gotchas" (like the one mentioned in the paragraph above) giving me trouble. Not at all. The only way I can mentally map the structure of a codebase and how the whole system operates (even as a rough/simplified model) is by reading the design docs/documentation (very helpful, but far from sufficient on its own - more like supplementary material of variable quality) and, most importantly, by making small code changes and debugging with breakpoints to see how it all flows and where. IMO, understanding code through debugging and reading deeply into it is an irreplaceable step in understanding any section of a significantly-sized codebase.
I had a similar software engineering course experience to the one above with a lot of things out of my control.
I got an A by being humble, reflective, and proactive, showing the instructor that I was looking for solutions to the problem, and giving an appropriate amount of time and effort to the class.
I think you could remove a lot of the fairness and logistical concerns if, at stage N, one solution were chosen and every group had to start from there.
Now for pedagogical purposes it might be that the professor chooses the best solution or a substantially worse solution.
There might be some concern that this unfairly advantages one group over the others but it’s not completely clear to me that it does.
Thanks for the note and perspective. Some clarifications:
1. As noted, each team got to review and pick the code they would take after each phase, as long as it wasn't their own. I remember drawing a graph of the picked-code lineages, and it was fascinating - evolution / survival of the fittest at work. So it wasn't something one had no control over.
2. I don't know why it would make TA work either better or worse; we may not be fully understanding each other. You have 22 programs to put through a standardized test suite based on formal requirements that all teams had to code to. It was far easier to grade than any other assignment I ever had.
3. I guess different people have different real-world experience. All of COTS/ERP and a lot of enterprise applications are somebody else's code. Millions of lines of somebody else's code. So that's the norm in my life. YMMV :)
> It is extremely rare that someone will be dumped on a codebase without a prior author remaining on to help them ramp up on the code.
The way this is more likely to work out IRL is that the last person who actually understood the design - not just the what's, but the why's - had long since moved on, and the "prior author" that you have access to can, at best, tell you where the dragons are.
I don’t know the size of your university projects, or the speed you worked at, but most of my university projects could be understood in a day. The coding part was by far the most time intensive thing at the time.
it's not rare at all for previous authors to just disappear. do you know what the turnover is at large companies??
reading the code is not the issue though, we also lose context. WHY was it done that way? was it a product decision, due to time constraints, or technical complexity? that should be documented but often isn't
My software engineering teacher did something similar. We had a giant group project that had to be worked on by the whole class. He split the class into 5 groups of 5, and we divided the work by group, and groups had to work together to integrate with each other.
Hats off to your prof. This is such a brilliant idea for how to teach, anchored in a solid reflection of what “doing the job” is like day-to-day outside the academic context.
Github Copilot won't give students anything they couldn't already google.
It seems to be a natural part of aging, to start to complain about education and kids these days because it's not the same as when we were in school.
I try not to do that, because I never forgot how my parents generation said the exact same about us when we were in school, and I guarantee my grandparents generation complained when my parents were in school.
Cheating is getting easier by the day, but the bottom line is that if the student is cheating, they're probably not absorbing the coursework (or if they are cheating and absorbing the coursework, there might be something wrong with the coursework).
The most effective policy might not be to try to mechanically prevent cheating, but to explain that this is information that you're going to need if you actually want to go into the field, and if you're using Copilot here you are damaging yourself in the long run.
Maybe it's not fair to give as good grades to students who are cheating on assignments as those who are actually doing the assignments, but at the end of the day is the purpose of the course to measure the student or to teach the student?
Adjunct instructor here. Before I came into the program, cheating was on the rise. When given programming assignments, students would go to GeeksforGeeks and download solutions, some of them mostly correct, and submit those for credit. One remedy was to make programming assignments harder and harder. Eventually, they got so hard that we were asking people new to computer science to implement a full arbitrary-length integer calculator using nothing but a single-tape Turing machine (in Java, not BF).
Eventually, some of us came to the realization that you can never prove that somebody is cheating. People have been known to hire tutors to do their assignments for them. There is just no end to it.
As a result, we evaluate students on four dimensions: programming assignments,* homework assignments, class discussions, and group participation. Those last two count for a small but non-trivial percentage of the grade, and are usually enough to separate and identify those who understand what they are doing from those who are just "following along" with solutions they find on the internet.
* One addition to programming assignments includes an analysis write-up: tell me in human words what is happening, why you see that effect, and what the running time is. And separately, comment your code to tell me how it works. Those two parts count significantly towards the grade.
Any other professors here, what have you found that works?
> one addition to programming assignments includes an analysis write up.
when i was interviewing candidates this was the fastest way to filter those who understood from those who didn't.
ask someone to talk you through how to solve some real-life problem. to look at a stack trace and describe what they see, or to run a profile and trace through the cause of a hotspot/contention. show them an issue and have them live debug to root-cause and fix. it's okay if they use the internet, SO, etc. -- that's how we all do it. see what they have to look up! just listening to the amount of depth someone can verbally communicate during novel problem solving (including additional questions they ask you) turns a 60min interview that wastes time into a 10min one that tells you if you can move on.
a favorite one of mine was to ask the candidate to describe in as much detail as possible what happens between a keystroke typed into a google search box and the search results appearing. the diversity of replies is fascinating. some will tell you "google returns results from its database", others will ask you if you want them to first describe how the keyboard works at an electrical level through the USB or bluetooth driver stack.
"Never memorize something that you can look up." --someone smart
That's a great way to evaluate someone! Individualized and personal, uses concrete problems, no artificial restrictions.
It's a shame our mass education systems cannot use it. They simply cannot apply such a humane method to hundreds or thousands of students. They are reduced to applying their bullshit test questions because it's the best they can come up with at their scale.
Ugh I had someone ask me to debug a verbally communicated error message in an interview.
Like, they picked some random port-configuration issue that had stumped them for days in the past and thought: cool, let's remove the internet and the command line and ask people to solve it on the spot.
I offered plenty of plans for how I would go about debugging, but I didn’t know the one simple trick.
I agree with trying to use challenges that are closer to the real work, but it’s really hard to do that without over-testing domain knowledge.
For instance, I ask candidates to do some asynchronous control flow. These are all candidates with JS listed as their best language, and I offer to let them look things up or to show them the promise APIs they need, but a certain percentage just refuse to engage with the problem because they don't have the domain knowledge and seem to feel they're being tested unfairly.
The problem does a really good job of showing a candidate's grasp of all the tricky parts of JS, so I keep using it despite the drawbacks.
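To make the shape of the problem concrete: the real exercise uses JS promise APIs, but a rough Python asyncio analogue (with a made-up fetch stand-in) shows the kind of concurrent control flow being tested - run several operations at once and handle each failure individually:

    import asyncio

    async def fetch(name, delay, fail=False):
        # Stand-in for an async API call (hypothetical).
        await asyncio.sleep(delay)
        if fail:
            raise RuntimeError(name + " failed")
        return name + " result"

    async def main():
        # Start three calls concurrently and collect results/errors,
        # roughly what Promise.allSettled does in JS.
        results = await asyncio.gather(
            fetch("a", 0.10),
            fetch("b", 0.20, fail=True),
            fetch("c", 0.05),
            return_exceptions=True,
        )
        for r in results:
            print("error:" if isinstance(r, Exception) else "ok:", r)

    asyncio.run(main())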
> Ugh I had someone ask me to debug a verbally communicated error message in an interview.
i don't mean that the problem is only verbally communicated. i mean that the debugging process the candidate does is verbally communicated.
in the scenario you're describing you'd be sitting at that machine with access to the internet on another machine. like, you know, in real life.
> The problem does a really good job of showing a candidate's grasp of all the tricky parts of JS, so I keep using it despite the drawbacks.
async stuff in js is pretty good, but it's also easy to go too deep on it with some bizarrely poorly architected code. many js devs still fail to grasp all the implications of closures, or how to avoid memory leaks, or how to work with the GC rather than against it.
Cheating on programming assignments has been rampant forever at every undergrad institution I have experience with and somewhat present among graduate students. My experience spans about 25 years in that space now.
When I taught an introductory class, I gave open-book exams with no laptops or phones allowed. About 1/3 of the class was unable to write a syntactically correct for loop in Python, despite our textbook being an introductory Python-based book chock full of examples. It was pretty clear that a subset of students were either working together on project assignments or out-and-out having someone else do the assignments for them.

I mainly compensated for this by having a large part of the grade be a 1-1 meeting with me in which the student talked me through their code. That, plus the exam, had the effect of making cheating somewhat less worthwhile. But this approach simply doesn't scale these days - my max class was around 32-33 students, and the time I spent meeting with students was insane. I haven't taught in a few years and understand that class sizes of 150+ are not uncommon. I could never have used the same approach with that many students. I probably would have doubled down on exams and made the exam length darn near impossible to finish without knowing the material well enough to work without referring to the textbook.
I was teaching more from a practice-based viewpoint, so mostly I came up with "weird" projects that mirrored problems I spend time on (data cleaning, using existing libraries to do neat little things, and having students pick a personal project to implement that I helped them scope appropriately).
We bitch and moan about interview whiteboarding, but given grade inflation it's kind of hard to trust university credentials. Grade inflation was disheartening in that the worst students didn't really get a poor grade. But I also didn't have any problem with the top 1/3 of the class getting a very high grade - these students were motivated, understood the material, and often impressed me with how far they got in a single semester.
I recently worked at a large state university. I remember a conversation with an instructor in a master's-level operating systems course. One of the assignments was to implement a simple filesystem. The amount of cheating was insane. He started academic dishonesty proceedings against many of the students, but the department pressured him to "work it out."
He had to basically interview each student individually and ask for an explanation of the code. Most could not explain what their code was doing (because they didn't write it).
They were all given an opportunity to resubmit original work. Many could not do it, because they simply didn't understand the basics of programming, or the language they were using, or how computers even worked.
That is just awful - I have similar stories. To me, it seems so unfortunate that students are just in this desperate grind to get all A's.
But I'm on the other side and don't have the stress of "my whole life depends on looking like an all-star" at university. Like, what do you do if you leave school with such a severe lack of actual skill?
OTOH, the last year I taught, every student in the program I was an adjunct for graduated with a job offer in hand. Maybe it all works out fine.
The majority of CS Masters students in the US don't have a BS in CS, or have one from a non-top-tier overseas university. Even more so than undergrads, Masters students in CS are there for the credentials, so I would expect cheating to be worse.
In reality there's only a small percentage who are genuine creators. The rest are copycats, in part because the opportunity to copy exists.
The other side is real-world pressure for good grades - and when you're young, you want to party and experience life. You finished that assignment by copy-pasting it? It's still better than not finishing it.
However, you get that anywhere in life. Why is - sorry, was - Stackoverflow so popular?
Why is religion still a thing?
People want easy answers to hard questions.
I think grading on multiple dimensions makes sense, though scaling is a challenge.
As for the analysis write-up: it turns out language models can also explain what code does really well, in convincing vernacular. They generate reasonable comments.
As an illustrative example, I just typed a little test into Copilot. I only typed the first line and the text that starts with "A paragraph explaining...", and typed a "#" for each comment. Copilot filled in the rest.
    def compute_sum_recursively(n):
        """A paragraph explaining what this function does and why, and explaining its running time:
        This function computes the sum of the first n natural numbers.
        The running time of this function is O(n).
        """
        # Base case: n = 0
        if n == 0:
            # The sum of the first 0 natural numbers is 0
            return 0
        else:
            # Recursive case: n > 0
            # The sum of the first n natural numbers is n + the sum of the first n-1 natural numbers
            return n + compute_sum_recursively(n-1)
> but also class discussions and group participation
I really, really hated those parts of school, second only to group projects. I do appreciate why teachers want them, and I hear you about how they can surface those who are cheating, but I do wonder if it causes the class to suffer for the alleged misdeeds of a few.
School assignments are so different from the dynamics of a real workplace that I find this point moot. I always hated them with a passion, but OTOH I always enjoyed team math and programming competitions - situations where you have to do stuff that's actually hard, and you work with people you actually respect and trust to do their job.
For me at least, the dynamics of "School Group Project" are basically completely different and separate from "Work Team". I happen to do well in both, but I do not enjoy both: the motivations and structure and dynamics and goals and timelines of "School Group Project" are so much more artificial and ultimately pointless - and obviously so to everyone from the start.
Same thing with discussions - I happen to be an engaged student, usually front row, hand always up, discussing with the instructor and team and colleagues and everybody. I like to be engaged and figure things out together. But I'll never be half the developer my colleague is, who barely speaks a word unless asked. He's friendly, meek, polite, an excellent team player and developer - he just does not initiate conversations, especially in group settings. I can imagine he'd have a nice big 0 in that category if it were a grading criterion.
Which is not to criticize the professors who try to use group discussions; just to point out that this is not a solved problem at scale, any more than interviewing/hiring is. There's too much humanity and too few absolutes :)
In my experience, yes, since much like school, the team gets the team's grade if you're trying to ship a product. There can be room for individual merit if the company has bonuses or a separate reward structure, but my experience is that expecting everyone on a team to pull their weight equally is a fool's errand (even when the reason is quite legitimate, like a family emergency or some other "good reason").
In school it never bothered me having freeloaders get the same grade for my work, so long as they stay out of my way and don't "help." There's a very famous mechanic's poster: "repair: $5, if you watch: $20, if you help: $1000"
By the time teachers require recorded timelapses, we will have AI that can generate text along with a video of a user typing it. IMHO, they should start requiring timelapses right now.
That requires either trusting that students haven't edited out the evidence of cheating, which is pointless, or forcing them to use specific, proprietary software to record the video, which is unacceptable.
A timelapse can be recorded online, just like a security camera uploading to the cloud, so it would be impossible to edit later. Cheating would be visible as sudden changes in the text of the program.
That is, as I said, unacceptable. Among other things, it means students can't use their editor of choice, and students with disabilities who use assistive technologies will be much more likely to be flagged as cheaters. It also won't stop anyone from cheating by doing the exercise with Copilot and typing it up a second time for the time lapse.
It would prevent people from simply downloading solutions from the internet. A lot of easier assignments are fairly well-trodden and already have solutions floating around.
I teach data structures and algorithms among other subjects.
My assignments are similar to everyone else's (CLRS, JeffE, Leetcode), yet different enough that simple copy-and-pasting will not work.
All I ask is that they comment their code, analyze complexity and cite sources.
I've been using Copilot for the last year in my live coding sessions. It saves so much time (especially on comments). If students want to use Copilot, I don't see a problem. It is just another tool.
Copilot is near useless without decent programming knowledge.
80% of the time it is fantastic.
About 10% of the time Copilot gives you subpar snippets: O(n^2) instead of O(n), etc.
The other 10% of the time it just gives you wrong snippets.
Just like GPT-3, once you go beyond a paragraph it loses context.
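To illustrate the subpar case with a made-up example (not actual Copilot output): duplicate detection written the quadratic way versus the linear way. Both are "correct", which is exactly why the slow one is easy to accept without noticing.

    # Quadratic: compares every pair, O(n^2) time.
    def has_duplicates_quadratic(items):
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    # Linear: one pass with a set, O(n) time (for hashable items).
    def has_duplicates_linear(items):
        seen = set()
        for item in items:
            if item in seen:
                return True
            seen.add(item)
        return False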
I do want to say that those criteria exclude neurodivergent folk (who, for many reasons, may never even have been diagnosed).
Honestly, I think moving away from grades is going to be the way forward. Part of this is also making university, etc., cheaper, so you're not financially penalized for taking a class more than once.
Mass-produced education is like a train: once you fall off, you'll get hurt trying to get back on. I've come to the realization that, given the inhumane pace, difficult and poorly-taught curriculum, and ambiguous/out-of-scope homework assignments in a top-tier university's CS program, the majority of my university classes were either not worth taking, or I'd have been better off self-studying if I found an intrinsically rewarding project (definitely fewer panic attacks and less psychological trauma, sometimes even similar or deeper learning). Perhaps I'd have had a better time at a less prestigious university that doesn't see its purpose as weeding out and breaking lesser students.
> Part of this is also making university, etc., cheaper, so you're not financially penalized for taking a class more than once
I'd rather people had better opportunities in life that didn't depend on education. There's a lot of people who hate school and are just going through the motions because they believe it's the only path to success in life.
Evaluation is vital, but the current model used in education today is not evaluation; it is punishment. Failed to get the answers right? You are punished. You lose points. Your GPA plummets. You could even fail the class and have to take it again, which means the punishment is not only social but economic.
There are real-life consequences to this kind of evaluation. Huge consequences. Students cannot afford to make mistakes. There's huge pressure and anxiety before and during a test because the stakes are so high. Failing at this stuff can cost someone their future: future jobs, future career - even the student loans that enable them to study in the first place may only be available to students who get good grades.
Well, if you have to meet certain criteria to get a title, then "punishment" is exactly what I'd expect to see.
Isn't that the whole point of all this? You can't call yourself a med doctor if you have no idea about being one, if you don't know about the key knowledge areas.
Therefore you're not a doctor and don't get to have that title.
I think during education, yes; during execution of the skills learned in education, not really - but it also depends.
A lot of people hate how tech interviews are done, right? Because they have nothing to do with the job the majority of the time. Those are places where I can see improvements.
However, some jobs do genuinely need you to interact with a diverse set of people all the time, for those you need to make sure for the sake of your company & the candidate that they match the criteria. Other jobs, you need only maintain strong relations with small groups, and the requirements for those are a lot easier to hit - even the most introverted people can do well in smaller groups that don't change a lot.
I do really think, though, that a lot of the evaluation that happens during education creates a confrontational relationship with learning. You're constantly judged on your _ability to learn_, and that has far-reaching impacts beyond the first two-ish decades of your life. You get scared of making mistakes, you get scared of taking risks, etc.
So: evaluate when it's important for the function and safety of the job and the candidate. Skip evaluation when it's evaluation for the sake of evaluation.
> The most effective policy might not be to try to mechanically prevent cheating, but to explain that this is information that you're going to need if you actually want to go into the field, and if you're using Copilot here you are damaging yourself in the long run
I don't teach CS, but I teach an online course in a STEM field. I give out a similar message ("you're only hurting yourself in the long run").
My anecdotal evidence is that my warning does nothing--I have trap questions on various quizzes/exams and the frequency of cheating hasn't changed with or without the warning.
My suspicion is that the students who cheat feel this way: "This is just some bullshit hoop that I have to jump through, so it's okay if I cheat. I'll figure out the important stuff when the time comes."
I mean, they're delusional, but I understand the mindset for cheating.
edit: amusing story.
Faculty members in my department are required to take an on-line course on how to handle hazardous waste. There are a few hours worth of videos to watch and a test that you need to pass at the end.
Last year, one of the faculty members took the test, compiled the answers, and emailed them around to everyone else (to save them the time on this "bullshit task").
I said to him, "Isn't this precisely the kind of shit that makes every faculty member angry when the students do it?"
> I mean, they're delusional, but I understand the mindset for cheating.
Parental (or other similar) pressure can indeed be quite strong. Absent that pressure, they wouldn't be there in the first place, so perhaps it is not so much delusion as a rational response to their environment? If they cheat their way through, the parents are happy, and then they can return to the life they would otherwise have lived. The time lost is unfortunate in some respects, but at the same time, if you're cheating, the time investment likely isn't that great - an acceptable cost given the circumstances.
> the kind of shit that makes every faculty member angry
I find it curious that the customer using the product in an unintended way would be upsetting to the vendor. What drives such emotions? In my businesses, I couldn't care less how the customer uses my product. If they're happy, I'm happy.
> I find it curious that the customer using the product in an unintended way would be upsetting to the vendor
Because your customer, in so doing, is stealing from your other customers and undermining your reputation as a vendor.
Plus, you're disheartened because this interaction is a waste of time, regardless of the fact you get paid: if money was all you cared about, you wouldn't be in education.
> Because your customer, in so doing, is stealing from your other customers
That's always going to be the case, though. Imagine you built software that helped businesses find customers. Used the intended way, it brings each customer a small number of new customers each day, distributed across your customer base. Now someone finds a way to use your software in a manner you never envisioned, and they're attracting all the customers, taking from those who used the software as expected.
But, really, who cares? You just pivot to embrace the new way and carry on with life. That's your market now. It's fun and all to want to be the elevator operator of old, but at some point you have to realize that nobody cares about your nostalgia. Markets change.
> Plus, you're disheartened because this interaction is a waste of time
But it is not. The students are only there to abate social pressures, and cheating their way through gives them what they want out of the deal. Very few students care about the academics.
> if money was all you cared about, you wouldn't be in education.
If you don't care about money, why are you marketing the social need so hard? This is like a car manufacturer advertising their cars as great getaway vehicles for committing crimes and then lamenting that criminals use their vehicles to get away...
Before the ridiculous, albeit successful, "if you don't go to college you will end up flipping burgers" marketing campaign, college attracted only a small number of students who were serious about learning, and all was well with the world. Colleges still spend an inordinate amount of time justifying why the cost is worthwhile, to keep up the social-need illusion.
It's not hard to revert to the natural state. Fact of the matter is that you (not you personally, perhaps, but the group) don't want to.
You picked a very specific example of customer behaviour, with aspects of a zero-sum game, that is by no means "always going to be the case".
And yes it's a waste of time! If you want a fake credential then it's much cheaper and easier to just lie on your CV. It's not like many employers will check your certificates, at least not in software. I'd really rather people did that than waste my time first trying to teach them and later handling their academic misconduct cases.
If I'm marketing social need, this is the first time I've heard of it ;-)
It is, but it's an acceptable cost to keep the social pressures at bay. There are a lot of things in life that are ultimately wastes of time but worthwhile for maintaining a civil society. Such is life. If you are happy to escape to the deep woods, away from all others, where every moment of your time is purposeful, good on you.
> If you want a fake credential then it's much cheaper and easier to just lie on your CV.
What would that accomplish? The business world couldn't care less about your past achievements. They don't care that you performed in a play, rode a horse, swam in the ocean, or went to college. There is no value in even putting a legitimate degree on your CV, let alone a fake one. If anything, it detracts from your standing, as it shows that you're so useless you couldn't come up with anything more relevant to share about yourself.
Instead, it is parental pressure that puts people into these schools when they otherwise shouldn't be there. And those parents will turn up at your graduation. Once you've thrown your cap in the air, so to speak, nobody will ever think about it again. Getting to that one moment in time is the so-called hoop that one is cheating for.
> I find it curious that the customer using the product in an unintended way would be upsetting to the vendor. What drives such emotions? In my businesses, I couldn't care less how the customer uses my product. If they're happy, I'm happy.
There's a lot more going on here than maybe you realize.
First, the students are not your only customers. The administration is also one of your customers. If you make the course TOO difficult and fail everyone, your "administration customer" will find a way to get rid of you.
Conversely, if you make the course too easy and everyone gets to cheat and everyone gets an "A", then many students who pass your class may not pass various professional certification exams that are more rigorously controlled (i.e., no cheating). If that happens, then the school may lose accreditation for programs that you are associated with (i.e., if all the nursing majors fail their certification exam, your nursing program may get nuked). Then you get nuked.
Third, whether you like it or not, your relationship with your students is something you have to manage. Students who don't cheat have a tendency to get really upset if they find out you turn a blind eye to cheating. Happy cheating customers lead to unhappy honest customers.
Fourth, if your fellow faculty members feel like you're going easy on the students to get favorable reviews (or because you're lazy or because you hate confrontation), they can make life unpleasant for you, too. They also aren't too happy if they teach upper level courses and you send a bunch of garbage students their way.
It seems that you're not managing your customers well. If, in any other business, you needed to adjust the behaviour of some customers to keep other customers happy you would provide incentives, like a price discount. You don't need to fail the cheaters; you just need to make not cheating more appealing, to keep the non-cheaters happy. How are you doing that? Is there a financial discount for those who have shown they don't cheat (or, conversely, a higher rate for those who wish to cheat)?
If not, you might want to try a new business. You may not be cut out for the one you're in. The business world is not very forgiving, nor should it be. Those who can't adapt need to perish.
> It seems that you're not managing your customers well. If, in any other business, you needed to adjust the behaviour of some customers to keep other customers happy you would provide incentives, like a price discount
Spoken like someone who hasn't done the job and is confident in his ignorance.
Yes, that is what "seems" implies. It suggests that there are gaps in the information and that one should come back with more details. There were also directed questions included to help guide one to where information was lacking.
Curious that a primitive emotional response has transpired instead. Given the greater context of discussion also about emotions, is there something about the job that attracts this type of behaviour?
That certainly would be true within the public education (primary, secondary) system, where the government is the customer. Hence why attendance is mandated. However, typically college level students are initiating and fulfilling the transaction, thus they are the customer. They offer up money in exchange for keeping arbitrary social pressures at bay. In rare cases they offer money in exchange for learning things.
It is possible for the customer to also be the product. Especially in the age of salable data, that is becoming more and more common. However, when that is the case, there are incentives given to the customers to shape them into what selling them as a product requires. In this context, that would mean something like giving discounts to those who don't cheat, which I am not familiar with any college doing, so... That brings us back to: why would a vendor get emotional about the customer not using the product as intended?
Some courses really are required bullshit for some students. A lot of professors don't realize this, but if you teach a required course, it is simple math: some of your students just don't need to learn what you are teaching. Sure, a lot of students do need it and don't think they do, but a good fraction of the class genuinely does not need the content you are teaching.
The only remaining question is how to engage these people. I have seen good approaches to that problem and really, really terrible approaches, and it seems that most professors go for the terrible approaches. Unfortunately, engaging people who don't want to be there is a lot harder than just having a bunch of required tests.
These students are not all wrong that some courses are BS hoops to jump through. Meet them in the middle. Teach them something.
I know a lot of what I teach is not really relevant to students who aren't chemistry majors--I even say this UP FRONT at the beginning of the course. I tell the students that they should think of the course as a way of assessing whether they can think logically and critically about "weird things." Because life is full of weird things that need logical and critical analysis.
In general, they appreciate my candor.
That being said, in my experience, the students who cheat often delude themselves into thinking that they understand the material--and they really don't.*
Then they seem shocked when they take a certification exam (where they have to leave phones at the door and are closely monitored) and they bomb the exam.*
* I'm speaking in generalities here. I'm sure there are some cheaters who are being "smart" about their cheating.
In the time I was at school, I heard of a lot more cheating than would be justified solely by people conserving time. I have recently seen studies that argue that 50-80% of students cheat at some point, and something like 10-20% cheat in every course they take. Personally, when I conserved time, I just accepted the bad grades - it was easier than trying to cheat and not get caught. The OP was talking about students cheating because they are not engaged with the material. I think that's only part of the problem with cheating: there are lots of other factors, like pressure to get a high GPA.
One of my psychology professors had an alternative: The course I took was required for psych majors, but was also interesting-sounding and worked as an elective for non-psych majors, so at the start of the semester she announced to every one of her students that psych majors should switch to the 3-days-a-week one to get the in-depth knowledge they'll need for the rest of their major, and non-psych majors should switch to the 2-days-a-week one to get the watered-down version. The two classes would explicitly be held to different standards, because this way the non-psych majors wouldn't have to keep up with the psych majors. And the psych majors were warned that if they tried to take the easy out, they'd fall behind later on.
There's more to this: it used to be that a college degree carried with it something of a soft guarantee that the student could achieve a baseline level of work on their own. This acted as a useful filter for employers. If cheating runs rampant that filter becomes meaningless, and once the student has successfully cheated their way into the work force, their incompetence can (depending on field) do very real damage to the lives of others.
Because of this, there will always be considerable pressure to detect cheating and remove offending parties from the program, be that the students copying answers from the internet, or the institutions failing to detect the problem before handing out a degree.
A college degree used to signal competence. Now it signals you have the ability to show up and avoid doing something to get kicked out for 4 years.
I feel we'd probably get the same value to society if we just made grades 9-12 optional. It would remove the people who don't want to be there, improve the quality of instruction, and make college a meaningful achievement again, so that entry-level jobs would no longer demand a Master's degree.
If someone is just looking to fill a course requirement and has no interest in being a programmer, my response is "meh, whatever".
If someone is cheating and actually expects to land a decent CS job, my response is "good luck on the leetcode questions and the 5-7 hours of questions you'll have to answer between the intro call, panels, and an offer".
And if someone magically manages to get all that way without actually learning the material, they will get absolutely crushed at work. Meetings where you have to give your professional opinion will induce fits of anxiety. You'll be asked to change your code based on feedback without any idea what the feedback means. Even the sheer amount of work that needs to be done at most tech places crushes great engineers; without the knowledge, it will just stack up on you even further.
You underestimate the human ability to get away with incompetence. Unfortunately, many of those cheaters manage to get into roles where they can hide their inadequacies, either by leeching off the colleagues who do the actual work or by sucking up to the right managers.
Oh it definitely happens, I'm not saying it doesn't. But in those examples you mention I would say there are more people who are bad at their job than just the IC who "faked it".
True story. I got my CS degree pre-internet. I did well with programming. Some of my friends/peers were less talented. In one of my 300-level courses the assignment was more difficult than usual. I naturally figured it out. They struggled.
Eventually I gave a copy of my code/solution to Friends Group A and also to another Group B. They didn't know each other. I said, "Be careful! If you copy, disguise it."* Deadline comes. Everyone hands in their work.
The following week, we go to lecture. Prof walks in and writes a list of names on the board. I knew each name. I knew what was happening. I waited for the shoe to drop (i.e., my name on the list). The shoe never dropped.**
I believe they were all given Ds. Not sure why they didn't fail (F).
* In retrospect, this was stupid on my part. If they knew how to alter it enough, it's likely they'd have been able to write it themselves. That is, I had all but suggested they walk on water.
** Also in retrospect, the TAs + prof had to realize I was the source. The class was big, but not that big. And if only some completed the assignment correctly, the source couldn't have been difficult to identify. I'm not sure why I was never pulled aside and spoken to. Thank gawd.
> "...this is information that you're going to need if you actually want to go into the field..."
That has never, ever stopped anyone motivated to cheat. Even here on highly-educated HN, you get posters with open disdain for what they learned in college.
> "... at the end of the day is the purpose of the course to measure the student or to teach the student?"
Both. Otherwise, the college diploma really does become the meaningless scrap of sheepskin that its detractors claim it is.
> is the purpose of the course to measure the student or to teach the student?
The very basis of the scientific method is measurement. If you cannot measure your learning progress, how are you to know whether your training methods are adequate or failing?
I really don't understand this idea that testing and measuring students is a problem. Even the dreaded "teaching to the test" sounds like a good idea. If doing so would somehow exclude important learning, that just identifies an area where the testing needs to be improved.
> this is information that you're going to need if you actually want to go into the field, and if you're using Copilot here you are damaging yourself in the long run
When you say into the field, do you mean academia or industry?
Because the things copilot does for you are absolutely not the things you need to know yourself to be in industry. They're, for the most part, things we tried to put in libraries (or more ideally language standard libraries for a lot of things).
The fact that it solves a lot of interview questions just means our interview process was absolute garbage.
And I'm skeptical of the academic side of this as well. It sounds like professors from the early 60s being annoyed that students have compilers (another tool that saves a ton of work and repetition). Y'all are forgetting that this is the ultimate lazy man's field. This isn't the first time the basics have been swept away and replaced with something easier to work with (and hopefully won't be the last).
> They're, for the most part, things we tried to put in libraries.
We need some people who can do things like making those libraries. It also seems plausible that the people who have at least some of the knowledge and judgement to do that effectively will, on average, be more effective on more mundane tasks as well.
Larry Wall is one who wrote (somewhat drolly) about the virtues of laziness, but there was nothing lazy about what he did.
> We need some people who can do things like making those libraries.
Do we? For things as simple as copilot tends to put out? Why?
Do you also believe we need to keep people around who do other automated things? Plowing fields by hand? Hand-compiling higher-level languages (as the first Lisp compiler was bootstrapped)?
I mean, keep the information around. Don't go burning textbooks on subjects just because we automated something. But, what exactly is the value proposition of having students do these things?
> It also seems plausible that the people who have at least some of the knowledge and judgement to do that effectively will, on average, be more effective on more mundane tasks as well.
And your claim is also that this is the only way to get the requisite knowledge and judgement? Have students take in a string from stdin with a format that changes every semester, munge it around, and do things with it instead.
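Something like this, say (a made-up sketch of such an assignment; the "name,score" format is purely hypothetical, and the point is that the delimiter or field order changes every semester so copied solutions stop working):

    import sys

    # Read "name,score" lines from stdin and total the scores per name.
    totals = {}
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        name, score = line.split(",")
        totals[name] = totals.get(name, 0) + int(score)

    for name in sorted(totals):
        print(name, totals[name])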
> Larry Wall is one who wrote (somewhat drolly) about the virtues of laziness, but there was nothing lazy about what he did.
As someone who has done string munging in C, I'm not entirely convinced that creating perl isn't an effort saving defense mechanism (only half joking).
I find this a very puzzling reply, and it may be that I misunderstood to what you are referring with the "they're" in "They're, for the most part, things we tried to put in libraries (or more ideally language standard libraries for a lot of things.)" It might refer to "the things Copilot does for you" or alternatively "the things you need to know yourself to be in industry."
The thing is, regardless of which way you meant it, we need some people who can make the sort of libraries we need in part precisely because automation such as Copilot is no substitute (at least not yet.)
This observation does not (and is not intended to) endorse current methods of instruction or hiring; on the contrary, it supports spcebar's view that riding your way to a degree, certificate or entry-level position on the back of Copilot is not doing yourself any favors.
The point about Larry Wall is that we don't get labor-saving tools without someone making an effort.
> I find this a very puzzling reply, and it may be that I misunderstood to what you are referring with the "they're" in "They're, for the most part, things we tried to put in libraries (or more ideally language standard libraries for a lot of things.)" It might refer to "the things Copilot does for you" or alternatively "the things you need to know yourself to be in industry."
I do see how that could be ambiguous. That's on me. I was referring to "the things Copilot does for you". Generally speaking trivial (or near trivial) algorithms.
> The thing is, regardless of which way you meant it, we need some people who can make the sort of libraries we need in part precisely because automation such as Copilot is no substitute
Fine. Some people might need to be able to implement and maintain libraries filled with generic algorithms, especially language maintainers. That's still a very different claim from the original "this is information that you're going to need if you actually want to go into the field". That claim implies that it's a universal requirement, whereas the reality is the vast majority probably don't need that knowledge.
Implementing these things is tedious, and it results in a bunch of code that has to be maintained versus using something out of the standard library. Take sort or max functions, which sit at the intersection of what I see Copilot generate a lot and what intro CS classes assign. Even without Copilot, that's not really a skill the average practitioner needs to have ready at all times. In fact, I'd probably block a PR that implemented either of those things in my projects: it's extra code that needs to be maintained and can be broken by a typo or something silly.
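To make that review comment concrete, a small sketch (the names are mine) of the hand-rolled version versus the standard library:

    # Hand-rolled: extra code to maintain, one flipped comparison from a bug.
    def my_max(xs):
        best = xs[0]
        for x in xs[1:]:
            if x > best:
                best = x
        return best

    # Standard library: nothing to maintain, nothing to typo.
    numbers = [3, 1, 4, 1, 5]
    assert my_max(numbers) == max(numbers)
    assert sorted(numbers) == [1, 1, 3, 4, 5]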
> The point about Larry Wall is that we don't get labor-saving tools without someone making an effort.
My comment about perl was sarcastic, and probably not helpful. That, also, is my bad.
To be clear, I take it that you are saying that a) everything Copilot is currently capable of can be found in libraries, and consequently b) learning how to do those things oneself is a waste of time, so c) it does not matter if people entering industry as software developers cannot do that themselves.
My point is that even if this is the case for a majority of such people, we still need the people who make all the library contents that are beyond Copilot's capabilities, and we both seem to agree that its capabilities are limited.
The thing is, a world in which a lot of people can be productive software developers, without even being capable of writing the sort of algorithms Copilot is capable of, is highly dependent on the people who design and write the libraries that implement not only those algorithms, but also a great deal else that is beyond Copilot's capabilities. The industry may not need everyone to be able to do that, but then it is entirely dependent on those who can.
One can certainly argue that there is no point in an education that stops at the ability to reproduce what Copilot does, but that would be something of a straw man, as education does not typically stop there. I agree that it makes a poor hiring benchmark, regardless of the role being filled.
> To be clear, I take it that you are saying that...
I'm going to answer these individually because the answers are all different.
> a) everything Copilot is currently capable of can be found in libraries,
The vast majority, perhaps not 100%.
> b) learning how to do those things oneself is a waste of time
It's a waste of time if the pupil doesn't actually want to do it. If they do want to do it, there's probably quite a bit of value to be had, but it's personal value for the student, not value for the field.
Sort of like how I got quite a bit of value out of reading the old ITS documentation I found on github once, but I don't think I'd recommend it as part of the standard CS curriculum.
> c) it does not matter if people entering industry as software developers cannot do that themselves.
Correct. Assuming cannot means "without googling".
> My point is that even if this is the case for a majority of such people, we still need the people who make all the library contents that are beyond Coplilot's capabilities, and we both seem to agree that its capabilities are limited.
Sure, but that's a relatively small number of people. My point is that if using these tasks to measure aptitude is causing you problems because copilot exists, it's perfectly fine to just use something else.
> The thing is, a world in which a lot of people can be productive software developers, without even being capable of writing the sort of algorithms Copilot is capable of, is highly dependent on the people who design and write the libraries that implement not only those algorithms, but also a great deal else that is beyond Copilot's capabilities. The industry may not need everyone to be able to do that, but then it is entirely dependent on those who can.
Fair, but that's already the situation we're in with programming languages. We're entirely dependent on those as a field, and the vast majority of practitioners wouldn't be able to create a compiler or be anywhere near competent in language design. I would put the large frameworks (e.g. spring) in the same category.
These trivial algorithms become like the opcodes of a particular processor. Someone has to know them, but basically everybody not working on a compiler can ignore them (unless they particularly tickle your fancy).
I do have one small issue with your wording, however. Specifically the use of the word "capable" in this bit:
> without even being capable of writing the sort of algorithms Copilot is capable of
The people going into the field definitely need to be capable of implementing these sorts of algorithms. You're going to fail at so much of software if you aren't capable of something that trivial. It just doesn't need to be taught. These algorithms can be looked up if they're ever required. The ones that come up frequently will be naturally memorized and the others won't. This is a field where you have to learn new things, often without any sort of available expert in the subject, likely this will happen with an entire language or how to use a particular library.
I'm comfortable with copilot the same way I am comfortable with a calculator. Sure, if you have to, you should be able to do a logarithm with a slide rule, but I'm ok with not teaching university students how to use a slide rule. And if a professor assigned a bunch of homework under the assumption that students were going to use a slide rule, and the students all used calculators, I'd tell them to just drop that particular lesson.
We have reached a point of mutual understanding and considerable agreement, but I feel we have circled around to the original problem.
In your last paragraph, you consider the case of calculator vs. slide rule, but one can, I think, make a more apposite case with calculator vs. learning arithmetic (and even if you don't agree that it is more apposite, the point can still be made!) By your logic, we should not be assigning any arithmetic problems that can be solved with a calculator.
But why stop there? Even current calculators can do much more than basic arithmetic, and their capabilities pale in comparison to those of general-purpose computers. I do not believe you can teach mathematics in a way that begins at a point beyond what has been automated, and before long that may be true for computing as well. This has not led to the demise of mathematics education, and neither will it for computing, for the same reason in both cases: anyone who needs to use these tools to get through the introductory classes will wash out later - which brings us right back to spcebar's original comment about only hurting themselves in the long run.
> I think, make a more apposite case with calculator vs. learning arithmetic (and even if you don't agree that it is more apposite, the point can still be made!) By your logic, we should not be assigning any arithmetic problems that can be solved with a calculator.
I think that's an excellent point, and one whose nuances shouldn't go unexamined.
I guess the difference to me is that arithmetic feels more fundamental. I may be using "fundamental" in a weird way here; I don't mean it in the sense of something that you build on. I mean it in the sense that arithmetic itself is the idea being taught. A sorting algorithm, by contrast, has both an explanation as a semi-platonic ideal and an implementation, and they're very different things. You can fairly effectively wield the implementation using only the idea in your head, mostly regardless of your understanding of the implementation. In fact, it's generally considered good practice to hide an implementation from your eventual users.
I'm not so convinced that you could do the same with arithmetic, although I'm open to arguments to the contrary.
Now that I'm thinking about it, I think this is the dividing line between math and computing in my head. The ability to separate out the real world part from the ideal and operate the former using only the latter.
Perhaps I'm mistaken, but I would be very surprised if a mathematician could operate without understanding arithmetic, but I've seen quite competent programmers construct APIs just fine without having been taught sorting algorithms.
Yeah, Copilot really changes nothing here: all these basic questions can be found with a quick Google search. The reason Copilot is so good at these types of questions is lazy professors assigning the same basic stuff, so there is a ton of training data.
Instead of writing an article crying about it, the professor could try making some unique questions to test knowledge of the underlying concepts.
This seems like a problem for educational institutions, not for industry. We're already inundated with mediocre programmers, many of whom have CS degrees, which is precisely why the industry looks at job experience more than credentials [0]. I don't see how this is going to further reduce the supply of competent candidates.
What it will do, however, is make it difficult for CS departments to evaluate students based on homework, and it's not clear to me that this is such a bad thing. My experience with university CS courses has been that assignments are largely auto-graded, with systems akin to unit tests. In the common case, an overworked TA quickly spot-checks the source code to look for obvious signs of academic dishonesty, but that's it. So, universities are left with the following choice: give everyone an A, or grade differently. In the optimistic case, this might even produce a grading strategy that improves the value of a CS degree as a predictor of programming competence. It likely won't, in which case we keep the status quo.
[0] This is a problem in its own right. I recently referred a grad student as a job candidate at my current company. I'm the most senior backend engineer, and this grad student was a highly competent contributor to my open-source project, which currently plays a strategic role in our backend. Said grad student is unusually bright and productive in a complex specialty (distributed systems), and I had to push our VP Eng very hard to hire him. His initial reaction was along the lines of "he doesn't have any industry experience, so we don't know if he's good". It gives me nausea to think about how many outstanding, 2-sigma engineers are rejected because companies don't know how to evaluate their talents. In the end we hired him, but only because I stated that I would take full responsibility for his productivity. I regret nothing. </rant>
Northeastern University is a college that is extremely popular for its co-op program. What is co-op? It's literally just working at a company for six months. Yet students proclaim how it teaches them so much better than classes and helps them so much to get a great job, and I mean, it does.
College is the new high school: everyone goes, only to get a job, because you have to go to college and get a CS degree to get a job. Why? A lot of "computer science" isn't used in software development at all.
I think we should replace colleges with "boot camps" which are unpaid internships which teach high schoolers how to work in actual software development, in various fields (let's call them "entry level jobs which don't require 5 years of experience"). Why unpaid? Because companies will love it, and it will still be better than colleges where you pay a full year's salary in tuition.
What about people who actually want to pursue higher-ed computer science? Just don't replace all colleges. It seems like most people getting into CS are not doing it for academia, those that are will have a better time taking courses with others who are passionate about the subject instead of simply using it to get a high-paying job.
> We're already inundated with mediocre programmers, many of whom have CS degrees, which is precisely why the industry looks at job experience more than credentials
How does looking at job experience help to distinguish good from bad developers?
You make a good point - I’m not sure it (always) does. However, it’s been my experience that hiring managers believe it to be a good indicator.
My sense is that hiring managers think academic experience is so far removed from the needs of industry that they won't be able to understand or evaluate the merit of academic experience. I'm of the opinion that they're mistaken, but ah well…
I get that Copilot is AI and pretty cool, but students could look up a Fibonacci program on Google before it existed. What's more, these algorithms are written in books. If students wanted to cheat on your "write depth-first search" assignment, they already were.
I understand what you're getting at, but feel like you're splitting hairs here since somebody who goes to look it up on Stack overflow is just going to copy paste... not type it out line by line like it's a page of BASIC from BYTE magazine.
I think the big change here is how much easier it is. Students, like everyone else, are lazy. Previously, just doing the assignment was typically less work than cheating; Copilot flips that balance.
Type the assignment into google and press enter vs type it as a comment and press tab?
Seems roughly the same amount of effort, and since Copilot is not free anymore most students won't bother paying for access on any kind of scale that matters. Whoever was going to cheat will cheat regardless.
Besides it's one of those "you won't have a calculator with you everyday" fallacies. If you can solve problems with Copilot in class you can also do it at your job later on.
Not really, for small snippets, sure. But jobs require getting something done. Sure you might be able to tab complete 10 pieces, but at some point you have to tie them together to get a working program and those 10 pieces will have different assumptions and require some real understanding to integrate.
Setting aside the academic implications for a moment, if you think that Copilot-like models won't be a significant part of programming in the future I'd like to point out this research from Google:
> We compare the hybrid semantic ML code completion of 10k+ Googlers (over three months across eight programming languages) to a control group and see a 6% reduction in coding iteration time (time between builds and tests) and a 7% reduction in context switches (i.e., leaving the IDE) when exposed to single-line ML completion. These results demonstrate that the combination of ML and SEs can improve developer productivity. Currently, 3% of new code (measured in characters) is now generated from accepting ML completion suggestions.
So 3% of all code at Google is now written by AI autocomplete, and Google developers who use this technology are 6% more efficient than those who don't[0].
Is it bad that students are cheating on homework with this? As bad as it was when students used Stackoverflow to cheat, I reckon. But will working with AI models be an essential skill to learn for new (and existing) programmers? About as much as getting comfortable with Stackoverflow was, I reckon.
I don't mean to be pedantic here, but a 6% improvement in productivity is a rounding error. You can achieve a higher level of productivity by just removing the ping-pong table from the break room. This indicates to me that there is no significant improvement in productivity. That seems consistent with the several senior+ engineers I've talked to who tried Copilot and found it simply wrong or unnecessary.
There might be AI written code in the future and surely the "no code" shills will latch onto it with religious fervor. But, much like "no code will eat the world", this too will fail. Until general intelligence is capable of producing large scale systems that need to be coddled and orchestrated, bespoke perhaps even brand new approaches to business problems, etc, there will always be many highly paid humans there.
The conspiracy theorist in me thinks that this constant push to shill the product is yet another way the FAANGs are attempting to dilute the pool enough to lower wages. Engineering is highly skilled, highly technical work. The actual work of anyone above junior level has typically evolved way beyond simple CRUD coding exercises. When you spend more time making sure Copilot meets standards, that the code is correct, etc., it's simply not that useful. It feels much more like the fancy test-case generators in some IDEs: for very trivial, borderline rudimentary cases, it works wonderfully and saves time; once things become complicated and messy, it just gets in the way.
Maybe I'm weird, but I do not get the hype. I was using Copilot for a few weeks as a Go backend dev with occasional work in the React/TypeScript frontend, and I thought it actually reduced my productivity. A lot of times, the suggestions were totally wrong, but looked correct-ish, and I would have to pause and basically debug that code, which generally took longer and snapped me out of my flow.
Some of the autocomplete was okay in that it saved me some keystrokes at times, but in general it felt like more of a distraction. I especially hated the suggested comments; these were almost never close to what I intended to write, and when they'd pop up they would always give me pause, and I'd often forget what I was about to write.
There were a few instances where it magically did autocomplete a surprising bit of code, and I enjoyed the novelty when it would happen, but that wore off quick.
You can keep it disabled and enable it with a hotkey when you think it can help. That's my strategy.
You need to recognize what it's good for, like generating code similar to what you've written but a little bit different.
For example: I had a project in Rust where I would print a sentence describing an error that occurred. I had a big match block over my possible error variants, with the error enum declared in the same file.
So when I added a new error variant, I just had to place my cursor at the end of the match and watch Copilot generate exactly (or almost exactly) what I needed: a block of code similar to the other cases, but with a different message, using the fields of the new variant intelligently.
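The original was Rust, but the shape of the pattern is roughly this Python 3.10+ analogue (the error variants here are made up); each new variant means one more near-identical arm, which is exactly the kind of repetition Copilot extends well:

    from dataclasses import dataclass

    @dataclass
    class NotFound:
        path: str

    @dataclass
    class PermissionDenied:
        user: str

    @dataclass
    class Timeout:
        seconds: int

    def describe(err):
        # One near-identical arm per variant, using that variant's fields.
        match err:
            case NotFound(path=p):
                return f"could not find {p}"
            case PermissionDenied(user=u):
                return f"user {u} is not allowed to do that"
            case Timeout(seconds=s):
                return f"operation timed out after {s}s"

    print(describe(Timeout(seconds=30)))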
I agree, I just disabled it today. I'd say 3 out of 10 times it produced a good result. Most of the time the structure looked ok but the variable names were wrong, or the types were wrong, or it had logic issues.
Surprisingly, I thought it did great with autocompleting the comments. In Visual Studio, it would also block function autocomplete, which was really annoying.
I eventually turned it off too. It was good at certain things like filling out repetitive test data but overall I found it more distracting than helpful.
Straight up assembly language, preferably Motorola MC680x0 or some other contemporary that was meant to be coded by hand; it's the only way, man! Plus students will gain some appreciation for how microprocessors actually work.
In the same vein, just have them write code out on paper during exams and coding competitions like the way we did it back in my day. It's not as if they're not going to need that skill anyway when they get whiteboarded during interview loops when even the most lenient employers get exasperated at "CoPilot-only" graduates.
“Coping with google… students can just search for the answers by typing in ‘depth first search code python’. Buckle up and hunker down for one slippery slope of an article!”
I've always thought that the ability to communicate to other humans about programs and to be able to synthesize a conceptual understanding from looking at code are highly valuable and sometimes-overlooked in the workplace. Assignments to write a function don't really demonstrate either skill.
Maybe more assignments like:
- "Here is a function that is supposed to generate a fibonacci sequence, but it is not correct, explain why"
- Here is a sort function. What is the name of the algorithm implemented by this sort function?
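For instance, the first assignment type might hand students something like this (a hypothetical sketch; the bug is planted deliberately):

    def fibonacci(n):
        """Supposed to return the first n Fibonacci numbers, but does not."""
        seq = [0, 2]  # planted bug: the seed values should be [0, 1]
        while len(seq) < n:
            seq.append(seq[-1] + seq[-2])
        return seq[:n]

    # Explaining *why* this is wrong requires tracing the recurrence,
    # which is exactly the understanding the assignment is meant to probe.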
The reality of any career in CS is going to involve a lot of maintenance and understanding of existing code and systems. Copilot's failure to replace all human programmers isn't because of its inability to cough up complex code; it's because it can't communicate with the product owner, debug a program amidst a time-sensitive incident, or explain its work.
Agreed. I just tried this by writing three sort functions named "some_sort", "another_sort", and "yet_another_sort". I started comments after each with "# The algorithm used in some_sort is known as: " and Copilot correctly completed them in all three cases (bubble, insertion, and merge).
While this will definitely be an issue in the short term, it's part of a long standing series of panics where a new technology develops, and everyone worries that we'll lose the skills it abstracts away.
And to some degree we will! I certainly can't do long division anymore, and I struggle to navigate my own neighborhood without GPS. This is a bit sad, but I think something of an inevitability.
If history is any guide, academics should embrace the new tech and try to teach students bigger things that copilot can't yet handle, with the assumption that they'll use copilot to fill in some details--the same way we now teach students to use graphing calculators to solve problems that would have been out of reach for their grandparents.
There are so many things I never learned as an undergrad because we were too focused on algorithms. Database design, testing strategies, different architectures, programming language design, etc. Maybe these things can come to the forefront of the curriculum if we don't need to drill basic algorithms.
> I certainly can't do long division anymore, and I struggle to navigate my own neighborhood without GPS
These are not the same thing at all. Yes, calculators and such have largely made long division pointless, but it's still quite useful to be able to get around without GPS. Especially around construction, GPS is awful.
You need to know the limits of the tools you're using. If you just say "I'm always going to use GPS, because it's always better" you are really doing yourself a disservice.
Guess how I learned query/replace in Emacs in 1985?
My friend who copied my Pascal programming assignment is now a director at a $1B hedge fund. I'm the founder of a series of scrappy, ramen noodle startups. I guess it's the "Senator Blutarsky" effect:
This reminds me of when my school teacher insisted that real engineers didn't use calculators. They'd look up their sines and cosines in a table.
3 years later, the education ministry decided that was BS and let everyone bring calculators to math exams. Surprisingly, math scores didn't go up much.
This reminds me of a story that an old professor of theoretical physics told me. In the early nineties, he left the former Soviet Union for the United States to teach physics at one of the top universities. There he encountered the fact that American students were fantastically good at solving all his standard problems for integrals. It quickly became clear that the students were using the then-new program for symbolic calculations, Mathematica. As a result, our professor also mastered Mathematica and spent half the night finding such integrals that it still could not calculate for assignments.
I use Copilot every day and I can assure you it makes a lot of mistakes. I think that, at least in the short term, CS teachers will still find assignments where it makes mistakes.
This is why I love the disable javascript plugin for Chrome. One click and terrible sites like these are tamed. uBlock origin is also useful to block sticky headers.
Ever heard of just-in-time learning? Copilot (if it gets good enough) is the damn teaching tool! All they need to do is come up with a programming goal/project that's extremely important/relevant/exciting to them, and break the problem down into sub-problems for Copilot. This time they'll actually give a shit about understanding why Copilot wrote a bug or why the code works. No more boring, pointless teacher assignments that hold no relevance whatsoever to the student's most immediate and important goals and concerns in life. Finally life is fucking good for students for once, and they don't need to google every answer for an hour.
I love their take at the end, which is to not expand the already crazy anti-cheating apparatus and just give students more interesting problems and let them use all the information at their disposal. I wish more professors took this attitude. Having every student implement the exact same famous algorithms is really silly when you take a step back. You don't gain any appreciation for them because they're unmotivated and you lack all the foundational knowledge. The proofs/explanations are all totally bespoke and don't generalize.
You really win as a professor when you design a project that naturally causes you to reach for the knowledge you're trying to teach. One of the best was my networking class, where the project had us secretly rediscover TCP by trying to invent reliable transmission over UDP; once we had sufficiently cut our teeth, the second half of the course was implementing a real working version on top of the professor-provided userspace IP stack.
Just add a requirement where the student has to explain how the code works.
To be honest, there should be a class called "language model querying", because these models do well on simple tasks, but eventually you will have to debug the output (not just Copilot, but GPT-3 and DALL-E too).
Much like reverse engineering, it's easier to come up with an explanation when you have someone else's working solution to examine than it is to come up with a working solution from scratch on your own. So that doesn't really help.
But maybe they'll end up learning anyway? One thing I realized when I went back to studying online is that I was no longer the student who needed to prove something to someone else, and the typical test annoyed me. I get that this article is about a teacher trying to find out how much their students have learned, but maybe that needs to change?
Article author here. Turns out language models already can explain how code works, pretty convincingly. I gave a prompt to OpenAI, got the solution, and then asked it to explain the solution (possibly easier to read in my tweet: everything in green below is from OpenAI - https://twitter.com/emeryberger/status/1560618825208320001). I wrote the comment asking a question ("# How did you solve this problem? Explain how you got each digit from each number.") and Tab completed the rest:
# I solved this problem by using the % operator to get each digit from the number. I declared and initialized a sum variable to keep track of the sum of the digits of the number. I set a temp variable equal to the input number. I then used a while loop to loop through each digit of the number. In the loop, I used the % operator to get the remainder of temp divided by 10. This gave me the last digit of the number. I added this digit to the sum variable. I then divided temp by 10 to remove the last digit of the number. I repeated these steps until all of the digits of the number were added to the sum variable. Finally, I printed the value of the sum variable.
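For reference, a minimal sketch of the kind of solution that explanation walks through (assuming the underlying assignment was summing the digits of a number):

    def sum_of_digits(number):
        total = 0                # the "sum variable" from the explanation
        temp = number            # work on a copy of the input
        while temp > 0:
            total += temp % 10   # % 10 peels off the last digit
            temp //= 10          # integer division drops that digit
        return total

    print(sum_of_digits(1234))   # 1 + 2 + 3 + 4 = 10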
But then you would have a foundation to figure out the new models. This is the essential problem when teaching any field because the field itself advances faster than one can effectively teach you.
That's why getting an Internship is always a good idea.
I wrote an essay “Introductory Programming Assessment Must Accommodate Copilot-like Assistants” about this same topic last month and came to the option 3 conclusion (use more exams or in-class live coding assignments):
http://www.observationalhazard.com/2022/07/introductory-prog...
An old colleague of mine used to teach CS back when students had to send off the code they'd written to Cambridge University for it to be compiled and run, with the results then returned, all by post.
Then you were able to compile and run code locally on your machine. Then the IDEs came along, with syntax highlighting and then linting.
Copilot is just the next evolution in writing code, saving you the effort. I bet they railed against Timsort and its inclusion in the Python standard library, but now who honestly writes their own sorting algorithms?
This kind of generational gap reminds me so much of the "Kids are spending too much time reading" > "Kids are spending too much time watching TV" > "Kids are spending too much time on their phones" rants.
Copilot is limited to short snippets. You can't just ask it to write an HTTP server. Also, I don't see how having Copilot generate the code for listening on a server socket is any better than when I copied how to do it out of a man page. Copilot doesn't add anything new; you've always been able to look up the documentation or look at how other people approach the problem.
The fibonacci example doesn't make much sense either due to how trivial it is. Without copilot you are just copying from the definition of the function and with it copilot copies it for you. The other algorithms that follow are both online and probably in their textbook.
Copilot can almost write an HTTP server. Here is how I did it:
1. Start a new python file with the line "# This is an implementation of an HTTP server"
2. Press autocomplete a bunch of times and stop somewhere when "enough" libraries are imported
3. Press "#" and let it autocomplete the comment about what comes next (global variables for PORT and BUFFER_SIZE in my case)
4. Again, Press "#" and let it autocomplete a bunch of comments and functions ("get_file_extension", "get_file_content_type", "get_file_size" in my case)
5. At this step, I had to cheat a bit since it was generating too many "get_*" functions, so I started a comment with "# Initialize" and let it autocomplete again to get "init_server_socket"
6. From here on, it generated handle_request, parse_request, get_file_path and send_response automatically.
7. Lastly, I wrote "def main" and let it autocomplete again.
This produced an HTTP server which almost worked. I just had to fix a small issue with "file_path", since it expected that files were stored in the root directory, but I wanted it to load files from the local directory.
The code is not great since it does not handle most errors gracefully and is vulnerable to directory traversal, but I didn't even have to think about the HTTP protocol while writing it, so it is still quite impressive.
A few good points:
It generated a huge selection of MIME types which I probably would have had to look up by hand.
The generated server is multi-threaded!
It automatically reuses the socket, so I do not have to wait a minute every time I restart the server.
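For the curious, the result had roughly this shape. This is a reconstruction of the pattern from memory, not Copilot's verbatim output, and like the original it skips most error handling and path sanitization (pivoting on the function names listed above):

    import socket
    import threading

    PORT = 8080
    BUFFER_SIZE = 4096

    def get_file_content_type(path):
        # Copilot generated a far larger MIME table; trimmed here.
        if path.endswith(".html"):
            return "text/html"
        if path.endswith(".css"):
            return "text/css"
        return "application/octet-stream"

    def handle_request(conn):
        request = conn.recv(BUFFER_SIZE).decode()
        if not request:
            conn.close()
            return
        # First request line looks like "GET /index.html HTTP/1.1".
        path = request.split(" ")[1].lstrip("/") or "index.html"
        try:
            with open(path, "rb") as f:  # serves from the local directory
                body = f.read()
            header = ("HTTP/1.1 200 OK\r\n"
                      f"Content-Type: {get_file_content_type(path)}\r\n"
                      f"Content-Length: {len(body)}\r\n\r\n")
            conn.sendall(header.encode() + body)
        except FileNotFoundError:
            conn.sendall(b"HTTP/1.1 404 Not Found\r\n\r\n")
        finally:
            conn.close()

    def main():
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # SO_REUSEADDR: no waiting a minute between server restarts.
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("", PORT))
        server.listen(5)
        while True:
            conn, _ = server.accept()
            # Multi-threaded, as the generated version was.
            threading.Thread(target=handle_request, args=(conn,)).start()

    if __name__ == "__main__":
        main()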
The solution doesn't seem that hard to me. Put less weight on homework grades and more weight on projects and tests. If they cheat the homework they'll bomb on tasks that can't be solved with Copilot.
Yes, I mention this in the original article ("Well, how about we just weigh grades on exams more, and have students take their tests either using pen and paper or locked-down computers?").
I have been using Copilot for two months and I find it more annoying than useful. Am I the only one who feels like this? I also used TabNine, which gave me a lot of syntax errors in Rust, so I could barely use it for a week. I can agree that Copilot is better than TabNine, but I feel like writing code is the easy part of programming, and Copilot doesn't offer enough help to justify the monthly payment and my private code being uploaded to Microsoft servers. On top of that, I find my IDE's suggestions are good enough most of the time.
Yeah I was going to post this. I did some coding the other day and it kept spitting out code that looked right but was calling functions that didn't exist. It also competes with IntelliJ's auto-complete making things doubly irritating.
I leave it disabled more often than it's enabled, unless I'm doing something simple like wanting to copy a file, which in Go is stupidly obtuse.
I use it with python and vscode, it's great there. Sure, half the stuff it makes isn't correct, or nearly correct, but it's very close, and I just need to change a little here or there.
It also does a fantastic job of filling in method parameters from my local variables, again not 100% of the time, but 90% of the time it nails it.
I'm not sure how it would be done, there might not even be enough existing code to train it on, but I would like to try out Copilot for code-based CAD models in something like OpenSCAD. I've never tried it at all, but thanks for sharing your experience, all.
If what you’re asking of these students can easily be automated by a tool then the problem is in the assignment. Either don’t request it, knowing it can be automated with ease, or accept that bubble sorts are a long solved problem and move on to teaching other things.
Lest anyone think I’m saying don’t teach important concepts or that such things no longer matter: there’s a whole world of principles and fundamentals that can be taught and assessed without fizzbuzzing 100 students who already had access to Google and StackOverflow before Copilot came along.
Yes, directly assessing hands-on coding would be much better. But these tiny snippets aren't really indicative of the real-world tasks that will be needed anyway (given that almost anything so foundational will almost certainly be in a standard library), they don't really prepare students for real-world development work, and there's a general expectation that new graduates are going to be very, very green and need a lot of handholding regardless. Even more than that, you have no way of detecting plagiarism in these assignments as it is, given that a couple of basic rename refactors will stop you from proving the code was lifted from elsewhere.
> If what you’re asking of these students can easily be automated by a tool then the problem is in the assignment. Either don’t request it, knowing it can be automated with ease, or accept that bubble sorts are a long solved problem and move on to teaching other things.
If assignments that are easily automated are a waste of time, do you have any suggestions on how instructors should help students move from "I know literally nothing about writing code" to "I know the basics well enough to solve problems that aren't trivially generated by Copilot"? In my anecdotal experience (albeit years ago) as a graduate assistant, for students who are totally new to programming that phase often lasts throughout the typical intro to CS course and it's not unusual for it to stretch into the second level course.
I don't think using copilot is much worse than autocomplete.
It's an improvement of our toolbox, students SHOULD learn to use it.
It's not much worse than using C instead of assembler, or Java instead of C, or ...
Throughout time, people have invented new tools that make them more efficient in their profession than their predecessors were.
Too many programmers these days behave like the workers did when automatic looms were invented. Or when the chainsaw was invented... or the computers... or the...
The unavoidable question is, will our and their kids be able to function without their machine overlords?
In the 80s I had to come up with my own solutions, because there was no internet and no books, and I was 11.
With the internet I searched using the error messages. Pre-2000 I got useful and accurate results; post-2000 I got more opinions than results, which had to do with the amount of mediocre intellect on the net and Google trying to provide a longer list of results, but mostly with the stupid making it onto the net.
On GitHub you already have the stupid, I'd imagine, since it's such a popular platform, although many left it when Microsoft took over. But many also let their work be abused by Microsoft to sell the Copilot service, so I'm guessing the stupid is very present on GitHub.
The evil of taking someone's work, shared with good intent to help other people, running it through a machine network, and selling it back to the "other man".
And now it's even corrupting our children and our children's children.
As someone who's used machine translation as a tool to learn 2 human languages, which I eventually gained fluency in, I'm really tired of AI being considered "cheating". Yes, you can use it to approximate answers to questions and be fairly certain those answers are correct. But you can also use it as a tool in your learning process to help you fill in the gaps until you're all the way there. In language, it's quite obvious that using the language will increase your abilities over time. In programming, that may not be so intuitive, but it's also true. The more software you build, the more you will understand about building software well, even if you use Copilot to help write your algorithms. I think the "problem" here is that tools like this make it difficult to enforce that others learn something. But for self-directed people, who are committed to learning and understanding what they are doing, AI is more of a useful tool than a cheat.
I’m not sure I have much sympathy for the problem. Copilot is going to force everyone to move up the abstraction stack.
I absolutely disagree with how Copilot has been trained on the intellectual property of others, and I think Microsoft/GitHub should be taken to court over it (and I say that as an IP cynic, but if you play that game, you should stick by the rules), but this technological cat is out of the bag.
Students (and everyone else) should be thinking much harder about how they test and verify the behaviour of any given piece of code and how they design systems to stick together. This class of technology is going to automate away many of the jobs where you just pump out code. This is fine. It’s no different to someone inventing a burger-flipping machine, and I’m sure many here would agree it’s better if humans didn’t have to flip burgers for a living (modulo solving employment being necessary for a decent/tolerable existence). It may turn out that the next step in software productivity is not a new generation of cool and highly-abstract declarative programming languages (4GL) but simply automating away the drudgery of writing code in the extant kinda advanced languages (3GL) which are ‘good enough’.
Arguably Copilot doesn’t change much about the software development process. You can already get software developed pretty cheap if you farm it out to low-quality contractors or a lot of juniors straight out of school. There’s a whole branch of the industry that will hire anyone straight out of university to manage and front development teams in South Asia. But as we say, quantity has a quality all of its own. And the only way to manage software (not code) quality with this method is to verify and test that the deliverables actually meet your criteria and expectations.
It’s currently a great time to focus on TDD/BDD and code specification because our time spent on the D is about to get a lot shorter. There is still a place for artisanal hand-crafted code. But the mechanical loom has arrived.
Extra nitpick regarding generated code: the `quicksort_random_pivot` implementation isn't what people commonly mean by "quicksort", as in: it's not in-place. This means that some properties like space complexity are also going to be different than what's usually expected of quicksort.
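For readers who haven't seen the distinction, it is roughly this (my own sketch, not the article's code, and with pivot selection simplified to a fixed index rather than random):

    # The "textbook one-liner" style often generated: clean, but it allocates
    # new lists at every level of recursion, so it uses O(n) extra space
    # rather than the O(log n) usually expected of quicksort.
    def quicksort_copying(xs):
        if len(xs) <= 1:
            return xs
        pivot = xs[0]
        less = [x for x in xs[1:] if x < pivot]
        more = [x for x in xs[1:] if x >= pivot]
        return quicksort_copying(less) + [pivot] + quicksort_copying(more)

    # In-place quicksort: partitions within the original list (Lomuto scheme).
    def quicksort_in_place(xs, lo=0, hi=None):
        if hi is None:
            hi = len(xs) - 1
        if lo >= hi:
            return
        pivot = xs[hi]
        i = lo
        for j in range(lo, hi):
            if xs[j] < pivot:
                xs[i], xs[j] = xs[j], xs[i]
                i += 1
        xs[i], xs[hi] = xs[hi], xs[i]
        quicksort_in_place(xs, lo, i - 1)
        quicksort_in_place(xs, i + 1, hi)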
Programming and, even more so, software engineering are _trade skills_. There will ALWAYS be “cheaters” but you will know them by the end product of their work. Yes, it’s important to understand the fundamentals. An unwillingness to learn these things is a great indicator that you will be a failure as a professional. As another commenter mentioned, the focus on grades at all makes the problem worse.
More important than even fundamentals, though, is modeling a successful software development mindset and problem solving technique. Too many professors have not _had_ any success in software development, which is sad. I can count dozens of examples of my own personal growth that could have been handled in college, that I instead ended up inflicting on my first employer.
I don't know if the idea of future generations of programmers squeezing by on having a robot make best guesses to get their work done, without understanding it, makes me feel confident in terms of job security, or fills me with dread knowing that I'll surely be relying on the software they produce to access my bank, have my car's controls hijacked because it thinks it knows better than me, keep my life support running when I'm old, etc.
It sounds like copilot could ultimately be just another progression in the definition of normal or baseline, like using calculators in math classes.
Whatever the current state of the art and tools of the trade are, that's what they are, and probably the bulk of courses will just adapt to reflect that state of reality and incorporate that into the courses.
Outside of maybe some specialist courses that may still exist but most students wouldn't need to take, it would just be expected that most students use the current tools to accomplish the tasks. Even going so far as to provide free student access to any paid tools to avoid giving the rich kids an unfair advantage.
The nature of posed problems, and their teaching purpose (why they are posed and what you get out of them) would just change from what they are now.
Whatever the "calculator" can do, simply becomes uninteresting and not required for most people to worry about.
My first couple of years of school I was taking mechanical engineering, and it was right during a transition where we were doing both manual and CAD drafting. There are a lot of geometry tricks, both in physical drawing and in math, required to generate accurate views of any shaped object intersecting with any other shaped object, viewed from any angle, sliced at any depth.
We spent a lot of time and effort on that. That is all done by magic now, inside the CAD engine.
Someone still needs to know it, but 99% of people whose job is to design or otherwise manipulate models of objects do not need to know it.
I, who did learn that stuff, don't think an engineer who didn't have to learn how to generate an accurate view with nothing but a pencil and a calculator is a less capable, less insightful, lower-quality engineer.
If anything they may be better for not having to spend as much of their intellect on mechanics & implementation vs. ideas. The knowledge & skills I'm talking about don't contribute to their understanding of the real job; they aren't foundations that the later stages build on. It's just labor, of no value at all except to the CAD engine designer. It had value in the past because it was a form of literacy: it was simply the only way to document the ideas you were fabricating.
Perhaps copilot is like that.
I don't know and I'm not saying it is, but it could be.
Ask canned questions, expect canned answers. Dumb or lazy or incompetent students will look up solutions to problems like the Fibonacci sequence or the sum of numbers without understanding them or solving them on their own first. This has always been the case, even before Copilot came along. Copilot only made it easier and more accessible to even dumber students, and made it harder for dumb teachers to lazily assign canned programming questions.
I haven't used Copilot yet, so I don't know how intelligent it is at producing solutions it hasn't seen before. Assuming it's not that intelligent, one solution would be to just add small, tricky variations or constraints to well-known programming problems. So instead of just asking "find the sum of numbers", ask "find the sum of foo numbers", where foo is some made-up property, like a number containing an even number of odd digits. Then require the implementation to count down instead of counting up.
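A sketch of what that could look like (the property and the names are made up, per the above):

    def has_even_count_of_odd_digits(n):
        """The made-up 'foo' property: an even number of odd digits."""
        odd_digits = sum(1 for d in str(abs(n)) if int(d) % 2 == 1)
        return odd_digits % 2 == 0

    def sum_of_foo_numbers(limit):
        total = 0
        for n in range(limit, 0, -1):  # counting down, per the constraint
            if has_even_count_of_odd_digits(n):
                total += n
        return total

    print(sum_of_foo_numbers(100))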
I don't know; I tutored an introductory course on data science in Python last semester, and yeah, Copilot would have been able to answer most of the problems. But students would have been able to copy-paste them from somewhere else anyway. If you want to test them, have them explain what they did, and why they wrote it this way and not another way. But personally, I believe that people are at the university to learn how to learn, and to learn independence, and if they really want to cheat then it is their fault. You spend a lot of time (and in some places money) to be there, so you might as well be honest and learn. The degree itself is not as helpful as one might think; it is the stuff you learn for the degree that helps you later on.
But on the other hand, I found Copilot useful as a tutor. When somebody asked about an API, I didn't have to awkwardly search the documentation; I fired up VS Code and had Copilot suggest something (without the students seeing it, of course ;-)).
>"Copilot is different, he said, "It actually generates novel solutions. Not like they're super-crazy, sophisticated, genius solutions. But it makes new solutions that are superficially different enough that they plausibly could have come from a student.""
Virtually every cheating student has always slightly altered their solutions. At best Copilot saves you the five minutes of googling and refactoring. For the committed cheater who wants to avoid a week's worth of homework and learning, that's hardly meaningful. And in fact probably not even that, because you still need to actually read what Copilot outputs, since it makes obvious bot-like mistakes quite often.
I might have missed it today in your articles or comments here--it's been a hectic day--but has there been some study of just how different the code would be, given that the students are using the same text from the questions? Is there randomization intrinsic to Copilot, or is it just that minor variations in textual input cause the code to be so different?
My wife taught CS, and she did catch cheaters pre-Copilot. My first thought is that she probably would have entered the test questions and printed out a reference sheet of Copilot-generated results.
I haven't seen a study yet, but yes, Copilot deliberately (AFAICT) incorporates randomness. For longer code fragments, I believe this would thwart most plagiarism detectors.
I'll note that defeating plagiarism detectors is easier than many people think. I and one of my students wrote a paper on an automatic technique to defeat plagiarism detectors, and it was highly effective: "Mossad: Defeating Software Plagiarism Detection", Devore-McDonald and Berger, OOPSLA 2020 (talk and paper link here: https://2020.splashcon.org/details/splash-2020-oopsla/14/Mos...).
> techniques inspired by genetic programming with domain-specific knowledge to effectively undermine plagiarism detectors.
My creaky memory of CS theory makes me suspect that, ultimately, Mr. Turing's halting problem may make resistance to cheating futile. [1] It's interesting that the paper is in OOPSLA; I used to follow the work from it more, especially when I listened to IEEE's Software Engineering Radio.
Thanks for your work in this area and your reply. DieHard and DieHarder are funny, creative names for fault tolerant memory managers. It's also nice to see a fellow humorist. At least we can rest assured that cheaters get what they deserve, like my classmate at Columbia. [2]
Regards.
[1] > He, for one, welcomes our new AI overlords.
Yes, let's suggest a Borg-like partnership with them. I'd be willing to drive "self-driving-car" tow truck--if they haven't invented a self-driving one yet.
I love the choice of an AI-generated photo for the header, and the hidden message of its poor quality if you look at it longer than a glance.
There's an emerging divide between those with real experience with AI tools and people who tried them a few times, got hyped, and blogged a maximalist view of the consequences.
Do people no longer generally have exams on paper?
For lower level courses we had online assignments, and pen and paper exams - which is where the majority of the course grade came from. If you didn't [intimately] know what you were doing on the assignments, you'd fail the classes due to the tests.
Yeah, I totally get it. The purpose of education is credentialing and so cheating undermines that. If the purpose of education were solely instruction you would advise students not to use Copilot to finish assignments but those who do would only harm themselves.
The good students will take the time to learn and understand (even if they didn't write the code). Other students, as they always have, will look for short cuts.
Not everyone who takes CS1* courses is a CS (or similar engineering) major. We keep crying "everyone should learn to code", and now that it's easier than ever, some of us continue to complain. Huh?
Full disclosure: I started to use CoPilot a couple of months ago. It does not remove me from the process. It does allow me to shift more brain cycles from the mundane to the more difficult. My general feeling is, I have yet to get close to taking full advantage of what it can offer me.
While copilot does great on short simple fundamental problems, it sucks (and requires good attention and experience to find all the bugs in generated solutions) when solving anything more difficult. Although I can totally see how this would make teaching the basics difficult.
That being said, I think Microsoft should be fined $100B every time they give one of their products away for free to students. Hopefully that will be sufficient incentive for them to stop so being so overtly anti-competitive.
> we teach in programming languages that don’t even exist
I remember reading about a framework that actually sort of does that - create randomized programming languages. It was meant as a security feature against code injections iirc, but technically it should work as a solution to OP's problem.
Although I guess some of the other suggestions are better and easier to implement - ask students to explain their code (select one person randomly per problem, it's quite common anyway) and give more weight to tests.
It's only a matter of time before CoPilot-like systems (if they do not already) just reverse engineer the compilers, dynamically mapping the syntax of "foreign" programming languages onto known instances in plain-text source code, and render them in a language the user already understands. Yes, there might be gaps in idioms, assuming a natively written language, but using the example you provided there in theory would not be; and if needed, they could return a final version in the target "foreign" programming language.
Such systems might spring up, but the market is far more limited than for generalist CoPilot-like systems, so they will likely be less sophisticated and thus easier to spot (by other automated solutions).
Might? This is the future of systems like CoPilot. Reading existing source code only goes so far, and it is only a matter of time before state-of-the-art systems dynamically map compilers, even foreign ones, to source code. Once mapped, the system will use its universal transcoder to remap the code to a known target language, and even dynamically rewrite the compiler of the foreign language, if appropriate, to fill idiomatic gaps.
The ultimate end users of systems like these are not humans.
When I did my BSc in computer science we had to do exams which involved writing code using pen and paper without a computer in sight. Bringing that back would solve this problem ;)
Copilot still works quite poorly with problems that are not “Stackoverflowable”. Perhaps the best way to fight this is to create assignments that cannot be googled in 5 seconds.
These examples of producing known algorithms in response to function signatures make it seem to me like Copilot is an intelligent, vendoring "package manager", not something that solves problems by itself.
The perceived productivity boost that people get from it could be interpreted as a condemnation of how verbose a lot of coding still is even in known areas. Lots of brain cycles wasted on doing things suboptimally that we can already do optimally.
The difference is that package management hides packages somewhere deep, and you're not supposed to care about their code quality; if a package is popular and works, good enough. Copilot puts terrible code snippets right before your eyes, and you'll spend more time making them better and fixing bugs than writing that code from scratch. I tried Copilot, and for me it's not ready yet. I guess my issue is that I don't like average code; Copilot was trained on average code, so it's only as good as an average coder, and that bar is not high. Or I'm too picky and high-minded; doesn't matter, the result is the same.
Writing algorithms and functions is the least of my worries as a web developer. I need a copilot that helps me determine which kinds of UI elements I need for a task. I need it to automate standard chores like building login functionality, including two-factor, or to determine data structures for a novel problem. That'd be great.
Eh, it's always been super easy to cheat on programming assignments. Often they're variants on toy problems that have been used in the past, or worst-case, you could crowdsource an answer on stackoverflow or Reddit
Just make sure Copilot is explicitly against the rules, there's not much more you can do than that
Here's a thought: Replace mandatory homework with proctored, time-limited exams on systems that don't have copilot available (or dramatically reduce homework's percentage of one's grade).
If one cares about cheaters (and I'm not convinced we should), doesn't that solve the problem?
Yes, I mention this in the article as one of the solutions: "Well, how about we just weigh grades on exams more, and have students take their tests either using pen and paper or locked-down computers?"
Curious: has anyone seen a "cheater's" guide to programming, one that teaches a predictable, barebones process assuming unmoderated access to the internet, a computer, etc., and that uses Google, StackOverflow, CoPilot, and the like to solve 80% of the issues the average programmer faces?
My proudest moment was in an embedded systems course, the teacher asked for a function to generate fibonacci linearly in assembly, there was a reaalllllly long pause, and I barked the answer across the classroom. I vividly remember some stunned, disgusted glances.
Here’s the thing. When you learn to be a carpenter, they teach you how and when to use certain tools. The same can be for any profession. This is a new tool. They shouldn’t be telling students not to use it, but when to use it and what it is suited for.
No big deal. If students have access to this, then instead of teaching and grading them on Fibonacci, let the next generation have larger assignments, like building a 3D-rendered first-person shooter (a basic one, with no AI).
Euclid said that there is no royal road to geometry. People who pick the royal road end up elsewhere. In a society that grades them and acts on those grades, they may very well end up on a king's throne. But not at geometry.
Just wait until students start using GPT-3 for written essays.
Not giving anyone ideas or anything, but if you feed it your previous assignments and it learns your writing style it should be hard to detect with minor editing.
I agree that there's no point in fighting the advent of AI-aided development, but it's also true that the invention of calculators didn't mean we stopped learning how to add.
This is no different from finding the solutions on wikipedia, or any of a thousand other websites. People that want to cheat will do so, and they might even be correct in a completely selfish sense.
GitHub/Microsoft know what's up, because they make Copilot free for students, but teachers have to pay for it. The best thing for Copilot is to keep teachers as oblivious as possible.
One of the best assignments I had in CS was "here is a compiler, there are nine known bugs in it, find and fix six of them." I don't see how Copilot could help in that case.
Just don't grade homework, and give handwritten quizzes and tests in class? (Yes, the grad students will suffer when grading... unless you pay them, and then they will be lining up.)
Quote: "Oh, have I mentioned that Copilot is free for students? Yep, COPILOT IS FREE FOR STUDENTS. It integrates helpfully right in their favorite IDEs."
In computer class back in 9th grade there were one or two exams about Excel, where we were to use Excel to compute some of the answers. The fact that rote-memorization questions about the same functions and concepts were on the same exam sheet, and that the Excel help function was right there, had seemingly slipped the teacher's mind.
Then again, he was an older physics teacher who only taught computer class ("Informatik") because nobody else was available or more qualified for the job.
When programming classes came along later, he made us use TurboPascal instead of something less cluttered and actually useful in the modern economy, like Python.
That same teacher later performed some show experiments on induction and electromagnetic forces in physics class, letting a magnet fall through a glass tube with several copper coils around it. So far so good, but then he tried to readjust one of the coils on the glass tube, which predictably shattered in his hands, nearly severing some fingers and actually severing some tendons, which IIRC were later replaced with tendons from his legs or something. Needless to say, not the sharpest tool in the shed.
My other physics teacher, for 10th through 12th grade, was a severely esoteric nutjob, fond of long rants about "science" being able to diagnose diseases by shining a laser at a drop of a person's blood and interpreting the reflection/refraction, because there was a "connection" between the person and their blood (or something like that; I honestly tuned out his ravings after a while). Apparently his wife is/was some kind of homeopathic "healer".
All that is to say: my computer and physics teachers were "very fun" and "useful". One had no clue his students were "cheating", and neither was very in tune with modern and/or scientific methods.
It is not the lecturer's job to police their students. People who cheat only end up cheating themselves, and nobody but the cheater should do anything about it.
Could you just use another teaching language? Does copilot work as well with e.g. Racket, Raku or Pascal, which all seem like decent CS101 languages to me?
1. The latest and greatest Codex is now twice as good on its own benchmark suite as the original version published a year ago.
2. It's just as good at Python as it is at JS, Scala, C++, Swift, TypeScript... and other languages are not too far behind. It's not bad at Bash, of all things.
> Here’s an approach that’ll work for sure: use some, let’s call them alternative, programming languages that Copilot doesn’t really know. (...) Sadly, I have news: Copilot’s love for programming languages knows no bounds! Racket! Haskell! ML! (...) Copilot is a ravenous beast: if any code in any language found its way into a GitHub repo, it’s already swallowed it up and is hungry for more, nom nom nom.
Not sure how true this is in practice (I've only used Copilot once, with Python), but if it is, it might invalidate this approach.
In law school there generally isn't any homework in classes about the law. The only classes with homework will usually be legal-research classes or classes that cover non-legal subjects from a legal perspective. As an example of the latter, at my school there was a "Quantitative Methods in Law" class that was basically an introductory statistics class focused on applications involving law.
For the classes that are about the law (contracts, torts, criminal procedure, etc) it pretty much all comes down to exams. I only had one take-home exam [1]. All the others were in-class.
The exam questions were mostly essay questions. There would be a couple paragraphs or so describing some situation, and you would be asked what the legal outcome should be. Your essay needed to identify the legal issues involved, cite the relevant cases and/or statutes, and argue how those applied to the facts of the case to support the result you were trying to argue.
[1] Note: I am not a lawyer. I went to law school when I got burned out with programming. By the end of law school I was no longer burned out, and decided I'd rather be a programmer who knows a good bit about law than a lawyer who knows a good bit about programming.
Grade on explaining the code, and not the code itself?
Someone who can explain some code has understood it (or memorized the explanation).
From an academic point of view, that's not so bad. Even the people in the "memorized" category had to memorize the explanation, which brings them quite close to understanding, IMO.
I have a hard time being interested in anything that Emery Berger has to say about Computer Science Education, since he's doing nothing to support the field. https://github.com/emeryberger/CSrankings/issues/11
Grading students is probably an archaic way of teaching. You want them engaged, not cheating. But OK, institutions want grades... What about this solution?
I guess CS programs are going to need to grow up and teach real topics like ethics, technical communications, effective testing, and large-scale systems design instead of how to write simple loops, which is something I learned at the age of 4 from “Gortek and the Microchips”.
You can test loop-writing in an oral exam, too. So your proposed solution to GPT-3 writing the ethics essays is good enough that it fixes the original problem as well.
This article is giving me a headache. I understand the author's frustration, but I wonder why they have to write in such an animated and confusing way. It reads like a script for a TikTok rather than an article.
Who is supposed to fix the bugs in the future? Nerds?
And where will they come from, 60 years from now?
E(very) S(ingle) W(ord) S(hould) be (no [ACCEPTED] completion here) W(ritten) W(ith) C(onscience). You should be responsible for the code you submit. Are you feeling confident?
I won't touch Copilot. You probably spend as much time proofreading its output as you would _understanding what needs to be written_ in advance. At least, that is what I hope. Of course, you don't have to proofread, and most things can probably be grasped at a glance. But then you are relying on a black box. From a corporation. To me that sounds like agreeing with a dictator.
I wouldn't vouch for my craftsmanship on things I didn't grasp in advance. But maybe that's a cultural hang-up. I like it, though. I want to understand the code I'm being corrected on.
I want to achieve mastery.