You seem to be starting too abstractly. Analogies help, and so does starting from a point where the students can experiment and get feedback on their mistakes from their own code, not from abstract concepts. These are lessons I learned when we designed and taught an 8-week coding course for graduate students who had never touched code before. Feedback for us as teachers was instant. (Analogies. Analogies. Analogies.) Imagine learning structural engineering: would you rather start by prototyping bridges with different materials and seeing which structural designs hold up better, or be lectured first about conflicting philosophies on proper road arrangement, materials, and tensile strength?
An analogy I used (explain like they're five): the front end, back end, middleware, and database are like a Macy's animated window display. HTML is the choice and order of the mannequins in the window. CSS is the colors and positioning. JavaScript is the string that makes the visible characters move. Middleware (Python, Go, Java) is the sales clerks grabbing goods from the warehouse (the database), serving customers, and updating the window panes.
The course did a lecture on HTML/CSS the first day, JavaScript the second, then dug in deeper by debugging a Snake game in JS; Git came on day 3 (and failed for the same reasons you mentioned), then Ruby on Rails (immediate productivity was key), a WordPress installation, then a deeper dive into PHP, etc. One student said, "I can't believe I paid someone $20k to build me a WordPress site that I can now set up and customize myself after a few days."
Always start with immediate productivity for non-CS students: let them sink their teeth into the ideal outcome, then modify that ideal, then learn how to break and fix it, then dig deeper into why and how things work before moving on to best practices. If they don't know why something broke, they can always go back and figure out how it should work first.
If I were to do it again, I would teach Git with paper: have the class augment a paper tree with paper nodes, or something else tangible. You could even have them write a collective story instead of code. The more abstract the topic, the simpler the analogies need to be for the largest subset of students to grasp them.
I have to strongly disagree with this approach to teaching git. I've been in software engineering for a decade now and have taught many entry level and experienced devs about git. If you start out with trees and other git internal concepts you're just going to lose them.
Have them create a directory, put some files in it, `git init`, add all their files and commit. Have them make some changes, add and commit again. Then ask them how their fellow devs would see their changes. This leads into the distributed nature of git and pushing to remote repos. Continue from there with more practical work and repeat.
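For anyone who wants the concrete version, that whole first lesson fits in a handful of commands. A sketch (directory and file names made up; your default branch may be master rather than main):

```
mkdir demo-project && cd demo-project
echo "notes" > notes.txt
git init
git add .
git commit -m "First commit"
# edit notes.txt, then record the change:
git add notes.txt
git commit -m "Describe what changed"
# once a remote exists (e.g. an empty repo on GitHub):
git remote add origin <repository-url>
git push -u origin main
```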
I've seen it take devs months of working with git daily to start to understand some of the more advanced concepts so it needs to be practical up front.
If you've seen devs take months of working with git daily to start to understand some of the more advanced concepts, then maybe the problem is the current teaching method. They might learn the more advanced concepts a lot faster if they learn the concepts first.
You're assuming they want to learn. I've seen devs work with svn for years, which is conceptually much simpler, and they still don't know how branching and merging work; they'll do things like check out each branch separately and copy files over rather than merge.
Edit: also, a favourite interview question for anyone who claims to be familiar with svn is "what does the switch command do", a quick and easy filter to separate those who know from those who only claim to know.
I felt their attitude towards SVN was silly, so I decided I would make my change the right way. But when I went to merge, I got a bizarre error about a 'missing revprop' and it knocked the wind out of my sails.
I'm ashamed to admit it, but I ended up copying the files just like everyone else on that project for the brief stint I worked on it. :(
I disagree. The principle behind git is very simple, but it can be tricky for people to understand. A visual representation of a tool, which most of the time is only a handful of command-line entries, can be useful.
No offense to the OP, but the syllabus is ludicrous. This isn't teaching anything. It's so packed with random software technologies that students will leave the course with nothing. Cut down the syllabus and focus on a few topics. Make it project-based, so that you're building on the same project over the 4 days and learning more about programming.
Split the other stuff (git, databases, OOP, functional programming, machine learning... my God) into another course. There is no way anyone taking this course will get anything from it; it will just be a confusing mess of information that is quickly forgotten.
I'm sort of curious how one could "learn" software engineering in a week. Judging from the article, it seems the course assumed very little prior knowledge about SWE (teaching OOP at the beginning). Wouldn't it make more sense to teach a course assuming some basic background knowledge (like OOP and command line utilities) and use the teaching time to discuss architectural patterns, testing practices, etc?
This is exactly what I was thinking. Software engineering is a lot more than just writing code, yet this blog post makes me think he is teaching it to a completely new audience.
It is very easy to overwhelm software students when trying to teach a broad array of topics (especially in a week!). He mentioned that his students ran into challenges such as not understanding the difference between 'python foo.py' and 'python3 foo.py'. How can you expect these students to overcome these beginner challenges and then still learn topics such as git, CI, machine learning (!!), functional programming, concurrency, processes, etc.?
These topics can take years to learn properly and require a good foundation. Maybe I didn't understand the purpose of this course, but it seems pretty insane.
It's a smell that a course that dedicates an hour to object-oriented programming even mentions UML.
Wouldn't your audience be far better served if you offered dozens of focused sprint classes (1 topic; 1 hour) rather than some sort of eclectic marathon?
The biggest thing that needs to be taught as one of the fundamentals of software engineering is approaching problems with the scientific method.
Design, testing, and debugging (which take up more resources than writing code) require skill in forming good hypotheses and a methodical way of testing them.
These types of courses should be designed more around those skill sets than just learning to write code.
I've actually been thinking at work recently that our work is very much in line with forming and testing hypotheses. When we add code or edit existing code, we hypothesize that the program's behavior will change in specific ways, and then we test those hypotheses via various means of manual and automated testing.
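To make that concrete, here's a toy sketch of writing the hypothesis down as a test before making the edit (Python with pytest assumed; the function and the behaviour change are invented for illustration):

```
# Hypothesis: adding .strip() will make parse_count() accept padded input
# like " 42 " without changing how it handles plain input.

def parse_count(text: str) -> int:
    return int(text.strip())  # the edit under test

def test_accepts_surrounding_whitespace():
    assert parse_count(" 42 ") == 42  # the predicted new behaviour

def test_plain_input_unchanged():
    assert parse_count("7") == 7  # guard against unintended changes
```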
I would actually add to the above though that I think it's important people learn to approach problem-solving from a mathematical perspective. Unfortunately, a lot of people that get into software engineering don't like math, and they would generally be averse to learning about software engineering this way, but I think a lot of bad code gets written because the person writing it didn't approach the problem with the right structure or rigor.
Where you say "mathematical" I'd suggest "logical".
Also, formal logic (Aristotelian, sentential, predicate...) is distinct from mathematics per se.
Agreed that fundamental problem-solving skills are in too-short supply.
I completely agree with you! I have only been in industry for about 3 years now, and quickly learned that I would be spending most of my time solving problems and investigating large, complex issues.
Do you know of any resources where I can learn more about forming good hypotheses and a methodical way of testing them? Maybe books or online courses, or is this sort of knowledge only learned through experience?
I agree. It's a nice attempt, but I don't see how this is practical. It falls into the same trap people looking in from outside software development have been falling into for years: they assume it's easy because they can do an Excel macro and such. Software development is actually really hard. Complex systems are hard.
A parallel to this would be something like, "You built a bird house out of wood. We'll teach you how to build a skyscraper in a week." It's just not possible.
You learn to be a software developer by learning one small thing at a time until, over time, you've accumulated a lot. This course is a collection of such starter things, and there is nothing wrong with that.
I don't think people who go there expect to come out as experienced engineers. They expect to learn some of this so that they can learn more later.
I think this is a very good point. This is giving them knowledge of how to start, and hopefully the resources to continue to improve. Most computer science curriculums don't teach you how to become a software engineer. They teach you theory, probably with some practical work by way of labs and projects. Hell, when I got my degrees in computer and electrical engineering, I don't think version control was ever mentioned, let alone continuous integration. I learned about VCS on the job as an intern; CI wasn't popular back when I interned.
That practical learning accelerated my first few years as a full-time developer. I went from barely using *nix to spending nearly all my time there except for time spent in Outlook. I went from a cursory knowledge of C++ to a beyond-intermediate knowledge. I learned Python (back at version 2.2!).
I think a big point of a course like this is not to give you full knowledge of the domain, but rather to teach you how to learn about the domain. Software is constantly changing; to be effective, you have to be able to keep up, which means a lot of reading. When I first started, Google wasn't a thing, which meant a lot of dead-tree books (ebooks weren't a thing yet). I got my starting points in high school from early forums and mailing lists circa 1995. It took a long time to research things at 56kbps. It also took an effort to convince my parents to buy me programming books at $40 a pop even in the 90s, but when they saw me reading them cover to cover instead of watching TV (easy to get motivated - we only had 2 TV channels) and spending hours slaving away at programming on the computer instead of playing Civ 1, they were more willing to buy the books.
Judging from the course materials and course description, it looks like it's a class for scientific researchers who can code, but only in a crude manner to support their other research activities.
The course is intended to help these researchers to understand software design and collaborative tools like git to be more effective at writing code.
I had trouble reconciling the presentation of the course as an introduction to software engineering with the claim that it will become "the basis for doctoral training programmes in research software engineering at Oxford". Shouldn't candidates for a PhD in software engineering know this stuff already? But it means "engineering of research software", not "research into software engineering".
Yeah, it seems like this would be better served to be split into two (or more) courses if you're assuming almost no prior knowledge. "Software Engineering" is a pretty ill-defined term, but I'd generally think of it as the _process_ of developing good software, and that sort of presupposes you can develop _any_ software. Coming in with some ad hoc ability to program (and probably some bad habits!) would be a plus - then the "process" part is actually solving a problem, rather than just feeling onerous.
I don't know exactly what the goal of the course is, or where he's teaching, but...
it seems like a really hard problem.
Making working computer systems, like a drawing program or an Amazon deploy, can be really hard or really simple. You can teach someone enough Python for fizzbuzz in twenty minutes. Teaching them enough...everything...to get a web version of fizzbuzz, one that checks a database for the words (they might be "nargle" and "gargle" by user preference, after all), with a Python install that they manage, a postgres install that they manage, that they don't feel at a complete loss about if something goes wrong, and to collaborate with others on this...is another matter entirely.
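For contrast, the twenty-minute version really is tiny - a sketch, with no web server, database, or user preferences in sight:

```
# Plain fizzbuzz: the part you can teach in twenty minutes.
for n in range(1, 101):
    if n % 15 == 0:
        print("fizzbuzz")
    elif n % 3 == 0:
        print("fizz")
    elif n % 5 == 0:
        print("buzz")
    else:
        print(n)
```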
I'm not one of those who believes programming ability is innate, but there are compounding effects that can end up looking much the same. In retrospect, I thank my lucky stars for my 6th-grade computers class using HyperCard.
I taught a 2-hour session to a group of middle schoolers and high schoolers back in December - obviously not as big a deal as the author, but still something I'm proud of - and ran into the same issues. These were all students who either had done some programming or were interested in it. I started with "ok, so, first, go to your command line..." and it went sideways from there.
One thing the students taught me that helps overcome some of these issues is a site called https://repl.it/. I stopped my plan, started using it, and taught the rest of the class with repl instead, in order to focus on the concept I was trying to teach. Obviously, as a professional software engineer you need to know about git, the command line, etc., but those aren't what software engineering is, and they rarely excite newcomers to the field. Using repl.it helped get to the core ideas faster for people who don't yet know as much.
I think there's a halfway point between the two, because the command line is a REPL, even the awful Windows one. I think that's a better starting point than a web-based IDE (I don't think repl.it is actually a REPL) because you can naturally expand to more tools, have access to your own files, etc. If you start out that way, then settings like core.editor become natural extensions of what they've already learned and not some magic incantation to get started.
Yeah... I tried that. It was confusing because different people had different operating systems (and therefore different command lines), and since they weren't familiar with the command line at all I would have spent the time talking about something completely unrelated to the thing I wanted to teach about.
I see where you're coming from with your suggestion, but I think in practice it contributes to information overload and intimidation to a large part of the population.
What is software engineering anyway? I've heard this term for almost 40 years, and still wonder what it really means since everyone has a different idea.
You're not alone. Wikipedia has an interesting "Controversy" section about software engineering[0], with a quote from Edsger Dijkstra:
... software engineering should be known as "The Doomed Discipline", doomed because it cannot even approach its goal since its goal is self-contradictory. Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."
Ah yes, I love this line: "How to program if you cannot."
The fact is, nobody can program, not when programming means building complex systems (pretty much anything bigger than 30kLoC) absolutely correctly. Not I, not you, not the late Prof. Dr. Dijkstra — nobody. We can't.
But we need to build these systems nonetheless. So, "how to program if you cannot" becomes an essential area of study.
Yes, it can be abused, but so can nearly every other programming construct. There are places where goto makes sense, such as error handling or breaking out of nested loops. I hate seeing Boolean flags being tracked for whether or not to break out of nested loops. It's more typing and it confuses the intent. Just goto the label following the nested loop.
Also, nearly any looping construct can be implemented using a conditional goto, and if you go down to assembly, they almost universally are. Jumps are gotos by another name. I've seen exceptions used for control flow. Awful. It's about knowing the appropriate construct for the problem at hand.
I've seen do{...}while(false) used with conditional breaks to avoid the use of goto in error handling, and it's confusing, as you're reading through an unfamiliar codebase, to see abuse like that. As I'm reading through the code, I see the 'do' and I expect the code to loop, only to continue reading and find out it doesn't. Frustrating.
Dijkstra was arguing in favor of structured programming languages powerful enough that you didn't need GOTO. For example you mostly see GOTO for error handling in C, because C lacks a structured way for resource cleanup.
Dijkstra certainly wasn't arguing you should use tricks like do{...}while(false) to avoid using the GOTO keyword.
Dr. Margaret Hamilton coined the term while developing the software for Apollo 11. She went on to design a system that permitted bug-free software development ("Higher Order Software" or HOS) and attempted to market it. It was panned by e.g. the Navy and Dijkstra, and languishes in obscurity to this day.
The basic details of the system are described in a book called "System design from provably correct constructs: the beginnings of true software engineering" by James Martin (he mentions Hamilton in the references.)
In modern terms you work directly with an AST and the UI only permits modifications by operations that preserve the correctness of the AST. It eliminates all sources of bugs, and it was simple enough to teach normal people to use it with just some coaching.
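I don't know HOS's actual mechanics beyond what's in Martin's book, but a loose modern analogy is editing the tree instead of the text, e.g. with Python's ast module (purely illustrative, not HOS itself):

```
import ast

source = "total = price + tax"
tree = ast.parse(source)

class RenameTax(ast.NodeTransformer):
    def visit_Name(self, node: ast.Name) -> ast.Name:
        if node.id == "tax":
            node.id = "vat"  # a structural edit on a node, not a substring
        return node

new_tree = ast.fix_missing_locations(RenameTax().visit(tree))
print(ast.unparse(new_tree))  # total = price + vat  (unparse needs Python 3.9+)
```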
Anyhow, that's "software engineering". The rest of us are just pushing text around in buffers and hoping it isn't broken. There are a few people using math to make software, and it works out really well (E.g. https://www.categoricaldata.net/), but mostly "software engineering" as it is usually used is an oxymoron.
The claim that a "correct AST" would eliminate all sources of bugs is unreasonably bold, and I wonder if they really made that claim, or you misunderstood something? That just means the code compiles. Some languages are stricter than others, but none claim to eliminate all bugs.
Static analysis and safe refactoring tools are great, but they are certainly not going to protect you from implementing the wrong thing because you misunderstood the requirements or the environment; that's not possible even in principle. At best you can check for consistency (for example, consistency between a specification and its implementation).
> you work directly with an AST and the UI only permits modifications by operations that preserve the correctness of the AST. It eliminates all sources of bugs
With the caveat up front that I know nothing about this beyond what you have written: wouldn't this just catch compilation bugs? The compiler tells me if the blob of text I gave it is incorrect in terms of any syntax errors.
Business logic bugs and accounting for bogus data seem more important to me.
Yes, you and skybrian have got the idea. It wasn't magic, it just prevented entire categories of preventable bugs, syntax and semantics.
You could still use it to write correct programs that did the wrong thing. In other words, it couldn't catch bugs that occur "between the keyboard and the chair".
I can imagine this being useful for people new to programming. I've seen plenty of times where students get overwhelmed with all of the different keywords/syntax they just learned and don't know where to apply it. Being able to give a list of possible actions available from the current node(?) could be a huge help.
Apparently they sat down, like, accountants in front of this thing and they could derive trees (programs) that automated tasks just fine with a bit of coaching. shrug
(But that also worked with Lisp+Emacs!
> programming new editing commands was so convenient that even the secretaries in his office started learning how to use it. They used a manual someone had written which showed how to extend Emacs, but didn't say it was programming. So the secretaries, who believed they couldn't do programming, weren't scared off. They read the manual, discovered they could do useful things and they learned to program.
My problem is I've been trying to get traction on this for years and no one is interested. (Although type-checking is finally getting some love. It's not all doom and gloom out there.)
Most bugs could be prevented or eliminated automatically.† If we're not doing that then in what sense are we entitled to call ourselves "engineers"? (Just to bring it back around to the topic.) I mean, our bridges collapse all the time. This is not science?
> In a syntax directed editor, you edit a program, not a piece of text. The editor works directly on the program as a tree -- matching the syntax trees by which the language is structured. The units you work with are not lines and chracters (sic) but terms, expressions, statements and blocks.
This is still just syntactic correctness: you can only enter syntactically correct Pascal programs (no typos either), but I don't think it did type-checking or anything like that. Still, it illustrates my point that if you're typing text into an editor and hoping it correctly describes a computer program, you're doing it wrong.
You might have a good case, but the way you present it makes it seem very disconnected from real-world development.
Equating the most trivial kind of errors (which are caught by the compiler anyway) with "all sources of bugs" ignores the kind of bugs which are actually hard to prevent and might slip into production. But these are the kind of bugs developers are actually concerned about. Preventing editors from saving a file with a typo in it is a solution to a non-problem.
If you want to interest people in these ideas, you should show how it can solve real problems.
> The overwhelming impression is that the authors have had more experience than education.
Yes, Hamilton worked on the software which literally put a man on the moon. But apparently this is worthless to Dijkstra since she doesn't use the correct academic jargon.
I'm folding in responses to your other comments here. Well met BTW.
- - - -
You found Dijkstra's review. I'm a fan of his but the guy was hugely arrogant. IMO he craps on them pretty harshly.
I can kind of see where he's coming from, but "he doesn't get it" IMO. He misses the point (or maybe HOS sucked compared to what was later described in Martin's book...)
- - - -
FWIW, James Martin went on to write "System Design from Provably Correct Constructs: The Beginnings of True Software Engineering" which is where I learned about all this. I don't actually know much about HOS specifically, only what's in that book. If you're interested that's the thing to read.
- - - -
> So how does this prevent any bugs which aren't already prevented by the language?
The UI would not let the user enter syntactically nor semantically incorrect "trees".
In modern terms, if you had a syntax-oriented editor (like Alice Pascal) for a language with good type-checking, I think you would have most of what HOS et al. was or did. At the time of Apollo 11 (circa 1969), type-checking was barely a thing:
> In 1969 J. Roger Hindley extended this work and proved that their algorithm always inferred the most general type.
> Equating the most trivial kind of errors (which are caught by the compiler anyway) with "all sources of bugs" ignores the kind of bugs which are actually hard to prevent and might slip into production.
Yes, I know, sorry. I admitted that "all sources of bugs" was hyperbole. I was going for rhetorical effect.
> caught by the compiler anyway
First, why wait? If the errors cannot be committed in the first place surely that's better than detecting them only at compile-time?
Second, compilers didn't catch those errors back in the day. The whole reason Dr. Hamilton made up this stuff was because existing methods, tools, and technology would have crashed her spaceship.
> ignores the kind of bugs which are actually hard to prevent and might slip into production
Every moment saved by the machine is a moment the humans can use to prevent or detect the errors the machine can't detect automatically.
Here we are back to Dijkstra. You know he only got a physical computer when his colleagues forced him to get a Mac so they could email him, eh?
He held, and I agree, that the kind of errors you're talking about do not happen while typing in the software. They occur "between the keyboard and the chair". If I may wiggle a little, I think of "bugs" as glitches in the machine, while the kind of errors you're talking about I think of as just "errors". But I know that's idiosyncratic, and that most people lump them together as just "bugs".
The only thing you can do about them is think clearly.
- - - -
The original question of this subthread was, "What is software engineering, anyway?"
My answer is, "What Margaret Hamilton did."
My point is that we have had tools that systematically eliminate sources of error. All automatically preventable bugs should be prevented (modulo economic considerations, but here the cost of automating error prevention would be trivial and the benefits and cost savings would be pretty high).
Otherwise, calling ourselves "engineers" is pretty lame. IMO.
- - - -
> If you want to interest people in these ideas, you should show how it can solve real problems.
Real problems, eh? :-) Sending a spaceship to the moon? And getting it back? And no one died?
To be fair, I don't know to what degree J. Halcombe Laning's software was influenced by Hamilton. She's pictured next to the "software" section of the Apollo Guidance Computer Wikipedia article, but not mentioned.
> The design principles developed for the AGC by MIT Instrumentation Laboratory, directed in late 1960s by Charles Draper, became foundational to software engineering—particularly for the design of more reliable systems that relied on asynchronous software, priority scheduling, testing, and human-in-the-loop decision capability.[14] When the design requirements for the AGC were defined, necessary software and programming techniques did not exist so it had to be designed from scratch.
Modern IDEs like IntelliJ or Visual Studio highlight syntax and type errors in real time, as you type. You could even avoid typing and just pick tokens from the autocomplete menu. It would be tedious, but it is possible.
So I guess this part of the vision has come to fruition. It is a solved problem. Great!
I just fundamentally disagree with your terminology. Calling syntax errors "bugs", and redefining actual bugs to "errors" does not help anybody. The bottom line is that the major challenges facing software development is not a prevalence of syntax errors.
My definition is that software engineering is about the management of complexity.
Complexity in software proliferates faster than in any other discipline precisely because it is so easy to create and change. The difficulty is not in writing or editing a line of code, but in understanding how that line of code interacts with all the other code in the program. The essence of software engineering is building large systems in such a way that they are as easy to understand as possible.
I learned what an engineer was when dabbling in electromechanics.
I think after the 80s, engineering lost its meaning in computers because people could abstract the machine away and hodge-podge a lot of systems.
No need to carefully encode data, precompile addresses, state etc which would be similar to the dimensioning computations in physical systems.
ps: to extend the story, most engineered things have very carefully defined dimensions and limits, which drive most of the design. The max wattage of your PSU will be reflected in the gauge of the wires, the size of the capacitors, etc. A bit like choosing an arch when compiling.
I remember some threads on 2000s boards where some guys talked about jobs like that in computing. They had a set amount of space and a set number of cycles to work with; with that, they could work out which algorithm could meet the requirements within those limits. I found it especially interesting, brain-wise, because you could "think" in hard figures instead of gluing things together without any idea of what was really going on.
This went out the window in favor of portable software, multitasking, and reusable components.
As soon as you have software that can run on a variety of machines (including new hardware that hasn't been invented yet), there is no way to give a performance guarantee. (An exception might be something like a game console where hardware is fixed.)
Even if hardware is fixed, if you don't control the other programs running on the machine, you don't know how many cycles you'll get.
And whenever you allow a newer, "compatible" version of a library or OS component to be swapped in, all bets are off. No mainstream languages take performance into account when deciding whether a newer version of a library is "compatible". So, any security update could invalidate your measurements.
(And in a way, the same is true of hardware since you don't control the environment it runs in or what might be plugged into it.)
We can ignore all this stuff and it mostly works. We wouldn't have an Internet without it. (Imagine if web browsers had to give performance guarantees.) Instead we'd have some kind of fixed-hardware monoculture.
My definition is a bit different from what I generally hear, and is based on discussions with my father, who is a structural engineer. Engineering is about making a process legally defensible. A structural engineer can stand up in court and say, "As designed, this should have stood up to all expected loads in this location," and have it be taken as expert opinion.
In the US, where engineer isn't as controlled a term as elsewhere, we use the term engineer for all kinds of roles. In, say, Canada, engineer means a professional engineer in one of the legally protected categories. You can't use engineer as a job title outside of those.
So if you're trying to talk about software engineering, imagine something horrible has happened with a system you were involved in designing (say, Therac-25). What would you need to know and do to be able to say that this failure should legally qualify as an "act of god"?
Software engineering is a practical art related to the large-scale production of software (vs. programming as the smaller-scale production of programs). Given its practical nature and young age, it isn't surprising that we don't really know what it is yet (like how carpentry was practiced a few thousand years ago).
Software engineering is what programmers do. The idea that programming is a different or less-skilled occupation is false. The engineering part is any skill and knowledge you use that goes beyond the basic language syntax. Things that everyone who programs for a living has to have, such as the ability to determine requirements and organize code in a maintainable way.
The controversy is due to human biases and stupidities.
It gets used broadly for any kind of software creation, but I think of it as software that is created in an engineering context (like software for machines), where you apply engineering concepts via software. So software engineers would be able to, in software, apply principles of things like control theory to a problem.
Professional engineers in disciplines have to sign their name to their designs in fields such as electrical, mechanical, structural and civil engineering, and can be held personally liable, monetarily, losing their license, and perhaps at the extreme end, criminally liable.
Software "engineers" are not.
Licensed Professional Engineers are also bound by a code of ethics from the appropriate industrial organization; e.g., for electrical engineers, you're bound by the IEEE code of ethics even if you don't practice in the field.
I'm a licensed electrical engineering intern (I've been a licensed EE intern for 16 years now), but work entirely with software; I've never worked as an EE, nor have I met the professional requirements to take the PE exam. I'm still professionally bound by the IEEE code of ethics. I have objected to certain "projects" over the years because of potential violations. I almost quit one job because I was being pressured to violate the code, but my boss was fired and the pressure disappeared.
Such licensing and accountability don't exist in software "engineering".
That said, I do refer to myself as an engineer and not a developer, because, well, I am an engineer. I follow different practices than web developers because I work in finance, and there are billions of dollars on the line if I fuck up. I also work at a firm that requires disclosure of any professional licenses and an attestation that you are in good standing with the governing body (which I am). Professional Engineer (Intern) isn't on the list of options, so I always have to type it in; they're mostly looking for CPA types.
Accreditation does not change what a discipline is; it is merely a governance mechanism to determine who is capable of it.
Physicians practiced medicine long before there were formal licenses (and long after for that matter...see 3rd world discount medical/dental).
> Such a licensing and accountability doesn't exist in software "engineering".
That's not 100% true. There's Cisco certification, PCI certification. There's no HIPAA certification, but violations can and do cost millions.
Nor is it true that other engineering disciplines always require certifications. You can be a happily employed mechanical engineer without sitting for the PE exam.
I digress...the whole thing is a red herring. The definition of a discipline exists independent of the whatever accreditation procedures happen to exist.
When I was studying engineering in the early 2000s, my degree was structured such that in the first year all the engineering students (regardless of whether they intended to major in Chemical, Civil, Mechanical, Electrical, etc.) took a common set of courses.
This included classes such as:
Calculus, Linear Algebra, Physics, Chemistry etc.
There were two required CS courses (one in the first semester of first year and one in the second). The first-semester course was "Intro to Programming and Algorithms", which assumed no one had any prior programming experience and started with the very basics: "this is a variable", "this is a function", "this is a conditional statement", "this is a loop", that sort of thing. Eventually it covered algorithms like quicksort and bubble sort, and it really helped get you into the mindset of "this is how you use programming to solve a problem". For example, I remember writing code to solve simultaneous equations (i.e. systems of linear equations) and to find the line of best fit through a series of discrete points, things like that.
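Exercises like those come out to a few lines in, say, Python with NumPy (a sketch; not necessarily what the course actually used):

```
import numpy as np

# Solve the simultaneous equations 2x + y = 5 and x - 3y = -8
A = np.array([[2.0, 1.0],
              [1.0, -3.0]])
b = np.array([5.0, -8.0])
print(np.linalg.solve(A, b))  # [1. 3.]

# Least-squares line of best fit through discrete points
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.1, 2.9, 5.2, 6.8])
slope, intercept = np.polyfit(xs, ys, 1)
print(slope, intercept)
```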
In the second semester the required class was "Intro to Software Engineering". I remember it covering things like object-oriented programming (classes, inheritance, public/private members and all that stuff), and there was a component on unit testing. There was also an essay component to the course; I remember having to write an essay about the Ariane 5 explosion. I really didn't enjoy this course, and the impression I got was that not a lot of my friends did either.
I can understand the intent of making us sit through the course, but in some ways I think it did more harm than good: pretty much everyone came out of that class loathing "object-oriented" programming. At the time it was a struggle to see the relevance of the topics covered, and even now, with 15 years of engineering experience working in manufacturing, what we learnt in that classroom is very divorced from what goes on in the real world. When I write code I am typically not writing complex software with multiple reusable components that talk to each other; most of the code I write is trying to solve some specific problem. I'm usually thinking in terms of mathematics and problem solving rather than classes, objects and unit tests, and my code tends to be much more in the style taught in the algorithms course than in the software engineering course. I suspect quite a lot of "research code" is similar.
When I studied, our software engineering class was more about process than actual development. About gathering/writing requirements, acceptance requirements, etc. Our final project for the semester started day 1 and didn't end until the week before finals. All sorts of requirements, design, architecture and acceptance documents. Then, we had to implement the system according to the docs.
The project was to define a course scheduling system. Had to be able to define which rooms had which equipment, capacity, etc. Then professors would be able to enter their needs of time and equipment, then we'd have to allocate the classes accordingly.
When I was doing my Ph.D., I also ended up teaching some software-engineering-related courses. One thing that struck me, and that I've since fixed by becoming a software engineer, was that here I was teaching software engineering without any experience whatsoever actually engineering software professionally. This is the big problem with academic education on this topic: the teachers typically have little or no relevant experience actually engineering software, because they have been in university all their lives, and as such they lack experience working for extended periods with an actual team on some actual non-trivial software. Let's just say that I learned a lot about this stuff after I left the academic world.
A good practical introduction to software engineering should cover a lot of ground and most of it is going to be non technical. Learning how to program is mostly out of scope for a software engineering introduction. Assume the students have already learned some programming language and have some experience building small programs and maybe have already had some algorithm courses, etc. It doesn't really matter what language or tools they use. For the purpose of an introduction, pick something simple where the focus is mostly not going to be fighting with the tools.
The key thing to learn as a software engineer is working with multiple people and the project dynamics that are associated with that. So:
- project management and different roles in software projects, different ways of structuring teams, etc.
- overview of different process methodologies common in the industry and where they came from: scrum, kanban, waterfall (don't do this), etc.
- different types of testing and their importance for continuous integration and deployment
- different strategies for estimating cost, complexity, duration, etc. and their flaws and pitfalls.
Most of the problems he is having seem orthogonal to software engineering, which he is trying to teach. It's clear he's teaching it as an intro course. In my opinion, SW engineering absolutely should be taught as a later course - after the student has had quite a bit of exposure to programming.
I wouldn't expect a SW engineering course to require teaching much command line usage. That's a thing for lower level courses. Same with editors.
Don't use git if it is confusing to students. Mercurial is as good and much friendlier. The potential benefits to git are fairly advanced and will not have any impact at an introductory level. Your goal is to teach the concept of revision control, not the internals of git (which every git advocate says one should know if one wants to use git well - just search any HN thread that comes up about git).
As for Python vs Python3: Again, I would not expect a SW engineering course to teach any language. The course should let the student pick a language of his/her choice.
I think there's a lot of hype around quickly training software engineers, and this article is an example. We poke around with tools to find quick and easy ways to create software. Tools have been improving quickly, but creating software remains challenging. The tool race is influenced by the market more than by software engineering.
The discipline of software engineering has not changed much for decades. To teach effective software engineering, we need to start with the principles. Some fundamental questions:
- How do we create functional and scalable software?
- Given an existing (complex) software system, how do we maintain and incrementally improve?
One important principle I learned outside of school and training: software is never just code; we always need to think about the user ecosystem around the software. Failure to understand this principle leads to wasteful effort - for example, attempts to rewrite the code from scratch.
These principles are very abstract. We cannot teach them easily. People use tools and new technology to cover up their lack of understanding.
> git, in particular, is decidedly unfriendly. What I want to do is commit my changes. What I have to do is stage my changes, then commit my staged changes. As a result, teaching git use takes a significant chunk of the available time, and still leaves confusion.
In college, the way my professors got around this and other things like it was by creating an alias file and scaffolding code. We would get a project that was half done, with all the annoying crap taken care of, and then only have to implement the algorithms. We would then save our work with a single simple command with no options - it was just a shell script wrapping all the source-control steps (RCS at the time, but the idea is the same).
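Something like this, presumably - a sketch in git rather than RCS, since git is what the thread is about, and the real aliases are long gone:

```
#!/bin/sh
# save.sh: "save my work" as one command, with no options to remember.
# (Illustrative only; the professors' version wrapped RCS, not git.)
set -e
git add -A
git commit -m "${1:-checkpoint}"
git push
```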
They never actually taught us about source control. We just learned it on our own when we stopped having access to their aliases. But by that point we had enough knowledge that we could figure it out (or we were in our first jobs and the senior devs taught it to us).
If it's an introduction to software engineering then I think leaving out software craftsmanship or functional programming, etc is not necessarily a sin.
And git (the cli not the versioning approach per se) never made sense to begin with. It is extremely user hostile. So no surprises there.
I have heard often people's view that git is not user friendly, but it's always confused me because that hasn't been my experience compared to any other tool I need to learn. Can anyone explain more this view? Perhaps I'm too accepting and forgiving.
`git merge`, `git rebase`, `git rebase -i`, `git revert`, `git checkout`, `git reset`, `git reset --hard`, `git log`, `git reflog`, `git stash`, `git cherry-pick`, `git add`, `git rm`, `git status`, `git commit`, `git push` and squash, reword, fixup, pick and f@ck knows what else. Who wants to really use all this bullshit just to do versioning?
Especially if you are a beginner, this doesn't make any sense (and not even later in many cases)
In my experience over the last two decades or so in the industry, there is always a special kind of person in the IT field who takes pride in knowing all the nitty-gritty details of the latest shiny versioning tool, be it IBM ClearCase or SVN or CVS or Git or Mercurial. Most of these tools only slightly improve the user experience and merely claim to solve users' problems - they overpromise and underdeliver. But there is this special kind of person who takes pride in it.
To me this seems like someone who takes pride in knowing all the details of his toaster. Might be impressive, but also completely irrelevant for the 90% of the toaster users.
> Who wants to really use all this bullshit just to do versioning?
Nobody. It is a strawman or should be.
You can teach everything a beginner needs to know about git as an ordinary team member in less than one A4 page and less than half a day's work.
Of course this means part of what you teach then is when to talk to the git expert on the team but as long as beginners stay away from squash, rebase etc they should be fine.
Ok, add another half day about effective diffing and merging too, it is a big topic and useful outside the context of git too (local history etc).
>Of course this means part of what you teach then is when to talk to the git expert on the team but as long as beginners stay away from squash, rebase etc they should be fine.
The thing is, if you ask 10 git experts what should be taught to a beginner, you'll get 10 different A4-sized pages. As an example, when my team (finally) switched to git, the expert on the team made a tutorial on which commands to use and which to avoid or ignore. Looking at it now, he taught that rebase is almost mandatory and listed problems you're guaranteed to run into if you don't use it.
Not saying I agree with him, but that is one problem with git - no clear consensus on beginner-friendly features.
It's a tool that you use many times a day every day for years, and understanding how to do it well helps you make money... doesn't it make sense to spend some time to learn about how it works and how to use it?
I think the unfriendliness is that features are hard to discover and the commands are at a lower level than beginners (and a fair few non-beginners) can tolerate. Additionally, almost everyone uses a pretty straightforward workflow with git, but if you look at the docs it looks very complicated. You aren't trying to `git checkout -b`, you are trying to start a new feature. That's a weird layer to add for someone still struggling to control a computer with the keyboard instead of the mouse.
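You can partly paper over that gap with aliases that name the intent instead of the plumbing (a sketch; the alias names are made up):

```
git config --global alias.start-feature "checkout -b"
git config --global alias.save "!git add -A && git commit -m"
# now "git start-feature login-page" and "git save 'message'" read like the intent
```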
The other thing is that the dev community has settled on a small range of simple and effective git workflows, where you only ever need six-ish commands, but check out man giteveryday (which is where man git sends you if you want a simple answer).
I mean, it's cool for a power user but a little intimidating for someone who just learned about man pages. And of course, if you copy and paste from it, those footnote (numbers) are going to give some weird errors.
Edit: I should mention that despite the critique I appreciate all the incredible work they did in exchange for absolutely nothing from me.
As someone who feels they understand git enough to use it... I think it is unfriendly.
The terminology is just a beast. I feel like every term / command is a bit of a hieroglyph that means nothing on the surface and then I have to associate with some other thing (possibly more hieroglyphs) and memorize.
Few git... isms really tell me / give me a clue what it does.
It feels like programming in a foreign language (god bless you folks who do that!) where I just don't have anything to hang on to, just memorize it all.
Try SmartGit. It gives you all the power of the command line and then some. A lot of the 5-10 line git recipes you see online are just a single operation in SmartGit.
The reflog is one example. Instead of copying and pasting hashes and checking out each one to see if it had the changes you want, just click the Recyclable Commits box and every commit from the reflog shows up in the normal commit and branch display. Just click one to see its changes, or click two commits to see what is different between them.
SmartGit is great. It makes Git completely usable for a novice and still useful for power users, and it also makes it very easy to visualize the state of your repositories. For the most part it hides things like staging; however, you still need to understand the git "mental model".
I think the biggest problem for people new to git is that it solves problems they don't really have yet. They are solo on their own repo... they don't see the point of branches; they only really ever need to stage/commit/push all their changes to a remote. So they learn the magic command-line incantations until the day it goes wrong somehow, or they need to roll back, or they need to start collaborating, and then they have to dig a bit deeper.
With SmartGit the lesson to start with is simply to commit, commit, commit. Commit every little change. You don't have to worry about adding to the index, you don't have to worry about branches, just type Ctrl+K, type a message, and Ctrl+Enter to commit. Or select from the menus to do the same thing.
You can always go back and add a branch onto any commit. Just right click the commit, select "add branch", and give it a name. You can move branches around by dragging them and immediately see what branches are where.
And if you really mess up, click Recyclable Commits, and any commit you thought you lost will be there.
They do! A non-commercial license is free, and they have a very liberal interpretation of this:
> A purpose is considered non-commercial only if the SOFTWARE is exclusively used to actively work on open-source projects, for learning or teaching on a public academic institution, in the spare time to manage projects where you don't get financial compensation for (hobby usage), by public charitable organizations primarily targeting philanthropy, health research, education or social well-being.
Back then, they showed me CVS in maybe 10 minutes. I never needed to Google commands and somehow knew how to use it afterwards. It was easy to remember. I never ran into a problem I had to Google to fix.
Git takes more, and people constantly ask questions or have to keep printed cheat sheets.
In my experience, the reason people struggle with git is that git is also their introduction to source control. Students generally get the gist of staging and committing, but branches/merges/rebasing usually cause a lot of issues.
The other thing is that students are also scared to get their hands dirty with git. Some of them don't fully understand source control and therefore they are terrified that if they commit/merge something incorrectly then they will destroy the whole code base.
As someone who went through RCS, CVS, SVN, ClearCase, and some others... no, not really. Git just does a great job of taking things that are simple in every other VCS and making them hard to understand. Git is powerful, but it's completely user unfriendly.
The key is to not introduce git in the context of a real codebase. The codebase is precious, and git (for a beginner) is a bunch of confusing, sharp tools. The cost of a misstep is too high.
Working with a "throwaway" local repo was crucial when I was learning git. It was just a handful of gibberish text files in a few directories, and I experimented with "what happens when I run this git command?", comparing what actually happened with my expectations of what would happen.
The thing that really put git together for me was understanding that each commit id is based on the previous commit id. So you could have two branches with the exact same code committed... and different commit ids.
That explains rebases: you take one branch's commits and put them on top of another branch's commits, but the replayed commits get REWRITTEN ids because their parents have changed.
With merges, you keep the same commit ids but smush the two branches together.
Also, when I do a git reset HEAD~1 and make changes, I can force push and understand that my new code is overwriting the last commit.
I think that really helped my understanding of git.
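One way to see this concretely (a sketch; the hashes in your repo will obviously differ):

```
# A commit object records its parent's id, and the commit id is a hash over
# that whole object, so identical code on two branches still gets different
# ids, and rebasing (which changes parents) has to rewrite them.
git cat-file -p HEAD              # shows the tree, parent, author and message
git log --oneline --graph --all   # compare ids before and after a rebase
```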
Having taught software, devops, and the like for years at this point, while this article touches on some difficult parts of teaching, they're all just side effects of bigger difficulties of teaching software.
a) Everything is the "tip of the iceberg" - which makes teaching what you actually want to teach tricky. Which is why so many resources do the whole "we won't go into it, because this is a large topic." For example, Linux. Most projects and resources require this, but barely any actually go in-depth.
b) Everything is always changing - and so you have to either support your students in the face of these changes or constantly keep the materials up-to-date. This is one of the largest challenges, if not the largest challenge.
c) Everything has to be engaging - it's not enough to know what you're talking about. You have to know how to talk about it in a way that creates engagement and thus learning. This isn't something you learn how to do when slinging code left and right.
d) Everything needs to be TAUGHT, not said - the ability to teach is often an afterthought for folks looking to educate. If you want to really help your students, you have to learn how to teach so that they can think independently, not rely on cheat sheets, prep tests, and step-by-steps.
e) Every student needs the motivation to learn - usually instructors will stop at spitting out their knowledge. The best instructors help their students push through their barriers, whether personal or professional, and get the learning done. It's easy to learn in a structured school system. It's hard to learn when you have multiple kids, a full-time job, and all the emotional baggage of being an adult.
Now, to be clear, I'm talking about teaching modern, practical implementations of software...not CS theory or other things that are far more evergreen and less technical.
I was a teaching assistant for the Introduction to Programming course on the first year of a CS degree. At the very least I can say it was a humbling experience.
One thing I learned while teaching complete novices is that it is really hard for experienced engineers to get into the mindset of students. You need to forget everything you know and start more basic than you think is necessary. You will need to teach people what variables, lists, functions, and so on are.
As an example, a student got confused when we wrote something like the following:
let X = 1;
let X = X + 1;
How could X be X + 1 at the same time? Maths doesn't work that way! It might seem absurd to a programmer, but if you are in the mindset of someone who is only familiar with maths equations, it makes sense.
So yeah, I would forget git, command line, tangents and whatnot, and focus on very basic concepts instead.
So to become a "real software engineer" and not "just a programmer" one needs to know how to create requirements, use source control, do modular, and/or object-oriented and/or functional design, deploy your software, and take advantage of open source?
Those are things that all programmers need to know (assuming "programmer" means someone whose primary job is programming), which is why the distinction between "programmer" or "developer" and "software engineer" is bogus. It's usually just an attempt to justify higher or lower compensation.
The secret of programming, and its greatest challenge, is that the ability to get "good" at it is largely innate. Just as you can't teach people to prove math theorems (the ones encountered in an elementary college proof class), you can't teach great software engineering. Nearly anyone can learn to build a JavaScript app; beyond that, at higher levels of abstraction and complexity, things become murky. It's why large tech companies ignore past experience - it proves nothing. They need to see you use abstractions from memory on the spot to ensure you have the capability. It's not perfect, but nothing is.
Software engineering at least has a fundamental underlying educational system supporting it (comp sci, EE).
I think the harder thing is training sysadmins; there's no real formal education for it. I know we're downplaying a lot of the traditional sysadmin roles and pushing for more development in sysadmin work, but the sysadmin approach is still required, and the only things that keep it going are the corpus of industry knowledge, a myriad of vocational or vendor-based certs, and a willingness to teach.
I'd love it if there was some sort of "helper terminal" that could teach people things like git, and basic command line stuff as they went.
It could catch common errors and provide useful explanations. It could possibly even go as far as attempting to understand plain-text commands, e.g. "delete the file file.txt", and suggest commands to do that.
Sounds more like "the challenge of using a mish-mash of non-integrated tools with no interface". Developers could definitively solve this by moving away from 70+ year old abstractions (files, terminal) towards things more in line with Smalltalk ideas. But I guess it's better to make thousands of beginners suffer each year, instead of admitting that the current way of doing things is conceptually flawed and outdated.
(And no, there is nothing special about copy-pasting commands and Unix pipelines that cannot be easily recreated in UI if the UI is designed to be programmable. )