Computer Science Courses That Don't Exist, but Should (dadgum.com)
415 points by Scramblejams on Sept 11, 2015 | 247 comments



Can I add another one? How about "The Network Doesn't Work The Way You Think It Does: A Distributed Systems Survival Guide".

As someone who has spent a good chunk of their career building high performance networks and another chunk of it working closely with developers who care a lot about those networks, I'm constantly amazed at how little top tier graduates from well respected schools know about networks.

Relatively basic stuff, like what your application's traffic looks like on the wire, how an OS decides what to do with a packet, and how latency affects applications, is completely foreign to way too many developers. I could go on and on, but given the connected nature of things today, it seems like a very overlooked area.
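
To make the latency point concrete, here is a minimal Python sketch (not from the thread; example.com:80 is just a placeholder target) that times a TCP connect plus a tiny request/response, and touches one of those on-the-wire knobs, TCP_NODELAY:

    import socket
    import time

    HOST, PORT = "example.com", 80   # placeholder target; point it at something you control

    def round_trip_ms(n=5):
        samples = []
        for _ in range(n):
            start = time.perf_counter()
            with socket.create_connection((HOST, PORT), timeout=5) as s:
                # Disable Nagle so the tiny request goes out immediately; whether this
                # matters for your app is exactly the kind of thing worth knowing.
                s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
                s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
                s.recv(1024)   # wait for the first response bytes
            samples.append((time.perf_counter() - start) * 1000)
        return samples

    print(["%.1f ms" % ms for ms in round_trip_ms()])

Chatty protocols pay that round-trip cost on every exchange, which is often where the time actually goes.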


This is Networks and Distributed Systems at UChicago:

CMSC 23300. Networks and Distributed Systems. 100 Units. This course focuses on the principles and techniques used in the development of networked and distributed software. Topics include programming with sockets; concurrent programming; data link layer (Ethernet, packet switching, etc.); internet and routing protocols (UDP, TCP); and other commonly used network protocols and techniques. This is a project-oriented course in which students are required to develop software in C on a UNIX environment.

http://collegecatalog.uchicago.edu/thecollege/computerscienc...

The 3 major C programming assignments last year: an IRC server, a TCP stack, and a router.

Best course I've ever taken.


I dug around and found the course projects: https://github.com/uchicago-cs/cmsc23300/wiki


Thanks for this.


Should be considered a typical CS course.


I know a few good developers that don't even know how a computer works. Everything is so abstracted you never really have to know.


As a devops/infrastructure guy, this drives me nuts. You abstract everything away, but then are shocked when you don't understand what's happening when the underlying layers fail or break down in unexpected ways.


I understand your frustration, but the field has become too big for a single (average) person. When problems arise, you team up with someone who knows more about the issue or you learn on the spot. Nothing wrong with not knowing everything.


That's not my problem. My problem is the illusion presented that you don't need to understand the underlying infrastructure. Far too often I see infrastructure as an afterthought, secondary to the code.


What kind of organisations have you worked in? I've been lucky (?) to work in an organisation with a dedicated network/infrastructure team. My work life got a lot better when I gained more understanding of how the network was put together (vis-a-vis load balancers, DNS, data center locations etc).

I still don't have a "low level" understanding of the network I use.


I've always managed or been on infrastructure teams at other orgs; my latest gig is at a startup, and I'm the only infrastructure guy. Perhaps that is the problem.


Good chance to educate others? I know I'll be looking for an opportunity to work with an infrastructure guru at some stage in the future.


I'm going to embrace it as such! Thanks!


More likely when problems arise you "team up" with someone who knows [almost] everything.

And I agree with other posters that this isn't something new. That's the separation between senior and junior specialists (and no - if one doesn't know what network latency is, that one is not a senior frontend engineer).


Fundamentals are small, compact and trivial. Everyone can get a good grasp of all the basics before going on to any "real world" coding with its diverse, needless complexity.


It's rapidly becoming a two-tier world with two types of programmers: those who are well trained in coding within known environments, and those who are like the folks in The Matrix films who took the red pill-- they know what's really going on.


I'm an embedded systems engineer. It's always been that way for me: I own and compile the bootloader, kernel, filesystem, runtime libraries, and the application code. I have the schematics and gerbers for my boards. I have the reference manuals for every chip.

What's weird is watching people play with stuff like Raspberry Pi and they expect things to just apt-get and launch like it's always been done for them. And most of the time it does just work that way. Except when it doesn't.


Many development environments would do well with some graphical representation of how much of the platform's capability is actually used up during a test run, how cache friendly the code is, and how much your latest changes affected this. And these coloured columns need to show up in every svn/git commit message. Profiling needs to not be something that the embedded guy does after you messed it up.

Jerry committed: +25% more time spent in function xy, due to cache unfriendliness..

If you see it, and your colleagues see it, it becomes an issue. If it becomes an issue, the perpetrator will look into it.


It's been like that for decades. Or do you really know what assembler your program compiles down to?


For some of them, absolutely. I change the assembly output and reassemble to verify that the atomics are necessary and are fixing the cases I think they fix.

As far as I'm concerned, if you're not removing the atomic instructions to make sure the breakage you expect actually happens, you're not really testing your code.


You don't really have to know the internals, but I personally think you ought to know the abstractions themselves at the very least. To be more specific, you need not know, say, the internals of your compiler, but you ought to know how compilers in general work. Same with protocols, filesystems, OS {vm,scheduler} ...etc.


A good developer must know at least about the standard latencies and about the memory hierarchy. Otherwise it is a really bad developer who is going to keep annoying the innocent users.


I am a recent CS grad and I am constantly amazed by how little I know about networks. Do you have a favorite book or online resource on the subject?



One fun thing is to download Wireshark and look at the packets being sent in detail when you do common things like browse the internet or use FTP.


I think this is a really good idea, but I'd add one alternate possibility as well. Doing this on a typical user's computer today will present a mind blowing amount of data--so much that it is easy to give up trying to figure out what's going on if you've never done it before. Instead, perhaps spin up a Linux VM, maybe even with no GUI to start. Then run Wireshark on its interface. Much less noise. Should be much easier to begin dissecting.


No need for a VM, just do some filtering in Wireshark. This will help you understand even better what is possible (filter by IP, port, and many other things).
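
One way to get a capture that is easy to pick apart (a sketch of mine, not from the thread) is to generate a single plain-HTTP exchange yourself and then filter on it, for example with the display filter tcp.port == 80:

    import socket

    # Start a Wireshark capture first, then run this; filtering on "tcp.port == 80"
    # shows the handshake, the request, the response, and the teardown in one place.
    # example.com is just a convenient plain-HTTP target; any will do.
    with socket.create_connection(("example.com", 80)) as s:
        s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        data = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk

    print(data[:200])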


Agreed. I took a networking course at University, and at community college, but didn't really understand networking until I started fooling around with Wireshark.


For a little softer experience, you can load up Fiddler and just look at the HTTP traffic. It's a little mind boggling to me to envision trying to debug web apps without Fiddler, Chrome devtools/Firebug and server-side logging inspectable at the same time.

Also a little terrifying to realize just how much incoming and outgoing traffic all the things on your computer are generating all the time.


It's even more fun to do it on a network, where you can see what things other people are doing.


Computer Networks by Tanenbaum: http://www.amazon.com/Computer-Networks-Edition-Andrew-Tanen...

It's a classic.


The Cisco Press books for the CCNA course are fairly good.

I actually did the proper 4-semester Cisco academy course at night school a while back, which should give you real hands-on experience - one of the tests is the instructor breaks a system and you have to fix it.


TCP/IP Illustrated is what I cut my teeth on back in the day. You will need to be able to read C to follow the code, which is mostly from the BSD TCP/IP stack IIRC.


The Stevens series is a classic, and you should find a used copy to stick on a bookshelf for the nerd cred and because they are damn fine books, but I think you will find Kozierok's TCP/IP Guide from No Starch Press to be a more modern and up-to-date text.


I ended up doing the CCNA course just to figure out how networking works. I was designing a distributed trading system and I wanted to actually understand how things worked rather than just cargo culting a few commands and praying. There's just so many acronyms in the networking space it's hard to figure out without an overview.

Even with the CCNA course, you're still not going to know about stuff like security and firewalls on a deep level.

But get the textbooks and read through them.


This is probably complete overkill, but my favorite is still McDysan/Spohn "ATM Theory and Applications." The first few chapters, before you dive into the details about ATM, are a very good overview of all communications systems.


Closely related I think: breaking into software / hardware / networks. A course that has people doing basic and maybe not-so-basic exploits on real systems. Including attendance at a conference such as DEFCON.

Our future software engineers need to have security as a fundamental concern to software engineering and not as something that's an 'add-on'.


A thousand times yes. I've met tons of good programmers, even CS/CE Ph.Ds, for whom networks and anything wire-related is black magic beyond mortal understanding.

Not only does it prevent them from dealing with or writing such systems, but it's also a major contributor to clueless security FUD. It's not that there's nothing to be afraid of... it's that if you don't understand how networks work you are probably afraid of the wrong things.


https://www.nsnam.org looks like a good idea, but my head is spinning from the first tutorials - I'd love a "network simulator" that could just replay pcap files. Do you think that programmers watching and understanding some of what's going on in a Wireshark session would help?


Also found via the course wiki - http://mininet.org/


Part of my Bachelor of Software Engineering included calculating, to the nanosecond, the latency of networks from the kernel interrupt to the other side (including across copper, fibre, and waiting in RAM buffers along nodes on the way).
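
For anyone curious what that kind of calculation looks like, here is a back-of-the-envelope version with illustrative numbers (not the ones from the course):

    # 100 km of fibre, a 1 Gbit/s link, one 1500-byte frame,
    # signal propagation at roughly 2/3 the speed of light in glass.
    distance_m      = 100_000
    propagation_mps = 2e8               # ~0.67c in fibre
    link_bps        = 1_000_000_000
    frame_bits      = 1500 * 8

    propagation_s   = distance_m / propagation_mps   # time spent on the glass
    serialization_s = frame_bits / link_bps          # time to clock the bits onto the wire

    print("propagation:   %8.1f us" % (propagation_s * 1e6))    # ~500 us
    print("serialization: %8.1f us" % (serialization_s * 1e6))  # ~12 us
    # Queueing in buffers along the path and interrupt/stack overhead at each end
    # come on top of this, and often dominate at short distances.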


Network speeds I don't know about, but the book "High Performance Python" comes pretty close to the description of "CSCI 4020: Writing Fast Code in Slow Languages". Check it out!


Thanks for the tip. I've been exposed to Cython a little, but my experience was very ugly... will check this out.


Awesome. I was just about to say I'd pay for CSCI 4020.


My school offered both networking and distributed systems courses, and it was a tiny school with an even smaller CS department.

The problem is getting practical experience in modern techniques.


CSCI 0666: Implementing Reverse Binary Trees on whiteboards.

CSCI 0123: Implementing Monads in COBOL

CSCI 0555: Successfully correcting internet commenters.

CSCI 0556: Nitpicking points made in CSCI 0555.

CSCI 0777: Introduction to twitter bot polynomial time algorithms with an emphasis on winning shitty contests.

CSCI 0911: Learning functional programming with Haskell.

CSCI 0912: How to come up with uses for knowledge acquired in CSCI 0911.


> CSCI 0666: Implementing Reverse Binary Trees on whiteboards.

Not of much practical use later in life, but an A in this course guarantees a technical interview pass with any major Silicon Valley company.


It's just a warmup question.


Here's a list of all the algorithms and data structures I've actually needed to implement by hand so far:


Fascinating. Taking the restricted definition of "algorithm" you seem to be using, here's the first few I can think of that I've had to implement (for money):

- Converting a dollar/cents value into text, e.g. 67.33 -> "sixty-seven dollars and thirty-three cents", in XSLT of all things. I wrote the first version in Emacs Lisp, then translated that to XSLT. I was working on manually translating a bunch of .doc files to a new layout for the automatic generation of contracts, and the vendor had supplied a version that, after reading its source code, I realized was broken on certain inputs, like 10001 or something. I guess it was made by some people who thought they'd never need to write algorithms when working as a software developer. (A rough sketch of this conversion follows at the end of this comment.)

- I was asked to combine the information in a few spreadsheets with several thousand rows into a new one. I ended up needing to compute some statistics on certain ranges of a set of rows, so in my short Perl program I sorted the data by the relevant key and used binary search to find the ranges.

That's not a list for my whole career, that's just what I can remember from my first summer internship as a data entry/IT hob-job monkey when I was a teenager.
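
Picking up the first item above, the dollars-to-words conversion comes out to roughly the following sketch (names and the under-a-million limit are illustrative; real money code should avoid floats entirely):

    ONES = ("zero one two three four five six seven eight nine ten eleven twelve "
            "thirteen fourteen fifteen sixteen seventeen eighteen nineteen").split()
    TENS = "twenty thirty forty fifty sixty seventy eighty ninety".split()

    def words(n):                      # handles 0..999999
        if n < 20:
            return ONES[n]
        if n < 100:
            tens, rest = divmod(n, 10)
            return TENS[tens - 2] + ("-" + ONES[rest] if rest else "")
        if n < 1000:
            hundreds, rest = divmod(n, 100)
            return ONES[hundreds] + " hundred" + (" " + words(rest) if rest else "")
        thousands, rest = divmod(n, 1000)
        return words(thousands) + " thousand" + (" " + words(rest) if rest else "")

    def money_words(amount):
        dollars = int(amount)
        cents = int(round((amount - dollars) * 100))
        return words(dollars) + " dollars and " + words(cents) + " cents"

    print(money_words(67.33))   # sixty-seven dollars and thirty-three cents
    print(money_words(10001))   # ten thousand one dollars and zero cents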


For whatever reason, there isn't a nice priority queue out of the box in the .NET Framework, so that is about the only one that I've had to implement.
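
For reference, the hand-rolled version is short in any language. A minimal binary-heap sketch in Python (which ships heapq, so this is purely illustrative of what hand-rolling one looks like):

    class PriorityQueue:
        def __init__(self):
            self._heap = []

        def push(self, priority, item):
            # Append, then sift the new entry up toward the root.
            self._heap.append((priority, item))
            i = len(self._heap) - 1
            while i > 0 and self._heap[i] < self._heap[(i - 1) // 2]:
                parent = (i - 1) // 2
                self._heap[i], self._heap[parent] = self._heap[parent], self._heap[i]
                i = parent

        def pop(self):
            # Swap the root with the last entry, remove it, then sift down.
            heap = self._heap
            heap[0], heap[-1] = heap[-1], heap[0]
            priority, item = heap.pop()
            i = 0
            while True:
                child = 2 * i + 1
                if child + 1 < len(heap) and heap[child + 1] < heap[child]:
                    child += 1
                if child >= len(heap) or heap[i] <= heap[child]:
                    break
                heap[i], heap[child] = heap[child], heap[i]
                i = child
            return item

    q = PriorityQueue()
    for p, task in [(3, "low"), (1, "urgent"), (2, "normal")]:
        q.push(p, task)
    print(q.pop(), q.pop(), q.pop())   # urgent normal low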


Fisher-Yates shuffle may be the one "book" algorithm that I literally implemented a handful of times.
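
For completeness, the whole shuffle fits in a few lines (random.shuffle already does this; the sketch just shows the shape of it):

    import random

    def fisher_yates(items):
        # Walk backwards, swapping each slot with a random not-yet-fixed one.
        for i in range(len(items) - 1, 0, -1):
            j = random.randint(0, i)
            items[i], items[j] = items[j], items[i]
        return items

    print(fisher_yates(list(range(10))))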

But there are a couple of more generic algorithms that I occasionally can't really help but implement in some variation or other. Usually because the "book" version of the algorithm that is already in the standard library of your language of choice doesn't quite fit your problem. Many classic algorithms are simple enough that it's easier to code them for your problem than to write an adapter around the standard library version.

For instance binary search. It's a tool. I prefer not to implement it by hand (less chance of bugs) if I know there's a library call that does it fast, but if need be, it's easy enough.
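
As a reminder of how little code is at stake, a hand-rolled lower-bound-style binary search (the bisect module covers the common cases):

    def binary_search(sorted_items, target):
        lo, hi = 0, len(sorted_items)
        while lo < hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        # lo is now the first position not less than target.
        return lo if lo < len(sorted_items) and sorted_items[lo] == target else -1

    print(binary_search([2, 3, 5, 7, 11, 13], 7))   # 3
    print(binary_search([2, 3, 5, 7, 11, 13], 8))   # -1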

Sometimes you may not realize you're implementing a classic algorithm, say when writing certain recursive functions that are actually equivalent to a tree-traversal algorithm of a datastructure.

Sometimes you do realize, coding up a class definition for something, and then you notice "hey these fields actually make this data sort of equivalent to a doubly-linked list". That's when your algorithms-knowledge comes in handy, because it allows you to stop and consider, is a doubly-linked list really the most suitable datastructure for this problem? Do I know other clever "book" algorithms that fulfill this need? Are they better? More performant? Without the knowledge of the first algorithm you wouldn't even know to ask this question.

So the point is, you don't know what you don't know. Apparently you never needed to implement a "book" algorithm or datastructure by hand in order to get your programming task done. But do you know those tasks couldn't have been done better, faster, more performant, elegant, easier, if only you had the knowledge about the right algorithm for the job? Whether you would end up implementing it by hand or not, it's having that knowledge at the ready, that makes algorithms and data structures knowledge something that distinguishes one programmer from the other.

Then there's a bunch of domain-specific algorithms. Various types of numerical integration methods I've implemented by hand many times because you can almost always do better than naive Euler. And since they're in the inner loop, tangled up with the particular equations of your problem, calling them as a library function is just going to drain performance.
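
To make the Euler point concrete on a toy problem, dy/dt = -y with y(0) = 1 integrated to t = 1 at the same step size (the numbers here are illustrative, not from the comment):

    import math

    def euler(f, y, t, h, steps):
        for _ in range(steps):
            y += h * f(t, y)
            t += h
        return y

    def rk4(f, y, t, h, steps):
        for _ in range(steps):
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h * k1 / 2)
            k3 = f(t + h / 2, y + h * k2 / 2)
            k4 = f(t + h, y + h * k3)
            y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
            t += h
        return y

    f = lambda t, y: -y
    exact = math.exp(-1.0)                    # y(1) for this equation
    print("euler error:", abs(euler(f, 1.0, 0.0, 0.1, 10) - exact))   # ~2e-2
    print("rk4 error:  ", abs(rk4(f, 1.0, 0.0, 0.1, 10) - exact))     # ~1e-7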


CSCI 0911 and CSCI 0912 are real, though they go by CMSC-16100, CMSC-22311, and CS240h (and I'm sure more).

So do I pass CSCI 0555? ;-)


Sorry, CSCI 0123 is required for a passing grade. :) I'm going to get so much hate email from Haskell people.


These are only needed to pass an interview at Google, where you will spend all day writing FizzBuzz algorithms.


    if comment.id % == 5:
        Println('Study computer science, sell advertising')
hired? :D

I'm waiting for a CSCI 0555 student to correct the obvious bug.


I think it's because there's no space in the modulus-equals operator your FizzBuzz Architect wrote instead of writing useful code.


Okay time to upset people:

Dear 'All CS Departments of the World':

Please stop fooling your students in the 'Intro to Algorithms' class, and just rename it to 'Intro to Combinatorial Algorithms'. Because that's what you teach, and you completely ignore the arguably bigger area of 'Intro to Numerical Algorithms'. Numerical algorithms are not only the basis of pretty much every branch of computational science (comp. physics, comp. chemistry, comp. biology, comp. civil engineering, comp. mechanical eng., comp. electrical eng., comp. economics, comp. statistics, and on and on and on), but they're also one of the weakest links for a CS graduate transitioning to Machine Learning and Data Science.

Or you could keep the course title, but upgrade the class, by teaching 50% of it as 'Algorithms and Data Structures', and 50% as 'Numerical Methods and Scientific Computing'.

Unrelated to above but relevant to the thread: Allen Downey and his team at Olin College are doing things worth looking into. (His Google techtalk: https://www.youtube.com/watch?v=iZuhWo0Nv7o)


Woah woah woah, are you suggesting that computer science classes should teach people how to compute things?! That's ridiculous! :D


In Portugal they do.

There is hardly any pure CS, as appears to be the case in the US.

All Informatics degrees are actually a mix of CS and Software Engineering.

For pure theoretical CS one needs to go for a Maths degree with focus on computation (after the third year).

EDIT: Typo the => be


Same in Spain. The degree is 5 years long (rather "was", now Bologna has changed all that) and teaches you all the way from the wire to A.I.


That's what the numerics course is for, at least at my university


I had to get a physics degree to get exposure to such topics.


Same here, but I graduated as an engineer.


We did learn both. But Algorithms was first-year stuff and Numerical Math was .. 3rd or 4th I think.

I agree that numerical computing algorithms are indeed very important topics. I was very excited when I learned about them (then took Numerical Math 2, and finally met my match, was above my ability to learn in one block).

I do think that it was a good idea to teach Algorithms first, though. I don't think I would have appreciated Numerical Math as much if I had taken it in my first year, but Algorithms was pretty accessible, cool, and precisely the sort of thing I had imagined studying Computer Science to be like.


How about "Building an End-to-End Software Solution - 101". Going through the entire development workflow (idea, design, prototype, beta release, delivery, and iteration). One of my good friends is currently going through school, but in the back of my head there is this nagging thought, that he will not learn what's needed for today's market.


> Going through the entire development workflow (idea, design, prototype, beta release, delivery, and iteration).

That sounds like what I was taught in school. Never used it. I'm just glad the waterfall model is dying.

It is kind of funny, me and all of my peers were required to do the whole requirements, design, implement, verification, etc thing. But nobody did, nobody.

Everyone did the same thing: Requirements for one small part, implement, verify, and then write the design document at the end to match the implementation. Repeat.

I'm really glad things like agile are here, not because they're magical solutions to all of life's problems, but because they're inherently compatible with how people ALREADY WORKED. Waterfall felt like a boat anchor, causing teams to get indefinitely stuck in the design phase (or to lie a bunch to effectively skip it entirely). Agile allows you to go away, get it 90% down, and then iterate, iterate, iterate until everyone is happy.

To be honest waterfall always felt like a programming methodology created by people who never programmed. It was like they took the method for building a bridge or a skyscraper, and applied it to software blindly. Agile methods feel like something designed by people who have actually programmed and know how they like to do it.


> I'm just glad the waterfall model is dying

To be fair, it's not.

> To be honest waterfall always felt like a programming methodology created by people who never programmed.

And agile was created by people who don't understand that most organizations need specific software developed on a specific budget.

A majority of companies that I see that say they are doing agile are still doing waterfall.

PS - I'm a huge proponent of agile and the agile manifesto, but it's not a one size fits all methodology.


You're living in a different world than I am. In my world, most organizations don't know the software they need until they figure out what they don't need in UAT.

The exception to the rule is when they're replacing an existing system (or consolidating multiple systems.) Then, they think they know exactly what they need, but then they won't figure out what they need until UAT when the real users finally see the system and point out the business rules that were considered unimportant.

So, why not iterate quickly, developing the software in one of two orders:

* If one portion of the system can be used independently and generate value, why not build that first and get everyone using that?

* If the system cannot provide value independently, build the MVP of the first screen they will use in the workflow and a view-only portion of the second screen? Then get real world users to tell you what they absolutely need before calling it done.

Once the first screen is done to their approval, then you take stock of the budget. You tell them there is a balancing act between cost and scope, and everything flows from there. (Or, you find that the value can't be realized and bail to another project.)


Hopefully this doesn't come off as patronizing, but given you're a programmer (had a look at your profile), then yes we definitely live in different worlds. I work with a variety of people who have very little empathy for developers, but effectively hold the purse strings on technology decisions. These people (which largely outnumber programmers btw) see the world in black and white -- things either work or they don't. Software development to them is a black box.

> The exception to the rule is when they're replacing an existing system (or consolidating multiple systems.)

This is pretty much 90% of the software systems and projects that exist today. Don't let the headlines of TechCrunch fool you. The world outside SV doesn't sit in Google, it sits in terminal screens and mainframes. All that money SAP, IBM and Oracle are making is made by replacing existing systems (whether software or pen and paper). The people making the decisions don't care about MVPs or what screens look like. They want to increase efficiencies aligned with the objectives of their role and not get fired for doing it. When I tell a trader at an asset management company they can't enter incorrect values for a stock trade because we'll need to fix the data quality issue on our backend, they tell me "f--- off" because they'll simply pay their way out of that discrepancy.

So I hear you, but what you're suggesting exists in very few organizations (globally) or in a vacuum.


They're replacing the existing system because the interaction of programmers and users made an unusable system that didn't account for the business needs. Unfortunately, the people in control of the pursestrings don't have the same day-to-day perspective of the people actually using the system. I've seen documented processes be completely divorced from reality. The real challenge in these systems is uncovering the true business, and the true business isn't understood except as a collective.

It is challenging to tell a director that they don't understand their own business, however.

I've worked in a lot of internal IT shops throughout my career, and it is only recently that I've worked on B2C applications. That's why I explicitly say that replacing an existing system requires more iteration, not less.

How many times have you seen a user exploit a bug to get their work done and be angry when it's "fixed?" The real bug is that the business didn't understand their own process.

My job, when I was in internal applications, was to cut through the bullshit and really understand how the business was run. And anyone who thinks they can analyze the problem up front and give a precise estimate for how long it will take is delusional.


> And agile was created by people who don't understand that most organizations need specific software developed on a specific budget.

That's an odd criticism of agile. The whole point of agile is to prioritize so that the important stuff gets done first and does what it's supposed to, so when the money runs out you've got the most important bits working the way they should.

If you've got a fixed budget, you're waterfalling and the money runs out, odds are very high that the software is going to do what the requirements said it should do, but it will be near-useless in real life.


It isn't an odd criticism of Agile. In fact, I think the criticism is that Agile views all projects as things that go on until the money runs out, at which point the user gets whatever the team managed to finish. This ignores the world of projects that have hard requirements and hard deadlines. It also ignores the world of projects where the team needs to coordinate with other software and non-software development teams.

I have not seen a variation of Agile that works well for these situations. The closest I have seen was referred to as "iterative development", where the project laid out a multi-year series of vague milestones and performed three-month mini-waterfall iterations to reach them in series.


Any notion of "money runs out" is rendered useless when you have no idea what a starting budget should be.


> most organizations need specific software developed on a specific budget

And that is perfectly fine. It doesn't mean you cannot be agile. Agile is a matter of breaking things down, choosing what to work on, and verifying progress. If your constraint is a 4-dev team and 6 months, you plan around that. If you cannot make a somewhat realistic roadmap, you either reconsider the constraints or are forced to move forward. In any case, with agile you know your progress compared to the overall roadmap every 1-4 weeks depending on your resolution, which is infinitely better than waterfall, where the managers throw the spec over the wall to the dev team and climb over to ask why it's delayed after the deadline.


> A majority of companies that I see that say they are doing agile are still doing waterfall.

Actually 3 week long mini-waterfalls. :)


It's a methodology from when you had to schedule time for your program to run, with at minimum a 24-72 hour turnaround on the smallest change. In that case it's worth taking extra time to verify things on paper and institute heavy practices to ensure correctness. Being able to just type "exec" in a terminal completely changes the equation.


We have that in our CS program. We call it "Senior Capstone". I'm sure others have it, too.


Same here. We had to design and write a significant real-world application. You could suggest your own idea or take on a project that the profs had found through their connections with the business world. Either way, you had to convince the professors on the committee that it was worthy of receiving a full course's worth of credit (though most people wound up spending far more than that amount of time on it).


We had that at my community college too; we worked along with a government organization to produce software for them. Unfortunately most of our coursework involved writing documentation and doing presentations, so the software side of things wasn't as useful as it should have been.


The software engineering degrees here in Australia (not CS) have you do that a number of times. I wish I had studied it, but B.Sci in Maths/Chemistry and half a law degree was still quite enjoyable.


Oh god. We had that too. Course was called "Software Engineering". It was pretty much the most horrible thing I experienced in CS and for me sealed the deal: I was never ever going to be a software engineer. So glad when it was over and the next block was about actual science again.

In hindsight, I have to admit I did learn a useful thing or two.

(unfortunately those things are still only useful in software engineering)


We definitely have this in our CS curriculum too: working in teams of 10-15 to design, prototype, refine, and ship a product.


"Software Engineering" at Berkeley. The workload is really high, though.


"Software Engineering" is actually offered as a degree at many institutions.. Focused more on the construction of quality software focusing on software design and development processes over how to code and analyze algorithms. I'd almost call it applied cs.


Yeah, at Berkeley the class's intended subject matter is exactly that kind of design and development process best practices, taught by running a semester-long project from design through completion. There's just not a separate degree for it.


CSCI 1200: Debugging

Seriously. This should be a full subject addressed in detail, and early, unto itself. Instead it's typically a sink-or-swim byproduct of trying to pass any other science/engineering class.


Related to debugging and troubleshooting, I would assign Zen and the Art of Motorcycle Maintenance and work through its applicability to software development. Robert Pirsig (who I believe had a genius IQ and worked for a time as an IBM tech writer) hit on many topics that serve developers well. His ideas around values, rationality and gumption traps have served me well.

“The truth knocks on the door and you say, "Go away, I'm looking for the truth," and so it goes away. Puzzling.” ― Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values


It's been probably a good decade and a half since I read that book, and I remember enjoying it, but I don't remember anything being a specific parallel to debugging, necessarily (do you mean how he maintained his motorcycle, I guess?) Anything in particular stick out for you that might jog my memory?


My Pascal lecturer in 1984 recommended 2 books. The official "Findley & Watt" course book and "Zen & The Art of Motorcycle Maintenance". I loved it, but I suspect I was the only one who both bought it and read it.


Udacity has a course on Software Debugging: https://www.udacity.com/course/software-debugging--cs259. I haven't taken it, but it seems to have positive reviews. Have any HNers taken this course?


CSCI 1201: Designing software so that it is easy to test or at least easy to reproduce any bugs in it.


Agreed. Debugging is an art and discipline in and of itself, and one that so many developers lack. In the last couple of years I've taken a great interest in learning to debug efficiently, and yet despite my best efforts most web devs I work with still rely on variations of "printf debugging". A shame really, considering the sheer depth and breadth of tools that are available these days, even in languages like PHP.
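
As a minimal illustration of the alternative, sketched in Python (the PHP equivalent in this thread would be XDebug, discussed below; the function and values are made up):

    def total(prices, discount):
        breakpoint()   # Python 3.7+: pauses here in pdb with a call stack and a REPL
        return sum(prices) * (1 - discount)

    print(total([9.99, 4.50], 0.1))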


I totally agree with "Debugging is an art and discipline in and of itself" for issues that are difficult to reproduce. With a reproduction, debugging should be straightforward for any engineer.

For reproducible issues, it's scientific method plug and chug for me: observe an interesting behavior, theorize about cause, test, rinse, repeat.

Unfortunately, I am too often amazed at the engineers that don't go through this exercise before asking for help.


For those 'hard to reproduce' issues, it's about knowing what debugging and forensics tools are out there and making sure that the systems you deploy support postmortem debugging and issue analysis. Things like creating crash dump files, then giving your field technicians or customers an easy way to get this data to you with an incident report (or even having the system do it automatically if applicable); logging systems that can be left switched on all the time and dynamically reconfigured (so that there's less chance of the damn thing being off when an issue does occur).

Also awareness of things like tools that analyze or try and provoke race conditions, etc.


The reason the 'printf debugging' might be so prevalent is the professors as well. I got told "printf debugging is incredibly useful, so in this class you can only use printf debugging to debug your program" in one of my classes. We also were to compile everything using Make and could only use Vi. It was still a good class, but maybe that's why so many students think they HAVE to debug with printf.


printf debugging isn't the best method, but it's almost universally applicable. I don't always get to pick which part of my stack is broken, so I need to be prepared to debug in any language, and I don't have time to learn all the debuggers (or the patience to use gdb for everything -- I've debugged user space php with gdb... it's much quicker when I can just error_log my way to the source of the problem).


Sure, I can appreciate that. But when you're talking about user-space PHP, why not leverage XDebug? Even using it as a "better" printf debugger, it's infinitely nicer than relying on "var_dump(); die;" -- and coupling it with any IDE or something like Codebug and you get a proper call stack to work with, variable inspection for the entire scope and even a REPL inside your live code at any point in your execution...


This.


Seriously though, it's probably time to separate Computer Science and Software Engineering into different routes of study.


Most schools I know of in Europe do this. Mainly Software Engineering is taught at engineering schools in close connection with other engineering departments and Computer Science is taught at universities in close connection with the math department.


But then the hardcore Computer Science program would only get 10 students enrolled per year! Think of the poor Computer Science researchers that need to come up with new notation for their Pumping Lemmas!


Saint Cloud State is in the process of doing this.


My former university (Manchester, UK) does this.


RIT already has done this.


And CMU, ASU, Penn state, etc.

- signed RIT SE Grad


It's not enough to fill a class, but I really wish school had taught me about linkers, dynamic libraries, and calling conventions. AKA "why must I use this old compiler to link against this old closed-source C++ library?"
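
As a small taste of the loader/calling-convention material such a class could touch on, here is a sketch using Python's ctypes (it assumes a Linux system where libm.so.6 exists; the library name differs on macOS and Windows):

    import ctypes

    libm = ctypes.CDLL("libm.so.6")          # load a shared library at runtime
    libm.cos.argtypes = [ctypes.c_double]    # declare the argument types by hand...
    libm.cos.restype = ctypes.c_double       # ...because ctypes can't read C headers

    print(libm.cos(0.0))   # 1.0

    # Leave restype at its default (int) and you'll typically get garbage back: a
    # cheap way to see why ABIs and calling conventions are worth understanding.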


Really this should have been part of your freshman C course. (Happily it was mine, like right at the beginning alongside Hello World.)


ha, freshman C course. Good one.

My school switched to Java and graduated more than a couple people who don't really get C at all.

Should be taught in OS. Really, it should be taught in a practical sense in an OS lab.


My OS class at least didn't even come close to covering that sort of stuff. Half the semester was on concurrency primitives and System V IPC :(


Same kind of thing here. System calls, concurrency, etc. Nothing at all about linking.


> ha, freshman C course. Good one.

What do you mean by that?


I think he means there was no freshman C course.


It's more the funny idea that C is still taught anywhere as an introductory language instead of the 'easier' Java or Python. (The actual course does exist. https://www.digipen.edu/coursecatalog/#CS120)


>UNIX "ls" tool is a case study in excessive command line switches.

'ls' is a bag of weird (bad?) UX. When first starting on the command line, I could never figure out how grepping the output of ls actually worked. Then I think I looked at some of the code, or someone told me, that when ls outputs, it checks to see where the data is going: if it's STDOUT then it pretty prints columns (and probably other things), otherwise it just does the more sane one line per item.


The isatty(3) library function will let you know if STDOUT is a terminal.

There are two cases when you should definitely take advantage of isatty:

1) Your tool, by default, outputs in color.

2) Your tool, by default, outputs progress, which you keep on a single line by overwriting it (outputting \r).

Both of these are potentially annoying when your command is used in a script, and especially annoying if they are called from cron.

In other cases I agree with you, getting too creative violates the principle of least surprise.
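
For reference, the check itself is tiny; a minimal sketch in Python (the same idea works in any language that exposes isatty):

    import sys

    def progress(i, total):
        if sys.stdout.isatty():
            # Interactive terminal: color and overwrite-in-place are fine.
            sys.stdout.write("\r\033[32m%d/%d\033[0m" % (i, total))
            sys.stdout.flush()
        else:
            # Piped to a file or run from cron: no escapes, no \r, one line per update.
            print("%d/%d" % (i, total))

    for i in range(1, 4):
        progress(i, 3)
    print()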


In the case of 1/color, I respectfully request that all toolmakers leave an option for forcing color even if STDOUT is not a tty, so I can, for example, use less -R on colorized logs and still get element highlighting.


If you get stuck with a tool lacking such an option, you can run it under "script" to trick it. I recall other programs to do similar using a pseudo-TTY but I can't recall their names now (and with a name like "script" it was a miracle I got that one).


I just wish Unix command-line tools would refrain from including terminal escape sequences in their output when invoked with TERM=dumb. Homebrew, wget and youtube-dl, I'm looking at you.

Although some of these tools can be persuaded to stop including escape sequences with the right switch, the right switch varies with the tool, and learning the right switch to use with a particular tool requires reading a large fraction of a very long man page. In contrast, 20 years ago, IIRC all command-line tools I used respected TERM=dumb.

Since isatty() was mentioned, let me point out that directing the output of the tools named above to a named pipe instead of a pseudo-TTY is not enough to get them to stop sending terminal escape sequences.


Early UNIX used "ls | mc", where "mc" was a program to put output in multiple columns. This is in keeping with the UNIX philosophy of one program, one job.


Yes, via the isatty() syscall. ls is hardly the only Unix tool that does this; it's just the most well-known example.


See today's submission https://news.ycombinator.com/item?id=10198353, where the PDF discusses precisely that case and why it's bad.


Here I am using `ls -1` for years without realizing the redundancy.


Don't feel too bad. I didn't know until reading this that sending the output from ls somewhere other than stdout would cause the output to be one-item-per-line either, and I've been using Unix and Linux platforms for a very long time.

If I wanted that behavior, I would have typed 'ls -1' also.


You probably got that into your muscle memory before 'ls' got smarter. I'm sure we all have some habits like that.


ls is bad but I nominate ps as the ultimate basket case.


Yes, I'm a 'ps -ef' guy and my coworker uses 'ps aux'. Forget vim vs emacs, this causes the most arguments.


But aux gives you the %CPU and %MEM columns in addition to what -ef gives you. That's a winner for why I would always opt for using aux.

Often times when I am running ps that's exactly the information I want to see.

I guess you could argue ef's length as an advantage but if you're on a 1080 screen, meh.


I only recently learned that 'ps fax' prints a tree of processes! Not always what I want, but good to find which python processes are the ones I want to kill, and which are not. :)


Whoa! I always use pstree for that.


-ef shows you the parent pid, which aux doesn't. My usage for ps tends to be to get pid information so I can kill processes with it. When a misbehaving parent spawns a lot of children, it's easier to kill the parent.

I use 'w' command if I want CPU info.


I know it is heresy, but what I really want is a switch to make ls more like dir from DOS/Windows.


Unlearning Object Programming might include discussion of the legitimate uses of global variables, and how Singletons are just obfuscated and inconvenient globals.


>discussion of the legitimate uses of global variables

Short course, that one.


Module names are, effectively, global variables. The reason global namespace shouldn't be polluted is that it's so damn useful. Not using it for anything is like never driving your car to avoid wearing out your tires.


>The reason global namespace shouldn't be polluted is that it's so damn useful.

No, the reason is because the more interacting components you stick on it, the harder your code becomes to reason about.

It's more a restatement of the notion that software should be loosely coupled.


You realize that there's a war between people who write static code analysis software and "software architects", the former trying to detect global state buried in object hierarchies and the latter disguising global state as something else in the interest of "decoupling" things that need coupling.

Every class property is effectively global state (has the printer been initialized? what objects of this type have been created?) If you don't think class properties are useful...

Globals are a tool. Used badly they're terrible, but so are design patterns. For a long time many very useful pieces of software ran very well with lots of global state, but the egregious cases of misuse spoiled it for everyone. The problem with the "never use globals" approach is that if you need them, disguising them as something other than what they are makes things FAR, FAR more difficult to reason about.


I agree. For example, the OO project that I'm working with at my job has a bunch of global state, but it's hidden in references to structs and pops up in a million different places throughout the system. It's still spaghetti code but it's spaghetti code with namespaces.

The point of OO isn't to separate or eliminate global state itself. At some level everything is global(agreeing with you). The purpose of OO is to establish a strict separation of concerns. The idea is to say "We need a component of our system to deal with the Blargle operation," and then determining what elements of the system the Blargle functionality actually needs in order to be performed, then making sure that Blargle is aware of only those parts of the system. Additionally it means that system components that existed before Blargle was implemented are still blissfully unaware that Blargle is a thing now.

All of the features of OO languages, like encapsulation, inheritance, design patterns, and messaging are there to allow you to write software where unrelated components can be implemented and reasoned about entirely in isolation. You can do this in any computer language, and systems like the Linux kernel are good examples of separation of concerns that don't happen to use an OO language. You can also do this with global variables, but it takes more discipline.


There is some state that is global. Current user for instance, it is used globally, so why create a complex way of passing it around instead of just making it global?


I once worked on a system that had more than one user logged in at the same time, and my global variable got soo confused.


And then that is not a global state, my point is that there is global state, and trying to go through all kinds of academic exercises in "proper software design" is unneeded complexity. It's a tool, don't use it for the wrong job, but don't dismiss it just because other people use it the wrong way.


And your singleton user class worked better because?


One reason is that, if you have to pass that information to functions which use that information, you can easily see which functions rely on this global state, and which don't. The usefulness of this and the annoyance of passing it manually depends on the situation, of course -- read-only "state" like the current user, and write-only state like some global logging interface, is the sort where one function's use of the state doesn't materially affect another function's behavior, so it's more OK to access directly than other state. (Another example: the database connection configuration info might be unchanging and read-only, but it's real useful to pass it around manually, because then you see exactly which callees are opening database connections.)
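
Purely as an illustration of the "you can see who uses it" point, with made-up names:

    DB_CONFIG = {"host": "localhost", "db": "app"}   # hypothetical read-only config

    def load_rows(db_config):        # signature says: this one talks to the database
        print("connecting to %s/%s" % (db_config["host"], db_config["db"]))
        return ["row1", "row2"]

    def build_report(rows):          # signature says: this one doesn't
        return "report with %d rows" % len(rows)

    print(build_report(load_rows(DB_CONFIG)))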


I think I did "unlearning object oriented programming" in university - it was called "programming in matlab with other applied mathematicians".


That’s more like “being reduced to tears” than “unlearning” per se.


How To Prevent Scope Creep Without Alienating Your Boss Or Other Important People


In my experience it's not that bad, but you have to speak in their language. An agreed spec and deadline is a contract, and now they want to get more for the same price. I'm sure they wouldn't let a customer do that, so they understand that if they want to change the content of the contract, they have to negotiate.

That of course requires that you have a sensible manager/boss who doesn't force it his way.


That's a ph.d.-only "special topics seminar", still an unsolved problem under active research


How about the following?

1. Version control systems and how to use them always
2. Practical introduction to sql and nosql databases
3. Database schema design along with practical projects
4. Introduction to HTTP
5. Introduction to Web frameworks
6. Introduction to mobile development
7. Mobile app design basics
8. How to clear job interviews


There's theory and practice.

When I started with computers in the dark ages (mid 1980s), there might have been RCS and SCCS... Yes, SCCS actually dates to 1972. RCS from a decade later. CVS ... woah, that is also an 80s kid: 1986, wasn't something I encountered until later. Ah, initial release 1990. Subversion came along and changed everything in 2000, mercurial in 2005. I remember some heated Hg, arch, and Git debates for a while.

Unfortunately, I've got (bad) habits from most of these stuck to different layers of my brain....

Databases may be going through similar revision (though you'll still get a fair bit of mileage from 5NF).

HTTP may or may not be due for major revisions given variously:

1. Calls for distributed Web.

2. Issues with presentation and form factor.

3. Conflicts between document and application presentation.

4. Privacy and security.

Web frameworks change too quickly to be worth teaching. Best learn general UI principles. Ditto mobile apps.


See, it is one thing that the tool you have been taught becomes obsolete and quite another that you are not even familiar with the concept of HTTP, VCS, and SQL. It is far easier for somebody who knows MySQL to understand and come up to speed on Postgres/MongoDB than for somebody who is not at all exposed to these concepts.

Moreover, course curricula need to be updated every 2-3 years when such courses are taught, to keep them relevant.


Actually, in my case I'm not formally educated in compsci. A couple of programming courses but picked up what I know on the job. It's fairly effective.


I got almost all these things at my school, shame a lot of the teachers were really bad at teaching and almost all of us learned everything from google... You could argue that if the curriculum didn't include these things we wouldn't have googled the stuff either, so I suppose my education wasn't all bad


> Includes detailed study of knee-jerk criticism when exposed to unfamiliar systems.

This hits home.


CSCI 4035: Refactor Your Own Term Projects from Past Semesters


My alma mater seems to be doing that this year in their Design Patterns course. The task is to find a large code base that desperately needs refactoring. I let a young friend use my old Final Year Project for this. It's scary how bad my code was when I graduated (never took Design Patterns), but at the same time, inspiring to see how far I have come since then.


I had one course where one of the major projects was to add a significant feature to the major project you wrote for the last course.


That's a rather brilliant idea. Including code review from your classmates.


I think another course should be CSCI 9000: How not to be an opinionated jerk.


I think if you force CS students to take a few business courses (or MIS courses), it can help dissolve the 'opinionated jerk' [elitist] stereotype that can come from CS students. They see that software development is only a piece of the puzzle and while they'll never have to worry about handling insurance, someone who didn't spend classes learning code has learned this task for them.


You must have had different business classes.

In the one class I had with business majors, I was completely underwhelmed by their capabilities. I was amazed that an upper-division course could contain such atrocious writing and thinking skills.


It's actually more along the lines of: I currently am teaching the 'Intro to IT' course required for all business majors. In one light, the course is about making the rest of business understand what IT does; however, from an IT student perspective, it should also let them see the other side.

Each week I'm mulling over how to connect with both sides of the table, since I have both sides in my classroom.


Where are you gonna find faculty to teach that?


Maybe just a CS program staffed entirely by people who've had real CS jobs outside of the academia bubble.


You mean software engineering jobs?


How about "Programmer psychology: recognising and dealing with your own dogmatism through to diplomatically dealing with the dogmatism of other programmers."


If you can't profile it, it doesn't exist.


CSCI 2020: How to Store Data

Discover the fascinating differences between fflush, fsync, and data actually hitting disk. Learn why single-byte writes are a dumb idea, and why accessing data through a cheap one-gigabit switch might not be as fast as accessing it locally. As preparation for being actual storage developers, students will be blamed for the professor's own mistakes, and for every problem that happens anywhere else in the system.
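
In Python terms, the fflush/fsync distinction the blurb is getting at looks roughly like this (the file name is hypothetical):

    import os

    with open("journal.log", "a") as f:
        f.write("record\n")        # sits in the process's userspace buffer
        f.flush()                  # handed to the OS page cache; visible to other processes
        os.fsync(f.fileno())       # asked the OS to push it to the device
    # Even then, the drive's own write cache may still hold the data, and doing this
    # one tiny record at a time is the "single-byte writes" mistake from the blurb.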


I am interested in learning this. Are there any books/study material available that you recommend?


I am a software engineer; I have a grad school degree in CS and many years of experience in the industry. One thing I often noticed was that many people break some key computer science concepts in pursuit of minor usability improvements or some other minor goal such as code reuse.

There was one instance where, in a multiprocess environment, one of the engineers wrote a library to access a database table and read its contents, and in the process maintained a cache of it in memory. Then somebody else decided to use that convenience library in a different process, and so on; essentially, cache coherence problems were left unsolved. When I brought this to the attention of people, they shrugged it off saying that when they see a problem they might handle it, which may be fine, but it seemed like some key computer science concepts were overlooked in the pursuit of software reuse (here, reuse of the existing accessor/cache code for the database).

I have seen design choices that cause race conditions or non-deterministic behavior, following some patterns, or cargo cult programming because something is an industry trend, etc.


I love "Unlearning Object-Oriented Programming".


I've seen so much criticism of OOP, can somebody provide more details/valuable links why "OOP is bad"? Honest question.


I'm a fan of Yegge's colorful and fun to read narrative explaining the issue: http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...

Essentially, OO forces you to think of everything as an object. That pattern is valuable at times, but the way you think about something and the patterns you're familiar with intrinsically shape your approach and how you can solve it which can be a bad thing.

OO is not bad; it's merely bad to only use OO, to teach it as the only methodology in schools, to use it to the exclusion of all else. Like most things, OO is just one of many options and is not always the best one.


Interesting interpretation. I liked this quote simply because it struck me as peculiar yet completely true:

> the Functional Kingdoms must look with disdain upon each other, and make mutual war when they have nothing better to do.


The class-driven polymorphism/encapsulation/inheritance model of OO which is sugar over imperative programming has often been criticized: http://harmful.cat-v.org/software/OO_programming/

OO as the idea of late bound, dynamically dispatched objects that receive messages, retain state, maintain access and location transparency and optionally delegate to other objects... that's not really controversial, but also rarely done. Nevertheless, most great research operating systems have been object-oriented, and this is no coincidence.


> that's not really controversial

This is the most controversial part of it. I've got no problems with classes, objects, imperative methods, dynamic dispatch and all that - these are just semantic building blocks, sometimes useful.

What is really damaging is this very notion of representing the real world problems in their infinitely diverse complexity as something so primitive and narrow as communicating objects.


They're not primitive and narrow, that's the point. They're blobs of state that can masquerade under any interface - files, devices, windows or whatnot, all while having a common primacy so they are easily introspected, cloned and extended. In the Spring OS, for instance, you have the ability to compose or create entirely new names from existing primitives like environment variables, sockets, files, etc. by operations equivalent to set manipulation.


> that can masquerade under any interface - files, devices, windows or whatnot

Why are you sledgehammering all these entities into this primitive and narrow way of thinking in the first place?

They are different. And in order for different entities to have some common properties you don't have to think of them as a hierarchical relation of communicating objects.


Treat them too differently without any uniformity, and you get a hodge-podge system of names that can barely interact.

An object here is just a blob of state that enforces a primal uniformity. Files still look, walk and quack like files, they're just represented under a common construct underneath which is immensely useful for the programmer and irrelevant to the end user.


OO is not the only robust way of representing uniform interfaces. OO preachers must stop claiming ownership of concepts that existed long before OO, and that are much better represented outside of the OO blindfolded way of thinking.


I'm not an OO preacher, I'm just making the observation that most research systems designed for uniformity were object-oriented. Otherwise when it comes to languages, I currently mostly use ones based on functions and pattern matching.

You can't really speak of the "OO blindfolded way of thinking" when all you do is contradict without any substance.


Unix is fairly uniform, and not object-oriented. Plan9 is even more uniform and still not object-oriented. Lisp machines did not feature much OO either, and yet had a very unified interface to all the OS entities.

Other than that, OS research is nearly dead anyway, nothing interesting happens, besides probably Singularity, which is also not very OO - it does not play well alongside with static analysis anyway.


> Unix is fairly uniform

Hahaha.

No.

Compared to Windows, maybe, but in the grand scheme of things it is not. Hence so many research systems like Amoeba, Spring, Sprite, SPIN and even GNU Hurd that tried to create a general overarching metaphor across Unix's not-quite-uniformity.

> Plan9 is even more uniform and still not object-oriented.

9P is a transport-agnostic object protocol for representing resources as synthetic file systems. The trick many have with Plan 9 is that they import their prior knowledge of what a "file" is, but in reality Plan 9's concept of a file is quite different from other systems. This is right down to the Plan 9 kernel being a multiplexer for I/O over 9P.

Either way, my point is the "object" metaphor isn't really all that specific. Personally I'd love to see work on functional operating systems, but other than the boring House system, I don't think there's been that many.


> Either way, my point is the "object" metaphor isn't really all that specific

I still don't see how Plan9 is built on "objects".

> functional operating systems

I doubt functional metaphors are of any use in this domain.

My point is that the abstraction continuum is far more diverse and deep than something as stupid as communicating objects or reducing lambda terms. My bet is on a linguistic abstraction, which covers interface unification as well as many other things.


In real-world business logic and modeling, you have to work really hard to make the problem fit an OO paradigm. It also gets hard to manage long term, as your data structures end up stuck with whatever restrictions the OO design imposed on them. JavaScript object literals are fantastic because they keep data transfer objects simple, fast and unpolluted by business logic, and once you get used to passing DTOs around that way, a functional style of programming becomes more natural.

For the most part, programming is taking data from one domain model and converting it to another; OO doesn't really help there, and if anything is a hindrance. Unfortunately, software engineers love to abstract to the nth degree, so sometimes you get stuck with tight OO models of DTOs and transformation layers.
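A rough sketch of what I mean, in TypeScript just for the type annotations (OrderRow, InvoiceLine and the field names are all made up for illustration):

    // A DTO is just a plain object literal - no class, no behaviour attached.
    interface OrderRow { id: string; customer: string; cents: number }
    interface InvoiceLine { orderId: string; amountUsd: number }

    // Converting one domain model into another is a plain function over plain data.
    const toInvoiceLines = (rows: OrderRow[]): InvoiceLine[] =>
      rows
        .filter(r => r.cents > 0)
        .map(r => ({ orderId: r.id, amountUsd: r.cents / 100 }));

    // No object graph to construct, no transformation-layer classes.
    const lines = toInvoiceLines([{ id: "o1", customer: "acme", cents: 1250 }]);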


I think one of Rich Hickey's quotes expands on your point:

It has always been an unfortunate characteristic of using classes for application domain information that it resulted in information being hidden behind class-specific micro-languages, e.g. even the seemingly harmless employee.getName() is a custom interface to data. Putting information in such classes is a problem, much like having every book being written in a different language would be a problem. You can no longer take a generic approach to information processing. This results in an explosion of needless specificity, and a dearth of reuse.

This is why Clojure has always encouraged putting such information in maps, and that advice doesn't change with datatypes. By using defrecord you get generically manipulable information, plus the added benefits of type-driven polymorphism, and the structural efficiencies of fields. OTOH, it makes no sense for a datatype that defines a collection like vector to have a default implementation of map, thus deftype is suitable for defining such programming constructs.
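Translated loosely out of Clojure into TypeScript (the Employee class and its fields are just illustrative), the contrast looks something like this:

    // Class-specific micro-language: every field gets its own bespoke accessor.
    class Employee {
      constructor(private name: string, private dept: string) {}
      getName() { return this.name; }
      getDept() { return this.dept; }
    }

    // Generic information: a plain record, open to any generic tool.
    type EmployeeRec = { name: string; dept: string };
    const e: EmployeeRec = { name: "Ada", dept: "Eng" };

    // Generic operations just work: serialisation, merging, projection...
    const asJson = JSON.stringify(e);
    const promoted = { ...e, dept: "Research" };
    const fields = Object.keys(e);   // ["name", "dept"]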


I think the most dangerous advice I got was to avoid DTOs and similarly domain-behaviour-free object graphs. That corner of the .NET world at the time called such a model an 'anaemic domain'. Instead, we were encouraged to model problems with objects for every noun, methods for every verb, arguments for every adjective, and all the code dealing with them attached to the class.

It worked fine while our domain remained small, but got ghastly quickly.


It's a way to keep doing all of the bad habits you learned from completely unstructured procedural code, including global state, but in a form that is marginally more encapsulated, and therefore more likely to last longer than six months in the field - it might even survive maintenance.

It adds to this all of the extra oddness that object hierarchies bring, especially the oddness related to trying to force non-hierarchical concepts into a hierarchy. Meditate on whether an ellipse is a special case of a circle or vice-versa until you reach enlightenment, which is realizing that the question is stupid and that any situation which forces you to answer it is stupid.
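For the record, here is roughly how that trap plays out once mutation is involved; a sketch in TypeScript with the usual textbook names:

    class Ellipse {
      constructor(public width: number, public height: number) {}
      stretch(w: number) { this.width = w; }   // any ellipse can be stretched freely
    }

    // "A circle is just an ellipse with width == height"... until someone stretches it.
    class Circle extends Ellipse {
      constructor(d: number) { super(d, d); }
    }

    const c: Ellipse = new Circle(10);
    c.stretch(20);   // perfectly legal for an Ellipse, but c is no longer a circle

    // Flip the hierarchy (Ellipse extends Circle) and it breaks the other way:
    // a circle with a single radius can't honestly carry two independent axes.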


I really just feel it's because most people who use OOP end up increasing the complexity of their code and have no idea how to use it to make things simpler, safer and more reliable.

It was like lightning had struck when I realized that if you did OOP in a certain way you could guarantee that some mistakes would be impossible to make - and that whatever I had been doing for the past several years was not even remotely OOP, even though everything was in a class.
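One hedged sketch of what I mean, in TypeScript (EmailAddress is just a made-up example): keep the constructor private and only hand out values that already satisfy the invariant.

    // Invariant: an EmailAddress can only ever hold a string containing "@".
    class EmailAddress {
      private constructor(readonly value: string) {}

      static parse(raw: string): EmailAddress | null {
        return raw.includes("@") ? new EmailAddress(raw) : null;
      }
    }

    // Code that receives an EmailAddress never has to re-check it; passing an
    // unvalidated string where one is expected simply doesn't compile.
    function sendWelcome(to: EmailAddress): void { console.log(`mail to ${to.value}`); }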

The reason I'm interested in learning functional programming is that it too seems to hold the promise of making whole classes of errors impossible, if done well - perhaps in a way that isn't mutually exclusive with OOP.


Over the years I've come to believe that OOP is primarily an elaborate mechanism for keeping the lunatic in the next cubicle from shooting himself in your foot. Assuming the overall architecture is relatively sane, the lunatic can safely do whatever weird nonsense he likes in the privacy of his own objects.

If all the developers on your team are sane, then OOP is just a bunch of unnecessarily verbose conceptual complexity.

(The unfortunate corollary, of course, is that the code from the lunatic in the next cubicle is just as likely to be your own code from six months ago.)


Like all other wildly successful software paradigms, OOP is now out of favor.


There have been successful languages (from a popularity standpoint) that are regarded as OOP, but the success of OOP as a paradigm is highly debatable.


Huh? Imperative programming is still the dominant programming paradigm, as it's been since the first programmable computers.


This is the video that really made me cut down on the use of OO in my own code.

http://pyvideo.org/video/880/stop-writing-classes

It's about Python but applicable to most languages.


It tends to be excessively verbose and often requires naming throwaway temporary objects when solving stupidly simple problems like pipelines and data aggregation. Inheritance is often not the best way to compose properties. People who learn only OOP tend not to see the obvious value and convenience of first-class functions.
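A hedged sketch of the contrast, in TypeScript (both names are made up):

    // Class-shaped version: a named class and a throwaway object for a one-off job.
    class LineLengthAggregator {
      private total = 0;
      accept(line: string): void { this.total += line.length; }
      result(): number { return this.total; }
    }

    // With first-class functions, the pipeline is just... a pipeline.
    const totalLength = (lines: string[]): number =>
      lines
        .map(l => l.length)
        .reduce((sum, n) => sum + n, 0);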


OOP is about large scale, and FP is about small scale. They work best together.


> OOP is about large scale, and FP is about small scale

Sorry, but this is an extremely audacious claim to make without backing it up.

I also think "FP is about small scale" is quite wrong, looking at empirical evidence.


If you are interested in this topic, I recommend reading Martin Odersky's papers on Scala design.


Maybe you can just let it go and take it as an "opinion", or as something based on years of experience? I doubt you will find a journal article laying the foundations you desire.


Modularity is about the large scale. OOP should not hijack exclusive rights to tools introduced long before it became popular.

I'd always prefer the SML module system to anything OOP for a project of any scale.


> I'd always prefer the SML module system to anything OOP for a project of any scale.

There's a problem: I don't know of any project implemented in SML at scale (I mean at least several MLOC).


OK, take Ada modules instead, then. Ada has been used for many MLOC-scale projects.


I don't have experience with Ada, but I suspect all its cool modular features can be expressed in Java (and certainly in Scala).

Could you give examples of features available in Ada that you would like to have in modern OO languages?


I am not talking about any "cool features" in particular. My point is that for large systems you need a decent module system and namespaces, not OOP.

The fact that most statically typed OO languages also provide these features does not mean that OO is what makes them scale - modularity is totally orthogonal to OOP.

This is a fallacy similar to attributing pattern matching and ADTs to functional programming, when those features are not inherently functional in any way.
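For instance, a sketch in TypeScript - hardly anyone's idea of a functional language - where a discriminated union plays the role of an ADT and an exhaustive switch plays the role of pattern matching (the Shape type is just illustrative):

    // An ADT as a discriminated union...
    type Shape =
      | { kind: "circle"; radius: number }
      | { kind: "rect"; w: number; h: number };

    // ...and "pattern matching" as an exhaustive switch on the tag.
    function area(s: Shape): number {
      switch (s.kind) {
        case "circle": return Math.PI * s.radius ** 2;
        case "rect":   return s.w * s.h;
      }
    }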

Try to build a large system in a dynamically typed OO language to see why OO per se is of no use for large-scale architecture at all.


I think you are making a mistake here. You shouldn't treat objects as objects, but as instances of modules. If you see it that way, everything looks much better in OO languages.


Why do I need OO in the first place, if the only value it offers is a poor man's substitute for modules? I'd rather use proper modules instead, with generic (i.e., inherently anti-OO) features.
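What I mean by a proper module, sketched very roughly in TypeScript terms (not SML or Ada, just the closest thing at hand; Queue is a made-up example): an exported generic type plus plain functions over it, with no objects in sight.

    // queue.ts - a "module" in the plain sense: an exported generic type plus
    // plain functions over it. It namespaces and composes without a single
    // class or object hierarchy.
    export type Queue<T> = readonly T[];

    export const empty = <T>(): Queue<T> => [];
    export const push = <T>(q: Queue<T>, x: T): Queue<T> => [...q, x];
    export const pop = <T>(q: Queue<T>): [T | undefined, Queue<T>] => [q[0], q.slice(1)];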



These are mostly software engineering, not CompSci.


A bachelor's is for real-world software engineering; a master's or PhD is where the serious CompSci should happen, IMO.

(I say this as someone with only a bachelor's.)


Which is, sadly, too often lacking at many universities.


I suppose it's a philosophical argument more than anything, but I don't think it belongs in universities. I don't think universities should be career schools, although I can see why others might think they should be, since most people go to school as a prerequisite to getting a job nowadays.


These aren't even software engineering classes.



COMP000 Your Job Isn't Only Fun Shit


This would more aptly be titled 'Software Engineering Courses That Don't Exist, but Should.'


These would make sense if the goal of universities were to produce workers rather than scientists.


CSCI 0201: Principles of Version Control


So what do you teach for the rest of the semester after the first week?

That's the problem with teaching these things in university. They just aren't worth the time and money to teach at that level.


How about "How does development work in practice" that covers version control, debugging, documentation, dependencies, packaging. After that any project (all courses) involving more than 100 lines of code MUST be handed in as a repository that actually contains the development history, not a single-commit dump. Also installable as a proper language-specific package, rather than "put the path to the project in those two places, then run this script".


In each of the degree courses I have done, there have been a few sessions on general scholarship, essay-writing, etc. These are not actual modules, parts thereof, or even evaluated, but just part of the furniture of the whole course.

What you describe sounds like the kind of content that belongs in such a session for degrees that are likely to lead on to working in software development.

This gives rise to a further thought. Rather than this even being part of a degree course, it should be something run by a university's Careers Service. A series of "So you wanna be a ..." sessions for common graduate careers, that train students on some of the day-to-day practicalities of what those jobs entail.


No, it still sounds like a massive waste of time and money.

Just say that you accept assignments via version control. Done, learn it or fail. You don't need a course in it.


Reminds me of a question a help desk person asked me about 20 years ago. They were having all kinds of problems printing Word docs from a new Windows 95 PC. I was busy at the time, so rather than educate them on the nature of WYSIWYG, I just told them to update/repair the graphics driver. It worked and they thought I was a god.

It's hard to create a course on the interconnections of everything. The best course I ever took was a Cisco IOS programming course. That included protocol analysis, debugging routers, switches, etc. I'm not a network-layer programmer, but what I learned about how everything works together has helped me solve complex problems ranging from Kerberos authentication issues to finding duplicate DHCP servers on the same network. Without this knowledge, I would have to hire experts and waste a ton of time and money to get to the end goal, which is a functioning product.


How about a "class" or some pre session on best practices of typing on the keyboard in the CS departments?

For year, I'm still struggling to type without looking at the keyboard and I make frequent typos which slow me down. Being able to type quickly and have a good habit of placing fingers on the keyboard seem to be an important exercise. A guideline to choose the right keyboard, setting up personal shortcut or customized keyboard layout for each developer might also be worth learning.


Interestingly enough, this semester I've begun including images of program code that I force my students to type out, specifically to give them typing exercises. There are no errors, and they're variants of material from the books, so they can't copy and paste from a PDF. My logic is that while CS is mental, typing the code is still a physical activity we rely on. Like any other physical activity, the best way to get better is to drill the motions until they become second nature.

While there ARE typing tutors, I haven't found any programming-specific ones. A normal typing tutor is nice, but since code syntax requires so many symbols, typing English sentences only helps so much. I also think (still in the 'physical activity' mindset) that developers can use these typing exercises as warm-up exercises. Just like in a sport, you stretch and warm up so you're loose and ready. Typing out basic for loops and functions can help in the same way.


Have you checked out typing.io [0]?

[0] https://typing.io/


I have, and I hope they succeed ('cause I'm building something similar!). I do think drills might be helpful from a student's perspective, since students are still learning proper syntax. From a professional's perspective, typing arbitrary code can help if you work with niche libraries (like unit-testing frameworks).


There are so many online typing tutors that can help you with this: https://www.google.co.in/search?q=typing%20tutor


Usually those exist.


Automotive Metaphors and analogies for programmers

Seriously, why does every analogy have to be a car? Is it just the most complex system we can model in our heads?


Hey, sometimes it's a recipe, and sometimes it's a telephone system. :-)


Where I come from, Software Engineering and Computer Science are two different paths.


I'd like to suggest a course in how to argue with management about system or vendor choices that you know will affect both future development time and user experience within your company.


Absolutely. I spent quite a few years on the other side, implementing systems for a vendor. We rarely had technical problems, but we usually ran into some kind of management failure, from one side or the other.


CSCI 3350: Debugging - Beyond Print

CSCI 4600: When Big-O Complexity Becomes a Lie and Why


CSCI 2550: Software Complexity and the Art of Keeping It Simple.


CSCI 099: How to apply what you learned in comp sci.


I would take that classical programs class in a heartbeat. That seems like something that could be organized online somewhere.


No disagreements, except that I might rename "Classical Software Studies" to "Software Archeology".


Classical software studies? What about System/370, CP/CMS, IMS, CICS? There is so much that has been forgotten.


CSCI 0001 'RTFM & Learning simple facts yourself outside a CS course'


"I'm so cool I'm going against the crowd" bullshit.


How to program by listening to your intuitions, not logically.


CSCI 3300 kind of exists in human-computer interaction.


Another course that should exist: network security for everyone.


How about:

- Mastering Git (/competitors)

- Refactoring

- Project Management

- History of consumer information technology


CSCI 2100: Learning how to hate on OOP

CSCI 3300: Stuff no one cares about anymore

CSCI 4020: Pretending to write fast code in slow languages

CSCI 2170: I know about command line tools !1!!@ So h4x0r

PSYC 4410: Trivia

Seriously, this article is garbage


this is so good



