As someone who has spent a good chunk of their career building high-performance networks and another chunk working closely with developers who care a lot about those networks, I'm constantly amazed at how little top-tier graduates from well-respected schools know about networks.
Relatively basic stuff, like what your application's traffic looks like on the wire, how an OS determines what to do with a packet, and how latency affects applications, is completely foreign to way too many developers. I could go on and on, but given the connected nature of things today, it seems like a very overlooked area.
CMSC 23300. Networks and Distributed Systems. 100 Units.
This course focuses on the principles and techniques used in the development of networked and distributed software. Topics include programming with sockets; concurrent programming; data link layer (Ethernet, packet switching, etc.); internet and routing protocols (UDP, TCP); and other commonly used network protocols and techniques. This is a project-oriented course in which students are required to develop software in C on a UNIX environment.
The 3 major C programming assignments last year: an IRC server, a TCP stack, and a router.
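For anyone who hasn't seen socket code, here's a minimal sketch of where such a course starts: a bare-bones TCP client in C. The address and port are placeholders and error handling is mostly trimmed; this is an illustration, not the course's actual starter code.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);      /* TCP socket */
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(6667);                   /* placeholder: IRC port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0)
            return 1;
        const char *msg = "NICK hn\r\n";
        write(fd, msg, strlen(msg));                   /* TCP sends bytes, not messages */
        char buf[512];
        ssize_t n = read(fd, buf, sizeof buf - 1);     /* may return a partial line */
        if (n > 0) { buf[n] = '\0'; printf("%s", buf); }
        close(fd);
        return 0;
    }

The lesson hiding in that last read() call (partial reads, because TCP is a byte stream rather than a message stream) is exactly the kind of thing those three assignments drill in.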
Best course I've ever taken.
I still don't have a "low level" understanding of the network I use.
And I agree with other posters that this isn't something new. That's the separation between senior and junior specialists (and no, if one doesn't know what network latency is, one is not a senior frontend engineer).
What's weird is watching people play with stuff like Raspberry Pi and they expect things to just apt-get and launch like it's always been done for them. And most of the time it does just work that way. Except when it doesn't.
Jerry committed: +25% more time spent in function xy, due to cache unfriendliness...
If you see it, and your colleagues see it, it becomes an issue. If it becomes an issue, the perpetrator will look into it.
As far as I'm concerned, if you're not removing the atomic instructions to make sure the breakage you expect actually happens, you're not really testing your code.
Also a little terrifying to realize just how much incoming and outgoing traffic all the things on your computer are generating all the time.
It's a classic.
I actually did the proper four-semester Cisco academy course at night school a while back, which gives you real hands-on experience. One of the tests is that the instructor breaks a system and you have to fix it.
Even with the CCNA course, you're still not going to know about stuff like security and firewalls on a deep level.
But get the textbooks and read through them.
Our future software engineers need to treat security as a fundamental concern of software engineering and not as an 'add-on'.
Not only does it prevent them from dealing with or writing such systems, but it's also a major contributor to clueless security FUD. It's not that there's nothing to be afraid of... it's that if you don't understand how networks work you are probably afraid of the wrong things.
The problem is getting practical experience in modern techniques.
CSCI 0123: Implementing Monads in COBOL
CSCI 0555: Successfully correcting internet commenters.
CSCI 0556: Nitpicking points made in CSCI 0555.
CSCI 0777: Introduction to twitter bot polynomial time algorithms with an emphasis on winning shitty contests.
CSCI 0911: Learning functional programming with Haskell.
CSCI 0912: How to come up with uses for knowledge acquired in CSCI 0911.
Not of much practical use later in life, but an A in this course guarantees a technical interview pass with any major Silicon Valley company.
- Converting a dollar/cents value into text, e.g. 67.33 -> "sixty-seven dollars and thirty-three cents", in XSLT of all things. I wrote the first version in Emacs Lisp, then translated that to XSLT. I was working on manually translating a bunch of .doc files to a new layout for the automatic generation of contracts, and the vendor had supplied a version that, after reading its source code, I realized was broken on certain inputs, like 10001 or something. I guess it was made by some people who thought they'd never need to write algorithms when working as software developers.
- I was asked to combine the information in a few spreadsheets with several thousand rows into a new one. I ended up needing to compute some statistics on certain ranges of a set of rows, so in my short Perl program I sorted the data by the relevant key and used binary search to find the ranges.
That's not a list for my whole career; that's just what I can remember from my first summer internship as a data entry/IT odd-job monkey when I was a teenager.
But there are a couple of more generic algorithms that I occasionally can't really help but implement in some variation or other, usually because the "book" version of the algorithm that's already in the standard library of your language of choice doesn't quite fit your problem. Many classic algorithms are simple enough that it's easier to code them for your problem than to write an adapter for the standard library version.
For instance, binary search. It's a tool. I prefer not to implement it by hand (less chance of bugs) if I know there's a library call that does it fast, but if need be, it's easy enough.
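To make "easy enough" concrete, here's a sketch of the lower-bound variant (index of the first element >= key), which is exactly the flavor that C's bsearch() doesn't hand you. The name lower_bound is just illustrative.

    #include <stddef.h>

    /* Index of the first element >= key, or n if there is none.
       bsearch() only returns *some* matching element, so for finding
       ranges of equal keys you end up writing this yourself. */
    size_t lower_bound(const int *a, size_t n, int key) {
        size_t lo = 0, hi = n;                 /* answer lies in [lo, hi] */
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;   /* avoids overflow */
            if (a[mid] < key)
                lo = mid + 1;                  /* too small: answer is right of mid */
            else
                hi = mid;                      /* a[mid] >= key: mid may be the answer */
        }
        return lo;
    }

Call it twice, with key and key+1, and you get the half-open range of all equal keys, which is the spreadsheet trick mentioned above.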
Sometimes you may not realize you're implementing a classic algorithm, say when writing certain recursive functions that are actually equivalent to a tree-traversal algorithm on a data structure.
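A trivial sketch of what I mean: a recursive sum over a nested (left-child/right-sibling) structure is a depth-first tree traversal, whether or not you ever call it that. The struct layout here is hypothetical.

    /* Summing a directory-like tree is just depth-first traversal. */
    struct node {
        long size;
        struct node *child;    /* first child */
        struct node *sibling;  /* next sibling */
    };

    long total_size(const struct node *n) {
        if (!n) return 0;
        return n->size
             + total_size(n->child)     /* descend into children */
             + total_size(n->sibling);  /* then continue along this level */
    }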
Sometimes you do realize, coding up a class definition for something, and then you notice "hey these fields actually make this data sort of equivalent to a doubly-linked list". That's when your algorithms-knowledge comes in handy, because it allows you to stop and consider, is a doubly-linked list really the most suitable datastructure for this problem? Do I know other clever "book" algorithms that fulfill this need? Are they better? More performant? Without the knowledge of the first algorithm you wouldn't even know to ask this question.
So the point is, you don't know what you don't know. Apparently you never needed to implement a "book" algorithm or data structure by hand in order to get your programming task done. But do you know whether those tasks couldn't have been done better, faster, more elegantly, or more easily if only you had known the right algorithm for the job? Whether you would end up implementing it by hand or not, it's having that knowledge at the ready that makes algorithms and data structures knowledge something that distinguishes one programmer from another.
Then there's a bunch of domain-specific algorithms. I've implemented various types of numerical integration methods by hand many times, because you can almost always do better than naive Euler. And since they're in the inner loop, tangled up with the particular equations of your problem, calling them as a library function is just going to drain performance.
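For illustration, here's one Euler step next to one classical RK4 step for dy/dt = f(t, y). A sketch only, but it shows why "better than naive Euler" is cheap: three extra evaluations of f per step buy you fourth-order accuracy instead of first-order.

    /* One integration step of dy/dt = f(t, y): Euler vs classical RK4. */
    typedef double (*ode_fn)(double t, double y);

    double euler_step(ode_fn f, double t, double y, double h) {
        return y + h * f(t, y);                          /* first-order accurate */
    }

    double rk4_step(ode_fn f, double t, double y, double h) {
        double k1 = f(t, y);                             /* slope at the start */
        double k2 = f(t + h / 2, y + h / 2 * k1);        /* slope at midpoint, via k1 */
        double k3 = f(t + h / 2, y + h / 2 * k2);        /* midpoint again, via k2 */
        double k4 = f(t + h, y + h * k3);                /* slope at the end */
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4);  /* fourth-order accurate */
    }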
So do I pass CSCI 0555? ;-)
if comment.id % == 5:
Println('Study computer science, sell advertising')
I'm waiting for a CSCI 0555 student to correct the obvious bug.
Dear 'All CS Departments of the World':
Please stop fooling your students in the 'Intro to Algorithms' class, and just rename it to 'Intro to Combinatorial Algorithms'. Because that's what you teach, and you completely ignore the arguably bigger area of 'Intro to Numerical Algorithms'. Numerical algorithms are not only the basis of pretty much every branch of computational science (comp. physics, comp. chemistry, comp. biology, comp. civil engineering, comp. mechanical eng., comp. electrical eng., comp. economics, comp. statistics, and on and on), they're also one of the weakest links for a CS graduate transitioning to machine learning and data science.
Or you could keep the course title, but upgrade the class, by teaching 50% of it as 'Algorithms and Data Structures', and 50% as 'Numerical Methods and Scientific Computing'.
Unrelated to above but relevant to the thread: Allen Downey and his team at Olin College are doing things worth looking into. (His Google techtalk: https://www.youtube.com/watch?v=iZuhWo0Nv7o)
There is hardly any pure CS here, as appears to be the case in the US.
All Informatics degrees are actually a mix of CS and Software Engineering.
For pure theoretical CS one needs to go for a Maths degree with focus on computation (after the third year).
I agree that numerical computing algorithms are indeed very important topics. I was very excited when I learned about them (then I took Numerical Math 2 and finally met my match; it was beyond my ability to learn in one block).
I do think that it was a good idea to teach Algorithms first, though. I don't think I would have appreciated Numerical Math as much if I had taken it in my first year, but Algorithms was pretty accessible, cool, and precisely the sort of thing I had imagined studying Computer Science to be like.
That sounds like what I was taught in school. Never used it. I'm just glad the waterfall model is dying.
It is kind of funny: my peers and I were all required to do the whole requirements, design, implementation, verification, etc. thing. But nobody did. Nobody.
Everyone did the same thing: Requirements for one small part, implement, verify, and then write the design document at the end to match the implementation. Repeat.
I'm really glad things like agile are here, not because they're magical solutions to all of life's problems, but because they're inherently compatible with how people ALREADY WORKED. Waterfall felt like a boat anchor, causing teams to get indefinitely stuck in the design phase (or to lie a bunch to effectively skip it entirely). Agile allows you to go away, get it 90% done, and then iterate, iterate, iterate until everyone is happy.
To be honest waterfall always felt like a programming methodology created by people who never programmed. It was like they took the method for building a bridge or a skyscraper, and applied it to software blindly. Agile methods feel like something designed by people who have actually programmed and know how they like to do it.
To be fair, it's not.
> To be honest waterfall always felt like a programming methodology created by people who never programmed.
And agile was created by people who don't understand that most organizations need specific software developed on a specific budget.
A majority of companies that I see that say they are doing agile are still doing waterfall.
PS - I'm a huge proponent of agile and the agile manifesto, but it's not a one size fits all methodology.
The exception to the rule is when they're replacing an existing system (or consolidating multiple systems). Then they think they know exactly what they need, but they won't figure out what they actually need until UAT, when the real users finally see the system and point out the business rules that were considered unimportant.
So, why not iterate quickly, developing the software in one of two orders:
* If one portion of the system can be used independently and generate value, why not build that first and get everyone using that?
* If the system cannot provide value independently, why not build the MVP of the first screen they will use in the workflow and a view-only portion of the second screen? Then get real-world users to tell you what they absolutely need before calling it done.
Once the first screen is done to their approval, then you take stock of the budget. You tell them there is a balancing act between cost and scope, and everything flows from there. (Or, you find that the value can't be realized and bail to another project.)
> The exception to the rule is when they're replacing an existing system (or consolidating multiple systems.)
This is pretty much 90% of the software systems and projects that exist today. Don't let the headlines of TechCrunch fool you. The world outside SV doesn't sit in Google; it sits in terminal screens and mainframes. All that money SAP, IBM, and Oracle are making comes from replacing existing systems (whether software or pen and paper). The people making the decisions don't care about MVPs or what screens look like. They want to increase efficiencies aligned with the objectives of their role and not get fired for doing it. When I tell a trader at an asset management company they can't enter incorrect values for a stock trade because we'll need to fix the data quality issue on our backend, they tell me "f--- off" because they'll simply pay their way out of that discrepancy.
So I hear you, but what you're suggesting exists in very few organizations (globally) or in a vacuum.
It is challenging to tell a director that they don't understand their own business, however.
I've worked in a lot of internal IT shops throughout my career, and it is only recently that I've worked on B2C applications. That's why I explicitly say that replacing an existing system requires more iteration, not less.
How many times have you seen a user exploit a bug to get their work done and be angry when it's "fixed?" The real bug is that the business didn't understand their own process.
My job, when I was in internal applications, was to cut through the bullshit and really understand how the business was run. And anyone who thinks they can analyze the problem up front and give a precise estimate for how long it will take is delusional.
That's an odd criticism of agile. The whole point of agile is to prioritize so that the important stuff gets done first and does what it's supposed to, so when the money runs out you've got the most important bits working the way they should.
If you've got a fixed budget, you're waterfalling and the money runs out, odds are very high that the software is going to do what the requirements said it should do, but it will be near-useless in real life.
I have not seen a variation of Agile that works well for these situations. The closest I have seen was referred to as "iterative development", where the project laid out a multi-year series of vague milestones and performed three-month mini-waterfall iterations to reach them in series.
And that is perfectly fine. It doesn't mean you cannot be agile. Agile is a matter of breaking the work down, choosing what to work on, and verifying progress. If your constraint is a four-dev team and six months, you plan around that. If you cannot make a somewhat realistic roadmap, you either reconsider the constraints or are forced to move forward. In any case, with agile you know your progress against the overall roadmap every 1-4 weeks depending on your resolution, which is infinitely better than waterfall, where the managers throw the spec over the wall to the dev team and climb over to ask why it's delayed after the deadline.
Actually 3 week long mini-waterfalls. :)
In hindsight, I have to admit I did learn a useful thing or two.
(unfortunately those things are still only useful in software engineering)
Seriously. This should be a full subject addressed in detail, and early, unto itself. Instead it's typically a sink-or-swim byproduct of trying to pass any other science/engineering class.
“The truth knocks on the door and you say, "Go away, I'm looking for the truth," and so it goes away. Puzzling.”
― Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values
For reproducible issues, it's scientific method plug and chug for me: observe an interesting behavior, theorize about cause, test, rinse, repeat.
Unfortunately, I am too often amazed at the engineers who don't go through this exercise before asking for help.
Also awareness of things like tools that analyze or try to provoke race conditions, etc.
- signed RIT SE Grad
My school switched to Java and graduated more than a couple people who don't really get C at all.
Should be taught in OS. Really, it should be taught in a practical sense in an OS lab.
What do you mean by that?
'ls' is a bag of weird (bad?) UX. When first starting on the command line, I could never figure out how grepping the output of ls actually worked. Then I think I looked at some of the code, or someone told me: when ls outputs, it checks to see where the data is going. If it's a terminal on STDOUT, it pretty-prints columns (and probably other things); otherwise it just prints the more sane one line per item.
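That check is isatty(3). Here's a sketch of the pattern, a hypothetical toy rather than ls's actual column logic:

    /* Pretty output for humans, plain output for pipes, a la ls. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        const char *items[] = { "foo", "bar", "baz" };
        int tty = isatty(STDOUT_FILENO);   /* 1 if stdout is a terminal */
        for (int i = 0; i < 3; i++)
            printf(tty ? "%s  " : "%s\n", items[i]);  /* one row vs one per line */
        if (tty) printf("\n");
        return 0;
    }

Run it at a prompt and you get one row; pipe it into grep and you get one item per line, which is what makes grepping the output work.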
There are two cases when you should definitely take advantage of isatty:
1) Your tool, by default, outputs in color.
2) Your tool, by default, outputs progress, which you keep on a single line by overwriting it with \r.
Both of these are potentially annoying when your command is used in a script, and especially annoying if they are called from cron.
In other cases I agree with you, getting too creative violates the principle of least surprise.
Although some of these tools can be persuaded to stop including escape sequences with the right switch, the right switch varies with the tool, and learning the right switch to use with a particular tool requires reading a large fraction of a very long man page. In contrast, 20 years ago, IIRC all command-line tools I used respected TERM=dumb.
Since isatty() was mentioned, let me point out that directing the output of the tools named above to a named pipe instead of a pseudo-TTY is not enough to get them to stop sending terminal escape sequences.
If I wanted that behavior, I would have typed 'ls -1' also.
Oftentimes when I am running ps, that's exactly the information I want to see.
I guess you could argue ef's length as an advantage, but if you're on a 1080 screen, meh.
I use 'w' command if I want CPU info.
Short course, that one.
No, the reason is because the more interacting components you stick on it, the harder your code becomes to reason about.
It's more a restatement of the notion that software should be loosely coupled.
Every class property is effectively global state (has the printer been initialized? what objects of this type have been created?). If you don't think class properties are useful...
Globals are a tool. Used badly they're terrible, but so are design patterns. For a long time many very useful pieces of software ran very well with lots of global state, but the egregious cases of misuse spoiled it for everyone. The problem with the "never use globals" approach is that if you need them, disguising them as something other than what they are makes things FAR, FAR more difficult to reason about.
The point of OO isn't to separate or eliminate global state itself. At some level everything is global (agreeing with you). The purpose of OO is to establish a strict separation of concerns. The idea is to say "We need a component of our system to deal with the Blargle operation," then determine what elements of the system the Blargle functionality actually needs in order to be performed, then make sure that Blargle is aware of only those parts of the system. Additionally, it means that system components that existed before Blargle was implemented are still blissfully unaware that Blargle is a thing now.
All of the features of OO languages, like encapsulation, inheritance, design patterns, and messaging are there to allow you to write software where unrelated components can be implemented and reasoned about entirely in isolation. You can do this in any computer language, and systems like the Linux kernel are good examples of separation of concerns that don't happen to use an OO language. You can also do this with global variables, but it takes more discipline.
That of course requires that you have a sensible manager/boss who doesn't force it his way.
When I started with computers in the dark ages (mid 1980s), there might have been RCS and SCCS... Yes, SCCS actually dates to 1972, RCS from a decade later. CVS... whoa, that is also an 80s kid: 1986, though it wasn't something I encountered until later. Ah, initial release 1990. Subversion came along and changed everything in 2000, Mercurial in 2005. I remember some heated Hg, arch, and Git debates for a while.
Unfortunately, I've got (bad) habits from most of these stuck to different layers of my brain....
Databases may be going through similar revision (though you'll still get a fair bit of mileage from 5NF).
HTTP may or may not be due for major revisions given variously:
1. Calls for distributed Web.
2. Issues with presentation and form factor.
3. Conflicts between document and application presentation.
4. Privacy and security.
Web frameworks change too quickly to be worth teaching. Best to learn general UI principles. Ditto mobile apps.
Moreover, the course curriculum needs to be updated every 2-3 years to keep such courses relevant.
Discover the fascinating differences between fflush, fsync, and data actually hitting disk. Learn why single-byte writes are a dumb idea, and why accessing data through a cheap one-gigabit switch might not be as fast as accessing it locally. As preparation for being actual storage developers, students will be blamed for the professor's own mistakes, and for every problem that happens anywhere else in the system.
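Joking aside, the fflush/fsync distinction in that blurb is real and compact enough to sketch: fflush() only moves bytes from the stdio buffer into the kernel's page cache, while fsync() asks the kernel to push them to the device. The helper name write_durably is made up for illustration.

    /* fflush vs fsync: two different buffers between you and the platter. */
    #include <stdio.h>
    #include <unistd.h>

    int write_durably(FILE *fp, const char *data) {
        if (fputs(data, fp) == EOF) return -1;   /* 1. lands in the stdio buffer */
        if (fflush(fp) != 0)        return -1;   /* 2. stdio buffer -> kernel page cache */
        if (fsync(fileno(fp)) != 0) return -1;   /* 3. kernel page cache -> storage device */
        return 0;  /* even now, the drive's own write cache may still be lying to you */
    }

Skip step 3 and a power cut can eat data that fflush "successfully" wrote, which is the fascinating difference the course description is winking at.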
This hits home.
The one class I had with business majors, I was completely underwhelmed by their capabilities. I was amazed to encounter such atrocious writing and thinking skills in an upper-division course.
Each week I'm mulling over how to connect with both sides of the table, since I have both sides in my classroom.
There was one instance where, in a multiprocess environment, one of the engineers wrote a library to access a database table and read its contents, maintaining an in-memory cache of it in the process. Then somebody else decided to use that convenience library in a different process, and so on; essentially, cache coherence problems were left unsolved. When I brought this to people's attention, they shrugged it off, saying they'd handle a problem when they saw one. Which may be fine, but it seemed like some key computer science concepts were overlooked in the pursuit of software reuse (here, reuse of the existing accessor/cache code for the database).
I have seen design choices that cause race conditions or non-deterministic behavior, and cargo-cult programming that follows patterns simply because they're an industry trend, etc.
Essentially, OO forces you to think of everything as an object. That pattern is valuable at times, but the way you think about something and the patterns you're familiar with intrinsically shape your approach and how you can solve it which can be a bad thing.
OO is not bad; it's merely bad to only use OO, to teach it as the only methodology in schools, to use it to the exclusion of all else. Like most things, OO is just one of many options and is not always the best one.
> the Functional Kingdoms must look with disdain upon each other, and make mutual war when they have nothing better to do.
OO as the idea of late bound, dynamically dispatched objects that receive messages, retain state, maintain access and location transparency and optionally delegate to other objects... that's not really controversial, but also rarely done. Nevertheless, most great research operating systems have been object-oriented, and this is no coincidence.
This is the most controversial part of it. I've got no problems with classes, objects, imperative methods, dynamic dispatch and all that - these are just semantic building blocks, sometimes useful.
What is really damaging is this very notion of representing the real world problems in their infinitely diverse complexity as something so primitive and narrow as communicating objects.
Why are you sledgehammering all these entities into this primitive and narrow way of thinking in the first place?
They are different. And in order for different entities to have some common properties you don't have to think of them as a hierarchical relation of communicating objects.
An object here is just a blob of state that enforces a primal uniformity. Files still look, walk and quack like files, they're just represented under a common construct underneath which is immensely useful for the programmer and irrelevant to the end user.
You can't really speak of the "OO blindfolded way of thinking" when all you do is contradict without any substance.
Other than that, OS research is nearly dead anyway; nothing interesting happens, besides probably Singularity, which is also not very OO. OO does not play well with static analysis anyway.
Compared to Windows, maybe, but in the grand scheme of things it is not. Hence so many research systems like Amoeba, Spring, Sprite, SPIN and even GNU Hurd that tried to create a general overarching metaphor across Unix's not-quite-uniformity.
Plan9 is even more uniform and still not object-oriented.
9P is a transport-agnostic object protocol for representing resources as synthetic file systems. The trick many have with Plan 9 is that they import their prior knowledge of what a "file" is, but in reality Plan 9's concept of a file is quite different from other systems. This is right down to the Plan 9 kernel being a multiplexer for I/O over 9P.
Either way, my point is the "object" metaphor isn't really all that specific. Personally I'd love to see work on functional operating systems, but other than the boring House system, I don't think there's been that many.
I still don't see how Plan9 is built on "objects".
> functional operating systems
I doubt functional metaphors are of any use in this domain.
My point is that the abstraction continuum is far more diverse and deep than something as stupid as communicating objects or reducing lambda terms. My bet is on a linguistic abstraction, which covers interface unification as well as many other things.
It has always been an unfortunate characteristic of using classes for application domain information that it resulted in information being hidden behind class-specific micro-languages, e.g. even the seemingly harmless employee.getName() is a custom interface to data. Putting information in such classes is a problem, much like having every book being written in a different language would be a problem. You can no longer take a generic approach to information processing. This results in an explosion of needless specificity, and a dearth of reuse.
This is why Clojure has always encouraged putting such information in maps, and that advice doesn't change with datatypes. By using defrecord you get generically manipulable information, plus the added benefits of type-driven polymorphism, and the structural efficiencies of fields. OTOH, it makes no sense for a datatype that defines a collection like vector to have a default implementation of map, thus deftype is suitable for defining such programming constructs.
It worked fine while our domain remained small, but it got ghastly quickly.
Add to this all of the extra oddness that object hierarchies bring, especially the oddness related to trying to force non-hierarchical concepts into a hierarchy. Meditate on whether an ellipse is a special case of a circle or vice versa until you reach enlightenment, which is realizing that the question is stupid and that any situation which forces you to answer it is stupid.
It was like lightning had struck when I realized that if you did OOP in a certain way you could make guarantees that some mistakes would be impossible to make, and that whatever I had been doing for the past several years was not even remotely OOP, even though everything was in a class.
The reason I'm interested in learning functional programming is because it seems like it too holds the promise of eliminating errors entirely, if done well, perhaps not in a mutually exclusive way.
If all the developers on your team are sane, then OOP is just a bunch of unnecessarily verbose conceptual complexity.
(The unfortunate corollary, of course, is that the code from the lunatic in the next cubicle is just as likely to be your own code from six months ago)
It's about python but applicable to most languages.
Sorry, but this is an extremely audacious claim to make without backing it up.
I also think "FP is about small scale" is quite wrong, looking at empirical evidence.
I'd always prefer the SML module system to anything OOP for a project of any scale.
There's a problem: I don't know of any project that was implemented in SML at scale (I mean at least several MLOC).
Could you give examples of features that you would like to have in modern OO languages which are available in Ada?
The fact that most of the statically typed OO languages also provide these features does not mean that OO is suitable for scalability - this is totally orthogonal to OOP.
This is a fallacy similar to attributing pattern matching and ADTs to functional programming, while these features are not inherently functional in any way.
Try to build a large system in a dynamically typed OO language to see why OO per se is of no use for a large scale architecture at all.
(I say this as someone with a bachelors only)
That's the problem with teaching these things in university. They just aren't worth the time and money to teach at that level.
What you describe sounds like the kind of content that belongs in such a session for degrees that are likely to lead on to working in software development.
This gives rise to a further thought. Rather than this even being part of a degree course, it should be something run by a university's Careers Service. A series of "So you wanna be a ..." sessions for common graduate careers, that train students on some of the day-to-day practicalities of what those jobs entail.
Just say that you accept assignments via version control. Done, learn it or fail. You don't need a course in it.
It's hard to create a course on the interconnections of everything. The best course I ever took was a Cisco IOS programming course. That included protocol analysis, debugging routers, switches, etc. I'm not a network layer programmer. But what I learned about how everything works together has helped me solve complex problems ranging from Kerberos authentication issues to finding duplicate DHCP servers on the same network. Without this knowledge, I would have to hire experts and waste a ton of time and money to get to the end goal which is a functioning product.
For years, I've been struggling to type without looking at the keyboard, and I make frequent typos which slow me down. Being able to type quickly, with a good habit of placing fingers on the keyboard, seems like an important skill to practice. A guide to choosing the right keyboard, and to setting up personal shortcuts or a customized keyboard layout, might also be worth learning for each developer.
While there ARE typing tutors, I haven't found any programming-specific ones. I think a normal typing tutor is nice, but since syntax requires so many symbols, typing English sentences only helps so much. I also think (still in the 'physical activity' mindset) developers could use these typing exercises as warm-up exercises. Just like in a sport, you stretch and warm up so you are loose and ready. Typing out basic for loops and functions can help in the same light.
Seriously, why does every analogy have to be a car? Is it just the most complex system we can model in our heads?
CSCI 4600: When Big-O Complexity Becomes a Lie and Why
-Mastering Git (/competitors)
-History of consumer information technology
CSCI 3300: Stuff no one cares about anymore
CSCI 4020: Pretending to write fast code in slow languages
CSCI 2170: I know about command line tools !1!!@ So h4x0r
PSYC 4410: Trivia
Seriously, this article is garbage