"When you join a startup, there’s a lot of emphasis on the design of the application, reusability and clean code, and the ability to conduct and undergo code reviews, as well as the ability to think of and build systems that can scale based on users and geography"
I can't help but feel that if you're expecting a recent grad to hit the ground running with this level of insight and experience, you're doing something terribly wrong. You should expect to train and mentor a recent graduate, not put them in charge of a project.
* Good design requires insight into the requirements of a system, often with an incomplete spec and unclear long-term goals. You can't really do this without actually designing and building real-world systems, and over time understanding where the possible failure points and missing requirements may be.
* Reusability is a double-edged sword: if you generalize every bit of code into a reusable, configurable component, you miss deadlines and can end up with an 'Enterprise' mess of OOP code. It takes experience to know when to generalize and when to KISS.
* Code reviews, best practices, and requirements are different at every company and in every industry. Even someone with experience has to be brought up to speed on this when taking on a new position.
* The "ability to think of and build systems that can scale based on users and geography" ... this is incredibly specific. I work on embedded industrial systems; they're massively complicated but are single-user and often only on an isolated network. Knowing how to "scale based on users and geography" is a specific requirement for this person's industry, not software engineering in general. If you want specialization, you don't hire a recent grad (unless maybe you're doing research and hire a PhD).
Edited to expand on a few parts.
In the Real World(tm) of building software, you will work on systems built over years by other people, many of whom have long since left the project or company. The level of integration tests, unit tests, documentation, and automation will depend entirely on the culture of the organization.
Databases, low-level I/O, sorting operations, and such are nominally provided either by the standard library or sourced from the community.
Infrastructure will rule your world. Good infrastructure, with smooth provisioning and deployment, will make your day a joy. Poor infrastructure, with lots of manual work and sign-offs from ten degrees of sysadmins, will do the opposite.
You will also work as a member of a team, with a wide variety in terms of background and experience levels, making clear communication, in all forms, an absolutely essential skill.
In college, very little of the above is actually covered.
Sure, you'll learn lots of stuff about operating systems, algorithms, compilers, data structures, and the like, but without any real context on what that stuff really does.
But the place where computer science education is a big let-down is the collaboration front. There aren't many programs, perhaps none, where you are required, as a member of a small team, to maintain and modify code built three years ago by other people, and are graded on your ability to produce work that can continue to be developed by future students.
Instead, the focus is on solving relatively straightforward problems as quickly as possible, with little or no attention paid to user experience, readability, testing, and any of the other things that make up the "engineering" side of the practice of building software.
This is true. The things of value (from an industry POV) that you actually learn at university are:
1) You learn to learn. You learn to pick up an entirely new area of knowledge (albeit sometimes based on prior learning) in six months to the point where you're proficient enough to pass an exam in it.
2) You learn to work to sometimes-unreasonable deadlines with minimal supervision (at least, if you graduate you've probably learned this).
3) You learn to stick with something for 3+ years even when it stops being fun.
4) You meet a bunch of people who form the start of your professional network.
Anything else you pick up is a bonus, and your first 3-6 months on the job are where you actually learn what you need for that job.
I can’t stand it when an engineer wastes hours looking for a library instead of even considering actually writing the code themselves. Many just can’t, because they’re so accustomed to gluing together other people’s code.
Of course, I’ve worked with academic types who can never stop theorizing and actually be productive. They have zero discipline.
There has to be a balance. But yes, fresh out of school == fairly worthless for anything I’ve worked on ever.
That's too little theoretical background, not too much practical experience. If you take someone with a good grasp of the basics, you will never remove that grasp by giving them more practical experience.
The choice depends not only on how complex your own code would be, but also how mature and well maintained the library is (e.g. maybe a library that's just right is some solo developer's pet project, whereas a slightly more awkward fit is actively developed by a big company). That is the sort of thing where making the right choice really requires a lot of experience.
There is little point trying to teach work-related skills in a non-work-related environment. If I wanted to teach a bunch of kids how to work, the university system would not be a good starting point. It is the blind leading the blind: there are potentially literally thousands of people with at best limited work experience all crammed in together. That is not a sane environment for teaching people how to add value to others' lives through working.
University is supposed to provide networking opportunities, cross-pollinate interesting theoretical ideas that have limited practical application and provide a feeder ramp into the research community. If universities try to teach employable skills it will be hugely inefficient compared to teaching people skills in entry level jobs.
Anyone trying to learn teamwork in a team of university students is not going to understand teamwork. In real teams it is not unusual to have 1 or 2 centuries of pooled experience in the problem domain being worked on and that creates a completely different dynamic for how the work is divided up and work gets done.
Not to sound flippant, but if this is the case, why do we bother requiring a university degree for really any job?
Or even the college's own toy systems?
Every year from the second year onward of a computer science or software engineering program, students should be required to take one class where they spend at least one full day per week contributing to their choice of one of the following:
(1) An open-source project approved by the instructor.
(2) Production software used within the university, excluding systems dealing with student or faculty PII or sensitive information (grades, schedules, etc). Lots of opportunities here to contribute across the sciences!
(3) Non-commercial software benefitting the community or government at any level (from local to national).
The key difference between university and work is that in the former you are supposed to receive a pretty fine-grained assessment of your own skill, which is isolated and repeatable. In work, it's understood that promotion and slotting is a much more arbitrary affair and the marking criteria may not even be documented, let alone applied consistently. Also if your project fails because of bad work done by others, well, tough luck, that's what your salary is for. In university you're paying to be there, not the other way around, so the amount of random failure tolerable is much lower.
- customer-oriented thinking (you're not working for artistic beauty; balance the technical arts against feature-set ROI)
- pragmatism (even though they'll assign a few "solve it, then make it fast" exercises, you don't really get to feel the real pressure of aiming right at a good-enough solution)
- optimization: there aren't enough variations on a piece of logic to practice making it faster, or improving whatever metric you need to optimize
Things may have changed in college, but when people very early in their career start with me, one of their first tasks is to book a 15-minute meeting with me.
Learning Outlook/GCal/? and how to find and book a meeting room is a base-level skill.
Like, I'm working with a recent cum laude CS grad, and he was never told how SSL really works, nor DNS, nor TCP/IP, which makes it really hard to actually operate in the real world beyond the IDE.
- low-level was for the electronics/CPU architects
- the things you mention were for the network engineers
- the mainstream was application developer, and you get to learn Java and Eclipse
I swear this is another facet of ageism: some think that the 22-year-old with a 4-year degree is nearly as valuable as the 40-year-old professional, just needs some language experience and wham! Ready to go!
Maybe it's an excuse, and the companies who say that just don't want to pay for training.
How can anyone take the article seriously? These are all problems that startups punt on until after they've succeeded.
I've always thought so too. But it doesn't seem like any companies really buy into this any more.
After getting a few hires that only stay for 6-12 months, it's a question of "is it worth it to put in the time to mentor them if they're going to be gone shortly afterwards?"
This is half a tragedy of the commons, with so few companies offering training, and half companies being so god-awful at offering raises as employees gain experience.
It has also led to companies doing the "we only hire mid to senior level because the entry level is too flaky" thing, which also alleviates much of the need for mentoring new grads.
It also produces the interviews where people are asking about the passion for the particular domain that the company is in, in hopes that passion will keep them around longer.
That said, if someone is planning on staying only 6-12 months regardless of the company's current pay and mentoring practices, I would agree that the "job hopping came first," as an artifact of the dot-com boom days when everyone was trying to one-up everyone else.
Thinking that free training which is being provided by someone else should fit your custom needs is, well, maybe not the best business case.
"When you join a startup, there’s a lot of emphasis on the design of the application, reusability and clean code, and the ability to conduct and undergo code reviews, as well as the ability to think of and build systems that can scale based on users and geography"?
I don't know which startup he worked for that was doing all this and still managing to stay afloat! Startups want you to crank out features and get clients in, everything else is a failed strategy!
On the plus side, I work 38 hours a week and not a minute more, and I get paid above median for the privilege.
Through internships, I feel I've learned the practical skills I need in order to work full time, and what I learn during internships is complemented by the theory I have learned through my degree.
So I guess what I am saying is that if you are doing a university program, you shouldn't expect it to give you the practical skills for working, that's not what it is for. Instead do lots of internships if you can, they are fantastic.
Software engineering students had to do process and design papers, they (theoretically, at least) learned about agile development, UML diagrams, and all that other SDLC bullshit. As a computer scientist, I instead took fun papers like computer graphics and cryptography.
All the things I missed learning because I did computer science, I picked up as I needed at work. Learning how agile development works doesn't take a semester of reading books, and what you learn in theory about the SDLC at university doesn't reflect reality in any company.
They do seem awfully interested in whether I can find all matching subtrees in a binary tree, though. Which, now that I think about it, is the one thing I learned in college but haven't done even once on the job in 20 years (I have had to find minimum-cost paths in a tree, though. I couldn't have done it on the spot at the whiteboard, but my data structures and algorithms class was helpful in researching and figuring it out years later).
I think algorithms questions are an OK litmus test for being able to program, but it shouldn't be everything or even 1/4 of an interview at a small company.
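For reference, the subtree question mentioned upthread usually boils down to a recursive equality check. A minimal Python sketch (the names are mine, not from any particular interview):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def same_tree(a, b):
    # Two trees match if both are empty, or both roots have equal values
    # and their left and right subtrees match recursively.
    if a is None or b is None:
        return a is b
    return (a.val == b.val
            and same_tree(a.left, b.left)
            and same_tree(a.right, b.right))

def matching_subtrees(root, pattern):
    # Collect every node in `root` whose subtree is identical to `pattern`.
    # Naive O(n * m), which is fine at a whiteboard.
    if root is None:
        return []
    matches = (matching_subtrees(root.left, pattern)
               + matching_subtrees(root.right, pattern))
    if same_tree(root, pattern):
        matches.append(root)
    return matches
```

The naive version above is the whiteboard-friendly answer; the follow-up trivia (serializing subtrees and hashing them to avoid repeated comparisons) is where the interview one-upmanship really begins.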
From first hand experience, I would strongly disagree...
I won't beat the dead horse of the sad state of tech interviews, but just wanted to point out that this statement does not align with my experience at all.
Are typical startups really doing whiteboard interviews during their early stages? I expect whiteboarding from a company like Uber or Discord, but don't most still hire based on networks and recommendations?
I'd say when it stops accepting funding and determines it wants to stay as a small business. But that's probably a bit rare for companies that have taken on significant funding. After all, if I'm an investor I'd like my money multiplied (>1, thanks).
> Are typical start ups really doing whiteboard interviews during their early stages?
It varies a lot on the startup and the founders. Some founders don't have a large network of engineers (or - rather - engineers who would like to work at a startup for startup compensation; this is especially true with very early startups). You'll eventually have to hire from outside because your network will run out.
I've been hired as an early engineer at a seed stage startup. (Would not recommend) I've interviewed at dozens and dozens of startups in SV all across the funding range. Many of the ones I interviewed with did ask typical whiteboarding questions that you'd find at FAANG. After all, many come from FAANG and don't know any other way to interview. Or they try to imitate "the best".
Umm... every path is a minimum cost path in a tree, because there’s exactly one path between any two vertices.
He may have meant a general connected graph (not specifically a tree). In which case, there could be multiple paths and some may be cheaper than others.
It wasn’t a general graph, though, as there was a unique path from the root to each leaf.
That’s about as well as I’m going to do here; it was a while ago...
Asking someone to find the matching subtrees in a binary tree might not be the best interview question, and to some extent it is kind of trivia for "did you grok your CS education well" - but the point of algorithms questions overall is to see if the candidate, when confronted with a tricky problem, will solve it.
And no, these tricky problems don't come up all that often, and you're often just doing basic coding plumbing work. But the candidates who can solve these problems can do all the plumbing work too, and when confronted with a tricky problem, they can get past it. And they also may innovate in an algorithmic way where a less deeply technical developer will either get blocked or come up with a worse solution.
Citation very much needed. If anything my experience suggests I can teach people with “soft skills” how to do algorithms and data structure tricks much easier than the opposite.
This is the exact opposite of what I have seen in 20-some-odd years of working. Maybe if you mean deep technical skills as an expert in cutting-edge ML? Thing is, most companies do not need that depth of expertise. They need someone who can build standard web/mobile apps and not be an asshole (and it's an even bigger plus if they know anything about business/finance/project management). How many assholes have you ever turned into not-assholes, vs. how many people have you been able to teach something technical? Soft skills are hard for many people.
Wanted to add that the pretty standard soft skill of being able to check one's ego often allows people to learn more easily than those who can't put their ego aside.
People who won't try new things, because they'll suck at it.
People who don't Google (they know how, they just don't) because they think all answers are received wisdom, not something between a SWAG, an opinion, and a mental model.
People who don't read things outside their domain because that's someone else's job.
People who confuse having the same 1 year of experience 7 times with 7 years of experience.
People who think you have to be an expert scientist, engineer, architect, philosopher, and mechanic to change a light bulb, so they never even try.
People so worried about imposter syndrome they'll never deliberately exert any curiosity for fear it will be seen as ignorance.
Your whole life has goal posts laid out for you. You go to elementary school, get good grades so you can get into honors classes in middle/high school. And you do this again for college. Everything is quantitatively measured. How many questions did you get right? How many days did you attend school? How many extracurricular activities did you participate in? How many hours did you volunteer? All check boxes that other people have laid out for you so that you can be measured in the funnel. And, actually, for some people this process can continue in their career. You get good grades in college, then you get an internship at BigCo. Then you get a job at BigCo. On and on until you die.
The problem is, you never had to figure anything out. So if you ever decide to do something non-standard or if randomness hits you too hard, or if someone along the way deems you "not good enough", now you're off the track. And you have never had to set your own goal posts. If you have to be constantly told what to do, employers are not going to want to hire you.
I think the lack of critical thinking goes hand in hand with an inability to function without rulebooks and structure.
I'd love to see a curriculum where there is a class each semester that is 100% project based, where the end result isn't as important as the process.
I got a good, long laugh out of this. This describes the exact opposite of my experience in a startup. None of those things were ever a concern until I worked for a company that already had a revenue stream.
That is, after being an indisputably young startup, the business declines to transition its identity into being a growing small business, and remains a self-described "startup" because it is secretly a huge business that is still in its second or third instar, ready to moult, and wanting to seem like a cooler place to work than it actually is, now that the HR department has multiple permanent employees.
The emergency all-hands pivot brainstorming meeting just doesn't seem like it can even take place in the same building as all those design discussions and code reviews.
Does anyone else think that "startup" has become a branding strategy rather than a descriptive term?
So ultimately it is called computer science for the wrong reason.
Unfortunately, even though it was one of the best such programs in the world, and turned out graduates who made 80k+, it was still generally looked down upon, as the degree that people took who "couldn't get into Computer Science".
It is unfortunate that people in academia severely look down upon anything that has a more practical focus on useful skills, and only respect theoretical studies.
Whether academia is responding to signals from industry, vice versa, or neither is an interesting theoretical question. In practical terms, one might argue that the better route for those industries, at least, is to choose a degree with higher perceived value.
Fortunately, I wasn't talking about aerospace, or defense, or whatever. The vast majority of people who come out of CMU are not working in these very specific job areas, so they aren't relevant.
Instead, they are working at SF tech companies like Facebook, and Apple, and the like.
And if you look at the places that the Info Systems people go to, it turns out that the vast majority of them have the job title "Software Engineer", making a median salary of $90k:
It turns out that a whole bunch of people with these degrees are able to get top software engineering jobs at top tech companies.
> In practical terms, one might argue that better route for those industries at least is to choose a degree with higher perceived value.
No, the facts show that a very effective way of getting a prestigious job, working at a top tech company with the official job title of Software Engineer, is to get an Information Systems degree. The stats I showed prove it.
The problem for most schools is an engineering degree is a specific, formally defined degree. It seems that ABET has recently updated their Software Engineering criteria, but for decades a software engineering degree required extensive curriculum in physical science.
It teaches you how to work as a web developer. Web development being something that people often call software engineering.
Use whatever word or definition you want to group Info Systems under, but at the end of the day, people with these degrees are still getting jobs at Google, FB, and the other top companies, and/or startups, and are given the job title "software engineer".
That is the definition of software engineer that I use. It is defined as "Those people who are working at google, or top startups or whatever".
Cal Poly https://csc.calpoly.edu/programs/bs-software-engineering/
San Jose State https://bsse.sjsu.edu/
Penn State https://behrend.psu.edu/school-of-engineering/academic-progr...
Univ Washington https://www.uwb.edu/bscsse
Agree, though, that the Soft Eng degree differs from CS largely by dealing with process, not technology. The first versions of the SWEBOK were pretty terribly biased towards a waterfall process.
Ultimately, my school could not offer a formal Software Engineering degree. "Engineering" degrees have not caught up with the times to include software. A software engineering degree would require completely irrelevant courses in physical science, like thermodynamics, statics, and advanced physics.
UC Irvine: https://www.informatics.uci.edu/undergrad/bs-software-engine...
Penn State: https://behrend.psu.edu/school-of-engineering/academic-progr...
Those three schools are > 30k students each.
(UCI doesn't appear to be one of them.)
The modern day web dev degree is called "Information Systems". Top schools offer it, but it is generally looked down upon as the degree that people take who couldn't get into Computer Science.
CMU offers a minor in Soft. Eng. It requires 6 courses (one is fluff) and a minimum 8 week internship in industry. What's also good about this CMU minor is that it's open to students of any major.
I have no affiliation with them, but like the model.
I did, for fun, the labs from two MIT graduate courses (operating systems and distributed systems) that complement the lectures.
I found them very challenging and interesting. It took me a few weeks to complete them, and I didn't even do the "project" part which was more open-ended.
The end result is nice little feedback loop of far better understanding of the theory and concepts behind the code that I write feeding into writing better code, which then feeds back into better understanding. So now when I help peers (especially friends still in college), I focus less on the language and more on helping the concept click for them.
I can't say how well this works in practice in a university -- it sure seems to work for MIT -- but I know that in my professional life it has made just about everything I do far easier to reason about and my work is all the better for it.
Yeah, it came across to me that the expectation was that you'd learn to program mostly outside of the course. Or, really, that you already had a reasonable grasp on the basic concepts. Otherwise I think that course would feel to most people like being tossed into the deep end of the pool from a great height.
To be sure, with the campus version of the class there would be recitation sessions and other resources to get help on the programming side. There's also a companion textbook that goes into more Python details. But that's certainly not a class to "learn to program," much less how to work on a command line, use an editor, etc.
That may be reasonable for an MIT CS curriculum but most other majors probably don't have the same degree of implied prerequisites.
There's probably an expectation these days that students have some degree of exposure to computers. When I took an intro to computing course (FORTRAN) it was pretty much no expectations. But times have changed.
And I found 6.001x useful. But then I had a lot of experience with computers even if not programming full-time professionally.
Makes sense, guessing not many high schools even had one computer until the Apple II, which I guess would have been a bit after you graduated.
Side note, I love that it's possible to take a course like that online for ~free now.
I did take a FORTRAN course in college which people would consider very rudimentary today. This was the textbook :-) https://openlibrary.org/works/OL6795090W/A_Fortran_coloring_...
But I didn't really use a computer to speak of (other than as a text editor in grad school) until I was working--and later got into programming as a hobby.
> Students are taught formal testing methods such as static analysis, which checks code without actually running it. This helps them understand how to test software programs, but it doesn’t address the testing of distributed systems, web services, and infrastructure resiliency. Examination of these types of user interfaces and back-end systems is essential, Devadiga says.
This is a good observation. In my experience, the biggest hurdle new grads have to overcome is to learn to develop in a context where users and co-workers are going to depend on the projects they build, and need to be maintained. I didn't learn how to set up instrumentation, deployment pipelines, outage alerts, and other important pieces of infrastructure in keeping a web service online in my university classes.
> Because startups are heavy users of cloud computing platforms, it’s assumed that most software engineering students understand how these systems work—but that’s often not the case, Devadiga says. Students need a practical understanding of infrastructure architecture design patterns, DevOps, and cloud platform services like compute instances, object storage, and queueing services. The ability to create applications that can execute in services like Amazon Elastic Compute Cloud is important.
Another one that I think is on point. I did use AWS in a networking class, but that was just because the class instructors didn't want to deal with building dev environments for windows, mac, and linux. I didn't really get exposed to deploying and running a web application on a cloud computing platform. For how universal cloud computing is nowadays, it's surprising how little I was exposed to it in university.
I think it'd be great for universities to offer a "practical software development" class that didn't focus too heavily on any one particular academic topic, but instead emphasized imitating the kind of work that developers do in industry. It'd be cool if students went through the whole class building, maintaining, and extending a single web service over the course of a semester or quarter. It'd be neat if grading were based not just on application functionality, but also uptime, average latency, and similar metrics, though that would probably be too variable to be fair; a student's site might go down while they're away at a sports competition or something like that.
These labs tend to organize students into pairs or small groups who proceed through a sequence of onsite exercises that attempt to closely emulate more complex scenarios; thus synthesizing the application of multiple book/lecture-learned skills in an environment that naturally elucidates and facilitates the practical systems engineering and communication skills that are only otherwise learned in the workplace.
Student performance in a lab is typically evaluated from structured record keeping of methodology and analysis (testing & documentation). There is also often a presentation component that helps students learn how to effectively communicate results to an audience.
One can imagine lab teams collaborating on all aspects of software engineering including requirements gathering, systems design, infrastructure provisioning, deployment automation, CI/CB/CT, integration testing, release management, diagnostics/troubleshooting, etc. Experienced graduate students can be recruited to play the various roles of chaos monkeys, users and business stakeholders.
Students would get a healthy dose of teamwork experience, and have some tangible software deliverables to seed their technical portfolios.
It even sounds pretty fun!
The latter is another complaint I've heard from company owners. Too many graduates who just don't know how to work with a company's non-engineers, as they've told me.
1. "They [start-ups and next-gen tech companies] want grads who can build scalable systems and program for large-scale, distributed, data-intensive systems that leverage cloud computing." - For start-ups, this is only true if your startup is in that space. Most "start-ups" are trying to find product market fit, and the systems are generally not large scale as a result.
2. "Engineers at established companies mostly work within their specialty areas." - While there's some truth to this, I think it's overstated. In the start-ups I worked with, "engineers" did not do market research. I worked with plenty of specialists at start-ups that worked with language X, but if you asked them to debug an issue in language Y, they'd refuse. In some cases, it can be the opposite of what this article states - engineers at a start-up can often focus on hands-on engineering tasks, whereas in a large company I find a larger portion of my time is around writing documents to align more people around an idea.
3. "The timing of the product release is crucial because it has a direct impact on acquiring customers and affects the bottom line. Significant delays can put a company out of business." - In a large company, you won't go out of business if your release is late, but it could mean that your product gets de-funded. While the change to the company might not be so much, a project getting de-funded certainly affects the daily lives of those involved. So I think this difference is overstated.
4. "Engineers for startups are more likely to play a significant role in defining the system architecture" - My experience is that this depends a lot on the start-up and the seniority of the engineer in question.
But they can Google.
They can study for interviews that quiz you on computer science trivia.
The interview process on the coasts is broken. In the midwest I don't have to deal with that shit.
Of course, we need a _ton_ more oversight involved to ensure that bootcamps are actually graduating people with a job-ready skillset.
A startup can hold out for MIT grads for as long as they want, but they ain't getting one if every other startup is doing the same.
_Someone_ is getting hired at the end of the day. A company that's looking to grow rapidly can't hold out forever.
Experience matters and it's not something you get in any school.
We had an intern come in that was still stuck in that student-mode of needing to be told everything and not asking pointed enough questions. After a few weeks they were up to speed and productive.
Frankly even experienced workers will still require a period to absorb specific knowledge.
I think the time required for fresh grads to be productive is way, way overstated and overestimated provided they have the right direction.
> Because startups need to bring their products to market quickly, applications are built iteratively, with rapid prototyping. The timing of the product release is crucial because it has a direct impact on acquiring customers and affects the bottom line. Significant delays can put a company out of business. While time to market is also vital for large companies, their software releases are typically for established products, and if they run late, such companies usually have the money to survive.
Some don't have enough people to properly mentor and train a new hire.
The page for the startup of the engineer featured in the article: https://www.linkedin.com/company/datarista/about/
> size 2-10 employees
I would contend that any company in the 2-10 range hiring a new grad is a mistake for both parties.
Some acknowledge the pattern: they get a job, they get trained, and they go on to greener pa$ture$. In that case, the mentoring is often a lost ROI.
My reading of the desires of the person featured in the article... they want to hire a person at new-grad wages who knows how to debug a legacy code base, architect a cloud solution, come up with new product ideas, do market research, and be a participant in the "devops means we give developers root" culture... and be able to do all of this within a single-digit number of days.
Those are things that come with experience and failure. No class or boot camp will teach those things to a point where the person is competent in them without some good time to work on those skills (side note: this is where college often produces better candidates than boot camps - they've got more time to learn the tools).
Rambling on the above... I now have one question that will distinguish a good candidate from a poor one: "How do you set a breakpoint in your preferred debugger?"
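For flavor, here is what a passing answer might boil down to in Python terms. This is my own sketch, not anything from the article: since 3.7, `breakpoint()` drops you into pdb at that line (the gdb equivalent is `break file.c:42`). To keep the snippet runnable non-interactively, I swap the default hook for a recorder via `sys.breakpointhook`:

```python
import sys

hits = []

# Replace the default pdb hook with a recorder so this runs unattended.
# In real debugging you would just call breakpoint() and land in pdb.
sys.breakpointhook = lambda *args, **kwargs: hits.append("hit")

def buggy(x):
    if x < 0:
        breakpoint()  # under pdb, execution would pause right here
    return abs(x)

buggy(-3)
print(len(hits))  # the hook fired once
```

A candidate who can answer this has actually sat in front of a misbehaving program, which is the point of the question.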
> mentoring is a lost ROI in many cases.
What investment? Grads are cheap, and time to productivity, as I argued, isn't that long; even before then, they aren't merely learning. Most of the investment here is on the worker's part, and greener pastures aren't available as quickly as productivity sets in. If you're fresh out of school and working, leaving a company is unthinkable in the first year, unlikely in the 2nd.
This isn't saying that mentoring can't be a positive ROI - just that the person needs to stay long enough.
> If you're fresh out of school and working, leaving a company is unthinkable in the first year, unlikely in the 2nd.
You should check out reddit /r/cscareerquestions and consider how many people advocate leaving under a year.
I will also note that for my own experience, while there are some developers who stay for a long time, there are more than a few that start and leave within a year.
In trying to get two developers who would stay on, we went through six developers, four of whom left within a year. This is for the public sector, and the lack of competitive monetary compensation shouldn't be a surprise to anyone... still, that's four developers that we trained up who then left.
The time invested in getting them to the point where they could have been useful contributors, had they stayed, was more than it would have taken the mentoring developers to just do those tasks themselves and not hire anyone.
That doesn't touch on the actual pay and paperwork to hire someone.
The pay isn't anything hidden either. This number is well known https://projects.jsonline.com/database/2019/4/Wisconsin-stat... (much of the technology side gets classified as IS SYSTMS DEVMNT SVCS ...). Compare those numbers to the averages for the area https://www.payscale.com/research/US/Location=Madison-WI/Sal...
People applying and taking the job know exactly what they are getting and going to get for compensation and benefits. Pay is a bit on the lower side, vacation is on the higher side, the pension is fully funded (actually at 102.9%).
Thus the ability to juggle things to retain a person is very limited - especially when that person hasn't been there for a full year.
Consider then, that the person is accepting the job, knowing that this is the public sector, knowing the wage history, and looking for another job elsewhere (many have left to move out west). It is not exactly practical for state government to try to match west coast compensation.
It isn't, though there may be a sharp difference in cost of living, and that's a strong consideration with wages. Also, a pension and benefits are taken for granted by younger people. Despite govt positions tending to be less exciting, you can really put some money away. Moving up, though, is a sluggish process.
I'll probably hire you with a CS degree, but I know I'm going to need to teach you how to be a software engineer.
You basically do 4 years of math and physics with several years of calculus, differential equations, numerical methods, programming, and solving difficult problems. It is nothing like a trade school (no offense to that line of work...they probably make more money than me :)).
If you want pure CS, it is usually a maths degree with a major in computing.
This afternoon I met someone in the lunch line who had just graduated in CS and said they tried to do EE but couldn't do it. I've seen others when I was in school drop out of EE to do CS.
Clearly both fields are equally complex and challenging, but I got the sense there's something particularly asymmetric and grueling about the EE curriculum.
I went to a state school and went CS, though I had several friends in ECE (electrical and computer engineering). The difference between the classes we took is largely summed up in:
* They took more math and physics than I did.
* They had fewer CS electives than I had.
* I had more 'breadth requirements' outside of the core major than they did
* They had to maintain a much higher GPA to remain in the engineering college
* They had classes that involved a soldering iron and hours spent in front of Mentor Graphics (to the point it was nicknamed 'tormentor graphics')
The EE/ECE curriculum may have changed since then. It probably has. However, (at least at my alma mater) the engineering college is still as selective as always while the CS department unanimously voted that they won't cap enrollment (and instead figure out how to get larger lecture halls and more sections as needed).
And the latter have little motivation to stay current.
For some reason, I don't know/understand, this responsibility got shifted to taxpayer.
This may not be an option for startups but, then, maybe startups should focus on hiring more experienced professionals who can, with time, bring freshly graduated engineers up to speed, as well as helping design systems and processes tailored for the company and product.
What I'm wondering is: doesn't university prove that one is capable of learning programming/computer-related topics?
The hardest things I've learned about were:
- creating a computer graphics engine from scratch (I did this in Java which made it easier)
- reproducing a simplified version of Kevin Mitnick's attack on the Boston super computer (? not sure, memory is vague) with C, libpcap, tcpdump and other tools/libs
- being passable at reading x86/x64 assembly and understanding how a computer is built, from the architectural level up to a modern programming language, playing around with creating a MUL instruction for a toy ISA
- compilers and reading a toy ISA to then be able to read the toy machine code made for it
Is this practical? No.
Is this a lot harder than creating a new web app for a startup just starting out? Yes. If you know a programming language or two, it's easy to hack things together with JS, and it's also fairly easy to get up to snuff with ReactJS and ES<whatever_year_it_is>, since it's mostly syntax features.
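To give a flavor of the toy-ISA exercise I mentioned, here is a minimal sketch of my own invented three-instruction machine (not the one from any particular course), executing hand-assembled "machine code" with a MUL built from repeated addition:

```python
# A made-up toy ISA: each instruction is a (opcode, a, b) tuple over 3 registers.
LOADI, ADD, MUL, HALT = range(4)

def run(program):
    regs = [0, 0, 0]  # three general-purpose registers
    pc = 0            # program counter
    while True:
        op, a, b = program[pc]
        pc += 1
        if op == LOADI:       # regs[a] = immediate value b
            regs[a] = b
        elif op == ADD:       # regs[a] += regs[b]
            regs[a] += regs[b]
        elif op == MUL:       # regs[a] *= regs[b], via repeated addition
            acc = 0
            for _ in range(regs[b]):
                acc += regs[a]
            regs[a] = acc
        elif op == HALT:      # stop and expose register state
            return regs

# Hand-assembled program for 6 * 7: load 6 into r0, 7 into r1, multiply, halt.
code = [(LOADI, 0, 6), (LOADI, 1, 7), (MUL, 0, 1), (HALT, 0, 0)]
print(run(code)[0])  # 42
```

None of this ships a product, but it forces you to understand fetch/decode/execute in a way no framework tutorial does.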
I don't have experience working with legacy systems or large-scale systems with a million users, so I can't comment on whether that's more difficult. But since this article is about startups: I've worked for startups that had existed for 2+ years, and becoming productive on the job didn't take me long, and I'm not an amazing programmer by any means.
My bachelor's, though, yeah, that was a lot worse. My master's in CS saved me, thanks to security courses where the professors expected you to learn almost everything on your own.
It really depends on your skillset. I've got over 20 years experience, and honestly, I don't feel like I have the skills that most startups need either, nor the desire to forgo a decent salary in the hope that they "make it big" someday.