Two potential explanations. The first: the tech industry can't stop talking about itself on Twitter and blogs. How many mechanical engineers do you know who spend a large percentage of their day procrastinating by surfing the web and engaging in meta-talk about their industry? Our day-to-day tool for doing our jobs also happens to be a worldwide communication device. Just as the media loves to report on itself, the tech industry loves to post on the internet about the tech industry, so the effect of a single asshole's comment is magnified into a social-network-powered shitstorm, further entrenching the meme that tech is full of bigots and causing a flood of discussion and hyper-analysis of it.
The second explanation is that the tech industry is actually less misogynistic than others, as you have noticed, but ironically, because it lacks the widespread misogyny you see in areas like finance, when an "incident" occurs a lively discussion happens precisely because there is a critical mass of non-sexist males, not because of a lack of them. Of course, this feeds into the meme and cements it further as accepted truth, despite the absence of any real data or study showing that tech has an above-average rate of male misogyny.
Another possibility is that the tech industry is currently a desirable place to work. People generally don't care if you exclude them from jobs that they wouldn't want anyway, but if other people are having a lot of fun, changing the world, and getting paid for it, it really sucks if you're excluded from that because you lack a Y chromosome. Hence when stories appear about "software eating the world" or Google engineers getting paid $300K/year with $6M retention bonuses or startups getting sold for $19B after 4 years of work, everybody wants a piece of that, and any hint that it may not be a perfect meritocracy is problematic.
Honestly, I feel like this type of viewpoint is one that almost every programmer gets to after about 10 years of doing real work. The difference is tact. Some hack out experiments in new programming paradigms, some post detailed ideas or concepts for how things could be improved, and others just try to gain street cred by saying everything is obviously shit and we are all fools for not "fixing" things.
Well said, sir. Worth noting that this overview of where we've come from and how far we haven't come was posted by a fellow who is both clearly a genius and whose most popular GitHub project is written in x64 assembly. So I'd argue that even he sees the merit in the past when attempting to blaze a trail into the future.
For one thing, as long as our computers are binary, they are going to require instructions in a very specific form, and anything we put on top of them will expose that architecture to one degree or another.
My entire article is about things that happened between 1955 and 1969. The part where I state my thesis literally concludes "With a solid historical perspective we can dare to do better." I'm not sure how much more obvious I could make it that I "see the merit in the past when attempting to blaze a trail into the future".
Indeed. The announcement states, "If you already pay for storage, you’ll automatically move to a better plan at no additional cost," but that does not seem to cover people whose plans are simply becoming cheaper.
Great post -- I want to throw out another option though, which I am exploring myself. Your post, like most posts on HN, seems to take a very "web application"-centric view of things.
There are obviously vast worlds of software engineering outside of web applications. Embedded/real-time systems, programming languages & tools, robotics, graphics, bioinformatics/computational biology, machine learning, machine vision, the list goes on. Why not "pivot" your career into a completely new subfield? The downsides: you will likely need to take a year off for self-study and to build up a body of work to land a job, and you will have to take a pay cut. But you will be able to build stuff, stay excited at the edge of your field, and might get a fresh slate and recapture those feelings you had when you started off in the first place.
Sounds like what I want to be doing. I'm curious whether you have additional insight on taking this path; I want to keep all of my options open rather than, say, just pursuing embedded/real-time systems and finding out that the field is too narrow. What do you recommend studying, as opposed to CRUD web apps: C/C++, ML, data analysis with Python/R, or more domain-specific knowledge such as bioinformatics, graphics, or quant finance? Much appreciated, thanks.
3. High performance (algorithms, advanced data structures, C/C++, memory management, DB internals, internet-layer and below networking, cryptography)
It sounds like you want to get into 3. A good way to start is algos/data structures. Almost all the fields the parent listed benefit from a very strong foundation in computer science: discrete math, linear algebra, algorithms, advanced data structures, etc.
Data analysis with Python/R is more for scientists and mathematicians than for software developers. If you are being paid to analyze data as a programmer, you are usually analyzing very large data sets, and you will be using a much more performant language or possibly even a highly parallelizable paradigm like MapReduce.
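To make that last point concrete, here is a toy sketch (mine, not the parent's, and single-process Python rather than a real cluster) of the map/shuffle/reduce shape that frameworks like Hadoop parallelize across machines; the example data and names are made up:

    from collections import defaultdict

    docs = ["the cat sat", "the dog sat", "the cat ran"]

    # map: emit (key, value) pairs independently per record (parallelizable)
    mapped = [(word, 1) for doc in docs for word in doc.split()]

    # shuffle: group emitted values by key
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)

    # reduce: combine each key's values independently (also parallelizable)
    counts = {key: sum(values) for key, values in groups.items()}
    print(counts)  # {'the': 3, 'cat': 2, 'sat': 2, 'dog': 1, 'ran': 1}

The point is only that the map and reduce steps touch one record or one key at a time, which is what lets the framework spread the work over many machines.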
I am a database developer in a sequencing center. I would not recommend bioinformatics if you have any passion for good quality code.
There are some decent bioinformaticians out there, but most of them write crappy 100-line scripts and have very little desire to learn more than the basics. I am getting increasingly frustrated, as I can see I am clearly writing significantly smarter software than these people (most of them are just counting things and plotting the counts on a graph), yet management sees me as having the same kind of skill level / skill set.
Of course, if you are really interested in the research, then you may like it, but do not expect your coding skills to be valued.
Hi collyw, thanks for your message. I'm indeed really interested in bioinformatics. I had experiences in undergrad similar to what you described (e.g., a folder with 50 Perl scripts that each do a variation of the same thing).
I don't really mind the management slights, as they're an occupational hazard of being a programmer. What bothered me about academic research was how the doors were shut to you if you did not pursue the traditional PhD/post-doc path of life sciences research. Also, pure bioinformaticians, even with academic pedigree, were considered lower in the pecking order than "wet-lab" folks.
As crazy as it sounds, I really miss the days of writing Perl scripts (Python probably now), running BLAST, and plotting information-theory graphs. How do you like your current gig, and what do you recommend for someone who is looking to jump back into it?
My current gig has ups and downs. I like getting to code things from scratch in the technology of my choice, but too often we use the quickest, crappiest solution to get things done (uploading database data via Excel -- supposed to be a temporary solution, but it has been going on for two and a half years).
Apparently most labs have a shortage of bioinformaticians (at least that's what I hear here in Europe).
Yes, having a PhD will be helpful if you are looking towards the more research-oriented side of things. If not, you are more likely to be a technician, and your work will be a bit more production-like.
My advice would just be to look at the job pages of the institutes you would be interested in working at. I am sure you could start with more production-style work and move into a more research-oriented role.
Here is the jobs page for the science park I work in in Barcelona (the pay won't sound like much compared to the US). Not much up there just now, but I know we are hiring more people soon.
Richard Hamming gives similar advice regarding scientific research: "Somewhere around every seven years make a significant, if not complete, shift in your field. Thus, I shifted from numerical analysis, to hardware, to software, and so on, periodically, because you tend to use up your ideas. When you go to a new field, you have to start over as a baby. You are no longer the big mukity muk and you can start back there and you can start planting those acorns which will become the giant oaks."
The authors have a website - geometricalgebra.net - where you can download a program which will display all the diagrams from the book and allow you to manipulate them. You can also render arbitrary low-dimensional geometric algebra constructions which helps tremendously with improving your intuition about how things work. I'd say it's at a middle ground of mathematical sophistication - it's a good mix of proofs and practical usage. It's almost entirely dedicated to applying geometric algebra to computer graphics, so you won't get as much out of it if you're interested in applications to physics, or just pure mathematics.
If nothing else, I finally know what a quaternion really is after going through this book.
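To unpack what I mean by that (my own gloss, not the book's wording): in 3D geometric algebra the even subalgebra (scalars plus bivectors) multiplies exactly like Hamilton's quaternions, with the identification i = e3e2, j = e1e3, k = e2e1. Here's a toy Python sketch, mine and not from the book or its software, that builds the geometric product of Cl(3,0) from scratch and checks the famous i^2 = j^2 = k^2 = ijk = -1 identities:

    from itertools import product

    def blade_product(a, b):
        """Geometric product of two basis blades given as index tuples.
        Returns (sign, canonical_blade). Assumes e_i * e_i = +1 (Euclidean)."""
        idx = list(a) + list(b)
        sign = 1
        changed = True
        while changed:
            changed = False
            for i in range(len(idx) - 1):
                if idx[i] > idx[i + 1]:
                    # swapping two distinct anticommuting vectors flips the sign
                    idx[i], idx[i + 1] = idx[i + 1], idx[i]
                    sign = -sign
                    changed = True
                elif idx[i] == idx[i + 1]:
                    del idx[i:i + 2]   # e_i e_i = 1
                    changed = True
                    break
        return sign, tuple(idx)

    def gp(x, y):
        """Geometric product of two multivectors (dicts: blade tuple -> coefficient)."""
        out = {}
        for (ba, ca), (bb, cb) in product(x.items(), y.items()):
            s, blade = blade_product(ba, bb)
            out[blade] = out.get(blade, 0) + s * ca * cb
        return {b: c for b, c in out.items() if c != 0}

    # Hamilton's units as unit bivectors: i = e3e2, j = e1e3, k = e2e1
    i = {(2, 3): -1}   # e3e2 = -e2e3
    j = {(1, 3): +1}   # e1e3
    k = {(1, 2): -1}   # e2e1 = -e1e2

    print(gp(i, i))         # {(): -1}       i^2 = -1
    print(gp(i, j))         # {(1, 2): -1}   i*j = k
    print(gp(gp(i, j), k))  # {(): -1}       i*j*k = -1

So a unit quaternion is just a rotor: an even-grade element you sandwich a vector with to rotate it, which is why quaternions and 3D rotations fit together so naturally in the book's framework.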