There have been times when I automated things because I wanted to avoid repetitive work. I have never been able to automate something I hadn't already done manually 2-3+ times, but once I had, I knew what I needed to do to avoid the manual work and make the process error-free the next time the same operation was needed. If someone else had been dictating what they needed, I would not have been able to build my tools as effectively.
To me, a good developer is a "maker", who builds tools. For us the raw material is a computer and some free time, which makes experimentation and evolution of our tools easier compared to other professions.
It's often hard to apply, though, because there are scenarios where you can do nearly nothing manually, like optimizing a logfile parser that handles logs from hundreds of servers every day.
- Objectives are a way of saying "prove to me first that you know where you are going before you go."
- We’ve come to believe too much in metrics and accountability.
- There's no known systematic way to solve ambitious problems. (Otherwise they would have been solved already.)
For example, the original Google Search project succeeded exactly because none of this was around. Traditional management only works for repetitive processes. Organizations that revolve around traditional management are incapable of innovating.
Such as social networks for cat videos and the like?
Come on, let's stop embellishing IT. Most of the time, we choose not to be systematic or thorough because we are divas. We like to think we're more special than we actually are, that we're too "conceptual" for metrics and that our horizons aren't clearly defined. This is - most of the time - BS. We just happen to have been dealt a good hand (working in IT, nowadays), which allows us to harness the large demand to feed our egos. IT professionals aren't artists or the like; they should be as systematic as other engineering fields.
The greater flexibility of software implies a different set of management practices.
I think engineers and scientists who worked on the Apollo program or the Manhattan project might disagree.
Uber knew they could do GPS tracking and send messages to phones.
Apple knew they could integrate a mobile phone chipset with a PDA one, and put some software on top.
Google knew you could index web pages.
None of these things were fundamentally hard problems on the scale of nuclear fission or landing men on the moon. There was little risk beyond some money, and it was largely a case of taking existing technology and putting it together in new ways. I'm not trying to claim these things didn't take some imagination and skill to develop, but to claim they were potentially impossible is overstretching. Sometimes, as an industry, I think we just need to get over ourselves a bit!
I can't say that I agree.
Management Accounting techniques are generally based on knowing what the process is, or what the output looks like, ideally both. I would suggest that the atomic and space programs involved many problems, at various recursive levels, where the process to follow, and what success looked like, were unknown going in. Research & Development is one of the classic cases where Management Accounting techniques are considered useless or counter-productive.
I did a quick check on Wikipedia: the Apollo program cost $170 billion in 2005 dollars.
I believe the problem with IT is that vendors have sold management on the idea that their solutions will solve all their problems with little effort or manpower. The "do more with less" mantra has been oversold.
It could be the exception that proves the rule. Internally you might be able to say it was heavily research driven. And the global resources and brain power attached to the project were unprecedented (https://en.wikipedia.org/wiki/Manhattan_Project). Plus there was the motivation of a truly existential level threat. Even given all that it still could have failed.
For some reason I didn't think of Apollo in the same way. Maybe I'm missing the boat on that one.
Weird. When I look back on my life, I see blunders, randomness and missed opportunities. There's a great deal that I would change if I could do it over. I'd have thought that was the same for most people.
I hope I did justice to it. I myself am retraining my thinking to be more aligned with Buddhist and absurdist philosophy. 30 years of an Abrahamic religion can have some serious and long-lasting hangovers.
I have always started on the solution to an ambitious problem by asking "what would have to be true for this to be solvable?" and from each of those things working backward until you get to things that are all true now. Then you can start solving the "almost" true things and work forward to the ambitious solution at the end.
Can you give an example of an ambitious problem you solved backward this way?
1. What would have to be true for us to have reliable transmission of messages over an inherently lossy network?
1a. some kind of method to ensure that a message got to its intended destination
2. what would have to be true to ensure that a message got to its intended destination?
2a. some kind of method of ensuring that failures don't occur, or if that is impossible, some kind of method of making intermittent failures not prevent the system from eventually functioning.
3. what would need to be true to make intermittent failures not prevent the system from functioning?
Slightly misleading because I'm obviously working back from a familiar example, but I think the technique is applicable in some cases. At the very least your solution is born out of trying to make the impossible just barely possible rather than seeking to build layer by layer to achieve something possible.
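The retransmission idea in steps 2a/3 can be sketched in a few lines of Python — a toy stop-and-wait loop over a simulated lossy channel. The function names and the drop rate are made up for illustration, and real TCP is vastly more involved, but it shows how "make intermittent failures not prevent the system from eventually functioning" becomes a concrete mechanism:

```python
import random

def lossy_send(message, drop_rate=0.5):
    """Simulated lossy channel: returns an ack only if the message got through."""
    return None if random.random() < drop_rate else ("ack", message)

def reliable_send(message, max_tries=100):
    """Retransmit until acknowledged, so intermittent failures no longer
    prevent the message from eventually being delivered."""
    for attempt in range(1, max_tries + 1):
        if lossy_send(message) is not None:
            return attempt  # delivered on this attempt
    raise RuntimeError("link appears to be down, not merely lossy")

print("delivered after", reliable_send("hello"), "attempt(s)")
```

The point is that the reliability lives entirely in the retry loop layered on top of the unreliable primitive — the channel itself never gets fixed.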
However, with TCP/IP you can also see how traditional methods work as well.
The challenge there ended up localized around a couple of really hard sub-problems. One was that Netapp's storage model was file based (rather than block based), so part of the problem was scaling the operation of a file state machine; the other serious sub-problem was creating a reliable way to keep protocol/process state that was "in flight" in order to recover from failures. Working backwards led to what I ended up calling the "three layer cake" model, which split the storage stack into parts that were loosely coherent and tightly coherent.
I'm strongly in favor of giving up centralized control in the interests of allowing emergent behavior to tackle opportunities as fast as possible. Leadership to me should be:
* Set the direction
* Gather smart, motivated people
* Create an environment that gets everyone on the same page
* Get the fuck out of their way.
Scaling: OpenCL and CUDA have both taken off, delivering everything they promised. Hadoop matured and spawned variants, and now Spark is starting a new cycle. Thanks to Amazon and Google we now have cheap computing cycles on tap with APIs usable by ordinary developers.
It seems to me that only Web Development and Mobile Apps are a morass. But web development was bound to be a step backwards given its strange history, while mobile app development can only be as good as the underlying platforms, which started out really rudimentary and have had to evolve to get where they are now.
Another problem is that often, if you create a platform, it is because you want to do something different from established platforms. This inevitably makes cross-platform support difficult.
Think of what EJB, EJB2 and Spring did to Java. Think of Struts. Instead of fixing the real problems of the language, we went down a lot of side paths that barely got us ahead of where we started and, if nothing else, slowed progress down. Java in 2002 vs Java in 2012: not really that much better. Working in Java before was like working with a broken bone, and instead of fixing it, the frameworks were painkillers. So we still had a broken bone, and then we were addicted to Vicodin.
We are also in a similar boat in cloud and virtualization. Lots of little tools that are supposed to make our life easier, but most are extremely fragile, because we build the infrastructure and sell it to the world before we understand the problem. The entire ecosystem around docker and coreos is the wild west. And then there are the distributed databases: dozens of options, less than a handful that handle a network split in a sensible way. Just read some of the posts in the Jepsen series.
Sometimes we solve a problem well, quickly, and it's solved for good. Other times it feels like all this effort from the community amounts to being stuck in the mud. In those cases, the tools don't feel like stepping stones, or foundations to build upon. It's a bunch of different people making infrastructure who fail to learn any lessons from their competitors.
If anything, what is amazing is that we manage to get systems that work in some fashion, given that we are using so many broken tools.
And I certainly don't agree with your assessment that any problem has been solved "for good."
What I see happening is a natural evolutionary process informed by the independent actions of millions of people, versus some hindsight-fueled speculation about what the value of central planning would be if it could be done with perfect knowledge. Basically, one works, has worked, is working, and will continue to work (and not just in software); the other is a fantasy brought about by the human brain's biases, one that has led to things like the USSR and China's current gov't, amongst other travesties.
It's amazing how complicated many multi-project, multi-levels-of-indirection, rube-goldberg-machine-like solutions are for accomplishing the most trivial of tasks. The ratio of actual "doing the required functional work" code to various "architectural best practices" and other plumbing is, I'd speculate, often in the neighborhood of 1:20.
Both electric power and railroads were deployed faster.
Technology stocks are staying hot, but nobody thinks of IT as effective. In anything that isn't a technology company, the IT department has rapidly become the slowest moving part of the company. Why hold on to the moniker like it means something?
But there is an exception to his argument that he should have acknowledged for the essay to really take hold: there is one area where we can work towards clear objectives that actually help solve problems we don't understand. That area is making existing solutions more efficient.
Moore's Law has carried a huge load in the advancement of IT. Ideas that were simply speculations become easy-to-accomplish when technology gets commoditized. Things that used to take months can happen in seconds. That opens up all kinds of new possibilities.
IT is full of folks making stuff more efficient. That stuff can be hardware, software, or just "things people do", like hail a taxi. The more efficient we make everything, the easier it is to create and combine stepping stones. That takes a good idea and makes it even better.
It happens a lot. "X for fun and profit" and "Y, and so should you" are other examples.