But I noticed that some people handled the situation fine. They stayed on management's good side, even though they were failing to deliver along with the rest of us. I will try to distill what I observed them do:
1) They did not fight on the estimates. If a manager forced them into a certain timeline, they registered their disagreement and just accepted the new timeline. I think they realized that fighting would just make the manager judge them less capable. (Pick your battles, eh?).
2) When the schedule slipped, they would communicate it in a way that made them seem more competent instead of less. The explanation usually had three parts: unforeseen events kept us from hitting the deadline, we accomplished some great things in the meantime, and here's why we're in great shape to hit the next deadline! For example: "When we made this schedule, we did not realize that AWS nodes were so unreliable! Despite this, our team has made incredible progress on implementing a fast method of storage and solid compression! We have reworked the schedule to reflect this new information, and we are already on track for the first milestone!"
It's possible that this trick just worked for the specific situation at $OldJob, but I really enjoyed learning it. They seemed to understand that certain explicit rules were not important, but that other unspoken rules needed to be followed. Are accurate estimates important? It depends on the situation! Sometimes wrong estimates can be more valuable than correct ones. Construction companies give wrong estimates all the time in order to win projects. Is staying on good terms with your boss important? Yes! Even if they are total shitbags, being adversarial won't help, only leaving will. These people demonstrated that there are often ways to fix a bad situation by breaking some explicit rules and carefully following implicit ones, and I wish I had the acuity to see these possibilities on my own.
> 2) When the schedule slipped, they would communicate it in a way that made them seem more competent instead of less.
In other words: Enable the bad estimates, then externalize accountability when they can't be achieved. In the past, I called these people "blameless go-getters" because they were always first to volunteer to take on work, but somehow it became everyone else's fault when they failed. If management is asleep at the wheel, it's a win-win situation for them.
You're exactly right that this method works in many companies. Like you said, it's all about understanding what the company actually values. When unrealistic schedules are forced on teams, the exact date might not be important. Instead, it might be leadership's dysfunctional way of emphasizing focus, urgency, and quick iteration. The savvy engineers and managers know how to put on a show that hits these key points while steadily delivering progress in the background.
Still, there's no escaping the fact that this is dysfunctional. More importantly, it doesn't have to be like this. It's eye-opening to move from a dysfunctional company like you described to one that values honest communication and understands engineering project management at the executive leadership level. When hiring someone out of such a company, it can take some time to break them of bad habits around schedule misdirection and estimation dishonesty.
Those people are basically doing their managers' jobs and handing them the results. If the managers aren't technical, they would be completely unable to do it themselves and are probably very afraid of somebody finding out, so having somebody do it for them brings a lot of confidence.
> ... there's no escaping the fact that this is dysfunctional.
Even in places where people pretend to follow a strict "Agile" methodology, there's often a level of management that is brokering deadlines and promises for the "completion" of the whole damn thing.
To see it in contrast, think for a moment about a well-run hospital. At a hospital, the people who make the key decisions about cases are medical professionals, not managers. If a surgery was supposed to take 4 hours but takes 12, well, that's how long the surgery took. Everybody recognizes that the 4-hour number was an estimate, and estimates are not commitments. The most important thing is patient health, not manager feelings. Managers help organize the work, but they do not control the work.
I would love to see software development become a true profession, where stroking manager egos by making them always feel correct and in control is not the most important thing.
The managers are held accountable by their bosses, but usually what this boils down to is teams not doing what they said they would. Which of course happens from time to time because as the article mentions we're all pretty terrible at estimating and shit happens sometimes. It makes me nervous to think about changing jobs because it sounds like this is NOT the way it is in most other places...
I work in a place like this. It's a horrible idea. It's politics 100% of the time for managers, because there's no other way to climb for them. Welcome to half-brained initiatives and goalpost technologies being championed rather than ROI exploration and derivation, because the managers that do get stuck on projects that cost money don't want to talk about that.
Now extrapolate that to working in BigCo, which has tens of thousands of employees worldwide, with each country having its own unique unspoken rules and hidden undercurrents. The greatest lesson I learnt is the importance of giving people the benefit of the doubt unless they've really proven they are a bad actor, because over and over again problems turn out to be cultural at base.
- CYA above all. The chances of a significant project succeeding in such an environment are pretty slim. Prepare your umbrella from the get-go, for when it inevitably hits the fan. Some heads will have to roll, and you will not be easy pickings as you have been prepping for this since day 1.
- Greenshift like there is no tomorrow. Your manager is going to anyway until 80% of the budget is spent, and the last thing she wants is someone pooping on her parade. Always be positive, but make sure not to get caught in factual lies. Remember, you don't want to be the one easily thrown under the bus when she needs to find a scapegoat.
I chose not to stay in such environments.
Your points illustrate:
- Communicating your concerns clearly and in a timely manner,
- while also committing to do the work as it has been agreed to the best of your ability,
- and also communicating your progress clearly and regularly.
This ought to be valued by any employer.
Task X will take 2 weeks.
Task X will take 2 weeks of development time and 6 weeks to test, validate, productionize and roll out.
Task X would take 2 weeks if we drop everything else we're doing, but because we can't, it will take about 12-18 weeks to finish.
If you use this method, it's absolutely critical that the team always focus on relative effort, and not on time. If you want it to continue to work, you also have to be very careful how you communicate with team members regarding their progress and estimated completion dates - don't tell them that their component is "late" - this will make them think in terms of time next time you're estimating something, and will ruin your estimates.
This doesn't work. The problem is that you get periods where the work is relatively easy overall so 5 points ends up being tasks that take 2 days. Then you pick up new difficult features and estimate everything relative to all of the new hard tasks and you end up with 5 points meaning 5 days.
Sure you can gauge relative difficulty of tasks between each other, but in a regular developer's day-to-day life the range should really be 1-100 or so rather than making stuff fit into 12 points or less.
2. Don't use a short time interval or moving average to determine your points->time mapping. You're looking for the long-run average mapping, not a specific estimate for this developer today.
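To make the long-run mapping concrete, here is a minimal sketch. The history data is invented for illustration; the point is that the conversion pools all past work rather than using a short window or a per-developer moving average:

```python
# Hypothetical history of (story points, actual working days) per task;
# the numbers here are invented for illustration.
history = [
    (1, 0.5), (2, 1.0), (3, 2.5), (5, 4.0),
    (2, 1.5), (5, 6.0), (8, 9.0), (3, 2.0),
]

def days_per_point(history):
    """Long-run average mapping: pool the whole history rather than
    reacting to whatever the last sprint happened to look like."""
    total_points = sum(p for p, _ in history)
    total_days = sum(d for _, d in history)
    return total_days / total_points

def forecast_days(points, history):
    """Translate a relative-effort estimate into calendar time at
    planning time only -- never quoted back at the team as a deadline."""
    return points * days_per_point(history)
```

The easy-sprint/hard-sprint drift mentioned above is exactly what the pooled average smooths out, at the cost of being slow to react to real changes in the team.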
I think we should always have reference tasks in front of us -
Step 1: Figure out how to do this
Step 2: Do it
Estimation is incremental - sometimes you need to spend some non-trivial effort gathering the information you need before you can estimate the relative effort in building a system.
This is the crux of the issue.
On average you can estimate six months of work if you spend two weeks chopping it into tiny bits. Rarely does management allow that to happen; developers are rushed in with aggressive randomness.
These "developers are bad at estimating" pieces are mostly people who try to estimate effort from complexity, which is the wrong approach a priori because complexity hides the project unknowns. This methodology, along with all other comparative methods, is just a Gaussian curve built out of gut feelings: sometimes it converges to a realistic estimate for large enough values, but more often than not the center just shows the developer team's bias.
Throughout my career, tasks which can be estimated relative to similar or identical tasks from recent history are the exception, and the rest is indeed gut feelings on complexity.
Step 4: Go to step 1
I'd be interested in reading the research you're referencing, if you have something specific in mind.
Specifically, the PSP (Personal Software Process) suggests using LOC as a proxy for size. e.g., a Small task might be 50 LOC, 200 LOC is medium, 700 is large. Then you can take those T-Shirt sizes and compare them to tasks you have done in the past and use that historical data to predict completion times.
I'm simplifying it somewhat (e.g., there are also Complexity inputs), but the process does work if the organization is disciplined enough to follow it.
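A rough sketch of that size-based prediction. The LOC thresholds and the history are made-up illustrations, and the real PSP/PROBE procedure is considerably more involved, but the shape is: fit historical size against effort, then evaluate at the nominal size for the new task:

```python
# T-shirt sizes mapped to nominal LOC, per the rough figures above.
SIZE_TO_LOC = {"S": 50, "M": 200, "L": 700}

# Hypothetical historical tasks: (actual LOC, actual hours).
history = [(40, 3), (60, 5), (180, 12), (220, 16), (650, 45), (750, 52)]

def predict_hours(size, history):
    """Least-squares fit hours = b0 + b1 * loc over past tasks, then
    evaluate at the nominal LOC for the requested size (a simplified
    PROBE-like regression)."""
    n = len(history)
    mx = sum(x for x, _ in history) / n
    my = sum(y for _, y in history) / n
    b1 = (sum((x - mx) * (y - my) for x, y in history)
          / sum((x - mx) ** 2 for x, _ in history))
    b0 = my - b1 * mx
    return b0 + b1 * SIZE_TO_LOC[size]
```

The discipline the parent mentions is mostly in recording actual LOC and actual hours honestly enough that the regression has anything real to fit.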
FWIW, I "feel" the big issue is with loosely defined requirements which shift as development progresses.
And yes, unmanaged, changing requirements are a major problem, but even with proper requirements management you still have the time estimation problem.
It also didn't mention Evidence-Based Scheduling.
Also, the evidence-based-scheduling article doesn't mention points either.
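For anyone unfamiliar with it, Evidence-Based Scheduling works from historical velocities (estimate divided by actual) and a Monte Carlo simulation over them. A toy sketch, with invented velocities and task estimates:

```python
import random

# Toy Evidence-Based-Scheduling-style simulation. The velocity history
# and remaining estimates below are invented for illustration.
velocities = [0.5, 0.7, 0.8, 1.0, 1.1, 0.6]  # past estimate/actual ratios
remaining_estimates = [4, 8, 2, 16]          # hours for remaining tasks

def simulate_totals(n_trials=10_000, seed=1):
    """For each trial, assume each remaining task hits a velocity drawn
    at random from history; return the 50th and 95th percentile totals."""
    rng = random.Random(seed)
    totals = sorted(
        sum(est / rng.choice(velocities) for est in remaining_estimates)
        for _ in range(n_trials)
    )
    return totals[len(totals) // 2], totals[int(len(totals) * 0.95)]
```

The output is a distribution of ship dates rather than a single number, which is the part most point-based processes quietly drop.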
However, in a large project, you can really only plan the beginning of the project this way. Eventually there are too many unknowns and you risk all the downsides of waterfall design. In my experience, a project like this has a burn down chart that goes UP over time (i.e. chunks are added faster than they are completed). This makes estimating the project impossible, especially if the chart is still concave-up (i.e. the rate at which it increases is increasing).
If you estimate large chunks, then you are less likely to add them over time, but you also won't be as good at estimating them. I think this is a good method for determining if a project is closer to a 1 year project or a 10 year project though. E.g. if you have 10 large equal chunks in a project and it ends up taking you 6 months to complete the first 1, you probably aren't going to finish by the end of the year, and you should evaluate if you are happy spending 5 years on it before you spend any more time.
I think? Anyone confirm/deny?
In fact if we were being honest with our managers we would be scandalized. We base our estimates on something like an 80% chance of getting something done (which already improves on the 50% chance by roughly doubling our estimate) and then every engineer I know has some informal rule about doubling or tripling the estimate before they hand it off to their manager. Those managers often add their own 20%. Then their managers see that something is up so they slash 50%. More than half of any project—perhaps even three quarters or more—is estimated as safety-factor. You would expect that if we were getting it wrong, we would be getting it wrong the other way, delivering early. The fact that we are late so often is a scandal in its own right. Adding more time buffer is clearly not helping much, since we have such a surfeit of it already. We must have wasted that safety buffer; there is no other explanation for how it disappeared.
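Stacking the multipliers from that chain on a hypothetical task whose true median completion time is 1 unit bears the claim out:

```python
# Back-of-envelope check of the buffer stacking described above, using
# the multipliers the comment mentions.
median = 1.0
estimate = median * 2.0  # an 80%-confidence estimate roughly doubles the median
estimate *= 2.0          # the engineer's informal doubling before handing it off
estimate *= 1.2          # the manager's own 20%
estimate *= 0.5          # the manager's manager slashes 50%

buffer_fraction = 1 - median / estimate
# estimate == 2.4, buffer_fraction ~= 0.58: over half the quoted schedule
# is safety factor, and yet projects still run late.
```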
And once you have set up these luxurious margins of safety it is already too late. Why is the engineer adding that extra factor 4x or whatever it is? It is because management is already at cross-purposes to engineering. That engineer is adding that safety because they do not feel safe without it. Any insufficiently safe ski-lift in active use will see hundreds of buggy ill-specified implementations of safety harnesses: I will go on this ride but I am taking some precautions.
The starting assumption has got to be that it is OK to fail on the deadline. This cannot be because of hidden information but must come as a result of trust: “we are eating into the safety buffer, that is totally OK and that is what it is there for, but I want to do whatever I can now to get you out of meetings and approach-planning and to fill out your timesheets for you and heck, even plan an amazing weekend between you and your fiancee to alleviate relationship stresses—and take any other distractions and loads off of your plate so that I can get your single-pointed focus on this feature and we can finish this.”
If management makes themselves that safety harness, then there is no cross-purpose.
Or conversely, stay in a boring-but-successful role too long at a company because they're able to play the game leisurely.
The problem is ultimately that the management buy-in opens management up to getting taken advantage of. It's hard to balance accountability when management, via HR, holds all the power. Peer reviews are rarely more than a formality, used to reinforce the manager's personal assessment/feelings.
Or you just accept the research that people aren’t able to estimate time, and so it wasn’t a safety buffer on top of a real estimate so much as trying to compensate for the gut understanding that the estimate is pulled out of the air.
How likely is it that a surprise works in your favor? In my experience what you don't know rarely does. So my mental model is that completion times aren't normally distributed; they have a long tail. Additionally a task can't take negative time. This makes estimation tricky because a task can take much longer than the average. At my company we typically report an average and then explain why things took longer.
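A quick simulation shows why reporting only the average is misleading. Here completion times are modeled as lognormal (never negative, occasional blowups); the parameters are arbitrary illustrations:

```python
import random
import statistics

# Toy model of long-tailed completion times: lognormal durations.
rng = random.Random(42)
durations = [rng.lognormvariate(1.0, 0.8) for _ in range(100_000)]

avg = statistics.mean(durations)
med = statistics.median(durations)
p95 = sorted(durations)[int(0.95 * len(durations))]
# The mean sits well above the median, and the 95th percentile dwarfs
# both -- a single "average" number hides exactly the tail that bites you.
```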
Which research would that be?
Manager says I want an A that does B? How long?
Most of the time estimate is N days, actual is N +/- 20%.
Sometimes it's N days, and you get lucky it's N/3. Instead of credit, you now have a reputation of padding estimates.
Sometimes it's N days, and you go oops, and it's 10*N. Now you are a slow incompetent.
This is why estimation is such a no-win for developers.
For areas that come closer like construction, they have centuries of prior experience and often the contractors are big enough to push back.
To use the example of this article, go to a boatyard and say "I want a 150-foot yacht with a hot tub and a 20-seat theater, and my budget is $10,000." You would get laughed out of the office.
> "Now you are a slow incompetent."
This is also symptomatic of organizations that behave adversarially and assume your intentions to be malicious. In that dynamic, even if you perfectly estimated things, you would still get in trouble for delivering what was agreed on instead of what was "meant to be agreed"...
Usually this is more the case in orgs who view software engineering purely as a cost center, instead of as a business differentiator. When considering a position at a company, this is a good thing to pay attention to.
This is spot on! I've heard this phrase before and it's never made any sense to me. Any business concerned about profit should never have a "cost" center. If that unit is not returning a positive ROI it should not exist. Of course software development costs money just like all other labor, building and materials but it's assumed the value generated by the work product is worth the investment.
Regulatory compliance is a good example of this. Both you and your competitors must be compliant, but past reaching that state there's no additional value add. As a business, you're thus interested in keeping the cost of compliance as close to zero as possible, all cost cutting that does not break compliance is good for the company.
Corner cutting doesn't work on any of the usual "cost centers", and most of them can be transformed into a differentiator.
I think my point is, the cost center vs business differentiator is not a good indicator of the root cause of this problem.
In my experience, this is more of a cultural issue, with it being pushed by engineering managers and tech leads who insist that everyone can be a 10x engineer if they just throw away their life.
When they’re past a deadline, it doesn’t matter why, because they were clearly just being “lazy”.
Estimates truly are a lose-lose for software engineers and they have probably caused me more grief than any other aspect of this field.
Asks the person who punishes people for being wrong.
Nₓ is of course about 1 day past the "absolutely impossible" time. So now we have gone from 50% likely to hit the date to 5% or less. So at best we get yelled at for being "late" to a number that was about what the manager wanted, not reality. Or worse, we finish the visible work on the day of, while leaving a lot of hidden work (e.g., technical debt) for the future, increasing the size and volatility of future estimates.
And that's not even mentioning that all of the conditions of the original estimate (e.g., scope won't change) are breezily ignored even when the date is remembered.
It's a stupid game, and I've stopped playing it.
Care to share the secret of how you did that? :)
And I also wrote something up in response to a previous HN discussion: http://williampietri.com/writing/2014/simple-planning-for-st...
With software, many parts of the business can't "see" the development happening, so they don't know why it would take so long. At the same time, unlike a boat or building, software is more likely to be easily modified. Therefore there is less of a risk to get it all right initially. I do not condone said behavior I am merely pointing out a possible scenario in which it could happen.
It is also true that a developer is more likely to be a pushover than a general contractor. Source: 11 years as a software engineer, and my family has been involved with land and real estate for a long time.
It’s amazing how much this helps build trust. It’s hard when there truly are big giant components that take a long time between commits and builds. But I think those are much rarer than programmers think and say.
That's a massive ask unless management stops using their old style of management immediately while learning, instead of slowly transitioning. There is zero reason for employees to stop protecting themselves from their incompetent managers. This is especially true in software, where the average job tenure is much shorter, so you get less of a benefit from investing time into helping train a boss.
I’ve seen projects with untrustworthy devs and untrustworthy managers. Oddly sometimes that worked. But I’m too old to work in hellish environments for pay.
Learned that the hard way on my first lead role.
Him: How long will this work take?
Me: A year with this team.
Him: A year? How is that possible? I was thinking six months. <starts going down the entire list, line item by line item, dickering on every little thing>
Me: <after 15 minutes, completely checked out of this conversation>
When you don't have time to do something right you have time to do it over. It ended up taking us 18 months. To this day I firmly believe that because we tried to do it in half the time, it ended up taking us 50% longer. Accounting for our tendency to procrastinate, I believe if we'd gone in saying 12 it would have taken 14 at the outside.
Or you just have a reputation of not having the foresight to pad the release based on your estimate.
How long should it take?
Then I take that number and I ask myself:
How long will it actually take?
Then I double that number and add 30%.
My formula for getting a better estimate from engineers (call it "padding" if you wish), is to take their estimate, double it, and then round up to the next largest time unit.
For example, if they say a task will take 3 hours, I double it to 6 hours, then round up to a full day. If they estimate it at 2 days, I double it to 4 days, then round up to a full week.
This sounds like a joke, but I've found that in the real world, these types of estimates are closer to what ends up happening.
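The heuristic above, written out as a function. The hours-per-unit ladder (8-hour day, 40-hour week, 160-hour month) is my own assumption about what "next largest time unit" means here:

```python
import math

# "Double it, then round up to the next largest time unit", sketched out.
# The unit sizes are assumptions, not from the original comment.
UNITS = [("day", 8), ("week", 40), ("month", 160)]

def pad_estimate(hours):
    """Double the raw estimate, then round up to a whole count of the
    next larger time unit."""
    doubled = hours * 2
    for unit, size in UNITS:
        if doubled <= size:
            return (1, unit)
    return (math.ceil(doubled / 160), "month")

# 3 hours -> 6 hours -> a full day; 2 days (16h) -> 4 days -> a full week.
```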
1) doing something that has already been done in a very similar way. This type of software is relatively predictable and estimates are generally reliable, assuming that is being done by the same team that worked on the prior implementations.
2) doing something new where NO ONE actually knows what the specification really is or how hard it will be to implement. Most of the time, people don't even know who all the constituencies that will have opinions or requirements to contribute to the specification. Estimating this type of work is largely a guess in the dark, and there are no reliable ways of producing valid estimates, so it is all based on prior interactions with the group and experience of the senior developers with this type of project.
Most managers think they are in camp 1), but in practice if that was true they'd just buy off the shelf software. You're doing your own because something is different.
Both of my parents worked their entire lives in the construction industry. It's a running joke that this never happens.
Atlanta rebuilt its 8-lane highway segment early.
General Leslie Groves was chosen to administer the Manhattan Project because he got the Pentagon built on time and on budget, and was viewed as someone of superior skill at these things.
Of course that's a nice dream in a management driven environment.
- happy Agile team? That alone will cost you 2 days of work per week due to meetings, planning, etc.
- TDD, with a dedicated testing engineer? No, of course not, you do it yourself! Another 20% of added workload
- A shitload of tooling from linters to bundlers and whatever else that always needs some attention
- Deployment, done by a devops engineer? You must be joking, right? That's also the work of the full-stack dev..
However it seems like developers everywhere are just adding these things because they heard that facebook or whoever was doing it without actually stopping to think if it makes sense for their team and their product. Before you know it you're left with a ball of tool/process mud where every little change requires passing a resolution at the UN to implement and deploy.
Typescript has won the well typed JS war and offers the most practical solution. There are Haskellly and Lispy alternatives that are sexier but typescript feels closer to the metal and working with JS interop is sublime while getting the benefits of types for refactoring, documentation and robustness. Always use it.
React is the best front end paradigm. It provides an excellent way to reason about the front end and avoid messy state and event handlers or bindings going around cascading shit into your ui.
Webpack is cool. You’ll need something to bundle and I feel it’s a good choice. I’ve had it singing some interesting tunes! It’s very flexible.
Also, git goes without saying. And the other implied tool goes literally without saying :) and no, it's not yarn!
Once you get used to these tools it’s just not worth not using them. It’s a one time investment like learning touch typing.
Yes, in JS you just need a script tag, but most languages and platforms outside the web have a bundle/build process, a UI toolkit, and a typed language, so I don't see the big deal in learning these for web dev.
You don't buy a car and say "I want those fancy wheels, yeah!" without considering the cost. But in IT, sure, management thinks most of that is easy or barely costs anything. I think that's the whole point of the article.
- Don't default to Typescript (or any technology for that matter)
- Don't default to TDD (or any dev methodology for that matter)
- Don't default to linters and bundlers (or any intricate tooling for that matter)
- Don't default to complicated infrastructure
A friendly PSA that you can still, in the year of our lordt 2019, create a static site with an HTML file deployed to shared hosting over FTP. You could also create a dynamic site with some PHP thrown in there.
Reach for the minimal process, tech, methodology, tools and infrastructure you need to get the next X pieces of work done, and complicate things only if the pros outweigh the cons.
I'm not picking on you specifically blobs, I'm sure you walked into an organization that employed all these things with seemingly no good reason. The thing is, there's always a reason, it just might not be a great one!
That said, if your goal is to estimate things more accurately, you might need to spend a little more time with your coworkers to define as many unknowns as possible.
Honestly that sounds like hell, we may live in a spiderweb of complexity but it's still a long way from the primitive web of the 90's which provided every conceivable kind of way to shoot yourself in the foot.
The hidden issue is that new web products are becoming increasingly complex due to tight competition, but businesses somehow do not expect that increased complexity means there is a need for larger teams and budgets. All this cruft in the modern web build pipeline is stuff made by developers, for developers, to help us navigate the impossible tasks handed to us in impossible timeframes. Some of these additions are very welcome, but the proliferation of complexity in development in general is mostly due to teams and budgets and time estimates not scaling accordingly.
Scrum is pretty gross, I agree.
Tooling, linters, and bundlers I mean. That's just programming in general tbh. Cmake is probably my greatest headache with C++.
> Deployment, done by a devops engineer? You must be joking, right? That's also the work of the full-stack dev..
I don't know about fancy Kubernetes setups and what not. If you've scaled to that level then yeah, you probably need someone dedicated to things like deployment.
But for the most part I just use shell scripts, git, and make files for deploying our websites. Works pretty well.
Eventually there is no reflection on the things we're not doing great with.
Typescript can be learned to a sufficient level that it has zero drag and saves you time when you need to refactor or understand someone else’s code.
Webpack is a pain to learn but like git you learn it once and benefit again and again.
But codebase complexity can knock estimates for six. Someone's crappy spaghetti design can blow the assumptions in an estimate by an order of magnitude.
And it could be a detail buried somewhere that wasn’t easy to discover. Like finding asbestos in your ceiling as you are about to remodel your house. Except a lot harder to explain to management!
That would mean looking at some things that people don't want to think about. So we try to feature toggle everything instead, and we dedicate extensive resources to keeping the old version of code as hot standby at all times.
These aren't bad solutions, it's just that they should be something we do in addition to being able to deploy quickly, not as opposed to.
If you are talking about decent test coverage, and explicitely about decent system test coverage, then yes, a dedicated test engineer can help a lot.
Another option might be to never do business with cash-starved companies. Founders and execs will be constantly worried about running out of money, acquiring customers, and pulling off miracles, and all that stress will trickle down on you probably for no real benefits.
A key question to ask an employer, then, is how much runway/money they have left, or whether they have a revenue source. It seems fairly probing, but realistically, not everyone wants to invest heavily in a company that might not even be around in 6 months.
Once you realize what's going on, you can turn that into a game.
Do you know the game where somebody asks you a bunch of questions, and you are allowed to answer with anything except "yes" or "no"? It's very hard to do, because it's such an ingrained habit.
If you really want to not give an estimate, make it a game to not give one.
But, give them something else instead. Work out a bunch of questions about uses cases / data volume / whatever, and say "if those were answered, we could build a prototype in a few days that would let us make a more reliable estimate" or something like that.
Another comment: coming up with good estimates is work. The other day somebody asked me if I could come up with a rough estimate for a (poorly specified, IMHO) project, and my answer was: no, I don't have time. If you need it anyway, formulate it as a task in Jira, so that it gets prioritized along with all my other work.
(Fun fact: we estimate our tasks in story points, so then we estimated how much time it would take us to come up with an estimate... :D )
None of the Celtic languages have an equivalent of yes and no. Just make a habit of speaking, say, Irish or Welsh at work. (To an English speaker, that sounds weird at first, but you get used to it quickly.)
Otherwise the engineer is relying on (possibly blind) trust that management won't try to "motivate" him/her into performing miracles (ie. breaking the project management triangle).
Too often the request is along the lines of "How long does it take to build a house?" not "How long will it take to build the house in these blueprints with these systems on top of a mountain that is also impervious to mudslides?". People can generally estimate the former but the latter is all about the unknown details.
> But the reality is that if you can make a probabilistically accurate estimate, then it's likely that the task should have been automated by some other means already. In other words, it's easy to estimate a task that essentially amounts to copy-and-pasting some well-known CRUD API endpoint patterns, but any even remotely creative or novel work is almost guaranteed to be totally unknown.
Well, he sort of goes back and forth, but he includes this:
> The engineer comes back with this simplified description and says he can get a first version produced, but it will take a month instead of 2 weeks.
Which I see some variant of every time I see somebody rail against the unrealistic expectations of software estimation (which, by the way, I’ve been seeing people rail against since the late 80’s to no avail). The implication here is that if the manager had just listened to the developer and accepted his initial estimate of one month, the software would have been done in one month: the developer could estimate with precision, but the manager bumbled along and screwed everything up by trying to negotiate it down.
This is a dangerous position to take unless you’re absolutely sure about your one-month estimate: if you say one month, he says two weeks and you look him in the eye like the alpha wolf say, “no, one month, and no sooner”… you had damned well better be able to deliver in exactly one month. The reality is there’s probably no way to tell, _especially_ if other people are involved, so you’re better off shrugging your shoulders and saying, “yeah, sure, two weeks”, doing as much as you can, and preparing your story ahead of time.
It took 40 minutes to do that estimate.
But, TBF, several important decisions were made during the process of estimating, such as what should be excluded from the task, basic organization, and some research.
BTW the task being estimated was doing time estimates for a project (which came out to be 2-3 weeks).
My boss at my last job told me that he had observed that if he asked somebody "can you get this done by the end of the day", he would get an accurate answer (either yes or no). Any further out, there was no correlation between what they said and what they actually delivered.
The outcomes are so much more predictable / better quality.
I've been mentally moving away from being paid to write software. I can see writing it for my own use, even professional use---I just don't want to be on the hook for ever-changing requirements, decided by people who are often kind, but not, in the end, competent. The best managers understand this and will give you leeway, but this is not a sustainable, repeatable thing. It lives and dies on one relationship.
I wonder sometimes: what if we just all stopped writing software one day, and started just using it? Writing software is a bad deal in a lot of ways---hard, socially isolating, etc---while using it is amazing---the computer does the work for you!
Don't get me wrong, I spent an hour at work today presenting on Lisp macros and loved every minute of it. But a dev career, for many, means a capped income and a razor's edge of apparent competence.
1. Doctor: In many cases, throw away your life and work 60+ hours and also you need to specialize and study long and hard.
2. Lawyer: I don't know enough about the profession. The job doesn't transfer well to other countries.
3. Consultant: 60+ hour weeks are the norm; interviews tend to be based on quick thinking in the high-school arena. To pass the interview, you need excellent and super-quick high-school-level knowledge of math, logic, and social and political skills. This sounds denigrating, but I found it tough to do that quickly.
4. Investment banker: 100 hours
5. Construction worker: your body will thank you later (/sarcasm).
Programmers have a certain set of advantages and disadvantages (I agree with your disadvantages), but how is it worse than other white collar jobs?
Go on a reasoning chain with me:
- what solves the problem nicely is to sell software. Selling good software can be one of the easiest and most lucrative jobs in the world. In practice no particular employee gets that---the company pays enough to motivate, but takes the rest. The solution, then, is to be the software company.
So...start a startup? "What a novel idea Dropit, on HN of all places!" I have actually "done this" (or thought I was doing it) multiple times (failed every time), but looking back I can see a lot of trivial mistakes I made. But at least I can find some---with many code-for-hire fiascos, the mistake was taking the job in the first place.
So my conclusion: accept the job you have, for now, while saving money and trying to have a good, normal life, and put some effort into seeking out new opportunities. FU money is a thing, as is FU market position.
1. Get a job for 32 hours per week
2. Work 10 hours on week days and 8 hours on one weekend day (take the other weekend day off), so that you clock 58 hours per week.
3. Don't take up your vacation days; save them.
4. Take the other 6 months off.
5. Oh, and pay less taxes. You're handing in 20% gross, but net you're only handing in 15%.
I think a schedule like this works for people like me, because I like to work hard and earn my freedom and then relax and doodle around for quite a while (i.e. 2 to 4 months) and then do a small side project and then work hard again.
Since I'm at the beginning of my career, it sounds like an interesting experiment.
Not for my situation, but for others: geo-arbitrage becomes interesting, as you can literally fly to Thailand for 6 months and come back (a cheap return ticket can be found for around 400 euros).
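A quick sanity check on the arithmetic behind this schedule (a sketch with my own assumed round numbers, not the commenter's exact contract):

```python
# Hypothetical numbers: a 32 h/week contract, worked at 10 h per weekday
# plus 8 h on one weekend day. How many weeks until the year's hours are in?
CONTRACT_HOURS_PER_WEEK = 32
ACTUAL_HOURS_PER_WEEK = 5 * 10 + 8  # 58 h/week

def weeks_needed(weeks_per_year=52):
    """Weeks of 58 h work needed to cover a full year of 32 h contract hours."""
    owed = CONTRACT_HOURS_PER_WEEK * weeks_per_year
    return owed / ACTUAL_HOURS_PER_WEEK

print(round(weeks_needed(), 1))  # roughly 28.7 weeks, leaving about 5.4 months
```

So the "other 6 months off" is close but slightly optimistic unless the saved vacation days are added in, which is presumably what step 3 is for.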
Dang, I thought 12 hour days were bad.
This is one of the reasons you often hear about an entire team moving to a new company. They have a dynamic that works, and they do not want to risk it.
That can be quite sustainable; that one (or few) relationships can be long-lasting. If good managers leave and are replaced by bad managers, often quite soon team members will leave to join that manager in their new company - it's a well known fact that the direct manager matters more for job satisfaction than the particular company.
I know people who over the years have worked at 3-4 companies for the same boss (not continuously), and if I was unhappy at my current position, I remember a few previous managers whom I'd call to ask 'are you hiring?' and in the tech field the answer pretty much always is positive.
What's the difference, though? It's insane how much of my work the computer does when I write code these days.
My initial proposal was to build it incrementally: tackle the obviously critical functionality to get something working and ship it, then define and implement additional features on an ongoing basis. I had hoped this would fly, since the company talked a big game about being agile.
But no dice. We had to have a complete plan for all the features currently supported (some of them of dubious value, many of them incomplete and inconsistent in specification), and it had to come with a specific timetable. Under protest, the team and I worked hard to produce high-level estimates, and ended up basically guessing that the thing would take six months.
No dice, it had to be delivered in three. I made a suggestion along the lines of the strategy suggested in the article -- we could identify a subset of features that the team would be comfortable could be delivered within the deadline. It would probably be stronger, I argued, since it wasn't clear that all the added weight added much value.
No dice: everything was priority one. I pointed out something like, "you can't fight the laws of physics". I apparently gave the impression that we'd get it done anyway, though my memory of the conversation was that I was sternly disagreeable.
Our team goes ahead and starts implementing items from a value-prioritized backlog. Fast forward three months, and we have a working system that supports the most important use-cases. We considered it past ready to ship, knowing we'd need to keep iterating. The response from management, predictably, was frustration at the missing features, despite the advance warnings.
Management goes into "high-pressure" mode, and for the next three months I do my best to keep the team insulated. After more or less six months of total development time, we finally replace the prior component. All the users agree that it has far fewer bugs. I and many of my team members grumble that it has far too many, on account of the fact that we weren't given the chance to ship the minimum viable product when it was ready.
I'm not really sure what the moral of the story is.
I've seen plenty of (experienced) businesses and individuals that refuse to work for toxic clients, because there is plenty of other work that pays exactly the same but without all the shit.
P.S.: According to your story, you sound like a good manager :).
Well done for at least attempting to introduce some flexibility. One lesson for the future might be: how to more forcefully stand up for what you believe is the right way to go - but of course this is easier said than done!
To me, it's - never waver from your experience/gut estimates, and a no is a no.
Software has an infinite quantity of numbers to use as tools to build an end product. A programmer's job is to decide what general ideas end up as what specific quantities. A manager cannot decide this, lest he do the programming job himself.
Money is a great tool for measuring quantities of material goods, or the value of material properties. But electricity provides infinite quantities, exchanging money for programmer time is closer to exchanging two forms of currency (binary numbers and dollars) for one another. Code is a constantly transforming river of numbers that we make a draw-down into a bucket that we call an end product. That is not a quantity of material good like iron ore, at all.
The factory model simply doesn't work, and producing code should be treated closer to the stock market than the factory model. Early access/kickstarter/pay-over-time business models in video games represent an example of the constant-transformation model working successfully. Business apps should be built by crazy people who will build them anyway, and money holders should bet & invest in them like stocks whose value rises and falls over time, rather than as promised end goods.
‘we just need a rough estimate, we won’t hold you to it’.
‘Ok based on the 2 minute conversation we just had I think about 3 months’
‘What! That seems way too long’
Other times I’ve been asked for estimates on features even though there is a hard deadline due to some external factor. I really fail to see the point of estimating anything when there is already a decided upon end date. Anyway I usually point out that they will need to just put the features in order of importance and I’ll work down the list. And they should start thinking about what can be cut. This usually leads to protests of ‘we have already cut everything we can’. But I have to laugh as we get closer to the deadline that suddenly not every feature is as important as was originally thought and magically get cut.
Me: You need to prioritize the items.
PO: I cannot, they're all important.
Me: If you do not, we will do them in the order we want, possibly by coin flip, but probably from easiest to hardest.
PO: Fair enough, you get them all done anyway.
Me: That is not a given and you know that, but I will send you an email for confirmation that any of them can be dropped to meet the deadline, okay?
PO: Hold on, can I at least pick a subset that you know will be done?
Me: Sure, and don't stop until you have roughly three equally sized (by estimate) categories: Must-have, Ought-to-have, Nice-to-have.
You've added the qualifier he never seems to add, that someday you might not be in the position to act professional. This is a sad but accurate commentary on the current state of things in our "profession".
On the flipside though I do think a deadline is necessary for everybody involved, developers and managers alike. It really helps to limit feature creep. And it forces people to think about what they really want or need.
Manager: ‘we just need a rough estimate, we won’t hold you to it’.
Me: ‘Ok based on the 2 minute conversation we just had I think about 3 months’
Manager: ‘Ok, I guess then we need to outsource it.’
Me: ‘Ok, then I need 3 months for guiding them through it, and 3 months for fixing their bugs.’
For example, say a manager has an idea to find the largest group of friends in a massive social network. He wants a developer to write an app for that and has a 20K budget and 2 months. You could not write this app with 10 times that budget or time.
How can you determine which group of friends is the largest?
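For concreteness: "largest group of friends" is the maximum clique problem, which is NP-hard, and that is exactly why no multiple of the budget rescues the naive version. A toy brute-force sketch (hypothetical names, illustration only):

```python
from itertools import combinations

def largest_clique(people, friendships):
    """Find the largest group in which everyone is friends with everyone else.
    Tries subsets from largest to smallest, so it takes O(2^n) time:
    fine for a party, hopeless for a massive social network."""
    friends = {p: set() for p in people}
    for a, b in friendships:
        friends[a].add(b)
        friends[b].add(a)
    for size in range(len(people), 0, -1):
        for group in combinations(people, size):
            if all(b in friends[a] for a, b in combinations(group, 2)):
                return list(group)
    return []
```

In practice you would reach for heuristics or approximations with weaker guarantees, and negotiating that trade-off is precisely the scoping conversation the impossible estimate should trigger.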
In short, estimate roughly. Agree to high level deliverables. Assert that technical management needs some control over strategic business decisions, and flexibility in implementation and timelines. If you can't get those things, temper your expectations, or look for a job with more competent leadership. The last point has become my guiding principle. Stop thinking you can "fix" an org from the bottom. You can only fix an org if they hire / promote you to do so, and empower you as such. That's what you ask for (or look for in your managers).
This is usually good enough for him to give a decent estimate to the clients for when they can expect fixes and features. Regardless of my estimates, quite often other, more important stuff comes up, pushing things a week or more.
Now that I can fully agree with. I can count on one hand the number of technically competent _good managers_ that I know. They're few and far between.
At my last startup, we got very good at releasing early and often. What would have been projects elsewhere got broken down into very small releasable units; our average story size was under a day. Very occasionally my cofounder would have us ballpark estimate two different batches of work using arbitrary points, just so he could get a sense of the relative cost of two paths. But we never estimated in terms of dates.
This worked for us because he really took advantage of the speed of iteration. Almost everything we built came with a question. In the next user test, would users understand it? Would they want to use it? Did they react as expected? When we sent some traffic to it, did people engage? Did the right people engage? Did they return to use it later, indicating value? Etc, etc.
The answers to those questions would drive what we did next. Because our goal wasn't to build features, but to make things happen in the real world. And once you get used to continuous learning, it's obvious that planning too far in advance is wasteful. All of the brilliant ideas you had a month ago weren't based on what you learned in the last month. Eventually, sensible people learn to stop producing a lot of plans that never get fully used. And, of course, to stop asking for estimates on them.
Estimates can't predict the future, of course, but good management will know that 'make the deadline, no matter how' is not the only option - changing scope is another popular option, for example.
If we absolutely have to have something by, say, Jan 1, then the first thing to do is figure out the absolute minimum for "something". The way I'll usually explain it: "Put yourself mentally on January first. If there is a feature whose absence would make you delay shipping despite the consequences, put it in the minimum set. If you'd ship without it, then leave it out."
Then we start building, measuring completion toward goal as we go. If the project is in good shape, we pretty quickly should be able to say, "Yes, we'll hit that months ahead of time," or maybe, "The date is at risk, but here's what we can do." As a bonus, we will also quickly have something we can put in the hands of users. Maybe for validation, maybe for early feedback, maybe for revenue!
Then as the date comes, we're just getting the nice-to-haves. We know we can ship any time, because we've been shipping frequently for a while now.
Even after such a high-level decision is made, a broad scope estimate must also sometimes be made. For example: "would we have time to add a chatbot to this product?". If so, you could need to hire people with the relevant experience and have them ramp up. But an estimate in the spirit of "well, must-haves A, B and C are probably going to take no less than X months, and the chatbot would take just as long if not more" lets you conclude that it's better to drop the chatbot idea and invest its budget elsewhere.
But I just don't believe that the high-level decisions you describe can be effectively estimated in calendar terms. Even if you get the execs to cough up sufficient details for real estimates (which they won't), those decisions are being made at the point in the project lifecycle when people know the very least. Any user-focused team will learn a ton along the way that shapes the product, and presumably neither the competition nor the market is standing still. Better decisions driven by learning means scope volatility, which means there's a hard limit on the utility of estimates.
I think the best one can do on day 0 is give reasonable sanity checks and the haziest of ballpark numbers. But that's fine, because execs are making ROI calculations, and there's no point to making your I more precise than your R. And we all know how much of a SWAG business value estimates are early on.
* Tax-reporting software
* Adding a coupon system to your ecommerce software in time for Black Friday
Deliver working software one chunk at a time, however you have to break it down to make it so.
I'm not advocating having engineers estimate the completion date of their daily tasks, I'm just saying estimates aren't totally useless.
For another viewpoint that's not just mine, consider point 6 of the Joel Test, or the methodology of Evidence-Based Scheduling, also by Joel Spolsky (https://www.joelonsoftware.com/2007/10/26/evidence-based-sch...).
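The core idea of Evidence-Based Scheduling is simple enough to sketch: divide each new estimate by a velocity (estimate divided by actual) sampled from the developer's own history, many times over, and report a distribution of ship dates instead of a single number. A rough sketch of that idea (the numbers and names here are mine, not Spolsky's):

```python
import random

def ebs_ship_days(task_estimates, past_velocities, trials=1000, seed=0):
    """Monte Carlo over historical velocities. A velocity of 0.5 means the
    developer historically took twice as long as estimated; each simulated
    total divides every estimate by a randomly sampled past velocity."""
    rng = random.Random(seed)
    totals = sorted(
        sum(est / rng.choice(past_velocities) for est in task_estimates)
        for _ in range(trials)
    )
    return {"p50": totals[trials // 2], "p95": totals[int(trials * 0.95)]}
```

Calling `ebs_ship_days([8, 8, 16], [0.5, 1.0, 2.0])` yields a median and a pessimistic total; the gap between the two percentiles is the honest answer to "when will it be done?".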
As in: My early product hit whatever trend / wave / need therefore I must be a product guy (and not just lucky).
The product guy has a preternatural ability to understand what the masses want. Watching them work -- witnessing their process -- is something to behold. They will steer products in a direction regardless of cost, complexity, or likely outcome.
The outcome, quite often, is to tank their company. Since they don't understand why they were successful in the first place, it's very likely that their success won't last.
But if you're along for the ride, wow, expect the following:
1. You're the greatest (available) engineer we've ever encountered, building super-complicated XYZ is going to take this company to the next level!
2. This is taking much longer than expected and isn't matching up with our expectations but I'm 100% sure of my vision because I'm a product guy.
3. We're running out of money (because the market conditions that gave us early success have changed) and super-complicated XYZ isn't going to rescue us -- because you're a worthless piece of shit of an engineer!
See what happened there?
They're sometimes hard to distinguish from a vanilla bullshit artist. The bullshit artist will tell you how well capitalized he is, tell you he only wants the best (meaning he thinks you're expensive) and then try to slowly whittle your sense of self worth down until you get "the offer":
The offer is game-changing, life-altering for you: Instead of continuing to pay you with money, they're going to start paying you with magic pixie dust. The magic pixie dust will make you rich "when everything comes together."
When you tell bullshit artist that you don't work for magic pixie dust, that's when you learn that, in fact, you're a worthless piece of shit of an engineer.
I actually respect the bullshit artist more: They're bullshitting other people but they know they're full of shit. Product guys, depressingly, bullshit themselves.
> It is our belief that over the 30-plus years of commercial computing, [the industry] has developed a series of sophisticated political games that have become a replacement for estimation as a formal process.
Unrelated: can we not do this thing with the whole left side of the screen being one static image? It's really distracting.
Agreed, I've been meaning to rework this.
Caveat: I work in the financial industry, where the complexity of writing a compiler and whipping up a Tableau dashboard are perceived to be equal.
This is partly why some people are obsessed with very short builds. Some said seven minutes was ideal to avoid the developer getting preempted. Then it was three. Now some shoot for one.
I would bring this up and others would say they had noticed the same thing, but none of us knew why this phenomenon happens.
Then it hit me: it’s just Hofstadter’s Law playing out in the small. If you think you have five minutes, you will start something that you estimate will be five minutes, but it will take you ten, either because you were wrong or you get distracted.
There aren’t a lot of tasks that take one minute, and even if you’re wrong you only lose one minute.
If there's a single line you want to take away from this wonderful essay, it would be this.
Parkinson's law usually takes care of the rest once the resources have been agreed upon.
I'm grateful for "reader mode".
- - - -
There's a very interesting book, "Hollywood Secrets of Project Management Success", that details their system from an IT point of view. Here's a decent review: https://community.dynamics.com/nav/b/navigateintosuccess/pos...
> Inside this system lurks the biggest difference between IT and Hollywood: the movie industry has existed for more than a hundred years already; they have had enough time to develop and establish best practices, and to prove them in practice to such an extent as to tell everyone: this is how you need to make movies; and everyone can trust it works, because it has worked for the whole industry for decades already. The IT industry is very young, and has existed for only a few decades.
(Although, IT is technically as old as the cuneiform-inscribed clay tablet, eh?)
Specifically to this discussion, they (Hollywood) can set expectations reliably because they have enough shared baseline experience of how long things take.
One way to interpret this is to ask yourself, of some new change in process or technology, "Will this stabilize delivery times?" But to even begin to answer that kind of question you have to establish reasonable ways to map between work done and results delivered, and then set up and track metrics. (Which is easier said than done.)
One problem unique to IT is the "interpretability" of our end products. Anyone can watch a movie, but most software requires at least some training to understand and use.
I thought I was brilliant when I came up with the solution of fixing the time frame and estimating the work that can be done in that time frame. Turns out I came up with sprints, 60 years after they were invented.
The fun solution for this would be to give 'hit dice' estimates for tasks: assign a type and number of dice to each estimated task.
How long will this take?
About 1d20 days.
Nobody will be happy with this, but it is the most realistic estimate, because tasks really do have that variability to them.
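A tiny simulation makes the point about spread (a sketch, with parameters I picked arbitrarily):

```python
import random

def roll_stats(n_dice, sides, trials=10000, seed=0):
    """Simulate an 'NdS days' estimate many times and summarize its spread."""
    rng = random.Random(seed)
    samples = sorted(
        sum(rng.randint(1, sides) for _ in range(n_dice))
        for _ in range(trials)
    )
    return {
        "min": samples[0],
        "median": samples[trials // 2],
        "p90": samples[int(trials * 0.9)],
        "max": samples[-1],
    }
```

A 1d20 task has a median around 10 days but a 90th percentile near 18: the same "estimate", wildly different commitments. And 3d6 has a similar median to 1d20 but a much tighter spread, which is exactly the information a single-number estimate throws away.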
The wisest thing said here is:
"the reality is that if you can make a probabilistically accurate estimate, then it's likely that the task should have been automated by some other means already. "
Is there an answer to this problem? Maybe abandon long-term estimates entirely, and keep only really short-term estimates, with frequent updates.
But serious question - for live products especially, you do need to have some sort of schedule where you are launching new features every X weeks. So it's important to know how long your features will take, so you can have a constant cadence of updates. Plus a lot of times you will have marketing initiatives or other things that need to be coordinated with your releases.
So my point is that you cannot just remove estimates. There is a need for knowing when the current sprint / feature will be completed, and being somewhat accurate about it.
I do really like the point about re-framing the conversation to start by asking the manager how long they want the engineering team to spend on the new feature. That will definitely change the dynamic and hopefully should encourage a conversation about what is realistic to do in the timeframe that the manager has in mind, and how the feature needs to change in order to achieve it.
But after that, the engineer still needs to go through and create estimates to make sure what they just agreed on is actually possible, and then those same estimates are necessary to plan out the development to make sure you are on track. So yeah, you can never really remove estimates.
Am I wrong?
The only way you can have new features every X weeks is if those features take no more than X weeks to develop. You can estimate a new feature to take 2X to complete, but that would still mean it can't be released "in time".
Of course, you might still want to have a regular cadence of feature announcements, or at least be able to plan them in advance. But I feel like the best way to do that is to decouple finishing a feature from releasing it.
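One common way to decouple the two is a feature flag: the code ships dark with every regular release, and the "launch" is just flipping the flag on the announcement date. A minimal sketch (the function names here are hypothetical):

```python
FLAGS = {"new_checkout": False}  # code is deployed, but dark until launch day

def legacy_checkout_flow(cart):
    return f"legacy:{sum(cart)}"

def new_checkout_flow(cart):
    return f"new:{sum(cart)}"

def checkout(cart, flags=FLAGS):
    # The release cadence stays fixed; the feature goes live whenever
    # marketing wants, with no redeploy.
    flow = new_checkout_flow if flags.get("new_checkout") else legacy_checkout_flow
    return flow(cart)
```

With this in place, a feature that slips by a sprint just stays dark for one more release, and the marketing calendar never has to know.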
Estimates are primarily useful in deciding what tasks to pick up first. Luckily, that usually needn't be that exact - you don't need to know to the hour how long some development is going to take, just how much faster one thing will roughly be compared to the other. A manager can then decide whether it's worth it to risk picking up a larger task that might provide substantially more benefit than the smaller task.
You can still have releases every 2 weeks, where each feature takes 1-2 months, if you have multiple small teams each working on a different feature at the same time. That is how we do it.
But of course, it's typical that each feature takes an extra 1-2 weeks of development time and many times other devs are pulled off their own projects to help out, so then those other projects are even more delayed.
It is also more difficult to initially assess the skills fit of candidates for knowledge based work, especially those that require creative problem solving, and unlike other engineering disciplines past outputs are opaque and hard to rely on as simple markers of past performance.
For a project, in order to produce a good estimate you need to understand scope, then align it with precedent, adjust for your resources and productivity profile and then view all of that through a risk lens to set probable outcome ranges.
For a programme, in order to produce a good estimate you need to understand and manage the risks, constraints and dependencies across all your projects and ensure that the projected benefits (both hard and soft) are still net positive, meaning the investment makes sense.
From my observations at least it doesn't look like the idea of development as "investment" in a product or service is very common. I'm assuming because time to market is often the ultimate driver rather than cost, in which case, congratulations, you should increase your costs on more numerous and productive resources whilst aligning your strategy and risks to iterate on smaller scopes faster so that dead ends can be quickly parked.
The problem isn't so much the estimation process as it is more generally poor portfolio/programme governance and management practices, and more specifically a lack of risk management and understanding of contingency at those levels. I find IT, and software development more particularly, to be among the worst offenders, but that is because the risk profiles of such projects are vastly different from the risk profiles of other types of work: the productivity of your resources is difficult to discern, and there is little precedent from similar-enough projects (or knowledge of what their variables were), all of which means you really can't produce a reliable estimate with incomplete information.
“We” are usually not at all clueless. “They” tend to be.
It's a linkbait trope to use "you" in a title, because it grabs attention whether the topic has anything to do with the reader or not. That's why "you" is headline writers' favorite pronoun. Combining it with a pejorative ("you have no idea") makes it even more sensational. That's definitely the sort of title we edit.
When we edit a title, we look for a representative phrase in the article itself that expresses its point in a more neutral way. That's what a moderator did in this case. The language comes from the article's own summary of itself: "After all of these years, I finally came to one simple conclusion. With all due respect: we are completely clueless about how long things should take." Reading the article text to find how it states its own conclusion, removing any residual linkbait (such as the superlative "completely"), and making that the title instead is the best way we've found to correct titles that break the site guidelines.
Software engineers aren't being commoditized by being replaced by machines that write software. They're being commoditized by their own frameworks, libraries and tools.
Take the game industry as an example. Twenty years ago, your game company needed a big team of software engineers employed to write a game engine with advanced graphics capabilities (let's assume you want advanced graphics). Today, a single developer can just download Unity or Unreal Engine and have at least the technology available to them immediately (art is different but in many ways similar; automation and process improvements are coming for those jobs too, I'm sure).
So you don't need the same number of engineers for the same result. Sure, you have a big team of engineers employed at Unity Technologies or Epic Games, but that's now a shared resource. That employment is no longer duplicated at the companies that decide to use those engines.
Another example is the push for 'DevOps' and 'Cloud'. Think of all those system administrator jobs and IT departments being made smaller because now you can just spin up a server on AWS or have your CI infrastructure managed by BitBucket.
It's been my experience that delivering business value is taking longer because of this, not less. It's hubris to believe that a single person can be competent enough in all these domains to replace multiple people who focus on specific domains.
By distilling DBAs, Configuration Management Engineers, System Administrators and Software Developers into single people, business is getting shittier products less frequently, and those products incur more operational overhead.
On the other hand, I could also see a reality where, since many low-level problems are solved for you, management expects more out of you, so the number of engineers stays about the same, but you get a higher level of productivity.
I wouldn't put money on this balance lasting forever though. To me, that's too close to dismissively saying "it's different this time", with regards to our profession.
This is still a fairly common phrasing, and in general, people around me generally take it mean "roughly" or "plus or minus a few," which is how one character interprets it. Another character then explains that its actual meaning is, paraphrased, "probably not less than 5 years or more than 500 years."
While I had a vague awareness that "on the order of" was not mathematically equivalent to "roughly", I don't remember learning the concept in such a memorable way. Similarly, "an order of magnitude" is commonly understood as "a lot more than", when it also has a precise mathematical definition.
I think the reason we are terrible at estimating the time something will take is that humans struggle to think in terms of timespans that are actually representative of the variance in an estimate, not to mention the fact that as additional variables are introduced, that variance will almost certainly increase (even if the midpoint decreases). I'm not sure of the causal direction here, but our misunderstanding of terms meant to succinctly express "somewhere between" sure doesn't help things.
(side note: there's also a discussion about designing a revolutionary organization that really clarified my rudimentary understanding of circuits. The book is truly stunning at times.)
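The strict reading described above is just "within a factor of ten either way". As a one-liner (my own framing of that definition, not a quote from the book):

```python
def order_of_magnitude_interval(x, factor=10):
    """Interpret 'on the order of x' strictly: anywhere in [x/factor, x*factor]."""
    return (x / factor, x * factor)

# "On the order of 50 years" covers everything from 5 to 500 years:
low, high = order_of_magnitude_interval(50)
```

Seen this way, an "on the order of three months" estimate spans a week and a half to two and a half years, which is a useful corrective to how casually the phrase gets used in planning meetings.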
Estimates should be for "making the best decision now" and not judging performance. Those are orthogonal concerns.
It's very fair for the business to need to weigh "well if X takes 10 days and Y takes 20 days, and X makes more money, of course we'll do X". That is what estimates are for.
However, if X ends up taking 15 days, the business (CEO, product, even CTO depending on their closeness to the solution) shouldn't/can't decide if that is a performance issue--only the engineer/the engineer's manager/etc. can make that call.
And maybe it is, maybe it isn't.
Granted, pulling this off is really hard; tangentially, I'm now working in the construction industry, which has the very same problem: this home should take 9 months. It took 10! Whose fault is that? Well, that's the wrong question. The right question is how we could have known about that delay sooner/better, and mitigated it if possible, this time and more importantly next time.
If you're interested in working on a "humane" project management system (I just made that term up and it's super early, so disclaimer/etc), reach out </shill>.
Then double everything.
The individual task estimates are often out, but the overall estimate is close... as if, with a population of tasks, there's regression to an accurately estimated mean.
(An alternative explanation is that I over-estimate, and Parkinsonianly, work expands to fill time.)
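The regression-to-the-mean effect is easy to reproduce in a toy model: give each task a multiplicative error of up to 2x either way, and the total lands much closer, in relative terms, than any single task does. A sketch with assumed noise parameters (not a model of any real project):

```python
import random

def simulate_project(estimates, trials=2000, seed=1):
    """Each task's actual time is its estimate times a log-uniform factor
    between 0.5x and 2x. Returns the mean total across many simulated runs."""
    rng = random.Random(seed)
    totals = [
        sum(est * 2 ** rng.uniform(-1, 1) for est in estimates)
        for _ in range(trials)
    ]
    return sum(totals) / trials
```

With ten 10-day tasks the simulated totals cluster near 108 days, even though individual tasks routinely miss by a factor of two: independent errors partly cancel. If the errors are correlated (say, one bad dependency delays everything), the cancellation disappears, which may be where "double everything" earns its keep.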
Granted this is for manual labor where you know exactly what will happen. Estimating more uncertain environments is a lot harder. Nevertheless, I have grown used to using this time-chunking approach for all sorts of new tasks. I think shorter timespans (10 minutes) work better for estimation but for actual work, 25 minute "sprints" are good.
Basically someone stands around with a stopwatch and monitors a process a few times and then gets a feel for the average time the process takes.
You can do the same thing in software with the caveat that you must be doing pretty much the same thing.
e.g., if the last 5 times you had to build the basic framework for a CRUD program in Ruby it took 10 hours, then it's likely that the 6th time it will also take roughly 10 hours. However, doing the same thing in C means that the estimate goes flying out the window.
None of this is rocket surgery. It's "just" that most software teams won't do it for one reason or another.
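A minimal sketch of that stopwatch idea in Python. The task names, hours, and the "no estimate without a few comparable data points" rule are all hypothetical, and the key point from the parent is encoded in the lookup key: the same task in a different language is a different reference class.

```python
from statistics import mean, stdev

# hypothetical history: (task_type, language) -> hours each past run took
history = {
    ("crud_skeleton", "ruby"): [9.5, 10.0, 11.0, 9.0, 10.5],
}

def estimate(task_type, language):
    """Estimate a new task from past durations of the *same* kind of task.
    Returns None when there's no comparable history -- in that case the
    honest answer is "I don't know", not a number."""
    past = history.get((task_type, language))
    if not past or len(past) < 3:
        return None
    return mean(past), stdev(past)

print(estimate("crud_skeleton", "ruby"))  # roughly 10 hours, small spread
print(estimate("crud_skeleton", "c"))     # None: different reference class
```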
Having said this, the idea that you can ask for something in 2 weeks when your eng estimated 4 weeks is ridiculous. I would automatically add 50% to my eng's estimates because their estimates are probably too ambitious to begin with (I fall prey to the same problem when estimating my own work).
If you consistently get undercut in your estimates, get out! I've seen this behavior most often in marketing agencies and game companies. I would not work for these types of people regardless of the money they are paying.
That's what the data in the book seemed to point to. Programmers with no deadlines were more productive than the programmers in the control group. However, there might be data showing otherwise; I would be open to it.
Then there is the other type of PM who will continuously manage back the client expectation. To the layman, this person seems to tell the client (internal or external) bad news on a daily basis; it'll take longer, cost more and you're getting less.
It's the latter PM + team that actually gets the flowers and the cake on launch day and has the happy devs who didn't have to sleep under their desks, while the former team will be near burnout and the client unhappy, even though they probably did technically deliver more (but also too late). Usually that former team won't get more money for it either...
I have seen both in startups and Fortune X companies; I have seen both as contractors and as internal teams. For the anecdotal part: of the companies I have seen or worked with, every one with a PM like the first in an internal product setting failed. For contracting work it can work, but it's a stressful and panicky way of working which often results in one-off contracts.
You inevitably come to something like sprints and good, timely bad-news talks. People with a micromanagement-type need for control are just not going to survive those, as their approach breaks down even inside a 2-week sprint period.
A quick search for 'how to communicate what you want with software developers' turned up what I expect is at least modestly helpful: https://www.entrepreneur.com/article/224816 (2012) but I'd venture that the topic really is worthy of much more in depth treatment.
A key element missing, at least from that post, is: how do you confirm that the developer actually understands the vision? If that can be confirmed, they would then be in a position to modulate the vision against reality, and even contribute some of their own creativity. I suspect that this modulation frequently starts from a misunderstanding, and the divergence can be dramatic.
One thing I noticed in discussions like this one is the amount of excuses made for managers' lack of technical proficiency. This has to stop. If you are in a technical domain and leading/managing technical people, you have to know the domain decently well. There is no alternative. Otherwise you are reducing the effectiveness of your engineering team, and the company as a whole, to your level of incompetence. The resulting effects may not be visible immediately, but sooner or later you will drag the company down to oblivion (e.g. see what happened to HP).
What's hard to predict (at least for me) is writing non-trivial config files, or using some new library or language, or finding a new algorithm. For some of those you might be able to estimate how long the estimation itself will take, but other times you just have to say "I don't know" and time-box it.
Explain why you do this. Flag when variance is screwing you anyway.
99% of Wix users need a website that's just a big phone number and street address that gets indexed in Google. A "used sailboat" is exactly what they need.
If I tell somebody that a full custom site costs, idk, 20 times more than a Wix site, they might not want to pay for that.
I would love to know the context behind this. Beautiful website by the way.
i'm in poland. my type of manager would be a person who accepts me, gives me a clear target and constraints and leaves me alone; a person who does his job; who doesn't bullshit me and tells the truth about whether we're in shit or not; who realises that we are all in the same boat, waist-deep in shit, so there is no point in fighting each other.
of the 16 IT companies I worked in, only 2 had such a person. that's 12.5%.
shouting and batshit crazy behaviors are normal.
someone here wrote that some things are cultural. a lot of stuff is cultural in poland. it's a catholic fundamentalist country, so anyone who solves problems is a problem. people here had no french revolution or industrial revolution; basically it's feudalism in modern dress.
last guy i worked for was freestyling everything. he had no plan. he told me he uses his imagination: he tells people what he imagined and they have to do it. i found out about this when i was trying to resolve communication issues i had with him. he often imagined new stuff and forgot to tell me about it, and during code reviews he blamed me for not doing what he wanted me to do. i ditched the guy because he didn't want to do anything about that problem. he sabotaged himself throughout the whole time. sometimes he had strange outbursts.
there are a lot of such people in poland. a lot of office workplaces are like kindergartens: constant chaos, no plan, big ideas but nobody wants to wait. people who want things done asap usually abuse others, not to mention that they are total morons, because abusing people takes time they don't have. most companies are micromanaged.
the thing that keeps me going is knowing that, as someone said here, this stuff is cultural and that there are better places. although after all those companies i'm kind of like a dog from the pound: i don't know how to trust people. the word "nation" is the smallest possible joke to me. i'm afraid of polish-speaking people.
It's like an expert in tech, business and computer science trying to predict stock prices. I mean, that's the expert of experts failing.
I've had ex-programmer managers say "I just want you to add this graph to the app. I could do it in 20 lines of python." In reality, the request is more like "Add in this plot which is the result of a long running calculation. The calculation has to run in a background thread so the app is still responsive, even though the whole program has been architected with a single-threaded design. It needs some mechanism in the UI to indicate it is making progress. We'll also want a way to cancel the task. Half the datasets are using a different sign convention, so you'll need to automatically handle that. Actually, the data is polluted with garbage, you'll need to spend an unknown amount of time debugging a legacy system to figure out where the bad inputs are coming from in order to understand what can be done to filter that out...so we can have this plot by end of day, right?"
Apologies for ranting about my "internal software" days.
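For what it's worth, the "background thread + progress + cancel" machinery the grandparent describes can be sketched in a few lines of Python. This is a toy stand-in: real UI toolkits add their own constraints (usually only one thread may touch widgets), and the work loop here is a placeholder for the actual calculation.

```python
import threading

def long_calculation(report, cancelled, n=1000):
    """A long-running job that reports progress and honors cancellation,
    so a single-threaded UI can stay responsive while it runs."""
    total = 0
    for i in range(n):
        if cancelled.is_set():
            return None                  # user hit "cancel": bail out cleanly
        total += i                       # stand-in for one chunk of real work
        if i % 100 == 0:
            report(i, n)                 # UI thread draws the progress bar
    return total

cancelled = threading.Event()
results = []
worker = threading.Thread(
    target=lambda: results.append(
        long_calculation(lambda done, n: None, cancelled)))
worker.start()
worker.join()
# results[0] == sum(range(1000)) == 499500 when not cancelled
```

Even this stripped-down version is several design decisions ("how often to report?", "who owns the cancel flag?") away from "20 lines of python".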
"integrate with external API" - had a project where there were several external data sources (financial service providers) to import on a regular basis. One had an actual API, commercial service, good docs - took a few weeks.
The others are...
1. "hey, we'll send you a nightly file, except it's not always nightly, because it depends on someone running the job and if they're not here, you won't get it".
2. "here's our SOAP WSDL" - "this doesn't work" - "oh... try this other one" - "that doesn't work" - "try this one, but just don't use some of the endpoints, because they don't work" - "OK, but this doesn't really work either". Now... intersperse those sentence fragments with a minimum of 4 business days via email (sometimes stretching to a couple of weeks because of 'vacation' time).
3. X was working for 7 months, then stopped. "oh, we changed the file name and format of what we push to you yesterday". no warning, no documentation on what the change is. just... pissed off end users who are now saying "my data is wrong!"
Figuring out how to take data from a file or SOAP or REST endpoint and process it - that's not terribly hard. Figuring out how to deal with more than half a dozen vendors who are not 'really' in the business of providing data, but do it half-assed anyway - there's no end to 'figuring it out', because it's a moving/changing target.
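One way to blunt surprises like #3 is to validate every vendor file before ingesting it, so a silent rename or format change fails loudly instead of quietly corrupting user data. A rough sketch, where the expected column names are entirely hypothetical:

```python
import csv
import io

EXPECTED_COLUMNS = {"account_id", "date", "balance"}  # hypothetical contract

def validate_feed(raw_bytes):
    """Reject a vendor file up front if it no longer matches the agreed
    format. Returns (ok, reason) so the failure can be logged/alerted."""
    try:
        text = raw_bytes.decode("utf-8")
    except UnicodeDecodeError:
        return False, "not valid UTF-8 -- format may have changed"
    reader = csv.DictReader(io.StringIO(text))
    cols = set(reader.fieldnames or [])
    if cols != EXPECTED_COLUMNS:
        return False, f"unexpected columns: {sorted(cols)}"
    return True, "ok"

good = b"account_id,date,balance\n42,2020-01-01,100.00\n"
bad = b"acct,dt,bal\n42,2020-01-01,100.00\n"
```

It doesn't stop the vendor from changing things, but at least you hear about it before the end users do.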
I'm not naming any negative names, but I'll mention that quovo.com was comparatively pleasant to work with - they're an actual commercial service. However, less than a year after we converted a system to use them, they were bought out, and some of the useful functionality seems to have been sunset already. I'm not on that project directly anymore, but I talk to some colleagues still involved in it.
From the client's standpoint, it's all "integrate with external data providers". "You did one, the others can't be that hard". But each provider is a completely separate island of functionality, documentation, responsiveness and professionalism.
For the record, no, you shouldn't be providing me with client SSNs as their identifiers (quovo didn't, but I'm surprised at others that do, in at least one case that's the only way they provide client identifier data at all).
In fact, after hearing this line, no one will work for you. Frame it, make it your motto, and see what happens.