Twilight of the programmers? (danielbmarkham.com)
85 points by signa11 on June 23, 2023 | 71 comments



The idea is right, but his VIN example is poor. If the system doesn't actually reflect reality, then the system is poor.

A better example might be someone asking for, say, a report that shows all the part-time employees who are getting time-and-a-half overtime pay. Which makes sense at first, until you consider that an employee working more than 30 hours (or whatever the number is) is classified as full time, and it's not possible to get overtime unless you're working more than 40 hours. So the programmer then has to go back and clarify: "do you want to find employees who were hired as part time but are now classified as full time, or are you interested in part-time employees collecting holiday/danger/etc. pay multipliers?"
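To make that concrete, here's a minimal, hypothetical sketch (the field names and thresholds are made up for illustration): the report as literally requested comes back empty, because the same hours that trigger overtime also reclassify the employee as full time.

    # Hypothetical employee records; names, fields, and thresholds are illustrative only.
    employees = [
        {"name": "Avery", "hired_as": "part_time", "hours_last_week": 44},
        {"name": "Blake", "hired_as": "part_time", "hours_last_week": 28},
        {"name": "Casey", "hired_as": "full_time", "hours_last_week": 45},
    ]

    FULL_TIME_THRESHOLD = 30  # working more than this reclassifies you as full time
    OVERTIME_THRESHOLD = 40   # overtime only accrues past this

    def current_classification(emp):
        # Classification derives from hours actually worked, not how the person was hired.
        return "full_time" if emp["hours_last_week"] > FULL_TIME_THRESHOLD else "part_time"

    def gets_overtime(emp):
        return emp["hours_last_week"] > OVERTIME_THRESHOLD

    # The report as literally requested: "part-time employees getting overtime".
    naive = [e for e in employees
             if current_classification(e) == "part_time" and gets_overtime(e)]
    print(naive)  # [] -- empty by construction, hence the clarifying questions

    # One plausible reading after clarification: hired as part time, now collecting overtime.
    clarified = [e["name"] for e in employees
                 if e["hired_as"] == "part_time" and gets_overtime(e)]
    print(clarified)  # ['Avery']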

Saying "it can't be done" is the kind of thing you would get from a junior employee that doesn't understand the business nor the code. The more senior guy understands the mismatch and asks questions that clarify what the intentions actually are.

So, the VIN example would make more sense if they clarified that the system only has license plate data, and partial or missing VIN information because the data wasn't collected. In that case the work item includes getting someone to do the data entry/etc. work of ensuring VINs exist for all the vehicles, as well as creating the VIN->mileage report.


You might also have employees who are ostensibly "part time" and average 30 hours a week but occasionally work more than 40 and get overtime pay. The real world is full of messiness like this, because what people actually do on a given day is rarely physically constrained by policies and job classifications.


I recall, during the dot-com crash, working at RadioShack. The district manager kept complaining that a bunch of displays were not getting set up. We were a busy, understaffed store, so nothing got done during the day. Finally I stayed until 3am one night and got everything done.

District was unhappy that I got extra pay. No thanks at all for resolving the matter.


The coupling between results and costs is fundamentally broken in the minds of most people who get to make the decisions.


> The idea is right, but his VIN example is poor. If the system doesn't actually reflect reality, then the system is poor.

Here are some examples I've experienced over the last 10 years. I have operated as a primary (but not sole) developer for wireless valve automation products in large agriculture irrigation. Much of our guidance comes from older/seasoned/conservative individuals who like things they can easily understand and who want to maintain farmer/irrigator autonomy.

- Example 1 - Early on, we prototyped some early cloud-centric stuff and the company wouldn't have it. People still had too many recent memories of buggy/down cloud services due to security issues, outages, infrastructure, you name it. So we did some novel things with proximity-based security (rather than the traditional user/pass model), Bluetooth, MQTT, and edge-centric computation. In these environments, power is king, so you go light and small (because you might be running on a solar panel).

Fast forward, and every other player in this space is doing some cloud-centric subscription operation, with lots of REST APIs and the like. Suddenly, the company wants cloud-based controls (as opposed to handheld) and infinite logging/memory. It's hard to explain to them that with an edge-centric design it's tough to just infinitely scale storage and transport, and even harder to explain why the pressures they brought to bear make these things harder and harder to accomplish.

- Example 2 - We've built two generations of proprietary radio network protocols for valve control (the first on top of a Zigbee-like DSM; the second on top of LoRa, not LoRaWAN). Responsiveness, low power, and reliability totally informed this design, as well as the patterns of how agricultural valve operations occur.

But suddenly, with California water use legislation, the end customers think that this protocol should easily be modified to transport constant feed flow/pressure telemetry. And again, it's impossible to try and explain why you can't have your cake and eat it too without coming across as a priggish programmer who doesn't want to advance in the company.


Providing solutions along with the tradeoffs tends to be better accepted. Saying "we need to replace all the existing HW, but then it will work once we buy a billion dollars of AWS" leaves the decision to the people who are paid to make such decisions.

I've been there. I worked for a place that wanted to copy a competitor's feature set with myself and a couple of other programmers, when the competitor had a team of like 200 people working on the same thing. My response was basically: look, I might be able to get a prototype in a year or two, but no guarantees on stability, feature set, etc. If we hire a half dozen people, it might be done in two years, but it's going to take them 6 months to become productive and then you're still going to be missing X, Y, Z.

And I assumed that would kill it, but you know, I ended up spending a year working on a prototype by myself anyway. Which was "fun" and I learned a lot, but it was a giant lost opportunity for the business, because it turns out a half-assed prototype can't compete with a solid, polished product in POCs.

edit: This isn't to say "I was right", because in that case the right decision was probably to hire the extra people and make a significant move into that market, given the state of the existing product, which died just a couple of years later. But the business people didn't or couldn't expend the resources to make us competitive. So they were probably hoping for a miracle, and many small miracles were delivered, but a bunch of small miracles doesn't equal a big one.


The explanation in the article didn't really click for me, and I still don't fully understand why, according to the author, programming is so special compared to law, medicine, etc.

I do of course understand why a given request from the client might be impossible to satisfy, given the abstractions that are currently in the code, but at least in some cases you can feasibly tweak the abstractions or build new ones. OK, that's not always possible, but ... so what?

So, as far as I can see, the only conclusion left to draw is "Abstractions of a given subject can be mutually exclusive." OK ... and? Surely that's Logic 101. After all, they're abstractions and by definition they remove some information that's present in the subject itself, which means that information that's present in some abstractions of the subject won't be present in different ones.


> I do of course understand why a given request from the client might be impossible to satisfy, given the abstractions that are currently in the code

Not quite. A request from the client might be impossible to satisfy given _the requests_ themselves. Maybe not this particular request on its own, but combined with some previous requirement that the system has to uphold.

I have often found myself in the situation where I explained to stakeholders that they essentially have bugs in their requirements, because the requirements were logically inconsistent and self-contradictory. That's what the article is about — it almost never happens in other professions.


> That's what the article is about

I thought the article's subtext, or implication, was about how line-of-business software writers will soon be replaced by some LLM that attempts to follow the clients' requests as best as possible when it's generating its own software domain-model, and simply refuses to ever push back to the client: the ultimate obsequious SWE - which means that software quality will go down the drain.

...which I think is a rather specific prediction of the future and doesn't consider whether automated program-synthesis (whether LLM-based or otherwise) would be able to interrogate proposed requirements, identify flaws and inconsistencies, and require the user to correct their input before generating the code that defines the system.


We have this problem today. I've never worked on a team that didn't have at least one person whose coding ability largely stopped at what you could copy and paste. Pre-internet, they were assigned non-programming tasks or things that could be done via copy-paste-modify of existing code.

Colleagues on my current team of this caliber have Bard open. This is an observed phenomenon, and I'm not surprised they reach for the less capable but incumbent-provided option. I don't have enough data to say this has negatively impacted code quality, but I can rule out most positive results.

From the above example, at least one person wants to disable Typescript because it makes code [from them but of unknown origin] harder to execute. That is not promising.


> I've never worked on a team that didn't have at least one person whose coding ability largely stopped at what you could copy and paste.

I am fortunate then - I have never been in that situation. You have my sympathy.

> Colleagues on my current team of this caliber have Bard open.

wat

------

I don't like using ChatGPT for code-gen (and I haven't tried any of the others like Bard or LLaMA) because I still don't understand how it's capable of doing what it does. I have thrown prompts at GPT to write my code for me (which is tiresome: often writing the prompt takes more effort than writing the code myself), but on occasion I'll throw it something that I think it can't possibly solve or figure out, and I get blown away by what it generates. (E.g. on a lark I asked it to write a C# program to solve belt-layout problems in Factorio, and the moment I saw what it generated was... a genuinely scary and unsettling experience for me. Many people are dismissive of GPT etc., simply saying "oh, that's just because it's been trained to do that" - but I refuse to believe that OpenAI specifically trained GPT to generate C# code for Factorio - and on a whim I asked it to translate that code to Haskell, and that, too, was... something that just shouldn't be possible.)

Yes, the code it generated was incomplete, had missing references, and more besides, but remarkably the syntax it generated (for both C# and Haskell) was 100% correct.

I avoided ChatGPT for a while after that, not wanting to be rattled again - but I recently tried it again, by prompting it to generate code that I was already writing, to see if it knew of a better approach or even as a time-saver (as C# is still not as expressive as I'd like). Specifically, I was writing a ModelBinder for ASP.NET Core, and it generated code that definitely would have "worked", but had plenty of room for improvement (mostly things that static analysis would have caught). I wasn't able to get ChatGPT to generate "substantial" bodies of code: it seems best for generating something like a ~200-line class to solve a specific narrow problem (things exactly like that ModelBinder implementation I was working on), especially when I'm stuck for ideas on how to solve a problem entirely by myself.

------

Another example I just remembered as I write this was me asking ChatGPT a straightforward question about MSBuild: what do I need to put in a `.csproj` file to prevent the project from being built when the host environment is non-Windows? This is a screenshot of the interaction I had with it: https://imgur.com/a/TuejAYn - it starts off on the right track, but hallucinates small details, or even directly contradicts itself, or generates output that is the precise opposite of what I asked.

If a junior developer[1] wrote me something like what ChatGPT generated there, it would be easy to attribute that mistake (i.e. swapping the `==` and `!=` operators) to a typo at best, or absent-mindedness at worst. But when ChatGPT does it, I just don't understand how it's even possible for it to make a mistake like that while it is clearly quite on top of the rest of the problem space. And because ChatGPT makes those mistakes unpredictably, all the time, it means I can't trust the code it generates.

The other reason, of course, is that it eliminates the challenge (if not thrill?) of me getting to solve a problem by myself, using my own innate reasoning abilities - and being the one who gets to solve problems is, I think, part of every engineer's vocation and sense of identity regardless of field (chemical, civil, mechanical, software, etc).

-----

My last thought is that I'm not worried about GPT taking my job or otherwise replacing me: the SW industry already tried that 30 years ago with outsourcing to India, and that didn't kill off west-coast SWEs. I'm only going to start getting worried about GPT et al. when they advance to the point where they're almost entirely autonomous and self-directed, able to gather requirements while challenging assumptions in those requirements, able to perform their own verification, and generating code that is at least on par with what we can write. And I do believe we'll get to that point within 3-5 years from now (e.g. I assume it's straightforward to train an LLM to interrogate incoming requirements and convert them into some kind of normalized domain-model; and already we can wire up GPT to external APIs, so we can run a conversation loop between it and compiler error output, etc.).

-----

[1] I dislike that term, personally - I feel it's unnecessarily condescending or even infantilising. (I'm also fortunate to have never had that as a job title; every company I've worked for either had a numeric level nomenclature or used silly/informal job titles.)


>a genuinely scary and unsettling experience for me

Every time I think of a good thing to ask ChatGPT (free version), it gives me garbage. Not only for programming tasks, but also for, basically, short essays in English on some topic. By garbage, I mean superficially polished but fundamentally incoherent when you pay attention to the details. It simply doesn't grasp the topic, because, I think, there is no mechanism for that. I don't see a whit of difference between code and plain English - they both require something it doesn't have available.

Hypothesis #1 is that I'm just shitty at prompting it.

Hypothesis #2, which I like better for reasons of vanity, is that I think of new ideas or questions constantly and most people do not. If you only ask about things that already exist on the internet, then you will be tricked.

>when ChatGPT does it I just don't understand how it's even possible for it to make a mistake like that while it clearly is quite on-top of the rest of the problem-space

It literally only makes plausible things, true or false. It generates falsehoods that are adjacent to truths or at least to human-produced utterances, in an abstract space. That space naturally has a huge number of falsehoods swirling around every fact it digested. How is that not intuitive, and intuitively so dangerous as to be practically useless? It is inherently the best bullshit, the most superhumanly crafted bullshit, by the much quoted definition of bullshit I first read on HN.

But I do keep trying it again and again, at intervals, to see what it has to say.


> ChatGPT (free version)

You shouldn't jump to conclusions before you try GPT-4


ChatGPT is easy to understand. It's a bullshit-generator-as-a-service. It's a very complicated probability model which determines which sequences of words are most accepted by humans. You put in some words and it generates the most acceptable next word, and the one after that, and the one after that, ad infinitum. (If you want to play with a GPT model in this style, I suggest NovelAI for a small monthly subscription price). Now, *Chat*GPT is this same thing dressed up to look like a question-and-answer machine. They probably just stick some kind of delimiter between messages.
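As a toy sketch of that loop (everything here is invented for illustration; a real model's next-token distribution is a huge learned function of the context, not a uniform table):

    import random

    def next_token_probs(context):
        # Stand-in for a trained language model: given the text so far, return a
        # probability for each candidate next token. The real thing conditions on
        # the context with billions of parameters; the loop around it is the same.
        vocab = ["the", "answer", "is", "42", "probably", "."]
        weights = [1.0] * len(vocab)  # uniform here; learned and context-dependent in reality
        total = sum(weights)
        return {tok: w / total for tok, w in zip(vocab, weights)}

    def generate(prompt, max_tokens=12):
        context = prompt
        for _ in range(max_tokens):
            probs = next_token_probs(context)
            tokens, weights = zip(*probs.items())
            # Pick the next token in proportion to how "acceptable" it looks.
            # Nothing in this loop ever asks whether the continuation is true.
            nxt = random.choices(tokens, weights=weights)[0]
            context += " " + nxt
            if nxt == ".":
                break
        return context

    print(generate("Q: what is the meaning of life? A:"))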

I have read an AI paper somewhere that argued that recent improvements to AI are not from new kinds of AI being better, but from new kinds of AI being able to exploit larger amounts of computational power before hitting the point of diminishing returns. And OpenAI's computational power is gargantuan in AI terms (but probably less than the Pentagon uses to detect nuclear strikes or whatever they do).

It happens that correct answers and true facts are very acceptable to humans, so ChatGPT will often produce these, but in fact it doesn't "know" that the answer is correct - only that it's very acceptable. You know what else is acceptable? Confidently stated answers that are subtly wrong.

"Auto-GPT" is self-directed ChatGPT. It already exists. It's basically a bullshit generator that generates instructions and then follows them and adds the results back into the question for the next iteration.


> I refuse to believe that OpenAI specifically trained GPT to generate C# code for Factorio

Why? Isn't it pretty plausible that someone on GitHub has put up a Factorio belt layout implementation in C#?


Doesn't that also happen in just about any profession that involves building or engineering? I don't see how it's unique to code.


Yes, thank you. That's also what I thought after reading clarifications along these lines in the thread.


I don't know, I've never worked in these professions.


The worst this ever got, we had a multi-step form for data input (or as the old farts used to call it, a 'wizard'). Someone added a new requirement that meant that we went from needing data from section 3 to populate section 4, to also needing data from section 4 to populate section 3. I eventually had to draw them a diagram because they couldn't see what the problem was.

In the end we had to both soften the requirement and rearrange the 2 sections, possibly into 3.


> or as the old farts used to call it, a 'wizard'

... Am I old?


Contradictory or inconsistent requirements? I've heard lots of anecdotes around that sort of thing in hardware engineering. It happens whenever abstractions meet implementation.


That's true in a universal sense, but humans don't exist in a universally-logical world.

Yes, all code/abstractions are broken, i.e., all models are broken but some are more useful than others. That, however, is not the point. The point is that these models are inconsistent in an invisible way, even to experienced practitioners in whatever domain the code is in. It's not that the code or abstraction is broken. As you say, that's a non-event. It's that the code can show where the people are broken in ways that they themselves do not understand. That's the new part. That's where the line about the code talking back to the people comes from in the essay.

An abstraction is a person generalizing and codifying something. This is about code correcting abstractions, not abstractions being more or less useful than one another. I believe your directional arrow here is backward from the intent of the essay.


Since the whole discussion (post + comments here) is a little abstract, I wonder whether this short comic sketch might serve to exemplify through caricature what we’re trying to get at:

“The expert” https://youtu.be/BKorP55Aqvg


Interestingly enough, someone made a response video:

https://www.youtube.com/watch?v=B7MIJP90biM

Although, to be sure, the response from management et al will almost definitely indicate that this isn't what they were looking for at all.

[I know an electrical engineer who was part of a team that provided an engine control component to a customer, and the customer's response was that even though the component 100% meets all of the requirements they were told about, it apparently didn't conform to the requirements they were not told about (or perhaps that the customer was unaware of).]


My name is Scott Williamson, and I don't know the definition of a line.

And while it's possible (likely?) that management also doesn't know what the definition of 'line' is either, I'm pretty sure nobody agrees those green lines are red.

A frequent problem in requirements communication is that even when the customer aggressively insists that they understand all of the definitions you are using ("I'm not an idiot."), they don't actually mean the same thing you mean by them.


> It's that the code can show where the people are broken in ways that they themselves do not understand.

Sorry, but I really don't understand how that's the case. Can you give a concrete example? I know you gave an example in the essay, but you didn't go into the specifics and you explicitly said its details don't matter and the reader shouldn't look at them closely.


Disclaimer: UK-based teacher here, not an IT specialist.

About 15 years or so ago there was a slogan in colleges in the UK about 'personalised learning'. People sat in meetings and agreed that learning should indeed be personalised. I used to ask questions like: how do I personalise learning with 25 teenagers in a basic maths class at 2pm on Friday? No answers. The phrase 'personalised learning' had no 'operational' definition. It meant anything the middle managers and inspectors wanted it to mean. A buzzword, if you like. I think the point the original author was making is that writing software to deliver things forces precise, operationalised meanings for things - you can't write code if you don't know what the output is supposed to be.

Below is teaching stuff...

Obviously at 2pm on Friday you present to the median in the room, then keep the more confident going with harder tasks during the individual-work bit of the lesson and give the dazed and confused more structured, skills-based work; that is basic teaching practice. I always got the students to fill in a learning log (piece of paper) at the end of the lesson - a line on the log for the date, 'what did I learn today'(1), and comments.

Turned out that was fine! I was officially doing 'personalised learning'. Who knew?

A well known elearning provider in the UK took the 'personalised learning' slogan more seriously than I did and produced the following...

* Screen based MCQ diagnostic quizzes on Maths and English - carefully designed questions with good feedback on distractors - the quality of this stage is absolutely critical

* Tests were adaptive, so if a student got a lot of questions wrong on (say) percentages, they got easier ones, and if most answers were correct they got harder ones (a rough sketch of that rule follows this list)

* Results logged against micro-topics. After test completed, student got an 'individual learning plan' with links to screen based materials. The 'learning plan' could be printed out and had enough detail to allow a teacher to run coaching sessions for that student. The linked learning materials were an add-on product.
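For what it's worth, a minimal sketch of that adaptive rule (the difficulty levels, step size, and quiz length here are invented; the real product surely does something more sophisticated):

    # Hypothetical sketch of the adaptive difficulty rule described above.
    def run_adaptive_quiz(questions_by_level, ask, min_level=1, max_level=3,
                          start_level=2, length=10):
        """questions_by_level: {difficulty: [question, ...]}
        ask: callable(question) -> True if the student answered correctly."""
        level = start_level
        results = []
        for _ in range(length):
            pool = questions_by_level.get(level, [])
            if not pool:
                break
            question = pool.pop(0)
            correct = ask(question)
            results.append((question, level, correct))
            # Wrong answers step the difficulty down, correct answers step it up.
            level = min(level + 1, max_level) if correct else max(level - 1, min_level)
        # results can then be grouped by micro-topic to build the learning plan.
        return results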

The diagnostic component is widely used in colleges in the UK. The learning plan bit is an add-on not so widely used. I used it and had good results with the students who don't like sitting in classrooms.

(1) one lad put 'I learned how to sleep with my eyes open'. He'll go far I think.


I am having trouble following the thread of your back and forth here, and I feel the article also flounders in its seemingly intended message, but I can give you an example that I thought the article was actually about when I saw the headline: NP-Completeness.

I am unable to remember or find the source but I read a quote by a prominent researcher who quipped that complexity theory was developed to create a corpus of experts to point your manager at when they ask you to “just solve the traveling salesman problem by end of week”.

https://en.m.wikipedia.org/wiki/Computational_complexity_the...


Honest question: you've never run across a situation where various business users are describing realities in the business which are inconsistent with one another in various ways?

Because there's this whole school of thought that you can code around inconsistencies using various techniques, higher levels of abstraction, rule-based code, and so on.

These things are all possible, so whatever example I give I'm completely sure that you'll be able to "solve" it in code.

The issue here, however, is that when we fix these inconsistencies in code we're taking what should be a business or analysis decision and sticking it in the solution framework. That works for a while, until you create some monster nobody knows or can maintain.

I want to help you, but this is a business event, not a coding one, so it's dependent on depth-of-knowledge and general intelligence of your business partners. I'll give you something trivial, with the caveat that there's not a coding response to this that's appropriate. That's the point of the essay.

Let's say you have three customers. For various reasons they can't be in the room at the same time, so you're stuck listening to them sequentially describe some new CRM system you're building. Customer 1 tells you that the system should maintain an estimate of the total value of the new customer in dollars. You add in a decimal field to the system. Customer 2 comes by later and says nope, we need international currency support, so perhaps you add a currency type (insert lots of various possible solutions). Later still, Customer 3 comes by and gee, what was wrong with those guys! We actually deploy in a lot of places where barter is used, so the future value obviously needs to be expressed in farm animals. Maybe you switch to a string, screw 'em. Or maybe you have fun doing lots of coding stuff. We coders can solve anything given enough code.
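Just to make the accretion visible, here's a hypothetical sketch of where "coding around it" tends to end up - one field trying to hold three incompatible ideas of what "estimated value" even means, with the report code left to guess:

    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical sketch: three customers' incompatible notions of "estimated
    # customer value", bolted onto one record instead of resolved as a business decision.
    @dataclass
    class EstimatedValue:
        amount: Optional[float] = None      # Customer 1: "a number of dollars"
        currency: Optional[str] = None      # Customer 2: "no, any currency"
        barter_goods: Optional[str] = None  # Customer 3: "actually, farm animals"

        def as_report_string(self) -> str:
            # The reporting code now has to guess which world-view this record lives in.
            if self.barter_goods is not None:
                return self.barter_goods
            if self.amount is not None:
                return f"{self.amount:.2f} {self.currency or 'USD'}"
            return "unknown"

    print(EstimatedValue(amount=1200.0).as_report_string())                 # 1200.00 USD
    print(EstimatedValue(amount=800.0, currency="EUR").as_report_string())  # 800.00 EUR
    print(EstimatedValue(barter_goods="3 goats").as_report_string())        # 3 goats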

But that's not the point. The point is that these bozos have completely different ideas of what the heck the app is going to do and us coding around that isn't doing anybody any favors. In fact, we're hurting them. We're making a mess at the same time. This disagreement is a business one and they need to figure it out, not us. No amount of fancy coding is going to fix the fact that these guys are walking around with different mental models.

Now, imagine that scenario with 50 symbols instead of just a few. We see logical inconsistencies as coders that would never appear in a conversation if we didn't bring it up. That's a precious gift that we've never had before, mainly because until 50 years or so ago we've never had hundreds of thousands of coders diving into tens of thousands of domains and finding these problems at this level of detail.

As a programmer, maybe it's not important. Maybe it is. Not my call. Things I might think trivial are critically important and things I feel important may be trivial. I'm not the lawyer telling those guys something about the law. I'm a programmer. I make logically consistent stuff in great depth. I'm just a kind of mirror, a type of mirror that is completely new to our species.


So is this just "trying to formalize and build automation for a/some business process[es] reveals ambiguity in people's mental models about that/those process[es]"?

I'm still not sure that this is unique to programming. Lawyers seem like the closest immediately parallel - I've seen lawyers in this same sort of "you're all asking me individually-reasonable questions but you have hidden assumptions that disagree with each other which would render the legal advice I give any one of you somewhat useless or harmful" situation in a large company.

And that seems like very much the same sort of "you're asking for a technical answer to an ill-defined problem" issue.

Medicine has its own "this is the standard playbook, but everybody is unique, and sometimes it's going to be harmful for various reasons, and a good doctor has to be able to spot and navigate that and go off-book" stuff too.

Code and medicine both have the "hardest" intersections of these areas, IMO - legal matters will ultimately get decided by people, not by machines or still-pretty-mysterious biological processes.


I think you are correct that it's not completely unique to programming, but it seems understandable why some might feel otherwise. With the modern world looking to apply code based solutions everywhere imaginable, developers will tend to uncover such conflicts in problem areas that law and medicine will never interface with. Lawyers and doctors are also more likely to hold societal positions of respect and authority that most developers will never possess, so they don't have to get drawn into fruitless arguments quite the same way.


Thanks, I now understand what you were trying to say. Having understood it, though, I don't agree that programming is completely different from any other field or, as you say, a mirror that's totally new to our species.

AFAICS, the issues you describe happen in programming only when you have multiple stakeholders and are a function of stakeholder group size (and the inevitable, consequent politics and miscommunication) rather than any fundamental features of programming as such. And the same issues also crop up in any other field where there are many people who are all making mutually incompatible requirements of a given solution (architecture or product design come to mind as examples).


> Honest question: you've never ran across a situation where varying business users are describing realities in the business which are inconsistent with one another in various ways?

All the time. And whenever that happens it means either the software-system's domain-model needs revising, or the user's mental-model needs adjustment - or sometimes both simultaneously.

> Because there's this whole school of thought that you can code around inconsistencies using various techniques...

"coding around..." sounds like compromising the domain-model and/or brushing awkward details under a rug.

> The issue here, however is that when we fix these inconsistencies in code we're taking what should be a business or analysis decision and sticking it in the solution framework. That works for a while until you create some monster nobody knows or can maintain.

When the project becomes "a monster" it needs to be refactored - and then it will be knowable and maintainable again.

Scope-creep is inevitable in every software project - we know how to manage it.

> Let's say you have three customers. For various reasons they can't be in the room at the same time, so you're stuck listening to them sequentially describe some new CRM system you're building. Customer 1 tells you that the system should maintain an estimate of the total value of the new customer in dollars. You add in a decimal field to the system. Customer 2 comes by later and says nope, we need international currency support, so perhaps you add a currency type (insert lots of various possible solutions). Later still, Customer 3 comes by and gee, what was wrong with those guys! We actually deploy in a lot of places where barter is used, so the future value obviously needs to be expressed in farm animals. Maybe you switch to a string, screw 'em. Or maybe you have fun doing lots of coding stuff. We coders can solve anything given enough code.

In all 3 cases, it is the system's domain-model that is inadequate, not the customers' requests (which are very reasonable, honestly).

The SWE industry largely abandoned waterfall over a decade ago, and I'm also seeing declining interest in SCM (and I've personally never worked at an org that used SCM), so I'm not convinced the problems you're describing are really that much of a problem, if not just "tuesday".

> The point is that these bozos have completely different ideas of what the heck the app is going to do and us coding around that isn't doing anybody any favors. In fact, we're hurting them. We're making a mess at the same time. This disagreement is a business one and they need to figure it out, not us. No amount of fancy coding is going to fix the fact that these guys are walking around with different mental models.

I see that the wider point was about how we should be handling mutually exclusive customer requirements (rather than that specific case you outlined, i.e. handling heterogeneous accounting methods, which isn't really mutually exclusive, IME), but I'm not seeing how that carries you to your conclusion.

> Now, imagine that scenario with 50 symbols instead of just a few. We see logical inconsistencies as coders that would never appear in a conversation if we didn't bring it up. That's a precious gift that we've never had before, mainly because until 50 years or so ago we've never had hundreds of thousands of coders diving into tens of thousands of domains and finding these problems at this level of detail.

> As a programmer, maybe it's not important. Maybe it is. Not my call. Things I might think trivial are critically important and things I feel important may be trivial. I'm not the lawyer telling those guys something about the law. I'm a programmer. I make logically consistent stuff in great depth. I'm just a kind of mirror, a type of mirror that is completely new to our species.

I really have no idea what you're trying to say here...


I've got an idea of what's being intended here, but it's sort of... 'obvious' in a way. OP, tell me if I'm way off.

IT/dev folks sometimes have a unique high-level view of an organization that many others don't have. Personally, I've been in multiple orgs where I'm sitting in with various depts, and being asked/tasked with certain things. These things are in direct conflict with other depts' requests, but I'm the only one who knows that, because I'm the only one who is jumping between departments.

I can raise these conflicts as issues that need resolving. Sometimes it's recognized and conflicts are ironed out. Sometimes not.

That may be one example at one level of what OP was trying to get at(?)

Even within a dept, on a single request, the more questions we ask for clarity, the more we can uncover logical states that aren't blindingly obvious to others. Again, you can raise that and look for resolution (ideally, before you start), but you may not always get resolution.

It sort of sucks sometimes because you can be seen as a know-it-all or "always trying to bring things down", but you/I/we are often the only ones who have to attempt to reconcile logical contradictions, or at least conflicting requirements that require political capital, not technical chops.

"We have to send people their passwords via email".

"We can never store or transmit plaintext passwords".

I've had to mediate this multiple times between parties. Request 1 is somewhat bad on its face, but trying to resolve these things with multiple parties is sometimes really not fun, depending on the politics involved. And many of these issues come down to politics (and poor management, but poor management is what allows politics outsize influence in the first place).


Yes, exactly.

The only thing I'd add is that raising those issues and trying to find resolution is probably even more important than the code. People can see whether the app is working or not. They can't see when these conflicts happen unless somebody helps them.


> Can you give a concrete example?

Spellcheck. You enable the service to learn from your mistakes and to keep you from looking like an idiot.


There's a big difference between doctors or lawyers and programmers. Doctors and lawyers are licensed professionals who quite often have their own business. Most programmers are employees doing what they're told by their bosses. A plumber is much closer to a doctor or lawyer than a programmer is.

There are some programmers who are not on retainer but are consultants. These types typically tell clients what can and can’t be done and the clients typically respect their opinions, unlike butts in seats programmers.


I think that what we're losing is the habit of mapping our squishy human desires onto a formal system. Maybe you didn't choose a toolchain that treats it as a formal system. There isn't a proof assistant to be seen for miles in any direction. But when the guy says "give me foo" and you say "that's impossible", you're consulting a formal system and translating a proof back into that squishy human-desire language.

The author is maybe a bit too attached to the idea that the system is an ideal representation of the real world. Given a landscape of systems that might represent it, probably there's one that lets you give the guy what he wants, and probably it's not the one you've built. Spend enough time building something and it changes how you see the world, so that's an expected bias.

I agree that it's a loss. The statistical approach used by AIs, or by any sufficiently complex system of customer-facing employees (which likely isolates the customer from the programmer), tends towards producing responses that are likely to make the guy go away, which is not always the same as responses that give him what he asked for.


I don’t know if I’d agree just yet. In the past, I’ve generally found that the better I got at something, the less time I spent in it. The better that I got at coding, for example, the more time I’d have to spend (proportionally) in ops, design, marketing, etc., all the things that weren’t coding.

I think that the better that AI gets at churning out the dumb stuff, the more that we’ll actually spend time on interesting things. I think the real hazard is that indeed, people will churn out lots of dumb stuff. AI can write lots of code. Can it accurately read lots of its own code? I think the difficulty in managing a large code base is what will keep people writing actual code.


We can consider the history of self-programming computers to see how this plays out. The first self-programming computer system was called an "assembler". Later ones were called "compilers". It played out about how you have predicted.


Doesn't sound all that different from other professions. When a client asks for something logically impossible, it's a case of them not being as good at creating logically consistent models. The domain is about as irrelevant as whether the totaled car was going to work or to a birthday party.


The difference is in the level-of-detail.

Humans can't create logically-consistent models in-depth, no matter what the domain. Programmers, at least old-school domain-model-driven programmers, live in a world of constantly reminding people of this.

Put differently, "logically-impossible" is a relative term. It's relative to human intelligence, not domain area.

This is much akin to chess, where really smart people may be able to track a dozen or so symbols in a mostly-consistent way while others may just track a few. But nobody tracks 50, at least without some sort of logically-consistent computational domain model.


So the key difference is that in law, the state enforces an outcome whether it's logically consistent or not, in medicine death enforces an outcome in all cases, but in computer programming if something is illogical you have to deal with it on its own terms (either fix the world and start collecting vehicle mileage monthly or give up on the report)?


Yes.

Another way of stating that is that all professions (aside from math-based ones) are inconsistent at some level of detail and have various workarounds for that.

Even accountants are paid to give "accounting opinions" about the various legalities and the appropriate nature of the books. Looking across a bunch of domains, once you get people involved, a lot of fluff accumulates around the edges of the problem domain that throughout history didn't matter. It still might not matter. Or maybe it does. We're the first to spot it.


From what I can see, I agree with your point here.

I've had a parallel thought for a few years now. Historically, if you wanted something done and you weren't going to do it yourself, you needed to find someone else to do it for you. Imparting the exact specification of what you wanted them to do is impossible, so you give them some highlights, explain some edge cases, make sure they understand the high level, and off they go to do ... something. They'll figure it out, I'm sure.

For simple work, anyone would do. For complex work, it pays to have hard nosed managers with good memories who aren't afraid to yell at people 'slacking off'. And for the truly difficult work you need aged experts who trained under the best for decades and have practiced for decades more (and who have their own students following them wherever they went).

So we have all sorts of skills and traditions for explaining to people what we want to happen.

Explaining to a box of sand and copper on the other hand .... exposes some logical flaws that simply did not exist before.


The company going bankrupt could be the equivalent to death in medicine.

I think you hit a good analogy with medicine, since death is not the only bad scenario. Following this analogy, technical debt could be akin to having to medicate for life.


> Humans can't create logically-consistent models in-depth, no matter what the domain.

One of those inconvenient truths. I've found in my practice that computers, to a degree, can be a crutch that helps with that. Once your model becomes inconsistent, your program will stop working properly, whether you noticed the inconsistency yourself or not.


> Humans can't create logically-consistent models in-depth

Please clarify what you mean by a "logically-consistent model", and please explain what you consider constitutes an "in-depth" model. Can you give some concrete examples of this?


What this Daniel Markham has done is describe the job of software architects. Software architects are supposed to keep the business from logically painting themselves into a corner. They look beyond the current project and the current requirements to ensure a system is being built that can be used to easily solve future business needs.

It's a thankless job because you're saving people's butts years from now. Fortunately, I've been at this long enough and have saved people's butts enough times they think I'm prescient. Nope. I just think things through. All the way through. Beyond what you're asking right here and now.


It's not thankless if you have solid engineers and engineering managers. And if you don't have those things (at least somewhat), they're liable to make a complete mess of things regardless of how good your high-level design is. Good macro engineering vision can't be realized without good micro engineering.


Actually, it's also the job of lead or senior engineers who have to teach their experienced and educated teammates how to read and write a state diagram and how to use it in order to show them why the code/service in question is unreachable from the current state -- and should remain so.


In the past, when someone asked for something impossible, I offered up the closest possible workarounds and let them choose. I give them all the detail they need (but no more) to understand what the limitations are if they ask why it's impossible.

They have jobs to do that were possible before things got automated; they should remain possible afterwards.


I do the same, but sometimes you enter this neverending scope creep in trying to satisfy the original request.

Example -

At one of my former jobs, management became concerned about developer productiveness. I was tasked with creating a UI + backend that would scrape our bitbucket repositories, and output a coder's commit history, LOC, and some other stuff.

I quickly ran into an issue. This company had over 700 repositories - and by my estimate, added about 20 new ones every quarter (it was a large company with an absolutely massive codebase). Well, even doing a scan of ALL repositories would quickly eat up my rate limit with the API, so I came back to the stakeholders and told them what they were asking was pretty much impossible with the current limitations of the API.

"Can't you build a cache? or a database? or scrape it yourself?"

Ok, sure. Iteration 2 I built a database and queried it for the answers the users wanted. It spun into me creating nightly jobs that would scrape repositories and create this massive cache. Then the stakeholders began complaining about "diffs" of several hours - up to 1 business day - and I again ran into limitations with rate limiting. So that led me to try to make the caching system "smarter" and only archiving "hot" repos, and so on and so forth.

In the end, of course, they didn't like the look/feel of the UI and scrapped the project after several months, and I took a performance hit for it. Probably for the best; I know what they wanted to use that tool for (filtering devs by arbitrary "productivity" standards) and it should've died on the vine.


This is a constant problem when managing the expectations of product owners or stakeholders. You tell them "X is not possible, but I can give you Y with Z drawbacks/risks." They say, "Yes let's do that." and when you do, the first thing they say is "Great, but I noticed Z drawbacks happened. Can we fix those?" Arrrrgggghhhhhh....


What's the total disk-size usage of all those repos checked out locally? Sounds like you should just rely on git (or mercurial?) itself for the whole thing. For example, on an EC2 VM that persists the repos on an EBS volume: spin up, call the API only to get a list of created/moved repos, clone and fetch, then from there operate on everything locally.

I've done similar analysis for orgs with the same order of magnitude of repos on both GH and BB. No need to re-implement caches, diffing, or other optimizations that the VCS already handles for you.
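A rough sketch of that workflow, assuming a single API call to list the repos and git doing the rest locally (list_repo_clone_urls, the mirror path, and the per-author metric are placeholders):

    import subprocess
    from pathlib import Path

    MIRROR_ROOT = Path("/data/repo-mirrors")  # e.g. an EBS volume

    def sync_repo(clone_url: str, name: str) -> Path:
        # Mirror once, then cheap incremental fetches; no API calls involved.
        dest = MIRROR_ROOT / f"{name}.git"
        if dest.exists():
            subprocess.run(["git", "--git-dir", str(dest), "fetch", "--prune"], check=True)
        else:
            subprocess.run(["git", "clone", "--mirror", clone_url, str(dest)], check=True)
        return dest

    def commits_per_author(git_dir: Path, since: str = "90 days ago") -> dict:
        # All history analysis happens against the local mirror.
        out = subprocess.run(
            ["git", "--git-dir", str(git_dir), "log", "--all",
             f"--since={since}", "--pretty=format:%ae"],
            check=True, capture_output=True, text=True).stdout
        counts: dict = {}
        for email in out.splitlines():
            counts[email] = counts.get(email, 0) + 1
        return counts

    # for name, url in list_repo_clone_urls():   # placeholder: the one rate-limited API call
    #     print(name, commits_per_author(sync_repo(url, name)))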

Sorry to say it, but it sounds like the requirements were completely sane and realistic.


Not really, no; they wanted certain metrics that could only be obtained via the API. But how intuitive of you to suss out their exact requirements without me even detailing them! You should do my job.


> developer productiveness

> LOC

This definitely should've died on the vine.


Isn't the correct move to ask Bitbucket for a higher rate limit?


> Our profession does structured analysis in a way no other profession has ever done.

Yes, it's because if you go and ask most software companies what engineers are supposed to do, the priorities have more to do with business than with computer science. If instead it worked like other industries, engineers would be expected to focus on software ONLY.

There's no right way to do it of course, however I do think there is a lot of value (even business value) that is lost when engineers have non-technical responsibilities because it misses the point of engineering, which is to have machines solve problems.


This mismatching reality thing is a super common phenomenon in software dev and I think it’s key to internalize that there is often a shockingly big (and acceptable) difference between “the amount a perceived world needs to match the real world to be technically accurate” vs. “the amount the perceived world needs to match the real world in order to be good enough” AND, even more bewildering, very often the “good enough” difference version is the only one that can reach the goal of the software!


I'm more impressed with his giant signature at the end of the post, than the post itself.


In an ideal world, product/project manager types are supposed to suss out these impossible requirements before they even make it to the programmer, but I'm still astounded at the number of times in my career I've been asked to do something impossible, suicidal, or flat-out wrong. I always just imagined it was the same in other professions, but upon thinking further after reading this, I guess it might not be the case.


I don't think that's the job of project managers. They are there to keep the project within scope, schedule and budget, find the resources you need and remove distractions so that you can focus on your craft. At least in my experience, it is expected that the lead or senior software engineer deals with the requirements, identifying missing, incomplete or out of scope items and communicate any issues to the PM so they can be discussed/negotiated with the customer or end user. Some of my best PMs don't even know much about software engineering.


The law example is kindof funny to me because my attorney and I get into these discussions that boil down to “yes, you’re right—the Constitution absolutely does say what you think it says about your civil rights, but that’s irrelevant. The judge is still going to convict you and you can appeal it all the way to SCOTUS and you’re going to loose every time.”


What is it with people misspelling lose lately?


Computers used to be humans skilled in doing careful painstaking calculations for clients that were guaranteed not to contain arithmetical mistakes (which would ruin the computer's reputation). Computers had aids like log tables to do multiplications via the process of addition and so on, with slide rules coming into play more recently.

Programmers are currently humans skilled in the careful painstaking generation of code to accomplish specific tasks, but in the future programmers might be complex AI systems. However, some humans who understand both computation and programming will be required to talk to people who want these services but who don't know how to interact with the programmers and their computers.

Where this is all going is an interesting question... will general AI systems arise that are capable of making their own decisions about what goals the programmers and computers should be targeted on?


> I firmly believe that there was something very, very important there. Something we're losing.

Yeah, it's called critical thinking. There are a lot of people who lucked into a lot of money, but without the critical thinking skills to understand their own survivor bias, or that the world where they merited such a windfall does not, in fact, exist.

> Neither you nor the person you were helping had any idea of how this was all going to work out, and that was just fine. It always worked out, and you always got paid. Mostly.

This is false. You actually knew exactly how it was going to work out, because if you got to the point of framing out the problem, you already performed the required analysis.

You want to know what happened? You solved it. You won. You automated the drudgery, but the way capitalism and public companies are designed is that you need to keep chasing returns on capital. Returns that aren't there anymore because of the law of diminishing returns.

So what do you do? You invent a world where those returns exist. Of course they don't, and the world itself doesn't exist. But you have to invent it to make shareholders and all the workers believe that it does.

So there goes your critical thinking out of the window.


By and large, I agree with the author's observations.

> I think some of the best programs were essays, in the sense that the authors didn't know when they started exactly what they were trying to write.

That's the premise that Lisp was built upon. As far as I know it's the only programming system that comes with the intention to let you write “essays”. All the other languages I know are targeted more narrowly.


Basically, modeling real life means modeling something that is at this very moment, unknowable and from many perspectives, paradoxical. Law and medicine are walled gardens of focus where paradox is filtered out (to a debilitating degree often) to solve specialized problems. We are tasked with using flexible machines to solve general problems, many of which are beyond absolute solutions.


Gödel is snickering in his grave.


Sure, but the role of structured analysis is also done by Product Managers and Architects - if we end up in a world of AI code monkeys, the work shifts to these roles, themselves also assisted by AI.



