Hacker News

Made me think of this comic by commitstrip [1]

- Some day, we won't even need coders any more. We'll be able to just write the specification and the program will write itself.

- Oh wow, you're right! We'll be able to write a comprehensive and precise spec and bam, we won't need programmers any more.

- Exactly

- And do you know the industry term for a project specification that is comprehensive and precise enough to generate a program?

- Uh... no...

- Code, it's called code.

In the end, we'll just move the abstraction layer to another level, that's it.

[1] http://www.commitstrip.com/en/2016/08/25/a-very-comprehensiv...




Yup. One of the most important things I've learned in my programming career is that reality is damn more complicated than it seems. At all levels - both physical and abstract.

The real problem is that humans imagine things and communicate them at very high abstraction levels, without filling in the levels below. Much like a children's drawing of a car is nowhere near detailed enough to serve as a blueprint for building one, our usual descriptions of stuff don't contain necessary details at all. If you want to build a real thing, all the work to fill in the lower levels of detail needs to be done. You can't magick it away. Today, that work is done 10% by the people writing specifications and 90% by programmers figuring things out. In the future it might be done by a computer - but that computer would necessarily need to be a human-level artificial intelligence, doing all the work programmers do to go from the way we communicate to a working program.


You don't fill in all the lower levels of detail. You likely don't care about the generated machine code, you likely don't care about the details of memory deallocation, you likely don't care about Unicode quirks when you're comparing strings, you're likely not even aware which algorithm is used when you're multiplying two BigDecimals, you likely don't care which characters are allowed in HTTP header names, and so on and so on. A lot of implementation details are already implemented. There's no reason that programs won't be abstracted further away. Sure, it's still programming, but with fewer unnecessary details (so more people can do it).
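As a concrete illustration of the Unicode quirk mentioned above (a minimal Python sketch): two strings that render identically on screen can still compare unequal, because "é" can be stored either as one precomposed code point or as "e" plus a combining accent.

```python
import unicodedata

composed = "café"            # uses U+00E9, the precomposed 'é'
decomposed = "cafe\u0301"    # uses 'e' followed by U+0301 combining acute

# They look the same but are different code point sequences.
print(composed == decomposed)  # False

# After Unicode normalization (NFC), they compare equal.
print(unicodedata.normalize("NFC", composed) ==
      unicodedata.normalize("NFC", decomposed))  # True
```

This is exactly the kind of detail most application programmers never have to think about, because a library layer (here, `unicodedata`) has already absorbed it.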


You're thinking of the lower-level details of the computer but the issue is the low-level details of the problem you're trying to solve. The amount of detail necessary to solve even the most basic business problem is far more than the average client can ever describe on their own.

All the computer abstractions help solve computer problems and in turn make it easier to implement solutions, but they don't make the business problems any smaller.


Yes, what you describe is the point at which abstractions meet your problem from the bottom. You're not developing solutions yourself down to level of pure physics, but that's because we have a solid abstraction layer of digital computing. Your job is to work down from the problem idea until you connect to that layer.

It still doesn't change the fact that the surface area of that layer is enormous, and so is the space between it at the bottom and your problem at the top. All the details in between are what you need to figure out.

I'm not completely sure the bottom can be qualitatively raised very far up. The digital computing abstraction is a qualitative model change, from physics to mathematics. Experience suggests that when you decompose a business problem, the lowest algorithmic details very often end up on, or slightly above, that bottom layer - so you need the ability to work on that level. Of course, if you're willing to limit your scope and problem domain, then a lot of things can be abstracted further - but they won't generalize.


Most programmers/developers don't deal with the problems that you mentioned on a day-to-day basis. They are already abstracted away, and most programmers/developers try to make sure that they deal with the actual problem at hand instead of repeating the same problem in each project. The actual problems are not going to get any easier. The abstraction levels will get higher, but the actual problems dealt with will not.


You're right, but it doesn't seem that hard to teach/guide people to be precise.


The fact that even Linux kernel developers rail vehemently against compilers, when the compiler optimizes exactly what they wrote rather than what they meant, strongly suggests that it's not just a matter of precision.

More specifically, people generally have a vague idea of what they want. While you could explain a fully precise mathematical model that corresponds to that idea, people don't really have a good way to know if the model is correct or if there is some hidden corner case that causes problems. So they accept the model until they find one case where it was wrong.
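A toy illustration of such a hidden corner case (a hypothetical spec, sketched in Python): the model "return the average of the prices" sounds fully precise, and it works, right up until the first input nobody thought about arrives.

```python
def average(prices):
    """Spec: 'return the average price'. Precise-sounding, and correct
    for every example the stakeholders had in mind."""
    return sum(prices) / len(prices)

print(average([2, 4]))  # 3.0 -- matches everyone's mental model

# The hidden corner case: a day with no sales.
# average([]) raises ZeroDivisionError -- the "precise" model was
# accepted until reality produced the one input it didn't cover.
```

Whether an empty list should yield 0, None, or an error is a business decision the original model silently never made.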


It's hard to impart a true understanding of just how much detail a real-world problem has without making someone try to solve some real-world problems.


Right on. What many people, especially this writer with his talk of tech being more "human", don't understand is that code isn't a language _for computers_. It's a human language used to describe something unambiguously, to the point where it can eventually be interpreted as machine instructions.

People not understanding this has, in my experience, led to a lot of very bad decisions, especially around outsourcing. So many people in 2006 thought that American/Canadian developers would disappear in a few years as everything was outsourced. They didn't realise that writing a "comprehensive and precise spec" to send to a team of developers on the other side of the world, and have them implement _exactly_ what you asked them to, was actually creating more work.


The comic misses the point. Yes, it's still called "code", but as you point out, the abstraction layer moves another level up.

This is the whole point behind the reverence for "declarative programming" in the Erlang world (and probably others). Tell the computer what to do, not how to do it. The "how to do it" part should instead be the responsibility of the programming environment (which can - hopefully/ideally/eventually - do a better job than the programmer at organizing the "how" in a manner which appropriately balances efficiency and robustness).
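A toy sketch of that "what, not how" distinction (in Python rather than Erlang, purely for illustration): both versions below compute the squares of the even numbers, but the first spells out the loop, accumulator, and ordering, while the second just describes the desired result and leaves the iteration strategy to the runtime.

```python
data = [1, 2, 3, 4, 5, 6]

# Imperative: say *how* -- explicit loop, mutable accumulator, fixed order.
result = []
for n in data:
    if n % 2 == 0:
        result.append(n * n)

# Declarative: say *what* -- describe the result; the runtime owns the "how".
result2 = [n * n for n in data if n % 2 == 0]

print(result)             # [4, 16, 36]
print(result == result2)  # True
```

The declarative form is what gives the environment room to reorganize the "how", e.g. to parallelize or reorder, without the programmer's description changing at all.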


I think it is a nice way to look at it. I always thought all people in all professions are trying to figure out some "code", just in different abstraction layers.

I mean, a lawyer learns the "code" of law. A psychologist learns the "code" of human, which has a lot of black boxes but one can still make partial sense of it.



