
Why software sucks? - jjude
http://www.scottberkun.com/essays/46-why-software-sucks/
======
david927
If we invented the car, but there was no reverse and no left turn, we could
say that the problems were due to poor drivers and poor planning, but the
problem would clearly be that the car is not sufficiently wieldy.

You can say that software sucks because of poor programmers and poor project
management, but the truth is that the code is not sufficiently wieldy. There's
no way to manage the code. I can't query all places in the code where the UI
interacts with a database column. Accounting systems can give you a variety of
reports based on abstractions at a variety of layers, slicing the information
in different ways (horizontally, vertically, etc). Software systems? Go fish.
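The kind of query david927 is asking for can at least be crudely approximated. A minimal sketch (the column name `user_email` and the sample sources are hypothetical, and a real tool would parse the code rather than pattern-match it):

```python
# Crude cross-reference tool: report every line in a codebase that
# mentions a given database column name.
import re

def find_column_refs(sources, column):
    """sources: {filename: text}. Returns (filename, lineno, line) hits."""
    pattern = re.compile(r"\b%s\b" % re.escape(column))
    hits = []
    for filename, text in sources.items():
        for lineno, line in enumerate(text.splitlines(), start=1):
            if pattern.search(line):
                hits.append((filename, lineno, line.strip()))
    return hits

# Toy "codebase" for illustration.
sample = {
    "ui/profile.py": "label.set_text(row['user_email'])\n",
    "db/schema.sql": "CREATE TABLE users (user_email TEXT);\n",
    "docs/readme.md": "Nothing relevant here.\n",
}
for hit in find_column_refs(sample, "user_email"):
    print(hit)
```

Text search obviously misses indirection (ORMs, reflection, string concatenation), which is arguably the deeper point: the code carries no queryable model of its own structure.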

Software sucks because at some point we got too excited about what we were
doing and stopped (sufficiently) caring about how we were doing it.

~~~
pbz
"I can't query all places in the code where the UI interacts with a database
column." Why can't you?

------
gaius
There is one thing that makes software suck, and that's how far apart its
developers and users are. I dream of going to SAP's offices in Germany and
seeing how they book their own vacation and submit their own expenses. I can't
believe they do it with their own product; if they did, it would be slick...

~~~
singular
There certainly can be a disconnect which impacts a developer's ability to see
what a user wants, but even if there is no disconnect, bad software can still
result.

Consider internal software - there is literally no disconnect whatsoever, but
very often the software is terrible. This is relevant because it raises the
issue that the business simply may not care about quality, and thus putting in
the work required to make the thing good might simply not be possible.

There are many, many factors involved in determining whether software sucks or
not, and key among them is the understanding (or lack thereof) of what users
want and need, agreed. But keep in mind, there is a big difference between
seeing superficially what users need vs. understanding clearly and deeply what
they need and implementing it well.

~~~
benohear
At the risk of sounding like a broken record: paper prototyping. Even if it's
for yourself, it never feels the same to interact with a product as you think
it will. This holds even more true when it's for other people.

For internal projects, investment is obviously a huge problem, and once you've
built the wrong thing you're unlikely to get more money to do it right. But
for a given price point, there's a huge range of quality you can obtain. Paper
prototyping helps you maximize that at very low expense.

------
cubicle67
We were discussing this recently whilst trying to come up with a guideline
about when the computer should "help" and when it should just leave you in
peace. We looked at lots of software we hated and loved, and tried to pinpoint
the why. One of the things that came up most about hated software was that it
had a high frustration level.

Who's ever wished for a giant red "PLEASE STOP HELPING ME" button on their
computer? Everyone, right? One of the causes of frustration we looked at was
when the software helps you, but gets it wrong. Frustration is increased when
the effort to fix the software's "help" is anything other than minor, and
increased again when the computer repeats its efforts at helping. Throw in no
obvious way to get it to stop and you're in for some blood-boiling times.

The guideline we came up with was to not concentrate so much on how cool it is
when things go well, but think about what happens when the software guesses
user intent incorrectly. Consider:

* How often is the software likely to get it wrong?

* What level of irritation is caused by getting it wrong?

* How much effort is required to undo the computer's help?

and weigh it up against how much effort you're saving the user when you get
it right.
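The guideline boils down to an expected-value calculation. A back-of-the-envelope sketch (all numbers are illustrative assumptions, not from the original discussion):

```python
# Expected per-use value of an "auto-help" feature: weigh the time saved
# when the guess is right against the undo cost and irritation when wrong.
def expected_benefit(p_wrong, seconds_saved_when_right,
                     seconds_to_undo_when_wrong, irritation_penalty=0.0):
    p_right = 1.0 - p_wrong
    return (p_right * seconds_saved_when_right
            - p_wrong * (seconds_to_undo_when_wrong + irritation_penalty))

# A feature that's wrong ~30% of the time, saves 5s when right, and costs
# 20s to undo plus an irritation penalty when wrong: net negative, cut it.
print(expected_benefit(0.30, 5, 20, irritation_penalty=10))

# The same feature at a 5% error rate comes out positive.
print(expected_benefit(0.05, 5, 20))
```

The irritation penalty is doing real work here: as the comment notes, frustration compounds when the "help" repeats itself, so a linear model understates the damage.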

Thinking about this caused us not to add a particularly cool feature we had
planned, because even though (we thought) it was very cool, there was about a
30% chance of it guessing wrong, and the payoff wasn't worth it.

------
pinaceae
Software sucking is a direct outcome of future users' involvement in the
design process.

as in: if you're writing software for a client and the client drives the specs
directly, the outcome will suck balls.

evidence: all enterprise software installations ever done. from ERP to CRM,
the vanilla software packages might even have good UX/UI, but once
implementation with all its "critical" customizations is done, you'll end up
with a turd.

~~~
zdw
I tend to think that splitting development into API and front end solves some
of this.

The main problem with "enterprise" software is that it's done cheaply, and
violates levels of abstraction. That's why it frequently requires specific
versions of antiquated technology ("Our CRM requires IE6", "We can't upgrade
past Windows 2000").

If it was designed properly, you could easily rip out both parts and
iteratively redesign them to meet business and technical goals.

~~~
mattquinn
Seconded. The university I'm attending uses PeopleSoft (acquired by Oracle) as
their ERP system, and it's horrendous. Not a week goes by where I don't hear
students and faculty members openly complain about it.

I've got a live PeopleSoft installation running on a server in my apartment
that I'm outfitting with code to do exactly what you mentioned - split out the
back-end from the front-end. So far it's going brilliantly - for some sadistic
reason, I enjoy trying to reduce the complexities of these applications.

~~~
shurane
You know, I wanted to do something very similar with my university's
installation of PeopleSoft - but my intent was more focused on improving the UI
and frontend than the backend. But I have to ask: how did you get a copy of
PeopleSoft?

~~~
mattquinn
Actually, what I'm working on involves both front-end and back-end. I've got a
UI that trades data back and forth with a web service endpoint called
Integration Broker within PeopleSoft. I'm focused on enrollment right now, and
currently I've got a system that allows me to enroll in classes using the new
UI on a live PeopleSoft install - all without touching/modifying the business
logic in the delivered vanilla PeopleSoft implementation.

Re. the PeopleSoft copy - Oracle provides all of their software (and master
license codes) for download for evaluation purposes through a portal called
eDelivery. I had to read a few hundred pages of documentation, but after a
month I was able to get all the components to talk together. I'm trying to
convince Oracle to give me a non-support license so I can cover myself
legally, but I'm getting the silent treatment since it's just me and I don't
have the budget of a CTO lol.

------
ken
A little while ago I overheard a conversation between some friends of mine, one
of whom is an interior designer who does residential remodeling:

a: "We always tear out everything, down to the rafters."

b: "That sounds expensive."

a: "It can be, especially if you find out the wiring or something isn't up to
code."

b: "What if someone wants only---"

a: "No, there's no 'only'."

In my experience, if programmers always had a complete design specification
before starting, and if we always "tore up" the old mess (down to something
unquestionably stable) before making any improvements, our software would be
much, much better. It'd probably also cost more.

The biggest cause of this that I see is that the people paying for it are not
the people using it. In the consumer market we are starting to see that good
design does sell (I think it was Gropius who predicted that the cost of design
is amortized). But with enterprise software, the people paying for it are
rarely the ones using it, so they're perfectly willing to buy a lousy program
if it means saving cash in the short term.

~~~
icebraining
_In my experience, if programmers always had a complete design specification
before starting, and if we always "tore up" the old mess (down to something
unquestionably stable) before making any improvements, our software would be
much, much better. It'd probably also cost more._

And by the time the software was done, it'd be useless. A house doesn't get
obsolete in ten years, but software - which is much more complex than a house
- can be obsolete in ten months.

Not to mention that I'd rather not be reinventing the wheel every six months.

~~~
4as198sGxV
I often hear that but it is not true in my experience. Maybe your run-of-the-
mill flavour-of-the-month hipster web 2.0 app has a limited shelf life, but
most enterprise apps stay deployed for years.

~~~
icebraining
But is it because the needs haven't changed a bit, or because it's good enough
and it'd cost too much to replace it? Because if software was always torn
out, that would only accentuate the latter problem.

------
evincarofautumn
“No one makes bad software on purpose” is not strictly true: many esoteric
programming languages are designed to be “bad” in interesting ways. That
extends to any subversion of highly-regarded tropes in games and other media,
even the non-interactive kind. It’s also not true that “in order for people to
say ‘this sucks’ they have to care enough about the thing you’ve made to spend
time with it and recognize how bad it is”. People routinely criticise
programming languages and other software they’ve never used before, even if
said software doesn’t actually suck much at all.

------
benohear
Software tends to suck when either you build the wrong thing or you bite off
more than you can chew and the implementation of a good thing ends in
disaster. Or both.

Generally the "lean startup" credo holds true here: Build an MVP fast, get
feedback and iterate.

However, there is one technique which doesn't get enough airtime in my view
and that is paper prototyping. It gives you a surprising amount of quality
feedback with a fraction of the effort of coding something up (and no bugs!),
and allows you to iterate there and then.

~~~
gaius
Very amusing that you think that, as that approach is exactly what got us into
this mess. MS (not to single anyone out in particular) has always been
notorious for rushing a buggy 1.0 to market and then iterating it. That's why
you need a quad-core 3GHz PC with 8GB of RAM to write a single page of A4 now.

~~~
serbrech
I can't agree that a buggy and slow version of a product is an MVP. There are
a lot of things that you can cut from a product to make it minimal. I don't
believe quality is one of them. However, I also believe that user feedback
cannot be the only factor to build something truly great.

~~~
gaius
Windows 1.0 and Word 1.0 were viable by definition, as people did buy them.

------
b1daly
One reason for software sucking that I don't think gets mentioned enough:
developers don't really use the product enough to optimize it (they don't have
time). Especially for complex applications. If there isn't an effective
feedback loop from users about pain points, and an institutional drive to
address them, major problems in software persist version after version.

------
cousin_it
In this thread many people give different reasons why they think software
sucks. This prompts some meta-questions:

1) Why are all these people so confident, if only a minority of them can be
right?

2) What procedure would make everyone converge on the same correct answer?

------
j45
Software sucks because too often developers fail to learn what the software is
supposed to do.

Developers trivialize things and interpret them in far "superior" ways that
lead to huge gaps of "they never told us".

Development has to move from just coding, to learning to _first_ understand
what details are being managed, and how those details interact in a system in
all the stages the details/data exist.

If we believe every business is becoming a software business, the reverse is
true, all software developers must understand the business more and
continually develop the skills to be the bridge between business goals and
technology.

How to do that?

Shut the hell up and learn.

Ask (and learn) why things are done a certain way. Uncover any competitive
advantages the business has from doing things a certain way before getting on
the high horse and deciding to improve the world because it's so obvious.
Developers may be surrounded by non-techs, but they certainly might be
surprised to see the organization itself does have processes and competitive
advantages that have to be maintained for the business to survive.

The classic SAP-esque kiss of death: do it the SAP way, wipe out the
competitive advantage (seen first hand), and then spend tons of money
customizing and automating the ERP to get back to what they had before (and
more). It seems silly, but I can't say the 70% of failed software projects
fare much better.

So, before we think we understand something, shut up.

Before we think we know better, shut up.

Before we think we can simplify things, shut up.

Before we think we can make things more efficient, shut up.

Shut up, listen to the people using the current systems and processes and
learn what is working for them, or not, first.

Shut up and learn. Don't finish people's sentences. Don't tune out. Don't
think things are beneath you. Don't think you've seen it before, or built it
before.

At each step ask them if you understand their process correctly before going
off to formulate a faster way of doing things for their confirmation.

Software has the power to uplift the lives of people and help them get more
done with less effort. If you don't value this, don't make the rest of us look
bad for your laziness and inability to continually develop your own skills.

Once you have learnt why the business does what it does, the way it does, it's
fair to ask the question "How should it be?", and see what differs. That is
the beginning of what you should start thinking about.

Having integrated custom systems and built new ones to replace existing ones
since '99, this is the single worst thing I see. Enough developers simply
don't have a healthy paranoia about their own understanding. Knowing a little bit
about something can make developers just as dangerous as the "business" folks
they judge for doing the same. It all comes out in the wash with the 70%
software failure rate.

Failing to understand the data of the business (how it interacts, how it
exists in different stages, how it needs to be input/output, and why) is,
amongst other things, a leading contributor to software failure.

It's as much "the customer didn't know what they wanted" as "developers failed
to understand their job is to go learn the business first and then design
something to build".

To be clear, this can mean working in people's positions first hand to see
what they're going through / facing that they can't explain to you.

It can mean seeing what state a business is in, infancy (no systems or
processes), adolescence (some systems or processes), or maturity (a mature
system and process, even if it's all manual).

Do those three scenarios equal one approach to all of them? Hell no.

There is, though, a few common things to keep in mind:

\- Ask customers to teach you the business as they know it first. Pretend
you're the next apprentice, or the owner's right hand man. Ask to be taught
not just how to do everything, but why it's done that way.

\- Your goal is to get more done with less effort. The software you design and
build should not simply make less work for some, and more for others. It
should free people from BEING the tools and systems, to USING the tools and
systems. The people of an organization should do what they know best, instead
of being computers, they should be interacting with each other, and customers.

I could go on a long time about this. But it's Sunday and I hope the positive
wishes come through.

~~~
benohear
So in summary: When you have a great idea, translate your intended statement
of "we can improve things by doing X instead of Y" to the question "why are
you doing Y?".

And again: Paper prototyping. Sticking an interactive bit of paper in front of
a user is an effective and inexpensive way to get him to explain the holes in
your great idea, and you can adapt it there and then and maybe come up with
something that fixes the problem that you were hoping to solve, without
creating a ton of other issues.

~~~
j45
Why, always before how.

A question I like asking is, "Teach me why this needs to be improved".

So when something like "this takes us 8 hours a week, every week" comes up,
you can say we'll save 416 man-hours a year if you let me work on this for 40
hours.

I'm not sure whether or not what I do counts as paper prototyping. I use a lot
of different tools, the most important of which helps me whittle down an idea
to its essence. For me that magic happens on whiteboards, and on paper first.

I've recently started doing it using a stylus on an iPad and a Galaxy Note
with increasing success. I almost prefer the iPad or Galaxy Note because I can
keep erasing and refining to get the perfect layout/design.

------
lwhi
Surely the banner of user experience, and its component disciplines, tries to
directly tackle this problem? There is a disconnect between the motivation
of developers and the needs of users, but areas like IxD, IA, usability
research, user experience planning and HCI all help to bridge the gap.

------
sodiumphosphate
The human race excels at engineering shovels and hammers, knives and other
primitive tools; for anything more complex than that our capabilities are
still pretty much infantile.

Give it a few thousand years. If we can manage to survive it, the miserable
suckiness of our software will taper off.

------
javascriptlol
Software is too complicated. Now that we have a generation growing up with
computers software engineers are happily brain-damaging them into expecting
hugely complex software that doesn't really work. The browser is the perfect
example. Why can the user manually reload a page? Shouldn't it all update
automatically? The answer is: it's that way because it was convenient for lazy
designers a long time ago and nobody bothered fixing it. And it will be
rationalised as a good decision now that people have been damaged into
thinking it's a good thing. What's there has blinded people to what is
possible. Don't let people fool you into thinking that there are good reasons
why we ended up with, say, C instead of Forth. It's all rubbish. We've become
a field of charlatans.

~~~
gaius
Err, what are you supposed to do when you edit a file on your webserver and
want to see how it looks? HTTP is not NFS!

But you are mostly right, software has far, far too many pointless layers of
abstraction now, requiring vast resources just to do trivial tasks. There's
nothing 99% of people use a wordprocessor or a spreadsheet for that you
couldn't do on an 8-bit micro in the 80s. Games these days are just not fun,
whereas the 8-bit days were a golden era of creativity. We need to take it
back to the old school.

~~~
dhx
On automatic website refreshing:

See guidelines 97 and 98 from Jakob Nielsen and Marie Tahir's book _Homepage
Usability: 50 Websites Deconstructed_ [1] for the primary reason why there has
been little interest in removing the "refresh" button from browsers.

Technical limitations don't really exist (and if they do exist it'd be fairly
easy to solve). Server-sent events[2] and WebSocket[3] are already implemented
in the latest versions of popular browsers. Modules or implementations within
popular HTTP servers already exist for doing HTTP push (they tend to use older
AJAX-like techniques though).

If usability was no concern (or very carefully handled) it'd be fairly easy to
write your own nginx module or "WebSocket server"[4] that uses inotify to
check for file system changes. For each change that impacts an open WebSocket
connection, a "refresh this page" notification can be sent to the browser
(which then uses JavaScript to force a page refresh). There is a potential for
smarter refresh mechanisms in browsers that maintain the current scroll state,
field values, etc but you'd still be frustrating the user with severe
usability problems.

[1] <http://www.useit.com/homepageusability/guidelines.html>

[2] <https://en.wikipedia.org/wiki/Server-sent_events>

[3] <https://en.wikipedia.org/wiki/WebSockets>

[4] [http://altdevblogaday.com/2012/01/23/writing-your-own-
websoc...](http://altdevblogaday.com/2012/01/23/writing-your-own-websocket-
server/)

~~~
gaius
Yes, but none of these things existed in 1993. You can call it "lazy", I
suppose, that TBL didn't implement all the features you take for granted 20
years later before releasing the first browser(!)

~~~
javascriptlol
It's lazy when your platform still has mandatory polling decades after
interrupt driven programming was invented.

~~~
icebraining
Interrupts are not without drawbacks. There are valid reasons to not implement
them, particularly on the web.

~~~
javascriptlol
So, WebSockets are a mistake then?

~~~
icebraining
Websockets are like airplanes: useful, but not for daily commuting.

~~~
javascriptlol
So do you have any technical points to make, or just more dumb excuses? If you
don't want interrupts, you just use them to implement polling.

~~~
icebraining
Interrupts are stateful on the server side; that creates problems in terms of
scalability, both due to increased memory usage and by being less flexible
(either you have each user "locked" to a single process, or you have to
implement state sharing, which adds overhead).

It can also be extremely wasteful - if I leave a tab open for hours or days,
you'll have to waste your resources and mine to keep pushing me stuff I won't
see, while now I just hit refresh when I want to see new content.

In terms of usability, it's often jarring to watch content change when you're
interacting with it - that's why even sites that implement real time
notifications often have a link or button that you have to press to update the
UI. In many cases, doing that completely negates the benefits of pushing.

Also, see the thesis on "Architectural Styles and the Design of Network-based
Software Architectures":
<http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm>

Polling is a good default for the web; it fits most use cases (content that
rarely changes) in a simple and economical way. WebSockets are useful for the
exceptions.

~~~
javascriptlol
It fits most cases.. except that web applications are _constantly_ polling for
information. Which may be fine, but you can implement polling with interrupts.
What part of that do you not understand? Are you aware that the OS is using an
interrupt-driven system to poll the server? It's just hidden from you, so you
have no choice. This is just dumb engineering. And to say that it's good for
"most things" is just lazy thinking. "Most things" are that way precisely
because of the crappy architecture. Rationalisations.
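The claim that polling is recoverable from interrupts (but not vice versa) can be shown in a few lines. A toy sketch, not any real event API: events are pushed as they happen, and a client that prefers polling simply drains the buffer whenever it likes.

```python
# A push (interrupt-style) channel that also supports pull (polling):
# subscribers get called immediately; pollers drain the buffer on demand.
import collections

class PushChannel:
    def __init__(self):
        self._buffer = collections.deque()
        self._subscribers = []

    def subscribe(self, callback):
        # Interrupt style: callback fires on every event.
        self._subscribers.append(callback)

    def publish(self, event):
        for cb in self._subscribers:
            cb(event)
        self._buffer.append(event)

    def poll(self):
        # Polling style: take whatever has accumulated, possibly nothing.
        drained, self._buffer = list(self._buffer), collections.deque()
        return drained

chan = PushChannel()
chan.subscribe(lambda e: print("interrupt:", e))
chan.publish("page-updated")   # prints: interrupt: page-updated
print("poll:", chan.poll())    # same event, pull-based
print("poll:", chan.poll())    # nothing new: empty list
```

The asymmetry icebraining raises is still real, though: the buffer and subscriber list are exactly the per-client server-side state that pure polling avoids.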

~~~
icebraining
_It fits most cases.. except that web applications are _constantly_ polling
for information._

Web applications are a very small part of the web as a whole.

 _you can implement polling with interrupts_

But why would you?

 _Are you aware that the OS is using an interrupt-driven system to poll the
server?_

Yes. So? In that case, interrupts are a great fit. In others, they aren't.

 _This is just dumb engineering._

Why?

 _And to say that it's good for "most things" is just lazy thinking._

It's not good: it's _better_ for most things. There's nothing lazy about
evaluating the options and choosing one.

 _"Most things" are that way precisely because of the crappy architecture.
Rationalisations._

Sure, everyone who implemented this shit is dumb and lazy. It couldn't be that
they have good reasons for doing what they did.

I see you criticized me for not providing technical points, yet now you refuse
to refute them. I don't see the point of this conversation anymore.

~~~
javascriptlol
You haven't made any technical points. You just waffled nonsense about
scalability. I see you're bowing out of the argument because you have no case.
Once you have interrupts on a platform, you have polling. If you're stuck with
polling you can't recover interrupts. Which is why we have websockets decades
late. The truth is there is no tension between interrupts and polling.
Interrupts are plainly superior, since you can opt out of them trivially. But
the platform should not opt out ahead of the developer. This turns out to be
inadequate, so we get polling implemented inside interrupts and then a
separate mechanism for interrupts. And the interrupt driven stuff has stupid
reload buttons and so forth on the GUI. It's dumb engineering because it
increases complexity and leads to bad results for the user. You have no case.
Polling is dumb. And if you think software engineers ever needed "good
reasons" to add a bunch of pointless complexity to things, then you simply
have no clue on the history of software development.

~~~
javascriptlol
Notice how the posts that are downvoted have no replies, because the
downvoters cannot argue their case. How sad.

~~~
jacalata
I had not downvoted you, but I thought about it - because you are making
ranty, ad hominem attacks ("lazy thinking", "dumb excuses"), and you appear to
be ignoring or handwaving away all the actual points anyone makes. I have no
interest in engaging you in any discussion for these reasons, and I expect
that other people feel the same way: hence, downvotes and lack of replies.
Feel free to do what you wish with this information.

------
michaelochurch
Here are some thoughts I'd put into the mix.

1\. Software that is built to deadline will decay, even if the developers are
good. That doesn't mean that an occasional deadline is the end, but if a long-
term "deadline culture" sets in, get out. A long-standing deadline-oriented
culture means you should be looking to jump to another project or company
before the maintenance phase starts, because (1) the maintainers will be
underappreciated (that's typical deadline culture) and (2) once the original
architects get promoted it will be politically impossible to point out the
real reason maintainers are unable to deliver in a timely fashion, and the
slowest one to run away will get eaten by the bear. It means that technical
debt will never be paid off; management will never budget time, and engineers
will be too busy to clean up the code. Software engineers generally lack both
the political pull and the broad-based knowledge to push back on deadlines and
tease out which ones actually matter and which don't.

2\. Entropy. Good software is less stable than bad software. Think of this as
akin to the "broken windows" theory. Once software reaches a certain state of
degradation, each change, although it might fix a bug or add a feature, will
make the state of the software worse. There are creeping kinds of badness that
can't be caught in incremental code reviews, such as adding 10 lines to a long
for-loop or a "necessary" boolean parameter to a method that over time ends up
with 15 boolean parameters. Often the managerial solution (once it's far past
too late) is to put maintenance of this bad system on the calendar and make it
someone's full-time job (instead of a shared responsibility) but no one wants
that job and often that work is allocated to marginally skilled junior
programmers with no clout. Then you get adverse selection: the more skilled
people in that set will leave the project (or company) before they put in
enough time to become decent at it.

3\. "Pay as you go" maintenance, which includes periodic fixit spells, is
always better than after-shit-breaks maintenance. That said, existing tools
don't make it easy to revert quality degradation. IDEs really don't perform
this function as commonly used. (I'm sure IDEs _can_ be really powerful if
well-learned, but people who are dedicated enough to master IDEs are also
dedicated enough to jump wholesale to better languages for which IDEs are
unnecessary and often poorly-supported. IDEs, in large part, exist to
compensate for weak languages.) Code can rot in any language, but one
advantage that languages like Scala and Python have is that, because they have
REPLs, which are far more useful than any IDE, people can interact with the
software at a code-level and fix things while the code is in that "moderately
bad" state before it is too late. In 2012, I wouldn't start anything important
in a REPL-less language. (C is not "REPL-less" because Unix is the C
programming environment. C++ is, not on account of language intrinsics but
because it has departed from the small-program Unix philosophy and is used for
large-object programming which _requires_ interactivity at a code level.) At
least some programmers will have enough of a sense of ownership and citizenry
to clean up failing code as they work with it, but if you deprive them of the
REPL, the one tool that any good programmer will recognize as essential, they
won't put in the work.

REPL or Fail: <http://michaelochurch.wordpress.com/2012/02/07/repl-or-fail/>

4\. The transition from being a mediocre to a good and then to a great
engineer is about moving away from being an "adder" (someone who increases
codebase complexity and functionality, thereby having an additive business
value-- _ignoring_ long-term costs of complexity, which may or may not offset
that additive value) to a "multiplier" (someone with broad-based positive
effects that make the whole team more productive). Contemporary tools and
programming environments (Java, C++, IDEs, IOC, dependency injection
frameworks) are about helping more mediocre engineers become solid adders at
the expense of the really great engineers, whose creativity is constrained by
less powerful languages and tools. One of the goals behind Microsoft's
professional certifications, the design of VB (and later, the hijacking of
Java), and the attempted ghettoization of the command-line (which good
engineers like) was to make it possible for huge teams of "commodity"
programmers to be productive as adders, with the hope that "someone" would
have the patience to staple together the zillion classes they cranked out.
From an MBA perspective, this is a win, because 2-4 times more people are
eligible to be adders, but it also holds people back from becoming
multipliers. The long-term problem is that a team without _any_ multipliers
will accumulate complexity and the emergent design (because you want a solid
engineer doing your design work, and you can't get them in commodity-
programmer environments, "design" coming out of a commodity shop will be ad
hoc) will be disastrous.

5\. With a few exceptions, the real fuckups in software don't seem to be
blameable on a single person. They usually emerge either from jobs no one does
(because the people who care about them being done aren't in power) or that
too many people do (once code has been passed over by too many hands, it turns
to shit).

~~~
akeefer
I'd be curious to hear why you're so anti-IDE: do you have extensive
experience working in Java with a good IDE like IntelliJ? I've heard this
"IDEs are to help mediocre programmers be mediocre" argument before, but it's
so alien to my experience with IntelliJ (which I've used every day for about
10 years now) that there's nearly no way to reconcile that argument with my
personal experience. When I'm writing C or Javascript, I'm hesitant to, say,
rename a method, because finding and fixing all references is a pain. In
IntelliJ, it's trivial. The end result is that I refactor my Java code much
more aggressively than code in a language where I don't have a (good) IDE.
Similarly, while you use the REPL to explore libraries, I use the IDE:
exploring source in a Java project is trivial because every class and method
is instantly cross-linked, and my IDE knows where all the code is (including
for my libraries). It's not exactly the same as a REPL (I can't call the
method right then, of course, but I'll get to that in a minute), but it serves
a different purpose, and your argument in that linked post about how IDEs
aren't made to read code is, honestly, laughable: it's way, way easier to read
and explore a Java code base within an IDE than it would be in a text editor
and a REPL. Now, you can argue that Java itself is verbose enough that reading
it is painful because of all the boilerplate: sure, that's a fair point, but
it has nothing to do with an IDE. If you had a language with cleaner syntax
_and_ an IDE, that would be better than just a language with cleaner syntax
and a REPL when it came to reading code.

In addition, it's worth pointing out that many Java programmers use unit tests
as a poor-man's REPL; it's not the same, but it serves a similar purpose: I
want to write some code, then execute it to make sure it does what I think.
It's less dynamic, but it has the advantage of leaving you with regression
tests, and it does let you explore and quickly iterate your code. If I'm not
sure how to use a library, I'll do exactly what you'd do with a REPL: I'll
write some code to use the library, then write a simple test that executes
that code, and then I'll iterate my way to a correct solution. Again, the
integration of the IDE with the unit tests makes running, debugging, and
bouncing between the test and the code much easier than it would be in, say,
vim/emacs and a terminal.
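The "unit test as a poor-man's REPL" workflow isn't Java-specific. A minimal Python version, using `difflib` as a stand-in for an unfamiliar library (the hypothesis and example strings are the ones from difflib's own documentation):

```python
# Explore an unfamiliar API by writing down what you *think* it does as a
# test, running it, and iterating until the assertions pass. Unlike a REPL
# session, the finished exploration doubles as a regression test.
from difflib import get_close_matches

def test_close_matches():
    # Hypothesis: returns candidates ranked by similarity, dropping
    # anything below a cutoff.
    result = get_close_matches("appel", ["ape", "apple", "peach"])
    assert result == ["apple", "ape"]   # "peach" falls below the cutoff

test_close_matches()  # in practice: run under pytest and iterate
print("hypothesis confirmed")
```

The trade-off akeefer describes is visible even at this scale: slower feedback than a REPL, but the artifact persists.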

My point is that good programmers in any language find a way to do the sort of
iterative evolution and exploration of code that you act like is only possible
with a REPL, allowing them to fix errors early.

Many of your other points here are good, I just really feel like your "the
REPL is essential" argument is pretty misguided.

~~~
maxs
Regarding IDEs: I completely agree with you. And I say this as someone who's
used Emacs and REPL for 10 years.

I find that I am generally faster in development (at least with new libraries)
in Java, than I used to be in Ruby and Python. This is all thanks to the
"discovery" ability of IDEs. (I admittedly never tried a Python or Ruby IDE.)

And I have the same feeling regarding refactoring. It is not merely limited to
renaming a method. I find myself very often making major structural changes to
my code. Moving packages around, introducing interfaces, changing type
signatures. In this regard working with an IDE makes me feel like a "software
architect", I get a big-picture of the project in a much faster and better way
than I used to with purely a text editor.

I also feel I waste no time on boiler-plate code (which admittedly Java has a
lot of). In Netbeans (I am sure it's the same in Eclipse and IDEA) the code
generation abilities are terrific. For instance, I can just write "class C
implements Interface", press Alt+Enter+Enter and see all interface methods
written out and ready for me to fill in the implementation.

Regarding the REPL: I've found that with static typing, I just end up knocking
out the code that _I think_ should work, and then I test if it works. I can
sometimes type for 200-300 lines without running the code, then test it and
see that it actually works. Of course, sometimes it fails too: luckily Java
debugging is easy and incredibly capable.

However, if you really want a Java REPL, you can have something a little bit
similar with BeanShell (you can even embed it into NetBeans).

