
How much do we bend to the will of our tools? - Teckla
https://thorstenball.com/blog/2020/02/04/how-much-do-we-bend-to-the-will-of-our-tools/
======
bediger4000
I know this article relates to programming, not linguistics, but...

I live near a street named "Elati". Humans pronounce it Uh-lah-tee, with all
three syllables given nearly the same emphasis.

Google maps calls it Ell-uh-tee, emphasis on the "Ell".

How long before human language bends to the will of Google Maps' pronunciation
algorithm?

I don't think this is good or bad, merely interesting, and I'd like to know
the name linguists have for this phenomenon.

~~~
silicon2401
The closest I can think of (not a linguist) is hypercorrection, which is
essentially 'correcting' what's already correct and ending up with something
that wouldn't previously be considered correct. But language correctness is
defined by speakers, so hypercorrection could eventually become the new
standard.

[https://en.wikipedia.org/wiki/Hypercorrection](https://en.wikipedia.org/wiki/Hypercorrection)

~~~
caleb-allen
Is the use of the word "literally" an example of this?

~~~
silicon2401
I would say no. "Literally" has been used in a non-literal sense for
centuries.
[https://www.etymonline.com/word/literally#etymonline_v_30161](https://www.etymonline.com/word/literally#etymonline_v_30161)

------
thedirt0115
"Can we explain the differences in identifier length preferences between
language communities by pointing to the availability of reliable auto-complete
in one and lack thereof in another?"

I'd have to say "no" to this. What immediately jumped to mind was Java vs Go,
where there seems to be a strong preference towards shorter identifiers in Go
even though there has basically never been a time when Go did NOT have good
autocomplete capabilities.

One might argue that Go tends to have shorter identifiers because the authors
of a lot of the earliest Go code came from a C background where there wasn't
autocomplete. But nearly every language has reliable auto-complete these days,
so I'm gonna stick with "no".

------
robomartin
Well, when you have a Sawzall you pretty much let it drive...just pull the
trigger and go.

Half-joking, of course. Now, that might seem like an off-topic comment but, if
you think about it, the analogy might be somewhat reasonable.

BTW, I am NOT implying this is a bad thing. And yet I have to throw in
"With great power comes great responsibility". I like intelligent tools that
help me do my job. I don't care about things like --sorry-- the cult of vi/vim
(text entry speed is irrelevant).

My reality is that, as a multi-disciplinary engineer, I find myself context
switching all the time: from mechanical engineering to optics, electronics,
embedded, FPGAs, workstation, mobile, and even web. When you context-switch
like that, the tools can really help you make the switch with less friction.
So, yes, when you are changing languages and frameworks with some frequency
rather than working in the same ecosystem for years, things like intelligent
completion and suggestion, intelligent awareness of libraries and frameworks,
and a few other features can be massively helpful. This, BTW, is why I like
the JetBrains tools: not only are they excellent and well done, they help with
all sorts of intelligence and integrations.

But, yes, if you don't know what you are doing a Sawzall can cut off your leg.

~~~
pjc50
Ironically, tools like JetBrains ReSharper let me simply avoid doing large
amounts of text entry, with things like "extract interface", "auto-generate
interface implementation methods", or "auto-generate switch labels". A few
keystrokes and hundreds of lines appear! And they let me avoid a lot of
navigation by navigating to related items for me.

As you say, the speed of keypressing is irrelevant when most of the real work
is in staring at the screen and puzzling something out.

~~~
robomartin
Years ago I had one glorious debugging "session" that consumed no less than
six months of 12 hour days. I was going absolutely insane. Nobody could figure
out what was going on.

This project involved a large FPGA driven by an embedded processor talking to
UI code running on a PC. Without getting into technical details I can't cover,
things were not working correctly in one particular set of states.

I and others went through the relevant code with a microscope, even rewriting
some of it out of paranoia. For months we could not figure out what was going
on.

One glorious Sunday at 2 AM I found the problem. It was an error in
calculating the coefficients for a poly-phase finite impulse response filter
in the FPGA. The coefficients were calculated in a monster Excel spreadsheet,
copy-pasted into the embedded code, and loaded into the FPGA by the embedded
processor during operation. The thing didn't work.

The Excel calculation used the ROUND() function where it should have used
ROUNDUP(). An innocent mistake, likely the result of haste. It cost us SIX
MONTHS. This incident, perhaps more than anything else, made me recoil at the
idea that code entry speed is in any way relevant to delivering a working,
bug-free product. It truly isn't. In fact, going fast can cost you dearly.
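For what it's worth, the gap between those two rounding modes is easy to demonstrate. The coefficients, bit width, and `quantize()` helper below are hypothetical (the real filter isn't shown in the story); the point is only how Excel's ROUND() and ROUNDUP() semantics diverge when quantizing filter coefficients:

```python
import math

def quantize(coeffs, scale, mode):
    """Quantize floating-point filter coefficients to integers.

    mode="round" mimics Excel's ROUND() (round to nearest);
    mode="roundup" mimics ROUNDUP(), which rounds away from zero.
    """
    if mode == "round":
        return [int(round(c * scale)) for c in coeffs]
    return [int(math.copysign(math.ceil(abs(c * scale)), c)) for c in coeffs]

# Hypothetical coefficients and a 12-bit scale factor, purely for illustration.
coeffs = [0.0301, -0.0602, 0.25]
scale = 1 << 12  # 4096

print(quantize(coeffs, scale, "round"))    # first tap: 123
print(quantize(coeffs, scale, "roundup"))  # first tap: 124
```

A one-count difference per tap looks harmless in a spreadsheet, which is exactly why this class of bug can survive months of code review: the code that loads the coefficients is correct, and only the numbers themselves are off.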

~~~
catdog
[https://en.wikipedia.org/wiki/Pentium_FDIV_bug](https://en.wikipedia.org/wiki/Pentium_FDIV_bug)
comes to mind

~~~
robomartin
Yeah, that was a good one.

Engineering good, reliable products, whether software-only or
multidisciplinary, has nothing whatsoever to do with the ability to enter or
edit text quickly or efficiently.

I am not sure they care --given what they wrote, I don't think they do-- but it
is very wrong for a respected entity like MIT to spread false information, as
is the case with the nonsense they wrote in this CS course regarding vi/vim. At
the very least it betrays an utter lack of knowledge of history and, even
worse, the fabrication of a preferred narrative and reality.

These editors did not come to exist for any reason other than that we had
crappy character-only screens, crappy keyboards, no GUIs, and terminals that
communicated with computers at 1200 baud. Package all of that together and the
only thing that makes sense is something like vi. In that context vi made
sense. In fact, it was necessary. Evolve all of that to modern hardware, UIs,
and performance, and one would never author such a tool, not even close; in
fact, one would be laughed out of the room --and rightly so.

------
christiansakai
Yes, I noticed myself and my colleagues doing this when I used an IDE.

Now I mostly use vim, and I notice myself trying as best I can to write code
that I can understand with a plain text editor. And it shows. I keep using vim
for this very reason.

------
pjc50
Nah, I've seen people write things that horrible long before they had tools to
deal with it. People used to write BASIC that looked like this:
[https://cdn.arstechnica.net/wp-content/uploads/2012/12/BASIC...](https://cdn.arstechnica.net/wp-content/uploads/2012/12/BASIC-code.jpg)

Readability is nice, but it's a luxury, and a subjective one.

------
fhood
I don't understand. Is it really all that common for developers to auto-format
code and not spot check the result?

Is using auto-formatting tools particularly widespread? Don't most people just
format the code to their liking as they write it? Am I completely off base
here?

~~~
Macha
Yes. Not just that, but those developers then add these tools to be enforced
in their CI pipeline because "It's good to be consistent", and if that tool
occasionally barfs out something extremely suboptimal, that's just the cost of
consistency.

~~~
nothrabannosir
I'm sensing some sarcasm, but I want to go on record and say I actually
completely agree with this message.

My time working in Go with its institutionalised gofmt was a blessing.
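For concreteness, the kind of CI enforcement being discussed often looks something like the sketch below. This is a hypothetical GitHub Actions fragment, not anything from the thread; the job name and setup steps are assumptions. `gofmt -l` lists files whose formatting differs from the canonical style, so the build fails if that list is non-empty:

```yaml
# Hypothetical CI step: fail the build if gofmt would change any file.
jobs:
  format:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
      - name: Check formatting
        run: test -z "$(gofmt -l .)"
```

Because gofmt has exactly one output style, a check like this is cheap to enforce and leaves nothing to argue about in review.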

------
pixelrevision
The example in the article certainly did not bend to the will of the “code
review” tool. The issues had nothing to do with a linter; the author just
never went back to consider whether it was legible.

