
Learn C - wx196
https://medium.com/tech-talk/afcfa2920c17
======
danso
For anyone who hasn't browsed through Peter Seibel's "Coders at Work," one of
his subjects is Fran Allen...it's kind of funny because I do agree that
learning C has been valuable to the high-level programming I do today (but
only because I was forced to learn it in school). But there's always another
level below you that can be valuable...Allen says C killed her interest in
programming...not because it was hard, but because, in her opinion, it led
engineers to abandon work in compiler optimization (her focus was high-
performance computing):

(Excerpted from: Peter Seibel. Coders at Work: Reflections on the Craft of
Programming (Kindle Location 6269). Kindle Edition:
[http://www.amazon.com/Coders-Work-Reflections-Craft-
Programm...](http://www.amazon.com/Coders-Work-Reflections-Craft-
Programming/dp/1430219483) )

 _Seibel: When do you think was the last time that you programmed?_

 _Allen: Oh, it was quite a while ago. I kind of stopped when C came out. That
was a big blow. We were making so much good progress on optimizations and
transformations. We were getting rid of just one nice problem after another.
When C came out, at one of the SIGPLAN compiler conferences, there was a
debate between Steve Johnson from Bell Labs, who was supporting C, and one of
our people, Bill Harrison, who was working on a project that I had at that
time supporting automatic optimization...The nubbin of the debate was Steve's
defense of not having to build optimizers anymore because the programmer would
take care of it. That it was really a programmer's issue...._

 _Seibel: Do you think C is a reasonable language if they had restricted its
use to operating-system kernels?_

 _Allen: Oh, yeah. That would have been fine. And, in fact, you need to have
something like that, something where experts can really fine-tune without big
bottlenecks because those are key problems to solve. By 1960, we had a long
list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are
higher-level than C. We have seriously regressed, since C developed. C has
destroyed our ability to advance the state of the art in automatic
optimization, automatic parallelization, automatic mapping of a high-level
language to the machine. This is one of the reasons compilers are ...
basically not taught much anymore in the colleges and universities._

~~~
habitue
I think she might have given up on programming a bit prematurely. The pendulum
has obviously swung completely the other way with high level languages like
Haskell pushing forward compiler optimization, and JIT VMs pushing forward in
other directions. It's actually an exciting time for "smart compilers".

~~~
irahul
> I think she might have given up on programming a bit prematurely. The
> pendulum has obviously swung completely the other way with high level
> languages like Haskell pushing forward compiler optimization

C compilers were optimizing long before Haskell. From her interview, I don't
understand why she couldn't keep working on optimizers just because someone
else advocated making optimization the programmer's responsibility.

~~~
habitue
> C compilers were optimizing long before Haskell

Right, and they've come up with some excellent tricks too. But the reason she
felt optimization wasn't going to progress as far is that C is a lower-level
language than some of the other languages of the time, and there are
necessarily fewer tricks you can do in a lower-level language, because you
have to infer the programmer's intent, relying on recognizing idioms rather
than optimizing actual constructs of the language.

How does a compiler optimize a single "goto"? There isn't much it can do
unless, for instance, it notices that the goto is part of an idiomatic
pattern that forms a loop. Then it can decide whether or not to unroll the
loop. If the language gives you a loop construct, the compiler can skip the
"recognize the idiom" step (and the associated risk of guessing wrong), and
go right to optimizing loops. Similarly, in languages higher-level than C,
the programmer can express their intent more directly, and the compiler can
take fewer risks when guessing "Ah, I see what you're trying to do, here's
the fastest assembly that accomplishes that."
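
To make that concrete, here's a toy sketch (my own illustration, not from
any particular compiler): the for version hands the compiler a loop it can
unroll or vectorize directly, while the goto version first has to be
recognized as a loop at all. Modern compilers do normalize simple cases like
this one; the trouble starts when the gotos form control flow that doesn't
map back onto structured loops.

    #include <stdio.h>
    
    /* Two spellings of the same loop. The for-loop is a construct the
       optimizer can work on directly; the goto version must first be
       recognized as a loop at all. */
    int sum_for(const int *a, int n) {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }
    
    int sum_goto(const int *a, int n) {
        int s = 0, i = 0;
    top:
        if (i >= n) goto done;
        s += a[i];
        i++;
        goto top;
    done:
        return s;
    }
    
    int main(void) {
        int a[] = { 1, 2, 3, 4 };
        printf("%d %d\n", sum_for(a, 4), sum_goto(a, 4));   /* 10 10 */
        return 0;
    }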

~~~
waps
To convince C users why Haskell has the potential (but currently only the
potential) to optimize better, it is best to point to one example of
something C can never hope to optimize: deforestation.

[http://book.realworldhaskell.org/read/profiling-and-
optimiza...](http://book.realworldhaskell.org/read/profiling-and-
optimization.html#id679430)

C will never be able to do that. Before optimization: the programmer requests
a list to be created, fills it by calling functions, and then passes the
completed data structure (a list or tree) along to another function, which
executes commands according to what the list contains.

After optimization there is no more list. Instead, the function the list was
passed to calls a generated function that produces exactly the needed
elements just-in-time. Result: no list, no memory (aside from one element on
the stack), no allocation, no clearing of memory afterwards.
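
For readers who don't know Haskell, here is roughly what the transformation
buys, written out by hand in C (my sketch, not the book's example). The
"before" version materializes the whole intermediate structure; the "after"
version is the kind of fused code GHC derives automatically, but which in C
you must write yourself:

    #include <stdio.h>
    #include <stdlib.h>
    
    /* "Before": materialize the whole intermediate structure, then
       consume it in a second pass. */
    static int sum_squares_list(int n) {
        int *xs = malloc(n * sizeof *xs);   /* the intermediate list */
        for (int i = 0; i < n; i++)
            xs[i] = i * i;
        int s = 0;
        for (int i = 0; i < n; i++)
            s += xs[i];
        free(xs);
        return s;
    }
    
    /* "After": producer and consumer fused, no intermediate structure,
       no allocation. GHC rewrites the list version into this shape;
       in C you have to do it by hand. */
    static int sum_squares_fused(int n) {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += i * i;
        return s;
    }
    
    int main(void) {
        printf("%d %d\n",
               sum_squares_list(10), sum_squares_fused(10));  /* 285 285 */
        return 0;
    }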

Of course the downside is that it's very tempting (and encouraged) to write
programs that leave out the optimizations you'd have to do manually in
C/Java/... and just let the compiler handle them. What you'll miss as a
Haskell programmer is that the program is then effectively dependent on
those optimizations for its complexity: say the optimized program is O(n)
while the program as written is O(n^n). Then you make what looks like a tiny
change, say, sorting the list, which prevents the optimization from firing,
and _boom_, your binary switches from O(n) to O(n^n). All tests will
obviously pass, yet your boss is unlikely to be happy... and at that point
it is extremely hard to figure out what just happened.

------
andrewvc
This is a fair point, but it brings to mind another point I didn't really
understand until the last couple of years: learning how compilers work is
just as important. Building a small Lisp compiler was a life-changing
experience for me in terms of going one level deeper, as much as
understanding C was. For those who've never written Lisp before, the reason
I recommend a Lisp compiler is that Lisp compilers are the simplest to
write, and the most expressive in terms of runtime strategy, since Lisp
syntax mirrors a compiler's IR (intermediate representation). I think this
is just as important in today's world because languages like Ruby and
JavaScript are both directly inspired by Lisp. We also live in a world where
Clojure is seeing a steady rise -- for once, a Lisp with real potential for
significant commercial adoption.

I can highly recommend the book Lisp in Small Pieces -- $93 on Amazon, and
worth every penny. Walking through the building blocks and design decisions
of a language changes the way you code. Every language you look at winds up
being internally translated into your own IR, whether you've written a
compiler or not. Understanding the inner workings of a compiler, however,
adds depth.

~~~
mbel
I totally agree; in fact a small, incomplete Scheme interpreter (which may
be used to develop a Lisp compiler later on) can easily be created in a few
hours (depending on the language used) [0], without any prior experience.
It's surely not as deep a learning experience as building a compiler, but I
think it's still worthwhile.

[0] <http://norvig.com/lispy.html>
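
For a taste of how little machinery is involved, here's an even more
minimal sketch in C (mine, and far cruder than lispy: it evaluates only
prefix arithmetic, with no symbols, environments, or lambdas):

    #include <stdio.h>
    #include <ctype.h>
    
    static const char *p;   /* cursor into the input string */
    
    static void skip_spaces(void) {
        while (isspace((unsigned char)*p)) p++;
    }
    
    /* Evaluate one expression: "(op expr expr ...)" or a bare number. */
    static long eval_expr(void) {
        skip_spaces();
        if (*p == '(') {
            p++;                          /* consume '(' */
            skip_spaces();
            char op = *p++;               /* one of + - * */
            long acc = eval_expr();       /* first operand */
            skip_spaces();
            while (*p != ')') {
                long v = eval_expr();
                if (op == '+') acc += v;
                else if (op == '-') acc -= v;
                else acc *= v;
                skip_spaces();
            }
            p++;                          /* consume ')' */
            return acc;
        }
        long v = 0;                       /* a bare non-negative number */
        while (isdigit((unsigned char)*p))
            v = v * 10 + (*p++ - '0');
        return v;
    }
    
    int main(void) {
        p = "(+ 1 (* 2 3) (- 10 4))";
        printf("%ld\n", eval_expr());     /* prints 13 */
        return 0;
    }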

~~~
irahul
A more complete interpreter:

<http://norvig.com/lispy2.html>

------
kunai

      You'll learn to feel every line of code you write
    

This is _exactly_ how I felt when I started writing C.

I started out with .NET and Java. Both had quite high abstraction levels, and
not much deep integration with hardware. I took my control statements,
conditionals, and high-level object-based programming for granted -- I never
WANTED to learn C. I took one look at K&R and was turned off by the insane
amount of low-level work you had to do compared to Java or Python, even to
write a simple character counter.
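
For the curious, the character counter in question is essentially this
(from memory, not verbatim from K&R):

    #include <stdio.h>
    
    /* Count characters on stdin, roughly as K&R presents it. */
    int main(void)
    {
        long nc = 0;
        while (getchar() != EOF)
            ++nc;
        printf("%ld\n", nc);
        return 0;
    }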

I came back to it. I was slow at first, I was confused, but I learned how to
write in C. I learned how to control memory myself. I learned how to do my own
text handling. I learned how to avoid buffer overflows and segfaults.

In many ways, it's like going from a Mercedes-Benz to a Miata. There's a lot
less comfort, and you have to handle a lot of the shifting and oversteer
yourself that the Benz used to handle for you, but you really _feel_
yourself driving, and when you step back into the Mercedes, you feel numbed.

It's the same with C.

~~~
sigkill
A small part of C was beaten into us for two years in the 11th and 12th
grades. Today (I'm not a CS guy, by the way), even when I write programs in
Python or any high-level language, I have this nagging feeling of 'loss of
control' and that the code is not efficient enough. I know this is wrong,
but I can't shake the feeling now that things are handed to you, compared to
flipping every figurative bit in C.

~~~
pointyhats
It is right, not wrong. Being decoupled from the machine by several layers
of abstraction means you're just rearranging the furniture rather than
whittling it out of wood.

I hate rearranging the furniture; it gives no satisfaction.

Satisfaction is a big part of quality in your life.

------
betterunix
We need _less_ code written in C, not more. We already have a problem with a
massive, unreliable, insecure ecosystem of legacy C code that is hard to
escape; writing more software in C makes that problem worse.

Most software is written to solve high-level problems. Using a high-level
language is sensible, time-saving, budget-saving, improves portability, and
saves on headaches later. The same rule that applies to COBOL should apply to
C: only use it when you have to deal with an existing legacy codebase (and
only if rewriting that codebase with a better language is not possible).

~~~
kmm
Has there been any study on the (un)reliability of C? I know for a fact that
practically every piece of software I use is programmed in either C or C++.

The sole exceptions are Anki and Gentoo's portage system, both Python. And
I'm pretty sure the reason portage is so extremely slow is that it is in
Python (I've checked, it's not I/O-bound). And Anki is some very unreliable
software.

In fact, give me a single big desktop software project made with a language
that is not C or C++.

~~~
pjmlp
Just three, ask for more if you wish:

The original version of Skype was done in Delphi.

The first versions of Mac OS were done in Apple Pascal.

Photoshop was originally done in Apple Pascal.

~~~
asynchronous13
1) The original UI for Skype was written in Delphi. The core functionality
is, and has always been, written in C/C++.

2) Mac OS is not a desktop application.

3) Adobe Photoshop is all C/C++ now. They converted it _to_ C/C++ because
they decided that was a better choice. It's tough to sell that as a case for
a major desktop application not in C/C++.

~~~
pjmlp
The parent poster asked for desktop software written in languages besides C
or C++, without any reference to hybrid applications or to the timeframe in
which they were written.

Since when are desktop operating systems not desktop applications?

I can provide other examples, but most likely I will get extra requirements
along the way, so why bother...

After all, you can always use the argument that any application needs to
call C APIs when the OS is written in C, therefore all applications are
written in C.

~~~
kmm
What a victim complex. If someone is arguing that we should stop using C,
and I ask for software not written in C, it's obvious that I'm looking for
contemporary software. And the reasoning that all software is "C" because it
needs to do system/library calls would be ludicrous; preemptively accusing
me of that is insulting.

I've done some research myself, and there are some interesting and big
projects in languages other than C -- Eclipse in Java and Dropbox in Python,
to name two. But my point still stands: I looked at all the desktop
applications I have installed, and they're without exception C or C++.

~~~
thedanfilter
My impression is that desktop software is/was largely written in
C/C++/C#/Objective-C/Objective-C++ because those are the languages to which
the operating systems of today expose their APIs. For example, Win32 =
C/C++, Cocoa = Objective-C, Metro = C# (or I guess anything compiled to the
Common Intermediate Language (CIL)?).

Now, that said, most languages provide a bridging layer that allows them to
call out to those "native" APIs. These are used to build API "wrappers";
however, because these are (I believe) provided by third parties, there has
been a tendency to gravitate toward the "blessed" language of the OS vendor.

One of the big advantages of Java (to me at least) was that it provided a
platform independent windowing capability inbuilt within the JDK that has been
maintained by Sun/Oracle/(and Apple) as new operating system revisions were
released.

Note, for example, that only C/C++/Objective-C/Objective-C++ programs
are/were allowed in the Mac App Store (I'm not sure if this is still the
case...). (Personally, I ported a Java app to Objective-C++ because of
this.)

But generally I agree with your point that this isn't the only reason C was
so pervasively used. However, you also need to consider that a large number
of programming environments and tools were specifically developed to aid
C/C++ programmers, e.g. Borland C++, Visual C++, CodeWarrior, Xcode. It's
also worth remembering that the GNU C compiler and debugger were important
contributions to free software back in the day.

But also consider distribution of compilers etc. I think it is pretty fair to
say that a lot of programmers learnt to program using Borland Pascal/C++
because at that point the Internet was not as accessible as today and copies
of these could be "obtained".

The advent of the Internet has not only allowed the distribution of
compilers and environments for other programming languages, it has also
meant that the languages used for backend systems, i.e. web servers and web
applications, are irrelevant to the user's web browser.

Anecdotally, for safety-critical system software an issue with some languages
other than C is that they have not been suitable for real-time systems. I
don't know much about this other than that exception handling and also garbage
collection can cause issues due to their non-determinism.

I fear that you'll think that the above is a bit too much like saying "all
software is 'C' because it needs to do system/library calls", however I think
it's probably fairer to say "all software is 'C' because many people have
really, really liked it" and "better the devil you know".

~~~
pjmlp
Metro is an updated version of COM.

On the desktop side, you have C++/CX which is C++ with some language
extensions to make talking to COM easier.

For those that would rather use plain ISO C++, there is the Windows Runtime
template framework.

.NET code is JITed or compiled AOT with NGEN and makes use of CCW to interop
with WinRT.

<http://msdn.microsoft.com/de-de/magazine/jj651569.aspx>

On the mobile side, Windows Phone 8 only has native applications, even .NET
gets compiled to native code.

[http://channel9.msdn.com/Shows/Going+Deep/Mani-Ramaswamy-
and...](http://channel9.msdn.com/Shows/Going+Deep/Mani-Ramaswamy-and-Peter-
Sollich-Inside-Compiler-in-the-Cloud-and-MDIL)

------
asveikau
> You’ll realize that Object Orientation is not the only way to architect
> software.

Actually, pretty much all the "good" C code I've seen is using some form of
object orientation, for example using structs with function pointers, or
information hiding through incomplete types and void pointers. I think the
better observation is that you don't need much language support to do mostly-
OOP.
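
A minimal sketch of the kind of thing I mean (the names are invented for
illustration): a struct of function pointers playing the role of a vtable,
with "inheritance" by embedding:

    #include <stdio.h>
    
    /* A "class" as a struct of function pointers plus state. */
    struct shape {
        double (*area)(const struct shape *self);   /* virtual method */
    };
    
    struct circle {
        struct shape base;    /* "inherits" by embedding as first member */
        double radius;
    };
    
    static double circle_area(const struct shape *self) {
        const struct circle *c = (const struct circle *)self;
        return 3.14159265358979 * c->radius * c->radius;
    }
    
    int main(void) {
        struct circle c = { { circle_area }, 2.0 };
        struct shape *s = &c.base;       /* polymorphic pointer */
        printf("%f\n", s->area(s));      /* dispatches to circle_area */
        return 0;
    }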

That and a lot of people in those "other" languages are using language support
for OOP as a bit of a mental crutch. (For example, many people working in Java
or C# especially tend to write lots of indecipherable OO spaghetti.)

~~~
btilly
People who use the preprocessor heavily tend to use a fairly non-OO design.

See <http://www.chiark.greenend.org.uk/~sgtatham/coroutines.html> for an
interesting example of a design technique enabled by that. I also note that
said technique is problematic in practice not because it isn't good - it is -
but because it causes people to suffer a brain freeze.
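
For those who won't click through, the heart of the trick, roughly as the
article presents it (see the article for the caveats, e.g. why the locals
must be static):

    #include <stdio.h>
    
    /* The coroutine's "program counter" is a static int, and crReturn
       uses __LINE__ as a switch label so the next call resumes right
       where the last one left off (Duff's-device style). */
    #define crBegin   static int state = 0; switch (state) { case 0:
    #define crReturn(x) do { state = __LINE__; return (x); \
                             case __LINE__:; } while (0)
    #define crFinish  }
    
    /* A generator yielding 0, 1, 2, ... across successive calls.
       i must be static: an ordinary local would die at each return. */
    static int next(void) {
        static int i;
        crBegin;
        for (i = 0; ; i++)
            crReturn(i);
        crFinish;
        return -1;   /* not reached */
    }
    
    int main(void) {
        printf("%d ", next());    /* 0 */
        printf("%d ", next());    /* 1 */
        printf("%d\n", next());   /* 2 */
        return 0;
    }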

~~~
asveikau
I tend to see these as orthogonal. What does a macro do but generate more
code? So shouldn't the use of macros be reducible to ordinary code? That, and
you can still employ this stuff at the micro level and your chatter between
compilation units or library boundaries can still be OO.

~~~
btilly
In my experience, people who like to use the preprocessor come up with very
non-OO abstractions. Similarly people who program heavily using templates in
C++ tend not to use a lot of the OO features there either.

I may be more aware of this than you because my design sensibilities by nature
are not particularly OO. This is not to say that I can't produce and
understand OO designs. I both can and do, particularly if I have to cooperate
with people who expect that. But given the choice, I'm more likely to head in
a different direction. (A dictionary full of closures can be surprisingly
useful...)

~~~
asveikau
I guess I don't see a strong distinction (or see them on the same axis)
because I like the preprocessor and use it when I feel it's useful, but I
still write fairly OOP-like public interfaces to my code.

~~~
btilly
I guess I'm not sure what you mean by "fairly OOP-like public interfaces".

My strong suspicion is that you're referring to features that were standard in
imperative programs prior to OOP becoming popular. For instance structs in C
are functionally equivalent to records in Pascal which date back at least to
ALGOL W in 1966. By contrast the earliest OO language is Simula 67 which came
out a year later.

If so, then I disagree with you about what "OOP-like" means.

I first learned about the value of information hiding, etc, from _Code
Complete_. The first edition of which was published before OO programming had
become mainstream, and therefore says not a word about OO. That said, the
ideas in that book translate well into an OO world. (And the second edition
does talk about OO concepts.)

~~~
asveikau
If the same ideas as OOP have been around longer than the term itself (and I
do believe you are right in pointing this out), do they cease to be the same
ideas? This, to me, is devolving into a "tomayto, tomahto" discussion.

It reminds me of arguments I used to get into with a friend who was really
into naming patterns and such. I really disliked that; I thought giving a
practice a technobabble-ish name distances it from the concept itself, which
is usually more straightforward than the name suggests. Your comment isn't
the first thing I've read that steers me toward "OOP as a buzzwordy name for
preexisting, good, clean practices".

------
stiff
What you really want to say is not "learn C" but "learn computer
architecture, algorithms, and programming language semantics". C just
happened to be the most convenient medium for this for a long time, but
today there are perhaps better choices, like Go. I really hope C will
finally go away one day, while people will still have to understand things
like indirect addressing, hashing, pass-by-value vs. pass-by-reference, and
so on and so forth.

------
pjmlp
Please don't.

Learn Delphi, Free Pascal, Modula-2, Ada, Oberon(-2),... and see how it is
possible to have down-to-the-metal, strongly typed languages with native
compilers.

Then understand that C and C++'s ubiquity is a historical accident, due to
the way UNIX spread across the industry.

~~~
sliverstorm
Accident? C did exactly what it was designed to do: allow UNIX to spread to
other architectures.

~~~
pjmlp
Any high-level language would have done it in a time and age when most OSes
were still 100% coded in assembly.

Let's say UNIX had used PL/I or Algol 68; then the article would be called
"Learn PL/I" or "Learn Algol 68".

~~~
sliverstorm
I can't say you _couldn't_ have written UNIX in Algol 68, but C was very
specifically developed by K&R for building the underpinnings of UNIX. They
didn't just "pick" C out of a pool of available languages -- C was made for
UNIX.

(This is probably a gross oversimplification -- please don't hang me out to
dry for it -- but it summarizes my understanding of the origins of C.)

~~~
pjmlp
C was made with UNIX, not for it, mostly because the authors did not like
using the other available languages.

There is hardly any feature in the language that makes it better than other
higher-level languages for systems programming.

Please note that I am older than C; for me, a higher-level language is
anything above assembly.

~~~
sliverstorm
I suppose I'm descending into semantics and quibbling over meaningless
details at this point, but I was just taking issue with the idea that C
wound up in its current position by accident. It was not by chance that C
makes up the underpinnings of UNIX, nor did (to my knowledge) the spread of
UNIX have much to do with chance.

I don't mean to contend that there are no other languages that _could_ have
been used.

------
obviouslygreen
While I certainly agree that understanding the "how" will give you a better
appreciation of the "what" and a more thorough understanding of the "why"
behind many languages (I was one of the obnoxiously vocal detractors when
American universities overwhelmingly moved from C/C++ to Java), I think this
goes a little far:

 _When you internalize these lessons, your approach to programming will
completely change._

You can know which parts of a language are heavier without knowing how
they're implemented. Granted, that's not the same thing, but in a practical
sense it's close. Your approach to programming will only "completely change"
if you've been doing things with absolutely no understanding of the
performance impact of the choices you make.

There are certainly people in that situation, but anyone who has had to do any
profiling to find bottlenecks in code (which I would guess means most people
who have been involved in professional development for more than a year or
two) likely has at least an idea of which data and control structures are
going to cause issues in languages they've used at length.

So yes, I agree that understanding what's going on is good... but not
everyone has the time or inclination to delve into the depths of C, and
there are other ways to come by a useful understanding of language internals
(e.g. actually reading about a language's specific features rather than
simply guessing at their implementation based on the assumption that they're
written using standard C idioms).

------
jchonphoenix
A little off topic, but I think people need to realize that when someone says
they're a programmer, all they mean is that they write code.

I know a lot of my friends (especially the ones in tech-heavy cultures like
Google, Facebook, Palantir, Dropbox) easily forget that when someone says
"I'm a software engineer," they mean they write code. It doesn't mean
they've gone through the same coursework, can reduce 3-SAT to everyday
problems, devise intricate algorithms, or have written their own memory
allocator.

Keeping this in mind might make conversations go smoother.

~~~
doktrin
> _when someone says they're a programmer, all they mean is that they write
> code._

How is this an inaccurate or misleading statement? If they write code for a
living, they're programmers.

> _A little off topic_

IMO that's an understatement.

~~~
cthackers
There is a difference: the difference between someone who works as a
programmer, who takes it just as a job but of course can be extremely good
at it -- just like anything else you practice a lot -- and someone who _is_
a programmer. The words are the same, but they mean different things. The
latter, besides working (or not) as a programmer, does it for the art of it.
He doesn't necessarily learn or ship something because it's needed or
because it provides anything beside the joy and fulfilment of learning and
understanding it. For example, at work, when you want to stop programming,
you open your own personal open source project and relax with it for an
hour. Then you go home after 8 hours of programming, start up your IDE, and
carry on with your art, which may or may not ever see the light of day --
and it makes no difference.

~~~
doktrin
That's hazy territory. While I understand what you're trying to get across,
there's really no point adding layers of hidden meaning and subjectivity to
the term "programmer".

(non-software) engineers, lawyers, graphic designers, accountants, chemists,
doctors define themselves by occupation and not some subjective non-metric of
passion. Why exactly should programming be different?

I'm not arguing that programmers shouldn't be passionate about their work,
or work on side projects. However, there's really no point in quibbling over
what a "Real Programmer"(tm) is, beyond the very simple working definition
we have.

~~~
cthackers
Because we're talking about programming here. If we were talking about
music, for example, the same would apply: there are singers and there are
artists, and there is a clear difference between a well-made song and a
commercial one, even if the latter makes you more money. But for programmers
there's just the one term to use, hence the need to point out the hidden
meanings.

~~~
doktrin
There are no "hidden meanings". You're making them up.

edit :

Words, like functions, work best when their meaning is simple and clear. Like
functions, there's nothing wrong with combining them (e.g. "good programmer",
"passionate programmer", "programming craftsman", "software composer"), but
shoehorning multiple definitions into a single word will just inevitably lead
to confusion.

------
wmt
I always felt shame that I knew C but didn't understand how the code works
inside the CPU. Finally I got a grip on myself, got the book with the dragon
on the cover, and managed to make my own stupid C compiler. Turns out it's
not that hard, and doing it finally helped me understand the stack and heap
much better!

It also gave me the confidence to look at how bytecode-based languages work
-- turns out they're also not magic, and can be understood by mere mortals.

TLDR: assembly and compilers can be understood, and it's fun too!

------
mimog
How does a musician without any formal CS education and an admitted lack of
understanding of some very fundamental software concepts, such as pointers
and memory, land a San Francisco dev job? I thought those jobs were in very
high demand.

~~~
binarycrusader
You'd be surprised at how far "not an asshole, actively learns, and is good at
programming" can get someone.

~~~
zedshaw
Not as far as "total asshole, does coke, and actively kisses ass" does at the
same companies. <rimshot>

~~~
jacques_chester
And you've only got 1 out of 3. :)

<rimshot>

------
conradev
I started programming for iOS via the same route, playing around with
Objective-C in Xcode, but have since learned C. I have found that an
understanding of C proves to be essential if I try to do anything reasonably
complex (e.g. <https://github.com/conradev/BlockTypeDescription>).

I have also found that another area where newer developers fall short is
understanding how Xcode works. Those familiar with interpreted languages do
not have to compile their code. While they may understand the concept of
'modules', they don't know the difference between static and dynamic
libraries, what 'linking' is, or how the compiler finds header files.
Someone who doesn't know how this works will run into innumerable headaches
down the road, especially when trying to integrate third-party libraries.

I would suggest a supplementary post entitled "Learn How Xcode Works, You
Cheater"

~~~
hbridges
Might could do that! Actually, one of the biggest mysteries to me for a
while was how header files worked -- how the compiler finds symbols, build
dependencies, linking libs, etc. Having to mess around with Makefiles helps
you digest those things, but then Xcode treats them in an entirely different
way. #include is a complex beast

~~~
tjdetwiler
#include is actually a very simple beast; once you realize that, it's much
easier to digest.
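
It's textual substitution and nothing else -- running cc -E on a file shows
you exactly what the compiler sees. A hypothetical two-file example:

    /* point.h */
    #ifndef POINT_H
    #define POINT_H
    struct point { int x, y; };
    #endif
    
    /* main.c -- after preprocessing, the contents of point.h are
       pasted right here, in place of the #include line. */
    #include <stdio.h>
    #include "point.h"
    
    int main(void)
    {
        struct point p = { 3, 4 };
        printf("%d %d\n", p.x, p.y);
        return 0;
    }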

------
peripetylabs
There exist proof and verification tools for C, like ACSL and Frama-C [1],
or LCL and Splint [2,3]. Not to mention the myriad of static and dynamic
analysis tools. You can _prove_ that a piece of C code does or doesn't
contain certain bugs -- this is not the case for higher-level languages
(except perhaps Haskell and ML). That is why C is used for highly sensitive
projects like avionics.

I would say: learn C _and high-level languages_.

[1] <http://frama-c.com/acsl.html>

[2] <http://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-74.pdf>

[3] <http://splint.org/>

------
rmrfrmrf
Please please PUH-LEASE read "The C Programming Language" if you're going to
learn C. It is THE most important programming book I've ever read. No other
book comes close to concisely and elegantly describing a language.

~~~
gnuvince
If K&R is the most important programming book you've read, you have many
more books to read, methinks :)

------
bmoresbest55
I have to totally agree with this. I have almost completed Learn Python The
Hard Way and it has been very good to me. I have been programming in many
different languages while in school (I just graduated this May, lucky me...).
I must say that this might be the most complete way that I have seen to learn
a new language. I am definitely doing Learn C The Hard Way and I am definitely
going to donate/pay for these materials. They are extremely beneficial to any
programmer even if you know all of the information.

~~~
zedshaw
Thanks! Glad you liked my book.

Any criticisms for improvements?

------
amasad
While I agree that all programmers should learn how computers and compilers
work, learning C would just be a means to that end. You could easily replace
"Learn C" with "Learn Assembly" in this post.

I think this type of post reinforces the over-emphasis on languages,
frameworks, and tools we have in this industry. People restrict themselves
to camps around certain tools and then use that specific tool as a
general-purpose tool for all the problems they face, in a sort of
nationalistic way -- and then go on the internet and start arguing over
which over-used hammer is better, so to speak. What is worse is that
newcomers to programming quickly adopt this attitude, which makes it even
harder for them to learn the right things.

The other day I made a trip to the Academy for Software Engineering in NYC
[1], and it was a great experience watching kids learn how to program. They
were using Python. However, I was really disappointed when a few kids
expressed interest in learning language Foo because someone told them that
Foo was the best language ever! I told them that after learning how to
compute, they can go on to learn all the languages they want and even create
their own.

[1]: <http://www.afsenyc.org/>

------
uggedal
I "learned" C by writing a smal runit/daemontools replacement. Most of the
work was learning the features I needed from libc:
<http://uggedal.github.io/going/going.c.html>

------
ergest
I agree with the original poster. Learning C definitely allowed me to
understand (and feel) the code from the inside. If you just use a hashmap or
array without knowing how you'd build it in C, you're just playing with Lego
pieces. (I also agree with his view on C++, but I won't poke that hornet's
nest.)
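
If you've never built one, the core of a hash map really is small. A sketch
(mine: fixed bucket count, chaining, no resizing or deletion, and keys are
assumed to outlive the map):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    
    /* A deliberately tiny hash map from string keys to int values. */
    #define NBUCKETS 64
    
    struct entry {
        const char *key;
        int value;
        struct entry *next;   /* collision chain */
    };
    
    static struct entry *buckets[NBUCKETS];
    
    /* djb2, a classic toy string hash */
    static unsigned hash(const char *s) {
        unsigned h = 5381;
        while (*s) h = h * 33 + (unsigned char)*s++;
        return h % NBUCKETS;
    }
    
    static void put(const char *key, int value) {
        unsigned h = hash(key);
        for (struct entry *e = buckets[h]; e; e = e->next)
            if (strcmp(e->key, key) == 0) { e->value = value; return; }
        struct entry *e = malloc(sizeof *e);
        e->key = key;
        e->value = value;
        e->next = buckets[h];
        buckets[h] = e;
    }
    
    static int *get(const char *key) {
        for (struct entry *e = buckets[hash(key)]; e; e = e->next)
            if (strcmp(e->key, key) == 0) return &e->value;
        return NULL;
    }
    
    int main(void) {
        put("bricks", 1);
        put("bricks", 2);   /* overwrite */
        put("studs", 8);
        printf("%d %d\n", *get("bricks"), *get("studs"));   /* 2 8 */
        return 0;
    }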

------
kyllo
Thanks for this, I'm going to work my way through _Learn C the Hard Way._ I
started on Java, but now I use a lot of Ruby and Python, and both of those
languages' original/main implementations are written in C, so I want to
learn how C works.

------
smegel
> You’ll realize that Object Orientation is not the only way to architect
> software.

I don't think C is really the way to learn this lesson - try a functional
programming language to really get a feel for an entirely different
programming paradigm.

------
kuchaguangjie
There is no doubt that learning C and assembly language helps programmers
understand higher-level languages better, even though they will not use C or
assembly in their real work.

------
shmerl
Somehow I thought that C is usually in the curriculum of any CS-oriented
university/college program, so most people have at least a rudimentary
knowledge of C.

------
sengstrom
Crap - is C a low-level language now? My view on the world is going to have to
adjust.

------
vbv
Get the C t-shirt for the C hacker in you: <http://teespring.com/c-faster>

------
NTDF
Very interesting thread. I moved to SV a year ago, and my perspective on how
modern programmers think has changed enormously.

As background: I am a software engineer en route to becoming a CPU
architect. My job is to understand how hardware works and what changes need
to go into the instruction set to allow modern software to work better. I
have some thoughts to share.

I also think that a lot of folks on HN are pure software engineers with
little hardware background, so most arguments simply gloss over why (or why
not) to use C.

Let us all be very clear. C is not a great language for app development. This
comes from a person who has worked his entire life with C. I have no qualms in
saying this.

C is primarily just an abstraction over assembly (which is not the same as
machine code). With this in mind, you can safely think of C as a "stateful"
language: each statement gets "executed" and the state of the CPU and memory
changes. The fact that multiple assembly instructions can be produced per
line of C is what makes it better than lower-level languages.

That is all there is to C. Everything else is an add-on. The standard
library functions (strlen(), printf(), etc.) are all essentially a few
assembly instructions clubbed together.
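
For instance, strlen() is conceptually nothing more than this (real libc
versions are word-at-a-time and often hand-tuned, but the idea is the same):

    #include <stdio.h>
    #include <stddef.h>
    
    /* Walk the bytes until the terminating '\0'. A couple of machine
       instructions in a loop -- that is all strlen() fundamentally is. */
    static size_t my_strlen(const char *s) {
        const char *start = s;
        while (*s)
            s++;
        return (size_t)(s - start);
    }
    
    int main(void) {
        printf("%zu\n", my_strlen("hello"));   /* prints 5 */
        return 0;
    }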

Now, as it turns out, OOP and other paradigms were created for the sole
purpose of making it easier to program. In other words, we want to lower the
barrier to entry into programming and have more programmers do things they
would normally never be able to do. We were ready to take a performance
penalty with higher-level languages so that more people could make useful
stuff with a computer.

For example, OOP was made to create the illusion that a program really is
objects interacting with each other (or, for functional languages, functions
being evaluated). However, folks well versed with the computer know that it
is all eventually executed sequentially.

The whole ecosystem of web and app developers is able to thrive because they
do not have to deal with the 'nasty' (I personally find them amazing) issues
of why things work the way they do. Let's just say that programming is
becoming commoditized, and the barrier to entry is lowered by not having to
deal with these issues. If we were still programming in C, most programmers
would have dropped out of their CS classes as freshmen.

An analogy would be you don't need to understand the internals of a combustion
engine to drive a car.

So, should everyone learn C? As someone who cares about expanding the powers
of the computer, I do not think spending resources on it is worthwhile.
There are enough people maintaining C and using it for great purposes.
Commodity (application-level) programming should not involve C as far as
possible. CPUs are fast enough to handle bloatware languages. I would
personally prefer people to think up newer languages/paradigms that would
expand what people can do with a computer.

As a programmer, you do not need to learn it. It is certainly helpful, and I
would definitely encourage folks to try to understand it if they are
curious. But there is no point spending your time on it if all you are going
to do is application-level development.

Now, if you are working on something really deep and involved, the story
changes completely. For example, folks working on router/switch programs,
fast search engines, OSes, devices, or simulations (basically everything
cool, in my biased mind) need to understand it.

An analogy: you do not need to understand the nitty-gritty details of
engines unless you are building something in the range of a Ferrari.

Just my thoughts.

------
peterkelly
Film at 11.

