

Practicing Programming - krat0sprakhar
https://sites.google.com/site/steveyegge2/practicing-programming

======
jdefr89
Although I appreciate what the article has to say (indeed, it's true), I think
we have the reverse problem with regard to the SMARTS/SKILLS/JOBS sections.

Most new computer scientists/engineers these days DON'T KNOW THE FUNDAMENTALS
WELL. You may say people who code in assembly (or are learning it) are foolish;
that couldn't be further from the truth.

Keeping your low-level computing knowledge up to date forms a base for working
at any other abstraction layer up the chain. The core concepts behind those
older skills will still be important 1000 years down the road.

With the older generation fading out, I fear we will lose a lot of the
fundamental, gritty knowledge that all other programming is based on. You
don't see many new-age programmers learning about low-level details like
memory and I/O. Yet in years to come, if we need to make changes and possibly
rebuild some things close to the hardware, we may end up in the dark.
Without the people who handled a lot of the hardcore stuff, we may potentially
lose knowledge when today's younger generation steps up to the plate.

So basically my argument is that any serious hacker should be able to
conceptualize a fully working machine from the logic gates to the software.
Understanding the primitive components of a computer system helps you
everywhere else and will most likely never be knowledge that becomes obsolete.
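That bottom-up view can be made concrete. As a minimal sketch (my own
illustration, not from the comment), here is a half adder built in Python from
nothing but a NAND gate, the classic starting point for "gates to software"
exercises:

```python
# Every classical logic gate can be built from NAND alone.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Add two bits: returns (sum, carry)."""
    return xor(a, b), and_(a, b)

# Print the full truth table.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```

Chain half adders into full adders and you have the core of an ALU; the rest of
the machine is more of the same kind of composition.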

~~~
derleth
Fundamentals are algorithm design and complexity analysis, not the accidental
complexities of some specific piece of hardware nobody will be using in twenty
years.

On the flip side, there will always be teenagers. There will always be people
attracted to assembly language for the same reasons I was: A love of
complexity and half-formed (and mostly mistaken) ideas about what programming
_really_ is.

~~~
spec_laconic
I completely agree.

As I get older, I find the abstractions to be much more interesting: how does
one express a lambda in assembly? Functional programming in general? The
simplicity of Python and the beauty of Lisp / Scheme / F# keep me going as a
programmer.

I wonder, as assembly languages become more complex and fragmented with the
rise of ARM and all of the headaches of custom architectures, whether more
people won't move to the higher levels, just for sanity's sake.

~~~
gizmo686
Actually, functional programming is very straightforward at a low level. When
you define a function, foo, the machine code for that function lives at a
specific memory location. When the source code says foo(*args), the machine
pushes its local variables, its current location in the code, and the args onto
the stack. It then jumps to the memory location labeled foo. Once there, the
code pops the arguments, does its work, pushes the return value, and jumps back
to where it was called from. Back at the call site, the return values and local
variables get popped, and execution continues. Different compilers use
different conventions, but for our purposes this is a good enough model.

The way a normal function call might look in assembly (AT&T-style syntax):

    movl $returnPoint, %eax  # at assembly time, returnPoint is replaced with
                             # the address of the instruction labeled returnPoint:
    pushl %eax               # save the return point on the stack
    pushl %ebx               # push local variables and the arguments
    ...
    jmp foo                  # jump to the memory location labeled foo

    returnPoint:
    popl %ebx                # pop local variables and return values
    popl %ecx
    ...

The thing to note in this example is that when you run the assembler, foo
gets replaced with the memory location of the function's code. If we want to do
a functional-style thing, we need the address we jump to to be able to change
at runtime, so instead of having the assembler hard-code the memory location to
jump to, we get that information from a variable. That would look like:

    ...
    jmp *foo                 # indirect jump through the pointer stored at foo
    ...

This time, foo does not name the function's code; it names a memory location
holding a function pointer. This means that if we want a call to foo() to
actually run the bar() function, we simply need to do:

    movl $bar, foo           # store bar's address in the pointer at foo

foo is the memory location of the function pointer, and bar is the memory
location of the function itself, so now when we dereference foo in the jmp, we
will execute bar().

With regard to the ARM architecture, I've done a little bit of work in it, and
it isn't more complex than x86.

------
jhspaybar
This is some great stuff. Even though it's old, that doesn't really matter:
the same concepts still apply, and the ways to practice that he brings up are
invaluable.

As a programmer I often wonder how to improve, or if the things I'm doing
actually help. Typically my only measure is encountering a problem in the
wild, and being able to recall that one thing I read and typed up a few months
back.

Seeing how other successful programmers approach the problem of personal
growth, so as not to become stagnant in their fields, is useful, especially to
someone like me who is just starting their career. Despite having gone
through college pursuing a degree in CS, the learning I did on my own is some
of the most valuable, and now I have even more ways to keep learning
independently!

~~~
gizmo686
My solution to personal growth in programming is essentially, when I see a
problem and know (or suspect) the existence of a tool better suited to answer
it than what I would use, I learn the new tool.

Granted, I cannot always do that because of time. And in big projects, the
tools I have been using are often the best solution because they are used
everywhere else in the project, and I still want/need my work to be
understandable by others without their needing to learn a thousand different
tools.

------
manish_gill
I have a related question. When I try to do 'interview questions', I usually
get stumped (I'm not really into math). In all the Project Euler-type problems
I've attempted, I find myself continually using the brute-force approach, only
to find someone's very clever (and, in hindsight, obvious) method of doing the
same. Or usually I'll google the problem to find how someone else approached
it, and only after I've studied it, or just 'taken a peek', will I attempt my
own solution.

Now, I've studied algorithms and data structures, and it's not that I'm bad at
algorithms. I can understand well-defined (aka classic) algorithms just fine,
but I find it really hard to find (or create!) patterns in numbers and
manipulate them to solve a complex problem.

Any suggestions on how I can improve?

~~~
gizmo686
1) Start with a brute-force solution, then look for optimizations. A good
way to do this is to have it print out (or display in some form) each of its
guesses. You will almost always see it do something stupid, at which point
you have found an optimization.

2) Write down as many observations about the problem as you can; if possible,
write them in mathematical notation. See what you can find by combining
observations.

3) If you have some sense of what a solution might look like, write it down
and see where it goes; don't wait until you think you have the entire answer.

4) When you look at other people's algorithms, as soon as you see them do
something you didn't, put their paper to the side, do what they did, and see
where you can take it on your own.

5) If what you are doing looks related to a subject you are not familiar with,
research that subject.

6) Similarly, if you can reduce a problem (or sub-problem) to another one, but
you have no idea how to solve the new one, research to see if people have
already solved your new one.

7) If you ever see a sequence of numbers, go here: <http://oeis.org/>

7b) If the only description in OEIS is a link to the Project Euler problem you
are solving, then this technique is probably too cheaty.

8) Invent new variables/functions.

9) Math notation is there to help you, make up your own if it expresses the
problem more cleanly.

10) Never write a false statement on your work paper unless it is clearly
indicated by words like "assume" or "not".

10b) If you see a statement on one of your papers, assume that it is true even
if you forgot why.
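Tip 1 in action, as a small Python sketch (my example, not the commenter's): a
brute-force search for the largest palindrome that is a product of two
two-digit numbers, printing each improved guess so the wasted work is visible:

```python
def is_palindrome(n):
    s = str(n)
    return s == s[::-1]

def largest_palindrome_product():
    """Brute force over all pairs of two-digit factors."""
    best = 0
    for a in range(10, 100):
        for b in range(a, 100):       # b >= a avoids checking each pair twice
            p = a * b
            if p > best and is_palindrome(p):
                best = p
                print(f"new best: {a} * {b} = {p}")   # watch the guesses
    return best

print(largest_palindrome_product())
```

Watching the printed guesses, you notice the best products all involve large
factors, which suggests iterating downward and breaking out of the inner loop
as soon as the product drops below the current best.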

~~~
manish_gill
Thank you for writing that. I've been struggling with this for quite a while
and your pointers can help me be someone more clever than a brute-force code-
monkey. :)

------
alexeiz
Great article. The programming practice "drills" come up a little short,
though. You'd be better off reading the article and then deciding for yourself
how you should practice. My favorite way of practicing and keeping my
programming skills up to date is to do micro-projects taking about 2 or 3
days: some challenging algorithmic problem, solving a problem in a new
language I want to get familiar with, using a new library for some interesting
task, or trying a new build tool on one of my old projects. It's fun, it's
over before you get bored or tired, and you learn something new.

------
Cieplak
Practicing Programming (2005)

~~~
alpb
What's that?

~~~
krat0sprakhar
The essay was posted in 2005 by Steve Yegge.

~~~
packetslave
Was there something in the article that you feel is no longer relevant today?
If not, who cares when it was posted?

------
pycassa
Sorry, off topic, but can someone please explain this: "It's (American
football) more like playing chess than playing soccer."

~~~
RegEx
American football can be viewed as a chess match between the opposing coaches,
with the players having the job of executing the coaches' strategies as
perfectly as possible. Each play, both on offense and defense, is designed and
planned. Here's an example play chart[0].

Soccer, on the other hand, is relatively continuous. The closest thing to an
American football play would be how teams prepare for set plays such as
corner kicks and free kicks, or their overall formation. It's not viewed as a
series of "moves" the way American football is.

[0]: <http://www.wheelbarrowsoftware.com/images/fpd/fs5.jpg>

