
Unix and Flow-Based Programming - nprincigalli
https://groups.google.com/d/msg/flow-based-programming/iaKhbABJ9fw/XlrMf-dnBgAJ
======
teddyh
> _I was going to mention Unix “cat” but forgot._

> _Wildly simple - it takes at most one argument._

That is asinine. Why would a program be called ‘cat’ if it can’t con _cat_
enate multiple files?
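
For what it's worth, the concatenation case is trivial to show (the file names below are made up):

```shell
# cat takes any number of file arguments and writes them, in order,
# to stdout -- the actual con-cat-enation its name refers to.
printf 'first\n'  > a.txt
printf 'second\n' > b.txt
cat a.txt b.txt > combined.txt   # join the two files into one
cat combined.txt                 # the "special case": show it on the terminal
```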

Also, his “grash” program is seriously deficient in the handling of signals –
it completely ignores the issue. There are _many_ subtle issues with signals
which have to be handled correctly when writing a shell:
[http://www.cons.org/cracauer/sigint.html](http://www.cons.org/cracauer/sigint.html)
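
One of that page's points can at least be observed from a script: POSIX shells report a child killed by a signal with exit status 128 + the signal number (SIGINT is 2), and a shell that never checks for this can't tell an interrupted child from an ordinary failure. A minimal sketch, nowhere near the full protocol the article describes:

```shell
# Run a child that dies of SIGINT and inspect how the shell reports it.
# POSIX shells encode death-by-signal as 128 + signal number, so
# SIGINT (signal 2) shows up as exit status 130.
sh -c 'kill -INT $$'        # child shell sends SIGINT to itself
status=$?
if [ "$status" -gt 128 ]; then
    echo "child died of signal $((status - 128))"
else
    echo "child exited with status $status"
fi
```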

~~~
throwaway999888
> That is asinine. Why would a program be called ‘cat’ if it can’t concatenate
> multiple files?

Because he thinks of it only as the "echo file to terminal" command.

The main use of cat was discussed in the “Program Design in the UNIX
Environment” article:

> The fact that cat will also print on the terminal is a special case. Perhaps
> surprisingly, in practice it turns out that the special case is the main use
> of the program. [...] But what about -v? That prints non-printing characters
> in a visible representation. Making strange characters visible is a
> genuinely new function, for which no existing program is suitable. [...] The
> answer is “No.” Such a modification confuses what cat’s job is -
> concatenating files - with what it happens to do in a common special case -
> showing a file on the terminal.

[http://harmful.cat-v.org/cat-v/unix_prog_design.pdf](http://harmful.cat-v.org/cat-v/unix_prog_design.pdf)

~~~
mtdewcmu
Like a lot of people, he overuses cat when it's not needed. `cat FOO | more`
is approximately equivalent to `more < FOO`, except the first one spawns an
extra process for no reason.
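
The equivalence is easy to check by swapping in a non-interactive reader such as `wc` for `more` (`FOO` is a placeholder file name):

```shell
# Both forms feed FOO's contents to the reader on stdin; the pipeline
# forks an extra cat process just to copy the file into the pipe.
printf 'a\nb\nc\n' > FOO
cat FOO | wc -l    # cat copies FOO into the pipe; wc reads the pipe
wc -l < FOO        # the shell opens FOO as wc's stdin directly
```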

~~~
throwaway999888
> except the first one spawns an extra process for no reason.

I wonder how many processes I could spin up in the time it would take for me
to figure out "hmm, no, invoking that command would use one more process than
necessary..."

I don't give a hoot about that when I'm working interactively, e.g. when
using `more`.

------
SixSigma
> Then Tanenbaum came out with his book and it didn’t really take off.

> Then Torvalds open-sourced Linux and that is what started to gain
> popularity.

 _cough_ BSDi _cough_

~~~
nickpsecurity
To support your point:

[https://web.archive.org/web/19990224090656/http://www.bsdi.c...](https://web.archive.org/web/19990224090656/http://www.bsdi.com/company/)

Seeing the list, now I'm fairly sure which BSD SCC likely used in their
Sidewinder firewall. They modified it to have mandatory access controls to
contain breaches of subsystems. This was before SELinux, etc. I never used
BSDi myself, though, so I can't say much about it, except that its users were
getting plenty of mileage out of it, given the reputation of products based
on it - at least in appliances.

~~~
SixSigma
We used BSDi to launch an ISP in 1995, with 20 modems and 512kbps of
bandwidth.

I still have the CD-ROM, and we do hosting nowadays.

~~~
nickpsecurity
How much did it cost out of curiosity?

~~~
SixSigma
I really can't remember. I didn't order it myself.

------
bitwize
Well yeah, but that's not really how complex systems are developed anymore.
Modern software development uses the object as the modular unit of
granularity, not the process -- and APIs, perhaps augmented with message
queues, for communication rather than pipes, sockets, and file descriptors.
This is because it's much easier to reason statically about objects with
well-defined APIs, and, given the appropriate fabric, they can be composed
more flexibly: interacting locally or across process, user, or system
boundaries.

This model dates back at least as far as Smalltalk but what really caused it
to take off was -- wait for it -- Windows COM. So modern development has moved
on from the Unix philosophy and embraced the Windows philosophy.

~~~
chubot
Uh, this is crazy wrong. Maybe in the desktop era objects were more important,
but processes have come back with a vengeance now that everything is a
distributed system.

I'm not sure what you mean about objects being composed across system or
process boundaries. Name a successful system where that's true. Who uses DCOM?
Distributed objects _failed_. I occasionally see people trying to bring back
this way of thinking (e.g. they want methods instead of RPCs and protobufs),
but they have not succeeded, for fundamental reasons.

In reality it's not either-or. You need both ways of thinking. Objects are
static (they exist as an agreement between the compiler and programmer, and
are usually thrown away at runtime); processes are dynamic. Many programmers
that think only in terms of objects learn the hard way that their programs are
brittle and inefficient. System operators think in terms of processes, and
this way of thinking is essential for resilient systems.

In addition, objects are losing importance in distributed systems because
state in a single machine's memory is not very useful. In real distributed
systems you need replicated/resilient state with varying consistency
guarantees. An object doesn't help you with any of those things.

