You will understand why Sed is concise the first time you write a lexer in C by hand. To lex a single-character token you don't need any FSMs; the input character = the lookahead = all you need to recognize that token. You can use getc and ungetc to move back and forth in the input stream; the whole thing can be done with one tidy switch statement.
Lex and Yacc were non-existent, or at least a luxury, at one point. Volumes were devoted to tweaking grammars so the internal data structures used by lexers and parsers didn't exceed the minimum core available at the time.
Okay, so the reason is that Yacc and Lex weren't around at the time?
Now that they are very much there, why is Sed still being used?
Ok, so in cases like mine you're forced to learn it?
No one is forcing you, but Sed is part of the POSIX standard. And it's not the Unix way or the highway: you also have Perl and a host of other scripting languages that are better than the stock tools in the Unices.
The Lisp Operating System solved this problem by having a unified I/O format (S-expressions) and building the lexer and parser into the OS. The Lisp READ and WRITE operators are overloaded methods that parse and serialize Lisp code and data structures respectively. If your configuration syntax or programming language syntax is in fully-parenthesized prefix syntax (i.e. valid Lisp s-exps) you can read and write expressions, statements and functions (the grammar of which is all in your hands, btw) right there in the shell, and from within all running programs. You could even redefine builtin functions from the shell. Imagine if you could shadow the definition of, say, fopen(3), the C function for opening FILE streams. You could instrument it to log all file access, and all the programs you fire up afterwards would have their access to files logged. You can have the functionality of ltrace(1) within seconds.
Ask your smart questions on Google and get enlightened.
Because it works, and because when you finally grok it, it makes a warped kind of sense.
Most old Unix utilities are very concise and elegant, but they are hard to love when you come from friendlier environments.
Another reason plenty of them are so sparse when it comes to command input is that you used to access these machines via serial lines (or, god forbid, teletypes), and any character you could save was worth it.
Hence vi's 'home row' approach (no mouse, no cursor keys).
The only miss in this scheme is ctrl-l, which normally would be a form feed, but it's easy enough to remember.
When you look at it like that, it makes perfect sense to put your cursor control on hjkl (at least for people who know the ASCII control codes...).
I don't know which modifier on the original keyboard made it work that way, but I guess it was simply the control key (because that makes the most sense given the above).
Would be nice to be sure though! (In vi it does the trick not with a modifier at all, but with a mode switch.)
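The "people who know the ASCII control codes" bit can be checked in the shell: Ctrl masks a letter down to its low five bits, which puts h, j, k and l exactly on backspace, line feed, vertical tab and form feed.

```shell
# Ctrl-<letter> is the letter's ASCII code with all but the low 5 bits cleared.
for letter in H J K L; do
  code=$(printf '%d' "'$letter")          # ASCII code ('H' is 72, etc.)
  printf 'Ctrl-%s = %d\n' "$letter" $((code & 0x1F))
done
# Prints: Ctrl-H = 8, Ctrl-J = 10, Ctrl-K = 11, Ctrl-L = 12
# i.e. backspace, line feed, vertical tab, form feed.
```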
shows the keyboard in more detail (but gets the location of the arrows wrong). It is interesting that 'line feed' (10) and 'enter' (13) have separate keys.
Useful when you're copying from your terminal straight to a lineprinter and you want some extra space :)
The following page explains "Famous Sed One-Liners", including the one you mention (it's #36, though you'll likely want to work your way down from the top):
(CatOnMat.net is the site of one Peteris Krumins; he also has pages about AWK and Perl one-liners and he also has some other interesting stuff, e.g. he's gone through MIT's Introduction to Algorithms course videos and blogged each one, breaking the videos down by topics, with timestamps, as well as scans of his handwritten notes, etc.).
I actually still use sed once in a while. I sometimes show up at a corporate client and need to clean up files on a Unix or Cygwin machine that is completely locked down (no installing anything, ever) and doesn't have anything easier to use (Perl/Python). At least you can always fall back on it to perform regexp replaces, fix Unix/DOS newlines, etc.
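For the record, those two fallbacks look roughly like this (file names and contents invented; the \r escape assumes GNU sed, so with a strict POSIX sed you'd embed a literal carriage return instead):

```shell
# Global regexp replace: swap every "foo" for "bar".
printf 'foo and more foo\n' > report.txt
sed 's/foo/bar/g' report.txt > report.fixed     # -> "bar and more bar"

# DOS -> Unix newlines: strip the trailing carriage return from each line.
printf 'a DOS line\r\n' > dosfile.txt
sed 's/\r$//' dosfile.txt > unixfile.txt
```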
Languages are conventions agreed upon by a set of users; sometimes made official, but usually not.
اللغات ماهي الا قوانين وخواص متفق عليها من قبل مجموعة من المستخدمين: احيانا بصفة رسمية، وغالبا غير ذلك.
(In English: Languages are nothing but rules and properties agreed upon by a group of users: sometimes officially, and often not.)
All domain-specific learning is but the acquisition of the appropriate language for the domain used by its experts. Entire concepts will map to a few symbols in the mind of a proficient user.
It means the exact same thing as the first sentence. The #\. at the end is a misrendering by the browser; you need a dir="rtl" attribute on the enclosing element to make it not suck.
(What you see as the "end" is actually the beginning of the sentence; the period should have been the leftmost character, but it's wrapped around. Arabic is little-endian ;-)
I've seen Perl called 'executable line noise'; there is probably an optimum somewhere for brevity vs. readability.
Some language designers seem to have the approach that as long as gzipping the source still leads to a reduction in size, their language can be improved upon...