I find searching bash(1) with less(1) works just fine, and I do it quite often. The groff mailing list, the main discussion forum for all things troff, is slowly chipping away at adding more semantic information to man pages. The BSDs have been making progress here.
Perhaps not, but what is the harm? I'm sure there are people running 1.1 who found out that 1.1.1 is out sooner than they otherwise would have.
I think it's worth keeping in mind that there is always going to be a lot of stuff on HN that we personally consider noise. For example, I couldn't care less about 90% of the JS libraries that are posted, but I realize there is legitimate interest in that kind of thing from the front-end segment of the HN populace. Consider that, for a backend/systems dev like myself, discussing new Go versions, fixes, features, or libraries might be highly interesting. In any case, ignoring a little bit of "noise" is pretty trivial and is just part of living in a diverse society.
Coordinates in the X protocol are signed 16-bit, so a screen can run from 0 to 32,767 along an edge. I agree, though, that the initial complaint is unclear and may be about something else.
Thanks for clarifying. So the follow-on complaint about DPI does not make sense: a screen 50 inches wide at 600 pixels per inch is only 50 × 600 = 30,000 pixels across, which fits.
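To double-check that arithmetic, here's a throwaway Go program (purely illustrative) comparing the pixel count against the signed 16-bit limit:

  package main

  import (
      "fmt"
      "math"
  )

  func main() {
      const widthInches, dpi = 50, 600 // the figures from the comment above
      const pixels = widthInches * dpi // 30,000
      // X coordinates are signed 16-bit, so the usable edge is 0..32,767.
      fmt.Println(pixels, pixels <= math.MaxInt16) // prints: 30000 true
  }

So even that extreme display fits along a single X screen edge.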
OK, I've gone and read the OP's text rather than just the quoted part. It seems he's talking about combining multiple X screens into one logical coordinate space, which I expect one or more of the existing extensions, e.g. Xinerama, already do.
The Go developers have said they deliberately didn't try to write Go in Go because they've done that with other languages, possibly Alef, I forget, and a bug can arise where the fix would be obvious, were it not that the bug exists and would be triggered by the fix. Instead, a more awkward fix has to be figured out that doesn't trigger the bug. Of course, once the bug is fixed the original straightforward fix can be substituted, but it's all unwanted hassle.
How often does this come up? Under normal circumstances, wouldn't you be able to revert to a version of the compiler from before the bug was introduced? If you're still in initial development, then you have the old, foreign compiler to fall back on.
Seemingly often enough that it deterred them with Go. No, you may not be able to revert to before the bug was introduced, as the bug may have been there ever since that feature was added. And as soon as the compiler is self-hosted, the old compiler quickly becomes irrelevant, i.e. the code rapidly diverges from what it can compile.
That is why bootstrapping, done properly, is always done in stages.
You write a compiler that can only compile a specific subset of the language, then use that subset to write the real compiler. There are endless textbook examples of how to do it.
Given who Go's designers are, I don't think they have any issue keeping the C code around.
That's not how the books say to do it, and you're right, given who created Go you'd think they'd know this stuff. :-)
Many generations of the compiler are created. Let's say the compiler-in-C is worked on until it compiles subset Gosub1 (call it G-1), which is just enough to write a compiler-in-G-1 that duplicates the compiler-in-C's behaviour. From then on, the compiler-in-C atrophies. G-2 features are implemented in G-1's compiler, though nothing uses them yet. The compiler's source then uses them, making it G-2 source, compilable only by a G-2-grokking compiler.
Weeks later we have G-40, where a bug is discovered that was introduced in G-20. It wasn't in the compiler-in-C, so that's no use. Choices include fixing it at `head', which can sometimes be awkward as described earlier, or fixing the initial G-20 implementation and then rolling forward all the changes from there, assuming the fix doesn't break code that was depending on the errant behaviour.
> That's not how the books say to do it, and you're right, given who created Go you'd think they'd know this stuff. :-)
Given that compiler design was one of my three main focuses in my CS degree, I read a few books along the way. :)
> Many generations of the compiler are created. Let's say the compiler-in-C is worked on until it compiles subset Gosub1 (call it G-1), which is just enough to write a compiler-in-G-1 that duplicates the compiler-in-C's behaviour. From then on, the compiler-in-C atrophies. G-2 features are implemented in G-1's compiler, though nothing uses them yet. The compiler's source then uses them, making it G-2 source, compilable only by a G-2-grokking compiler.
It is not required to do this in such a fine-grained way.
The first version of the primitive language can already be good enough to offer the minimal set of features needed to compile itself.
Afterwards, the full-language compiler gets implemented in this minimal version and is used for everything else.
There aren't a thousand versions of the compiler; you just need to be restrictive about what is used in the base compiler.
> Weeks later we have G-40, where a bug is discovered that was introduced in G-20. It wasn't in the compiler-in-C, so that's no use. Choices include fixing it at `head', which can sometimes be awkward as described earlier, or fixing the initial G-20 implementation and then rolling forward all the changes from there, assuming the fix doesn't break code that was depending on the errant behaviour.
As I explained, this is not required, because you only have G-2 as the starting point, and it is able to compile whatever the current version of the language is.
Additionally, you get the benefit of eating your own dog food: as the compiler designer you can check whether you are making the right design decisions about how the language works.
Sorry, I don't understand. You seem to be saying there are only two versions of the compiler: one written in a foreign language, e.g. C, the other in a subset, e.g. Gosub, called G-2. But then there's "whatever is the current version of the language", which suggests to me incremental improvements, e.g. the language develops as experience is gained rather than being fully planned on day one. So doesn't G-2 undergo changes to implement these? You may keep calling it G-2, but there are many versions of it (I never said thousands).
Now that the Go 1.0 release exists and is stable, one could write a Go compiler using Go 1.0.
Eventually the compiler will reach a state where it can fully compile Go 1.0.
Now replace the C implementation of Go 1.0 with this new compiler and use it to write Go X.Y, using only Go 1.0 features.
When the need to target a new OS or CPU arises, add a new backend to the Go 1.0 compiler that generates code for the desired target system.
Use the cross-compiler to compile itself with the new backend. Copy the binary to the new system; now use the Go 1.0 compiler there to compile the Go X.Y version, whatever X and Y are.
You don't need multiple versions of the language, and keeping the feature set of the base compiler small makes it easier to write cross-compilers.
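To make "keep the base compiler's feature set small" concrete, here is a toy guard, purely a sketch of the idea (Go's real toolchain has no such tool, and the excluded constructs are my own arbitrary picks), that rejects source using features outside an imagined bootstrap subset:

  // subsetcheck: flag constructs a hypothetical bootstrap subset would omit.
  package main

  import (
      "fmt"
      "go/ast"
      "go/parser"
      "go/token"
      "os"
  )

  func main() {
      if len(os.Args) != 2 {
          fmt.Fprintln(os.Stderr, "usage: subsetcheck file.go")
          os.Exit(2)
      }
      fset := token.NewFileSet()
      f, err := parser.ParseFile(fset, os.Args[1], nil, 0)
      if err != nil {
          fmt.Fprintln(os.Stderr, err)
          os.Exit(1)
      }
      ok := true
      ast.Inspect(f, func(n ast.Node) bool {
          switch n.(type) {
          // Constructs our imagined subset leaves out; the choice is illustrative.
          case *ast.GoStmt, *ast.SelectStmt, *ast.FuncLit:
              fmt.Printf("%s: construct outside the bootstrap subset\n", fset.Position(n.Pos()))
              ok = false
          }
          return true
      })
      if !ok {
          os.Exit(1)
      }
  }

Running something like this over the base compiler's own source would catch a contributor accidentally using a feature that the frozen bootstrap compiler can't handle.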
This is flawed, AFAICS. It assumes that because 1.0 is fixed in specification, there are no bugs in the implementation. To return to my original point: these smart guys are on record stating that's why they didn't do a self-hosting compiler; good enough for me. :-)
Yes, C across AIX, Suns, Silicon Graphics, whatever those HP ones were, and others. Platform differences were common, bugs rare, because many had been there before me, and bugs could always be worked around; I didn't have to fix a C compiler. When writing a compiler, the aim is to fix the compiler itself.
This isn't getting us anywhere; we disagree. I value the opinion of that lot, given their many decades of experience. I used to hold your opinion, based on textbooks. They've made a good point, one I can see has considered thought behind it.
Pre-vim, one would get quite proficient at judging line distances, e.g. 11yy, through practice. Mind you, that was probably helped by text being a fixed size on a serial terminal, and bigger than much text today. However, I do wonder if the article over-promotes movement by lines. Movement commands like ), }, ]], and good old / should be readily considered as well.
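For anyone rusty on the alternatives, a few standard vi motions (the search target "foo" is just illustrative; searches are finished with Enter):

  )         forward one sentence
  }         forward one paragraph
  ]]        forward one section
  /foo      jump to the next occurrence of "foo"
  d/foo     delete from the cursor up to that match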
perl -e 'chdir "/var/session" or die; opendir D, ".";
while ($f = readdir D) { unlink $f }'
It is very efficient memory-wise compared to the other options as well as being much faster.
It is also easy to apply filters, as you would with -mtime or such in find, by changing the final statement.
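For instance, a sketch that mirrors find's -mtime +7, deleting only entries more than seven days old (the cutoff is illustrative):

  while ($f = readdir D) { unlink $f if -M $f > 7 }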
It's not that I'd use Perl for the task, just that I don't think rsync is special. Indeed, rsync has the overhead of forking a child and communicating the bareness of empty/.