>>Without setting up the language environments? I doubt it.<<
You seem to have run nbody.python without setting up the language environment for it.
I'm going to ignore all the rest of your editorializing and try to find something of substance.
Let's just note that you've jumped from #3.1 to #12.1 -- ignoring #3.2 which checks for problems and the subsequent sections that work through those problems and explain some of what you later find so puzzling.
>>Where is that INI file? The README doesn't say. Oh its my.linux.ini (its not mentioned anywhere in the document).<<
You seem to have found the ini file.
>>No comment on what %X or %A may mean<<
Actual comments on what %X or %A may mean:
; %X %T %B %I %A in commandlines are replaced like this:
;
; nbody.python-4.python %X = nbody.python-4.python
; nbody.python-4.python %T = nbody
; nbody.python-4.python %B = nbody.python-4
; nbody.python-4.python %I = 4
;
; %A = [testrange] value or 0 when the program takes input from stdin
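For illustration only, the substitution those comments describe could be sketched like this (my own sketch, not the bencher's actual code; the function name and parsing convention are inferred from the comments above):

```python
import os

def expand_commandline(template, filename, testvalue=0):
    # Given e.g. filename = "nbody.python-4.python":
    #   %X -> the full filename, %T -> the test name ("nbody"),
    #   %B -> the basename without extension ("nbody.python-4"),
    #   %I -> the variant id ("4"), %A -> the [testrange] value.
    base, _ext = os.path.splitext(filename)            # "nbody.python-4"
    test = filename.split(".", 1)[0]                   # "nbody"
    variant = base.rsplit("-", 1)[1] if "-" in base else ""
    return (template.replace("%X", filename)
                    .replace("%T", test)
                    .replace("%B", base)
                    .replace("%I", variant)
                    .replace("%A", str(testvalue)))

print(expand_commandline("python %X %A", "nbody.python-4.python", 500000))
# -> python nbody.python-4.python 500000
```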
>>What, only python? Oh, I need to copy the programs from the "bench" dir. Where in the readme does it say that? Nowhere.<<
You don't seem to have read sections #3 through #11.
>>Okay, did that, re run the script, and it runs all nbody and regexdna benchmarks, and none of the others. Why?<<
You don't seem to have read sections #3 through #11.
>>Oh, it uses an old way of importing GTop<<
That actually is worth updating the readme about!
>>of course, I had to remove everything in tmp/* before re-running the test, otherwise it thinks there is "nothing to be done".<<
No you didn't. You don't seem to have read sections #3 through #11.
>>No summary dir here.<<
That actually was a bug! Empty directories weren't included in the snapshot zip.
>>improve readme by...<<
You didn't seem to read sections #3 through #11 (very important)
No, you didn't seem to read my suggestions at all.
>>What, only python? Oh, I need to copy the programs from the "bench" dir. Where in the readme does it say that? Nowhere.<<
> You don't seem to have read sections #3 through #11.
If you grep the file, there is no mention of the bench dir anywhere in sections 3-11. Please do tell where you found it. The readme does say how to add new programs, but it doesn't say where the original programs of the shootout are in the distribution.
>> of course, I had to remove everything in tmp/* before re-running the test, otherwise it thinks there is "nothing to be done".<<
> No you didn't. You don't seem to have read sections #3 through #11.
No, you didn't read what I was doing. I had to remove everything in tmp/* because I was trying to enable gtop by editing the source code of the program. The old results were made without gtop and only contained cpu and elapsed time.
Finally, the "actual comments" on what variables are available for the commandline strings aren't written anywhere in the README. Only "%A" is explained in 9.3, and "%I" isn't even mentioned.
So it looks like all my suggestions remain valid:
From the first round of suggestions, dropping the "tmp" dir item on the assumption that the summary dir bug will be fixed:
1) improve readme by mentioning the location of my.linux.ini and my.win32.ini and adding the complete syntax for the commandlines section etc. (important)
2) invert the entries in the [alias] section, because the way it's set up now makes no sense. (not important)
3) separate test and environment configuration in different files and share the test configuration files used on the website to help people reproduce the same results. (very important)
4) fix the script to use the introspection bindings for gtop
Note that you failed to comment on (3), which would by far improve the bencher the most. In fact, that is exactly where I stopped trying to reproduce your results: at the point where I had to extract all the values for the testrange from your CSV files, because the ones used on the website aren't distributed anywhere.
But of course, you're free to keep ignoring useful comments from other people. Which brings me back to the original point: we're in dire need of a new benchmarks game.
> You seem to have found the benchmarks game "original programs" in the project tarball.
> People measure their own programs with bencher, not the benchmarks game programs -- it's not dependent on the benchmarks game programs.
It is not, but if you sincerely want to make it easy for people to reproduce all or parts of the benchmark, you should at least document where they can find all the settings of the original.
>>I had to remove everything in tmp/* because...<<
#5.3 #5.4
Fair enough, I missed those two points in the README. A weaker point still remains, though: it's not very user-friendly or obvious; something like a --force flag (or something in the spirit of make's --always-make) would be.
These things may not seem important, but I think they are when distributing software with the intent of someone else running it.
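As a sketch of what that might look like (the flag name and script structure here are my own illustration, not anything in the bencher), the runner could accept a --force option and consult it before skipping cached results:

```python
import argparse
import os

def build_parser():
    parser = argparse.ArgumentParser(description="run benchmarks")
    parser.add_argument("--force", action="store_true",
                        help="re-run even when cached results exist in tmp/, "
                             "in the spirit of make's --always-make")
    return parser

def should_run(result_path, force):
    # Without --force, a benchmark with a cached result is "nothing to be done".
    return force or not os.path.exists(result_path)

args = build_parser().parse_args(["--force"])
print("forced re-run:", args.force)  # -> forced re-run: True
```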
>>testrange from your CSV files, as the ones used on the website aren't distributed anywhere<<
Ah, in nanobench. Well, that was definitely not easy or obvious, was it? My fault for not running find on the directory.
And how about that suggestion of splitting the configuration file to two separate files: program configuration and environment configuration? It would make it easier to run the benchmarks on the same set of programs with the same parameters, except in a differently configured environment. (the set of language implementations and their build and run commands would be a part of the environment configuration)
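As a sketch of what that split might look like (the file names, section names, and values here are hypothetical, chosen only to illustrate the idea), the runner could read a shared tests file and a local environment file into one configuration:

```python
import configparser

# Hypothetical split: tests.ini holds the programs and their [testrange]
# values (shared, distributed alongside the website results), while
# environment.ini holds the local build/run commandlines (edited per machine).
tests_ini = """
[testrange]
nbody = 500000 5000000 50000000
"""

environment_ini = """
[commandlines]
python = python %X %A
"""

# interpolation=None so configparser leaves the %X/%A placeholders alone.
config = configparser.ConfigParser(interpolation=None)
config.read_string(tests_ini)
config.read_string(environment_ini)

print(config.get("testrange", "nbody"))
print(config.get("commandlines", "python"))
```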
> Your comments are useful because they show what someone might become confused about.
That was exactly my point: to prove that your "easily" claim is a stretch. The README does not cover everything, especially not the parts needed for someone to bench the programs found on the website. That's understandable, since it's a README for the bencher program.
Perhaps a separate README for the entire archive (documenting where the original configurations and programs are) would fix most of this confusion. More descriptive directory names would also help (e.g. "game-programs" and "game-configurations").
And if I seem so negative, it's because I used to love the benchmarks game and feel that every change you've made lately has significantly subtracted from its fun factor while not adding anything significant.
For example, that recent removal of the (not very useful, but still extremely fun!) combined language comparison table. Why remove things? It's sufficient to simply warn the user; if they ignore the warning and take those numbers seriously, that's their own fault. Why should they get to ruin the fun for everyone else?
>>to prove that your "easily" claim is a stretch<<
> Yes, you were trying to prove that -- you were finding fault instead of finding how to make things work.
> Other people found how to make things work, because that's what they were trying to do.
Not really. The first time I downloaded the zip (about a year ago) I sincerely tried to make it work, and I did read the readme. I spent about an hour, then gave up. This time I simply retraced those exact same steps with the intent of showing the current deficiencies.
>>Its sufficient to simply warn the user.<<
> Was it sufficient to list
> Willingness to read the README
> as a requirement? Apparently not.
Yes, it is. It's my fault for not reading the README completely (though, to be fair, not all directories of the distribution were covered).
You don't have to remove the complete zip archive and then add a quiz to the website testing whether the user has read the README before allowing them to download it.
If that doesn't work either, what then? Will you completely remove the download link to the archive, ruining it for those willing to read the README? The way I see it, you've done your part, the rest is up to the user.
This is where I strongly disagree with your approach - and this is why I want to make a new benchmarks game.
Do you mean the project tarball or do you mean the bencher zip?
> I sincerely tried to make it work
Did you ask for help?
> This time I simply retraced those exact same steps
This time you've told everyone such-and-such isn't in the README when it is; such-and-such aren't written anywhere when they are; such-and-such aren't distributed anywhere when they are.
None of that stuff changed in the last year.
>>and this is why I want to make a new benchmarks game<<
So get the tarball tomorrow, read the README, find out what doesn't work for you and fix-it (if you're missing python-gtop install it), make measurements and publish them. Easy.
> This time you've told everyone such-and-such isn't in the README when it is
I clearly demonstrated all the deficiencies of the current distribution, especially with regard to running the same programs (with the same arguments) as those on the website.
- your explanation of the commandline variables was missing from the readme
- there was no reference to the actual location of the ini file (my.linux.ini). I expect a section like this:
The bencher is configured with an INI file.
There are two example ini files included with the
distribution: my.linux.ini and my.win32.ini,
located in the "makefiles" directory.
- this can't be found anywhere in the distribution:
The ini files from the game website can be found
in the nanobench/makefiles directory:
<listing of the files>
Then I will agree that the documentation is complete, and that the process to run your own benchmarks is almost "easy".
If you separate the configuration files, organize the directories more appropriately and write a more condensed README that skips the condescending act towards the user, then I will agree that running your own benchmarks is easy.
> Did you ask for help?
I wouldn't have had to ask for help if the documentation was complete and adequate.
I will however check out the tarball tomorrow.
Oh and what about the suggestion to separate the environment configuration from the programs configuration? No comment, I guess...