Hacker News
Coding Intentionally in Bash Grains (exercism.io)
96 points by ihid 69 days ago | 32 comments

I think exercism.io is hugely valuable for solving three problems:

- Where can I find a set of practice problems which will force me to really engage with learning a new language?

- Given a specific problem, what does the solution look like in various languages?

- Given a specific language, what do the solutions to a variety of problems look like?

Unfortunately, exercism's web interface is really only set up to cater to the first of those.

So I made a couple of indexes to solve for the other two:



Nice work! I'd be interested in how we could improve the UI to cater better to your third point. If I'm interested in how a language looks, I tend to find one good solution, then look at the poster's profile and browse their other solutions in that language. I'd welcome your thoughts :) Would you like to open an issue on https://github.com/exercism/exercism so we can discuss this more widely with the Exercism community?

Great! Actually I had opened an issue a while ago but with a much larger scope in mind -- to also do some AST analysis and try to clump the solutions by similarity, so that the UI could present "There are three basic ways to solve this..."


(that's my work account)

Ah awesome. Sorry I dropped the ball on that. I'll reply now. Have you seen about https://exercism.io/blog/automated-mentoring-support-project ?

I was really into exercism before they changed the interface. And I know that change is good, but the peer mentorship I got when the previous interface existed was decent, and at least pretty consistent.

With the new interface, I need to wait for official mentors, and while some have been way above and beyond, the dearth of reactions from anyone for long periods can be a real downer. I preferred the semi-OK amateur mentorship over the wildly inconsistent/non-existent mentor assistance.

There was one mentor that I would have happily paid directly to keep giving me feedback, but there was no mechanism in exercism for me to make that offer, even if exercism took a cut.

Also, in the old interface the community solutions were sorted by recency, so it was easier to interact with fellow learners. The new interface isn't like that; I'm not sure what the order is, but it's of no use for me to comment on a piece of code submitted two years ago.

> I preferred the semi-OK amateur mentorship over the wildly inconsistent/non-existent mentor assistance.

Just to chime in with data here: there is 5x more feedback per day on the new system than there was on the old one. Depending on the track, you'll get feedback in anywhere from 3 hours to 7 days (very occasionally longer), so it is definitely inconsistent in that respect, but you will get feedback. In the old system, you'd get feedback on under 10% of submissions.

> There was one mentor that I would have happily paid directly to keep giving me feedback, but there was no mechanism in exercism for me to make that offer, even if exercism took a cut.

In the future we'll be adding the ability to "pair" mentor/mentees who have both rated the other highly, so this will sort-of happen for you.

How do you define feedback?

I got things like kudos etc... and comments which were helpful, I would have considered either one feedback. In general, there seemed to be more interaction, regardless of whether you call it feedback, or social validation.

The pair mentor/mentee sounds good, I look forward to it!

So there are two modes: 1) Mentored Mode: Submit, get structured feedback from mentors, then publish and get stars (ie upvotes/kudos) and/or public comments. 2) Independent Mode: Submit, publish, get stars (ie upvotes/kudos) and/or public comments.

So Independent Mode is very similar to "classic" Exercism.

I'm defining "feedback" really as (1) - so there are 5x more comments (from mentors + public) now than there were in old Exercism (public).

So for a tangible example:

- Feb 1st 2018 (classic): 162 comments, 110 stars

- Feb 1st 2019 (new): 831 comments, 46 stars

I randomly picked that date, but every date is basically the same sort of ratio.

Lots of people say the same as you do that it feels like there's less interaction, but really there's a lot more. I feel like we're failing to make Exercism feel as alive as it actually is for some reason, but I'm not sure why :)

Are these numbers normalized to the number of people making the interactions? Or the number of learners? Did you have 5x growth over the past year? Sorry, I am kind of being critical, but mainly because my experience degraded so much. I trust you to run the place in whatever way you see fit, and any gripe I have is because I really want exercism to work for me. It's the best way I've up-leveled my skills outside of coding on the job, and I really feel like I hit a huge speedbump when the new site went live.

They're not normalised, no. But the theme holds true. We have about 80% more solutions submitted per day and about 500% more comments per day. So engagement per solution is much higher.

It's also worth pointing out that this whole thing is dramatically evolving. It may be that you're a more "expert" developer doing the more complex problems, which effectively puts you lower in the queue (complex solutions are harder to mentor than basic ones). Things like https://exercism.io/blog/automated-mentoring-support-project aim to free up our mentors and give them more time to deal with the harder problems. And things like https://github.com/exercism/exercism/issues/4658 and https://exercism.io/blog/track-anatomy-project aim to make mentoring easier and more enjoyable, which means we keep mentors around for longer and they mentor more because it's more fun for them. It may also be that you're on a track that has had 5x or 20x growth (I'm not sure if there are any, but you get the idea), in which case the speedbump might have corresponded more to the growth of that track than to v2.

If you don't mind telling me your exercism handle, I'd be interested to dig into your situation further to understand why you hit that speedbump. It's easy for me to talk in terms of data, but I'm limited to seeing the general patterns rather than hearing real people's stories such as yours, so it's really valuable for me to understand. Feel free to email me (jeremy@exercism.io) if you don't want to disclose it on here :)

I've been independently doing the Javascript track. I've been waiting for feedback on my Forth-like solution for a couple of weeks now. I can understand why, since it's a volunteer effort, and mentored mode users are a higher priority. I am trying to resist the temptation to rewrite it until I get feedback on the version I originally submitted.

I haven't received any peer feedback on my submissions, either, and I've seen almost no stars or comments on any of the solutions I've viewed. There seems to be very little of either one. I think giving feedback is hard and deserves care, which is why I haven't given any myself.

These aren't intended as complaints, just observations on my experience. I've decided the main value for me is that it gives me a set of practice problems, and the opportunity to learn from other people's solutions.

If I’m not confused about the new feedback mechanism, the site specifically discourages comments about the quality of the code, in favor of questions about it. I assume that’s to avoid discouraging people with overly negative feedback.

This is frustrating to me; in the Erlang space, we don’t have enough people to abuse the system, so I used to be able to provide and receive feedback from anyone without any negativity.

For public comments, this is correct. For mentoring (in Mentored Mode) you will receive comments about the quality of code.

That's unfortunate.

The code presented in the article doesn't communicate the problem. It communicates a clever optimization that will break if the spec changes at all.

This code should be readable enough to give you an idea what is happening and how to change it.

  my $board-depth = 8;
  my $board-width = 8;
  my $squares = $board-depth * $board-width;
  my @grains-per-square = ( 1 .. $squares ).map({ 2 ** ($_ - 1) });
  say [+] @grains-per-square;
Which, if you read Perl 6, says: take the number of squares, put 2^(n-1) grains on each one, then add them all up.

Hint: `[]` is the reduction metaoperator; it lets you specify an operator to use when reducing a list. `[+]` means "add it up".

It finishes basically instantly, and while Perl6 has gotten a lot faster, it is not a fast language (yet?). So no need to optimize this.

I mean, you write the code, and then marketing will come along with a poorly specified variation like "double only the odd ones".

If you implemented this as `2^64-1` then you are starting from scratch.

For the above the map only needs a tweak:

  map({ $_ %% 2 ?? $_ :: 2**(n-1) })
Or whatever it turns out that marketing meant.

In the article, I didn't see informational intentionality, I saw premature optimization and obfuscation.

Similarly, in your code, there's nothing about rows or columns inherent to the problem. The reference to a (standard) chess board is really there to establish 64 tiles in a linear set. I wouldn't add things that aren't necessary or prematurely optimize for nonexistent future requests.

Orthogonal to your content, but perl6 allows "-" in variable names? ugh. I get that the "$" delineates a variable, but on first read it looks like "board minus depth".

Kebab (or train) case (foo-bar) is actually really nice to type and easy enough to read once you get used to it. It's nicer than using underscores because you don't have to keep chording the shift and minus keys.

As to mixing variable names up with subtraction: you put spaces in your math formulas, right? someVar-anotherVar*thirdVar is pretty unpleasant to read, so not being able to write it that way is not much of a problem.

Perl6's relationship to sigils (like $) is a bit weird at first, but is very consistent and "fits lightly under your hands" in practice. Suffice to say, sigils indicate context and constrain the type of data you can put in a container. If you want to refer directly to a value, you use a sigil-less variable:

  my \the-great-answer = 42;
See the docs on variables if you want more information: https://docs.perl6.org/language/variables

Why would anyone name their variable "_-1" ? Oh wait—

`_-1` is an invalid variable name.

The `-` must be followed by an alphabetic character (or `_`) for it to be seen as part of an identifier.

So `_-1` is the same as `_ - 1`

      my \_ = 4;

      say _-1; # 3

      sub _ () { 8 }

      say _-1; # 7
It may be a bad idea to name a variable or subroutine `_` but that is for you to decide, not for Perl6 to decide. (It's not your overprotective mother.)


I suppose if you really want to do something completely daft like that, there is not really anything stopping you:

      my \_ = 3;
      say _-1; # 2

      my \term:<_-1> = my $ = 4;

      say _-1; # 4
      _-1 = 53;
      say _-1; # 53

      say _ -1; # 2
Note that it doesn't just create a variable named `_-1`.

What it does is much more powerful than that. It modifies the parser lexically to add `_-1` as a term. (Since it is lexical it stops being valid after the closing `}`)

The `my $` is just so that it has a rewritable container so that it can be reassigned to `53` later.

This can be useful for constants that wouldn't otherwise be a valid identifier, and for writing subroutines that are parsed as a bare identifier like a constant would be.

    constant term:<> = …

    sub term:<foo> () {…}


    foo(); # ERROR: Undeclared routine: foo used at line …

    foo 1; # ERROR: Two terms in a row

    map({ $_ %% 2 ?? $_ !! 2**(n-1) })
Ternaries in Perl 6 are done with ?? and !!, not with ?? and ::

My fingers hate that and I always, always get it wrong. Always.

Fortunately the error messages are very good.

On the one hand, the article's point that "solutions should be written in such a way that the intent is clear" makes sense for code which needs to be read and maintained. e.g. Martin Fowler suggests even one-line methods can be useful if they describe a distinct intent. https://martinfowler.com/bliki/FunctionLength.html

But I don't think the point is well made with a contrived whiteboard-interview style problem.

I'm basically stuck on the Elixir track waiting for a mentor...

I'd be glad to pay per puzzle or a subscription to get faster feedback.

Please, exercism. Take my money.

Perhaps connect up with someone on codementor.io to review what you've done?

I just want to point out that it's really annoying that the "Available language tracks" on the front page are in a completely random order that changes on reloads. It's already hard enough to figure out where the language I'm curious about is, and even worse when the order changes after I click to a new page.

Also, clicking "explore languages" at the bottom of the blog does nothing.

FWIW, the logged in view of this is an alpha-sorted list, basically this view https://exercism.io/tracks

I would guess the front-page order changes on reloads as a design choice, to surface something potentially interesting to everyone, but I am not affiliated with them and have no additional insight. I can see how it would be frustrating.

They must have removed the link you are referencing, because I don't see it now. Clicking it from the front page works (probably because it is an anchor link).

The "explore languages" link is now fixed. Thanks :)

Feel free to open an issue at https://github.com/exercism/exercism for the random language order. If there's consensus then it's an easy one to change.

Expressing the intent of the code significantly improves maintainability. I think it's important to realize that the problem of missing intent is often only noticed upon review of the code.

So in my opinion, the moral of the story isn't simply "design with intent", it's "get your code reviewed".

Often it makes it through review just fine. It's usually not discovered until someone unfamiliar with the problem or complete set of initial circumstances is exposed to the code, at which time their main focus is on fixing/enhancing the code and not changing its readability or documenting intent. The new dev vomits out some code to meet their deadline and now the code is even more convoluted than it was to begin with.

In a perfect world, readability would be one criterion during reviews; unfortunately it is often overlooked.

Since this is using Bash, it might be nice to mention that you can't use Bash's built-in math evaluation to solve the problem, because Bash uses fixed-width (signed 64-bit) integers, and therefore:

  echo $((2**64))  # prints 0

You can do this in any area. In "Without Limits" [1], Bill Bowerman calls a team meeting at 7:27, specifically so everyone is sure to show up on time.

[1]: https://www.youtube.com/watch?v=yuAtOMGyUx8
