nathanhammond's comments

You both are in violent agreement and it is amusing to see in the wild.

As an 外國人 (foreigner) who learned Cantonese as an adult (I moved to HK), I'm jealous of the quantity and quality of materials that exist for learning other languages (i.e., anything but Cantonese). That being said, there are _enough_ materials that it's nowhere near as rough as e.g. Shanghainese.

My opinion on "hard languages" reduces to "is this the first language you're learning from a particular language family?" If so, it's hard to learn. But whether a language "is ontologically hard" isn't something I think is really worth ranking. Any four-year-old can speak their mother tongue just fine.

But the perception of "hard to learn" did work in my favor when learning Cantonese: as a 鬼佬 (gweilo, a Westerner) who speaks Cantonese, I was given lots of latitude to be bad while learning because of that perception. And I could go back and learn Mandarin now and it would be _much_ simpler than the task I faced in learning Cantonese.

That being said, I still write in 口語 (colloquial written Cantonese). I'm slowly learning 書面語 (standard written Chinese) as I read more and more of it.


Hi Nathan, long time no see! :)

(Not sure if you remember or recognize me from this handle. I was with Chaak on the words.hk project. Also, Jon spoke highly of you for helping with the tough problems on the fonts :D )


I did guess it was you, but wasn't sure. :P

Seeing your handle, I'm at risk of explaining something you may already know, but this exists! It was standardized in 1993, though I don't know when Unicode picked it up.

Ideographic Description Characters: https://www.unicode.org/charts/PDF/U2FF0.pdf

The fine people over at Wenlin actually have a renderer that generates characters from this sort of programmatic definition, their Character Description Language: https://guide.wenlininstitute.org/wenlin4.3/Character_Descri... In many cases they are the first digital renderer for new characters that don't yet have font support.
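If it helps to see one concretely, here's a minimal sketch in JavaScript (only the codepoints matter; the variable name is mine):

    // An Ideographic Description Sequence (IDS): U+2FF0 means "left to right",
    // so the sequence ⿰氵每 describes 海 as 氵 placed beside 每.
    var ids = "\u2FF0\u6C35\u6BCF"; // "⿰氵每"
    console.log(ids); // most fonts just show the raw three-character sequence;
                      // a CDL-style renderer can compose it into a single glyph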

Another interesting bit: the Cantonese linguist community I regularly interface with generally doesn't mind unification. It's treated like the difference between a "single-storey a" (the one you write by hand) and a "two-storey a" (the one in this font). Sinitic languages fractured into families in part because the graphemes don't explicitly encode the phonetics and the populations were physically distant, and the graphemes themselves fractured because somebody's uncle had terrible handwriting.

I'm in Hong Kong, so we use 説 (8AAC, normalized to 8AAA) while Taiwan would use 說 (8AAA). This is a case my linguist friends consider a mistake, but it happened early enough that it was only retroactively normalized. Same word, same meaning, grapheme distinct by regional divergence. (I think we actually have three codepoints that normalize to 8AAA because of radical variations.)
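A quick JavaScript check makes the point that the distinction lives at the codepoint level, and that the "normalization" here is a convention rather than Unicode canonical equivalence:

    console.log("說".codePointAt(0).toString(16)); // "8aaa"
    console.log("説".codePointAt(0).toString(16)); // "8aac"
    console.log("説".normalize("NFC") === "說");   // false: NFC does not merge them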

The argument basically reduces to "should we encode distinct graphemes, or distinct meanings?" Unicode has never been fully consistent on either side of that. The latest example: we're getting ready to do Seal Script as separate, non-unified code points. https://www.unicode.org/roadmaps/tip/

In Hong Kong, some old government files just don't work unless you have the font that has the specific author's Private Use Area mapping (or happen to know the source encoding and can re-encode it). I've regularly had to pull up old Windows in a VM to grab data about old code pages.
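When you do know the source encoding, the re-encode step itself is straightforward; a sketch in Node (iconv-lite is one library with legacy CJK code page support; the filenames are made up):

    var fs = require("fs");
    var iconv = require("iconv-lite");

    var raw = fs.readFileSync("old-gov-file.txt");   // bytes in a legacy code page
    var text = iconv.decode(raw, "big5");            // e.g. Big5 for HK/TW material
    fs.writeFileSync("old-gov-file-utf8.txt", text); // re-save as UTF-8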

In short: it's a beautiful mess.


Is this published anywhere? Constraints I'm trying to solve for:

- Reasonable English counterpart. (Doesn't have to be 1:1.)

- English-pronounceable Cantonese romanization.

- English-pronounceable Mandarin romanization.

- Doesn't diverge too much between the three.

- Characters that are identical in Traditional and Simplified.

- Pleasing to the relatives.


Yep, absolutely. Taken together, these constraints probably yield nothing. We had to compromise on some of them.


Ember A11y

We're making Ember accessible by default.

Monthly Goals:

  - Support Ember.js' internal upgrade to the new rendering engine.
  - Adopt dynamically scoped variables to make ember-a11y function.
Skills needed: Ember.js, Glimmer, Accessibility

Slack: https://ember-community-slackin.herokuapp.com/

Ember A11y: https://github.com/ember-a11y

Ember A11y Addon: https://github.com/ember-a11y/ember-a11y

License: MIT


Ember CLI

We're building a tool that makes it easy to build and maintain Ember applications.

Monthly Goals:

  - Finish moving off of Bower and onto our npm infrastructure.
  - Upgrade our internal npm usage from 2.X to 3.X.
  - Make our story for caching using Broccoli far more efficient.
  - Improve the Node ecosystem's publishing patterns for the projects we use.
Skills needed: Node, npm. Familiarity with Ember.js & Broccoli is not required, but a bonus.

Slack: https://ember-community-slackin.herokuapp.com/

Ember CLI: https://github.com/ember-cli/ember-cli

License: MIT


1. Are you using Ember.computed's array methods? I tend to go with intermediately calculated arrays which I then union/diff/whatever is necessary for filters. It is also significantly faster than function-defined computed properties. The only bug I know of for this functionality was fixed in the 1.5 branch by @hjdivad.

2. The typical pattern is lots of computed properties. They're lazily calculated, so that makes them pretty cheap.
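A minimal sketch of that laziness (Ember 1.x idioms; the property names are made up):

    var person = Ember.Object.extend({
      fullName: Ember.computed('first', 'last', function () {
        console.log('computing'); // runs only on the first get after a change
        return this.get('first') + ' ' + this.get('last');
      })
    }).create({ first: 'Ada', last: 'Lovelace' });

    person.get('fullName');       // logs 'computing', returns "Ada Lovelace"
    person.get('fullName');       // cached: returns without recomputing
    person.set('first', 'Grace'); // invalidates, but nothing recomputes yet
    person.get('fullName');       // recomputes only now, on demand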


> 1. Are you using Ember.computed's array methods? I tend to go with intermediately calculated arrays which I then union/diff/whatever is necessary for filters. It is also significantly faster than function-defined computed properties. The only bug I know of for this functionality was fixed in the 1.5 branch by @hjdivad.

This was a single value (e.g. 'selectedItem') populated by an Ember.Select view/component. After changing the selection a number of times, everything watching 'selectedItem' completely stopped firing (until a page refresh). No idea why.


There is going to be some future work on some of the form elements. They're really not quite where they need to be.


Here is a pretty complete list of significant Ember applications:

https://docs.google.com/document/d/1ZWYq3gwkPTzUiyqr4x_asSj8...


In general:

    how much code the browser needs to parse and execute on every page load
The caveat being that that code only needs to be parsed and executed once in a single-page application.

    weight of execution payloads across operations in your app
Presumably this cost is providing value to the developer in terms of reduced development time, reduced complexity, or improved correctness. You shouldn't typically incur significant wasted cost in execution; it's a selection of tradeoffs. Besides, rendering to the DOM is still the longest tent pole by far, compared to microseconds for JS execution.

EDIT: With regards to the "independent test," that is in no way a realistic use case. You would never add items to the DOM individually when you're in that tight of a loop. You would instead build up a cache and write once. The only thing of consequence that is being measured in that benchmark is each framework's DOM insertion speed.
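For instance, the plain-DOM version of "build up a cache and write once" (a framework-free sketch):

    var fragment = document.createDocumentFragment();
    for (var i = 0; i < 1000; i++) {
      var li = document.createElement("li");
      li.textContent = "item " + i;
      fragment.appendChild(li); // off-document, so no reflow per item
    }
    document.querySelector("ul").appendChild(fragment); // a single DOM write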


>> The caveat being that that code only needs to be parsed and executed once in a single page application.

Sure, if you don't consider users that open lots of tabs, or users that close their browsers periodically (e.g. mobile).

>> Presumably this cost is providing value to the developer in terms of reduced development time, reduced complexity, or improved correctness. You shouldn't typically incur significant wasted cost in execution

Ideally that would be the case, but from my Angular experience it's unfortunately not always so clear-cut (e.g. filter caching is a good example of time wasted in refactoring for performance reasons).

>> You would never add items to the DOM individually

I was under the impression that this is more or less what everyone was criticizing about Backbone rendering a few months ago when that Om article came out.


You're making an argument about callback aggregation. That is just one of the features that promises enable. The seminal article on the topic is Domenic Denicola's "You're Missing the Point of Promises." http://domenic.me/2012/10/14/youre-missing-the-point-of-prom...
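One quick illustration of a feature beyond aggregation, flat chaining with unified error propagation (the async helpers here are hypothetical and each returns a promise):

    fetchUser()
      .then(function (user) { return fetchPosts(user.id); })
      .then(function (posts) { renderPosts(posts); })
      .catch(function (err) { showError(err); }); // a failure at any step
                                                  // lands in one handler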


I much prefer a basic construct that makes it simple to do filtering and sorting and that can be extended to any complexity. Sorting and filtering are really nothing more than set manipulation (which they state themselves), so with simple data binding it becomes a trivial exercise to build an impressive client-side search.

In Ember that might look like this:

    Ember.ArrayController.extend({
        filterA: Ember.computed.filter('model', function (item) {
            return item.get('fieldName1'); // predicate for filter A
        }),
        filterB: Ember.computed.filter('model', function (item) {
            return item.get('fieldName2'); // predicate for filter B
        }),

        joined: Ember.computed.union('filterA', 'filterB'),

        filtered: Ember.computed.uniq('joined'),

        sorted: Ember.computed.sort('filtered', function (a, b) {
            return 0; // comparator
        })
    });


This is exactly what PourOver offers. You can chain filter results to any boolean complexity, as well as extend the default filter types to optimize indexing and caching. Indeed, PourOver was trivial to make; it's an outgrowth of the very pattern you define above. PourOver is just an attempt to scrap that boilerplate, allow the queries to be indexed and combined with sorts, and automatically rebuild when the collection changes.


Thinking about it more, I believe that my visceral reaction was to the imperative nature of the library. I was imagining trying to code something that generated a filter (e.g. PourOver.makeExactFilter("mythology", ["greek","norse"]);) and it seems like it would be unnecessarily complex without data-binding propagation/invalidation.

If I wanted to add "roman" mythology to that filter I would imagine something like:

    var mythologies = ["greek", "norse"];
    // ... create a collection with filters
    mythologies.push("roman");
    PourOver.makeExactFilter("mythology", mythologies);
But you don't get that last statement for free, nor the one where you join it into a collection, nor do I see a way to replace the previous mythologies filter. Any time you need to reprocess a filter you've got to run it through a series of imperative actions triggered from one of the events, and possibly throw away the collection and regenerate it (if you can't remove disjoint filters).

I would be pushing for an Object.observe/dirty-checking/get-set version of this to take PourOver to the next level of utility. (?/Angular/Ember)
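To make that concrete, a rough sketch of what the data-bound equivalent might look like in Ember (the property names are hypothetical):

    var SearchController = Ember.ArrayController.extend({
      mythologies: null, // set to ["greek", "norse"] at creation

      filtered: Ember.computed('model.[]', 'mythologies.[]', function () {
        var allowed = this.get('mythologies');
        return this.get('model').filter(function (item) {
          return allowed.contains(item.get('mythology'));
        });
      })
    });

    // later: this.get('mythologies').pushObject('roman');
    // the dependent key invalidates and the filter rebuilds on the next read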


I think I'm a little confused. Why wouldn't you have just added the "roman" possibility from the get go? Could you provide an example of a situation when you don't know the universe of possibilities in advance?


If the user adds one, or if new data enters from an outside source (pub/sub, sockets, etc.).


Alright, this is a great point. It would be trivial for me to add addPossibilities. I think I'll do that! Thanks. This will really help for using PourOver to power tagging selection fields where you can ... add possibilities!


Exactly what @rattray said. It's something you get nearly for free in the data-binding world and that is much harder to accomplish in imperative code. I wish you luck, and make sure you blog about it when it's done! (I want to see how it's implemented.)

