Hacker News | new | past | comments | ask | show | jobs | submit | flitzwald's comments | login

Really nice demo. But there's one semi-unrelated thing I noticed (disclaimer: yeah, I don't like Flash either, but): this demo, with a handful of animated bezier curves, drove one CPU to 40% (using Chrome on a Mac). And this is something I repeatedly encounter with graphics rendering done in JavaScript. The whole "less Flash" = "less CPU usage with animation" idea might be a little premature.


I find that simple SVG animations frequently push up the CPU, but Canvas can do much better. I wouldn't be surprised if this same demo used only 2-3% of your CPU in Canvas.

WebGL should deliver another order-of-magnitude speed increase, but I'm holding off for another 6 months or so for things to stabilize before diving into the 3d context.


There is one thing in Objective-C that still gets my goat, even after 8 years of working with it: the lack of literal syntax for container types:

[NSMutableDictionary dictionaryWithConstructorIDontWantToType:...]

is just a huge pain in the ass. Other problems with the syntax have been solved by introducing sugar at the right place. Why not

  id hash = @{ @"key" : @"value" };
and

  id array=@[ @"value1", @"value2", @"value3"];


Can't get rid of the issue completely, but at least we can have:

#define MD(val, key, vals...) [NSMutableDictionary dictionaryWithObjectsAndKeys:val, key, ## vals , nil]

then do:

NSMutableDictionary* dict = MD(@"value1", @"key1", @"value2", @"key2");

And so on. Courtesy of http://news.ycombinator.com/item?id=1789839.


That ## vals syntax is new to me. What's the difference with the following?

  #define MD(...) [NSMutableDictionary dictionaryWithObjectsAndKeys:__VA_ARGS__, nil]


Neither of them would work for MD(), due to the trailing nil.

But for MD(val, key, vals...), it'd give you a slightly better error, complaining about the number of arguments when you do MD().

Otherwise they are the same in this case.

When placed between a comma and a variadic parameter, ## deletes that comma if vals or __VA_ARGS__ is empty (a GNU extension supported by GCC and Clang).

e.g.:

  #define MO_LogDebug(fmt, ...) NSLog((@"DEBUG " fmt), ##__VA_ARGS__)

  MO_LogDebug(@"This works as expected %d", resultCode);

  // Removes the comma automatically in here:
  // NSLog((@"DEBUG " @"This works too without args"), );
  MO_LogDebug(@"This works too without args");


Very nice, thanks for that.


So, languages that have primitives like this typically are designed to be used by people who don't know much about algorithms, or simply don't want to be hassled with it, either now, or ever: they want to write code, they want it to "work", and they want to move on to something else; in essence, we are talking "scripting languages".

Languages that some look at as "real programming languages", in comparison, tend to not have syntax like this, and the reason why is that you often, either now, or at some point later, are going to care whether the data structure you just allocated is a red-black tree, a hash map, a patricia trie, or even an AVL tree (which I include mostly to make a point: there actually are situations where it is preferred to a red-black tree).

When this suddenly matters, you are in the situation where what you want to be able to do is to make a very small modification to areas of your code where you need to select a different algorithm, in order to get the different result; you don't want to be forced to rewrite half your code to use a different syntax just because it was slow (I mean, if you wanted to do that, you'd have written it in Ruby and then recoded it in C).

Therefore, you find that it is normally the case in languages like Java, C++, and Objective-C, that there are no "built-in container types", as you will never find a container type that is actually correct to use in an even fractional majority of the cases; in fact, most of the time, there isn't even a single obvious choice in these languages for what class to use: you find default implementations of multiple algorithms.

Objective-C, here, is no different from this concept: NSDictionary is just an interface, and can be implemented by numerous backends. Apple has a rather good implementation backing the default version, and even attempts to switch between algorithms as the data structure grows, but your code is always just a few identifiers away from choosing a different subclass in that collection hierarchy.


Ah, I get it. Convenient container-syntax is only found in toy scripting languages for stupid people like -uh- Haskell? :)


Languages with macros and (often) monads allow developers to redefine the syntax and types to accomplish these goals. The point I am making here is that there is a reason why language designers make this tradeoff: it isn't at all obvious that languages should have a built-in container syntax, and when you find languages that don't you can (and should) notice a pattern.

(C++11 is actually an interesting thing to analyze regarding this tradeoff, by the way: the new "common initializer" syntax is designed to provide as much of the benefits as possible of a simplified built-in data type syntax without taking on the semantic burden of having it; however, it also does not provide syntax that ends up being entirely devoid of the type of the container.)


> Languages with macros and (often) monads allow developers to redefine the syntax and types to accomplish these goals.

You can't redefine Haskell's syntax unless you're using Template Haskell (which is a language extension, not part of Haskell itself). It also has nothing to do with monads. Likewise with most MLs, or with Erlang. All of them have a literal list syntax.

And if containers have no reason to be special, why would strings be special? They're just sequences of Unicode codepoints, after all.

Continuing your argument into absurdity, why have literal syntax for most datatypes at all, really? You could just shove a bag of bytes into a constructor when you want integers or floats as well. Now you've got one literal syntax (which isn't even for a datatype): a bunch of bytes.


The way the do notation syntax (used by most people for describing monads) in Haskell translates to function application allows you to do some fairly interesting things with syntax abuse.

As for strings, it is very seldom that you find interesting alternative implementations: the only one I can think of is a rope. Interestingly, C++11 now allows you to override string literals, so you can actually do this.


> The way the do notation syntax (used by most people for describing monads) in Haskell translates to function application allows you to do some fairly interesting things with syntax abuse.

Sure but it's not syntax redefinition.

> Interestingly, C++11 now allows you to override string literals, so you can actually do this.

You still have a literal string notation. Literal notations don't have to impede multiple implementations, and the truth is there is generally a primary representation used for the vast majority of cases (even if that representation is a cluster class and flexible under the interface).

In Cocoa, the primary sequence and map types are NSArray and NSDictionary; what would be the issue with making those literal? And one of your objections is

> When this suddenly matters, you are in the situation where what you want to be able to do is to make a very small modification to areas of your code where you need to select a different algorithm, in order to get the different result; you don't want to be forced to rewrite half your code to use a different syntax just because it was slow

But that makes no sense: as long as all equivalent containers implement the same interface (which they do, or you can't swap them anyway), whether an object is created with a literal or with a constructor and a bunch of messages has no influence on the rest of the code; the only thing you need to change is the initialization code in both cases.

Hell, a smart enough editor can even swap between the literal and the "constructor" versions of a given collection (IntelliJ can do that for Python dicts, for instance). Not to mention in many cases the non-literal can just take the literal as a parameter, if the collection with a literal syntax has been well chosen; that way you get to have your cake and eat it too.


Objective-C actually allows you to choose which string class you use as well, via the compiler flag "-fconstant-string-class=class-name":

http://gcc.gnu.org/onlinedocs/gcc/Constant-string-objects.ht...


Your parent poster never said anything about intelligence in his post. In addition, Haskell does not have syntax for dictionaries/hashes, or really any "container type" other than lists (and list comprehensions).


I believe my key mistake was using the term "real programming languages" as the alternative to "scripting languages": the word "real" is quite harsh; I have softened the statement slightly by changing "we look at" to "some look at".

I will also, though, point out that I write almost as much (if not more) Python as I do Objective-C++ these days: I therefore can be said to certainly not consider Python to be for "stupid people", without including myself in that set. ;P


This is why I do most of my editing with Objective-C in Xcode, while I do most of my editing with Python in vim.

Tab completion of methods is very nice to have, and makes using Xcode as fast as using vim for me. (Now, if I could have vim keybindings with Xcode tab-complete, then I'd rocket through my editing...)


There's ConciseKit for precisely this problem: https://github.com/petejkim/ConciseKit

We use it in a lot of our code here. It's quite nice. You'll end up with something like:

myDict = $mdict(val, key, val2, key2);


And the values and keys are still backwards.


  #define KV(KEY,VALUE)   VALUE, KEY

  myDict = MD(KV(key,val), KV(key2,val2));

Ugly, but a case can be made that this end justifies the means.

Another option is implementing your own dictionaryWithObjectsAndKeys where you flip the varargs and feed them back into the subclass dictionaryWithObjectsAndKeys.


Wow, that's pretty cool!


Please understand that to do anything in Objective-C you must pass a message to an object. So when we write:

MyArray *array = [[MyArray alloc] initWithObjects:@"string1",@"string2",...];

Let's understand this in two steps:

Step 1) You first allocate an object of type NSArray by passing the message "alloc" to the NSArray class object. Yes, every class in Objective-C is really a class object in the Objective-C runtime. Now this class object may allocate data, or it may not. It may also return you a pointer to a previously allocated object (yes, that is a cool way of implementing the singleton pattern). In any case, it will return you a pointer to an allocated memory space, or nil.

Step 2) Now you tell the allocated object how to initialize itself. The allocated object may have been already initialized, and it might just append the strings. The allocated object might be nil.

What I am trying to point out is that there is a lot of dynamism involved in writing an initialization as a two-step message passing. It almost feels like the objects are alive. My opinion is that this is real object-oriented programming.


The poster seems to be complaining that there is no syntax similar to many scripting languages, where you can do [@"string1", @"string2"], not about two-phase message allocation; he wasn't asking for "dictionaryWithConstructorIDontWantToType:", he was complaining about it.

In this specific case, you actually can usually use "arrayWithObjects:" (yes, the syntax that the poster didn't like), and it is frankly preferred. If you are doing alloc/init, and (as in your example) storing to a local variable, you should also send an autorelease in the same statement, so as to guarantee exception safety of allocation for subsequent code; "arrayWithObjects:" takes care of all three steps for you.


Yes, arrayWithObjects: is definitely a better choice in this case. I was just trying to point out what exactly a line like object = [[[MyClass alloc] initWithObjects:@"",...] autorelease] means, and why it is more powerful than writing object = @"", @"", @"", ...

Although the latter approach can be taken by using macros, I was making an argument for the beauty of the first approach. For example, it would not be possible to allocate a singleton object with a line like that in Python.


The only problem I see with this approach is the JavaScript interpreter seeing through your pointer system and holding a reference to the object pointed to, which defeats the purpose of creating a possibility to segfault...


You can still have a null pointer exception.


That's not half as exciting as to write into the raw, uncharted territory of your virtual address space...


I guess this has taken "minimum viable" to the next level.


I think it's more like "minimum visible"


This is a recent talk by him about scalable programming language analysis. And yes, he's still at Google.

http://vimeo.com/16069687


Comparing anything to Kod isn't really fair at this point. It is currently at 0.0.3beta and not feature complete (I hope ;)


Sorry, but the only thing more painful to read and edit than pure regular expression syntax is string-escaped regular expression syntax. I still cringe when I remember writing this in C and Java.

