Dervla Murphy (Irish solo travel writer, in her eighties now) has often expounded the value of travelling alone. Basically she says that if you travel with others it's a safe cocoon. But if you travel alone you end up observing much more and interacting much more with the culture and people you visit.
And this is a woman who went out of her way to avoid a 'safe cocoon', such as travelling up the Indus valley on donkey-back in the 1970s. She did it as a single mother, and took her daughter, aged under ten, along with her. Just the two of them.
> but it's not a programming language you're selling me, it's a whole environment
Well, yes, because programming languages don't exist in a vacuum. The currently available libraries, tools, and documentation are very important if you decide to actually do something with that language.
A moment ago, you complained about having to press a few more keys for the type annotations.
> I wanted to make the point that I don't automatically consider that "IDEs can autocomplete" is a positive feature
The existence of IDEs which can auto-complete everything is a positive feature.
Being toolable doesn't mean that those tools exist. If those tools exist, you can make use of them if you decide to use this language. This is a good thing.
What's with this luddite mentality that pervades some areas of software? This meme that glorifies terminal-based, mouseless, IDE-less development is just seriously absurd. We of all people should embrace modern tools that make development more productive. Tooling is the future; we should be pushing the envelope, not romanticizing the past.
As I explained, requiring an IDE to make effective use of a language is a marker for a complex language. The system as a whole may be an effective way to build programs for some people, but it's now more than a language. The logical extreme is visual programming which reappears every few years. After a while, most people rediscover that languages articulate concepts better than visual metaphors. IDEs aren't as extreme, but sometimes don't merely enhance editing the language, but become almost a required part of the language. I am sure Martin Odersky can write Scala programs in Notepad, but I myself cannot write a Scala program without mouse hovers explaining the inferred type of my variable. It's an effective total programming system, but as a pure language, it's so complex I can only program it in a certain environment.
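To illustrate the kind of thing I mean, here's a toy sketch (in TypeScript rather than Scala, purely because it's compact; the Scala version of the problem is the same but stronger, and `groupBy` here is an invented helper):

```typescript
// The point is the bottom of this snippet: nothing at the use site tells you
// that `grouped` is Map<string, { name: string; score: number }[]>. Without an
// editor hover, you reconstruct the inferred type by reading the implementation.
function groupBy<T, K>(items: T[], key: (t: T) => K): Map<K, T[]> {
  const out = new Map<K, T[]>();
  for (const item of items) {
    const k = key(item);
    const bucket = out.get(k);
    if (bucket) bucket.push(item);
    else out.set(k, [item]);
  }
  return out;
}

const grouped = groupBy(
  [{ name: "a", score: 1 }, { name: "b", score: 2 }, { name: "a", score: 3 }],
  (r) => r.name
);
// grouped.get("a") holds two records, but the inferred type of `grouped`
// never appears anywhere in the source
```

Multiply that by deeply nested generics and implicit conversions and you get a program I can read only with tooling assistance.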
But why is environment flexibility a requirement? When are you genuinely constrained to use, say, only a terminal? I can think of no situation where this is true by necessity (rather than artificial constraint).
I didn't even bring up environment flexibility, or terminals!
Since you asked, though, it is fairly nice to be able to ssh into a box, make a code change, and recompile, for those languages that require that. Continuing with Scala as an example, I myself could not edit a Scala program extensively without the benefit of an IDE. So say I have a Scala program sitting on a dev server where I'm building a batch image processing tool. If Scala were as simple as Javascript, I could easily use vi or emacs to iterate on the development remotely. As it is, I edit and test locally on my laptop using an IDE, then push a big jar over to the server. So there are plenty of cases where a remote edit, compile, test cycle using a terminal is convenient.
And by the way, hackinthebochs, your line of argument down through this whole thread has been to call me a two fingered typist (accurate) with whom something is seriously wrong; a Luddite; and you erect straw man arguments like this, as if I'd somewhere argued for the environmental flexibility of terminals.
Certainly there was some extrapolation on my part (though I do find it amusing that I got the two-fingered part right). I was using your posts mainly as a jumping off point for discussion seeing as they seemed to be in the spirit of the mentality I was referring to.
The fact is we are being constrained by the past. We still program in plain text files, using languages and environments that are as basic as possible, presumably to maximize flexibility of development. I don't see the point. There are those who eschew the mouse, or GUIs, because the terminal is cool (or something). The fact is that this field is moving towards more and more tooling, hopefully improved visualization, and soon automation (imagine APIs automatically wiring themselves up, or the drudgery of programming happening automatically). This is the future we should all be looking forward to, not placing arbitrary constraints on the languages and environments we use for the sake of compatibility with outdated tools. The more constraints we place on ourselves, the longer that future will take to become reality.
(iv) I come from an EE background, so when it comes to fundamentals I find it a help to think in terms of message passing and transmitters and receivers. It clears up some of the debate for me about which is more fundamental, objects or functions. To me they are all things (transceivers) that transmit and/or receive messages. In the case of a function, the transceiver is alive for the duration of the call. In the case of an object or actor, it may be alive for longer (and, if stateful, respond in ways that are harder to predict and reproduce).
But this is just a way of thinking that makes me feel comfortable because I like to have a physical model. If someone else is happy with function application as being the fundamental building block then I have no problem with that. However you said that "there are theoretical reasons to believe that message passing is not function application (while function application is a special case of message passing)." I would be interested to get more background on this. Would you have any references for this? Thanks.
To the best of my knowledge, the insight that function application (and other control structures) can be seen as special cases of message passing comes from the actor people [1, 2]. The first proper mathematisation is Milner's breakthrough encoding of lambda calculus in pi-calculus. This led to fine-grained investigations into what kinds of interaction patterns correspond to what kinds of functional behaviour (CBV, CBN, call/cc etc), which in turn inspired a lot of work on types for interacting processes.
I don't remember off the top of my head who first showed that parallel computation has no 'good' encoding into functional computation (lambda-calculus). I'll try to dig out a reference and post it here if I find it.
But the upshot of all this is that message-passing is more fundamental than functions / function application.
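To make that concrete, here is a toy sketch in the spirit of those encodings (emphatically not Milner's actual pi-calculus encoding; `Channel` and `spawnSquare` are invented for illustration). Function application becomes "send the argument plus a fresh reply channel to a process that implements the function":

```typescript
type Handler<T> = (msg: T) => void;

// A toy synchronous channel: a queue of messages plus a queue of waiting receivers.
class Channel<T> {
  private queue: T[] = [];
  private waiting: Handler<T>[] = [];
  send(msg: T): void {
    const h = this.waiting.shift();
    if (h) h(msg);
    else this.queue.push(msg);
  }
  receive(h: Handler<T>): void {
    if (this.queue.length > 0) h(this.queue.shift() as T);
    else this.waiting.push(h);
  }
}

// A "function" is a process that listens for (argument, reply channel) pairs.
function spawnSquare(): Channel<{ arg: number; reply: Channel<number> }> {
  const ch = new Channel<{ arg: number; reply: Channel<number> }>();
  const loop = () =>
    ch.receive(({ arg, reply }) => {
      reply.send(arg * arg); // "returning" is just a message on the reply channel
      loop();                // resume listening: the process outlives any one call
    });
  loop();
  return ch;
}

// Ordinary application square(3) becomes: make a fresh reply channel,
// send the argument, and wait for the answer.
const square = spawnSquare();
const reply = new Channel<number>();
let result = 0;
reply.receive((n) => { result = n; });
square.send({ arg: 3, reply });
// result is now 9
```

The reverse direction is the hard one: channels that carry messages to long-lived, stateful processes have no equally direct reading as function application.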
[1] C. Hewitt, H. Baker, Actors and Continuous Functionals.
[2] C. Hewitt, Viewing Control Structures as Patterns of Passing Messages.
I don't fully agree with Pfenning here. The pi-calculus is not that good a proof system for linear logic: pi-calculus imposes sequentiality constraints on proofs that impede parallelism. Take for example the term
x!<a> | y!<b> | x?(c).y?(d).P
You have to sequentialise the two possible reductions on x and y.
The culprit is the pi-calculus's input prefix, which combines two operations: blocking input and scoping of the bound variables (here c and d). This sequentialisation is not true to the spirit of linear logic proofs (think e.g. proof nets).
Conversely, I don't think linear logic is a good typing system for pi-calculus for a variety of reasons: among them that linear logic does not track causality well, and because it doesn't take care of affine (at most once) interactions well.
These are good points and I'll have to consider them. I was planning on going back through Pfenning's notes again in a bit and I'll keep a more critical eye this next time. Thanks!
(Also, to be clear and with reference to other threads we've conversed on here, I think of linear logic and pi calculus, even if they aren't well-corresponding, as fantastic, unavoidable examples of non-function-application style programming.)
Thanks very much for that. I'll read them all. (I did a bit of reading around actors before including some Hewitt but didn't pick up on the message passing equivalence. Will read closely. The Milner paper looks very interesting)
“the prototypal inheritance model is in fact more powerful than the class based model. It is, for example, fairly trivial to build a class based model on top of a prototypal model, while the other way around is a far more difficult task”
This is precisely what Kay is arguing against in his second point. In Smalltalk everything is an object including classes but there is a sharp distinction between a class and other objects and it's very important to make this distinction. It's important to be explicit about which level you are operating on.
In general, arguments that one thing is better than another because you can express one in terms of the other don't hold up. By that logic you could argue that assembly is the most powerful of all (see also the definition of the Turing tar-pit).
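For reference, the encoding the quoted claim alludes to really is short. A toy sketch (`makeClass`, `init`, and the rest are invented for illustration, not any real object system):

```typescript
// A "class" is just a prototype object plus a factory that delegates to it.
function makeClass(methods: Record<string, any>, parent?: Record<string, any>) {
  const prototype = Object.assign(Object.create(parent ?? null), methods);
  return {
    prototype,
    create(...args: any[]) {
      const obj = Object.create(prototype); // instances delegate to the prototype
      if (typeof prototype.init === "function") prototype.init.apply(obj, args);
      return obj;
    },
  };
}

const Animal = makeClass({
  init(this: any, name: string) { this.name = name; },
  speak(this: any) { return `${this.name} makes a sound`; },
});

// "Subclassing" is just delegation to the parent's prototype.
const Dog = makeClass(
  { speak(this: any) { return `${this.name} barks`; } },
  Animal.prototype,
);

const d = Dog.create("Rex");
// d.speak() === "Rex barks"; init was found via the prototype chain
```

Of course, the shortness of the encoding doesn't by itself make one model more powerful than the other, which is exactly the point about Turing tar-pits.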
Iran high? I wasn't aware of that. Could I ask what you are basing that on? Do you mean Iranian militias, which are effectively the army, doing things that could be said to protect their country (as with US soldiers in all kinds of places they probably shouldn't be)? If so, I don't think that comes under the heading "angry young jihadis".
Iran has backed many jihadist groups. Two examples: Hamas, a group opposed to Israel, receives support from Iran; and Iran backed the Shiite "Mahdi Army" during the Iraq war, which fought a jihad against American/coalition forces and today is still involved in a 'defensive jihad' against IS.
There are other examples. Jihad has been fought by both Sunni and Shiite groups, but more so by Sunni groups such as IS and AQ, who have tended to have more of a global objective against the west, while Iran and Shiite groups (such as those in Syria/Lebanon) are mostly interested in local power grabs to keep spreading their Islamic Revolution outside of Iran.
I highly recommend this New Yorker piece on Iran and their secret proxy wars:
You're shifting the topic. We're talking about how many young Iranian men become terrorists, not the support of various groups by the Iranian state.
Most nation states, certainly including the USA, have at one time or another given material and financial support to terrorist groups. I don't condone it, but Iran is hardly exceptional in that regard.
Good article but one thing to be careful of here is that if you just apply this pattern everywhere then you can end up in a situation where every view is decoupled from every other view and there are way more events in the system than there need to be. This happens more frequently than you would think.
In the example given of a date picker this is fine, as you can easily see that you might want to re-use the date picker somewhere else. But there are often circumstances where a subview could never be re-used outside of its parent view. For example, suppose I have a main calendar view with subviews showing appointments. The subviews need to inform the main view when the user changes an appointment time so that the main view can re-layout all the subviews.
In this case there is no need for the main view to listen for events from subviews. The code will be much simpler and easier to debug if the subview just makes a call to the main view directly (the subview should be constructed with a pointer to the main view). When I look at code in the subview, I don't have to guess who may be listening to the event; I can see the call directly. It also means that other views apart from the main view cannot randomly listen to subview events as a 'quick fix' for some bug.
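For illustration, a minimal sketch of the direct-call version (`CalendarView`, `AppointmentView`, and `appointmentChanged` are invented names, and the actual re-layout is stubbed out as a counter):

```typescript
interface Appointment { id: number; start: number } // start = hour of day

class CalendarView {
  layoutCount = 0; // stand-in for actually re-laying-out the subviews
  private subviews: AppointmentView[] = [];

  addAppointment(appt: Appointment): AppointmentView {
    const view = new AppointmentView(appt, this); // parent pointer injected here
    this.subviews.push(view);
    return view;
  }

  // Called directly by subviews: no event wiring to trace through.
  appointmentChanged(_changed: AppointmentView): void {
    this.layoutCount++;
  }
}

class AppointmentView {
  constructor(public appt: Appointment, private parent: CalendarView) {}

  setStart(start: number): void {
    this.appt.start = start;
    this.parent.appointmentChanged(this); // the dependency is visible at the call site
  }
}

const cal = new CalendarView();
const v = cal.addAppointment({ id: 1, start: 9 });
v.setStart(11);
// cal.layoutCount === 1; grep for appointmentChanged and you find every caller
```

The cost, of course, is that AppointmentView is now permanently coupled to CalendarView, which is exactly the trade being made.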
indeed, it's a matter of juggling competing maintainability considerations and there are cases where it can make sense to couple components tightly.
that said, in practice our UIs contain relatively few one-off components, and it often requires less dev friction simply to use the standard event pattern than to weigh borderline cases to shave off a little indirection at the cost of tighter coupling. the problem of "too many events in the system" is avoided by letting complicated components handle events from their own subviews and not just blindly bounce everything down to a global mediator. e.g. if no other views have to know about the 4 dropdowns and slider within MyWidget (and they usually don't), then MyWidget can handle all those subview events itself and simply present a single unified 'i've been updated' event for other consumers. essentially narrow the public apis between different actors in the overall system.
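sketched minimally (a hand-rolled emitter with invented names, not any particular framework's API):

```typescript
type Listener = () => void;

class Emitter {
  private listeners: Listener[] = [];
  on(l: Listener) { this.listeners.push(l); }
  emit() { this.listeners.forEach((l) => l()); }
}

// subviews each have their own change event...
class Dropdown { changed = new Emitter(); select() { this.changed.emit(); } }
class Slider { changed = new Emitter(); slide() { this.changed.emit(); } }

// ...but MyWidget absorbs them and exposes a single outward-facing event,
// so the rest of the system never hears about the internal controls.
class MyWidget {
  updated = new Emitter();
  dropdowns = [new Dropdown(), new Dropdown()];
  slider = new Slider();
  constructor() {
    for (const d of this.dropdowns) d.changed.on(() => this.updated.emit());
    this.slider.changed.on(() => this.updated.emit());
  }
}

let notifications = 0;
const w = new MyWidget();
w.updated.on(() => notifications++);
w.dropdowns[0].select();
w.slider.slide();
// notifications === 2; consumers subscribe once and never see the internals
```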
Do you think the commercial version of Qt would have covered what you needed? I would also be interested if anyone else here is in a position to compare Xamarin and Qt. It's very hard to compare the two from the outside. Thanks.
Xamarin provides a managed runtime environment (Mono) and a way to call native iOS and Android APIs from that environment. Xamarin Forms builds on top of those things to provide a cross-platform UI toolkit that uses the native controls on each platform.
Qt is a cross-platform GUI toolkit that draws its own controls using each platform's low-level graphics facilities. As such, the controls in a Qt-based application aren't fully native to each platform. Qt tries to mimic the native controls, but the emulation isn't completely faithful. A particular problem is accessibility for users with disabilities, e.g. blind users who need to use a screen reader. Last time I checked (several months ago), Qt didn't implement the accessibility APIs for iOS or Android at all.
Because of the non-native nature of Qt, I would strongly recommend avoiding it in favor of something like Xamarin or RubyMotion.
Thanks for that. The application I'm looking at has a very specialized UI. The functionality also counts for much more than native look and feel from the customer's point of view, so Qt might still be a runner.
In my case, I wanted to make use of the native file pickers, which only became available in Qt 5.4 via QML (not C++) for Android, with iOS and WP8 support still coming.
Granted, in Android's case the pickers are only available as of version 4.4, but they are available, and it is also possible to use intents for lower versions and vendor-specific pickers.
In my specific case, I came to the conclusion that writing my own JNI layer would be less trouble than debugging Qt. However, note that for me this is just hobby development, whenever I feel like coding for it.
Compared to Xamarin, The Qt Company still seems to be figuring out what platform integration to sell to companies, and how.
Considering the audience you might be right. I'm not a typical audience member. I read in the evening on a phone with a slow 2.5g connection. For that reason I use opera mini. HN works perfectly on it and the combination of the browser and whatever it is HN does for layout means I can load a HN page faster than anything else on the web. It doesn't seem to matter how many comments there are. So I hope it doesn't change. But you are right. I'm not typical.