discreteevent's comments

Dervla Murphy (Irish solo travel writer, in her eighties now) has often expounded the value of travelling alone. Basically she says that if you travel with others it's a safe cocoon. But if you travel alone you end up observing much more and interacting much more with the culture and people you visit.

And this is a woman who went out of her way to avoid a 'safe cocoon', such as travelling up the Indus valley on donkey-back in the 1970s. She did it as a single mother and took her daughter, aged less than ten, along with her. Just the two of them.

Eh... grep is also a tool? Every function you write is a tool. The whole industry is built on tools, and tools that make tools...

Sure. But programming languages whose claim to utility lies in an IDE -- they aren't necessarily bad, but it's not a programming language you're selling me, it's a whole environment: MS Visual Studio or whatever. A complex language requires lots of tooling like that, and a simple language doesn't. I have never missed autocompletion in Javascript, but I cannot function without IDE support in Scala. We're off the topic of OO in Javascript, but I wanted to make the point that I don't automatically consider that "IDEs can autocomplete" is a positive feature -- if you need autocomplete, that's a problem.

> but it's not a programming language you're selling me, it's a whole environment

Well, yes, because programming languages don't exist in a vacuum. The currently available libraries, tools, and documentation are very important if you decide to actually do something with that language.

> I have never missed autocompletion in Javascript

A moment ago, you complained about having to press a few more keys for the type annotations.

> I wanted to make the point that I don't automatically consider that "IDEs can autocomplete" is a positive feature

That there are IDEs which let you auto-complete everything is a positive feature.

Being toolable doesn't mean that those tools exist. If those tools exist, you can make use of them if you decide to use this language. This is a good thing.

By the way, JavaScript doesn't lack good tooling because it doesn't need good tooling. It is the way it is because offering good tooling for JavaScript is really difficult. ES6's modules and classes will help with that, though. The tools will make good use of this statically available information.


>> I have never missed autocompletion in Javascript

> A moment ago, you complained about having to press a few more keys for the type annotations.

For Typescript, I dislike the extra typing. I have never missed autocompletion in Javascript. Where is the contradiction?


One would think that 20 additional characters per function don't really matter if you can save 100 keystrokes.

Compared to JavaScript, I have to press fewer keys when I write Dart. (This would also be true without shorthands like method cascades.)


What's with this luddite mentality that pervades some areas of software? This meme that glorifies terminal-based, mouseless, IDE-less development is just seriously absurd. We of all people should embrace modern tools that make development more productive. Tooling is the future; we should be pushing the envelope, not romanticizing the past.

As I explained, requiring an IDE to make effective use of a language is a marker for a complex language. The system as a whole may be an effective way to build programs for some people, but it's now more than a language. The logical extreme is visual programming, which reappears every few years. After a while, most people rediscover that languages articulate concepts better than visual metaphors. IDEs aren't as extreme, but sometimes they don't merely enhance editing the language; they become almost a required part of it. I am sure Martin Odersky can write Scala programs in Notepad, but I myself cannot write a Scala program without mouse hovers explaining the inferred type of my variable. It's an effective total programming system, but as a pure language it's so complex that I can only program in it within a certain environment.

But why is environment flexibility a requirement? When are you genuinely constrained to use, say, only a terminal? I can think of no situation where this is true by necessity (rather than artificial constraint).

Well, first of all, I am not against IDEs. I use one most days of the week, and I'm productive in it. Some of the above discussion has misinterpreted my remarks. I only ever said that a language making itself amenable to IDEs isn't a convincer for me, since languages requiring tooling to be effective are possibly less good as languages. So when somebody tells me Typescript is good because IDEs can autocomplete it, it's an unconvincing argument to me, since I prefer JavaScript which, having no type annotations, involves less typing and has little need for autocomplete in the first place.

I didn't even bring up environment flexibility, or terminals!

Since you asked, though, it is fairly nice to be able to ssh into a box, make a code change, and recompile, for those languages that require that. Continuing with Scala as an example, I myself could not edit a Scala program extensively without the benefit of an IDE. So say I have a Scala program sitting on a dev server where I'm building a batch image processing program. If Scala were as simple as Javascript, I could easily use vi or emacs to iterate the development remotely. As it is, I edit and test locally on my laptop using an IDE, then push this big jar over to the server. So, there are plenty of cases where a remote edit, compile, test cycle using a terminal is convenient.


And by the way, hackinthebochs, your line of argument down through this whole thread has been to call me a two-fingered typist (accurate) with whom something is seriously wrong, and a Luddite; and to erect straw-man arguments like this one, as if I'd somewhere argued for the environmental flexibility of terminals.

Certainly there was some extrapolation on my part (though I do find it amusing that I got the two-fingered part right). I was using your posts mainly as a jumping-off point for discussion, seeing as they seemed to be in the spirit of the mentality I was referring to.

The fact is we are being constrained by the past. We still program in plain text files, using languages and environments that are as basic as possible, presumably to maximize flexibility of development. I don't see the point. There are those who eschew the mouse or GUIs because the terminal is cool (or something). The fact is that this field is moving towards more and more tooling, hopefully improved visualization, and soon automation (imagine APIs automatically wiring themselves up, or the gruntwork drudgery of programming happening automatically). This is the future we should all be looking forward to, not placing arbitrary constraints on the languages and environments we use for the sake of compatibility with outdated tools. The more constraints we place on ourselves, the longer this future will take to become reality.


(iv) I come from an EE background, so when it comes to fundamentals I find it helpful to think in terms of message passing and transmitters and receivers. It clears up some of the debate for me about which is more fundamental, objects or functions. To me they are all things (transceivers) that transmit and/or receive messages. In the case of a function, the transceiver is alive for the duration of the call. In the case of an object or actor, it may be alive for longer (and, if stateful, respond in ways that are harder to predict and reproduce).

But this is just a way of thinking that makes me feel comfortable because I like to have a physical model. If someone else is happy with function application as the fundamental building block then I have no problem with that. However, you said that "there are theoretical reasons to believe that message passing is not function application (while function application is a special case of message passing)." I would be interested to get more background on this. Would you have any references for this? Thanks.
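(To make the transceiver picture concrete, here is a minimal sketch in plain JavaScript, with hypothetical names of my own choosing: a function is a transceiver that lives only for the duration of one exchange, while a stateful object keeps listening, and its replies depend on its history.)

    // A function: a transceiver alive only for the duration of the call.
    // Same message in, same reply out, every time.
    function area(w, h) { return w * h; }

    // An object/actor: a transceiver that outlives any single message.
    // Its reply depends on all the messages it has received before, which
    // is what makes stateful behaviour harder to predict and reproduce.
    function Counter() { this.count = 0; }
    Counter.prototype.receive = function (msg) {
      if (msg === 'increment') this.count += 1;
      return this.count;
    };

    var c = new Counter();
    area(2, 3);             // 6 -- always
    c.receive('increment'); // 1
    c.receive('increment'); // 2 -- same message, different reply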

-----


To the best of my knowledge, the insight that function application (and other control structures) can be seen as special cases of message passing comes from the actor people [1, 2]. The first proper mathematisation is Milner's breakthrough encoding of the lambda calculus in the pi-calculus [3]. This led to fine-grained investigations into what kinds of interaction patterns correspond to what kinds of functional behaviour (CBV, CBN, call/cc etc.), which in turn inspired a lot of work on types for interacting processes.

I don't remember off the top of my head who first showed that parallel computation has no 'good' encoding into functional computation (lambda-calculus). I'll try to dig out a reference and post it here if I find it.

But the upshot of all this is that message-passing is more fundamental than functions / function application.

[1] C. Hewitt, H. Baker, Actors and Continuous Functionals.

[2] C. Hewitt, Viewing Control Structures as Patterns of Passing Messages.

[3] R. Milner, Functions as Processes.
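(To give the flavour of [3]: the following is the general shape of Milner's call-by-name encoding, written in the same ASCII pi-calculus notation used further down this thread, in the polyadic form found in later textbook presentations; the exact syntax varies by author. [M]p stands for the process representing the term M with result channel p.)

    [x]p     =  x!<p>
    [\x.M]p  =  p?(x,q).[M]q
    [M N]p   =  (nu q)( [M]q | (nu x)( q!<x,p> | !x?(r).[N]r ) )

Beta-reduction becomes communication: an application sends the function a fresh trigger name for its argument together with the result channel, and the argument waits, replicated, at that trigger, so every use of the bound variable is itself a message exchange.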

-----


You can take this work much further in an "FP" kind of way by noting that the pi calculus is a good proof system for linear logic. Frank Pfenning has some lectures on this.

-----


I don't fully agree with Pfenning here. The pi-calculus is not that good a proof system for linear logic: pi-calculus imposes sequentiality constraints on proofs that impede parallelism. Take for example the term

     x!<a> | y!<b> | x?(c).y?(d).P
You have to sequentialise the two possible reductions on x and y. The culprit is the pi-calculus's input prefix, which combines two operations: blocking input and scoping of the bound variables (here c and d). This sequentialisation is not true to the spirit of linear logic proofs (think e.g. proof nets).

Conversely, I don't think linear logic is a good typing system for the pi-calculus, for a variety of reasons: among them that linear logic does not track causality well, and that it doesn't take care of affine (at most once) interactions well.

-----


These are good points and I'll have to consider them. I was planning on going back through Pfenning's notes again in a bit and I'll keep a more critical eye this next time. Thanks!

(Also, to be clear, and with reference to other threads we've conversed on here: I think of linear logic and the pi calculus, even if they don't correspond as tightly as one might hope, as fantastic, unavoidable examples of non-function-application-style programming.)

-----


Thanks very much for that. I'll read them all. (I did a bit of reading around actors before, including some Hewitt, but didn't pick up on the message passing equivalence. Will read closely. The Milner paper looks very interesting.)

-----


In Germany at least there is a big government-backed initiative. See Industry 4.0 and mass customization. They see it as the next thing that will give them a competitive edge.

-----


"the prototypal inheritance model is in fact more powerful than the class based model. It is, for example, fairly trivial to build a class based model on top of a prototypal model, while the other way around is a far more difficult task"

This is precisely what Kay is arguing against in his second point. In Smalltalk everything is an object, including classes, but there is a sharp distinction between a class and other objects, and it's very important to make this distinction. It's important to be explicit about which level you are operating on.

In general, arguments that one thing is better than another because you can express one in terms of the other don't hold up. By that logic, assembly is the most powerful of all. (See also the definition of the Turing tar pit.)
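(For reference, here is what the quoted "class based model on top of a prototypal model" construction can look like; a minimal sketch in plain JavaScript, with a hypothetical defclass helper of my own invention:)

    // Hypothetical 'defclass' helper (illustration only): builds a
    // class-like constructor on top of the prototypal model.
    function defclass(parent, methods) {
      function Ctor() {
        if (typeof this.init === 'function') this.init.apply(this, arguments);
      }
      Ctor.prototype = Object.create(parent ? parent.prototype : Object.prototype);
      Object.assign(Ctor.prototype, methods);
      Ctor.prototype.constructor = Ctor;
      return Ctor;
    }

    var Animal = defclass(null, {
      init: function (name) { this.name = name; },
      speak: function () { return this.name + ' makes a sound'; }
    });

    var Dog = defclass(Animal, {
      speak: function () { return this.name + ' barks'; }
    });

    new Dog('Rex').speak(); // "Rex barks"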

-----


Not really. Your first paragraph is also true for JS (think of the ES6 "class" syntactic sugar). Some browsers implement the DOM in JavaScript (https://www.chromium.org/blink/blink-in-js, http://www.phantomjs.org/, https://github.com/andreasgal/dom.js). And there are things like asm.js and even a Linux VM running in JavaScript (http://bellard.org/jslinux/). JavaScript/ECMAScript is one of the most misunderstood languages; please read up before jumping to conclusions.
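(To illustrate the "syntactic sugar" point: the ES6 class form below behaves roughly like the hand-written prototype code that follows it, modulo details such as method enumerability; a sketch, not the exact desugaring:)

    // ES6 'class' syntax...
    class Point {
      constructor(x, y) { this.x = x; this.y = y; }
      norm() { return Math.hypot(this.x, this.y); }
    }

    // ...is roughly the same prototype machinery, written out by hand:
    function PointDesugared(x, y) { this.x = x; this.y = y; }
    PointDesugared.prototype.norm = function () {
      return Math.hypot(this.x, this.y);
    };

    new Point(3, 4).norm();          // 5
    new PointDesugared(3, 4).norm(); // 5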

-----


Iran high? I wasn't aware of that. Could I ask what you are basing that on? Do you mean Iranian militias, which are effectively the army, doing things which could be said to protect their country (as with US soldiers in all kinds of places they probably shouldn't be)? If so, I don't think that comes under the heading "angry young jihadis".

-----


Iran has backed many jihadist groups. Two examples: Hamas, a group opposed to Israel which receives support from Iran; and the Shiite "Mahdi army", which Iran backed during the Iraq war, which fought a jihad against American/coalition forces and today is still involved in a 'defensive jihad' against IS.

There are other examples. Jihad has been fought by both Sunni and Shiite groups, but more so by Sunni groups such as IS and AQ, who have tended to have more of a global objective against the West, while Iran and Shiite groups (such as those in Syria/Lebanon) are mostly interested in local power grabs aimed at spreading their Islamic Revolution outside of Iran.

I highly recommend this New Yorker piece on Iran and their secret proxy wars:

http://www.newyorker.com/magazine/2013/09/30/the-shadow-comm...

-----


You're shifting the topic. We're talking about how many young Iranian men become terrorists, not the support of various groups by the Iranian state.

Most nation states, certainly including the USA, have at one time or another given material and financial support to terrorist groups. I don't condone it, but Iran is hardly exceptional in that regard.

-----


Good article, but one thing to be careful of here is that if you just apply this pattern everywhere, you can end up in a situation where every view is decoupled from every other view and there are far more events in the system than there need to be. This happens more frequently than you would think. In the example given of a date picker this is fine, as you can easily see that you might want to re-use the date picker somewhere else. But there are often circumstances where a subview could never be re-used outside of its parent view.

For example, suppose I have a main calendar view with subviews showing appointments. The subviews need to inform the main view when the user changes an appointment time so that the main view can re-layout all the subviews. In this case there is no need for the main view to listen for events from subviews. The code will be much simpler and easier to debug if the subview just makes a call to the main view directly (the subview should be constructed with a pointer to the main view). When I look at code in the subview I don't have to guess who may be listening to the event; I can see the call directly. It also means that other views apart from the main view cannot randomly listen to sub-view events as a 'quick fix' for some bug.

The point is that javascript tends to lead one down an event-driven route. But you should only use events if you want to decouple things. Cohesion is also very important in design because it makes things simpler and more encapsulated. You need to decide what the boundaries of your component are. In this case I want the calendar to be the component, not its subviews. So within the calendar I will make direct function calls, there are fewer events in the system, and it is easier to think about and debug.
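(A minimal sketch of the direct-call approach, in plain JavaScript with hypothetical names: the subview is constructed with a pointer to its parent and calls it directly instead of emitting an event.)

    // The parent constructs each subview with a direct reference to itself.
    function CalendarView() {
      this.appointments = [];
    }
    CalendarView.prototype.addAppointment = function (appt) {
      this.appointments.push(new AppointmentView(this, appt));
      this.layoutSubviews();
    };
    // Called directly by subviews; no event indirection, so a reader of
    // AppointmentView can see exactly who reacts to a time change.
    CalendarView.prototype.layoutSubviews = function () {
      // recompute positions of all appointment subviews...
    };

    function AppointmentView(parentView, appointment) {
      this.parentView = parentView;   // pointer to the main view
      this.appointment = appointment;
    }
    AppointmentView.prototype.setTime = function (newTime) {
      this.appointment.time = newTime;
      this.parentView.layoutSubviews(); // direct call, not an event
    };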

-----


But suppose you want to re-use the sub-view in another component; you would then need to clone the functionality and satisfy or remove all the directly coupled invocations.

This is where you have to decide how much coupling to introduce, balancing comprehension against the future needs of the project.

-----


Indeed, it's a matter of juggling competing maintainability considerations, and there are cases where it can make sense to couple components tightly.

That said, in practice our UIs contain relatively few one-off components, and it often involves less dev friction simply to use the standard event pattern than to weigh borderline cases to shave off a little indirection at the cost of tighter coupling. The problem of "too many events in the system" is avoided by letting complicated components handle events from their own subviews rather than blindly bouncing everything down to a global mediator. E.g. if no other views have to know about the 4 dropdowns and slider within MyWidget (and they usually don't), then MyWidget can handle all those subview events itself and simply present a single unified 'I've been updated' event for other consumers. Essentially, narrow the public APIs between the different actors in the overall system.
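(A sketch of that narrowing pattern, in plain JavaScript with hypothetical names, assuming a generic on/emit event emitter: the widget consumes its subviews' events internally and exposes a single unified event.)

    // Tiny event emitter so the sketch is self-contained.
    function Emitter() { this.handlers = {}; }
    Emitter.prototype.on = function (name, fn) {
      (this.handlers[name] = this.handlers[name] || []).push(fn);
    };
    Emitter.prototype.emit = function (name, data) {
      (this.handlers[name] || []).forEach(function (fn) { fn(data); });
    };

    // MyWidget handles its dropdowns' and slider's events itself and
    // presents one unified 'updated' event to the rest of the system.
    function MyWidget(dropdowns, slider) {
      Emitter.call(this);
      var self = this;
      dropdowns.forEach(function (d) {
        d.on('change', function () { self.emit('updated', self); });
      });
      slider.on('change', function () { self.emit('updated', self); });
    }
    MyWidget.prototype = Object.create(Emitter.prototype);

    // Consumers only ever see the one event:
    //   widget.on('updated', function (w) { /* ... */ });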

-----


Yes of course it is very important in some situations.

"To survive in grand prix racing, you need to be afraid. Fear is an important feeling. It helps you to race longer and live longer." - Ayrton Senna.

-----


Do you think the commercial version of Qt would have covered what you needed? I would also be interested if anyone else here is in a position to compare Xamarin and Qt. It's very hard to compare the two from the outside. Thanks.

-----


Xamarin and Qt are very different.

Xamarin provides a managed runtime environment (Mono) and a way to call native iOS and Android APIs from that environment. Xamarin Forms builds on top of those things to provide a cross-platform UI toolkit that uses the native controls on each platform.

Qt is a cross-platform GUI toolkit that draws its own controls using each platform's low-level graphics facilities. As such, the controls in a Qt-based application aren't fully native to each platform. Qt tries to mimic the native controls, but the emulation isn't completely faithful. A particular problem is accessibility for users with disabilities, e.g. blind users who need to use a screen reader. Last time I checked (several months ago), Qt didn't implement the accessibility APIs for iOS or Android at all.

Because of the non-native nature of Qt, I would strongly recommend avoiding it in favor of something like Xamarin or RubyMotion.

-----


Thanks for that. The application I'm looking at has a very specialized UI. The functionality also counts for much more than native look and feel from the customer's point of view, so Qt might still be a runner.

-----


No, it would not.

In my case, I wanted to make use of the native file pickers, which only became available in Qt 5.4 via QML (not C++)[0] for Android, with iOS and WP8 support still coming.

Granted, in Android's case the pickers are only available as of version 4.4, but they are available, and it is also possible to use intents for lower versions and vendor-specific pickers.

In my specific case, I came to the conclusion that writing my own JNI layer would be less trouble than debugging Qt. Note, however, that for me this is just hobby development, whenever I feel like coding for it.

Compared to Xamarin, The Qt Company still seems to be figuring out what platform integration to sell to companies, and how.

[0] http://blog.qt.io/blog/2014/12/10/qt-5-4-released/

-----


Considering the audience you might be right. I'm not a typical audience member. I read in the evening on a phone with a slow 2.5G connection. For that reason I use Opera Mini. HN works perfectly on it, and the combination of the browser and whatever it is HN does for layout means I can load an HN page faster than anything else on the web. It doesn't seem to matter how many comments there are. So I hope it doesn't change. But you are right: I'm not typical.

-----


