This article does not ring true to me at all. The problem with making a big fraction of users into programmers is that we don't know how to do it. The LRG at PARC and then various groups at Apple tried every way they (we) could think of, and other ways have continued to be invented and tried. So far none work. Hypercard was indeed the most accessible development environment, but only a small fraction of Hypercard users ever wrote code (maybe 5%).
Apple has continued to make user programming of Macs as easy as they conveniently can (as far as I can tell having been out of there for a long time). iOS has a different goal -- making devices safe and usable -- which is intrinsically in conflict with maximum programmability. Even on iOS there are plenty of "user programmable" apps.
I think part of the problem here is that many developers take themselves and their developer friends as "typical" but that is totally not the case. This really treats most of the population with (unintentional) disrespect.
I have a soft spot for HyperCard -- it was the first programming I did, back in middle school, on the school computers.
They were those old black-and-white Macs with HyperCard, and a variety of stacks. We all knew the trick to turn on author mode (level 5 or something? it's been a while!), and from there you could explore and mess with existing stacks.
I still remember the thrill when I first discovered loops, and that I could write a script that would move a character on the screen in response to keys being held down. Then I discovered if statements, & made enemies that chased the main character. Honestly, I FELT LIKE GOD. I had created life.
No instruction, no books, it was all just poking around.
There's really something magical about an environment like that, and it's sad that we've gotten away from it.
(Maybe Minecraft or something is the modern equivalent?)
I do wonder why Hypercard's modern clones haven't been more successful. Maybe there are just too many relatively easy interactive environments for any to stand out. For example any browser (with the debugger open) is a pretty amazing interactive graphical development environment -- now with full 3D rendering etc. if you get ambitious.
And how many people, given a spreadsheet, are willing to look into the formulas and tweak them?
Enough that spreadsheets probably account for a trillion dollars worth of human productivity or more since their invention (just a wild guess).
I don't see how that is true. Xcode is not installed by default. You can use AppleScript and Automator to automate some tasks, but you can't really call that programming.
The source code to the whole system is closed, which limits discoverability.
I remember in my childhood playing Gorillas on my DOS machine. I pressed a button and suddenly all the code for it popped up. I learned that if you change a part of that, the game would change.
There is NOTHING like that on a Mac install today. The closest thing is in the web browser.
If Hypercard was programming, so is AppleScript. So is Python. So is bash. I think we're talking about the general "programming" - since I also see Excel included alongside Hypercard in this reminiscence - not the more specific (and arbitrary, IMHO) programming vs. scripting language divide.
I loved HyperCard and was also a regular in the AppleScript world for a decade (and made a good part of my living on it at the time), but there are far more options for anyone remotely motivated to program today, and with the exception of Xcode (probably the least accessible option) they are preinstalled and freely available. And if downloading a free development environment with a simple installer is too great a burden today, I don't think you would have gotten anywhere in the good old days.
Programming in the 80s was different, for sure, but I have a hard time understanding the sentiment that it was somehow more accessible. Sure, BASIC was preinstalled on some platforms. Still, most platforms today have more, and better, options out of the box than any platform of the early 90s. Did you write C in the 90s for Mac OS? If we're accusing Apple of being inaccessible because Xcode isn't preinstalled, the 90s were far worse: you had to pay for THINK C or CodeWarrior.
Not to mention that, also unlike then, it's easy to access tutorials and videos galore on the Internet to show you how to get started with development.
Much of the Mac OS is in fact open source -- see Darwin and other projects. However, having the code for the OS available -- or having Xcode installed -- does nothing for user programmability.
This comment is a good example of why it doesn't work for us developers to see ourselves as typical users.
But this still misses the point about HyperCard, which was that HyperCard was its own toolchain. To get started with it, you didn't have to:
Install a separate editor
Learn a separate editor
Install a build system
Learn how to build a project
Learn how to share code
Deal with dependencies
It's the no-separate-toolchain feature that made HyperCard so accessible, and which was influenced by Smalltalk.
Modern programming is a nightmare in comparison - including modern app programming.
Just because Apple uses Objective-C doesn't mean it's approaching app development in a Smalltalk-like way. Getting an app out of Xcode and into the store is a horror story of provisioning profiles, debug vs production builds, sandboxes, entitlements, and so on. Obviously people manage to fight through this, but it's a long way from being friendly and accessible.
IMO only VBA gets close to being a successful no-separate-toolchain environment - which is one of the main reasons VBA became so popular.
(You could argue Python is, but I don't think it's equivalent, because unlike VBA and HyperCard the first thing the user sees is the editor, and not a product that's obviously and immediately useful, but also happens to include a code editor.)
I think the reason people are scared of such things is not because they are incapable of learning the tools, but because developer culture has a good reason to make it look hard. Otherwise, why are we paid so well?
It depends on how you define "programmer." At Microsoft in the late 90s/early 00s we were very happy that there were so many Visual Basic programmers, but several million of those included people who would not self-identify as a programmer. That is, they were people who would write VB oneliners in either Excel or Access; people who would make or modify "keyboard record & replay" macros; etc.
I agree that trying to make everybody a full-blown systems or application programmer seems like a very hard problem.
I certainly agree that self-identifying as programmers would be totally the wrong criterion.
Consider the number of people today who edit a URL in their browser location bar (for example to delete unnecessary trailing information if they are going to email it, or to try to fix it if it doesn't work). I bet a good study of a representative user population would find that only 10% or less would do this even if it was suggested.
If someone is willing to do that I'm willing to consider them a programmer in this context. They have made the basic leap to understand how to map between text and behavior, how to debug, etc.
Funnily enough, starting from Yosemite, Safari has started to hide that portion until you click into the address bar.
Design for extremes: if you make stuff for people who hate programming, that will make it easier for people who love it. You may end up coming up with ways to boost productivity (and fun) for people you never even thought about before:
But I don't see how to apply it to programmability -- and I don't think Apple has found any way. How could you make something programmable for people whose eyes slide over a URL without being able to see its parts? (Note: they maybe are willing but they can't, just like grandpa can't make his hands stop shaking and get the key into the lock. Naturally after enough frustration they come to "hate" whatever it is.)
It's a Computer Chronicles episode on HyperCard with creator Bill Atkinson.
What's striking is how ordinary people (the used-car salesman for instance) actually _programmed_ their machines to solve a real-world problem.
This is such a drastically different paradigm than today, where even the savvy are more often consumers than producers when it comes to computing.
Anyway, HyperCard was amazing.
I vividly remember the reaction of my fellow students. Given the mockery and jokes, you'd think they were watching a bad sci-fi movie. Most students discounted everything they saw: 'real men and real programmers used C'. I remember being so disheartened that it seemed we'd evolved so little in tools/languages from 1979 to 1995.
At the time, everything was Unix and C programming (DEC Alphas were just being installed on campus, and Windows 95 had just been released). There were a lot of reasons Unix/C succeeded; there is a great classic paper about why C beat Lisp, and I agree with the author.
However, what always troubled me, is how my fellow students completely ignored any potential learnings from those videos. In many ways, those early Smalltalk programs were far more impressive than anything they had created, but they just wrote them off.
At GDC 2014, a post-mortem was presented on the classic game Myst. That was written entirely in Hypercard.
Think about it: currently, functional programming is, finally, getting some well deserved recognition in the wider programming world.
Yet almost everything it offers has been around in programming for as much as 45 years. The original paper on Hindley's type system was published in 1969. Milner's theory went to print in 1978. Scheme first appeared in 1975 and was already building off functional ideas spawned by earlier Lisps. Guy Steele designed an actual Scheme CPU in hardware in 1979.
And yet even today, a non-trivial number of programmers react with absolute horror at the idea of a Lisp (usually based solely on ignorant trivialities like the paren syntax), more or less exactly as your C-programming classmates did in 1995. And while FP is starting to make major inroads in some spheres, others dismiss the whole field as wank, and Java and C remain kings unlikely to be unseated for another decade at a minimum, if ever.
We remain utterly bound to one model of hardware, one model of programming, and largely, only a couple models of operating system, after decades of development, because so many programmers react with horror at anything they're unfamiliar with or that deviates from the perceived norm, be it in features, syntax, or focus.
And God forbid you make anything that might actually be easy for non-programmers to learn. It will be more or less met with instant and persistent scorn, and its users derided and outcast, simply because they didn't use a 'Real Programming Language' like C. Go ask a BASIC coder what life has been like for the last 40 years, or a game dev who worked in AGS or GameMaker prior to the last half decade or so. Hell, I have a friend who still sneers at visual layout designers.
The divide described in the article is very much culturally enforced as much as economically.
People assume that functional wasn't used because of programmer prejudice.
Why don't people assume that functional wasn't used because there were good reasons not to use it?
Imperative programming lives in a world where CPU cycles and memory are scarce. Gee, once CPU and memory became abundant and free, people started using functional programming. Go figure.
Statements of this kind trouble me every time I see them, last time yesterday in the discussion about Greece.
I guess Marx's ideas aren't very popular over here. But implying that economy and culture need to be mentioned separately seems at least a little naive.
More fundamentally, you're ignoring the huge difference in power between consumers (in this case the programmers) and the people that create the tools and take them to market.
Both tend to enforce that divide, but for very different reasons; the coders for cliquish reasons as I described, and the business guys like Apple for the reasons in the original link.
it is bizarre
Prejudice doesn't seem to have anything to do with it. Functional programmers think differently, and what's obvious to the Functionals isn't to the Statefuls. And the Statefuls are currently most of the world. I've flip-flopped myself, because while I love the elegance of being a Functional, being a Stateful is just so much more productive. There are a few reasons for this: if I want to make a game, there's no good functional framework. If I want to write a script to get something done, like download a webpage, my go-to language is Python, because I know for a fact that its libraries work and that its documentation is almost always stellar. Contrast that with Lisp, where you can spend at least a day just getting the environment set up in a way that ASDF doesn't hate. Especially on Windows. (Yes, if you want to make games, Windows needs to support your dev environment.) My info about ASDF is a couple years out of date, because to be honest I haven't felt inclined to look into it again after some bad experiences.
Haskell could be wonderful. Never tried it. Will someday. Until then, I'd love some sort of competition where a Haskell programmer and myself are given a task, like "write a script to X," where X is some real-world relevant task and not an academic puzzle, and see who finishes it first. It would be illuminating, since I'd give myself about a 30% chance of finishing first, but it would reveal what I'm lacking.
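To make the stateful-vs-functional gap concrete, here is the same toy task written both ways in Python. The task and all names are my own illustration, not anything from the thread -- a minimal sketch of the mindset difference, not a claim about which style wins:

```python
# Count word frequencies and report the top three, written twice:
# once mutating state in a loop, once without visible mutation.

from collections import Counter  # stdlib; shown for comparison below
from functools import reduce

TEXT = "the cat sat on the mat the cat"

def top_words_stateful(text, n=3):
    """Imperative/stateful style: build a dict by mutating it in a loop."""
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])[:n]

def top_words_functional(text, n=3):
    """Functional style: fold over the words, producing a fresh dict each step."""
    counts = reduce(
        lambda acc, w: {**acc, w: acc.get(w, 0) + 1},
        text.split(),
        {},
    )
    return sorted(counts.items(), key=lambda kv: -kv[1])[:n]

print(top_words_stateful(TEXT))
print(top_words_functional(TEXT))
```

Both return the same answer; in practice a Stateful would likely just reach for `Counter(TEXT.split()).most_common(3)` from the standard library, which is arguably the thread's real point about libraries and productivity.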
Arc had potential. It really did. Everyone just gave up on it, and it never attracted the kind of heroic programmers that Clojure did.
So the wildcard seems to be Clojure. It's a decent mix of performant, practical, and documented.
I'm out of time to pursue this comment further, but the main point is just that FP's problems have very little to do with societal acceptance or scorn. If you're running into that, you're probably running with a bad crowd anyway. It's mostly because imperative languages are popular, so network effects mean they'll just get better and better. If FP wants to chip away at that, it'll need to start off better and stay better. "Better" is many things, but it includes performance, cross-platformability (yes, Windows is necessary), documentation, and practicality (the ability to quickly accomplish random tasks without a huge prior time investment, Python seems to be the best at this so far).
I think one of Haskell's biggest marketing problems is that its strong points (strong static types + separation of side effects) aren't all that important in scripts (or any program that's small enough to fit in someone's head in its entirety), which makes it difficult to convince people of its merits in reasonably-sized examples.
What Haskell gives you are good, solid abstraction boundaries that you cannot accidentally break, and the ability to refactor code with a high degree of confidence that it's still going to work fine afterwards.
Neither of those are particularly helpful for any program that you might write in a competion, but both are incredibly important in day-to-day software development.
As for "getting work done" in Lisps/FP, I earnestly recommend checking out Racket. The developers have said that they've aimed to create 'the Python of Lisps' and by and large they've succeeded at exactly that. The documentation is thorough, the functional tools live alongside OOP and imperative ones quite nicely, the standard library is enormous, and DrRacket makes for a very good 'just open up the damn editor and start writing' tool.
Really, most of the FP languages are much more multi-paradigm friendly than Haskell, to the extent that I wouldn't even consider CL a functional-first language at all. CL's problems there are more a lifetime of neglect + Emacs loyalty, though Allegro and LispWorks offer more 'everyone else' friendly options. F# is fantastic as well, and integrates nicely with the rest of .NET and allows for a mixed paradigm while still getting all the functional tools and the power of type inference on top.
I recently re-installed my OS and setting up my Lisp environment (SBCL, Quicklisp, SLIME+Emacs) took about 30 minutes.
I don't see what the problem is on Windows.
My info is out of date though. Maybe it's better now.
> my goto language is Python because I know for a fact that their libraries work and that their documentation is almost always stellar.
> It's mostly because imperative languages are popular, so network effects mean they'll just get better and better.
This claim makes it seem like imperative/stateful is more productive because of the nature of the paradigms themselves:
> I've flip-flopped myself, because while I love the elegance of being a Functional, being a Stateful is just so much more productive
All of your other comments seem to be "imperative/stateful is more productive because of popularity, docs, and libraries".
Are you claiming both of those are true? Weakly claiming the first, strongly the second? Could you elaborate?
EDIT: Maybe a little longer.
So it's just a combination of mindset change plus a different "context" for programmers to get used to. That, along with deployment problems and sometimes lack of documentation, leads to a lack of incentive for anyone to make the switch. Since stateful is more popular, it's almost always better for productivity to start and stay a stateful programmer. However, for personal skill level, everyone needs to become a functional programmer for at least several months and try to develop production apps with it. Who knows, you'll probably get much further than I did, and figure out a way to convince all your friends to make the switch too.
Very recently computers got to the point where we can afford to write slow code; this is the reason higher-level people are coming out of the woodwork.
So I asked the developer whether he uses the famous ROS software for running the robot. He said no way: the robot had only an onboard Intel Atom of relatively low performance, and all the complex multi-threaded behavior had to be written in low-level C. No space nor execution time for anything more.
There. Top that.
Their Hypercard experience is at the 38 minute mark.
But this is the problem with our profession, it's baked in at a deep level that old == bad. So every new generation reinvents the wheel, worse than the last. The truth is all the big problems in software were solved in the 1970s, if only we would pay attention. The only actual progress in computing is that the hardware guys make everything faster.
Do you have a reference to that paper on C and LISP?
Back then, Kay saw simulation as the killer app. He saw discrete-event simulators as a business tool. They had a hospital simulation, where patients went in and went through Waiting, Examination, Surgery, Recovery, Rest, and Discharge. This was visual, and you could click on the little patient icons and get something like "I am a victim of Bowlerthumb". Smalltalk was a descendant of Simula, which was a simulation add-on for Algol. So that was a natural direction.
It turns out that very few people use discrete-event simulators as business tools. Although you can model something like a bank branch in a discrete-event simulator and try to fix bottlenecks, nobody does this. That was a dead end as a concept.
Xerox's commercial product, the Xerox Star, was very much locked down. It came with a set of canned applications, and was intended for use by clerical staff. I don't think it even ran Smalltalk; it was programmed in Mesa or Cedar. It competed with a forgotten category of products, shared-logic word processors. These were low-end time-sharing systems with dumb terminals tied to a server machine with a disk, used for word processing. Wang was the leader in that field. Those, too, were very locked down.
Back then, hardware cost was the huge problem. Kay said they could build the future because Xerox had the money to spend. (Xerox stock hit its all-time high in 1973). Each Alto cost about $20K at the time. Apple's first attempt at an Alto-like machine was the Lisa, which was a good machine with an MMU and a hard drive. But it cost around $10,000. The Macintosh was a cost-reduced version, and the original 128K floppy-only Mac was a commercial flop. It wasn't until the cost of parts came down and the Mac could be built up to 512K and a hard drive, with the option of a LaserWriter, that it started to sell in volume.
What made the Macintosh line a success was desktop publishing. "Now you can typeset everything in your office" - early Apple advertising. It sold, because it was far cheaper than a print shop.
Before the Macintosh, there was UNIX on the desktop. Yes, UNIX on the desktop, with a GUI, predates the Mac. Three Rivers, Apollo, and Sun all had good workstations with big screens in the early 1980s, before the Mac. The Three Rivers PERQ launched in 1979, five years before the Macintosh, with a big vertical CRT like the Alto. All these had some kind of GUI, generally not a very good one. Those were the first descendants of the Alto. They were used for engineering and animation, not just word processing.
We can bash Apple for not turning the masses into programmers at the same time, but it’s far from obvious this is even possible (on the contrary). And, at least for me, a hackable software device is a device that can be broken more easily, thus compromising the first and most important principle of usability. It’s so refreshing to be able to tell people that they don’t have to worry about the device, because there is almost nothing they could break, no matter how hard they try.
Also, this sounds awfully condescending:
I think one of the main consequences of the inventions of personal computing and the world wide Internet is that everyone gets to be a potential participant, and this means that we now have the entire bell curve of humanity trying to be part of the action. This will dilute good design (almost stamp it out) until mass education can help most people get more savvy about what the new medium is all about. (This is not a fast process). What we have now is not dissimilar to the pop music scene vs the developed music culture (the former has almost driven out the latter -- and this is only possible where there is too little knowledge and taste to prevent it). Not a pretty sight.
It’s like someone has got the whole computer interaction thing sorted out and is just waiting for the rest of the idiots to catch up. With all respect to Alan Kay, I’m not buying that.
This is really important. Mutability is a key requirement for creation, but also destruction.
What's more, the iPad is a capital P Product, which itself hosts Virtual Products, both of which have tightly constrained mutability. And this, it seems, really resonates with people: bright, shiny, consistently behaving virtual products. Apple wins financially by tightly coupling these virtual products to their physical product, but people win too because now they have, arguably for the first time, truly reliable virtual products.
Weird, I got a developer license and was building and installing random github projects on my iPad without any fuss. If you can't afford the license, go in on it with 100 of your closest friends and it's only a buck.
> But if you think that's okay because that means less technical support for friends, go ahead.
I'd happily pay 100 bucks not to do tech support anymore. I sent my mother my iPad 2 a year ago and haven't heard a peep from her about computing problems since.
History didn't start in 2008.
One of the iOS upgrades (7 or 8) brought massive performance problems, on a normal OS you would just downgrade but there is no such option on iOS.
Alternative browsers are also very slow, it looks like Chrome isn't able to take full advantage of V8. An Android device that costs less than a third than the iPad can run Chrome much better than an iPad2 can.
The iPad2 (WiFi) also has a GPS device, but it only works when you are connected to WiFi: I couldn't get it to work without WiFi which limits its usefulness.
Android devices with GPS always worked offline, you even have the option to download AGPS data when you're connected to a WiFi to speed up TTFF, but even without that it works it just takes (a lot) longer to get a fix.
Try finding the equivalent of a commonly used application from Linux/Windows on iPad that isn't filled with ads or limited features (unless you do an in-app purchase). For example, is there a usable VNC client?
In the end the iPad is useful only for: web browsing (with default browser), reading emails, playing games.
An equivalently priced Android device can be much more useful, and you have a wider selection of useful applications.
With all these problems I couldn't recommend that anyone buy an iPad (much less an iPhone), and I'll probably sell mine at some point.
There are several good VNC clients for iOS, here are a few paid and free ones: http://www.thetechgadget.com/top-vnc-client-iphone-ipad-ipod...
Testing on my iPad3 right now, and it seems to work fine. iPad2 should really be considered a legacy device at this point (even if Apple still sells it).
> Alternative browsers are also very slow, it looks like Chrome isn't able to take full advantage of V8. An Android device that costs less than a third than the iPad can run Chrome much better than an iPad2 can.
And you can't use Safari at all on an Android device! Gasp! As for the JS engine, I'm pretty sure all apps have the ability to use the full-speed one now, and if Chrome is lagging behind, the blame lies with Google, not Apple.
WiFi versions of iPads do not have GPS. Only the cellular versions. And those cellular models can use GPS even with no cellular connectivity.
Complaining about iOS apps with ads is laughable when Android is just as bad or worse. And god forbid that developers make any money with a 99 cent app or in-app purchase. And it's obvious you didn't even try a simple Google search for "iOS VNC" as there are plenty of worthy apps.
I agree about the Android market, but at least there are alternative sources like f-droid which usually have a selection of better apps.
I tried 'vnc viewer' and 'vnc client', neither of which worked too well. There are probably better ones I would have found if I'd kept looking.
It is just a shame that, in an effort to make interpersonal engagement over computers easy and ubiquitous, the goal of making the computer itself easily engaging has become obscured.
I wonder about the idea that popular music is undeveloped, tasteless, and ignorant. Take Michael Jackson's "Off The Wall" album. It's a very sophisticated album by any measure except, of course, that it is pop music.
> He's lamenting that taste is not important to music's popularity.
Taste is such a subjective thing; really, all he is saying is that popular music is not to his particular taste.
Can you explain this and give an example? I've been around for a while and don't see, or perhaps don't recognize, this pattern. If anything, I observe the opposite: very strong programmers help the weaker ones by creating new languages, libraries, and tools.
Someone asked Alan Kay an excellent question about the iPad, and his answer is so interesting, and reveals something very surprising about Steve Jobs losing control of Apple near the end of his life, that I'll transcribe here.
To his credit, he handled the questioner's faux pas much more gracefully than how RMS typically responds to questions about Linux and Open Source. ;)
Questioner: So you came up with the DynaPad --
Alan Kay: DynaBook.
Yes, I'm sorry. Which is mostly -- you know, we've got iPads and all these tablet computers now.
But does it tick you off that we can't even run Squeak on it now?
Alan Kay: Well, you can...
Q: Yea, but you've got to pay Apple $100 bucks just to get a developer's license.
Alan Kay: Well, there's a variety of things.
See, I'll tell you what does tick me off, though.
Basically two things.
The number one thing is, yeah, you can run Squeak, and you can run the eToys version of Squeak on it, so children can do things.
But Apple absolutely forbids any child from putting a creation of theirs to the internet, and forbids any other child in the world from downloading that creation.
That couldn't be any more anti-personal-computing if you tried.
That's what ticks me off.
Then the lesser thing is that the user interface on the iPad is so bad.
Because they went for the lowest common denominator.
I actually have a nice slide for that, which shows a two-year-old kid using an iPad, and an 85-year-old lady using an iPad. And then the next thing shows both of them in walkers.
Because that's what Apple has catered to: they've catered to the absolute extreme.
But in between, people, you know, when you're two or three, you start using crayons, you start using tools.
And yeah, you can buy a capacitive pen for the iPad, but where do you put it?
So there's no place on the iPad for putting that capacitive pen.
So Apple, in spite of the fact of making a pretty good touch sensitive surface, absolutely has no thought of selling to anybody who wants to learn something on it.
And again, who cares?
There's nothing wrong with having something that is brain dead, and only shows ordinary media.
The problem is that people don't know it's brain dead.
And so it's actually replacing computers that can actually do more for children.
And to me, that's anti-ethical.
My favorite story in the Bible is the one of Esau.
Esau came back from hunting, and his brother Jacob was cooking up a pot of soup.
And Esau said "I'm hungry, I'd like a cup of soup."
And Jacob said "Well, I'll give it to you for your birthright."
And Esau was hungry, so he said "OK".
Because we're constantly giving up what's most important just for mere convenience, and not realizing what the actual cost is.
So you could blame the schools.
I really blame Apple, because they know what they're doing.
And I blame the schools because they haven't taken the trouble to know what they're doing over the last 30 years.
But I blame Apple more for that.
I spent a lot of -- just to get things like Squeak running, and other systems like Scratch running on it, took many phone calls between me and Steve, before he died.
I spent -- you know, he and I used to talk on the phone about once a month, and I spent a long -- and it was clear that he was not in control of the company any more.
So he got one little lightning bolt down to allow people to put interpreters on, but not enough to allow interpretations to be shared over the internet.
So people do crazy things like attaching things into mail.
But that's not the same as finding something via search in a web browser.
So I think it's just completely messed up.
You know, it's the world that we're in.
It's a consumer world where the consumers are thought of as serfs, and only good enough to provide money.
Not good enough to learn anything worthwhile.
Compare that with a classical Smalltalk UI:
The programming language of the Xerox Star system was 'Mesa', which was much more like what Apple used: Clascal, Lisa Pascal, and then Object Pascal, ...
I was using IBM 3270 terminals, in a special shared room for that purpose, to edit COBOL programs with fixed width fonts - ugh.
How strong would the self-defence mechanisms be if such a system succeeded at allowing anyone to be equally able to program anything on it?
Wouldn't it be way "safer" for the status quo of the IT community to hide implementation power behind the walls raised by the C learning curve?
Interestingly, the divorce between technicians and users/consumers is steadily reversing the trend.
I, as a programmer, spend my days trying to express things as clearly as possible. Programming, as a whole field, is trying hard all the time to come up with safer and simpler ways to express precise thoughts.
Smalltalk is probably neat, but it’s not a magic bullet. There are no magic bullets. Not everyone can be a programmer, and not because programming languages are hard. Not everyone can be a programmer because you have to be able to think precisely, abstract, and deal with complexity.
Mmm... I'm not so sure. "Programming as a field" certainly does, in the sense that if you take it as a whole and look into the field, there are people trying for those things. But by and large, many programmers seem to revel in some degree of obscurantism.
And I work with Objective-C a lot. There is a downside to theExtremelyLong:message passingStyle:functionNames
You could say similar things about AppleScript too, yet I've never met anyone who actually uses it.
Smalltalk was good and it inspired a lot of future things, but it didn't succeed because it had a bad price/performance ratio in the '80s, and the image system created too much coupling compared to files.
I actually like the selector syntax. I have type-ahead and don't have to constantly look at code to figure out what parameter number something is.
[NSString stringWithFormat:@"%d items", itemCount] vs.
"%d items" % itemCount
[[[json objectForKey:@"keyA"] objectForKey:@"keyB"] objectAtIndex:0];
By what definition of success?
From the '80s through the '90s, Smalltalk was used commercially in finance and manufacturing and … on large-scale projects.
* publish - subscribe
* everything is an object
* message passing
* naming conventions for Classes, methods, and parameters
* Image-based editing as opposed to word-processing style editors
and so on...
In its domain, for ease of use back in the mid 1990's, Microsoft Visual Basic was the best. See the pattern: Visual Works Smalltalk, IBM Visual Age Smalltalk, Microsoft Visual Basic.
I did a double-take after reading that line before I noticed the publish date.
See for example a tweet from Bret Victor a few days ago:
Or this article by Alec Resnick:
Their argument can be roughly summed up as "We were on the right track with the constructivist approach to HCI in the 70s/80s, but capitalism ruined everything and now we just watch Kim Kardashian on our tablets". There's of course some cynical truth in there - I also spent my grad school years reading Papert and Piaget and Kay and Resnick (Mitchell, not Alec), and I also find their vision of personal computing very enticing. And in many ways I orient my work to fit within the frameworks they built - I have nothing but deep respect for them as academics.
But I don't buy the whole "tablets and phones are just mind numbing dumb entertainment devices". In the past 5 years, I have seen:
- my younger brother start an internet radio station from scratch in a bedroom at our parents', which grew to thousands of listeners
- my grandmother use the internet to communicate with her geographically distant friends and family
- teenagers making short movies in their backyard, or learning how to compose music
- kids discovering themselves a passion for photography, able to retouch photographs without needing access to a dedicated dark room, all with a hundred dollar device (when I was a kid, a hundred dollars barely got you a walkman).
- illustrators empowered to work from their homes and make a living by working with clients many thousands of miles away
- and many, many more.
None of that would have been possible even 15 years ago. Our tools are much, much better - Garageband and Photoshop and iMovie and Fruity Loops and others offer the means to do things in your bedroom that would have cost tens of thousands of dollars and required hundreds of square feet of free space a few decades ago. Sure, you have some purists that might argue that Instagram is to a dark room what a McDonald's burger is to Kobe beef - but I take issue with that as well.
Naturally, there are people who only use these devices for watching movies and playing games. That's absolutely fine, and I'm not sure why computer scientists like to take such a haughty attitude towards that. Not everyone spends all their waking hours working on their next masterpiece; and in fact, even the people who produce masterpieces spend idle time doing unproductive things just on par with reading Reddit or watching silly YouTube videos.
Are computers the best they can be? Far from it, and many of us work very hard at it (including the people I quoted earlier). But the attitude that we are now in a tarpit of mind controlling devices and that the golden days of computing are behind us is deeply wrong. We have come such a long way.
For example functional languages eschew the use of variables but I think that has set back their adoption because variables merely attach human readability to logic (like comments). Seeing a big blob of functions is great for brevity but disastrous for long term maintenance. A compiler can trivially treat variables (especially immutable ones) like macros and transform them back into the same syntax tree that pure functional code generates. To say that another way: why don’t functional paradigms suggest that we break long blocks of code into shorter blocks with variables? They seem to be discouraging human instinct when there is no cost in encouraging it.
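The point about named variables can be sketched in JavaScript (the function names and data here are purely illustrative, not from any particular codebase):

```javascript
// One big blob of chained calls: brief, but the intermediate
// meanings are invisible to the next maintainer.
const blobTotal = (orders) =>
  orders
    .filter((o) => !o.cancelled)
    .map((o) => o.price * o.qty)
    .reduce((a, b) => a + b, 0);

// Same logic, broken into named immutable bindings. Each `const`
// attaches human readability, and a compiler can treat it like a
// macro and recover the exact same expression tree.
const readableTotal = (orders) => {
  const activeOrders = orders.filter((o) => !o.cancelled);
  const lineSubtotals = activeOrders.map((o) => o.price * o.qty);
  const total = lineSubtotals.reduce((a, b) => a + b, 0);
  return total;
};
```

Both versions compute the same thing; the second just labels its steps, at no runtime cost.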
Also if we take a step back and see that the elimination of globals gets us most of the way to functional programming, then the main thing left is the notion of time due to externalities like input/output. Imagine for a moment working in a lisp environment using REPL. When you type some code and hit enter, how is that fundamentally different than a monad? Well, it potentially triggers the entire syntax tree to be reevaluated which is actually more expensive than handling input with a monad. But metaphorically it’s similar - we can just think of monads as places where the logic can’t proceed because it’s missing a piece of the puzzle. If we were using functional programming properly and assumed that we had a near-infinite number of cores and flops, we’d quickly see that monads are the bottleneck.
We should be able to translate between global-less Smalltalk message handling and Lisp monads, and let the compiler optimize away any monads that don’t depend on external state. To me, that suggests that working at a level below human-readable/imperative is generally a waste of time. We should be able to select any code block and use a dropdown menu to select the language we want it presented in. I remember the first time this hit me and I asked myself if it was possible to write something in HyperTalk/Smalltalk/C (or any other imperative language) and convert that to Lisp with these conventions. The answer to me is pretty self-evidently yes. Going the other direction from Lisp to an imperative language is even easier.
A ramification of this is that if you converted a C loop that uses immutable variables and no globals to Lisp, it would be evaluated into the most minimal logic possible because analysis would reveal that (for example) the elements being iterated over have no side effects between one another. In other words, the compiler would independently parallelize the loop and derive something analogous to map/reduce. Why are we doing that by hand? Surely there has to be a better reason than brevity but I struggle to find one.
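A minimal sketch of that claim, assuming a loop whose iterations have no cross-dependencies (the names here are illustrative):

```javascript
// Imperative form: no globals, and iteration i reads only xs[i]
// and appends only its own result, so iterations are independent.
function doubledLoop(xs) {
  const out = [];
  for (let i = 0; i < xs.length; i++) {
    out.push(xs[i] * 2);
  }
  return out;
}

// The declarative form a compiler could derive from that analysis;
// once iterations are known to be independent, the map is trivially
// parallelizable across however many cores exist.
const doubledMap = (xs) => xs.map((x) => x * 2);
```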
When it’s all said and done, I think the arbitrary distinctions we’ve made between computer languages are relevant to education and worker productivity, but in the end are fantasy. I would have thought by now that we would have dropped the pretense. If computers worked the way I had imagined they would by this point in history, a tablet would have at least 1000 cores (or drop the notion of cores entirely and go with functional hardware), the compiler/runtime would consider the latency between it and the other devices around it in the mesh and adapt accordingly, all processes from the kernel to userland services to executables and threads would just be functions with the minimum permissions necessary to do their jobs, and most importantly usage would be reversed so that users would write macros through the use of human language rather than try to figure out how to do a task by hand from a locked down sandbox. All this code would be shared out to the world through some kind of hybrid Git/BitTorrent so that the solution for how to perform some declarative programming task would almost always already exist. And all of that would be constantly evolving with genetic algorithms and other software agents.
The tablet may as well not exist, because with the world’s computing power at your fingertips, why is your dumb terminal any better than another except for eye candy? It’s reminiscent of idol worship. Kind of gives me the heebie jeebies actually. And tragically moves us further away from technoliteracy with each passing moment.
At FPL, the large Florida electric utility where I worked, we went all-in, received training, and with pricey consultants, built several client server applications using GemStone as the Smalltalk server.
What most developers loved about Smalltalk was the liveness of the system, in that a just-in-time compiler made it possible to develop in a debugging environment. You changed a line in a method, and immediately you could interact with the revised app.
In 1995, Sun Microsystems released Java whitepapers. When I saw those I thought - Java is Smalltalk with C syntax, but without the image-derived liveness.
FPL switched from Smalltalk to Java, gaining all the benefits of reusable code and object-oriented libraries.
I miss Smalltalk syntax, in particular named method arguments, and to this day comment my Java code to compensate like this...
message, // receivedMessage
this)); // skill
A particular feature of the Visual Works Smalltalk implementation was a concept known as "Save to Perm" which, after a thorough garbage collection, moved the remaining long-lived objects, e.g. nearly the whole language runtime, to a memory buffer thereafter free of garbage collection.
Even after two decades, no Java implementation has this feature without resorting to off-heap storage.
IBM got on board the object-oriented development wave in the early 1990's and introduced Visual Age Smalltalk. They bought a Smalltalk version control product, "Envy", and created the development environment that has over the years evolved into the famous Eclipse (IBM's Eclipse "blocks out" Sun Microsystems' Java).
Initially it was very tedious to develop apps, as you'd have to refresh the page each time, and the transpilation times began to add up. I read about Smalltalk and realized that Objective-C is very close to Smalltalk, and hence I could implement hot reloads in a similar way.
I proceeded to make a watcher process in node.js and made a process in the page listen via websockets. Whenever a file changed, I'd just fetch the file and execute it again, replacing it in the central class repository. All existing objects would instantly have their implementations changed.
It's much more fun to develop this way, but it's also easier to make a mess. Still, it's sped me up enormously and I end up missing it when I code in other object-oriented languages.
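The core trick behind that kind of hot reload can be sketched in a few lines (the names `classRepo`, `defineClass`, and `Greeter` are hypothetical, not the commenter's actual code): keep one prototype object per class in a central repository, mutate it in place on reload, and every existing instance immediately sees the new methods.

```javascript
// Central class repository: class name -> shared prototype object.
const classRepo = {};

// (Re-)register a class. The key move: never replace the prototype
// object, only mutate it, so old instances keep delegating to it.
function defineClass(name, methods) {
  if (!classRepo[name]) classRepo[name] = {};
  Object.assign(classRepo[name], methods);
  return classRepo[name];
}

// Instances delegate to the class's live prototype.
function makeInstance(name) {
  return Object.create(classRepo[name]);
}

// First "load" of the file.
defineClass("Greeter", { greet() { return "hello"; } });
const g = makeInstance("Greeter");

// The file changes on disk; the watcher fetches and re-executes it.
defineClass("Greeter", { greet() { return "hi there"; } });
// g was created before the reload, yet g.greet() now says "hi there".
```

In the real setup a node.js `fs.watch` process would push the changed filename over a websocket, and the page would fetch and re-`eval` the file, which ends in a `defineClass`-style call like the one above.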
Not sure if apocryphal or not.
Or put another way: yes, smalltalk addressed programmability, but it did not address usability.