Alan Kay on AI, Apple and Future (factordaily.com)
198 points by pm2016 on Aug 2, 2016 | 107 comments



> Part of the idea behind “real objects” was to act as “virtual machines” connected by “neutral messages” in the very same way as we were starting to design the Internet — and that the “virtual internet of objects” would map into the subsequent hardware network. The latter got made, and the former did not get understood or adopted.

Can anyone elaborate on what he means by this (and the overall idea of "real objects")?


"In computer terms, Smalltalk is a recursion on the notion of computer itself. Instead of dividing "computer stuff" into things each less strong than the whole—like data structures, procedures, and functions which are the usual paraphernalia of programming languages—each Smalltalk object is a recursion on the entire possibilities of the computer. Thus its semantics are a bit like having thousands and thousands of computers all hooked together by a very fast network."

That's Alan Kay in The Early History of Smalltalk, which you might enjoy reading in full: http://worrydream.com/EarlyHistoryOfSmalltalk/


It isn't, though. Smalltalk "messages" are just function calls. It's not like all Smalltalk objects are running asynchronously, in parallel, with unsynchronized messages flowing around.


In my opinion this is the most important insight into Smalltalk: that Smalltalk is renaming indirect function calls as "messages". If Smalltalk message passing were asynchronous, and if Smalltalk objects were processes, then Smalltalk would be closer to the vision that Kay conveys.
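
Concretely, and only as a minimal sketch (Python, with invented names): a synchronous "message send" in this sense is a dynamic lookup on the receiver followed by an ordinary call that returns a value.

  class Account:
      def __init__(self, balance):
          self.balance = balance

      def deposit(self, amount):          # the method the selector is bound to
          self.balance += amount
          return self.balance

  acct = Account(100)
  selector, args = "deposit", (25,)
  result = getattr(acct, selector)(*args)  # look up on the receiver, then just call
  print(result)                            # 125 -- the sender blocks and gets a value back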

I believe it was Jonathan Rees who emphasized the relationship between Smalltalk's messages and generic functions.


Smalltalk was descended from Simula, which was a variant of ALGOL-60 with discrete event simulation capabilities. As a side effect, Simula was the first object-oriented language. This confused things. Objects were associated with discrete event simulation, which led to the obsession with messages. Kay liked discrete event simulation - he thought that one of the big applications for personal computers was going to be simulation and scheduling. There's a little hospital simulation for the Alto shown in the Personal Dynamic Media book.

In a discrete event simulator, there's a notion of time, and a simulated pseudo-clock, but in reality, all the events are sorted in time order and executed sequentially. (You can schedule something to happen in 2 seconds, but that just puts it in the event queue in the proper place.) Locking against concurrency is not required. This serialized notion of concurrency is more or less equivalent to just calling functions.
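
For illustration, a toy discrete event "simulator" in Python (my sketch, not Simula): the clock is just the timestamp of the event at the head of a sorted queue, and everything runs one event at a time.

  import heapq
  from itertools import count

  events = []        # (time, seq, action) kept in time order
  seq = count()      # tie-breaker for events scheduled at the same instant
  now = 0.0          # the simulated pseudo-clock

  def schedule(delay, action):
      heapq.heappush(events, (now + delay, next(seq), action))

  def run():
      global now
      while events:
          now, _, action = heapq.heappop(events)  # jump the clock to the next event
          action()                                # run it to completion; no locking

  schedule(2.0, lambda: print("fires at t=2"))
  schedule(1.0, lambda: print("fires at t=1"))
  run()   # prints t=1 then t=2, strictly sequentially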

Once people realized that object-oriented programming doesn't require discrete event simulation, the concepts parted company. OOP became more about encapsulation, although some languages still use the "message" terminology.

So this is a historical artifact. Those happen. In von Neumann's EDVAC report, where he laid out the design for most modern computers, there's discussion of logic gates as simplified neurons and synapses. Nobody thinks of logic gates that way any more, and we now know that neurons don't work like logic gates. But at the time, people thought of them as similar.


Messages were clearly not fancy subroutine calls in Smalltalk-72 and -74: they were a stream of tokens between the sender and the receiver. This was optimized away in Smalltalk-76 (and so -78 and -80) so that messages no longer seemed like the ones in Actor languages or Erlang.

But I don't think it is unfortunate that the name "messages" has persisted. Check out Squeak running on a 56 core Tilera chip with the RoarVM. Messages from an object to another one in the same core are indeed just fancy subroutine calls. But if the receiver is in a different core, then a message is a bunch of bytes sent from one core to the other. Even though the two messages are the same at the source level and at the bytecode level.


When you write

     i := (j + 1)
this is said to be sending a "+" message to j with argument 1. But how does the value get sent back to the assignment? For consistency, it ought to be j sending an assignment message to i with the value. But how did j find out about i? The assignment message is sent to i, not j. Problem. So the Smalltalk convention is that each message sent produces a value. That value is determined by the recipient of the message; it's not just a send status code as in a real message passing system.

Values break the message passing paradigm, and force the "message passing" to work like a function call. So there was no real reason not to implement them as function calls. Then there was no real reason not to think of them as function calls.

A purer message passing approach would involve a callback when the result is available. That's how everything with a delay works in Javascript. It's a bit unwieldy to do that for every piece of computation, though.
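
A rough Python sketch of that callback style (purely illustrative, not how any Smalltalk works): nothing returns a value; every "send" says where the result should be delivered next.

  def plus(receiver, argument, reply_to):
      reply_to(receiver + argument)

  def assign(name, env):
      def deliver(value):
          env[name] = value          # the assignment is itself just another delivery
      return deliver

  env = {"j": 41}
  plus(env["j"], 1, assign("i", env))   # i := (j + 1), with an explicit callback
  print(env["i"])                       # 42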

Real asynchronous messages are something else. Go uses them extensively. Then you have locking, race conditions, lockups, and all the problems of concurrency, but you can get multiple processors working on the problem.


In Smalltalk-72 and -74 the assignment (left arrow character) was a message just like any other. This became a special case in Smalltalk-76 and later and became just a message again in Self, where you wrote

  i: (j + 1)
meaning

  self i: ( self j + 1 )
For tinySelf 1 I did implement each message using future objects. This is, as you say, slooooow to do for every piece of computation but there are implementation tricks that can optimize away all the cases where it is not really needed.

http://www.merlintec.com/lsi/tiny.html
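
For flavor only, here is a tiny Python sketch of the future trick (my illustration, not tinySelf's actual implementation): a send returns a placeholder immediately, and forcing the placeholder waits only if the answer isn't ready yet.

  from concurrent.futures import ThreadPoolExecutor

  pool = ThreadPoolExecutor()

  def send(receiver, selector, *args):
      # Return a future right away; the actual method runs later/elsewhere.
      return pool.submit(getattr(receiver, selector), *args)

  class Point:
      def __init__(self, x):
          self.x = x
      def plus(self, n):
          return Point(self.x + n)

  f = send(Point(1), "plus", 41)   # the sender keeps going without the answer
  print(f.result().x)              # forcing the future waits only if needed: 42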


> So there was no real reason not to implement them as function calls. Then there was no real reason not to think of them as function calls.

Indeed, and this was the insight discovered by Guy L. Steele Jr. while experimenting with implementing the Actor Model in Lisp, which was one of the insights that led to the development of Scheme. At some point, while reading his code, he realized that actors and closures were implemented with roughly the same code, modulo alpha conversion, and thus a message pass was the same as a function call.

Of course, the real message-passing called for by the Actor Model became more practical with the introduction of tail-call elimination, because that meant that the stack did not grow out of control as the actors delivered their results via messages to the next actor. I have a feeling that the discovery of closures-as-actors and the discovery of tail-call elimination went hand-in-hand, and possibly even continuation-passing style.
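
A hand-wavy Python rendering of that observation (illustrative only; Python has no tail-call elimination, so this just shows the shape of the idea): an actor is a closure over its state, a send is a call, and the reply goes to an explicit customer/continuation.

  def make_counter(count=0):
      def actor(message, customer):
          nonlocal count
          if message == "inc":
              count += 1
          customer(count)        # reply by calling the customer (continuation)
      return actor

  counter = make_counter()
  counter("inc", lambda n: None)
  counter("read", print)         # prints 1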


> As a side effect, Simula was the first object-oriented language.

No, Simula was the first language with objects; having objects does not make a language object oriented. Smalltalk was the first object-oriented language: the object was the central metaphor upon which the entire language was built, unlike Simula, which was a procedural language with objects grafted on. Kay coined the moniker "object oriented" to denote this distinction; the term literally exists to describe the difference between what Smalltalk was and what everything else was. It does not apply to Simula, despite the persistence of this misunderstanding.


I'm not sure that's a meaningful distinction without more detail. An x86 "function call" is a push, a jump, logic, a push and a return jump (same for a procedure call).

A "message" is something you can (more) easily adapt to be something sent over the network. In a high-level system, the distinction between a "function call" and RPC can also dissappear - but does it really matter if it is messaging that is like (abstract/high level) function calls or the other way around?

Consider if the ("x86") function call ends up being a jump from an x86 core on a server to an arm chip on a phone - it doesn't work.

But a phone might very well respond to a "getId" message. It might return a phone number or an IP address, for example.


Real objects are processes.

Alan Kay has had many opportunities to specify precisely what he means by objects and messages, and based on reading his more recent (or less old) VPRI papers, Kay draws inspiration for objects from biology, where swarms of molecules pattern match against the partially constructed molecule and build it up step by step. I've seen him give an example in only one paper.

Needless to say this is not how Smalltalk works.


> Needless to say this is not how Smalltalk works.

I've always gotten the strangest feeling that Mr. Kay made up the distinction between Smalltalk and later OOP languages after the fact. I think this is precisely the reason I get that feeling: because despite what Mr. Kay says about Smalltalk and OO, programming in Smalltalk feels very much like programming in any other OO language.

I don't mean any disrespect to Mr. Kay. I give him the benefit of the doubt in assuming that his intended language was different from the language that actually materialized, likely due to implementation issues. In that regard, as a language designer I can very much sympathize.


I interpret it as message passing between independent computing entities. For example, having a global object called "google" and an associated message "ask", or an object "HSBCBank" with a method "getMonthlyStatement".

Instead, we started to apply OO principles from a software engineering point of view, in order to write big monoliths.


The message send is actually an object in Smalltalk.

So, in process, out of process, to another box, this shouldn't matter.

For performance reasons, all kinds of optimizations are present.

In Pharo (http://pharo.org), which is "Smalltalk inspired", there are new developments like the remote debugger:

http://dionisiydk.blogspot.be/2016/07/remote-debugging-tools...

It is based on Seamless.


Pharo isn't Smalltalk-inspired; Pharo is a Smalltalk, one of several. It forked from Squeak to create a more business-oriented Smalltalk rather than the garish toy for children that Squeak was.


I tend to think this means that the objects ought to have a manageability and reality unto themselves, and be portable and manipulable beyond the application.

Instead of objects being a structure inside of a single process that manipulates the object, objects are general entities that exist broadly on a system. All things on the system use a neutral messaging format to talk to objects, getting data or asking for the object to run operations.
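
A toy sketch of that idea (Python; every name and the message format here are invented): any process on the system could answer the same kind of message, whether the object lives in-process or behind a socket.

  import json

  class Phone:
      def get_id(self):
          return "+1-555-0100"

  registry = {"phone": Phone()}    # objects addressable by name, not by pointer

  def handle(raw):
      # One hypothetical "neutral" format: name a receiver, a selector, and args.
      msg = json.loads(raw)
      obj = registry[msg["receiver"]]
      result = getattr(obj, msg["selector"])(*msg.get("args", []))
      return json.dumps({"result": result})

  print(handle('{"receiver": "phone", "selector": "get_id"}'))
  # -> {"result": "+1-555-0100"}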


OOP came out of early "supercomputer" simulation work. Best I can tell, the idea is that each object could be assigned to one or more CPU cores (each running an instance of the object), thereby maximizing the performance of the simulation.


> Startups are not a good place to do research

I saw Alan Kay give a talk on Squeak in NYC when I was a little Sega Genesis kid... he made me want to be a programmer because I hoped one day to work on the stuff he does the way he does it. That's like wanting to be an astronaut.

The tricky bit is that startups pretend that they're doing that kind of work to hire engineers (or get press/funding/etc).


Often times they demand that you uphold the fiction in order to work for them, which makes for fun social experiments in deflating mass self-delusion.

Them: "We just really want to find people who are passionate about this work and these tools!"

Me: "Anybody who is telling you they're 'passionate' about making yet-another-dashboards-as-a-service product on top of a tired old Java Spring software stack is lying to you."

Them: "So you're not passionate about dashboards or Java?"

Me: "No. No I am not. What I am is capable of fixing your performance and scale problems so that you don't lose your reference customers and destroy any chance you have at acquisition or IPO. Look on the bright side. At least I'm not lying to you."

Them: "Oh. I see."

Me: "I'm passionate about making money."


You captured 18 months of personal depression in 7 lines :)


Sorry you've been dealing with depression. From experience I know that's a tough thing to grapple with.

Being compelled to maintain sustained cognitive dissonance is a form of low-grade chronic stress, and unfortunately depression is often a side-effect of chronic stress.

I'm always wary of places that are more interested in building a cult than a company for precisely these kinds of reasons.

Get well, mate.


A lot of us have been there. Hell, I'm there now, but in a far more mild way than I have been in the past, luckily for me. It got so bad at one point a few years ago that I wound up taking four whole months of medical leave due to severe depression and panic attacks, so I know how bad it can be, and you have my sympathy. If you think it would help, you might look into doing the same.

I know it can be super hard to motivate yourself to take the necessary steps when you're depressed, but hopefully the thought of some relief is enough of a motivator, if only barely. The first step is to see a psychiatrist if you haven't already.

If you need/want someone (not a mental health professional) to talk to, my e-mail address is in my profile. Either way, I hope things get better for you soon :)


Was this comment supposed to go in the "Non-passionate developer" [1] thread?

[1] https://news.ycombinator.com/item?id=12207970


Bravo!


I think I agree with him at this point. Which is a bit sad. 12 years ago I went into academia because I wanted to do self-driven research, but then I realized academia isn't a good place to do research unless you are at the top of the hierarchy.

Then, ~6 years ago, I tried starting a startup and working for startups, but as Kay points out, that's not a great place to do research either. Really, you need to be ruthlessly market- and production-focused.

So I guess my first two guesses for where I could do my own research were wrong. My new theory: The best place to do research is in your garage.

Let's see if I still feel the same way 6 years from now.


Taleb has the right idea for how to do research, in my opinion.

First you get "fuck you money" (which is not necessarily millions and millions of dollars, just enough money so you don't have to be a wage slave anymore).

Second, you move to some place cheap with low upkeep costs and a nice garage/mad-science-laboratory on premises. That's where you do your research.


This is a really compelling vision but I've always wondered how one stays relevant in this scenario. Science is a conversation and it is really hard to avoid becoming an equivalent of a babbling madman standing on a soapbox when doing research on your own terms in your own garage.


well, I recommend you ask https://twitter.com/nntaleb about it directly then. he is not a hermit withdrawn from the world, but he doesn't work in academia, or for a private corporation, or for a government agency. he does research (in mathematics mostly, as well as finance, risk modeling, and interestingly enough ancient Levantine languages) and participates in those communities even though he is not employed by an institution.


Mr. Taleb is an outlier - a black swan if you will ;)

And he is a bit of a babbling madman - with his share of controversial ideas and vocal rejection of establishment.


You just need some form of reckoning. You can use markets. You can use advisors. You can use some metric you set for yourself. You don't need society to be sane, but you do need to replace it with something.


Pulling straight from my quotes file:

  "What one wants is to be able to talk with a diverse club of smart
  people, arrange to do short one-off research projects and
  simulations, publish papers or capture intellectual property
  quickly and easily, and move on to another
  conversation. Quickly. Easily. For a living. Can’t do that in
  industry. Can’t do that in the Academy. Yet in my experience,
  scientists and engineers all want it. Maybe even a few
  mathematicians and social scientists do, too."

  -- Bill Tozier, "Diverse themes observed at GECCO 2006", http://williamtozier.com/slurry/2006/07/17/diverse-themes-observed-at-gecco-2006


Aren't some BigCos a good place to do research? Have you tried that approach?


Hard to afford a garage in Silicon Valley unless you rent it.


Indeed. I'm an HCI researcher - I've been in academia, in startups, and now in R&D at a large successful company.

I'm much happier at the latter. None of the pressure to publish/write grant proposals all the time like in academia, and much more time to actually sit down and think thoroughly about problems than in startups.

There are of course downsides - you don't have much control over the product like you might have in a startup, and you can't pursue completely crazy blue sky stuff like you might be able to as a grad student; but it's worked out for me.

If anyone reading this is interested in HCI research and designing future hardware, and if you're as comfortable programming as you are with a soldering iron, as you are in Sketch, as you are reading academic papers and reflecting deeply about the field, please send a resume/portfolio to the email in my bio.


I think that probably works well in larger enterprises that are intentionally separating their Research orgs from their Product orgs. So that the former isn't ultimately beholden to the same success criteria as the latter. Since the whole point of Research is that it informs future Product, not Product demanding Research.

Though I've noticed an unfortunate trend where large enterprises create a Research org that isn't any such thing and is instead just an incubator for the Product org. It seems to result in sadness for both sides.


Yes, startups are not a good place to do research. Since the redesign of YC, with less funding per startup, startups are intended to fail, not pivot, if the original design doesn't work. When you think about it, YC's process is waterfall development.

PayPal was a huge pivot. Their original business was hardware security tokens.


He also said:

> Value judgements require a value system.

When he talks about research, he's valuing a certain kind of research: laboratory, explorative research that may or may not reach a market.

Startups are, from a certain point of view, research in that they try to discover ways to fulfill customer needs, whether customers know they have them or not.

A few examples just from YC: Airbnb, Twitch, Dropbox. In particular, Airbnb discovered a need that people really had. Is this not research? I think it is, depending on your value system.


I think startups can be considered research, but one has to characterize it fairly. It is not technology research, it is not physics research. Your description of trying "to discover ways to fulfill customer needs, whether customers know they have them or not" is a fair research topic description. Though I personally disagree with it: IMO startups are doing research on how to get rich quick; discovering customer needs is only tangential to the issue and is not usually the best way to reach IPO/exit. Sometimes you get good products that don't suck (e.g. Dropbox); most of the time you get scams that try to capture as many users as possible to enable a good exit (and then the product shuts down). So in a way, a lot of startups can be considered experiments in persuading users and investors.


I think startups, as they have been fashioned by the current venture capital pipeline, aren't a good place to do research. We need to redefine how people/community/customers participate in supporting the research/prototyping phase of building.

Creating a community of makers building together in an open and discoverable fashion is the first step. It first allows you to combine marketing with building, and attract community earlier, thus finding more knowledge-share, and hopefully building better products quicker.

Next, by sharing what you're building, you're able to tap into that community, which is making alongside you and following your journey, by using crowdfunding. Specifically, continuous crowdfunding during the prototyping and research phase.

Now that you're building a company/product with your community, you can find support all along the way. Your audience is happy to share in and support the successes and challenges you will face.

Next when you're ready to launch a real product (after the prototyping/research phase) you should be able to sell pre-orders and crowdfund bringing these products to a mass market.

Beyond this you should be able to continue selling those items, then repeat this entire process with the community momentum, trust, and social proof you've now built up.

We're working to build that future for startups/companies/products at Baqqer. https://baqqer.com/


"The irony is that the ‘second Steve’ of the later Apple made and sold the equivalent of mental sugar water to all via convenient appealing consumer gadgets."

Ouch.


This stood out to me too. It's harsh, but true. I love that Apple is making computing devices more accessible to people, and I think that's a good thing. But I wish they weren't also making those very same devices more closed and controlled. Imagine if all of the original personal computers were as locked down as the iPhone is, in terms of being able to tweak/customize it, create and distribute software for it, or find other people's software for it. It's definitely a regressive step, and I don't see why we can't have beautiful industrial design, slick software interfaces, and easier, more open access under the hood for users who want to get in there.


iPhone has a computer in it, but it is not a general purpose computer. Why is this concept so hard to get? There are tons of things that have CPUs and billions of lines of code to run them, but they are not personal computers.


It's hard to get because iphone users want it to be an open general-purpose computer that they own, and it masquerades as one most of the time. Remember when the apple-using-world lost its collective shit over a free U2 album? It's not like they didn't know _intellectually_ that Apple had the ability to do that - they just hated being reminded of it.


>iphone users want it to be an open general-purpose computer

Which ones?


All of them. A general purpose computer (e.g. a linux box) is one that can run arbitrary code. A special-purpose computer (e.g. an ATM machine) only runs what the manufacturer intends it to run. The former is powerful-but-dangerous, and the latter is safe-but-limited.

Customers, of course, don't want a trade-off - they want power and safety. iOS is an attempt to get as close to that as possible, but it only works until iphone users want to do something on their phone that Apple doesn't want to let them do - that's when the illusion comes crashing down. When this happens, Apple users don't say, "Well, I'd like to run XYZ on my phone but I understand that I can't because that's the trade-off I made in exchange for a nice app store with no viruses in it," they say, "What the hell! I should be able to run XYZ if I want, it's my phone!"


What does general purpose mean? It means I get to do what I want.

Anyone who complains that they can't do something that Apple won't let them is a candidate. Anybody who jailbreaks is definitely one. There are plenty.


It's not hard to "get", in that their policy decision is clear and understandable. However, are you saying that the iPhone is incapable of being a general purpose computer? That it would be difficult to have it more open/accessible for tuning? If so, you'd have to explain.

The point is that the iPhone / iOS devices could be more open (to user control and modification at the OS level, to third-party peripherals, to third-party software distribution and installation, etc)... but, due to fiat policy rather than any significant technical constraint, it is not.

Is that hard to "get"?


The part that is hard to "get" is the constant complaining. You could complain once and then let it go, but the favored technique seems to be to beat it to death, accompanied by mock surprise that not every person gives the same priority to things. It's boring, frankly.


I am pretty sure that this is the first time, or at most one of the few times, I personally have made any note of this general issue.

Are you perhaps displacing? Do you think it's actually fair to take the view that because other people may have made note of an issue repeatedly in the past, it is therefore to be frowned upon if any new person also makes note of it from their own perspective? I guess you'd also tell Alan Kay, who made the actual comment under original discussion here, to muzzle it because his thought is boring? In any case, by your own logic, shouldn't you not have replied at all because no doubt many, many other people have replied to this issue as you are here, given that it is apparently such well-trod ground? Aren't you then being hypocritical?

And do you actually think it's fair-minded to call what I specifically wrote a "complaint"?


Advocacy or evangelism, and repetition, go hand in hand.

Your complaint makes little sense, and has no foundation.


Sounds like you don't get it. And I'm not being facetious.

Well, maybe a little.

Just because something can be done doesn't necessarily mean it should be done, does it? This is not a commentary on the iPhone as a computer, but about products in general.

Lots of things have general purpose computer parts, but serve a very specific purpose. What did Steve Jobs pitch the iPhone as again? A phone, an iPod, and an internet communicator? I don't remember "personal computer" being one of those things.

So yes, this is the thing that Apple wants to make. And I don't think there's anything wrong with that.


how was the macintosh different?


I don't think he was saying that it was, only that at least the first time around the vision was romantic and idealised by comparison. The second made no bones about ditching that to chase $$$.


First Steve and Second Steve were the same person. And he was always about the money, for as long as he'd been involved with Apple. Otherwise he wouldn't have had the Macintosh require a special case-cracking tool made only available to authorized dealers and repair shops to open.

The Steve who was most passionate about starting a revolution with cheap, accessible computers was named Wozniak.


Early in the history of computing two camps formed: the 'AI' camp, which proclaimed that computers would soon be able to figure everything out on their own, and the 'mind amplification' camp, which held that computers would become an extension of the human mind. The whole personal computing movement, and thus early Apple, grew out of the second camp. Alan Kay is a great representative of the second camp.

Of course, Apple soon figured out that what sells is not mind amplification tools but 'appliances' and under second Steve they became really good at producing those.


Jobs explicitly compares the computer to the bicycle -- a machine which amplifies human action and capability and turns an ungainly biped into the most efficiently locomoting land animal (or any animal) ever.

There's much to be said for the concept.


For one thing, it was open. You could install your own software on it.


It was so open that your software could write all over the memory space of other programs: https://en.wikipedia.org/wiki/Mac_OS_memory_management

Then OS X introduced protected memory. Programs became a little bit safer.

Then in later versions, Application Sandboxing, code signing, etc.

Every step of the way programs can do less but are safer for the OS.


> Every step of the way programs can do less but are safer for the OS.

The important thing here is that it's a tradeoff, not a universal good. And a tradeoff that I personally don't like very much. Doing fun and interesting things keeps getting harder and harder because you have to jump through an increasing number of security hoops. For instance, reading and writing the memory of other programs enabled you to do tweaks that are now almost impossible to perform.

I understand the need for change, now that a computer is expected to be connected and running untrusted third-party code by default. But we've lost something with that change, and I wish for a way to get it back.


For a while at the beginning, you needed a Lisa to program one. Maybe things will change with iDevices and we won't need the mac anymore?


I don't mind needing a Mac, it's the required signing that bothers me.


you can run software on macOS that isn't signed by apple.


Not on the iPhone, and you know it.


You can, for zero cost, run arbitrary code, without a paid Apple signature, on your own personal iPhone. You just need a Mac.

What you can't do is distribute a binary for general use without paying Apple ($100/year) to sign your code.


Seriously? Explain that to the 500 person company that I'm writing an app for.

We have to pay $299/year to Apple just for the privilege of putting software on our own fucking devices. That's some bullshit right there.

The free option is worth exactly shit for such a large organization.

The $100/year is to get in the public app store where your app must obey every arbitrary rule that Apple invents. They actually have a clause that says they can kick you out of the store for any reason they want without justification.

Why does anyone defend that sort of behavior?


It's the cleanest mass market app store in existence.

Highest quality apps, least malware.


Good for Apple. None of that would change if there was a simple free option for sideloading though.


People wouldn't sideload malware?


Apple having a safe store wouldn't change.


But would they still have a safe platform?


You moved the goal posts.

But yes, it could still absolutely be safe. Why do you think otherwise? Does Apple hold a patent on publishing malware-free apps? No. They do not.


The intent is keeping the entire platform free from malware and junk.

Your app may be malware free, but it's pretty obvious there are many others which would not be.


Sure. Great. It's not an open platform though and you cannot run your own software on it if you're a business.

So that really sucks for a lot of businesses. Nothing you've said has countered that. I'm really happy that your concerns have been met. Mine and millions of others' haven't.


> What you can't do is distribute a binary for general use without paying Apple ($100/year) to sign your code.

Exactly. It's a locked down platform.


Interesting interview. My experience with startups has been different from what he talks about at the end. My experience has been of using a modified version of the scientific method to accelerate product development. There is less peer review, and a focus on keeping the subject matter (usually the customer) close. Alan makes it sound like startups come up with an idea and push it into the market as fast as possible.

His quote is “research means you can change your mind”. Well, I change my mind all the time.


His main point has always been that for significant breakthroughs, you start by searching for a problem that isn't even formulated yet. Solving the problems of your customers is the opposite of that. The impact of solving the "unformulated" problem may be so strong that it eliminates the need of solving the little particular problems. In that sense, office workers of the 1970s probably wanted better typewriters instead of GUIs.


Depends on your definition of "significant breakthrough". Technical breakthroughs are what Kay values, and that's OK. But to suggest they're the only kind of breakthroughs is myopic. Startups can, in their best incarnations, change human behavior and the way we interact with our world.


To call it purely technical is also myopic, IMO... The GUI/typewriter change the previous post talks about is a paradigm shift, much like in Kuhn's scientific-revolution literature. Some neat examples are paradigm shifts in, say, music production or other art, which are technical, yes, but achieve aesthetic ends and are about the human-tool relationship.


I'm struggling to find what's different in your description from Mr Kay's. I interpreted him as saying that start-ups focus on selling a thing to a customer, and your description backs this up rather than challenges it. I'll accept that start-ups (the idealised ones I read about, because I mostly work in agencies and more established companies) research their market, but I don't think this is the kind of research he's talking about. I think he's talking about mining the depths of a problem or of an unexplored facet of nature - and start-ups don't have the time or budget for that.


I would wager your experience is an outlier.


Orthogonal to the article, but: "He is widely considered one of the fathers of object-oriented programming, or OOP, which at its simplest is a “if-then” programming paradigm that links data to pinpointed procedures."

I'd like to think I have read a few definitions of OOP, but I have to say I don't recognize the above - what does "pinpointed procedures" mean? Also, "if-then" programming doesn't seem like a paradigm or a necessary/representative flow-control structure of OOP.

To be fair, I am not familiar with Smalltalk and perhaps this definition makes more sense in that context.


I think that the "if-then" part is a simplification of imperative paradigms.

I've never heard it described that way, but I assume the part about "pinpointed procedures" tied to data is a weird way of describing the way that methods relate to the data encapsulated by their owning object.

Ultimately, I think this is a case where someone familiar with the concept has explained it to someone who is not, who has then tried to paraphrase and, lacking any domain knowledge, summarised it in an unusual and vague way.


Actually that reads as a pretty good non-developer summary of how current object-oriented programming languages work. Give that to a business person and it would probably make sense to them.


Jonathan Rees has a "really interesting response" where he lists several definitions of OO [0]:

"Here is an a la carte menu of features or properties that are related to these terms; I have heard OO defined to be many different subsets of this list.

1. Encapsulation - the ability to syntactically hide the implementation of a type. E.g. in C or Pascal you always know whether something is a struct or an array, but in CLU and Java you can hide the difference.

2. Protection - the inability of the client of a type to detect its implementation. This guarantees that a behavior-preserving change to an implementation will not break its clients, and also makes sure that things like passwords don't leak out.

3. Ad hoc polymorphism - functions and data structures with parameters that can take on values of many different types.

4. Parametric polymorphism - functions and data structures that parameterize over arbitrary values (e.g. list of anything). ML and Lisp both have this. Java doesn't quite because of its non-Object types.

5. Everything is an object - all values are objects. True in Smalltalk (?) but not in Java (because of int and friends).

6. All you can do is send a message (AYCDISAM) = Actors model - there is no direct manipulation of objects, only communication with (or invocation of) them. The presence of fields in Java violates this.

7. Specification inheritance = subtyping - there are distinct types known to the language with the property that a value of one type is as good as a value of another for the purposes of type correctness. (E.g. Java interface inheritance.)

8. Implementation inheritance/reuse - having written one pile of code, a similar pile (e.g. a superset) can be generated in a controlled manner, i.e. the code doesn't have to be copied and edited. A limited and peculiar kind of abstraction. (E.g. Java class inheritance.)

9. Sum-of-product-of-function pattern - objects are (in effect) restricted to be functions that take as first argument a distinguished method key argument that is drawn from a finite set of simple names.

So OO is not a well defined concept. Some people (eg. Abelson and Sussman?) say Lisp is OO, by which they mean {3,4,5,7} (with the proviso that all types are in the programmers' heads). Java is supposed to be OO because of {1,2,3,7,8,9}. E is supposed to be more OO than Java because it has {1,2,3,4,5,7,9} and almost has 6; 8 (subclassing) is seen as antagonistic to E's goals and not necessary for OO.

The conventional Simula 67-like pattern of class and instance will get you {1,3,7,9}, and I think many people take this as a definition of OO.

Because OO is a moving target, OO zealots will choose some subset of this menu by whim and then use it to try to convince you that you are a loser."

[0] http://www.paulgraham.com/reesoo.html


This also left me scratching my head. I always think of "structured programming" when if statements are brought up, and that's purely procedural and significantly predates object orientation. I think it's a poor definition.


> a “if-then” programming paradigm that links data to pinpointed procedures.

this is an overly simplified and reduced way of describing the idea of encapsulation.


I think "if-then" is the layman's term for "imperative" in that context.


sure, it could be. OOP is basically an imperative style of programming. it defines procedures as methods of objects, rather than as stand-alone functions. in any case it's the awkward use of language by a journalist without deep subject matter expertise.


> OOP is basically an imperative style of programming.

lol you would have been flamed to a crisp for saying that in the 90s.


maybe I would have been, but now it's 20 years later and people actually have done some OOP and they know what it is really like in practice, and are no longer just reciting the bullet points from a slide show they saw at a conference.

what are the other paradigms we usually contrast with imperative though?

Declarative style can be done with OOP but it is still implemented in an almost strictly imperative way internally within the classes implementing the interfaces. Even the gluing together of interfaces is usually done in a more imperative than declarative style with OOP. The ideal for OOP is to have declarative interfaces from top to bottom but we almost never achieve this goal in practice.

Functional style is basically not done with OOP. It's possible, and my first exposure to "objects" was in working with closures in Scheme. However, in mainstream OOP functional style is only used as part of a mixed paradigm approach. We see a lot of this in languages like Python, Ruby, and Javascript.

Aspect-oriented programming? orthogonal to the question of imperative vs. not. This style is supported naturally in almost any language that offers first class introspection of its runtime but is only popular in niche use cases.

The actor model (i.e. what Erlang does)? A subset of functional style that is designed for highly concurrent systems. Orthogonal to imperative vs. non-imperative. Each actor within a system is probably an imperatively coded procedure. The system as a whole is not.

Why am I bothering to point this stuff out? Because I want to get past the kind of shallow and limiting view that the label "imperative style" matters.


  >Functional style is basically not done with OOP.
Check out Smalltalk, it's actually more derived from Lisp than from Simula.

  >The actor model (i.e. what Erlang does)? 
Smalltalk matches the actor model. A lot of people compare Erlang and Smalltalk, actually.


If you skipped the linked PDF with his tribute to ARPA/PARC, read it. Here are some quotes that I just put in my quote file:

>"A fish on land still waves its fins, but the results are qualitatively different when the fish is put in its most suitable watery environment."

>"Because of the normal distribution of talents and drive in the world, a depressingly large percentage of organizational processes have been designed to deal with people of moderate ability, motivation, and trust. We can easily see this in most walks of life today, but also astoundingly in corporate, university, and government research. ARPA/PARC had two main thresholds: self-motivation and ability. They cultivated people who "had to do, paid or not" and "whose doings were likely to be highly interesting and important".

>"Out of control" because artists have to do what they have to do. "Extremely productive" because a great vision acts like a magnetic field from the future that aligns all the little iron particle artists to point to “North” without having to see it. They then make their own paths to the future."

>"Unless I'm badly mistaken, in most processes today—and sadly in most important areas of technology research—the administrators seem to prefer to be completely in control of mediocre processes to being "out of control" with superproductive processes. They are trying to "avoid failure" rather than trying to "capture the heavens".


Although a fish on land is what eventually led to the evolution of all land and air based lifeforms :)


Wow, this guy is humble:

I’m not interested in being remembered — but I would like to have the ideas, visions, goals, and values of my whole research community not just remembered but understood and heeded.


From listening to his talks and interviews, I see Kay as a true renaissance person. Close enough to the industry to know its inherent illnesses, distant enough to not be indoctrinated by contemporary BS. The absolute opposite of serial bullshitters like Martin Fowler, for example.

I recommend his talk "Normal Considered Harmful".


I worked with Alan Kay's VPRI from about 2003-2007 and was fortunate enough to hang out with him in several contexts. He cares most deeply about education and developing higher order thought as a means to becoming better individuals and society. He wanted computers to be a thing children learn from in a very constructive way. He has no ego about himself or his contributions to computer science, but only wishes people would value the thoughtful exploration of learning and building learning environments his research team has worked on.


Kay compares the iPhone, et al to "selling sugar water to children" in an attempt to relegate them to "consumer gadget" status... compared to what, enterprise gadgets?


No, compared to the early vision (1950s/60s) of computers as metatools that humans can use to augment their cognition. The current crop of mobile devices (and even most desktop devices) is nowhere near that glorious goal. See his OOPSLA talk "The computer revolution hasn't happened yet" - https://www.youtube.com/watch?v=oKg1hTOQXoY


That's a bit harsh. The iPhone and its ilk have clearly done both: revolutionised information and connectivity, as you say, AND introduced entire generations to rampant consumerism.

Whether or not one outweighs the other, or whether the latter is all that bad, is a different discussion. But just because you have a different perspective than he does is no reason to attack him for it.


He sounds like a grumpy old man mixed with a bit of hipster. If you can dismiss the mobile revolution as selling "sugar water to children" you're wearing blindfolds. Look at adoption rates of these devices all around the world - it's unmatched in human history. Look how they've given access to healthcare and finance in poor countries.

And he calls it "sugar water". Baffling.


He didn't dismiss the mobile revolution, he dismissed consumer Apple products. Not quite the same thing, unless you think the impoverished masses were uplifted by itunes.


'Sugar water' here means consume & consume, instead of creating, learning & collaborating


Funny thing about his "real objects" answer is that it sounds very close to what unix shell scripting does. Shell scripting that the web kiddies turn their nose up at.


I think it's fairer to compare his "real objects" to actors in the actor model. Live processes with state that can spin off new processes, send and receive messages, and change state based on the messages they receive.

Shell scripting is a subset of this with short-lived processes typically communicating sequentially via a text-based messaging system (particularly line-oriented, text-based, moderately structured data).

Long-running unix processes communicating with each other or with external processes via some other mechanism is also an example of the actor model, and consequently can each be conceived of as "real objects". But they aren't typically modeled this way or constructed around this concept, so it's more a post hoc description rather than a deliberate consequence of their design.

For "web kiddies", they deal with multiple objects all the time. Their database is an object (or collection of objects), it responds to queries by returning information, changing information, storing information. Their web server is an object responding to HTTP connections, generating session objects, which deal directly with each browser instance viewing the web content. Again, not typically modeled in this way, but it's the actor model at work.

When software is designed in this way, IMHO, it comes out better in the end. Though it's perhaps not as performant, so sometimes we have to take those designs, drill down a bit, and end up with something slightly different from the high-level model.


This is the first interview I've seen with Alan Kay where he discusses some of his thoughts on AI.


Was it just me, or did Alan say the brain is magic? It's arrogant of him to believe he understands the brain well enough to declare that, because he doesn't understand it, it must be magic. I so love Minsky. Long live Marvin Minsky.



