Hacker News | kevindamm's comments

And Python didn't get it right the first time either. It wasn't until Python 2.3, when method resolution order was decided by C3 linearization, that inheritance in Python became sane.

http://mail.python.org/pipermail/python-dev/2002-October/029...
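A minimal diamond-inheritance sketch (hypothetical classes, not from the linked thread) shows what C3 linearization actually decides:

```python
# Diamond inheritance: C3 linearization gives one consistent method
# resolution order (new-style classes in Python 2.3+, all classes in 3).
class A:
    def who(self):
        return "A"

class B(A):
    def who(self):
        return "B"

class C(A):
    def who(self):
        return "C"

class D(B, C):
    pass

# C3 puts D before B before C before A, and A only once:
print([cls.__name__ for cls in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']
print(D().who())  # 'B' (B precedes C in the MRO)
```

Before C3, the old depth-first scheme could visit A before C, which is exactly the kind of surprise the linked thread was fixing.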


Inheritance being "sane" in Python is a red herring that many smart people have fallen for (e.g. https://www.youtube.com/watch?v=EiOglTERPEo). It's like saying that building a castle with sand is a bad idea because, first, it's going to be very difficult to extract the pebbles (the technical difficulty), and also because sand has generally been found to be a complicated and tedious material to work with and maintain. Then someone discovers a way to extract the pebbles. Now we have a whole bunch of castles sprouting up that are really difficult to maintain.


I still have my Newton but I wasn't, nor am I, elite.

I didn't store recipes on it, though.


I would love to see a new Newton with the same spirit of innovation but current tech. Current phones are so boring. No innovation, just slow evolution.


It really was way ahead of its time. I remember the handwriting recognition being excellent for its day, too. Meanwhile, Palm forced its users to write each letter one at a time in a tiny box, and required a specific stroke sequence for each letter too.

Newton had a modem module you could plug in, and third parties had written web browsers for it; it was basically the first smartphone, just without the phone.

Trying to imagine that level of innovation, but starting from present day tech, is very interesting.


I had the MessagePad 100 and a MessagePad 120. My handwriting improved, and its recognition also improved. It was brilliant. I stored shopping lists and recipes on it. Although a lot of fun was made of the handwriting recognition, it was surprisingly good, and it got better with use.


We're told not to feed the wildlife at parks and beaches because of the dangers when they become dependent on visitors for their food source. It changes their natural behavior to the extent that it becomes difficult to revert to natural food sources in the absence of visitors.

(there are other behavioral and disease-related dangers but they're not as appropriate to this metaphor)

I think the more alarmed voices in this comment thread are not reacting to the change or "exponential progress" but are instead concerned about the impact of becoming reliant on something else to do our remembering.

This last part is anecdata (but no worse than the survey data in TFA), I think smartphone users have not really lost the ability to memorize, in general, but that the things being memorized are different. If the memory test (mentioned in a cousin-comment) had a set of 20 memes instead of 20 words, I expect most study participants would be a lot better at recall.

I suppose the question of "is this like junk food, though?" may be relevant.


Which is a real problem if a significant part of being a whale or an addict involves AI psychosis.


This was, I think, the greatest strength of MapReduce. If you could write a basic program, you could understand the map, combine, shuffle, and reduce operations. MR, Hadoop, etc. would take care of recovering from operational failures like disk or network outages through idempotent retries behind the scenes, and programmers could focus on how data was being transformed, joined, serialized, and so on.
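As a toy sketch (mine, not the actual MR or Hadoop API), the stages map onto ordinary functions; word count is the classic example:

```python
from collections import defaultdict

# Toy single-machine word count showing the map/shuffle/reduce stages.
# A real MapReduce run distributes each stage across many workers.

def map_phase(document):
    # map: emit (key, value) pairs from each input record
    return [(word, 1) for word in document.split()]

def shuffle_phase(pairs):
    # shuffle: group all values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: combine each key's values into a final result
    return {key: sum(values) for key, values in groups.items()}

docs = ["the cat sat", "the cat ran"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle_phase(pairs))
print(counts)  # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```

The framework's job was everything between these functions: partitioning, retrying failed workers, and moving the grouped data around.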

To your point, we also didn't need a new language to adopt this paradigm. A library and a running system were enough (though, semantically, it did offer unique language-like characteristics).

Sure, it's a bit antiquated now that we have more sophisticated iterations for the subdomains it was most commonly used for, but it hit a kind of sweet spot between parallelism utility and complexity of knowledge or reasoning required of its users.


If you count with each finger as a binary digit, you can count up to 15 on one hand!

255 if you use both hands!

More like 1023 if you also use your thumbs, but I prefer to use them as carry/overflow bits.
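The counts above are just the bit arithmetic: n binary digits cover 0 through 2^n - 1.

```python
# n binary digits can represent values 0 .. 2**n - 1
fingers_one_hand = 4     # fingers only, thumb excluded
fingers_both_hands = 8
digits_with_thumbs = 10  # all fingers and both thumbs

print(2**fingers_one_hand - 1)    # 15
print(2**fingers_both_hands - 1)  # 255
print(2**digits_with_thumbs - 1)  # 1023
```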


I trained myself to do this by default a very long time ago and I can't imagine counting any other way.

It's so natural and useful, and it lends itself well to certain numerical tricks. We should be teaching binary to children explicitly, and earlier.


It's never too late to learn queueing theory

...because the typical setup assumes λ < μ (arrivals slower than service), so all arriving jobs eventually get serviced.

I think there's a lot of unmet potential in design of interfaces for pipelines and services that really gets at the higher level you mention. There are some universal laws, and some differences between practice and theory.
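As a sketch of one of those universal laws: in the textbook M/M/1 model (one server, Poisson arrivals at rate λ, exponential service at rate μ), stability requires λ < μ, and waits blow up as utilization ρ = λ/μ approaches 1. The formulas are standard queueing theory; the function name is mine.

```python
def mm1_stats(lam, mu):
    """Steady-state M/M/1 metrics; requires lam < mu for stability."""
    if lam >= mu:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = lam / mu        # server utilization
    L = rho / (1 - rho)   # mean number in system
    W = 1 / (mu - lam)    # mean time in system (Little's law: L = lam * W)
    return rho, L, W

# Going from 50% to 90% utilization makes the mean wait 5x longer:
print(mm1_stats(5, 10))  # rho=0.5, L=1,   W=0.2
print(mm1_stats(9, 10))  # rho=0.9, L~9.0, W=1.0
```

That nonlinearity near ρ = 1 is the part that theory predicts and practice keeps rediscovering the hard way.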


I remember the 3D glasses that you could plug into the Sega Master System in the mid-80s. They took what would be interlaced frames and rendered them to different eyes instead (which made the version getting shown on the connected TV pretty trippy too).

And then there was the time travel arcade game (also by Sega) that used a kind of Pepper's Ghost effect to give the appearance of 3D without glasses. That was in the early 90s.

I think the idea of 3D displays keeps resurfacing because there's always a chance that the tech has caught up to people's dreams, and VR displays sure have brought the latency down a lot but even the lightest headsets are still pretty uncomfortable after extended use. Maybe in another few generations... but it will still feel limiting until we have holodeck-style environments IMO.


I wasn't aware of all of those, will check them out - thanks for sharing!

Yes, I believe you are right that the tech is catching up with concepts that seemed futuristic in the past. For example, today's hardware supports much more than it could, say, 5-10 years ago.

Our hypothesis is that the current solutions out there still require the consumer to buy something, wear something, install something, etc., while we want to build something that becomes instantly accessible across billions of devices without any friction for the actual consumer.


Something I haven't seen mentioned in this thread or TFA is just how high corporate taxes were (and even personal investment taxes) in the 50s and 60s, and this influenced spending on R&D immensely because that investment wasn't considered taxable income. Tax rates were over 50% for much of the era of Bell Labs and Xerox PARC.


The first time I learned it was from a book by LaMothe in the 90s. It starts with your demonstration of 3D matrix transforms, then goes "ha! gimbal lock," then shows 4D transforms and the extension to projection transforms. From there you just have an abstraction of your world coordinate transform and your camera transform(s), and most everything else becomes vectors. I think it's probably the best way to teach it, with some 2D work leading into it as you suggest. It also sets up well for how most modern game dev platforms deal with coordinates.
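The 4D (homogeneous) trick is that rotation and translation both become one 4×4 matrix, so they compose by plain matrix multiplication. A minimal sketch in plain Python (helper names mine):

```python
import math

# Homogeneous 4x4 transforms: rotation and translation compose by
# matrix product, so world and camera transforms stack cleanly.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotation_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def apply(m, p):
    x, y, z = p
    v = [x, y, z, 1]  # homogeneous coordinate w = 1
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

# Rotate 90 degrees about z, THEN translate by (10, 0, 0),
# all in one combined matrix:
world = matmul(translation(10, 0, 0), rotation_z(math.pi / 2))
print(apply(world, (1, 0, 0)))  # approximately (10.0, 1.0, 0.0)
```

A plain 3×3 rotation matrix can't absorb the translation; that's the whole reason for the extra dimension.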


I think OpenGL used all those 2D, 3D, and 4D critters at the API level. It must be very hardware-friendly to reduce your pipeline to matrix products. Your scene graph (a tree) is just this: you attach relative rotations and translations to graph nodes. You push your meshes (streams of triangles) at tree nodes, and the composition of relative transforms up to the root is a matrix product (or was it the inverse?) that transforms the meshes going into the pipeline. For instance, character skeletons are scene subgraphs: bones have translations, articulations have rotations. That's why it is so convenient to have rotations and translations in a common representation, and a linear one (a 4D matrix) is super. All this excluding materials, textures, and so on, I mean.
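That bone-chain composition can be sketched in 2D with complex numbers standing in for the rotation-plus-translation matrices (toy forward kinematics, names mine):

```python
import cmath
import math

# Forward kinematics along a bone chain: each articulation contributes
# a rotation, each bone a translation. Accumulating them from root to
# tip is the same "compose relative transforms down the tree" idea,
# here in 2D with complex numbers instead of 4x4 matrices.

def chain_endpoint(angles, lengths):
    pos, heading = 0 + 0j, 0.0
    for angle, length in zip(angles, lengths):
        heading += angle                    # accumulate joint rotation
        pos += cmath.rect(length, heading)  # translate along the bone
    return pos.real, pos.imag

# Two bones of length 1; the second joint is bent 90 degrees:
x, y = chain_endpoint([0.0, math.pi / 2], [1.0, 1.0])
print(round(x, 6), round(y, 6))  # 1.0 1.0
```

In 3D you lose this shortcut (complex numbers only rotate in a plane), which is exactly why the common linear 4×4 representation earns its keep.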


Tricks of the * Game Programming Gurus :)

