That’s as far as I’ve gotten in the video so far but I find it kind of haunting that I already know what he’s talking about (brute force, simulated annealing, genetic algorithms, and other simple formulas that evolve a solution) and that these all run horribly on current hardware but are trivial to run on highly parallelized CPUs like the kind that can be built with FPGAs. They also would map quite well to existing computer networks, a bit like SETI@home.
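For what it's worth, those "simple formulas that evolve a solution" really are tiny and embarrassingly parallel; here's a minimal simulated-annealing loop as a sketch (the toy cost function, cooling schedule, and parameters are all invented for illustration):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=10_000):
    """Minimize `cost`, occasionally accepting worse neighbors so the
    search can escape local minima; acceptance shrinks as t cools."""
    x, cx = x0, cost(x0)
    best, best_c = x, cx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        cy = cost(y)
        # Always accept improvements; accept regressions with prob e^(-delta/t)
        if cy < cx or random.random() < math.exp((cx - cy) / t):
            x, cx = y, cy
            if cx < best_c:
                best, best_c = x, cx
        t *= cooling  # cool down
    return best

# Toy problem: a bumpy 1-D function with several local minima
random.seed(0)
f = lambda x: x * x + 10 * math.sin(x)
result = simulated_annealing(f, lambda x: x + random.uniform(-1, 1), x0=5.0)
```

Each iteration is independent of every other candidate solution, which is why this kind of search farms out so naturally to many cheap parallel cores, SETI@home-style.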
Dunno about anyone else, but nearly the entirety of what I do is a waste of time, and I know it, deep down inside. That’s been a big factor in bouts with depression over the years and things like imposter syndrome. I really quite despise the fact that I’m more of a translator between the spoken word and computer code than an engineer, architect or even craftsman. Unfortunately I follow the money for survival, but if it weren’t for that, I really feel that I could be doing world-class work for what basically amounts to room and board. I wonder if anyone here feels the same…
It doesn't have to be like this. We have the brain power. But I feel we squander it on making silly websites/apps when we could be making history.
>I follow the money for survival
Maybe we can have a system where people can survive without money?
Do you think Alan Kay was incentivised by money to do all the stuff he talked about in this video?
a. It's probably too soon.
b. It often takes boring work (and money) to turn pure ideas into successful practical applications.
Take a look at this thread -- scroll all the way to the bottom. What is the incentive for all of that?
Innovate or die (literally), doesn't seem like the wisest philosophy to base life on this planet around.
I suspect a similar calculus may have inspired Joey Hess's retreat to a farm.
I can think of others: jwz, Bret Glass, Cringely.
A sailing ship travels for weeks or months, has no automation whatsoever, and has to be maintained during the journey.
Similarly, a submarine is meant to travel for long periods of time, during which it has to be operated and maintained.
Both are designed to be self-sufficient, so they have to provide facilities for the crew, and the crew grows to cover required maintenance, medical duties, and battle stations.
The airplane travels for 16 hours tops and is supported on the ground by a sizable crew of engineers, ground control and, for passenger planes, cooks and cleaning personnel.
If you want a sense of how many people are behind an airline flight, watch "Air City", which is a Korean drama series about a woman operations manager for Inchon Airport. The Japanese series "Tokyo Airport" is similar. Both were made with extensive airport cooperation, and you get to see all the details. (This, by the way, is typical of Asian dramas about work-related topics. They show how work gets done. US entertainment seldom does that, except for police procedurals.)
None of this seems to be relevant to computing.
What are engineers, architects, and craftsmen but translators of spoken word (or ideas) to various media or physical representations?
One could argue that the greatest achievements of history were essentially composed of seemingly mundane tasks.
We'd all love to feel like we're doing more than that, perhaps by creating intelligence in some form. Maybe as software engineers we can create as you say "simple formulas that evolve a solution" - and those kinds of algorithms/solutions seem to be long overdue - I think we're definitely headed in that direction. But at some point, even those solutions may not be entirely satisfying to the creative mind - after all, once we teach computers to arrive at solutions themselves, can we really take credit for the solutions that are found?
If one is after lasting personal fulfillment, maybe it'd be more effective to apply his or her skills (whatever they may be) more directly to the benefit and wellbeing of other people.
Engineers, architects and craftsmen don't work with anywhere near as many "pieces" (if any) that behave in such erratic ways, nor waste anywhere near as much time on them.
Budget is the first constraint, which limits the available materials and determines material quality.
Components and building codes are a second. While the 'fun' part of building design might be the exteriors and arrangements, a tremendous amount of the effort goes into binding these 'physical APIs' in a standards-compliant manner.
Finally, there is physics. This brings with it the ultimate and unassailable set of constraints.
I know I'm being a bit silly, but my point is that all crafts have frustrations and constraints, and all professions have a high (and often hidden) tedium factor resulting from imperfect starting conditions. These do not actually constrain their ability to create works of physical, intellectual or aesthetic transcendence.
We spend too much time getting these tiny pieces to connect correctly, and more often than not they're misaligned slightly or in ways we can't see yet (until it blows up). It's way too much micro for not enough macro. And there isn't even a way to "sketch" as I can with drawing - you know, loosely define the shape of something before starting to fill in with the details. I keep trying to think of a visual way to "sketch" a program so that it would at least mock-run at first, and then I could fill in the function definitions. I don't mean TDD.
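One low-tech approximation of that kind of "sketch" is to write only the program's shape first, with every unfinished function stubbed out to return a plausible canned value, so the whole thing mock-runs end to end before any real logic exists. A toy illustration (all names invented):

```python
def fetch_orders(customer_id):
    # Stub: canned data until the real data layer is "inked in"
    return [{"id": 1, "total": 20.0}, {"id": 2, "total": 15.0}]

def apply_discount(order):
    # Stub: no-op until the pricing rules are written
    return order

def summarize(orders):
    # This part has real logic already
    return sum(o["total"] for o in orders)

def main(customer_id):
    orders = [apply_discount(o) for o in fetch_orders(customer_id)]
    return summarize(orders)

# The sketch already runs end to end:
print(main(42))  # prints 35.0
```

It's still text rather than the visual sketching described above, but it does let you see the overall shape moving before committing to any detail.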
It's easier for most people to reason about a simple task. Not everyone can also see how that simple task will interact with the larger whole. I've seen plenty of developers afraid to introduce something as fundamental as a message queue, which is external to a core application. It breaks away from what they can conceptualize in a standing application.
Different people have differing abilities, talent and skills regarding thought and structure. Less than one percent of developers are above average through most of a "full stack" for any given application of medium to high complexity (percent pulled out of my rear). It doesn't mean the rest can't contribute. But it does mean you need to orchestrate work in ways that best utilize those you have available.
Which is the opposite of sketching, which means (in drawing) to loosely define the basic shapes of something before accurately describing things with "enough definition".
We're not talking about the same thing at all and I can't relate to the rest of your post.
I'm talking about how there's no way to sketch a program that mock-runs and then fill in the details in a piecemeal fashion.
I think it even makes sense at the start to create incomplete objects, run the mock, and continue. Also, the authors of the framework claim that you work very close to the domain language, so you can discuss code with domain experts; in that sense you're working relatively close to the spec level.
The learning curve is a bit high, though. I think it could have been much more successful if it had used GUI tools to design domain objects, and maybe a higher-level language.
It's called an existential crisis. It's not a special Silicon Valley angst thing; everyone gets them. Try not to let it get to you too much.
So much of it comes down to focusing on everything but requirements. If someone asks for directions to the wrong destination, it doesn't matter how exceptional of a map you draw.
And, despite the importance of requirements, we often trust the requirements gathering tasks to non-technical people. Worse, in large corporate environments, the tasks are often foisted on uncommonly incompetent individuals. When I hear about "Do you want to ...?" popups that are supposed to only have a "Yes" button, I know something is fundamentally broken.
I haven't escaped the corporate rat race yet, but I'm taking baby-steps toward a more self-directed work-life. Here's hoping that such a move helps.
You are hardly alone. And while I suspect the PoV is prevalent in IT, it's hardly specific to it.
There's a guy who wrote a book about this so long ago that we don't remember his name. He's known only as "the preacher", sometimes as "the teacher" or "the gatherer".
It's Ecclesiastes, in the Old Testament of the Bible, and it begins with "Vanity of vanities, saith the Preacher, vanity of vanities; all is vanity."
I'm quite an atheist, not religious, and generally not given to quoting scripture. I find this particular book exceptionally insightful.
Much of what you describe is in fact reality for most people -- we "follow the money for survival", but the purpose of it all quite frequently eludes me.
I started asking myself two or three years back, "what are the really big problems?", largely because I saw that in the areas where they _might_ be addressed, both in Silicon Valley and in the not-for-profit world, there was little real focus on them. I've pursued that question to a series of others (you can find my list elsewhere, though I encourage you to come up with your own set -- I'll tip you to mine if requested).
My interest initially (and still principally) has been focusing on asking the right questions -- which I feel a great many disciplines and ideologies have not been doing. Having settled on at least a few of the major ones, I'm exploring their dynamics and trying to come to answers, though those are even harder.
It's been an interesting inquiry, and it's illuminated to me at least why the problem exists -- that Life (and its activities) seeks out and exploits entropic gradients, reduces them, and either continues to something else or dies out.
You'll find that concept in the Darwin-Lotka Energy Law, named by ecologist Howard Odum in the 1970s, and more recently in the work of UK physicist Jeremy England. Both conclude that evolution is actually a process which maximizes net power throughput -- a case which if true explains a great many paradoxes of human existence.
But it does raise the question of what humans do after we've exploited the current set of entropy gradients -- fossil fuels, mineral resources, topsoil and freshwater resources. A question Odum posed in the 1970s and which I arrived at independently in the late 1980s (I found Odum's version, in Environment, Power, and Society only in the past year).
But yes, it's a very curious situation.
At our current level of technology and business sophistication, that strikes me as a hard enough problem to justify a career with good compensation.
Any patterns you are noticing in the translation process?
It’s hard for me to single out specific patterns because it feels like the things I’m doing today (taking an idea from a written description, PowerPoint presentation or flowchart and converting it to run in an operating system with a C-based language) are the same things I was doing 25 years ago. It might be easier to identify changes.
That affects everyone, because when the standards are perceived as being so poor, they are ignored. So now we have the iOS/Android mobile bubble employing millions of software engineers, when really these devices should have just been browsers on the existing internet. There was no need to resurrect Objective-C and Java and write native apps. That’s a pretty strong statement I realize, but I started programming when an 8 MHz 68000 was considered a fast CPU, and even then, graphics processing totally dominated usage (like 95% of a game’s slowness was caused by rasterization). So today when most of that has been offloaded to the GPU, it just doesn’t make sense to still be using low level languages. Even if we used Python for everything, it would still be a couple of orders of magnitude faster than the speed we accepted in the 80s. But everything is still slow, slower in fact than what I grew up with, because of internet lag and megabytes of bloatware.
I started writing a big rant about it being unconscionable that nearly everyone is not only technologically illiterate, but that their devices are unprogrammable. But then I realized that that’s not quite accurate, because apps do a lot of computation for people without having to be programmed, which is great. I think what makes me uncomfortable about the app ecosystem is that it’s all cookie cutter. People are disenfranchised from their own computing because they don’t know that they could have the full control of it that we take for granted. They are happy with paying a dollar for a reminder app but don’t realize that a properly programmed cluster of their device and all the devices around them could form a distributed agent that could make writing a reminder superfluous. I think in terms of teraflops and petaflops but the mainstream is happy with megaflops (not even gigaflops!)
It’s not that I don’t think regular busy people can reason about this stuff. It’s more that there are systemic barriers in place preventing them from doing so. A simple example is that I grew up with a program called MacroMaker. It let me record my actions and play them back to control apps. So I was introduced to programming before I even knew what it was. Where is that today, and why isn’t that one of the first things they show people when they buy a computer? Boggles my mind.
Something similar to that, that would allow the mainstream to communicate with developers is Behavior-Driven Development (BDD). The gist of it is that the client writes the business logic rather than the developer, in a human-readable way. It turns out that their business logic description takes the same form as the unit tests we’ve been writing all these years, we just didn’t know it. It turns out to be fairly easy to build a static mockup with BDD/TDD using hardcoded values and then fill in the back end with a programming language to make an app that can run dynamically. Here is an example for Ruby called Cucumber:
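To make the idea concrete without pulling in Cucumber itself, here's a hand-rolled Python sketch of the Gherkin pattern: the "business" side writes Given/When/Then steps in prose, and each line is matched by regex to a small step definition. This is only an illustration of the concept, not the Cucumber API, and all the step names are invented:

```python
import re

# Step definitions: (pattern, function) pairs, as in Cucumber's step files
STEPS = []

def step(pattern):
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"a cart containing (\d+) items?")
def given_cart(ctx, n):
    ctx["cart"] = int(n)

@step(r"I add (\d+) items?")
def when_add(ctx, n):
    ctx["cart"] += int(n)

@step(r"the cart should contain (\d+) items?")
def then_cart(ctx, n):
    assert ctx["cart"] == int(n), ctx["cart"]

def run_scenario(lines):
    ctx = {}
    for line in lines:
        # Strip the Given/When/Then keyword, then find a matching step
        body = line.split(None, 1)[1]
        for pattern, fn in STEPS:
            m = pattern.fullmatch(body)
            if m:
                fn(ctx, *m.groups())
                break
        else:
            raise NotImplementedError(line)
    return ctx

# The "spec" reads like the client's business language:
scenario = [
    "Given a cart containing 2 items",
    "When I add 3 items",
    "Then the cart should contain 5 items",
]
run_scenario(scenario)
```

The scenario doubles as both the specification and the test: the client can read (or write) the prose, while the developer fills in the step definitions behind it.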
How this relates back to my parent post is that a supercomputer today can almost fill in the back end if you provide it with the declarative BDD-oriented business logic written in Gherkin. That means that a distributed or cloud app written in something like Go would give the average person enough computing power that they could be writing their apps in business logic, making many of our jobs obsolete (or at the very least letting us move on to real problems). But where’s the money in that?
It's even slower when you write it in Python or Lua. I've built an app heavily relying on both on iOS and Android and it's a pig. This was last year, too, and outsourcing most of the heavy lifting (via native APIs) out of the language in question.
"Low-level languages" (as if Java was "low level") exist because no, web browsers just aren't fast enough. There may be a day when that changes, but it's not today.
I think one reason for this is that devices became personalized. You don't often go check your email on someone else's computer. People don't need the portability of being able to open something anywhere, so the cloud lost some of its appeal. Now it's just being used to sync between your devices, instead of being used to turn every screen into a blank terminal.
* because code efficiency translates directly into battery life
* because downloading code costs bandwidth and battery life
* because html rendering engines were too slow and kludgy at the time
* because people and/or device vendors value a common look and feel on their system
* because not everyone wants to use google services/services on US soil
[Edit: on a minimal micro-scale of simplification, I just came across this article (and follow-up tutorial article) on how to just use npm and eschew grunt/gulp/whatnot in automating lint/test/concat/minify/process js/css/sass &c. Sort of apropos, although it's definitely in the category of "improving tactics when both strategy and logistics are broken": previous discussion on HN: https://news.ycombinator.com/item?id=8721078 (but read the article and follow-up -- he makes a better case than some of the comments indicate)]
Web ecosystems can still enforce common UI toolkits.
Gmail was just an example.
So you have a lot of people you could be selling to, and a technology that has conceptual elegance, but isn't strictly necessary. Wherever you hit the limits of a browser, you end up with one more reason to cut it off and go back to writing against lower level APIs.
In due course, the pendulum will swing back again. Easy is important too, especially if you're the "little guy" trying to break in. And we're getting closer to that with the onset of more and more development environments that aim to be platform-agnostic, if not platform-independent.
Meta: I'm glad I asked about the patterns now. I had doubts about the extent to which asking that question would add to this thread.
If the initial force is headed towards uselessness, software won't save it from its fate; it will in fact just get it there faster.
By the same token if you manage to write some software that is magnifying a force of good the results can be staggering.
Software allows us to accomplish more net effect in our lifetimes than was ever possible before. However if we spend our time writing glorified forms over and over again we aren't going to live up to that.
At the very minimum, the fruits of your labour are useful to your boss/customer. That is already a single positive impact in someone's life right there.
Most likely, your boss/customer then used your developments in their primary business, which, no doubt, affects even more people in a positive way.
I believe being in technology, we have the capability to positively affect many orders of magnitude more people, than in most other industries. So cheer up! ;)
People waste money all the time. Maybe the OP's boss paid him three years salary to do something the boss thought was important, but really just wasted time. And nobody knew it wasted time, because it was hard to measure.
A personal example: At a small marketing agency, my boss insisted we needed to offer social media marketing, since "everyone's on Facebook nowadays." Because we were posting on behalf of clients, we didn't have the expertise to write about the goings-on of their business, and often had no contacts in the client organization. We were forced to write pretty generic stuff ("happy Valentine's Day everybody!" or "check out this news article"). Worse, we had clients who shouldn't be on social media in the first place: plumbers, dentists, and the like. I have never seen anyone like their plumber on Facebook. So naturally, these pages were ghost towns.
Yet despite having zero likes and zero traffic, clients insisted on paying someone for social media (They too had read that "everyone's on facebook") and my boss never refused a paying customer.
Want to know how it feels getting paid day in and day out to make crap content that nobody will read? Feels bad.
Ok. So find where to do that, and do it. If you don't have children to support or other family to stick by, your only excuse is your fear of doing something too unconventional.
You want to do world-class work? Cool, let's talk about it. What's world-class to you? What would get you out of bed rather than give you an existential crisis?
If you're a Ruby person, maybe watch this version instead, since it's almost the same talk but for a Rails Conference, with a few references:
I've also always been a fan of Feynman's assertion that we haven't really understood something until it can be explained in a freshman lecture. At the end of his life he gave a now famous explanation of QED that required almost no mathematics and yet was in fact quite an accurate representation of the theory.
The problem here is, when two people disagree, they both think the problem is simple and they both think their answer is trivial. The thing is, they haven't solved the same problem.
In the standard formulation, where switching is the best strategy, Monty's actions are not random: He always knows which door has the good prize ("A new car!") and which doors have the bad prize ("A goat.") and he'll never pick the door with the good prize.
If you hear a statement of the problem, or perhaps a slight misstatement of the problem, and conclude that Monty is just as in the dark as you are on which door hides which prize and just happened to have picked one of the goat-concealing doors, then switching confers no advantage.
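The difference between the two readings is easy to check empirically. A quick simulation of the standard formulation, where Monty knowingly opens a goat door, shows switching winning about twice as often as staying:

```python
import random

def play(switch, trials=100_000, rng=random.Random(0)):
    """Fraction of games won under the standard Monty Hall rules."""
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # Monty always opens a goat door that isn't the player's pick
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

stay, swap = play(switch=False), play(switch=True)
# stay comes out near 1/3, swap near 2/3
```

If you instead let Monty open a random door (and discard the games where he accidentally reveals the car), both strategies come out at 1/2, which is exactly the misstated version of the problem.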
A large part of simplicity is knowing enough to state the problem accurately, which circles back to your paragraph about Feynman: Understanding his initial understanding of QED required understanding everything he knew which lead him to his original understanding of the problem QED solved; for his final formulation of QED, he understood the problems in terms of more fundamental concepts, and could therefore solve them using those same concepts.
Here's a nice example of this teaching: http://ocw.mit.edu/courses/electrical-engineering-and-comput...
You need probability in the explanation because of the two goats/one car choice in part one .. and here I am trying to explain it again. You already know how it works, but you don't see that your explanation of it doesn't explain it.
After they removed one answer, you have more information...
Because the choice isn't balanced...
If it was as simple as you suggest, it wouldn't make a good part of a game show. Everyone would see straight through it.
The problem is deceptive because it doesn't explicitly say they ALWAYS remove a goat.
The answer becomes obvious when you ask "There are three doors, one has a prize. Would you like to select one door or two doors?"
Edit: granted, this can still happen when self-medicating. The general case of doctors fixing the mistakes of doctors is different from what the parent suggests.
In a general sense that is true but in a more specific sense it is definitely possible to be handed a project without docs and horrible problems and it might as well be an organic problem. Even so, the simplest biology dwarfs the largest IT systems in complexity.
And also mercifully in continuous real-world testing over time with no Big Rewrite to be seen. While an individual organism might have unsolvable problems, nature doesn't have the same tolerance for utterly intractable systems that we sometimes see in the software community.
I was digging at the idea that someone would accept a claim on Faith, then demand evidence before giving up the idea, and how that is inconsistent. If someone can faith into an idea without evidence it should be equally easy to faith out of it with no evidence.
That it's not easy, is interesting.
Nothing -> belief in God, costs little and gains much.
Belief in God -> no belief, costs much and gains little.
Where by costs and gains, I include social acceptance, eternal life, favourable relations with a powerful being, forgiveness of sins, approved life habits, connections to history and social groups, etc.
It's not a Boolean. Or at least, from inside a mind it doesn't feel like a Boolean.
edit: I would also like to add, although I already said it, that I believe both of these opinions regarding a creator are acts of faith -- absolutely saying there is not one, and absolutely saying there is -- and if someone were to have made a statement as if a creator's existence were a sure thing, I would have called them out just the same.
edit: a thanks in advance to all you enlightened fervent atheists for the down votes.
Edited to be more honest.
edit: the original response claimed it took courage to doubt the existence of a creator. It has since been edited to claim it simply takes intellectual curiosity. I would argue that the same intellectual curiosity could end with someone believing in a singular creator of reality.
Possible, but not likely, given how many intellectually curious people have independently reinvented atheism but no religion has been independently reinvented.
edit: misunderstanding of mutually exclusive and added some more words.
This denies the existence of Ancient Greek atheists, and people who were skeptical of religion in general before the modern era.
I'm also not alone: My definition of the word seems to be the most common one among the current atheist movement.
I'm just saying you'd expect these doctor types to have the greater powers of analysis, but in my experience they just pretend harder.
[Edit to remove that off putting stuff]
Besides Nile, the OMeta parser is pretty cool too, imho.
Check it out? Where is the source code for the OS?
I haven't found it in the link you gave..
There's also a kind of arrogance when he shits on Intel and Dijkstra that I find off-putting.
Edit: And Kay is also ignoring the fact that every high school science curriculum covers the scientific method and includes hands-on experimentation.
You must've been in an elite high school.
> “Roman concrete is . . . considerably weaker than modern concretes. It’s approximately ten times weaker,” says Renato Perucchio, a mechanical engineer at the University of Rochester in New York. “What this material is assumed to have is phenomenal resistance over time.”
This is no coincidence.
"Silicon Valley will be in trouble when it runs out of Doug Engelbarts ideas" -- Alan Kay
Bret works for Alan now, as far as I know.
Plato was pretty cool, though, I agree.
Pretty much the point I was making some time back where I got in a tiff with a documentary creator for D/L'ing his video from Vimeo (its browser player has pants UI/UX in terms of scanning back and forth -- the site switches videos rather than advances/retreats through the actual video you're watching).
And then as now I maintain: users control presentation, once you've released your work you're letting your baby fly. Insisting that people use it only how you see fit is infantile itself, and ultimately futile.
(Fitting with my Vanity theme above).
Anyways, just a tip. It's saved me hours and keeps slow speakers from getting boring.
Very good point about the manufacturing of processors being backwards. Why _not_ put an interpreter inside the L1 cache that accommodates higher-level programming of the computer? Why are we still stuck with assembly as the interface to the CPU?
Unfortunately, it was slow, at 1 MIPS, and had an overly complex CPU that took 12 large boards. (It was late for that; by the time VAXen shipped in volume, the Motorola 68000 was out.) The CALL instruction was microcoded, and it took longer for it to do all the steps and register saving than a call written without using that instruction. Nobody ever added to the microcode to implement new features, except experimentally. DEC's own VMS OS used the rings of protection, but UNIX did not. One of the results of UNIX was the triumph of vanilla hardware. UNIX can't do much with exotic hardware, so there's not much point in building it.
There's a long history of exotic CPUs. Machines for FORTH, LISP, and BASIC (http://www.classiccmp.org/cini/pdf/NatSemi/INS8073_DataSheet...) have been built. None were very successful. NS's cheap BASIC chip was used for little control systems at the appliance level, and is probably the most successful one.
There are things we could do in hardware. Better support for domain crossing (not just user to kernel, but user to middleware). Standardized I/O channels with memory protection between device and memory. Integer overflow detection. Better atomic primitives for multiprocessors. But those features would just be added as new supported items for existing operating systems, not as an architectural base for new one.
(I'm writing this while listening to Alan Kay talk at great length. Someone needs to edit that video down to 20 minutes.)
Java runs directly on hardware: http://en.wikipedia.org/wiki/Java_processor , http://en.wikipedia.org/wiki/Java_Card
Pascal MicroEngine: http://en.wikipedia.org/wiki/Pascal_MicroEngine
And some more: http://en.wikipedia.org/wiki/Category:High-level_language_co...
And the x86 instruction set is CISC, but (modern) x86 architecture is RISC (inside) - so your modern Intel/AMD CPU already does it. http://stackoverflow.com/questions/13071221/is-x86-risc-or-c...
Backward compatibility aside, we could have CPUs that support ANSI C or the upcoming Rust directly in hardware, instead of assembly.
The fact that we had an architecture running entirely on ALGOL, with no assembly, as far back as 54 years ago is quite astonishing, and then depressing.
An address on the Burroughs machines is a path -- something like "Process 22 / function 15 / array 3 / offset 14". The actual memory address is no more visible to the program than an actual disk address is visible in a modern file system. Each of those elements is pageable, and arrays can be grown. The OS controls the memory mapping tree.
One of the things that killed those machines was the rise of C and UNIX. C and UNIX assume a flat address space. The Burroughs machines don't support C-style pointers.
What is this "pseudotime"? Google turns up nothing. Am I mis-hearing what he said?
To his credit, he handled the questioner's faux pas much more gracefully than how RMS typically responds to questions about Linux and Open Source. ;)
Questioner: So you came up with the DynaPad --
Alan Kay: DynaBook.
Yes, I'm sorry. Which is mostly -- you know, we've got iPads and all these tablet computers now.
But does it tick you off that we can't even run Squeak on it now?
Alan Kay: Well, you can...
Q: Yea, but you've got to pay Apple $100 bucks just to get a developer's license.
Alan Kay: Well, there's a variety of things.
See, I'll tell you what does tick me off, though.
Basically two things.
The number one thing is, yeah, you can run Squeak, and you can run the eToys version of Squeak on it, so children can do things.
But Apple absolutely forbids any child from putting a creation of theirs to the internet, and forbids any other child in the world from downloading that creation.
That couldn't be any more anti-personal-computing if you tried.
That's what ticks me off.
Then the lesser thing is that the user interface on the iPad is so bad.
Because they went for the lowest common denominator.
I actually have a nice slide for that, which shows a two-year-old kid using an iPad, and an 85-year-old lady using an iPad. And then the next thing shows both of them in walkers.
Because that's what Apple has catered to: they've catered to the absolute extreme.
But in between, people, you know, when you're two or three, you start using crayons, you start using tools.
And yeah, you can buy a capacitive pen for the iPad, but where do you put it?
So there's no place on the iPad for putting that capacitive pen.
So Apple, in spite of the fact of making a pretty good touch sensitive surface, absolutely has no thought of selling to anybody who wants to learn something on it.
And again, who cares?
There's nothing wrong with having something that is brain dead, and only shows ordinary media.
The problem is that people don't know it's brain dead.
And so it's actually replacing computers that can actually do more for children.
And to me, that's anti-ethical.
My favorite story in the Bible is the one of Esau.
Esau came back from hunting, and his brother Joseph was cooking up a pot of soup.
And Esau said "I'm hungry, I'd like a cup of soup."
And Joseph said "Well, I'll give it to you for your birth right."
And Esau was hungry, so he said "OK".
Because we're constantly giving up what's most important just for mere convenience, and not realizing what the actual cost is.
So you could blame the schools.
I really blame Apple, because they know what they're doing.
And I blame the schools because they haven't taken the trouble to know what they're doing over the last 30 years.
But I blame Apple more for that.
I spent a lot of -- just to get things like Squeak running, and other systems like Scratch running on it, took many phone calls between me and Steve, before he died.
I spent -- you know, he and I used to talk on the phone about once a month, and I spent a long -- and it was clear that he was not in control of the company any more.
So he got one little lightning bolt down to allow people to put interpreters on, but not enough to allow interpretations to be shared over the internet.
So people do crazy things like attaching things into mail.
But that's not the same as finding something via search in a web browser.
So I think it's just completely messed up.
You know, it's the world that we're in.
It's a consumer world where the consumers are thought of as serfs, and only good enough to provide money.
Not good enough to learn anything worthwhile.
Can you use that specification to verify if a program does what it should? I know BDD taken to extremes kinda does that.
But could a specification be verified if it's non-contradictory and perhaps some other interesting things like where are its edges that it does not cover?
I'm not sure I have a problem with the behavior. The new links page cycles so fast that watching the whole video before upvoting would result in the post not making it to the front page.
We've removed "Alan Kay" from this one. (Nothing personal—we're fans.)
(But I didn't upvote, and your question is valid)