HiScore author here. HiScore allows domain experts to easily create and maintain scores. It is currently being used by a major environmental non-profit and by IES, a startup that assesses the safety and sustainability of fracking wells.
Seems strange to write an article complaining about the Harvard bubble when the only reason it's getting printed in the Paris fucking Review is that it's about Harvard in the first place. People have weird/terrible/difficult college experiences all the time, but the guy who went to Iowa State never gets a similar forum to talk about himself.
A lot of people at Harvard were uncomfortable there. I wasn't. I loved the place and made great friends. It was a safe place where I could challenge myself intellectually. And the plurality of Harvard students aren't wealthy legacies, they're striving upper-middle-class kids who worked their asses off in high school.
This author sounds like a poor fit for Harvard. Nothing wrong with that. But after two years he really should have looked at transferring. Sure it's "Harvard", but you can still do quite well at other schools like Berkeley (where you get to stay a year on campus at best, and then you're off to fend for yourself).
The author admitted not being ambitious, and it really felt like he just took what was handed to him rather than seeking out what would actually click with him.
"And the plurality of Harvard students aren't wealthy legacies, they're extremely lucky middle class kids."
Need I remind you that there are a ridiculous number of kids who take demanding courses in high school (even college courses), earn extremely high test scores, rack up loads of extracurriculars, and get rejected on the basis that Harvard has a quota for each school and a quota for ethnicities.
Otherwise, Ivy Leagues' demographics would probably turn into the fair-and-balanced UC schools ...
Uncle Tom's Cabin is an awful book. First off, it's boring and damn near unreadable (it was one of the only assigned books I never made it through in college). But in a larger sense, the slaves are "heroic" and "emotionally nuanced" only in the sense that HBS makes them fulfill a racial type: sympathetic, penitent, long-suffering Christians. They're treated more as people than as property, but more as caricatures than as people.
The interesting contradiction of UTC, to me, is that it had this enormous significance to history despite being terribly written. As a modern reader, I couldn't get any emotion about the book other than it being terrible. James Baldwin trashes the book brutally but fairly in his great essay "Everybody's Protest Novel": http://www.uhu.es/antonia.dominguez/semnorteamericana/protes...
My favorite Friedman-ism is this one:
"I had lunch with a group of professors at the Hong Kong University of Science and Technology, or HKUST, who told me that this year they will be offering some 50 full scholarships for graduate students in science and technology. Major U.S. universities are sharply cutting back."
As an AI researcher, I think obstacles like "not having your robot fall over all the damn time" are a little more immediate than robots having a nuanced understanding of ethics. I can understand why this stuff is fun to think about and debate, but it's just not relevant at all to where AI is going to be for the next 50 (or 100, or probably 200) years.
I really think the stability of your robot is a completely separate issue. George W. Bush and Barack Obama both use flying robots with missiles to hunt down and kill people they don't like. Don't you think that perhaps, as these flying robots gain more and more autonomy, that discussions of ethics are actually important, and important now? 50 years is a long, long time in computer science.
I'm surprised that you are so pessimistic about your research that you think ethics won't even be relevant in year 2205. Holy cow you must think AI is hard.
George W. Bush and Barack Obama both use flying robots with missiles to hunt down and kill people they don't like.
This is a very good point. It's always good to be reminded that we're already living in the future.
That said, I feel like aothman is discussing real artificial intelligence, that is, an entity capable of making a conscious decision that it wants to, in this case, fire the missiles. If I had to guess, if predator drones gain the ability to "decide" for themselves whether or not to fire their missiles, it will be built on a system of complex rules, and not because they're "intelligent". Potayto, Potahto? Maybe. I'm not an AI researcher and I don't even come close to understanding human intelligence, but I feel like even if it is just a complex system of rules, it's at a much deeper level than we'll be able to simulate soon.
> AI is going to be for the next 50 (or 100, or probably 200) years.
To be clear about that "probably 200", are you saying that you believe we'll need trillions of times the processing power of the human brain in order to crack how it works, or that you believe that we've nearly reached the end of increases in processing power, for at least the next 200 years?
If I had to guess, he's saying that at the current rate of software and research progress, we won't be able to cobble together the weak and specialized subsystems that we currently call "AI" into anything more interesting for quite some time. Not an altogether uncommon belief, esp. amongst those that specialize in robotics or other practical AI applications, because they know first-hand how hard it is to create humanlike behavior with any of the techniques we know of today.
But self improving AI is not remotely predictable based on our current progress, and really, it's not even the same field as what we call AI today: extrapolating our current progress to predict where we'll be in 50 years is like asking a bombmaker from 1935 to look at a log log plot of historical explosive power in bombs to try to predict what the maximum yield from a bomb in 1950 would be. It doesn't matter how slow the mainstream research is if someone finds a chain reaction to exploit, and it's impossible to predict when someone will successfully exploit that chain reaction.
IMO there's very good reason to believe that we're already deep into the "yellow zone" of danger here, where we have more than enough computational power to set off a self-improving chain reaction, though we don't actually know how to write that software. What we really have to worry about is that as time goes by, we creep closer to the "red zone", where we don't even need to know how to write the software because any idiot with an expensive computer can brute force the search through program space (more realistically, they would rely on evolutionary or other types of relatively unguided methods). That's exceptionally dangerous because the vast majority of self improving AIs will be hostile, and we want to make sure that the first to emerge is benevolent.
So yes, there's a lot of uncertainty here, but I think it's a mistake to say that we don't need to worry about it until it's here. By the time it's inevitable and the mainstream has started to even accept it as possible, it's probably going to be impossible to ensure that (for instance) some irresponsible government won't be the first to achieve it merely by throwing a lot of funding at the problem and doing it unsafely.
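The "evolutionary or other types of relatively unguided methods" mentioned above can be made concrete with a toy sketch. This is not anyone's actual research code, just a minimal (1+1) evolutionary loop maximizing the number of 1-bits in a string; the point is that mutation plus selection makes progress with no insight into the problem being solved.

```python
import random

random.seed(0)

GENOME_LEN = 32

def fitness(genome):
    # Count of 1-bits: the "score" the search is blindly climbing.
    return sum(genome)

def mutate(genome, rate=1 / GENOME_LEN):
    # Flip each bit independently with small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

best = [0] * GENOME_LEN
for _ in range(5000):
    child = mutate(best)
    if fitness(child) >= fitness(best):  # keep the child if it's no worse
        best = child

print(fitness(best))  # approaches GENOME_LEN
```

The search knows nothing about bits or strings; swap in a different `fitness` function and the same loop climbs a different landscape, which is exactly why "unguided" methods scale with compute rather than with understanding.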
"Proverbial Stanford" coincides a great deal with "Proverbial Harvard" - both are wealthy private schools that admit the very best students and have society's bias towards the well-to-do sons of well-to-do fathers. If you're looking for a school to contrast with Harvard, Stanford is a poor choice.
Furthermore, actual Stanford isn't doing any damage to actual Harvard. The data I've found suggest that 70-80% of undergrads admitted to both Harvard and Stanford pick Harvard.
All analogies are imperfect, but I chose Stanford for the specific reason that it's been at the forefront of perhaps the greatest semi-meritocratic* drive in modern American history: the high-tech industry. I very much meant the "proverbial" Stanford when I used Stanford in this example. I meant the Stanford of the movement Stanford has helped to shepherd -- the Stanford that exists in popular consciousness, regardless of how removed that perception may be from the reality of the school.
*I must use the "semi-" qualifier here because, as we all seem to agree, there's no such thing as a true meritocracy. The wealthy have advantages throughout life that maximize the chances of generating a meritorious CV.
As an elitist alum, I don't like these programs one bit. I can't help but feel that they are, in a small but meaningful way, watering down the value of my degree. And even though it's petty, I'm chagrined that my diploma features English rather than Latin text.
Harvard's Extension School was designed to teach the greater Boston community, and I think it should be a vehicle to improve town-gown relations, by convincing locals to not perceive Harvard as "the other". I certainly don't think it should have as part of its mission handing out Masters degrees to people from Kansas over the Internet.
Full Disclosure: I earned my bachelor's degree (an ALB) via the Extension School (HES) and graduated in 2009.
I'm disappointed in your comment. Firstly, it reflects an almost complete disconnect between what HES's stated mission is and what a lot of students at the College believe it to be. I'm often shocked by how /little/ the College students know about HES.
First and foremost, HES has been around for 100 years. It's changed over the years but its mission is essentially the same: provide an education for those whose life circumstances/obligations preclude them from committing to full-time study at a residential school.
You might disagree, but I took a lot of the EXACT same CS courses from the EXACT same professors that the College students took, and I wasn't particularly impressed with the engagement of my younger peers. My impression of many of them was one of "putting in time." I was often more engaged than my classmates and one of my professors was quite happy to allow me to sit in class alongside them. I wasn't near the top of the class but I certainly didn't bring up the rear either. I also had the opportunity to take a lot of classes that aren't necessarily part of the "core" program required of HC students and expand the boundaries of my academic career.
Second, the article title is more than a bit misleading. You can't earn a degree from Harvard completely online AT ALL. For the ALM/IT, you're going to have to spend a decent chunk of your time in Cambridge attending classes. A substantial portion of that time will be spent working on a thesis which is, by some accounts, nearly equivalent to a full Ph.D. dissertation.
Finally, never forget that nearly all of the HES students are holding down a full career while pursuing their studies. I worked a full week and commuted to Cambridge (from DC) once a week to attend my two classes. A third I pursued via distance-ed during that same semester. In some semesters, all of my classes were distance ed; in others, none.
It took me 4 years to complete the 2 years of school I needed to finish a degree I had abandoned to chase my fortunes in the Internet industry. It was hard, hard work and I'm very happy I did it. You should be thankful for the fellow alums who enrich your student body with their years of experience in industry.
BTW, I also took CSCI E-131b (Communication Protocols and Internet Architectures). Didn't study. Didn't read the book. Earned an A. In at least one case, a particular networking technology we were discussing in class was developed and deployed by one of my colleagues.
One more thing for the rest of the HN community: if you think you're programming hot-stuff, you should take Mitzenmacher's "Introduction to Algorithms" class. I did and it was a real wakeup call. You might find yourself humbled.
Well, if you went to Harvard for a "valuable degree" then I really don't feel sorry for you. You are lucky to have gone, and should consider your degree a parting gift at best. As a grad student at Yale I had to endure this same elitism from many undergraduates who felt Yale's graduate schools were diluting the prestige of their degrees. If you are like me, you have already found that the people who really care about degrees are often the ones least likely to trust their own good judgement.
The last I heard, colleges like Harvard were built to impart an education to anyone who wanted one and was capable enough to get it.
I don't think the founding motto of Harvard was to create any kind of elitism among the people who drank from its fountain of knowledge, but to spread the truth (as in an education).
Even if Harvard stopped handing out these extension degrees the knowledge they're opening up to anyone through their free content is unstoppable. Kudos to MIT, Harvard and countless other colleges for opening up the knowledge that was beyond the reach of a majority of the population all over the world!
Not everyone can attend an elite college, even if they deserve to!
What evidence do you have that extension classes are easier than their traditional counterparts? It's certainly easier to get into Extension than other Harvard colleges, but I see no evidence it's easier to graduate.
I don't disagree that the signalling of a pedigree brand is valuable. However, the signalling of accomplishment is much louder than credential.
So, my curiosity was based on the emotional investment in the credential, when I would think a Harvard grad would believe themselves capable of producing accomplishments that far out-signal their degree.
That's fair -- perhaps most of the sentiments regarding "diluted brand" probably come from younger grads or current students who are less sure of themselves and have fewer real-world accomplishments to rest on.
If you're interested in this kind of stuff, Ken Pomeroy (kenpom.com) runs a fantastic basketball analytics site. He's a big proponent of what are called "tempo-free stats", which aim to filter out issues of playing speed from scoring (a team that plays quickly will score a lot of points, nearly independently of whether or not they are winning). Tempo-free stats instead count possessions; one interesting statistic is that the average team this year in college basketball produced 1.01 points per possession - such a tidy figure to emerge from the chaos.
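The possession-counting idea above can be sketched in a few lines. Possessions aren't recorded directly in a standard box score, so tempo-free stats typically estimate them from shot attempts, offensive rebounds, turnovers, and free throws; the formula and the team numbers below are illustrative, not Pomeroy's exact methodology.

```python
def estimated_possessions(fga, orb, to, fta):
    """Estimate possessions from box-score stats.

    A commonly used estimate: each field-goal attempt or turnover ends a
    possession, an offensive rebound extends one, and roughly 0.475 of
    free-throw attempts correspond to a possession-ending trip.
    """
    return fga - orb + to + 0.475 * fta

def points_per_possession(points, fga, orb, to, fta):
    """Tempo-free scoring efficiency."""
    return points / estimated_possessions(fga, orb, to, fta)

# Made-up box scores: a fast-paced team and a slow one can post very
# different raw point totals while being similarly efficient.
fast = points_per_possession(points=82, fga=68, orb=12, to=14, fta=20)
slow = points_per_possession(points=61, fga=50, orb=9, to=10, fta=15)
print(round(fast, 2), round(slow, 2))
```

Dividing by possessions rather than games is what lets a figure like the 1.01 league average emerge regardless of how fast each team plays.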
In terms of predictions, one of the most interesting teams this year is Kansas (http://kenpom.com/team.php?team=Kansas). They've only lost twice but have a large number of narrow home wins. Depending on how your algorithm treats those wins they either look like a team that will struggle to reach the sweet 16 or like a potential national champion.
One other issue that comes up is "garbage time". When a game isn't close (say in the last quarter of a blow-out), the stats are basically meaningless. Does Pomeroy have good ideas about how to deal with that?
Funny, I did tempo-free stats when I was about 10 years old -- decades ago. Although I didn't have access to actual possession counts, I tried to base it, as best I could, off of field goals attempted at the individual and team level, and do various extrapolations.
Seems like it is still a fun area, with much better data now. I just need the time.
Pittsburgh's 1985 win (in the middle of the steel industry collapse) prompted a lengthy screed from a UW psychology prof about how cumulative ratings are assembled:
Pittsburgh keeps winning most livable city awards today for the same reasons it won in 1985: it's a city that rates good or very good in everything and bad in almost nothing.
As an AI grad student, this kind of sensationalism is somewhere between a minor irritation and a serious threat. AI always has had a severe problem with over-promising and under-delivering, and I'm of the humble opinion that until you're actually shipping the most awesome thing in the world you should keep your mouth shut. If the first thing people associate "AI research" with is "disappointment", that hurts everybody (particularly, NSF funding).
"Brain-based" AI should stay in the dark ages. Optimization-based AI is the present and the future.
(That said, if you want to talk about your sweet computer vision system that's "coming soon", go right ahead. Just don't call it AI.)
"Brain-based" AI should stay in the dark ages. Optimization-based AI is the present and the future.
Humans can see. Computer vision systems suck. There's a perfectly good one in our brains. Why not try to understand what already works?
Contrary to what most would believe, brain-based computer vision has made a lot of progress in the past 20 years. Some might think there is a fundamental flaw in the "brain-based" approach given past failures, but that ignores the fact that those failures very likely happened due to a poor understanding of the brain at the time.
The work in brain-based computer vision however has been mostly academic. Brain-based computer vision startups are even more recent, and I think it's exciting to see the startup approach to solving what has been mostly an academic problem. In a startup, the engineering mindset, quick iteration, as well as a lack of concern for publishing and other forces at play in academia could produce very different results.
I do agree that the 5 year promise is extreme, but I think we need time to see how this relatively new mode of work (both in terms of the technical approach, and the process of implementation in a startup) will play out before we call it a failure.
Full Disclosure: I was an intern at Numenta last summer.
Is Numenta's approach really that informed by findings regarding actual brain function? It's been a while and I don't remember most of Hawkins' model, but I don't feel that one needs to consult any actual neuroscientific results to use the general concepts of hierarchical design or top-down processing, which seem to capture the basic idea of his work.
This whole neuro-A.I. fad began with artificial neural networks, which had nothing to do with brains, and still hasn't died.
Numenta's most recent algorithms are actually very strongly neurobiological. If you have looked at earlier work, you should check out the most recent white paper from a couple of months ago, which details more than a year of recent efforts in that direction.
You are correct that neural networks had almost nothing to do with brains. Numenta's new cortical learning algorithms, on the other hand, are very closely modeled on the structure and function of the neocortex.
There was something of a collective and large-scale underestimation of how hard AI would be. Why that would be the case is interesting, especially since it persisted for some decades, across multiple fields filled with smart people. From probably the 1880s to, say, the 1970s, there seemed to be this widespread view that high-level AI was just around the corner, and mainly depended on some inevitable technical progress (faster computers and more memory, plus a bit of algorithms work).
There wasn't even really much debate, in either CS or philosophy or engineering, over whether computers would be able to do "routine" tasks like accurate object recognition, mathematics, playing chess, etc., in the near future. The biggest controversy was over whether computers could ever be "truly" intelligent and creative, e.g. whether computers would also replace Beethoven in addition to mathematicians, or whether they'd be forever limited to just being very capable automatons. Somehow everyone missed that even making them the "lesser" kind of intelligent, so they can walk around, recognize objects, translate languages, etc., would turn out to be pretty hard.
Numenta does fall under brain based. I'm not sure what Vicarious are working on, but recently Numenta transitioned to radically more biological algorithms. It would be interesting to compare the two if Vicarious comes out with more detailed information about their algorithms.
I would say "Brain-inspired". As I see it, Numenta's model (as of about a year ago) is based on (1) the hierarchal organization of neurons, (2) the presence of feedback loops in neural architectures and (3) the importance of temporal processing even for static scenes. This doesn't include any intracellular details nor any of the larger and/or specialized brain structures.