Well, I haven't received any such email and my score in the class is 98% so far (with only the final exam pending this week).
I expected that around 10,000 people would have a perfect score in the class, given my experience with Asian participation in online programming contests. There are a few countries in that part of the world where organizing a group to check each other's work and ensure perfect scores is common and encouraged, even when the official requirement is to do your own work. And those happen to be countries filled with millions of smart people. India, China, and Russia seem to be the heart of the phenomenon. Google, Facebook, and Topcoder have systems to deal with it, but ai-class.org does not.
Anyway I'm neither surprised nor disappointed not to be in the top sliver. I'm not hireable anyway. It's funny that there is an identifiable top 1,000 at all instead of 10k+ perfect scores.
I'm not so sure about that given that I'm from one of the countries you've mentioned. My guess is that the profile of people who've taken part in ai class would be very different from what you would see in coding contest sites. Since no rewards were promised, I would think that those groups of people who would game the system to get perfect scores would be a very small number.
Getting new jobs is great. However, stay put in your jobs. Try to apply what you learned to little problems at work. Everyone has a huge database these days. I work for a medical device company and I write software that processes blood (embedded stuff). Every year, a million or so run logs find their way into one of our databases. There is a wealth of information there that we could figure out. E.g., how do we maximize yield? What's the best we can do with a certain type of donor? Do we perform worse on people in a certain geographic region? Eventually we could make useful predictions based on that data. Granted, that is not what I was hired to do, but I'm going to do it in my "free" time.
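This kind of log mining can start very small, well before any "real" ML. A toy sketch of the first step, asking the yield-by-region question above (the field names and values are made up for illustration, not from any actual device schema):

```python
from collections import defaultdict

# Hypothetical run-log records: (donor_region, donor_type, yield_ml).
# Purely illustrative data, not from any real database.
logs = [
    ("north", "A", 410.0),
    ("north", "B", 395.0),
    ("south", "A", 430.0),
    ("south", "B", 388.0),
    ("south", "A", 442.0),
]

def mean_yield_by(key_index, records):
    """Average yield grouped by one log field (region, donor type, ...)."""
    totals = defaultdict(lambda: [0.0, 0])
    for rec in records:
        acc = totals[rec[key_index]]
        acc[0] += rec[2]  # accumulate yield
        acc[1] += 1       # count runs
    return {k: s / n for k, (s, n) in totals.items()}

by_region = mean_yield_by(0, logs)
print(by_region)  # {'north': 402.5, 'south': 420.0}
```

Once a grouping like this surfaces a real difference, the same records become training data for the prediction step.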
Why? Because it is interesting. Find something to apply it to that might add value to the business. If enough people do that, AI techniques might become commonplace. Get on Kaggle, GitHub your next project, etc. Putting what we've learned to practical use is how we can proliferate and disseminate this knowledge. This was the intent of the class in the first place anyway.
tl;dr Using this course as a gateway to your next job is shortsighted IMO.
I wish I could +1 this more than once. This is exactly the point, to me. There's not really such a thing as "an AI job". There is application of AI techniques to existing problems. Those problems are already found riddled throughout every job everyone's ever worked in.
You can't sit back and wait for AI to "happen". What would that even look like? The benefit of these techniques is in application to existing data; you don't have to quit and found some kind of computer vision/kinect/big data startup to take advantage of the knowledge.
You should be aiming to put yourself out of a job by using AI.
Drat. If I'd known they were going to do that, I'd have made sure I was in the top 1000.
edit: why is this getting voted down? I would expect there are a fair number of people here who took the course with a casual approach, like I did. E.g., I usually watched the lectures early Sunday, then did the homework that night. Then, for those topics that I enjoyed and wanted in more depth, I read the sections in the book on them. This approach is relaxing and fun, but does increase the tendency to make silly mistakes and not catch them (which I in fact have done a few times now).
I am, it turns out, enjoying the material enough that I think I would like, if I ever find myself looking for a job, to find one where I can use this stuff.
My guess is that you are being modded down because your statement is of a form that many people have heard in real life and associate with poor performance. It shows a lack of thinking about jobs. However, I'm going to give you the benefit of the doubt and assume that you are young and not really worrying about such things.
Basically your answer is like the classic "I could have got a 5.0, but I partied", which is a statement that says a number of things. 1. "I think all the kids who got 5.0 are boring and don't party" (untrue). 2. "I am not capable of getting a 5.0 and partying" (probably true).
That the person would even make the statement says something about their general level of intelligence. Hence when you say "If I had but known, I would have been in the top 100", you are saying "I had no idea that a hugely publicized course at a renowned university might lead to job opportunities". This lack of intelligence seems to contradict your first statement. Or it could be that you are not in the habit of thinking about such things. I hope you are either a kid, or an incredibly wealthy trust-fund baby living in isolation. Usually, however, I hear such remarks from people who are not smart enough to realize they are not smart enough, or they are quitters, or they are victims.
Actually, the modding has reversed and my comment is now at +9.
The "partying" I was doing instead of putting a non-casual effort into the class was dealing with some projects at work, and starting to read a book on analytic number theory because I'm tired of being someone who doesn't understand the proof of the prime number theorem.
> you are saying "I had no idea that a hugely publicized course at a renowned university might lead to job opportunities"
Yeah, I did not think it would lead to much in the way of job opportunities. That's because the class is just an introduction. If you compare almost any section of the class to the corresponding section in the Russell and Norvig book, you'll see that they are leaving out a lot of detail in the class. The homework in the class almost never pushes any limits of what was done in the lectures. Many of the homework problems are just things that were done in lecture with the numbers changed.
I would have expected the job opportunities to start at a deeper level than the class reaches.
"But in education circles, Mr. Khan’s efforts have captured imaginations and spawned imitators. Two Stanford professors have drawn on his model to offer a free online artificial intelligence class. Thirty-four thousand people are now taking the course, and many more have signed up."
Not sure if "taking the course" means advanced track and completed midterm or not.
I have only the best possible things to say about the video lectures, but the lab assignments are just too easy. For something that is supposed to be the advanced track, I was hoping that they would do something more than "take this description of the function, implement it in Octave, make it pass our tests".
Of course, I know it's their first class and you can't get everything perfect right off the bat. But I'd love to see the lab assignments being one larger project, some kind of task where you need to apply the principles learned to solve different problems. It would be also more effective in measuring/grading students.
On an unrelated note: I wonder what is the license for the classes/videos. I am working now in an adaptive e-learning platform. It would be cool to put their videos and questions in our system and see how effective the system is.
Unfortunately the "difficulty" of the ML class assignments was not in the material application of the things learned in lecture but instead in the gotchas of Matlab/Octave vectorization. Most of the assignments were in the format of: "Here's a formula that applies to the element; generalize it to the matrix so there's no for-loops." Which, while challenging for those not accustomed to thinking that way, was not a challenge of applying what one learned from the lectures.
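The "eliminate the for-loop" exercise described above looks the same in NumPy as in Octave; a minimal sketch using the linear-regression cost as a stand-in example (the data and parameters here are invented for illustration):

```python
import numpy as np

# Per-element formula: h_i = theta0 + theta1 * x_i,
# cost J = (1 / 2m) * sum_i (h_i - y_i)^2
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])
theta = np.array([0.0, 2.0])

def cost_loop(theta, x, y):
    """The formula exactly as stated, one element at a time."""
    m = len(x)
    total = 0.0
    for i in range(m):
        h = theta[0] + theta[1] * x[i]
        total += (h - y[i]) ** 2
    return total / (2 * m)

def cost_vec(theta, x, y):
    """The vectorized version -- the step the assignments actually graded."""
    m = x.size
    X = np.column_stack([np.ones(m), x])  # design matrix with bias column
    residual = X @ theta - y
    return residual @ residual / (2 * m)

assert np.isclose(cost_loop(theta, x, y), cost_vec(theta, x, y))
```

The conceptual leap is small (stack a bias column, replace the sum with a dot product), which is the parent's point: the hard part is notation, not machine learning.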
The course was awesome for what it was: a chance for the average programmer to get their feet wet in ML, but the fact that it was hosted by Stanford seems to lead to a somewhat humorous irony that it was pretty accessible whereas Stanford's real-life rigor is supposed to be nothing of the sort.
I agree the programming assignments have been very easy.
However, I doubt they have the bandwidth to manually grade thousands of assignments. Given the scale of the class, the grading is forced to be automated. It's hard to come up with such assignments, and even harder to make them challenging, given an automated grading script.
The grading doesn't need to be manual. Think of the lab assignments being something like a smaller scale of the Netflix challenge:
1) They provide some set of data and establish rules for the competition.
2) They implement their own solution to the challenge, and that is the benchmark.
3) A "passing grade" is obtained by getting any working system.
4) The actual grading is then given on a curve, compared against their benchmark.
If your project is better than the benchmark, you get an A+, 95%-100% an A, 85%-95% gets you a B... etc.
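The scheme above automates trivially; a sketch, where the thresholds are just the ones proposed and the behavior exactly at the boundaries is my own guess:

```python
def grade(score, benchmark):
    """Letter grade relative to the instructors' benchmark solution.

    Any working system passes; the curve compares your score
    against the benchmark. Thresholds per the proposal above.
    """
    ratio = score / benchmark
    if ratio > 1.0:
        return "A+"   # beat the benchmark
    if ratio >= 0.95:
        return "A"    # 95%-100% of benchmark
    if ratio >= 0.85:
        return "B"    # 85%-95% of benchmark
    return "pass"     # working, but well below the benchmark

assert grade(1.02, 1.0) == "A+"
assert grade(0.90, 1.0) == "B"
```

The appeal is that the grader only needs to run submissions against held-out data, exactly the Netflix-challenge setup.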
I think this is a brilliant idea, and it is already being implemented by many training organisations through kaggle.com's "kaggle in class" program: http://inclass.kaggle.com/
Using this system for ml-class (and presumably the forthcoming pgm-class and nlp-class) would be extremely beneficial for real-world application of the information presented.
That said, learning what the algos are and how they work is one thing; learning how to actually apply them to real life situations is another thing. I think the class leans quite heavily towards the former, but I really love the few glimpses of the latter.
Personally, as someone who is new to the field (didn't do maths at college) & is barely fitting the classes & exercises around a fulltime workload & other things, I am glad that the programming exercises are "easy". Some of them are ridiculously easy, agreed (where 1/2 the solution is given basically verbatim in the pdf notes, and the other 1/2 in the code comments) - but for most of them I think it's enough to wrap my head around what's actually happening, especially in terms of the multiclass neural network assignment. That gives me enough foundation to try to apply them to real-world situations on my own time.
Granted it wouldn't work in many other classes, but my teacher for assembly language did something like this. First, your code had to work or you got nothing. You also had some time limit, to avoid ridiculously slow, yet working code. Finally, each working submission was graded by the number of additional bytes you used above the reference implementation.
And he knew all the tricks. I don't think anyone ever beat him. And he didn't show anyone any of the solutions until after the final.
I felt like I learned more from the few minutes I spent reading those solutions than I did during the rest of the course.
I don't know the numbers for the ML class, and the format is a bit different; however, for the DB class, of the many people who signed up, only 9,180 took the midterm. (I originally wrote that only 908 got scores >= 18/20, but that's wrong: the final had 20 questions, the midterm had 18.) So that's 908 people who scored 18, 2,082 with 17 or over, and 3,428 with 16 or over.
So if ML numbers are in any way similar to this maybe it wouldn't be so bad. That is if there was such a letter.
I'll add that Professor Widom's style is quite engaging as well.
I wonder about this. Apparently the letter we get at the end contains our ranking. I wonder what the distribution of scores would be; I'd assume there would be a small fraction of enrolments actually doing the exercises, a big spike at 100%, and most falling in the range of 80-100%. Personally I forgot that I hadn't done a couple of the quizzes, so did them late, and didn't complete the optional part of the first week's programming assignment on time, so although I've probably got a "good mark" on paper, I'm probably nowhere near the top 10% of students.
As someone doing the course, I would say it's a decent way to identify some candidates, but the correlation won't be really high. It depends a lot on the time you have to invest and how rigorously you go through your answers and study the material the questions are drawn from. Some programming challenges would be great to better separate out whether people really understood the material, despite being harder to mark.
Personally I've always got a bit wrong in most of the homeworks, but usually felt that while I understood the material, I didn't devote enough time to rigorously check my answers and rewatch the materials to pick up on the subtle things you need to remember for some questions.
I took this course casually as well (wanted to fill in some gaps in my undergrad education). But my guess is that people who complete this course casually probably already have a lot on their plates. For me it was family, working, teaching, and taking a grad class. I never double-checked my homeworks, could only do a single pass through the lectures, and didn't spend any time checking for nuance in many of the Qs, and my grade shows accordingly. However, in the end I'm really excited that I was able to continue through it despite all this (I'm really curious to see the percentage of participants who saw the course through to completion compared to those who signed up).
The correlation will be interesting. Undoubtedly the top 1,000 is made up largely of the type of people who must be the best at whatever they try (not exclusively, of course) and who have a decent amount of time on their hands; after all, there is no penalty whatsoever for not completing something and no previously known reward for perfection (I guess a piece of paper can count). Given the wording of some of the Qs (actually my only complaint about this course), a lot of care must be taken to ensure that every question is right, so being in the top 1,000 is no small effort.
Same here in regards to time, family, etc. I'm hoping they keep the videos up for some time after the last week, I missed a few during the more hectic times. Regardless I've learned tons that I'm already implementing in code (and daily life, like actual planning methods and such.)
That said I have a job I like already, and plan on using what I learned here.
Those 1000 with more time and drive to be the best probably are some of the best job candidates though. Plus chances are they do need a job after all.
Yeah I would agree, I have been allocating a block of time on a Saturday or Sunday to get through the content.
Definitely learnt a lot, which surprised me because I did 3 classes that were based around AI and probability in my course. It has enlightened me a bit on the difference between a top-of-the-range CS college and where I did my degree, just in terms of how much ground can be covered, and covered well.
They may be trying to collect resumes (and job placement performance) from the top 1000 students to see what the correlation is. If they can prove that correlation, then they could essentially run the courses as a recruiting tool for paying companies.
Prof Thrun mentioned in the last office hours that there were about 1650 people with "perfect scores" and he seemed a little thunderstruck by that and said he would admit these folks into Stanford if he could, because they are (paraphrasing, from memory)"Stanford quality".
What struck me was how much importance he gave to this metric, which isn't that hard to game on an online offering.
I know a few people (who shall remain nameless) who collaborate and check each others answers and so on before submission, in direct violation of the Stanford policy (and have 100s or close to it), and so probably have received this mail, whereas more "deserving" (note quotes) people who honestly work through the course material may not because they have, say, a 85% or 90% score.
That said, my key takeaway from this is that professors are very impressed by perfect scores irrespective of how you got them. There must be something magical about that row of 100s. Once you set up a grading/ranking system, it is psychologically very hard not to admire people who end up at the top.
I am personally a little dubious that the people with the highest scores would make the best pool of employees, especially given that this is an online course without the programming component, but what do I know?
I wrote Java code for most of the algorithms in AIMA as a side project a few years ago. After I read an online post by Peter Norvig saying a few of his students had tried and failed a few times to implement the code in Java (Common Lisp code existed and the Python version was in its infancy), I sent him the code, and it became the "official" Java distribution for AIMA (though I don't maintain it anymore; the immensely talented Ciaran O'Reilly of the Stanford Research Institute does). And no one ever invited me to Stanford or offered me a cool AI job, sob! :-P
No I am not bitter I tell you, not even the teeniest bit :-p 
I wonder how this signalling will play into the upcoming courses? If there are tangential real world benefits to be gained by attempting a "perfect" score, then you can expect a lot more game playing wrt scores and exams.
Though eventually, after a lot more work, it did lead to my working on good ML/robotics etc. projects from Bangalore, which is a hard thing to do in the Great Outsourcing Wasteland.
I am really not bitter.
I wrote the code for the hell of it, not to get a job. AIMA was my introduction to the fascinating field of AI. It is a great, great book and it has a lot more material than is covered in the course.
I once did want to go to Stanford and learn from the great profs there, but now in a "mountain comes to Mohammed" fashion, Stanford is coming to me. I don't care about the credentialling - I just want to learn. I took the AI online course and enjoyed Peter's and Sebastian's teaching immensely. Fwiw I should have a high 90's score, (I didn't add it all up) but nowhere near a perfect score.
It is surprising that you thought of this as a "cool AI job offer". I have two remarks here. First, the email sent to the class is merely an invitation to send resumes, something many programmers/CS students with an online presence experience on a regular basis. Probably not from a Stanford professor, but at least from major companies' recruiters. It would be interesting to know how many will actually make it through the screening and phone/on-site interviews and get a job offer.
Second, I registered for the Machine Learning course (I am not sure if the same applies to the AI course) and I compared it with the actual ML course at Stanford (CS229) (I mainly looked at YouTube videos of Andrew as well as assignments/midterm). The latter is by far more advanced and theoretical. The assignments tend to test more than basic comprehension of the material presented in the lectures, which is exactly all the online course's reviews tend to evaluate. They require strong mathematical knowledge and obviously a minimum level of creativity/intelligence.
"It is surprising that you thought of this as a "cool AI job offer"."
I don't. That part of the post was written with tongue firmly attached to cheek. If that tone didn't come through, that means I have to improve my writing.
The online ML course is CS 229A (which is also an actual course at Stanford. The online version is close to the Stanford course).
The "tough" version is CS 229 (no 'A' at the end). I registered for the ML course thinking it was an online version of CS 229 and dropped out when it was confirmed to be 229A. In my politically incorrect opinion, 229A is close to worthless. The math is important in real world ML. This course included gems such as "if you don't know what a derivative is, that is fine".
The online AI course is almost exactly the same course as Stanford (CS 221), minus, of course, the programming assignments. It is an introductory, broad based course, and it does the job well (imo)
The online DB course is almost (if not exactly) the same as Stanford CS 145. I think this was the best course of the three.
All courses track the corresponding Stanford courses.
> 229A is close to worthless
> This course included gems such as "if you don't know what a derivative is, that is fine".
It also included other gems like debugging models with learning curves, stochastic gradient descent, artificial data and ceiling analysis. I have not come across practical things like these in more mathematically oriented ML books that I have tried reading in the past.
Interestingly, your arrogance is in sharp contrast with the humility of the professor, where he admits in places that he went around using tools for a long time(like SVM) without fully understanding the mathematical details.
On the other hand, if you already know what a derivative is, have already been through all the linear algebra stuff, have an idea of numerical methods, etc., I appreciate not wading into those side areas. Especially if you have kids, a day job, and are doing the ai-class at the same time :D
> "if you don't know what a derivative is, that is fine".
A bit of me died when I heard prof. Ng say that. However, I had committed to finishing ml-class and I did. As of now, I'm glad I went through with it. I felt like I was learning all these cool AI techniques that I hadn't heard about. However, the proof is in the pudding. The question is will I be able to take a real world problem and apply what I learned in that class to come up with something interesting? If I can't you are probably right. My perfect record would only be worth the paper it's printed on and the money I paid for the course!
I'm not pointing fingers at Prof. Ng. or anyone here. It was an experiment for Stanford and an experiment for me. I know I am looking forward to the courses next year :).
While I agree with your general sentiment, note that at least in ML-class, you can resubmit quizzes as many times as you want for full credit. Thus over the course of an hour you can retake the quiz enough times to brute force a perfect score without breaking the letter (and arguably the spirit) of the honor code.
The Programming Projects that ML class had were slightly better metric of performance as there's more work that would have to be plagiarized, and if you're just going to go through life outsourcing all of your work then I guess that's your prerogative. However, I think that if you wanted to be very serious about actually testing for knowledge of material then the addition of some sort of interview component (phone/skype session), while time-consuming, could help.
In a way, yes, the programming projects in ML seemed like a better measure of performance in that you have to actually figure something out. However, they have two (sort of) disadvantages vs normal homework:
1) you immediately know if you got it right or wrong when you submit, so you can brute-force the correct answer, though to a lesser extent than with the quizzes
2) with the exception of maybe the first assignment, they are all "fill in the blank" sort of programming assignments. You basically just have to find the equations they give you in the PDF, translate them directly to Octave, and bam you're done.
I can't comment on online courses, but in general there is a HUGE difference between people who get As and people who get 100% on every single assignment. Never making a single error is an amazing feat.
I personally have only scored straight-100%s in a single course (Python programming), and that was only because I was relatively an expert in the material before the course began.
If you are getting 100's on everything it means you are gaming the spirit of the learning, overfitting the memory. Plop that guy in front of a computer with specs and a deadline and you will learn why grades are not an indicator of success.
Well, the only two people I personally know who would get all 100s are Peter Norvig and Sebastian Thrun, and I personally wouldn't mind hiring them!
Of course, in reality, Peter Norvig and Sebastian Thrun are working on projects that have long time horizons, e.g. self-driving cars and search. So perhaps you're still correct: The people you would hire to bang out code to meet a short deadline are probably different from the people you would want to work on your long-term technology bets.
In general, I disagree that knowing a topic incredibly well is necessarily overfitting. Deep knowledge can only aid new insights. You often hear about mathematicians and physicists who think by inhabiting their own mental world, composed of insights that they hold so deeply that they are _intuitive_.
It's possible that to them, the scores aren't important and it's a more conducive (and realistic) learning environment if they work together to solve the problems rather than doing it alone. There are many advantages:
* rather than just giving up on a problem, you can talk it out and learn together
* you get the opportunity to teach the material that you think you know that others find hard (a good heuristic for problems you may have just barely understood, but gotten correct anyway). Teaching material is a great way to learn it, and expose any gaps you might have in your knowledge.
* instant feedback on problems while they are still fresh in your memory
Your final score will be calculated as 30% of the score on the top 6 of your 8 homework assignments, 30% your score on the midterm exam, and 40% your score on the final exam. For those completing the advanced track you will receive your final score as a percentage as well as your percentile ranking within all those who completed the advanced track, and this will appear on your statement of accomplishment. The statement of accomplishment will be sent via e-mail and signed by Sebastian Thrun and Peter Norvig. We hope to have them digitally signed to verify their authenticity. It will not be issued by Stanford University.
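The announced weighting is mechanical to compute; a small sketch, assuming all scores are plain percentages:

```python
def final_score(homeworks, midterm, final_exam):
    """Final score per the announced weighting:
    30% average of best 6 of 8 homeworks, 30% midterm, 40% final exam.
    """
    best6 = sorted(homeworks, reverse=True)[:6]
    hw_avg = sum(best6) / 6
    return 0.3 * hw_avg + 0.3 * midterm + 0.4 * final_exam

# Two dropped homework zeros don't hurt the score at all:
s = final_score([100, 100, 100, 100, 100, 100, 0, 0], 90, 95)
print(round(s, 1))  # 95.0
```

Note how the drop rule means two completely missed homeworks cost nothing, which matters for anyone chasing the percentile ranking.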
Why do people keep coming back to games like FarmVille, and why are games now rampant with pointless side achievements like "collect 300 x in zone y"? I think some people, on some level, feel the need to use all means available to maximise scores, purely as something that needs to be checked off, like achievements in games.
People form study groups all the time, it's a great way to learn. Since it's a free online course all of the benefit is what you actually learn, there is no way to cheat. At least that's my point of view (I am not in any of the classes, but wouldn't hesitate to co-work on stuff if I was).
Again, maybe it's just me, but this is a free online course that everyone is doing for their own knowledge. There's no degree being granted and it doesn't count for anything. I wouldn't bother to read the rules and certainly not attempt to follow them. I would try and learn the material as best I could, however that is.
Not sure I agree with the correlation that the 'most talented engineers' are those that 'scored highest on the AI class HWs and Exams,' but I certainly wouldn't refuse any of the folks who were able to solve the ApproximateAgent PacMan Search problem in under 30 seconds from joining my team ;)
Its wonderful that Sebastian & Peter are reaching out like this after all they have done so far. Congrats to everyone that slugged it out, and good luck on the final this weekend!
There are probably a lot of talented engineers not in the top 1,000, but there are also a lot of non-talented engineers not in the top 1,000. I guess it's a reasonable cut-off. Thankfully it's not the only way to get into a great company.
Likely they look at everything, not just the graded portions, to come up with that kind of split. So if you got 100% on every quiz the first time, every homework and test you'd be in that group. If you flubbed the quizzes here and there, probably not.
They will also probably look closely at that optional programming assignment as well.
I knew there was a reason Google let Peter Norvig take the time to do this ;)
I'm not sure why there's a down-vote here, because that was exactly the question I had. There has been a system of professors referring their top students to their colleagues for years, and on a small scale I don't see much issue. However, with these large online classes, these professors can stand to be gatekeepers to an incredibly massive - and valuable - source of leads. It doesn't help when there is no true transparency to the process, and a desire for the school to recoup costs - we assume the professor would just send the top 1000 over regardless of source, but what's to stop them from only referring the names of students who, say, enroll for credit and pay the university? Or those who pay some sort of a separate "consideration fee"?
I can't put my finger on why, but I'm a little unnerved by this...
That this inevitably leaked due to someone's bragging will encourage future cheating.
On the other hand, it was inevitable that as long as "grades" or equivalent are officially certified (even though the course is not for degree credit), that people would collect online courses as credentials. Wherever there's signaling value, there will be cheaters faking the signal.
Disclaimer: I got the email and missed two homework questions.
This type of thing wouldn't help you get a job with me. It could make a difference in getting an interview with me, if you had no real experience on your resume/CV. I want a well-balanced team player who has a good chance to flourish on my team. Do course grades correlate with real-world flourishing on my team? Of course not.
It seems that this was sent to students with a 100% aggregate score up to this point (after dropping the bottom two quizzes). One acquaintance had 1 wrong in aggregate, and another had 0 wrong in aggregate but two imperfect quizzes, and only the latter claims to have received the letter.
It would be interesting to know what data they use for these kinds of assessments outside the given ones for the course, that is the best 6 homeworks out of 8 account for 30% of the grade, the midterm for 30% and the final exam for 40%.
I was one of three researchers in a team at Yorktown Heights that did some of the best research in AI in the world, published a string of peer-reviewed papers, was the source of two commercial products, gave a paper at the AAAI IAAI conference at Stanford, which featured that year's 25 best AI applications in the world, personally won an award, etc.
I've taught computer science at Georgetown University and Ohio State University. My Ph.D. is in applied math.
Once in my career, just as a 'scientific programmer', in two weeks I sent a few resume copies, went on seven interviews, and got five offers.
But after my Ph.D. and AI work, I sent over 1000 resumes to Google, Microsoft, GE, FedEx, and hundreds more, got only five interviews, and no offers. I got a nice letter back from Fisher Black (as in Black-Scholes) saying that he saw no applications of applied math or AI at GS.
I ask you: Who will hire you in AI and why?
In business, hiring is because some manager has some work to do and a budget to do it. That manager believes that they know nearly all that is needed to do the work and otherwise would not be betting his career on the work. Thus, the manager is not hiring high technical expertise he doesn't have. Instead the manager is, as on a factory floor 100 years ago, hiring labor to add 'muscle' to his work.
In particular, unless the manager knows AI, he won't be hiring for AI. And there is at most only a tiny chance that the manager knows AI and even less chance that his project will depend on AI.
Moreover, the manager does not want competition from below and does not want his project 'disrupted' from below so really doesn't want technical expertise above what is needed just to get his project done.
Net, if you know some AI and want to use it in business, then find an application and start your own business. Then, since you know AI, you won't have to hire anyone in AI either.
What I've said here for AI holds for essentially all advanced academic topics.
You were looking for someone to hire you to do AI work because you were great in AI. This reads more to me as using a student's performance in ai-class as a proxy for their overall software engineering performance. Seems like a better metric than many others that employers use.
I was just trying to get HIRED for anything, anything at all, not seriously illegal, immoral, or dangerous, and I would have compromised on those.
So, since this thread was about 'job placement' for students who did well in an AI class, I posted about my experience getting hired where part of my background was some expertise in AI.
Net, I have to conclude that, in getting a "job", expertise in AI ranges from very rarely helpful to often a serious disqualification, as in "It appears that you are overqualified for our position and would not be happy in it".
Sorry, I had assumed you were playing the AI angle, and my heart bleeds for people who get binned as "overqualified" and ignored. I still think the difference here is "did really well in one AI course, so is likely to do well in software in general", and not that the students are being recruited for AI jobs.
I am now in a similar position. I have a BS in Software Engineering and did a Masters/PhD in Computer Science after a year and a half of software development work.
After my PhD I got a postdoc in a 3-year EU project that has just finished. Now I am craving to get out of academia and back into software development. The problem is that I would rate my development skills as "junior" or "mid-level", and without hardcore expertise in any one technology.
And the worst problem is that, as you say, a lot of companies that see my resume see "PhD" and think "overqualified".
Recently I tried applying to a group at a company that is doing machine learning, in the hope that they will see a PhD as a feature and not a bug.
I just applied for anything, and not just for software engineering jobs. The resumes I sent to Google, Microsoft, GE Research, etc. were for whatever I was qualified for.
At the time, 1995-2005 (see my post below) AI was not much on the radar of companies. It would, could, and should have been but was not.
But asking that a company need "particular skills" that are a bit advanced is, as I explained, fundamentally something of a long shot.
Net, if someone has some advanced expertise and sees an application, then they should just start a business and be CEO, CTO, CIO, and Chief Scientist there, along with chief floor sweeper, until they get funding and/or revenue and can hire people.
Yet there are AI applications coming from Google, Apple, Microsoft, and many other companies (voice products, Kinect, etc.), and there are service-based companies that work on these advanced topics too, like ITA used to be. Why do you think those companies weren't interested in you? Or weren't you interested in the projects they had to offer? (I'm guessing it's the latter, but I could be wrong.)
At big corporations you need fame and "eminence" to land a job in a specialized field. You need to showboat and schmooze your way into these places. They want to hire the Sanjay Gupta of AI, writing code while also talking to refugees on CNN, or something.
edit: tell me why you think I'm wrong if you think I'm wrong.
Mostly wrong: You have described a way to stay afloat in essentially a paper boat driven by hype. Don't try such a boat on long trips over deep water in high winds!
Elsewhere on this thread I've outlined how business 'handles' new technology.
So, yes, maybe a big credit card company gets up on their hind legs about credit card fraud and wants to attack the problem with AI, big data, machine learning, optimization, etc. So they set up a group.
First problem: Some high executive looks at the group, doesn't see what he expects and respects, gets a headache, and kills the group.
Second problem: The group gets some really nice research done, writes a technical paper, and gives an executive briefing; some high executive gets a headache, concludes that the group is just engaging in 'theoretical nonsense', and kills the group.
Third problem: The group is making good progress and has some running software, with good estimates that credit card fraud losses will go down by 75%, thus giving an ROI off the top of the charts, with essentially no effect on non-fraud (false alarm) credit card operations. Some executives elsewhere in the company start to feel the heat of internal competition and work to shut down the project.
Fourth problem: The project is rolled out in production and is fully successful. There is no more need for the group, and it is disbanded with everyone fired. The head of the group has his house foreclosed, and his wife leaves him. He sends 1000 resume copies and joins his brother's business mowing grass.
Lesson: Being a technical employee in a big company where 95% of the people and nearly all the executives are non-technical sucks, i.e., is a career long walk on a short pier.
Net, big, old companies are nearly always just unable to work effectively with new ideas. Exceptions are possible but rare in practice.
Larger lesson: Start a successful company and either run it and pass it down in the family or sell it for $50 million, $500 million, whatever. Don't be the factory floor worker. Instead start a company and be the CEO. Use special technical expertise as the advantage.
You can plug together one heck of a Web server computer for $1000 in parts. Heck, just in starting a pizza shop, one pizza oven might cost more than $1000. The pizza shop has to pay attention to a long list of regulations, but a Web server doesn't.
One of the good things about the US is the relative ease of starting a new business. If you want to use technical abilities to start a company and sell it for $50 million, then the US is the place to be. Thank the big, old companies, because they are the ones willing to spend $50 million buying a company based on less than 100 KLOC!
One of the in person interviews was at a company, somewhat well known, in Connecticut. They had a room with long tables and vertical walls about three feet high set on the tables with about enough space between each partition for a PC and a chair -- these were for 'developers' they wanted to hire.
Near the end of the visit, a nice girl in their HR office walked me to a bulletin board they had with their legal announcement of their job openings and their claim that they had to hire H1Bs because no qualified US citizens were available. We didn't say anything to each other, but the scam was clear. Hope they didn't fire her.
A friend, who worked with me at Yorktown Heights, recently went on an interview for a programmer slot. Apparently all the programmers were on H1Bs from Taiwan, India, and Russia. My friend didn't get hired. He's terrific at C, C++, C#, Visual Basic, .NET, FoxPro, T-SQL, and system management and administration of Windows Server, SQL Server, and Exchange. And he's an expert in AI with an applied math Ph.D. from one of the world's best math departments. Yup, guess those guys from India, etc. were 'better qualified'.
The early part of my career was greatly helped by the Cold War and the Space Race. That's why in two weeks I could go on seven interviews and get five offers. Yes, that was near DC. At one time I was making six times what a new Camaro cost. I still have the Camaro!
But as has been documented, about then some executives from industry and government got together to see if they could change that 'horrible' situation. The result was that the NSF set up a team of economists that did some supply-demand calculations and estimated how many more 'tech workers' would be needed to 'solve the problem'.
Then to get the 'tech workers', the NSF wrote into academic research grant contracts that so many students had to be supported. And, hint, hint, hint, such students are available from Taiwan and India.
So, for some years freshman calculus classes were taught by graduate students with a good understanding of Chinese but a poor understanding of English. Of course, if you are going to study a subject in English but don't know much English, then about the easiest subject to study is math, since the vocabulary is very small and the terms are very well defined.
Soon US citizens in college walked into computer science classes and, on the first day, saw only a minority of US citizens, sensed something wrong, and walked out. So for some years during rapid growth of computing in the US, academic computer science was very short on US citizens.
Really, the H1B program was designed and intended to flood the US labor market for tech workers, and basically the program worked.
Congress was getting pushed from two sides on the H1B situation. But 9/11 provided an excuse to throttle all immigration (except from Mexico!), and the permitted H1B slots were shrunk.
Now some tech industry executives are back at claims that the US needs more immigrant entrepreneurs to get 'skills' in 'short supply' in the US. So, amazing situation: In the US families commonly have one heck of a time paying for college. Even in the US, a good computer, printer, Internet connection, work space, etc. for learning computing is somewhat expensive. A lot of bandwidth to US servers is needed for downloads. Yet somehow in countries with average family incomes 10% of those in the US and 10,000 miles farther away from US servers people are 'better trained' in 'technical skills'. Amazing.
No, it's just an old story: Economic activity needs land, labor, raw materials, capital, etc. Anyone with one of these likes to believe that their part is the most valuable. So, the people in the US with the capital, and who never wrote 100 lines of software or invented an algorithm, tend to believe that they have the most brains, the most valuable part, and should have the most power and that labor should be like workers on a factory floor 100 years ago. They wrap themselves in notions like "The US is a nation of immigrants". Yes, Mayor Bloomberg, you are one of those people.
To me these are now very old issues and there are some larger issues:
First, Moore's law and related 'laws' for other hardware have been charging along so fast that what can be assembled for a development computer or a first server for $1000 in parts is astounding.
Second, common US Internet bandwidth is beyond belief, even for a server. Just do some arithmetic: assume a Web page that takes 200,000 bits to send, with three ads, some reasonable 'charge per thousand ads displayed' (CPM), and an Internet connection with 15 Mbps upload bandwidth for less than $100 a month; assume you half fill that bandwidth 24 x 7, and estimate the monthly revenue. Then you can join the supercharged Corvette of the month club or the 50 foot yacht of the year club. Multiply it out and see.
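The arithmetic above can be multiplied out in a few lines. A minimal sketch; the $2 CPM is a hypothetical rate I've assumed here, not a figure from the comment:

```python
# Back-of-envelope monthly ad revenue from the figures in the comment.
# Assumption: $2.00 CPM (revenue per 1,000 ad impressions) is hypothetical.

bits_per_page = 200_000        # one page view takes ~200,000 bits to send
ads_per_page = 3               # three ads per page
upload_bps = 15_000_000        # 15 Mbps upload link
utilization = 0.5              # link half full, 24 x 7
cpm_dollars = 2.00             # assumed charge per thousand ads displayed

seconds_per_month = 60 * 60 * 24 * 30
bits_per_month = upload_bps * utilization * seconds_per_month
pages_per_month = bits_per_month / bits_per_page
ads_per_month = pages_per_month * ads_per_page
revenue = ads_per_month / 1000 * cpm_dollars

print(f"{pages_per_month:,.0f} pages, {ads_per_month:,.0f} ads, "
      f"${revenue:,.0f}/month")
```

Under these assumptions the link serves about 97 million pages a month, roughly 292 million ad impressions, or on the order of $583,000 a month at a $2 CPM, which is the point: the bandwidth, not the revenue model, is the cheap part.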
Third, US technical graduate education still totally knocks the socks off nearly all the rest of the world. If you have some such education and some research and also a good application, then get a computer, type in the code, go live on the Internet, get users, ads, ad revenue, a Corvette and a passenger about 5-4, 110 pounds, good figure, natural blond, cute, sweet, majored in art history, good at cooking, sewing, playing piano or violin, singing, wants to be a wife and mommy, ...!
Yes, I've published in mathematical statistics, that is, the more serious version of 'machine learning'. And I've published in optimization, that is, the more serious version of 'planning' in AI. And my Ph.D. research was in stochastic optimal control, that is, the more serious version of the AI 'planning over time under uncertainty'. And I've done some applied math research for my project. I recommend this path instead of 'computer science'.
That is, if going to grad school, I believe that there are some serious advantages in a carefully selected collection of topics in applied math instead of 'computer science'. Start with an undergraduate major in pure math. Sure, somewhere learn to write some code and then get, say, three hours of lectures on 'algorithms and data structures'. In graduate school, take seriously measure theory, functional analysis, probability based on these two, stochastic processes, optimization, and mathematical statistics, at least. My guess is that you will have the best tools for the future of computing and 'information technology' entrepreneurship and won't have much competition from outside the US or even inside. And those math classes are NOT crowded!
Yes, I believe that the situation has changed since 2005. For the evidence, this thread is the strongest I know.
Yes, I know that Google, Microsoft, etc. should be using AI for ad targeting, search ranking, scam and fraud detection, etc. Maybe they are.
Still from the tech news it does appear that the best way to get paid for such work from such companies is to do a startup with such work and then just sell your company to one of those companies.
I still believe that the ability of US big business, with its many 'traditions', will have one heck of a tough time running projects with anything advanced technically and not already common in the company. In simplest terms, the existing middle and upper management has essentially no ability to work productively with anything new.
Indeed, broadly the leading US research universities are terrific at working, for as far as they go, with new technical ideas, and business is just AWFUL at it. So, super tough to get a business to buy a technical idea, but they will buy a business based on a technical idea.
Oh there is a history: Off and on for decades business has tried to make statistics, optimization, AI, etc. work. The pattern is some new buzz words, some hype, some projects, some project failures, and then all's quiet again. Thing is, of course, eventually the ideas have to work. Business doesn't know how to make such ideas work. Parts of US research, in or close to academics, is from a little better at making ideas work to much better. E.g., look up the video of Eric Lander's lecture at Princeton 'Secrets of the Genome' or some such and see what he made work with new ideas and new hardware.
In simple terms, 'business' gets a 'business model' that is working and then hires 'line managers' to 'manage' the execution of this business model. Typically when the model dies, the business dies. Doing well with work that is new and advanced within the on-going business is not common. Yes, one of the reasons is part of the tax code -- it can look better for a company's financial condition just to buy a business for, say, $50 million, than to develop in-house.
The good news is, this is a GREAT time for information technology startup entrepreneurs. Don't struggle against your problems; instead, pursue your opportunities.
AI is extremely useful for web companies since they collect terabytes of historical user behavior and want to make predictions about how users will behave in the future. In particular, machine learning is very widely used within Google, Facebook, LinkedIn, etc. to do search ranking, anti-fraud, ad targeting, spam filtering, friend recommendations, you name it. Other AI sub-fields like planning are used in shipping and route optimization.
I'm at a 4-person startup and AI is the core of what we do. My e-mail is on my profile if you (or others) are interested.