Carnegie Mellon Launches Undergraduate Degree in Artificial Intelligence (cmu.edu)
502 points by e15ctr0n 10 months ago | 339 comments



I really do not like this move. AI and Machine Learning require graduate-level mathematical and computational skills. I don't think it's productive to pretend that we can train someone to be even remotely useful in these fields in four years of an undergraduate education. It sounds like an attempt to get around the fundamentals of CS to "skip to the interesting bits," which will produce graduates with a cursory knowledge of computers, programming, math, and data science - honestly worse than no knowledge at all.

I'm not fundamentally opposed, but I think this is akin to creating a "Condensed Matter and Nanophysics" undergraduate degree alongside "Physics."

Changing the name of a factory will not change the output. The only solution to creating more and better AI research is to invest in better fundamentals in computer science and mathematics, then create pipelines for specialization. Slow and low.


This is a lie that I hear a lot from AI/ML experts. To me it reeks of gatekeeping. Yes, the foundations of the field and many of the breakthroughs require this knowledge, but one can be a very effective practitioner of AI/ML techniques with a few years of undergraduate-level instruction. And given how many industries are kicking the tires of AI/ML, we're going to need hordes of practitioners.


I can't create a new programming language that compiles to machine code. Maybe I shouldn't be programming?

I don't see why you need to be a domain expert of the low level details to make a career out of something so useful.


I don't see why you would need a graduate degree to create a new PL that compiles to machine code. In fact, many successful PL designers lack PhDs.

Most work in CS feeds more on experience than education. A PhD is simply a way of getting deep experience (and advice) in areas that a job probably wouldn't allow for. Since ML is hot, you can get hands-on experience with it these days; it's not like 10 years ago, when the only people looking at deep learning were pretty much researchers, PhDs, and PhD students.


That is also taught to a useful level in undergraduate coursework. You can do useful ML using only first year undergraduate statistics, linear algebra, and multivariate calculus.


I don’t think it is actually gatekeeping, because no one is saying that you CAN’T do AI/ML with this degree, just that there is going to be a tradeoff in any curriculum focusing on a subset of CS.

A consequence of this degree is that it makes CS degrees worth less. Because AI/ML is by definition a subset of CS, this degree implies that you have CS + more. It doesn’t fairly show that there is a tradeoff in a specialized curriculum (less exposure to other areas of CS).

I would say that if you are at CMU or another school offering this degree, you would be a fool not to do it. Even if you are not interested in AI/ML, it will make you more hirable than a plain CS degree, because the name implies it is “better” than a CS degree. It’s a similar situation with the new “cybersecurity” degrees being offered. Ideally the accreditation group would bring some sense into this, but so far they have greenlit both degrees.


That's exactly the opposite of what happens to most graduates with specialized degrees.

They end up having to explain their degrees, which is generally not a good thing. It happens all the time with engineering. In the job market a degree in mechanical engineering is in general worth more than one in robotics.

You also run the risk of graduating into another AI winter, or just deciding you hate AI, and then you really have to try hard to explain that your degree is really just a CS degree.

Imagine graduating now with an undergrad in big data--that sounded like a good idea 5 years ago.


> Imagine graduating now with an undergrad in big data--that sounded like a good idea 5 years ago.

Is that a good analogy? Big data started out as a marketing buzzword, whereas AI always was an academic field of research.


AI is a very overloaded term. What we currently call "AI" is likely suffering from the same hype bubble that big data did.


And yet, most of today's commercial AI requires "big data".


Yeah big data can be very useful. The reality just doesn't match the hype.


I think you are confusing specialization with a degree in a sub-branch. A degree in CS is always valued more than a degree in, say, IT.

On the other hand, a degree in CS with specialization in Networking, would be more valued than a plain degree in CS.

I'd say if you want to specialize, do a master's. A degree such as the above would put the holder in a really bad position if that field goes into an ice age. If that happens with a master's degree, you always have your bachelor's degree to fall back on.


I took an open CMU grad course on the basics of AI a year ago, and generally I can understand papers in journals at an intuitive level and hack around with existing libraries no problem. I won't be solving research problems or getting hired by Tesla or anything, but writing amateur trade bots and feeding predictive bids inside an app I wrote is possible with just an AI crash course. The rest of CMU's standard undergrad is pretty solid; I imagine these will be good advanced undergrad courses.


If you can teach a kid frequentist (regular) statistics, then you can certainly teach them things like Bayesian statistics.

Add things like decision trees (which you need to build something like a chatbot). In fact, a friend of mine from my freshman year (way back before everyone had computers in college) wrote a chatbot. I had never seen anything like it, and he let it loose on IRC. Was pretty cool back when things like rn and ftp were all there were on the net.

Regression isn't that hard. Heck, most advanced Calculus and further maths are way harder.

Neural networks aren't even that complicated, although tuning and understanding the output sure is. But that comes with time.

Markov Chains are basically advanced flowcharting. If they can teach PERT analysis to Business majors, then Markov should be easily understood.

Sure, there are some hardcore things, but the foundation is based on stuff many smart people should be able to learn in undergrad. Twenty years from now, when we are surrounded by "AI"-everything, we'll wish more schools did this. And I guarantee you'll wish your kids learned this in high school.

Yes, I am serious about that last comment. I have an 8-year-old son, and I'm already working on teaching him Markov Chains using sports. Yes, it's super simple, but easy enough to teach him the consequences of actions without getting into nested probabilities.
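
For a sense of scale, here's a minimal sketch of the kind of toy sports Markov chain described above - the states and transition probabilities are invented purely for illustration:

    import random

    # Toy Markov chain for a basketball possession. States and
    # probabilities are made up for illustration.
    transitions = {
        "dribble": [("pass", 0.5), ("shoot", 0.3), ("turnover", 0.2)],
        "pass":    [("dribble", 0.6), ("shoot", 0.4)],
        "shoot":   [("score", 0.45), ("rebound", 0.55)],
    }

    def simulate(state="dribble"):
        # Follow transitions until we hit an absorbing state
        # ("score", "rebound", or "turnover").
        while state in transitions:
            states, probs = zip(*transitions[state])
            state = random.choices(states, weights=probs)[0]
        return state

    # Estimate how often a possession ends in a score.
    runs = [simulate() for _ in range(10_000)]
    print(runs.count("score") / len(runs))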


You are describing those models as they are applied (and as many introductions present them), not how they can be trained. To understand whether those models converge, or how to calculate the solution to even a basic linear regression, you need graduate math: matrix algebra, differential calculus, topology.
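
For reference, the closed-form least-squares solution under discussion is beta = (X^T X)^(-1) X^T y; deriving and justifying it is where the matrix calculus comes in, but the computation itself is short. A minimal numpy sketch (the data here is synthetic, purely illustrative):

    import numpy as np

    # Ordinary least squares via the normal equations:
    # beta = (X^T X)^(-1) X^T y. Synthetic data for illustration.
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
    true_beta = np.array([1.0, 2.0, -3.0])
    y = X @ true_beta + 0.1 * rng.normal(size=100)

    # lstsq is numerically safer than forming the inverse explicitly.
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta_hat)  # close to [1, 2, -3]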


To me, your statement is like saying an airline mechanic needs to have a pilot's license because he needs to know how the rudder affects the flight.

There are three different roles that I think you are conflating: Designing, Building (and maintaining), and using.

Each has a different skillset. But to think that you need to determine the convergence of models makes no sense to me. Why can't an undergrad build a simple classifier?

And in finance, CAPM uses regression. Every single finance undergrad in the country learns it. Are you implying that they can't, because they don't know how the model converges?
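
(For concreteness: the CAPM estimate those undergrads run is a one-factor regression of excess asset returns on excess market returns. A toy sketch - the return numbers below are invented:)

    import numpy as np

    # CAPM: excess asset return = alpha + beta * excess market return.
    # Returns are invented for illustration only.
    market_excess = np.array([0.02, -0.01, 0.03, 0.01, -0.02, 0.015])
    asset_excess  = np.array([0.025, -0.012, 0.04, 0.008, -0.03, 0.02])

    # polyfit with deg=1 returns [slope, intercept] = [beta, alpha].
    beta, alpha = np.polyfit(market_excess, asset_excess, deg=1)
    print(f"beta={beta:.2f}, alpha={alpha:.4f}")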

Seriously, it really frustrates me when people look down on undergrads and say "Oh, they can't do it". Baloney. I've met tons of really smart, focused undergrads. And some pretty f-ing stupid grad students (at a PhD level to boot!).

Two decades ago, you needed some pretty heavy CS to build a web page. Today, my mom can do it. With your line of thinking, that wouldn't be possible.


I like the metaphor, but I think it would be more appropriate to flip the roles: it’s more akin to asking the person using the models (the pilot, in your simile) to understand how their tools work (i.e. have a basic mechanical understanding). As it happens, that’s precisely what we ask of pilots: they need to have enough mechanical engineering knowledge to inspect the plane in detail before takeoff, as well as to understand, diagnose, and fix a problem in the air.

To come back to modelling: every day I see people with a cursory understanding of models misusing their tools and not noticing blatant problems. Just today:

- someone using a random forest to predict a quantity (which you can do; it’s often nicknamed “random forest regression”), so the first thing they did was to turn the quantities to predict into as many categorical variables;

- someone else, on the same problem, didn’t see why having a massively higher mean square error on the out-of-bag sample compared to the test sample was suspicious.

I’m not saying that five years of university will solve that, but I’ve noticed that whether someone takes the time to dig into the model, vs. assuming it’s like a car -- you only need to turn the key and go -- is strongly correlated with whether they make those mistakes.


I don’t see how building a web page is any different now than it was in 1998; if anything, it seems like everyone has made it harder to build correct pages.


If you stick to the basics of static html content + styling it's actually a lot easier because there are far fewer browser 'quirks' to work around nowadays (you rarely need browser specific code), and things like flexbox are a lot more intuitive than hacking around with floats.

I'm a front-end dev (amongst other things), so I'm pretty comfortable with React, Angular, etc. And there are definitely cases where they make sense. But simple static or server-side rendered sites are much simpler now.


Unless you mean some sort of differential topology, those are rigorously covered in any respectable math undergraduate program.


I believe that’s the point of contention: if you’ve gone through those programs, you are a graduate.


To be honest, most of those are covered at a good-enough level in the sophomore and junior years of a math undergrad. You don't need measures, differential geometry, or even epsilon-delta analysis to do ML, which pins the calculus requirements pretty much to whatever proper multivariable calc class one takes in sophomore year.

Edit: if my school wasn't so obsessed with teaching CS majors diffeq (probably just as a gpa filter...), they could already fit in the requisite math for a solid ML understanding


I would argue that you need measure theory and differential geometry to understand Support Vector Machines and the kernel trick properly.

I think my contention is less about things like being formally introduced to ‘epsilon-delta analysis’ (not sure what that is) and more that people trying to cut corners by skipping a semester of differential calculus tend to also skip a big part of the explanation of how models really work. They tend not to grasp what convergence is, get very confused in higher dimensions, and assume ‘harmless’ short-hands like: “you should always normalise your data” (in some cases you need to, but why remember why, just do it) or “as long as it’s not overfitting, the model is fine” -- without really much recourse when things are not acting as expected.


You need differential calculus in R^n, but there's no need for the full force of differential geometry. Also, I don't think that measure theory gains you much in terms of understanding, but it certainly is needed to do some proofs the proper way.

I agree that cutting corners is something I would be super skeptical of in this degree. It should really be an offshoot of a mathematics program, not a CS program with the bare minimum mathematics included. Its end goal is probably PR, a money grab, and pumping out students that are really attractive for doing analytics grunt work.


Yeah, well those are not graduate courses, with the possible exception of topology (for a non-math major). Those are 1st and 2nd year undergrad engineering/CS courses.


I had courses in each of those topics as part of my undergraduate math degree. There’s no reason an AI degree couldn’t include those as standard curriculum for undergraduates.


That's undergraduate math at most universities.


This is completely untrue - machine learning is largely undergraduate mathematics - in fact a lot of the linear algebra is commonly taught in high school.

Definitely agree it's potentially narrow, but there's absolutely nothing wrong with that.


One of the things that we should do as science progresses is open it up to more generalists as we learn its rules and how to teach it more effectively. Quantum Mechanics used to require expert knowledge to learn and use, but today it is a standard part of undergraduate physics education, and all advanced knowledge in any field will eventually follow.

One of the signs of how developed a subfield of a science is is how easily it can be taught to non-experts.


I agree. Many of the statistical concepts can be extended from advanced undergraduate stats too. I can see how the high salaries paid in these jobs would lead to some serious gatekeeping though. My concern is that these courses will likely be taught without any deference to the ethics involved in the work done, and I think that will play an increasingly important role in years to come.


Most undergrad engineering worth its salt will require an ethics course and humanities classes broaching the topic.

The main issue is that a lot of people do not take away the intent, or can fully answer questions correctly about the intent but not actually care. You can lead a horse to water but you can't make it drink.


>Most undergrad engineering worth its salt will require an ethics course and humanities classes broaching the topic.

Someone I knew in college made bank writing papers for engineers in those ethics courses.


Ha, I actually think the complete opposite will be true. Here in New Zealand I got a degree in neuroscience without doing any papers outside of core science, whereas in the US I would probably have had to do humanities-based courses etc. (which I think is a fairly good thing) - and it's not unlikely that these will come to include topics in ethics.

Also ethical review boards for other areas of science are very well established, and it's not unrealistic to imagine that extending towards machine learning as well.


Engineers need not get a postgraduate degree, but they are expected to be held to (and are legally held to) very high ethical standards.


Linear algebra, some calculus and probability theory

The bar for being a Deep Learning practitioner is actually lower than for being a proficient compiler or database programmer.


> AI and Machine Learning require graduate-level mathematical and computational skills

Yeah, not really. A lot of day-to-day work in ML requires rudimentary math, at most. I know PhDs who quickly get discouraged with ML because they're suddenly spending 95% of their time doing the grunt work. It would be a boon if we could hire non-PhDs who are competent in the fundamentals of signals, algorithms, statistics, and experiment design.

If you're aspiring to work in ML, what major do you choose now? Statistics? EE? CS? Math?

None of these are ideal. If you're doing CS, you're probably too busy with compilers or DB courses to get a proper education in signal processing, information theory, stochastic processes, etc. If you're in EE, you're too busy soldering circuits to learn about data structures, algorithms, or software engineering. There's a lot of room for improvement here.

Even at the graduate level, most of our EE interns and new hires can't solve FizzBuzz, while the CS majors can't properly design a scientific experiment to save their lives.


So essentially what we need is an "academic CSEE" degree, where you replace courses on industrial topics like application design and databases / circuits and electromagnetics with these theory-based classes from both departments.

I'd also suggest adding some systems neuroscience courses in there too.


Electromagnetics is all theory, so much so that it's been dropped from most EE curricula for being too difficult.


I wish my school would have done that, those were terribly intensive and not really stimulating like signals was.


I ate it up, but then I’m an RF/Microwave EE. Grad level EM is even worse.


I think you could get away with swapping out upper level algorithms and systems courses and swapping in statistics and ML. I've used my OS class 0 times in my career. The class I've used the most was the second level statistics class I took for my econ minor.


definitely, and your algorithms class was probably industry-driven and an exposure to what you should be doing to write code in the workforce, not teaching you to come up with novel algorithms.


I wish undergrad EEs knew how to solder.


I agree that AI/ML is kind of narrow for a BS degree.

I think there is room for a specialist field called numerical programming or scientific programming: someone who knows the basics of numerical programming, math, statistics, data science, computational modeling and simulation, DSP, etc., and can apply those skills to multiple different fields, including machine learning.

The skill level needed to work independently usually requires at least a master's, but there could be a BS-level degree as well. Today the problem is that you have research PhDs doing basic grunt work because you can't just hire a coder. All they know is web stacks, Android, and SQL.


The narrowness of a niche is wholly dependent on how much demand there is for the thing in the grander scheme of things. Programming used to be incredibly narrow, and I would bet ya there were people complaining along similar lines back when engineering meant just mathematics and physics. Clearly the AI and ML rift has been growing, partially evidenced by this program; enough for it to no longer be considered "too narrow for a BS".


> I'm not fundamentally opposed, but I think this is akin to creating a "Condensed Matter and Nanophysics" undergraduate degree alongside "Physics."

Isn't this what Materials Science is? A truncated version of physics for those with a greater interest in application than theory. Arguably most engineering derives from hard science that is converted into a form that is more easily applied.


Materials Science sounds more general than "Condensed Matter and Nanophysics", but akin to AI, which fits your larger point.


Absolutely false; I can't believe this comment is upvoted. Undergrad Linear Algebra & Statistics is definitely enough for ML. In fact, they taught introductory ML at my old high school in AP CS, years ago.


Maybe for an intro course, but a degree usually means you are ready to work in that field. I’m not sure what ‘field’ that is but I assume they are looking at it for AI research.


There are AI jobs open to BS grads that are currently filled with BS CS grads. These BS AI grads are just more targeted towards those positions.


Okay, that is the theory.

Here is the syllabus:

https://www.cs.cmu.edu/bs-in-artificial-intelligence/curricu...

Do you think any part of that is unreasonable or putting the cart before the horse?


They did a good job on this syllabus. Something had to give, and it is math and engineering depth. No 300 or 400 level courses, like networking or OS. Skipping some close to the metal stuff is probably fine, but I am surprised not to see at least some distributed computing or systems engineering skills. I'd be concerned that a grad could work well within an ideal data env but would get blocked by not being able to engineer processing pipelines.


I've worked with brilliant statisticians who would not be able to engineer processing pipelines.


Absolutely. It just requires an environment where other folks are building, maintaining, and adapting data infrastructure to new needs -- e.g. a large company with a decent data engineering investment.

Also, in those specialized roles, a statistician with an advanced degree is likely to get more leverage. So I think it's smart for undergrads to work on being "unblockable" in a variety of environments so they can build experience - which is the spirit of the standard CS undergrad there.


There's a big difference between being able to calculate the gradient of an activation function you just invented and apply it to a backpropagation algorithm or analyze it for convergence, and simply understanding what a gradient is and how it is used when training the model.

I find that the understanding and ability to visualize the mathematical concepts is the important part, for everyday practitioners. It is not necessary to be able to derive a gradient in order to understand the differences between sigmoid and hyperbolic tangent. Being able to do the calculation on paper is not a prerequisite for understanding the process.

The rigorous mathematics comes into play if you wish to advance the field as a whole, but it is not necessary to successfully design and train efficient models.
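
As an illustration of that practitioner-level understanding, here is a sketch (plain numpy, purely illustrative) comparing the gradients of sigmoid and tanh - something you can read and use without deriving it by hand:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_grad(x):
        s = sigmoid(x)              # derivative s * (1 - s), peaks at 0.25
        return s * (1.0 - s)

    def tanh_grad(x):
        return 1.0 - np.tanh(x) ** 2  # peaks at 1.0, so gradients are larger

    x = np.linspace(-4, 4, 9)
    # Practical takeaway: tanh passes stronger gradients near zero,
    # which is one reason it often trains faster than sigmoid.
    print(sigmoid_grad(x).max(), tanh_grad(x).max())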


No expert here, but I lived through times when calculus was deemed beyond the capabilities of high school students. This was clearly incorrect.

As I said, no expert in AI & ML, but I don't see anything that differentiates them from other domains earlier (and wrongly) regarded as beyond undergraduate work.


Would you consider an undergraduate degree in Physics the same as a graduate degree in physics?

No, I would not expect a person with an undergrad degree in physics to have nearly the same capabilities and understanding of a PhD holder in general.

Similarly, it would be silly to assume the alums from this program are going to be considered to be of the same caliber. Instead, they will fill roles in which having a richer math, cs, and stats background matters a bit less, as is the case for a large fraction of data scientist jobs. And yes, they would be remotely useful with just 4 decent cs classes, 2 stats classes, and 2 math classes.


Isn’t most AI work in research though? I think the problem most are having with this is we generally consider AI to be more prestigious than something like data science (which seems like it would be a better name for a degree in its own right).


AI is a broad field. Much of it is applied these days.

Even setting a very high bar for what constitutes "AI", I would concede that Google Assistant, Siri, Alexa, and Cortana are AI systems. How many jobs are there supporting those products today? How many will there be 5 years from now when these kids graduate?

But there are many other AI products out there, and what's more, machine learning is now widespread as a supporting tool in very many places. The data science field of today/tomorrow is ML and AI, and we definitely will need more practitioners.


I think I agree with you that the degree might be more aptly named "data science", but that's a relatively minor detail in the scheme of things.

I don't think I would claim most AI work is research, but regardless, I think there is a still quite a lot of work that is quite applied nowadays. For example, there are a huge number of special-use-case neural networks you can think of to make very specific decisions based on image data. Or, nearly every online retailer would love someone to be able to come up with a good recommendation engine for products on their site. While a PhD might build a slightly-better one, I'm fairly confident that an undergrad trained properly (no pun intended) could pull off something usable.


I disagree

The major sounds similar in intent to the Data Science major we introduced into our Bachelor of Computer Science at UNE (in Australia) last year. These days there is something of a branch in computing routes -- to express it in a very muddy way, designing and building the systems themselves (Software Development) versus applying machine learning algorithms, AI, and statistics across the data that systems produce (Data Science). And of course there are other branches and specialisms too, but for our particular market those are what we have at the moment.

The level is still clear -- it's an undergraduate degree, and they are coming out with a good undergraduate's knowledge of the area. So that's entirely appropriate. There's no need to shy away from stating what they've been focusing on (offering a named major). And we do find they are useful and in demand.


CMU's CS program is very rigorous and tough. A lot of CMU's undergraduate courses are actually graduate courses in other universities' CS programs; e.g., CMU's 15-213 course (sophomore level) is a graduate requirement in Stanford's MSCS program as well as CMU's MSCS curriculum. Looking at CMU's curriculum for this degree, it seems that they did a good job of not "skipping to the interesting bits."


Most CS programs have a lot of filler from the Math department and various other places - your third semester of Calculus, for example. So I think there is plenty of room for 6-8+ AI-focused classes, which would get you reasonably far. Especially if it's a comprehensive major that covers both Major and Minor requirements.


Not sure how you're going to have someone understand gradient descent and backprop without having a strong command of multivariable calculus.
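
To ground the discussion, here is what a minimal gradient-descent loop looks like (a toy least-squares problem; the data and learning rate are arbitrary). The multivariable calculus enters in the single gradient line:

    import numpy as np

    # Minimize f(w) = ||Xw - y||^2 by gradient descent. Synthetic data.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5])

    w = np.zeros(3)
    lr = 0.01
    for _ in range(500):
        grad = 2 * X.T @ (X @ w - y)  # the multivariable chain rule, once
        w -= lr * grad
    print(w)  # approaches [1, -2, 0.5]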


I'd assume you take multivariate, but skip programming languages, digital design, and operating systems.


Going from 1 variable to 2 or 20 never seemed like a big jump to me. I can see taking 1-2 weeks to go over it quickly, but you don't need a lot of problem sets to get it.

It's like learning a new computer language: it takes about as long to learn it on its own as to take a class in something else that happens to use that new language.


Line / surface / volume integrals are certainly a jump from one-variable integrals.


Not a big jump, if you conceptually stick to the 'area under curve' definition.

Analytically computing surface/volume integrals, yes, that is a big jump. But I don't understand why students need to memorize N algorithms and M substitution rules for that.


What is the straightforward "area under the curve" definition of a line integral?


Wikipedia has a nice animation for this.

https://en.wikipedia.org/wiki/Line_integral#/media/File:Line...


Okay. I am used to this being called a path integral, and the phrase line integral being reserved for the integral of a vector field over a curve. This does not, as far as I know, admit a straightforward definition as an area under a curve.


If you replace area under curve with volume under surface, it seems to me that the same intuition holds.
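
Numerically, that intuition is a short computation. A sketch (numpy; the curve and integrand are chosen arbitrarily) approximating a scalar line integral as "area under the integrand along the path":

    import numpy as np

    # Scalar line integral of f(x, y) = x*y along the quarter unit circle
    # r(t) = (cos t, sin t), t in [0, pi/2]. Since |r'(t)| = 1 here, this
    # reduces to the area under f(r(t)) against arc length.
    t = np.linspace(0.0, np.pi / 2, 100_001)
    f = np.cos(t) * np.sin(t)      # f evaluated along the path
    speed = np.ones_like(t)        # |r'(t)| = 1 on the unit circle
    integrand = f * speed

    # Trapezoid rule; the exact answer is 1/2.
    approx = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(t))
    print(approx)  # ~0.5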


Sure, and AI isn't a big jump from statistics.


I would say AI : Statistics :: Engineering : Physics.

Sometimes it is a big jump, sometimes it is all common sense.


I don't get this focus on 'maths via arcane problem solving' either.


They taught gradient descent in my lower division math courses in college.


Multivariable calculus is a lower-division math class.


Yep, sophomore-level math.


The CMU AI degree includes 3rd-semester calculus: one course each of numerical integration and linear algebra.

It cuts out the hardware/OS/compilers/networking part in order to make room for a very wide "minor" in AI.


At CMU plenty of undergraduates took grad machine learning (10-601 or 701), high achievers sometimes in sophomore year. CMU's ML grad program is a top grad program, and be sure not to move the goalposts to PhDs, defined as contributing new knowledge.


The same might have been said of a "Computer Science" degree when it split out of mathematics departments. I don't know, the jury's out, but there is a case for teaching what is an expanding area of knowledge with different emphasis than what came before.

An undergraduate degree obviously will be more cursory than a graduate program but even then it might help produce a crop of better thinkers for graduate programs with already solid background. There's nothing inherently preventing an 18-year-old from learning this kind of stuff.


I did not study computer science, but in my many years of self-study I've realised that Mathematics (especially Discrete Mathematics) and theoretical Computer Science essentially pursue the same objective using very similar tools.

I'm currently reading Turing's famous paper "Computing Machinery and Intelligence" (the one that opens by asking "Can machines think?"), and the way Turing explains computers, state machines, and Laplace's idea of a deterministic universe made me think about the current state of CS. It seems to me that CS faculties change their curricula to teach students more and more about practical programming and less about its history and theoretical foundations. I think that is not right.

Many of my friends are studying CS as their major, but it seems to me that they don't grasp the deeper meaning of the field. For them CS is about programming, about building apps. For me computer science is not about programming and computers, it is about creating an abstract world of computation that is independent from us.

Maybe it would've been a good decision to keep theoretical computer science inside mathematics departments and offer a computer engineering degree for practical applications such as programming.


Computer Science courses omit essential electrical engineering and transistor physics.

The issue is whether a field can be understood at a higher level, atop blackboxes. All abstractions leak, but some are good enough.

DL NNs are nowhere near being a science yet - it's empirical "let's try this architecture" work. But... you could have a trade-school technician degree in present techniques that can be applied. Like an electrician.

You could also have a degree aimed at preparing you for graduate AI - which would include a lot of mathematics.


I majored in AI before the current fad. It’s basically a concentration and depends on the University. It goes a little deeper on the AI side using electives.


Really? Was it an accredited program? Most of what I’ve seen before is that you get a CS degree but with a ‘focus in’ AI.


Of course it was. Accredited by the British Computer Society and the UK system. It is a Computer Science degree with an AI concentration. AI is a big enough subject for that. Why would it not be?


In my opinion, seeing that nobody actually understands the fundamentals of why the techniques used in ML/AI seem to work, it matters less that a researcher is not armed with the machinery of complex mathematics.

I suppose there is a use for precisely describing things that we do not actually understand, but I don't think it is a pre-requisite, at this early infant stage in ML/AI.


There are plenty of engineering undergraduate programs. What's the difference here? Civil, mechanical, computer, electrical engineering are all very math-heavy. Sometimes an engineering degree can take 5 years instead of 4. So be it.


>even remotely useful in these fields in four years ...Condensed Matter

I just looked up the Cambridge physics tripos and you do condensed matter in years 3 and 4. You can learn a lot in 4 years. Admittedly it's a specialisation in a general science degree.


Density functional theory, for example, is taught at undergraduate level as part of both chemistry and physics triposes. Probably materials science too, and it certainly used to be an option within earth sciences (as part of the mineral physics path).


You definitely do not need csci fundamentals to learn AI/ML. It’s more of a math discipline. Knowing what big endian means or the fundamentals of programming languages aren’t really needed when learning AI.


You could also say computer science is just discrete mathematics, and therefore people doing CS should rather study math.

But we both know in practice they're substantially different lines of work.


As my algorithms professor said of his programming project: a program is just a number, pick the right number.

At least he made it funny.


While you are right that a deep understanding of machine learning requires graduate-level mathematics, I see a trend where big companies (Google, Facebook, Microsoft) are trying to democratize AI and machine learning. Soon you will have people using machine learning tools and algorithms without understanding the underlying math.

Given the nature of machine learning, even if you understand the mathematics behind it, you cannot "reason" with a trained model. I think academia is now catching up with this democratization effort by producing engineers who can use those machine learning tools (even if they cannot design such tools/frameworks).


Would you consider this degree the bare minimum subject material covered to qualify as an "AI bootcamp" of sorts?


As someone with a BSc. in AI from a university that's been handing them out since the 50s, I, er, disagree.


No, you don't need the degree.

Denny Britz: he dropped out of school (his PhD) to go into industry.

Christopher Olah: same story; at least he doesn’t say much about his education (http://colah.github.io/cv.pdf).

Both are well-known for their expertise, and followed by a large amount of people in the deep learning community.

You can definitely become a great machine learning engineer who uses deep learning techniques without going to any university.

--- Ian Goodfellow:

"One of my biggest heroes is Geoffrey Hinton. His PhD was in experimental psychology ( . Biographical Sketch ).

In mechanical engineering, I think you learn quite a lot of the mathematical tools that are used in machine learning, so you won’t be starting from scratch.

In general, we often find that revolutions in one field happen when someone from another field arrives with fresh ideas that developed elsewhere."

(https://www.quora.com/Would-you-encourage-people-from-anothe...)

How can someone with almost no technical knowledge learn ML?

Ian Goodfellow: "It’s important to master some of the basics, like linear algebra, probability, and python programming.

But you don’t need to know all of linear algebra (I don’t think I’ve ever used a QR decomposition outside the classroom), all of probability (we don’t usually do much involving combinatorics / re-ordering of sequences in deep learning), or all of python programming (a lot of the more obscure language features are actually forbidden at some companies).

I’d say maybe start by learning enough linear algebra, probability, and python programming that you can implement logistic regression yourself, using only python and numpy..."

(https://www.quora.com/How-can-someone-with-almost-no-technic...)
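
For a sense of how small that starter exercise is, a logistic regression in plain numpy fits in roughly twenty lines. A sketch (the data here is synthetic, purely illustrative):

    import numpy as np

    # Logistic regression trained by gradient descent, numpy only.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 2))
    true_w = np.array([2.0, -1.0])
    y = (X @ true_w + 0.5 * rng.normal(size=200) > 0).astype(float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w, b, lr = np.zeros(2), 0.0, 0.1
    for _ in range(1000):
        p = sigmoid(X @ w + b)
        # Gradient of the average cross-entropy loss.
        grad_w = X.T @ (p - y) / len(y)
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b

    accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
    print(w, b, accuracy)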

As a college sophomore, how can I prepare myself for artificial intelligence?

Ian Goodfellow:

"Take classes on linear algebra and probability

Take classes on how to write fast code that is friendly to the hardware you’re working with.

Take classes on how to write good high performance, realtime multithreaded code.

Read Deep Learning.

Pick a simple machine learning project that interests you.

Work through this project, and when you have questions about the material you read about machine learning, trying to answer your own questions by running experiments in your project codebase.

If your university has them, take classes on machine learning, computer vision, natural language processing, robotics, etc."

(https://www.quora.com/As-a-college-sophomore-how-can-I-prepa...)

*You can find all the courses online.


If you are an undergraduate in computer science these days, it is very hard to get into advanced AI/ML classes, which are typically reserved for graduate students. Back before the ML goldrush, a strong CS undergrad interested in AI could elect to take advanced coursework beyond the introductory AI class. Nowadays, good luck getting off the waitlist! Having an official "AI major" does at least tell students, "Hey, we are making it a priority that undergraduates have access to our rich AI curriculum."


I made the mistake my sophomore year at CMU of enrolling in a graduate-level economics class called "Game Theory" because I got my course prefixes incorrect, and assumed it was a CS class about video games.

I am now much more versed in Nash equilibria than I ever thought I'd be, but damn, that class took a chainsaw to my GPA.


Ha! Grad school at CMU took a chainsaw to my GPA after 1st year. Studying 16h/day and still didn't have enough time to complete all assignments and understand the material.

Awesome school though, would do it again.


Serious question, from someone who hasn't had to do it: how is it possible to productively study for 16 hours a day? Controlled substances? I don't last that long even on occasion, much less regularly.


It is not productive to study for 16h/day, or even 10h/day for an extended period. It is not only less effective for sustained learning or intellectual work than spending less but more focused time, it also leads to physical and mental health problems, and sometimes results in severe burnout.

The problem is that (a) students are young and many of them are quite inexperienced with managing their own time and work, (b) students are so stressed and sleep deprived that it is hard to introspect about process or get into a productive rhythm of focused productive work alternated with rest, and (c) there is often a workaholic student culture which creates peer pressure and presents the illusion that staring at a textbook for hours while already half asleep is the mark of a good student. For grad students (especially foreign students) sometimes there is additional pressure from abusive advisors.

It’s sad that e.g. MIT’s unofficial motto is “I hate this fucking place”.

Unfortunately the same kind of culture extends into some people’s professional lives. I dated a lawyer for a while who was a few years out of law school and working for a big firm, and with all the hours she needed to “work” and the few hours she could sleep each night her ability to think through complicated legal arguments or write briefs was severely compromised; sometimes she would be “reading” for an hour before bed with her eyes half closed, barely able to parse the words on the page. But that’s what the firm expected (and by their standards she was performing well), so she felt she had no choice.


All joking aside, CMU does very much have a problem with promoting and fetishizing a culture of stress.

I only have anecdotal evidence of this, but more so than just about anywhere else, CMU as a university prides itself on a very difficult workload and a lot of the solutions that students come up with are extremely unhealthy.


I have personal experience (EE, CE, AM '88). CMU sucks in terms of student experience - at least it did then. At least the physical plant is far superior now. The place was fugly in the 80s. The teaching was weak. I felt that I was paying for a reputation.

And on top of that, in our freshman year they had us all come to an auditorium to tell us this: "Sorry, we have to fail half of you out because there are too many of you. Look to your left and then to your right. Those two students will be gone." Any EE in that class can testify to the truthfulness of what I say.


I'm curious too. I've done controlled substances for studying and still can't do 16 hours a day.


Everyone takes Adderall. Not just that, but a lot of the time is spent explaining things to your classmates, getting answers to match our expectations, etc.

So it's not like 16 hours of reading and trying to understand the material, more like 16 hours of school work.


I've always wondered: is the information retained successfully in the long term even if acquired using Adderall?

I have taken Adderall for a few months, and I developed serious memory gaps. I have little recollection of several events during that time.


What's it like with Adderall? Does it impact your sleep cycle, or just allow you to concentrate?


I would do it again, too!


I did the same thing as a grad student in chemical engineering — I thought it would be fun to take graduate-level quantum field theory as an elective. Despite the blow to my GPA, I really enjoyed the course and don't regret taking it at all.


While there may be a lot of demand for AI/ML, I'm concerned about whether there are enough students with the proper foundations in math and stats to do well. New classes such as Data Science seem to just instruct students in how to use algorithms, not why they work.


Looking at the curriculum CMU has put together (https://www.cs.cmu.edu/bs-in-artificial-intelligence/curricu...), I see statistical fundamentals in the Math and Statistics Core. I expect it should handle the why and the how.

(... though I get the sense from the outside looking in that a lot of machine learning at this point is still a little bit alchemy, so there may not always even be a firm "why" answer to give. All the more reason to give students firm general fundamental groundings so they can seek out those answers).


I see students graduating with degrees in CS/IT to fill the demand, and quality has declined... Translating to AI/ML, I would assume the same. I think CMU would retain quality, but the local universities will start churning out AI degrees like butter.


Are you talking about undergraduate courses, or non-accredited certificate courses? We've had stuff like A+ / Cisco / Java certifications for decades. They fill an important niche, but they aren't how industry leaders are trained.


Only referring to my experience as an undergrad: most students seemed to be turned away by the mathematics.


Honestly, consider the flip side as well. This is just going to inflate the bubble more and create more unqualified candidates. An AI degree means they're going to be churning out candidates who don't know what TCP is or what a context switch means. Also, the guy with the (graduate or PhD) AI degree from CMU, before this, handing you his resume, may have studied functional analysis and convex optimization in addition to learning about SVMs. The new guy did not, but he'll have a checklist of when to use an SVM and when not to, and be pretty good at Python. So, in a way, it's deflating the intellectual rigor of the field even further, considering CMU's reputation. Of course, they think it's beneficial to their CS program for whatever set of reasonable reasons we can likely guess at.


Having graduated with the CMU CS undergrad: if I could go back in time and replace about 80% of the language theory classes that were mandated by the curriculum with regression and machine learning modeling classes, hell yes I would.

(And this is coming from someone who TA'd one of those language theory classes ;) ).


That I will agree with!


How many people really need to know functional analysis to practice machine learning? How many developers need to know TCP coming out of school?


You're reading that into my post. Most do not need to know functional analysis. Some absolutely should. My point is not that one needs to know functional analysis to get a machine learning job, it's that one of the top AI institutions is decreasing the intellectual rigor of its "average output" in that domain.

I find it scary that you're asking the second question. I think any accredited university handing out CS diplomas should make sure their graduates know what TCP is, especially CMU, which will theoretically be sending its graduates to good companies


Is an understanding of TCP necessary to do AI/ML? As someone who does work in ML (and has no formal background in CS but in physics), I see it as being mostly a combination of statistics and numerical computing. CS concepts outside of algorithms don't really come into it all that much.


Very large models train and evaluate across networks. Data pipelines are built across networks. You are a large handicap to a small team if you don’t know how networks work. I think a physics background is nothing more than checking off the math checkbox, which is certainly important


> candidates who don’t know what TCP is or what a context switch means.

Nope, look at the curriculum again.


Are you trolling?


I did my undergrad in CS at CMU, and have first-hand experience of what’s covered in the core courses, which are also requirements for this new program.

Perhaps you should take a look at the curriculum again like I told you, instead of spewing out falsehoods like “churning out candidates who don’t know what tcp is”.

You’re not entitled to your own facts.


At CMU you took no courses in operating systems? Algorithms? Computer hardware or logic? Compilers? Graphics? Databases? Web programming? Distributed systems? Networks? Parallel/HPC? Language theory? Security/crypto?

Because these students will take none of these courses, they will differ significantly from those with a BS in CS. But their AI skills still won't run deep enough to make them expert there either. At best, they'll be conversant with a couple of foci in AI, but not in many other AI areas.

In fact, this program seems custom-made to prep for work most typical at Google, Facebook, Microsoft, and not that many others -- doing pattern-recognition forms of ML on large data. Yet they'll lack the skills typical of today's data engineers (basic ML plus HPC/distributed/throughput, networking, and DB/sysadmin) or typical of data scientists (basic ML with a BA in statistics, plus facility with RDBMSes).

Will the absence of these CS skills hamper their competitiveness one day in most mainstream general computing software jobs? I think it probably will.

Therefore, if those with this degree don't spend their entire careers working only in big-data areas of AI, they will likely be at a competitive disadvantage to those with broader skills in CS.


> At CMU you took no courses in operating systems? Algorithms? Computer hardware or logic? Compilers? Graphics? Databases? Web programming? Distributed systems? Networks? Parallel/HPC? Language theory? Security/crypto?

The core that's required in both programs (15-122, 15-128, 15-150, 15-210, 15-213, and 15-251) is very broad and touches pretty much all of those areas. To be clear, hardware design isn't covered there, but the (x86-64) programmer's side of memory management and the CPU is covered well.

Other than algorithms, dedicated courses in all of those areas are offered as electives and you pick some of them. I recall taking OS, security, digital design / RTL (which was actually in the ECE department), web, and logic - but I could have subbed OS with Parallel/HPC, for example. The BS in CS curriculum[1] requires enough free and area electives that students gain depth in several of those areas.

> Because these students will take none of these courses, they will differ significantly from those with a BS in CS.

The BS in AI curriculum[2] only requires two CS-wide electives, so students in that program will indeed have depth in fewer of the areas. This is why these students will receive BS in AI degrees, to differentiate them from those who receive BS in CS degrees. I think you're in agreement with CMU's decision here?

That said, with the broad base of the core classes like 15-213 and the second half of 15-210, plus implementation details covered in the AI/ML courses, I'm sure no graduate of that program would struggle with HPC, networking, or DB/sysadmin in the workplace, or in a graduate program in AI.

Ultimately, there's only so much you can fit into four years, but I'd bet it would be easier for someone from this new program to deepen their skills in those areas, than it would be for most BS in CS graduates to add ML skills.

[1] https://csd.cs.cmu.edu/academic/undergraduate/bachelors-curr... [2] https://www.cs.cmu.edu/bs-in-artificial-intelligence/curricu...


https://www.cs.cmu.edu/bs-in-artificial-intelligence/curricu...

I see an Introduction to Computer Systems course which looks like the only thing that could potentially teach networking, but from looking at the curriculum, it does not. Can you please find the course on this list that teaches networking, even if it isn't in-depth?


15-213 Introduction to Computer Systems is the one. Anyone who passed that class knows what context switching means, and what TCP is.

Whether they still remember it many years down the line is a different matter :)


I stand corrected; there is a lecture referencing networking that I missed upon first glance


So at CMU it's quite easy to get into grad classes, and you can start doing it as a freshman - there are generally a number of juniors and above in masters/PhD classes.


Couldn't the same be accomplished with a minor or concentration as is common in many universities?

(Registering formally for a minor gives you preference)

I don't really have an issue with this degree, but I think it's mostly a marketing ploy to have a "major in AI" versus "AI concentration" or "AI minor" (set of electives alongside the normal CS degree)


What's the "ploy"? It's got about 4-6 more on-focus classes than a traditional "minor"


> "Hey, we are making it a priority that undergraduates have access to our rich AI curriculum."

Maybe these AI classes will be reserved for AI majors, so regular CS majors still have to pray for getting off the waitlist.


This. I think people forget that majors are also an operational consideration.


The problem with CS education in general is that it's very hard to get teachers, since anyone who can teach CS could earn 2-100x the income in industry.


Depends on the school. At a good school you can generally always find the classes to learn what you want to learn.


So much negativity in this discussion. I'm very surprised to see this from the HN crowd. Guess what? Computer Science, Engineering, etc. are getting more complicated and complex. So seeing a discipline (assuming this is CS) get broken down into more distinct groupings is actually a good phenomenon. Of course there's always foundational knowledge that is important to learn - but with time I feel like that information becomes de-emphasized to focus on higher levels of understanding and knowledge.

On another note, CMU was always very good at cross-disciplinary studies. (Building Virtual Worlds comes to mind). As the article points out, this new degree bridges over to the humanities and ethics. With where we are today with AI, isn't it a good thing to train the AI developers of the future to think about the implications of their work?


I just hope that they still include software engineering etc. I've interviewed many "AI specialists" who probably know AI pretty well (I'm not qualified to make that judgement) but they can't code their way out of a paper bag. Basic data structure errors, terrible organization, etc.


Internships work wonders here. No amount of coursework can substitute for doing the real thing. IMO internship placement, not coursework, will always be the best way to solve this problem.


I completely agree. CS started often from the Math department, sometimes from the Engineering department. Further specialization is to be not only expected, but welcome, as complexity increases. Is there a better way?


I totally agree. I really think it is heading toward a med school styled specialization. And with that will come the added prestige. CS is so remote from pure/natural sciences, it screams for standalone schools, like law and medicine. And with it, prestige and power. Why are people against that?


Carnegie Mellon was the first university to offer a PhD in Machine Learning (via the Machine Learning Department which, again, I think is relatively unique in its existence). Regardless of how you feel about the hype, they made an early bet on the field and adding an undergraduate degree seems consistent.

Source: https://www.ml.cmu.edu/about/index.html


What I don't yet understand is how this new AI program differs from machine learning. Is AI about broader questions about conversational interactions and interpretability, closer to the Lisp heydays of 5 decades ago, or more about applications than theory?

Edit: years to decades


From the article:

> The bachelor's degree in AI will focus more on how complex inputs — such as vision, language and huge databases — are used to make decisions or enhance human capabilities

My understanding is that AI is more about applying ML concepts to mimic human intelligence.


This seems like a publicity stunt. It sounds like a CS degree where the electives are predetermined. Why couldn't they just make this a concentration when it's so intimately intertwined with CS.

The extra overhead graduates will have to deal with doesn't seem worth it.

"AI majors will receive the same solid grounding in computer science and math courses as other computer science students. In addition, they will have additional course work in AI-related subjects such as statistics and probability, computational modeling, machine learning, and symbolic computation."

They even say "other computer science students".


It's not just predetermined electives - it removes several of the CS upper-division breadth requirements to allow more depth in AI/ML/stats. (I posted a comparison below, so won't repeat it here.) It's a pretty decent change to serve the students who really want to push more on ML.

The key thing to look at is what the CS major requires that the AI major doesn't -- in 8 semesters, you can only fit in so many classes.


Parent's post for convenience: https://news.ycombinator.com/item?id=17041413


I looked at your description, and compared the requirements myself.

I'm even more convinced this is for publicity (or other political reasons). There's nothing there that couldn't have been done by very slightly altering the CS requirements.

If CMU had done that instead, students would have the ability to take more AI classes, but they wouldn't be at a disadvantage if they decide (or need) to work in another field of CS.


As a CMU professor, in case it wasn't clear: We, in general, felt fairly strongly that we did not want to radically alter the meaning of a CMU CS degree, i.e., that our B.S. in C.S. students do come out with the degree having hit some depth in systems, in PL, in algo and theory.

We also felt fairly strongly that supporting students who wanted something different from their education, which still falls under the broad CS umbrella but is different in some important ways -- was important.

These things are surprisingly hard to juggle. Eight semesters, four courses per semester, juggling cross-university requirements, some amount of personal electives and fun, and an intensive set of major courses, doesn't actually leave much wiggle room, especially when you include the prerequisite dependency graph of those courses.

Hence, it's a different major, because it reflects a quite different set of skills that {employers, grad schools, whatever} can count on the graduate knowing.

(There's also value in providing a roadmap for sequencing these things, again because of the prerequisite chains, but I concur that that alone isn't a reason for a major.)

And yes, of course it's all a continuum. We call AI a separate major, alongside things like HCI and computational biology. We don't have an "operating systems" major. If you like systems a lot, you still have to take all the normal algo/PL/etc. breadth requirements.

It's all a judgement call. The feeling here is that AI/ML are starting to contain a sufficiently different set of core skills that it was worth breaking them out into their own major instead of just saying "eh, go take some electives, and try to fit in all of your interests while _still_ taking all the other CS classes." Because that's what we used to suggest, and the students rightly pointed out that it wasn't possible to do it right within the existing degree framework, at least, if you wanted to sleep.


>we did not want to radically alter the meaning of a CMU CS degree

That's just it. No radical alteration was required. You're already saying that: "AI majors will receive the same solid grounding in computer science and math courses as other computer science students." But if CMU had added a new concentration or specialization to the existing major this story wouldn't be on the front page of HN.

There is no way that at some point in the discussion "This will be a big publicity win for us" wasn't brought up by someone.

In general I think it's bad advice for undergrads to pursue hyper specialized degrees. I think it's a bad idea when engineering schools do it with things like robotics engineering, and I think it's a bad idea when CS departments do it with AI. Specialization is what grad school is for--this isn't the UK. I also think that schools that offer these degrees are doing a disservice to their students.


CMU doesn't generally craft their degrees for industry-marketability; even the CS degree operates under somewhat of an assumption that they're training you to be a CS grad student or professor, not a software engineer. You can find your way out of that program without ever having touched C++, for example.

I think you're greatly underestimating how different the CS curriculum would become if they tore out functional programming above 15-150, OS, and Networking.

Consider the flipside: if they bent the CS degree instead of introducing a new AI degree, could higher-learning institutions continue to trust that a CMU CS undergrad had a solid foundation in functional programming, discrete mathematics, and systems theory?


>CMU doesn't generally craft their degrees for industry-marketability;

I don't think a CS degree should be a trade program, but avoiding actively harming students' job prospects by adding a few more electives isn't quite the same thing as crafting their degrees for industry-marketability.

>they tore out functional programming above 15-150

I'm looking at the requirements for the BS in CS right now. I don't see any functional programming requirements above 15-150.

>OS, and Networking

It looks like neither is required right now. Here's the relevant section.

    Choose 1

    15-410: Operating System Design and Implementation

    15-411: Compiler Design

    15-418: Parallel Computer Architecture and Programming

    15-440: Distributed Systems

    15-441: Computer Networks

    Others as designated by the CS Undergraduate Program

> if they bent the CS degree instead of introducing a new AI degree, could higher-learning institutions continue to trust that a CMU CS undergrad had a solid foundation in functional programming, discrete mathematics, and systems theory?

Looks like the functional programming and discrete math requirements are the same.

Systems is an overloaded word, so I'm going to assume you mean software systems, since that requirement is what is removed. The systems requirement is already just "choose one" from the above list. I don't think taking one networking class means you have a solid foundation in systems theory.


I stand corrected: since I took the curriculum, functional programming requirements seem to have been substituted with an option to do higher-level systems-engineering electives (such as 15-414). And the systems elective has been expanded to include parallel and distributed systems.

The key difference on the deep-theory side is that CS and AI appear to swap out deep-diving into discrete math for deep-diving into statistics and statistical modeling. I'd consider those different enough to warrant separate degree tracks, personally.

(Your opinion of networking is noted but I do not share it, being somewhat familiar with what that course asks of students. It's every bit as preparatory as its sibling 15-410 class ;) ).


>The key difference on the deep-theory side is that CS and AI appear to swap out deep-diving into discrete math for deep-diving into statistics and statistical modeling. I'd consider those different enough to warrant separate degree tracks, personally.

What discrete math classes were removed from the AI degree?

>(Your opinion of networking is noted but I do not share it, being somewhat familiar with what that course asks of students. It's every bit as preparatory as its sibling 15-410 class ;) ).

I looked over the syllabus and assignments for a section of that class. It looks like a bog-standard networking class (bog-standard for top-tier schools, that is). It's an elective. You can take an OS class, a compilers class, or a networking class. I don't think there is some intersection of knowledge/skill between those 3 classes, the absence of which would give higher-learning institutions pause.

My institution required that you take both an OS and a networking class before being admitted for graduate study. It's one thing if they require OS, and networking, and compilers. That they don't do that says to me that they don't consider them critical classes, since any given graduate could be missing any 2 of them.


We actually have a set of criteria for what makes a qualifying systems elective. As with many things at CMU, we don't generally care what details you learn. We care greatly what higher-level concepts you get exposed to, and the systems courses are the place we try to focus on the development of abstractions; modularity; isolation; reasoning about failures and complexity; integrating security concerns. They're also the courses where students are required to work on projects large enough to blow out their cache -- multi-week or month projects that force you to think reasonably about how you divide your design into pieces so that you can coherently reason about the ensemble.

We're pretty much equally happy if you hit layering in the network class or thinking about the filesystem and kernel VFS layers in the OS class - or the modular structure of a modern compiler. Tackling the idea of reliability through replication in distributed systems (via a lot of different mechanisms, but with a decent dose of Paxos), or via the Reliable Storage module in 15-410, or in DB. Getting additional hardware architecture exposure through compilers or the parallel class. Thinking about communication using a fast local interconnect (parallel), the internet (networks & DS), or IPC (OS). Compilers can be more or less of a systems course depending on who teaches it, but it's generally got such a strong architectural component that it flies.

It's much like programming languages. We don't care much if you graduate knowing a particular language -- any CMU CS graduate should be able to pick up a new language in short order. We care greatly that you've been exposed to a mix of programming styles and thinking -- imperative, functional, and logical or declarative, and can successfully use those tools to reason about code, program structure, algorithms, and data structures.
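
To make the contrast concrete, here's a trivial sketch (my own toy example, not course material) of the same computation in the imperative and functional styles; a logic/declarative version would instead state the relations and let a solver do the searching:

    # Sum of squares of the even numbers, written two ways.
    from functools import reduce

    nums = list(range(10))

    # Imperative: say *how*, step by step, mutating an accumulator.
    total = 0
    for n in nums:
        if n % 2 == 0:
            total += n * n

    # Functional: say *what*, composing pure transformations.
    total_fn = reduce(lambda acc, n: acc + n * n,
                      filter(lambda n: n % 2 == 0, nums), 0)

    assert total == total_fn == 120  # 0 + 4 + 16 + 36 + 64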

So no, we absolutely don't consider it critical that you take any specific systems course, but we do consider it critical -- for the CS major -- that you be exposed to the broad set of systems concepts we teach in them. That's why we start them in 15-213 and then reinforce them with one upper-division systems elective requirement.


>It's much like programming languages. We don't care much if you graduate knowing a particular language -- any CMU CS graduate should be able to pick up a new language in short order. We care greatly that you've been exposed to a mix of programming styles and thinking -- imperative, functional, and logical or declarative, and can successfully use those tools to reason about code, program structure, algorithms, and data structures.

I completely support this philosophy.

> Compilers can be more or less of a systems course depending on who teaches it

So what happens when it's less of a systems course? Do students taking that section lack a critical component of the CS major?


We encourage it back towards systemsy-ness. ;). (in other words - nothing's perfect, and we accept some occasional compromises in service of providing a diverse menu. Compilers has other value. If it got too PL-centric, we would just move it to the PL cluster, but it's generally stayed systems for the last decade.)


The logic and languages cluster, while not exclusively a functional programming set, practically covers a lot of what one might think of as upper-division FP concepts. Foundations of PL and Semantics are bread and butter PL theory, for example.


But none of those are required in that section. You could take Software Foundations of Security and Privacy, or Foundations of Cyber-Physical Systems.


Funny you'd pick those two. :-)

Software Foundations includes, for example, the use of type systems to ensure bug-freedom, program semantics, and more. Matt Fredrikson focuses on the intersection of formal programming languages research and security. For example, lecture 3: https://15316-cmu.github.io/lectures/03-safety.pdf

Cyber-physical is one of the hardest classes I've ever seen. Seriously - it combines very solid differential mathematics with logic and formal verification. It's a different set of skills than Semantics, but it combines a really solid dose of the same kind of logical and proof-centric thinking that advanced PL courses do. And rapidly runs into the logical underpinnings of both fields. For example, lecture 13: http://symbolaris.com/course/fcps16/13-diffchart.pdf

(In large part, this is because the course relies on identifying PL-style semantics of differential systems, and thus, students in the course end up being exposed to nearly identical proof methods as they do in the more straight-up PL semantics course, in addition to a lot of differential equations.)


It does look like there are portions of those classes that are similar to a PL semantics course, which in turn covers some of the concepts you'd cover in an upper-division FP course. It's still a bit of a stretch.

After I looked over the assignments for a section of Software Foundations, I don't think that taking an AI class instead would make much of a difference when it comes to having a solid foundation in functional programming, which is what the GP was talking about.


15-210: Parallel and Sequential Data Structures and Algorithms, which is still required, is purely functional.


That class is also required for the AI degree.


That's actually what I meant, but used the wrong phrasing - I can't edit it, unfortunately.


Well, that's basically what Computer Science degrees started out as, right? An Electrical Engineering degree with a bunch of software related electives? Even now, at UC Berkeley for example, "EE/CS" majors can choose a set of courses that end up almost exactly matching what the "CS" majors take, with not necessarily more EE. It's really just a narrowing of the electives.

There are enough AI related courses available now at many schools that it seems useful to separate "more general Computer Science" from "a focus on Artificial Intelligence", and similarly I think there's room for a separate major in "Software Engineering" as opposed to theoretical computer science.


Many CS programs came out of Math departments (that was the case in my program).

To me, it's more a difference in degree than kind. To be effective with AI you basically need the equivalent of a CS degree anyway. The same isn't true with EE/CS.


For me this just looks equivalent to adding a new major for every subfield of electrical engineering: signal processing, digital circuits, analog circuits... The truth is, a CS major could go take the same classes, and so could a computer engineering student.


I think there's just a (subjective) point where a field is different enough that it's worth distinguishing from the rest. I could see Signal Processing being a different major than Electrical Engineering, and digital circuits is basically Computer Engineering.


It shifts the core of the curriculum much deeper into statistical mathematics and away from the "how the bare metal works" and forest-of-languages pieces of the CS undergrad degree. In particular, you can't generally get the CS undergrad degree without capstoning your experience with either a networking or operating systems course; this new degree omits both of those from the curriculum (but adds machine learning, modern regression, and a capstone of either natural language processing or computer vision).


It's a few classes. There is no reason they couldn't have just changed the requirement to OS, Networking, NLP, or computer vision. And made the ML and modern regression classes prerequisites for the NLP and computer vision classes.

A few very minor tweaks to the CS requirements is all it would take. But you wouldn't get the fanfare of launching a new major.

The advantage for students is that if they decide to pursue some other CS discipline, they don't have to explain their weird degree.


(responded to similar thought on a different thread)


Did you not bother looking at the curriculum before making a comment about the curriculum?

https://www.cs.cmu.edu/bs-in-artificial-intelligence/curricu...


I looked over the curriculum pretty thoroughly. Look through some of the other comments in this thread for a more detailed look at the differences.

If you look over the curriculum and compare it to the BS CS curriculum, you'll notice there's nothing that couldn't have been done as a concentration.


> It sounds like a CS degree where the electives are predetermined

This is offered through CMU's School of Computer Science (SCS), so that is exactly what this is. CMU loves creating new sub-departments within SCS, for some reason; there are already 7 or 8.


New faculty titles!


It's always this way when new fields start. Give it time. AI leans harder on areas of knowledge that traditional CS treats as peripheral, like philosophy and linguistics. In a couple decades the field could be as separate from CS as EE is.


I agree, but I think this also supports the parent argument. CS and EE are not very different at the undergraduate level.


They were at Waterloo, my alma mater. CS was far more mathematical and EE had a much broader basis in physics. Comp Eng was the middle ground with overlap on both.


That's right, and that's harmful to people who want to work "higher up the stack" than EE. This new major helps address that problem.


They certainly are at UT Austin! But UT Austin has a separate Computer Engineering degree that's very similar to EE.


> It sounds like a CS degree where the electives are predetermined. Why couldn't they just make this a concentration when it's so intimately intertwined with CS

Unless they really mean "CS degree with surface-level knowledge of stats," this should really be an offshoot of the math/applied math department, not CS.


It might not be the perfect curriculum yet, but not every CS course should be necessary to be good at ML/AI (just as you don't need to know electrical engineering or physics to study CS).


It sounds like a CS degree where the electives are predetermined.

If history is anything to go by, they will graduate right into an AI winter.


you mean that blind optimization of a black box might run into problems that can't be solved by "adding more layers"? I would never have guessed.


I think I now know how the electrical engineers felt in the '50s and '60s.


So that's what that feeling in my gut is. I'm struggling to find the time to learn ML/AI. This is another red flag to me that I should have been on this long ago.


I think it's time, as Arnold says, to break the rules and do this at work.


AI is a large enough field to warrant a new, dedicated track. In practice, at present, AI R&D and academia are enabled (perhaps even fueled) largely by the "traditional" constructs of computer science.

In my opinion, as time goes by and advancements are made, the coupling should grow weaker - and so we'll reach a point where there is a clearer distinction between the two tracks, and they won't share much of the same curriculum, similarly to where we stand today with CS and electrical engineering.


Can you elaborate on this a bit more? I understand the meaning but it seems different, no?


Expanding even more upon a reply to a sibling comment: I think it's possible (but not necessarily so) that AI and "data science" are emerging as academic fields and practical disciplines dependent on, but distinct from, computer science. I think this is similar to how computer science emerged as a distinct discipline from both electrical engineering and math.

Electrical engineering and hardware design didn't go away when computer science emerged - quite the contrary. One could be a computer scientist or a practicing software engineer without having a full background in the underlying technologies (such as electrical and computer engineering, including computer architecture) and theoretical foundations (from the math side, although theoretical computer science clearly covers a lot of this). But for quite a long time, I think that computer science and the field of software were driving the most visible technological change in society and culture.

I wonder if that's no longer the case, and AI and data science are emerging "on top of" computer science. We may eventually have AI and/or data science academic departments that are distinct from the computer science department in a university. While there would certainly be an intersection of topics covered - just as there currently is with computer science and computer architecture and electrical engineering - I can see the needs of training a new AI and/or data science researcher and practitioner requiring a separate curriculum. I could see that happening if AI and/or data science become the dominant driver of technological change for society and culture in the same way generic "software" was during the latter half of the 20th century.

All of this is speculation, of course. But I think it's quite possible, and perhaps likely.


I think that the way computer science emerged from EE is totally distinct from what's happening right now. CS eventually abstracted away all of the electrical engineering aspects of the discipline, and as a result you need no knowledge of digital logic design to study computer science. AI/ML I don't think will ever be this way; you will always need CS knowledge in order to experiment with, run, and optimize your algorithms.


Maybe! But I can foresee a future where this is not the case. I can imagine an electrical engineer in 1955 saying the same thing about software.


So I thought about that scenario, and I just don't think an EE could reasonably say that circuit design is necessary to understanding assembly programming. Further, by the time CS departments were created, it was definitely obvious that CS was distinct from EE. At this point, I definitely don't think it's obvious that AI/ML will ever be distinct fields from computer science.


Consider that in 1955 (the year I chose above), Fortran was still two years in the future. At this point in time, people were still wrapping their heads around the concept of a library of pre-existing routines that new programs could call. Pre-Fortran, compilers for algebraic languages were even called "automatic programming." Also keep in mind that although the mid '50s was when software and computer science were emerging as a distinct discipline, it wasn't until the '60s that independent CS departments emerged, and it took even longer for that to be the norm in most universities. Animats and osteele in a sibling thread have interesting anecdotes in this regard. So I think it's quite possible that electrical engineers at the time couldn't see a future where people would think about software independent of hardware. (To see some documents from the time, I wrote about some that my family had a while back: http://www.scott-a-s.com/grandfather-univac/)

I don't think it's obvious that AI and data science will be distinct fields from CS. I just think it's quite possible, and if it does happen, this is the time people will point to when it started emerging on its own.


What do you mean?


That was when computer science emerged as an academic discipline distinct from the design and implementation of computers themselves; up to that point, an electrical engineer probably had the confidence that they were at the forefront of technological change.


He's referring to the newly-created "computer science" degrees. The thought must be: why create a degree for a subfield of my field?


I think he's referencing the beginning of "Computer Science" degrees. Before that, computers were researched by electrical engineers and mathematicians.


Yes. My undergraduate diploma says "Electrical Engineering - Computer Science". Computer science wasn't a full department yet.


Mine says “Linguistics”. I took CS graduate courses, but there wasn't an undergraduate major yet.

My father-in-law had a math degree and was a math professor, and then an EE professor — the latter while he co-founded an AI lab that hired physics major Richard Stallman and other non-CS-majors.


This feels like a more useful CS degree, IMO. I don't do anything like machine learning, but for both scaling backend services and building day-to-day business logic, I've gotten a ton of value out of knowing stats, logistics, and a certain amount of pattern recognition (ah, how terms go in and out of fashion). Take this stuff instead of the other sorts of electives I was picking from - UML Modeling, for instance - and I think you'll be set up with a good broad base for understanding not only code and machine learning applications but also broader decision-making at a business level.


AI has always felt like a buzzword to me, but I have to admit, I really like the approach taken by the AI course that I took and Peter Norvig's textbook: https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Mod... .

Mostly, I like the focus on breaking down the problem domain in a logical way, so you can decide on which approach to take. The problem with the other courses I took (Machine Learning, Statistics, Computer Algorithms) is that they are so focused on solving specific problems that they often didn't adequately define the problem domain. I'd really recommend both of Norvig's books to anyone interested in AI (in the broad sense).


I used Norvig's book in high school this past year for an independent study and it was a great help. It doesn't require too firm a grasp on advanced mathematics to understand the major concepts presented, so I think it'd be perfect for undergrad.


You had a whole class dedicated to UML modeling? That makes me laugh. I know it gets very complicated in high-level Java applications, but man, that feels like a waste of time.


In theory it was "system design" or somesuch.

In practice we learned nothing particularly useful about what to take into account when deciding where to draw boundaries, and just focused on what was easy to represent in UML.


I once co-wrote a book on UML, and I also think that a class dedicated to UML is not a good idea.

I still sometimes use UML sequence diagrams though.


Same. I hope I was the last generation of "OO waterfall design is the pinnacle of software engineering" thought.


Personally, I find the lack of logic programming, and logic in general, a rather notable property of the outlined curriculum. In fact, "logic" is nowhere explicitly mentioned in the course titles. Neither are "formal" and "method".

For comparison, at the department of AI in Edinburgh, Prolog was very important and even actively developed to such an extent that current Prolog systems are still hugely influenced by "Edinburgh Prolog" (the original version being "Marseille Prolog"). Also, theorem proving is an important area of computer science with many connections to AI.

In Vienna (TU Wien), the related Computational Intelligence curriculum also involves a lot of logic, Prolog, constraint solving and formal methods, which play an important role in many areas of AI. It is a graduate degree though and assumes familiarity with many of the topics that are mentioned in this curriculum.
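
For readers who never saw this side of AI: a minimal sketch of the constraint-solving flavor, in Python for accessibility (a Prolog or CLP(FD) version would just state the constraints and let the engine search). The toy problem is map coloring - assign colors so that no two bordering regions match:

    # Brute-force constraint solver for a tiny map-coloring problem.
    from itertools import product

    regions = ["A", "B", "C", "D"]
    borders = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
    colors = ["red", "green", "blue"]

    def solve():
        # Try every assignment; keep the first that satisfies all constraints.
        for assignment in product(colors, repeat=len(regions)):
            coloring = dict(zip(regions, assignment))
            if all(coloring[x] != coloring[y] for x, y in borders):
                return coloring
        return None

    print(solve())  # -> {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}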


>Also, theorem proving is an important area of computer science with many connections to AI.

I think this is only true if you use a different definition of AI than the one likely used here. Expert systems aren't considered to be very effective tools for useful "AI" anymore. You can't define a procedure to recognize a happy face in logic programming, at least not with any degree of efficiency.


> You can't define a procedure to recognize a happy face in logic programming, at least not with any degree of efficiency

This cannot be the test to differentiate between AI and "AI"?


Is recognizing happy faces the only use for AI? Who redefined AI to solely mean Machine Learning and excluded everything else that defined it before the AI winter?


No, but if you want to solve certain useful problems, ML works and expert systems don't. That's what I was getting at.


Math Foundations of Computer Science (15-151) covers proofs, combinatorics, etc.

https://csd.cs.cmu.edu/course-profiles/15-151-Mathematical-F...


Yes, this is the book from the course's page:

http://www.math.cmu.edu/~jmackey/151_128/infdes.pdf

This definitely has some aspects of formal logic in it and contains a few definitions about proofs, theorems etc. The logic-oriented aspects are covered in Appendix B ("Foundations"), which is currently unfinished.

Still, this is no substitute for, and clearly does not intend to be, a course on formal logic, let alone logic programming or model checking.


Prolog, constraint solving, and formal methods are pre-AI-winter forms of AI.


University of Toronto Engineering also is starting a Machine Intelligence option:

Engineering Science - Machine Intelligence Option

http://engsci.utoronto.ca/explore_our_program/majors/machine...

What is the difference between the Machine Intelligence major in Engineering Science, and an undergraduate degree in Computer Science?

While there are some commonalities between the Machine Intelligence major and what is offered through Computer Science, engineering offers a unique perspective.

First, graduates will have a systems perspective on machine intelligence, which integrates computer hardware and software with mathematics and reasoning. This enables a focus on algorithm development and on the relationship of machine intelligence to computer architecture and digital signal processing.

Secondly, graduates will benefit from an approach that encourages problem framing and design thinking. Design thinking is a method for the practical and creative resolution of problems, which encourages divergent thinking to ideate many solutions, and convergent thinking to realize the best one. Students will be able to frame and solve problems in the MI field, and apply MI tools to problems in many application areas. These include finance, education, advanced manufacturing, healthcare and transportation. This field is in a phase of rapid development, and engineers are well equipped to contribute as a shaping force.


I got my BSc at U of T, in Artificial Intelligence (and Cognitive Science)... in 2006. So ahead of the curve! This was right before the big deep learning explosion, at the very end of the last AI winter. Our lecturers spent a whole lot of time lamenting at the endless disappointments of AI research. I walked away deeply skeptical, and can't help but see the current ML hype as a glass half empty.


I'm currently about to enter my freshman year of college, and have been looking at majoring in Cognitive Science. What were your thoughts on it and its applicability to the rest of your life?


I think it very much depends on where you're doing it. There are many approaches to cog sci, so your experience will likely be different depending on your profs, their schools of thought, and the kind of research being done at your institution. U of T at the time was dominated by people doing work in embodied cognition (e.g. Evan Thompson), neo-continentalism / phenomenology, philosophy of mind, and a few dynamical systems people. I very much enjoyed it, but I later learned that this was a rather unorthodox take on cognitive science, not at all representative of how things are done elsewhere. I'd stay away from programs too rooted in more traditional experimental cognitive psychology, or developmental psychology. To me this seems incredibly dry, but I guess it depends on your own personal proclivities.


What were they disappointed in?

Also, what do you mean by "see the current ML hype as a glass half empty"? I take it that you are also disappointed with the recent research.

I'm just getting into the field, but it seems to me that, at least in computer vision, voice recognition, and text-to-speech, there have been great strides in recent years.


Have a look at https://en.wikipedia.org/wiki/AI_winter

I personally wasn't disappointed — I'm really glad I did this as my undergrad. AI research, however, tends to go through periods of hype followed by disillusionment. There's a history of promising developments that hit a wall or fizzle out in the long run. That's not to say there hasn't been progress (there's been tons!), but based on track record alone, it's prudent to be skeptical of overly optimistic pronouncements — we're probably much further from the "singularity" than one would think based on the current wave of ML hype, anyway.


You’re overemphasizing a particular historical fable. Yes, some AI hype has happened. We’ve all read the “summer project” of McCarthy, Shannon, et al.

But all these recent advances are not just hype. It’s real. Anyone who has been following this area for a long time knows that some big problems (like large-scale image classification) have been solved, and in an orderly way that builds on prior work going back to the 1990s and before. (My ML PhD was in 1995.)

Nobody here is referring to the “singularity” - that is obviously speculation that has nothing to do with the CMU program.


> we're probably much further from the "singularity" than one would think

For your - and my - sake I hope that that is true. If not, all bets are off and you might not like the end results.


The singularity of stupidity is already here. I mean, you have to be pretty stupid to take absurd sci-fi-nerd BS like the "singularity" seriously.


Ah, EngSci - the most freakishly difficult undergraduate curriculum on the planet.


Care to elaborate?


The University of Toronto Engineering Science program has a reputation for being very rigorous and difficult.


My undergraduate degree (from The University of Edinburgh) is Artificial Intelligence. I remember when I was visiting different universities in the UK back in 2000, Edinburgh was the only one I saw which offered AI as a "real" degree. Everywhere else it was a specialization which was tacked on in the final year of a computer science degree.

That seemed really odd to me then. Seems even odder now.


For those who might not know, the University of Edinburgh had a department of artificial intelligence in the 1970s. They were very forward thinking at the time -- it later got folded into the School of Informatics, but Edinburgh remains one of the best places to do AI/ML work.

edit: slightly awkward phrasing in my original comment above. Amended: They were (and still are!) very forward thinking.


I'm curious to hear about your experience. In my mind, artificial intelligence can't be separated from computer science. In fact, I feel like you need a full Comp Sci degree before you can effectively apply your skills to real world AI challenges.


I am currently studying this degree at the same university. A few AI concepts (NLP and Formal Language Processing) are introduced in the 2nd year. Other than that, all courses are CS/Maths. Keep in mind that at Scottish universities, students apply directly to their degree and, besides one or two courses per year (some, like Medicine or Law, often have no electives), students take only courses within their degree. This way, with most AI courses in 3rd and 4th year, students tend to have a strong enough grounding in CS principles and Maths for this material. That's not to say that the degree is perfect, or that it provides "real/production" AI, but it is certainly well done.

You can see the courses here - http://www.drps.ed.ac.uk/18-19/dpt/utaintl.htm


Oh cool, what year are you in?

If it's 3rd or 4th I was one of the judges at your systems design practical.


It's a little more complicated than I made it sound above. 50% of my curriculum was courses from the CS department, the other 50% courses from the AI department. It was actually possible to do AI without doing CS at all, though. There were AI and Linguistics, and AI and Psychology degrees, for example. There was basically no crossover in languages used in the courses taught by the two departments. CS was mostly Java with some C++ and C. AI was Matlab, Prolog, a little bit of Python for NLP, and a few other esoteric things. Some AI-side courses involved basically no programming at all ("Introduction to Cognitive Science" springs to mind).

That said: some of the AI students who didn't have to take any CS courses chose not to take the suggested ones... then had a really hard time in a few of the later courses. Computer Vision was brutal for them.

The situation is slightly different now. Edinburgh has a foundational "Informatics" (being the combination of CS, AI and Cognitive Science) curriculum. Students in those disciplines start with that, and then fully specialize in the later years of the course.

As a more or less completely unrelated side note: Sethu Vijayakumar, one of the judges for the last couple of seasons of Robot Wars UK, was my dissertation examiner.


Back in the 2000s - it was AI & CS (or SE) Joint Honours.

In 1st and 2nd year, you would do the same Maths and CS courses as CS/SE; you didn't get an elective - instead there was a separate AI course, which covered the basics.

In 3rd/4th year (honours years, as they're called here) - IIRC you'd have to take 8 courses in 3rd year (plus an individual project and a team project) and 6 in 4th year, plus your dissertation. Depending on the degree specialisation, you had to take some mandatory courses (CS only had to do Compiling Techniques and Algorithms; AI/CS didn't have to do CT, but they had to do Algorithms and Computability and Intractability). So the two departments were very closely aligned, and then were brought together into a new department/school within the Science and Engineering faculty.

They also offer a single AI honours degree now, but the structure seems very similar to what I experienced, with perhaps a bit more freedom in 3rd and 4th year.

Interestingly, while the majority of students were AI/CS or AI/SE, they also had joint honours programmes outside the faculty, so there were a few students who were AI and Psychology as well as AI and Linguistics. I don't believe they offer this combination anymore.


CMU seems to agree with that. Upthread bertjk posted:

"AI majors will receive the same solid grounding in computer science and math courses as other computer science students. In addition, they will have additional course work in AI-related subjects such as statistics and probability, computational modeling, machine learning, and symbolic computation."


Could be worse: if you'd gone to Reading Uni around then, you could have ended up with a Cybernetics degree from the Cybernetics department :)


Holder of "Artificial Intelligence & Cybernetics" from Reading here ;)


I am from the United States; next year I am probably going to attend the University of Edinburgh for the Computer Science and Artificial Intelligence course. Something I have been wondering is how US employers view the university, particularly with the somewhat strange (at least in the US) course title - "Artificial Intelligence"?


Are you considering doing a foreign year, or your whole degree in Edinburgh?

I can only speak for my current employer (Google), who look very favourably on degrees from Edinburgh. It's one of the four UK universities we recruit from directly.

As for the course title, there's also "Computer Science" in it, which people can latch on to if they need that. When people asked about the AI part of my course I would say "Software Engineering is 'This is what works', Computer Science is 'This is why this works', and Artificial Intelligence is 'I wonder if this works'".


Thanks, that is good to know. I am doing a whole degree there. I am not sure how recruiting works at Google, but when you say Google, do you mean London or US-based offices? Obviously I don't know what I will want to do in 4 years, but I anticipate moving back to the US after I graduate.


I would be curious to hear how different this proposed CMU curriculum is vs what you had in the early 2000's.


Me too. I'm guessing they'll hear the word "perceptron" less than I did. Probably less Matlab and Prolog will be taught as well. I can't remember whether the Semantic Web course was CS or AI, but I suspect that won't come up, either. Fashions have probably changed enough that there won't be that much crossover.

For me at least, it depends almost as much on who's doing the teaching as it does on what's being taught. Generally for me, the highlights were any course taught by Barbara Webb or Jon Oberlander.


>"Me too. I'm guessing they'll hear the word "perceptron" less than I did."

Why would that be? The perceptron is very much a part of neural networks, no? Wouldn't it be common now?

I understand Prolog being a big part of the AI curriculum from that time, but why was Matlab used so heavily?


Back then perceptrons (single neurons with engineered feature inputs) were a lot closer to cutting edge than they are now.
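
For anyone who hasn't seen one, here's a rough sketch of that era's bread and butter - the perceptron learning rule on a toy problem (learning logical AND), in Python rather than the Matlab we actually used:

    # Classic perceptron learning rule on a linearly separable toy problem.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b, lr = [0.0, 0.0], 0.0, 0.1

    def predict(x):
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    for _ in range(20):  # a few passes suffice for separable data
        for x, target in data:
            error = target - predict(x)  # -1, 0, or +1
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error

    print([predict(x) for x, _ in data])  # -> [0, 0, 0, 1]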

As for why Matlab was used a lot: because it comes "batteries included", I suspect. Probably the same reasons that Andrew Ng used it as the teaching language for his Stanford/Coursera Machine Learning course. Plus a lot of my lecturers had maths backgrounds.


This looks good (which shouldn't be surprising coming from CMU). I'm kind of impressed by how similar this was to my undergrad curriculum (focusing on AI/ML and CS theory). Looks like a fun program.

Also wow, Great Theoretical Ideas in Computer Science[1] is a hell of a course. Induction, DFAs, matchings, TMs, complexity, NP, approximation and randomization, transducers, crypto, and quantum algos. That's a lot of material, even if most of it appears to be only introductory level.

[1]: https://www.cs.cmu.edu/~15251/schedule.html


yeah, it's rather notorious at CMU for being particularly challenging and fast-paced, especially for a freshman course. sometimes called "two-fifty-fun"!


I'm getting a panic attack just reminiscing about it.

213 and 251, the twin terrors.


That course (15-251) is somewhat controversial among CS students (at least it certainly was when I was there) in that its primary goal in the curriculum seemed to be to act as the weedout course, since as you observed it covers a ton of material in little depth. It is typically taken 2nd semester freshman year, and I know at least 3 people my year who dropped out of CS from it :(


“250fun”, as we ironically called it. It was intense, but the topics were covered in a very interesting way.

