
A complete 4-year course plan for an AI undergraduate degree - deepaksurti
https://www.mihaileric.com/posts/complete-artificial-intelligence-undergraduate-course-plan/
======
numbsafari
No surprise that this doesn’t include any consideration for the profound
moral, ethical, or social implications of AI and AI solutions.

Nothing about bias in models. Nothing about communication. Nothing about
privacy. Nothing about security. Nothing about resiliency. Nothing about the
responsibility of the individual practitioner. Nothing about the economic or
ecological impact of the work.

Every undergraduate degree should include both general, and subject specific
courses on ethics, morals, social, economic and ecological impacts.

~~~
theferalrobot
> No surprise that this doesn’t include any consideration for the profound
> moral, ethical, or social implications of AI and AI solutions.

Would you say the same thing for an undergrad in SE (actually asking)? I feel
like the hype train for AI has moved it into a plane where it has to be some
moral arbiter but it seems to me that the same demands could be placed on most
fields of study including general software engineering.

> Nothing about bias in models. Nothing about communication. Nothing about
> privacy. Nothing about security. Nothing about resiliency. Nothing about the
> responsibility of the individual practitioner. Nothing about the economic or
> ecological impact of the work.

Maybe it is just my bubble but I feel like very few AI jobs actually deal with
hot button ethics (facial recognition for dubious purposes etc). It is just
that these fringes get all the attention as if they represent the broad
community (they don’t). For instance I work in AI for satellite
communications, most of the things you bring up don’t really apply (any more
than they do to the software engineers I work with). My former work was in AI
and computer vision in marine ecosystems. Again... your moral questions don’t
really apply (privacy of coral reefs?). I agree it is important to be aware of
these issues but most of the real jobs in AI I feel like are more scientific
in nature.. not the privacy busting, bias inducing ones we hear about a lot.
Again that could be my bubble to some degree I suppose though.

~~~
numbsafari
I absolutely do. It's why I was so quick to jump on the post with this.

My personal belief is that every undergraduate degree should include these
subjects. That's what separates an undergraduate degree from a bootcamp--in
theory.

But I think it's especially important for engineers and scientists, because it
is very often overlooked or discounted by virtue of the fact that there's so
much "technical" stuff to get into.

I think schools should try to counteract that by producing more team taught
material. E.g., let's not just do a regular philosophy course, let's do a
philosophy for engineering and science course that takes you through the same
general philosophy, but with readings that focus on engineering and science
related ideas and themes.

But we should also have dedicated courses on these larger issues, and weave
discussions of them into our other courses. Engineers, especially, have such a profound impact on
society. It's important for us to be trained to think about these things. Not
what to think, but that it is necessary to do so, and perhaps some tools and
reference points for being able to do so in a cogent manner.

Lastly, I think it's important for undergraduates to get some exposure to the
ethical issues related to work: the relationship between the employee and
employer, the intellectual property, integrity in science and methods, etc.

~~~
artificialLimbs
"It's important for us to be trained to think about these things."

This doesn't seem self evident to me.

------
idoby
I love initiatives like this, but IMO if you're doing a self-study "degree"
aimed at practical AI knowledge, I would:

* Drop the compilers and database courses.

* Add an intro to statistics course.

* Pick one domain, since you'll need deep, proven domain expertise in one area to get hired without a real degree. So doing both NLP and CV, for instance, is probably a bad choice if this is your endgame.

* If you choose CV, add a traditional image processing course. I don't recommend relying on whatever happens to be included in the DL for CV course. You might also want to add a good general purpose DSP course.

* Substitute projects for exams. Exams are adversarial, so they don't make sense to give to yourself. You'll learn more doing a project than you would cramming for a fake exam.

* Try to finish the whole thing in 1.5 years, not 4. Spending 4 years on a practical curriculum that doesn't yield an actual degree is a huge waste of time. Try to get the theory out of the way in a year and a half so you can jump into real life projects/research. With dedication and without a rigid university schedule that forces slow progress, this is completely doable.

~~~
pradn
Compilers may not be as relevant for AI-related work, but databases are, imho.
So much of real world AI work is setting up data pipelines. Knowing how
transactions, indexes, and joins work is very useful.
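
Not from the article, just a toy sketch of why those concepts show up in pipeline work; the table and column names here are hypothetical:

```python
import sqlite3

# Toy data pipeline step: join raw predictions with labels, inside a transaction.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE labels (sample_id INTEGER PRIMARY KEY, label TEXT);
    CREATE TABLE predictions (sample_id INTEGER, score REAL);
    CREATE INDEX idx_pred_sample ON predictions (sample_id);  -- speeds up the join
""")
with conn:  # transaction: both inserts commit together, or neither does
    conn.executemany("INSERT INTO labels VALUES (?, ?)",
                     [(1, "cat"), (2, "dog")])
    conn.executemany("INSERT INTO predictions VALUES (?, ?)",
                     [(1, 0.9), (2, 0.2)])
rows = conn.execute("""
    SELECT l.label, p.score
    FROM labels l JOIN predictions p ON l.sample_id = p.sample_id
    ORDER BY l.sample_id
""").fetchall()
print(rows)  # [('cat', 0.9), ('dog', 0.2)]
```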

~~~
idoby
True

------
btrettel
This is missing a course on software testing, particularly one that has a
focus on testing software-implemented models.

I'm almost done with a PhD in mechanical engineering. Some CS folks have
lamented gaps in my programming-related education, _but_ I actually did take an
elective course on the reliability of science which included a lot on
scientific software testing. I think such a class should be required for
anyone working in theory or simulations.

In engineering, concerns about the testing and accuracy of a model are
typically called "verification and validation". There unfortunately doesn't
seem to be a standard curriculum for the subject yet but Wikipedia can give
you an idea of what this covers and how it's different from general software
testing:
[https://en.wikipedia.org/wiki/Verification_and_validation_of...](https://en.wikipedia.org/wiki/Verification_and_validation_of_computer_simulation_models)

AI/ML seems to have its own culture around this, and I'm not sure it's
actually rigorous. Seems to me that the field could have its own version of the
reproducibility crisis.

This course should be taken after a statistics course. Parts of it rely fairly
heavily on statistics.

~~~
chriskanan
AI is having some reproducibility issues, especially reinforcement learning:

[https://www.wired.com/story/artificial-intelligence-confronts-reproducibility-crisis/](https://www.wired.com/story/artificial-intelligence-confronts-reproducibility-crisis/)

In other areas of machine learning, things have gotten a lot better. A decade
ago people rarely released their code or trained models. I did this for an
early feature learning paper in 2010, which led to it getting a lot of
citations.

What I'm more worried about is the lack of science in machine learning papers.
The code reproduces the result, but the reasons for the efficacy given in the
paper are spurious. I like this recent paper that pokes this issue in metric
learning:

[https://arxiv.org/abs/2003.08505](https://arxiv.org/abs/2003.08505)

~~~
btrettel
Thanks for the links.

I would consider the science issue you mention to be related. A lot of what
was covered in the "verification, validation, and uncertainty quantification"
course I took was basic science. The class wasn't only about software testing.
Software testing was just a major tool covered in the class for the larger
goal of making reliable scientific claims. What I wrote previously wasn't
clear on this, so I clarified my previous comment on this point.

------
zozbot234
I'm not seeing an actual statistics course in there. Also GOFAI techniques
seem to be only briefly mentioned in an "Introduction to AI" course, which
just doesn't cut it in my view - you'd need something like an Operations
Research class to really explore that stuff in more detail. Some of the more
focused CS content could be cut to make room for this; OSes and DBs have been
mentioned already, but maybe one could also remove the Compilers course.

~~~
disgruntledphd2
Yeah, I'd probably remove compilers, and add classes on statistics,
experimentation and actual data analysis.

I kinda find it hilarious that there is no mention of analysing data in this
course, given that that is a core skill that all data professionals actually
need. I mean, where do they think the models come from?

There's also little to nothing about ETL, which is pretty much all you do in
most data-sciencey jobs.
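
As a concrete (and deliberately tiny) illustration of what an ETL step looks like, with made-up field names:

```python
import csv
import io

# Minimal extract-transform-load: read raw rows, clean them, emit a loadable table.
raw = io.StringIO("user_id,age\n1,34\n2,\n3,29\n")   # stand-in for a real CSV file

extracted = list(csv.DictReader(raw))                 # extract: parse raw records
transformed = [
    {"user_id": int(r["user_id"]), "age": int(r["age"])}
    for r in extracted
    if r["age"]                                       # transform: drop rows missing age
]
# "load" would be writing `transformed` to a warehouse table
print(transformed)  # [{'user_id': 1, 'age': 34}, {'user_id': 3, 'age': 29}]
```

Real ETL is mostly this, repeated at scale, with far messier data.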

I actually think that this course is pretty revealing of the worldview of a
lot of AI researchers, and may explain why predictions of AI's imminent
dominance across multiple fields have not been particularly accurate.

~~~
zozbot234
> "There's also little to nothing about ETL, which is pretty much all you do
> in most data sciencey jobs."

Maybe one could design a custom "Databases and ETL" class for this, to make it
useful for more than just data-sciencey jobs. ETL is usually covered to some
extent in stats classes though, AIUI.

------
jpz
I think a lot more maths is needed. "CS109 Probability for Computer
Scientists" does not look like a sufficiently deep study in statistics.

Spending time studying operating systems and compilers (both things I've
studied) at the cost of really immersing yourself in the maths of probability
seems to me a misallocation of a scarce resource (time).

e.g. a course in Bayesian statistics, and also in traditional statistical
inference would be useful (for instance, this gives a good foundation for
understanding the EM algorithm.)
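
For reference (standard notation, not from the article), the EM iteration the parent mentions alternates two steps:

```latex
% E-step: expected complete-data log-likelihood under the current posterior
Q(\theta \mid \theta^{(t)})
  = \mathbb{E}_{z \sim p(z \mid x,\, \theta^{(t)})}
    \!\left[\log p(x, z \mid \theta)\right]

% M-step: maximize that expectation over the parameters
\theta^{(t+1)} = \arg\max_{\theta} \; Q(\theta \mid \theta^{(t)})
```

Seeing why the E-step is a posterior computation is exactly where the Bayesian background pays off.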

------
NalNezumi
I don't understand why

> Convolutional Neural Networks for Computer Vision

is taken without traditional CV beforehand, while for NLP it says "include
traditional NLP". I'd say it is a big no-no to jump into CNNs directly without
some basic CV; even though CNNs outperform most traditional algorithms, you
will miss big fundamentals.

Also, most first/second year courses seem too CS-focused, such as OS and
Databases. I would remove those two and add Differential Equations & Discrete
Mathematics (and numerical methods, maybe).

At best, the overfocus on general CS seems to create "CS with some ML
experience" rather than a solid foundation in the principles behind it.

This will give you shoddy CS knowledge, and a more fragile AI/ML understanding
than someone with an AI/ML background.

I would call this program "Data Science" but not AI.

Edit: removed "looks good" prefix after a second take at it.

~~~
throwaway5548
The lack of statistical content disqualifies it for "Data Science" as well

------
bonoboTP
This is a well written article and has really good courses and topics in it.

Cynically though, I wonder how many of those who liked it and filed it under
"I should get to this at some point" actually use it at all. In AI it's become
a cliche how many introductory blog posts, intro YouTube videos, and "how do I
start" Reddit and Quora questions there are. The resources and buzz are very
high for the first steps. I guess it's similar to how "how do I make a game"
or "how can I learn to hack" are popular.

As I said this one is very solid and has very actionable pointers to great
courses. But I find that only a small minority of people are so obsessed that
they can pull this huge project on their own. It's a multi-year undertaking
and even though the resources are freely available, a normal homo sapiens just
doesn't work like this. For all but the wild outliers (who would find the info
anyway, with no obstacle able to stop them), the context of a university
program is really necessary. It gives you time, structure and social
motivation, discussing with people in the same boat, helping them out, getting
help from them, in person, being forced to chew through the boring bits, not
getting satisfied with your own self assessment etc.

Otherwise I think this also just gets thrown onto the pile of bookmarks that
we all build, to make ourselves feel good about a future day when we are
somehow magically motivated and sharp enough to tackle and learn all those
things we bookmarked.

~~~
autokad
> "n AI it's been a cliche now how many introductory blog posts and into
> YouTube videos and "how do I start""

One of the frustrating things is that it's really hard to find great advanced
content, and when you reach out to HN readers, they're like 'read really
advanced PhD papers'. That's not what I mean at all >.<

~~~
bonoboTP
There are textbooks, but people (myself included sometimes) want something
flashier, something more palatable. But other than conference talks and papers,
textbooks and lecture slides, there just isn't much out there.

It's a consequence of how the sausage is made. Textbooks take years to write
so only established topics can be included in them. Researchers are under time
pressure and chase performance criteria. Publishing at a good conference
pushes you along your PhD path, but writing a blog post for a niche advanced
target audience of maybe 100 readers is not always worth it. Especially when
you have other duties beyond research like teaching courses. Industry blog
posts are good, but also cannot go very deep as the audience would be lost.

Also the nature of advanced stuff is that there is less of a clear established
path forward. There are tons of small specialized communities that often don't
know much about each other, they are in different departments etc.

Your best bet in becoming more advanced is to use the above mentioned
resources and work with people who are more advanced than you, either in
academia or industry. There is no law of nature that everything must be
achievable by browsing the free internet without leaving the house.

An important lesson, once you reach a certain point in your studies, is that
there is no set Platonic chapter-by-chapter structure to knowledge and science. A
common failure mode is to think that learning is about leveling up in some
achievement tree like in duolingo, and being puzzled where the next chapter
is. You have to seek it out, tackle a practical problem either from your own
idea and doing a side project or by joining a group as an assistant or a
company as a junior engineer/data scientist.

A lot of it is "dark knowledge", hidden inside organizations, people learning
from one another. There are best practices that "everyone" just knows to do by
discussing by the coffee machine, it's not in any book or paper. And papers
are written in an obtuse language to not give away too much (because that will
be in the next paper).

The interests of an advanced practitioner reader are markedly different from
the paper-writing researcher's incentives, which can make papers hard to read
and hard to extract takeaways from. The goal of a paper is to frame a
small incremental tweak in the context of the larger scale literature in such
a way that the reviewers are sufficiently impressed to click accept. Papers
are obsessed with how they fit in the academic community's pursuit. They
aren't tutorials or howtos or guides. Fortunately people are now releasing
more and more code. The code is often more enlightening than the aggrandized
math of the paper (which often boils down to a few lines of code).

------
omarhaneef
More of a meta question: but why should all undergrad degrees be 4 years? Why
should an undergrad in computer science and one in AI be the same length of
time?

Was it created because a school wants to charge the same amount regardless of
major? Or because 4 years is an acceptable proportion of a human life to spend
studying to obtain a white collar job half a century ago? Or some other
reason?

(Related, why do people want the 4 year degree as a pre-requisite for most
jobs? Why do some jobs -- law and medicine -- require 3 and 4 year degrees on
top of it? Why is the CFA happy with 3 years of tests on your own time?)

~~~
6gvONxR4sf7o
It's probably a combination of all your questions. Simpler fee structure that
takes a standardized part of young adulthood that provides a standardized
amount of expertise, leading to "an undergraduate degree" denoting a certain
amount of breadth and depth. It's about what a typical college student can
learn in four years.

Some fields like law and medicine require more time to get practically useful,
like 7ish years for law, which we can break down into units of 4 + 3, an
undergraduate degree and a law doctorate (JD). Or medicine, which takes like
15 years, and is split into 4 + 4 + 3-5 + 1-4, an undergrad degree, a medical
doctorate (MD), a residency, and a fellowship.

Maybe splitting it into these points helps standardize those and provide
convenient points where paths split.

------
glup
This seems optimized for AI in the hyped sense (as shorthand for applied
machine learning and data science) rather than actually preparing a student to
contribute to "real" AI (see work of Josh Tenenbaum or Brenden Lake). For an
undergrad curriculum for the latter, I'd expect to see more on understanding
intelligence in extant biological systems, eg courses in cognitive science,
neuroscience, child cognitive development, and more background on animal
cognition. I would also expect to see more robotics and at least some
treatment of reinforcement learning.

~~~
sevensor
I would also move up the second year "intro to AI" course to the first year,
with an emphasis on history. You shouldn't be hearing about AI Winter for the
first time after already spending a year on the subject.

------
SimonSword
This is a course plan for computer science. I don't see why Artificial
Intelligence should have its own "degree". However, there is a trend towards
these niche degrees at universities too.

~~~
h4l0
We have never considered splitting medical school into subdomains at the
undergraduate level. Why are we trying to do this now for Computer Science?

~~~
barry-cotter
Dentists, nurses, physiotherapists and physicians are all degree level medical
specialties. Engineering is split. What’s the argument against doing it for
medicine other than tradition?

~~~
OJFord
> Engineering is split. What’s the argument against doing it for medicine
> other than tradition?

One might argue medicine _is_ a split-off piece from natural sciences.

A counter-argument might be that while the natural sciences are almost always
split, where they aren't, such as at Cambridge, medicine is still of course
separate.

I think the real reason is probably just that there's more value to most
medicos in a whole-body understanding than there is to most engineers in a
multi-disciplinary understanding.

I'd quite like to need to routinely design electronic circuits, CAD/CAM
packaging for them with certain mechanical constraints, and develop software
to run on them in my work, but I don't; that'd need to be a _very_ small
company working on a physical product for that not to be at least two people's
jobs.

~~~
rjsw
I feel that mechanical engineers should have a bit of understanding about what
is going on behind the scenes when they click on things in a CAD package.

From talking to recent students and current professors, I'm not sure they are
learning this as part of a degree course.

~~~
OJFord
By 'behind the scenes' do you mean the physical objects that they're
modelling, or how the software works?

~~~
rjsw
I mean the kind of data structures that the software is operating on, in
particular the ones that can end up in an exported file.

------
adjkant
There are a lot of weird choices here. The lack of math/stats is glaring, and
the inclusion of compilers seems to come at the cost of more relevant material
for someone focusing on AI (speaking as someone who studied CS with a specific
focus on PL/compiler-type things). You don't just throw project-based work in
at the end (it should run through the entire degree), and there are two
systems/OS courses included but no networking?

Generally I think having an AI degree is fine, but at the end of the day it's
a CS degree + a concentration no matter how you slice it. This isn't like CS
splitting from math, etc.

------
zerr
I find it a pity that while the "Compilers" course is always about
_implementing_ compilers, "Databases" courses are 99% about _using_ database systems.

~~~
jjice
I never really thought of that, but it's pretty true. I guess that using a
compiler is generally a lot simpler than using a database. When using a
compiler, the majority of the time you're doing a really simple compilation,
or you have a build system take care of the entire thing, whereas a database
requires design considerations. That would be my guess at least.

But I do agree that a database implementation course would be fantastic. It's
one of the most important categories of software today for sure, up there with
operating systems and compilers. My Uni was going to offer a course this Fall,
but COVID has led to it being delayed. Here's to hoping it's offered in the
Spring.

~~~
SimonSword
I would argue that the "using a compiler"-class would be the programming
class!

------
bmikaili
God the CS field is really getting muddied by AI hype. Please for the love of
god start with the fundamentals.

------
noelwelsh
4 year degrees in AI already exist.

Here is Edinburgh:
[https://www.ed.ac.uk/studying/undergraduate/2020/degrees/ind...](https://www.ed.ac.uk/studying/undergraduate/2020/degrees/index.php?action=programme&code=G700)

Here is CMU: [https://www.cs.cmu.edu/bs-in-artificial-intelligence](https://www.cs.cmu.edu/bs-in-artificial-intelligence)

I think the above degrees are more balanced than OP's proposal. More emphasis
on foundations, and courses on ethics. Operating systems and compilers are
useful if you go into AI engineering but are a diversion from core AI topics.

~~~
BossingAround
But are they free? I think that's OP's point.

~~~
noelwelsh
I don't think that is the case. I just clicked a random course in their
curriculum (Convex Optimization) and it links to a Stanford course that
requires a login to access the videos.

------
mcv
Yeah, that's not what my 4 year AI curriculum looked like. Mine focused more
on Expert Systems. Also more logic, more software engineering, more psychology
and philosophy. But much less algebra and calculus, less computer systems and
parallel computing (though I took them as elective), no compilers (I don't
really see the need either, though it's an interesting subject), and clearly
not enough machine learning.

I also would have liked to see more robotics. Our neighbouring university had
an AI curriculum that focused more on robotics.

If I could design a curriculum like this, I'd keep the focus on software
engineering and logic that we had (maybe a bit less logic than we had; they
were overdoing it), keep algorithms, keep some philosophy and psychology, but
make them more focused on our field and include things about social impact and
ethics. Definitely more linear algebra and calculus, some statistics, machine
learning, and get some basics for vision processing. After the first two
years, you'd get to choose between more focus on vision and robotics, or more
focus on logic, statistics and expert systems. Machine learning should be
included in both.

------
chriskanan
I actually was on a committee to do this for a university. Ultimately, some
believed it would take resources away from computer science and compete with
the program, so it was a year of work without anything to show for it.

I'm still a proponent of undergraduate AI degrees, but this proposal doesn't
suffice. Deployment, testing, dataset collection, statistics, more math,
better organization of electives (computer vision, robotics, NLP as applied
electives), bias mitigation, ethics, and more all need a place. The state and
university guidelines also provide a lot of constraints. I do think that a
research or applied project is essential, which is captured in this proposal.

I'd definitely remove operating systems and compilers. I took both courses at
the graduate level and have been working in basic and applied AI research for
15+ years, and those courses haven't helped me with that.

------
6gvONxR4sf7o
An AI degree that doesn't teach anything about learning about the real world
from data (the stats that scientists get) seems really weird, even if that
does seem to be the state of things. AI is about learning and decisionmaking,
and the learning side is currently pretty heavy on the pattern recognition
side, but why bake that into the education? You can't teach someone to
automate learning without teaching them to learn the old fashioned way.
There's a reason basic statistical inference and experimental design is such a
foundation for every scientific field, but it's always missing in these AI
progressions. It's sure as hell present in the jobs people going through these
degrees will probably end up in.

------
caspper69
So, people are just supposed to _start_ their 4-year undergraduate degree in
Linear Algebra? I know HNers are rare air, so to speak, but what about Calc
1-3 and Diff Eq?

This is like a 4 year plan once your first two years of university are under
your belt.

So a combined CS / MS in AI?

~~~
zozbot234
Linear Algebra is quite self-contained. You'd need Calculus in there to really
make sense of the probability and stats, though. Not sure about the vector
calculus part and diffeq's, maybe that can be cut and the whole thing might
fit in 4 years.
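
The calculus point is concrete: even basic continuous probability leans on it. As a standard illustration (not from the article), checking that a Gaussian density normalizes, or computing an expectation, is an integral:

```latex
\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma}\,
  e^{-(x-\mu)^2 / (2\sigma^2)} \, dx = 1,
\qquad
\mathbb{E}[X] = \int_{-\infty}^{\infty} x \, f(x) \, dx
```

Without integration, continuous distributions can only be used as black boxes.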

~~~
caspper69
I just can't see how a single self-contained Linear Algebra course could in
any way prepare one for the actual data science behind AI and ML. But your
point is well taken otherwise.

Edit: Reviewing Bretscher (Linear Algebra with Applications, 5th), it appears
you are correct; it's not until you study the applications that you need
Calculus. Most Unis I'm familiar with, however, will require multivariable
(Calc 3) at least as a pre-req for Linear Algebra.

------
cpp_frog
Reading the comments in this thread, I am surprised to see that many think it
lacks math. I am majoring in applied math (all my professors are French-school
mathematicians, at a non-American university) but I deem my courses
excessively theoretical. It doesn't help that the degree is 6 years long.

In the meantime I've been solving coding problems but still feel like I lack
something in order to make working AI programs. Maybe someone can tell me if
the non-math courses listed there are more than enough for a person like me to
get into AI.

If you're curious about my background, feel free to ask.

------
ansgri
Lots of suggestions about math, ethics, SWE; I’d suggest some biology.
Especially biology of sensory systems. Both for some understanding of the
systems that already solve these AI problems really well, and for better
understanding of human interface requirements: what do we really hear and see
in these AI-generated stimuli.

------
saos
Why no 3 month bootcamp?

------
mattkrause
This also seems to be missing any coursework outside of CS/Math.

People certainly disagree about the overall value of general education
requirements, but a writing course, even a technical one, would fit nicely
here. No point in doing amazing work if you can’t tell people about it, after
all.

------
TuringNYC
The proposed curriculum looks very similar to a computer science curriculum w/
an AI minor/specialization. I'm curious why a student would opt for a
specialized AI curriculum rather than a broader CS one? Wouldn't the CS
provide more options down the road?

------
MatthiasP
For comparison, here is the AI bachelor degree from JKU Linz:
[https://studienhandbuch.jku.at/curr/714](https://studienhandbuch.jku.at/curr/714)

~~~
zerkten
It's good that it includes an AI and Society course.

------
sonzohan
Professor and curriculum engineer here who has created an undergraduate degree
in CS/IT. Proof: ([https://www.rtc.edu/net-arch](https://www.rtc.edu/net-arch))

First off, this is a great starting point! There is a goal for every year, the
courses clearly build upon each other, and there is an end-goal. I am a big
advocate for project-based learning, and I'm seeing more employers seriously
consider portfolios like GitHub and blogs. No employer cares that you got a
103% on your algorithms exam, and few vaguely care about your GPA. They do
care if you can hold a conversation about a topic. They do care if you applied
the knowledge, like building something that uses a tree, even if you didn't
explicitly write all the data structure code yourself.

This curriculum is designed for the elite: think a student who would do well
at Stanford. This is probably an obvious comment, as the majority of courses
come from Stanford, but it's important to note that the breakneck speed of
this curriculum targets the top 1-5% of students. If our
hypothetical freshman is a high school graduate they need to have done well on
the BC Calculus, AP Statistics, and AP Computer Science exams at minimum. It
also assumes that the student has enough computer experience to pick up
advanced skills rapidly. You could argue that "Introduction to Computer
Systems" and "Programming Fundamentals" are entry level, but reading carefully
they are closer to the 2nd or 3rd CS course a student would take in a normal
sequence. Week 2 in the programming fundamentals talks about Stacks and
Queues, and Week 1 in Intro to Computer Systems introduces Unix, the CLI, and
GitHub all in one week. These are very advanced topics for students that come
in with 0 knowledge. You will need either rigorous admissions requirements or
to prepare for a high first-year dropout rate.

We tried a 4th year very similar to what was described in the article.
Students could pick a final project done in partnership with industry, or
report on their internship/career if they were already in one. Students
responded with "What do I need to do to get full credit?", "Can you give me
step-by-step instructions that I can follow?", and "What will be on the exam?"
(there are no exams in the program after year 2). After 3 years of being
explicitly told what they must do to succeed, students have correctly learned
that going above and beyond in a class rarely yields extrinsic benefit within
the scope of a class. To put it another way, students are trained to work
efficiently in a classroom environment, "Exactly as hard as I need to get the
exact grade I want, and not 1% more." It's not necessarily a bad mindset, but
we need to train students out of habits that are bad if applied outside of the
classroom. In the case of my program, we chose to set a bar in the 4th year
but didn't tell students how high to jump. This translates to removing a lot
of clarity in assignment rubrics, and providing direction but not answers. A
lot of students flailed, but none failed (yet!). Graduates report a smoother
transition into industry. We also introduced GitHub in their third year as a
portfolio tool, few students have enough knowledge to understand and use it
before then. At the end of every quarter students build a final project in
groups, and apply all of the skills that go into a good repository
(collaborators, pull requests, readmes, CI, etc.). While not all employers are
looking at the actual projects, all students who have graduated thus far have
said they used the knowledge of talking about a project in interviews. On the
internship/project classes/research front, unless you are using your
reputation to open those doors for students (as I had to), a student's best
chance of arbitrary contribution is through GitHub Issues and PRs. Your
college's reputation is also a major factor here; In the case of my specific
college, Amazon Web Services won't work with us because of a teaching
agreement that went sour before I was hired.

Some critiques from others I'd like to comment on, in lieu of the author: "Why
isn't there core class X?" \- Most students can't handle more than 2 core
classes per quarter. That means 24 core classes over a 4 year period. Choose
wisely. I think the author chose well, but no plan survives its first
encounter. My curriculum certainly didn't, and 3 years in we've thrown out
almost all of the curriculum we initially wrote.

"No ethics, soft skills?" \- This is a core class curriculum. Electives and
additional classes for "rounding out" a student occur at a later phase
of curriculum design, usually when it's time to meet accreditation standards.
From there, you will not like the second part of this answer. In my 3-year
study on building an IT bachelor's
([https://www.nsf.gov/awardsearch/showAward?AWD_ID=1601140](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1601140)),
we found ethics training to work AGAINST a student's hiring prospects (cue
pitchforks). Our Business and Industry Leadership Team (representatives from
technology companies) rated "teaching ethics" at an average of 2.1/5 (A 2
means slight disagree that this should be taught). They marked "ethical
knowledge application" at 1.4/5 (A 1 means strongly disagree that this is
important). The BILT team commented that they were unlikely to listen to a
recent graduate's ethical concerns on a project, they would regard a graduate
raising the concerns as a 'nuisance', and that they would fire the graduate if
their ethical concerns significantly impacted their job performance (cue more
pitchforks). Finally, consider the fresh graduate's perspective. The typical
first-job-out-of-college mindset is "I have a ton of debt. I'll take the first
job with the largest salary." I see this advice regularly on HN.

------
kyawzazaw
I think CMU's 15-213 is better than Stanford's CS107.

------
arcadeparade
I’d like to see a similar one for computer science

~~~
caspper69
If you are serious, pick an institution and view any of their ABET accredited
programs. Look for Computer Science or Computer Engineering and then go to the
University to view the curriculum:
[https://www.abet.org/](https://www.abet.org/)

~~~
mattkrause
I would not rely on ABET for this.

Stanford, CMU, Yale, Cornell, Columbia, and Princeton aren't ABET accredited.
Devry Tech is, but I still think you should give Stanford and CMU a second
look.

------
cat199
Machine learning is not AI, but a part of it.

