We don't charge anything for the course and there are no ads - making it freely available to everyone is a key part of our mission: http://www.fast.ai/about/
And yes, it does work. We have graduates who are now in the last round of applications for the Google Brain Residency, who are moving into deep learning PhDs, who have got jobs as deep learning practitioners in the Bay Area, etc: http://course.fast.ai/testimonials.html . Any time you get stuck, there's an extremely active community forum with lots of folks who will do their best to help you out: http://forums.fast.ai/ .
(Sorry for the blatantly self-promotional post, but if you're reading this thread you're probably exactly the kind of person we're trying to help.)
thrawy45678: I see a lot of criticism about tmux and other non-core items being included in the overall curriculum. I think the author is trying to show the workflow he currently uses and expose his full toolkit. I don't think he is saying this is "THE" approach one has to follow.
derekmcloughlin: If you've no experience with ML stuff, you might want to start with Andrew Ng's course [...] Paperspace (https://www.paperspace.com/ml) have ML instances starting at $0.27 per hour.
webmaven: Any recommendations for the cheapest (but not "penny wise, pound foolish") HW setup that meets these requirements for course completion?
Edit: To be clear, I'm not suggesting it's not worth it, just highlighting that there's more than a time commitment to budget for.
If you have a workstation with a fairly recent Nvidia GPU (I used a GTX 980 Ti) and a bunch of spare disk space, you don't need AWS at all. You'll still pay for the electricity, of course, but it's not what I would call a significant extra cost. That is, if you already have the hardware.
btw, I have a question: the course has a specific video for setting up AWS, but what about local machines? Should I just install the required packages - Python, Keras, or whatever they use?
A GTX 970 should do. The exercises are designed for GPUs with more memory, so you'll need to use smaller batch sizes here and there.
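For concreteness, a minimal sketch of what "use a smaller batch size" means, assuming the Keras API the course notebooks use (the model and random data below are stand-ins, not course code):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    # Stand-in data; in the course this would be batches of images.
    x_train = np.random.rand(1024, 784).astype('float32')
    y_train = np.random.rand(1024, 10).astype('float32')

    model = Sequential([Dense(10, activation='softmax', input_shape=(784,))])
    model.compile(optimizer='adam', loss='categorical_crossentropy')

    # If a notebook's default (say batch_size=256) runs out of memory on a ~4 GB card,
    # shrink the batch until it fits; you just get a few more, slightly noisier steps.
    model.fit(x_train, y_train, batch_size=64, epochs=1)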
The time you spend on the course, and the time spent by your instance running workloads are two completely separate things.
Your cost per-hour is accurate (plus a few bucks for an IP address), but the number of hours is off by an order of magnitude.
Also, I'm paying double that per instance per month for SSD storage, using the provided scripts to build the instances. It's the smallest part of the cost, but I mention it because it can take newcomers to EC2 by surprise when an instance is shut down but still consuming disk space.
I had to adapt the aws-install.sh script, but it was easy enough. I ended up using a snapshot instead of a persistent volume, as the monthly cost when you're not running is much cheaper. So I have a script to create a new instance and then restore that snapshot. It's faster than installing the dependencies each time.
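(Not the commenter's actual script, which isn't shown - but a hedged sketch of the same idea using boto3, with every ID below a placeholder, might look like this:)

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Launch a fresh GPU instance.
    run = ec2.run_instances(
        ImageId='ami-00000000',        # placeholder AMI
        InstanceType='p2.xlarge',
        MinCount=1, MaxCount=1,
        Placement={'AvailabilityZone': 'us-east-1a'},
    )
    instance_id = run['Instances'][0]['InstanceId']
    ec2.get_waiter('instance_running').wait(InstanceIds=[instance_id])

    # Recreate the data volume from the cheap-to-store snapshot and attach it.
    vol = ec2.create_volume(SnapshotId='snap-00000000',  # placeholder snapshot
                            AvailabilityZone='us-east-1a',
                            VolumeType='gp2')
    ec2.get_waiter('volume_available').wait(VolumeIds=[vol['VolumeId']])
    ec2.attach_volume(VolumeId=vol['VolumeId'], InstanceId=instance_id,
                      Device='/dev/sdf')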
Google Cloud has a $300 free trial for new users: https://cloud.google.com/free/
(Disclosure: I work for Google, but have no involvement in Google Cloud other than being a happy user)
1) Create the instance the first time: https://gist.github.com/rahimnathwani/40ed3b6f496d377e3d17de...
2) SSH into the instance and run this script (replace anaconda mirror if your instance isn't in Asia, and definitely set the password hash): https://gist.github.com/rahimnathwani/b63ebc5900b832743d9e22...
3) After you've tested the server, snapshot the disk (using the web console) and destroy the instance and the persistent disk.
4) Run this script to recreate the instance and disk from the snapshot (replace with your snapshot name): https://gist.github.com/rahimnathwani/9019df33d4ad7ec4607413...
AKiTio Node | Thunderbolt3 External PCIe Box for GPUs https://www.amazon.com/dp/B06WD8KS52
Gigabyte GeForce GTX 1080 XTREME Gaming Premium Pack Video Card (GV-N1080XTREME-8GD Premium Pack) https://www.amazon.com/dp/B01HHC9Q3U
This applies across basically all of life, and it's so frustrating to see people ignoring it, because what ends up happening is they use a string of 'gonnas' to justify buying stuff they don't need. Gonna get fit - buy $1500 worth of gym gear. Gonna learn electronics - buy oscilloscope, power supplies, tons of components. Gonna get your motorcycle license - buy brand new bike and stick it in the garage.
If you have a desktop computer, you're good to start. When you've done enough that your available CPU/GPU is limiting you on your own projects (not on something you pulled off github) then you can look at upgrading.
A fairly accomplished electronic engineer told me that they'd never once solved a problem using an oscilloscope, but that it helped to keep them occupied while they were mulling over what might have gone wrong. (That's presumably why the better ones have so many knobs and dials to play with, like one of those children's toys.)
Or you can use spot instances, as a sibling comment mentions - about $0.20/hour generally.
gcp: Contrary to the claims there, the CPU does matter
dsacco: yes, this will work, it will quickly become suboptimal [...] as you scale your hobby into something resembling more professional work
brudgers: I'd start with a used Dell Precision T7xxx series off eBay for <$300 including RAM and a Xeon or two.
croon: If the limit is a firm $1000, I would get something like this: https://pcpartpicker.com/list/XHV9Fd
Is there some particular constraint that prevents me from running the computations at home?
Funnily enough, big tech companies have seemed the most willing to accept people making a switch. I'm guessing that's because their appetite for ML/DL people is currently unquenchable.
If you're 22+ with a high school degree and not much of an engineering resume... Yeah, your pitch deck better be good.
Although a lot of folks in the course do indeed have graduate degrees in other fields (including English Lit, Neuroscience, Radiology, etc...)
But I really do not understand why we hate ads. I have seen many tutorials which are given out for free. Of course, I am grateful that you decided to offer the course free of charge, but I really would not mind a few adverts just to get you, as the creator/maintainer, some $$.
I am a self-published author of a decent intro book on web development in Go - an example-driven tutorial/ebook. While the main book is open source on GitHub, there is a leanpub.com version which I offer at variable pricing ($0 to $20), and it has been working great for me: rather than getting nothing for the tutorial, I am getting something.
Without ruining the current tutorial, there are ways of getting something from it, of course.
For a fairly niche resource such as this (it'll never reach a "how do I get a boyfriend" level of audience), it's unlikely ever to draw enough ad revenue to pay for itself. To do so, a high-quality, specialized ad system would need to be deployed, which honestly becomes a high-touch deployment and maintenance project that isn't directly tied to the core goal of just providing an awesome resource publicly for the greater good - distracting (or costly) for whoever is behind it.
I appreciate the mature choice to not try to gain small change and instead eat the cost for hosting and development to feel good that you are providing something not just great, but unadulterated as well.
That's passive income you don't have to fuss over, the way you would an ad deployment. We as an industry are funny: we expect everything to be free of cost _and_ that the author not monetize it in any way. Monetizing isn't evil - that's what I am saying.
I'd start with the Coursera one, if only to learn when _not_ to use a neural network and use something simpler. But if you already know how to cluster data with K-means, what a linear classifier is, and/or what an SVM is, then you can probably skip the ML one.
I wanted to point out though, that on the main page http://course.fast.ai/ the courses appear out of order; 1, 2, 0, 3, ... instead of 0, 1, 2, 3, ...
Advice from OpenAI, Facebook AI leaders
If the course is free and you're doing this to improve the world what's the point of using such tactics? At least shut down the rotation after one or two iterations.
Sure, about as much as rendering a single glyph of the entire text on the page.
Kidding - I have no idea, but if it's just panning it, that should be the ballpark for the necessary compute load.
Can other people who've actually seen real promise in their AI research chime in and help convince me that we are actually on the precipice of something meaningful? Not just more classification problems and leveraging the use of more advanced hardware to do more complicated tasks.
But we don't need fully general AI to have a huge transformative impact on society and the economy. Think about the full spectrum of jobs that exist in the modern world: doctor, teacher, construction worker, truck driver, chef, waiter, hair dresser, and so on. How many of those jobs actually require the full power of human intelligence? In my view, almost none. Maybe the intelligence required is still more general than what a computer can do, but probably a determined research effort could be enough to make up the gap for any specific task (say, to cut someone's hair, you need to have good visual understanding of the shape of the head, and very good scissor-manipulation skills, and a good UI so that the customer can select a style. It hardly seems like an insurmountable technical challenge).
In all probability, I want to agree with you.
However, deep learning unnerves me. Seriously, how many of the people who evolve these deep-learning models actually understand how the models work? And yet the models produce good-to-great results.
So there could be a small chance that general AI could randomly and spontaneously come into existence. Either we are very close to general AI or infinitely far from it. We can't tell, because we are not sure whether intelligence is an evolved, goal-oriented function or an engineered substance. Being an atheist, I believe the first part - and that's what deep-learning-based AI looks like.
Human intelligence evolved because of the need to survive, which evolved because of the need for genetic replication. Our need to survive led to a nervous system, which created the need for a central management platform (the brain).
People do understand the deep-learning models they create. They're trained against a user-defined error limit - a loss function measuring the mathematical distance between what the model predicts and what it should predict.
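(To make "training against an error limit" concrete, here's a toy sketch in plain numpy - an illustration of the idea, not anyone's production code. The loss literally is a distance between predictions and targets:)

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    true_w = np.array([1.5, -2.0, 0.5])
    y = X @ true_w + rng.normal(scale=0.1, size=100)  # noisy targets

    w = np.zeros(3)
    lr, error_limit = 0.1, 0.02
    while True:
        pred = X @ w
        loss = np.mean((pred - y) ** 2)              # squared "distance" to the truth
        if loss < error_limit:                       # the user-defined limit for error
            break
        w -= lr * (2 / len(y)) * (X.T @ (pred - y))  # step downhill on the loss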
I think that until we have an algorithm that can rewrite itself optimizing for existence (bug-less-ness and a continuous power supply?), we won't even scratch the surface of general AI.
They train a model that translates signals from the motor cortex to directly stimulate the patient's forearm muscles. It is the first case of a quadriplegic man being able to control his arm directly with his mind.
I think the results from deep learning, along with recent developments in commodity hardware (GPUs have made major leaps in the past two years) that let models train in a reasonable amount of time, have kicked off a frenzy. IMO it does seem a little overhyped, in the same way the internet was - it's a tool that enables ideas beyond our imagination, so VCs throw money at companies on a dart board hoping one takes off. In the end, the internet was a revolution, but it wasn't as quick as everyone thought.
TL;DR - AI is becoming a tool that can be used by anyone.
The fact that I can do it should be evidence enough for that statement. ;)
As far as the bulk of your comment goes, maybe. That's a prediction, but I can't say that I feel especially confident in the prospects of it. But who knows. Cold fusion looked like a lock for a revolution 60 years ago, and social media looked like a fad a decade ago.
But robots haven't really proliferated outside of narrow applications. The main thing holding them back is they are blind and dumb. They can only do very simple rote tasks over and over again. They can't identify objects in an image. They can't figure out how to pick something up. They can't learn from trial and error.
ML has improved massively over the last few years. It's now possible to do machine vision at (arguably) super-human levels on certain tasks. When you see videos of reinforcement learners learning to play video games - that's not a huge stretch from controlling a robot. There have been similar jumps in ability in many other areas, from machine translation to speech recognition.
Sure, there may be lots of over-inflated expectations. But I think a vast increase in automation is well within the limits of what is possible in the near future. You may not get Jarvis. But we will get robots that can understand simple orders and learn to do simple tasks.
That's huge. That's replacing most human jobs huge. It will completely change our economy and our way of life.
So the tools we have now aren't total frauds, and there's probably more to be discovered.
What all three of those examples have in common is that they involve an AI assisting human decision-making. I think that this will be the most lucrative area in the future as well. If you have a set of data a human can't even look at, like a million web pages, the AI doesn't have to be that good at processing it to be useful, since it has no competition from humans. On the other hand, it's rare that you ever need to come up with a million new concepts - you probably just need one. So humans easily outcompete AI at coming up with new ideas. Maybe it's a bit vague what "new ideas" means, but certainly if you're capable of generating new ideas, then you're at least generating Turing-complete outputs. Nobody is even trying to do that right now. In the case of image recognition, there are some complex ideas needed to make it work, but at the end you're just outputting a label - getting your model to use the most basic DSL possible.
What would you say is the missing piece for a robot that knows how to wrap a football for Amazon (and knows what to do if the ball falls down, and if there are people around)?
AI and ML are exciting because they promise to help us evolve systems and machines quickly to perform more accurately.
This requires access to a constant flow of large, proprietary datasets.
Providing cheap access to datacenters and compute power is a great first step for leveling the playing field for startups.
I'll be interested to see how YC tackles the (IMO) more important problem of providing access to the data needed to train models.
I think IBM has taken an extremely wise first step by acquiring the data assets of The Weather Company. This will give its Watson IoT portfolio a leg up on other companies that need to rely on a patchwork of public and private data sources when factoring something as integral as the weather into algorithms and logic engines.
Perhaps YC can consider something similar, pooling investors and VCs together to acquire/partner with providers of essential data.
It's not just the data. It's also acquisition: Facebook and Google figure very large in the exit paths of a lot of these companies.
Which means that after one or more years the stated goal here could be nixed. You'd almost have to close that route before this statement is meaningful.
Taken in isolation, the two trends (data acquisition and talent acquisition) are already disturbing. Combine the two, and you have to wonder how dumb we actually are to allow things like Facebook's acquisition of WhatsApp.
So it would definitely help smaller AI companies if an open-minded consortium would launch a new Freebase-like service and an open web crawler with free data-dump access. (Wikidata is at the moment (and for the coming months) several orders of magnitude smaller, and Freebase's frozen, stale data gets more useless every day.)
Of all the places where AI could help, why focus on that field - the place where the most vulnerable employees are found, the hundreds of millions in China, Bangladesh, etc., who have little chance of having a meaningful social safety net?
>> job re-training, which will be a big part of the shift.
I don't believe that this is realistic, even in the West. Why? Because the internet, which is obsessed with this subject and doesn't lack imagination, can't even supply a decent list of what jobs the future will offer that could employ the masses whose jobs will be automated.
To me there's no other way to look at this other than having faith in our ability to find work to do. Wherever there is deficiency in technology, humans will find a way to fill in. Now the question is whether this will be at the same scale, and will there always be gaps to fill in. No one knows. We never have known. We've just blindly innovated and had faith. It's worked so far. Until it doesn't we will probably just continue doing it.
In other words, this is and has always been unplannable. An example. When DARPA invented the internet, no one explicitly said "We're going to put a bunch of mail staff, brick and mortar commerce employees, etc, out of work so we better ensure we train programmers to help fill the void."
I'm with you though; it concerns me.
As for there being fewer brick-and-mortar employees? I think that was already a historical trend, so it makes sense it will continue, supported by tech.
Also, it was somewhere between 1960 and 1970 (DARPA and the internet), I think. But Moore stated his law in 1965. So we could have guessed that by 2017 computers would be extremely abundant and fast, connected at high speeds (Moore + internet), and we knew they were versatile.
So we could have guessed they'd have many uses (with a lot of room for imagining), and that maybe we'd need many programmers for that.
Furthermore, see my comment on predicting jobs - I don't think it's that hard (in general): https://news.ycombinator.com/reply?id=13910181&goto=threads%...
But jobless people are a resource that entrepreneurs will find a way to utilize for profit. It's not that the technology will require new kinds of work, but that idle people will enable new kinds of companies.
I think that Uber/Lyft/Prime Now/etc. were enabled because of high unemployment / low participation rates.
That said, despite whatever you think, VCs want return on investment, and they do not really care about who and what they replace. Take environmental concerns: they do their PR thing, but they will still build and invest in projects in places with less stringent regulation, effectively undermining their domestic posture on environmental issues as well as their posture vis-à-vis respect for the working person. They just want money and influence - but to feel conscientiously at ease they may engage in political issues that locally appear progressive.
I visited two Chinese factories this week, one for injection-molded plastics and one for metal. Both were essentially collections of very expensive automated machinery, such as 5-axis steel-boring CNC machines and high-powered laser cutters, with a few gantries, finishing/packaging and storage areas. There were as many people on computers working with 3D models as there were standing on the factory floor. In the latter case, the primary interface was usually a keyboard and mouse or touchscreen.
http://unctad.org/en/PublicationsLibrary/presspb2016d6_en.pd... (page 3, citing UNCTAD secretariat calculations based on International Federation of Robotics, 2015, World Robotics 2015: Industrial Robots, available at http://www.ifr.org/industrial-robots/statistics/, accessed 19 October 2016)
For example, I'm sitting next to someone who is working on robotic apple-pickers. About 75% of the cost of apples is labor. So yes, there would be job loss, but what if apples started to cost 25% of what they currently do? What if we extrapolated that across all fields?
What if we could do this with everything we need - food, consumer goods, housing, etc.? From a consumer perspective, it's a shift as big as the industrial revolution and the move away from an agricultural society. In short: life would get much better and cheaper for all of us, and fast.
But then there's unemployment. What will the masses do? Well, what did the masses that used to be farmers do when we moved from 50% of the United States being farmers to the current levels of ~2%?
The answer historically has been that entirely new types of employment are created, and the pain is remarkably short-lived considering the size of the shift and the decrease in costs, and the Luddites look silly in retrospect. It's usually hard to see looking forward, but somehow it's always worked out.
I'm not sure what the entire picture looks like; I do think that we will need massive re-education as a society, and we're completely unprepared for that. In the past we've replaced unskilled labor with new unskilled roles, and I don't see those roles hanging around anymore. (I moved to San Francisco from a very rural town, and I get stressed whenever I think of the economic future of that small town. Until there's risk-free education available at will, that area is economically doomed.)
But I don't think the proper response is to slow innovation because of pending job loss. If the AI shift will be as big a deal as the Internet, imagine how much better it makes the lives of everyone - is that really something we should actively fight against? I'd argue history suggests we shouldn't.
The sad reality is that there's a nontrivial chunk of the populace that isn't able to pick up highly skilled roles. It also ignores the role of unskilled jobs in providing space for people whose job class has been destroyed and need to retrain (or mark time until retirement).
I'm not advocating slowing innovation to prevent job loss. I am advocating avoiding magical thinking ('there will always be new jobs to go to'): we need to start a serious conversation about what we do with our society when we have the levels of unemployment we can expect in an AI-shifted world. Right now we're trending much more towards dystopia than utopia.
That's great, as long as there are new jobs, or a decent safety net (which I doubt).
>> The answer historically has been that entirely new types of employment are created, ... It's usually hard to see looking forward, but somehow it's always worked out.
It hasn't always worked out - remember horses? And what about the terrible transition period? But let's leave that. Is it really that hard to see looking forward? That's the more interesting question:
Before the industrial revolution, rich people bought a large variety of clothes/foods/things/services (doctor/lawyer/cook/travel/transportation), large houses, etc. So assuming things would get cheaper and more available, you could imagine that regular people would want those things - a lot of them, and maybe with even more variety (more people). So you could imagine a consumer culture.
And the same for the business side: businesses wanted advertising, transportation, financial services, manufacturing, knowledge, etc. So if all those things became cheaper - and better - you could assume they would want more and more of them.
And furthermore - before the industrial revolution there were many problems; the world wasn't ideal (you got sick, life was boring). In general, we could assume many of those problems would get solved, creating new demand.
So so many new jobs! And we know all that - we've seen it happen!
So why the hell can't we think of new jobs?
My job didn't exist 50 years ago. Did yours?
Didn't that work in the opposite way, though? New opportunities appeared that pulled (sometimes forced) people away from farming; it wasn't like there was a bunch of unemployed ex-farmers who suddenly started finding new jobs.
In the 1930s, the farms were themselves collapsing. Mechanized agriculture created both a huge oversupply of produce (which drove down prices) and also ruined the ecology of the plains, which eventually led to the dust bowl. Farmers absolutely were forced off their land: that's where we got Okies, Hobos, the Great Migration, Grapes of Wrath, and all those other subcultures of migrant workers from.
Considering that YC is also funding Basic Income research, I think they'd be one of the better people to be automating stuff.
Here's an article about that idea from the POV of startups, though I think it's more generally just a moral thing we should do:
For instance: Reducing the cost of farm equipment could enable poor rural farmers to rise above subsistence farming. I could think of countless examples.
I'm not agreeing with them, just saying that I think this is where they are coming from.
Why attack YC instead of the ones exploiting those employees, or even the systems that allow exploitation to occur in the first place?
But no, by all means, lock those 'vulnerable employees' into a job in which they will be treated as sub-human for the rest of their lives, all in the name of 'job security' (or whatever you think you're preserving).
The point is that if factory work is automated - they'll have no other possible job, and no safety net, unlike the west. Thus they are vulnerable.
I know, and it's morally disgusting and I wish systems like this didn't exist in the first place.
What I'm saying is that there is no future in which automation does not replace (most) human labor, so if the goal is ROI, then why would YC ever invest in something other than that? It would be delaying the inevitable, would it not?
Let's say YC provides a UBI to all of those displaced workers; now what? A bunch of unemployed are (still) surviving in the same fucked up system as before and the problem still exists.
To somehow pin the problems of the system on YC's investments is ignoring the larger, more pressing problems of socioeconomic inequality and exploitation as a whole.
One of the handholds I'd need is a physics engine which models a robotic arm down to the finest levels of motor control and feedback. I am not a mechanical engineer, and I do not know where to begin with a problem like that. I don't imagine this is hard to build using existing physics engines, but it wouldn't work out of the box. It requires some deep domain knowledge of things like friction, tensile strength, and so on. A system like this would spur progress in robotics immensely.
I am an undergraduate leading an open source autonomous vehicle project, and we're currently almost done setting up simulation testing. Gazebo and ROS are very poorly documented, but they serve their purpose really well when it comes to creating pub/sub and physics simulation for robotics systems.
If you - or, for that matter, anyone - want a cofounder to apply to YC for AI, I'm happy to help.
The offer stands, but given that the deadline for the next batch is days away, you'd need to let me know as soon as possible via a comment; then I'd contact you via email.
"How This Startup Is Disturbing The Equipment Financing Market"
"A startup is disturbing the consulting industry"
"16 Startups Poised to Disturb the Education Market"
The last part reminded me of Google training a bunch of robot arms. I haven't seen much being done with it - does anyone know if anything is being done with the data?
We don't really know how much of each resource it takes to grow something. How much oxygen does one tomato plant take? I think growing in space would require investigating such questions in more detail. Ever since I came across ML and DL, I've wondered if there is a way to train a deepFarmer - something that understands the relations between mineral nutrients and the fruiting body. Sometimes you want to grow stuff that has less of certain things but more of others, e.g. low-calorie, high-volume foods, or high-protein food. An "AI" could figure out how to maximize, say, vitamin C in a tomato for a population that needs more of it, versus learning to make the tomato more water-rich for some other reason.
AI is interesting.
If you could isolate the different rooms sufficiently, you could run machine learning to optimise yield (or Vitamin C).
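(A toy sketch of that idea: treat each isolated room as one experiment, fit a model from growing conditions to the trait you care about, then grow the next batch at the model's best guess. All features and numbers below are invented for illustration:)

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    # One row per room/batch; columns: light hours, nitrogen ppm, water ml/day.
    conditions = rng.uniform([8, 50, 200], [20, 300, 1000], size=(60, 3))
    vitamin_c = (2 * conditions[:, 0]           # pretend light drives vitamin C...
                 - 0.01 * conditions[:, 2]      # ...and overwatering dilutes it
                 + rng.normal(scale=2, size=60))

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(conditions, vitamin_c)

    # Score candidate settings and pick the most promising for the next grow cycle.
    candidates = rng.uniform([8, 50, 200], [20, 300, 1000], size=(1000, 3))
    best = candidates[np.argmax(model.predict(candidates))]
    print("try next (light, nitrogen, water):", best)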
I have doubts as to the commercial viability of this, as farming is pretty low margin, so the overhead requirements for the capital expenditures would require pretty substantial gains in efficiency, which I think just don't exist.
Also, I am interested in cooperating on creating reliable contracted urban demand without the need for retail packaging - http://infinite-food.com/ - currently relocating to Shenzhen.
If automated urban vertical farming happens it will probably happen in China because dense/cheap/economies of scale/rising food prices/rapid urbanization/transition toward smaller households/cultural preference for fresh and less processed foods/less regulatory issues.
Although it's a much more difficult problem to solve being an uncontrolled environment, my company is working on this automation but for traditional farms (http://touch.farm). The first step is gathering rich data from different crops/climates, which is why we're starting by designing the cheapest and most usable ag sensors we can. If we can make sensor tech accessible to a wide range of farmers, we'll gain a rich enough dataset to start applying ML (and farmers and the environment win in the process).
tl;dr: vertical farming is great, but what we really need is innovation/investment in traditional farming.
Ethnic Chinese investor, China focused. China has removed a lot of arable land recently for urban development, people are urbanizing at the fastest pace in human history, and food prices are rising quite rapidly.
Your comments are logical for the US and your project sounds interesting. From what I understand EU agriculture is a sort of middle-ground, with smaller-scale automation.
There are pros and cons to both, but I think you can initially get richer data from vertical farms to train systems, then move on to larger-scale horizontal farms. Though this doesn't mean there aren't low-hanging fruits there either.
In my understanding vertical farming is usually a hydrocultural practice (doesn't use soil). That works well for a lot of horticulture, but not so much for broadacre crops.
Is rice amenable to vertical farming? I.e., could the bulk of China's crops be moved to VF? In my understanding it's typically a hydroponic practice, so I imagine so.
Interesting thoughts on reusing the waste. I toured the Carlton & United breweries the other day and found out it's their waste that has formed the basis of Vegemite since its inception. I.e., there are massive wins to be made when you can "close the loop" on food production. When you've got production, processing, and reuse all under one roof, like you could with vertical farming - gold.
The data gathered using aforementioned robot arms is available here: https://sites.google.com/site/brainrobotdata/home
Google (specifically Google Brain Robotics) is continuing research on this. Exciting time to be alive.
As I understand it, growing crops in a small, sealed environment requires a much more precise understanding of how plants behave than you need for traditional Earth-bound farming.
1. We need to work on big things whether or not they're overhyped and whether or not we're on some precipice.
2. Those who think what marketers call AI is overhyped (myself among them) don't think that it isn't something really big. Even though we have made very little progress since the algorithms that power most modern machine learning were invented fifty years ago, there is no doubt that machine learning has become quite effective in practice in recent years, due to advances in hardware and heuristics accumulated over the past decades. It is certainly big; we just don't think it has anything to do with intelligence.
3. Are there any machine learning experts who think we are on the precipice of artificial intelligence? If so, do they think we can overcome our lack of theory or do they think that a workable theory is imminent?
The factory environment has to be one of the most frustrating for a software developer. There's so much low-hanging fruit, but you have to develop too much yourself if you don't follow the typical industrial idioms. (Throw ladder logic at it, keep at it till it barely works, then run it till the wheels fall off.)
Even something as simple as reporting on a $30K PLC-driven machine is painful. You can't buy a $10K HMI with Wonderware (a still-painful piece of software) just for that, so you just ignore it. In my particular case I developed a simple reporting system on an RPi.
These industrial systems usually can't even speak SQL or MQTT. You have to strap on an extra piece of hardware and spend a ton more money on licensing.
Even deployment is painful. You can't just push new code, because you have no way to mock your industrial systems. Even if you could, you'd need to live-edit your code into the PLC or you'd wipe its current state of recipes and other user data - because your PLC can't use standard SWE tools to manage that data. God help you if you need to roll back.
So I am applying to this. Everything is broken. Where do you even start? Fix the platform, fix the robots, fix the job setup, fix the people.
It is very different from, say, backend development.
1. Is there a firewall between the information companies applying give you and the rest of the OpenAI effort?
2. What's to prevent a partner from seeing a good thing and passing the info along to a potential competitor already funded inside the program?
Overall it seems that this may be used to give OpenAI a strategic competitive advantage, by using incoming application data for market analysis/research/positioning/signaling/etc.
Reading "Computer Power and Human Reason" by Joseph Weizenbaum (published over 40 years ago!) should remind people that attempting to build a "machine-powered psychologist", even if it's something that can be done, is not something that should be done.
It was, against the creator's insistence, taken as the real deal.
Reinforcement learning for self-improving robots is one of their called-out areas? I've never found companies focused on tech or research problems first to be all that successful. And as a research project, it's not very interesting or socially beneficial compared to self-driving cars or robotics applications in medicine.
It all leaves me wondering what YC's strategy is here. Maybe it's easier to establish a fund in AI, get smart people to apply, or that their expected future returns are higher?
Are there any other future verticals for which you would consider domain-specific perks?
Also, what exactly constitutes an AI startup? If you utilize a ML library to handle a small feature of your product, are you an AI startup?
I mean, even if it is overhyped, I think there's a lot to be excited about. Weak AI is still an amazing breakthrough for automation. The trick is to not try to do too much at a time. We do ourselves a disservice by not considering how amazingly efficient humans augmented by ML can be. The research for AI doing everything just isn't there yet, and that's OK.
Time to add the words "deep" or "learn" to your startup name and rake in the dough!
Speech recognition, image recognition, and translation have all made leaps in the last 5 years.
Sure, it's Silicon Valley and a lot of companies will scam their way in, but I absolutely believe there will be another Google coming out soon - or it has already been born - that will capitalize on AI.
Is Google really going to miss improving search with AI with all their billions and brains?
I find a lot of wishful thinking, denial and cognitive dissonance in this sentiment, which is found everywhere.
"Computers will eliminate these jobs, but humans will always have more to do."
If computers can learn at an accelerated pace, what makes you think that by the time you learn that next thing, it won't already be eliminated (or shortly afterwards) by a fleet of computers that use global knowledge? Do you really think that Uber-driver-turned-newbie-architect is going to be in demand against the existing architects with their new AI helpers?
It's not black and white, but the AVERAGE demand for human labor is going down - not because EVERY human job will be eliminated, but because automation allows FEWER people to do the job.
So wages drop. On average.
The only real solutions are either unconditional basic income, or single payer free healthcare / food / recreation.
My strategy is to build a service, free for non-profits to use, that would solve the problem of "if I only had the same data Google engineers had, this product would be perfect". Here is how it would work.
1. Go to my webpage and register a site you want me to index for you. The site URL you enter may already have been registered by another user, but to be sure the data is in my index, register it again. I will now continue to index this site every 24 hours for as long as I live. You need higher frequency indexing? Sure, no problem. You will owe me for the additional cost.
2. Download a client of choice from the website; we have them in C#, Java, Python, R, etc. (see the hypothetical sketch after this list). The client will let you query your own private data as well as the data in the cloud (the data I'm now generating and refreshing every 24 hours). The query language will also let you join or intersect the two datasets. In fact, due to the nature of RPC, you can use your local data and all of the data I'm generating and refreshing as if it were your own.
3. In the end, I will be indexing such a large part of the internet that there will not be much use for Google anymore, or ads. That's the vision.
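(A purely hypothetical sketch of what the step-2 client might look like - the service, class names, and query language below don't exist:)

    class IndexClient:
        def __init__(self, api_key, local_store=None):
            self.api_key = api_key          # identifies which registered sites you may query
            self.local_store = local_store  # your private data, joinable against the cloud index

        def register(self, url, refresh_hours=24):
            """Ask the service to (re)index a site; higher frequency costs extra."""
            ...

        def query(self, q):
            """Run a query joining local data against the shared 24-hour index."""
            ...

    client = IndexClient(api_key="...", local_store="my_private.db")
    client.register("https://example.com")
    pages = client.query("SELECT url FROM cloud JOIN local USING (domain)")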
I'm not American and can't see how I'm a good fit for the YC program this summer. However, I will be needing funds for cloud machines pretty soon, and so far I've found no one at OpenAI to contact. Is there anyone from OpenAI reading this? This should be right up your alley. Care to speak?
The research we would do in my team would be cutting-edge. But we would never even attempt to achieve what Google is achieving when they sniff their Android users. Why would we be cutting-edge? I don't know, but that would be our aim. Here's an example of what we would be doing in the NLP domain:
"Give me the latest sales numbers."
exec sp_daily_sales '2017-03-23'
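(A toy sketch of that kind of mapping - a real system would use a trained parser; this keyword matcher, with an invented intent table, only shows the shape of the problem:)

    import datetime

    # Intent table: keywords -> SQL template (the stored proc is from the example above).
    INTENTS = {
        ("latest", "sales"): "exec sp_daily_sales '{today}'",
    }

    def to_sql(utterance):
        words = set(utterance.lower().replace(".", "").split())
        for keywords, template in INTENTS.items():
            if set(keywords) <= words:
                return template.format(today=datetime.date.today().isoformat())
        raise ValueError("no intent matched: " + utterance)

    print(to_sql("Give me the latest sales numbers."))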
It's like the term "technology". Spoons, chairs, bricks are technology. The modern usage of the word technology is what people used to call "high technology".
Datasets - that's the most important thing in machine learning, and exactly what Google has been collecting for the past decades.
I wanted to start a project about dermatological images, but how would I get that data? Then I decided to start an agricultural project, but again: how do you get a million images to identify a thousand species? Birds? Legal documents? Human faces? Fashion? Speech? Translation? Everything needs a huge dataset.
The tools are there, that's the easiest part.
On the "demand" side, client-companies can offer problems to be solved by AI.
On the "supply" side, startups can provide algorithms solving specific problems.
YC can be a mediator, running the algorithms, and keeping the data of client-companies safe from anybody else (including the AI startups).
Here's an example of such an agency: http://www.aigency.co/about/
Democracy is dead. Long live AI Democracy.
There are plenty of actually innovative products in the AI space.
(Note: I am very skeptical towards TCM and quackery. But also towards AI and ML in general. Any pointers would be great. Why is this the new industry to look out for, for example?)
AI is a tool that we now see solving, with better-than-human accuracy, many problems computers couldn't even touch 10 years ago. This field is very new and there is significant room for improvement and new approaches. IMO, a lot of the excitement comes from the really impressive progress in a short time and a very wide-open field of problems still to be explored.
I would like to ask you and anyone reading this: what exactly gives AI/ML the "edge"? I mean, faster computers are a given these days compared to any other time in history. They are giving us simulations so accurate that people in the '80s would be extremely jealous. However, my question is why these new techniques are superior to the established ones. Why would an AI solution be better? In what sense would it be different from any other method?
Again, I am not anti-AI or anti-ML but I'm interested in hearing opinions. If you could give me concrete examples where AI or ML is better in a way that is different from "faster computing" I would be sort-of satisfied.
For the ones pursuing the TCM line: forget it. I know it's BS and I hope nobody falls for that stuff. However, running s/AI/TCM/g on the post would still not make clear what the "edge" of AI is over conventional computers.
Thanks for the civil discussion.
Prior to 2012, this was done using traditional computer vision algorithms with hand-picked features. The accuracy was often around 30-40%, until the first deep learning model beat the 2nd-place competitor by nearly half the error. Today, almost every competitor is deep-learning-based and accuracy is equal to human classifiers.
Since deep learning has so many applications, it's hard to generalize what the "edge" is against all other approaches. In my opinion, other approaches often require hand-picking features (whereas an ANN learns them) and struggle to recognize patterns in many areas (whereas an ANN does).
If I look at the link you provide, I see an improvement which is great. However, the improvement seems to be somewhat linear year-on-year. In the original post, the first sentence is: "Some think the excitement around Artificial Intelligence is overhyped. They might be right. But if they’re wrong, we’re on the precipice of something really big. We can’t afford to ignore what might be the biggest technological leap since the Internet." Am I right in assuming that there is the expectation of AI/ML to have "hockey stick" growth in the (not-so distant) future? If so, what kind of methods could possibly make this real? If anyone could give me some papers, an essay or some words to search for this would help me a lot. Essentially, I am intrigued why there is this expectation of hockey stick growth. If my premise is wrong, please enlighten me too.
Just one random example: http://lmb.informatik.uni-freiburg.de/Publications/2016/TDB1...
Are you familiar with the old Polaroid instant film?  You took a picture and pulled it out of the camera and it developed over the next few minutes. The picture went from dark grey nothingness to the image you took, like "magic".
Anyhow, so-called "neural" networks (they are no more neural than Kermit the Frog is amphibious) are basically just giant polynomials with changeable coefficients. The input variables are e.g. pixels from an image, and the polynomial is evaluated to generate a single output value indicating that it is, e.g., a cat. (But before backprop this will be uncorrelated, random, useless.)
Now this image-to-yes/no-cat function is very difficult for a human programmer to write out by hand. I mean it's so difficult that no one can do it very well at all.
But using backpropagation, a big pile of images (some of which are cats and some of which are not), and an index of which are cats for the computer to use (humans will have had to already sort and tag the images for this to work, TANSTAAFL), you can run a simple loop to develop a "function" (in the form of coefficients for the polynomial) that can "see" cats. The "training" phase adjusts the variable coefficients until the result correlates with the tagged "catness" of the images, and then you're done. After that, you can input new images to the function and it will be able to indicate whether they depict cats or not.
The function forms out of the data+backprop like an image developing on the film, projected there by the data set.
So yeah, this AI/ML stuff lets us generate useful functions that we have no other way of generating (literally no one knows how.) It's a big deal.
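(A minimal sketch of the loop described above, in plain numpy: random coefficients "developed" by backpropagation until the outputs correlate with the tags. Toy data stands in for images:)

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))                  # stand-in for pixel features
    y = (X[:, 0] + X[:, 1] > 0).astype(float)      # stand-in for human-tagged "catness"

    w = rng.normal(scale=0.1, size=8)              # the changeable coefficients
    b = 0.0

    for _ in range(500):                           # the "training" loop
        p = 1 / (1 + np.exp(-(X @ w + b)))         # evaluate the function: P(cat)
        grad = p - y                               # backprop through sigmoid + log loss
        w -= 0.1 * X.T @ grad / len(y)
        b -= 0.1 * grad.mean()

    print("accuracy:", ((p > 0.5) == y).mean())    # outputs now track the tagged labels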
When you have 10M+ documents, it takes a long time for a team of lawyers to review them. So a company with AI/ML is at a huge advantage in identifying the documents a lawyer needs to win a case.
To get accurate results you need a lot of data, but to crunch that data you need a lot of power, not just speed. A lot of the processing is now done with GPUs (thousands of cores); traditional CPUs have a limited number of cores. Think of a multi-threaded application running on GPUs vs CPUs: the GPU can divide the problem into many more parallel tasks, which makes it practical to crunch far more data - and more data means more accurate results, not just faster ones.
PS: I think people overuse "artificial intelligence" - we're nowhere close - but it excites people more than "machine learning" or "deep learning" or "convolutional network" or just statistics. It's just trying to find an equation that fits the data, like linear regression.
In a sense - and correct me if I'm wrong - I get the feeling that the whole ML thing is only taking off now because we have faster computers. To analyze such amounts of text and make some sense of them, you'd need a massive "database" or "fitted model" to decipher what the legalese means. Only recently has this become possible to create (thanks to Moore's law). It's not like there has been a breakthrough in mathematics (statistics), right? The theory has probably been around for ages but is only now becoming practical.
That's absolutely correct. But to pooh-pooh it on those grounds is like pooh-poohing the Fourier transform around the '70s: the theory had been around for 200 years and only then "became practical" with the advent of transistor-based computers. Now the FT and its relatives are used in everything from audio processing and compression to image filters, statistical analysis, pretty much every form of scientific spectroscopy, MRIs, etc. If you had generally made investments in "FT-related technologies", you'd be doing well.
No one knew you could do it, but yes, it built upon previous work. I was working in the field before and after, and if anyone had asked me whether one could represent all human languages in only 300 dimensions, and have vector composition be meaningful, I'd have laughed at them.
Take using back-propagation to train deep neural networks. People had shown it worked in 1 or 2 layer networks, but despite years of work no one had been able to train anything deep enough to be useful. Then Krizhevsky, Sutskever and Hinton won ImageNet, proved it was possible and kicked off this whole ML craziness.
Neither of these happened exactly because of more powerful computers, or magical math breakthroughs. It was more a matter of lots of hard work by researchers trying many combinations of techniques until something worked.
These techniques, combined with huge volumes of data and - yes - more powerful computers are what have made the difference.
Please don't get me wrong: there is no future for TCM. You are 100% correct about this. However, quackery is still not a thing of the past. Therefore, I would ask you not to debunk TCM or point to its weaknesses (which has been done countless times). I would like to know the AI-specific answers to your three questions (in your words or in a linked essay/paper). In a way, I don't want AI to be the "quackery of computer science": something people use and believe in but that has no real advantages. I would love it to be real.
I am all for a future in which we have AI/machine learning/deep learning, if it actually makes a difference. But I would love to know your opinion on what that difference could be, and a little on how to achieve it. Obviously it seems I am missing something, since YC is eager to invest in this field (or these fields).
If I had general_ai.exe or worlds_best_nlp.py, what should I do with it? It isn't even clear to me how they would be useful.
Often I imagine I had a technology such as general artificial intelligence or world-class NLP. Perhaps I lack imagination, but I have not yet thought of an exciting application. These are tools, much like Perl scripts, and it is not clear to me which will displace more jobs during my life.
But seriously, narrower forms of AI are useful for things like self-driving cars.