It's very nice to see a project I started reach the front page of HN.
I remember starting the project around 2006. Back then, I had a dial-up connection and it wasn't easy for me to watch a video I liked a second time. It took ages. There were Greasemonkey scripts for Firefox that weren't working when I tried them, so I decided to start a new project in Python, using the standard urllib2. I made it command line because I thought it was a better approach for batch downloads and I had no experience writing GUI applications (and I still don't have much).
The first version was a pretty simple script that read the webpages and extracted the video URL from them. No objects or functions, just the straight work. I adapted the code for a few other websites and started adding some more features, giving birth to metacafe-dl and other projects.
The rise in popularity came in 2008, when Joe Barr (RIP) wrote an article about it for Linux.com.[1] It suddenly became much more popular and people started to request more features and support for many more sites.
So in 2008 the program was rewritten from scratch with support for multiple video sites in mind, using a simple design (with some defects that I regret, but hey, it works anyway!) that has more or less survived until now. Naturally, I didn't change the name of the program. It would lose the bit of popularity it had. I should have named it something else from the start, but I didn't expect it to be so popular. One of these days we're going to be sued for trademark infringement.
In 2011 I stepped down as the maintainer due to lack of time, and the project has since been maintained by the amazing youtube-dl team, which I take every opportunity to thank for their great work.[2] The way I did this was simply by giving push access to my repository on GitHub. It's the best thing I did for the project, bar none. Philipp Hagemeister[3] has been the head of the maintainers since then, but the second contributor, for example, was Filippo Valsorda[4], of Heartbleed tester[5] fame and now working for Cloudflare.
Another thanks here. I use youtube-dl so much that I occasionally substitute it for wget when trying to fetch online content (and occasionally discover in the process that it will in fact grab what it was that I was trying to get in the first place -- mostly audio files).
I vastly prefer offline media players to browser-based tools, for a number of reasons: better controls and playback, richer features, uniform features (I don't have to learn each individual site's idiosyncrasies), the ability to queue up a set of media from numerous sources and play them back without clobbering one another, and more.
Hugely useful tool, and I've been impressed as hell as well by its update frequency.
And lift a mug to old Warthog. I miss Joe as well.
> I vastly prefer offline media players to browser-based tools
Why don't browsers provide some way to play local video files, for example by typing "file:///c:/my_video.flv" into the address bar? After all, the browser certainly includes the ability to play the video being downloaded off the web.
If you try "file:///c:/my_video.flv" with Firefox, it opens a dialog box offering to pass the video file to whatever external media players you have installed.
In what seems inconsistent to me, "file:///c:/my_notes.txt" and "file:///c:/my_pic.jpg" will be rendered correctly by Firefox -- it won't offer to open an external text editor or photo viewer. Why is video different?
I tried Google Chrome (v29 on Windows, fwiw), and it behaves just like Firefox; i.e., it'll render local text or image files but won't play local videos. I didn't try Chromium.
Browsers don't natively handle .flv files; they hand the file off to the Flash plugin. If you try that with files the browser can handle (.mp4 videos, .mp3 music, .png images, etc.) you'll see that it works fine.
Filippo here. I'm sad that I didn't have the time to contribute to ytdl much recently, but it was my first time playing a role in a big and popular project, and I'm terribly grateful for that trust. (Thanks Ricardo, thanks Philipp!)
Also, what always impressed me is the incredible amount of random contributions from the community. Ever since we introduced a super-simple plugin system [0], support for the most disparate video sites has poured in as PRs (more than 800 of them!). Also, given how ytdl is structured, the simplest plugin gets you 90% of the tool's power for that video site. Big results with minimum effort.
Finally, to answer the question about the updates in some siblings, there is no active effort against us most of the time (VEVO videos being the notable exception) but supporting such a number of sites mainly by scraping means that breaking changes happen really really often.
Thank you for all your contributions! Can you update that article to use video_id = self._match_id(url) and _VALID_URL = 'https?://...' though? We've also added a fair bit of "official" documentation at https://github.com/rg3/youtube-dl/blob/master/README.md#addi... .
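For anyone curious what one of those plugins looks like, here's a rough sketch of a minimal extractor in that style. The site name, URL pattern and page regexes are invented for illustration; only the InfoExtractor helpers (_match_id, _download_webpage, the search-regex helpers) are real youtube-dl methods:

# Hypothetical example; SomeSiteIE and its regexes are made up.
from .common import InfoExtractor

class SomeSiteIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?somesite\.example/videos/(?P<id>[0-9]+)'

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)
        # scrape the title and the direct media URL out of the page
        title = self._html_search_regex(
            r'<h1[^>]*>([^<]+)</h1>', webpage, 'title')
        video_url = self._search_regex(
            r'"video_url"\s*:\s*"([^"]+)"', webpage, 'video URL')
        # the core takes this minimal info dict and handles the download
        return {
            'id': video_id,
            'title': title,
            'url': video_url,
        }

Everything else - format selection, retries, output templates - comes from the core, which is presumably what Filippo means by the simplest plugin getting you 90% of the tool's power.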
Love your tool. Thanks, like others say I use it at least a few times every week. I just prefer watching longer videos in VLC, easier seeking and no buffering, etc. Also much snappier than Youtube on my old laptop.
Well, this is going to be more of a philosophical answer than a technical one: with the popularity of YouTube nowadays, which is available on every platform and allows for anyone, anywhere to instantly watch a video, the cat-and-mouse DRM game would not succeed. I think DRM is flawed (insert the typical lock-and-key analogy here) but it does work for some situations. For YouTube: probably not. Somewhere, someone talented would crack it and tools like youtube-dl would continue to exist. A recent example is youtube-dl using rtmpdump when available to download DRMed videos.
I use this tool a couple of times every week. I love this tool!
And I must say I'm impressed by its ease of use (basically zero installation effort), and also by the frequent updates.
(I wonder why those frequent updates are necessary, though. Are you under the impression that google is actively working against tools which attempt to download material from youtube?)
Hi, I'm the current lead developer. We update extremely frequently because our release model is different from other software; there is usually little fear of regressions (fingers crossed), and lots of tiny features (i.e. small fixes or support for new sites) that are immediately useful for our users. We've had the experience that almost all users prefer it that way, so we try to enable every reporter to get the newest version by simply updating instead of having to check out the git repository.
As @fillipo said above, there is little if any pushback from video sites. Most of the time, they update their interface (we've gotten better at anticipating minor changes) and something breaks. The recent string of YouTube breaks (for some videos, mostly music videos - general video is unaffected) is caused by the complexity of their new player system, which forces us to behave more and more like a full-fledged web browser. But I think we usually manage to get out a fix and a new release within a couple of hours, so after a small youtube-dl -U (Caveats do apply[0]) you should be all set again. Sorry!
I'm really grateful for this tool: both to the creator, and those that keep making sure it works. I have a swath of my life tied up in a couple of youtube playlists, and every now and then videos disappear (presumably due to DMCA-requests) -- and it's always annoying. With youtube-dl, I can simply download my lists, and then I can be confident that whatever obscure (or not) tune or cover version I found some years ago, will be in my archives (I've yet to automate this -- but my goal isn't really to archive all the things -- I generally add few songs at a time).
Anyway, if you didn't write this tool (and update it) -- I'd have to do it myself. And I'd rather not do anything myself ;-)
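For the record, that kind of archiving is easy to script. A rough sketch using youtube-dl's Python embedding API (option names follow its README; the playlist URL and file names are placeholders) - the download archive file records finished video IDs so re-runs only fetch whatever was added since:

import youtube_dl  # the youtube-dl package, used as a library

ydl_opts = {
    'download_archive': 'archive.txt',            # skip IDs already downloaded
    'ignoreerrors': True,                         # keep going past deleted/blocked videos
    'outtmpl': '%(playlist)s/%(title)s.%(ext)s',  # one folder per playlist
}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/playlist?list=PLACEHOLDER'])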
The current team should have more information, but I think most updates are due to other sites breaking and new sites being added, rather than due to YouTube, plus bug fixes. I don't think YouTube is actively working against tools like youtube-dl, at all.
I agree. It's a consequence of the software being very volatile, having thousands of users and supporting so many sites. There's something to fix or to add every day.
Remarkably, YouTube makes scripting downloads very easy. The script below needs only sed and some http client and it has worked for years. I have only had to change it once when there was a change at YouTube; the change was very small.
#!/bin/sh
# this script uses sh, sed, awk, tr and some http client
# here, some http client = tnftp
# awk and tr are optional
# wrapper for tnftp to accept urls from stdin
ftp1(){
while read a;do
ftp ${@--4vdo-} "$a"
done;}
# uniq
awk1(){ awk '!($0 in a){a[$0];print}' ;}
# some url decoding
f1(){
sed '
s,%3D,=,g;
s,%3A,:,g;
s,%2F,/,g;
s,%3F,?,g;
s/^M
//g;
# ^ that's Ctrl-V then Ctrl-M in vi
'
}
# remove redundant itags
f0(){
sed -e '
s/&itag=5//;t1
s/&itag=1[78]//;t1
s/&itag=22//;t1
s/&itag=3[4-8]//;t1
s/&itag=4[3-6]//;t1
s/&itag=1[346][0-9]//;t1
' -e :1
}
# separate urls
f2(){
sed '
s,http,\
&,g'
}
# remove unneeded lines
f3(){
sed '
#/^http%3A%2F.*c.youtube.com/!d;
/^http%3A%2F.*googlevideo.com/!d;
/crossdomain.xml/d;
s/%25/%/g;
s,sig=,\&signature=,;
s,\\u0026,\&,g;
/&author=.*/d;
'
}
# separate cgi arguments for debugging
f4(){
sed '
s,%26,\
,g;
s,&,\
,g;
'
}
# remove more unneeded lines
f5(){
sed '
/./!d;
/quality=/d;
/type=/d;
/fallback_host=/d;
/url=/d;
/^http:/!s/^/\&/
/^[^h].*:/d;
/^http:.*doubleclick.net/d;
/itag.*,/d;
'
}
# print urls
f6(){
sed 's/^http:/\
&/' | tr -d '\012' \
|sed '
s/http:/\
&/g;
'
}
f8(){
sed 's/https:/http:/'
}
FTPUSERAGENT="like OSX"
case $# in
0)
echo|$0 -h
;;
[12345])
case $1 in
-h|--h)
echo "url=http[s]://www.youtube.com/watch?v=..........."
echo usage1: echo url\|$0 -F \(get itag-no\'s\)
echo usage2: echo url\|$0 -g \(get download urls\)
echo usage3: echo url\|$0 -fitag-no -4o video-file
echo N.B. no space permitted after -f
;;
-F)
$0 -g \
|tr '&' '\012' \
|sed '
/,/d;
/itag=[0-9]/!d;
s/itag=//;
/^17$/s/$/ 3GP/;
/^36$/s/$/ 3GP/;
/^[56]$/s/$/ FLV/;
/^3[45]$/s/$/ FLV/;
/^18$/s/$/ MP4/;
/^22$/s/$/ MP4/;
/^3[78]$/s/$/ MP4/;
/^8[2-5]$/s/$/ MP4/;
s/.*?//;
'|awk1
;;
-g)
while read a;do
n=1
while [ $n -le 10 ];do
echo $a|f8|ftp1||
echo $a|f8|ftp1 &&
break
n=$((n+1))
done \
|f2|f3|f1|f0|f4|f5|f6|f1|sed '/itag='"$2"'/!d'
done
;;
-f*)
while read a;do
n=1
while [ $n -le 10 ];do
echo $a|$0 -g ${1#-f} |ftp1 $2 $3 $4 $5 ||
echo $a|$0 -g ${1#-f} |ftp1 $2 $3 $4 $5 &&
break
n=$((n+1))
done
done
;;
esac
esac
There are separate scripts for extracting www.youtube.com/watch?v=........... urls from web pages to feed to this script.
The problem is that this only works for some YouTube videos (for example it will fail for basically all VEVO videos), not to mention maintainability issues.
I had to look up what "VEVO" was. A joint venture of several major record labels and Google launched in 2009.
Personally I have no need for "VEVO" videos. Nor do I ever encounter VEVO youtube urls posted to websites, like HN. I wonder why?
As for maintainability, I beg to differ. The raison d'etre for this script arose out of frustration that early YouTube download solutions, e.g. gawk scripts, clive, etc., kept breaking whenever something at YouTube changed. I got tired of waiting for these programs to be fixed, if that ever happened.
I can fix this 164 line script faster if YouTube changes something than waiting for a third party to fix something they developed that is far more complex. Moreover, it does not rely on Python. Is there something wrong with DIY?
I see someone posted a link in this thread to another 208 line script, yget, that uses sed and awk. This further demonstrates the relative simplicity of downloading YouTube videos.
An alternative to goofing around on the youtube.com web site, scrolling constantly and getting hit with advertising and endless lists of "related" videos is to search and retrieve youtube urls from the command line via gdata.youtube.com.
Despite its name, youtube-dl doesn't just download from YouTube but from a ton of different sites as well [1]. The rate at which this project keeps up with changes is incredible.
It seems like quite a modern success story for the classic "Cathedral and the Bazaar" model of open source development structure and motivations.
As I recall, it was originally written by one person (Ricardo Garcia) in 2008 and worked only on YouTube using (by later standards) relatively simple heuristics to find the URL to extract the video. But it's catalyzed an explosion of interest in every aspect of the problem: tracking changes to the HTML of the video sites, adding support for more video sites, figuring out indirection and parsing through multiple pages and HTML objects, making the tool much more multiplatform and easier to install and update...
It's attracted hundreds of contributors (many of them motivated by a personal desire to be able to use the tool on a different site, or to fix a bug that was preventing them from downloading video in a particular rare case) and maintained an incredibly rapid pace of development.
This is exactly why I contributed. In fact this morning, by coincidence, I had my first ever PR accepted, and it was for this piece of software [1]. I was using youtube-dl to download VK videos, but I really wanted to be able to download an entire playlist -- in the same way you can for YouTube. It didn't exist, so I just got stuck in and did it myself. It really helped that there were many other examples I could look at from other sites, and the maintainer of the package provided me with some very good feedback.
This kind of project that requires a lot of fairly laborious work to create support for many different information sources is a particularly good candidate for an open source project.
I'm not sure if "leeching copyrighted content" was the kind of motivation that Eric Raymond had in mind for future open source projects when he wrote Cathedral and the Bazaar.
When I was in the Himalayas earlier this year, between poor WiFi and sketchy 3G, it was the only practical way to watch at all. Having the file offline was an added bonus that meant others could benefit too, so big thanks to rg3 & current devs on behalf of a lot of folk who've never been near HN themselves.
Yeah, because at the core of human civilization lies a respect for copyright, a BS notion that was developed for exploiting the restrictions of (analog) physical formats for profit...
Copyright law was developed in the 1700s precisely to prevent people from exploiting the limitations of physical formats for profit.
It's the opposite of what you've stated.
Authors, composers and publishers needed protection against cheap printing presses that would just print anything that was popular and flog it in the marketplaces.
>Authors, composers and publishers needed protection against cheap printing presses that would just print anything that was popular and flog it in the marketplaces.
The limitations I mention are the difficulties and cost of the printing itself.
What authors wanted was to restrict who can print their work -- but it's not true that authors "needed protection" because printing presses started appearing.
That makes it sound like authors were paid for the work until those "cheap printer" pirates appeared. But on the contrary, it was the invention of the printing presses themselves that gave authors an industry in the first place -- for millennia authors just wrote for free.
Yes; a good read is "The Surprising History of Copyright and The Promise of a Post-Copyright World" [1] which I think is from Karl Fogel, the author of the (Free, Libre, CC-BY-SA) book "Producing Open Source Software" [2]
The reason the industry of paid authoring could develop is because of copyright. Without it, all the value of the new printing industry would have accrued to the printers, and none to the authors.
At least put some effort into making the analogy more accurate:
"it's not unlike an anti-capitalist punk rocker STEALING her clothes at H&M".
That said:
First, I fail to see the contradiction between being an "anti-copyright freedom fighter" and "downloading stuff from YouTube".
Someone somehow convinced you that anti-copyright people only like copyleft works? The very idea of being anti-copyright is wanting to abolish all copyright.
Second, what's with the "anti-copyright freedom fighter" strawman? As if someone needs to be that to want to download stuff off of YouTube?
I'm not sure "leeching copyrighted content" is a fair description of what Youtube-dl does. Yes most of the content you will download with it is copyright, but it is content that you already have a right to see, and in most cases the expectation is that you would maintain your right to see it for as long as Youtube (or other sight) remains active. The main difference is that Youtube-dl allows one to view the content with a program other then the browser. I suspect that few people uploading to a video sharing site did so with the intention of requiring people to view it using that sites player, but rather did so with the intention of people viewing it, and the player restriction was incidental.
The one place I can see where this breaks down is in advertisements, but I consider that to fall into the incidental results. (Although Youtube-dl does have a --include-ads option)
>it is content that you already have a right to see //
You have opportunity, that's not the same as a right. The content supplier is under no obligation to provide content to you, ergo no "right to see" that content.
That said, personal time-shifting and format-shifting should IMO be a normally allowed part of the copyright deal.
There's not much motivation to create a non-porn site YouTube clone. You have to believe you can do it better or need to not host your content on YouTube, and you have to be able to do it.
Porn has the need (the mainstream providers generally delete porn) and the sheer resources to do it.
Sorry! The problem is that our userbase is split about wanting the playlist or the video. You can create a file ~/.config/youtube-dl.conf with the content --no-playlist so that you don't have to type it out every time.
Correct. Each class that handles information extraction for a different site defines a regexp to match the url against. (Note: Some of the regexps don't handle the http-vs-https distinction, so you might have to remove the 's')
Awesome, thanks. Should have tried before asking. :)
Searched my system drive for these .py files, but found nothing, so I figured something was missing.
All of the modules are compiled into a single file for the youtube-dl command. I've never looked into what they are using to do this, but you could poke your head into the repo to check it out.
We're simply making use of Python's ability to load a module from a zip file [0]. Therefore, the generation[1] is just zipping up all the files and prepending a shebang.
Might be less confusing if you append '.zip' in the first two commands:
zip --quiet youtube-dl.zip youtube_dl/*.py youtube_dl/*/*.py
zip --quiet --junk-paths youtube-dl.zip youtube_dl/__main__.py
When you echo the shebang, overwriting the file, I was thrown off. I was thinking, "Why did you just zip all those contents into the file only to throw them out?" Then I saw the `cat` line, and it made sense that the `zip` command appends ".zip" to the file name, so the archive itself was never clobbered.
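For anyone who wants to reproduce the trick outside the Makefile, here's a rough Python sketch of the same steps (zip the package, put __main__.py at the archive root, prepend a shebang, mark it executable); the real build differs in detail:

import os
import stat
import zipfile

# collect the package into a zip, keeping the youtube_dl/ package path
with zipfile.ZipFile('youtube-dl.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk('youtube_dl'):
        for name in files:
            if name.endswith('.py'):
                path = os.path.join(root, name)
                zf.write(path, path)
    # a __main__.py at the archive root is what makes the zip runnable
    zf.write(os.path.join('youtube_dl', '__main__.py'), '__main__.py')

# prepend the shebang; the zip format tolerates arbitrary prefix data
# because its central directory sits at the end of the file
with open('youtube-dl', 'wb') as out:
    out.write(b'#!/usr/bin/env python\n')
    with open('youtube-dl.zip', 'rb') as src:
        out.write(src.read())
os.remove('youtube-dl.zip')
os.chmod('youtube-dl', os.stat('youtube-dl').st_mode | stat.S_IEXEC)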
ytplay() { youtube-dl "$1" -o - | vlc - #<enter here>
> } #Where "> " is bash prompting for more/end of definition
In a file (eg: .bashrc), I'd personally prefer:
ytplay() {
youtube-dl "${1}" -o - | vlc -
}
Note that there's very little difference between "$1" and "${1}" in practice; I tend to prefer the latter for consistency with the recommended[1] practice of using ${NAME} rather than $NAME, and to differentiate something like "${1}${2}" vs "${12}", as you might if $1 were a name and $2 an extension, or $1 a url-scheme and $2 a host-name (http://hostname -> "${1}${2}" with 1="http://" and 2="hostname").
Quick protip for those wondering, the simple command to download an entire youtube channel is like so:
$ youtube-dl -citw ytuser:LastWeekTonight
I downloaded a channel with 121 videos, 4.4 gigs, took 26 minutes, so 2.8MB/s average. Curious if the Youtube people will shrug it off and free the beer or rate limit or more aggressively combat this.
Also, to get the total number of supported sites:
$ youtube-dl --extractor-descriptions|wc -l
466 (wow)
As this can run on anything with Python, I guess that includes Android[0], iOS[1], Windows Phone[2], heck even Blackberry[3]??
Thanks pmoriarty for submitting this. Awesome and I'm just getting started poking around with it. Makes me really want to learn Python, seems that's what all the fun stuff[4] is coded in.
Please don't pass in -citw [0]! I have personally run youtube-dl on android, works fine. (Disclaimer though: I am the current lead developer, so may have missed a pitfall or two).
fredoliveira: Eyeballing those lectures, which are enormous, youtube-dl is helping you more easily teach people to make their lives and careers better, instead of just downloading Taylor Swift songs. So youtube-dl is indeed being used for Good.
This may sound surprising but via youtube-dl I bought more music than before.
If I find a new band that I might like, I search for Youtube videos first. The non-official videos often show just the cover of the CD or some useless slide show, so I extract the audio to have it in my playlist.
Once I've decided that I like the music, I head over to Bandcamp or Amazon to buy the mp3s.
As an example: I lately bought four digital cds from progmetal act Redemption because someone upped their cd 'This Mortal Coil' to Youtube.
While -f 141 is of course perfectly fine, may I suggest -f bestaudio ? That should work fine for non-YouTube sites (soundcloud or so), and will get you a better version should YouTube one day add it. If you really prefer 141, you can also use -f 141/bestaudio to fall back to bestaudio if 141 is unavailable.
Converting it to MP3 is a bad idea. Youtube uses other lossy codecs inside, such as Vorbis and AAC. So reencoding it will degrade the quality. The best option is to keep the audio as is.
When you convert audio from a lossy format to a lossy format (from YouTube's native AAC streams to MP3,) you always end up with worse quality than the original, regardless of the encoding settings. Since pretty much everything can play AAC, there's no point in converting it in the first place. Just remove --audio-format mp3, and you'll get an .m4a straight from YouTube with no conversion step.
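If you'd rather script that, here is a small sketch using youtube-dl's Python API (URL is a placeholder; option names per its README) with the format string Philipp suggested above, so the native AAC/m4a stream is kept and nothing is re-encoded:

import youtube_dl

ydl_opts = {
    # take format 141 if available, else fall back to the best audio stream
    'format': '141/bestaudio',
    'outtmpl': '%(title)s.%(ext)s',
}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=PLACEHOLDER'])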
That's right, usually the best quality audio you can get from youtube is in m4a format. The only problem I'm having is youtube sets the format inside the m4a to dash, which some stupid players (including iTunes) don't want to play. So I have to run something like
ffmpeg.exe -i "Keith Wiley - The Fermi Paradox, Self-Replicating Probes, Interstellar Transport Bandwidth-AUk6ZlePtQA.m4a" -c:a copy 2.m4a
As others have pointed out, you'll need -- in this case. However, there's really no reason why youtube-dl should not detect this common problem (we also try to detect when users forget to quote URLs with ampersands). Update to youtube-dl 2014.11.23.1 or newer and try this again ;)
By the way, the GitHub issue tracker (https://yt-dl.org/bug ) is usually a better place to report issues. But just for youtube-dl reaching #1 on HN, I'll make an exception.
I've used this quite extensively. It's less critical for YouTube now that almost all YouTube videos work with the HTML5 player, but it helped quite a bit when every other video required Flash. Still necessary for many third-party sites as well.
It'll also download an entire playlist, and add sequential numbers at the beginning (with the -A option).
Yeah, the format enumeration works quite well too. I've used it to download the "original" format for videos available in higher-than-1080p resolutions, as well as using the --extract-audio option.
No one pointed out that it has a do-whatever-you-want license. This is the thing which bothers me most. There are gazillions of shitty youtube downloaders (paid, free and adware supported) out there that people still use and the code is being powered by the work of open source developers.
I don't think the license would change anything, even if they chose something like AGPL. The sites would either ignore it (hard to prove infringement) or put a small link with their source. They don't rely on being proprietary anyway.
Well, it would at least do something to limit such creations, and who knows - if copyright attribution had been placed on many of those projects, youtube-dl would have received more attention. I'm surprised it wasn't already posted to HN.
As a stalwart defender of the WTFPL in my small circle of developer friends (who mostly think I'm crazy for it), you are now added to the list of people I can use to back up the fact that I'm not. ;)
Seriously though, awesome project, used it for a while.
Another example for you: libcaca. [1]
However, the GNU people recommend the X11 license for small programs and Apache 2.0 if you've already decided to use a permissive license on a large program [2]
I use this to replace noisy audio on a smartphone recording of dancing lessons with a high-quality version from a youtube video, automatically: http://youtu.be/AVIHpaNQLS0
I mean I understand that all the separate steps are stuff we've seen is easily possible nowadays, but putting it together in a single UI makes it (roughly) 3000x as useful!
I want to try it, anyone had luck compiling it for Linux?
EDIT/update: Well I gave it a try, grabbed QtCreator, loaded the project, not much luck. Some issues with the "phonon" library, it seems. I'm not very good with getting C++ stuff to work when it gives build errors. I did spend about half an hour fiddling and googling error messages, but now it's time to give up, sorry :)
I'm writing this update to let you know that one of the errors I did manage to fix, is that Windows has case-insensitive paths/filenames, while Linux does not. Apparently the path for the phonon library is lowercase, so you should `#include <phonon>` lowercase. I'll try to leave a Github issue about this.
That didn't help much (complaints about the State enum in soundfix.h) which I tried to fix by also putting `#include <phonon>` in the soundfix.h. I'm not sure if that was right at all, but it did seem to fix that particular problem. As a result I was greeted with a whole bunch of other (I think unrelated?) errors about some types not being strictly compatible or something. That is where I check out until I know more about C++, decided it had been long enough, and just writing you a little message to let you know how it went.
However, it made me install and try QtCreator, something that I was meaning to do anyway. So that's a win :)
Youtube-dl is the biggest pre-built thing I use in GifMachine[0] after ffmpeg, and I've used it in innumerable projects since then. I love youtube-dl, it's fantastic!
Is there a plugin version of this program, which would dynamically change any (supported) flash video reference to an HTML5 video tag? That way I can get rid of flash completely.
youtube-dl gets updated very frequently, and the version that comes with your distribution (e.g. Ubuntu) is usually out of date and often doesn't work on many sites. So it's better to download it from the source and update it using "youtube-dl -U".
That's true for the version in Debian stable as well. The package in unstable tends to pick up upstream updates pretty quickly, though, especially when they're needed to fix site support.
The amount of time it takes to keep up with all of the changes big sites make is impressive.
At some point I decided to write something similar in Ruby ( https://github.com/rb2k/viddl-rb ) and I'm kind of ashamed of how broken things are from time to time.
Video hosting sites don't have APIs and reverse engineering the sources for the videos is like shooting at a moving target.
OK. I just realized this does something really cool. I've been troubled with 1080p videos as they no longer contain audio. They're separated and YT uses DASH to join the audio+video stream.
Youtube however is switching away from fixed video files to separate streams to be used with MSE. You can note that higher resolution video is not available the old way. So downloading that won't be so straightforward.
youtube-dl supports muxing the separate video and audio streams from YouTube. You just need a recent ffmpeg/avconv and youtube-dl -f bestvideo+bestaudio.
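As a quick illustration, the same invocation driven from a script (placeholder URL; a recent ffmpeg or avconv has to be on the PATH so the two streams can be muxed):

import subprocess

url = 'https://www.youtube.com/watch?v=PLACEHOLDER'  # placeholder URL
# download the best video-only and audio-only streams and let youtube-dl
# hand them to ffmpeg/avconv for muxing into a single file
subprocess.check_call(['youtube-dl', '-f', 'bestvideo+bestaudio', url])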
By the way, FFmpeg has support for the 0.4 branch of libquvi, so if you built it with --enable-libquvi you can ffmpeg -i http://youtube... (assuming libquvi and its scripts still work)
Good question, I didn't keep track of its development. Your best option would be asking developers if they are planning to work on it further or not.
> By the way, FFmpeg has support for the 0.4 branch of libquvi
Same as mpv I think. You can play Youtube videos with it directly:
mpv "$url"
Which is kind of fun, since you can do tons of things that aren't available in the browser player - looping, playing only portions of the video and all other things which mpv can do.
The bit that invokes youtube-dl is a Lua plugin. It checks if the URL starts with http:// or https://, and if so, invokes youtube-dl -J on the command line to dump JSON information about the video.
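The same mechanism is easy to reuse outside mpv. A small sketch (placeholder URL; it only reads the title and counts the available formats from the JSON that -J prints):

import json
import subprocess

url = 'https://www.youtube.com/watch?v=PLACEHOLDER'  # placeholder URL
# -J / --dump-single-json prints the whole info dict without downloading
info = json.loads(subprocess.check_output(['youtube-dl', '-J', url]).decode('utf-8'))
print(info['title'])
print(len(info.get('formats', [])), 'formats available')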
I think quvi might be dead. mpv-git actually dropped libquvi a little while back because it was buggy and development was inactive. It's been replaced with a youtube-dl based downloader, so in mpv-git, this would invoke youtube-dl to get the video:
I wonder if there is any way to specify to it which stream to pick, since Youtube has many options. By default it picks H.264 / AAC, at least when it's using quvi.
get-flash-videos supports Hulu; I've been meaning to try it out [1] because flash is Satan's anus. But I'd still keep youtube-dl for downloading & converting YouTube videos.
Well, you can get a cheap VPS in the US, or UK, or wherever the content is available, and run youtube-dl there. You can also use a proxy[1]. Unfortunately, I think socks proxies are not supported yet[1].
Yget is an alternative I've found to be more reliable. It's just for youtube though, and doesn't support all the things (such as bypassing age restrictions).
Back when I used youtube-dl, it seemed like Google changed something about Youtube every couple months and youtube-dl would break. Yget survived many more of these changes.
Getting youtube-dl to run has been a pain for some reason, for example I always seem not to have the right version of Python available. Yget doesn't depend on such volatile tools.
It's easy to just throw the streams of Soundcloud songs against whatever music player you have at hand.
MPD even has a playlist plugin, which correctly handles basic soundcloud.com urls and handles all the API stuff for you.
On OS X, you can use afplay (/usr/bin/afplay) to play those downloaded videos' music (in a headless player). This is pretty useful if you listen to youtube music at work.
? hence the "so special" in my question. There are plenty of these tools already and it's been so for long time. As a hacker you should know. So again, what's so special about this?
[1] http://archive09.linux.com/articles/114161
[2] http://rg3.name/201408141628.html
[3] https://github.com/phihag
[4] https://github.com/filosottile
[5] https://filippo.io/Heartbleed/