Ncdu – NCurses Disk Usage (yorhel.nl)
446 points by thecosmicfrog on Dec 7, 2022 | 162 comments



I use ncdu almost daily across many different systems. Especially handy when running macOS on a <512GB volume and you need to keep an eye on ballooning caches and other "helpful" cruft.

`rmlint` and `jdupes` are also seeing a lot of use here lately, reclaiming terabytes of space from years of sloppy media organization (or the lack thereof!)


Yeah I prune stuff with ncdu weekly on my work 0.5T machine. Oddly I don’t have to use it so much on my 2T personal machine.


I love ncdu and install it on all of my machines. But at the risk of sounding like a broken record - why isn’t its functionality baked into stock file managers on Windows and Linux?

Why can’t either of these systems do what the Mac has been able to do since the 90s, and display the recursive size of a directory in bytes in the file manager, allowing one to sort directories by recursive size?

I am not exaggerating when I say this is the single biggest roadblock to my permanent migration to Linux!

(I would love nothing more than to hear I’m wrong and “you fool, Dolphin can do that with flag: foo”!)


The Bash CLI is my file manager. So I've got ncdu built right in. Try it, you'll love it. I almost never touch the rodent.


Except that running "ls" doesn't show you the directory content size, and "ncdu" requires the user to make a cup of tea first. The above poster is right in saying that having this built into the filesystem metrics would be a huge win.


But `du -h -d1` does, though, or `tree --du -h`.


The time to scan with ncdu on a directory with a massive number of directories and files can be long, and you don't get progressive stats.

I made jsdu to get progressive (and recursive) size.

I mostly only use jsdu on a few top-level directories, and use ncdu for the rest or after the stats are cached by jsdu.

You can install jsdu with "sudo npm i -g jsdu" or run it without installing with "npx jsdu"


duc!

use a cronjob for `duc index`, then you can use `duc ui` to see the index. it doesn’t immediately update on change so it’s not quite what you’re looking for, but it might be the closest thing.
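
something like this in your crontab, for example (a sketch; the schedule and the indexed path are placeholders):

  # reindex /home nightly at 03:00 so `duc ui /home` stays reasonably fresh
  0 3 * * * /usr/bin/duc index /home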


Wow thank you for that! This whole thread is great - I've been missing a utility like this for ages but never took the time to go hunting for it.


If I ever need to know a directory size, du -sh foo/ is already muscle memory, and if OP needs it often he can alias it.
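
For example (a sketch; the alias name is arbitrary):

  # in ~/.bashrc or ~/.zshrc
  alias dus='du -sh'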


I assume the restriction is file system related. It's probably not always cheap to calculate the full size of a directory, especially if it's heavily nested.

Windows will tell you the size of a dir in the right click -> properties menu, but it takes a while to calculate for large/complicated directories.


WizTree (https://diskanalyzer.com/) on Windows seems to be faster than the other tools I tried.


>Windows will tell you the size of a dir in the right click -> properties menu, but it takes a while to calculate for large/complicated directories.

Caja (and probably Nautilus and other Nautilus-based managers) does that as well. But although it can show the size in Properties, arranging by size doesn't take it into consideration. (Rather, it just sorts directories by the number of items inside.)


Just lie to me a little bit. I wouldn't mind seeing quick cached approximations that assume I haven't changed the disk between reboots, or that I've only just moved huge files around (and the OS would know anyway)


> Why can’t either of these systems do what the Mac has been able to do since the 90s, and display the recursive size of a directory in bytes in the file manager

Many file managers can do that, although for obvious reasons it's built as a contextual action on a single directory rather than an always-on feature that would slow down the filesystem horribly by accessing it recursively on many levels. In Thunar (XFCE's file manager), for example, it's accessible from the contextual menu opened with the right mouse button on a directory name; other file managers work in a similar way.

I'm sure filesystems could be modified so that any write would automatically update a field in the containing directory, which would quickly propagate to the upper levels, but that would imply many more write accesses, which for example on SSD media would do more harm than good.


Mac doesn't for me, for a folder it shows size as "--".


Open View Options (Cmd+J) and tick "Calculate all sizes". May take a second if you have some huge directories.


You fool, Dolphin's predecessor Konqueror had a directory view embedding the k4dirstat component! There you can sort by subtree percentage, subtree total (bytes) and amount of items, files, subdirs.

This broke some time in the past (KDE really jumped the shark) and is now available as stand-alone applications only: k4dirstat and filelight. The MIME type inode/directory is already associated with those, so you can run them from the context menu of a directory anywhere, including file managers.


I'm not sure what exactly you're asking for, but Dolphin shows me the size of a directory. You may have to right click and update it from time to time.


Almost every distro has a tool called "Disk Usage Analyser" that does exactly what you want. Very helpful when you start getting "no space left on device" errors.


ranger has this built in


Yeah, but I'm not aware of any distros that use it as a stock file manager.


One thing ncdu does not improve on over du | sort is that it still needs to scan the full directory structure before even displaying any result.

I would like something that starts estimating sizes immediately, and then refines those estimations as it is able to spend more time. I tried writing it myself, but I ended up not quite knowing how to go about it, because just getting the count of files in a directory takes about as long as getting the total size of said files...

(Another problem is that file sizes are notoriously fat tailed, so any estimation based on a subset of files is likely to underestimate the true size. Maybe by looking at how the estimation grows with more data one can infer something about the tail exponent and use that to de-bias the estimation?)


If you're okay with a GUI, I think that's how baobab works. I think it only shows the intermediate updates if the disk is slow enough, as I remember it doing that in the past, but checking my SSD just now it didn't.


I will take a look. Thanks!


btdu


This one? https://github.com/CyberShadow/btdu

Seems to be specific to btrfs


Interesting idea to use physical addresses on the file system as the sample space, though!


If you like ncdu, you might also like dua[0]. You can run `dua i` to get an interface similar to ncdu, and can also run `dua` to list file sizes in the current directory, similar to `du`. Or `dua filename` to get the size of a given file.

[0] https://github.com/Byron/dua-cli


The UI is not as good, but it’s multi-threaded and much faster.


Not in any repo, I'll pass, thanks


Distinctly not true. `nix run github:nixos/nixpkgs/nixos-unstable#dua`


Actually it is in the Arch community repositories and seems to be quite a bit faster than ncdu, so I will keep it in my toolbox for now.

Pain points are that there seems to be no progress bar in interactive mode, the UI is (imho) ugly/unintuitive (for instance the usage bar seems to be relative? and the shortcuts look like glyphs), and there are functions missing (like exclude patterns, though you can exclude dirs!).

So it won't replace ncdu, but if it gets an interactive progress bar maybe it will be on all my machines (with Arch)


/usr/ports/sysutils/dua-cli


It's in the Void and Arch repos


If you use a Btrfs filesystem with snapshots, I can recommend btdu as an alternative. Advantage: it can handle duplicate files (snapshots), which only occupy disk space once.

https://github.com/CyberShadow/btdu


More interesting than its support of Btrfs features is its unusual statistical approach:

> btdu is a sampling disk usage profiler […] Pick a random point on the disk then find what is located at that point […] btdu starts showing results instantly. Though wildly inaccurate at first, they become progressively more accurate the longer btdu is allowed to run.


At first blush, this beats the tar out of using other deduplicator strategies on btrfs. I'm looking forward to checking this out more thoroughly.


This looks very useful! I wish there were something similar for ZFS.


Have used btdu - very useful on Btrfs systems with lots of snapshots.


While ncdu does the job I've found gdu (similar tool written in Go) significantly faster for larger directories.

https://github.com/dundee/gdu


Someone's gotta write rdu.


OP’s link has a list at the bottom with alternatives (that in itself is very cool). It lists both gdu and a Rust alternative called dua.


See also broot, dust, and pdu for Rust alternatives.


The realest reply


You're probably more likely to see jsdu first, and then I'll have my buddy explaining to me why it's actually faster.


Still only pretty fast, though.


Really nice tool. I didn't know of the rewrite in Zig; the linked blog post discussing the rewrite was good: https://dev.yorhel.nl/doc/ncdu2


Broadly, is anyone aware of a generalized list of "new versions of classic tools?"

There are so many now that are better than the old stuff; I almost feel like a unified round up of these, maybe even in a distro form, might be good for linux enthusiasts, newcomers, old-timers, etc.


Zellij instead of tmux (not necessarily better, but it's easier to use)

Xonsh instead of bash (because you already know Python, why learn a new horrible language?)

bat instead of cat (syntax highlights and other nice things)

exa instead of ls (just nicer)

neovim instead of vim (just better)

helix instead of neovim (just tested it, seems promising though)

nix instead of your normal package manager (it works on Mac, and essentially every Linux dist. And it's got superpowers with devshells and home-manager to bring your configuration with you everywhere)

rmtrash instead of rm (because you haven't configured btrfs snapshots yet)

starship instead of your current prompt (is fast and displays a lot of useful information in a compact way, very customizable)

mcfly instead of your current ctrl+r (search history in a nice ncurses tui)

dogdns instead of dig (nicer colors, doesn't display useless information)

amp, kakoune (more alternative text editors)

ripgrep instead of grep (it's just better yo)

htop instead of top (displays stuff nicer)

gitui/lazygit instead of git cli (at least for staging, nice with file, hunk and line staging when you have ADHD)

gron + ripgrep instead of jq when searching through JSON in the shell (so much easier; see the sketch after this list)

keychain instead of ssh-agent (better cli imo)

Wrote this on the train with my phone by checking https://github.com/Lillecarl/nixos/blob/master/common/defaul... for which packages I have installed myself :)
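
To illustrate the gron + ripgrep item above, a minimal sketch (response.json and the search pattern are made up):

  # flatten JSON into greppable assignments, filter, then rebuild the JSON
  gron response.json | rg 'user' | gron --ungron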


Tig also belongs in this list. An ncurses git repository browser that I use all the time.


> Xonsh instead of bash (because you already know Python, why learn a new horrible language?)

Exactly, one horrible language is enough!


I'm a big fan of using micro[1] instead of nano/vim as the default command line text editor.

[1] https://micro-editor.github.io/


fd instead of find (fast, good defaults) https://github.com/sharkdp/fd


tuc instead of cut (cuts text better than `cut`, and can cut lines like head/tail do, e.g. the first and last line at once) (but I'm biased, I'm the author)

https://github.com/riquito/tuc/


ranger and vifm instead of midnight commander (vim key bindings)

lsd instead of exa (better formatting, icons)

mosh instead of ssh for interactive sessions (maintains session even with bad connectivity)

hyprland instead of sway instead of i3 instead of XMonad


nushell instead of Xonsh/bash/fish. A bit of a learning curve, but worth it. Also tig as a git UI.


btop instead of htop instead of top


An annoying thing about btop is that it only uses the snap package manager and nothing else. You can still install it yourself easily but I don’t understand why they’d stick with snap alone.


>that it only uses the snap package manager and nothing else

The first installation method it shows for Linux systems is downloading a statically compiled binary, and it already exists in the repos of every major distro. Where does the "only uses snap" come from?

https://github.com/aristocratos/btop#installation


htop has much better support for more obscure unices, fwiw. Supports every BSD, whatever Solaris is called these days.

Btop seems to only support macOS, Linux and FreeBSD.


One thing that comes to mind as someone who is not in the know of these new tools would be, are they safe?

The old tools have been there forever and used everywhere. My assumption would be these are safe and don't change often. For better or for worse, I would be concerned about using the newer tools unless they are backed and/or approved by a large open source org.


If the tool is "$x, but with pretty colors", there's a good chance they are not safe to use in pipelines. It's distressingly common for colored output to be sent even when piped.
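
Well-behaved tools check whether stdout is a terminal before coloring. You can test the same condition in the shell (a minimal sketch):

  # prints "tty" when run interactively, "pipe" when stdout is redirected
  if [ -t 1 ]; then echo tty; else echo pipe; fi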


I really respect this take -- and also kind of don't like it at the same time?

Basically, I don't do "mission critical" Linux things. I teach IT and I hack around on my own boxes with scripts and stuff because it's fun and useful to me. I'm always on the lookout for the hooks and such that can get more and different people into Linux.


I understand. The truth can be bitter. My first instinct is to install most of these tools. Then I remember all the recent supply chain attacks and malicious packages that have been snuck into software.

I use 1 daily driver for everything, including my finances and crypto. So trust me, I'm bummed out about this. I'll still check some of these out. As long as they seem safe, I will install a few.


> My assumption would be these are safe and don't change often.

Go take a trip through the GNU userland tools and you'll find a lot of dodgy code that hasn't been touched in 30+ years.


https://github.com/ibraheemdev/modern-unix

> A collection of modern/faster/saner alternatives to common unix commands.


If you are just looking for the largest files in a directory hierarchy, try this:

  find . -type f -print0 | xargs -0 ls -s | sort -n
If your find/xargs don't support null delimiters, then the guy who wrote musl has some tricks:

http://www.etalabs.net/sh_tricks.html
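
If you do have GNU find, a variant that sidesteps xargs entirely (a sketch; the tail count is arbitrary):

  # print "size<TAB>path" per file, numerically sorted, 20 largest last
  find . -type f -printf '%s\t%p\n' | sort -n | tail -n 20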


What came to my mind is a blog post[0] by Julia Evans

Not exactly what you had in mind but might still be interesting.

[0] https://jvns.ca/blog/2022/04/12/a-list-of-new-ish--command-l...


> generalized list of "new versions of classic tools?"

https://news.ycombinator.com/item?id=19967138


I like iotop. Pretty much exactly what it sounds like - a top-like program for I/O operations. It's also just an apt/yum/dnf install away on most distros.


htop also includes columns for read/write.

There's also more detailed CPU usage you can turn on, including I/O wait.

Use it, it's great.


bpytop, ncmpcpp


This tool has been pretty invaluable to me over the years in diagnosing/tracing high disk usage, particularly relevant for PyPI and npm packages that were needlessly shipping huge artifacts.


The trick with ncdu is to install it BEFORE your disk gets full from some random error log going haywire.


    $ sudo apt install ncdu
    > Not enough free space
Fuck!

Actually, one trick I learned was to create a dummy file full of zeroes or random data, maybe 200MB or so, and place it in an obvious location (e.g. /) for easy deletion in such a crisis.
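
A minimal sketch of that trick (the size and filename are whatever suits you):

  # 200 MB of zeroes as emergency headroom
  dd if=/dev/zero of=/emergency-spacer bs=1M count=200
  # or, where fallocate is available, instantly:
  fallocate -l 200M /emergency-spacer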


Ext3/ext4 systems do this by default and reserve 5% of space for root.[0]

[0] https://docs.cloudera.com/cloudera-manager/7.4.2/managing-cl...
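
You can inspect or adjust that reservation with tune2fs (a sketch; /dev/sda1 stands in for your ext4 device):

  # show the current reserved block count
  tune2fs -l /dev/sda1 | grep -i reserved
  # set the reserved percentage to 5%
  tune2fs -m 5 /dev/sda1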


Another trick is to have a 1GB file called DeleteMeWhenDiskIsFull.

Then you can delete that file and ponder your life choices that led to that situation.

Just remember to create it again :)


> ponder your life choices that led to that situation.

I always remember that VMware vCenter appliance didn't come with a working logrotate config for years and feel better.


That's the reason why a typical Linux (or Unix) filesystem comes with a setting to reserve some space for the root account.

Doesn't help if the disk is filled by processes running as root, of course.


Or a "journalctl --vacuum-time=1d" will usually clear a lot of space


One of the first things I did in my new position was to add ncdu (and mysqltuner) to our Ansible playbooks. It's that useful for me - I probably use it every other day at least.


I've saved the day once or twice by having a build of ncdu that I can just copy to /dev/shm to figure out what went haywire.


This program is extremely useful. I have recently performed a recovery on a bricked Microsoft Surface and was able to extract all the useful files from a Windows home directory and quickly discard all the libraries, caches, and so on. By seeing the folders that take up the most space you can easily determine what the juicy bits in a folder hierarchy are. Huge time saver.

Using GrandPerspective on macOS was a similar revelation, but ncdu being keyboard driven and allowing you to quickly launch a shell inside each folder and then apply find -exec there quickly is a productivity boost on yet another level.


Love ncdu. I probably use it at least once a week somewhere debugging a disk that's getting full.


Such a great tool. Thank you!

On a Mac, I've been using OmniDiskSweeper for years, but this can be run in a single directory and on my Linux machines as well. Fantastic!

And look, it didn't ask me to sign up for an account, and it didn't require me to consent to usage data collection with a huge Privacy Policy attached! How is that possible? (still dealing with this morning's scars from trying, and failing, to run the Warp terminal)


Now if it were just an executable file and didn't require a package manager or compiling to install, it would be flawless.


TIL that my `.xsession-errors` file is ~16GB. Thanks, `ncdu`!

(I should probably look into this...)


> This is fine.


Just a fair warning: 99% of the time, ncdu is great at intuitively finding what is taking up space. But know that unlinked, in-use files still take up "physical" disk space.

Be wary on machines that have not been restarted in a while. The free space reported at the filesystem layer may not be the same as the amount of space taken up by files.

i.e. It's possible that deleted files may still be referenced by processes that are still running, since the space will not be reclaimed until they are killed or close the fd. This commonly happens with GBs of log files you deleted but which are still `open()`.

That seems to be the greatest difference between Windows and Linux from an OS & filesystem perspective:

- Windows treats files similar to a row in a database, with locking behavior: "This file entry must be deleted from the system. Tell me if I'd be locked out by another process."

- Linux treats files as reference-counted resources: "I no longer need this file in this directory. The underlying data can still be referenced. You can unlink it."
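
You can spot those deleted-but-still-open files directly, assuming lsof is installed:

  # list open files whose link count is zero, i.e. deleted but still held open
  lsof +L1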


I use this system all the time!

ncdu -x to instruct it not to traverse foreign filesystems


Fantastic tool. One improvement I'd like to see is support for .Trash instead of outright deletion.

I had to delete a lot of stuff so I turned off the 'Are you sure?' prompt.

Sure enough, 4 minutes later I fat fingered some wanted files into oblivion.


Interesting. I’ve just used df-m | sort -n and walked the tree from the CLI for so long.

I’ve also used windirstat (I think that’s what it was called) years ago on Windows and Disk Inventory X on MacOS graphically.

This seems like a nice in between that could replace both methods for me. I’ll have to try.

The GUI bunch of squares is really only useful when you can hover with a mouse, and would be clunky in a TUI even if you could render it. And the FS structure isn’t immediately obvious, so I find myself wasting time wandering around, hovering over big files and globs of little ones.

I really like to avoid a mouse when I can unless it’s really useful.


FYI I recently discovered sort -h (--human-numeric-sort) is a thing, so you can keep the K/M/G units if you want to.

However I use ncdu all the time.


Yeah, my habits come from using GNU, BSD, SysV and having to use the lowest common denominator.


> I’ve also used windirstat (I think that’s what it was called) years ago on Windows and Disk Inventory X on MacOS graphically.

On Windows, the new hotness is WizTree, which, rather than recursively calling directory listing functions, directly reads and parses the file tables itself. This makes it orders of magnitude faster. I have a 2 TB hard drive full of a million files, and WizTree reads and parses it all in under a minute, whereas I can expect WinDirStat to take half an hour.


That's pretty brilliant.

My Windows experience is really out of date. I knew NT4 and 2000 the best. Then I picked up 2008 for a while supporting small businesses. I don't hate it or anything, but am definitely deeper on Unixes, and about equal on VMS, which I supported around my NT4/2000 time. I'll work on whatever pays :).


You mean `du` right?


I always used "du -S | sort -n" ... I think... it's been a long time since I've needed to worry about disk space.


Yes, just dumb fingers typing the comment. :(


NCDU is one of the first things I install on any system. Glad it's finally getting some recognition


I have used ncdu for a long time, and it's good if you have time to wait for it to get sizes. If you can't, duc, AKA "Dude, where are my bytes", has caching and is ready to apt install from the Debian and Ubuntu repos: https://duc.zevv.nl and it has different GUI options. It's slightly more complex, but for larger volumes I use it instead of ncdu.


Have been using ncdu for more than a decade, and recently started using diskonaut for similar purposes. Was looking for a terminal-based treemap visualization for analyzing disk usage and stumbled upon diskonaut, which is exactly that.

https://github.com/imsnif/diskonaut


I never knew about this tool, but it looks quite useful--much better than just using df.

The only thing I wish it had was a squarified treemap view of disk space. There was an old graphical tool for Windows called SequoiaView that I used to use years ago, and I've never found a worthy replacement for it on Linux or MacOS.


GrandPerspective is in the mac app store and at least for my needs was similar enough to SequoiaView.


Try https://www.derlien.com/ - GPL. Might fit your needs.


sequoiaview looks like a clone of kdirstat, which is available on all platforms.


The best tool I've used on Windows for that is "Scanner" by Steffen Gerlach:

http://www.steffengerlach.de/freeware/


WizTree is vastly faster than the other options presented here (Scanner, SpaceSniffer, WinDirStat).


WizTree v3.33 is the last 'donationware' version vs. free for personal use now.

https://diskanalyzer.com/wiztree-old-versions


Switched to this a few years ago. Far better than anything else I've tried.


It's also the one that's much more regularly updated.



While we're listing them, Disk Inventory X (https://www.derlien.com/)


Thanks for sharing! I really like the real time update during progress. You sometimes have to wait a very long time in Scanner before you see the result.


I find myself using WinDirStat on Windows systems, but often use ncdu in WSL.


On Windows, you should switch to WizTree. Rather than recursively calling directory listing functions, it directly reads and parses the file tables itself. This makes it orders of magnitude faster. I have a 2 TB hard drive full of a million files, and WizTree reads and parses it all in under a minute, whereas I can expect WinDirStat to take half an hour.

On an SSD, WizTree only takes a couple seconds.


For all the hate NTFS gets, the MFT has led to the creation of two amazing tools: Everything and WizTree. Unfortunately both are proprietary, although freeware.


Thanks for the suggestion!


You should try Directory Report on Windows. It is faster than WinDirStat, has more filtering and reporting than WinDirStat, and can find duplicate files too.


I use WinDirStat on Windows

https://windirstat.net/


You should switch to WizTree here too; as described above, it reads and parses the file tables directly rather than recursively calling directory listing functions, which makes it orders of magnitude faster (a couple of seconds on an SSD).


`dua`, discussed elsewhere in this thread also works on windows, just for another option.


`rclone ncdu` is also a jewel
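
For example (a sketch; "gdrive" stands in for whatever remote you have configured):

  rclone ncdu gdrive: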


Yes, this. I actually discovered ncdu through first seeing it in rclone. I thought "that is awesome, why isn't this tool a thing?" And it was.

I type `du -sk *|sort -n` a lot less frequently now.


Named after Enkidu, the ancient Sumerian mythological figure?

https://en.m.wikipedia.org/wiki/Enkidu


Most likely just a coincidence due to abbreviating NCurses Disk Usage


Dense

Edit, to spell it out: op was surely joking


Used the Zig static build in offline mode yesterday to hunt down an excessive storage consumer. Highly effective.

Another tool I've used is jdiskreport[1]. It's a java app with a straightforward GUI. File system scanning is multi-threaded and quite efficient.

1. http://www.jgoodies.com/freeware/jdiskreport/


Despite trying these tools many times, I still find myself just running `du -sm * | sort -g`. It works and it's easy to pipe to grep, etc.
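
For instance (a sketch; the "cache" pattern is just an example):

  # largest entries last, filtered to names matching a pattern
  du -sm * | sort -g | grep -i cache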


ncdu is fabulous. I used it a while back on my 2009 Mac mini and recovered 40GB of disk space. 8GB of this was in just four folders: ~/Library/Logs/CrashReporter/MobileDevice, /private/var/db/systemstats, /System/Library/Caches/com.apple.coresymbolicationd, and /Library/Caches/com.apple.iconservices.store.


I would advise against deleting random things on your disk that are taking up space.


One I don't see mentioned here is xdiskusage (for Linux) which gives a nice 20th century style X11 tree view of your disk usage.

  xdiskusage -qa /


I use this tool all the time! On my Windows boxes, I use an old tool called "Scanner" that presents the results in an interactive pie chart: http://steffengerlach.de/freeware/

I've used it on everything from Windows 10 to my Win98SE oldschool gaming box.


For Windows/NTFS, WizTree is unbeatable because of its speed. I wish there were something similar for Linux every time I use it. I don't know anything about file systems: why is there nothing so fast for ext3?


ncdu is one of the most useful tools I've come across to quickly get rid of unneeded files in remote servers. Top program.


See also tkdu for a graphical version: https://github.com/daniel-beck/tkdu

It's abandoned but it still works, and I like it because you can pipe the output of du into it, which is useful for visualizing remote systems.


I recently discovered ncdu while troubleshooting a logging issue in one of our servers. Something broke and syslog began to inflate unchecked and consumed the entire disk's worth of space. ncdu helped me explore the system and find out where the file was. Great tool.


Nice tool - although with remote machines my go-to tool is KDE filelight, the joy of remote X11.


I use ncdu a lot. It’s not strictly necessary but it’s really nice to have in the toolbox. Also IIRC there is a standalone binary available which comes in handy when you can’t install packages on a full filesystem.


Wow! Someone really outdid himself on today's (7th) Advent of Code challenge! ;)


Best tool I've ever used for finding where I could free up some disk space


What do people use on file systems where you have hundreds of TB of files? I always end up having to use the tools that come with the storage device (EMC InsightIQ for example)


ncdu is one of the most useful CLI tools out there! Been using it for many years as well.

Another disk scanner worth plugging that I came across for some use cases where I needed to generate single-view reports is pdu - it has the same concurrency implementation that other ncdu alternatives use so the performance is much better too.

https://github.com/KSXGitHub/parallel-disk-usage


The page mentions FSV - a 3d fs viewer. Classic!

https://fsv.sourceforge.net/


I don't really see the point of a gfx ui for this. I just live on the edge:

  find / -type f -mtime +200 -exec rm -f {} \;


Do you bump that up to 300 when the Ubuntu release date slips?


    du -hs * | sort -h


If someone could make a CLI DaisyDisk that would be amazing. If anyone's aware of it let me know please!


I replaced GNOME Baobab with ncdu for a while. It's significantly faster.


Nice GUI to `du -h --max-depth=1 .` or `du --max-depth=1 . | sort -n -r`


The other big advantage, beyond just being a GUI, is that it scans the filesystem ahead of time and keeps the data across your navigation into directories.

du will have to rescan as you traverse down the tree


The subsequent scans feel a lot faster, I suspect due to caching.


growlight may be of interest as well:

https://github.com/dankamongmen/growlight#readme


Didn't see my favorite one in "Similar projects":

https://github.com/muesli/duf


ncdu and /var/lib/docker, name a better combo.


`docker system prune -af` ?


Old-school alternative:

  du -ax / | xdu -n -c 9


I just used it like 10 minutes ago! Awesome piece of tech.


Oldie but a goodie.


Love the tool.


It's a neat tool. Not quite clear why it's #1 on HN though?



