Procfs: Processes as Files (1984) [pdf] (gobolinux.org)
50 points by tanelpoder on March 1, 2021 | 15 comments



I was updating my Linux Process Snapper /proc sampling/profiling tool [1] and ended up curious about when this concept (examining process state via reading special files) first showed up. Back in 1984 already, pretty wild! And, according to Wikipedia, it was originally meant to replace the use of the ptrace() syscall [2].

[1] https://tanelpoder.com/psnapper/

[2] https://en.wikipedia.org/wiki/Procfs#UNIX_8th_Edition


On the comments that glorify procfs as a great example of "everything is a file": I'm not sure that's how everyone sees procfs.

To me, it has always been a sort of hackish/no-hassle way of exposing some kernel interface that would not fit, or would be too annoying to provide, as syscalls (not to mention there's a limit to the number of syscalls).

I would much prefer a sysctl-like interface (i.e. a single, generic syscall to get process metadata).

As a side note, most BSDs moved (or are moving) away from procfs for these very same reasons (overused, too much pressure on the VFS, loose formats, etc.).


Really? I think it's fantastic to have access to that kind of information from my shell. I've certainly used it for debugging live systems. Sure, I could still use something like lsof and have it monkey around with the extra syscalls, but it's just really cool and useful to be able to do it with one less binary.


It's not really harmful as long as it's just some shell programs here and there that use it. The problem is when everything starts to rely on it.

Hell, in Linux, the dynamic linker (ld-linux.so) depends on procfs to expand ELF variables (e.g. $ORIGIN, which requires the path of the currently executing program, retrieved from procfs). It's gone _way_ too far!

Now, that does not mean that another interface would be ugly/annoying to use from the shell.

`sysctl` for instance is quite convenient to use from the shell, and the hierarchical variables that it accepts could very well mirror what procfs exposes.

Something like

`$ sysctl proc.42.environ` <=> `$ cat /proc/42/environ`


I feel Solaris really got this right with its p* commands, e.g. pargs -e 42 or just penv 42 to mirror your example. Whilst in practice they may use a procfs, they could use other backends if implemented on systems without one, or with a differing procfs. These commands are a much cleaner interface IMHO.


I'd rather have a proper object interface to work with. Saves a lot of parsing.


Hacking up a small pipeline starting with cat always beats having to write code to parse something unintelligible at first sight, IMHO.

At the end of the day, the first one is always faster.


Streams should be used for that... streams.

Structures should be used for structured data.


I don't see a reason for /proc/meminfo or /proc/cpuinfo to be streams, and similarly no reason for /proc/$pid/{cmdline,cgroup,mounts,..} to be streams.

/proc is a very fine interface for getting realtime basic information about a process. Maybe we should leave it as it is and stop abusing well-matured interfaces?

I see red when Chrome, for example, uses intentionally broken symbolic links to store information on a system.


You could have an object interface on top of procfs. I think Solaris used a lot of binary files there.


Wow, that's 7 years earlier than the paper with Roger Faulkner that I thought was the first. I guess this is the first reference in the Faulkner and Gomes paper.

https://www.usenix.org/sites/default/files/usenix_winter91_f...


That Faulkner paper also discussed differences with Plan9’s implementation of /proc


The UNIX idea of (almost) everything being a file is probably one of the reasons its concepts and quirks are so long-lived, and why we have to deal with them even today. Although I agree with many of the points of "The UNIX-Haters Handbook", no other OS came even close to the influence that UNIX has had.


Because none of the alternatives were free beer with source code for dessert.

Everything is a file on UNIX until it isn't, when it comes to IPC and everything that came after Bell Labs was done with it.


One of the things I used to really like about JS is that processes were often objects. Need to get a file? Make an XMLHttpRequest object, give it some instructions, and watch it as it progresses.

Unfortunately, promises have utterly ruined everything about those everything-is-an-object days in practice. Now you stack up a whole bunch of construction options and create a fetch() promise, which exposes nothing, says nothing, reports nothing. It's safer, and yeah, it's better too, more explicit and intentional, but heavens: progress had to be bolted back on, abort had to be bolted back on, and there's a general lack of common ways to manage a transfer while it's live. Everyone has their own means of exercising control, whereas previously the XHR had control built in.

Promises really are processes. They're not just an abstract future value; they're a thing representing the work too. Alas, while once upon a time JS had a design sensibility to expose pieces and control alike, much like a process exposing itself as files, today a far more top-down, non-first-class approach isn't just dominant, it's totally taken over the language. Some ultra-conservative hardline mindsets made it into promises, such as a) that they have to be accessed asynchronously, for no real reason other than great fear of the user (resulting in them being almost ideal for, but alas unsuitable as, things like AbortSignals and the many other level-triggered systems they very closely resemble), and b) that promise handlers don't get to see the promise that triggered them. There's a whole host of design decisions that very outspoken, very controlling people put into the language, reflecting their biases. I don't think they were wrong, but there's a ton of possibility space, openness, and expressiveness that just got dumped, deliberately. Yes, keeping it probably would have slowed down uptake and caused some confusion, but losing it sadly constrains very harshly how good we are at modelling and thinking about long-running things.



