Good to see a netcat one show up there --- it is capable of serving only a single file, and you need to prepend the response headers yourself, but it's been very useful the few times I've had to use it.
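For the curious, a sketch of that netcat trick with the hand-written headers (the nc flags differ between the traditional and OpenBSD variants, so treat the commented serving line as an assumption; index.html is a throwaway demo file):

```shell
# Build a complete HTTP response by hand: status line, headers, a blank
# line, then the file body.
echo 'hello' > index.html
{
  printf 'HTTP/1.1 200 OK\r\n'
  printf 'Content-Type: text/html\r\n'
  printf 'Content-Length: %s\r\n\r\n' "$(wc -c < index.html | tr -d ' ')"
  cat index.html
} > response.txt
# Serve it to a single client (traditional netcat flags; the OpenBSD
# variant would be plain `nc -l 8080 < response.txt`):
# nc -l -p 8080 -q 1 < response.txt
```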
My point is that while lots of languages include a web server in their standard library (one of these was a damn install! Like yes you can pip/gem/npm/etc install a server, that doesn’t make it a one line web server), it is by no means a one liner any more than the above two examples can be considered bash one liners.
On practically any Linux (or Mac) you can do $ python -m http.server 8080 to instantly start up a server in that directory. You can do this in multiple terminals to have multiple servers on different ports serving multiple directories. Nginx and Apache require significant configuration and a restart to apply changes. They're not at all comparable.
Apache usually just needs to reload the config file, not a full restart, and it's often only a few lines to get whatever you want done. A reload might take half a second, but I doubt it takes even that long. Sure, if you want to load a module with a bunch of new functionality you might need the two seconds a restart takes, unless you have thousands of sites, but that has been a rare experience for me in the last 7 years or so. If you have thousands of sites you are not using the Python server anyway. In the end Apache config is relatively easy -- especially to just test stuff quickly -- it's the overall server security that is more nebulous, due to the variety of attack vectors throughout the stack. I would guess all of that is similar on Nginx.
So sure, it is a little easier to just throw out a one liner, but it doesn't give you an environment similar to the one you deploy to, so the extra 2 minutes to open the Apache config and reload is worth it to me for the implicit bug testing going on. Of course none of us has a horse in this race, and to each his own on his own box. But it's fair to keep their names in the conversation at least.
How is it more of a burden to install apache2 vs sinatra? “Significant configuration” required for a static web server with apache2 is limited to just a few lines of configuration specifying the directory you want to serve and on what port.
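For concreteness, here is roughly what that handful of lines looks like for Apache 2.4; the port and directory here are made up:

```
# Hypothetical minimal static vhost for Apache 2.4
Listen 8081
<VirtualHost *:8081>
    DocumentRoot "/srv/static-demo"
    <Directory "/srv/static-demo">
        Require all granted
    </Directory>
</VirtualHost>
```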
Look, I love me an ad hoc HTTP server for development work, especially the one Python comes with, but calling it a one line web server is like calling “apt install postgres” a “one line database server”. It’s just not a one liner.
One of these is literally installing sinatra. And if you just want a server to serve static assets, nginx or apache2 will happily serve it out of your /var/www. Symlink what you want there and you don’t have to edit any config files or anything.
What would it tell you when you see that e.g. the Python option is twice as fast as the Java one? Or that the node server boots slower than the php one?
> Warning
> This web server was designed to aid application development. It may also be useful for testing purposes or for application demonstrations that are run in controlled environments. It is not intended to be a full-featured web server. It should not be used on a public network.
I bet people won't. Ready-made Docker solutions are based around nginx/apache, and shared hosting stacks were in place long before PHP's built-in webserver was available. There's no reason or incentive to change all that.
This web server has been available for a long time. Sure you can misuse pretty much any technology. Rolling your own crypto is possible too but most people don't do it.
People tend to use ready-made Docker containers for PHP deployment. The official PHP image has apache and fpm versions: https://hub.docker.com/_/php/
I've been doing this with single file html pages that bundle all dependencies inline, no server necessary. It does put some restrictions on what you can do, since local files are sandboxed pretty hard, but with all dependencies inlined (and with WebAssembly you can get pretty crazy, like embedding SQLite in an html file) you can do quite a lot.
Just the other week I made a little app that loads CSV files from a room booking system to create monthly reports, which was done manually by the sole employee of our housing association and took several hours – now it takes minutes.
It's been my go to way to create easily deployed rich GUI apps for the past year or so, and is pretty simple to build with a few shell scripts to put it all together. The resulting html files can be big, but they load really quite fast, and since they're deployed onto a machine and not served over a network the size doesn't matter that much. It's also way smaller than an Electron deployment anyway.
What is the tooling and workflow you use for this? It's not something that had occurred to me, but now that you mention it, it seems like something I'd like to try.
Edit: It looks like none of the major bundlers support this feature out of the box, but there is a project called inliner that does exactly this: https://github.com/remy/inliner
Nothing fancy, just a few home grown shell scripts to essentially replace external dependencies in the html (e.g. script tags or link tags) with the contents of those files. There's no minifying, optimization, or building or anything. I only use vanilla js, css and html. The only exception to this is if I have anything that needs compiling to wasm, which essentially is just SQLite, but could be anything really.
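For flavor, here is a toy version of that kind of script, assuming one `<script src="...">` tag per line and local paths (all file names are made up); a real page would need a proper HTML parser:

```shell
#!/bin/sh
# Replace <script src="X"></script> lines with an inline <script> block
# containing the contents of X. Assumes one tag per line, local paths.
inline_scripts() {
  while IFS= read -r line; do
    case "$line" in
      *'<script src='*'></script>'*)
        src=${line#*src=\"}      # strip up to the opening quote
        src=${src%%\"*}          # strip from the closing quote on
        printf '<script>\n'
        cat "$src"
        printf '</script>\n'
        ;;
      *)
        printf '%s\n' "$line"
        ;;
    esac
  done
}

# Throwaway demo page and dependency
echo 'console.log("hi")' > app.js
printf '<html><head>\n<script src="app.js"></script>\n</head></html>\n' > page.html
inline_scripts < page.html > bundle.html
```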
Because I only deploy these locally on desktops it's pretty much pointless to build/obfuscate/optimize anything for transport, it's so fast to load anyway since it's all off the local drive.
Maybe I should write up a how-to or tidy up the workflow and publish, but it's really nothing fancy.
I was heavily into PHP in the late 90s and early 00s, including hosting download mirrors, then left it for years for other tech in non-server areas. I got back to it recently to start playing with Micropub implementations and other indieweb stuff and was pleasantly surprised to find this server in there. It’d been a long while since MAMP and configuring Apache for me so it was nice to just not have to worry about unearthing that stuff again and hit the ground running on the actual software.
Oh man, this sucker. I did some filthy stuff in a legacy PHP codebase using this to get some ultra basic acceptance and smoke tests in place. It worked fairly well, but there were a lot of weird edge cases where it didn't _quite_ work like a real webserver. I've got some notes hiding on an old laptop, I should dust those off and post them up sometime.
I use this to host a small WordPress blog on my server whenever I want to make changes that I then compile into static HTML for public access. Works well enough and avoids the need to lock down a public WordPress instance.
I've used a plugin called WP2Static to turn WP sites into static HTML. You can also use wget to crawl the site and export HTML, but it requires some tinkering. Google "export WordPress to HTML using wget" to find some walkthroughs.
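The wget variant usually looks something like this (the domain is a placeholder, and the output generally still needs hand-tweaking):

```shell
# --mirror recurses with timestamping, --convert-links rewrites URLs for
# offline use, --adjust-extension adds .html where needed, and
# --page-requisites pulls in CSS/JS/images. Domain is a placeholder.
wget --mirror --convert-links --adjust-extension \
     --page-requisites --no-parent https://example.com/
```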
I am familiar with wget, curl and some browser plugins that can scrape a site. A WordPress plugin should be able to give you some extras, especially when you start to think about things like responsive designs.
"You can configure the built-in webserver to fork multiple workers in order to test code that requires multiple concurrent requests to the built-in webserver."
The built-in web server itself was released with PHP 5.4 (2012). The only thing new about it is support for multiple workers, which speeds up concurrent request handling. But it's still only intended for development environments and testing. I imagine multiple-worker support is very useful for SPA development, and especially improves testing speed if you run concurrent tests that rely on the built-in web server.
That's my point. So is MAMP/WAMP. The devs have always stated their setup was meant for development only, yet lots and lots of novices use a MAMP/WAMP setup as their production environment.
Then your static site would be served from localhost over a public URL which you could then use for demos (or helping your backend guy do the integration). This wouldn't be too difficult to implement.
On Linux, php -S 0:8080 is shorter if you're just testing (as it's meant for) and have a firewall - though it binds to 0.0.0.0 (anyone who can reach your system).
Not sure about other OSes, I think I noticed this doesn't work on Windows once but that could also have been BSD or something.
This sounds like a perfect solution for self-hosting websites (e.g. wiki, grocy, monica) that are protected by an identity-aware proxy (e.g. Pomerium) and expected to receive <1 qps of traffic.
(Most of my self-hosted web services are written in Go; I've been able to stay nginx-free thanks to Go's built-in web server.)
I wonder if the 'news' is the 7.4.0 change that allows handling multiple concurrent requests, by spinning up a number of workers as needed?
Absolutely, but you should probably only do it for development. As they write on the linked PHP manual page:
> This web server was designed to aid application development. It may also be useful for testing purposes or for application demonstrations that are run in controlled environments. It is not intended to be a full-featured web server. It should not be used on a public network.
That was my thought too (the sarcasm). This was amazing when it was introduced, but in the age of Docker it's no longer really relevant to bootstrapping a basic dev environment.
It is still nice for quickly running something and getting rid of it. For instance, an ad-hoc script mocking an API endpoint that you can run on an arbitrary port in seconds.
PS: a few years ago Docker was not deemed production ready, and people only used it as a dev/testing environment. I find it funny that the reverse is sometimes happening, where people prefer running this webserver locally for speed and occasional debugging, while production runs on containers.
> It is still nice for quickly running something and getting rid of it. For instance, an ad-hoc script mocking an API endpoint that you can run on an arbitrary port in seconds.
That is one of the many things Docker and Docker Compose are good at. If you have a docker-compose.yml file you can do just that with any web server backend.
Say you wrote your 10-line script that mocks your third party endpoint.
running it on port 7659 is
> php -S 0.0.0.0:7659 scripts/test.php
and done.
Compared to this one liner, the Docker proposition is: write a Dockerfile with a base image, perhaps your local extensions if you had to compile them, and configuration if you changed anything from the defaults.
Then a docker-compose file that shares your local file, references the previous Dockerfile, and specifies the right port. And if your test file is not self-contained, you'd need to map in any services it needs to access.
Then you run the whole thing.
It is by no means complicated, and you could keep a template of all of that around in case you need it (or copy it from your main application's Docker files and modify it if you need it to be more lax). As you say, it also gives you more freedom. It's just a tad more tedious than literally one line.
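To make the comparison concrete, even a minimal Compose file for that mock comes out to something like this (the image tag, port, and paths are illustrative):

```
# docker-compose.yml -- hypothetical minimal setup for the mock endpoint
services:
  mock-api:
    image: php:8-cli
    command: php -S 0.0.0.0:7659 /app/test.php
    ports:
      - "7659:7659"
    volumes:
      - ./scripts:/app
```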
That's a fair use case, although I might still wrap that one-liner in a Docker container because the service consuming the mock is probably Dockerized. But indeed it doesn't make sense to configure and run another nginx container just for that 10-line mock.
php is great and this is not news. all the haters: go to your hipster frameworks and spend the time I need to build an app just setting up your boilerplates.
I have an absolute hatred of putting unused code on a production server, especially if it's something that can listen to network traffic. It's such an unnecessary risk. The fact that you don't appear to be able to set PHP up in a way that disables this thing feels horrible. There's no way I'd ever want to run PHP outside of a container where I control which few ports get mapped now.
This is only present in the CLI builds of PHP, and you should limit who can run PHP scripts on a production server anyway.
Remember that setting up a reverse shell only requires a shell with network access (like bash). Most Linuxes (including containers) therefore have the capability of having reverse shells started on them. The way to protect against abuse of PHP's webserver feature is the same as the way to protect against bash reverse shells: do not allow any outbound traffic except traffic you trust!
Do not blame php for having a feature that others have had for years. PHP’s version is no better or worse than any of them.
honestly, I would never run anything important in production that's not behind a load balancer + firewall, so without direct internet access. it helps with upgrades, as you can upgrade each node separately without taking down the application, and it provides security because you only whitelist what you want to expose externally (yes, I know there are still other attack vectors, but still, security is an incremental job that adds layer upon layer of best practices)
I don't hate the programming language. PHP is a great language that I used for a decade before moving over to the frontend. I don't use it any more but I work in a company that does and I see good results when it's deployed. This is not about PHP (or Python, or Ruby, as they also have built-in httpd servers).
What I hate is the idea of putting something on a server that's going to make my life harder, or create more work for my team. I don't want the developers I work with having to give up their evenings or weekends to resolve issues and incidents that arose from things that shouldn't have existed in the first place. That's why I don't want additional network services sneaking their way onto production. I should very easily be able to tell which applications on a server are listening for external traffic.
Maintaining good security is hard enough without language runtimes including things that should strictly only ever be a development dependency rather than a production dependency.
If there are pills I can take to make this problem go away, sign me up.
How does a feature of the command-line version of PHP impact developers? Developers should not run applications on production machines themselves (it's either a script, a deployment agent/engine, or an operations person that does that work (including a devops person doing operations)).
As for security, you must keep up with the capabilities of the tools you use. PHP has been able to run as a web server for years (even before 5); all they did was implement a good, sane dev server to run PHP code without setting up a complex PHP environment (which would probably be less secure).