Good thing in PHP nobody talks about (pixeljets.com)
69 points by jetter 3 months ago | 67 comments



The phrase "shared nothing" springs to mind.

Years ago (1999?) one of the early PHP bloggers wrote that PHP was good precisely because of its 'shared nothing' architecture: there was no shared memory, no threads, etc., to trip you up.

Years ago, a colleague of mine bitched about PHP, and one of his issues was 'no thread support'. I went to pick him up for lunch one day, but he couldn't make it: he was 'debugging some threading issues' (in either Java or C++, I can't remember which). I... didn't quite bite my tongue, and while I know 'different tools for different jobs' makes a lot of sense, I also know many people have made more work for themselves by explicitly avoiding even greenfield projects in PHP because... well... because it's PHP.

Also, from what I gather, PHP will be getting real async support in a couple years, which may throw some of the 'ease of thinking' arguments out the window a bit.


There has been threading in PHP via the pcntl_ functions for a long while - since PHP4.

You have to have some pretty exotic needs to want to use them, but threading does exist, and I’ve built a multithreaded php app using it.


The needs aren't very exotic. You can use them any time you need to do multiprocessing, a common example is building a daemon to process a queue in parallel with multiple workers.

Also, the pcntl functions don't provide multithreading primitives, only multiprocessing (fork)

Edit, looks like there is also a pthreads extension
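For the curious, here is a minimal sketch of the multiprocessing approach described above, using pcntl_fork() to fan a job list out to several worker processes. The job list and worker count are made up for illustration, and it falls back to sequential processing where the pcntl extension is unavailable (e.g. Windows, or non-CLI SAPIs where pcntl is typically disabled):

```php
<?php
// Multiprocessing sketch with pcntl_fork() (processes, not threads).
// Job list and worker count are illustrative placeholders.
$jobs = range(1, 8);
$workers = 4;

$process = function ($job) {
    // real work would go here, e.g. pop a queue message and handle it
};

if (!function_exists('pcntl_fork')) {
    // Fallback where pcntl is unavailable: process sequentially.
    foreach ($jobs as $job) {
        $process($job);
    }
    echo "done\n";
    exit(0);
}

$pids = [];
foreach (array_chunk($jobs, (int) ceil(count($jobs) / $workers)) as $chunk) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        exit(1);                  // fork failed
    }
    if ($pid === 0) {             // child: process its slice, then exit
        foreach ($chunk as $job) {
            $process($job);
        }
        exit(0);
    }
    $pids[] = $pid;               // parent: remember the child
}

foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status); // reap children to avoid zombies
}
echo "done\n";
```

A real daemon would loop forever and re-fork workers as they die; this sketch only shows the fork/reap skeleton.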


> The needs aren't very exotic [...] a common example is building a daemon

While "writing a daemon" is pretty common, I've never seen a daemon written in PHP (I'm sure they exist, I've just never seen one), so I would call that an exotic use-case :P


I had to maintain one; I would not recommend it as a general pattern. Most of the problems may just as easily have had to do with the original author and his experience in writing such an application (and hence his choice of PHP for the task). But this post also springs to mind: https://software-gunslinger.tumblr.com/post/47131406821/php-...


Better solution might be to run several daemon processes in parallel.


PHP has hundreds of legitimate reasons for professionals to avoid using it, aside from the fact that writing multithreaded code is hard.


It was definitely intentional. The perfect sandbox has been a central design decision since day one.


Citation needed.

The reality is almost certainly far simpler. PHP started as a suite of tools for writing CGI pages [1], which have the same property mentioned. As with many things, that early design choice has followed through and led to continuing to execute PHP in similar ways as CGI did way back then.

[1] http://php.net/manual/en/history.php.php


> Citation needed.

I think the 'rll' user is Rasmus, who started PHP. I think he would be the best person to cite re: intentionality.


This is why I read HN every single day. This comment is epic :D


It's also funny that the article mentions the evil subscribers in Node.js.

The fact that PHP died after every request made everything other than the HTTP use case quite a pain to implement.

That's one reason why it got pwnd by Node.js, I think.


You can write a server that will handle multiple requests in PHP, for example, using ReactPHP.

By the way, I don't like how they implemented promises and asynchronous file streams: they copied them from Node.js and didn't implement proper error reporting (which Node didn't have at the time either, if I remember correctly). For example, if a promise is rejected and the rejection is not handled, no exception is thrown. So the developer won't even know about an error, even if it is a syntax error. That's how poorly JavaScript promises were designed. This is now fixed in JS, and an unhandled rejection will produce a warning in the console, but there are many other bad things about them.
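To illustrate the complaint about silent rejections, here is a deliberately tiny toy promise in PHP (not ReactPHP's actual implementation, just a stand-in for the behavior being criticized): a rejected promise with no error handler simply swallows the failure and the script carries on.

```php
<?php
// Toy promise: only enough machinery to show the silent-rejection problem.
final class ToyPromise
{
    private ?Throwable $error = null;

    public static function rejected(Throwable $e): self
    {
        $p = new self();
        $p->error = $e;
        return $p;
    }

    public function then(?callable $onOk = null, ?callable $onErr = null): self
    {
        if ($this->error !== null && $onErr !== null) {
            $onErr($this->error); // handled rejection surfaces the error
        }
        return $this;
    }
}

// A handled rejection does surface:
ToyPromise::rejected(new RuntimeException('seen'))
    ->then(null, fn (Throwable $e) => print("handled: {$e->getMessage()}\n"));

// An unhandled rejection vanishes: no exception, no warning, nothing.
ToyPromise::rejected(new RuntimeException('lost forever'));
echo "script continues, error never surfaced\n";
```

Modern JS runtimes (and ReactPHP via a global rejection handler) at least warn about the second case; the toy above shows the original, fully silent behavior.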


I know, I wrote such a server in 2010, it just wasn't as nice as with Node, hehe


It's funny you got downvoted for this (at the time I'm writing this), considering (based on your comment history) you're likely Rasmus Lerdorf, creator of PHP.


It's a perfectly valid and normal reaction. A regular reader has no reason to suspect that anyone special is speaking. That comment was indistinguishable from a random thought of a random person, so it was entirely reasonable to request a citation. "That's me, Rasmus" would have been enough of a citation, of course.


Indeed! That's what I was implying with "based on your comment history".

It required me being curious about why a random commenter would make such an authoritative (but not backed up) statement, remembering PHP's creator's name and making the connection that their username may be related to it, then reading his Wikipedia page and digging through his comment history to find a couple from 2015 (about UWaterloo) and then 2013 (about his age!) as supporting evidence!

Now I feel creepy.

Edit: (but really, it's nothing compared to the awesome "Did you win the Putnam" response https://news.ycombinator.com/item?id=35079)


Yeah well, one of the many reasons I generally avoid HN.


I'd hesitate to say PHP has a "perfect sandbox". I certainly wouldn't run a script like `eval($_GET["foo"])`.

The only "perfect sandbox" I can think of for CGI programming is something like Unlambda (pure functional, except for monotonic reading of input and writing of output)


> a central design decision

I do not think PHP had design decisions until... much, much later than day one. I would cautiously say even PHP 4 was more than a bit haphazard, and PHP 5.0 I definitely remember as an alpha-quality release, 5.1 as beta, and 5.2 as stable. Design, I think, started to appear with the later-stalled PHP 6 attempt, and thus with mainline 5.3. The first thing I remember that can be called a conscious design decision was deferring $this support in closures to 5.4. Consistent with my memories, the first release to appear under the "Implemented" headline on https://wiki.php.net/rfc is indeed 5.3.


I have been working in .NET C# for the past 2 years, coming from PHP. The hardest thing for me was remembering to close my database connections. Of course, when you don't close a connection, your app doesn't break; it simply consumes more memory. After a few hundred requests, your database fails and you have no clue what went wrong. (Close your connections!)

In PHP, when your page is done processing, the process is killed and all the connections, file handles, or resources it had open will be closed.

The more I work with .Net, the more I get to appreciate the things in PHP I had taken for granted.

I wrote about this a year ago: https://idiallo.com/blog/php-fast-debug-time
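The request-end cleanup described above can be seen even in a CLI script: PHP tracks every open resource and releases it at shutdown, so a forgotten fclose() costs nothing once the request dies. A small sketch, using a temp stream:

```php
<?php
// PHP tracks every open resource; at the end of the request (or script)
// the shutdown sequence closes whatever you forgot to close yourself.
$fh = fopen('php://temp', 'r+');
fwrite($fh, 'hello');
var_dump(is_resource($fh)); // bool(true) — open, and never fclose()d

// In a long-running .NET/Java process this handle would leak until the
// process died; in PHP's request model that "process death" happens after
// every single request, so the leak cannot accumulate.
```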


Presumably, this guy has never seen $_SESSION hell: enormous amounts of state stashed in a single super-global associative array that persists across requests [1].

In addition to $_SESSION hell, lots of custom PHP applications end up with extremely complex and convoluted MySQL tables dedicated to storing what would typically be global application state.

Both of these can lead to difficult-to-debug performance issues, race conditions, and especially security vulnerabilities.

Global state is going to happen, so languages/libraries should provide mechanisms that make it easy to state and enforce strict invariants on reads and writes to global state.

[1] http://php.net/manual/en/reserved.variables.session.php
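One way to get the "strict invariants on reads and writes" the parent asks for is to hide $_SESSION behind a small typed wrapper. The class below is a hypothetical sketch (the names are made up, and for demo purposes it uses its own array; real use would back it with $_SESSION):

```php
<?php
// Hypothetical sketch: session state behind a class that enforces invariants,
// instead of arbitrary writes into the $_SESSION superglobal.
final class CartSession
{
    /** @var array<string,int> sku => qty; back this with $_SESSION in real use */
    private array $cart = [];

    public function addItem(string $sku, int $qty): void
    {
        if ($qty < 1) {
            throw new InvalidArgumentException('qty must be >= 1');
        }
        $this->cart[$sku] = ($this->cart[$sku] ?? 0) + $qty;
    }

    public function itemCount(): int
    {
        return array_sum($this->cart);
    }
}

$cart = new CartSession();
$cart->addItem('A1', 2);
$cart->addItem('A1', 1);
echo $cart->itemCount(), "\n"; // 3
```

The point is not the class itself but that every write now passes through a checkpoint, which is exactly what raw `$_SESSION['whatever'] = ...` lacks.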


I don't like sessions in PHP: most times the problem can be solved without them.

But sessions are available in other languages as well:

- https://docs.djangoproject.com/en/2.1/topics/http/sessions/

- https://guides.rubyonrails.org/security.html#sessions

- https://www.npmjs.com/package/express-session


That's pretty much a bad practice, well documented across various PHP tutorials and books. But many PHP developers have no understanding of programming, so they stash everything in that $_SESSION superglobal or do other scary things.


Global state in general is a bad practice, well documented across tutorials and books about programming. The article's argument is that PHP avoids global state, but $_SESSION hell and the "MySQL table as global state" pattern demonstrate that programmers will always find a way to shoot themselves in the foot with global state :)

Also, IME even good PHP programmers suffer from the "MySQL-table-as-global-runtime-state" syndrome. And it's at least as bug-prone as storing global state in an ADT.


Anything, in any language, can be written to share no state, to reload everything on every request, and to throw it all out after the response.

We're setting the bar pretty low for what's a good thing in a language here.


But practically speaking you can’t though. You’re not starting a new java process on every request because the overhead is too big, and you can’t fully reset the jvm state for every request.

PHP with fastcgi reliably and quickly resets the php process to freshly launched state after every request, and very few languages can manage that trick in a production context.


Practically speaking it's done a lot. Even outside of web development.

For example Gradle (a java build system) runs a daemon that hosts a bunch of tasks that run and die.

Docker does it with images.

The browser does it with tabs.

Php achieves it with the help of server components. That is fine, but I wouldn't advertise it as a feature.


Starting a new docker instance for each http request is an interesting approach


<3


The article's point is that PHP FORCES you to do it. Erlang is designed to restart processes because its designers also know it's very hard to keep state without leaking memory, or to stay in a good state for a long time. Every application server has a "restart after N requests" option for the exact same reason.


Erlang is designed to restart processes when stuff goes wrong, because repairing is hard, and restarting is easy. This is different than PHP -- I hope my Erlang processes will have years of uptime, but I'm ok if they don't. My PHP requests have 30 seconds to live, and if they make it that long, they'll be systematically murdered (and, in some environments, the process they live in will be murdered after N requests, because even though PHP makes it hard to leak memory, developers rise to the challenge).

This property is definitely one of the things I love about PHP -- when the request is done, everything is thrown away. This encourages you to do the minimum amount of work to get your HTML (or whatever) out the door. I'd like to say it forces you, but it doesn't really -- I've seen plenty of 'lightweight frameworks' that mess around for 50 ms creating cathedrals of objects that just get thrown away on a hello world page; if you do the minimum amount of work, you can get pages out the door pretty quick. Layering and abstraction can solve a lot of things, but it can't solve wrong abstractions.


Sure, but I'd expect a paid web developer to be aware of the statelessness of http to begin with, and also to rely on testing to get any form of guarantee of any kind.

What I find ironic is that creating the illusion of state in php is an absolute pain without a framework, which again applies to most language/server combos I've tried.


Which one has a better chance of the version variable not being overwritten: var const_version = 0.5; or const version = 0.5;?

Guidelines, good/bad practices, and patterns are merely suggestions; enforcement is king.
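In PHP terms, the enforcement point looks like this (a small sketch; the names are illustrative): a plain variable can be silently overwritten, while a constant cannot.

```php
<?php
// Enforcement vs. convention: a variable relies on discipline,
// a constant is protected by the engine.
const VERSION = 0.5;

$version = 0.5;
$version = 0.9;                // nothing stops this reassignment

$ok = @define('VERSION', 0.9); // redefinition is refused (returns false;
                               // @ suppresses the "already defined" warning)
var_dump($ok);                 // bool(false)
echo VERSION, "\n";            // 0.5 — still the original value
echo $version, "\n";           // 0.9 — silently clobbered
```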


Yes, that's how it all started with PHP: as CGI. This provenance also dictates that PHP must be fast to load. I'm often surprised at how fast a PHP process completes in comparison to Python.

Still, I remember having to cache initialization state in PHP because it took too long to build it on every request. That's when the model breaks down and you prefer a server with state.


>Still, I remember having to cache initialization state in PHP because it took too long to build it on every request. That's when the model breaks down and you prefer a server with state.

I also had this problem, but fortunately I could always buy my way out of it with better hardware and maybe an afternoon of optimization.

I finally shut down the last of my PHP side projects because of pervasive security issues and a dwindling/dead community. I recently reinstalled one of those sites on a DO droplet. Getting the right versions of everything took an entire weekend, but I eventually succeeded. The droplet had way more RAM and a faster CPU than I had access to back in the day.

But holy hell those pages still loaded so slowly. I probably didn't notice at the time because internet connections were so much slower and waiting for an HTTP page with a few images to load was normal. But those load times would be absolutely unacceptable today.


Guilty admission time: I run thousands of distributed worker processes in PHP. Some of the processes run for months at a time and I've not encountered any issues nor memory leaks. I assume the others would too but they get replaced anytime we deploy, so the uptime is usually measured in days.


how long have you done this?

Years ago (2005? 2006?) I had a lot of trouble running PHP processes for any serious length of time, but don't now (7.1 mostly)


We've been doing it since 2011, though we saw significant improvements when we switched to 7.x: memory and CPU usage dropped by ~50% at that time. We did have occasional memory leaks back on 5.6 (usually around shared libraries like libxml) but haven't had any issues whatsoever since 7.

I didn't try too much daemonization around ~2005, though Schlossnagle's "Advanced PHP Programming" (2004) did have a whole chapter on daemonizing and the pcntl functions that made it possible.


Isn't this the case for any programming language if you run it through CGI?


Yes, this was the first thing I thought of as well. Since PHP commonly uses FastCGI, using CGI via fcgiwrap (as in a cgit deployment, for example) is basically the same sort of model, in a more modern sense.


That would be pretty slow. FastCGI doesn't start a new process for every request unlike CGI.


Many languages aren't really suitable for CGI because of long startup times, so users of those languages tend not to get a clean slate on every request unless they work for it. (Conversely, if you don't want a clean slate on every PHP request, you have to work for that.)


How does it work for websockets? Do you have to spawn one process per connection?


Yep, which is why Laravel [0], CakePHP [1], etc prefer to use a node app or separate server to handle WebSocket connections that then use an internal router to call PHP code.

0: https://laravel.com/docs/5.6/broadcasting#driver-prerequisit...

1: https://github.com/scherersoftware/cake-websocket


In such cases, websockets generally aren't handled by the application itself, but by a server (e.g. the nginx in front of the app with plugins, or an extra server) that manages the connections, often provides "channels" and takes commands from the application.


Developers just blame programming languages. As always.

As a programming language gets more popular and more programmers write code in it, you see more "quick-and-dirty" solutions around.

As Java became even more popular because of Android, I saw more crappy code written in Java.

Scala code was clean when only early adopters picked it; now I see more "quick-and-dirty" Scala code simply because there is more code out there.

The main problem is the majority of developers are not disciplined/experienced/responsible enough to keep their code organized and learn how to do it.

And that is just how it works with everything we do. Majority of people also don't keep their house/room/desk clean and organized.

As soon as any programming language hits the top 5 most popular languages, you have enough examples to complain about it.


Funny thing how we go full circle with FaaS now, lol.


There are a lot of nice things in PHP. The bad parts are pretty well known.

As Bjarne Stroustrup said, there are only two kinds of languages: the ones people complain about and the ones nobody uses. Having invented C++, he should know.


> ... at the same time the single and the only huge factor which make poor PHP code written by junior developers manageable is this ...

One thing many people misunderstand is the idea that "PHP is for newbies".

No, you need some serious knowledge of PHP to be able to navigate away from all the pitfalls the language provides. For example: use ===, be careful with "0e123", etc.

All those pitfalls combined are annoying enough to outweigh most of the good things that PHP also provides.
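The "===" and "0e123" pitfalls alluded to above, in two lines: loose == treats numeric-looking strings as numbers, so two different strings in scientific notation both coerce to float 0 and compare equal, while strict === compares type and value.

```php
<?php
// Loose comparison coerces numeric strings: "0e123" and "0e456"
// both parse as 0 * 10^n = 0.0, so they compare equal.
var_dump("0e123" == "0e456");  // bool(true)
var_dump("0e123" === "0e456"); // bool(false) — different strings
```

This exact coercion has caused real authentication bypasses when hash strings beginning with "0e" were compared with ==.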


PHP being stateless is a good thing, as well as real classes, real interfaces and type hinting.

Sadly, PHP developers still use bad coding practices. For example, one of the popular PHP OOP frameworks, Symfony, makes things like the HTTP request or the logged-in user effectively global by storing them in the DI container. What about encapsulation and separation of concerns? The code breaks if you run it from the CLI. PHP should not copy bad ideas like this from Ruby.


The good thing in PHP that I don't think enough people talk about is Laravel.

If you are starting a new PHP web app and you expect to grow beyond a toy project, I highly recommend starting by using the Laravel framework. Laravel provides so many useful things and it encourages good separation of concerns and code organization. But it is flexible enough to get out of your way if you need it to.


I don't think Laravel is a good example. It uses a lot of global state and static method calls. It copied all bad things that were there in Ruby on Rails.

If you look at any code example here [1] you will see that global variables are used everywhere.

[1] https://laravel.com/docs/5.6/urls


Using global variables in a careless fashion is a bad practice indeed.

But globals are not universally evil. Laravel uses globals very thoughtfully. Having used the Laravel framework for many years, I agree with many of the choices it has made.

Additionally, Laravel does not force you to use globals. See https://laravel.com/docs/5.6/requests to dig a little deeper.


I find PHP better than I thought I would because it supports a style of programming that's very similar to Java, but the big problem is that PHP developers often tend to have more of a "quick-and-dirty" mindset and don't really appreciate that kind of code.


PHP supports almost any style of programming, depending on the framework you choose. That's sometimes a problem, and one consequence is that "PHP developers often tend to have more of a 'quick-and-dirty' mindset and don't really appreciate that kind of code".

I guess you have tried Symfony, haven't you?


I tried Symfony, which I liked a lot, and I wrote code in a similar way as I would have for ASP.NET, which I thought seemed logical considering how similar they were. And then my colleagues complained that my code was too complicated and they didn't like all the type declarations when you could just easily return untyped arrays everywhere. I don't work in PHP anymore.


The same could be said of many scripting languages when frameworks have less influence. Had Rails not come along, and most Ruby apps were just files fronted by Rack, you'd see the same thing. ColdFusion has a reputation for being junk code, but that's because many apps are a bucket of files, and clean frameworks like ColdBox aren't nearly as dominant.


True enough. What I liked about PHP7 was that, optionally, you could have most of the rigor (and sophisticated IDE help) of non-scripting languages (although with a painful amount of boilerplate and some weird quirks).


Exactly. It's possible to write good PHP, but how much of that do you see in the wild?

PHP really baits people into making very fragile software. It's very hard to write good PHP. It's hard to write good C, too, so rather than frame it as an attack, realize that it's a long rope with which one can hang himself.


I think if you pick a sensible framework it's not especially hard to write sensible code (except it's hard to reason about the performance of PHP's associative arrays), but the older culture is hard to root out.


As a PHP programmer, among other things, I think Perl programmers have the ultimate "quick and dirty" mindset, but they're typically trying to solve different problems from a different perspective.

I also think a lot of the bitching about PHP is mostly because newer programmers are just repeating what they heard with the fervor of someone who doesn't work in PHP regularly.

Does it have problems? OH HELL YES. So does every language. Are the problems ever going to be fixed? OH HELL MAYBE. Is it the perfect hammer for every nail? OH HELL NO.

I work with what I know and I live with the problems because that's what I know. If I need new features, I'll work in another language, maybe.

No language is without issues. Except Python. Python is perfect.


Python is surely nice looking, but because it allows redefinition of everything at runtime, it is impossible to optimize at compile time. I think PHP, with its recent type-strictness functionality, is better positioned for new developments like meaningful code analysis, (compile-time) optimizations, and performant JIT compilers. Python is too dynamic for that, unfortunately. The same holds for JavaScript.
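The "type strictness functionality" referred to is declare(strict_types=1) plus scalar type declarations (PHP 7+): with strict mode on, a wrong argument type is a hard TypeError rather than a silent coercion, which is the kind of guarantee static analyzers and JIT compilers can lean on.

```php
<?php
declare(strict_types=1);

// Scalar type declarations are enforced, not coerced, under strict_types.
function add(int $a, int $b): int
{
    return $a + $b;
}

echo add(2, 3), "\n";          // 5

try {
    add("2", 3);               // TypeError instead of silent "2" -> 2
} catch (TypeError $e) {
    echo "TypeError caught\n";
}
```

Note that strict_types applies per file to the calls made in that file; without the declare, "2" would be quietly coerced to int 2.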


PhpStorm with the Symfony plugin is already excellent for code analysis if you're working in Symfony


Yeah, don't get me wrong, PHP is much better than I expected it to be and my problems were more social than technical.


Is that python 2.x or 3.x that is perfect?

GRIN

well put BTW


Python 3. Python 2 is only going to be around for another 20 years or so.



