
Combining Golang and PHP can solve real-world development challenges - dhotson
https://blog.spiralscout.com/php-was-never-meant-to-die-830de87915ee
======
osrec
I think the standard PHP model (one process per request), while still popular,
is being replaced with fairly robust event loops. I've been using Swoole
([https://github.com/swoole/swoole-src](https://github.com/swoole/swoole-src))
in my company for about a year now, and it has matured into quite a stable
code base, even if the documentation is lacking at times. The performance is
pretty phenomenal and it has useful async coroutines for interacting with
redis/MySQL/PgSQL. I saw some benchmarks in their GitHub issues list where it
outperformed Node and Go (taken with a pinch of salt obviously).

If you're afraid of something as experimental as Swoole, more stable options
do exist, such as ReactPHP. Not quite as performant apparently, but it seems
to be popular.

In general, I find that the PHP ecosystem is still thriving. It's definitely
one of the easier languages to get up and running with, and now thanks to the
community, it's imbibing some of the more advanced features found in more
modern languages.

I still love PHP, as much as the 10-year-old me did when I started using it 20
years ago. It just gets the job done, even if it is a bit ugly at times!

~~~
hu3
Thank you for sharing your experience with Swoole. I didn't know about it
and will definitely play with it to add to my options on future PHP projects.

~~~
osrec
FYI, the learning curve is a little steep due to lack of docs, but a friend of
mine has started to put together [https://swoole.co.uk](https://swoole.co.uk),
which tries to bridge the gaps.

Also, I believe Swoole is used extensively by Tencent/WeChat, so we're in
reasonable company!

------
nolok
> we can no longer use f5-debug

Sums up what is still one of the major issues-slash-weirdnesses of the PHP dev
world. For all the frameworks and language advances, despite xdebug existing
for over 15 years and working great in pretty much any editor or IDE, most PHP
devs, including the "experienced" ones, have no idea how to use a debugger, or
if they do they don't apply it to PHP.

Can you imagine debugging your C/C++/Rust/Go/... programs by adding printf
everywhere and re-running the whole thing? Again and again to find which
variable fails, while a simple step-by-step with watches would solve it in a
single run, in a tenth of the time, without needing to crappify the source?
That's what 99% of PHP devs still do.

Even the JavaScript world has adopted proper debuggers right in the browser.
But in PHP it's so freaking terrible that when people need to solve a bug you
don't hear "step until you spot the issue" but "var_dump all the things" or
"log to the Symfony debug bar!".

~~~
sagichmal
> Can you imagine debugging your C/C++/Rust/Go/... programs by adding printf
> everywhere and re-running the whole thing?

Yes, absolutely. This is my principal and nearly-exclusive method of debugging
the distributed systems I work on all day.

~~~
groestl
It's true that for distributed systems, logging tends to work better/is a
necessary first step. I'd still want to invest in an extensive test harness
though, if you can afford that luxury. Usually this enables productive
debugging for distributed systems code, which pays off quickly.

~~~
stcredzero
There was one network monitoring company that used Smalltalk, relying on
Smalltalk's ability to very rapidly save the entire memory image. Basically, for
exceptions, a snapshot of the server's state could be saved off in addition to
the log, then brought up later, completely live, with a debugger on that very
method. (Definitely a distributed system, running on other people's hardware
in other people's data centers.)

I worked at another company where we used a different kind of Smalltalk
serialization to save the GUI's state in an auto-generated error email or log
entry, so we could open the application in the exact same UI state when
dealing with a bug.

------
stabbles
If the goal is to save resources, maybe it's worth switching to Go entirely.

It's a shame the article does not explain what was unsatisfactory about php-pm
[1], which is a native php solution that also allows the application to boot
just once for all requests.

[1] [https://github.com/php-pm/php-pm](https://github.com/php-pm/php-pm)

~~~
wjkohnen
They mentioned php-pm, but marked it as unsatisfactory without providing any
rationale.

------
flashmob
I'm also one of those who switched from PHP to Go, and now I have quite a bit
of legacy PHP code that just works and that I don't have time to rewrite.

My solution:

Use a Go FastCGI client library to call PHP by talking directly to php-fpm. It
saves on HTTP request overhead, and there's no need to run a web server.
Actually, php-fpm is a decent application server in itself.

Edit: Here's a link to example code:
[https://github.com/tomasen/fcgi_client](https://github.com/tomasen/fcgi_client)

------
toast0
This probably works ok, but it's doubling down on the terrible that is modern
php. PHP has this wonderful conceptual model of get a request, return a
response, and throw everything away. The logical consequence is that
everything you built up in objects or variables along the way of making your
response is garbage (because you're throwing it away when you're done).

Since you're just making garbage, it doesn't need to be pretty, it just needs
to be fast.

Since you're making garbage, it doesn't need to be future proof, it will only
be alive for 10ms.

You can make some really fast pages with PHP if you try, but modern frameworks
are going to take longer to load than a fast page will take to be finished.

------
furicane
A 40x speedup claim with zero evidence or reproduction scenario. It's like that
meme: "source: trust me dude". The conclusion is: they made a middleware,
something that nginx's auth module does.

I want to believe this. I want to be hyped. It sounds great. But there's no
reproduction scenario. Some claims are also false (php doesn't kill the
process to start the processing cycle). Others seem too good to be true - like
40x speedup.

It just smells like "hey, we are cool kids too, look at us, we're advertising
using the hype that other kids use!".

If I'm able to reproduce 40x speedup, I'll so gladly eat my words and flame
myself.

------
ealexhudson
Without wanting to sound critical of this, they seem to have reinvented long-
lived PHP servers. Gearman has allowed you to do this with PHP for almost ten
years now, and it was even the case that you could plug it directly into
mongrel2 with 0mq.

Fundamentally, it's an idea that works incredibly well - it's amazing that
more people don't make use of it....

------
z3t4
Classic ASP is a similar framework to PHP. In Classic ASP there are a Session
object and an Application object. The Session object can store variables that
are available for the duration of the user session (via a session cookie), and
the Application object can store variables that are available to _all_ other
scripts. So when the server starts, you put everything in the Application
object; when you get a new request, you load the data from the Application
object instead of re-creating everything and doing database lookups on _every_
request. This makes Classic ASP very performant, with sub-millisecond requests.
I used Classic ASP until recently and have now switched to Node.JS, which uses
an event loop. Per-request as in ASP and PHP is _much_ easier than an event
loop. In Node.JS, for example, it's very common to pull in a framework like
Express to do the routing, while PHP and ASP already do that job for you. One
problem with ASP and PHP, however: they're not optimal for real-time stuff like
chats or progress bars.

------
perpetualcrayon
I had a similar idea a while back. The implementation I came up with (but
never implemented) was to create a PHP extension that would link to a
statically compiled Go module. Basically it would instantiate a network server
on the Go side, and marshal the HTTP request/response data structures back and
forth between PHP and Go. I imagined I would probably utilize the pthreads
extension or one of the event loop extensions to handle each incoming request:

[http://php.net/pthreads](http://php.net/pthreads)

[http://php.net/manual/en/book.event.php](http://php.net/manual/en/book.event.php)

------
rajangdavis
Can anyone dumb this down for me?

I don't really understand this use case - is this for PHP developers to eke
out some more performance for their web apps?

The goridge library
([https://github.com/spiral/goridge](https://github.com/spiral/goridge)) makes
a little more sense to me; it's a bridge between golang and PHP via RPC.

I went through the code
([https://github.com/spiral/roadrunner](https://github.com/spiral/roadrunner))
but it doesn't seem like something you can swap into an existing PHP project.
Can anyone explain what use case this might be for?

~~~
kilburn
In typical PHP applications, the entire application is bootstrapped, executed
and torn down for every request.

RoadRunner lets you run "workers" (php processes) that perform all the
bootstrapping just once, and then receive-process-reply to requests. The use
case is to avoid the (possibly expensive, especially with modern frameworks)
bootstrapping for every request.

Check the symfony integration example for instance [1]. Everything that's
outside of the

    
    
      while ($req = $psr7->acceptRequest()) {
          ...
      }
    

is usually run for every request in traditional PHP, whereas here it is only
run once when initializing the worker and then that while block
reads/executes/replies to each specific request.

The problem with this approach is that you now need to be careful not to leave
unwanted stuff behind (open files / db connections, static $properties, etc.)
that may interfere with future requests, whereas this just isn't possible in
"normal" PHP. I would argue that this is one of the features that makes PHP so
beginner-friendly.

[1] [https://github.com/spiral/roadrunner/wiki/Symfony-
Framework](https://github.com/spiral/roadrunner/wiki/Symfony-Framework)

~~~
rajangdavis
Got it. Thanks for the explanation!

------
xena
Found another rilkef: [https://christine.website/blog/experimental-
rilkef-2018-11-3...](https://christine.website/blog/experimental-
rilkef-2018-11-30)

------
rakoo
Did they just reinvent FastCGI?

------
LolNoGenerics
Usual question for this: how do you handle sessions? PHP's built-in session
handling is very bad if you recycle the main process.

------
fasteo
Am I missing something? php-fpm can keep the process running as long as you
want (pm.max_requests=0).

~~~
kilburn
What you are missing is that, in php-fpm, while the interpreter process itself
is not destroyed after every request, it _does_ clean up everything on the PHP
side (i.e., it closes all open descriptors, it unloads all PHP code, etc.).

This is awesome ergonomically (because a past request cannot affect a future
one in any way), but has terrible performance because all of the PHP code has
to be re-bootstrapped for every request (think: read config files, open db
connection, setup the injection-container, etc.).

Thus, this thing is like php-fpm, but it lets you bootstrap once and then just
reply to requests within that PHP space.

Aside from the "now you have an extra gun to shoot your foot with" issue, the
other major difference is that php-fpm can execute any php file, whereas this
approach requires a different process for each entry point (aka "front
controller" in modern frameworks, aka "index.php").

------
Kaveren
PHP is the worst programming language ever made in current widespread use.
There was never an excuse to use it, there were always other options, even
many years ago.

> "To them, we say “think again.” We believe that the only limit PHP has is
> the limit you set."

You shouldn't be writing PHP in the first place, and should slowly work to
(incrementally, slowly) shift your codebase into any other programming
language in existence if it is already written in PHP.

The core language is terribly designed. PHP 7 did not fix this. Even
JavaScript is miles better (PHP has a type system equally as bad, if not
worse).

Of course, this sentiment is unpopular, because you're never supposed to
rewrite anything even incrementally, and Popular Means Quality, and saying
otherwise is heresy.

I propose the more radical solution of killing PHP entirely (again,
incrementally, slowly), and replace it entirely with the language of your
choosing (maybe that's Go).

~~~
hackthemack
> There was never an excuse to use it, there were always other options, even
> many years ago.

What was the alternative to using php from 1996 to 2000 if you were running a
small mom and pop isp and your customer base was requesting shopping carts,
photo galleries, and blog like web capabilities?

In that era, the servers were 75 MHz Pentiums.

~~~
Twirrim
Agreed.

Back then you primarily went one of two ways, Perl & PHP. PHP was by far the
lowest barrier to development. Easy to enable PHP support in webservers, easy
to incrementally increase usage in your platform alongside any static HTML.

PHP + mod_php moved away from needing to know what was happening on the server
side.

Perl and the other cgi-bin integrations could be complicated, involved,
requiring more specific configuration on the server side. Sure, it's not
_that_ hard to do, but it does present a barrier to entry that PHP / mod_php
handily solved.

~~~
pjmlp
By 1999, Zope, C CGIs, TCL (AOLServer), ColdFusion, Delphi HTTP Server, ASP,
IIS ISAPI,...

More than two possible ways.

~~~
Twirrim
I'm sorry, I don't see anywhere where I said there were only two ways.

~~~
pjmlp
That is what I take from "Back then you primarily went one of two ways, Perl &
PHP.".

As if everything else wasn't worth mentioning.

Back in my little bubble, the options I listed were much more relevant.

