Just curious, how does a PHP application handle server side rendering? By shelling out to a headless browser? What if the content is so personalized that server side caching isn't helpful, is it still viable to SSR a React client?
There still are WordPress wrecks, and will be until the end of time. But there's a lot of good PHP code being written now; the language is going through something of a renaissance. Lots of good work is being done with Symfony.
>Just curious, how does a PHP application handle server side rendering? By shelling out to a headless browser? What if the content is so personalized that server side caching isn't helpful, is it still viable to SSR a React client?
I haven't done SSR for React with PHP, but this React library runs on v8js, so it's not as heavy as a headless browser. The Angular 2 team is building SSR for PHP now but hasn't released it yet. From scanning the GitHub issues it looks like they're taking the same approach (v8js).
Besides that, memory management is mediocre; just DON'T run long-lived processes with PHP. You want to enable a module? Good luck finding the right php.ini. Keeping a persistent connection pool to a database or similar is generally hard and opaque, because each instance serves exactly one request. But hey, at least PHP "automatically recovers" from errors...
I left for good.
That sounds like a misconfiguration and/or the wrong architecture. In my experience Symfony is fast if you follow the best practices.
Maybe you should identify the bottlenecks before you blame Symfony.
> just DON'T do long running processes with PHP.
We run long-running workers just fine.
> You want to enable a module? Good luck finding the right php.ini.
That depends on your OS. On Debian/Ubuntu you just run phpenmod <module> and restart the service.
I don't know... Your criticism seems to stem from not looking at the issues closely, or maybe you were stuck in a really old, abandoned system. But that would have been ugly regardless of language.
In the En Marche platform project, we serve millions of users per month, with fast response times, deployment without downtime, 4 mail workers, and multiple PHP instances running in parallel, synchronizing cache using Redis. I personally worked on several projects with the same infrastructure and the same high quality standards.
I think that, like many other developers, you have a strong but irrational opinion on PHP and will do everything in your power to attack it. I hope this kind of behavior will fade in the future, as it should not represent our community.
Of course, we also moved to Amp and a PHP-CLI-running-async-HTTP-server model, so we can share the in-memory extension, and did some cute stuff with Redis to get it running stupidly fast without memory leaks. Good fun really; PHP 7.1 is damned nice in a lot of ways, but still annoying in others.
I'm working on a project now that uses airbnb's hypernova service with a php client.
Edit: not suggesting this was used in this case, just that it's a viable option for react ssr + php.
 - https://github.com/airbnb/hypernova
 - https://github.com/wayfair/hypernova-php
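For context, hypernova's model is an out-of-process Node render service: the PHP app POSTs the component name and props as JSON, gets rendered HTML back, and splices it into the page. A rough sketch of that request/response shape in plain JavaScript (the field names here are illustrative assumptions, not hypernova's documented protocol; see the repos above for the real thing):

```javascript
// Build a batch render request: each job names a registered
// component and carries its props. A real client would POST
// this JSON to the render service.
function buildRenderRequest(jobs) {
  const body = {};
  jobs.forEach(({ id, name, data }) => {
    body[id] = { name, data };
  });
  return JSON.stringify(body);
}

// Parse the service's reply, falling back to client-side-only
// rendering for any job that failed (SSR should degrade, not 500).
function extractHtml(responseJson, jobId, fallbackHtml) {
  const results = JSON.parse(responseJson).results || {};
  const job = results[jobId];
  return job && job.html ? job.html : fallbackHtml;
}

// Example round trip with a faked service response:
const req = buildRenderRequest([
  { id: 'header-1', name: 'Header', data: { user: 'alice' } },
]);
const fakeResponse = JSON.stringify({
  results: { 'header-1': { html: '<div>Hello alice</div>' } },
});
console.log(extractHtml(fakeResponse, 'header-1', '<div></div>'));
```

The fallback path is the important design point: if the render service is down, the page still ships an empty container and the client renders from scratch.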
When your client and server are both running the exact same UI code, keeping the server-rendered HTML in sync with the initial state of the client-side DOM is just a matter of keeping the application state in sync. That is done by serializing it into the response and then reading it from the client-side app during init.
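The serialize-and-rehydrate step above can be sketched in a few lines of plain JavaScript (a minimal illustration; the `window.__INITIAL_STATE__` name is just a common convention, not something prescribed here):

```javascript
// Server side: embed the app state in the HTML response.
// Escaping "<" prevents a state value like "</script>" from
// breaking out of the script tag (a classic XSS vector).
function embedState(state) {
  const json = JSON.stringify(state).replace(/</g, '\\u003c');
  return `<script>window.__INITIAL_STATE__ = ${json};</script>`;
}

// Client side: read the state back during app init so the first
// client render matches the server-rendered HTML.
function readState(globalObj) {
  return globalObj.__INITIAL_STATE__ || {};
}

// Simulate the round trip without a browser: pull the JSON back
// out of the tag and parse it, as the browser's parser would.
const tag = embedState({ user: 'alice', items: ['<b>'] });
const json = tag.slice(tag.indexOf('=') + 2, tag.lastIndexOf(';'));
const fakeWindow = { __INITIAL_STATE__: JSON.parse(json) };
console.log(readState(fakeWindow).user); // "alice"
```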
But if the server is using an entirely different codebase to render HTML, it would take heroic automation to keep that in sync with what the client expects. Better to just use a different type of client-side framework in that case, I guess; seeing React clients backed by servers written in neither JS nor compile-to-JS languages is another surprise.
- React doesn't require the existing markup to match - it will happily clobber whatever it finds lurking in its container. The first render is probably faster if it matches, but it isn't like your page is going to crash if they are out of sync.
- The Shadow DOM is a way to encapsulate styles/state in DOM elements - it's what draws the slider thumb in an <input>, and what allows Web Component styles to use simple selectors like "button" without commandeering all the buttons on the host page. It has nothing to do with React. I think the phrase you're looking for is "virtual DOM".
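Since the term comes up: "virtual DOM" just means the UI is described by plain objects, and two such descriptions are diffed to compute the minimal real-DOM changes. A toy sketch of the idea (nothing React-specific; React's actual reconciler also handles keys, props, components, and reordering):

```javascript
// A virtual node is just a plain object describing an element.
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

// Diff two vnodes into a flat list of change descriptions.
// This toy version only compares same-position children.
function diff(oldNode, newNode, path = 'root') {
  if (oldNode === undefined) return [{ op: 'create', path, node: newNode }];
  if (newNode === undefined) return [{ op: 'remove', path }];
  if (typeof oldNode === 'string' || typeof newNode === 'string') {
    return oldNode === newNode ? [] : [{ op: 'replace', path, node: newNode }];
  }
  if (oldNode.type !== newNode.type) {
    return [{ op: 'replace', path, node: newNode }];
  }
  const patches = [];
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], `${path}.${i}`));
  }
  return patches;
}

const before = h('div', null, h('span', null, 'Hello'), 'world');
const after = h('div', null, h('span', null, 'Hello'), 'there');
console.log(diff(before, after));
// only the changed text node produces a patch
```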
You do understand that server-side rendering is not (typically) necessary, right? I mean, React will render just fine initially in the browser, of course. SSR is only done (wisely) before serving to the client, for performance reasons.
In the project, we didn't use server-side rendering: what I meant in the article was that the basic HTML was a first version, later replaced by React on the client side. The React component's code differs from what was rendered initially, but for our usage that isn't an issue.
Every language can be abused - the degree to which it can be abused is an indicator of its power.
The author mentions having no budget, but then lists several third party resources in building the site.
Can the author or anyone else guesstimate what costs were involved in building to scale and if the DDOS attacks spiked their bills?
Also, is a containerized deployment the de facto procedure for apps in 2017?
What I meant in the article was that our budget was way smaller than that of the other parties. That is the reason why we relied mainly on the Internet from the beginning to create committees and organize events.
This being said, the "small" budget we had was not that small if you see it as an individual: we had several million euros coming from donations and loans. This was not enough to have physical places in France but it was largely enough to create a high-end platform instead.
Technically speaking, the hosting and the different services we use cost us around 8000 euros/month at the end of the campaign, including the DDoS attack mitigation (which did indeed increase our costs). The cost of the website development itself is very difficult to estimate, as it was done both by volunteers like me and by external companies.
I would also like to add that the cost of development would have been much higher if we had relied only on external companies. The flexibility required by a presidential campaign forced us several times to develop complete features in a few hours, which would have been difficult with traditional companies alone, as each change would have required a specific contract and limited our possibilities. I think one of the reasons the project went really well was the synchronization between motivated volunteers (highly flexible work time) and external companies (proper high-end quality code).
On a related note, the total budget for the candidates had to be less than 17M€ all included before the first round (then they could spend 5.6M€ more for the second round). A lot of that goes into organizing rallies.
edit: I guess they are meant to be here http://www.cnccfp.fr/index.php?art=584
I also would be curious to know the production and operating costs for a website of this kind.
Was the Kubernetes cluster necessary/useful for this sort of architecture? I'm asking as someone who has basically no experience with Kubernetes, but I'm familiar with the rest of the pieces.
>As any other high-profile web site, we were the target of some attacks coordinated and carried out by powerful organizations. Most of the attacks were of brute-force nature and the aim was to take the web site down rather than infiltrate it.
I'm surprised that the attacks were just brute-force attacks. Assuming state actors wanted to compromise Macron's site you'd think they'd have more to throw at it.
Also, I wonder if the security implications of open sourcing your code change if you think you will be targeted by state actors. Generally the advice is that open source leads to more secure code, as more eyes on the code == more exploits found and fixed. Do we make an exception to this advice when dealing with state actors, or does it hold?
Phrased another way, do any of the usual security benefits of open source still hold when the adversary is a state actor?
At the beginning of the campaign, we had only one node in the cluster, as we thought it would be enough. And while it was enough most of the time, it had issues under DDoS attacks: since it was the only node, it also acted as the Kubernetes master, and when it was overloaded, Kubernetes crashed.
To avoid this, we switched to three smaller nodes instead, so that one overloaded node would not bring the whole system down. Kubernetes handled the subsequent attacks really well with this setup, and it did not cost us more.
About the attacks: they threw more at us (XSS, SQL injections, etc.), but most of these attacks were still automated. Perhaps they tried something even more subtle, but I doubt it: they preferred to hack emails :).
I have to admit making the project open source was quite a difficult decision: I really wanted it, but I also knew we would be potential targets of powerful organizations. We decided to do it because, in the end, the argument you stated was stronger: open source does lead to more secure, stable, quality code, and this project showed it. Note also that we didn't advertise this project much during the campaign, so perhaps it was not clear to potential hackers that the code was open.
And it's also refreshing to see them using Google Cloud and GitHub!
I got one of these emails, obviously sent in bulk by a member of "En Marche". The sender's address was on an obviously cybersquatted .fr domain, which is illegal on this TLD (under French law).
So they are proud to have made a tool obviously usable for committing a felony.
Our president supports breaking the law.
That's why I think ethics and liability should become a must in the software industry. You cannot say "I didn't know this would happen" and claim it is not your fault.