The world that PHP grew up in has changed. The language and community have matured, and in the process PHP has lost most of its original competitive advantage: a low barrier to entry.
It used to be:
* Moving from static HTML to dynamic server code was a matter of changing the file extension and adding PHP tags (see the snippet after this list).
* Many shared hosting services supported PHP.
* Deploying was a matter of copying files to the server.
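For anyone who missed that era, the whole "migration" could be as small as this (a minimal sketch; the greeting page and parameter are invented for illustration):

<!-- hello.html becomes hello.php; the inline tag is the only change -->
<html>
  <body>
    <p>Hello, <?php echo htmlspecialchars($_GET['name'] ?? 'world'); ?>!</p>
  </body>
</html>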
Projects that started from HTML + PHP tags grew and became unmaintainable messes. The PHP community learned from this and evolved in a different direction. You can still _write_ PHP code in the old way, but it's strongly discouraged and (rightfully) seen as a bad practice. You don't even use PHP as a templating language anymore. Symfony and Laravel have their own templating languages that dynamically compile to PHP.
Modern PHP code looks very much like Java or C# -- classes, OO design patterns, and so on. Except in most ways, it's worse than Java or C#. Why would anyone start a new project in PHP?
Deploy models have also changed. You're usually not copying files to servers, but deploying Docker images or other formats. PHP has no particular advantage in this new world.
If PHP wants to turn things around, it needs to figure out what makes it unique or better than other languages. Right now, there's really nothing.
Every time someone says there's really nothing that stands out about PHP, I roll my eyes. PHP usually competes with JavaScript and Python, all three being dynamic languages.
First of all, PHP is faster than Python and Ruby, and probably most dynamic languages except JavaScript.
So in terms of performance, modern PHP is ahead of the other languages, and it scales well.
Regarding features, PHP has the best support for classes and types of the three languages. So if you want to design a classic OO system, PHP has the best features language-wise.
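To make that concrete, here's a minimal sketch (the User/Status names are invented for illustration) of the kind of typed, class-based code PHP 8.1+ supports natively - enums, readonly properties, constructor promotion, return types:

<?php
declare(strict_types=1);

enum Status: string {
    case Active = 'active';
    case Banned = 'banned';
}

final class User {
    public function __construct(
        public readonly int $id,
        public readonly string $email,
        public readonly Status $status = Status::Active,
    ) {}

    public function isActive(): bool {
        return $this->status === Status::Active;
    }
}

var_dump((new User(1, 'a@example.com'))->isActive()); // bool(true)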
PHP also has a great community (good IDEs, a good package manager, good open source libraries and frameworks).
What do other languages offer that PHP doesn't have?
To be honest, Apache/nginx + PHP with Laravel is probably one of the most stable web servers I've built. IMO it's still a lot lighter and easier than Java or C#. But yeah, the old script kiddie days were great. And I do think we should encourage users to be more wild for small projects and just build something without a framework.
I actually want a more refined version of old PHP, with better (or different) syntax.
And I really like that you touch on the low barrier to entry. Right now, in my likely controversial opinion, modern web dev is a huge pile of excess complexity.
The web has always been inherently complex, so it is only logical that frameworks for building things on the web have evolved to tackle all the possible complexity. Of course, the introduction of handheld devices and duplex multimedia features has also made the web more complex, but mostly we just didn't care 20 years ago, because we could get away with less. Now, more and more things are exposed to us every day, requiring us to grok it all to be able to make day-to-day decisions.
No wonder so many people turn to magical thinking, cargo cult coding, or Uncle Bob-like messiahs.
I think one potential improvement for PHP is to make PHP templating usable again. This could have a low implementation cost but be a big win.
Examples of improvements (a rough code sketch follows the list):
1) Auto-escape output - one of the biggest reasons not to use PHP for templating is that you need to manually escape your strings to avoid XSS, whereas a dedicated templating library can do this for you. This could be done with either a special opening and closing tag, or by letting you register a tag hook.
2) Today you can use the alternative syntax for if, while, for, foreach, and switch.
This could be expanded to cover match expressions, closures, and other block expressions.
3) Custom HTML tag support: register an HTML tag like <my-form> and implement it through an API, perhaps a class that implements an interface. Then you can build much better reusable components that automatically close themselves, instead of requiring multiple function calls.
In the former approach you always need to match one function call with another (manual work); in the latter, your HTML just needs to be valid, which many editors can check for you.
And it would be easy to share these custom components on GitHub via Composer.
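A rough sketch of the three ideas together. Point 2 is valid PHP today; the e() helper stands in for proposal 1 (the engine would do it for you), and the TemplateComponent interface and register_template_tag() in proposal 3 are invented names - nothing like them exists in PHP yet:

<?php
// 1) Auto-escaping: today you escape by hand with a helper like this;
//    the proposal is for a special tag that does it automatically.
function e(string $s): string {
    return htmlspecialchars($s, ENT_QUOTES, 'UTF-8');
}

$users = [(object)['name' => 'Ada'], (object)['name' => 'Grace']];
?>

<!-- 2) The alternative block syntax that already exists for control structures: -->
<ul>
<?php foreach ($users as $user): ?>
  <li><?= e($user->name) ?></li>
<?php endforeach; ?>
</ul>

<?php
// 3) Hypothetical custom-tag API (NOT real PHP): a class implements an
//    interface, gets registered, and the engine expands <my-form>...</my-form>.
interface TemplateComponent {
    public function open(array $attributes): string;
    public function close(): string;
}

final class MyForm implements TemplateComponent {
    public function open(array $attributes): string {
        return '<form action="' . e($attributes['action'] ?? '/') . '" method="post">';
    }
    public function close(): string {
        return '</form>';
    }
}

// register_template_tag('my-form', MyForm::class); // invented for this sketch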
That can work for a while, but it doesn't attract new developers, and legacy projects tend to get replaced outright with new ones in better languages.
That's roughly what I was thinking, yes. There's still a certain appeal to the easy install process, but the gap is a lot smaller now, the classic PHP way famously led to security issues, and there's a vicious flip where, once you need to customize anything, it goes from easier to harder than the average containerized Node/Python deployment, since you need to learn how to run things like FPM rather than just installing a web server package. That also shows up with Composer - the standard library is big, but once you need something else, the experience is worse than NPM or PyPI.
The fatal flaw in my opinion was not taking the language itself more seriously 20 years ago. The culture of laxness around things like type coercion, ignoring errors by default, and being inconsistent about positional arguments, typing, and nomenclature is so deeply ingrained that major improvements are much harder and will likely require advanced automated tooling. I left the community in the mid-2000s after getting burned out by the endless stream of security and correctness bugs caused by the language and culture – my favorite was the time a detailed bug report for functionality not matching the documentation was WONTFIXed because it was working as the core developers intended and they didn't feel like updating the docs – but having had the misfortune of being involved with a PHP project again, the only big improvement I can think of is that register_globals is no longer the default. You still routinely see bugs caused by e.g. inconsistent array function parameter ordering, producing errors which are silently suppressed (despite our configuration having logging enabled for that class of error) and which the built-in linter won't report. There was no reason to waste time on that in 2003, much less 2023.
I find it bizarre that urlscan.io displays recent scans from paying customers. I assume GitHub is large enough that they have to pay, anyway. If they're not, who is?
From the URLscan pricing page [0], it looks like each plan includes quotas for "private", "unlisted", and "public" scans, and you're somewhat incentivized to just make all scans public because that's the most economical. Based on what GitHub's email said, they've opted to scan things publicly, probably assuming that the repos are public anyway. That assumption looks like a poor one to make, in this case.
Oh man I thought this was gonna be just showing the TLD or something. There is a scrolling list of scans, down to the exact HTTP transactions. Just watched an OAuth grant roll by in plaintext. Yikes.
When you run a scan you specify whether it’s public, unlisted, or private. Can someone here explain the utility of non-private scans? (The urlscan.io folks apparently think it’s too obvious to explain.)
TL;DR: Log impure actions. If you minimize and group side effects, you don't need to log as much. Pure functions can be recomputed any time if you know their inputs.
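A minimal sketch of that idea in PHP (the PaymentGateway interface is invented for illustration; LoggerInterface is PSR-3): keep the computation pure, and log once at the impure boundary with the inputs that would let you recompute everything else:

<?php
use Psr\Log\LoggerInterface;

// Pure: no I/O, nothing to log - rerun it any time with the same inputs.
function discountedTotal(float $subtotal, float $rate): float {
    return round($subtotal * (1 - $rate), 2);
}

// Invented for this sketch.
interface PaymentGateway {
    public function charge(string $customerId, float $amount): void;
}

// Impure boundary: the side effect and its inputs are what get logged.
function chargeCustomer(
    PaymentGateway $gateway,
    LoggerInterface $logger,
    string $customerId,
    float $subtotal
): void {
    $amount = discountedTotal($subtotal, 0.10);
    $logger->info('charging customer', [
        'customer' => $customerId,
        'subtotal' => $subtotal,
        'amount'   => $amount,
    ]);
    $gateway->charge($customerId, $amount); // the impure action
}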
Event buses can be useful, but it's hard to look at an event being published and know what it does and whether it's important. Whereas the importance of a logger call is usually pretty clear. Logs are rarely load-bearing, but an event can mean anything.
One big argument in favour of that approach is the oft-forgotten (or at least ignored) fact that logging is, pretty much by definition, a side effect itself.
Honest question: what science? Any good papers you could recommend?
After a quick search it seems that NPS originated in a Harvard Business Review article, which I don't consider a credible source of scientific results. The scientific papers I'm seeing mostly seem pretty skeptical, judging from the abstracts.
I live in Auckland and the tech scene has really been booming in the last year or two. Obviously it's no SF Bay Area, and there are now companies struggling in the current economic conditions, but it may be worth another look in, say, a year.
Is there still a shortage of qualified tech workers, and do you know if they make exceptions for those who don't meet the required number of points for immigration? I'm short by 5 points because I don't have a degree (but I have well over a decade of experience in software development).
EDIT: N/M looks like I can get in easily if I'm offered a job by an NZ employer listed as an "absolute skill shortage", so I guess I answered my own question. I'm using this tool for anyone curious: https://www.immigration.govt.nz/new-zealand-visas/apply-for-...
> Now where are the details about how to get ElasticSearch to behave this way?
The rules posted in the article make a lot of sense, but they also require a ton of backend work. To do prefix matches quickly, you need to create a new index of your content. Then to handle the relevance rules described, you need to be able to query both indices and merge the results in a meaningful way. No obvious way to do this with ES.
The problem is that there actually are patents covering GraphQL. So anybody who writes an alternative GraphQL server implementation is potentially infringing upon those patents.
The issue with the GraphQL spec is that people actually want a patent grant for alternative implementations, but there isn't one.
I normally hate it when people immediately trot out the old "premature optimization" quote, but it really applies here.
Please don't go around naming all your returns just because today's compiler happens to generate better code with them. This is a compiler issue that I'm confident will be fixed one day, especially if you do the right thing and file an issue.
But by all means, if you're profiling and your inner loops are actually slowed down by this, then make the change. And add a comment so that someone might be able to change it back some day when the compiler's improved.
I'm surprised Go doesn't compile down to an IR language where these differences in syntax are represented in a single manner. Seems like different ways to write the same thing.
Go 1.5 was the first self-hosting release, with the Go compiler auto-translated from C to Go. But it was still fundamentally Ken's C compiler in Go syntax.
Every release since (Go 1.6, Go 1.7, Go 1.8, Go 1.9) has been cleaning it up and making it more Go-like and less C-like.
So, it keeps improving. Just remember the Hello World compiler we started with.
Also amusing in retrospect is that when Go first came out, despite having a very basic compiler at the time, people coming from scripting languages thought we were so fast.
Just for what it's worth, the way I'd fix that problem in the compiler would be to implement dead store elimination via global value numbering. With trivial alias analysis, the compiler would be able to detect that the result of the "duffzero" instruction (which I assume is a memset) is always killed by the "duffcopy" instructions and would eliminate it.
Not my area of expertise and it is yours, but if you eliminate a zero instruction before a copy instruction how can you be sure that doesn't affect other threads?
var x int
// pass &x to another goroutine
x = 0
time.Sleep(time.Second)
x = 1
How can you be sure that the other threads ever see it in time? They might be suspended for a whole second because an HDD needs to spin up, or something like that.
Threads never seeing the value is already a valid outcome, so the compiler might as well behave as if that always happens.
The answer to that one would be to embed thread-safety in the type system, a.k.a. Rust.
For languages with less sophisticated type systems you get a choice between inefficiency (Go), or complicated rules which state that the programmer is wrong for coding that way (C).
Your "solution" is possible in many languages. It's just to give threads complete ownership of data they use. It doesn't require a special type system.
In general I don't think Rust actually adds much abstraction that isn't already in say Python. What it does is enforce tight constraints.
Not sure those optimisations are worth much. And it's only safety by forcing lowest common denominator code and making you justify everything to a dumb compiler. Rust serves a niche, but it is a tight niche IMO.
The memory dependence analysis must prove the memory is unaliased, which ensures among other things that no other thread can have a reference to it. Presumably in Go return pointers are guaranteed to be unaliased.
I hope this won't come out harsher than I intend, but I'm so tired of hearing the expression "not reinventing the wheel" used to justify using third-party code. That's not what it means.
Note that there is not a single wheel that was built once in prehistory and that every human now borrows when they need it. People build wheels every day to fit their needs, reusing the concept of the wheel - that is, knowing that a circular object allows for smooth movement with less friction. The analogy in software development means that you'd better know of designs that help you solve your problem, not that you should blindly use code built by someone else to bypass the whole problem-solving. That is basically trying to use a bicycle wheel for everything. It may work well on another bicycle, but not on a car.
I thought the SSA backend was not replacing the Plan9 assembly, but that it was a phase that happened before the assembly was output (presumably SSA is a phase and not an IR?).
I think accusations of premature optimisation might be a little unfair here.
Ignoring the style issues for a second (I'll pick that up later), if I'm looking at some code and there are two equally viable ways of writing it, one of which saves a chunk of memory* or is faster, then it's just perverse to choose the path of larger/slower code. I do this with regular expressions/string functions. I see people use regexes a lot, but the tool I reach for first when doing string operations is the built-in string functions, e.g. https://golang.org/pkg/strings/#Contains or https://ruby-doc.org/core-2.4.0/String.html#method-i-start_w.... I'm not optimising, I'm just not de-optimising.
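The links above are Go and Ruby, but the principle is language-agnostic; in PHP terms, for instance (str_contains() and str_starts_with() are real PHP 8 functions), the same choice looks like:

<?php
$haystack = 'hello world';

// Reaching for a regex:
$found = preg_match('/world/', $haystack) === 1;

// The plain string function says the same thing with less machinery:
$found = str_contains($haystack, 'world');

// Likewise for prefix checks:
$isGreeting = str_starts_with($haystack, 'hello');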
Back to the style issue: at this point, if you really feel strongly about the way it looks in the editor or in documentation, then I can see why you would choose one method over the other; choosing the less desirable but theoretically faster code would absolutely be a case of premature optimisation. I personally don't have a particular preference either way, and I feel the reasons outlined in the style guide are rather fragile. So ultimately, if I pick up some code full of named return values, I don't think it would bother me, in the same way that code that uses none, or mixes them where someone thought it appropriate, doesn't bother me either.
* There may be benefits other than just disk/distribution size. Many years ago I read about the benefits of small binaries, something relating to CPU caches, though that may be out of date now and I forget the details.
The thing is, with the string/regex case, there's a good reason the string operation should be faster. Maybe a "sufficiently smart compiler" could optimize some cases of static regexes, but it's at least complicated. In cases where the regex is dynamically determined, even the sufficiently smart compiler probably can't optimize it away.
In contrast, the case in this article seems like table stakes for an optimizing compiler. It's just not eliminating common subexpressions. There's no reason to contort your code around something that should automatically happen.
When I've benchmarked regexes before (in Ruby, ISTR) there have been situations where the regex engine optimised the code to be comparable to string functions, though I can't remember the details. Just for fun I benchmarked =~ /\Asomething/ against start_with?("something"), a situation that seems like it could be optimised by the regex engine, but the string function is still faster (Ruby 2.3.1).
> There's no reason to contort your code around something that should automatically happen.
Absolutely, but it's a question of style at that point. "Contorting your code" suggests using a less desirable style/syntax for some gain, and I'd agree would be premature optimisation. If you're just making a choice between two styles that you consider to be pretty much equal then it's just pragmatic.
edit: Forgot to add, yes, I agree that this should happen automatically in the compiler :)
My point of view is that if you're choosing between two equally good styles based on the compiler, it's contorting your code. One doesn't have to be better than the other; if you can't freely make that choice between semantically identical options, you're conforming to an arbitrary standard.
I dunno. A 30% code size win is non-trivial. I'm all for filing an issue first and seeing how likely it is that there is uptake from the dev team and desire to fix the problem. However, if no fix is forthcoming... code size has fairly well-known effects on performance.
A 30% code size reduction in code that does little other than construct and return a value. I have certainly seen individual functions where this is the case, but across an entire program, you will not get anywhere near 30% size reduction.
Having said that, this is certainly something that should be fixed in the compiler.
On a related note, in the final assembly the compiler could also have optimized the 4 RETs into 1, then optimized away all of the conditionals, turning the sample code into the equivalent of "return objectInfo()". Of course, in a real example these optimizations would not be possible; but they do show that these reduced cases are not the best way of benchmarking performance.
Unless it actually impacts the use case of the application, and a profiler has confirmed that is indeed the case, it is just cargo cult optimization.
Documentation for 'cached' is on its way. There'll be a blog post up soon with a friendly introduction, and some representative benchmarks. In the meantime there's a drier but more detailed specification of the behaviour of 'cached' waiting in a pull request here:
The Go library's standard AST package[1] similarly stores comments and whitespace information. It has to, since it's used by gofmt, and you wouldn't be very happy you lost comments when formatting your code.
Unfortunately, most people agree that the ast package is a mess. It isn't used by the actual compiler. There is some talk of eventually deprecating it and publishing the internal ast/parser/type-checker packages.