Shame his financial contribution doesn't really match up. $34k is pretty darn low considering how much money Laravel makes. Even worse when you see that much smaller operations like Private Packagist have donated more than Laravel and Symfony combined.
I'm a heavy PHP & Laravel developer, and I speak for myself and a few close friends in my network who are like me. We all consider Laravel the reason we're still within the PHP scene and didn't move away. So in a sense I think it is true.
That said, the recent changes around Laravel (being bought out and becoming more and more commercial) are not something I (we) consider a good thing. Not necessarily a bad thing either, but we all know that an OSS framework going commercial doesn't usually end well.
It's hyperbole, but like so often, there's a grain of truth in it somewhere.
Also, like most things, Laravel is built on the shoulders of others. It sometimes makes hard things easy to access (ppl like this) and hides complexity away, clouding how things actually work (ppl don't like this).
I primarily use Laravel but like to think I code in a generalist way so as not to get stuck in its system.
Ehh, I don't think I'd ever use that word but he had a huge impact on reinvigorating the PHP ecosystem as a whole with Laravel. I remember playing with early versions of Laravel on my own and having my eyes opened to a better way to structure/write code.
Years ago I wanted to use native FTS (because of all the things mentioned: having to sync to an external system simply adds complexity) and it failed at another point.
Not completely surprising, but on a table with _potentially_ a couple of thousand inserts per second, it slowed down the overall updates to the point that transactions timed out.
We had already added an index for one of the columns we wanted to index and were running the statement for the second one. The moment the second index finished, we started to see timeouts from our system when writing to that table, transactions failing, etc.
We had to drop the indices again. So, sadly, we never got to the point of testing the actual FTS performance :/ I would have liked to test it, because we didn't necessarily have to search hundreds of millions of documents; due to customer tenants this would always be constrained to a few million _at most_.
Sounds like the issue was just the co-location of the search index and other transactional data in the same table. If you had a table acting only as your search index, would insert lag on that table matter? I could maybe see connections piling up, but with proper batching I bet it'd be fine.
If I'm understanding your experience correctly, the colloquial wisdom here is to use GIN on static data and GiST on dynamic data; a minimal sketch of both follows the quoted docs below.
> In choosing which index type to use, GiST or GIN, consider these performance differences:
> GIN index lookups are about three times faster than GiST
> GIN indexes take about three times longer to build than GiST
> GIN indexes are moderately slower to update than GiST indexes, but about 10 times slower if fast-update support was disabled (see Section 54.3.1 for details)
> GIN indexes are two-to-three times larger than GiST indexes
> As a rule of thumb, GIN indexes are best for static data because lookups are faster. For dynamic data, GiST indexes are faster to update. Specifically, GiST indexes are very good for dynamic data and fast if the number of unique words (lexemes) is under 100,000, while GIN indexes will handle 100,000+ lexemes better but are slower to update.
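For concreteness, here is a minimal sketch of what the two choices look like in practice. This is my own illustration, not from the thread: the documents table, body column, and connection string are assumptions, and it uses Python/psycopg2 just to have something runnable.

    # Hypothetical example: add a full-text index on documents.body, choosing
    # between GIN and GiST per the rule of thumb quoted above.
    import psycopg2

    conn = psycopg2.connect("dbname=app")  # assumed connection string
    conn.autocommit = True  # CREATE INDEX CONCURRENTLY cannot run inside a transaction

    with conn.cursor() as cur:
        # GIN: ~3x faster lookups, but slower to build and update.
        # Good for mostly-static data.
        cur.execute("""
            CREATE INDEX CONCURRENTLY IF NOT EXISTS documents_body_fts_gin
            ON documents USING gin (to_tsvector('english', body))
        """)
        # GiST: cheaper to update, the usual pick for write-heavy tables.
        # In practice you would create only one of the two.
        # cur.execute("""
        #     CREATE INDEX CONCURRENTLY IF NOT EXISTS documents_body_fts_gist
        #     ON documents USING gist (to_tsvector('english', body))
        # """)

    conn.close()

Either way, queries only use such an expression index if they repeat the same expression, e.g. WHERE to_tsvector('english', body) @@ to_tsquery('english', 'foo & bar').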
Every time I read blogs like this I'm tempted to go back to the product, still in use, where we scrapped the search integration due to the performance issues we had with pgsql. Although ElasticSearch (nowadays: OpenSearch) is available, we never deemed it important enough for this part of the product.
Last year I also learned that ransomware attackers are not computer gods, necessarily.
My QNAP NAS was open to the internet (a forgotten, no-longer-in-use port forward on my router) and the famous QLocker ransomware got hold of it.
I got out of it, and I got really lucky. I have millions of files (useless to anyone in the world but me) on my NAS, and the ransomware encryption took literally days. I probably discovered it on the third day of its activity.
Someone brighter than me had already figured things out and posted on the QNAP forums that there's a 7zip process running which encrypts the files, and it processes them sequentially (aka: it takes a long time in my case). This 7zip process gets executed for every file and gets passed the password on the command line, though in the process view it was masked for me. But I could just replace the invoked 7zip binary with a shell script and log all the arguments -> presto, I had the password.
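The comment above describes doing this with a shell script; purely as an illustration of the trick, here is the same idea as a Python sketch. The paths and log location are made up, and the actual binary name used by QLocker may differ.

    #!/usr/bin/env python3
    # Drop-in replacement for the 7zip binary the ransomware invokes: the real
    # binary has been renamed aside (assumed path below), this script sits at the
    # original path, appends every argument list it receives (including the
    # -p<password> switch) to a log file, and then hands off to the real binary
    # so the process still behaves normally.
    import os
    import sys

    LOG_FILE = "/tmp/7z-args.log"            # assumed log location
    REAL_BINARY = "/usr/local/sbin/7z.real"  # assumed path of the renamed binary

    with open(LOG_FILE, "a") as log:
        log.write(" ".join(sys.argv) + "\n")

    os.execv(REAL_BINARY, [REAL_BINARY] + sys.argv[1:])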
I then wrote some more shell scripts to decrypt all the files infected so far, made them more efficient than the attacker's, and within 24 hours everything was back to normal.
Before anyone cries "BACKUPS": yes, of course I have off-site backups, and they were also not (yet) affected. But I had only backed up the really important data for me and my family; due to the costs at the time, I didn't back up _everything_. I have since switched my cloud backup storage to Backblaze, which I figured was the cheapest option with acceptable ergonomics for recovery, so I could increase the amount of data I back up.
Lesson learned, I guess. I know not everyone can help themselves like I could; I feel really lucky I got away with nothing worse than sweating blood for a few days.
I think asymmetric encryption is not usable for large amounts of data; the only thing it is good for is encrypting a passphrase or a binary signature (like a hash). If you can catch the encryption process while it is running, it is likely that the passphrase is in memory (or used as a command-line argument).
That's why you create a lengthy random key (one you know can't be brute-forced) and encrypt everything with it using symmetric encryption.
Then you store that random key encrypted with an asymmetric algorithm.
The same goes for things like disk encryption. You never use the user's key to encrypt the data. You always encrypt with a large random key that is not brute-forceable and encrypt that key with the user's password, so changing the password just means decrypting the random key and re-encrypting it with the new password. Otherwise you would have to re-encrypt the whole disk on every password change.
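A minimal sketch of that key-wrapping (envelope encryption) pattern, using the Python cryptography package purely as an illustration of the idea described above:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # 1. The bulk data is encrypted with a large random symmetric key.
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, b"many gigabytes in real life", None)

    # 2. Only that small key gets wrapped with the asymmetric key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = private_key.public_key().encrypt(data_key, oaep)

    # 3. Rotating the protecting key/password only means re-wrapping data_key;
    #    the bulk ciphertext never has to be re-encrypted.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"many gigabytes in real life"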
gpg supports using public / private keypairs to encrypt any amount of data you like. I use it for uni-directional backups from machines where trust is an issue.
Or is the reality of this that it's just encrypting a symmetric key with the asymmetric cipher and then encrypting the data with that key?
Everything is encrypted with a symmetric key. It is just that sometimes there is an asymmetrically encrypted symmetric key packet included in the message so that GPG (or whatever) does not have to ask you for the symmetric key. This is all fairly generic, if you actually have the symmetric key you can use it directly even if a key packet exists. This means that you can give some entity a key to decrypt a particular message/file without revealing your asymmetric secret key associated with your identity.
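You can see that session-key layer directly with GnuPG's --show-session-key / --override-session-key options. A small sketch (the filename is made up, and it assumes message.gpg was encrypted to a key you hold):

    # Print the per-message session key, then decrypt using only that key --
    # i.e. what you could hand to someone without exposing your private key.
    import subprocess

    # Owner side: normal decryption, but ask gpg to print the session key (on stderr).
    run = subprocess.run(
        ["gpg", "--decrypt", "--show-session-key", "message.gpg"],
        capture_output=True, text=True, check=True,
    )
    session_key = next(
        line.split("'")[1]
        for line in run.stderr.splitlines()
        if "session key:" in line
    )

    # Recipient-of-the-session-key side: decrypt the same message with just
    # that key, no private key required.
    subprocess.run(
        ["gpg", "--decrypt", "--override-session-key", session_key, "message.gpg"],
        check=True,
    )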
I need to use them almost daily for work: AWS SSO, where I often need access to multiple accounts at once. Without multi-account containers this would be a serious PITA.
> Our roll-outs first apply migrations, then deploy the code. Migrations are applied by iterating over all databases, and sometimes it can take up to several hours before the code is finally deployed (so that it can use the new schema in every DB). It creates a very large window where old code can see new DB schemas, so we have to be careful for our migrations to be forward- and backward-compatible.
I read this answer last week but had to let this sink in.
I can't fathom how this would feel as "the norm". As a Postgres user, I'm accustomed to most DDL statements completing inline within a few seconds, ignoring things like creating indices concurrently, which may take up to 1-2 hours depending on the table.
OTOH I'm sitting on a more or less monolithic database, if there's even a term for that. Yes, it's multi-tenant for tables where necessary (though we don't include that particular customer ID key on all tables, as some FK relations naturally provide this separation), and there are times when I'm not sure a table with 700+ million rows and rapidly growing is a good idea.
From where I sit, having multiple databases per server and/or application instance sounds like an impossible thing to manage. Let alone that the "application" is a collection of multiple services (micro to medium) and "one just does not run multiple instances" of this cohort. Then there's the additional challenge that we're receiving "lots-o-webhooks" from various services, a few hundred to a few thousand per second for any customer, and we would need a central service that knows which database to dispatch each one to, etc.
If I may kindly ask, could you share a bit of insight into how you got onto the road to where your company is now? Did you start out _that_ way from the beginning?
For tech ppl, Linear has so many pain points (ofc heavily depending on your workflow style) that I cry in agony every night.
- Email notifications are batched unless it's an urgent issue. This is such bollocks, I don't know how they get away with it. GH Issues and Jira just work. Workaround: use their webhooks and do it yourself. Makes me so angry.
- can't search comments WTF
- I'm a "tabber", but Linear's super-fancy SPA bootstrapping is way slower than loading a single issue page, especially compared to GitHub but also Jira.
- GitHub has no WYSIWYG and Linear ONLY has WYSIWYG, but it has so many subtle bugs, especially with lists. And I'm spoiled by Notion: it has its own set of problems, but pure editing is great.
Oh boi I could go on.
But I agree, for cross-team work with non-devs it works great.