Because I was curious about this: the use of "categorically" in the phrase "categorically false" doesn't directly relate to the concept of "categories" as groups or classes of things. Turns out it's actually related to the philosophical use of the term which originated with Aristotle and was further developed by Kant (see "categorical imperative" on Wikipedia).
"categorically" in this situation is used to emphasize the absolute, unambiguous, and unconditional nature of the falseness.
Congrats to the Black team on this release, and thank you scrollaway for your fork. We are in a similar position where we are (for various reasons) stuck using tabs for an existing project. Luckily it seems like there might be some movement on this front, where the maintainer team is at least more receptive to reopening this conversation:
There's definitely a segment of people for whom GSuite access and cloud storage of notes is a non-starter, but in many cases those concerns can be mitigated even for early adopters in larger enterprise companies (e.g. via SOC 2 compliance). On top of that, you get a lot of the usual benefits of a collaborative tool: others can help take notes for your meetings so the effort is shared, and agendas can be prepared beforehand, contributed to by all the meeting attendees, and easily accessed via the meeting invite.
You're right that one of the biggest reasons to switch is that everyone else has switched (which is a bit sad to me, since I think CoffeeScript had some good features to offer), and community support + developer tooling are always important factors to consider. We were on CoffeeScript 2, which had niceties like JSX syntax support, but even then our team had issues with its lack of IDE support. We were using it with GraphQL/Relay, which also meant we had to maintain our own compilation toolchain.
In the end, the benefits of having types and first-class editor integration convinced us to switch fully, which we did over the course of a few months towards the end of 2019. One of our team members even wrote a blog post about how we did it:
I really respect companies that have well-written engineering blogs that go in-depth into their technical problems - Figma comes to mind here, for example.
For a small team/company, what are some good tips or resources on how to start an engineering blog? We've worked on many interesting challenges and I know the team would have a lot of insights to share, but it can be difficult to find and justify the time that writing a good article takes, and it's not something that everyone is necessarily interested in doing either.
I'm curious about the mention of Cloudflare's culture of "internal blogging". That seems to me like it could be a first step in that direction.
We use Confluence for all sorts of things, and there's a blog category that's full of posts. Some of them would make a great public blog (in fact, one cool investigation is being turned into a public post right now). Some are so internal and detail-heavy that they wouldn't do well publicly, but they're good for others inside the company to read (and a good way of preserving a memory of how we did something).
At a previous company (30-40 employees), the main thing was having a blog set up somewhere that engineers could post to, clearly distinguished from the main product blog. Actually getting engineers to write posts was a separate problem: besides me there were only a handful, even when it was made clear that a post could be about a very small topic. A few hundred words is plenty.
Not CF, but my employer has a mailing list for emails like that, where people report on things they did, things that didn't work, and so on. Sometimes it's only of internal relevance, sometimes it's about things we can't share publicly, but sometimes it also prompts a "this could be a blog post interesting to other people".
There's always more than one way to do things, and it's good to be aware of the trade-offs that different solutions provide. I've worked with systems like you describe in the past, and in my experience you always end up needing more complexity than you might think. First you need to learn Packer, or Terraform, or Salt, or Ansible - how do you pick one? How do you track changes to server configurations and manage server access? How do you do a rolling deploy of a new version - custom SSH scripts, or Fabric/Capistrano, or...? What about rolling back, or doing canary deployments, or...? How do you ensure that dev and CI environments are similar to production so that you don't run into errors from missing/incompatible C dependencies when you deploy? And so on.
K8s for us provides a nice, well-documented abstraction over these problems. There was definitely a learning curve and non-trivial setup time. Could we have done everything without it? Perhaps. But it has had its benefits - for example, being able to spin up new isolated testing environments within a few minutes with just a few lines of code.
> First you need to learn Packer, or Terraform, or Salt, or Ansible - how do you pick one?
You don't. These are complementary tools.
Packer builds images. Salt, Ansible, Puppet or Chef _could_ be used as part of this process, but so can shell scripts (and given the immutability of images in modern workflows, shell scripts are the best option).
Terraform can be used to deploy images as virtual machines, along with the supporting resources in the target deployment environment.
> Salt, Ansible, Puppet or Chef _could_ be used as part of this process, but so can shell scripts
I don't see the point of your post, and frankly it sounds like nitpicking.
Ansible is a tool designed to execute scripts remotely over SSH across a collection of servers, and it makes writing those scripts trivially easy by a) offering a DSL for writing them as a workflow of idempotent operations, and b) offering a myriad of predefined tasks that you can simply add to your scripts and reuse.
Sure, you can write shell scripts to do the same thing. But that's a far lower-level solution to a common problem, and one that is far harder and requires far more man-hours to implement and maintain.
With Ansible you only need to write a list of servers, ensure you can ssh into them, and write a high-level description of your workflow as idempotent tasks. It takes you literally a couple of minutes to pull this off. How much time would you take to do the same with your shell scripts?
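To make that concrete, here's a minimal sketch of the "list of servers plus idempotent tasks" idea (the hostnames and the nginx example are hypothetical, not taken from this thread):

```yaml
# inventory.ini: the list of servers you can ssh into
# [web]
# web1.example.com
# web2.example.com

# site.yml: the high-level workflow, written as idempotent tasks
- hosts: web
  become: true
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled
      service:
        name: nginx
        state: started
        enabled: true
```

Running `ansible-playbook -i inventory.ini site.yml` applies it to every host, and re-running it is a no-op: each task only changes what's out of the desired state.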
We've been running our own cluster on EC2 nodes built with kops and it's worked well so far. As for logging, which part of ELK is heavy? You can use the cloud operator to run it from within your cluster (https://www.elastic.co/elastic-cloud-kubernetes). We've also switched to https://vector.dev as a more lightweight alternative to filebeat/logstash.
I would highly recommend getting to know https://github.com/ovalhub/pyicu, which in addition to counting grapheme clusters with BreakIterator can also do things like normalisation, transliteration, and anything else you might need to do relating to Unicode.
Seems really interesting after a quick read-through. Specs that allow range-based validation look useful, and the structural declarations also feel like they'll help reduce a lot of boilerplate and repetition. I wonder how this compares with Dhall and Jsonnet, both of which I've been looking into as a safer alternative to templated YAML. With Google putting its weight behind this I'm curious if it'll start finding its way into K8s.
Folks upset at the sibling comment weren't here at the time. The reaction was swiftly negative, and there are few charitable reasons to be found for that.
I've had a good experience with pip-tools (https://github.com/jazzband/pip-tools/) which takes a requirements.in with loosely-pinned dependencies and writes your requirements.txt with the exact versions including transitive dependencies.
Same here: in my team we had the immediate dependencies defined in setup.cfg; when a PR was merged, pip-compile was run to generate requirements.txt, which was stored in a central database (in our case Consul, because that was the easiest thing to get without involving ops).
pip-sync was then called to install it in the given environment. Any promotion from devint -> qa -> staging -> prod was just copying the requirements.txt from the earlier environment and calling pip-sync.
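A minimal sketch of that loop (the file contents and package names here are hypothetical):

```shell
# requirements.in: only the loosely-pinned direct dependencies, e.g.
#   requests>=2.0
#   flask

pip-compile requirements.in   # writes requirements.txt with exact pins,
                              # transitive dependencies included
pip-sync requirements.txt     # makes the current environment match exactly

# Promotion between environments is then just copying requirements.txt
# over and running pip-sync there.
```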
Take my upvote. This has helped us a ton, and it's so nice that it resolves dependencies. The only issue we're running into is that we don't use it to manage the dependencies of our internal packages (we only use it at the application level). I've been advocating that we change this so that we simply read the generated requirements.txt/requirements-dev.txt in setup.py.
Late to the party, but `pip-tools` also has a useful flag for its `pip-compile` command: `--generate-hashes`. It generates SHA256 hashes that `pip install` checks.
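For example (the version pin shown is illustrative, not real output):

```shell
pip-compile --generate-hashes requirements.in
# Each pinned entry in requirements.txt now carries its hashes, roughly:
#   requests==2.31.0 \
#       --hash=sha256:...
# With hashes present, pip runs in hash-checking mode and refuses to
# install anything that doesn't match.
pip install -r requirements.txt
```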
"categorically" in this situation is used to emphasize the absolute, unambiguous, and unconditional nature of the falseness.