I believe that the biggest problem is that different "compilers" do different amounts of work. In the race to win the popularity contest, many languages, and especially newer ones, ship compilers packaged with a "compiler frontend", i.e. a program that discovers dependencies between files, links individual modules into the target programs or libraries, does code generation, etc. This prevents the creation of universal build systems.
For example, javac can be fed individual Java source files, similar to the GCC suite of compilers, but the Go compiler needs a configuration for the program or library it compiles. Then there are also systems like Cargo (in Rust) that do part of the job the build system has to do for other languages.
From the perspective of someone who'd like to write a more universal build system, encountering something like Cargo is extremely disappointing: you immediately realize that you will either have to replace Cargo (and nobody will use your system, because Cargo is already the most popular tool and covers the basic needs of many simple projects), or you will have to add a lot of workarounds and integrations specific to Cargo, depend on its release cycle, patch bugs in someone else's code...
And it's very unfortunate, because none of these "compiler frontends" come with support for other languages, CI, testing, etc. So eventually you will need an extra tool, but by then the tool that helped you get by so far will have become your worst enemy.
I have seen this first-hand with Bazel. You have lots of Bazel rules that are partial reimplementations of the language-specific tooling. It usually works better - until you hit a feature that isn't supported.
I think the idea here is more about preferring speed over features like remote execution and large-scale build caching, not about limiting the subset of toolchain functionality. In theory, if you scoped your build tool to only support builds of sufficiently small size, you could probably remove a lot of the complexity you'd otherwise have to deal with.
Intelligent caching is also table-stakes, though. It requires a detailed dependency graph and change tracking, and that's not something that can simply be relegated to a plugin; it's fundamental.
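To make that concrete, here is a minimal hypothetical sketch in Python (not how any particular build system does it; the targets, file names, and commands are invented) of why the cache key for a target has to cover both its own inputs and the keys of everything it transitively depends on:

  import hashlib, os

  # Made-up example graph: target -> its sources, the targets it depends on,
  # and the command that builds it. None of this comes from a real tool.
  GRAPH = {
      "lib": {"srcs": ["lib.c"], "deps": [], "cmd": "cc -c lib.c -o lib.o"},
      "app": {"srcs": ["app.c"], "deps": ["lib"], "cmd": "cc app.c lib.o -o app"},
  }

  def file_hash(path):
      with open(path, "rb") as f:
          return hashlib.sha256(f.read()).hexdigest()

  def cache_key(target):
      node = GRAPH[target]
      h = hashlib.sha256(node["cmd"].encode())
      for src in node["srcs"]:
          h.update(file_hash(src).encode())
      for dep in node["deps"]:
          # Change tracking has to be transitive, otherwise the key is wrong
          # and the cache serves stale artifacts.
          h.update(cache_key(dep).encode())
      return h.hexdigest()

  def build(target, cache):
      for dep in GRAPH[target]["deps"]:
          build(dep, cache)
      key = cache_key(target)
      if cache.get(target) != key:
          os.system(GRAPH[target]["cmd"])
          cache[target] = key

  # Assuming lib.c and app.c exist: first call builds both targets,
  # second call finds matching keys and runs nothing.
  cache = {}
  build("app", cache)
  build("app", cache)

Everything else people want from a serious build system (remote caching, sandboxing, cached test results) hangs off that same graph, which is why it's hard to bolt on afterwards.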
Right, and I think that's a combination of a few factors. First of all, there's basic momentum: CMake is widely known and has a huge ecosystem of find modules, so it's a very safe choice - no one got fired for choosing https://boringtechnology.club
But bigger than that is just that a lot of these build system and infrastructure choices are made when a project is small and builds fast anyway. Who cares about incremental builds and aggressive caching when the whole thing is over in two seconds, right? Once a project is big enough that this starts to be a pain point, the build system (especially if it's one like CMake that allows a lot of undisciplined usage) is deeply entrenched and the cost of switching is higher.
Choosing technologies like Nix or Bazel can be seen as excessive upfront complexity or premature optimization, particularly if some or all of the team members would have to actually learn those tools. From a manager's point of view, there's the very real risk that your star engineer spends weeks watching tech talks and yak-shaving the perfect build setup instead of actually building core parts of the product.
Ultimately, this kind of thing comes back to the importance of competent technical leadership. Infrastructure like build system choice is important enough to be a CTO call, and that person needs to be able to understand the benefits, to weigh the activation costs against the 5-10 year plan for the product and team, and to be able to say "yes, we plan for this thing to be big enough that investing in learning and using good tools right now is worth it" or "no, this is a throwaway prototype to get us to our seed money, avoid any unnecessary scaffolding."
I may get flak for calling this the "correct" way, but it is - it's the only correct way.
Otherwise you need complicated setups to test any of the stuff you put up there since none of it can be run locally / normally.
GitHub Actions, like any CI/CD product, is for automating in ways you cannot with scripting - like parallelizing and joining pipelines across multiple machines, modelling the workflow. That’s it.
I would really appreciate an agnostic templating language for this so these workflows can be modelled generically and have different executors, so you could port them to run them locally or across different products. Maybe there is an answer to this that I’ve just not bothered to look for yet.
> I would really appreciate an agnostic templating language for this so these workflows can be modelled generically and have different executors, so you could port them to run them locally or across different products. Maybe there is an answer to this that I’ve just not bothered to look for yet.
Terraform? You can use it for more than just "cloud"
In addition, adding our own custom modules for Terraform is, all things considered, fairly easy. Much easier than dealing with the idiosyncrasies of trying to use YAML for everything.
It was YAML, but I actually really liked Drone CI's "in this container, run these commands" model; it was much more sane than GitHub Actions' "here's an environment we pre-installed a bunch of crap in, you can install the stuff you want every single time you run a workflow".
You can also do that in GitHub Actions. GitLab does that, too. But I hate it. When you do this, every build becomes cargo-culting commands until it works. Not to mention every build is its own special snowflake from hell.
I want standardized builds that output everything I need (artifacts, warnings, security issues, etc.) without me feeling like a necromancer. GitHub solves this okay-ish, but extending and debugging it is a pain.
I don't see the post as the author suggesting you do this, but as informing you that it can be done. There's a large difference. Knowing the possibilities of a system, even the things you never plan on using, is useful for security and debugging.
My take was that it is not useful - definitely, categorically not useful. It is a potential security hazard, though, especially for "exploring" self-hosted runners.
This could be achieved with a hierarchical namespacing scheme for functions, no?
universe.mega_corp.finance_dept.team_alpha.foo
But to use `universe.mega_corp.finance_dept.team_alpha.foo` in your application, you don't import a module, just the function `foo`.
Who controls what goes into the namespace `universe.mega_corp.finance_dept.team_alpha`?
That would be Team Alpha in the Finance Department of Mega Corp.
I'm probably just missing something obvious, but in this scenario with really long names, doesn't that just mean all code will be extremely verbose? Or are you saying there'd be some way to have shorter bindings to those longer names within a specific context? But then what would that look like? Typically we use modules to denote contexts within which you can import longer fully-qualified names with shorter aliases.
Then when you use `foo`, the compiler would know you mean `universe.mega_corp.finance_dept.team_alpha.foo`.
There will probably need to be some kind of lock-file or hash stored with the source-code so that we know precisely which version of `universe.mega_corp.finance_dept.team_alpha.foo` was resolved.
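For concreteness, here is a tiny Python sketch of that resolution step (the names and versions are made up, and this is only one way it could work): the toolchain knows every fully-qualified function the project pulls in, lets you write just the last segment when it's unambiguous, and hands back the pair a lock-file entry would record.

  # Hypothetical set of fully-qualified functions this project uses.
  KNOWN = {
      "universe.mega_corp.finance_dept.team_alpha.foo": "v1.0.0",
      "universe.other_corp.analytics.bar": "v2.3.1",
  }

  def resolve(short_name):
      # Collect every known fully-qualified name whose last segment matches.
      matches = [fq for fq in KNOWN if fq.rsplit(".", 1)[-1] == short_name]
      if not matches:
          raise NameError(f"unknown function: {short_name}")
      if len(matches) > 1:
          # On a collision you'd have to spell out more of the path,
          # or introduce a local alias.
          raise NameError(f"ambiguous name {short_name}: {matches}")
      fq = matches[0]
      return fq, KNOWN[fq]  # the pair a lock-file entry would record

  print(resolve("foo"))
  # -> ('universe.mega_corp.finance_dept.team_alpha.foo', 'v1.0.0')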
Every argument made quickly becomes invalid because in any sufficiently complex project, the function naming scheme will end up replicating a module/namespace system.
Very few languages let you have multiple versions of the same package in one project; fewer let you use functions from both versions together; and even fewer make this easy!
This is something that would be enabled with hash identifiers and no modules:
let foo_1 = universe.mega_corp.finance_dept.team_alpha@v1.0.0.foo
let foo_2 = universe.mega_corp.finance_dept.team_alpha@v2.0.0.foo
let compare_old_new_foo(x) =
  foo_2(x) - foo_1(x)
There would be a corresponding lock-file to make this reproducible:
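Something along these lines, purely as a hypothetical sketch (the hash values below are placeholders, not real digests): each entry pins the versioned name to the exact content hash that was resolved.

  universe.mega_corp.finance_dept.team_alpha@v1.0.0.foo = sha256:<placeholder digest of the v1.0.0 foo>
  universe.mega_corp.finance_dept.team_alpha@v2.0.0.foo = sha256:<placeholder digest of the v2.0.0 foo>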
I disagree - if the Vision Pro had some strong use-cases then developers would hold their nose and make apps for it. The platforms that get apps are the ones where businesses see value in delivering for them. Of course businesses prefer it when making apps is easier (read: cheaper) but this is not a primary driver.
I think the potential high-return use-cases for VR and AR are (1) games, (2) telepresence robot control, (3) smart assistants that label (a) people and (b) stuff in front of you.
Unfortunately:
1) AVP is about 10x too pricy for games.
2) It's not clear if it can beat even the cheapest headsets for anything important for telepresence (higher resolution isn't always important, but can be sometimes).
Regardless, you need the associated telepresence robot, and despite the obvious name, the closest Apple gets to iRobot is if someone buys a vacuum cleaner, because Apple doesn't even have the trademark.
3) (a) is creepy, and modern AI assistants are the SOTA for (b), yet they're still only "neat" rather than actually achieving the AR vision that's been around since at least Microsoft's HoloLens. And because AI assistants are free apps on your phone, they can't justify a €4k headset - someone would need a fantastic proprietary AI breakthrough to justify it.