One thing that I notice is that Dark requires a completely walled in environment to be able to deliver all of its promises. In reality, any real-life project has to interact with external services, legacy systems, or custom peripherals which are unlikely to be based on Dark.
How will Dark be able to handle such cases? Current tooling and processes cope with such things through custom, hand-crafted, and, as the article stated, often fragile, systems and processes. They are a pain, but they get the job done, and they are very versatile by nature. While they break more often than we would like, they can be patched up well enough to band-aid over rough patches and adjust to surprising externalities. Dark seems to have one way of doing things, and I am curious how compatible that way is when it comes to interfacing with external systems.
We expect we'll have some built-in support for common things (resizing images and videos, or generating pdfs for instance; we already support static assets on a CDN).
What's the best way to contact you? (I'm $HNusername @ gmail).
Reading on Darklang.com I see they kind of address this. The ecosystem they’re building is training wheels for developers, a way to make “coding 100x easier.”
I can see that working in a sense, and being an entry point that a lot of people use to make neat toys. I could even see it being a gateway that people use to get interested in and learn about programming. But I can’t imagine wanting to build a business with more than one programmer, or any kind of scale on a completely black box system like that.
I will also say, there’s a problem when it comes to entry-level systems, trying to teach people to code:
The struggle with the complexity is actually important. Dark isn’t actually making all the work of building and running a web application go away, it’s abstracting it all into a platform such that you can’t see it.
Suppose a person gets into coding self-taught and learns to work this way. The knowledge isn’t going to be very transferable, i.e. when they look at other languages or systems they’re likely to struggle with problems like “what does prod versus dev mean, I just want my program to run for the world...”
You usually move knowledge between ecosystems by translating “I did it this way in (toolset A), so what’s the parallel in (toolset b)?” The more you have an idea of the underlying principles the easier this is to figure out.
That said, Dark certainly looks neat, and I imagine the implementation is quite cool.
The only other nit I’d pick is the name. To me, it’s not really a language, but maybe a development environment, or framework. I suppose it has a language in it, which I’d probably call DarkScript or something.
> Reading on Darklang.com I see they kind of address this. The ecosystem they’re building is training wheels for developers, a way to make “coding 100x easier.”
No no no no no. This is not what we're doing. We're building something for experienced software engineers. (New developers will be able to use it too and it should be much much easier for them than learning in other ways, but that's a secondary audience).
> Dark isn’t actually making all the work of building and running a web application go away, it’s abstracting it all into a platform such that you can’t see it.
Some of it is being abstracted, but a lot of it is being properly removed. Servers are abstracted, but almost all of actual deployment is being removed.
Even if I was willing to accept the other tradeoffs that come with Dark, I'd want another human to approve my changes before that 50ms deploy to production kicks off.
So to do code review, you write the code behind the feature flag, then ask someone for code review. (There's google-docs style collaboration, so they can see your code if you permit it). After code review, you can "merge" by toggling the feature flag, or changing the setting to "let 10% of users see this" or whatever roll-out strategy you want.
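The "let 10% of users see this" roll-out mentioned here is commonly implemented as deterministic bucketing. A minimal sketch (the function name and hashing scheme are illustrative, not Dark's actual mechanism):

```python
import hashlib

def flag_enabled(flag: str, user_id: str, percent: int) -> bool:
    """Hash flag+user into a stable bucket in [0, 100); the same user
    always lands in the same bucket, so the rollout is consistent."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100
    return bucket < percent
```

Because the bucket is derived from a hash rather than a random draw, raising `percent` from 10 to 50 only adds users; nobody who already saw the feature loses it mid-rollout.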
Just safely. So you're writing code behind a feature flag (but "in prod"), and no users can see it.
In a traditional setup, there will be pre-commit code reviews and a staging env with limited access. But it looks like Dark gets rid of all of this?
But it is one of the few steps the Dark deployment process retains (see the diagram near the end of the article), so it seems they will support it (hopefully reasonably well).
It requires the language to have a bulletproof feature flag system which can't be bypassed, which it seems is the goal.
From what I've read so far it seems like Dark is a good fit for prototyping, experiments, and small products and teams that "move fast and break things". Would you consider it a good fit for large teams and applications, where different people own different components and reliability is the most important metric? A lot of the accidental complexity in the "standard issue" continuous deployment process detailed in the post is there to prevent accidents and enforce permissions.
The feature flag is a thing that you might not need if you're small, but it completely changes how large teams work. Same with DB migrations: if you have a small amount of data you can just do a schema migration and the locking won't affect you too badly.
We'll have a permissions model at some point that will take care of the ownership problem. We designed Dark to allow extremely fine-grained ownership, for example allowing a contractor access to a single HTTP route or function.
Overall, this is kinda similar to how we went about building CircleCI. We focused on good tooling for companies (eg parallelization and speed) instead of individual projects (eg build matrices).
You want developers to behave responsibly and to put all development effort into writing good, reliable code in the first place. I personally prefer pair programming over code review. Code reviews have a lot of downsides, but the two most important are that you can't get from commit to prod automatically (as there is another inevitable manual step) and that the reviewer typically doesn't have enough background/time to perform the review in a meaningful way.
It looks like there won't be a migration path, and you'll need to rewrite your app from scratch using a different language. Again, it would be nice to have some transparency around how much VC money they've taken.
I think the only way I'd use this is if I had the option to host it myself i.e. the code was all open sauce.
For our early customers, we have a handshake agreement that we won't leave them hung out to dry. If Dark was to shut down, we would 100% make sure they had the time and ability to port their app off. There would be a time cost to them of course, so it's not perfect.
As we get closer to opening up to the public, we'll be doing a bunch of work on the sustainability question so we have good answers to the "acquired or shutdown" problem. We imagine a legal framework to protect customers, but haven't got the details worked out.
You can achieve the same result with PHP and FTP, however once you start adding unit tests and branching, that's when things start to take time.
Long builds aren't necessarily a problem if they give you enough in return.
The reason we stopped doing that is because we wanted version control, we wanted to work in teams and we wanted unit and integration tests, which I doubt that you could run within 50ms.
> the feature flags, function versioning
I can easily replicate that with at least early 2000s PHP (would probably work with 90s PHP as well).
> DB migrations
We do have DB migrations in all web frameworks that are worth using.
A good way of thinking about Dark is to suppose that instead of stopping doing that, we found a way to make it work.
> I can easily replicate that with at least early 2000s PHP(would probably work with 90s PHP as well).
If you could replicate it, you'd do it. Except you need version control, unit tests, etc, etc. So you need tooling that has those things to make the feature flags work.
You found a way to run multiple selenium tests in less than 50ms?
Tests wouldn't be Selenium, and Dark can probably deduce which tests are affected by a particular code change (if it's close to purely functional/no side effects), so it does not need to run all of them.
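Deducing the affected tests is essentially a reachability query over a call graph. A toy sketch, assuming a known mapping from each function to the functions it calls (the `CALLS` table and names are made up for illustration):

```python
from collections import deque

# Hypothetical call graph: each function or test maps to what it calls.
CALLS = {
    "test_checkout": ["checkout"],
    "test_signup": ["signup"],
    "checkout": ["total", "charge"],
    "signup": ["send_email"],
}

def affected_tests(changed: set) -> set:
    """Walk the reversed call graph from the changed functions up to
    their (transitive) callers, keeping only the test entry points."""
    callers = {}
    for fn, callees in CALLS.items():
        for callee in callees:
            callers.setdefault(callee, set()).add(fn)
    seen, queue = set(changed), deque(changed)
    while queue:
        fn = queue.popleft()
        for caller in callers.get(fn, ()):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return {fn for fn in seen if fn.startswith("test_")}
```

With purity information this gets much more precise, since a change to a pure function can only affect tests that actually call it.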
I generally like the idea and approach technically, but wouldn't be using them because of the lock-in nature.
I also worry about feature flags: for longer lived "features" (they happen!), code ends up being more complex and hard to maintain, and when the time finally comes to drop them, it's a mess.
Unless your new language can fix all my issues, be open source, and be my One True Human Interface to All Things Machine, I'm going to keep building on ruby, with occasional bash / crystal / rust excursions for speed.
Is your language pure-OO? Do you have gradual typing? Does it run on a common VM? Am I going to have to rewrite everything I'd ordinarily bring in a library for? I can accept every single one of your claims but still not be interested in learning a whole new stack because I can already deliver stuff quickly enough with the one I've been iterating on for a decade. Especially when it could end up orphaned in 3 years if it doesn't get enough adoption.
I'm not saying your holy grail can't be holy enough. But it has to be pretty darn holy.
Zero-downtime deployments are very rare and add a lot of complexity, so it's a big trade-off compared to just uploading the new code and restarting the service, and having the front-end deal with minor hiccups like service restarts; users will have such issues all the time anyway, due to being on mobile networks, trains going through tunnels, etc.
You can do as much testing as you want, both manually and automatically, but you still can't detect all issues as efficiently as thousands of users in production can. So just accept that there will be issues, and instead design your pipeline so that those issues can be fixed within minutes.
What do you mean? Rolling deployments are a very basic feature of container orchestrators.
Erlang also provides the "fix in minutes, in production" story he refers to. For most languages, it's very hard to remotely get access to an object, let alone fix things directly.
With Erlang, you can inspect every object in the cluster to understand what's going on for any user, and fix the plane in mid-air instead of rebuilding a plane with the same issue and then going through the whole process to fix it.
My favorite "load balancing" for web pages is "DNS round robin", where you add several A records and the browser automatically tries another IP if the request fails.
The browser tries another IP if it cannot connect to the first one.
If a server throws 500s left and right, DNS RR won't help, whereas a load balancer would remove the offending server from the pool.
DNS round robin is not failover. If one of your four A records points to a dead box, 25% of your customers are SOL.
Browsers even try other IPs and alternate between IPv4 and IPv6 before the connection is established.
The whole point is to attempt connections to multiple A and AAAA records with a very short delay between attempts (250ms in Chrome).
The browser uses poll to get notified about the first established connection and then uses it.
So if one A/AAAA record's IP is unreachable or slow, it will use another one.
There will be no fallout (except maybe for clients that don't use something like Happy Eyeballs, but any browser does) if you have multiple A/AAAA records and one of them is down.
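A sequential simplification of this fallback behavior (real Happy Eyeballs races connections in parallel; this sketch just walks the resolved addresses in order, so it's illustrative rather than what browsers literally do):

```python
import socket

def connect_any(host: str, port: int, timeout: float = 0.25):
    """Resolve all A/AAAA records for host and return the first socket
    that successfully connects, trying each address in turn."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(addr)
            return sock
        except OSError as err:
            last_err = err
            sock.close()
    raise last_err if last_err else OSError("no addresses resolved")
```

The key limitation discussed above still holds: an address that accepts the TCP connection but serves 500s looks healthy to this logic, which is exactly where a real load balancer's health checks win.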
We deploy to production several times a day. We have an in-house product that our own employees use, and they absolutely love that they can ask for a feature in the morning and get it a few hours later (in the best-case scenario). That wouldn't be at all feasible if all those deployments caused errors for users. Luckily that's not a problem, because these days it's really easy to get zero-downtime deployments with all these containers and "serverless" functions and whatnot.
With what Dark is promising, I believe testing would be easier. Deploy to a test environment in 50ms. Then run a suite of tests, then deploy to production in 50ms.
There could be some complementary tech that figures out what tests need to be run based on what you change to get the time down.
You only need pip or any package install step if your testing image doesn't pre-install the required packages before running the tests. Which I don't do myself, since I always want to see if my packages work with the latest versions of other packages - and Dark wouldn't be able to handle that anyway.
So my test runs actually complete in about 50ms if I'm not testing any external API functionality. They will also never finish in under 5 minutes if I am, even on Dark, since they depend on external services.
disclaimer: working for a l2 network provider, and a mistake costs us infinitely more than an "Oh we're sorry!" can fix.
I guess the value prop is that if you're starting from scratch, and want to write your whole app in Dark then you can avoid hiring a dev-ops engineer and iterate faster?
Seems like a stiff trade-off. If it's an interpreted language you're coming from, setting up envs can be < 30 secs anyways (and same with prod deploys), so that's faster than you need.
If it's a compiled language you're coming from (e.g. scala/c++) then it's likely Dark may not match your needs.
Plus I guess this assumes every team in your company is using only dark? Like what if it's a SOA and a full environment has a node component?
"Dark is currently in private alpha."
That's probably the primary reason why there's little to no info.
What happened to Zope? Well, Chris McDonough (creator of Pyramid, and Zope veteran) blogged about this in 2011 and from my perspective he got it exactly right.
I still think this history is fascinating and I wish more people knew it.
After the history lesson, some of his "lessons learned" bullet points seem very apropos in the context of Dark, particularly the first two:
* "Most developers are very, very risk-averse. They like taking small steps, or no steps at all. You have to allow them to consume familiar technologies and allow them to disuse things that get in their way."
* "The allure of a completely integrated, monolithic system that effectively prevents the use of alternate development techniques and technologies eventually wears off. And when it does, it wears off with a vengeance."
One of the things we designed Dark around was being able to safely and easily continuously deliver language and framework changes. The Zope 3 project was a disaster by the sound of it, and I would hope this will allow us to avoid a similar fate. That's the goal.
Also, we're pretty pro-testing (though also type systems, which help too).
Why would I want that? How do I control it for green-blue? For regional isolation? Is there native support for canary releases?
And 50ms away from restoring service.
The degree of care that needs to be put into a release is related to the potential damage a bad release can cause and the time to repair.
Reducing time to deploy reduces time to repair for most errors (certainly not for errors that write bad state: if you write bad data to your database, that's most likely a long repair regardless of how fast the deploy was).
If you're building NES cartridges, you should really get it right the first time, since there's no way to update. If you're making a low cost internet service, for small changes, it's not unreasonable to try things without a lot of testing before hand as long as you commit to detecting and rolling back quickly.
Sure, but fast rollbacks are much easier than fast rollouts.
> Reducing time to deploy reduces time to repair for most errors (certainly not for errors that write bad state: if you write bad data to your database, that's most likely a long repair regardless of how fast the deploy was).
A uniform fast rollout is almost always worse, and a separate problem from data recovery or fast rollbacks. But I agree with this paragraph in principle.
> If you're making a low cost internet service, for small changes, it's not unreasonable to try things without a lot of testing before hand as long as you commit to detecting and rolling back quickly.
When you're doing more than basic presentation of semi-dynamic content, most CSPs make multi-region deployment very affordable and easy. It's generally a failure of early engineering departments or an accumulation of technical debt that makes it otherwise in 2019. Too many folks assume it is something they should do later rather than get right to start.
Canaries and regional redundancy should be part of every product's basic launch checklist, imo. People love to say any kind of software engineering best practice is bad, because they love to pretend software engineering has no value. But for every flash UX trick or tracking feature someone ships providing value, that's completely overshadowed by being down when a user needs you. Reliability not only saves you time at the developer level by making features more predictable, but it means you need to worry slightly less about funnel optimization because your user attrition is lower.
(Seriously, how do people come up with such facile counter-arguments?)
It's actually not hard at all to make software deployment fast. Many people do it accidentally. But tying it directly to editors and stovepiping the infrastructure around testing seems to me like a great business model that any rational engineer interested in reliability should immediately reject, unless it's coupled with open source, statistically valid canary tooling at the bare minimum.
What you actually want at that degree of universal knife-switching is rollbacks.
This is not a fair statement. This is like saying "Did seatbelts save the victims who died in car accidents X/Y/Z?"
Just because your tests/ci/etc. can't catch every single issue that could go wrong doesn't make them useless. Just because people die every single day in car accidents doesn't mean car's safety features are useless.
You're correct in that a CI should not in any way be considered a "perfect solution for outages", but an automated system that tests, deploys, etc. in a predictable and (again) testable manner is at least helpful.
Actually, your refutation is precisely what I'm trying to say! The GP implied that CI and UTs fix these problems. They don't. In the same sense that seatbelts and air bags may reduce the chance of a fatal accident, they do not replace safe driving, well-trained drivers and intelligent infrastructure design.
> but an automated system that tests, deploys, etc. in a predictable and (again) testable manner is at least helpful.
I didn't say otherwise. In fact I agreed those measures should be in place. They're just not sufficient. You will still face bad releases even if you do this, and so you shouldn't do fleetwide rollouts unless you just don't care about your system falling apart.
Please don't misrepresent my comment.
Yes, multiple times. It just didn't help with each and every outage, which is a different thing.
For the ones not preventable by CI (e.g. a disk was borked, or power was down, i.e. no integration/programming error that a CI process could possibly spot), the question is moot.
Besides the concern about outages is totally unrelated to "speed of deployment".
Increased speed of deployment helps testing and iteration (obviously faster deployment speeds also means faster deployment to testing and staging environments, not just to the final server).
So then maybe the idea is that a prompt global rollout is a bad one even if you have good tests and coverage.
> For the ones not preventable by CI (e.g. a disk was borked, power was down, but no integration / programming error etc a CI process could possibly spot) the question is moot.
I'm here to tell you, as someone who has been at this for a long time and is actually doing SRE at Google at the outer layers: No. It matters. We do canaries because they let you find problems no reasonable person could catch with tests. Tests themselves are only one component of a chain of reliability best practices, starting at coding practice and expressive type systems with static checks and moving outwards to deploying the code carefully and with intentionality.
> Besides the concern about outages is totally unrelated to "speed of deployment".
This is obviously false. Slower rollouts of new software and canaries are a proven way to reduce the impact of a bad push. Once your system gets modestly complicated, it can be very difficult to spot a bad push with naive unit testing which (let's be real here) often just mocks out dependencies anyways.
> Increased speed of deployment helps testing and iteration (obviously faster deployment speeds also means faster deployment to testing and staging environments, not just to the final server).
Right, but very fast deployment to client machines is not hard. The CD model is just inherently dangerous unless it has automated safeguards against the complexities of even a modestly large production app.
Personally, if your staging environment is CD I think that's great (up until your org gets big enough that a broken build pisses off other devs). But to prod? I will resign before I let you do it. Everyone who does software at scale agrees it's not a good plan to do CD on prod. There are whole books about why you shouldn't do it.
Non sequitur. This confuses "speed of deployment" (which is a technical issue) with "slower rollouts" (which is a devops choice).
You can have 50ms (or, generally, very fast) deployment and still have canaries, slow rollouts, incremental rollouts to smaller audiences, and so on. Those are orthogonal choices.
The slowness of deployment that Dark claims to solve (and which is good for any project to solve) is about the non-essential slowness because of convoluted processes, too many moving parts, accidental complexity, and so on.
>Right, but very fast deployment to client machines is not hard.
You'd be surprised.
No, I wouldn't.
Perhaps one of the promotion strategies could involve human-controlled criteria.
I would like to understand their plans on this point a bit better as well.
I see immense power in switching a promotion strategy away from large units, like entire containers or versioned artifacts, to fine grained components, like versioned functions.
Do you see any advantages here for improving your platform's availability?
To answer, the whole system is based on feature flags (which are equivalent in this case to green-blue/canary releases). So yes, there is a massive amount of control for this; in fact, that's the whole point of the article.
You just didn't talk about the actual complexity of deployment. And that's your call, but as you can see from this thread there are people who think CI/CD to prod with no controls is a good call.
From my reading of the article, the way Dark is intended to work is that the language and environment enforce that changes to code accessible in production cannot be made without putting the change behind a feature flag. Feature flags can be written so that flag flips are rolled out gradually: to whitelisted users, a percentage of users, etc. (I'd be interested in how sophisticated the specification of which requests a feature flag applies to can be.)

Writing a tool to instantly revert the feature flags to their state at a previous timestamp is straightforward in principle. And presumably there'd be no need to write separate configuration files in a pure Dark environment, so changes to configuration would also be feature-flagged. In that case, there would in theory be no need for a separate canary process, since the feature flags themselves can be canaried. I just hope that Dark will encourage teams to canary changes rather than have "easy" defaults of "cowboy deployment to prod".
Dark would probably make it very easy to roll back the simple changes that result in exceptions being thrown on every request. All the most interesting outages that I've seen have involved emergent interactions that put the system into a state unrecoverable without manual intervention. It's unclear how Dark would help with that from my reading.
A thing to clarify is that you're not really deploying code in 50ms. It's more that all the code you write is in production, behind a feature flag. But you can't change unflagged code, and you also can't change the old code; you can only set a flag so that some users see the new code.
Knife-switch rollbacks are good though, tbf. I just don't think that's particularly challenging.
As we grow and start to look at compilation and optimization, I think we'll probably get much fancier, but for now we're very into keeping things super simple.
I would argue that this is not such a priority for a software development organization; there are more important things to do.
You got so excited...so fast, I think this community may be doing more bad for me than good lately.
A lot of comments here are coming from an anxiety that is caused by many years of dealing with fragile deployment processes, but I like the boldness of Dark to just bypass all that and make the fundamentals safe and solid.
But it's fast enough as long as you keep each project small.
I like to imagine what would need to happen to reduce (increase?) that granularity to a single function - you change the code, byte-compile it, send the blob to your backend’s hot-code-swap port, and voila - you’re done. A lot has to happen behind the scenes though: a change in dependencies would still require a larger deployment or a much smarter code-swapping procedure, the caller/callee tree would need to be considered, functions would need to be pragmatically pure (talking to a DB is fine; being part of an “object” - not really), functions would need to be extensively specced, etc.
But that would be my backend holy grail.
think of function `+` - i want to be able to deploy a change to that function such that no worker process gets respawned.
also imagine that every time you call `+` in your application it goes through the whole FaaS shenanigans: serializing, sending an rpc over the wire, receiving the response, deserializing, dealing with throttling, errors, backoffs. all of that is ridiculous for `(+ 1 2)`.
and yes, erlang is a good example of an environment where this could work.
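The essence of per-function hot swapping is that callers resolve the function through an indirection at call time, so rebinding a name changes behavior without restarting any worker. A toy sketch of that indirection (all names invented for illustration; Erlang does this properly at the VM level):

```python
# Registry-based dispatch: callers never hold a direct reference to the
# function, only its name, so a rebind takes effect on the next call.
REGISTRY = {}

def register(name):
    def deco(fn):
        REGISTRY[name] = fn  # (re)bind: this is the "hot deploy"
        return fn
    return deco

def call(name, *args):
    return REGISTRY[name](*args)

@register("add")
def add_v1(a, b):
    return a - b  # deliberately buggy first version

@register("add")  # "deploy" the fix by rebinding the same name
def add_v2(a, b):
    return a + b
```

In a real system the swap would also need to handle in-flight calls and changed signatures, which is where most of the hard work mentioned above lives.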
(The mailing list thing is weird, I only see that link at the bottom. Can you point me at the ones you see in the text?)
- statically typed, functional language with no null pointers
- safe deploys and rollbacks
- structured editing / version control
- minimal infrastructure management
...etc. A tool that makes writing a correct program, and deploying it without catastrophe or fuss, easy. I look forward to reading more.
I seriously hope though that you will open source the compiler.
I can't see Dark going anywhere without a guarantee that you can use it without using the provided infrastructure and that your code will work fine even if Dark goes away or you decide to host it yourself.
I certainly won't touch Dark with a 10 foot pole otherwise. And would advise anyone against it strongly.
Dark can talk to other HTTP things, so many of the functions that are done by libraries can be done over HTTP. So you can call other services, even if we don't have an SDK for that yet. And from there it should be easy to package it up so there is an SDK.
Finally, it's a question of what you're doing. There's many tasks where the disadvantage of not having an ecosystem is outweighed by Dark's other advantages. And as we grow the ecosystem, the disadvantages go away, unlike the status quo tooling.
How did the decision to create a whole new language take place? Was using existing languages a serious consideration and if so, what were the trade-offs that made you go in this direction?
I'm familiar with the basics of back-ends. Let's say you had a language that compiles each back-end endpoint into a binary executable (each API endpoint is a separate program). All you'd need is versioning via directories, symlinks to the active most-up-to-date version, and a router that looks up the executables in a dir and routes to them.
Making changes would involve changing the code, pressing "deploy" which pushes to github and gets the backend to pull, re-compile the binaries and update symlinks.
Given this setup - what does your approach do that's more than what I've just described? I'm assuming there's some fundamental problem my naive approach does not handle that made you go in the direction that you've just described.
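The symlink-swap step of the naive approach above can be done atomically, so the router never observes a half-updated state. A minimal sketch (the `current` link name and version directories are illustrative):

```python
import os

def activate(version_dir: str, link: str = "current") -> None:
    """Atomically repoint `link` at version_dir: build the new symlink
    under a temporary name, then rename it over the old one. rename()
    over an existing path is atomic on POSIX, so there is no window
    where the link is missing or dangling."""
    tmp = link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(version_dir, tmp)
    os.replace(tmp, link)
```

Rollback is the same operation pointed at the previous version directory, which is part of why this scheme is attractive despite its simplicity.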
Then we'd be stuck with all the JS mistakes: promises and futures and callbacks and object model and null/undefined/0/"" comparison tables.
This list keeps going: it would have to use a parser. It would need to support npm, etc, etc.
I expect once people see Dark that they'll try to build similar features for existing languages, and I hope they succeed! There's some companies that are doing similar things, and I think the scale of their vision is much lower, partially because they are limited by language choice.
What are the fundamental features of Dark that are not possible to add to an existing language?
What are the fundamental features of an existing language that are not possible to restrict via enforcing a subset, that are fundamental to Dark?
Is Dark a very domain specific language?
Perhaps these questions will better be answered by a demo that is coming in September.
An obvious example is the structured editor. To do that, we have to have a language in which there is a meaning to every incomplete program. Other languages don't have that - they just have a syntax error.
Our editor shows you what we call "live values" as you type. If you've seen lighttable, it's a similar idea. We save real values from production and use them as a sort of debugger. It's really wild; it makes understanding code really easy, and debugging too.
How would we handle that in JS? No idea, might be possible by hacking into v8. But even then, we actually needed to change the language semantics and require all functions to be marked as "pure" or "impure". And then we discovered there's a 3rd state: "non-side-effecting, but we don't have a version of it in the client". And then a 4th state: "not idempotent, but we can run it without side-effects".
I doubt there's anything that _couldn't_ be done in JS or Ruby or Python, but what's the point in fighting that battle? By controlling the entire thing we avoid all this.
Same as why we don't support vim. Could we make all the cool stuff we're doing work in vim? Sure, with a massive massive effort. Blank slates work better for this.
I don't know if you can convince people to do it, but I'm glad somebody's trying.
The UI is just not thought out at all. I don't think I'm being resistant to change. I think their IDE is just garbage.
Some people have done new IDEs well; VSCode comes to mind. But replicating 40 years of features is not going to happen overnight, and some of those features are actually important. Figuring out which ones is the key. If you don't hit a developer's critical set of features, they aren't going to use your thing.
And the reason 1d text is hard to get rid of is not merely an editor (i.e., an "app") but a galaxy of languages, tools, and operating systems built around 1d character streams. Everything from the serial driver to the style sheet can be manipulated with primitives for mapping, filtering, and composing character streams and operations on them.
There ARE a bunch of benefits to structured code; for example, Docker and npm/pip/etc have recently shown us a hint of what composition of standard components can bring.
What might be missing from structured code is the rest of the primitives on ASTs. Once we have all that we can talk about backporting to emacs :)
Maybe the editor should be customisable via Dark itself?
Alternatively, allow us to use our editor-of-choice and provide a language server to facilitate the creation of structured editing plugins for Dark.
I would add that most applications, including most non-vi-or-emacs editors and web browsers, do not allow much customisation of how their input is mapped to actions in the software. For someone like me, a person that is a vi die-hard because of its ergonomics, this is absolutely critical. Post-input customisation, no matter how advanced, cannot satisfy this need.
That's the plan!
It sounds like taking the Go idea of quick builds to the next level.
Most of the advantages we discuss in this post come from the fact that you're using the Dark editor, language and infra. I don't believe this would be possible with a self-hosted solution.
It's a proprietary language, that can only be edited in a proprietary cloud-based editor, that only runs on proprietary infrastructure?
- What happens if Dark-the-company disappears? It doesn't matter how. You're acquired by Google and shut down, your office is hit by a meteorite, whatever. Everyone that's ever built anything using Dark would completely lose it? Every company that's built anything using Dark would have nothing remaining in its place except--at best--some code that can't run anywhere?
- How much venture capital have you taken? Do you expect to take more?
- What if you update the platform and it introduces behavior changes or bugs into some of your users' applications? Can they roll back and stay on the previously-working version of Dark (indefinitely), or will they have to rely on you to resolve the issues?
- The first mission of the company that you list on your Values page is "Democratizing Coding". How does making literally every aspect of Dark completely dependent on your company support that? Will Dark users vote on company decisions? How can a completely centralized system with a for-profit owner promote democratization?
They can fix it in 50ms! Don't worry!
If you're worried about a meteor hitting SF (and I know that's a metaphor, but let me run with it), then your risk profile indicates you won't be using new technology from a new startup (you probably won't even use less risky stuff, like Rust or Elm).
If every technology had exactly the same constraints, then we'd be stuck with those constraints forever. This isn't the _right_ way to do it, it's just our vision of how to solve the problems with coding.
I'd like to have the option to reuse my code to mitigate those risks. Two possibilities: a) If the Dark IDE and infrastructure were compatible with a common language, I could use "regular" build and deploy tools if I wanted (I'd weigh the pros and cons vs. your price increase), or b) if the Dark infrastructure had an open source API (not necessarily an open source implementation, but like what SQL is for databases, or the S3 API for object storage), I could implement my own alternative infrastructure or shop for one.
(a) seems difficult technically and (b) means risking commoditization on your end.
This would also kill the company.
We are actually looking at ways to mitigate this risk, as we do agree that being locked in is a real risk. We also have to provide sustainability to the business, as Dark failing as a business isn't helpful for our customers either. We're thinking about this, but nothing concrete yet.
no. they're both open source for starters.
furthermore, even if all developers working on these projects vanish from one day to the next, you'll still be able to use the last released version and will have a maintenance window to deprecate the software.
if Dark goes away, you'll be gone on the same date.
I'm an open source fan myself, but given that AWS has a penchant for lifting concepts from open source projects and using it to crush them, and given that AWS is one of the primary targets here, it kinda makes sense to opt for proprietary. Especially at this stage.
Just my two cents.
What makes it impossible specifically?