The past had a solution for this: the system administrator. He could design and implement robust, simple systems to provide infrastructure and tooling for developers to deploy their code. But this isn't "cool" any more. You have to use dozens of self-service, bleeding-edge (read: unproven and unstable) tools bought from the players who have the most money to invest in tech conferences, blog posts, astroturfing, evangelists, etc., and pretty much nothing left to pay their own developers.
For example, try telling people they would be better off with their code packaged and delivered as a Debian package instead of a container image - see how fast they consider you a crazy preacher from the past.
But the market for sysadmins is completely soaked in corporate IT professionals who really only have maybe 20% skill overlap with what smaller tech companies want or need. When it comes to delivering software, companies have found they want/need a strong sysadmin knowledge domain combined with a strong software development background - hence the rise of the "devops" and SRE disciplines.
What really bums me out is that long ago the term sysadmin applied to software developers as well - people on Unix systems who wrote C and other systems code and used bash to string software together fast.
So in a lot of senses, the creation of bash is a form of the developer-sysadmin construct done quite well. But today, with our fractured, mercurial software ecosystem, we see so many competing tools that do things 90%+ the same way in practice, for hardly any real advantage over each other. Every option can easily manage hundreds of machines in seconds or a few minutes, after all.
But there’s a cultural problem that has bifurcated into engineer castes over time. Go to conferences, social networks, and meet-ups for developers vs. sysadmins - the tone and tenor of the topics is completely different. Developer conferences have a smattering of deeply concerned talks, but the majority are bright, cheery, and exploratory in tone, and the side conversations match. Sysadmins, oftentimes stuck in cost centers, are as a rule grumpy and even belligerent, and the side conversations remind me more of the hallways of a VA hospital than Bell Labs.
Where I work we create Debian packages, then build a container image whose Dockerfile installs that package.
In our defense, our test systems are physical, our CI/CD chain is containerized, and production will move from physical to containers soon.
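In case it helps picture that hybrid workflow, here is a minimal sketch of a Dockerfile that installs a locally built .deb into an image - the package name, version, and base image are placeholders, not the actual setup described above:

```dockerfile
# Hypothetical sketch: install a CI-built Debian package into a container image.
# "myapp_1.0_amd64.deb" is a placeholder name.
FROM debian:bookworm-slim
COPY myapp_1.0_amd64.deb /tmp/
RUN apt-get update \
    && apt-get install -y --no-install-recommends /tmp/myapp_1.0_amd64.deb \
    && rm -rf /var/lib/apt/lists/* /tmp/myapp_1.0_amd64.deb
CMD ["myapp"]
```

Installing via apt-get (rather than raw dpkg -i) lets the package's declared dependencies be resolved from the configured repositories.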
> (if, say, both app1 and app2 need ImageMagick with different policies) ?
Debian typically handles that by building fat binaries/libraries with all the possible options enabled. In cases where there are multiple incompatible options, I'm not an expert, but I believe you create a virtual/meta "package" that can be satisfied by the installation of any one of the incompatible binaries. Pacman handles it similarly. Gentoo is an oddball in that it tracks a set of system-wide USE flags and enables/disables features in builds based on which flags you have set. It's able to do that because it compiles the packages itself, so you can choose the options at install time.
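For the curious, here is roughly how the virtual-package mechanism looks in practice. Debian's classic real-world case is the mail-transport-agent virtual package; the control stanzas below are a simplified sketch of the pattern, not copied from any actual package:

```
Package: postfix
Provides: mail-transport-agent
Conflicts: mail-transport-agent
Replaces: mail-transport-agent

Package: some-app
Depends: mail-transport-agent
```

Because postfix both Provides and Conflicts with the virtual name, installing it satisfies anything that depends on "an MTA" while forcing removal of any other MTA - that is how the incompatible alternatives are kept mutually exclusive.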
But, you understand that this means Debian doesn't handle the issue, right? Like, cool, I got an error, but I still can't make my two programs run concurrently - back to Docker it is. Also, some programs generate config files or state in /var at runtime, so apt can't even warn you.
Packages are not permitted to overwrite data files or configuration files (conffiles) from other packages. dpkg would abort the installation or upgrade of any package which tried to usurp a file owned by another package - at least, not without an explicit Replaces field to allow adoption of them. Enforcing consistency and ownership with a central database is the entire point of managing the system with a package manager.
This isn't even new. This is 25+ year old technology at this point.
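The enforcement described above can be sketched in miniature. This is a toy model of a central file-ownership database that refuses to let one package usurp another's file unless a Replaces relationship allows it - it is not dpkg's actual implementation, just an illustration of the idea:

```python
# Toy model of dpkg-style file-ownership enforcement; not real dpkg code.

class FileConflict(Exception):
    pass

class PackageDB:
    def __init__(self):
        self.owner = {}  # path -> owning package name

    def install(self, name, files, replaces=()):
        # Refuse to usurp a file owned by another package,
        # unless that owner is listed in this package's Replaces.
        for path in files:
            current = self.owner.get(path)
            if current is not None and current != name and current not in replaces:
                raise FileConflict(f"{name} tries to overwrite {path} owned by {current}")
        for path in files:
            self.owner[path] = name

db = PackageDB()
db.install("app1", ["/usr/bin/tool", "/etc/tool.conf"])
try:
    db.install("app2", ["/etc/tool.conf"])  # conflict: aborts
except FileConflict as e:
    print("aborted:", e)

# With an explicit Replaces, adoption of the file is allowed:
db.install("app1-ng", ["/etc/tool.conf"], replaces=("app1",))
print(db.owner["/etc/tool.conf"])  # app1-ng
```

The point of the central database is exactly this check: every file on the system has one known owner, and ownership changes are explicit.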
> Enforcing consistency and ownership with a central database is
Assuming a lot of things, very wrongly. My distro maintainer's idea of consistency may be very different from mine - it's not the system that matters, it's the apps, because running them is the only reason we buy computers. And it happens that you may need to run two apps which are entirely incompatible on the same system. (Taking my past life in music production as an example: you really, really want to keep using the same version of a piece of software for a given project. But you can have a dozen different projects in flight which all require different versions of that software. Try installing 12 different versions of GIMP or Ardour on Linux without something like AppImage :))
Like, I know that Python is particularly prone to this, and if you have multiple languages needing C/C++ based libraries then I guess this is a concern.
But how many people have experienced this to an extent that containerisation seems like the right solution most of the time?
Making sure it's all correctly split into runtime, library, development, documentation, debug etc. parts, and ensuring that each part has the correct dependencies upon other packages in the system is a much more involved task. But it's this part that adds most of the value compared with simply building from source.
Making a "package" is easy. But the real point of packaging is integration with the wider system, and that part requires actual effort. Docker doesn't even attempt it. Docker is easy and convenient primarily because it pretends that problem doesn't even exist. It does the easy 10% of the job while ignoring the 90% of the work that takes the time and effort.
A “container” is an archive (e.g., tarballs) of the contents, a hash chain of the construction, and some config data about the cgroups and namespaces to run it with. Turn off the parts you don’t want — I usually turn off virtualized networking, for instance.
It’s a more lightweight package that doesn’t have versioning problems and doesn’t require crazy installation scripts — what benefit is traditional Debian packaging supposed to have?
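That description can be made concrete with a toy sketch. The snippet below builds a content-addressed tarball "layer" plus a JSON runtime config - it mimics the shape of a container image but is heavily simplified and not a real, runnable image format:

```python
# Toy illustration: an "image" is little more than a tarball of contents,
# a hash of its construction, and some JSON config about how to run it.
import hashlib
import io
import json
import tarfile

def make_layer(files):
    """Pack a dict of {path: bytes} into a tar archive; return (bytes, digest)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for path, data in sorted(files.items()):
            info = tarfile.TarInfo(name=path)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    blob = buf.getvalue()
    return blob, "sha256:" + hashlib.sha256(blob).hexdigest()

layer, digest = make_layer({"app/hello.sh": b"echo hello\n"})

# Config: what to run, plus (in a real runtime) cgroup/namespace settings.
config = {
    "entrypoint": ["/bin/sh", "app/hello.sh"],
    "layers": [digest],            # hash chain of the construction
    "namespaces": ["pid", "mnt"],  # e.g. omit "net" to turn off virtualized networking
}
print(json.dumps(config, indent=2))
```

The real OCI format is more elaborate (manifests, media types, layered diffs), but the skeleton - content archive, digest, run config - is the same.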
1. after code review etc. a change is merged to master
2. the CI tests are automatically run
3. IFF the tests pass the change is auto-deployed to staging
4. promoting a deploy to production is a push-button or a CLI command
5. adding 3rd-party services such as New Relic / DataDog / Papertrail / Bugsnag is push-button or a single CLI command
6. even things like upgrading to a new version of Postgres is distilled down to a few CLI commands
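Steps 1-4 map fairly directly onto most hosted CI systems. Here is a hypothetical GitHub Actions-style sketch - the job names, commands, and deploy script are all placeholders, not any particular setup:

```yaml
# Hypothetical sketch only - adapt names and commands to your platform.
on:
  push:
    branches: [master]         # 1. runs after a change is merged to master

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test          # 2. CI tests run automatically

  deploy-staging:
    needs: test                 # 3. IFF the tests pass...
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy.sh staging   # ...the change is auto-deployed to staging

  # 4. promotion to production stays behind an explicit manual trigger
  #    (e.g. a separate workflow_dispatch workflow or a CLI command)
```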
I think Heroku is not alone in providing this level of ease, but that's what I've been using.
I would argue that there is not a "gap" but there is actually "a thing that you can totally have if you're willing to pay more money and give up some customization or control," which makes it roughly analogous to the iPhone.
It's not actually that hard to set up a self-hosted solution that gives you the Heroku workflow.
Dokku provides a nice, open-source Heroku-like experience. I've had a good experience so far using Dokku on DigitalOcean or Hetzner VMs.
I do. At least for the vast majority of my experience, for webapps nothing comes close in terms of dev happiness.
You do have to give up that control and stay within the rails, though.
There is something there fighting to come out - something about software literacy (I claim we will all read/write software like we read/write language).
There are two types of "publish". There is the kind I do here and on social media - not proofread, just flung out. It includes emails to friends and texts with shopping lists - it is easy and simple and enabled by the platform.
And there is publishing proper - a book, an article, etc. It has hurdles (and higher expectations).
Most times writing software feels like the latter - QA and proofreading, tests and slow release cycles.
Actually getting my thoughts down takes time - it takes reading around and consideration and hell it has friction.
So yes, we need faster ways for people to deploy software, and to sandbox it so the blast radius is limited - ensure my code to switch the lights on might leave my hall light blazing for a week's holiday, but is not going to turn the microwave on or reset the fire alarm.
But the code to do that must be the higher quality, the professional tier.
It is crazy to expect JK Rowling to go through the same process to send a tweet as to publish a book, but it is also crazy to hold that tweet to the same standards. And frankly it's crazy for the tweet to get the same kind of audience. (No, this is not a comment about the content of any of her tweets - just an example that came to mind after buying books for my daughter.)
Same goes for software. Different release standards, different quality standards, different sandboxes.
In the early 2000s there were promises that we were on the verge of graphical programming: environments where anyone could stitch programs together visually. They turned out to be more trouble than they were worth. Anything sufficiently complicated needed a proficient software engineer, and that engineer inevitably did away with the cumbersome graphical tool, preferring to just write the code. The tasks that non-software engineers were able to work on were of questionable value to begin with.
Personally I prefer the patchwork. Emerging from the Java/.NET bubble was a breath of fresh air for me. I considered the IDE to be constraining: it did 90% of what I needed, but I also needed to do the other 10%. Simplicity always trumps completeness, and composition is always better.
I don't feel that we lack the tooling to be as slick as we want, we lack the motivation to invest the time and money to make things slicker. It worked at Apple, not because of integrated tooling, but because of a dictator-style management that forced people to collaborate in the name of quality. Perhaps what we are uncomfortable with is the idea that high quality requires that level of control!?
The truth is that there is probably not much in the UX that is fundamentally better or worse, apart from the animation side of things and possibly the higher-res screens in some instances; most of it is just different.
What made the iPhone revolutionary was the input method (and of course all the UI built on top of that).
I kinda glazed over after that intro.
Thus I kinda inferred the OP meant "... which device crossed the threshold into mainstream awareness". So your "glazed" dismissal seems a little harsh.
That said, I think you're right to make note of the Hiptop (TIL!), and I agree that it was the touchscreen (albeit also not truly the first of its kind) that was a transformative experience for the masses. I'll never forget the first time I interacted with an iPhone.
The hiptop unquestionably crossed the threshold into mainstream awareness at the time. It's just been forgotten because the iPhone overshadowed it.
On a personal note, I had a few generations of Hiptop before I switched to the 1st gen iPhone. Yes, my first experience with the iPhone was memorable... but still not as memorable as my first Hiptop. "You mean I have the internet everywhere??" That was incredible in 2002. We just take it for granted now.
The connected devices at the time were all still struggling with the early adopter chicken-and-egg issues of an emerging network of services. There weren't "apps" and web sites had not centralized themselves. And cost, speed and service coverage remained very limited. You couldn't justify a data plan just for Mapquest when you could print it out at home. Most kids, myself included, wouldn't be getting any phone for a few years yet. The mobile phone's purpose in the 2000's was served well with what was in feature phones: call, text, Snake, maybe email.
In contrast, the iPod, a rough contemporary with the Hiptop, addressed something more immediately compelling with a two-sided, integrated marketplace when paired with iTunes. There was a value proposition in that since not everyone was or wanted to be a savvy song pirate, and you could buy singles instead of albums. Internet speeds and access were ready for that use case.
Apple's success at the time, both with iPod and with iPhone, rested on timing and quality of integration, which returns again to that which the article alludes - we have a lot of early-adopter developer services, and some of these are in a position to be more like an iPod/iTunes. But I don't think the article goes deep enough in recognizing that even the iPhone was capitalizing on underlying infrastructure developments to channel them through a specific product and service mix.
Email was (and still is) the killer app, and it worked great. Instant messaging worked great - and at the time, Maxis (where I worked) ran on AOL IM the way people use Slack now. Danger's web browser actually worked through a proxy that chopped up pages and served them in a reduced format, which was effective in 2002 but was a liability by 2007. You could even edit your contacts online with full syncing!
The Hiptop was popular and "quality of the integration" was better than anything Apple had produced right up until the iPhone. However, by 2007 the data networks had gotten fast enough that the Hiptop's web browser was feeling a bit antiquated, and navigating web pages with a scroll wheel was cumbersome. The capacitive touchscreen was a major leapfrog, and coming up with all the UI behaviors to leverage that was a major feat - Apple deserves a lot of credit! But maybe not quite as much credit as the OP suggests.
I knew a bunch of deaf kids around that time and they all were enthusiastic Sidekick users.
What could I do with the 2003 Internet?
Download J2ME applications from Swisscom and Sunrise, including some crufty map applications based on cell position.
Or maybe because it never successfully left US and reached the rest of the world.
It was out well before the iPhone, sold internationally, and had installable 3rd-party apps (proper apps, not just the "web" apps the 1st-gen iPhone had).
Apple fanatics always seem to think Apple created the first smartphone as we know it.
"Symbian OS platform security model"
While the USENIX paper is from 2010, the 3rd edition was released in 2005, 2 years ahead of the iPhone.
And the now super fashionable Widgets were already a thing on Symbian, back in 2010.
So while we do occasionally get nice things, they're paid for by "investor story time" instead of the people who use them, so they will go away. Either the companies go broke, or they're swallowed up by a behemoth who may steward it well for a while, but will nonetheless eventually want their billion back.
However, I have to disagree with the iPhone analogy. The consumer mobile phone market doesn't have any similarities with the developer tooling and software platform market. For one, Apple has always preferred to set trends and _tell_ people how to use their devices rather than provide openness and flexibility. Developers thrive on flexibility - shepherding them to the one true way of doing things, even if it provides a better experience, would unleash the inner forces and wrath which made open-source a thing. Also - the problem of "listening to music while taking a call" has vastly reduced complexity compared to anything done by developers. I don't think you can apply the same learnings. Each market and each customer segment require completely different business models and the iPhone bundling and "do few things right" strategy just isn't right for such a vast ecosystem of developers who are all doing so many different things. Yes, they are all deploying code to production. That's not a market. And to see the differences, compare the SDLC at a regulated bank to that of a 50-person startup with proper automation - you'll find they are light years apart. Not because the banks suck as devs, but because they are in a different market.
My perspective is strongly influenced by system dynamics: money and time in -> experience out. If we look at the whole space and then seed it with lots of money and huge time pressure, we get condensation around those kernels. These will be island solutions, and they will be very good, as lots of money was available. But no money can buy enough calendar time to coordinate over distance. In the good old times (TM) we had larger-scale solutions, but those also took years to develop and the outcome was often less than perfect. At the moment we have a more component-oriented architecture (and as the components were built in semi-isolation, integration is lowest common denominator).
Where does this lead us? Innovation at the semiconductor layer slows dramatically as the financial cost of the next generation explodes; speed-up has stalled and density improvements have slowed. This gives us more time at the software level. The fragmented software landscape makes it harder to do first-one-takes-it-all. Larger consolidated integrations may again be doable as more calendar time is available.
1) Comprehensive. Maven provides comprehensive coverage from archetypes (project starter templates) to dependency management, build, version archiving - everything. It is written in Java as well.
2) Developer native. Eclipse is all-encompassing. It provides deep integration with everything via plugins and a great out-of-the-box experience with different default plugin packages. There are Eclipse distributions for developers of Java, JEE, Rust, C++, and Python. There are even third-party distributions of it, like NVidia's Nsight plugins for CUDA development.
3) Elegant. This is very subjective, but the Java API was elegant enough that Google decided to steal it for Android and be sued for billions of dollars.
4) Multi-Runtime. "Write once, run anywhere"
5) Multi-Vendor. OpenJDK is GPLv2+CE. There are numerous vendors who provide Java. Amazon Corretto, IBM OpenJ9, AdoptOpenJDK, most linux distros have their own package repo distributions of one or more of these as well.
With the added bonus that Java has memory safety, has been a dominant language for the past two decades, and stays current by adding trending features like functional programming syntax.
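To give a feel for point 1, here is a minimal pom.xml - one declarative file covering project coordinates, dependencies, and build. The coordinates below are placeholders, not from any real project:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- placeholder coordinates -->
  <groupId>com.example</groupId>
  <artifactId>demo-app</artifactId>
  <version>1.0.0</version>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```

With just this file, `mvn package` resolves dependencies from the central repository, compiles, runs tests, and produces a versioned artifact - the "comprehensive coverage" being claimed.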
2. Eclipse is your suggestion here??!! I could never imagine willingly using Eclipse again. Heck, years ago I was willing to pay hundreds of dollars a year for Jetbrains products specifically to avoid Eclipse.
3. Google's decision to use Java for Android had nothing to do with the "elegance" (or lack thereof) of the language, and everything to do with tapping into a huge community of existing developers and libraries.
The fact that you can do the same thing in your tool of choice, IntelliJ, further refutes the author's point 2. You and I arguing about which we prefer is beside the point being made by the author.
Elegance is subjective. The fact that there is a "huge community of existing developers" means Java is objectively more approachable to more people. Yet another way to look at it as elegant.
I don't smoke. You've reinforced my points in favor of Java.
Sigh. There were other such phones before the iPhone. But I guess if we repeat it long enough, someday we will start believing it.
On paper the Nokia N70 I was using when the iPhone came out could do everything the iPhone did and more. But it was essentially unusable by comparison. All three aspects of the iPhone were dramatically superior to prior devices.
* Self updating code pipelines? Yep: https://aws.amazon.com/blogs/developer/cdk-pipelines-continu...
* Cross-cloud compatibility?
YepYepYep (CDK for Azure): https://www.youtube.com/watch?v=0q89VbEA9I4
Certain developer experiences are drastically improving. The frontend (as fast as it moves) now has the likes of Netlify and Vercel. It's about saying: we just need to focus on the frontend, so give me the solution for that. I think backend is the same, but it also highlights that there's no de facto standard for backend development in the cloud. When that's finally realised we'll see solutions that cut away all the non-essentials. Someone mentioned Heroku and one-click spin-up of dependencies. Great, be everything to everyone - that's why Heroku is only barely successful. Anything that was ever compelling was incredibly narrowly focused, opinionated, and resonated heavily with devs.
My take: Go-based microservices written using gRPC and consumed as HTTP/JSON APIs. A platform that offers access to the underlying primitives for building those types of services, and a complete lifecycle for build, run, and manage. Look at Twilio, Stripe, Segment, Sendgrid. These are the companies of the future, built in the cloud. For the backend, it's simply about building APIs and having platforms that enable that. Which means you need auth, config, storage, pubsub, etc.
Here's my piece https://github.com/micro/micro/blob/1166d15eff2015e32a9ed793...
Here's my effort https://m3o.com
It's great and it's very successful. The only reason Heroku is not more widespread is because they're charging $50 per unit (roughly a CPU or a GB of memory), which makes it prohibitively expensive for most workloads.
I assume there exist lots of good solutions and technologies for many problems, they just aren't mainstream. Many times the barriers for adoption are not technological.
Pragmatically, companies form opinions which differ very much from the "developers need choice" mantra of the industry itself. Choice is the enemy of productivity. Google maintained a single platform called Borg that ran self-contained binaries and required you to use the Google-3: C++, Java, or Python. Internally the majority of systems looked the same, and everything was consumed via RPC APIs using Stubby, which is now open source as gRPC.
Explicit decisions and doing less makes us all more productive. The iPhone did that for mobile. Cloud is waiting for an industry standard.
So much romanticization, but lest we forget, it also had no copy/paste. What's more important? What's the better UX?
I was the Apple Forums technical administrator at the time the iPhone 1 came out. All the hardware competitors said in print that Apple couldn't make a phone. I was pleasantly surprised at how very, very few bugs/problems were reported on the forums.
Yeah, but no copy and paste initially.
(The forum databases ran on 3 PowerPC minis running MySQL with statement-based replication. Those were later upgraded to Intel minis. We didn't use any endian- or ISA-specific column types like float, so the upgrade was seamless.)