
Show HN: HyScale – An abstraction framework over Kubernetes - hyscale
https://github.com/hyscale/hyscale
======
kube-system
Wrappers that aim to simplify another technology always make me nervous,
especially with a rapidly evolving project.

Yes, this makes it easier to get started -- but when something goes wrong, now
you have to hunt down bugs in two layers of software. And, since you've
intentionally isolated yourself from the underlying layer, you have less
experience with it!

This is why I like Helm. If you write your charts well, you can write your k8s
yaml once, and do the things you need to do on a daily basis by adjusting your
chart values.
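
For example, a chart might expose something like this (a simplified sketch, names illustrative), and then a routine change is just `helm upgrade myapp ./chart -f values-prod.yaml`:

    # templates/deployment.yaml (excerpt)
    spec:
      replicas: {{ .Values.replicaCount }}
      template:
        spec:
          containers:
            - name: app
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

    # values-prod.yaml -- the only file touched day-to-day
    replicaCount: 4
    image:
      repository: registry.example.com/myapp
      tag: "1.8.2"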

~~~
hyscale
Your concern is justified. Any abstraction has to work at minimizing leakage.
This is why we have started addressing deployment-error troubleshooting with
some level of diagnosis, so that the tool can report errors in terms of the
higher-level abstraction.

Helm does require understanding of all the underlying low-level objects
defined in Kubernetes. HyScale hopes to provide higher-level entities to deal
with, as well as providing ways to do higher-level ops & deployment
introspection. We believe it should be possible to satisfy the needs of a
reasonably large number (>80%) of apps.

~~~
Fiahil
> HyScale hopes to provide higher-level entities to deal with, as well as
> providing ways to do higher-level ops & deployment introspection.

That's exactly why the operator pattern exists!

Kubernetes stands out from other container orchestration tools for one reason
only: it's portable. I can apply my templates to any cluster (empty or not)
and my services will be running with the same topology. I can make them work
with an existing installation if I need to. To this end, I need to be able to
inspect, modify and understand how everything fits together. An abstraction
layer will always complicate things.

As said before, Helm is great because it doesn't hide Kubernetes' insides.

~~~
xiwenc
The first thing that ran through my head when I saw the example of hyscale
was: why isn't this implemented as a CRD? It looks to me like this is a
higher-level abstraction on top of the Deployment resource with more bells and
whistles.
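
Something along these lines, purely hypothetical (the group/kind names are made up), with a controller in the cluster expanding it into the usual Deployment/Service/Ingress:

    # hypothetical CRD-based equivalent
    apiVersion: hyscale.example.io/v1alpha1
    kind: AppService
    metadata:
      name: myservice
    spec:
      image: registry.example.com/myservice:1.0
      replicas: 2
      ports:
        - 8080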

~~~
hyscale
A CRD is a good idea and something to consider for this project, but it does
require pre-installing something into the cluster before deploying apps. One
fundamental difference with HyScale as it currently stands is that it can
deploy to any cluster anywhere from any machine, and it doesn't require
storing any state or installing anything extra in the cluster. There are pros
and cons to both approaches.

HyScale generates standard K8s manifest yamls from hspec, just as you would
write by hand, so in that sense it is portable as well and doesn't hide the
underlying yamls if you really want to see them. Our experience shows that for
many apps one doesn't really need to dig down below. Especially in large
teams, you may still want one or two people who understand the underlying
mechanism (just as Angular developers may understand underlying JS
mechanisms), but everyone else can simply move faster and more easily.

------
hyscale
Kubernetes complexities are widely acknowledged. In our team, we experienced
them first-hand while migrating a large PaaS application onto Kubernetes about
two years ago. This prompted us to seek out a way to simplify and speed up app
deployments to Kubernetes. This pursuit eventually led us to believe that
Kubernetes complexities deserve an abstraction, in much the same way as jQuery
over JavaScript, or Spring over Servlets. The HyScale project was born out of
this, to create an app-centric abstraction that is closer to the everyday
language of developers and devops.

With HyScale, a simple & short service-descriptor allows developers to easily
deploy their apps to K8s without having to write or maintain a voluminous
amount of K8s manifest YAMLs. This enables self-service deployments as well as
development on K8s with minimal IT/DevOps intervention.

We would love to get feedback, ideas and contributions.

~~~
soamv
Congrats on this launch, it's great to see serious efforts at Kubernetes
simplification and I hope you succeed! K8s bills itself as a "platform for
platforms", and such projects are a real test of that idea.

Questions:

1. How do you deal with the mutability of K8s resources? Do you assume people
won't change the underlying resources that you generate, or do you keep k8s
controllers running to ensure there's no deviation?

2. How understandable is your generated output to humans? Do users have a way
to go "backwards" through your abstraction? (Since your tool lives in an
ecosystem with many other Kubernetes tools that operate at the K8s level, such
as Prometheus, log aggregators, etc., your users will sometimes end up having
to deal with the generated output.)

3. Do you interoperate with K8s-level resources well? Do _all_ my services
need to be in this abstraction for this to work well? e.g. Can my hyscale yaml
reference a non-hyscale service for any reason? Or are they essentially two
separate worlds?

~~~
hyscale
Thanks for your encouraging comments :)

Answers to your Questions:

1. The expectation is that people deploying through hyscale don't want to go
and modify K8s resources directly on the cluster behind the scenes. If they
do, then it either deviates from the hspec files in git, or they have parallel
K8s yamls, which won't make much sense either. As of now, hyscale deploys and
doesn't keep anything (e.g. a controller) running, but it's a good suggestion
to keep the K8s resources in sync with the desired state (hspec) and we'll
certainly put some thought into it.

2. HyScale outputs standard K8s manifest yamls, which are as readable as any
well-written hand-coded ones. On the other hand, if there were a way to
generate hspec files back from existing K8s yamls, would that be
interesting/useful?

3. If you have K8s resources/snippets already written for some of your
services, you can refer to them in hspec, add application context, and make
use of profiles for environment variations. If the question is more about
off-the-shelf services (e.g. postgres, redis, etc.), we have helm chart
support coming up:
[https://github.com/hyscale/hyscale/wiki/Roadmap#2-ots-off-the-shelf-services-support](https://github.com/hyscale/hyscale/wiki/Roadmap#2-ots-off-the-shelf-services-support)

~~~
soamv
Thanks for your answers.

> 2

I don't mean literally generating hspec from yaml -- I mean more like
operational "source map" support. For example if I look at a pod taking up
more CPU than I expect in a dashboard somewhere, I should be able to trace it
back to the hspec file.

------
greentimer
Kubernetes was released only 6 years ago so I'd imagine there is still a lot
of legitimate evolution left in the ecosystem. I'd have to compliment you for
choosing a project like this rather than something that had no chance of
working because the ecosystems are completely set, like a new programming
language. I believe there will be a distributional challenge for you in
getting people to use this software. You can't pay for an advertising
campaign. Maybe the most you can do is post on HN, but after that, people will
forget about it. The fact that once it's used once in a GitHub project others
will be forced to use it provides some hope. You say you want to be like
JQuery over javascript. It may be worth it to you to figure out how JQuery
solved their distributional challenge. Just as nobody needs to use JQuery,
nobody will need to use your software, and there will be a strong temptation
for people to bypass it and just use raw Kubernetes.

It is amazing the complexity of modern software projects like Kubernetes and
I'd agree they have challenges in creating a simple interface that everyone
will like while still getting the software to work consistently. According to
the principle of radical skepticism it's amazing that anything so complex
works at all.

~~~
aantix
OP - it can be done.

Reach out to the CTOs and VPs of Engineering that list Kubernetes as one of
their core technologies. They're most apt to choose K8s for their own team.

Ask them if they've had any issues with Kubernetes, specifically
misconfiguration or slow turnaround times for configuration changes.

Explain your framework in one or two lines. Pick out one or two _specific_,
common problems with K8s and ask them "Are you experiencing X? How about Y?"
Talk to them like you already know and feel their pain. Because you do (you
wouldn't have created this framework otherwise).

You'll learn a lot. And maybe get adoption, and maybe a consulting gig, out of
it. :)

Use the advanced search on LinkedIn to find these people. Make sure your
LinkedIn title has something to do with being a Kubernetes expert.

If you're in a big city, find those clients that are local first, as you can
visit them in person (that goes a long way).

e.g. Senior DevOps Consultant, Specializing in Kubernetes/HyScale.

Here's the people search you need. Use Hunter.io to find their emails.

[https://www.linkedin.com/search/results/people/?facetGeoUrn=...](https://www.linkedin.com/search/results/people/?facetGeoUrn=%5B%22103644278%22%5D&facetNetwork=%5B%22F%22%2C%22S%22%5D&keywords=kubernetes&origin=FACETED_SEARCH&title=CTO)

Client outreach can be successful if it's specific and serving a genuine need.

------
freedomben
_Disclaimer: I work at Red Hat with OpenShift_

This is an area where I think OpenShift really adds value over "raw
Kubernetes." With OpenShift you can treat it a bit like a flexible Heroku with
`oc new-app`,
which can use s2i or your provided Dockerfile and will generate the foundation
of what you need. You can then iterate on it if you need something beyond the
standard setup.

By the way OKD 4 (the freely available upstream version of OpenShift) is now
generally available: [https://www.openshift.com/blog/okd4-is-now-generally-
availab...](https://www.openshift.com/blog/okd4-is-now-generally-available)

~~~
gattacamovie
okd4 came ~1yr after ocp4 was there. Can it be trusted it won't have such
delays in the future? What about security fixes and features, will they always
lag behind as an incentive to get the paid version? In k8s, even if you
maintain it yourself, there is a huge community and you can always get your
fixes. How does the okd community compare? okd was on k8s 1.11 until a few
weeks ago, 8 releases behind! Imagine the security issues okd had for such a
long period... (ps: even centos seems to be lagging behind badly, centos 7.7
took many months after rhel 7.7).

as for ocp/okd tools like s2i, ansible for replacing helm, routes, deployment
configs, etc -> they never took off, the community did not agree. Those that
did not take care to stay away from stuff that is not pure k8s suffer from
being disconnected from the rest of the industry and have to invest to redo
everything... not to mention the impossibility of switching to cloud
providers' solutions like eks, aks, pks, etc...

~~~
freedomben
Thanks for your comment. I'll take it a bit at a time:

> _okd4 came ~1yr after ocp4 was there. Can it be trusted it won't have such
> delays in the future?_

This is a fair criticism and a great question. I was also really frustrated at
this delay, although there was a pretty good technical reason for it.
OpenShift was based on Red Hat CoreOS, which until recently didn't have an
upstream (Fedora CoreOS now fills this void). With the acquisition of CoreOS,
RH engineers saw an opportunity to totally rethink the Node portion of
OpenShift. CoreOS allows you to treat the whole OS as immutable like
containers, which makes for some fascinating possibilities. This became a hard
requirement for master nodes for a few reasons, one of which is that the Node
itself is totally managed by an operator[1]. With RHEL being a subscription
product, this was a problem for OKD users (no host OS!). I do think RH
deserves criticism for not prioritizing the community highly enough, but I can
assure you it wasn't malice. Also, because of this I don't worry about
releases falling behind in the future, since Fedora CoreOS is generally
available now.

[1]: [https://github.com/openshift/machine-config-
operator](https://github.com/openshift/machine-config-operator)

> _as for ocp/okd tools like s2i, ansible for replacing helm, routes,
> deployment configs, etc -> they never took off, the community did not
> agree._

This is untrue. DeploymentConfigs _did_ take off. In fact, modern Deployments
in K8s are the result of the community agreeing and integrating them into
upstream K8s. There are minor differences that were made to balance priorities
(primarily CAP theorem considerations, Consistency v Availability[2]), but the
two are remarkably similar. The child resources of each (ReplicaSet and
ReplicationController) are also super similar.
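
Side by side they are almost the same shape (a trimmed-down sketch from memory; check the docs for the exact fields):

    # Kubernetes Deployment (excerpt)
    apiVersion: apps/v1
    kind: Deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
            - name: frontend
              image: frontend:v1

    # OpenShift DeploymentConfig (excerpt)
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    spec:
      replicas: 3
      selector:
        app: frontend          # plain map instead of matchLabels
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
            - name: frontend
              image: frontend:v1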

Regarding Ansible, that works just fine on K8s, and likewise Helm works fine
on OCP. OpenShift is not a custom mangled version of K8s - it is K8s, with
some custom resources slapped on top. It's true that OCP Routes don't work on
plain K8s, but to say that it never took off because the community did not
agree is not fair. The modern Ingress of K8s took a lot of inspiration from
Routes. Red Hat is the number 2 contributor to K8s (behind Google) and
constantly pushes code upstream whenever possible.

[2]: _" DeploymentConfigs prefer consistency, whereas Deployments take
availability over consistency. For DeploymentConfigs, if a node running a
deployer Pod goes down, it will not get replaced. The process waits until the
node comes back online or is manually deleted."_ See:
[https://docs.openshift.com/container-
platform/4.1/applicatio...](https://docs.openshift.com/container-
platform/4.1/applications/deployments/what-deployments-
are.html#:~:text=DeploymentConfigs%20prefer%20consistency%2C%20whereas%20Deployments,online%20or%20is%20manually%20deleted).

> _Those that did not take care to stay away from stuff that is not pure k8s
> suffer from being disconnected from the rest of the industry and have to
> invest to redo everything... not to mention the impossibility of switching
> to cloud providers' solutions like eks, aks, pks, etc_

I think "redo everything" is pretty unfair and borders on FUD. I've only known
one person that moved from OCP to EKS, and the only thing they had to do was
change their Route to an AWS Ingress (which is similarly non-portable I might
add. It only works on AWS). If you use ImageStreams and such then yeah, you'll
have to move those, but it's not very hard. Migrating from OpenShift to "other
K8s distro" is really not that bad.

I would also point out that OpenShift/OKD can run nearly anywhere as well, so
if you move from AWS to Azure, or bare metal or anything else, you don't
necessarily have to abandon OpenShift. There's really no such thing as "just
k8s" anyway. If you use any of the custom cloud provider stuff (like the AWS
Ingress) then you're not portable without modifications either (and in some
cases significant modifications if you have a highly customized ALB for
example). If you care about portability, I think OpenShift/OKD is still a
decent solution.

------
whalesalad
IMHO if you need a tool like this, you are normally better off building it
yourself in-house. You will inevitably end up fighting all the leaky
abstractions wherever something like this doesn't support your use cases.

~~~
q3k
This, a hundred times. Do yourself a favour and use Dhall/Cue/Jsonnet to
develop some abstractions that fit your workload and environment. There is not
much value proposition in a tool like this if you can use a slightly
lower-level, more generic tool (a configuration-centric language that is
actually a full-fledged programming language) to accomplish the same goal in a
more flexible and more powerful fashion, one that leaves you room for
evolution and unforeseen structural changes.

The idea of tools mandating what 'environments' are is absurd, as it's pretty
much always different for everyone (and that's good!).

~~~
retzkek
I've been enjoying using Tanka [1], a command-line tool from the Grafana team
for managing k8s configurations, which you define using jsonnet. Complete
flexibility, with minimal boilerplate, is possible by using the older
(unfortunately unmaintained) ksonnet library [2], the upcoming
jsonnet-libs/k8s(-alpha) (which we're using in production) [3], or rolling
your own, abstracting to whatever level you find best.

[1] [https://tanka.dev/](https://tanka.dev/)

[2] [https://github.com/ksonnet/ksonnet-
lib](https://github.com/ksonnet/ksonnet-lib)

[3] [https://jsonnet-libs.github.io/k8s-alpha/](https://jsonnet-
libs.github.io/k8s-alpha/)

~~~
q3k
I've been using kubecfg [1] with kube.libsonnet [2]. I don't like Tanka as it
imposes a given directory structure on me via scaffolding - which is a big no-
no for the way I organize projects (I value very unopinionated tools in this
regard). I also couldn't get into the ksonnet style of mixins/arguments, as it
takes away the ease of overriding underlying Kubernetes structures.

[1] - [https://github.com/bitnami/kubecfg](https://github.com/bitnami/kubecfg)

[2] - [https://github.com/bitnami-labs/kube-
libsonnet](https://github.com/bitnami-labs/kube-libsonnet)

------
sandGorgon
Are you using the Docker Compose standard? Your specification looks very
familiar.

It would be a killer app if you are.

[https://www.compose-spec.io/](https://www.compose-spec.io/)
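
For reference, this is the shape I mean (a minimal Compose file):

    version: "3.8"
    services:
      web:
        image: myorg/web:1.2
        ports:
          - "8080:80"
        environment:
          - DB_HOST=db
        volumes:
          - web-data:/var/lib/web
    volumes:
      web-data: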

~~~
verdverm
The examples look like they are not. Yet another YAML-based spec.

~~~
yipbub
YAY BS?

------
WolfOliver
Cloud Foundry is doing the same thing and has been on the market even longer
than Kubernetes. They started with their own container orchestration layer and
switched over to Kubernetes. See:

- [https://specify.io/systems/cloudfoundry/features-and-usecases](https://specify.io/systems/cloudfoundry/features-and-usecases)

- [https://www.cloudfoundry.org/](https://www.cloudfoundry.org/)

It's also backed by some serious enterprises:

- [https://www.cloudfoundry.org/memberprofiles/](https://www.cloudfoundry.org/memberprofiles/)

------
ForHackernews
Can the author talk about how this compares to other similar-sounding projects
like Rancher's Rio[0] or Cloud Foundry[1]?

[0] [https://rancher.com/blog/2019/introducing-
rio](https://rancher.com/blog/2019/introducing-rio)

[1]
[https://www.cloudfoundry.org/kubecf/](https://www.cloudfoundry.org/kubecf/)

------
hajhatten
Just what people running kubernetes need, more yaml!

~~~
hyscale
:-) Actually, less yaml. Typically, without an abstraction like HyScale, for a
microservice you might end up having to write and maintain a couple of hundred
lines of K8s yaml, including things like sidecars, ingress, PVCs, config-maps,
etc., whereas the same service can be described in a hyscale spec using barely
20-30 lines of yaml consisting of higher-level entities and language that is
intuitive to most developers. You also get simpler troubleshooting of
deployment errors and worry less about having to deal with backward
compatibility with each new K8s version.

------
alexfromapex
This is a good idea but with simplicity being the value proposition it looks
like you have spec files that are roughly the same length as a yaml file I
could deploy with k8s. I think it would need to be much much simpler to be
more valuable, just my take.

~~~
hyscale
Typically, without an abstraction like HyScale, for a microservice you might
end up having to write and maintain a couple of hundred lines of K8s yaml,
including things like sidecars, ingress, PVCs, config-maps, etc., and linking
up all these yamls with the right selector-labels.
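
Just the label plumbing alone looks like this in raw K8s (simplified excerpt), and it has to stay consistent across every file:

    # deployment.yaml (excerpt)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myservice
    spec:
      selector:
        matchLabels:
          app: myservice
      template:
        metadata:
          labels:
            app: myservice      # must match the selector above...
        spec:
          containers:
            - name: myservice
              image: myservice:1.0

    # service.yaml (excerpt)
    apiVersion: v1
    kind: Service
    metadata:
      name: myservice
    spec:
      selector:
        app: myservice          # ...and this Service selector too
      ports:
        - port: 80
          targetPort: 8080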

The same service can be described in a hyscale spec using a few dozen lines.
But it's not just about the number of lines: the HyScale hspec is defined in
terms of higher-level, app-centric entities that are intuitive to most
developers.

You also get simpler troubleshooting of deployment errors and worry less about
having to maintain compatibility of your K8s manifest yamls with each new K8s
version.

------
koeng
I really like git push workflows (Heroku / Dokku). I would use
[https://github.com/dokku/dokku-scheduler-
kubernetes](https://github.com/dokku/dokku-scheduler-kubernetes) , but it
doesn't support Dockerfiles, and I need Dockerfiles for a few of the
applications that I want to run.

It would be great if there were some docs on how to integrate Hyscale with,
for example, a GitHub Action to enable deployment on a push to master. It
wouldn't be too difficult for me to set up, but having a "right way" to do it
written by the maintainers would give me much more confidence.
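
Something like this is roughly what I'd expect to end up with (my own rough sketch, not an official recipe; the secret name, spec filename and hyscale flags are guesses on my part):

    # .github/workflows/deploy.yml -- sketch only
    name: deploy
    on:
      push:
        branches: [master]
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - name: Configure cluster access
            run: |
              mkdir -p $HOME/.kube
              echo "${{ secrets.KUBECONFIG }}" > $HOME/.kube/config
          - name: Deploy with hyscale
            # assumes the hyscale CLI is available on the runner;
            # flags are from memory, check the hyscale docs
            run: hyscale deploy service -f myservice.hspec -ns staging -a myapp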

~~~
ForHackernews
Does Cloud Foundry/Cloud Native Buildpacks enable the Heroku workflow on k8s?

[https://devstack.in/2020/01/03/introduction-to-cloud-
native-...](https://devstack.in/2020/01/03/introduction-to-cloud-native-
buildpacks-with-kubernetes/)

[https://www.cloudfoundry.org/kubecf/](https://www.cloudfoundry.org/kubecf/)

~~~
jacques_chester
Yes.

Disclosure: I work for VMware.

~~~
koeng
How so? Do you have some examples?

~~~
jacques_chester
Broadly, once you've installed CF on Kubernetes, the process is:

    cf push

The code gets uploaded, buildpacks chew on it thoughtfully, out pops a running
process with route, log capture and service binding.

I've worked on and around Cloud Foundry for about 5 years now. It's still a
much easier and more complete developer experience than anything I've so far
touched in Kubernetes land.

------
dastx
Maybe just me, but I've never thought of kubernetes as being complex for
users. To me the complex bit of K8s isn't deploying to it, but administration
of it. Figuring out the architecture, debugging some silly thing not working
etc.

~~~
lazyant
Absolutely agree. There's a ton of literature on how to get started etc but
not a lot on "day 2", the "now what" experience in production. (Friend working
on it: [https://day2kube.com/](https://day2kube.com/) )

------
afterwalk
A lot of attempts at "simplifying k8s" seem to be writing some type of
"coffee-script" that generates the underlying yaml. What I (think I) want is
just a good UI built for essential workflows.

~~~
meowface
There are a lot of advantages to defining these things in some form of text-
based configuration files. You get to track everything in source control, you
don't need to worry about upgrades somehow breaking the UI or database, you
can quickly and easily check any deployment config anywhere if needed, you can
diff changes in a sensible way.

Adding a web UI option in addition isn't a bad idea at all, but I like the
canonical form being config files, be it YAML or HCL or some CoffeeScript-type
thing.

------
bassman9000
_Spring over servlets_

This is not actually true. Servlets were never complicated. The documentation,
especially the Javadoc, is terrific. The I/O and APIs are pretty simple. The
problem is that there's a lot of boilerplate, and you ended up with a lot of
copy-paste. But repetitive doesn't mean complicated. Spring solved many of
these issues by providing the boilerplate.

k8s is actually complicated. Both conceptually, and in implementation.

I don't think it's an apt comparison.

------
k__
Awesome!

I did a deep dive into K8s in the last two weeks (usually doing serverless)
and I think it really needs projects like HyScale to step up its game.

Even with EKS+Fargate, which removes master and worker
provisioning/maintenance, K8s is still orders of magnitude behind serverless
solutions in terms of dev experience.

------
justsomeuser
So this is two components:

- 1. Compiles high-level config into lower-level Kubernetes config.

- 2. Sends/runs that config on a Kubernetes cluster.

~~~
vii
If it were just that, then overall the project would be a dangerous trap, as
there is a big cost in added complexity from the new high-level configuration
language, with its limitations and its own terminology (volumes, etc.).

Adding a wrapper, and then eventually forcing you to learn all the
abstractions that leak through, creates an attractive nuisance. The Hyscale
project is at least trying to overcome this problem. Not sure how well they
succeed.

Along with the high-level config, it attempts to help untangle common K8s
debugging steps, which normally require using multiple tools to determine what
caused an error condition like CrashLoopBackOff - see
[https://github.com/hyscale/hyscale/wiki/App-centric-
Troubles...](https://github.com/hyscale/hyscale/wiki/App-centric-
Troubleshooting)

Looking at the code, they painfully enumerate different Docker and K8s options
in Java, so it will be expensive to maintain and keep up to date - but the
host company Pramati may have the resources to do this, and that's exciting!

------
filleokus
How (and how well) does the docker file generation work? What
languages/framework do you support?

~~~
hyscale
You simply specify the stack image that your service needs along with the
service source/binaries. See here for more:
[https://github.com/hyscale/hyscale/wiki/Tutorial#using-
hysca...](https://github.com/hyscale/hyscale/wiki/Tutorial#using-hyscale-to-
build-your-image-optional)

------
kylegalbraith
Looks super interesting. I'm curious how this can handle conditional logic
like Helm supports?
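
e.g. the kind of thing I mean, a standard Helm template conditional (chart values illustrative):

    # templates/ingress.yaml (excerpt)
    {{- if .Values.ingress.enabled }}
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: {{ .Release.Name }}
    spec:
      rules:
        - host: {{ .Values.ingress.host }}
    {{- end }}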

~~~
hyscale
I wonder what kind of conditional logic you would want to apply. If it's more
for managing differences between environments (say, staging vs prod), you
could just use the profiles option in HyScale:
[https://github.com/hyscale/hyscale/wiki/Tutorial#specifying-...](https://github.com/hyscale/hyscale/wiki/Tutorial#specifying-
over-rides-for-different-environments)

------
exdsq
Can’t wait for the abstraction over HyScale. It’s abstractions all the way
down.

------
ramon
What about mesh setups? Does HyScale support meshing?

~~~
hyscale
You can deploy sidecar agents using HyScale for your mesh. We're looking at
further abstracting out things like VirtualService, and we're also watching
SMI-related developments. If there are any specific mesh use-cases that you'd
like to see abstracted out from a service deployment perspective, do let us
know on our GitHub page.
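
In plain K8s terms a sidecar is just an extra container in the pod template, e.g. (illustrative snippet, not actual generated output):

    spec:
      containers:
        - name: myservice
          image: myservice:1.0
          ports:
            - containerPort: 8080
        - name: mesh-proxy               # mesh sidecar alongside the app
          image: envoyproxy/envoy:v1.14.1
          ports:
            - containerPort: 15001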

~~~
ramon
Yes, the idea would be to use something like "Kong" or "Weave Net" as the
"orchestrator" and then bundle it up in an easy fashion. I'm usually more
inclined toward lower-performance-requirement scenarios, so we don't always
need Kubernetes. I have a redundant PM2 setup that I can run on low-end VPSes,
for example. The overhead of Kubernetes is just too much and the costs go
through the roof; it's becoming too bloated, so focusing on something minimal
like K3s is a must, and people looking for a mesh are certainly the ones who
don't want to rely on any cloud provider's orchestrator; they want to be able
to spin their machines up anywhere themselves. That's the beauty of the mesh.

I was able to set up Weave Net on low-end machines, but it takes up a lot of
resources, so it's not trivial. K3s is also possible on low-end machines but
lacks a lot of stuff, so for now I'm sticking with PM2 on low-end machines and
K3s on bigger ones.

------
cheez
Not an expert. Comparison to terraform?

~~~
hyscale
Terraform takes a more infrastructure-centric approach, and its strength is
typically in provisioning infrastructure, such as putting up a K8s cluster,
rather than in deploying apps (although you can). HyScale, on the other hand,
is aimed at defining higher-level entities in a language that developers
understand intuitively, without any extra learning curve.

HyScale also aims to reduce the amount of abstraction leakage by dealing with
aspects of troubleshooting and runtime operations as well. So this
application-centric abstraction enables self-service deployments as well as
development on K8s with minimal IT/DevOps intervention.

For more, see: [https://github.com/hyscale/hyscale/wiki/App-centric-
Abstract...](https://github.com/hyscale/hyscale/wiki/App-centric-Abstraction)

~~~
verdverm
Terraform is moving into the k8s space FYI. So you can use HCL instead of
Yaml.

[https://www.hashicorp.com/blog/deploy-any-resource-with-
the-...](https://www.hashicorp.com/blog/deploy-any-resource-with-the-new-
kubernetes-provider-for-hashicorp-terraform/)

~~~
sriku
From what I understand, Terraform's k8s provider doesn't abstract k8s details.
You still have to deal with the same complexity.

(Disclosure: I work for Pramati - the parent company behind Hyscale)

~~~
verdverm
You can abstract using TF capabilities, they just aren't hidden like HyScale.
HCL is widely used, people already use the TF tool, so you end up not having
to add more deps and yet another DSL to learn.

------
dang
Url changed from [https://hyscale.github.io](https://hyscale.github.io), which
redirects to this.

~~~
hyscale
You can access the project here
[https://github.com/hyscale/hyscale#readme](https://github.com/hyscale/hyscale#readme)
or at [https://hyscale.github.io](https://hyscale.github.io)

