
Type-safe GraphQL with OCaml - cuvius
https://andreas.github.io/2017/11/29/type-safe-graphql-with-ocaml-part-1/
======
kcorbitt
If ReasonML is able to form a real community, I have high hopes for its long-
term prospects. Such an enjoyable language to use! I think their general
approach of bootstrapping a community by lowering impedance with the JS
ecosystem is a decent one.

In case anyone on the OCaml team is reading this though, there are two
language-level changes that I think could do wonders for wider adoption. The
first is modular implicits: it feels so kludgy to have to type `a + b` for
integers, `a +. b` for floats and `a ^ b` for string concatenation. I know it
sounds like a small thing, but it makes the language feel inelegant, and
aesthetics are important. The second is a good concurrency story, ideally one
that is compatible with JS-style async/await keywords for easy interop.

I know those are both being worked on; I just hope they become available in
time to coincide with the wave of interest in ReasonML!
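For readers who haven't seen OCaml's per-type operators, a minimal sketch of what the comment is describing:

```ocaml
(* OCaml has a distinct operator per type rather than overloading: *)
let ints = 1 + 2            (* int addition *)
let floats = 1.0 +. 2.0     (* float addition *)
let strings = "a" ^ "b"     (* string concatenation *)

(* Mixing int and float requires an explicit conversion: *)
let mixed = float_of_int 1 +. 2.0

let () = Printf.printf "%d %F %s %F\n" ints floats strings mixed
```

Modular implicits would let a single `+` resolve to the right implementation from the operand types, much like type classes do in Haskell.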

~~~
kbenson
> The first is modular implicits: it feels so kludgy to have to type `a + b`
> for integers, `a +. b` for floats and `a ^ b` for string concatenation. I
> know it sounds like a small thing, but it makes the language feel inelegant,
> and aesthetics are important.

That's interesting, as being forced to use the same operator for _addition_
and for _concatenation_ (and what's more with the result of mixed types if
allowed often dependent on the order of the parameters) has always seemed
extremely inelegant to me.

Addition and concatenation are not the same thing. Why they should share a
symbol when they don't share properties integral to their nature
(concatenation is non-commutative) is beyond me, and I think it's caused a lot
of bugs that didn't need to happen.

Not allowing the addition of floats and ints together is less of a fundamental
issue, but it also avoids quite a few problems. An alternative solution, if
you didn't want to rely on _implicit_ coercion, would be to _explicitly_
convert one operand to the appropriate type so they match.

Or you can go whole hog and provide an entirely separate set of operators for
different types, like Perl did (string concatenation is '.', and the
equivalent comparators are eq, ne, gt, lt, ge, le). That works well in Perl's
case, but that's mostly because for scalars Perl really just wants to know
whether you are treating them as numbers or strings, so there are only two
types to account for.

~~~
baddox
Couldn’t the type system still allow using the + operator, but requiring the
values on both sides to have the same type? You could do more magic like
having int + float return a float, but I’d even prefer typing “int.toFloat +
float” than “int +. float”.

~~~
masklinn
> Couldn’t the type system still allow using the + operator, but requiring the
> values on both sides to have the same type?

'course it could, multiple languages already do that, e.g. Rust
([https://play.rust-lang.org/?gist=530fa94d3f451e896ac0d3ddeaf66ab1&version=stable](https://play.rust-lang.org/?gist=530fa94d3f451e896ac0d3ddeaf66ab1&version=stable)),
Haskell
([https://repl.it/repls/FrozenBlondUmbrellabird](https://repl.it/repls/FrozenBlondUmbrellabird)),
Swift
([https://repl.it/repls/LuckyNaiveBream](https://repl.it/repls/LuckyNaiveBream))

------
aaron-lebo
I wish OCaml had the libraries and community of Go or Rust, I think it'd be
the most useful all-around language out there.

~~~
lilactown
I'm hoping the movement around ReasonML[0] could lead to both the web
community being able to reap the benefits of OCaml's fundamentals, while also
giving people a point of introduction to the OCaml community through web dev.

It has been my experience that the ReasonML community is incredibly welcoming
to newcomers, and is moving at a break-neck pace toward the right target:
being approachable for people who are familiar with JS, while also empowering
users to use all of OCaml's power to create web apps that are simpler and more
correct than they would otherwise be.

[0]: [https://reasonml.github.io/](https://reasonml.github.io/)

~~~
noncoml
I really don't understand why they had to invent a brand new syntax.

~~~
quicksnap
The short answer is that Reason's syntax is simpler and more enjoyable to use.
I simultaneously learned both OCaml and ReasonML syntaxes and find Reason to
be much easier to work with.

Perhaps people in Reason core will chime in with a more detailed answer.
Here's some official information:
[https://reasonml.github.io/guide/ocaml/](https://reasonml.github.io/guide/ocaml/)

~~~
noncoml
Yeah, I guess the complaint comes from a guy that was already familiar with
OCaml's syntax.

I guess if you don't know OCaml, maybe the Reason syntax is more appealing.

~~~
jordwest
I learnt Reason syntax first, then OCaml second. I prefer the OCaml syntax; it
seems simpler, cleaner, and doesn't hide the nice things about the language
behind JavaScript semantics.

The biggest example I can think of is that OCaml uses let..in, which reminds
you that every function is a statement. By contrast, the `;` in Reason hides
that fact.

That said, it was the Reason syntax that first brought me to the ecosystem,
and I think it’s going to be the catalyst that makes OCaml a big player in
front end.

~~~
jordwalke
> “The biggest example I can think of is that OCaml uses let..in, which
> reminds you that every function is a statement. By contrast, the `;` in
> Reason hides that fact.”

... Except in OCaml modules/files, where you don’t use “in”. OCaml still has
“semicolons” for let bindings - but they are just spelled as “in” or “let ()
=” depending on the context. All but one of these “semicolon” forms have
nothing to do with language semantics and are only used to help the parser
figure out how to group bindings and values. It doesn’t make OCaml’s core
semantics worse or better - though it might make it harder to learn initially
or to copy/paste code between different contexts.
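The context-dependence described here can be seen in a small sketch:

```ocaml
(* At the module (file) level, bindings need no terminator: *)
let x = 1
let y = x + 1

(* Inside an expression, every binding must be joined with `in`: *)
let sum =
  let a = 1 in
  let b = 2 in
  a + b

(* A unit-valued "statement" at the top level is spelled `let () = ...` *)
let () = print_int (sum + y)   (* prints 5 *)
```

Copying the body of `sum` to the top level, or a top-level binding into an expression, requires adding or removing each `in`; Reason's uniform `;` avoids that edit.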

The point of ; in Reason is merely consistency - to have exactly one way to
form bindings whether you are in a module or an expression, allowing you to
copy/paste lines between the two contexts. Compare that to OCaml, which
requires different syntax depending on the context, so copying values from
module bodies to expressions requires a lot of editing. There may be other
ways to achieve the same kind of consistency without semicolons, and we are
open to them - just so long as they are consistent across all contexts and
easy to learn. (And as long as they don’t have major foot guns like JS’s ASI.)

(Often people object to ; as a parsing separator because they think it means
“side effect”. It doesn’t. Not in Reason. Not in OCaml. OCaml features
ubiquitous use of semicolons for everything from arrays and lists to record
field separators - all things that have nothing to do with side effects. And
if you still haven’t had your fill, there’s the ;; double semicolon - which
actually does typically imply side effects.)

------
rawrmaan
If you're interested in typed APIs, also check out RESTyped:
[https://github.com/rawrmaan/restyped](https://github.com/rawrmaan/restyped)

It's an end-to-end way to type check REST API calls and responses using
TypeScript.

~~~
habitat_mike
We're using restyped over at habitat in production - makes API integrations a
breeze!

------
michaelsbradley
Would wiring up ocaml-graphql-server resolvers to OCanren[+] have any
interesting use cases?

[+]
[https://github.com/dboulytchev/OCanren](https://github.com/dboulytchev/OCanren)

------
z3t4
Having spent my entire career, 15+ years, in "weakly typed" (or whatever you
call it) languages such as BASIC, VBScript and JavaScript, I don't get this
type hype. In "unsafe" languages that have overflows I get that it _will_ help
make things less unsafe, but in high-level languages such as JavaScript, why
do you even need static (not sure I'm using the right vocabulary) typing!?
Such as HypeScript, err I mean TypeScript. Is there an entire field of
programmers out there that get bugs because they are mixing numbers and
strings!? I also did that a lot as a beginner in JavaScript, as the plus sign
is used for both concatenation and addition. I read a study here on HN a while
ago (that probably was sponsored by $M) that stated using TypeScript would
prevent most of the bugs found on GitHub, but of course there were no examples
of where type annotation would have helped. So what's with this type frenzy?

~~~
KirinDave
Dear z3t4,

I write this letter from the distant past, late in the month of November in
the distant year of 2017. From your lofty throne upon a future so bright, I
urge you to remember the sad lives we lead as a return code type mismatch
caused every Mac OS X machine running modern software to be accessible by
anyone with physical access by typing "root" into the login field then
hammering on the Enter key like a 9 year old.

Morale is high, because we know we cannot be sued for this act of gross
incompetence. We have decided to solve this problem by shaking our heads at
people who code in C and ignoring any similarities to our own toolkits.

Please remember us and our backwards ways.

Sincerely,

A time traveler from about 3 days ago.

~~~
tom_mellior
> a return code type mismatch caused every Mac OS X machine [...]

I don't doubt you, but I was just trying to read up on details of this and the
internet is so full of fluff pieces that I can't find anything technical real
quick. Would you have a link to a writeup of the bug behind this issue?

~~~
runeks
[https://www.theregister.co.uk/2017/11/29/apple_macos_high_sierra_root_bug_patch/](https://www.theregister.co.uk/2017/11/29/apple_macos_high_sierra_root_bug_patch/)

The relevant function returns an opaque integer to signal failure/success. If
this were typed properly — e.g. a type that represents either “Success” or
“Failure” explicitly (rather than implicitly via an int) — the bug would be
unlikely to happen, since it would require the function in question to
explicitly return “Success” when in fact it had failed (as opposed to the
opaque 0x01).
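The point can be sketched in OCaml with a variant type; the function and its values here are illustrative, not Apple's actual API:

```ocaml
(* An explicit result type: the compiler requires both cases to be handled,
   and there is no way to confuse a failure code with "success". *)
type verification = Success | Failure of int

(* Hypothetical check; a real one would verify a hash, of course. *)
let check_password supplied stored =
  if supplied = stored then Success else Failure 1

let () =
  match check_password "root" "hunter2" with
  | Success -> print_endline "access granted"
  | Failure code -> Printf.printf "access denied (code %d)\n" code
```

With an opaque int, returning `1` where the caller expects `0` (or vice versa) type-checks fine; with the variant, returning `Success` on a failed check has to be written out explicitly.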

~~~
btown
In this specific case, strongly typing the CryptVerificationResult as an enum
would have helped. But, generalizing the problem, if you had more complicated
criteria for whether that branch should be taken, it can be far too easy to
save an intermediate result into a (strongly typed) variable but never end up
using it in the correct way in the condition statement itself.

Rust goes part of the way here by ensuring that Result variables get checked:
[https://doc.rust-lang.org/std/result/#results-must-be-used](https://doc.rust-lang.org/std/result/#results-must-be-used),
which to my knowledge is not something you can mimic in C/C++. But you could
still forget a negation or use && instead of || somewhere, and have an
uncommon code path fail.

Code review reduces but doesn't eliminate the probability of these complex
logic errors. What's really needed is tooling that ensures test coverage on a
phrase-by-phrase, not line-by-line, basis. (Basic line coverage would have
said that all these lines of code were executed in testing.) And you need a
culture around paying attention to those results. That can be very difficult
to build, but for mission-critical software (security included) you absolutely
need that level of attention to detail.

~~~
KirinDave
> In this specific case, strongly typing the CryptVerificationResult as an
> enum would have helped. But, generalizing the problem, if you had more
> complicated criteria for whether that branch should be taken, it can be far
> too easy to save an intermediate result into a (strongly typed) variable but
> never end up using it in the correct way in the condition statement itself.

We can capture that requirement with something called Linear typing, but even
if we don't go for a compiler-enforced consumption the creation of a
_universally used_ family of result enums brings enormous discipline and
reliability to error handling. In no small part because when considering the
result of such an enum, the compiler can demand a total pattern match, which
forces developers to consider what that failure means in context and present
SOME kind of strategy (even if it's hard failure).

The approaches of languages with type inference and more sophisticated type
systems like OCaml and Haskell go even a step further because they can create
workflows around these types, creating composite workflows that make it _easy_
to handle errors. It becomes harder to ignore them, or write functions that
ignore them.

One of the reasons it's so easy to write parsers in languages like Haskell is
that algebraic data types and applicative functor composition make it easy and
even convenient to talk about the logic within the context of non-trivial
error flows. It's much more frustrating to write a parser without a combinator
framework in OCaml or Haskell. Similar stories exist for Validator, Either,
and Maybe/Option.
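A toy sketch of the combinator style described above, using `option` as the error channel (illustrative only, not a real parsing library):

```ocaml
(* A parser consumes a char list and either fails (None) or succeeds,
   returning a value plus the unconsumed input. *)
type 'a parser = char list -> ('a * char list) option

(* Match one specific character. *)
let char_p c : char parser = function
  | x :: rest when x = c -> Some (c, rest)
  | _ -> None

(* Sequencing: the failure case is threaded automatically, so the logic
   never mentions it explicitly. *)
let ( >>= ) (p : 'a parser) (f : 'a -> 'b parser) : 'b parser =
  fun input ->
    match p input with
    | Some (v, rest) -> f v rest
    | None -> None

(* Parse 'a' then 'b'; if either fails, the whole parse fails. *)
let pair =
  char_p 'a' >>= fun a ->
  char_p 'b' >>= fun b ->
  fun rest -> Some ((a, b), rest)
```

Every combinator built from `>>=` handles the error path for free, which is the "convenient to compose error-handling functions" point made above.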

These techniques don't mandate error processing, but they make it more
convenient to compose error-handling functions and offer more compiler
checking when the results must be handled.

