Hacker News new | past | comments | ask | show | jobs | submit login
Rust at OneSignal (onesignal.com)
329 points by gdeglin on Jan 4, 2017 | hide | past | favorite | 99 comments



First, I have to comment on that diagram:

That diagram is the most interesting representation of a diagram I have ever seen. Do you have a tool to generate these for you, or was it hand crafted for this post? If it's a tool, is it available anywhere? If this is a built out tool, I would use this tool all the time.

On the article itself:

This is the most interesting type of post about choosing Rust: very balanced, and aware that Rust gives a lot of upsides but that, compared to other alternatives, there are measurable downsides as well.

I think the biggest trade-off they identified (and I've also gotten this impression from writing Rust on my own time, not for work) is the relative immaturity of the ecosystem, in terms of the libraries available and the number of developers supporting some of those libraries. What makes me hopeful is that, as a stable language (post 1.0), Rust is still relatively young. And you can definitely see things improving, with the community growing, more libraries popping up, and people gathering around specific libraries.


> That diagram is the most interesting representation of a diagram I have ever seen.

Thank you! We worked really hard on it.

> Do you have a tool to generate these for you, or was it hand crafted for this post? If it's a tool, is it available anywhere?

It was hand-made for this blog post. The diagram itself was drawn in a vector graphics editor and exported as SVG. Then, using web inspector tools to find which visual box corresponded to some line of svg markup, classes/ids were added to the items we cared about. Next came writing prose for each component. Finally, a bit of JavaScript and a CSS animation makes it come alive.

> If this is a built out tool, I would use this tool all the time.

Me too!


Fantastic work on the entire post then. Great content, and presentation. Looking forward to reading future posts like these as well.


> The diagram itself was drawn in a vector graphics editor and exported as SVG.

Which vector graphics editor did you use?


Not sure what they used but on Windows I would recommend Inkscape (perhaps an obvious choice).


My obvious choice would be Visio or PowerPoint.


Powerpoint can’t export to SVG though, can it?


Not the version that I own, but then there is Visio, or the other vector formats that can be converted to SVG.

I seldom use SVG, because it is a failed vector format. You still cannot guarantee an image will work properly across all browsers (desktop, mobile devices) or vector design tools.

PS or PDF have a better chance of working across tools.


I, too, have been looking for a tool that will generate clean, styleable structural diagrams, but I haven't found the perfect one yet.

The best I've found is Mermaid [1], which has a Graphviz-like grammar. Unlike Graphviz, it can generate very clear, orthogonally laid-out diagrams with minimal effort, and it works better because it's designed for flowcharting (it supports classic flow diagrams, too). On the other hand, because it was designed for flowcharting, the automatic layout can produce clutter when you have many entities with many connections. In particular, if you tried to generate the OP's diagram, I suspect it would accidentally lay out the boxes in a way that made some arrows cross, but I didn't actually try.

I've been considering writing a better tool for some time, but the layout is a hard theoretical nut to crack, and I'm not sure I would be up for it. The best idea I've had so far is to use genetic algorithms with a fitness measurement that promotes cleanness (penalizing overlapping lines, density, horizontal/vertical extent, and so on).
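For what it's worth, the crossing-count part of such a fitness measure is straightforward. Here is a minimal Rust sketch; all names and weights are invented for illustration:

```rust
type Point = (f64, f64);

// True if two line segments properly intersect (shared endpoints don't count).
fn segments_cross(a: (Point, Point), b: (Point, Point)) -> bool {
    fn orient(p: Point, q: Point, r: Point) -> f64 {
        (q.0 - p.0) * (r.1 - p.1) - (q.1 - p.1) * (r.0 - p.0)
    }
    let (p1, p2) = a;
    let (p3, p4) = b;
    orient(p1, p2, p3) * orient(p1, p2, p4) < 0.0
        && orient(p3, p4, p1) * orient(p3, p4, p2) < 0.0
}

// Lower score = cleaner layout: crossings are penalized heavily, sprawl lightly.
fn fitness(edges: &[(Point, Point)], extent: f64) -> f64 {
    let mut crossings = 0;
    for i in 0..edges.len() {
        for j in (i + 1)..edges.len() {
            if segments_cross(edges[i], edges[j]) {
                crossings += 1;
            }
        }
    }
    crossings as f64 * 10.0 + extent
}

fn main() {
    let crossing = [((0.0, 0.0), (1.0, 1.0)), ((0.0, 1.0), (1.0, 0.0))];
    let parallel = [((0.0, 0.0), (1.0, 0.0)), ((0.0, 1.0), (1.0, 1.0))];
    assert!(fitness(&crossing, 1.0) > fitness(&parallel, 1.0));
}
```

A GA would then mutate box positions and keep the lowest-scoring layouts.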

[1] https://knsv.github.io/mermaid/


Try https://www.omnigroup.com/omnigraffle

(not affiliated, it just does this very well)


That's a manual tool, though. I personally use Illustrator whenever the need arises for something that is to be shiny, but my desire is for an automatic layout tool that can be driven from a DSL, Graphviz-style.

Aside from efficiency (lots of diagrams for documentation or blogging), one benefit would be the ability to enforce a single theme (font, colours, etc.) across many diagrams, and push out new versions by tweaking the styles. Illustrator can't do that, and OmniGraffle is very weak at it.

I tried using Sketch, which has support for reusable symbols, but it's pretty terrible at it.


I'm always very happy to hear of companies using Rust.

I'm especially happy to see that they're using clippy. It's not a core piece of their infra like hyper, but it's still being used, which is nice.

How has it helped? Does it just keep the code clean or has it found mistakes and stuff like that too?

Loved the diagram, btw.


> How has it helped? Does it just keep the code clean or has it found mistakes and stuff like that too?

Clippy hasn't found any bugs in our code that I'm aware of, but it has been invaluable even so.

Clippy is really good at preventing bit-rot. When refactoring, it typically points out constructs that are no longer needed after other changes.

It's great for teaching how to write Rust effectively and succinctly. I've certainly learned a lot from it. One example lint in this area is using `for item in list` vs `for item in list.into_iter()`.
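For readers unfamiliar with that lint, here is a sketch of what it flags (`sum_list` is an invented example; IIRC the lint is called `explicit_into_iter_loop`):

```rust
fn sum_list(list: Vec<i32>) -> i32 {
    let mut sum = 0;
    // clippy would flag `for item in list.into_iter()` here as redundant;
    // `for item in list` consumes the Vec just the same.
    for item in list {
        sum += item;
    }
    sum
}

fn main() {
    assert_eq!(sum_list(vec![1, 2, 3]), 6);
}
```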

OnePush sets `#![cfg_attr(not(test), warn(result_unwrap_used))]` to make sure no `unwrap()`s are used in production code; all errors must be handled! `result_unwrap_used` has easily been the most valuable lint for us.
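To illustrate what that lint pushes you toward, a hedged sketch (the `read_config` helper and the paths are invented; the attribute only takes effect when clippy runs):

```rust
use std::fs;
use std::io;

// Invented helper; any Result-returning call illustrates the point.
fn read_config(path: &str) -> io::Result<String> {
    fs::read_to_string(path)
}

fn main() {
    // With #![cfg_attr(not(test), warn(result_unwrap_used))] at the crate
    // root, clippy would flag this in a non-test build:
    //     let config = read_config("/etc/onepush.toml").unwrap();

    // The lint pushes you to handle the Err arm instead:
    match read_config("/definitely/not/a/real/path") {
        Ok(config) => println!("loaded {} bytes", config.len()),
        Err(e) => println!("falling back to defaults: {}", e),
    }
}
```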


Nice to know, thanks!

Yeah, clippy isn't solely for finding bugs -- a lot of the lints are about better style and cleaning up code, which often become useful after refactorings.


I'd really like to use clippy, but for production use I don't want to use anything but the stable Rust channel, whereas clippy apparently needs nightly Rust. Maybe I'm being overly cautious (OneSignal seems to do fine with just version pinning), but there's a reason nightly is separate from stable.

It would be cool if I could configure cargo to only use nightly Rust when invoking clippy, and use stable for all dev/test/prod builds. Do you happen to know if that's possible?


Yeah, clippy is a bit of an outlier because it hooks into the compiler itself to extract information during compilation. These are APIs that will never be "stable", nor are they really part of the "stdlib"; it's just that they're around and accessible in a nightly compiler.

Given that clippy is a tool, not a dependency, folks aren't overly concerned about this -- it cannot infect dependency trees and only requires nightly for local development. Most people do `rustup run nightly cargo install clippy` followed by `cargo +nightly clippy` to run it locally without needing to change the default toolchain.

There are plans to distribute clippy as part of the rust "extended" distribution, so the stability issue will go away.


:) There is one way to have clippy work in the stable compiler: The integration of useful tools!

But that is clearly an awful idea. No one would like useful tools at the expense of "separation of concerns" and marginally increased compiler maintenance :P


Not sure what you're trying to say here (you seem to be using sarcasm but I'm not sure), though I will note that clippy is wedded to a lot of compiler internals, so being a part of the compiler build/CI is a good thing for clippy. Often APIs get removed or changed from the compiler internals and we have to work around or sometimes revert it. Usually it works out, but it would be nice if the concern of having to support clippy's use case was part of the decision of how an API is to be changed. In the opposite direction, rustc does lints already and clippy is an extension of that. So I'm not really sure if "separation of concerns" is an issue here -- they're really the same concerns. While a lot of the clippy contributors are not well versed with compiler internals, the clippy maintainers all basically have to be, and we often have to dig deep into compiler changes to make stuff work with nightlies.


I just meant that if a tool/lints have proven to be useful, why not internalize them into the compiler?

This way the compiler takes on a marginal increase in maintenance effort but remains flexible and can change its internals at any point. It would also benefit users who, for a multitude of reasons, only want or need to use stable.


I don't see any reason why a set of stable APIs couldn't be defined; after all, other languages have them for the same purpose as clippy, e.g. Go vet, .NET Roslyn...


A set of stable APIs could be defined, but it would probably remove some of the more powerful lints in clippy.

You have the same issue with clang -- libclang is stable, but a lot of the plugins use the unstable clang plugin API.


I personally find it strange to develop on a different branch than your production system. "Works fine on my machine" is already a well-known joke.


I've seen a lot of folks using nightly only for clippy and stable for everything else. Almost everyone has CI for both nightly and stable. It usually works out. If you're only using stable features, rust nightly is basically the same as stable with some speed improvements and clippy. This rarely, if ever, affects how something will work. If anything, you have a higher propensity for hitting bugs locally because the nightly compiler could be buggy.


Nightly is nightly for a reason :)


We definitely have plans to get clippy working on stable Rust; though I will say, with clippy, using nightly is less of a big deal, since it's not something you depend on, it's a tool you use.

> if I could configure cargo to only use nightly Rust when invoking clippy,

This is

    rustup run nightly cargo clippy
or

    cargo +nightly clippy
if you have rustup installed.


Hey steve,

Using clippy is not as straightforward as you make it sound, because clippy and the latest nightly are often broken together. There would be much less hassle if it were a stable crate. Also, no worries about backwards compatibility between stable and nightly.

- rg


To be clear, clippy can't ever be a true "stable" crate. What clippy does is rely on a whole ton of compiler internals so that it does not have to reimplement the compiler. These internals change, because the rust compiler is under active development.

The plan for "stable" clippy is to bundle it with releases, precompiled. It will still use the internal APIs, but from a user's perspective they just have to `rustup component add clippy` and it will magically work.


Ballpark ETA on being able to install a precompiled clippy with rustup like that?


It's "phase 2" of https://internals.rust-lang.org/t/rust-ci-release-infrastruc...

We've got a good idea of how it will happen, but the infra stuff has to happen first -- nobody wants to complicate the existing infra if it's going to go away.

I don't have a clear timeline on that. A few months I guess.

On clippy's side we don't need many changes to make this work.


`rustup component add clippy`

Whoa, that's the first I've heard about this. Very exciting plans!


That's totally fair; I've gotten lucky, I think, with the mis-matches. As mentioned, we certainly want to ship a "known good" one to address exactly what you're talking about, but until then, it's not _super_ onerous to use, IMHO.


This is an excellent blog post - thanks a lot for writing it up. Solid use cases defined, I particularly enjoyed how you dealt with the shortcomings of joining a relatively young language.

The 'belligerent refactoring' resonated with me - I have worked on untyped and statically typed codebases of varying sizes and when making huge changes there is no better tool for me personally than an expressive type system.


Given the use case described for OneSignal and the enumerated benefits gained from Rust contrasted against the difficulties... I'm curious why Erlang/Elixir + Dialyzer wasn't considered or used?

We're starting to use Rust for static binary quasi-safety-critical applications, but communication/control-plane stuff like what's described at OneSignal we've found is much easier w/ Erlang.


We actually did consider them briefly. The problem in this case was having zero in-house knowledge about the technologies nor anybody particularly experienced with FP. We did not want to take on that much risk.


You started with experienced Rust people?

Where'd you find them? I'd _love_ to find a vein of them to mine over time. :-)


It depends on your requirements (how «experienced» you want them), but it shouldn't be too hard to find:

- Rust is still a really young language and most of us just use it as a hobby and don't write Rust for a living, but many would love to. For some of these people (I'm one of them), being able to use Rust at work can be a good reason to join you.

- IMHO Rust is a good language for recruitment because it has a high barrier to entry: you can't just fake knowing Rust. If a candidate doesn't understand the ownership system really well, they won't even be able to write a 100-line code sample.

To get in touch with potential candidates you could post on /r/rust, and I'm pretty sure it will be well received since many people are eager for Rust job opportunities.

Of course, if you need someone with 3+ years of professional experience with Rust, your choice will be a bit narrower ;)


Post in HN's "Who's hiring". Rust being constantly on the front page seems to be a good indication that there are many people here who are interested in the language.


Indeed - I write things in Rust on my own time and enjoy doing it so much that I'd consider "come write Rust" to be a big, big positive differentiator between jobs.


I think OneSignal will benefit immensely once Hyper lands async support in an upcoming release. I know this is an immature side of Rust, but there is a lot of momentum behind async libraries right now.


We're definitely looking forward to the tokio-based version of Hyper async. In case it wasn't clear from the post, we are actually using the unreleased async version of Hyper (powered by mio/rotor) today.


You could switch to the tokio branch of hyper, my understanding is that it's pretty much stable, just waiting on the final tweaks before the impending release of tokio 0.1. We're so close...


I'm still a bit hesitant to pick up tokio and tokio-hyper so soon. We've got a lot of production miles on the current async client - on the order of billions of HTTP requests. Qualifying the new hyper code for production use will take some time. There's also still some discussion on the tokio issue tracker about problems with panicking; we'll probably wait until that category of issue is resolved before diving in.


That is super, super fair. :) I'm just incredibly excited for this release, almost as much as I am for Rust 1.15 generally. It's a good year so far...


Nice read. Apart from the excellent writing I also think the interactive diagram was really cool!


Thank you for the compliments! I'm especially glad you like the diagram.


How does OneSignal make money? Since the pricing is free, I'm not sure how it will sustain itself after the VC chest runs dry.


From https://onesignal.com/about:

It's free; how does OneSignal make money?

We make money by using the data we aggregate to improve web and mobile experiences. We also offer custom solutions to enterprise clients.


Wet blanket! The rust will sustain them!


What does the expression 'Wet blanket!' mean?


It generally refers to something that ruins a happy/good situation -- like throwing a cold wet blanket on someone out on a summer day.


Thanks.


Every time I read "Rust tooling is weak" I'm sure that in the following lines I will find no mention of the IDEA plugin (powerful, with regular updates) or Cargo (with a lot of built-in features, such as testing). Rust tooling promotion is weak, not the tooling itself.


It still has a bit of catching up to do compared with what I enjoy daily in Java and .NET projects.

Right now my biggest pain is compilation times, especially seeing cargo compile the same crates multiple times instead of reusing the already-compiled binary libraries.

Unless there is some magic cargo incantation I am missing.


Cargo doesn't recompile all crates on each iteration; you are definitely missing something. It only recompiles changed code, or everything when the rustc/cargo version has changed.

"A bit of catch-up" is one thing; not mentioning the most powerful IDE plugin is another.


This is not what I see. I imagine you are talking about a single project, while I mean across all the Rust applications I might have available.

1 - Get Project A

2 - Crate X gets compiled as one of the dependencies

3 - Get Project B

4 - Crate X gets compiled again as one of the dependencies

I usually see this when compiling all the VSCode related projects during new Rust releases.

Could the reason be that Crate X on Project A and Project B isn't the same version?

Where are the binary rlibs visible to all cargo projects stored? I could only find text versions zipped together.

I haven't checked this deeply yet, just seeing "Compiling crate X" multiple times isn't fun.


We don't do any of this kind of caching because flags can differ per-project. There's an open bug on it; it's not impossible, we just haven't done it yet.


Same applies to any language that compiles to native code or has support for conditional compilation.

Even with MIR, Rust will never compile faster than C or even C++ if dependencies on binary libraries aren't supported.

Right now I can compile my C++ projects on Windows, and .NET Native ones, faster than when updating Rustfmt, racer and Rustsym, after each new Rust release.

VS2017 will bring the new linker, and the ongoing MS work for C++ modules (now a TS).

Rust won't win the hearts of C++ developers if it doesn't allow for binary dependencies, and doesn't compile faster.


All projects are independent and binaries are stored in the "target" folder. It's a good thing: even libs with nominally the same version can have different source code and dependencies (because of "replace" or vendoring).


No, it is not a good thing. It is a waste of development time that I could use for productive work.

In other AOT-compiled languages that I use at work, I can reuse binaries across projects, including in the languages Rust intends to replace.

It is a hard sell if using Rust means spending more time compiling than even with C++ (especially now that modules are finally coming, at least for those of us on VC++).


For me it is a good thing because, as I explained, the version number is not a guarantee.


So then cargo needs to be improved so that there is a set of dependency values (name, version, rustc version, etc.) that can guarantee the uniqueness of a binary library.

To me it seems FOSS undervalues binary libraries in the enterprise, and how they get used in many corporations or companies selling components.

Rust's improved safety is a good thing™, but not at the expense of continuously seeing the same libraries compiled over and over instead of reusing binary artifacts across projects.


Cargo needs to be improved, here I agree :)


dub (the D build tool) supports both of these; I can attest stuff gets reused.


It doesn't have to use the version, it could hash the source, like ccache.


Needs better build support, CI testing and better tooling to match C++ on mobile.


I use the IntelliJ plugin daily, and although I recommend it (and the maintainers are awesome!) it has a ways to go before it matches what I've seen with IDEA (Java).


Doesn't that prove the point that the tooling is weak?


>Languages like Go make it too easy to ignore errors.

What? I've almost exclusively heard the opposite: Go makes error handling too in-your-face. People don't like writing "if err != nil" constantly.

>Finally, the developer leading the effort had experience and a strong preference for Rust. There are plenty of technologies that would have met our requirements, but deferring to a personal preference made a lot of sense. Having engineers excited about what they are working on is incredibly valuable.

I wish it were more acceptable to admit this instead of having to half-assedly argue that your favorite language just so happens to be a perfect fit for the problem. There's no shame in simply using the language you're most proficient in. Practically speaking, language choice is only a problem for large teams or for projects with language-specific requirements.


> What? I've almost exclusively heard the opposite

Practically speaking, Go makes it hard to accidentally ignore errors if the function you're calling normally returns a value and easy to accidentally ignore them if it doesn't. For example, Go will force you to handle the error when unpacking the result of os.Open(), but not the result of os.Chmod(). There is a popular errcheck tool that does simple static analysis to catch this.

By contrast, rustc warns if you drop any error anywhere, and this warning ships on by default with the compiler.


If a function in Rust returns any type except Result, you can just call it without handling errors. The compiler will only raise a warning if a Result wasn't handled. And even in that case you can just add "let _ =" before the call to suppress the warning. I'm pretty sure you know this.


> If a function in Rust returns any type except Result

The conversation was specifically about ignoring errors. In Rust, errors are conventionally reported using Result, hence the warning on unused Results. AFAIK it is the only type opted into that behavior by default, its variants being specifically called Ok and Err, and it is currently the only type supported by `try!` and `?`.

> And even in that case you can just add "let _ =" before the call to suppress the warning.

You can even disable the warning entirely, can you imagine that?

Incidentally, you can use that exact same feature to ignore errors from value-returning functions (where you need the actual result) in Go as well, so that's not exactly a slam dunk; and it is an explicit ignoring of the error rather than an implicit one, which can be (and often is) a mistake rather than a conscious decision by the developer. That is, there is a large difference between

    _ = os.Chmod(…)
where the developer is explicitly stating "I don't care about an error" and

    os.Chmod(…)
where the developer might not have noticed that an error is possible.

In Rust, the latter will warn by default because fs::set_permissions returns an io::Result<()>.
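The same three options look like this in Rust (`chmod_like` is an invented stand-in for a fallible, unit-returning call such as fs::set_permissions):

```rust
// Invented stand-in for a fallible call that returns Result<(), _>.
fn chmod_like(ok: bool) -> Result<(), String> {
    if ok { Ok(()) } else { Err("permission denied".into()) }
}

fn main() {
    // A bare call compiles but triggers rustc's on-by-default
    // `unused_must_use` warning, because Result is #[must_use]:
    //     chmod_like(true);

    // Explicitly discarding is the Rust equivalent of Go's `_ =`:
    let _ = chmod_like(true);

    // Or handle both arms:
    match chmod_like(false) {
        Ok(()) => println!("done"),
        Err(e) => println!("failed: {}", e),
    }
}
```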


There are plenty of ways a function can report an error: with None, 0, -1, false, or even "error". Can you imagine it?

I like Rust's way of handling errors with Result, but I hate your tone, so this discussion is over.


Think as the author of a function that does something that can fail.

The signature: `fn risky_shtuff() -> Result<Response, Err>`

Rust guarantees to you as the author that whoever uses this function HAS to handle both cases (even if it's an unwrap).

However, Rust doesn't stop you from making functions that instead return Option, bools, or even strings, but you'd be actively working against a very strong convention.
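A minimal sketch of that convention (all names here are invented):

```rust
#[derive(Debug)]
struct Response(u16);

#[derive(Debug)]
struct PushError(String);

// The Result in the signature is the contract: callers get no Response
// without going through the error case.
fn risky_stuff(fail: bool) -> Result<Response, PushError> {
    if fail {
        Err(PushError("backend down".into()))
    } else {
        Ok(Response(200))
    }
}

// A caller must unwrap, match, or propagate with `?`; propagation in turn
// forces the caller's own signature to admit failure.
fn status() -> Result<u16, PushError> {
    let resp = risky_stuff(false)?;
    Ok(resp.0)
}

fn main() {
    assert_eq!(status().unwrap(), 200);
    assert!(risky_stuff(true).is_err());
}
```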


If a function doesn't return anything (void), as pcwalton is saying, it doesn't matter. So either we are talking about functions that return a value or ones that don't.


> What? I've almost exclusively heard the opposite: Go makes error handling too in-your-face. People don't like writing "if err != nil" constantly.

Do you actually have to look at the err value, or can you just pretend it's not there? (Genuinely asking; I thought it was the latter.)

> I wish it were more acceptable to admit this instead of having to half-assedly argue that your favorite language just so happens to be a perfect fit for the problem.

This is one of the reasons we felt it important to mention. Rather than demand purely technical motivations, we choose to acknowledge the human aspects of software engineering as well.


> > > Languages like Go make it too easy to ignore errors.

> > What? I've almost exclusively heard the opposite: Go makes error handling too in-your-face. People don't like writing "if err != nil" constantly.

> Do you actually have to look at the err value, or can you just pretend it's not there? (Genuinely asking; I thought it was the latter.)

While Go may not have all the capabilities Rust has for forcing error inspection, it really isn't in the class of languages that "make it too easy to ignore errors". I agree that a lot of it is convention, but multi-value returns, usage of named variables, and good practices in the base libs (flowing upwards) don't place it in the realm of easily ignoring errors.


The compiler prohibits declaring variables (and imports) without using them. You can get around this with

    result, _ := someFunc()
or in a (anti) pattern I use sometimes

    _ = []interface{}{ err1, err2, var1, etc }
with underscore being a special variable name indicating to the compiler you understand you're ignoring it.

Nice writeup btw. Did generics/GC play any part in the decision making process?


Go promotes the pattern of dealing with errors, which, to be fair, I think is an improvement on exception-based handling. But the Rust compiler forces the user of a function to handle the error. It's impossible to write `val, _ = getValOrErr()` in Rust. You just can't ignore an error.


> Having engineers excited about what they are working on is incredibly valuable.

And for me it was one of the weak points of this article. "We use Rust because our engineers want to play with a new toy" is a weak argument and will soon be used by opponents of Rust as proof.


But it's the truth - so why not admit it?


What truth? That they make decisions just to play with a new toy? That's their problem, really; not a thing to be proud of, not an argument to be mentioned in the article.


In my opinion you're seriously underestimating how much "I like this language" and "I'm comfortable with this language" is involved when deciding how to code up new projects, especially at companies without an approved-language policy. I also didn't get the impression they were "proud of it"; they were simply honest in acknowledging it as a factor.


Right, because I was born yesterday and have never started a new project. Sometimes people should get rid of "I'm comfortable with this language" just to try out new things. And that argument is good for side projects.


> To get the same effect in a dynamic language, one would need to load this dictionary out of Redis and write a bunch of boilerplate to validate that the returned fields are correct.

How is this not what serde does internally? I could certainly imagine a nice library for a dynamic language that has a roughly equivalent API.


It is what serde does internally, and I also imagine that you could make something similar in a dynamic language; https://github.com/rails-api/active_model_serializers comes to mind ;)


It's essentially what serde is doing internally. As an author, you don't have to do any of the work, though. I struggle to imagine how a dynamic language could provide equivalent functionality without the call site becoming more complex.
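A rough, stdlib-only sketch of the plumbing serde derives for you; the struct, field names, and validation rules are invented for illustration:

```rust
use std::collections::HashMap;

// Hypothetical typed view of a Redis hash; serde derives this plumbing.
#[derive(Debug, PartialEq)]
struct AppConfig {
    name: String,
    max_retries: u32,
}

// The hand-written equivalent of a derived Deserialize impl: look up each
// field, check presence, parse and validate the type.
fn from_hash(h: &HashMap<String, String>) -> Result<AppConfig, String> {
    let name = h.get("name").ok_or("missing field `name`")?.clone();
    let max_retries = h
        .get("max_retries")
        .ok_or("missing field `max_retries`")?
        .parse::<u32>()
        .map_err(|e| format!("invalid `max_retries`: {}", e))?;
    Ok(AppConfig { name, max_retries })
}

fn main() {
    let mut h = HashMap::new();
    h.insert("name".to_string(), "onepush".to_string());
    h.insert("max_retries".to_string(), "3".to_string());
    assert!(from_hash(&h).is_ok());

    h.remove("max_retries");
    assert!(from_hash(&h).is_err());
}
```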


See the sibling comment for an example. I really don't see the challenge here.


This is an amazing postmortem--thanks for posting it! I'm really happy to hear about your success with Rust!


I'm always excited to hear about companies using Rust in production. The OneSignal team has done an excellent job of bringing out the strengths and weaknesses. I would like to know more about the unit testing practices that OneSignal and others follow.

BTW, I now have all the motivation I need to plunge into the Rust world.


> This wasn't something we anticipated up front, but build times have become onerous for us. A build from scratch now falls into the category of "go make coffee and play some ping-pong." Recompiling a couple of changes isn't exactly quick either.

Rust should really address the top problem with C++ which is slow build times.


Slow build times are a problem for C++, but there are bigger ones yet. Like the reliance on textual inclusion and the preprocessor instead of an actual module system. It has massive second and third order effects on tooling, code sharing, and community building.

Incidentally, a module system will help with build times, but that's only one of the benefits.


Yeah, but the way I see it, even with cargo, C++ modules will be there first.

My VC++ builds still compile faster, and that is just using the latest VC++ 2015 improvements, without the modules work being done by MS.


Since someone from the company seems to post here:

I'm curious: this looks like a perfect use case for Apache Kafka.

Did you evaluate it?


Why are you using a language-specific connection pooler (r2d2)?

Why not something like pgbouncer?

This is similar to the Java approach of building connection pooling into the application itself using JDBC.

We use PgBouncer as well. It's still useful to have a connection pooler within the application for ensuring connections are still good, reconnecting as needed, and making it easy to check out a connection.


That has not worked out well for me in the Java world: one connection pooler connecting to another one, unless one of them has ultimate control over timeouts and eviction.

I'm actually very intrigued whether you are seeing actual benefits using r2d2, or whether using only pgbouncer would be just as efficient.


Overall impression after reading this article: using Rust is painful and risky, and the author often thinks about Go.

There is a huge advantage not mentioned in the article: when code does a lot of multithreaded work, race-condition bugs will bite you sooner or later, and they bite very, very painfully. They are hard to detect, very hard to reproduce, and take a long time to fix. Rust is immune to race-conditions, and that alone is reason enough to use it in apps like this.
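Concretely, what safe Rust rules out is unsynchronized shared mutation. A sketch (the function name is invented):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Increment a shared counter from several threads. The compiler only accepts
// this because the state is wrapped in Arc<Mutex<_>>; handing threads a plain
// &mut u32 would be rejected at compile time, which is how data races are
// ruled out in safe code.
fn parallel_count(threads: usize, per_thread: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let mut handles = Vec::new();
    for _ in 0..threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(parallel_count(8, 1000), 8000);
}
```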


> Rust is immune to race-conditions

Safe Rust guarantees no _data races_, but not race conditions in general.[1]

[1] https://doc.rust-lang.org/nomicon/races.html


Thanks for the correction.



