Coroutines make robot code easy (bvisness.me)
221 points by bvisness 10 months ago | 122 comments



Yes. The callback is not a natural construct (i.e. it does not map well to our intuitive understanding of X doing something while Y is doing something else).

I'm annoyed when coroutines are reserved for use only in high-performance, c10k-type situations. For example, the KJ library's doc says:

"Because of this, fibers should not be used just to make code look nice (C++20's co_await, described below, is a better way to do that)."

With this "stackless or nothing" attitude we do not have good C++ coroutine libraries outside C++20's.

For me the purpose IS to make the code look nice, and I do not care if the coros are stackful and consume more stack memory. At the end of the day, for my application, I saved more in programmer time and bugs than I lost in RAM (and I'm on an embedded board with only 64MB).


I want to add to the callback vs. coroutines and async/serial discussion that it all depends on how you treat errors. Are errors mere exceptions, or do you want to handle errors in the control flow?

For example, with turndeg(90), Move(10), PickupItem(), turndeg(180), Move(10) you treat errors as exceptions: if the robot fails to pick up the item, or if it ends up at the wrong place, it's an exception.

Now if you put all these in try/catch, have the functions return error codes (or 0 for success), or use callbacks, the code will be more "ugly", yes, but you then treat errors as "first class citizens": if the robot fails to pick up the item you can try something else, maybe apply more vacuum to the suction arm, or switch to a grip arm. And if the robot goes off course you can make a course correction.

async/await, coroutines, futures, promises, do make the code "look nice", but that nice look comes from treating errors as exceptions.


I don't agree. You can handle errors however you like with both approaches. The callback code, however, will get pretty messy once you have a branched control flow.

With coroutines you are using the language's control structures directly. The program counter is your state and a branch is an `if`. With minimal debugger support you can see which line each coro is currently executing.

With callbacks you would have to inspect which ones are pending, unless you have made the state explicit in a variable.

This is not a huge deal for those who have been programming for a while, but for beginners coros will feel more like an extension of the language than something built on top.
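
A tiny sketch of the difference (Lua, with hypothetical helper functions standing in for real robot calls):

    -- Hypothetical helpers standing in for real robot calls.
    local function driveBackDone() return true end
    local function grabDone()      return true end
    local function shoot()         print("shoot") end

    -- Coroutine version: "where we are" is simply the next statement after
    -- the last yield; the language keeps that state for us.
    local auto = coroutine.create(function()
      while not driveBackDone() do coroutine.yield() end
      while not grabDone()      do coroutine.yield() end
      shoot()
    end)

    -- Explicit-state version: the position in the sequence is a variable
    -- we have to update and branch on by hand.
    local state = "driving"
    local function tick()
      if state == "driving" and driveBackDone() then state = "grabbing"
      elseif state == "grabbing" and grabDone() then state = "shooting"
      elseif state == "shooting" then shoot(); state = "done" end
    end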


I have a gopher server written in Lua [1] that uses coroutines to handle each connection. This bit of code is executed as the main menu page [2] is being displayed (with comments)

    -- -------------------
    -- Load in some modules.
    -- The first makes a gopher menu item of type link
    -- The second allows us to do a TCP connection
    -- --------------------------

    local mklink = require "port70.mklink"
    local tcp    = require "org.conman.nfl.tcp"

    -- ------------------------------
    -- tcp.connect() connects to the given address and port and timeout
    -- (in seconds).  This function will create a socket, set it
    -- to non-blocking, call connect, then yield.  When the socket has
    -- connected, the coroutine is then resumed with the connection; if
    -- 1 second has passed before the connection is done, then nil is
    -- returned when resumed.  This just calls a local QOTD service.
    -- ---------------------

    local ios = tcp.connect("127.0.0.1",'qotd',1)
    
    if ios then
      local res = ""
      -- --------------------------------
      -- The following loop will yield the coroutine until a line
      -- of data has been accumulated from the network.  The coroutine
      -- is then resumed with the line of text.
      -- ---------------------------------------
      for line in ios:lines() do
        -- -----------
        -- We're just accumulating the text into one long blob of
        -- text that we'll return
        -- -----------------
        res = res .. mklink { type = 'info' , display = line }
      end
      ios:close() -- another yield point, when closed, resume
      return res
    else
      return mklink { type = 'info' , display = "Not Available" }
    end
No exceptions here. If we can't connect, it's a simple 'if' test and do something else. The functions `tcp.connect()`, `ios:lines()` and `ios:close()` are all blocking points that cause the coroutine to yield. Yes, if I put something like `while true do end` that will block the entire process as there is no preemption, but aside from that detail, I find this code easy to read.

[1] https://github.com/spc476/port70/blob/master/share/index.por...

[2] gopher://gopher.conman.org/


Tip: in order to flatten those if-statements you can return early; if you start with "if not ios then return ..." you get rid of one if..else.
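
Applied to the snippet above, that would look something like this (same functions, one less level of nesting):

    local ios = tcp.connect("127.0.0.1", 'qotd', 1)
    if not ios then
      return mklink { type = 'info' , display = "Not Available" }
    end

    local res = ""
    for line in ios:lines() do
      res = res .. mklink { type = 'info' , display = line }
    end
    ios:close()
    return res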


Ish. You can also have error recovery in the form of the "conditions/restarts" system of Common Lisp, so you can get "look nice" code with many different setups.


> For example, the KJ library's doc says

Just to be clear, the audience for that tour is primarily Cloudflare engineers working on workerd / the Workers runtime. Within that context, that's very much correct. The entire codebase is written using asynchronous I/O - synchronous I/O doesn't show up (or if it does, it's in weird parts I've never looked at). Fibers are used sparingly in very specific contexts and we have very special code to make them memory efficient at our scale.


I interpreted that as “if you can go stackless prefer it for the coroutines-for-elegance use case”.


Yes.

What question are you answering 'yes' to?

> The callback is not a natural construct

According to who? It is very natural to anyone making a GUI or anything interactive for the first time. Eventually I would try to move people to queueing up events and handling them all at the same time so that the order is easier to debug.

Coroutines are the latest silver bullet syndrome. Fundamentally you still need to synchronize and order data and that's the hard part. I don't know why students would need to do more than the classic interactive loop of:

1. get data

2. update state

3. interactive output (drawing a frame, moving a robot etc.)


> According to who? It is very natural to anyone making a GUI or anything interactive for the first time.

The people that do not make GUIs but apps with complex behaviours.

Callbacks are nice and simple. Callbacks calling callbacks calling callbacks calling callbacks (because of one of the worst ideas in programming ever, function coloring) stop being simple. Async/await is just a patch over that ugliness for languages that can't do any better easily.


> The people that do not make GUIs but apps with complex behaviours.

I don't think there is a difference here.

> Callbacks are nice and simple. Callbacks calling callbacks calling callbacks calling callbacks (because of one of the worst ideas in programming ever, function coloring) stop being simple. Async/await is just a patch over that ugliness for languages that can't do any better easily.

I agree with all of this, but they said 'callbacks are not natural'. I don't think this is a language issue either, and I don't think coroutines help. Just like I said in the first comment, I think the way to go is to have a queue of events/inputs and use that, because the ordering and debugging are much better and you don't get the same web of jumping to different parts of the execution.
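
A minimal sketch of that loop in Lua (event names are illustrative): everything goes into one queue and is handled in order inside the main loop, so the control flow stays in one place.

    local queue = {}

    local function push(event) queue[#queue + 1] = event end

    -- Called once per frame/iteration: drain the queue in order, update
    -- state, then produce the interactive output.
    local function tick()
      while #queue > 0 do
        local event = table.remove(queue, 1)
        if event.kind == "button" then
          -- update state from the button press
        elseif event.kind == "sensor" then
          -- update state from the sensor reading
        end
      end
      -- draw a frame / move the robot using the updated state
    end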

I do think that callbacks are a 'natural' and straightforward idea that most people either think of independently or understand well the first time they see it. It doesn't mean that's the best way to do it, but to say it isn't 'natural' is bizarre.


> I think the way to go is to have a queue of events/inputs and use that because the ordering and debugging is much better and you don't get the same web of jumping to different parts of the execution.

If you just have a stream of events that need to be decided upon in order, you're in a very happy place; complexity strikes when you want to run handlers for them in parallel to cut latency, and those handlers also need to do multiple things that can be done asynchronously to cut latency.

Message passing works well here: you can just send a bunch of requests to various components, then wait for each at the moment it is needed. Async/await is essentially a very bastardised version of it (and usually stuck in a single thread in most implementations, so no parallel computing).


I'm not sure what point you are making now. The original story is about coroutines somehow making robotic programming easier and the comment I replied to was saying 'callbacks are not natural'.


The callbacks (as in a chain of callbacks) are not natural to a beginner, like a student trying to program a robot. For a GUI, where you set up a callback in response to user input, it is also easy to understand because the program's control flow does not progress as a chain of callbacks.

The input-update-output loop is fine, but you cannot easily combine and nest such loops. For example, in a robot you may have a speed control inner loop, then a path following loop, and on top of that a high-level behavior loop. Nesting one inside the other, you end up with a hand-baked implementation of a coroutine.
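
A rough Lua sketch of that nesting (setpoint/sensor helpers are assumed): each layer is an ordinary function containing a loop that yields, and because Lua coroutines can yield across nested calls, the layers compose without hand-rolled state.

    -- Inner loop: hold a wheel speed for some time (assumed pid/sensor helpers).
    local function holdSpeed(target, duration)
      local elapsed = 0
      while elapsed < duration do
        setMotorPower(pid(target, readSpeed()))
        elapsed = elapsed + coroutine.yield()  -- resumed with dt each tick
      end
    end

    -- Middle layer: follow a path by feeding setpoints to the inner loop.
    local function followPath(path)
      for _, segment in ipairs(path) do
        holdSpeed(segment.speed, segment.time)
      end
    end

    -- Top layer: high-level behaviour built from the layers below.
    local behaviour = coroutine.create(function()
      followPath(approachPath)
      grabBall()
      followPath(returnPath)
    end)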


> as in a chain of callbacks

That's not what you said at first and there is no reason to assume that callbacks automatically mean a chain of callbacks.

> The input-update-output loop is fine, but you cannot easily combine and nest such loops.

So don't. Where is this assumption coming from that a technique is bad because it might ignore what makes it good and warp it into something terrible?

> For example, in a robot you may have a speed control inner loop, then a path following loop, and on top of that a high-level behavior loop. Nesting one inside the other, you end up with a hand-baked implementation of a coroutine.

Then don't do that. You have a loop, do what you need inside of it. If you don't need to do something every time, skip it most of the time. This isn't rocket science. If a certain architecture doesn't work, don't do it like that. A car doesn't work well if you drive it backwards either.


> I'm annoyed when coroutines are reserved for use only in high-performance, c10k-type situations

Go's pretty much that idea. Make the threads as light as coroutines (I think it's like 4k or 8k per goroutine) and give some basic messaging (channels) to go with it. It works well, but some of the simplicity ended up biting it in the arse, so there can be quite a bit of boilerplate in some cases.


Coroutines are covered in Knuth's first volume. And, I confess, I think I went years thinking he was just describing method calls. Yes, they were method calls that had state attached, but that felt essentially like attaching the method to an object and calling it a day.

Seeing them make an odd resurgence in recent years has been awkward. I'm not entirely clear that they make things much more readable than the alternatives. Reminds me of thinking continuations were amazing when I saw some demos. Then I saw some attempts at using them in anger, and that rarely worked out that well.

Also, to the point of the article, I love being "that guy" who points out that Lisp's very easy "code as data" path makes the concerns expressed over the "command" system basically go away. You can keep the code as, essentially:

    (DriveForward 0.5 48)
    (while (NotCarryingBall)
        (Grab)
        (pause 2)) 
    (DriveBackward -0.5 48)
    (Shoot)
With god knows how much bike shedding around how you want to write the loop there.

Of course, you could go further for the "pretty" code that you want by using conditions/restarts such that you could have:

    (DriveForward 0.5 48)
    (Grab)
    (DriveBackward -0.5 48)
    (Shoot)
And then show what happens if "Grab" is unsuccessful and define a restart that is basically "sleep, then try again." Could start plugging in new restart ideas such as "turn a little, then try again." All without changing that core loop.
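
For languages without conditions/restarts, a rough (and much weaker) approximation of the "sleep, then try again" restart is a retry wrapper around the action; a sketch in Lua using the same step names:

    -- Retry an action with a pluggable recovery strategy. The core sequence
    -- below stays unchanged while strategies are swapped in and out.
    local function withRetry(action, recover, attempts)
      for attempt = 1, attempts do
        local ok, err = pcall(action)
        if ok then return true end
        recover(attempt, err)  -- e.g. pause(2), or turn a little, then retry
      end
      return false
    end

    DriveForward(0.5, 48)
    withRetry(Grab, function() pause(2) end, 3)
    DriveBackward(-0.5, 48)
    Shoot()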


With these examples I think the author would still be stuck stepping through the state machines with the students. Unless what you wrote would allow the "autonomousPeriodic function to keep ticking" in another way?


Apologies for not making that more explicit. My point was that that isn't just code, it's also data. Literally, you can turn it into a list and, instead of evaluating it with the standard runtime, send it to another place that turns it into the "command" style from the example Java.

You can /kind/ of do this with Java, of course. Just make sure not to use "new FooCommand" and instead change the "foo" function to return the command object. No reason that couldn't be done; but, and this is the big difference, it requires building a ton of scaffolding in the Java program to support both ideas at the same time. In Lisp, it is fairly easy to wrap in a macro. Still somewhat magical, I suppose, but no more so than the rest of the compilation/build process.

That make sense? I'm somewhat interested in this, so more than happy to try and do a blog post on the idea, if that would help.


I think that developers have minimal scheduling primitives available to them for scheduling complicated work in the order and with the timing they want.

I don't like hardcoding functions in coroutine pipelines. Depending on the ordering of your pipeline, you might have to create things and then refer to them, because of the forward reference problem.

Here's my stackoverflow question for what I'm getting at:

https://stackoverflow.com/questions/74420108/whats-the-canon...

Coordinating work between independent threads of execution is ad hoc and not really well developed. I would like to build a rich "process API" that can fork, merge, pause, yield, yield until, drop while, synchronize, wait (latch), and react to events. I feel every distributed system builds this again and again.

Go's and Occam's CSP is pretty powerful.

I've noticed that people build Turing completeness on top of existing languages, probably due to the lack of expressivity in the original programming language to take turingness as an input.


I’d recommend exploring some with Scheme. Somewhere between writing code in continuation-passing style, using macros, and using call-with-current-continuation, you should be able to build any of these mechanisms in a clean way. Maybe there’s a prototype for a better construct there. Then it’s on other language developers to support these capabilities as well.

Because looking at all the examples listed in the article, none of them seem very idiomatic.


Thanks for your reply.

I have looked into shift and reset to the point where I think I understand them (and callCC, but that actually changes the execution context, or can be seen as an AST transformation).

(This wiki page helped me understand delimited continuations: https://wiki.haskell.org/Library/CC-delcont )

I am interested in algebraic effects too, but would like to understand them from an assembly point of view, like exceptions.

My Lisp-like experience is only with Clojure, and it is delightful applying methods so trivially. I just find other people's LISP hard to read!


> I would like to build a rich "process api" that can fork, merge, pause, yield, yield until, drop while, synchronize, wait (latch), react according to events. I feel every distributed systems builds this again and again.

Years ago I attempted to build an experimental language with first-class resumable functions. Every function can be invoked by the caller through a special reference type called a "quaint". The caller can resume or stop the execution of the function either after reaching a timeout, or after passing a "wait label":

https://github.com/bbu/quaint-lang

A typical CPU-intensive example where preemption is done by the caller after a certain timeout:

    entry
    {
        fibq: quaint(u32) = ~fibonacci(32 as u32);

        ps("At start: "), pu8(fibq@start), pnl();
        iter: u32 = 0:u32;

        do {
            wait fibq for 1000 msec;
            ps("Iteration "), pu32(iter++), pnl();
        } while !fibq@end;

        ps("At end: "), pu8(fibq@end), pnl();
        const value: u32 = *fibq;
        ps("Reaped value: "), pu32(value), pnl();
    }

    fibonacci(number: u32): u32
    {
        if number == 0:u32 || number == 1:u32 {
            return number;
        } else {
            return fibonacci(number - 1:u32) + fibonacci(number - 2:u32);
        }
    }
An example that uses "wait labels" to suspend execution of the callee at certain points:

    entry
    {
        q: quaint(u64) = ~pointless_function();

        wait q until pointless_function::label_a;
        ps("At label_a: "), pu8(q@pointless_function::label_a), pnl();

        wait q until pointless_function::label_b;
        ps("At label_b: "), pu8(q@pointless_function::label_b), pnl();

        wait q until pointless_function::label_c;
        ps("At label_c: "), pu8(q@pointless_function::label_c), pnl();

        wait q;
        ps("At end: "), pu8(q@end), pnl();

        ps("Result: "), pu64(*q), pnl();
    }

    pointless_function: u64
    {
        i: u64 = 0 as u64;

        while ++i < 1000000:u64 {
            x: vptr = malloc(1024:usize);
            free(x);

            if i == 300000:u64 {
                [label_a]
            } elif i == 600000:u64 {
                [label_b]
            } elif i == 900000:u64 {
                [label_c]
            }
        }

        return i;
    }


I really like this.

Thank you for your comment and sharing.

I have a lightweight 1:M:N runtime (1 scheduler thread, M kernel threads, N lightweight threads) which preempts by setting hot loops to the limit.

https://github.com/samsquire/preemptible-thread (Rust, Java and C)

How do you preempt code that is running?

Would you like to talk more about your idea?


> How do you preempt code that is running?

It runs in its own VM with custom instructions. The "wait" statement corresponds to a special instruction that is able to switch to a separate execution context and restore the parent context when necessary.

To implement this in native code, some kind of runtime support would be needed. Perhaps a dedicated thread that is able to stop/resume the main thread, changing the contents of the stack and the registers.


Ha. Amazing! Someone actually implemented a variant of COME FROM of C-INTERCAL infamy in another language.

And moreover made it a reasonable and readable variant!


I mean, this is an interesting thing to think about, but it isn't at all what the article is about... coroutines may have many uses and are often related to async work in general, but OP is using coroutines just as a pauseable function (in good old asm, where you know how computers actually work, this is just jumping into a subroutine).


I think the article implements stackful coroutines, since the yield calls a function that control flow jumps to; it eventually returns to the instruction(s) after the yield statement.

(That is, it's not a JMP)


Or just use DDS as ROS 2 does, or use something like ZeroMQ?


Thankfully a piece that emphasizes that coroutines are functions that pause. Java frameworks like Quasar became focused on other goals besides that basic capability and lost their way (IMHO).

Java's not the easiest to pick up in high school unless you really make a big after-school effort. Something like Lua is probably better.


For FIRST robotics in particular, Java and LabVIEW are the most commonly used languages because they are the two for which a robust library is maintained by Worcester Polytechnic Institute for use in the competition.

But I have long been of the opinion that both languages are kind of an ill fit for the application. Python is making some headway and I think it might be a better fit: it has some advantages in legibility, it has an optional static typing system, and it does support coroutines.


Thankfully, very few teams are still using LabVIEW at this point, and WPILib primarily targets C++ and Java, with first-party Python support coming next season.


> Java frameworks like Quasar became focused on other goals besides that basic capability and lost their way (IMHO).

A bit of an aside maybe, but the guy who made Quasar is behind Project Loom, which adds lightweight threads to the JVM and becomes available in JDK 21: https://openjdk.org/jeps/444


Unfortunately high schools are forced to continue teaching Java, or else Oracle starts executing hostages.


In my day Borland had all the hostages, and every time you used something that wasn't orange-text Turbo Pascal another one got fed to a grue.

The right intro language is an interesting study in itself, and the intuitiveness of coroutines is a good data point. Go seems like a decent first one. It has dark corners, but at least they are in the corner. I'd love to argue for Rust as a first language, and maybe it isn't a bad one, but I'm not sure where I'd start the argument. C was my second language, and I'm not sure it would have made sense as quickly before Commodore BASIC.


I feel bad for autonomous participants, Java is almost a uniquely poor choice for the size of project FIRST teams would be creating.

Java has its domains, but small, algorithm-driven code worked on by a small, inexperienced group isn't one of them.


Especially when people try to impose Enterprise Quality Java on the students, and your project gets bloated with interfaces and dependency injection and config files…all for a single platform with fixed hardware :(

When we were still using Java, I had the students just make everything static. We don’t need more than one instance of our IntakeSubsystem. We only have one intake. But the OOP boilerplate persists.


I don't know Quasar, but a lot of projects seem to want to add functionality rather than be a library for that purpose, and create a new one for the completely unrelated feature that you want.


Coroutines make programming for PICO-8 very chill as well.

The biggest challenge is sometimes you do want external flow control, and there coroutines can get hard to untangle if your design is a bit messy.

Something like “A signals to B to do something else” starts to be a bit tricky (along with interruptible actions). I think there are good patterns in theory but I’ve found myself with pretty tangled knots at times.

This is ultimately a general problem when programming everything as functions. Sometimes you need to mess with state that’s “hidden away” in your closure. Building out control flow data structures ends up becoming mandatory in many cases.


I took computer science at school in 1989 because I hated my chemistry teacher in 1988. Never looked back. I really loved the author's call for attention to whether kids are "getting" the concepts being taught and adjust accordingly.

My teacher (Hi Mr Steele if you are still kicking around!!) taught us the algorithms without coding, but instead used playing cards or underwater bubbles or whatever. We had our a-ha moments intellectually before we implemented them in code.

As an aside, our school had just got macs and we spent most of the day playing digitised sound files from Monty Python.

"YOU TIT!"


People always say that coroutines make code easier to understand, but I've always found normal asynchronous code with callbacks much easier to understand.

They're equivalent except that asynchronous callbacks is what actually happens and you have clear control and visibility on how control flow moves.


If you want to see the callbacks, an alternative middle ground is promises, e.g. code that looks like doSomething().then(() => doSomethingElse()).then(() => doLastTask());

I currently work on a project that involves Java code with a promise library, and Unreal Engine C++ code which does not and uses callbacks (and do async JS stuff in my personal projects), and both have to do asynchronous logic. The Unreal code is just so much harder to deal with.

Specific problems the Unreal code has:

- There's no "high level" part of the code that you can look at to see what the logic flow is.

- Many functions are side-effecty, triggering the next part of the sequential logic without it being clear that that's what they're doing. Like the handleFetchAccount() callback kicks off another httpRequest for the next step, but you wouldn't know that it does that just from the name.

I'd admit some of these problems might be mitigatable in a better written codebase though.


The coroutine approach shines for complex business-logic.

Consider this example algorithm, of several async steps,

1. Download a file into memory

2. Email a link to a review web page.

3. Wait for the user to review.

4. Upload the file to a partner.

5. Update a database.

You could implement this as callbacks. The callback from each step leads to the next being triggered. Downside - your business logic is spread across all the callbacks. You could mitigate this somewhat by defining a class with one method for each step, with those methods being defined in the same visual order as the algorithm. Then have each callback call a method. (The article shows something different but similar with its Command autoCommand pattern.)

Tricks like this only go so far. Imagine if the reviewer user had a choice of pressing 'approve' or 'reject' on the webserver interface, with the algorithm changing depending on their answer. How do you now represent the business logic so the programmer can follow it?

Such changes are easy in coroutines. Here is the algorithm with that variation in coroutine code,

    async def review(review_id, url):
      file_content = await download_large_file(url, review_id)
      ws_review_id = await create_webserver_review_page(file_content)
      await email_the_user(ws_review_id)
      result = await get_webserver_user_response(ws_review_id)
      if result: 
        await upload_to_partner(file_content)
      else:
        await alert_failure(review_id, file_content)
      await update_database(review_id, ws_review_id, result)
You state that callback code gives you easy visibility into what actually happens - yes, it does. When you read callback code, it is natural to follow the business logic down to system calls. Coroutine code tends towards layered business logic and abstraction.


Just use lambdas, that makes the sequence local.


Callbacks are no more 'what actually happens' than coroutines are; what actually happens involves a lot of jumping to memory addresses, and closure state is just as much a compiler invention as async/await. Blocking-style code, by comparison, is how we actually think about the business logic; language features that abstract over callback hell to let you write it make code inherently more clear. People always say it because it's true.


Closure state is a serious matter, especially if you're not in a garbage-collected language.


> They're equivalent except that asynchronous callbacks is what actually happens [...]

Neither stackful nor stackless coroutines work like this in practice. The former suspend coroutines by saving and restoring the CPU state (and stack), and the latter compile down to state machines, as mentioned in the article. Coroutines are functionally not equivalent to callbacks at all.


Which is exactly what happens when you use asynchronous callbacks, except that you have to do the storing of state explicitly. Stackless coroutines even typically compile to (or are defined as equivalent to) callback-based code.


Stackless coroutines are literally the same thing.

Stackful coroutines are just a poor man's threads.


I imagine you would get a lot of blank stares with that POV, at least from folks with working bullshit detectors (like young kids that haven’t been conditioned to “modern” industry practices).

I found the article a great example of the kind of crap that passes for programming these days.


In the context of the FIRST Robotics Competition, teaching the command hierarchy is a good opportunity to teach students what a state machine is... and that can be a good opportunity to talk about what a computer does, because the computer is basically a hardware implementation of a state machine.

But that isn't the kind of lesson that you want to be cramming into the middle of the competition season.


The structure of the coroutine version looks very close to what I've been settling towards for my own background code (not robots but a similar "do a sequence of things that may take different amounts of time and rely on external state"). I'm not sure if it has a name so in my head it's been something like "converging towards a 'good' state":

Every tick, inspect the state of the world. Then do the one thing that gets you a single step towards your goal.

At first I wasn't sure the "inspect" part was possible in the robot system, but the Lua code makes it look like it is? If so, the change is basically changing the "while" to "if" and adding additional conditions, maybe with early returns so you don't need a huge stack of conditions.

The "converging" style doesn't use coroutines and is more robust. Let's say, for example, another robot bumps into yours during the grab - the Lua code couldn't adapt, but the "converging" style has that built in since there's no assumed state that can get un-synchronized with the world like with a state machine / coroutine version. It was because of external interactions like that, that I couldn't 100% rely on but were inspectable, that I originally came up with this style.


Similar code could be written in C#, with code like

    IEnumerable<RobotCommand> MyRobotBrain() 
    {
        while(drivetrain.getDistanceInches() > -48) 
        {
            yield return drivetrain.arcadeDrive(0.5, 0);
        }

        yield return shooter.shoot();
    }


I seem to recall reading (though quite some time ago, so I may be mistaken) that `yield` was developed for use in robotics as part of the CCR (Concurrency and Coordination Runtime), and its inclusion in C# 2.0 was to support this.


I read the article, but I failed to understand the problem. What's happening when you pause? Why do you need your functions to pause?

The example robot involves taking a deterministic sequence of actions in a fixed order. The state machine is a line. The article seems to be pretty clear that this code is bad:

    while(drivetrain.getDistanceInches() > -48) 
    {
        drivetrain.arcadeDrive(0.5, 0);
    }
and this code is good:

    while(drivetrain.getDistanceInches() > -48) 
    {
        yield drivetrain.arcadeDrive(0.5, 0);
    }
But pausing doesn't seem to be the functionality that's missing. The first loop will drive backwards until getDistanceInches is at most -48. The second one will also drive backwards until getDistanceInches is at most -48, but it will "pause" intermittently while it drives there. If my robot drives all the way in, I guess, one step (?), what will go wrong?

The tick function is called 50 times per second. Is it constrained to terminate within 0.02 real-time seconds? What if you want to think hard about something?


Only one action can run per robot loop. The first code isn't bad exactly - it just doesn't work for how the robot needs to operate.

Essentially the first code block has the issue mentioned in the article:

    Unfortunately, we can't do this because we need our autonomousPeriodic function to keep ticking. Loops like this will never finish and will cause the robot program to hang. So you can't use loops!
The second code block yields each command back to the robot until the robot is ready for the next one. It's not actually the same as the article's code, but I think it's better design to return a command to the robot than to run a mutable operation in the brain code; it makes it clearer to the user that it is necessary to let the robot do the work.

The article's point is that there are many ways to write the state machine code that makes this possible - and the coroutine approach is the easiest for students to understand. It's also quite clean and practical for real-world use.


The problem is more in how you combine that with something else. Consider: you have two "routines", one for "pick up ammo and take a shot" and one for "evade attack". Without a "yield", your recovery time after an action is the longest chain of actions you have encoded. With the yield, it is the longest single "command" that you have.


Dots instead of colons?


Yes, good catch; an oversight from posting on mobile. I've fixed it up.


As always when this topic comes up, I have to plug Ceu[0] (formerly Céu) and the programming paradigm it represents. It doesn't use coroutines but synchronous concurrency. On top of that it's a reactive language:

    // an external input event channel
    input int KEY;
    
    // par/or concurrently executes two
    // or more blocks (called "trails")
    // if one of them terminates, the
    // other trails are aborted.
    // Compare: par/and, which waits
    // for all trails to terminate
    // before resuming code
    par/or do  
      // an infinite loop that awaits
      // a timer event. Therefore this
      // trail never terminates by itself
      every 1s do
        // Ceu uses C as a host language
        // and compiles to (essentially)
        // a giant finite state machine
        // C functions can be accessed
        // using an underscore prefix
        _printf("Hello World!\n");
      end  
    with
      // this trail awaits a keypress,
      // then terminates, ending the entire
      // par/or block.
      await KEY;   // awaits KEY input event
    end
    _printf("Bye!\n");
The above code prints "Hello World!" every second until a key is pressed, after which it prints "Bye!" before terminating the program.

The reactivity and intuitive single-threaded concurrency make it a really nice language for low-powered devices. Or robotics, which is a lot about reacting to sensor input.

It has a dedicated Arduino repo too[1].

In practice it's more of a research language by Francisco Sant’Anna, a professor at UERJ, Brazil, than a language with a big community around it. He's currently working on a new version called Dynamic Ceu, or dceu[2].

In the same paradigm there is the Blech[3] language, which I believe originated at Bosch. Sadly, that project has also lost some steam.

[0] http://ceu-lang.org/

[1] https://github.com/ceu-lang/ceu-arduino

[2] https://github.com/fsantanna/dceu

[3] https://github.com/blech-lang/blech


Right, imperative synchronous programming simplifies real time processing a lot.

Somehow it does not get the attention it should, which is IMHO due to the fact that it is not available in common programming languages.

This is why I tried to create DSLs for C and Swift so that more people could potentially play around with that.

https://github.com/frameworklabs/proto_activities


Oooh, this is delightful, thank you for sharing!


You could have used Kotlin if you wanted to stay on the JVM and use coroutines.

Your desired pseudo-code in Java would look like this in Kotlin:

    coroutineScope {
        // Drive backward
        launch {
            while (drivetrain.getDistanceInches() > -48) {
                drivetrain.arcadeDrive(-0.5, 0)
            }
            drivetrain.arcadeDrive(0, 0)
        }

        // Grab for two seconds
        launch {
            val grabTimer = Timer()
            while (grabTimer.get() < 2) {
                intake.grab()
            }
            intake.stopGrabbing()
        }

        // Drive forward
        launch {
            while (drivetrain.getDistanceInches() < 0) {
                drivetrain.arcadeDrive(0.5, 0)
            }
            drivetrain.arcadeDrive(0, 0)
        }
    }


Coroutines (or the concept of saving context to come back to) have already been heavily explored in robotics in recent years. Behaviour Trees [1] approximate what coroutines do, in system-level controlled ticks, mainly to avoid the pitfalls of state machines. They are also extensively used in game development. ROS 2 has the Nav2 [2] package, which is based on the BTCpp library [3]. Not surprisingly, the BTCpp library uses Boost coroutines to implement some behaviours.

Shameless plug: I have also been developing (wasn't planning to advertise yet, so no docs or plans for release) a behaviour-based C++ library built entirely on coroutines [4] to avoid some problems of Behaviour Trees.

[1] https://en.wikipedia.org/wiki/Behavior_tree_(artificial_inte...

[2] https://en.wikipedia.org/wiki/Behavior_tree_(artificial_inte...

[3] https://github.com/BehaviorTree/BehaviorTree.CPP

[4] https://gitlab.com/ifyalciner/ferguson


Given your expertise, why use behaviour trees when coroutines are available? What value do they provide in languages like javascript where generators work well?


I'm writing a game that uses both behavior trees and coroutines (via C# generators), and I have found that they are not quite interchangeable.

The main thing is that the execution model is different. I think of the behavior tree as being evaluated from the root on every tick, whereas with coroutines, you are moving linearly through a sequence of steps, with interruptions. When you resume a coroutine, it picks up exactly where it left off. The program does not attempt to re-evaluate any preceding code in the coroutine in order to determine whether it's still the right thing to be executing. By contrast, a behavior tree will stop in the middle of a task if some logic higher up in the tree decides that you need to be on a different branch.

What we end up doing is composing behavior trees with coroutines as leaf nodes. It works quite nicely, although I wish there was a way to express the structure of the behavior tree in a more elegant way. We do the obvious thing: each node in the tree is some subclass of a Node base class, representing a logical operation like "if" or "do these in parallel".

I very much feel OP's angst about creating a pseudo-programming language-within-a-language by creating what are basically ad hoc AST nodes, but I haven't come up with a better solution. Maybe something like React, where you use basically imperative code to describe a structure, and there is some ambient state that gets properly reconciled by a runtime that you don't touch directly.
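
A rough sketch of that composition (in Lua rather than C#, node statuses simplified to "running"/"success"): the tree is re-evaluated from the root each tick, and a leaf just resumes its coroutine and reports how it is doing.

    -- Leaf node: wraps a coroutine; each tick resumes it and reports status.
    local function CoroutineLeaf(fn)
      local co
      return {
        tick = function()
          co = co or coroutine.create(fn)
          if coroutine.status(co) ~= "dead" then coroutine.resume(co) end
          return coroutine.status(co) == "dead" and "success" or "running"
        end,
        reset = function() co = nil end,
      }
    end

    -- Composite node: tick children in order; stop at the first one that
    -- is not yet successful.
    local function Sequence(children)
      return {
        tick = function()
          for _, child in ipairs(children) do
            local status = child.tick()
            if status ~= "success" then return status end
          end
          return "success"
        end,
      }
    end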


Behaviour Trees are a bit more than that. They also affect/advise how you structure your project (not component-based but behaviour-based: e.g. you don't have a "controller" component in your architecture to handle all locomotion, but "go_to_pose" or "turn_around" behaviours), plus there is tooling provided around those architectures.

The design of Behaviour Trees was not an answer to a lack of coroutines/generators (although they were missing from the C++ standard until C++20) but an alternative to state machines.

Although I agree that the "tick"-based BT implementation is outdated and should be completely replaced by coroutines and mandatory yields (that's what I did in my implementation), there is still value in behaviour-based architectures in robotics. They make building complex/reactive behaviours much easier than component-based ones.


Had a go at writing a game a while back, purely for fun, reinventing the wheel to learn about what makes writing games hard. The command pattern here is a really great way to solve a persistent difficulty I had (largely the same one discussed in the article). I have certainly missed the point of the article but that is a great takeaway for me personally.

Funnily enough I have used the command pattern before to automate sequences of steps in a workflow. Back then I kind of stumbled on it - it is super effective for certain kinds of things.


I don’t understand why doing this with normal Java is difficult. Just have a list of objectives. Each objective is a class. The autonomous loop picks the next objective from the list and, each tick, asks if it has finished; if so, it gets the next objective, and so on.

All the details about the actual commands to complete the objective and checking the state and so on go into the classes.

If no more objectives in the list, mission accomplished.

You can also easily test each objective independently.

Maybe the trouble is trying to fight abstraction so hard in the first place.
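
For what it's worth, that structure is tiny in any language; a sketch of the same idea in Lua (names borrowed from the article's examples, timer handling assumed):

    -- Each objective exposes execute() for one tick of work and isDone().
    local objectives = {
      { execute = function() drivetrain.arcadeDrive(-0.5, 0) end,
        isDone  = function() return drivetrain.getDistanceInches() <= -48 end },
      { execute = function() intake.grab() end,
        isDone  = function() return grabTimer.get() >= 2 end },  -- assumed timer
    }

    local current = 1
    local function autonomousPeriodic()
      local objective = objectives[current]
      if not objective then return end        -- no more objectives: done
      if objective.isDone() then
        current = current + 1
      else
        objective.execute()
      end
    end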


Having taught FRC students to use Java: when you're talking about people with very little programming experience, the multiple-class abstraction is itself an obstacle to accomplishing the goal.

I can't tell you how many times students have gotten frustrated trying to understand why you have to pass arguments into the constructor or, for that matter, why the constructor is different from other method calls. "But we already said 'drivetrain' in the constructor, and over here in this other class. Why do we have to say m_drivetrain in the class also and do m_drivetrain = drivetrain?" And there isn't actually a better answer than "in other languages that learned from Java's mistakes, you don't. But we happen to be using a language that dates back to when Animaniacs was teaching kids the names of all the countries, so some parts are just bad."


> the multiple class abstraction is itself an obstacle to accomplishing the goal.

Not if your goal is to learn about object-oriented programming!

> I can't tell you how many times students have gotten frustrated trying to understand why you have to pass arguments into the constructor or, for that matter, why the constructor is different from other method calls. "But we already said 'drivetrain' in the constructor, and over here in this other class. Why do we have to say m_drivetrain in the class also and do m_drivetrain = drivetrain?" And there isn't actually a better answer than "in other languages that learned from Java's mistakes, you don't. But we happen to be using a language that dates back to when Animaniacs was teaching kids the names of all the countries, so some parts are just bad."

Do they also have nervous breakdowns every time they spell or read the word "knight"? The etymology of a language can be interesting, and they're welcome to look it up on their own time, but the sooner they learn that all of these decisions are arbitrary, the better.

When I was first learning how to use FreeBSD I was equally confounded by trying to apply logic and reason to how the commands looked and worked. Once I just accepted that it was no different from questioning why the buttons were on the left side of the toaster rather than the right, or why "sought" and "sort" are two different words pronounced the same, my life got a whole lot easier.

When I'm teaching kids about code and they ask "why ... " it's an opportunity for them to learn that oh-so-important lesson that in almost all cases the design decisions in software are totally arbitrary or of such obscure etymological origin that, unless you actually want to be a computer science historian, the best answer is "because".


They're students, of course they ask why.

And while nobody had a nervous breakdown, yes, the need to pass references around extremely redundantly, because this language lacks any way to establish a global context and has no top-level non-class variables, was a continuous impediment to their ability to accomplish the task.

I've seen some good suggestions in this topic though; I think next year I may recommend a top-level container class that has init called on it one time and can then be referenced in all other class files. It's the closest thing to global variables Java has to offer, and would save them a lot of hardship passing references around for no other reason than passing references around.


Right, yeah that sounds like the task for which I typically use a Singleton:

https://www.digitalocean.com/community/tutorials/java-single...

Things like connections to databases, network resources, motors and those types of things.


It's addressed in the article... you sort of end up with a meta-programming language when you do that, which ends up being less ergonomic than your initial code. Then you have questions like: how do you do control flow within that list? Can you branch or loop over multiple objectives? If you add support for that, you end up even closer to a meta-programming language, with worse syntax than if things were directly in the base language.


> you sort of end up with a meta-programming language when you do that, which ends up being less ergonomic than your initial code

I would say that what you end up with is a program.

> Then you have questions like how do you do control flow within that list

You don't. Each objective has an "isDone" method. That method returns true if done, false if not. If unable to complete its objective for some reason, throw an exception. Seems like pretty canonical object orientation to me.

> Can you branch or loop over multiple objectives?

No, but you could easily have an objective that groups together multiple smaller discrete objectives.

> If you add support for that you end up even closer to a meta programming language, with worse syntax than if things were directly in the base language.

Again, I just think you end up with a program, using the syntax of the language in which you're writing.

Like if you write a story, you end up with a story, written using the language in which you wrote it.


I had a similar revelation a few months ago when working on a little game in Lua.

Since a game runs as a loop, all synchronous code needs to be able to execute within one frame.

So you end up making overcomplicated state machines to represent processes that last for multiple frames.

Coroutines make writing this code sooooo much easier. You can actually start to see the logic again at a glance rather than having to dive into a big stateful mess.


I've been playing with JavaScript generators recently, and the ability to "pause" a function is exactly what I'm using them for. The goal is to create an animation of how a maze generation algorithm works, so it lets me write loops like I normally would but allows the function to pause while the website re-renders the current state.

When I create games, there's also a similar tick() function that's called every frame to handle the main logic, and normally to handle logic that needs to execute over multiple frames I write state machines like the article talks about, coupled with promises to allow things to chain off each other. This article might change how I write that code??

I'm also thinking that instead of using a coroutine and yield(), you could use asynchronous functions and "await nextTick()". Mostly for my own sake, trying to think of the differences between the two:

- Coroutines allow the decision of when to execute them to be made outside the function, whereas asynchronous code goes off and does its own thing. With the coroutine approach you fit into the normal code flow of having a tick function that does something every frame, so that seems better.

- This approach only supports one coroutine at a time, whereas it's easy to kick off parallel operations with async code. Not being able to do things in parallel is probably desirable for a robot like this, as otherwise you might accidentally try to move towards two different goals at the same time.

- You can interrupt a coroutine by just not calling it anymore. Async code generally can't be interrupted by an external function. So it would be easier to switch to doing new behaviour.

- Composability. A quick search seems to indicate that support for nested coroutines in Lua is not very good [1], which means that you can't split your main logic into small functions. In my language, JavaScript, this isn't a problem thanks to yield*. So it seems like asynchronous functions might win out there. (But see the sketch after this comment.)

Hm. More to think about. I suppose I'll try it on my next game jam.

[1]: Is this the only way to yield all the results of a second coroutine in Lua?? https://gist.github.com/nicloay/2b893b3de1d964dcc92023c6318a...
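
For what it's worth, a plain function called inside a Lua coroutine can call coroutine.yield directly (Lua coroutines are stackful), so splitting the main logic into helper functions works without anything like yield*. A forwarding loop is only needed when the inner piece is itself a separate coroutine; a sketch of both:

    -- Helpers can yield directly, so the main logic can be split up freely.
    local function driveUntil(done)
      while not done() do coroutine.yield() end
    end

    local main = coroutine.create(function()
      driveUntil(function() return getDistanceInches() <= -48 end)  -- assumed sensor
      shoot()
    end)

    -- Only if the inner piece is a *separate* coroutine do you need to
    -- forward its yields by hand (roughly what yield* does in JS):
    local function delegate(co)
      while coroutine.status(co) ~= "dead" do
        local ok, value = coroutine.resume(co)
        if not ok then error(value) end
        if coroutine.status(co) ~= "dead" then coroutine.yield(value) end
      end
    end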


Maybe I'm missing something, but why doesn't the Java section right before the Lua section work? It looks like normal procedural code that can just keep running. The Lua version is just one coroutine and it's not yielding to anything else. Is it just a matter of some kind of timing constraint/controller from FRC in the background that needs to keep calling myAuto?


The robots operate on N ticks per second. You only get one set of robot inputs per tick and can only make one command.

The Java code is tickless. During a single tick none of the while conditions will change.

Therefore the while loop will run forever and never allow the tick to finish.

The Lua code yields at the end of each loop iteration, allowing the tick to complete. Then, on the next tick, whatever is orchestrating the robot calls resume on the coroutine, allowing another iteration to continue, this time with new inputs.
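
A minimal sketch of that orchestration (simplified; myRobotBrain stands in for the student's sequential code, and the real robot framework does more):

    -- The student's sequential autonomous code lives in one coroutine.
    local auto = coroutine.create(myRobotBrain)

    -- Called by the framework ~50 times per second. Each call gives the
    -- brain exactly one slice of work, then returns promptly.
    local function autonomousPeriodic()
      if coroutine.status(auto) ~= "dead" then
        local ok, err = coroutine.resume(auto)
        if not ok then print("autonomous error: " .. tostring(err)) end
      end
    end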


autonomousPeriodic is a tick function; we need to keep ticking. If we spend more than our allotted 20ms in a single autonomousPeriodic call, the robot framework’s safety systems kick in and disable all the motors.


Ah okay that makes sense, thanks! It may help to make that note near the final java example.


> We were basically creating a crappy programming language out of Java classes.

At least the pedagogy is accurate. From UIs to database accessors, coding modern Java basically is creating a crappy programming language out of Java classes.


Something that coroutines made a big impact on for us was testing. Multi-step integration tests became a breeze. With state machines, each test would need its own FSM, and callbacks would make the flow hard to read.


The Command pattern looks like transactional programming which is used in SystemC and similar systems-level programming languages. The idea is that transactions or commands have properties like atomicity and so forth to deal correctly with concurrent behaviors.

Language support may be the deciding factor for young students, but conceptually the more interesting engineering debate would be which paradigm is better for a given purpose, coroutines or commands/transactions assuming the programming language can cleanly express both.


I agree, the shift from Java's state machines or "command" system to Lua's coroutines does seem to make the code more intuitive and readable for beginners. Lua's built-in coroutine functionality can be a game-changer for FIRST teams dealing with the complexity of autonomous code. It would be fascinating to see how this technique could be applied to other areas of programming where tasks need to be paused and resumed. It's a testament to the versatility of coroutines.


It's really neat to see some FRC code here, lots to be learned from student robotics competitions! Shameless plug: I work on [PROS](https://github.com/purduesigbots/pros), an open source programming environment for VEX. We've talked about adding coroutine support there, this article is an additional push for getting that done!


Very cool to see. I had worked on something similar but in the context of JavaScript a few years ago (https://arxiv.org/abs/1909.03110). Without coroutines/continuations, it really would have been impossible to get people up to speed in the time we had (one week).


Although the procedural blocking style is definitely a lot easier, you don't need language support for coroutines to get it. They could have implemented this in Java with just two threads: one manages the robot, and the other (main) thread blocks while waiting for commands to execute. The two threads swap messages using a linked blocking queue.


"Deep coroutines:" where you can yield from a function called from the coroutine. Lua supports this, but python doesn't (as far as I can tell). Is there a term for this?

To the author: you could make the code even cleaner by moving the yield to within the action functions. Though maybe this won't work as well for parallel actions...


In practice we actually do; I had to simplify for the article. We have a few utilities like a `runUntilDone` that make simple sequences easier to write. Example: https://github.com/frc-2175/2023RobotCode/blob/main/src/lua/...

I suppose we could make more utilities for running coroutines "in parallel", but I haven't really felt the need. At that point we usually have to worry about exit conditions and it feels natural to just write a loop.
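
A hypothetical helper in that spirit (not the author's actual utility; the signature is invented for illustration):

    -- Run an action every tick until a predicate says it is finished,
    -- yielding in between so the robot loop keeps ticking.
    local function runUntilDone(action, isDone)
      while not isDone() do
        action()
        coroutine.yield()
      end
    end

    -- usage inside the autonomous coroutine:
    -- runUntilDone(function() drivetrain.arcadeDrive(0.5, 0) end,
    --              function() return drivetrain.getDistanceInches() >= 0 end)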


For Python generators, you can yield from called functions by using yield from as in this (quick, not stellar) example:

  def first():
      yield 1
      yield from second()
      yield 4

  def second():
      yield 2
      yield 3

  print(list(first())) # collects all the results and prints them
  # Output: [1, 2, 3, 4]
But yeah, it doesn't work on a direct function call; you have to know it's going to return a generator (or an iterable, like if it returns a list):

  def something():
     yield from something_else()

  def something_else():
     return [1,2,3,4]


Oddly, the kids I mentored in FIRST loved the command/subsystem framework. I kept trying to convince them to do some procedural code to make things simpler (they had little experience coding, even for high school robotics kids), but command/subsystem was their comfort zone and they didn't want to leave it.


It makes sense for students that can make the logical leap that "things the robot can do" are objects too (which is only a tiny step from the nicely-recursive "code can just be an object").

... but not everybody is ready for that step.


So many similarities with game development: central loop, coroutines, character state machine etc.


An autonomous robot is just an NPC in the real world.


Leaky abstractions and all aside, the coroutine code is unreadable to me after the clean-looking commands. :|

yes, new Java(new Java(1), new ...) is bad, and asynchronous programming paradigms are usually fitting for robotics, but abstractions are good.


Coroutines are an abstraction though.


Related discussion: "Notes on structured concurrency, or: Go statement considered harmful"

https://news.ycombinator.com/item?id=16921761


Maybe coroutines could become syntax for general hierarchical finite state machines if we manage to implement serialization of the current execution state.

I'd really love to see async/parallel language based on these ideas.


Great article! Thanks. I will definitely think about this approach when teaching my kids.

I'm currently watching some videos by David Beazley, and he does several talks on coroutines in Python - very much recommended.


Is there such a thing as universally easier to grasp patterns? Maybe there are just brain types that map better to certain programming abstractions and we just have to accept that.


Yes, threads and co-routines == sequential code, while callbacks == continuation passing style code. Our minds like sequential thinking.


Ah, coroutines, essentially glorified goto. My 10keV take is that Dijkstra's paper, while probably right about goto back then, has cursed us, and we are still cursed to this day. Some logical structures map well to just using goto in a series of steps, but because of this allergy to goto, we're stuck where basically jumping into a random place in a function is "novel."


Coroutines/continuations are not “basically jumping into a random place in a function”. It's pausing the execution of a function in a well-defined place with well-defined, understandable semantics for resuming the execution.


If you want to make it easy for high schoolers, just don't use java in the first place...


The only officially supported languages for the competition are C++, Java, and LabVIEW. When those are your educational options...you stick with Java.

(We've now switched to Lua, integrated with the official C++, but it's a lot more work behind the scenes.)


What's the hardware/OS stack you have for the autonomous part? Can you go wild and use unsupported software as long as it fits in the official hardware?


The largest constraint on this (in the context of the FIRST competition) is that community matters at least as much as technology.

If you're using Java (or C++ or LabVIEW, to a lesser extent), you at least have a hope of posting to a forum and going "Hey, we think our robot should be doing X and it's doing Y, here's our code, any insights?"

If you use <bespoke language X> and your own custom bindings, you're on your own.


We can use whatever software we want as long as we use the specified hardware (the NI roboRIO), FIRST's latest firmware, and comply with the match system and safety systems that enable and disable the bot at different times. Many teams have been using a community Python alternative called RobotPy for a long time despite it not being officially supported and FIRST had no problem with it.


Why not Python? I thought that was the popular language for learning these days. My son gets quite a bit of Python in school.


Python is slated to become a choice for the 2024 season: https://wpilib.org/blog/bringing-python-to-frc


If it is that flexible, can you have a C++ program that runs a python interpreter as a module?


We learned Borland Pascal and Borland C in high school. Java is perfectly fine.


I went to a university where Java was the main language for most of the basic programming courses

I've been making a living coding in Java for 10+ years

I'm a strong believer in types etc etc

I still don't think Java is a great language to teach programming lol

too much pointless boilerplate and abstraction to achieve the simplest things

lots of footguns and objectively bad standard practices built into the language and the native libraries

to me Python and Kotlin seem like probably better choices (though they too have massive flaws)


I've written enough Java at this point to think of it as the assembly of OOP languages.

You can write code in it, but in industry almost everything I see is basically a DSL built out of annotations that tries as hard as it can to not involve writing actual Java.

When more of your application is annotations than lines of Java code, the language is probably a bad fit for the problem domain.


What's Kotlin's massive flaw?


To me the whole point of Kotlin is fixing all the things that are wrong with Java lol

But idiomatic Kotlin is still too close to Java, they call it "direct programming style" and it's basically already obsolete

Null safety is great but the way they've implemented it with weird syntax like ? and ?.let is not great, it would have been a lot better to just have a regular Maybe type in the native library from the start. Like sure the nullable type is in some ways functionally equivalent, but it doesn't implement map, flatMap, applicative etc like a Maybe does, and when it behaves like it does, it's mostly kind of by accident, not by any real understanding of those operations (and, fatally, it's completely disconnected from map, flatMap etc on lists etc - those should be interfaces that everything that can be mapped, flatmapped etc on implements, not methods that get hurr durr added here and there, sometimes as a regular method called map, sometimes built into the compiler called let)

They've completely dropped the ball on error handling - they don't even know themselves whether the language or people using it should use sealed classes, exceptions or their shitty Result class (which, again, this is a solved problem, they could have just included Maybe, Either, Try etc in the native library from the start)

Coroutines and structured concurrency are powerful but their API is so extremely confusing and full of footguns that virtually no codebase using them ever uses them correctly to reap the benefits

I could go on - don't get me wrong, I love a lot about Kotlin and it's a massive massive improvement on Java but it's not a great language unless you run it with Arrow and learn to completely sidestep large parts of the language (just like you had to do with Java)


Don't get me wrong, I miss Turbo Vision-based UIs and an IDE that fits onto a single floppy disk. But the JVM is a professional tool to begin with. So if you cannot start with a toy such as golang or python at least go with Kotlin.

Borland Pascal already had reasonable static types, classes, pointers, modules with a public API, and blinding compilation speed in the early 90s. All the basic building blocks to grok. Golang seems to be the closest modern-day approximation.


I learned assembly in high school, and programmed my first FRC machine in NBASIC (a variant of BASIC built specifically for the robot controller back in the day that, among its other delightful quirks, had if statements that were only allowed to be followed by a label and no else clause).

Java is "fine" in the sense that NBASIC was "fine." There's definite room for improvement in matching the abstraction to the problem domain.


Having learned Java in high school and later taught Java, Python and C++ to high-schoolers, I can say things have improved greatly in Java. The days of memorizing the entire BufferedReader try-catch snippet are over. The new file and stream APIs are almost as clean as in Python, and making GUIs (Swing for basic, JavaFX for advanced) is even easier than Tk in Python.

For me, it boils down to a choice between typed and "non-typed" languages, where the best typed option is Java and the best "non-typed" are Python or maybe JavaScript (NodeJS). Everything else is some combination of not mainstream enough, not powerful enough out of the box or requires too much knowledge to get started. Java and Python read like English, have most of what you'll need included and are very useful languages for students to get their first internships/summer jobs working with.


Arguably this devalues the programming they are learning, since coroutines aren't as generally applicable.


In a world where most programming languages have async/await style abstractions this stuff is pretty applicable IMO



