The Mythical Non-Roboticist: Wouldn't it be great if everyone could do robotics? (ieee.org)
181 points by rbanffy 5 months ago | 153 comments



More generally, stop copying the smallest-common-denominator DX/UX of <famous mass market product> when designing for other markets.

I recently overheard a very smart user struggle to find how to mute a group in Teams.

And while I get it that the context menu is probably less "cluttered" if one doesn't include "mute", and I get that there might be confusion if someone mutes a channel without being aware, I still want modern UX to stop and think.

When decluttering means you end up having to search the internet for a recipe as to what place to click to find which flydropping menu, was it worth it?

Or when your user base consists of people who go out of their way to avoid Chrome, maybe stop and think whether copying Chrome at every step is a good idea?

Same goes for a certain distro that at some point decided its users were just people who couldn't afford Macs, and threw out an IMO rather well-functioning Gnome 2 setup for what I consider a clone of the Mac OS X desktop, one that had almost all of the problems of Mac OS X but, unlike it, couldn't run Mac OS X software.


> I recently overheard a very smart user struggle to find how to mute a group

> you end up having to search the internet for a recipe as to what place to click to find which flydropping menu

I remember when software was supposed to remove pain points in my life, but now I'm not so sure. I often contemplate whether switching from Spotify to the CD player in my car will save me time, but I'm fairly certain it will save me aggravation.


I've had the same thought about alarms.

Back in the day I had an alarm clock. I set it to go off at a specific time and it did. The only failure mode was the battery running out.

Then I got a Nokia phone. It worked the same as the alarm clock, but I could even set myself notifications and what not. Worked great!

Then I got an Android phone and sometimes the alarms just wouldn't work.

And now today I'm using another Android phone and I give it about 80-20 odds that the alarm works correctly. Sometimes it doesn't work at all. Sometimes it does a quick alarm for a few seconds and stops. And I don't know why.

Notifications on this are even worse. Sometimes the notifications are so unnoticeable that sitting next to the phone with headphones on will not actually alert me to the notification.

I don't see others ever complain about it so I feel like I'm somehow missing something big but I don't know what. Notifications and alarms on modern phones seem extremely unreliable to me.


My father was in the hospital recovering from a heart attack. We were staying with him in shifts, and when I was "off", I watched the last episode of Succession. I made sure my phone was on the chair next to me and the notification volume was all the way up.

After the episode he was dead and I missed the whole thing b/c somewhere along the way "press volume button at home screen until it was full" did not actually turn notifications on anymore. I've never been so mad at UI/UX.

Turning on ringer / notification volume requires pressing volume button once, then clicking the equalizer, then dragging the notification volume up to desired level. There's no indication anywhere that notifications are silenced. I have a google pixel 4.


I'm sorry for your loss.


I have been using the stock alarm app on an Android phone daily for at least 10 years and I've never once had it not work as expected.

I can't fathom how it could possibly be failing that often for you.


I encountered this kind of inconsistency with my Android phone, too (Samsung Galaxy S22). I think every time my alarm failed to go off it was because the phone had automatically updated its OS and restarted overnight, and background apps like the alarm wouldn't run until I entered my pin to finalize the phone's startup process.

I've usually used my $3-4 alarm clock for waking me up in the morning, and then my phone timer for naps.

Now that I just took a closer look, I was able to find a way to disable automatic updates on the phone. (I had to find and tap "Software update > System Update Preferences > Smart Update" in the Settings app.) But I like the alarm clock, so I'll probably keep using it anyway. Better that my phone isn't the first thing I interact with every day.


One thing that Apple has got right with the iPhone: every time it's installed an update overnight, restarted, and is waiting for me to enter my PIN to unlock it, the alarm still works.


Same here, switched to a physical alarm clock after inconsistent alarm behavior post OS updates on my Samsung S22.

I disabled Smart Update after reading your post but considering my phone used to prompt me to update, then started doing them on its own - I wouldn't trust that setting to stick.


I have S22 but don't use alarms much. However, my wife has S23, and this very issue is something I've been banging my head on just last week! Her alarm clock would occasionally not ring, but instead the phone would give a few beeps. My wife has a bunch of stacked alarms in 10 to 30 minute intervals, and I've listened to all of them going "beep beep beep <dead>".

I don't know what's going on there; I've read hints that for some people, their phone thinks it's in a call, and manifests this behavior in that situation. Some reports blame Facebook Messenger. What I know for sure is that it isn't restart- or update-related.

And yes, it's beyond ridiculous for this to be happening in the first place. It might just become a poster child of how idiotic tech has become. For the past decade or so, it feels that each generation of hardware and software, across the board, is just fucking things up more - even things you thought were so simple and well-understood you couldn't possibly fuck them up, like alarms or calculator apps.


Time zones. I've had trouble while traveling when my phone decides to change time zone, shifting the alarm times in unrequested ways. I would appreciate a phone that could properly understand GMT, in a way that would allow me to set an alarm at a specific GMT time regardless of which timezone I step into. (Yes, I'm sure there are XYZ apps that can do this, but I don't see why the base OS can't handle this without installing more apps.)
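An alarm pinned to an absolute instant rather than to a wall-clock time can't be moved by a timezone change. A minimal sketch with Python's stdlib zoneinfo (the date and zones are arbitrary examples, not anything a real phone OS exposes):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Pin the alarm to an absolute instant: 06:30 UTC on some date.
alarm_utc = datetime(2024, 6, 1, 6, 30, tzinfo=timezone.utc)

# However the phone's local zone changes, the instant is unchanged;
# only its local wall-clock rendering differs.
in_london = alarm_utc.astimezone(ZoneInfo("Europe/London"))
in_tokyo = alarm_utc.astimezone(ZoneInfo("Asia/Tokyo"))

assert in_london == alarm_utc and in_tokyo == alarm_utc  # same instant
print(in_london.strftime("%H:%M"), in_tokyo.strftime("%H:%M"))  # 07:30 15:30
```

A phone storing alarms this way would ring at the same moment no matter where you step off the plane; storing "06:30 local" is what produces the unrequested shifts.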


One problem with Android phones is that they're all different, sometimes greatly so, because the different phone vendors customize them with their own software, much of which is utter garbage. The stock alarm app on mine has never failed me either, but with some other phone, who knows?


> I can't fathom


If I had a choice between my mom's 30+ year old windy-uppy egg timer vs. the abomination that is the digital interface on my stove, I would take the egg timer every single time.

Unfortunately my sister keeps stealing it (back) every time she visits!


I bought myself one of these, and then had to buy a few more when family or friends wanted to take it:

https://www.aliexpress.com/item/1005006912104394.html

It's _so_ much better than my phone as a general duty kitchen timer. The best thing is I can just leave it going off and it'll just run down the spring and stop. _So_ much better an experience than needing to get my phone out of my pocket when my hands are covered in whatever I'm cooking.


in the kitchen is where I've found voice timers to be the best. there's an unlimited number of them, I can name them, and I don't have to touch anything to set them. which is great because I'm not very good so my hands get messy all the time. "hey Siri/Google/Alexa/Bixby set egg timer for 3 minutes"


Just buy a new mechanical egg timer? They still make them.


A long time ago, on my first Android phone, the alarm used to fail on me.

Since then, all of the failure modes I knew about were fixed. Now I think the only thing that makes an alarm fail on Android is if you run out of battery charge. Of course, that doesn't make me any less paranoid about it.

But anyway, the notifications seem to only get less reliable with time.


The iPhone alarm app is an oddly circuitous process as well. I just don’t ever trust that I’ve set it correctly. Just let me type the time and move on. It’s also unclear if silent mode will override the alarm having sound.


Recently switched from Android to iPhone and was dismayed that Apple doesn't have the "Alarm set to 8 hours from now" confirmation. Such a simple and effective UX. Overall I was frustrated with Android but it gets a few things right.


You want what iOS Clock app calls a "Timer" not an "Alarm". The rightmost icon in the bottom nav bar lets you set "8 hours from now" really easily.

(Not saying that makes your comment wrong, just pointing out the precise way you need to "Think Different" to make your iPhone work for you.)


No, I want to set an alarm at 6:30 am because I know I need to be at work early at 8am tomorrow. The alarm time is not derived from the current time in any way. "8 hours from now" is helping me verify that I didn't mess up the day or am/pm.

If I used timers I would have to carefully subtract the desired wake up time from the current time. It's not the same at all.


Ahh right. I somehow missed the implication of the word "confirmation" there. Sorry for jumping to conclusions...


fwiw, you can ask Siri to set a timer to go off at 8 am and it'll do the math for you. but then it's a timer not an alarm, for better or worse.


I have a recurring problem where my Apple Watch won't sound the alarm if my sleeve or blanket is covering the screen. Like it won't even do haptic feedback. This has made it utterly useless as an alarm that's supposed to wake me up.


You can tap on the time wheels and a numeric keypad will pop up. Not the most discoverable UI, but very welcome.


I had the other issue. I was at a standup comedy show with my phone appropriately silenced because I knew I could ignore the alarm this once.

And the alarm still went off making noise. I managed to turn it off superquick such that the comedian got as far as looking in my general direction and saying I was lucky I was fast because he couldn't pick me out exactly.


You must be missing something. I've never had my alarm fail. Are you accidentally muting the alarm volume somehow?


No idea. How would I find out that I've set all of the knobs correctly?

That's kind of the problem I think.


I'm in a similar boat. I set everything (I think) correctly and fail to hear notifications. But only sometimes. When I check the settings they are subtly different.

I think it's something to do with how I listen to podcasts but haven't been able to work it out.


It's right along side the other volume controls.


I hope you do understand that a system that can play almost every song ever recorded is going to have a more complex and error-prone user experience than a static 50 songs.


Teams has probably the worst notification handling, in my experience, of all the major systems in that space.

It's not just muting; it's that it's either going to show me big honking unread counts begging for my action, or I have to mute the channel and, as far as I know, mute all notifications from it, even when someone explicitly pings me.


You can get a notification when someone explicitly pings you by enabling an activity for it... (yeah, I know)

My issue is stuff like it disabling notifications on the desktop because I turned my phone face-down, or meetings disappearing because... well, I have no idea why. And it delaying messages for hours; again, no idea why.


Interactions between desktop and phone app were a whole extra kettle of electric eels.


It has all the worst everything in my experience out of all major systems in that space.


Yes, UI's (physical and software) have, in a lot of cases, gone off the rails.

One of my examples is the change to the button on the side of iPhone 15 used to switch between vibration and sound-on. I press it all the time by accident just by grabbing the phone. And, often enough, I have no idea I did. So, the phone doesn't ring.

Same with the ringer volume up/down buttons. Can I disable them in the lock screen? I don't think so. Once again, the ring volume is modified and you have no clue.

The other one, which is a UI issue in the sense that there does not seem to be an obvious way to deal with it, is what happens when I connect my Plantronics single-ear headset. The volume is reduced to around 15%, which is fine. However, when I turn it off, the volume does not return to the prior setting. Which means that EVERY TIME I use the headset for a call I have to remember to raise the volume or the ringer will not be loud enough when the phone is in my pocket.

My criterion has become simple these days: if you have to google how to do something with the UI of a piece of software or hardware, something could be wrong. I am not including complex tools like Maya, Solidworks, etc.; let's call those professional tools. Anything else just needs to make sense. I'm not sure how anyone thought that not restoring the volume to the pre-headset level was a good idea, or having live buttons in lock mode that you cannot disable (or at least alter the way they respond in some reasonable way).


If you ask any UI or UX designer about the interface of Teams, absolutely zero will say it's good. You're giving it the blessing of designers based on your assumption that designers approve of MS UI and UX practices. They don't. Sure, some flagship products are pretty decent, but they've famously sucked at UI (and all other sorts of) design since forever. Here's one (made by an MS designer as a joke) from 2006 talking about their entirely marketing-department-driven packaging design: https://www.youtube.com/watch?v=EUXnJraKM3k

We use SO MANY interfaces every day-- every screen on every app and every webpage you use... text messages, ATM, ordering terminals, shopping carts, ad infinitum-- and using those is almost universally intuitive enough to be invisible. Designers use things like implied lines, gestalt, type variance, value contrast, and things like that to make a sane data hierarchy and give users signals for what's happening in the program and what they need to do to influence that. Good practices say they should then test and refine their designs for various kinds of users to make the best interface possible. If the UI design is invisible and users can just solve their problem without having to think about it, that's good design.

When design "sticks out," it's usually for the wrong reasons. While bad designers exist, 99% of the "bad design" decisions cited by developers when talking trash about modern UI design were probably not made by designers. They were likely made by project managers, or by developers who insisted their way was better or figured they didn't need to check with the designer to modify something because they can make it "look designed." If the interface looks "cluttered," they might remove useful things that probably just needed to be organized. That's bad design.


I don't get the point of this semantic distinction between the people producing (bad) designs and the ivory tower label you're calling "designers". If they're producing designs, are they not designers too?

If none of the people with the title "designer" are meaningfully impacting products, who's employing them and why do they not have the same obligation other technical experts have of ensuring that meddling managers are steered in the direction of correct decisions?


> I don't get the point of this semantic distinction between the people producing (bad) designs and the ivory tower label you're calling "designers". If they're producing designs, are they not designers too?

Is everybody that has written code a software developer?

>If none of the people with the title "designer" are meaningfully impacting products, who's employing them and why do they not have the same obligation other technical experts have of ensuring that meddling managers are steered in the direction of correct decisions?

As I said, you use so many interfaces every day that are so well designed you don't even notice them. That's good interface design, and executing it takes years of learning and practice. The developers who think their interface design skills are objectively good because of what they've gleaned from working with designs and knowing how they're implemented are like the designers who call themselves web developers because they cargo-cult copy and paste code from tutorials into WordPress plugins. Their end goals are largely the same, but to equate them is pretty ridiculous.


Designers try? It's not like engineers can always prevent bad management from interfering with a good product.


> When decluttering means you end up having to search the internet for a recipe as to what place to click to find which flydropping menu, was it worth it?

Strong disagree here - let me use a real world example.

On older Lexus models, the car stereo screen had play and pause buttons.

It always bugged me. Why have BOTH a play button and a pause button? The play button doesn’t do anything if you click on it when a song is playing, and the pause button doesn’t do anything if the song is already paused.

What Lexus should have done instead was have one button that changes state depending if something is playing or paused. It would take up less screen space and less cognitive load trying to move the cursor while I’m driving 75mph.

And on a personal note, Teams is a horrid interface. But I’m not here to argue that today.


> Why have BOTH a play button and a pause button? The play button doesn’t do anything if you click on it when a song is playing, and the pause button doesn’t do anything if the song is already paused.

Because it's hard to hit a touchscreen button precisely once, especially when you're driving. And if a song is quiet you may not be able to tell whether it's playing or paused right now. Having a button that will always make it playing and a button that will always make it paused is good design.
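The underlying principle is that idempotent commands tolerate unreliable input in a way a toggle doesn't. A toy sketch (the Player class is invented for illustration, not any real car stereo API):

```python
class Player:
    def __init__(self):
        self.playing = False

    def toggle(self):
        # One combined button: the effect depends on current state.
        self.playing = not self.playing

    def play(self):
        # Dedicated buttons: idempotent, always land in the named state.
        self.playing = True

    def pause(self):
        self.playing = False

# A bumpy road makes a touchscreen register one press as two.
p = Player()
p.toggle(); p.toggle()      # intended "play" ends up paused
assert p.playing is False

q = Player()
q.play(); q.play()          # the accidental double press is harmless
assert q.playing is True
```

With dedicated buttons you can hit "pause" without looking, or without knowing whether a quiet song is currently playing; with a toggle, you have to know the current state to know what the button will do.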


There is no feedback from the buttons at all. If I ignore the rest of the screen, I can’t tell if anything is playing or paused. The buttons do not change appearance in any way.

That makes it even worse, as now I have to look at multiple places on the screen while I'm driving. That the buttons are small makes it more annoying. A single, larger button that changed state would be much, much better.

Side note - it’s not a touch screen. It has a joystick controlled pointer.


> There is no feedback from the buttons at all. If I ignore the rest of the screen, I can’t tell if anything is playing or paused. The buttons do not change appearance in any way.

All the more reason to have two buttons. If you want it to be playing, you hit play, if you want it to be paused, you hit pause. Again that sounds like good design - really anything that's meant to be operated while driving ought to be usable without having to look at it.

> it’s not a touch screen. It has a joystick controlled pointer

Ok that sounds like the problem, not the buttons


So, you're saying that tech that didn't exist was the problem, not the design for the tech that did exist???

That makes no sense.


I'm not sure you disagree as strongly as you think. I don't feel like the OP was suggesting that all decluttering is intrinsically bad. In the case that you describe it's pretty easy to argue that it's good because it made things easier to use (or at least harder to fuck up because you don't have to figure out which button is useless).

Imagine instead that Lexus decided to get rid of both buttons and place them under a hamburger menu.


> Imagine instead …

Aghhhhh!!! Now you are trying to give me nightmares!!!

:)


> Same goes for a certain distro that at some point thought that their users was just people who couldn't afford Macs and threw out an IMO rather well functioning Gnome 2 setup for a what I consider a clone of the desktop of Mac OS X that had almost all of the problems from Mac OS X but unlike couldn't run Mac OS X software.

Ooh! Ooh! Lemme guess! Are we talking about… Deepin Linux? Or is it Elementary OS? I hope it’s not Pop OS, I’m on that now but haven’t seen COSMIC yet


My guess would be Ubuntu or Fedora and Unity/Gnome 3 respectively. Not that I share the hate, but Gnome 3 is certainly closely designed around some distinct Mac OSX design elements (app overview, handling of workspaces, and such).


Elementary OS was actually nice last time I used it.

It looked like a Mac but didn't copy every stylistic and UX choice from the '90s that doesn't belong in the 2020s, like space-saving menus that have your mouse traversing both monitors to reach them, or the mental overhead of Cmd-Tab that makes me stop and think every time whether I want to switch apps or windows.

I was talking about Unity. A system that had a working alt-tab built in, but was so insistent about us learning the Mac way that there was no way to toggle off the broken window switching except by hacking the configuration directly.


Honestly this feels like trolling. Right-click on any group in Teams and the channel notifications option is there, which brings you to this UI:

https://i.imgur.com/z8BnaTn.png

I question how smart this "very smart user" actually is.


Well, I use Teams daily at work, and this is the first time I've seen this particular panel, so shrug.


Magically appeared now, thanks!


This article reads as a rant about ROS (Robot Operating System) without mentioning ROS once. In its quest to make robotics simple, ROS made JSON a programming language and built a massive ecosystem of abstracted complexity that breaks in undebuggable ways.


ROS isn't an "operating system". It's a piece of middleware. The purpose of ROS was to set up an intercommunication standard for academic robotics projects, so various components (vision, motion planning, etc.) from different sources could be connected. It's for prototyping, not production.

Attempts to make robots easier to program in industry involve "teach pendants" and schemes where you push the robot through the desired motions to record a path. This works in well-organized work cells. Outside that, not so much. Amazon, despite major efforts, still hasn't deployed robotic bin picking much. Their AGVs, though, work great. Amazon is building a thousand units a day of those little Kiva moving platform robots that carry stuff around their warehouses.

There was Rod Brooks' "Rethink Robotics", with their semi-intelligent "cobots". That was a flop. But it did lead others to make better small robot arms.


> There was Rod Brooks' "Rethink Robotics", with their semi-intelligent "cobots". That was a flop.

After that, Rod Brooks founded Robust AI, which is the employer of the author of TFA.


We actually did use ROS in production at a previous job a decade ago.


You couldn't have described it better. ROS is painful and makes you actually hate robotics. My conclusion was that it was written only to be used by its own developers and researchers. Documentation on the most useful plugins (SLAM and AMCL) is really bad to near non-existent. I felt like all this was for some elite: either you have a very strong foundation and know a lot, or you prepare to suffer and stay long hours trying things out until they work. The ecosystem and tooling are the worst I have ever experienced: colcon, sourcing scripts, launch scripts, and on top of that you have to use specific Ubuntu versions. Their flagship language is C++, a language I hugely dislike. Yes, you have Python, but it's nothing more than a thin wrapper over the underlying API, not a clean and native Python API, so you end up writing a lot of bureaucratic glue code.


The thing I don't get about ROS is that at no time in its almost 20 year history has anyone, at any point, referred to ROS as having good docs or an easy to use interface. Despite this, it has been pushed on undergraduates and people who find it very confusing and hard to use.

It just seems to me that no one is willing to admit defeat: perhaps ROS is not working out for the purpose of making robotics more accessible.

Like, they spend so much effort on their varied Turtle mascots and all the themes for the many and frequent releases (that never make it easier to work with), I wish they put that kind of energy into the onboarding experience.


Ah your mistake was going for ROS 2 which is by any objective metric a complete trainwreck compared to ROS 1. Am a maintainer for a package in both, AMA.

The move_base stack in ROS 1 was pretty bad overall, but if you made your own navigation it ran really well and was pretty bulletproof as a middleware for the most part (changing network conditions ahem ahem). Nav2 in ROS 2 is better, but it's really hard to tell because the DDS is so unreliable and rclpy is so slow that it's usually impossible to tell why it's only working half the time for no reason. Defaults are bad and pitfalls are everywhere. Incidentally this makes the entire thing unreliable even if you implement your own stuff so.. yeah. We'll see if Zenoh fixes any of this when it's finally out and common sense prevails over DDS cargo culting. Personally I'm not entirely convinced we won't see a community fork of Noetic gain some popularity after it's EoL.

There is some effort to add Rust support, but it's just a community effort because Open Robotics is completely understaffed. Google bought them and then sort of forgot they exist. There are only like 4 core devs in total. That's mainly why they're treating Python as a second-class citizen.


> your mistake was going for ROS 2 which is by any objective metric a complete trainwreck compared to ROS 1

Sometimes you don't have a choice. I prefer to stick with ROS 1 but the org I was working with insisted on running ROS2 for everything. The only thing worse than running ROS2 is running ROS2 installed next to ROS1 and trying to work with both.


I mean it is a huge dilemma right now. Noetic is incredibly stable, it has lots of packages, language models all know it in detail, any problems are noted and workarounds known... and support will be cut in a bit over a year.

Kind of the exact opposite is the case for Jazzy, really. I think it really hinges on the date you need working code by, and how long you intend to maintain it. For a company building up its software with a 2-year latency in deployment, it probably makes more sense to develop slowly along with ROS 2 and hope for the best.

Also don't get me started on Gazebo Classic/11 vs Ignition lol, that's a similar can of worms right there.


It was infuriating joining a robotics company with a ROS based stack. A lot of the original engineers had started at the company in undergrad as interns, and I swear it was impossible to make anything better because they didn’t know what better looked like. They just understood ROS and were highly proficient with it, so any move away from ROS would slow down the engine of the company.


my bg: I have done a few robotics projects: a warehouse robot and a surgical arm.

About ROS: that's fair, but to be fair, it was an ambitious first attempt that began, what, almost two decades ago? It has some great parts (the 3D simulator, the robot construction, the Python interface) and some less great parts (bags). It's a great place to make the next generation from.

Unfortunately, the scope of robot design pain is even larger than TFA! the definition of "robot" is so broad it includes factory robots with hardly any sensor requirements and intelligence, as well as devices with intense vision input, or even large network requirements, micro devices, stationary devices, mobile truck-like things, robotic systems composed of numerous robots, humanoids, etc.


Similar bg + space and some DoD.

ROS established concepts that pervade robotics. There is no non-ros-like robotics framework. Every _serious_ ROS alternative I've worked with has a similar "Markup as programming" problem because everyone has embraced the "Extensible node" paradigm (aka component based architecture). Configuring a system becomes half the battle. Even Space and DoD have this. I am not convinced ROS saw this coming, or knew about it prior, when they designed their "Operating system". real time embedded systems have a longish history of using message passing between tasks, but still, not convinced.
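For readers who haven't hit the "markup as programming" problem, here is a toy version of the extensible-node pattern in plain Python (the component names, registry, and config format are all invented for the sketch): the "program" is the config list, and rewiring the system means editing data rather than code.

```python
# Hypothetical component registry, as in node/component frameworks.
REGISTRY = {}

def component(name):
    """Register a class under a name so config can refer to it."""
    def register(cls):
        REGISTRY[name] = cls
        return cls
    return register

@component("gain")
class Gain:
    def __init__(self, k):
        self.k = k
    def step(self, x):
        return self.k * x

@component("offset")
class Offset:
    def __init__(self, b):
        self.b = b
    def step(self, x):
        return x + self.b

# The actual behavior lives here, in markup-like data, not in code.
config = [("gain", {"k": 2}), ("offset", {"b": 1})]
pipeline = [REGISTRY[name](**params) for name, params in config]

x = 3
for node in pipeline:
    x = node.step(x)
print(x)  # (3 * 2) + 1 = 7
```

Once a real system has dozens of such entries with cross-referencing parameters, "configuring the system" really does become half the battle, because the config is a program with none of a program's tooling.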

My top three wishes for ROS are:

* Get better interop with other messaging systems. I can build an entire business on HTTP POST/GET, but I need your very-limited rosmsg definition for this robot, this message archive, this wire definition? (not that I want HTTP but you get it).

* Provide a roadmap for hardening. Ros-mil, ros-space, ros-2, etc all seem like steps in different paths toward the same goal. Tell people to stop trying to use a rosgraph across multiple platforms. Kill ROCOS. Teach the industry to use better tools! We have better distributed systems now.

* Never change. Ironically, the fact that ros is so mushy and accessible means that really-high quality robotics research often is ROS-first. (eth zurich for example!). I'm not sure making it commercializable, hard, and interoperable will accomplish that.


Throughout grad school I used ROS and whether the robot demo worked on the day it was supposed to was essentially a coin flip.

In industry I use protobuf, Hydra for config management, and a few async processes, and god does it make a sea of difference. It seems like other companies are catching up; I see more and more companies avoiding ROS these days!
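For anyone wondering what "a few async processes" can look like without middleware, here's a minimal sketch with plain asyncio queues (the sensor/planner names are made up for illustration; the commenter's actual stack additionally uses protobuf and Hydra):

```python
import asyncio

async def sensor(out_q: asyncio.Queue) -> None:
    # Stand-in for a sensor driver: publish a few readings, then stop.
    for reading in [1, 2, 3]:
        await out_q.put(reading)
    await out_q.put(None)  # sentinel: end of stream

async def planner(in_q: asyncio.Queue, results: list) -> None:
    # Stand-in for a downstream consumer: react to each reading.
    while (reading := await in_q.get()) is not None:
        results.append(reading * 2)

async def main() -> list:
    q: asyncio.Queue = asyncio.Queue(maxsize=8)  # bounded queue = backpressure
    results: list = []
    await asyncio.gather(sensor(q), planner(q, results))
    return results

print(asyncio.run(main()))  # [2, 4, 6]
```

The appeal over a ROS-style graph is that everything here is debuggable with ordinary language tools: no discovery layer, no DDS, no launch files, just queues you can inspect.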


Would you have time to review my resume and help set an achievable get-in-the-field project goal?


I've hired a few roboticists and my advice is always the same: if you're a new grad, knowing ROS and having strong C++ and OpenCV experience will give you a leg up.

After that, the most common things new grads want to work on are path planning or computer vision. These are high-ish demand but usually go to grad students or senior folks. But you can be an excellent candidate by knowing about them and how to integrate and test them. Path planning has several open source libraries and implementations. Do a comparison on several maps. Pull game maps and try em out. Anything. For AI/CV, there's similarly many applications.
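As a concrete starting point for that kind of comparison project, here's a minimal breadth-first grid planner; everything in it (the grid format, the `#` obstacle marker) is an arbitrary choice for the sketch, and real planners in libraries like OMPL or Nav2 handle far more:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on a grid of strings; '#' = obstacle."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}        # doubles as the visited set
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    if goal not in prev:
        return None             # goal unreachable
    path, cell = [], goal
    while cell is not None:     # walk the predecessor chain back to start
        path.append(cell)
        cell = prev[cell]
    return path[::-1]

grid = ["....",
        ".##.",
        "...."]
path = bfs_path(grid, (0, 0), (2, 3))
print(len(path) - 1)  # 5 steps around the obstacle
```

Swapping this baseline against A*, RRT, or a library planner on a handful of downloaded maps, with timings and path-length comparisons, is exactly the kind of portfolio piece the comment describes.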

The key for non-PhDs is to know the existing stuff well. Even if you're a PhD, you'll spend less than 1% of your time doing anything interesting / researchy, and will mostly be using existing codebases for everything. Very few research-primary positions exist.


So, the advice for companies is "avoid ROS", but the advice for entrant engineers is "have ROS experience"?

What about, e.g., Drake? (https://drake.mit.edu)

How is experience meant to be demonstrated? Do you do a home project and put it at the top of your resume?

Edit: I don't mean to be pedantic; I'm just ~6 years out of touch here. I did a SLAM-in-quadcopter and force-controlled-6dof-arm in undergrad, and those projects are being correctly ignored by hiring managers as "he's since forgotten this stuff" today.


I don't think that is the intent of the article at all. I think the article is making a comment on building a single framework that can enable anyone to solve an arbitrary robotics problem. From the second paragraph, "The idea goes something like this: Programming robots is hard. And there are some people with really arcane skills and PhDs who are really expensive and seem to be required for some reason. Wouldn’t it be nice if we could do robotics without them?"

There are host of companies, both extant and deceased, who attempted to do just that.

I don't think any ROS developer has ever made the claim that ROS makes building a robot "easy", "easier" yes, but "easy", certainly not. ROS is simply a collection of tools that people have built over the years to get their work done faster by not re-inventing the wheel. Many ROS packages have decades of real-world deployment behind them. Some ROS packages, like Nav2 and MoveIt, are incredibly helpful, other packages are difficult to use and poorly documented, just like in any open source ecosystem.

> Which in their quest to make robotics simple; made json a programming language

JSON, in ROS? I don't think that's how it works.

> massive ecosystem of abstracted complexity that breaks in undebuggable ways

If you have a solution for this I think you solved the problem of software engineering in general.


I think the article is making a comment on building a single framework that can enable anyone to solve an arbitrary robotics problem.

The thing about that statement is that it's mixing two different meanings of "framework". One meaning is a conventional library or API in a conventional programming language. The other is a broad conceptual approach that can be reused - in this case, across many robotics problems.

Using the first meaning of framework, sure, maybe it's not useful. But using the second meaning of framework, I think it's very useful to keep looking for such a thing and getting more people involved might help.


As previously said, ROS is not an OS; it's a framework for making hardware easily programmable with simple APIs, which happens to be very useful for getting started with robots.

Sure there is a learning curve like many other frameworks!


Yeah, the article seems like it's making several different claims and arguments. Yes, robotics is hard, with no general solution visible, and the solutions readily available to companies turn out to come from people who've spent a decade "squinting" at the problem, know the rules of thumb for a bunch of things that work and don't work, and still build things that are less than ideal.

From this, yes, there are many ways that creating a simplified API in a conventional programming language seems useless.

But as an argument against any "turn a hundred newbies loose" approach, it seems not right. It seems plausible that newbies to robotics could find paradigms that don't currently exist. I mean, robotics seems to be in the state that pre-deep-learning computer vision was generally in. Nothing worked "out of the box" and you needed experts to make the random smattering of approaches that did exist work. Now we have an out-of-the-box working approach, and I'm pretty sure it didn't come from existing experts.


what's the current recommendation instead of ROS?


There's a few alternatives. Zenoh [1] is an up-and-coming distributed middleware. ROS2 is actually adding it as an alternative to DDS [2].

LCM [3] is another middleware that has been around for a while.

[1]: https://zenoh.io/

[2]: https://newsroom.eclipse.org/eclipse-newsletter/2023/october...

[3]: https://lcm-proj.github.io/lcm/


Those aren't really ROS alternatives though, as they seem to offer just the message passing layer, which is one of the best features of ROS, but it's not really enough in isolation from the other good features.


People forget that ROS isn't really about the IPC or the build system or the networking.

Ok it is about those things, but fundamentally it's about standardization. Grab any lidar driver and it'll output a LaserScan. Every wheeled robot will send out Odometry, anything from a submarine to a flying drone to an agv will respond to a Twist message, every robot will have a central coordinate frame called "base_link", etc.

The cross-ecosystem interoperability is frankly insane and the main perk of it all. It makes building things easy when you don't have to write adapters every single damn time. This is why ROS 2 is currently such a step backwards: it fragments the ecosystem along DDS-compatibility lines, so no two packages are necessarily able to run concurrently.
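To make the standardization point concrete: a teleop node and a wheeled-base driver only need to agree on the shape of a Twist message, and then any publisher can drive any base. A plain-Python sketch (the dataclasses mimic the field layout of ROS's geometry_msgs/Twist; the wheel model and function names are my own, not ROS API):

```python
from dataclasses import dataclass, field

@dataclass
class Vector3:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

@dataclass
class Twist:
    """Standardized velocity command: linear m/s, angular rad/s."""
    linear: Vector3 = field(default_factory=Vector3)
    angular: Vector3 = field(default_factory=Vector3)

def diff_drive_wheel_speeds(cmd: Twist, wheel_base: float):
    """Any node emitting a Twist can drive this base, no adapter needed.
    Returns (left, right) wheel speeds for a differential-drive robot."""
    v, w = cmd.linear.x, cmd.angular.z
    return v - w * wheel_base / 2.0, v + w * wheel_base / 2.0
```

The same Twist could just as well be consumed by a submarine's or a drone's controller; that shared contract, not the transport, is what removes the per-pairing adapter work.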


ROS is one of those "frameworks" that's so awful and pervasive that for teams of more than 2 people handrolling your own networking is almost always going to be better than trying to make ROS work.


Viam seems promising, especially for cloud connected stuff.


Any framework that has "Start for free" and "Contact sales" on the home page is gonna do really great at mass developer adoption. Can't wait for them to rugpull everyone and close source all of their code when the VCs want to get paid.


Indeed. In the robotics space, this just happened with foxglove studio. Six months ago, I considered advocating that we switch our bespoke visualizer to be a bunch of foxglove plugins instead. Today, it's clear that would have been a terrible move.


Quite right. Corporate open source is always one step away from straight up trapping people through sunk cost. Sometimes a change in management is all that needs to happen.

At least Vizanti will always stay open ;)


Wait what has Foxglove done?


foxglove-studio is no longer open source

https://foxglove.dev/blog/foxglove-2-0-unifying-robotics-obs...

BMW research has the most promising fork I have seen yet

https://github.com/bmw-software-engineering/Foxbox


The founder is the same person who made MongoDB, which is still source available, SSPL notwithstanding.

Point taken though.


that's probably the most notorious example of a company abandoning open source and switching to a proprietary license, so that is a strong reason to believe that viam will rugpull if given the chance


I know this is just reopening a discussion that HN has had a million times, but what mongo (and elastic, and so on) swapped to is not IMO a rugpull to anyone except those who were reselling those specific open source projects. Anyone using those tools internally/as a backend weren't affected by that, whether they were a hobbyist or small business. The exception being people who only want true FOSS for ideological reasons.

The trouble with the SSPL is that it also nerd-snipes people who think the legal system works like code. Example of such a person: https://ssplisbad.com/

As far as I can tell reading the license, the key terms of the SSPL hinge on "such that a user could run an instance of the service using the Service Source Code you make available". If you make something available that is then able to be run, then you've fulfilled the license.

Looking at it again the page I linked above actually goes beyond overly-strict interpretation into actual bad-faith reading. When talking about what you have to publish, they cut off the sentence short so that they can interpret it mean things like the BIOS or IDE, but reading the actual license text it's clear that's not the case. "All programs that you use to make the Program or modified version" vs "all programs that you use to make the Program or modified version available"

As someone who only runs FOSS in my day-to-day, and not anything with SSPL, I'm really annoyed that that page nerd-sniped me into defending the SSPL.


the main trouble with the sspl is that it isn't an open-source license, which is a problem for people who only want true foss for practical reasons, not just ideological reasons. reselling is one of the rights that an open-source license protects, and historically speaking, everyone having that right has been absolutely critical to the development of things like linux, gcc, x-windows, ffmpeg, emacs, postgres, and wikipedia; without it, not only wouldn't we have objective-c support (contributed by next against their will) or linux distributions (like slackware, red hat, debian, and especially ubuntu), we also wouldn't have wikipedia quotes in google serps, or android

a secondary problem with it is that, as you point out, the sspl is so vague that you can never be sure that you're in compliance with it, and it's never been litigated, so there's no precedent

it sounds like you have a pretty weak grasp both of how the legal system works and of the history of free software licensing. actual bad-faith reading is literally the job description of a lawyer


Bad-faith readings are the job of a lawyer, but what I pointed out goes beyond a bad-faith reading. It's a literal lie, made by truncating a sentence. If you were to say "Nazis are the greatest evil the world has seen", and a lawyer in court quoted you as saying "Nazis are the greatest", that's not a lawyer's standard bad faith reading.

If you think the above would be legally permissible then I'm afraid you're incorrect regarding which of us has a "pretty weak grasp of how the legal system works".

I do agree with your first two paragraphs.


i agree with your nazi example, but it's not clear to me in context that this is such an example


I've taught kids to make line-following robots, with very little code. They are slow, don't handle crossing lines very well, and can often just flip out and do their own thing.

I'm not teaching the kids to compete in line-following competitions with robots that can run a course in seconds though. I'm not teaching them to handle multiple crossovers in their code.

What I am doing is developing a passion for robotics and programming, and hoping that a handful go on from there. If they are inspired, they can ditch the simple APIs and move on to the more complex and powerful stuff.


Well said. "Don’t design for amorphous groups. If you can’t name three real people (that you have talked to) that your API is for, then you are designing for an amorphous group and only amorphous people will like your API."


I'd like some feedback from those in the know.

It seems to me we've been building "robots" since the industrial revolution. An assembly line is a series of "robots" performing highly specific actions on highly constrained inputs.

So for "learning robotics" I would personally recommend starting with something like assembly lines. Highly constrain your inputs and limit what the robot is supposed to accomplish and you will have an achievable goal.

You wouldn't teach programming by telling someone to write a highly scalable dbms, neither should we teach robotics by creating a general purpose "6 degrees of freedom" robot which can grab any object and do something useful with it.

Does this resonate with anyone else or am I alone here?


Background: Started in industrial automation (lots of Fanuc, Yaskawa, Omron, etc.), built a lot of cool systems with cool people that made things with robots. Pivoted to "general" robotics in grad school. Been spending the last 5+ years making "general" robots.

I think the best thing for learning robotics looks pretty similar to learning a programming language: have a specific task in mind that the robot/programming language will help you solve. Even if it's just a pick-and-place and a camera, or a shaker table with a camera over top, or a garden watering timer/relay combo. Just work on something specific with your toy robot and you'll naturally encounter many of the difficult things about robotics (spatial manipulation, control, timing, perception, drivers (GODDAMN DRIVERS), data, you name it).

Going right to a high-DOF arm or trained LLM is always cool, but the person who hacks together a camera/relay/antenna to automate some gardening task or throws some servos and slides together to make a Foosball robot is doing the most interesting things, in my opinion.


I like this! I think the hard part can be finding projects that are both interesting and achievable for the beginner.


> It seems to me we've been building "robots" since the industrial revolution. An assembly line is a series of "robots"

There is no single, clear definition of what a 'robot' is, as normal people use the term.

The moment you write a definition broad enough to encompass a roomba, an industrial robot arm, and a child's toy robot you'll find you've also included cruise missiles, 3D printers, remote control cars and Ming dynasty naval mines.


> So for "learning robotics" I would personally recommend starting with something like assembly lines.

That depends on what you want to learn. In assembly lines the solution very often is to constrain the inputs and limit variability in conditions. If that is what you train on, that is what you will learn. You will have 101 tricks for how to QA the inputs and how to maintain the same grippiness on surfaces and so on. Which is great if that is what you want to do.

But if what you want to learn is how to make robots which can adapt to varying circumstances your best bet is to try that. In examples where failure is less costly and you can iterate quickly.

Also "robotics" is not a single thing. Even just the computer stuff has many subfields (localisation, perception, calibration, tracking, prediction, planning, control, etc etc). When learning your best bet is to pick a few of these and assume that the other things are either solved or pick a challenge where they are not needed. :) Not talking about all the "sibling subject" like electrical engineering, mechanical engineering, thermal engineering, fluid dynamics, etc. Cool cutting edge stuff usually requires innovation in more than one of these.

When you look at the latest Boston Dynamics video we all marvel at the exquisite control they demonstrate. But they also have to cool their actuators the right way, or the whole thing will just melt.

As a learner your best bet is to pick a stable platform made by people who know what they were doing and colour inside the performance envelope of that machine. What you make probably won't break the state of the art, but you will still learn a ton.


One of the problems with being a beginner is an inability to tell which projects are easy and which are hard, especially if you're doing a "real" project, which is to say, one of your choice from some real problem you have in life, rather than some pre-selected project just for the project's sake.

I've encouraged my kids to use their school years to try out lots of different things, and a consistent theme in all of them is getting them to just scale down their first try. Do not expect to replicate Minecraft. Do not expect to film a feature-length special-effects movie. Do not expect to create a dishwashing robot. Do not set out to write a decalogy high fantasy series. etc. The beginner doesn't know what they don't know.

Robotics has the particular problem that small-scale robotics are often frankly just not that useful. It's been a problem with the field for a long time. The learning curve is quite steep with robotics, and given that the beginner is likely to destroy some equipment along the way, expensive. There still isn't really that much the field can do about that right now. It's better than it used to be, but it's still pretty hard.


>is an inability to tell which projects are easy and which are hard

https://xkcd.com/1425/


Of course the funny thing with that cartoon is that today the 'hard' task is just as easy as the 'easy' task, if not easier. Which I guess only goes to show that not only is it hard to tell what is easy and what is hard, it is equally hard to tell which 'hard' tasks will remain hard and which will become trivial in a few years' time.


> it is equally hard to tell which 'hard' tasks will remain hard and which will become trivial in a few years time.

The comic was published in 2014. That is 10 years ago as I'm writing this. Maybe it is just that Ponytail got the research team and the five years she asked for. :)


Ok, robots are my day job. :) General purpose robots that are useful for any purpose in any environment are incredibly hard to impossible. But, if you constrain either the environment they have to work in, or the actions they have to perform, things get a LOT more manageable.


yup, that was pretty much my point/question. Thanks for confirmation!


>Highly constrain your inputs and limit what the robot is supposed to accomplish and you will have an achievable goal.

I think this is the key take away. I work in an industrial plant we have a large amount of things that are automated but I would say they are "dumbly automated".

For example we have automated cranes. This sounds fancy and sophisticated but the whole system is designed to be as dumb as possible.

The cranes are overhead cranes that run on fixed rails. This immediately limits the degrees of freedom in which the crane can move. The whole operation is to pick up an object at location A, deposit it at B, then travel back to A and repeat. It is always picking up the same objects, with the same dimensions/weight, etc.

The whole thing is designed to be as dumb as possible: it does exactly one thing. I imagine if it suddenly had to pick up a different type of object with different dimensions it would require rewrites, etc.
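The "dumb" cycle described above is essentially a fixed-order state machine: every step has exactly one successor, so there is nothing for the crane to decide. An illustrative sketch (state and function names are mine, not from any real crane controller):

```python
from enum import Enum, auto

class CraneState(Enum):
    MOVE_TO_A = auto()  # travel along the rail to the pickup point
    PICK = auto()       # grab the one object type it knows
    MOVE_TO_B = auto()  # travel to the deposit point
    PLACE = auto()      # release, then start over

# One successor per state: the entire "program" is this table.
NEXT = {
    CraneState.MOVE_TO_A: CraneState.PICK,
    CraneState.PICK: CraneState.MOVE_TO_B,
    CraneState.MOVE_TO_B: CraneState.PLACE,
    CraneState.PLACE: CraneState.MOVE_TO_A,
}

def step(state: CraneState) -> CraneState:
    """Advance the crane one step through its fixed cycle."""
    return NEXT[state]
```

Handling a second object type would mean new states and transitions, which is exactly why such systems "require rewrites" the moment the task changes.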


Yeah and industrial robots are generally pretty easy compared to other parts of automation. They are pretty much turnkey and have a bunch of options for integration. That's not to say that they don't take any skill, but if you're doing material handling or welding or something you can often just buy the entire package from <robot company> and apply power and it works. It does all of the advanced kinematics math for you; you will be better at robots if you understand the math but it isn't a requirement and the vast majority of people programming, maintaining, and working alongside them don't know the math behind it.

It always bums me out when people just want to look at a pretty robot drawing a line or picking up a heavy thing and ignore much more complex and difficult parts of factory automation.


>It always bums me out when people just want to

In school people will say things like "I want to program video games!" and you never hear "I want to work for a bank and file 100 lines of paperwork for every line of code I write", but at the end of the day the bank job and the industrial robot job are the ones that typically pay a living, and they're where most of the people who play around with this stuff and end up in the industry actually land.


I've been under the impression that getting into robotics has never been easier. For the past 10-15 years, high-schools have had easy access to things like Lego mindstorms, etc.

Small microcontrollers and computers like the Arduino and Raspberry Pi have made development much easier, as you're no longer stuck with writing Assembly or C to get something working, and there are tons of modules that are easy to interface against.

Back when I started out playing with robots, as part of my microcontroller and control engineering classes, we were stuck with writing ASM. Same with components, like motor controllers, sensor networks, and what not. Lots of vero and perfboard, lots of soldering.


Because it is. There is an abundance of tutorials showing people how to get Gazebo simulations going, or set up a rudimentary classifier in Pytorch, or actuate motors with Arduino using whatever framework of the week (I even wrote some!).

The hardware has really made strides too, easy and cheap sensors, controllers, cameras. It's awesome how quickly someone can plug-and-play a Realsense with a servomotor or pneumatic slide and start manipulating the world.

The thing that's usually underappreciated is that once you understand how to code a robot, you are barely closer to having solved a practical problem. There are lots of practical problems in the world where the 4 hours spent learning how to use $PERCEPTION_API would be better spent understanding more about the widget being perceived or the object being manipulated. Getting into robotics has never been easier; getting something useful out of robotics is still the trick.


Even vastly simplified 2D robotics is unexpectedly difficult.

I ran a simplified 2D robotics contest where the task was to automate a robot on a simple 2D JavaScript canvas. The robot had just two segments that could move (the arm and elbow). A piece of fruit was placed somewhat randomly within its reach, and when its hand was close enough it could be instructed to grasp the fruit. (No grasping code needed to be written; the hand just needed to be close enough.) The task was then to bring it to its mouth.

There were no sensors involved, the positions were available as global values.

About twenty people looked at the contest (number of concurrent viewers on the dedicated subreddit I posted it on) but nobody submitted a successful entry. When I tried to solve it myself, I found I couldn't automate the task either, neither geometrically nor with a neural network, which turned out to be too difficult to train. The closest I got was being able to grasp the fruit about 7 out of 8 times or so (I forget the exact number), without being able to bring it to the mouth successfully.

By contrast, solving it by hand on the keyboard (moving the two joints and grasping using keyboard shortcuts) like remote controlling the robot (with a human in the loop) is easy.

I had no idea that even such a toy problem with a tiny 2D universe, only two joints, and perfect information, would prove to be so difficult to automate without a robotics framework.


I think if you know to search the keywords "inverse kinematics" and a bit of JS, this problem isn't too difficult, although I did bungle the signs in a few places. Someone (more dedicated than I) could also work out the trig from scratch if they didn't have that keyword.

Here's a solution: https://editor.p5js.org/jwlarocque/sketches/5tCufh3nw
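For anyone who wants the math behind that keyword: the two-joint planar case closes in closed form via the law of cosines. A minimal sketch (my own function names and angle conventions, which won't necessarily match the contest's code):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Inverse kinematics for a planar 2-link arm with link lengths l1, l2.
    Returns (shoulder, elbow) angles in radians, or None if unreachable."""
    d2 = x * x + y * y
    d = math.sqrt(d2)
    if d > l1 + l2 or d < abs(l1 - l2):
        return None  # target lies outside the reachable annulus
    # Law of cosines gives the elbow bend directly
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))  # clamp for rounding
    # Shoulder: direction to target minus the offset the bent elbow introduces
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1, l2):
    """Forward kinematics: end-effector position from joint angles."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y
```

This is exactly where "bungling the signs" happens: flipping the sign of `elbow` gives the other (equally valid) elbow-up/elbow-down solution, and it's easy to return the one that collides with the robot's own body.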


Thank you, great solution. It matches the spirit of the contest as well. If you want another challenge, you could try to get to that result by doing the equivalent of sending keypresses, further mimicking how a robot works. (You can just copy and paste the keys the keyPressed function calls and check the distances yourself).

    const angleIncrement = PI / 36;
    if (key === 'q') { upperArmAngle -= angleIncrement; moves++; }
    if (key === 'w') { upperArmAngle += angleIncrement; moves++; }
    if (key === 'a') { lowerArmAngle -= angleIncrement; moves++; }
    if (key === 's') { lowerArmAngle += angleIncrement; moves++; }
    if (key === 'g') grab(); // Grab the fruit
    if (key === 'r') release(); // Release the fruit
What order of these statements would you send to get to the fruit, grab it, and then get to the result you show?

Good job on completing the contest.


Here's the code with keypresses being sent:

https://editor.p5js.org/robss2020/sketches/46cRgGQtX


Why not post the details here? I'd love to have a go at it


It was flagged when I posted it originally, so someone didn't like it. (Maybe because I included a $10 prize as a token motivator.)

Here is the contest, you can play with your keyboard with a human in the loop (try to bring the fruit to the head using the listed keyboard shortcuts) or you can try to automate it using javascript:

https://news.ycombinator.com/item?id=39949966

About twenty people looked at it when I posted it but I didn't have any winning entries and I couldn't solve it myself either.

If you are very interested you can see a walkthrough of my attempts to solve it here:

https://news.ycombinator.com/item?id=39962620

I must emphasize that even this very toy, 2D robotics problem with perfect information is very hard, despite being quite easy for a human in the loop. (I tried it with test users, who could complete the task easily with a human in the loop remote controlling the robot.)

Let me know if you have any questions or need help using the test environment or understanding the task. You can give me any other feedback you want as well.

The contest is finished (with no winners) and there is no longer any prize.


The problem can't be solved with the information in the global variables. You need to know the arm lengths and the positions of the head and shoulders. The info is all hidden in `draw()` but I don't want to reverse-engineer it.


Thanks, you're right about that.

>but I don't want to reverse-engineer it.

is kind of my original point :) It's a pain to reverse engineer the geometry. You can copy the variables wherever you want and I would accept it, but it doesn't solve the robotics problem.

Feel free to copy and paste the code or refactor it however you want, if you send me a solution I will look at it if it is in the spirit of the contest.


Robotics is on the cusp of total reinvention. Almost everything that it took to make robots work is about to become archaic.

The core ideas in old robotics will become a framework for the guardrails for new robotics, but this will widely become a solved problem that few need to think about

Stanford Aloha https://x.com/cunha_tristan/status/1743314831912874251

MIT https://youtu.be/w-CGSQAO5-Q?si=kiQ7mBPe6Kh7ZbCq

Figure/OpenAI https://youtu.be/Sq1QZB5baNw?si=ZBdDkO2HRT5sOPly

Tesla Optimus https://youtu.be/cpraXaw7dyc?si=xm-925fFwLeYnkpb

NVIDIA Groot, https://nvidianews.nvidia.com/news/foundation-model-isaac-ro...


Meh. There is the definition of robots used in academia. Something like "A robot is a machine—especially one programmable by a computer—capable of carrying out a complex series of actions automatically." Or "A robot is an autonomous machine capable of sensing its environment, carrying out computations to make decisions, and performing actions in the real world."

And if you go by that definition you see that things like dishwashers, washing machines, elevators, full-authority digital engine controls, and surface-to-air missiles are all robots.

But nobody, not even roboticists, calls those things robots. You don't need a roboticist to load your dishes, or to get to the floor of your meeting, or to shoot down an enemy airplane.

The reality is that the real definition of a robot is "a machine which does not work yet". Once it starts working we stop calling it a robot.

Making robots for non-roboticist should absolutely be the goal. Giving that up is giving up making reliable useful machines.

If you can't make a "grab_object" api useful that is because we all together suck at making robots percept and suck at making robots manipulate things. Once we get good at those (if ever) that API will become easy and commonplace. And anyone who doesn't use it to achieve their goals will feel like a weirdo reinventing the wheel.


> The reality is that the real definition of a robot is "a machine which does not work yet". Once it starts working we stop calling it a robot.

Counterpoint: Robot vacuums

I think the real colloquial meaning is more along the lines of how constrained or hidden the movement is. Elevators stick to a track, dishwashers only move on the inside, missiles pretty much move on a straight line. On the other hand robot vacuums roam the floor, and while a robot arm is rooted to one spot it can move freely in that range.


That is probably right. But it also doesn't matter much to me. I don't want to convince people to change what they call things. I want to make things which are reliable and useful for them, and I don't quite mind if they will call them robots or not. I know that they are robots and that is enough for me. ;)

> missiles pretty much move on a straight line

Only the boring ones! Have you seen long exposure photos of the Iron Dome in action for example? [1]

They fly all those swoopy trajectories to be at the right place at the right time. It is also speculated that when they decide to go for an intercept they launch multiple interceptors against every target: one aimed earlier in the trajectory, one aimed later. And when the system has high confidence that the target has been eliminated, it re-targets the second interceptor against some other one.

It is a beautiful and very complex robot. Sadness all around that it is needed of course.

1: https://www.defensenews.com/resizer/CVsuWQwzlBDG_yhJUr6sI16P...


> And if you go by that definition you see that things like dishwasher, washing machines, elevators, full authority digital engine controls, surface to air missiles are all robots.

> But nobody, not even roboticist, calls those things robots. You don't need a roboticist to load your dishes, or to get to the floor of your meeting, or shoot down an enemy airplane.

I guess I'm a roboticist, and yeah, I call all of those robots. Those are _fantastic_ robots that just get their jobs done.


The AI Effect, but for robots.


I think this applies equally well to domains other than robots: making comic books, programming (in general), making music, etc.

“Democratizing access to skills,” by replacing mastery with a tool always requires sacrifice: of the output, of the input, etc.

Sometimes it produces nice results but I don’t see the point of it.


I think the great tragedy of democratization is that it forces the popularization of a few skills by making it easy for everyone to pursue things because they are popular, rather than because they have an innate ability. Therefore, more obscure hobbies and artistic avenues die out since most people would rather "create" something effortlessly as long as it is popular instead of finding out who they truly are through art.


Previous discussion on the substack version of the article (114 comments): https://news.ycombinator.com/item?id=39707356


It is sad to see that IEEE is becoming the discovery channel of engineering.


Why are you saying this? It is a message from people who program robots to those who design robots. Both groups are (mostly) made up of professional engineers whose jobs are affected by API design.

It is not an article intended for those who know nothing about robotics and are unlikely to become roboticists, as would be the case for a "discovery channel" type article. Note that I have nothing against pop science, quite the opposite actually. But this isn't that.


> It is a message from people who program robots to those who design robots.

I thought it was a message from someone who designs robots to someone who programs robots. The author says they are an ME and mostly self taught. In my experience, self taught MEs are not adept robot programmers or programmers in general.

If I may try to parse the subtext here, this is a post by an ME who is helping manage/direct a startup of recent college grads, and they are frustrated with ROS because you basically have to be a Linux network admin and a skilled C/C++ programmer to get anywhere with ROS. MEs especially find it frustrating because they just want to do robots, but to use ROS they have to be expert Linux console hackers, and it's just not a skill set they have.

Same is true for the fresh out of college CS majors -- my students who are seniors, Master's, and even some PhD students have trouble with it.

So what I see here is someone expressing that frustration of how tooling is in the robotics ecosystem is not accessible.

The reason this is a "discovery channel" treatment of the topic is that we're not talking about it at that level. We're talking about it from this distilled, filtered, high-minded perspective that is above the weeds, when really we should be digging in the dirt, talking about exactly the problem here.


I am in that group you mentioned and this is not even a bad take. They might have been trying to imply ROS, but that's also very much not a well-targeted criticism. Robotics is not a bunch of humanoids moving their attached manipulators.

Next: this article appears in IEEE Spectrum, not Quanta. I want the higher quality from IEEE that was the case for decades; it is not a pop-science venue. Good pop science is hard, and other places do it much better.

This is just a mediocre blog article that somehow found its way into IEEE, which is very concerning for the quality of IEEE Spectrum.


Maybe, and I think this is a good thing. We want more engineers in the world, or, at least, more people who, when they see a problem, or something that's not perfect, get bothered to the point of finding a good fix.

You can train someone in every other skill, but that one - the urge to look for what can be improved, and to improve it - is the most valuable one an engineer can have, and it doesn't apply to engineering alone.


gatekeeping?


Did you pay attention to what IEEE stands for?


Original version discussed 3 months ago (114 comments): https://news.ycombinator.com/item?id=39707356


ROS [0] always seems an essential layer if there is ever to be a common ground between something as concrete as motors and sensors, and something as abstract as tasks and learning.

[0] https://en.wikipedia.org/wiki/Robot_Operating_System
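To make that "common ground" concrete, here is a toy publish/subscribe bus in plain Python. This is not the actual ROS API, just an illustration of the pattern ROS standardizes: the motor-driver side and the planner side only agree on topic names and message shapes, never on each other. The topic name and message fields below are made up.

```python
from collections import defaultdict
from typing import Any, Callable

class Bus:
    """Toy topic bus: the decoupling pattern ROS standardizes (not its real API)."""
    def __init__(self) -> None:
        self.subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, cb: Callable[[Any], None]) -> None:
        self.subs[topic].append(cb)

    def publish(self, topic: str, msg: Any) -> None:
        # Deliver to every subscriber of this topic, in registration order.
        for cb in self.subs[topic]:
            cb(msg)

# Concrete side: a wheel encoder publishes raw readings.
# Abstract side: a planner consumes them without knowing the hardware.
bus = Bus()
readings = []
bus.subscribe("/wheel/odometry", readings.append)
bus.publish("/wheel/odometry", {"ticks": 1024})
print(readings)  # [{'ticks': 1024}]
```

The point is that neither side imports the other; swap the encoder for a simulator and the planner never notices.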


There is a general idea that people would like a free lunch.

At the gym, people would like one simple trick to get buff. In robotics, one simple trick to make whatever they want.

I see the same in software, where people build behemoths like SAP or other ERPs so the customer can configure whatever they want and doesn't have to hire those expensive developers, only to end up hiring expensive consultants, because the system is so complex anyway that you need a specialist.


Original title:

> The Mythical Non-Roboticist: Wouldn't it be great if everyone could do robotics?

Not sure why it's been changed here.


The bookmarklet uses the HTML title, so that might be the issue.


Adjusted.


This is just "wouldn't it be great if everyone could program" but with extra kinematics.


I can see uses for robots that comb jellyfish out of the water and remove them from the beach.

I can also see uses for robots that cut down Heracleum mantegazzianum and ideally make biodiesel out of it.

I'm not going to develop or buy either one, though.


"Robots are hard because the world is complicated"

Or it could be that some problems are inherently hard. Just solving the motion matrix of a simple robotic arm is an academic research field of its own.
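Even the "simple" case takes real math. A sketch of forward kinematics for a 2-link planar arm (link lengths and angles here are made-up illustration values):

```python
import math

def fk_2link(l1: float, l2: float, t1: float, t2: float) -> tuple[float, float]:
    """End-effector (x, y) of a 2-link planar arm; t2 is measured relative to link 1."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

# Fully extended along x: reach is l1 + l2.
print(fk_2link(1.0, 1.0, 0.0, 0.0))  # (2.0, 0.0)
```

And this is the easy direction: the inverse problem, finding joint angles for a desired position, already has zero, one, or two solutions for this two-link toy, and it only gets worse with more degrees of freedom.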


Industrial robots are surprisingly simple to program: Fanuc, Kuka, etc. with tablet teach pendants and the like. There are even extremely simple ones where no lines of code are required (they are programmed by human movement). I'm sure these real-world open-ended robots will take inspiration from that, possibly having general world knowledge while a human fine-tunes/teaches the movement for specific tasks.
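That "programmed by human movement" style can be sketched as nothing more than recording joint states and replaying them. This is a toy illustration, not any vendor's API; the joint tuples are made-up values:

```python
from dataclasses import dataclass, field

@dataclass
class TeachPendant:
    """Toy record-and-replay: the essence of lead-through teaching."""
    waypoints: list[tuple[float, ...]] = field(default_factory=list)

    def record(self, joints: tuple[float, ...]) -> None:
        # On a real robot this samples the joint encoders while a
        # human physically guides the arm through the motion.
        self.waypoints.append(joints)

    def replay(self) -> list[tuple[float, ...]]:
        # A real controller would interpolate between waypoints and
        # enforce joint limits and speed caps; here we just return them.
        return list(self.waypoints)

pendant = TeachPendant()
pendant.record((0.0, 0.5))
pendant.record((0.3, 0.9))
print(pendant.replay())  # [(0.0, 0.5), (0.3, 0.9)]
```

All the hard parts (trajectory smoothing, collision checking, safety) live below this interface, which is exactly why the pendant feels so simple to the operator.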


Congrats Benjie on getting this published in IEEE Spectrum!


I'm not knowledgeable to have an opinion on the article but I chuckled at this framing:

> Global mutable state is bad programming style because it’s really hard to deal with, but to robot software the entire physical world is global mutable state, and you only get to unreliably observe it and hope your actions approximate what you wanted to achieve.

Holson’s Law of Tolerable API Design also has potential:

> Design your APIs for someone as smart as you, but less tolerant of stupid BS.


> Maybe I should tell them it didn’t work for us.

You need to believe to achieve.


Stop making up new titles. And stop these demands.


Are those demands themselves?


You are so clever.


The "Mythical Non-Roboticist" idea suggests making robotics accessible to everyone by simplifying the programming process is the way to go. However, this overlooks the complexity of getting robots to function in the unpredictable real world. Creating oversimplified APIs for a vague, inexperienced user base leads to tools that fall apart in practical use. Instead of aiming for condescending simplicity, we should focus on developing powerful, flexible tools that eliminate unnecessary complexity. When someone starts programming a robot, they become roboticists, so design your APIs for peers who won't tolerate needless complications.


> When decluttering means you end up having to search the internet for a recipe as to what place to click to find which flydropping menu, was it worth it?

The Jony Ive aesthetic made so many things harder to use.


That's lame LLM output.


This smells like an LLM bot.



