Robot Operating System – A flexible framework for writing robot software (ros.org)
165 points by doener 42 days ago | hide | past | web | favorite | 83 comments



My experience with ROS has been suboptimal. It's a collection of incompatible, outdated and buggy tools that communicate with each other via a single point of failure (roscore) using a pub/sub mechanism.
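As a rough mental model of the pattern being criticized, here is a toy sketch (not the ROS API; the topic name and message are invented). Note that in real ROS, roscore only brokers name lookups and peers then talk directly; the toy routes everything centrally for brevity, which makes the single-point-of-failure property obvious:

```python
class Registry:
    """Central name server: maps topic names to subscriber callbacks.
    Analogous in spirit to roscore; if this object dies, no new
    connections can be made anywhere in the graph."""
    def __init__(self):
        self.topics = {}

    def subscribe(self, topic, callback):
        self.topics.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of the topic.
        for callback in self.topics.get(topic, []):
            callback(message)

core = Registry()          # the single central coordinator
received = []
core.subscribe("/cmd_vel", received.append)
core.publish("/cmd_vel", {"linear": 0.5, "angular": 0.0})
print(received)  # [{'linear': 0.5, 'angular': 0.0}]
```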

Furthermore they use their own build system (catkin, WHY?), define their own IDL (WHY?), use their own dependency system (rosdep, WHY?) and even mess with your Bash environment.

I appreciate their effort and it is really hard to make a system that interfaces with all kinds of different robots and hardware, but I don't understand why they don't exploit existing high-quality solutions for the "not-robot-related" parts of their system. At the moment I don't think I would like to have ROS run a potential self-driving car.


I spoke with Mikael Arguedas at the 2018 French ROS days (JONAROS), where he gave a presentation about ROS 2 (talks are available online IIRC, in French). During his presentation, he said that most of ROS 1 was engineered before any of today's well-known frameworks were available, hence the many custom tools. That has driven a major part of ROS 2's design (especially the use of DDS).

Otherwise, I have been personally using ROS for building robots for the French (and European) robot championship (see https://twitter.com/AIGRIS_Birds). It works pretty well, it is convenient, easy to learn, and provides some nice tooling (bagging for replaying messages, simulation with Gazebo...).

It is the de facto standard robotics middleware for scientific projects. It is not widely used in industry because of some shortcomings, though initiatives like ROS Industrial and ROS 2 will likely improve things.


I just saw that ROS2 will be based on the DDS standard. I came across DDS before in a different context and it impressed me a lot. It reminds me of the concepts in the eve programming language where to build a distributed system you share a global data space. Both took inspiration from LINDA and tuple spaces. "The data is the interface" https://www.rti.com/products/what-is-a-databus
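For readers unfamiliar with LINDA, here is a minimal tuple-space sketch (names and data invented for illustration, and simplified: real LINDA `rd`/`in` block until a match appears): processes coordinate by writing tuples into a shared space and retrieving them by pattern, where `None` acts as a wildcard.

```python
class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):                 # write a tuple into the space
        self.tuples.append(tup)

    def _match(self, tup, pattern):
        return len(tup) == len(pattern) and all(
            p is None or p == t for t, p in zip(tup, pattern))

    def rd(self, pattern):              # read without removing
        return next(t for t in self.tuples if self._match(t, pattern))

    def inp(self, pattern):             # read and remove ("in" in LINDA)
        t = self.rd(pattern)
        self.tuples.remove(t)
        return t

space = TupleSpace()
space.out(("temperature", "sensor_1", 21.5))
space.out(("temperature", "sensor_2", 22.0))
print(space.rd(("temperature", "sensor_2", None)))
# ('temperature', 'sensor_2', 22.0)
```

The "data is the interface" idea is the same: readers match on the shape of the data rather than on who produced it.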


Thank god for that. I had an internship where I just wrote a shim layer between DDS and Ros Messages. It was a huge pain since they didn't map very well to each other.


> he said that most of ROS 1 was engineered before any of today's well-known frameworks were available, hence the many custom tools.

catkin is built on top of CMake, so that's no excuse.


A robot is made of dozens, if not hundreds, of packages. catkin compiles the whole workspace in a single command while dealing with non-trivial build dependencies between packages, and generates a ready-to-deploy folder.
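To make the dependency-ordering point concrete, here is a toy sketch (not catkin itself; the package names are invented) of the topological sort any workspace-wide build tool has to perform, using `graphlib` from the Python standard library (3.9+):

```python
from graphlib import TopologicalSorter

# Invented workspace: each package maps to the packages it depends on.
depends_on = {
    "robot_bringup": {"navigation", "drivers"},
    "navigation": {"common_msgs"},
    "drivers": {"common_msgs"},
    "common_msgs": set(),
}

# static_order() yields every package only after all its dependencies.
order = list(TopologicalSorter(depends_on).static_order())
print(order)  # common_msgs first, robot_bringup last
```

catkin does the same graph walk (plus invoking CMake per package and wiring up install/devel spaces), which is why one command can build the whole workspace.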

CMake has its drawbacks, fair enough, but 10 years ago it was pretty much already the de facto standard build system for C++. What would you have used instead of CMake?

  - Make and autotools (Haha) ? 
  - scons ?
  - completely custom build system ?
  - something else ?
The most painful thing I had to deal with was teammates who, instead of creating a brand new package, would simply copy and paste an existing one without fixing the dependencies, resulting in a build that fails once the workspace is cleared.


> A robot is made of dozen, if not hundreds of packages. catkin compiles the whole workspace [...]

s/robot/software

Why even have the concept of a `workspace`? I don't think it solves any problem.

Most bigger software projects are built on top of other software, but most deal with it without having to implement their own build system. I know that CMake is established software, but despite its horrendous scripting language I find it mostly useful if you want to build cross-platform native software; if you're going to run on Linux anyway, I think there is no point to this indirection.

> - Make and autotools (Haha) ?

You might laugh about these tools, but they build most of your userspace if you're on Linux. And yes they even work on distributions other than Ubuntu.


I've been hearing about how ROS 2 is going to fix these problems since 2013. I'm not holding my breath.


> Furthermore they use their own build system (catkin, WHY?), define their own IDL (WHY?), use their own dependency system (rosdep, WHY?) and even mess with your Bash environment.

I agree. When I ask these questions (we use ROS at our company) the answers usually sound like the following:

1. Catkin: No other solution for multi-project CMake. Apparently development often spans multiple projects / repositories for a single integrated "feature" on the robot.

2. IDL: Because they have their own middleware. I think this goes away in ROS 2.0 as they move to DDS.

3. Rosdep: Because they want to run on multiple distributions even though it seems Ubuntu is the only truly supported distro. Rosdep is just a thin layer over the local package manager, pip, and whatever else they have glued in.

4. Environment: They provide the concept of "workspaces" to enable checking out source for a small subset of packages to work on and override whatever is installed on the base system.
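For point 3, a hedged sketch of what "a thin layer over the local package manager" amounts to; the rule table, key, and package names here are invented for illustration, not rosdep's actual database:

```python
# Abstract dependency key -> per-platform (package manager, package name).
ROSDEP_RULES = {
    "eigen": {"ubuntu": ("apt", "libeigen3-dev"),
              "fedora": ("dnf", "eigen3-devel"),
              "pip":    ("pip", "eigen")},
}

def resolve(key, platform):
    """Map an abstract key to the install command for one platform."""
    manager, package = ROSDEP_RULES[key][platform]
    return f"{manager} install {package}"

print(resolve("eigen", "ubuntu"))  # apt install libeigen3-dev
```

The value proposition is that a package.xml can declare `eigen` once and let the resolver pick the right distro package; the criticism above is that in practice only the Ubuntu column is well maintained.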

I live in the embedded / hardware world but my colleagues working on the higher level software tend think all of the above is necessary for dealing with the hundreds of packages and dependency hell that come with the "modular" robot software approach.

That said, the popular packages in the ecosystem for simulation, planning, robot modeling, etc seem powerful. Community developed ROS "drivers" (middleware integration) are also useful so you don't reinvent the wheel for off the shelf hardware integration.


Indeed, while I criticize the quality of the implementation, much of what they implement is necessary for, or assists, the development of such systems. Catkin is crap, but you need a reasonable way of assembling multiple packages (in multiple languages) into a build, and there aren't many decent options (vanilla CMake has grown a lot of relevant features by now, so you could probably design a similar system with far less custom code and fewer quirks). Their IDL is naive, but you do need a standard one, and the landscape was a lot bleaker when the project was started.

Rosdep is mostly optional: I outright ignore it when I use ROS and just install the relevant dependencies with my package manager (if you do this, you'll find ROS works just fine on other distros). The fact that environments are generally self-contained is a huge help to development: I usually put in effort to persuade other bits of software to work in a similar manner (I try to keep each project contained to a folder; dependencies beyond those installed by my distro's package manager should not leak outside it).


Agree, these are, from a certain perspective, justified costs.

> I live in the embedded / hardware world but my colleagues working on the higher level software tend think all of the above is necessary for dealing with the hundreds of packages and dependency hell that come with the "modular" robot software approach.

But it's not a huge step to push your packages to debian, and after that rosdep feels totally unnecessary.

> That said, the popular packages in the ecosystem for simulation, planning, robot modeling, etc seem powerful. Community developed ROS "drivers" (middleware integration) are also useful so you don't reinvent the wheel for off the shelf hardware integration.

I agree 100%.


Having your own build system and dependency system is not without precedent for open-source communities with large numbers of independent packages (https://developer.gnome.org/jhbuild/, https://www.pkgsrc.org/).

The initial build system "rosbuild" was arguably a main reason ROS1 became successful, because it started making individual robotics packages developed in diverse academic environments reusable by other users (before, each lab had their own half-broken way of building/packaging software).

The build system, in its limited way, also had a very low learning curve for junior developers (though not for people trying to package binaries).

ROS1 was initially distributed as source packages, and that, combined with a unified build system, led to a very low barrier for contributions. By installing ROS, all users already had an environment set up in which they could easily modify source packages created by people in other organizations, run with their modifications, and submit patches.

Compare that to installing a library as a binary, suspecting a bug, and then starting to find out where to get the sources, how to build the sources, how to locally build against those sources, etc.

The downside of this, of course, was that packaging binaries was hell. But the bottom line was that this combination (making binary packaging and distribution very hard while making dirty source builds very easy) was the winning ticket to create a vibrant open-source community with high cohesion and lots of contributions in academia.


Yes. But it's still useful.

ROS is a packaging system for academic robotics software with a message passing layer. It was created by hammering a huge collection of existing software into talking a common protocol. The ROS people also made the distribution buildable as a whole (more or less; it's notorious for breaking due to version pinning problems.) Just having some kind of standard was a big win. It beats downloading multiple pieces of academic software and trying to bash it into cooperating.

I've used ROS and contributed to ROS in a minor way, but I don't like it.


I love the basics of ROS, but good lord they crammed far too much "into it" for it to only sit on one (or a few closely related) operating systems. If you build an ecosystem on top of an operating system, at least make it OS independent. (python+pip anyone?)

Having said that, it did great things to bring together the community and kill the fragmented landscape of robotics. (Does anyone remember the 2000s-2010s and all the custom, proprietary robot abstraction layers that were built up? Awful.). Clearpath and a number of small vendors partnered well with ROS, pushing the BS off the stage.

But it hasn't aged well ...

The messaging and logging are the only useful components (for me / my teams). The codebases / nodes are abysmally intertwined, overly cohesive, and lack the nice unixy modular architecture I'd expect from something built on top of a message passing layer. The central point of failure is often a very real point of failure in spotty networking environments (dropping off wifi can kill your whole stack, even if all messages are local-machine only).

DDS is nice, but everyone serious is pretty much already using it (mil / aero), esp because of shared memory transport. So it's great that they were dragged forward ... but it doesn't quite feel like the big leap that ROS 1.0 was.


> I've used ROS and contributed to ROS in a minor way, but I don't like it.

I find this sentiment among most researchers and industry people I've talked to at IROS and ICRA. I always make a point to ask what they like about ROS when I find out they're using it and probably 95% of the time the top 2 answers are the message transport layer and rviz (the visualization tool). tf (the frame of reference graph) is sometimes in the top 2, but it's also been a complete pain for me (and others, so much that there are 2 competing versions: tf and tf2) and would be in my bottom 2 features. The rest of it I find most people say they could do without.
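For context, tf's core idea is a tree of coordinate frames, where each frame knows its transform relative to a parent and a lookup walks up the tree composing transforms. A toy 2D, translation-only model (not the tf API; real tf also handles rotation and time interpolation, and the frame names here are just conventional examples):

```python
# frame -> (parent frame, x offset in parent, y offset in parent)
frames = {
    "base_link": ("world", 2.0, 1.0),
    "laser": ("base_link", 0.5, 0.0),
}

def to_world(frame, x, y):
    """Express a point given in `frame` in world coordinates by
    composing offsets up the tree until the root is reached."""
    while frame != "world":
        parent, dx, dy = frames[frame]
        x, y, frame = x + dx, y + dy, parent
    return x, y

print(to_world("laser", 1.0, 0.0))  # (3.5, 1.0)
```

Much of tf's real complexity (and the pain people report) comes from doing this lookup across time with rotations and a distributed set of broadcasters, which this sketch deliberately omits.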


The build system is mostly macros on top of CMake, but not ideal indeed. Same for the IDL. ROS 2.0 [0] should address a lot of those issues, plus things like real-time support, improved security, and no roscore needed.

On the robots I worked with, I never missed any of those features, to be honest. Largest project of those is a research home service robot, quite different from a self-driving car in terms of reliability requirements. The real-time stuff for motor control there is handled by EtherCAT.

[0] https://design.ros2.org/


> The build system is mostly macros on top of cmake.

That makes it even worse. So it's crap on top of crap. CMake is mostly used to build cross-platform software, but I don't think anybody uses ROS on a system other than Linux, so why not just use established tools like simple `make`?


Having written plenty of makefiles and CMakeLists, I would much rather write the latter for a C++ project. The idioms are easier to remember, so I don't have to search through SO to do little things. Say I want to require OpenCV of at least version 3.0: this is a super simple CMake command, while the make equivalent is probably not standardized and uses some strange syntax I would have to include in every makefile where I need the same functionality.


The Make equivalent is just the set of commands you would use to generate OpenCV plus a target and dependencies. If you can shell script you can write a Makefile.


My point is that "find_package(OpenCV 3.0 REQUIRED)" is a whole lot simpler to remember than whatever the equivalent shell commands are to enforce a minimum version of a library. What you are suggesting is building the exact version for use with this specific project, which is different from telling the developer "I checked your environment and your version is too low." I'm just speaking from my experience; if you are very experienced with shell and can reproduce this behavior more quickly, then more power to you. This is similar to the "why rewrite standard libraries in every project" debate: to count items in a list in Python, I can loop through and update counts in a dict, or I can use "collections.Counter(list_)" and it does it for me. CMake provides ample standardized methods for C++ builds that I don't have to rewrite over and over again for every project.


I think CMake is quite established. I don't like these kinds of holy wars (I like vim too :-)); I just want to get stuff done. ROS lets me do that.

I've seen several times that companies write their message passing stuff on top of ZeroMQ, MQTT and what-have-you, when they could have spent all that time doing something novel instead, at the cost of using less-than-perfect tools made by someone else that a lot of other people are also using.


It's not even really supported outside specific Ubuntu releases I think, so for anything else you're on your own mostly.


CMake is pretty established. And you don't have to use it if you don't want to: you can just compile your code however you like and manually put all your package files where ROS expects them to be, or use symlinks or some other hack. It doesn't seem worth it just to avoid CMake, though.


The build system is effectively almost required, though, because the documentation telling you how to do anything in ROS assumes you're using Catkin. I wouldn't know how to arrange files for ROS to use without going through the build system.


You actually can use just cmake, and that is all you need!

  [in any folder out of source]
  mkdir build
  cd build
  cmake [source folder]
  make
  source devel/setup.bash
  roslaunch your_fancy_package demonstrator_nodelet_or_whatever.launch
That works, but you can't mix it with catkin_make or other build tools (rosbuild). I.e. you can't just invoke catkin_make there; this won't work AFAIK.

The source folder must contain a main CMakeLists.txt symlink which points to `/opt/ros/[distribution e.g. kinetic]/share/catkin/cmake/toplevel.cmake`. This symlink will be created when calling "catkin_init_workspace". You can also just put a copy of that file there to commit it in git. I do this and even modify it to include my own cmake modules and debugging stuff.

You can now layout your folders any way you want. That means the packages don't need to all be in the same folder, but can be grouped as necessary.

Example project layout:

  project
  ├── cmake
  |   └── FindSomePackage.cmake
  ├── doc
  |   └── index.md
  ├── project_msgs
  |   ├── msg
  |   |   ├── Foo.msg
  |   |   └── Bar.msg
  |   ├── CMakeLists.txt
  |   └── package.xml
  ├── project_utils
  |   ├── include
  |   |   └── project_utils
  |   |       └── foo.h
  |   ├── launch
  |   |   └── ...
  |   ├── scripts
  |   |   └── do_stuff.py
  |   ├── src
  |   |   └── foo.cpp
  |   ├── CMakeLists.txt
  |   └── package.xml
  ├── components
  |   ├── heisenberg_compensator
  |   |   ├── include
  |   |   |   └── ...
  |   |   ├── src
  |   |   |   └── ...
  |   |   ├── CMakeLists.txt
  |   |   └── package.xml
  |   ├── warp_controller
  |   |   ├── include
  |   |   |   └── ...
  |   |   ├── src
  |   |   |   └── ...
  |   |   ├── CMakeLists.txt
  |   |   └── package.xml
  |   ├── ...
  |
  ├── .gitignore
  ├── readme.md
  └── CMakeLists.txt -> /opt/ros/[distribution e.g. kinetic]/share/catkin/cmake/toplevel.cmake
        should contain set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_SOURCE_DIR}/cmake)
You can put software components/packages in folders structured like project_utils / project_msgs at any sublevel. I don't know what happens when you directly nest them, though; I wouldn't do that.


It's not required if you know CMake and know what you're doing. There's even resources, if you search for them, e.g.:

https://github.com/gerkey/ros1_external_use

But if you're looking for good documentation and tutorials, then yes catkin is basically required, but I don't think that's unreasonable.


ROS runs on Mac, and ROS 2 adds Windows support.


Agree. I used to work with ROS during a semester of AI. We had to build a robot that drove itself, built on top of a Roomba vacuum cleaner with a Kinect for vision. I was super familiar with Ubuntu/Debian, but ROS was a mess. The materials the course staff provided were more ROS troubleshooting guides than initial setup tutorials. It was a lot of fun, but I couldn't imagine building something with real-world applications on top of it. Outdated packages; lots of unpredictable failures. If the original purpose was only prototyping for students, then it works well for that.


ROS started development almost 10 years ago. A lot of "high quality" solutions that seem like a good idea now may not have been around that long ago, or were still in nascent stage. Many of the changes in ROS2 are actually about incorporating some of these solutions -- the most obvious example being the switch to DDS for messaging.

Regarding catkin, I actually think it's an OK solution with some nice features. It's mostly a set of CMake macros that work with a certain conventional structure for packages. If you look at some other (newer) C++ package managers out there, like Hunter or FIPS, it's essentially the same idea. And catkin also supports Python code [1]. It's far from ideal, but it's a hard problem. Like many things in ROS, it's a "worse-is-better" kind of solution.

[1] Not as well as it could, re:pip integration and such; ROS2 supposedly has a better solution.


> ROS started development almost 10 years ago. A lot of "high quality" solutions that seem like a good idea now may not have been around that long ago, or were still in nascent stage.

In my opinion this is no excuse. 10 years ago was 2008 and we had rock solid solutions to these problems - I don't think it was necessary to build custom solutions (and AFAIK catkin is already the successor of rosmake).

But rereading my top-level comment it does seem a little bit harsh, so I want to apologize and hopefully do not discourage ROS developers and the community. ROS is still useful and at least there is an organized effort to make an open-source robot software abstraction layer.

Note that the robot community (especially in industrial settings) is plagued with closed-source proprietary software that is even worse, so despite its flaws ROS is a step in the right direction.


Can you detail some of the rock solid solutions in 2008 for the problems addressed? For example, building multiple C++ packages from source. Google's Bazel wasn't open sourced until 2015, so solutions were still appearing long after 2008.


Actually, I don't really understand the problem. What do you mean by "C++ packages"? AFAIK C++ still has no concept of packages (although this has been proposed), so you probably mean how to build C++-based software located in different directories, or how to do hierarchical builds in general?

That is supported by most build systems, plain old make can do it, CMake can do it, scons can do it, the list goes on and it is actually not something special in software development. It's not like we were unable to build software consisting of multiple "packages" before catkin or bazel.

I'm not saying that these existing solutions are perfect and the problem in general (dependency hell) is pretty hard to tackle. It's just that I don't see how catkin and the workspace concept help here.

Btw., did you know that catkin_make, the successor of rosmake, is already old-school?

It's now "catkin build". Oh no wait, that's again uncool if you go with ROS2, where we have the meta-meta-build system "ament".

#edit: I kid you not, it's now apparently "colcon".

See e.g. the discussion at https://discourse.ros.org/t/colcon-amend-tools/4685 and the rationale at http://design.ros2.org/articles/build_tool.html.


catkin_make and catkin-tools (where "catkin build" comes from) both use catkin. They're just different CLI tools, and for the most part compatible, and have been fairly stable for about four years now. How many build tools have come out for JS over that period?

Workspaces are actually super useful. They're a similar idea to virtualenv and conda environments. If you don't find those useful I don't know what to tell you.
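The overlay idea behind workspaces can be sketched roughly like this (illustrative paths, not the actual catkin resolution code): install spaces are searched in order, so a source checkout in your development workspace shadows the same package in the base system install.

```python
# Ordered list of install spaces: development workspace first,
# base system install last. Package names and paths are invented.
overlay = [
    {"nav_pkg": "~/ws/src/nav_pkg"},                    # dev workspace
    {"nav_pkg": "/opt/ros/kinetic/share/nav_pkg",       # base install
     "tf_pkg": "/opt/ros/kinetic/share/tf_pkg"},
]

def find_package(name):
    """Return the first match in overlay order: workspace wins."""
    for space in overlay:
        if name in space:
            return space[name]
    raise KeyError(name)

print(find_package("nav_pkg"))  # the workspace copy shadows the base
print(find_package("tf_pkg"))   # falls through to the base install
```

This is essentially what sourcing a workspace's setup script arranges via environment variables, and it is indeed the same shadowing trick virtualenv plays with `sys.path`.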

Since ROS2 is essentially a fresh restart I'm not surprised there's some churn. I think most people expect it to be fairly unstable for a while.

I've been working with ROS since nearly the beginning (as well as other middleware solutions) and I definitely see a lot of its flaws, but understand the rationale for a lot of the decisions. It's like C or C++, a 'worse-is-better' solution that's overall pretty decent, specially considering it's open source.


> catkin also supports Python code

I can appreciate that if you're writing C++ with some Python, this is probably useful. My experience with it was a project that was entirely written in Python, and suddenly having a complex, unfamiliar build system to deal with definitely felt like unnecessary complexity.

As you say, it's a hard problem. You could imagine an easier option for pure Python code, but that just moves the hurdle to the first time you want to compile something in C++.


Super useful, in fact. A lot of the code I write follows a typical pattern of bottleneck code in C++ with python wrappers, or a mixture of pure python nodes (for the things that don't have to be super fast) and pure C++ nodes. This is fairly common throughout ROS. If catkin played better with current Python packaging and virtual environment systems (virtualenv/conda) it would be so much better, though.


ROS beginner here. I've been experimenting with ROS in my spare time for about a week, with the ultimate goal of building a trainable robot. So far, I've built a custom robot arm in gazebo, converted it to URDF, and hooked up gazebo to ROS. The next step is to start controlling joints.

A few years ago I tried to run a project based on ROS and gave up in the end due to compatibility problems (I was installing ros packages on top of an existing Ubuntu and it caused all kinds of problems). Now that I'm using their pre-built docker image and took the time to read the tutorials, I'm finding it more reliable and much nicer.

The tutorials are very clear and all the steps have "just worked" for me, although the individual packages don't always seem very well documented.

catkin doesn't seem like a heavyweight build system - more like an opinionated layout combined with some cmake macros and some scripts, built on top of cmake. I can see how this would be useful for myself as well as real roboticists to develop multiple packages in parallel without having to poke through complex and disparate build systems. A robotics student would want to dive into C++ or Python as fast as possible as opposed to figuring out the build system, and I think catkin serves this purpose.

Also, it's common for inexperienced Linux/Mac users to get confused between multiple installations when they have to compile things from source, e.g. OpenCV. I wonder if the ROS command line tools are partly designed to help students overcome that obstacle, i.e. the pattern roscommand packagename rosoptions.

I'd also argue against the added complexity of avoiding single points of failure during early stage robot development. Having to think about distributed systems and SRE would be a huge distraction from the already complex task of designing a robot and its software. I also very much hope that ROS isn't currently being used in self-driving cars without a lot of safety systems including hardware ones.


The bugginess, the often absent documentation, and the lack of tutorials that go beyond turtlebot are what irk me most. Sometimes fundamental functionality, such as message synchronization, is just not usable.


My experience has been similar. Every real system I have worked on starts with ROS for fast prototyping, then slowly replaces each component with an in-house replacement due to recurring frustrations.


I wrote a blog article recently wherein I propose that cloud-native infrastructure can be a better foundation for robotic software stacks than ROS/ROS2. I've been involved in several ROS-based robot & robot-sensor efforts and (like many others here) have found it lacking in major ways.

https://capablerobot.com/blog/2018/2018-05-09-wheel-reinvent...

There is a whole host of performant, lightweight, open-source tools for distributed tracing, pub/sub, logging, serialization, service discovery, KV stores, visualization, etc. Right now it seems the robot world (including me) ignores this fact and continues to reinvent wheels (poorly).

I'm curious for feedback on this idea and will be publishing updates as I experiment with this idea. I'll likely be starting with timing & latency experiments between ROS, ROS2, and other cloud-native pub/sub brokers.


Also see ROS Industrial [1] for extensions to the ecosystem to include common industrial robots.

[1] https://rosindustrial.org/about/description/


A small recent-ish discussion: https://news.ycombinator.com/item?id=15530813


ROS can be an super enjoyable and productive environment to build software in. It gives you access to a set of common sensor and actuator drivers, motion planning, navigation, perception libraries, and implementations of the latest IROS papers. I’ve seen it used to prototype lots of non-robotics applications too; really any local distributed system integrating, calibrating, and coordinating sensors. Personally I think the standardized data types it provides for robotics and perception data is one of its greatest assets [1], and could be a standards effort in itself. And it’s home to a great community.

But it definitely has its limits, especially as applications mature, and the ecosystem can feel siloed from the technology landscape evolving so quickly in, for example, backend distributed systems. Knowledge transfer between the robotics community and the software engineering happening in distributed systems, databases, gaming, web development, etc. is a huge untapped opportunity. Hit me up if you want to talk more about this!

[1] e.g. http://wiki.ros.org/sensor_msgs


Anyone using this.. what have you built?


I did my bachelor's thesis on this in 2013. We were working on a robot built for exploring disaster sites (earthquake, fire, ...).

I built:

1) Virtual camera: the robot was equipped with 6 fixed cameras. You can position a virtual camera anywhere in the coordinate space, and it constructs its image from the information in the real cameras.

2) A system for semi-automatic external calibration of cameras from a LIDAR sensor. External calibration is the process of estimating the position and orientation of the camera. This system was not as precise as calibration using calibration patterns in a lab, but it was useful when the robot was in the wild and you wanted to adjust a camera or quickly strap on a new one. To be frank, though, I am not sure if the system was ever actually used that way; I left the lab after finishing my thesis.

The ROS ecosystem is reasonably simple to understand and definitely makes things much easier. Apart from some issues with compatibility between versions and incomplete functionality, I remember it as quite a nice platform. On the other hand, I have not tried any of its competitors, if there are any.


That's pretty cool! I'm particularly interested in how you made the virtual camera work.

Can I read your thesis, or (dare I ask) the source code?


The virtual camera was done during a summer project, but the idea, IIRC, is quite simple.

I will try some hasty writeup ;)

1) Create a virtual image plane (or cylinder/sphere for panoramic camera) in the 3D space. For this you need to specify the virtual camera's position, orientation, focal length and FOV.

2) Define pixels on the image plane. You need to specify the image resolution and just create a grid on the image plane. Each pixel corresponds to a 3D point in the world coordinate system.

3) To get the color of each pixel, you project its corresponding 3D point into all the real cameras. Usually only some of the cameras see the point. Each camera that sees the point gives you a color according to the color of the corresponding pixel in its image. You then have to blend these colors. What worked well in our case was to give more weight to cameras where the point projected close to the center of their image.

As you can see, there will definitely be some artifacts, usually more the farther the virtual camera is from the real cameras. However, it worked surprisingly well. It was even possible to view the robot in third person, even though all of the real cameras were mounted on it.

It is also crucial to have well-calibrated cameras. Apart from their projection matrices, you also need to account for their radial and tangential distortion, which was very significant with our camera models.
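A hasty sketch of step 3 in code, under heavy simplifications (pinhole cameras all looking along +z with no rotation, no lens distortion, grayscale "colors"; the camera positions and sample values are invented for illustration):

```python
import math

def project(point, cam_pos, focal):
    """Pinhole projection of a world point into a camera looking
    along +z: returns the pixel offset (u, v) from the image center."""
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    return focal * x / z, focal * y / z

def blend(point, cameras):
    """Weighted blend of the colors the real cameras report for one
    3D point, favoring cameras where the point projects near the
    image center (as described in step 3)."""
    total_w, color = 0.0, 0.0
    for cam_pos, focal, sample in cameras:
        u, v = project(point, cam_pos, focal)
        w = 1.0 / (1.0 + math.hypot(u, v))   # near-center -> high weight
        total_w += w
        color += w * sample
    return color / total_w

point = (0.0, 0.0, 5.0)
cams = [((0.0, 0.0, 0.0), 500.0, 100.0),   # sees the point dead center
        ((1.0, 0.0, 0.0), 500.0, 200.0)]   # sees it far off-center
print(blend(point, cams))  # close to 100: the centered camera dominates
```

A real implementation would also apply each camera's full projection matrix and distortion model, and check that the point actually falls inside each image.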

https://www.wikiwand.com/en/Pinhole_camera_model

https://www.wikiwand.com/en/Camera_matrix

https://www.wikiwand.com/en/Distortion_(optics)


It's a huge ecosystem of components, but from what I understand it's pretty ubiquitous in the research robotics and commercial space.

My startup is using it to power an autonomous customer service robot. ROS modules handle things like pose calculations, trajectory planning, collision avoidance, and basically all other interaction between software and hardware. From my perspective as a developer on the user-facing software, it's basically a simple event bus and RPC framework that has a load of prebuilt datatypes and modules for handling common robotics tasks.

Caveats – a lot of modules seem to follow pretty sketchy coding practices compared to what I'm used to in other industries. It seems to be a bit of a showcase of bad design. There also seems to be a lot of version confusion – lots of packages seem to support different subsets of versions, with most of the community possibly on older releases. There's an ongoing ROS 2.0 effort that I think uses DDS as the transport. But I'm not a ROS expert, so take the above with a grain of salt!


Yeah, ROS is a very good example of worse being the enemy of good (to mangle some sayings). It's a common framework which does basically everything you'd want to do in a robotic system, with a bunch of useful functionality on top of it, but basically every part of it is incomplete, poorly documented, awkwardly designed, and buggy. However, since there's basically nothing else which addresses the same space, people feel compelled to use it. I know of two robotics projects, one with a pretty substantial budget, which have been basically hamstrung by developing initially on ROS and running headlong into its limitations while being unable or unwilling to pay the cost of moving off of it.


So basically, ROS is the UNIX of robotics (i.e. "worse is better", + judging by how many serious projects using it are mentioned elsewhere in this comment thread)?


Kind of. Fundamentally, it has never had much competition due to the narrowness of the scope. It is ubiquitous in university robotics due to its accessibility, but in my experience most (successful) industrial robotics applications will simply provide an interface to ROS: internally they will not use it at all.

One thing with ROS is that reimplementing it is not particularly difficult: I have, for various reasons and projects (in some cases before ROS was available) implemented basically all the core components of it (to similar levels of completeness and reliability). So the cost of doing it yourself is actually not all that high (and I think people overestimate it).


NASA's Valkyrie and Robonaut 2, plus Fetch, Baxter, and a couple of industrial arms via ROS-I. I know it's pretty horrid, but it's also the devil I know, and it allows for rapid prototyping. In addition, the API hasn't changed much, so I can walk away for months, come back, and still use the robot. Several organizations support OSRF, but they're still just a couple of people. It's all open source, so I've been able to document bugs and submit fixes. That said, they're still robotics people doing a systems job, and I've so far failed to convince systems people that robotics is the next mobile.


> but they're still just a couple of people.

27 people are listed on their staff page.

https://www.openrobotics.org/team/


Woah, guess I lost track..


Home service robots, controlled via 'natural' language: https://www.youtube.com/watch?v=b0P2MqmQ-DI

Wall cleaning robot (still testing): https://www.youtube.com/watch?v=4dMjt_Pj4og

There are some nice projects on https://discourse.ros.org/c/ros-projects, I really like Kyler Laird's Tractobots: https://www.youtube.com/watch?v=oPI7f5kQRKo


The whole ROS ecosystem is huge, most of research platform robots are ROS enabled. Check the PR2 (http://wiki.ros.org/Robots/PR2), Turtlebot (https://www.turtlebot.com/), the Nao series (http://wiki.ros.org/nao), even Boston Dynamic's Atlas robot (http://gazebosim.org/tutorials?tut=drcsim_atlas_siminterface...) or Nasa's Robonaut (http://wiki.ros.org/Robots/Robonaut2).

It saves work: you don't have to rewrite a bunch of code every time. Also, most robotics-related libraries, like OMPL or TF, are supported.


Well, apart from adding some fish[1] support to ROS core I worked on these guys[2] at Vecna and they mostly use ROS. A pretty common pattern in commercial use of ROS is to start out with a vanilla setup and replace important nodes with ones you've written to work the way you need them to.

At my current job we don't use ROS but being able to look at the ROS drivers of commercial arms is a big help in making our own drivers.

[1]https://fishshell.com/

[2]https://robotics.vecna.com/


Academics use ROS a lot. The canonical ROS paper has been cited 4,714 times. This is an extreme outlier.

https://scholar.google.com/citations?user=fMDLYCUAAAAJ&hl=en...

Use the 'cited by' link to see thousands of robot projects that use ROS:

https://scholar.google.com/scholar?oi=bibs&hl=en&cites=14376...

(I'm not affiliated with Open Robotics)


I published a research paper as an undergrad on improving an existing algorithm for a team of mobile robots to navigate space and create a map of their environment.

I also used it as an intern at a robotics company to use a microsoft kinect as a depth sensor for helping the robot perceive space in three dimensions.



I've used ROS 1 and worked with DDS in the past, but never did anything with ROS 2 yet. Has anyone used ROS 2 and has something to say about it?


To get ROS and a few more robot-related software tools and libraries as nice binary packages, robotpkg might also be of interest: http://robotpkg.openrobots.org


How does this relate to, or compare with, say, BrainOS?

(Please forgive the newbie question. I’m totally completely ignorant about robotics operating systems but interested to learn.)


I don't know BrainOS, but the most obvious difference is that ROS is open source, so you can prototype or tinker with it easily. BrainOS seems to be a totally closed solution, and details are sparse on what it actually runs and does.

ROS has a BSD license, so it is even possible that BrainOS is actually using some ROS packages or resources. After a quick look, most SW positions at BrainOS mention that having prior experience with ROS is a plus.


Could someone explain for a newbie: if I'm building a robot, why would I use this and not just write a Python script running on Linux?


You can use ROS with Python and Linux. In fact, that's a big chunk of actual ROS-based systems out there, with most of the rest being C++ and Linux.

For simple robots, you can get away with a simple script. But say, you want to attach more sensors, add some visualization, some logging, etc. Pretty soon you'll find yourself writing multithreaded (not great in Python) and/or multiprocess code, some kind of simple GUI to visualize data, some logging code, some way to manage parameters... and before you know it you'll find yourself writing your own little version of ROS (or more generally, robotic middleware). And without the huge ecosystem of ROS packages that might solve all sort of problems for you.


Good points. You also may find yourself wanting to have more than one robot. And then you may want them to talk to each other.


Robots usually consist of many complicated components. ROS provides a uniform, consistent way for those components to communicate with each other via an API and a pub/sub mechanism. That way each of your components can handle its own logic with whatever software it likes, as long as it can connect to the ROS API and let ROS handle orchestrating the components – a bit like a conductor in an orchestra.
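Alongside pub/sub, ROS also has a request/reply ("service") pattern for the API side of this orchestration. Here's a toy sketch of that idea in plain Python – not real ROS code, and the service name is invented for illustration:

```python
class ServiceRegistry:
    """Minimal request/reply registry: handlers advertised by name,
    callers invoke them by name with a request payload."""

    def __init__(self):
        self._handlers = {}

    def advertise(self, name, handler):
        self._handlers[name] = handler

    def call(self, name, request):
        if name not in self._handlers:
            raise KeyError(f"service {name!r} not available")
        return self._handlers[name](request)

services = ServiceRegistry()
# One component advertises a capability; others call it without
# knowing which process or language implements it.
services.advertise("/add_two_ints", lambda req: req["a"] + req["b"])
result = services.call("/add_two_ints", {"a": 2, "b": 3})  # 5
```

Unlike pub/sub, the caller blocks for an answer, which is what you want for things like "plan me a path" rather than streaming sensor data.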


Could somebody please explain to me what I need in addition to ROS for functional safety? Some certified laser scanner? Do I understand it right that ROS completely ignores functional safety?


ROS is not a suitable tool at all. Functional safety is a totally different beast and SW is only a small part of it.

Don't even think about implementing functional safety on your own without a few years of practical safety experience. Even if you get your hands on all the required hardware parts, you will need to spend a lot of time on making sure you tick all the boxes in relevant standards and convincing notified bodies you did your homework. Don't try to do this on your own, hire an experienced professional right from the start of any project which might touch on functional safety.


If you mean trying to create a safety-related or safety-critical system, then the particular set of libraries you use is one very small part of the problem.

The system design itself is examined as well as how it is implemented. The examiners will want to see proof that your system fails in predictable and deterministic ways suitable for their usage.

Also, systems dealing with motion control typically require hard real-time response, so you would not be running this on your average Linux system either.

You need to be prepared to have your code and system design examined in great depth, a very long and expensive process generally.


Note concerning the system: you will most likely need dedicated, redundant and/or diversified HW all the way from sensing through computing down to the actuator system. At least for industrial robots and cobots; not sure about social robots and such.


The best thing to do, if you can, is to isolate all functional-safety-related hardware and software into a smaller and simpler subcomponent of your entire system. Then you can prove that no matter what ROS does, it cannot cause harm. This also makes it easier to certify, since your safety-related functions are isolated and hopefully smaller and simpler!


Thank you for explaining this issue. So a commercially available robot needs a Jetson TX2 (or Wandboard) with ROS, plus some hardware from a safety tools vendor like Pilz or SICK, and the whole safety assessment from TÜV with an official certificate. That was helpful!


Depends on what your market is. The suitability of a Jetson TX2 for industrial robots is quite debatable, for instance. It might be sufficient for service robots, but that really depends on the notified body accepting your argument. Also don't forget that the actuators (brakes? power?) are part of the safety architecture as well.

Word of advice: you need to have a 100% watertight concept, otherwise your product will _not_ make it to market. Don't try to guess your way around the safety system; it will cost you dearly later on.


What market are you looking at? It's all about risk... if you're targeting the medical domain, for example, you have a multi-year project just to get through the certifications. Conversely, if you're just looking at creating a toy with limited movement, weight and power, then the risk is significantly less.

Get an engineering safety consultant to have a look.


Industrial cleaning. The machine is slow, but has many liters of clean and dirty water inside. Just prototyping it and thinking about how to achieve a reasonable price; 30-50k is too expensive, a janitor is cheaper.


If you want a solution, don't develop your own. Industrial robots that have already had the certification and engineering done can be purchased on eBay for <10k. Then you just need to do the easier job of engineering your specific task, rather than an entire robot plus your task.


Robotic lawnmowers may be a good application to draw inspiration from. They need to be low cost and likely have similar safety requirements to meet in terms of avoiding collisions and going out of bounds.


Please stop using names for version releases. Is "awesome igloo" newer than "super turtle" or "raring rino"?

Just stop. Use boring versioning numbers. Your developers and users will thank you.


They're alphabetical, lol. Plus cool shirts.


The software is versioned with numbers; the codenames are for releases, which can cover ranges of version numbers, or help differentiate stable from testing, etc. Codenames help developers and users make those distinctions.


Decimals do that job with fewer characters.


Tell that to Android



