Disclosure: Previously worked for an industrial automation integrator and currently work for a robotics startup.
Weird paper. Reads like a survey of the state of the art from a decade+ ago. Many of the things mentioned are already mature areas of focus for startups and major players in the industrial robotics space. Many startups are working on the portability, ease-of-integration, and ease-of-use problems [0][1]. Cloud-oriented tools tailored to robot, line, and facility level health-and-status monitoring (even beyond the capabilities typically provided by SCADA systems) have been available to buy from major players for a long time [2]. I know Big 3 automakers have been toying around with integrating collaborative robots for years (and another OP already mentioned Rethink). Simulation tools run a pretty wide range of usefulness and market penetration, from first-party tools like ABB's RobotStudio or Fanuc's ROBOGUIDE to tools in the ROS stack to Nvidia's ISAAC simulator [3].
To be fair, I only scanned it, but I did not detect hints of anything useful except for those who are entirely unfamiliar with the industrial manufacturing and robotics space.
This is from the aero-astro department, not EECS. Their timetable is different; what's state of the art is both way behind and incredibly advanced compared to other industries.
(I took 16.001/16.002 though I was never a course 16 major; learned a ton of interesting stuff, but probably the most useful thing I learned was that different fields have utterly different Weltanschauungen)
Strange. That reads like a pitch for Rethink Robotics, an MIT spinoff, from 10 years ago. They were going to make robots that could work alongside humans and do so cheaply. "Baxter" was their first product. It was a flop. The robot was not precise enough for assembly, using it to move things from one place to another wasn't cost effective, and the "learning" wasn't that helpful.
This paper should have examined that critically and discussed what has to be fixed for the next try at this.
Reading this as a controls engineer who frequently deals with industrial robotics, it looks like there are three main points being made:
1. Industry 4.0/Cyber-physical systems: Basically, the process of getting a database server to speak EtherCAT, Profinet, or EtherNet/IP and upload tag values from all the PLCs in the plant. Then you run some statistics, flag trends or anomalies, and have a process engineer squint at the most promising graphs. This is a lot easier than keeping track of cycle time, average sensor values, faults, etc. on a whiteboard or clipboard (and don't expect vendors of traditional HMIs to make this easy to do on individual screens... HMIs are one of the most regressive parts of the industry). It's not as complicated as the C-level/government sales pitch makes it sound, but it's not a bad idea. (A rough sketch of what this looks like follows the list.)
2. Safety around collaborative robots is hard. You want the robot to be able to accelerate heavy objects fast, but also not to hurt the squishy human sharing the same volume. Part of this can be fixed by improved electronics that detect ever-smaller torque anomalies, so your 1 kg payload robot trips on a force more like a 2 kg payload and comes to a halt after gently shoving the operator. But there's a fundamental limit: no matter the sensor resolution or how fast you can stop, contact between your 100 kg, 2000 mm/s cast steel robot arm and somebody's skull is going to hurt the human. (Back-of-envelope numbers after the list.)
3. Anyone familiar with natural-language programming, voice interfaces, or domain-specific languages will probably see the hazard they run into with the second point:
> Improved programming and communication interfaces can enable humans with little programming experience to control robots to perform a variety of tasks, while communication interfaces enable robots to communicate with other hardware and software.
Yes, robotics manufacturers are improving their software - your teach pendant is probably a color touchscreen now, instead of a 480x320 monochrome display with a few softkeys. But solving hard problems is always hard, and it's probably always going to take a programmer (whether or not that's part of their job title) who understands the solution to express it precisely enough for the software to act on it. Otherwise you've just built a more complicated system that only usually solves the problem, and dropping back to a level that actually fully expresses the decision tree leaves you with an even more complicated interface (toy example below). There's certainly a lot of room to make the interface easier to use, but it's always going to take some programming. Doubly so if you're trying to integrate vision or additional analog axes. Stick with digital sensors, open/close grippers, and highly tolerant processes if you want it to be easy enough for an operator to pick up a pendant and do useful work.
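To make point 1 concrete, a minimal sketch of the whole pattern: poll tag values off the PLCs, keep a rolling history, and flag anything that drifts. Everything here is assumed for illustration - the tag names, addresses, thresholds, and the read_plc_tag() stub standing in for whatever driver you actually use (pycomm3, an OPC UA client, a vendor gateway):

```python
import random
import statistics
import time

# Stand-in for whatever actually talks to the PLCs (pycomm3, an OPC UA client,
# a vendor gateway). Here it just fakes a sensor reading so the sketch runs.
def read_plc_tag(plc_ip: str, tag: str) -> float:
    return random.gauss(10.0, 0.5)

# Addresses and tag names are made up for illustration.
TAGS = [("192.168.0.10", "Press1.CycleTime"),
        ("192.168.0.11", "Oven2.ZoneTemp")]

history = {tag: [] for _, tag in TAGS}

for _ in range(1000):                      # real version: while True, plus a historian/DB insert
    for plc_ip, tag in TAGS:
        value = read_plc_tag(plc_ip, tag)
        samples = history[tag]
        samples.append(value)
        del samples[:-500]                 # keep a rolling window of recent values
        if len(samples) > 30:
            mu = statistics.mean(samples)
            sigma = statistics.stdev(samples)
            # crude flag: more than 3 sigma from the rolling mean
            if sigma > 0 and abs(value - mu) > 3 * sigma:
                print(f"ANOMALY {tag}: {value:.2f} (rolling mean {mu:.2f})")
    time.sleep(0.01)                       # poll interval; think seconds in real life
```

The "flag trends" part is just this loop plus a dashboard; the hard part is the plumbing and deciding which of the flagged graphs the process engineer should actually squint at.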
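On point 2, the back-of-envelope arithmetic. The numbers are illustrative (not from the paper or a standard), but they show why better torque sensing alone doesn't make a big, fast arm safe:

```python
# Rough arithmetic on why a big, fast arm can't be made "safe by software" alone.
# All numbers are illustrative.
mass_kg = 100.0          # moving mass of arm + payload
speed_m_s = 2.0          # 2000 mm/s TCP speed
kinetic_energy_J = 0.5 * mass_kg * speed_m_s ** 2    # = 200 J

# Even if contact is detected instantly and the arm stops within, say, 50 mm,
# the average force on whatever it hit is roughly energy / stopping distance:
stop_distance_m = 0.05
avg_force_N = kinetic_energy_J / stop_distance_m     # = 4000 N

print(f"{kinetic_energy_J:.0f} J over {stop_distance_m * 1000:.0f} mm "
      f"is about {avg_force_N:.0f} N average force")
# The transient/quasi-static contact limits in ISO/TS 15066 for the skull and
# forehead are on the order of a hundred-odd newtons, so this is way over no
# matter how good the torque sensing is. The fix is less mass and speed near
# people, or keeping people out of the volume entirely.
```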
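And the point 3 problem in miniature: the demo version of a task versus the version that survives a week on the floor. The motion, gripper, and vision helpers below are made up; the structure is the point - every branch is a decision somebody has to understand and express precisely, whatever the interface looks like:

```python
from dataclasses import dataclass
from typing import Optional

# Throwaway stubs so the structure below runs; a real cell has a motion
# controller, a gripper I/O handshake, and a vision system behind these.
@dataclass
class Pose:
    approach: str
    grip: str

class Robot:
    def move_to(self, target): print("move", target)
    def close_gripper(self): print("close gripper")
    def open_gripper(self): print("open gripper")
    def gripper_confirmed(self): return True
    def fixture_clear(self): return True
    def signal(self, msg): print("signal:", msg)

class Vision:
    def locate_part(self) -> Optional[Pose]:
        return Pose(approach="above_part", grip="at_part")

# The demo-video version of the task:
def pick_and_place_easy(robot: Robot):
    robot.move_to("pick_point")
    robot.close_gripper()
    robot.move_to("place_point")
    robot.open_gripper()

# The version that survives contact with production:
def pick_and_place_real(robot: Robot, vision: Vision, retries: int = 3) -> str:
    for _ in range(retries):
        pose = vision.locate_part()          # vision brings its own failure modes
        if pose is None:
            robot.signal("no_part_present")
            continue
        robot.move_to(pose.approach)
        robot.move_to(pose.grip)
        robot.close_gripper()
        if not robot.gripper_confirmed():    # part slipped or double-picked
            robot.open_gripper()
            continue
        robot.move_to("place_approach")
        if not robot.fixture_clear():        # last part never got unloaded
            robot.move_to("reject_bin")
            robot.open_gripper()
            return "rejected"
        robot.move_to("place_point")
        robot.open_gripper()
        return "ok"
    robot.signal("operator_assist_needed")
    return "failed"

pick_and_place_real(Robot(), Vision())
```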
Unlike some of our competitors, my company typically does not lock customers out of their teach pendants as a condition of keeping the warranty intact. That does sometimes mean I have to come back and touch up a point: the operator saw the robot occasionally miss the target, had their technician nudge it a little too far, and now parts at the other end of the tolerance range don't fit anymore. But typically both sides are acting in good faith and we work through it.
I think it's going to take some third-party robot software like RoboDK to make this more intuitive. Right now, Yaskawa/Fanuc/Kuka/ABB/Epson/Denso etc. are burdened with:
A) Customer bases who are productive with their legacy teach pendants and Pascal-like programming languages, and have a significant familiarity with their 2000-page manuals.
B) Robot operating systems that work. It's hard to justify starting fresh, so the quirks and complexities of the old systems get carried forward.
C) Simulator/IDE software departments that are their own cost/profit center in the automation company. Sure, the company manufactures hardware, but mostly you program it with the pendant (which might itself be a line item); the offline IDE is an optional line item that costs several thousand dollars. The offline simulator is extra, vision programming is extra, CAM toolpath generation is extra, each new network interface is extra...
Because of these factors, existing players will have a hard time making things intuitive.
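To the RoboDK point above: part of the appeal is that offline programming collapses into an ordinary scripting language plus post-processors, instead of a per-vendor dialect and a paid IDE. Roughly what that looks like through its Python API - written from memory, so treat the import path, item names, and the I/O index as placeholders and check the current RoboDK docs:

```python
# Minimal RoboDK offline-programming sketch. The robot and target names are
# placeholders from an imagined .rdk station file.
from robodk.robolink import Robolink, ITEM_TYPE_ROBOT   # pip install robodk

RDK = Robolink()                              # connects to a running RoboDK instance
robot = RDK.Item("UR10e", ITEM_TYPE_ROBOT)    # fetch the robot from the open station

home = RDK.Item("Home")                       # targets defined in the station tree
pick = RDK.Item("Pick")
place = RDK.Item("Place")

robot.MoveJ(home)                             # joint move to a safe posture
robot.MoveJ(pick)
robot.setDO(0, 1)                             # close gripper (output index made up)
robot.MoveL(place)                            # linear move to the place position
robot.setDO(0, 0)
robot.MoveJ(home)
# The same sequence can then be post-processed into Fanuc TP, KUKA KRL,
# ABB RAPID, etc., which is the vendor-neutrality argument in a nutshell.
```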
Side note: Figure 2, the 2-position dial table with the light curtains, is a frequently-seen design but IMO a dangerous/bad one. I really hope that blue button is a latch reset, but given that the process is typically to replace the completed assembly in the nest with new unassembled parts, and carry the assembly to pack-out (so your hands are full), everyone involved really wants to make this system self-resetting so the table indexes when the light curtains are restored.

Operators and maintenance techs (like, I assume, this guy with the giant wrench) will frequently be tempted to step over the knee wall and into those two small voids adjacent to the dial to reach stuff at the back of the table, or put a knee up on the table to reach something. If they take their other foot out of the horizontal light curtain, fix whatever's broken or interrupt whatever sensor was blocking the condition that prevented the dial table from rotating, they're going for a ride, which is really, really bad.

That kneewall should fill the space around the dial so you can't step in the corners, and there should be a floor scanner just above the table (it can also perform the function of the vertical light curtain) to stop the machine when someone's anywhere in the entire volume.
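To make the self-resetting hazard concrete, here's a toy model of the two reset behaviors. Real safeguarding logic obviously lives in a safety-rated controller and gets validated against the applicable standards, not written in Python; the names below are made up:

```python
# Toy model of reset behavior for a light-curtain-guarded dial table.
# Real implementations belong in a safety PLC / safety relay, not application code.
class Cell:
    def __init__(self):
        self.curtain_clear = True     # nobody currently breaking the light curtain
        self.reset_pressed = False    # the blue button
        self.run_latched = False

    # The tempting version: the table may index the instant the curtains are
    # clear again. If someone has climbed past the curtain into the cell,
    # the machine can restart with them inside.
    def allow_index_auto_reset(self) -> bool:
        return self.curtain_clear

    # The safer version: breaking the curtain drops a latch, and the table
    # cannot index again until someone outside the cell deliberately resets.
    def allow_index_manual_reset(self) -> bool:
        if not self.curtain_clear:
            self.run_latched = False
        if self.reset_pressed and self.curtain_clear:
            self.run_latched = True
        return self.run_latched
```

Note that even the latched version can't see somebody already standing in those voids next to the dial, which is the argument for filling the kneewall and adding an area scanner.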
[0] https://www.ready-robotics.com/
[1] https://www.olisrobotics.com/
[2] https://www.rockwellautomation.com/en-us/products/software/f...
[3] https://www.nvidia.com/en-us/deep-learning-ai/industries/rob...