Multi-Agent Automation on the Factory Line

It is still early days for autonomous factory and manufacturing robotic systems. Robot arms have followed programmed orders for decades, but thinking for themselves is still very new.

Although such use of AI and AGI to give robots minds of their own is still very much in its infancy, great strides are being made. In this specific case, automation researchers Fredrik Danielsson and Bo Svensson of University West in Sweden have created an automation system in which robot arms make their own decisions and adapt to external circumstances. They continue to work even when something goes wrong in nearby systems, adapting to the changing conditions and doing their best.

Automation researchers Fredrik Danielsson [left] and Bo Svensson [right], together with a robot arm controlled by one of their embodied agents.
Credit: University West

The artificial intelligences responsible are separate from the robots themselves, added in place of traditional programming. As such they are embodied in the robots, and learn how the robots move, much like a human baby learning to crawl, though at a vastly accelerated rate. This means the hardware can be swapped around: different robots from different manufacturers can be added to the system, and the artificial mind will figure out for itself how everything works.

The artificial mind is not very powerful, of course; at the moment none of them are. But it is still a huge leap forward from how automated production lines currently work. Because current lines rely on robots following comprehensive programmed instructions, the robots won't know what to do should something go wrong with the incoming materials.

“A single error somewhere makes everything stop. For example, if a sheet metal is damaged an operator has to take it out and then reset and restart everything,” says Bo Svensson.

Unlike traditional lines, where a hierarchy is in place – a computer in control of the whole line passes orders to regional command units, which then pass precise orders to the robot arms – this system breaks that hierarchy completely. Every robot has its own separate AI, with no hierarchical links to the others.

They each know what they are supposed to be doing, and function independently, aware of the environment around them, and able to adapt on the fly to the actions of other robots and human operators around them.

If a half-assembled product isn't where it is supposed to be on the line, the AI will recognise what it is and adjust its own positioning to compensate.

“The agents know what neighbours they should communicate with and make small local decisions,” says Fredrik Danielsson.

An agent is triggered by what is happening next to it. The start signal for a machine may be that someone puts a piece of sheet metal in it; the machine then knows that it must drill. Things do not have to happen in a certain order. If a sheet is lost, the system continues to work with the other sheets, and the operator can insert a new part in the middle of the flow without disturbing the system.
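The event-triggered, neighbour-to-neighbour behaviour described above can be sketched in a few lines of Python. This is purely an illustrative sketch, not the researchers' actual P-SOP code: the `StationAgent` class, its method names, and the two-station line are all assumptions made for the example.

```python
class StationAgent:
    """One agent per machine; it reacts only to what happens next to it."""

    def __init__(self, name, operation, downstream=None):
        self.name = name
        self.operation = operation
        self.downstream = downstream  # the single neighbour it notifies
        self.log = []                 # record of work actually performed

    def on_sheet_arrived(self, sheet_id):
        # The start signal is local: a sheet appearing at this station.
        # There is no central controller issuing commands.
        self.log.append((sheet_id, self.operation))
        if self.downstream is not None:
            self.downstream.on_sheet_arrived(sheet_id)


# Two stations chained only through a neighbour link.
press = StationAgent("press", "stamp")
drill = StationAgent("drill", "drill")
press.downstream = drill

press.on_sheet_arrived("sheet-1")
drill.on_sheet_arrived("sheet-3")  # operator inserts a part mid-flow
press.on_sheet_arrived("sheet-2")

print(drill.log)
```

Because each agent acts only on local triggers, a lost sheet simply never generates an event, and a part inserted mid-flow is processed like any other; nothing stalls and nothing needs resetting.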

It may take up to a year to create a traditional automation system, and it is very difficult, time-consuming and expensive to adapt one to changing demands. In the agent-based system, however, you can easily insert and remove both equipment and operators, and it can produce an array of product variants, as it is easily reprogrammed. Agents are generated automatically, in minutes, by a software system codenamed P-SOP, developed by the researchers. P-SOP does not appear to stand for anything; even the paper describing the work in detail never explains the acronym. The operator gives P-SOP instructions in the form of a PowerPoint sketch of how the system should work, and P-SOP does the rest.

It does this by creating a virtual environment for each robot to interact in. The environment is a mirror world containing every element of the physical production line, represented in a form the robots can more readily understand. The robots track the virtual objects, whose movements sync up to the positions of the actual ones. Machine vision cameras and sensors, running rather specialised detection routines, feed this information into the system in the first place.
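The mirror-world idea can also be sketched briefly: sensor readings update virtual objects, and an agent queries the mirror rather than the physical line when deciding how to move. All of the names and the data model below are assumptions for illustration; the actual system's representation is not described at this level of detail.

```python
class MirrorWorld:
    """Virtual copy of the production line, kept in sync by sensor input."""

    def __init__(self):
        self.positions = {}  # object id -> (x, y) position in millimetres

    def update_from_sensor(self, object_id, position):
        # The vision/sensor pipeline pushes detected positions in here.
        self.positions[object_id] = position

    def locate(self, object_id):
        # An agent asks the mirror, not the hardware, where a part is.
        return self.positions.get(object_id)


world = MirrorWorld()
world.update_from_sensor("part-7", (120.0, 45.0))

# The part has drifted from its nominal slot on the line; the agent reads
# the actual position and offsets its own motion accordingly.
nominal = (100.0, 45.0)
actual = world.locate("part-7")
offset = (actual[0] - nominal[0], actual[1] - nominal[1])
print(offset)  # the correction the robot applies
```

The point of the design is that every agent works against the same continuously updated model, so a misplaced part shows up as a changed coordinate rather than a fatal error.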

“Then he presses a button and P-SOP spits out a bunch of small agents for different machines. I think this may be the next big step in automation,” says Fredrik Danielsson.

There are other similar automation systems vying for attention right now; the Baxter system, for example, is very similar in function. However, this is the first that uses independent agents rather than a strong AI system custom-developed for one specific robot: each artificial mind learns how to move, then watches keenly, adaptive like a human, for how it will need to do its next task.

References

Automation systems become flexible when robots make their own decisions

How Rethink Robotics Built Its New Baxter Robot Worker

Automatic Generation of Control Code for Flexible Automation (Paper, PDF)
The paper is in English, although the first few pages may suggest otherwise.

A Flexible Process Planning Platform for Flexible Automation (PDF)
This PDF slideshow represents the work at a much earlier stage. Designed for explaining the concept to students, it helps explain many of the basics that are still in place, and how the embodiment system works.
