Robotic Picking and the Beauty of Human Hand-Eye Coordination
The human hand still beats the robot when it comes to picking, but the gap is closing fast... learn how robots are tackling this complex challenge and how close we are to commercial viability of robots that can pick any object from any place.
The human hand is a wondrous device. The complexity of its design is such that even the most advanced grippers pale in comparison.
It does have its shortcomings, mainly in strength, stamina, and durability. But the range of tasks it can accomplish is staggering.
What makes the hand even greater is its connection to the eyes. Your arm and hand are not programmed to take one static path to grab something. Hand-eye coordination enables an infinite number of moves.
Consider the difference between these five tasks:
1. Pick that up from that place.
2. Pick any of these objects from those places.
3. Pick any of these objects mixed in a pile from multiple places.
4. Pick any objects up from any place.
5. Safely do 1 to 4 in the presence of humans.
Each is relatively simple for humans, but progressively harder for robots.
Robots have been doing the first task for a long time. The second is more challenging because it requires a gripper versatile enough to handle a range of items and to grab them from different angles and distances.
In the third, the robot must do everything from task two, plus be able to discern one item from another. This requires sophisticated vision software that can pick out a specific shape, determine the angle at which it lies, and decide how best to grab it from that angle, or abort and find an item that is easier to grab.
It also needs to know if it failed (no grab, dropped item, etc.) so it can try again. To accomplish this, we have to teach the robot about every item we want it to grab.
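The detect, grasp, verify, and retry loop described above can be sketched in a few lines. This is a minimal simulation under stated assumptions, not a real vision or robot API: `Candidate`, `detect_items`, `execute_grasp`, the grasp scores, and the thresholds are all hypothetical stand-ins for illustration.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of the pick-from-a-pile loop described above.
# Vision, grasp planning, and the gripper are simulated with stand-ins.

@dataclass
class Candidate:
    sku: str
    grasp_score: float  # planner's confidence that a grasp at this pose succeeds

def detect_items(scene):
    """Stand-in for the vision system: return candidates, easiest grasp first."""
    return sorted(scene, key=lambda c: c.grasp_score, reverse=True)

def execute_grasp(candidate, rng):
    """Stand-in for the robot: succeed with probability grasp_score."""
    return rng.random() < candidate.grasp_score

def pick_one(scene, rng, min_score=0.5, max_attempts=3):
    """Pick the easiest item, verify the grab, retry on failure, else abort."""
    for _ in range(max_attempts):
        candidates = [c for c in detect_items(scene) if c.grasp_score >= min_score]
        if not candidates:
            return None                 # abort: nothing easy enough to grab
        target = candidates[0]          # best-scored grasp first
        if execute_grasp(target, rng):  # "did it fail (no grab, dropped)?"
            scene.remove(target)
            return target.sku
        # failed grab: re-image the scene and try again
    return None

pile = [Candidate("mug", 0.9), Candidate("cable", 0.3)]
print(pick_one(pile, random.Random(0)))
```

The key design point the text makes is visible in the loop: the robot does not need a plan for every configuration, only the ability to score candidates, pick the easiest, and detect its own failures.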
In the fourth task, the number of angles, positions, and orientations of items is nearly infinite. There is no way to explicitly program a robot for each scenario: infinite permutations equal infinite software hours.
The new intralogistics robots use vision-based navigation, sensors, and machine learning to do things like work safely alongside humans and move through warehouses in efficient, safe patterns.
And rather than being fixed to the floor, today’s intralogistics robots can come in the form of smart, wheeled carts that navigate the warehouse with minimal guidance infrastructure.
While the ability of intralogistics robots to think may fall short of Sci-Fi visions, a collaborative picking robot knows it needs to stop when touched by a human, and the software guiding a robotic cart will adjust its path if an aisle is blocked.
So, it’s a confluence of technologies rather than one type of robot that leads to payoffs from robotics in warehouse or “intralogistics” settings, explains Buckley. He continues:
“There are multiple technologies that have come together to allow for new capabilities, and a price/performance level, that makes applications quite effective and cost justifiable; whereas before, it was either not possible or not easily cost justifiable.”
“For example, now we have robots that can work safely alongside humans, or mobile robotics that have navigation systems that don’t require a costly navigation infrastructure.”
Machine learning will progressively improve robots, advancing the capabilities of a picking robot to the point where it can function with more of the versatility of human hand/eye coordination, notes A.K. Schultz, VP of retail and e-commerce with Swisslog WDS.
Schultz: “Traditionally, robots had to be instructed on each task through costly programmatic means, but with machine learning, a picking robot essentially can ‘learn’ new picking tasks on its own without programmatic instruction.”
“Machine learning is bringing the vision and gripper systems of our picking robots closer to the level of what eye/hand coordination can accomplish.”

Download the Paper: Flexible Robotics Come of Age for Intralogistics
Manually “teaching in” robots has become particularly problematic with the emergence of e-commerce. No longer are we contending with hundreds or even 50,000 SKUs; we are now talking about handling hundreds of thousands, or even millions, of SKUs.
Let’s say that we have to teach a robot 500,000 SKUs using a non-engineer. If each item takes 5 minutes and the work is done by a $35/hour technician, it would take 41,670 man-hours at a cost of roughly $1.46 million to teach the robot to pick all the SKUs.
If we can reduce the time to 1 minute and make it simple enough for a $15/hour employee, man-hours are reduced to 8,300 and the cost comes down to $125,000. That is a huge improvement.
Even better, what if we don’t need to teach the robots at all? What if through machine learning and gripper-vision coordination our robots can learn to grab anything and put it anywhere?
This is what we are working on at Swisslog and KUKA. We already have the first and second tasks down pat. Task three is now commercially viable and our Swisslog Robogistics team is working tirelessly to conquer task four. They are moving closer to commercial viability as I write this.
And we have already skipped ahead to task five: our picking robot can safely operate alongside humans without a fence.
In line with our vision, we believe this robot will shape the future of e-commerce.