
    Empower Your Robots with AI Using MATLAB

    Overview

    AI-powered robots continue to expand in use across manufacturing facilities, power plants, warehouses, and other industrial sites. Warehouse bin-picking is a good example. In an e-commerce fulfillment warehouse, human workers need to pick and place millions of different products into boxes based on customer requirements. Deep learning and reinforcement learning now enable robots to learn to handle various objects with minimal help from humans.

    In this webinar, you will learn how to empower your robots with deep learning and reinforcement learning for perception and motion control in autonomous robotics applications, including robot manipulators, autonomous mobile robots, and UAVs, using MATLAB.

    Highlights

    Attendees of this webinar will learn: 

    • Why use AI for robotics
    • What types of AI algorithms and tools to consider for your robot applications
    • Detecting and classifying objects using YOLO for robotics applications
    • Controlling robot motion using reinforcement learning
    • Deploying deep learning algorithms to a GPU board as CUDA-optimized ROS nodes

    About the Presenters

    YJ Lim is a Senior Technical Product Manager for robotics and autonomous systems at MathWorks in Natick, MA. He has over 20 years of experience in the robotics and autonomous systems area. YJ's responsibilities at MathWorks include long-term strategy development and product management for robotics and autonomous systems. Before joining MathWorks, YJ worked on various robotics projects at Vecna Robotics, Hstar Technologies, SimQuest, Energid Technologies, and GM Korea. YJ received his Ph.D. in mechanical engineering from Rensselaer Polytechnic Institute (RPI) and his master's degree from KAIST in South Korea. yjlim@mathworks.com

    Tohru Kikawada is a Senior Application Engineer at MathWorks Japan. He specializes in the development of robotics and autonomous systems with an emphasis on perception, planning, simulation, and deployment. Prior to joining MathWorks in 2014, Tohru worked as a digital design engineer for image sensors and image processors at Sony Corporation. He holds a B.S. and an M.S. in electrical engineering from Tohoku University in Japan. tkikawad@mathworks.com

    Recorded: 22 Jun 2022

    Hello, everyone. Welcome to this webinar. AI is all around us, and we are already benefiting from AI in our daily lives. So today, we will be talking about empowering your robots with AI using MATLAB. Here is a quick introduction of today's presenters.

    I am YJ Lim, a senior technical product manager for robotics and autonomous systems at MathWorks, based in Natick, Massachusetts. I have spent the past 20-plus years developing robotic and autonomous systems. Before joining MathWorks, I worked on several robotics projects at companies including Vecna Robotics, Hstar, Energid, and the GM R&D Center in South Korea. I have been at MathWorks for about four and a half years now, managing a couple of robotics products and developing product strategy. Tohru, would you like to introduce yourself?

    Thank you, YJ. My name is Tohru Kikawada. I'm a senior application engineer at MathWorks Japan. I specialize in robotics and autonomous systems with an emphasis on perception, planning, simulation, and deployment. I work with customers who develop automated guided vehicles, collaborative manipulators, UAVs, and service robots.

    Thank you, Tohru. My career started in the auto industry, where I worked closely with a team on an automotive assembly line. Hundreds of robots work together on welding, painting, and assembly in automotive factories. And I saw that when any of these robots failed to perform its task, a section of the line, or sometimes the entire assembly line, had to shut down.

    The best way to avoid that situation is to measure the wear on each robot as it is used and to predict when failure will occur. So AI could be a solution for predicting when maintenance is needed. With that, I'm curious to know which of these best describes your challenge in adopting AI for your robot applications today: A, data complexity; B, the complexity of the AI model; C, you may not have AI expertise in your organization; or D, you are not involved with AI projects yet. Thank you for your input.

    Here is what we want to discuss today. I will start with the technology trends and challenges in AI. Then I will cover the current landscape of AI adoption in robotics. After that, Tohru will discuss the details of the AI-driven robotic application design workflow using MATLAB and Simulink. Let me begin with a quick look at AI technology trends and challenges.

    Many technology companies have adopted an AI-first strategy in the products and applications they build. As a result, AI has become so ubiquitous and ambient that many people do not realize they are using it on a daily basis. Popular consumer applications include speech recognition in smart speakers, face detection in phone cameras, and self-driving systems in automobiles.

    This evolution of AI has directly impacted robotic applications. The Agile robot developed by the German Aerospace Center (DLR) is an AI-based robot that uses cameras to see the world and tactile sensors to feel objects. It can also perform human-like tasks. This suggests many useful AI applications in robotics for industrial automation, warehouse logistics, services in health care, and so on.

    In this smart, autonomous package-delivery concept, for example, a self-driving truck, an autonomous mobile robot in the warehouse facility, and a drone performing the last-mile delivery all need to perceive the environment, keep track of moving objects, and plan their own course of motion to achieve fully autonomous delivery. This perception-planning-control workflow is critical for a wide range of autonomous systems.

    There is ongoing development of all of these autonomous algorithms in robotics. Recently, deep learning has enabled significant progress on perception-related tasks. Reinforcement learning has shown the potential to solve really hard control problems. Both are expanding to scale to more complex problems.

    With all this great potential in AI, why is it so hard? And why do AI projects sometimes fail? The literature and our experience suggest a number of reasons. People problems: your organization may not have AI expertise. Data problems: sometimes there is too much data to handle, or you may have an imbalanced data set, or not enough data.

    Tool problems: the challenges for engineers working on AI applications don't end with obtaining an AI model; they need to integrate the model into the broader system and get it deployed. And even business problems, including selecting good problems and demonstrating ROI. Let me move on to AI in robotics. I will discuss the current landscape of AI adoption in robotics and where to use AI for robotics.

    Not all robots are artificially intelligent. Many traditional industrial robots are non-intelligent and mostly perform a repetitive series of tasks, with a safety fence around them to isolate the robot from humans. Collaborative robots, or cobots, perform more autonomous and flexible tasks, from painting to packaging to pick-and-place applications, while sharing the workspace with human workers.

    These robots use sensor input from the environment for control and decision making via autonomous robotics algorithms. What AI brings to robotics is more autonomy, up to full autonomy. AI increases a robot's capability to perform more complex and more varied tasks.

    So how do we apply AI to robotics? There are two different ways to get your robot to do what you want. The traditional approach, which I'm guessing most of us are familiar with, is to write a program that processes data to produce a desired output. Indeed, most robots currently do things without AI. If a robot is built and programmed to perform a very specific task, it simply does not require AI, as the tasks performed are predictable and repetitive.

    With another approach, called machine learning, this is flipped around: you feed in data and the desired outputs, and the computer in the robot writes the program for you. There are even some techniques where you don't need to know the outputs, just the data. Now, it is not quite accurate to call the thing the computer creates a program, so it is called a model instead.

    Machine learning models are largely black boxes. They can generate the desired output, but they are not composed of a sequence of operations like a traditional program or algorithm. To put the key terms in context: the majority of AI applications are model-centric. This is because it is difficult to create large data sets, and much of the AI community believes the model-centric approach is more promising.
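    The contrast between writing a program and learning a model from data can be sketched in a few lines of Python. This is an illustrative toy, not MathWorks code: the task, names, and threshold are all invented for the example. A rule-based classifier is written by hand, while a nearest-neighbor "model" is induced purely from labeled examples.

```python
# Toy contrast: hand-written rules vs. a model learned from data.
# Hypothetical task: classify a part by its measured length (cm).

def rule_based(length_cm):
    """Traditional approach: a human encodes the decision logic."""
    return "long_pipe" if length_cm >= 10 else "short_pipe"

def fit_nearest_neighbor(examples):
    """Machine learning approach: 'training' just stores labeled data;
    the returned closure is the learned model, not a hand-written rule."""
    def model(length_cm):
        # Predict the label of the closest training example.
        closest = min(examples, key=lambda ex: abs(ex[0] - length_cm))
        return closest[1]
    return model

training_data = [(3, "short_pipe"), (5, "short_pipe"),
                 (12, "long_pipe"), (15, "long_pipe")]
model = fit_nearest_neighbor(training_data)

print(rule_based(4))   # short_pipe
print(model(13))       # long_pipe
```

    Both callables map inputs to outputs, but only the first encodes logic a human wrote down; the second is shaped entirely by the data it was fed.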

    Deep learning is a key technology driving the current AI megatrend. It evolved out of machine learning. Due to its ability to learn from data, deep learning is a data-driven approach to creating AI models. But many of the problems we have to solve in the real world are not just about making decisions based on labeled inputs. We may also want to interact with our environment through a set of actions, where the choice of action depends on the outcomes we observe.

    Reinforcement learning is an output-feedback-driven approach to creating AI models. And applying deep neural networks within reinforcement learning remains a hot topic, referred to as deep reinforcement learning. You can now see how robotics and AI coexist.

    I now want to discuss how AI has been used in robotics. There are multiple ways that robots benefit from AI. First, advances in speech recognition technology are enabling voice-driven robots. Yaskawa Electric Corporation in Japan used their Motoman pick-and-place robot system with a perception-enabled solution. This cobot used audio perception for voice-driven control.

    Images are another rich source of insight into the environment around a robot. Many computer vision technologies with deep learning capabilities have been developed to process information from image pixels and label objects for the robot. For this pick-and-place industrial robot, the Yaskawa team used deep learning for object detection based on RGB images.

    Deep learning is also used to find abnormalities in images for industrial inspection. Musashi Seimitsu Industry in Japan used deep learning with MATLAB to develop an anomaly detection system to inspect automotive parts. This approach is expected to reduce human workload and the cost of manually inspecting millions of parts.

    3D point clouds give a robot a better understanding of its surrounding environment, both to localize itself within the environment and to estimate the pose of an object. ASTRI in Hong Kong created a digital twin of their welding robot, embedded with computer vision and deep learning algorithms, using MATLAB.

    ASTRI engineers generated a set of synthetic RGB images of the weld pieces and used deep learning to estimate the pose of the weld pieces. Deep learning has enabled significant progress on perception-related tasks, and this advanced perception capability has already been adopted by the robotics industry.

    Reinforcement learning is another avenue in AI that has received increasing attention in recent years. Reinforcement learning is the process of learning a behavior through trial-and-error interaction. The earliest reinforcement learning approaches used lookup tables indicating what action to take in any given state.

    Deep neural networks allow the representation of complex policies to solve more difficult problems. In reinforcement learning, integration with a model is especially critical, since training involves a lot of trial and error, which is best done through simulation. The main success of reinforcement learning in recent years has been in robot control, and it is now expanding to automate sensing and planning as well.
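    The lookup-table idea described above is classic tabular Q-learning. Here is a minimal sketch on a toy one-dimensional world; the environment, rewards, and hyperparameters are all made up for illustration, and a deep RL agent would replace the table with a neural network.

```python
import random

# Toy 1-D world: states 0..4, start at 2, goal at 4 (reward +1), pit at 0 (reward -1).
ACTIONS = [-1, +1]                  # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

def step(state, action):
    s2 = max(0, min(4, state + action))
    if s2 == 4: return s2, 1.0, True     # reached goal
    if s2 == 0: return s2, -1.0, True    # fell in pit
    return s2, 0.0, False

Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}   # the lookup table
random.seed(0)
for _ in range(500):                      # training episodes
    s, done = 2, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])  # Q-learning update
        s = s2

# Greedy policy read off the table: it learns to head toward the goal.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(1, 4)}
print(policy)   # expect {1: 1, 2: 1, 3: 1}
```

    The table works here because the world has only five states; for a robot with continuous sensor readings, the table becomes impossibly large, which is exactly where deep neural networks come in.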

    Industry is still working on proofs of concept and exploring reinforcement learning for production applications. Training on hardware can be prohibitively expensive and dangerous, so a virtual model allows you to replicate real-world conditions. Simulation and virtual models are therefore key to adopting reinforcement learning.

    So this brings me to summarize where AI and robotics come together today. Deep learning has shown a significant ability to help robots perceive the world around them, understand voice and data, and identify patterns so they can act as needed. Through reinforcement learning, robots are gaining increased autonomy in motion control and process automation, but industry is still exploring reinforcement learning for robotics and other real-world applications. With that, I will turn it over to Tohru for the details of AI-driven robotic application design with MATLAB.

    Thank you, YJ. I will cover the AI-driven design workflow where deep learning and reinforcement learning are utilized. Let's begin by looking at the application of AI for perception. Here is an example of a pick-and-place application with a robotic arm, to see how AI can be integrated into the system. An RGB-D camera on the robotic arm is used to acquire an RGB image and a depth image.

    It detects white PVC pipes and estimates their poses. There are four types of PVC pipes: I shape, T shape, A shape, and X shape. All pipes have the same color and similar shapes, making detection and classification difficult with rule-based algorithms. Therefore, we need to teach an AI model to detect and classify the PVC pipes.

    Such a task is easy for humans to accomplish, but it requires advanced and complex technology to achieve with a robot. To develop an AI-driven robotic system as shown in the previous video, four main stages need to be considered: one, data preparation; two, AI algorithms and modeling; three, simulation and testing; and four, last but not least, deployment to the field.

    Through the case studies, we will go through all four parts of this workflow, talk about some of the challenges within each stage, and show how we can make it all come together efficiently and effectively to deliver value for you. As the first step, let me explain data preparation.

    How do we prepare the training data? Typically, a huge number of training images is required to train an AI model, and manually preparing them one by one is very time-consuming. Therefore, we use MATLAB and Simulink to generate synthetic data by simulation.

    By using simulation, a larger number of images can be collected in a shorter time than with actual hardware measurement. In simulation, the shape of the workpieces, lighting conditions, background, and so on can be easily changed. This makes it possible to generate images of various scenes that are difficult to reproduce in reality.

    In addition, the ground-truth bounding boxes can be output at the same time as the synthetic image data. This eliminates the need for annotation and reduces the labeling effort. On the other hand, a camera model cannot perfectly simulate the actual camera, so it is necessary to collect training data efficiently using a combination of simulation and actual hardware.
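    The "labels for free" property of synthetic data can be illustrated with a tiny renderer. This is a pure-Python stand-in for the Simulink/Gazebo pipeline, with invented names and sizes: because the script places the object itself, the bounding box is known exactly, with no manual annotation.

```python
import random

def render_scene(width=64, height=64, rng=random):
    """Place one rectangular 'part' at a random location and return the
    image plus its ground-truth bounding box -- no manual annotation needed."""
    img = [[0] * width for _ in range(height)]
    w, h = rng.randint(8, 16), rng.randint(8, 16)     # part size in pixels
    x = rng.randint(0, width - w)                     # top-left corner
    y = rng.randint(0, height - h)
    for row in range(y, y + h):
        for col in range(x, x + w):
            img[row][col] = 255                       # draw the part
    bbox = (x, y, w, h)                               # label comes for free
    return img, bbox

random.seed(42)
dataset = [render_scene() for _ in range(100)]        # 100 labeled images, instantly
img, bbox = dataset[0]
x, y, w, h = bbox
assert img[y][x] == 255 and img[y + h - 1][x + w - 1] == 255
print(len(dataset), bbox)
```

    A real pipeline would render textured 3-D workpieces under varied lighting, but the principle is the same: the scene generator already knows where every object is, so annotation cost drops to zero for the synthetic portion of the data set.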

    When collecting training data on the actual machine, the robot arm is moved while automatically acquiring images. Hardware support packages make it easy to control the robot in this way. MATLAB and Simulink provide examples of generating synthetic data sets through simulation: for example, a sensor model for semantic segmentation for UAVs using Unreal Engine, lidar sensor models to generate point clouds, and a camera model using Gazebo co-simulation, as shown.

    This can be vitally important when you are trying to build an algorithm that successfully handles extremely rare events, or when exhaustive labeling is needed, as in instance segmentation. By utilizing synthetic data, labeling of the actual images can be semi-automated. In this example, CAD models are used to create synthetic data for training.

    Next, we train an object detector using only the synthetic data. Since the synthetic data contains the ground-truth bounding boxes, there is no need for time-consuming labeling. On the other hand, the accuracy of a detector trained on synthetic data alone is often insufficient, so training with actual images is still required.

    The Image Labeler app can be used for interactive labeling of images acquired from camera hardware, allowing efficient creation of the data set. In addition, by utilizing the automation function of the Image Labeler app, an existing object detector can be imported for coarse labeling. The manual labeling effort can be drastically reduced by this automation.

    Within the labeled frames, you can individually adjust the bounding boxes. The rest of the process is retraining, now including the actual images, to iteratively improve the accuracy of the object detector. The detector can be efficiently improved, and a high-performance model acquired, in a short time.

    Let's move on to the second step in the workflow, AI-based algorithms and modeling. Of course, it is important to have direct access to many algorithms. There are also a variety of pre-built models created by the broader community that you may want to use, often as a starting point or for comparison purposes. While algorithms and pre-built models are a good start, they are not enough.

    Examples are how engineers learn to use algorithms and find the best approach for their specific problem: for example, object detection with YOLO v4, instance segmentation with Mask R-CNN, and semantic segmentation with U-Net. In particular, these enhanced examples can be used for robotics applications; we provide hundreds of examples for building AI models across a wide range of domains. But even once an AI model is fixed, it still cannot be utilized in a robotic system as a standalone component.

    You still need to complete the perception pipeline by leveraging non-AI, domain-specific techniques. MATLAB and Simulink provide products for these various technologies. In this pick-and-place application, Computer Vision Toolbox can be used to complete the perception pipeline. To feed an appropriately sized image to the AI model, RGB-D calibration and point cloud construction are required as preprocessing steps.

    After the AI model's prediction, extracting the corresponding point cloud and matching it against CAD models are required as post-processing steps to get the final six-degree-of-freedom poses. Leveraging non-AI techniques is often required to complete the AI pipeline, and MATLAB provides domain-specific tools for pre- and post-processing around AI models.
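    The preprocessing step mentioned above, turning an RGB-D frame into a point cloud, is a standard pinhole-camera back-projection. A minimal version follows; the intrinsics (fx, fy, cx, cy) and the tiny depth image are illustrative values, not numbers from the webinar.

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-frame 3-D points:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth[v][u]."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:                       # skip invalid/missing depth readings
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# 2x2 toy depth image, principal point at the image center.
depth = [[1.0, 1.0],
         [0.0, 2.0]]                        # one missing reading (zero)
pts = depth_to_points(depth, fx=100.0, fy=100.0, cx=0.5, cy=0.5)
print(len(pts))                             # 3 valid points
```

    Production code would vectorize this (and a toolbox typically provides it directly), but the geometry is exactly this: each valid pixel, together with its depth, maps to one 3-D point in the camera frame, ready for CAD-model matching in the post-processing step.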

    I mentioned the labeling apps for data preparation. We also provide apps for modeling. For deep learning in particular, we provide prebuilt apps that help automate the training step and provide visualizations for understanding and editing deep networks. Apps such as Classification Learner and Regression Learner automate those steps as well. And the new Experiment Manager app enables you to create deep learning experiments to train networks under various initial conditions and compare the results.

    Looking at the video, you will see that it allows you to import a network, then drag, drop, edit, and test networks without having to write a single line of code. It lets you abstract away from syntax and focus purely on design. We also know the broader deep learning community is incredibly active, and new models are coming out all the time. Because we support importing models from TensorFlow, PyTorch, and ONNX, you have access to those new models and can work with them within the MATLAB environment.

    In the next step, AI models have to be incorporated into a larger system to be useful. This is the Simulink model for the pick-and-place application shown in the first slide of this section. You can see a functional decomposition of this model into its components. You can combine sensing, control logic, robot dynamics, and visualization components along with the AI models to simulate the scenarios in which they will operate.

    Here, you can see how an actual robot uses the perception component to detect objects and localize them for a pick-and-place application. The manipulator-specific motion planner uses a bidirectional RRT planner, which accepts the pose of the object and outputs collision-free trajectories. Or you can use other existing planners in Navigation Toolbox for your specific applications. As you can see, we can accomplish system-wide simulation and testing with AI models.

    The same model can easily be reused for other applications thanks to the modularization of each block. With a model-based design approach, the model continues to evolve as the design is further refined, with more detail added to each component as necessary.

    Moving on to deployment, we recognize that AI models are becoming important in a wide range of robotics applications. Each has different deployment requirements, whether it is an ECU in production, an edge system in a mobile robot, an enterprise system for production lines, or a cloud-based streaming system receiving data from a number of robots. AI can reside in any part of these robotic systems, so your AI models need to be deployable to any possible platform.

    We have a unique code generation framework that allows models developed in MATLAB or Simulink to be deployed anywhere without having to rewrite the original model. Automatic code generation eliminates coding errors and is an enormous value driver for any organization adopting it. Let's see a deployment example of the pick-and-place application.

    As one of the popular platforms, the Robot Operating System, or ROS, is widely used for prototyping robotic systems. It is also necessary to implement AI models on edge devices to make the robot operate autonomously and in real time. In this example, the pick-and-place application model is generated as a CUDA-optimized ROS node using ROS Toolbox and GPU Coder.

    The generated CUDA ROS node is built and executed on top of the ROS ecosystem on an NVIDIA Jetson. Object detection can be performed at high speed by leveraging embedded GPUs, even on edge devices. This brings us to the rest of the landscape of AI adoption in robotics: reinforcement learning for making decisions and control.

    The first thing I'd like to address about reinforcement learning is the question: why should you care about it? Imagine for a second that you are a control engineer working on a project to design a control system that will allow a robot to walk. Your robot has all kinds of sensors you can use and a motor at each joint.

    Now, there are different ways of approaching this problem with traditional control methods, and reinforcement learning has many parallels to control design. Here's a brief comparison of a traditional control system and a reinforcement learning system. The policy in the reinforcement learning system is the equivalent of a traditional controller.

    The environment model is the equivalent of the plant. Observations correspond to measurements, and actions to manipulated variables. The reward in reinforcement learning is similar to a cost function in optimal control, or to the error from some desired control objective. And finally, the reinforcement learning training algorithm is similar to an adaptation mechanism that changes the weights of a traditional controller.
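    This correspondence can be made concrete with a bare interaction loop in which each reinforcement learning term is labeled with its control-engineering counterpart. This is a schematic of ours, not webinar code: the "plant" is a trivial first-order system and the "policy" is a fixed proportional law that RL training would learn instead of a human tuning it.

```python
def environment(state, action):
    """Environment ~ plant: simple first-order system x' = 0.9*x + u."""
    next_state = 0.9 * state + action
    reward = -abs(next_state)        # reward ~ negative cost (regulate x to 0)
    return next_state, reward

def policy(observation, k=-0.5):
    """Policy ~ controller: here a hand-tuned proportional law u = k*x.
    In RL, the training algorithm would adapt this mapping from reward feedback."""
    return k * observation

state = 5.0                          # observation ~ measurement
for t in range(20):
    action = policy(state)           # action ~ manipulated variable
    state, reward = environment(state, action)

print(round(state, 4))               # the loop regulates the state toward 0
```

    The closed loop here is x' = 0.4x, so the state decays geometrically; the point of RL is that when the plant is too complex to hand-derive such a law, the policy can be learned from the reward signal instead.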

    Reinforcement Learning Toolbox allows you to work through all steps of this workflow using MATLAB and Simulink. The toolbox includes popular training algorithms like DQN, DDPG, and PPO. You can build the environment model in MATLAB and Simulink, and you can also reuse existing scripts and models.

    Let's look at two case studies of reinforcement learning. The first is a pick-and-place application using reinforcement learning. In this example, given a workpiece with known poses, the robot learns which grasping poses have the highest success rate. The robot arm and environment are modeled using Simscape Multibody, which makes it easy to import models from CAD software and to reproduce physical phenomena such as contact and friction forces more precisely. Then you can define an agent and train it on the built environment using Reinforcement Learning Toolbox.

    Let me move on to another example: obstacle avoidance for a mobile robot. In this example, a mobile robot uses a lidar to recognize obstacles in its vicinity and moves around while avoiding collisions. When training such a robot, we first train it on a simple scenario map, which facilitates convergence of the reinforcement learning.

    After training on the simple map, the agent has gained a fundamental obstacle avoidance ability. The agent can then be trained efficiently within a high-fidelity Unreal Engine environment to perform fine-tuning. The simulation functionality provided in MATLAB can be used to easily set up environments for training reinforcement learning agents efficiently.
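    Agents like this are typically shaped by a reward that mixes a progress term with a proximity penalty. The webinar does not give the exact reward, so the following is one plausible form, with made-up thresholds, just to show the structure such a function usually takes.

```python
def reward(lidar_ranges, forward_speed, min_safe=0.5, collision=0.2):
    """Illustrative reward for an obstacle-avoiding mobile robot: encourage
    forward motion, penalize proximity to obstacles, end episode on collision.
    lidar_ranges: distances (m) returned by the lidar beams."""
    nearest = min(lidar_ranges)
    if nearest < collision:
        return -10.0, True               # collision: large penalty, episode done
    r = forward_speed                    # progress term
    if nearest < min_safe:
        r -= (min_safe - nearest) * 5.0  # proximity penalty inside safety margin
    return r, False

print(reward([2.0, 1.5, 3.0], 0.8))      # clear path: full progress reward
print(reward([0.3, 1.5, 3.0], 0.8))      # near obstacle: reward reduced
print(reward([0.1, 1.5, 3.0], 0.8))      # collision: (-10.0, True)
```

    Tuning the relative weight of the progress and proximity terms is part of why training on a simple map first helps: the agent learns the basic trade-off cheaply before fine-tuning in the high-fidelity environment.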

    OK. Thank you, Tohru. I hope you can now see how MATLAB and Simulink make using AI in robotics easy. Then let me summarize today's webinar. Robots can benefit from AI in different ways. Natural language processing and computer vision help robots identify and recognize the objects they encounter. AI is also advancing robotic vision and planning toward truly autonomous systems.

    AI-enabled navigation and motion control make robotic processes more collaborative and adaptive. As a result, AI-powered robots carry out various tasks, including pick-and-place, welding, assembly, material handling for agriculture, and many more applications. There are numerous benefits to using MATLAB and Simulink to build AI-driven robots. You can create better data sets with domain-specific tools, and automated labeling apps can save you weeks or months of work.

    You can use the best AI model for your applications. You can also access a variety of pre-built models developed by the broader community. You can accomplish overall system-wide simulation and testing with AI models. And the end result can be deployed and managed wherever it needs to be.

    We have a unique code generation workflow that allows models developed in MATLAB or Simulink to be deployed anywhere without having to rewrite the original model. Last but not least, robotics and AI are related but different fields. Using MATLAB and Simulink, robotics engineers can leverage their domain expertise without relying on AI programmers.

    Let me come back to the poll question I asked at the beginning of this presentation, and see how MATLAB and Simulink make these challenges easier. The first was data complexity. Robotics engineers are the ones who know their robotic application, and where the data comes from, best, so they need to prepare and process the data with their domain knowledge using easy-to-use tools. In case you don't have enough data, you can generate synthetic data from your robot simulation. MATLAB and Simulink are great modeling and simulation tools and provide automated labelers for different data modalities.

    The second was model complexity. With MATLAB and Simulink, you have easy access to pre-trained models with a single line of code. You can also visually create and edit deep learning networks interactively with Deep Network Designer, which enables fast AI modeling. You can use models developed in MATLAB or those available in open-source frameworks.

    The third challenge was AI expertise. The majority of successful AI stories in robotics have been made possible by a clever combination of AI models and robotics algorithms acting together. So robotics engineers need to leverage their domain expertise and apply AI solutions for a successful project. I think you can gain that AI expertise with MATLAB.

    If you'd like to learn more, the next best step is to get started with deep learning yourself. To do this, simply open your browser and launch one of these online tutorials, starting with the Deep Learning Onramp. In addition, we have a large number of examples, Tech Talk series, and related webinars published on our web page to help accelerate your application efforts.

    And we will be happy to support your specific use cases as well. So feel free to reach out to us with your questions. Thank you for your attention.
