
Automate Virtual Assembly Line with Two Robotic Workcells

This example shows the simulation of an automated assembly line to demonstrate virtual commissioning applications. The assembly line is based on Smart4i, a modular industrial framework created by ITQ GmbH. The system consists of four components: two robotic workcells that are connected by a shuttle track, and a conveyor belt. One of the two robots places cups onto the shuttles, while the other robot places balls in the cups. A slider then delivers the cups to a container. This simulation uses Stateflow® for system control and demonstrates how you can use Unreal Engine™ to simulate a complete virtual commissioning application in Simulink®. For an example showing how to deploy the main logic in Stateflow using Simulink PLC Coder™, see Generate Structured Text Code for Shuttle and Robot Control (Simulink PLC Coder).

System Overview

The core environment has five components that form the assembly line:

  • Robot 1 (Comau) — The first robot is a Comau Racer V3. This robot picks up the cups with a suction gripper and puts them in the shuttles.

  • Robot 2 (Mitsubishi) — The second robot is a Mitsubishi RV-4F. The camera sensor in this robot's workcell detects the balls. The robot then picks up each ball with a suction gripper and places it in a separate cup.

  • Shuttle Track & Shuttles — The shuttle track moves four shuttles to Robot 1, then to Robot 2, and then to the slider before returning to Robot 1.

  • Slider — The conveyor belt carries the cups away from the track and into a container.

  • Static Machine Frame — The remaining nonmoving model parts comprise the static machine frame, which serves as the base for the assembly. It includes the two workcells housing the robots, as well as the base for the center assembly.

This system is created using CAD data provided by ITQ GmbH. The CAD files are imported into MATLAB®, preprocessed using Simulink 3D Animation® functions, and then saved to a MAT file for reuse.

During normal execution, Robot 1 picks up cups from a tray in the Robot 1 workcell and places them in an empty shuttle that is waiting for a cup. Placing the cup in the shuttle triggers the filled shuttle to move along the shuttle track toward Robot 2 and creates an instance of the ball in the Robot 2 workcell. The location of this ball is random. A camera positioned above the area where the ball appears captures an image of the ball and passes it to a deep learning network. The network uses this image to detect the position of the ball so that Robot 2 can pick it up. Once the shuttle containing the cup stops at the Robot 2 workcell, Robot 2 picks up the ball and places it inside the cup. The shuttle then moves to the unloading station, where it releases the cup onto the slider. The cup slides into a container placed at the end of the slider. After releasing the cup, the shuttle moves back to the Robot 1 workcell and waits to receive another cup.
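The cycle described above can be sketched as a simple state sequence. This is a hypothetical MATLAB illustration of the supervisory flow only; the actual system implements this logic as a Stateflow chart.

```matlab
% Hypothetical sketch of one shuttle's supervisory cycle.
state = "WaitForCup";
for step = 1:4
    switch state
        case "WaitForCup"     % Robot 1 places a cup in the shuttle
            state = "MoveToRobot2";
        case "MoveToRobot2"   % shuttle travels along the track
            state = "ReceiveBall";
        case "ReceiveBall"    % Robot 2 drops a ball into the cup
            state = "Unload";
        case "Unload"         % cup released onto the slider, then return
            state = "WaitForCup";
    end
end
```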

This example includes two versions of the model:

  • smart4i_model.slx — This default model requires a minimal set of products to run. No camera is used; instead, the ball is placed at the same location each time, and that location is hard-coded in the model.

  • smart4i_model_with_camera.slx — This model additionally uses camera feedback to detect the ball, as described above, more closely mirroring the behavior of the original ITQ system. However, this model also requires Deep Learning Toolbox™, Image Processing Toolbox™, and Computer Vision Toolbox™ to run.

Model Overview

This example uses the assembly line CAD data, robot model files, and shuttle trajectory data. Download and unzip these data files.

downloadFolder = matlab.internal.examples.downloadSupportFile("R2023a/sl3d/examples/","");

Open the model to view the block diagram:

smart4iModel = "smart4i_model.slx";
open_system(smart4iModel)

Virtual assembly line Simulink model.

The model has four main labeled parts:

  1. Scene Creation & Configuration

  2. System Control Logic

  3. Sensor Input

  4. Actor Motion Control

Scene Creation & Configuration

This section has two main blocks:

  • The Simulation 3D Scene Configuration block defines the baseline Unreal Engine™ environment that the model connects to.

  • The Simulation 3D Actor block named Prepare World sets up the environment. To do this, the actor calls the setup script, smart4i_setupworld.m, which loads and configures the scene.


System Control Logic

The Stateflow chart in the System Control Logic section determines the main system behavior.

Inside the Stateflow chart, there are six main sections, listed from bottom to top:

  • Shuttle Control Logic — The charts in this section control the motion of each of the four shuttles. Each chart outputs a three-element vector containing the specified shuttle XY-position and the rotation about the Z-axis.

  • Robot Control Logic — The charts in this section control the motion of the two robot manipulators. The charts output three-element vectors containing the suction gripper translations. The chart for Robot 2 also controls the creation of new ball instances so that the balls can be picked up.

  • Cup Instance Logic — This chart controls the destruction of old cups after the slider drops the cups into the container.

  • Cup & Ball Detection — This section contains two Simulink functions, findCupLoc and findBallLoc, which provide the initial pose of a cup and ball, respectively. The robots use the pose to define a configuration in which they can pick up the cup or ball. In a real assembly line, the ball may roll in the workspace while Robot 1 selects which cup to pick up. Because rolling changes the position of the ball, the camera in the Robot 2 workcell detects the current position of the ball in the workcell.

  • Cup & Ball Actor Utilities — This section contains blocks and functions for enforcing event-based behaviors like picking up and releasing a cup or ball.

  • Trajectory Generation — This section contains the genPath2 function, which generates a trajectory for a robot, given two suction gripper poses, and relies on the trapveltraj function from the Robotics System Toolbox™ to ensure that the resulting trajectory follows a typical point-to-point profile for a manipulator.
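As a sketch of the point-to-point profile that genPath2 relies on, the following call to trapveltraj interpolates between two hypothetical suction gripper positions. The waypoint values here are illustrative assumptions; running this requires Robotics System Toolbox™.

```matlab
% Two waypoint positions (XYZ, in meters): one column per waypoint.
wayPoints = [0.40 0.10;
             0.00 0.30;
             0.20 0.20];

% Sample 50 points along a trapezoidal velocity profile.
% q contains positions; qd and qdd contain velocities and accelerations.
[q, qd, qdd] = trapveltraj(wayPoints, 50);
```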

The Robot 2 Stateflow chart interfaces with multiple functions and subsystems.

Sensor Input

In the model that uses camera feedback to detect the ball, smart4i_model_with_camera.slx, the Sensor Input section models the camera as a Simulation 3D Camera Get block.

The ball appears at the same nominal position in the Robot 2 workcell, but with a small random offset to reflect the real-life variability caused by the ball rolling. The findBallLoc triggered subsystem takes the image data from the camera and detects the ball.

In this subsystem, the Deep Learning Object Detector block from the Deep Learning Toolbox™ takes the image data and outputs the bounding boxes. The bounding boxes provide information about the size and location of the ball in the image. The findBallXY MATLAB function converts the bounding boxes to the XY-positions of the ball and then returns those positions to the Stateflow chart for Robot 2.
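A minimal sketch of this conversion, assuming a fixed overhead camera with a known pixels-per-meter scale. The function name, scale, and image origin below are illustrative assumptions, not the actual findBallXY implementation.

```matlab
function ballXY = bboxToWorkcellXY(bbox)
% bbox is [x y width height] in image pixels, as output by the
% Deep Learning Object Detector block.
pixelsPerMeter = 1000;        % assumed camera calibration
imageOriginXY  = [0.50 0.25]; % assumed workcell XY of the image origin (m)

% Center of the bounding box in pixel coordinates.
centerPx = [bbox(1) + bbox(3)/2, bbox(2) + bbox(4)/2];

% Map pixel coordinates to workcell XY coordinates.
ballXY = imageOriginXY + centerPx / pixelsPerMeter;
end
```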

The Deep Learning Object Detector was pretrained in MATLAB by creating a video of the ball at random locations and using the Video Labeler app from the Computer Vision Toolbox™ to create a labeled dataset and train a yolov2ObjectDetector object.

Actor Motion Control

To move the objects in the scene, the system translates motion commands into the desired behavior. The Actor Motion Control section defines the motion of the two robots and the shuttles, as well as the existence and initial position of the ball.

Robot Actors

Each robot body corresponds to an actor in the Unreal Engine environment. The Robot 1 and Robot 2 subsystems control the motion of Robot 1 and Robot 2, respectively, positioning the robot actors so that the robots reach the specified poses. This diagram shows the contents of each subsystem:

Subsystem takes configurations and sends actor information to Unreal Engine actors that correspond to robot bodies.

These steps rely on kinematic computations, which require a kinematic model of the robot in MATLAB in the form of a rigidBodyTree object. During scene creation, the system imports the rigid body trees for Robot 1 and Robot 2 from their original URDF files into the base MATLAB workspace.
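This import step can be reproduced with the importrobot function from Robotics System Toolbox™. The URDF file name and variable name below are placeholders for the files shipped with the example data.

```matlab
% Import a rigid body tree from a URDF description (placeholder file name).
robot1 = importrobot("robot1.urdf");
robot1.DataFormat = "column";   % use column vectors for joint configurations

% Visualize the kinematic model at its home configuration.
show(robot1, homeConfiguration(robot1));
```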

title("Robot 1 (Comau Racer V3)");

Figure: Robot 1 (Comau Racer V3) rigid body tree, showing base_link, part_1 through part_5, the tool body, and their meshes.

Inspect each robot to see the bodies it contains. These bodies correspond to the actors in the scene. In the Unreal Engine scene, each actor pose is defined relative to the parent body.

Robot: (6 bodies)

 Idx     Body Name     Joint Name     Joint Type     Parent Name(Idx)   Children Name(s)
 ---     ---------     ----------     ----------     ----------------   ----------------
   1        part_1        joint_1       revolute         base_link(0)   part_2(2)  
   2        part_2        joint_2       revolute            part_1(1)   part_3(3)  
   3        part_3        joint_3       revolute            part_2(2)   part_4(4)  
   4        part_4        joint_4       revolute            part_3(3)   part_5(5)  
   5        part_5        joint_5       revolute            part_4(4)   tool(6)  
   6          tool        joint_6       revolute            part_5(5)   

Given the robot models, you can translate a suction gripper pose to actor motion in two main steps, as specified by the two areas in the model:

  • Compute Robot Configuration from Target Pose — The Simulink model combines the input translation and the desired orientation into one signal and inputs that signal as a pose to the Inverse Kinematics block. This block converts the desired suction gripper pose to a set of joint angles, known as the joint configuration.

  • Specify the Hierarchical Actor Poses — Since each robot body pose is specified relative to a parent robot body, you must convert the set of joint angles into six relative poses that relate each body to its corresponding parent body. The model uses six Get Transform blocks to find the relative poses between two specified bodies.
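These two steps can be sketched in MATLAB with the inverseKinematics solver and the getTransform function. The target pose, weights, and variable names here are illustrative assumptions; the body names follow the rigid body tree shown above.

```matlab
% Step 1: solve for a joint configuration that reaches a target tool pose.
ik = inverseKinematics("RigidBodyTree", robot1);
targetPose   = trvec2tform([0.4 0.2 0.3]);   % assumed gripper position
weights      = [0.25 0.25 0.25 1 1 1];       % orientation vs. position
initialGuess = homeConfiguration(robot1);
[config, solInfo] = ik("tool", targetPose, weights, initialGuess);

% Step 2: express each body pose relative to its parent body, for example
% the pose of part_2 relative to part_1.
T_part2_in_part1 = getTransform(robot1, config, "part_2", "part_1");
```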

Note that the suction gripper does not physically act on the cup or ball. Instead, the robot picks up cups and balls by reparenting them to the suction gripper, effectively adding the picked part to the robot hierarchy. The supervisory logic in the Stateflow chart raises the events that trigger the Simulink functions that call the attach and detach functions.

Ball Instance

When the Stateflow chart triggers the system to create a new ball, the Simulation 3D Actor block named Ball Instance commands the creation of a new ball for Robot 2 to pick up. The block adds a random offset to the position of the ball to simulate real-world conditions caused by the ball rolling, but the detection subsystem ensures that the control logic gets the exact ball position.
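The random offset can be sketched as follows; the nominal position and offset range are illustrative assumptions.

```matlab
% Nominal ball spawn position in the Robot 2 workcell (assumed), in meters.
nominalXY = [0.60 0.30];

% Add a uniform random offset of up to ±2 cm in X and Y to mimic rolling.
maxOffset = 0.02;
ballXY = nominalXY + maxOffset * (2*rand(1,2) - 1);
```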


Shuttles

The translation and orientation of each shuttle are set in space to make it move around the track. The Stateflow chart outputs the XY-position and the orientation about the Z-axis to the Shuttles subsystem, which sets the poses of the appropriate actors in the Unreal Engine world.

Simulate the Model

Open the model and click Run to start the simulation.


The simulation takes around 30 to 45 seconds to start in the 3D viewer owing to the large number of actors in the scene.
