read
Description
[pointCloud,range,reflectivity,semantic] = read(lidar) returns a point cloud, distances from the sensor to object points, reflectivity of surface materials, and the semantic identifiers of the objects in a scene. You can set the field of view and angular resolution in the sim3d.sensors.Lidar object specified by lidar.
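For example, here is a minimal usage sketch. It assumes a sim3d.sensors.Lidar object named lidar has already been created and attached to a running sim3d.World; those setup steps are not shown on this page.

```matlab
% Read one frame of lidar data from the 3D environment.
% lidar is assumed to be a configured sim3d.sensors.Lidar object.
[ptCloud, range, reflectivity, semantic] = read(lidar);

% ptCloud is m-by-n-by-3; range and reflectivity are m-by-n.
size(ptCloud)
```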
Input Arguments
lidar — Virtual lidar sensor that detects targets
sim3d.sensors.Lidar object
Virtual lidar sensor that detects targets in the 3D environment, specified as a sim3d.sensors.Lidar object.
Example: lidar = sim3d.sensors.Lidar()
Output Arguments
pointCloud — Point cloud data
m-by-n-by-3 array of positive real-valued [x, y, z] points
Point cloud data, returned as an m-by-n-by-3 array of positive, real-valued [x, y, z] points. m and n define the number of points in the point cloud, as shown in these equations (a worked example follows the list below):

m = VFOV/VRES
n = HFOV/HRES
where:
- VFOV is the vertical field of view of the lidar, in degrees, as specified by the VerticalFieldOfView argument.
- VRES is the vertical angular resolution of the lidar, in degrees, as specified by the VerticalAngularResolution argument.
- HFOV is the horizontal field of view of the lidar, in degrees, as specified by the HorizontalFieldOfView argument.
- HRES is the horizontal angular resolution of the lidar, in degrees, as specified by the HorizontalAngularResolution argument.
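For instance, with hypothetical parameter values (illustrative only, not sensor defaults):

```matlab
% Hypothetical field-of-view and resolution values, for illustration only.
VFOV = 40;   VRES = 2;     % vertical field of view and resolution, degrees
HFOV = 360;  HRES = 0.5;   % horizontal field of view and resolution, degrees

m = VFOV/VRES              % 20 rows in the point cloud
n = HFOV/HRES              % 720 columns in the point cloud
```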
Each m-by-n entry in the array specifies the x, y, and z coordinates of a detected point in the sensor coordinate system. If the lidar does not detect a point at a given coordinate, then x, y, and z are returned as NaN.
You can create a point cloud from these returned points by using the pointCloud (Computer Vision Toolbox) object and its point cloud processing functions.
Data Types: single
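For example, this sketch wraps the returned points in a pointCloud object and displays them; using the reflectivity output as per-point intensity is an assumption made for illustration.

```matlab
% Wrap the m-by-n-by-3 points in a pointCloud object (Computer Vision
% Toolbox). Treating reflectivity as intensity is an illustrative choice.
[pts, ~, refl] = read(lidar);
pc = pointCloud(pts, Intensity=refl);

% Display the organized point cloud.
pcshow(pc)
```

For streaming visualization across successive frames, the pcplayer (Computer Vision Toolbox) object listed under See Also serves the same purpose.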
range — Distance to object points
m-by-n positive real-valued matrix
Distance to object points measured by the lidar sensor, returned as an m-by-n positive real-valued matrix. Each m-by-n value in the matrix corresponds to an [x, y, z] coordinate point returned by the pointCloud output argument.
Data Types: single
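As a cross-check sketch, assuming each range value is the Euclidean distance from the sensor origin to the corresponding point in the sensor frame (consistent with, but not explicitly stated by, the description above):

```matlab
% Compare range against the Euclidean norm of each [x y z] point;
% agreement up to floating-point error is the assumption being tested.
[pts, r] = read(lidar);
d = sqrt(sum(pts.^2, 3));          % m-by-n matrix of point norms
maxErr = max(abs(d - r), [], "all", "omitnan")
```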
reflectivity — Reflectivity of surface materials
m-by-n matrix of intensity values in the range [0, 1]
Reflectivity of surface materials, returned as an m-by-n matrix of intensity values in the range [0, 1], where m is the number of rows in the point cloud and n is the number of columns. Each point in the reflectivity output corresponds to a point in the pointCloud output. The function returns points that are not part of a surface material as NaN.
To calculate reflectivity, the lidar sensor uses the Phong reflection model [1]. The model describes surface reflectivity as a combination of diffuse reflections (scattered reflections, such as from rough surfaces) and specular reflections (mirror-like reflections, such as from smooth surfaces).
Data Types: single
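To make the model concrete, here is an illustrative single-ray Phong computation; the material coefficients kd and ks and the shininess exponent alpha are hypothetical illustration values, not sensor parameters.

```matlab
% Phong reflection sketch for one lidar ray. A lidar is monostatic, so
% the viewing direction V equals the illumination direction L.
N = [0 0 1];                        % surface normal (unit vector)
L = [0.3 0 0.95];  L = L/norm(L);   % direction from surface to sensor
V = L;                              % sensor emits and receives along L
R = 2*dot(L, N)*N - L;              % mirror reflection of L about N

kd = 0.7;  ks = 0.3;  alpha = 10;   % hypothetical material coefficients
refl = kd*max(dot(N, L), 0) + ks*max(dot(R, V), 0)^alpha
```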
semantic — Semantic label identifier
m-by-n-by-3 array of RGB triplet values
Semantic label identifier for each pixel in the image, returned as an m-by-n-by-3 array of RGB triplet values. m is the vertical resolution of the image, and n is the horizontal resolution of the image.
The table shows the object IDs used in the default scenes. If a scene contains an object that does not have an assigned ID, that object is assigned an ID of 0. The detection of lane markings is not supported.
ID | Type |
---|---|
0 | None/default |
1 | Building |
2 | Not used |
3 | Other |
4 | Pedestrians |
5 | Pole |
6 | Lane Markings |
7 | Road |
8 | Sidewalk |
9 | Vegetation |
10 | Vehicle |
11 | Not used |
12 | Generic traffic sign |
13 | Stop sign |
14 | Yield sign |
15 | Speed limit sign |
16 | Weight limit sign |
17-18 | Not used |
19 | Left and right arrow warning sign |
20 | Left chevron warning sign |
21 | Right chevron warning sign |
22 | Not used |
23 | Right one-way sign |
24 | Not used |
25 | School bus only sign |
26-38 | Not used |
39 | Crosswalk sign |
40 | Not used |
41 | Traffic signal |
42 | Curve right warning sign |
43 | Curve left warning sign |
44 | Up right arrow warning sign |
45-47 | Not used |
48 | Railroad crossing sign |
49 | Street sign |
50 | Roundabout warning sign |
51 | Fire hydrant |
52 | Exit sign |
53 | Bike lane sign |
54-56 | Not used |
57 | Sky |
58 | Curb |
59 | Flyover ramp |
60 | Road guard rail |
61 | Bicyclist |
62-66 | Not used |
67 | Deer |
68-70 | Not used |
71 | Barricade |
72 | Motorcycle |
73-255 | Not used |
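As a visualization sketch, you can display the raw semantic output as a truecolor image. Note that the mapping from RGB triplets to the ID table above is not specified on this page, and the assumed numeric format (uint8 values or floating-point values in [0, 1]) may differ in practice.

```matlab
% Display the semantic output as a truecolor image. This assumes the
% RGB triplets are uint8 values or floating-point values in [0, 1].
[~, ~, ~, semantic] = read(lidar);
image(semantic)
axis image
```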
Dependencies
To enable semantic output, set the EnableSemanticOutput argument of the sim3d.sensors.Lidar object to 1.
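A setup sketch follows; passing EnableSemanticOutput as a name-value argument to the constructor is an assumption, so consult the sim3d.sensors.Lidar reference for the exact creation syntax.

```matlab
% Hypothetical setup: enable semantic output when creating the sensor.
% The name-value constructor style shown here is an assumption.
lidar = sim3d.sensors.Lidar(EnableSemanticOutput=1);
[pts, r, refl, semantic] = read(lidar);
```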
References
[1] Phong, Bui Tuong. “Illumination for Computer Generated Pictures.” Communications of the ACM 18, no. 6 (June 1975): 311–17. https://doi.org/10.1145/360825.360839.
Version History
Introduced in R2024b
See Also
sim3d.sensors.Lidar | sim3d.World | sim3d.Actor | pointCloud (Computer Vision Toolbox) | pcplayer (Computer Vision Toolbox)