10 years of news and resources for members of the IEEE Signal Processing Society
For our August 2018 issue, we cover recent patents granted in the area of lidar-based scene analysis.
Patent no. 10,031,231 presents an object-detection system for an automated vehicle that includes a lidar and a controller. The lidar detects a point cloud organized into a plurality of scan lines. The controller, in communication with the lidar, is configured to classify each detected point in the point cloud as a ground point or a non-ground point; to define runs of non-ground points, where each run consists of one or more adjacent non-ground points in a scan line, separated from a subsequent run by at least one ground point; and to define a cluster of non-ground points associated with an object. The cluster is characterized by a first run from a first scan line being associated with a second run from a second scan line when a first point from the first run is displaced by less than a distance threshold from a second point from the second run.
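The run-and-link clustering idea can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the function names, the brute-force nearest-pair search, and the 0.5 m default threshold are all choices made for the sketch.

```python
import numpy as np

def find_runs(mask):
    """Return (start, end) index pairs of consecutive True entries in mask."""
    runs, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(mask)))
    return runs

def cluster_non_ground(scan_lines, ground_masks, dist_threshold=0.5):
    """Link runs of non-ground points across adjacent scan lines into clusters.

    scan_lines:   list of (N_i, 3) point arrays, one per scan line.
    ground_masks: list of boolean arrays, True where a point is ground.
    """
    # Extract runs of non-ground points per scan line.
    all_runs = []
    for pts, gmask in zip(scan_lines, ground_masks):
        spans = find_runs(~np.asarray(gmask))
        all_runs.append([pts[s:e] for s, e in spans])

    # Union-find over (line index, run index) keys.
    parent = {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for li, runs in enumerate(all_runs):
        for ri, run_pts in enumerate(runs):
            key = (li, ri)
            parent[key] = key
            if li > 0:
                for pj, prev_pts in enumerate(all_runs[li - 1]):
                    # Associate runs if any point pair is closer than the threshold.
                    d = np.linalg.norm(run_pts[:, None, :] - prev_pts[None, :, :], axis=2)
                    if d.min() < dist_threshold:
                        union((li - 1, pj), key)

    # Gather runs sharing a root into one cluster of points each.
    groups = {}
    for li, ri in parent:
        groups.setdefault(find((li, ri)), []).append(all_runs[li][ri])
    return [np.vstack(g) for g in groups.values()]
```

With two scan lines containing two well-separated objects, the sketch yields one cluster per object, each pooling the linked runs from both lines.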
Patent no. 9,989,969 introduces an apparatus and method for visual localization, comprising a visual camera system that outputs real-time camera data and a graphics processing unit that receives it. The graphics processing unit accesses a database of prior map information and generates a synthetic image, which is compared to the real-time camera data to determine corrected position data; the camera position is then determined based on the corrected position data. In some embodiments, a corrective system can use the determined camera position to navigate the vehicle.
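A toy illustration of the compare-and-correct idea: render a synthetic view for each candidate pose and keep the pose whose rendering best matches the live camera frame. The normalized cross-correlation score, the roll-based stand-in "renderer", and all names below are assumptions for the sketch, not the patent's GPU pipeline.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized images (1.0 = identical)."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def best_offset(live, synth_fn, candidates):
    """Return the candidate pose whose synthetic rendering best matches `live`."""
    scores = {c: ncc(live, synth_fn(c)) for c in candidates}
    return max(scores, key=scores.get)
```

In practice the candidate set would be poses near the current position estimate, and the winning score feeds a filter that corrects the vehicle's position.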
Patent no. 9,945,950 presents a method of localizing a transportable apparatus within an environment. Data is received from a first ranging sensor device configured to collect information relating to a 2D representation of the environment through which the apparatus is moving, and further data is received from a second ranging sensor device configured to collect information relating to at least a surface over which the apparatus is moving. The ranging sensor data are used to estimate the linear and rotational velocities of the apparatus, and these estimates are used to generate a new 3D point cloud of the environment. The method then seeks to match the new 3D point cloud with, or within, an existing 3D point cloud in order to localize the apparatus with respect to the existing point cloud.
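Registering a newly generated point cloud against an existing one is commonly done with a scan-matching method such as iterative closest point (ICP). The patent does not prescribe this particular algorithm; the following point-to-point ICP in 2D is an illustrative sketch with assumed names and a brute-force nearest-neighbour search.

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal point-to-point ICP aligning src onto dst; returns (R, t)."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours in dst for each current point.
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = dst[d.argmin(axis=1)]
        # Best rigid transform for these correspondences (Kabsch/Procrustes).
        mu_s, mu_d = cur.mean(0), nn.mean(0)
        H = (cur - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:      # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti
        # Compose the incremental transform into the running estimate.
        R, t = Ri @ R, Ri @ t + ti
    return R, t
```

For a small pose offset the recovered (R, t) localizes the new cloud within the existing one; real systems use 3D points, k-d trees for the neighbour search, and outlier rejection.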
As discussed in patent no. 9,905,032, when capturing an environment it may be desirable to remove temporary objects (e.g., vehicles depicted in captured images of a street) in furtherance of individual privacy and/or an unobstructed rendering of the environment. Techniques that evaluate visual images alone to identify and remove objects may be imprecise, failing to identify and remove some objects while incorrectly omitting portions of the images that do not depict such objects. Such capturing scenarios, however, often also involve capturing a lidar point cloud, which can identify the presence and shapes of objects with higher precision. The lidar data may additionally enable a movement classification that differentiates moving and stationary objects, facilitating accurate removal of the objects from the rendering of the environment (e.g., identifying an object in a first image may guide its identification in sequentially adjacent images).
As detailed in patent no. 9,870,512, object movement in machine vision is often estimated by applying image-evaluation techniques, such as perspective and parallax, to visible-light images. However, the precision of such techniques may be limited by visual distortions in the images, such as glare and shadows. Lidar data may instead be available (e.g., for object avoidance in automated navigation) and may serve as a high-precision data source for such determinations. Respective lidar points of a lidar point cloud may be mapped to voxels of a three-dimensional voxel space, and voxel clusters may be identified as objects. The movement of the lidar points may be classified over time, and the respective objects may be classified as moving or stationary based on the classification of their associated lidar points. This classification may yield precise results, because voxels in a three-dimensional voxel space present clearly differentiable statuses when evaluated over time.
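The voxel-occupancy idea can be caricatured as follows. The voxel size, the 80% occupancy rule, and the string labels are assumptions made for this sketch; the patented method classifies lidar points and voxel clusters rather than raw per-voxel occupancy.

```python
import numpy as np

def voxelize(points, voxel_size=0.5):
    """Map an (N, 3) array of points to a set of integer voxel indices."""
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

def classify_voxels(frames, voxel_size=0.5, stationary_frac=0.8):
    """Label a voxel 'stationary' if it is occupied in most frames, else 'moving'.

    frames: list of (N_i, 3) point arrays captured at successive times.
    """
    counts = {}
    for pts in frames:
        for v in voxelize(pts, voxel_size):
            counts[v] = counts.get(v, 0) + 1
    n = len(frames)
    return {v: ('stationary' if c / n >= stationary_frac else 'moving')
            for v, c in counts.items()}
```

A voxel occupied in every frame (a parked car, a wall) comes out stationary, while voxels touched only briefly by a passing object come out moving.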
In patent no. 9,710,714, point cloud data is received and a ground plane is segmented. A two-dimensional image of the segmented ground plane is generated based on its intensity values, and lane marking candidates are determined from intensities within that image. Image data is also received, and the generated two-dimensional image is registered with it; lane marking candidates of the received image data are then determined based on the candidates of the registered two-dimensional image. Image patches are selected from the two-dimensional image and from the received image data based on the determined lane marking candidates, and feature maps including the selected patches are generated. The set of feature maps is sub-sampled, a feature vector is generated from it, and lane markings are determined from the generated feature vector.
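The first two steps, rasterizing segmented ground points into a top-down intensity image and thresholding it for lane-marking candidates, might be sketched like this. The grid resolution, the threshold, and the function names are assumptions, and the patent's later registration and feature-map stages are omitted.

```python
import numpy as np

def ground_intensity_image(points, intensities, cell=0.1, shape=(100, 100)):
    """Rasterize ground points into a top-down 2D image of mean lidar intensity."""
    img = np.zeros(shape)
    cnt = np.zeros(shape)
    ix = (points[:, 0] / cell).astype(int)
    iy = (points[:, 1] / cell).astype(int)
    ok = (ix >= 0) & (ix < shape[0]) & (iy >= 0) & (iy < shape[1])
    np.add.at(img, (ix[ok], iy[ok]), intensities[ok])   # unbuffered accumulation
    np.add.at(cnt, (ix[ok], iy[ok]), 1)
    return np.divide(img, cnt, out=np.zeros(shape), where=cnt > 0)

def lane_candidates(img, thresh=0.6):
    """Return (row, col) cells whose mean intensity exceeds the threshold;
    painted lane markings are strongly retro-reflective in lidar returns."""
    return np.argwhere(img > thresh)
```

A bright stripe of returns shows up as a column of above-threshold cells, which would then be registered against the camera image and refined by the learned stages.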
If you have an interesting patent to share when we next feature patents related to lidar-based environment analysis, or if there is a signal processing research field you would especially like to see highlighted in this section, please send an email to Csaba Benedek (benedek.csaba AT sztaki DOT mta DOT hu).
Title: Lidar object detection system for automated vehicles
Inventors: Zermas; Dimitris (Minneapolis, MN), Izzat; Izzat H. (Oak Park, CA), Mangalgiri; Anuradha (Agoura Hills, CA)
Issued: July 24, 2018
Assignee: Delphi Technologies, Inc. (Troy, MI)
Title: Visual localization within LIDAR maps
Inventors: Eustice; Ryan M. (Ann Arbor, MI), Wolcott; Ryan W. (Ann Arbor, MI)
Issued: June 5, 2018
Assignee: The Regents of The University of Michigan (Ann Arbor, MI)
Title: Method for localizing a vehicle equipped with two lidar systems
Inventors: Newman; Paul Michael (Oxford, GB), Baldwin; Ian Alan (Oxford, GB)
Issued: April 17, 2018
Assignee: Oxford University Innovation Limited (Oxford, GB)
Title: Object removal using lidar-based classification
Inventors: Rogan; Aaron Matthew (Westminster, CO), Kadlec; Benjamin James (Boulder, CO)
Issued: February 27, 2018
Assignee: Microsoft Technology Licensing, LLC (Redmond, WA)
Title: Fusion of RGB images and LiDAR data for lane classification
Inventors: Chen; Xin (Evanston, IL), Zang; Andi (Chicago, IL), Huang; Xinyu (Cary, NC)
Issued: July 18, 2017
Assignee: Nokia Technologies Oy (Espoo, FI)