Making a LiDAR – Part 5

Unity Point Cloud Rendering

By: David
Principal Consultant, Data Exploration

12th April 2019

4 minute read

Now that we’ve got our LIDAR finished and our first scan completed, we are left with an SD card with some data on it. The data is a list of several million points (called a point cloud) represented in spherical polar coordinates. Each point represents the distance to a target from the centre of the LIDAR scan. In its own right, this isn’t very exciting, so we need to find a way to visualise the data. Quite a few people have contacted me to ask how I did this, so unlike the previous “philosophical” LIDAR blogs, this one will go into a little more technical detail. So, if you’re not interested in driving 3D rendering engines, skip the text and go straight to the video!
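
Before anything can be rendered, each (range, azimuth, elevation) sample needs converting into the Cartesian (x, y, z) coordinates a 3D engine works in. Here’s a minimal sketch of that conversion; the angle conventions (azimuth measured around the vertical axis, elevation measured up from the horizontal plane) are my assumption, so match them to your own scanner’s output.

```csharp
using UnityEngine;

public static class ScanPoint
{
    // range in metres, azimuth and elevation in degrees.
    public static Vector3 ToCartesian(float range, float azimuthDeg, float elevationDeg)
    {
        float az = azimuthDeg * Mathf.Deg2Rad;
        float el = elevationDeg * Mathf.Deg2Rad;

        // Project the range onto the ground plane, then split it by azimuth.
        float horizontal = range * Mathf.Cos(el);
        return new Vector3(
            horizontal * Mathf.Cos(az),   // x
            range * Mathf.Sin(el),        // y is "up" in Unity
            horizontal * Mathf.Sin(az));  // z
    }
}
```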

I’ve chosen to use the Unity game engine, a software tool targeted at creating 3D video games. It handles the maths and graphics of 3D rendering, it provides a user interface for configuring the 3D world, and it uses the C# programming language for the developer to add “game logic”. If you know Unity, this blog should give you enough information to render a point cloud.

An object in the Unity world is called a GameObject, and each GameObject represents a “thing” that we can see in the 3D world. We also need to create a camera, which gives the user their view of the 3D world. It’s straightforward enough to write some C# code that moves and rotates the camera in accordance with mouse and keyboard input. If we fill the world with GameObjects and move the camera through it, Unity takes care of the rest.
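
As an illustration, here’s a minimal fly-through camera sketch, assuming Unity’s default input axes (“Mouse X”, “Mouse Y”, “Horizontal”, “Vertical”); attach it to the Camera GameObject. My exact script differed, but the idea is the same.

```csharp
using UnityEngine;

public class FlyCamera : MonoBehaviour
{
    public float moveSpeed = 5f;  // metres per second
    public float lookSpeed = 2f;  // degrees per unit of mouse movement

    float pitch;
    float yaw;

    void Update()
    {
        // Rotate with the mouse, clamping pitch so the view can't flip over.
        yaw += Input.GetAxis("Mouse X") * lookSpeed;
        pitch -= Input.GetAxis("Mouse Y") * lookSpeed;
        pitch = Mathf.Clamp(pitch, -89f, 89f);
        transform.rotation = Quaternion.Euler(pitch, yaw, 0f);

        // Move with WASD / arrow keys, relative to where we're looking.
        Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
        transform.Translate(input * moveSpeed * Time.deltaTime, Space.Self);
    }
}
```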

A GameObject is made of a 3D mesh of points that defines its shape. The mesh can be anything from a complicated shape like a person to a simple geometrical shape like a sphere. The developer needs to define a Material, which is rendered on the GameObject surface, and a Shader, which determines how the Material surface responds to light.
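
To make that concrete, here’s a minimal sketch of the three pieces together, assuming the built-in “Standard” shader: Unity supplies the sphere Mesh, and we give its surface a simple Material.

```csharp
using UnityEngine;

public class SphereExample : MonoBehaviour
{
    void Start()
    {
        // CreatePrimitive builds a GameObject with a ready-made sphere Mesh.
        GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        sphere.transform.position = Vector3.zero;

        // The Material is what gets drawn on the surface; its Shader
        // determines how that surface responds to light.
        Material material = new Material(Shader.Find("Standard"));
        material.color = Color.red;
        sphere.GetComponent<Renderer>().material = material;
    }
}
```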

The obvious way to render the LIDAR data is to create a sphere GameObject for each LIDAR data point. This produces wonderful 3D images, and as the user moves through the point cloud each element is rendered as a beautifully shaded sphere. Unfortunately, because each sphere translates into many points of a 3D Mesh, and because we have several million LIDAR data points, that’s a huge amount of work for the computer to get through. The end result is a very slow frame rate which isn’t suitable for real time. For video generation, I configured Unity to generate frames offline, spaced 1/24th of a second apart in game time. The result is a series of images that can be stitched together to make a fluid video sequence.
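
Unity supports this decoupling of game time from rendering time directly, via Time.captureFramerate. A minimal sketch, assuming a 24 fps target; the output folder name is illustrative.

```csharp
using System.IO;
using UnityEngine;

public class OfflineRecorder : MonoBehaviour
{
    int frame;

    void Start()
    {
        Directory.CreateDirectory("Frames");

        // Advance game time by a fixed 1/24 s per frame, however long each
        // frame actually takes to render.
        Time.captureFramerate = 24;
    }

    void Update()
    {
        // Save every rendered frame; the files can later be stitched
        // into a video with an external tool.
        ScreenCapture.CaptureScreenshot(Path.Combine("Frames", $"frame_{frame:D5}.png"));
        frame++;
    }
}
```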

I thought it would be fun to view the LIDAR world through the Oculus Rift headset, but here we require very high frame rates, so offline rendering isn’t going to work. Rather than plotting each LIDAR point as its own GameObject, I used a batch of LIDAR points (about 60k of them) to define a single Mesh for one GameObject. The GameObject then takes the shape defined by that 60k set of scanned LIDAR points. The GameObject Mesh requires a custom Shader to render its surface as transparent and to draw each Mesh vertex as a flat 2D disk. This reduces the number of GameObjects by a factor of 60k, with a massive drop in CPU workload: the total number of GameObjects is then the number of LIDAR data points divided by 60k. The downside is that we lose the shading on each LIDAR data point. From a distance that still looks great, but if the user moves close to a LIDAR point the image is not quite so good. The advantage is a frame rate fast enough for virtual reality.
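
Here’s a minimal sketch of packing one chunk of converted points into a single Mesh rendered as points. The custom disk Shader itself isn’t shown; the Material passed in is assumed to use it. One plausible reason for the ~60k figure: a Unity Mesh defaults to a 16-bit index buffer, which caps it at 65,535 vertices.

```csharp
using UnityEngine;

public static class PointCloudChunk
{
    public static GameObject Build(Vector3[] points, Material diskMaterial)
    {
        var mesh = new Mesh();
        mesh.vertices = points;  // at most 65,535 with default 16-bit indices

        // Index every vertex once and tell Unity to rasterise them as
        // points rather than triangles.
        var indices = new int[points.Length];
        for (int i = 0; i < points.Length; i++) indices[i] = i;
        mesh.SetIndices(indices, MeshTopology.Points, 0);

        // One GameObject now carries the whole chunk of scan points.
        var go = new GameObject("PointCloudChunk");
        go.AddComponent<MeshFilter>().mesh = mesh;
        go.AddComponent<MeshRenderer>().material = diskMaterial;
        return go;
    }
}
```

Calling PointCloudChunk.Build once per ~60k-point slice of the scan yields a few dozen GameObjects in total, instead of several million.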

As a final note, it is quite a surreal experience to scan an area and then view it in virtual reality through the Oculus Rift headset. It is quite a shame that the reader can only see the 2D video renders. The best way I can describe it is that it’s analogous to stepping into the Matrix to visit Morpheus and Neo!
