
Reasons to be Cheerful


By: Nicholas Hill
CEO

18th September 2019

3 minute read


At a time when the media is particularly obsessed with gloomy speculation and bad news, it was great to hear not one but two good news stories for the UK engineering technology sector.

The first came out of the A-level and GCSE results announced in recent weeks. For the first time, girls accounted for a majority of science A-level entries (50.3%). This progress has been driven by years of campaigning by government, business, professional bodies and schools, by influences such as the appearance of female role models on TV and radio, and by a move to a more practically oriented curriculum.

Welcome News

As someone who is impatient to see an improvement in gender diversity in engineering, this is welcome news.  Digging a little deeper into the numbers does reveal an important issue though, which is that the overall science numbers are propped up by high levels of girls taking A-level biology.  If you look at the A-levels that lie at the core of many engineering disciplines, girls account for just 23% of physics intake and 39% of maths.  The attractiveness of physics in particular, essential for so much of engineering, has a long way to go before we reach anything like gender parity.

So the A-level figures are perhaps better news for our burgeoning biotech sector than for a typical engineering technology employer. What is more encouraging for engineering is that the GCSE entry figures show girls making up around 50% in all three sciences – physics, chemistry and biology – and in maths too. That’s a great result, and it will be interesting to see how this GCSE cohort’s subject choices turn out at A-level.

A Practical Effect

EngineeringUK data shows that just 12% of those working in engineering are female, with the disparity largely due to girls dropping out of the educational pipeline at every decision point, despite generally performing better than boys in STEM subjects at school. We need continued, incremental forward progress, so it’s good to be able to actually see some. Gender diversity matters not just because engineering will surely be in a better place when it is less male-dominated, but also for the purely practical effect it will have on increasing the overall size of the talent pool. As anyone running a UK company that needs to recruit professional engineers will tell you, we have been facing a desperate talent shortage for some years.

The other good news that caught my eye was record foreign investment in UK tech companies this year.  £5.5bn was invested in the first seven months of the year, which equates to a greater per capita amount than for the US tech sector – wow!

The UK leads Europe in inward investment, but is probably doing particularly well just now because of the weak pound and the US-China trade war, which has made those countries less attractive to foreign investors, many of which are from Asia.  This increase in investment in the tech sector is in spite of an overall reduction in UK foreign direct investment, and serves to show that the UK is still a force to be reckoned with in new technology and innovation.

Your Turn

I hope you enjoyed this brief respite from the doom and gloom stories.  If you’d like another diversion before going back to your newspaper, perhaps have a think about what else your organisation could do to promote engineering as a potentially attractive career option to girls and women, particularly those making implicit career choices through the subject choices they are making at A-level and university.


Elegance and Sustainability


By: Steve M.Fitz
Director, Technology

5th September 2019

3 minute read


There is a grandfather clock in my house that is nearly 200 years old – it has been in the family for a long time. Its face is lined and the body is a bit shabby (rather like its owner, I hear you say) but it keeps good time and announces itself on the hour with a musical bong. Once a week I lift the 7 kg weights approximately 1 m to make sure that it continues for the next 7 days. That energy input is equivalent to about one-quarter of the capacity of an AA cell; an impressive exercise in low-power design given the amount of ticking and bonging that goes on in a week. In its 200-year life, it would have used about 2400 batteries if that were how it was powered.

Were it to break we would have to get it fixed, because it is impossible to contemplate destroying something with such dignity. Luckily the designer had repair in mind, so it has been patched and bodged over the years. If it ever finally comes to the end of its life, however, every part of it could be recycled: the wood, the brass, the lead weights. In fact, it could be reborn as a whole new clock.

Designing for a changing world

I have been thinking about this clock and the lessons it can teach us in designing future products that face up to the implications of climate change.

Form: Most of the products that we use are so ugly that we cannot wait to sling them the minute their function is superseded by the next model. They have no personality or vitality, they are just there to do a job and we have no emotional attachment to them at all. Looking at it more positively, if a product is to be designed to have a long life it will have to be sufficiently elegant for us to want to have it around for that long. Something that is old (or at least not current) will have to get cool; people who carry around and use stuff that is not the latest will themselves have to get cool. It has happened in the past and it needs to happen now.

Function: The clock is quite demanding. It needs winding weekly and putting right occasionally; wouldn’t it be better to have it powered by electricity and set by radio waves? – wouldn’t that improve the ‘user experience’? Definitely not. One of the attractive things about the clock is its dependence on me to wind it; we have bonded, I and the clock are one machine.

So some questions to ask when designing our next product: How can I make this last 200 years? How can I make this so elegant that someone wants it to last 200 years? How can I make this completely recyclable, even if that means making it more demanding of the user?


What Is 5G and How Does It Work?

By: Daniel Tomlinson
Project Engineer

18th July 2019

5 minute read


As a society that is becoming increasingly dependent on data-driven applications, 5G promises to provide better connectivity and faster speeds for our network devices. However, whilst the previous generations of mobile communications have been fairly analogous to one another in terms of distribution and multiple-user access, 5G will be drastically different – making it a challenging system to implement. So, how does it work?

Initial Concept

Fig 1 – The 5G Triangle: enhanced mobile broadband, massive IoT and low latency

As with any concept, 5G was initially based on a very broad and ambiguous set of standards, which promised low latency, speeds in the region of gigabits per second and better connectivity. Whilst no intricacies of the system were known at the time, we knew that in order to achieve faster data rates and larger bandwidths we would have to move to higher frequencies – and this is where the problem occurs. Due to the severe atmospheric attenuation experienced by high-frequency signals, range and power become serious issues that our current systems aren’t capable of handling.

Range & Power

A modern GSM tower features multiple cellular base stations that, together, are designed to transmit through 360° horizontally and over ranges in the order of tens of miles, depending on the terrain. However, the power received from a cellular base station falls away with the square of the distance d:

Pr ∝ 1 / d²

Factoring in the carrier frequency f via the free-space path loss makes the effect worse still:

Pr ∝ (c / (4π·d·f))²

It quickly becomes obvious that transmitting over larger distances and at higher frequencies is increasingly inefficient. Therefore, a key part of the 5G overhaul requires thousands of miniature base stations to be strategically placed in dense urban environments in order to maximise capacity with minimal obstructions.
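
To put rough numbers on that relationship, the sketch below evaluates the standard free-space path loss in dB; the frequencies and distances are illustrative assumptions, not figures from the article.

using System;

class PathLossDemo
{
    // Free-space path loss in dB: FSPL = 20 * log10(4 * pi * d * f / c)
    static double FsplDb(double distanceMetres, double frequencyHz)
    {
        const double c = 299_792_458.0;   // speed of light, m/s
        return 20.0 * Math.Log10(4.0 * Math.PI * distanceMetres * frequencyHz / c);
    }

    static void Main()
    {
        Console.WriteLine($"1.8 GHz over 5 km : {FsplDb(5_000, 1.8e9):F1} dB");   // ~111 dB, a typical macro-cell link
        Console.WriteLine($"28 GHz over 5 km  : {FsplDb(5_000, 28e9):F1} dB");    // ~135 dB, roughly 24 dB worse at mmWave
        Console.WriteLine($"28 GHz over 200 m : {FsplDb(200, 28e9):F1} dB");      // ~107 dB, recovered by shrinking the cell
    }
}

Shrinking the cell from kilometres to a couple of hundred metres roughly buys back the extra loss incurred by moving up to millimetre-wave frequencies, which is why 5G leans so heavily on dense networks of small cells.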

Directivity

Fig 2 – Radiation Pattern of an Isotropic Antenna versus an Antenna with Gain (Dipole)

One way to increase the range of a transceiver, whilst keeping the power output the same, is to incorporate gain into the antenna. This is achieved by focusing the transmitted power towards a particular point as opposed to equally in all directions (isotropic).

Figure 2 shows such a comparison, in which the dipole antenna’s energy is focused in the directions of 0 and 180 degrees. Equation three adds this gain factor to the path loss, giving the familiar Friis relation:

Pr = Pt · Gt · Gr · (c / (4π·d·f))²

However, as the essence of a wireless handset is portability, it is likely to move around a lot with the user. Therefore, a high gain 5G transmitter would still require a tracking system to ensure that it stays focused directly at the end user’s handset.

User Tracking

One solution for tracking devices could be to employ a high-frequency transceiver with a phased-array antenna structure. This would act as a typical base station, capable of transmitting and receiving, but an array of hundreds of small patch antennas (and some DSP magic) would make it capable of beamforming. This would not only allow the structure to transmit high-gain signals but also to steer the beam by changing the relative phase of each element’s output.
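
To illustrate the principle behind that steering, the sketch below evaluates the array factor of a uniform linear array; the element count, spacing and steering angle are illustrative assumptions rather than the parameters of any real 5G base station.

using System;
using System.Numerics;

class BeamSteeringDemo
{
    // Normalised array factor of an n-element uniform linear array with element spacing d
    // (in wavelengths), steered to steerDeg by a progressive phase shift across the elements.
    static double ArrayFactor(int n, double dWavelengths, double steerDeg, double thetaDeg)
    {
        double steer = steerDeg * Math.PI / 180.0;
        double theta = thetaDeg * Math.PI / 180.0;
        Complex sum = Complex.Zero;
        for (int i = 0; i < n; i++)
        {
            // Geometric phase of element i minus the phase deliberately applied to it.
            double phase = 2.0 * Math.PI * dWavelengths * i * (Math.Sin(theta) - Math.Sin(steer));
            sum += Complex.FromPolarCoordinates(1.0, phase);
        }
        return sum.Magnitude / n;   // equals 1 at the steered angle
    }

    static void Main()
    {
        // 64 elements at half-wavelength spacing, beam steered 20 degrees off boresight.
        foreach (double angle in new[] { -40.0, 0.0, 20.0, 40.0 })
            Console.WriteLine($"|AF({angle,5:F1} deg)| = {ArrayFactor(64, 0.5, 20.0, angle):F3}");
    }
}

Changing only the applied phase gradient moves the peak of the combined pattern to a new angle, which is all that “steering the beam by changing the relative phase” amounts to.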

However, as this is a technically complex system that has yet to be implemented on such a large scale, the technology is still in its infancy and is currently being trialled in select areas only. Considerable efforts will have to be made to ensure that such a transceiver could operate in a bustling environment where multipath and body-blocking would cause strong interference.

5G in 2019

3GPP (the 3rd Generation Partnership Project) is an organisation that was established in 1998 and helped to produce the original standards for 3G. It has since gone on to produce the specifications for 4G/LTE and is currently working towards a 5G-ready system in 2020.

With certain service carriers having already released 5G this year in parts of America, 2019 will welcome numerous 5G handsets from several of the flagship giants such as Samsung, LG and Huawei, and even from Xiaomi – a budget smartphone manufacturer.

As with previous generations though, only limited coverage will be available at first (and at a hefty premium), but in practice, it will be fairly similar to Wi-Fi hot-spotting. A lot of work is still required to overcome the issues as discussed above.


The Virtue of Failure

By: Polly Britton
Project Engineer, Product Design

25th June 2019

3 minute read



In order to innovate, we must accept the possibility of failure. Since the vast majority of inventions and ideas are doomed to fail, failure is inevitable, even for the most successful companies. And yet, businesses try to hide their mistakes in an attempt to appear perfect in the public eye. I started thinking about this when I heard about the Museum of Failure in Sweden, which exhibits products that companies invented but that their customers didn’t want, and certainly wouldn’t pay for.

Being ashamed of our mistakes may be a natural human behaviour, or it might be cultural, but there are times when it is advantageous to embrace failure.

Toyota’s Andon Cords

On Toyota’s factory floor, cars are assembled on a conveyor belt lined with employees who build them up bit by bit as they pass along the assembly line. Each employee on the line has a big yellow button within arm’s reach, which they are taught to push every time they detect a problem with the assembly. When pushed, the button alerts the rest of the team, bringing their attention to the issue immediately.

In earlier days of Toyota’s manufacturing, there were ropes hanging above the assembly line that served this function, called “Andon cords”. Pulling the cord halted the conveyor, bringing all work to a complete stop until the problem was solved. Although it might sound like a waste of time, it actually increased Toyota’s efficiency and the technique was adopted by other auto manufacturers.

Toyota keeps track of the number of times the button/cord is used each day. When the rate of alarms decreases it is considered a serious problem since it indicates the employees are not being observant enough.

“A stitch in time saves nine”

It’s much easier to solve problems when you attend to them as early as possible. But to attend to problems, you have to acknowledge their existence, which sometimes means admitting to a mistake. If it’s your own mistake you’re likely to feel ashamed of it, and if it’s someone else’s mistake you may feel guilty about pointing it out and embarrassing them. That reaction is natural but somewhat irrational; we all make mistakes, and everyone knows that. It’s easy to forgive a mistake if you can catch it early, but it’s harder to forgive later when the damage is already done.

Product Design

In the world of product design, each new project is an opportunity to make many mistakes. The project itself might even be a mistake, as was the case for many exhibits in the Museum of Failure. As designers and engineers, it’s important to be honest about our mistakes and the mistakes of our peers – even our superiors. Our projects might benefit greatly from a culture of forgiveness where we feel less ashamed of admitting to mistakes, or maybe even a culture like Toyota’s where detecting problems is encouraged and a lack of problems is looked on with suspicion.


Making a LiDAR – Part 5

Unity Point Cloud Rendering

By: David
Principal Consultant, Data Exploration

12th April 2019

4 minute read



Now we’ve got our LIDAR finished, and our first scan completed, we are left with an SD card with some data on it. The data is a list of several million points (called a point cloud) represented in polar spherical coordinates. Each point represents a target distance from the centre of the LIDAR scan. In its own right, this isn’t very exciting, so we need to find a way to visualise the data. Quite a few people have contacted me to ask how I did this, so unlike the previous “philosophical” LIDAR blogs, this one will go into a little more technical detail. So, if you’re not interested in driving 3D rendering engines, then skip the text and go straight to the video!
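
The first step is to turn each sample (a range plus two scan angles) into an x, y, z position that a rendering engine can use. Below is a minimal sketch of that conversion in Unity C#; the parameter names and angle conventions are my assumptions, not the actual format of the scan file.

using UnityEngine;

public static class PointCloudMaths
{
    // Convert one LIDAR sample into a Cartesian point.
    // Conventions assumed here: azimuth is measured about the vertical axis and
    // elevation is measured up from the horizontal plane; Unity's y axis points up.
    public static Vector3 SphericalToCartesian(float rangeMetres, float azimuthRad, float elevationRad)
    {
        float horizontal = rangeMetres * Mathf.Cos(elevationRad);
        return new Vector3(
            horizontal  * Mathf.Sin(azimuthRad),    // x
            rangeMetres * Mathf.Sin(elevationRad),  // y (up)
            horizontal  * Mathf.Cos(azimuthRad));   // z
    }
}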

I’ve chosen to use the Unity game engine, a software tool targeted at creating 3D video games. It handles the maths and graphics of 3D rendering, provides a user interface for configuring the 3D world, and uses the C# programming language for the developer to add “game logic”. If you know Unity, this blog should give you enough information to render a point cloud.

An object in the Unity world is called a GameObject, and each GameObject represents a “thing” that we can see in the 3D world. We also need to create a camera, which gives the user their view of the 3D world. It’s straightforward enough to write some C# code that moves and rotates the camera in accordance with mouse and keyboard input. If we fill the world with GameObjects, and we move the camera through the world, then Unity takes care of the rest.
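
As a rough illustration of that camera code – the speeds and input axes here are placeholder choices, not the ones used in the actual project – a simple fly-through controller might look like this:

using UnityEngine;

// Attach to the Camera GameObject: the mouse rotates the view and the
// WASD / arrow keys (Unity's default Horizontal/Vertical axes) translate it.
public class FlyCamera : MonoBehaviour
{
    public float moveSpeed = 5f;   // metres per second
    public float lookSpeed = 2f;   // degrees per unit of mouse movement

    private float pitch;
    private float yaw;

    void Update()
    {
        // Rotate with the mouse.
        yaw   += lookSpeed * Input.GetAxis("Mouse X");
        pitch -= lookSpeed * Input.GetAxis("Mouse Y");
        pitch  = Mathf.Clamp(pitch, -89f, 89f);
        transform.rotation = Quaternion.Euler(pitch, yaw, 0f);

        // Translate with the keyboard, relative to the direction we are facing.
        Vector3 move = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
        transform.Translate(move * moveSpeed * Time.deltaTime, Space.Self);
    }
}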

A GameObject is made of a 3D mesh of points to define its shape. The mesh can be anything from a complicated shape like a person, to a simple geometrical shape like a sphere. The developer needs to define a Material which is rendered on the GameObject surface, and a Shader to determine how the Material surface responds to light.

The obvious way to render the LIDAR data is to create a sphere GameObject for each LIDAR data point. This produces wonderful 3D images, and as the user moves through the point cloud each element is rendered as a beautifully shaded sphere. Unfortunately, because each sphere translates into many points of a 3D mesh, and because we have several million LIDAR data points, that’s a huge amount of work for the computer to get through. The end result is a very slow frame rate which isn’t suitable for real time. For video generation, I configured Unity to generate frames offline, spaced 1/24th of a second apart in game time. The result is a series of images that can be stitched together to make a fluid video sequence.
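
One way of doing that offline rendering – a sketch rather than the exact setup used here – is to fix Unity’s capture frame rate, which decouples game time from wall-clock time, and save a screenshot every frame:

using UnityEngine;

// Renders exactly 24 frames per second of *game* time, however long each frame
// actually takes to draw, and writes each one out as a numbered PNG.
public class OfflineFrameCapture : MonoBehaviour
{
    public string outputFolder = "Frames";   // assumed output location, relative to the project
    private int frameIndex;

    void Start()
    {
        Time.captureFramerate = 24;           // each Update() advances game time by 1/24 s
        System.IO.Directory.CreateDirectory(outputFolder);
    }

    void Update()
    {
        ScreenCapture.CaptureScreenshot($"{outputFolder}/frame_{frameIndex:D5}.png");
        frameIndex++;
    }
}

The numbered frames can then be stitched into a video with any external encoding tool.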

I thought it would be fun to view the LIDAR world through the Oculus Rift headset, but here we require very high frame rates, so offline rendering isn’t going to work. Rather than plotting each LIDAR point as a GameObject, I used a series of LIDAR points (about 60k of them) to define a single Mesh making up one GameObject. The GameObject then takes the shape defined by the 60k set of scanned LIDAR points. The GameObject’s Mesh requires a custom Shader to render its surface as transparent and each mesh vertex as a flat 2D disc. This reduces the number of GameObjects by a factor of 60k, with a massive drop in CPU workload. The total number of GameObjects is then the number of LIDAR data points divided by 60k. The downside is that we lose the shading on each LIDAR data point. From a distance that still looks great, but if the user moves close to a LIDAR point the image is not quite so good. The advantage is a frame rate fast enough for virtual reality.
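
Below is a minimal sketch of how such a chunked point mesh could be built; the 60k chunk size comes from the article (and presumably reflects Unity’s default 65,535-vertex limit per mesh), while the material and its point shader are placeholders for whatever is actually used.

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

public static class PointCloudMeshBuilder
{
    // Build one GameObject whose Mesh holds a chunk of LIDAR points rendered as point primitives.
    public static GameObject BuildChunk(List<Vector3> points, Material pointMaterial)
    {
        // 32-bit indices are not strictly needed for 60k points, but they keep larger chunks safe.
        var mesh = new Mesh { indexFormat = IndexFormat.UInt32 };
        mesh.SetVertices(points);

        // Index every vertex once and ask Unity to draw them as points rather than triangles;
        // the material's shader then decides how each point appears (e.g. a flat 2D disc).
        var indices = new int[points.Count];
        for (int i = 0; i < indices.Length; i++) indices[i] = i;
        mesh.SetIndices(indices, MeshTopology.Points, 0);

        var chunk = new GameObject("PointCloudChunk");
        chunk.AddComponent<MeshFilter>().mesh = mesh;
        chunk.AddComponent<MeshRenderer>().material = pointMaterial;
        return chunk;
    }
}

Splitting the full scan into successive 60k-point lists and calling BuildChunk for each gives the handful of GameObjects described above.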

As a final note, it is quite a surreal experience to scan an area and then view it in virtual reality through the Oculus Rift headset. It is a shame that the reader can only see the 2D video renders. The best way I can describe it is that it is analogous to stepping into the Matrix to visit Morpheus and Neo!
