What Is 5G and How Does It Work?

By: Daniel Tomlinson
Project Engineer

18th July 2019

5 minute read

As a society that is becoming increasingly dependent on data-driven applications, 5G promises better connectivity and faster speeds for our networked devices. However, whilst previous generations of mobile communications have been broadly similar to one another in terms of distribution and multiple-user access, 5G will be drastically different, making it a challenging system to implement. So, how does it work?

Initial Concept

Fig 1 – The 5G Triangle: Enhanced Mobile Broadband, Massive IoT and Low Latency

As with any concept, 5G was initially based on a very broad and ambiguous set of standards, which promised low latency, speeds in the region of gigabits per second and better connectivity. Whilst no intricacies of the system were known at the time, we knew that in order to achieve faster data rates and larger bandwidths we would have to move to higher frequencies – and this is where the problem occurs. Because of the severe atmospheric attenuation experienced by high-frequency signals, range and power become serious issues that our current systems aren't capable of handling.

Range & Power

A modern GSM tower features multiple cellular base stations that, together, are designed to transmit 360° horizontally and at a range in the order of tens of miles, depending on the terrain. However, if you consider that the power received from a cellular base station degrades with distance following the inverse-square law…

$$P_r \propto \frac{P_t}{d^2} \qquad (1)$$

…and that by factoring in frequency this effect worsens, as captured by the free-space path loss…

$$L_{fs} = \left(\frac{4\pi d f}{c}\right)^2 \qquad (2)$$

…it becomes obvious that transmitting over larger distances and at higher frequencies becomes rapidly inefficient. Therefore, a key part of the 5G overhaul will require thousands of miniature base stations to be strategically placed in dense urban environments in order to maximise capacity with minimal obstructions.
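To put rough numbers on this, here is a minimal Python sketch (illustrative figures only) that evaluates the free-space path loss of Equation 2 for a typical 4G carrier and a 28 GHz millimetre-wave carrier:

```python
import math

C = 3e8  # speed of light (m/s)

def free_space_path_loss_db(distance_m: float, frequency_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * frequency_hz / C)

# Compare a typical 4G carrier (1.8 GHz) with a 5G mmWave carrier (28 GHz)
for f_hz in (1.8e9, 28e9):
    for d_m in (100, 1000):
        loss = free_space_path_loss_db(d_m, f_hz)
        print(f"{f_hz / 1e9:5.1f} GHz @ {d_m:4d} m: {loss:6.1f} dB")
```

Moving from 1.8 GHz to 28 GHz adds roughly 24 dB of extra path loss at any given distance, which is why cell sizes have to shrink so dramatically.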

Directivity

Fig 2 – Radiation Pattern of an Isotropic Antenna versus an Antenna with Gain (Dipole)

One way to increase the range of a transceiver, whilst keeping the power output the same, is to incorporate gain into the antenna. This is achieved by focusing the transmitted power towards a particular point as opposed to equally in all directions (isotropic).

Figure 2 shows such a comparison, in which a dipole antenna's energy is focused in the directions of 0° and 180°. Equation 3 reflects this additional gain factor:

$$P_r = P_t G_t G_r \left(\frac{c}{4\pi d f}\right)^2 \qquad (3)$$
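As a rough sketch (free-space conditions and hypothetical gain figures assumed), Equation 3 in logarithmic form shows how much received power directivity buys back:

```python
import math

C = 3e8  # speed of light (m/s)

def received_power_dbm(tx_power_dbm: float, tx_gain_dbi: float,
                       rx_gain_dbi: float, distance_m: float,
                       frequency_hz: float) -> float:
    """Friis transmission equation (Equation 3) in dB form."""
    fspl_db = 20 * math.log10(4 * math.pi * distance_m * frequency_hz / C)
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db

# Hypothetical 28 GHz link at 200 m: isotropic vs. a 20 dBi directive antenna
print(received_power_dbm(30, 0, 0, 200, 28e9))   # ~ -77 dBm
print(received_power_dbm(30, 20, 0, 200, 28e9))  # ~ -57 dBm, 100x more power
```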

However, as the essence of a wireless handset is portability, it is likely to move around a lot with the user. Therefore, a high-gain 5G transmitter would still require a tracking system to ensure that its beam stays focused directly on the end user's handset.

User Tracking

One solution for tracking devices could be to employ a high-frequency transceiver with a phased-array antenna structure. This would act as a typical base station, capable of transmitting and receiving, but an array of hundreds of small-scale patch antennas (and some DSP magic) would make it capable of beamforming. This would allow the structure not only to transmit high-gain signals but also to steer the beam by changing the relative phase of each element's output, as sketched below.
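As an illustrative sketch (a uniform linear array is assumed here; real 5G panels are planar), the progressive phase shift that steers a beam towards a chosen angle can be computed as follows:

```python
import numpy as np

def steering_phases_deg(n_elements: int, spacing_wavelengths: float,
                        steer_angle_deg: float) -> np.ndarray:
    """Per-element phase shifts that point a uniform linear array's
    main lobe towards steer_angle_deg (0 degrees = broadside)."""
    n = np.arange(n_elements)
    # Progressive phase: -2*pi*(d/lambda)*n*sin(theta) for element n
    phases = -2 * np.pi * spacing_wavelengths * n * np.sin(np.radians(steer_angle_deg))
    return np.degrees(np.mod(phases, 2 * np.pi))

# Steer an 8-element, half-wavelength-spaced array 30 degrees off broadside
print(steering_phases_deg(8, 0.5, 30))  # each element lags its neighbour by 90 degrees
```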

However, as this is a technically complex system that has yet to be implemented on such a large scale, the technology is still in its infancy and is currently being trialled in select areas only. Considerable efforts will have to be made to ensure that such a transceiver could operate in a bustling environment where multipath and body-blocking would cause strong interference.

5G in 2019

3GPP (the 3rd Generation Partnership Project) is an organisation that was established in 1998 and helped to produce the original standards for 3G. It has since gone on to produce the specifications for 4G LTE and is currently working towards a 5G-ready system in 2020.

With some carriers having already launched 5G in parts of America this year, 2019 will welcome numerous 5G handsets from several of the flagship giants like Samsung, LG and Huawei, and even Xiaomi – a budget smartphone manufacturer.

As with previous generations though, only limited coverage will be available at first (and at a hefty premium), so in practice it will feel fairly similar to Wi-Fi hot-spotting. A lot of work is still required to overcome the issues discussed above.

Further Reading

Distance Finding in Mobile Autonomous Systems

By: Damien Clarke
Lead Consultant, Data Exploitation

7th November 2018

You might already be familiar with FiveAI, a driverless car startup based in Cambridge, and their recent headline-making work, but for those who aren't, allow me to bring you up to speed. FiveAI's vision is to bring a shared driverless taxi service to London by next year, and they have already started gathering data on London's streets with their iconic blue-branded cars.

A key component in the development of mobile autonomous systems is the ability to produce a 3D map of the local environment which can be used for route planning and collision avoidance (e.g. sense and avoid). There are various sensors which can be used to achieve this and each one has specific advantages and disadvantages.

Stereo Vision

The first approach (and one that many animals, including humans, use) is to combine images from a pair of cameras placed at slightly different positions to enable depth perception. This is achieved by determining the horizontal disparity between the same object in both cameras. Nearby objects produce a large disparity in position between the two cameras whereas far objects have a small disparity.
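As a quick sketch (the camera parameters here are hypothetical), depth follows from disparity via the standard pinhole stereo relation Z = f·B/d, where f is the focal length in pixels, B is the baseline between the cameras and d is the disparity in pixels:

```python
def depth_from_disparity(disparity_px: float, focal_length_px: float,
                         baseline_m: float) -> float:
    """Pinhole stereo model: a large disparity means a nearby object."""
    return focal_length_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline
print(depth_from_disparity(21.0, 700.0, 0.12))   # -> 4.0 m
print(depth_from_disparity(140.0, 700.0, 0.12))  # -> 0.6 m (closer, larger disparity)
```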

This technique can also be used with a single camera if it is moving as the video is effectively a series of images taken at different positions. This is known as Structure from Motion and is commonly used with airborne cameras, such as those used on small consumer drones.

The primary advantage of this technique is that cameras are small and inexpensive. At close range, good depth resolution can be achieved and fused with the image content itself to produce a 3D colour image. A large number of cameras with overlapping fields of view can potentially produce a 360° panoramic 3D map of the environment around a vehicle.

The main limitation of this approach is that it will only work when suitable images can be produced and therefore adverse environmental conditions (e.g. dust, fog, rain, etc.) will prevent the production of a 3D map. Operation at night time is potentially possible with illumination or the use of thermal imagers rather than standard cameras. Poor camera dynamic range can also be a problem as bright lights (e.g. headlights or the sun) will cause glare. In addition, the processing required to locate features within both images and match them is complex and adds computational burden when using this technique to produce a 3D map.

Lidar

An alternative optical approach to stereo vision is a scanning laser range finder, also known as lidar. This approach uses a laser to send a pulse towards a surface and a sensor to record how long it takes for the reflection to return. The measurement of the time of flight can then be used to determine the range. To produce a 3D map of a scene, this beam must then be scanned in azimuth and elevation. To reduce the amount of scanning, some lidar sensors use multiple beams at different elevation angles and then only scan in azimuth.
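The time-of-flight conversion itself is simple; a minimal sketch with illustrative numbers:

```python
C = 299_792_458  # speed of light in a vacuum (m/s)

def lidar_range_m(round_trip_time_s: float) -> float:
    """Range from pulse time of flight; halved because the pulse travels out and back."""
    return C * round_trip_time_s / 2

print(lidar_range_m(667e-9))  # a ~667 ns echo -> ~100 m
```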

Lidar has very good depth resolution and, due to the narrow beam, can also produce very good lateral resolution. In general, the technology for emitting and sensing light is entirely solid state; however, at present many lidar systems still scan the beam across the scene mechanically. Fully solid-state systems would be small and cheap, though this promise has not yet been fully realised in commercial lidar systems, which are often large and expensive.

As simple lidar sensors only record the time for the first reflection to return, a drawback of some lidar systems is that they will only detect the nearest object in a specific direction. This is problematic when the environment is dusty or foggy as the first reflection may not be from a solid object and the resulting 3D map will be degraded. More sophisticated (and costly) systems measure the entire reflection over time which then allows a full range profile to be measured through the obscurant. Direct sunlight can also produce problems as the large level of background illumination can make it difficult to detect weak reflections. Similarly, if a surface has low reflectivity (i.e. it is black) then it may not be detected by the lidar. This can be a problem for autonomous vehicles as black car surfaces will only be detected at a closer range than more reflective vehicles.

Radar

Radar is similar to lidar but uses microwaves rather than light (typically 24 or 77 GHz). Lidar was in fact inspired by radar (it was originally dubbed "laser radar") and only became possible once lasers were invented. The exact mechanism by which the distance is measured varies slightly between different radar systems; however, the concept is the same. A signal is emitted, the length of time it takes for a reflection to return is measured and this is then converted into a range profile. While panoramic mechanically scanned radars are available, it is more common to use an antenna array and calculate the angle of arrival of a reflection by the difference in signal across the array.

One advantage of radar is the ability to measure speed directly via Doppler shift without complex processing. Therefore, objects moving relative to a mainly static scene are generally easy to detect. Poor environmental conditions (e.g. fog, rain and snow) have little impact on the performance of the radar which provides a useful all-weather capability for autonomous vehicles. Single chip radars with integrated processing capabilities are also available for use as small and inexpensive sensor solutions.
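For a monostatic radar the Doppler relation is v = f_d·λ/2; a short sketch with assumed values:

```python
C = 3e8  # speed of light (m/s)

def radial_speed_ms(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Monostatic radar: v = f_d * lambda / 2 (the 2 accounts for the two-way path)."""
    wavelength_m = C / carrier_hz
    return doppler_shift_hz * wavelength_m / 2

# A 77 GHz automotive radar observing a ~5.1 kHz Doppler shift
print(radial_speed_ms(5130, 77e9))  # -> ~10 m/s (about 36 km/h)
```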

A disadvantage of radar is the limited lateral resolution. While the depth resolution can be good, the angular resolution is significantly lower than for optical sensors. However, this is partially mitigated if an object can be uniquely separated from other objects and clutter by its range or velocity value.

Ultrasonic

The final sensor used for range finding on autonomous vehicles is an ultrasonic sensor which emits high-frequency sounds beyond the range of human hearing. Bats are, of course, well-known users of this approach. Ultrasonic sensors are very similar to lidar sensors; however, as the speed of sound in air is vastly slower than the speed of light it is much easier to measure the time for a reflection to return from a surface.
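To see just how much easier, here is a back-of-the-envelope sketch comparing the round-trip timing precision needed for 1 cm of range resolution with light versus sound:

```python
SPEED_OF_LIGHT = 3e8    # m/s
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def timing_for_resolution_s(range_resolution_m: float, wave_speed_ms: float) -> float:
    """Round-trip timing precision required for a given range resolution."""
    return 2 * range_resolution_m / wave_speed_ms

print(timing_for_resolution_s(0.01, SPEED_OF_LIGHT))  # ~67 picoseconds (lidar)
print(timing_for_resolution_s(0.01, SPEED_OF_SOUND))  # ~58 microseconds (ultrasonic)
```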

Ultrasonic sensors work well regardless of light level or environmental conditions and are very small and inexpensive. This makes the technology ideal for ultra-short range collision avoidance sensors on small or slow moving vehicles which can be placed in many locations to provide wide area coverage.

The main disadvantage of ultrasonic sensors is their extremely short range as they can only produce distance measurements for surfaces up to a few metres away. For this reason, it is also uncommon for an ultrasonic sensor to be used to explicitly form a 3D map.

Data Fusion

In practice, to achieve a robust and effective sensor solution for autonomous vehicles it is necessary to combine different sensors and perform sensor fusion. As yet there is no standard sensor suite and research is still ongoing to determine the optimum combination with an acceptable performance across all weather conditions.

As an example, Tesla's latest models, which are claimed to be suitable for autonomous operation, have eight cameras (with varying fields of view) and twelve ultrasonic sensors to enable panoramic sensing, while a single forward-looking radar measures the range and speed of objects up to 160 m away.

The combination of cameras with radar is a common sensor choice as it provides good lateral and range resolution under various weather conditions for a relatively low price. It remains to be seen whether or not it is sufficient for safe autonomous operation without the addition of lidar.
