Could Radar Be a More Cost-Effective Security Screening Alternative to X-Rays?

By: Damien Clarke
Lead Consultant

10th October 2019

5 minute read

A key task in the security market is the detection of concealed threats such as guns, knives and explosives. While explosives can be detected by their chemical constituents, the other threats are defined by their shape. A threat detection system must, therefore, be able to produce an image of an object behind an opaque barrier.

X-rays are probably the best-known technology for achieving this, and they are widely used for both security and medical applications. However, while they produce high-quality images, X-ray machines are not cheap, and there are health concerns over their frequent use on or near people.

An alternative to X-rays, often used at airports for full-body screening, is microwave imaging. These systems allow the detection of concealed objects through clothes, though the spatial resolution is relatively low and objects are often indistinguishable (hence the requirement for a manual search). The ability to detect and identify concealed items can, therefore, be improved by using a high-frequency mm-wave (60 GHz) system.

Plextek has investigated this approach using a Texas Instruments IWR6843 60 – 64 GHz mm-wave radar, a relatively inexpensive, commercially available component that could be customised to suit many applications. However, a single radar measurement contains only range information, not angle information. It is, therefore, necessary to collect multiple measurements of an object from different viewpoints to form an image. This is achieved with a custom 2D translation stage that allows the radar to be moved automatically to any point in space relative to the target object. In this example, radar data was collected across a regular 2D grid of locations with millimetre spacing between measurements.

This large set of radar measurements can then be processed to form an image. This is done by analysing the small variations in the signal caused by the change in viewpoint as the object is measured from different positions, extending the set of range-only measurements to include azimuth and elevation as well. In effect, this process produces a 3D cube of intensity values defining the radar reflectivity at each point in space. A slice through this cube at a range corresponding to the position of the box gives an image of an object that sits behind an (optically) opaque surface.
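
For readers who want a feel for the maths, the standard formulation of this kind of image formation is backprojection; what follows is a sketch of the general principle rather than the exact algorithm used in this work. For each voxel at position r, the echoes s_n measured at the antenna positions p_n are summed after compensating for the two-way phase delay:

    I(r) = | Σ_n s_n(R_n) · exp(j · 4π · R_n / λ) |,   where R_n = |r − p_n|

Evaluating I(r) over a grid of voxels gives the 3D cube of reflectivity values, and taking a slice at the range of the box gives the 2D image described above.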

In this case, a cardboard box containing a fake gun was used as the target object. A visual inspection of the box would clearly not reveal its contents; however, 60 GHz mm-waves can penetrate cardboard, so an image of the concealed object can be produced. The resulting image of the contents of the box clearly shows the shape of the concealed gun.

This example simulates the detection of a gun being sent through the post, and automatic image analysis algorithms would presumably be capable of flagging such a box for further inspection. This would remove the need for human involvement in the screening of each parcel.

A more mature sensor system using this approach could be produced that did not require the manual scanning process but instead used an array of antennas. It would also be possible to produce similar custom systems optimised for different target sets and applications.

Acknowledgement

This work was performed by Ivan Saunders during his time as a Summer student at Plextek before completing his MPhys at the University of Exeter.

Single Chip MM-Wave Radar

By: Damien Clarke
Lead Consultant

25th April 2019

4 minute read

Recent advances in radar technology have led to the production of a range of inexpensive, highly integrated, single-chip millimetre-wave radar sensors by Texas Instruments. These chips implement a Frequency Modulated Continuous-Wave (FMCW) radar operating at either 76 – 81 GHz or 60 – 64 GHz. This provides sufficient bandwidth to produce a range resolution of a few centimetres while also measuring object velocities via Doppler shift. In addition, through the use of multiple transmitters and receivers, Multiple-Input Multiple-Output (MIMO) techniques can be used to measure the angular position of an object. With a suitable 2D antenna array it is possible to measure both azimuth and elevation angles simultaneously.
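
As a quick sanity check on that claim, the range resolution of an FMCW radar is set by its sweep bandwidth. Assuming the full 4 GHz sweep available on the 60 – 64 GHz parts:

    ΔR = c / (2B) = (3 × 10⁸ m/s) / (2 × 4 × 10⁹ Hz) ≈ 3.7 cm

which is indeed a few centimetres.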

The processing power necessary to calculate the range, velocity and angles of multiple targets is also present within the chips. In the IWR6843, for example, this is provided by a C674x DSP, an FFT hardware accelerator and an ARM Cortex-R4F microcontroller. This also allows object tracking to be performed within the chip. A single inexpensive chip can therefore continuously output a point cloud (object ID, range, azimuth, elevation and radial velocity) for multiple unique objects in the scene.

A common application for such radar sensors is the detection of moving vehicles at a distance. The video below shows an example of two cars driving towards and then away from a radar placed above the road. Raw data is extracted from the chip and processed to emulate what would normally occur within the chip. The left-hand chart shows a Range-Doppler map in which the two vehicles are clearly detected at all ranges; all static objects have been removed from this image to more clearly reveal moving objects. The central plot shows those Range-Doppler cells which are determined to contain non-background objects (i.e. the cars). The right-hand plot then calculates the 2D position (in metres) of the two cars. Note that a car produces multiple radar echoes (e.g. radiator, wing mirror, tyres, number plate) at different ranges, and therefore a cluster of detected points is produced for each car.
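
For context, a Range-Doppler map of this kind is conventionally produced by two FFTs (the exact processing chain inside the chip may differ): an FFT over the samples within each chirp resolves range, and a second FFT over successive chirps resolves Doppler frequency. The radial velocity in each cell then follows from

    v = λ · f_D / 2

so at 60 GHz (λ ≈ 5 mm) a Doppler shift of 400 Hz corresponds to a radial velocity of about 1 m/s.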

The ability of such a sensor to directly output a processed point cloud enables a wide range of possible applications at a low cost. These include the following:

    • Advanced driver-assistance systems (ADAS)
    • Autonomous ground vehicles
    • Unmanned Air Vehicles (UAVs)
    • Traffic monitoring
    • Pedestrian and people counting
    • Intruder detection
    • Vital signs detection
    • Gesture recognition
    • Fluid level sensing

Creating a new product using a Texas Instruments mm-wave radar chip requires development in several areas. Firstly, as with all FMCW radars, it is necessary to understand which radar configuration will achieve the desired output parameters, i.e. range resolution, maximum range, velocity resolution, etc. It is also necessary to modify the processing chain implemented on the chip to optimise performance for the application. Hardware changes will also be required; in particular, the design and manufacture of a suitable mm-wave antenna array is of key importance. This has several effects, but most commonly it is used to increase the maximum detection range. Finally, an electronics design is needed for the additional components that must be integrated with the radar chip to create a final product.
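
As a rough guide to the first of those steps, the standard FMCW design equations capture the main trade-offs (a sketch only; the TI tooling wraps these up in its configuration utilities):

    range resolution:            ΔR = c / (2B)                    for sweep bandwidth B
    maximum range:               R_max ≈ f_IF,max · c / (2S)      for chirp slope S and maximum IF bandwidth f_IF,max
    velocity resolution:         Δv = λ / (2 · N · T_c)           for a frame of N chirps of duration T_c
    maximum unambiguous velocity: v_max = λ / (4 · T_c)

Tightening one parameter (e.g. a finer velocity resolution) pushes on the others (a longer frame), which is why the configuration has to be chosen per application.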

Making a LiDAR – Part 5

Unity Point Cloud Rendering

By: David
Principal Consultant, Data Exploration

12th April 2019

4 minute read

Now we’ve got our LIDAR finished and our first scan completed, we are left with an SD card with some data on it. The data is a list of several million points (called a point cloud) represented in spherical polar coordinates. Each point represents a target’s distance from the centre of the LIDAR scan. In its own right this isn’t very exciting, so we need to find a way to visualise the data. Quite a few people have contacted me to ask how I did this, so unlike the previous “philosophical” LIDAR blogs, this one will go into a little more technical detail. So, if you’re not interested in driving 3D rendering engines, skip the text and go straight to the video!
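
As a concrete example of the very first step, each point has to be converted from its spherical polar representation into the Cartesian coordinates a rendering engine expects. Here is a minimal sketch in Unity-flavoured C#; the names and the exact angle convention are illustrative, as the real scan data format isn’t described here.

    using UnityEngine;

    public static class LidarPoints
    {
        // Convert one LIDAR sample (range in metres, azimuth and elevation in radians)
        // into a Unity world-space position. Unity uses a left-handed, Y-up coordinate system.
        public static Vector3 ToCartesian(float range, float azimuth, float elevation)
        {
            float x = range * Mathf.Cos(elevation) * Mathf.Sin(azimuth);
            float y = range * Mathf.Sin(elevation);
            float z = range * Mathf.Cos(elevation) * Mathf.Cos(azimuth);
            return new Vector3(x, y, z);
        }
    }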

I’ve chosen to use the Unity game engine, a software tool targeted at creating 3D video games. It handles the maths and graphics of 3D rendering, provides a user interface for configuring the 3D world, and uses the C# programming language for the developer to add “game logic”. If you know Unity, this blog should give you enough information to render a point cloud.

An object in the Unity world is called a GameObject, and each GameObject represents a “thing” we can see in the 3D world. We also need to create a camera, which gives the user their view of the 3D world. It’s straightforward enough to write some C# code that moves and rotates the camera in response to mouse and keyboard input. If we fill the world with GameObjects and move the camera through it, Unity takes care of the rest.
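
A minimal sketch of that camera logic is shown below; the real script isn’t reproduced in this blog, so treat the details (speeds, axis names) as illustrative. Attached to the camera GameObject, Unity calls Update() once per frame.

    using UnityEngine;

    public class FlyCamera : MonoBehaviour
    {
        public float moveSpeed = 5f;   // metres per second
        public float lookSpeed = 2f;   // degrees per unit of mouse movement

        void Update()
        {
            // Rotate the view with the mouse.
            float yaw   = Input.GetAxis("Mouse X") * lookSpeed;
            float pitch = -Input.GetAxis("Mouse Y") * lookSpeed;
            transform.Rotate(0f, yaw, 0f, Space.World);
            transform.Rotate(pitch, 0f, 0f, Space.Self);

            // Move with the keyboard (WASD / arrow keys map to these axes by default).
            Vector3 move = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
            transform.Translate(move * moveSpeed * Time.deltaTime, Space.Self);
        }
    }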

A GameObject is made of a 3D mesh of points that defines its shape. The mesh can be anything from a complicated shape like a person to a simple geometric shape like a sphere. The developer needs to define a Material, which is rendered on the GameObject’s surface, and a Shader to determine how the Material surface responds to light.

The obvious way to render the LIDAR data is to create a sphere GameObject for each LIDAR data point. This produces wonderful 3D images, and as the user moves through the point cloud each element is rendered as a beautifully shaded sphere. Unfortunately, because each sphere translates into many points of a 3D Mesh, and because we have several million LIDAR data points, that’s a huge amount of work for the computer to get through. The end result is a frame rate far too slow for real time. For video generation, I configured Unity to render frames offline, 1/24th of a second apart in game time. The result is a series of images that can be stitched together to make a fluid video sequence.
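
For reference, the sphere-per-point approach amounts to little more than the following (illustrative only), which makes it obvious why millions of points overwhelm the engine: every point becomes a full GameObject carrying its own sphere mesh.

    using System.Collections.Generic;
    using UnityEngine;

    public class SpherePointCloud : MonoBehaviour
    {
        public List<Vector3> lidarPoints;   // filled elsewhere from the scan data

        void Start()
        {
            // One sphere GameObject per LIDAR point: beautifully shaded, but far too
            // many meshes for the engine to render at a real-time frame rate.
            foreach (Vector3 point in lidarPoints)
            {
                GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
                sphere.transform.position = point;
                sphere.transform.localScale = Vector3.one * 0.02f;   // 2 cm spheres
            }
        }
    }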

I thought it would be fun to view the LIDAR world through the Oculus Rift headset, but here we need very high frame rates, so offline rendering isn’t going to work. Rather than plotting each LIDAR point as a GameObject, I used a series of LIDAR points (about 60K of them) to define a single Mesh making up one GameObject. The GameObject then takes the shape defined by the 60K set of scanned LIDAR points. The GameObject’s Mesh requires a custom Shader to render its surface as transparent and each Mesh vertex as a flat 2D disc. This reduces the number of GameObjects by a factor of 60K, with a massive drop in CPU workload: the total number of GameObjects is the number of LIDAR data points divided by 60K. The downside is that we lose the shading on each LIDAR data point. From a distance that still looks great, but if the user moves close to a LIDAR point the image is not quite so good. The advantage is a frame rate fast enough for virtual reality.
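
Below is a sketch of how one 60K chunk can be packed into a single Mesh and rendered as points; the custom disc Shader itself is omitted, and the 60K figure presumably comes from Unity’s default 16-bit index limit of 65,535 vertices per Mesh.

    using System.Collections.Generic;
    using UnityEngine;

    [RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
    public class PointCloudChunk : MonoBehaviour
    {
        // Build this GameObject's Mesh from a chunk of up to ~60K LIDAR points.
        // MeshTopology.Points hands one vertex per sample to the Shader, which
        // then draws each vertex as a flat 2D disc.
        public void Build(List<Vector3> chunk)
        {
            var mesh = new Mesh();
            mesh.SetVertices(chunk);

            var indices = new int[chunk.Count];
            for (int i = 0; i < chunk.Count; i++) indices[i] = i;
            mesh.SetIndices(indices, MeshTopology.Points, 0);

            GetComponent<MeshFilter>().mesh = mesh;
        }
    }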

As a final note, it is quite a surreal experience to scan an area and then view it in virtual reality through the Oculus Rift headset. It is quite a shame that the reader can only see the 2D video renders. The best way I can describe it is that it’s analogous to stepping into the Matrix to visit Morpheus and Neo!

Making a LiDAR – Part 4

Electronics Prototyping, and getting a graduate job at Plextek

By: David
Principal Consultant, Data Exploration

11th April 2019

3 minute read

Since I first picked up a soldering iron, I’d say there have been two significant changes in electronics: the parts have got smaller, and my eyesight has got worse. With the advent of surface mount, I did fear we were entering an educational dark age; creating PCBs and soldering the parts had moved beyond the scope of the hobbyist. Luckily, I think all that’s changed, and there has never been a better time for both commercial prototyping and hobbyist experimentation.

As I described in the previous blog, I’m very much a fan of the STM32 platform, and ST Microelectronics have produced some terrific prototyping boards. In fact, the same is true for every major player in the microcontroller market. What all these boards have in common is low cost, and they bring the fine-pitch surface-mount packages out to user-friendly headers.

For around £10 I can visit RS Components and buy a very capable STM32 prototyping board with all the microcontroller features our LIDAR will need. With the addition of a few breakout boards, we can test and prototype all the electronics for our LIDAR without ever having to touch a soldering iron or make a PCB.

We do still have one problem: our initial prototype can end up a bit of a mess. All those prototyping and breakout boards can leave a “rat’s nest” of wires; it’s fragile, and it’s probably too big. Luckily, rapid, low-cost PCB production has also come a long way, and we’ll take advantage of this for our LIDAR electronics.

A quick visit to one of the Far Eastern PCB prototyping houses shows I can get 10 copies of a small custom two-layer PCB for $5 plus shipping. Pushing to four layers, it’s only $49 plus shipping. I really have no idea how they make it commercially viable! If you’re concerned about quality and security, a European PCB house isn’t that much more expensive. Of course, you still have to design and solder the PCB, but with a copy of Eagle, a visit to YouTube, a low-cost USB microscope and a rework gun, you’d be surprised how easy it is. Surface tension is your friend!

So what’s my message from this blog? Well, over the years I’ve become more and more involved with graduate recruitment, and it’s often a long and frustrating process. I’ve been very impressed by the extent of knowledge and understanding our young potential recruits have, but they are generally not so confident about demonstrating those abilities. So, if you’re keen on a career in embedded electronics, my challenge to you is to get yourself noticed. Buy yourself some prototyping boards, build some embedded projects, and look on the internet to find out how to do it. Bring them with you to your interview and show us what you’ve done. I promise that if you do, you will stand head and shoulders above the crowd.

Making a LiDAR – Part 3

Embedded Software and Platform Choice

By: David
Principal Consultant, Data Exploration

10th April 2019

3 minute read

Do you remember as a child counting how long it took for thunder to arrive after the lightning flash, and then working out how far away the storm was? Well, a LIDAR uses the same principle but with light rather than sound. I think that’s amazing. Imagine switching a torch on, pointing it at the wall in front of you, and using a stopwatch to time how long it takes for the light to be reflected back to you. That seems ludicrous, but it’s exactly what our laser range finder is doing. (Did you know that one light-nanosecond is approximately one foot in length? So I’m about 5.66 light-nanoseconds tall.)
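
To check that figure: in one nanosecond light travels c × 1 ns = 3 × 10⁸ m/s × 10⁻⁹ s ≈ 0.30 m, which is just under a foot, so 5.66 light-nanoseconds works out at roughly 1.7 m.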

Now, at the back of our laser range finder we have some wires that carry the measured distance data (I2C for the technically minded), so the first job of our software is to read that data and, just like a digital camera, write it to an SD card. We also need to worry about controlling the scanning stepper motors. If you’ve no experience of stepper motor control, it’s quite straightforward: each time you generate a pulse, the motor moves a tiny fraction; if you generate a continuous pulse train, the motor spins; and if you count the number of pulses, you know how far the motor has moved. It’s all very straightforward, and just the sort of task a microcontroller was made for.
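
To put a number on that, a typical hobby stepper (an assumption; the actual motors aren’t specified here) has a 1.8° step angle, i.e. 200 pulses per full revolution, so the scan angle is simply angle = pulses × 1.8°, and a half-turn of the scan head is exactly 100 pulses.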

However, before we get started we need to pick a microcontroller, and this is where it can get contentious. Whatever you choose, somebody else will tell you that you should have done it their way instead. In my opinion that’s frustrating, and they are usually wrong. I’ll defend that by saying only one thing matters: you need to pick a technical solution that’s within your capability to deliver to, and ideally exceed, the expectations of the people paying you. The great thing about engineering is that there will be many equally good ways of doing that, so you’re only wrong if your way isn’t one of them!

Personally, I’m not too keen on Arduino. It’s great for hobbyists and proofs of concept, but I do find its “simple educational” environment constraining. Likewise, the thought of days spent wading through thousands of pages of datasheets is equally unattractive. I want to write the embedded code to make the LIDAR work; I’m not interested in writing the code to make the microcontroller work. That’s why my own choice is the STM32 microcontroller family and the STM32CubeMX tool. It lets me “auto-generate” the framework that configures the microcontroller, I don’t have to read every minute detail in the datasheet, and it gives me a professional IDE development chain with full visibility of exactly what the hardware is doing should I need it.

To sum up, with an STM32 I can get the job done, and I can meet all expectations. But importantly, if a BeagleBone running a Python interpreter lets you get the job done, and leads your customers to success, then don’t let anyone tell you that you’re wrong. Of course, don’t be closed to new ideas either. Learning and exploration are what makes life interesting!
