Signal Processing


Our signal processing service produces results that operate in the real world, with the performance and reliability your business requires.

We understand your challenges and develop high-quality solutions through modelling, mathematics, complex DSP algorithms, and hardware and software engineering.

Peter Debenham

Senior Consultant, Signal Processing

“When someone says ‘signal processing’ many people instantly think of complex mathematics or hard to program computer systems and then run a mile. It is better to think of signal processing as being the enabling technology that allows us to interpret, change and generate information. Signal processing allows us to understand the output of sensors and receivers, to extract and derive meaning from what would otherwise just be a mess of data to give us useful outputs. It drives our modern world.”


A few of the skills we employ

  • Mathematical analysis
  • VHDL/FPGA design, interfacing, whole process design chain
  • Xilinx certified: Vivado, HLS and ISE
  • Mathematical modelling
  • Compressed sensing
  • Real-time analysis and processing
  • Real-time radar signal processing: FMCW, Doppler, CFAR, displays
  • High performance signal processing
  • Low size, weight, power and cost solutions
  • GPU processing and supporting frameworks such as CUDA®, OpenACC and OpenCL
  • Logic-based signal processing algorithms
  • Radio, 3G and base-station signal detection and tracking using linear and non-linear techniques such as parallel detectors and channel-fading models

High Sensitivity Radio Receiver

Case Study: Secure Communications

Plextek has a history of producing effective solutions to radio problems. In this project, our client approached us with a need to offer a new solution to address recent technology changes in their marketplace. This required a high-sensitivity radio receiver capable of detecting and tracking signals from a variety of different sources, including 3G mobiles and base stations.

Conventional radio receivers are usually a compromise between sensitivity, latency and power consumption. In this instance, the receiver was not required to demodulate the signal; it merely had to detect it in low-power conditions. Latency could therefore be sacrificed for sensitivity.
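One way to see this trade, as an illustrative sketch rather than a description of the actual receiver: coherently integrating N samples of a stationary signal in white noise improves SNR by 10·log10(N) dB, so each extra 10 dB of sensitivity costs roughly ten times the observation time (latency).

```python
import math

def integration_gain_db(n_samples: int) -> float:
    """SNR improvement (dB) from coherently integrating
    n_samples of a stationary signal in white noise."""
    return 10.0 * math.log10(n_samples)

def samples_for_gain(target_db: float) -> int:
    """Minimum number of coherently integrated samples needed
    to achieve target_db of SNR improvement."""
    return math.ceil(10 ** (target_db / 10.0))

# Trading latency for sensitivity: a 10 dB gain needs 10x the
# observation time, and 20 dB needs 100x.
```

Here the 10·log10(N) coherent-integration rule is a textbook idealisation; a real receiver's gain is limited by frequency error, fading and signal stationarity.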

Initial detection of a signal in low-power conditions invariably requires a combination of linear and non-linear detection techniques. The performance of the linear component determines the signal-to-noise ratio (SNR) at the discriminator’s input; the discriminator’s job is to decide whether the input signal is of interest. A non-linear component is then used to ‘tame’ the noise statistics, and can either reduce the false-alarm rate or allow the discriminator to use a lower threshold and so increase sensitivity.
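As a minimal sketch of that two-stage structure (illustrative only, not the project’s design): a matched-filter correlation supplies the linear component, a square-law detector supplies the non-linear component, and a threshold plays the role of the discriminator.

```python
import math
import random

def detect(samples, template, threshold):
    """Two-stage detection sketch.

    Linear stage: correlate the input against a known template
    (a matched filter); its quality sets the SNR at the
    discriminator input.  Non-linear stage: a square-law detector
    'tames' the noise statistics before the threshold decision.
    """
    n = len(template)
    best = 0.0
    for i in range(len(samples) - n + 1):
        # Linear component: correlation with the known template.
        corr = sum(samples[i + k] * template[k] for k in range(n))
        # Non-linear component: square-law (magnitude squared).
        best = max(best, corr * corr)
    # Discriminator: lowering the threshold raises sensitivity
    # but also raises the false-alarm rate.
    return best > threshold

random.seed(0)
template = [math.sin(2 * math.pi * 0.1 * k) for k in range(32)]
noise = [random.gauss(0.0, 1.0) for _ in range(256)]
signal = list(noise)
for k in range(32):
    signal[100 + k] += 3.0 * template[k]  # signal buried in the noise
```

The template, amplitude and threshold here are arbitrary illustration values; a practical detector would set the threshold from measured noise statistics to hit a target false-alarm rate.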

The performance of the linear component has the greatest effect on sensitivity, but it is constrained by real-world effects such as frequency errors and fading. The initial design therefore allowed the balance between linear and non-linear components to be adjusted at runtime. It was built into an early prototype that the client was able to integrate with the rest of their system, enabling them to evaluate the performance and discuss results with key clients of their own.

This prototype leveraged COTS components, with candidate algorithms implemented in C. The prototype’s results were reviewed with the client before suitable design parameters were agreed, forming the basis of a number of further units that we designed and supplied.

The end result is a low size, weight and power handheld receiver that is low cost and easy to manufacture in high volumes. Our client requested that the receiver be powered by three AA alkaline batteries so that they could be easily replaced when the unit was deployed in the field. Key algorithms were also eventually implemented in a Field Programmable Gate Array (FPGA), because FPGAs offer a very power-efficient means of performing high-speed, complex signal processing tasks.

In this project, Plextek undertook all of the design work for this component of the client’s system, including the power supply management, the Bluetooth interface to the rest of the client’s system, and the mechanical and thermal design. Throughout, we ensured that the result would be a product that could be manufactured efficiently and cheaply. The client received the receiver positively: its signal processing algorithms can detect and track a 3G signal more than 10 dB below a standard radio receiver’s sensitivity level.

GPU Processing

Embedded GPUs

The stagnation of single-core central processing unit (CPU) clock speeds has brought parallel computing, multicore architectures, and hardware-acceleration techniques back to the forefront of computational science and engineering.

We’ve seen this become more apparent in recent years with the use of graphics cards, or more specifically GPUs (Graphics Processing Units), for high-speed general-purpose computing, an approach known as GPGPU (General-Purpose Computing on Graphics Processing Units).

At Plextek, we recognise this evolution of GPUs and their emerging parity with Field Programmable Gate Arrays (FPGAs) in both performance and power consumption when used as the primary processor for the many signal- and image-processing applications whose elements can be processed in parallel.

Emerging technologies increasingly require substantial processing power, introducing many situations where low size, weight, power and cost (SWAP-C) solutions are essential for rapid prototyping, production and release to market.

Traditionally, this meant either low computing power (a single embedded microprocessor or Digital Signal Processor) or a bespoke FPGA solution with a long, high-cost development process. GPGPU offers a cost-efficient alternative, with processing power and power consumption similar to those of an FPGA but a quicker, cheaper development path.

One particular advantage is that where a CPU consists of a few cores optimised for sequential serial processing, a GPU is optimised for parallel processing: it consists of thousands of smaller cores designed to handle many tasks simultaneously. This yields significant speed-ups at similar cost and power consumption.

Our signal processing team can leverage the large number of processing elements within a GPU through programming frameworks such as CUDA®, OpenACC and OpenCL. These parallel platforms and application programming interface (API) models allow us to work in high-level programming languages to develop flexible and accessible signal and data processing algorithms. Being able to quickly create a program, run and profile it on a GPU, test the product and review the results, all within a short period, enables rapid entry to market.
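The data-parallel pattern these frameworks exploit can be illustrated schematically in plain Python: the same small ‘kernel’ function is applied independently to every element, so the work maps naturally onto thousands of GPU cores. (Python threads stand in for GPU cores here purely for illustration; real GPGPU code would express this as a CUDA or OpenCL kernel.)

```python
from concurrent.futures import ThreadPoolExecutor

def kernel(sample: float) -> float:
    """Per-element 'kernel': the same operation applied to every
    sample independently -- the pattern GPUs accelerate."""
    return sample * sample  # e.g. a square-law power estimate

samples = [0.5 * k for k in range(8)]

# Serial version: one core walks the data in sequence.
serial = [kernel(s) for s in samples]

# Data-parallel version: each element can be handled by a
# different worker, as on a GPU where each core takes one element.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(kernel, samples))

assert serial == parallel  # same result, independent of execution order
```

Because each element is independent, the result is identical however the work is scheduled, which is exactly what makes such algorithms safe to parallelise.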

We understand that when evaluating what is important for your design, CPUs, GPUs and FPGAs each have their own trade-offs regarding processing capability, power efficiency, BOM cost and other parameters such as latency, development effort, flexibility and interfaces. Our multidisciplinary team are dedicated to providing the solution that best suits your requirements and specifications.

Get In Touch

Let us know what’s on your mind and someone will reply as soon as possible.