Correlating to good effect

by Robert Fifield, RF Engines Ltd., TechOnline India - March 24, 2011

Correlation is a mathematical way to carry out pattern recognition, returning a value proportional to how good the match is. Taking lottery numbers as an example, a perfect correlation could change your life.

After reading this sentence, your brain will already have completed a correlation task far more complex than most applications require.  In short, correlation is a mathematical way to carry out pattern recognition, returning a value proportional to how good the match is.  Taking lottery numbers as an example, a perfect correlation could change your life.  Another example is being able to detect your name being said in a noisy room – the ‘Cocktail Party Effect’.

Apart from our own in-built audio-visual recognition units, correlators are a surprisingly common part of everyday life.  They are used in wireless systems such as mobile phones, WiFi, digital TV and remote controls, where a receiving device has to lock onto and interpret information that is being transmitted.  They are also used to aid high-speed communication between devices even though physically they may only be a couple of centimetres apart.  So how can correlators help you?

First, how does a correlator work in practice?

Going back to the Cocktail Party Effect, somewhere in the brain your name is stored and any incoming sounds are run past this template. We can imagine that for every letter that matches, a point is awarded. In the examples below, an incoming word is moved past our template and, as expected, the maximum score is obtained when the two words are aligned. In maths, this process of matching two patterns as a function of time is called cross-correlation.
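The sliding-match idea can be sketched in a few lines of Python (a minimal sketch; the function name and the padded example stream are illustrative, not from the article):

```python
# Slide an input stream past a template, awarding one point per matching
# letter at each alignment offset.
def slide_scores(template, stream):
    n = len(template)
    return [
        sum(1 for a, b in zip(template, stream[i:i + n]) if a == b)
        for i in range(len(stream) - n + 1)
    ]

scores = slide_scores("BOB", "XXBOBXX")
# the score peaks at the offset where the two words align
```

As expected, the maximum score (3, one point per letter) appears at the offset where "BOB" lines up with the template.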



In practice, the operation of matching each letter in the above example could be achieved by assigning values to the letters and using multipliers at each letter position. The final score would be the summation of all of these partial contributions.  

For FPGA implementations, multiplication can rapidly consume valuable hardware resources.  Fortunately, the multiplications can usually be reduced to additions because the template word is very often a simple sequence of ±1.  The summation can also quickly drain resources; however, using a technique called sub-expression sharing, we can make savings by taking advantage of any repeated bit patterns.
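For a ±1 template, each multiply-accumulate collapses to an add or a subtract. A minimal sketch of the idea (the names and sample values are illustrative):

```python
def corr_pm1(taps, samples):
    # taps are +1/-1, so each "multiply" is really an add or a subtract
    return sum(s if t > 0 else -s for t, s in zip(taps, samples))

# template +1, -1, +1, +1 against samples 3, 1, 4, 2:
# 3 - 1 + 4 + 2 = 8, and no multiplier is needed anywhere
result = corr_pm1([+1, -1, +1, +1], [3, 1, 4, 2])
```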

Canonic Signed Digit (CSD) arithmetic is another good optimisation tool when fixed value multiplications are required, because it uses the digits +1, 0 and –1.  As an example, rather than binary encoding 15 as “01111” (8+4+2+1), we can use “1000-” (where the final “-” is a −1 digit, giving 16−1) and save arithmetic operations.
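The saving is easy to check in code: a multiply-by-15 built from shifts needs three additions in plain binary but only one subtraction in CSD form (a sketch of the arithmetic, not FPGA code):

```python
def mul15_binary(x):
    # "01111": 8x + 4x + 2x + x -- three additions
    return (x << 3) + (x << 2) + (x << 1) + x

def mul15_csd(x):
    # CSD "1000-": 16x - x -- a single subtraction
    return (x << 4) - x

assert mul15_binary(7) == mul15_csd(7) == 105
```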

Magnitude matters

Going back to our “BOB” example, there is a potential pitfall.  Imagine that instead of hearing ‘BOB’ spoken quietly across the room, somebody standing right next to you shouted ‘BOO’ ten times as loud.  Running this large input past our template produces a bigger output result, as shown below:



The largest result of 20 is much bigger than 3 from the previous example, even though the word ‘BOO’ doesn’t completely match our template.  Somehow we need to make our result independent of the input amplitude.  We can calculate the largest possible output value by pretending that our loud input word is ‘BOB’.  We can then normalise our result by dividing by this maximum value to produce a Normalised Cross Correlation (NCC) ranging from 0 (no correlation) to 1 (perfect correlation).  For our ‘BOO’ example, the maximum possible value is 30, therefore the NCC is 20/30 or 0.6667.  In the previous ‘BOB’ example the resulting NCC is 3/3 or 1.0, which as expected is a perfect correlation.
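The normalisation step can be sketched as follows (the scoring function and the way amplitude is handled are assumptions chosen to mirror the worked numbers above):

```python
def score(template, window, amplitude):
    # award the input amplitude for every matching letter position
    return sum(amplitude for a, b in zip(template, window) if a == b)

def ncc(template, window, amplitude):
    # largest possible output: pretend the input word matched perfectly
    best = score(template, template, amplitude)
    return score(template, window, amplitude) / best

# quiet 'BOB': 3/3 = 1.0
# shouted 'BOO' at ten times the level: 20/30 = 0.667
```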

You spin me right round?

Communication signals often have imperfections such as frequency offsets, which cause the input signal to spiral and can play havoc with our correlator.  In an extreme case, the input signal rotates through 360 degrees over the length of the correlation word.  This results in a very poor correlation because the summation contains positive and negative components that cancel each other out.  To minimise degradation due to frequency offsets, we can split our correlation word into multiple sections.  Using our extreme case (360 degree rotation) but now with ten small correlation sections, the signal will only spiral 36 degrees per section.  If we take the magnitude of the result from each section and sum up the values, the unwanted cancellation effects are minimised.  Beware, however: this solution has the disadvantage of reducing our detection capability because, by splitting up the correlation word, we have moved from coherent (phase sensitive) addition to non-coherent detection, which relies only upon magnitude.
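The cancellation, and the sectioned fix, can be demonstrated numerically (a minimal sketch; the 40-sample word, unit-valued taps and ten sections are illustrative assumptions):

```python
import cmath

def coherent_mag(taps, x):
    # magnitude of a coherent (phase-sensitive) correlation
    return abs(sum(t.conjugate() * s for t, s in zip(taps, x)))

def sectioned_mag(taps, x, sections):
    # correlate section by section, then sum the per-section magnitudes
    n = len(taps) // sections
    return sum(
        coherent_mag(taps[i * n:(i + 1) * n], x[i * n:(i + 1) * n])
        for i in range(sections)
    )

N = 40
taps = [1 + 0j] * N
# extreme case: the input spirals through a full 360 degrees over the word
x = [cmath.exp(2j * cmath.pi * k / N) for k in range(N)]

full = coherent_mag(taps, x)        # near zero -- the components cancel
split = sectioned_mag(taps, x, 10)  # close to the ideal 40: only 36 deg/section
```

Running this, the full-length correlation collapses to (numerically) zero while the ten-section version recovers most of the ideal peak of 40.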

You may need one of these . . .

A correlator is a processing engine that requires other blocks to support it and help process the resulting outputs. Typical supporting blocks can include:

* Combiner. We have seen that, to deal with frequency offsets, correlations can be split into multiple sections.  A combiner will renormalise the results of each section back into a single output value between 0 and 1.0.  If the sections are of unequal length, a combiner may give more weight to a larger section because its result is more significant.

* Detector. A free running correlator can generate a lot of data.  Generally a detector will scan through a window or section of the data and find the largest correlation peak that exceeds a threshold.  The final result and any associated values such as the time stamp when the peak occurred are returned.

* Delay. Some scenarios require the detection of two correlation words that are separated by a significant time delay.  In this case, a delay block is required to either delay the input data or the correlation results so that the combined result of the two words can be calculated.

* Automatic gain control (AGC). Real signals often have a large operating range (16 bits is typical), however the desired output, e.g. to a demodulating block, typically requires only 6-8 significant bits.  To optimise a design for both speed and area, it is beneficial to reduce the word width of the signal as much as possible.  An AGC will attempt to scale the signal to the required bit width.
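As one illustration, the detector block described above might be sketched like this (a hypothetical, minimal version; a real detector would scan a streaming window rather than a list):

```python
def detect(ncc_values, threshold):
    # scan a window of correlator outputs; return the largest peak that
    # exceeds the threshold together with its time stamp (sample index)
    peak, when = 0.0, None
    for t, v in enumerate(ncc_values):
        if v > threshold and v > peak:
            peak, when = v, t
    return peak, when

detect([0.1, 0.3, 0.95, 0.4], 0.8)  # peak of 0.95 at index 2
detect([0.1, 0.3], 0.8)             # nothing exceeds the threshold
```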

Reduce, Reuse, Recycle

In common with many digital signal processing applications, you can solve the most straightforward tasks using free intellectual property (IP) if your application is not too demanding and available resources are not restrictive.  Unfortunately, most real-world applications are less straightforward, and need to be solved within budgets of time, money and resources.

At RF Engines, we have developed a library of innovative solutions and techniques to reduce the processing burden, optimise area, speed and power, and re-use IP to provide fast time-to-market solutions that work first time.  In the case of correlators, techniques such as sub-expression sharing and frequency-domain fast convolution can save over 50 percent of hardware resources.

Using a look-up table in place of hard maths can provide savings, or allow the flexibility to use memory rather than logic resources.  Converting signals into a phase/magnitude or logarithmic format can also be beneficial, especially when a series of multiplications or divisions are required.
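A toy example of the look-up-table idea: precompute logarithms so that a multiplication becomes an addition of table entries (the 8-bit table size and the rounding are illustrative assumptions; in hardware the final exponentiation would itself be a table):

```python
import math

# log2 of every 8-bit value; entry 0 is a placeholder and unused in practice
LOG2 = [0.0] + [math.log2(v) for v in range(1, 256)]

def lut_mul(a, b):
    # multiply two 8-bit values via log-domain addition;
    # the 2** here stands in for a second, inverse look-up table
    return round(2 ** (LOG2[a] + LOG2[b]))

lut_mul(12, 20)  # 240, with no multiplier in the data path
```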

Powerful tools

Correlators are an amazingly useful item to have in your toolbox. The trick is to make sure that you are using the right tool for the job, in the most power- and size-efficient manner.

Hopefully this article has provided a basic overview of correlation, highlighting some potential hazards, solutions and other associated blocks that are often required.  There is a wealth of further reading available on the Internet, and as with most topics, the further you delve into a subject, the more detail you find that needs to be addressed.  However, as we all know, if it was easy to do, then everyone would be doing it!  RFEL are happy to be consulted on these issues and provide solutions as appropriate.

About the author:


Robert Fifield, MSc CEng MIET is a Senior Digital Systems Design Engineer at RF Engines Ltd.  He studied at the University of Manchester Institute of Science and Technology (UMIST), where he received a BSc in Electrical and Electronic Engineering and an MSc in Instrumentation and Analytical Science.

He has worked in wireless communications as a Senior Research Scientist at Philips Research Labs (1995-2005), and as a Senior Scientist at NXP Semiconductors UK (2006-2008), before joining RF Engines.

An active member of ETSI RES10 and BRAN standardisation bodies, he worked on prototype OFDM demonstration systems, and has filed 15 wireless system related patents.

As digital processing speeds increased, he was involved with the development of early Software Defined Radio (SDR) architectures, using digital techniques to remove analogue functionality from systems such as GSM, CDMA2000, UMTS, 802.11a/g/n-20/n-40, GPS and Bluetooth.
