One of the biggest surprises in 5G standardization so far has been the acceptance of polar codes as an official channel coding technology. Such decisions are, of course, complex ones that are often as much about political persuasion as technical merit. Regardless, the achievement of this relatively young technology is remarkable. Only a short time ago, the popular belief was that turbo codes would never be eclipsed. So what makes polar codes different, and how do they work?
The basics of channel coding
Polar codes are a channel coding technology, and all channel coding technologies work in essentially the same way. Communication links are susceptible to errors caused by random noise, interference, device impairments, and the like, which corrupt the original data stream at the receiving end. Channel coding applies a set of algorithmic operations to the original data stream at the transmitter, and another set of operations to the received data stream at the receiver, to correct these errors. In channel coding terminology, the operations at the transmitter and receiver are denoted encoding and decoding, respectively.

The goal of channel coding research can be stated quite simply: develop high-performance channel codes that mitigate the effect of errors in a communication link (bit-error-rate is the common performance measure here). The real challenge, however, is doing this with low enough complexity to allow practical implementation in the silicon technology of the day. The complexity of a code determines everything: how much power it consumes, how much memory it needs, how much computation it requires, and how much latency it incurs. These factors ultimately determine whether a code is suitable for any particular use case.
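The encode/corrupt/decode pipeline described above can be illustrated with the simplest channel code of all, a repetition code over a binary symmetric channel. This is only a toy sketch, not a polar code (whose encoding and decoding are far more involved), but it shows the general pattern: encoding adds redundancy at the transmitter, the channel flips some bits at random, and decoding exploits the redundancy to recover the original stream. All function names and parameters here are illustrative.

```python
import random

def encode(bits, r=3):
    """Repetition encoding: transmit each data bit r times."""
    return [b for b in bits for _ in range(r)]

def bsc(codeword, p, rng):
    """Binary symmetric channel: flip each bit with probability p."""
    return [b ^ (rng.random() < p) for b in codeword]

def decode(received, r=3):
    """Majority-vote decoding of each group of r received bits."""
    return [int(sum(received[i:i + r]) > r // 2)
            for i in range(0, len(received), r)]

rng = random.Random(0)
data = [rng.randint(0, 1) for _ in range(10_000)]

# Uncoded transmission: every channel flip becomes a data error.
raw = bsc(data, p=0.05, rng=rng)
raw_ber = sum(a != b for a, b in zip(data, raw)) / len(data)

# Coded transmission: the decoder corrects isolated flips.
decoded = decode(bsc(encode(data), p=0.05, rng=rng))
coded_ber = sum(a != b for a, b in zip(data, decoded)) / len(data)

print(f"raw BER:   {raw_ber:.4f}")
print(f"coded BER: {coded_ber:.4f}")
```

The trade-off mentioned above is visible even here: the repetition code lowers the bit-error-rate, but at the cost of tripling the transmitted bits. Practical codes such as turbo, LDPC, and polar codes achieve far better error correction per unit of redundancy, at the price of much more complex decoders.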