When we talk about network evolution, it is by no means all about 5G. Despite all the resources being poured into developing the technology for the next generation of mobile communications, new features are still being introduced to LTE-A that will continue to push performance boundaries up to and even beyond the launch of 5G services. Staying true to its full name (Long Term Evolution), the LTE/LTE-Advanced standards are still growing and evolving. New networks are still being rolled out, and leading-edge features are being added to 4G to satisfy the market need for ever-increasing data rates.
Evolving with 5G
There are three main ways in which LTE-A is evolving towards 5G. The first is improving user throughput for small cells by using higher modulation schemes in combination with higher-order carrier aggregation. The second is the ongoing improvement in interference management, cell coverage and system throughput being achieved with the introduction of features such as CoMP (Coordinated Multipoint transmission/reception) and feICIC (further enhanced Inter-Cell Interference Coordination), which improve cell-edge performance. The third is the introduction of low-complexity user equipment (UE) for IoT applications, a development that is already happening under LTE-A with the introduction of NB-IoT as well as Cat-0 and Cat-M. These additions will place tighter requirements on system signaling to accommodate large numbers of UEs.
The challenges
One of the major challenges for wireless network validation is keeping up with the increasingly rapid introduction of new features in the 3GPP LTE-A roadmap, as we move through specifications that are already starting to look remarkably like some of the 5G targets. At its inception LTE used just a single 20 MHz carrier, and its performance only started to meet the IMT targets for 4G in real-world scenarios when LTE-Advanced (LTE-A) features were progressively added. The first enhancement was carrier aggregation, which combines blocks of spectrum known as component carriers (CCs), enabling fragmented spectrum to be used to increase data rates – initially combining two carriers (2CC) but now being introduced for up to 5CC, and also combining time-division duplexing (TDD) and frequency-division duplexing (FDD) spectrum. Other features introduced were: higher-order MIMO, which increases spectral efficiency; relays, which extend coverage in areas where wired backhaul is uneconomical; and Self-Organizing/Self-Optimizing Networks (SON), which enable the efficient use of heterogeneous networks (HetNets) that improve on the coverage and capacity provided by traditional macro base stations.
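As a rough illustration of how carrier aggregation pools fragmented spectrum into one wider virtual channel, the short sketch below sums the bandwidths of up to five component carriers drawn from both FDD and TDD bands. The band assignments and bandwidths are hypothetical examples, not a recommended configuration.

```python
# Rough sketch of carrier aggregation: pooling fragmented spectrum.
# Band labels and carrier bandwidths are illustrative only.

MAX_COMPONENT_CARRIERS = 5  # LTE-A currently aggregates up to 5 CCs

# (band label, duplex mode, bandwidth in MHz)
component_carriers = [
    ("Band 3",  "FDD", 20),
    ("Band 7",  "FDD", 20),
    ("Band 20", "FDD", 10),
    ("Band 38", "TDD", 20),
    ("Band 40", "TDD", 20),
]

assert len(component_carriers) <= MAX_COMPONENT_CARRIERS

aggregated_bw = sum(bw for _, _, bw in component_carriers)
print(f"{len(component_carriers)}CC aggregation -> {aggregated_bw} MHz total bandwidth")
```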
Higher levels of modulation density, such as 256 QAM, have already pushed up the achievable data rate to 1.6 Gbps when used in combination with carrier aggregation and 4×4 MIMO. The aggregation of higher numbers of component carriers is pushing this still further: the highest downlink data rate theoretically available is 3.917 Gbps, which combines 256 QAM with 8×8 MIMO and 5CC aggregation.
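The headline figures above can be sanity-checked with a back-of-envelope calculation. The sketch below is a rough approximation that ignores the exact 3GPP transport block size tables: it multiplies the resource elements per 20 MHz carrier by bits per symbol, spatial layers and component carriers, then deducts an assumed overhead for control and reference signals. The 27% overhead figure is an assumption chosen for illustration.

```python
# Back-of-envelope LTE-A downlink peak rate (ignores exact 3GPP TBS tables).

RB_PER_20MHZ = 100          # resource blocks in a 20 MHz carrier
SUBCARRIERS_PER_RB = 12
SYMBOLS_PER_SUBFRAME = 14   # normal cyclic prefix, 1 ms subframe

# resource elements per second, per spatial layer, per 20 MHz carrier
re_per_second = RB_PER_20MHZ * SUBCARRIERS_PER_RB * SYMBOLS_PER_SUBFRAME * 1000

def peak_rate_gbps(bits_per_symbol, layers, carriers, overhead=0.27):
    """Raw symbol rate x modulation x MIMO layers x CCs, minus assumed overhead."""
    raw_bps = re_per_second * bits_per_symbol * layers * carriers
    return raw_bps * (1 - overhead) / 1e9

# 256 QAM (8 bits/symbol), 8x8 MIMO, 5 aggregated 20 MHz carriers
print(f"~{peak_rate_gbps(8, 8, 5):.2f} Gbps")   # roughly 3.9 Gbps
```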
Interference and implications
There was also a successive introduction of Interference Management (IM) functionality with increasing levels of sophistication, enabling greater area spectral efficiency. ICIC (Inter-Cell Interference Coordination), which reduced interference at the cell edges, evolved first to eICIC (enhanced ICIC) and then to further enhanced ICIC (feICIC). eICIC and feICIC use a technique known as ‘cell range expansion’ (CRE) to increase the coverage area of the smaller cells and reduce interference at their cell edge. These techniques allow users to be offloaded from the macrocell to the small cell, and are especially important when carrier aggregation is being used. Testing a network employing eICIC/feICIC requires the tester to apply the relevant mobile device measurement procedures in order to feed back correct and reliable information to the network.
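To illustrate the idea behind cell range expansion, the sketch below shows how adding a bias to the small cell's measured power offloads a UE that would otherwise camp on the macrocell. The RSRP values and the 9 dB bias are hypothetical illustration figures, not values taken from the specifications.

```python
# Minimal sketch of cell range expansion (CRE): a positive bias is added to
# the small cell's measured RSRP so that more UEs are offloaded to it.
# RSRP values and the bias below are hypothetical.

def select_serving_cell(rsrp_dbm, cre_bias_db):
    """Pick the cell with the highest biased RSRP."""
    biased = {cell: rsrp + cre_bias_db.get(cell, 0.0)
              for cell, rsrp in rsrp_dbm.items()}
    return max(biased, key=biased.get)

measurements = {"macro": -92.0, "pico": -97.0}   # UE near the pico cell edge

print(select_serving_cell(measurements, {}))              # macro wins without CRE
print(select_serving_cell(measurements, {"pico": 9.0}))   # pico wins once biased
```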
CoMP was introduced to further enhance LTE-A performance. HetNets often do not deliver the expected user experience, mainly because of poor cell-edge performance caused by the lack of traffic coordination and interference management between small cells and macrocells. CoMP coordinates transmission and reception between different transmitting and receiving cells through load balancing, coordinated scheduling, and the management of signal power and interference. The tight synchronization needed between multiple transmitting and receiving points means that CoMP is challenging both to configure and to validate. Testing with realistic CoMP scenarios in both uplink and downlink enables operators and vendors to run lab and field trials with representative performance tests, and thus maximize throughput in their HetNet deployments.
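A much-simplified sketch of one CoMP flavour (dynamic point selection with muting) is shown below: each UE is served by the transmission point with the best reported channel while the other points in the coordinating set stay silent on that resource. The point names and channel-quality figures are hypothetical, and real CoMP scheduling is considerably more involved.

```python
# Simplified sketch of CoMP dynamic point selection: serve each UE from the
# best-reporting transmission point and mute the others on that resource.
# Point names and channel-quality values are hypothetical.

def schedule_comp(ue_reports):
    """ue_reports: {ue: {transmission_point: channel_quality_db}}."""
    allocation = {}
    for ue, per_point in ue_reports.items():
        serving_point = max(per_point, key=per_point.get)
        muted = [p for p in per_point if p != serving_point]
        allocation[ue] = (serving_point, muted)
    return allocation

reports = {
    "UE1": {"macro": 12.0, "pico-A": 15.5},   # cell-edge UE better served by the pico
    "UE2": {"macro": 18.0, "pico-A": 6.0},
}
print(schedule_comp(reports))
```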
The demands are huge
So what are the main challenges for testing the new features as each is agreed? Physical-layer performance will become more difficult to validate for massive MIMO at higher carrier frequencies, and this is equally a problem that will need to be solved before 5G can be introduced. Another challenge will be testing the fusion of multiple technologies within one system. Not only do features need to be tested in isolation but, crucially, the interactions between them must also be validated where applicable: for example, testing downlink CoMP in combination with carrier aggregation, or LAA alongside higher-order MIMO schemes. As the demand on the networks increases, testing needs to take place with significantly larger numbers of UEs to ensure that system and user KPIs are met when the network is loaded. This is particularly challenging at the cell edge, where interference is prominent.
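One practical way to keep feature-interaction testing tractable is to enumerate the pairwise combinations systematically so that none is overlooked. The sketch below does this with an illustrative feature list; it is an example of generating an interaction-test matrix, not an exhaustive test plan.

```python
# Illustrative sketch: enumerate pairwise feature combinations so that
# interaction tests (e.g. CoMP + carrier aggregation) are not missed.
# The feature list is an example only.

from itertools import combinations

features = ["carrier aggregation", "downlink CoMP", "LAA",
            "higher-order MIMO", "feICIC"]

for first, second in combinations(features, 2):
    print(f"interaction test: {first} + {second}")
```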
From a network test point of view, it is crucial that support for new features – and the interaction between them – is made available for R&D as soon as possible, so that network performance can be validated under realistic user scenarios before the features are introduced on real mobile terminals. As capacity becomes an even greater challenge, intelligent debugging capabilities are being developed to validate system performance and specific parts of the protocol stack under high-load conditions. However, as feature interaction and capacity testing place ever-increasing demands on validation, it is important to provide KPI metrics that do not create ‘information overload’, but instead genuinely benefit testers and help them identify where a performance bottleneck lies.
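As a simple illustration of reporting without information overload, the sketch below flags only the KPIs that breach their targets under load, rather than dumping every counter. The KPI names, targets and measured values are hypothetical.

```python
# Sketch of 'no information overload' KPI reporting: surface only the metrics
# that miss their targets. KPI names, targets and measurements are hypothetical.

kpi_results = {
    "dl_throughput_mbps":   {"measured": 310.0, "target": 300.0, "higher_is_better": True},
    "handover_success_pct": {"measured": 96.2,  "target": 98.0,  "higher_is_better": True},
    "rrc_setup_latency_ms": {"measured": 85.0,  "target": 50.0,  "higher_is_better": False},
}

for name, kpi in kpi_results.items():
    if kpi["higher_is_better"]:
        ok = kpi["measured"] >= kpi["target"]
    else:
        ok = kpi["measured"] <= kpi["target"]
    if not ok:
        print(f"BOTTLENECK CANDIDATE: {name} = {kpi['measured']} (target {kpi['target']})")
```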