Explainable Artificial Intelligence (XAI) for 6G: Improving Trust between Human and Machine

As the 5th Generation (5G) mobile networks bring about global societal benefits, the design phase for the 6th Generation (6G) has started. 6G will need to enable greater levels of autonomy, improve human-machine interfacing, and achieve deep connectivity in more diverse environments. The need for increased explainability to enable trust is critical for 6G, as it will manage a wide range of services, from mission-critical ones (e.g. autonomous driving) to safety-critical tasks (e.g. remote surgery). As we migrate from traditional model-based optimisation to deep learning, the trust we have in our optimisation modules decreases. This loss of trust means we cannot understand the impact of 1) poor, biased, or malicious data and 2) neural network design on decisions; nor can we explain the network's actions to the engineer or the public. In this review, we outline the core concepts of Explainable Artificial Intelligence (XAI) for 6G, including: public and legal motivations, definitions of explainability, performance vs. explainability trade-offs, methods to improve explainability, and frameworks to incorporate XAI into future wireless systems. Our review is grounded in case studies for both PHY and MAC layer optimisation, and provides the community with an important research area to embark upon.