Research Output
Q-learning driven routing for aeronautical Ad-Hoc networks
  The aeronautical ad-hoc network (AANET) is one of the key approaches to satisfying the Internet connectivity requirements of airplanes during flight. However, the ultra-dynamic topology and unstable air-to-air links of AANETs demand routing algorithms designed specifically for them rather than those built for terrestrial networks. This need arises mainly because the continuously changing topology and breaking air-to-air links increase delays, packet losses, and network load while reducing routing accuracy. Existing works in the literature do not address the ultra-dynamic topology and unstable air-to-air link characteristics of AANETs during routing. A routing algorithm can, however, adapt to these dynamic conditions by utilizing Artificial Intelligence (AI) based methodologies. To adapt to this dynamic environment, we let the airplanes discover their routing paths through exploration and exploitation by mapping the AANET environment onto Q-learning-based routing (QLR). Specifically, this article proposes an updated Layered Hidden Markov Model (updated-LHMM) estimation-based QLR scheme for AANETs to address the delay, packet loss, network load, and accuracy problems. To this end, the Bellman Equation is adapted to the AANET environment by proposing different methodologies for its related QLR components. Results reveal that the proposed strategy reduces the routing delay and packet losses by 30% and 33%, respectively, compared to the methods in the literature.
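
  As background for the mapping described above, the standard Q-learning update rule derived from the Bellman Equation is shown below; the state, action, and reward interpretations that follow are illustrative assumptions for an AANET routing setting, not the paper's exact formulations of its QLR components.

    Q(s, a) ← Q(s, a) + α [ r + γ · max_{a'} Q(s', a') − Q(s, a) ]

  Here, s could denote the airplane currently holding the packet, a the selected next-hop neighbour, s' the neighbour that receives the packet, r a reward reflecting, for example, air-to-air link stability and forwarding delay, α the learning rate, and γ the discount factor that balances immediate link quality against the expected quality of the remaining path.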

  • Type:

    Article

  • Date:

    11 November 2022

  • Publication Status:

    Published

  • Publisher:

    Elsevier BV

  • DOI:

    10.1016/j.pmcj.2022.101724

  • CrossRef:

    10.1016/j.pmcj.2022.101724

  • ISSN:

    1574-1192

  • Funders:

    Edinburgh Napier Funded

Citation

Bilen, T., & Canberk, B. (2022). Q-learning driven routing for aeronautical Ad-Hoc networks. Pervasive and Mobile Computing, 87, Article 101724. https://doi.org/10.1016/j.pmcj.2022.101724

Authors

Bilen, T., & Canberk, B.

Keywords

AANETs, Routing management, Reinforcement learning, Q-learning, Hidden Markov model
