Research Output
Improved Double Deep Q Network-Based Task Scheduling Algorithm in Edge Computing for Makespan Optimization
  As business density grows, edge computing nodes take on increasingly heavy workloads, and efficiently allocating large-scale, dynamic workloads to edge computing resources has become a critical challenge. This paper proposes an edge task scheduling approach based on an improved Double Deep Q Network (Double DQN). Double DQN separates the calculation of the target Q value from the selection of the action for that target Q value into two networks, and a new reward function is designed. Furthermore, a control unit is added to the agent's experience replay unit, and the management of experience data is modified to fully exploit the value of that data and improve learning efficiency. Reinforcement learning agents usually learn from an ignorant initial state, which is inefficient. Therefore, a novel particle swarm optimization algorithm with an improved fitness function is proposed to generate near-optimal task scheduling solutions. These solutions are used to pre-train the agent's network parameters, giving the agent a better initial cognition level. The proposed algorithm is compared with six other methods in simulation experiments. Results show that it outperforms the benchmark methods in terms of makespan.
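The decoupling that Double DQN performs, where the online network selects the next action and the separate target network evaluates it, can be sketched as follows. This is an illustrative sketch only, not the paper's implementation; the Q-tables, function name, and discount factor are assumptions for demonstration:

```python
import numpy as np

def double_dqn_target(reward, next_state, q_online, q_target, gamma=0.99, done=False):
    """Double DQN target value.

    The online network selects the greedy action for the next state;
    the target network evaluates that action. Decoupling selection
    from evaluation reduces the overestimation bias of vanilla DQN.
    """
    if done:
        return reward
    a_star = int(np.argmax(q_online[next_state]))          # selection: online net
    return reward + gamma * q_target[next_state, a_star]   # evaluation: target net

# Toy Q-tables standing in for the two networks: 3 states x 2 actions.
q_online = np.array([[1.0, 2.0],
                     [0.5, 0.1],
                     [0.0, 0.0]])
q_target = np.array([[1.5, 1.0],
                     [0.4, 0.2],
                     [0.0, 0.0]])

# Online net picks action 1 in state 0 (2.0 > 1.0); target net then
# evaluates it: 1.0 + 0.9 * q_target[0, 1] = 1.0 + 0.9 * 1.0 = 1.9
y = double_dqn_target(reward=1.0, next_state=0,
                      q_online=q_online, q_target=q_target, gamma=0.9)
```

In a full implementation the tables would be neural networks, and the target network's weights would be periodically synchronized from the online network.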

  • Date:

    30 June 2024

  • Funders:

    National Natural Science Foundation of China


Zeng, L., Liu, Q., Shen, S., & Liu, X. (2024). Improved Double Deep Q Network-Based Task Scheduling Algorithm in Edge Computing for Makespan Optimization. Tsinghua Science and Technology, 29(3), 806–817.



edge computing; task scheduling; reinforcement learning; makespan; Double Deep Q Network (DQN)
