Document Type
Article
Publication Date
7-2019
Abstract
To meet the tremendous resource demand of diverse IoT applications, the large-scale, resource-constrained IoT ecosystem requires a robust resource management technique. Optimal resource provisioning in an IoT ecosystem depends on efficient request-resource mapping, which is difficult to achieve due to the heterogeneity and dynamicity of both IoT resources and IoT requests. In this paper, we investigate the scheduling and resource allocation problem for dynamic user requests with varying resource requirements. Specifically, we formulate the complete problem as an optimization problem and derive an optimal policy whose objectives are to minimize overall energy consumption and to achieve long-term user satisfaction through minimum response time. We introduce a deep reinforcement learning (DRL) mechanism to improve resource management efficiency in the IoT ecosystem. To maximize the overall performance of resource management, our method learns to select the optimal resource allocation policy from among many possible solutions. Moreover, the proposed approach can efficiently handle a sudden rise or fall in user demand, which we call demand drift, through adaptive learning while maintaining optimal resource utilization. Finally, our simulation analysis demonstrates the effectiveness of the proposed mechanism: it reduces energy consumption and response time by at least 36.7% and 59.7%, respectively, and increases average resource utilization by at least 10.4%. Our approach also achieves good convergence and a trade-off among the monitored metrics.
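The request-to-resource mapping idea described in the abstract can be illustrated with a toy sketch. This is not the paper's DA-DRLS (which uses deep reinforcement learning and an adaptive drift-handling mechanism); it is a minimal tabular Q-learning agent, with a one-step bandit-style update, that assigns discretized demand levels to resources while penalizing both energy consumption and response time. All resource costs, speeds, demand levels, and hyperparameters below are invented for illustration.

```python
import random

random.seed(0)

N_RESOURCES = 3
# Hypothetical per-resource energy cost and processing speed:
# cheaper resources are slower, faster resources cost more energy.
ENERGY = [1.0, 2.0, 3.0]
SPEED = [1.0, 2.0, 3.0]

def reward(resource, demand):
    # Jointly penalize energy use and response time (demand / speed),
    # mirroring the two objectives stated in the abstract.
    return -(ENERGY[resource] + demand / SPEED[resource])

def train(episodes=2000, alpha=0.1, eps=0.1):
    # State: discretized demand level 0..2; action: resource index.
    q = [[0.0] * N_RESOURCES for _ in range(3)]
    for _ in range(episodes):
        level = random.randint(0, 2)
        demand = 1.0 + 2.0 * level
        if random.random() < eps:          # epsilon-greedy exploration
            a = random.randrange(N_RESOURCES)
        else:                              # greedy exploitation
            a = max(range(N_RESOURCES), key=lambda i: q[level][i])
        r = reward(a, demand)
        # One-step update toward the observed reward; the paper instead
        # trains a deep network over a richer state/action space.
        q[level][a] += alpha * (r - q[level][a])
    return q

q = train()
policy = [max(range(N_RESOURCES), key=lambda i: q[s][i]) for s in range(3)]
print(policy)  # learned demand-level -> resource mapping
```

With these invented costs, the learned policy routes light demand to the cheap, slow resource and heavier demand to a faster one, illustrating how a learned mapping can trade energy against response time.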
Recommended Citation
Chowdhury A, Raut SA, Narman HS. DA-DRLS: Drift adaptive deep reinforcement learning based scheduling for IoT resource management. Journal of Network and Computer Applications. 2019 Jul 15;138:51-65.
Comments
This is the authors’ original manuscript. The version of record is available from the publisher at https://doi.org/10.1016/j.jnca.2019.04.010.
Copyright © 2019 Elsevier Ltd. All rights reserved.