Reinforcement Learning Techniques for Decentralized Self-adaptive Service Assembly
Conference paper, Service-Oriented and Cloud Computing, 2016

Abstract

This paper proposes a self-organizing, fully decentralized solution to the service assembly problem, whose goal is to guarantee good overall quality for the delivered services while ensuring fairness among the participating peers. The main features of our solution are: (i) the use of a gossip protocol to support decentralized information dissemination and decision making, and (ii) the use of a reinforcement learning approach that enables each peer to learn from its experience the service selection rule to be followed, thus overcoming the lack of global knowledge. In addition, we explicitly take into account load-dependent quality attributes, which leads to a service selection rule that steers the system away from overloading conditions that could adversely affect quality and fairness. Simulation experiments show that our solution self-adapts to occurring variations by quickly converging to viable assemblies that maintain the specified quality and fairness objectives.
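To give a flavor of how gossip-based dissemination and reinforcement learning can combine for decentralized service selection, the following is a minimal, hypothetical sketch. It is not the paper's algorithm: the `Peer` class, the incremental Q-value update, the epsilon-greedy selection rule, and the averaging-based gossip merge are all illustrative assumptions.

```python
import random


class Peer:
    """A peer that learns which service provider to bind to.

    Hypothetical sketch: each peer keeps a Q-value (estimated quality)
    per provider, updated from its own observations and refined by
    gossiping estimates with random partners.
    """

    def __init__(self, peer_id, providers, alpha=0.3, epsilon=0.1):
        self.peer_id = peer_id
        self.q = {p: 0.0 for p in providers}  # estimated quality per provider
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration probability

    def select_provider(self):
        # Epsilon-greedy selection: usually exploit the best-known
        # provider, occasionally explore so that estimates stay fresh
        # (important when quality is load-dependent).
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, provider, observed_quality):
        # Incremental update of the quality estimate from one observation.
        self.q[provider] += self.alpha * (observed_quality - self.q[provider])

    def gossip(self, other):
        # Exchange estimates with a partner and merge by averaging,
        # spreading local knowledge without any central coordinator.
        for p in self.q:
            merged = (self.q[p] + other.q[p]) / 2
            self.q[p] = other.q[p] = merged


def simulate(num_peers=10, rounds=200, seed=42):
    random.seed(seed)
    # True (here load-independent, for simplicity) provider qualities.
    providers = {"A": 0.9, "B": 0.5}
    peers = [Peer(i, providers) for i in range(num_peers)]
    for _ in range(rounds):
        for peer in peers:
            chosen = peer.select_provider()
            # Noisy quality observation of the chosen provider.
            peer.update(chosen, providers[chosen] + random.uniform(-0.05, 0.05))
        # Each round, every peer gossips with one random partner.
        for peer in peers:
            peer.gossip(random.choice(peers))
    return peers


if __name__ == "__main__":
    peers = simulate()
    best = max(peers[0].q, key=peers[0].q.get)
    print("peer 0 prefers provider:", best)
```

Under these assumptions the peers converge on the higher-quality provider "A"; the paper itself additionally handles load-dependent quality, so a fuller model would make `providers[chosen]` decrease with the number of peers currently bound to it.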

Dates and versions

hal-01638593, version 1 (20-11-2017)

Cite

M. Caporuscio, M. D’angelo, V. Grassi, R. Mirandola. Reinforcement Learning Techniques for Decentralized Self-adaptive Service Assembly. 5th European Conference on Service-Oriented and Cloud Computing (ESOCC), Sep 2016, Vienna, Austria. pp.53-68, ⟨10.1007/978-3-319-44482-6_4⟩. ⟨hal-01638593⟩