|Article title||SOFTWARE COMPLEX FOR INTELLIGENT SCHEDULING AND ADAPTIVE SELF-ORGANIZATION OF VIRTUAL COMPUTING RESOURCES BASED ON THE LIT JINR CLOUD CENTER|
|Authors||N.A. Balashov, A.V. Baranov, I.S. Kadochnikov, V.V. Korenkov, N.A. Kutovskiy, A.V. Nechaevskiy, I.S. Pelevanyuk|
|Section||SECTION IV. CLOUD COMPUTING|
|Month, Year||12, 2016|
|Index UDC||004.023, 004.75|
|Abstract||Rapid development of cloud technologies has led to their wide use in both commercial and academic areas. The Joint Institute for Nuclear Research (JINR) hosts its own cloud infrastructure built on the OpenNebula platform following the Infrastructure as a Service (IaaS) model. The JINR cloud service is used as a universal computing resource supporting workload-intensive scientific computations as well as low-load activities. The complexity of modern software libraries and applications makes it hard to predict the workloads the software may generate. For this reason cloud resources are often over-allocated, leading to a high degree of underutilization of the underlying equipment. Optimizing resource consumption is useful for commercial cloud providers, but it is especially important for scientific organizations with limited resources, such as JINR, where it is crucial to get the maximum performance out of the owned computing infrastructure. A common solution is consolidation of virtual machines (VMs), and a number of algorithms and methods for dynamic reallocation and consolidation of VMs have been proposed recently. In this paper we describe a new heuristics-based method of dynamic VM reallocation that uses a ranking system of computing resources to control Quality of Service (QoS) while over-provisioning the cloud infrastructure. The method makes it possible to release computing capacity by means of VM consolidation; the paper describes a set of approaches for handling the released resources and gives a detailed overview of a novel approach based on the integration of cloud services with batch systems. Based on the proposed methods and strategies, we review a universal software framework, currently under development, that implements a complete solution for improving cloud environments built on the IaaS model.|
|Keywords||Cloud computing; virtualization; optimization; intelligent control; datacenters; VM consolidation.|
|References||1. Meinhard H. Virtualization, clouds and IaaS at CERN, VTDC '12: Proceedings of the 6th International Workshop on Virtualization Technologies in Distributed Computing, ACM, New York, NY, USA, 2012, pp. 27-28.
2. Timm S. et al. Cloud Services for the Fermilab Scientific Stakeholders, J. Phys.: Conf. Ser., 2015, Vol. 664, No. 2.
3. Baranov A.V., Balashov N.A., Kutovskiy N.A., Semenov R.N. JINR cloud infrastructure evolution, Physics of Particles and Nuclei Letters, 2016, Vol. 13, No. 5, pp. 672-675.
4. Feller E., Rilling L., Morin C. Energy-aware ant colony based workload placement in clouds, Proceedings of the 12th IEEE/ACM International Conference on Grid Computing, Lyon, France, 2011.
5. Farahnakian F., Liljeberg P., Plosila J. LiRCUP: Linear regression based CPU usage prediction algorithm for live migration of virtual machines in data centers, 39th EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA), 2013, pp. 357-364.
6. Beloglazov A., Buyya R. Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers, Concurrency and Computation: Practice and Experience (CCPE), 2012, Vol. 24, No. 13.
7. Mastroianni C., Meo M., Papuzzo G. Probabilistic Consolidation of Virtual Machines in Self-Organizing Cloud Data Centers, IEEE Transactions on Cloud Computing, 2013, Vol. 1.
8. Mosa A., Paton N.W. Optimizing virtual machine placement for energy and SLA in clouds using utility functions, Journal of Cloud Computing: Advances, Systems and Applications, 2016, Vol. 5.
9. Monil M.A.H., Rahman R.M. VM consolidation approach based on heuristics, fuzzy logic, and migration control, Journal of Cloud Computing: Advances, Systems and Applications, 2016, Vol. 5.
10. Guenter B., Jain N., Williams C. Managing cost, performance, and reliability tradeoffs for energy-aware server provisioning, Proc. of the 30th Annual IEEE Intl. Conf. on Computer Communications (INFOCOM), 2011, pp. 1332-1340.
11. Balashov N., Baranov A., Korenkov V. Optimization of over-provisioned clouds, Physics of Particles and Nuclei Letters, 2016, Vol. 13, No. 5, pp. 609-612.
12. McNab A., Stagni F., Luzzi C. LHCb experience with running jobs in virtual machines, J. Phys.: Conf. Ser., 2015, Vol. 664.
13. Computing Center of the Institute of High Energy Physics (IHEP-CC) “VCondor – virtual computing resource pool manager based on HTCondor”. Available at: https://github.com/hep-gnu/VCondor (accessed 08 November 2016).
14. McNab A., Love P., MacMahon E. Managing virtual machines with Vac and Vcycle, J. Phys.: Conf. Ser., 2015, Vol. 664.
15. Feller E., Rilling L., Morin C. Snooze: A scalable and autonomic virtual machine management framework for private Clouds, Proceedings of the 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), 2012, pp. 482-489.
16. Beloglazov A., Buyya R. OpenStack Neat: A Framework for Dynamic and Energy-Efficient Consolidation of Virtual Machines in OpenStack Clouds, Concurrency and Computation: Practice and Experience (CCPE), 2015, Vol. 27, No. 5, pp. 1310-1333.
17. Orgerie A.-C., Lefèvre L. When Clouds become Green: the Green Open Cloud Architecture, International Conference on Parallel Computing (ParCo), 2009, pp. 228-237.
18. Ward J.S., Barker A. Observing the clouds: a survey and taxonomy of cloud monitoring, Journal of Cloud Computing: Advances, Systems and Applications, 2014, Vol. 3.
19. Ward J.S., Barker A. Cloud cover: monitoring large-scale clouds with Varanus, Journal of Cloud Computing: Advances, Systems and Applications, 2015, Vol. 4.
20. Open Grid Forum “Open Cloud Computing Interface”. Available at: http://occi-wg.org/ (accessed 23 November 2016).