Abstract
Researchers working with Reinforcement Learning typically face issues that severely hinder the efficiency of their research workflow: high computational requirements, numerous hyper-parameters that must be tuned manually, and a high probability of repeated failures before success. In this paper, we present some of the challenges our research has faced and how we have successfully tackled them in an innovative software platform. We also provide benchmarking results that show the improvements introduced by the new platform.
Notes
- 1.
- 2.
- 3.
We use the terms experiment and experimental unit to distinguish two different concepts. The former refers to a configuration containing multi-valued hyper-parameters that will require several executions to finish, whereas the latter refers to each of the single-valued configuration instances produced by combining the values of an experiment.
- 4.
Exp-B requires Microsoft's Cognitive Toolkit (CNTK), which only runs on x64 platforms; that is why Exp-B cannot run on Windows-x32 machines.
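As an illustrative sketch (not part of the original paper), the expansion described in note 3 can be modelled as a Cartesian product over the multi-valued hyper-parameters of an experiment; the function name and configuration keys below are hypothetical:

```python
from itertools import product

def expand_experiment(config):
    """Expand a configuration with multi-valued hyper-parameters
    into the list of single-valued experimental units."""
    keys = list(config)
    # Treat every value as a list of candidate values.
    value_lists = [v if isinstance(v, list) else [v] for v in config.values()]
    # Each combination of candidate values yields one experimental unit.
    return [dict(zip(keys, combo)) for combo in product(*value_lists)]

# An experiment with 2 x 3 = 6 experimental units.
experiment = {"learning-rate": [0.01, 0.001],
              "gamma": [0.9, 0.95, 0.99],
              "episodes": 1000}
units = expand_experiment(experiment)
print(len(units))  # 6
```

Under this reading, one experiment fans out into as many experimental units as the product of the sizes of its multi-valued hyper-parameters, each unit being an independently runnable job.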
Acknowledgements
The work in this paper has been partially supported by FEDER funds for the MINECO project TIN2017-85827-P, and projects KK-2018/00071 and KK-2018/00082 of the Elkartek 2018 funding program of the Basque Government.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Fernandez-Gauna, B., Larrucea, X., Graña, M. (2019). Reinforcement Learning Experiments Running Efficiently over Widely Heterogeneous Computer Farms. In: Pérez García, H., Sánchez González, L., Castejón Limas, M., Quintián Pardo, H., Corchado Rodríguez, E. (eds) Hybrid Artificial Intelligent Systems. HAIS 2019. Lecture Notes in Computer Science, vol 11734. Springer, Cham. https://doi.org/10.1007/978-3-030-29859-3_64
Print ISBN: 978-3-030-29858-6
Online ISBN: 978-3-030-29859-3