
Humanoid robot control policy and interaction design. A study on simulation to machine deployment


Suman Deb


GRIN Verlag

Natural Sciences, Medicine, Computer Science, Technology / Technology

Description

Technical Report from the year 2019 in the subject Engineering - Robotics, grade: 9, language: English, abstract: Robotic agents can be taught a wide range of tasks by simulating many years of interaction with the environment, an amount of experience that cannot be gathered on real robots. With an abundance of replay data and simulators of increasing fidelity that model complex physical interaction between robot and environment, agents can learn skills that would otherwise take a lifetime to master. The real benefit of such training, however, is realized only if the learned behavior transfers to the physical machine. Although simulation provides a safe environment in which to train and test agents, policies trained in simulation often transfer poorly to the real world. This difficulty is compounded by the fact that deep-learning-based optimization algorithms frequently exploit simulator flaws, "cheating" the simulator to reap higher rewards. The problem of transferring simulated experience to real life is known as the reality gap.

In this work, commonly used reinforcement learning algorithms are applied to train a simulated agent modeled on the Aldebaran NAO humanoid robot. To bridge the reality gap between the simulated and real agents, a Difference model is employed that learns the difference between the state distributions of the real and simulated agents. The robot is trained on two basic tasks, navigation and bipedal walking, using the deep reinforcement learning algorithms Deep Q-Networks (DQN) and Deep Deterministic Policy Gradients (DDPG). The learned policies are then evaluated and transferred to a real robot using a Difference model built as an extension of the DDPG algorithm.
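To make the idea of a Difference model concrete, here is a minimal, hypothetical sketch of the general technique the abstract describes: a small regression model that learns the residual between the simulator's next state and the state actually observed on the real robot, so that simulated rollouts can be corrected toward real dynamics. All names (`DifferenceModel`, `fit`, `correct`) and the linear least-squares form are illustrative assumptions, not the report's actual implementation, which extends DDPG with a learned model.

```python
import numpy as np

class DifferenceModel:
    """Learns real_state - sim_state as a linear function of (sim_state, action).

    Illustrative only: real difference models are typically small neural
    networks, but a least-squares regressor shows the same structure.
    """

    def __init__(self, state_dim, action_dim):
        # Weight matrix mapping [sim_state, action] -> state residual.
        self.w = np.zeros((state_dim + action_dim, state_dim))

    def fit(self, sim_states, actions, real_states):
        # Features: simulated next state concatenated with the action taken.
        x = np.hstack([sim_states, actions])
        y = real_states - sim_states  # the discrepancy to be learned
        self.w, *_ = np.linalg.lstsq(x, y, rcond=None)

    def correct(self, sim_state, action):
        # Adjust the simulator's prediction toward the real-world dynamics.
        x = np.concatenate([sim_state, action])
        return sim_state + x @ self.w
```

A policy trained in simulation can then be evaluated against `correct(...)`-adjusted transitions before deployment, which is one simple way such a model helps bridge the reality gap.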



Keywords

HRI, HCI, Interaction Design, Humanoid Robot