Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/16268
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | JHA, NUPUR | - |
dc.date.accessioned | 2018-12-19T11:24:26Z | - |
dc.date.available | 2018-12-19T11:24:26Z | - |
dc.date.issued | 2015-06 | - |
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/16268 | - |
dc.description.abstract | With the world moving toward automation, robots are finding applications in almost every domain to reduce human effort. One such task is finding a path to a goal in an unknown and hostile environment. The complexity of many tasks in this domain makes them difficult for robots (agents) to solve with pre-programmed behaviours; the agents must instead discover a solution on their own through learning. In ordinary reinforcement learning algorithms, a single agent learns to achieve a goal over many episodes. If the learning problem is complicated or the number of agents is large, obtaining the optimal policy may require considerable computation time, and the agent may sometimes fail to reach the goal at all. Meanwhile, for optimization problems, multi-agent search methods such as particle swarm optimization and ant colony optimization are recognized for rapidly finding a global optimum of multi-modal functions over wide solution spaces. This thesis proposes a SARSA-based reinforcement learning algorithm, called Phe-SARSA, in which multiple agents are guided by pheromone levels. The agents learn not only from their own experience but also from the pheromone trails left by other agents while searching for the shortest path. The algorithms have been simulated in MATLAB 2013a, and the results are compared with the Q-learning, SARSA, Q-Swarm, SARSA-Swarm and Phe-Q algorithms. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartofseries | TD-3071; | - |
dc.subject | SWARM | en_US |
dc.subject | PHEROMONE | en_US |
dc.subject | SARSA | en_US |
dc.subject | REINFORCEMENT LEARNING | en_US |
dc.title | SWARM AND PHEROMONE BASED REINFORCEMENT LEARNING METHODS FOR THE ROBOT(S) PATH SEARCH PROBLEM | en_US |
dc.type | Thesis | en_US |
Appears in Collections: M.E./M.Tech. Electrical Engineering
Files in This Item:
File | Description | Size | Format
---|---|---|---
Mtech Thesis _ Nupur.pdf | | 6.01 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.