Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/16206
Full metadata record
dc.contributor.author: PANDEY, VIJAY
dc.date.accessioned: 2018-12-19T11:14:53Z
dc.date.available: 2018-12-19T11:14:53Z
dc.date.issued: 2018-06
dc.identifier.uri: http://dspace.dtu.ac.in:8080/jspui/handle/repository/16206
dc.description.abstract (en_US): Over the last few years, machine learning algorithms have become increasingly popular for solving real-life and complex problems, and the demand for tools that automate them has risen accordingly. The difficult part of applying these algorithms is identifying the best model for a given problem, together with the hyperparameters that play the most crucial role in reaching an efficient solution; the present state of the art identifies both by trial and error. Neural networks are the state of the art for most real-world problems, including complex ones, but constructing a neural network requires defining its hyperparameters by trial and error, which is time-consuming and inefficient. Training a neural network also requires forward and backward passes, and the backpropagation algorithm, which updates the weights by propagating error signals back through the layers, locks the initial layers until the later layers have backpropagated their error signal, increasing the time complexity of the model. Genetic algorithms (GAs), inspired by the evolutionary processes of organisms, are very effective at reducing the time complexity of computational algorithms, and a GA can automate the selection of the desired neural network's features and hyperparameters in minimal time. Synthetic gradients, in turn, do away with the interlocking of layers caused by backpropagation: they decouple the layers of the network by introducing a model that predicts the network graph's future error computation, so error gradients can be computed independently, without waiting for the true error signals, again reducing time complexity. Both techniques are used to train and test models on the MNIST and CIFAR-10 datasets. A recurrent neural network (RNN) is used as the model, and the approach can be extended to any type of neural network, including CNNs and LSTMs (illustrative sketches of the two techniques follow this metadata record).
dc.language.iso (en_US): en
dc.relation.ispartofseries: TD-4121;
dc.subject (en_US): NEURAL NETWORKS
dc.subject (en_US): GENETIC ALGORITHM
dc.subject (en_US): SYNTHETIC GRADIENTS
dc.title (en_US): OPTIMISATION OF NEURAL NETWORKS USING GENETIC ALGORITHM AND SYNTHETIC GRADIENTS
dc.type (en_US): Thesis
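
The two techniques the abstract combines can be made concrete with short sketches. First, a minimal sketch of genetic-algorithm hyperparameter search. Everything here is an illustrative assumption rather than the thesis code: the search space, the GA parameters, and the toy surrogate fitness function. In the thesis setting, the fitness of a candidate would instead be the validation accuracy of a network trained with those hyperparameters on MNIST or CIFAR-10.

import random

# Hypothetical hyperparameter search space (illustrative, not from the thesis).
SPACE = {
    "hidden_units": [32, 64, 128, 256],
    "learning_rate": [1e-1, 1e-2, 1e-3, 1e-4],
    "num_layers": [1, 2, 3, 4],
}

def random_genome():
    """Sample one candidate hyperparameter setting."""
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(genome):
    """Stand-in for the real fitness: validation accuracy of a network
    trained with `genome` on MNIST/CIFAR-10. This toy surrogate rewards
    capacity and a learning rate near 1e-2 so the sketch runs end to end."""
    return (genome["hidden_units"] / 256
            + genome["num_layers"] / 4
            - abs(genome["learning_rate"] - 1e-2))

def crossover(a, b):
    """Uniform crossover: each gene is taken from either parent."""
    return {k: random.choice((a[k], b[k])) for k in SPACE}

def mutate(genome, rate=0.1):
    """Resample each gene with probability `rate`."""
    return {k: random.choice(SPACE[k]) if random.random() < rate else v
            for k, v in genome.items()}

def evolve(pop_size=20, generations=10, elite=4):
    """Evolve a population of hyperparameter settings; return the fittest."""
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]                   # selection (elitism)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]  # offspring
        population = parents + children
    return max(population, key=fitness)

print("best hyperparameters:", evolve())

Because every candidate's fitness is evaluated independently, a population can be scored in parallel, which is where the savings over sequential trial-and-error tuning come from.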
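Second, a minimal PyTorch sketch of synthetic gradients in the style of decoupled neural interfaces (Jaderberg et al., 2016). This too is an assumption rather than the thesis implementation: the layer sizes, the linear synthetic-gradient model, and the optimiser settings are illustrative, and PyTorch is assumed available. It shows the mechanism the abstract describes: the first layer updates immediately from a predicted gradient, and the predictor is regressed onto the true gradient once backpropagation delivers it.

import torch
import torch.nn as nn

# Two layers separated by a decoupling point; 784 matches flattened MNIST.
layer1 = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
layer2 = nn.Linear(256, 10)

# Synthetic-gradient model: predicts dLoss/dh from the activations h,
# so layer1 never waits for layer2's true backpropagated signal.
sg_model = nn.Linear(256, 256)

opt1 = torch.optim.SGD(layer1.parameters(), lr=0.01)
opt2 = torch.optim.SGD(list(layer2.parameters()) + list(sg_model.parameters()),
                       lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    # Update layer1 immediately using the *predicted* gradient (decoupled).
    h = layer1(x)
    with torch.no_grad():
        synthetic_grad = sg_model(h)     # prediction of dLoss/dh
    opt1.zero_grad()
    h.backward(synthetic_grad)           # no waiting for layer2's backward pass
    opt1.step()

    # Train layer2 normally, then regress the synthetic-gradient model
    # onto the true gradient once it becomes available.
    h2 = h.detach().requires_grad_(True)
    loss = loss_fn(layer2(h2), y)
    opt2.zero_grad()
    loss.backward()                      # yields the true gradient h2.grad
    sg_loss = ((sg_model(h2.detach()) - h2.grad.detach()) ** 2).mean()
    sg_loss.backward()
    opt2.step()
    return loss.item()

x = torch.randn(32, 784)                 # a dummy batch
y = torch.randint(0, 10, (32,))
print(train_step(x, y))

With one such predictor per layer boundary, every layer can be trained this way, removing the forward/backward locking of the earlier layers that the abstract refers to.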
Appears in Collections: M.E./M.Tech. Computer Engineering

Files in This Item:
File                     Description    Size       Format
vijayfinal_thesis.pdf                   1.13 MB    Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.