Title: Optimalizace neuronové sítě
Other Titles: Optimization of neural networks
Authors: Bulín, Martin
Advisor: Šmídl Luboš, Ing. Ph.D.
Referee: Švec Jan, Ing. Ph.D.
Issue Date: 2017
Publisher: Západočeská univerzita v Plzni
Document type: master's thesis
URI: http://hdl.handle.net/11025/27096
Keywords: network pruning;minimal network structure;network demystification;weight significance;removing synapses;network pathing;feature energy;network optimization;neural network
Keywords in different language: network pruning;minimal network structure;network demystification;weight significance;removing synapses;network pathing;feature energy;network optimization;neural network
Abstract: Neural networks can be trained to work well for particular tasks, but we hardly ever know why they work so well. Because of their complicated architectures and enormous number of parameters, we usually end up with well-working black boxes, and it is hard, if not impossible, to make targeted changes in a trained model. In this thesis we focus on network optimization: specifically, we make networks small and simple by removing unimportant synapses while keeping the classification accuracy of the original fully-connected networks. In our experience, at least 90% of the synapses in fully-connected networks are usually redundant. A pruned network consists of important parts only, so we can find input-output rules and make statements about individual parts of the network. A new measure is introduced to identify which synapses are unimportant. The methods are presented on six examples, where we show the ability of our pruning algorithm 1) to find a minimal network structure; 2) to select features; 3) to detect patterns among samples; 4) to partially demystify a complicated network; 5) to rapidly reduce the learning and prediction time. The network pruning algorithm is general and applicable to any classification problem.
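The core idea described in the abstract, removing the least significant synapses from a trained fully-connected network while monitoring accuracy, can be sketched as follows. The thesis introduces its own significance measure, which is not reproduced here; this sketch substitutes plain weight-magnitude pruning as a common stand-in, and the function name and the 90% pruning fraction are illustrative only.

```python
import numpy as np

def prune_by_magnitude(weights, fraction=0.9):
    """Zero out the given fraction of smallest-magnitude synapses.

    Stand-in for the thesis's significance measure: here a synapse's
    importance is approximated by the absolute value of its weight.
    Returns the pruned weight matrix and a boolean keep-mask.
    """
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)          # number of synapses to remove
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # Threshold = k-th smallest magnitude; everything at or below it is cut.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 5))                # one dense layer, 20 synapses
W_pruned, mask = prune_by_magnitude(W, fraction=0.9)
print(mask.mean())                         # prints 0.1 (10% of synapses kept)
```

In practice, such pruning is typically iterated: remove a batch of synapses, retrain briefly, and repeat while the classification accuracy of the original network is preserved.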
Rights: The full text of the thesis is accessible without restriction.
Appears in Collections: Diplomové práce / Theses (KKY)

Files in This Item:
File                Description               Size       Format
mb_thesis_2017.pdf  Full text of the thesis   3,04 MB    Adobe PDF
bulin-v.pdf         Advisor's review          272,79 kB  Adobe PDF
bulin-o.pdf         Referee's review          308,42 kB  Adobe PDF
bulin-p.pdf         Record of the defense     214,44 kB  Adobe PDF


