Automated learning rate search using batch-level cross-validation
Date: 2019
Author: Kabakcı, Duygu
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Deep convolutional neural networks are widely used in computer vision tasks such as object recognition and detection, image segmentation, and face recognition, with a variety of architectures. Deep learning researchers and practitioners have accumulated significant experience in training a wide variety of architectures on various datasets. However, given a specific network model and a dataset, obtaining the best model (i.e., the model with the smallest test set error) while keeping the training time complexity low remains a challenging task. The hyper-parameters of deep neural networks, especially the learning rate and its (decay) schedule, strongly affect the network's final performance. The general approach is to search for the best learning rate and learning rate decay parameters within a cross-validation framework, a process that usually requires extensive experimentation at a significant time cost. In classical cross-validation, a random part of the dataset is reserved for evaluating model performance on unseen data, and this procedure is typically repeated multiple times with different random validation sets to decide on the learning rate settings. This thesis explores batch-level ("micro") cross-validation methods as an alternative to classical dataset-level ("macro") CV. The advantage of micro CV methods is that the gradient computed during training is re-used to evaluate several different learning rates. We propose automated learning rate selection algorithms that set the learning rate and learning rate schedule during training. Our algorithms use micro cross-validation, where a random half of the current batch of examples is used for training and the other half for validation. We present comprehensive experimental results on three well-known datasets (CIFAR10, SVHN, and ADIENCE) using three different network architectures: a custom CNN, ResNet, and VGG.
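The micro-CV idea in the abstract — split each batch in half, compute the gradient once on the training half, and score several candidate learning rates on the validation half before applying the best update — can be sketched as follows. This is a minimal illustration on a toy linear least-squares model, not the thesis's actual implementation; the function names and candidate learning rates are hypothetical.

```python
import numpy as np

def grad_mse(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def micro_cv_step(w, X_batch, y_batch, candidate_lrs, rng):
    """One training step with batch-level (micro) cross-validation:
    split the batch into random halves, compute the gradient once on
    the training half, then score each candidate learning rate on the
    validation half and apply the best-scoring update."""
    idx = rng.permutation(len(y_batch))
    half = len(idx) // 2
    tr, va = idx[:half], idx[half:]
    g = grad_mse(w, X_batch[tr], y_batch[tr])  # single gradient, re-used below
    losses = [mse(w - lr * g, X_batch[va], y_batch[va]) for lr in candidate_lrs]
    best_lr = candidate_lrs[int(np.argmin(losses))]
    return w - best_lr * g, best_lr

# Toy data: a linear target with small noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 5))
true_w = np.arange(1.0, 6.0)
y = X @ true_w + 0.1 * rng.normal(size=512)

w = np.zeros(5)
for start in range(0, 512, 64):  # one pass over the batches
    Xb, yb = X[start:start + 64], y[start:start + 64]
    w, lr = micro_cv_step(w, Xb, yb, [0.01, 0.05, 0.1, 0.5], rng)
print(np.round(w, 1))
```

The key point is that only one gradient is computed per batch; trying a candidate learning rate is just a vector update plus a loss evaluation on the held-out half, which is far cheaper than a full dataset-level cross-validation run per learning-rate setting.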
Subject Keywords
Computer engineering, Deep Learning, Object Classification, Learning Rate Search, Hyper-parameter Search, Adaptive Learning Rate, Cross-Validation
URI
http://etd.lib.metu.edu.tr/upload/12623418/index.pdf
https://hdl.handle.net/11511/43629
Collections
Graduate School of Natural and Applied Sciences, Thesis