The reusability prior in deep learning models
Date: 2023-05-22
Author: Polat, Aydın Göze
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 80 views, 115 downloads
Abstract
Various design choices can affect the performance of deep learning (DL) models. For instance, repetition within a model via cross-layer parameter sharing, the use of convolutional layers, and reliance on skip connections all affect the reusability of components in DL models, and thereby parameter efficiency. In this work, three approaches are proposed to investigate how such design choices affect model performance. First, a new library, Revolver, is introduced to analyze reusable modules or model components while training a population of DL models. Reusing modules across models enabled training an entire population on a single GPU and collecting statistics about top-scoring shared modules. Second, the reusability prior is formulated: model components are forced to function in diverse contexts not only due to the training data, augmentation, and regularization choices but also due to the model design itself. Based on this prior, a counting-based graph analysis approach is proposed that quantifies the number of contexts for each learnable parameter. In the experiments, this approach correctly predicted the ranking of several analyzed models in terms of top-1 accuracy without relying on any training. Third, a generalized framework inspired by statistical mechanics is proposed, in which the context-based counting approach describes models at absolute temperature T=-1. The generalized framework goes beyond the counting approach by encoding constraints and assumptions as energies at the parameter level. Overall, these approaches may open up avenues for research on model analysis and comparison, or lead to practical applications in neural architecture search.
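The counting idea in the abstract admits a small, self-contained illustration. Below is a minimal Python sketch under a toy assumption: a parameter's context count is taken to be the number of distinct computational sites that reuse it (layers sharing weights, or the spatial positions a convolution kernel is applied at), and a model is scored by a parameter-weighted logarithm of those counts. The function name, the example module inventories, and the scoring rule are illustrative assumptions, not the definitions used in the thesis.

# Minimal sketch of the context-counting idea (illustrative only; the
# module inventories and scoring rule are assumptions, not the
# thesis's actual definitions).
import math

def reusability_score(modules):
    """Score a model by a parameter-weighted log of context counts.

    `modules` maps a module name to (num_params, num_sites), where
    num_sites is how many distinct computational sites reuse that
    module's parameters. log() gives diminishing returns; a parameter
    used at exactly one site contributes nothing.
    """
    return sum(params * math.log(sites)
               for params, sites in modules.values())

# Toy comparison:
# - a dense layer (each weight participates in one site),
# - a 3x3 conv kernel with 64 channels slid over a 32x32 feature map,
# - a block whose weights are shared across 12 layers.
dense  = {"fc":      (65536, 1)}
conv   = {"conv3x3": (576,   32 * 32)}
shared = {"block":   (65536, 12)}

for name, m in [("dense", dense), ("conv", conv), ("shared block", shared)]:
    print(f"{name:12s} score = {reusability_score(m):10.1f}")

Under this toy scoring, convolutional reuse and cross-layer weight sharing raise the score while an unshared dense layer contributes nothing, mirroring the abstract's claim that repetition-inducing design choices increase the number of contexts each parameter must serve.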
Subject Keywords: Deep learning, Parameter sharing, Genetic algorithms, Reusability, Statistical mechanics
URI: https://hdl.handle.net/11511/104170
Collections: Graduate School of Natural and Applied Sciences, Thesis
Citation Formats (IEEE):
A. G. Polat, “The reusability prior in deep learning models,” Ph.D. - Doctoral Program, Middle East Technical University, 2023.