Paired 3D Model Generation with Conditional Generative Adversarial Networks
Date
2018-09-14
Author
Öngün, Cihan
Temizel, Alptekin
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats
226 views, 116 downloads
Abstract
Generative Adversarial Networks (GANs) have been shown to be successful at generating new and realistic samples, including 3D object models. Conditional GAN, a variant of GANs, allows generating samples under given conditions. However, the objects generated for each condition are different, and the standard formulation does not allow generating the same object under different conditions. In this paper, we first adapt conditional GAN, which was originally designed for 2D image generation, to the problem of generating 3D models in different rotations. We then propose a new approach to guide the network to generate the same 3D sample at different and controllable rotation angles (sample pairs). Unlike previous studies, the proposed method does not require modification of the standard conditional GAN architecture, and it can be integrated into the training step of any conditional GAN. Experimental results and visual comparison of 3D models show that the proposed method is successful at generating model pairs in different conditions.
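The abstract describes the pairing idea only in prose. The sketch below is a hedged illustration, not the authors' implementation: it shows how a conditional GAN training step could reuse a single latent vector z with two different rotation conditions, so the generator is encouraged to emit the same voxel object at controllable rotations. The voxel resolution, latent dimensionality, MLP networks, and loss terms are all assumptions made for this example; the paper's actual architecture and training details are in the linked publication.

```python
# Minimal sketch (assumed, not the authors' code): a conditional GAN step for
# 3D voxel models in which the same latent z is paired with two rotation
# conditions, illustrating "sample pair" generation.
import torch
import torch.nn as nn

N_ROT = 4    # assumed number of discrete rotation conditions
Z_DIM = 100  # assumed latent dimensionality
VOX = 32     # assumed voxel grid resolution (32x32x32)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + N_ROT, 512), nn.ReLU(),
            nn.Linear(512, VOX ** 3), nn.Sigmoid(),
        )
    def forward(self, z, cond):
        out = self.net(torch.cat([z, cond], dim=1))
        return out.view(-1, 1, VOX, VOX, VOX)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(VOX ** 3 + N_ROT, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1), nn.Sigmoid(),
        )
    def forward(self, vox, cond):
        flat = vox.view(vox.size(0), -1)
        return self.net(torch.cat([flat, cond], dim=1))

def one_hot(labels):
    # Encode integer rotation labels as one-hot condition vectors.
    return torch.eye(N_ROT)[labels]

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_vox, real_rot):
    """real_vox: (B,1,VOX,VOX,VOX) voxel grids; real_rot: (B,) rotation labels."""
    b = real_vox.size(0)
    real_c = one_hot(real_rot)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator: real vs. fake under the matching rotation condition.
    z = torch.randn(b, Z_DIM)
    fake = G(z, real_c)
    d_loss = bce(D(real_vox, real_c), ones) + bce(D(fake.detach(), real_c), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool D, reusing the SAME z under two rotation conditions so
    # both generated voxel grids describe one object (the pairing idea).
    z = torch.randn(b, Z_DIM)
    rot_a = one_hot(torch.randint(0, N_ROT, (b,)))
    rot_b = one_hot(torch.randint(0, N_ROT, (b,)))
    fake_a, fake_b = G(z, rot_a), G(z, rot_b)
    g_loss = bce(D(fake_a, rot_a), ones) + bce(D(fake_b, rot_b), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Smoke test with random binary grids standing in for a voxelized shape dataset.
voxels = (torch.rand(8, 1, VOX, VOX, VOX) > 0.5).float()
rotations = torch.randint(0, N_ROT, (8,))
print(train_step(voxels, rotations))
```

Note that the only departure from a standard conditional GAN update here is that the generator loss evaluates two outputs produced from the same z under different conditions, which is consistent with the abstract's claim that the approach fits into the training step of any conditional GAN without architectural changes.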
Subject Keywords
Conditional Generative Adversarial Network (CGAN), Pair Generation, Joint Learning, 3D Voxel Model
URI
https://hdl.handle.net/11511/32185
DOI
https://doi.org/10.1007/978-3-030-11009-3_29
Collections
Graduate School of Informatics, Conference / Seminar
Suggestions
OpenMETU Core
Improved Image Generation in Normalizing Flows through a Multi-Scale Architecture and Variational Training
Sayın, Deniz; Cinbiş, Ramazan Gökberk; Department of Computer Engineering (2022-8-31)
Generative models have been shown to be able to produce very high fidelity samples in natural image generation tasks in recent years, especially using generative adversarial network and denoising diffusion model based approaches. Normalizing flow models are another class of generative models, which are based on learning invertible mappings between the latent space and the image space. Normalizing flow models possess desirable features such as the ability to perform exact density estimation and simple maximu...
OPTIMIZATION OF ENCODING AND ERROR PROTECTION PARAMETERS FOR 3D VIDEO BROADCAST OVER DVB-H
Aksay, Anil; Bugdayci, Done; Akar, Gözde (2011-05-18)
In this study, we propose a heuristic methodology for modeling the end-to-end distortion characteristics of an error resilient broadcast system for 3D video over Digital Video Broadcasting - Handheld (DVB-H). We also use this model to optimally select the parameters of the video encoder and the error correction scheme, namely, Multi Protocol Encapsulation Forward Error Correction (MPE-FEC), minimizing the overall distortion. The proposed method models the RQ curve of video encoder and performance of channel c...
Closed-form sample probing for training generative models in zero-shot learning
Çetin, Samet; Cinbiş, Ramazan Gökberk; Department of Computer Engineering (2022-2-10)
Generative modeling based approaches have led to significant advances in generalized zero-shot learning over the past few years. These approaches typically aim to learn a conditional generator that synthesizes training samples of classes conditioned on class embeddings, such as attribute based class definitions. The final zero-shot learning model can then be obtained by training a supervised classification model over the real and/or synthesized training samples of seen and unseen classes, combined. Therefor...
Improving classification performance of endoscopic images with generative data augmentation
Çağlar, Ümit Mert; Temizel, Alptekin; Department of Modeling and Simulation (2022-2-8)
The performance of a supervised deep learning model is highly dependent on the quality and variety of the images in the training dataset. In some applications, it may be impossible to obtain more images. Data augmentation methods have been proven to be successful in increasing the performance of deep learning models with limited data. Recent improvements on Generative Adversarial Networks (GAN) algorithms and structures resulted in improved image quality and diversity and made GAN training possible with lim...
Compressed Representation of High Dimensional Channels using Deep Generative Networks
Doshi, Akash; Balevi, Eren; Andrews, Jeffrey G. (2020-05-01)
© 2020 IEEE. This paper proposes a novel compressed representation for high dimensional channel matrices obtained by optimization of the input to a deep generative network. Channel estimation using generative networks constrains the reconstructed channel to lie in the range of the generative model, which allows it to outperform conventional channel estimation techniques in the presence of a limited number of pilots. It also eliminates the need for explicit knowledge of the sparsifying basis for mmWave multiple...
Citation Formats
IEEE
C. Öngün and A. Temizel, “Paired 3D Model Generation with Conditional Generative Adversarial Networks,” 2018, vol. 0, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/32185.