Semantic dimensionality reduction during language acquisition: A window into concept representation

Özcan, Rojda
We explore the dimensionality of the semantic representations derived from the Eve fragment of the CHILDES database to gain insight into whether semantic dimensionality reduction (DR) occurs during language acquisition and, if so, what this reduction of dimensions might look like. These representations take the form of lambda terms (LTs); we begin by seeking alternative representations for them that are more suitable for the application of DR techniques. Since De Bruijn terms (DBTs) make the data set more amenable to DR, we prepare a version of the original data set in which the LTs are converted into DBTs. After devising vectorial representations for the LTs and DBTs (for the DBTs we prepare two different versions of the data set), we study the dimensionality of these three data sets using Principal Component Analysis, cosine similarity comparisons, and Affinity Propagation clustering, and report the results.
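The pipeline outlined in the abstract — converting named lambda terms to De Bruijn terms, vectorising them, and then applying a DR technique — can be illustrated with a minimal sketch. The tuple encoding of terms, the toy feature vector, and the example combinator terms below are illustrative assumptions, not the thesis's actual data set or featurisation:

```python
import numpy as np
from sklearn.decomposition import PCA

# Lambda terms as nested tuples: ('lam', name, body), ('app', f, a), ('var', name)
def to_de_bruijn(term, env=()):
    """Replace bound-variable names with De Bruijn indices (0 = innermost binder)."""
    tag = term[0]
    if tag == 'var':
        return ('var', env.index(term[1]))
    if tag == 'lam':
        return ('lam', to_de_bruijn(term[2], (term[1],) + env))
    return ('app', to_de_bruijn(term[1], env), to_de_bruijn(term[2], env))

def features(dbt):
    """Toy vectorisation: [#lam, #app, sum of indices] — a placeholder feature map."""
    if dbt[0] == 'var':
        return np.array([0.0, 0.0, float(dbt[1])])
    if dbt[0] == 'lam':
        return np.array([1.0, 0.0, 0.0]) + features(dbt[1])
    return np.array([0.0, 1.0, 0.0]) + features(dbt[1]) + features(dbt[2])

# I = λx.x,  K = λx.λy.x,  S = λx.λy.λz.x z (y z)
i = ('lam', 'x', ('var', 'x'))
k = ('lam', 'x', ('lam', 'y', ('var', 'x')))
s = ('lam', 'x', ('lam', 'y', ('lam', 'z',
     ('app', ('app', ('var', 'x'), ('var', 'z')),
             ('app', ('var', 'y'), ('var', 'z'))))))

dbts = [to_de_bruijn(t) for t in (i, k, s)]
print(dbts[1])  # K becomes ('lam', ('lam', ('var', 1)))

X = np.stack([features(d) for d in dbts])
reduced = PCA(n_components=2).fit_transform(X)  # project the 3-d vectors to 2-d
print(reduced.shape)  # (3, 2)
```

Cosine similarity comparisons and Affinity Propagation clustering (`sklearn.cluster.AffinityPropagation`) would operate on the same feature matrix `X`; clustering is omitted here because affinity propagation needs more samples than this toy set to converge reliably.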


Sentiment and Context-refined Word Embeddings for Sentiment Analysis
Deniz, Ayca; Angin, Merih; Angın, Pelin (2021-01-01)
Word embeddings have become the de-facto tool for representing text in natural language processing (NLP) tasks, as they can capture semantic and syntactic relations, unlike their precedents such as Bag-of-Words. Although word embeddings have been employed in various studies in recent years and proven to be effective in many NLP tasks, they are still immature for sentiment analysis, as they suffer from insufficient sentiment information. General word embedding models pre-trained on large corpora with methods...
Learning semi-supervised nonlinear embeddings for domain-adaptive pattern recognition
Vural, Elif (2019-05-20)
We study the problem of learning nonlinear data embeddings in order to obtain representations for efficient and domain-invariant recognition of visual patterns. Given observations of a training set of patterns from different classes in two different domains, we propose a method to learn a nonlinear mapping of the data samples from different domains into a common domain. The nonlinear mapping is learnt such that the class means of different domains are mapped to nearby points in the common domain in order to...
Semantic Communications in Networked Systems: A Data Significance Perspective
Uysal, Elif; KAYA, ONUR; Ephremides, Anthony; Gross, James; Codreanu, Marian; Popovski, Petar; Assaad, Mohamad; Liva, Gianluigi; Munari, Andrea; Soret, Beatriz; Soleymani, Touraj; Johansson, Karl Henrik (2022-07-01)
We present our vision for a departure from the established way of architecting and assessing communication networks, by incorporating the semantics of information, defined not necessarily as the meaning of the messages, but as their significance, possibly within a real-time constraint, relative to the purpose of the data exchange. We argue that research efforts must focus on laying the theoretical foundations of a redesign of the entire process of information generation, transmission, and usage for networke...
Emergence of verb and object concepts through learning affordances
Dağ, Nilgün; Kalkan, Sinan; Şahin, Erol; Department of Computer Engineering (2010)
Researchers are still far from thoroughly understanding and building accurate computational models of the mechanisms in the human mind that give rise to cognitive processes such as the emergence of concepts and language acquisition. As a new attempt to give insight into this issue, in this thesis we are concerned with developing a computational model that leads to the emergence of concepts. Specifically, we investigate how a robot can acquire verb and object concepts through learning affordances, a notion first...
Learning by optimization in random neural networks
Atalay, Mehmet Volkan (1998-10-28)
The random neural network model proposed by Gelenbe has a number of interesting features in addition to a well established theory. Gelenbe has also developed a learning algorithm for the recurrent random network model using gradient descent of a quadratic error function. We present a quadratic optimization approach for learning in the random neural network, particularly for image texture reconstruction.
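The gradient descent of a quadratic error function mentioned above can be sketched generically. This is not Gelenbe's actual random-network update rule, just a minimal, self-contained illustration of descending a quadratic loss E(w) = ½‖Xw − y‖²:

```python
import numpy as np

def gradient_descent(X, y, lr=0.1, steps=500):
    """Minimise the quadratic error E(w) = 0.5 * ||X w - y||^2 by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y)  # gradient of the quadratic error w.r.t. w
        w -= lr * grad
    return w

# A consistent toy system whose least-squares solution is w = [1, 2]
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = gradient_descent(X, y)
print(np.round(w, 3))  # converges close to [1., 2.]
```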
Citation Formats
R. Özcan, “Semantic dimensionality reduction during language acquisition: A window into concept representation,” M.S. - Master of Science, Middle East Technical University, 2022.