Joint Learning of Syntax and Argument Structure in Dependency Parsing
Date: 2023-2
Author: Kayadelen, Tolga
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
This thesis experiments with learning predicate-argument structure and syntax jointly within the dependency parsing framework. The linguistic representation used in this framework is dependency grammar, in which the predicate-argument structure of a sentence is represented as a labeled, directed dependency tree. Dependency parsing is the problem of inducing a dependency grammar from data. It can be conceived of as a combination of two tasks: head selection (arc prediction) and label classification. Head selection determines head-modifier relations in the sentence by associating modifiers with the heads they modify via dependency arcs, while label classification determines the grammatical role of each word in the sentence. In existing parsing approaches, these two tasks are usually stacked on top of one another, with the former taking precedence over the latter: models first predict the dependency arcs by connecting dependents to their heads to generate an unlabeled tree, and only then assign labels to the arcs of that tree. In this setup, dependency labeling has no impact on predicting the correct dependency tree, as it applies only after the tree has already been generated. In this study, instead of generating an unlabeled dependency tree and then using dependency labels merely as names over its arcs, we give dependency labels a more central role in the overall parsing process. We first predict the dependency label of each word, thereby predicting its grammatical role in the sentence, and then generate the dependency tree based on those predictions. We call this method label-first parsing. As will be shown, this approach improves parsing accuracy considerably for a number of languages.
Another important aspect of the label-first parsing approach is that syntactic attachment is driven mainly by the argument structure the system detects, so considerable weight is placed on predicting the predicates and arguments correctly. We experiment with a variety of languages and show that a parser that accurately predicts predicate and argument roles early in the parsing process performs better across a number of languages than one that does not. Comparing the variation in parsing performance across languages, and considering their typological characteristics, we also draw conclusions about the suitability of the dependency representation for learning predicate-argument structure in languages with different linguistic properties.
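The two-stage, label-first ordering described in the abstract can be sketched in a few lines. The label inventory, the scoring functions, and the greedy per-word head choice below are all illustrative assumptions for exposition, not the thesis's actual model (which would use learned neural scorers and a proper tree decoder):

```python
# Minimal sketch of label-first parsing, under assumed toy components.
# Stage 1: classify a dependency label for each word.
# Stage 2: select a head for each word, conditioning on the predicted labels.

LABELS = ["root", "nsubj", "obj", "amod"]  # illustrative label inventory

def classify_labels(words, score_label):
    """Stage 1: pick the highest-scoring dependency label for each word."""
    return [max(LABELS, key=lambda lab: score_label(w, lab)) for w in words]

def select_heads(words, labels, score_arc):
    """Stage 2: head selection conditioned on the predicted labels.

    A real parser would decode a well-formed spanning tree (e.g. with the
    Chu-Liu/Edmonds algorithm); a greedy per-word argmax is used here only
    to keep the sketch short. Head positions are 1-indexed; 0 denotes the
    artificial ROOT node.
    """
    heads = []
    for i, (w, lab) in enumerate(zip(words, labels)):
        if lab == "root":
            heads.append(0)  # the predicate attaches to ROOT
            continue
        candidates = [j + 1 for j in range(len(words)) if j != i]
        heads.append(max(candidates,
                         key=lambda h: score_arc(w, lab, words[h - 1])))
    return heads
```

In the label-first order, `score_arc` sees each word's predicted grammatical role before any attachment decision is made, which is the point of contrast with arc-first pipelines, where labels are assigned only after the unlabeled tree exists.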
Subject Keywords: Dependency parsing, Language processing, Syntax, Argument structure, Deep learning
URI: https://hdl.handle.net/11511/102788
Collections: Graduate School of Informatics, Thesis
Citation (IEEE)
T. Kayadelen, “Joint Learning of Syntax and Argument Structure in Dependency Parsing,” Ph.D. - Doctoral Program, Middle East Technical University, 2023.