Markov Decision Processes Based Optimal Control Policies for Probabilistic Boolean Networks

2004-06-01
Abul, Osman
Alhajj, Reda
Polat, Faruk
This paper addresses the formulation of control for probabilistic Boolean genetic networks, a major problem that has not yet been sufficiently investigated. We argue that a monitoring stage is necessary after the control stage to provide guidance about the evolution of the investigated state. For this purpose, we developed methods for generating optimal control policies for each of the following five cases: finite control, infinite control, finite control-infinite monitoring, finite control-finite monitoring, and repeated finite control-finite monitoring. Our initial proposal was based on using action cost functions in the process; in this study, we propose Markov decision processes as an alternative to the action cost function approach. We conducted experiments on two simple illustrative examples to demonstrate that the five cases considered are necessary and effective, and that they genuinely matter when developing optimal control policies; the obtained results are promising.
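The finite-control case described in the abstract amounts to finite-horizon backward induction over an MDP whose states are the Boolean network's gene-expression vectors. The sketch below is an illustrative assumption, not the paper's actual model: a toy 2-gene network (4 states), two actions (no intervention vs. an intervention with an added control cost), and randomly generated transition probabilities standing in for a real PBN's dynamics.

```python
import numpy as np

# Toy PBN with 2 genes -> 4 states (00, 01, 10, 11). The transition
# probabilities below are randomly generated placeholders, NOT taken
# from the paper; a real PBN would derive them from its predictor sets.
n_states, n_actions, horizon = 4, 2, 5

# P[a][s, s'] : probability of moving from state s to s' under action a.
# Action 0 = let the network evolve; action 1 = intervene (e.g., flip a gene).
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))

# Illustrative costs: undesirable states (gene 1 ON, states 2 and 3)
# incur cost 1; intervening adds a control cost of 0.5.
state_cost = np.array([0.0, 0.0, 1.0, 1.0])
action_cost = np.array([0.0, 0.5])

# Finite-horizon backward induction: at each step t, pick the action
# minimizing immediate cost plus expected cost-to-go.
V = state_cost.copy()                     # terminal cost
policy = np.zeros((horizon, n_states), dtype=int)
for t in reversed(range(horizon)):
    Q = state_cost[None, :] + action_cost[:, None] + P @ V  # (action, state)
    policy[t] = Q.argmin(axis=0)          # best action per state at step t
    V = Q.min(axis=0)                     # cost-to-go for step t

print("optimal first-step policy per state:", policy[0])
```

The infinite-control case would replace the backward loop with value iteration to a fixed point under a discount factor; the monitoring variants would keep propagating costs without allowing interventions during the monitoring horizon.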

Suggestions

Asymptotical lower limits on required number of examples for learning boolean networks
Abul, Osman; Alhajj, Reda; Polat, Faruk (2006-11-03)
This paper studies the asymptotic lower limits on the required number of samples for identifying Boolean networks, which is given as Omega(log n) in the literature for fully random samples. It has also been found that O(log n) samples are sufficient with high probability. Our main motivation is to provide tight asymptotic lower limits for samples obtained from time series experiments. Using results from the literature on random Boolean networks, lower limits on the required number of samples from tim...
Verification of Modular Diagnosability With Local Specifications for Discrete-Event Systems
Schmidt, Klaus Verner (Institute of Electrical and Electronics Engineers (IEEE), 2013-09-01)
In this paper, we study the diagnosability verification for modular discrete-event systems (DESs), i.e., DESs that are composed of multiple components. We focus on a particular modular architecture, where each fault in the system must be uniquely identified by the modular component where it occurs and solely based on event observations of that component. Hence, all diagnostic computations for faults to be detected in this architecture can be performed locally on the respective modular component, and the obt...
Multiobjective evolutionary feature subset selection algorithm for binary classification
Deniz Kızılöz, Firdevsi Ayça; Coşar, Ahmet; Dökeroğlu, Tansel; Department of Computer Engineering (2016)
This thesis investigates the performance of multiobjective feature subset selection (FSS) algorithms combined with the state-of-the-art machine learning techniques for binary classification problem. Recent studies try to improve the accuracy of classification by including all of the features in the dataset, neglecting to determine the best performing subset of features. However, for some problems, the number of features may reach thousands, which will cause too much computation power to be consumed during t...
Maximally Permissive Hierarchical Control of Decentralized Discrete Event Systems
Schmidt, Klaus Werner (2011-04-01)
The subject of this paper is the synthesis of natural projections that serve as nonblocking and maximally permissive abstractions for the hierarchical and decentralized control of large-scale discrete event systems. To this end, existing concepts for nonblocking abstractions such as natural observers and marked string accepting (msa)-observers are extended by local control consistency (LCC) as a novel sufficient condition for maximal permissiveness. Furthermore, it is shown that, similar to the natural obse...
Employing decomposable partially observable Markov decision processes to control gene regulatory networks
Erdogdu, Utku; Polat, Faruk; Alhajj, Reda (2017-11-01)
Objective: Formulate the induction and control of gene regulatory networks (GRNs) from gene expression data using Partially Observable Markov Decision Processes (POMDPs).
Citation Formats
O. Abul, R. Alhajj, and F. Polat, “Markov Decision Processes Based Optimal Control Policies for Probabilistic Boolean Networks,” presented at the BIBE′04 - 4th IEEE Symposium on Bioinformatics and Bioengineering (2004), 2004, Accessed: 00, 2021. [Online]. Available: https://hdl.handle.net/11511/73948.