Automated Video Game Testing Using Synthetic and Human-Like Agents
Date: 2019-10-01
Authors: Arıyürek, Sinan; Betin Can, Aysu; Sürer, Elif
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
We present a new methodology that employs tester agents to automate video game testing. We introduce two types of agents, synthetic and human-like. Our agents are derived from Sarsa and MCTS but focus on finding defects, while traditional game-playing agents focus on maximizing game scores. The synthetic agent uses test goals generated from game scenarios, and these goals are further modified to examine the effects of unintended game transitions. The human-like agent uses test goals extracted by our proposed multiple greedy-policy inverse reinforcement learning (MGP-IRL) algorithm from tester trajectories. We use our agents to produce test sequences, and run the game with these sequences. At each run, we use an automated test oracle to check for bugs. We compared the success of human-like and synthetic agents in bug finding, and evaluated the similarity between human-like agents and human testers. We collected 427 trajectories from human testers using the GVG-AI framework and created three testbed games with 12 levels that contain 45 bugs. Our experiments reveal that human-like and synthetic agents compete with human testers. We show that MGP-IRL increases the human-likeness of agents while improving bug-finding performance.
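The core idea above — a reinforcement-learning agent whose reward targets a test goal rather than the game score — can be illustrated with a minimal Sarsa sketch. This is a hypothetical toy (a 1-D corridor game with an assumed test-goal state), not the paper's implementation; the state space, reward, and hyperparameters are all illustrative assumptions.

```python
# Minimal Sarsa-style tester-agent sketch (illustrative, not the paper's code).
# The reward favors reaching a test goal (a state to probe for defects)
# rather than maximizing the game score. The toy "game" is a 1-D corridor.
import random

N_STATES, GOAL = 5, 4          # toy corridor; GOAL is an assumed test-goal state
ACTIONS = (-1, +1)             # move left / right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose(s):
    # Epsilon-greedy action selection over the current Q-values.
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    # Reward reaching the test goal, not any in-game score.
    return s2, (1.0 if s2 == GOAL else 0.0)

random.seed(0)
for _ in range(200):           # Sarsa episodes
    s, a = 0, choose(0)
    while s != GOAL:
        s2, r = step(s, a)
        a2 = choose(s2)
        Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s2, a2)] - Q[(s, a)])
        s, a = s2, a2

# The learned greedy policy walks toward the test-goal state; the resulting
# action sequence is the "test sequence" an oracle would then check for bugs.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
print(policy[:GOAL])
```

In the paper's setting the test goals come from game scenarios (synthetic agent) or from MGP-IRL over human tester trajectories (human-like agent), and an automated oracle checks each resulting run for defects.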
URI
https://hdl.handle.net/11511/69667
Journal
IEEE Transactions on Computational Intelligence and AI in Games
DOI
https://doi.org/10.1109/tg.2019.2947597
Collections
Graduate School of Informatics, Article
Suggestions
Automated Video Game Testing Using Synthetic and Human-Like Agents
Arıyürek, Sinan; Betin Can, Aysu; Sürer, Elif (2019-06-01)
In this paper, we present a new methodology that employs tester agents to automate video game testing. We introduce two types of agents -synthetic and human-like- and two distinct approaches to create them. Our agents are derived from Reinforcement Learning (RL) and Monte Carlo Tree Search (MCTS) agents, but focus on finding defects. The synthetic agent uses test goals generated from game scenarios, and these goals are further modified to examine the effects of unintended game transitions. The human-like ag...
AUTOMATED VIDEO GAME TESTING USING REINFORCEMENT LEARNING AGENTS
Arıyürek, Sinan; Sürer, Elif; Betin Can, Aysu; Department of Bioinformatics (2022-9-21)
In this thesis, several methodologies are introduced to automate and improve video game playtesting. These methods are based on Reinforcement Learning (RL) agents. First, synthetic and human-like tester agents are proposed to automate video game testing. The synthetic agent uses test goals generated from game scenarios, and the human-like agent uses test goals extracted from tester trajectories. Tester agents are derived from Sarsa and Monte Carlo Tree Search (MCTS) but focus on finding defects, while tradi...
Evaluation and selection of case tools: a methodology and a case study
Okşar, Koray; Department of Information Systems (2010)
Today’s Computer Aided Software Engineering (CASE) technology covers nearly all activities in software development, ranging from requirement analysis to deployment. Organizations are evaluating CASE tool solutions to automate or ease their processes. While reducing human errors, these tools also increase control, visibility and auditability of the processes. However, to achieve these benefits, the right tool or tools should be selected for usage in the intended processes. This is not an easy task when the vas...
Vocational Interests toward complex occupations make a difference in STEM work life.
Toker, Yonca (2018-04-19)
The STEM Interest Complexity Measure, measuring interests toward complex tasks under the realistic and investigative work environments, was investigated with employed engineering-scientist and technologist-technician samples. Interest levels were higher for the higher-complexity engineering-scientist sample. Interest and work criteria associations were again higher for the high-complexity sample.
Assessment of Software Process and Metrics to Support Quantitative Understanding: Experience from an Undefined Task Management Process
TARHAN, AYÇA; Demirörs, Onur (2011-06-01)
Software engineering management demands the measurement, evaluation and improvement of the software processes and products. However, the utilization of measurement and analysis in software engineering is not very straightforward. It requires knowledge on the concepts of measurement, process management, and statistics as well as on their practical applications. We developed a systematic approach to evaluate the suitability of a software process and its measures for quantitative analysis, and have applied the...
Citation Formats
IEEE
S. Arıyürek, A. Betin Can, and E. Sürer, “Automated Video Game Testing Using Synthetic and Human-Like Agents,” IEEE Transactions on Computational Intelligence and AI in Games, pp. 1–21, 2019, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/69667.