Automated Video Game Testing Using Synthetic and Human-Like Agents

2019-10-01
Arıyürek, Sinan
Betin Can, Aysu
Sürer, Elif
We present a new methodology that employs tester agents to automate video game testing. We introduce two types of agents, synthetic and human-like. Our agents are derived from Sarsa and Monte Carlo Tree Search (MCTS), but they focus on finding defects, whereas traditional game-playing agents focus on maximizing game scores. The synthetic agent uses test goals generated from game scenarios, and these goals are further modified to examine the effects of unintended game transitions. The human-like agent uses test goals extracted by our proposed multiple greedy-policy inverse reinforcement learning (MGP-IRL) algorithm from tester trajectories. We use our agents to produce test sequences and run the game with these sequences. At each run, we use an automated test oracle to check for bugs. We compared the success of human-like and synthetic agents in bug finding, and evaluated the similarity between human-like agents and human testers. We collected 427 trajectories from human testers using the GVG-AI framework and created three testbed games with 12 levels that contain 45 bugs. Our experiments reveal that human-like and synthetic agents are competitive with human testers. We show that MGP-IRL increases the human-likeness of agents while improving bug-finding performance.
IEEE Transactions on Computational Intelligence and AI in Games
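The key idea in the abstract, repurposing a game-playing RL agent so that its reward signals defect discovery rather than game score, can be illustrated with a minimal sketch. This is not the paper's implementation; the toy corridor environment, the stand-in oracle, and all names below are illustrative assumptions. It shows a Sarsa agent whose reward comes from an automated oracle flagging a rule violation:

```python
import random

# Toy "game": a corridor of states 0..4. Entering state 4 triggers a
# rule violation that an automated test oracle would flag (the "bug").
# This environment and oracle are illustrative stand-ins, not the paper's.
N_STATES = 5
ACTIONS = (-1, +1)  # step left / step right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    bug_found = (nxt == N_STATES - 1)     # stand-in oracle check
    reward = 1.0 if bug_found else -0.01  # reward defect discovery, not score
    return nxt, reward, bug_found

def sarsa_tester(episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Train an epsilon-greedy Sarsa agent rewarded for triggering the oracle."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def policy(s):
        if rng.random() < eps:
            return rng.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = 0
        a = policy(s)
        for _ in range(20):  # episode length cap
            s2, r, done = step(s, a)
            a2 = policy(s2)
            # Sarsa update: on-policy TD target uses the action actually taken next
            Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
            s, a = s2, a2
            if done:
                break
    return Q

Q = sarsa_tester()
# The learned greedy policy should steer toward the bug-triggering state,
# i.e. from state 3 the agent prefers stepping right.
```

Because the oracle, not the score, defines the reward, the learned policy seeks out states that expose defects; the synthetic and human-like agents in the paper differ mainly in where their test goals come from.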

Citation Formats
S. Arıyürek, A. Betin Can, and E. Sürer, “Automated Video Game Testing Using Synthetic and Human-Like Agents,” IEEE Transactions on Computational Intelligence and AI in Games, pp. 1–21, 2019, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/69667.