Generating test input with deep reinforcement learning

Abstract
Test data generation is a tedious and laborious process. Search-Based Software Testing (SBST) automatically generates test data that optimise structural test criteria using metaheuristic algorithms. In essence, metaheuristic algorithms perform systematic trial and error guided by the feedback of a fitness function. This is similar to a reinforcement learning agent, which iteratively chooses an action based on the current state to maximise the cumulative reward. Inspired by this analogy, this paper investigates the feasibility of employing reinforcement learning in SBST to replace human-designed metaheuristic algorithms. We reformulate the software under test (SUT) as an environment for reinforcement learning and present GunPowder, a novel SBST framework that extends the SUT into such an environment. We train a Double Deep Q-Networks (DDQN) agent with a deep neural network and evaluate the effectiveness of our approach in a small empirical study. We find that the agent can learn metaheuristic algorithms for SBST, achieving 100% branch coverage for the training functions. Our study sheds light on the future integration of deep neural networks and SBST.
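
The reformulation described in the abstract can be pictured as a standard reinforcement learning loop over the SUT. The following is a minimal sketch, not the authors' GunPowder framework: the toy SUT, the state and action encoding, and the branch-distance reward shaping are illustrative assumptions, and a simple epsilon-greedy one-step lookahead policy stands in for the DDQN agent that the paper actually trains.

    # Minimal sketch (assumptions, not the paper's implementation): test input
    # generation for one target branch framed as trial-and-error over the SUT,
    # with a reward derived from the decrease in branch distance.
    import random

    def sut(x, y):
        # Toy software under test; the nested condition is the target branch.
        if x > 100:
            if y == x:          # target branch to cover
                return True
        return False

    def branch_distance(x, y):
        # Classic SBST branch distance for the path "x > 100 and y == x".
        d = 0.0
        if not (x > 100):
            d += (100 - x) + 1
        d += abs(y - x)
        return d

    # Actions perturb the current input vector; a learned agent would pick them.
    ACTIONS = [(+1, 0), (-1, 0), (0, +1), (0, -1), (+10, 0), (0, +10)]

    def episode(max_steps=1000, epsilon=0.1):
        x, y = 0, 0
        prev = branch_distance(x, y)
        for _ in range(max_steps):
            # Epsilon-greedy over a one-step lookahead of the fitness value;
            # in the paper a trained DDQN replaces this hand-written policy.
            if random.random() < epsilon:
                dx, dy = random.choice(ACTIONS)
            else:
                dx, dy = min(ACTIONS,
                             key=lambda a: branch_distance(x + a[0], y + a[1]))
            x, y = x + dx, y + dy
            dist = branch_distance(x, y)
            reward = prev - dist      # positive reward for moving toward the branch
            prev = dist
            if sut(x, y):             # target branch covered: episode succeeds
                return (x, y)
        return None

    if __name__ == "__main__":
        print(episode())              # e.g. (101, 101) when the branch is reached
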
Publisher
IEEE Computer Society
Issue Date
2018-05-29
Language
English
Citation

11th ACM/IEEE International Workshop on Search-Based Software Testing, SBST 2018, co-located with the 40th International Conference on Software Engineering, ICSE 2018, pp.51 - 58

DOI
10.1145/3194718.3194720
URI
http://hdl.handle.net/10203/246834
Appears in Collection
CS-Conference Papers (Conference Papers)