Abstraction Refinement Guided by a Learnt Probabilistic Model

The core challenge in designing an effective static program analysis is to find a good program abstraction - one that retains only details relevant to a given query. In this paper, we present a new approach for automatically finding such an abstraction. Our approach uses a pessimistic strategy, which can optionally use guidance from a probabilistic model. Our approach applies to parametric static analyses implemented in Datalog, and is based on counterexample-guided abstraction refinement. For each untried abstraction, our probabilistic model provides a probability of success, while the size of the abstraction provides an estimate of its cost in terms of analysis time. Combining these two metrics, probability and cost, our refinement algorithm picks an optimal abstraction. Our probabilistic model is a variant of the Erdős–Rényi random graph model, and it is tunable by what we call hyperparameters. We present a method to learn good values for these hyperparameters, by observing past runs of the analysis on an existing codebase. We evaluate our approach on an object-sensitive pointer analysis for Java programs, with two client analyses (PolySite and Downcast).
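
The abstract describes a refinement loop that trades off a predicted probability of success against a size-based cost estimate when choosing the next abstraction to try. Below is a minimal, hypothetical sketch of that trade-off in Python; the names (Abstraction, pick_abstraction, prob_success) and the probability-per-unit-cost scoring rule are illustrative assumptions, not the objective or API defined in the paper.

    # Hypothetical sketch of the probability/cost trade-off described in the
    # abstract. The scoring rule (probability of success per unit of estimated
    # cost) and all names are illustrative; the paper defines its own objective.

    from dataclasses import dataclass
    from typing import Callable, Iterable, Optional

    @dataclass(frozen=True)
    class Abstraction:
        # e.g. the set of program points treated with higher precision
        params: frozenset

        def size(self) -> int:
            # Larger abstractions are assumed to cost more analysis time.
            return len(self.params)

    def pick_abstraction(candidates: Iterable[Abstraction],
                         prob_success: Callable[[Abstraction], float]) -> Optional[Abstraction]:
        """Pick the candidate with the best predicted probability of resolving
        the query relative to its size-based cost estimate."""
        best, best_score = None, float("-inf")
        for a in candidates:
            cost = 1 + a.size()             # crude cost proxy: abstraction size
            score = prob_success(a) / cost  # illustrative trade-off, not the paper's formula
            if score > best_score:
                best, best_score = a, score
        return best

In the paper, the probability estimate itself comes from the learnt Erdős–Rényi-style model with tuned hyperparameters; here it is left abstract as a caller-supplied function.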
Publisher
ASSOC COMPUTING MACHINERY
Issue Date
2016-01
Language
English
Article Type
Article; Proceedings Paper
Citation

ACM SIGPLAN NOTICES, v.51, no.1, pp.485 - 498

ISSN
0362-1340
DOI
10.1145/2837614.2837663
URI
http://hdl.handle.net/10203/225265
Appears in Collection
CS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.