The goal of this paper is to understand how exponential-time approximation algorithms can be obtained from existing polynomial-time approximation algorithms, existing parameterized exact algorithms, and existing parameterized approximation algorithms. More formally, we consider a monotone subset minimization problem over a universe of size n (e.g., Vertex Cover or Feedback Vertex Set). We have access to an algorithm that finds an α-approximate solution in time c^k · n^{O(1)} if a solution of size k exists (and, more generally, an extension algorithm that can approximate in a similar way whenever a set can be extended to a solution with k further elements). Our goal is to obtain a β-approximation algorithm for the problem running in time d^n · n^{O(1)} with d as small as possible. That is, for every fixed α, c, β ≥ 1, we would like to determine the smallest possible d that can be achieved in a model where our problem-specific knowledge is limited to checking the feasibility of a solution and invoking the α-approximate extension algorithm. Our results completely resolve this question:

1. For every fixed α, c, β ≥ 1, a simple algorithm ("approximate monotone local search") achieves the optimum value of d.
2. Given α, c, β ≥ 1, we can efficiently compute the optimum d up to any desired precision ε > 0.

Our technique gives novel results for a wide range of problems including Feedback Vertex Set, Directed Feedback Vertex Set, Odd Cycle Transversal, and Partial Vertex Cover. The monotone local search algorithm we use is a simple adaptation of earlier work [Fomin et al., J. ACM 2019; Esmer et al., ESA 2022; Gaspers and Lee, ICALP 2017]. Still, attaining the above results required us to frame the question in a different way and to overcome a major technical challenge. First, we introduce an oracle-based computational model which allows for a simple derivation of lower bounds that, unexpectedly, show that the running time of the monotone local search algorithm is optimal. Second, while it is easy to express the running time of the monotone local search algorithm in various forms, it is unclear how to actually evaluate it numerically for given values of α, β, and c. We show how the running time of the algorithm can be evaluated via a convex analysis of a continuous max-min optimization problem, overcoming the limitations of previous approaches to the α = β case [Fomin et al., J. ACM 2019; Esmer et al., ESA 2022; Gaspers and Lee, ICALP 2017].

* The full version of the paper can be accessed at https://arxiv.org/abs/2306.15331. Research supported by the European Research Council (ERC) consolidator grant No. 725978 SYSTEMATICGRAPH.
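To make the overall scheme concrete, the following is a minimal Python sketch of the sampling-plus-extension structure behind approximate monotone local search. It is an illustration, not the paper's algorithm: the names is_feasible, extend, t, and repetitions are hypothetical stand-ins for the feasibility oracle, the assumed α-approximate extension algorithm, the prefix size, and the number of sampling rounds. Choosing these parameters (and analyzing the resulting running time d^n · n^{O(1)}) is precisely what the paper addresses; the sketch leaves them as free inputs.

    import random

    def approximate_monotone_local_search(universe, is_feasible, extend, k, t, repetitions):
        # universe     : list of the n elements of the ground set
        # is_feasible  : set -> bool, checks whether a subset is a solution
        # extend       : (set X, int budget) -> a feasible superset of X using at most
        #                roughly alpha * budget extra elements, or None; stands in for
        #                the assumed alpha-approximate extension algorithm
        # k, t         : target solution size and size of the sampled prefix
        # repetitions  : number of sampling rounds (to be chosen so that some sampled
        #                prefix is "good" with high probability)
        best = set(universe) if is_feasible(set(universe)) else None
        for _ in range(repetitions):
            # Sample a uniformly random t-subset of the universe and ask the extension
            # algorithm to complete it with at most k - t further elements.
            X = set(random.sample(universe, t))
            Y = extend(X, k - t)
            if Y is not None and is_feasible(Y) and (best is None or len(Y) < len(best)):
                best = Y
        return best

In the analysis, t and repetitions would be tuned as functions of α, c, β, n, and k to obtain the optimal base d; the sketch only conveys why each iteration costs one call to the extension algorithm on a budget of k - t.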
ACM-SIAM Symposium on Discrete Algorithms (SODA)
2024
2024-12-04