Time complexity: subset of $k$ vectors with the shortest sum, with respect to the $\ell_\infty$ norm

I have a collection of $n$ vectors $x_1, \dots, x_n \in \mathbb{R}_{\geq 0}^{d}$. Given these vectors and an integer $k$, I want to find the subset of $k$ vectors whose sum is shortest with respect to the uniform norm. That is, find the set (possibly not unique) $W^* \subset \{x_1, \dots, x_n\}$ such that $\left|W^*\right| = k$ and

$$W^* = \arg\min\limits_{W \subset \{x_1, \dots, x_n\} \,\land\, \left|W\right| = k} \left\lVert \sum\limits_{v \in W} v \right\rVert_{\infty}$$

The brute-force solution to this problem takes $O(dkn^k)$ operations: there are $\binom{n}{k} = O(n^k)$ subsets to test, and each takes $O(dk)$ operations to compute the sum of the vectors and then take the uniform norm (in this case, just the maximum coordinate, since all the vectors are nonnegative).
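For concreteness, here is a minimal sketch of that brute force in Python (numpy assumed; the helper name `best_subset` is mine):

```python
import itertools
import numpy as np

def best_subset(X, k):
    """Try all n-choose-k subsets; keep the one whose vector sum has the
    smallest infinity norm. For nonnegative vectors this norm is just
    the maximum coordinate of the sum."""
    n = len(X)
    best_idx, best_norm = None, np.inf
    for idx in itertools.combinations(range(n), k):
        s = sum(X[i] for i in idx)   # O(dk): sum of k vectors in R^d
        norm = s.max()               # infinity norm (all entries >= 0)
        if norm < best_norm:
            best_idx, best_norm = idx, norm
    return best_idx, best_norm

# Example usage (3 vectors in R^2, choose k = 2):
X = [np.array([1.0, 2.0]), np.array([3.0, 0.5]), np.array([0.2, 0.2])]
print(best_subset(X, 2))  # -> ((0, 2), 2.2)
```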

My questions:

  1. Is there an algorithm better than brute force? Approximation algorithms are fine.

One idea I had was to consider a convex relaxation where we assign each vector a fractional weight in $[0, 1]$ and require the weights to sum to $k$. The resulting subset of $\mathbb{R}^d$ swept out by all such weighted combinations is indeed convex. However, even if we can find the optimal weight vector, I'm not sure how to use this set of weights to choose a subset of $k$ vectors. In other words, what integral rounding scheme should be used?
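For what it's worth, one natural way to write that relaxation is as a linear program with an auxiliary variable $t$ bounding every coordinate of the weighted sum. A minimal sketch using `scipy.optimize.linprog` (this formulation and the helper name `relaxed_weights` are my own reading of the idea, not a fixed specification):

```python
import numpy as np
from scipy.optimize import linprog

def relaxed_weights(X, k):
    """LP relaxation sketch: variables are (w_1, ..., w_n, t).
    Minimize t subject to
        sum_i w_i * x_i[j] <= t   for every coordinate j,
        sum_i w_i = k,   0 <= w_i <= 1,   t >= 0.
    Returns the fractional weights and the relaxed objective value."""
    X = np.asarray(X, dtype=float)               # shape (n, d)
    n, d = X.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                                  # objective: minimize t
    A_ub = np.hstack([X.T, -np.ones((d, 1))])    # X^T w - t <= 0, one row per coordinate
    b_ub = np.zeros(d)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # sum_i w_i = k
    b_eq = np.array([float(k)])
    bounds = [(0.0, 1.0)] * n + [(0.0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]
```

An obvious rounding is to keep the $k$ largest weights, but as noted above, it is unclear what guarantee (if any) that gives.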

I've also thought about dynamic programming, but I'm not sure if this would end up being faster in the worst case.

  2. Consider a variation where we want to find the optimal subset for each $k$ in $[n]$. Again, is there a better approach than naively solving the problem for each $k$? I think there should be a way to reuse information from runs at size $k$ in the runs at size $k+1$, and so on. (A naive baseline is sketched below.)
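To make the naive baseline concrete, re-solving from scratch for every $k$ (reusing the `best_subset` sketch above) would look like the following; the question is whether these runs can share work:

```python
def all_k_naive(X):
    """Naive baseline: re-run the brute-force search independently for
    every subset size k, sharing nothing across runs. Total cost is
    the sum over k of O(d k n^k)."""
    return {k: best_subset(X, k) for k in range(1, len(X) + 1)}
```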

  3. Consider the variation where, instead of a subset size $k$, one is given some target norm $r \in \mathbb{R}$. The task is to find the largest subset of $\{x_1, \dots, x_n\}$ whose sum has uniform norm $\leq r$. In principle, one would have to search over $O(2^n)$ subsets of vectors. Do the algorithms change? Also, is the decision version (for example, we could ask whether there is a subset of size $\geq k$ whose sum has uniform norm $\leq r$) NP-hard?
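Again for reference, a brute-force sketch of this last variation (the helper name `largest_feasible_subset` is mine), scanning subset sizes from largest to smallest and returning the first feasible one:

```python
import itertools
import numpy as np

def largest_feasible_subset(X, r):
    """Scan subset sizes from n down to 0 and return the first subset
    (as an index tuple) whose sum has infinity norm at most r. The
    empty subset has sum 0, so it is feasible whenever r >= 0."""
    n = len(X)
    zero = np.zeros_like(np.asarray(X[0], dtype=float))
    for m in range(n, -1, -1):
        for idx in itertools.combinations(range(n), m):
            s = sum((np.asarray(X[i], dtype=float) for i in idx), zero)
            if s.max() <= r:
                return idx
    return None  # only reachable if r < 0
```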