Convex optimization: convergence of a fixed-point algorithm for a concave objective function

Suppose we have the objective $\max\limits_{x} \sum\limits_{i} f_i(x_i)$ subject to the constraints $x_i \geq 0$ and $\sum\limits_{i} x_i = 1$.

Each function $f_i$ is continuous and differentiable on the interval, with the following properties: $f_i(x_i) \geq 0$, $f_i(0) = 0$, $f'_i(x) > 0$, $f'_i(1) > 0$, $f''_i(x_i) < 0$.

Adding a Lagrange multiplier term and taking the partial derivative, we obtain

$\frac{\partial \left[ \sum_i f_i(x_i) + \lambda \left(1 - \sum_i x_i\right) \right]}{\partial x_i} = f'_i(x_i) - \lambda$.

Setting this expression to zero, moving $\lambda$ to the right-hand side, and multiplying both sides by $x_i$, we arrive at the fixed-point update
$x_i^{\text{new}} \propto x_i \cdot f'_i(x_i)$.
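For concreteness, here is a minimal numerical sketch of the iteration in Python. The utilities $f_i(x) = w_i \log(1 + x)$, the weights `w`, the tolerance, and the iteration cap are illustrative assumptions, not part of the question; these $f_i$ just happen to satisfy the stated conditions ($f_i(0)=0$, $f_i \geq 0$, $f'_i > 0$, $f''_i < 0$).

```python
import numpy as np

# Hypothetical concave utilities f_i(x) = w_i * log(1 + x):
# f_i(0) = 0, f_i(x) >= 0, f_i'(x) > 0, f_i''(x) < 0, matching the assumptions above.
w = np.array([3.0, 2.0, 1.0])

def objective(x):
    return np.sum(w * np.log1p(x))

def f_prime(x):
    return w / (1.0 + x)

# Start from a strictly positive point on the simplex.
x = np.full(len(w), 1.0 / len(w))

for it in range(200):
    # Fixed-point update: x_i^new proportional to x_i * f_i'(x_i),
    # renormalized so the components sum to 1 again.
    x_new = x * f_prime(x)
    x_new /= x_new.sum()

    print(f"iter {it:3d}  objective {objective(x_new):.8f}")
    if np.max(np.abs(x_new - x)) < 1e-12:
        x = x_new
        break
    x = x_new

print("fixed point:", x)
```

Printing the objective at each step makes it easy to check numerically, for this particular choice of $f_i$, whether the value increases monotonically and whether the iterates settle.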

Does this iterative algorithm monotonically increase the concave objective, and is it guaranteed to converge?