At a given point p, a convex function f is differentiable along a certain subspace U (the subspace along which the subdifferential ∂f(p) has zero breadth). This property opens the way to defining a suitably restricted second derivative of f at p. We do this via an intermediate function, convex on U, which we call the U-Lagrangian; it coincides with the ordinary Lagrangian in composite cases such as exact penalty and semidefinite programming. We also use this new theory to design a conceptual pattern for superlinearly convergent minimization algorithms. Finally, we establish a connection with the Moreau-Yosida regularization.
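For orientation, the construction behind the abstract can be sketched as follows; the notation (p, ḡ, V) is our assumption about the standard setup and is not spelled out in the abstract itself. Fixing a subgradient ḡ in the relative interior of ∂f(p), one takes V to be the subspace parallel to the affine hull of ∂f(p) and U = V^⊥, and defines the U-Lagrangian as a partial infimum over the V-component:

```latex
% Sketch of the U-Lagrangian construction (hypothetical notation):
% given \bar g \in \mathrm{ri}\,\partial f(p),
%   V := \mathrm{lin}\big(\partial f(p) - \bar g\big), \qquad U := V^{\perp},
% the U-Lagrangian is the convex function on U
%   L_U(u) := \inf_{v \in V}\ \big\{\, f(p + u + v) - \langle \bar g, v \rangle \,\big\}.
```

Under this construction, L_U is differentiable at 0 with ∇L_U(0) equal to the U-component of ḡ, which is what makes a restricted second derivative of f at p meaningful.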