9.2 The Lagrange formalism

The Lagrange method is a simple technique for solving extreme value problems under constraints. In general, such a problem has the following form:

\[
\max_{x,y} F(x,y) \quad \text{under the condition that} \quad g(x,y) \leq c.
\]

There may also be further constraints. Since the number of constraints should be smaller than the number of variables (otherwise there are usually only isolated points, or no points at all, which fulfill all constraints), we use a single constraint for two variables. In addition, particularly for economic problems, the non-negativity conditions $x \geq 0$ and $y \geq 0$ often apply. In this generality, it must first be examined whether there is a maximum in the interior of the constraint set ($g(x,y) < c$). Only then can an extremum on the boundary of the constraint be determined with the Lagrange method. If, however, the function $F$ is monotonic and the feasible set is bounded (i.e. $x = \infty$ or $y = \infty$ is excluded by the constraint), the maximum is always attained at the boundary. This is always the case for the problems we investigate: usually, non-saturation applies to utility, i.e. more is better, and the utility function is therefore monotonically increasing. In these cases the problem simplifies to

\[
\max_{x,y} F(x,y) \quad \text{under the condition that} \quad g(x,y) = c
\]

and you solve this problem with the Lagrange method. For this purpose, the so-called Lagrange function $\mathbb{L}$ is set up as

\[
\mathbb{L}(x,y,\lambda) = F(x,y) + \lambda \bigl(g(x,y) - c\bigr)
\]

and is maximized. The variable $\lambda$ is called the Lagrange multiplier. If there are several constraints, each constraint $g_i$ receives its own Lagrange multiplier $\lambda_i$ and is added to the Lagrange function. For the maximization, the function is differentiated as usual and the first derivatives are set to zero. The resulting equations are called "First Order Conditions" (FOC).

\[
\begin{aligned}
\frac{\partial \mathbb{L}(x,y,\lambda)}{\partial x} &= \frac{\partial F(x,y)}{\partial x} + \lambda \frac{\partial g(x,y)}{\partial x} \overset{!}{=} 0 \\
\frac{\partial \mathbb{L}(x,y,\lambda)}{\partial y} &= \frac{\partial F(x,y)}{\partial y} + \lambda \frac{\partial g(x,y)}{\partial y} \overset{!}{=} 0 \\
\frac{\partial \mathbb{L}(x,y,\lambda)}{\partial \lambda} &= g(x,y) - c \overset{!}{=} 0
\end{aligned}
\]

If this system of equations is solved, the solution, provided it is feasible, is a maximum point. FOC 3 reproduces the constraint, i.e. it ensures that the constraint is fulfilled. A common step in solving the system of equations is to divide the first two equations by each other in order to eliminate $\lambda$, because the quotients of the partial derivatives often have a simple form:

\[
\frac{\partial_x F(x,y)}{\partial_y F(x,y)} = \frac{\partial_x g(x,y)}{\partial_y g(x,y)}
\]
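To make these steps concrete, the following Python sketch uses SymPy on a hypothetical example that is not part of the text: a Cobb-Douglas objective $F(x,y) = \sqrt{xy}$ and a linear budget-type constraint $g(x,y) = 2x + 3y$ with $c = 12$ (functional form, prices and income are chosen purely for illustration). It builds the Lagrange function with the sign convention $\mathbb{L} = F + \lambda\,(g - c)$ used above, derives the three FOCs and solves them.

```python
import sympy as sp

# Choice variables are positive; the multiplier is merely real (with the
# convention L = F + lam*(g - c) it turns out negative for increasing F and g).
x, y = sp.symbols("x y", positive=True)
lam = sp.symbols("lam", real=True)

# Hypothetical example (not from the text): Cobb-Douglas objective and a
# linear budget-type constraint 2x + 3y = 12.
F = sp.sqrt(x * y)
g = 2 * x + 3 * y
c = 12

# Lagrange function, sign convention as in the text
L = F + lam * (g - c)

# First order conditions: all first partial derivatives set to zero
foc = [sp.Eq(sp.diff(L, v), 0) for v in (x, y, lam)]

# Solve the FOC system
print(sp.solve(foc, (x, y, lam), dict=True))

# Ratio trick: dividing the first two FOCs eliminates lam, so F_x / F_y
# must equal g_x / g_y at the optimum.
print(sp.simplify(sp.diff(F, x) / sp.diff(F, y)),  # y/x
      sp.simplify(sp.diff(g, x) / sp.diff(g, y)))  # 2/3
```

For this specification the FOC system yields $x = 3$, $y = 2$ and $\lambda = -\sqrt{6}/12$ (the negative sign merely reflects the convention $\mathbb{L} = F + \lambda(g - c)$), and the printed quotients reproduce the condition from the equation above, $y/x = 2/3$.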

As a prerequisite for the proof of a maximum, it must still be checked whether the function $\mathbb{L}(x,y,\lambda)$ is concave in $(x,y)$. If it is convex, the point found is a minimum. If we consider linear constraints such as the budget constraint or the cost function, it is clear that their second derivatives always vanish. For the proof of concavity (convexity), we can therefore restrict ourselves to the function $F$. For example, a function $F$ is concave if it exhibits decreasing returns to scale. If the returns to scale are constant or increasing, the function is usually convex in one direction and concave in another. In these cases, a more detailed analysis must be performed to determine whether a maximum or a minimum is present. As proof of a maximum, for example, a negative second directional derivative in the direction of the constraint would suffice. It would also be sufficient to show that the function is monotonically increasing and the isoquants are convex. Other proofs are possible as well.
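Continuing the same hypothetical example, the following sketch illustrates the second-order check described above: because the constraint is linear, the Hessian of $\mathbb{L}$ in $(x,y)$ reduces to the Hessian of $F$, and a negative second directional derivative along a direction tangent to the constraint at the candidate point $(x,y) = (3,2)$ confirms a maximum.

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)
lam = sp.symbols("lam", real=True)

# Same hypothetical example as above
F = sp.sqrt(x * y)
g = 2 * x + 3 * y
L = F + lam * (g - 12)

# Hessian of L with respect to (x, y); the linear constraint contributes
# nothing, so this is just the Hessian of F.
H = sp.hessian(L, (x, y))

# A direction tangent to the constraint: orthogonal to grad g = (2, 3)
d = sp.Matrix([3, -2])

# Second directional derivative along the constraint at the candidate point
second_dir = (d.T * H * d)[0, 0]
print(sp.simplify(second_dir.subs({x: 3, y: 2})))  # -sqrt(6) < 0, hence a maximum
```

Choosing $d$ orthogonal to $\nabla g = (2,3)$ means the check is performed only in the direction that remains available once the constraint is imposed; since the value is negative, the candidate point from the FOCs is indeed the constrained maximum in this example.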

