Constrained optimization without Lagrange multipliers

There is an obvious alternative to using Lagrange multipliers for constrained optimization: reformulate the problem within the subspace defined by the constraints, and it automatically becomes an unconstrained problem.

It is not always obvious, though, how one can do so in specific cases.

Sometimes a singular value decomposition (SVD) does the trick. As an example, consider finding the plane \pi=(a,b,c,d) that minimizes the sum of squared distances from a set of points P_i, i \in 1 .. n, n \geq 2, while passing exactly through one given point P_0; here P_j=(x_j, y_j, z_j, 1) in homogeneous coordinates.

Of course a Lagrangian formulation is possible:

min_{a,b,c,d} \sum_{i=1}^n (a x_i + b y_i + c z_i + d)^2

subject to

a^2+b^2+c^2=1

to avoid the trivial solution, and to

a x_0 + b y_0 + c z_0 + d = 0,

leading to the Lagrangian function

\Lambda (a, b, c, d, \lambda_1, \lambda_2) = \sum_{i=1}^n (a x_i + b y_i + c z_i + d)^2 + \lambda_1 (a^2+b^2+c^2 - 1) + \lambda_2 (a x_0 + b y_0 + c z_0 + d)

But one can also follow a different way.

The constraint a x_0 + b y_0 + c z_0 + d = 0 defines a three-dimensional subspace of the parameter space, and within it the problem becomes unconstrained. Applying the singular value decomposition to the single-row matrix M = \left[ \begin{array}{cccc} x_0 & y_0 & z_0 & 1 \end{array} \right], one finds exactly one nonzero singular value, the remaining three being zero (within machine precision).

SVD decomposes the matrix M as

M = U_M \cdot W_M \cdot V_M^T

where the three columns of the 4 \times 4 matrix V_M corresponding to the null singular values, say v_1, v_2, v_3, form a basis for the nullspace of M.
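A minimal sketch of this step, using NumPy (an assumption; the original gives no code) and a made-up point P_0 = (1, 2, 3):

```python
import numpy as np

# Hypothetical point P_0 = (1, 2, 3) that the plane must contain,
# written in homogeneous coordinates as the single-row matrix M.
P0 = np.array([[1.0, 2.0, 3.0, 1.0]])

# Full SVD: Vt is 4x4 because full_matrices=True (the default).
U, w, Vt = np.linalg.svd(P0, full_matrices=True)

# M has rank 1, so its nullspace is spanned by the last three rows
# of Vt, i.e. the columns of V beyond the first.
nullspace = Vt[1:].T          # 4x3 matrix whose columns are v_1, v_2, v_3

# Each basis vector is orthogonal to P_0, up to machine precision.
print(np.allclose(P0 @ nullspace, 0.0))
```

Since the rows of V_M^T are orthonormal, v_1, v_2, v_3 come out as an orthonormal basis of the nullspace, which will be convenient later.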

Of course the solution sought, \pi=(a, b, c, d), lies in the nullspace of M (that is precisely what the constraint states), so it must be a linear combination of the basis vectors just found:

\pi = \sum_{k=1}^3 \pi_k v_k

To determine the coefficients \pi_k, recall that one wants the system

\pi \cdot P_1 = 0

\cdots

\pi \cdot P_n =0

that is:

\sum_{k=1}^3 \pi_k (v_k \cdot P_1) = 0

\cdots

\sum_{k=1}^3 \pi_k (v_k \cdot P_n) =0

satisfied in the least-squares sense. Call L the matrix of this system:

L=\left[ \begin{array}{ccc} v_{1} \cdot P_{1} & v_{2} \cdot P_{1} & v_{3} \cdot P_{1} \\ \cdots & \cdots & \cdots \\ v_{1} \cdot P_{n} & v_{2} \cdot P_{n} & v_{3} \cdot P_{n} \end{array} \right]
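In NumPy terms (again an assumption, with hypothetical stand-in arrays), building L amounts to a single matrix product once the points and the nullspace basis are stacked:

```python
import numpy as np

# Hypothetical stand-ins: rows of P are the homogeneous points P_i,
# columns of V_null are the nullspace basis vectors v_1, v_2, v_3.
P = np.array([[0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 1.1, 1.0],
              [0.0, 1.0, 0.9, 1.0]])
V_null = np.eye(4)[:, :3]     # placeholder basis, not from a real SVD

# L_{ik} = v_k · P_i: each row of L collects the three dot products
# of one point with the basis vectors.
L = P @ V_null                # shape (n, 3)
```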

Now one can draw again on the almighty SVD, obtaining L=U_L \cdot W_L \cdot V_L^T; one should find all three singular values different from zero (unless P_0 .. P_n are exactly coplanar), and the coefficients sought are, at last, the entries of the column of the 3 \times 3 right orthogonal matrix V_L corresponding to the smallest singular value; the plane is then recovered as \pi = \sum_{k=1}^3 \pi_k v_k.
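Putting the whole recipe together, here is an end-to-end sketch in NumPy (assumed; the variable names and test data are invented), fitting points scattered around the plane z = 1 while forcing the plane through P_0 = (0, 0, 1):

```python
import numpy as np

rng = np.random.default_rng(0)
P0 = np.array([0.0, 0.0, 1.0, 1.0])          # P_0 in homogeneous coordinates
xy = rng.standard_normal((20, 2))
z = 1.0 + 0.01 * rng.standard_normal(20)     # noisy points near z = 1
P = np.column_stack([xy, z, np.ones(20)])    # rows are the points P_i

# Step 1: nullspace basis of the single-row matrix M = [x_0 y_0 z_0 1].
_, _, Vt_M = np.linalg.svd(P0[None, :])
V_null = Vt_M[1:].T                          # columns are v_1, v_2, v_3

# Step 2: build the least-squares system L pi = 0, with L_{ik} = v_k · P_i.
L = P @ V_null

# Step 3: the coefficients pi_k form the right singular vector of L
# with the smallest singular value; map back to plane coordinates.
_, _, Vt_L = np.linalg.svd(L)
pi = V_null @ Vt_L[-1]                       # pi = (a, b, c, d)

assert abs(pi @ P0) < 1e-9                   # the plane contains P_0 exactly
```

Because the v_k and the chosen column of V_L are all unit vectors, pi comes out with unit Euclidean norm over all four components; if the normalization a^2+b^2+c^2=1 is wanted instead, divide pi by the norm of its first three entries.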

