The *principle of least action* states that the path followed in time by a physical system subject to conservative forces is the one that minimizes (or, more generally, makes stationary) the action

$$S = \int_{t_1}^{t_2} L \, dt,$$

where the *lagrangian* $L = T - V$ is the difference between kinetic and potential energy, and the path goes through assigned points at the limits of integration: $q(t_1) = q_1$, $q(t_2) = q_2$.

In the 18th century, Euler and Lagrange showed how to obtain the equations of motion, in the form of the so-called *Euler-Lagrange equations*, from the principle of least action, and hence the path as the solution of these equations.

It turns out that there is also a different way to obtain the path directly from the principle of least action, without the intermediate step of the Euler-Lagrange equations.

I will illustrate this new method with a toy example: a material point of mass $m$ moving in one dimension along $x$ under the effect of gravity directed along $-x$. This means $T = \frac{1}{2} m \dot{x}^2$ and $V = m g x$, where $g$ is the acceleration of gravity. Without loss of generality, translate the origin to have $t_1 = 0$ and $x_1 = 0$.

Let me express $x$ within $[0, t_2]$ as a power series (see my previous post) of $t$: $x(t) = \sum_{n=0}^{\infty} a_n t^n$. Then

$$\dot{x}(t) = \sum_{n=1}^{\infty} n a_n t^{n-1}.$$

The lagrangian can be written as

$$L = \sum_{k=0}^{\infty} c_k t^k$$

with

$$c_k = \frac{m}{2} \sum_{j=1}^{k+1} j (k+2-j) \, a_j a_{k+2-j} - m g a_k.$$
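As a quick numerical sanity check (a sketch of mine, not from the derivation: the trial coefficients and the values of $m$ and $g$ are arbitrary, as are all the Python names), the formula for the $c_k$s can be compared with a direct evaluation of the lagrangian:

```python
# Sketch (my own check): compute the coefficients c_k of the Lagrangian's
# power series and compare sum_k c_k t^k with the direct value
# L = (m/2) x'(t)^2 - m g x(t) on an arbitrary trial path.
m, g = 1.0, 9.8
a = [0.0, 1.5, -0.7, 0.3]        # arbitrary trial coefficients, a_0 = 0

def c(k):
    """c_k = (m/2) sum_{j=1}^{k+1} j (k+2-j) a_j a_{k+2-j} - m g a_k."""
    kin = sum(j * (k + 2 - j) * a[j] * a[k + 2 - j]
              for j in range(max(1, k + 3 - len(a)), min(k + 2, len(a))))
    return 0.5 * m * kin - m * g * (a[k] if k < len(a) else 0.0)

t = 0.4
x  = sum(an * t**n for n, an in enumerate(a))
xd = sum(n * an * t**(n - 1) for n, an in enumerate(a) if n >= 1)
L_direct = 0.5 * m * xd**2 - m * g * x
L_series = sum(c(k) * t**k for k in range(2 * len(a)))
assert abs(L_direct - L_series) < 1e-9
```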

Please note that, even if the potential $V$ were not linear in $x$ but had a more complicated form, it would still be easy to express it in terms of the coefficients $a_n$ of $x$ and of the coefficients of the power series of $V$.

The action is

$$S = \int_0^{t_2} L \, dt = \sum_{k=0}^{\infty} \frac{c_k}{k+1} \, t_2^{k+1}$$

and, through the $c_k$s, it is a function of the coefficients $a_n$.
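The series expression for the action can be checked the same way, against a brute-force numerical integration (again a sketch of mine with arbitrary sample values, not part of the derivation):

```python
# Sketch (my own check): compare the action computed from the power-series
# coefficients c_k with a trapezoidal integration of L along a trial path.
m, g, t2 = 1.0, 9.8, 1.0
a = [0.0, 1.0, -2.0, 0.5]            # trial path x(t) = t - 2t^2 + 0.5t^3

def x(t):
    return sum(an * t**n for n, an in enumerate(a))

def v(t):
    return sum(n * an * t**(n - 1) for n, an in enumerate(a) if n >= 1)

def L(t):
    return 0.5 * m * v(t)**2 - m * g * x(t)

def c(k):
    """c_k = (m/2) sum_{j=1}^{k+1} j (k+2-j) a_j a_{k+2-j} - m g a_k."""
    kin = sum(j * (k + 2 - j) * a[j] * a[k + 2 - j]
              for j in range(max(1, k + 3 - len(a)), min(k + 2, len(a))))
    return 0.5 * m * kin - m * g * (a[k] if k < len(a) else 0.0)

# S = sum_k c_k t2^(k+1) / (k+1); c_k vanishes beyond the truncation
S_series = sum(c(k) * t2**(k + 1) / (k + 1) for k in range(2 * len(a)))

# trapezoidal rule as an independent estimate of the integral of L
N = 20000
h = t2 / N
S_numeric = h * (0.5 * L(0.0) + sum(L(i * h) for i in range(1, N)) + 0.5 * L(t2))

assert abs(S_series - S_numeric) < 1e-6
```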

The condition at the lower limit of integration, $x(0) = 0$, forces $a_0 = 0$, while the condition at the upper limit of integration, $x(t_2) = x_2$, becomes the equation $\sum_{n=1}^{\infty} a_n t_2^n = x_2$.

I have at this point a function $S$ of which I want to find a stationary point with respect to the infinitely many variables $a_n$, subject to the constraint $\sum_{n=1}^{\infty} a_n t_2^n = x_2$. Apart from lagrangian mechanics, there exists another result due to Lagrange, the *method of Lagrange multipliers*, which could do this work, even if it is normally applied to optimization problems of finite dimensionality; let me try it in this context.
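Before applying it to the infinite-dimensional problem, here is the method of Lagrange multipliers on a deliberately tiny finite-dimensional example (my own illustration, not part of the derivation): minimize $x^2 + y^2$ subject to $x + y = 1$; the stationarity conditions of $\Lambda = x^2 + y^2 + \lambda (x + y - 1)$ form a small linear system.

```python
# Sketch (my own toy example): Lagrange multipliers in two dimensions.
# Stationarity of Lambda = x^2 + y^2 + lam*(x + y - 1) gives
#   2x + lam = 0,  2y + lam = 0,  x + y = 1  ->  x = y = 1/2, lam = -1.
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, 1.0])
x, y, lam = np.linalg.solve(A, b)

assert abs(x - 0.5) < 1e-12
assert abs(y - 0.5) < 1e-12
assert abs(lam + 1.0) < 1e-12
```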

The lagrangian of the optimization problem is

$$\Lambda = S + \lambda \left( \sum_{n=1}^{\infty} a_n t_2^n - x_2 \right),$$

where $\lambda$ is the Lagrange multiplier, and the system of equations giving the optimal values of the $a_n$s is

$$\frac{\partial \Lambda}{\partial a_n} = 0 \quad (n = 1, 2, \ldots), \qquad \frac{\partial \Lambda}{\partial \lambda} = 0,$$

that is

$$\frac{\partial S}{\partial a_n} + \lambda t_2^n = 0 \quad (n = 1, 2, \ldots), \qquad \sum_{n=1}^{\infty} a_n t_2^n = x_2.$$

Now some work on the partial derivatives is necessary:

$$\frac{\partial c_k}{\partial a_n} = m n (k+2-n) \, a_{k+2-n} - m g \, \delta_{kn}$$

for $1 \le n \le k+1$, with $\delta$ the Kronecker delta, and

$$\frac{\partial c_k}{\partial a_n} = 0$$

for $n > k+1$; so I can adjust the lower limit of summation in the expression of the derivative of $S$:

$$\frac{\partial S}{\partial a_n} = \sum_{k=n-1}^{\infty} \frac{m n (k+2-n)}{k+1} \, a_{k+2-n} \, t_2^{k+1} - \frac{m g}{n+1} \, t_2^{n+1} = m n \sum_{j=1}^{\infty} \frac{j}{n+j-1} \, a_j \, t_2^{n+j-1} - \frac{m g}{n+1} \, t_2^{n+1}.$$
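The closed form of $\partial S / \partial a_n$ can be verified against a central finite difference of a truncated action (a sketch of mine; the truncation order $M$ and all sample values are arbitrary choices, not part of the derivation):

```python
# Sketch (my own check): verify the closed form of dS/da_n against a
# central finite difference, truncating the series at a_1..a_M.
import random

m, g, t2 = 1.0, 9.8, 1.0
M = 8                                  # coefficients a_1..a_M kept
KMAX = 2 * M                           # powers of t kept in L

def action(a):
    """S = sum_k c_k t2^(k+1)/(k+1) for x(t) = sum_{n=1}^M a[n] t^n."""
    S = 0.0
    for k in range(KMAX + 1):
        kin = sum(j * (k + 2 - j) * a[j] * a[k + 2 - j]
                  for j in range(max(1, k + 2 - M), min(k + 2, M + 1)))
        ck = 0.5 * m * kin - m * g * (a[k] if k <= M else 0.0)
        S += ck * t2**(k + 1) / (k + 1)
    return S

def dS_da(a, n):
    """m n sum_j j/(n+j-1) a_j t2^(n+j-1) - m g t2^(n+1)/(n+1)."""
    return (m * n * sum(j / (n + j - 1) * a[j] * t2**(n + j - 1)
                        for j in range(1, M + 1))
            - m * g * t2**(n + 1) / (n + 1))

random.seed(0)
a = [0.0] + [random.uniform(-1, 1) for _ in range(M)]   # a_0 = 0

h = 1e-6
for n in range(1, M + 1):
    ap = a[:]; ap[n] += h
    am = a[:]; am[n] -= h
    fd = (action(ap) - action(am)) / (2 * h)
    assert abs(fd - dS_da(a, n)) < 1e-5
```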

Summarizing, the system of infinitely many linear equations in the infinitely many unknowns $a_n$, $\lambda$ can be written as

$$m n \sum_{j=1}^{\infty} \frac{j}{n+j-1} \, a_j \, t_2^{n+j-1} - \frac{m g}{n+1} \, t_2^{n+1} + \lambda t_2^n = 0 \quad (n = 1, 2, \ldots), \qquad \sum_{n=1}^{\infty} a_n t_2^n = x_2,$$

or, in full,

$$\begin{aligned}
m \left( a_1 t_2 + a_2 t_2^2 + a_3 t_2^3 + \cdots \right) - \frac{m g}{2} t_2^2 + \lambda t_2 &= 0 \\
2 m \left( \tfrac{1}{2} a_1 t_2^2 + \tfrac{2}{3} a_2 t_2^3 + \tfrac{3}{4} a_3 t_2^4 + \cdots \right) - \frac{m g}{3} t_2^3 + \lambda t_2^2 &= 0 \\
3 m \left( \tfrac{1}{3} a_1 t_2^3 + \tfrac{2}{4} a_2 t_2^4 + \tfrac{3}{5} a_3 t_2^5 + \cdots \right) - \frac{m g}{4} t_2^4 + \lambda t_2^3 &= 0 \\
&\;\;\vdots \\
a_1 t_2 + a_2 t_2^2 + a_3 t_2^3 + \cdots &= x_2.
\end{aligned}$$

Subtracting $m$ times the constraint from the first equation one obtains

$$\lambda = \frac{m g t_2}{2} - \frac{m x_2}{t_2}.$$

Replacing this value of $\lambda$ in the rest of the system, the first equation and the constraint become identical and, with some manipulation, one is left with

$$2 \left( \tfrac{1}{2} a_1 + \tfrac{2}{3} a_2 t_2 + \tfrac{3}{4} a_3 t_2^2 + \cdots \right) = \frac{x_2}{t_2} - \frac{g t_2}{6}$$

or, in general,

$$n \sum_{j=1}^{\infty} \frac{j}{n+j-1} \, a_j \, t_2^{j-1} = \frac{x_2}{t_2} + \frac{(1-n) \, g t_2}{2 (n+1)} \quad (n = 2, 3, \ldots).$$
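As a spot check of this reduced form (my own, with arbitrary sample values), the known free-fall coefficients should satisfy it for every $n$:

```python
# Sketch (my own spot check): the free-fall coefficients
# a_1 = x2/t2 + g*t2/2, a_2 = -g/2, a_n = 0 for n >= 3 should satisfy
#   n * sum_j j/(n+j-1) * a_j * t2^(j-1) = x2/t2 + (1-n)*g*t2/(2*(n+1)).
g, t2, x2 = 9.8, 1.0, 1.0                 # arbitrary sample values
a1 = x2 / t2 + g * t2 / 2
a2 = -g / 2

for n in range(2, 50):
    lhs = n * (1 / n * a1 + 2 / (n + 1) * a2 * t2)   # only j = 1, 2 survive
    rhs = x2 / t2 + (1 - n) * g * t2 / (2 * (n + 1))
    assert abs(lhs - rhs) < 1e-9
```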

In the conventional way I find the solution of the problem for $x(0) = 0$ and $x(t_2) = x_2$ as

$$x(t) = \left( \frac{x_2}{t_2} + \frac{g t_2}{2} \right) t - \frac{g}{2} t^2$$

and it is immediate to verify that $a_1 = \frac{x_2}{t_2} + \frac{g t_2}{2}$, $a_2 = -\frac{g}{2}$, $a_n = 0$ for $n \ge 3$, $\lambda = \frac{m g t_2}{2} - \frac{m x_2}{t_2}$ satisfies the infinite system.
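The verification is easy to automate as well (a sketch of mine with arbitrary sample values; only the first few equations are checked, since all the others behave identically):

```python
# Sketch (my own check): plug the claimed solution into the first
# equations of the infinite system and into the endpoint constraint.
m, g, t2, x2 = 1.0, 9.8, 1.0, 1.0          # arbitrary sample values
a1 = x2 / t2 + g * t2 / 2                  # free-fall coefficients
a2 = -g / 2                                # a_n = 0 for n >= 3
lam = m * g * t2 / 2 - m * x2 / t2

def residual(n):
    """m n sum_j j/(n+j-1) a_j t2^(n+j-1) - m g t2^(n+1)/(n+1) + lam t2^n."""
    s = a1 * t2**n / n + 2 * a2 * t2**(n + 1) / (n + 1)   # only j = 1, 2 survive
    return m * n * s - m * g * t2**(n + 1) / (n + 1) + lam * t2**n

assert abs(a1 * t2 + a2 * t2**2 - x2) < 1e-12             # constraint
for n in range(1, 40):
    assert abs(residual(n)) < 1e-9
```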

In a real case, one would try assuming that the $a_n$s become negligible for $n$ greater than some $N$, and would solve the partial system made of the equations with $n \le N$ plus the constraint, keeping only the unknowns $a_1, \ldots, a_N$ and $\lambda$; then one would check that the discarded equations with $n > N$ are still satisfied to an acceptable approximation.
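This truncation procedure can be sketched numerically (my own experiment; the values of $N$, $m$, $g$, $t_2$, $x_2$ are arbitrary, and since the exact solution happens to have only two nonzero coefficients, the truncated solve should recover it essentially exactly):

```python
# Sketch (my own experiment): keep only a_1..a_N, solve the (N+1)x(N+1)
# truncated linear system for (a_1, ..., a_N, lambda), then check how well
# the discarded equations with n > N are still satisfied.
import numpy as np

m, g, t2, x2 = 1.0, 9.8, 1.0, 1.0          # arbitrary sample values
N = 5

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
for n in range(1, N + 1):                  # stationarity equations
    for j in range(1, N + 1):
        A[n - 1, j - 1] = m * n * j / (n + j - 1) * t2**(n + j - 1)
    A[n - 1, N] = t2**n                    # lambda column
    b[n - 1] = m * g * t2**(n + 1) / (n + 1)
A[N, :N] = [t2**j for j in range(1, N + 1)]   # endpoint constraint row
b[N] = x2

sol = np.linalg.solve(A, b)
a, lam = sol[:N], sol[N]

# the truncated solve recovers the exact free-fall coefficients
assert abs(a[0] - (x2 / t2 + g * t2 / 2)) < 1e-6
assert abs(a[1] + g / 2) < 1e-6
assert all(abs(an) < 1e-6 for an in a[2:])

# residuals of a few discarded equations with n > N
for n in range(N + 1, N + 10):
    r = (m * n * sum(j / (n + j - 1) * a[j - 1] * t2**(n + j - 1)
                     for j in range(1, N + 1))
         - m * g * t2**(n + 1) / (n + 1) + lam * t2**n)
    assert abs(r) < 1e-6
```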