# Simplex algorithm: solution on a vertex

The simplex algorithm is probably the most famous optimization algorithm. It is designed to solve linear optimization problems, and it was invented by George Dantzig in 1949. The main idea is based on the following property: if the problem has an optimal solution, one can be found on a vertex of the constraint polyhedron.

Let's illustrate this with a one-dimensional problem. We want to minimize the function a x + b, where x is a real number constrained to stay between a lower bound ℓ and an upper bound u. In this case, the constraint polyhedron is the line segment between ℓ and u, and it has two vertices: ℓ and u.

First, assume that a is positive. Because we want to minimize a x + b, and the function increases with x, the lowest value is reached at x* = ℓ. And ℓ is indeed a vertex of the constraint polyhedron. If instead a is negative, the slope of the function is negative, so the function decreases with x, and the minimum is reached at x* = u. Again, this is a vertex of the constraint polyhedron.

The last case is a = 0. The function is then equal to b for all x: it is a constant function, and any feasible point is optimal. In particular, the point x* = ℓ and the point x̂ = u are both optimal solutions. So in this case not all optimal solutions are vertices, but we can still find an optimal solution on a vertex. And this is exactly what the algorithm will do.

This result, illustrated here in one dimension, generalizes to more than one dimension. It is a fundamental property of linear optimization: if the problem has a solution, we can find one on a vertex of the constraint polyhedron.
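The one-dimensional case above can be sketched in a few lines of Python. The function name and argument order are illustrative, not from the book:

```python
def minimize_affine(a, b, lower, upper):
    """Minimize a*x + b over the segment [lower, upper].

    Returns (x_star, value). The optimum is always reported at a
    vertex of the segment; when a == 0 every feasible point is
    optimal, and we return the vertex `lower` by convention.
    """
    if a > 0:
        x_star = lower   # positive slope: the left vertex is optimal
    elif a < 0:
        x_star = upper   # negative slope: the right vertex is optimal
    else:
        x_star = lower   # constant function: any point works; pick a vertex
    return x_star, a * x_star + b
```

For instance, `minimize_affine(2, 1, -1, 3)` returns the left vertex, while `minimize_affine(-2, 1, -1, 3)` returns the right one, mirroring the three cases discussed above.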
This is called in the book Theorem 16.2: for a general linear optimization problem with n variables and m equality constraints, whose feasible set is a polyhedron in multiple dimensions, if the problem has an optimal solution, then there exists an optimal vertex of the constraint polyhedron.

This is good news. Even though the number of feasible solutions is infinite, there is only a finite number of vertices. So one way to find an optimal solution is to enumerate all the vertices of the polyhedron, calculate the value of the objective function for each of them, and keep the best. This is what is called the geometric algorithm: we enumerate the vertices of the polyhedron, we calculate c^T x for each of them, and we identify the vertex associated with the smallest value; it is an optimal solution.

The nice thing is that, because we have an algebraic representation of vertices, using feasible basic solutions, we can translate this geometric algorithm into an algebraic algorithm. "Enumerate all vertices of the polyhedron" becomes "enumerate all basic solutions of the problem". For each of them, we first check whether it is feasible, to make sure that it corresponds to an actual vertex, and if so, we calculate c^T x. We then identify the feasible basic solution with the smallest value.

But for real problems, this procedure is not practical. The number of basic solutions for a problem in standard form can be quite high: there are n! / ((n − m)! m!) ways to choose a basis. They may not all be feasible, but this is the number of candidates to examine. For a problem with one hundred variables and fifty constraints, this gives a total of about 10^29 possibilities. So suppose that your computer is able to treat one million basic solutions per second.
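The algebraic enumeration just described can be sketched in Python for a problem in standard form, min c^T x subject to A x = b, x ≥ 0. The function names are illustrative, exact rational arithmetic keeps the feasibility checks free of rounding error, and, as the text warns, this is only workable for tiny instances:

```python
from fractions import Fraction
from itertools import combinations

def solve(B, rhs):
    """Solve the square system B v = rhs by Gaussian elimination with
    exact rational arithmetic; return None if B is singular."""
    n = len(rhs)
    M = [[Fraction(B[i][j]) for j in range(n)] + [Fraction(rhs[i])]
         for i in range(n)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None                     # singular: not a valid basis
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * p for a, p in zip(M[r], M[col])]
    return [row[-1] for row in M]

def enumerate_basic_solutions(A, b, c):
    """Brute-force algebraic algorithm for min c^T x s.t. Ax = b, x >= 0:
    enumerate every choice of m basic columns, keep the feasible basic
    solutions, and return the best (x, value) pair."""
    m, n = len(A), len(A[0])
    best = None
    for basis in combinations(range(n), m):
        B = [[A[i][j] for j in basis] for i in range(m)]
        x_B = solve(B, b)
        if x_B is None or any(v < 0 for v in x_B):
            continue                        # singular basis or infeasible point
        x = [Fraction(0)] * n
        for j, v in zip(basis, x_B):
            x[j] = v
        value = sum(ci * xi for ci, xi in zip(c, x))
        if best is None or value < best[1]:
            best = (x, value)
    return best
```

As a usage example, minimizing −x1 − x2 subject to x1 ≤ 2 and x2 ≤ 3, written in standard form with two slack variables, yields the vertex x = (2, 3) with value −5.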
Then, enumerating all the basic solutions would take about 10^13 centuries, that is, ten million million centuries. This is definitely not practical; we will have to be smarter than that.

To summarize: the feasible set of a linear optimization problem is a polyhedron, and we know that an optimal solution can be found on a vertex of this polyhedron. Moreover, we have an algebraic way to represent a vertex of a polyhedron: the feasible basic solution. If you have not reviewed this concept recently, it may be a good time to do so, because the concept of feasible basic solution will be used extensively in the context of the simplex algorithm.
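The back-of-the-envelope estimate above is easy to check with Python's standard library:

```python
import math

# Number of ways to choose a basis for n = 100 variables and
# m = 50 constraints: n! / ((n - m)! * m!) = C(n, m).
n, m = 100, 50
n_bases = math.comb(n, m)
print(f"{n_bases:.3e}")          # about 1.009e+29 basic solutions

# At one million basic solutions per second:
seconds = n_bases / 1e6
centuries = seconds / (3600 * 24 * 365.25 * 100)
print(f"{centuries:.1e}")        # on the order of 1e13 centuries
```

This confirms both figures quoted in the text: roughly 10^29 basic solutions, and roughly 10^13 centuries of enumeration time.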