SciPy Optimization – Unconstrained, Constrained, Least-Squares, Univariate Minimization


SciPy includes an optimization module, one of its most widely used packages. Optimization means finding the input parameters of a function that minimize (or maximize) its value. The scipy.optimize module provides a number of different optimization algorithms.

Optimization in SciPy

We can optimize the parameters of a function using the scipy.optimize module. It contains a variety of methods for different types of functions.

1. minimize_scalar() – we use this method to minimize a function of a single variable.
2. minimize() – we use this method to minimize a function of several variables.
3. curve_fit() – we use this method to fit a function to a data set (see the sketch after this list).
4. root_scalar() – it determines the zeros of a single-variable function.
5. root() – it determines the zeros of a multivariable function.
6. linprog() – we use it to minimize a linear function subject to equality and inequality constraints.
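
As referenced in item 3 above, here is a minimal sketch of curve_fit(); the straight-line model and the noisy synthetic data are purely illustrative:

import numpy as np
from scipy.optimize import curve_fit

# hypothetical model: a straight line y = a*x + b
def line(x, a, b):
    return a * x + b

# illustrative synthetic data with a little noise
xdata = np.linspace(0, 10, 20)
ydata = 2.0 * xdata + 1.0 + 0.1 * np.random.randn(20)

# curve_fit returns the fitted parameters and their covariance matrix
popt, pcov = curve_fit(line, xdata, ydata)
print(popt)   # roughly [2.0, 1.0]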

With these, we can perform various types of optimization tasks:

  • Constrained and unconstrained minimization of single- or multi-variable scalar functions, using a variety of minimization algorithms.
  • Global optimization (see the sketch after this list).
  • Least-squares minimization and curve fitting.
  • Root finding.
  • Solving systems of multivariate equations.
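
As mentioned in the global-optimization bullet above, scipy.optimize also provides global optimizers such as differential_evolution(). A minimal sketch, using the Rastrigin test function and bounds chosen purely for illustration:

import numpy as np
from scipy.optimize import differential_evolution

# Rastrigin function: many local minima, global minimum of 0 at the origin
def rastrigin(x):
    x = np.asarray(x)
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

# box bounds for the two variables
bounds = [(-5.12, 5.12), (-5.12, 5.12)]
result = differential_evolution(rastrigin, bounds, seed=0)
print(result.x, result.fun)   # close to [0, 0] and 0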

Unconstrained and Constrained Minimization in SciPy

We use the minimize() function to perform minimization of a scalar function. As an example, we use the Rosenbrock function:

f(x, y) = (1 − x)² + 100(y − x²)²

The minimum value is 0, achieved at x = y = 1 (or, in the N-dimensional generalization used below, when every variable equals 1).

We can perform this process using two methods:

1. Nelder-Mead Simplex Algorithm in SciPy

We call minimize() with method='Nelder-Mead'. This is one of the simplest techniques, but it can be slow because it does not use gradient information.

import numpy as np
from scipy.optimize import minimize

# N-dimensional Rosenbrock function
def rosen(x):
  return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)
 
# starting guess for the five variables
x0 = np.array([1.9, 0.7, 3.4, 1.6, 2.1])
res = minimize(rosen, x0, method='nelder-mead')
print(res.x)

Output

[1.00000028 0.99999778 0.99999712 0.9999944 0.99998762]

2. Powell Algorithm in SciPy

Powell's method is another derivative-free technique. It minimizes the function through a sequence of one-dimensional searches, so only scalar function evaluations are needed. We select it by setting method='powell'.

import numpy as np
from scipy.optimize import minimize
def rosen(x):
  return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)
 
x0 = np.array([1.9, 0.7, 3.4, 1.6, 2.1])
res = minimize(rosen, x0, method='powell')
print(res.x)

Output

[1. 1. 1. 1. 1.]
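
The section heading also mentions constrained minimization, which the two examples above do not cover. A minimal sketch with minimize(), assuming an illustrative equality constraint x + y = 1 on the two-variable Rosenbrock function and the 'SLSQP' method:

import numpy as np
from scipy.optimize import minimize

def rosen(x):
    return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)

# illustrative equality constraint: x + y = 1
cons = {'type': 'eq', 'fun': lambda x: x[0] + x[1] - 1}

x0 = np.array([0.5, 0.5])
res = minimize(rosen, x0, method='SLSQP', constraints=cons)
print(res.x)   # the constrained minimizer on the line x + y = 1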

SciPy Least-Squares Minimization

We can solve nonlinear least-squares problems, optionally with bounds on the variables. The least_squares() function takes a function that computes the residuals and an initial guess, and it minimizes the sum of squares of the residuals.

import numpy as np
from scipy.optimize import least_squares

# N-dimensional Rosenbrock function, returned here as a single scalar residual
def rosen(x):
  return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)
 
# initial guess for the two variables
x0 = np.array([1, 3])
res = least_squares(rosen, x0)
 
print(res)

Output

active_mask: array([0., 0.])
cost: 0.07163696298945274
fun: array([0.37851542])
grad: array([0.1312017 , 0.10353955])
jac: array([[0.3466218 , 0.27354117]])
message: 'The maximum number of function evaluations is exceeded.'
nfev: 200
njev: 194
optimality: 0.1312016979177483
status: 0
success: False
x: array([1.61508403, 2.60986412])
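
The run above reports success: False after hitting its evaluation limit; supplying the whole Rosenbrock value as a single scalar residual makes the two-variable problem hard for the solver. A minimal sketch of the more typical usage, assuming the residuals are returned as a vector and the variables are given illustrative bounds:

import numpy as np
from scipy.optimize import least_squares

# Rosenbrock residuals as a vector instead of their squared sum
def rosen_residuals(x):
    return np.concatenate((10.0 * (x[1:] - x[:-1]**2), 1.0 - x[:-1]))

x0 = np.array([0.5, 1.5])
# illustrative bounds: both variables restricted to [0, 2]
res = least_squares(rosen_residuals, x0, bounds=(0, 2))
print(res.x)   # close to [1, 1]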

1. Root Finding

We can perform root finding on scalar equations with root_scalar(). Several bracketing algorithms are available: they take an interval whose endpoints give function values of opposite sign, which guarantees the interval contains a root. The most efficient general-purpose bracketing technique is 'brentq'.
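
A minimal sketch of bracketed root finding with root_scalar(); the function f(x) = x**3 - 1 and the bracket [0, 2] are illustrative choices over which the function changes sign:

from scipy.optimize import root_scalar

# f changes sign on [0, 2], so the bracket is valid for 'brentq'
def f(x):
    return x**3 - 1

sol = root_scalar(f, bracket=[0, 2], method='brentq')
print(sol.root)   # 1.0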

2. Fixed-Point solving

We can also solve fixed-point problems: finding a point x where a function g satisfies g(x) = x. This is equivalent to finding a root of f(x) = g(x) − x, and it works for single or multivariate functions. The fixed_point() function uses a simple iteration accelerated with Aitken's sequence acceleration.
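
A minimal sketch with fixed_point(), using the illustrative function cos(x), whose fixed point cos(x) = x lies near 0.739:

import numpy as np
from scipy.optimize import fixed_point

# find x such that cos(x) == x, starting the iteration at 0.5
x_star = fixed_point(np.cos, 0.5)
print(x_star)   # about 0.739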

3. Equation system

We can find the roots of a set of nonlinear equations with the root() function. Several solver methods are available; the default is 'hybr', a modified Powell hybrid method.

import numpy as np
from scipy.optimize import root

# single nonlinear equation: 2x + 2*sin(x) = 0, which has a root at x = 0
def func(x):
    return 2 * x + 2 * np.sin(x)

sol = root(func, 0.3)
print(sol)

Output

fjac: array([[-1.]])
fun: array([0.])
message: 'The solution converged.'
nfev: 7
qtf: array([-2.21298214e-20])
r: array([-4.])
status: 1
success: True
x: array([0.])
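
The example above solves a single equation; for an actual system, root() accepts a function that returns one residual per equation. A minimal sketch with two illustrative equations (a circle of radius 2 intersected with the line x = y):

import numpy as np
from scipy.optimize import root

# system of two equations: x**2 + y**2 = 4 and x = y
def system(v):
    x, y = v
    return [x**2 + y**2 - 4, x - y]

sol = root(system, [1.0, 1.0])
print(sol.x)   # about [1.414, 1.414]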

SciPy Univariate Minimization

Similar to the multivariate case, we can minimize a function of a single variable. We use the minimize_scalar() function for this. The 'brent' method (the default) performs unconstrained minimization, while the 'bounded' method performs minimization over a bounded interval.

from scipy.optimize import minimize_scalar
f = lambda x: (x - 2) * (x + 1)**2
res = minimize_scalar(f, method='brent')
print(res.x)

Output

1.0

For bounded minimization, as an example we minimize the Bessel function j1 on the interval (4, 7):

from scipy.special import j1
res = minimize_scalar(j1, bounds=(4, 7), method='bounded')
print(res.x)
 

Output

5.3314418424098315

Summary

Optimization is a central task when working with numerical functions, and SciPy provides a dedicated optimization module for it. The module is very convenient for working with scalar functions and offers a wide range of methods and algorithms, letting us handle linear, nonlinear, multivariate, and univariate problems.
