Optym

class optym.GradientDescent(delta_x: float = 0.001, **kwargs)[source]

Simple implementation of the gradient-descent algorithm.

Example

>>> import numpy as np
>>> from optym import GradientDescent
>>> def cost(parameters):
...     target_parameters = np.array([1, 1])
...     return np.sum(np.square(target_parameters - parameters))
>>> gd = GradientDescent(learning_rate=0.05, iterations=100, delta_x=0.001)
>>> start_parameters = np.random.random(2)
>>> par_hist, cost_hist = gd(cost, start_parameters)
>>> print(gd.param_in_min)
Parameters

delta_x (float, optional) – Step size dx used in the numerical derivative dy/dx, by default 0.001
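
For intuition, here is a minimal sketch of how such a finite-difference gradient could be computed; numerical_gradient and gd_step are hypothetical helpers for illustration, not part of the optym API, and the library may use a different difference scheme.

import numpy as np

def numerical_gradient(cost, parameters, delta_x=0.001):
    # Hypothetical helper: forward-difference estimate of d(cost)/d(parameters).
    parameters = np.asarray(parameters, dtype=float)
    grad = np.zeros_like(parameters)
    f0 = cost(parameters)
    for i in range(parameters.size):
        shifted = parameters.copy()
        shifted[i] += delta_x  # perturb one coordinate by delta_x
        grad[i] = (cost(shifted) - f0) / delta_x
    return grad

def gd_step(cost, parameters, learning_rate=0.05, delta_x=0.001):
    # One gradient-descent update: move against the estimated gradient.
    return parameters - learning_rate * numerical_gradient(cost, parameters, delta_x)

Smaller delta_x gives a more accurate derivative until floating-point cancellation dominates, which is why it is exposed as a tunable parameter.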

class optym.MCMC(min_parameters: list, max_parameters: list, method: str = 'metropolis_hastings', clip_limits: bool = False)[source]

Simple implementation of Markov-Chain Monte Carlo algorithms.

Example

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from optym import MCMC
>>> def cost(parameters):
...     target_parameters = np.array([1, 1])
...     return np.sum(np.square(target_parameters - parameters))
>>> def prob(parameters):
...     return np.exp(-0.5 * cost(parameters))
>>> start_parameters = np.random.random(2)
>>> min_parameters = [0, 0]
>>> max_parameters = [2, 2]
>>> mcmc = MCMC(min_parameters, max_parameters, method="metropolis_hastings")
>>> par_hist, cost_hist = mcmc(prob, start_parameters, iterations=20000)
>>> print(mcmc.param_in_max)
>>> plt.plot(np.log10(cost_hist))
>>> plt.show()
Parameters
  • min_parameters (list) – List of n elements giving the minimum value of each of the n parameters.

  • max_parameters (list) – List of n elements giving the maximum value of each of the n parameters.

  • method (str, optional) – Optimization method, one of {‘metropolis_hastings’, ‘maximize’, ‘minimize’}; by default “metropolis_hastings” (see the sketch after this list).

  • clip_limits (bool, optional) – Whether new proposals are clipped to the parameter intervals, by default False
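
To make the ‘metropolis_hastings’ method and the clip_limits flag concrete, here is a minimal sketch of a random-walk Metropolis-Hastings loop; the function below and its step and seed arguments are illustrative assumptions, not optym's actual implementation.

import numpy as np

def metropolis_hastings(prob, start, min_parameters, max_parameters,
                        iterations=20000, step=0.1, clip_limits=False, seed=None):
    # Illustrative sketch, not optym's code: random-walk Metropolis-Hastings.
    rng = np.random.default_rng(seed)
    current = np.asarray(start, dtype=float)
    lo = np.asarray(min_parameters, dtype=float)
    hi = np.asarray(max_parameters, dtype=float)
    p_current = prob(current)
    history = [current.copy()]
    for _ in range(iterations):
        # Gaussian random-walk proposal around the current point.
        proposal = current + rng.normal(0.0, step, size=current.shape)
        if clip_limits:
            # clip_limits=True: force the proposal back into [lo, hi].
            proposal = np.clip(proposal, lo, hi)
        elif np.any(proposal < lo) or np.any(proposal > hi):
            history.append(current.copy())
            continue  # reject out-of-bounds proposals outright
        p_proposal = prob(proposal)
        # Accept with probability min(1, p_proposal / p_current).
        if rng.random() < p_proposal / p_current:
            current, p_current = proposal, p_proposal
        history.append(current.copy())
    return np.array(history)

Proposals with higher probability are always accepted, while worse ones are accepted only with probability p_proposal / p_current; this is what lets the chain explore around the maximum instead of climbing greedily.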