roboptim::FiniteDifferenceGradient< FdgPolicy > Class Template Reference

Automatically compute a gradient with finite differences. More...

#include <roboptim/core/finite-difference-gradient.hh>

Inheritance diagram for roboptim::FiniteDifferenceGradient< FdgPolicy > (diagram not shown).


Public Member Functions

 FiniteDifferenceGradient (const Function &f, value_type e=finiteDifferenceEpsilon) throw ()
 Instantiate a finite differences gradient.
 ~FiniteDifferenceGradient () throw ()

Protected Member Functions

void impl_compute (result_t &, const argument_t &) const throw ()
 Function evaluation.
void impl_gradient (gradient_t &, const argument_t &argument, size_type=0) const throw ()
 Gradient evaluation.

Protected Attributes

const Function & adaptee_
 Reference to the wrapped function.
const value_type epsilon_
 Epsilon used in the finite difference computation.

Detailed Description

template<typename FdgPolicy>
class roboptim::FiniteDifferenceGradient< FdgPolicy >

Automatically compute a gradient with finite differences.

The finite difference method approximates a function's gradient numerically. It is particularly useful in RobOptim to avoid having to compute the analytical gradient manually.

This class takes a Function as its input and wraps it into a derivable function.

The one dimensional formula is:

\[f'(x)\approx \frac{f(x+\epsilon)-f(x)}{\epsilon}\]

where $\epsilon$ is a constant passed to the class constructor.

Examples:
finite-difference-gradient.cc.

Constructor & Destructor Documentation

template<typename FdgPolicy >
roboptim::FiniteDifferenceGradient< FdgPolicy >::FiniteDifferenceGradient (const Function &f, value_type e = finiteDifferenceEpsilon) throw ()

Instantiate a finite differences gradient.

Instantiate a derivable function that wraps a non-derivable function and automatically computes its gradient using finite differences.

Parameters:
	f	function that will be wrapped
	e	epsilon used in the finite difference computation

template<typename FdgPolicy >
roboptim::FiniteDifferenceGradient< FdgPolicy >::~FiniteDifferenceGradient ( ) throw ()

Member Function Documentation

template<typename FdgPolicy >
void roboptim::FiniteDifferenceGradient< FdgPolicy >::impl_compute (result_t &result, const argument_t &argument) const throw () [protected, virtual]

Function evaluation.

Evaluate the function; this has to be implemented in concrete classes.

Warning:
Do not call this function directly, call operator()(result_t&, const argument_t&) const throw () instead.
Parameters:
	result	the result will be stored in this vector
	argument	point at which the function will be evaluated

Implements roboptim::Function.

template<typename FdgPolicy >
void roboptim::FiniteDifferenceGradient< FdgPolicy >::impl_gradient (gradient_t &gradient, const argument_t &argument, size_type functionId = 0) const throw () [protected, virtual]

Gradient evaluation.

Compute the gradient; this has to be implemented in concrete classes. The gradient is computed for a specific sub-function whose id is passed through the functionId argument.

Warning:
Do not call this function directly, call gradient instead.
Parameters:
	gradient	the gradient will be stored in this argument
	argument	point at which the gradient will be computed
	functionId	id of the evaluated function in the split representation

Implements roboptim::DerivableFunction.


Member Data Documentation

template<typename FdgPolicy>
const Function& roboptim::FiniteDifferenceGradient< FdgPolicy >::adaptee_ [protected]

Reference to the wrapped function.

template<typename FdgPolicy>
const value_type roboptim::FiniteDifferenceGradient< FdgPolicy >::epsilon_ [protected]

Epsilon used in the finite difference computation.