Adam

Adam()

First-order Adam-family optimizer for differentiable objectives.

This optimizer is exposed through a full-batch objective wrapper: the callable must accept a parameter vector and a writable gradient vector, and return the scalar objective value.
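For instance, a least-squares objective conforming to this contract could be written as follows (an illustrative sketch; the target vector `b` is an assumption for the example, not part of the API):

```python
import numpy as np

# Illustrative target for f(x) = ||x - b||^2 (not part of the API).
b = np.array([1.0, 2.0, 3.0])

def objective(params, gradient):
    # Gradient of ||x - b||^2 is 2(x - b); write it into the provided buffer.
    residual = np.asarray(params) - b
    gradient[:] = 2.0 * residual
    # Return the scalar objective value.
    return float(residual @ residual)
```

Note that the gradient is written in place via slice assignment (`gradient[:] = ...`); rebinding the name (`gradient = ...`) would not be visible to the optimizer.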

Attributes

| Name | Description |
|------|-------------|
| batchSize | Nominal batch size parameter exposed by ensmallen. |
| beta1 | Exponential decay rate for the first-moment estimates. |
| beta2 | Exponential decay rate for the second-moment estimates. |
| epsilon | Numerical stabilization constant. |
| exactObjective | Whether to evaluate the exact objective after optimization. |
| maxIterations | Maximum number of iterations. |
| resetPolicy | Whether optimizer state is reset before each call. |
| shuffle | Whether to shuffle separable objectives between epochs. |
| stepSize | Step size (learning rate). |
| tolerance | Termination tolerance. |

Methods

| Name | Description |
|------|-------------|
| optimize | Optimize an objective from the provided starting point. |

optimize

Adam.optimize()

optimize(self, objective: Callable[[ArrayLike, ArrayLike], float], initial_point: ArrayLike) -> NDArray[numpy.float64]

Optimize an objective from the provided starting point.

Parameters

| Name | Type | Description | Default |
|------|------|-------------|---------|
| objective | callable | Callable with signature `objective(params, gradient)`. The callable should write the gradient in place and return the scalar objective. | required |
| initial_point | ndarray | Initial parameter vector. | required |

Returns

| Type | Description |
|------|-------------|
| ndarray | Optimized parameter vector. |