Adaptive, Stochastic Optimization Algorithms

Classical machine learning algorithms require known values for various hyperparameters, such as the learning rate of the training algorithm, the regularization coefficient controlling model complexity, and the scale parameter of the Gaussian kernel. In practice, these values are usually unknown, and computationally expensive trial-and-error procedures are typically applied to tune them.
At DAI-Labor, we cast the problem of tuning hyperparameters as an online/stochastic learning problem and develop algorithms with which (near-)optimal values can be learned efficiently. We are specifically interested in the following applications.

  1. Learning-rate-free training/optimization:
    Machine learning algorithms are typically formalized as optimization problems and solved with (sub-)gradient methods, which require an appropriately chosen learning rate (a first sketch of a learning-rate-free update follows this list).
  2. Zeroth-order optimization:
    We cast the problem of tuning continuous hyperparameters as the optimization of a black-box function and solve it with zeroth-order optimization algorithms (second sketch below).
  3. Time series prediction without hyperparameters:
    In many DAI-Labor projects, streaming data must be processed in real time, which is inefficient or even impossible if hyperparameters have to be re-tuned, due to concept drift and the computational cost of tuning (third sketch below).
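
To make the first item concrete, below is a minimal sketch of one well-known learning-rate-free scheme, a Krichevsky-Trofimov coin-betting update in the style of Orabona and Pál, where a wealth-based bet replaces the step size. It illustrates the general idea rather than the specific algorithms developed at DAI-Labor; the 1-D objective `objective_grad` and the clipping bound on the gradients are assumptions made for the example.

```python
import numpy as np

def kt_coin_betting(grad, w0=0.0, initial_wealth=1.0, rounds=200):
    """Learning-rate-free 1-D optimization via Krichevsky-Trofimov coin betting.

    Assumes the (sub)gradients fed to the bettor are bounded in [-1, 1];
    here they are clipped to enforce that.
    """
    wealth = initial_wealth
    grad_sum = 0.0          # running sum of negative gradients ("coin outcomes")
    iterates = []
    for t in range(1, rounds + 1):
        # Bet a signed fraction of the current wealth; no step size appears.
        beta = grad_sum / t
        w = w0 + beta * wealth
        g = np.clip(grad(w), -1.0, 1.0)
        wealth -= g * (w - w0)
        grad_sum -= g
        iterates.append(w)
    # Online-to-batch conversion: the averaged iterate converges for convex losses.
    return np.mean(iterates), w

# Hypothetical convex objective standing in for a training loss; minimizer at w = 2.
objective_grad = lambda w: 2.0 * (w - 2.0)
avg_w, last_w = kt_coin_betting(objective_grad)
print(f"averaged iterate: {avg_w:.3f}, last iterate: {last_w:.3f}")  # both near 2
```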
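
For the second item, a common zeroth-order strategy estimates a descent direction from function values alone, e.g. with a two-point random-direction (Gaussian-smoothing) estimator. The sketch below is such an illustration, not the lab's actual method: the black-box objective `black_box`, the smoothing radius `mu`, and the step size `step` are placeholders chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(x):
    """Hypothetical black-box objective, e.g. a validation loss as a function
    of continuous hyperparameters; only function values are available."""
    return np.sum((x - np.array([0.5, -1.0])) ** 2)

def zeroth_order_minimize(f, x0, iters=500, mu=1e-3, step=0.05):
    """Two-point random-direction gradient estimator (Gaussian smoothing)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        u = rng.standard_normal(x.shape)         # random search direction
        # Directional finite difference along u approximates <grad f(x), u>.
        g_hat = (f(x + mu * u) - f(x)) / mu * u  # estimates the smoothed gradient
        x -= step * g_hat
    return x

x_opt = zeroth_order_minimize(black_box, x0=np.zeros(2))
print(x_opt)  # expected to approach [0.5, -1.0]
```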
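
For the third item, one hyperparameter-light way to forecast a stream is online autoregressive prediction with recursive least squares, which involves no learning rate and updates in constant time per sample. The sketch below assumes a hypothetical noisy-sinusoid stream; the AR order and the initialization scale are illustrative choices, not project settings.

```python
import numpy as np

def online_ar_forecast(stream, order=3, delta=1.0):
    """Online autoregressive prediction via recursive least squares.

    No learning rate is tuned; `order` (AR lag) and `delta` (initial scale of
    the inverse covariance) are illustrative choices for this sketch.
    """
    w = np.zeros(order)             # AR coefficients, updated on the fly
    P = np.eye(order) * delta       # running inverse covariance estimate
    history, predictions = [], []
    for y in stream:
        if len(history) >= order:
            x = np.array(history[-order:][::-1])  # most recent lags
            y_hat = w @ x                         # one-step-ahead forecast
            predictions.append(y_hat)
            # Sherman-Morrison rank-one update: no step size appears.
            k = P @ x / (1.0 + x @ P @ x)
            w = w + k * (y - y_hat)
            P = P - np.outer(k, x @ P)
        history.append(y)
    return np.array(predictions), w

# Hypothetical stream standing in for real-time sensor data.
t = np.arange(2000)
stream = np.sin(0.05 * t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
preds, coeffs = online_ar_forecast(stream)
print("last one-step-ahead error:", abs(preds[-1] - stream[-1]))
```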