Communication and Computing Systems Lab
Unveiling Insights from "Gradient Descent Converges Linearly for Logistic Regression on Separable Data"
Bang An, Ph.D. Student, Applied Mathematics and Computational Science
Jan 17, 10:00 – 11:00
B1 L0 R0118
Tags: gradient methods, Regression Models
Abstract: In this presentation, I will share a paper titled "Gradient Descent Converges Linearly for Logistic Regression on Separable Data", a work closely related to my ongoing research. I will explore its relevance to my current research topic and discuss the inspiration it provides for our future work.

Abstract of the paper: We show that running gradient descent with variable learning rate guarantees loss f(x) ≤ 1.1 f(x*) + ε for the logistic regression objective, where the error ε decays exponentially with the number of iterations and polynomially with the magnitude of the entries of an […]
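The result concerns gradient descent on the logistic loss over linearly separable data. As a rough illustration only (this is not the paper's variable learning-rate schedule; the fixed step size and synthetic data below are assumptions), plain gradient descent already drives the logistic loss toward zero on a separable toy problem:

```python
# Illustrative sketch: gradient descent on the logistic loss for a
# linearly separable toy dataset. The fixed step size and the data
# are assumptions, not taken from the paper under discussion.
import numpy as np

def logistic_loss(w, X, y):
    # Mean logistic loss: (1/n) * sum log(1 + exp(-y_i <w, x_i>)),
    # computed via logaddexp for numerical stability.
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins))

def gradient(w, X, y):
    # Per-sample gradient is -y_i x_i / (1 + exp(y_i <w, x_i>)).
    margins = y * (X @ w)
    s = -y / (1.0 + np.exp(margins))
    return (X.T @ s) / len(y)

# Separable toy data: label is the sign of the first feature, so the
# data is separable by a linear classifier through the origin.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0])
y[y == 0] = 1.0

w = np.zeros(2)
eta = 1.0  # fixed step size (assumption)
for _ in range(500):
    w -= eta * gradient(w, X, y)

print(logistic_loss(w, X, y))  # well below the initial value log(2)
```

On separable data the loss has no finite minimizer, so the iterates grow in norm while the loss keeps decreasing; the paper's contribution is quantifying how fast that decrease is under a suitable learning-rate choice.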
On the Natural Gradient Descent
Prof. Levon Nurbekyan
Jun 11, 16:00 – 17:00
KAUST
Tags: gradient methods
Abstract: Numerous problems in scientific computing can be formulated as optimization problems over suitable parametric models on parameter spaces. Neural network and deep learning methods provide unique capabilities for building and optimizing such models, especially in high-dimensional settings. Nevertheless, neural networks and deep learning techniques are often opaque, and their mathematical properties resist precise control via architectures, hyperparameters, etc. Consequently, optimizing neural network models can entail a laborious hyperparameter-tuning process that […]
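As a hedged sketch of the method named in the talk title (not of the speaker's results): natural gradient descent preconditions the ordinary gradient by the inverse Fisher information matrix, updating θ ← θ − η F(θ)⁻¹ ∇L(θ). The Gaussian model, data, and step size below are illustrative assumptions, chosen because the Fisher matrix has a closed form:

```python
# Illustrative sketch of natural gradient descent: fit a 1-D Gaussian
# N(mu, sigma^2) by minimizing the negative log-likelihood in the
# parameters theta = (mu, log sigma). Data and step size are assumptions.
import numpy as np

def nll(theta, x):
    mu, log_sigma = theta
    sigma2 = np.exp(2 * log_sigma)
    return np.mean(0.5 * np.log(2 * np.pi * sigma2) + (x - mu) ** 2 / (2 * sigma2))

def grad_nll(theta, x):
    mu, log_sigma = theta
    sigma2 = np.exp(2 * log_sigma)
    d_mu = np.mean(-(x - mu) / sigma2)
    d_ls = np.mean(1.0 - (x - mu) ** 2 / sigma2)
    return np.array([d_mu, d_ls])

def fisher(theta):
    # Closed-form Fisher matrix of N(mu, sigma^2) in (mu, log sigma)
    # coordinates: diag(1 / sigma^2, 2).
    _, log_sigma = theta
    return np.diag([np.exp(-2 * log_sigma), 2.0])

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=1000)

theta = np.array([0.0, 0.0])  # start at N(0, 1)
eta = 0.5
for _ in range(100):
    # Natural gradient step: solve F(theta) d = grad, then theta -= eta * d.
    theta -= eta * np.linalg.solve(fisher(theta), grad_nll(theta, x))

print(theta)  # approaches (sample mean, log of sample std)
```

The preconditioning makes the update invariant to how the model is parametrized, which is one motivation for natural gradient methods in the opaque, hard-to-tune settings the abstract describes.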