
Nano-AutoGrad: A Micro-Framework Engine Based on Automatic Differentiation for Building and Training Neural Networks
  • Haytham Al Ewaidat,
  • Youness El Brag,
  • Ahmad Wajeeh Yousef E’layan,
  • Ali Almakhadmeh
Haytham Al Ewaidat
Jordan University of Science and Technology

Corresponding Author: [email protected]

Youness El Brag
Université Abdelmalek Essaadi, Département de Physique
Ahmad Wajeeh Yousef E’layan
Jordan University of Science and Technology
Ali Almakhadmeh
Jordan University of Science and Technology

Abstract

Background: Neural Networks, inspired by the human brain, are a class of machine learning models composed of interconnected artificial neurons. They have a rich history dating back to the 1940s, with notable advances in the 1980s and 1990s, when techniques such as backpropagation enabled the training of multi-layer networks. Neural Networks have since experienced a renaissance, achieving state-of-the-art results across diverse domains, yet training them effectively remains a challenge. This work introduces Nano-AutoGrad, a micro-framework built on automatic differentiation and optimization methods. Nano-AutoGrad computes gradients efficiently, supports parameter optimization, and provides building blocks such as multi-layer perceptrons and linear models; the network architecture can also be extended with additional layers, improving the performance and representational capacity of the resulting networks.

Aim: The aim of this research is to design and develop Nano-AutoGrad as a simple, computationally lightweight micro-framework for training Neural Networks. By providing mechanisms such as multi-layer perceptrons, linear models, and additional layers, the goal is to enhance network performance and versatility. The study evaluates Nano-AutoGrad on training linear models, where it achieves promising results; its efficient gradient computation and parameter optimization contribute to advances in Neural Network training.

Method: Nano-AutoGrad is a micro-framework that trains Neural Networks efficiently using automatic differentiation, optimization techniques, and computational graphs. It computes gradients by applying the chain rule over the computational graph, enabling parameter updates that improve performance. Optimization methods such as Stochastic Gradient Descent (SGD) are used to iteratively update network parameters based on the computed gradients.

Conclusion: Through the development and evaluation of Nano-AutoGrad, this study highlights the effectiveness of combining historical advances in Neural Networks with automatic differentiation and optimization techniques. By incorporating multi-layer perceptrons and linear models, and by extending the network architecture with additional layers, Nano-AutoGrad remains computationally simple while offering improved representational capability. The study demonstrates the potential of Nano-AutoGrad to contribute to the field of machine learning by providing a high-level tool for training linear models and optimizing Neural Networks, building on the rich history and progress of Artificial Intelligence.
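To make the mechanism described in the Method concrete, the sketch below shows a minimal scalar reverse-mode autograd engine of the kind such a micro-framework is built around: each arithmetic operation records itself in a computational graph, gradients are propagated backwards with the chain rule, and an SGD step updates the parameters. All names here (`Value`, `backward`, the toy fitting loop) are illustrative assumptions and are not claimed to match the actual Nano-AutoGrad API.

```python
class Value:
    """A node in the computational graph holding a scalar value and its gradient."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None   # how to push this node's grad to its parents
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))

        def _backward():
            # d(out)/d(self) = 1 and d(out)/d(other) = 1 (chain rule)
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))

        def _backward():
            # d(out)/d(self) = other.data and d(out)/d(other) = self.data
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse order.
        topo, visited = [], set()

        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)

        self.grad = 1.0
        for node in reversed(topo):
            node._backward()


# Toy usage: fit y = w * x + b to a single target with SGD.
w, b = Value(0.5), Value(0.0)
x, y_true = 2.0, 3.0
lr = 0.1

for step in range(20):
    y_pred = w * x + b
    err = y_pred + (-y_true)
    loss = err * err                 # squared error
    w.grad, b.grad = 0.0, 0.0        # reset gradients before backpropagation
    loss.backward()                  # reverse-mode autodiff via the chain rule
    w.data -= lr * w.grad            # SGD parameter update
    b.data -= lr * b.grad
```

Fresh graph nodes are created on every iteration, so only the persistent parameters `w` and `b` need their gradients reset; this mirrors the define-by-run style of computational-graph construction that the framework's design relies on.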