Gradient descent is the backbone of many machine learning algorithms, deep learning included. It is used only during training, and it is the most computationally expensive part of machine learning. Gradient descent is a broad mathematical topic which I cannot explain in its entirety in a single blog post. However, I … Continue reading Mathematical Basics Of Gradient Descent
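The core update rule is simple even though the full theory is not: repeatedly step in the direction opposite the gradient. A minimal sketch (my own illustration, not code from the post), minimizing the toy function f(w) = (w − 3)², whose gradient is f′(w) = 2(w − 3):

```python
def gradient_descent(grad, w0, learning_rate=0.1, steps=100):
    """Repeatedly apply the update w <- w - learning_rate * grad(w)."""
    w = w0
    for _ in range(steps):
        w = w - learning_rate * grad(w)
    return w

# Minimize f(w) = (w - 3)^2; the minimum is at w = 3.
w_min = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
```

With a learning rate of 0.1, each step shrinks the distance to the minimum by a factor of 0.8, so `w_min` converges to 3.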

# Tag: Theory

# Neural Network Output Units

The primary function of a feedforward neural network is to produce a prediction of some sort. The most popular task handled by a feedforward neural network is classification, or categorization. Classification is a task where a program is handed some data and assigns it a label from a fixed set of categories. One popular … Continue reading Neural Network Output Units
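For classification, the standard output unit is a softmax layer, which turns the network's raw scores (logits) into class probabilities. A small sketch of that idea (an illustration of the general technique, not code from the post):

```python
import numpy as np

def softmax(logits):
    """Convert raw output scores into probabilities that sum to 1."""
    shifted = logits - np.max(logits)  # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

# Example: three-class logits; the predicted class is the most probable one.
probs = softmax(np.array([2.0, 1.0, 0.1]))
predicted_class = int(np.argmax(probs))
```

The predicted class is simply the index with the highest probability; here that is class 0.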

# Vectors, Matrices, Tensors – What’s The Difference?

Linear algebra is a branch of mathematics that deals with solving systems of linear equations. It is widely used throughout science and engineering, and it is essential to understanding machine learning algorithms. Linear algebra defines three basic data structures – vectors, matrices, and tensors – which are used constantly in machine and deep learning. In this … Continue reading Vectors, Matrices, Tensors – What’s The Difference?
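The distinction is essentially the number of dimensions (axes): a vector is 1-D, a matrix is 2-D, and a tensor generalizes to three or more dimensions. In NumPy terms (my own quick illustration, not code from the post):

```python
import numpy as np

vector = np.array([1.0, 2.0, 3.0])             # 1-D array: a vector
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])                # 2-D array: a matrix
tensor = np.zeros((2, 3, 4))                   # 3-D array: a tensor

# ndim reports the number of axes, shape the size along each axis.
dims = (vector.ndim, matrix.ndim, tensor.ndim)
```

In deep learning libraries the word "tensor" is often used for arrays of any dimension, with vectors and matrices as the 1-D and 2-D special cases.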

# What Is Dropout? – Deep Learning

Overfitting can be a serious problem in deep learning. Dropout is a technique developed to solve this exact problem, and it is one of the biggest advancements in deep learning in recent years. What is dropout? Dropout is a technique for addressing overfitting. The idea is to randomly drop … Continue reading What Is Dropout? – Deep Learning
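During training, dropout randomly zeroes each unit's activation with some probability; a common variant ("inverted dropout") also scales the surviving activations so the expected output is unchanged, which lets the network run as-is at test time. A minimal sketch of that variant (an assumption about the mechanics, not code from the post):

```python
import numpy as np

def dropout(activations, p_drop, rng, training=True):
    """Inverted dropout: zero each unit with probability p_drop during
    training, and scale survivors by 1/(1 - p_drop) so the expected
    activation stays the same. At test time, pass inputs through unchanged."""
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
out = dropout(np.ones(1000), p_drop=0.5, rng=rng)
```

With `p_drop=0.5`, roughly half the units are zeroed and the rest are doubled, so the mean activation stays close to 1.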