Recently, I completed 30 Days of ML with PyTorch, where I explored all the major machine learning algorithms and the basics of deep learning, including concepts like activations, optimizers, and loss functions.
The idea behind starting 30 Days of ML with PyTorch was to learn the implementation of machine learning algorithms while strengthening my grasp of the PyTorch library. So, moving forward with this blog, I will share a few important PyTorch functions that appear in almost every ML and DL algorithm.
In neural networks, how we initialize the weights plays a major role in the convergence of the model…
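The post itself relies on PyTorch's built-in initializers; purely as an illustration of the idea, here is a pure-Python sketch of Xavier (Glorot) uniform initialization. The function name `xavier_uniform` is my own for this example, not a library call:

```python
import math
import random

def xavier_uniform(fan_in, fan_out, seed=None):
    """Sample a fan_in x fan_out weight matrix from U(-limit, limit),
    where limit = sqrt(6 / (fan_in + fan_out)), so activations keep a
    comparable variance as they flow through the layer."""
    rng = random.Random(seed)
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [[rng.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

W = xavier_uniform(256, 128, seed=0)
```

In PyTorch the equivalent functionality lives in `torch.nn.init`.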
In the previous blog post on supervised learning, we saw that each observation has a label attached to it, making it easy to train a model. In unsupervised learning, however, the algorithm finds hidden patterns in unlabeled data. A popular family of techniques in unsupervised learning is clustering algorithms.
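As a quick illustration of clustering, here is a toy pure-Python k-means on 1-D data. The function `kmeans_1d` is a minimal sketch of my own, not a library routine:

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Tiny k-means on 1-D data: assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [0.0, 0.2, 0.4, 9.8, 10.0, 10.2]
print(kmeans_1d(data, k=2))  # centroids settle near 0.2 and 10.0
```

No labels are involved anywhere: the algorithm discovers the two groups purely from the structure of the data.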
Version control is useful when multiple developers are working on the same project. It keeps the code intact and makes it easy to restore previously developed code or move back and forth between feature versions. In web development, and nowadays in any project including ML, it helps keep track of code updates from the various developers on a team.
sudo apt-get update
sudo apt-get install git
Finding Git Version
git --version
To Create a Repository
git init
Deploying large, memory-intensive deep models is a major challenge if you plan to run them on edge devices for real-time inference, or on any system with memory constraints. Edge devices have limited memory, compute, and power, which means a deep learning network must be optimized for embedded deployment.
For instance, a relatively simple network like AlexNet is over 200 MB, while a large network like VGG-16 is over 500 MB. Networks of this size cannot fit on low-power microcontrollers and smaller FPGAs. To overcome such challenges, techniques like quantization and distillation were introduced.
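To give a flavour of quantization, here is a minimal pure-Python sketch of affine (asymmetric) 8-bit quantization, mapping 32-bit floats to unsigned 8-bit integers. Real deployments would use framework tooling such as PyTorch's quantization utilities; the helper names here are my own:

```python
def quantize(weights, num_bits=8):
    """Map floats to integers in [0, 2**num_bits - 1] via an affine
    transform: q = round(w / scale) + zero_point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / (qmax - qmin) or 1.0
    zero_point = round(qmin - w_min / scale)
    return ([max(qmin, min(qmax, round(w / scale) + zero_point))
             for w in weights], scale, zero_point)

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the quantized integers."""
    return [(qi - zero_point) * scale for qi in q]

w = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, s, z = quantize(w)
w_hat = dequantize(q, s, z)
# per-weight reconstruction error is bounded by scale / 2
assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(w, w_hat))
```

Storing each weight in one byte instead of four is what cuts a model's footprint roughly 4x, at the cost of the small rounding error bounded above.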
In this blog post, we’ll discuss…
In this blog post, we’ll discuss 🔥 Keepsake 🔥. Keepsake is a version control tool for machine learning experiments. As a machine learning engineer, I feel bewildered whenever I need to deploy an ML model to production. I have a lot of questions before deploying: how do I track each model and its parameters? How do I roll back if something gets screwed up? So many big and small questions.
Now, I think I have found one good answer to all of these problems: Keepsake.
From the official Keepsake documentation:
Everyone uses version control for their software and it’s much less…
In the previous blog post, I discussed the what and why of class imbalance, and briefly touched on its solutions. Now, we’ll dive deep into solving the class imbalance problem using the solutions proposed in the previous post.
In this blog post, we’ll discuss the class imbalance problem in machine learning: what causes it and how to overcome it. From my experience of attending interviews, interviewers ask at least one scenario-based question on class imbalance, most commonly how to handle it.
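One common remedy, random oversampling of the minority class, can be sketched in a few lines of plain Python. The helper `random_oversample` is illustrative only; libraries like imbalanced-learn offer production-grade versions:

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples at random until every class
    has as many samples as the largest class."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    X_out, y_out = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target - n):
            i = rng.choice(idx)
            X_out.append(X[i])
            y_out.append(label)
    return X_out, y_out

X = [[0], [1], [2], [3], [4], [5]]
y = [0, 0, 0, 0, 0, 1]            # 5:1 imbalance
X_bal, y_bal = random_oversample(X, y)
print(Counter(y_bal))             # both classes now have 5 samples
```

Oversampling is done on the training split only, so the duplicated samples never leak into evaluation.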
In this blog post, we’ll discuss sampling and its related components. This topic usually isn’t given much attention compared to fancier statistical terms such as Bayes, frequency, and distribution.
Sampling is quite a dry topic that requires special effort from the reader. My objective in this blog is to present sampling in a more visual form.
In machine learning, sampling refers to taking a subset of the data from the population, where the population means every possible data point available for the task. This is effectively infinite, because in real-world tasks we are continuously collecting…
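The simplest scheme, a simple random sample drawn without replacement from a stand-in population, looks like this. The wrapper `simple_random_sample` is a name I've chosen for the example; it is just the standard library's `random.sample`:

```python
import random

def simple_random_sample(population, n, seed=None):
    """Draw n items uniformly at random without replacement, so every
    item in the population has the same chance of being selected."""
    return random.Random(seed).sample(population, n)

population = list(range(1000))   # stand-in for "all available data"
sample = simple_random_sample(population, 10, seed=42)
assert len(sample) == 10
assert len(set(sample)) == 10    # no duplicates: drawn without replacement
```

More elaborate schemes (stratified, weighted, reservoir sampling) refine this basic idea for skewed or streaming data.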
In this blog post, we’ll discuss loss functions, the parameter θ, and the different types of loss functions. I learnt a lot while researching this topic and hope you will too. Without further ado, let’s start off with the loss function.
In simple terms, the objective of a loss function is to measure the difference, or deviation, between the actual ground-truth value and an estimated approximation of that same value.
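For example, mean squared error and mean absolute error both measure this deviation. A minimal pure-Python sketch (in PyTorch these correspond to `nn.MSELoss` and `nn.L1Loss`):

```python
def mse(y_true, y_pred):
    """Mean squared error: average squared deviation between the
    ground truth and the prediction."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error: average absolute deviation."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [3.0, 5.0, 2.0]
y_pred = [2.5, 5.0, 4.0]
print(mse(y_true, y_pred))  # (0.25 + 0 + 4) / 3
print(mae(y_true, y_pred))  # (0.5 + 0 + 2) / 3
```

Squaring penalizes large deviations much more heavily than the absolute value does, which is why the choice between them matters when the data contains outliers.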