Starting the learning challenge!

Last updated: September 17, 2025

# Day 1 Momentum Learning Series

This blog marks the start of a personal challenge: a series where I commit to learning every single day and documenting it here. My focus is on computer science, AI, and computer vision. The rule is simple: at least one hour of learning a day, and one honest reflection written down. You're welcome to join the challenge too: build consistency, learn something daily, and record it.

This summer my focus was on AI agents and going deeper into computer vision.

Today I finished the last part of a course I started earlier on LLMOps: automating and monitoring every step of building an ML system, from LLM development to managing the model in production. The course covered data preparation, automation, and orchestration with pipelines (for me, the essential part), and finally what I studied today: prediction, prompts, and safety. That last section was about deploying the model behind a REST API, then unpacking and formatting the response data, plus a quick look at safety attributes. The short course was informative, but I could have finished it in less time, probably in one session.
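To make the "unpacking and formatting" step concrete, here is a minimal sketch of the request/response plumbing around a deployed prediction endpoint. This is illustrative only: the endpoint URL, the `instances`/`predictions` payload shape, and the field names are assumptions, not the course's actual API.

```python
import json

# Hypothetical endpoint; the course used a specific provider's API.
ENDPOINT = "https://example.com/v1/models/my-llm:predict"

def build_request(prompt: str, temperature: float = 0.2) -> str:
    """Format the prediction request body as JSON."""
    return json.dumps({
        "instances": [{"prompt": prompt}],
        "parameters": {"temperature": temperature},
    })

def unpack_response(body: str) -> str:
    """Unpack the raw JSON response and pull out the generated text."""
    data = json.loads(body)
    return data["predictions"][0]["content"].strip()

# Simulated round trip (no network call): what a server might return.
fake_response = json.dumps({"predictions": [{"content": "  Paris  "}]})
print(unpack_response(fake_response))  # -> Paris
```

The point of the exercise in the course was exactly this boundary: the model gives you raw structured output, and you own the formatting on both sides of the HTTP call.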

Then I studied from the book Deep Learning for Computer Vision Systems, which I also started this summer, because I felt studying computer vision needs a stronger theoretical side, and this was a good resource for me. I started chapter 3, on CNNs, which demonstrates the drawbacks of image classification with MLPs by classifying the MNIST dataset using a very simple neural network. The key lesson: an MLP needs the input image flattened from a 2D matrix into a 1D vector, and in that process we lose spatial features, which makes it harder and slower for the network to learn patterns in neighboring regions of the image. CNNs instead use convolutional layers, where each neuron is connected only to a small local region of the image rather than to every pixel, as in an MLP.

This book is a great revision of some fundamentals in CV and DL, and it's really good at connecting every piece of information together.

Good tip: always check whether the technique you're learning (like flattening inputs) is a simplification or a limitation; sometimes the most important lesson is why a method fails.

