<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Personal Blog]]></title><description><![CDATA[Personal Blog]]></description><link>https://blog.nishchit.me</link><generator>RSS for Node</generator><lastBuildDate>Sat, 11 Apr 2026 03:40:17 GMT</lastBuildDate><atom:link href="https://blog.nishchit.me/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[How would I start learning SWE if I traveled back in time?]]></title><description><![CDATA[Let me start with this: I’m not a big name in the field of Software Engineering (SWE), and I don’t have many years of experience either. But with around a year of professional experience in this field, I believe I am currently at a point where I c...]]></description><link>https://blog.nishchit.me/how-would-i-start-learning-swe-if-i-traveled-back-in-time</link><guid isPermaLink="true">https://blog.nishchit.me/how-would-i-start-learning-swe-if-i-traveled-back-in-time</guid><dc:creator><![CDATA[Nishchit Bhandari]]></dc:creator><pubDate>Sun, 29 Jun 2025 17:39:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751048403360/989b0d98-3b13-4fc4-a091-5cacce9e5b20.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Let me start with this: I’m not a big name in the field of Software Engineering (SWE), and I don’t have many years of experience either. But with around a year of professional experience in this field, I believe I am currently at a point where I can clearly see the wrong choices I made when I was just getting started. In this blog, I will share the roadmap that, I believe, would have accelerated my SWE journey had I been able to travel back in time and follow it.</p>
<h1 id="heading-my-regret">My Regret</h1>
<p>One thing I regret a lot is diving directly into learning what people call “specialized tech” (for me, it was AI) before understanding the fundamentals of SWE. Why was that a mistake? When I started learning about and building AI projects, I realized I didn’t know the fundamentals of many things (Git, GitHub, OOP, etc.) that those projects required. Tutors of “specialized tech” often skip teaching the fundamentals, assuming that their students are already familiar with them.</p>
<h1 id="heading-the-roadmap">The Roadmap</h1>
<h2 id="heading-step-1-the-programming-fundamentals">STEP 1: The Programming Fundamentals</h2>
<p>The first step in your SWE journey is to choose a programming language and get good at it. How do you choose one? Pick a field that seems cool to you. For me, as I said previously, it was AI, so I started learning Python. For you, it could be blockchain, so you might start learning Rust. The goal in this step is simple: choose one language and ace its fundamentals. Understand this: learning multiple languages doesn’t make you great (or maybe it does, but not always). Learning one and acing it makes you great!</p>
<p>The fundamentals mostly include the following:</p>
<ol>
<li><p>Data types provided by the language: strings, numbers (integers, floats, and so on), arrays (or lists in some languages), dictionaries, and sets—every data type the language provides. Also learn the operations on and between these data types: mathematical operations on numbers, concatenating strings, reversing a string, searching through a list (using list methods), indexing and slicing a list, etc. Try to explore as many operations as the language offers.</p>
</li>
<li><p>Building simple logic-based projects (like a calculator, a cuboid’s volume calculator, a string reverser, a mark-sheet generator, etc.). You can also look for simple logic-based problems on <a target="_blank" href="https://www.hackerrank.com/">HackerRank</a>.</p>
</li>
<li><p>File I/O. How to open different types of files? How to read an existing file? How to create a new one? How to modify an existing one?</p>
</li>
<li><p>Using an existing library/framework. For this, I’d suggest you choose a cool one (for me, it was Pygame, through which we can make games in Python). Learn how to install a library/framework, how to import it, and what type of magic it does in your code.</p>
</li>
</ol>
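<p>To make these fundamentals concrete, here is a small Python sketch (the file name is just an example) touching data types, a few common operations, and basic file I/O:</p>

```python
# Core data types and a few common operations
langs = ["Python", "Rust", "Go"]      # a list
langs.append("C")                     # list method: add an element
print(langs[1:3])                     # slicing -> ['Rust', 'Go']

greeting = "hello"
print(greeting[::-1])                 # reversing a string -> 'olleh'

ages = {"alice": 30, "bob": 25}       # a dictionary
print(max(ages, key=ages.get))        # key with the largest value -> 'alice'

unique = set([1, 2, 2, 3])            # a set drops duplicates -> {1, 2, 3}
print(unique)

# Basic file I/O: create a file, write to it, read it back
with open("notes.txt", "w") as f:
    f.write("ace the fundamentals\n")
with open("notes.txt") as f:
    print(f.read().strip())           # -> ace the fundamentals
```

<p>Every language has its own version of each of these pieces; the point is to explore them one by one until none of them surprises you.</p>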
<p>After exploring this, I believe that you’re good to go to step 2.</p>
<h3 id="heading-side-learnings-in-step-1">Side learnings in Step 1:</h3>
<p>In the first step, while learning the things that I mentioned above, you can also explore:</p>
<ol>
<li><p>Git/GitHub: You’ll need this to collaborate with others in the future, so this is a must for any developer.</p>
</li>
<li><p>Reading a little bit of documentation: Memorizing a gazillion syntax rules and methods is simply not possible, no matter how professional a tech wizard you are. So you need to start learning things by reading documentation. Nowadays, AI tools like ChatGPT have largely replaced documentation, but I still suggest you go through the docs and try to understand them.</p>
</li>
</ol>
<h2 id="heading-step-2-the-exploration-step">STEP 2: The Exploration Step</h2>
<p>In this step, I’d suggest you look mainly into two things: <strong>OOP</strong> and <strong>APIs</strong>.</p>
<ol>
<li><p><strong>Why Object-Oriented Programming (OOP)?</strong>: Although some people say that OOP introduces a lot of unnecessary complexity, I’d absolutely suggest you have a look at it. I only understood the greatness of OOP after I learned it and actually used it. It allows you to write scalable, easily maintainable code and remove something called ‘tight coupling,’ where one part of your code is fully dependent on another, so changing one means restructuring the entire project. OOP introduces you to things like ‘duck typing,’ code architectures, etc., that let you write loosely coupled code. Want to change the whole database of a huge project from SQL to NoSQL? Just change a single file (or a maximum of 3) and you’ll be good to go, instead of rewriting the entire project. You get the idea.<br /> However, if the cool programming language you chose in step 1 doesn’t support OOP, you can skip this part. But I won’t stop talking in favor of OOP!</p>
</li>
<li><p><strong>APIs</strong>: APIs are everywhere. The Instagram story you posted, the LinkedIn job you applied to, and this blog you’re reading are all possible thanks to APIs. So I believe that learning how APIs work, how to use them, and how to build your own is among the most important things here. It’s something you WILL need no matter which ‘specialized tech’ you choose. If you want to deploy your AI model, you’ll need some sort of API for people to use it. If you want to learn hacking, you’ll need to first test it on your own API. It’s simply something you shouldn’t skip.<br /> Start simple. Learn to use someone else’s APIs, and learn about libraries in your chosen language that let you build APIs. Then learn to integrate a database into your API, secure it, and deploy it with the tools required. If the cool language you chose was Python, I have a great resource for you: <a target="_blank" href="https://youtu.be/0sOvCWFmrtA?si=gATcwhdazwlkERzv">This one!</a></p>
</li>
</ol>
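<p>To make the loose-coupling point concrete, here is a minimal, hypothetical Python sketch (the class and method names are mine, not from any real library). The <code>UserService</code> only cares that its store has <code>save</code> and <code>load</code> methods (duck typing), so swapping the backend is a one-line change:</p>

```python
# Two storage backends with the same interface ("duck typing"):
# any object with save() and load() works, no inheritance required.
class SQLStore:
    def __init__(self):
        self._rows = {}
    def save(self, key, value):
        self._rows[key] = value      # stand-in for an SQL INSERT
    def load(self, key):
        return self._rows[key]

class NoSQLStore:
    def __init__(self):
        self._docs = {}
    def save(self, key, value):
        self._docs[key] = value      # stand-in for a document write
    def load(self, key):
        return self._docs[key]

class UserService:
    """Depends only on the save/load interface, not on a concrete database."""
    def __init__(self, store):
        self.store = store
    def register(self, name):
        self.store.save(name, {"name": name})
        return self.store.load(name)

# Swapping the database means changing one line, not the whole project:
service = UserService(SQLStore())    # or UserService(NoSQLStore())
print(service.register("alice"))
```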
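<p>And as a taste of what building and consuming an API looks like, here is a toy sketch using only Python’s standard library. A real project would more likely use a framework like FastAPI or Flask, and the message here is arbitrary:</p>

```python
# A tiny JSON API built with the standard library, plus a client call.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"message": "hello from your first API"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):    # silence per-request logging
        pass

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Now act as the API's consumer:
with urlopen(f"http://127.0.0.1:{port}/") as resp:
    data = json.loads(resp.read())
print(data["message"])               # -> hello from your first API

server.shutdown()
```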
<h3 id="heading-side-learnings-in-step-2">Side learnings in Step 2:</h3>
<ol>
<li><p>I think this is the best time to step outside your comfort zone and choose a language different from the one you chose in step 1. If you’re working in Python or JS, I recommend trying C to understand how the things you take for granted in Python and JS actually work under the hood. You don’t need to grind the other language; just learn how things are done, and done properly, without causing an explosion!<br /> I think learning another language (especially one lower level than what you’re currently using) introduces you to a whole new world: a lot of technical terms you’ve never heard before and a lot of things whose inner workings you never questioned.</p>
</li>
<li><p>Talking about stepping outside the comfort zone, I think it’s also the perfect time to learn Linux. You’re just starting to learn API deployment, so picking up Linux at this step will definitely help with that. So, throw Windows out of your window and install Linux (or dual boot if you don’t feel comfortable completely abandoning Windows). Or install it in a VM if you’re using a Mac.</p>
</li>
</ol>
<h2 id="heading-step-3-exploring-the-specialized-tech-dsa">STEP 3: Exploring the Specialized Tech + DSA!</h2>
<ol>
<li><p>Now is the time to go into your favorite <strong>specialized tech</strong>! Again, start by acing the fundamentals through some course videos. Then, read (or at least try reading) research papers on the fundamental algorithms that are popular in your specialized tech, and implement them from scratch! (scratch = not using any libraries that provide ready-made implementations of the algorithms.) Then, you can read books to go deeper into the cool and magical world of your specialized tech! But don’t forget these golden words at every step of your path: ACE THE FUNDAMENTALS!</p>
</li>
<li><p>Start studying <strong>Data Structures and Algorithms (DSA)</strong>. This is something your interviewer will definitely ask about in a job interview, so you must know it. DSA will also help you build logic in your specialized tech field: which data structure and algorithm should you use in a given case? Do we prioritize performance or cost in this particular task? Is it even worth using this heavy algorithm for this light task? Learning DSA will help you answer these questions! Also, don’t forget to solve DSA problems on LeetCode!</p>
</li>
</ol>
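<p>As a small illustration of the “which data structure in what case” question, here is a Python sketch comparing membership lookups for the same data:</p>

```python
# The same task -- "is this value in my data?" -- with three structures.
import bisect

data = list(range(1_000_000))   # sorted list of a million numbers

# A list lookup scans every element: O(n) per query.
# Fine once, painfully slow for thousands of queries.
assert 999_999 in data

# A set hashes the value: O(1) on average per query,
# at the cost of extra memory to hold the hash table.
as_set = set(data)
assert 999_999 in as_set

# If the data is sorted and must stay a list, binary search is O(log n).
idx = bisect.bisect_left(data, 999_999)
assert idx < len(data) and data[idx] == 999_999
```

<p>Which of the three is “right” depends on exactly the performance-versus-cost trade-offs the questions above ask about.</p>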
<h3 id="heading-side-learnings-in-step-3">Side learnings in Step 3?</h3>
<p>I think, at this point, you’ll exactly know what you want and what you have to learn side by side, so you probably don’t have to take my suggestion on this one!</p>
<h1 id="heading-extra-tips">Extra tips!</h1>
<ol>
<li><p>Be curious and ask every question that comes to your mind! You may think that people will judge you, but honestly, they won’t. And it doesn’t matter even if they do! It’s for your betterment.<br /> By being curious, I mean ask stupid questions, ask genius questions, ask this question, ask that question, but don’t stop asking questions. These days, chatbots can help with questions, but I suggest you ask a real human! That way, they’ll offer extra advice from their own experience that ChatGPT won’t.</p>
</li>
<li><p>Document the learning process. If you’re comfortable sharing blogs of your learning, you can do so. If you’re not, write notes. This will help you in two ways:</p>
<ol>
<li><p>You’ll have something to quickly revise if you forget something.</p>
</li>
<li><p>You memorize things quickly if you write them down.</p>
</li>
</ol>
</li>
<li><p>Mentor others. Teaching others the concept you learned will help you in two ways:</p>
<ol>
<li><p>It’ll help you revise what you’ve learned.</p>
</li>
<li><p>If your mentees ask some question that you never expected, it’ll force you to experiment more, learn more, and become more curious.</p>
</li>
</ol>
</li>
<li><p>Build real-world projects and push them to GitHub. This will help you in two ways:</p>
<ol>
<li><p>You’ll understand real-world problems, which gives your recruiter the impression that you’ll be able to solve their problems too.</p>
</li>
<li><p>Sharpen the skills you learned by implementing things practically.</p>
</li>
</ol>
</li>
<li><p>Contribute to open-source or voluntary projects. This will help you in two ways:</p>
<ol>
<li><p>Develop your team communication skills and let your recruiter know that you are a team player.</p>
</li>
<li><p>You’ll learn to understand code that someone else has written. And I think this is an even more important point than the one above.</p>
</li>
</ol>
</li>
</ol>
<h1 id="heading-ending-notes">Ending Notes</h1>
<p>The roadmap I’ve laid out above would have worked pretty well for me had I followed it from the start. But your scenario might be different, and you might want to make adjustments to this roadmap, and that’s completely valid! Do your research, play around with things, and make as many adjustments as you want!</p>
]]></content:encoded></item><item><title><![CDATA[What is RAG and why is it needed?]]></title><description><![CDATA[What is RAG?
Among the AI buzzwords that we've been hearing a lot these days, "RAG" is one of the most common ones. Also known as Retrieval Augmented Generation, RAG is a technique used in natural language processing (NLP) where a language model is c...]]></description><link>https://blog.nishchit.me/what-is-rag-and-why-is-it-needed</link><guid isPermaLink="true">https://blog.nishchit.me/what-is-rag-and-why-is-it-needed</guid><category><![CDATA[RAG ]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[large language models]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[claude.ai]]></category><category><![CDATA[google gemini]]></category><category><![CDATA[AI]]></category><category><![CDATA[llm]]></category><category><![CDATA[Python]]></category><category><![CDATA[vector database]]></category><dc:creator><![CDATA[Nishchit Bhandari]]></dc:creator><pubDate>Mon, 16 Dec 2024 18:53:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1734374657829/0c75e8ed-16fd-4171-9565-3061df673e4a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-rag">What is RAG?</h2>
<p>Among the AI buzzwords that we've been hearing a lot these days, "RAG" is one of the most common. Short for Retrieval-Augmented Generation, RAG is a technique used in natural language processing (NLP) where a language model is combined with a retrieval mechanism to generate more accurate and contextually relevant responses. The idea is to augment a generative model (like GPT) with information retrieved from an external knowledge base or corpus. In simpler terms, RAG means storing external knowledge (such as information about a company, for the company's chatbot) in a storage service (usually a vector database), then retrieving that knowledge and providing it to the LLM as context to get more accurate and relevant responses.</p>
<h2 id="heading-how-does-rag-work">How does RAG work?</h2>
<p>Refer to the flowchart above while reading this for a better understanding.</p>
<ol>
<li><strong>Data Sources</strong>: <ul>
<li>It starts with structured or unstructured data (e.g., text files, PDFs).</li>
</ul>
</li>
<li><p><strong>Chunking</strong>: </p>
<ul>
<li>The large text data is split into smaller, manageable chunks for processing.</li>
</ul>
</li>
<li><p><strong>Embedding Model</strong>: </p>
<ul>
<li>Each chunk of text is passed through an embedding model to generate numerical vectors (embeddings) that represent the meaning of the text.</li>
</ul>
</li>
<li><p><strong>Vector Database</strong>: </p>
<ul>
<li>These embeddings, along with their corresponding text chunks, are stored in a vector database.</li>
</ul>
</li>
<li><p><strong>User Query</strong>: </p>
<ul>
<li>When a user submits a query (e.g., <em>"What color is apple?"</em>), it is also passed through the embedding model to create a query embedding.</li>
</ul>
</li>
<li><p><strong>Retrieval System</strong>: </p>
<ul>
<li>The system retrieves the most relevant text chunk(s) from the vector database based on the similarity between the query embedding and stored embeddings.</li>
</ul>
</li>
<li><p><strong>Context to LLM</strong>: </p>
<ul>
<li>The retrieved chunk(s) (e.g., <em>"Apple is red"</em>) are provided as context to the Large Language Model (LLM), alongside the original query.</li>
</ul>
</li>
<li><p><strong>LLM Response</strong>: </p>
<ul>
<li>The LLM processes the provided context and query to generate a relevant, accurate response (e.g., <em>"The color of apple is red"</em>).</li>
</ul>
</li>
</ol>
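<p>The eight steps above can be sketched in a few lines of Python. To keep it self-contained, the "embedding model" below is just a bag-of-words counter and there is no real LLM call; a production system would use a proper embedding model and vector database:</p>

```python
# Toy walkthrough of the RAG pipeline: chunks -> embeddings -> retrieval -> prompt.
import math
from collections import Counter

# Steps 1-2: data, already split into chunks.
chunks = [
    "Apple is red.",
    "The sky is blue.",
    "Bananas are yellow.",
]

def embed(text):
    # Step 3 stand-in for a real embedding model: a word-count vector.
    return Counter(text.lower().replace(".", "").replace("?", "").split())

def cosine(a, b):
    # Similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Step 4: "vector database" -- embeddings stored alongside their chunks.
db = [(embed(c), c) for c in chunks]

# Step 5: the user query gets embedded the same way.
query = "What color is apple?"
q_vec = embed(query)

# Step 6: retrieval -- pick the chunk most similar to the query.
best = max(db, key=lambda pair: cosine(q_vec, pair[0]))[1]

# Step 7: the retrieved chunk becomes context for the LLM prompt.
prompt = f"Context: {best}\n\nQuestion: {query}"
print(prompt)    # step 8 would send this prompt to the LLM
```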
<h2 id="heading-why-do-we-need-rag">Why do we need RAG?</h2>
<p>Looking at how the RAG system works, you may have a question: can’t we just provide the context to the LLM in the user message?
While that might work if the data is small, it usually doesn’t when the data is large, mainly due to the <strong>context window limit</strong> of the large language model. This simply means that the model can’t process an input prompt whose token count exceeds its context window (for example, the context window of gpt-4o is 128k tokens). So, the RAG system solves this problem by providing small but closely relevant pieces of context to the language model.</p>
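<p>As a rough illustration of why large data has to be split up, here is a naive Python chunker. Word counts stand in for token counts here; a real system would measure tokens with the model's tokenizer:</p>

```python
# Naive chunking: split a long document into pieces that fit a budget.
def chunk(text, max_words=100):
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

doc = "word " * 250                  # a "large" document of 250 words
pieces = chunk(doc, max_words=100)
print(len(pieces))                   # -> 3 chunks (100 + 100 + 50 words)
```

<p>RAG then embeds each of these pieces and sends the model only the few that matter for the query, instead of the whole document.</p>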
]]></content:encoded></item></channel></rss>