<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Aqib Ansari]]></title><description><![CDATA[Aqib Ansari]]></description><link>https://blog.aqibansari.xyz</link><generator>RSS for Node</generator><lastBuildDate>Sat, 02 May 2026 22:55:59 GMT</lastBuildDate><atom:link href="https://blog.aqibansari.xyz/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Scalable Video Processing and Streaming Pipeline with AWS]]></title><description><![CDATA[When I was building my video-based cohort review platform, I faced a big challenge:

How do I let students upload large video reviews, process them into multiple resolutions for smooth playback, and m]]></description><link>https://blog.aqibansari.xyz/scalable-video-processing-and-streaming-pipeline-with-aws</link><guid isPermaLink="true">https://blog.aqibansari.xyz/scalable-video-processing-and-streaming-pipeline-with-aws</guid><category><![CDATA[AWS]]></category><category><![CDATA[lambda]]></category><category><![CDATA[FFmpeg]]></category><dc:creator><![CDATA[Aqib Ansari]]></dc:creator><pubDate>Sun, 17 Aug 2025 12:08:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/65b7c4e1ed6765cc77caff38/4d814ec2-c3a7-4828-b2e9-bb4cad855077.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I was building my video-based cohort review platform, I faced a big challenge:</p>
<blockquote>
<p>How do I let students upload large video reviews, process them into multiple resolutions for smooth playback, and make them available to users quickly without overloading my backend server?</p>
</blockquote>
<p>The solution:</p>
<p>An event-driven, serverless video processing pipeline powered by AWS S3, ECS, Lambda, and ffmpeg.</p>
<h2>The Problem</h2>
<p>If you try to handle video uploads and processing inside your main backend server, you’ll quickly run into trouble:</p>
<ul>
<li><p>Large uploads will consume your backend bandwidth and slow down other requests.</p>
</li>
<li><p>Processing videos in real time will block your app and lead to timeouts.</p>
</li>
<li><p>Serving raw video files isn’t streaming- or browser-friendly, so users will face buffering issues.</p>
</li>
</ul>
<p>And none of these approaches scale.</p>
<p>I needed a way to:</p>
<ul>
<li><p>Offload uploads from the backend.</p>
</li>
<li><p>Process videos asynchronously after upload.</p>
</li>
<li><p>Output streaming-ready formats with multiple quality options.</p>
</li>
</ul>
<h2>The Solution</h2>
<p>Let's walk through the architecture.</p>
<h3>Direct upload to S3 bucket with signed URLs</h3>
<p>Instead of uploading videos to my backend, the frontend:</p>
<ol>
<li><p>Requests a signed URL from the backend.</p>
</li>
<li><p>Uploads the raw video directly to a temporary S3 bucket.</p>
</li>
</ol>
<p><strong>Why?</strong></p>
<p>No backend bandwidth bottleneck.</p>
<p>Faster uploads for users.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755379384202/2c8877f5-8ed3-4ea5-9243-c8964d4a5626.png" alt class="image--center mx-auto" /></p>
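<p>As a sketch of this step, the backend can build the parameters for a presigned <code>PUT</code> upload. The bucket name, key layout, and content type below are assumptions for illustration; in a real setup you would pass these params to <code>PutObjectCommand</code> and <code>getSignedUrl</code> from <code>@aws-sdk/s3-request-presigner</code>.</p>

```javascript
// Build the inputs for a presigned upload URL.
// Bucket name, key layout and content type are illustrative assumptions.
const TEMP_BUCKET = "raw-videos-temp"; // hypothetical temporary bucket

function buildUploadParams(userId, fileName) {
  // Namespace raw uploads per user and timestamp so keys never collide.
  const key = `uploads/${userId}/${Date.now()}-${fileName}`;
  return {
    Bucket: TEMP_BUCKET,
    Key: key,
    ContentType: "video/mp4",
  };
}

// In the real backend (sketch, not executed here):
//   const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
//   const url = await getSignedUrl(s3Client, new PutObjectCommand(params), { expiresIn: 900 });

const params = buildUploadParams("user-123", "review.mp4");
console.log(params.Bucket, params.Key);
```

<p>The frontend then issues a plain <code>PUT</code> to the returned URL, so the video bytes never touch the backend.</p>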
<h3>S3 Event → SQS Queue</h3>
<p>When a new video is uploaded, S3 sends an event notification to SQS.</p>
<p>Why SQS is great here:</p>
<ol>
<li><p><strong>Reliability</strong>: If processing is delayed, messages will wait in the queue.</p>
</li>
<li><p><strong>Scalability</strong>: Can handle spikes in uploads without overwhelming the video processing service.</p>
</li>
<li><p><strong>Retry support</strong>: Failed processing attempts can be retried automatically.</p>
</li>
</ol>
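<p>The S3 event arrives wrapped inside the SQS message body. A minimal sketch of unwrapping it (the event shape follows the documented S3 notification format; bucket and key values here are just examples):</p>

```javascript
// Extract the bucket and object key from an S3 event delivered via SQS.
// S3 URL-encodes object keys (spaces become "+"), so decode before use.
function parseS3Record(sqsMessageBody) {
  const event = JSON.parse(sqsMessageBody);
  const record = event.Records[0];
  return {
    bucket: record.s3.bucket.name,
    key: decodeURIComponent(record.s3.object.key.replace(/\+/g, " ")),
  };
}

// Example message body, shaped like an S3 event notification:
const body = JSON.stringify({
  Records: [{
    s3: {
      bucket: { name: "raw-videos-temp" },
      object: { key: "uploads/user-123/my+review.mp4" },
    },
  }],
});
console.log(parseS3Record(body));
// → { bucket: "raw-videos-temp", key: "uploads/user-123/my review.mp4" }
```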
<h3>SQS → Lambda → ECS</h3>
<p>When the upload event from S3 lands in the queue, SQS triggers a Lambda function. The Lambda function reads the message (which carries details such as the object key) and then starts an ECS task with the proper configuration and environment variables.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755382636457/3730c8e6-1b41-4bc3-a25b-f2a5d5e89c6c.png" alt class="image--center mx-auto" /></p>
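<p>A sketch of what the Lambda hands to ECS. The cluster, task definition, container name, and subnet are placeholders; the actual call would wrap this object in a <code>RunTaskCommand</code> from <code>@aws-sdk/client-ecs</code>.</p>

```javascript
// Build the RunTask input that launches the processing container.
// Cluster/task/container names and the subnet are illustrative placeholders.
function buildRunTaskInput(bucket, key) {
  return {
    cluster: "video-processing-cluster",
    taskDefinition: "video-transcoder",
    launchType: "FARGATE",
    networkConfiguration: {
      awsvpcConfiguration: { subnets: ["subnet-placeholder"], assignPublicIp: "ENABLED" },
    },
    overrides: {
      containerOverrides: [{
        name: "transcoder",
        // The container reads these to know which object to process.
        environment: [
          { name: "INPUT_BUCKET", value: bucket },
          { name: "INPUT_KEY", value: key },
        ],
      }],
    },
  };
}

const input = buildRunTaskInput("raw-videos-temp", "uploads/user-123/review.mp4");
console.log(input.overrides.containerOverrides[0].environment);
```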
<h3>Video Processing Inside ECS</h3>
<p>Inside ECS, the task:</p>
<ol>
<li><p>Downloads the raw video from the temporary bucket using the object key.</p>
</li>
<li><p>Uses FFmpeg to transcode it into multiple resolutions (1080p, 720p, 480p).</p>
</li>
<li><p>Splits it into chunks for <a href="https://www.cloudflare.com/learning/video/what-is-adaptive-bitrate-streaming/">adaptive bitrate streaming</a> (<a href="https://www.cloudflare.com/learning/video/what-is-http-live-streaming/">HLS</a>).</p>
</li>
<li><p>Uploads the processed files to the final S3 bucket with the structure shown below.</p>
</li>
</ol>
<p>You can find a sample FFMPEG usage <a href="https://github.com/aqib0770/utils/blob/main/ffmpeg.md">here</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755384794462/f6e13236-1e76-4c85-8ff6-c73d18a808e2.png" alt class="image--center mx-auto" /></p>
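<p>The FFmpeg invocation for one rung of the quality ladder can be sketched like this. The flags follow standard FFmpeg HLS options, while the 10-second segment length and codec choices are assumed defaults, not the exact settings from my pipeline.</p>

```javascript
// Build FFmpeg arguments that transcode one resolution into an HLS playlist.
// Segment length and the H.264/AAC codecs are common, assumed defaults.
function hlsArgs(input, height, outDir) {
  return [
    "-i", input,
    "-vf", `scale=-2:${height}`,   // keep aspect ratio; -2 forces an even width
    "-c:v", "libx264",
    "-c:a", "aac",
    "-hls_time", "10",             // target segment duration in seconds
    "-hls_playlist_type", "vod",
    "-hls_segment_filename", `${outDir}/${height}p_%03d.ts`,
    `${outDir}/${height}p.m3u8`,
  ];
}

// One process per rendition (1080p, 720p, 480p), e.g. via child_process.spawn:
//   spawn("ffmpeg", hlsArgs("input.mp4", 720, "out/720"));
console.log(hlsArgs("input.mp4", 720, "out").join(" "));
```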
<h3>Final Bucket → DB Update</h3>
<p>The final S3 bucket is configured with another event notification. This triggers a Lambda function that updates the database with appropriate URLs so that the frontend can display the processed videos.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755432236440/1884b9d0-e33b-414e-9da2-2e3e05416e86.png" alt class="image--center mx-auto" /></p>
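<p>The core of that Lambda can be sketched as: derive the video id and the playback URL from the processed object's key. The key layout (<code>&lt;videoId&gt;/master.m3u8</code>) and the CDN domain are hypothetical here.</p>

```javascript
// Derive the playback record stored in the database from a processed object key.
// Key layout "<videoId>/master.m3u8" and the CDN domain are assumptions.
const CDN_DOMAIN = "https://cdn.example.com"; // hypothetical CloudFront domain

function toPlaybackRecord(objectKey) {
  const [videoId] = objectKey.split("/");
  return { videoId, playbackUrl: `${CDN_DOMAIN}/${objectKey}` };
}

console.log(toPlaybackRecord("abc123/master.m3u8"));
// → { videoId: "abc123", playbackUrl: "https://cdn.example.com/abc123/master.m3u8" }
```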
<h2>Secure and Fast Video Streaming</h2>
<p>After processing, it's crucial to ensure that the videos can only be viewed from your platform. Simply hosting videos in Amazon S3 and serving them directly can leave them vulnerable to unauthorized sharing. Additionally, serving videos straight from S3 is not efficient from a performance perspective.</p>
<p>To tackle these problems, we will use a Content Delivery Network (CDN) which serves videos from edge servers spread around the world. AWS CloudFront provides a CDN service that allows smooth and secure video streaming.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755432091622/8d3af940-2e85-4ee7-9fa1-63ff94b8dc1e.jpeg" alt class="image--center mx-auto" /></p>
<h2>Video Serving Architecture</h2>
<p>In AWS CloudFront, we create a distribution that points to the S3 bucket from which we want to serve the content. For security, we must block all public access to the bucket and remove all CORS configurations so that only CloudFront can access resources from the S3 bucket.</p>
<p>Before understanding video streaming, let's understand signed URLs and signed cookies.</p>
<p><strong>Signed URLs:</strong> A signed URL is a normal CloudFront URL with a signature, expiration time, and policy information embedded in it. The user must use that exact URL to access the resources. It is suitable when we want to serve a single file.</p>
<p><strong>Signed Cookies:</strong> A signed cookie provides authentication details (signature, policy, expiration) in cookies rather than in signed URLs. Once set, all requests from the browser automatically include the cookies. When using signed cookies, the URL structure of the resource remains clean and unchanged. Any request that matches the cookie policy is allowed.</p>
<p>In our use case, where we serve multiple files (a playlist of video chunks), signed URLs would be inefficient because signing every URL is not feasible, so we will use signed cookies.</p>
<p><strong>Generating signed cookies:</strong></p>
<p>Refer to the official AWS documentation to generate signed cookies using <a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-cloudfront-signer/">AWS SDK</a>.</p>
<blockquote>
<p>Note: when generating the public/private key pair used for signed cookies, ensure that your Node.js version is &lt;20.</p>
</blockquote>
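<p>For reference, here is a sketch of the custom policy that the signed cookies encode. The JSON shape follows CloudFront's documented custom-policy format, but the domain and expiry window are placeholders; the actual signing is done by <code>getSignedCookies</code> from <code>@aws-sdk/cloudfront-signer</code>.</p>

```javascript
// Build a CloudFront custom policy granting access to every file of one video
// until the given expiry. The domain and expiry window are assumptions.
function buildPolicy(videoId, expiresInSeconds) {
  const expiry = Math.floor(Date.now() / 1000) + expiresInSeconds;
  return JSON.stringify({
    Statement: [{
      // The wildcard lets one cookie cover the playlist and all its .ts chunks.
      Resource: `https://cdn.example.com/${videoId}/*`,
      Condition: { DateLessThan: { "AWS:EpochTime": expiry } },
    }],
  });
}

// Sketch of the real signing call (not executed here):
//   const { getSignedCookies } = require("@aws-sdk/cloudfront-signer");
//   const cookies = getSignedCookies({ policy, keyPairId, privateKey });

const policy = buildPolicy("abc123", 3600);
console.log(JSON.parse(policy).Statement[0].Resource);
// → "https://cdn.example.com/abc123/*"
```

<p>The backend then sets the three resulting cookies (<code>CloudFront-Policy</code>, <code>CloudFront-Signature</code>, <code>CloudFront-Key-Pair-Id</code>) on the response, and the browser sends them automatically with every chunk request.</p>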
<p>When assets are accessed through CloudFront, the signed cookies are sent in the request headers on every file request. If the cookies are missing or the policy does not match, the content is not served.</p>
<p>When a CloudFront edge server receives a request, it first checks its cache. On a cache hit it serves the content directly; otherwise it fetches the content from the S3 bucket once and caches it for subsequent requests.</p>
<p>With this approach, we can serve content securely as well as quickly, because there are hundreds of CDN edge servers scattered around the world.</p>
<h2>Conclusion</h2>
<p>By combining AWS's event-driven services with ffmpeg, I have created a video pipeline that is scalable, reliable, and streaming-friendly.</p>
<ul>
<li><p>Uploads are offloaded directly to S3, avoiding bottlenecks.</p>
</li>
<li><p>Processing is handled asynchronously in ECS with ffmpeg, producing multiple resolutions for adaptive streaming.</p>
</li>
<li><p>Delivery is secured and accelerated using CloudFront with signed cookies, ensuring smooth playback across devices and locations.</p>
</li>
</ul>
<p>This approach keeps the system modular and cost-efficient: each component (upload, processing, delivery) can scale independently based on demand.</p>
<p>If you are building a video-based platform, this architecture gives you a production-ready blueprint to handle video at scale.</p>
]]></content:encoded></item><item><title><![CDATA[Teleportation and JavaScript: The Science of Serialization and Deserialization]]></title><description><![CDATA[Imagine a future where teleportation is real. How would it work? Well, it is not much different from how JavaScript handles serialization and deserialization. Before we teleport an object (any item or human being), we need to convert it to a transfer...]]></description><link>https://blog.aqibansari.xyz/teleportation-and-javascript-the-science-of-serialization-and-deserialization</link><guid isPermaLink="true">https://blog.aqibansari.xyz/teleportation-and-javascript-the-science-of-serialization-and-deserialization</guid><category><![CDATA[ChaiCode]]></category><dc:creator><![CDATA[Aqib Ansari]]></dc:creator><pubDate>Thu, 13 Feb 2025 00:46:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739404397053/27fec8f5-04b2-49b0-b617-3cfa68807091.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine a future where teleportation is real. How would it work? Well, it is not much different from how JavaScript handles serialization and deserialization. Before we teleport an object (any item or human being), we need to convert it to a transferable format (serialize), and then at the destination, we need to reconstruct it (deserialize). Let's dive into this fascinating analogy to understand serialization and deserialization in detail.</p>
<h2 id="heading-teleportation-the-ultimate-data-transfer">Teleportation: The Ultimate Data Transfer</h2>
<p>In teleportation, think of our body as a complex JavaScript object. It has properties like name, age, hobbies, etc. But to teleport this object from one place to another, we can’t just send it as it is. Instead, we need to convert this object or body into a transferable format. This is where <strong>Serialization</strong> comes in.</p>
<h3 id="heading-serialization-breaking-it-down">Serialization: Breaking it Down</h3>
<p>Serialization is like the dismantling process in teleportation. It takes your complex JavaScript object and converts it into a simpler, more portable format, usually a string. In JavaScript, the most common format for serialization is JSON (JavaScript Object Notation). JSON is like the universal language of teleportation: it is easy to understand and works across different platforms.</p>
<p>Here’s how it works:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739406339042/c425946c-db2c-49c2-8846-dd6d7ec5fcb5.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-javascript"><span class="hljs-comment">//Your body as JavaScript object</span>
<span class="hljs-keyword">const</span> person = {
    <span class="hljs-attr">name</span>: <span class="hljs-string">"Aqib Ansari"</span>,
    <span class="hljs-attr">age</span>: <span class="hljs-number">20</span>,
    <span class="hljs-attr">hobbies</span>: [<span class="hljs-string">'coding'</span>, <span class="hljs-string">'roaming'</span>, <span class="hljs-string">'sleeping'</span>],
    <span class="hljs-attr">contact</span>: {    
        <span class="hljs-attr">email</span>: <span class="hljs-string">"aqibansari72a@gmail.com"</span>,
        <span class="hljs-attr">phone</span>: <span class="hljs-number">8591738255</span>,
    }
}

<span class="hljs-comment">// Serialization: Dismantling the person</span>
<span class="hljs-keyword">const</span> jsonString = <span class="hljs-built_in">JSON</span>.stringify(person)
<span class="hljs-built_in">console</span>.log(jsonString)
<span class="hljs-comment">// Output: {"name":"Aqib Ansari","age":20,"hobbies":["coding","roaming","sleeping"],"contact":{"email":"aqibansari72a@gmail.com","phone":8591738255}}</span>
</code></pre>
<p>In the above example, the <code>JSON.stringify()</code> method acts as a teleportation chamber, breaking down the <code>person</code> object into a JSON string. This string is like a stream of particles that can easily be transmitted across the network or stored in a database for future transmission.</p>
<h2 id="heading-deserialization-rebuilding-at-the-destination">Deserialization: Rebuilding at the Destination</h2>
<p>Once the serialized data reaches its destination, it needs to be reassembled into its original form. This is where deserialization comes in. It is like the assembly process in teleportation where each stream of particles is reconstructed into the body.</p>
<p>In JavaScript, <strong>Deserialization</strong> is done using the <code>JSON.parse()</code> method. It takes the JSON string and converts it back into a JavaScript object that is ready to be used in our application.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739406711907/c9f6e988-37e9-4abc-ac09-d672e6c78bf7.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-javascript"><span class="hljs-comment">// Deserialization: Rebuilding the object</span>
<span class="hljs-keyword">const</span> personObject = <span class="hljs-built_in">JSON</span>.parse(jsonString)
<span class="hljs-built_in">console</span>.log(personObject)
<span class="hljs-comment">/* Output:
{
  name: 'Aqib Ansari',
  age: 20,
  hobbies: [ 'coding', 'roaming', 'sleeping' ],
  contact: { email: 'aqibansari72a@gmail.com', phone: 8591738255 }
}
*/</span>
</code></pre>
<p>The deserialization process ensures that the data arrives intact and ready to use. Our body or object is now fully functional at its destination.</p>
<h2 id="heading-the-challenges-in-teleportation-serialization">The Challenges in Teleportation (Serialization)</h2>
<p>Since teleportation is a very complex process, it comes with its own challenges. What if some data loss occurs during transmission and a part of our body goes missing at the receiving end? Similarly, serialization in JavaScript has its limitations. Let's explore some of them.</p>
<h3 id="heading-functions-the-lost-abilities">Functions: The Lost Abilities</h3>
<p>In teleportation, our abilities might not be transmitted because they are not part of the physical structure. Similarly, JSON serialization doesn’t support functions. If our object has methods, they’ll be lost during serialization.</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// Our body as JavaScript object</span>
<span class="hljs-keyword">const</span> person = {
    <span class="hljs-attr">name</span>: <span class="hljs-string">"Aqib Ansari"</span>,
    <span class="hljs-attr">age</span>: <span class="hljs-number">20</span>,
    <span class="hljs-attr">isSmart</span>: <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params"></span>)</span>{
         <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>
    }
}

<span class="hljs-keyword">const</span> jsonString = <span class="hljs-built_in">JSON</span>.stringify(person)
<span class="hljs-built_in">console</span>.log(jsonString)
<span class="hljs-comment">// Output: {"name":"Aqib Ansari","age":20}</span>
</code></pre>
<p>To handle this, we need to manually reattach the methods after deserialization.</p>
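<p>For example, a lost method can be restored on the parsed object like this:</p>

```javascript
// Methods are dropped by JSON.stringify, so reattach them after parsing.
const jsonString = '{"name":"Aqib Ansari","age":20}';
const person = JSON.parse(jsonString);

// Reattach the lost ability at the destination.
person.isSmart = function () {
  return true;
};

console.log(person.isSmart()); // → true
```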
<h3 id="heading-data-corruption-a-misplaced-finger">Data Corruption: A Misplaced Finger</h3>
<p>Imagine reassembling the body with an extra toe or a misplaced finger. In JavaScript, this can happen when serialized data is altered or misinterpreted during transmission. For example, <strong>Dates</strong> in JavaScript are objects, but when serialized they become strings. If not handled properly, they might not be converted back to <code>Date</code> objects correctly.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> person = {
    <span class="hljs-attr">name</span>: <span class="hljs-string">"Aqib Ansari"</span>,
    <span class="hljs-attr">birthDate</span>: <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>()
}

<span class="hljs-keyword">const</span> jsonString = <span class="hljs-built_in">JSON</span>.stringify(person)
<span class="hljs-built_in">console</span>.log(jsonString)
<span class="hljs-comment">// Output: {"name":"Aqib Ansari","birthDate":"2025-02-12T23:08:25.439Z"}</span>

<span class="hljs-keyword">const</span> personObject = <span class="hljs-built_in">JSON</span>.parse(jsonString)
<span class="hljs-built_in">console</span>.log(<span class="hljs-keyword">typeof</span>(personObject.birthDate)) <span class="hljs-comment">// Output: string</span>
</code></pre>
<p>To fix this, we can pass a reviver function to <code>JSON.parse()</code> during deserialization to convert the string back to a <code>Date</code> object.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> personObject = <span class="hljs-built_in">JSON</span>.parse(jsonString, <span class="hljs-function">(<span class="hljs-params">key, value</span>) =&gt;</span> {
    <span class="hljs-keyword">if</span> (key === <span class="hljs-string">'birthDate'</span>){
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>(value) <span class="hljs-comment">// Convert string to date</span>
    }
    <span class="hljs-keyword">return</span> value
})
<span class="hljs-built_in">console</span>.log(<span class="hljs-keyword">typeof</span>(personObject.birthDate)) <span class="hljs-comment">// Output: object (a Date instance)</span>
</code></pre>
<h2 id="heading-conclusion-mastering-the-art-of-data-teleportation">Conclusion: Mastering The Art Of Data Teleportation</h2>
<p>Serialization and deserialization are the backbone of data exchange in JavaScript, much like teleportation might be the backbone of futuristic travel. By understanding these processes, you can ensure that the data is transmitted and reconstructed accurately without any alteration.</p>
<p>So next time you are working with JSON or debugging a serialization or deserialization issue, imagine yourself as a teleportation scientist, carefully disassembling and reassembling data so it travels seamlessly across the digital universe.</p>
<p>If you find this article meaningful and enjoyable, engage with it.</p>
<p>Thank you for reading!</p>
]]></content:encoded></item><item><title><![CDATA[Internet and Behind the Scenes]]></title><description><![CDATA[Hello everyone, have you ever wondered how you reached this page so easily?
Well, after reading this article, you'll get the answer and realize that all these behind-the-scenes mechanisms are inspired by real life.
First, lets understand why does int...]]></description><link>https://blog.aqibansari.xyz/internet-and-behind-the-scenes</link><guid isPermaLink="true">https://blog.aqibansari.xyz/internet-and-behind-the-scenes</guid><category><![CDATA[ChaiCode]]></category><dc:creator><![CDATA[Aqib Ansari]]></dc:creator><pubDate>Thu, 30 Jan 2025 12:17:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738238790163/e555b60a-285a-4947-8e2e-39be3c42e375.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello everyone, have you ever wondered how you reached this page so easily?</p>
<p>Well, after reading this article, you'll get the answer and realize that all these behind-the-scenes mechanisms are inspired by real life.</p>
<p>First, let's understand why the internet matters in today’s world.</p>
<h2 id="heading-why-does-internet-matters">Why does the internet matter?</h2>
<ul>
<li><p><strong>Global Communication:</strong> Enables instant messaging, emails and video calls.</p>
</li>
<li><p><strong>Information Sharing</strong>: Allows access to vast amounts of knowledge, research and news.</p>
</li>
<li><p><strong>Entertainment</strong>: Streaming services, gaming and social media.</p>
</li>
<li><p><strong>Education and innovation</strong>: Enables online courses, remote work and technological advancements.</p>
</li>
</ul>
<p>In the above examples, you will notice that, in one way or another, data is being shared from one point to another. The whole idea of the internet is built on this ‘data sharing’. So to understand the internet, we must understand how data is shared. Now, let's move on to the technical terms.</p>
<h2 id="heading-the-packets-path-navigating-the-digital-highway">The Packet’s Path: Navigating the Digital Highway</h2>
<p>Data travelling from one point to another is broken down into small chunks called packets. Now I will take you through the entire journey of these packets.</p>
<h3 id="heading-planning-the-trip">Planning The Trip:</h3>
<p>When we search for a website (like <a target="_blank" href="https://www.google.com/">www.google.com</a>) in a browser, the browser needs the IP address of that website. To get it, the browser sends the website name to a DNS server, which resolves the name to its IP address.</p>
<p><strong>Analogy</strong>: If we have to go to someone’s house, we need their exact address rather than just the house name.</p>
<p>Now we have the IP address of www.google.com, which is 142.250.183.100.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738191114540/da6d3345-52ee-4d82-8a8a-d3584c0ebf8e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-preparing-the-journey">Preparing The Journey:</h3>
<p>Now that we have the address to send the data to, our browser constructs HTTP/HTTPS requests to facilitate this transfer. These requests include:</p>
<ul>
<li><p>Method (GET, POST, DELETE, etc.)</p>
</li>
<li><p>Headers (User Agent, Host, etc.)</p>
</li>
<li><p>Request Body (the actual data)</p>
</li>
</ul>
<p><strong>Analogy</strong>: This process is similar to packing our luggage before visiting a house.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738192390649/147c625f-189a-497d-83a5-d5094ab0f04f.png" alt class="image--center mx-auto" /></p>
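<p>To make the ‘luggage’ concrete, here is roughly what a raw HTTP request looks like on the wire, assembled by hand (headers simplified; the user-agent string is a made-up example):</p>

```javascript
// Assemble a minimal raw HTTP/1.1 request the way a browser would.
// Header lines are separated by CRLF, and a blank line ends the header block.
function buildRequest(method, host, path) {
  return [
    `${method} ${path} HTTP/1.1`, // request line: method, path, version
    `Host: ${host}`,
    "User-Agent: demo-browser/1.0",
    "", "",                       // blank line terminates the headers
  ].join("\r\n");
}

console.log(buildRequest("GET", "www.google.com", "/"));
// GET / HTTP/1.1
// Host: www.google.com
// User-Agent: demo-browser/1.0
```

<p>A <code>POST</code> request would carry its body after that blank line, which is exactly the ‘actual data’ packed into our luggage.</p>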
<h3 id="heading-starting-the-journey">Starting The Journey:</h3>
<p>When our HTTP request is ready, it is sent over the internet using specific protocols that dictate how the data transfer should occur. The two primary protocols are:</p>
<ul>
<li><p>TCP (Transmission Control Protocol): Ensures reliable, ordered and error-checked delivery of data. Used for web pages, emails and file transfer.</p>
</li>
<li><p>UDP (User Datagram Protocol): A faster but less reliable protocol used for real-time applications like video streaming, online gaming and video calls.</p>
</li>
</ul>
<p><strong>Analogy</strong>: TCP is like taking a planned route with checkpoints, ensuring you arrive safely, while UDP is like taking the fastest route without worrying about checkpoints.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738193683216/128c599e-de57-46e0-a473-be8662aa16cf.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-the-role-of-routers-and-switches">The Role of Routers and Switches:</h3>
<p>Data packets don’t travel from the sender to the receiver directly. Instead, they pass through multiple <strong>routers</strong> and <strong>switches</strong> to reach the final destination efficiently.</p>
<ul>
<li><p>Routers: These act like highway interchanges, directing packets along the best possible path to reach the destination.</p>
</li>
<li><p>Switches: These operate within the local network (e.g., home or office), directing packets to the correct device on the network.</p>
</li>
</ul>
<p><strong>Analogy</strong>: They act as traffic cops or local guides, ensuring the data reaches its intended destination efficiently.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738193083251/c96b2115-ed84-4a4c-b4f7-f50ae7e44614.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-reaching-the-destination">Reaching the Destination:</h3>
<p>Now our data has reached its final destination. The data is reassembled into its original sequence using defined protocols. The receiver processes the data according to the request and sends a response back if needed.</p>
<p>The entire process happens in a fraction of a second, allowing web pages to load almost instantly.</p>
<h3 id="heading-additional-concepts-that-keep-our-internet-running">Additional Concepts That Keep Our Internet Running</h3>
<ul>
<li><p><strong>Caching</strong></p>
<p>  Web browsers and servers use caching to store copies of frequently accessed content, reducing load times.</p>
<p>  For example, when you visit <a target="_blank" href="https://www.google.com/">www.google.com</a> frequently, your browser stores the IP address in memory instead of searching for it every time.</p>
</li>
<li><p><strong>Firewalls and Security Measures</strong></p>
<p>  Firewalls and encryption protocols such as HTTPS ensure data privacy and protection from cyber threats.</p>
<p>  <strong>Analogy</strong>: A firewall is like a security checkpoint that filters harmful entities before they enter a restricted area.</p>
</li>
</ul>
<h3 id="heading-conclusion">Conclusion:</h3>
<p>From typing a URL in the browser to receiving a web page, many processes occur behind the scenes, ensuring seamless connectivity. These mechanisms, ranging from DNS resolution and HTTP requests to TCP/UDP transmission and router navigation, are the backbone of the internet.</p>
<p>All these processes form a complex yet well-structured system, inspired by our day-to-day life.</p>
<p>Did this article help you understand how the internet works? Then engage with it by liking and sharing. Follow me on <a target="_blank" href="https://x.com/Aqib_Ansari_">X</a> and <a target="_blank" href="https://www.linkedin.com/in/aqib-ansari-298b10242/">LinkedIn</a> for more insightful tech content.</p>
]]></content:encoded></item></channel></rss>