Computer Networks-I (TCS-604) Solved Papers

TCS-604 End Term Examination - May-June 2025
Question 1

a. Analyze the impact of propagation delay in satellite communication. What layer is affected the most and why?

Impact of Propagation Delay: Satellite communication, especially with geostationary (GEO) satellites, involves signals traveling a very long distance (approx. 35,786 km up and 35,786 km down). This results in a very high propagation delay (the time it takes for a signal to travel). A typical round-trip time (RTT) for a GEO satellite link is 500-600 milliseconds.

Layer Most Affected: While the delay physically originates at the Physical Layer (Layer 1), its performance impact is most significantly felt at the Transport Layer (Layer 4), specifically when using TCP.

Why TCP is Affected:

  • Slow Start: TCP's congestion control begins with the "Slow Start" algorithm, which doubles the congestion window (cwnd) every RTT. With a 500ms RTT, it takes a very long time to ramp up to use the available bandwidth.
  • Congestion Control: TCP relies on ACKs to "clock" data out and to detect packet loss. The long delay in receiving ACKs (or detecting their absence via timeout) makes TCP's congestion avoidance and recovery mechanisms very slow to react.
  • Handshake: The initial 3-way handshake to establish the connection takes at least 1 RTT (500+ ms) before any data can even be sent.
  • Application Layer Impact: This high latency also severely impacts real-time Application Layer (Layer 7) protocols like VoIP or video conferencing, causing noticeable lag in conversation.

b. A packet of 1000 bytes is sent on a 10 Mbps link with 100ms propagation delay. Calculate the total delay.

Total Delay is the sum of Transmission Delay and Propagation Delay.

  1. Transmission Delay ($D_{trans}$): The time to push the packet's bits onto the link.
    • Packet Size = 1000 bytes = 1000 * 8 bits = 8000 bits
    • Bandwidth = 10 Mbps = 10 * $10^6$ bits/sec = 10,000,000 bits/sec
    • $D_{trans}$ = Packet Size / Bandwidth = 8000 / 10,000,000 sec
    • $D_{trans}$ = 0.0008 seconds = 0.8 milliseconds (ms)
  2. Propagation Delay ($D_{prop}$): The time for the first bit to travel from sender to receiver.
    • $D_{prop}$ = 100 ms (given)
  3. Total Delay:
    • Total Delay = $D_{trans}$ + $D_{prop}$
    • Total Delay = 0.8 ms + 100 ms = 100.8 ms

(Note: In this case, the transmission delay is negligible compared to the propagation delay, which is common in high-delay networks).
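As a sanity check, the arithmetic above can be scripted (values taken directly from the question):

```python
# Total delay = transmission delay + propagation delay
packet_bits = 1000 * 8               # 1000 bytes -> 8000 bits
bandwidth_bps = 10 * 10**6           # 10 Mbps
d_prop_ms = 100                      # given propagation delay

d_trans_ms = packet_bits * 1000 / bandwidth_bps   # 0.8 ms
total_ms = d_trans_ms + d_prop_ms                 # 100.8 ms
print(f"transmission = {d_trans_ms} ms, total = {total_ms} ms")
```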

c. Discuss the lifecycle of an email using SMTP and POP3 or IMAP protocols.

The lifecycle of an email involves two main stages: transfer (using SMTP) and access (using POP3 or IMAP).

  1. Composition: The sender (e.g., Alice) composes a message in her User Agent (UA), like Outlook or Gmail.
  2. Transfer (SMTP):
    • Alice's UA sends the message to her outgoing Mail Server (MTA) using SMTP (Simple Mail Transfer Protocol).
    • Alice's MTA acts as an SMTP client. It uses DNS (querying for an MX record) to find the IP address of the recipient's (e.g., Bob's) incoming Mail Server.
    • Alice's MTA establishes a TCP connection (on port 25) to Bob's MTA and transfers the email using a series of SMTP commands (HELO, MAIL FROM, RCPT TO, DATA).
    • Bob's MTA receives the email and places it in Bob's mailbox on that server.
  3. Access (POP3 vs. IMAP):
    • Bob's UA (mail client) connects to his incoming mail server to retrieve the message. This uses either POP3 or IMAP.
    • Using POP3 (Post Office Protocol 3): This is a "download and delete" protocol. Bob's client connects, authenticates, and downloads all new emails to his local machine. By default, the emails are then deleted from the server. The state (read/unread, folders) is kept on the client.
    • Using IMAP (Internet Message Access Protocol): This is a server-sync protocol. Bob's client connects and syncs with the server. All emails and folders are kept on the server. The client just displays a local copy. State (read/unread) is synced across all devices. This is the modern standard as it allows access from multiple devices (phone, laptop, etc.).
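To make the transfer stage concrete, here is an illustrative sketch of the SMTP command sequence from step 2. This is not a real mail client (a real one would use a library such as Python's smtplib); the helper function and addresses are hypothetical:

```python
def smtp_envelope(sender, recipient, body):
    """Illustrative only: the command sequence an SMTP client issues
    when transferring one message (helper and names are hypothetical)."""
    return [
        f"HELO {sender.split('@')[1]}",   # identify the client's domain
        f"MAIL FROM:<{sender}>",          # envelope sender
        f"RCPT TO:<{recipient}>",         # envelope recipient
        "DATA",                           # start of message content
        body,
        ".",                              # a lone dot ends the message
    ]

dialogue = smtp_envelope("alice@example.com", "bob@example.org", "Hello Bob")
```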
Question 2

a. Analyze how BitTorrent's use of parallel downloads increases performance compared to traditional FTP.

FTP (File Transfer Protocol):

  • FTP uses a client-server model. A single server stores the file, and all clients connect directly to this server to download it.
  • The server's upload bandwidth is a central bottleneck. If 100 clients are downloading, they all share this one source. Performance degrades rapidly as more users join.

BitTorrent (Peer-to-Peer):

  • BitTorrent uses a peer-to-peer (P2P) model. The file is broken into many small "pieces".
  • A "tracker" server helps peers find each other, but it does not host the file.
  • Parallel Downloads: A new downloader (peer) gets pieces from *multiple* other peers *simultaneously*. Instead of one download stream from one server, it has many download streams from many peers.
  • Scalability: As soon as a peer downloads a piece, it can *upload* that same piece to other peers. This means every new downloader also becomes an uploader. As more peers join the "swarm," the total available upload bandwidth of the swarm *increases*.

Conclusion: FTP performance scales *poorly* (one source, many clients), while BitTorrent performance scales *brilliantly* (every client is also a source), enabling much faster downloads for popular files.

b. A DNS resolver queries 3 servers sequentially with RTTs of 60 ms, 30 ms, and 10 ms respectively. What is the total time to resolve the name?

This question describes an iterative query process from the perspective of a local DNS resolver.

  1. The resolver contacts the first server (e.g., Root Server). This takes one RTT. Time = 60 ms.
  2. The resolver contacts the second server (e.g., TLD Server). This takes one RTT. Time = 30 ms.
  3. The resolver contacts the third server (e.g., Authoritative Server). This takes one RTT. Time = 10 ms.

Since the queries are sequential (one after the other), the total time is the sum of all the RTTs.

Total Time = 60 ms + 30 ms + 10 ms = 100 ms
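The arithmetic can be confirmed in a couple of lines (values from the question):

```python
# Sequential (iterative) resolution: each query must finish before the next,
# so the times simply add up.
rtts_ms = [60, 30, 10]      # root, TLD, authoritative server RTTs
total_ms = sum(rtts_ms)     # 100 ms
print(total_ms)
```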

c. Discuss why layering helps in designing complex network protocols.

Layering is a fundamental design principle that uses abstraction to manage the complexity of network communication. It helps in several key ways:

  • Modularity: It breaks down the single, massive problem of "network communication" into a set of smaller, more manageable sub-problems (layers). Each layer solves one part of the problem.
  • Abstraction: Each layer provides a set of services to the layer above it, while hiding the complex details of *how* it implements those services. For example, the Application Layer (HTTP) just "sends data" without needing to know if it's over Wi-Fi, Ethernet, or how TCP is managing reliability.
  • Interoperability: Layering promotes standardization. As long as a vendor's product correctly implements a layer's interface (e.g., a Wi-Fi card at Layer 2), it will work with products from other vendors at Layer 3 (e.g., Microsoft's IP stack).
  • Ease of Maintenance and Updating: You can change the implementation of one layer without affecting any other layer. For example, you can upgrade your physical network from Ethernet (Layer 1/2) to faster Wi-Fi 6 (Layer 1/2) without changing your web browser (Layer 7) or your computer's TCP/IP stack (Layer 3/4).
Question 3
Note: This is a frequently repeated question (see 2024 - Q1.a). You can find it in the "Repeated Questions" file.

a. Describe the role of encapsulation and decapsulation in layered protocol communication.

Encapsulation (On the Sending Side):

As data moves *down* the protocol stack (from Application to Physical), each layer "wraps" the data it receives from the layer above by adding its own control information, called a header (and sometimes a trailer).

  • Application Layer: Creates the message (e.g., "GET /index.html").
  • Transport Layer: Takes the message, adds a TCP header (with ports, seq/ack numbers) to create a segment.
  • Network Layer: Takes the segment, adds an IP header (with source/dest IP) to create a datagram.
  • Link Layer: Takes the datagram, adds a Link header (with MAC addresses) to create a frame.
  • Physical Layer: Transmits the frame as raw bits.

Decapsulation (On the Receiving Side):

As data moves *up* the protocol stack, each layer "unwraps" the data by reading its header, processing it, and then stripping it off before passing the payload to the layer above.

  • Link Layer: Receives the frame, checks the MAC address, strips the Link header, and passes the datagram up.
  • Network Layer: Receives the datagram, checks the IP address, strips the IP header, and passes the segment up.
  • Transport Layer: Receives the segment, checks the port, strips the TCP header, and passes the message up.
  • Application Layer: Receives the original message.

b. Given a 5-layered model, if each layer adds a 20-byte header and a message is 200 bytes, what is the total transmission size?

This is slightly ambiguous, but the standard interpretation of the 5-layer (Internet) model is that the Application Layer *creates* the message, and the next 3 layers add headers (Physical layer deals in bits and doesn't add a header).

Let's assume the 5 layers are App, Transport, Network, Link, Physical.

  1. Application Layer (L5): Message = 200 bytes
  2. Transport Layer (L4): Adds 20-byte header. Size = 200 + 20 = 220 bytes.
  3. Network Layer (L3): Adds 20-byte header. Size = 220 + 20 = 240 bytes.
  4. Link Layer (L2): Adds 20-byte header. Size = 240 + 20 = 260 bytes.
  5. Physical Layer (L1): Transmits the 260 bytes as bits.

The total transmission size (the frame size on the wire, excluding physical-layer overhead) is 260 bytes.

(If the question literally means 5 layers *all* add headers, it would be 200 + 5*20 = 300 bytes, but this doesn't map to a standard model).
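Both readings can be computed directly:

```python
message = 200      # bytes, created at the Application Layer
header = 20        # bytes added per header-adding layer

# Standard reading: Transport, Network, and Link each add a header
frame = message + 3 * header      # 260 bytes on the wire

# Literal reading: all 5 layers add a header
literal = message + 5 * header    # 300 bytes
```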

c. Compare and contrast persistent vs non-persistent HTTP. Under what circumstances is each preferred?

Non-Persistent HTTP (HTTP/1.0):

  • Process: 1. Client opens a new TCP connection to the server. 2. Client sends one HTTP request. 3. Server sends one HTTP response. 4. Server *closes* the TCP connection.
  • Problem: A typical webpage might contain 1 HTML file + 10 images. Non-persistent HTTP would require 11 separate TCP connections. The overhead of setting up and tearing down each connection (3-way handshake, slow start) makes this extremely slow.

Persistent HTTP (HTTP/1.1 Default):

  • Process: 1. Client opens a TCP connection to the server. 2. Client sends an HTTP request. 3. Server sends an HTTP response. 4. The TCP connection stays open. 5. Client can send more requests (e.g., for all 10 images) over the *same* connection.
  • Benefit: Drastically reduces latency by eliminating the repeated connection setup overhead.

When Preferred:

  • Non-Persistent: Almost never preferred today. It is a legacy model that is highly inefficient for modern, media-rich websites.
  • Persistent: Preferred for all modern web browsing. It is the default for HTTP/1.1 and the basis for HTTP/2 (which uses a single, multiplexed connection).
Question 4

a. Analyze the working of cookies in maintaining state in stateless HTTP connections. What are the security and privacy concerns associated with them?

Working of Cookies:

HTTP is a "stateless" protocol, meaning each request is independent; the server forgets about the client as soon as the response is sent. This is bad for shopping carts or logins. Cookies fix this:

  1. Server Sends Cookie: When you first log in, the server sends a response with a Set-Cookie: header (e.g., Set-Cookie: sessionID=abc12345).
  2. Browser Stores Cookie: The browser stores this sessionID=abc12345 value, tying it to the server's domain (e.g., amazon.com).
  3. Browser Sends Cookie: On *every subsequent request* to amazon.com, the browser automatically includes a Cookie: header (e.g., Cookie: sessionID=abc12345).
  4. Server Remembers: The server reads this sessionID, looks it up in its database, and "remembers" who you are, keeping you logged in.

Security Concerns:

  • Session Hijacking: If an attacker steals your session cookie (e.g., on unencrypted Wi-Fi), they can use it to impersonate you. (Mitigation: Use HTTPS, set Secure flag on cookie).
  • Cross-Site Scripting (XSS): If a website has an XSS flaw, an attacker can inject a script to steal the cookie. (Mitigation: Set HttpOnly flag so scripts can't access it).

Privacy Concerns:

  • Tracking: This is the main concern. Third-party cookies are set by an ad network's domain embedded in one site (e.g., news.com) and sent back whenever the same network is embedded in another (e.g., shopping.com). This allows ad networks to build a detailed profile of your browsing habits across the entire web.
Note: This is a frequently repeated question (see 2023 - Q4.c and 2024 - Q4.c). You can find it in the "Repeated Questions" file.

b. A network engineer is evaluating... Stop-and-Wait, Go-Back-N, and Selective Repeat...

This question asks for a comparison of three Reliable Data Transfer (RDT) protocols.

  • Stop-and-Wait:
    • Performance: Very poor. Sender window size = 1. It sends one packet and waits for the ACK. The link is idle for most of the RTT. Utilization is very low.
    • Complexity: Very simple. Receiver logic is trivial.
    • Retransmission: On timeout, retransmits the one un-ACKed packet.
  • Go-Back-N (GBN):
    • Performance: Good. Sender window size = N. Allows "pipelining" (sending N packets without waiting for ACKs), which keeps the link busy.
    • Complexity: Moderate. Sender is complex, but receiver is simple (it only accepts in-order packets and discards all out-of-order packets).
    • Retransmission: Uses *cumulative ACKs* (ACK `n` means `n` and all before it are received). If packet `n` is lost, the sender's timer expires, and it retransmits packet `n` *and all subsequent packets* in the window (`n+1`, `n+2`, ...) even if they were received correctly. This is wasteful.
  • Selective Repeat (SR):
    • Performance: Excellent. Sender window size = N. Allows pipelining.
    • Complexity: High. *Both* sender and receiver are complex. The receiver must *individually* ACK packets and buffer out-of-order packets for later delivery.
    • Retransmission: If packet `n` is lost, the sender's timer for *only packet `n`* expires, and it retransmits *only packet `n`*. This is the most efficient retransmission strategy.

Selection Strategy:

  • Use Stop-and-Wait only if the bandwidth-delay product is tiny (e.g., a slow link with near-zero delay).
  • Use Go-Back-N if the receiver is very simple/memory-constrained.
  • Use Selective Repeat for most modern, high-bandwidth, high-delay networks (like the internet). It provides the best performance, and its complexity is worth the efficiency. (TCP's "Selective Acknowledgment" or SACK is a variant of SR).
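The performance gap can be quantified with the standard window-utilization formula $U = \min(1, N \cdot (L/R) / (RTT + L/R))$. The link parameters below are illustrative assumptions, not from the question:

```python
def utilization(window_n, packet_bits=8000, rate_bps=10**7, rtt_s=0.1):
    """Fraction of time the link is kept busy with a window of N packets.
    Link parameters are illustrative assumptions."""
    d_trans = packet_bits / rate_bps                   # time to send one packet
    return min(1.0, window_n * d_trans / (rtt_s + d_trans))

stop_and_wait = utilization(1)     # window of 1: link mostly idle (~0.8%)
pipelined = utilization(200)       # window large enough to fill the pipe
```

With a 10 Mbps link and 100 ms RTT, Stop-and-Wait uses under 1% of the capacity, while a sufficiently large pipelined window (GBN or SR) saturates it.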

c. Describe how TCP handles connection setup and teardown with state diagrams.

Connection Setup (Three-Way Handshake):

  1. Client (CLOSED) -> Server (LISTEN): Client sends a SYN packet (Seq=x). Client state becomes SYN_SENT.
  2. Server (LISTEN) -> Client (SYN_SENT): Server receives SYN. It replies with a SYN-ACK packet (Seq=y, Ack=x+1). Server state becomes SYN_RCVD.
  3. Client (SYN_SENT) -> Server (SYN_RCVD): Client receives SYN-ACK. It replies with an ACK packet (Seq=x+1, Ack=y+1). Client state becomes ESTABLISHED.
  4. The server receives the final ACK and its state becomes ESTABLISHED. The connection is now open.

Connection Teardown (Four-Way Handshake):

  1. Side 1 (ESTABLISHED): Decides to close. Sends a FIN packet. State becomes FIN_WAIT_1.
  2. Side 2 (ESTABLISHED): Receives FIN. Sends an ACK to acknowledge it. State becomes CLOSE_WAIT. (It can still send data if it needs to).
  3. Side 1 (FIN_WAIT_1): Receives the ACK. State becomes FIN_WAIT_2.
  4. Side 2 (CLOSE_WAIT): When done sending data, it sends its own FIN packet. State becomes LAST_ACK.
  5. Side 1 (FIN_WAIT_2): Receives Side 2's FIN. Sends its final ACK. State becomes TIME_WAIT (it waits 2*MSL to ensure the ACK isn't lost).
  6. Side 2 (LAST_ACK): Receives the final ACK. State becomes CLOSED.
  7. Side 1 (TIME_WAIT): After the timer expires, state becomes CLOSED.
Question 5
Note: This is a frequently repeated question (see 2024 - Q4.b). You can find it in the "Repeated Questions" file.

a. Discuss how TCP implements flow control and congestion control mechanisms.

These are two different mechanisms to "throttle" the sender, but for different reasons.

Flow Control (Protects the Receiver):

  • Purpose: To prevent the sender from sending data faster than the *receiver's application* can read it (i.e., overflowing the *receiver's buffer*).
  • Mechanism: The receiver advertises its available buffer space in every ACK it sends. This is the Receive Window (rwnd) field in the TCP header. The sender maintains the rule: LastByteSent - LastByteAcked <= rwnd. If the receiver's buffer is full, it advertises rwnd=0, and the sender stops sending (except for small probe packets).

Congestion Control (Protects the Network):

  • Purpose: To prevent the sender from sending data faster than the *network* (i.e., routers) can handle, causing packet loss due to router buffer overflow.
  • Mechanism: The sender maintains a *second* window, the Congestion Window (cwnd). The actual number of un-ACKed bytes the sender can have is min(rwnd, cwnd). cwnd is adjusted based on *perceived network congestion* (i.e., packet loss).
    • Slow Start: cwnd starts at 1 MSS and grows exponentially (doubles every RTT).
    • Congestion Avoidance: After cwnd passes a threshold (ssthresh), it grows linearly (adds 1 MSS per RTT).
    • Congestion Detection:
      • On 3 Duplicate ACKs (packet loss): Set ssthresh to cwnd/2, set cwnd to the new ssthresh, and enter Congestion Avoidance (Fast Recovery).
      • On Timeout (severe congestion): Set ssthresh to cwnd/2, reset cwnd to 1 MSS, and re-enter Slow Start.
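The window rules above can be sketched as a per-RTT simulation (the ssthresh value and loss timing are illustrative assumptions):

```python
def cwnd_history(rounds, ssthresh=16, dup_ack_loss_at=()):
    """Congestion window (in MSS) per RTT round, following the rules above.
    A round listed in dup_ack_loss_at stands for 3 duplicate ACKs arriving."""
    cwnd, history = 1, []
    for t in range(rounds):
        history.append(cwnd)
        if t in dup_ack_loss_at:          # mild loss: multiplicative decrease
            ssthresh = max(cwnd // 2, 1)
            cwnd = ssthresh               # skip Slow Start (Fast Recovery)
        elif cwnd < ssthresh:
            cwnd *= 2                     # Slow Start: exponential growth
        else:
            cwnd += 1                     # Congestion Avoidance: linear growth
    return history
```

For example, with ssthresh=16 and no loss the window grows 1, 2, 4, 8, 16 and then linearly; a loss event at round 3 halves it to 4 and growth turns linear.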
Note: This is a frequently repeated question (see 2023 - Q5.c and 2024 - Q5.c). You can find it in the "Repeated Questions" file.

b. Discuss how NAT allows multiple devices to share one public IP address.

NAT (Network Address Translation) is a technique used by routers to solve the IPv4 address shortage. A typical home has many devices (laptops, phones) with *private* IPs (e.g., 192.168.1.x) but only *one public IP* from the ISP (e.g., 123.45.67.89).

Here's how it works:

  1. Outbound Packet: Your laptop (192.168.1.100) sends a packet from port 5000 to google.com port 80.
    • (Source: 192.168.1.100:5000, Dest: google.com:80)
  2. NAT Router (Sending): The router receives this. It rewrites the source IP and port.
    • It replaces the private source IP (192.168.1.100) with its *own public IP* (123.45.67.89).
    • It replaces the source port (5000) with a new, random, unused port (e.g., 62000).
    • (New Source: 123.45.67.89:62000, Dest: google.com:80)
  3. NAT Table: The router records this mapping in a "NAT translation table": (192.168.1.100:5000) <=> (123.45.67.89:62000)
  4. Inbound Packet: Google's server replies, sending the packet to the router's public IP and new port.
    • (Source: google.com:80, Dest: 123.45.67.89:62000)
  5. NAT Router (Receiving): The router receives this. It looks up port 62000 in its NAT table and rewrites the destination IP and port back to the original private ones.
    • (New Source: google.com:80, Dest: 192.168.1.100:5000)
  6. The packet is forwarded to your laptop. The laptop is unaware any of this happened.
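A minimal sketch of the translation-table logic, assuming the port numbers from the example above (a real NAT also tracks protocol and times mappings out):

```python
class NatRouter:
    """Toy NAT table; the starting port 62000 mirrors the example above."""
    def __init__(self, public_ip, first_port=62000):
        self.public_ip = public_ip
        self.out_map = {}       # (private_ip, private_port) -> public_port
        self.in_map = {}        # public_port -> (private_ip, private_port)
        self.next_port = first_port

    def translate_out(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.out_map:            # new flow: allocate a port
            self.out_map[key] = self.next_port
            self.in_map[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.out_map[key])

    def translate_in(self, public_port):
        return self.in_map.get(public_port)    # None if no mapping exists

nat = NatRouter("123.45.67.89")
```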

c. A company is allocated 192.168.10.0/24. It needs to create 5 subnets with at least 30 hosts each. Design the subnetting scheme.

  1. Host Requirement: We need 30 usable hosts per subnet.
    • The formula is $2^h - 2 \ge 30$, where `h` is the number of host bits.
    • $2^4 - 2 = 14$ (Too small)
    • $2^5 - 2 = 30$ (Perfect!)
    • So, we must reserve 5 bits for hosts (`h=5`).
  2. Subnet Mask: A full IP address is 32 bits.
    • New network bits = 32 - `h` = 32 - 5 = 27.
    • The new subnet mask is /27, which is 255.255.255.224.
  3. Subnet Requirement: Does this mask give us 5 subnets?
    • The original mask was /24. The new is /27.
    • We "borrowed" $s = 27 - 24 = 3$ bits for subnets.
    • Number of subnets = $2^s = 2^3 = 8$ subnets.
    • This meets the requirement of "at least 5 subnets".
  4. The Subnetting Scheme:
    • The "block size" is $2^h = 2^5 = 32$. We will increment the 4th octet by 32.
    • Subnet 1: 192.168.10.0/27 (Hosts: .1 to .30, Broadcast: .31)
    • Subnet 2: 192.168.10.32/27 (Hosts: .33 to .62, Broadcast: .63)
    • Subnet 3: 192.168.10.64/27 (Hosts: .65 to .94, Broadcast: .95)
    • Subnet 4: 192.168.10.96/27 (Hosts: .97 to .126, Broadcast: .127)
    • Subnet 5: 192.168.10.128/27 (Hosts: .129 to .158, Broadcast: .159)
    • (Subnets .160, .192, and .224 are also available).
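The whole scheme can be verified with Python's standard ipaddress module:

```python
import ipaddress

net = ipaddress.ip_network("192.168.10.0/24")
subnets = list(net.subnets(new_prefix=27))   # borrow 3 bits -> 8 subnets

assert len(subnets) == 8
first = subnets[0]                           # 192.168.10.0/27
hosts = list(first.hosts())                  # .1 through .30 -> 30 usable
```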
TCS-604 End Semester Examination - 2024
Question 1
Note: This is a frequently repeated question (see 2025 - Q3.a). You can find it in the "Repeated Questions" file.

a. What do encapsulation and De-encapsulation mean?

Encapsulation (Sending): As data moves *down* the protocol stack, each layer "wraps" the data from the layer above by adding its own header. (App Message -> [TCP Header + Message] = Segment -> [IP Header + Segment] = Datagram -> [Link Header + Datagram] = Frame).

Decapsulation (Receiving): As data moves *up* the stack, each layer "unwraps" the data by reading its header, processing it, and stripping it off before passing the payload to the layer above.

b. Which layers in the Internet Protocol stack does a router process? Which layers does a switch process? Which layer does a host process?

Using the 5-layer Internet model (Application, Transport, Network, Link, Physical):

  • Host (e.g., your laptop): Processes all 5 layers. It originates and terminates the data. It runs the application (L5), manages the TCP connection (L4), creates the IP packet (L3), frames it (L2), and sends the bits (L1).
  • Router: Processes Layers 1, 2, and 3. It receives bits (L1), de-frames them (L2), and inspects the Network Layer (L3) IP header. It uses this IP header to make a forwarding decision (routing). It then re-frames the packet (L2) and sends the bits out (L1). It *does not* look at L4 or L5.
  • Switch (L2 Switch): Processes Layers 1 and 2. It receives bits (L1) and inspects the Link Layer (L2) MAC address. It uses this MAC address to make a forwarding decision (switching). It *does not* look at L3, L4, or L5.
Note: This is a frequently repeated question (see 2023 - Q1.c). You can find it in the "Repeated Questions" file.

c. Consider two hosts A and B... Find the distance m so that $d_{prop}$ equals $d_{trans}$.

Given:

  • Propagation speed (s) = $2.5 \times 10^8$ m/s
  • Packet size (L) = 100 bits
  • Bandwidth (R) = 28 Kbps = 28,000 bits/sec

We need to find distance `m` where $d_{prop} = d_{trans}$.

  1. Formulas:
    • $d_{prop}$ = distance / speed = m / s
    • $d_{trans}$ = size / bandwidth = L / R
  2. Set them equal:
    • m / s = L / R
  3. Solve for m:
    • m = (L * s) / R
    • m = (100 bits * $2.5 \times 10^8$ m/s) / 28,000 bits/s
    • m = (2.5 * $10^{10}$) / (2.8 * $10^4$)
    • m = (2.5 / 2.8) * $10^6$
    • m ≈ 0.892857 * $10^6$ meters
    • m ≈ 892,857 meters (or 892.86 km)
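The same calculation in a few lines:

```python
L = 100            # packet size, bits
s = 2.5e8          # propagation speed, m/s
R = 28_000         # bandwidth, bits/s

m = L * s / R      # distance at which d_prop == d_trans
d_prop = m / s
d_trans = L / R
```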
Question 2

a. How does the TCP protocol provide reliability? Write down the names of the services provided by TCP. Write the names of the well-known ports used by TCP.

How TCP provides reliability:

  • Connection-Oriented: A 3-way handshake ensures both sender and receiver are ready before data is sent.
  • Sequencing: Each byte is numbered. The receiver uses sequence numbers to reorder packets that arrive out-of-order.
  • Acknowledgments (ACKs): The receiver sends cumulative ACKs to confirm which bytes it has received.
  • Error Detection: A checksum field in the header detects corrupted segments.
  • Retransmission: If the sender doesn't receive an ACK within a certain time (timeout) or receives 3 duplicate ACKs, it assumes the packet was lost and retransmits it.

Services provided by TCP:

  • Reliable Data Transfer
  • Connection-Oriented Service
  • Flow Control (protects the receiver)
  • Congestion Control (protects the network)
  • Full-Duplex Communication
  • Process-to-Process Delivery (using ports)

Well-Known Ports (TCP):

  • 20, 21: FTP (File Transfer Protocol)
  • 22: SSH (Secure Shell)
  • 23: Telnet
  • 25: SMTP (Simple Mail Transfer Protocol)
  • 53: DNS (also uses UDP 53)
  • 80: HTTP (Hypertext Transfer Protocol)
  • 110: POP3 (Post Office Protocol)
  • 143: IMAP (Internet Message Access Protocol)
  • 443: HTTPS (HTTP Secure)

b. Suppose Alice (webmail) sends a message to Bob (IMAP). Discuss how the message gets from Alice's host to Bob's host.

This involves several protocols working in sequence:

  1. Alice (Client) to Alice's Server (HTTP): Alice opens her web browser and logs into her webmail (e.g., Gmail). She composes and sends the email. This entire interaction between her browser and the Gmail server happens over HTTP.
  2. Alice's Server to Bob's Server (SMTP):
    • Gmail's mail server (MTA) receives the message. It now needs to send it to Bob's mail server.
    • It uses DNS to find the MX (Mail Exchange) record for Bob's domain (e.g., bob.com) to get the IP of his mail server.
    • Gmail's server (as an SMTP client) connects to Bob's mail server (as an SMTP server) using SMTP over port 25.
    • It transfers the email.
  3. Bob (Client) to Bob's Server (IMAP):
    • Bob opens his local mail client (e.g., Outlook, Apple Mail).
    • His client connects to his mail server using IMAP.
    • IMAP syncs the state of the server with his client. He sees the new message from Alice, downloads it, and reads it.

Protocol Chain: HTTP (Alice) -> DNS (Server lookup) -> SMTP (Server-to-Server) -> IMAP (Bob)

c. What are the factors that influence the RTT? Why is the calculation of RTT advantageous? Also, what are the measures to reduce the RTT?

Factors influencing RTT (Round Trip Time):

  • Propagation Delay: The time for a signal to travel the physical distance. This is the main, fixed component (e.g., speed of light in fiber).
  • Processing Delay: Time taken by routers to examine a packet header.
  • Queueing Delay: Time a packet spends waiting in a buffer (queue) at a congested router. This is the *most variable* component.
  • Transmission Delay: Time to push the packet's bits onto the link (L/R).

Advantages of Calculating RTT:

  • TCP Retransmission Timer: TCP must know the RTT to set a proper Retransmission Timeout (RTO). If the RTO is too short, it retransmits unnecessarily; too long, and it's slow to recover from loss.
  • TCP Congestion Control: The RTT is used to "clock" the increase of the congestion window.
  • Performance Diagnosis: Tools like `ping` use RTT to measure network latency and diagnose connectivity issues.

Measures to Reduce RTT:

  • Content Delivery Network (CDN): This is the most effective method. A CDN caches content (images, videos) on servers *geographically closer* to the user, which drastically reduces the physical distance and thus the propagation delay.
  • Improved Routing: Finding more direct network paths (e.g., better BGP peering).
  • Reducing Congestion: Increasing bandwidth on links reduces queueing delay.
  • Protocol Optimization: Using persistent connections (HTTP/1.1) or QUIC (HTTP/3) reduces the number of RTTs needed for connection setup.
Question 3

a. What are the different services provided by the Transport layer? Explain the transport layer protocol used for DNS and also state why it is suitable for DNS.

Transport Layer Services:

  • Process-to-Process Delivery: Using port numbers, it delivers data not just to a host (like IP) but to a *specific application process* on that host.
  • Multiplexing / Demultiplexing: Gathers data from multiple app sockets (multiplexing) and delivers incoming data to the correct socket (demultiplexing).
  • Connection-Oriented Service (TCP): Provides reliable, in-order delivery.
  • Connectionless Service (UDP): Provides unreliable, "best-effort" delivery.
  • Error Checking: Both TCP and UDP provide a checksum to detect bit errors.

Transport Protocol for DNS:

DNS primarily uses UDP (User Datagram Protocol) on port 53.

Why UDP is Suitable for DNS:

  • Speed: DNS queries are small (one request packet, one response packet). UDP is connectionless, so there is no 3-way handshake. This saves an entire RTT, making DNS lookups very fast.
  • Simplicity: If a DNS query packet is lost, the application (resolver) can simply time out and send the query again. It doesn't need the complex state and reliability mechanism of TCP.

(Note: DNS *does* use TCP on port 53 for special cases, like large zone transfers between servers, where reliability is critical).

b. Suppose Host A and Host B use a GBN protocol with window size N=3... Draw the timing diagram...

This requires a timing diagram showing the loss of `ACK1` and `Pkt5`.

Host A (Sender)         Host B (Receiver)
Window: [1,2,3]
Send Pkt1
                      Recv Pkt1
                      Send ACK1  (LOST)
Send Pkt2
                      Recv Pkt2
                      Send ACK2
Send Pkt3
                      Recv Pkt3
                      Send ACK3
Recv ACK2
Window: [3,4,5]
Send Pkt4
                      Recv Pkt4
                      Send ACK4
Recv ACK3
Window: [4,5,6]
Send Pkt5 (LOST)
Recv ACK4
Window: [5,6]
Send Pkt6
                      Recv Pkt6 (Out of order)
                      DISCARD Pkt6
                      Resend ACK4 (last in-order)

...Timer for Pkt5 expires...
(Sender goes back N)

Send Pkt5
                      Recv Pkt5
                      Send ACK5
Send Pkt6
                      Recv Pkt6
                      Send ACK6
Recv ACK5
Window: [6]
Recv ACK6
Window: []
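The receiver behaviour in the diagram (deliver in-order packets, discard out-of-order ones, re-ACK the last in-order packet) can be sketched as:

```python
def gbn_receive(arrivals):
    """ACK sent for each arriving packet: a cumulative ACK for the next
    in-order packet, a repeat of the last in-order ACK for anything else."""
    expected, acks = 1, []
    for seq in arrivals:
        if seq == expected:
            acks.append(seq)              # deliver and ACK in-order packet
            expected += 1
        else:
            acks.append(expected - 1)     # discard; re-send last good ACK
    return acks

# Arrivals at Host B in the diagram: Pkt5 is lost, so Pkt6 arrives out of
# order, then the sender goes back and re-sends Pkt5 and Pkt6.
acks = gbn_receive([1, 2, 3, 4, 6, 5, 6])
```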
                            
Note: This is a frequently repeated question (see 2023 - Q3.c). You can find it in the "Repeated Questions" file.

c. ...what would be the IP address of E0... what would be the IP address of S0...

Network: 192.168.10.0/28. Mask: 255.255.255.240. Block size = 16. We are told to skip the "zero subnet" (subnet .0).

A: IP of E0 (eighth subnet, last available IP):

  • Subnet 1: .16
  • Subnet 2: .32
  • ...
  • Subnet 8: .128
  • Network ID: 192.168.10.128
  • Broadcast ID: 192.168.10.143 (128 + 16 - 1)
  • Host Range: .129 to .142
  • Last available IP: 192.168.10.142

B: IP of S0 (first subnet, last available IP):

  • First subnet (skipping .0) is .16
  • Network ID: 192.168.10.16
  • Broadcast ID: 192.168.10.31 (16 + 16 - 1)
  • Host Range: .17 to .30
  • Last available IP: 192.168.10.30
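Both answers can be checked with Python's standard ipaddress module, skipping the zero subnet as the question requires:

```python
import ipaddress

parent = ipaddress.ip_network("192.168.10.0/24")
subs = list(parent.subnets(new_prefix=28))   # 16 subnets, block size 16
usable = subs[1:]                            # skip the zero subnet (.0)

s0 = usable[0]                               # 1st subnet: 192.168.10.16/28
e0 = usable[7]                               # 8th subnet: 192.168.10.128/28
s0_last = list(s0.hosts())[-1]               # last available IP in S0
e0_last = list(e0.hosts())[-1]               # last available IP in E0
```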
Question 4
Note: This is a frequently repeated question (see 2023 - Q4.a). You can find it in the "Repeated Questions" file.

a. ...Compute the checksum at sender's side... Compute the checksum at receiver's side.

We use 8-bit 1's complement arithmetic (add, and wrap any carry-out bit).

Sender Side (Data): 11001100 (204) 10101010 (170) 11110000 (240) 11000011 (195)

  11001100
+ 10101010
----------
1 01110110  -> Wrap carry: 01110110 + 1 = 01110111

  01110111
+ 11110000
----------
1 01100111  -> Wrap carry: 01100111 + 1 = 01101000

  01101000
+ 11000011
----------
1 00101011  -> Wrap carry: 00101011 + 1 = 00101100
                            

Final Sum = 00101100. Checksum (1's complement of sum) = 11010011.

Receiver Side (Data + Checksum): 11001100 (204), 10101011 (171) <-- ERROR: one bit flipped, 11110000 (240), 11000011 (195), 11010011 (211) <-- Checksum

  11001100
+ 10101011
----------
1 01110111  -> Wrap carry: 01110111 + 1 = 01111000

  01111000
+ 11110000
----------
1 01101000  -> Wrap carry: 01101000 + 1 = 01101001

  01101001
+ 11000011
----------
1 00101100  -> Wrap carry: 00101100 + 1 = 00101101

  00101101 (Sum of received data)
+ 11010011 (Received checksum)
----------
1 00000000  -> Wrap carry: 00000000 + 1 = 00000001

Final receiver sum = 00000001. The 1's complement of this is 11111110.

Conclusion: The final sum is 00000001, which is not all-ones, and its complement 11111110 is not all-zeros. Under either checking convention, the receiver detects that an error has occurred.
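The arithmetic above can be reproduced with a short Python sketch of 8-bit 1's complement addition (the function names are illustrative):

```python
def add_1s_complement_8(a, b):
    """8-bit 1's complement addition: add, then wrap the carry-out back in."""
    s = a + b
    return (s & 0xFF) + (s >> 8)

def checksum_8(blocks):
    total = 0
    for b in blocks:
        total = add_1s_complement_8(total, b)
    return ~total & 0xFF  # 1's complement of the final sum

sender = [0b11001100, 0b10101010, 0b11110000, 0b11000011]
print(format(checksum_8(sender), "08b"))  # 11010011

# Receiver: second block corrupted (10101010 -> 10101011), plus the checksum.
received = [0b11001100, 0b10101011, 0b11110000, 0b11000011, 0b11010011]
total = 0
for b in received:
    total = add_1s_complement_8(total, b)
print(format(total, "08b"))  # 00000001 -> not all-ones, so an error is detected
```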

Note: This is a frequently repeated question (see 2025 - Q5.a). You can find it in the "Repeated Questions" file.

b. How many phases are there in TCP congestion control algorithm...

TCP congestion control has three main phases/states, governed by the Congestion Window (cwnd):

  1. Slow Start: At the beginning, cwnd = 1 MSS. It grows *exponentially*, doubling every RTT (it adds 1 MSS for every ACK received). This continues until cwnd reaches the threshold (ssthresh).
  2. Congestion Avoidance: Once cwnd > ssthresh, growth slows to be *linear* (it adds 1 MSS per RTT). This probes for more bandwidth more gently.
  3. Congestion Detection / Fast Recovery: This is the reaction to packet loss.
    • On Timeout (severe loss): The ssthresh is set to cwnd / 2. cwnd is reset to 1, and it re-enters Slow Start.
    • On 3 Duplicate ACKs (mild loss): The ssthresh is set to cwnd / 2. cwnd is also set to ssthresh. It skips Slow Start and enters Congestion Avoidance/Fast Recovery.

The threshold value (ssthresh) is the "memory" of the last congestion event. It is initially set to a large value, but when packet loss occurs, it is set to half of the current congestion window, marking the point where the network is believed to be congested.
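The three phases can be sketched as a toy per-RTT simulation (a simplification: `ssthresh` and the timeout position are assumed values, and real TCP reacts per-ACK, not once per RTT):

```python
def simulate_cwnd(rtts, ssthresh=8, timeout_at=None):
    """Track cwnd (in MSS) per RTT: slow start, congestion avoidance, timeout."""
    cwnd, history = 1, []
    for t in range(rtts):
        history.append(cwnd)
        if t == timeout_at:            # timeout: severe loss
            ssthresh = max(cwnd // 2, 1)
            cwnd = 1                   # back to Slow Start
        elif cwnd < ssthresh:
            cwnd *= 2                  # Slow Start: exponential growth
        else:
            cwnd += 1                  # Congestion Avoidance: linear growth
    return history

print(simulate_cwnd(8))                # [1, 2, 4, 8, 9, 10, 11, 12]
print(simulate_cwnd(8, timeout_at=4))  # [1, 2, 4, 8, 9, 1, 2, 4]
```

The second run shows the "memory" effect: after the timeout at RTT 4, ssthresh drops to half of cwnd, so the next Slow Start switches to linear growth much earlier.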

Note: This is a frequently repeated question (see 2023 - Q4.c and 2025 - Q4.b). You can find it in the "Repeated Questions" file.

c. What is the drawback of stop-and-wait protocol? How it can be solved and what protocols can be used...

Drawback:

The primary drawback of Stop-and-Wait is its extreme inefficiency. The sender sends one packet and then *stops* and *waits* for an ACK. The entire link is idle for the full Round Trip Time (RTT). This is a low "utilization" problem. In a network with a high bandwidth-delay product (e.g., fast, long-distance links), Stop-and-Wait might use less than 0.1% of the available capacity.

How to solve it:

The problem is solved using pipelining. Pipelining allows the sender to send *multiple* packets (a "window" of N packets) before it needs to receive the ACK for the first packet. This keeps the "pipe" full of data and dramatically increases efficiency.

Protocols that use pipelining:

  1. Go-Back-N (GBN)
  2. Selective Repeat (SR)
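The utilization argument can be made concrete with the textbook formula $U = \frac{N \cdot T_{trans}}{T_{trans} + RTT}$. A sketch, with link parameters assumed purely for illustration:

```python
def utilization(packet_bits, bandwidth_bps, rtt_s, window=1):
    """Fraction of time the sender keeps the link busy with an N-packet window."""
    t_trans = packet_bits / bandwidth_bps
    return min(window * t_trans / (t_trans + rtt_s), 1.0)

# Stop-and-Wait (window=1) on a fast, long link: 8000-bit packet, 1 Gbps, 100 ms RTT.
u = utilization(8000, 1e9, 0.1)
print(f"{u:.6%}")  # about 0.008% -- far below even 0.1%

# Pipelining with a large enough window fills the pipe completely.
print(utilization(8000, 1e9, 0.1, window=20000))  # 1.0
```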
Question 5

a. We know that a router typically consist of... hardware and... software? ...data plane and control plane...

Router Components (HW vs SW):

  • Input/Output Ports: Implemented in Hardware. Physical and Link-layer operations (receiving bits, checking MACs, queueing) must happen at "line speed," which requires dedicated hardware.
  • Switching Fabric: Implemented in Hardware. This is the high-speed backplane that connects input ports to output ports. It must be hardware to forward millions of packets per second.
  • Routing Processor: Implemented in Software. This is the router's "brain." It runs complex programs like routing protocols (OSPF, BGP) to build the forwarding table. This logic is too complex and changes too often for hardware.

Data Plane vs. Control Plane:

  • Data Plane: This is the "fast path" that forwards individual packets.
    • Implementation: Hardware (Input ports, switching fabric, output ports).
    • Why: For pure speed. Its job is simple: look up destination in forwarding table, send to output port. This must be done in nanoseconds.
  • Control Plane: This is the "slow path" that "thinks" about the network.
    • Implementation: Software (Routing processor).
    • Why: Its job is complex: run routing algorithms, communicate with other routers, build the forwarding table. These are high-level logic tasks, not per-packet actions.
Note: This is a frequently repeated question (see 2023 - Q5.b). You can find it in the "Repeated Questions" file.

b. A: ...divide this network into 4 subnets... B: What is the subnetwork address for a host...

A: Divide 200.1.2.0 (Class C, /24) into 4 subnets.

  • Need 4 subnets. $2^s \ge 4$. So we need $s = 2$ subnet bits.
  • New mask = 24 + 2 = /26 (or 255.255.255.192).
  • Block size = 256 - 192 = 64.
  • Subnet 1: 200.1.2.0/26
  • Subnet 2: 200.1.2.64/26
  • Subnet 3: 200.1.2.128/26
  • Subnet 4: 200.1.2.192/26

B: Subnetwork address for 200.10.5.68/28.

  • IP: 200.10.5.68
  • Mask: /28 = 255.255.255.240.
  • Block size = 256 - 240 = 16.
  • We need the multiple of 16 that is less than or equal to 68.
  • (16*1=16, 16*2=32, 16*3=48, 16*4=64, 16*5=80)
  • The correct multiple is 64.
  • Subnetwork address: 200.10.5.64
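Part B is just a bitwise AND of the address with the subnet mask; a minimal sketch (the helper name is illustrative):

```python
def subnetwork_address(ip, prefix):
    """AND the 32-bit address with the /prefix mask to get the subnet address."""
    addr = 0
    for octet in ip.split("."):
        addr = (addr << 8) | int(octet)
    mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
    net = addr & mask
    return ".".join(str((net >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(subnetwork_address("200.10.5.68", 28))  # 200.10.5.64 (68 AND 240 = 64)
```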
Note: This is a frequently repeated question (see 2023 - Q5.c and 2025 - Q5.b). You can find it in the "Repeated Questions" file.

c. Explain the working functionality of... DHCP, NAT, ICMP, IP Security in IPV4

  1. DHCP (Dynamic Host Configuration Protocol):
    • Function: Automatically assigns IP addresses to devices on a network.
    • Process (DORA): Discover (Client broadcasts: "Any DHCP servers?"), Offer (Server replies: "You can have this IP"), Request (Client broadcasts: "I'll take that IP"), Acknowledge (Server confirms: "It's yours").
  2. NAT (Network Address Translation):
    • Function: Allows multiple devices with private IPs to share one public IP.
    • Process: The router rewrites the source IP and port on outgoing packets and uses a "NAT table" to rewrite the destination IP and port on incoming packets. (See 2025 Q5.b. for full detail).
  3. ICMP (Internet Control Message Protocol):
    • Function: A network-layer "control" protocol used for error reporting and diagnostics.
    • Examples: ping uses ICMP Echo Request/Reply. traceroute uses ICMP Time Exceeded messages. A router sends an ICMP Destination Unreachable if it can't forward a packet.
  4. IPsec (IP Security):
    • Function: A suite of protocols that provides security (encryption and authentication) at the Network Layer (L3).
    • Modes: Tunnel Mode: Encrypts the *entire* original IP packet (header + data) and puts it in a new IP packet. Used to create VPNs between routers. Transport Mode: Encrypts *only the payload* (e.g., TCP segment). Used for host-to-host security.
TCS-604 End Semester Examination - June 2023
Question 1

a. Explain TCP/IP protocol stack with diagram and proper functionality of each layer.

The 5-layer TCP/IP (or Internet) protocol stack:

  1. Application Layer (L5):
    • Functionality: Provides services to network applications. This is where protocols users interact with (like HTTP, SMTP, DNS) live.
    • PDU: Message
  2. Transport Layer (L4):
    • Functionality: Provides process-to-process communication, using port numbers. It handles reliability (TCP) or "best-effort" delivery (UDP), along with flow control and congestion control.
    • PDU: Segment (TCP) or Datagram (UDP)
  3. Network Layer (L3):
    • Functionality: Responsible for routing packets from the source *host* to the destination *host* across multiple networks. It uses logical IP addresses.
    • PDU: Datagram
  4. Link Layer (L2):
    • Functionality: Responsible for moving data between *adjacent nodes* (e.g., host-to-router) on the same link. It uses physical MAC addresses and handles error detection.
    • PDU: Frame
  5. Physical Layer (L1):
    • Functionality: Responsible for transmitting the raw *bits* of a frame over the physical medium (e.g., copper cable, fiber, radio waves).
    • PDU: Bits

b. Define the working functionality of the circuit and packet switching with the help of a suitable diagram.

Circuit Switching:

  • Concept: A dedicated, end-to-end physical connection (a "circuit") is established *before* data transfer begins.
  • Phases: 1. Connection Setup, 2. Data Transfer, 3. Teardown.
  • Functionality: Resources (e.g., a time slot, a frequency) are *reserved* for the entire duration of the call. This guarantees bandwidth and zero congestion, but is inefficient, as the resources are wasted if no data is being sent (e.g., silence in a phone call).
  • Example: The old telephone network (PSTN).

Packet Switching:

  • Concept: Data is broken into small blocks called "packets". Each packet is sent independently, with headers indicating its destination.
  • Functionality: Packets travel from router to router in a "store-and-forward" manner. Resources are *shared* (statistical multiplexing). Multiple users' packets can be interleaved on the same link. This is highly efficient but can lead to congestion, variable delay (jitter), and packet loss.
  • Example: The Internet.
Note: This is a frequently repeated question (see 2024 - Q1.c). You can find it in the "Repeated Questions" file.

c. Consider two host A and B... Find the distance m so that $d_{prop}$ equals $d_{trans}$.

Given:

  • Propagation speed (s) = $2.5 \times 10^8$ m/s
  • Packet size (L) = 100 bits
  • Bandwidth (R) = 28 Kbps = 28,000 bits/sec

We need to find distance `m` where $d_{prop} = d_{trans}$.

  1. m / s = L / R
  2. m = (L * s) / R
  3. m = (100 bits * $2.5 \times 10^8$ m/s) / 28,000 bits/s
  4. m ≈ 892,857 meters (or 892.86 km)
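The same computation, step by step:

```python
s = 2.5e8      # propagation speed, m/s
L = 100        # packet size, bits
R = 28_000     # bandwidth, bits/s

m = L * s / R  # setting m/s == L/R and solving for m
print(round(m, 2))         # 892857.14 meters
print(round(m / 1000, 2))  # 892.86 km
```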
Question 2

a. Explain the working of cookies, proxy server and conditional GET...

Cookies:

  • Function: Used to maintain state (e.g., a login session) for the stateless HTTP protocol.
  • Process: The server sends a Set-Cookie: header. The browser stores it and sends it back to that same server with every future request in a Cookie: header. (See 2025 Q4.a. for full detail).

Proxy Server (Web Proxy):

  • Function: An intermediary server that clients send requests to. The proxy then fetches the content from the real server on the client's behalf.
  • Uses: Caching (storing popular pages to serve them faster), Filtering (blocking access to websites), and Anonymity (hiding the client's IP).

Conditional GET:

  • Function: Allows a browser to ask the server to send a page *only if it has changed*.
  • Process: The browser stores the Last-Modified date of a page. On the next request, it sends this date in an If-Modified-Since: header. If the page hasn't changed, the server replies with 304 Not Modified (empty body), saving bandwidth. If it has, it sends 200 OK with the new page.
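The server-side decision behind a conditional GET can be sketched without any real network I/O (the function name and dates are illustrative; real servers parse the same RFC 1123 date format):

```python
from email.utils import parsedate_to_datetime

def handle_conditional_get(last_modified, if_modified_since):
    """Decide 304 vs 200 by comparing the page's date to the client's cached copy."""
    if parsedate_to_datetime(last_modified) <= parsedate_to_datetime(if_modified_since):
        return "304 Not Modified", b""       # empty body, saves bandwidth
    return "200 OK", b"<html>...</html>"     # page changed: send the new copy

cached_date = "Wed, 01 May 2024 10:00:00 GMT"

# Page unchanged since the browser cached it:
print(handle_conditional_get("Wed, 01 May 2024 10:00:00 GMT", cached_date)[0])  # 304 Not Modified

# Page modified later:
print(handle_conditional_get("Thu, 02 May 2024 09:00:00 GMT", cached_date)[0])  # 200 OK
```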

b. Explain the working functionality of the DNS...

DNS (Domain Name System) translates human-readable names (google.com) into IP addresses (142.250.196.196). It's a distributed, hierarchical database.

Iterative Query Process:

  1. Your host asks its Local Resolver.
  2. Resolver asks a Root Server -> Root points to the TLD Server (for .com).
  3. Resolver asks the TLD Server -> TLD points to the Authoritative Server (for google.com).
  4. Resolver asks the Authoritative Server -> Authoritative server gives the IP address.
  5. Resolver gives the IP to your host and *caches* the result.

c. What are the different mail access protocols? Explain the working of any two.

Mail *transfer* uses SMTP. Mail *access* uses POP3, IMAP, or HTTP (webmail).

  1. SMTP (Simple Mail Transfer Protocol):
    • Working: This is a "push" protocol used to send email from a client to its server, and from that server to the recipient's server. It is not used to read email.
  2. POP3 (Post Office Protocol 3):
    • Working: This is a "pull" protocol. The client connects, authenticates, and downloads all new emails to the local machine. By default, messages are *deleted* from the server after download. It's a "download and delete" model that keeps state on the client.
  3. IMAP (Internet Message Access Protocol):
    • Working: This is also a "pull" protocol, but it *syncs* with the server. All emails and folders are kept on the server. The client (e.g., phone, laptop) just displays a local copy. State (read/unread, folders) is kept on the server and synced across all devices. This is the modern, more flexible standard.
Question 3

a. What are the different services provided by the Transport layer? Explain the difference between connection-oriented and connectionless services.

Transport Layer Services: (See 2024 Q3.a. for full list).

  • Process-to-Process Delivery (Ports)
  • Multiplexing / Demultiplexing
  • Connection-Oriented Service (TCP)
  • Connectionless Service (UDP)
  • Error Checking (Checksum)

Connection-Oriented (e.g., TCP):

  • Establishes a connection (3-way handshake) before sending data.
  • Guarantees reliable, in-order delivery using ACKs, sequence numbers, and retransmissions.
  • Higher overhead. Used for web, email, files.

Connectionless (e.g., UDP):

  • No connection setup. Just sends packets.
  • Provides "best-effort" delivery. No guarantees. Packets can be lost, reordered, or duplicated.
  • Very low overhead. Used for DNS, VoIP, gaming (where speed > perfect reliability).

b. Explain the working functionality of the TCP header segment with a suitable diagram.

A TCP header is typically 20 bytes long (without options).

Key Fields:

  • Source Port (16 bits) / Destination Port (16 bits): Identifies the sending and receiving applications (processes).
  • Sequence Number (32 bits): Used for ordering. The byte-stream number of the first byte in this segment.
  • Acknowledgment Number (32 bits): If ACK flag is set, this is the sequence number of the *next* byte the sender is expecting.
  • Header Length (4 bits): Size of the header in 32-bit words.
  • Flags (e.g., SYN, ACK, FIN, RST): Control bits to manage the connection (setup, teardown, reset).
  • Window Size (16 bits): The Receive Window (rwnd) used for *flow control*.
  • Checksum (16 bits): Error-detection for the header and data.
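A sketch of packing and unpacking the 20-byte header with Python's `struct` module (the port, sequence, and window values are assumed for illustration; checksum and urgent pointer are left at 0):

```python
import struct

SYN = 0x02  # flag bit positions (low byte): FIN=0x01, SYN=0x02, RST=0x04, ACK=0x10

def build_tcp_header(src_port, dst_port, seq, ack, flags, window):
    offset_flags = (5 << 12) | flags  # header length = 5 x 32-bit words = 20 bytes
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                       offset_flags, window, 0, 0)  # checksum, urgent ptr = 0

header = build_tcp_header(49152, 80, seq=1000, ack=0, flags=SYN, window=65535)
print(len(header))  # 20

src, dst, seq, ack, off_flags, window, _, _ = struct.unpack("!HHIIHHHH", header)
print(src, dst, (off_flags >> 12) * 4, bool(off_flags & SYN))  # 49152 80 20 True
```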
Note: This is a frequently repeated question (see 2024 - Q3.c). You can find it in the "Repeated Questions" file.

c. ...what would be the IP address of E0... what would be the IP address of S0...

Network: 192.168.10.0/28. Block size = 16. Skip subnet .0.

A: IP of E0 (eighth subnet, last available IP):

  • Subnet 8 is .128.
  • Host Range: .129 to .142.
  • Last available IP: 192.168.10.142

B: IP of S0 (first subnet, last available IP):

  • First subnet is .16.
  • Host Range: .17 to .30.
  • Last available IP: 192.168.10.30
Question 4
Note: This is a frequently repeated question (see 2024 - Q4.a). You can find it in the "Repeated Questions" file.

a. ...Compute the checksum at sender's side... Compute the checksum at receiver's side.

Sender Side:

  • Data: 11001100, 10101010, 11110000, 11000011
  • Final Sum = 00101100
  • Checksum (1's complement) = 11010011

Receiver Side:

  • Received Data: 11001100, 10101011 (Error), 11110000, 11000011
  • Received Checksum: 11010011
  • Final Sum (of all 5 blocks) = 00000001

Conclusion: The result is not all-zeros (or all-ones). An error is detected.

b. Explain the connection establishment concept of TCP with a suitable diagram.

This is the Three-Way Handshake:

  1. Client -> Server: Sends a SYN packet (Seq=x). "Hi, I'd like to connect."
  2. Server -> Client: Receives SYN. Replies with a SYN-ACK packet (Seq=y, Ack=x+1). "I acknowledge your request and I'd also like to connect."
  3. Client -> Server: Receives SYN-ACK. Replies with an ACK packet (Seq=x+1, Ack=y+1). "I acknowledge your acknowledgment. The connection is open."
Note: This is a frequently repeated question (see 2024 - Q4.c and 2025 - Q4.b). You can find it in the "Repeated Questions" file.

c. Explain the working functionality of... Stop-and-wait, Go Back N, Selective Repeat

  • Stop-and-Wait: Sender window = 1. Sends packet 1, stops, waits for ACK 1. Very inefficient.
  • Go-Back-N (GBN): Sender window = N. Sends packets 1,2,3... Uses *cumulative ACKs*. If packet `n` is lost, sender retransmits `n` *and all packets after it* (`n+1, n+2...`).
  • Selective Repeat (SR): Sender window = N. Sends packets 1,2,3... Uses *individual ACKs*. If packet `n` is lost, sender retransmits *only packet `n`*. Most efficient.
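The difference in retransmission behavior can be shown with a toy sketch (a simplification: it only answers "which packets get resent when packet n is lost", ignoring timers and window movement):

```python
def retransmissions(lost, window_sent, protocol):
    """Which packets are resent after packet `lost` is lost (simplified)."""
    if protocol == "stop-and-wait":
        return [lost]                                 # only one packet in flight anyway
    if protocol == "go-back-n":
        return [p for p in window_sent if p >= lost]  # lost packet and everything after it
    if protocol == "selective-repeat":
        return [lost]                                 # only the lost packet

sent = [1, 2, 3, 4, 5]
print(retransmissions(3, sent, "go-back-n"))         # [3, 4, 5]
print(retransmissions(3, sent, "selective-repeat"))  # [3]
```

This is why SR is the most bandwidth-efficient of the three, at the cost of per-packet buffering and ACK bookkeeping at the receiver.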
Question 5

a. Explain IP datagram Header format with suitable diagram and functionality.

The IPv4 header is 20 bytes (without options).

Key Fields:

  • Version (4 bits): Set to 4 for IPv4.
  • Header Length (IHL) (4 bits): Size of the header in 32-bit words.
  • Total Length (16 bits): Total size of the *entire packet* (header + data).
  • Identification, Flags, Fragment Offset: Used to manage fragmentation (when a large packet is split into smaller ones).
  • Time To Live (TTL) (8 bits): Decremented by 1 at each router. If it hits 0, the packet is discarded (prevents infinite loops).
  • Protocol (8 bits): Identifies the transport layer protocol (6 for TCP, 17 for UDP).
  • Header Checksum (16 bits): Error detection *for the header only*.
  • Source IP Address (32 bits): The IP of the original sender.
  • Destination IP Address (32 bits): The IP of the final recipient.
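The header checksum field can be illustrated with the standard 16-bit 1's complement algorithm (a sketch; the addresses and identification value are assumed examples):

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """16-bit 1's complement sum over the header words, then complemented."""
    total = 0
    for (word,) in struct.iter_unpack("!H", header):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # wrap the carry back in
    return ~total & 0xFFFF

# 20-byte header with the checksum field (bytes 10-11) initially zero.
fields = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 20,     # version/IHL, TOS, total length
                     0x1C46, 0x4000,  # identification, flags/fragment offset
                     64, 6, 0,        # TTL, protocol (6 = TCP), checksum = 0
                     bytes([192, 168, 0, 1]),   # source IP
                     bytes([192, 168, 0, 2]))   # destination IP

csum = ipv4_checksum(fields)
full = fields[:10] + struct.pack("!H", csum) + fields[12:]

# A correct header re-checksums to 0 -- this is how a router verifies it.
print(ipv4_checksum(full))  # 0
```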
Note: This is a frequently repeated question (see 2024 - Q5.b). You can find it in the "Repeated Questions" file.

b. (i) ...divide this network into 4 subnets... (ii) What is the subnetwork address for a host...

(i): Divide 200.1.2.0 (Class C, /24) into 4 subnets.

  • Need 2 subnet bits -> /26 mask. Block size = 64.
  • Subnets: 200.1.2.0/26, 200.1.2.64/26, 200.1.2.128/26, 200.1.2.192/26.

(ii): Subnetwork address for 200.10.5.68/28.

  • Mask /28 -> Block size = 16.
  • Find multiple of 16 <= 68, which is 64.
  • Subnetwork address: 200.10.5.64
Note: This is a frequently repeated question (see 2024 - Q5.c and 2025 - Q5.b). You can find it in the "Repeated Questions" file.

c. Explain the working functionality of... DHCP, NAT, ICMP

  1. DHCP (Dynamic Host Configuration Protocol):
    • Function: Automatically assigns IP addresses (and other info like DNS server, gateway) to devices when they join a network.
    • Process (DORA): Discover, Offer, Request, Acknowledge.
  2. NAT (Network Address Translation):
    • Function: Allows multiple devices with private IPs (192.168.x.x) to share a single public IP.
    • Process: The router rewrites source IP/port on outgoing packets and destination IP/port on incoming packets using a NAT table.
  3. ICMP (Internet Control Message Protocol):
    • Function: A network-layer protocol for error reporting and diagnostics.
    • Examples: ping (Echo Request/Reply), traceroute (Time Exceeded).