The Socket Abstraction
At the kernel level, a socket is an endpoint for communication. For senior engineers, the focus shifts from how to open a socket to how to manage its lifecycle and performance characteristics under load.
1. The TCP Socket Lifecycle
The sequence of system calls is rigid; deviating from it produces errno values that can be cryptic.
- Server side:
socket() → bind() (attach to IP/port) → listen() (mark as passive, set the backlog) → accept() (block until a handshake completes).
- Client side:
socket() → connect() (initiate the 3-way handshake).
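The two sequences above can be sketched with Python's socket module, which wraps the same kernel syscalls (port 0 asks the kernel for an ephemeral port; the backlog value here is illustrative):

```python
import socket

# Server side: socket() -> bind() -> listen() -> accept()
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # bind(): attach to IP/port (0 = ephemeral port)
server.listen(16)                  # listen(): mark passive, set backlog
host, port = server.getsockname()

# Client side: socket() -> connect() initiates the 3-way handshake.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))

conn, addr = server.accept()       # accept(): returns a new connected socket
client.sendall(b"ping")
data = conn.recv(4)

conn.close(); client.close(); server.close()
```

Note that accept() hands back a brand-new connected socket; the original listening socket keeps waiting for further handshakes.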
2. Blocking vs. Non-Blocking I/O
By default, sockets are blocking: accept() puts the thread to sleep until a connection arrives, and recv() until data arrives.
Non-blocking I/O returns control immediately. If nothing is ready, the call fails with EAGAIN or EWOULDBLOCK instead of sleeping. This is the foundation of high-concurrency event loops (Node.js, Redis, Nginx).
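A minimal sketch of the non-blocking failure mode: with no pending handshake, accept() on a non-blocking socket fails immediately (Python surfaces EAGAIN/EWOULDBLOCK as BlockingIOError):

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
server.setblocking(False)       # equivalent to setting O_NONBLOCK on the fd

try:
    server.accept()             # no client has connected yet
    got_error = False
except BlockingIOError:         # EAGAIN / EWOULDBLOCK
    got_error = True

server.close()
```

An event loop never calls accept() blindly like this; it waits on epoll/kqueue/select until the kernel reports the socket as readable, then calls accept() knowing it will succeed.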
3. Connection Pooling
The TCP 3-way handshake exchanges three segments (~1.5 RTT), and the client cannot send application data until a full RTT has elapsed. If a database query executes in 2ms but the handshake costs 50ms, connection setup dominates the request.
Pooling keeps established sockets open in a "pool" to be reused for subsequent requests, bypassing the handshake/teardown phase entirely.
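A hypothetical minimal pool to illustrate the idea (the ConnectionPool class and its acquire/release names are invented for this sketch; production pools also handle health checks, timeouts, and growth):

```python
import queue
import socket

class ConnectionPool:
    """Hands out established sockets and takes them back for reuse."""

    def __init__(self, host, port, size):
        self._pool = queue.Queue()
        for _ in range(size):
            # The handshake cost is paid once, up front, per socket.
            s = socket.create_connection((host, port))
            self._pool.put(s)

    def acquire(self):
        return self._pool.get()      # reuse an already-connected socket

    def release(self, s):
        self._pool.put(s)            # return it instead of close() + re-handshake

# Usage against a local listener standing in for a database server:
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(4)
host, port = server.getsockname()

pool = ConnectionPool(host, port, size=2)
conn = pool.acquire()
pool.release(conn)
```

Every acquire()/release() cycle after construction skips the handshake and teardown entirely; only the initial fill pays that cost.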
4. Critical Socket Options
- SO_REUSEADDR: Allows a server to bind() to a port whose previous connections are still in the TIME_WAIT state. Without it, restarting a crashed server often fails with "Address already in use" for 60+ seconds.
- TCP_NODELAY: Disables Nagle's Algorithm, which buffers small writes to cut down on tiny packets and network congestion. For real-time applications (gaming, high-frequency trading), that buffering introduces unacceptable latency.
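Both options are set via setsockopt() before use; a minimal sketch (note SO_REUSEADDR must be set before bind() to have any effect):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# SO_REUSEADDR: permit bind() while old connections linger in TIME_WAIT.
# Must be set before bind().
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

# TCP_NODELAY: disable Nagle's Algorithm so small writes are sent immediately.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

reuse = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
```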