networks
-
TCP: two virtual queues between sender and receiver
- reliable: ACKs/retransmission
-
UDP:
- not reliable: no ACKs/retransmission (see the sketch below)
- sends datagrams from one endpoint to another
- forwarding: packets are moved through intermediate nodes
- routing: building the forwarding table -- i.e. determining the next node to forward each packet to
- Wikipedia
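A minimal sketch of UDP's fire-and-forget behavior (peer address/port are hypothetical, error handling trimmed): `sendto()` reports only local failures, and a datagram lost in transit produces no signal at the sender.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    /* Datagram socket: no connection setup, no ACKs, no retransmission. */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port = htons(9000);                     /* hypothetical peer */
    inet_pton(AF_INET, "127.0.0.1", &peer.sin_addr);

    /* sendto() succeeding only means the datagram left this host;
       if it is lost in transit, the sender is never told. */
    const char *msg = "hello";
    if (sendto(fd, msg, strlen(msg), 0,
               (struct sockaddr *)&peer, sizeof peer) < 0)
        perror("sendto");

    close(fd);
    return 0;
}
```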
-
Introduction to Ad Hoc networks
- Tree-based protocol
- Mesh-based protocol
- A very nice article: http://www.nightmare.com/medusa/async_sockets.html
-
service: a system-independent abstraction of the interface between layers
- service is independent of protocol
-
service definition: consists of two parts
- a set of service primitives, which specify the operations to be performed on the service, and a set of parameters used as arguments to those operations
- a set of rules that determine the legal sequences in which the service primitives can be invoked
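One way to make the two-part definition above concrete: a hypothetical connection-oriented service rendered as C declarations. All names here are invented for illustration; the primitives and parameters are part 1, and the legal-sequence rules live in the comments (part 2).

```c
/* Part 1: the primitives and their parameters.
   Part 2: the sequencing rules, e.g. svc_send() is only legal
   after a connection has been established. */
typedef int conn_t;   /* opaque handle to one connection */

conn_t svc_listen(int local_addr);                  /* passive open: wait for a peer */
conn_t svc_connect(int remote_addr);                /* active open: initiate a connection */
int    svc_send(conn_t c, const void *buf, int n);  /* legal only while connected */
int    svc_receive(conn_t c, void *buf, int n);     /* legal only while connected */
int    svc_disconnect(conn_t c);                    /* closes the legal send/receive window */
```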
-
protocol: rules and behavior required by any entity participating in the transfer of data
- protocol specification defines sequences of message exchanges in the transfer of data
- protocol specifies the minimal state machine that any implementation must conform to.
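A toy sketch of the "minimal state machine" view, loosely TCP-shaped (the states and events are invented for illustration): the protocol spec is exactly the table of which events are legal in which state.

```c
#include <stdio.h>

/* Toy protocol state machine: each implementation must accept the
   same legal event sequences, whatever its internal structure. */
typedef enum { CLOSED, SYN_SENT, ESTABLISHED } pstate;
typedef enum { EV_ACTIVE_OPEN, EV_SYNACK_RCVD, EV_CLOSE } pevent;

static pstate step(pstate s, pevent e) {
    switch (s) {
    case CLOSED:      if (e == EV_ACTIVE_OPEN) return SYN_SENT;    break;
    case SYN_SENT:    if (e == EV_SYNACK_RCVD) return ESTABLISHED; break;
    case ESTABLISHED: if (e == EV_CLOSE)       return CLOSED;      break;
    }
    return s; /* event illegal in this state: ignore (or signal an error) */
}

int main(void) {
    pstate s = CLOSED;
    s = step(s, EV_ACTIVE_OPEN);   /* CLOSED -> SYN_SENT */
    s = step(s, EV_SYNACK_RCVD);   /* SYN_SENT -> ESTABLISHED */
    printf("state = %d\n", s);     /* 2 == ESTABLISHED */
    return 0;
}
```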

-
Endnodes: endpoints of the network
- mostly for (general-purpose) computation
- two sources of overhead:
- structure: layered SW structure for protection (e.g. kernel vs. user space)
- scale: too many jobs to do (e.g. many web requests to a server endnode)
-
Routers: mostly for network interconnection
- two sources of overhead:
- scale:
- bandwidth scaling: optical links get faster
- population scaling: more endpoints added (more target destinations)
- services: new types/categories of network services (eBay, IoT, Amazon, Netflix)
Endnode bottlenecks:

| Bottleneck | Cause | Solutions |
|---|---|---|
| Copying | Protection, abstraction layers | Copying many data blocks without OS intervention (e.g. RDMA) |
| Context switching | Complex scheduling | User-level protocol implementations, event-driven servers |
| System calls | Protection, abstraction layers | Direct channels from applications to drivers (e.g. VIA) |
| Timers | Scaling with number of timers | Timing wheels |
| Demultiplexing | Scaling with number of endpoints | BPF and Pathfinder |
| Checksums/CRCs | Generality, scaling with link speeds | Multibit computation |
| Protocol code | Generality | Header prediction |
Router bottlenecks:

| Bottleneck | Cause | Solutions |
|---|---|---|
| Exact lookups | Link speed scaling | Parallel hashing |
| Prefix lookups | Link speed scaling, prefix database size scaling | Compressed multibit tries |
| Packet classification | Service differentiation, link speed and size scaling | Decision tree algorithms, hardware parallelism (TCAMs) |
| Switching | Optical-electronic speed gap, head-of-line blocking | Crossbar switches, virtual output queues |
| Fair scheduling | Service differentiation, link speed scaling, memory scaling | Weighted fair queueing, deficit round robin, DiffServ, core stateless fair queueing |
| Internal bandwidth | Scaling of internal bus speeds | Reliable striping |
| Measurement | Link speed scaling | Juniper's DCU |
| Security | Scaling in number and intensity of attacks | Traceback with Bloom filters, extracting worm signatures |
- e.g. bad packet detection must happen at the same rate as packet arrival
- Why Timers?
-
(Periodic) Failure Detection
- ping (autonomous) peers to see if they are live
- detect the lack of some action (e.g. a message acknowledgement) within a specific period -- needed for retransmission (see the sketch after this list)
- Algorithms where time or relative time is integral
- rate-based flow control: control the production rate of some entities
- scheduling algorithms: round-robin scheduling over time quantum
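A minimal sketch of the retransmission case above, assuming a datagram socket `fd` and a hypothetical `send_packet()` transmit routine: the 1-second `select()` timeout plays the role of the retransmission timer.

```c
#include <stdio.h>
#include <sys/select.h>
#include <sys/time.h>

/* Timeout-driven retransmission over a datagram socket fd. */
void send_with_retransmit(int fd) {
    for (int attempt = 1; attempt <= 5; attempt++) {
        /* send_packet(fd);  -- hypothetical: (re)transmit the data */
        struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        if (select(fd + 1, &rfds, NULL, NULL, &tv) > 0)
            return;        /* a reply (e.g. an ACK) arrived in time */
        /* timer expired with no ACK: loop around and retransmit */
        fprintf(stderr, "timeout on attempt %d, retransmitting\n", attempt);
    }
}
```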
-
Timer performance is critical if:
- timers are implemented with interrupts in a processor and interrupt overhead is large
- fine-grained timers (e.g. microsec or lower) are required
- starting or stopping a timer incurs overhead
- the number of timers is large
- when there are 2000 live connections and each connection requires 3 timers, 6000 timers are needed
-
BSD TCP implementation: does not use a timer per packet -- it uses only a few timers for the entire networking stack
- e.g. a 200-msec timer and a 500-msec timer
- StartTimer(Interval, RequestId, ExpiryAction)
- StopTimer(RequestId)
- PerTickBookKeeping: let the granularity of the timer be T units. Then every T units this routine checks whether any outstanding timers have expired
- ExpiryProcessing: performs the ExpiryAction specified in the StartTimer call
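The four routines of the model above, rendered as C signatures (a sketch; the concrete types are assumptions, not part of the model):

```c
/* Interface of the timer model: the schemes below differ only in
   how these routines are implemented. */
typedef void (*expiry_action_t)(int request_id);

void StartTimer(int interval, int request_id, expiry_action_t action);
void StopTimer(int request_id);
void PerTickBookKeeping(void);            /* invoked once every T time units */
void ExpiryProcessing(int request_id);    /* runs the stored ExpiryAction */
```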
Unordered-array scheme:
- StartTimer: just put "time until expiry" in a memory location -- O(1)
- PerTickBookKeeping: scan each timer memory location and decrement "time until expiry" -- O(n)
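A sketch of this scheme under the model interface above (the fixed-size table, ids used as indices, and hard-coded expiry action are simplifications):

```c
#include <stdio.h>

/* One slot per timer holding "time until expiry". */
#define MAX_TIMERS 64

static int remaining[MAX_TIMERS];   /* 0 means the slot is free */

static void StartTimer(int id, int interval) {
    remaining[id] = interval;                   /* O(1) */
}

static void PerTickBookKeeping(void) {
    for (int id = 0; id < MAX_TIMERS; id++)     /* O(n) scan per tick */
        if (remaining[id] > 0 && --remaining[id] == 0)
            printf("timer %d expired\n", id);   /* ExpiryProcessing */
}

int main(void) {
    StartTimer(3, 2);
    for (int t = 1; t <= 3; t++)
        PerTickBookKeeping();                   /* timer 3 fires on tick 2 */
    return 0;
}
```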
Priority-queue scheme:
- StartTimer: put "absolute expiry time" in a priority queue (ordered by time) -- O(lg n)
- PerTickBookKeeping: check whether the head elements of the queue have expired; if yes, process them -- O(1)
- optimization: let the timer wake up at the "earliest expiry time" -- no need to wake up regularly, but this depends on whether the architecture allows it
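A sketch using a binary min-heap keyed on absolute expiry time (fixed capacity and integer tick counts are assumptions):

```c
#include <stdio.h>

/* Min-heap of timers ordered by absolute expiry time. */
#define CAP 64

typedef struct { int expires_at, id; } tmr;
static tmr heap[CAP];
static int n;

static void swap_(int a, int b) { tmr t = heap[a]; heap[a] = heap[b]; heap[b] = t; }

static void StartTimer(int id, int expires_at) {
    int i = n++;
    heap[i].expires_at = expires_at;
    heap[i].id = id;
    while (i > 0 && heap[(i - 1) / 2].expires_at > heap[i].expires_at) {
        swap_(i, (i - 1) / 2);                     /* sift up: O(lg n) */
        i = (i - 1) / 2;
    }
}

static void PerTickBookKeeping(int now) {
    while (n > 0 && heap[0].expires_at <= now) {   /* O(1) peek per tick */
        printf("timer %d expired at %d\n", heap[0].id, now);
        heap[0] = heap[--n];
        for (int i = 0;;) {                        /* sift down to re-heapify */
            int l = 2 * i + 1, r = 2 * i + 2, m = i;
            if (l < n && heap[l].expires_at < heap[m].expires_at) m = l;
            if (r < n && heap[r].expires_at < heap[m].expires_at) m = r;
            if (m == i) break;
            swap_(i, m);
            i = m;
        }
    }
}

int main(void) {
    StartTimer(1, 5);
    StartTimer(2, 3);
    for (int now = 1; now <= 6; now++)
        PerTickBookKeeping(now);   /* fires timer 2 at 3, timer 1 at 5 */
    return 0;
}
```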
Timing wheel:
- needs a circular buffer of size N
- StartTimer: put the timer at slot ("time until expiry" (j) + current time (i)) mod N -- O(1)
- PerTickBookKeeping: check the current slot and process each timer in its linked list (if any) -- O(1)
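A toy timing wheel along those lines; it assumes every interval j < N, so entries need no "remaining rounds" counter:

```c
#include <stdio.h>
#include <stdlib.h>

/* N circular slots, each holding a linked list of timers. */
#define N 8

typedef struct node { int id; struct node *next; } node;
static node *wheel[N];
static int now;                         /* current slot index i */

static void StartTimer(int id, int interval) {      /* requires interval < N */
    int slot = (now + interval) % N;                 /* (i + j) mod N */
    node *t = malloc(sizeof *t);
    t->id = id;
    t->next = wheel[slot];
    wheel[slot] = t;                                 /* O(1) insert */
}

static void PerTickBookKeeping(void) {
    now = (now + 1) % N;
    for (node *t = wheel[now]; t != NULL; ) {        /* expire current slot only */
        node *next = t->next;
        printf("timer %d expired at tick %d\n", t->id, now);
        free(t);
        t = next;
    }
    wheel[now] = NULL;
}

int main(void) {
    StartTimer(1, 2);
    StartTimer(2, 5);
    for (int i = 0; i < 6; i++)
        PerTickBookKeeping();   /* timer 1 fires at tick 2, timer 2 at tick 5 */
    return 0;
}
```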