Telematica systemen en toepassingen (261000)
Transmission rate = bandwidth | Network edge: Client/server model: client host requests, receives service from always-on server (web browser/web server); P2P: minimal (or no) use of dedicated servers (Kazaa),
scalable
cnction oriented: TCP: handshaking, reliable, in-order bytestream data transfer, flow control, congestion control (HTTP, SMTP) | cnctionless: UDP: Unreliable data transfer, no flow/congestion control (streaming
media, DNS)
Network Core: Circuit switched: dedicated circuit per call (telephone), dividing link bandwidth: FDM/TDM | packet switched: data sent through net in discrete 'chunks' (internet), congestion (packets queue), store
and forward.
Datagram network: dest. addr determines next hop | virtual circuit: tag carried by packet determines next hop (routers maintain per-call state).
Delay: Nodal Delay= processing + queueing + transmission (=L/R) + propagation (=d/s).
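The transmission and propagation terms of the nodal-delay formula can be checked numerically; a minimal Python sketch, with illustrative values (1500-byte packet, 10 Mb/s link, 3000 km at 2·10^8 m/s) that are assumptions, not from these notes:

```python
# Nodal delay components: transmission (L/R) and propagation (d/s).
def transmission_delay(L_bits, R_bps):
    return L_bits / R_bps          # time to push all L bits onto the link

def propagation_delay(d_m, s_mps):
    return d_m / s_mps             # time for one bit to cross the link

# Illustrative values: 1500-byte packet, 10 Mb/s link, 3000 km at 2e8 m/s.
d_trans = transmission_delay(1500 * 8, 10e6)   # 0.0012 s
d_prop = propagation_delay(3_000_000, 2e8)     # 0.015 s
```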
Layering: Application: supporting network application(http/smtp), Transport: host-host data transfer(TCP/UDP), Network: routing of datagrams from src. to dest.(IP/routing prot.), Link: data transfer between
neighbouring elements.
Application layer:
Client process initiates communication, server process waits to be contacted | Process sends/receives messages to/from its socket | Identifier includes IP address and port number (HTTP: 80, SMTP: 25, FTP:
21 control / 20 data) | HTTP: stateless
Nonpersistent HTTP: HTTP/1.0, at most 1 object is sent over a TCP cnction | Persistent HTTP without pipelining: multiple objects can be sent over a single TCP cnction, new requests only when previous
response has been received | Persistent HTTP with pipelining: HTTP/1.1, client sends requests as soon as it encounters a referenced object.
RTT: time for a small packet to travel from client to server and back.
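The non-persistent vs. pipelined difference can be sketched with a rough RTT-only fetch-time model (transmission times ignored; 2 RTTs per object = TCP handshake + request/response is the usual back-of-envelope model, and the numeric values below are assumptions):

```python
# Rough fetch-time model for a page: one base file plus n referenced objects.
def nonpersistent(rtt, n):
    # 2 RTTs per object: TCP handshake + request/response, no parallelism
    return 2 * rtt * (1 + n)

def persistent_pipelined(rtt, n):
    # 2 RTTs for the base file, then ~1 RTT for all pipelined requests
    return 2 * rtt + rtt

# With RTT = 100 ms and 10 referenced objects:
t_np = nonpersistent(0.1, 10)         # 2.2 s
t_pp = persistent_pipelined(0.1, 10)  # 0.3 s
```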
Web caches (proxy servers):web access and HTTP requests via cache,if objects in cache,cache returns objects.
FTP: Separate control/data cnctions, 1 data cnction per file; server maintains “state”: current directory and earlier authentication.
SMTP: uses TCP to reliably transfer e-mail messages from client to server; transfer: handshaking -> transfer -> closure | Commands: ASCII, responses: status code + phrase; messages: 7-bit ASCII, cnctions are
persistent
HTTP<>SMTP: HTTP: pull, each object encapsulated in its own response msg | SMTP: push, multiple objects sent in multipart msg.
Mail access protocols: POP: authorization, download, stateless | IMAP: more features, manipulation of stored msgs on server, keeps user state across sessions | HTTP: web-based mail
DNS: Distributed database implemented in a hierarchy of many name-servers; application-layer protocol: in hosts, routers, name-servers to communicate to resolve names. Tasks: host-name/IP-address
translation, host/mail-server aliasing, load distribution (set of IP-addresses for 1 canonical name). Resource Record format: (name, value, type, TTL); type=A: name=hostname, value=IP address; type=NS:
name=domain, value=IP address of authoritative name server; type=CNAME: name=alias for some canonical name, value=canonical name; type=MX: value=name of mailserver associated with “name”.
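The A/CNAME record types and alias resolution can be illustrated with a toy record table; a minimal sketch in which all names and addresses are made-up examples:

```python
# Toy resource-record table, (name, value, type) as in the notes (TTL omitted).
records = [
    ("example.com", "ns.example.com", "NS"),
    ("ns.example.com", "1.2.3.4", "A"),
    ("www.example.com", "host7.example.com", "CNAME"),
    ("host7.example.com", "5.6.7.8", "A"),
]

def resolve(name):
    """Follow CNAME aliases until an A record yields an IP address."""
    for rname, value, rtype in records:
        if rname == name:
            if rtype == "A":
                return value
            if rtype == "CNAME":
                return resolve(value)   # chase the canonical name
    return None

# resolve("www.example.com") follows the CNAME to host7 and returns "5.6.7.8".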
Hierarchy: Root -> Top Level Domain -> authoritative -> eventually Local Name server (acts as a proxy); Contacting: recursive queries, iterated queries.
Caching: TLD servers cached in local name servers; cached entries time out (per-record TTL, see Resource Record format above).
P2P: All peers are both Web client and transient Web servers->scalable. Configurations: Centralized dir (file transfer decentralized, locating content centralized) or fully distributed (given peer will typically be
connected with <10 overlay neighbours).
Exploiting heterogeneity: each peer is either a group-leader or assigned to a group leader: leader tracks the content in all its children and each file has a hash and descriptor.
Transport services and protocols:
Provide logical communication between app. processes running on different hosts; running in end-systems | Protocols: TCP/UDP.
Demultiplexing at rcv host: delivering received segments to correct socket (with port # and IP) | Multiplexing at send host: gathering data from multiple sockets, enveloping data with header (later used for
demultiplexing). UDP socket identified by 2-tuple: dest IP-address, dest port # | TCP socket identified by 4-tuple: src/dest IP-address, src/dest port #.
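The 2-tuple vs. 4-tuple distinction can be sketched in Python (field names are my own; real demultiplexing happens in the kernel):

```python
# Demultiplexing sketch: UDP keys on (dest IP, dest port) only, so segments
# from different senders land in the same socket; TCP keys on the full 4-tuple.
def udp_key(seg):
    return (seg["dst_ip"], seg["dst_port"])

def tcp_key(seg):
    return (seg["src_ip"], seg["src_port"], seg["dst_ip"], seg["dst_port"])

a = {"src_ip": "10.0.0.1", "src_port": 5000, "dst_ip": "10.0.0.9", "dst_port": 80}
b = {"src_ip": "10.0.0.2", "src_port": 6000, "dst_ip": "10.0.0.9", "dst_port": 80}
# udp_key(a) == udp_key(b): same UDP socket; tcp_key(a) != tcp_key(b): two TCP sockets.
```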
UDP: no cnction estab/cnction state, each segment handled independently of others; UDP uses: DNS, SNMP.
UDP checksum: Sender: treat segment contents as sequence of 16 bit integers. Checksum: addition (1's complement sum) of segment contents (carryout added to the result), sender puts checksum value into
UDP checksum field. Receiver: Compute checksum of received segment, check if computed checksum equals checksum field value.
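A minimal sketch of the 16-bit one's-complement checksum described above (the example words are arbitrary):

```python
def udp_checksum(words):
    """1's-complement sum of 16-bit words (carry wrapped back in), complemented."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # add carryout back into the sum
    return ~total & 0xFFFF

# Receiver check: recomputing over data + checksum must give 0 if no errors.
words = [0x4500, 0x0030, 0x4422]
csum = udp_checksum(words)                 # sender fills the checksum field
ok = udp_checksum(words + [csum]) == 0     # receiver's verification
```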
RDT: rdt_send, udt_send, rdt_rcv, deliver_data | Rdt2.0: error detection (checksum), receiver Acks/Naks | stop and wait: sender sends, then wait for receiver's Ack/Nack.
Rdt 2.1: send: handles garbled acks/naks, seq# added, check if rcvd ack/nak is corrupted (twice as many states); rcv: check if rcvd pkt is duplicate, can't check if rcvd ack/nak corrupted.
Rdt 2.2: nak-free; instead of a nak, rcvr acks last pkt rcvd ok, duplicate ack -> same action as nak.
Rdt 3.0: handles errors and loss. Retransmit if no ack received in time-out | rcvr specify seq# of pkt in ack | performance (utilization): U(send.)=(L/R)/(RTT+L/R).
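The stop-and-wait utilization formula can be evaluated directly; a minimal sketch with the classic illustrative values (L = 1000 bytes, R = 1 Gb/s, RTT = 30 ms — assumptions, not from these notes):

```python
# Stop-and-wait sender utilization: U = (L/R) / (RTT + L/R).
def utilization(L_bits, R_bps, rtt_s):
    t_trans = L_bits / R_bps            # time to transmit one packet
    return t_trans / (rtt_s + t_trans)  # fraction of time sender is busy

# 1000-byte packet on a 1 Gb/s link with 30 ms RTT:
u = utilization(8000, 1e9, 0.030)   # roughly 0.00027: the link sits mostly idle
```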
Pipelining: multiple “in flight” pkts | range of seq#'s increased and buffering at send/rcvr.
GBN: Window, N consecutive unack'ed pkts allowed | single timer for oldest in-flight pkt | timeout: retransmit pkt n and all higher seq#'s. Ack only: always ack for correctly rcvd pkt with highest in-order seq# (may
generate duplicate acks; only need to remember “expectedseqnum”) | out-of-order pkt: discard (no rcvr buffering), re-ack pkt with highest in-order seq#.
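The GBN receiver rules above (accept only in-order, discard everything else, always re-ack the highest in-order seq#) can be sketched as:

```python
# GBN receiver sketch: keeps only "expectedseqnum"; out-of-order pkts are
# discarded and the last in-order pkt is re-acked (duplicate acks).
def gbn_receive(packets):
    expectedseqnum = 0
    acks = []
    for seq in packets:
        if seq == expectedseqnum:
            expectedseqnum += 1          # deliver, slide expectation forward
        acks.append(expectedseqnum - 1)  # always ack highest in-order seq#
    return acks

# Packet 1 lost: 2 and 3 arrive out of order, are discarded, pkt 0 is re-acked.
gbn_receive([0, 2, 3])  # -> [0, 0, 0]
```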
Selective-repeat: Rcvr individually acks all correctly rcvd pkts (can buffer) | sender only resends pkts for which ack not rcvd (sender has timer for each unack'ed pkt); Window: N consecutive seq#'s, limit for
seq#'s for sent, unack'ed pkts;
Sender: if next available seq# in window: send pkt | if timeout: resend pkt n-> restart timer | if ack(n) in [sendbase,sendbase+N]: mark pkt n as rcvd | if n smallest unack'ed pkt: advance window base to next
unack'ed seq#.
Rcvr: send ack(n) | if out-of-order: buffer | if in order: deliver and advance window to next not-yet rcvd pkt | if pkt n in [rcvbase-N, rcvbase-1]: ack(n), otherwise:ignore.
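The SR receiver rules above (individually ack, buffer out-of-order, deliver and advance over buffered pkts) can be sketched as:

```python
# Selective-repeat receiver sketch: pkts inside the window are buffered and
# individually acked; delivery advances rcv_base over any buffered run.
def sr_receive(packets, N=4):
    rcv_base, buffered, delivered = 0, set(), []
    for seq in packets:
        if rcv_base <= seq < rcv_base + N:
            buffered.add(seq)               # ack(seq) would be sent here
            while rcv_base in buffered:     # deliver in-order run, advance window
                buffered.remove(rcv_base)
                delivered.append(rcv_base)
                rcv_base += 1
    return delivered

# Packets 2 and 3 arrive before 1 and are buffered until 1 fills the gap.
sr_receive([0, 2, 3, 1])  # -> [0, 1, 2, 3]
```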
TCP: point-to-point, pipelined, send/rcv buffers, full duplex data (max. segm. size) | seq#: bytestream # of 1st byte in segm data | ack: seq# of next byte expected from other side: cumulative ack.
Timeout value: set with RTT: EstimatedRTT = (1-a)*EstimatedRTT + a*SampleRTT, typical a: 0.125; safety margin: DevRTT = (1-B)*DevRTT + B*|SampleRTT-EstimatedRTT|, typical B: 0.25; TimeoutInterval =
EstimatedRTT + 4*DevRTT.
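One update step of the EWMA formulas above, as a minimal sketch with the typical weights from the notes (the starting values 100 ms / 10 ms and the 140 ms sample are assumptions):

```python
# One EstimatedRTT/DevRTT update plus the resulting TimeoutInterval.
def update_timeout(est, dev, sample, a=0.125, b=0.25):
    est = (1 - a) * est + a * sample                 # EWMA of the RTT samples
    dev = (1 - b) * dev + b * abs(sample - est)      # EWMA of the deviation
    return est, dev, est + 4 * dev                   # TimeoutInterval

# Start at EstimatedRTT = 100 ms, DevRTT = 10 ms; a 140 ms sample arrives:
est, dev, timeout = update_timeout(0.100, 0.010, 0.140)
# est = 0.105, dev = 0.01625, timeout = 0.17
```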
TCP Rdt: uses single retransmission timer | retransmissions triggered by timeout and duplicate acks | cumulative acks.
Fast retransmit: detect lost segment via 3 duplicate acks for the same data; resend segment before timer expires.
Flow control: speed matching service (match send rate to app's drain rate) | rcvr advertises spare room in its buffer by including rcv window value in segm | sender limits unacked data to rcvr's rcv window.
TCP cnction management: handshake: 1: client sends TCP SYN segm to server (specifies initial seq#, no data) 2: server replies with SYNACK (allocates buffers, specifies initial seq#) 3: client replies with Ack,
may contain data. Closing cnction: 1: client sends TCP FIN to server 2: server Acks, closing cnction, sends FIN 3: client replies with Ack, enters “timed wait” (will respond with Ack to rcvd FINs) 4: server rcvs
Ack, cnction closed.
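The handshake and teardown steps above are what the kernel performs under `connect()`/`close()`; a minimal localhost sketch with Python sockets (port 0 lets the OS pick a free port):

```python
import socket
import threading

# connect() triggers SYN/SYNACK/ACK; close() starts the FIN exchange.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
srv.listen(1)

def serve():
    conn, _ = srv.accept()           # returns once the 3-way handshake completes
    conn.sendall(b"hello")
    conn.close()                     # server sends FIN

t = threading.Thread(target=serve)
t.start()
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())       # client sends SYN, waits for SYNACK
data = b""
while len(data) < 5:                 # recv() may return partial data; loop
    chunk = cli.recv(5 - len(data))
    if not chunk:
        break
    data += chunk
cli.close()                          # client FIN; enters "timed wait"
t.join()
srv.close()
```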
Congestion Control: costs: more work (retrans) for given 'goodput' | unneeded retransmissions | when pkt dropped: wasted upstream transmission capacity | Approaches: end-end control (end-system observes
loss/delay), network-assisted control (routers feedback).
Floris van den Brink – October 2004