Relay Station for Real-Time Broadcasting on the Internet
 

1. Architecture and Storage Modeling of the Relay Station

        The relay station architecture, illustrated in Figure 1, consists of three main parts: incoming data management, storage management, and outgoing data management. Each part is controlled by a central manager process, which determines when each part or process starts to work and what kind of job it performs. Moreover, the central manager receives user requests for on-demand broadcast events. If a requested event is available, the central manager adds the request to the request queue.
 

Figure 1 Overview of the relay station architecture
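
        As a rough illustration of the central manager's request handling described above, the availability check and queuing step can be sketched as follows. The event catalogue, queue, and function names are hypothetical and are used only for illustration.

    # Hedged sketch of the central manager's request handling; names are assumptions.
    from collections import deque

    available_events = {"seminar-oct07"}      # hypothetical catalogue of events stored at the station
    request_queue = deque()                   # requests waiting for the rebroadcasting processes

    def handle_user_request(event_id, client_addr):
        """Queue a rebroadcast request only if the event is available."""
        if event_id not in available_events:
            return False                      # event not stored; request is rejected
        request_queue.append((event_id, client_addr))
        return True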

        The sources of the relay station are broadcast servers that broadcast data, such as real-time audio and video, presentations, conferences, and so on, over the Mbone. The clients of the relay station are users or organizations that are interested in those broadcast data and have enough resources (for example, connection speed, playback software, etc.) to receive them. Figure 2 illustrates the architecture and storage modeling of the relay station.
 

Figure 2 Architecture and storage modeling of the relay station
 

1.1 Incoming Data Management

        Incoming data management is responsible for monitoring incoming packets at the router. Only the packets sent from a desired broadcast server are selected, and these packets are temporarily stored in an incoming buffer. This part has only one process, called the packet monitor. It runs at the router, monitors the incoming packets, and stores the selected packets in an incoming buffer, where they wait for the next process of the storage management part. The packet monitor works at the Internet layer, retrieving IP datagrams; the selected datagrams are those whose source addresses belong to the desired broadcast servers.
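
        The selection step can be approximated in user space as in the sketch below. The real packet monitor captures IP datagrams at the router; here a UDP socket joined to the multicast group stands in for IP-layer capture, and the server address and the 4-byte sequence-number prefix are assumptions made only for illustration.

    # User-space approximation of the packet monitor (not IP-layer capture).
    import socket
    import struct

    GROUP, PORT = "239.192.11.111", 5999      # group address and port from the experiment
    DESIRED_SOURCES = {"202.183.1.10"}        # hypothetical broadcast-server address

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    def packet_monitor(enqueue):
        """Keep only packets from the desired servers and hand them to the incoming buffer."""
        while True:
            data, (src_addr, _src_port) = sock.recvfrom(2048)
            if src_addr not in DESIRED_SOURCES:
                continue                      # drop packets from other sources
            seq = struct.unpack("!I", data[:4])[0]   # assumed sequence-number prefix
            enqueue(seq, data[4:])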

        The incoming buffer is a shared memory area whose size must be large enough to buffer broadcast packets for the storage management part. The buffer is a statically allocated region organized as a priority queue, constructed from a heap data structure. The key of the priority queue is the sequence number of each incoming packet, sorted in ascending order. The root node of the priority queue is therefore the packet with the smallest sequence number, which is removed first for the next process.
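
        A minimal sketch of this buffer, using a binary min-heap keyed by the packet sequence number, is given below. The class and method names are illustrative; in the relay station the buffer resides in shared memory between the packet monitor and the storage management process.

    # Incoming buffer sketched as a priority queue built on a heap.
    import heapq

    class IncomingBuffer:
        def __init__(self):
            self._heap = []                   # entries are (sequence_number, payload)

        def enqueue(self, seq, payload):
            heapq.heappush(self._heap, (seq, payload))

        def dequeue(self):
            # The root holds the smallest sequence number, so packets
            # leave the buffer in order for the storage management process.
            return heapq.heappop(self._heap)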
 

1.2 Storage Management

        The storage devices of the relay station are connected hierarchically, with varying speed and capacity. The data in the storage hierarchy is managed by the storage management part. The hierarchy has two levels: the upper level is a set of magnetic disks (Hard Disk 0 (HD0), Hard Disk 1 (HD1), Hard Disk 2 (HD2), and so on), and the lower level is a tertiary storage device. The upper level is divided into two parts. The first part temporarily stores the stream of incoming data, and the second part temporarily stores the rebroadcast data.

        It is suitable to keep all the data of each broadcast event on the tertiary storage: although tertiary storage is slower than magnetic disk, it provides more capacity at an equivalent price. The data is transferred from the tertiary storage to the upper level when a rebroadcast event is initialized. The data of an event is removed from the tertiary storage when it is no longer requested by users.
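
        This staging policy can be sketched as below; the mount points, directory layout, and function names are assumptions made only for illustration.

    # Illustrative two-level staging policy (paths and names are hypothetical).
    import os
    import shutil

    TERTIARY_ROOT = "/tertiary/events"        # assumed mount point of the tertiary storage
    DISK_ROOT = "/disks/rebroadcast"          # assumed rebroadcast area on the magnetic disks

    def stage_event(event_id):
        """Copy an event from tertiary storage to disk when its rebroadcast is initialized."""
        src = os.path.join(TERTIARY_ROOT, event_id)
        dst = os.path.join(DISK_ROOT, event_id)
        shutil.copytree(src, dst)
        return dst

    def retire_event(event_id, still_requested):
        """Remove an event from tertiary storage once users no longer request it."""
        if not still_requested:
            shutil.rmtree(os.path.join(TERTIARY_ROOT, event_id))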

        The storage management part consists of four processes. The first two processes are used to handle the incoming broadcast data. The details of both processes are described below.

        The last two processes are used in the rebroadcasting part for multicasting data to clients. The details of these processes are described below.


1.3 Outgoing Data Management

         Outgoing data management is responsible for multicasting data from the outgoing buffer to clients over the local Mbone. An event is multicast on the group address announced to the users. A user can receive data from a channel that he/she is interested in and that suits the currently available bandwidth.

         The outgoing buffer is an array of shared memory in the relay station, with a separate buffer for each multicast channel. It temporarily stores blocks of data waiting to be multicast to a group of users. In this design, the I/O speed of the hard disk must be faster than the rate at which data is multicast from each disk.

         The outgoing data management part has only one process, called multicast control. This process runs at the relay station, with one instance per outgoing buffer. It is responsible for reading blocks of data from the outgoing buffer and generating multicast packets from them. A multicast packet consists of a data block and the information needed for multicasting, and it is transmitted to the clients via the local Mbone.
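
         A minimal sketch of one multicast control instance is given below. The group address, port, and TTL are taken from the experiment in Section 2, while the block size and the sequence-number header are assumptions for illustration only.

    # Sketch of one multicast control instance serving one outgoing buffer.
    import socket
    import struct

    GROUP, PORT, TTL = "239.192.11.111", 5999, 40
    BLOCK_SIZE = 1024                         # 1 Kbyte of data per packet, as in the experiment

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, TTL)

    def multicast_control(outgoing_buffer):
        """Read data blocks from the outgoing buffer and multicast them to the group."""
        seq = 0
        for block in outgoing_buffer:         # each element is one block read from disk
            packet = struct.pack("!I", seq) + block[:BLOCK_SIZE]   # header + data block
            sock.sendto(packet, (GROUP, PORT))
            seq += 1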

 

 
2. Performance of the Relay Station

         The performance of the relay station was evaluated by collecting statistics in the Computer Science and Information Management program lab at AIT from 4:00 pm to 11:00 pm on 7 October 1998. Data was multicast from "cache-naist1.ai3.net" as the server to "freezbie.cs.ait.ac.th" as the relay station (client). The relay station was implemented following the above architecture. In this experiment, data was sent via the group address "239.192.11.111", port 5999, with TTL 40. The size of data per packet was 1 Kbyte. The maximum data transfer rate between the server and the relay station was about 160 Kbits/second, and the average disk I/O rate of the relay station was about 206 Kbytes/second.

         From the statistics in the tables below, the percentage of data loss for two types of client is compared. The first client is a simple receiver, which receives data from the server and counts the number of received packets (no disk I/O). The second client is a relay station, which caches the multicast data on disk. The data was collected at various rates and various sizes of data per session. The percentage of data loss is measured in packets.
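
         The simple receiver used for comparison can be sketched as follows. The timeout, the sequence-number header, and the assumption that sequence numbers start at zero are illustrative choices, not details reported by the experiment.

    # Sketch of the simple receiver: count packets and estimate loss from sequence numbers.
    import socket
    import struct

    GROUP, PORT = "239.192.11.111", 5999

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(30)                       # assume the session is over after 30 s of silence

    received, highest_seq = 0, -1
    try:
        while True:
            data, _ = sock.recvfrom(2048)
            received += 1
            highest_seq = max(highest_seq, struct.unpack("!I", data[:4])[0])
    except socket.timeout:
        pass

    expected = highest_seq + 1                # sequence numbers assumed to start at 0
    loss = 100.0 * (expected - received) / expected if expected else 0.0
    print("received %d of %d packets, loss = %.2f %%" % (received, expected, loss))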
 

 Table 1. Percent of data loss when the client is a relay station

 Data Rate           Size of data per session
 (Kbits/second)      < 1 Mbyte    1 Mbyte - 5 Mbytes    > 5 Mbytes
 0 - 64              1.13 %       1.23 %                0.94 %
 65 - 120            1.15 %       22.96 %               20.9 %
 121 - 160           1 %          48.06 %               34.42 %
 
 
 Table 2. Percent of data loss when the client is a simple receiver

 Data Rate           Size of data per session
 (Kbits/second)      < 1 Mbyte    1 Mbyte - 5 Mbytes    > 5 Mbytes
 0 - 64              1.2 %        1.1 %                 1.32 %
 65 - 120            1.2 %        0.96 %                1.21 %
 121 - 160           0.8 %        1.3 %                 1.45 %
 

        From Table 1, the relay station can cache real-time data at a data rate of 0 - 64 Kbits/second, and the data loss at this rate is only 0.94 - 1.23 percent. At data rates of 65 - 160 Kbits/second, the data loss is only 1 - 1.15 percent when the size of data per session is less than 1 Mbyte, but it increases to 20.9 - 48.06 percent when the size of data per session is more than 1 Mbyte.

        Compared with Table 2, the data loss of the relay station is in the same range as that of a simple receiver (0.8 % - 1.45 %). This is the normal case for UDP, which guarantees neither the ordering nor the delivery of packets. At data rates above 64 Kbits/second with more than 1 Mbyte of data per session, the data loss of the relay station increases considerably due to the overhead of disk I/O. Therefore, the maximum sustainable data rate for the relay station is 0 - 64 Kbits/second; this rate also depends on the current network traffic.