Network Management

The roots of the network management protocols in common use today can be traced back to developments in the late 1980s. Before then, network management was generally performed using low-level signaling techniques to send special control information. On receiving this information, the hardware would stop normal operation and enter a special diagnostic mode in which it responded to the commands the information contained. This approach worked well in homogeneous networks, which used the same interface technology everywhere. With the advent of protocol stacks and the abstraction of lower-level network characteristics, however, networks began to support a wider variety of interface technologies, and a different approach was needed to support management at the network level.

At this point, both the TCP/IP and Open Systems Interconnection (OSI) protocol stacks began to define network management protocols that operated at the application layer. This change in approach had both advantages and disadvantages. The main advantage was that management could now be performed with the same tools anywhere in the network; the main disadvantage was that management was possible only when the entire protocol stack was up and running correctly.

The move to an application-layer approach led to the creation of a client/server architecture that is still widely used. A management client, running under the network administrator's user account on a host computer, communicated with management servers residing on each of the other devices on the network that required management. In early networks, servers tended to reside on only two types of "other equipment" hosts […]. SNMPv3, which includes view-based access control, was completed in 1999 but has not yet advanced beyond Draft Standard status at the IETF.

The history described above is therefore characterized by incremental extensions and clarifications since the move to application-level management in the late 1980s. The "data model" has remained largely unchanged since then: it still consists of independently specified variables collected into groups only for ease of access. The relationships and dependencies between variables are still expressed only in the text of their descriptions, and therefore cannot be analyzed or checked by tools such as compilers. While this approach is well suited to the job for which it was originally intended, namely collecting statistics from gateways, it has distinct limitations with respect to the overall management of today's ever-changing networks.
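The passage above makes two concrete technical points: management follows a client/server (manager/agent) pattern, and the data model is a flat collection of independently specified variables whose interdependencies exist only in descriptive text. The following minimal Python sketch illustrates both. It is not real SNMP code: the OIDs are the standard MIB-II interface counters (ifInOctets, ifInUcastPkts, ifInErrors), but the in-memory store, the sample values, and the agent_get/manager_poll helpers are invented for illustration.

```python
# Minimal sketch of the flat, SNMP-style data model described above.
# Hypothetical store and helpers; only the OIDs correspond to real
# MIB-II objects (ifInOctets = column 10, ifInUcastPkts = 11,
# ifInErrors = 14 under the ifEntry at 1.3.6.1.2.1.2.2.1, instance 1).

# Server (agent) side: a flat store mapping OID -> scalar value.
# Variables are "grouped" only by sharing a common OID prefix.
AGENT_MIB = {
    "1.3.6.1.2.1.2.2.1.10.1": 48_113_204,  # ifInOctets, interface 1
    "1.3.6.1.2.1.2.2.1.11.1": 391_023,     # ifInUcastPkts, interface 1
    "1.3.6.1.2.1.2.2.1.14.1": 17,          # ifInErrors, interface 1
}

# Relationships between variables live only in human-readable text,
# so no compiler or tool can check them -- the limitation noted above.
DESCRIPTIONS = {
    "1.3.6.1.2.1.2.2.1.14.1":
        "Inbound packets discarded because of errors; cannot meaningfully "
        "exceed the total number of inbound packets.",
}

def agent_get(oid: str):
    """Server (agent) half of the exchange: return one variable, or None."""
    return AGENT_MIB.get(oid)

def manager_poll(oids):
    """Client (manager) half: fetch each variable independently."""
    return {oid: agent_get(oid) for oid in oids}

if __name__ == "__main__":
    for oid, value in manager_poll(sorted(AGENT_MIB)).items():
        print(f"{oid} = {value}")
```

Each variable is fetched and interpreted on its own; nothing in the model ties ifInErrors to the packet counters except the description string, which is exactly why such a model suits statistics collection but resists automated consistency checking.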