Monday, January 14, 2008

NETWORKING FUNDAMENTALS


TABLE OF CONTENTS
1.0 General
1.1 Module Objectives
1.2 Module Structure
2.0 TCP/IP architectural model
2.1 Inter-networking
2.2 The TCP/IP protocol layers
3.0 TCP/IP applications
3.1 The client/server model
3.2 Bridges, routers, and gateways
4.0 The Open Systems Interconnection (OSI) Reference Model
4.1 The IP Address and Classes
4.2 IP addressing
5.0 Domain & Workgroup Models
5.1 Workgroups
5.2 Domains
6.0 Directory and Naming protocols
6.1 Domain Name System (DNS)
7.0 The Hierarchical Namespace
7.1 Fully Qualified Domain Names (FQDNs)
7.2 Generic domains
7.3 Country domains
7.4 Mapping domain names to IP addresses
7.5 Mapping IP addresses to domain names – pointer queries
8.0 Virtual Private Network (VPN)
8.1 What Makes a VPN?
9.0 VMware Workstation
9.1 What Is VMware Workstation?
9.2 How Is VMware Workstation Used?
9.3 How Does VMware Workstation Work?
9.4 Why Does Business Need VMware Workstation?
10.0 General Hardware Oriented System Transfer
10.1 Comprehensive PC management for OS deployment, software distribution, user-state migration, back-up and disaster recovery
10.2 Centralized Management and Remote Capabilities
10.3 Benefit From Several New PC Change-Management Capabilities
10.4 Support Today's Latest Technologies
10.5 Clone multiple target PCs using multicasting
10.6 Typical usage examples
10.7 Upgrade networked workstations
11.0 Unit Summary
11.1 Exercise

1.0 General
This course will give you an understanding of various networking concepts. A basic understanding of networking is essential for any test engineer, enabling him or her to work more effectively in a networked environment. These concepts prove especially useful in client/server-based projects, where many network issues need to be addressed during testing.
A good understanding of networking is also essential in performance and load testing.
1.1 Module Objectives:
At the end of this Session you will:

• Be able to define and understand TCP/IP protocol fundamentals
• Be able to define TCP/IP applications
• Understand the concept of domains in networking
• Understand VMware and VPN concepts

1.2 Module Structure:

S.no Topic Duration
1 TCP/IP Model 1
2 OSI Model 1
3 Domain and Workgroup Model 2
4 VPN 2
5 VMware 2
Total Duration 8










2.0 TCP/IP architectural model
The TCP/IP protocol suite is so named for two of its most important protocols: the Transmission Control Protocol (TCP) and the Internet Protocol (IP). A less used name for it is the Internet Protocol Suite, which is the phrase used in official Internet standards documents. We use the more common, shorter term, TCP/IP, to refer to the entire protocol suite in this module.
2.1 Inter-networking
The main design goal of TCP/IP was to build an interconnection of networks, referred to as an inter-network, or internet, that provided universal communication services over heterogeneous physical networks. The clear benefit of such an inter-network is the enabling of communication between hosts on different networks, perhaps separated by a large geographical area. The word internet is simply a contraction of the phrase interconnected network. However, when written with a capital "I", the Internet refers to the worldwide set of interconnected networks. Hence, the Internet is an internet, but the reverse does not apply.
The Internet is sometimes called the connected internet.
The Internet consists of the following groups of networks:

• Backbones: Large networks that exist primarily to interconnect other networks. Currently the backbones are NSFNET in the US, EBONE in Europe, and large commercial backbones.

• Regional networks connecting, for example, universities and colleges.

• Commercial networks providing access to the backbones to subscribers, and networks owned by commercial organizations for internal use that also have connections to the Internet.

• Local networks, such as campus-wide university networks.

In most cases, networks are limited in size by the number of users that can belong to the network, by the maximum geographical distance that the network can span, or by the applicability of the network to certain environments. For example, an Ethernet network is inherently limited in terms of geographical size. Hence, the ability to interconnect a large number of networks in some hierarchical and organized fashion enables the communication of any two hosts belonging to this inter-network.




Figure 1 shows two examples of internets. Each comprises two or more physical networks.

Another important aspect of TCP/IP inter-networking is the creation of a standardized abstraction of the communication mechanisms provided by each type of network. Each physical network has its own technology-dependent communication interface, in the form of a programming interface that provides basic communication functions (primitives).

TCP/IP provides communication services that run between the programming interface of a physical network and user applications. It enables a common interface for these applications, independent of the underlying physical network. The architecture of the physical network is therefore hidden from the user and from the developer of the application.

The application need only code to the standardized communication abstraction to be able to function under any type of physical network and operating platform. As is evident in Figure 1, to be able to interconnect two networks, we need a computer that is attached to both networks and can forward data packets from one network to the other; such a machine is called a router. The term IP router is also used because the routing function is part of the Internet Protocol portion of the TCP/IP protocol suite.

To be able to identify a host within the inter-network, each host is assigned an address, called the IP address. When a host has multiple network adapters (interfaces), each interface has a unique IP address.


The IP address consists of two parts:
IP address = <network number><host number>
The network number part of the IP address identifies the network within the Internet. It is assigned by a central authority and is unique throughout the Internet. The authority for assigning the host number part of the IP address resides with the organization that controls the network identified by the network number.
2.2 The TCP/IP protocol layers
Like most networking software, TCP/IP is modeled in layers. This layered representation leads to the term protocol stack, which refers to the stack of layers in the protocol suite. It can be used for positioning (but not for functionally comparing) the TCP/IP protocol suite against others, such as Systems Network Architecture (SNA) and the Open System Interconnection (OSI) model. Functional comparisons cannot easily be extracted from this, as there are basic differences in the layered models used by the different protocol suites.
By dividing the communication software into layers, the protocol stack allows for division of labor, ease of implementation and code testing, and the ability to develop alternative layer implementations. Layers communicate with those above and below via concise interfaces. In this regard, a layer provides a service for the layer directly above it and makes use of services provided by the layer directly below it.
For example, the IP layer provides the ability to transfer data from one host to another without any guarantee of reliable delivery or duplicate suppression. Transport protocols such as TCP make use of this service to provide applications with reliable, in-order data stream delivery. Figure 2 shows how the TCP/IP protocols are modeled in four layers.

These layers include:
Application layer: The application layer is provided by the program that uses TCP/IP for communication. An application is a user process cooperating with another process, usually on a different host (there is also a benefit to application communication within a single host). Examples of applications include Telnet and the File Transfer Protocol (FTP). Port numbers and sockets define the interface between the application and transport layers.
Transport layer: The transport layer provides the end-to-end data transfer by delivering data from an application to its remote peer. Multiple applications can be supported simultaneously. The most-used transport layer protocol is the Transmission Control Protocol (TCP), which provides connection-oriented reliable data delivery, duplicate data suppression, congestion control, and flow control.

Another transport layer protocol is the User Datagram Protocol; it provides connectionless, unreliable, best-effort service. As a result, applications using UDP as the transport protocol have to provide their own end-to-end integrity, flow control, and congestion control, if it is so desired. Usually, UDP is used by applications that need a fast transport mechanism and can tolerate the loss of some data.
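The choice between the two transport protocols is made by the application when it creates its communication endpoint. As an illustrative sketch only, the following Python fragment shows how an application selects TCP (a stream socket) or UDP (a datagram socket) through the sockets interface:

```python
import socket

# TCP endpoint: connection-oriented, reliable byte stream (SOCK_STREAM).
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# UDP endpoint: connectionless, best-effort datagrams (SOCK_DGRAM).
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

print(tcp_sock.type == socket.SOCK_STREAM)  # True
print(udp_sock.type == socket.SOCK_DGRAM)   # True

tcp_sock.close()
udp_sock.close()
```

Everything else follows from this one choice: the TCP socket gets reliability, flow control, and congestion control from the transport layer, while an application using the UDP socket must provide those services itself if it needs them.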
Inter-network layer: The inter-network layer, also called the Internet layer or the network layer, provides the "virtual network" image of an internet (this layer shields the higher levels from the physical network architecture below it).

Internet Protocol (IP) is the most important protocol in this layer. It is a connectionless protocol that doesn't assume reliability from lower layers. IP does not provide reliability, flow control, or error recovery. These functions must be provided at a higher level. IP provides a routing function that attempts to deliver transmitted messages to their destination.

A message unit in an IP network is called an IP datagram. This is the basic unit of information transmitted across TCP/IP networks. Other inter-network layer protocols are ICMP, IGMP, ARP, and RARP.
Network interface layer: The network interface layer, also called the link layer or the data-link layer, is the interface to the actual network hardware. This interface may or may not provide reliable delivery, and may be packet or stream oriented. In fact, TCP/IP does not specify any protocol here, but can use almost any network interface available, which illustrates the flexibility of the IP layer. Examples are IEEE 802.2, X.25 (which is reliable in itself), ATM, FDDI, and even SNA. TCP/IP specifications do not describe or standardize any network layer protocols per se; they only standardize ways of accessing those protocols from the inter-network layer.


A more detailed layering model is included in Figure 3.

3.0 TCP/IP applications
The highest-level protocols within the TCP/IP protocol stack are application protocols. They communicate with applications on other Internet hosts and are the user-visible interface to the TCP/IP protocol suite.
All application protocols have some characteristics in common:
• They can be user-written applications or applications standardized and shipped with the TCP/IP product. Indeed, the TCP/IP protocol suite includes application protocols such as:
- TELNET for interactive terminal access to remote Internet hosts.
- FTP (File Transfer Protocol) for high-speed disk-to-disk file transfers.
- SMTP (Simple Mail Transfer Protocol) as an Internet mailing system.
These are some of the most widely implemented application protocols, but many others exist. Each particular TCP/IP implementation will include a lesser or greater set of application protocols.
• They use either UDP or TCP as a transport mechanism. Remember that UDP is unreliable and offers no flow-control, so in this case, the application has to provide its own error recovery, flow control, and congestion control functionality. It is often easier to build applications on top of TCP because it is a reliable stream, connection-oriented, congestion-friendly, flow control-enabled protocol. As a result, most application protocols will use TCP, but there are applications built on UDP to achieve better performance through reduced protocol overhead.
• Most applications use the client/server model of interaction.
3.1 The client/server model

TCP is a peer-to-peer, connection-oriented protocol. There are no master/slave relationships. The applications, however, typically use a client/server model for communications.
A server is an application that offers a service to Internet users; a client is a requester of a service. An application consists of both a server and a client part, which can run on the same or on different systems. Users usually invoke the client part of the application, which builds a request for a particular service and sends it to the server part of the application using TCP/IP as a transport vehicle.
The server is a program that receives a request, performs the required service and sends back the results in a reply. A server can usually deal with multiple requests and multiple requesting clients at the same time.

Most servers wait for requests at a well-known port so that their clients know to which port (and, in turn, which application) they must direct their requests.
The client typically uses an arbitrary port, called an ephemeral port, for its communication. Clients that wish to communicate with a server that does not use a well-known port must have another mechanism for learning to which port they must address their requests. This mechanism might employ a registration service such as portmap, which does use a well-known port.
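The well-known-port versus ephemeral-port arrangement described above can be sketched with Python's sockets interface. This is a minimal illustration only: the server here listens on a port chosen by the operating system purely so the demo is self-contained, whereas a real server would listen on its service's well-known port (for example, 21 for FTP).

```python
import socket
import threading

def echo_once(server_sock):
    """Accept a single connection and echo back whatever it receives."""
    conn, _addr = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Server side: bind to an address and listen.  Port 0 asks the OS for a
# free port; a real well-known service would bind a fixed, agreed port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
server_port = server.getsockname()[1]
worker = threading.Thread(target=echo_once, args=(server,))
worker.start()

# Client side: the OS picks an arbitrary (ephemeral) local port; only the
# server's port must be known in advance.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
worker.join()
server.close()

print(reply)  # b'hello'
```

Note that the client never announces its own port: the server learns it automatically from the incoming connection, which is why only server ports need to be well known.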
3.2 Bridges, routers, and gateways
There are many ways to provide access to other networks. In an inter-network, this is done using routers. In this section, we distinguish between a router, a bridge and a gateway for allowing remote network access.
Bridge Interconnects LAN segments at the network interface layer level and forwards frames between them. A bridge performs the function of a MAC relay, and is independent of any higher layer protocol (including the logical link protocol). It provides MAC layer protocol conversion, if required. A bridge is said to be transparent to IP. That is, when an IP host sends an IP datagram to another host on a network connected by a bridge, it sends the datagram directly to the host and the datagram "crosses" the bridge without the sending IP host being aware of it.
Router Interconnects networks at the inter-network layer level and routes packets between them. The router must understand the addressing structure associated with the networking protocols it supports and make decisions on whether, or how, to forward packets. Routers are able to select the best transmission paths and optimal packet sizes. The basic routing function is implemented in the IP layer of the TCP/IP protocol stack, so any host or workstation running TCP/IP over more than one interface could, in theory and also with most of today's TCP/IP implementations, forward IP datagrams. However, dedicated routers provide much more sophisticated routing than the minimum functions implemented by IP. Because IP provides this basic routing function, the term "IP router" is often used. Other, older terms for router are "IP gateway," "Internet gateway," and "gateway." The term gateway is now normally used for connections at a higher layer than the inter-network layer.
A router is said to be visible to IP. That is, when a host sends an IP datagram to another host on a network connected by a router, it sends the datagram to the router so that it can forward it to the target host.

Gateway Interconnects networks at higher layers than bridges and routers. A gateway usually supports address mapping from one network to another, and may also provide transformation of the data between the environments to support end-to-end application connectivity. Gateways typically limit the interconnectivity of two networks to a subset of the application protocols supported on either one. For example, a VM host running TCP/IP may be used as an SMTP/RSCS mail gateway. A gateway is said to be opaque to IP. That is, a host cannot send an IP datagram through a gateway; it can only send it to a gateway.

The higher-level protocol information carried by the datagrams is then passed on by the gateway using whatever networking architecture is used on the other side of the gateway. Closely related to routers and gateways is the concept of a firewall, or firewall gateway, which is used to restrict access from the Internet or some untrusted network to a network or group of networks controlled by an organization, for security reasons.





4.0 The Open Systems Interconnection (OSI) Reference Model
The OSI (Open Systems Interconnection) Reference Model (ISO 7498) defines a seven-layer model of data communication, with physical transport at the lower layers and application protocols at the upper layers. This model, shown in Figure 5, is widely accepted as a basis for understanding how a network protocol stack should operate and as a reference tool for comparing network stack implementations.

Each layer provides a set of functions to the layer above and, in turn, relies on the functions provided by the layer below. Although messages can only pass vertically through the stack from layer to layer, from a logical point of view, each layer communicates directly with its peer layer on other nodes.
The seven layers are:
Application: Network applications such as terminal emulation and file transfer
Presentation: Formatting of data and encryption
Session: Establishment and maintenance of sessions
Transport: Provision of reliable and unreliable end-to-end delivery
Network: Packet delivery, including routing
Data Link: Framing of units of information and error checking
Physical: Transmission of bits on the physical hardware

In contrast to TCP/IP, the OSI approach started from a clean slate and defined standards, adhering tightly to their own model, using a formal committee process without requiring implementations. Internet protocols use a less formal engineering approach, where anybody can propose and comment on RFCs, and implementations are required to verify feasibility. The OSI protocols developed slowly, and because running the full protocol stack is resource intensive, they have not been widely deployed, especially in the desktop and small computer market.


In the meantime, TCP/IP and the Internet were developing rapidly, with deployment occurring at a very high rate.
As with all other communications protocols, TCP/IP is composed of layers:
IP - is responsible for moving packets of data from node to node. IP forwards each packet based on a four-byte destination address (the IP number). The Internet authorities assign ranges of numbers to different organizations. The organizations assign groups of their numbers to departments. IP operates on gateway machines that move data from department to organization to region and then around the world.
TCP - is responsible for verifying the correct delivery of data from client to server. Data can be lost in the intermediate network. TCP adds support to detect errors or lost data and to trigger retransmission until the data is correctly and completely received.
Sockets - the name given to the package of subroutines that provide access to TCP/IP on most systems.
4.1 The IP Address and Classes
4.1.1 Hosts and networks
IP addressing is based on the concept of hosts and networks. A host is essentially anything on the network that is capable of receiving and transmitting IP packets on the network, such as a workstation or a router. It is not to be confused with a server: servers and client workstations are all IP hosts.

The hosts are connected together by one or more networks. The IP address of any host consists of its network address plus its own host address on the network. IP addressing, unlike, say, IPX addressing, uses one address containing both network and host address. How much of the address is used for the network portion and how much for the host portion varies from network to network.
4.2 IP addressing
An IP address is 32 bits wide, and as discussed, it is composed of two parts: the network number, and the host number [1, 2, 3]. By convention, it is expressed as four decimal numbers separated by periods, such as "200.1.2.3" representing the decimal value of each of the four bytes. Valid addresses thus range from 0.0.0.0 to 255.255.255.255, a total of about 4.3 billion addresses. The first few bits of the address indicate the Class that the address belongs to:
Class Prefix Network Number Host Number
A 0 Bits 0-7 Bits 8-31
B 10 Bits 0-15 Bits 16-31
C 110 Bits 0-23 Bits 24-31
D 1110 N/A N/A
E 1111 N/A N/A

The bits are labeled in network order, so that the first bit is bit 0 and the last is bit 31, reading from left to right. Class D addresses are multicast, and Class E is reserved. The range of network numbers and host numbers may then be derived:
Class Range of Net Numbers Range of Host Numbers
A 1 to 126 0.0.1 to 255.255.254
B 128.0 to 191.255 0.1 to 255.254
C 192.0.0 to 223.255.255 1 to 254
Any address starting with 127 is a loopback address and should never be used for addressing outside the host. A host number of all binary 1's indicates a directed broadcast over the specific network. For example, 200.1.2.255 would indicate a broadcast over the 200.1.2 network. If the host number is 0, it indicates "this host". If the network number is 0, it indicates "this network" [2]. All the reserved bits and reserved addresses severely reduce the available IP addresses from the 4.3 billion theoretical maximum. Most users connected to the Internet will be assigned addresses within Class C, as space is becoming very limited. This is the primary reason for the development of IPv6, which has 128 bits of address space.
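The classful rules above can be condensed into a short sketch. The function below (an illustration for this module, not a production routine) inspects only the first byte of a dotted-decimal address, which is all that is needed to determine the class:

```python
def ip_class(address):
    """Classify a dotted-decimal IPv4 address under classful addressing."""
    first = int(address.split(".")[0])
    if first == 127:
        return "loopback"      # reserved, never used outside the host
    if first <= 126:
        return "A"             # leading bit 0
    if first <= 191:
        return "B"             # leading bits 10
    if first <= 223:
        return "C"             # leading bits 110
    if first <= 239:
        return "D (multicast)" # leading bits 1110
    return "E (reserved)"      # leading bits 1111

print(ip_class("10.1.2.3"))   # A
print(ip_class("172.16.0.1")) # B
print(ip_class("200.1.2.3"))  # C
print(ip_class("224.0.0.1"))  # D (multicast)
```

The first-byte boundaries (126/191/223/239) follow directly from the prefix bits in the class table above.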
5.0 Domain & Workgroup Models
Before PCs, the network model revolved around a central computer server and terminals that users could access. These terminals had no autonomous computing power of their own. They provided the user only with an interactive view of the server.
With the proliferation of personal computers in the late 1980s, people began to store their files on the local hard drive space available on their PC. This, however, posed a problem for sharing files: something that was trivial when everyone was logging into the same machine (that is, the mainframe) from their terminal. People wanted to store their files locally so that they would be accessible during a server outage (something over which they had no control) while still allowing other users to access the files from their own computers. This PC-centric distributed model was named peer networking because all the machines were equally likely to be clients and servers and could operate in both modes.
5.1 Workgroups
The idea of a workgroup goes hand in hand with the concept of peer networking. A workgroup is a unit of people who share responsibilities to achieve a common goal. Each one has to pull his or her own weight. A computer workgroup is no different. As you will see, a computer workgroup can be used in two contexts.
The first concept of a workgroup is as an administrative group of machines that do not share user and group account information. Remember step 2 of the SMB protocol overview? That is when the client sends a username and some proof of identity. The question then becomes "Who will validate the request?" Each machine has a separate and local copy of an account database. Therefore, all validation is done locally. Remember that this is called peer networking, or sometimes peer-to-peer networking, because all machines are essentially equal. Each PC has the capability to serve files and printers as well as validate access requests. This equality does not mean that all machines perform the functions equally well, of course.
Figure 6 illustrates the idea of the workgroup authentication model. The client, shown at the bottom, attempts to access the disk share on SERVER1. SERVER1 alone is responsible for validating the session setup against its local account database, whatever that might be. When the client attempts to access the printer share on SERVER2, that server is responsible for validating the connection. The outcome is entirely distinct from the outcome of the connection to SERVER1. Each server has a local, distinct account database that is unrelated to the others.
The motivation for network browsing is the manner in which resources appear and disappear from the network as hosts start and stop. Unlike a central computing model, such as a mainframe or terminal solution, where everything is located on one machine, it is much more difficult to survey a large number of hosts that can come on and off the network at the whim of the PC's owner. Browsing allows users to view the current servers and resources available dynamically. In this context, a domain and a workgroup are equivalent.











Figure 6

5.2 Domains

A domain is similar to a workgroup with one major exception. In a domain, there is a central authentication server that maintains the domain's user and group accounts. Resources in the domain are accessed regardless of what machine they are located on by validating against the domain controller. This is still peer networking because all machines maintain the capability to serve files and printers and perform the necessary validation. The difference is that the validation is performed against a remote account database located on the domain controller.
Domains grew out of the need to get rid of the mass of passwords that was necessary when every machine had its own local account database. The solution provided users with one account that could allow access to all resources if so desired.
Figure 7 shows a sample connection to a server that is a member of some domain. First, the client sends the connection request containing the user information to SERVER1, asking to access some disk share. SERVER1 then sends a validation request to the domain controller (DC). The validation request contains the user information originally sent by the client. If the DC successfully validates the user, it sends a positive response to SERVER1, which then sends a positive connection response back to the client. This means, assuming that access control mechanisms such as permission lists allow it, that a client can connect to any server in the domain using a single username and password. In the workgroup model of Figure 6, by contrast, the client needed a separate username and password to connect to each server.










Figure 7










6.0 Directory and Naming protocols
The TCP/IP protocol suite contains many applications, but these generally take the form of network utilities. Although these are obviously important to a company using a network, they are not, in themselves, the reason why a company invests in a network in the first place.
The network exists to provide access for users, who may be both local and remote, to a company's business applications, data, and resources, which may be distributed across many servers throughout a building, a city, or even the world. Those servers may be running on hardware from many different vendors and on several different operating systems. This chapter looks at methods of accessing resources and applications in a distributed network.
6.1 Domain Name System (DNS)
The Domain Name System is a standard protocol with STD number 13. Its status is recommended. It is described in RFC 1034 and RFC 1035. This section explains the implementation of the Domain Name System and the implementation of name servers. Early Internet configurations required users to use only numeric IP addresses. Very quickly, this evolved to the use of symbolic host names. For example, instead of typing TELNET 128.12.7.14, one could type TELNET eduvm9, where eduvm9 is then translated in some way to the IP address 128.12.7.14.

This introduces the problem of maintaining the mappings between IP addresses and high-level machine names in a coordinated and centralized way. Initially, host-name-to-address mappings were maintained by the Network Information Center (NIC) in a single file (HOSTS.TXT), which was fetched by all hosts using FTP. This is called a flat namespace. Due to the explosive growth in the number of hosts, this mechanism became too cumbersome (consider the work involved in the addition of just one host to the Internet) and was replaced by a new concept: the Domain Name System.
Hosts can continue to use a local flat namespace (the HOSTS.LOCAL file) instead of or in addition to the Domain Name System, but outside small networks, the Domain Name System is practically essential. The Domain Name System allows a program running on a host to perform the mapping of a high-level symbolic name to an IP address for any other host without the need for every host to have a complete database of host names.
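The flat HOSTS.TXT mechanism can be sketched in a few lines of Python. The file contents below are invented for illustration; the point is that every host needed the complete, centrally maintained table:

```python
# A miniature flat namespace in the spirit of the old HOSTS.TXT file:
# each line maps an IP address to one or more symbolic host names.
HOSTS_TXT = """
128.12.7.14   eduvm9
129.34.139.30 wtscpok
"""

def build_host_table(text):
    """Parse HOSTS.TXT-style lines into a name -> address dictionary."""
    table = {}
    for line in text.strip().splitlines():
        fields = line.split()
        for name in fields[1:]:       # every alias maps to the address
            table[name] = fields[0]
    return table

hosts = build_host_table(HOSTS_TXT)
print(hosts["eduvm9"])  # 128.12.7.14
```

Adding one host to the Internet meant regenerating and redistributing this entire table to every host, which is exactly the scaling problem the hierarchical Domain Name System was designed to remove.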




7.0 The Hierarchical Namespace
Consider the internal structure of a large organization. As the chief executive cannot do everything, the organization will probably be partitioned into divisions, each of them having autonomy within certain limits. Specifically, the executive in charge of a division has authority to make direct decisions, without permission from his or her chief executive. Domain names are formed in a similar way, and will often reflect the hierarchical delegation of authority used to assign them.

For example, consider the name:
small.itso.raleigh.ibm.com
Here, itso.raleigh.ibm.com is the lowest level domain name, a sub-domain of raleigh.ibm.com, which again is a sub-domain of ibm.com, a sub-domain of com. We can also represent this naming concept by a hierarchical tree.
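The delegation chain in this example can be made explicit with a short sketch that walks up the hierarchy one label at a time:

```python
def domain_chain(name):
    """Return a domain name followed by each of its parent domains."""
    labels = name.split(".")
    # Dropping one leading label at a time moves up the hierarchy.
    return [".".join(labels[i:]) for i in range(len(labels))]

for domain in domain_chain("small.itso.raleigh.ibm.com"):
    print(domain)
# small.itso.raleigh.ibm.com
# itso.raleigh.ibm.com
# raleigh.ibm.com
# ibm.com
# com
```

Each printed line is a sub-domain of the line below it, mirroring the delegation of authority described above.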
7.1 Fully Qualified Domain Names (FQDNs)
When using the Domain Name System, it is common to work with only a part of the domain hierarchy, for example, the ral.ibm.com domain. The Domain Name System provides a simple method of minimizing the typing necessary in this circumstance. If a domain name ends in a dot (for example, wtscpok.itsc.pok.ibm.com.), it is assumed to be complete. This is termed a fully qualified domain name (FQDN) or an absolute domain name.
However, if it does not end in a dot (for example, wtscpok.itsc), it is incomplete and the DNS resolver (see below) may complete this, for example, by appending a suffix such as .pok.ibm.com to the domain name. The rules for doing this are implementation-dependent and locally configurable.
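A resolver's name-completion behavior can be sketched as follows. The suffix used here is only an example; as noted above, the actual completion rules are implementation-dependent and locally configurable:

```python
def complete_name(name, default_suffix="pok.ibm.com"):
    """Complete a possibly-relative domain name.

    A trailing dot marks a fully qualified (absolute) name and suppresses
    completion; otherwise a locally configured suffix is appended.  The
    default suffix here is purely illustrative.
    """
    if name.endswith("."):
        return name[:-1]  # already a fully qualified domain name
    return name + "." + default_suffix

print(complete_name("wtscpok.itsc.pok.ibm.com."))  # wtscpok.itsc.pok.ibm.com
print(complete_name("wtscpok.itsc"))               # wtscpok.itsc.pok.ibm.com
```

Both calls above yield the same FQDN: the first because the trailing dot marks the name as already complete, the second because the configured suffix is appended.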





7.2 Generic domains
The three-character top-level names are called the generic domains or the organizational domains. Table 4 shows some of the top-level domains of today's Internet domain namespace.

Since the Internet began in the United States, the organization of the hierarchical namespace initially had only U.S. organizations at the top of the hierarchy, and it is still largely true that the generic part of the namespace contains US organizations. However, only the .gov and .mil domains are restricted to the US.
At the time of writing, the U.S. Department of Commerce – National Telecommunications and Information Administration is looking for a different organization for .us domains. As a result of this, it has been decided to change the status of the Internet Assigned Numbers Authority (IANA), which will no longer be funded and run by the U.S. Government. A new non-profit organization with an international Board of Directors will be funded by domain registries instead. On the other hand, there are some other organizations that have already begun to register new top-level domains.
For current information, see the IANA Web site at: http://www.iana.org

7.3 Country domains
There are also top-level domains named for each of the ISO 3166 international 2-character country codes (from ae for the United Arab Emirates to zw for Zimbabwe). These are called the country domains or the geographical domains. Many countries have their own second-level domains underneath, which parallel the generic top-level domains.

For example, in the United Kingdom, the domains equivalent to the generic domains .com and .edu are .co.uk and .ac.uk (ac is an abbreviation for academic). There is a .us top-level domain, which is organized geographically by state (for example, .ny.us refers to the state of New York). See RFC 1480 for a detailed description of the .us domain.





7.4 Mapping domain names to IP addresses
The mapping of names to addresses is performed by independent, cooperative systems called name servers. A name server is a server program that holds a master copy or a replica of a name-to-address mapping database, or that points to a server that does, and that answers requests from client software called a name resolver.
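The stub resolver in Python's standard library plays the name-resolver role described above: it hands the query to the system's configured resolver, which consults the local hosts file or a name server. A quick illustration (resolving localhost requires no network access):

```python
import socket

# gethostbyname() asks the system's resolver to map a symbolic name
# to an IPv4 address, consulting /etc/hosts or a configured name server.
addr = socket.gethostbyname("localhost")
print(addr)  # typically 127.0.0.1
```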

Conceptually, all Internet domain servers are arranged in a tree structure that corresponds to the naming hierarchy in Figure 132 on page 285. Each leaf represents a name server that handles names for a single sub-domain. Links in the conceptual tree do not indicate physical connections. Instead, they show which other name server a given server can contact.
7.5 Mapping IP addresses to domain names – pointer queries
The Domain Name System provides for a mapping of symbolic names to IP addresses and vice versa. While it is in principle a simple matter to look up the IP address for a given symbolic name (because of the hierarchical structure), the reverse process cannot follow the hierarchy. Therefore, there is another namespace for the reverse mapping. It is found in the domain in-addr.arpa (arpa is used because the Internet was originally the ARPAnet).

IP addresses are normally written in dotted decimal format, and there is one domain level for each byte of the address. However, because domain names have the least-significant part of the name first, while dotted decimal format has the most significant byte first, the bytes of the dotted decimal address appear in reverse order in the domain name. For example, the domain in the Domain Name System corresponding to the IP address 129.34.139.30 is 30.139.34.129.in-addr.arpa. Given an IP address, the Domain Name System can be used to find the matching host name. A domain name query to find the host names associated with an IP address is called a pointer query.













8.0 Virtual Private Network (VPN)
The world has changed a lot in the last couple of decades. Instead of simply dealing with local or regional concerns, many businesses now have to think about global markets and logistics. Many companies have facilities spread out across the country or around the world, and there is one thing that all of them need: A way to maintain fast, secure and reliable communications wherever their offices are.
Until fairly recently, this has meant the use of leased lines to maintain a wide area network (WAN). Leased lines, ranging from ISDN (integrated services digital network, 128 Kbps) to OC3 (Optical Carrier-3, 155 Mbps) fiber, provided a company with a way to expand its private network beyond its immediate geographic area. A WAN had obvious advantages over a public network like the Internet when it came to reliability, performance and security. But maintaining a WAN, particularly when using leased lines, can become quite expensive and often rises in cost as the distance between the offices increases.
As the popularity of the Internet grew, businesses turned to it as a means of extending their own networks. First came intranets, which are password-protected sites designed for use only by company employees. Now, many companies are creating their own VPN (virtual private network) to accommodate the needs of remote employees and distant offices.


A typical VPN might have a main LAN at the corporate headquarters of a company, other LANs at remote offices or facilities and individual users connecting from out in the field.
Basically, a VPN is a private network that uses a public network (usually the Internet) to connect remote sites or users together. Instead of using a dedicated, real-world connection such as a leased line, a VPN uses "virtual" connections routed through the Internet from the company's private network to the remote site or employee. In this article, you will gain a fundamental understanding of VPNs, and learn about basic VPN components, technologies, tunneling and security.



8.1 What Makes a VPN?

A well-designed VPN can greatly benefit a company. For example, it can:
• Extend geographic connectivity
• Improve security
• Reduce operational costs versus traditional WAN
• Reduce transit time and transportation costs for remote users
• Improve productivity
• Simplify network topology
• Provide global networking opportunities
• Provide telecommuter support
• Provide broadband networking compatibility
• Provide faster ROI (return on investment) than traditional WAN
What features are needed in a well-designed VPN? It should incorporate:
• Security
• Reliability
• Scalability
• Network management
• Policy management
There are three types of VPN. In the next couple of sections, we'll describe them in detail.
Remote-Access VPN
Remote-access VPN, also called a virtual private dial-up network (VPDN), is a user-to-LAN connection used by a company that has employees who need to connect to the private network from various remote locations. Typically, a corporation that wishes to set up a large remote-access VPN will outsource to an enterprise service provider (ESP).

The ESP sets up a network access server (NAS) and provides the remote users with desktop client software for their computers. The telecommuters can then dial a toll-free number to reach the NAS and use their VPN client software to access the corporate network.
A good example of a company that needs a remote-access VPN would be a large firm with hundreds of sales people in the field. Remote-access VPNs permit secure, encrypted connections between a company's private network and remote users through a third-party service provider.


Fig: Examples of the three types of VPN
Site-to-Site VPN
Through the use of dedicated equipment and large-scale encryption, a company can connect multiple fixed sites over a public network such as the Internet. Site-to-site VPNs can be one of two types:
Intranet-based - If a company has one or more remote locations that they wish to join in a single private network, they can create an intranet VPN to connect LAN to LAN.
Extranet-based - When a company has a close relationship with another company (for example, a partner, supplier or customer), they can build an extranet VPN that connects LAN to LAN, and that allows all of the various companies to work in a shared environment.













9.0 VMware Workstation
9.1 What Is VMware Workstation?

VMware Workstation is powerful virtual machine software for developers and system administrators who want to revolutionize software development, testing and deployment in their enterprise. Shipping for more than five years and winner of over a dozen major product awards, VMware Workstation enables software developers to develop and test the most complex networked server-class applications running on Microsoft Windows, Linux or NetWare all on a single desktop.

Essential features such as virtual networking, live snapshots, drag and drop and shared folders, and PXE support make VMware Workstation the most powerful and indispensable tool for enterprise IT developers and system administrators.
9.2 How Is VMware Workstation Used?

With over five years of proven success and millions of users, VMware Workstation improves efficiency, reduces costs and increases flexibility and responsiveness. Installing VMware Workstation on the desktop is the first step to transforming your IT infrastructure into virtual infrastructure.
VMware Workstation is used in the enterprise to:

• Streamline software development and testing operations.
• Accelerate application deployments.
• Ensure application compatibility and perform operating system migrations.
9.3 How Does VMware Workstation Work?

VMware Workstation works by enabling multiple operating systems and their applications to run concurrently on a single physical machine. These operating systems and applications are isolated in secure virtual machines that co-exist on a single piece of hardware. The VMware virtualization layer maps the physical hardware resources to the virtual machine's resources, so each virtual machine has its own CPU, memory, disks, I/O devices, etc. Virtual machines are the full equivalent of a standard x86 machine.
VMware Workstation enables enterprise software developers to develop and test the most complex networked server-class applications running on Windows, Linux or NetWare all on a single desktop.

With VMware Workstation you can:
• Build complex networks and develop, test, and deploy new applications, all on a single computer.
• Leverage the portability of virtual machines to easily share development environments and pre-packaged operating system/application testing configurations without risk.
• Add or change operating systems without repartitioning disks or rebooting.
• Run new operating systems and legacy applications on one computer.




9.4 Why Does Business Need VMware Workstation?

Since its launch in 1999, VMware Workstation has revolutionized the way software and IT infrastructure is developed and has become the de facto standard for IT professionals and developers worldwide. If your business is looking to simplify and accelerate development, testing and deployment of software and IT infrastructure, VMware Workstation is essential.
When you deploy VMware Workstation in your environment you will:

• Shorten development cycles.
• Reduce problem resolution time.
• Increase productivity.
• Accelerate time-to-market.
• Improve project quality.


Fig: VMware Workstation Architecture













Why Use VMware Workstation?

Streamline Software Development and Testing
Usage scenarios:
• Create multiple development and testing environments on a single system
• Build mission-critical Windows- and/or Linux-based applications
• Archive test environments on file servers and restore them quickly, as needed
• Test new application updates, OS patches and service packs on a single PC
Benefits:
• Accelerate development cycles and reduce time to market
• Reduce hardware costs by 50-60%
• Reduce costly configuration and set-up time by 25-55%, freeing time to do important development and testing
• Improve project quality with more rigorous testing
• Eliminate costly deployment and maintenance problems

Accelerate Application Deployment
Usage scenarios:
• Test, configure and provision enterprise-class servers as VMware Workstation VMs and then deploy them on a physical server or a VMware GSX or VMware ESX server
• Create a whole network of applications composed of multiple computers and multiple network switches in a set of virtual machines and test them without affecting the production network
• Test physical-to-virtual migrations for server consolidation and legacy application migrations
Benefits:
• Reduce hardware costs by 50-60%
• Improve quality of deployments
• Improve productivity
• Reduce risk to corporate networks by creating complex, secure and isolated virtual networks that mirror enterprise networks

Ensure Application Compatibility and Perform Operating System Migration
Usage scenarios:
• Support legacy applications while migrating safely to a new operating system
• Test new operating systems in secure, clean virtual machines prior to deployment
• Eliminate the need to port legacy applications
Benefits:
• Complete complex OS migration projects on time and on budget
• Increase operations efficiency by up to 50%
• Reduce desktop capital costs by 50-60%
• Minimize end-user pain during transition



10.0 GENERAL HARDWARE ORIENTED SYSTEM TRANSFER
10.1 Comprehensive PC management for OS deployment, software distribution, user-state migration, back-up and disaster recovery

Managing today’s increasingly heterogeneous enterprise environments of connected and mobile PCs poses major challenges for IT managers. Primary among them is the need to control the costs of setting up new PCs, migrating user desktop settings, and deploying OS and application upgrades and updates. By enabling the remote management of routine tasks such as PC deployment, cloning, changes in configuration settings, user migration, and backup and recovery of disk images, Symantec Ghost streamlines the configuration and management of networked PCs, thereby dramatically reducing IT costs.
10.2 Centralized Management and Remote Capabilities
With Symantec Ghost 8.0, administrators can deploy or restore an OS image or application onto a PC in minutes and then migrate individual user settings and profiles to customize the PC. Robust centralized management and remote capabilities boost IT productivity and help lower the total cost of ownership for networked PCs and workstations. From the Symantec Ghost console, IT managers can remotely clone any Windows NT or Windows 2000 workstation, and they can quickly deploy whole application packages or specific PC changes such as registry changes or desktop settings. Administrators can also migrate user “personalities” (including PC settings and data), remotely clone multiple workstations, and then quickly configure critical workstation data such as TCP/IP settings and machine, workgroup, and domain names, all from the Ghost central console.
10.3 Benefit From Several New PC Change-Management Capabilities
Several new features make the latest version of Symantec Ghost a more powerful, versatile, and compact PC change-management solution. User Migration allows administrators to remotely transfer user files, directories, and desktop and network settings between PCs. Incremental Backup enables the remote backup of only the most recent user changes. And AutoInstall Integration consolidates AutoInstall functionality within the central console, making the customization of software packages and the deployment of updates easy, while reducing the overall Symantec Ghost footprint.
10.4 Support Today’s Latest Technologies
Symantec Ghost supports Intel® Wired for Management and Pre-Boot eXecution (PXE) services —the standard industry guidelines for building advanced management capabilities into PCs. The all-new version also supports Microsoft’s System Preparation utility and is the only PC management tool that is Windows™ 2000 certified, making it the tool of choice for migrating to the latest operating system from Microsoft®.






10.5 Clone multiple target PCs using multicasting
The replication of a model workstation onto many computers can be a time-consuming task. One-to-one connections with a small number of computers are fast and efficient, but as the number of machines increases, the time for the overall replication task grows in proportion to the number of computers being cloned.
When Ghost is using a one-to-one approach for transferring information, each of the computer drives being replicated receives its own copy of information, and each of these copies needs to be passed through the same network channel. As the number of replications on the same network increases, the time for overall task completion increases due to multiple copies of information being sent through the common information channel.
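The scaling argument above can be made concrete with a toy calculation. The numbers below are illustrative only, not Ghost's actual throughput: with unicast, n separate copies of the image cross the shared channel, while with multicast a single copy serves every client.

```python
def total_transfer_time(image_mb, link_mbps, n_clients, multicast):
    """Rough time (seconds) to push one disk image to n clients over a
    shared link: unicast sends n separate copies through the same
    channel, multicast sends one copy that all clients receive."""
    seconds_per_copy = image_mb * 8 / link_mbps   # MB -> megabits / Mbps
    return seconds_per_copy if multicast else seconds_per_copy * n_clients

# 2000 MB image, 100 Mbps shared link, 25 workstations:
print(total_transfer_time(2000, 100, 25, multicast=False))  # 4000.0 s
print(total_transfer_time(2000, 100, 25, multicast=True))   # 160.0 s
```

The unicast figure grows linearly with the client count, which is exactly the bottleneck multicasting removes.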
Ghost Multicasting uses TCP/IP multicasting in conjunction with a reliable session protocol to provide one-to-many communication. Ghost Multicasting supports both Ethernet and Token Ring networks and clears away the bottleneck of having multiple copies of data passed through the network. Ghost multicasting includes support for the multicasting of disk images and partition images, as well as automatic multicast server session starting options and image file creation. A multicasting session consists of one server, a single image file, and a group of similar Ghost clients requiring the identical disk or partition image. The session name is used by Ghost clients to indicate the session they are to join and listen to.
Ghost Multicasting Client is built into the Ghost application software. Ghost operates in conjunction with the Ghost Multicast Server application to provide a fast and easy way of replicating workstations.




10.6 Typical usage examples
Ghost’s abilities to clone hard drives and partitions provide a flexible and powerful tool that can be used for anything from upgrading the hard drive in your PC at home, right through to managing organization-wide system configuration in large corporations.
10.7 Upgrade networked workstations
Your company has decided to upgrade from Windows NT to Windows 2000. You have 25 workstations to configure, and only a day to do it. With Ghost, you can create a model system with all of the necessary software installed (office software, web browser, etc.), and then save an image of the system to a network server.
Use Ghost to load the image onto the other machines over the network. If you are using Ghost Multicast Server, the process works as follows:
1. An image of the model system is saved onto the Multicast Server machine; Ghost Multicast Server receives the model image and creates an image file.
2. Ghost Multicast Server transmits the existing image file simultaneously to all listening Ghost machines.
3. The cloned systems are updated simultaneously using the image file sent by the Ghost Multicast Server.
In this way you can load multiple machines at once, dramatically reducing installation time and network traffic.

11.0 Unit Summary
In this session we have learnt:
1. TCP/IP Model
2. TCP/IP applications
3. OSI Model
4. Domains and Work Groups
5. VPN and VMware
6. General Hardware Concepts
11.1 Exercise
Answer the following briefly:
1. Define the purpose of:
• Gateways
• Routers
• Bridges
• DNS
• VMware Workstation

2. Explain the IP addressing process.
3. Explain the components of a VPN.
4. Explain the concept of domains.
