In the case of transport-layer switches, the switch accepts incoming TCP connection requests and hands off each connection to one of the servers [Hunt et al.; Pai et al.].
The principle of what is commonly known as TCP handoff is shown in Fig.

Figure: The principle of TCP handoff.

When the switch receives a TCP connection request, it identifies the best server for handling that request and forwards the request packet to that server.
The server, in turn, will send an acknowledgment back to the requesting client, inserting the switch's IP address in the source field of the header of the IP packet carrying the TCP segment. Note that this spoofing is necessary for the client to continue executing the TCP protocol: it is expecting an answer back from the switch, not from some arbitrary server it has never heard of before.
Clearly, a TCP-handoff implementation requires operating-system-level modifications. It can already be seen that the switch plays an important role in distributing the load among the various servers: by deciding where to forward a request, the switch also decides which server is to handle its further processing. The simplest load-balancing policy the switch can follow is round robin: each time, it picks the next server from its list. More advanced server-selection criteria can be deployed as well.
For example, assume multiple services are offered by the server cluster. If the switch can distinguish those services when a request comes in, it can make an informed decision about where to forward the request. This server selection can still take place at the transport level, provided services are distinguished by means of a port number.
One step further is to have the switch actually inspect the payload of the incoming request. This method can be applied only if it is known what that payload looks like. For example, in the case of Web servers, the switch can expect an HTTP request, based on which it can then decide which server should process it. We will return to such content-aware request distribution when we discuss Web-based systems in Chap.
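As an illustration of both policies, here is a minimal dispatch sketch. The server names and the image-serving rule are invented for the example; a real transport-layer switch implements this in the network stack or in hardware, not in application code:

```python
# Minimal sketch of the two server-selection policies described above:
# plain round robin, plus a content-aware variant that peeks at the
# request line of an incoming HTTP request. All names are made up.
from itertools import cycle

servers = cycle(["s1", "s2", "s3"])          # round-robin order

def round_robin() -> str:
    return next(servers)

image_servers = cycle(["img1", "img2"])      # hypothetical specialized servers

def content_aware(first_bytes: bytes) -> str:
    # Send requests for images to dedicated servers, everything else
    # round robin across the general-purpose servers.
    request_line = first_bytes.split(b"\r\n", 1)[0]
    if b"GET /images/" in request_line:
        return next(image_servers)
    return round_robin()

print(content_aware(b"GET /images/logo.png HTTP/1.1\r\nHost: x\r\n\r\n"))
```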
The server clusters discussed so far are generally rather statically configured. In these clusters, there is often a separate administration machine that keeps track of available servers and passes this information to other machines as appropriate, such as the switch.
As we mentioned, most server clusters offer a single access point. When that point fails, the cluster becomes unavailable. To eliminate this potential problem, several access points can be provided, of which the addresses are made publicly available. This approach still requires clients to make several attempts if one of the addresses fails. Moreover, this does not solve the problem of requiring static access points.
Having stability, like a long-lived access point, is a desirable feature from both a client's and a server's perspective. On the other hand, it is also desirable to have a high degree of flexibility in configuring a server cluster, including the switch. This observation has led to the design of a distributed server, which is effectively nothing but a possibly dynamically changing set of machines, with possibly varying access points, but which nevertheless appears to the outside world as a single, powerful machine.
The design of such a distributed server is given in [Szymaniak et al.]; we describe it briefly here. The basic idea behind a distributed server is that clients benefit from a robust, high-performing, stable server. These properties can often be provided by high-end mainframes, some of which have a claimed mean time between failures of more than 40 years.
However, by grouping simpler machines transparently into a cluster, and not relying on the availability of a single machine, it may be possible to achieve a better degree of stability than each component can provide individually. For example, such a cluster could be dynamically configured from end-user machines, as in the case of a collaborative distributed system.
Let us concentrate on how a stable access point can be achieved in such a system. The main idea is to make use of available networking services, notably mobility support for IP version 6 (MIPv6).
In MIPv6, a mobile node is assumed to have a home network where it normally resides, and for which it has an associated stable address known as its home address (HoA). This home network has a special router attached, known as the home agent, which takes care of traffic to the mobile node when it is away. To this end, when a mobile node attaches to a foreign network, it receives a temporary care-of address (CoA) where it can be reached.
This care-of address is reported to the node's home agent, which then sees to it that all traffic is forwarded to the mobile node. Note that applications communicating with the mobile node will only see the address associated with the node's home network; they will never see the care-of address.
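A toy model of the home agent's bookkeeping may help; the class and the IPv6 addresses below are illustrative, not a real MIPv6 implementation:

```python
# Toy model of a home agent: it keeps a binding from a mobile node's
# home address (HoA) to its current care-of address (CoA) and tunnels
# traffic accordingly.
class HomeAgent:
    def __init__(self) -> None:
        self.bindings: dict[str, str] = {}   # HoA -> current CoA

    def register(self, hoa: str, coa: str) -> None:
        # Called when the mobile node reports a (new) care-of address.
        self.bindings[hoa] = coa

    def forward(self, hoa: str, packet: bytes) -> tuple[str, bytes]:
        # Tunnel a packet addressed to the home address on to the CoA.
        return self.bindings[hoa], packet

agent = HomeAgent()
agent.register("2001:db8::1", "2001:db8:beef::7")   # made-up addresses
print(agent.forward("2001:db8::1", b"hello"))
```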
This principle can be used to offer a stable address for a distributed server. In this case, a single unique contact address is initially assigned to the server cluster.
The contact address will be the server's lifetime address, to be used in all communication with the outside world. At any time, one node in the distributed server will operate as an access point using that contact address, but this role can easily be taken over by another node.
What happens is that the access point registers its own address as the care-of address with the home agent associated with the distributed server. At that point, all traffic will be directed to the access point, which then takes care of distributing requests among the currently participating nodes.
If the access point fails, a simple fail-over mechanism kicks in, by which another access point reports a new care-of address. This simple configuration, however, would make the home agent as well as the access point a potential bottleneck, as all traffic would flow through these two machines. This situation can be avoided by using an MIPv6 feature known as route optimization, which works as follows.
Whenever a mobile node with home address HA reports its current care-of address, say CA, the home agent can forward CA to a client. The latter will then locally store the pair (HA, CA). From that moment on, communication will be directly forwarded to CA. Although the application at the client side can still use the home address, the underlying MIPv6 support software will translate that address to CA and use it instead. Route optimization can be used to make different clients believe they are communicating with a single server where, in fact, each client is communicating with a different member node of the distributed server, as shown in Fig.
To this end, when an access point of a distributed server forwards a request from client C1 to, say, node S1 with address CA1, it passes enough information to S1 to let it initiate the route optimization procedure, by which the client is eventually made to believe that the care-of address is CA1. During this procedure, the access point as well as the home agent tunnel most of the traffic between C1 and S1.
This prevents the home agent from believing that the care-of address has changed, so that it will continue to communicate with the access point.

Figure: Route optimization in a distributed server.

Of course, while this route optimization procedure is taking place, requests from other clients may still come in.
These remain in a pending state at the access point until they can be forwarded. As a result, different clients will be directly communicating with different members of the distributed server, where each client application still has the illusion that this server has address HA.
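A toy illustration of the end result, assuming made-up IPv6 addresses: both clients address the same home address HA, but their local MIPv6 layers have cached different care-of addresses, so their traffic goes to different member nodes.

```python
# Each client's MIPv6 layer caches its own (HA -> CA) binding, so the
# same home address resolves to a different member node per client.
HA = "2001:db8::1"                     # the distributed server's contact address

binding_cache = {
    "C1": {HA: "2001:db8::a1"},        # C1 was handed off to member S1
    "C2": {HA: "2001:db8::b2"},        # C2 was handed off to member S2
}

def destination(client: str, addr: str) -> str:
    # Without a binding, fall back to the home address (traffic then
    # travels via the home agent).
    return binding_cache[client].get(addr, addr)

print(destination("C1", HA))           # 2001:db8::a1
print(destination("C2", HA))           # 2001:db8::b2
```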
Meanwhile, the home agent continues to communicate with the access point using the contact address.

Managing Server Clusters

A server cluster should appear to the outside world as a single computer, as is indeed often the case. However, when it comes to managing a cluster, the situation changes dramatically.
Several attempts have been made to ease the management of server clusters, as we discuss next. By far the most common approach is to extend the traditional management functions of a single computer to that of a cluster. In its most primitive form, this means that an administrator can log into a node from a remote client and execute local management commands to monitor, install, and change components.
Somewhat more advanced is to hide the fact that you need to log into a node, and instead provide an interface at an administration machine that allows an administrator to collect information from one or more servers, upgrade components, add and remove nodes, and so on. The main advantage of the latter approach is that collective operations, which operate on a group of servers, can be provided more easily.
This type of server-cluster management is widely applied in practice, exemplified by management software such as Cluster Systems Management from IBM [Hochstetler and Beringer]. However, as soon as clusters grow beyond several tens of nodes, this type of management is not the way to go. Many data centers need to manage thousands of servers, organized into many clusters but all operating collaboratively.
Doing this by means of centralized administration servers is simply out of the question. Moreover, it can easily be seen that very large clusters need continuous repair management, including upgrades. To simplify matters, if p is the probability that a server is currently faulty, and we assume that faults are independent, then the probability that a cluster of N servers operates without a single faulty server is (1 - p)^N.
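To get a feel for the numbers, here is a quick check with an assumed per-server fault probability of p = 0.001 (an illustrative value, not one from the text):

```python
# Probability that none of N servers is faulty, assuming independent
# faults and a per-server fault probability p.
p = 0.001
for n in (100, 1_000, 10_000):
    print(f"N={n:>6}: P(no faults) = {(1 - p) ** n:.4f}")
# N=   100: P(no faults) = 0.9048
# N=  1000: P(no faults) = 0.3677
# N= 10000: P(no faults) = 0.0000
```

Even with highly reliable servers, a sufficiently large cluster is thus almost never entirely fault-free, which is why continuous repair must be treated as a normal operating mode.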
As it turns out, support for very large server clusters is almost always ad hoc. There are various rules of thumb that should be considered [Brewer], but there is no systematic approach to dealing with massive systems management. Cluster management is still very much in its infancy, although it can be expected that the self-managing solutions discussed in the previous chapter will eventually find their way into this area, once more experience with them has been gained.
Let us now take a closer look at a somewhat unusual cluster server. PlanetLab is a collaborative distributed system in which different organizations each donate one or more computers, adding up to a total of hundreds of nodes. Together, these computers form a 1-tier server cluster, where access, processing, and storage can all take place on each node individually.
Management of PlanetLab is by necessity almost entirely distributed. Before we explain its basic principles, let us first describe its main architectural features [Peterson et al.]. In PlanetLab, an organization donates one or more nodes, where each node is most easily thought of as just a single computer, although it could also itself be a cluster of machines.
Each node is organized as shown in Fig. There are two important components [Bavier et al.]. The first is the virtual machine monitor (VMM), which is an enhanced Linux operating system. The enhancements mainly comprise adjustments for supporting the second component, namely vservers. A Linux vserver can best be thought of as a separate environment in which a group of processes runs. Processes from different vservers are completely independent: they cannot directly share any resources such as files, main memory, and network connections, as is normally the case for processes running on top of an operating system.
Instead, a vserver provides an environment consisting of its own collection of software packages, programs, and networking facilities. For example, one vserver may provide an environment in which processes see only an old Python release, while another vserver may support the latest versions of Python and httpd. In this sense, calling a vserver a "server" is a bit of a misnomer, as it really only isolates groups of processes from each other.
We return to vservers briefly below.

Figure: The basic organization of a PlanetLab node.

The Linux VMM ensures that vservers are separated: processes in different vservers are executed concurrently and independently, each making use only of the software packages and programs available in its own environment.
The isolation between processes in different vservers is strict. For example, two processes in different vservers may have the same user ID, but this does not imply that they stem from the same user. This separation considerably eases supporting users from different organizations that want to use PlanetLab, for example, as a testbed for experimenting with completely different distributed systems and applications. To support such experimentation, PlanetLab introduces the notion of a slice, which is a set of vservers, each running on a different node.
A slice can thus be thought of as a virtual server cluster, implemented by means of a collection of virtual machines.
The virtual machines in PlanetLab run on top of the Linux operating system, which has been extended with a number of kernel modules. There are several issues that make management of PlanetLab a special problem. Three salient ones are:

1. Nodes belong to different organizations. Each organization should be allowed to specify who may run applications on its nodes, and to restrict resource usage appropriately.

2. There are various monitoring tools available, but they all assume a very specific combination of hardware and software. Moreover, they are all tailored to be used within a single organization.

3. Programs from different slices but running on the same node should not interfere with each other. This problem is similar to process independence in operating systems.

Let us take a look at each of these issues in more detail. Central to managing PlanetLab resources is the node manager. Each node has such a manager, implemented by means of a separate vserver, whose only task is to create other vservers on the node it manages and to control resource allocation.
The node manager does not make any policy decisions; it is merely a mechanism to provide the essential ingredients to get a program running on a given node. Keeping track of resources is done by means of a resource specification, or rspec for short. An rspec specifies a time interval during which certain resources have been allocated.
Resources include disk space, file descriptors, inbound and outbound network bandwidth, transport-level end points, main memory, and CPU usage.
An rspec is identified through a globally unique identifier known as a resource capability (rcap). Given an rcap, the node manager can look up the associated rspec in a local table.
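As an illustration, here is a minimal sketch of this bookkeeping; the field names and the capability format are assumptions for the example, not PlanetLab's actual data structures:

```python
# Sketch of a node manager's rcap -> rspec table.
import secrets
from dataclasses import dataclass

@dataclass
class RSpec:
    start: float      # beginning of the allocation interval (epoch seconds)
    end: float        # end of the allocation interval
    disk_mb: int
    memory_mb: int
    net_kbps: int

class NodeManager:
    def __init__(self) -> None:
        self._table: dict[str, RSpec] = {}   # kept strictly local to this node

    def allocate(self, rspec: RSpec) -> str:
        rcap = secrets.token_hex(16)         # stands in for a unique capability
        self._table[rcap] = rspec
        return rcap

    def lookup(self, rcap: str) -> RSpec:
        return self._table[rcap]             # KeyError for unknown capabilities
```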
Resources are bound to slices. In other words, in order to make use of resources, it is necessary to create a slice. Each slice is associated with a service provider, which can best be seen as an entity having an account on PlanetLab.
To create a new slice, each node runs a slice creation service (SCS), which, in turn, can contact the node manager, requesting it to create a vserver and to allocate resources. The node manager itself cannot be contacted directly over a network, allowing it to concentrate only on local resource management. In turn, the SCS will not accept slice-creation requests from just anybody: only specific slice authorities are eligible to request the creation of a slice.
Each slice authority has access rights to a collection of nodes. The simplest model is that there is only a single slice authority that is allowed to request slice creation on all nodes. To complete the picture, a service provider contacts a slice authority and requests it to create a slice across a collection of nodes. The service provider will be known to the slice authority, for example because it has previously been authenticated and subsequently registered as a PlanetLab user.
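The chain of responsibility can be sketched as follows; all class names, the authority identifier, and the slice name are invented for illustration and do not reflect PlanetLab's real interfaces:

```python
# Hedged sketch: a per-node SCS accepts requests only from known slice
# authorities and asks the (locally reachable only) node manager to
# create a vserver and allocate resources.
import secrets

class NodeManagerStub:
    def create_vserver(self, slice_name: str, rspec: dict) -> str:
        # Stand-in for local vserver creation plus resource allocation.
        return secrets.token_hex(16)          # the rcap for the allocation

class SliceCreationService:
    def __init__(self, node_manager: NodeManagerStub, authorities: set[str]):
        self.nm = node_manager
        self.trusted = authorities            # eligible slice authorities

    def create_slice(self, authority: str, slice_name: str, rspec: dict) -> str:
        if authority not in self.trusted:
            raise PermissionError("not a registered slice authority")
        return self.nm.create_vserver(slice_name, rspec)

scs = SliceCreationService(NodeManagerStub(), {"central-authority"})
print(scs.create_slice("central-authority", "demo_slice", {"memory_mb": 256}))
```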
In practice, PlanetLab users contact a slice authority by means of a Web-based service; further details can be found in [Chun and Spalink]. What this procedure reveals is that managing PlanetLab is done through intermediaries. One important class of such intermediaries is formed by slice authorities.
Such authorities have obtained credentials at nodes to create slices. Obtaining these credentials is done out-of-band, essentially by contacting system administrators at various sites.
Obviously, this is a time-consuming process, and not one that should be carried out by end users or, in PlanetLab terminology, service providers. Besides slice authorities, there are also management authorities. Where a slice authority concentrates only on managing slices, a management authority is responsible for keeping an eye on nodes.
In particular, it ensures that the nodes under its regime run the basic PlanetLab software and abide by the rules set out by PlanetLab. Service providers trust that a management authority provides nodes that will behave properly. This organization leads to the management structure shown in Fig.
Figure: The management relationships between various PlanetLab entities.

These relationships cover the problem of delegating nodes in a controlled way, such that a node owner can rely on decent and secure management.
The second issue that needs to be handled is monitoring. What is needed is a unified approach to allow users to see how well their programs are behaving within a specific slice. PlanetLab follows a simple approach. Every node is equipped with a collection of sensors, each sensor being capable of reporting information such as CPU usage, disk activity, and so on. Sensors can be arbitrarily complex, but the important issue is that they always report information on a per-node basis.
This information is made available by means of a Web server: every sensor is accessible through simple HTTP requests [Bavier et al.]. Admittedly, this approach to monitoring is still rather primitive, but it should be seen as a basis for more advanced monitoring schemes.
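In this spirit, polling a sensor could look roughly as follows; the node name, URL layout, and response format are assumptions for illustration, not the actual PlanetLab sensor API:

```python
# Sketch of reading a per-node sensor over HTTP.
import urllib.request

def read_sensor(node: str, sensor: str) -> str:
    url = f"http://{node}/sensors/{sensor}"   # hypothetical endpoint
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read().decode()

# read_sensor("node1.example.org", "cpu") might return e.g. "load: 0.42"
```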
For example, there is, in principle, no reason why Astrolabe, which we discussed in an earlier chapter, could not be used for aggregating sensor information across multiple nodes. Finally, to come to our third management issue, namely the protection of programs against each other, PlanetLab uses Linux virtual servers (called vservers) to isolate slices. As mentioned, the main idea of a vserver is to run applications in their own environment, which includes all files that are normally shared across a single machine.
Such a separation can be achieved relatively easily by means of the UNIX chroot command, which effectively changes the root of the file system from which applications look for files; only the superuser can execute chroot. Of course, more is needed: Linux virtual servers separate not only the file system, but also normally shared information on processes, network addresses, memory usage, and so on.
As a consequence, a physical machine is actually partitioned into multiple units, each corresponding to a full-fledged Linux environment, isolated from the other parts. An overview of Linux virtual servers can be found in [Potzl et al.].
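To make the chroot building block concrete, here is a minimal sketch; the jail path is hypothetical, and this covers only the file-system part of the isolation just described:

```python
# Minimal sketch of the chroot idea behind vserver file-system isolation.
# Must be run as the superuser; /srv/jail is an illustrative path assumed
# to contain a complete environment.
import os

def enter_jail(new_root: str) -> None:
    os.chroot(new_root)    # the file-system root becomes new_root
    os.chdir("/")          # ensure the working directory is inside it

# enter_jail("/srv/jail")
# From here on, "/etc/passwd" resolves to /srv/jail/etc/passwd for this
# process -- but processes, memory, and network state are still shared,
# which is exactly what vservers additionally isolate.
```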
So far, we have been mainly concerned with distributed systems in which communication is limited to passing data. However, there are situations in which passing programs, sometimes even while they are being executed, simplifies the design of a distributed system.
In this section, we take a detailed look at what code migration actually is. We start by considering different approaches to code migration, followed by a discussion on how to deal with the local resources that a migrating program uses. A particularly hard problem is migrating code in heterogeneous systems, which is also discussed.
Approaches to Code Migration

Before taking a look at the different forms of code migration, let us first consider why it may be useful to migrate code. Traditionally, code migration in distributed systems took place in the form of process migration, in which an entire process was moved from one machine to another [Milojicic et al.]. Moving a running process to a different machine is a costly and intricate task, and there had better be a good reason for doing so. That reason has always been performance.
The basic idea is that overall system performance can be improved if processes are moved from heavily loaded to lightly loaded machines. Load is often expressed in terms of the CPU queue length or CPU utilization, but other performance indicators are used as well.
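A toy decision rule of this kind, with invented load figures and threshold, might look like this:

```python
# Migrate work from the most to the least loaded machine once the
# imbalance exceeds a threshold. Loads and threshold are made up.
loads = {"m1": 0.92, "m2": 0.35, "m3": 0.50}    # e.g. CPU utilization

THRESHOLD = 0.4

def migration_candidate(loads: dict[str, float]):
    src = max(loads, key=loads.get)             # most heavily loaded
    dst = min(loads, key=loads.get)             # most lightly loaded
    return (src, dst) if loads[src] - loads[dst] > THRESHOLD else None

print(migration_candidate(loads))               # ('m1', 'm2')
```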
Load distribution algorithms, by which decisions are made concerning the allocation and redistribution of tasks with respect to a set of processors, play an important role in compute-intensive systems. However, in many modern distributed systems, optimizing computing capacity is less of an issue than, for example, trying to minimize communication. Moreover, due to the heterogeneity of the underlying platforms and computer networks, performance improvement through code migration is often based on qualitative reasoning instead of mathematical models.
Consider, as an example, a client-server system in which the server manages a huge database. If a client application needs to perform many database operations involving large quantities of data, it may be better to ship part of the client application to the server and send only the results across the network. Otherwise, the network may be swamped with the transfer of data from the server to the client.
In this case, code migration is based on the assumption that it generally makes sense to process data close to where those data reside. This same reasoning can be used for migrating parts of the server to the client. For example, in many interactive database applications, clients need to fill in forms that are subsequently translated into a series of database operations. Processing the form at the client side, and sending only the completed form to the server, can sometimes avoid a relatively large number of small messages crossing the network.
The result is that the client perceives better performance, while at the same time the server spends less time on form processing and communication. Support for code migration can also help improve performance by exploiting parallelism, but without the usual intricacies related to parallel programming.
A typical example is searching for information in the Web. It is relatively simple to implement a search query in the form of a small mobile program, called a mobile agent, that moves from site to site. By making several copies of such a program and sending each off to a different site, we may be able to achieve a linear speedup compared to using just a single program instance.
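The fan-out effect can be mimicked with ordinary parallel requests; the sketch below shows only the speedup mechanism, since a true mobile agent would ship the search code to each site rather than calling a local placeholder. Site names are made up:

```python
# Run copies of the same query against several sites in parallel.
from concurrent.futures import ThreadPoolExecutor

sites = ["site-a.example", "site-b.example", "site-c.example"]

def search(site: str, query: str) -> str:
    # Placeholder for work that, in the agent model, runs *at* the site.
    return f"results for {query!r} from {site}"

with ThreadPoolExecutor() as pool:
    for result in pool.map(lambda s: search(s, "code migration"), sites):
        print(result)
```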
Besides improving performance, there are other reasons for supporting code migration as well. The most important one is that of flexibility. The traditional approach to building distributed applications is to partition the application into different parts and to decide in advance where each part should be executed.
This approach, for example, has led to the different multitiered client-server applications discussed in Chap. However, if code can move between different machines, it becomes possible to dynamically configure distributed systems. For example, suppose a server implements a standardized interface to a file system.
To allow remote clients to access the file system, the server makes use of a proprietary protocol. Normally, the client-side implementation of the file system interface, which is based on that protocol, would need to be linked with the client application. This approach requires that the software be readily available to the client at the time the client application is being developed.
An alternative is to let the server provide the client's implementation no sooner than is strictly necessary, that is, when the client binds to the server. At that point, the client dynamically downloads the implementation, goes through the necessary initialization steps, and subsequently invokes the server. This principle is shown in Fig. This model of dynamically moving code from a remote site does require that the protocol for downloading and initializing code be standardized.
Also, it is necessary that the downloaded code can be executed on the client's machine. Different solutions are discussed below and in later chapters.

Figure: The principle of dynamically configuring a client to communicate with a server; the client first fetches the necessary software and then invokes the server.

The important advantage of this model of dynamically downloading client-side software is that clients need not have all the software preinstalled to talk to servers.
Instead, the software can be moved in as necessary, and likewise discarded when no longer needed. Another advantage is that as long as interfaces are standardized, we can change the client-server protocol and its implementation as often as we like. Changes will not affect existing client applications that rely on the server. There are, of course, also disadvantages. The most serious one, which we discuss in a later chapter, concerns security. Blindly trusting that the downloaded code implements only the advertised interface while accessing your unprotected hard disk, and does not send the juiciest parts to heaven-knows-who, may not always be such a good idea.
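A minimal sketch of the bind-time download described above, assuming a hypothetical stub.py served over HTTP; note that it blindly trusts the fetched code, which is precisely the risk just described, so a real system must also authenticate and sandbox what it loads:

```python
# Hedged sketch of bind-time code download: fetch a client-side stub
# from the server and load it before invoking the service. The URL,
# file name, and stub module are assumptions for illustration.
import importlib.util
import urllib.request

def bind(server: str):
    code = urllib.request.urlopen(f"http://{server}/stub.py").read()
    path = "/tmp/stub.py"
    with open(path, "wb") as f:
        f.write(code)
    spec = importlib.util.spec_from_file_location("stub", path)
    stub = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(stub)   # the stub's initialization runs here
    return stub                     # the client then calls, e.g., stub.invoke()
```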
Although code migration suggests that we move only code between machines, the term actually covers a much richer area: besides the code itself, the state of an executing program and the resources it uses may have to be moved as well.