Coulouris et al. (2012) explain that client-server architecture refers to nodes running processes (servers) which accept requests from other nodes (clients) and respond appropriately. Tanenbaum (2014) identifies three levels of this architecture: the user interface (which handles user interaction), the processing level (which performs tasks between the other layers) and the data level (files or databases).
Microsoft (2006) recognizes that these three levels can run on different computers, allowing the user interface to operate on a simple client device whilst the main processing and database levels run on much more powerful servers. In contrast to client/server networking, Microsoft emphasises that Peer-to-Peer (P2P) networking utilizes the relative power of users' devices to provide both server and client services. Kshemkalyani and Singhal (2008) add that in the case of P2P networking all three levels can reside on the client systems.
Tanenbaum (2014) observes that a trend is developing in client-server architecture (also known as multitiered or vertical architecture) whereby most of the processing and data storage is moving away from the client and being handled at the server (see Fig. 1). Shelly and Rosenblatt (2012) add that clients using their own processing power are known as fat clients, whilst those utilising the server's processing are thin clients.
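The thin-client split described above can be sketched with a minimal socket exchange: the client holds only the user-interface level, while the processing and data levels live on the server. This is an illustrative sketch only; the "employee" record, port and protocol are invented for the example.

```python
import socket
import threading

# Server side: holds the data level (a lookup table) and the processing level.
srv = socket.socket()
srv.bind(("127.0.0.1", 9100))
srv.listen(1)

def server():
    conn, _ = srv.accept()
    name = conn.recv(1024).decode()                      # request from the client
    dept = {"alice": "accounting"}.get(name, "unknown")  # data level (invented record)
    conn.sendall(f"{name} works in {dept}".encode())     # processing level replies
    conn.close()

t = threading.Thread(target=server)
t.start()

# Thin client: only the user-interface level -- forward input, display the reply.
cli = socket.socket()
cli.connect(("127.0.0.1", 9100))
cli.sendall(b"alice")
reply = cli.recv(1024).decode()
print(reply)
cli.close()
t.join()
srv.close()
```

All the work happens on the server; the client could run on the simplest of devices, which is exactly the trend Tanenbaum describes.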
According to Coulouris et al. (2012), client/server technology provides access to resources such as files, web pages and other nodes from a single server, or a cluster, and they claim that this makes for simpler management. Microsoft (2006) argues that P2P networking can provide access to the same resources from any of the systems making up a P2P network, and suggests that a network of peers is easily scalable and more reliable than a centralised server. Microsoft (2006) explains that P2P networking enables and enhances real-time communications, collaboration, content distribution, distributed processing and improved internet technologies, and argues that well-designed P2P systems can be self-managing.
Coulouris et al. (2012, p.482) recognize that the goal of security is to restrict access to information to authorized users only. Tanenbaum (2014, p.379) identifies encryption, authentication, authorization and auditing as the four important security mechanisms which should be taken into account.
Of the two types of distributed system, Kilvington (2016) warns that P2P networks are more vulnerable to security breaches. Kilvington explains that P2P file-sharing systems such as Napster and Gnutella are designed to bypass firewalls, which makes it very easy for sensitive data to be leaked onto the internet; inexperienced users may accidentally share their entire hard drive, allowing unintended access to confidential data. Holme et al. (2011) argue that client-server systems can be configured to provide more restrictive access to users from a centrally managed database, and observe that this ensures the responsibility for the security of an individual client does not fall on each individual user. Thomas and McClean (2011) add that on client-server systems a network administrator puts in place network-wide security mechanisms to ensure that, in the event of a security breach on a single client, the whole of the network is not compromised.
Li (2007) argues that both P2P and client-server infrastructures are open to Distributed Denial of Service (DDoS) attacks. Because of its nature, a server in a client/server infrastructure can be attacked by a hacker using a massive number of connections to render the server inoperable. A P2P network can be made inoperable in a similar way by exploiting the querying nature of P2P networks to overload the network, creating a broadcast storm.
Coulouris et al. (2012) state that a name service allows discovery of various services on a distributed system, including, as Tanenbaum (2014) explains, PCs, printers and other nodes. Having a name service, Tanenbaum (2014) adds, can provide for transparency as it enables discovery even if a service's location changes.
There are various name services, each designed for specific requirements. Wiley (2003) states that Distributed Hash Tables (DHTs) are essential to robust P2P networks. Wiley explains that the basic design of a DHT starts with a circular doubly-linked list where each node in the list is a client on the network. Each node keeps a reference to the previous and next nodes (the double link) in the list, and arranging these nodes into a ring simplifies the process of referral. Stoica et al. (2003) observe that this basic design has been improved upon by systems such as Chord, which use a logarithmic function and a finger table to reduce the number of hops a search must take to find the correct node, increasing efficiency (see Fig. 2).
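The ring Wiley describes can be sketched in a few lines: node and key identifiers share one identifier space (a tiny 8-bit space here, purely for illustration), and a key is stored on its successor, the first node clockwise from the key's position. The node names and file name below are invented.

```python
import hashlib

RING = 2 ** 8  # tiny 8-bit identifier space, for illustration only

def ring_id(name: str) -> int:
    # Hash names and keys into the same identifier space
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % RING

nodes = sorted(ring_id(f"node-{i}") for i in range(8))

def successor(key: int) -> int:
    for n in nodes:      # walk clockwise round the ring...
        if n >= key:
            return n
    return nodes[0]      # ...wrapping past zero if necessary

key = ring_id("shared-file.txt")
print(f"key {key} is stored on node {successor(key)}")
```

This naive walk visits nodes one by one; Chord's finger table shortcuts it by jumping in power-of-two strides, which is where the logarithmic hop count comes from.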
A DHT is an example of a flat naming system, which Tanenbaum (2014) explains is good for machines but not very convenient for humans, as flat names do not generally support simple human-readable forms. In contrast, Tanenbaum argues that hierarchical naming systems such as DNS follow a more structured approach which makes names far easier for users to remember and use.
Liu and Albitz (1998) state that DNS is a distributed database which allows control of segments of the overall database to be exercised locally, yet still allows the data in each segment to be made available across the entire network using a client-server configuration. Liu and Albitz explain that the DNS database is essentially a large inverted tree with domain names as the branches (see Diagram 3). The top of the tree is referred to as the root, represented by a dot, and a full domain name is the sequence of labels read from a node up to the root: a web server on the BBC network named WWW would therefore have the full name WWW.BBC.CO.UK.
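The inverted tree Liu and Albitz describe can be sketched as a nested structure: each label of a domain name is one level of the tree, and resolution walks from the root down through UK, CO, BBC, WWW. The address below is a documentation-range placeholder, not the BBC's real address.

```python
# One branch of the inverted DNS tree, root at the top (invented address)
root = {"UK": {"CO": {"BBC": {"WWW": "203.0.113.7"}}}}

def resolve(fqdn: str) -> str:
    node = root
    for label in reversed(fqdn.upper().split(".")):  # read labels root-first
        node = node[label]
    return node

print(resolve("WWW.BBC.CO.UK"))
```

Reading the labels in reverse mirrors how a real resolver works outward from the root servers down to the authoritative server for the final label.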
Tanenbaum (2014) observes that this implementation of DNS can lead to swamping of the higher-level nodes; however, caching of the name-to-address mappings on lower-level nodes can avoid this.
Gummadi (2004) explains that DHTs provide for a whole new raft of applications due to their key benefit of being content-addressable, a big improvement on DNS, where users must remember or know which website or file server holds the content they require. Due to their use of O(log n) lookups, the scalability of DHTs is exceptional, and this has been a contributing factor in their use by large peer-to-peer networks such as Napster, Skype and BitTorrent. However, according to Coulouris et al. (2012), the efficiency of DNS is far superior: Coulouris et al. show that a DNS lookup takes on average 5 lookups per query, compared to a DHT user who may need up to 30. This efficiency is the result of the hierarchical structure of DNS.
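The O(log n) scaling claim is easy to make concrete: the number of hops a Chord-style lookup needs grows only with the base-2 logarithm of the network size. The network sizes below are illustrative, not measurements, but the largest row matches the "up to 30 lookups" figure quoted above.

```python
import math

# Hop count for a Chord-style DHT lookup at illustrative network sizes
for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} nodes -> about {math.ceil(math.log2(n))} hops")
```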
Gao (2005) explains that passing data across a network introduces many problems, especially if the systems are running different hardware or software. For instance, character encodings such as ASCII, UTF-8, EBCDIC or Unicode may differ, there may be different floating-point representations, or data may be stored differently in memory. Therefore, the systems that need to communicate must agree on the external representation of data.
The Standard C Foundation (2016) explains that serialization is the process of translating data structures into an agreed format that can be stored or transmitted over a network. It also involves the reconstruction, or deserialization, of that data according to the serialization format, by the same computer or by other computers on the network. According to Coulouris et al. (2012), an agreed standard for the representation of data structures is called an External Data Representation (XDR), as described by Sun Microsystems (1987) in RFC 1014. Coulouris et al. (2012) mention that serialization is also known as marshalling, whilst the process of reconstructing data is known as deserialization or unmarshalling.
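Marshalling and unmarshalling can be sketched using JSON as the agreed external representation; the record fields below are invented for the example.

```python
import json

record = {"name": "Ada", "logins": 3, "active": True}

wire = json.dumps(record)   # marshalling: data structure -> agreed text format
copy = json.loads(wire)     # unmarshalling on the receiving system

print(wire)
print(copy == record)       # the structure survives the round trip
```

The text form of `wire` is what actually crosses the network, so both ends can differ in hardware, byte order or language as long as they agree on the format.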
CORBA was originally designed by the Object Management Group (OMG) (2016), who explain that CORBA is used extensively across the internet, allowing applications on differing systems to work together due to its open, vendor-independent architecture. Gao (2005) explains that data structures and basic data items are described in its Interface Definition Language (IDL); however, CORBA's serialization method, CDR, does not pass the "type" of the data and instead relies on prior knowledge of the data structures (see Figure 4). In contrast, Gao states that both XML and JSON include information about the "type" of the data, which allows the recipient to construct it using a process called reflection.
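Gao's contrast can be illustrated with Python's `struct` and `json` modules: a CDR-style byte stream carries no type information, so the receiver must already know the layout, whereas a JSON message is self-describing. The field names and values are invented.

```python
import json
import struct

# CDR-style: the receiver must know in advance that the stream is a
# big-endian int32 followed by a 7-byte string -- the bytes alone don't say.
blob = struct.pack(">i7s", 42, b"example")
count, label = struct.unpack(">i7s", blob)   # prior knowledge of the layout

# JSON: the types travel with the data, so any receiver can rebuild it.
msg = json.loads('{"count": 42, "label": "example"}')
print(type(msg["count"]).__name__, type(msg["label"]).__name__)
```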
Miller (2003) identifies some of the problems of integrating computer systems as being complexity and cost. He explains that complexity increases as businesses begin integrating their legacy systems with newer systems, business partners, suppliers and customers, and that these complexities add further issues such as security and integration. Chavan et al. (2012) describe middleware as sitting between the operating system and the application programs, providing various programming and developer services whilst masking the differences in the underlying services. Chavan et al. recognize that middleware provides developers with a formalized method for how applications interoperate. Middleware also provides network communication services such as marshalling, explained in a previous chapter, and coordinates and monitors resources whilst providing scalability, security, heterogeneity and transparency.
Many legacy systems still in operation use an ageing middleware layer named CORBA, which according to Tanenbaum (2014) uses messaging as a means of handling communications in distributed systems. OMG (2016) advocate the use of open systems based on standard object-oriented interfaces which are built from heterogeneous hardware, networks, operating systems and programming languages. The aim of CORBA is to allow anything to communicate with anything. A main component of CORBA is the Object Request Broker (ORB), state Coulouris et al. (2012), and its role is to help a client run a program (known as a method) on another system. This involves locating the system, initialising the remote program, and then communicating the requests and the replies between the two systems (See Figure 5).
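This is not CORBA itself, but the ORB's job just described (locate a remote object, invoke one of its methods, and carry the request and reply between the two systems) can be sketched with Python's built-in XML-RPC standing in for the ORB. The method name and port are invented for the example.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Remote side: register a "method" that clients can invoke by name
server = SimpleXMLRPCServer(("127.0.0.1", 9200), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.handle_request, daemon=True).start()

# Client side: the proxy plays the broker's role, making the remote
# method look like a local call while marshalling the request and reply
broker = ServerProxy("http://127.0.0.1:9200")
result = broker.add(2, 3)
print(result)
server.server_close()
```

The proxy hides the location of the remote object from the caller, which is the same transparency an ORB aims to provide.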
On modern systems, IBM (2004) conclude that software developers faced with cost and time limitations must choose at least one middleware platform, Java EE or .NET, with which to deploy their applications. According to Oracle (2016), Java provides an ORB and Application Programming Interfaces to allow Java applications to communicate with CORBA applications. Heinzl (2015) states that this means an existing CORBA infrastructure can be integrated into a J2EE infrastructure; however, he argues that .NET can only communicate with CORBA using web services, and therefore an additional bridging solution must be utilised, creating new issues (See Figure 6).
According to Miller (2003), The Middleware Company (TMC) created a web application named "The Pet Store" in both J2EE and .NET in order to examine claims by Microsoft that .NET is superior in all measures of performance and scales much better as user numbers increase. King (2016) reports that this study found .NET superior in areas such as the smaller number of lines of code required, response times and users supported. However, King states that the Microsoft implementation was highly optimized whilst the Java implementation was implemented with "best practice" in mind. King (2016) argues that a new optimized J2EE version was able to improve performance by more than seventeen times, bringing it in line with that of the .NET version.