
Friday, March 29, 2019

Cluster Computing: History, Applications and Benefits

Abstract

This report gives a detailed overview of cluster computing. We look at the birth of cluster computing up to the present day, and at the future direction in which the technology is headed. After the literature review, we move on to an explanation of the theories involved, from the authors' point of view. The final section of the report covers the current trends and the likely future evolution of the technology, again as perceived by the authors. The aim of the report is a better understanding of cluster computing and its numerous uses in today's world.

Introduction

A computer cluster consists of a set of loosely or tightly connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks (LANs), with each node (a computer used as a server) running its own instance of an operating system. Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing. Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability. [1]

Computer clusters have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world, such as IBM's Sequoia. [2]

Literature review

In 1967 a paper published by Gene Amdahl of IBM formally laid the foundation of cluster computing as a way of doing parallel work. It is now known as Amdahl's Law. It models the expected speedup of a parallelized implementation of an algorithm relative to the serial algorithm, under the assumption that the problem size remains the same. [3]
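To make the law concrete, here is a minimal Python sketch (the 95% parallel fraction and the processor counts below are invented for illustration, not figures from the report):

    def amdahl_speedup(parallel_fraction, n_processors):
        """Expected speedup under Amdahl's Law: the serial part of the
        program limits how much adding processors can help."""
        serial_fraction = 1.0 - parallel_fraction
        return 1.0 / (serial_fraction + parallel_fraction / n_processors)

    # A program that is 95% parallelizable can never exceed a 20x speedup,
    # no matter how many cluster nodes are thrown at it.
    for n in (4, 16, 64, 1024):
        print(n, round(amdahl_speedup(0.95, n), 2))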
Types of Clusters

Computer clusters are used in many organizations to speed up processing and to shorten data storage and retrieval times. Clusters can be classified into three main types, and these types can be mixed to achieve higher performance or reliability:

High performance clusters
High availability clusters
Load balancing clusters

High Performance Cluster

High performance clusters, sometimes referred to as HPC clusters, are used for computation-intensive applications, rather than for IO-oriented applications such as web services or databases. [4] Examples of HPC uses include computational simulations of vehicle crashes or of the weather. Very tightly coupled computer clusters are designed for work that may approach supercomputing; the world's fastest machine in 2011, the K computer, has a distributed-memory cluster architecture. [5]

High Availability Cluster

High availability clusters are commonly known as failover clusters. They are used to improve the availability of the services the cluster provides. In high availability clusters, redundant nodes are used which take over in case of component failure; this eliminates single points of failure. [6] High availability clusters are often used for critical databases, file sharing on a network, business applications, and customer services such as electronic commerce websites.

Load Balancing Cluster

Load balancing clusters, as the name suggests, are cluster configurations where the computational workload is shared between the nodes for better overall performance. One of the best examples of a load balancing cluster is a web server cluster, which may use a round-robin method to assign each new request to a different node for an overall increase in throughput. [7]
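The round-robin idea fits in a few lines of Python. This is a simplified sketch with hypothetical node names; a production load balancer would also track node health and remove failed nodes from the rotation:

    from itertools import cycle

    # Hypothetical backend nodes of a small web server cluster.
    nodes = cycle(["node-1", "node-2", "node-3"])

    def next_node():
        """Round robin: each call returns the next node in the rotation."""
        return next(nodes)

    for request_id in range(6):
        print(f"request {request_id} -> {next_node()}")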
Benefits of Clusters

There are numerous advantages to using cluster computing. Some of these are detailed below.

Cost
Clustering is cost-effective, in terms of the amount of computing power delivered, compared to other techniques: it uses off-the-shelf hardware and software components, whereas mainframe computers use custom-built, proprietary hardware and software.

Processing speed
In a cluster, multiple computers work together to provide unified processing, which in turn provides faster processing.

Flexibility
In contrast to a mainframe computer, a computer cluster can be upgraded to a higher specification or expanded by adding extra nodes.

Higher availability
A single component failure is mitigated by redundant machines taking over the processing without interruption. This type of redundancy is lacking in mainframe systems.

Cluster Management

Message passing and communication
The two most commonly used approaches for cluster communication are PVM and MPI.

PVM stands for Parallel Virtual Machine. It was developed around 1989 at Oak Ridge National Laboratory. It is installed directly on every node and provides a set of libraries that turn the node into a parallel virtual machine. It provides a run-time environment for resource and task management, fault notification and message passing. User programs written in C, C++ or Fortran can use PVM. [8][9]

MPI stands for Message Passing Interface. It emerged in the 1990s and supersedes PVM. The MPI design drew on various commercially available systems of the time, and implementations typically use TCP/IP and socket connections. It is currently the most widely used communication system enabling parallel programming in C, Fortran, Python and other languages. [9][10]
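To give a flavor of MPI programming, here is a minimal sketch in Python using the mpi4py binding (this assumes an MPI implementation and mpi4py are installed; it would be launched with something like mpiexec -n 2 python hello_mpi.py):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD      # communicator spanning all launched processes
    rank = comm.Get_rank()     # this process's id within the communicator

    if rank == 0:
        # Rank 0 sends a Python object to rank 1.
        comm.send({"greeting": "hello from rank 0"}, dest=1, tag=11)
    elif rank == 1:
        msg = comm.recv(source=0, tag=11)
        print("rank 1 received:", msg)

The same send/receive pattern, written against the C or Fortran MPI bindings, underlies most cluster-parallel programs.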
Task scheduling

Task scheduling becomes a challenge when a large multi-user cluster needs access to huge amounts of data. In a heterogeneous CPU-GPU cluster, mapping tasks onto CPU cores and GPU devices is far from trivial, because the application environment is complex and the performance of each job depends on the abilities of the underlying technologies. Task scheduling is an active area of ongoing research, and there have been proposals for algorithms that combine and extend MapReduce and Hadoop. [11]
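For readers unfamiliar with the model those proposals build on, here is a toy, single-process sketch of the MapReduce idea in Python (real MapReduce/Hadoop distributes the map and reduce phases across cluster nodes and handles failures; none of that is shown here):

    from collections import defaultdict

    def map_phase(document):
        """Map: emit a (word, 1) pair for every word in the input."""
        for word in document.split():
            yield word.lower(), 1

    def reduce_phase(pairs):
        """Reduce: sum the counts emitted for each distinct word."""
        counts = defaultdict(int)
        for word, count in pairs:
            counts[word] += count
        return dict(counts)

    documents = ["the cluster runs jobs", "the cluster shares the load"]
    pairs = [p for doc in documents for p in map_phase(doc)]
    print(reduce_phase(pairs))   # {'the': 3, 'cluster': 2, ...}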
Node failure management

Node failure management is a set of techniques for handling a failed node in a cluster, using strategies such as fencing. Fencing isolates a node or a shared resource when a malfunction is detected. There are two types of fencing: the first disables the node itself, while the second prevents access to resources such as shared disks. [12]

The first method uses STONITH, which stands for Shoot The Other Node In The Head. This method disables or powers off the malfunctioning node; for example, power fencing uses a power controller to turn off the faulty node. [12]

The second method uses the resource fencing approach, which prevents access to resources rather than turning off the node. For example, fibre channel fencing can be used to disable the fibre channel port. [12]

Muhammad Khurram Shehzad (1137395)

Trends

The demand for powerful computers that can be used for simulation and prediction is of great interest to both the public and the private sector. The last decade was among the most exciting periods in computer development. As a consequence of Moore's law, microprocessors have become smaller, denser, and more powerful. The result is that microprocessor-based supercomputing is rapidly becoming the technology of choice for attacking some of the most important problems of science and engineering.

A recent report from Intersect360 Research highlighted some interesting trends in HPC. Below are a few of the highlights. [13]

More Memory with Multi-core
While memory usage per core was nearly constant in years past, the broader adoption of multi-core systems is creating demand for more memory. This is to be expected, but the report also warns of additional system costs as the need for more memory rises. [13]

Processors per Node
According to the report's analysis, two processors per node is still the preferred configuration, with 60% of the market, while just 14% opt for four-processor nodes. These ratios have stayed virtually the same over the past five years. [13]

Future Outlook
High Performance Computing (HPC) is expected to increase the scale at which Big Data is analyzed to address a variety of scientific, environmental and social challenges, especially at the very large and very small scales. Orders of magnitude more powerful than laptop computers, HPC systems process information in parallel, allowing many computations to occur simultaneously. These integrated machines are measured in flops, which stands for floating point operations per second. [14] As of June 2013, Tianhe-2 (or Milky Way-2), a supercomputer developed by China's National University of Defense Technology, is the world's fastest system, with a performance of 33.86 petaflop/s. [15]

HPC is expected to move into exascale capability by 2020, providing computing capacities 50 times greater than today's most advanced supercomputers. Exascale feasibility rests on the rise of energy-efficient technology: the processing power exists, but the energy to run it, and to cool it, does not. Currently the American supercomputer MIRA, [16] while not the fastest, is the most energy efficient, thanks to circulating water-chilled air around the processors inside the machine rather than simply using fans.

Applications of the technology

High-performance computing (HPC) is a broad term that in essence denotes compute-intensive applications that need acceleration. Users of application acceleration systems range from medical imaging, financial trading, and oil and gas exploration to bioscience, data warehousing, and data security, among many others. In the information age, the need to accelerate data processing is growing exponentially, and the markets deploying HPC for their applications are growing every day. The HPC expansion is being fueled by the coprocessor, which is fundamental to the future of HPC.

Samra Mohammad (1137784)

8 Future trends, outlook and applications of cluster computing

Computers play an important role in the information age, and different countries have undertaken thorough studies of computing to improve their information level. We may observe some current trends and speculate a bit about the future of parallel programming models. As far as we can foresee today, the future of computing is parallel computing, dictated by physical and technical necessity.

8.1 New trends in cluster computing
These days there is a new computing paradigm built on computer networks, called the Grid, and it has become very cheap and very fast. What is a Grid? It is a large system of computing resources that provides users with a single point of access to these distributed resources, based on a Web (World Wide Web) interface, and performs tasks on their behalf. [17]

Grid technology is currently under intensive development, and the first Grid tools are already available for developers. In this type of application, a high-speed network is used for the interconnection between the parts of the grid, via the internet.

Nowadays, the Grid is set to enable scientific collaborations to share resources on an unprecedented scale, and to let geographically distributed groups collaborate in ways that were previously impossible, by using scalable, secure, high performance mechanisms for discovering and negotiating access to remote resources.

8.2 Future
In the future, increasing industry support for low latency clusters will help availability and performance, but restrictions may require a departure from the current multicast-oriented data distribution strategies. Latency and bandwidth performance of Ethernet-based clusters with multicast support will, however, continue to improve.

8.3 Application and outlook
Obviously, cluster computing is quickly becoming the architecture of choice. One category of applications is the so-called Grand Challenge Applications (GCA). These are defined as fundamental problems in science and engineering with broad economic and scientific impact, whose solution can be advanced by applying high performance computing and communication technologies. The high complexity of GCAs demands enormous resources, such as processing time, memory space and communication bandwidth. A common characteristic of GCAs is that they involve computationally intensive simulations. Examples of GCAs are applied fluid dynamics, environmental modeling, ecosystem simulation, biomedical imaging, biomechanics, molecular biology, and the computational sciences. [17]

Other than GCAs, cluster computing is also applied in other applications that demand high availability, scalability and performance. Clusters are used as replicated storage and backup servers that provide the essential fault tolerance and reliability for critical applications. For example, the internet search engine Google uses cluster computing to provide reliable and efficient internet search services.

Conclusion

Cluster computing offers a comparatively cheap alternative to large server or mainframe computer solutions. New trends in hardware and software technologies are likely to make clusters even more promising.

Statement of contribution

Member 1: Muhammad Khurram Shehzad (1137395) - Abstract, Introduction, Literature review and Conclusion as a group; Trends, Future Outlook and Applications as an individual.
Member 2: Samra Mohammad (1137784) - Abstract, Introduction, Literature review and Conclusion as a group; Trends, Future Outlook and Applications as an individual.
Member 3: Muhammad Faheem Abbas (1137391) - Abstract, Introduction, Literature review and Conclusion as a group; Trends, Future Outlook and Applications as an individual.

References

[1] Bader, David; Robert Pennington (June 1996). "Cluster Computing: Applications". Georgia Tech College of Computing. Retrieved 2007-07-13.
[2] "Nuclear weapons supercomputer reclaims world speed record for US". The Telegraph, 18 Jun 2012. Retrieved 18 Jun 2012.
[3] Amdahl, Gene M. (1967). "Validity of the Single Processor Approach to Achieving Large-Scale Computing Capabilities". AFIPS Conference Proceedings (30): 483-485. doi:10.1145/1465482.1465560
[4] Michel Daydé, Jack Dongarra (2005). High Performance Computing for Computational Science - VECPAR 2004. ISBN 3-540-25424-2, pages 120-121.
[5] M. Yokokawa et al. "The K Computer", International Symposium on Low Power Electronics and Design (ISLPED), 1-3 Aug. 2011, pages 371-372.
[6] Evan Marcus, Hal Stern. Blueprints for High Availability: Designing Resilient Distributed Systems. John Wiley & Sons, ISBN 0-471-35601-8.
[7] Joseph D. Sloan (2004). High Performance Linux Clusters. ISBN 0-596-00570-9.
[8] Franco Milicchio, Wolfgang Alexander Gehrke (2007). Distributed Services with OpenAFS: for Enterprise and Education. Pages 339-341.
[9] Prabhu (2008). Grid and Cluster Computing. ISBN 8120334280, pages 109-112.
[10] Gropp, William; Lusk, Ewing; Skjellum, Anthony (1996). "A High-Performance, Portable Implementation of the MPI Message Passing Interface". Parallel Computing. CiteSeerX 10.1.1.102.9485
[11] K. Shirahata et al. "Hybrid Map Task Scheduling for GPU-Based Heterogeneous Clusters", Cloud Computing Technology and Science (CloudCom), 30 Nov. - 3 Dec. 2010, pages 733-740. ISBN 978-1-4244-9405-7
[12] Alan Robertson. "Resource fencing using STONITH". IBM Linux Research Center, 2010.
[13] http://www.intersect360.com/industry/reports.php?id=67 (Accessed 12/05/2014)
[14] http://ec.europa.eu/digital-agenda/futurium/en/content/future-high-performance-computing-supercomputers-rescue (Accessed 12/05/2014)
[15] http://www.top500.org/system/177999 (Accessed 12/05/2014)
[16] http://en.wikipedia.org/wiki/IBM_Mira (Accessed 12/05/2014)
[17] http://www.slideshare.net/shivakrishnashekar/computer-cluster (Accessed 14 May 2014)
