Wednesday, April 3, 2019
Definitions of Multiprocessors in Computing
A multiprocessor can be defined as a computer which uses two or more processing units under integrated control. Multiprocessing is also defined as the manner of using two or more CPUs within a single computer. As we all know, there are processors inside computers; multiprocessors, as the name indicates, have the ability to support more than one processor at the same time. Usually in multiprocessing the processors are organized in parallel, and hence a large number of executions can be carried out at the same time, i.e. multiprocessing helps in executing the same instructions many times over at a particular moment. Another related definition is that multiprocessing is the sharing of the execution process through the interconnection of more than one microprocessor using tightly or loosely coupled technology. Usually multiprocessing operations carry two coinciding steps: one is performing the task of editing and the other is handling the data processing.

A multiprocessor device comprises, on a single semiconductor chip, a plurality of processors including a first group of processors and a second group of processors; a first bus to which the first group of processors is coupled; a second bus to which the second group of processors is coupled; a first external bus interface to which the first bus is coupled; and a second external bus interface to which the second bus is coupled.

The term multiprocessing is also used to refer to a computer that has many independent processing elements. The processing elements are almost full computers in their own right.
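As an illustration of the definition above, two or more processing units executing work within a single computer, here is a minimal sketch using Python's standard multiprocessing module (the square function and the worker count are invented for the example, not taken from the text):

```python
# Minimal sketch: farming work out to several worker processes so that
# more than one processing unit executes at the same time.
from multiprocessing import Pool

def square(n):
    # Every worker executes the same instruction on its own piece of data.
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:      # use up to 4 CPUs in parallel
        results = pool.map(square, range(8))
    print(results)                       # [0, 1, 4, 9, 16, 25, 36, 49]
```

Pool.map keeps the results in input order even though the workers run in parallel, which is why the printed list is sorted.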
The main difference is that they have been freed from the encumbrance of communicating with peripherals.

MULTIPROCESSORS IN TERMS OF ARCHITECTURE

The processors are usually made up of small- and medium-scale ICs which contain a small or large number of transistors. Most common multiprocessor systems today use an SMP architecture. In the case of multi-core processors, the SMP architecture applies to the cores, treating them as separate processors. SMP systems allow any processor to work on any task no matter where the data for that task are located in memory; with proper operating system support, SMP systems can easily move tasks between processors to balance the workload efficiently.

Benefits
- Increased processing power
- Scales resource use to application requirements

Additional operating system responsibilities
- All processors remain busy
- Even distribution of processes throughout the system
- All processors work on consistent copies of shared data
- Execution of related processes is synchronized
- Mutual exclusion is enforced

Multiprocessing is a mode of processing in which two or more processors work together to process more than one program simultaneously. Multiprocessor systems have more than one processor; that is why they are known as multiprocessor systems. In a master/slave multiprocessor system there is one master processor and the others are slaves. If a slave processor fails, the master can assign its task to another slave processor, but if the master fails, the entire system will fail. The central part of such a multiprocessor is the master. All of the processors share the hard disk, memory, and other storage devices.
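The master/slave arrangement described above can be sketched with Python's standard multiprocessing module (the doubling task and the process counts are invented for illustration; a real master would also monitor the slaves and reassign work on failure):

```python
# Minimal sketch: one master process assigns tasks; slave processes execute them.
from multiprocessing import Process, Queue

def slave(tasks, results):
    # Each slave repeatedly takes the next task assigned by the master.
    for task in iter(tasks.get, None):   # None is the shutdown signal
        results.put(task * 2)

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    slaves = [Process(target=slave, args=(tasks, results)) for _ in range(3)]
    for s in slaves:
        s.start()
    for t in range(6):                   # the master assigns the tasks
        tasks.put(t)
    for _ in slaves:
        tasks.put(None)                  # the master tells each slave to stop
    for s in slaves:
        s.join()
    print(sorted(results.get() for _ in range(6)))   # [0, 2, 4, 6, 8, 10]
```

Note the asymmetry the text describes: only the master creates, assigns, and shuts down work, so the whole run depends on the master staying alive.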
Examples of multiprocessors

1. Quad-Processor Pentium Pro
- SMP, bus interconnection
- 4 x 200 MHz Intel Pentium Pro processors
- 8 + 8 KB L1 cache per processor
- 512 KB L2 cache per processor
- Snoopy cache coherence
- Compaq, HP, IBM, NetPower
- Windows NT, Solaris, Linux, etc.

2. SGI Origin 2000
- NUMA, hypercube interconnection
- Up to 128 (64 x 2) MIPS R10000 processors
- 32 + 32 KB L1 cache per processor
- 4 MB L2 cache per processor
- Distributed directory-based cache coherence
- Automatic page migration/replication
- SGI IRIX with Pthreads

Classifications of multiprocessor architecture
Multiprocessor architectures can be classified by the nature of the data path, by the interconnection scheme, and by how processors share resources.

A) Message-Passing Architectures
- Separate address space for each processor
- Processors communicate via message passing

B) Shared-Memory Architectures
- Single address space shared by all processors
- Processors communicate by memory reads/writes
- SMP or NUMA
- Cache coherence is an important issue

1. Classifying Sequential and Parallel Architectures (Data Path)
A stream is a sequence of bytes: either a data stream or an instruction stream. Flynn's classification distinguishes architectures by their instruction and data streams.

MISD multiprocessing
MISD multiprocessing offers mainly the advantage of redundancy, since multiple processing units perform the same tasks on the same data, reducing the chances of incorrect results if one of the units fails. MISD architectures may involve comparisons between processing units to detect failures. Apart from the redundant and fail-safe character of this type of multiprocessing, it has few advantages, and it is very expensive. It does not improve performance. It can be implemented in a way that is transparent to software. It is used in array processors and is implemented in fault-tolerant machines.

MIMD multiprocessing
MIMD multiprocessing architecture is suitable for a wide variety of tasks in which completely independent and parallel execution of instructions operating on different sets of data can be put to productive use.
For this reason, and because it is easy to implement, MIMD predominates in multiprocessing. Processing is divided into multiple threads, each with its own hardware processor state, within a single software-defined process or within multiple processes. As long as a system has multiple threads awaiting dispatch (either system or user threads), this architecture makes good use of hardware resources.

MIMD does raise issues of deadlock and resource contention, however, since threads may collide in their access to resources in an unpredictable way that is difficult to manage efficiently. MIMD requires special coding in the operating system of a computer but does not require application changes unless the programs themselves use multiple threads (MIMD is transparent to single-threaded programs under most operating systems, if the programs do not voluntarily relinquish control to the OS). Both system and user software may need to use software constructs such as semaphores (also called locks or gates) to prevent one thread from interfering with another if they should happen to cross paths in referencing the same data. This gating or locking process increases code complexity, lowers performance, and greatly increases the amount of testing required, although not usually enough to negate the advantages of multiprocessing. Similar conflicts can arise at the hardware level between processors (cache contention and corruption, for example) and must usually be resolved in hardware, or with a combination of software and hardware (e.g., cache-clear instructions).

SISD multiprocessing
In a single instruction stream, single data stream computer, one processor sequentially processes instructions; each instruction processes one data item.

SIMD multiprocessing
In a single instruction stream, multiple data stream computer, one processor handles a stream of instructions, each of which can perform calculations in parallel on multiple data locations.
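The semaphore/lock construct described in the MIMD discussion can be sketched in a few lines using Python's standard threading module (the shared counter and thread count are invented for the example):

```python
# Minimal sketch: a lock (one kind of "gate") keeps threads from
# interfering when they cross paths on the same data.
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # mutual exclusion: one thread updates at a time
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)              # 400000 -- without the lock, updates could be lost
```

The cost the text mentions is visible here too: every increment now pays for acquiring and releasing the lock.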
SIMD multiprocessing is well suited to parallel or vector processing, in which a very large set of data can be divided into parts that are individually subjected to identical but independent operations. A single instruction stream directs the operation of multiple processing units to perform the same manipulations simultaneously on potentially large amounts of data. For certain types of computing applications, this type of architecture can produce enormous increases in performance, in terms of the elapsed time required to complete a given task. However, a drawback to this architecture is that a large part of the system lies idle when programs or system tasks are executed that cannot be divided into units that can be processed in parallel.

2. Interconnection scheme
The interconnection scheme describes how the system's components, such as processors and memory modules, are connected. It consists of nodes (components or switches) and links (connections). Parameters used to evaluate interconnection schemes:
- Node degree
- Bisection width
- Network diameter
- Cost of the interconnection scheme

Shared bus
- Single communication path between all nodes
- Contention can build up for the shared bus, so it is fitting only for small multiprocessors
- Supernodes can be formed by connecting several components with a shared bus, using a more scalable interconnection scheme to connect the supernodes
- Example: dual-processor Intel Pentium (shared bus multiprocessor organization)

Crossbar-switch matrix
- Separate path from every processor to every memory module (or from every node to every other node when nodes consist of both processors and memory modules)
- High fault tolerance, performance, and cost
- Example: Sun UltraSPARC-III (crossbar-switch matrix multiprocessor organization)

Hypercube
- An n-dimensional hypercube has 2^n nodes, each connected to n neighbor nodes
- Faster and more fault tolerant, but more expensive, than a 2-D mesh network
- Example: nCUBE (up to 8192 processors)

Multistage network
- Switch nodes act as hubs routing messages between nodes
- Cheaper, less fault tolerant, and worse in performance compared to a crossbar-switch matrix
- Example: IBM POWER4

COUPLING OF PROCESSORS

Tightly coupled systems
- Processors share most resources, including memory
- Communicate over shared buses using shared physical memory
- Tasks and/or processors communicate in a highly synchronized fashion
- Communicate through a common shared memory
- Shared memory system

Loosely coupled systems
- Processors do not share most resources
- Most communication is through explicit messages or shared virtual memory (although not shared physical memory)
- Tasks or processors do not communicate in a synchronized fashion
- Communicate by message-passing packets
- Overhead for data exchange is high
- Distributed memory system

Comparison: loosely coupled systems are more flexible, fault tolerant, and scalable; tightly coupled systems are more efficient and less of a burden on operating system programmers.

Multiprocessor Operating System Organizations
Systems can be classified based on how processors share operating system responsibilities. The types are master/slave, separate kernels, and symmetrical organization.

1) Master/slave organization
- Master processor executes the operating system; slaves execute only user processes
- Hardware asymmetry
- Low fault tolerance
- Good for computationally intensive jobs

2) Separate kernels organization
- Each processor executes its own operating system
- Some globally shared operating system data
- Loosely coupled
- Catastrophic failure is unlikely, but failure of one processor results in termination of processes on that processor
- Little contention over resources
- Example: Tandem system

3) Symmetrical organization
- Operating system manages a pool of identical processors
- High amount of resource sharing
- Need for mutual exclusion
- Highest degree of fault tolerance of any organization
- Some contention for resources
- Example: BBN Butterfly

Memory Access Architectures
Multiprocessors can also be classified based on how processors share memory. The goal is fast memory access from all processors to all memory, but contention in large systems makes this impractical.
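The loosely coupled style described above, no shared physical memory and communication only by explicit messages, can be sketched with Python's standard multiprocessing pipes (the uppercasing worker is invented for illustration):

```python
# Minimal sketch: two processes that share no memory and communicate
# only by passing explicit messages over a pipe.
from multiprocessing import Pipe, Process

def worker(conn):
    # The worker holds no shared state; it only exchanges messages.
    msg = conn.recv()            # explicit message receive
    conn.send(msg.upper())       # explicit message send (the reply)
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send("hello")     # message-passing packet to the worker
    print(parent_end.recv())     # HELLO
    p.join()
```

The two serialize/deserialize steps hidden in send and recv are exactly the data-exchange overhead the comparison above attributes to loosely coupled systems.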
1) Uniform memory access (UMA) multiprocessor
- All processors share all memory
- Access to any memory page is nearly the same for all processors and all memory modules (disregarding cache hits)
- Typically uses a shared bus or crossbar-switch matrix
- Also called symmetric multiprocessing (SMP)
- Small multiprocessors (typically two to eight processors)

2) Nonuniform memory access (NUMA) multiprocessor
- Each node contains a few processors and a portion of system memory, which is local to that node
- Access to local memory is faster than access to global memory (the rest of memory)
- More scalable than UMA (fewer bus collisions)

3) Cache-only memory architecture (COMA) multiprocessor
- Physically interconnected as a NUMA is, with local memory vs. global memory
- Main memory is viewed as a cache and called an attraction memory (AM)
- Allows the system to migrate data to the node that most often accesses it, at the granularity of a memory line (more efficient than a memory page)
- Reduces the number of cache misses serviced remotely
- Overhead: duplicated data items, and a complex protocol to ensure all updates are received at all processors

4) No-remote-memory-access (NORMA) multiprocessor
- Does not share physical memory
- Some implement the illusion of shared physical memory: shared virtual memory (SVM)
- Loosely coupled
- Communication through explicit messages
- Distributed systems, not networked systems

Features of multiprocessors
Many multiprocessors share one address space; they conceptually share memory. Sometimes this is implemented just like a multicomputer, in which the communication is implicit.
Processors communicate through reads and writes to the shared memory. Usually multiprocessors are characterized by complex behaviour. The MPU handles high-level tasks, including axis profile generation, host/controller communication, user-program execution, and safety event handling. Such a controller typically provides:
- Advanced real-time algorithm and special filter execution
- Digital encoder input up to 20 million counts per second
- Analog Sin-Cos encoder input and interpolation up to a multiplication factor of 65,536
- Fast, high-rate Position Event Generator (PEG) to trigger external devices
- Fast position registration (Mark) to capture position on an input event
- High-resolution analog or PWM command generation to the drive
- High Speed Synchronous Interface channel (HSSI) to manage fast communication with remote axes or I/O expansion modules

Advantages of Multiprocessor Systems
Some advantages of multiprocessor systems are as follows.

Reduced cost: Multiple processors share the same resources. A separate power supply or motherboard for each chip is not required, which reduces the cost.

Increased reliability: The reliability of the system is also increased. The failure of one processor does not affect the other processors, though it will slow down the machine. Several mechanisms are required to achieve increased reliability: if a processor fails, a job running on that processor also fails, so the system must be able to reschedule the failed job or to alert the user that the job was not successfully completed.

More work: As we increase the number of processors, more work can be done in less time. If more than one processor cooperates on a task, they will take less time to complete it. If we divide functions among several processors, then the failure of one processor will not halt the system, but it will affect the working speed.
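The graceful degradation described above can be put in numbers with a small sketch (the workload figures are invented, and the model is idealized: work divides evenly and communication overhead is ignored):

```python
# Minimal sketch: losing one processor slows the system but does not stop it.
def run_time(total_work, processors):
    # Idealized: work splits evenly and every processor runs at the same speed.
    return total_work / processors

before = run_time(100, 5)   # all five processors healthy
after = run_time(100, 4)    # one fails; the remaining four share its work
print(before, after)        # 20.0 25.0 -- slower, but the system still runs
```
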
Suppose I have five processors and one of them fails for some reason; then each of the remaining four processors will share the work of the failed processor. The system will not fail, but the loss of the failed processor will certainly affect its speed.

If you consider which saves more money, multiprocessor systems or multiple single-processor systems, you will find that multiprocessor systems save more money, because they can share power supplies, memory, and peripherals.

Increased throughput: An increase in the number of processors completes the work in less time. It is important to note that doubling the number of processors does not halve the time to complete a job; this is due to the overhead of communication between processors and contention for shared resources.

Reference
Books referred:
Morris Mano, Computer System Architecture, Prentice Hall, 2007