Heterogeneous System Architecture

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.

Heterogeneous System Architecture (HSA) is a cross-vendor set of specifications that allow for the integration of central processing units and graphics processors on the same bus, with shared memory and tasks. The HSA is being developed by the HSA Foundation, which includes (among many others) AMD and ARM. The platform's stated aim is to reduce communication latency between CPUs, GPUs and other compute devices, and make these various devices more compatible from a programmer's perspective, relieving the programmer of the task of planning the moving of data between devices' disjoint memories (as must currently be done with OpenCL or CUDA).
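To make this contrast concrete, below is a minimal CPU-only sketch in C++. The "device_memory" buffer and the two "kernel" functions are purely illustrative stand-ins, not real HSA, OpenCL or CUDA APIs: in the traditional flow the data must be staged into the accelerator's separate memory and copied back, while under an HSA-style shared virtual address space the host's pointer can be handed to the accelerator directly.

#include <cstddef>
#include <cstring>
#include <vector>

// Conceptual sketch only: "device_memory" stands in for a discrete GPU's separate
// memory, and the "kernels" are ordinary functions. No real HSA/OpenCL/CUDA calls.
static std::vector<float> device_memory;

void kernel_on_device_copy(std::size_t n) {                   // sees only the staged copy
    for (std::size_t i = 0; i < n; ++i) device_memory[i] *= 2.0f;
}

void kernel_on_shared_pointer(float* data, std::size_t n) {   // sees the host's data directly
    for (std::size_t i = 0; i < n; ++i) data[i] *= 2.0f;
}

int main() {
    std::vector<float> host(1024, 1.0f);

    // Traditional (disjoint-memory) flow: copy in, compute, copy back.
    device_memory.resize(host.size());
    std::memcpy(device_memory.data(), host.data(), host.size() * sizeof(float));
    kernel_on_device_copy(host.size());
    std::memcpy(host.data(), device_memory.data(), host.size() * sizeof(float));

    // HSA-style flow: a shared virtual address space lets the same pointer be passed.
    kernel_on_shared_pointer(host.data(), host.size());
}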

CUDA and OpenCL, as well as most other fairly advanced programming languages, can use HSA to increase their execution performance. Heterogeneous computing is widely used in system-on-chip devices such as tablets, smartphones, other mobile devices, and video game consoles. HSA allows programs to use the graphics processor for floating point calculations without separate memory or scheduling. The rationale behind HSA

A graphics processor, to operate at the same processing level as the system's CPU. Among its main features, HSA defines a unified virtual address space for compute devices: where GPUs traditionally have their own memory, separate from the main (CPU) memory, HSA requires these devices to share page tables so that devices can exchange data by sharing pointers. This is to be supported by custom memory management units. To render interoperability possible and also to ease various aspects of programming, HSA

A system-on-chip, or SoC. For example, many new processors now include built-in logic for interfacing with other devices (SATA, PCI, Ethernet, USB, RFID, radios, UARTs, and memory controllers), as well as programmable functional units and hardware accelerators (GPUs, cryptography co-processors, programmable network processors, A/V encoders/decoders, etc.). Recent findings show that

280-491: A "big" or P-core and a more power efficient core usually known as a "small" or E-core. The terms P- and E-cores are usually used in relation to Intel's implementation of hetereogeneous computing, while the terms big and little cores are usually used in relation to the ARM architecture. Some processors have three categories of core, prime, performance and efficiency cores, with prime cores having higher performance than performance cores;

A certain point. Amdahl's law has limitations, including assumptions of fixed workload, neglecting inter-process communication and synchronization overheads, focusing primarily on the computational aspect, and ignoring extrinsic factors such as data persistence, I/O operations, and memory access overheads. Gustafson's law and the Universal Scalability Law give a more realistic assessment of

A constant value for large numbers of processing elements. The maximum potential speedup of an overall system can be calculated by Amdahl's law. Amdahl's law indicates that optimal performance improvement is achieved by balancing enhancements to both parallelizable and non-parallelizable components of a task. Furthermore, it reveals that increasing the number of processors yields diminishing returns, with negligible speedup gains beyond
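In symbols, with p the fraction of the work that can be parallelized and n the number of processing elements, Amdahl's law gives the speedup as S(n) = 1 / ((1 - p) + p/n), which approaches the fixed ceiling 1 / (1 - p) as n grows, so the serial fraction alone caps the achievable speedup no matter how many processors are added. Gustafson's law instead lets the problem size grow with the machine and gives the scaled speedup S(n) = n - (n - 1)s, where s is the serial fraction of the work observed when running on n processors.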

A couple of other software tools related to HSA. CodeXL version 2.0 includes an HSA profiler. As of February 2015, only AMD's "Kaveri" A-series APUs (cf. "Kaveri" desktop processors and "Kaveri" mobile processors) and Sony's PlayStation 4 allowed the integrated GPU to access memory via version 2 of AMD's IOMMU. Earlier APUs (Trinity and Richland) included the version 2 IOMMU functionality, but only for use by an external GPU connected via PCI Express. Post-2015 Carrizo and Bristol Ridge APUs also include

A different microarchitecture (floating-point number processing is a special case of this - not usually referred to as heterogeneous). In the past, heterogeneous computing meant different ISAs had to be handled differently, while in a modern example, Heterogeneous System Architecture (HSA) systems eliminate the difference (for the user) while using multiple processor types (typically CPUs and GPUs), usually on

A heterogeneous-ISA chip multiprocessor that exploits diversity offered by multiple ISAs can outperform the best same-ISA homogeneous architecture by as much as 21% with 23% energy savings and a reduction of 32% in Energy Delay Product (EDP). AMD's 2014 announcement on its pin-compatible ARM and x86 SoCs, codenamed Project Skybridge, suggested a heterogeneous-ISA (ARM+x86) chip multiprocessor in

A mainstream programming task. In 2012, quad-core processors became standard for desktop computers, while servers have 10+ core processors. From Moore's law it can be predicted that the number of cores per processor will double every 18–24 months. This could mean that after 2020 a typical processor will have dozens or hundreds of cores; however, in reality the standard is somewhere in the region of 4 to 16 cores, with some designs having

A mix of performance and efficiency cores (such as ARM's big.LITTLE design) due to thermal and design constraints. An operating system can ensure that different tasks and user programs are run in parallel on the available cores. However, for a serial software program to take full advantage of the multi-core architecture, the programmer needs to restructure and parallelize the code. A speed-up of application software runtime will no longer be achieved through frequency scaling; instead, programmers will need to parallelize their software code to take advantage of

A multi-core processor can issue multiple instructions per clock cycle from multiple instruction streams. IBM's Cell microprocessor, designed for use in the Sony PlayStation 3, is a prominent multi-core processor. Each core in a multi-core processor can potentially be superscalar as well—that is, on every clock cycle, each core can issue multiple instructions from one thread. Simultaneous multithreading (of which Intel's Hyper-Threading

A node), or n-dimensional mesh. Parallel computers based on interconnected networks need to have some kind of routing to enable the passing of messages between nodes that are not directly connected. The medium used for communication between the processors is likely to be hierarchical in large multiprocessor machines. Parallel computers can be roughly classified according to the level at which

A parallel program are often called threads. Some parallel computer architectures use smaller, lightweight versions of threads known as fibers, while others use bigger versions known as processes. However, "threads" is generally accepted as a generic term for subtasks. Threads will often need synchronized access to an object or other resource, for example when they must update a variable that

A pipelined processor is a RISC processor, with five stages: instruction fetch (IF), instruction decode (ID), execute (EX), memory access (MEM), and register write back (WB). The Pentium 4 processor had a 35-stage pipeline. Most modern processors also have multiple execution units. They usually combine this feature with pipelining and thus can issue more than one instruction per clock cycle (IPC > 1). These processors are known as superscalar processors. Superscalar processors differ from multi-core processors in that

A prime core is known as "big", a performance core is known as "medium", and an efficiency core is known as "small". A common use of such a topology is to provide better power efficiency, especially in mobile SoCs. Heterogeneous computing systems present new challenges not found in typical homogeneous systems. The presence of multiple processing elements raises all of the issues involved with homogeneous parallel processing systems, while

A problem, an algorithm is constructed and implemented as a serial stream of instructions. These instructions are executed on a central processing unit on one computer. Only one instruction may execute at a time—after that instruction is finished, the next one is executed. Parallel computing, on the other hand, uses multiple processing elements simultaneously to solve a problem. This is accomplished by breaking

A result from instruction 2. It violates condition 1, and thus introduces a flow dependency.

1: function NoDep(a, b)
2: c := a * b
3: d := 3 * b
4: e := a + b
5: end function

In this example, there are no dependencies between the instructions, so they can all be run in parallel. Bernstein's conditions do not allow memory to be shared between different processes. For that, some means of enforcing an ordering between accesses is necessary, such as semaphores, barriers or some other synchronization method. Subtasks in

A result, shared memory computer architectures do not scale as well as distributed memory systems do. Processor–processor and processor–memory communication can be implemented in hardware in several ways, including via shared (either multiported or multiplexed) memory, a crossbar switch, a shared bus or an interconnect network of a myriad of topologies including star, ring, tree, hypercube, fat hypercube (a hypercube with more than one processor at

A single address space), or distributed memory (in which each processing element has its own local address space). Distributed memory refers to the fact that the memory is logically distributed, but often implies that it is physically distributed as well. Distributed shared memory and memory virtualization combine the two approaches, where the processing element has its own local memory and access to

A single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors, for accelerating specific tasks. In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are

A sufficient amount of memory bandwidth exists. A distributed computer (also known as a distributed memory multiprocessor) is a distributed memory computer system in which the processing elements are connected by a network. Distributed computers are highly scalable. The terms "concurrent computing", "parallel computing", and "distributed computing" have a lot of overlap, and no clear distinction exists between them. The same system may be characterized both as "parallel" and "distributed";

A task independently. On the other hand, concurrency enables a program to deal with multiple tasks even on a single CPU core; the core switches between tasks (i.e. threads) without necessarily completing each one. A program can have both, neither or a combination of parallelism and concurrency characteristics. Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within

A time from multiple threads. A symmetric multiprocessor (SMP) is a computer system with multiple identical processors that share memory and connect via a bus. Bus contention prevents bus architectures from scaling. As a result, SMPs generally do not comprise more than 32 processors. Because of the small size of the processors and the significant reduction in the requirements for bus bandwidth achieved by large caches, such symmetric multiprocessors are extremely cost-effective, provided that

Is a programming language construct that allows one thread to take control of a variable and prevent other threads from reading or writing it, until that variable is unlocked. The thread holding the lock is free to execute its critical section (the section of a program that requires exclusive access to some variable), and to unlock the data when it is finished. Therefore, to guarantee correct program execution,
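As a concrete sketch of such a construct (C++ chosen purely for illustration; the article does not tie locks to any particular language), a mutex lets one thread take control of the shared variable for the duration of its critical section:

#include <mutex>
#include <thread>

int shared_counter = 0;
std::mutex counter_lock;

void increment_many(int times) {
    for (int i = 0; i < times; ++i) {
        std::lock_guard<std::mutex> guard(counter_lock);  // take control of the shared variable
        ++shared_counter;                                 // critical section
    }                                                     // guard's destructor unlocks it
}

int main() {
    std::thread a(increment_many, 100000);
    std::thread b(increment_many, 100000);
    a.join();
    b.join();
    // With the lock, shared_counter is exactly 200000; without it, updates could be lost.
}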

Is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to

Is equivalent to an entirely sequential program. The single-instruction-multiple-data (SIMD) classification is analogous to doing the same operation repeatedly over a large data set. This is commonly done in signal processing applications. Multiple-instruction-single-data (MISD) is a rarely used classification. While computer architectures to deal with this were devised (such as systolic arrays), few applications that fit this class materialized. Multiple-instruction-multiple-data (MIMD) programs are by far
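The four classes can be summarized in the usual two-by-two grid:

                          Single instruction stream    Multiple instruction streams
Single data stream        SISD                         MISD
Multiple data streams     SIMD                         MIMD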

Is intended to be ISA-agnostic for both CPUs and accelerators, and to support high-level programming languages. So far, the HSA specifications cover: HSAIL (Heterogeneous System Architecture Intermediate Language), a virtual instruction set for parallel programs. Mobile devices are one of HSA's application areas, in which it yields improved power efficiency. The illustrations below compare CPU-GPU coordination under HSA versus under traditional architectures. Some of

Is known as burst buffer, which is typically built from arrays of non-volatile memory physically distributed across multiple I/O nodes. Computer architectures in which each element of main memory can be accessed with equal latency and bandwidth are known as uniform memory access (UMA) systems. Typically, that can be achieved only by a shared memory system, in which the memory is not physically distributed. A system that does not have this property

Is known as a non-uniform memory access (NUMA) architecture. Distributed memory systems have non-uniform memory access. Computer systems make use of caches—small and fast memories located close to the processor which store temporary copies of memory values (nearby in both the physical and logical sense). Parallel computer systems have difficulties with caches that may store the same value in more than one location, with

Is shared between them. Without synchronization, the instructions between the two threads may be interleaved in any order. For example, consider the following program:

Thread A: 1A: Read variable V; 2A: Add 1 to variable V; 3A: Write back to variable V
Thread B: 1B: Read variable V; 2B: Add 1 to variable V; 3B: Write back to variable V

If instruction 1B is executed between 1A and 3A, or if instruction 1A is executed between 1B and 3B, the program will produce incorrect data. This is known as a race condition. The programmer must use a lock to provide mutual exclusion. A lock

Is the best known) was an early form of pseudo-multi-coreism. A processor capable of concurrent multithreading includes multiple execution units in the same processing unit—that is, it has a superscalar architecture—and can issue multiple instructions per clock cycle from multiple threads. Temporal multithreading, on the other hand, includes a single execution unit in the same processing unit and can issue one instruction at

Is the characteristic of a parallel program that "entirely different calculations can be performed on either the same or different sets of data". This contrasts with data parallelism, where the same calculation is performed on the same or different sets of data. Task parallelism involves the decomposition of a task into sub-tasks and then allocating each sub-task to a processor for execution. The processors would then execute these sub-tasks concurrently and often cooperatively. Task parallelism does not usually scale with
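A small C++ sketch of task parallelism (the two task bodies are arbitrary examples): two different computations over the same data are handed to separate workers, in contrast to data parallelism, which would split one computation across the data:

#include <algorithm>
#include <future>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> samples(100000, 0.5);

    // Task parallelism: two *different* sub-tasks, each run by its own worker.
    auto sum_task = std::async(std::launch::async, [&samples] {
        return std::accumulate(samples.begin(), samples.end(), 0.0);
    });
    auto range_task = std::async(std::launch::async, [&samples] {
        auto mm = std::minmax_element(samples.begin(), samples.end());
        return *mm.second - *mm.first;
    });

    double sum = sum_task.get();      // total of all samples
    double range = range_task.get();  // max - min
    (void)sum;
    (void)range;
}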

Is to ease the burden on programmers when offloading calculations to the GPU. Originally driven solely by AMD and called the FSA, the idea was extended to encompass processing units other than GPUs, such as other manufacturers' DSPs, as well. Modern GPUs are very well suited to perform single instruction, multiple data (SIMD) and single instruction, multiple threads (SIMT), while modern CPUs are still being optimized for branching, etc. Originally introduced by embedded systems such as

The Cell Broadband Engine, sharing system memory directly between multiple system actors makes heterogeneous computing more mainstream. Heterogeneous computing itself refers to systems that contain multiple processing units – central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), or any type of application-specific integrated circuits (ASICs). The system architecture allows any accelerator, for instance

The 1970s until about 1986, speed-up in computer architecture was driven by doubling computer word size—the amount of information the processor can manipulate per cycle. Increasing the word size reduces the number of instructions the processor must execute to perform an operation on variables whose sizes are greater than the length of the word. For example, where an 8-bit processor must add two 16-bit integers,

AMD's IOMMU, was accepted into the Linux kernel mainline version 4.14. Integrated support for HSA platforms has been announced for the "Sumatra" release of OpenJDK, due in 2015. AMD APP SDK is AMD's proprietary software development kit targeting parallel computing, available for Microsoft Windows and Linux. Bolt is a C++ template library optimized for heterogeneous computing. GPUOpen comprises

The HSA runtime. This very first implementation, known as amdkfd, focuses on "Kaveri" or "Berlin" APUs and works alongside the existing Radeon kernel graphics driver. Additionally, amdkfd supports heterogeneous queuing (HQ), which aims to simplify the distribution of computational jobs among multiple CPUs and GPUs from the programmer's perspective. Support for heterogeneous memory management (HMM), suited only for graphics hardware featuring version 2 of

The HSA-specific features implemented in the hardware need to be supported by the operating system kernel and specific device drivers. For example, support for AMD Radeon and AMD FirePro graphics cards, and APUs based on Graphics Core Next (GCN), was merged into version 3.19 of the Linux kernel mainline, released on 8 February 2015. Programs do not interact directly with amdkfd, but queue their jobs utilizing

The above program can be rewritten to use locks:

Thread A: 1A: Lock variable V; 2A: Read variable V; 3A: Add 1 to variable V; 4A: Write back to variable V; 5A: Unlock variable V
Thread B: 1B: Lock variable V; 2B: Read variable V; 3B: Add 1 to variable V; 4B: Write back to variable V; 5B: Unlock variable V

One thread will successfully lock variable V, while the other thread will be locked out—unable to proceed until V is unlocked again. This guarantees correct execution of the program. Locks may be necessary to ensure correct program execution when threads must serialize access to resources, but their use can greatly slow a program and may affect its reliability. Locking multiple variables using non-atomic locks introduces

The average time per instruction. Maintaining everything else constant, increasing the clock frequency decreases the average time it takes to execute an instruction. An increase in frequency thus decreases runtime for all compute-bound programs. However, power consumption P by a chip is given by the equation P = C × V² × F, where C is the capacitance being switched per clock cycle (proportional to

The costs associated with merging data from multiple processes. Specifically, inter-process communication and synchronization can lead to overheads that are substantially higher—often by two or more orders of magnitude—compared to processing the same data on a single thread. Therefore, the overall improvement should be carefully evaluated. From the advent of very-large-scale integration (VLSI) computer-chip fabrication technology in

The easiest to parallelize. Michael J. Flynn created one of the earliest classification systems for parallel (and sequential) computers and programs, now known as Flynn's taxonomy. Flynn classified programs and computers by whether they were operating using a single set or multiple sets of instructions, and whether or not those instructions were using a single set or multiple sets of data. The single-instruction-single-data (SISD) classification

The hardware supports parallelism. This classification is broadly analogous to the distance between basic computing nodes. These are not mutually exclusive; for example, clusters of symmetric multiprocessors are relatively common. A multi-core processor is a processor that includes multiple processing units (called "cores") on the same chip. This processor differs from a superscalar processor, which includes multiple execution units and can issue multiple instructions per clock cycle from one instruction stream (thread); in contrast,

The increasing computing power of multicore architectures. (Main article: Amdahl's law.) Optimally, the speedup from parallelization would be linear—doubling the number of processing elements should halve the runtime, and doubling it a second time should again halve the runtime. However, very few parallel algorithms achieve optimal speedup. Most of them have a near-linear speedup for small numbers of processing elements, which flattens out into

The introduction of 32-bit processors, which has been a standard in general-purpose computing for two decades. Not until the early 2000s, with the advent of x86-64 architectures, did 64-bit processors become commonplace. A computer program is, in essence, a stream of instructions executed by a processor. Without instruction-level parallelism, a processor can only issue less than one instruction per clock cycle (IPC < 1). These processors are known as subscalar processors. These instructions can be re-ordered and combined into groups which are then executed in parallel without changing

The level of heterogeneity in the system can introduce non-uniformity in system development, programming practices, and overall system capability. Areas of heterogeneity can include: Heterogeneous computing hardware can be found in every domain of computing—from high-end servers and high-performance computing machines all the way down to low-power embedded devices including mobile phones and tablets.

Parallel computing

Parallel computing

The making. A system with heterogeneous CPU topology is a system where the same ISA is used, but the cores themselves are different in speed. The setup is more similar to a symmetric multiprocessor. (Although such systems are technically asymmetric multiprocessors, the cores do not differ in roles or device access.) There are typically two types of cores: a higher performance core usually known as

The memory on non-local processors. Accesses to local memory are typically faster than accesses to non-local memory. On supercomputers, distributed shared memory space can be implemented using a programming model such as PGAS. This model allows processes on one compute node to transparently access the remote memory of another compute node. All compute nodes are also connected to an external shared memory system via a high-speed interconnect, such as Infiniband; this external shared memory system

The most common type of parallel programs. According to David A. Patterson and John L. Hennessy, "Some machines are hybrids of these categories, of course, but this classic model has survived because it is simple, easy to understand, and gives a good first approximation. It is also—perhaps because of its understandability—the most widely used scheme." Parallel computing can incur significant overhead in practice, primarily due to

The most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting optimal parallel program performance. A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law, which states that it is limited by the fraction of time for which the parallelization can be utilised. Traditionally, computer software has been written for serial computation. To solve

The number of transistors whose inputs change), V is voltage, and F is the processor frequency (cycles per second). Increases in frequency increase the amount of power used in a processor. Increasing processor power consumption led ultimately to Intel's May 8, 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm. To deal with

The overhead from resource contention or communication dominates the time spent on other computation, further parallelization (that is, splitting the workload over even more threads) increases rather than decreases the amount of time required to finish. This problem, known as parallel slowdown, can be improved in some cases by software analysis and redesign. Applications are often classified according to how often their subtasks need to synchronize or communicate with each other. An application exhibits fine-grained parallelism if its subtasks must communicate many times per second; it exhibits coarse-grained parallelism if they do not communicate many times per second, and it exhibits embarrassing parallelism if they rarely or never have to communicate. Embarrassingly parallel applications are considered

The parallel performance. Understanding data dependencies is fundamental in implementing parallel algorithms. No program can run more quickly than the longest chain of dependent calculations (known as the critical path), since calculations that depend upon prior calculations in the chain must be executed in order. However, most algorithms do not consist of just a long chain of dependent calculations; there are usually opportunities to execute independent calculations in parallel. Let Pi and Pj be two program segments. Bernstein's conditions describe when

The physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors. In computer science, parallelism and concurrency are two different things: a parallel program uses multiple CPU cores, each core performing

The possibility of incorrect program execution. These computers require a cache coherency system, which keeps track of cached values and strategically purges them, thus ensuring correct program execution. Bus snooping is one of the most common methods for keeping track of which values are being accessed (and thus should be purged). Designing large, high-performance cache coherence systems is a very difficult problem in computer architecture. As

The possibility of program deadlock. An atomic lock locks multiple variables all at once. If it cannot lock all of them, it does not lock any of them. If two threads each need to lock the same two variables using non-atomic locks, it is possible that one thread will lock one of them and the second thread will lock the second variable. In such a case, neither thread can complete, and deadlock results. Many parallel programs require that their subtasks act in synchrony. This requires
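A brief C++ illustration of the "lock all or none" idea (the variable names are illustrative): std::scoped_lock acquires both mutexes together using a deadlock-avoidance algorithm, so two threads taking the same pair of locks cannot end up each holding one and waiting for the other:

#include <mutex>
#include <thread>

std::mutex lock_v, lock_w;
int v = 0, w = 0;

void move_unit() {
    // Both locks are acquired as one operation; neither is held while waiting for the other.
    std::scoped_lock both(lock_v, lock_w);
    ++v;
    --w;
}

int main() {
    std::thread a(move_unit);
    std::thread b(move_unit);
    a.join();
    b.join();
}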

The problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, specialized hardware, or any combination of the above. Historically, parallel computing was used for scientific computing and
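A minimal C++ sketch of this decomposition (the array-summing workload is illustrative): the input is split into independent slices, each slice is processed by its own thread, and the partial results are combined at the end:

#include <algorithm>
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1000000, 1);
    const unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<long long> partial(nthreads, 0);
    std::vector<std::thread> workers;

    const std::size_t chunk = data.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == nthreads) ? data.size() : begin + chunk;
        // Each thread sums its own slice; the slices do not overlap, so no locking is needed.
        workers.emplace_back([&partial, &data, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
        });
    }
    for (auto& w : workers) w.join();

    // Combine the independent partial results.
    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    (void)total;  // here total == data.size(), since every element is 1
}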

The problem of power consumption and overheating, the major central processing unit (CPU or processor) manufacturers started to produce power-efficient processors with multiple cores. The core is the computing unit of the processor, and in multi-core processors each core is independent and can access the same memory concurrently. Multi-core processors have brought parallel computing to desktop computers. Thus, parallelization of serial programs has become

The processor must first add the 8 lower-order bits from each integer using the standard addition instruction, then add the 8 higher-order bits using an add-with-carry instruction and the carry bit from the lower-order addition; thus, an 8-bit processor requires two instructions to complete a single operation, where a 16-bit processor would be able to complete the operation with a single instruction. Historically, 4-bit microprocessors were replaced with 8-bit, then 16-bit, then 32-bit microprocessors. This trend generally came to an end with
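The two-step sequence can be sketched in C++ by emulating 8-bit registers with uint8_t (the function name is illustrative): the low bytes are added with an ordinary add, the carry out of that add is captured, and the high bytes are added together with that carry:

#include <cstdint>

// Add two 16-bit values using only 8-bit operations, as an 8-bit CPU would:
// low bytes first with a plain ADD, then high bytes with an ADD-with-carry.
std::uint16_t add16_with_8bit_ops(std::uint16_t a, std::uint16_t b) {
    std::uint8_t a_lo = a & 0xFF, a_hi = a >> 8;
    std::uint8_t b_lo = b & 0xFF, b_hi = b >> 8;

    std::uint8_t lo = static_cast<std::uint8_t>(a_lo + b_lo);          // standard addition
    std::uint8_t carry = (lo < a_lo) ? 1 : 0;                          // carry out of the low byte
    std::uint8_t hi = static_cast<std::uint8_t>(a_hi + b_hi + carry);  // add-with-carry

    return static_cast<std::uint16_t>((hi << 8) | lo);
}

int main() {
    return add16_with_8bit_ops(300, 500) == 800 ? 0 : 1;  // 0x012C + 0x01F4 = 0x0320
}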

The result of the program. This is known as instruction-level parallelism. Advances in instruction-level parallelism dominated computer architecture from the mid-1980s until the mid-1990s. All modern processors have multi-stage instruction pipelines. Each stage in the pipeline corresponds to a different action the processor performs on that instruction in that stage; a processor with an N-stage pipeline can have up to N different instructions at different stages of completion and thus can issue one instruction per clock cycle (IPC = 1). These processors are known as scalar processors. The canonical example of

The same integrated circuit, to provide the best of both worlds: general GPU processing (apart from the GPU's well-known 3D graphics rendering capabilities, it can also perform mathematically intensive computations on very large data-sets), while CPUs can run the operating system and perform traditional serial tasks. The level of heterogeneity in modern computing systems is gradually increasing as further scaling of fabrication technologies allows for formerly discrete components to become integrated parts of

The same type of processors, but by adding dissimilar coprocessors, usually incorporating specialized processing capabilities to handle particular tasks. Usually, heterogeneity in the context of computing refers to different instruction-set architectures (ISA), where the main processor has one and other processors have another - usually a very different - architecture (maybe more than one), not just

The second segment produces a variable needed by the first segment. The third and final condition represents an output dependency: when two segments write to the same location, the result comes from the logically last executed segment. Consider the following functions, which demonstrate several kinds of dependencies:

1: function Dep(a, b)
2: c := a * b
3: d := 3 * c
4: end function

In this example, instruction 3 cannot be executed before (or even in parallel with) instruction 2, because instruction 3 uses

The several execution units are not entire processors (i.e. processing units). Instructions can be grouped together only if there is no data dependency between them. Scoreboarding and the Tomasulo algorithm (which is similar to scoreboarding but makes use of register renaming) are two of the most common techniques for implementing out-of-order execution and instruction-level parallelism. Task parallelism

The simulation of scientific problems, particularly in the natural and engineering sciences, such as meteorology. This led to the design of parallel hardware and software, as well as high performance computing. Frequency scaling was the dominant reason for improvements in computer performance from the mid-1980s until 2004. The runtime of a program is equal to the number of instructions multiplied by

The size of a problem. Superword level parallelism is a vectorization technique based on loop unrolling and basic block vectorization. It is distinct from loop vectorization algorithms in that it can exploit parallelism of inline code, such as manipulating coordinates, color channels or in loops unrolled by hand. Main memory in a parallel computer is either shared memory (shared between all processing elements in

The two are independent and can be executed in parallel. For Pi, let Ii be all of the input variables and Oi the output variables, and likewise for Pj. Pi and Pj are independent if they satisfy: Ij ∩ Oi = ∅, Ii ∩ Oj = ∅, and Oi ∩ Oj = ∅. Violation of the first condition introduces a flow dependency, corresponding to the first segment producing a result used by the second segment. The second condition represents an anti-dependency, when

The use of a barrier. Barriers are typically implemented using a lock or a semaphore. One class of algorithms, known as lock-free and wait-free algorithms, altogether avoids the use of locks and barriers. However, this approach is generally difficult to implement and requires correctly designed data structures. Not all parallelization results in speed-up. Generally, as a task is split up into more and more threads, those threads spend an ever-increasing portion of their time communicating with each other or waiting on each other for access to resources. Once

The version 2 IOMMU functionality for the integrated GPU. The following table shows features of AMD's processors with 3D graphics, including APUs (see also: List of AMD processors with 3D graphics). ARM's Bifrost microarchitecture, as implemented in the Mali-G71, is fully compliant with the HSA 1.1 hardware specifications. As of June 2016, ARM has not announced software support that would use this hardware feature.

Heterogeneous computing

Heterogeneous computing refers to systems that use more than one kind of processor or core. These systems gain performance or energy efficiency not just by adding
