Interconnect Your Future with Mellanox 100Gb EDR Interconnects and CAPI


By Scot Schultz, Director of HPC and Technical Computing, Mellanox

Business Challenge

Some computing jobs are so large that they must be split into pieces and solved in parallel, distributed via the network across a number of computing nodes. We find some of the world’s largest computing jobs in the realm of scientific research, where continuous advancement will require extreme-scale computing with machines that are 500-to-1000 times more capable than today’s supercomputers. As researchers constantly refine their models and push to increased resolutions, the demand for more parallel computation and advanced networking capabilities is paramount.

Computing Challenge

Efficient high-performance computing systems require high-bandwidth, low-latency connections between thousands of multi-processor nodes, as well as high-speed storage systems. As a result of the ubiquitous data explosion and the ascendance of Big Data, especially unstructured data, today’s systems need to move enormous amounts of data as well as perform more sophisticated analysis.

The network has become the critical element in gaining insight from today’s massive flows of data.


Mellanox delivers industry-standards-based solutions with advanced native hardware acceleration engines, and leveraging the latest advancements from IBM’s OpenPOWER architecture takes performance to a whole new level.

Already deployed in over 50% of the world’s most powerful supercomputing systems, Mellanox’s high-speed interconnect solutions are proven to deliver the highest scalability, efficiency, and performance for HPC systems. The latest Mellanox EDR 100Gb/s interconnect architecture includes native support for one of the newest innovations brought forth by OpenPOWER: the Coherent Accelerator Processor Interface (CAPI).

The Mellanox 100Gb/s ConnectX®-4 architecture with native support for CAPI is capable of handling massively parallel communications. By delivering up to 100Gb/s of reliable, zero-loss connectivity, ConnectX-4 with CAPI provides an optimized platform for moving enormous volumes of data. With much tighter integration between the Mellanox high-performance interconnect and the processor, POWER-based systems can rip through high volumes of data and bring compute and data closer together to derive greater insights. Mellanox ConnectX-4 can be leveraged for 100Gb CAPI-attached InfiniBand, Ethernet, or storage.
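To get a rough sense of what "100Gb/s" means for moving data, here is a back-of-envelope calculation based on the line rate quoted above. This is idealized arithmetic, not a measured benchmark: it ignores protocol overhead, encoding, and congestion.

```python
LINK_GBPS = 100  # EDR line rate from the article, in gigabits per second

def transfer_seconds(gigabytes, link_gbps=LINK_GBPS):
    """Idealized time to move `gigabytes` of data at `link_gbps`,
    ignoring protocol overhead and congestion."""
    gigabits = gigabytes * 8          # convert gigabytes to gigabits
    return gigabits / link_gbps

print(transfer_seconds(1000))         # 1 TB at line rate -> 80.0 seconds
print(transfer_seconds(12.5))         # 12.5 GB -> 1.0 second
```

Even at these speeds, moving a terabyte takes on the order of a minute, which is why keeping compute close to data and minimizing per-message overhead matter as much as raw bandwidth.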


Figure: CAPI Interconnects with Mellanox Data Flow

CAPI also simplifies memory management between the interconnect and the CPU, which results in reduced overhead, higher performance, and increased scalability. Because CAPI provides a level of integration that removes the additional latency of traditional PCI Express bus semantics, the Mellanox interconnect can move data in and out of the system with even greater efficiency.
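The kind of overhead coherent attachment avoids is loosely analogous to extra data copies in software. The following is only a single-machine Python analogy, not the CAPI programming model: slicing a `bytes`/`bytearray` copies the data, while a `memoryview` shares the underlying buffer with no copy.

```python
buf = bytearray(10_000_000)          # a large buffer, e.g. a message payload

copy = bytes(buf[:1_000_000])        # slicing copies the data
view = memoryview(buf)[:1_000_000]   # a view shares the buffer; no copy made

# Mutating the source is visible through the view but not the copy.
buf[0] = 42
print(view[0], copy[0])              # prints: 42 0
```

Avoiding staging copies is the same intuition behind zero-copy data movement between an adapter and host memory: fewer copies mean lower latency and less CPU overhead.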

Back to tackling the world’s toughest scientific problems: Mellanox ConnectX-4 EDR 100Gb/s “Smart” interconnect technology and IBM’s POWER architecture with CAPI can help. Oak Ridge National Laboratory and Lawrence Livermore National Laboratory, for example, have chosen solutions utilizing OpenPOWER designs developed by Mellanox, IBM, and NVIDIA for the Department of Energy’s next-generation Summit and Sierra supercomputer systems. Summit and Sierra will deliver more than 100 petaflops of peak computing performance, which will make them the most powerful computers in the world.

From nanotechnology and climate research to medical research and the discovery of renewable energy sources, Mellanox and members of the OpenPOWER ecosystem are leading innovation in high-performance computing.

Learn more about Mellanox 100Gb/s and CAPI

Mellanox CAPI attached interconnects are suitable for the largest deployments, but they are also accessible for more modest clusters, clouds, and commercial datacenters. Here are a few ways to get started.

Watch for upcoming blog posts from IBM and other OpenPOWER Foundation partners on how you can use CAPI to accelerate computing, networking, and storage.

About Scot Schultz

Scot Schultz is an HPC technology specialist with broad knowledge of operating systems, high-speed interconnects, and processor technologies. Having joined the Mellanox team in March 2013 as Director of HPC and Technical Computing, Schultz is a 25-year veteran of the computing industry. Scot also maintains his role as Director of Educational Outreach and is a founding member of the HPC Advisory Council and various other industry organizations. Follow him on Twitter: @ScotSchultz