Intel MPI Library

Making applications perform better on Intel® architecture-based clusters, with the flexibility of multiple fabrics

  • Scalability Up To 150K Processes
  • Sustained Scalability – Low Latencies, Higher Bandwidth & Increased Processes
  • Interconnect Independence & Flexible Runtime Fabric Selection

Download an Intel MPI Library demo


Deliver Flexible, Efficient, and Scalable Cluster Messaging

Intel® MPI Library 5.0 focuses on making applications perform better on Intel® architecture-based clusters, implementing the high-performance Message Passing Interface 3.0 specification on multiple fabrics. It enables you to quickly deliver maximum end-user performance, even if you change or upgrade to new interconnects, without requiring changes to the software or operating environment.

Use this high-performance MPI library to develop applications that can run on multiple cluster interconnects chosen by the user at runtime. Benefit from a free runtime environment kit for products developed with the Intel® MPI Library. Get excellent performance for enterprise, divisional, departmental, workgroup, and personal high-performance computing.




“Fast and accurate state-of-the-art general-purpose CFD solvers are the focus at S & I Engineering Solutions Pvt. Ltd. Scalability and efficiency are key to us when it comes to our choice and use of MPI libraries. The Intel® MPI Library has enabled us to scale to over 10k cores with high efficiency and performance.”
Nikhil Vijay Shende, Director,
S & I Engineering Solutions, Pvt. Ltd.




Scalability Up To 150K Processes

  • Scaling verified up to 150k processes
  • Thread safety allows you to trace hybrid multithreaded MPI applications for optimal performance on multicore and many-core Intel® architecture.
  • Improved start-up scalability through the mpiexec.hydra process manager



Sustained Scalability – Low Latencies, Higher Bandwidth & Increased Processes

  • Low-latency MPI implementation, up to 2 times as fast as alternative MPI libraries
  • Optimized shared-memory dynamic connection mode for large SMP nodes
  • Increased performance through improved DAPL, OFA, and TMI fabric support
  • Accelerate your applications using the enhanced MPI tuning utility


Interconnect Independence & Flexible Runtime Fabric Selection

  • Get high-performance interconnects, including InfiniBand* and Myrinet*, as well as TCP, shared memory, and others
  • Efficiently work through the Direct Access Programming Library (DAPL*), Open Fabrics Association (OFA*), and Tag Matching Interface (TMI*), making it easy for you to test and run applications on a variety of network fabrics.
  • Optimizations at all levels of cluster fabrics: from shared memory through Ethernet and RDMA-based fabrics to tag-matching interconnects


What’s New

MPI-3.0 Standard Support: The MPI-3.0 standard is the next major evolution of the Message Passing Interface. Significant changes to one-sided remote memory access (RMA) communications, the addition of non-blocking collective operations, and support for large-count messages greater than 2 GB enhance both usability and performance. All are now available in Intel® MPI Library 5.0.

Binary Compatibility: Intel® MPI Library 5.0 offers binary compatibility with existing MPI-1.x and MPI-2.x applications. Even if you are not ready to move to the new standard, you can still take advantage of the latest Intel® MPI Library 5.0 performance improvements without recompiling. Furthermore, the Intel® MPI Library team is an active collaborator in the MPICH ABI Compatibility Initiative, ensuring that any MPICH-compiled code can use our runtimes.

Support for Mixed Operating Systems: Run a single MPI job on a cluster with mixed operating systems (Windows* and Linux*) under the Hydra process manager, and get more flexibility in job deployment.

Latest Processor Support: Intel consistently offers the first set of tools to take advantage of the latest performance enhancements in the newest Intel products, while preserving compatibility with older Intel and compatible processors. New support includes the microarchitectures code-named Haswell and Ivy Bridge, the Intel® Many Integrated Core architecture, and the AVX2, TSX, FMA3, and AVX-512 instruction extensions.




Implementing the high-performance MPI-3.0 specification on multiple fabrics, Intel® MPI Library 5.0 for Windows* and Linux* focuses on making applications perform better on Intel® architecture-based clusters. It enables you to quickly deliver maximum end-user performance, even if you change or upgrade to new interconnects, without requiring major modifications to the software or operating environment. Intel also provides a free runtime environment kit for products developed with the Intel® MPI Library.


An optimized shared-memory path for multicore platforms delivers higher communication throughput and lower latencies, and the native InfiniBand* interface (OFED verbs) likewise supports lower latencies. Multi-rail capability provides higher bandwidth and increased interprocess communication rates, while Tag Matching Interface (TMI) support delivers higher performance on Intel® True Scale, QLogic* PSM, and Myricom* MX solutions.

Intel® MPI Library 5.0 Supports Multiple Hardware Fabrics

Whether you need to run over TCP sockets, shared memory, or one of many Remote Direct Memory Access (RDMA) based interconnects, including InfiniBand*, Intel® MPI Library 5.0 covers all your configurations by providing an accelerated, universal, multi-fabric layer for fast interconnects via the Direct Access Programming Library (DAPL*) or Open Fabrics Association (OFA*) methodology. Develop MPI code independent of the fabric, knowing it will run efficiently on whatever network the user chooses at runtime.

Additionally, Intel® MPI Library 5.0 provides new levels of performance and flexibility through improved interconnect support for Intel® True Scale, Myrinet* MX, and QLogic* PSM interfaces, faster on-node messaging, and an application tuning capability that adjusts to the cluster architecture and application structure.

Intel® MPI Library 5.0 establishes connections dynamically and only when needed, which reduces the memory footprint. It also automatically chooses the fastest transport available. Memory requirements are further reduced by several methods, including a two-phase communication buffer enlargement capability that allocates only the memory space actually required.

Purchase Options

Several suites are available that combine the tools to build, verify, and tune your application. The products covered in this product brief are highlighted in blue. Named-user and multi-user licenses, along with volume, academic, and student discounts, are available.


Technical Specifications

Processor support: Validated for use with multiple generations of Intel® and compatible processors, including but not limited to: 2nd generation Intel® Core™ processors, Intel® Core™2 processors, Intel® Core™ processors, Intel® Xeon® processors, and Intel® Xeon Phi™ coprocessors

Operating systems: Windows* and Linux*

Programming languages: Natively supports C, C++, and Fortran development

System requirements: Refer to the product documentation for details on hardware and software requirements.

Support: A free Runtime Environment Kit is available to run applications developed using the Intel® MPI Library. All product updates, Intel® Premier Support services, and Intel® Support Forums are included for one year. Intel® Premier Support gives you confidential support, technical notes, application notes, and the latest documentation. Join the Intel® Support Forums community to learn, contribute, or just browse!


Videos to help you get started

The Next Steps
