The devel/openmpi port

openmpi-4.1.6 – open source MPI-3.1 implementation

Description

The Open MPI Project is an open source MPI-3.1 implementation that is
developed and maintained by a consortium of academic, research, and
industry partners. Open MPI is therefore able to combine the expertise,
technologies, and resources from all across the High Performance
Computing community in order to build the best MPI library available.
Open MPI offers advantages for system and software vendors, application
developers and computer science researchers.

Features implemented or in short-term development for Open MPI:
- Full MPI-3.1 standards conformance
- Thread safety and concurrency
- Dynamic process spawning
- Network and process fault tolerance
- Support for network heterogeneity
- Single library supports all networks
- Run-time instrumentation
- Many job schedulers supported
- Many OS's supported (32 and 64 bit)
- Production quality software
- High performance on all platforms
- Portable and maintainable
- Tunable by installers and end-users
- Component-based design, documented APIs
- Active, responsive mailing list
- Open source license based on the BSD license
WWW: https://www.open-mpi.org/

Readme

+-----------------------------------------------------------------------
| Customizing ${PKGSTEM} execution on OpenBSD
+-----------------------------------------------------------------------

The OpenMPI runtime is controlled by numerous values specified
on the command line or with environment variables. See mpirun(1) and
ompi_info(1).  Example**:

    $ export PMIX_MCA_gds=hash
    $ mpirun -np 2 -H localhost:2 \
             -mca btl tcp,self \
             -mca mpi_yield_when_idle 1 -- \
             ./mpitest
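
The parameters and components actually available in this package can
be inspected with ompi_info; for instance (illustrative invocations,
not an exhaustive list):

    $ ompi_info --param btl all      # parameters of the BTL components
    $ ompi_info | grep -i btl        # which BTL components were built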

These values (at least) are useful:

   OMPI_MCA_btl=self,tcp,vader
      Avoid "vader" when launching many processes per node on a
      machine whose swap file lives on NFS, or give vader a local
      backing store instead.  (BTL is the byte transfer layer;
      "vader" is the shared-memory communication module.)

   OMPI_MCA_mpi_yield_when_idle=1
      Setting this to 1 may improve throughput when launching
      many processes per node.

   PMIX_MCA_gds=hash
      This is the one gds (general data service) that works on OpenBSD.

   OMPI_MCA_io=romio321
      This is the preferred IO component on OpenBSD.
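
The same settings can also be exported once in the environment before
launching; a minimal sketch, using the values from the list above
(adjust for your site):

    $ export OMPI_MCA_btl=self,tcp   # add ",vader" if shared memory is safe here
    $ export OMPI_MCA_mpi_yield_when_idle=1
    $ export PMIX_MCA_gds=hash
    $ export OMPI_MCA_io=romio321
    $ mpirun -np 2 -H localhost:2 ./mpitest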

**Example code taken from:
https://hpcc.usc.edu/support/documentation/examples-of-mpi-programs/
(now only available via the Wayback Machine)

Compile with:

	$ mpicc -o mpitest mpitest.c

/* Adapted from mpihello.f by drs */

#include <stdio.h>	/* printf */
#include <unistd.h>	/* gethostname */
#include <mpi.h>	/* MPI_Init, MPI_Comm_rank, MPI_Finalize */

int main(int argc, char **argv)
{
	int rank;
	char hostname[256];

	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);
	gethostname(hostname, 255);

	printf("Hello world!  I am process number: %d on host %s\n",
		rank, hostname);

	MPI_Finalize();

	return 0;
}

Maintainer

Martin Reindl

Only for arches

aarch64 alpha amd64 arm hppa i386 mips64 mips64el powerpc powerpc64 riscv64 sparc64

Categories

devel

Library dependencies

Build dependencies

Files
