LinuxQuestions.org
Old 10-02-2007, 03:44 AM   #1
gamekiller
LQ Newbie
 
Registered: Jun 2007
Posts: 7

Rep: Reputation: 0
Problem compiling MPI program (not while installing)


Hi everyone. Two days ago I successfully installed Open MPI and LAM on my own laptop, and no errors were shown. Then I tried to compile my first MPI program with the following command:

Code:
mpicc -o first first.c -lmpi
After that, I got this error:

Code:
/tmp/ccM49aqV.o: In function `main':
first.c: (.text+0x31): undefined reference to `lam_mpi_comm_world'
first.c: (.text+0x44): undefined reference to `lam_mpi_comm_world'
first.c: (.text+0xb7): undefined reference to `lam_mpi_comm_world'
first.c: (.text+0xc7): undefined reference to `lam_mpi_sum'
first.c: (.text+0xcf): undefined reference to `lam_mpi_int'
first.c: (.text+0x10b): undefined reference to `lam_mpi_comm_world'
first.c: (.text+0x11b): undefined reference to `lam_mpi_int'
first.c: (.text+0x157): undefined reference to `lam_mpi_comm_world'
first.c: (.text+0x15f): undefined reference to `lam_mpi_sum'
first.c: (.text+0x167): undefined reference to `lam_mpi_int'
collect2: ld returned 1 exit status
Then I compiled it on my faculty's server, where it shows no errors and runs fine. Why does this happen? Is it because of my laptop?

Below is the code that I am trying to compile; sorry for the long source code.


Code:
/*This program demonstrates the use of MPI_Bcast, MPI_Reduce and MPI_Allreduce */

#include <stdio.h>
#include <mpi.h>

int main (int argc, char *argv[]) {
  int nprocs;                   /* number of processes */
  int rank;                     /* the unique identification of this process */

  int i, mysum, sum, imin, imax;
  const int N = 60;

  /* initialize the MPI environment: */

  MPI_Init (&argc, &argv);      /* note: use argc and argv yourself only
                                 * after this call */

  MPI_Comm_rank (MPI_COMM_WORLD, &rank);        /* rank will be different for
                                                 * all processes */
  MPI_Comm_size (MPI_COMM_WORLD, &nprocs);      /* nprocs will be the same for
                                                 * all processes */

  /* Because a number of processes are executing this code, the next
   * statement will cause a number of lines to be printed, one line for
   * each process. In general this is not the way to print in an MPI
   * program, because the processes print independently and lines can get
   * mixed up. In a real program the printing is usually done by one
   * process, most of the time process 0. For debugging and demonstration
   * purposes, however, printing by all processes is of great value. */

  printf ("Hello, this is process %d of a total of %d\n", rank, nprocs);

  /* Here follows the first example of a real parallel program:
   * we want to compute the sum of the first 60 integers. Each process
   * performs part of the work: for example, with 4 processes, process 0
   * sums the numbers 1 to 15, process 1 the numbers 16 to 30, and so on.
   * After this, the partial sums are summed up and the result is
   * printed. */
  /* For simplicity we assume that N is divisible by the number of
   * processes. A generalization of this program is left as an exercise
   * for the reader. */

  /* which numbers are to be added in this process: */
  imin = (N / nprocs) * rank + 1;
  imax = imin + N / nprocs -1;

  mysum = 0;                    /* mysum will contain the sum of the numbers
                                 * imin .. imax */
  for (i = imin; i <= imax; i++)
    mysum += i;

  /* Now, each process has its own partial sum: mysum.
   * We want the process with rank 0 to receive the sum of the
   * partial sums. MPI_Reduce is used for this: */

  MPI_Reduce (&mysum,           /* the partial sum */
              &sum,             /* the total sum */
              1,                /* the number of elements in sum, in this
                                 * case one int */
              MPI_INT,          /* the type of sum: int. Other types are for
                                 * example:
                                 * MPI_DOUBLE
                                   MPI_FLOAT    */
              MPI_SUM,          /* we want to sum the partial sums. Other
                                 * operations are for example:
                                 * MPI_MAX    find the maximum value
                                 * MPI_MIN    find the minimum value
                                 * MPI_PROD   find the product
                                 */
              0,                /* the rank of the process that is going
                                   to receive the sum. */
              MPI_COMM_WORLD    /* the communicator */
    );

  /* Now the sum of all mysum's is available on process 0 in variable sum */

  /* let's print it: */
  if (rank == 0)
    printf ("process zero reports: sum is %d\n", sum);

  /* Let's try to communicate the value of sum to all processes using
   * MPI_Bcast, a broadcasting routine: */

  MPI_Bcast (&sum,              /* the value to broadcast */
             1,                 /* number of elements in sum */
             MPI_INT,           /* the datatype of sum */
             0,                 /* the rank of the process that contains the
                                   value to broadcast */
             MPI_COMM_WORLD     /* the communicator */
    );

  /* Note that ALL processes execute the same call to MPI_Bcast.
   * However, in this case the effect is that process 0 is sending and
   * processes 1..nprocs-1 are receiving. */

  /* print the results: */
  printf ("process %d reports: mysum is %d and sum is %d\n", rank, mysum,
          sum);

  /* In practice, one would use MPI_Allreduce instead of MPI_Reduce
   * followed by MPI_Bcast. MPI_Allreduce is more efficient and leads to
   * a shorter program: */

  MPI_Allreduce (&mysum, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

  /* Note that MPI_Allreduce is even simpler to use than MPI_Reduce:
   * because the result is sent to every process, there is no need to
   * specify which process the answer must go to. */

  /* Let's print the result again: */
  printf ("process %d reports again: mysum is %d and sum is %d\n",
          rank, mysum, sum);

  MPI_Finalize ();              /* end of MPI. Do not forget this call: if an MPI
                                 * program ends without calling MPI_Finalize(),
                                 * strange things can happen ... */
  return 0;
}

Last edited by gamekiller; 10-02-2007 at 11:01 PM.
 
Old 10-02-2007, 04:55 PM   #2
jsquyres
LQ Newbie
 
Registered: Sep 2007
Posts: 8

Rep: Reputation: 0
You should probably post such questions to the LAM and/or Open MPI support mailing lists; I only noticed this question via Google Alerts...

You are mixing your MPI implementations. You must install LAM into one tree and Open MPI into another. If you attempt to install them into the same tree, there are files that have the same name (e.g., libmpi) that will conflict with each other and weirdness like what you describe can/will occur.

Install them into different trees and simply set your PATH to point to the one that you want to use (Open MPI defaults to shared libraries, so you'll need to set LD_LIBRARY_PATH as well; LAM defaults to static libraries, so you likely won't need to set it).
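To illustrate the advice above (the prefixes here are made-up examples, not paths from the original posts), one common approach is to configure each implementation into its own prefix and then select one per shell:

```shell
# Build each implementation into its own tree (example prefixes):
#   in the Open MPI source directory:
#     ./configure --prefix=$HOME/mpi/openmpi && make all install
#   in the LAM/MPI source directory:
#     ./configure --prefix=$HOME/mpi/lam && make all install

# Then pick one implementation for the current shell session:
export PATH="$HOME/mpi/openmpi/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/mpi/openmpi/lib:$LD_LIBRARY_PATH"

# Verify which wrapper compiler will actually be used:
which mpicc
```

The wrapper compilers also support a `-showme` option that prints the underlying compile/link line, which helps confirm which libmpi is actually being linked.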
 
Old 10-02-2007, 05:49 PM   #3
matthewg42
Senior Member
 
Registered: Oct 2003
Location: UK
Distribution: Kubuntu 12.10 (using awesome wm though)
Posts: 3,530

Rep: Reputation: 63
No idea about your problem, but you might want to know that instead of quote tags there is a code tag, which preserves formatting better (and uses a fixed-width font). For example:
Code:
#include <stdio.h>

int main(int argc, char** argv)
{
    printf("hello world\n");
    return 0;
}
 
Old 10-02-2007, 11:17 PM   #4
gamekiller
LQ Newbie
 
Registered: Jun 2007
Posts: 7

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by jsquyres View Post
You should probably post such questions to the LAM and/or Open MPI support mailing lists; I only noticed this question via Google Alerts...

You are mixing your MPI implementations. You must install LAM into one tree and Open MPI into another. If you attempt to install them into the same tree, there are files that have the same name (e.g., libmpi) that will conflict with each other and weirdness like what you describe can/will occur.

Install them into different trees and simply set your PATH to point to the one that you want to use (Open MPI defaults to shared libraries, so you'll need to set LD_LIBRARY_PATH as well; LAM defaults to static libraries, so you likely won't need to set it).
Yeah, you're right, I think I should post to the LAM mailing list; I'll post there right after this. So what should I do now? Remove all those things and reinstall? How should I remove them? And if you have another option, please tell me. I'd appreciate it.

Last edited by gamekiller; 10-02-2007 at 11:19 PM.
 
  

