Programming - This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
void *cal(void *t)
{
    printf("i am in cal and my id is %d\n", pthread_self());
    /* ... */
}
/* ... */
printf("enter the element for a[%d][%d] ", i, j);
/* ... */
printf("enter the element for b[%d][%d] ", i, j);
/* ... */
printf("A mat is\n");
printf("B mat is\n");
printf("C mat is\n");
And here is the output I am getting:
[root@localhost h]# ./mat
enter the element for a 1
enter the element for a 2
enter the element for a 3
enter the element for a 4
enter the element for b 1
enter the element for b 2
enter the element for b 3
enter the element for b 4
i am in cal and my id is -1208259696
i am in cal and my id is -1218749552
i am in cal and my id is -1229239408
i am in cal and my id is -1239729264
A mat is
B mat is
C mat is
I am new to thread programming. Please, can anyone help me debug the program, or guide me on where I am going wrong?
If you need gol_row and gol_col to be system-wide globals, then you need locking and signaling to synchronize the threads, in order to get the correct values out of them.
I infer from your code that you expect your thread to run immediately after you start it, before your main code continues. As you can see, this is not happening. I also see that you have provided and commented out some mutex statements - presumably because they were not working the way you intended.
You need to set up a mutex and a condition variable as globals:
Now, your events will occur in the order you want. Of course, this renders your multi-threading trivial and moot. But then, clearly your example is not optimum for multi-threading. Presumably you have something more significant in mind, once you get the threading figured out.
You are not indexing through the rows and columns of your product matrix. You simply initialized those to 1 and 1 and left them there.
You may wish to consult reference material on linear algebra, vectors, and matrices. The product of two compatible matrices is not computed by simply multiplying the elements in corresponding positions and assigning each product to the corresponding element of the result. Two matrices are compatible for multiplication when the number of columns in the first matrix equals the number of rows in the second matrix. If this condition is satisfied, the product matrix has as many rows as the first matrix and as many columns as the second. The element of the product M_Product at row i, column j is the sum over k of A[i][k] * B[k][j]; that is, the dot product of row i of the first matrix with column j of the second.
Matrix multiplication is not commutative except by coincidence. That is, A X B is not necessarily equal to B X A.
In your example, with

matrix A = B =
| 1 2 |
| 3 4 |

A X B =
|  7 10 |
| 15 22 |

unless I made an error in my evaluation arithmetic.
// for every row of product
for( Product_Row = 0; Product_Row < nRows_A; Product_Row++ )
{
    // for every column in the row of the product
    for( Product_Column = 0; Product_Column < nColumns_B; Product_Column++ )
    {
        // initialize product element to 0
        M_Product[Product_Row][Product_Column] = 0;
        // for every element of the row,column vectors
        for( nElement = 0; nElement < nColumns_A; nElement++ )
        {
            // add product of element of first matrix row
            // times second matrix column
            M_Product[Product_Row][Product_Column] +=
                M_A[Product_Row][nElement] * M_B[nElement][Product_Column];
        }
    } // for every column in the row of the product
} // for every row of product
If you are writing a massively multi-threaded matrix multiplication program in order to take advantage of a massively parallel processor architecture, the obvious place to introduce multi-threading would be the inner multiply-accumulate loop that calculates each element of the product.
Last edited by ArthurSittler; 09-02-2010 at 10:04 AM.
Reason: add title, include calculated product, add tags
There isn't much good MIMD* concurrent multiprocessing hardware to be found in the stores. The x86 architecture is not optimal for massively concurrent multiprocessing, no matter how many cores one has to play with.
To make a multithreaded application really run fast and utilize the hardware efficiently, one has to do careful testing/debugging and analysis of the code produced.
*(MIMD - Multiple Instruction streams, Multiple Data streams architecture.)
I have tinkered some with Transputer arrays - they really like massively concurrent multiprocessing. Heck, they were built for it. But still, in 99% of applications it is best practice to divide the processing into "chunks" big enough for a core to bite on for a while. Think of it like this: if each element of processing is smaller than the system's rescheduling interval, then the system will spend most of its time and resources rescheduling processes, not doing any actual work.
Try to make a few threads that each take at least a few rescheduling intervals to complete their subtask. Descheduling more than some 1000 times per second on an x86 architecture is wasteful.
Start one work thread per available core. Or possibly two or three threads per core, depending on whether the threads will be waiting for their I/O buffers to fill, or for memory to be paged in every so often. This minimizes the scheduling overhead and leaves more CPU time for the actual processing.
I stumbled into such a problem yesterday, in an application handling large amounts of random-access disk data. I suddenly got a throughput of about 15 megabytes per second - not what I had expected. It turned out that every time a thread allocated a transfer buffer to store results, it was forced to deschedule. I fixed the problem by overallocating memory and letting the paging mechanism fill in the pages best-effort, instead of asking for a buffer every time. Now the data throughput in the application is around 2 gigabytes per second, which is quite satisfying.
So, in multiprocessing, the old saying still applies: "Keep it simple, stupid". (meaning no offence)
The calculations of the elements of the product matrix would actually work on a SIMD (Single Instruction Multiple Data) multiprocessor machine. All the operations in any particular matrix product are the vector dot product operation on different pairs of vectors all of the same length.
If there is any way to make use of the special multimedia extension instruction set, that might accelerate the operation.
GPUs actually execute such operations very rapidly - at the least, multiplying 4-vectors by 4x4 matrices, and likely multiplying 4x4 matrices by other 4x4 matrices to combine minor-frame transformation matrices into major transform matrices for 3D rendering.
I have never actually implemented any such parallelized operations. I can imagine that either one would use specialized parallelization libraries, a compiler which performs parallelization optimizations, or perhaps some specialized language with facilities to implement parallelized MAC operations.