I don't think you will be able to find much for general sparse matrices, because the algorithm would work very differently for different types of sparsity. For example, you would code an algorithm very differently to deal with diagonal/tri-diagonal matrices versus matrices constructed as the disjoint union of submatrices.
For a general sparse matrix (one with many zeros, placed uniformly at random), I would probably store the non-zero positions as bitmasks, and then for each element in the answer matrix, intersect (AND) the row bitmask from the first matrix with the column bitmask from the second matrix to find out which elements need to be multiplied. You should also consider zero rows in the left matrix and zero columns in the right matrix, as these will lead to zero rows and columns in your output.
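A minimal sketch of the bitmask idea above (the helper names are mine, not from any library): bit k of a row mask is set when that row's k-th entry is non-zero, and ANDing a row mask of the left matrix with a column mask of the right matrix tells you exactly which products to compute.

```python
def row_masks(m):
    # Bit k of each mask is set when row[k] != 0.
    return [sum(1 << k for k, v in enumerate(row) if v != 0) for row in m]

def col_masks(m):
    # Columns of m are rows of its transpose.
    return row_masks(list(zip(*m)))

def sparse_matmul(a, b):
    ra, cb = row_masks(a), col_masks(b)
    n, p = len(a), len(b[0])
    out = [[0] * p for _ in range(n)]
    for i in range(n):
        if ra[i] == 0:            # zero row in a -> zero row in the result
            continue
        for j in range(p):
            common = ra[i] & cb[j]  # positions where both factors are non-zero
            k = 0
            while common:
                if common & 1:
                    out[i][j] += a[i][k] * b[k][j]
                common >>= 1
                k += 1
    return out
```

Python integers are arbitrary-precision, so one mask per row/column works for any matrix size; in C you would use an array of machine words and a popcount/ctz loop instead.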
Only you know the properties of the matrices you are working with, so write a few different algorithms and generate lots and lots of random matrices with the properties yours will have, and see which methods are faster.
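For the benchmarking step, something as simple as this works (a hypothetical harness; `naive_matmul` stands in for whichever candidate algorithms you write):

```python
import random
import timeit

def random_sparse(n, density=0.05):
    # n x n matrix where roughly `density` of the entries are non-zero.
    return [[random.random() if random.random() < density else 0
             for _ in range(n)] for _ in range(n)]

def naive_matmul(a, b):
    # Baseline to compare the sparse-aware versions against.
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def bench(fn, n=100, density=0.05, reps=3):
    a, b = random_sparse(n, density), random_sparse(n, density)
    # Best of `reps` runs, to reduce timing noise.
    return min(timeit.repeat(lambda: fn(a, b), number=1, repeat=reps))
```

Vary `n` and `density` to match your real workload; a method that wins at 1% density can easily lose at 20%.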
If you don't want to do the implementation yourself, then there is some LU factorisation of sparse matrices in GMPLIB (I think), and there is also a lot of stuff in NAG (obviously), but it is not free.