Discussion: [eigen] Reusing a sparse matrix for SparseLU
Matthieu Brucher
2018-12-02 22:05:28 UTC
Hi all,

I want to reuse a sparse matrix structure and a solver for faster linear
solves:

Eigen::SparseMatrix<double, Eigen::ColMajor> A;
Eigen::SparseLU<Eigen::SparseMatrix<double, Eigen::ColMajor>,
Eigen::COLAMDOrdering<Eigen::Index> > solver;

I set A at the beginning with 0 entries and ask the solver to analyze the
pattern.
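Roughly, the setup phase looks like this (a minimal sketch; the size and the triplet list just stand in for my real pattern):

// Fixed sparsity pattern, values initialised to zero for now.
const int n = 16;                                  // placeholder size
std::vector<Eigen::Triplet<double>> pattern;
pattern.emplace_back(0, 0, 0.0);                   // one triplet per structural entry
// ... remaining entries of the pattern ...
A.resize(n, n);
A.setFromTriplets(pattern.begin(), pattern.end());
A.makeCompressed();

solver.analyzePattern(A);                          // pattern analysed once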

Now, I'd like to reuse the matrix instead of creating a new one, as I'm
doing real-time computations and don't want any allocation there.

Can I use the comma operator? Or is there another operator that I can use
to update the matrix's existing entries? I tried to find information online
but couldn't find the best practice for this.

Cheers,

Matthieu
--
Quantitative analyst, Ph.D.
Blog: http://blog.audio-tk.com/
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Alberto Luaces
2018-12-03 10:26:19 UTC
Now, I'd like to reuse the matrix instead of creating a new one, as I'm doing real-time computations and don't want any allocation there.
Can I use the comma operator? Or is there another operator that I can use to update the matrix's existing entries? I tried to find information online but couldn't find the best practice for this.
Hi, as per

https://stackoverflow.com/questions/40593324/how-to-avoid-memory-allocations-in-sparse-expressions-with-eigen

you can use m.coeffs() to update the non-zero elements of the matrix as
a large array.
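For instance, something along these lines (a rough sketch, assuming A is already compressed and `new_values` points to one value per stored non-zero, in the matrix's internal column-major storage order):

// overwrite all stored values in place; the sparsity pattern is untouched
A.coeffs() = Eigen::Map<const Eigen::ArrayXd>(new_values, A.nonZeros());

// or one by one, if that fits your update better
for (Eigen::Index k = 0; k < A.nonZeros(); ++k)
  A.coeffs()[k] = new_values[k];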

--
Alberto
Matthieu Brucher
2018-12-03 10:52:53 UTC
Hi Alberto,

Thanks for the answer. Unfortunately, this still makes me create another
array; I'd rather populate the elements directly.

Cheers,

Matthieu
Post by Matthieu Brucher
Post by Matthieu Brucher
Now, I'd like to reuse the matrix instead of creating a new one, as I'm
doing real-time computations and don't want any allocation there.
Post by Matthieu Brucher
Can I use the comma operator? Or is there another operator that I can
use to update the matrix's existing entries? I tried to find information
online but couldn't find the best practice for this.
Hi, as per
https://stackoverflow.com/questions/40593324/how-to-avoid-memory-allocations-in-sparse-expressions-with-eigen
you can use m.coeffs() to update the non-zero elements of the matrix as
a large array.
--
Alberto
--
Quantitative analyst, Ph.D.
Blog: http://blog.audio-tk.com/
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Alberto Luaces
2018-12-03 11:25:03 UTC
Post by Matthieu Brucher
Hi Alberto,
Thanks for the answer. Unfortunately, this still makes me create
another array,
How so?
Post by Matthieu Brucher
I'd rather populate the elements directly.
You can do precisely that, by using the coeffs() vector.
Matthieu Brucher
2018-12-03 12:31:13 UTC
Post by Alberto Luaces
Post by Matthieu Brucher
Hi Alberto,
Thanks for the answer. Unfortunately, this still makes me create
another array,
How so?
I need to create an array for the coeffs, don't I? It seems like I could use
valuePtr() as well.
But the speed is very bad and profiles indicate that there may be mallocs
inside the factorisation or the solve :/
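What I had in mind with valuePtr() is roughly this (untested sketch; `next_value(k)` is just a placeholder for however I compute the k-th coefficient):

// write straight into the matrix's value storage, no temporary array
double* vals = A.valuePtr();           // A assumed compressed
for (Eigen::Index k = 0; k < A.nonZeros(); ++k)
  vals[k] = next_value(k);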
Post by Alberto Luaces
Post by Matthieu Brucher
I'd rather populate the elements directly.
You can do precisely that, by using the coeffs() vector.
Oh, you mean I could do coeffs() << v0, v1, v2...?
--
Quantitative analyst, Ph.D.
Blog: http://blog.audio-tk.com/
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Gael Guennebaud
2018-12-03 14:37:26 UTC
hm... if you're thinking about using coeffs() << v0, v1, v2... then your
matrix is likely very small. What are its typical size and number of
non-zeros? If it's too small and not sparse enough, you're better off with a
dense matrix/solver.

gael
Post by Matthieu Brucher
Post by Alberto Luaces
Post by Matthieu Brucher
Hi Alberto,
Thanks for the answer. Unfortunately, this still makes me create
another array,
How so?
I need to create an array for the coeffs, don't I? Seems like I could use
valuePtr as well.
But the speed is very bad and profiles indicate that there may be mallocs
inside the factorisation or the solve :/
Post by Alberto Luaces
Post by Matthieu Brucher
I'd rather populate the elements directly.
You can do precisely that, by using the coeffs() vector.
Oh, you mean I could do coeffs() << v0, v1, v2...?
--
Quantitative analyst, Ph.D.
Blog: http://blog.audio-tk.com/
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Matthieu Brucher
2018-12-03 15:51:05 UTC
Hi Gael,

The typical size is between 10 and 50, with between 3 and 5 entries per row, but
it has to run fast, without memory allocations.
I tried SparseLU, but indeed it's still twice as slow as a dense Householder QR
at the lower end of my range. The "nice" thing is that I can generate the
structure offline (it's for a JIT PDE solver) and do a lot of the analysis
up front (column permutations and all that stuff).
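Concretely, the split I'm after is roughly this (a sketch; `running`, `update_values()`, `b` and `x` are placeholders for the real loop condition, value update, right-hand side and solution):

// offline / JIT stage: fixed pattern, analysed once
solver.analyzePattern(A);

// real-time stage: only the values change
while (running) {
  update_values(A);          // e.g. through coeffs() or valuePtr()
  solver.factorize(A);       // numerical factorization on the fixed pattern
  x = solver.solve(b);
}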

Cheers,

Matthieu
Post by Gael Guennebaud
hm... if you're thinking about using coeffs() << v0, v1, v2... then your
matrix is likely very small. What is its typical size and number of
non-zeros? If too small and not sparse enough better use a dense
matrix/solver.
gael
On Mon, Dec 3, 2018 at 2:28 PM, Matthieu Brucher wrote:
Post by Matthieu Brucher
Post by Alberto Luaces
Post by Matthieu Brucher
Hi Alberto,
Thanks for the answer. Unfortunately, this still makes me create
another array,
How so?
I need to create an array for the coeffs, don't I? Seems like I could use
valuePtr as well.
But the speed is very bad and profiles indicate that there may be mallocs
inside the factorisation or the solve :/
Post by Alberto Luaces
Post by Matthieu Brucher
I'd rather populate the elements directly.
You can do precisely that, by using the coeffs() vector.
Oh, you mean I could do coeffs() << v0, v1, v2...?
--
Quantitative analyst, Ph.D.
Blog: http://blog.audio-tk.com/
LinkedIn: http://www.linkedin.com/in/matthieubrucher
--
Quantitative analyst, Ph.D.
Blog: http://blog.audio-tk.com/
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Gael Guennebaud
2018-12-03 11:59:24 UTC
Hi,

I'm thinking about 3 different ways:

1 - the easiest:

for(...) A.coeffRef(i,j) = new_val(i,j);

but it costs 1 binary search per element

2 - via an iterator over the non-zeros:

for (int k = 0; k < A.outerSize(); ++k)
  for (SparseMatrix<double>::InnerIterator it(A, k); it; ++it)
    it.valueRef() = new_val(it.row(), it.col());

3 - if you don't need the row/col indices:

for(...) A.coeffs()[k] = new_val;

In terms of speed, options 2 and 3 are equivalent. The most appropriate
solution then depends on how, and in which order, you can provide the
new values.
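For reference, here is option 2 in a small self-contained program, reusing the analyzed pattern across factorizations (the 3x3 pattern and the value updates are arbitrary placeholders):

#include <Eigen/Sparse>
#include <iostream>
#include <vector>

int main()
{
  using SpMat = Eigen::SparseMatrix<double, Eigen::ColMajor>;

  // toy pattern; one triplet per structural non-zero
  std::vector<Eigen::Triplet<double>> trip = {
    {0, 0, 2.0}, {1, 1, 3.0}, {2, 2, 4.0}, {0, 2, 1.0}
  };
  SpMat A(3, 3);
  A.setFromTriplets(trip.begin(), trip.end());
  A.makeCompressed();

  Eigen::SparseLU<SpMat, Eigen::COLAMDOrdering<int>> solver;
  solver.analyzePattern(A);                 // pattern analysed once

  Eigen::VectorXd b = Eigen::VectorXd::Ones(3);

  for (int step = 0; step < 2; ++step) {
    // option 2: rewrite the stored values in place, pattern untouched
    for (int k = 0; k < A.outerSize(); ++k)
      for (SpMat::InnerIterator it(A, k); it; ++it)
        it.valueRef() += 0.1;               // placeholder update

    solver.factorize(A);                    // numerical factorization only
    Eigen::VectorXd x = solver.solve(b);
    std::cout << x.transpose() << "\n";
  }
}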

gael
Post by Matthieu Brucher
Hi all,
I want to reuse a sparse matrix structure and a solver for faster linear solves:
Eigen::SparseMatrix<double, Eigen::ColMajor> A;
Eigen::SparseLU<Eigen::SparseMatrix<double, Eigen::ColMajor>,
Eigen::COLAMDOrdering<Eigen::Index> > solver;
I set A at the beginning with 0 entries and ask the solver to analyze the
pattern.
Now, I'd like to reuse the matrix instead of creating a new one, as I'm
doing real-time computations and don't want any allocation there.
Can I use the comma operator? Or is there another operator that I can use
to update the matrix's existing entries? I tried to find information online
but couldn't find the best practice for this.
Cheers,
Matthieu
--
Quantitative analyst, Ph.D.
Blog: http://blog.audio-tk.com/
LinkedIn: http://www.linkedin.com/in/matthieubrucher