Discussion:
[SciPy-Dev] ode and odeint callable parameters
Irvin Probst
2016-01-19 15:49:02 UTC
Hi,
maybe my question has been asked a thousand times, but why are the
callable's parameters in ode and odeint reversed?

From scipy.integrate.ode:
http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode
f : callable f(t, y, *f_args)

From scipy.integrate.odeint:
http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.integrate.odeint.html
func : callable(y, t0, ...)

Admittedly one will usually choose ode or odeint depending on what has
to be done, but from an educational point of view this is really annoying.
Say you want to show how to use ode or odeint to simulate something?
You have to define the same dynamics twice:


def f_odeint(y, t):
    return y[1], -np.sin(y[0])

def f_ode(t, y):
    return [y[1], -np.sin(y[0])]
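Written out as a runnable sketch (the times, initial condition, and
integrator choice below are my own arbitrary picks), the duplication
looks like this:

```python
import numpy as np
from scipy.integrate import ode, odeint

def f_odeint(y, t):            # odeint wants f(y, t)
    return [y[1], -np.sin(y[0])]

def f_ode(t, y):               # ode wants f(t, y)
    return [y[1], -np.sin(y[0])]

y0 = [1.0, 0.0]
t = np.linspace(0.0, 10.0, 101)

# odeint: one call returns the solution at all requested times
sol_odeint = odeint(f_odeint, y0, t)

# ode: set up an integrator object and step to each output time
r = ode(f_ode).set_integrator('vode')
r.set_initial_value(y0, t[0])
sol_ode = np.array([y0] + [r.integrate(tk) for tk in t[1:]])
```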

Then come the usual questions:
- why do I have to reverse y and t? => Err... you see... because...
- why can't I return a tuple in f_ode as in f_odeint? => see ticket #1187

So I know that reversing the callable's parameters would break half the
code using ode or odeint in the world and that this is out of the
question, but couldn't it be made a bit clearer somewhere in the docs
that the parameters are indeed reversed? Or am I missing some obvious
explanation?

Regards.
Jonathan Stickel
2016-01-19 16:14:14 UTC
I also find this inconsistency to be annoying. Probably the result of
two different contributors who were unaware of each other. One
workaround is to do something like the following:

def dfdt(t, f, *args):  # for ode
    # your dfdt function statements here
    return [...]  # list of derivative values

def dfdt_odeint(f, t, *args):  # swap t and f for odeint
    return dfdt(t, f, *args)

so at least you only need to define your ODE function once. As shown,
it doesn't resolve your tuple issue, but perhaps you could handle that
too (I tend to use lists/arrays rather than tuples when possible).
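Filled in as runnable code, with the pendulum right-hand side from the
original post standing in for the function body (the parameter k is
just an illustrative extra argument):

```python
import numpy as np

def dfdt(t, f, k=1.0):           # signature expected by ode: (t, y, ...)
    return [f[1], -k * np.sin(f[0])]

def dfdt_odeint(f, t, k=1.0):    # swap t and f for odeint
    return dfdt(t, f, k)
```

Both solvers can then be fed the same dynamics, defined only once.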

Perhaps a new keyword flag could be added to odeint to make its
function calls the same as ode's. This would avoid breaking backward
compatibility.

Regards,
Jonathan
_______________________________________________
SciPy-Dev mailing list
https://mail.scipy.org/mailman/listinfo/scipy-dev
Irvin Probst
2016-01-19 16:28:04 UTC
Post by Jonathan Stickel
I also find this inconsistency to be annoying. Probably the result
of two different contributors who were unaware of each other.
That's my guess too
Post by Jonathan Stickel
def dfdt(t, f, *args):  # for ode
    # your dfdt function statements here
    return [...]  # list of derivative values
def dfdt_odeint(f, t, *args):  # swap t and f for odeint
    return dfdt(t, f, *args)
This is indeed a more elegant solution, especially when you have some
complex dynamics to integrate, but it still feels odd when you are
learning scientific computing, IMHO... The average student will see this
as black magic and I'm afraid it won't help them feel confident in what
they are doing. On the other hand, this is my problem and not scipy's...
Post by Jonathan Stickel
Perhaps a new keyword flag could be added to odeint to make its
function calls the same as ode's. This would avoid breaking backward
compatibility.
Are there any statistics/surveys on which of ode and odeint is the
most used? If one has to be modified it would be better to modify the
less used one. Or is there some standard policy to slowly deprecate a
function prototype? That would take longer but at least the fix would
be much cleaner.

Regards.
Benny Malengier
2016-01-19 17:38:27 UTC
Why would you ever want to use one or the other?

LSODA (odeint) is fixed-coefficient BDF, and VODE (ode) is
variable-coefficient BDF, so VODE should improve on LSODA as soon as
there are sharp and frequent time variations. If these are not
present, VODE should be as good as LSODA.

So VODE is supposed to be better at handling a wider array of use
cases, and you should just always use VODE (or, even better, one of
the modern bindings to CVODE) and not select at all (unless you really
have a huge system where fixed-coefficient BDF is better, but then you
should not be using Python anyway; CVODE can run in parallel on an
HPC, has Krylov methods, and will outperform it).
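In scipy terms, always picking VODE is a one-liner; a minimal sketch
(the system, step size, and BDF option below are arbitrary
illustrative choices):

```python
import numpy as np
from scipy.integrate import ode

def rhs(t, y):
    return [y[1], -np.sin(y[0])]

solver = ode(rhs)
solver.set_integrator('vode', method='bdf')   # variable-coefficient BDF
solver.set_initial_value([1.0, 0.0], 0.0)

# step the solution out to t = 10 in chunks of 0.5
while solver.successful() and solver.t < 10.0:
    solver.integrate(solver.t + 0.5)
```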

Benny
Irvin Probst
2016-01-19 18:13:23 UTC
Post by Benny Malengier
Why would you ever want to use one or the other?
Hi

Say, for example, I wish to write code to illustrate when/why ode is
better than odeint: I have to use both of them.
Say I want to show how Euler/RK2/whatever behaves compared to
ode/odeint: I have to use both of them.
Say I'm a student in Scientific Computing 101: I'll think that LSODA
is a brand of junk drink, I'll probably never encounter a dynamics
where ode and odeint yield noticeably different results, and I'll just
use one or the other.


Anyway, what I wish to say is that your arguments are perfectly sound
from a mathematical point of view, but IMHO having two different APIs
for the callables can sometimes be a problem from a CS point of view.

Or else let's say that odeint should never be used and deprecate it.

Regards.
Benny Malengier
2016-01-19 19:37:35 UTC
To reply to your original question: all the original Fortran from the
80s has t, y, so there everything is as you would expect.

Travis himself added the y, t in odeint in the original creation of
integrate, it seems:
https://github.com/scipy/scipy/blame/989318ad70e46915badde1bc6d950914bdfbce5c/Lib/integrate/__odepack.h
You can see there that it was y, t from the start. Perhaps because the
Fortran methods have the main method lsoda(..., y, t, tout, ...), but
the callable function is never like that (e.g.
http://netlib.sandia.gov/alliant/ode/prog/lsoda.f). Travis explicitly
moves arguments around so as to change y, t into the t, y required by
the Fortran rhs.

So I suppose that, for backward compatibility, nobody ever changed the
order to the expected and common t, y. Or nobody serious is using that
code ;-)
The LSODE solver is still popular; I wonder if this is not just
because people don't change things that work. In my own tests with the
implicit versions (LSODI), DDASPK (like VODE) and the modern IDA
seriously outperform LSODI. There are however many use cases, so
probably for some of them it would be better...

Travis is still around, but that code is from 2001; he will not
remember, I'm afraid...


Benny
Irvin Probst
2016-01-20 10:39:31 UTC
Post by Benny Malengier
To reply your original question, all the original fortran from the 80s
has t,y, so there all is ok and as you would expect.
Travis himself added the y,t in odeint in the original creation of
[...]

Thanks Benny for this very thorough answer; I have to admit I was too
lazy to dig into 15 years of commits.
Marcel Oliver
2016-01-20 10:28:59 UTC
Post by Irvin Probst
Hi, maybe my question has been asked a thousand times but why are
the callable's parameters in ode and odeint reversed?
I would like to extend the question: why are there two different
interfaces with different call patterns at all? I understand the
answer will be "for historical reasons", but both interfaces are
annoying enough that it may make sense to think about a uniform ODE
interface and keep the current interfaces only as wrappers for legacy
code.

A few niggles worth fixing up from my point of view:

* Allow arbitrary array-valued vector fields. (I remember there was a
thread about existing code to do this on this list; it would be very
useful to get this functionality into official scipy.)

* integrate.ode seems to have the more "pythonic" interface and a
larger choice of integrators. However, most of the integrators are
not re-entrant, per the warning in the documentation, so the
object-oriented call signature is actually sending a false message. In
addition, in many (if not most) applications, I need the solution at
an array of times, and often need to post-process the entire time
series as an array. Thus, I find the call signature of
integrate.ode extremely annoying and avoid it in favor of odeint
(when the latter works) even though integrate.ode is the more
powerful package.

* Finally, it would make sense to fix up as many of the integrators
as possible (or deprecate them if they don't have a clear advantage
in certain circumstances) to make them re-entrant; that is useful
independent of the question of the best interface.

Best regards,
Marcel
Benny Malengier
2016-01-20 11:21:08 UTC
You should consider using the odes scikit for stiff problems, which
interfaces the modern CVODE implementation and is re-entrant; see this
example (Google Chrome needed for the LaTeX):
https://github.com/bmcage/odes/blob/master/docs/ipython/Simple%20Oscillator.ipynb

The CVODE solver, and the SUNDIALS solvers in general, compare to what
is present in Mathematica (a derivative of SUNDIALS, according to
rumor) and Matlab (same principle as VODE) for stiff problems.

There have never been many maintainers for the ode functionality. I
looked at what would be needed to add odes into scipy, but the recent
work on ode in scipy is not trivial to move (like the complex_ode
method). The work is too large for me to take on, and the API of ode
is too different from the ode in the odes scikit. So you would end up
with odeint (for those wanting that interface with LSODA), ode (for
those wanting a VODE/ZVODE/DOPRI/RK-method approach with the added
features), and odes (for those wanting the modern features in
SUNDIALS: roots, sensitivity, Krylov). That seems worse. In the end I
have a feeling most advanced users with stiff problems just use one of
the Python interfaces to SUNDIALS, be it via octave, assimulo, or the
odes scikit.

Benny
Post by Marcel Oliver
Best regards,
Marcel
_______________________________________________
SciPy-Dev mailing list
https://mail.scipy.org/mailman/listinfo/scipy-dev
Irvin Probst
2016-01-20 11:28:51 UTC
Post by Benny Malengier
There never have been many maintainers for the ode functionality. I
looked at what would be needed to add odes into scipy, but the recent
work on ode in scipy is not trivial to move (like the complex_ode
method). The work is too large for me to consider, the API of ode too
different from ode in odes scikit. So you would end up with odeint
(for those wanting that interface with LSODA), ode (for those wanting
a VODE/ZVODE/DOPRI/RK method ... approach with the added features),
and odes (for those wanting the modern features in sundials (roots,
sensitivity, Krylov)). That seems worse. In the end I have a feeling
most advanced users with stiff problems just use one of the python
interfaces to sundials, be it via octave, assimulo, or odes scikit.
Is that the kind of task for a Google SoC student?
Benny Malengier
2016-01-20 11:39:32 UTC
Post by Irvin Probst
Is that the kind of task for a Google SoC student ?
Working on the ode part was on the SoC list two years ago, but not
last year. If there are mentors among the current scipy contributors,
I assume it can be on the list again.

Benny
Anne Archibald
2016-01-20 13:43:26 UTC
There is periodically discussion of the mess of interfaces and solvers for
ODEs in scipy (see the archives about six months ago, for example). One of
the concerns is that people want to do very different things, so settling
on a good interface is not at all easy. I don't just mean the underlying
algorithms, but also the calling interface. It's easy to support someone
who drops in an RHS and asks for the solution at a hundred predefined
points. It's also clear that a general toolkit shouldn't support my use
case: a 22-dimensional solver with a compiled RHS generated by a symbolic
algebra package, with internal precision switchable between 80 and 128
bits, with a root-finder attached to the output to define stopping places,
that needs to all run with compiled speed. So where along that scale do you
aim scipy's interface? I have my own ideas (see aforementioned archive
thread) but don't have the energy to implement it myself. And anyway, some
people need very different things from their ODE solvers (for example,
solution objects evaluable anywhere).

Anne
Marcel Oliver
2016-01-25 13:29:20 UTC
Thanks for all the comments. I had not been aware of the existence of
scikits odes - the additional capabilities are very nice, but API-wise
it just adds to the mess...

Being unsure what has previously been discussed, I'd just like to
share some random thoughts, as I don't claim to understand the problem
domain completely. Basically, the API should make simple things simple
and natural, and complicated things possible. So I think your
requirements stated above should not be completely out of reach, in
principle, even though high-precision arithmetic would require
completely new backends, and whether one can get close to compiled
speed is probably very problem dependent.

So, here is an unordered list of thoughts:

1. I don't think there is a big difference in practice between the
ode, odes, and odeint interfaces. Personally, I think that

odeint(f, y0, t)

is just fine. But the object-oriented approach of ode/odes is
good, too; however, the various parameters should be controlled by
(keyword) arguments, as a statement like

r.set_initial_value(y0, t0).set_f_params(2.0).set_jac_params(2.0)

is just not pretty.

So I like the odes approach best, with a basic object oriented
structure, but the parameters set as arguments, not via methods.

I'm not sure whether odes allows setting an initial time different
from zero; that is essential. I also dislike that specifying the
solver is mandatory and first in the argument list; I think this is
specialist territory, as many of the existing solvers perform
sufficiently well over a large range of problems, so changing the
solver is more of an exception.

If the interface is object oriented (as in ode or odes), then at
least the default solver should be reentrant, otherwise the
API raises false expectations.

I would also suggest that, independent of the rest of the API, the
final time t can be an array or a scalar; the result is returned as a
vector of shape y0 when t is scalar, and as an array with one
additional axis for time when t is a 1-d array.
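As a sketch of that convention (a hypothetical wrapper over odeint;
the name solve and the t0 keyword are made up for illustration):

```python
import numpy as np
from scipy.integrate import odeint

def solve(f, y0, t, t0=0.0):
    """Return the solution with the shape of y0 for scalar t,
    or with a leading time axis for 1-d array t."""
    scalar = np.isscalar(t)
    t_arr = np.atleast_1d(np.asarray(t, dtype=float))
    # integrate from t0, then drop the initial point from the output
    sol = odeint(f, y0, np.concatenate(([t0], t_arr)))[1:]
    return sol[0] if scalar else sol

# exponential decay y' = -y, y(0) = 1
decay = lambda y, t: [-y[0]]
y_end = solve(decay, [1.0], 1.0)                         # scalar t
y_path = solve(decay, [1.0], np.linspace(0.5, 1.0, 6))   # array t
```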

2. It would be very nice to control output time intervals or exit
times. Mathematica has a fairly rich "WhenEvent" API, although it
seems to me it may also be no more than a fancy wrapper around a
standard solver.

The Mathematica API feels almost a bit too general, but there are
certainly two independent ways triggers could be implemented; both
seem useful to me:

(a) Simple Boolean trigger for "output event" and "termination
event" based on the evaluation of some function of (y,t,p)
which is passed to the solver and which is evaluated after each
time step.

(b) Trigger for "output event" and/or "termination" based on
finding the root of a scalar function of the (y,t,p). This
needs either a separate root finder or a modification to the
implicit solver (the latter may be hard as that would require
nontrivial modification of the embedded solver code).

(odes "rootfn" may be doing just that, but there seems to be no
detailed documentation...)

3. Returning interpolating function objects as, e.g., Mathematica
does seems a bit of overkill to me. It would be easier to set up an
independent general interpolating function library (one possibly even
exists already) and pass a discrete time series to it. It is likely
that by deriving the interpolation function directly from the
interpolation on which the solver is based, one could gain some
efficiency and better control of the interpolation accuracy, but to
me that seems to require a lot of effort for relatively little
improvement.
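A sketch of that separation, interpolating a discrete odeint solution
with scipy.interpolate (the harmonic oscillator is chosen here so the
exact answer cos(t) is known):

```python
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

def rhs(y, t):
    return [y[1], -y[0]]        # harmonic oscillator: y(t) = cos(t)

t = np.linspace(0.0, 10.0, 201)
sol = odeint(rhs, [1.0, 0.0], t)

# dense output as a separate interpolation step
dense = interp1d(t, sol, axis=0, kind='cubic')
y_mid = dense(0.123)            # evaluate anywhere inside [0, 10]
```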

4. odes includes DAE solvers. I wonder if it would not be more
natural to pass the algebraic part of the equation as a separate
keyword argument and let the solver wrapper decide, based on the
presence or absence of this argument, whether to call a DAE
backend.

5. As I wrote before, the API should deal transparently with
d-dimensional vector fields.

Just a question about solver backends: it seems that the SUNDIALS
solvers are particularly strong for large stiff systems which come
from parabolic PDEs? (And of course the ability to propagate
sensitivities; is that made available through odes, too?)
On smaller systems of ODEs, I have found that ode's "dop853" is
extremely accurate and seems capable of dealing with rather stiff
problems, even though I have no clue how it does it. (E.g., for the
Van der Pol oscillator with a large stiffness parameter, where other
explicit solvers either blow up or grind to a halt, dop853 and also
dopri5 do just fine, while Matlab's dopri5 solver fails as expected.
It's a mystery to me.)
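The observation is easy to reproduce; a sketch with a mildly stiff Van
der Pol run (mu, the end time, and the nsteps budget are arbitrary
choices of mine):

```python
import numpy as np
from scipy.integrate import ode

mu = 10.0   # stiffness parameter; explicit solvers start to struggle here

def vdp(t, y):
    return [y[1], mu * (1.0 - y[0] ** 2) * y[1] - y[0]]

# explicit dop853 with a generous internal step budget
r = ode(vdp).set_integrator('dop853', nsteps=100000)
r.set_initial_value([2.0, 0.0], 0.0)
r.integrate(30.0)   # r.y stays on the limit cycle (|y[0]| near 2)
```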

In any case, it would be very nice to develop a clear plan for
cleaning up ODE solver backend in Scipy. Once there is a clear
target, it might be easier to see what is easy to do and what is
missing on the backend side.

Regards,
Marcel
Benny Malengier
2016-01-25 14:22:09 UTC
Post by Marcel Oliver
2. It would be very nice to control output time intervals or exit
times. Mathematica has a fairly rich "WhenEvent" API, although it
seems to me it may also be no more than a fancy wrapper around a
standard solver.
The Mathematica API feels almost a bit too general, but there are
certainly two independent ways triggers could be implemented, both
(a) Simple Boolean trigger for "output event" and "termination
event" based on the evaluation of some function of (y,t,p)
which is passed to the solver and which is evaluated after each
time step.
(b) Trigger for "output event" and/or "termination" based on
finding the root of a scalar function of the (y,t,p). This
needs either a separate root finder or a modification to the
implicit solver (the latter may be hard as that would require
nontrivial modification of the embedded solver code).
(odes "rootfn" may be doing just that, but there seems to be no
detailed documentation...)
Yes, rootfn does that.
A nicer onroot interface to it was added to CVODE by a contributor;
I just extended it to the IDA solver. It is only available through the
solve method of the solvers (the STEP methods just stop); example:
https://github.com/bmcage/odes/blob/master/docs/src/examples/ode/freefall.py
The same with a fixed tstop added in a list of output times:
https://github.com/bmcage/odes/blob/master/docs/src/examples/ode/freefall_tstop.py
Post by Marcel Oliver
3. Returning interpolating function objects as, e.g., Mathematica
does, seems a bit of overkill to me. It would be easier to set up
an independent general interpolating function library (as possibly
even exists already) and pass a discrete time series to it. It is
likely that by deriving the interpolation function directly from
the interpolation on which the solver is based one could gain some
efficiency and give better control of the interpolation accuracy,
but to me that seems to require a lot of effort for relatively
little improvement.
Agreed. Also, just reinitializing the solver and solving again from
the closest start time is an option if for some reason interpolation
error must be avoided. CVODE can actually give you output backward if
you have jumped too far ahead, but that requires you to know where you
want interpolated output immediately after doing a STEP. All very
problem dependent.
Post by Marcel Oliver
4. odes includes DAE solvers. I wonder if it would not be more
natural to pass the algebraic part of the equation as a separate
keyword argument and let the solver wrapper decide, based on the
presence or absence of this argument, whether to call a DAE
backend.
These things all require a lot of boilerplate code. In the end, my
experience is that the documentation of the original solvers is the
best place to learn the details, so keeping variable names the same in
the options is more important than something more pythonic. E.g., the
rootfn above is a bad name, but it fits the original documentation at
https://computation.llnl.gov/casc/sundials/documentation/documentation.html
The CVODE doc is 164 pages; in the end, for problems you want to
optimize, you will grab that documentation to learn which options to
tweak.
About a transparent algebraic part: the backends need a lot of info on
the algebraic part. You also need to structure your variables
specifically if you e.g. want a banded Jacobian. And the calling
sequence of all the functions becomes different, with an extra
parameter in and out (ydot).
Quite a lot of boilerplate that slows your wrapper down will be
needed, I'm afraid.
Post by Marcel Oliver
5. As I wrote before, the API should deal transparently with
d-dimensional vector fields.
I'm not really following. Do you mean unwrap and wrap again like the
complex_ode solver does now in scipy?
http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.complex_ode.html

Writing your own wrapper for your problem does not seem like a big
task to me.
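Indeed, such a wrapper stays small; a sketch flattening a 2-d field
for odeint (a periodic heat equation as a stand-in problem, grid size
and diffusivity arbitrary):

```python
import numpy as np
from scipy.integrate import odeint

def heat_rhs(u_flat, t, shape, alpha=0.1):
    """RHS defined on a 2-d field; odeint only ever sees the flat view."""
    u = u_flat.reshape(shape)
    # periodic 5-point Laplacian via np.roll
    lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
           + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)
    return (alpha * lap).ravel()

shape = (8, 8)
rng = np.random.RandomState(0)
u0 = rng.rand(*shape)
t = np.linspace(0.0, 1.0, 11)
sol = odeint(heat_rhs, u0.ravel(), t, args=(shape,))
u_final = sol[-1].reshape(shape)   # back to the 2-d field
```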
Just a question about solver backends: it seems that the SUNDIALS
solvers are particularly strong for large stiff systems which come
from parabolic PDEs? (And of course the ability to propagate
sensitivities, is that ability made available through odes, too?)
They are used for that too, but it is not the core. Large complex
kinetic systems with different time scales seem to be the largest use
case on the mailing list; see also the first example in the example
doc.
Sensitivities are not present in odes at the moment, but I think some
wrappers have that covered already. Let's say we are asked once or
twice a year to add them.
Post by Marcel Oliver
On smaller system of ODEs, I have found that ode "dop853" is extremely
accurate and seems capable of dealing with rather stiff problems even
though I have no clue how it does it. (E.g., for the Van der Pol
oscillator with large stiffness parameter where other explicit solvers
either blow up or grind to a halt, dop853 and also dopri5 are doing
just fine while Matlab's dopri5 solver fails as expected. It's a
mystery to me.)
Some extra tests seem in order. How does method='adams' in ode fare?
Perhaps you could quickly make an IPython notebook ...
Post by Marcel Oliver
In any case, it would be very nice to develop a clear plan for
cleaning up ODE solver backend in Scipy. Once there is a clear
target, it might be easier to see what is easy to do and what is
missing on the backend side.
Yes. We don't have our own backends; we only wrap backends. In the
end, the backends with continued development will win out. Wrapping
solvers is a tricky business, as it is very hard to keep the design of
the solver from slipping through.
Marcel Oliver
2016-01-26 10:43:26 UTC
Benny Malengier writes:
[...]
Yes rootfn does that. A nicer interface with onroot functionality
to that was added by a contributor to CVODE, I just extended it to
the IDA solver. Only available through the solve method of the
https://github.com/bmcage/odes/blob/master/docs/src/examples/ode/freefall.py
https://github.com/bmcage/odes/blob/master/docs/src/examples/ode/freefall_tstop.py
Thanks, I'll be looking at that!

(Side remark: I tried to compile odes today on a Fedora 23 machine,
but it fails with

gcc: error: /usr/lib/rpm/redhat/redhat-hardened-cc1: No such file or directory
gcc: error: /usr/lib/rpm/redhat/redhat-hardened-cc1: No such file or directory
error: Command "gcc -pthread -Wno-unused-result -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Ibuild/src.linux-x86_64-3.4 -I/usr/lib64/python3.4/site-packages/numpy/core/include -I/usr/include/python3.4m -c build/src.linux-x86_64-3.4/fortranobject.c -o build/temp.linux-x86_64-3.4/build/src.linux-x86_64-3.4/fortranobject.o" failed with exit status 1

Seems there are some hard-coded paths somewhere, did not have time to
track this down other than noting that I get this error, so I may have
done something stupid - don't worry if this is the case.)

[...]
4. odes includes DAE solvers. I wonder if it would not be more
natural to pass the algebraic part of the equation as a separate
keyword argument and let the solver wrapper decide, based on the
presence or absence of this argument, whether to call a DAE
backend.
These things all require a lot of boilerplate code. In the end, my experience
is that the documentation of the original solvers is the best to learn the
details, so keeping variable names the same in options is more important than
something that is more pythonic. Eg, the rootfn above is a bad name, but fits
the original documentation at
https://computation.llnl.gov/casc/sundials/documentation/documentation.html
The CVODE doc is 164 pages; in the end, for problems you want to optimize, you
will grab that documentation to learn which options to tweak.
About a transparent algebraic part: the backends need a lot of info on the
algebraic part. You also need to structure your variables specifically if you
e.g. want a banded Jacobian. And the calling sequence of all functions becomes
different, with an extra parameter in and out (ydot).
Quite a lot of boilerplate that slows your wrapper down will be needed, I'm
afraid.
I see the point. But then one can do this only for the default solver
suite (I assume you'd go for sundials) and if it's necessary to
replace the backend, then the non-default solver should get the
wrapper code to make this transparent.
5. As I wrote before, the API should deal transparently with
d-dimensional vector fields.
I'm not really following. You mean unwrap and wrap again like the
complex_ode solver does now in scipy?
http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.complex_ode.html
Writing your own wrapper for your problem does not seem like a big
task to me.
I have done this many times, but it feels like something the computer
should be doing for me. I am coming from the user perspective, not as
a scipy developer, so I don't want the ugliness in MY code...
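For the complex-valued case, the unwrap/rewrap that complex_ode performs internally can be sketched for odeint in a few lines (a minimal sketch, assuming the odeint argument order):

```python
import numpy as np
from scipy.integrate import odeint

def wrap_complex(f):
    """Adapt a complex-valued RHS f(y, t) to the real-valued form odeint needs."""
    def f_real(yr, t):
        n = yr.size // 2
        y = yr[:n] + 1j * yr[n:]            # rebuild the complex state
        dy = np.asarray(f(y, t))
        return np.concatenate([dy.real, dy.imag])
    return f_real

# Example: dy/dt = i*y with y(0) = 1, so y(t) = exp(i*t) and y(pi) = -1
f = lambda y, t: 1j * y
y0 = np.array([1.0 + 0.0j])
t = np.linspace(0.0, np.pi, 50)
sol = odeint(wrap_complex(f), np.concatenate([y0.real, y0.imag]), t)
y = sol[:, 0] + 1j * sol[:, 1]
```

The d-dimensional case is the same idea with reshape/ravel instead of the real/imag split, which is exactly the boilerplate one would like the library to absorb.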

The other question is that best practice is not clear to me. What do
I need to do to avoid extra copies of the data, which should be
possible in general. A harder problem is probably what to do, e.g., for
systems of reaction-diffusion equations where one would want to keep
the sparse structure. Not sure if this is even possible to address in
a sufficiently general way.
On smaller systems of ODEs, I have found that ode "dop853" is extremely
accurate and seems capable of dealing with rather stiff problems even
though I have no clue how it does it. (E.g., for the Van der Pol
oscillator with large stiffness parameter where other explicit solvers
either blow up or grind to a halt, dop853 and also dopri5 are doing
just fine while Matlab's dopri5 solver fails as expected. It's a
mystery to me.)
Some extra tests seem in order. How does method=adams in ode succeed? Should
you be able to quickly make an ipython notebook ...
I attach a small code snippet. In fact, my memory failed me in that
dop853 indeed complains about insufficient nmax (nsteps in the Python
API). What breaks is the accuracy of the solution, but it's still
remarkably good and can be controlled by nsteps. It's obvious that
one should not solve this kind of problem with dop853, but I am still
amazed that it does not fall flat on its face.

In fact, vode/Adams dies completely (as it should) and vode/BDF gives
lots of warnings and computes a completely wrong solution (BDF schemes
are supposed to work here). lsoda is fine, though.
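The attached snippet is not reproduced in the archive; a minimal reconstruction of the setup (the stiffness parameter mu and the time grid are assumed here, not taken from the original script) might look like:

```python
import numpy as np
from scipy.integrate import ode

mu = 100.0  # assumed stiffness parameter; the original value is not in the archive

def van_der_pol(t, y):
    # stiff Van der Pol oscillator: x'' - mu*(1 - x^2)*x' + x = 0
    return [y[1], mu * (1.0 - y[0]**2) * y[1] - y[0]]

tt = np.linspace(0.0, 2.0 * mu, 100)

r1 = ode(van_der_pol).set_integrator('lsoda', nsteps=10000)
r1.set_initial_value([2.0, 0.0], tt[0])
sol1 = [r1.integrate(T) for T in tt[1:]]
```

Note the list-comprehension call pattern, which is picked apart later in the thread: it never checks r1.successful() between steps.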

Best,
Marcel

PS.: My motivation for looking at the solver API comes from a nascent
book project where the idea is to include computer experiments on
nonlinear dynamical systems done in Scientific Python. Thus, I am
exploring best practice examples that are generic for various problem
domains.
Benny Malengier
2016-01-26 11:03:26 UTC
Permalink
Post by Marcel Oliver
(Side remark: tried to compile odes today on a Fedora 23 machine, but
it fails with
gcc: error: /usr/lib/rpm/redhat/redhat-hardened-cc1: No such file or directory
gcc: error: /usr/lib/rpm/redhat/redhat-hardened-cc1: No such file or directory
error: Command "gcc -pthread -Wno-unused-result
-DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -g -pipe -Wall
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic
-D_GNU_SOURCE -fPIC -fwrapv -fPIC -Ibuild/src.linux-x86_64-3.4
-I/usr/lib64/python3.4/site-packages/numpy/core/include
-I/usr/include/python3.4m -c build/src.linux-x86_64-3.4/fortranobject.c -o
build/temp.linux-x86_64-3.4/build/src.linux-x86_64-3.4/fortranobject.o"
failed with exit status 1
Seems there are some hard-coded paths somewhere, did not have time to
track this down other than noting that I get this error, so I may have
done something stupid - don't worry if this is the case.)
This seems to be a part of numpy/f2py that is crashing ...
That part is there to expose the ddaspk solver in odes. Still, Fortran
compiling should work; I'm using a Debian system myself.
You should check which package provides the spec file sought:
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1

Benny
Benny Malengier
2016-01-26 11:48:14 UTC
Permalink
Post by Benny Malengier
Post by Benny Malengier
Some extra tests seem in order. How does method=adams in ode succeed?
Should
Post by Benny Malengier
you be able to quickly make an ipython notebook ...
I attach a small code snippet. In fact, my memory failed me in that
dop853 indeed complains about insufficient nmax (nsteps in the Python
API). What breaks is the accuracy of the solution, but it's still
remarkably good and can be controlled by nsteps. It's obvious that
one should not solve this kind of problem with dop853, but I am still
amazed that it does not fall flat on its face.
In fact, vode/Adams dies completely (as it should) and vode/BDF gives
lots of warnings and computes a completely wrong solution (BDF schemes
are supposed to work here). lsoda is fine, though.
You forgot
, with_jacobian=True
when doing the BDF method via ode. So change into:

r2 = ode(van_der_pol).set_integrator('vode', method='bdf',
with_jacobian=True)

Normally, as vode is for stiff problems, you set ml or mu to indicate how
banded the Jacobian is. Here you want a full Jacobian, so you should indicate
this; see the docs.
The result then matches lsoda.
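As a side note, the same effect can be obtained by supplying an analytic Jacobian, which vode then uses directly (a sketch; mu, nsteps, and the integration horizon are assumed for illustration):

```python
import numpy as np
from scipy.integrate import ode

mu = 1000.0  # assumed stiffness parameter

def van_der_pol(t, y):
    return [y[1], mu * (1.0 - y[0]**2) * y[1] - y[0]]

def jac(t, y):
    # full 2x2 Jacobian d(rhs)/dy of the Van der Pol system
    return [[0.0, 1.0],
            [-2.0 * mu * y[0] * y[1] - 1.0, mu * (1.0 - y[0]**2)]]

r2 = ode(van_der_pol, jac).set_integrator('vode', method='bdf', nsteps=5000)
r2.set_initial_value([2.0, 0.0], 0.0)
y_end = r2.integrate(500.0)
```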

Can I use your code in an example under the scipy license ? I'll see what
odes does.

Benny
Benny Malengier
2016-01-26 13:49:19 UTC
Permalink
As a follow-up, I adapted your code to add odes, and also cvode versions
via the odes scikit.
Timings on my PC:

Time for dop853: 5.587895
Time for vode/BDF: 0.005166
Time for lsoda: 0.006365
Time for cvode/BDF: 0.006328
Time for cvode/BDF - class: 0.006171
Time for cvode/BDF - cython: 0.003059
Time for lsoda - cython: 0.00459
Time for vode/BDF - cython: 0.003977

So, fastest is odes with cvode and a cython rhs. Then VODE with cython (almost
30% slower), then LSODA with cython.

Without cython, VODE is fastest, then cvode without a wrapper around the
function (almost 20% slower), then normal cvode = lsoda.

Nice to see C beating Fortran, but that is probably due to the round trip
to Python.

Benny
Post by Benny Malengier
Post by Benny Malengier
Post by Benny Malengier
Some extra tests seem in order. How does method=adams in ode succeed?
Should
Post by Benny Malengier
you be able to quickly make an ipython notebook ...
I attach a small code snippet. In fact, my memory failed me in that
dop853 indeed complains about insufficient nmax (nsteps in the Python
API). What breaks is the accuracy of the solution, but it's still
remarkably good and can be controlled by nsteps. It's obvious that
one should not solve this kind of problem with dop853, but I am still
amazed that it does not fall flat on its face.
In fact, vode/Adams dies completely (as it should) and vode/BDF gives
lots of warnings and computes a completely wrong solution (BDF schemes
are supposed to work here). lsoda is fine, though.
You forgot
, with_jacobian=True
r2 = ode(van_der_pol).set_integrator('vode', method='bdf',
with_jacobian=True)
Normally, as vode is for stiff methods, you set ml or mr to indicate how
banded the jacobian is. Here you want full jacobian, so you should indicate
this, see doc.
Result then is as lsoda.
Can I use your code in an example under the scipy license ? I'll see what
odes does.
Benny
Benny Malengier
2016-01-26 13:50:17 UTC
Permalink
Now the code is attached; hopefully it comes through.

Benny
Post by Benny Malengier
As a follow up, I adapted your code, to add odes, and also cvode versions
via the odes scikit.
Time for dop853: 5.587895
Time for vode/BDF: 0.005166
Time for lsoda: 0.006365
Time for cvode/BDF: 0.006328
Time for cvode/BDF - class: 0.006171
Time for cvode/BDF - cython: 0.003059
Time for lsoda - cython: 0.00459
Time for vode/BDF - cython: 0.003977
So, fastes is odes with cvode and cython rhs. Then VODE and cython (almost
30% slower), then LSODA and cython.
Without cython, VODE is fastest, then cvode without a wrapper around the
function (almost 20% slower), then normal cvode = lsoda.
Nice to see C beating Fortran, but that is probably due to the trip around
to python.
Benny
Post by Benny Malengier
Post by Benny Malengier
Post by Benny Malengier
Some extra tests seem in order. How does method=adams in ode succeed?
Should
Post by Benny Malengier
you be able to quickly make an ipython notebook ...
I attach a small code snippet. In fact, my memory failed me in that
dop853 indeed complains about insufficient nmax (nsteps in the Python
API). What breaks is the accuracy of the solution, but it's still
remarkably good and can be controlled by nsteps. It's obvious that
one should not solve this kind of problem with dop853, but I am still
amazed that it does not fall flat on its face.
In fact, vode/Adams dies completely (as it should) and vode/BDF gives
lots of warnings and computes a completely wrong solution (BDF schemes
are supposed to work here). lsoda is fine, though.
You forgot
, with_jacobian=True
r2 = ode(van_der_pol).set_integrator('vode', method='bdf',
with_jacobian=True)
Normally, as vode is for stiff methods, you set ml or mr to indicate how
banded the jacobian is. Here you want full jacobian, so you should indicate
this, see doc.
Result then is as lsoda.
Can I use your code in an example under the scipy license ? I'll see what
odes does.
Benny
Marcel Oliver
2016-01-26 14:08:14 UTC
Permalink
Post by Marcel Oliver
I attach a small code snippet. In fact, my memory failed me in that
dop853 indeed complains about insufficient nmax (nsteps in the Python
API). What breaks is the accuracy of the solution, but it's still
remarkably good and can be controlled by nsteps. It's obvious that
one should not solve this kind of problem with dop853, but I am still
amazed that it does not fall flat on its face.
In fact, vode/Adams dies completely (as it should) and vode/BDF gives
lots of warnings and computes a completely wrong solution (BDF schemes
are supposed to work here). lsoda is fine, though.
You forgot
, with_jacobian=True
r2 = ode(van_der_pol).set_integrator('vode', method='bdf', with_jacobian=True)
Normally, as vode is for stiff methods, you set ml or mr to indicate how
banded the jacobian is. Here you want full jacobian, so you should indicate
this, see doc.
Oops, I completely overlooked that vode defaults to simple
fixed-point iteration in the BDF solver. Thanks for pointing this
out!

I wonder if there is any situation where one would want to use BDF and
still not want to use a full Newton or quasi-Newton iteration since
the convergence condition for a simple fixed point iteration is
basically as bad as using an explicit solver in the first place. But
there is certainly educational value to see this behavior...
Post by Marcel Oliver
Can I use your code in an example under the scipy license ? I'll
see what odes does.
Absolutely. It's pretty trivial of course, so feel free to use and/or
adapt.

Best,
Marcel

PS.: I'll try to get odes to compile later.
Benny Malengier
2016-01-29 14:29:07 UTC
Permalink
I would like to come back to this issue.

I added a dopri wrapper to the odes scikit,
https://github.com/bmcage/odes/blob/master/scikits/odes/dopri5.py which
wraps the scipy.integrate calls to dopri5 and dop853.

When doing tests with the Van der Pol example, I see this solver fails. The
seemingly good solution is actually bad. In the test script you sent, you
integrate via:

sol1 = [r1.integrate(T) for T in tt[1:]]

What this does is actually raise an error after time 166.49 sec every time,
but only one is printed to output, the others seemingly suppressed, and you
keep restarting the solver via this call, even if not successful.
With the wrapper, after time 166.49 the solutions never reach the output; they
go to
sol1.errors
assuming you did
sol1 = r1b.solve(tt, y0)
and as a user you must decide what to do.

This somewhat proves that returning output even when it is wrong (not
satisfying atol/rtol), as ode.integrate does, is not good practice. In
the above case, with scipy.ode, you should test for r1.successful() after
every integrate, which you failed to do.
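The check Benny describes can be sketched as follows (a hypothetical toy problem, not the script from the thread; dopri5 is given deliberately too few steps so that it fails):

```python
import numpy as np
from scipy.integrate import ode

def rhs(t, y):
    return [-2000.0 * y[0]]  # a stiff linear toy problem, assumed for illustration

r1 = ode(rhs).set_integrator('dopri5', nsteps=50)  # deliberately too few steps
r1.set_initial_value([1.0], 0.0)
sol1 = []
for T in np.linspace(1.0, 10.0, 10):
    y = r1.integrate(T)
    if not r1.successful():
        break  # stop instead of silently collecting bad output
    sol1.append(y)
```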

With the odes scikit approach, wrong output just stops the solve routine
and goes to sol.errors. The dopri warning suggests increasing nsteps. If
you do that, and restart at the last computed solution, the error of dop853
becomes:
UserWarning: dop853: problem is probably stiff (interrupted)
So an indication to use another solver.

Your Van der Pol example, adapted to show this behavior via the scikit API:
https://github.com/bmcage/odes/blob/master/docs/src/examples/ode/van_der_pol.py

Benny
Post by Marcel Oliver
Post by Marcel Oliver
I attach a small code snippet. In fact, my memory failed me in that
dop853 indeed complains about insufficient nmax (nsteps in the
Python
Post by Marcel Oliver
API). What breaks is the accuracy of the solution, but it's still
remarkably good and can be controlled by nsteps. It's obvious that
one should not solve this kind of problem with dop853, but I am
still
Post by Marcel Oliver
amazed that it does not fall flat on its face.
In fact, vode/Adams dies completely (as it should) and vode/BDF
gives
Post by Marcel Oliver
lots of warnings and computes a completely wrong solution (BDF
schemes
Post by Marcel Oliver
are supposed to work here). lsoda is fine, though.
You forgot
, with_jacobian=True
r2 = ode(van_der_pol).set_integrator('vode', method='bdf',
with_jacobian=True)
Post by Marcel Oliver
Normally, as vode is for stiff methods, you set ml or mr to indicate how
banded the jacobian is. Here you want full jacobian, so you should
indicate
Post by Marcel Oliver
this, see doc.
Oops, I completely overlooked that vode is defaulting to simple
fixed-point iteration in the BDF solver. Thanks for pointing this
out!
I wonder if there is any situation where one would want to use BDF and
still not want to use a full Newton or quasi-Newton iteration since
the convergence condition for a simple fixed point iteration is
basically as bad as using an explicit solver in the first place. But
there is certainly educational value to see this behavior...
Post by Marcel Oliver
Can I use your code in an example under the scipy license ? I'll
see what odes does.
Absolutely. It's pretty trivial of course, so feel free to use and/or
adapt.
Best,
Marcel
PS.: I'll try to get odes to compile later.
Marcel Oliver
2016-01-29 15:56:53 UTC
Permalink
Post by Benny Malengier
This somewhat proves that returning output also on wrong output (not
satisfying atol/rtol) like ode.integrate does is not a good practice. In the
above case, with scipy.ode, you should test for r1.successful() after every
integrate, which you failed to do.
I totally agree. In fact, I used this example to teach students to be
critical about solver output. We tested with a number of solvers -
not just library solvers, mostly simple self-written ones - and all of
the failing ones except dop853 fail catastrophically on this example.

So in some sense dop853 has the worst possible behavior, but on
the other hand I am still amazed how close it is (qualitatively) to
the correct solution. And dop853 does very well for non-stiff
problems. In particular, it is only very mildly dissipative on energy
conserving systems.
Post by Benny Malengier
With the odes scikit approach, wrong output just stops the solve routine and
goes to sol.errors. The dopri warning output is to increase nsteps. If you do
UserWarning: dop853: problem is probably stiff (interrupted)
So an indication to use another solver.
Interesting, I guess I never increased nsteps that much. I absolutely
agree that an explicit fail is much better.

Best,
Marcel

Irvin Probst
2016-01-20 11:26:41 UTC
Permalink
Post by Marcel Oliver
* integrate.ode seems to have the more "pythonic" interface and a
larger choice of integrator. However, most of the integrators are
not re-entrant per warning in the documentation, so the object
oriented call signature is actually sending a false message. In
addition, in many (if not most) applications, I need the solution at
an array of times, and often need to post-process the entire time
series as an array. Thus, I find the call signature of
integrate.ode extremely annoying and avoid it in favor of odeint
(when the latter works) even though integrate.ode is the more
powerful package.
And THIS is why the vast majority of students here use odeint and not
ode if we don't tell them which one to use. Actually, even the official
documentation seems to imply that odeint should be used instead of ode;
that's written in a yellow box almost at the top of:
http://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.integrate.ode.html

See also:
odeint: an integrator with a simpler interface based on lsoda from ODEPACK

I think that almost all newcomers who see "simpler" will run to odeint
and never come back to ode, and that is imho especially true for people
coming from Matlab, as odeint can be used almost exactly like ode45 & co.

Fortunately we never have to deal with systems that would require an
integrator on steroids, so odeint is good enough (actually even a custom
RK2 with fixed step size is good enough in most cases...).
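For what it's worth, the fixed-step RK2 mentioned here fits in a few lines (midpoint variant, written with the odeint argument convention from the start of the thread):

```python
import numpy as np

def rk2(f, y0, t):
    """Fixed-step midpoint (RK2) integrator; f(y, t) follows the odeint convention."""
    y = np.empty((len(t), len(y0)))
    y[0] = y0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = np.asarray(f(y[i], t[i]))
        y[i + 1] = y[i] + h * np.asarray(f(y[i] + 0.5 * h * k1, t[i] + 0.5 * h))
    return y

# the pendulum from the start of the thread
pend = lambda y, t: [y[1], -np.sin(y[0])]
y = rk2(pend, [1.0, 0.0], np.linspace(0.0, 10.0, 2001))
```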

I trust Benny Malengier when he says that having a unified interface
for all the available ODE solvers is not a trivial task and I wouldn't dare
touch it, but at least it would be great to have a unified
prototype for the callbacks.
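Short of changing scipy, a small adapter in the spirit of the workaround earlier in the thread at least lets the dynamics be defined once:

```python
import numpy as np

def make_odeint_rhs(f_ode):
    """Adapt an ode-style callable f(t, y, *args) to odeint's (y, t, *args) order."""
    return lambda y, t, *args: f_ode(t, y, *args)

def pendulum(t, y):                            # ode-style signature
    return [y[1], -np.sin(y[0])]

pendulum_odeint = make_odeint_rhs(pendulum)    # odeint-style signature
```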

Regards.