Programming with Ordinary Differential
Equations: Some First Steps Towards a
Programming Language
Olivier Bournez
Ecole Polytechnique, LIX, 91128 Palaiseau Cedex, France
Abstract. Various open problems have been recently solved using Ordinary Differential Equation (ODE) programming: basically, ODEs are used to implement various algorithms, including simulation over the continuum of discrete models such as Turing machines, or simulation of discrete-time algorithms working over continuous variables. Applications include: Characterization of computability and complexity classes using ODEs [1–4]; Proof of the existence of a universal (in the sense of Rubel) ODE [5]; Proof of the strong Turing completeness of biochemical reactions [6], or more generally various statements about the completeness of reachability problems (e.g. PTIME-completeness of bounded reachability) for ODEs [7].
It is rather pleasant to explain how this ODE programming technology can be used in many contexts, as ODEs are in practice a kind of universal language shared by many experimental sciences, and how we consequently arrived at these various applications.
However, when it comes to saying more about the proofs, their authors, including ourselves, often feel frustrated: currently, the proofs are mostly based on technical lemmas and constructions done with ODEs, often mixing the ideas behind these constructions with numerical-analysis considerations about errors and error propagation in the equations. We believe this is one factor hampering a more widespread use of this technology in other contexts.
The current article is born from an attempt to popularize this ODE programming technology to a more general public, and in particular to master and even undergraduate students. We show how some constructions can be reformulated using notations that can be seen as a pseudo-programming language. This provides a way to explain, in an easier and more modular way, the main intuitions behind some of the constructions, focusing on the algorithm design part. We focus here, as an example, on how the proof of the universality of polynomial ODEs (a result due to [8], and fully developed in [2]) can be reformulated and presented.
1 Introduction
It has been understood quite recently that it is possible to program with Ordinary Differential Equations (ODEs). This was actually obtained as a side effect of attempts to relate the computational power of analog computational models to classical computability. Refer to [9–11] for surveys on analog computation with a point of view based on computation theory, or to [12–14] for surveys discussing historical and technological aspects. In particular, several authors revisited the General Purpose Analog Computer (GPAC) model of Shannon [15]. Following [16], this model can essentially be abstracted as corresponding to (vectorial) polynomial ordinary differential equations: that is to say, as dynamics over $\mathbb{R}^d$ corresponding to solutions of ODEs of the form $y' = p(y)$, where $y(t) \in \mathbb{R}^d$ is some function of time, $p : \mathbb{R}^d \to \mathbb{R}^d$ is (componentwise) polynomial, and $d$ is some integer. If some initial condition is added, this is also called a polynomial Initial Value Problem (pIVP).
We do not intend to repeat here the full story, but in short, two main notions of computation by pIVP have been introduced: the notion of GPAC-generated function, corresponding to the initial notion from Shannon in [15], and the notion of GPAC-computable function. The latter notion of computability is now known to be equivalent to classical computability [17]. It is also possible to talk about complexity theory in such models: it has been established that this is indeed possible, if one measures the time of computation as the length of the solution [7]. This has recently been extended to space complexity in [18], and to exponential time and the Grzegorczyk hierarchy [19].
All these statements have been obtained by realizing that continuous-time processes defined by ODEs, and even those defined by polynomial ODEs, can simulate various discrete-time processes. They can hence be used to simulate models such as Turing machines [20, 2], and even some more exotic models working with a discrete time but over continuous data. This is based on various refinements of constructions done in [20, 2, 1, 3, 4]. We call this ODE programming, as this is indeed some kind of programming with various continuous constructions.
Forgetting analog machines or models of computation, it is important to realize that ODEs are a kind of universal language of mathematics, used in many, if not all, experimental sciences: physics, biology, chemistry, . . . Consequently, once it is known that one can program with ODEs, many questions about universality, or about computations, in experimental contexts can be solved. This is exactly what has been done by several authors, including ourselves, to solve various open problems, in contexts such as applied mathematics, computer algebra, biocomputing. . . We do not intend here to be exhaustive about the various applications, but we describe some of them.
Some applications of ODE programming:
Implicit complexity: computability: Concepts such as being computable for a function over the reals (in the sense of computable analysis) can be described using ODEs only, i.e., with concepts from analysis only, and no reference to computational models such as Turing machines.
Theorem 1 ([17]). Let $a$ and $b$ be computable reals. A function $f : [a, b] \to \mathbb{R}$ is computable in the sense of computable analysis iff there exist some polynomial $p : \mathbb{R}^{n+1} \to \mathbb{R}^n$ with rational coefficients, some polynomial $p_0 : \mathbb{R} \to \mathbb{R}$ with rational coefficients, and $n-1$ computable real values $\alpha_1, \dots, \alpha_{n-1}$ such that:
1. $(y_1, \dots, y_n)$ is the solution of the initial value problem¹ $y' = p(y, t)$ with initial condition $(\alpha_1, \dots, \alpha_{n-1}, p_0(x))$ set at time $t_0 = 0$;
2. there are $i, j \in \{1, \dots, n\}$ such that $\lim_{t \to \infty} y_j(t) = 0$ and $|f(x) - y_i(t)| \le y_j(t)$ for all $x \in [a, b]$ and all $t \in [0, +\infty)$.
Condition 2 basically says that some component, namely $y_i$, converges over time $t$ toward $f(x)$, with the error bounded by some other component, namely $y_j(t)$.
Robust complexity theory for continuous-time systems: Defining a robust time complexity notion for continuous-time systems was a well-known open problem [10], with several attempts, but with no generic solution provided. In short, the difficulty is that the naive idea of using the time variable of the ODE as a measure of “time complexity” is problematic, since time can be arbitrarily contracted in a continuous system due to the “Zeno phenomenon”. This was solved by establishing that the length of the solutions of pODEs provides a robust complexity measure, which corresponds to classical complexity [7]. This also provides an implicit characterization of PTIME, and completeness of bounded-time reachability.
Characterization of other computability and complexity classes: This has been extended very recently to space complexity, with a characterization of PSPACE in [18] and of EXPTIME in [19], and to the Grzegorczyk hierarchy [4].
A universal ordinary differential equation: Following [21], there exists a fixed non-trivial fourth-order polynomial differential algebraic equation (DAE) $p(y, y', \dots, y^{(d)}) = 0$ such that for any continuous positive function $\varphi$ on the reals, and for any continuous positive function $\epsilon(t)$, it has a $\mathcal{C}^\infty$ solution with $|y(t) - \varphi(t)| < \epsilon(t)$ for all $t$. The question whether one can require the solution that approximates $\varphi$ to be the unique solution for a given initial data was a well-known open problem [21, page 2], [22, Conjecture 6.2]. It has been solved using ODE programming.
Theorem 2 ([5], Universal PIVP). There exists a fixed polynomial vector $p$ in $d$ variables with rational coefficients such that for any functions $f \in \mathcal{C}^0(\mathbb{R})$ and $\varepsilon \in \mathcal{C}^0(\mathbb{R}, \mathbb{R}_{>0})$, there exists $\alpha \in \mathbb{R}^d$ such that there exists a unique solution $y : \mathbb{R} \to \mathbb{R}^d$ to $y(0) = \alpha$, $y' = p(y)$. Furthermore, this solution satisfies $|y_1(t) - f(t)| \le \varepsilon(t)$ for all $t \in \mathbb{R}$.
Strong Turing completeness of biochemical reactions: Ordinary differential equations are a widely used model of the dynamics of the kinetics of reactions, and in particular of biochemical reactions between proteins. Their Turing completeness was an open problem (see e.g. [23, Section 8]), solved in [6] using ODE programming.
¹ We suppose that $y(t)$ is defined for all $t \ge 0$. This condition is not necessarily satisfied for all polynomial ODEs, and we restrict our attention only to ODEs satisfying this condition.
However, one difficulty when one wants to present the ideas of the proofs is that currently the proofs are based on technical lemmas and constructions done with ODEs, often mixing the ideas behind these constructions with numerical-analysis considerations about errors and error propagation in the equations. We believe this is one factor hampering a more widespread use of this technology in other contexts. This is also clearly a difficulty for newcomers to the field.
The current document is a preliminary step towards solving this, by presenting, through an example, a pseudo-programming language. Actually, this document follows from an attempt to popularize this ODE programming technology to a more general public, and in particular to master and even undergraduate students. We show how some constructions can be reformulated using notations that can be seen as a pseudo-programming language. This provides a way to explain, in an easier and more modular way, the main intuitions behind some of the constructions, focusing on the algorithm design part.
From our experiments, this can be done more generally for all of the constructions of the references above. For lack of space, we only take one example, namely the main result of [8], fully developed in [2], establishing the universality of ODEs. We agree that for experts this might be, or even is, in the end the same proof, but we believe that, presented this way, with this pseudo-programming language, the intuition is easier to grasp for newcomers. One motivation for popularizing ODE programming is that it may then help to solve various other problems, in particular in experimental sciences.
We also believe that this example is well suited to make our reader feel what ODE programming is in the end, and the kind of techniques that are used in all the mentioned references to establish all these results.
2 Computing with pIVPs
Proving a result such as Theorem 1 is done in one direction by simulating a Turing machine using some pIVP. The purpose of this section is to provide the intuition of how this is indeed possible. We believe this gives a good intuition of how one can program with ODEs, and more generally solve various discrete problems in some continuous way.
Remark 1. The other direction of the theorem, which we will not discuss in this article, is obtained by proving that one can solve the involved ordinary differential equations by some numerical method, and hence by a Turing machine. Notice that this is not as obvious as it may seem: not all ordinary differential equations can be solved easily (see the famous counterexample [24]), and we use the fact that we restrict to some particular (polynomial) ordinary differential equations: see for example [25] for some conditions and methods guaranteeing effectivity for more general ordinary differential equations.
Remark 2. Notice that our example of the simulation of a Turing machine by an ordinary differential equation may however be misleading, as one may think that, since Turing machines are universal for classical computability, this is the end of the story. But actually some results (e.g. Theorem 2) are obtained by simulating a discrete-time model not directly covered by classical computability, as it works over continuous variables. Understanding the suitable underlying models is a fascinating question from our point of view, which remains to be fully explored and understood.
2.1 Discrete time computations
Encoding a Turing machine configuration with a triplet Consider without loss of generality some Turing machine $M$ using the ten symbols $0, 1, \dots, 9$, where $B = 0$ is the blank symbol. Let
$$\dots BBB\, a_{-k} a_{-k+1} \dots a_{-1} a_0 a_1 \dots a_n\, BBB \dots$$
denote the content of the tape of the Turing machine $M$. In this representation, the head is in front of symbol $a_0$, and $a_i \in \{1, \dots, 9\}$ for all $i$. Suppose that $M$ has $m$ internal states, corresponding to integers $1$ to $m$. Such a configuration $C$ can be encoded by some element $\gamma(C) = (y_1, y_2, q) \in \mathbb{N}^3$, by taking
$$y_1 = a_0 + a_1 \cdot 10 + \dots + a_n \cdot 10^n,$$
$$y_2 = a_{-1} + a_{-2} \cdot 10 + \dots + a_{-k} \cdot 10^{k-1},$$
and where $q$ denotes the internal state of $M$.
Let $\theta : \mathbb{N}^3 \to \mathbb{N}^3$ be the transition function of $M$: the function $\theta$ maps the encoding $\gamma(C)$ of a configuration $C$ to the encoding $\gamma(C_{next})$ of its successor configuration.
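To make the encoding concrete, here is a minimal sketch of $\gamma$ in Python; the function names and the list-based tape representation are ours, chosen for illustration only.

# Sketch of the encoding gamma(C) = (y1, y2, q) described above.
def encode_config(right, left, q):
    """right = [a0, a1, ..., an] with the head on a0; left = [a_-1, ..., a_-k]."""
    y1 = sum(a * 10**i for i, a in enumerate(right))   # a0 + a1*10 + ... + an*10^n
    y2 = sum(a * 10**i for i, a in enumerate(left))    # a_-1 + a_-2*10 + ...
    return (y1, y2, q)

def scanned_symbol(y1):
    return y1 % 10   # a0 = mod_10(y1), as used in the next paragraphs

print(encode_config([3, 2, 1], [4, 5], q=1))   # -> (123, 54, 1)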
Determining the next configuration The next step is to observe that one can construct some function $f : \mathbb{R}^3 \to \mathbb{R}^3$ that realizes some analytic extension of $\theta$, i.e. that coincides with $\theta$ on $\mathbb{N}^3$. To do so, the problems to solve are the following (a numerical sketch of the main ingredients is given at the end of this subsection):
1. Determine the symbol being read. The symbol $a_0$ in front of the head of the machine is given by $a_0 = \mathrm{mod}_{10}(y_1)$, where $\mathrm{mod}_{10}(n)$ is the remainder of the division of the integer $n$ by $10$. Since we want to consider analytic functions, we can instead consider
$$a_0 = \omega(y_1), \quad (1)$$
where $\omega$ is some analytic extension of $\mathrm{mod}_{10}$. For example, it can be taken of the form
$$\omega(x) = a_0 + a_5 \cos(\pi x) + \sum_{j=1}^{4} \left( a_j \cos\left(\tfrac{j\pi x}{5}\right) + b_j \sin\left(\tfrac{j\pi x}{5}\right) \right), \quad (2)$$
where $a_0, \dots, a_5, b_1, \dots, b_4$ are some (computable) coefficients that can be obtained by solving some system of linear equations: write $\omega(i) = i$ for $i = 0, 1, \dots, 9$.
2. Determine the next state. The function that returns the next state can be defined by a Lagrange interpolation polynomial as follows. Let $y = \omega(y_1) = a_0$ be the symbol currently being read and $q$ the current state. Take
$$q_{next} = \sum_{i=0}^{9} \sum_{j=1}^{m} \left( \prod_{r=0,\, r \neq i}^{9} \frac{(y - r)}{(i - r)} \right) \left( \prod_{s=1,\, s \neq j}^{m} \frac{(q - s)}{(j - s)} \right) q_{i,j}, \quad (3)$$
where $q_{i,j}$ is the state that follows symbol $i$ and state $j$.
3. Determine the symbol to be written on the tape. The symbol to be written, $s_{next}$, can be obtained by some similar interpolation polynomial.
4. Determine the direction of the move of the head. Let $h$ denote the direction of the move of the head, where $h = 0$ denotes a move to the left, $h = 1$ denotes “no move”, and $h = 2$ denotes a move to the right. Then, again, the “next move” $h_{next}$ can be obtained by some interpolation polynomial.
5. Update the tape contents. Define functions $P_1, P_2, P_3$ that provide the tape contents after the head moves left, does not move, or moves right, respectively. Then the next value of $y_1$, denoted by $y_1^{next}$, can be obtained by
$$y_1^{next} = P_1 \frac{(1 - H)(2 - H)}{2} + P_2\, H(2 - H) + P_3 \frac{H(H - 1)}{2}, \quad (4)$$
with $P_1 = 10(y_1 + s_{next} - y) + \omega(y_2)$, $P_2 = y_1 + s_{next} - y$ and $P_3 = \frac{y_1 - y}{10}$, where $H = h_{next}$ is the direction of the move given by the previous item, and still $y = \omega(y_1) = a_0$. We can do something similar to get $y_2^{next}$.
Consequently, it is sufficient to take $f(y_1, y_2, q) = (y_1^{next}, y_2^{next}, q_{next})$.
It follows that if one succeeds to repeat in a loop the instruction
$$x \leftarrow f(x),$$
where $\leftarrow$ denotes an assignment, that is to say replacing the value of $x$ by $f(x)$, one will simulate the Turing machine $M$: starting from $x(0) = \gamma(C_0)$, encoding the initial configuration of the Turing machine $M$, $x(t)$ will be $\gamma(C(t))$, where $C(t)$ is the configuration of the machine at time $t$.
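Here is the promised numerical sketch of the two main ingredients above. It is our own code (assuming numpy is available; the transition table q_tab is a hypothetical input): the coefficients of $\omega$ are obtained by solving $\omega(i) = i$ for $i = 0, \dots, 9$, and $q_{next}$ is the Lagrange interpolation (3).

import numpy as np

# Coefficients of omega in (2), solving omega(i) = i for i = 0..9.
# Column order: a_0, a_5, then (a_j, b_j) for j = 1..4.
def omega_coeffs():
    rows = []
    for i in range(10):
        row = [1.0, np.cos(np.pi * i)]
        for j in range(1, 5):
            row += [np.cos(j * np.pi * i / 5), np.sin(j * np.pi * i / 5)]
        rows.append(row)
    return np.linalg.solve(np.array(rows), np.arange(10.0))

def omega(x, c):
    v = c[0] + c[1] * np.cos(np.pi * x)
    for j in range(1, 5):
        v += c[2 * j] * np.cos(j * np.pi * x / 5) + c[2 * j + 1] * np.sin(j * np.pi * x / 5)
    return v

# Lagrange interpolation (3): next state from (approximate) symbol y and state q,
# where q_tab[i][j-1] is the state following symbol i in state j.
def q_next(y, q, q_tab, m):
    total = 0.0
    for i in range(10):
        Li = np.prod([(y - r) / (i - r) for r in range(10) if r != i])
        for j in range(1, m + 1):
            Lj = np.prod([(q - s) / (j - s) for s in range(1, m + 1) if s != j])
            total += Li * Lj * q_tab[i][j - 1]
    return total

c = omega_coeffs()
print(omega(123.0, c))   # close to 3 = mod_10(123), since omega has period 10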
2.2 Computations with a continuous time
Write $instruction_1;\ instruction_2$ to denote the fact of doing first $instruction_1$ and then $instruction_2$. Actually, if we succeed to repeat in a loop
$$x_2 \leftarrow f(x);\ x \leftarrow x_2,$$
that would be sufficient: we would do the same, but two times slower, in the sense that starting from $x(0) = \gamma(C_0)$, now $x(2t)$ will be $\gamma(C(t))$. If we want to preserve speed, it is sufficient to repeat in a loop
$$x_2 \xleftarrow{1/2} f(x);\ x \xleftarrow{1/2} x_2 \quad (5)$$
where $x \xleftarrow{1/2} y$ means an assignment done in time $1/2$, i.e. doing $x(t + \tfrac{1}{2}) = y(t)$ if executed at time $t$. That way, $x(t)$ will still be $\gamma(C(t))$.
To do so with ODEs, one needs to be able to perform this operation $\xleftarrow{1/2}$. A construction, due to Branicky [20], is based on the following remark: one can construct some ODE that does some assignment, and even this particular assignment. More precisely, if one takes some ODE of the form
$$y' = c(g - y)^3 \phi(t), \quad (6)$$
where $c$ is some real constant, one can check by some simple arguments from analysis that, whatever the value of $y(0)$ is, the solution will converge very fast to $g$. Even uniformly, in the sense that for any function $\phi(t)$ of positive integral and any precision $\epsilon > 0$, the real constant $c > 0$ can be fixed sufficiently big such that, for any $y(0)$, it is certain that $y(\tfrac{1}{2})$ is at a distance less than $\epsilon$ from $g$. In other words, this essentially means realizing the equivalent of the assignment $y \xleftarrow{1/2} g$ (possibly with some error, but less than $\epsilon$).
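As a quick sanity check (our own sketch): for the constant case $\phi = 1$, equation (6) even integrates in closed form, and the error at $t = 1/2$ is at most $1/\sqrt{c}$, whatever $y(0)$ is.

import math

# For phi = 1, (6) has a closed-form solution: with u = g - y,
# u' = -c u^3, hence u(t) = u0 / sqrt(1 + 2 c u0^2 t).
def y_at(t, y0, g, c):
    u0 = g - y0
    return g - u0 / math.sqrt(1.0 + 2.0 * c * u0 ** 2 * t)

c = 10000.0
for y0 in (-50.0, 0.0, 7.0):
    err = abs(3.0 - y_at(0.5, y0, g=3.0, c=c))
    print(err, err <= 1.0 / math.sqrt(c))   # error <= 1/sqrt(c) = 0.01, uniformly in y0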
Remark 3 (Notation). In order to help, we use the following notation: we write $(y \xleftarrow{1/2}_{\epsilon} g)'$ for $c(g - y)^3 \phi(t)$, with the real constant $c$ associated to that $\epsilon$. Consequently, writing the ODE
$$y' = (y \xleftarrow{1/2}_{\epsilon} g)'$$
is the same as dynamics (6), i.e. some ODE doing $y \xleftarrow{1/2} g$ with an error less than $\epsilon$.
The intuition about these notations is that the operation $\leftarrow$ is about a function doing some exact operation, whereas the operation $\leftarrow_\epsilon$ is about some function that intends to do something similar, but possibly introduces some errors (here bounded by $\epsilon$).
Once we understand this, we can solve our problem. Consider a function $r$ that does some rounding componentwise, say on every component $r(x) = j$ for $x \in [j - 1/4, j + 1/4]$, for $j \in \mathbb{Z}$. We will use some function $\theta(u)$ that we will also write $[\text{when } u \ge 0]$. Observing that $\sin(2\pi t)$ is alternately positive and negative, and of period $1$, we consider the dynamics:
$$x' = [\text{when } -\sin(2\pi t) \ge 0] \cdot (x \xleftarrow{1/2}_{1/4} r(x_2))'$$
$$x_2' = [\text{when } \sin(2\pi t) \ge 0] \cdot (x_2 \xleftarrow{1/2}_{1/4} f(r(x)))' \quad (7)$$
with $x(0) = x_2(0) = x_0$.
Written in another way, this is the dynamics:
$$\begin{cases} x' = c_1 (r(x_2) - x)^3\, \theta(-\sin(2\pi t)) \\ x_2' = c_2 (f(r(x)) - x_2)^3\, \theta(\sin(2\pi t)) \end{cases} \quad (8)$$
We select the function $\theta(x) = [\text{when } x \ge 0]$ so that it is $0$ for $x \le 0$, and positive for $x > 0$ (say $x^2$ for $x > 0$). The idea is that $\sin(2\pi t)$ is alternately positive for $t \in [j, j + 1/2]$, then negative for $t \in [j + 1/2, j + 1]$ when $t$ increases, where $j$ is some integer. Consequently, $\theta(\sin(2\pi t))$ is alternately positive and null. Consequently, $x_2'$ alternates between the right-hand side of ODE (6) with $g = f(r(x))$ (for some function $\phi$) and exactly $0$. In other words, alternately $x_2$ evolves in order to do $x_2 \xleftarrow{1/2} f(r(x))$, then stays fixed, and this process is repeated forever.
In a symmetric way, $x'$ alternates between exactly $0$ and exactly the right-hand side of ODE (6) with $g = r(x_2)$ (for some function $\phi$). In other words, $x$ alternates between instants where it is fixed, and instants where one does $x \xleftarrow{1/2} r(x_2)$.
Clearly, if one starts from a point $x_0$ with integer coefficients, and since $f$ preserves the integers, this will do exactly what was intended, that is to say repeat in a loop (5). In order to get this working, it is sufficient to consider $\epsilon < 1/4$ and to select $c_1$ and $c_2$ sufficiently big, by the property of the ODE (6) above.
Indeed, at each time $t$ multiple of $1/2$, one starts to do some assignment of type $y \xleftarrow{1/2} r(g)$. Actually, this is not done exactly: $y$ is not set to the precise value $r(g)$ at the next multiple of $\tfrac{1}{2}$; but with these hypotheses, we are (by a simple induction) always in the case where we know that $r(g)$ has integer coordinates, and since $\epsilon < 1/4$, $r(y)$ will indeed be the correct integer after time $\tfrac{1}{2}$. In other words, by recurrence, if we consider $r(x)$ and $r(x_2)$ at times multiple of $\tfrac{1}{2}$, this does exactly the same as repeating in a loop (5).
This works perfectly fine, and provides a way to simulate some Turing machine by some ODE, and more generally the iterations of some function over the integers.
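To illustrate, here is a small simulation of (8). This is our own sketch, with forward Euler, the integer map $f(n) = n + 1$, $r$ the usual rounding, and $\theta(x) = x^2$ for $x > 0$: each unit of time performs one full loop iteration.

import math

def theta(x):                 # "when x >= 0": zero for x <= 0, x^2 for x > 0
    return x * x if x > 0 else 0.0

def f(n):                     # the integer map to iterate; here simply n + 1
    return n + 1

def simulate(x0=0.0, T=5.0, dt=1e-4, c1=400.0, c2=400.0):
    x, x2 = x0, x0
    steps = int(T / dt)
    for k in range(steps):
        s = math.sin(2 * math.pi * k * dt)
        dx = c1 * (round(x2) - x) ** 3 * theta(-s)          # x <- r(x2), second half-period
        dx2 = c2 * (f(round(x)) - x2) ** 3 * theta(s)       # x2 <- f(r(x)), first half-period
        x, x2 = x + dt * dx, x2 + dt * dx2
    return x

print(round(simulate()))      # expect 5: five loop iterations of f in time T = 5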
However, we want a stronger property: we want to simulate a Turing machine by some analytic dynamics. We know from the theory of analytic functions that an analytic function that is constant on some interval is necessarily constant. Consequently, our functions $\theta$ and $r$ above cannot be analytic.
The idea is then to replace the ideal $[\text{when } u \ge 0]$ by some analytic approximation of it, which we will write $[\text{when } u \ge 0]_{1/\epsilon}$.
2.3 Some useful functions to correct errors
A function such as $\sigma(x) = x - 0.2 \sin(2\pi x)$ is a contraction in the vicinity of integers:
Lemma 1. Let $n \in \mathbb{Z}$, and let $\epsilon \in [0, 1/2)$. Then there is some contracting factor $\lambda_\epsilon \in (0, 1)$ such that for all $\delta \in [-\epsilon, \epsilon]$, $|\sigma(n + \delta) - n| \le \lambda_\epsilon |\delta|$.
This function can be used to do some static error correction. To help readability, we will also write $\lfloor x \rceil$ for $\sigma(x)$. We will also write $\lfloor x \rceil^{[n]}$ for $\sigma^{[n]}(x)$, i.e. the $n$-fold composition of the function $\sigma$. We do so as, when $x$ is close to some integer, these values are basically close to $x$: just ignore the rounded corners in our notations for intuition.
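A two-line numerical check of Lemma 1 (our own sketch): repeated applications of $\sigma$ pull a perturbed integer back toward it.

import math

def sigma(x):
    return x - 0.2 * math.sin(2 * math.pi * x)

x = 7.24                      # the integer 7, perturbed by delta = 0.24 < 1/4
for _ in range(5):
    x = sigma(x)
    print(abs(x - 7))         # the distance to 7 shrinks at each application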
It is also possible to do some dynamic error correction. We construct a function $[\cdot \in \{0,1\}]_{\cdot}$. It takes two arguments, $\bar a$ and $y$, and its value, which we will write $[\bar a \in \{0,1\}]_y$, approximates $a$ with error at most $1/y$ when $\bar a$ is close to $a$, with $a \in \{0,1\}$ and $y > 0$. Formally:
Lemma 2 ([2, Lemma 4.2.5]). One can create $[\cdot \in \{0,1\}]_{\cdot} : \mathbb{R}^2 \to \mathbb{R}$ such that for all $y > 0$ and all $\bar a \in \mathbb{R}$, as soon as $|\bar a - a| \le 1/4$ with $a \in \{0,1\}$, then $[\bar a \in \{0,1\}]_y \in \left] a - \tfrac{1}{y},\, a + \tfrac{1}{y} \right[$.
Proof. Consider $[x \in \{0,1\}]_y = \frac{1}{\pi} \arctan(4y(x - 1/2)) + \frac{1}{2}$, and observe that $\left|\frac{\pi}{2} - \arctan x\right| < \frac{1}{x}$ for $x \in (0, \infty)$ and $\left|\frac{\pi}{2} + \arctan x\right| < \frac{1}{|x|}$ for $x \in (-\infty, 0)$.
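A quick numerical check of this function and of the bound of Lemma 2 (our own sketch):

import math

def select01(x, y):
    """[x in {0,1}]_y from the proof of Lemma 2."""
    return math.atan(4 * y * (x - 0.5)) / math.pi + 0.5

y = 1000.0
for xbar, a in ((0.21, 0), (-0.13, 0), (0.8, 1), (1.19, 1)):
    assert abs(select01(xbar, y) - a) < 1 / y   # error at most 1/y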
In $[x \in \{0,1\}]_y$, the argument $x$ is expected to take essentially only two values. We can construct a version with three values, namely $0$, $1$ and $2$.
Lemma 3 ([2, Lemma 4.2.7]). Fix $\epsilon > 0$. One can create $[\cdot \in \{0,1,2\}]_{\cdot} : \mathbb{R}^2 \to \mathbb{R}$ such that for all $y \ge 2$ and all $\bar a \in \mathbb{R}$, as soon as $|\bar a - a| \le \epsilon$ with $a \in \{0, 1, 2\}$, then $[\bar a \in \{0,1,2\}]_y \in \left] a - \tfrac{1}{y},\, a + \tfrac{1}{y} \right[$.
Proof. Take
$$\left[ \left( \lfloor x \rceil^{[d+1]} - 1 \right)^2 \in \{0,1\} \right]_{3y} \cdot \left( 2 \left[ \lfloor x \rceil^{[d]} / 2 \in \{0,1\} \right]_{3y} - 1 \right) + 1,$$
where $d = 0$ if $\epsilon \le 1/4$ and $d = \lceil -\log(4\epsilon)/\log \lambda_\epsilon \rceil$ otherwise.
2.4 And hence, how to do so with analytic functions?
We would like to obtain some dynamics close to that of (7), but using only analytic functions. We intend to replace $\phi(t) = \theta(\sin 2\pi t) = [\text{when } \sin 2\pi t \ge 0]$ in the dynamics by some analytic function $\zeta : \mathbb{R} \to \mathbb{R}$.
An idea to get such a function is to consider $\zeta_\epsilon(t) = [\vartheta(t) \in \{0,1\}]_{1/\epsilon}$, with $\vartheta(t) = \frac{1}{2}\left(\sin^2(2\pi t) + \sin(2\pi t)\right)$. Using the notation $[\text{when } u \ge 0]_{1/\epsilon} = [u \in \{0,1\}]_{1/\epsilon}$, we can also write $\zeta_\epsilon(t) = [\text{when } \vartheta(t) \ge 0]_{1/\epsilon}$.
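As a sanity check (our own sketch), this analytic clock $\zeta_\epsilon$ can be evaluated directly: it stays below $\epsilon$ on the half-periods where $\sin(2\pi t) \le 0$, and gets within $\epsilon$ of $1$ at the middle of the other half-periods.

import math

def select01(x, y):
    return math.atan(4 * y * (x - 0.5)) / math.pi + 0.5

def vartheta(t):
    s = math.sin(2 * math.pi * t)
    return 0.5 * (s * s + s)

def zeta(t, eps):             # "when vartheta(t) >= 0" with precision 1/eps
    return select01(vartheta(t), 1.0 / eps)

eps = 1e-3
print(zeta(0.25, eps))        # close to 1: middle of a positive half-period
print(zeta(0.75, eps))        # below eps: sin(2 pi t) <= 0 there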
We would then like to replace dynamics (7) by something like
$$x' = [\text{when } \vartheta(-t) \ge 0]_{1/\epsilon} \cdot (x \xleftarrow{1/2}_{\epsilon_0} r(x_2))'$$
$$x_2' = [\text{when } \vartheta(t) \ge 0]_{1/\epsilon} \cdot (x_2 \xleftarrow{1/2}_{\epsilon_0} f(r(x)))' \quad (9)$$
We however still need to get rid of the function $r$ doing some exact rounding, which cannot be analytic. The idea is to consider that $\sigma^{[n]}(x) = \lfloor x \rceil^{[n]}$ does the job, if the integer $n$ is big enough, and if $f$ commits an error that remains small and uniform in the vicinity of integers.
How to control errors on $f$? We basically replace $f$ by some function $\bar f$ that is built with exactly the same ideas, but keeping errors under control.
Theorem 3 ([2, Theorem 4.4.1]). Let $\theta : \mathbb{N}^3 \to \mathbb{N}^3$ be the transition function of some Turing machine. Then, given $0 < \epsilon \le 1/2$, $\theta$ has some analytic extension $\bar f : \mathbb{R}^3 \to \mathbb{R}^3$ such that
$$\|(\bar y_1, \bar y_2, \bar q) - (y_1, y_2, q)\| \le \epsilon \implies \|\theta(y_1, y_2, q) - \bar f(\bar y_1, \bar y_2, \bar q)\| \le \epsilon,$$
where $(y_1, y_2, q) \in \mathbb{N}^3$ encodes a configuration of $M$.
Proof. To do so, using the previous ideas, replace equation (1) by $\bar a_0 = y = \omega(\lfloor y_1 \rceil^{[l]})$, and replace the polynomial interpolations, such as equation (3), used to determine the next state, the next symbol and the direction of the move, by the equivalent expressions where each variable has been “rounded cornered”. For example, instead of (3), write
$$q_{next} = \sum_{i=0}^{9} \sum_{j=1}^{m} \left( \prod_{r=0,\, r \neq i}^{9} \frac{(\lfloor y \rceil^{[n]} - r)}{(i - r)} \right) \left( \prod_{s=1,\, s \neq j}^{m} \frac{(\lfloor q \rceil^{[n]} - s)}{(j - s)} \right) q_{i,j}.$$
If $l$ and $n$ are chosen sufficiently big, the approximation of the interpolation will be uniform, and as small as desired.
The difficulty is with equation (4), since the error is not uniform if this is done in a too naive way. But actually, instead of taking $H = h$, the idea is to take some dynamic approximation that is sufficiently precise to correct errors.
More concretely: we still intend to define some functions $P_1, P_2, P_3$, which intend to approximate the content of the tape if the head respectively moves left, does not move, or moves right. Let $H$ be some “sufficiently precise” approximation of $h$, still to be determined. Then $y_1^{next}$ can be approximated by
$$\bar y_1^{next} = P_1 \tfrac{1}{2}(1 - H)(2 - H) + P_2\, H(2 - H) + P_3 \left(-\tfrac{1}{2}\right) H(1 - H), \quad (10)$$
with $P_1 = 10\left( \lfloor y_1 \rceil^{[j]} + \lfloor s_{next} \rceil^{[j]} - \lfloor y \rceil^{[j]} \right) + \left\lfloor \omega\left( \lfloor y_2 \rceil^{[j]} \right) \right\rceil^{[j]}$, $P_2 = \lfloor y_1 \rceil^{[j]} + \lfloor s_{next} \rceil^{[j]} - \lfloor y \rceil^{[j]}$, and $P_3 = \frac{\lfloor y_1 \rceil^{[j]} - \lfloor y \rceil^{[j]}}{10}$.
This is, in the end, once again exactly similar to the expressions in (4), but where all variables have been “rounded cornered”.
The difficulty is that, in this case, $P_1$ depends on $y_1$, which is not a bounded value. So if we take as before $H = h = h_{next}$, the error on the term $(1 - H)(2 - H)/2$ can be arbitrarily amplified when this term is multiplied by $P_1$. But one can afford an error inversely proportional to $y_1$. One can then check that $H = \left[ \lfloor h \rceil^{[3]} \in \{0, 1, 2\} \right]_{10000(y_1 + 1/2) + 2}$ is fine.
Use the same arguments on $P_2$ and $P_3$ to get $|\bar y_1^{next} - y_1^{next}| < \epsilon$, and do something similar for the other part of the tape, to define $\bar y_2^{next}$ such that $|\bar y_2^{next} - y_2^{next}| < \epsilon$.
In the end, consider $\bar f : \mathbb{R}^3 \to \mathbb{R}^3$ defined by
$$\bar f(\bar y_1, \bar y_2, \bar q) = (\bar y_1^{next}, \bar y_2^{next}, \bar q_{next}).$$
Coming back to the simulation So in the end, we have replaced the dynamics of (7) by a dynamics of the form:
$$x' = [\text{when } \vartheta(-t) \ge 0]_{1/\epsilon} \cdot (x \xleftarrow{1/2}_{\epsilon_0} \lfloor x_2 \rceil^{[n]})'$$
$$x_2' = [\text{when } \vartheta(t) \ge 0]_{1/\epsilon} \cdot (x_2 \xleftarrow{1/2}_{\epsilon_0} \bar f(\lfloor x \rceil^{[m]}))' \quad (11)$$
with $[\text{when } u \ge 0]_{1/\epsilon} = [u \in \{0,1\}]_{1/\epsilon}$.
We consequently get functions that are indeed analytic. It remains to check that each of the parameters $n, m, \epsilon, \epsilon_0, \dots$ can be suitably fixed so that the dynamics never leaves a vicinity of the dynamics that we intend to simulate. This basically amounts to developing all the previous arguments, without true difficulties, using basic analysis: refer to [2] for details.
2.5 Going to pODEs
The ODE that we obtained is not polynomial: it uses $\sin$, $\arctan$, etc. But it can be transformed into some pODE. The idea is that many ODEs can be transformed as such using a simple process: introduce the variables that are needed, express their derivatives in terms of already-present variables using the usual rules for derivatives, and repeat this process until it terminates. In practice, it terminates very quickly for usual functions.
Maybe an example is clearer than a long and formal discussion. Suppose that you want to program the equations
$$y_1' = \sin^2 y_2$$
$$y_2' = y_1 \cos(y_2)\, e^{e^{y_1} + t}$$
with the initial condition $y_1(0) = 0$ and $y_2(0) = 0$.
Since $y_1'$ is expressed as the square of the sine of $y_2$, and this is not a polynomial, we introduce $y_3 = \sin y_2$, and we write $y_1' = y_3^2$ with $y_3(0) = 0$. Similarly, since $y_1 \cos(y_2)\, e^{e^{y_1} + t}$ is not polynomial, we introduce $y_4 = \cos y_2$ and $y_5 = e^{e^{y_1} + t}$, and we write $y_2' = y_1 y_4 y_5$.
We then consider the derivatives of the variables that have been introduced. We compute the derivative of $y_3$, and we realize that this is $y_2' \cos(y_2)$. We already know $y_2'$, and $\cos(y_2)$ is $y_4$. We can hence write $y_3' = y_4 (y_1 y_4 y_5)$, which is polynomial. We now turn to the derivative of $y_4$, which equals $-y_2' \sin(y_2)$, and can be written $y_4' = -y_3 (y_1 y_4 y_5)$. We next go to the derivative of $y_5$, which writes $y_5' = y_5 (y_6 y_3^2 + 1)$ if we introduce $y_6 = e^{y_1}$ to keep it polynomial, writing $y_5(0) = e$. We finally go to the derivative of $y_6$, which equals $y_1' e^{y_1} = y_6 y_3^2$; this is polynomial, and we write $y_6(0) = 1$.
We have obtained that we can simulate (by just projecting) the previous system by the pIVP
$$\begin{cases} y_1' = y_3^2 \\ y_2' = y_1 y_4 y_5 \\ y_3' = y_4 (y_1 y_4 y_5) \\ y_4' = -y_3 (y_1 y_4 y_5) \\ y_5' = y_5 (y_6 y_3^2 + 1) \\ y_6' = y_6 y_3^2 \end{cases}$$
with $y(0) = (y_1, \dots, y_6)(0) = (0, 0, 0, 1, e, 1)$.
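This conversion can be checked numerically. Below is a small sketch (ours, assuming numpy and scipy are available) that integrates the six-variable pIVP and verifies the introduced invariants $y_3 = \sin y_2$, $y_4 = \cos y_2$ and $y_6 = e^{y_1}$. Note that with the initial point $(0, 0, 0, 1, e, 1)$ the solution has $y_1 = y_2 = 0$ for all $t$, so we start from a perturbed point (our choice) to make the test non-trivial.

import numpy as np
from scipy.integrate import solve_ivp

def pivp(t, y):
    y1, y2, y3, y4, y5, y6 = y
    w = y1 * y4 * y5
    return [y3**2, w, y4 * w, -y3 * w, y5 * (y6 * y3**2 + 1), y6 * y3**2]

# consistent starting point with y1(0) = 0.5, y2(0) = 0.3 (our choice)
y0 = [0.5, 0.3, np.sin(0.3), np.cos(0.3), np.exp(np.exp(0.5)), np.exp(0.5)]
sol = solve_ivp(pivp, (0.0, 0.5), y0, rtol=1e-11, atol=1e-12)
y1, y2, y3, y4, y5, y6 = sol.y[:, -1]
print(abs(y3 - np.sin(y2)), abs(y4 - np.cos(y2)), abs(y6 - np.exp(y1)))
# all three differences should be tiny (up to integration tolerance)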
Coming back to the previous dynamics (11): just apply the same process to the analytic dynamics. This will necessarily terminate (from known closure properties established in [2]), and this will provide a pIVP.
Of course, not all the ingredients of ODE programming are covered by this example; for lack of space, we miss constructions such as changes of variables, mixing results, etc. But it is however quite representative of the ODE programming technology, and of our pseudo-programming language.
References
1. Hainry, E.: Modèles de calculs sur les réels. Résultats de comparaisons. PhD thesis, LORIA (7 décembre 2006)
2. Graça, D.S.: Computability with Polynomial Differential Equations. PhD thesis, Instituto Superior Técnico (2007)
3. Pouly, A.: Continuous models of computation: from computability to complexity. PhD thesis, Ecole Polytechnique and Universidade do Algarve (Defended on July 6, 2015) https://pastel.archives-ouvertes.fr/tel-01223284. Prix de Thèse de l'Ecole Polytechnique 2016, Ackermann Award 2017.
4. Gozzi, R.: Analog Characterization of Complexity Classes. PhD thesis, Instituto Superior Técnico, Lisbon, Portugal and University of Algarve, Faro, Portugal (2022)
5. Bournez, O., Pouly, A.: A universal ordinary differential equation. Logical Methods in Computer Science 16(1) (2020)
6. Fages, F., Le Guludec, G., Bournez, O., Pouly, A.: Strong Turing completeness of continuous chemical reaction networks and compilation of mixed analog-digital programs. In: Computational Methods in Systems Biology, CMSB 2017. (2017)
7. Bournez, O., Graça, D.S., Pouly, A.: Polynomial time corresponds to solutions of polynomial ordinary differential equations of polynomial length. Journal of the ACM 64(6) (2017) 38:1–38:76
8. Graça, D.S., Campagnolo, M.L., Buescu, J.: Computability with polynomial differential equations. Adv. Appl. Math. 40(3) (2008) 330–349
9. Bournez, O., Pouly, A.: A survey on analog models of computation. In: Handbook of Computability and Complexity in Analysis. Springer (2021) 173–226
10. Bournez, O., Campagnolo, M.L.: A survey on continuous time computations. In: New Computational Paradigms. Changing Conceptions of What is Computable. Springer-Verlag, New York (2008) 383–423
11. Orponen, P.: A survey of continuous-time computation theory. In Du, D.Z., Ko, K.I., eds.: Advances in Algorithms, Languages, and Complexity, Kluwer Academic Publishers (1997) 209–224
12. Ulmann, B.: Analog and Hybrid Computer Programming. De Gruyter Oldenbourg (2020)
13. Ulmann, B.: Analog Computing. Walter de Gruyter (2013)
14. MacLennan, B.J.: Analog computation. In: Encyclopedia of Complexity and Systems Science. Springer (2009) 271–294
15. Shannon, C.E.: Mathematical theory of the differential analyzer. Journal of Mathematics and Physics MIT 20 (1941) 337–354
16. Graça, D.S., Costa, J.F.: Analog computers and recursive functions over the reals. Journal of Complexity 19(5) (2003) 644–664
17. Bournez, O., Campagnolo, M.L., Graça, D.S., Hainry, E.: Polynomial differential equations compute all real computable functions on computable compact intervals. Journal of Complexity 23(3) (2007) 317–335
18. Bournez, O., Gozzi, R., Graça, D.S., Pouly, A.: A continuous characterization of PSPACE using polynomial ordinary differential equations. (2022)
19. Gozzi, R., Graça, D.S.: Characterizing time computational complexity classes with polynomial differential equations. Submitted (2022)
20. Branicky, M.S.: Universal computation and other capabilities of hybrid and continuous dynamical systems. Theoretical Computer Science 138(1) (6 February 1995) 67–100
21. Rubel, L.A.: A universal differential equation. Bulletin of the American Mathematical Society 4(3) (May 1981) 345–349
22. Boshernitzan, M.: Universal formulae and universal differential equations. Annals of Mathematics 124(2) (1986) 273–291
23. Cook, M., Soloveichik, D., Winfree, E., Bruck, J.: Programmability of chemical reaction networks. In: Algorithmic Bioprocesses. Springer (2009) 543–584
24. Pour-El, M.B., Richards, J.I.: A computable ordinary differential equation which possesses no computable solution. Ann. Math. Logic 17 (1979) 61–90
25. Collins, P., Graça, D.S.: Effective computability of solutions of ordinary differential equations: the thousand monkeys approach. Electronic Notes in Theoretical Computer Science 221 (2008) 103–114