A numerical solution to the 1D eikonal equation and a Python implementation

Vitality Learning
6 min read · Dec 20, 2023

This post follows Escaping the maze: the Eikonal Equation unveiled, where the eikonal equation was introduced as a form of dynamic programming for solving the problem of determining the minimum-time path. The goal here is to discuss a numerical solution for the simple one-dimensional case.

The eikonal equation is presented below:

$$|\nabla U(x)| = \frac{1}{f(x)}, \quad x \in \Omega, \qquad U(x) = 0, \quad x \in \Gamma,$$

where Γ is the target set and f(x) is the local speed. It is a non-linear partial differential equation.

In the one-dimensional case, on assuming a constant speed equal to 1 throughout the domain of interest, here taken as [-1, 1], the eikonal equation becomes:

$$|U'(x)| = 1, \quad x \in (-1, 1), \qquad U(\pm 1) = 0.$$

One-dimensional eikonal equation.

Let’s start by noting that the eikonal equation is not uniquely solvable and that its solutions are non-regular. The figure below shows some of the infinitely many possible solutions.

Non-uniqueness of the solution to the 1D eikonal equation.

To restore uniqueness, an arbitrarily small viscous term is typically introduced into the eikonal equation, and a solution to the following problem is sought:

$$|U_\epsilon'(x)| = 1 + \epsilon\, U_\epsilon''(x), \quad x \in (-1, 1), \qquad U_\epsilon(\pm 1) = 0.$$

Eikonal equation modified with a viscous term.

The solution of the equation modified with the viscous term is

$$U_\epsilon(x) = 1 - |x| + \epsilon\left(e^{-1/\epsilon} - e^{-|x|/\epsilon}\right),$$

which converges, as ε → 0, to

$$U(x) = 1 - |x|.$$
The reported asymptotic solution satisfies the eikonal equation except at the point x = 0, where it is non-differentiable. Furthermore, the asymptotic solution is an even function, as can be expected from the symmetry of the equation (the operators acting on U and the speed are symmetric) and of the geometry (the target set).
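
The claimed viscous solution can be checked directly. A sketch of the derivation, assuming the viscous problem $|U_\epsilon'| = 1 + \epsilon U_\epsilon''$ with $U_\epsilon(\pm 1) = 0$ and restricting to $x > 0$, where $U_\epsilon' < 0$:

```latex
\begin{aligned}
& x > 0:\quad -U_\epsilon' = 1 + \epsilon\, U_\epsilon'' .\\[2pt]
& v := U_\epsilon' \;\Rightarrow\; \epsilon\, v' = -v - 1
  \;\Rightarrow\; v(x) = -1 + C\, e^{-x/\epsilon}.\\[2pt]
& \text{evenness of } U_\epsilon \;\Rightarrow\; v(0) = 0 \;\Rightarrow\; C = 1.\\[2pt]
& U_\epsilon(x) = -x - \epsilon\, e^{-x/\epsilon} + D, \qquad
  U_\epsilon(1) = 0 \;\Rightarrow\; D = 1 + \epsilon\, e^{-1/\epsilon}.\\[2pt]
& U_\epsilon(x) = 1 - |x| + \epsilon\left(e^{-1/\epsilon} - e^{-|x|/\epsilon}\right)
  \;\xrightarrow{\;\epsilon \to 0\;}\; 1 - |x|.
\end{aligned}
```

The last line extends the $x > 0$ branch to the whole domain by evenness.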

Moreover, the asymptotic solution can be interpreted as follows. When x > 0, the minimum-time path to the target set Γ points to the right. Indeed, U(x) becomes lower and lower as x increases. Conversely, when x < 0, the minimum-time path points to the left, since U(x) becomes lower and lower as x decreases.

To numerically solve the one-dimensional eikonal equation, the following discretization is introduced:

$$x_n = x_{min} + n\, h, \qquad n = 0, 1, \ldots, N-1,$$

with

$$h = \frac{x_{max} - x_{min}}{N - 1}.$$
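
As a concrete sketch of the discretization (the value of N is an illustrative choice):

```python
import numpy as np

# Uniform grid x_n = xmin + n*h, n = 0, ..., N-1
N = 101
xmin, xmax = -1.0, 1.0
x = np.linspace(xmin, xmax, N)
h = (xmax - xmin) / (N - 1)   # grid spacing, equal to x[1] - x[0]
```
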
The involved derivative could be approximated by finite differences as (see Numerical approximation of derivatives):

$$\left|\frac{U_{n+1} - U_n}{h}\right| = 1, \qquad n = 0, 1, \ldots, N-2,$$

Eikonal equation uniformly discretized with forward differences.

where

$$U_n \simeq U(x_n).$$
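
The non-uniqueness of the discretized problem can be illustrated numerically: with forward differences, any sequence of ±h steps matching the boundary conditions is a discrete solution. A small sketch (the grid size and the zigzag profile are illustrative choices):

```python
import numpy as np

h = 0.5
# Two different grid functions on [-1, 1], both vanishing at the boundaries
u_tent = np.array([0.0, 0.5, 1.0, 0.5, 0.0])    # the "physical" solution 1 - |x|
u_zigzag = np.array([0.0, 0.5, 0.0, 0.5, 0.0])  # a spurious oscillating solution

# Both satisfy |U_{n+1} - U_n| / h = 1 at every difference
for u in (u_tent, u_zigzag):
    assert np.allclose(np.abs(np.diff(u)) / h, 1.0)
```
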
Unfortunately, it can be seen that the eikonal equation uniformly discretized with forward differences admits infinitely many solutions.

To restore uniqueness, we observe that, for the considered problem, the target set Γ coincides with the boundary of the solution domain, so that information flows from the boundary conditions inwards. In other words, for x > 0, the information flows from the right boundary leftwards, so the points in the right half of the domain should be updated using their right neighbors. Conversely, for x < 0, the information flows from the left boundary rightwards, so the points in the left half of the domain should be updated using their left neighbors.

Generally speaking, since U is a “residual time to go” and time cannot be negative, the information flows from the point where U is zero to the points where U is larger than zero, namely, from points with lower U to points with higher U.

This means that we should use finite differences in the following way:

$$\frac{U_n - U_{n+1}}{h} = 1 \quad \text{for } x_n > 0, \qquad \frac{U_n - U_{n-1}}{h} = 1 \quad \text{for } x_n < 0.$$

Finite differences for the eikonal equation.

The above finite differences would lead to the following update rule:

$$U_n \leftarrow \min\left(U_{n-1}, U_{n+1}\right) + h,$$

in which the new value of the eikonal at the n-th grid point is determined by the neighboring point with the smaller eikonal, thus partially meeting causality. To fully meet causality, we also need to include the old value of the eikonal at the n-th grid point in the update rule. Accordingly, the update rule becomes:

$$U_n^{new} = \min\left(U_n^{old}, \min\left(U_{n-1}, U_{n+1}\right) + h\right).$$

Update rule
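
The update rule can be written as a one-line Python function (a sketch; the function name is mine):

```python
def eikonal_update(u, n, h):
    """Causal update of the numerical eikonal at interior grid point n:
    the new value never exceeds the old one and is driven by the
    smaller of the two neighbors."""
    return min(u[n], min(u[n - 1], u[n + 1]) + h)

# Example: with the left boundary already at 0, the first interior
# point is pulled down to 0 + h
u = [0.0, 10.0, 10.0, 10.0, 0.0]
print(eikonal_update(u, 1, 0.5))  # → 0.5
```
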

The numerical solution should be initialized to a value larger than the maximum possible value of U over the computational domain. For the one-dimensional eikonal equation at hand, the maximum value of the right-hand side is 1 and the size of the computational domain is 2, so the initial value should be larger than their product, namely, larger than 2.

The numerical scheme for the solution of the eikonal equation is called upwind since, besides the sample at the derivation location, the finite difference uses an upwind sample rather than a downwind one. Upwind means that the exploited sample is taken from the direction from which the information flows, that is, the direction of the closest boundary point in the case at hand.

The above numerical scheme converges in two sweeps, one from left to right followed by one from right to left. This occurs since there are only two possible minimum-time paths, one going from left to right and one going from right to left. In this respect, the first sweep, from left to right, covers the path going rightwards, while the second sweep, from right to left, covers the path going leftwards. Since the numerical eikonal is updated only if the newly computed value is smaller, the values correctly computed in the first sweep have already reached their minimum possible values and are not changed in the second sweep. Convergence in two sweeps occurs for every one-dimensional eikonal equation, namely, for an arbitrary right-hand side.
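
The two-sweep convergence can be checked directly; a minimal sketch (grid parameters are illustrative):

```python
import numpy as np

N = 101
x = np.linspace(-1.0, 1.0, N)
h = x[1] - x[0]

u = np.full(N, 10.0)   # initial value larger than 2
u[0] = u[-1] = 0.0     # boundary conditions on the target set

# Sweep 1: left to right
for n in range(1, N - 1):
    u[n] = min(u[n], min(u[n - 1], u[n + 1]) + h)
# Sweep 2: right to left
for n in range(N - 2, 0, -1):
    u[n] = min(u[n], min(u[n - 1], u[n + 1]) + h)

# After exactly two sweeps, u matches the exact solution 1 - |x|
assert np.allclose(u, 1.0 - np.abs(x))
```
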

The described numerical scheme converges to the following solution:

$$U(x) = 1 - |x|.$$
At the GitHub page, an implementation of the approach in Python, based on the finite differences and update rule presented above, is available. The code solves the 1D eikonal equation numerically by iteratively sweeping through the solution domain until convergence and visualizes the progress through dynamic plotting. It consists of the following steps.

Setting the algorithm parameters: Set the tolerance (tol) for convergence, the number of discretization points (N), and the boundaries of the solution domain (xmin, xmax).

Domain discretization: Create a linearly spaced grid (x) within the specified domain, and calculate the grid spacing (h).

Solution initialization: Initialize arrays u and uold for the solution at the current and previous iterations.

Iterations — Sweeping, convergence check and plot for visualization: Perform iterative updates until the solution converges (i.e., until the difference between consecutive iterations falls below the tolerance). Each iteration updates the solution by sweeping from left to right and then from right to left; the update takes the minimum value of the neighboring points plus the grid spacing (h), according to the above update rule. To check convergence, the L2 norm of the difference between the current and previous solutions is computed. Finally, the solution is plotted at each iteration, with the plot updating dynamically.

Final plot: Plot the final solution after convergence.
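
The steps above can be sketched as follows (a minimal version with the plotting omitted for brevity; parameter values are illustrative, and the GitHub code should be taken as the reference):

```python
import numpy as np

# --- Algorithm parameters ---
tol = 1e-8                 # convergence tolerance
N = 201                    # number of discretization points
xmin, xmax = -1.0, 1.0     # solution domain

# --- Domain discretization ---
x = np.linspace(xmin, xmax, N)
h = (xmax - xmin) / (N - 1)

# --- Solution initialization ---
u = np.full(N, 10.0)       # larger than the maximum possible U (here, 2)
u[0] = u[-1] = 0.0         # boundary conditions on the target set
uold = u.copy()

# --- Iterations: sweeping and convergence check ---
while True:
    # Left-to-right sweep
    for n in range(1, N - 1):
        u[n] = min(u[n], min(u[n - 1], u[n + 1]) + h)
    # Right-to-left sweep
    for n in range(N - 2, 0, -1):
        u[n] = min(u[n], min(u[n - 1], u[n + 1]) + h)
    # L2 norm of the difference between consecutive iterations
    if np.linalg.norm(u - uold) < tol:
        break
    uold = u.copy()

# --- Final check against the exact solution 1 - |x| ---
assert np.allclose(u, 1.0 - np.abs(x))
```

In the full code, the dynamic plotting step would go inside the `while` loop, redrawing `u` after each pair of sweeps.
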


Vitality Learning

We have been teaching, researching and consulting on parallel programming on Graphics Processing Units (GPUs) since the delivery of CUDA. We also work with Matlab and Python.