This proceedings volume gathers peer-reviewed, selected papers presented at the “Mathematical and Numerical Approaches for Multi-Wave Inverse Problems” conference, held at CIRM, Marseille, France, April 1–5, 2019.



Springer Proceedings in Mathematics & Statistics

Larisa Beilina · Maïtine Bergounioux · Michel Cristofol · Anabela Da Silva · Amelie Litman Editors

Mathematical and Numerical Approaches for Multi-Wave Inverse Problems CIRM, Marseille, France, April 1–5, 2019

Springer Proceedings in Mathematics & Statistics Volume 328

Springer Proceedings in Mathematics & Statistics

This book series features volumes composed of selected contributions from workshops and conferences in all areas of current research in mathematics and statistics, including operations research and optimization. In addition to an overall evaluation of the interest, scientific quality, and timeliness of each proposal at the hands of the publisher, individual contributions are all refereed to the high quality standards of leading journals in the field. Thus, this series provides the research community with well-edited, authoritative reports on developments in the most exciting areas of mathematical and statistical research today.

More information about this series at http://www.springer.com/series/10533

Larisa Beilina · Maïtine Bergounioux · Michel Cristofol · Anabela Da Silva · Amelie Litman

Editors

Mathematical and Numerical Approaches for Multi-Wave Inverse Problems CIRM, Marseille, France, April 1–5, 2019


Editors

Larisa Beilina, Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, Gothenburg, Sweden
Maïtine Bergounioux, Département de Mathématiques - MAPMO, University of Orléans, Orléans, France
Michel Cristofol, Institut de Mathématiques de Marseille, Aix-Marseille University, Marseille, France
Anabela Da Silva, Institut Fresnel, Marseille, France
Amelie Litman, Institut Fresnel, Marseille, France

ISSN 2194-1009  ISSN 2194-1017 (electronic)
Springer Proceedings in Mathematics & Statistics
ISBN 978-3-030-48633-4  ISBN 978-3-030-48634-1 (eBook)
https://doi.org/10.1007/978-3-030-48634-1

Mathematics Subject Classification: 62P10, 65N21, 65N12, 65N15, 78M10, 65F22, 65R20, 65R32, 65J22, 65M32, 78A46

© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

In this volume are collected papers related to the conference Mathematical and Numerical Approaches for Multi-Wave Inverse Problems, which took place from 1 to 5 April 2019 at the Centre International de Rencontres Mathématiques, CIRM (https://www.cirm-math.com/), Marseille, France. One of the main objectives of this conference was the exchange of ideas and tools between different scientific communities, and especially to foster discussion between researchers involved in the theoretical aspects of inverse problems and those more interested in the numerical implementation of these problems.

Inverse problems have been a topic of interdisciplinary interest for many years for mathematicians, applied mathematicians, physical scientists, and engineers. Considerable progress has been achieved in recent years in the development of innovative techniques, new theoretical tools, new approximations, and new optimization techniques to improve the solution of multi-wave inverse problems. In the inverse problems community, current scientific interest is focused on achieving better-contrasted images, with higher spatial resolution and with quantitative content, such as functional imaging in biomedical applications. As a consequence, more and more multi-modal, multi-wave, or hybrid systems are currently being proposed and/or used routinely. Mathematically, solving these inverse problems is all the more complicated because they require a coupling, which can be soft or hard, between a set of partial differential equations that are not necessarily of the same nature (elliptic, hyperbolic, or parabolic). This leads to questions of uniqueness and stability, but also of control, which remain little explored. From a numerical point of view, these systems of PDEs may lead to large discretized domains, with a large number of degrees of freedom, which must be adapted according to the wavelength of the waves considered.

This book gathers scientific works from a number of researchers strongly involved in multi-modal applications. The volume highlights new theoretical and numerical tools for the solution of real-life problems and proposes new systems for their solution on the basis of theoretical understanding.


For most of the papers in this book, the reader will find a complete description of a new technique or numerical method for the solution of an inverse problem, supported by numerical simulations. The intended audience of the book includes undergraduate and graduate university students, Ph.D. students specialized in applied mathematics, electrical engineering, and physics, researchers, university teachers, and R&D engineers with an interest in applied mathematics.

Gothenburg, Sweden
Orléans, France
Marseille, France
Marseille, France
Marseille, France

Larisa Beilina
Maïtine Bergounioux
Michel Cristofol
Anabela Da Silva
Amélie Litman

Contents

Thermoacoustic Applications . . . . . . . . . . 1
S. K. Patch

On the Transport Method for Hybrid Inverse Problems . . . . . . . . . . 15
Francis J. Chung, Jeremy G. Hoskins, and John C. Schotland

Stable Determination of an Inclusion in a Layered Medium with Special Anisotropy . . . . . . . . . . 21
Michele Di Cristo

Convergence of Stabilized P1 Finite Element Scheme for Time Harmonic Maxwell’s Equations . . . . . . . . . . 33
M. Asadzadeh and Larisa Beilina

Regularized Linear Inversion with Randomized Singular Value Decomposition . . . . . . . . . . 45
Kazufumi Ito and Bangti Jin

Parameter Selection in Dynamic Contrast-Enhanced Magnetic Resonance Tomography . . . . . . . . . . 73
Kati Niinimäki, M. Hanhela, and V. Kolehmainen

Convergence of Explicit P1 Finite-Element Solutions to Maxwell’s Equations . . . . . . . . . . 91
Larisa Beilina and V. Ruas

Reconstructing the Optical Parameters of a Layered Medium with Optical Coherence Elastography . . . . . . . . . . 105
Peter Elbau, Leonidas Mindrinos, and Leopold Veselka

The Finite Element Method and Balancing Principle for Magnetic Resonance Imaging . . . . . . . . . . 127
Larisa Beilina, Geneviève Guillot, and Kati Niinimäki

Thermoacoustic Applications

S. K. Patch

Abstract This conference paper provides a cursory overview of thermoacoustic phenomena and attempts to highlight aspects that may be unfamiliar to the mathematical community or could benefit from more rigorous mathematical analysis. A new clinical application, thermoacoustic range verification during particle therapy, is presented. The goal is to ground expectations and generate further interest in thermoacoustics within the mathematical community. Keywords Thermoacoustic · Spherical radon transform · Time reversal MSC: 62P10

1 Introduction and Overview

By 1881, Alexander Graham Bell had noted the photoacoustic effect, by which electromagnetic energy in the optical regime is converted to mechanical energy [1]. More generally, thermoacoustics refers to the generation of acoustic signals by heating. In the early 1960s, the audible detection of microwave pulses, even by deaf study volunteers, was attributed to thermoacoustics [2]. By the early 1980s, photoacoustic spectroscopy and pulse oximetry were used in commercial [3] and medical applications, respectively. Image reconstruction was first considered by physicists, who developed series solutions [4]. Decades later, filtered-backprojection-type inversion formulae were first developed by mathematicians for specific measurement surfaces [5, 6]. Engineers quickly improved upon these results and developed analytic inversion formulae for arbitrary measurement surfaces [7]. These early results assumed ideal experimental conditions, and mathematicians have since contributed many results to expand upon them. For instance, too many results to cite have been published on image reconstruction in the presence of acoustic heterogeneity.

S. K. Patch (B), Department of Physics, UW-Milwaukee and Acoustic Range Estimates, Milwaukee, WI 53211, USA. e-mail: [email protected]

© Springer Nature Switzerland AG 2020. L. Beilina et al. (eds.), Mathematical and Numerical Approaches for Multi-Wave Inverse Problems, Springer Proceedings in Mathematics & Statistics 328, https://doi.org/10.1007/978-3-030-48634-1_1

Rather than recite results that are


well known to the mathematical community, this manuscript attempts to highlight aspects that could benefit from more nuanced mathematical analysis. In particular, different signal generation techniques, i.e., object heating, are discussed in Sect. 2. A few experimental results that may impact approaches to image reconstruction are presented in Sect. 3.

It is hardly surprising that thermoacoustic signals are generated during rapid energy deposition, since the units of pressure and energy density are the same, [p] = N/m² = J/m³. The conversion factor is the dimensionless Grueneisen parameter, Γ = Bβ/(Cρ). The thermal expansion coefficient, β, and specific heat capacity, C, are thermal parameters, whereas the bulk modulus, B, and density, ρ, are mechanical parameters. Assuming instantaneous energy deposition, the pressure changes according to

δp(x) = Γ(x, T) S(x),    (1)

where T denotes temperature and S(x) represents the applied energy density. Regardless of the type of energy used to heat the object under test, recovering useful information can be challenging. A perfect reconstruction of the thermoacoustic pressure increase, δp(x), does not ensure perfect understanding of the applied energy density, the Grüneisen parameter, or, for that matter, any of the parameters upon which the Grüneisen parameter depends. Parameters of interest vary depending upon the application, and fortunately, assumptions can be made to reduce the number of unknowns. For instance, in most soft tissues, B ∼ 2.2 GPa and ρ ∼ 1000 kg/m³.
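The relation δp = ΓS can be made concrete with a quick numerical sanity check; this is a sketch in Python, where the values of β and C are textbook water values assumed here for illustration, while B and ρ are the soft-tissue values quoted above:

```python
# Grueneisen parameter Gamma = B*beta/(C*rho) and the pressure rise
# delta_p = Gamma * S of Eq. (1), for water-like soft tissue.
B = 2.2e9      # bulk modulus, Pa (soft-tissue value quoted in the text)
beta = 3.6e-4  # thermal expansion coefficient, 1/K (water near body temp, assumed)
C = 4186.0     # specific heat capacity, J/(kg K) (water, assumed)
rho = 1000.0   # density, kg/m^3 (value quoted in the text)

Gamma = B * beta / (C * rho)  # dimensionless, roughly 0.2 for water
S = 100.0                     # applied energy density, J/m^3 (illustrative)
delta_p = Gamma * S           # pressure rise, Pa

print(f"Gamma   = {Gamma:.3f}")
print(f"delta_p = {delta_p:.1f} Pa")
```

The resulting Γ of roughly 0.2 matches the commonly quoted value for water near body temperature; note that Γ, and hence the signal strength, depends on temperature mainly through β.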

2 Different Signal Generation Techniques

Signal generation parameters of three well-known versions of electromagnetically induced thermoacoustics are compared and contrasted with ion-induced thermoacoustics in Table 1, where PAT, MITAT, and TCT refer to photoacoustic tomography utilizing near-infrared radiation (NIR), microwave-induced thermoacoustic tomography, and very high frequency (VHF) thermoacoustic computerized tomography, respectively. Ionoacoustics indicates that heating is due to energetic ions (positively or negatively charged particles). For each of the tomographic techniques, non-ionizing electromagnetic photons with insufficient energy to break a DNA strand are applied to generate the thermoacoustic signal. Ionoacoustic signal is created by charged particles delivered with the goal of breaking DNA. Brief descriptions of the electromagnetic heating modalities are followed by a more detailed description of ion-induced thermoacoustics in Sect. 2.3.

The last row of Table 1 compares typical heating pulse durations. As a rule of thumb, driving amplifier tubes from 0 to kW power levels in less than a dozen cycles is challenging. We note that solid-state amplifiers are fast but prohibitively expensive for most research groups. Therefore, typical pulse durations for VHF and MITAT tend to be proportional to the period of the applied electromagnetic field. Additionally,


Table 1 Thermoacoustic signal generation

|                        | TCT           | MITAT                   | PAT                 | Ionoacoustic        |
|------------------------|---------------|-------------------------|---------------------|---------------------|
| Energy type            | VHF           | Microwave               | NIR                 | Energetic ion       |
| Frequency range        | 30–300 MHz    | 300 MHz–300 GHz         | 2–4 THz             | NA                  |
| Goal to recover        | Ionic content | Relaxation + absorption | Absorption          | Dosimetry, range verif. |
| Governing Eq.          | Maxwell/wave  | Maxwell/wave            | Transport/diffusion | Bethe-Bloch         |
| Energy transport       | Wave propagation | Wave propagation     | Photon scattering   | Ion scattering      |
| Polarization important? | YES          | YES                     | NO                  | NA                  |
| Heating pulse duration | O(1 μs)       | O(100 ns)               | 1–10 ns             | O(1 ns–10 μs)       |

different types of particle accelerators have pulse durations that vary by orders of magnitude. This may be why experimentalists tend to model thermoacoustic wave propagation using homogeneous initial conditions and to place the source term in the wave equation for p(x, t):

∂/∂t [Γ(x, T) S(x) I(t)],    (2)

where S(x) represents the energy density and I(t) is a dimensionless approximate delta function. Representative values of the full width at half maximum of I(t) are listed in Table 1. S(x)I(t) is the rate of energy deposition in W/m³. Although Duhamel showed the equivalence of treating the source term as an initial condition, homogeneous initial conditions simplify the analysis of the heating pulse duration [8]. Signal generation by applied electromagnetic waves is discussed briefly in Sect. 2.1, heating due to photon migration is briefly mentioned in Sect. 2.2, and a new thermoacoustic application for particle therapy is described more fully in Sect. 2.3.
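The role of the heating pulse duration can be illustrated numerically. A standard consequence of the source model above is that a finite pulse smears the ideal instantaneous-deposition signal by convolution with I(t); the sketch below (all waveform parameters are illustrative assumptions, not values from this paper) convolves a synthetic narrow signal with a 1 μs FWHM Gaussian I(t), roughly the TCT pulse duration in Table 1:

```python
import numpy as np

# Ideal (instantaneous-deposition) signal vs. the same signal smeared by a
# finite heating pulse I(t): measured ~ ideal convolved with I(t).
fs = 100e6                           # sampling rate, Hz (assumed)
t = np.arange(-2e-6, 2e-6, 1 / fs)   # time axis, s
p_ideal = np.exp(-(t / 50e-9) ** 2)  # synthetic ~50 ns feature (illustrative)

fwhm = 1e-6                          # heating-pulse FWHM, s: O(1 us), as for TCT
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
I = np.exp(-(t / (np.sqrt(2) * sigma)) ** 2)
I /= I.sum()                         # normalize so convolution preserves area

p_meas = np.convolve(p_ideal, I, mode="same")
ratio = p_meas.max() / p_ideal.max()
print(f"peak amplitude retained: {ratio:.2f}")
```

A microsecond pulse reduces the peak of a 50 ns feature by roughly an order of magnitude, which is one way to see why nanosecond lasers (PAT) resolve much finer structure than microsecond VHF pulses.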

2.1 Propagating Electric Fields Induce Thermoacoustic Pulses For electromagnetically induced heating, dielectric properties of the material govern electromagnetic energy penetration into the object and loss within the object. Following the notation used in reports on dielectric properties of tissue [9–11], we express permittivity as a complex number.


ε = ε′ − i ε″,    (3)

where ε′ is the relative permittivity of the material and ε″ = σ/(ε0 ω) governs energy loss. Here, σ is the total conductivity of the material, which accounts for loss due to frequency-dependent relaxation effects and also the frequency-independent ionic conductivity, σi. A few facts are listed below to provide intuition regarding ionic content and thermoacoustic imaging:

– Deionized water has nearly zero ionic content and σi = 5.5 μS/m, whereas in blood σi ∼ 1 S/m.
– The permittivity of vacuum is very small, ε0 = 8.85 × 10⁻¹² F/m, so the impact of ionic loss becomes large at VHF frequencies. For instance, at the frequency of modern MRI systems (128 MHz), σi/(ε0 ω) ∼ 140 σi, whereas at the frequency of most microwave ovens (2.45 GHz), σi/(ε0 ω) ∼ 7 σi. Most cell phones operate at intermediate frequencies (0.9–2 GHz).
– Many organs are designed to secrete physiologic fluids with varying degrees of ionic content. Furthermore, the health of the organ may be reflected by the ionic content of the secreted fluid. For instance, the ionic content of prostatic fluid is correlated to disease state [12]. Ion content in prostatic fluid from healthy and cancerous glands is qualitatively displayed in Fig. 1. For a more quantitative analysis, see Fig. 3 in [13].

Components of a plane wave propagating along the z-axis in free space decay according to

E(x, y, z, ω) = E0 e^{−iεz} = E0 e^{−iε′z} e^{−σz/(ε0 ω)}.    (4)

Here heating is due primarily to the E field, and the specific absorption rate (SAR) of non-ionizing VHF and microwave energy is given by

SAR = (σ/ρ) |E|².    (5)

When low frequency electric fields are used, relaxation effects can be dominated by ionic content and quantitative reconstructions can yield images with the potential to differentiate healthy from diseased organs.
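The loss factors quoted above follow directly from ε″ = σ/(ε0 ω) with ω = 2πf; a minimal check in Python:

```python
import math

# Ionic-loss factor sigma_i/(eps0*omega) at the two frequencies quoted
# in the text: 128 MHz (MRI) and 2.45 GHz (microwave oven).
eps0 = 8.85e-12  # vacuum permittivity, F/m
factors = {}
for name, f in [("128 MHz", 128e6), ("2.45 GHz", 2.45e9)]:
    omega = 2 * math.pi * f
    factors[name] = 1.0 / (eps0 * omega)  # multiplies sigma_i
    print(f"{name}: sigma_i/(eps0*omega) ~ {factors[name]:.0f} * sigma_i")
```

This reproduces the ∼140 σi at 128 MHz and ∼7 σi at 2.45 GHz stated above, making concrete why ionic conductivity dominates contrast in the VHF regime but not at microwave frequencies.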

2.1.1 Very High Frequencies (VHF)

Very high frequency (30–300 MHz) electromagnetic fields are used by FM radio stations and also MRI scanners. Therefore, SAR of VHF energy is strictly regulated to avoid patient overheating [14]. In the VHF regime, loss due to ionic content is at least as significant as relaxation effects. Therefore, the VHF-induced contrast mechanism may show changes in ionic [15, 16] and/or fat content.


Fig. 1 Qualitative Gamble plots depicting ion concentrations in prostatic fluids

Maxwell’s equations are well approximated by wave equations in the VHF regime, with wavelengths ranging from 1 to 10 m in vacuum. Wavelengths in media are reduced by a factor of 1/√εr, where εr is less than 10 in fatty tissue, 40–60 in muscle, and close to 80 in water. At 100 MHz, λvac = 3 m and λwater = 33 cm < λorgan, so whole organs in adults can be heated without suffering hot or cold spots due to standing waves. Polarization [17] and diffraction of the applied electromagnetic field affect energy deposition. Applying a variably polarized field heats more uniformly; for instance, circularly polarized fields are applied by MRI scanners.
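The wavelength estimates above follow from λ = c/(f√εr); a quick check, using the rough relative permittivities quoted in the text:

```python
import math

# In-medium wavelength lambda = c/(f*sqrt(eps_r)) at 100 MHz for the
# rough eps_r values quoted in the text.
c = 3.0e8   # speed of light, m/s
f = 100e6   # VHF frequency, Hz
lams = {}
for medium, eps_r in [("vacuum", 1.0), ("fat", 10.0),
                      ("muscle", 50.0), ("water", 80.0)]:
    lams[medium] = c / (f * math.sqrt(eps_r))
    print(f"{medium:7s}: lambda ~ {100 * lams[medium]:.0f} cm")
```

Even in water the 100 MHz wavelength exceeds 30 cm, consistent with whole adult organs being heated without standing-wave hot or cold spots.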

2.1.2 Microwave-Induced Thermoacoustic Imaging

Microwaves have become ubiquitous in our kitchens and next to our ears. SAR of microwaves concentrated near a cell phone has come under close scrutiny and is tightly regulated. Microwave ovens heat even deionized water due entirely to relaxation effects, which are minimal in the VHF regime.


The microwave band ranges from 300 MHz to 300 GHz, with wavelengths from 1 m to 1 mm in vacuum, respectively. As with VHF, wavelengths are reduced in tissue. The impact of polarization and diffraction [18] is even more pronounced than in the VHF regime. Back-of-the-envelope analysis for simple waveguides and chambers yields standing waves. Indeed, early microwave ovens lacked turntables and mode mixers and induced hot and cold spots in solid foods. Intuition for standing waves can be developed easily using a cheap microwave oven that lacks a mode mixer by disabling the rotation stage. Simply remove the glass plate and cover it with an overturned flat-bottomed container. Place a large block of butter on the container and observe where microwave heating melts the butter, and where it does not. Although poor depth penetration and standing waves are disadvantageous compared to VHF, microwaves have some advantages. Sub-microsecond pulse durations are more easily achieved by microwave than by VHF amplifiers, and energy loss and heating are greater within the imaging depth, which is limited relative to VHF.

2.2 Photon Migration/Diffusion

Photoacoustic imaging is perhaps the best-known application of thermoacoustics, so just a few sentences on photoacoustics follow. Depth penetration of near-infrared (NIR) photons limits imaging depth even more severely than microwave and VHF. High photon fluence within a small volume near the optical source and very short (

0. Then the regularity of u_1 and u_2 guarantees that there exists an open set V ⊂ Σ_U containing y and a positive constant c such that |∇u| > c on V. Now consider an open set W which contains y and is compactly contained in V. Applying the Poincaré Recurrence Theorem to W, we see that there exists x_0 ∈ W such that x_k ∈ W for infinitely many k. Let {x_{k_j}} denote the subsequence of {x_k} such that x_{k_j} ∈ W, and let γ_j : [k_j, k_{j+1}] → Ω be the integral curve of DX joining x_{k_j} to x_{k_{j+1}}. We can obtain u(x_{k_{j+1}}) from u(x_{k_j}) by integrating ∇u over γ_j; in other words,

u(x_{k_{j+1}}) − u(x_{k_j}) = ∫_{γ_j} ∇u · dr.    (6)

For each j, one of the following two things must happen:

Case I: the image of γ_j is entirely contained in V.
Case II: the image of γ_j contains points outside V.

In Case I, we can parametrize (6) to get

u(x_{k_{j+1}}) − u(x_{k_j}) = ∫_{k_j}^{k_{j+1}} ∇u(γ_j(t)) · γ̇_j(t) dt = ∫_{k_j}^{k_{j+1}} ∇u(γ_j(t)) · DX(γ_j(t)) dt.

Then (5) implies that

u(x_{k_{j+1}}) − u(x_{k_j}) = ∫_{k_j}^{k_{j+1}} D u_2^2(γ_j(t)) |∇u(γ_j(t))|² dt.

Since the image of γ_j is entirely contained in V, and k_{j+1} − k_j ≥ 1, we have

u(x_{k_{j+1}}) − u(x_{k_j}) ≥ min_Ω (D u_2^2) · c² > 0.

In Case II, the length of the portion of γ_j contained in V must be at least twice the distance from W to the exterior of V, so (6) tells us that

u(x_{k_{j+1}}) − u(x_{k_j}) ≥ 2c · dist(W, ext V) > 0.


In both cases, u(x_{k_{j+1}}) − u(x_{k_j}) is bounded below uniformly in j. Setting q to be the minimum of the bounds in the two cases, we see that u(x_{k_{j+1}}) − u(x_{k_j}) ≥ q for each j ∈ N, and therefore u is unbounded in W. But this contradicts the continuity of u, which is guaranteed by the continuity and positivity of u_1 and u_2, and so our initial supposition is false. Therefore Ω \ Σ_∂Ω has measure zero, as claimed.

As a final remark, note that if σ ≡ 0, we can take u_2 to be the identity function. Then (5) implies that X = ∇u_1, and Theorem 1 gives us a useful corollary:

Corollary 1 Suppose u ∈ C²(Ω) ∩ C¹(Ω̄) and ∇ · D(x)∇u = 0 in Ω. Then the set of integral curves of ∇u that intersect the boundary of Ω is dense in Ω.

Acknowledgements We are grateful to Guillaume Bal for valuable discussions. This work was supported in part by NSF grant DMS-1912821 and AFOSR grant FA9550-19-1-0320.
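Corollary 1 can be illustrated numerically. Taking D to be the identity and the harmonic function u(x, y) = x² − y² on the unit disk (a toy example chosen here for illustration, not taken from the paper), an explicit-Euler integration of the gradient flow ẋ = ∇u from an interior point reaches the boundary:

```python
# Follow an integral curve of grad u for the harmonic u(x,y) = x^2 - y^2
# on the unit disk, by explicit Euler; the curve should exit the domain.
def grad_u(x, y):
    return 2.0 * x, -2.0 * y

x, y = 0.1, 0.4   # interior starting point (assumed)
h = 1e-3          # Euler step size
steps = 0
while x * x + y * y < 1.0 and steps < 100_000:
    gx, gy = grad_u(x, y)
    x, y = x + h * gx, y + h * gy
    steps += 1

reached_boundary = x * x + y * y >= 1.0
print(f"reached boundary: {reached_boundary} after {steps} steps")
```

Along such a curve u is nondecreasing, since du/dt = |∇u|² ≥ 0; this is the same monotonicity that drives the contradiction with Poincaré recurrence in the proof above.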


Stable Determination of an Inclusion in a Layered Medium with Special Anisotropy

Michele Di Cristo

Abstract In this note we review some recent results concerning the inverse inclusion problem. In particular, we analyze the stability issue for a defect contained in a layered medium where the conductivity is different in each layer. We consider conductivities with a special anisotropy. The modulus of continuity obtained is of logarithmic type, which, as shown in Di Cristo and Rondi (Inverse Prob. 19:685–701 [13]), turns out to be optimal.

Keywords Inverse inclusion problem · Stability · Layered medium

MSC: 65N21 · 65N12

1 Introduction

In this note we review some recent results related to the inverse problem of determining an inclusion in a conducting body. This is a special instance of the well-known Calderón inverse conductivity problem [5] and has been studied by Isakov [15], who shows that the defect can be uniquely recovered from the knowledge of all possible electrostatic boundary measurements, making use of the Runge Approximation Theorem and solutions of the governing equation with Green's-function-type singularities. In 2005, Alessandrini and Di Cristo [2] studied the stability issue, that is, the continuous dependence of the inclusion on the given data. The approach proposed by the authors is to convert Isakov's idea into a quantitative form. Under mild a priori assumptions on the regularity and the topology of the inclusion, they show that the stability estimate has a modulus of continuity of logarithmic type.

M. Di Cristo (B) Politecnico di Milano, Milan, Italy e-mail: [email protected] © Springer Nature Switzerland AG 2020 L. Beilina et al. (eds.), Mathematical and Numerical Approaches for Multi-Wave Inverse Problems, Springer Proceedings in Mathematics & Statistics 328, https://doi.org/10.1007/978-3-030-48634-1_3


Their result is proved for piecewise constant conductivities and for variable-coefficient conductivities [7]. The argument turns out to be extremely flexible and has been extended to other physical situations governed by different differential equations. Logarithmic stability estimates hold true for the inverse problem of locating a scattering object from knowledge of near-field data [8], or an inclusion in an elastic body, assuming the displacement and the traction are known on the boundary [3]. These papers are based on an accurate use of the fundamental solution of the differential operator involved and on a precise, quantitative use of unique continuation. These arguments work well in different frameworks with isotropic conductivities in homogeneous conductors, but they become more delicate when the physical phenomena take place in a layered medium with anisotropic conductivities. The key items that cause the main difficulties are the presence of an unknown boundary (the layer) when we apply the unique continuation technique, and the matrices that model the anisotropic conductivities, which create considerable difficulty in estimating the fundamental solution. In several recent results [9–12, 14] these problems have been considered, and some preliminary results in this direction are now available. In this paper we go through these results and summarize the state of the art.

The paper is organized as follows. In Sect. 2 we fix our notation and state the main theorem. Its proof is presented in Sect. 3, using some auxiliary results that are proved in Sect. 4.

2 Notations and Main Result

To begin with, let us fix some notation and definitions that we will use throughout the paper. Let Ω be a bounded open set in Rⁿ and Σ a layer contained in it. The layer Σ is a closed hypersurface that separates Ω into the union of three parts, Ω = Ω₊ ∪ Σ ∪ Ω₋, where Ω± are open subsets such that ∂Ω₋ = ∂Ω ∪ Σ and ∂Ω₊ = Σ. We also denote by D a subset of Ω such that D ⊂ Ω₊ ⊂ Ω. We consider a conductivity γ_D(x) in Ω of the form

γ_D(x) = c₁ A(x) + (c₂ − c₁) A(x) χ_{Ω₊} + (k − c₂ A(x)) χ_D,

where A(x) is a known Lipschitz matrix-valued function satisfying ‖A‖_{C^{0,1}(Ω)} ≤ A and the ellipticity condition with constant σ > 0, that is,

σ⁻¹ |ξ|² ≤ A(x) ξ · ξ ≤ σ |ξ|², ∀ x ∈ Ω, ξ ∈ Rⁿ,

where by k we mean the identity matrix multiplied by k; here c₁ and c₂ are given constants and k is an unknown constant.


For points x ∈ Rⁿ we will write x = (x′, xₙ), where x′ ∈ Rⁿ⁻¹ and xₙ ∈ R. Moreover, denoting by dist(·, ·) the standard Euclidean distance, we define

B_r(x) = {y ∈ Rⁿ : dist(x, y) < r},  B′_r(x′) = {y′ ∈ Rⁿ⁻¹ : dist(x′, y′) < r}

as the open balls of radius r centered at x and x′, respectively. We write Q_r(x) = B′_r(x′) × (xₙ − r, xₙ + r) for the cylinder in Rⁿ. For simplicity, we use B_r, B′_r, Q_r instead of B_r(0), B′_r(0′) and Q_r(0), respectively. We shall also denote the half-space, as well as its associated half-ball and half-cylinder,

Rⁿ₊ = {(x′, xₙ) ∈ Rⁿ : xₙ > 0};  B⁺_r = B_r ∩ Rⁿ₊;  Q⁺_r = Q_r ∩ Rⁿ₊.

Definition 1 Let Ω be a bounded domain in Rⁿ. Given α ∈ (0, 1], we say a portion S of ∂Ω is of class C^{1,α} with constants r, L > 0 if for any point p ∈ S there exists a rigid transformation of coordinates under which p = 0 and

Ω ∩ B_r = {(x′, xₙ) ∈ B_r : xₙ > ϕ(x′)},

where ϕ : B′_r → R is a C^{1,α} function satisfying ϕ(0′) = |∇ϕ(0′)| = 0 and

‖ϕ‖_{C^{1,α}(B′_r)} ≤ L r,

where the norm is defined as

‖ϕ‖_{C^{1,α}(B′_r)} := ‖ϕ‖_{L^∞(B′_r)} + r ‖∇ϕ‖_{L^∞(B′_r)} + r^{1+α} |∇ϕ|_{α,B′_r},

|∇ϕ|_{α,B′_r} := sup_{x′, y′ ∈ B′_r, x′ ≠ y′} |∇ϕ(x′) − ∇ϕ(y′)| / |x′ − y′|^α.

For $f \in H^{1/2}(\partial\Omega)$, let $u$ be the solution of the problem
$$\begin{cases} \mathrm{div}(\gamma_D(x)\nabla u) = 0 & \text{in } \Omega,\\ u = f & \text{on } \partial\Omega. \end{cases} \tag{1}$$
The inverse problem we address is to determine the anomalous region $D$ when the Dirichlet-to-Neumann map
$$\Lambda_D : H^{1/2}(\partial\Omega) \longrightarrow H^{-1/2}(\partial\Omega), \qquad f \longmapsto \gamma_D(x)\,\frac{\partial u}{\partial\nu}\Big|_{\partial\Omega},$$

24

M. Di Cristo

is given for any $f \in H^{1/2}(\partial\Omega)$. Here $\nu$ denotes the outer unit normal to $\partial\Omega$, and $\gamma_D\,\frac{\partial u}{\partial\nu}\big|_{\partial\Omega}$ corresponds to the current density measured on $\partial\Omega$. Thus, the Dirichlet-to-Neumann map represents the knowledge of infinitely many boundary measurements.

Given constants $r_1, M_1, M_2, \delta_1, \delta_2 > 0$ and $0 < \alpha < 1$, we assume that the domain $\Omega \subset \mathbb{R}^n$ is bounded with $|\Omega| \le M_2 r_1^n$, where $|\cdot|$ denotes the Lebesgue measure. The interface $\Sigma$ is $C^2$ and is assumed to stay away from the boundary of the domain, $\mathrm{dist}(\Sigma, \partial\Omega) \ge \delta_2$; the inclusion $D$ is assumed to stay away from $\Sigma$, $\mathrm{dist}(D, \Sigma) \ge \delta_1$; and $\Omega \setminus D$ is connected. Both $\partial D$ and $\partial\Omega$ are of class $C^{1,\alpha}$ with constants $r_1, M_1$. We refer to $n, r_1, M_1, M_2, \alpha, \delta_1, \delta_2$ as the a priori data.

To study the stability, we denote by $D_1$ and $D_2$ two possible inclusions in $\Omega$ which satisfy the above properties, with associated Dirichlet-to-Neumann maps $\Lambda_{D_1}$ and $\Lambda_{D_2}$. We also denote by $d_{\mathcal{H}}$ the Hausdorff distance between closed sets.

Theorem 1 Let $\Omega \subset \mathbb{R}^n$, $n \ge 2$, let $c_1, c_2$ be two known constants and $k$ an unknown constant, and let $D_1, D_2$ be two inclusions in $\Omega$ as above. If, for $\varepsilon > 0$, we have
$$\|\Lambda_{D_1} - \Lambda_{D_2}\|_{\mathcal{L}(H^{1/2}, H^{-1/2})} \le \varepsilon,$$
then
$$d_{\mathcal{H}}(\partial D_1, \partial D_2) \le \omega(\varepsilon),$$
where $\omega$ is an increasing function on $[0, +\infty)$ which satisfies
$$\omega(t) \le C|\log t|^{-\eta}, \quad \forall\, t \in (0, 1),$$
and $C > 0$, $0 < \eta \le 1$ are constants depending on the a priori data only.

3 Proof of the Main Result

The proof of Theorem 1 is based on some auxiliary propositions, whose proofs are collected in Sect. 4. We denote by $\mathcal{G}$ the connected component of $\Omega \setminus (D_1 \cup D_2)$ whose boundary contains $\partial\Omega$, and set
$$\Omega_{\mathcal{D}} = \Omega \setminus \mathcal{G}, \qquad S_{2r} := \{x \in \mathbb{R}^n \mid r \le \mathrm{dist}(x, \Omega) \le 2r\}, \qquad S_r := \{x \in \complement\Omega \mid \mathrm{dist}(x, \Omega) \le r\},$$
and
$$\mathcal{G}_h := \{x \in \mathcal{G} \mid \mathrm{dist}(x, \Omega_{\mathcal{D}}) \ge h\},$$
where $\complement\Omega$ stands for the complement of $\Omega$. We recall that the layer $\Sigma$ separates the domain into the two parts $\Omega_-$ and $\Omega_+$. We also define
$$\mathcal{F}_\lambda := \{x \in \Omega_- \mid \mathrm{dist}(x, \Sigma) \ge \lambda\}, \qquad \Sigma_\lambda := \{x \in \Omega_- \mid \mathrm{dist}(x, \Sigma) = \lambda\}.$$

Stable Determination of an Inclusion in a Layered Medium with Special Anisotropy

25

We introduce a variation of the Hausdorff distance, called the modified distance, which simplifies our proof.

Definition 2 The modified distance between $D_1$ and $D_2$ is defined as
$$d_m(D_1, D_2) := \max\Big\{ \sup_{x \in \partial\Omega_{\mathcal{D}} \cap \partial D_1} \mathrm{dist}(x, \partial D_2),\ \sup_{x \in \partial\Omega_{\mathcal{D}} \cap \partial D_2} \mathrm{dist}(x, \partial D_1) \Big\}.$$

With no loss of generality, we can assume that there exists a point $O \in \partial D_1 \cap \partial\Omega_{\mathcal{D}}$ at which the maximum $d_m = d_m(D_1, D_2) = \mathrm{dist}(O, D_2)$ is attained. We remark that $d_m$ is not a metric and, in general, it does not dominate the Hausdorff distance. However, under our a priori assumptions on the inclusions, the following lemma holds.

Lemma 1 Under the assumptions of Theorem 1, there exists a constant $c_0 \ge 1$, only depending on $M_1$ and $\alpha$, such that
$$d_{\mathcal{H}}(\partial D_1, \partial D_2) \le c_0\, d_m(D_1, D_2). \tag{2}$$
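As a purely numerical aside (not part of the paper), the Hausdorff distance of Lemma 1 is easy to evaluate on discretized boundaries; the following toy sketch, with names of our own choosing, computes $d_{\mathcal H}$ between two finite samplings of boundary curves:

```python
import math

def dist(p, q):
    # Euclidean distance between two points in the plane
    return math.hypot(p[0] - q[0], p[1] - q[1])

def one_sided(A, B):
    # sup over a in A of dist(a, B), for finite point sets
    return max(min(dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    # d_H(A, B) = max of the two one-sided distances
    return max(one_sided(A, B), one_sided(B, A))

# Corners of two concentric squares, [0,1]^2 and [-1,2]^2,
# as crude samplings of two inclusion boundaries:
inner = [(0, 0), (1, 0), (1, 1), (0, 1)]
outer = [(-1, -1), (2, -1), (2, 2), (-1, 2)]

print(hausdorff(inner, outer))  # sqrt(2): each corner is offset diagonally by 1
```

On such coarse samplings the set distance reduces to corner-to-corner distances; finer samplings of the curves would be needed for a faithful approximation of the continuous $d_{\mathcal H}$.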

Proof See [2, Proposition 3.3].

Another obstacle comes from the fact that the propagation of smallness arguments are based on an iterated application of the three spheres inequality for solutions of the equation over chains of balls contained in $\mathcal{G}$. Therefore, it is crucial to control from below the radii of these balls. In the following Lemma 2 we treat the case of points of $\partial\Omega_{\mathcal{D}}$ that are not reachable by such chains of balls. This problem was originally considered in [4] in the context of crack detection in electrical conductors.

Let us premise some notation. Given the origin $O = (0, \dots, 0)$, a unit vector $v$, $H > 0$ and $\vartheta \in (0, \frac{\pi}{2})$, we denote by
$$C(O, v, \vartheta, H) = \big\{x \in \mathbb{R}^n : |x - (x\cdot v)v| \le \sin\vartheta\,|x|,\ 0 \le x\cdot v \le H\big\}$$
the closed truncated cone with vertex at $O$, axis along the direction $v$, height $H$ and aperture $2\vartheta$. Given $R, d$ with $0 < R < d$ and $Q = -d e_n$, where $e_n = (0, \dots, 0, 1)$, let us consider the cone $C\big(O, -e_n, \arcsin\frac{R}{d}, \frac{d^2 - R^2}{d}\big)$.

From now on, without loss of generality, we assume that
$$d_m(D_1, D_2) = \max_{x \in \partial D_1 \cap \partial\Omega_{\mathcal{D}}} \mathrm{dist}(x, \partial D_2)$$
and we write $d_m = d_m(D_1, D_2)$. We shall make use of paths connecting points such that appropriate tubular neighborhoods of these paths remain within $\mathbb{R}^n \setminus \Omega_{\mathcal{D}}$. Let us pick a point $P \in \partial D_1 \cap \partial\Omega_{\mathcal{D}}$, let $\nu$ be the outer unit normal to $\partial D_1$ at $P$, and let $d > 0$ be such that the segment $[P + d\nu, P]$ is contained in $\mathbb{R}^n \setminus \Omega_{\mathcal{D}}$. Given $P_0 \in \mathbb{R}^n \setminus \Omega_{\mathcal{D}}$, let $\gamma$ be a path in $\mathbb{R}^n \setminus \Omega_{\mathcal{D}}$ joining $P_0$ to $P + d\nu$. We consider the following neighborhood of $\gamma \cup ([P + d\nu, P] \setminus \{P\})$, formed by a tubular neighborhood of $\gamma$ attached to a cone with vertex at $P$ and axis along $\nu$:
$$V(\gamma) = \bigcup_{S \in \gamma} B_R(S) \cup C\Big(P, \nu, \arcsin\frac{R}{d}, \frac{d^2 - R^2}{d}\Big). \tag{3}$$

Note that two significant parameters are associated with such a set: the radius $R$ of the tubular neighborhood of $\gamma$, $\bigcup_{S\in\gamma} B_R(S)$, and the half-aperture $\arcsin\frac{R}{d}$ of the cone $C\big(P, \nu, \arcsin\frac{R}{d}, \frac{d^2 - R^2}{d}\big)$. In other terms, $V(\gamma)$ depends on $\gamma$ and also on the parameters $R$ and $d$. At each of the following steps these two parameters shall be appropriately chosen and accurately specified. For the sake of simplicity we agree to maintain the notation $V(\gamma)$ also when different values of $R, d$ are introduced. We also warn the reader that it will be convenient at various stages to use a reference frame such that $P = O = (0, \dots, 0)$ and $\nu = -e_n$.

Lemma 2 Under the above notation, there exist positive constants $d$, $\rho_0$, $c_1$, where $\frac{\rho_0}{d}$ only depends on $M_1$ and $\alpha$, and $c_1$ only depends on $M_1$, $\alpha$, $M_2$, and there exists a point $P \in \partial D_1$ satisfying
$$c_1 d_m \le \mathrm{dist}(P, D_2),$$
such that, given any point $P_0 \in S_{2\rho_0}$, there exists a path $\gamma \subset (\Omega_{\rho_0} \cup S_{2\rho_0}) \setminus \Omega_{\mathcal{D}}$ joining $P_0$ to $P + d\nu$, where $\nu$ is the unit outer normal to $D_1$ at $P$, such that, choosing a coordinate system with origin $O$ at $P$ and axis $e_n = -\nu$, the set $V(\gamma)$ introduced in (3) satisfies
$$V(\gamma) \subset \mathbb{R}^n \setminus \Omega_{\mathcal{D}},$$
provided $R = \frac{d}{\sqrt{1 + L_0^2}}$, where $L_0$, $0 < L_0 \le M_1$, is a constant only depending on $M_1$ and $\alpha$.

Proof See [3, Lemma 4.2].

A crucial tool to get the stability estimates is the so-called Alessandrini identity [1], which permits to relate the information provided by the boundary measurements to the unknown inclusion. Let $u_i \in H^1(\Omega)$, $i = 1, 2$, be solutions to (1) with conductivities
$$\gamma_{D_i}(x) = c_1 A(x) + (c_2 - c_1) A(x)\,\chi_{\Omega_+} + (k - c_2)\,\chi_{D_i}, \quad i = 1, 2.$$
Then we have
$$\int_\Omega \gamma_{D_1}\nabla u_1 \cdot \nabla u_2 - \int_\Omega \gamma_{D_2}\nabla u_1 \cdot \nabla u_2 = \int_{\partial\Omega} c_1 A(x)\, u_1\,[\Lambda_{D_1} - \Lambda_{D_2}]\, u_2. \tag{4}$$
Therefore, applying (4) with $u_i = \Gamma_{D_i}$, $i = 1, 2$, where $\Gamma_{D_i}$ is the fundamental solution of the operator $\mathrm{div}(\gamma_{D_i}\nabla\cdot)$, we get
$$\int_{D_1} (k - c_2)\,\nabla\Gamma_{D_1}(\cdot, y)\cdot\nabla\Gamma_{D_2}(\cdot, z) - \int_{D_2} (k - c_2)\,\nabla\Gamma_{D_1}(\cdot, y)\cdot\nabla\Gamma_{D_2}(\cdot, z) = \int_{\partial\Omega} c_1 A(\cdot)\,\Gamma_{D_1}(\cdot, y)\,[\Lambda_{D_1} - \Lambda_{D_2}]\,\Gamma_{D_2}(\cdot, z). \tag{5}$$

For $y, z \in \mathcal{G} \cap \complement\Omega$, where $\complement\Omega$ is the complement of $\Omega$, we define
$$S_{D_1}(y, z) = (k - c_2)\int_{D_1} \nabla\Gamma_{D_1}(\cdot, y)\cdot\nabla\Gamma_{D_2}(\cdot, z),$$
$$S_{D_2}(y, z) = (k - c_2)\int_{D_2} \nabla\Gamma_{D_1}(\cdot, y)\cdot\nabla\Gamma_{D_2}(\cdot, z),$$
$$f(y, z) = S_{D_1}(y, z) - S_{D_2}(y, z).$$
Therefore (5) can be written as
$$f(y, z) = \int_{\partial\Omega} c_1 A(\cdot)\,\Gamma_{D_1}(\cdot, y)\,[\Lambda_{D_1} - \Lambda_{D_2}]\,\Gamma_{D_2}(\cdot, z), \quad \forall\, y, z \in \complement\Omega. \tag{6}$$

In what follows, we analyze the behavior of $f$ and $S_{D_i}$ as the singularities $y$ and $z$ get close to the inclusion $D$.

Proposition 1 Let $\Omega, D_1, D_2$ be open sets satisfying the above properties and let $y = h\nu(O)$. If, given $\varepsilon > 0$, we have
$$\|\Lambda_{D_1} - \Lambda_{D_2}\|_{\mathcal{L}(H^{1/2}, H^{-1/2})} < \varepsilon, \tag{7}$$
then for every $h$ with $0 < h < cr$, where $0 < c < 1$ depends on $M_1$, we have
$$|f(y, y)| \le C_0\,\frac{\varepsilon^{B h^F}}{h^T}. \tag{8}$$
Here $0 < T < 1$ and $C_0, B, F > 0$ are constants that depend only on the a priori data.

Proposition 2 Let $\Omega, D_1, D_2$ be open sets satisfying the above properties and let $y = h\nu(O)$. Then for every $0 < h < r_0/2$,
$$|S_{D_1}(y, y)| \ge C_1 h^{2-n} - C_2 d_m^{2-2n} - C_3, \tag{9}$$
where $r_0 := \frac{r_1}{2}\min\big\{\frac{1}{2}(8M_1)^{-1/\alpha}, \frac{1}{2}\big\}$, and $C_1, C_2, C_3$ are positive constants depending only on the a priori data.

We conclude this section by proving our main theorem.

Proof (Proof of Theorem 1) We start from the origin of the coordinate system, the point $O \in \partial D_1 \cap \partial\Omega_{\mathcal{D}}$ at which the maximum in Definition 2 is attained,


$d_m := d_m(D_1, D_2) = \mathrm{dist}(O, D_2)$. By a transformation of coordinates, we can write $y = h\nu(O)$, where $0 < h < h_1$, $h_1 := \min\{d_m, cr, r_0/2\}$, and $0 < c < 1$ depends on $M_1$. By applying [2, Proposition 3.4(i)], i.e.,
$$|\nabla_x\Gamma_{D_i}(x, y)| \le d_1|x - y|^{1-n},$$
with $d_1 > 0$ depending only on $k, n, \alpha, M_1$, we have
$$|S_{D_2}(y, y)| = \Big|(k - c_2)\int_{D_2} \nabla\Gamma_{D_1}(x, y)\cdot\nabla\Gamma_{D_2}(x, y)\Big| \le d_2(k - c_2)\int_{D_2} \big(d_1|x - y|^{1-n}\big)^2 \le d_2(k - c_2)\, d_1^2\, |d_m - h|^{2-2n}\, |D_2| \le C_4|d_m - h|^{2-2n}, \tag{10}$$
where $d_2, C_4$ are constants depending on the a priori data only. Here $|D_2|$ is the measure of the inclusion $D_2$, which is bounded by $|D_2| \le |\Omega| \le M_2 r_1^n$. Applying the triangle inequality, we obtain
$$|S_{D_1}(y, y)| - |S_{D_2}(y, y)| \le |S_{D_1}(y, y) - S_{D_2}(y, y)| = |f(y, y)| \le C_0\,\frac{\varepsilon^{B h^F}}{h^T}. \tag{11}$$
Meanwhile, (9) gives the lower bound for $S_{D_1}(y, y)$. Therefore, combining (9) with (10) and (11), we obtain
$$C_1 h^{2-n} - C_2 d_m^{2-2n} - C_3 \le C_4|d_m - h|^{2-2n} + C_0\,\frac{\varepsilon^{B h^F}}{h^T}.$$
Rearranging terms we get
$$C_1 h^{2-n} \le C_4|d_m - h|^{2-2n} + C_0\,\frac{\varepsilon^{B h^F}}{h^T}.$$
Setting $C_5 = C_4/C_0$ and $C_6 = C_1/C_0$,
$$C_5|d_m - h|^{2-2n} \ge C_6 h^{2-n} - \frac{\varepsilon^{B h^F}}{h^T} = C_6 h^{2-n}\big(1 - C_6^{-1}\varepsilon^{B h^F} h^{K}\big),$$
where $K = n - 2 - T$. Now let $h = h(\varepsilon) = \min\big\{|\ln\varepsilon|^{-\frac{1}{2F}}, d_m\big\}$, for $0 < \varepsilon \le \varepsilon_1$, with $\varepsilon_1 \in (0, 1)$ such that $\exp(-B|\ln\varepsilon_1|^{1/2}) = 1/2$. It is easy to see that if $d_m \le |\ln\varepsilon|^{-\frac{1}{2F}}$, Theorem 1 is proved using Lemma 1. Indeed, we can set $\eta = \frac{1}{2F} > 0$; then
$$d_{\mathcal{H}}(\partial D_1, \partial D_2) \le c_0 d_m \le c_0|\ln\varepsilon|^{-\eta} = \omega(\varepsilon). \tag{12}$$
In the other case, if $d_m \ge |\ln\varepsilon|^{-\frac{1}{2F}}$, it is easy to check that
$$(d_m - h)^{2-2n} \ge \frac{C_6}{2C_5}\, h^{2-n} \implies d_m \le C_7|\ln\varepsilon|^{-\frac{n-2}{4F(n-1)}}.$$
Here we can solve for $d_m$ because now $h = h(\varepsilon) = |\ln\varepsilon|^{-\frac{1}{2F}}$, and $C_7$ depends only on the a priori data. Therefore we conclude by setting $\eta = \frac{n-2}{4F(n-1)}$:
$$d_{\mathcal{H}}(\partial D_1, \partial D_2) \le c_0 d_m \le c_0 C_7|\ln\varepsilon|^{-\eta} = \omega(\varepsilon). \tag{13}$$
Finally, for $\varepsilon_1 \le \varepsilon$ the estimate also holds, because $d_m \le |\Omega| \le M_2 r_1^n$, so that
$$d_{\mathcal{H}}(\partial D_1, \partial D_2) \le c_0 d_m \le c_0 M_2 r_1^n = \omega(\varepsilon). \tag{14}$$
We conclude the proof of Theorem 1 by (12), (13) and (14):
$$d_{\mathcal{H}}(\partial D_1, \partial D_2) \le C d_m = \omega(\varepsilon),$$
where $C$ only depends on the a priori data.
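To make the logarithmic character of the stability estimate concrete, here is a small numerical sketch (an illustration of ours, not from the paper; $C = 1$ and $\eta = 1/2$ are placeholder values, not the constants of Theorem 1) evaluating $\omega(\varepsilon) = C|\log\varepsilon|^{-\eta}$:

```python
import math

def omega(eps, C=1.0, eta=0.5):
    # Logarithmic modulus of continuity as in Theorem 1:
    # omega(eps) = C * |log eps|^(-eta), for eps in (0, 1).
    # C and eta are placeholders, not the paper's constants.
    return C * abs(math.log(eps)) ** (-eta)

# Shrinking the measurement error by many orders of magnitude
# only improves the bound on d_H(∂D1, ∂D2) by a modest factor:
for eps in (1e-2, 1e-6, 1e-10):
    print(eps, omega(eps))
```

This is the typical behavior of inverse inclusion problems: the bound decays, but only logarithmically in the data error, consistent with the exponential instability examples of [13].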

4 Proof of Propositions 1 and 2

In this section we prove the auxiliary propositions needed to prove our main theorem. The proofs are based on some quantitative estimates of unique continuation, which for this special context have been developed in [9] (see also [14]).

Proof (Proposition 1) Let us consider $f(y, \cdot)$ with fixed $y \in S_{2r}$; then
$$\mathrm{div}_w\big(\gamma_D(w)\nabla_w f(y, w)\big) = 0 \quad \text{in } \complement\Omega_{\mathcal{D}}. \tag{15}$$
For $x \in S_{2r}$, by (6) and (7), we have the smallness estimate
$$|f(y, x)| \le C(r, M_1, M_2)\,\varepsilon. \tag{16}$$
Also, by [2, Proposition 3.4], the uniform bound of $f$ is given as
$$|f(y, x)| \le c\, h^{2-2n} \quad \text{in } \mathcal{G}_h \cup \mathcal{F}_\lambda. \tag{17}$$
At this point the proposition can be obtained by using iteratively the three spheres inequality derived in [12] for elliptic equations with coefficients with jump discontinuities (see also [6] for similar results), along the lines of the proof of [2, Proposition 3.5].

Proof (Proposition 2) We write the lower bound of $S_{D_1}$ as follows:


$$|S_{D_1}(y, y)| = \Big|(k - c_2)\int_{D_1} \nabla\Gamma_{D_1}(x, y)\cdot\nabla\Gamma_{D_2}(x, y)\,dx\Big|$$
$$\ge C\Big(\int_{D_1\cap B_r(O)\cap D_2} + \int_{D_1\cap B_\rho(O)\cap\complement D_2}\Big)\nabla\Gamma_{D_1}\cdot\nabla\Gamma_{D_2} - C\int_{D_1\cap B_r(O)\cap\complement B_\rho(O)\cap\complement D_2} \nabla\Gamma_{D_1}\cdot\nabla\Gamma_{D_2} - C\int_{D_1\setminus B_r(O)} \nabla\Gamma_{D_1}\cdot\nabla\Gamma_{D_2}, \tag{18}$$
where $C$ depends on $k$ and $A$ only, $r = |x - y|$, $0 < r < r_0$, and $0 < \rho < \min\{d_m, r\}$. To explain the formula, notice that we split the integral $\int_{D_1\cap B_r(O)} \nabla\Gamma_{D_1}\cdot\nabla\Gamma_{D_2}$ into two parts, because we have no information on $x$: either $x \in D_1\cap B_r(O)\cap D_2$ or $x \in D_1\cap B_r(O)\cap\complement D_2$. Then we split the integral again with respect to the even smaller ball $B_\rho(O)$.

If $x \in D_1\cap B_r(O)\cap D_2$, by [1, Lemma 3.1] and [10, Theorem 4.1] we get
$$\nabla\Gamma_{D_1}(x, y)\cdot\nabla\Gamma_{D_2}(x, y) \ge C_A|x - y|^{2-2n} = C_A r^{2-2n} > 0, \tag{19}$$
where $C_A$ depends on the a priori data.

If $x \in D_1\cap B_r(O)\cap\complement D_2$, we work in the smaller ball $B_\rho(O)$; in this case we actually have $x \in D_1\cap B_\rho(O)\cap\complement D_2$. By definition of $d_m$, $B_\rho(O)\cap D_2 = \emptyset$, so for $x, y \in B_\rho(O)$ we have
$$\begin{cases} \Delta\big(\Gamma_{D_2}(x, y) - \Gamma(x, y)\big) = 0 & \text{in } B_\rho(O),\\ \big|\Gamma_{D_2}(x, y) - \Gamma(x, y)\big|\,\big|_{\partial B_\rho(O)} \le C_K\rho^{2-n}, \end{cases}$$
where $\Gamma$ denotes the standard fundamental solution of the Laplace operator. By the maximum principle, the interior values are controlled by the boundary values,
$$\big|\Gamma_{D_2}(x, y) - \Gamma(x, y)\big| \le C_K\rho^{2-n}, \quad \forall x, y \in B_\rho(O),$$
and by the interior gradient bound we have
$$\big|\nabla\Gamma_{D_2}(x, y) - \nabla\Gamma(x, y)\big| \le C_{K_0}\rho^{1-n}, \quad \forall x \in B_{\rho/2}(O),\ \forall y \in B_\rho(O).$$
Applying [1, Lemma 3.1] in $B_{\rho/2}(O)$ (notice $|x - y| = r > \rho$), we obtain
$$\nabla\Gamma_{D_1}(x, y)\cdot\nabla\Gamma_{D_2}(x, y) \ge C_A|x - y|^{2-2n} - C_K\rho^{2-2n} = C_A r^{2-2n} - C_K\rho^{2-2n} > 0. \tag{20}$$
Now we can bound the first term of (18), thanks to (19) and (20):
$$\Big(\int_{D_1\cap B_r(O)\cap D_2} + \int_{D_1\cap B_\rho(O)\cap\complement D_2}\Big)\nabla\Gamma_{D_1}\cdot\nabla\Gamma_{D_2} \ge \int_{[D_1\cap B_r(O)\cap D_2]\cup[D_1\cap B_\rho(O)\cap\complement D_2]} \big(C_A r^{2-2n} - C_K\rho^{2-2n}\big) \ge c_1 r^{2-2n} \ge c_1 h^{2-n}. \tag{21}$$
For the upper bounds of the second and third terms, we apply the natural pointwise bound on $\nabla\Gamma_{D_i}$, $i = 1, 2$. When $x \in D_1\cap B_r(O)\cap\complement B_\rho(O)\cap\complement D_2$, we have
$$\int_{D_1\cap B_r(O)\cap\complement B_\rho(O)\cap\complement D_2} \nabla\Gamma_{D_1}\cdot\nabla\Gamma_{D_2} \le \int_{D_1\cap B_r(O)\cap\complement B_\rho(O)\cap\complement D_2} c_1|x - y|^{1-n}\cdot c_1|x - y|^{1-n} \le c_2 d_m^{2-2n}, \tag{22}$$
$$\int_{D_1\setminus B_r(O)} \nabla\Gamma_{D_1}\cdot\nabla\Gamma_{D_2} \le \int_{D_1\setminus B_r(O)} c_1|x - y|^{1-n}\cdot c_1|x - y|^{1-n}\,dx = \int_{D_1\setminus B_r(O)} c_1^2 r^{2-2n}\,dx = c_3. \tag{23}$$
Plugging (21), (22) and (23) into (18), we obtain the lower bound for $S_{D_1}(y, y)$:
$$|S_{D_1}| \ge c_1 h^{2-n} - c_2 d_m^{2-2n} - c_3,$$
where $c_i$, $i = 1, 2, 3$, depend only on the a priori data.

References

1. Alessandrini, G.: Singular solutions of elliptic equations and the determination of conductivity by boundary measurements. J. Differ. Equ. 84, 252–272 (1990)
2. Alessandrini, G., Di Cristo, M.: Stable determination of an inclusion by boundary measurements. SIAM J. Math. Anal. 37, 200–217 (2005)
3. Alessandrini, G., Di Cristo, M., Morassi, A., Rosset, E.: Stable determination of an inclusion in an elastic body by boundary measurements. SIAM J. Math. Anal. 46, 2692–2729 (2014)
4. Alessandrini, G., Sincich, E.: Cracks with impedance, stable determination from boundary data. Indiana Univ. Math. J. 62(3), 947–989 (2013)
5. Calderón, A.: On an inverse boundary value problem. In: Seminar on Numerical Analysis and its Applications to Continuum Physics (Rio de Janeiro, 1980), pp. 65–73. Soc. Brasil. Mat., Rio de Janeiro (1980)
6. Carstea, A., Wang, J.N.: Propagation of smallness for an elliptic PDE with piecewise Lipschitz coefficients. J. Differ. Equ. 268, 7609–7628 (2020)
7. Di Cristo, M.: Stable determination of an inhomogeneous inclusion by local boundary measurements. J. Comput. Appl. Math. 198, 414–425 (2007)
8. Di Cristo, M.: Stability estimates in the inverse transmission scattering problem. Inverse Probl. Imaging 3, 551–565 (2009)


9. Di Cristo, M., Francini, E., Lin, C.-L., Vessella, S., Wang, J.-N.: Carleman estimate for second order elliptic equations with Lipschitz leading coefficients and jumps at an interface. J. Math. Pures Appl. 108, 163–206 (2017)
10. Di Cristo, M., Ren, Y.: Stable determination of an inclusion for a class of anisotropic conductivities. Inverse Probl. 33, 15 (2017)
11. Di Cristo, M., Ren, Y.: Stable determination of an inclusion in a layered medium. Math. Methods Appl. Sci. 41, 4602–4611 (2018)
12. Di Cristo, M., Ren, Y.: Three sphere inequality for second order elliptic equations with coefficients with jump discontinuity. J. Differ. Equ. 266, 936–941 (2019)
13. Di Cristo, M., Rondi, L.: Examples of exponential instability for inverse inclusion and scattering problems. Inverse Probl. 19, 685–701 (2003)
14. Francini, E., Lin, C.-L., Vessella, S., Wang, J.-N.: Three-region inequalities for the second order elliptic equation with discontinuous coefficients and size estimate. J. Differ. Equ. 261, 5306–5323 (2016)
15. Isakov, V.: On uniqueness of recovery of a discontinuous conductivity coefficient. Comm. Pure Appl. Math. 41, 865–877 (1988)

Convergence of Stabilized P1 Finite Element Scheme for Time Harmonic Maxwell's Equations

M. Asadzadeh and Larisa Beilina

Abstract The paper presents a convergence study of the stabilized P1 finite element method for the time harmonic Maxwell's equations. The model problem concerns the particular case of a dielectric permittivity function that is assumed to be constant in a boundary neighborhood. For the stabilized model a coercivity relation is derived that guarantees the existence of a unique solution for the discrete problem. The convergence is addressed both in a priori and a posteriori settings. Our numerical examples validate the obtained convergence results.

Keywords Time harmonic Maxwell's equations · P1 finite elements · A priori estimate · A posteriori estimate · Convergence

MSC: 65N12 · 65N15 · 78M10

1 Introduction

In implementing finite element methods for the Maxwell system, the divergence-free edge elements are the most advantageous from a theoretical point of view [12, 13]. On the other hand, for time-dependent problems, where a linear system of equations needs to be solved at each iteration step, the divergence-free approach requires an unrealistically fine degree of time resolution. To circumvent this difficulty, it has been suggested to use continuous P1 finite elements, which provide inexpensive and reliable algorithms for numerical simulations, in particular compared to H(curl)-conforming finite elements. Based on this fact, in this paper we consider a stabilized P1 finite element method for the approximate solution of the time harmonic Maxwell's

M. Asadzadeh · L. Beilina (B)
Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, 412 96 Gothenburg, Sweden
e-mail: [email protected]
M. Asadzadeh
e-mail: [email protected]

© Springer Nature Switzerland AG 2020. L. Beilina et al. (eds.), Mathematical and Numerical Approaches for Multi-Wave Inverse Problems, Springer Proceedings in Mathematics & Statistics 328, https://doi.org/10.1007/978-3-030-48634-1_4


equations when the dielectric permittivity function is constant in a boundary neighborhood. This converts the Maxwell's equations into a set of time-independent wave equations in the boundary neighborhood.

An outline of this paper is as follows. In Sect. 2 we introduce a model problem for the time harmonic Maxwell's equations, obtained through the Laplace transform of the time-dependent equations. In Sect. 3 we introduce our finite element scheme, prove its well-posedness, and derive optimal a priori and a posteriori error bounds in a gradient-dependent triple norm. In the a posteriori case the boundary residual is in the form of a normal derivative and is therefore balanced by a multiplicative power of the mesh parameter h. Section 4 is devoted to implementations and justifies the robustness of the approximation procedure. Finally, in Sect. 5 we summarize the results of the paper.

Throughout the paper, C will denote a generic constant, not necessarily the same at each occurrence, independent of the mesh parameter and of the solution unless otherwise specified.

2 The Mathematical Model

We study the time-harmonic Maxwell's equations for the electric field $\hat E(x, s)$, under the assumption of vanishing electric charges:
$$\begin{cases} s^2\varepsilon(x)\hat E(x, s) + \nabla\times\nabla\times\hat E(x, s) = s\varepsilon(x) f_0(x), & x\in\mathbb{R}^d,\ d = 2, 3,\\ \nabla\cdot(\varepsilon(x)\hat E(x, s)) = 0, \end{cases} \tag{1}$$
where $\varepsilon(x) = \varepsilon_r(x)\varepsilon_0$ is the dielectric permittivity, $\varepsilon_r(x)$ is the dimensionless relative dielectric permittivity and $\varepsilon_0$ is the permittivity of free space. Furthermore,
$$\nabla\times\nabla\times E = \nabla(\nabla\cdot E) - \nabla^2 E. \tag{2}$$
Equation (1) is obtained through the Laplace transformation in time,
$$\hat E(x, s) := \int_0^{+\infty} E(x, t)\, e^{-st}\, dt, \qquad s = \mathrm{const} > 0, \tag{3}$$
where $E(x, t)$ is the solution of the time-dependent Maxwell's equations:
$$\begin{cases} \varepsilon(x)\dfrac{\partial^2 E(x, t)}{\partial t^2} + \nabla\times\nabla\times E(x, t) = 0, & x\in\mathbb{R}^d,\ d = 2, 3,\ t\in(0, T],\\[4pt] \nabla\cdot(\varepsilon E)(x, t) = 0,\\[4pt] E(x, 0) = f_0(x), \quad \dfrac{\partial E}{\partial t}(x, 0) = 0, & x\in\mathbb{R}^d,\ d = 2, 3. \end{cases} \tag{4}$$
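As a quick numerical sanity check on the Laplace transform (3) (an illustration of ours, not part of the paper), the integral can be approximated by quadrature and compared with a closed form; for $E(t) = \cos(\omega t)$ the transform is $s/(s^2 + \omega^2)$:

```python
import math

def laplace_numeric(f, s, T=40.0, n=200000):
    # Composite midpoint rule for \int_0^T f(t) e^{-st} dt;
    # for s > 0 the tail beyond T is negligible once s*T is large.
    h = T / n
    return sum(f((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h)
               for k in range(n)) * h

s, w = 2.0, 3.0
num = laplace_numeric(lambda t: math.cos(w * t), s)
exact = s / (s ** 2 + w ** 2)  # closed-form Laplace transform of cos(w t)
print(num, exact)
```

The quadrature value agrees with the analytic transform to many digits, which is the basic mechanism that lets the time-dependent problem (4) be traded for the pseudo-frequency problem (1).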


Fig. 1 Domain decomposition in Ω: a) Ω = Ω₁ ∪ Ω₂, with ε ∈ [1, d₁] in Ω₁ and ε = 1 in Ω₂; b) the subdomain Ω₂

Note that, since we have a non-zero initial condition $E(x, 0) = f_0(x)$, the problem (4) is adequate as a coefficient inverse problem for determining the function $\varepsilon(x)$ in (4) through a finite number of observations of $E$ at the boundary [6]. To solve the problem (4) numerically, we consider it in a bounded, convex and simply connected polygonal domain $\Omega\subset\mathbb{R}^d$, $d = 2, 3$, with boundary $\Gamma$. We define $\Omega_2 := \Omega\setminus\Omega_1$, where $\Omega_1\subset\Omega$ has positive Lebesgue measure and $\partial\Omega\cap\partial\Omega_1 = \emptyset$. In this setting, cutting $\Omega_1$ out of $\Omega$, the new subdomain $\Omega_2$ shares its boundary with both $\Omega$ and $\Omega_1$: $\partial\Omega_2 = \partial\Omega\cup\partial\Omega_1$, $\Omega = \Omega_1\cup\Omega_2$, $\Omega_1 = \Omega\setminus\Omega_2$ and $\bar\Omega_1\cap\bar\Omega_2 = \partial\Omega_1$ (see Fig. 1). To proceed we assume that $\varepsilon(x)\in C^2(\mathbb{R}^d)$, $d = 2, 3$, satisfies
$$\varepsilon(x)\in[1, d_1] \ \text{ for } x\in\Omega_1, \qquad \varepsilon(x) = 1 \ \text{ for } x\in\Omega\setminus\Omega_1, \qquad \partial_\nu\varepsilon = 0 \ \text{ for } x\in\partial\Omega_2. \tag{5}$$

Remark 1 Conditions (5) mean that, in the vicinity of the boundary of the computational domain $\Omega$, Eq. (4) transforms into a time-dependent wave equation.

At the boundary $\Gamma := \partial\Omega$ of $\Omega$, we use the split $\Gamma = \Gamma_1\cup\Gamma_2\cup\Gamma_3$, so that $\Gamma_1$ and $\Gamma_2$ are the top and bottom sides of the domain $\Omega$, with respect to the $y$-axis (in 2d) or the $z$-axis (in 3d), respectively, while $\Gamma_3$ is the rest of the boundary. Further, $\partial_\nu(\cdot)$ denotes the normal derivative on $\Gamma$ and $\nu$ is the outward unit normal to $\Gamma$.

Remark 2 In most estimates below, it suffices to restrict the Neumann boundary condition for the dielectric permittivity function to $\partial_\nu\varepsilon(x) = 0$ on $\Gamma_1\cup\Gamma_2$.


Now, using similar arguments as in the studies in, e.g., [5], and by Remark 1, for the time-dependent wave equation we impose a first order absorbing boundary condition [11] at $\Gamma_1\cup\Gamma_2$:
$$\partial_\nu E + \partial_t E = 0, \quad (x, t)\in(\Gamma_1\cup\Gamma_2)\times(0, T]. \tag{6}$$
To impose boundary conditions at $\Gamma_3$, we can assume that the surface $\Gamma_3$ is located far from the domain $\Omega_1$. Hence, we can assume that $E\approx E^{\mathrm{inc}}$ in a vicinity of $\Gamma_3$, where $E^{\mathrm{inc}}$ is the incident field. Thus, at $\Gamma_3$ we may impose the Neumann boundary condition
$$\partial_\nu E = 0, \quad (x, t)\in\Gamma_3\times(0, T]. \tag{7}$$
Finally, using the well known vector-analysis relation (2) and applying the Laplace transform to Eq. (4) and to the boundary conditions (6)–(7) in the time domain, the problem (1) is transformed into the following model problem:
$$\begin{cases} s^2\varepsilon(x)\hat E(x, s) + \nabla(\nabla\cdot\hat E(x, s)) - \nabla^2\hat E(x, s) = s\varepsilon(x) f_0(x), & x\in\mathbb{R}^d,\ d = 2, 3,\\ \nabla\cdot(\varepsilon(x)\hat E(x, s)) = 0,\\ \partial_\nu\hat E(x, s) = 0, & x\in\Gamma_3,\\ \partial_\nu\hat E(x, s) = f_0(x) - s\hat E(x, s), & x\in\Gamma_1\cup\Gamma_2. \end{cases} \tag{8}$$

3 Finite Element Method

We use the usual notation $(\cdot,\cdot)$ for the inner product in $[L^2(\Omega)]^d$, $d\in\{2, 3\}$, with corresponding norm $\|\cdot\|$, whereas $\langle\cdot,\cdot\rangle_\Gamma$ is the inner product of $[L^2(\Gamma)]^{d-1}$, and the associated $L^2(\Gamma)$-norm is denoted by $\|\cdot\|_\Gamma$. We define the $L^2$ scalar products
$$(u, v) := \int_\Omega u\cdot v\, dx, \qquad (u, v)_\omega := \int_\Omega u\cdot v\,\omega\, dx, \qquad \langle u, v\rangle_\Gamma := \int_\Gamma u\cdot v\, d\sigma,$$
and the $\omega$-weighted $L^2(\Omega)$ norm
$$\|u\|_\omega^2 := \int_\Omega |u|^2\omega\, dx, \qquad \omega > 0,\ \omega\in L^\infty(\Omega).$$
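The ω-weighted norm just defined is straightforward to approximate by quadrature. The following one-dimensional sketch on Ω = (0, 1) is an illustration of ours (the paper works in 2D/3D):

```python
import math

def weighted_l2_norm(u, omega, n=20000):
    # Approximate ||u||_omega = ( \int_0^1 |u(x)|^2 omega(x) dx )^{1/2}
    # by the composite midpoint rule on (0, 1).
    h = 1.0 / n
    acc = sum(u((k + 0.5) * h) ** 2 * omega((k + 0.5) * h)
              for k in range(n)) * h
    return math.sqrt(acc)

# With u(x) = sin(pi x) and constant weight omega = 2:
# ||u||_omega^2 = 2 * \int_0^1 sin^2(pi x) dx = 2 * (1/2) = 1.
print(weighted_l2_norm(lambda x: math.sin(math.pi * x), lambda x: 2.0))
```

A constant weight just rescales the usual norm; in the paper the weight $\omega = \omega(\varepsilon(x), s)$ varies in space, which the same quadrature handles unchanged.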


3.1 Stabilized Model

The stabilized formulation of the problem (8), with $d = 2, 3$, reads as follows:
$$\begin{cases} s^2\varepsilon(x)\hat E(x, s) - \nabla^2\hat E(x, s) - \nabla(\nabla\cdot((\varepsilon - 1)\hat E(x, s))) = s\varepsilon(x) f_0(x), & x\in\mathbb{R}^d,\\ \partial_\nu\hat E(x, s) = 0, & x\in\Gamma_3,\\ \partial_\nu\hat E(x, s) = f_0(x) - s\hat E(x, s), & x\in\Gamma_1\cup\Gamma_2, \end{cases} \tag{9}$$
where the second equation of (8) is hidden in the first one.

3.2 Finite Element Discretization

We consider a partition $\mathcal T_h = \{K\}$ of $\Omega$ into elements $K$, satisfying the minimal angle condition. Here $h = h(x)$ is the mesh parameter, defined as $h|_K = h_K$ and representing the local diameter of the elements. We also denote by $\partial\mathcal T_h = \{\partial K\}$ a partition of the boundary $\Gamma$ into boundaries $\partial K$ of the elements $K$ whose vertices lie on $\Gamma$. To formulate the finite element method for (9) in $\Omega$, we introduce the piecewise linear finite element space $W_h^E(\Omega)$ for every component of the electric field $E$:
$$W_h^E(\Omega) := \{w\in H^1(\Omega) : w|_K\in P_1(K),\ \forall K\in\mathcal T_h\},$$
where $P_1(K)$ denotes the set of piecewise linear functions on $K$. Setting $\mathbf W_h^E(\Omega) := [W_h^E(\Omega)]^3$, we define $f_{0h}$ to be the $W_h^E$-interpolant of $f_0$ in (9). Then the finite element method for the problem (9) is formulated as: find $\hat E_h\in\mathbf W_h^E(\Omega)$ such that for all $v\in\mathbf W_h^E(\Omega)$
$$(s^2\varepsilon\hat E_h, v) + (\nabla\hat E_h, \nabla v) + (\nabla\cdot(\varepsilon\hat E_h), \nabla\cdot v) - (\nabla\cdot\hat E_h, \nabla\cdot v) + s\langle\hat E_h, v\rangle_{\Gamma_1\cup\Gamma_2} = (s\varepsilon f_{0h}, v) + \langle f_{0h}, v\rangle_{\Gamma_1\cup\Gamma_2}. \tag{10}$$

Theorem 1 (well-posedness) Under the condition on the data
$$f_{0h}\in L_{2,\varepsilon}\cap L_{2,1/s}(\Gamma_1\cup\Gamma_2), \tag{11}$$
the problem (10) has a unique solution $\hat E_h\in\mathbf W_h^E(\Omega)$.

Proof See [1].
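To fix ideas about the assembly behind a scheme like (10), the following sketch builds P1 mass and stiffness matrices for a one-dimensional analogue on a uniform mesh. This is an illustration of ours, not the paper's 2D/3D implementation, and it omits the stabilization and boundary terms:

```python
# Assemble 1D P1 mass (M) and stiffness (A) matrices on a uniform
# mesh of [0, 1] with n elements, stored as dense lists of lists.
def assemble_p1(n):
    h = 1.0 / n
    size = n + 1
    M = [[0.0] * size for _ in range(size)]
    A = [[0.0] * size for _ in range(size)]
    for e in range(n):  # loop over elements [x_e, x_{e+1}]
        # exact local P1 element matrices on an interval of length h
        m_loc = [[h / 3, h / 6], [h / 6, h / 3]]
        a_loc = [[1 / h, -1 / h], [-1 / h, 1 / h]]
        for i in range(2):
            for j in range(2):
                M[e + i][e + j] += m_loc[i][j]
                A[e + i][e + j] += a_loc[i][j]
    return M, A

M, A = assemble_p1(8)
# Stiffness rows sum to zero (constants lie in its kernel);
# the mass entries sum to the measure of the domain, |[0,1]| = 1.
print(sum(A[3]), sum(sum(row) for row in M))
```

In the full scheme (10) the same element-by-element accumulation is performed for every term of the bilinear form, including the divergence and boundary contributions.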


3.3 Error Analysis

In this subsection we first give a swift a priori error bound and then continue with a posteriori error estimates. For the sake of completeness, we set up an adaptive algorithm in the a posteriori setting. This, however, requires a thorough and lengthy implementation procedure, which is beyond the scope of the present paper and may be considered in a future study.

3.3.1 A Priori Error Estimates

To derive a priori error estimates we consider the continuous variational formulation and define linear and bilinear forms on the finite element space $\mathbf W_h^E(\Omega)$:
$$a(\hat E, v) = (s^2\varepsilon\hat E, v) + (\nabla\hat E, \nabla v) + (\nabla\cdot(\varepsilon\hat E), \nabla\cdot v) - (\nabla\cdot\hat E, \nabla\cdot v) + s\langle\hat E, v\rangle_{\Gamma_1\cup\Gamma_2}, \quad \forall v\in H^1(\Omega), \tag{12}$$
and
$$L_c(v) := (s\varepsilon f_0, v) + \langle f_0, v\rangle_{\Gamma_1\cup\Gamma_2}, \quad \forall v\in H^1(\Omega). \tag{13}$$
Hence we have the concise form of the variational formulation
$$a(\hat E, v) = L_c(v), \quad \forall v\in H^1(\Omega). \tag{14}$$
This yields the Galerkin orthogonality [7] by letting $v\in\mathbf W_h^E(\Omega)$ in (12) and (13), as well as replacing $f_0$ by $f_{0h}$ in (13). Subtracting from (14) its discrete version, and letting $e(x, s) := \hat E(x, s) - \hat E_h(x, s)$ be the pointwise spatial error of the finite element approximation (10), we get
$$a(\hat E - \hat E_h, v) = 0, \quad \forall v\in\mathbf W_h^E(\Omega) \qquad \text{(Galerkin orthogonality)}. \tag{15}$$
Now we are ready to state the following theoretical error bound.

Theorem 2 Let $\hat E$ and $\hat E_h$ be the solutions of the continuous problem (9) and of its finite element approximation (10), respectively. Then there is a constant $C$, independent of $\hat E$ and $h$, such that
$$|||e||| \le C\, h\, \|\hat E\|_{H^2_w(\Omega)},$$
where $w = w(\varepsilon(x), s)$ is a weight function which depends on the dielectric permittivity function $\varepsilon(x)$ and on the pseudo-frequency variable $s$.

Proof See the proof of Theorem 1 in [1].

3.3.2 A Posteriori Error Estimates

For the approximate solution $\hat E_h = \hat E_h(x, s)$ of the problem (9), we define the residual errors
$$-R(\hat E_h) := s^2\varepsilon(x)\hat E_h - \nabla^2\hat E_h - \nabla(\nabla\cdot((\varepsilon(x) - 1)\hat E_h)) - s\varepsilon(x) f_{0h}(x),$$
and
$$-R_\Gamma(\hat E_h) := h^{-\alpha}\big(\partial_\nu\hat E_h + s\hat E_h - f_{0h}(x)\big), \quad x\in\Gamma_1\cup\Gamma_2,\ 0 < \alpha\le 1. \tag{16}$$
By the Galerkin orthogonality we have that $R(\hat E_h)\perp\mathbf W_h^E(\Omega)$. Now the objective is to bound the triple norm of the error $e(x, s) := \hat E(x, s) - \hat E_h(x, s)$ by some adequate norms of $R(\hat E_h)$ and $R_\Gamma(\hat E_h)$ with a relevant, fast decay. This may be done in a few relatively similar ways: e.g., one can use the variational formulation and interpolation of the error combined with the Galerkin orthogonality, or one may use a dual problem approach, setting the source term (or initial data) on the right hand side equal to the error. The proof of the main result relies on assuming a first order approximation for the initial value of the original field $f_0(x) := E(x, t)|_{t=0^-}$, for $\beta\approx 1$, viz.
$$\|f_0 - f_{0h}\|_\varepsilon \approx \|f_0 - f_{0h}\|_{1/s,\,\Gamma} \approx \|f_0 - f_{0h}\|_{(\varepsilon-1)^2/s,\,\Gamma} = O(h^\beta). \tag{17}$$

Theorem 3 Let $\hat E$ and $\hat E_h$ be the solutions of the continuous problem (9) and of its finite element approximation (10), respectively. Further, assume that the error bound (17) holds for the initial field $f_0(x) := E(x, t)|_{t=0^-}$. Then there exist interpolation constants $C_1$ and $C_2$, independent of $h$ and $\hat E$ but possibly depending on $\varepsilon$ and $s$, such that the following a posteriori error estimate holds true:
$$|||e||| \le C_1 h\|R\| + C_2 h^\alpha\|R_\Gamma\|_{1/s,\,\Gamma_1\cup\Gamma_2} + O(h^\beta), \tag{18}$$
where $\alpha\approx\beta\approx 1$.

Proof See [1].

An adaptivity algorithm. Given an admissible small error tolerance $TOL > 0$, we outline formal adaptivity steps to reach
$$|||e||| \le TOL. \tag{19}$$
To this end we start with a coarse mesh with mesh size $h$.

Step 1. Compute the approximate solution $\hat E_h$ and its corresponding domain and boundary residuals $R$ and $R_\Gamma$, respectively.

Step 2. Check whether, for $\alpha\approx\beta\approx 1$,
$$C_1 h\|R\| + C_2 h^\alpha\|R_\Gamma\|_{1/s,\,\Gamma_1\cup\Gamma_2} + O(h^\beta) \le TOL. \tag{20}$$


Step 3. If (20) is valid, stop and accept the current h-function. Otherwise, refine in the regions where the contribution to the right hand side of (18) is large (at each iteration step one needs to choose a criterion for this largeness). Replace the h-function by the new refined one and go to Step 1.
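The three steps above can be sketched as a generic driver loop. This is a schematic of ours, not the paper's implementation: `solve`, `residuals` and `refine` are hypothetical callbacks supplied by the user, and the mock problem at the end only exercises the control flow:

```python
def adapt(solve, residuals, refine, h, tol, C1=1.0, C2=1.0, alpha=1.0, max_iter=20):
    # Generic driver for Steps 1-3: solve, evaluate the a posteriori
    # estimate of type (18)/(20), and refine until it drops below tol.
    for _ in range(max_iter):
        Eh = solve(h)                      # Step 1: approximate solution
        R, R_gamma = residuals(Eh, h)      # Step 1: residual norms
        estimate = C1 * h * R + C2 * h ** alpha * R_gamma  # Step 2
        if estimate <= tol:                # Step 3: accept current mesh
            return Eh, h, estimate
        h = refine(h)                      # Step 3: refine and repeat
    raise RuntimeError("tolerance not reached within max_iter refinements")

# Mock problem: residual norms stay O(1), refinement halves h,
# so the estimate 2h decreases geometrically.
_, h_final, est = adapt(solve=lambda h: None,
                        residuals=lambda Eh, h: (1.0, 1.0),
                        refine=lambda h: h / 2,
                        h=0.5, tol=1e-3)
print(h_final, est)
```

A real implementation would refine locally (per element, by the size of the elementwise residual contributions) rather than uniformly halving a scalar `h`.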

4 Numerical Examples

We refer to [1] for a complete description of the numerical tests. Numerical tests are performed in the computational domain $\Omega = [0, 1]\times[0, 1]$. The source data $f(x)$, $x\in\mathbb{R}^2$ (the right hand side), in the model problem (8) for the electric field $\hat E = (\hat E_1, \hat E_2)$ is chosen such that the function
$$\hat E_1 = \frac{2\pi}{s^3\varepsilon}\,\sin^2(\pi x)\cos(\pi y)\sin(\pi y), \qquad \hat E_2 = -\frac{2\pi}{s^3\varepsilon}\,\sin^2(\pi y)\cos(\pi x)\sin(\pi x) \tag{21}$$
is the exact solution of the model problem (8). We define the function $\varepsilon$ as
$$\varepsilon(x, y) = \begin{cases} 1 + \sin^m\pi(2x - 0.5)\cdot\sin^m\pi(2y - 0.5) & \text{in } [0.25, 0.75]\times[0.25, 0.75],\\ 1 & \text{otherwise}, \end{cases} \tag{22}$$
for an integer $m > 1$. The computational domain $\Omega$ is discretized into triangles $K$ of sizes $h_l = 2^{-l}$, $l = 1, \dots, 6$. Numerical tests are performed for different $m = 2, \dots, 9$ in (22), with $s = 20$ in (8), and the relative errors $e_l^1$, $e_l^2$ are measured in the $L^2$-norm and in the $H^1$-norm, respectively, which we compute as
$$e_l^1 = \frac{\|\hat E - \hat E_h\|_{L^2}}{\|\hat E\|_{L^2}}, \tag{23} \qquad e_l^2 = \frac{\|\nabla(\hat E - \hat E_h)\|_{L^2}}{\|\nabla\hat E\|_{L^2}}. \tag{24}$$
Here,
$$|\hat E| := \sqrt{\hat E_1^2 + \hat E_2^2}, \qquad |\hat E_h| := \sqrt{\hat E_{1h}^2 + \hat E_{2h}^2}. \tag{25}$$
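The coefficient (22) and the exact fields (21) transcribe directly into code; the following is an illustrative transcription of ours:

```python
import math

def eps(x, y, m=2):
    # Relative permittivity (22): a smooth bump on [0.25, 0.75]^2,
    # equal to 1 elsewhere (continuous, since the sine vanishes on the edge).
    if 0.25 <= x <= 0.75 and 0.25 <= y <= 0.75:
        return 1.0 + math.sin(math.pi * (2 * x - 0.5)) ** m \
                   * math.sin(math.pi * (2 * y - 0.5)) ** m
    return 1.0

def E_exact(x, y, s=20.0, m=2):
    # Exact solution (21) of the model problem (8).
    c = 2 * math.pi / (s ** 3 * eps(x, y, m))
    E1 = c * math.sin(math.pi * x) ** 2 * math.cos(math.pi * y) * math.sin(math.pi * y)
    E2 = -c * math.sin(math.pi * y) ** 2 * math.cos(math.pi * x) * math.sin(math.pi * x)
    return E1, E2

print(eps(0.5, 0.5))   # peak of the bump: 1 + 1 * 1 = 2
print(eps(0.1, 0.9))   # outside the bump: 1
```

Larger $m$ makes the bump sharper while keeping its peak value 2, which is why the tables below compare $m = 2$ and $m = 9$.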

Figure 2 presents the convergence of the P1 finite element scheme for $m = 2, 9$ in (22). Tables 1 and 2 present the convergence rates $q_1, q_2$ for $m = 2, 9$, which we compute as
$$q_1 = \frac{\log\big(e^1_{l,h}/e^1_{l,2h}\big)}{\log(0.5)}, \qquad q_2 = \frac{\log\big(e^2_{l,h}/e^2_{l,2h}\big)}{\log(0.5)}, \tag{26}$$


Fig. 2 Relative errors for m = 2 (left) and m = 9 (right)

Table 1 Relative errors in the $L^2$-norm and in the $H^1$-norm for mesh sizes $h_l = 2^{-l}$, $l = 1, \dots, 6$, for $m = 2$ in (22). Here, nel is the number of elements and nno the number of nodes in the mesh

l | nel  | nno  | $e^1_l$        | $q_1$ | $e^2_l$        | $q_2$
1 | 8    | 9    | 2.71 · 10⁻²   | –     | 8.60 · 10⁻²   | –
2 | 32   | 25   | 6.66 · 10⁻³   | 2.02  | 3.25 · 10⁻²   | 1.40
3 | 128  | 81   | 1.78 · 10⁻³   | 1.90  | 1.75 · 10⁻²   | 8.99 · 10⁻¹
4 | 512  | 289  | 4.13 · 10⁻⁴   | 2.11  | 1.02 · 10⁻²   | 7.79 · 10⁻¹
5 | 2048 | 1089 | 1.05 · 10⁻⁴   | 1.97  | 5.29 · 10⁻³   | 9.42 · 10⁻¹
6 | 8192 | 4225 | 2.65 · 10⁻⁵   | 1.99  | 2.70 · 10⁻³   | 9.69 · 10⁻¹

Table 2 Relative errors in the $L^2$-norm and in the $H^1$-norm for mesh sizes $h_l = 2^{-l}$, $l = 1, \dots, 6$, for $m = 9$ in (22). Here, nel is the number of elements and nno the number of nodes in the mesh

l | nel  | nno  | $e^1_l$        | $q_1$ | $e^2_l$        | $q_2$
1 | 8    | 9    | 1.73 · 10⁻²   | –     | 7.29 · 10⁻²   | –
2 | 32   | 25   | 3.33 · 10⁻³   | 2.38  | 3.57 · 10⁻²   | 1.03
3 | 128  | 81   | 8.98 · 10⁻⁴   | 1.89  | 2.15 · 10⁻²   | 7.33 · 10⁻¹
4 | 512  | 289  | 2.36 · 10⁻⁴   | 1.93  | 1.08 · 10⁻²   | 9.94 · 10⁻¹
5 | 2048 | 1089 | 6.09 · 10⁻⁵   | 1.96  | 5.26 · 10⁻³   | 1.04
6 | 8192 | 4225 | 1.55 · 10⁻⁵   | 1.98  | 2.62 · 10⁻³   | 1.00

where $e^i_{l,h}$, $e^i_{l,2h}$, $i = 1, 2$, are the relative norms $e^i_l$ computed on the finite element meshes with mesh sizes $h$ and $2h$, respectively. Similar convergence rates are obtained for $m = 3, 4, 5, 8$. Figure 3 shows computed and exact solutions on different finite element meshes for $m = 2$ and $m = 9$ in (22). We observe that our P1 finite element scheme behaves like a first order method in the $H^1(\Omega)$-norm and like a second order method in the $L^2(\Omega)$-norm.
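The observed rates in the tables can be reproduced directly from the tabulated errors via (26); e.g., for the $L^2$-errors of Table 1 ($m = 2$):

```python
import math

def rate(e_h, e_2h):
    # Observed convergence rate (26): log(e_h / e_2h) / log(0.5),
    # where e_h is the error on the finer mesh (h = half of 2h).
    return math.log(e_h / e_2h) / math.log(0.5)

# L2-errors e_l^1 from Table 1 (m = 2), l = 1..6:
e1 = [2.71e-2, 6.66e-3, 1.78e-3, 4.13e-4, 1.05e-4, 2.65e-5]
rates = [rate(e1[l], e1[l - 1]) for l in range(1, len(e1))]
print([round(q, 2) for q in rates])  # all close to 2: second order in L2
```

Small discrepancies in the last digit against the tabulated $q_1$ are expected, since the table was presumably computed from unrounded error values.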

Fig. 3 Computed ($|E_h|$) versus exact ($|E|$) solutions on different meshes ($h$ = 0.125, 0.0625, 0.03125, 0.015625), taking m = 2 and m = 9 in (22)

5 Conclusion

We presented a convergence analysis for the stabilized P1 finite element scheme applied to the solution of time-harmonic Maxwell's equations with a dielectric permittivity function ε(x) that is constant in a boundary neighborhood. For the convergence study of the stabilized P1 finite element method for a time-dependent problem for Maxwell's equations we refer to [2]. Optimal a priori and a posteriori error bounds are derived in weighted energy norms, and numerical results validate the obtained theoretical error bounds. The proposed scheme can be applied to the solution of coefficient inverse problems with a constant dielectric permittivity function in a boundary neighborhood; see [3–5, 8–10, 14, 15] for such problems.

References

1. Asadzadeh, M., Beilina, L.: On stabilized P1 finite element approximation for time harmonic Maxwell's equations (2019). arXiv:1906.02089
2. Beilina, L., Ruas, V.: An explicit P1 finite element scheme for Maxwell's equations with constant permittivity in a boundary neighborhood. arXiv:1808.10720
3. Beilina, L., Klibanov, M.V.: Approximate Global Convergence and Adaptivity for Coefficient Inverse Problems. Springer, New York (2012)
4. Beilina, L., Thánh, N.T., Klibanov, M.V., Malmberg, J.B.: Reconstruction of shapes and refractive indices from blind backscattering experimental data using the adaptivity. Inverse Probl. 30, 105007 (2014)
5. Beilina, L., Thánh, N.T., Klibanov, M.V., Malmberg, J.B.: Globally convergent and adaptive finite element methods in imaging of buried objects from experimental backscattering radar measurements. J. Comput. Appl. Math. (2015). https://doi.org/10.1016/j.cam.2014.11.055
6. Bellassoued, M., Cristofol, M., Soccorsi, E.: Inverse boundary value problem for the dynamical heterogeneous Maxwell's system. Inverse Probl. 28, 095009 (2012)
7. Brenner, S.C., Scott, L.R.: The Mathematical Theory of Finite Element Methods. Springer, Berlin (1994)
8. Bondestam Malmberg, J., Beilina, L.: An adaptive finite element method in quantitative reconstruction of small inclusions from limited observations. Appl. Math. Inf. Sci. 12(1), 1–19 (2018)
9. Burov, V.A., Morozov, S.A., Rumyantseva, O.D.: Reconstruction of fine-scale structure of acoustical scatterers on large-scale contrast background. Acoust. Imaging 26, 231–238 (2002)
10. Bulyshev, A.E., Souvorov, A.E., Semenov, S.Y., Posukh, V.G., Sizov, Y.E.: Three-dimensional vector microwave tomography: theory and computational experiments. Inverse Probl. 20(4), 1239–1259 (2004)
11. Engquist, B., Majda, A.: Absorbing boundary conditions for the numerical simulation of waves. Math. Comp. 31, 629–651 (1977)
12. Monk, P.B.: Finite Element Methods for Maxwell's Equations. Oxford University Press (2003)
13. Nédélec, J.-C.: Mixed finite elements in R³. Numer. Math. 35, 315–341 (1980)
14. Thánh, N.T., Beilina, L., Klibanov, M.V., Fiddy, M.A.: Reconstruction of the refractive index from experimental backscattering data using a globally convergent inverse method. SIAM J. Sci. Comput. 36, B273–B293 (2014)
15. Thánh, N.T., Beilina, L., Klibanov, M.V., Fiddy, M.A.: Imaging of buried objects from experimental backscattering time-dependent measurements using a globally convergent inverse algorithm. SIAM J. Imaging Sci. 8(1), 757–786 (2015)

Regularized Linear Inversion with Randomized Singular Value Decomposition

Kazufumi Ito and Bangti Jin

Abstract In this work, we develop efficient solvers for linear inverse problems based on randomized singular value decomposition (RSVD). This is achieved by combining RSVD with classical regularization methods, e.g., truncated singular value decomposition, Tikhonov regularization, and general Tikhonov regularization with a smoothness penalty. One distinct feature of the proposed approach is that it explicitly preserves the structure of the regularized solution, in the sense that it always lies in the range of a certain adjoint operator. We provide error estimates between the approximation and the exact solution under a canonical source condition, and interpret the approach through the lens of convex duality. Extensive numerical experiments are provided to illustrate the efficiency and accuracy of the approach. Keywords Linear inverse problems · Randomized singular value decomposition · Tikhonov regularization MSC: 65F22

1 Introduction This work is devoted to randomized singular value decomposition (RSVD) for the efficient numerical solution of the following linear inverse problem Ax = b,

(1)

K. Ito Department of Mathematics, North Carolina State University, Raleigh, NC 27607, USA e-mail: [email protected] B. Jin (B) Department of Computer Science, University College London, Gower Street, London WC1E 6BT, UK e-mail: [email protected]; [email protected] © Springer Nature Switzerland AG 2020 L. Beilina et al. (eds.), Mathematical and Numerical Approaches for Multi-Wave Inverse Problems, Springer Proceedings in Mathematics & Statistics 328, https://doi.org/10.1007/978-3-030-48634-1_5


where A ∈ R^{n×m}, x ∈ R^m and b ∈ R^n denote the data formation mechanism, unknown parameter and measured data, respectively. The data b is generated by b = b† + e, where b† = Ax† is the exact data, and x† and e are the exact solution and noise, respectively. We denote by δ = ‖e‖ the noise level. Due to the ill-posed nature of the problem, regularization techniques are often applied to obtain a stable numerical approximation. A large number of regularization methods have been developed. The classical ones include Tikhonov regularization and its variants, truncated singular value decomposition, and iterative regularization techniques, and they are suitable for recovering smooth solutions. More recently, general variational type regularization methods have been proposed to preserve distinct features, e.g., discontinuity, edges and sparsity. This work focuses on recovering a smooth solution by Tikhonov regularization and truncated singular value decomposition, which are still routinely applied in practice. However, with the advent of ever increasing data volumes, their routine application remains challenging, especially in the context of massive data and multi-query settings, e.g., Bayesian inversion or tuning multiple hyperparameters. Hence, it is still of great interest to develop fast inversion algorithms. In this work, we develop efficient linear inversion techniques based on RSVD. Over the last decade, a number of RSVD inversion algorithms have been developed and analyzed [10, 11, 20, 26, 31]. RSVD exploits the intrinsic low-rank structure of A for inverse problems to construct an accurate approximation efficiently. Our main contribution lies in providing a unified framework for developing fast regularized inversion techniques based on RSVD, for the following three popular regularization methods: truncated SVD, standard Tikhonov regularization, and Tikhonov regularization with a smoothness penalty.
The main novelty is that it explicitly preserves a certain range condition on the regularized solution, which is analogous to the source condition in regularization theory [5, 13], and admits an interpretation through the lens of convex duality. Further, we derive error bounds on the approximation with respect to the true solution x† in Sect. 4, in the spirit of regularization theory for noisy operators. These results provide guidelines on the low-rank approximation, and differ from existing results [1, 14, 30, 32, 33], where the focus is on relative error estimates with respect to the regularized solution. Now we situate the work in the literature on RSVD for inverse problems. RSVD has been applied to solving inverse problems efficiently [1, 30, 32, 33]. Xiang and Zou [32] developed RSVD for standard Tikhonov regularization and provided relative error estimates between the approximate and exact Tikhonov minimizers, by adapting the perturbation theory for least-squares problems. In the work [33], the authors proposed two approaches based respectively on transformation to standard form and randomized generalized SVD (RGSVD), and for the latter, RSVD is only performed on the matrix A. There was no error estimate in [33]. Wei et al. [30] proposed different implementations, and derived some relative error estimates. Boutsidis and Magdon-Ismail [1] analyzed the relative error for truncated RSVD, and discussed the sample complexity. Jia and Yang [14] presented a different way to perform truncated RSVD via LSQR for a general smooth penalty, and provided relative error estimates. See also [16] for an evaluation within magnetic particle imaging. More generally, the idea of randomization has been fruitfully employed to reduce the computational


cost associated with regularized inversion in statistics and machine learning, under the name of sketching, in either primal or dual spaces [2, 22, 29, 34]. All these works also essentially exploit the low-rank structure, but in a different manner. Our analysis may also be extended to these approaches. The rest of the paper is organized as follows. In Sect. 2, we recall preliminaries on RSVD, especially the implementation and error bounds. Then in Sect. 3, under one single guiding principle, we develop efficient inversion schemes based on RSVD for three classical regularization methods, and give the error analysis in Sect. 4. Finally we illustrate the approaches with some numerical results in Sect. 5. In the appendix, we describe an iterative refinement scheme for (general) Tikhonov regularization. Throughout, we denote vectors by lowercase letters and matrices by capital letters, by I an identity matrix of appropriate size, by ‖·‖ the Euclidean norm for vectors and the spectral norm for matrices, and by (·, ·) the Euclidean inner product for vectors. The superscript ∗ denotes the vector/matrix transpose. We use the notation R(A) and N(A) to denote the range and kernel of a matrix A, and A_k and Ã_k denote the optimal and approximate rank-k approximations by SVD and RSVD, respectively. The notation c denotes a generic constant which may change at each occurrence, but is always independent of the condition number of A.

2 Preliminaries Now we recall preliminaries on RSVD and technical lemmas.

2.1 SVD and Pseudoinverse

Singular value decomposition (SVD) is one of the most powerful tools in numerical linear algebra. For any matrix A ∈ R^{n×m}, the SVD of A is given by

A = UΣV^*,

where U = [u_1, u_2, . . . , u_n] ∈ R^{n×n} and V = [v_1, v_2, . . . , v_m] ∈ R^{m×m} are column orthonormal matrices, with the vectors u_i and v_i being the left and right singular vectors, respectively, and V^* denotes the transpose of V. The diagonal matrix Σ = diag(σ_i) ∈ R^{n×m} has nonnegative diagonal entries σ_i, known as singular values (SVs), ordered nonincreasingly:

σ_1 ≥ σ_2 ≥ · · · ≥ σ_r > σ_{r+1} = · · · = σ_{min(m,n)} = 0,

where r = rank(A) is the rank of A. Let σ_i(A) be the ith SV of A. The complexity of the standard Golub–Reinsch algorithm for computing the SVD is 4n²m + 8m²n + 9m³ (for n ≥ m) [8, p. 254]. Thus, it is expensive for large-scale problems.


Now we can give the optimal low-rank approximation to A. By the Eckart–Young theorem, the optimal rank-k approximation A_k = U_kΣ_kV_k^* of A (in the spectral norm) satisfies

‖A − U_kΣ_kV_k^*‖ = σ_{k+1},

where U_k ∈ R^{n×k} and V_k ∈ R^{m×k} are the submatrices formed by taking the first k columns of the matrices U and V, and Σ_k = diag(σ_1, . . . , σ_k) ∈ R^{k×k}. The pseudoinverse A† ∈ R^{m×n} of A ∈ R^{n×m} is given by A† = V_rΣ_r^{-1}U_r^*. We have the following properties of the pseudoinverse of a matrix product.

Lemma 1 For any A ∈ R^{m×n}, B ∈ R^{n×l}, the identity (AB)† = B†A† holds, if one of the following conditions is fulfilled: (i) A has orthonormal columns; (ii) B has orthonormal rows; (iii) A has full column rank and B has full row rank.

The next result gives an estimate on the matrix pseudoinverse.

Lemma 2 For symmetric positive semidefinite A, B ∈ R^{m×m}, there holds

‖A† − B†‖ ≤ ‖A†‖‖B†‖‖B − A‖.

Proof Since A is symmetric positive semidefinite, we have A† = lim_{μ→0+}(A + μI)^{-1}. By the identity C^{-1} − D^{-1} = C^{-1}(D − C)D^{-1} for invertible C, D ∈ R^{m×m},

A† − B† = lim_{μ→0+}[(A + μI)^{-1} − (B + μI)^{-1}] = lim_{μ→0+}[(A + μI)^{-1}(B − A)(B + μI)^{-1}] = A†(B − A)B†.

Now the estimate follows from the matrix spectral norm estimate.

Remark 1 The estimate for general matrices is weaker than the one in Lemma 2: for general A, B ∈ R^{n×m} with rank(A) = rank(B) < min(m, n), there holds [25]

‖A† − B†‖ ≤ ((1 + √5)/2)‖A†‖‖B†‖‖B − A‖.

The rank condition is essential, and otherwise the estimate may not hold.

Last, we recall the stability of SVs ([12, Corollary 7.3.8], [27, Sect. 1.3]).

Lemma 3 For A, B ∈ R^{n×m}, there holds

|σ_i(A + B) − σ_i(A)| ≤ ‖B‖,  i = 1, . . . , min(m, n).
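Lemma 3 is easy to check numerically; the following spot-check is our own addition (random test matrices, not from the text):

```python
# Numerical spot-check of Lemma 3 (Weyl's perturbation inequality for SVs):
# |sigma_i(A + B) - sigma_i(A)| <= ||B|| in the spectral norm.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 30))
B = 0.1 * rng.standard_normal((40, 30))

sA = np.linalg.svd(A, compute_uv=False)
sAB = np.linalg.svd(A + B, compute_uv=False)
max_shift = np.max(np.abs(sAB - sA))   # largest change of any singular value
spec_norm_B = np.linalg.norm(B, 2)     # spectral norm of the perturbation
```

The inequality holds uniformly over all indices i, which is what makes RSVD error bounds on ‖A − Ã_k‖ translate into bounds on the perturbed singular values σ̃_i.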


2.2 Randomized SVD

Traditional numerical methods to compute a rank-k SVD, e.g., Lanczos bidiagonalization and Krylov subspace methods, are especially powerful for large sparse or structured matrices. However, for many discrete inverse problems, there is no such structure. The prototypical model in inverse problems is a Fredholm integral equation of the first kind, which gives rise to unstructured dense matrices. Over the past decade, randomized algorithms for computing low-rank approximations have gained popularity. Frieze et al. [6] developed a Monte Carlo SVD to efficiently compute an approximate low-rank SVD based on non-uniform row and column sampling. Sarlos [23] proposed an approach based on random projection, using properties of random vectors to build a subspace capturing the matrix range. Below we describe briefly the basic idea of RSVD, and refer readers to [11] for an overview and to [10, 20, 26] for an incomplete list of recent works.

RSVD can be viewed as an iterative procedure based on SVDs of a sequence of low-rank matrices to deliver a nearly optimal low-rank SVD. Given a matrix A ∈ R^{n×m} with n ≥ m, we aim at obtaining a rank-k approximation, with k ≪ min(m, n). Let Ω ∈ R^{m×(k+p)}, with k + p ≤ m, be a random matrix, with its entries following an i.i.d. Gaussian distribution N(0, 1), and the integer p ≥ 0 is an oversampling parameter (with a default value p = 5 [11]). Then we form a random matrix Y by

Y = (AA^*)^q AΩ,    (2)

where the exponent q ∈ N ∪ {0}. By the SVD of A, i.e., A = UΣV^*, Y is given by Y = UΣ^{2q+1}V^*Ω. Thus Ω is used for probing R(A), and R(Y) captures R(U_k) well. The accuracy is determined by the decay of the σ_i's, and the exponent q can greatly improve the performance when the σ_i's decay slowly. Let Q ∈ R^{n×(k+p)} be an orthonormal basis for R(Y), which can be computed efficiently via QR factorization or skinny SVD. Next we form the (projected) matrix B = Q^*A ∈ R^{(k+p)×m}. Last, we compute the SVD of B,

B = WSV^*,

with W ∈ R^{(k+p)×(k+p)}, S ∈ R^{(k+p)×(k+p)} and V ∈ R^{m×(k+p)}. This again can be carried out efficiently by standard SVD, since the size of B is much smaller. With 1 : k denoting the index set {1, . . . , k}, let Ũ_k = QW(1 : n, 1 : k) ∈ R^{n×k}, Σ̃_k = S(1 : k, 1 : k) ∈ R^{k×k} and Ṽ_k = V(1 : m, 1 : k) ∈ R^{m×k}. The triple (Ũ_k, Σ̃_k, Ṽ_k) defines a rank-k approximation Ã_k: Ã_k = Ũ_kΣ̃_kṼ_k^*.


The triple (Ũ_k, Σ̃_k, Ṽ_k) is a nearly optimal rank-k approximation to A; see Theorem 1 below for a precise statement. The approximation is random due to the range probing by Ω. By its very construction, we have

Ã_k = P̃_kA,    (3)

where P̃_k = Ũ_kŨ_k^* ∈ R^{n×n} is the orthogonal projection onto R(Ũ_k). The procedure for RSVD is given in Algorithm 1. The complexity of Algorithm 1 is about 4(q + 1)nmk, which can be much smaller than that of a full SVD if k ≪ min(m, n).

Algorithm 1 RSVD for A ∈ R^{n×m}, n ≥ m.
1: Input matrix A ∈ R^{n×m}, n ≥ m, and target rank k;
2: Set parameters p (default p = 5) and q (default q = 0);
3: Sample a random matrix Ω = (ω_ij) ∈ R^{m×(k+p)}, with ω_ij ∼ N(0, 1);
4: Compute the randomized matrix Y = (AA^*)^q AΩ;
5: Find an orthonormal basis Q of range(Y) by QR decomposition;
6: Form the matrix B = Q^*A;
7: Compute the SVD of B = WSV^*;
8: Return the rank-k approximation (Ũ_k, Σ̃_k, Ṽ_k), cf. (3).
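The steps of Algorithm 1 can be sketched in a few lines of NumPy; the following is our own minimal implementation (the function name `rsvd` and the synthetic test matrix are ours, not the chapter's):

```python
# A minimal NumPy sketch of Algorithm 1 (randomized SVD for A with n >= m).
import numpy as np

def rsvd(A, k, p=5, q=0, rng=None):
    """Return a rank-k approximation (U_k, S_k, V_k) of A via Algorithm 1."""
    rng = np.random.default_rng(rng)
    n, m = A.shape
    Omega = rng.standard_normal((m, k + p))   # step 3: Gaussian test matrix
    Y = A @ Omega                             # step 4: Y = (A A^*)^q A Omega
    for _ in range(q):
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                    # step 5: orthonormal basis of range(Y)
    B = Q.T @ A                               # step 6: small (k+p) x m matrix
    W, S, Vt = np.linalg.svd(B, full_matrices=False)  # step 7: cheap SVD of B
    return (Q @ W)[:, :k], S[:k], Vt[:k, :].T         # step 8: truncate to rank k

# quick demonstration on a synthetic matrix with decaying singular values
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50)) @ np.diag(0.5 ** np.arange(50)) @ rng.standard_normal((50, 50))
Uk, Sk, Vk = rsvd(A, k=10, q=1, rng=1)
err = np.linalg.norm(A - Uk @ np.diag(Sk) @ Vk.T, 2)   # close to sigma_{k+1}
```

With one power step (q = 1) the spectral-norm error is typically within a small factor of the optimal value σ_{k+1}, consistent with Theorem 1 below.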

Remark 2 The SV σ_i can be characterized by [8, Theorem 8.6.1, p. 441]:

σ_i = max_{u ∈ R^n, u ⊥ span({u_j}_{j=1}^{i−1})} ‖A^*u‖ / ‖u‖.

Thus, one may estimate σ_i(A) directly by σ̃_i(A) = ‖A^*Ũ(:, i)‖, and refine the SV estimate, similar to Rayleigh quotient acceleration for computing eigenvalues.

The following error estimates hold for the RSVD (Ũ_k, Σ̃_k, Ṽ_k) given by Algorithm 1 with q = 0 [11, Corollary 10.9, p. 275], where the second estimate shows how the parameter p improves the accuracy. The exponent q is in the spirit of a power method, and can significantly improve the accuracy in the absence of a spectral gap; see [11, Corollary 10.10, p. 277] for related discussions.

Theorem 1 For A ∈ R^{n×m}, n ≥ m, let Ω ∈ R^{m×(k+p)} be a standard Gaussian matrix, k + p ≤ m and p ≥ 4, and Q an orthonormal basis for R(AΩ). Then with probability at least 1 − 3p^{−p}, there holds

‖A − QQ^*A‖ ≤ (1 + 6((k + p) · p log p)^{1/2})σ_{k+1} + 3(k + p)^{1/2}(∑_{j>k} σ_j²)^{1/2},

and further with probability at least 1 − 3e^{−p}, there holds

‖A − QQ^*A‖ ≤ (1 + 16(1 + k/(p + 1))^{1/2})σ_{k+1} + (8(k + p)^{1/2}/(p + 1))(∑_{j>k} σ_j²)^{1/2}.

The next result is an immediate corollary of Theorem 1. Exponentially decaying SVs arise in, e.g., backward heat conduction and the elliptic Cauchy problem.

Corollary 1 Suppose that the SVs σ_i decay exponentially, i.e., σ_j = c_0c_1^j, for some c_0 > 0 and c_1 ∈ (0, 1). Then with probability at least 1 − 3p^{−p}, there holds

‖A − QQ^*A‖ ≤ (1 + 6((k + p) · p log p)^{1/2} + 3(k + p)^{1/2}/(1 − c_1²)^{1/2})σ_{k+1},

and further with probability at least 1 − 3e^{−p}, there holds

‖A − QQ^*A‖ ≤ (1 + 16(1 + k/(p + 1))^{1/2} + 8(k + p)^{1/2}/((p + 1)(1 − c_1²)^{1/2}))σ_{k+1}.

So far we have assumed that A is tall, i.e., n ≥ m. For the case n < m, one may apply RSVD to A^*, which gives rise to Algorithm 2.

Algorithm 2 RSVD for A ∈ R^{n×m}, n < m.
1: Input matrix A ∈ R^{n×m}, n < m, and target rank k;
2: Set parameters p (default p = 5) and q (default q = 0);
3: Sample a random matrix Ω = (ω_ij) ∈ R^{(k+p)×n}, with ω_ij ∼ N(0, 1);
4: Compute the randomized matrix Y = ΩA(A^*A)^q;
5: Find an orthonormal basis Q of range(Y^*) by QR decomposition;
6: Form the matrix B = AQ;
7: Compute the SVD of B = USV^*;
8: Return the rank-k approximation (Ũ_k, Σ̃_k, Ṽ_k).

The efficiency of RSVD relies crucially on the truly low-rank nature of the problem. The precise spectral decay is generally unknown for many practical inverse problems, although there are known estimates for several model problems, e.g., the X-ray transform [18] and magnetic particle imaging [17]. The decay rates generally worsen as the spatial dimension d increases, at least for integral operators [9], which can potentially hinder the application of RSVD type techniques to high-dimensional problems.


3 Efficient Regularized Linear Inversion with RSVD

Now we develop efficient inversion techniques based on RSVD for problem (1) via truncated SVD (TSVD), Tikhonov regularization and Tikhonov regularization with a smoothness penalty [5, 13]. For large-scale inverse problems, these methods can be expensive, since they involve either a full SVD or large dense linear systems. We aim at reducing the cost by exploiting the inherent low-rank structure of inverse problems, and accurately constructing a low-rank approximation by RSVD. This idea has been pursued recently [1, 14, 30, 32, 33]. Our work is along the same line of research, but provides a unified framework for deriving all three approaches and interprets them through the lens of convex duality. The key observation is the following range type condition on the approximation x̃:

x̃ ∈ R(B),

(4)

where the matrix B is given by

B = A^*,  for truncated SVD and Tikhonov regularization,
B = L†L^{∗†}A^*,  for general Tikhonov regularization,

where L is a regularizing matrix, typically chosen to be a finite difference approximation of first- or higher-order derivatives [5]. Similar to (4), the approximation x̃ is assumed to live in span({v_i}_{i=1}^k) in [34] for Tikhonov regularization, which is slightly more restrictive than (4). An analogous condition on the exact solution x† reads

x† = Bw for some w ∈ R^n.    (5)

In regularization theory [5, 13], (5) is known as the source condition, and w can be viewed as the Lagrange multiplier for the equality constraint Ax† = b†, whose existence is generally not ensured for infinite-dimensional problems. It is often employed to bound the error ‖x̃ − x†‖ of the approximation x̃ in terms of the noise level δ. The construction below explicitly maintains (4), thus preserving the structure of the regularized solution x̃. We will interpret the construction by convex analysis. Below we develop three efficient computational schemes based on RSVD.

3.1 Truncated RSVD

Classical truncated SVD (TSVD) stabilizes problem (1) by looking for the least-squares solution of

min_{x_k} ‖A_kx_k − b‖,  with A_k = U_kΣ_kV_k^*.


Then the regularized solution x_k is given by

x_k = A_k†b = V_kΣ_k^{-1}U_k^*b = ∑_{i=1}^k σ_i^{-1}(u_i, b)v_i.

The truncation level k ≤ rank(A) plays the role of a regularization parameter, and determines the strength of regularization. TSVD requires computing the (partial) SVD of A, which is expensive for large-scale problems. Thus, one can substitute a rank-k RSVD (Ũ_k, Σ̃_k, Ṽ_k), leading to truncated RSVD (TRSVD):

x̂_k = Ṽ_kΣ̃_k^{-1}Ũ_k^*b.

By Lemma 3, Ã_k = Ũ_kΣ̃_kṼ_k^* is indeed of rank k, if ‖A − Ã_k‖ < σ_k. This approach was adopted in [1]. Based on RSVD, we propose an approximation x̃_k defined by

x̃_k = A^*(Ã_kÃ_k^*)†b = A^* ∑_{i=1}^k (ũ_i, b)/σ̃_i² ũ_i.    (6)

By its construction, the range condition (4) holds for x˜k . To compute x˜k , one does not need the complete RSVD (U˜ k , Σ˜ k , V˜k ) of rank k, but only (U˜ k , Σ˜ k ), which is advantageous for complexity reduction [8, p. 254]. Given the RSVD (U˜ k , Σ˜ k ), computing x˜k by (6) incurs only O(nk + nm) operations.
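A minimal sketch of the estimate (6) is given below; this is our own illustration (helper names and the synthetic test problem are ours), using an exact-rank test matrix so that the rank-k RSVD is essentially exact:

```python
# Sketch of the range-preserving TRSVD estimate (6):
# x_k = A^* sum_{i<=k} (u_i, b) / s_i^2 * u_i, using a randomized range finder.
import numpy as np

def trsvd_solution(A, b, k, p=5, rng=None):
    """Compute x = A^* (A_k A_k^*)^dagger b, cf. (6); only (U_k, S_k) are needed."""
    rng = np.random.default_rng(rng)
    Y = A @ rng.standard_normal((A.shape[1], k + p))
    Q, _ = np.linalg.qr(Y)                            # basis for the sampled range
    W, S, _ = np.linalg.svd(Q.T @ A, full_matrices=False)
    Uk, Sk = (Q @ W)[:, :k], S[:k]
    return A.T @ (Uk @ ((Uk.T @ b) / Sk**2))          # O(nk + nm) given (U_k, S_k)

# test problem of exact rank 15, so a rank-15 RSVD captures R(A) exactly
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((100, 15)))
V, _ = np.linalg.qr(rng.standard_normal((80, 15)))
A = U @ np.diag(0.7 ** np.arange(15)) @ V.T
x_true = A.T @ rng.standard_normal(100)               # source condition (5)
x_k = trsvd_solution(A, A @ x_true, k=15, rng=1)      # recovers x_true
```

Because x_true satisfies the source condition (5) and the data is noise-free, the minimum-norm reconstruction coincides with x_true here; with noisy data, k would act as the regularization parameter.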

3.2 Tikhonov Regularization

Tikhonov regularization stabilizes (1) by minimizing the functional

J_α(x) = (1/2)‖Ax − b‖² + (α/2)‖x‖²,

where α > 0 is the regularization parameter. The regularized solution x_α is given by

x_α = (A^*A + αI)^{-1}A^*b = A^*(AA^* + αI)^{-1}b.    (7)

The latter identity verifies (4). The cost of the step in (7) is about nm² + m³/3 or mn² + n³/3 [8, p. 238], and thus it is expensive for large-scale problems. One approach to accelerate the computation is to apply the RSVD approximation Ã_k = Ũ_kΣ̃_kṼ_k^*. Then one obtains a regularized approximation [32]

x̂_α = (Ã_k^*Ã_k + αI)^{-1}Ã_k^*b.    (8)

To preserve the range property (4), we propose an alternative


x̃_α = A^*(Ã_kÃ_k^* + αI)^{-1}b = A^* ∑_{i=1}^k (ũ_i, b)/(σ̃_i² + α) ũ_i.    (9)

For α → 0+, x̃_α recovers the TRSVD solution x̃_k in (6). Given the RSVD (Ũ_k, Σ̃_k), the complexity of computing x̃_α is nearly identical to that of the TRSVD solution x̃_k.
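One practical consequence of (9), relevant to the multi-query setting mentioned in the introduction, is that a single RSVD can be reused across many values of α. The following sketch is our own (an exact-rank test problem, so the rank-k RSVD is essentially exact and (9) matches (7) up to rounding):

```python
# Sketch: with one RSVD (U_k, S_k) in hand, the estimate (9) can be
# re-evaluated cheaply for many regularization parameters alpha.
import numpy as np

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((100, 20)))
V, _ = np.linalg.qr(rng.standard_normal((80, 20)))
A = U @ np.diag(0.6 ** np.arange(20)) @ V.T           # exact rank 20
b = A @ (A.T @ rng.standard_normal(100))              # data with b in R(A)

k, p = 20, 5                                          # rank-20 RSVD is exact here
Q, _ = np.linalg.qr(A @ rng.standard_normal((80, k + p)))
W, S, _ = np.linalg.svd(Q.T @ A, full_matrices=False)
Uk, Sk = (Q @ W)[:, :k], S[:k]

rel_errs = []
for alpha in [1e-1, 1e-2, 1e-3]:
    x_tilde = A.T @ (Uk @ ((Uk.T @ b) / (Sk**2 + alpha)))               # formula (9)
    x_alpha = A.T @ np.linalg.solve(A @ A.T + alpha * np.eye(100), b)   # formula (7)
    rel_errs.append(np.linalg.norm(x_tilde - x_alpha) / np.linalg.norm(x_alpha))
```

Each re-evaluation of (9) costs only O(nk + nm), versus a fresh dense solve per α for (7).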

3.3 General Tikhonov Regularization

Now we consider Tikhonov regularization with a general smoothness penalty:

J_α(x) = (1/2)‖Ax − b‖² + (α/2)‖Lx‖²,    (10)

where L ∈ R^{ℓ×m} is a regularizing matrix enforcing smoothness. Typical choices of L include first-order and second-order derivatives. We assume N(A) ∩ N(L) = {0} so that J_α has a unique minimizer x_α. By the identity

(A^*A + αI)^{-1}A^* = A^*(AA^* + αI)^{-1},    (11)

if N(L) = {0}, the minimizer x_α of J_α is given by (with Γ = L†L†∗)

x_α = (A^*A + αL^*L)^{-1}A^*b = L†((AL†)^*AL† + αI)^{-1}(AL†)^*b = ΓA^*(AΓA^* + αI)^{-1}b.    (12)

The Γ factor reflects the smoothing property of the penalty ‖Lx‖². Similar to (9), we approximate B := AL† via RSVD, B̃_k = Ũ_kΣ̃_kṼ_k^*, and obtain a regularized solution x̃_α by

x̃_α = ΓA^*(B̃_kB̃_k^* + αI)^{-1}b.    (13)

It differs from [33] in that [33] uses only the RSVD approximation of A, and thus does not maintain the range condition (4). The first step of Algorithm 1 applied to B, i.e., forming AL†Ω, probes R(A) with colored Gaussian noise with covariance Γ. Numerically, it also involves applying Γ, which can be carried out efficiently if L is structured. If L is rectangular, we have the following decomposition [4, 32]. The A-weighted pseudoinverse L# [4] can be computed efficiently, if L† is easy to compute and the dimensionality of W is small.

Lemma 4 Let W and Z be any matrices satisfying R(W) = N(L), R(Z) = R(L), Z^*Z = I, and L# = (I − W(AW)†A)L†. Then the solution x_α to (10) is given by

x_α = L#Zξ_α + W(AW)†b,    (14)


where the variable ξ_α minimizes (1/2)‖AL#Zξ − b‖² + (α/2)‖ξ‖².

Lemma 4 does not necessarily entail an efficient scheme, since it requires an orthonormal basis Z for R(L). Hence, we restrict our discussion to the case

L ∈ R^{ℓ×m} with rank(L) = ℓ < m.    (15)

It arises most commonly in practice, e.g., for first-order or second-order derivatives, and there are efficient ways to perform the standard-form reduction. Then we can let Z = I. Slightly abusing the notation Γ = L#L#∗, by Lemma 4, we have

x_α = L#((AL#)^*AL# + αI)^{-1}(AL#)^*b + W(AW)†b = ΓA^*(AΓA^* + αI)^{-1}b + W(AW)†b.

The first term is nearly identical with (12), with L# in place of L†, and the extra term W(AW)†b belongs to N(L). Thus, we obtain an approximation x̃_α defined by

x̃_α = ΓA^*(B̃_kB̃_k^* + αI)^{-1}b + W(AW)†b,    (16)

where B̃_k is a rank-k RSVD approximation to B ≡ AL#. The matrix B can be implemented implicitly via matrix-vector products to maintain efficiency.

3.4 Dual Interpretation

Now we give an interpretation of (13) through the lens of Fenchel duality theory in Banach spaces (see, e.g., [3, Chap. 2.4]). Recall that for a functional F : X → R̄ := R ∪ {∞} defined on a Banach space X, F^* : X^* → R̄ denotes the Fenchel conjugate of F, given for x^* ∈ X^* by

F^*(x^*) = sup_{x∈X} ⟨x^*, x⟩_{X^*,X} − F(x).

Further, let ∂F(x) := {x^* ∈ X^* : ⟨x^*, x̃ − x⟩_{X^*,X} ≤ F(x̃) − F(x) ∀x̃ ∈ X} be the subdifferential of the convex functional F at x, which coincides with the Gâteaux derivative F′(x) if it exists. The Fenchel duality theorem states that if F : X → R̄ and G : Y → R̄ are proper, convex and lower semicontinuous functionals on the Banach spaces X and Y, Λ : X → Y is a continuous linear operator, and there exists an x₀ ∈ X such that F(x₀) < ∞, G(Λx₀) < ∞, and G is continuous at Λx₀, then

inf_{x∈X} F(x) + G(Λx) = sup_{y^*∈Y^*} −F^*(Λ^*y^*) − G^*(−y^*).

Further, the equality is attained at (x̄, ȳ^*) ∈ X × Y^* if and only if


Λ^*ȳ^* ∈ ∂F(x̄)  and  −ȳ^* ∈ ∂G(Λx̄)    (17)

hold [3, Remark 3.4.2]. The next result indicates that the approach in Sects. 3.2–3.3 first applies RSVD to the dual problem to obtain an approximate dual ξ̃_α, and then recovers the optimal primal x̃_α via the duality relation (17). This connection is in the same spirit as dual random projection [29, 34], and it opens up the avenue to extend RSVD to functionals whose conjugate is simple, e.g., nonsmooth fidelity.

Proposition 1 If N(L) = {0}, then x̃_α in (13) is equivalent to RSVD applied to the dual problem.

Proof For any symmetric positive semidefinite Q, the conjugate functional F^* of F(x) = (α/2)x^*Qx is given by F^*(ξ) = (1/(2α))ξ^*Q†ξ, with its domain being R(Q). By SVD, we have (L^*L)† = L†L†∗, and thus F^*(ξ) = (1/(2α))‖L†∗ξ‖². Hence, by the Fenchel duality theorem, the conjugate J_α^*(ξ) of J_α(x) is given by

J_α^*(ξ) := −(1/(2α))‖L†∗A^*ξ‖² − (1/2)‖ξ − b‖².

Further, by (17), the optimal primal and dual pair (x_α, ξ_α) satisfies αL^*Lx_α = A^*ξ_α and ξ_α = b − Ax_α. Since N(L) = {0}, L^*L is invertible, and thus x_α = α^{-1}(L^*L)^{-1}A^*ξ_α = α^{-1}ΓA^*ξ_α. The optimal dual ξ_α is given by ξ_α = α(AL†L^{∗†}A^* + αI)^{-1}b. To approximate ξ_α by ξ̃_α, we employ the RSVD approximation B̃_k to B = AL† and solve

ξ̃_α = arg max_{ξ∈R^n} { −(1/(2α))‖B̃_k^*ξ‖² − (1/2)‖ξ − b‖² }.

We obtain an approximation via the relation x̃_α = α^{-1}ΓA^*ξ̃_α, recovering (13).

Remark 3 For a general regularizing matrix L, one can appeal to the decomposition in Lemma 4, by applying first the standard transformation and then approximating the regularized part via convex duality.

4 Error Analysis Now we derive error estimates for the approximation x˜ with respect to the true solution x † , under sourcewise type conditions. In addition to bounding the error, the estimates provide useful guidelines on constructing the approximation A˜ k .


4.1 Truncated RSVD

We derive an error estimate under the source condition (5). We use the projection matrices P_k = U_kU_k^* and P̃_k = Ũ_kŨ_k^* frequently below.

Lemma 5 For any k ≤ r and ‖A − Ã_k‖ ≤ σ_k/2, there holds ‖A^*(Ã_k^*)†‖ ≤ 2.

Proof It follows from the decomposition A = P̃_kA + (I − P̃_k)A = Ã_k + (I − P̃_k)A that

‖A^*(Ã_k^*)†‖ = ‖(Ã_k + (I − P̃_k)A)^*(Ã_k^*)†‖ ≤ ‖Ã_k^*(Ã_k^*)†‖ + ‖A − Ã_k‖‖Ã_k†‖ ≤ 1 + σ̃_k^{-1}‖A − Ã_k‖.

Now the condition ‖A − Ã_k‖ ≤ σ_k/2 and Lemma 3 imply σ̃_k ≥ σ_k − ‖A − Ã_k‖ ≥ σ_k/2, from which the desired estimate follows.

Now we can state an error estimate for the approximation x̃_k.

Theorem 2 If condition (5) holds and ‖A − Ã_k‖ ≤ σ_k/2, then for the estimate x̃_k in (6), there holds

‖x† − x̃_k‖ ≤ 4δσ_k^{-1} + 8σ_1σ_k^{-1}‖A_k − Ã_k‖‖w‖ + σ_{k+1}‖w‖.

Proof By the decomposition b = b† + e, we have (with P_k^⊥ = I − P_k)

x̃_k − x† = A^*(Ã_kÃ_k^*)†b − A^*(AA^*)†b†
= A^*(Ã_kÃ_k^*)†e + A^*[(Ã_kÃ_k^*)† − (A_kA_k^*)†]b† − P_k^⊥A^*(AA^*)†b†.

The source condition x† = A^*w in (5) implies

x̃_k − x† = A^*(Ã_kÃ_k^*)†e + A^*[(Ã_kÃ_k^*)† − (A_kA_k^*)†]AA^*w − P_k^⊥A^*(AA^*)†AA^*w.

By the triangle inequality, we have

‖x̃_k − x†‖ ≤ ‖A^*(Ã_kÃ_k^*)†e‖ + ‖A^*[(Ã_kÃ_k^*)† − (A_kA_k^*)†]AA^*w‖ + ‖P_k^⊥A^*(AA^*)†AA^*w‖ := I_1 + I_2 + I_3.

It suffices to bound the three terms separately. First, for the term I_1, by the identity (Ã_kÃ_k^*)† = (Ã_k^*)†Ã_k† and Lemma 5, we have

I_1 ≤ ‖A^*(Ã_k^*)†‖‖Ã_k†e‖ ≤ 2σ̃_k^{-1}δ.

Second, for I_2, by Lemmas 5 and 2 and the identity (Ã_kÃ_k^*)† = (Ã_k^*)†Ã_k†, we have


I_2 = ‖A^*[(Ã_kÃ_k^*)† − (A_kA_k^*)†]AA^*w‖
≤ ‖A^*(Ã_kÃ_k^*)†(Ã_kÃ_k^* − A_kA_k^*)(A_kA_k^*)†AA^*‖‖w‖
≤ ‖A^*(Ã_k^*)†‖‖Ã_k†‖‖Ã_kÃ_k^* − A_kA_k^*‖‖(A_kA_k^*)†AA^*‖‖w‖
≤ 4σ̃_k^{-1}‖A‖‖A_k − Ã_k‖‖w‖,

since ‖Ã_kÃ_k^* − A_kA_k^*‖ ≤ ‖Ã_k − A_k‖(‖Ã_k‖ + ‖A_k^*‖) ≤ 2‖A‖‖Ã_k − A_k‖ and ‖(A_kA_k^*)†AA^*‖ ≤ 1. By Lemma 3, we can bound the term ‖(Ã_k^*)†‖ by

‖(Ã_k^*)†‖ = σ̃_k^{-1} ≤ (σ_k − ‖A − Ã_k‖)^{-1} ≤ 2σ_k^{-1}.

Last, we can bound the third term I_3 directly by I_3 ≤ ‖P_k^⊥A^*‖‖w‖ ≤ σ_{k+1}‖w‖. Combining these estimates yields the desired assertion.

Remark 4 The bound in Theorem 2 contains three terms: the propagation error σ_k^{-1}δ, the approximation error σ_{k+1}‖w‖, and the perturbation error σ_k^{-1}‖A‖‖A − Ã_k‖‖w‖. It is of the worst-case scenario type and can be pessimistic. In particular, the error ‖A^*(Ã_kÃ_k^*)†e‖ can be bounded more precisely by ‖A^*(Ã_kÃ_k^*)†e‖ ≤ ‖A^*(Ã_k^*)†‖‖Ã_k†e‖, and ‖Ã_k†e‖ can be much smaller than σ̃_k^{-1}‖e‖, if e concentrates in the high-frequency modes. By balancing the terms, it suffices for Ã_k to have an accuracy O(δ). This is consistent with the analysis for regularized solutions with perturbed operators.

Remark 5 The condition ‖A − Ã_k‖ < σ_k/2 in Theorem 2 requires a sufficiently accurate low-rank RSVD approximation (Ũ_k, Σ̃_k, Ṽ_k) to A, i.e., the rank k is sufficiently large. It enables one to define a TRSVD solution x̃_k of truncation level k.

Next we give a relative error estimate for x̃_k with respect to the TSVD approximation x_k. Such an estimate was the focus of a few works [1, 30, 32, 33]. First, we give a bound on ‖Ã_kÃ_k^*(A_k^*)† − A_k‖.

Lemma 6 The following error estimate holds:

‖Ã_kÃ_k^*(A_k^*)† − A_k‖ ≤ (1 + σ_1σ_k^{-1})‖A_k − Ã_k‖.

Proof This estimate follows by direct computation:

‖Ã_kÃ_k^*(A_k^*)† − A_k‖ = ‖[Ã_kÃ_k^* − A_kA_k^*](A_k^*)†‖
≤ ‖Ã_k(Ã_k^* − A_k^*)(A_k^*)†‖ + ‖(Ã_k − A_k)A_k^*(A_k^*)†‖
≤ ‖Ã_k‖‖Ã_k^* − A_k^*‖‖(A_k^*)†‖ + ‖Ã_k − A_k‖‖A_k^*(A_k^*)†‖
≤ (σ_1σ_k^{-1} + 1)‖Ã_k − A_k‖,

Regularized Linear Inversion with Randomized Singular …


since ‖Ãk‖ = ‖P̃kA‖ ≤ ‖A‖ = σ1. Then the desired assertion follows directly.

Next we derive a relative error estimate between the approximations xk and x̃k.

Theorem 3 For any k < r and ‖A − Ãk‖ < σk/2, there holds
‖xk − x̃k‖/‖xk‖ ≤ 4(1 + σ1/σk)‖Ak − Ãk‖/σk.

Proof We rewrite the TSVD solution xk as
xk = A∗(AkAk∗)†b = Ak∗(AkAk∗)†b.  (18)

By Lemma 3 and the assumption ‖A − Ãk‖ < σk/2, we have σ̃k > 0. Then xk − x̃k = A∗((AkAk∗)† − (ÃkÃk∗)†)b. By Lemma 2,
(AkAk∗)† − (ÃkÃk∗)† = (ÃkÃk∗)†(ÃkÃk∗ − AkAk∗)(AkAk∗)†.
It follows from the identity (AkAk∗)† = (Ak∗)†Ak† and (18) that
xk − x̃k = A∗(ÃkÃk∗)†(ÃkÃk∗ − AkAk∗)(AkAk∗)†b = A∗(ÃkÃk∗)†(ÃkÃk∗(Ak∗)† − Ak)Ak∗(AkAk∗)†b = A∗(Ãk∗)†Ãk†(ÃkÃk∗(Ak∗)† − Ak)xk.
Thus, we obtain
‖xk − x̃k‖/‖xk‖ ≤ ‖A∗(Ãk∗)†‖‖Ãk†‖‖ÃkÃk∗(Ak∗)† − Ak‖.
By Lemma 3, we bound the term ‖Ãk†‖ by ‖Ãk†‖ ≤ 2σk⁻¹. Combining the preceding estimates with Lemmas 5 and 6 completes the proof.

Remark 6 The relative error is determined by k (and in turn by δ etc.). Due to the presence of the factor σk⁻², the estimate requires a highly accurate low-rank approximation, i.e., σk(A)⁻²‖Ak − Ãk‖ must be small, and hence it is more pessimistic than Theorem 2. The estimate is comparable with the perturbation estimate for the TSVD

[1]:
‖xk − x̄k‖/‖xk‖ ≤ σ1/(σk − ‖Ak − Ãk‖) · (‖Ak − Ãk‖/σ1 + ‖Ak − Ãk‖/σk + ‖Axk − b‖/‖b‖).
Modulo the α factor, the estimates in [30, 32] for Tikhonov regularization also depend on σk⁻² (but can be much milder for a large α).
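Algorithm 1 and the TRSVD estimate (6) can be exercised numerically. The following self-contained NumPy sketch is our own illustration (the test matrix, seeds and helper names are not from the chapter): it computes a rank-k RSVD with oversampling p and q power iterations, then compares the TRSVD solution x̃k = A∗(ÃkÃk∗)†b with the exact TSVD solution xk under a source condition x† = A∗w.

```python
import numpy as np

def rsvd(A, k, p=5, q=0, rng=None):
    """Basic randomized SVD with oversampling p and q power iterations;
    returns rank-k factors (U, s, Vt) of an approximate SVD of A."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # Gaussian test matrix
    Y = A @ Omega                             # probe the range of A
    for _ in range(q):                        # optional power iterations
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                    # orthonormal range basis
    B = Q.T @ A                               # project A onto the basis
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(0)
n, k = 200, 10
# Severely ill-posed test matrix: singular values decay like 2^{-j}.
U0, _ = np.linalg.qr(rng.standard_normal((n, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
s0 = 2.0 ** (-np.arange(1, n + 1.0))
A = U0 @ np.diag(s0) @ V0.T

w = rng.standard_normal(n)
x_true = A.T @ w                              # source condition x† = A*w
b = A @ x_true                                # noise-free data for simplicity

# TSVD solution x_k = A*(A_k A_k*)^† b via the exact SVD
Ue, se, Vte = np.linalg.svd(A)
xk = A.T @ (Ue[:, :k] @ ((Ue[:, :k].T @ b) / se[:k] ** 2))

# TRSVD solution via the randomized factors (here with q = 2 power iterations)
Ur, sr, Vtr = rsvd(A, k, p=5, q=2, rng=1)
xk_r = A.T @ (Ur @ ((Ur.T @ b) / sr ** 2))

rel = np.linalg.norm(xk - xk_r) / np.linalg.norm(xk)
```

With a quickly decaying spectrum the randomized factors track the leading singular triplets closely, so x̃k stays near xk; for slowly decaying spectra a larger q or k is needed, in line with the σk⁻²-type sensitivity of Theorem 3.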


4.2 Tikhonov Regularization

The following bounds are useful for deriving an error estimate on x̃α in (9).

Lemma 7 The following estimates hold:
‖(AA∗ + αI)(ÃkÃk∗ + αI)⁻¹ − I‖ ≤ 2α⁻¹‖A‖‖A − Ãk‖,
‖[(AA∗ + αI)(ÃkÃk∗ + αI)⁻¹ − I]AA∗‖ ≤ 2‖A‖(2α⁻¹‖A‖‖A − Ãk‖ + 1)‖A − Ãk‖.

Proof It follows from the identity (AA∗ + αI)(ÃkÃk∗ + αI)⁻¹ − I = (AA∗ − ÃkÃk∗)(ÃkÃk∗ + αI)⁻¹ and the inequality ‖Ãk‖ = ‖P̃kA‖ ≤ ‖A‖ that
‖(AA∗ + αI)(ÃkÃk∗ + αI)⁻¹ − I‖ ≤ α⁻¹(‖A‖ + ‖Ãk‖)‖A − Ãk‖ ≤ 2α⁻¹‖A‖‖A − Ãk‖.
Next, by the triangle inequality,
‖[(AA∗ + αI)(ÃkÃk∗ + αI)⁻¹ − I]AA∗‖ ≤ ‖AA∗ − ÃkÃk∗‖(‖(ÃkÃk∗ + αI)⁻¹(AA∗ + αI)‖ + α‖(ÃkÃk∗ + αI)⁻¹‖).
This, together with the identity AA∗ − ÃkÃk∗ = A(A∗ − Ãk∗) + (A − Ãk)Ãk∗ and the first estimate, yields the second estimate, completing the proof of the lemma.

Now we can give an error estimate on x̃α in (9) under condition (5).

Theorem 4 If condition (5) holds, then the estimate x̃α satisfies
‖x̃α − x†‖ ≤ α^{-3/2}‖A‖‖A − Ãk‖δ + (2α⁻¹‖A‖‖A − Ãk‖ + 1)α^{-1/2}‖A‖‖A − Ãk‖‖w‖ + 2⁻¹α^{1/2}‖w‖.

Proof First, with condition (5), x† can be rewritten as
x† = (A∗A + αI)⁻¹(A∗A + αI)x† = (A∗A + αI)⁻¹(A∗b† + αx†) = (A∗A + αI)⁻¹A∗(b† + αw).
The identity (11) implies x† = A∗(AA∗ + αI)⁻¹(b† + αw). Consequently,
x̃α − x† = A∗[(ÃkÃk∗ + αI)⁻¹b − (AA∗ + αI)⁻¹(b† + αw)] = A∗[(ÃkÃk∗ + αI)⁻¹e + ((ÃkÃk∗ + αI)⁻¹ − (AA∗ + αI)⁻¹)b† − α(AA∗ + αI)⁻¹w].


Let Ĩ = (AA∗ + αI)(ÃkÃk∗ + αI)⁻¹. Then by the identity (11), there holds (A∗A + αI)(x̃α − x†) = A∗[Ĩe + (Ĩ − I)b† − αw], and taking the inner product with x̃α − x† yields
‖A(x̃α − x†)‖² + α‖x̃α − x†‖² ≤ (‖Ĩe‖ + ‖(Ĩ − I)b†‖ + α‖w‖)‖A(x̃α − x†)‖.
By Young's inequality ab ≤ ¼a² + b² for any a, b ∈ ℝ, we deduce
α^{1/2}‖x̃α − x†‖ ≤ 2⁻¹(‖Ĩe‖ + ‖(Ĩ − I)b†‖ + α‖w‖).

By Lemma 7 and the identity b† = AA∗w, we have
‖Ĩe‖ ≤ 2α⁻¹‖A‖‖A − Ãk‖δ,
‖(Ĩ − I)b†‖ = ‖(Ĩ − I)AA∗w‖ ≤ 2‖A‖(2α⁻¹‖A‖‖A − Ãk‖ + 1)‖A − Ãk‖‖w‖.
Combining the preceding estimates yields the desired assertion.

Remark 7 To maintain the accuracy of the error ‖x̃α − x†‖, the accuracy of Ãk should be of O(δ), and α should be of O(δ), which gives an overall accuracy O(δ^{1/2}). The tolerance on ‖A − Ãk‖ can be relaxed for high noise levels. This is consistent with the existing theory for Tikhonov regularization with noisy operators [19, 21, 28].

Remark 8 The following relative error estimate was shown in [32, Theorem 1]:
‖xα − x̂α‖/‖xα‖ ≤ c(2 sec θ κ + tan θ κ²)σk+1 + O(σ²k+1),
with θ = sin⁻¹((‖b − Axα‖² + α‖xα‖²)^{1/2}/‖b‖) and κ = (σ1² + α)(α/(σn² + α) + max_{1≤i≤n} σi/(σi² + α)); κ is a variant of the condition number. Thus Ãk should approximate A accurately in order not to spoil the accuracy, and the estimate can be pessimistic for small α, for which it tends to blow up.
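In practice, formula (9) is never evaluated by forming the n×n resolvent: with the RSVD factors, (ÃkÃk∗ + αI)⁻¹ acts diagonally on the range of Ũk and as α⁻¹I on its orthogonal complement. A minimal NumPy sketch of this evaluation (our own variable names; the rank-k factors are assumed to come from an RSVD routine as in Algorithm 1, here faked with an exact SVD):

```python
import numpy as np

def rsvd_tikhonov(A, U, s, b, alpha):
    """Evaluate x̃_α = A*(Ã_k Ã_k* + αI)^{-1} b from rank-k factors (U, s)
    of A, using
    (Ã_k Ã_k* + αI)^{-1} = U diag(1/(s²+α)) U* + (I − U U*)/α,
    so no dense resolvent is ever formed."""
    c = U.T @ b
    y = U @ (c / (s ** 2 + alpha)) + (b - U @ c) / alpha
    return A.T @ y

# Self-check against the dense formula on a small random problem.
rng = np.random.default_rng(0)
m = n = 60
A = rng.standard_normal((m, n)) @ np.diag(0.5 ** np.arange(n)) @ rng.standard_normal((n, n))
Uf, sf, Vtf = np.linalg.svd(A)
k, alpha = 8, 1e-3
U, s = Uf[:, :k], sf[:k]                       # stand-in for RSVD factors
b = rng.standard_normal(m)

Ak = U @ np.diag(s) @ Vtf[:k, :]
x_dense = A.T @ np.linalg.solve(Ak @ Ak.T + alpha * np.eye(m), b)
x_fast = rsvd_tikhonov(A, U, s, b, alpha)
```

The fast path costs O(mk) per right-hand side once the factors are available, which is where the ~30x speedup reported in Sect. 5.1 comes from.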

4.3 General Tikhonov Regularization

Last, we give an error estimate for x̃α defined in (13) under the following condition:
x† = Γ A∗w,  (19)


where N(L) = {0} and Γ = L†L†∗. Also recall that B = AL†.

Theorem 5 If condition (19) holds, then the regularized solution x̃α in (13) satisfies
‖L(x† − x̃α)‖ ≤ α^{-3/2}‖B‖‖B − B̃k‖δ + (2α⁻¹‖B‖‖B − B̃k‖ + 1)α^{-1/2}‖B‖‖B − B̃k‖‖w‖ + 2⁻¹α^{1/2}‖w‖.

Proof First, by the source condition (19), we rewrite x† as
x† = (A∗A + αL∗L)⁻¹(A∗A + αL∗L)x† = (A∗A + αL∗L)⁻¹(A∗b† + αA∗w).
Now with the identity (A∗A + αL∗L)⁻¹A∗ = Γ A∗(AΓ A∗ + αI)⁻¹, we have x† = Γ A∗(AΓ A∗ + αI)⁻¹(b† + αw). Thus, upon recalling B = AL†, we have
x̃α − x† = Γ A∗[(B̃kB̃k∗ + αI)⁻¹b − (BB∗ + αI)⁻¹(b† + αw)] = Γ A∗[(B̃kB̃k∗ + αI)⁻¹e + ((B̃kB̃k∗ + αI)⁻¹ − (BB∗ + αI)⁻¹)b† − α(BB∗ + αI)⁻¹w].
It follows from the identity
(A∗A + αL∗L)Γ A∗ = (A∗A + αL∗L)L†L†∗A∗ = A∗(BB∗ + αI)
that
(A∗A + αL∗L)(x̃α − x†) = A∗[Ĩe + (Ĩ − I)b† − αw],
with Ĩ = (BB∗ + αI)(B̃kB̃k∗ + αI)⁻¹. Taking the inner product with x̃α − x† and applying the Cauchy–Schwarz inequality yield
‖A(x̃α − x†)‖² + α‖L(x̃α − x†)‖² ≤ (‖Ĩe‖ + ‖(Ĩ − I)b†‖ + α‖w‖)‖A(x̃α − x†)‖,
and Young's inequality implies α^{1/2}‖L(x̃α − x†)‖ ≤ 2⁻¹(‖Ĩe‖ + ‖(Ĩ − I)b†‖ + α‖w‖). The identity b† = Ax† = AL†L†∗A∗w = BB∗w from (19) and Lemma 7 complete the proof.

5 Numerical Experiments and Discussions

Now we present numerical experiments to illustrate our approach. The noisy data b is generated from the exact data b† as follows:


bi = b†i + δ max_j |b†j| ξi,   i = 1, …, n,

where δ is the relative noise level, and the random variables ξi follow the standard Gaussian distribution. All the computations were carried out on a personal laptop with a 2.50 GHz CPU and 8.00 GB RAM using MATLAB 2015b. When implementing Algorithm 1, the default choices p = 5 and q = 0 are adopted. Since the TSVD and Tikhonov solutions are close for suitably chosen regularization parameters, we present only results for Tikhonov regularization (and the general case with L given by the first-order difference, which has a one-dimensional kernel N(L)). Throughout, the regularization parameter α is determined by uniformly sampling an interval on a logarithmic scale and taking the value attaining the smallest reconstruction error, where approximate Tikhonov minimizers are found by either (9) or (16) with a large k (k = 100 in all the experiments).
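The noise model above is straightforward to reproduce. A minimal NumPy sketch (the helper name and the test signal are our own, purely illustrative):

```python
import numpy as np

def add_noise(b_exact, delta, rng=None):
    """Additive Gaussian noise scaled by the peak of the exact data:
    b_i = b†_i + δ max_j |b†_j| ξ_i, with ξ_i ~ N(0, 1)."""
    rng = np.random.default_rng(rng)
    xi = rng.standard_normal(b_exact.shape)
    return b_exact + delta * np.max(np.abs(b_exact)) * xi

# Example: 1% relative noise on a smooth test signal
b_exact = np.sin(np.linspace(0, np.pi, 1000))
b_noisy = add_noise(b_exact, delta=0.01, rng=0)
```

Note that δ here scales the noise by the peak amplitude max_j |b†_j|, not by ‖b†‖, so "1% noise" is relative to the largest data value.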

5.1 One-Dimensional Benchmark Inverse Problems

First, we illustrate the efficiency and accuracy of the proposed approach, and compare it with existing approaches [32, 33]. We consider seven examples (i.e., deriv2, heat, phillips, baart, foxgood, gravity and shaw), taken from the popular public-domain MATLAB package regutools (available from http://www.imm.dtu.dk/~pcha/Regutools/, last accessed on January 8, 2019), which have been used in existing studies (see, e.g., [30, 32, 33]). They are Fredholm integral equations of the first kind, with the first three examples being mildly ill-posed (i.e., the σi decay algebraically) and the rest severely ill-posed (i.e., the σi decay exponentially). Unless otherwise stated, the examples are discretized with a dimension n = m = 5000. The resulting matrices are dense and unstructured. The rank k of Ãk is fixed at k = 20, which is sufficient for all the examples. The numerical results by standard Tikhonov regularization and its two randomized variants, i.e., (8) and (9), are presented in Table 1. The accuracy of the approximations, i.e., the Tikhonov solution xα and the two randomized approximations x̂α (cf. (8), proposed in [32]) and x̃α (cf. (9), proposed in this work), is measured in two different ways:
ẽxz = ‖x̂α − xα‖, ẽij = ‖x̃α − xα‖, e = ‖xα − x†‖, exz = ‖x̂α − x†‖, eij = ‖x̃α − x†‖,
where the methods are indicated by the subscripts. That is, ẽxz and ẽij measure the accuracy with respect to the Tikhonov solution xα, and e, exz and eij measure the accuracy with respect to the exact one x†. The following observations can be drawn from Table 1. For all examples, the three approximations xα, x̃α and x̂α have comparable accuracy relative to the exact solution x†, and the errors eij and exz are fairly close to the error e of the Tikhonov solution


Table 1 Numerical results by standard Tikhonov regularization at two noise levels

example    δ    ẽxz       ẽij       e         exz       eij
baart      1%   1.14e−9   1.14e−9   1.68e−1   1.68e−1   1.68e−1
baart      5%   5.51e−11  6.32e−11  2.11e−1   2.11e−1   2.11e−1
deriv2     1%   2.19e−2   2.41e−2   1.18e−1   1.20e−1   1.13e−1
deriv2     5%   1.88e−2   2.38e−2   1.59e−1   1.60e−1   1.62e−1
foxgood    1%   2.78e−7   2.79e−7   4.93e−1   4.93e−1   4.93e−1
foxgood    5%   1.91e−7   1.96e−7   1.18e0    1.18e0    1.18e0
gravity    1%   1.38e−4   1.41e−4   7.86e−1   7.86e−1   7.86e−1
gravity    5%   1.83e−4   1.84e−4   2.63e0    2.63e0    2.63e0
heat       1%   1.33e0    1.13e0    9.56e−1   1.67e0    1.50e0
heat       5%   9.41e−1   9.45e−1   2.02e0    1.70e0    1.99e0
phillips   1%   5.53e−3   4.09e−3   6.28e−2   6.19e−2   6.24e−2
phillips   5%   6.89e−3   7.53e−3   9.57e−2   9.53e−2   9.79e−2
shaw       1%   3.51e−9   3.49e−9   4.36e0    4.36e0    4.36e0
shaw       5%   1.34e−9   1.37e−9   8.23e0    8.23e0    8.23e0

xα. Thus, RSVD can maintain the reconstruction accuracy. For heat, despite the apparent large magnitude of the errors ẽxz and ẽij, the errors exz and eij are not much worse than e. A close inspection shows that the differences between the reconstructions are mostly in the tail part, which requires more modes for a full resolution. The computing time (in seconds) for obtaining xα, x̃α and x̂α is about 6.60, 0.220 and 0.220, respectively, where for the latter two it also includes the time for computing the RSVD. Thus, for all the examples, with a rank k = 20, RSVD can accelerate standard Tikhonov regularization by a factor of 30 while maintaining the accuracy, and the proposed approach is competitive with the one in [32]. Note that the choice k = 20 can be greatly reduced for severely ill-posed problems; see Sect. 5.2 below for discussions. The preceding observations remain largely valid for general Tikhonov regularization; see Table 2. Since the construction of the approximation x̂α does not retain the structure of the regularized solution xα, the error ẽxz can potentially be much larger than ẽij, which can indeed be observed. The errors e, exz and eij are mostly comparable, except for deriv2. For deriv2, the approximation x̂α suffers from grave errors, since the projection of L into R(Q) is very inaccurate for preserving L. The loss is expected to occur whenever the general Tikhonov penalty is much more effective than the standard one. This shows the importance of structure preservation. Note that, for a general L, x̃α takes only about 1.5 times the computing time of x̂α. This cost can be further reduced since L is highly structured and admits fast inversion. Thus preserving the range structure of xα in (4) does not incur much overhead. Last, we present some results on the computing time for deriv2 versus the problem dimension, and at two truncation levels for RSVD, i.e., k = 20 and k = 30. The numerical results are given in Fig. 1.
Table 2 Numerical results by general Tikhonov regularization (with the first-order derivative penalty) for the examples at two noise levels

example    δ    ẽxz       ẽij       e         exz       eij
baart      1%   3.35e−10  2.87e−10  1.43e−1   1.43e−1   1.43e−1
baart      5%   3.11e−10  8.24e−12  1.48e−1   1.48e−1   1.48e−1
deriv2     1%   1.36e−1   4.51e−4   1.79e−2   1.48e−1   1.78e−2
deriv2     5%   1.57e−1   3.85e−4   2.40e−2   1.77e−1   2.40e−2
foxgood    1%   4.84e−2   2.26e−8   9.98e−1   1.02e0    9.98e−1
foxgood    5%   1.90e−2   1.51e−9   2.27e0    2.28e0    2.27e0
gravity    1%   3.92e−2   2.33e−5   1.39e0    1.41e0    1.39e0
gravity    5%   1.96e−2   9.47e−6   3.10e0    3.10e0    3.10e0
heat       1%   5.54e−1   8.74e−1   8.95e−1   1.06e0    1.32e0
heat       5%   8.90e−1   1.01e0    1.87e0    1.76e0    1.99e0
phillips   1%   3.25e−3   3.98e−4   6.14e−2   6.06e−2   6.14e−2
phillips   5%   5.64e−3   5.82e−4   8.37e−2   8.18e−2   8.34e−2
shaw       1%   3.79e−4   3.70e−8   3.32e0    3.32e0    3.32e0
shaw       5%   9.73e−4   2.17e−8   9.23e0    9.23e0    9.23e0

Fig. 1 The computing time t (in seconds) for the example deriv2 at different dimensions n (m = n), for (a) standard Tikhonov and (b) general Tikhonov regularization. The red, green and blue curves refer to Tikhonov regularization, the existing approach [32, 33] and the new approach, respectively, and the solid and dashed curves denote k = 20 and k = 30, respectively

The cubic scaling of the standard approach and the quadratic scaling of the RSVD-based approach are clearly observed,

confirming the complexity analysis in Sects. 2 and 3. In both (9) and (16), computing the RSVD represents the dominant part of the overall computational effort, and thus increasing the rank k from 20 to 30 adds very little overhead (compare the dashed and solid curves in Fig. 1). Further, for Tikhonov regularization, the two randomized variants are equally efficient, and for the general one, the proposed approach is slightly more expensive due to its direct use of L in constructing the approximation


B̃k to B := AL#. Although not presented, we note that the results for the other examples are very similar.

5.2 Convergence of the Algorithm

There are several factors influencing the quality of x̃α: the regularization parameter α, the noise level δ and the rank k of the RSVD approximation. The optimal truncation level k should depend on both α and δ. This part presents a study with deriv2 and shaw, which are mildly and severely ill-posed, respectively.
First, we examine the influence of α on the optimal k. The numerical results for three different levels of regularization are given in Fig. 2.

Fig. 2 The convergence of the error eij with respect to the rank k for deriv2 (top) and shaw (bottom) with δ = 1% and regularization parameters α∗, 10α∗ and α∗/10 ((a) standard Tikhonov, (b) general Tikhonov)

In the figure, the notation α∗ refers to the value attaining the smallest error for the Tikhonov solution xα, and thus 10α∗ and α∗/10 represent over- and under-regularization, respectively. The optimal k value decreases with the increase of α when α ≥ α∗. This may be explained by the fact that a too large α causes a large approximation error and thus can tolerate large errors in the approximation Ãk (for a small k). The dependence can be sensitive for mildly ill-posed problems, and also on the penalty. The penalty influences the singular value spectra in the RSVD approximation implicitly by preconditioning: since L is a discrete differential operator, the (weighted) pseudoinverse L# (or L†) is a smoothing operator, and thus the singular values of B = AL# decay faster than those of A. In all cases, the error eij is nearly monotonically decreasing in k (and finally levels off at e, as expected). In the under-regularized regime (i.e., α ≪ α∗), the behavior is slightly different: the error eij first decreases, and then increases before eventually leveling off at e. This is attributed to the fact that proper low-rank truncation of A induces extra regularization, in a manner similar to TSVD in Sect. 3.1. Thus, an approximation that is only close to xα (see, e.g., [1, 30, 32, 33]) is not necessarily close to x† when α is not chosen properly.
Next we examine the influence of the noise level δ; see Fig. 3. With the optimal choice of α, the optimal k increases as δ decreases, which is especially pronounced for mildly ill-posed problems. Thus, RSVD is especially efficient in the following two cases: (a) highly noisy data; (b) severely ill-posed problems. These observations agree well with Theorem 4: a low-rank approximation Ãk whose accuracy is commensurate with δ is sufficient, and in either case a small rank suffices for obtaining an acceptable approximation. For a fixed k, the error eij increases almost monotonically with the noise level δ.
These empirical observations naturally motivate the development of an adaptive strategy for choosing the rank k on the fly so as to achieve optimal complexity. This requires a careful analysis of the balance between k, δ and α, and suitable a posteriori estimators. We leave this interesting topic for future work.

Fig. 3 The convergence of the error eij with respect to the rank k for deriv2 (top) and shaw (bottom) at different noise levels δ = 1e−3, 3e−3, 1e−2, 5e−2 ((a) standard Tikhonov, (b) general Tikhonov)

5.3 Electrical Impedance Tomography

Last, we illustrate the approach on 2D electrical impedance tomography (EIT), a diffusive imaging modality for recovering the electrical conductivity from boundary voltage measurements. This is one canonical nonlinear inverse problem. We consider the problem on a unit circle with sixteen electrodes uniformly placed on the boundary, and adopt the complete electrode model [24] as the forward model. It is discretized by the standard Galerkin FEM with conforming piecewise linear basis functions, on a quasi-uniform finite element mesh with 2129 nodes. For the inversion step, we employ ten sinusoidal input currents, unit contact impedance and measure the voltage data (corrupted by δ = 0.1% noise). The reconstructions are obtained with an H¹(Ω)-seminorm penalty. We refer to [7, 15] for details on the numerical implementation. We test the RSVD algorithm with the linearized model. It can be implemented efficiently without explicitly computing the linearized map. More precisely, let F
be the (nonlinear) forward operator, and σ0 be the background conductivity (fixed at 1). Then the random probing of the range R(F′(σ0)) of the linearized forward operator F′(σ0) (cf. Step 4 of Algorithm 1) can be approximated by
F′(σ0)ωi ≈ F(σ0 + ωi) − F(σ0),   i = 1, …, k + p,
and it can be made very accurate by choosing a small variance for the random vectors ωi. Step 6 of Algorithm 1 can be done efficiently via the adjoint technique. The numerical results are presented in Fig. 4, where linearization refers to the reconstruction obtained by linearizing the nonlinear forward model at the background σ0; this is one of the most classical reconstruction methods in EIT imaging. The rank k is taken to be k = 30 for x̃α, which is sufficient given the severely ill-posed nature of the EIT inverse problem. Visually, the RSVD reconstruction is indistinguishable from the conventional approach. Note that contrast loss is often observed for EIT reconstructions obtained with a smoothness penalty. The computing time (in seconds) for RSVD is less than 8, whereas that for the conventional method is about 60. Hence, RSVD can greatly accelerate EIT imaging.

Fig. 4 Numerical reconstructions for EIT with 0.1% noise (left to right: exact, linearization, RSVD)

6 Conclusion

In this work, we have provided a unified framework for developing efficient linear inversion techniques via RSVD and classical regularization methods, building on a certain range condition on the regularized solution. The construction is illustrated on three popular linear inversion methods for finding smooth solutions, i.e., truncated singular value decomposition, Tikhonov regularization and general Tikhonov regularization with a smoothness penalty. We have provided a novel interpretation of the approach via convex duality: it first approximates the dual variable via randomized SVD and then recovers the primal variable via the duality relation. Further, we gave rigorous error bounds on the approximation under the canonical sourcewise representation, which provide useful guidelines for constructing a low-rank approximation. We have presented extensive numerical experiments, including nonlinear tomography, to illustrate the efficiency and accuracy of the approach, and demonstrated its competitiveness with existing methods.

Algorithm 3 Iterative refinement of the RSVD-Tikhonov solution
1: Given A, b and J, initialize (x^0, p^0) = (0, 0).
2: Compute the RSVD (Ũk, Σ̃k, Ṽk) of AL† by Algorithm 1.
3: for j = 1, …, J do
4:   Compute the auxiliary variable z^j by (21).
5:   Update the dual variable p^{j+1} by (22).
6:   Update the primal variable x^{j+1} by (23).
7:   Check the stopping criterion.
8: end for
9: Output x^J as an approximation to xα.


Appendix A: Iterative refinement

Proposition 1 enables iteratively refining the inverse solution when the RSVD is not sufficiently accurate. This idea was proposed in [29, 34] for standard Tikhonov regularization, and we describe the procedure in a slightly more general context. Suppose N(L) = {0}. Given a current iterate x^j, we define a functional J^j_α(δx) for the increment δx by
J^j_α(δx) := ‖A(δx + x^j) − b‖² + α‖L(δx + x^j)‖².
Thus the optimal correction δxα satisfies (A∗A + αL∗L)δxα = A∗(b − Ax^j) − αL∗Lx^j, i.e.,
(B∗B + αI)Lδxα = B∗(b − Ax^j) − αLx^j,  (20)

with B = AL†. However, its direct solution is expensive. We employ the RSVD for a low-dimensional space Ṽk (corresponding to B), parameterize the increment Lδx by Lδx = Ṽk∗z and update z only. That is, we minimize the following functional in z:
J^j_α(z) := ‖A(L†Ṽk∗z + x^j) − b‖² + α‖z + ṼkLx^j‖².
Since k ≪ m, the problem can be solved efficiently. More precisely, given the current estimate x^j, the optimal z solves
(ṼkB∗BṼk∗ + αI)z = ṼkB∗(b − Ax^j) − αṼkLx^j.  (21)

It is the Galerkin projection of (20) for δxα onto the subspace Ṽk. Then we update the dual ξ and the primal x by the duality relation (17):
ξ^{j+1} = b − Ax^j − BṼk∗z^j,  (22)
x^{j+1} = α⁻¹Γ A∗ξ^{j+1}.  (23)

Summarizing these steps gives Algorithm 3. Note that the duality relation (17) enables A and A∗ to enter into play, thereby progressively improving the accuracy. The main extra cost lies in matrix-vector products with A and A∗. The iterative refinement is a linear fixed-point iteration, with the solution xα being a fixed point and the iteration matrix being independent of the iterate. Hence, if the first iteration is contractive, i.e., ‖x¹ − xα‖ ≤ c‖x⁰ − xα‖ for some c ∈ (0, 1), then Algorithm 3 converges linearly to xα. This condition is satisfied if the RSVD approximation (Ũk, Σ̃k, Ṽk) is reasonably accurate for B.
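The refinement loop (20)–(23) is short to write down. Below is a hedged NumPy sketch for the standard Tikhonov case (L = I, so B = A and Γ = I); exact singular vectors stand in for the RSVD factor Ṽk, the dual update (22) appears as the intermediate ξ, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, alpha, iters = 80, 8, 1e-3, 5
U0, _ = np.linalg.qr(rng.standard_normal((n, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
s0 = 0.5 ** np.arange(1, n + 1)               # quickly decaying spectrum
A = U0 @ np.diag(s0) @ V0.T
b = rng.standard_normal(n)

# Reference Tikhonov solution x_α (fixed point of the iteration)
x_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

Vk = V0[:, :k]                                # stand-in for Ṽk (L = I here)
G = Vk.T @ (A.T @ A) @ Vk + alpha * np.eye(k)  # Galerkin matrix of (21)

x = np.zeros(n)
for _ in range(iters):
    r = b - A @ x
    z = np.linalg.solve(G, Vk.T @ (A.T @ r) - alpha * (Vk.T @ x))  # (21)
    xi = r - A @ (Vk @ z)                                          # (22)
    x = (A.T @ xi) / alpha                                         # (23)

rel_err = np.linalg.norm(x - x_alpha) / np.linalg.norm(x_alpha)
```

With the subspace capturing the leading modes exactly, the components inside Ṽk are corrected in one pass, while the remaining modes contract with a factor of roughly σ²_{k+1}/α per sweep — which is why the iteration requires α not too small relative to the truncated spectrum.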


References 1. Boutsidis, C., Magdon-Ismail, M.: Faster SVD-truncated regularized least-squares. In: 2014 IEEE International Symposium on Information Theory, pp. 1321–1325 (2014). https://doi.org/ 10.1109/ISIT.2014.6875047 2. Chen, S., Liu, Y., Lyu, M.R., King, I., Zhang, S.: Fast relative-error approximation algorithm for ridge regression. In: Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence, pp. 201–210 (2015) 3. Ekeland, I., Témam, R.: Convex Analysis and Variational Problems. SIAM, Philadelphia, PA (1999). https://doi.org/10.1137/1.9781611971088 4. Eldén, L.: A weighted pseudoinverse, generalized singular values, and constrained least squares problems. BIT 22(4), 487–502 (1982). https://doi.org/10.1007/BF01934412 5. Engl, H.W., Hanke, M., Neubauer, A.: Regularization of Inverse Problems. Kluwer, Dordrecht (1996). https://doi.org/10.1007/978-94-009-1740-8 6. Frieze, A., Kannan, R., Vempala, S.: Fast Monte-Carlo algorithms for finding low-rank approximations. J. ACM 51(6), 1025–1041 (2004). https://doi.org/10.1145/1039488.1039494 7. Gehre, M., Jin, B., Lu, X.: An analysis of finite element approximation in electrical impedance tomography. Inverse Prob. 30(4), 045,013,24 (2014). https://doi.org/10.1088/0266-5611/30/ 4/045013 8. Golub, G.H., Van Loan, C.F.: Matrix Computations, 3rd edn. Johns Hopkins University Press, Baltimore, MD (1996) 9. Griebel, M., Li, G.: On the decay rate of the singular values of bivariate functions. SIAM J. Numer. Anal. 56(2), 974–993 (2018). https://doi.org/10.1137/17M1117550 10. Gu, M.: Subspace iteration randomization and singular value problems. SIAM J. Sci. Comput. 37(3), A1139–A1173 (2015). https://doi.org/10.1137/130938700 11. Halko, N., Martinsson, P.G., Tropp, J.A.: Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev. 53(2), 217–288 (2011). https://doi.org/10.1137/090771806 12. Horn, R.A., Johnson, C.R.: Matrix Analysis. 
Cambridge University Press, Cambridge (1985). https://doi.org/10.1017/CBO9780511810817 13. Ito, K., Jin, B.: Inverse Problems: Tikhonov Theory and Algorithms. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ (2015) 14. Jia, Z., Yang, Y.: Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization. Inverse Prob. 34(5), 055,013,28 (2018). https://doi.org/10.1088/1361-6420/aab92d 15. Jin, B., Xu, Y., Zou, J.: A convergent adaptive finite element method for electrical impedance tomography. IMA J. Numer. Anal. 37(3), 1520–1550 (2017). https://doi.org/10.1093/imanum/ drw045 16. Kluth, T., Jin, B.: Enhanced reconstruction in magnetic particle imaging by whitening and randomized SVD approximation. Phys. Med. Biol. 64(12), 125,026,21 (2019). https://doi.org/ 10.1088/1361-6560/ab1a4f 17. Kluth, T., Jin, B., Li, G.: On the degree of ill-posedness of multi-dimensional magnetic particle imaging. Inverse Prob. 34(9), 095,006,26 (2018). https://doi.org/10.1088/1361-6420/aad015 18. Maass, P.: The x-ray transform: singular value decomposition and resolution. Inverse Prob. 3(4), 729–741 (1987). http://stacks.iop.org/0266-5611/3/729 19. Maass, P., Rieder, A.: Wavelet-accelerated Tikhonov-Phillips regularization with applications. In: Inverse Problems in Medical Imaging and Nondestructive Testing (Oberwolfach, 1996), pp. 134–158. Springer, Vienna (1997) 20. Musco, C., Musco, C.: Randomized block Krylov methods for stronger and faster approximate singular value decomposition. In: NIPS 2015 (2015) 21. Neubauer, A.: An a posteriori parameter choice for Tikhonov regularization in the presence of modeling error. Appl. Numer. Math. 4(6), 507–519 (1988). https://doi.org/10.1016/01689274(88)90013-X


22. Pilanci, M., Wainwright, M.J.: Iterative Hessian sketch: fast and accurate solution approximation for constrained least-squares. J. Mach. Learn. Res. 17, 38 (2016) 23. Sarlos, T.: Improved approximation algorithms for large matrices via random projections. In: Foundations of Computer Science, 2006. FOCS ’06. 47th Annual IEEE Symposium on (2006). https://doi.org/10.1109/FOCS.2006.37 24. Somersalo, E., Cheney, M., Isaacson, D.: Existence and uniqueness for electrode models for electric current computed tomography. SIAM J. Appl. Math. 52(4), 1023–1040 (1992). https:// doi.org/10.1137/0152060 25. Stewart, G.W.: On the perturbation of pseudo-inverses, projections and linear least squares problems. SIAM Rev. 19(4), 634–662 (1977). https://doi.org/10.1137/1019104 26. Szlam, A., Tulloch, A., Tygert, M.: Accurate low-rank approximations via a few iterations of alternating least squares. SIAM J. Matrix Anal. Appl. 38(2), 425–433 (2017). https://doi.org/ 10.1137/16M1064556 27. Tao, T.: Topics in Random Matrix Theory. AMS, Providence, RI (2012). https://doi.org/10. 1090/gsm/132 28. Tautenhahn, U.: Regularization of linear ill-posed problems with noisy right hand side and noisy operator. J. Inverse Ill-Posed Probl. 16(5), 507–523 (2008). https://doi.org/10.1515/JIIP. 2008.027 29. Wang, J., Lee, J.D., Mahdavi, M., Kolar, M., Srebro, N.: Sketching meets random projection in the dual: a provable recovery algorithm for big and high-dimensional data. Electron. J. Stat. 11(2), 4896–4944 (2017). https://doi.org/10.1214/17-EJS1334SI 30. Wei, Y., Xie, P., Zhang, L.: Tikhonov regularization and randomized GSVD. SIAM J. Matrix Anal. Appl. 37(2), 649–675 (2016). https://doi.org/10.1137/15M1030200 31. Witten, R., Candès, E.: Randomized algorithms for low-rank matrix factorizations: sharp performance bounds. Algorithmica 72(1), 264–281 (2015). https://doi.org/10.1007/s00453-0149891-7 32. Xiang, H., Zou, J.: Regularization with randomized SVD for large-scale discrete inverse problems. 
Inverse Prob. 29(8), 085,008,23 (2013). https://doi.org/10.1088/0266-5611/29/8/085008 33. Xiang, H., Zou, J.: Randomized algorithms for large-scale inverse problems with general Tikhonov regularizations. Inverse Prob. 31(8), 085,008,24 (2015). https://doi.org/10.1088/ 0266-5611/31/8/085008 34. Zhang, L., Mahdavi, M., Jin, R., Yang, T., Zhu, S.: Random projections for classification: a recovery approach. IEEE Trans. Inform. Theory 60(11), 7300–7316 (2014). https://doi.org/10. 1109/TIT.2014.2359204

Parameter Selection in Dynamic Contrast-Enhanced Magnetic Resonance Tomography

Kati Niinimäki, M. Hanhela, and V. Kolehmainen

Abstract In this work we consider the image reconstruction problem of sparsely sampled dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI). DCE-MRI is a technique for acquiring a series of MR images before, during and after intravenous contrast agent administration, and it is used to study microvascular structure and perfusion. To overcome the ill-posedness of the related spatio-temporal inverse problem, we use regularization. In regularization, one of the main problems is how to determine the regularization parameter, which controls the balance between the data fitting term and the regularization term. Most methods for selecting this parameter require the computation of a large number of estimates even in stationary problems. In dynamic imaging, the parameter selection is even more time consuming since separate regularization parameters are needed for the spatial and temporal regularization functionals. In this work, we study the possibility of using the S-curve with DCE-MR data. We select the spatial regularization parameter using the S-curve, leaving the temporal regularization parameter as the only free parameter in the reconstruction problem. In this work, the temporal regularization parameter is selected manually by computing reconstructions with several values of the temporal regularization parameter.

Keywords Sparsely sampled MRI · Dynamic contrast-enhanced MRI · Image reconstruction · S-curve regularization

MSC: 65R20 · 65R32

K. Niinimäki (B) IR4M UMR8081, CNRS, Université Paris-Sud, Université Paris-Saclay, SHFJ, 4 place du Général Leclerc, 91401 Orsay, France e-mail: [email protected]; [email protected] Xray Division, Planmeca Oy, Asentajankatu 6, 00880 Helsinki, Finland

M. Hanhela · V. Kolehmainen Department of Applied Physics, University of Eastern Finland, Kuopio, Finland e-mail: [email protected] V. Kolehmainen e-mail: [email protected]

© Springer Nature Switzerland AG 2020 L. Beilina et al.

Abstract In this work we consider the image reconstruction problem of sparsely sampled dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI). DCE-MRI is a technique for acquiring a series of MR images before, during and after intravenous contrast agent administration, and it is used to study microvascular structure and perfusion. To overcome the ill-posedness of the related spatio-temporal inverse problem, we use regularization. In regularization one of the main problems is how to determine the regularization parameter which controls the balance between data fitting term and regularization term. Most methods for selecting this parameter require the computation of a large number of estimates even in stationary problems. In dynamic imaging, the parameter selection is even more time consuming since separate regularization parameters are needed for the spatial and temporal regularization functionals. In this work, we study the possibility of using the S-curve with DCE-MR data. We select the spatial regularization parameter using the S-curve, leaving the temporal regularization parameter as the only free parameter in the reconstruction problem. In this work, the temporal regularization parameter is selected manually by computing reconstructions with several values of the temporal regularization parameter. Keywords Sparsely sampled MRI · Dynamic contrast-enhanced MRI · Image reconstruction · S-curve regularization MSC: 65R20 · 65R32 K. Niinimäki (B) IR4M UMR8081, CNRS, Université Paris-Sud, Université Paris-Saclay, SHFJ, 4 place du Général Leclerc, 91401 Orsay, France e-mail: [email protected]; [email protected] Xray Division, Planmeca Oy, Asentajankatu 6, 00880 Helsinki, Finland M. Hanhela · V. Kolehmainen Department of Applied Physics, University of Eastern Finland, Kuopio, Finland e-mail: [email protected] V. Kolehmainen e-mail: [email protected] © Springer Nature Switzerland AG 2020 L. Beilina et al. 
(eds.), Mathematical and Numerical Approaches for Multi-Wave Inverse Problems, Springer Proceedings in Mathematics & Statistics 328, https://doi.org/10.1007/978-3-030-48634-1_6



1 Introduction

Dynamic contrast-enhanced MRI (DCE-MRI) is an imaging method which is used to study microvascular structure and tissue perfusion. The method has many applications, including blood-brain-barrier assessment after acute ischemic stroke [20, 30] and treatment monitoring of breast cancer [19, 24] and glioma [25]. The operation principle of DCE-MRI is to inject a bolus of gadolinium-based contrast agent into the blood stream, and to acquire a time series of MRI data with a suitable T1-weighting to obtain a time series of 2D (or 3D) images which exhibit contrast changes induced by concentration changes of the contrast agent in the tissues. The analysis of the contrast agent dynamics during imaging requires high resolution in space and in time; the high temporal resolution is necessary to measure when the contrast is passing through the artery, and it is used for determining the subject-specific arterial input function (AIF), while the high spatial resolution is necessary to adequately capture the boundaries of perfused tissues. In many cases, sufficient time resolution can only be obtained by utilizing an imaging protocol during which only partial k-space sampling can be obtained for each image in the time series. However, acquiring fewer samples than required by the Nyquist criterion makes the image reconstruction problem ill-posed (and non-unique), causing artifacts and deterioration of the image quality when conventional reconstruction methods such as the inverse Fourier transform or re-gridding are employed. According to the theory of compressed sensing (CS) [5–7], images that have a sparse representation can be recovered from undersampled measurements of a linear transform, i.e., a sampling rate below the Nyquist rate, using appropriate nonlinear reconstruction algorithms and appropriate (random) sampling of the data space.
The compressibility of MR images and the fact that the MR scanner measures samples of a linear transform of the unknown image (the k-space samples can be regarded mathematically as Fourier coefficients) suggest that the idea of CS is applicable to MR imaging, offering a potentially significant scan-time reduction without sacrificing image quality. Since the seminal work of Candès, Romberg and Tao in 2006, CS has been extensively applied to MRI: it was first applied to MRI in [18] in 2007, and in 2018 CS-based MRI received FDA approval for clinical use. The undersampling of the k-space for speeding up the dynamic MRI data acquisition can in principle be done in many different ways. For many applications, however, the low-frequency features present in the center of the k-space are important. In [33], for example, it was demonstrated that center-weighted random sampling patterns were preferable to purely random sampling of the k-space within the CS approach. Radial sampling has the advantage that the center of the k-space is sampled densely even when the sampling (i.e., the number of radial spokes) is strongly reduced. In this paper, we consider the image reconstruction problem of DCE-MRI with sparsely sampled golden angle radial data, where the angle between subsequent spokes is ≈111.25°. The number of measurement spokes used for reconstructing a single time frame is chosen to be a Fibonacci number, which was shown to be an optimal choice in [32]. This type of sparse sampling between the time frames results in each time frame having different spokes, and eventually the spokes cover the whole k-space. We also combine the golden angle (GA) sampling with concentric squares sampling. This sampling strategy resembles the linogram method [9] developed for computed tomography imaging, except that the angles of subsequent spokes are chosen according to the golden angle method rather than being equidistant in tan θ as in linogram sampling. Unlike the conventional radial sampling pattern with spokes of equal length, the concentric squares strategy also covers the corners of the k-space, so the sampling pattern collects information on the high frequencies there, which reduces artifacts originating from the lack of corner sampling.

To overcome the ill-posedness of our image reconstruction problem, we use a variational framework. Thus we solve the optimization problem

\[
\hat{f} = \arg\min_{f} \; D(f, m) + \alpha S(f) + \beta T(f), \tag{1}
\]

where f = {f_1, f_2, ..., f_{N_t}} denotes the image sequence, m = {m_1, m_2, ..., m_{N_t}} the data sequence, N_t the number of time frames, D the data-fitting term, S the spatial regularization, T the temporal regularization, and α and β the spatial and temporal regularization parameters, respectively. The selection of the regularization parameters is crucial for the resulting image quality. Several parameter selection methods have been proposed, but most of them require computing a large number of reconstructions with varying parameters, and having to fix two regularization parameters makes the selection even more time consuming. In this work, we propose to use the S-curve method [12, 15, 22, 23] for automatic selection of the spatial regularization parameter α. The idea of the S-curve method is to select the regularization parameter so that the reconstruction has an a priori defined level of sparsity in the chosen transformation domain. In DCE-MRI, a reliable a priori estimate of the sparsity level can be extracted from an anatomical MRI image, which is based on full sampling of the k-space, is always acquired as part of the MRI measurement protocol, and is usually used only for visualization purposes. For the selection of the spatial regularization parameter α, we employ one time frame of the GA data from the baseline measurement before the contrast agent administration. After fixing the spatial regularization parameter, we compute dynamic reconstructions with several values of the parameter β and select a suitable temporal regularization parameter manually. Furthermore, we study the performance of three different temporal regularization functionals, namely temporal smoothness, temporal total variation, and total generalized variation. The proposed method is evaluated using simulated GA DCE-MRI data from a rat brain phantom.
The results are compared to the re-gridding approach, the most widely used non-iterative algorithm for reconstructing images from non-Cartesian MRI data. Our re-gridding method was developed in IR4M UMR8081, CNRS, Université Paris-Sud, using Matlab®. This re-gridding approach needs no additional density correction; it was first used in [16], see also [11].
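As a concrete illustration of the golden-angle ordering discussed above, the following sketch (a hypothetical helper, not code from the paper) generates the spoke angles; the golden angle ≈111.25° and the Fibonacci frame length of 34 spokes match the values used later in the paper.

```python
import numpy as np

def golden_angle_spokes(n_spokes):
    """Spoke angles (degrees, modulo 180) for golden-angle radial sampling.

    Successive spokes are rotated by the golden angle, ~111.246 degrees, so
    any window of consecutive spokes covers k-space nearly uniformly;
    Fibonacci window lengths (e.g. 34 spokes per frame) give the most
    uniform coverage, per Winkelmann et al. [32].
    """
    golden = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0  # ~111.246 degrees
    return (np.arange(n_spokes) * golden) % 180.0

frame = golden_angle_spokes(34)  # one frame: a Fibonacci number of spokes
```

Because consecutive angles never repeat, each time frame acquires a different set of spokes, and over many frames the spokes fill the whole k-space, as described in the text.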


2 Image Reconstruction in Radial DCE-MRI

2.1 Forward Problem

The forward problem in 2D MRI can, for most measurement protocols, be modelled by the Fourier transform

\[
m(k_x, k_y) = \int_{\Omega} f(x, y)\, e^{-i 2\pi (k_x x + k_y y)}\, \mathrm{d}x\, \mathrm{d}y, \tag{2}
\]

where Ω is the image domain, f(x, y) is the unknown image, m(k_x, k_y) is the measured data, and (k_x, k_y) denotes the k-space trajectory. In the discrete setting, the Fourier transform is typically approximated by the multidimensional FFT for Cartesian k-space trajectories and by the non-uniform FFT for non-Cartesian trajectories. In this work, we consider non-uniform k-space trajectories and approximate the Fourier transform by the non-uniform fast Fourier transform (nuFFT) operator [10]. We discretize our functions as follows: the temporal direction is divided into a sequence of N_t (vectorized) images f = {f_1, f_2, ..., f_{N_t}} and data vectors m = {m_1, m_2, ..., m_{N_t}}, where f_t ∈ C^{N_p} and m_t ∈ C^M. The number of data per frame, M, equals the number of GA spokes per frame times the number of samples per spoke, and the number of image pixels is N_p = N × N. Thus, using the nuFFT, we rewrite (2) in discretized form at time t as

\[
m_t = A_t f_t + \varepsilon_t, \qquad t = 1, \ldots, N_t, \tag{3}
\]

where A_t = P_t F S_t, with P_t an interpolation matrix between the Cartesian and the non-Cartesian k-space, F the 2D FFT, S_t a scaling matrix, and ε_t the measurement noise.
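A minimal Cartesian stand-in for the forward operator A_t = P_t F S_t of Eq. (3) can be sketched as follows; the paper itself uses a min-max Kaiser-Bessel nuFFT [10] for the non-Cartesian golden-angle trajectories, whereas here (as an illustrative assumption) S_t is taken as the identity and P_t degenerates to sampling k-space locations given by a boolean mask.

```python
import numpy as np

def forward_cartesian(f, mask):
    """Cartesian toy version of A_t = P_t F S_t (Eq. 3), with S_t = I."""
    F = np.fft.fft2(f, norm="ortho")  # F: orthonormal 2D FFT
    return F[mask]                    # P_t: pick the sampled trajectory points

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))       # a toy image frame f_t
mask = rng.random((8, 8)) < 0.3       # undersampled k-space locations
m = forward_cartesian(f, mask)        # noiseless data vector m_t
```

The data vector has one entry per sampled k-space location, mirroring M = (spokes per frame) × (samples per spoke) in the radial case.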

2.2 Inverse Problem of Dynamic Image Reconstruction

The dynamic inverse problem related to Eq. (3) is: given the measurement time series m = {m_1, m_2, ..., m_{N_t}} and the associated k-space trajectories, recover the unknown images f = {f_1, f_2, ..., f_{N_t}}. To recover f from m, we pose the inverse problem as the optimization problem

\[
\hat{f} = \arg\min_{f} \left\{ \sum_{t=1}^{N_t} \| A_t f_t - m_t \|_2^2 + \alpha S(f) + \beta T(f) \right\}, \tag{4}
\]


where S(f) denotes the spatial regularization functional, α the spatial regularization parameter, β the temporal regularization parameter, and T(f) the temporal regularization functional. In this work, we study the applicability of the S-curve method for selecting the spatial regularization parameter α. Once α has been fixed, the temporal regularization parameter is selected manually by computing estimates with different values of β. Our minimization problem is based on an L_2 data-fidelity term for the measurement model, and we use spatial total variation regularization to promote sparsity of the gradients of each image [27]. Furthermore, we study the performance of three different temporal regularization functionals for promoting temporal regularity of the image series. Our spatio-temporal image reconstruction problem thus reads

\[
\hat{f} = \arg\min_{f} \left\{ \sum_{t=1}^{N_t} \left( \| A_t f_t - m_t \|_2^2 + \alpha \| \nabla f_t \|_{2,1} \right) + \beta T(f) \right\}, \tag{5}
\]

where the isotropic 2D spatial total variation norm of a complex-valued image f_t is defined by

\[
\| \nabla f_t \|_{2,1} = \sum_{k=1}^{N_p} \sqrt{ (\mathrm{Re}(D_x f_{t,k}))^2 + (\mathrm{Re}(D_y f_{t,k}))^2 + (\mathrm{Im}(D_x f_{t,k}))^2 + (\mathrm{Im}(D_y f_{t,k}))^2 }, \tag{6}
\]

where Re and Im denote the real and imaginary parts of f_t, and D_x and D_y are the discrete forward first differences in the horizontal and vertical directions, respectively.

2.2.1 Temporal Regularization 1: Temporal Smoothness (TS)

The temporal smoothness regularization (hereafter referred to as TS) is defined as the squared L_2 norm of the forward first differences in time:

\[
T(f) = \sum_{t=1}^{N_t - 1} \| f_{t+1} - f_t \|_2^2. \tag{7}
\]

This model promotes smooth, slowly changing signals; it was used in [3] for radial DCE myocardial perfusion imaging. TS regularization was compared with temporal TV regularization in the same application in [1].

2.2.2 Temporal Regularization 2: Temporal Total Variation (TV)

Temporal total variation (hereafter referred to as TV) is defined as the L_1 norm of the forward first differences in time:

\[
T(f) = \sum_{t=1}^{N_t - 1} \| f_{t+1} - f_t \|_1. \tag{8}
\]

The temporal total variation model promotes sparsity of the time derivative of the pixel signals, making it well suited to reconstructing piecewise-regular signals that may exhibit large jumps. A smoothed form of temporal total variation was used in [2] for multislice myocardial perfusion imaging.

2.2.3 Temporal Regularization 3: Total Generalized Variation (TGV)

The total generalized variation model [4] generalizes total variation to higher-order differences. Here we use the second-order total generalized variation, which in discrete one-dimensional form reads

\[
T(f) = \min_{v} \sum_{t=1}^{N_t - 1} \left( \| f_{t+1} - f_t - v_t \|_1 + \gamma \| v_{t+1} - v_t \|_1 \right), \tag{9}
\]

where v is an auxiliary vector and γ is a parameter balancing the first- and second-order terms, set to √2 here. The functional balances minimizing the first-order and second-order differences of the signal. The difference from TV regularization is most pronounced in smooth regions, where piecewise-linear solutions are favored over the piecewise-constant solutions of TV. Hereafter this temporal regularization is referred to as TGV. TGV was first used in MRI as a spatial prior in [14]; it has also been used in DCE-MRI as a temporal prior in [31], where different temporal priors were compared in Cartesian MRI of the breast.
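The three temporal penalties of Eqs. (7)–(9) can be sketched as follows, assuming (as an illustration, not the paper's implementation) that the image sequence is stored as a NumPy array of shape (N_t, N_p). Note that the TGV value is evaluated here for a given auxiliary field v, whereas the reconstruction minimizes over v jointly with f.

```python
import numpy as np

def ts_penalty(f):
    """Temporal smoothness, Eq. (7): squared L2 norm of first differences."""
    return float(np.sum(np.abs(np.diff(f, axis=0)) ** 2))

def tv_penalty(f):
    """Temporal total variation, Eq. (8): L1 norm of first differences."""
    return float(np.sum(np.abs(np.diff(f, axis=0))))

def tgv_penalty(f, v, gamma=np.sqrt(2.0)):
    """Second-order TGV, Eq. (9), evaluated at a given auxiliary field v
    of shape (N_t - 1, N_p); the value is an upper bound on the TGV
    functional, which minimizes over v."""
    d = np.diff(f, axis=0) - v
    return float(np.sum(np.abs(d)) + gamma * np.sum(np.abs(np.diff(v, axis=0))))
```

For a signal that is linear in time, choosing v equal to the (constant) first difference drives the TGV penalty to zero, while TS and TV still penalize the slope — the piecewise-linear preference described above.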

2.3 Regularization Parameter Selection

2.3.1 Spatial Regularization Parameter Selection

The spatial regularization parameter is selected using the S-curve method, originally proposed in [12, 15, 22, 23] and here modified for TV regularization. Assume that we have an a priori estimate Ŝ of the total variation norm of the unknown function. In practice, we can use an anatomical image of the same slice to obtain a reliable estimate of Ŝ. Such an anatomical image is practically always acquired as part of the DCE-MRI acquisition protocol but is usually used only for visualization purposes. If no anatomical image were acquired, we could, in the case of GA acquisition, estimate the expected sparsity level from a conventional reconstruction of a long sequence of baseline data taken before the contrast agent injection, or the anatomical image could be estimated from the entire data acquisition, similarly to the composite image in [21].

Given the estimate Ŝ, we select the regularization parameter α with the S-curve method as follows:

(1) Take a sequence of regularization parameters 0 < α^(1) < α^(2) < · · · < α^(L) < ∞.
(2) Compute the corresponding estimates fˆ_1(α^(1)), ..., fˆ_1(α^(L)). With DCE-MRI data, the reconstructions fˆ_1(α^(ℓ)) are computed as follows: we take the data corresponding to the first time frame, m_1, whose number of elements equals the number of GA spokes per frame times the number of points per spoke, and reconstruct f_1 for a given value α^(ℓ) by

\[
\hat{f}_1(\alpha^{(\ell)}) = \arg\min_{f_1} \left\{ \| A_1 f_1 - m_1 \|_2^2 + \alpha^{(\ell)} \| \nabla f_1 \|_{2,1} \right\}.
\]

Here it is important that α^(1) is so small that the problem is under-regularized and the reconstruction fˆ_1(α^(1)) is a very noisy image with a large TV norm, and that α^(L) is so large that the problem is over-regularized and the TV norm of fˆ_1(α^(L)) is very close to zero.
(3) Compute the TV norms S(α^(ℓ)) of the recovered estimates fˆ_1(α^(ℓ)), ℓ = 1, ..., L.
(4) Fit a smooth interpolation curve to the data {α^(ℓ), S(α^(ℓ))}, ℓ = 1, ..., L, and use the interpolated sparsity curve to find the value of α for which S(α) = Ŝ. For the interpolation we use Matlab's® interp function and interpolate the original S-curve onto a denser discretization of the regularization parameter α.
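Steps (1)–(4) can be sketched as follows, under the assumption that the reconstructions fˆ_1(α^(ℓ)) and their TV norms have already been computed; linear interpolation in log α onto a denser grid stands in for Matlab's interp.

```python
import numpy as np

def s_curve_alpha(alphas, tv_norms, s_hat, n_dense=405):
    """Pick alpha so the interpolated sparsity curve hits the target S_hat.

    alphas   : increasing regularization parameters alpha^(1..L)
    tv_norms : TV norms S(alpha^(l)) of the reconstructions f_1(alpha^(l))
    s_hat    : a priori TV-norm estimate from the anatomical image
    """
    # dense grid of alpha values, logarithmically spaced over [alpha^(1), alpha^(L)]
    dense = np.logspace(np.log10(alphas[0]), np.log10(alphas[-1]), n_dense)
    # interpolate the S-curve in log(alpha), then find S(alpha) closest to S_hat
    s_dense = np.interp(np.log10(dense), np.log10(alphas), tv_norms)
    return dense[np.argmin(np.abs(s_dense - s_hat))]
```

With a monotonically decreasing S-curve (noisy, large-TV reconstructions at small α; near-zero TV at large α), a larger target Ŝ yields a smaller selected α, as the method intends.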

2.4 Temporal Regularization Parameter Selection

Once the spatial regularization parameter α has been fixed using the S-curve, the temporal regularization parameter β can be tuned by computing estimates with different values of β and selecting a suitable value manually, for example by visual assessment of the results. In this work, we compute results with three different temporal regularization models using simulated measurement data. Since we consider a simulated test case where a ground truth is available, we select an optimal value of β for each temporal regularization model as the value that produces the reconstruction with the smallest root mean square error (RMSE). The RMSE values were calculated separately for three regions: the tumour, the vascular region, and the rest of the image domain. The RMSEs of the different ROIs were then combined into a joint RMSE as

\[
\mathrm{RMSE}_{\mathrm{joint}} = \sqrt{ \mathrm{RMSE}_{\Omega_{roi1}}^2 + \mathrm{RMSE}_{\Omega_{roi2}}^2 + \mathrm{RMSE}_{\Omega_{roi3}}^2 }, \tag{10}
\]

where Ω_roi1 corresponds to pixels in the vascular region, Ω_roi2 to pixels in the tumour region, and Ω_roi3 to pixels in the rest of the image domain. The RMSE was calculated this way to weight the small tumour and vascular regions appropriately.

In estimating the pharmacokinetic parameters of tissues, an accurate arterial input function (AIF) is required [28]. The AIF can be obtained via population averaging, but using a patient-specific AIF produces more accurate estimates of the kinetic parameters [26]. The AIF is preferably extracted from an artery feeding the tissues of interest, but it can also be estimated from a venous sinus or vein when no artery is visible [17]. Here the AIF is estimated from the superior sagittal sinus.
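The joint RMSE of Eq. (10) can be sketched as follows; `roi_masks` is a hypothetical list of boolean masks for Ω_roi1, Ω_roi2, and Ω_roi3.

```python
import numpy as np

def joint_rmse(f_hat, f_true, roi_masks):
    """Joint RMSE of Eq. (10): per-ROI RMSEs combined in quadrature.

    Computing one RMSE per region keeps the small vascular and tumour
    regions from being swamped by the much larger background region.
    """
    per_roi = [np.sqrt(np.mean(np.abs(f_hat[m] - f_true[m]) ** 2))
               for m in roi_masks]
    return float(np.sqrt(np.sum(np.square(per_roi))))
```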

3 Materials and Methods

A simulated test case modelling DCE-MRI measurements of a glioma in a rat brain was created. The rat brain phantom was based on the rat brain atlas in [29] and scaled to a size of 128 × 128. The rat brain image was divided into three subdomains of different signal behaviour: the vascular region (labelled '1' and highlighted in red in Fig. 1), the tumour region (labelled '2' and highlighted in blue in Fig. 1), and the rest of the brain tissue. The vascular signal region corresponds to the location of the superior sagittal sinus. A time series of 2800 ground truth images was simulated by multiplying the signal of each pixel by the template of the corresponding region and adding the result to the baseline value of the pixel. The tumour signal templates were based on an experimental DCE-MRI measurement described in [13], where the three different ROIs were identified. Figure 1 shows the signal templates for each of the tissue regions. One spoke of GA k-space data was simulated for each of the simulated images, leading to a dynamic experiment with 2800 spokes of k-space data. The time scale of the simulation was set to be similar to the in vivo measurements in [13], where the measurement time between consecutive GA spokes was 38.5 ms. Complex Gaussian noise at 5% of the mean of the absolute values of the signal was added to the simulated k-space signal. The simulated test case used a k-space trajectory combining the golden angle and concentric squares sampling strategies. In [13] it was found that a reconstruction of the form (5) performed optimally with a segment length¹ of 34 for a similar data set; we therefore selected 34 as the segment length for our reconstructions, leading to a temporal resolution of ≈1.3 s.

All the regularized reconstructions in this work were computed using the Chambolle-Pock primal-dual algorithm [8]. In the nuFFT implementation of the forward model, the measurements were interpolated onto a twice-oversampled Cartesian grid using min-max Kaiser-Bessel interpolation with a neighbourhood size of 4 [10]. The re-gridding reconstructions were computed using Matlab code developed at Imagerie par résonance magnétique médicale et multi-modalités (IR4M), UMR8081, Université Paris-Sud, France. We remark that when computing the RMS error (10), the reconstructed time signal of each pixel was linearly interpolated in the temporal direction to match the temporal resolution of the ground truth phantom.

¹ Segment length equals the number of radial spokes per image. The number of elements M in the data vector m_t is the segment length times the number of samples per spoke.

Fig. 1 Template signals of the different ROIs. Top left: the simulated image, with the vascular ROI labelled '1' and marked in red and the tumour ROI labelled '2' and marked in blue. Top right: the template signal of the vascular ROI. Bottom left: the template signal of the tumour ROI. Bottom right: the template signal of tissue outside the two ROIs. The vertical axis in the three template signals is the multiplier for the signal added to the base signal
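As a hedged illustration of the Chambolle-Pock iteration used for the reconstructions, the following toy sketch solves the one-dimensional temporal-TV denoising problem min_x ½‖x − b‖² + α‖Dx‖₁, not the paper's full spatio-temporal functional (5); D denotes the forward first difference, and the function and its parameters are assumptions made here for illustration.

```python
import numpy as np

def cp_temporal_tv(b, alpha, n_iter=2000, tau=0.5, sigma=0.5):
    """Chambolle-Pock iteration for min_x 0.5||x - b||^2 + alpha*||Dx||_1.

    Step sizes must satisfy tau * sigma * ||D||^2 <= 1 (here ||D||^2 <= 4).
    """
    n = b.size
    x = b.copy()
    x_bar = b.copy()
    y = np.zeros(n - 1)  # dual variable, one entry per first difference
    for _ in range(n_iter):
        # dual ascent, then projection onto [-alpha, alpha]
        # (the prox of the conjugate of alpha*||.||_1)
        y = np.clip(y + sigma * np.diff(x_bar), -alpha, alpha)
        # apply D^T (negative divergence with zero boundary conditions)
        dty = np.zeros(n)
        dty[:-1] -= y
        dty[1:] += y
        # primal prox of the quadratic data term 0.5||x - b||^2
        x_new = (x - tau * dty + tau * b) / (1.0 + tau)
        # over-relaxation step
        x_bar = 2.0 * x_new - x
        x = x_new
    return x
```

With α = 0 the iteration reduces to the unregularized least-squares solution x = b; with α > 0 it flattens small temporal fluctuations while preserving jumps, the behaviour the temporal TV prior is chosen for.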


4 Results

The selection of α was carried out using the first 34 spokes (i.e., the first frame m_1) of the DCE-MRI data, and the selected α was then used for all the spatio-temporal reconstructions with the different temporal regularizations. The rat brain phantom of Sect. 3 was used to compute the a priori level of sparsity; in our case, we computed the TV norm of the first time frame of the dynamic phantom, which gave a sparsity level of Ŝ = 0.0259. The spatial regularization parameter α was selected using the S-curve as described in Sect. 2.3.1, resulting in the value α = 7.3e−4. The TV norms of the reconstructions for the S-curve were computed with 15 values of α ranging over the interval [10⁻⁷, 10³]; these 15 TV-norm values were then interpolated with Matlab's® interp function to 405 values. The resulting S-curve for the determination of α is presented in Fig. 2.

In many practical applications, the a priori information used to estimate the value of Ŝ may come from a different modality or from an acquisition with a different pulse sequence than the one used in the dynamic measurement. Therefore, in order to compute a meaningful estimate of Ŝ for the TV-regularized case, the reference image has to be scaled to be compatible with the measured dynamic data. This normalization of the reference image can be obtained by

\[
f_{\mathrm{ref}} \leftarrow \frac{\| m_t \|}{\| A_t f_{\mathrm{ref}} \|} \, f_{\mathrm{ref}},
\]

where f_ref denotes the reference image, m_t the frame of dynamic data used in the S-curve, and A_t the respective forward model.
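A minimal sketch of this normalization, assuming a linear `forward` callable standing in for A_t:

```python
import numpy as np

def normalize_reference(f_ref, m_t, forward):
    """Scale f_ref by ||m_t|| / ||A_t f_ref|| so that its TV norm is on the
    same scale as reconstructions from the measured frame m_t."""
    return (np.linalg.norm(m_t) / np.linalg.norm(forward(f_ref))) * f_ref
```

For a linear forward model, the scaled reference then satisfies ‖A_t f_ref‖ = ‖m_t‖, which is what makes its TV norm comparable to reconstructions from the dynamic data.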

Fig. 2 S-curve for selecting the spatial regularization parameter for the simulated data case. Left: plot of the interpolation curve used to determine the value of α such that S(α) = Ŝ = 0.0259 (vertical axis: TV norm, ranging from ≈0.01 to ≈0.06; horizontal axis: α, ranging from 10⁻⁷ to 10³). Right: reconstruction (resolution 128 × 128) of the first time frame (t = 1) using the selected value α = 7.3e−4

Fig. 3 L-surface for selecting the spatial and temporal regularization parameters simultaneously, resulting in α = 3.1623e−4 and β = 3.1623e−6. The axes in the image are x1 = log(‖f_{t+1} − f_t‖), x2 = log(‖∇f_t‖_{2,1}) and z = log(‖A_t f_t − m_t‖_2)
The temporal regularization parameter β was selected by computing reconstructions (5) with different values of β and then choosing the value with the smallest joint RMSE, see (10). This resulted in a smallest joint RMSE of 0.085 for the TS model (β = 0.01), 0.058 for the TV model (β = 0.0017), and 0.063 for the TGV model (β = 0.0022). As a reference method, we selected the regularization parameters α and β using the L-surface method for the case where spatial total variation and temporal total variation form the regularization model. The resulting L-surface is presented in Fig. 3; its application to our data gave the parameters α = 3.1623e−4 and β = 3.1623e−6. Figure 4 shows slices of all reconstructions before, during and after contrast agent administration. Figure 5 shows the reconstructed images with the different methods for one time frame; the top row of Fig. 5 shows the phantom with a red square denoting a domain that is presented as a closeup in Fig. 6 for all of the reconstructions. As can be seen from Figs. 4, 5 and 6, the classical re-gridding method fails on data with as high a temporal resolution as employed here, so we leave the regridded reconstruction out of the figures of the temporal evolution of the ROI signals. The L-surface method, however, performs well on our data, so we include it in the temporal evolution studies. The temporal evolutions in the vascular domain and in the tumor region are averages over Ω_roi1 and Ω_roi2, respectively. Figure 7 shows the time signals of the reconstructions in the vascular region (Ω_roi1) with the different temporal regularization models; the corresponding signals in the tumor region (Ω_roi2) are shown in Fig. 8. The tumor region is accurately reconstructed with all the methods, with only small differences between them, the L-surface method being the noisiest.

In the vascular region, the methods differ more, with TGV giving the best and TS the worst reconstruction quality. The TS method shows smoothing at both the maximum and minimum signal levels, whereas TGV reconstructs the fast signal change of the vascular region most reliably. The L-surface method is again more


Fig. 4 Reconstructions with different temporal regularizations at different time frames. From top to bottom: true phantom, TS, TV, TGV, regrid and L-surface. From left to right time frames: before, during and after CA administration


Fig. 5 Reconstructions with different temporal regularizations after CA administration. Top row: true phantom (left), TS reconstruction (middle) and TV reconstruction (right). Bottom row: TGV reconstruction (left), regrid-reconstruction (middle) and L-surface reconstruction (right). The area highlighted in red is presented as closeups in Fig. 6

noisy than the other reconstructions in the vascular region, and its performance in reconstructing the vascular signal falls between TS and TV, most likely because the temporal regularization parameter is too small, whereas the spatial regularization parameter is of the same order of magnitude as the parameter obtained with the S-curve.

5 Conclusions

Variational regularization based solutions for dynamic MRI problems usually include two regularization parameters that the user has to select, one for the spatial and one for the temporal regularization. Typically, both parameters are selected manually based on visual assessment of the reconstructed images. In this work we proposed to use the S-curve method for the automatic selection of the spatial regularization parameter, leaving the temporal regularization parameter as the only free parameter. The S-curve method selects the regularization parameter based on the expected sparsity of the unknown images in the domain of the regularization functional. Furthermore, the method requires computing reconstructions for relatively few values of the parameter, making it computationally efficient. The approach was demonstrated to lead to a feasible choice of the spatial regularization parameter in


Fig. 6 Closeups of reconstructions with different temporal regularization methods after CA administration. Top row: true phantom (left), TS reconstruction (middle) and TV reconstruction (right). Bottom row: TGV reconstruction (left), regrid-reconstruction (middle) and L-surface reconstruction (right)

Fig. 7 Reconstructions of vascular region (Ωr oi1 ) with the different methods at their optimal parameters according to the joint RMSE. Black line: true signal, blue line: TV reconstruction, green line: TS reconstruction, red line: TGV reconstruction and light blue line: L-surface reconstruction. Left: temporal evolution during all time frames. Right: closeup image during CA administration


Fig. 8 Reconstructions of tumor region (Ωr oi2 ) with the different methods at their optimal parameters according to the joint RMSE. Black line: true signal, blue line: TV reconstruction, green line: TS reconstruction, red line: TGV reconstruction and light blue line: L-surface reconstruction. Left: temporal evolution during all time frames. Right: closeup image during CA administration

a simulated DCE-MRI experiment of a rat brain. The reconstructions were also computed with three different temporal regularization models using the same fixed spatial regularization parameter, demonstrating the robustness of the approach across different temporal regularization models. While we proposed automatic selection of the spatial regularization parameter, the temporal regularization parameter was still selected manually. In this work, we selected the temporal regularization parameter by computing RMS errors with respect to a ground truth; when the ground truth is unknown, the temporal regularization parameter can instead be selected by visual assessment of the reconstructed images, leaving β as the only manually tuned parameter. In future work, we aim to study methods for automatic selection of the temporal regularization parameter as well. One possibility could be to extend the S-curve to select a parameter that leads to an expected sparsity level in the domain of the temporal regularization; a feasible estimate of the expected sparsity level in the time direction could potentially be extracted from the changes in the dynamic measurement data.

Acknowledgements This work was supported by the Jane and Aatos Erkko foundation and the Academy of Finland, Centre of Excellence in Inverse Modelling and Imaging (project 312343).


References

1. Adluru, G., DiBella, E.V.R.: A comparison of L1 and L2 norms as temporal constraints for reconstruction of undersampled dynamic contrast enhanced cardiac scans with respiratory motion. In: Proceedings of International Society for Magnetic Resonance in Medicine, vol. 16, p. 340 (2008)
2. Adluru, G., McGann, C., Speier, P., Kholmovski, E.G., Shaaban, A., Dibella, E.V.R.: Acquisition and reconstruction of undersampled radial data for myocardial perfusion magnetic resonance imaging. J. Magn. Reson. Imaging 29(2), 466–473 (2009)
3. Adluru, G., Whitaker, R.T., DiBella, E.V.R.: Spatio-temporal constrained reconstruction of sparse dynamic contrast enhanced radial MRI data. In: Proceedings of IEEE International Symposium on Biomedical Imaging, pp. 109–112 (2007)
4. Bredies, K., Kunisch, K., Pock, T.: Total generalized variation. SIAM J. Imaging Sci. 3(3), 492–526 (2010)
5. Candès, E., Romberg, K.J., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)
6. Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52, 489–509 (2006)
7. Candès, E.J., Romberg, J.K., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)
8. Chambolle, A., Levine, S.E., Lucier, B.J.: An upwind finite-difference method for total variation-based image smoothing. SIAM J. Imaging Sci. 4(1), 277–299 (2011)
9. Edholm, P.R., Herman, G.T.: Linograms in image reconstruction from projections. IEEE Trans. Med. Imaging 6(4), 301–307 (1987)
10. Fessler, J.A., Sutton, B.P.: Nonuniform fast Fourier transforms using min-max interpolation. IEEE Trans. Signal Process. 51, 560–574 (2003)
11. Guillot, G., Giovannelli: Iddn.fr.001.080011.000.s.p.2019.000.31230. International Identifier of Digital Works, 2 (2019)
12. Hämäläinen, K., Kallonen, A., Kolehmainen, V., Lassas, M., Niinimäki, K., Siltanen, S.: Sparse tomography. SIAM J. Sci. Comput. 35(3), B644–B665 (2013)
13. Hanhela, M., Kettunen, M., Gröhn, O., Vauhkonen, M., Kolehmainen, V.: Temporal Huber regularization for DCE-MRI. J. Math. Imaging Vis., manuscript (2019)
14. Knoll, F., Bredies, K., Pock, T., Stollberger, R.: Second order total generalized variation (TGV) for MRI. Magn. Reson. Med. 65(2), 480–491 (2011)
15. Kolehmainen, V., Lassas, M., Niinimäki, K., Siltanen, S.: Sparsity-promoting Bayesian inversion. Inverse Probl. 28(2), 025005 (2012)
16. Kusmia, S., Eliav, U., Navon, G., Guillot, G.: DQF-MT MRI of connective tissues: application to tendon and muscle. Magn. Reson. Mater. Phys. Biol. Med. 26, 203–214 (2013)
17. Lavini, C., Verhoeff, J.J.C.: Reproducibility of the gadolinium concentration measurements and of the fitting parameters of the vascular input function in the superior sagittal sinus in a patient population. Magn. Reson. Imaging 28(10), 1420–1430 (2010)
18. Lustig, M., Donoho, D., Pauly, J.M.: Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn. Reson. Med. 58, 1182–1195 (2007)
19. Martincich, L., Montemurro, F., De Rosa, G., Marra, V., Ponzone, R., Cirillo, S., Gatti, M., Biglia, N., Sarotto, I., Sismondi, P., Regge, D., Aglietta, M.: Monitoring response to primary chemotherapy in breast cancer using dynamic contrast-enhanced magnetic resonance imaging. Breast Cancer Res. Treat. 83(1), 67–76 (2004)
20. Merali, Z., Huang, K., Mikulis, D., Silver, F., Kassner, A.: Evolution of blood-brain-barrier permeability after acute ischemic stroke. PLoS One 12(2), 1–11 (2017)
21. Mistretta, C.A., Wieben, O., Velikina, J., Block, W., Perry, J., Wu, Y., Johnson, K., Wu, Y.: Highly constrained backprojection for time-resolved MRI. Magn. Reson. Med. 55(1), 30–40 (2006)
22. Niinimäki, K.: Computational optimization methods for large-scale inverse problems. Ph.D. thesis, University of Eastern Finland (2013)


23. Niinimäki, K., Lassas, M., Hämäläinen, K., Kallonen, A., Kolehmainen, V., Niemi, E., Siltanen, S.: Multi-resolution parameter choice method for total variation regularised tomography (2015). Submitted. http://arxiv.org/abs/1407.2386
24. Pickles, M., Lowry, M., Manton, D., Gibbs, P., Turnbull, L.: Role of dynamic contrast enhanced MRI in monitoring early response of locally advanced breast cancer to neoadjuvant chemotherapy. Breast Cancer Res. Treat. 91(1), 1–10 (2005)
25. Piludu, F., Marzi, S., Pace, A., Villani, V., Fabi, A., Carapella, C., Terrenato, I., Antenucci, A., Vidiri, A.: Early biomarkers from dynamic contrast-enhanced magnetic resonance imaging to predict the response to antiangiogenic therapy in high-grade gliomas. Neuroradiology 57(12), 1269–1280 (2015)
26. Port, R.E., Knopp, M.V., Brix, G.: Dynamic contrast-enhanced MRI using Gd-DTPA: interindividual variability of the arterial input function and consequences for the assessment of kinetics in tumors. Magn. Reson. Med. 45(6), 1030–1038 (2001)
27. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 60(1–4), 259–268 (1992)
28. Tofts, P.S., Brix, G., Buckley, D.L., Evelhoch, J.L., Henderson, E., Knopp, M.V., Larsson, H.B.W., Lee, T.-Y., Mayr, N.A., Parker, G.J.M., Port, R.E., Taylor, J., Weisskoff, R.M.: Estimating kinetic parameters from dynamic contrast-enhanced T1-weighted MRI of a diffusable tracer: standardized quantities and symbols. J. Magn. Reson. 10(3), 223–232 (1999)
29. Valdés-Hernández, P.A., Sumiyoshi, A., Nonaka, H., Haga, R., Aubert-Vásquez, E., Ogawa, T., Iturria-Medina, Y., Riera, J.J., Kawashima, R.: An in vivo MRI template set for morphometry, tissue segmentation, and fMRI localization in rats. Front. Neuroinformatics 5, 26 (2011)
30. Villringer, K., Sanz Cuesta, B.E., Ostwaldt, A.-C., Grittner, U., Brunecker, P., Khalil, A.A., Schindler, K., Eisenblätter, O., Audebert, H., Fiebach, J.B.: DCE-MRI blood–brain barrier assessment in acute ischemic stroke. Neurology 88(5), 433–440 (2017)
31. Wang, D., Arlinghaus, L.R., Yankeelov, T.E., Yang, X., Smith, D.S.: Quantitative evaluation of temporal regularizers in compressed sensing dynamic contrast enhanced MRI of the breast. Int. J. Biomed. Imaging 2017, 7835749 (2017)
32. Winkelmann, S., Schaeffter, T., Koehler, T., Eggers, H., Doessel, O.: An optimal radial profile order based on the golden ratio for time-resolved MRI. IEEE Trans. Med. Imaging 26(1), 68–76 (2007)
33. Zong, X., Lee, J., Poplawsky, A.J., Kim, S.-G., Jong, C.Y.: Compressed sensing fMRI using gradient-recalled echo and EPI sequences. NeuroImage 92, 312–321 (2014)

Convergence of Explicit P1 Finite-Element Solutions to Maxwell's Equations

Larisa Beilina and V. Ruas

Abstract This paper is devoted to the numerical validation of an explicit finite-difference scheme for the integration in time of Maxwell's equations in terms of the sole electric field. The space discretization is performed by the standard P1 finite element method, combined with a treatment of the time-derivative term by a technique of the mass-lumping type. The rigorous reliability analysis of this numerical model was the subject of another paper by the authors [2]. More specifically, such a study applies to the particular case where the electric permittivity has a constant value outside a sub-domain whose closure does not intersect the boundary of the domain where the problem is defined. Our numerical experiments in two-dimensional space certify that the convergence results previously derived for this approach are optimal, as long as the underlying CFL condition is satisfied.

Keywords CFL condition · Explicit scheme · Mass-lumping · Maxwell's equations · P1 finite elements

MSC: 65N12 · 65N15 · 78M10

L. Beilina (B)
Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, SE-412 96 Gothenburg, Sweden
e-mail: [email protected]
V. Ruas
Institut Jean Le Rond d'Alembert, UMR 7190 CNRS - Sorbonne Université, 75005 Paris, France
e-mail: [email protected]
CNPq, Brasilia, Brazil

© Springer Nature Switzerland AG 2020 L. Beilina et al. (eds.), Mathematical and Numerical Approaches for Multi-Wave Inverse Problems, Springer Proceedings in Mathematics & Statistics 328, https://doi.org/10.1007/978-3-030-48634-1_7


1 Introduction

The purpose of this article is to provide a numerical validation of an explicit scheme based on P1 finite-element space discretizations for solving the hyperbolic Maxwell's equations for the electric field with constant dielectric permittivity in a neighborhood of the boundary of the computational domain. This numerical model was thoroughly studied in [2] from the theoretical point of view. The standard continuous P1 FEM is a tempting possibility for solving Maxwell's equations, owing to its simplicity. It is well known, however, that for different reasons this method is not always well suited for this purpose. The first reason is that in general the natural function space for the electric field is not the Sobolev space H¹, but rather the space H(curl). Another issue, difficult to overcome with continuous Lagrange finite elements, is the prescription of the zero tangential-component boundary conditions for the electric field, which hold in many important applications. All this motivated the proposal by Nédélec, about four decades ago, of a family of H(curl)-conforming methods to solve these equations (cf. [23]). These methods are still widely in use, as are other approaches well adapted to such specific conditions (see e.g. [1, 13, 25]). A comprehensive description of finite element methods for Maxwell's equations can be found in [20]. There are situations, however, in which the P1 finite element method does provide an inexpensive and reliable way to solve Maxwell's equations. In this work we address one such case, characterized by the fact that the electric permittivity is constant in a neighborhood of the whole boundary of the domain of interest. This is because, at least in theory, whenever the electric permittivity is constant, Maxwell's equations simplify into as many wave equations as there are space dimensions.
More precisely, here we show by means of numerical examples that, in such a particular case, a space discretization with conforming linear elements, combined with a straightforward explicit finite-difference scheme of the mass-lumping type for the time integration, gives rise to optimal approximations of the electric field, as long as a classical CFL condition is satisfied. Actually, this work is strongly connected with the studies presented in [3, 4], which combine a finite-difference discretization in a sub-domain with constant permittivity with a finite-element discretization in the complementary sub-domain. As pointed out above, Maxwell's equations reduce to the wave equation in the former case. Since the study of finite-difference methods for this type of equation is well established, only P1 finite element space discretizations of Maxwell's equations are considered in this paper. In [3, 4] a stabilized domain-decomposition finite-element/finite-difference approach for the solution of the time-dependent Maxwell's system for the electric field was proposed and numerically verified. In these works, different manners of handling the divergence-free condition in a finite-element formulation were considered. The main idea behind the domain decomposition methods in [3, 4] is that a rectangular computational domain is decomposed into two sub-domains, in which two different types of discretizations are employed, namely the finite-element domain,


in which a classical P1 finite element approach is employed, and the finite-difference domain, in which the standard five- or seven-point finite difference scheme is applied, according to the space dimension. The finite-element domain lies strictly inside the finite-difference domain, in such a way that both domains overlap in two layers of structured nodes. First-order absorbing boundary conditions [15] are enforced on the boundary of the computational domain, i.e. on the outer boundary of the finite-difference domain. In [3, 4] it was assumed that the dielectric permittivity function is strictly positive and has a constant value in the overlapping nodes as well as in a neighborhood of the boundary of the domain. An explicit time-integration scheme was used both in the finite-element and in the finite-difference domain. We recall that, from the theoretical point of view, divergence-free edge elements are the most satisfactory choice for a stable finite-element solution of Maxwell's equations [20, 23]. However, edge elements are less attractive for solving time-dependent problems, since a linear system of equations has to be solved at every time iteration. In contrast, P1 elements with a lumped mass matrix can be efficiently used in connection with an explicit solution scheme [14, 19]. On the other hand, it is also well known that the numerical solution of Maxwell's equations with nodal finite elements can result in unstable spurious solutions [21, 24]. Nevertheless, a number of techniques are available to remove them, and in this respect we refer for example to [16–18, 22, 24]. In the current work, similarly to [3, 4], the spurious solutions are removed from the finite-element solution by adding a divergence-free term to the model equation for the electric field. Numerical tests given in [4] demonstrate that spurious solutions are indeed removable when an explicit scheme with P1 finite elements is employed.
The efficient use of an explicit scheme combined with P1 finite-element discretizations for the solution of coefficient inverse problems (CIPs), in the particular context described above, was made evident in [5]. In many algorithms aimed at solving electromagnetic CIPs, a collection of experimental measurements on the boundary of a computational domain is necessary in order to determine the dielectric permittivity function therein. In this case, in principle, the numerical solution of the time-dependent Maxwell's equations is required in the entire space R³ (see e.g. [5–10]), but instead it can be more efficient to consider Maxwell's equations with a constant dielectric permittivity in a neighborhood of the boundary of a computational domain. The explicit scheme with P1 finite elements considered in this work was numerically tested on the solution of the time-dependent Maxwell's system in both two- and three-dimensional geometry (cf. [4]). It was also combined with several algorithms to solve different CIPs for determining the dielectric permittivity function in connection with the time-dependent Maxwell's equations, using both simulated and experimentally generated data (see [6–10]). In short, the validation of our formal reliability analysis for such a method, conducted in this work, confirms the previously observed adequacy of this numerical approach. An outline of this paper is as follows: In Sect. 2 we describe the model problem and give its equivalent variational form. In Sect. 3 we set up the discretizations of the model problem in both space and time, and recall the main results of the reliability


analysis conducted in [2] for the underlying numerical model. Section 4 is devoted to the numerical experiments that validate such results. We conclude in Sect. 5 with a few comments.

2 The Model Problem

The particular form of Maxwell's equations for the electric field e = (e₁, e₂) in a bounded domain Ω of R² with boundary ∂Ω that we deal with in this work is as follows. First we consider that Ω = Ω̄_in ∪ Ω_out, where Ω_in is an interior open set whose boundary does not intersect ∂Ω and Ω_out is the complementary set of Ω̄_in with respect to Ω. Now, in case e satisfies (homogeneous) Dirichlet boundary conditions, we are given e₀ ∈ [H¹(Ω)]² and e₁ ∈ H(div, Ω) satisfying ∇·(εe₀) = ∇·(εe₁) = 0, where ε is the electric permittivity. ε is assumed to belong to W^{2,∞}(Ω) and to fulfill ε ≡ 1 in Ω_out and ε ≥ 1 otherwise. Incidentally, throughout this article we denote the standard semi-norm of C^m(Ω̄) by |·|_{m,∞} for m > 0 and the standard norm of C⁰(Ω̄) by ‖·‖_{0,∞}. In doing so, the problem to solve is:

ε ∂_tt e + ∇ × ∇ × e = 0 in Ω × (0, T),
e(·, 0) = e₀(·) and ∂_t e(·, 0) = e₁(·) in Ω,
e = 0 on ∂Ω × (0, T),   (1)
∇·(εe) = 0 in Ω.

Remark 1 The analysis carried out in [2] extends in a rather straightforward manner to absorbing conditions ∂_n e = −∂_t e prescribed on the boundary, where ∂_n e represents the outer normal derivative of e on ∂Ω. This case is important for it corresponds to practical situations considered in [6–10]. Details on such an extension will be addressed in a forthcoming paper.

Next, we set (1) in variational form. With this aim, we denote the standard inner product of [L²(Ω)]² by (·, ·) and the corresponding norm by ‖·‖. Further, for a given non-negative function ω ∈ L^∞(Ω) we introduce the weighted L²(Ω)-semi-norm ‖·‖_ω := (∫_Ω ω |·|² dx)^{1/2}, which is actually a norm if ω ≠ 0 everywhere in Ω̄. We also introduce the notation (a, b)_ω := ∫_Ω ω a · b dx for two fields a, b which are square integrable in Ω. Notice that if ω is strictly positive this expression defines an inner product associated with the norm ‖·‖_ω. Then, requiring that e|_{t=0} = e₀, ∂_t e|_{t=0} = e₁ and e = 0 on ∂Ω × [0, T], we write for all v ∈ [H₀¹(Ω)]²:

(∂_tt e, v)_ε + (∇e, ∇v) + (∇·(εe), ∇·v) − (∇·e, ∇·v) = 0 ∀t ∈ (0, T).   (2)


3 The Numerical Model

Henceforth we restrict our studies to the case where Ω is a polygonal domain.

3.1 Space Semi-discretization

Let V_h be the usual P1 FE space of continuous functions related to a mesh T_h fitting Ω, consisting of triangles with maximum edge length h and belonging to a quasi-uniform family of meshes (cf. [12]). Setting V_h := [V_h ∩ H₀¹(Ω)]², we define e_{0h} (resp. e_{1h}) to be the usual V_h-interpolate of e₀ (resp. e₁). Then the semi-discretized problem in space that we wish to solve reads: find e_h ∈ V_h such that for all v ∈ V_h,

(∂_tt e_h, v)_ε + (∇e_h, ∇v) + (∇·(εe_h), ∇·v) − (∇·e_h, ∇·v) = 0,   (3)
e_h(·, 0) = e_{0h}(·) and ∂_t e_h(·, 0) = e_{1h}(·) in Ω.

3.2 Full Discretization

To begin with, we consider a natural centered time-discretization scheme to solve (3), namely: given a number N of time steps, we define the time increment τ := T/N. Then we approximate e_h(kτ) by e_h^k ∈ V_h for k = 1, 2, …, N according to the following scheme for k = 1, 2, …, N − 1:

((e_h^{k+1} − 2e_h^k + e_h^{k−1})/τ², v)_ε + (∇e_h^k, ∇v) + (∇·(εe_h^k), ∇·v) − (∇·e_h^k, ∇·v) = 0 ∀v ∈ V_h,   (4)

e_h^0 = e_{0h} and e_h^1 = e_h^0 + τ e_{1h} in Ω.

Owing to the coupling, on the left-hand side of (4), of the nodal values of e_h^{k+1} through the mass matrix, e_h^{k+1} cannot be determined explicitly from (4) at every time step. In order to enable an explicit solution we resort to the classical mass-lumping technique. We recall that for a constant ε this consists of replacing, on the left-hand side, the inner product (u, v)_ε by a discrete inner product (u, v)_{ε,h}, using the trapezoidal rule to compute the integral ∫_K εu|_K · v|_K dx (resp. ∫_{K∩∂Ω} u|_K · v|_K dS) for every element K in T_h, where u stands for e_h^{k+1} − 2e_h^k + e_h^{k−1}. It is well known that in this case the matrix associated with (εe_h^{k+1}, v)_h for v ∈ V_h is a diagonal matrix. In our case ε is not constant, but the same property will hold if we approximate the integral of εu|_K · v|_K in each triangle K ∈ T_h as follows:

∫_K εu|_K · v|_K dx ≈ ε(G_K) area(K) (1/3) Σ_{i=1}^{3} u(S_{K,i}) · v(S_{K,i}),

where S_{K,i}, i = 1, 2, 3, are the vertices of K and G_K is the centroid of K. Before pursuing, we define the auxiliary function ε_h whose value in each K ∈ T_h is constant, equal to ε(G_K). Then, still denoting the approximation of e_h(kτ) by e_h^k, for k = 1, 2, …, N − 1 we determine e_h^{k+1} by:

((e_h^{k+1} − 2e_h^k + e_h^{k−1})/τ², v)_{ε_h,h} + (∇e_h^k, ∇v) + (∇·(εe_h^k), ∇·v) − (∇·e_h^k, ∇·v) = 0 ∀v ∈ V_h,   (5)

e_h^0 = e_{0h} and e_h^1 = e_h^0 + τ e_{1h} in Ω.
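To make the structure of (5) concrete, here is a minimal one-dimensional analogue: P1 elements with a lumped (diagonal) mass matrix applied to the wave equation, which is the constant-permittivity reduction of Maxwell's equations mentioned in the Introduction. The function name and the 1D setting are ours, not the paper's (the actual experiments in Sect. 4 are two-dimensional and use the WavES package):

```python
import numpy as np

# Minimal 1D analogue of the explicit mass-lumped scheme (5): P1 finite
# elements for the wave equation u_tt = u_xx on (0, 1) with homogeneous
# Dirichlet conditions. Lumping makes the mass matrix diagonal, so every
# time step is fully explicit.

def leapfrog_p1(n_el, T, cfl=0.5):
    h = 1.0 / n_el
    x = np.linspace(0.0, 1.0, n_el + 1)
    tau = cfl * h                          # respects a CFL bound tau <= h/nu
    n_steps = int(round(T / tau))
    # P1 stiffness matrix (tridiagonal) and lumped (diagonal) mass matrix
    K = (np.diag(2.0 * np.ones(n_el + 1))
         - np.diag(np.ones(n_el), 1) - np.diag(np.ones(n_el), -1)) / h
    m = h * np.ones(n_el + 1)
    m[0] = m[-1] = h / 2.0                 # trapezoidal (lumped) weights
    u_prev = np.sin(np.pi * x)             # exact solution: sin(pi x) cos(pi t)
    # start-up step: the initial velocity e_1 is zero here; we add a
    # second-order O(tau^2) correction to the first-order start-up of (5)
    u = u_prev - 0.5 * tau**2 * (K @ u_prev) / m
    u[0] = u[-1] = 0.0
    for _ in range(n_steps - 1):
        u_next = 2.0 * u - u_prev - tau**2 * (K @ u) / m
        u_next[0] = u_next[-1] = 0.0       # homogeneous Dirichlet condition
        u_prev, u = u, u_next
    return x, u, n_steps * tau

x, u, t_end = leapfrog_p1(64, 0.4)
err = np.max(np.abs(u - np.sin(np.pi * x) * np.cos(np.pi * t_end)))
```

With the choice τ = h/2 the update is stable, and refining the mesh (and time step) reduces the error roughly by a factor of four, consistent with the second-order L² behaviour reported in Sect. 4.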

3.3 Convergence Results

Recalling the assumption that ε ∈ W^{2,∞}(Ω), we first set

η := 2 + |ε|_{1,∞} + 2|ε|_{2,∞}.   (6)

Next we recall the classical inverse inequality (cf. [12]), together with a result in [11], according to which

‖∇v‖ ≤ C h⁻¹ ‖v‖_{ε_h,h} for all v ∈ V_h,   (7)

where C is a mesh-independent constant. Now we assume that τ satisfies the following CFL condition:

τ ≤ h/ν with ν := C(1 + 3‖ε − 1‖_∞)^{1/2}.   (8)
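Condition (8) involves a constant ν that is not evaluated explicitly in the paper. As an illustrative sanity check, one can compute the largest ν compatible with the mesh sizes h_l = 2^{−l} and time steps τ_l = 0.025 · 2^{−l} used later in Sect. 4:

```python
# Illustrative check of the CFL condition (8), tau <= h / nu, for the
# discretization choices made in Sect. 4: h_l = 2**(-l) and
# tau_l = 0.025 * 2**(-l), l = 1, ..., 6. The constant nu is not given
# explicitly in the paper, so we only report the largest nu that these
# pairs can accommodate.
levels = range(1, 7)
pairs = [(2.0**-l, 0.025 * 2.0**-l) for l in levels]
max_nu = min(h / tau for h, tau in pairs)
print(max_nu)
```

Since h_l/τ_l = 1/0.025 = 40 at every level, the time step varies linearly with h, which is exactly the regime in which the error estimate below yields first-order convergence.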

Further, we assume that the solution e to Eq. (1) belongs to [H⁴(Ω × (0, T))]². Let us define a function e_h in Ω̄ × [0, T] whose value at t = kτ equals e_h^k for k = 1, 2, …, N and which varies linearly with t in each time interval ((k − 1)τ, kτ), in such a way that ∂_t e_h(x, t) = (e_h^k(x) − e_h^{k−1}(x))/τ for every x ∈ Ω̄ and t ∈ ((k − 1)τ, kτ). We also define a^{m+1/2}(·), for any field a(·, t), to be a(·, (m + 1/2)τ). Provided the CFL condition (8) is fulfilled and τ also satisfies τ ≤ 1/(2η), under the above regularity assumption on e there exists a constant C depending only on Ω, ε and T such that

max_{1≤m≤N−1} ‖[∂_t(e_h − e)]^{m+1/2}‖ + max_{2≤m≤N} ‖∇(e^m − e_h^m)‖ ≤ C (τ + h + h²/τ) (‖e‖_{H⁴(Ω×(0,T))} + |e₀|₂ + |e₁|₂).   (9)

Estimate (9) means that, as long as τ varies linearly with h, first-order convergence of scheme (5) in terms of either τ or h holds in the sense of the norms on the left-hand side of (9).

4 Numerical Validation

We perform numerical tests in time (0, T) = (0, 0.5) in the computational domain Ω = [0, 1] × [0, 1] for the model problem in two space dimensions, namely

ε ∂_tt e − ∇²e − ∇(∇·[(ε − 1)e]) = f in Ω × (0, T),
e(·, 0) = 0 and ∂_t e(·, 0) = 0 in Ω,   (10)
e = 0 on ∂Ω × (0, T),

for the electric field e = (e₁, e₂). The source data f (the right-hand side) is chosen such that the functions

e₁ = (1/ε)(t²/2) 2π sin²(πx) cos(πy) sin(πy),
e₂ = −(1/ε)(t²/2) 2π sin²(πy) cos(πx) sin(πx)   (11)

are the components of the exact solution to the model problem (10). In (11) the function ε is defined to be

ε(x, y) = 1 + sin^m(π(2x − 0.5)) · sin^m(π(2y − 0.5)) in [0.25, 0.75] × [0.25, 0.75],
ε(x, y) = 1 otherwise,   (12)

where m is an integer greater than one. In Fig. 1 the function ε is illustrated for different values of m. The solution given by (11) satisfies homogeneous initial conditions together with homogeneous Dirichlet conditions on the boundary ∂Ω of the square Ω for every time t. In our computations we used the software package WavES [26] for the finite element method applied to the solution of the model problem (10). We note that this package was also used in [4] to solve the same model problem (10) by a domain decomposition FEM/FDM method. We discretized the computational domain Ω × (0, T), denoting by K_{h_l} = {K} a partition of the spatial domain Ω into triangles K of sizes h_l = 2^{−l}, l = 1, …, 6. We let J_{τ_l} be a partition of the time domain (0, T) into time intervals J = (t_{k−1}, t_k]

Fig. 1 Function ε(x, y) in the domain Ω = [0, 1] × [0, 1] for different values of m in (12) (panels for m = 2, 3, …, 9)

of uniform length τ_l, for a given number of time intervals N, l = 1, …, 6. We choose the time step τ_l = 0.025 × 2^{−l}, l = 1, …, 6, which provides numerical stability for all meshes. We performed numerical tests taking m = 2, …, 9 in (12) and computed the maximum value over the time steps of the relative errors measured in the L²-norm, in the H¹-semi-norm and in the L²-norm of the time derivative, respectively:

e¹_l = max_{1≤k≤N} ‖e^k − e_h^k‖ / max_{1≤k≤N} ‖e^k‖,

e²_l = max_{1≤k≤N} ‖∇(e^k − e_h^k)‖ / max_{1≤k≤N} ‖∇e^k‖,   (13)

e³_l = max_{1≤k≤N−1} ‖{∂_t(e − e_h)}^{k+1/2}‖ / max_{1≤k≤N−1} ‖{∂_t e}^{k+1/2}‖.

Here e is the exact solution of (10) given by (11) and e_h is the computed solution, while N = T/τ_l.

Table 1 Maximum over the time steps of relative errors in the L²-norm, in the H¹-semi-norm and in the L²-norm of the time derivative, for mesh sizes h_l = 2^{−l}, l = 1, …, 6, taking m = 2 in (12)

l  nel   nno   e¹_l      e¹_{l−1}/e¹_l  e²_l    e²_{l−1}/e²_l  e³_l    e³_{l−1}/e³_l
1  8     9     0.054247  –              0.2767  –              1.0789  –
2  32    25    0.013902  3.902100       0.1216  2.2755         0.4811  2.2426
3  128   81    0.003706  3.751214       0.0532  2.2857         0.2544  1.8911
4  512   289   0.000852  4.349765       0.0234  2.2735         0.1279  1.9891
5  2048  1089  0.000229  3.720524       0.0121  1.9339         0.0641  1.9953
6  8192  4225  0.000059  3.881356       0.0061  1.9836         0.0321  1.9969

Table 2 Maximum over the time steps of relative errors in the L²-norm, in the H¹-semi-norm and in the L²-norm of the time derivative, for mesh sizes h_l = 2^{−l}, l = 1, …, 6, taking m = 7 in (12)

l  nel   nno   e¹_l      e¹_{l−1}/e¹_l  e²_l    e²_{l−1}/e²_l  e³_l    e³_{l−1}/e³_l
1  8     9     0.054224  –              0.5710  –              1.1208  –
2  32    25    0.012483  4.343828       0.1505  3.7940         0.5024  2.2309
3  128   81    0.002751  4.537623       0.0686  2.1939         0.2688  1.8690
4  512   289   0.000627  4.387559       0.0240  2.8583         0.1339  2.0075
5  2048  1089  0.000158  3.968354       0.0114  2.1053         0.0669  2.0015
6  8192  4225  0.000040  3.949999       0.0057  2              0.0334  2.0030

Fig. 2 Maximum in time of relative errors for m = 2 (left) and m = 7 (right)

In Tables 1 and 2 the method's convergence in these three senses is observed for m = 2 and m = 7. Figure 2 shows convergence rates of our numerical scheme based on a P1 space discretization, taking the function ε defined by (12) with m = 2 (on the left) and m = 7 (on the right). Notice that we obtained similar convergence results taking m = 3, 4, 6, 8, 9 in (12). Observation of these tables and figures clearly indicates that our scheme behaves like a first-order method in the (semi-)norm of L^∞[(0, T); H¹(Ω)] for e and in the norm of L^∞[(0, T); L²(Ω)] for ∂_t e, for the chosen values of m. As far as the value m = 7 is concerned, this perfectly conforms to the a priori error estimates given in [2] under the assumption that e ∈ {H⁴[Ω × (0, T)]}². On the other hand, Table 2 and Fig. 2 also show that the theoretical predictions of [2] extend to cases not considered therein, such as m = 2, in which the regularity of the exact solution is lower than assumed. Otherwise stated, some of our assumptions seem to be of academic interest only, and a lower regularity of the solution, such as H²[Ω × (0, T)], should be sufficient to attain optimal first-order convergence in both senses.
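Since h_l = 2^{−l}, the observed convergence order between two consecutive refinement levels is the base-2 logarithm of the corresponding error ratio. A minimal check of the orders discussed above, with values copied from Table 1 (m = 2; the list and helper names are ours):

```python
import math

# Observed convergence orders from the errors in Table 1 (m = 2).
# Since h_l = 2**(-l), the order between consecutive levels is log2 of the
# ratio of consecutive errors.
e1 = [0.054247, 0.013902, 0.003706, 0.000852, 0.000229, 0.000059]  # L2 errors
e2 = [0.2767, 0.1216, 0.0532, 0.0234, 0.0121, 0.0061]              # H1 semi-norm
e3 = [1.0789, 0.4811, 0.2544, 0.1279, 0.0641, 0.0321]              # time derivative

def orders(errs):
    return [math.log2(a / b) for a, b in zip(errs, errs[1:])]

print(orders(e1))  # close to 2: second order in the L2-norm
print(orders(e2))  # approaching 1 on the finest meshes: first order in H1
print(orders(e3))  # close to 1: first order for the time derivative
```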

Fig. 3 Computed versus exact solution at t = 0.25 for different meshes taking m = 2 in (12) (panels: |e_h| versus |e|, e_{1h} versus e₁ and e_{2h} versus e₂, for h = 0.125, 0.0625, 0.03125, 0.015625)

Fig. 4 Computed versus exact solution at t = 0.25 for different meshes taking m = 7 in (12) (panels: |e_h| versus |e|, e_{1h} versus e₁ and e_{2h} versus e₂, for h = 0.125, 0.0625, 0.03125, 0.015625)


Finally, we observe that second-order convergence can be expected from our scheme in the norm of L^∞[(0, T); L²(Ω)] for e, according to Tables 1 and 2 and Fig. 2. The above representation of the numerical results is enriched by Figs. 3 and 4, in which a graphic comparison between the exact solution and the approximate solutions at time t = 0.25, generated by our scheme with different meshes and corresponding time steps, is supplied.
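For readers who wish to reproduce the test case, the manufactured solution (11)–(12) can be checked directly for the properties claimed in Sect. 4 (continuity of ε across the interface of [0.25, 0.75]² and homogeneous initial and boundary values of e). The function names below are ours, and the sketch only reproduces the formulas, not the finite-element computation performed with WavES:

```python
import numpy as np

# Direct check of the manufactured solution (11)-(12): the permittivity (12)
# is continuous across the interface of [0.25, 0.75]^2, and the fields (11)
# vanish on the boundary of the unit square and at t = 0.

def eps(x, y, m):
    inside = (0.25 <= x) & (x <= 0.75) & (0.25 <= y) & (y <= 0.75)
    bump = np.sin(np.pi * (2 * x - 0.5))**m * np.sin(np.pi * (2 * y - 0.5))**m
    return np.where(inside, 1.0 + bump, 1.0)

def exact_e(x, y, t, m):
    f = 2.0 * np.pi * t**2 / 2.0 / eps(x, y, m)
    e1 = f * np.sin(np.pi * x)**2 * np.cos(np.pi * y) * np.sin(np.pi * y)
    e2 = -f * np.sin(np.pi * y)**2 * np.cos(np.pi * x) * np.sin(np.pi * x)
    return e1, e2

s = np.linspace(0.0, 1.0, 101)
# epsilon equals 1 on the interface line x = 0.25 (continuity) ...
assert np.allclose(eps(np.full_like(s, 0.25), s, 7), 1.0)
# ... and the fields vanish on the boundary side y = 0 and at t = 0
e1b, e2b = exact_e(s, np.zeros_like(s), 0.25, 7)
assert np.max(np.abs(e1b)) < 1e-12 and np.max(np.abs(e2b)) < 1e-12
e10, e20 = exact_e(s, s, 0.0, 7)
assert np.max(np.abs(e10)) == 0.0 and np.max(np.abs(e20)) == 0.0
```

Note that for x, y ∈ [0.25, 0.75] the arguments of the sine factors lie in [0, π], so the bump is non-negative and ε ≥ 1 holds, as required in Sect. 2.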

5 Conclusions

In this work we validated the reliability analysis conducted in [2] for a numerical scheme to solve Maxwell's equations of electromagnetism, combining an explicit finite-difference time discretization with a lumped-mass P1 finite-element space discretization. The scheme is effective in the particular case where the dielectric permittivity is constant in a neighborhood of the boundary of the spatial domain. After presenting the problem under consideration for the electric field, we supplied a detailed description of the scheme and recalled the a priori error estimates that hold for it under suitable regularity assumptions specified in [2]. Then we showed, by means of numerical experiments performed for a test problem in two-dimensional space with known exact solution, that the convergence results given in [2] are confirmed in practice. Furthermore, we presented convincing evidence that these theoretical predictions extend to solutions with much lower regularity than the one assumed in our analysis. Similarly, optimal second-order convergence is observed in a norm other than those in which convergence was formally established. In short, we demonstrated that Maxwell's equations can be efficiently solved with classical conforming linear finite elements in some relevant particular cases, among which is the model problem stated in (10).

Acknowledgements The research of the first author is supported by the Swedish Research Council grant VR 2018-03661. The second author gratefully acknowledges the financial support provided by CNPq/Brazil through grant 307996/2008-5.

References

1. Assous, F., Degond, P., Heintze, E., Raviart, P.: On a finite-element method for solving the three-dimensional Maxwell equations. J. Comput. Phys. 109, 222–237 (1993)
2. Beilina, L., Ruas, V.: An explicit P1 finite element scheme for Maxwell's equations with constant permittivity in a boundary neighborhood. arXiv:1808.10720
3. Beilina, L., Grote, M.: Adaptive hybrid finite element/difference method for Maxwell's equations. TWMS J. Pure Appl. Math. 1(2), 176–197 (2010)


4. Beilina, L.: Energy estimates and numerical verification of the stabilized domain decomposition finite element/finite difference approach for time-dependent Maxwell's system. Cent. Eur. J. Math. 11(4), 702–733 (2013). https://doi.org/10.2478/s11533-013-0202-3
5. Beilina, L., Klibanov, M.V.: Approximate Global Convergence and Adaptivity for Coefficient Inverse Problems. Springer, New York (2012)
6. Beilina, L., Cristofol, M., Niinimaki, K.: Optimization approach for the simultaneous reconstruction of the dielectric permittivity and magnetic permeability functions from limited observations. Inverse Probl. Imaging 9(1), 1–25 (2015)
7. Beilina, L., Thanh, N.T., Klibanov, M.V., Malmberg, J.B.: Globally convergent and adaptive finite element methods in imaging of buried objects from experimental backscattering radar measurements. J. Comput. Appl. Math. (2015). https://doi.org/10.1016/j.cam.2014.11.055
8. Bondestam-Malmberg, J., Beilina, L.: An adaptive finite element method in quantitative reconstruction of small inclusions from limited observations. Appl. Math. Inf. Sci. 12(1), 1–19 (2018)
9. Bondestam-Malmberg, J., Beilina, L.: Iterative regularization and adaptivity for an electromagnetic coefficient inverse problem. In: AIP Conference Proceedings, vol. 1863, p. 370002 (2017). https://doi.org/10.1063/1.4992549
10. Bondestam-Malmberg, J.: Efficient Adaptive Algorithms for an Electromagnetic Coefficient Inverse Problem. Doctoral thesis, University of Gothenburg, Sweden (2017)
11. Carneiro de Araujo, J.H., Gomes, P.D., Ruas, V.: Study of a finite element method for the time-dependent generalized Stokes system associated with viscoelastic flow. J. Comput. Appl. Math. 234(8), 2562–2577 (2010)
12. Ciarlet, P.G.: The Finite Element Method for Elliptic Problems. North-Holland (1978)
13. Ciarlet Jr., P., Zou, J.: Fully discrete finite element approaches for time-dependent Maxwell's equations. Numerische Mathematik 82(2), 193–219 (1999)
14. Elmkies, A., Joly, P.: Finite elements and mass lumping for Maxwell's equations: the 2D case. C. R. Acad. Sci. Paris, Sér. I 324, 1287–1293 (1997)
15. Engquist, B., Majda, A.: Absorbing boundary conditions for the numerical simulation of waves. Math. Comput. 31, 629–651 (1977)
16. Jiang, B.: The Least-Squares Finite Element Method. Theory and Applications in Computational Fluid Dynamics and Electromagnetics. Springer, Heidelberg (1998)
17. Jiang, B., Wu, J., Povinelli, L.A.: The origin of spurious solutions in computational electromagnetics. J. Comput. Phys. 125, 104–123 (1996)
18. Jin, J.: The Finite Element Method in Electromagnetics. Wiley (1993)
19. Joly, P.: Variational methods for time-dependent wave propagation problems. In: Lecture Notes in Computational Science and Engineering. Springer, Berlin (2003)
20. Monk, P.: Finite Element Methods for Maxwell's Equations. Clarendon Press (2003)
21. Monk, P.B., Parrott, A.K.: A dispersion analysis of finite element methods for Maxwell's equations. SIAM J. Sci. Comput. 15, 916–937 (1994)
22. Munz, C.D., Omnes, P., Schneider, R., Sonnendrucker, E., Voss, U.: Divergence correction techniques for Maxwell solvers based on a hyperbolic model. J. Comput. Phys. 161, 484–511 (2000)
23. Nédélec, J.-C.: Mixed finite elements in R³. Numerische Mathematik 35, 315–341 (1980)
24. Paulsen, K.D., Lynch, D.R.: Elimination of vector parasites in finite element Maxwell solutions. IEEE Trans. Microw. Theory Tech. 39, 395–404 (1991)
25. Ruas, V., Ramos, M.A.S.: A Hermite method for Maxwell's equations. Appl. Math. Inf. Sci. 12(2), 271–283 (2018)
26. WavES, the software package. http://www.waves24.com

Reconstructing the Optical Parameters of a Layered Medium with Optical Coherence Elastography

Peter Elbau, Leonidas Mindrinos, and Leopold Veselka

Abstract In this work we consider the inverse problem of reconstructing the optical properties of a layered medium from an elastography measurement where optical coherence tomography is used as the imaging method. We hereby model the sample as a linear dielectric medium, so that the imaging parameter is given by its electric susceptibility, which is a frequency- and depth-dependent parameter. In addition to the layered structure (assumed to be valid at least in the small illuminated region), we allow for small scatterers which we consider to be randomly distributed, a situation which seems more realistic than purely homogeneous layers. We then show that a unique reconstruction of the susceptibility of the medium (after averaging over the small scatterers) can be achieved from optical coherence tomography measurements for different compression states of the medium.

Keywords Optical coherence tomography · Optical coherence elastography · Inverse problem · Parameter identification

MSC: 65J22 · 65M32 · 78A46

P. Elbau (B) · L. Mindrinos · L. Veselka
Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria
e-mail: [email protected]
L. Mindrinos, e-mail: [email protected]
L. Veselka, e-mail: [email protected]

© Springer Nature Switzerland AG 2020 L. Beilina et al. (eds.), Mathematical and Numerical Approaches for Multi-Wave Inverse Problems, Springer Proceedings in Mathematics & Statistics 328, https://doi.org/10.1007/978-3-030-48634-1_8

1 Introduction

Optical Coherence Tomography is an imaging modality producing high-resolution images of biological tissues. It measures the magnitude of the back-scattered light of a focused laser illumination from a sample as a function of depth and provides


cross-sectional or volumetric data by performing a series of multiple axial scans at different positions. Initially, it operated in the time domain, where a movable mirror provided the depth information. Later on, frequency-domain optical coherence tomography was introduced, where the detector is replaced by a spectrometer and no mechanical movement is needed. We refer to [3, 4] for an overview of the physics of the experiment and to [6] for a mathematical description of the problem. Only lately have the inverse problems arising in optical coherence tomography attracted interest from the mathematical community, see, for example, [2, 7, 11, 13]. For many years, the proposed and commonly used reconstruction method was just the inverse Fourier transform. This approach is valid only if the properties of the medium are assumed to be frequency-independent in the spectrum of the light source. However, the fewer assumptions one makes, the more mathematically interesting but also difficult the problem becomes. The main assumption we want to make is that the medium can be (at least locally, in the region where the laser beam illuminates the object) well described by a layered structure. Since real measurement images typically show multiple small particles inside these layers, we will additionally include small, randomly distributed scatterers in the model and calculate the averaged contribution of these particles to the measured fields. To obtain a reconstruction of the medium, that is, of its electric susceptibility, we consider an elastography setup where optical coherence tomography is used as the imaging system. This so-called optical coherence elastography records optical coherence tomography data for different compression states of the medium, see [1, 5, 9, 12] for some recent works dealing with this interesting problem.
Under the assumption that the sample can be described as a linear elastic medium, we show that these measurements can be used to achieve a unique reconstruction of the electric susceptibility of the layered medium. The paper is organised as follows: In Sect. 2 we review the main equations describing mathematically how the data in optical coherence tomography is collected and its relation to the optical properties of the medium. In Sect. 3, we show that the calculation of the back-scattered field can be decomposed into the corresponding subproblems for the single layers, for which we derive the resulting formulæ in Sect. 4. Finally, we show in Sect. 5 that a unique reconstruction of the susceptibility becomes feasible from the measurements at different compression states.

2 Modelling the Optical Coherence Tomography Measurement

We model the sample by a dispersive, isotropic, non-magnetic, linear dielectric medium characterised by its scalar electric susceptibility. To include randomly distributed scatterers in the model, we introduce the susceptibility as a random variable; so let $(X, \mathcal{A}, \mathbb{P})$ be a probability space and write

Reconstructing the Optical Parameters of a Layered Medium …


\[ \chi : X \times \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}, \quad (\sigma, t, x) \mapsto \chi_\sigma(t, x) \]

for the electric susceptibility of the medium in the state $\sigma$. (Hereby, to have a causal model, we require an electric susceptibility $\chi : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}$ to be a function fulfilling $\chi(t, x) = 0$ for all $t < 0$.) The object (in a certain realisation state $\sigma \in X$) is then probed with a laser beam, described by an incident electric field $E^{(0)} : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}^3$ characterised by the following properties.

Definition 1 We call $E^{(0)} : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}^3$ an incident wave for a given susceptibility $\chi : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}$ in a homogeneous background $\chi_0 : \mathbb{R} \to \mathbb{R}$ if it is a solution of Maxwell's equations for $\chi_0$, that is,

\[ \Delta E^{(0)}(t, x) = \frac{1}{c^2} \partial_{tt} D^{(0)}(t, x), \]

where $c$ denotes the speed of light in vacuum and

\[ D^{(0)}(t, x) = E^{(0)}(t, x) + \int_{\mathbb{R}} \chi_0(\tau) E^{(0)}(t - \tau, x) \, d\tau, \]

and $E^{(0)}$ does not interact with the inhomogeneity for negative times, meaning that

\[ E^{(0)}(t, x) = 0 \quad \text{for all } t \in (-\infty, 0), \; x \in \Omega \tag{1} \]

with $\Omega = \{x \in \mathbb{R}^3 \mid \chi(\cdot, x) \ne \chi_0\}$.

We then measure the resulting electric field $E_\sigma : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}^3$ induced by the incident field $E^{(0)}$ in the presence of the dielectric medium described by the susceptibility $\chi_\sigma$.

Definition 2 Let $\chi : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}$ be a susceptibility and $E^{(0)} : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}^3$ be an incident wave for $\chi$. Then, we call $E$ the electric field induced by $E^{(0)}$ in the presence of $\chi$ if $E$ is a solution of the equation system

\[ \operatorname{curl} \operatorname{curl} E(t, x) + \frac{1}{c^2} \partial_{tt} D(t, x) = 0 \quad \text{for all } t \in \mathbb{R}, \; x \in \mathbb{R}^3, \tag{2} \]
\[ E(t, x) - E^{(0)}(t, x) = 0 \quad \text{for all } t \in (-\infty, 0), \; x \in \mathbb{R}^3 \tag{3} \]

with $c$ being the speed of light in vacuum and with the electric displacement field $D : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}^3$ being related to the electric field via

\[ D(t, x) = E(t, x) + \int_{\mathbb{R}} \chi(\tau, x) E(t - \tau, x) \, d\tau. \]


P. Elbau et al.

Remark 1 The fact that $E^{(0)}$ does not interact with the object before time $t = 0$, see (1), guarantees that $E^{(0)}$ is a solution of (2) and thus the initial condition in (3) is compatible with (2).

Remark 2 We do not want to specify the solution concept for solving (2) here (since we are going for a layered and therefore discontinuous susceptibility, there exists only a weak solution), but will silently assume that the susceptibility and the incident field are such that they induce an electric field with sufficient regularity and the appearing integrals and Fourier transforms are well-defined.

Equation (2) is more conveniently written in Fourier space, where we use the convention

\[ \mathcal{F}[f](k) = \frac{1}{(2\pi)^{\frac{n}{2}}} \int_{\mathbb{R}^n} f(x) \, e^{-i \langle k, x \rangle} \, dx \]

for the Fourier transform of an integrable function $f : \mathbb{R}^n \to \mathbb{R}$. For convenience, we also use the shorter notation

\[ \check{F}(\omega, x) = \sqrt{2\pi} \, \mathcal{F}^{-1}[t \mapsto F(t, x)](\omega) = \int_{\mathbb{R}} F(t, x) \, e^{i \omega t} \, dt \]

for this rescaled inverse Fourier transform of a sufficiently regular function of the form $F : \mathbb{R} \times \mathbb{R}^m \to \mathbb{R}^n$ with respect to the time variable.

Lemma 1 Let $\chi : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}$ be a susceptibility, $E^{(0)} : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}^3$ be an incident wave for $\chi$, and $E$ be the induced electric field. Then, $\check{E}$ solves the vector Helmholtz equation

\[ \operatorname{curl} \operatorname{curl} \check{E}(\omega, x) - \frac{\omega^2}{c^2} \big( 1 + \check{\chi}(\omega, x) \big) \check{E}(\omega, x) = 0 \quad \text{for all } \omega \in \mathbb{R}, \; x \in \mathbb{R}^3, \tag{4} \]

with the constraint

\[ \check{E} \in H(\check{E}^{(0)}), \tag{5} \]

where $H(\check{E}^{(0)})$ is the space of all functions $F : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}^3$ such that the map $\omega \mapsto (F - \check{E}^{(0)})(\omega, x)$ can be holomorphically extended to $\mathbb{H} \times \mathbb{R}^3$, where $\mathbb{H} = \{z \in \mathbb{C} \mid \Im z > 0\}$ denotes the upper half complex plane, and the extension fulfils

\[ \sup_{\lambda > 0} \int_{\mathbb{R}} \big| (F - \check{E}^{(0)})(\omega + i\lambda, x) \big|^2 \, d\omega < \infty \]

for every $x \in \mathbb{R}^3$.

Proof Equation (4) is obtained directly from the application of the Fourier transform to (2). The condition (5) is, according to the Paley–Wiener theorem, see, for example, [10, Theorem 9.2], equivalent to the condition (3), which states that $t \mapsto (E - E^{(0)})(t, x)$ has for every $x \in \mathbb{R}^3$ support only in $[0, \infty)$.
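The Paley–Wiener characterisation can be illustrated numerically: for a causal signal, the $L^2$ norms of the half-plane extension $\check{F}(\omega + i\lambda)$ stay bounded as $\lambda$ grows, while for an anti-causal signal they blow up. The following stdlib-only sketch (the pulse shape and grid parameters are arbitrary illustration choices, not from the paper) discretises $\check{F}(\omega + i\lambda) = \int f(t)\,e^{i\omega t} e^{-\lambda t}\,dt$:

```python
import math, cmath

def half_plane_norm(f, ts, dt, lam, omegas, domega):
    # Riemann-sum approximation of the squared L^2 norm over omega of
    # F(omega + i*lam) = \int f(t) exp(i*omega*t) exp(-lam*t) dt.
    total = 0.0
    for om in omegas:
        val = sum(f(t) * cmath.exp(complex(-lam * t, om * t)) * dt for t in ts)
        total += abs(val) ** 2 * domega
    return total

pulse = lambda t: math.exp(-(abs(t) - 2.0) ** 2)   # bump centred at |t| = 2

dt, domega = 0.05, 0.1
ts_causal = [k * dt for k in range(200)]           # support in [0, 10)
ts_anti = [-k * dt for k in range(1, 200)]         # support in (-10, 0)
omegas = [k * domega for k in range(-100, 101)]

n_causal_0 = half_plane_norm(pulse, ts_causal, dt, 0.0, omegas, domega)
n_causal_1 = half_plane_norm(pulse, ts_causal, dt, 1.0, omegas, domega)
n_anti_1 = half_plane_norm(pulse, ts_anti, dt, 1.0, omegas, domega)
# causal: the damping factor exp(-lam*t) with t >= 0 only shrinks the norm;
# anti-causal: exp(-lam*t) = exp(lam*|t|) makes the norm grow with lam.
```

By Plancherel, the causal norm is bounded by $2\pi \|f\|_{L^2}^2$ uniformly in $\lambda > 0$, which is exactly the membership condition defining $H(\check{E}^{(0)})$.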


In frequency-domain optical coherence tomography, we detect with a spectrometer, at a position $x_0 \in \mathbb{R}^3$ outside the medium, the intensity of the Fourier components of the superposition of the back-scattered light from the sample and the reference beam, which is the reflection of the incident laser beam from a mirror at some fixed position. Here, we consider two independent measurements for two different positions of the mirror in order to overcome the problem of phase-less data, see [8]. Thus, we record for some realisation $\sigma \in X$ and all $\omega \in \mathbb{R}$ the data

\[ m_{0,\sigma}(\omega) = |\check{E}_\sigma(\omega, x_0)| \quad \text{and} \quad m_{i,\sigma}(\omega) = |\check{E}_\sigma(\omega, x_0) + \check{E}_i^{(r)}(\omega, x_0)|, \quad i \in \{1, 2\}, \]

where $E_1^{(r)} : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}^3$ and $E_2^{(r)} : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}^3$ denote the two known reference waves, which are solutions of Maxwell's equations in the homogeneous background medium (usually well approximated by the vacuum).

We can uniquely recover from this data the (complex-valued) Fourier transform $\check{E}_\sigma(\omega, x_0)$ of the electric field for every $\omega \in \mathbb{R}$ by intersecting the three circles

\[ \partial B_{m_{0,\sigma}(\omega)}(0) \cap \partial B_{m_{1,\sigma}(\omega)}\big( -\check{E}_1^{(r)}(\omega, x_0) \big) \cap \partial B_{m_{2,\sigma}(\omega)}\big( -\check{E}_2^{(r)}(\omega, x_0) \big), \]

provided that the points $0$, $\check{E}_1^{(r)}(\omega, x_0)$, and $\check{E}_2^{(r)}(\omega, x_0)$ in the complex plane do not lie on a single straight line. In the following, we assume that the fields $E_1^{(r)}$ and $E_2^{(r)}$ are chosen such that this condition is satisfied and we can recover the function $m_\sigma(\omega) = \check{E}_\sigma(\omega, x_0)$ for all $\omega \in \mathbb{R}$.

However, this information is still not enough for reconstructing the material parameter $\chi_\sigma$, see, for example, [6]. Thus, we make the a priori assumption that the illuminated region of the medium can be well approximated by a layered medium. Since the layers are typically not completely homogeneous, we also allow for randomly distributed small inclusions in every layer. Thus, we describe $\chi$ to be of the form

\[ \chi_\sigma(t, x) = \chi_j(t) + \psi_{j,\sigma_j}(t, x) \tag{6} \]

in the $j$th layer $\{x \in \mathbb{R}^3 \mid z_{j+1} < x_3 < z_j\}$, $j \in \{1, \ldots, J\}$, where we write the measure space as a product $X = \prod_{j=1}^J X_j$ with each factor representing the state of one layer. Here, $\chi_j$ is the homogeneous background susceptibility of the layer and $\psi_j$ is the random contribution caused by some small particles in the layer. Outside these layers, we set $\chi_\sigma(t, x) = \chi_0(t)$ for some homogeneous background susceptibility $\chi_0$.

To simplify the analysis, we will assume that the scatterers in the $j$th layer only occur at some distance from the layer boundaries $z_j$ and $z_{j+1}$, say between $Z_j$ and $\zeta_j$, where $z_{j+1} < Z_j < \zeta_j < z_j$. Moreover, we choose the particles independently, identically, uniformly distributed on the part $U_{j,L_j} = [-\frac{1}{2} L_j, \frac{1}{2} L_j] \times [-\frac{1}{2} L_j, \frac{1}{2} L_j] \times [Z_j, \zeta_j]$ of the layer for some width $L_j > 0$. Concretely, we assume that we have in the $j$th layer for some number $N_j$ of particles the probability measure $\mathbb{P}_{j,N_j,L_j}$ on the probability space $X_j = (U_{j,L_j})^{N_j}$ given by

\[ \mathbb{P}_{j,N_j,L_j}\Big( \prod_{\ell=1}^{N_j} A_\ell \Big) = \prod_{\ell=1}^{N_j} \frac{|A_\ell|}{L_j^2 (\zeta_j - Z_j)} \tag{7} \]

for all measurable subsets $A_\ell \subset U_{j,L_j}$, where $|A_\ell|$ denotes the three-dimensional Lebesgue measure of the set $A_\ell$. The full probability measure $\mathbb{P} = \mathbb{P}_{N,L}$ is consistently chosen as the direct product $\mathbb{P}_{N,L} = \prod_{j=1}^J \mathbb{P}_{j,N_j,L_j}$ on $X = \prod_{j=1}^J X_j$.

The particles themselves, we model in each layer as identical balls with a sufficiently small radius $R$ and a homogeneous susceptibility $\chi_j^{(p)}$. Thus, we define for a realisation $\sigma_j \in X_j$ of the $j$th layer the contribution of the particles to the susceptibility by

\[ \psi_{j,\sigma_j}(t, x) = \sum_{\ell=1}^{N_j} \chi_{B_R(\sigma_{j,\ell})}(x) \, \big( \chi_j^{(p)}(t) - \chi_j(t) \big), \tag{8} \]

where we ignore the problem of overlapping particles. Here, we denote by $\chi_A$ the characteristic function of a set $A$ and by $B_r(y)$ the open ball with radius $r$ around a point $y$.
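The intersection of the three circles above reduces, for a fixed polarisation component, to a small linear algebra problem: each reference wave contributes one linear equation for the real and imaginary parts of the unknown field value. A minimal stdlib sketch (treating $\check{E}_\sigma(\omega, x_0)$ as a single complex number, which is an assumption of this illustration):

```python
def recover_field(m0, m1, m2, r1, r2):
    """Recover the complex field value z from the intensities
    m0 = |z|, m1 = |z + r1|, m2 = |z + r2| for known complex
    references r1, r2 (the points 0, r1, r2 must not be collinear).

    Expanding |z + r|^2 = |z|^2 + 2 Re(z * conj(r)) + |r|^2 gives,
    with z = x + i y and r = a + i b, one linear equation
    a*x + b*y = (m^2 - m0^2 - |r|^2) / 2 per reference wave.
    """
    a1, b1 = r1.real, r1.imag
    a2, b2 = r2.real, r2.imag
    c1 = 0.5 * (m1 ** 2 - m0 ** 2 - abs(r1) ** 2)
    c2 = 0.5 * (m2 ** 2 - m0 ** 2 - abs(r2) ** 2)
    det = a1 * b2 - a2 * b1   # vanishes exactly when 0, r1, r2 are collinear
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return complex(x, y)
```

The determinant condition is precisely the non-collinearity requirement on $0$, $\check{E}_1^{(r)}$, and $\check{E}_2^{(r)}$ stated above.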

3 Domain Decomposition of the Solution

The layered structure of the medium allows us to decompose the solution as a series of solution operators for the single layers. To do so, we split the medium at a horizontal stripe where the medium is homogeneous and consider the two subproblems where once the region above and once the region below is replaced by the homogeneous susceptibility $X_0 : \mathbb{R} \to \mathbb{R}$ in the stripe. We write the stripe as the set $\{x \in \mathbb{R}^3 \mid z - \varepsilon < x_3 < z + \varepsilon\}$ for some $z \in \mathbb{R}$ and some height $\varepsilon > 0$ and parametrise the electric susceptibility in the form

\[ \chi(t, x) = \begin{cases} X_1(t, x) & \text{if } x \in \Omega_1 = \{y \in \mathbb{R}^3 \mid y_3 > z - \varepsilon\}, \\ X_2(t, x) & \text{if } x \in \Omega_2 = \{y \in \mathbb{R}^3 \mid y_3 < z + \varepsilon\}, \end{cases} \tag{9} \]

with the necessary compatibility condition that $X_1$ and $X_2$ coincide in the intersection $\Omega_1 \cap \Omega_2$, where they should both be equal to the homogeneous susceptibility $X_0$.


Additionally, we have the assumption that the medium is bounded in the vertical direction. We can therefore assume that for some $z_- < z_+$, the susceptibilities $X_1$ and $X_2$ are homogeneous in $\Omega_+ = \{x \in \mathbb{R}^3 \mid x_3 > z_+\} \subset \Omega_1$ and $\Omega_- = \{x \in \mathbb{R}^3 \mid x_3 < z_-\} \subset \Omega_2$, respectively. We set $X_1(t, x) = X_+(t)$ for all $x \in \Omega_+$ and $X_2(t, x) = X_-(t)$ for all $x \in \Omega_-$. Since we are solving Maxwell's equations on the whole space, we extend $X_1$ and $X_2$ by the homogeneous susceptibility $X_0$: $X_1(t, x) = X_0(t)$ for all $x \in \Omega_2$ and $X_2(t, x) = X_0(t)$ for all $x \in \Omega_1$, see Picture (a) in Fig. 1 for an illustration of the notation.

The aim is then to reduce the calculation of the electric field in the presence of $\chi$ to the subproblems of determining the electric fields in the presence of $X_1$ and $X_2$, independently. To do so, we consider the solution in the intersection $\Omega_1 \cap \Omega_2$ and split it there into waves moving in the positive and negative $e_3$ direction.

Lemma 2 Let a homogeneous susceptibility $\chi : \mathbb{R} \to \mathbb{R}$ be given on a stripe $\Omega_0 = \{x \in \mathbb{R}^3 \mid x_3 \in (z_0 - \varepsilon, z_0 + \varepsilon)\}$. Then, every solution $\check{E} : \mathbb{R} \times \Omega_0 \to \mathbb{C}^3$ of

\[ \operatorname{curl} \operatorname{curl} \check{E}(\omega, x) - \frac{\omega^2}{c^2} \big( 1 + \check{\chi}(\omega) \big) \check{E}(\omega, x) = 0 \quad \text{for all } \omega \in \mathbb{R}, \; x \in \Omega_0, \tag{10} \]

admits the form

\[ \begin{aligned} \check{E}(\omega, x) ={}& \int_{\mathbb{R}^2} e_1(k_1, k_2) \, e^{-i x_3 \sqrt{\frac{\omega^2}{c^2} (1 + \check{\chi}(\omega)) - k_1^2 - k_2^2}} \, e^{i (k_1 x_1 + k_2 x_2)} \, d(k_1, k_2) \\ &+ \int_{\mathbb{R}^2} e_2(k_1, k_2) \, e^{i x_3 \sqrt{\frac{\omega^2}{c^2} (1 + \check{\chi}(\omega)) - k_1^2 - k_2^2}} \, e^{i (k_1 x_1 + k_2 x_2)} \, d(k_1, k_2) \end{aligned} \tag{11} \]

for all $\omega \in \mathbb{R}$ and $x \in \Omega_0$ with some coefficients $e_1, e_2 : \mathbb{R}^2 \to \mathbb{C}^3$.

Proof Taking the divergence of (10), we see that we have $\operatorname{div} \check{E} = 0$ on the stripe $\Omega_0$ with homogeneous susceptibility. Then, Eq. (10) reduces to the three independent Helmholtz equations

\[ \Delta \check{E}(\omega, x) + \frac{\omega^2}{c^2} \big( 1 + \check{\chi}(\omega) \big) \check{E}(\omega, x) = 0 \quad \text{for all } \omega \in \mathbb{R}, \; x \in \Omega_0. \]

Applying the Fourier transform with respect to $x_1$ and $x_2$ and solving the resulting ordinary differential equation in $x_3$ gives us (11).

Definition 3 Let $\check{E}$ be a solution of the Eq. (10) on some stripe $\Omega_0$, written in the form (11). We then call $\check{E}$ a downwards moving solution if $e_2 = 0$ and an upwards moving solution if $e_1 = 0$.
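For a single transverse frequency $(k_1, k_2)$, the representation (11) reduces to $\check{E}(x_3) = e_1 e^{-i\beta x_3} + e_2 e^{i\beta x_3}$ with $\beta = \sqrt{\frac{\omega^2}{c^2}(1 + \check{\chi}(\omega)) - k_1^2 - k_2^2}$, so the up/down splitting of Definition 3 can be computed from the field at two heights in the stripe. A stdlib sketch (the sample heights and amplitudes are arbitrary illustration values):

```python
import cmath

def split_up_down(E_a, E_b, x3_a, x3_b, beta):
    """Solve E(x3) = e1*exp(-i*beta*x3) + e2*exp(i*beta*x3) for (e1, e2)
    from field values at two heights x3_a != x3_b inside the stripe.
    e1 is the downwards, e2 the upwards moving amplitude."""
    pa, qa = cmath.exp(-1j * beta * x3_a), cmath.exp(1j * beta * x3_a)
    pb, qb = cmath.exp(-1j * beta * x3_b), cmath.exp(1j * beta * x3_b)
    det = pa * qb - pb * qa   # = 2i sin(beta*(x3_b - x3_a)) for real beta
    e1 = (E_a * qb - E_b * qa) / det
    e2 = (pa * E_b - pb * E_a) / det
    return e1, e2
```

The solve is just Cramer's rule on the $2 \times 2$ system obtained by evaluating the one-mode form of (11) at the two heights.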


Moreover, we define the solution operators $G_1$ and $G_2$. To avoid having to define an incident wave on the whole space, we replace the condition (5) by radiation conditions of the form that we specify the upwards moving part on a stripe below the region and the downwards moving part on a stripe above the region.

Definition 4 Let $\chi$ be given as in (9) and $\check{E}_0$ be an upwards moving solution in $\Omega_1 \cap \Omega_2$. Then, we define $G_1 \check{E}_0$ as a solution $\check{E}$ of the equation

\[ \operatorname{curl} \operatorname{curl} \check{E}(\omega, x) - \frac{\omega^2}{c^2} \big( 1 + \check{X}_1(\omega, x) \big) \check{E}(\omega, x) = 0 \]

fulfilling the radiation condition that $\check{E} - \check{E}_0$ is a downwards moving solution in $\Omega_1 \cap \Omega_2$ and that $\check{E}$ is an upwards moving solution in $\Omega_+$, see Picture (b) in Fig. 1.

Analogously, we define $G_2 \check{E}_0$ for a downwards moving solution $\check{E}_0$ in $\Omega_1 \cap \Omega_2$ as a solution $\check{E}$ of the equation

\[ \operatorname{curl} \operatorname{curl} \check{E}(\omega, x) - \frac{\omega^2}{c^2} \big( 1 + \check{X}_2(\omega, x) \big) \check{E}(\omega, x) = 0 \]

fulfilling the radiation condition that $\check{E} - \check{E}_0$ is an upwards moving solution in $\Omega_1 \cap \Omega_2$ and that $\check{E}$ is a downwards moving solution in $\Omega_-$, see Picture (c) in Fig. 1.

Remark 3 We do not consider the uniqueness of these solutions at this point since we will only need the result for particular, simplified problems where the verification that this gives the desired solution can be done directly.

[Fig. 1: (a) The subdomains and the corresponding optical parameters. (b) The fields related to the operator $G_1$. (c) The fields related to the operator $G_2$.]

Fig. 1 The geometry and the notation used in this section


Instead we will simply assume that the susceptibilities $\chi$, $X_1$, and $X_2$ are such that the only solution $\check{E}$ in the presence of this susceptibility for which $\check{E}$ is upwards moving on $\Omega_+$ and downwards moving on $\Omega_-$ is the trivial solution $\check{E} = 0$, meaning that there is only the trivial solution in the absence of an incident wave.

Lemma 3 Let $\chi$ be given by (9) and denote by $G_1$, $G_2$ the solution operators as in Definition 4. Let further $E^{(0)}$ be an incident wave on $\chi$ which is moving downwards and $E_1$ be the induced electric field in the presence of $X_1$. Then, provided the following series converge, we have that the function $E$ defined by

\[ \check{E}(\omega, x) = \begin{cases} \check{E}_1(\omega, x) + \sum_{j=0}^\infty G_1 (\tilde{G}_2 \tilde{G}_1)^j \tilde{G}_2 \check{E}_1(\omega, x) & \text{if } x \in \Omega_1, \\ \sum_{j=0}^\infty G_2 (\tilde{G}_1 \tilde{G}_2)^j \check{E}_1(\omega, x) & \text{if } x \in \Omega_2, \end{cases} \]

where we set $\tilde{G}_i = G_i - \operatorname{id}$, $i \in \{1, 2\}$, is an electric field in the presence of $\chi$ fulfilling the radiation conditions that $\check{E} - \check{E}^{(0)}$ is an upwards moving wave in $\Omega_+$ and $\check{E}$ is a downwards moving wave in $\Omega_-$.

Proof First, we remark that the composition of the operators is well defined, since $\check{E}_1 \in H(\check{E}^{(0)})$ is a downwards moving solution in $\Omega_1 \cap \Omega_2$, see Lemma 1, the range of $\tilde{G}_2$ consists of upwards moving solutions, and the range of $\tilde{G}_1$ consists of downwards moving solutions.

The field $\check{E}$ is seen to satisfy (4) in $\Omega_1$ by using the definitions of $E_1$ and the solution operator $G_1$ on $\Omega_1$. Similarly, using the definition of $G_2$, we get that the function $\check{E}$ satisfies (4) in $\Omega_2$. Therefore, it only remains to check that the two formulas coincide in the intersection $\Omega_1 \cap \Omega_2$. Using that $G_i = \tilde{G}_i + \operatorname{id}$, $i \in \{1, 2\}$, we find that

\[ \begin{aligned} \check{E}_1 + \sum_{j=0}^\infty G_1 (\tilde{G}_2 \tilde{G}_1)^j \tilde{G}_2 \check{E}_1 &= \check{E}_1 + \sum_{j=0}^\infty \tilde{G}_1 (\tilde{G}_2 \tilde{G}_1)^j \tilde{G}_2 \check{E}_1 + \sum_{j=0}^\infty (\tilde{G}_2 \tilde{G}_1)^j \tilde{G}_2 \check{E}_1 \\ &= \sum_{j=0}^\infty (\tilde{G}_1 \tilde{G}_2)^j \check{E}_1 + \sum_{j=0}^\infty \tilde{G}_2 (\tilde{G}_1 \tilde{G}_2)^j \check{E}_1 = \sum_{j=0}^\infty G_2 (\tilde{G}_1 \tilde{G}_2)^j \check{E}_1. \end{aligned} \]

Moreover, we have that $\check{E} - \check{E}_1$ is by construction an upwards moving wave in $\Omega_+$, and therefore so is $\check{E} - \check{E}^{(0)}$. Similarly, the wave $\check{E}$ is a downwards moving wave in $\Omega_-$.

If we are in a case where our uniqueness assumption mentioned in Remark 3 holds, then Lemma 3 allows us to iteratively reduce the problem of determining the electric field in the presence of the susceptibility $\chi_\sigma$, defined in (6), to problems of simpler susceptibilities. To this end, we could, for example, successively apply the


result to values $z \in (\zeta_j, z_j)$ and $z \in (z_{j+1}, Z_j)$, $j = 1, \ldots, J$, where each successive step is only used to further simplify the operator $G_2$ from the previous step. This then leads to a sort of layer-stripping algorithm, see, for example, [8], where a similar argument was presented.
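In the simplest scalar instance, with one homogeneous slab between two half spaces and normal incidence, the composition series of Lemma 3 is the classical sum over multiple internal reflections, and it converges to the known closed-form slab reflection coefficient. The sketch below is stdlib only; the refractive indices, frequency, and thickness are arbitrary illustration values, and the single-interface coefficients in `fresnel` are those that reappear in Lemma 4:

```python
import cmath

C = 3.0e8  # speed of light [m/s], illustration value

def fresnel(na, nb):
    # single-interface reflection and transmission coefficients
    return (na - nb) / (na + nb), 2 * na / (na + nb)

def slab_reflection_series(n0, n1, n2, phase, terms):
    # r01 + t01*t10*r12*p * sum_m (r10*r12*p)^m  with  p = exp(2i*beta*d):
    # each summand is one more round trip inside the slab.
    r01, t01 = fresnel(n0, n1)
    r10, t10 = fresnel(n1, n0)
    r12, _ = fresnel(n1, n2)
    bounce = r10 * r12 * phase
    return r01 + t01 * t10 * r12 * phase * sum(bounce ** m for m in range(terms))

def slab_reflection_closed(n0, n1, n2, phase):
    r01, _ = fresnel(n0, n1)
    r12, _ = fresnel(n1, n2)
    return (r01 + r12 * phase) / (1 + r01 * r12 * phase)

n0, n1, n2 = 1.0, 1.4 + 0.01j, 1.6
omega, d = 1.2e15, 5.0e-6
phase = cmath.exp(2j * omega * n1 * d / C)
```

The identity $t_{01} t_{10} = 1 - r_{01}^2$ together with $r_{10} = -r_{01}$ collapses the geometric series into the closed form, mirroring how the operator identities $G_i = \tilde{G}_i + \operatorname{id}$ collapse the series in the proof of Lemma 3.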

4 Wave Propagation Through a Scattering Layer

Using the above analysis, we can calculate the electric field in the presence of a layered medium of the form (6) as a combination of the solutions of the following two subproblems.

Problem 1 Let $j \in \{0, \ldots, J - 1\}$. Find the electric field induced by some incident field in the presence of the piecewise homogeneous susceptibility $\chi$ given by

\[ \chi(t, x) = \begin{cases} \chi_j(t) & \text{if } x_3 > z_{j+1}, \\ \chi_{j+1}(t) & \text{if } x_3 < z_{j+1}. \end{cases} \tag{12} \]

Problem 2 Let $\sigma \in X$ and $j \in \{1, \ldots, J\}$. Find the electric field induced by some incident field in the presence of the susceptibility $\chi$ given by

\[ \chi(t, x) = \chi_j(t) + \psi_{j,\sigma_j}(t, x), \tag{13} \]

where the function $\psi_j$ is described by (8).

We thus fix a layer $j \in \{0, \ldots, J\}$, and to simplify the calculations, we restrict ourselves in both subproblems to an illumination by a downwards moving plane wave of the form

\[ \check{E}^{(0)}(\omega, x) = \check{f}(\omega) \, e^{-i \frac{\omega}{c} n_j(\omega) x_3} \, \eta \tag{14} \]

for some function $f : \mathbb{R} \to \mathbb{R}$ and a polarisation vector $\eta \in S^1 \times \{0\}$. Here we define the complex-valued refractive indices for all $j \in \{0, \ldots, J\}$ by

\[ n_j : \mathbb{R} \to \mathbb{H}, \quad n_j(\omega) = \sqrt{1 + \check{\chi}_j(\omega)}. \tag{15} \]

Then, the solution of Problem 1 can be explicitly written down.

Lemma 4 Let $j \in \{0, \ldots, J - 1\}$ and $E^{(0)}$ be the incident wave given in (14). Then, the electric field $E$ induced by $E^{(0)}$ in the presence of a susceptibility $\chi$ of the form (12) is given by

\[ \check{E}(\omega, x) = \check{f}(\omega) \left( e^{-i \frac{\omega}{c} n_j(\omega) x_3} - \frac{n_{j+1}(\omega) - n_j(\omega)}{n_{j+1}(\omega) + n_j(\omega)} \, e^{-i \frac{\omega}{c} n_j(\omega) z_{j+1}} \, e^{i \frac{\omega}{c} n_j(\omega) (x_3 - z_{j+1})} \right) \eta \]


for $x_3 > z_{j+1}$, and by

\[ \check{E}(\omega, x) = \check{f}(\omega) \, \frac{2 n_j(\omega)}{n_{j+1}(\omega) + n_j(\omega)} \, e^{-i \frac{\omega}{c} n_j(\omega) z_{j+1}} \, e^{-i \frac{\omega}{c} n_{j+1}(\omega) (x_3 - z_{j+1})} \, \eta \]

for $x_3 < z_{j+1}$, where the refractive indices $n_j$ and $n_{j+1}$ are defined by (15).

Proof Clearly, $\check{E}$ satisfies the differential equation (4) in both regions $x_3 > z_{j+1}$ and $x_3 < z_{j+1}$. Moreover, $\check{E}^{(0)}$ is the only incoming wave in $\check{E}$. Therefore, it only remains to check that $\check{E}$ has sufficient regularity to be the weak solution along the discontinuity of the susceptibility at $x_3 = z_{j+1}$, meaning that

\[ \lim_{x_3 \uparrow z_{j+1}} \check{E}(\omega, x) = \lim_{x_3 \downarrow z_{j+1}} \check{E}(\omega, x), \qquad \lim_{x_3 \uparrow z_{j+1}} \partial_{x_3} \check{E}(\omega, x) = \lim_{x_3 \downarrow z_{j+1}} \partial_{x_3} \check{E}(\omega, x). \]

Both identities are readily verified.

For Problem 2, the situation is more complicated and we settle for an approximate solution for the electric field. For that, we assume (using the same notation as in (8)) that the susceptibility $\chi_j^{(p)}$ of the random particles does not differ much from the background $\chi_j$, so that the difference between the induced field and the incident field becomes small, and we do a first order approximation in the difference $\chi_j^{(p)} - \chi_j$. For that purpose, we write the differential Eq. (4) in the form

\[ \operatorname{curl} \operatorname{curl} \check{E}(\omega, x) - \frac{\omega^2}{c^2} n_j^2(\omega) \big( 1 + \bar{\phi}_{j,\sigma_j}(\omega, x) \big) \check{E}(\omega, x) = 0, \]

where, according to (8),

\[ \bar{\phi}_{j,\sigma_j}(\omega, x) = \sum_{\ell=1}^{N_j} \chi_{B_R(\sigma_{j,\ell})}(x) \, \phi_j(\omega), \]

and we abbreviate

\[ \phi_j(\omega) = \frac{\check{\chi}_j^{(p)}(\omega) - \check{\chi}_j(\omega)}{1 + \check{\chi}_j(\omega)}. \tag{16} \]

In first order in $\bar{\phi}$, we then approximate the field by the solution $\check{E}^{(1)}_{N_j,\sigma_j}$ of the equation

\[ \operatorname{curl} \operatorname{curl} \check{E}^{(1)}(\omega, x) - \frac{\omega^2}{c^2} n_j^2(\omega) \, \check{E}^{(1)}(\omega, x) = \frac{\omega^2}{c^2} n_j^2(\omega) \, \bar{\phi}_{j,\sigma_j}(\omega, x) \, \check{E}^{(0)}(\omega, x), \]


the so-called Born approximation. Using that the fundamental solution $G$ of the Helmholtz equation, which by definition fulfils

\[ \Delta G(\kappa, x) + \kappa^2 G(\kappa, x) = -\delta(x), \]

is given by

\[ G(\kappa, x) = \frac{e^{i \kappa |x|}}{4 \pi |x|}, \]

we obtain the expression

\[ \check{E}^{(1)}_{N_j,\sigma_j}(\omega, x) = \check{E}^{(0)}(\omega, x) + \Big( \frac{\omega^2}{c^2} n_j^2(\omega) + \operatorname{grad} \operatorname{div} \Big) \sum_{\ell=1}^{N_j} \int_{B_R(\sigma_{j,\ell})} G\big( \tfrac{\omega}{c} n_j(\omega), x - y \big) \, \phi_j(\omega) \, \check{E}^{(0)}(\omega, y) \, dy \tag{17} \]

for the Born approximation of the induced field, see, for example, [6, Proposition 4].

We now want to determine the expected value of $\check{E}^{(1)}_{N_j,\sigma_j}$ in the limit where the number of particles $N_j$ and the width $L_j$ of the region where the particles are horizontally distributed tend to infinity, while keeping the ratio $\rho_j = \frac{N_j}{L_j^2}$ of particles per surface area constant, that is, we want to calculate the expression

\[ \bar{E}^{(1)}(\omega, x) = \lim_{N_j \to \infty} \int_{X_j} \check{E}^{(1)}_{N_j,\sigma_j}(\omega, x) \, d\mathbb{P}_{j,N_j,L_j(N_j)}(\sigma_j), \tag{18} \]

where $L_j(N_j) = \sqrt{\frac{N_j}{\rho_j}}$ and $\mathbb{P}$ denotes the probability measure introduced in (7).

Lemma 5 Let $j \in \{1, \ldots, J\}$ and $\rho_j > 0$ be fixed, $E^{(0)}$ be an incident field of the form (14), and $\chi$ be the susceptibility specified in (13). Then, the expected value $\bar{E}^{(1)}$ of the Born approximation of the field induced by $E^{(0)}$ in the presence of the susceptibility $\chi$ in the limit $N_j \to \infty$ with $L_j^2 \rho_j = N_j$, as introduced in (18), is given by

\[ \bar{E}^{(1)}(\omega, x) = \check{E}^{(0)}(\omega, x) + (2\pi)^4 \rho_j R^3 \phi_j(\omega) \check{f}(\omega) \, h\big( 2R \tfrac{\omega}{c} n_j(\omega) \big) \Big( e^{-i \frac{\omega}{c} n_j(\omega) Z_j} - e^{-i \frac{\omega}{c} n_j(\omega) \zeta_j} \Big) e^{i \frac{\omega}{c} n_j(\omega) (x_3 - \mu_j)} \, \eta \tag{19} \]

for $x_3 > \zeta_j + R$ and by

\[ \bar{E}^{(1)}(\omega, x) = \check{E}^{(0)}(\omega, x) + \frac{(2\pi)^4 \rho_j R^3 \phi_j(\omega) \check{f}(\omega)}{3} \Big( e^{-i \frac{\omega}{c} n_j(\omega) Z_j} - e^{-i \frac{\omega}{c} n_j(\omega) \zeta_j} \Big) e^{-i \frac{\omega}{c} n_j(\omega) (x_3 - \mu_j)} \, \eta \tag{20} \]


for $x_3 < Z_j - R$, where $\mu_j = \frac{1}{2}(\zeta_j + Z_j)$ and

\[ h(\xi) = \frac{\sin(\xi) - \xi \cos(\xi)}{\xi^3}. \tag{21} \]
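The kernel $h$ from (21) is, up to constants, the radial profile of the Fourier transform of the characteristic function of a ball, which is how it enters the proof below; in particular $h(\xi) \to \frac{1}{3}$ as $\xi \to 0$, the value used for (20). A stdlib sketch checking the identity $\int_{B_R(0)} e^{-i \langle k, x \rangle} \, dx = 4\pi R^3 h(R|k|)$ by direct quadrature (grid sizes are arbitrary choices):

```python
import math

def h(xi):
    # h(xi) = (sin(xi) - xi*cos(xi)) / xi^3, continued by h(0) = 1/3
    if abs(xi) < 1e-4:
        return 1.0 / 3.0 - xi ** 2 / 30.0   # Taylor expansion near 0
    return (math.sin(xi) - xi * math.cos(xi)) / xi ** 3

def ball_integral(R, k, steps=20000):
    # \int_{B_R(0)} e^{-i<k,x>} dx = 4*pi \int_0^R r^2 sin(k r)/(k r) dr,
    # evaluated with the midpoint rule on the radial integral
    dr = R / steps
    total = 0.0
    for j in range(steps):
        r = (j + 0.5) * dr
        total += r * math.sin(k * r) / k * dr
    return 4.0 * math.pi * total
```

The Taylor branch avoids the catastrophic cancellation of $\sin \xi - \xi \cos \xi$ for small arguments.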

Proof Inserting the expression (17) for the Born approximation of the electric field into the formula (18) for the expected value, we obtain the equation

\[ \bar{E}^{(1)}(\omega, x) = \check{E}^{(0)}(\omega, x) + \lim_{N_j \to \infty} N_j \, \phi_j(\omega) \check{f}(\omega) \Big( \frac{\omega^2}{c^2} n_j^2(\omega) + \operatorname{grad} \operatorname{div} \Big) K_{L_j(N_j)}(\omega, x) \, \eta, \tag{22} \]

where

\[ K_L(\omega, x) = \frac{1}{|U_{j,L}|} \int_{U_{j,L}} \int_{B_R(\sigma_{j,1})} G\big( \tfrac{\omega}{c} n_j(\omega), x - y \big) \, e^{-i \frac{\omega}{c} n_j(\omega) y_3} \, dy \, d\sigma_{j,1}. \]

We recall that $U_{j,L} = [-\frac{1}{2}L, \frac{1}{2}L] \times [-\frac{1}{2}L, \frac{1}{2}L] \times [Z_j, \zeta_j]$ is for $L = L_j$ the region in which the particles in the $j$th layer are lying. To symmetrise the expression, we set

\[ \mu_j = \frac{1}{2}(\zeta_j + Z_j) \quad \text{and} \quad d_j = \frac{1}{2}(\zeta_j - Z_j) \]

and shift $U_{j,L}$ to the origin by defining $\tilde{U}_{j,L} = U_{j,L} - \mu_j e_3$ with $e_3 = (0, 0, 1)$. Introducing the probability density

\[ p_L(\xi) = \frac{1}{|U_{j,L}|} \chi_{U_{j,L}}(\mu_j e_3 + \xi) = \frac{1}{2 L^2 d_j} \chi_{\tilde{U}_{j,L}}(\xi) \]

for the variable $\xi = \sigma_{j,1} - \mu_j e_3$, we rewrite $K_L$ in the form

\[ \begin{aligned} K_L(\omega, x) &= \int_{\mathbb{R}^3} p_L(\xi) \, e^{-i \frac{\omega}{c} n_j(\omega) (\mu_j + \xi_3)} \int_{\mathbb{R}^3} \chi_{B_R(0)}(y) \, G\big( \tfrac{\omega}{c} n_j(\omega), x - \mu_j e_3 - \xi - y \big) \, e^{-i \frac{\omega}{c} n_j(\omega) y_3} \, dy \, d\xi \\ &= (2\pi)^{\frac{3}{2}} \int_{\mathbb{R}^3} p_L(\xi) \, e^{-i \frac{\omega}{c} n_j(\omega) (\mu_j + \xi_3)} \, \mathcal{F}\big[ y \mapsto \chi_{B_R(0)}(y) \, G\big( \tfrac{\omega}{c} n_j(\omega), x - \mu_j e_3 - \xi - y \big) \big]\big( \tfrac{\omega}{c} n_j(\omega) e_3 \big) \, d\xi \\ &= (2\pi)^3 \int_{\mathbb{R}^3} p_L(\xi) \, e^{-i \frac{\omega}{c} n_j(\omega) (\mu_j + \xi_3)} \, \Big( \mathcal{F}[\chi_{B_R(0)}] * \mathcal{F}\big[ y \mapsto G\big( \tfrac{\omega}{c} n_j(\omega), x - \mu_j e_3 - \xi - y \big) \big] \Big)\big( \tfrac{\omega}{c} n_j(\omega) e_3 \big) \, d\xi. \end{aligned} \]


The shift in the function $G$ now translates in Fourier space to a multiplication by a phase factor; explicitly, we find (using the symmetry $G(\kappa, y) = G(\kappa, -y)$ and the notation $\hat{G}(\kappa, k) = \mathcal{F}[y \mapsto G(\kappa, y)](k)$) that

\[ \mathcal{F}\big[ y \mapsto G(\kappa, x - \mu_j e_3 - \xi - y) \big](k) = e^{-i \langle k, x - \mu_j e_3 - \xi \rangle} \, \hat{G}(\kappa, k). \]

Therefore, we can write $K_L$ (again using the shorter notation $\hat{\chi}_{B_R(0)} = \mathcal{F}[\chi_{B_R(0)}]$ and $\hat{p}_L = \mathcal{F}[p_L]$) as

\[ \begin{aligned} K_L(\omega, x) &= (2\pi)^3 e^{-i \frac{\omega}{c} n_j(\omega) \mu_j} \int_{\mathbb{R}^3} \hat{\chi}_{B_R(0)}\big( \tfrac{\omega}{c} n_j(\omega) e_3 - k \big) \, \hat{G}\big( \tfrac{\omega}{c} n_j(\omega), k \big) \, e^{-i \langle k, x - \mu_j e_3 \rangle} \int_{\mathbb{R}^3} p_L(\xi) \, e^{-i \langle \frac{\omega}{c} n_j(\omega) e_3 - k, \xi \rangle} \, d\xi \, dk \\ &= (2\pi)^{\frac{9}{2}} e^{-i \frac{\omega}{c} n_j(\omega) \mu_j} \int_{\mathbb{R}^3} \hat{\chi}_{B_R(0)}\big( \tfrac{\omega}{c} n_j(\omega) e_3 - k \big) \, \hat{p}_L\big( \tfrac{\omega}{c} n_j(\omega) e_3 - k \big) \, \hat{G}\big( \tfrac{\omega}{c} n_j(\omega), k \big) \, e^{-i \langle k, x - \mu_j e_3 \rangle} \, dk. \end{aligned} \tag{23} \]

Remarking that the Fourier transform of $p_L$ is explicitly given by

\[ \hat{p}_L(k) = \frac{1}{(2\pi)^{\frac{3}{2}} \, 2 L^2 d_j} \int_{-\frac{L}{2}}^{\frac{L}{2}} e^{-i k_1 \xi_1} \, d\xi_1 \int_{-\frac{L}{2}}^{\frac{L}{2}} e^{-i k_2 \xi_2} \, d\xi_2 \int_{\mathbb{R}} \chi_{[-d_j, d_j]}(\xi_3) \, e^{-i k_3 \xi_3} \, d\xi_3 = \frac{1}{(2\pi)^{\frac{3}{2}} \, 2 L^2 d_j} \, \frac{2 \sin(\frac{1}{2} L k_1)}{k_1} \, \frac{2 \sin(\frac{1}{2} L k_2)}{k_2} \int_{\mathbb{R}} \chi_{[-d_j, d_j]}(\xi_3) \, e^{-i k_3 \xi_3} \, d\xi_3, \]

we find with the abbreviation $\hat{\chi}_{[-d_j, d_j]} = \mathcal{F}[\chi_{[-d_j, d_j]}]$ in the limit $N_j \to \infty$ that

\[ N_j \, \hat{p}_{L_j(N_j)}(k) \to 2\pi \rho_j \, \delta(k_1) \, \delta(k_2) \, \hat{\chi}_{[-d_j, d_j]}(k_3) \quad (N_j \to \infty). \tag{24} \]

Using (24) in (23), we can calculate the behaviour of $K_L$ in this limit to be

\[ \lim_{N_j \to \infty} N_j K_{L_j(N_j)}(\omega, x) = (2\pi)^{\frac{11}{2}} \rho_j \, e^{-i \frac{\omega}{c} n_j(\omega) \mu_j} \int_{\mathbb{R}} \hat{\chi}_{B_R(0)}\big( (\tfrac{\omega}{c} n_j(\omega) - k_3) e_3 \big) \, \hat{G}\big( \tfrac{\omega}{c} n_j(\omega), k_3 e_3 \big) \, \hat{\chi}_{[-d_j, d_j]}(k_3) \, e^{-i k_3 (x_3 - \mu_j)} \, dk_3. \]

Using further that $\hat{G}$ can be computed by taking the Fourier transform of the Helmholtz equation, giving us

\[ \hat{G}(\kappa, k) = \frac{1}{(2\pi)^{\frac{3}{2}}} \, \frac{1}{|k|^2 - \kappa^2}, \]


and calculating the Fourier transform of the characteristic function of a sphere to be

\[ \hat{\chi}_{B_R(0)}(k) = \frac{1}{\sqrt{2\pi}} \int_0^R \int_0^\pi r^2 \sin\theta \, e^{-i r |k| \cos\theta} \, d\theta \, dr = \frac{1}{\sqrt{2\pi}} \int_0^R \frac{r}{i |k|} \big( e^{i r |k|} - e^{-i r |k|} \big) \, dr = \sqrt{\frac{2}{\pi}} \frac{1}{|k|^3} \int_0^{R |k|} \alpha \sin(\alpha) \, d\alpha = \sqrt{\frac{2}{\pi}} \frac{1}{|k|^3} \big( \sin(R |k|) - R |k| \cos(R |k|) \big); \]

we are left with

\[ \lim_{N_j \to \infty} N_j K_{L_j(N_j)}(\omega, x) = (2\pi)^4 \sqrt{\frac{2}{\pi}} \, \rho_j R^3 \, e^{-i \frac{\omega}{c} n_j(\omega) \mu_j} \int_{\mathbb{R}} \frac{h\big( R (\tfrac{\omega}{c} n_j(\omega) - k_3) \big)}{k_3^2 - \frac{\omega^2}{c^2} n_j^2(\omega)} \, \hat{\chi}_{[-d_j, d_j]}(k_3) \, e^{-i k_3 (x_3 - \mu_j)} \, dk_3, \tag{25} \]

where we used the abbreviation $h$ from (21). Inserting finally

\[ \hat{\chi}_{[-d_j, d_j]}(k_3) = \frac{1}{\sqrt{2\pi}} \int_{-d_j}^{d_j} e^{-i k_3 x_3} \, dx_3 = \frac{1}{\sqrt{2\pi}} \, \frac{1}{i k_3} \big( e^{i k_3 d_j} - e^{-i k_3 d_j} \big), \]

we see that the integrand in (25) can for $x_3 - \mu_j > d_j + R$ (that is, for $x_3 > \zeta_j + R$) be meromorphically extended to a function of $k_3$ in the lower half complex plane which decays sufficiently fast at infinity, so that the residue theorem yields

\[ \lim_{N_j \to \infty} N_j K_{L_j(N_j)}(\omega, x) = (2\pi)^4 \rho_j R^3 \, e^{-i \frac{\omega}{c} n_j(\omega) \mu_j} \, \frac{h\big( 2R \tfrac{\omega}{c} n_j(\omega) \big)}{\frac{\omega^2}{c^2} n_j^2(\omega)} \Big( e^{i \frac{\omega}{c} n_j(\omega) d_j} - e^{-i \frac{\omega}{c} n_j(\omega) d_j} \Big) e^{i \frac{\omega}{c} n_j(\omega) (x_3 - \mu_j)}. \]

Putting this into (22), we obtain with $\mu_j + d_j = \zeta_j$ and $\mu_j - d_j = Z_j$ the formula (19). Similarly, we extend the integrand for $x_3 - \mu_j < -d_j - R$ (that is, for $x_3 < Z_j - R$) meromorphically to a function of $k_3$ in the upper half plane and find with the residue theorem that

\[ \lim_{N_j \to \infty} N_j K_{L_j(N_j)}(\omega, x) = (2\pi)^4 \rho_j R^3 \, \frac{h(0)}{\frac{\omega^2}{c^2} n_j^2(\omega)} \Big( e^{i \frac{\omega}{c} n_j(\omega) d_j} - e^{-i \frac{\omega}{c} n_j(\omega) d_j} \Big) e^{-i \frac{\omega}{c} n_j(\omega) x_3}, \]

which gives us with (22) and with $h(0) = \frac{1}{3}$ the formula (20).


5 Recovering the Susceptibility with Optical Coherence Elastography

So far, we have presented a way to model the measurements of an optical coherence tomography setup for a layered medium of the form (6). The question we are really interested in, however, is how to reconstruct the properties of the medium from this data.

Let us first consider one of the layer stripping steps for a susceptibility $\chi$ of the form (9) with $X_1$ being either of the form (12) of Problem 1 or of the form (13) of Problem 2. We make the additional assumption that $\operatorname{supp} \chi_j \subset [0, T]$ and $\operatorname{supp} \chi_j^{(p)} \subset [0, T]$ for a sufficiently small $T > 0$. Then, we see that by choosing a sufficiently short pulse as incident wave, that is, $E^{(0)}(t, x) = f(t + \frac{x_3}{c}) \eta$ (assuming for the background medium $\chi_0 = 0$) with $f$ having a sufficiently narrow support (this ability is of course limited by the available frequencies), we can arrange it such that the field $E$ in the presence of $\chi$ and the field $E_1$ in the presence of $X_1$ (where we are content with the averaged Born approximation of the electric field, see (18), in the case of Problem 2) satisfy $E_1(t, x_0) = E(t, x_0)$ for all $t < t_0$ and $E_1(t, x_0) = 0$ for $t \ge t_0$ at the detector $x_0 \in \mathbb{R}^3$ for some time $t_0 \in \mathbb{R}$. This allows us to split the reconstruction of the electric susceptibility by a layer stripping method and reconstruct each layer separately. We will therefore only describe the inductive steps, in which we independently consider the subproblems described in Sect. 4.

We want to start with measurements from an optical coherence elastography setup, that is, we have optical coherence tomography data for different elastic states of the medium. Concretely, we apply a force proportional to some parameter $\delta \in \mathbb{R}$ perpendicular to the layers of the medium, which causes, under the assumption of a linear elastic medium, a linear displacement of the position $z_j$ of the layer. Correspondingly, the refractive indices in the medium, defined by (15), will change, which we assume to be linear in $\delta$ as well. Thus, each layer at the compression state corresponding to $\delta$ will be characterised by a refractive index $\bar{n}_j$ and a vertical position $\bar{z}_j$ of the beginning of the layer of the form

\[ \bar{n}_j(\omega, \delta) = n_j(\omega) + \delta n_j'(\omega) \quad \text{and} \quad \bar{z}_j(\delta) = z_j + \delta z_j' \]

for some functions $n_j' : \mathbb{R} \to \mathbb{C}$ and some slopes $z_j' \in \mathbb{R}$.

In the first reconstruction step, we have that the first layer is the background in which the medium resides, which we assume to be well described by the vacuum, $n_0 = 1$, and not to be affected by the compression, that is, $n_0' = 0$. Moreover, the distance between the detector and the medium shall be kept fixed during the compression so that $z_1' = 0$ as well. According to Lemma 4, the measurements at the detector $x_0 \in \mathbb{R}^3$ with $x_{0,3} > z_1$ then allow us to extract (knowing $\bar{n}_0 = 1$, the incident field $E^{(0)}$, and the vertical position $x_{0,3}$ of the detector explicitly) the information

\[ m_0[n_1, n_1', z_1](\omega, \delta) = \frac{\bar{n}_1(\omega, \delta) - 1}{\bar{n}_1(\omega, \delta) + 1} \, e^{-2 i \frac{\omega}{c} z_1}. \tag{26} \]
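As a sketch of how (26) is used in practice: once $z_1$ is known, the map $\bar{n}_1 \mapsto \frac{\bar{n}_1 - 1}{\bar{n}_1 + 1} e^{-2i\omega z_1/c}$ can be inverted frequency by frequency, and $n_1$, $n_1'$ then follow from two compression states by linearity. The helper below is an illustration under the assumption that $z_1$ has already been determined (all numerical values in the usage note are arbitrary):

```python
import cmath

C = 299792458.0  # speed of light in vacuum [m/s]

def invert_m0(m0_value, omega, z1):
    # undo the propagation phase of (26), then invert the
    # Moebius map n -> (n - 1)/(n + 1)
    q = m0_value * cmath.exp(2j * omega * z1 / C)
    return (1 + q) / (1 - q)
```

With samples at $\delta = 0$ and some $\delta \ne 0$ one obtains $n_1(\omega)$ and then $n_1'(\omega) = (\bar{n}_1(\omega, \delta) - n_1(\omega)) / \delta$.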

From this data, we can uniquely compute the functions $n_1$, $n_1'$, and $z_1$.

Lemma 6 Let $I \subset \mathbb{R}$ be a set which contains at least two incommensurable points $\omega_1, \omega_2 \in I \setminus \{0\}$ (that is, $\frac{\omega_2}{\omega_1} \in \mathbb{R} \setminus \mathbb{Q}$). Assume that we have $(n_1, n_1', z_1)$ and $(\tilde{n}_1, \tilde{n}_1', \tilde{z}_1)$ with $n_1'(\omega) \ne 0$, $\tilde{n}_1'(\omega) \ne 0$, and

\[ m_0[n_1, n_1', z_1](\omega, \delta) = m_0[\tilde{n}_1, \tilde{n}_1', \tilde{z}_1](\omega, \delta) \quad \text{for all } \omega \in I, \; \delta \in \mathbb{R}. \tag{27} \]

Then, we have $n_1(\omega) = \tilde{n}_1(\omega)$, $n_1'(\omega) = \tilde{n}_1'(\omega)$, and $z_1 = \tilde{z}_1$ for all $\omega \in I$.

Proof Expanding the fractions in (27), the equation reduces to the zeroes of a quadratic polynomial in $\delta$. Comparing the coefficients of second order in $\delta$, we find that

\[ n_1'(\omega) \, \tilde{n}_1'(\omega) \Big( e^{-2 i \frac{\omega}{c} z_1} - e^{-2 i \frac{\omega}{c} \tilde{z}_1} \Big) = 0. \]

Thus, we get

\[ e^{-2 i \frac{\omega}{c} z_1} = e^{-2 i \frac{\omega}{c} \tilde{z}_1} \quad \text{for all } \omega \in I. \]

Evaluating this at $\omega_1$ and $\omega_2$, we have that there exist two integers $\lambda_1, \lambda_2 \in \mathbb{Z}$ with

\[ z_1 - \tilde{z}_1 = \frac{\pi c}{\omega_1} \lambda_1 = \frac{\pi c}{\omega_2} \lambda_2. \]

If $\lambda_2 \ne 0$, then we would get the contradiction $\frac{\lambda_1}{\lambda_2} = \frac{\omega_1}{\omega_2} \in \mathbb{R} \setminus \mathbb{Q}$. Therefore, $\lambda_2 = 0$, which means that $z_1 = \tilde{z}_1$.

With this, (27) evaluated at $\delta = 0$ simplifies to

\[ n_1(\omega) = \tilde{n}_1(\omega) \quad \text{for all } \omega \in I. \]

Finally, looking at the terms of first order in $\delta$ in the expanded version of (27), we find that they have been reduced to give the equation $n_1'(\omega) = \tilde{n}_1'(\omega)$.

After having recovered the parameters up to the $j$th layer, $j \in \{1, \ldots, J\}$, we can clean our measurement data from all effects caused by the previous layers and consider the next subproblem, namely the signal originating from the region of the randomly distributed particles. Here, the unknown parameters consist of


• the radius $R$ of the particles, which we will assume to be so small that the approximation $R = 0$ is reasonable and that the particles can also after compression be considered to have a round shape;
• the ratio $\rho_j > 0$ of particles per surface area, which we assume to be invariant under the compression;
• the refractive index $\bar{\nu}_j$ of the particles, which we assume to deform linearly according to $\bar{\nu}_j(\omega, \delta) = \nu_j(\omega) + \delta \nu_j'(\omega)$, where $\nu_j(\omega) = \sqrt{1 + \check{\chi}_j^{(p)}(\omega)}$, under compression; and
• the vertical positions $\bar{\zeta}_j$ and $\bar{Z}_j$ of the beginning and the end of the random medium inside the $j$th layer, which are also assumed to change linearly according to $\bar{\zeta}_j(\delta) = \zeta_j + \delta \zeta_j'$ and $\bar{Z}_j(\delta) = Z_j + \delta Z_j'$.

We collect these unknowns in the tuple $S_j = (\rho_j, \nu_j, \nu_j', \zeta_j, \zeta_j', Z_j, Z_j')$. The (corrected) incident wave $E^{(0)}$ and the refractive index $n_j$ and its rate $n_j'$ of change under compression are presumed to be already calculated. From the measurements of the electric field for this subproblem, provided that it can be well approximated by the expected value of the Born approximation as calculated in Lemma 5, we can extract the data (rewriting the expression (16) for $\phi_j$ in (19) in terms of the refractive indices)

\[ M_j[S_j](\omega, \delta) = \rho_j \big( \bar{\nu}_j^2(\omega, \delta) - \bar{n}_j^2(\omega, \delta) \big) \Big( e^{-i \frac{\omega}{2c} \bar{n}_j(\omega, \delta) (\bar{\zeta}_j(\delta) + 3 \bar{Z}_j(\delta))} - e^{-i \frac{\omega}{2c} \bar{n}_j(\omega, \delta) (3 \bar{\zeta}_j(\delta) + \bar{Z}_j(\delta))} \Big). \]

Lemma 7 Let $j \in \{1, \ldots, J\}$ be fixed, $I \subset \mathbb{R}$ be an arbitrary subset and $n_j$, $n_j'$ be given such that $n_j'(\omega) \ne 0$ for every $\omega \in I$ and that there exists a value $\omega_0 \in I \setminus \{0\}$ with $\Im(n_j'(\omega_0)) > 0$. Assume that we have $S_j = (\rho_j, \nu_j, \nu_j', \zeta_j, \zeta_j', Z_j, Z_j')$ and $\tilde{S}_j = (\tilde{\rho}_j, \tilde{\nu}_j, \tilde{\nu}_j', \tilde{\zeta}_j, \tilde{\zeta}_j', \tilde{Z}_j, \tilde{Z}_j')$ with

\[ M_j[S_j](\omega, \delta) = M_j[\tilde{S}_j](\omega, \delta) \quad \text{for all } \omega \in I, \; \delta \in \mathbb{R}. \tag{28} \]

Additionally, we enforce the ordering $Z_j < \zeta_j$ and $\tilde{Z}_j < \tilde{\zeta}_j$ for the beginning and the end of the random layer and make the assumptions $Z_j' > \zeta_j' > 0$ and $\tilde{Z}_j' > \tilde{\zeta}_j' > 0$ that the layer shrinks when being compressed. Moreover, we assume the existence of an element $\omega_1 \in I$ so that

\[ \frac{n_j'(\omega_1)}{n_j(\omega_1)} \ne \frac{\nu_j'(\omega_1)}{\nu_j(\omega_1)}. \tag{29} \]

Then, we have $S_j = \tilde{S}_j$.

Proof Considering the different orders of decay in $\delta$ in the exponents in (28), we require that all of them match, which yields the equation system

$$\delta^2\,\frac{\omega}{2c}\,\Im(n_j'(\omega))\,(\zeta_j' + 3Z_j') = \delta^2\,\frac{\omega}{2c}\,\Im(n_j'(\omega))\,(\tilde\zeta_j' + 3\tilde Z_j') \quad\text{and}\quad \delta^2\,\frac{\omega}{2c}\,\Im(n_j'(\omega))\,(3\zeta_j' + Z_j') = \delta^2\,\frac{\omega}{2c}\,\Im(n_j'(\omega))\,(3\tilde\zeta_j' + \tilde Z_j')$$

for the exponents quadratic in $\delta$, which implies for the frequency $\omega = \omega_0$, for which $\Im(n_j'(\omega_0)) > 0$, that $\zeta_j' = \tilde\zeta_j'$ and $Z_j' = \tilde Z_j'$; and, using this result, the equation system

$$\delta\,\frac{\omega}{2c}\,\Im(n_j'(\omega))\,(3\zeta_j + Z_j) = \delta\,\frac{\omega}{2c}\,\Im(n_j'(\omega))\,(3\tilde\zeta_j + \tilde Z_j) \quad\text{and}\quad \delta\,\frac{\omega}{2c}\,\Im(n_j'(\omega))\,(\zeta_j + 3Z_j) = \delta\,\frac{\omega}{2c}\,\Im(n_j'(\omega))\,(\tilde\zeta_j + 3\tilde Z_j)$$

for the exponents linear in $\delta$, which further implies $\zeta_j = \tilde\zeta_j$ and $Z_j = \tilde Z_j$. At this point, (28) is reduced to

$$\rho_j\big((\nu_j + \delta\nu_j')^2 - (n_j + \delta n_j')^2\big) = \tilde\rho_j\big((\tilde\nu_j + \delta\tilde\nu_j')^2 - (n_j + \delta n_j')^2\big).$$

Comparing coefficients with respect to $\delta$ gives us the equation system

$$\rho_j(\nu_j'^2 - n_j'^2) = \tilde\rho_j(\tilde\nu_j'^2 - n_j'^2), \tag{30}$$
$$\rho_j(\nu_j\nu_j' - n_j n_j') = \tilde\rho_j(\tilde\nu_j\tilde\nu_j' - n_j n_j'), \tag{31}$$
$$\rho_j(\nu_j^2 - n_j^2) = \tilde\rho_j(\tilde\nu_j^2 - n_j^2). \tag{32}$$

We use Eq. (32) in (30) and (31) to eliminate the variables $\rho_j$ and $\tilde\rho_j$, and interpret the result as an equation system for the variables $\tilde\nu_j$ and $\tilde\nu_j'$. Solving these equations for $\tilde\nu_j'$ gives us

$$(\nu_j^2 - n_j^2)\,\tilde\nu_j'^2 = (\tilde\nu_j^2 - n_j^2)\,\nu_j'^2 + (\nu_j^2 - \tilde\nu_j^2)\,n_j'^2, \qquad (\nu_j^2 - n_j^2)\,\tilde\nu_j\tilde\nu_j' = (\tilde\nu_j^2 - n_j^2)\,\nu_j\nu_j' + (\nu_j^2 - \tilde\nu_j^2)\,n_j n_j'.$$

Eliminating further $\tilde\nu_j'$ by multiplying the first equation with $(\nu_j^2 - n_j^2)\tilde\nu_j^2$ and subtracting the squared second equation, we find after some algebraic manipulations

$$(\tilde\nu_j^2 - n_j^2)(\nu_j^2 - \tilde\nu_j^2)(\nu_j n_j' - \nu_j' n_j)^2 = 0.$$

Evaluating this at the value $\omega_1$, we see that the last factor is by assumption (29) not zero. Thus, there are only two cases.
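The elimination step above rests on a polynomial identity that can be checked numerically. The following verification sketch is not part of the original paper; the names are our own (`t` stands for the tilded wave speed and `dnu`, `dn` for the primed rates of change):

```python
import random

def check_elimination(trials=100):
    """Numerically verify the polynomial identity behind the elimination
    step in the proof of Lemma 7:
      (nu^2-n^2)*t^2*[(t^2-n^2)*dnu^2 + (nu^2-t^2)*dn^2]
        - [(t^2-n^2)*nu*dnu + (nu^2-t^2)*n*dn]^2
      == (t^2-n^2)*(nu^2-t^2)*(n*dnu - nu*dn)^2
    for random real values of the five free variables."""
    for _ in range(trials):
        nu, dnu, n, dn, t = (random.uniform(0.5, 2.0) for _ in range(5))
        a, b = t * t - n * n, nu * nu - t * t
        lhs = (nu * nu - n * n) * t * t * (a * dnu * dnu + b * dn * dn) \
            - (a * nu * dnu + b * n * dn) ** 2
        rhs = a * b * (n * dnu - nu * dn) ** 2
        if abs(lhs - rhs) > 1e-9 * max(1.0, abs(lhs), abs(rhs)):
            return False
    return True
```

Since the identity holds for all real values, vanishing of the left-hand side forces one of the three factors on the right to vanish, which is exactly the case distinction that follows.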


1. Either we have $\tilde\nu_j(\omega_1) = \nu_j(\omega_1) \ne n_j(\omega_1)$, and therefore by (32) that $\tilde\rho_j = \rho_j$; then we get with (32) and (30) that $\tilde\nu_j = \nu_j$ and $\tilde\nu_j' = \nu_j'$ hold on the whole set $I$, which means that we have shown $\tilde S_j = S_j$.
2. Or we have that $\tilde\nu_j(\omega_1) = n_j(\omega_1)$. Then, (32) tells us that also $\nu_j(\omega_1) = n_j(\omega_1)$ and thus, by combining (30) and (31), that $\tilde\nu_j'(\omega_1) = \nu_j'(\omega_1)$. Furthermore, we know from assumption (29) that in this case $\nu_j'(\omega_1) \ne n_j'(\omega_1)$, and therefore (30) implies $\tilde\rho_j = \rho_j$, from which we again conclude that $\tilde S_j = S_j$.

As the last type of subproblem, we then encounter the interface between the layer $j$ and the layer $j+1$. Similarly to the case of the initial layer, we obtain here from Lemma 4 the data

$$m_j[n_{j+1}, n_{j+1}', z_{j+1}, z_{j+1}'](\omega,\delta) = \frac{\bar n_{j+1}(\omega,\delta) - \bar n_j(\omega,\delta)}{\bar n_{j+1}(\omega,\delta) + \bar n_j(\omega,\delta)}\, e^{-2\mathrm i\frac{\omega}{c}\bar n_j(\omega,\delta)\,\bar z_{j+1}(\delta)}.$$

Again, this data allows us to uniquely obtain the variables $n_{j+1}$, $n_{j+1}'$, $z_{j+1}$, and $z_{j+1}'$ from the already reconstructed values $n_j$ and $n_j'$.

Lemma 8 Let $j \in \{1,\dots,J-1\}$ be fixed, $I \subset \mathbb R$ be an arbitrary subset, and $n_j$, $n_j'$ be given such that $n_j(\omega) \ne 0$ for every $\omega \in I$ and such that there exists a value $\omega_0 \in I \setminus \{0\}$ with $\Im(n_j'(\omega_0)) > 0$. Assume that we have $(n_{j+1}, n_{j+1}', z_{j+1}, z_{j+1}')$ and $(\tilde n_{j+1}, \tilde n_{j+1}', \tilde z_{j+1}, \tilde z_{j+1}')$ with

$$m_j[n_{j+1}, n_{j+1}', z_{j+1}, z_{j+1}'](\omega,\delta) = m_j[\tilde n_{j+1}, \tilde n_{j+1}', \tilde z_{j+1}, \tilde z_{j+1}'](\omega,\delta) \tag{33}$$

for all $\omega \in I$ and $\delta \in \mathbb R$. Then, we have $n_{j+1}(\omega) = \tilde n_{j+1}(\omega)$ and $n_{j+1}'(\omega) = \tilde n_{j+1}'(\omega)$ for all $\omega \in I$, as well as $z_{j+1} = \tilde z_{j+1}$ and $z_{j+1}' = \tilde z_{j+1}'$.

Proof Comparing again the different orders of decay in $\delta$ in the exponents in (33), we require that the coefficients on both sides coincide:

$$\delta^2\,\frac{2\omega}{c}\,\Im(n_j'(\omega))\,(z_{j+1}' - \tilde z_{j+1}') = 0 \quad\text{and}\quad \delta\,\frac{2\omega}{c}\,\big[\Im(n_j'(\omega))(z_{j+1} - \tilde z_{j+1}) + \Im(n_j(\omega))(z_{j+1}' - \tilde z_{j+1}')\big] = 0.$$

Because of the assumption that $\Im(n_j'(\omega_0)) > 0$, this is equivalent to $z_{j+1} = \tilde z_{j+1}$ and $z_{j+1}' = \tilde z_{j+1}'$.


As in the proof of Lemma 6, Eq. (33) for $\delta = 0$ then gives us $2n_j(\omega)(n_{j+1}(\omega) - \tilde n_{j+1}(\omega)) = 0$, resulting in $n_{j+1}(\omega) = \tilde n_{j+1}(\omega)$. Finally, dividing both sides of (33) by the exponential factors (which we already know to be the same), we get a quadratic equation in $\delta$, and equating the first-order terms in $\delta$, we obtain $2n_j(\omega)(n_{j+1}'(\omega) - \tilde n_{j+1}'(\omega)) = 0$, which yields $n_{j+1}'(\omega) = \tilde n_{j+1}'(\omega)$.

Conclusion

We have thus shown that, by analysing with optical coherence tomography a layered medium endowed with independently uniformly distributed scatterers in each layer, we can reduce the inverse problem of reconstructing the electric susceptibility of the medium to subproblems for each layer separately by a layer-stripping argument, provided the homogeneous parts between the different regions are not too small. By combining this imaging method with an elastography setup, recording measurements for different compression states (normal to the layered structure), we obtain unique reconstructability of all the optical parameters: the electric susceptibilities and positions of the layers, the electric susceptibilities of the randomly distributed particles, their particle density, and the locations of the regions of these particles (at every compression state). The recovered shifts of the layer boundaries for the different compression states could then be used in a next step to determine elastic parameters of the medium.

Acknowledgements This work was made possible by the greatly appreciated support of the Austrian Science Fund (FWF) via the special research programme SFB F68 "Tomography Across the Scales": Peter Elbau and Leopold Veselka have been supported via the subproject F6804-N36 "Quantitative Coupled Physics Imaging", and Leonidas Mindrinos acknowledges support from the subproject F6801-N36.


The Finite Element Method and Balancing Principle for Magnetic Resonance Imaging

Larisa Beilina, Geneviève Guillot, and Kati Niinimäki

Abstract This work considers a finite element method in combination with the balancing principle for an a posteriori choice of the regularization parameter for the image reconstruction problem appearing in magnetic resonance imaging (MRI). A fixed point iterative algorithm is formulated and its performance is demonstrated on image reconstruction from experimental MR data.

Keywords MRI · Fredholm integral equation of the first kind · Finite element method · Regularization · Balancing principle

MSC: 65R20 · 65R32

1 Introduction

In this work, the MRI problem described by a Fredholm integral equation of the first kind is studied by applying the finite element method (FEM) to its solution. A Fredholm integral equation of the first kind is an ill-posed problem, and to approach it, minimization of the Tikhonov functional [15–17] is usually used.

L. Beilina (B): Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, 412 96 Gothenburg, Sweden. e-mail: [email protected]
G. Guillot: IR4M UMR8081, CNRS, Université Paris-Sud, Université Paris-Saclay, bâtiment 220, 4 place du Général Leclerc, 91401 Orsay, France. e-mail: [email protected]
K. Niinimäki: IR4M UMR8081, CNRS, Université Paris-Sud, Université Paris-Saclay, SHFJ, 4 place du Général Leclerc, 91401 Orsay, France; Xray Division, Planmeca Oy, Asentajankatu 6, 00880 Helsinki, Finland. e-mail: [email protected]; [email protected]

© Springer Nature Switzerland AG 2020. L. Beilina et al. (eds.), Mathematical and Numerical Approaches for Multi-Wave Inverse Problems, Springer Proceedings in Mathematics & Statistics 328, https://doi.org/10.1007/978-3-030-48634-1_9


The paper focuses on the efficiency of applying FEM for image reconstruction in magnetic resonance imaging. A FEM for integral equations of the first kind was elaborated in [5], and an adaptive FEM with a posteriori error estimates for the Tikhonov functional and its regularized solution was developed in [10]. The present work employs the finite element method for minimization of the regularized Tikhonov functional, where the regularization term is taken in the $H^1$ norm. Further, we derive the Fréchet derivative of the regularized Tikhonov functional and formulate the finite element method to find the optimal solution of this functional. The balancing principle [6] is then used for an a posteriori choice of the regularization parameter. Finally, a fixed point iterative algorithm, which combines the finite element method and the balancing principle, is formulated and applied to the reconstruction of images from experimental MR data acquired at the IR4M laboratory of Paris-Sud University, France.

The outline of this paper is as follows. The statements of the forward and inverse MRI problems are presented in Sect. 2. In Sect. 3, the finite element method for minimization of the Tikhonov functional is formulated. In Sect. 4, the balancing principle for choosing the regularization parameter in the Tikhonov functional is briefly presented, and in Sect. 4.1, the fixed point iterative algorithm for the solution of the MRI problem is formulated. Section 4.2 presents the convergence analysis of the algorithm of Sect. 4.1. Finally, Sect. 5 presents numerical results of reconstruction from experimental MR data using the proposed finite element method.

2 Statement of the Forward and Inverse MRI Problem

Throughout the paper, $H^k(\Omega)$ denotes the Hilbert space of all $L^2$-functions $w(x)$ defined in the domain $\Omega$ which are $k$ times continuously differentiable in $\Omega$, with all partial derivatives of order $|\alpha| \le k$ satisfying $D^\alpha w \in L^2(\Omega)$. The inner product in $H^k(\Omega)$ is defined as

$$(w, v)_{H^k(\Omega)} = \sum_{|\alpha| \le k} \int_\Omega D^\alpha w\, D^\alpha v\, dx.$$

We denote the domain of image reconstruction by $\Omega \subset \mathbb R^2$ with boundary $\partial\Omega$, and the domain where the MR data are collected by $\Omega_\kappa \subset \mathbb R^2$ with boundary $\partial\Omega_\kappa$, and call them image-space and k-space, respectively. The goal of this work is to solve the two-dimensional Fredholm integral equation of the first kind

$$u(k_x, k_y) = \int_\Omega f(x, y)\, G(x, y, k_x, k_y)\, dx\, dy, \tag{1}$$


where $f(x, y) \in H^1(\Omega)$ denotes the unknown image function which should be reconstructed from the experimental MR data $u(k_x, k_y) \in L^2(\Omega_\kappa)$. In (1) the kernel function is defined by

$$G(x, y, k_x, k_y) = e^{-2\pi i(k_x x + k_y y)} \in C^k,\ k > 0, \tag{2}$$

where $i$ is the imaginary unit. In Eqs. (1)–(2), $(k_x, k_y)$ denote the k-space trajectories which correspond to the coordinates of the measured data $u \in \Omega_\kappa$ and are defined in time $t$ as

$$k_x(t) = \frac{\gamma}{2\pi}\int_0^t G_x(\tau)\, d\tau = \frac{\gamma}{2\pi}\, G_x t, \tag{3}$$
$$k_y(t) = \frac{\gamma}{2\pi}\int_0^t G_y(\tau)\, d\tau = \frac{\gamma}{2\pi}\, G_y t, \tag{4}$$

where $G_x$, $G_y$ are the known magnetic field gradients which prescribe how the k-space data in $\Omega_\kappa$ are acquired. For more details about the statement of the MRI problem we refer to [4]. Equation (1) can be written as an operator equation

$$A f = u, \tag{5}$$

with a bounded linear operator $A : H^1(\Omega) \to L^2(\Omega_\kappa)$ defined as

$$A f := \int_\Omega f(x, y)\, e^{-2\pi i(k_x x + k_y y)}\, dx\, dy. \tag{6}$$

Further, we will consider the following ill-posed problem.

Ill-posed problem (IP). Find $f(x, y)$ in (1) when the measured MR data $u(k_x, k_y) \in \Omega_\kappa$, the k-space coordinates $(k_x, k_y)$ and the kernel $G(x, y, k_x, k_y)$ are known.

IP needs regularization [1, 2, 9, 13, 15–17]. Thus, to find a solution of IP we construct the Tikhonov regularization functional

$$M_\alpha(f) = \frac12\|A f - u\|^2_{L^2(\Omega_\kappa)} + \frac{\alpha}{2}\|f\|^2_{H^1(\Omega)}, \tag{7}$$

$M_\alpha(f) : H^1(\Omega) \to \mathbb C$, $u \in L^2(\Omega_\kappa)$. In (7), $\alpha = \alpha(\delta) > 0$ is a regularization parameter depending on the noise level $\delta$ in the data, such that $\|u - u^*\|_{L^2(\Omega_\kappa)} \le \delta$, where $u^*$ denotes the perfect noiseless data corresponding to the exact solution $f^*$ of (5), such that

$$A f^* = u^*. \tag{8}$$
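For a uniform pixel grid and a centered Cartesian k-space trajectory, the discretized operator (6) is simply a (shifted) discrete Fourier transform matrix. The following minimal sketch builds this matrix explicitly and generates synthetic data $u = Af$ as in (5); the grid size, pixel coordinates, and trajectory are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def forward_operator(n=16):
    # Pixel centers in image space and a centered Cartesian k-space trajectory.
    x = np.arange(n) - n // 2                 # image coordinates
    k = (np.arange(n) - n // 2) / n           # k-space coordinates
    X, Y = np.meshgrid(x, x, indexing="ij")
    KX, KY = np.meshgrid(k, k, indexing="ij")
    # Midpoint-rule discretization of (6) with unit pixel area:
    # A[row, col] = exp(-2*pi*i*(k_x*x + k_y*y)).
    return np.exp(-2j * np.pi * (np.outer(KX.ravel(), X.ravel())
                                 + np.outer(KY.ravel(), Y.ravel())))

A = forward_operator()
f = np.zeros((16, 16))
f[6:10, 6:10] = 1.0            # a simple square phantom
u = A @ f.ravel()              # synthetic k-space data, cf. (5)
```

For this particular trajectory the matrix-vector product reproduces the centered 2D FFT, which is a useful consistency check before moving to non-Cartesian trajectories, where the explicit matrix (or a non-uniform FFT) is genuinely needed.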

Our goal is to find the minimum of (7),

$$\inf_{f \in H^1(\Omega)} M_\alpha(f) = \inf_{f \in H^1(\Omega)}\left(\frac12\|A f - u\|^2_{L^2(\Omega_\kappa)} + \frac{\alpha}{2}\|f\|^2_{H^1(\Omega)}\right), \tag{9}$$

such that for all $b \in H^1(\Omega)$

$$M_\alpha'(f)(b) = 0, \tag{10}$$

where $M_\alpha'(f)$ denotes the Fréchet derivative of the functional (7). The following lemma is well known for the operator $A : L^2(\Omega) \to L^2(\Omega_\kappa)$; see [2].

Lemma 1 Let $A : L^2(\Omega) \to L^2(\Omega_\kappa)$ be a bounded linear operator. Then the Fréchet derivative of the functional (7) is

$$M_\alpha'(f)(b) = (A^*A f - A^*u, b) + \alpha(f, b), \quad \forall b \in L^2(\Omega). \tag{11}$$
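For a matrix discretization of $A$, (11) says that the gradient of $\frac12\|Af-u\|^2 + \frac{\alpha}{2}\|f\|^2$ is $A^*(Af - u) + \alpha f$, so the directional derivative in a direction $b$ should match a finite-difference quotient of the functional. A quick illustrative check (the sizes and random data below are our own choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((12, 8))      # discretized operator
u = rng.standard_normal(12)           # data
f = rng.standard_normal(8)            # current iterate
b = rng.standard_normal(8)            # test direction
alpha, eps = 0.3, 1e-6

def M(g):
    """Discrete Tikhonov functional, cf. (7) with the L^2 penalty of Lemma 1."""
    return 0.5 * np.linalg.norm(A @ g - u) ** 2 + 0.5 * alpha * np.linalg.norm(g) ** 2

grad = A.T @ (A @ f - u) + alpha * f                  # discrete form of (11)
fd = (M(f + eps * b) - M(f - eps * b)) / (2 * eps)    # central difference M'(f)(b)
```

Since the functional is quadratic, the central difference agrees with `grad @ b` up to rounding error.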

In the case when the operator is $A : H^1(\Omega) \to L^2(\Omega_\kappa)$, the derivation of the Fréchet derivative is more complicated because of the presence of the $H^1$ norm in the regularization term. Below we formulate a lemma concerning the Fréchet derivative for the operator $A : H^1(\Omega) \to L^2(\Omega_\kappa)$.

Lemma 2 Let $A : H^1(\Omega) \to L^2(\Omega_\kappa)$ be a bounded linear operator. Then the Fréchet derivative of the functional

$$M_\alpha(f) = \frac12\|A f - u\|^2_{L^2(\Omega_\kappa)} + \frac{\alpha}{2}\,\||\nabla f|\|^2_{L^2(\Omega)} \tag{12}$$

is

$$M_\alpha'(f)(b) = (A^*A f - A^*u, b) + \alpha(|\nabla f|, |\nabla b|), \quad \forall b \in H^1(\Omega), \tag{13}$$

with a convex growth factor $b$, i.e., $|\nabla b| < b$.

Proof We have

$$M_\alpha(f) = \frac12\|A f - u\|^2_{L^2(\Omega_\kappa)} + \frac{\alpha}{2}\,\||\nabla f|\|^2_{L^2(\Omega)} = \frac12\int_{\Omega_k}\left(\int_\Omega f(x,y)\,G(x,y,k_x,k_y)\,dx\,dy - u(k_x,k_y)\right)^2 dk_x\,dk_y + \frac{\alpha}{2}\int_\Omega |\nabla f|^2\,dx\,dy. \tag{14}$$

To find the Fréchet derivative (13) of the functional (12) we consider $M_\alpha(f+b) - M_\alpha(f)$ for all $b \in H^1(\Omega)$:

$$M_\alpha(f+b) - M_\alpha(f) = \frac12\int_{\Omega_k}\left(\int_\Omega (f+b)G\,dx\,dy - u(k_x,k_y)\right)^2 dk_x\,dk_y + \frac{\alpha}{2}\int_\Omega |\nabla(f+b)|^2\,dx\,dy - \frac12\int_{\Omega_k}\left(\int_\Omega fG\,dx\,dy - u(k_x,k_y)\right)^2 dk_x\,dk_y - \frac{\alpha}{2}\int_\Omega |\nabla f|^2\,dx\,dy = I_1 + I_2, \tag{15}$$

where

$$I_1 := \frac12\int_{\Omega_k}\left(\int_\Omega (f+b)G\,dx\,dy - u(k_x,k_y)\right)^2 dk_x\,dk_y - \frac12\int_{\Omega_k}\left(\int_\Omega fG\,dx\,dy - u(k_x,k_y)\right)^2 dk_x\,dk_y, \qquad I_2 := \frac{\alpha}{2}\int_\Omega |\nabla(f+b)|^2\,dx\,dy - \frac{\alpha}{2}\int_\Omega |\nabla f|^2\,dx\,dy. \tag{16}$$

We consider the terms $I_1$ and $I_2$ separately. For the term $I_1$,

$$I_1 = \int_{\Omega_k}\left(\int_\Omega fG\,dx\,dy - u\right)\left(\int_\Omega bG\,dx\,dy\right) dk_x\,dk_y + \frac12\int_{\Omega_k}\left(\int_\Omega bG\,dx\,dy\right)^2 dk_x\,dk_y. \tag{17}$$

Similarly, we estimate the term $I_2$ as

$$I_2 = \frac{\alpha}{2}\int_\Omega\big(|\nabla f + \nabla b|^2 - |\nabla f|^2\big)\,dx\,dy \le \frac{\alpha}{2}\int_\Omega\big(2|\nabla f||\nabla b| + |\nabla b|^2\big)\,dx\,dy. \tag{18}$$

Taking limits in $I_1$ and $I_2$ in the definition of the Fréchet derivative, we get

$$0 = \lim_{\|b\|\to 0}\frac{I_1}{\|b\|^2} = \lim_{\|b\|\to 0}\frac{\int_{\Omega_k}\big(\int_\Omega fG\,dx\,dy - u\big)\big(\int_\Omega bG\,dx\,dy\big)\,dk_x\,dk_y + \frac12\int_{\Omega_k}\big(\int_\Omega bG\,dx\,dy\big)^2 dk_x\,dk_y}{\|b\|^2},$$
$$0 = \lim_{\|b\|\to 0}\frac{I_2}{\|b\|^2} \le \lim_{\|b\|\to 0}\frac{\alpha\big[\int_\Omega |\nabla f||\nabla b|\,dx\,dy + \frac12\int_\Omega |\nabla b|^2\,dx\,dy\big]}{\|b\|^2}. \tag{19}$$

The second terms in the limits for $I_1$ and $I_2$ in (19) satisfy

$$\lim_{\|b\|\to 0}\frac{\frac12\int_{\Omega_k}\big(\int_\Omega bG\,dx\,dy\big)^2 dk_x\,dk_y}{\|b\|^2} \to 0, \qquad \lim_{\|b\|\to 0}\frac{\frac{\alpha}{2}\int_\Omega |\nabla b|^2\,dx\,dy}{\|b\|^2} \le \lim_{\|b\|\to 0} C(b)\,\frac{\alpha}{2}\,\frac{\|b\|_2^2}{\|b\|^2} \to 0, \quad C(b) = \text{const}. \tag{20}$$

Thus, the factors in the first terms for $I_1$ and $I_2$ in (19) must also be zero, from which (13) follows.

The following lemma can be proven similarly.

Lemma 3 Let $A : H^1(\Omega) \to L^2(\Omega_\kappa)$ be a bounded linear operator. Then the Fréchet derivative of the functional

$$M_\alpha(f) = \frac12\|A f - u\|^2_{L^2(\Omega_\kappa)} + \frac{\alpha}{2}\,\|f + |\nabla f|\|^2_{L^2(\Omega)} = \frac12\|A f - u\|^2_{L^2(\Omega_\kappa)} + \frac{\alpha}{2}\,\|f\|^2_{H^1(\Omega)} \tag{21}$$

is given by

$$M_\alpha'(f)(b) = (A^*A f - A^*u, b) + \alpha\big[(f, b) + (f, |\nabla b|) + (|\nabla f|, b) + (|\nabla f|, |\nabla b|)\big], \quad \forall b \in H^1(\Omega), \tag{22}$$

with a convex growth factor $b$, i.e., $|\nabla b| < b$.

Proof Once again, we consider $M_\alpha(f+b) - M_\alpha(f)$ for all $b \in H^1(\Omega)$:

$$M_\alpha(f+b) - M_\alpha(f) = \frac12\int_{\Omega_k}\left(\int_\Omega (f+b)G\,dx\,dy - u(k_x,k_y)\right)^2 dk_x\,dk_y + \frac{\alpha}{2}\int_\Omega\big(f + b + |\nabla(f+b)|\big)^2\,dx\,dy - \frac12\int_{\Omega_k}\left(\int_\Omega fG\,dx\,dy - u(k_x,k_y)\right)^2 dk_x\,dk_y - \frac{\alpha}{2}\int_\Omega\big(f + |\nabla f|\big)^2\,dx\,dy = I_1 + I_2, \tag{23}$$

where $I_1$ is given in (16) and

$$I_2 := \frac{\alpha}{2}\int_\Omega\big(f + b + |\nabla f + \nabla b|\big)^2\,dx\,dy - \frac{\alpha}{2}\int_\Omega\big(f + |\nabla f|\big)^2\,dx\,dy. \tag{24}$$

The term $I_1$ is estimated in Lemma 2. Thus, it remains to consider only the term $I_2$.

$$I_2 = \frac{\alpha}{2}\int_\Omega\big(f + b + |\nabla f + \nabla b|\big)^2\,dx\,dy - \frac{\alpha}{2}\int_\Omega\big(f + |\nabla f|\big)^2\,dx\,dy \le \frac{\alpha}{2}\int_\Omega\Big[\big((f + |\nabla f|) + (b + |\nabla b|)\big)^2 - \big(f + |\nabla f|\big)^2\Big]\,dx\,dy = \frac{\alpha}{2}\int_\Omega\Big[2(f + |\nabla f|)(b + |\nabla b|) + (b + |\nabla b|)^2\Big]\,dx\,dy. \tag{25}$$

Taking the limit in $I_2$ in the definition of the Fréchet derivative, we get

$$0 = \lim_{\|b\|\to 0}\frac{I_2}{\|b\|^2} \le \lim_{\|b\|\to 0}\frac{\alpha\int_\Omega (f + |\nabla f|)(b + |\nabla b|)\,dx\,dy + \frac12\int_\Omega (b + |\nabla b|)^2\,dx\,dy}{\|b\|^2}. \tag{26}$$

The second term in the limit for $I_2$ in (26) can be written as

$$\lim_{\|b\|\to 0}\frac{\frac{\alpha}{2}\int_\Omega (b + |\nabla b|)^2\,dx\,dy}{\|b\|^2} \le \lim_{\|b\|\to 0} D(b)\,\frac{\alpha}{2}\,\frac{\|b\|_2^2}{\|b\|^2} \to 0, \quad D(b) = \text{const}, \tag{27}$$

which approaches zero when $b$ goes to zero. Thus, the factors in the first terms for $I_1$ in (19) and $I_2$ in (26) must also be zero, from which (22) follows.

3 The Finite Element Method for Minimization of the Tikhonov Functional

To formulate the finite element method for (13) we discretize the domains $\Omega \subset \mathbb R^2$, $\Omega_\kappa \subset \mathbb R^2$ by meshes $K_h$, $K_{h_\kappa}$, respectively, consisting of non-overlapping triangles $K$ such that

$$\Omega = \cup_{K \in K_h} K = K_1 \cup K_2 \cup \dots \cup K_s, \quad K_h = \{K_1, \dots, K_s\},$$
$$\Omega_\kappa = \cup_{K \in K_{h_\kappa}} K = K_{k_1} \cup K_{k_2} \cup \dots \cup K_{k_s}, \quad K_{h_\kappa} = \{K_{k_1}, \dots, K_{k_s}\},$$

with the standard mesh regularity assumption [7]. We note that the number of elements $s$ in both meshes is the same. We define the finite element space $V_h \subset V$ as

$$V_h = \big\{v \in L^2(\Omega) : v \in C(\Omega),\ v|_{\partial\Omega} = 0,\ v|_K \in P_1(K)\ \forall K \in K_h\big\}. \tag{28}$$

The finite element method for (13) reads: find $f_h \in V_h$ such that for all $v \in V_h$

$$M_\alpha'(f_h)(v) = (A^*A f_h - A^*u, v) + \alpha(\nabla f_h, \nabla v) = 0. \tag{29}$$

The function $f$ is approximated by $f_h \in V_h$ such that

$$f_h = \sum_{i=1}^N f_i \varphi_i, \tag{30}$$

where $\{\varphi_i\}_{i=1}^N$ are the standard continuous piecewise linear functions and $f_i$ denote the unknown discrete function values at the mesh points $x_i \in K_h$. Substituting (30) into (29) with $v = \varphi_j$ and taking the discrete function values $u_i$ at the mesh points $x_i \in K_{h_k}$, we get the discrete system of equations

$$\sum_{i,j=1}^N (A\varphi_i, A\varphi_j)\, f_i - \sum_{i,j=1}^N (u_i, A\varphi_j) + \alpha\sum_{i,j=1}^N (\nabla\varphi_i, \nabla\varphi_j)\, f_i = 0. \tag{31}$$

This system can be rewritten as

$$\sum_{i,j=1}^N (A\varphi_i, A\varphi_j)\, f_i + \alpha\sum_{i,j=1}^N (\nabla\varphi_i, \nabla\varphi_j)\, f_i = \sum_{i,j=1}^N (u_i, A\varphi_j), \tag{32}$$

which is equivalent to the following linear system of equations:

$$(C + \alpha K)\,\mathbf f = \mathbf b. \tag{33}$$

In system (33), the matrices $C$, $K$ are the finally assembled block matrices corresponding to the first two terms on the left-hand side of (32), $\mathbf f$ denotes the vector of nodal values of the finite element approximation $f_h$, and $\mathbf b$ is the finally assembled right-hand side of (32); see details in [3].
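A minimal numerical sketch of solving (33) is given below. The assembly here is a deliberate simplification assumed for illustration only: instead of P1 triangles, a uniform $n \times n$ grid is used, with $C = A^{H}A$ as the Gram matrix of the transformed basis and a standard finite-difference Laplacian standing in for the stiffness matrix $(\nabla\varphi_i, \nabla\varphi_j)$:

```python
import numpy as np

def solve_regularized(A, u, alpha, n):
    """Solve (C + alpha*K) f = b, cf. (33), with C = A^H A, b = A^H u,
    and K a 2D finite-difference Laplacian on an n x n grid
    (a stand-in for the assembled P1 stiffness matrix)."""
    D = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D second differences
    I = np.eye(n)
    K = np.kron(D, I) + np.kron(I, D)                     # 2D stiffness matrix
    C = A.conj().T @ A                                    # Gram matrix (A phi_i, A phi_j)
    b = A.conj().T @ u                                    # right-hand side (u_i, A phi_j)
    return np.linalg.solve(C + alpha * K, b)
```

Increasing $\alpha$ trades data fidelity for smoothness of the reconstructed nodal values, which is exactly the balance the next section tunes automatically.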

4 Balancing Principle

In this section, we briefly describe the balancing principle for finding the regularization parameter $\alpha$ in the functional (7), following [6]. For this purpose the functional (7) is rewritten as

$$M_\alpha(f) = \frac12\|A f - u\|^2_{L^2(\Omega_k)} + \alpha\,\frac12\|f\|^2_{H^1(\Omega)} = \varphi(f) + \alpha\psi(f). \tag{34}$$

For the functional (34), the value function $F(\alpha) : \mathbb C \to \mathbb C$ is defined according to [14] as

$$F(\alpha) = \inf_f M_\alpha(f). \tag{35}$$

If the derivative $F'(\alpha)$ exists at $\alpha > 0$, then from (34) and (35) it follows that

$$F(\alpha) = \inf_f M_\alpha(f) = \underbrace{\varphi(f)}_{\bar\varphi(\alpha)} + \alpha\,\underbrace{\psi(f)}_{\bar\psi(\alpha)}. \tag{36}$$


Since $F'(\alpha) = \psi(f) = \bar\psi(\alpha)$, from (36) we get

$$\bar\psi(\alpha) = F'(\alpha), \qquad \bar\varphi(\alpha) = F(\alpha) - \alpha F'(\alpha). \tag{37}$$

For the functional (34), the balancing principle (or Lepskii principle, see [11, 12]) finds $\alpha > 0$ such that

$$\bar\varphi(\alpha) = \gamma\,\alpha\,\bar\psi(\alpha), \tag{38}$$

where $\gamma$ is determined by statistical a priori knowledge from shape parameters in Gamma distributions. When $\gamma = 1$ the method is called the zero crossing method; see details in [8]. Let us show that the balancing rule (38) finds the optimal $\alpha > 0$ minimizing the balancing function

$$\Phi_\gamma(\alpha) = \frac{F^{1+\gamma}(\alpha)}{\alpha}. \tag{39}$$

From the conditions (37) it follows that

$$0 = \bar\varphi(\alpha) - \gamma\alpha\bar\psi(\alpha) = F(\alpha) - \alpha F'(\alpha) - \gamma\alpha F'(\alpha) = F(\alpha) - \alpha F'(\alpha)(1+\gamma),$$

which can be rewritten as

$$F(\alpha) = \alpha F'(\alpha)(1+\gamma). \tag{40}$$

Dividing both sides of (40) by $\alpha F(\alpha)$ we get

$$\frac{1}{\alpha} = (1+\gamma)\,\frac{F'(\alpha)}{F(\alpha)} = (1+\gamma)\,\frac{dF/d\alpha}{F(\alpha)}, \quad\text{or}\quad \frac{d\alpha}{\alpha} = (1+\gamma)\,\frac{dF}{F(\alpha)}.$$

Integrating both sides of the above equation with respect to $\alpha$, we obtain $\ln\alpha + C_1 = (1+\gamma)\ln F(\alpha) + C_2$. Choosing $C_1 = C_2 = \text{const}$, the above equation is rewritten as

$$\alpha = \exp\big((1+\gamma)\ln F(\alpha)\big) = F(\alpha)^{1+\gamma},$$

which in turn can be rewritten as the balancing function (39) to be minimized in the balancing principle. We can check that the minimum of $\Phi_\gamma(\alpha)$ is achieved at

$$0 = (\Phi_\gamma(\alpha))'_\alpha = \frac{(1+\gamma)F'(\alpha)F^\gamma(\alpha)\,\alpha - F^{1+\gamma}(\alpha)}{\alpha^2}.$$

From the above equation we get

$$(1+\gamma)F'(\alpha)F^\gamma(\alpha)\,\alpha = F^{1+\gamma}(\alpha) \;\Longrightarrow\; (1+\gamma)F'(\alpha)\,\alpha = F(\alpha).$$

This equation is the same as Eq. (40), which gives the balancing principle

$$\bar\varphi(\alpha) = \gamma\,\alpha\,\bar\psi(\alpha). \tag{41}$$

Thus, the balancing principle computes the optimal value of $\alpha$, at which $(\Phi_\gamma(\alpha))'_\alpha = 0$.

4.1 Fixed Point Algorithm for Finding the Optimal α

For the Tikhonov functional (34), the following fixed point algorithm for computing $\alpha$ is proposed.

• Step 1. Start with an initial approximation $\alpha_0$, for example $\alpha_0 = \delta^\mu$, $\mu \in (0, 2)$, and compute the sequence of $\alpha_k$ in the following steps.
• Step 2. Compute the value function $F(\alpha_k) = \inf_f M_{\alpha_k}(f)$ and obtain $f_{\alpha_k}$ via solving (33).
• Step 3. Update the regularization parameter $\alpha := \alpha_{k+1}$ as

$$\alpha_{k+1} = \frac{1}{\gamma}\,\frac{\varphi(f_{\alpha_k})}{\psi(f_{\alpha_k})}.$$

• Step 4. For a tolerance $0 < \theta < 1$ chosen by the user, stop computing the regularization parameters $\alpha_k$ once they are stabilized, i.e. $|\alpha_k - \alpha_{k-1}| \le \theta$. Otherwise, set $k := k+1$ and go to Step 2.
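The four steps above can be sketched in a few lines for a matrix discretization of the problem. This is an illustrative implementation under our own simplifying assumptions, not the authors' code: a generic matrix $A$, the discrete $H^1$ inner product taken as $I + K$ with $K$ a finite-difference stiffness matrix, and default values for $\gamma$, $\alpha_0$, $\theta$ chosen by us:

```python
import numpy as np

def fixed_point_alpha(A, K, u, gamma=1.0, alpha0=0.1, theta=1e-7, max_iter=50):
    """Balancing-principle fixed point iteration:
    alpha_{k+1} = phi(f_k) / (gamma * psi(f_k)), where
    phi(f) = 0.5*||A f - u||^2 and psi(f) = 0.5*<(I + K) f, f>
    is a discrete H^1 norm squared."""
    H = np.eye(K.shape[0]) + K                    # discrete H^1 Gram matrix
    C = A.conj().T @ A
    b = A.conj().T @ u
    alpha = alpha0
    for _ in range(max_iter):
        f = np.linalg.solve(C + alpha * H, b)     # Step 2: minimize M_alpha
        phi = 0.5 * np.linalg.norm(A @ f - u) ** 2
        psi = 0.5 * np.real(f.conj() @ (H @ f))
        alpha_new = phi / (gamma * psi)           # Step 3
        if abs(alpha_new - alpha) <= theta:       # Step 4: stabilized
            return alpha_new, f
        alpha = alpha_new
    return alpha, f
```

With `gamma=1.0` this reproduces the zero crossing variant mentioned above.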

4.2 Study of Convergence of the Fixed Point Algorithm

The local convergence of the fixed point algorithm is established under the following assumptions on the functional (34). Let the interval for finding the optimal $\alpha$ be $[\alpha_l, \alpha_r]$, such that:

• 1. $\bar\psi(\alpha_r) > 0$, which implies $\bar\psi(\alpha) > 0$ for all $\alpha \in [0, \alpha_r]$.
• 2. There exists $\alpha_b \in [\alpha_l, \alpha_r]$ such that $D^\pm\Phi_\gamma(\alpha) < 0$ for $\alpha \in [\alpha_l, \alpha_b]$ and $D^\pm\Phi_\gamma(\alpha) > 0$ for $\alpha \in [\alpha_b, \alpha_r]$, where

$$D^-F(\alpha) = \lim_{h\to 0^+}\frac{F(\alpha) - F(\alpha - h)}{h}, \qquad D^+F(\alpha) = \lim_{h\to 0^+}\frac{F(\alpha + h) - F(\alpha)}{h}.$$

The first assumption guarantees well-posedness of the algorithm, which holds for a broad class of ill-posed problems with different regularization terms such as $L^2$–$\ell^1$ and $L^2$–TV. The second assumption guarantees that there exists only one local minimizer $\alpha_b$ of $\Phi_\gamma$ on $[\alpha_l, \alpha_r]$. Let us define the residual

$$r(\alpha) = \bar\varphi(\alpha) - \gamma\,\alpha\,\bar\psi(\alpha). \tag{42}$$

The following lemma will be used in the convergence theorem.

Lemma ([6]) Under the above assumptions and with $\alpha_0 \in [\alpha_l, \alpha_r]$, the sequence $\{\alpha_k\}$ generated by the fixed point algorithm is such that:
• it is either finite or infinite and strictly monotone: increasing if $r(\alpha_0) > 0$ and decreasing if $r(\alpha_0) < 0$;
• if $r(\alpha_0) > 0$, then the sequence $\{\alpha_k\} \subset [\alpha_l, \alpha_b]$;
• if $r(\alpha_0) < 0$, then the sequence $\{\alpha_k\} \subset [\alpha_b, \alpha_r]$.

Theorem ([6]) Under the above assumptions and with $\alpha_0 \in [\alpha_l, \alpha_r]$, the sequence $\{\alpha_k\}$ generated by the fixed point algorithm is such that:
• the sequence $\{\Phi_\gamma(\alpha_k)\}$ generated by the function $\Phi_\gamma(\alpha) = F^{1+\gamma}(\alpha)/\alpha$ is monotonically decreasing;
• the sequence $\{\alpha_k\}$ converges to the local minimizer $\alpha_b$.

Sketch of the proof. Let us consider the case $r(\alpha_0) > 0$; then the sequence $\{\alpha_k\}$ is increasing and we have $\alpha_k < \alpha_{k+1}$. The function $F$ is concave, thus Lipschitz continuous. Hence $\Phi_\gamma(\alpha)$ is locally Lipschitz continuous and there exists $\Phi_\gamma'(\alpha)$ such that

$$\Phi_\gamma'(\alpha) = \frac{(1+\gamma)F^\gamma(\alpha)F'(\alpha)\,\alpha - F^{1+\gamma}(\alpha)}{\alpha^2} = \frac{F^\gamma(\alpha)}{\alpha^2}\big((1+\gamma)F'(\alpha)\,\alpha - F(\alpha)\big) = \frac{F^\gamma(\alpha)}{\alpha^2}\,(-r(\alpha)) < 0. \tag{43}$$

In (43) we have used the fact that

$$-r(\alpha) = (1+\gamma)F'(\alpha)\,\alpha - F(\alpha). \tag{44}$$

Let us check (44). Since

$$\bar\psi(\alpha) = F'(\alpha), \qquad \bar\varphi(\alpha) = F(\alpha) - \alpha F'(\alpha), \tag{45}$$

then using the balancing principle we get

$$r(\alpha) = \bar\varphi(\alpha) - \gamma\alpha\bar\psi(\alpha) = F(\alpha) - \alpha F'(\alpha) - \gamma\alpha F'(\alpha) = F(\alpha) - \alpha F'(\alpha)(1+\gamma),$$

and thus (44) holds. Next, the function $\Phi_\gamma'(\alpha)$ is locally integrable and

$$\Phi_\gamma(\alpha_{k+1}) = \Phi_\gamma(\alpha_k) + \int_{\alpha_k}^{\alpha_{k+1}} \Phi_\gamma'(\alpha)\,d\alpha. \tag{46}$$

Since $\Phi_\gamma'(\alpha) < 0$ by (43), it follows from (46) that $\Phi_\gamma(\alpha_{k+1}) < \Phi_\gamma(\alpha_k)$. Thus, the sequence $\{\Phi_\gamma(\alpha_k)\}$ is monotonically decreasing. By the Lemma there exists a limit $\alpha^* \in [\alpha_l, \alpha_r]$ of the sequence $\{\alpha_k\}$. If $\alpha_k < \alpha_{k+1}$, $\Phi_\gamma(\alpha_{k+1}) < \Phi_\gamma(\alpha_k)$, we have for the finite sequence $\{\alpha_k\}_{k=1}^{k_0}$

$$\lim_{k\to k_0} D^+\Phi_\gamma(\alpha_k) \le \lim_{k\to k_0} \frac{F^\gamma(\alpha_k)}{\alpha_k^2}\,(-r(\alpha_k)) \le \lim_{k\to k_0} D^-\Phi_\gamma(\alpha_k), \tag{47}$$

and then $D^\pm\Phi_\gamma(\alpha_{k_0}) = 0$ since $-r(\alpha_{k_0}) = 0$. By our assumption, this local minimizer is $\alpha_{k_0} = \alpha_b$. Now from the iterations of the fixed point algorithm we have

$$\frac{1}{\gamma}\,\frac{F(\alpha_k) - \alpha_k D^- F(\alpha_k)}{D^- F(\alpha_k)} \le \alpha_{k+1} = \frac{1}{\gamma}\,\frac{\bar\varphi(\alpha_k)}{\bar\psi(\alpha_k)} \le \frac{1}{\gamma}\,\frac{F(\alpha_k) - \alpha_k D^+ F(\alpha_k)}{D^+ F(\alpha_k)}. \tag{48}$$

Since $\lim_{k\to\infty} D^\pm F(\alpha_k) = D^- F(\alpha^*)$, the local minimizer $\alpha_b = \alpha^*$ is given by

$$\alpha^* = \frac{1}{\gamma}\,\frac{F(\alpha^*) - \alpha^* D^- F(\alpha^*)}{D^- F(\alpha^*)}. \tag{49}$$

5 Numerical Experiment

In this section, the performance of the algorithm of Sect. 4.1 on the reconstruction of phantoms from experimentally measured MR data is presented. The MR data acquisition is described in detail in the recent paper by the authors [3], where the performance


Fig. 1 Two-dimensional slice of the data $|u|$ used in Tests 1 and 2. Left: the data $|u|$ visualized on a rectangular mesh in MATLAB. Right: the data $|u|$ visualized on the finite element mesh $K_{h_k}$

of applying different interpolation techniques for obtaining the reconstructed images was discussed. For the MR data, a cylindrical plastic phantom of 10 mm diameter was used. This cylindrical phantom contained 8 small cylinders of hardened polymeric bone cement (Osteopal®), the diameters of which were all equal to 2 mm. The plastic cylinder was filled with a 0.4 mMol/L MnCl₂ water solution. Using this cylindrical phantom, a three-dimensional experimental dataset was acquired in $\Omega_\kappa$. Next, a 2D slice of this dataset was selected for the computation of the function $u$ in (12) in $\Omega_\kappa$, see Fig. 1. The 2D computational domain $\Omega$ is a set of discrete elements $n_x \times n_y$ defined by

Fig. 2 Reconstructions $|f_h|$ obtained without using a regularization parameter in the functional (12): via the inverse discrete Fourier transform (IDFT) (left) and via the finite element method (right)

$$\Omega = \{(x, y) \in (-37, 36) \times (-37, 36)\},$$

and the domain $\Omega_k$ is a set of discrete elements $n_{k_x} \times n_{k_y}$ defined by

$$\Omega_k = \{(k_x, k_y) \in (-0.5, 0.5) \times (-0.5, 0.5)\}.$$

We choose the mesh size $h = 1$ in $\Omega$ and $h = 1/73$ in $\Omega_k$. In both meshes the number of points in the $x$ and $y$ directions is the same, $n_x = n_y = n_{k_x} = n_{k_y} = n = 74$. Results of reconstructions without using the regularization term ($\alpha = 0$) are presented in Fig. 2. We observe that the reconstruction obtained via the inverse discrete Fourier transform (IDFT) (MATLAB®'s FFT routines), see Fig. 2 (left), is less sharp than the result obtained using the finite element method.

Fig. 3 Reconstructions | f h | obtained using the finite element method for minimization of functional (12) (left) and functional (21) (right)


Table 1 The optimal values of the computed regularization parameter $\alpha_b = \alpha^*$ obtained using the fixed point algorithm of Sect. 4.1, and the corresponding computed residuals $\|A f_h - u\|^2$ for different regularization functions $\psi(f_h)$

ψ(f_h):            ‖∇f_h‖_{L¹}    ‖∇f_h‖²_{L²}    ‖f_h + |∇f_h|‖_{L¹}    ‖f_h + |∇f_h|‖²_{L²}    No reg.
α*                 0.59           4.73            0.41                   3.52                    0
‖A f_h − u‖²       6.44           6.52            6.43                   6.48                    6.43

Results of the reconstruction using the finite element method for minimizing functional (12) (Test 1) and functional (21) (Test 2) are presented in Fig. 3 and Table 1. For choosing the regularization parameter $\alpha$, the fixed point algorithm of Sect. 4.1 was used on the interval $[\alpha_l, \alpha_r] = [0.01, 1]$ with tolerance $\theta = 10^{-7}$. The optimal values of the computed regularization parameter $\alpha_b = \alpha^*$ are presented in Table 1. Convergence of the fixed point algorithm was achieved after 9 and 11 iterations in Tests 1 and 2, respectively.

6 Conclusions

A finite element method for the image reconstruction problem in magnetic resonance imaging (MRI) was developed. The balancing principle for an a posteriori choice of the regularization parameter was also presented and analyzed. The fixed point iterative algorithm was formulated and tested on image reconstruction from experimental MR data. Numerical results compare reconstructions obtained via the usual inverse discrete Fourier transform (IDFT) (MATLAB®'s FFT routines) and via the finite element method with and without regularization terms in the functional to be minimized. The numerical results show the effectiveness of the finite element method for obtaining qualitative MR image reconstructions compared with standard techniques.

Acknowledgements The research of LB was supported by the Swedish Research Council grant VR 2018-03661 and by the French Institute in Sweden, FRÖ program. The research of KN was supported by the French Institute in Sweden, TOR program.

References

1. Basistov, Y.A., Goncharsky, A.V., Lekht, E.E., Cherepashchuk, A.M., Yagola, A.G.: Application of the regularization method for increasing of the radiotelescope resolution power. Astron. Zh. 56(2), 443–449 (1979) (in Russian)
2. Bakushinsky, A.B., Kokurin, M.Y., Smirnova, A.: Iterative Methods for Ill-Posed Problems. Walter de Gruyter (2011)
3. Beilina, L., Guillot, G., Niinimäki, K.: On finite element method for magnetic resonance imaging. Springer Proc. Math. Stat. 243, 119–132 (2018)
4. Brown, R.W., Cheng, Y.-C.N., Haacke, E.M., Thompson, M.R., Venkatesan, R.: Magnetic Resonance Imaging: Physical Principles and Sequence Design, 2nd edn. Wiley (1999)
5. Hsiao, G.C., Wendland, W.L.: A finite element method for some integral equations of the first kind. J. Math. Anal. Appl. 58, 449–481 (1977)
6. Ito, K., Jin, B.: Inverse Problems: Tikhonov Theory and Algorithms. Series on Applied Mathematics, vol. 22. World Scientific (2015)
7. Johnson, C.: Numerical Solution of Partial Differential Equations by the Finite Element Method. Dover Books on Mathematics (2009)
8. Johnston, P.R., Gulrajani, R.M.: A new method for regularization parameter determination in the inverse problem of electrocardiography. IEEE Trans. Biomed. Eng. 44(1), 19–39 (1997)
9. Kaipio, J., Somersalo, E.: Statistical and Computational Inverse Problems. Applied Mathematical Sciences, vol. 160. Springer, New York (2005)
10. Koshev, N., Beilina, L.: An adaptive finite element method for Fredholm integral equations of the first kind and its verification on experimental data. Cent. Eur. J. Math. 11(8), 1489–1509 (2013)
11. Lazarov, R.D., Lu, S., Pereverzev, S.V.: On the balancing principle for some problems of numerical analysis. Numer. Math. 106(4), 659–689 (2007)
12. Mathé, P.: The Lepskii principle revisited. Inverse Probl. 22(3), L11–L15 (2006)
13. Mueller, J.L., Siltanen, S.: Linear and Nonlinear Inverse Problems with Practical Applications. SIAM Computational Science & Engineering (2012)
14. Tikhonov, A.N., Arsenin, V.Y.: Solutions of Ill-Posed Problems. Wiley, New York (1977)
15. Tikhonov, A.N., Leonov, A.S., Yagola, A.G.: Nonlinear Ill-Posed Problems. Chapman & Hall (1998)
16. Tikhonov, A.N., Goncharsky, A.V., Stepanov, V.V., Yagola, A.G.: Numerical Methods for the Solution of Ill-Posed Problems. Kluwer, London (1995)
17. Tikhonov, A.N., Goncharskiy, A.V., Stepanov, V.V., Kochikov, I.V.: Ill-posed problems in image processing. DAN USSR 294(4), 832–837 (1987) (in Russian)