Quantum Monte Carlo Integration#
This tutorial will provide you with an introduction to Quantum Monte Carlo (QMCI) Integration within Qrisp.
For this purpose, we will first give you a theoretical overview of what this technique is about and where it is used.
Then we will dive into the practical implementation within Qrisp. This also includes the usage of Iterative Quantum Amplitude Estimation (IQAE).
To finish off this tutorial, we investigate the full implementation of a simple example by integrating $f(x) = x^2$ over the interval $[0, 1]$ with respect to the uniform distribution.
The relevant literature can be found in the following papers: A general quantum algorithm for numerical integration and Option pricing using Quantum computers for QMCI, and Accelerated Quantum Amplitude Estimation without QFT for IQAE.
QMCI tackles the same problems as its classical counterpart: Numerical integration of high-dimensional functions w.r.t. probability distributions.
Mathematically speaking, we want to find an approximation for the following (general) integral:

$$\int f(x_1, \dotsc, x_n)\, \mathrm{d}\mu(x_1, \dotsc, x_n)$$
Numerical integration approximates an integral which cannot be solved analytically as a bunch of sums.
Integrals like these may appear in many different places, from chemistry, through many-body physics, to mathematical finance.
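As a point of reference, here is a minimal classical Monte Carlo sketch (plain NumPy, not part of Qrisp; it reuses the integrand $f(x) = x^2$ and the uniform distribution from the example at the end of this tutorial):

import numpy as np

# Classical Monte Carlo estimate of the integral of f(x) = x^2 over [0, 1]:
# sample x_i from the uniform distribution and average f(x_i)
rng = np.random.default_rng(42)
samples = rng.uniform(0.0, 1.0, size=100_000)
estimate = np.mean(samples**2)
print(estimate)   # close to the exact value 1/3

The quantum approach described below replaces the random sampling by a superposition over grid points and reads the result out with amplitude estimation.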
Theory & Implementation in Qrisp#
The implementation in Qrisp requires the function to be integrated to be implemented as a function acting on a QuantumFloat. We will see how this can be done later in the example.
There are multiple ways to implement Monte Carlo integration in a quantum fashion. In this tutorial, we use an approach based on quantum counting. The idea is to discretize not only the $x$-axis but also the $y$-axis, and to count how many grid points lie under the curve. We use QuantumFloats for this.
One QuantumFloat will hold the discretized values of the $x$-axis, and a second one the discretized values of the $y$-axis. Additionally, we use a QuantumBool, which for each state $\ket{x_i}\ket{y_j}$ indicates whether the point lies under the curve, i.e., whether $y_j < f(x_i)$.
Don't give up just yet, the mathematical description will bring you more clarity!
For simplicity, we consider the situation where we integrate a function $f(x)$ over the interval $[0, 1]$ with respect to the uniform distribution, i.e., we want to approximate

$$\int_0^1 f(x)\, \mathrm{d}x$$

This integral is approximated as a Riemann sum:

$$\int_0^1 f(x)\, \mathrm{d}x \approx \frac{1}{N} \sum_{i=0}^{N-1} f(i/N)$$

where $N$ is the number of discretization points on the $x$-axis.
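As a quick classical illustration of this approximation (assuming the integrand $f(x) = x^2$ from the example at the end of this tutorial), the Riemann sum approaches the exact value $1/3$ as the grid gets finer:

# Left Riemann sum for the integral of x^2 over [0, 1] on a grid of N points
for N in (8, 1000):
    print(N, sum((i / N)**2 for i in range(N)) / N)
# 8    -> 0.2734375
# 1000 -> approximately 0.3328, approaching 1/3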
As a first step, we prepare a uniform superposition state in the QuantumFloats representing the $x$- and $y$-axis (or any other distribution for the variable discretizing the $x$-axis):

$$\ket{0}\ket{0}\ket{\text{False}} \rightarrow \frac{1}{\sqrt{N \cdot M}} \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} \ket{x_i}\ket{y_j}\ket{\text{False}}$$

where $N$ and $M$ are the numbers of discretization points on the $x$- and $y$-axis, respectively.
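In Qrisp, applying h to a QuantumFloat prepares exactly such a uniform superposition over its grid points. A minimal sketch on the simulator, using a hypothetical 2-qubit QuantumFloat:

from qrisp import QuantumFloat, h

qf = QuantumFloat(2, -2)   # grid points 0, 0.25, 0.5, 0.75
h(qf)                      # Hadamard on every qubit -> uniform superposition
print(qf.get_measurement())
# Expected: approximately {0.0: 0.25, 0.25: 0.25, 0.5: 0.25, 0.75: 0.25}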
As a second step, we apply the oracle that evaluates the "points under the curve" condition:

$$\frac{1}{\sqrt{N \cdot M}} \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} \ket{x_i}\ket{y_j}\ket{y_j < f(x_i)}$$

where the QuantumBool is flipped to True exactly for those points $(x_i, y_j)$ that satisfy $y_j < f(x_i)$.
We now arrive at the central step of this algorithm, which is Quantum Amplitude Estimation. We use it to find the probability of measuring a good state, i.e., a state in which the QuantumBool is True.
In the last step, we calculate the ratio between the number of points under the curve and the total number of points $N \cdot M$. This ratio is exactly the probability found by amplitude estimation, and it approximates the Riemann sum above and hence the integral.
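To make this ratio concrete, here is the classical count for the grid sizes used in the example at the end of this tutorial ($N = 8$ points on the $x$-axis, $M = 64$ on the $y$-axis, $f(x) = x^2$):

# Fraction of grid points (x_i, y_j) with y_j < f(x_i) -- this is the
# probability that Quantum Amplitude Estimation will estimate
N, M = 8, 64
good = sum(1 for i in range(N) for j in range(M) if j / M < (i / N)**2)
print(good / (N * M))   # 0.2734375, identical to the Riemann sum above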
Iterative Quantum Amplitude Estimation#
In Qrisp we have the option of using a resource efficient amplitude estimation algorithm, namely Accelerated Quantum Amplitude Estimation (see Algorithm 1), which iteratively applies amplitude amplification to find an estimation for the probability of measuring a good state. The goal of the algorithm is as follows:
We start with a unitary operator $\mathcal{A}$, acting on the input state $\ket{0}$ and producing a superposition of orthogonal good and bad components,

$$\mathcal{A}\ket{0} = \sqrt{1-a}\,\ket{\Psi_\text{bad}} + \sqrt{a}\,\ket{\Psi_\text{good}},$$

where $a \in [0, 1]$ is the unknown probability of measuring the good state.
The algorithm in question allows us to establish an estimate $\hat{a}$ of the unknown $a$.
Mathematically speaking, this means: given an error $\epsilon > 0$ and a confidence level $\alpha \in (0, 1)$, the Accelerated Quantum Amplitude Estimation algorithm produces an estimate $\hat{a}$ such that

$$\mathbb{P}\{\,|\hat{a} - a| \leq \epsilon\,\} \geq 1 - \alpha.$$
A documentation explaining how to use the Qrisp implementation of this algorithm can be found in the IQAE reference.
Example implementation#
Next up, we will go step-by-step through an example implementation of QMCI, tailored to integrating the function $f(x) = x^2$ over the interval $[0, 1]$ w.r.t. the uniform distribution.
A general implementation for integration of multidimensional functions w.r.t. arbitrary probability distributions is provided by the QMCI method.
First, we define the function $f(x) = x^2$ that we want to integrate, and a function for preparing the uniform distribution.
Additionally, we define the variables representing the $x$-axis (qf_x) and the $y$-axis (qf_y). Thereby, the QuantumFloat representing the $y$-axis must be able to hold all values that function(qf_x) assumes.
In this example, we evaluate the function at the $N = 8$ grid points $0, 1/8, \dotsc, 7/8$, represented by a QuantumFloat(3,-3).
The resulting values that the function assumes are represented by a QuantumFloat(6,-6).
We also define a QuantumBool qbl, which will indicate the "points under the curve".
from qrisp import *

def function(qf):
    # The integrand f(x) = x^2, evaluated on a QuantumFloat
    return qf*qf

def distribution(qf):
    # Uniform distribution: a Hadamard on every qubit yields a uniform
    # superposition over all grid points
    h(qf)

qf_x = QuantumFloat(3,-3)   # x-axis: 8 grid points 0, 1/8, ..., 7/8
qf_y = QuantumFloat(6,-6)   # y-axis: 64 grid points 0, 1/64, ..., 63/64
qbl = QuantumBool()         # indicates the "points under the curve"
Now, we arrive at the heart of the algorithm, the definition of the state_function:
@auto_uncompute
def state_function(qf_x, qf_y, qbl):
    # Prepare the chosen distribution on the x-axis
    distribution(qf_x)
    # Uniform superposition over the y-axis grid points
    h(qf_y)

    # Flip the QuantumBool for all "points under the curve", i.e., y < f(x)
    with qf_y < function(qf_x):
        x(qbl)
It receives the @auto_uncompute decorator, ensuring that all intermediate variables are properly uncomputed.
We apply the chosen distribution to qf_x, which represents the $x$-axis, and the h gate to qf_y, which represents the $y$-axis.
Within a ConditionEnvironment, we then evaluate in superposition which states in qf_y are smaller than the chosen function evaluated on qf_x.
We store the result of the comparison in the QuantumBool qbl by applying an x gate to it whenever said condition is satisfied.
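If you want to convince yourself of what amplitude estimation is about to estimate, you can optionally inspect the good-state probability directly on the simulator first. This is merely a sanity check, not part of the algorithm; the _test variables are hypothetical helpers, kept separate so that qf_x, qf_y and qbl defined above remain untouched for the IQAE call below:

# Optional sanity check on the simulator (hypothetical helper variables)
qf_x_test = QuantumFloat(3, -3)
qf_y_test = QuantumFloat(6, -6)
qbl_test = QuantumBool()
state_function(qf_x_test, qf_y_test, qbl_test)
print(qbl_test.get_measurement())
# Expected: roughly {False: 0.727, True: 0.273} -- the True probability is
# exactly the quantity that IQAE estimates below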
With everything in place, we can now execute the Iterative QAE algorithm, with a chosen error tolerance eps
and a confidence level alpha
.
a = IQAE([qf_x,qf_y,qbl], state_function, eps=0.01, alpha=0.01)
Aaaand that’s it! The QMCI is complete!
Let us now have a look at the result and compare it to the classically computed Riemann sum:
print(a)
# Yields: 0.27442553839756095
N = 8
print(sum((i/N)**2 for i in range(N))/N)
# Yields: 0.2734375
Congratulations, in this tutorial you learned about the basic theory behind Quantum Monte Carlo Integration, as well as how to implement it using the high-level concepts that Qrisp offers. You witnessed the power of quantum computing that allows for the evaluation of functions at exponentially many points all at once, but also experienced the intricacies of making the quantum computer reveal the solution by using Quantum Amplitude Estimation. By doing so, you're diving a step further into the world of quantum algorithms.