Quantum amplitude estimation with error mitigation for time-evolving probabilistic networks¶
- in this notebook, we consider the discrete time evolution of probabilistic network models and low-depth quantum amplitude estimation techniques
- see M.C. Braun, T. Decker, N. Hegemann, S.F. Kerstan, C. Maier, J. Ulmanis: Quantum amplitude estimation with error mitigation for time-evolving probabilistic networks
- functions for the network models are in module pygrnd.qc.probabilisticNetworks
- functions for gradient descent for low-depth QAE (with and without error model) are in module pygrnd.qc.lowDepthQAEgradientDescent
- the results of the simulations and optimizations may differ between executions of the code
In [1]:
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from pygrnd.qc.probabilisticNetworks import *
from pygrnd.qc.parallelQAE import constructGroverOperator, circuitStandardQAE, bit2prob, maxString
from pygrnd.qc.lowDepthQAEgradientDescent import *
from pygrnd.qc.helper import counts2probs
from qiskit.visualization import plot_histogram
from qiskit import transpile
from qiskit.circuit.library.standard_gates import ZGate
jos_palette = ['#4c32ff', '#b332ff', '#61FBF8', '#1E164F', '#c71ab1ff']
sns.set_palette(jos_palette)
matplotlib.rcParams.update({'font.size': 18})
Example of a probabilistic network model with 2 nodes¶
- each of the 2 nodes has probabilities for intrinsic failure and recovery in a time step
- a failed node can trigger the failure of another node in the next time step with the given probability
- the probabilities for triggering another node can be seen as the weights of edges in the dependency graph of the nodes
- we are interested in the probabilities of the configurations 00, 01, 10 and 11 after 3 time steps following the initialization
In [2]:
nodes=['n1','n2']
probFail={'n1':0.2,'n2':0.7}
probRecovery={'n1':0.3,'n2':0.8}
edges={('n1','n2'):0.2,('n2','n1'):0.8}
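To make the dynamics concrete, here is a minimal re-implementation sketch of a single time step of the described model. This is not the pygrnd code: the function name `stepConfiguration` is illustrative, and the ordering of intrinsic failure, triggered failure and recovery within a step is one plausible choice (the exact semantics follow the paper).

```python
import random

def stepConfiguration(state, nodes, probFail, probRecovery, edges):
    """One illustrative time step: a working node (0) can fail intrinsically
    or be triggered by a failed neighbor; a failed node (1) can recover."""
    newState = {}
    for n in nodes:
        if state[n] == 0:  # node is working
            failed = random.random() < probFail[n]
            # a failed source node can trigger a failure via a directed edge
            for (src, dst), p in edges.items():
                if dst == n and state[src] == 1 and random.random() < p:
                    failed = True
            newState[n] = 1 if failed else 0
        else:  # node has failed; it may recover
            newState[n] = 0 if random.random() < probRecovery[n] else 1
    return newState

nodes = ['n1', 'n2']
probFail = {'n1': 0.2, 'n2': 0.7}
probRecovery = {'n1': 0.3, 'n2': 0.8}
edges = {('n1', 'n2'): 0.2, ('n2', 'n1'): 0.8}
state = {'n1': 0, 'n2': 0}
for _ in range(3):
    state = stepConfiguration(state, nodes, probFail, probRecovery, edges)
print(state)
```

Averaging the final configuration over many such runs is exactly what a Monte Carlo evaluation does.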
Classical and Monte Carlo evaluation¶
- the evaluation function classicalEvaluation of pygrnd calculates the probabilities exactly
- the function calculates the probability for all $2^k$ possible configurations in each time step when we have $k$ nodes
- the function returns a list of the probabilities of all possible configurations
In [3]:
timesteps=3
resultClassical=classicalEvaluation(timesteps, nodes, probFail, probRecovery, edges)
print(resultClassical)
[0.1399616, 0.21922239999999996, 0.3080864, 0.33272959999999996]
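The exact evaluation can be understood as repeated multiplication of the probability vector by a one-step stochastic transition matrix over all configurations. The sketch below illustrates the mechanism with a generic 2x2 matrix, not the pygrnd implementation: because the columns of T sum to 1, the probabilities stay normalized after every step.

```python
import numpy as np

# Exact time evolution = repeated application of the one-step transition
# matrix. T[i, j] is the probability of moving from configuration j to i.
T = np.array([[0.7, 0.2],
              [0.3, 0.8]])
p = np.array([1.0, 0.0])  # start deterministically in configuration 0
for _ in range(3):
    p = T @ p
print(p)  # [0.475 0.525]; the entries still sum to 1
```

For k nodes the same idea uses a 2^k x 2^k matrix, which is why the exact method scales exponentially in the number of nodes.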
- the Monte Carlo evaluation function monteCarloEvaluation of pygrnd simulates the configuration in each time step
- the default number of rounds is 100000
- the function returns a list of the approximated probabilities of all possible configurations
In [4]:
timesteps=3
resultMonteCarlo=monteCarloEvaluation(timesteps, nodes, probFail, probRecovery, edges, rounds=100000)
print(resultMonteCarlo)
[0.14103, 0.21815, 0.30775, 0.33307]
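For intuition, a Monte Carlo estimate of any probability is just a relative frequency; each round of monteCarloEvaluation plays the network forward once and records the final configuration. A stripped-down sketch of the estimator itself (`estimateProbability` is an illustrative helper, not part of pygrnd):

```python
import random

def estimateProbability(p, rounds=100000, seed=0):
    # Estimate a probability p as the relative frequency of successes
    # in `rounds` independent Bernoulli samples.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(rounds) if rng.random() < p)
    return hits / rounds

est = estimateProbability(0.33, rounds=100000)
print(est)
```

The estimate fluctuates around the true value from run to run, which is the spread examined in the next cells.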
- we compare the exact evaluation with the results of 20 Monte Carlo simulations
- we run Monte Carlo simulations with 1000, 10000, 100000 and 1000000 rounds
- we visualize the deviations of the Monte Carlo results from the exact value depending on the number of rounds
In [5]:
xe=[] # store the number of Monte Carlo rounds
ye00=[] # results for configuration 00
ye01=[] # results for configuration 01
ye10=[] # results for configuration 10
ye11=[] # results for configuration 11
for re in [3, 4, 5, 6]:
    r=10**re
    for i in range(20):
        mc=monteCarloEvaluation(timesteps, nodes, probFail, probRecovery, edges, rounds=r)
        xe.append(re)
        ye00.append(mc[0])
        ye01.append(mc[1])
        ye10.append(mc[2])
        ye11.append(mc[3])
In [6]:
matplotlib.rcParams.update({'font.size': 18})
fig, ax = plt.subplots(2,2,figsize=(20,10))
ax[0,0].set_xscale("log")
ax[0,1].set_xscale("log")
ax[1,0].set_xscale("log")
ax[1,1].set_xscale("log")
ax[0,0].scatter([10**x for x in xe],ye00,color=jos_palette[3],label='$p_{00}(3)$')
ax[1,0].scatter([10**x for x in xe],ye10,color=jos_palette[3],label='$p_{01}(3)$')
ax[0,1].scatter([10**x for x in xe],ye01,color=jos_palette[3],label='$p_{10}(3)$')
ax[1,1].scatter([10**x for x in xe],ye11,color=jos_palette[3],label='$p_{11}(3)$')
ax[0,0].hlines(y=resultClassical[0], xmin=10**3, xmax=10**6, linewidth=1,colors=jos_palette[3],label='exact probability '+str(round(resultClassical[0],3)))
ax[1,0].hlines(y=resultClassical[2], xmin=10**3, xmax=10**6, linewidth=1,colors=jos_palette[3],label='exact probability '+str(round(resultClassical[2],3)))
ax[0,1].hlines(y=resultClassical[1], xmin=10**3, xmax=10**6, linewidth=1,colors=jos_palette[3],label='exact probability '+str(round(resultClassical[1],3)))
ax[1,1].hlines(y=resultClassical[3], xmin=10**3, xmax=10**6, linewidth=1,colors=jos_palette[3],label='exact probability '+str(round(resultClassical[3],3)))
ax[0,0].set(xlabel='number of Monte Carlo samples',ylabel='probability')
ax[0,1].set(xlabel='number of Monte Carlo samples',ylabel='probability')
ax[1,0].set(xlabel='number of Monte Carlo samples',ylabel='probability')
ax[1,1].set(xlabel='number of Monte Carlo samples',ylabel='probability')
ax[0,0].legend()
ax[0,1].legend()
ax[1,0].legend()
ax[1,1].legend()
Out[6]:
<matplotlib.legend.Legend at 0x7f09b2a875e0>
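The spread visible in the panels matches the usual Monte Carlo scaling: an estimate of a probability p from N samples has standard error sqrt(p(1-p)/N), so a hundredfold increase in samples shrinks the error by a factor of 10. A quick check with a value close to the exact probability of configuration 11:

```python
import math

p = 0.3327  # close to the exact probability of configuration 11 above
errors = {}
for N in [10**3, 10**4, 10**5, 10**6]:
    # standard error of a relative-frequency estimate from N samples
    errors[N] = math.sqrt(p * (1 - p) / N)
    print(f"N = {N:>7}: expected standard error ~ {errors[N]:.5f}")
```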
- we additionally compare the results for time steps 1 to 4 and all configurations
- we compare the exact value with the results of 20 Monte Carlo simulations with 10000 rounds each
In [7]:
#
# Calculate the exact values for the configurations 00, 01, 10 and 11 for all time steps from 1 to 4
#
xe2=[]
ye2_00=[]
ye2_01=[]
ye2_10=[]
ye2_11=[]
for t in range(1,5):
    vc=classicalEvaluation(t, nodes, probFail, probRecovery, edges)
    xe2.append(t)
    ye2_00.append(vc[0])
    ye2_01.append(vc[1])
    ye2_10.append(vc[2])
    ye2_11.append(vc[3])
In [8]:
#
# Calculate the results of 20 Monte Carlo runs for the configurations 00, 01, 10 and 11 for time steps 1 to 4.
#
xe=[]
ye00=[]
ye01=[]
ye10=[]
ye11=[]
for t in range(1,5):
    for i in range(20):
        mc=monteCarloEvaluation(t, nodes, probFail, probRecovery, edges, rounds=10000)
        xe.append(t)
        ye00.append(mc[0])
        ye01.append(mc[1])
        ye10.append(mc[2])
        ye11.append(mc[3])
In [9]:
matplotlib.rcParams.update({'font.size': 18})
fig, ax = plt.subplots(figsize=(20,10))
ax.scatter(xe,ye00,label='probability of 00 with 10000 Monte Carlo rounds after x time steps')
ax.scatter(xe,ye01,label='probability of 10 with 10000 Monte Carlo rounds after x time steps')
ax.scatter(xe,ye10,label='probability of 01 with 10000 Monte Carlo rounds after x time steps')
ax.scatter(xe,ye11,label='probability of 11 with 10000 Monte Carlo rounds after x time steps')
for t in range(4):
    ax.hlines(y=ye2_00[t], xmin=t+1-0.1, xmax=t+1+0.1, linewidth=1,colors=jos_palette[0])
    ax.hlines(y=ye2_01[t], xmin=t+1-0.1, xmax=t+1+0.1, linewidth=1,colors=jos_palette[1])
    ax.hlines(y=ye2_10[t], xmin=t+1-0.1, xmax=t+1+0.1, linewidth=1,colors=jos_palette[2])
    ax.hlines(y=ye2_11[t], xmin=t+1-0.1, xmax=t+1+0.1, linewidth=1,colors=jos_palette[3])
ax.set(xlabel='number of time steps',ylabel='probability')
ax.legend()
Out[9]:
<matplotlib.legend.Legend at 0x7f09ad220f10>
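Since all failure, recovery and trigger probabilities lie strictly between 0 and 1, the network defines an ergodic Markov chain, so the configuration probabilities settle towards a stationary distribution as the number of time steps grows; this is the flattening trend visible in the plot. The sketch below illustrates the effect with a generic 2x2 transition matrix, not the pygrnd model itself: the stationary distribution is the eigenvector of T with eigenvalue 1, and power iteration reaches it from any starting point.

```python
import numpy as np

T = np.array([[0.7, 0.2],
              [0.3, 0.8]])

# Stationary distribution: the eigenvector of T with eigenvalue 1,
# normalised so the entries sum to 1.
vals, vecs = np.linalg.eig(T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# Power iteration converges to the same distribution.
p = np.array([1.0, 0.0])
for _ in range(100):
    p = T @ p

print(pi, p)  # both are [0.4, 0.6] up to numerical precision
```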