Note
This page was generated from tutorials/operators/02_gradients_framework.ipynb.
Qiskit Gradient Framework¶
The gradient framework enables the evaluation of quantum gradients as well as functions thereof. Besides standard first order gradients of expectation values of the form \(\langle\psi\left(\theta\right)|\hat{O}\left(\omega\right)|\psi\left(\theta\right)\rangle\), the gradient framework also supports the evaluation of second order gradients (Hessians) and the Quantum Fisher Information (QFI) of quantum states \(|\psi\left(\theta\right)\rangle\).
Imports¶
[1]:
#General imports
import numpy as np
#Operator Imports
from qiskit.opflow import Z, X, I, StateFn, CircuitStateFn, SummedOp
from qiskit.opflow.gradients import Gradient, NaturalGradient, QFI, Hessian
#Circuit imports
from qiskit.circuit import QuantumCircuit, QuantumRegister, Parameter, ParameterVector, ParameterExpression
from qiskit.circuit.library import EfficientSU2
First Order Gradients¶
Given a parameterized quantum state \(|\psi\left(\theta\right)\rangle = V\left(\theta\right)|\psi\rangle\) with input state \(|\psi\rangle\), parametrized ansatz \(V\left(\theta\right)\), and observable \(\hat{O}\left(\omega\right)=\sum_{i}\omega_i\hat{O}_i\), we want to compute gradients of the expectation value \(\langle\psi\left(\theta\right)|\hat{O}\left(\omega\right)|\psi\left(\theta\right)\rangle\) with respect to the observable coefficients \(\omega\) and the state parameters \(\theta\), as well as gradients of the sampling probabilities of \(|\psi\left(\theta\right)\rangle\).
Gradients w.r.t. Measurement Operator Parameters¶
Gradient of an expectation value w.r.t. a coefficient of the measurement operator (observable) \(\hat{O}\left(\omega\right)\), i.e. \(\frac{\partial\langle\psi\left(\theta\right)|\hat{O}\left(\omega\right)|\psi\left(\theta\right)\rangle}{\partial\omega}\)
First of all, we define a quantum state \(|\psi\left(\theta\right)\rangle\) and a Hamiltonian \(H\) acting as observable. Then, the state and the Hamiltonian are wrapped into an object defining the expectation value \(\langle\psi\left(\theta\right)|H|\psi\left(\theta\right)\rangle\).
[2]:
# Instantiate the quantum state
a = Parameter('a')
b = Parameter('b')
q = QuantumRegister(1)
qc = QuantumCircuit(q)
qc.h(q)
qc.rz(a, q[0])
qc.rx(b, q[0])
# Instantiate the Hamiltonian observable
H = (2 * X) + Z
# Combine the Hamiltonian observable and the state
op = ~StateFn(H) @ CircuitStateFn(primitive=qc, coeff=1.)
# Print the operator corresponding to the expectation value
print(op)
ComposedOp([
OperatorMeasurement(2.0 * X
+ 1.0 * Z),
CircuitStateFn(
┌───┐┌───────┐┌───────┐
q0_0: ┤ H ├┤ Rz(a) ├┤ Rx(b) ├
└───┘└───────┘└───────┘
)
])
We construct a list of the parameters for which we aim to evaluate the gradient. Now, this list and the expectation value operator are used to generate the operator which represents the gradient.
[3]:
params = [a, b]
# Define the values to be assigned to the parameters
value_dict = {a: np.pi / 4, b: np.pi}
# Convert the operator and the gradient target params into the respective operator
grad = Gradient().convert(operator = op, params = params)
# Print the operator corresponding to the Gradient
print(grad)
ListOp([
SummedOp([
ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌─────────────────────────┐┌───────┐┌───┐
q0_0: ┤ H ├┤ Rz(a + 1.5707963267949) ├┤ Rx(b) ├┤ H ├
└───┘└─────────────────────────┘└───────┘└───┘
)
]),
-1.0 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌─────────────────────────┐┌───────┐┌───┐
q0_0: ┤ H ├┤ Rz(a - 1.5707963267949) ├┤ Rx(b) ├┤ H ├
└───┘└─────────────────────────┘└───────┘└───┘
)
]),
0.5 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌─────────────────────────┐┌───────┐
q0_0: ┤ H ├┤ Rz(a + 1.5707963267949) ├┤ Rx(b) ├
└───┘└─────────────────────────┘└───────┘
)
]),
-0.5 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌─────────────────────────┐┌───────┐
q0_0: ┤ H ├┤ Rz(a - 1.5707963267949) ├┤ Rx(b) ├
└───┘└─────────────────────────┘└───────┘
)
])
]),
SummedOp([
ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌───────┐┌─────────────────────────┐┌───┐
q0_0: ┤ H ├┤ Rz(a) ├┤ Rx(b + 1.5707963267949) ├┤ H ├
└───┘└───────┘└─────────────────────────┘└───┘
)
]),
-1.0 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌───────┐┌─────────────────────────┐┌───┐
q0_0: ┤ H ├┤ Rz(a) ├┤ Rx(b - 1.5707963267949) ├┤ H ├
└───┘└───────┘└─────────────────────────┘└───┘
)
]),
0.5 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌───────┐┌─────────────────────────┐
q0_0: ┤ H ├┤ Rz(a) ├┤ Rx(b + 1.5707963267949) ├
└───┘└───────┘└─────────────────────────┘
)
]),
-0.5 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌───────┐┌─────────────────────────┐
q0_0: ┤ H ├┤ Rz(a) ├┤ Rx(b - 1.5707963267949) ├
└───┘└───────┘└─────────────────────────┘
)
])
])
])
All that is left to do is to assign values to the parameters and to evaluate the gradient operators.
[4]:
# Assign the parameters and evaluate the gradient
grad_result = grad.assign_parameters(value_dict).eval()
print('Gradient', grad_result)
Gradient [(-1.414213562373094+0j), (-0.7071067811865474+0j)]
Gradients w.r.t. State Parameters¶
Gradient of an expectation value w.r.t. a state \(|\psi\left(\theta\right)\rangle\) parameter, i.e. \(\frac{\partial\langle\psi\left(\theta\right)|\hat{O}\left(\omega\right)|\psi\left(\theta\right)\rangle}{\partial\theta}\),
respectively of sampling probabilities w.r.t. a state \(|\psi\left(\theta\right)\rangle\) parameter, i.e. \(\frac{\partial p_i}{\partial\theta} = \frac{\partial\langle\psi\left(\theta\right)|i\rangle\langle i|\psi\left(\theta\right)\rangle}{\partial\theta}\).
A gradient w.r.t. a state parameter may be evaluated with different methods. Each method has advantages and disadvantages.
[5]:
# Define the Hamiltonian with fixed coefficients
H = 0.5 * X - 1 * Z
# Define the parameters w.r.t. which we want to compute the gradients
params = [a, b]
# Define the values to be assigned to the parameters
value_dict = { a: np.pi / 4, b: np.pi}
# Combine the Hamiltonian observable and the state into an expectation value operator
op = ~StateFn(H) @ CircuitStateFn(primitive=qc, coeff=1.)
print(op)
ComposedOp([
OperatorMeasurement(0.5 * X
- 1.0 * Z),
CircuitStateFn(
┌───┐┌───────┐┌───────┐
q0_0: ┤ H ├┤ Rz(a) ├┤ Rx(b) ├
└───┘└───────┘└───────┘
)
])
Parameter Shift Gradients¶
Given a Hermitian operator \(g\) with two unique eigenvalues \(\pm r\) which acts as generator for a parameterized quantum gate \(G\left(\theta\right) = e^{-i\theta g}\), quantum gradients can be computed by using eigenvalue-\(r\)-dependent shifts of the parameters. All standard, parameterized Qiskit gates can be shifted with \(\pi/2\), i.e.,
\(\frac{\partial\langle\psi\left(\theta\right)|\hat{O}\left(\omega\right)|\psi\left(\theta\right)\rangle}{\partial\theta} = \frac{1}{2}\left(\langle\psi\left(\theta+\frac{\pi}{2}\right)|\hat{O}\left(\omega\right)|\psi\left(\theta+\frac{\pi}{2}\right)\rangle - \langle\psi\left(\theta-\frac{\pi}{2}\right)|\hat{O}\left(\omega\right)|\psi\left(\theta-\frac{\pi}{2}\right)\rangle\right).\)
Probability gradients are computed equivalently.
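As a quick sanity check, the \(\pi/2\)-shift rule can be reproduced with plain statevector simulation. The snippet below is only an illustration and uses qiskit.quantum_info directly; it assumes the circuit qc, the observable H, and the parameters a and b defined above, and its output should agree with the parameter-shift gradient computed by the framework in the next cell.
# Sanity check of the pi/2 parameter-shift rule via exact statevector simulation.
# This is an illustration only; it is not part of the gradient framework.
from qiskit.quantum_info import Statevector

H_mat = H.to_matrix()  # matrix representation of 0.5 * X - 1 * Z

def expectation(a_val, b_val):
    # Exact expectation value <psi(a, b)| H |psi(a, b)>
    vec = Statevector.from_instruction(qc.bind_parameters({a: a_val, b: b_val})).data
    return np.real(vec.conj() @ H_mat @ vec)

a0, b0 = np.pi / 4, np.pi
shift = np.pi / 2
d_a = 0.5 * (expectation(a0 + shift, b0) - expectation(a0 - shift, b0))
d_b = 0.5 * (expectation(a0, b0 + shift) - expectation(a0, b0 - shift))
print('Parameter-shift check', [d_a, d_b])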
[6]:
# Convert the expectation value into an operator corresponding to the gradient w.r.t. the state parameters using
# the parameter shift method.
state_grad = Gradient(grad_method='param_shift').convert(operator=op, params=params)
# Print the operator corresponding to the gradient
print(state_grad)
# Assign the parameters and evaluate the gradient
state_grad_result = state_grad.assign_parameters(value_dict).eval()
print('State gradient computed with parameter shift', state_grad_result)
ListOp([
SummedOp([
0.25 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌─────────────────────────┐┌───────┐┌───┐
q0_0: ┤ H ├┤ Rz(a + 1.5707963267949) ├┤ Rx(b) ├┤ H ├
└───┘└─────────────────────────┘└───────┘└───┘
)
]),
-0.25 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌─────────────────────────┐┌───────┐┌───┐
q0_0: ┤ H ├┤ Rz(a - 1.5707963267949) ├┤ Rx(b) ├┤ H ├
└───┘└─────────────────────────┘└───────┘└───┘
)
]),
-0.5 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌─────────────────────────┐┌───────┐
q0_0: ┤ H ├┤ Rz(a + 1.5707963267949) ├┤ Rx(b) ├
└───┘└─────────────────────────┘└───────┘
)
]),
0.5 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌─────────────────────────┐┌───────┐
q0_0: ┤ H ├┤ Rz(a - 1.5707963267949) ├┤ Rx(b) ├
└───┘└─────────────────────────┘└───────┘
)
])
]),
SummedOp([
0.25 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌───────┐┌─────────────────────────┐┌───┐
q0_0: ┤ H ├┤ Rz(a) ├┤ Rx(b + 1.5707963267949) ├┤ H ├
└───┘└───────┘└─────────────────────────┘└───┘
)
]),
-0.25 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌───────┐┌─────────────────────────┐┌───┐
q0_0: ┤ H ├┤ Rz(a) ├┤ Rx(b - 1.5707963267949) ├┤ H ├
└───┘└───────┘└─────────────────────────┘└───┘
)
]),
-0.5 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌───────┐┌─────────────────────────┐
q0_0: ┤ H ├┤ Rz(a) ├┤ Rx(b + 1.5707963267949) ├
└───┘└───────┘└─────────────────────────┘
)
]),
0.5 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌───────┐┌─────────────────────────┐
q0_0: ┤ H ├┤ Rz(a) ├┤ Rx(b - 1.5707963267949) ├
└───┘└───────┘└─────────────────────────┘
)
])
])
])
State gradient computed with parameter shift [(-0.35355339059327356+0j), (0.7071067811865472+0j)]
Linear Combination of Unitaries Gradients¶
Unitaries can be written as \(U\left(\omega\right) = e^{iM\left(\omega\right)}\), where \(M\left(\omega\right)\) denotes a parameterized Hermitian matrix. Further, Hermitian matrices can be decomposed into weighted sums of Pauli terms, i.e., \(M\left(\omega\right) = \sum_p m_p\left(\omega\right)h_p\) with \(m_p\left(\omega\right)\in\mathbb{R}\) and \(h_p=\bigotimes\limits_{j=0}^{n-1}\sigma_{j, p}\) for \(\sigma_{j, p}\in\left\{I, X, Y, Z\right\}\) acting on the \(j^{\text{th}}\) qubit. Thus, the gradients of \(U_k\left(\omega_k\right)\) are given by \begin{equation*} \frac{\partial U_k\left(\omega_k\right)}{\partial\omega_k} = \sum\limits_p i \frac{\partial m_{k,p}\left(\omega_k\right)}{\partial\omega_k}U_k\left(\omega_k\right)h_{k,p}. \end{equation*}
Combining this observation with a circuit structure presented in Simulating physical phenomena by quantum networks allows us to compute the gradient with the evaluation of a single quantum circuit.
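The same converter also handles gradients of sampling probabilities. The following is a rough sketch, assuming that passing a bare CircuitStateFn without a measurement operator makes the Gradient differentiate the sampling probabilities of the state:
# Sketch: probability gradients with the linear combination method.
# Assumption: a CircuitStateFn without an attached measurement operator is
# differentiated w.r.t. its sampling probabilities.
prob_grad = Gradient(grad_method='lin_comb').convert(
    operator=CircuitStateFn(primitive=qc, coeff=1.), params=params)
prob_grad_result = prob_grad.assign_parameters(value_dict).eval()
print('Probability gradients computed with the linear combination method', prob_grad_result)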
[7]:
# Convert the expectation value into an operator corresponding to the gradient w.r.t. the state parameter using
# the linear combination of unitaries method.
state_grad = Gradient(grad_method='lin_comb').convert(operator=op, params=params)
# Print the operator corresponding to the gradient
print(state_grad)
# Assign the parameters and evaluate the gradient
state_grad_result = state_grad.assign_parameters(value_dict).eval()
print('State gradient computed with the linear combination method', state_grad_result)
ListOp([
SummedOp([
0.5 * ComposedOp([
OperatorMeasurement(ZZ) * 2.0,
CircuitStateFn(
┌───┐ ┌───────┐┌───────┐┌───┐
q0_0: ┤ H ├────────■─┤ Rz(a) ├┤ Rx(b) ├┤ H ├
├───┤┌─────┐ │ └─┬───┬─┘└───────┘└───┘
q81_0: ┤ H ├┤ Sdg ├─■───┤ H ├────────────────
└───┘└─────┘ └───┘
) * 0.7071067811865476
]),
-1.0 * ComposedOp([
OperatorMeasurement(ZZ) * 2.0,
CircuitStateFn(
┌───┐ ┌───────┐┌───────┐
q0_0: ┤ H ├────────■─┤ Rz(a) ├┤ Rx(b) ├
├───┤┌─────┐ │ └─┬───┬─┘└───────┘
q82_0: ┤ H ├┤ Sdg ├─■───┤ H ├───────────
└───┘└─────┘ └───┘
) * 0.7071067811865476
])
]),
SummedOp([
0.5 * ComposedOp([
OperatorMeasurement(ZZ) * 2.0,
CircuitStateFn(
┌───┐┌───────┐┌───┐┌───────┐┌───┐
q0_0: ┤ H ├┤ Rz(a) ├┤ X ├┤ Rx(b) ├┤ H ├
├───┤└┬─────┬┘└─┬─┘└─┬───┬─┘└───┘
q83_0: ┤ H ├─┤ Sdg ├───■────┤ H ├───────
└───┘ └─────┘ └───┘
) * 0.7071067811865476
]),
-1.0 * ComposedOp([
OperatorMeasurement(ZZ) * 2.0,
CircuitStateFn(
┌───┐┌───────┐┌───┐┌───────┐
q0_0: ┤ H ├┤ Rz(a) ├┤ X ├┤ Rx(b) ├
├───┤└┬─────┬┘└─┬─┘└─┬───┬─┘
q84_0: ┤ H ├─┤ Sdg ├───■────┤ H ├──
└───┘ └─────┘ └───┘
) * 0.7071067811865476
])
])
])
State gradient computed with the linear combination method [(-0.3535533905932737+0j), (0.7071067811865472+0j)]
Finite Difference Gradients¶
Unlike the other methods, finite difference gradients are numerical estimates rather than analytical values. This implementation employs a central difference approach with \(\epsilon \ll 1\):
\(\frac{\partial\langle\psi\left(\theta\right)|\hat{O}\left(\omega\right)|\psi\left(\theta\right)\rangle}{\partial\theta} \approx \frac{1}{2\epsilon}\left(\langle\psi\left(\theta+\epsilon\right)|\hat{O}\left(\omega\right)|\psi\left(\theta+\epsilon\right)\rangle - \langle\psi\left(\theta-\epsilon\right)|\hat{O}\left(\omega\right)|\psi\left(\theta-\epsilon\right)\rangle\right).\)
Probability gradients are computed equivalently.
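The shift defaults to \(\epsilon = 10^{-6}\), as visible in the shifted circuits printed below. Assuming the fin_diff method forwards an epsilon keyword argument (as in the opflow gradient API), a different shift could be requested like this:
# Sketch: requesting a different finite-difference shift.
# Assumption: the 'fin_diff' gradient accepts an `epsilon` keyword argument;
# if it does not in your version, this call needs to be adapted.
state_grad_eps = Gradient(grad_method='fin_diff', epsilon=1e-4).convert(operator=op, params=params)
print('State gradient with epsilon=1e-4', state_grad_eps.assign_parameters(value_dict).eval())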
[8]:
# Convert the expectation value into an operator corresponding to the gradient w.r.t. the state parameter using
# the finite difference method.
state_grad = Gradient(grad_method='fin_diff').convert(operator=op, params=params)
# Print the operator corresponding to the gradient
print(state_grad)
# Assign the parameters and evaluate the gradient
state_grad_result = state_grad.assign_parameters(value_dict).eval()
print('State gradient computed with finite difference', state_grad_result)
ListOp([
SummedOp([
250000.0 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌────────────────┐┌───────┐┌───┐
q0_0: ┤ H ├┤ Rz(a + 1.0e-6) ├┤ Rx(b) ├┤ H ├
└───┘└────────────────┘└───────┘└───┘
)
]),
-250000.0 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌────────────────┐┌───────┐┌───┐
q0_0: ┤ H ├┤ Rz(a - 1.0e-6) ├┤ Rx(b) ├┤ H ├
└───┘└────────────────┘└───────┘└───┘
)
]),
-500000.0 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌────────────────┐┌───────┐
q0_0: ┤ H ├┤ Rz(a + 1.0e-6) ├┤ Rx(b) ├
└───┘└────────────────┘└───────┘
)
]),
500000.0 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌────────────────┐┌───────┐
q0_0: ┤ H ├┤ Rz(a - 1.0e-6) ├┤ Rx(b) ├
└───┘└────────────────┘└───────┘
)
])
]),
SummedOp([
250000.0 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌───────┐┌────────────────┐┌───┐
q0_0: ┤ H ├┤ Rz(a) ├┤ Rx(b + 1.0e-6) ├┤ H ├
└───┘└───────┘└────────────────┘└───┘
)
]),
-250000.0 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌───────┐┌────────────────┐┌───┐
q0_0: ┤ H ├┤ Rz(a) ├┤ Rx(b - 1.0e-6) ├┤ H ├
└───┘└───────┘└────────────────┘└───┘
)
]),
-500000.0 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌───────┐┌────────────────┐
q0_0: ┤ H ├┤ Rz(a) ├┤ Rx(b + 1.0e-6) ├
└───┘└───────┘└────────────────┘
)
]),
500000.0 * ComposedOp([
OperatorMeasurement(Z),
CircuitStateFn(
┌───┐┌───────┐┌────────────────┐
q0_0: ┤ H ├┤ Rz(a) ├┤ Rx(b - 1.0e-6) ├
└───┘└───────┘└────────────────┘
)
])
])
])
State gradient computed with finite difference [(-0.35355339057345814+0j), (0.707106781149+0j)]
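For a compact side-by-side comparison, the three gradient methods can also be run in a single loop; up to the finite-difference error they should return the same values.
# Evaluate the same gradient with all three methods for comparison.
for method in ['param_shift', 'lin_comb', 'fin_diff']:
    grad_op = Gradient(grad_method=method).convert(operator=op, params=params)
    print(method, grad_op.assign_parameters(value_dict).eval())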
Natural Gradient¶
A special type of first order gradient is the natural gradient, which has proven itself useful in classical machine learning and is already being studied in the quantum context. This quantity represents a gradient that is 'rescaled' with the inverse Quantum Fisher Information matrix (QFI), i.e.
\(QFI^{-1}\frac{\partial\langle\psi\left(\theta\right)|\hat{O}\left(\omega\right)|\psi\left(\theta\right)\rangle}{\partial\theta}.\)
Instead of inverting the QFI, one can also use a least-squares solver, with or without regularization, to solve
\(QFI\,x = \frac{\partial\langle\psi\left(\theta\right)|\hat{O}\left(\omega\right)|\psi\left(\theta\right)\rangle}{\partial\theta}.\)
The implementation supports ridge and lasso regularization with an automatic search for a suitable regularization parameter using an L-curve corner search, as well as two types of perturbation of the diagonal elements of the QFI.
The natural gradient can be used instead of the standard gradient with any gradient-based optimizer and/or ODE solver.
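As a sketch of the other options listed above, the ridge regularization can, e.g., be replaced by a perturbation of the QFI diagonal:
# Sketch: natural gradient with a diagonal perturbation instead of ridge regularization,
# using one of the regularization choices named in the comment below.
nat_grad_perturb = NaturalGradient(grad_method='lin_comb', qfi_method='lin_comb_full',
                                   regularization='perturb_diag').convert(operator=op, params=params)
print('Natural gradient with perturbed QFI diagonal',
      nat_grad_perturb.assign_parameters(value_dict).eval())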
[9]:
# Besides the method to compute the circuit gradients resp. QFI, a regularization method can be chosen:
# `ridge` or `lasso` with automatic parameter search or `perturb_diag_elements` or `perturb_diag`
# which perturb the diagonal elements of the QFI.
nat_grad = NaturalGradient(grad_method='lin_comb', qfi_method='lin_comb_full', regularization='ridge').convert(
operator=op, params=params)
# Assign the parameters and evaluate the gradient
nat_grad_result = nat_grad.assign_parameters(value_dict).eval()
print('Natural gradient computed with linear combination of unitaries', nat_grad_result)
Natural gradient computed with linear combination of unitaries [-2.62902827 1.31451413]
Hessians (Second Order Gradients)¶
Four types of second order gradients are supported by the gradient framework:
- Gradient of an expectation value w.r.t. a coefficient of the measurement operator (observable) \(\hat{O}\left(\omega\right)\), i.e. \(\frac{\partial^2\langle\psi\left(\theta\right)|\hat{O}\left(\omega\right)|\psi\left(\theta\right)\rangle}{\partial\omega^2}\)
- Gradient of an expectation value w.r.t. a state \(|\psi\left(\theta\right)\rangle\) parameter, i.e. \(\frac{\partial^2\langle\psi\left(\theta\right)|\hat{O}\left(\omega\right)|\psi\left(\theta\right)\rangle}{\partial\theta^2}\)
- Gradient of sampling probabilities w.r.t. a state \(|\psi\left(\theta\right)\rangle\) parameter, i.e. \(\frac{\partial^2 p_i}{\partial\theta^2} = \frac{\partial^2\langle\psi\left(\theta\right)|i\rangle\langle i|\psi\left(\theta\right)\rangle}{\partial\theta^2}\)
- Gradient of an expectation value w.r.t. a state \(|\psi\left(\theta\right)\rangle\) parameter and a coefficient of the measurement operator (observable) \(\hat{O}\left(\omega\right)\), i.e. \(\frac{\partial^2\langle\psi\left(\theta\right)|\hat{O}\left(\omega\right)|\psi\left(\theta\right)\rangle}{\partial\theta\partial\omega}\)
In the following, examples are given for the first two Hessian types. The remaining Hessians are evaluated analogously.
Hessians w.r.t. Measurement Operator Parameters¶
Again, we define a quantum state \(|\psi\left(\theta\right)\rangle\) and a Hamiltonian \(H\) acting as observable. Then, the state and the Hamiltonian are wrapped into an object defining the expectation value \(\langle\psi\left(\theta\right)|H|\psi\left(\theta\right)\rangle\).
[10]:
# Instantiate the Hamiltonian observable
H = X
# Instantiate the quantum state with two parameters
a = Parameter('a')
b = Parameter('b')
q = QuantumRegister(1)
qc = QuantumCircuit(q)
qc.h(q)
qc.rz(a, q[0])
qc.rx(b, q[0])
# Combine the Hamiltonian observable and the state
op = ~StateFn(H) @ CircuitStateFn(primitive=qc, coeff=1.)
Next, we can choose the parameters for which we want to compute second order gradients:
- Given a tuple, the Hessian will evaluate the second order gradient for that pair of parameters (see the sketch after this list).
- Given a list, the Hessian will evaluate the second order gradient for all possible combinations of tuples of these parameters.
After binding parameter values to the parameters, the Hessian can be evaluated.
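A sketch of the tuple form, assuming it selects exactly one cross term as described above:
# Sketch: a single second-order cross term d^2<H> / (da db).
# Assumption: passing a tuple restricts the Hessian to this parameter pair.
cross_hessian = Hessian().convert(operator=op, params=(a, b))
print('Cross term', cross_hessian.assign_parameters({a: np.pi / 4, b: np.pi / 4}).eval())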
[11]:
# Convert the operator and the hessian target coefficients into the respective operator
hessian = Hessian().convert(operator = op, params = [a, b])
# Define the values to be assigned to the parameters
value_dict = {a: np.pi / 4, b: np.pi/4}
# Assign the parameters and evaluate the Hessian w.r.t. the Hamiltonian coefficients
hessian_result = hessian.assign_parameters(value_dict).eval()
print('Hessian \n', np.real(np.array(hessian_result)))
Hessian
[[-7.07106781e-01 0.00000000e+00]
[ 0.00000000e+00 -5.55111512e-17]]
Hessians w.r.t. State Parameters¶
[12]:
# Define parameters
params = [a, b]
# Get the operator object representing the Hessian
state_hess = Hessian(hess_method='param_shift').convert(operator=op, params=params)
# Assign the parameters and evaluate the Hessian
hessian_result = state_hess.assign_parameters(value_dict).eval()
print('Hessian computed using the parameter shift method\n', (np.array(hessian_result)))
# Get the operator object representing the Hessian
state_hess = Hessian(hess_method='lin_comb').convert(operator=op, params=params)
# Assign the parameters and evaluate the Hessian
hessian_result = state_hess.assign_parameters(value_dict).eval()
print('Hessian computed using the linear combination of unitaries method\n', (np.array(hessian_result)))
# Get the operator object representing the Hessian using finite difference
state_hess = Hessian(hess_method='fin_diff').convert(operator=op, params=params)
# Assign the parameters and evaluate the Hessian
hessian_result = state_hess.assign_parameters(value_dict).eval()
print('Hessian computed with finite difference\n', (np.array(hessian_result)))
Hessian computed using the parameter shift method
[[-7.07106781e-01+0.j 0.00000000e+00+0.j]
[ 0.00000000e+00+0.j -5.55111512e-17+0.j]]
Hessian computed using the linear combination of unitaries method
[[-7.07106781e-01+0.j -1.20000000e-17+0.j]
[-1.20000000e-17+0.j 5.60000000e-17+0.j]]
Hessian computed with finite difference
[[-7.07122803e-01+0.j -3.05175781e-05+0.j]
[-3.05175781e-05+0.j -6.10351562e-05+0.j]]
Quantum Fisher Information (QFI)¶
The Quantum Fisher Information is a metric tensor which is representative of the representation capacity of a parameterized quantum state \(|\psi\left(\theta\right)\rangle = V\left(\theta\right)|\psi\rangle\) with input state \(|\psi\rangle\) and parametrized ansatz \(V\left(\theta\right)\).
The entries of the QFI for a pure state read
\(QFI_{kl} = 4\,\text{Re}\left[\langle\partial_k\psi|\partial_l\psi\rangle - \langle\partial_k\psi|\psi\rangle\langle\psi|\partial_l\psi\rangle\right].\)
Circuit QFIs¶
The evaluation of the QFI corresponding to a quantum state that is generated by a parameterized quantum circuit can be conducted in different ways.
Linear Combination Full QFI¶
To compute the full QFI, we use a working qubit as well as intercepting controlled gates. See e.g. Variational ansatz-based quantum simulation of imaginary time evolution.
[13]:
# Wrap the quantum circuit into a CircuitStateFn
state = CircuitStateFn(primitive=qc, coeff=1.)
# Convert the state and the parameters into the operator object that represents the QFI
qfi = QFI(qfi_method='lin_comb_full').convert(operator=state, params=params)
# Define the values for which the QFI is to be computed
values_dict = {a: np.pi / 4, b: 0.1}
# Assign the parameters and evaluate the QFI
qfi_result = qfi.assign_parameters(values_dict).eval()
print('full QFI \n', np.real(np.array(qfi_result)))
full QFI
[[ 1.0000000e+00 -1.8679899e-16]
[-1.8679899e-16 5.0000000e-01]]
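As an optional cross-check of the pure-state QFI formula given above, the same matrix can be approximated directly from central differences of the statevector; qiskit.quantum_info is used here only for illustration and the result should be close to the full QFI printed above.
# Cross-check: QFI_kl = 4 Re[<d_k psi|d_l psi> - <d_k psi|psi><psi|d_l psi>]
# approximated with central differences of the exact statevector (illustration only).
from qiskit.quantum_info import Statevector

def psi_vec(a_val, b_val):
    return Statevector.from_instruction(qc.bind_parameters({a: a_val, b: b_val})).data

eps = 1e-5
a0, b0 = values_dict[a], values_dict[b]
psi0 = psi_vec(a0, b0)
dpsi = [(psi_vec(a0 + eps, b0) - psi_vec(a0 - eps, b0)) / (2 * eps),
        (psi_vec(a0, b0 + eps) - psi_vec(a0, b0 - eps)) / (2 * eps)]
qfi_fd = np.zeros((2, 2))
for k in range(2):
    for l in range(2):
        qfi_fd[k, l] = 4 * np.real(np.vdot(dpsi[k], dpsi[l])
                                   - np.vdot(dpsi[k], psi0) * np.vdot(psi0, dpsi[l]))
print('QFI via finite differences\n', np.round(qfi_fd, 6))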
Block-diagonal and Diagonal Approximation¶
A block-diagonal or diagonal approximation of the QFI can be computed without additional working qubits. This implementation requires unrolling into Pauli rotations and unparameterized gates.
[14]:
# Convert the state and the parameters into the operator object that represents the QFI
# and set the approximation to 'block_diagonal'
qfi = QFI('overlap_block_diag').convert(operator=state, params=params)
# Assign the parameters and evaluate the QFI
qfi_result = qfi.assign_parameters(values_dict).eval()
print('Block-diagonal QFI \n', np.real(np.array(qfi_result)))
# Convert the state and the parameters into the operator object that represents the QFI
# and set the approximation to 'diagonal'
qfi = QFI('overlap_diag').convert(operator=state, params=params)
# Assign the parameters and evaluate the QFI
qfi_result = qfi.assign_parameters(values_dict).eval()
print('Diagonal QFI \n', np.real(np.array(qfi_result)))
Block-diagonal QFI
[[1. 0. ]
[0. 0.5]]
Diagonal QFI
[[1. 0. ]
[0. 0.5]]
Application Example: VQE with gradient-based optimization¶
Additional Imports¶
[15]:
# Execution Imports
from qiskit import Aer
from qiskit.utils import QuantumInstance
# Algorithm Imports
from qiskit.algorithms import VQE
from qiskit.algorithms.optimizers import CG
The gradient framework can also be used for a gradient-based VQE. First, the Hamiltonian and wavefunction ansatz are initialized.
[16]:
from qiskit.opflow import I, X, Z
from qiskit.circuit import QuantumCircuit, ParameterVector
from scipy.optimize import minimize
# Instantiate the system Hamiltonian
h2_hamiltonian = -1.05 * (I ^ I) + 0.39 * (I ^ Z) - 0.39 * (Z ^ I) - 0.01 * (Z ^ Z) + 0.18 * (X ^ X)
# This is the target energy
h2_energy = -1.85727503
# Define the Ansatz
wavefunction = QuantumCircuit(2)
params = ParameterVector('theta', length=8)
it = iter(params)
wavefunction.ry(next(it), 0)
wavefunction.ry(next(it), 1)
wavefunction.rz(next(it), 0)
wavefunction.rz(next(it), 1)
wavefunction.cx(0, 1)
wavefunction.ry(next(it), 0)
wavefunction.ry(next(it), 1)
wavefunction.rz(next(it), 0)
wavefunction.rz(next(it), 1)
# Define the expectation value corresponding to the energy
op = ~StateFn(h2_hamiltonian) @ StateFn(wavefunction)
Now, we can choose whether the VQE should use a Gradient or NaturalGradient, define a QuantumInstance to execute the quantum circuits, and run the algorithm.
[17]:
grad = Gradient(grad_method='lin_comb')
qi_sv = QuantumInstance(Aer.get_backend('aer_simulator_statevector'),
shots=1,
seed_simulator=2,
seed_transpiler=2)
#Conjugate Gradient algorithm
optimizer = CG(maxiter=50)
# Gradient callable
vqe = VQE(wavefunction, optimizer=optimizer, gradient=grad, quantum_instance=qi_sv)
result = vqe.compute_minimum_eigenvalue(h2_hamiltonian)
print('Result:', result.optimal_value, 'Reference:', h2_energy)
Result: -1.8404998430549435 Reference: -1.85727503
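A natural-gradient variant is obtained by simply swapping the gradient object. This is a sketch reusing the NaturalGradient settings shown earlier in this tutorial; it is expected to behave similarly to the run above.
# Sketch: the same VQE run driven by the natural gradient instead.
nat_grad_vqe = NaturalGradient(grad_method='lin_comb', qfi_method='lin_comb_full',
                               regularization='ridge')
vqe_natural = VQE(wavefunction, optimizer=CG(maxiter=50), gradient=nat_grad_vqe,
                  quantum_instance=qi_sv)
result_natural = vqe_natural.compute_minimum_eigenvalue(h2_hamiltonian)
print('Result (natural gradient):', result_natural.optimal_value, 'Reference:', h2_energy)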
[18]:
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
Version Information
| Qiskit Software | Version |
|---|---|
| qiskit-terra | 0.18.2 |
| qiskit-aer | 0.8.2 |
| qiskit-ignis | 0.6.0 |
| qiskit-ibmq-provider | 0.16.0 |
| qiskit-aqua | 0.9.5 |
| qiskit | 0.29.1 |
| qiskit-nature | 0.2.2 |
| qiskit-finance | 0.3.0 |
| qiskit-optimization | 0.2.3 |
| qiskit-machine-learning | 0.2.1 |

| System information | |
|---|---|
| Python | 3.7.12 (default, Nov 22 2021, 14:57:10) [GCC 11.1.0] |
| OS | Linux |
| CPUs | 32 |
| Memory (Gb) | 125.71650314331055 |

Tue Jan 04 11:17:41 2022 EST
This code is a part of Qiskit
© Copyright IBM 2017, 2022.
This code is licensed under the Apache License, Version 2.0. You may
obtain a copy of this license in the LICENSE.txt file in the root directory
of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
Any modifications or derivative works of this code must retain this
copyright notice, and modified files need to carry a notice indicating
that they have been altered from the originals.