# Metrics
`qudit.tools` provides four classes of information-theoretic functionals: `Fidelity`, `Entropy`, `Info`, and `Distance`.

All inputs can be statevectors (1D arrays) or density matrices (2D arrays); methods automatically promote pure statevectors to density matrices.
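As a sketch of what that promotion means (plain NumPy, not the library's internal code): a 1D statevector `psi` becomes the rank-1 density matrix `|psi><psi|`.

```python
import numpy as np

# Promote a pure statevector to a density matrix: rho = |psi><psi|.
# qudit.tools performs this step automatically on 1D inputs.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |+> state
rho = np.outer(psi, psi.conj())
print(rho.real)  # [[0.5, 0.5], [0.5, 0.5]]
```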
## Fidelity

`Fidelity` measures overlap between quantum states and channels.
| Method | Description |
|---|---|
| `Fidelity.default(rho, sigma)` | Uhlmann fidelity |
| `Fidelity.channel(kraus, rho)` | Apply a Kraus channel |
| `Fidelity.entanglement(R, E, codes)` | Entanglement fidelity for encode → noise → recovery |
| `Fidelity.bare_qubit(R, E, state)` | Average fidelity of a single logical state through noise → recovery |
| `Fidelity.cafaro(kraus)` | Cafaro proxy |
| `Fidelity.negativity(rho, dA, dB)` | Negativity |
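The Uhlmann fidelity in the first row can be cross-checked with a plain-NumPy sketch. The helpers below (`uhlmann_fidelity`, `_sqrtm_psd`) are illustrative names, not part of `qudit.tools`:

```python
import numpy as np

def _sqrtm_psd(m):
    # Matrix square root of a positive semidefinite matrix via eigendecomposition.
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 0, None)  # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.conj().T

def uhlmann_fidelity(rho, sigma):
    # F(rho, sigma) = (Tr sqrt( sqrt(rho) sigma sqrt(rho) ))^2
    s = _sqrtm_psd(rho)
    return np.trace(_sqrtm_psd(s @ sigma @ s)).real ** 2

rho = np.array([[1, 0], [0, 0]], dtype=complex)            # |0><0|
sigma = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|
print(uhlmann_fidelity(rho, sigma))  # 0.5 = |<0|+>|^2
```

For pure states this reduces to the squared overlap, matching the example in the next section.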
### State fidelity
For pure statevectors, fidelity is the squared overlap

$$F(\psi, \phi) = |\langle \psi | \phi \rangle|^2$$
```python
from qudit import Basis
from qudit.tools import Fidelity

Ket = Basis(2)

psi = Ket("0")
phi = (Ket("0") + Ket("1")).norm()
print(Fidelity.default(psi, phi))  # 0.5
```

### Entanglement fidelity
Used to benchmark quantum error correction: how well does a code+recovery pipeline preserve the logical subspace under noise?
The standard (Schumacher) form is

$$F_e(\rho, \mathcal{R} \circ \mathcal{E}) = \sum_k |\operatorname{Tr}(\rho\, K_k)|^2,$$

where the $K_k$ are the Kraus operators of the composed channel $\mathcal{R} \circ \mathcal{E}$ (recovery after noise) and $\rho$ is the maximally mixed state on the code space.
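The formula above can be sketched in plain NumPy. A single-qubit bit-flip channel stands in for the composed map here; `entanglement_fidelity` is an illustrative helper and does not reproduce the encode → noise → recovery plumbing that `Fidelity.entanglement` handles:

```python
import numpy as np

def entanglement_fidelity(rho, kraus):
    # Schumacher entanglement fidelity: F_e = sum_k |Tr(rho K_k)|^2.
    return sum(abs(np.trace(rho @ K)) ** 2 for K in kraus)

p = 0.1
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
kraus = [np.sqrt(1 - p) * I, np.sqrt(p) * X]  # bit-flip channel
rho = I / 2                                   # maximally mixed input
print(entanglement_fidelity(rho, kraus))      # 0.9 = 1 - p
```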
```python
import numpy as np

from qudit.noise import Process, Recovery
from qudit.tools import Fidelity

code = np.array([
    [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0],
    [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0.0],
], dtype=np.complex64)
code /= np.linalg.norm(code, axis=1)[:, None]

ops = Process.GAD(2, 4, Y=0.01, p=0.001)
rec = Recovery.petz(ops, code)
fid = Fidelity.entanglement(rec, ops, code)
print(fid)  # close to 1.0 for small noise
```

## Entropy
`Entropy` computes various entropic quantities. All density-matrix methods accept statevectors and promote them to density matrices.
| Method | Formula | Notes |
|---|---|---|
| `Entropy.neumann(rho)` | $S(\rho) = -\operatorname{Tr}(\rho \log \rho)$ | Default entropy |
| `Entropy.shannon(probs)` | $H(p) = -\sum_i p_i \log p_i$ | Classical probabilities |
| `Entropy.tsallis(rho, q)` | $S_q(\rho) = \dfrac{1 - \operatorname{Tr}(\rho^q)}{q - 1}$ | |
| `Entropy.renyi(rho, alpha)` | $S_\alpha(\rho) = \dfrac{\log \operatorname{Tr}(\rho^\alpha)}{1 - \alpha}$ | |
| `Entropy.hartley(probs)` | $H_0(p) = \log\,\lvert \operatorname{supp}(p) \rvert$ | Cardinality of support |
| `Entropy.unified(rho, q, alpha)` | | Interpolates Tsallis/Rényi |
| `Entropy.relative(rho, sigma)` | $S(\rho \,\Vert\, \sigma) = \operatorname{Tr}\,\rho\,(\log \rho - \log \sigma)$ | Quantum KL divergence |
| `Entropy.conditional(rho, dA, dB)` | $S(A \mid B) = S(\rho_{AB}) - S(\rho_B)$ | Bipartite state required |
All methods accept an optional `base` keyword (default `2.0`) to change the logarithm base.
```python
import numpy as np

from qudit.tools import Entropy

rho = np.array([[0.5, 0], [0, 0.5]])
print(Entropy.neumann(rho))         # 1.0 (maximally mixed qubit)
print(Entropy.tsallis(rho, q=2))    # 0.5 (linear entropy)
print(Entropy.renyi(rho, alpha=2))  # 1.0
```

> **NOTE**
> `Entropy.default` is an alias for `Entropy.neumann`. Shannon entropy takes a probability vector (not a density matrix).
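For reference, the von Neumann, Tsallis, and Rényi rows of the table can be reproduced with plain NumPy. These helpers assume the standard textbook definitions and are illustrative, not part of the library:

```python
import numpy as np

def neumann(rho, base=2.0):
    # S(rho) = -Tr(rho log rho), from the eigenvalues of rho.
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]  # drop zero eigenvalues (0 log 0 = 0)
    return float(-(w * (np.log(w) / np.log(base))).sum())

def tsallis(rho, q):
    # S_q(rho) = (1 - Tr rho^q) / (q - 1); conventionally base-independent.
    w = np.linalg.eigvalsh(rho)
    return float((1 - (w ** q).sum()) / (q - 1))

def renyi(rho, alpha, base=2.0):
    # S_alpha(rho) = log(Tr rho^alpha) / (1 - alpha).
    w = np.linalg.eigvalsh(rho)
    return float(np.log((w ** alpha).sum()) / np.log(base) / (1 - alpha))

rho = np.diag([0.5, 0.5])
print(neumann(rho))         # 1.0
print(tsallis(rho, q=2))    # 0.5
print(renyi(rho, alpha=2))  # 1.0
```

The printed values match the library example above for the maximally mixed qubit.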
## Info

`Info` derives higher-level correlations from entropies of bipartite states.
| Method | Formula | Description |
|---|---|---|
| `Info.mutual(rho, dA, dB)` | $I(A{:}B) = S(\rho_A) + S(\rho_B) - S(\rho_{AB})$ | Quantum mutual information |
| `Info.coherent(rho_AB, dA, dB)` | $I_c = S(\rho_B) - S(\rho_{AB})$ | Coherent information |
| `Info.conditional(rho, dA, dB)` | see below | Two conventions |
`Info.conditional` has a `true_case` flag:

- `true_case=True` (default): measurement-induced conditional entropy of $A$ given a projective measurement on $B$
- `true_case=False`: algebraic conditional entropy $S(A \mid B) = S(\rho_{AB}) - S(\rho_B)$
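A plain-NumPy sketch of the algebraic (`true_case=False`) convention, with a hand-rolled partial trace; for an entangled state this quantity can be negative (`conditional_algebraic` is an illustrative helper, not the library API):

```python
import numpy as np

def _entropy(rho):
    # Von Neumann entropy in bits.
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def conditional_algebraic(rho, dA, dB):
    # S(A|B) = S(rho_AB) - S(rho_B), with rho_B = Tr_A(rho_AB).
    rho_B = np.trace(rho.reshape(dA, dB, dA, dB), axis1=0, axis2=2)
    return _entropy(rho) - _entropy(rho_B)

# Bell state: globally pure (S(AB) = 0) but S(B) = 1, so S(A|B) = -1,
# a signature of entanglement.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print(conditional_algebraic(rho, 2, 2))  # -1.0
```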
```python
import numpy as np

from qudit.tools import Info

# Bell state (maximally entangled)
psi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print(Info.mutual(rho, 2, 2))    # 2.0 (maximum for 2 qubits)
print(Info.coherent(rho, 2, 2))  # 1.0
```

## Distance
`Distance` measures distinguishability between quantum states.
| Method | Formula | Description |
|---|---|---|
| `Distance.trace(rho, sigma)` | $T(\rho, \sigma) = \tfrac{1}{2} \lVert \rho - \sigma \rVert_1$ | Operational distinguishability |
| `Distance.bures(rho, sigma)` | $D_B = \sqrt{2\,(1 - \sqrt{F(\rho, \sigma)})}$ | Metric on density matrices |
| `Distance.jensen_shannon(rho, sigma)` | $D_{JS} = S\!\left(\tfrac{\rho + \sigma}{2}\right) - \tfrac{1}{2}(S(\rho) + S(\sigma))$ | Symmetric, bounded in $[0, 1]$ |
| `Distance.relative_entropy(rho, sigma)` | $S(\rho \,\Vert\, \sigma)$ | Alias for `Entropy.relative` |
```python
import numpy as np

from qudit.tools import Distance

rho = np.array([[1, 0], [0, 0]], dtype=complex)        # |0><0|
sigma = np.array([[0.5, 0], [0, 0.5]], dtype=complex)  # maximally mixed
print(Distance.trace(rho, sigma))  # 0.5
print(Distance.bures(rho, sigma))  # ~0.765
```

> **TIP**
> All four classes accept both `np.ndarray` density matrices and 1D pure statevectors; 1D inputs are automatically promoted to rank-1 density matrices where needed.
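Assuming the standard definitions in the table, the two printed values from the example above can be cross-checked without the library (`trace_distance` and `bures_distance` are illustrative helpers):

```python
import numpy as np

def _sqrtm_psd(m):
    # Square root of a positive semidefinite Hermitian matrix.
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def trace_distance(rho, sigma):
    # T = (1/2) ||rho - sigma||_1; for Hermitian difference, the trace
    # norm is the sum of absolute eigenvalues.
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def bures_distance(rho, sigma):
    # D_B = sqrt(2 (1 - sqrt(F))) with F the Uhlmann fidelity.
    s = _sqrtm_psd(rho)
    F = np.trace(_sqrtm_psd(s @ sigma @ s)).real ** 2
    return float(np.sqrt(2 - 2 * np.sqrt(F)))

rho = np.array([[1, 0], [0, 0]], dtype=complex)        # |0><0|
sigma = np.array([[0.5, 0], [0, 0.5]], dtype=complex)  # maximally mixed
print(trace_distance(rho, sigma))  # 0.5
print(bures_distance(rho, sigma))  # ~0.765
```

Here $F = 0.5$, so $D_B = \sqrt{2 - \sqrt{2}} \approx 0.765$, matching the library output.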