
Gradient Descent (VQE)

Layered RY-CX ansatz minimising ZI. Full optimisation loop timed end-to-end.

qudit and PennyLane use PyTorch autograd — one backward pass per step, regardless of parameter count. The others use the parameter-shift rule — two shifted forward passes per parameter, i.e. 2L circuit evaluations per step, where L is the number of parameters.
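As a concrete sketch of the parameter-shift side, here is the two-qubit layered RY-CX ansatz minimising ⟨ZI⟩ in plain NumPy (not any of the benchmarked frameworks' APIs); the layer count, init, and learning rate are illustrative assumptions:

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CX with qubit 0 as control (qubit 0 is the first kron factor)
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
ZI = np.diag([1.0, 1.0, -1.0, -1.0])  # Z on qubit 0, identity on qubit 1

def energy(params, layers=2):
    """Layered RY-CX ansatz on 2 qubits; params has 2 * layers entries."""
    psi = np.zeros(4)
    psi[0] = 1.0  # start in |00>
    for l in range(layers):
        u = np.kron(ry(params[2 * l]), ry(params[2 * l + 1]))
        psi = CX @ (u @ psi)
    return psi @ ZI @ psi

def grad_parameter_shift(params):
    """2L circuit evaluations per step: shift each parameter by +/- pi/2."""
    g = np.zeros_like(params)
    for i in range(len(params)):
        plus, minus = params.copy(), params.copy()
        plus[i] += np.pi / 2
        minus[i] -= np.pi / 2
        g[i] = 0.5 * (energy(plus) - energy(minus))
    return g

# Vanilla gradient descent: drives <ZI> from +1 toward its minimum -1
params = 0.1 * np.ones(4)
for _ in range(200):
    params -= 0.4 * grad_parameter_shift(params)
```

The shift rule is exact (not a finite-difference approximation) for gates generated by a Pauli, which is why the frameworks can use it without a step-size hyperparameter.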


Notes:

  • qudit (AD) — torch.autograd through nn.Module. One backward pass per step regardless of parameter count.
  • pennylane (AD) — same gradient method, higher constant overhead from the QNode abstraction.
  • qiskit (PS) — StatevectorEstimator batch API with parameter-shift. Batches all shift evaluations per step.
  • cirq (PS), braket (PS), qutip (PS) — parameter-shift with sequential circuit rebuilds per evaluation. Cost is 2L× forward cost per step.
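The autograd side can be sketched in plain PyTorch (this mirrors the one-backward-pass approach, not the qudit library's actual code; layer count and learning rate are illustrative):

```python
import torch

CX = torch.tensor([[1., 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
ZI = torch.diag(torch.tensor([1., 1., -1., -1.]))  # Z on qubit 0

def ry(theta):
    # Build the rotation matrix from scalar tensors so autograd tracks theta
    c, s = torch.cos(theta / 2), torch.sin(theta / 2)
    return torch.stack([torch.stack([c, -s]), torch.stack([s, c])])

def energy(params, layers=2):
    psi = torch.zeros(4)
    psi[0] = 1.0  # start in |00>
    for l in range(layers):
        u = torch.kron(ry(params[2 * l]), ry(params[2 * l + 1]))
        psi = CX @ (u @ psi)
    return psi @ ZI @ psi

params = torch.full((4,), 0.1, requires_grad=True)
opt = torch.optim.SGD([params], lr=0.4)
for _ in range(200):
    opt.zero_grad()
    e = energy(params)
    e.backward()  # one backward pass per step, independent of parameter count
    opt.step()
```

Because the whole statevector simulation is differentiable, the gradient cost per step is a constant multiple of one forward pass, rather than scaling with 2L as in the parameter-shift loops.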