IBM Quantum Documentation
Important

IBM Quantum Platform is moving and this version will be sunset on July 1. To get started on the new platform, read the migration guide.

Manage Qiskit Serverless compute and data resources

Note

This documentation is relevant to IBM Quantum® Platform Classic. If you need the newer version, go to the new IBM Quantum Platform documentation.

Package versions

The code on this page was developed using the following requirements. We recommend using these versions or newer.

qiskit[all]~=1.3.1
qiskit-ibm-runtime~=0.34.0
qiskit-aer~=0.15.1
qiskit-serverless~=0.18.0
qiskit-ibm-catalog~=0.2
qiskit-addon-sqd~=0.8.1
qiskit-addon-utils~=0.1.0
qiskit-addon-mpf~=0.2.0
scipy~=1.14.1
qiskit-addon-aqc-tensor~=0.1.2
qiskit-addon-obp~=0.1.0
pyscf~=2.7.0

With Qiskit Serverless, you can manage compute and data across your Qiskit pattern, including CPUs, QPUs, and other compute accelerators.


Set detailed statuses

Serverless workloads have several stages across a workflow. By default, the following statuses are viewable with job.status():

  • QUEUED: the workload is queued for classical resources
  • INITIALIZING: the workload is being set up
  • RUNNING: the workload is currently running on classical resources
  • DONE: the workload completed successfully

You can also set custom statuses that further describe the specific workflow stage, as follows.

from qiskit_serverless import update_status, Job
 
# If your function has a mapping stage (common in application functions), set the status to "RUNNING: MAPPING":
update_status(Job.MAPPING)

# While handling transpilation, error suppression, and so forth, set the status to "RUNNING: OPTIMIZING_FOR_HARDWARE":
update_status(Job.OPTIMIZING_HARDWARE)

# After you submit jobs to Qiskit Runtime, the underlying quantum job is queued. Set the status to "RUNNING: WAITING_FOR_QPU":
update_status(Job.WAITING_QPU)

# When the Qiskit Runtime job starts running on the QPU, set the status to "RUNNING: EXECUTING_QPU":
update_status(Job.EXECUTING_QPU)

# Once the QPU work is complete and post-processing has begun, set the status to "RUNNING: POST_PROCESSING":
update_status(Job.POST_PROCESSING)

After the workload completes successfully (that is, after save_result() is called), the status is automatically updated to DONE.


Parallel workflows

For classical tasks that can be parallelized, use the @distribute_task decorator to define the compute requirements needed to perform a task. Start by recalling the transpile_remote.py example from the Write your first Qiskit Serverless program topic, which uses the following code.

The following code requires that you have already saved your credentials.

./source_files/transpile_remote.py
from qiskit.transpiler import generate_preset_pass_manager
from qiskit_ibm_runtime import QiskitRuntimeService
from qiskit_serverless import distribute_task
 
service = QiskitRuntimeService()
 
@distribute_task(target={"cpu": 1})
def transpile_remote(circuit, optimization_level, backend):
    """Transpiles an abstract circuit (or list of circuits) into an ISA circuit for a given backend."""
    pass_manager = generate_preset_pass_manager(
        optimization_level=optimization_level,
        backend=service.backend(backend)
    )
    isa_circuit = pass_manager.run(circuit)
    return isa_circuit

In this example, you decorated the transpile_remote() function with @distribute_task(target={"cpu": 1}). When run, this creates an asynchronous parallel worker task with a single CPU core and returns a reference that you can use to track the worker. To fetch the result, pass the reference to the get() function. You can use this pattern to run multiple parallel tasks:

./source_files/transpile_remote.py (appended)
from time import time
from qiskit_serverless import get, get_arguments, save_result, update_status, Job
 
# Get arguments
arguments = get_arguments()
circuit = arguments.get("circuit")
optimization_level = arguments.get("optimization_level")
backend = arguments.get("backend")
./source_files/transpile_remote.py (appended)
# Start distributed transpilation
 
update_status(Job.OPTIMIZING_HARDWARE)
 
start_time = time()
transpile_worker_references = [
    transpile_remote(circuit, optimization_level, backend)
    for circuit in arguments.get("circuit_list")
]
 
transpiled_circuits = get(transpile_worker_references)
end_time = time()
./source_files/transpile_remote.py (appended)
# Save result, with metadata
 
result = {
    "circuits": transpiled_circuits,
    "metadata": {
        "resource_usage": {
            "RUNNING: OPTIMIZING_FOR_HARDWARE": {
                "CPU_TIME": end_time - start_time,
                "QPU_TIME": 0,
            },
        }
    },
}
 
save_result(result)
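The submit-references-then-get pattern above is a classic fork-join workflow. As a purely local analogy (standard-library Python only, not Qiskit Serverless itself), concurrent.futures behaves the same way: submitting a task returns a handle immediately, and collecting results blocks until the workers finish. The transpile_stub function here is a hypothetical stand-in for the remote transpilation task:

```python
from concurrent.futures import ThreadPoolExecutor

def transpile_stub(circuit_id: int) -> str:
    # Stand-in for a remote transpilation task.
    return f"transpiled-{circuit_id}"

with ThreadPoolExecutor(max_workers=4) as pool:
    # submit() returns a future immediately, analogous to a worker reference.
    refs = [pool.submit(transpile_stub, i) for i in range(4)]
    # result() blocks until each task finishes, analogous to get().
    transpiled = [ref.result() for ref in refs]

print(transpiled)  # ['transpiled-0', 'transpiled-1', 'transpiled-2', 'transpiled-3']
```

The analogy is only structural: in Qiskit Serverless the workers run on remote classical resources rather than local threads.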

Explore different task configurations

You can flexibly allocate CPU, GPU, and memory for your tasks via @distribute_task(). For Qiskit Serverless on IBM Quantum® Platform, each program is equipped with 16 CPU cores and 32 GB RAM, which can be allocated dynamically as needed.

CPU cores can be allocated as whole cores or as fractional amounts (for example, "cpu": 0.5).

Memory is allocated in number of bytes. Recall that there are 1024 bytes in a kilobyte, 1024 kilobytes in a megabyte, and 1024 megabytes in a gigabyte. To allocate 2 GB of memory for your worker, you need to allocate "mem": 2 * 1024 * 1024 * 1024.

./source_files/transpile_remote.py (appended)
@distribute_task(target={
    "cpu": 16,
    "mem": 2 * 1024 * 1024 * 1024
})
def transpile_remote(circuit, optimization_level, backend):
    return None
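Because the target values are plain numbers, it can help to compute them explicitly. The following is a small illustrative helper (make_target and GIB are our own names, not part of the qiskit-serverless API) that converts gigabytes to the byte count the target dictionary expects:

```python
GIB = 1024 * 1024 * 1024  # bytes per gigabyte (binary convention, as above)

def make_target(cpu_cores: float, mem_gb: float) -> dict:
    """Build a resource spec suitable for @distribute_task(target=...)."""
    return {"cpu": cpu_cores, "mem": int(mem_gb * GIB)}

print(make_target(16, 2))  # {'cpu': 16, 'mem': 2147483648}
```

For example, make_target(0.5, 1) requests half a CPU core and 1 GB (1073741824 bytes) of memory.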
