IBM Quantum Administration

Frequently asked questions (FAQ)

If you do not find the assistance you need here, you can get help as described on the Getting help page.

How are instances, hubs, groups, and projects related?

Hubs, groups, and projects are the organization levels for instances. At the lowest level, collaborators are members of projects. Projects are members of groups, and groups are members of hubs. You typically have one hub, which can be divided into any number of groups. Each group can then be divided into any number of projects. Administrators assign access to systems at the project and group level.

See this topic for more information: Instances.

Our hub has an allocated number of queue slots. What does that mean?

Your queue slot allocation determines your reserved capacity. For example, each system with fewer than 200 qubits is worth 20 queue slots, so if your hub is allocated five queue slots, you are guaranteed capacity equivalent to 25% of the up-time of an average <200-qubit system, provided your users maintain continuous demand.
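The arithmetic above can be sketched directly. The 20-slots-per-system figure below is the example value from the text; the actual weighting depends on your contract:

```python
# Reserved capacity implied by a queue-slot allocation.
# Example figure from the text above: a <200-qubit system is worth
# 20 queue slots. Confirm the exact metric in your contract.
SLOTS_PER_SYSTEM = 20

def reserved_capacity(allocated_slots: int) -> float:
    """Fraction of one average system's up-time guaranteed to the hub."""
    return allocated_slots / SLOTS_PER_SYSTEM

print(reserved_capacity(5))  # 5 slots -> 0.25, i.e. 25% of up-time
```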

Note that queue slot consumption changes depending on the number of qubits. Please refer to your contract or your IBM engagement manager for the exact metrics.

See this topic for more information: Fair-share scheduler.

How should I allocate my queue slots?

A good practice is to match the sum of group shares to the number of queue slots. For example, assume that your hub has five queue slots. An easy way to track allocation is to make the sum of your group shares equal to 5000. Then, if you want one group to have an effective allotment of two of the five queue slots, you configure that group's backend priority to be 2000.
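A minimal sketch of this bookkeeping, with illustrative group names and share values (not from a real hub):

```python
# Keeping group shares summing to 1000 x (hub queue slots) makes the
# effective slot allotment easy to read off. Names and values here
# are illustrative only.
HUB_QUEUE_SLOTS = 5
TOTAL_SHARES = HUB_QUEUE_SLOTS * 1000  # 5000, as in the example above

group_shares = {"group-a": 2000, "group-b": 2000, "group-c": 1000}
assert sum(group_shares.values()) == TOTAL_SHARES

for group, shares in group_shares.items():
    effective_slots = shares / TOTAL_SHARES * HUB_QUEUE_SLOTS
    print(f"{group}: {shares} shares -> {effective_slots} queue slots")
# group-a: 2000 shares -> 2.0 queue slots
```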

See this topic for more information: Fair-share scheduler.

Can I allocate shares flexibly?

No. You must manually set the allocation for each group and project.

What do I do when a new system becomes available?

When you are notified that there is a new system available, you must add it first to each group and then to each project that should have access. If you do not complete this step, your hub will not use the system. For each relevant group and project, open the group or project, click the Manage backends tab, choose the new system under Select a backend, then fill out the rest of the values as appropriate.

For full details, see Add or remove backends within groups.

A device is only available to a project if you have added it first to the project’s group.

What can hub and group administrators do?

Hub administrator tasks:

  • Assign backends to all groups and projects
  • Assign shares to groups and projects
  • Invite and assign hub and group administrators
  • Add collaborators (non-administrator end users) to projects

Group administrator tasks:

  • Assign backends to projects
  • Assign shares to projects
  • Invite group administrators
  • Add collaborators (non-administrator end users) to projects

Is there a priority between what’s set by the group and hub administrators?

Both hub and group administrators can adjust shares at the project level. There is no precedence between what a hub administrator has set and what a group administrator has set. The most recent change is active regardless of who set it. However, hub administrators can make changes to all groups while group administrators can only make changes for their group.

What values should I set for Max experiments and Max shots?

It is recommended that you use circuit bundling (running several circuits in one job) to minimize the overhead of sending several jobs, which helps reduce queue wait time. The Max experiments setting specifies how many circuits can be bundled per job.

Moreover, to minimize sampling errors in any experiment, your collaborators should maximize the number of shots (repetitions) for which their circuits are run.
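The benefit of more shots follows from standard binomial statistics: the standard error of an estimated outcome probability p after n shots is sqrt(p(1 - p)/n), so it shrinks with the square root of the shot count.

```python
import math

# Standard error of an estimated outcome probability p after n shots,
# from the binomial distribution: se = sqrt(p * (1 - p) / n).
def sampling_error(p: float, shots: int) -> float:
    return math.sqrt(p * (1 - p) / shots)

# Worst case (p = 0.5) at a few shot counts mentioned in this FAQ:
for shots in (1000, 4000, 100_000):
    print(f"{shots:>7} shots: standard error = {sampling_error(0.5, shots):.4f}")
```

Going from 4,000 shots to 100,000 shots cuts the sampling error by a factor of five.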

The default number of shots is 4000. The maximum number of circuits and shots for most systems is 300 circuits and 100,000 shots (under the Open Plan, the maximum is 100 circuits and 20,000 shots). We recommend that hub and group administrators keep the default settings for Max experiments and Max shots. However, there are some instances where you might want to use lower values. For example:

  • It can be useful to decrease these values for instances meant for beginners. This ensures they consume less of the allocated hub usage, leaving more system time for advanced users.

Lowering Max shots and/or Max experiments may affect the types of jobs your users can submit.
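One concrete consequence of lowering Max experiments: a workload of a fixed number of circuits must be split across more jobs, each of which queues separately. A quick sketch (circuit counts are illustrative):

```python
import math

# Number of jobs needed when up to max_experiments circuits can be
# bundled per job. Workload sizes below are illustrative.
def jobs_needed(total_circuits: int, max_experiments: int) -> int:
    return math.ceil(total_circuits / max_experiments)

print(jobs_needed(900, 300))  # default limit of 300 circuits -> 3 jobs
print(jobs_needed(900, 50))   # lowered limit of 50 circuits -> 18 jobs
```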

Where is my API token?

Your API token is on your IBM Quantum Dashboard, right at the top for easy access.

What do I need to know about calibration jobs?

Several types of calibration jobs are run both daily and hourly to ensure that the systems are stable and return accurate results. Calibrations alert IBM to any system failures so that they can be resolved as soon as possible. They also provide users with the most up-to-date error rates and coherence times, allowing them to make better choices when choosing which qubits to use or how to compile their circuits. See more information in the About calibration jobs topic.

How do I train my new users on system usage?

To avoid situations in which a new user inadvertently submits many jobs and negatively impacts your Hub’s fair-share priority, we recommend you follow these onboarding steps to train new users.

  1. Give the user access to a Project that only contains simulators. This provides them with the opportunity to familiarize themselves with Qiskit while practicing basic job submission to a system.
  2. Next, give the user access to a Project in your Hub. This will give them experience sending jobs on larger systems that are only accessible through Projects in your Hub.

I need to configure my firewall to enable access to the IBM Quantum API endpoints. Which URLs should I add to our whitelist?

HTTPS endpoints:

  • IBM Quantum APIs: https://* and https://*
  • IBM Cloud object storage for non-Runtime jobs: https://*

WebSocket endpoints:

  • IBM Quantum APIs: wss://* and wss://*

Why does the Jobs page show multiple jobs running on the same system?

We have released a new feature that parallelizes some of the classical computation necessary to prepare a submitted job for its quantum computation. Before this feature, all aspects of job processing were executed serially: the target backend would not process another job until its current job completed, which was visible in your dashboard as at most one job in the “Running” state at any one time. With the parallel compilation feature, you may see multiple jobs in the Running state, and jobs may remain in the Running state longer than before. With this change, we also expect faster completion times for Qiskit Runtime jobs. Currently, this optimization is available on ibmq_manila, ibm_auckland, ibm_bangkok, ibm_cairo, ibm_geneva, ibm_hanoi, ibmq_jakarta, ibm_lagos, ibmq_montreal, ibm_nairobi, ibm_peekskill, ibm_perth, ibmq_toronto, ibm_wellington, ibm_oslo, ibmq_kolkata, and ibmq_mumbai.

It is important to note that a single Qiskit Runtime job does not have exclusive access to a backend.

Do failed jobs affect my fair-share priority?

Yes, failed jobs count against the reserved capacity usage for the current month. To see if a job failed, visit the Results tab on your hub’s dashboard. A user can also view a job’s status on their Jobs page.

How does the fair-share scheduler work?

See the Fair-share scheduler topic.

If we want to put all our jobs toward one system on a given month and that system happens to be really busy, how can we make sure we still get our contracted share that month?

Your contracted share is not guaranteed on a single system. The dashboard should help you track your members’ usage and balance systems and shares to hit your targets. Usage is measured over a rolling window, so we also cannot guarantee that you reach your share over a shorter span of time.

How do fair-share queuing and sessions impact job selection?

For each backend, jobs that are part of a session take priority. If there are no jobs from an active session, the next job from the regular fair-share queue is run.

A job from the fair-share queue could activate or reactivate a session.
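The selection rule described above can be sketched as follows. This is an illustrative model, not IBM's actual scheduler implementation:

```python
from collections import deque

# Illustrative sketch of the job-selection rule described above:
# jobs from an active session take priority; otherwise the next
# fair-share job runs (and may itself open or reopen a session).
def next_job(session_queue: deque, fair_share_queue: deque):
    if session_queue:
        return session_queue.popleft()
    if fair_share_queue:
        return fair_share_queue.popleft()
    return None

session_jobs = deque(["session-job-1"])
fair_share_jobs = deque(["fair-job-1", "fair-job-2"])
print(next_job(session_jobs, fair_share_jobs))  # session-job-1 runs first
print(next_job(session_jobs, fair_share_jobs))  # then fair-job-1
```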

Find more helpful information in the Common tasks topic.
