What is CUDA-Q?

NVIDIA’s CUDA Quantum (CUDA-Q) is an open-source quantum development platform with a unified programming model designed for a hybrid setting, supporting computation on GPU, CPU, and QPU resources working together. CUDA-Q integrates with various QPUs (including IonQ systems) as well as GPU-accelerated quantum simulations, and it supports programming in Python and C++.

This guide covers how to use CUDA-Q with Python to submit basic circuits to IonQ backends; refer to the CUDA-Q docs for additional circuit and application examples in both Python and C++.

Getting started

You’ll need an account on the IonQ Quantum Cloud, and you’ll need to create an API key. If you need help, we have a guide to setting up and managing your API keys.


Set up CUDA-Q

CUDA-Q includes built-in support for IonQ; no separate packages or add-ons are required.

Refer to CUDA-Q’s quick start and full installation guide for information about installing CUDA-Q. Depending on your operating system and preferences, you might install the cudaq Python package directly with pip, or you might use a Docker container provided by NVIDIA.

CUDA-Q can also be used in hosted environments like Google Colab or QBraid Lab, which offer easier setup and may provide access to GPU resources.

If you’re using a hosted notebook in an environment where CUDA-Q isn’t already installed, start by installing it via pip:

%%capture
%pip install cudaq

Set up your environment

For CUDA-Q, you’ll specifically need to store your API key in an environment variable named IONQ_API_KEY, rather than passing it into a function in your Python code. If you’re running on your local machine or in another environment that you can manage directly, you can set this up persistently using the steps for your operating system in our guide to API keys.

Otherwise, you can set the environment variable from within a Python script or notebook using:

import os

# Make your IonQ API key available to CUDA-Q for this session
os.environ['IONQ_API_KEY'] = "your_api_key_here"

Your API key is effectively a password that enables access to your IonQ Quantum Cloud account, so if you’ll be sharing your code file, we recommend reading in your key from a separate file or entering it at runtime using a module like getpass.
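
For example, here’s a minimal sketch that prompts for the key at runtime with getpass, so it never appears in your code file:

import os
from getpass import getpass

# Prompt for the API key at runtime instead of hardcoding it
os.environ['IONQ_API_KEY'] = getpass("Enter your IonQ API key: ")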

Submit a circuit to the simulator

To submit a basic circuit to IonQ’s ideal cloud simulator, define it as a CUDA-Q quantum kernel (a Python function with the @cudaq.kernel decorator) and use cudaq.set_target('ionq') before sampling the kernel.

import cudaq

# Define a circuit (CUDA-Q kernel)
@cudaq.kernel
def hello_world():
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])

# Set the target to IonQ's simulator
cudaq.set_target('ionq')

# Run the circuit and print results
result = cudaq.sample(hello_world)
print(result)

You should see a result like:

{ 00:500 11:500 }

Without a QPU or noise model specified, cudaq.set_target('ionq') targets IonQ’s ideal cloud simulator, and any jobs run after setting this target in your code will be sent there. (If we hadn’t set a target at all, the circuit would have run locally on CUDA-Q’s default simulator for your system and environment.)

For IonQ ideal simulations run through CUDA-Q, results are produced by multiplying the calculated probabilities by the shot count (which defaults to 1000 if unspecified, as in this example). That’s why this Bell state example returns exactly 500 counts for the 00 state and 500 for the 11 state.

For circuits run using CUDA-Q with other simulation backends (and for IonQ ideal simulations using different SDKs or settings), ideal simulation results may be returned by sampling from the calculated probabilities instead, which may be preferable depending on your objectives.
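
If you want to control the shot count or inspect specific outcomes programmatically, you can pass shots_count to cudaq.sample and read values off the returned sample result. Here’s a minimal sketch, assuming the ideal-simulator target set above and that the result object provides the count, probability, and most_probable helpers:

# Run the kernel again with an explicit shot count (instead of the default 1000)
result = cudaq.sample(hello_world, shots_count=2000)

# Inspect individual outcomes from the returned counts
print(result.count('00'))        # number of times the 00 state was observed
print(result.probability('11'))  # estimated probability of the 11 state
print(result.most_probable())    # most frequently observed bitstring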

Finally, note that the name of the CUDA-Q kernel function (hello_world in this case) is also used as the name of the IonQ job, which you should see on the “My Jobs” page in the IonQ Cloud Console.

Submit a circuit to the noisy simulator

To run the circuit using IonQ’s simulator with a noise model, add a noise argument when setting the target, like: cudaq.set_target('ionq', noise='aria-1'). The available noise models are harmony (legacy), aria-1, aria-2, and forte-1. You can read more about these noise models here.

import cudaq

# Define a circuit (CUDA-Q kernel)
@cudaq.kernel
def hello_world_noisy():
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])

# Set the target to IonQ's simulator with Aria 1 noise model
cudaq.set_target('ionq', noise='aria-1')

# Run the circuit and print results
result = cudaq.sample(hello_world_noisy, shots_count=1000)
print(result)

You might see a result like:

{ 00:488 01:1 10:2 11:509 }

This example is almost identical to the ideal simulation above, except that it uses the aria-1 noise model, explicitly specifies the number of shots to simulate, and changes the job name. As expected, the result is similar, but a few instances of the 01 and 10 states were recorded.

Submit a circuit to a QPU

To run the same circuit on IonQ’s quantum hardware (QPU), we can use cudaq.set_target('ionq', qpu='qpu.aria-1'), or specify another backend name via the qpu argument. Available QPU backend options may include qpu.aria-1, qpu.aria-2, qpu.forte-1, or qpu.forte-enterprise-1. You can view which of these systems you can access in the /v0.3/backends resource in the API and on the “Backends” tab of the IonQ Cloud Console.
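
If you’d like to check your access programmatically, here’s a minimal sketch that queries the /v0.3/backends resource directly; it assumes the requests package is installed and that IONQ_API_KEY is set as described above:

import os
import requests

# List the backends available to your API key via the IonQ API
response = requests.get(
    "https://api.ionq.co/v0.3/backends",
    headers={"Authorization": f"apiKey {os.environ['IONQ_API_KEY']}"},
)
response.raise_for_status()
for backend in response.json():
    print(backend)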

We also recommend using CUDA-Q’s sample_async method rather than sample, so you can submit a job and come back later to retrieve its results.

Before submitting to any QPU, we recommend testing your code on a simulator (including with a noise model) and following the other steps on this list to confirm your access and QPU availability.

import cudaq

# Define a circuit (CUDA-Q kernel)
@cudaq.kernel
def hello_qpu():
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])

# Set the target to IonQ Aria 1
cudaq.set_target('ionq', qpu='qpu.aria-1')

# Submit the circuit
async_result = cudaq.sample_async(hello_qpu, shots_count=1000)

# Save the job information to a file
with open("hello_qpu.txt", "w") as file:
    file.write(str(async_result))

You should see the submitted job in the IonQ Cloud Console with the status “ready” while it is waiting in the queue.

If you save the job information to a file, you can easily retrieve the result later, after the job has finished. (Note that the file contains your IonQ API key, so we recommend storing it securely.)

Retrieve the result

The stored async result contains all of the information needed to retrieve the job, after you’ve confirmed it has completed.

import cudaq

with open("hello_qpu.txt", "r") as file:
    retrieved_async_result = cudaq.AsyncSampleResult(str(file.read()))

result = retrieved_async_result.get()
print(result)

Additional resources

CUDA-Q’s documentation includes much more information on its features and capabilities, with extensive examples and applications. You can find IonQ examples similar to the ones shown above in the Examples/Using Quantum Hardware Providers section and the Backends/Quantum Hardware (QPUs) section, but you can also try running many other examples with IonQ targets as demonstrated here.

News, partner stories, and more details about CUDA-Q are available in NVIDIA’s developer resources.

The CUDA-Q source code is also available on GitHub.