
OpenQASMLexer

class OpenQASMLexer(*args, **kwds)

A pygments lexer for OpenQASM.
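
A minimal usage sketch follows. It assumes the import path qiskit.qasm.pygments that ships the lexer in older Qiskit releases, and the OpenQASM snippet is purely illustrative.

    # Highlight a short OpenQASM 2.0 snippet for the terminal with this lexer.
    # The import path assumes an older Qiskit release providing qiskit.qasm.pygments.
    from pygments import highlight
    from pygments.formatters import TerminalFormatter
    from qiskit.qasm.pygments import OpenQASMLexer

    qasm_source = 'OPENQASM 2.0;\ninclude "qelib1.inc";\nqreg q[2];\nh q[0];\ncx q[0], q[1];\n'
    print(highlight(qasm_source, OpenQASMLexer(), TerminalFormatter()))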


Attributes

alias_filenames

Default value: []

Secondary file name globs

aliases

Default value: ['qasm']

Shortcuts for the lexer

filenames

Default value: ['*.qasm']

File name globs

flags

Default value: 8

Flags for compiling the regular expressions. Defaults to MULTILINE.

gates

Default value: ['id', 'cx', 'x', 'y', 'z', 's', 'sdg', 'h', 't', 'tdg', 'ccx', 'c3x', 'c4x', 'c3sqrtx', 'rx', 'ry', 'rz', 'cz', 'cy', 'ch', 'swap', 'cswap', 'crx', 'cry', 'crz', 'cu1', 'cu3', 'rxx', 'rzz', 'rccx', 'rc3x', 'u1', 'u2', 'u3']

mimetypes

Default value: []

MIME types

name

Default value: 'OpenQASM'

Name of the lexer

priority

Default value: 0

Priority, should multiple lexers match and no content is provided

tokens

Default value: {'gate': [('[unitary\\d+]', Token.Keyword.Type, '#push'), ('p\\d+', Token.Text, '#push')], 'if_keywords': [('[a-zA-Z0-9_]*', Token.Literal.String, '#pop'), ('\\d+', Token.Literal.Number, '#push'), ('.*\\(', Token.Text, 'params')], 'index': [('\\d+', Token.Literal.Number, '#pop')], 'keywords': [('\\s*("([^"]|"")*")', Token.Literal.String, '#push'), ('\\d+', Token.Literal.Number, '#push'), ('.*\\(', Token.Text, 'params')], 'params': [('[a-zA-Z_][a-zA-Z0-9_]*', Token.Text, '#push'), ('\\d+', Token.Literal.Number, '#push'), ('(\\d+\\.\\d*|\\d*\\.\\d+)([eEf][+-]?[0-9]+)?', Token.Literal.Number, '#push'), ('\\)', Token.Text)], 'root': [('\\n', Token.Text), ('[^\\S\\n]+', Token.Text), ('//\\n', Token.Comment), ('//.*?$', Token.Comment.Single), ('(OPENQASM|include)\\b', Token.Keyword.Reserved, 'keywords'), ('(qreg|creg)\\b', Token.Keyword.Declaration), ('(if)\\b', Token.Keyword.Reserved, 'if_keywords'), ('(pi)\\b', Token.Name.Constant), ('(barrier|measure|reset)\\b', Token.Name.Builtin, 'params'), ('(id|cx|x|y|z|s|sdg|h|t|tdg|ccx|c3x|c4x|c3sqrtx|rx|ry|rz|cz|cy|ch|swap|cswap|crx|cry|crz|cu1|cu3|rxx|rzz|rccx|rc3x|u1|u2|u3)\\b', Token.Keyword.Type, 'params'), ('[unitary\\d+]', Token.Keyword.Type), ('(gate)\\b', Token.Name.Function, 'gate'), ('[a-zA-Z_][a-zA-Z0-9_]*', Token.Text, 'index')]}

Dict of {'state': [(regex, tokentype, new_state), ...], ...}

The initial state is ‘root’. new_state can be omitted to signify no state transition. If it is a string, the state is pushed on the stack and changed. If it is a tuple of strings, all states are pushed on the stack and the current state will be the topmost. It can also be combined('state1', 'state2', ...) to signify a new, anonymous state combined from the rules of two or more existing ones. Furthermore, it can be ‘#pop’ to signify going back one step in the state stack, or ‘#push’ to push the current state on the stack again.

The tuple can also be replaced with include('state'), in which case the rules from the state named by the string are included in the current one.
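
As a sketch of how these rules can be extended, the subclass below adds one extra, hypothetical gate keyword to the 'root' state; inherit is pygments' marker for splicing the parent lexer's rules for the same state back in, and the import path is the one assumed above.

    # Extend the 'root' state with one extra keyword while keeping every rule
    # the parent lexer defines for that state (spliced in via pygments' inherit).
    from pygments.lexer import inherit
    from pygments.token import Keyword
    from qiskit.qasm.pygments import OpenQASMLexer

    class ExtendedQASMLexer(OpenQASMLexer):
        tokens = {
            'root': [
                (r'(mygate)\b', Keyword.Type, 'params'),  # hypothetical extra gate keyword
                inherit,
            ],
        }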


Methods

add_filter

OpenQASMLexer.add_filter(filter_, **options)

Add a new stream filter to this lexer.
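
A brief sketch, assuming the same import path as above; 'keywordcase' is one of pygments' built-in filters and upper-cases keyword tokens before they reach a formatter.

    # Attach a built-in pygments filter to the lexer by name.
    from qiskit.qasm.pygments import OpenQASMLexer

    lexer = OpenQASMLexer()
    lexer.add_filter('keywordcase', case='upper')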

analyse_text

static OpenQASMLexer.analyse_text(text)

Has to return a float between 0 and 1 that indicates whether a lexer wants to highlight this text. Used by guess_lexer. If this method returns 0 it won't highlight the text in any case; if it returns 1, highlighting with this lexer is guaranteed.

The LexerMeta metaclass automatically wraps this function so that it works like a static method (no self or cls parameter) and the return value is automatically converted to float. If the return value is an object whose boolean value is False, it is treated the same as a return value of 0.0.
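
A brief sketch of calling the analyser directly (same assumed import path). OpenQASMLexer does not appear to supply its own analyser, so the call presumably falls back to the base implementation and returns 0.0; treat that as an assumption.

    # Query the lexer's confidence score for a text sample; the result is a
    # float in [0.0, 1.0] as described above.
    from qiskit.qasm.pygments import OpenQASMLexer

    score = OpenQASMLexer.analyse_text('OPENQASM 2.0;\ninclude "qelib1.inc";\n')
    print(score)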

get_tokens

OpenQASMLexer.get_tokens(text, unfiltered=False)

Return an iterable of (tokentype, value) pairs generated from text. If unfiltered is set to True, the filtering mechanism is bypassed even if filters are defined.

This method also preprocesses the text, i.e. expands tabs and strips it if wanted, and applies registered filters.
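
A brief sketch (same assumed import path; the OpenQASM fragment is illustrative):

    # Print the (tokentype, value) pairs produced for a short OpenQASM fragment.
    from qiskit.qasm.pygments import OpenQASMLexer

    lexer = OpenQASMLexer()
    for tokentype, value in lexer.get_tokens('qreg q[2];\nh q[0];\ncx q[0], q[1];\n'):
        print(tokentype, repr(value))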

get_tokens_unprocessed

OpenQASMLexer.get_tokens_unprocessed(text, stack=('root',))

Split text into (tokentype, text) pairs.

stack is the initial stack (default: ['root'])
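
A brief sketch (same assumed import path); each yielded item is printed as-is rather than unpacked:

    # Tokenize starting from the default 'root' state and print each raw item.
    from qiskit.qasm.pygments import OpenQASMLexer

    lexer = OpenQASMLexer()
    for item in lexer.get_tokens_unprocessed('h q[0];\n', stack=('root',)):
        print(item)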
