openpyxl.formula.tokenizer module
This module contains a tokenizer for Excel formulae.
The tokenizer is based on the JavaScript tokenizer found at http://ewbi.blogs.com/develops/2004/12/excel_formula_p.html, written by Eric Bachtal.
class openpyxl.formula.tokenizer.Token(value, type_, subtype='') [source]
Bases: object
A token in an Excel formula.
Tokens have three attributes:
- value: The string value parsed that led to this token
- type: A string identifying the type of the token
- subtype: A string identifying the subtype of the token (optional, defaults to '')
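As a sketch of how these three attributes fit together, the snippet below constructs a Token directly, using the class constants documented further down; in normal use these objects are produced by the Tokenizer rather than built by hand:

```python
from openpyxl.formula.tokenizer import Token

# Build a token by hand; normally Tokenizer produces these.
tok = Token("A1:B2", Token.OPERAND, Token.RANGE)

print(tok.value)    # the parsed string, "A1:B2"
print(tok.type)     # "OPERAND"
print(tok.subtype)  # "RANGE"
```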
ARG = 'ARG'
ARRAY = 'ARRAY'
CLOSE = 'CLOSE'
ERROR = 'ERROR'
FUNC = 'FUNC'
LITERAL = 'LITERAL'
LOGICAL = 'LOGICAL'
NUMBER = 'NUMBER'
OPEN = 'OPEN'
OPERAND = 'OPERAND'
OP_IN = 'OPERATOR-INFIX'
OP_POST = 'OPERATOR-POSTFIX'
OP_PRE = 'OPERATOR-PREFIX'
PAREN = 'PAREN'
RANGE = 'RANGE'
ROW = 'ROW'
SEP = 'SEP'
TEXT = 'TEXT'
WSPACE = 'WHITE-SPACE'
classmethod make_subexp(value, func=False) [source]
Create a subexpression token.
value: The value of the token
func: If True, force the token to be of type FUNC
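A brief sketch of make_subexp in use, assuming the standard openpyxl behaviour that the token's type (FUNC, PAREN, or ARRAY) and subtype (OPEN or CLOSE) are inferred from the bracket character in value:

```python
from openpyxl.formula.tokenizer import Token

# An opening function call: func=True forces type FUNC.
open_func = Token.make_subexp("SUM(", func=True)
print(open_func.type, open_func.subtype)  # FUNC OPEN

# A bare parenthesis yields a PAREN token instead.
paren = Token.make_subexp("(")
print(paren.type, paren.subtype)  # PAREN OPEN
```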
subtype
type
value
class openpyxl.formula.tokenizer.Tokenizer(formula) [source]
Bases: object
A tokenizer for Excel worksheet formulae.
Converts a string representing an Excel formula (in A1 notation) into a sequence of Token objects.
formula: The string to tokenize
Tokenizer defines a method ._parse() to parse the formula into tokens, which can then be accessed through the .items attribute.
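Putting the above together, a minimal usage sketch: parsing happens when the Tokenizer is constructed, and the resulting Token objects are then read off .items:

```python
from openpyxl.formula.tokenizer import Tokenizer

# Tokenize a formula string; note the leading "=" marks it as a formula.
tok = Tokenizer('=SUM(A1:B2, "text")')

# Each item is a Token with value, type and subtype attributes.
for t in tok.items:
    print(f"{t.value!r:12} {t.type:10} {t.subtype}")
```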
ERROR_CODES = ('#NULL!', '#DIV/0!', '#VALUE!', '#REF!', '#NAME?', '#NUM!', '#N/A', '#GETTING_DATA')
SN_RE = re.compile('^[1-9](\\.[0-9]+)?[Ee]$')
STRING_REGEXES = {'"': re.compile('"(?:[^"]*"")*[^"]*"(?!")'), "'": re.compile("'(?:[^']*'')*[^']*'(?!')")}
TOKEN_ENDERS = ',;}) +-*/^&=><%'
WSPACE_RE = re.compile('[ \\n]+')
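Of these, SN_RE is the least obvious: it matches a partial number ending in E/e, letting the tokenizer distinguish scientific notation (e.g. the "+" in 1.5E+10) from the infix + operator. A stdlib-only sketch of the same pattern:

```python
import re

# Same pattern as Tokenizer.SN_RE: a single leading digit, an optional
# fractional part, and a trailing E/e with nothing after it.
SN_RE = re.compile('^[1-9](\\.[0-9]+)?[Ee]$')

print(bool(SN_RE.match("1.5E")))  # True: a following "+10" is an exponent
print(bool(SN_RE.match("15E")))   # False: more than one leading digit
print(bool(SN_RE.match("1.5")))   # False: no trailing E
```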
assert_empty_token(can_follow=()) [source]
Ensure that there is no token currently being parsed, or, if there is, that it ends with a character in can_follow.
If there are unconsumed token contents, we hit an unexpected token transition; in this case, a TokenizerError is raised.