Files
pyGoEdge-UserPanel/.venv/Lib/site-packages/sqlparse/__pycache__/lexer.cpython-312.pyc

54 lines
6.6 KiB

2025-11-18 03:36:49 +08:00
[Binary content: compiled CPython 3.12 bytecode for sqlparse's `lexer` module; not human-readable source.]

Recoverable strings in the bytecode show the module defines:

- class Lexer -- "The Lexer supports configurable syntax. To add support for additional keywords, use the `add_keywords` method."
  - get_default_instance() -- returns the lexer instance used internally by the sqlparse core functions (creation guarded by a threading Lock)
  - default_initialization() -- initializes the lexer with the default SQL_REGEX and the KEYWORDS_* dictionaries (COMMON, ORACLE, MYSQL, PLPGSQL, HQL, MSACCESS, SNOWFLAKE, BIGQUERY, KEYWORDS); useful to revert custom syntax settings
  - clear() -- clears all syntax configurations; after this call, regexps and keyword dictionaries need to be loaded again to make the lexer functional
  - set_SQL_REGEX(SQL_REGEX) -- sets the list of regexes that parse the SQL, compiled with re.IGNORECASE | re.UNICODE
  - add_keywords(keywords) -- adds a keyword dictionary; keywords are looked up in the same order that dictionaries were added
  - is_keyword(value) -- if the upper-cased value is in one of the KEYWORDS_* dictionaries it is considered a keyword, otherwise tokens.Name is returned
  - get_tokens(text, encoding=None) -- returns an iterable of (tokentype, value) pairs generated from `text` (str, bytes, or file-like object); raises TypeError for anything else
- tokenize(sql, encoding=None) -- tokenizes *sql* using the Lexer and returns a 2-tuple stream of (token type, value) items