# -*- coding: utf-8 -*-
"""
    jinja2.lexer
    ~~~~~~~~~~~~

    This module implements a Jinja / Python combination lexer. The
    `Lexer` class provided by this module is used to do some preprocessing
    for Jinja.

    On the one hand it filters out invalid operators like the bitshift
    operators we don't allow in templates. On the other hand it separates
    template code and python code in expressions.

    :copyright: (c) 2010 by the Jinja Team.
    :license: BSD, see LICENSE for more details.
"""
import re

from operator import itemgetter
from collections import deque
from jinja2.exceptions import TemplateSyntaxError
from jinja2.utils import LRUCache
from jinja2._compat import next, iteritems, implements_iterator, \
     text_type, intern

# cache for the lexers.  Exists in order to be able to have multiple
# environments with the same lexer
_lexer_cache = LRUCache(50)

# static regular expressions
whitespace_re = re.compile(r'\s+', re.U)
string_re = re.compile(r"('([^'\\]*(?:\\.[^'\\]*)*)'"
                       r'|"([^"\\]*(?:\\.[^"\\]*)*)")', re.S)
integer_re = re.compile(r'\d+')

# we use the unicode identifier rule if this python version is able
# to handle unicode identifiers, otherwise the standard ASCII one.
try:
    compile('föö', '<unknown>', 'eval')
except SyntaxError:
    name_re = re.compile(r'\b[a-zA-Z_][a-zA-Z0-9_]*\b')
else:
    from jinja2 import _stringdefs
    name_re = re.compile('[%s][%s]*' % (_stringdefs.xid_start,
                                        _stringdefs.xid_continue))

float_re = re.compile(r'(?<!\.)\d+\.\d+')
newline_re = re.compile(r'(\r\n|\r|\n)')

# intern the tokens and keep references to them
TOKEN_ADD = intern('add')
TOKEN_ASSIGN = intern('assign')
TOKEN_COLON = intern('colon')
TOKEN_COMMA = intern('comma')
TOKEN_DIV = intern('div')
TOKEN_DOT = intern('dot')
TOKEN_EQ = intern('eq')
TOKEN_FLOORDIV = intern('floordiv')
TOKEN_GT = intern('gt')
TOKEN_GTEQ = intern('gteq')
TOKEN_LBRACE = intern('lbrace')
TOKEN_LBRACKET = intern('lbracket')
TOKEN_LPAREN = intern('lparen')
TOKEN_LT = intern('lt')
TOKEN_LTEQ = intern('lteq')
TOKEN_MOD = intern('mod')
TOKEN_MUL = intern('mul')
TOKEN_NE = intern('ne')
TOKEN_PIPE = intern('pipe')
TOKEN_POW = intern('pow')
TOKEN_RBRACE = intern('rbrace')
TOKEN_RBRACKET = intern('rbracket')
TOKEN_RPAREN = intern('rparen')
TOKEN_SEMICOLON = intern('semicolon')
TOKEN_SUB = intern('sub')
TOKEN_TILDE = intern('tilde')
TOKEN_WHITESPACE = intern('whitespace')
TOKEN_FLOAT = intern('float')
TOKEN_INTEGER = intern('integer')
TOKEN_NAME = intern('name')
TOKEN_STRING = intern('string')
TOKEN_OPERATOR = intern('operator')
TOKEN_BLOCK_BEGIN = intern('block_begin')
TOKEN_BLOCK_END = intern('block_end')
TOKEN_VARIABLE_BEGIN = intern('variable_begin')
TOKEN_VARIABLE_END = intern('variable_end')
TOKEN_RAW_BEGIN = intern('raw_begin')
TOKEN_RAW_END = intern('raw_end')
TOKEN_COMMENT_BEGIN = intern('comment_begin')
TOKEN_COMMENT_END = intern('comment_end')
TOKEN_COMMENT = intern('comment')
TOKEN_LINESTATEMENT_BEGIN = intern('linestatement_begin')
TOKEN_LINESTATEMENT_END = intern('linestatement_end')
TOKEN_LINECOMMENT_BEGIN = intern('linecomment_begin')
TOKEN_LINECOMMENT_END = intern('linecomment_end')
TOKEN_LINECOMMENT = intern('linecomment')
TOKEN_DATA = intern('data')
TOKEN_INITIAL = intern('initial')
TOKEN_EOF = intern('eof')

# bind operators to token types
operators = {
    '+':    TOKEN_ADD,
    '-':    TOKEN_SUB,
    '/':    TOKEN_DIV,
    '//':   TOKEN_FLOORDIV,
    '*':    TOKEN_MUL,
    '%':    TOKEN_MOD,
    '**':   TOKEN_POW,
    '~':    TOKEN_TILDE,
    '[':    TOKEN_LBRACKET,
    ']':    TOKEN_RBRACKET,
    '(':    TOKEN_LPAREN,
    ')':    TOKEN_RPAREN,
    '{':    TOKEN_LBRACE,
    '}':    TOKEN_RBRACE,
    '==':   TOKEN_EQ,
    '!=':   TOKEN_NE,
    '>':    TOKEN_GT,
    '>=':   TOKEN_GTEQ,
    '<':    TOKEN_LT,
    '<=':   TOKEN_LTEQ,
    '=':    TOKEN_ASSIGN,
    '.':    TOKEN_DOT,
    ':':    TOKEN_COLON,
    '|':    TOKEN_PIPE,
    ',':    TOKEN_COMMA,
    ';':    TOKEN_SEMICOLON
}

reverse_operators = dict([(v, k) for k, v in iteritems(operators)])
assert len(operators) == len(reverse_operators), 'operators dropped'
operator_re = re.compile('(%s)' % '|'.join(re.escape(x) for x in
                         sorted(operators, key=lambda x: -len(x))))

ignored_tokens = frozenset([TOKEN_COMMENT_BEGIN, TOKEN_COMMENT,
                            TOKEN_COMMENT_END, TOKEN_WHITESPACE,
                            TOKEN_LINECOMMENT_BEGIN, TOKEN_LINECOMMENT_END,
                            TOKEN_LINECOMMENT])
ignore_if_empty = frozenset([TOKEN_WHITESPACE, TOKEN_DATA,
                             TOKEN_COMMENT, TOKEN_LINECOMMENT])


def _describe_token_type(token_type):
    if token_type in reverse_operators:
        return reverse_operators[token_type]
    return {
        TOKEN_COMMENT_BEGIN:        'begin of comment',
        TOKEN_COMMENT_END:          'end of comment',
        TOKEN_COMMENT:              'comment',
        TOKEN_LINECOMMENT:          'comment',
        TOKEN_BLOCK_BEGIN:          'begin of statement block',
        TOKEN_BLOCK_END:            'end of statement block',
        TOKEN_VARIABLE_BEGIN:       'begin of print statement',
        TOKEN_VARIABLE_END:         'end of print statement',
        TOKEN_LINESTATEMENT_BEGIN:  'begin of line statement',
        TOKEN_LINESTATEMENT_END:    'end of line statement',
        TOKEN_DATA:                 'template data / text',
        TOKEN_EOF:                  'end of template'
    }.get(token_type, token_type)


def describe_token(token):
    """Returns a description of the token."""
    if token.type == 'name':
        return token.value
    return _describe_token_type(token.type)


def describe_token_expr(expr):
    """Like `describe_token` but for token expressions."""
    if ':' in expr:
        type, value = expr.split(':', 1)
        if type == 'name':
            return value
    else:
        type = expr
    return _describe_token_type(type)


def count_newlines(value):
    """Count the number of newline characters in the string.  This is
    useful for extensions that filter a stream.
    """
    return len(newline_re.findall(value))


def compile_rules(environment):
    """Compiles all the rules from the environment into a list of rules."""
    e = re.escape
    rules = [
        (len(environment.comment_start_string), 'comment',
         e(environment.comment_start_string)),
        (len(environment.block_start_string), 'block',
         e(environment.block_start_string)),
        (len(environment.variable_start_string), 'variable',
         e(environment.variable_start_string))
    ]

    if environment.line_statement_prefix is not None:
        rules.append((len(environment.line_statement_prefix), 'linestatement',
                      r'^[ \t\v]*' + e(environment.line_statement_prefix)))
    if environment.line_comment_prefix is not None:
        rules.append((len(environment.line_comment_prefix), 'linecomment',
                      r'(?:^|(?<=\S))[^\S\r\n]*' +
                      e(environment.line_comment_prefix)))

    return [x[1:] for x in sorted(rules, reverse=True)]
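
# Illustrative sketch (editor's addition, not part of the original module):
# with the default '{#', '{%' and '{{' delimiters every start string has
# length 2, so the reverse sort above falls back to comparing the rule names
# and compile_rules() returns
#
#     [('variable', '\\{\\{'), ('comment', '\\{\\#'), ('block', '\\{\\%')]
#
# The hypothetical helper below simply runs the function against a default
# environment so the ordering can be inspected interactively.
def _example_compile_rules():  # illustrative only, not module API
    from jinja2 import Environment
    return compile_rules(Environment())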
class Failure(object):
    """Class that raises a `TemplateSyntaxError` if called.
    Used by the `Lexer` to specify known errors.
    """

    def __init__(self, message, cls=TemplateSyntaxError):
        self.message = message
        self.error_class = cls

    def __call__(self, lineno, filename):
        raise self.error_class(self.message, lineno, filename)


class Token(tuple):
    """Token class."""
    __slots__ = ()
    lineno, type, value = (property(itemgetter(x)) for x in range(3))

    def __new__(cls, lineno, type, value):
        return tuple.__new__(cls, (lineno, intern(str(type)), value))

    def __str__(self):
        if self.type in reverse_operators:
            return reverse_operators[self.type]
        elif self.type == 'name':
            return self.value
        return self.type

    def test(self, expr):
        """Test a token against a token expression.  This can either be a
        token type or ``'token_type:token_value'``.  This can only test
        against string values and types.
        """
        # here we do a regular string equality check as test_any is usually
        # passed an iterable of not interned strings.
        if self.type == expr:
            return True
        elif ':' in expr:
            return expr.split(':', 1) == [self.type, self.value]
        return False

    def test_any(self, *iterable):
        """Test against multiple token expressions."""
        for expr in iterable:
            if self.test(expr):
                return True
        return False

    def __repr__(self):
        return 'Token(%r, %r, %r)' % (
            self.lineno,
            self.type,
            self.value
        )


@implements_iterator
class TokenStreamIterator(object):
    """The iterator for tokenstreams.  Iterate over the stream
    until the eof token is reached.
    """

    def __init__(self, stream):
        self.stream = stream

    def __iter__(self):
        return self

    def __next__(self):
        token = self.stream.current
        if token.type is TOKEN_EOF:
            self.stream.close()
            raise StopIteration()
        next(self.stream)
        return token


@implements_iterator
class TokenStream(object):
    """A token stream is an iterable that yields :class:`Token`\\s.  The
    parser however does not iterate over it but calls :meth:`next` to go
    one token ahead.  The current active token is stored as :attr:`current`.
    """

    def __init__(self, generator, name, filename):
        self._iter = iter(generator)
        self._pushed = deque()
        self.name = name
        self.filename = filename
        self.closed = False
        self.current = Token(1, TOKEN_INITIAL, '')
        next(self)

    def __iter__(self):
        return TokenStreamIterator(self)

    def __bool__(self):
        return bool(self._pushed) or self.current.type is not TOKEN_EOF
    __nonzero__ = __bool__  # py2 compatibility alias

    eos = property(lambda x: not x, doc='Are we at the end of the stream?')

    def push(self, token):
        """Push a token back to the stream."""
        self._pushed.append(token)

    def look(self):
        """Look at the next token."""
        old_token = next(self)
        result = self.current
        self.push(result)
        self.current = old_token
        return result

    def skip(self, n=1):
        """Go n tokens ahead."""
        for x in range(n):
            next(self)

    def next_if(self, expr):
        """Perform the token test and return the token if it matched.
        Otherwise the return value is `None`.
        """
        if self.current.test(expr):
            return next(self)

    def skip_if(self, expr):
        """Like :meth:`next_if` but only returns `True` or `False`."""
        return self.next_if(expr) is not None

    def __next__(self):
        """Go one token ahead and return the old one."""
        rv = self.current
        if self._pushed:
            self.current = self._pushed.popleft()
        elif self.current.type is not TOKEN_EOF:
            try:
                self.current = next(self._iter)
            except StopIteration:
                self.close()
        return rv

    def close(self):
        """Close the stream."""
        self.current = Token(self.current.lineno, TOKEN_EOF, '')
        self._iter = None
        self.closed = True
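
    # Usage sketch (editor's note, not original code): given a stream built
    # by ``Lexer.tokenize(u'{{ foo }}')``, the parser-facing API behaves
    # roughly like this (whitespace tokens are already filtered out):
    #
    #     >>> stream.current
    #     Token(1, 'variable_begin', u'{{')
    #     >>> next(stream).type          # advances, returns the old token
    #     'variable_begin'
    #     >>> stream.expect('name').value
    #     'foo'
    #
    # ``expect()`` below both checks the current token and advances past it.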
    def expect(self, expr):
        """Expect a given token type and return it.  This accepts the same
        argument as :meth:`jinja2.lexer.Token.test`.
        """
        if not self.current.test(expr):
            expr = describe_token_expr(expr)
            if self.current.type is TOKEN_EOF:
                raise TemplateSyntaxError('unexpected end of template, '
                                          'expected %r.' % expr,
                                          self.current.lineno,
                                          self.name, self.filename)
            raise TemplateSyntaxError("expected token %r, got %r" %
                                      (expr, describe_token(self.current)),
                                      self.current.lineno,
                                      self.name, self.filename)
        try:
            return self.current
        finally:
            next(self)


def get_lexer(environment):
    """Return a lexer which is probably cached."""
    key = (environment.block_start_string,
           environment.block_end_string,
           environment.variable_start_string,
           environment.variable_end_string,
           environment.comment_start_string,
           environment.comment_end_string,
           environment.line_statement_prefix,
           environment.line_comment_prefix,
           environment.trim_blocks,
           environment.lstrip_blocks,
           environment.newline_sequence,
           environment.keep_trailing_newline)
    lexer = _lexer_cache.get(key)
    if lexer is None:
        lexer = Lexer(environment)
        _lexer_cache[key] = lexer
    return lexer


class Lexer(object):
    """Class that implements a lexer for a given environment.  Automatically
    created by the environment class, usually you don't have to do that.

    Note that the lexer is not automatically bound to an environment.
    Multiple environments can share the same lexer.
    """

    def __init__(self, environment):
        # shortcuts
        c = lambda x: re.compile(x, re.M | re.S)
        e = re.escape

        # lexing rules for tags
        tag_rules = [
            (whitespace_re, TOKEN_WHITESPACE, None),
            (float_re, TOKEN_FLOAT, None),
            (integer_re, TOKEN_INTEGER, None),
            (name_re, TOKEN_NAME, None),
            (string_re, TOKEN_STRING, None),
            (operator_re, TOKEN_OPERATOR, None)
        ]

        # assemble the root lexing rule.  because "|" is ungreedy
        # we have to sort by length so that the lexer continues working
        # as expected when we have parsing rules like <% for block and
        # <%= for variables.
        root_tag_rules = compile_rules(environment)

        # block suffix if trimming is enabled
        block_suffix_re = environment.trim_blocks and '\\n?' or ''

        # strip leading spaces if lstrip_blocks is enabled
        prefix_re = {}
        if environment.lstrip_blocks:
            # use '{%+' to manually disable lstrip_blocks behavior
            no_lstrip_re = e('+')
            # detect overlap between block and variable or comment strings
            block_diff = c(r'^%s(.*)' % e(environment.block_start_string))
            # make sure we don't mistake a block for a variable or a comment
            m = block_diff.match(environment.comment_start_string)
            no_lstrip_re += m and r'|%s' % e(m.group(1)) or ''
            m = block_diff.match(environment.variable_start_string)
            no_lstrip_re += m and r'|%s' % e(m.group(1)) or ''

            # detect overlap between comment and variable strings
            comment_diff = c(r'^%s(.*)' % e(environment.comment_start_string))
            m = comment_diff.match(environment.variable_start_string)
            no_variable_re = m and r'(?!%s)' % e(m.group(1)) or ''

            lstrip_re = r'^[ \t]*'
            block_prefix_re = r'%s%s(?!%s)|%s\+?' % (
                    lstrip_re,
                    e(environment.block_start_string),
                    no_lstrip_re,
                    e(environment.block_start_string),
                    )
            comment_prefix_re = r'%s%s%s|%s\+?' % (
                    lstrip_re,
                    e(environment.comment_start_string),
                    no_variable_re,
                    e(environment.comment_start_string),
                    )
            prefix_re['block'] = block_prefix_re
            prefix_re['comment'] = comment_prefix_re
        else:
            block_prefix_re = '%s' % e(environment.block_start_string)

        self.newline_sequence = environment.newline_sequence
        self.keep_trailing_newline = environment.keep_trailing_newline

        # global lexing rules
        self.rules = {
            'root': [
                # directives
                (c('(.*?)(?:%s)' % '|'.join(
                    [r'(?P<raw_begin>(?:\s*%s\-|%s)\s*raw\s*(?:\-%s\s*|%s))' % (
                        e(environment.block_start_string),
                        block_prefix_re,
                        e(environment.block_end_string),
                        e(environment.block_end_string)
                    )] + [
                        r'(?P<%s_begin>\s*%s\-|%s)' % (n, r, prefix_re.get(n, r))
                        for n, r in root_tag_rules
                    ])), (TOKEN_DATA, '#bygroup'), '#bygroup'),
                # data
                (c('.+'), TOKEN_DATA, None)
            ],
            # comments
            TOKEN_COMMENT_BEGIN: [
                (c(r'(.*?)((?:\-%s\s*|%s)%s)' % (
                    e(environment.comment_end_string),
                    e(environment.comment_end_string),
                    block_suffix_re
                )), (TOKEN_COMMENT, TOKEN_COMMENT_END), '#pop'),
                (c('(.)'), (Failure('Missing end of comment tag'),), None)
            ],
            # blocks
            TOKEN_BLOCK_BEGIN: [
                (c(r'(?:\-%s\s*|%s)%s' % (
                    e(environment.block_end_string),
                    e(environment.block_end_string),
                    block_suffix_re
                )), TOKEN_BLOCK_END, '#pop'),
            ] + tag_rules,
            # variables
            TOKEN_VARIABLE_BEGIN: [
                (c(r'\-%s\s*|%s' % (
                    e(environment.variable_end_string),
                    e(environment.variable_end_string)
                )), TOKEN_VARIABLE_END, '#pop')
            ] + tag_rules,
            # raw block
            TOKEN_RAW_BEGIN: [
                (c(r'(.*?)((?:\s*%s\-|%s)\s*endraw\s*(?:\-%s\s*|%s%s))' % (
                    e(environment.block_start_string),
                    block_prefix_re,
                    e(environment.block_end_string),
                    e(environment.block_end_string),
                    block_suffix_re
                )), (TOKEN_DATA, TOKEN_RAW_END), '#pop'),
                (c('(.)'), (Failure('Missing end of raw directive'),), None)
            ],
            # line statements
            TOKEN_LINESTATEMENT_BEGIN: [
                (c(r'\s*(\n|$)'), TOKEN_LINESTATEMENT_END, '#pop')
            ] + tag_rules,
            # line comments
            TOKEN_LINECOMMENT_BEGIN: [
                (c(r'(.*?)()(?=\n|$)'), (TOKEN_LINECOMMENT,
                 TOKEN_LINECOMMENT_END), '#pop')
            ]
        }

    def _normalize_newlines(self, value):
        """Called for strings and template data to normalize it to unicode."""
        return newline_re.sub(self.newline_sequence, value)

    def tokenize(self, source, name=None, filename=None, state=None):
        """Calls tokeniter and wraps the result in a token stream."""
        stream = self.tokeniter(source, name, filename, state)
        return TokenStream(self.wrap(stream, name, filename), name, filename)

    def wrap(self, stream, name=None, filename=None):
        """This is called with the stream as returned by `tokeniter` and wraps
        every token in a :class:`Token` and converts the value.
        """
        for lineno, token, value in stream:
            if token in ignored_tokens:
                continue
            elif token == 'linestatement_begin':
                token = 'block_begin'
            elif token == 'linestatement_end':
                token = 'block_end'
            # we are not interested in those tokens in the parser
            elif token in ('raw_begin', 'raw_end'):
                continue
            elif token == 'data':
                value = self._normalize_newlines(value)
            elif token == 'keyword':
                token = value
            elif token == 'name':
                value = str(value)
            elif token == 'string':
                # try to unescape string
                try:
                    value = self._normalize_newlines(value[1:-1]) \
                        .encode('ascii', 'backslashreplace') \
                        .decode('unicode-escape')
                except Exception as e:
                    msg = str(e).split(':')[-1].strip()
                    raise TemplateSyntaxError(msg, lineno, name, filename)
                # if we can express it as bytestring (ascii only)
                # we do that for support of semi broken APIs
                # such as datetime.datetime.strftime
                try:
                    value = str(value)
                except UnicodeError:
                    pass
            elif token == 'integer':
                value = int(value)
            elif token == 'float':
                value = float(value)
            elif token == 'operator':
                token = operators[value]
            yield Token(lineno, token, value)
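
    # Editor's note (sketch, not original code): ``tokeniter()`` below yields
    # plain ``(lineno, token, value)`` tuples; ``wrap()`` above converts them
    # into :class:`Token` instances and normalizes values.  For the template
    # ``u'{{ 42 }}'`` the wrapped stream yields approximately:
    #
    #     Token(1, 'variable_begin', u'{{')
    #     Token(1, 'integer', 42)
    #     Token(1, 'variable_end', u'}}')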
    def tokeniter(self, source, name, filename=None, state=None):
        """This method tokenizes the text and returns the tokens in a
        generator.  Use this method if you just want to tokenize a template.
        """
        source = text_type(source)
        lines = source.splitlines()
        if self.keep_trailing_newline and source:
            for newline in ('\r\n', '\r', '\n'):
                if source.endswith(newline):
                    lines.append('')
                    break
        source = '\n'.join(lines)
        pos = 0
        lineno = 1
        stack = ['root']

        if state is not None and state != 'root':
            assert state in ('variable', 'block'), 'invalid state'
            stack.append(state + '_begin')
        else:
            state = 'root'

        statetokens = self.rules[stack[-1]]
        source_length = len(source)

        balancing_stack = []

        while 1:
            # tokenizer loop
            for regex, tokens, new_state in statetokens:
                m = regex.match(source, pos)
                # if no match we try again with the next rule
                if m is None:
                    continue

                # we only match blocks and variables if braces / parentheses
                # are balanced.  continue parsing with the lower rule which
                # is the operator rule.  do this only if the end tags look
                # like operators
                if balancing_stack and \
                   tokens in ('variable_end', 'block_end',
                              'linestatement_end'):
                    continue

                # tuples support more options
                if isinstance(tokens, tuple):
                    for idx, token in enumerate(tokens):
                        # failure group
                        if token.__class__ is Failure:
                            raise token(lineno, filename)
                        # bygroup is a bit more complex, in that case we
                        # yield for the current token the first named
                        # group that matched
                        elif token == '#bygroup':
                            for key, value in iteritems(m.groupdict()):
                                if value is not None:
                                    yield lineno, key, value
                                    lineno += value.count('\n')
                                    break
                            else:
                                raise RuntimeError('%r wanted to resolve '
                                                   'the token dynamically'
                                                   ' but no group matched'
                                                   % regex)
                        # normal group
                        else:
                            data = m.group(idx + 1)
                            if data or token not in ignore_if_empty:
                                yield lineno, token, data
                            lineno += data.count('\n')

                # strings as tokens are yielded as-is
                else:
                    data = m.group()
                    # update brace/parentheses balance
                    if tokens == 'operator':
                        if data == '{':
                            balancing_stack.append('}')
                        elif data == '(':
                            balancing_stack.append(')')
                        elif data == '[':
                            balancing_stack.append(']')
                        elif data in ('}', ')', ']'):
                            if not balancing_stack:
                                raise TemplateSyntaxError("unexpected '%s'" %
                                                          data, lineno, name,
                                                          filename)
                            expected_op = balancing_stack.pop()
                            if expected_op != data:
                                raise TemplateSyntaxError("unexpected '%s', "
                                                          "expected '%s'" %
                                                          (data, expected_op),
                                                          lineno, name,
                                                          filename)
                    # yield items
                    if data or tokens not in ignore_if_empty:
                        yield lineno, tokens, data
                    lineno += data.count('\n')

                # fetch new position into new variable so that we can check
                # if there is an internal parsing error which would result
                # in an infinite loop
                pos2 = m.end()

                # handle state changes
                if new_state is not None:
                    # remove the uppermost state
                    if new_state == '#pop':
                        stack.pop()
                    # resolve the new state by group checking
                    elif new_state == '#bygroup':
                        for key, value in iteritems(m.groupdict()):
                            if value is not None:
                                stack.append(key)
                                break
                        else:
                            raise RuntimeError('%r wanted to resolve the '
                                               'new state dynamically but'
                                               ' no group matched' %
                                               regex)
                    # direct state name given
                    else:
                        stack.append(new_state)
                    statetokens = self.rules[stack[-1]]
                # we are still at the same position and no stack change.
                # this means a loop without break condition, avoid that and
                # raise error
                elif pos2 == pos:
                    raise RuntimeError('%r yielded empty string without '
                                       'stack change' % regex)
                # publish new position and start again
                pos = pos2
                break
            # if loop terminated without break we haven't found a single match
            # either we are at the end of the file or we have a problem
            else:
                # end of text
                if pos >= source_length:
                    return
                # something went wrong
                raise TemplateSyntaxError('unexpected char %r at %d' %
                                          (source[pos], pos), lineno,
                                          name, filename)
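
# Minimal runnable sketch (editor's addition, not part of the original
# module): tokenize a small template through a default environment.  The
# template text and output format here are arbitrary examples.
if __name__ == '__main__':
    from jinja2 import Environment
    env = Environment()
    lexer = get_lexer(env)  # one cached Lexer per delimiter configuration
    for token in lexer.tokenize(u'Hello {{ name }}!'):
        print('%d %s %r' % (token.lineno, token.type, token.value))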