python: rest api with custom query parameters

I want to create an API which is responsible for getting data from the database.

My requirement:

GET /products?query=EQUAL(product_id,"56789")
GET /products?query=AND(EQUAL(product_id,"56789"),EQUAL(product_id,"2236"))
GET /products?query=OR(EQUAL(product_id,"56789"),EQUAL(product_id,"369"))

My API query parameters contain operators like EQUAL, OR, AND, etc. I don't know how to parse these custom operators.

Can anyone suggest a solution for parsing these operators?

Frankly, I would use a normal SQL query instead of custom operators.

But for custom operators I would use a parser generator such as SLY or PLY.

Both were created by David Beazley. PLY is the older one; SLY is newer and uses classes.


It is hard to describe all the code, but the idea is this:

The lexer recognizes the tokens in the string: EQUAL, AND, OR, (, ), ,, variables, and strings (values).

The parser checks the grammar - it checks whether the tokens appear in the correct order. It also converts the expression to an SQL condition.

from sly import Lexer, Parser

class CalcLexer(Lexer):
    tokens = {EQUAL, AND, OR, VARIABLE, STRING} #, NUMBER}

    EQUAL    = r'EQUAL'  # has to be before VARIABLE
    AND      = r'AND'    # has to be before VARIABLE
    OR       = r'OR'     # has to be before VARIABLE
    VARIABLE = r'[a-zA-Z][a-zA-Z0-9_]*'
    STRING   = r'"[^"]*"'
    #NUMBER  = r'\d+'

    #VARIABLE['EQUAL'] = EQUAL  # if there is no EQUAL before VARIABLE
    #VARIABLE['AND']   = AND    # if there is no AND before VARIABLE
    #VARIABLE['OR']    = OR     # if there is no OR before VARIABLE

    literals = { '(', ')', ',' } 
    ignore = ' \t'

    def error(self, t):
        # SLY passes the remaining input in t.value
        print(f"Illegal character '{t.value[0]}'")
        self.index += 1  # skip the offending character

class CalcParser(Parser):
    tokens = CalcLexer.tokens

    #def __init__(self):
        #self.variables = {}

    @_('EQUAL "(" VARIABLE "," STRING ")"')
    def statement(self, p):
        print('[PARSER]', p.VARIABLE, '=', p.STRING)
        #self.variables[p.VARIABLE] = p.STRING
        return f'{p.VARIABLE} = {p.STRING}'

    #@_('EQUAL "(" VARIABLE "," NUMBER ")"')
    #def statement(self, p):
    #    print('[PARSER]', p.VARIABLE, '=', p.NUMBER)
    #    #self.variables[p.VARIABLE] = p.NUMBER
    #    return f'{p.VARIABLE} = {p.NUMBER}'

    #@_('EQUAL "(" VARIABLE "," VARIABLE ")"')
    #def statement(self, p):
    #    print('[PARSER]', p.VARIABLE0, '=', p.VARIABLE1)
    #    #self.variables[p.VARIABLE0] = p.VARIABLE1
    #    return f'{p.VARIABLE0} = {p.VARIABLE1}'

    @_('AND "(" statement "," statement ")"')
    def statement(self, p):
        print('[PARSER]', p.statement0, 'AND', p.statement1)
        return f'({p.statement0}) AND ({p.statement1})'

    @_('OR "(" statement "," statement ")"')
    def statement(self, p):
        print('[PARSER]', p.statement0, 'OR', p.statement1)
        return f'({p.statement0}) OR ({p.statement1})'

#--------------------------------------------------------------------

examples = [
    'EQUAL(product_id,"56789")',
    'AND(EQUAL(product_id,"56789"),EQUAL(product_id,"2236"))',
    'OR(EQUAL(product_id,"56789"),EQUAL(product_id,"369"))',

    # nested
    'OR(EQUAL(product_id,"2236"),AND(EQUAL(product_id,"56789"),EQUAL(product_id,"2236")))',
    'OR(AND(EQUAL(product_id,"56789"),EQUAL(product_id,"2236")),EQUAL(product_id,"2236"))',
    'OR(AND(EQUAL(product_id,"56789"),EQUAL(product_id,"2236")),AND(EQUAL(product_id,"56789"),EQUAL(product_id,"2236")))',

    # with spaces
    'EQUAL  (  product_id ,  "56789" )  ',
    
    #'EQUAL(product_id,"56789")',  # VARIABLE,STRING
    #'EQUAL(product_id,56789)',    # VARIABLE,NUMBER
    #'EQUAL(product_id,other_id)', # VARIABLE,VARIABLE
]

if __name__ == '__main__':
    lexer  = CalcLexer()
    parser = CalcParser()
    for text in examples:
        print('---')
        print('text:', text)
        
        for tok in lexer.tokenize(text):
             print(f'[LEXER] type={tok.type!r}, value={tok.value!r}')
             
        print('result:', parser.parse(lexer.tokenize(text)))
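One practical detail when this is used behind `GET /products?query=...`: the expression contains characters such as `"`, `(`, `)` and `,`, so the client has to URL-encode it, and the server has to recover the raw text before handing it to the lexer. A minimal stdlib-only sketch (the route and parameter name `query` are taken from the question; the web framework itself is left out):

```python
from urllib.parse import quote, urlsplit, parse_qs

# Client side: URL-encode the expression before putting it in the query string.
expr = 'AND(EQUAL(product_id,"56789"),EQUAL(product_id,"2236"))'
url = '/products?query=' + quote(expr)
print(url)  # quotes, parens and commas are percent-encoded

# Server side: recover the raw expression before lexing/parsing.
# Most frameworks (Flask's request.args, Django's request.GET) do this
# decoding for you; this shows what happens underneath.
raw = parse_qs(urlsplit(url).query)['query'][0]
assert raw == expr
```

After this step, `raw` is exactly the kind of string the `lexer.tokenize(...)` / `parser.parse(...)` pipeline above expects.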
