
Python gRPC "13 INTERNAL" error when trying to yield a response

When I print the response, everything seems to be correct, and the type is also correct:

Assertion: True
Response type: <class 'scrape_pb2.ScrapeResponse'>

But in Postman I get "13 INTERNAL" with no additional information:

[Screenshot: Postman showing the "13 INTERNAL" error with no details]

I can't figure out what the issue is, and I can't find out how to log or print the error from the server side.
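For reference, one way to surface errors on the server side is to raise gRPC's own log verbosity and wrap the streaming handler so exceptions are logged and reported with explicit details. A minimal sketch; the logged_stream helper is made up for illustration, and GRPC_TRACE is very noisy:

import os

# gRPC core reads these at import time, so set them (or export them in the
# shell) before `import grpc`
os.environ["GRPC_VERBOSITY"] = "DEBUG"
os.environ["GRPC_TRACE"] = "all"

import logging

import grpc

logging.basicConfig(level=logging.DEBUG)


def logged_stream(responses, context):
    """Yield from a response generator, logging and reporting any exception."""
    try:
        yield from responses
    except Exception:
        logging.exception("streaming handler failed")
        context.abort(grpc.StatusCode.INTERNAL, "scrape failed; see server log")

Inside the ScrapeSearch handler shown below, the response loop would then become yield from logged_stream(scrape_responses, context).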

Relevant proto parts:

syntax = "proto3";

service ScrapeService {
  rpc ScrapeSearch(ScrapeRequest) returns (stream ScrapeResponse) {};
}

message ScrapeRequest {
  string url = 1;
  string keyword = 2;
}

message ScrapeResponse {
  oneof result {
    ScrapeSearchProgress search_progress = 1;
    ScrapeProductsProgress products_progress = 2;
    FoundProducts found_products = 3;
  }
}

message ScrapeSearchProgress {
  int32 page = 1;
  int32 total_products = 2;
  repeated string product_links = 3;
}
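As an aside, the Python stubs for this proto would typically be generated with grpcio-tools; a sketch, assuming the file is saved as scrape.proto:

python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. scrape.proto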

scraper.py

# Imports added for completeness; `options`, `service`, `scrape_search`, and
# `go_to_next_page` are defined elsewhere in scraper.py.
from selenium import webdriver

from scrape_pb2 import ScrapeResponse, ScrapeSearchProgress


def get_all_search_products(search_url: str, class_keyword: str):
    search_driver = webdriver.Firefox(options=options, service=service)
    search_driver.maximize_window()
    search_driver.get(search_url)

    # scrape first page
    product_links = scrape_search(driver=search_driver, class_keyword=class_keyword)
    page = 1
    search_progress = ScrapeSearchProgress(page=page, total_products=len(product_links), product_links=[])
    search_progress.product_links[:] = product_links

    # scrape next pages
    while go_to_next_page(search_driver):
        page += 1
        print(f'Scraping page=>{page}')
        # extend the progress message with only this page's links; extending
        # it with the full accumulated list would duplicate earlier entries
        new_links = scrape_search(driver=search_driver, class_keyword=class_keyword)
        product_links.extend(new_links)
        print(f'Number of products scraped=>{len(product_links)}')

        search_progress.page = page
        search_progress.total_products = len(product_links)
        search_progress.product_links.extend(new_links)

        # TODO: remove this line
        if page == 6:
            break

        search_progress_response = ScrapeResponse(search_progress=search_progress)

        yield search_progress_response

Server:

import scrape_pb2_grpc

from scraper import get_all_search_products


# The base class is presumably the generated servicer from scrape_pb2_grpc;
# the original snippet subclassed its own name.
class ScrapeService(scrape_pb2_grpc.ScrapeServiceServicer):
    def ScrapeSearch(self, request, context):
        print(f"Request received: {request}")
        scrape_responses = get_all_search_products(search_url=request.url, class_keyword=request.keyword)

        for response in scrape_responses:
            print(f"Assertion: {response.HasField('search_progress')}")
            print(f"Response type: {type(response)}")
            yield response
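For completeness, a minimal sketch of wiring this servicer into a running server; the port and worker count are assumptions, and the registration helper follows grpc's generated-code naming convention:

from concurrent import futures

import grpc
import scrape_pb2_grpc


def serve() -> None:
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    # register the servicer defined above
    scrape_pb2_grpc.add_ScrapeServiceServicer_to_server(ScrapeService(), server)
    server.add_insecure_port("[::]:50051")  # port is an assumption
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()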

Turns out it was just an issue with Postman. I set up a Python client and it worked.
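For anyone hitting the same thing, a minimal sketch of such a client, assuming the generated modules are named scrape_pb2/scrape_pb2_grpc; the URL, keyword, and port below are placeholders:

import grpc

import scrape_pb2
import scrape_pb2_grpc


def main() -> None:
    with grpc.insecure_channel("localhost:50051") as channel:
        stub = scrape_pb2_grpc.ScrapeServiceStub(channel)
        request = scrape_pb2.ScrapeRequest(url="https://example.com/search", keyword="product-card")
        # ScrapeSearch is server-streaming, so the call returns an iterator
        for response in stub.ScrapeSearch(request):
            # the oneof field tells us which variant each message carries
            print(response.WhichOneof("result"), response)


if __name__ == "__main__":
    main()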
