
Python GRPC 13 Internal Error when trying to yield response

When I print the response, everything seems correct, and the type is correct as well:

Assertion: True
Response type: <class 'scrape_pb2.ScrapeResponse'>

But in Postman I just get "13 INTERNAL" with no other information:

[Error screenshot]

I don't know where the problem is, nor how to log or print the error from the server side.
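One way to get more detail on the server side is to enable gRPC core debug logging before `grpc` is imported; a minimal sketch (see also the error-surfacing handler sketch after the server code below):

import os

# gRPC core reads these environment variables at startup, so set them before importing grpc
os.environ["GRPC_VERBOSITY"] = "debug"
os.environ["GRPC_TRACE"] = "all"

import logging

import grpc

# Make Python-level grpc logger output visible as well
logging.basicConfig(level=logging.DEBUG)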

The relevant part of the proto:

syntax = "proto3";

service ScrapeService {
  rpc ScrapeSearch(ScrapeRequest) returns (stream ScrapeResponse) {};

}

message ScrapeRequest {
  string url = 1;
  string keyword = 2;
}

message ScrapeResponse {
  oneof result {
    ScrapeSearchProgress search_progress = 1;
    ScrapeProductsProgress products_progress = 2;
    FoundProducts found_products = 3;
  }
}


message ScrapeSearchProgress {
  int32 page = 1;
  int32 total_products = 2;
  repeated string product_links = 3;

}
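For reference, the scrape_pb2 / scrape_pb2_grpc modules used below are generated from this definition with grpcio-tools; a sketch, assuming the file is named scrape.proto (an assumption matching the scrape_pb2 module name):

from grpc_tools import protoc

# Regenerates scrape_pb2.py and scrape_pb2_grpc.py in the current directory
protoc.main([
    "grpc_tools.protoc",
    "-I.",
    "--python_out=.",
    "--grpc_python_out=.",
    "scrape.proto",
])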

scraper.py

from selenium import webdriver

from scrape_pb2 import ScrapeResponse, ScrapeSearchProgress

# `options`, `service`, `scrape_search` and `go_to_next_page` are defined elsewhere in scraper.py

def get_all_search_products(search_url: str, class_keyword: str):
    search_driver = webdriver.Firefox(options=options, service=service)
    search_driver.maximize_window()
    search_driver.get(search_url)

    # scrape first page
    product_links = scrape_search(driver=search_driver, class_keyword=class_keyword)
    page = 1
    search_progress = ScrapeSearchProgress(page=page, total_products=len(product_links), product_links=[])
    search_progress.product_links[:] = product_links

    # scrape the following pages, yielding one progress message per page
    while go_to_next_page(search_driver):
        page += 1
        print(f'Scraping page=>{page}')
        new_links = scrape_search(driver=search_driver, class_keyword=class_keyword)
        product_links.extend(new_links)
        print(f'Number of products scraped=>{len(product_links)}')

        # extend only with this page's links, otherwise earlier links get duplicated
        search_progress.page = page
        search_progress.total_products = len(product_links)
        search_progress.product_links.extend(new_links)

        # TODO: remove this line
        if page == 6:
            break

        search_progress_response = ScrapeResponse(search_progress=search_progress)

        yield search_progress_response

Server:

import scrape_pb2_grpc

# assuming the generator above lives in scraper.py
from scraper import get_all_search_products

# the base class is the servicer grpcio generates for ScrapeService
class ScrapeService(scrape_pb2_grpc.ScrapeServiceServicer):
    def ScrapeSearch(self, request, context):
        print(f"Request received: {request}")
        scrape_responses = get_all_search_products(search_url=request.url, class_keyword=request.keyword)

        # server-streaming RPC: yield each ScrapeResponse as it is produced
        for response in scrape_responses:
            print(f"Assertion: {response.HasField('search_progress')}")
            print(f"Response type: {type(response)}")
            yield response
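To surface the underlying exception instead of a bare "13 INTERNAL", the handler body can be wrapped in try/except and the traceback attached to the RPC context. A minimal sketch of that variant (module names as above):

import traceback

import grpc
import scrape_pb2_grpc
from scraper import get_all_search_products

class ScrapeService(scrape_pb2_grpc.ScrapeServiceServicer):
    def ScrapeSearch(self, request, context):
        try:
            for response in get_all_search_products(search_url=request.url,
                                                    class_keyword=request.keyword):
                yield response
        except Exception:
            # Attach the traceback to the status so the client sees more than code 13
            context.set_code(grpc.StatusCode.INTERNAL)
            context.set_details(traceback.format_exc())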

It turned out to be just a Postman issue. I set up a Python client and it worked.
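A minimal Python client for this streaming RPC might look like the following (a sketch; the channel address and request values are placeholders):

import grpc

import scrape_pb2
import scrape_pb2_grpc

with grpc.insecure_channel("localhost:50051") as channel:
    stub = scrape_pb2_grpc.ScrapeServiceStub(channel)
    request = scrape_pb2.ScrapeRequest(url="https://example.com/search", keyword="product-card")
    # Server-streaming call: iterate over the responses as they arrive
    for response in stub.ScrapeSearch(request):
        if response.HasField("search_progress"):
            progress = response.search_progress
            print(f"page={progress.page} total_products={progress.total_products}")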
