
How to make a POST request with the Python requests library?

I can make a POST request to a Web API from Postman using the filters below, but I am unable to make even a simple POST request in Python with the requests library.

First, I am sending a POST request to this URL (http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets) with the following filters applied to the Body in Postman, with the raw and JSON (application/json) options selected.

Filters in Postman

{
  "filter": {
    "filters": [
      {
        "field": "RCA_Assigned_Date",
        "operator": "gte",
        "value": "2017-05-31 00:00:00"
      },
      {
        "field": "RCA_Assigned_Date",
        "operator": "lte",
        "value": "2017-06-04 00:00:00"
      },
      {
        "field": "T_Subcategory",
        "operator": "neq",
        "value": "Temporary Degradation"
      },
      {
        "field": "Issue_Status",
        "operator": "neq",
        "value": "Queued"
      }],
     "logic": "and"
    }
}

The database where the data is stored is Cassandra, and according to the following links (Cassandra not equal operator, Cassandra OR operator, Cassandra Between order by operators), Cassandra does not support the NOT EQUAL TO, OR, or BETWEEN operators, so there is no way I can filter the URL with these operators except with AND.

Second, I am using the following code to apply a simple filter with the requests library.

import requests

payload = {'field': 'T_Subcategory', 'operator': 'neq', 'value': 'Temporary Degradation'}
response = requests.post("http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets", data=payload)

But what I get back is the complete set of tickets instead of only those that are not Temporary Degradation.

Third, the system is actually working, but we are experiencing a delay of 2-3 minutes before the data appears. The logic goes as follows: we have 8 users, and we want to see all the tickets per user that are not Temporary Degradation, so we do:

import json
import urllib.request
from collections import Counter

import dateutil.parser

def get_json():
    if user_name == "user 001":
        with urllib.request.urlopen(
                "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets?user_name=user&001",
                timeout=15) as url:
            complete_data = json.loads(url.read().decode())
    elif user_name == "user 002":
        with urllib.request.urlopen(
                "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets?user_name=user&002",
                timeout=15) as url:
            complete_data = json.loads(url.read().decode())
    return complete_data

def get_tickets_not_temp_degradation(start_date, end_date, complete_data):
    return Counter([k['user_name'] for k in complete_data
                    if start_date < dateutil.parser.parse(k.get('DateTime')) < end_date
                    and k['T_subcategory'] != 'Temporary Degradation'])

Basically, we get the whole set of tickets from the current and the last year, and then we let Python filter the complete set by user. So far there are only 10 users, which means this process is repeated 10 times, so it is no surprise that we see the delay...
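
For comparison, a single fetch followed by one counting pass over every user would look something like this (a sketch only; it assumes the endpoint returns the complete ticket set when the user_name parameter is omitted, which I have not verified):

import json
import urllib.request
from collections import Counter

import dateutil.parser

def count_non_degradation(start_date, end_date):
    # One download instead of one per user; the Counter already
    # groups the tickets by user_name in a single pass.
    url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets"
    with urllib.request.urlopen(url, timeout=15) as resp:
        complete_data = json.loads(resp.read().decode())
    return Counter(k['user_name'] for k in complete_data
                   if start_date < dateutil.parser.parse(k.get('DateTime')) < end_date
                   and k['T_subcategory'] != 'Temporary Degradation')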

My question is: how can I fix this problem with the requests library? I am using the Requests library documentation as a tutorial to make it work, but it seems that my payload is not being read.

Your Postman request sends a JSON body. Just reproduce that same body in Python. Your Python code is not sending JSON, nor is it sending the same data as your Postman sample.

For starters, sending a dictionary via the data argument encodes that dictionary as application/x-www-form-urlencoded form data, not JSON. Secondly, you appear to be sending only a single filter.

The following code replicates your Postman post exactly:

import requests

filters = {"filter": {
    "filters": [{
        "field": "RCA_Assigned_Date",
        "operator": "gte",
        "value": "2017-05-31 00:00:00"
    }, {
        "field": "RCA_Assigned_Date",
        "operator": "lte",
        "value": "2017-06-04 00:00:00"
    }, {
        "field": "T_Subcategory",
        "operator": "neq",
        "value": "Temporary Degradation"
    }, {
        "field": "Issue_Status",
        "operator": "neq",
        "value": "Queued"
    }],
    "logic": "and"
}}

url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets"
response = requests.post(url, json=filters)

Note that filters is a Python data structure here, and that it is passed to the json keyword argument. Using the latter does two things:

  • It encodes the Python data structure to JSON (producing the exact same JSON value as your raw Postman body).
  • It sets the Content-Type header to application/json (as you did in your Postman configuration by picking the JSON option in the dropdown after picking raw for the body); the equivalent hand-rolled call is sketched below.
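
If it helps to see the equivalence, the json=filters call above is roughly the same as doing the encoding and the header by hand (a sketch reusing the filters dict from the block above; requests also handles a few edge cases internally):

import json
import requests

url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets"

# These two calls send essentially the same request:
response = requests.post(url, json=filters)
response = requests.post(url, data=json.dumps(filters),
                         headers={'Content-Type': 'application/json'})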

requests is otherwise just an HTTP API; it can't make Cassandra do any more than any other HTTP library can. The urllib.request.urlopen code sends GET requests, which translate trivially to requests:

def get_json():
    url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets"
    response = requests.get(url, params={'user_name': user}, timeout=15)    
    return response.json()

I removed the if branching and replaced it with the params argument, which translates a dictionary of key-value pairs into a correctly encoded URL query string (passing in the user name as the user_name key).
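
If you want to see the exact URL that params produces, you can prepare a request without sending it (requests.Request and .prepare() are standard requests API):

import requests

url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets"
prepared = requests.Request('GET', url, params={'user_name': 'user 001'}).prepare()
print(prepared.url)
# http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets?user_name=user+001

Note that the space in the user name is encoded as a plus sign.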

Note the json() call on the response; this takes care of decoding the JSON data coming back from the server. This will still take a long time, however, because you are not filtering the Cassandra data much here.
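
If the endpoint also accepts the JSON filter body for user names (I don't know whether it does; the user_name field and the eq operator below are assumptions on my part), you could push the per-user filtering to the server instead of doing it in Python:

import requests

def get_user_tickets(user_name):
    # Sketch only: assumes the endpoint accepts a user_name filter
    # in the same JSON body format as the Postman example above.
    body = {"filter": {
        "filters": [
            {"field": "user_name", "operator": "eq", "value": user_name},
            {"field": "T_Subcategory", "operator": "neq",
             "value": "Temporary Degradation"},
        ],
        "logic": "and",
    }}
    url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets"
    response = requests.post(url, json=body, timeout=15)
    return response.json()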

I would recommend using the json argument instead of data; it handles the JSON encoding for you.

import requests

data = {'user_name':'user&001'}
headers = {'Content-Type': 'application/json', 'Accept': 'application/json'}
url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets/"
r = requests.post(url, headers=headers, json=data)
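
Continuing from the snippet above, you can check what actually went over the wire by inspecting the prepared request that requests attaches to the response (these are standard requests attributes):

print(r.request.headers['Content-Type'])  # application/json
print(r.request.body)                     # the JSON-encoded payload
print(r.status_code)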

Update, answer for question 3: is there a reason you are using urllib? I'd use Python requests for this request as well.

import requests

def get_json():
    r = requests.get("http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets",
                     params={"user_name": user_name.replace(" ", "&")})
    return r.json()

# not sure what you're doing here, more context/code example would help
def get_tickets_not_temp_degradation(start_date, end_date, complete_data):
    return Counter([k['user_name'] for k in complete_data
                    if start_date < dateutil.parser.parse(k.get('DateTime')) < end_date
                    and k['T_subcategory'] != 'Temporary Degradation'])

Also, is the username really supposed to be user+001, and not user&001 or user 001?

I think you can use the requests library as follows:

import requests
import json

payload = {'field': 'T_Subcategory', 'operator': 'neq', 'value': 'Temporary Degradation'}
response = requests.post("http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets",
                         data=json.dumps(payload))

You are sending the user in the URL; send it through POST instead, though it depends on how the endpoints are implemented. You can try the code below:

import requests
from json import dumps

data = {'user_name':'user&001'}
headers = {'Content-Type': 'application/json', 'Accept': 'application/json'}
url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets/"
r = requests.post(url, headers=headers, data=dumps(data))

My experience is the following: I've tried urllib and requests a lot in Python to deal with APIs, and it is just very problematic. Usually GET requests are straightforward, but POST requests are very problematic. Sometimes they don't work (as in your payload case), or worse, they work badly. I recently had a script that passed my tests but failed for some cases in production, until I found out that requests containing special characters (like ã and á in Portuguese) didn't work. :(
At some point I said "to hell with these black-box dependencies". As a matter of fact, one can make any API call without Python or any other scripting language (well, you need bash at least). As a UNIX user, one can use curl for EVERYTHING. It is liberating! No more examples that don't work, no more language versions, no more HTTP libraries. I just send a curl request from the terminal and see if what I want works. Your case would be:

curl -H "Content-Type: application/json" -d '{"field":"T_Subcategory","operator":"neq","value":"Temporary Degradation"}' "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets"

Next, I built the following Python script for GET requests:

import subprocess
import json

header1 = "Content-Type: application/json"   
header2 = "app_token: your_token"
header3 = "access_token: your_token2"

url = "https://your_api"
command = ["curl","-s","-H",header1,"-H",header2,"-H",header3, url ]
raw_response = subprocess.check_output(command)
string_response = raw_response.decode('utf8')
data = json.loads(string_response)

For POST:

import subprocess
import json

header1 = "Content-Type: application/json"   
header2 = "app_token: your_token"
header3 = "access_token: your_token2"

data = {"my":"phyton_dictionary"}
data = json.dumps(data) #this is a json string now

url = "https://your_api"
command = ["curl","-s","-H",header1,"-H",header2,"-H",header3, -d, data, url ]
raw_response = subprocess.check_output(command)
string_response = raw_response.decode('utf8')
data = json.loads(string_response)

It is very easy to turn those two prototypes into Python functions and build a 'library' as you need other calls. You can build the URL parameters by hand if you are dealing with an API that needs that.
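
For instance, the two prototypes above collapse into one helper along these lines (a sketch using the same curl flags as above; the URL and token headers are placeholders):

import json
import subprocess

def curl_json(url, payload=None, headers=()):
    # Call a JSON-over-HTTP API through curl and decode the response.
    command = ["curl", "-s"]
    for header in headers:
        command += ["-H", header]
    if payload is not None:
        command += ["-d", json.dumps(payload)]  # -d makes curl send a POST
    command.append(url)
    raw_response = subprocess.check_output(command)
    return json.loads(raw_response.decode('utf8'))

# usage (placeholder URL and tokens, as in the examples above):
# result = curl_json("https://your_api",
#                    headers=["Content-Type: application/json",
#                             "app_token: your_token",
#                             "access_token: your_token2"])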

So, this answer does not solve your specific problem with requests. It just tells you that I had many of those problems, solved a few in trial-and-error hacking style, and eventually got past them by changing the way I build API scripts: removing every dependency I could until I reached a very clean script. I never had problems again, as long as the docs list the URL I need to reach and the headers I need to send.

Hope it helps.
