I can run a Splunk API call in bash and get back a SID, which I then use to retrieve the results of a Splunk query. The first part of it is below. However, when I convert this to Python using the requests library, I keep getting an SSL CERTIFICATE_VERIFY_FAILED error.
Bash Command
data=$( curl -k -u username:password https://<splunk_endpoint>/services/search/jobs -d 'search=search earliest=-1m index=_internal')
echo $data
Bash Output: 1538748227.228319_D07875A9-FDD6-46E8-BE77-EDF9BD9A73B1
Python requests
import json
import requests
baseurl = 'https://<splunk_endpoint>/services/search/jobs'
headers = {
"Content-Type": "application/json",
}
data = {
'username': 'username',
'password': 'password',
"search": "search earliest=-1m index=_internal",
}
r = requests.get(baseurl, data=json.dumps(data), headers=headers)
print(r.json())
I'm not exactly sure where to put the username and password. Do they belong in data? In headers? Somewhere else? I'm also not sure whether my conversion of the -d flag to the data dictionary is correct. I think it is.
Any thoughts?
The requests library verifies SSL certificates for HTTPS requests. You are most likely using a self-signed Splunk certificate, which fails that verification.
You can skip the check by adding verify=False to the call:
r = requests.get(baseurl, data=json.dumps(data), headers=headers, verify=False)
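Note that verify=False disables certificate validation entirely, so requests will emit an InsecureRequestWarning on every call. A minimal sketch of handling this with standard requests/urllib3 features (the endpoint and credentials below are placeholders from the question; a better long-term fix is pointing verify at your Splunk CA certificate instead):

```python
import requests
import urllib3

# Silence the InsecureRequestWarning that verify=False triggers --
# only do this if you accept the risk on a trusted internal network.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

# A Session lets you set verification and credentials once
# instead of repeating them on every request.
session = requests.Session()
session.verify = False  # or a CA bundle path, e.g. "/path/to/splunk_cacert.pem"
session.auth = ("username", "password")

# r = session.post("https://<splunk_endpoint>/services/search/jobs",
#                  data={"search": "search earliest=-1m index=_internal"})
```

Using a Session also keeps the TCP connection alive across the job-create and result-fetch calls.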
I came across this question while looking for a way to interact with Splunk using Python's requests library. I could not figure out how to build the search payload. Here is my basic code for scheduling a job and obtaining the SID:
import requests
username = 'my_username'
password = 'my_password'
search = {'search':'search earliest=-5m index=_internal'}
r = requests.post('https://splunk-search:8089/services/search/jobs/', auth=(username, password), data=search, verify="/etc/pki/tls/cert.pem")
print(r.text)