
Read and Write HID device with PyUSB (not HIDAPI)

I would like to implement the read and write calls of the python hidapi in PyUSB.

Example code using the python hidapi looks like this:

import hid

h = hid.device()
h.open_path( path )    # open_path() returns None; keep using the device object

h.write( send_buffer )

res = h.read( 64 )    
receive_buffer = bytearray( res )

The main problem I have with this is that the python hidapi read() returns a list of ints (one Python int per byte received from the hardware), whereas I need the buffer as bytes, faithful to what was received. (*)

A secondary issue is that open, read and write are the only things I need and I need to keep the system as light as possible. Therefore I want to avoid the extra dependencies.

(*) bytearray() is not a good solution in this case, for reasons beyond the scope of this question.
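For reference, the list of ints that hidapi's read() returns can also be converted with the bytes() constructor, which accepts any iterable of ints in the range 0-255 and yields an immutable bytes object (the sample values below are illustrative, not real device output):

```python
# Shape of what hidapi's read() returns: a list of Python ints,
# here spelling out a hypothetical zero-terminated "version" reply.
res = [0x76, 0x65, 0x72, 0x73, 0x69, 0x6F, 0x6E, 0x00]

# bytes() copies the values into an immutable bytes object,
# faithful to the received data:
buf = bytes(res)
assert buf == b'version\x00'
```

Whether this is acceptable depends on the same constraints that rule out bytearray() in the question, so it is shown only for comparison.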

Here is a minimal code example that seems to work. The HID device is a Teensy board that uses RawHID.recv() and RawHID.send() to exchange text and binary data with the host computer.

#!/usr/bin/python

import usb.core
import usb.util

dev = usb.core.find(idVendor=0x16C0, idProduct=0x0486)
if dev is None:
    raise ValueError('Teensy RawHID device not found')
try:
    dev.reset()
except Exception as e:
    print( 'reset', e)

if dev.is_kernel_driver_active(0):
    print( 'detaching kernel driver')
    dev.detach_kernel_driver(0)

# First configuration, interface (0,0): endpoint 0 is IN, endpoint 1 is OUT
endpoint_in = dev[0][(0,0)][0]
endpoint_out = dev[0][(0,0)][1]

# Send a command to the Teensy
endpoint_out.write( "version".encode() + bytes([0]) )

# Read the response; read() returns an array('B'), and .tobytes() gives us bytes.
buffer = dev.read(endpoint_in.bEndpointAddress, 64, 1000).tobytes()

# Decode and print the zero terminated string response
n = buffer.index(0)
print( buffer[:n].decode() )
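The zero-terminated decode step at the end can be factored into a small helper that also copes with a missing terminator (which would make buffer.index(0) raise ValueError). This is pure Python with no hardware needed; the sample report contents are illustrative:

```python
def decode_cstring(buffer: bytes) -> str:
    """Decode a zero-terminated ASCII response from a HID report.

    Falls back to decoding the whole buffer if no NUL byte is present.
    """
    n = buffer.find(0)          # index of the first NUL byte, or -1
    if n == -1:
        n = len(buffer)
    return buffer[:n].decode()

# Example with a buffer shaped like a 64-byte RawHID report:
report = b"1.2.3" + bytes(59)   # "1.2.3" followed by zero padding
assert decode_cstring(report) == "1.2.3"
```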

I've done some quick benchmarking, and it seems that DrM's answer was definitely heading in the right direction by working on array objects, but there is a slightly better option for the conversion. Results below are for 10 million iterations operating on 64-byte data buffers.

Using

from array import array

data_list = [30] * 64
data_array = array('B', data_list)

I got the following run times in seconds:

Technique                                           Time (seconds, 10 million iterations)
bytearray(data_list)                                12.7
bytearray(data_array)                                3.0
data_array.tobytes()                                 2.0
struct.pack('%uB' % len(data_list), *data_list)     18.6
struct.pack('%uB' % len(data_array), *data_array)   22.5

It appears that using the array.tobytes method is the fastest, followed by calling bytearray with the array as the argument.

Obviously I reused the same buffer on each iteration, probably among other unrealistic factors, so YMMV. These results should be indicative relative to each other, even if not in absolute terms. Also, this obviously doesn't account for the performance of then working on a bytearray versus bytes.
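The comparison above can be reproduced with a short timeit harness. This is a sketch, with the iteration count lowered from the original 10 million so it runs quickly; absolute times will differ by machine:

```python
import timeit
from array import array

data_list = [30] * 64
data_array = array('B', data_list)

n = 100_000  # far fewer iterations than the original 10 million

candidates = {
    "bytearray(data_list)":  lambda: bytearray(data_list),
    "bytearray(data_array)": lambda: bytearray(data_array),
    "data_array.tobytes()":  lambda: data_array.tobytes(),
}

for name, fn in candidates.items():
    t = timeit.timeit(fn, number=n)
    print(f"{name:25s} {t:.3f}s")
```

All three conversions produce equivalent byte values, so the choice is purely about speed and whether a mutable bytearray or immutable bytes is needed downstream.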
