Fastest way to write to a log in python

I am using the gevent loop in uWSGI and I write to a redis queue. I get about 3.5 qps. On occasion, there will be an issue with the redis connection, so if the write fails, I write to a file instead and have a separate process do cleanup later. Because my app is very latency-sensitive, what is the fastest way to dump to disk in Python? Will python logging suffice?
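The failure path I have in mind looks roughly like this (a minimal sketch assuming the redis-py client; the queue key and fallback file name are placeholders):

import redis

r = redis.Redis(host='localhost', port=6379)

def enqueue(payload):
    try:
        # normal path: push onto the redis queue
        r.lpush('work_queue', payload)
    except redis.RedisError:
        # redis is unreachable: append to a local file so a separate
        # process can replay the entries later
        with open('redis_fallback.log', 'a') as f:
            f.write(payload + '\n')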

If latency is a crucial factor for your app, writing to disk could make things really bad.

If you want to survive a reboot of your server while redis is still down, I see no solution other than writing to disk; otherwise you may want to try a ramdisk.

Are you sure that a second server with a second redis instance would not be a better choice?

Regarding logging, I would simply use low-level I/O functions, as they have less overhead (even if we are talking about very few machine cycles).
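For example, a minimal sketch using the os module directly (the file name is a placeholder; on Linux you could also point it at a tmpfs mount such as /dev/shm, which doubles as the ramdisk approach mentioned above):

import os

# Open once at startup; O_APPEND makes each write an atomic append.
fd = os.open('fallback.log', os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)

def log_line(line):
    # os.write skips Python's buffered file objects and writes
    # straight to the file descriptor
    os.write(fd, (line + '\n').encode('utf-8'))

log_line('something happened')
os.close(fd)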

Appending to a file on disk is fast.

:~$ time echo "this happened" >> test.file

real    0m0.000s
user    0m0.000s
sys     0m0.000s

Appending to a file with Python seems to be about the same order of magnitude as bash. The logging module does seem to add a little bit of overhead:

import logging
import time

logging.basicConfig(filename='output_from_logging')

# time three plain file appends
for _ in range(3):
    start = time.time()
    with open('test_python_append', 'a') as f:
        f.write('something happened')
    print('file append took ', time.time() - start)

# time three appends through the logging module
for _ in range(3):
    start = time.time()
    logging.warning('something happened')
    print('logging append took ', time.time() - start)

Which gives us this output:

file append took  0.000263929367065
file append took  6.79492950439e-05
file append took  5.41210174561e-05
logging append took  0.000214815139771
logging append took  0.0001220703125
logging append took  0.00010085105896

But in the grand scheme of things I doubt that this operation will be a very costly part of your code base, and it is probably not worth worrying about. If you are worried about latency then you should profile your code with python -m cProfile code_to_test.py. That will tell you how long each of your functions takes and where your application is spending its time. I seriously doubt the answer will mainly be in logging errors.
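If running the profiler from the command line is awkward inside a uWSGI worker, cProfile can also be driven programmatically; a minimal sketch (handle_request is a stand-in for your real handler):

import cProfile
import logging
import pstats

def handle_request():
    # stand-in for the real request handler
    logging.warning('something happened')

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# show the ten most expensive calls by cumulative time
pstats.Stats(profiler).sort_stats('cumulative').print_stats(10)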
