
Why Garbage Collect in web apps?

Consider building a web app on a platform where every request is handled by a user-level thread (ULT): a green thread, Erlang process, goroutine, or any other lightweight thread. Assume every request is stateless, and that resources like DB connections are obtained at startup of the app and shared between these threads. What is the need for garbage collection in these threads?

Generally such a thread is short-running (a few milliseconds) and, if well designed, doesn't use more than a few KB or MB of memory. If garbage collection of the resources allocated in a thread were done at the exit of the thread, independently of the other threads, then there would be no GC pauses even for the 98th or 99th percentile of requests. All requests would be answered in predictable time.
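In CPython, for example, reference counting already gives much of this behavior for request-local objects: anything allocated inside a handler and not shared elsewhere is reclaimed the moment the handler returns, with no collector pause. A minimal sketch of that lifetime (RequestScratch and handle_request are hypothetical names):

```python
import weakref

class RequestScratch:
    """Stand-in for per-request working data (hypothetical)."""
    pass

def handle_request():
    # All objects allocated here are private to this request.
    scratch = RequestScratch()
    ref = weakref.ref(scratch)   # observe the object's lifetime
    # ... do some request-local work with scratch ...
    return ref                   # scratch's refcount drops to 0 on return

ref = handle_request()
# In CPython the object is freed immediately by reference counting;
# the weak reference is now dead.
assert ref() is None
```

The catch is reference cycles: those are exactly what refcounting cannot reclaim, which is why CPython still runs a cyclic collector on top, and why a pure free-everything-at-thread-exit scheme is not the whole story.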

What is the problem with such a model, and why is it not widely used?

Your assumption might not be true:

if well designed doesn't use more than a few (KB or MB) of memory

Imagine a function, used in a web app, that counts the words in a text file. A naive implementation could be:

def count_words(text):
    # Splitting allocates a new string object for every word in the text.
    words = text.split()
    count = {}
    for w in words:
        if w in count:
            count[w] += 1
        else:
            count[w] = 1
    return count

It allocates more memory than the text itself occupies: text.split() builds a list holding a new string object for every word, and the count dictionary adds further overhead on top of that.
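You can check this claim directly. A rough accounting (exact byte counts vary by Python version) compares the footprint of the intermediate word list against the original string:

```python
import sys

text = "the quick brown fox jumps over the lazy dog " * 1000
words = text.split()  # one new str object per word

# Approximate footprint of the word list: the list itself plus every string in it.
list_bytes = sys.getsizeof(words) + sum(sys.getsizeof(w) for w in words)
text_bytes = sys.getsizeof(text)

# The intermediate list alone dwarfs the input string,
# before the count dictionary is even built.
assert list_bytes > text_bytes
```

Each short string carries roughly 50 bytes of object header on top of its characters, so a text of many small words can easily inflate to several times its own size during the call.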

