
Keeping track of clients across multiple servers

For the past few months I've been designing a chat system for fun, but I haven't been able to find much on the load-balancing side of it.

So far, my architecture consists of a WebSocket server (though for simplicity the WebSocket layer isn't part of this question), a MySQL database that stores user accounts and chat information, and a PHP-based website running on Nginx.

I've thought about using memcached to keep a list of chats, each holding references to its connected clients, but I'm not sure how to use a messaging/queue system (Redis?) to tell the other connected clients when a message is sent or a user joins/quits.
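
For the notify-other-clients part, Redis pub/sub seems like a natural fit. Below is a minimal sketch of what I mean, assuming redis-py and a one-channel-per-chat scheme ("chat:&lt;name&gt;"); the function names and payload format are made up for illustration:

import json
import redis

# Minimal Redis pub/sub sketch: one channel per chat room ("chat:<name>").
# Any server process can publish; every subscribed process receives the
# message and can fan it out to the clients it owns.
memory = redis.StrictRedis(host = "127.0.0.1", port = 6379)

def broadcast(chat_name, sender_id, text):
    # publish() returns how many subscribers received the message.
    payload = json.dumps({"from": sender_id, "data": text})
    return memory.publish("chat:" + chat_name, payload)

def listen(chat_name):
    # Each server would run a loop like this on its own thread/process.
    pubsub = memory.pubsub()
    pubsub.subscribe("chat:" + chat_name)
    for message in pubsub.listen():
        if message["type"] == "message":
            event = json.loads(message["data"].decode("utf-8"))
            # ...look up the local sockets in this chat and write to them...
            print("would deliver:", event)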

Finally, this concurrency problem has other potential pitfalls. Namely, should I abstract the processing layer away from the socket layer, and inside the processing layer not worry about whether other clients disconnect during processing? Or should I leave that to the socket layer?

In my memcached example, I could store all of the relevant client information in that ramdisk and request/update it as I see fit. Is that an acceptable way to do it?
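
For concreteness, here is a minimal sketch of that memcached approach, using the pymemcache client and a made-up "server0:client-&lt;fileno&gt;" key scheme (the same scheme my Redis test server below uses):

import json
from pymemcache.client.base import Client

# Client records are JSON blobs keyed by "server0:client-<fileno>".
# Assumes a memcached instance on the default local port.
cache = Client(("127.0.0.1", 11211))

def save_client(fileno, record):
    cache.set("server0:client-" + str(fileno), json.dumps(record))

def load_client(fileno):
    raw = cache.get("server0:client-" + str(fileno))  # bytes or None
    return json.loads(raw.decode("utf-8")) if raw is not None else None

save_client(417, {"__id__": 417, "chat": "lobby"})
print(load_client(417))  # -> {'__id__': 417, 'chat': 'lobby'}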

Ideally, I'd like to read some material and figure out how to do this on my own rather than just get an answer from someone here, and I hope to use this as a lesson in scalability for when I redesign something similar in the future.

Here is the test server I made:

import multiprocessing
import socket, select
import redis, json

'''
'' The basic idea of this server is to have cores-1 processes running to munch data
'' and one "master" process handling all of the client connections and what not.
''
'' Scalability is simple, treat any other process as another server and there will be
'' no cross-coding required '''

'''
'' Sending data
''  master_pipe.send( {"__id__": "SEND_DATA", "fileno": "417", "data": "3025561204"} )
''
'' Closing a socket
''  master_pipe.send( {"__id__": "CLOSE_SOCKET", "fileno": 417} ) '''
def Worker(worker_index, worker_queue, master_pipe):
    memory = redis.StrictRedis(host = "127.0.0.1", port = 6379)

    # Block on the queue for (fileno, data) jobs; a None sentinel shuts the worker down.
    for client_id, *args in iter(worker_queue.get, None):
        raw = memory.get("server0:client-" + str(client_id))
        if raw is None:
            # The client record is gone (e.g. the socket closed mid-processing).
            continue
        client = json.loads(raw.decode("utf-8"))

        if args[0][:5] == "join:":
            client["chat"] = str(args[0][5:])
            memory.set("server0:client-" + str(client_id), json.dumps(client).encode("utf-8", "ignore"))
            memory.lpush("chat:" + str(args[0][5:]), client["__id__"])

        elif args[0][:7] == "online:":
            if "chat" in client:
                print(memory.lrange("chat:" + client["chat"], 0, -1))


def Master(master_pipe, workers):
    memory = redis.Redis(host = "127.0.0.1", port = 6379)
    memory.delete("clients")

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True)
    server.bind(("0.0.0.0", 7777))
    server.listen(socket.SOMAXCONN)

    epoll = select.epoll()
    # Watch both the listening socket and the worker->master pipe for readable events.
    epoll.register(server.fileno(), select.EPOLLIN)
    epoll.register(master_pipe.fileno(), select.EPOLLIN)

    sockets, i = {}, 0
    while True:
        for fileno, e_bits in epoll.poll():
            try:
                if fileno == server.fileno():
                    # New connection: register it with epoll and give it a Redis record.
                    sock, (addr, port) = server.accept()
                    sockets[sock.fileno()] = sock
                    epoll.register(sock.fileno(), select.EPOLLIN | select.EPOLLHUP)

                    client_object = {"__id__": sock.fileno()}
                    memory.set("server0:client-" + str(sock.fileno()), json.dumps(client_object).encode("utf-8", "ignore"))

                elif fileno == master_pipe.fileno():
                    # Messages from the workers (SEND_DATA / CLOSE_SOCKET, as documented above).
                    print(master_pipe.recv())

                elif e_bits & select.EPOLLIN:
                    recv = sockets[fileno].recv(1024).decode("utf-8", "ignore").rstrip("\r\n")
                    if not recv:
                        # An empty read means the peer closed the connection.
                        raise socket.error
                    if recv == "asdasdasd":  # debug hook: print the current connection count
                        print(len(sockets))
                    # Round-robin the job to a worker process.
                    workers[i % len(workers)].put( (fileno, recv, ) )

            except socket.error:
                sockets[fileno].close()
                del sockets[fileno]

                raw = memory.get("server0:client-" + str(fileno))
                client = json.loads(raw.decode("utf-8")) if raw is not None else None

                if client:
                    if "chat" in client:
                        # count=0 removes every occurrence of the id (redis-py 3.x argument order).
                        memory.lrem("chat:" + client["chat"], 0, client["__id__"])

                    memory.delete("server0:client-" + str(fileno))

            finally:
                i += 1


if __name__ == "__main__":
    workers = []
    master_pipe, worker_pipe = multiprocessing.Pipe()

    # Spawn cores-1 workers (at least one), each with its own job queue.
    for i in range( max(1, multiprocessing.cpu_count() - 1) ):
        workers.append(multiprocessing.Queue())

        p = multiprocessing.Process(target = Worker, args = (i, workers[-1], worker_pipe, ))
        p.daemon = True
        p.start()

    Master(master_pipe, workers)
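
To poke at it manually: the worker understands two line-based commands, "join:&lt;chat&gt;" and "online:". A quick throwaway client (assuming the server is running locally on port 7777):

import socket, time

# The test server does no message framing, so pause between sends to keep
# the two commands from coalescing into a single recv() on the server side.
client = socket.create_connection(("127.0.0.1", 7777))
client.sendall(b"join:lobby\r\n")   # join the "lobby" chat
time.sleep(0.5)
client.sendall(b"online:\r\n")      # worker prints the chat's member list
time.sleep(0.5)
client.close()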
