
Apache Storm address already in use error

From time to time I get an error like this in a cycle:

2022-11-07 06:34:30.977 o.a.s.m.n.Server main [INFO] Create Netty Server Netty-server-localhost-6704, buffer_size: 5242880, maxWorkers: 1
2022-11-07 06:34:31.566 o.a.s.u.Utils main [ERROR] Received error in thread main.. terminating worker...
java.lang.Error: java.security.PrivilegedActionException: java.net.BindException: Address already in use
    at org.apache.storm.utils.Utils.handleUncaughtException(Utils.java:663) ~[storm-client-2.4.0.jar:2.4.0]
    at org.apache.storm.utils.Utils.handleWorkerUncaughtException(Utils.java:671) ~[storm-client-2.4.0.jar:2.4.0]
    at org.apache.storm.utils.Utils.lambda$createWorkerUncaughtExceptionHandler$3(Utils.java:1058) ~[storm-client-2.4.0.jar:2.4.0]
    at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1055) [?:?]
    at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1050) [?:?]
    at java.lang.Thread.dispatchUncaughtException(Thread.java:2002) [?:?]

Storm is trying to set up a new topology, but it can't.

I have been running into this for a long time after deploying to Storm. At one point I was able to fix it by adding a large timeout (about 300 seconds) between removing the old topology and submitting the new one, and by increasing the workerShutdownSleepSecs config. With that, Storm was able to delete all the blobs and work correctly, because I noticed in the logs that Storm needs some time to delete everything, even after the topology has been completely removed.
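
For reference, the Storm-side keys I mean are the same ones that show up in my full storm.yaml below; the ~300 second pause between killing the old topology and submitting the new one is something we do on the deployment side, not in Storm config. A minimal sketch (the values are just the ones I ended up with, not recommendations):

# give workers extra time to shut down before the supervisor force-kills them
supervisor.worker.shutdown.sleep.secs: 60
# keep topology blobs around a bit longer after a kill so cleanup can finish (value in ms)
nimbus.topology.blobstore.deletion.delay.ms: 120000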

But now I am facing it again, in a smaller QA environment, with all of the above already in place. Does anyone know what else could be causing it?

Also, on the supervisor machine I checked the storm data folder, and in the "/storm/workers" folder I found some old directories with ids in their names, so I assume Storm once again did not clean up after the old topology.

I think this must be something quite common, because I have noticed that Storm fails after almost everything it tries to do on its own, so I guess someone has already run into it.

My storm.yaml (the storm.scheduler: "org.apache.storm.scheduler.resource.ResourceAwareScheduler" setting is there only for testing, but I don't think it affects anything):

storm.zookeeper.servers:
  - storm-nimbus-cloud-qa1
  - storm-nimbus-cloud-qa2
  - storm-nimbus-cloud-qa3

nimbus.seeds: ["storm-nimbus-cloud-qa1", "storm-nimbus-cloud-qa2", "storm-nimbus-cloud-qa3"]
storm.local.dir: /data/ansible/storm
supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703
  - 6704

storm.log.dir: "/data/ansible/storm_logging"

nimbus.childopts: "-Xmx512m -Djava.net.preferIPv4Stack=true"

ui.childopts: "-Xmx512m -Djava.net.preferIPv4Stack=true"
ui.port: 8080

supervisor.childopts: "-Xmx512m -Djava.net.preferIPv4Stack=true"
supervisor.cpu.capacity: 200.0
supervisor.memory.capacity.mb: 3072.0

worker.childopts: "-Djava.net.preferIPv4Stack=true"
worker.heap.memory.mb: 512

topology.component.cpu.pcore.percent: 5.0

blacklist.scheduler.assume.supervisor.bad.based.on.bad.slot: false
nimbus.topology.blobstore.deletion.delay.ms: 120000
supervisor.worker.shutdown.sleep.secs: 60
scheduler.display.resource: true
storm.scheduler: "org.apache.storm.scheduler.resource.ResourceAwareScheduler"

logviewer.cleanup.interval.secs: 3600
logviewer.max.per.worker.logs.size.mb: 512
logviewer.max.sum.worker.logs.size.mb: 2560
logviewer.cleanup.age.mins: 20160

storm.messaging.netty.max_retries: 300
storm.messaging.netty.max_wait_ms: 10000
storm.messaging.netty.min_wait_ms: 1000

I also checked the supervisor and worker logs, and this is all I found. All topology-related entries in supervisor.log:

    Line  6493: 2022-11-04 11:09:55.880 o.a.s.d.s.BasicContainer SLOT_6704 [INFO] Created Worker ID 7e1e50ed-0fba-4d8b-8c62-301edfaf32b0
    Line  6494: 2022-11-04 11:09:55.880 o.a.s.d.s.Container SLOT_6704 [INFO] Setting up 6a061042-8ce3-4b65-ab1b-46fd67a63093-172.23.16.27:7e1e50ed-0fba-4d8b-8c62-301edfaf32b0
    Line  6495: 2022-11-04 11:09:55.881 o.a.s.d.s.Container SLOT_6704 [INFO] GET worker-user for 7e1e50ed-0fba-4d8b-8c62-301edfaf32b0
    Line  6496: 2022-11-04 11:09:55.882 o.a.s.d.s.Container SLOT_6704 [INFO] SET worker-user 7e1e50ed-0fba-4d8b-8c62-301edfaf32b0 stormadmin
    Line  6497: 2022-11-04 11:09:55.882 o.a.s.d.s.Container SLOT_6704 [INFO] Creating symlinks for worker-id: 7e1e50ed-0fba-4d8b-8c62-301edfaf32b0 storm-id: EventHandler-17-1667560186 for files(1): [resources]
    Line  6498: 2022-11-04 11:09:55.882 o.a.s.d.s.BasicContainer SLOT_6704 [INFO] Launching worker with assignment LocalAssignment(topology_id:EventHandler-17-1667560186, executors:[ExecutorInfo(task_start:4, task_end:4)], resources:WorkerResources(mem_on_heap:128.0, mem_off_heap:0.0, cpu:5.0, shared_mem_on_heap:0.0, shared_mem_off_heap:0.0, resources:{offheap.memory.mb=0.0, onheap.memory.mb=128.0, cpu.pcore.percent=5.0}, shared_resources:{}), owner:stormadmin) for this supervisor 6a061042-8ce3-4b65-ab1b-46fd67a63093-172.23.16.27 on port 6704 with id 7e1e50ed-0fba-4d8b-8c62-301edfaf32b0
    Line  6499: 2022-11-04 11:09:55.883 o.a.s.d.s.BasicContainer SLOT_6704 [INFO] Launching worker with command: 'java' '-cp' '/usr/local/apache-storm-2.4.0/lib-worker/*:/usr/local/apache-storm-2.4.0/extlib/*:/opt/storm/conf:/data/ansible/storm/supervisor/stormdist/EventHandler-17-1667560186/stormjar.jar' '-Xmx64m' '-Dlogging.sensitivity=S3' '-Dlogfile.name=worker.log' '-Dstorm.home=/usr/local/apache-storm-2.4.0' '-Dworkers.artifacts=/data/ansible/storm_logging/workers-artifacts' '-Dstorm.id=EventHandler-17-1667560186' '-Dworker.id=7e1e50ed-0fba-4d8b-8c62-301edfaf32b0' '-Dworker.port=6704' '-Dstorm.log.dir=/data/ansible/storm_logging' '-DLog4jContextSelector=org.apache.logging.log4j.core.selector.BasicContextSelector' '-Dstorm.local.dir=/data/ansible/storm' '-Dworker.memory_limit_mb=128' '-Dlog4j.configurationFile=/usr/local/apache-storm-2.4.0/log4j2/worker.xml' 'org.apache.storm.LogWriter' 'java' '-server' '-Dlogging.sensitivity=S3' '-Dlogfile.name=worker.log' '-Dstorm.home=/usr/local/apache-storm-2.4.0' '-Dworkers.artifacts=/data/ansible/storm_logging/workers-artifacts' '-Dstorm.id=EventHandler-17-1667560186' '-Dworker.id=7e1e50ed-0fba-4d8b-8c62-301edfaf32b0' '-Dworker.port=6704' '-Dstorm.log.dir=/data/ansible/storm_logging' '-DLog4jContextSelector=org.apache.logging.log4j.core.selector.BasicContextSelector' '-Dstorm.local.dir=/data/ansible/storm' '-Dworker.memory_limit_mb=128' '-Dlog4j.configurationFile=/usr/local/apache-storm-2.4.0/log4j2/worker.xml,topology_logger.xml' '-Djava.net.preferIPv4Stack=true' '-javaagent:/opt/storm/agent/dd-java-agent.jar' '-Ddd.env=qa' '-Ddd.service=EventHandler' '-Djava.net.preferIPv4Stack=true' '-Ddd.logs.injection=true' '-Djava.library.path=/data/ansible/storm/supervisor/stormdist/EventHandler-17-1667560186/resources/Linux-amd64:/data/ansible/storm/supervisor/stormdist/EventHandler-17-1667560186/resources:/usr/local/lib:/opt/local/lib:/usr/lib:/usr/lib64' '-Dstorm.conf.file=' '-Dstorm.options=' '-Djava.io.tmpdir=/data/ansible/storm/workers/7e1e50ed-0fba-4d8b-8c62-301edfaf32 ...
    Line  6503: 2022-11-04 11:09:55.899 o.a.s.d.s.Slot SLOT_6704 [INFO] STATE waiting-for-blob-localization msInState: 42 -> waiting-for-worker-start msInState: 0 topo:EventHandler-17-1667560186 worker:7e1e50ed-0fba-4d8b-8c62-301edfaf32b0
    Line  6515: 2022-11-04 11:10:18.981 o.a.s.d.s.Slot SLOT_6704 [INFO] STATE waiting-for-worker-start msInState: 23082 topo:EventHandler-17-1667560186 worker:7e1e50ed-0fba-4d8b-8c62-301edfaf32b0 -> kill-blob-update msInState: 1 topo:EventHandler-17-1667560186 worker:7e1e50ed-0fba-4d8b-8c62-301edfaf32b0
    Line  6516: 2022-11-04 11:10:18.981 o.a.s.d.s.Container SLOT_6704 [INFO] Cleaning up 6a061042-8ce3-4b65-ab1b-46fd67a63093-172.23.16.27:7e1e50ed-0fba-4d8b-8c62-301edfaf32b0
    Line  6517: 2022-11-04 11:10:18.981 o.a.s.d.s.AdvancedFSOps SLOT_6704 [INFO] Deleting path /data/ansible/storm/workers/7e1e50ed-0fba-4d8b-8c62-301edfaf32b0/heartbeats
    Line  6518: 2022-11-04 11:10:18.982 o.a.s.d.s.AdvancedFSOps SLOT_6704 [INFO] Deleting path /data/ansible/storm/workers/7e1e50ed-0fba-4d8b-8c62-301edfaf32b0/pids
    Line  6519: 2022-11-04 11:10:18.982 o.a.s.d.s.AdvancedFSOps SLOT_6704 [INFO] Deleting path /data/ansible/storm/workers/7e1e50ed-0fba-4d8b-8c62-301edfaf32b0/tmp
    Line  6520: 2022-11-04 11:10:18.982 o.a.s.d.s.AdvancedFSOps SLOT_6704 [INFO] Deleting path /data/ansible/storm/workers/7e1e50ed-0fba-4d8b-8c62-301edfaf32b0
    Line  6521: 2022-11-04 11:10:18.982 o.a.s.d.s.Container SLOT_6704 [INFO] REMOVE worker-user 7e1e50ed-0fba-4d8b-8c62-301edfaf32b0
    Line  6522: 2022-11-04 11:10:18.982 o.a.s.d.s.AdvancedFSOps SLOT_6704 [INFO] Deleting path /data/ansible/storm/workers-users/7e1e50ed-0fba-4d8b-8c62-301edfaf32b0
    Line  6531: 2022-11-04 11:10:18.990 o.a.s.d.s.BasicContainer SLOT_6704 [INFO] Removed Worker ID 7e1e50ed-0fba-4d8b-8c62-301edfaf32b0

All topology-related entries in worker.log:

    Line      5: 2022-11-04 11:10:30.922 o.a.s.d.w.Worker main [INFO] Launching worker for EventHandler-17-1667560186 on 6a061042-8ce3-4b65-ab1b-46fd67a63093-172.23.16.27:6704 with id 7e1e50ed-0fba-4d8b-8c62-301edfaf32b0 and conf {storm.messaging.netty.min_wait_ms=1000, topology.backpressure.wait.strategy=org.apache.storm.policy.WaitStrategyProgressive, storm.resource.isolation.plugin=org.apache.storm.container.cgroup.CgroupManager, storm.zookeeper.auth.user=null, storm.messaging.netty.buffer_size=5242880, storm.exhibitor.port=8080, topology.bolt.wait.progressive.level1.count=1, pacemaker.auth.method=NONE, storm.oci.cgroup.root=/sys/fs/cgroup, ui.filter=null, worker.profiler.enabled=false, executor.metrics.frequency.secs=60, supervisor.thrift.threads=16, ui.http.creds.plugin=org.apache.storm.security.auth.DefaultHttpCredentialsPlugin, supervisor.supervisors.commands=[], supervisor.queue.size=128, logviewer.cleanup.age.mins=20160, topology.tuple.serializer=org.apache.storm.serialization.types.ListDelegateSerializer, storm.cgroup.memory.enforcement.enable=false, drpc.port=3772, supervisor.localizer.update.blob.interval.secs=30, topology.max.spout.pending=null, topology.transfer.buffer.size=1000, storm.oci.nscd.dir=/var/run/nscd, nimbus.worker.heartbeats.recovery.strategy.class=org.apache.storm.nimbus.TimeOutWorkerHeartbeatsRecoveryStrategy, worker.metrics={CGroupMemory=org.apache.storm.metrics2.cgroup.CGroupMemoryUsage, CGroupMemoryLimit=org.apache.storm.metrics2.cgroup.CGroupMemoryLimit, CGroupCpu=org.apache.storm.metrics2.cgroup.CGroupCpu, CGroupCpuGuarantee=org.apache.storm.metrics2.cgroup.CGroupCpuGuarantee, CGroupCpuGuaranteeByCfsQuota=org.apache.storm.metrics2.cgroup.CGroupCpuGuaranteeByCfsQuota, CGroupCpuStat=org.apache.storm.metrics2.cgroup.CGroupCpuStat}, logviewer.port=8000, worker.childopts=-Djava.net.preferIPv4Stack=true, topology.component.cpu.pcore.percent=5.0, storm.daemon.metrics.reporter.plugins=[org.apache.storm.daemon.metrics.reporters.JmxPreparableReporter], worker.max.timeout.secs=600, blac ...
    Line     16: 2022-11-04 11:10:34.608 o.a.s.s.o.a.z.ZooKeeper main [INFO] Client environment:java.io.tmpdir=/data/ansible/storm/workers/7e1e50ed-0fba-4d8b-8c62-301edfaf32b0/tmp
    Line     23: 2022-11-04 11:10:34.664 o.a.s.s.o.a.z.ZooKeeper main [INFO] Client environment:user.dir=/data/ansible/storm/workers/7e1e50ed-0fba-4d8b-8c62-301edfaf32b0
    Line     71: 2022-11-04 11:10:50.097 o.a.s.d.w.WorkerState Netty-server-localhost-6704-worker-1 [INFO] Sending BackPressure status to new client. BPStatus: {worker=7e1e50ed-0fba-4d8b-8c62-301edfaf32b0, bpStatusId=1, bpTasks=[], nonBpTasks=[4]}
    Line     73: 2022-11-04 11:10:51.622 o.a.s.d.w.WorkerState refresh-active-timer [INFO] All connections are ready for worker 6a061042-8ce3-4b65-ab1b-46fd67a63093-172.23.16.27:6704 with id 7e1e50ed-0fba-4d8b-8c62-301edfaf32b0
    Line     83: 2022-11-04 11:10:53.841 o.a.s.d.w.Worker main [INFO] Worker 7e1e50ed-0fba-4d8b-8c62-301edfaf32b0 for storm EventHandler-17-1667560186 on 6a061042-8ce3-4b65-ab1b-46fd67a63093-172.23.16.27:6704  has finished loading

So I can see that the supervisor waited about 20 seconds, from 11:09:55 to 11:10:18, before deciding to kill the topology (keep in mind this environment is not very fast, so setting up a topology may take some time), but the worker only finished loading the topology at 11:10:53. That is probably why nothing in /storm/workers got killed, and those orphaned workers keep sending heartbeats and so on. So as far as I understand, I need some config that asks Storm to wait about a minute before checking the worker state?
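
If I read the docs right, the closest knob seems to be supervisor.worker.start.timeout.secs (how long the supervisor gives a freshly launched worker to report that it has started, 120 seconds by default), though I am not sure it is the setting that applies to the transition above. Something like this (180 is just an example value):

# seconds the supervisor waits for a launched worker to start before giving up on it
supervisor.worker.start.timeout.secs: 180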

UPD: I ran into the same issue again, but this time on the prod environment. It does not seem to depend on whether the machine is slow or fast. The logs are exactly the same, only the timing differs: at 02:15:45 the supervisor launched a worker, and within about 2 seconds the state changed to "waiting-for-worker-start msInState: 2002 -> kill-blob-update msInState: 0". But why? I started searching in the code and found this piece of code in the Slot class, in the handleWaitingForWorkerStart method:

dynamicState = filterChangingBlobsFor(dynamicState, dynamicState.currentAssignment);
if (!dynamicState.changingBlobs.isEmpty()) {
    //Kill the container and restart it
    return killContainerFor(KillReason.BLOB_CHANGED, dynamicState, staticState);
}

I don't know what the changingBlobs collection stands for (maybe some blobs were being changed), but it looks like this is where the state changes and the worker gets marked to be killed. I think the timing differs because before reaching that method it waits for heartbeats, which can take more or less time on different machines. So what makes the supervisor change the state, and why does the worker start up on its own after the supervisor has already deleted everything related to it?

I found a way to work around it. It solves the problem, but not the underlying issue on the Storm side. So, as we know, sometimes the Storm supervisor asks a worker to bring up a topology, then waits for some time and decides to remove it because there was no response from the worker, and only after that does the worker actually start deploying the topology. In the end Storm does not know about the topology that has bound to some port (and thinks that this port is free), but it is there and causes the Address already in use exception later in the cycle. Manually I always fixed it by killing the pid listening on that port, after which everything started working, so since we know it is 100% a Storm process, it would be great to have a mechanism to shut it down automatically. So I cloned the original Storm git repository and tried to find a simple way to do that. The changes are in the "storm-client" project, in the Worker and Utils classes. In the Utils class I added:

public static UncaughtExceptionHandler createWorkerUncaughtExceptionHandler(String port) {
    return (thread, thrown) -> {
        try {
            try {
                String message = thrown.getMessage();
                Throwable cause = thrown.getCause();
                if (thrown instanceof BindException || cause instanceof BindException
                        || (message != null && message.contains("BindException"))) {
                    Process process = new ProcessBuilder().command("lsof", "-t",
                            String.format("-i:%s", port)).start();
                    try (BufferedReader reader = new BufferedReader(
                            new InputStreamReader(process.getInputStream()))) {
                        String pid;
                        if ((pid = reader.readLine()) != null) {
                            new ProcessBuilder().command("kill", "-9", pid).start();
                            LOG.error("killed pid " + pid);
                        }
                    }
                    LOG.error(String.format(
                            "Received BindException error on %s port, process was closed on this port", port));
                }
            } catch (Exception e) {
                LOG.error(String.format(
                        "Received BindException error on %s port, process was not closed on this port", port), e);
            }
            handleWorkerUncaughtException(thrown);
        } catch (Error err) {
            LOG.error("Received error in thread {}.. port " + port + ".. terminating worker...", thread.getName(),
                    err);
            Runtime.getRuntime().exit(-2);
        }
    };
}
public static void setupWorkerUncaughtExceptionHandler(String port) {
    Thread.setDefaultUncaughtExceptionHandler(createWorkerUncaughtExceptionHandler(port));
}

And in the main method of the Worker class we call our setupWorkerUncaughtExceptionHandler with the port:

public static void main(String[] args) throws Exception {
    Preconditions.checkArgument(args.length == 5, "Illegal number of arguments. Expected: 5, Actual: " + args.length);
    String stormId = args[0];
    String assignmentId = args[1];
    String supervisorPort = args[2];
    String portStr = args[3];
    String workerId = args[4];
    Map<String, Object> conf = ConfigUtils.readStormConfig();
    //Changes
    Utils.setupWorkerUncaughtExceptionHandler(portStr);
    //Changes
    StormCommon.validateDistributedMode(conf);
    int supervisorPortInt = Integer.parseInt(supervisorPort);
    Worker worker = new Worker(conf, null, stormId, assignmentId, supervisorPortInt, Integer.parseInt(portStr), workerId);

    //Add shutdown hooks before starting any other threads to avoid possible race condition
    //between invoking shutdown hooks and registering shutdown hooks. See STORM-3658.
    int workerShutdownSleepSecs = ObjectReader.getInt(conf.get(Config.SUPERVISOR_WORKER_SHUTDOWN_SLEEP_SECS));
    LOG.info("Adding shutdown hook with kill in {} secs", workerShutdownSleepSecs);
    Utils.addShutdownHookWithDelayedForceKill(worker::shutdown, workerShutdownSleepSecs);

    worker.start();
}

So in the end, if we catch the Address already in use exception, we kill the process that holds the port. Not the best solution, but quick and working. Then we just need to build this library and replace it in the storm folder on all nodes. But you should make sure that nothing else can be listening on the Storm ports, and it is best to run Storm under its own Linux user; in that case Storm will not be able to kill any process other than the ones Storm itself opened. The library is built against Storm version 2.4.0.
My library build: https://gitlab.com/nikita_poddubskiy/storm-address-already-in-use
I also raised this on the Storm user mailing list, but still no response: https://lists.apache.org/list?user@storm.apache.org:2022-12
