
"ERR_EMPTY_RESPONSE" - ShinyApp hosted over AWS (EC2 / EKS / ShinyProxy) does not work

Update #2: I have checked the health status of my instances within the auto scaling group - there the instances are listed as "healthy". (Screenshot added)

I followed this troubleshooting tutorial from AWS - without success:

Solution: Use the ELB health check for your Auto Scaling group. When you use the ELB health check, Auto Scaling determines the health status of your instances by checking the results of both the instance status check and the ELB health check. For more information, see Adding health checks to your Auto Scaling group in the Amazon EC2 Auto Scaling User Guide.
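
For reference, the same status can also be read from the CLI - a minimal sketch, assuming the AWS CLI is configured and using my-asg-name as a placeholder for the actual group name:

    # List instance ID, health status and lifecycle state for every
    # instance in the Auto Scaling group (my-asg-name is a placeholder)
    aws autoscaling describe-auto-scaling-instances \
      --query "AutoScalingInstances[?AutoScalingGroupName=='my-asg-name'].[InstanceId,HealthStatus,LifecycleState]" \
      --output table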


Update #1: I found out that the two node instances are "OutOfService" (as seen in the screenshots below) because they are failing the health check from the load balancer - could this be the problem? And how do I solve it?
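
To see why the load balancer marks them OutOfService, the health check result can be inspected directly. A minimal sketch, assuming a Classic ELB was created (for an NLB/ALB, aws elbv2 describe-target-health is the equivalent) and using my-shinyproxy-elb as a placeholder name:

    # Show per-instance health exactly as the load balancer sees it
    aws elb describe-instance-health --load-balancer-name my-shinyproxy-elb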

Thanks!


I am currently on the home stretch of hosting my ShinyApp on AWS.

To make the hosting scalable, I decided to use AWS - more precisely, an EKS cluster.

For the creation I followed this tutorial: https://github.com/z0ph/ShinyProxyOnEKS

So far everything worked, except for the last step: "When accessing the load balancer address and port, the login interface of ShinyProxy can be displayed normally."

The load balancer gives me the following error message as soon as I try to call it with the corresponding port: ERR_EMPTY_RESPONSE.
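
The failure is easier to inspect from a terminal than from the browser. A minimal sketch - <elb-dns-name> and the port are placeholders for the actual load balancer address:

    # Reproduce the empty response; -v shows whether the TCP connection
    # opens and is then closed without any HTTP response being sent
    curl -v http://<elb-dns-name>:8080/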

I have to admit that I am currently a bit lost and lack a starting point for finding the error.

I was already able to host the Shiny sample application in the cluster (step 3.2 in the tutorial), so the problem must lie somewhere in ShinyProxy, the Kubernetes proxy, or the load balancer itself.
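
One way to narrow this down is to bypass the Service and load balancer entirely and talk to the ShinyProxy deployment directly; if the login page loads this way, ShinyProxy itself is fine and the fault lies in the Service or load balancer layer. A minimal sketch - the deployment name and port are assumptions based on the tutorial:

    # Forward a local port straight to a ShinyProxy pod, skipping the ELB
    kubectl port-forward deployment/shinyproxy 8080:8080
    # then open http://localhost:8080 in a browser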

I have attached the following information below:

  • Overview EC2 Instances (Workspace + Cluster Nodes)
  • Overview Load Balancer
  • Overview Repositories
  • Dockerfile ShinyProxy
  • Dockerfile Kubernetes Proxy
  • Dockerfile ShinyApp (sample application)

I have redacted some of the information to be on the safe side - if any of it is important, please let me know.

If you need anything else I haven't thought of, just give me a hint!

And please excuse the confusing question and formatting - I just don't know how to word or present it better. Sorry!

Many thanks and best regards


Overview EC2 Instances (Workspace + Cluster Nodes)

(screenshot)

Overview Load Balancer

(screenshots)

Overview Repositories

(screenshot)

Dockerfile ShinyProxy (source: https://github.com/openanalytics/shinyproxy-config-examples/tree/master/03-containerized-kubernetes)

(screenshot)

Dockerfile Kubernetes Proxy (source: https://github.com/openanalytics/shinyproxy-config-examples/tree/master/03-containerized-kubernetes - fork)

(screenshot)

Dockerfile ShinyApp (sample application)

(screenshots)

The following files are taken 1:1 from the tutorial:

  1. application.yaml (ShinyProxy)
  2. sp-authorization.yaml
  3. sp-deployment.yaml
  4. sp-service.yaml (see the sketch after this list)
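
Because the answer below turns on the Service's externalTrafficPolicy, here is a rough sketch of where that field lives in a manifest like sp-service.yaml - an assumption for illustration only; the authoritative file is in the linked tutorial repo, and the name, selector, and port are placeholders:

    apiVersion: v1
    kind: Service
    metadata:
      name: shinyproxy              # placeholder - check the tutorial's actual name
    spec:
      type: LoadBalancer            # asks AWS for an ELB/NLB
      externalTrafficPolicy: Local  # the setting discussed in the answer below
      selector:
        app: shinyproxy
      ports:
        - port: 8080
          targetPort: 8080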

Health Status in the Auto Scaling Group

(screenshot)

Unfortunately, there is a known issue in AWS:

externalTrafficPolicy: Local with Type: LoadBalancer AWS NLB health checks failing · Issue #80579 · kubernetes/kubernetes

Closing this for now since it's a known issue

As per the k8s manual:

.spec.externalTrafficPolicy - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: Cluster (default) and Local. Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type Services, but risks potentially imbalanced traffic spreading.
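
A quick way to test whether the Local policy is the culprit is to switch the Service to Cluster - a minimal sketch, with shinyproxy as a placeholder service name, and with the caveat from the docs above that this loses the client source IP:

    # Route external traffic to cluster-wide endpoints so health checks
    # no longer depend on a ShinyProxy pod running on each node
    kubectl patch svc shinyproxy \
      -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'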

But if you want to keep Local, you may try to fix it as in this answer.

Upd:

This is actually a known limitation where the AWS cloud provider does not allow for --hostname-override, see #54482 for more details.

Upd 2: There is a workaround via patching kube-proxy:

As per the AWS KB:

A Network Load Balancer with externalTrafficPolicy set to Local (from the Kubernetes website), with a custom Amazon VPC DNS on the DHCP options set. To resolve this issue, patch kube-proxy with the hostname override flag.
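
A sketch of the kind of patch meant here - the exact kube-proxy command line and config path vary by EKS version, so compare against the existing DaemonSet (kubectl -n kube-system get ds kube-proxy -o yaml) before applying:

    # Re-run kube-proxy with --hostname-override so the name it registers
    # matches the Kubernetes Node object (NODE_NAME comes from spec.nodeName)
    kubectl -n kube-system patch ds kube-proxy -p '{
      "spec": {"template": {"spec": {"containers": [{
        "name": "kube-proxy",
        "command": [
          "kube-proxy",
          "--hostname-override=$(NODE_NAME)",
          "--config=/var/lib/kube-proxy-config/config"
        ],
        "env": [{
          "name": "NODE_NAME",
          "valueFrom": {"fieldRef": {"apiVersion": "v1", "fieldPath": "spec.nodeName"}}
        }]
      }]}}}
    }'

If the KB's diagnosis applies, the NLB health checks should start passing once the DaemonSet has rolled out.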
