Crawler failed with Internal Service Exception
I am running a crawler on an S3 bucket that is about 2 GB in size. Whenever I try to run the crawler, it fails with an Internal Service Exception.
Is there any resolution for this? I'm stuck here.
I have also tried this link, but it didn't help much: https://aws.amazon.com/premiumsupport/knowledge-center/glue-crawler-internal-service-exception/?nc1=h_ls
I need your suggestions.
2021-08-09T13:56:05.984+02:00 [60c350cc-eab6-4510-aa4a-cdee286d819a] ERROR : Internal Service Exception
Something similar happened to me a while ago. I also checked the documentation you shared, but it didn't help me either. My scenario was different: the first time I ran the crawler, everything worked fine. After that, I modified the schema of one of the resulting tables, and the change involved a partition. Days later I ran the crawler again and received that error. To fix it, I did the following:
I don't remember having read about this scenario anywhere, but I had to deal with it; in my case I caused it myself, as you noticed. I hope this helps you. Good luck!
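The exact steps from the answer above were not preserved in this thread, but a common way to recover when a manual schema change on a partitioned table starts breaking crawler runs is to delete the stale table from the Glue Data Catalog and let the next crawler run recreate it from the data in S3. A minimal sketch with the AWS CLI; the database, table, and crawler names below are placeholders, not values from the original thread:

```shell
# Delete the table whose manually edited schema conflicts with what the
# crawler infers (my_database / my_table are placeholder names).
aws glue delete-table --database-name my_database --name my_table

# Re-run the crawler so it recreates the table and its partitions
# from the underlying S3 data (my_crawler is a placeholder name).
aws glue start-crawler --name my_crawler

# Inspect the crawler state and the last run's error details, if any.
aws glue get-crawler --name my_crawler \
    --query 'Crawler.{State:State,LastCrawl:LastCrawl}'
```

Note that deleting the table discards any manual schema edits, so they would need to be reapplied afterwards (or configured on the crawler itself, e.g. via its schema change policy) rather than edited directly on the table.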