Getting “hell.php” error and “502 Bad Gateway” error after deploying Node.js page to AWS Elastic Beanstalk

I recently deployed my very first Node.js app to AWS Elastic Beanstalk. It is a very simple portfolio page. The site worked with no problem for several hours, but then the instance went to Severe status and the page returned this message:

502 Bad Gateway nginx/1.12.1

The error message in the log was "First argument must be a string or Buffer".

I restarted the app server, and the page worked for 12 hours with no problem, but then it went down again with the same message. So I started troubleshooting and tried these things:

The Node.js version in Elastic Beanstalk was different from the version used to create my app, so I changed it to the version the site was created with (8.12.0) and restarted the app server. Same problem.

I thought that maybe the load balancer was having trouble reading the response, so I started converting the data sent in the response to a string (.toString()), but that did not help. And it turns out that my configuration does not even have a load balancer.

The Node documentation for fs.readFile says it can use a lot of memory and suggests using fs.createReadStream instead, so I made that change, but I'm getting the same result with the stream.

I rebuilt the environment and tried again. This time the page ran successfully for two days, but then it errored again with this message:

Error: ENOENT: no such file or directory, open 'public//hell.php'
events.js:183
      throw er; // Unhandled 'error' event
      ^

I don't use ANY php code. Why is it referencing a php file called "hell"?

Here is my code in the server.js file:

const http = require("http");
const fs = require("fs");
// Use the port AWS provides via process.env.PORT, or fall back to 8081.
const port = process.env.PORT || 8081;
const server = http.createServer(function (req, res) {

    res.statusCode = 200;

    if (req.url == "/" || req.url == "/index.html" || req.url == "/home") {
        let readStream = fs.createReadStream("public/index.html");

        // When the stream is done being read, end the response
        readStream.on('close', () => {
            res.end();
        })

        // Stream chunks to response
        readStream.pipe(res);
    }
    else {
        let readStream = fs.createReadStream("public/" + req.url);

        // When the stream is done being read, end the response
        readStream.on('close', () => {
            res.end();
        })

        // Stream chunks to response
        readStream.pipe(res);
    }
}).listen(port);

A copy of the "public/index.html" file being read by fs can be found at: https://zurafuse.github.io/index.html

Does anyone have any idea what I am doing wrong?

I have resolved this issue. It turns out that bots frequently hit AWS-hosted sites like mine looking for vulnerabilities, and in my case they were requesting pages that do not exist (WordPress pages, for example). So I modified my code to serve only the pages I have explicitly defined, and if an HTTP request asks for anything unexpected, I return a "page not found" response. I have not had a problem since.

Because my site was constantly erroring while trying to open pages that do not exist, it kept crashing my AWS Elastic Beanstalk instance. And since I am on the free tier, it does not scale at all, so it is not very forgiving.
