
Next.js build stuck compiling during docker build in bitbucket pipeline

I have converted a legacy React app from using Webpack 3 to using Next.js 12 and Webpack 5.

I am currently trying to deploy the project using Docker through bitbucket pipelines, but when running next build it gets stuck on 'Creating an optimized production build', eventually runs out of memory, and the build fails.

I am using the same Dockerfile setup as the next.js example, and the docker build runs perfectly on my local machine with the same steps.

Has anyone experienced a similar issue? No errors or warnings are shown during yarn install or the build itself, and I have outputStandalone set to true.

# Install dependencies only when needed

FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

ENV NEXT_TELEMETRY_DISABLED 1

RUN yarn build

# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app

ENV NODE_ENV production
ENV NEXT_TELEMETRY_DISABLED 1

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/package.json ./package.json

# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000

ENV PORT 3000

CMD ["node", "server.js"]

Based on the out of memory error, you can try raising Node's memory limit before yarn build:

ENV NODE_OPTIONS=--max-old-space-size=4096
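In the multi-stage Dockerfile above, the variable would need to be set in the builder stage so that it is in effect when the build runs. A minimal sketch (the 4096 MB value is an example, not a measured requirement; it must fit inside the memory actually granted to the docker service in bitbucket-pipelines.yml):

```dockerfile
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

ENV NEXT_TELEMETRY_DISABLED 1
# Raise the V8 old-space heap ceiling for the Next.js compile step.
# 4096 is an example value in MB; pick one below the container's limit,
# or Node will be OOM-killed before hitting its own ceiling.
ENV NODE_OPTIONS=--max-old-space-size=4096

RUN yarn build
```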

I did run into a similar issue with npm@7, due to https://github.com/npm/cli/issues/2011 / https://github.com/npm/cli/issues/3208 (rant intended).

A workaround was to increase the docker service memory limit in bitbucket-pipelines.yml (the default is 1024 MB):

definitions:
  services:
    docker:
      memory: 3072

or possibly even doubling the resources allocated to your step:

definitions:
  services:
    docker:
      memory: 7128
pipelines:
  default:
    - step:
        size: 2x
        services:
          - docker
        script:
          - docker build .

Beware that service resources are subtracted from those allocated to your step, so the step script itself runs with less memory.
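As a rough worked example (figures assume Bitbucket's documented limits, not values measured from this pipeline), the config above leaves only about 1 GB for the step script:

```yaml
# Rough memory budget for the 2x configuration above:
#   size: 2x step total ............  8192 MB
#   docker service (memory: 7128) .. -7128 MB
#   left for the step script .......  ~1064 MB
definitions:
  services:
    docker:
      memory: 7128
```

In other words, the more you give the docker service for the image build, the less headroom remains for whatever else the step script does.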

Also remember you will be charged 2x for your size: 2x steps.

See https://support.atlassian.com/bitbucket-cloud/docs/databases-and-service-containers/#Service-memory-limits
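To approximate the pipeline's memory ceiling locally before pushing, you could cap the build's memory with docker itself (a sketch; 3g mirrors the memory: 3072 service limit above, and note that with BuildKit enabled the --memory flag may be ignored, so this works most reliably with the legacy builder):

```shell
# Cap the build to roughly the pipeline's docker service limit (3g here
# mirrors memory: 3072 above) to see whether `next build` still
# completes or gets OOM-killed under the same constraint.
docker build --memory=3g --memory-swap=3g .
```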
