
Run Worker processes with the TypeScript SDK

How to run Worker Processes

The Worker Process is where Workflow Functions and Activity Functions are executed.

  • Each Worker Entity in the Worker Process must register the exact Workflow Types and Activity Types it may execute.
  • Each Worker Entity must also associate itself with exactly one Task Queue.
  • Each Worker Entity polling the same Task Queue must be registered with the same Workflow Types and Activity Types.

A Worker Entity is the component within a Worker Process that listens to a specific Task Queue.

Although multiple Worker Entities can run in a single Worker Process, a Worker Process that contains a single Worker Entity is often perfectly sufficient. For more information, see the Worker tuning guide.

A Worker Entity contains a Workflow Worker and/or an Activity Worker, which makes progress on Workflow Executions and Activity Executions, respectively.

How to run a Worker on Docker in TypeScript

note

To improve Worker startup time, we recommend preparing Workflow bundles ahead of time. See our production sample for details.

Workers based on the TypeScript SDK can be deployed and run as Docker containers.

We recommend an LTS Node.js release such as 18, 20, 22, or 24. Both amd64 and arm64 architectures are supported. A glibc-based image is required; musl-based images are not supported (see below).

The easiest way to deploy a TypeScript SDK Worker on Docker is to start with the node:20-bullseye image. For example:

FROM node:20-bullseye

# For better cache utilization, copy package.json and lock file first and install the dependencies before copying the
# rest of the application and building.
COPY . /app
WORKDIR /app

# Alternatively, run npm ci, which installs only dependencies specified in the lock file and is generally faster.
RUN npm install --omit=dev \
&& npm run build

CMD ["npm", "start"]

For smaller images and/or more secure deployments, it is also possible to use -slim Docker image variants (like node:20-bullseye-slim) or distroless/nodejs Docker images (like gcr.io/distroless/nodejs20-debian11) with the following caveats.

Using node:slim images

node:slim images do not contain some of the common packages found in regular images. This results in significantly smaller images.

However, the TypeScript SDK requires root TLS certificates (the ca-certificates package), which are not included in slim images. The ca-certificates package is required even when connecting to a local Temporal Server, or when the connection configuration doesn't explicitly use TLS.

For this reason, the ca-certificates package must be installed during the construction of the Docker image. For example:

FROM node:20-bullseye-slim

RUN apt-get update \
&& apt-get install -y ca-certificates \
&& rm -rf /var/lib/apt/lists/*

# ... same as with regular image

Failure to install this dependency results in a [TransportError: transport error] runtime error, because the certificates cannot be verified.

Using distroless/nodejs images

distroless/nodejs images include only the files that are strictly required to execute node. This results in even smaller images (approximately half the size of node:slim images) and significantly reduces the attack surface of the resulting Docker images.

It is generally possible and safe to execute TypeScript SDK Workers using distroless/nodejs images (unless your code itself requires dependencies that are not included in distroless/nodejs).

However, some tools required for the build process (notably the npm command) are not included in the distroless/nodejs image. This might result in various error messages during the Docker build.

The recommended solution is to use a multi-step Dockerfile. For example:

# -- BUILD STEP --

FROM node:20-bullseye AS builder

COPY . /app
WORKDIR /app

RUN npm install --omit=dev \
&& npm run build

# -- RESULTING IMAGE --

FROM gcr.io/distroless/nodejs20-debian11

COPY --from=builder /app /app
WORKDIR /app

CMD ["node", "build/worker.js"]

Properly configure Node.js memory in Docker

By default, node sets its maximum old-generation heap size to 25% of the physical memory of the machine it runs on, capped at 4 GB. This is usually inappropriate when running Node.js in a Docker environment and can result in either under-utilization of available memory (node uses only a fraction of the memory allocated to the container) or over-utilization (node tries to use more memory than is allocated to the container, which eventually leads to the process being killed by the operating system).

Therefore, we recommend that you always explicitly set the --max-old-space-size node argument to approximately 80% of the maximum memory (in megabytes) that you want to allocate to the node process. Some experimentation and adjustment may be needed to find the most appropriate value for your specific application.

In practice, it is generally easier to provide this argument through the NODE_OPTIONS environment variable.
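
For example, for a container limited to 2 GB of memory, 80% works out to roughly 1638 MB; the 2 GB limit and the resulting value here are illustrative and should be adjusted to your container's actual memory limit:

```dockerfile
# Assuming the container is limited to 2 GB of memory: 2048 MB * 0.8 ≈ 1638 MB
ENV NODE_OPTIONS="--max-old-space-size=1638"
```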

Do not use Alpine

Alpine replaces glibc with musl, which is incompatible with the Rust core of the TypeScript SDK. If you receive errors like the following, it's probably because you are using Alpine.

Error: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /opt/app/node_modules/@temporalio/core-bridge/index.node)

Or like this:

Error: Error relocating /opt/app/node_modules/@temporalio/core-bridge/index.node: __register_atfork: symbol not found

How to run a Temporal Cloud Worker

To run a Worker that uses Temporal Cloud, you need to provide additional connection and client options that include the following:

  • An address that includes your Cloud Namespace Name and a port number: <Namespace>.<ID>.tmprl.cloud:<port>.
  • mTLS CA certificate.
  • mTLS private key.

For more information about managing and generating client certificates for Temporal Cloud, see How to manage certificates in Temporal Cloud.

For more information about configuring TLS to secure inter- and intra-network communication for a Temporal Service, see Temporal Customization Samples.
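
Putting the connection options above together, a Cloud Worker can be created with the SDK's NativeConnection. The following is a minimal sketch: the address, namespace, certificate paths, and Task Queue name are placeholders to replace with your own values.

```typescript
import fs from 'fs/promises';
import { NativeConnection, Worker } from '@temporalio/worker';
import * as activities from './activities';

async function run() {
  // Placeholder address and namespace; substitute your Cloud Namespace values.
  const connection = await NativeConnection.connect({
    address: 'your-namespace.a1b2c.tmprl.cloud:7233',
    tls: {
      // mTLS client certificate and private key, read from placeholder paths.
      clientCertPair: {
        crt: await fs.readFile('./client.pem'),
        key: await fs.readFile('./client.key'),
      },
    },
  });

  const worker = await Worker.create({
    connection,
    namespace: 'your-namespace.a1b2c',
    taskQueue: 'cloud-sample', // illustrative name
    workflowsPath: require.resolve('./workflows'),
    activities,
  });

  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```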

How to register types

All Workers listening to the same Task Queue name must be registered to handle exactly the same Workflow Types and Activity Types.

If a Worker polls a Task for a Workflow Type or Activity Type it does not know about, it fails that Task. However, the failure of the Task does not cause the associated Workflow Execution to fail.

In development, use workflowsPath:

snippets/src/worker.ts

import { Worker } from '@temporalio/worker';
import * as activities from './activities';

async function run() {
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    taskQueue: 'snippets',
    activities,
  });

  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});

In this snippet, the Worker bundles the Workflow code at runtime.

In production, you can improve your Worker's startup time by bundling in advance: as part of your production build, call bundleWorkflowCode:

production/src/scripts/build-workflow-bundle.ts

import { bundleWorkflowCode } from '@temporalio/worker';
import { writeFile } from 'fs/promises';
import path from 'path';

async function bundle() {
  const { code } = await bundleWorkflowCode({
    workflowsPath: require.resolve('../workflows'),
  });
  const codePath = path.join(__dirname, '../../workflow-bundle.js');

  await writeFile(codePath, code);
  console.log(`Bundle written to ${codePath}`);
}

bundle().catch((err) => {
  console.error(err);
  process.exit(1);
});

Then the bundle can be passed to the Worker:

production/src/worker.ts

const workflowOption = () =>
  process.env.NODE_ENV === 'production'
    ? {
        workflowBundle: {
          codePath: require.resolve('../workflow-bundle.js'),
        },
      }
    : { workflowsPath: require.resolve('./workflows') };

async function run() {
  const worker = await Worker.create({
    ...workflowOption(),
    activities,
    taskQueue: 'production-sample',
  });

  await worker.run();
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});

How to shut down a Worker and track its state

Workers shut down if they receive any of the Signals enumerated in shutdownSignals: 'SIGINT', 'SIGTERM', 'SIGQUIT', and 'SIGUSR2'.

In development, we shut down Workers with Ctrl+C (SIGINT) or nodemon (SIGUSR2). In production, you usually want to give Workers time to finish any in-progress Activities by setting shutdownGraceTime.

As soon as a Worker receives a shutdown Signal or request, the Worker stops polling for new Tasks and allows in-flight Tasks to complete until shutdownGraceTime is reached. Any Activities that are still running at that time will stop running and will be rescheduled by Temporal Server when an Activity timeout occurs.

If you must guarantee that the Worker eventually shuts down, you can set shutdownForceTime.

You might want to programmatically shut down Workers (with Worker.shutdown()) in integration tests or when automating a fleet of Workers.
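
The grace and force timeouts described above are passed to Worker.create(); this sketch uses illustrative durations and a hypothetical Task Queue name:

```typescript
import { Worker } from '@temporalio/worker';
import * as activities from './activities';

async function run() {
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'),
    activities,
    taskQueue: 'graceful-shutdown-sample', // illustrative name
    // Give in-flight Activities up to 10 seconds to complete after a shutdown
    // Signal is received...
    shutdownGraceTime: '10s',
    // ...and guarantee that the Worker eventually shuts down after 30 seconds.
    shutdownForceTime: '30s',
  });

  await worker.run();
}

// In an integration test, you could instead trigger shutdown programmatically
// with worker.shutdown().

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```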

Worker states

At any time, you can query the Worker's state with Worker.getState(). A Worker is always in one of seven states:

  • INITIALIZED: The initial state of the Worker after calling Worker.create() and successfully connecting to the server.
  • RUNNING: Worker.run() was called and the Worker is polling Task Queues.
  • FAILED: The Worker encountered an unrecoverable error; Worker.run() should reject with the error.
  • The last four states are related to the Worker shutdown process:
    • STOPPING: The Worker received a shutdown Signal or Worker.shutdown() was called. The Worker will forcefully shut down after shutdownGraceTime expires.
    • DRAINING: All Workflow Tasks have been drained; waiting for Activities and cached Workflows eviction.
    • DRAINED: All Activities and Workflows have completed; ready to shut down.
    • STOPPED: Shutdown complete; worker.run() resolves.

If you need more visibility into internal Worker state, see the Worker class in the API reference.
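
For example, an integration test might poll getState() until the Worker reaches a terminal state. The waitForState helper below is a hypothetical sketch, not part of the SDK; it accepts any object with a getState() method, which includes Worker instances:

```typescript
// Hypothetical helper: polls an object exposing getState() until it reports
// one of the given states, checking every 100 ms.
async function waitForState(
  worker: { getState(): string },
  states: string[],
): Promise<string> {
  while (!states.includes(worker.getState())) {
    await new Promise((resolve) => setTimeout(resolve, 100));
  }
  return worker.getState();
}

// Usage in an integration test (sketch):
//   const runPromise = worker.run();
//   worker.shutdown();
//   await waitForState(worker, ['STOPPED', 'FAILED']);
//   await runPromise;
```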