
Your first GPU program

Running code on the GPU might sound intimidating, but compute shaders make it surprisingly easy to run arbitrary calculations. Most GPU programming guides start by drawing a triangle (since rendering is what GPUs were historically built for), but modern GPU programming is no longer bound by that classic execution model. This style is often called GPGPU: using graphics processors for general-purpose work, which lets us parallelize all kinds of computation with far less effort than the traditional graphics pipeline demands.

The one major difference between calling GPU functions and standard JavaScript functions is that GPU functions require a little more ceremony. Since the code runs in an entirely different execution environment, it is not as simple as feeding another line to the JavaScript interpreter. We have to compile the code we want to run, allocate the needed memory, and pass resource handles back and forth.

Thankfully, when using TypeGPU, most of this can (optionally!) be done for you. Let’s take a look at possibly the simplest program we can run on the GPU:

import tgpu from 'typegpu';

const root = await tgpu.init();

const program = root.createGuardedComputePipeline(() => {
  'use gpu';
  const hello = 12;
});

program.dispatchThreads();

And that’s it! You just ran your first GPU program from JavaScript. Of course, it does not do anything meaningful yet, but before we change that, let’s take a look at it line by line.

  1. Initialize the root, which holds the reference to the GPU device and serves as the middleman when creating GPU resources:

    const root = await tgpu.init();
  2. Create a guarded compute pipeline, which we can think of for now as a function that will run on the GPU:

    const program = root.createGuardedComputePipeline(() => {
      'use gpu';
      const hello = 12;
    });
  3. Dispatch threads, or a single thread in this case. This tells TypeGPU to compile the program, attach any resources (there are none here), and submit the work to the GPU:

    program.dispatchThreads();

Of course, running a program that literally does nothing is not particularly useful. Right now we submit work to the GPU, but we never display anything or read anything back.

When we need global memory on the GPU, we usually choose between two resource types:

Buffers

Flexible, general-purpose GPU memory. Buffers can store arbitrary data, as long as the layout is host-shareable (which most regular data types apart from booleans are). You can learn more about buffers in the Buffers guide.

Textures

Image-oriented GPU memory. Textures are great for data laid out in a 2D or 3D grid, not just images, and the Textures guide walks through where they fit best.

In raw WebGPU, allocating a buffer starts with a size in bytes and a set of usage flags. That sounds simple, but manually calculating layouts and keeping CPU/GPU interpretations in sync gets tedious. TypeGPU asks for a schema instead. The schema describes how the data is interpreted in shader code and gives TypeGPU enough information to handle the CPU side too.
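To make the contrast concrete, here is a rough sketch of both approaches. The raw WebGPU call is illustrative only and assumes you already have a GPUDevice at hand; the `d` namespace on the TypeGPU side is the data-schema helper imported from 'typegpu', which you will see in the next full example.

// Raw WebGPU: you count bytes and pick usage flags by hand.
const rawBuffer = device.createBuffer({
  size: 4, // one u32, sized manually
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
});

// TypeGPU: the schema carries the size and layout for you.
const countBuffer = root.createBuffer(d.u32);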

That gives us a few useful properties:

  1. No manual size calculation. The schema defines the byte layout for you.
  2. Automatic serialization. The schema tells TypeGPU how values should be serialized and deserialized, while still leaving fast paths available when serialization would become a bottleneck.
  3. Type safety. Schema changes surface type errors in both shader code and CPU-side logic.

Let’s use that to create a simple GPU program that actually writes data to GPU memory.

import tgpu, { d } from 'typegpu';

const root = await tgpu.init();

const countMutable = root.createMutable(d.u32);

const program = root.createGuardedComputePipeline(() => {
  'use gpu';
  countMutable.$++;
});

function increment() {
  program.dispatchThreads();
}

increment();

That’s it: we now have a GPU program that increments a GPU value. Each time we call increment(), the shader increments countMutable by one.

But how do we prove that? Since the value lives on the GPU, JavaScript cannot read it like a normal variable. For quick debugging, TypeGPU lets shader code write to the console:

const countMutable = root.createMutable(d.u32);

const program = root.createGuardedComputePipeline(() => {
  'use gpu';
  const currentCount = countMutable.$;
  console.log('current count:', currentCount);
  countMutable.$++;
});

function increment() {
  program.dispatchThreads();
}

export function execute() {
  increment();
}

This is useful for fast debugging, but console.log only displays the value. It does not give JavaScript a copy it can keep. To retrieve a value from GPU memory, use the .read() method:

const countMutable = root.createMutable(d.u32);

const program = root.createGuardedComputePipeline(() => {
  'use gpu';
  countMutable.$++;
});

function increment() {
  program.dispatchThreads();
}

export async function execute() {
  increment();
  const value = await countMutable.read();
  console.log(`Read value: ${value}`);
}

read() returns a promise that resolves with JavaScript data serialized according to the schema. In this case, the schema is d.u32, so the value we get back is a single number.

In the previous examples, we used createMutable() without paying much attention to what it creates. In short, it allocates GPU memory that can be accessed directly from a TypeGPU shader and modified from shader code. You might wonder why you could not just do something like:

let counter = 0;

const program = root.createGuardedComputePipeline(() => {
  'use gpu';
  counter++;
});

This will not work because of how TypeGPU handles functions marked with 'use gpu'. They are translated into code that runs on the GPU, so they do not close over JavaScript variables the way normal JavaScript functions do. A plain let counter is CPU-side state, not writable GPU memory.

That is why we use buffers for state that the GPU can read or write. You can think of each allocated buffer as a handle to a particular piece of GPU memory. Inside shader code, you access that memory with the .$ property. The .read() method does not let JavaScript access the GPU memory directly; it creates a copy and reads that copy back to the CPU.

The createMutable() function is a shorthand for creating a storage buffer and viewing it as mutable:

const countMutable = root
  .createBuffer(d.u32)
  .$usage('storage')
  .as('mutable');

This shorthand is useful when the buffer only needs that one role. If you need the same buffer in more than one role, create it explicitly and add the usages you need. For now, we will focus only on the buffer usages that make data accessible inside a compute shader.
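For example, a buffer that one pipeline reads as a uniform and another writes as storage could be set up roughly like this (a sketch; the exact usages depend on your shaders, and the names are made up for illustration):

// One buffer declared with every usage it will need.
const configBuffer = root
  .createBuffer(d.u32)
  .$usage('storage', 'uniform');

// Separate views of the same memory for the different roles.
const configMutable = configBuffer.as('mutable');
const configUniform = configBuffer.as('uniform');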

| Usage    | Can shader code mutate it? | Alignment requirement | Size requirement |
| -------- | -------------------------- | --------------------- | ---------------- |
| uniform  | No  | 16-byte alignment for structs and arrays, unless supported device features allow a looser layout | Multiple of 4 bytes; must fit within the uniform buffer binding limit (commonly 64 KiB) |
| readonly | No  | None beyond the schema layout | Multiple of 4 bytes; must fit within the storage buffer binding limit (commonly 128 MiB) |
| mutable  | Yes | None beyond the schema layout | Multiple of 4 bytes; must fit within the storage buffer binding limit (commonly 128 MiB) |

As a rule of thumb, if data can be a uniform, it probably should be. If it needs storage-buffer size but does not need to be modified by shader code, keep it readonly.
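TypeGPU offers shorthands for those roles too, analogous to createMutable(). Assuming they are available in your version, they look roughly like this (the values are arbitrary):

// Bound as a uniform; the shader can read it but not modify it.
const settingsUniform = root.createUniform(d.u32, 42);

// A read-only storage view, for data the shader only reads.
const lookupReadonly = root.createReadonly(d.u32, 7);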

So far, we know how to allocate GPU buffers, update them from inside shaders, and read back the values. But what if we have a CPU value that we want to pass to the GPU?

With our current knowledge, we could just write a shader that writes some values to the buffer:

const stateMutable = root.createMutable(
  d.struct({
    counter: d.u32,
    incrementBy: d.u32,
  }),
);

const initializeBuffer = root.createGuardedComputePipeline(() => {
  'use gpu';
  stateMutable.$.incrementBy = 10;
});

const program = root.createGuardedComputePipeline(() => {
  'use gpu';
  stateMutable.$.counter += stateMutable.$.incrementBy;
});

initializeBuffer.dispatchThreads();
program.dispatchThreads();

This will work correctly (as long as we always want to increment by 10), and sometimes it is the right thing to do, especially when the initial data is computed on demand and highly parallelizable. In our current use case, though, it is overkill.

For now, we will explore three additional ways to pass CPU values to buffers: the .write() and .patch() methods, and the initial data passed when creating the buffer.

These are simple because the schema gives TypeGPU enough information to serialize values for us.

let incrementBy = 10;

const stateMutable = root.createMutable(
  d.struct({
    counter: d.u32,
    incrementBy: d.u32,
  }),
  {
    counter: 0,
    incrementBy,
  },
);

const program = root.createGuardedComputePipeline(() => {
  'use gpu';
  stateMutable.$.counter += stateMutable.$.incrementBy;
});

export async function execute() {
  incrementBy++;
  stateMutable.patch({ incrementBy });
  program.dispatchThreads();

  const state = await stateMutable.read();
  console.log(
    `counter: ${state.counter}, incrementBy: ${state.incrementBy}`,
  );
}

As we can see, our program grew quite a bit. Let’s examine what changed:

  1. Add a new value to our GPU state, which we will use to control each increment step. We used d.struct, which works exactly like you would expect a struct to: it lets us group data together.

    const stateMutable = root.createMutable(
      d.struct({
        counter: d.u32,
        incrementBy: d.u32,
      }),
    );
  2. Add initial data. Apart from the schema, buffer constructors also take a second argument, which lets you efficiently write data at buffer creation time.

    let incrementBy = 10;

    const stateMutable = root.createMutable(
      d.struct({
        counter: d.u32,
        incrementBy: d.u32,
      }),
      {
        counter: 0,
        incrementBy,
      },
    );

    This data will be typed according to the schema, so there is no byte counting involved. If you want to modify a buffer after it has been created, you can use the .write() method, which takes the same data shape.

    stateMutable.write({ counter: 0, incrementBy: 10 });
  3. Update the shader code so that it uses the new field and increments counter by the current incrementBy value.

    const program = root.createGuardedComputePipeline(() => {
      'use gpu';
      stateMutable.$.counter += stateMutable.$.incrementBy;
    });
  4. Use .patch() to update just one buffer field. By default, .write() expects the entire buffer contents at once. Since we do not have the current counter value available on the CPU side, we would need to read it back each time (or mirror the state on the CPU), which is not something we want to do. The .patch() API lets us pass a subset of the buffer and write only the fields that are present.

    stateMutable.patch({ incrementBy });

You now know how to run a function on the GPU, allocate GPU buffers, and read from or write to them. In the next sections, we will focus on more meaningful shader code; so far, all our examples have used just one logical GPU thread, while real workloads commonly dispatch thousands or even millions of them.
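As a small preview, here is a rough sketch of a dispatch that runs many threads at once, each writing one element of an array buffer. The 1024-element size and the doubling logic are arbitrary choices for illustration:

// An array of 1024 unsigned integers living on the GPU.
const dataMutable = root.createMutable(d.arrayOf(d.u32, 1024));

const doubleAll = root.createGuardedComputePipeline((x) => {
  'use gpu';
  // Each thread handles the element matching its global invocation ID.
  dataMutable.$[x] = x * 2;
});

// One thread per element; the bounds check guards any extra threads.
doubleAll.dispatchThreads(1024);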