Utilities

The root['~unstable'].createGuardedComputePipeline method streamlines running simple computations on the GPU. Under the hood, it creates a compute pipeline that calls the provided callback only if the current thread ID is within the requested range, and it returns an object with a dispatchThreads method that executes the pipeline. Since the pipeline is reused, subsequent calls incur no additional overhead.

```ts
const data = root.createMutable(d.arrayOf(d.u32, 8), [0, 1, 2, 3, 4, 5, 6, 7]);

const doubleUpPipeline = root['~unstable'].createGuardedComputePipeline((x) => {
  'use gpu';
  data.$[x] *= 2;
});

doubleUpPipeline.dispatchThreads(8);
doubleUpPipeline.dispatchThreads(8);
doubleUpPipeline.dispatchThreads(4);

// the command encoder will queue the read after `doubleUpPipeline`
console.log(await data.read()); // [0, 8, 16, 24, 16, 20, 24, 28]
```

The callback can accept up to three arguments, one per dispatch dimension. createGuardedComputePipeline is especially convenient for initializing buffers directly on the GPU, which avoids the serialization overhead of writing the data from the host. Buffer initialization commonly involves random number generators; for that, you can use the @typegpu/noise library.

```ts
import { randf } from '@typegpu/noise';

const root = await tgpu.init();

// buffer of 1024x512 floats
const waterLevelMutable = root.createMutable(
  d.arrayOf(d.arrayOf(d.f32, 512), 1024),
);

root['~unstable'].createGuardedComputePipeline((x, y) => {
  'use gpu';
  randf.seed2(d.vec2f(x, y).div(1024));
  waterLevelMutable.$[x][y] = 10 + randf.sample();
}).dispatchThreads(1024, 512);
// callback will be called for x in range 0..1023 and y in range 0..511

// (optional) read values in JS
console.log(await waterLevelMutable.read());
```

The result of createGuardedComputePipeline can have bind groups bound using the with method.

```ts
const layout = tgpu.bindGroupLayout({
  values: { storage: d.arrayOf(d.u32), access: 'mutable' },
});

const buffer1 = root
  .createBuffer(d.arrayOf(d.u32, 3), [1, 2, 3])
  .$usage('storage');
const buffer2 = root
  .createBuffer(d.arrayOf(d.u32, 4), [2, 4, 8, 16])
  .$usage('storage');

const bindGroup1 = root.createBindGroup(layout, { values: buffer1 });
const bindGroup2 = root.createBindGroup(layout, { values: buffer2 });

const doubleUpPipeline = root['~unstable'].createGuardedComputePipeline((x) => {
  'use gpu';
  layout.$.values[x] *= 2;
});

doubleUpPipeline.with(bindGroup1).dispatchThreads(3);
doubleUpPipeline.with(bindGroup2).dispatchThreads(4);

console.log(await buffer1.read()); // [2, 4, 6]
console.log(await buffer2.read()); // [4, 8, 16, 32]
```

It is recommended NOT to use guarded compute pipelines for:

  • More complex compute shaders. Guarded compute pipelines do not let you change the workgroup size or make effective use of workgroup shared memory. For such cases, a manually created pipeline is more suitable.

  • Small calls. For small amounts of data, creating and dispatching the shader usually costs more than serialization. Small buffers are initialized more efficiently with the buffer.write() method, as shown in the sketch below.
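
For instance, a buffer of just a few elements is most likely cheaper to fill from the host with a single write call. This is a minimal sketch; the smallBuffer name and its values are made up for illustration:

```ts
// Writing four values from the host avoids creating and dispatching
// a compute pipeline entirely.
const smallBuffer = root.createBuffer(d.arrayOf(d.u32, 4));
smallBuffer.write([1, 2, 3, 4]);
```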

Yes, TypeGPU implements logging to the console on the GPU! Just call console.log inside your GPU code like you would in plain JavaScript, and open the console to see the results.

```ts
const callCountMutable = root.createMutable(d.u32, 0);

const compute = root['~unstable'].createGuardedComputePipeline(() => {
  'use gpu';
  callCountMutable.$ += 1;
  console.log('Call number', callCountMutable.$);
});

compute.dispatchThreads();
compute.dispatchThreads();
// Eventually...
// "[GPU] Call number 1"
// "[GPU] Call number 2"
```

Currently supported data types for logging include scalars, vectors, matrices, structs, and fixed-size arrays.
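
For instance, a single log call can mix several of these types. The sketch below reuses the guarded pipeline API shown earlier and logs a scalar together with a vector; the values are arbitrary:

```ts
// Logs a scalar and a vector from each of the two threads.
root['~unstable'].createGuardedComputePipeline((x) => {
  'use gpu';
  console.log('thread', d.u32(x), 'offset', d.vec3f(1, 2, 3));
}).dispatchThreads(2);
```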

Under the hood, TypeGPU translates console.log into a series of serializing functions that write the logged arguments to a buffer, which is then read back and deserialized after every draw/dispatch call.

The buffer is of fixed size, which may limit the total amount of information that can be logged; if the buffer overflows, additional logs are dropped. If that’s an issue, you may specify the size manually when creating the root object.

```ts
const root = await tgpu.init({
  unstable_logOptions: {
    logCountLimit: 32,
    logSizeLimit: 8, // in bytes, enough to fit 2*u32
  },
});

/* vertex shader */

const mainFragment = tgpu['~unstable'].fragmentFn({
  in: { pos: d.builtin.position },
  out: d.vec4f,
})(({ pos }) => {
  // this log fits in 8 bytes
  // static strings do not count towards the serialized log size
  console.log('X:', d.u32(pos.x), 'Y:', d.u32(pos.y));
  return d.vec4f(0, 1, 1, 1);
});

/* pipeline creation and draw call */
```

Other supported console functionalities include console.debug, console.info, console.warn, console.error and console.clear.
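
As a rough sketch, these are called from TGSL just like console.log; the diagnostics pipeline and its messages below are made up for illustration:

```ts
const diagnostics = root['~unstable'].createGuardedComputePipeline((x) => {
  'use gpu';
  if (x === 0) {
    // warnings show up in the browser console like regular logs
    console.warn('running diagnostics');
  }
  console.debug('thread', x, 'finished');
});
diagnostics.dispatchThreads(4);
```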

There are some limitations (some of which we intend to alleviate in the future):

  • console.log only works in TGSL, when calling or resolving a TypeGPU pipeline. Otherwise, for example when using tgpu.resolve on a WGSL template, logs are ignored.
  • console.log only works in fragment and compute shaders, due to a WebGPU limitation that disallows modifying buffers during the vertex shader stage.
  • console.log currently does not support template literals; use string substitutions or pass multiple arguments instead, as in the sketch below.
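
For example, a minimal sketch of the workaround inside a TGSL callback (the pipeline is made up for illustration):

```ts
root['~unstable'].createGuardedComputePipeline((x) => {
  'use gpu';
  // console.log(`thread ${x}`);  // template literals are not supported yet
  console.log('thread', x);       // passing multiple arguments works
}).dispatchThreads(4);
```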