TypeGPU introduces a custom API to easily define and execute render and compute pipelines.
It abstracts away the standard WebGPU procedures to offer a convenient, type-safe way to run shaders on the GPU.
The createRenderPipeline method creates a render pipeline by accepting an options object that specifies the vertex function, fragment function, targets, and optional additional settings.
vertex: The TgpuVertexFn or 'use gpu' callback to use as the vertex shader.
fragment: The TgpuFragmentFn or 'use gpu' callback to use as the fragment shader.
targets: A record defining the formats and behaviors of the color targets, similar to WebGPU’s GPUColorTargetState, but as a record with named targets.
depthStencil (optional): Depth-stencil state, same as WebGPU’s GPUDepthStencilState.
multisample (optional): Multisample state, same as WebGPU’s GPUMultisampleState.
primitive (optional): Primitive state, same as WebGPU’s GPUPrimitiveState.
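The main difference from WebGPU's GPURenderPipelineDescriptor is that color targets form a record keyed by name rather than a positional array. The mapping between the two shapes can be illustrated with a small CPU sketch (a hypothetical helper for illustration, not TypeGPU's actual implementation):

```typescript
// Illustrative sketch: turning a record of named color targets into the
// positional array WebGPU expects, given each output's assigned location.
// (Hypothetical helper; not TypeGPU's actual code.)
type TargetState = { format: string };

function targetsToArray(
  targets: Record<string, TargetState>,
  locations: Record<string, number>,
): TargetState[] {
  const out: TargetState[] = [];
  for (const [name, state] of Object.entries(targets)) {
    // Each named target lands at the location assigned to the
    // fragment output of the same name.
    out[locations[name]] = state;
  }
  return out;
}
```

The record form lets the library pair each target with the fragment output of the same name, so the user never has to keep an array in sync with location indices by hand.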
When the pipeline executes, the vertex function’s non-builtin input parameters are matched to the vertex attributes specified in the pipeline’s vertex layout, and the attributes are validated for compatibility at the type level. Likewise, the compatibility of the vertex output and the fragment input is ensured at the type level. These parameters are identified by their names, not by their numeric location index.
In general, when using vertex and fragment functions with TypeGPU pipelines, it is not necessary to set locations on the IO struct properties.
The library automatically matches up the corresponding members (by their names) and assigns common locations to them.
When a custom location is provided by the user (via the d.location attribute function) it is respected by the automatic assignment procedure,
as long as there is no conflict between vertex and fragment location values.
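The assignment procedure described above can be sketched on the CPU as follows. This is an illustrative reimplementation under stated assumptions, not the library's actual code; the member shape and helper name are hypothetical:

```typescript
// Hypothetical sketch of name-based location assignment (not TypeGPU's actual code).
// In the real API an explicit location would come from the d.location attribute function.
type IoMember = { name: string; explicitLocation?: number };

function assignLocations(members: IoMember[]): Map<string, number> {
  const assigned = new Map<string, number>();
  const taken = new Set<number>();

  // User-provided locations are respected first...
  for (const m of members) {
    if (m.explicitLocation !== undefined) {
      if (taken.has(m.explicitLocation)) {
        throw new Error(`Conflicting location ${m.explicitLocation}`);
      }
      assigned.set(m.name, m.explicitLocation);
      taken.add(m.explicitLocation);
    }
  }

  // ...then the remaining members receive the lowest free locations.
  let next = 0;
  for (const m of members) {
    if (assigned.has(m.name)) continue;
    while (taken.has(next)) next++;
    assigned.set(m.name, next);
    taken.add(next);
  }
  return assigned;
}
```

Because both stages resolve members by name, running the same procedure over the vertex output and the fragment input yields matching locations on both sides.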
The createGuardedComputePipeline method streamlines running simple computations on the GPU.
Instead of dispatching workgroups, the guarded pipeline allows calling an exact number of GPU threads. Think of it as a parallelized for loop.
Under the hood, it creates a compute pipeline that calls the provided callback only if the current thread ID is within the requested range.
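The guard can be illustrated with a CPU analogy (a sketch of the idea, not the actual generated WGSL; the workgroup size here is hypothetical):

```typescript
// CPU analogy of a guarded dispatch (illustrative, not actual TypeGPU code).
const WORKGROUP_SIZE = 64; // hypothetical 1D workgroup size

function dispatchThreadsSim(threadCount: number, callback: (x: number) => void) {
  // The dispatch must be rounded up to whole workgroups.
  const workgroups = Math.ceil(threadCount / WORKGROUP_SIZE);
  for (let x = 0; x < workgroups * WORKGROUP_SIZE; x++) {
    // The guard: threads past the requested count do nothing.
    if (x < threadCount) callback(x);
  }
}
```

Requesting 100 threads still launches 2 × 64 = 128 invocations, but only the first 100 reach the callback.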
This differs from withCompute(...).createPipeline() in that it performs a bounds check on the thread id, whereas regular pipelines do not and work in units of workgroups. The callback is converted to WGSL and executed on the GPU. It can accept up to 3 parameters (x, y, z), which correspond to the global invocation ID of the executing thread.

If no parameters are provided, the callback is executed once, in a single thread:

const fooPipeline = root
  .createGuardedComputePipeline(() => {
    'use gpu';
    console.log('Hello, GPU!');
  });

fooPipeline.dispatchThreads();
// [GPU] Hello, GPU!
The pipeline is then dispatched with dispatchThreads. Unlike TgpuComputePipeline.dispatchWorkgroups(), this method takes in the number of threads to run in each dimension. Under the hood, the number of expected threads is sent as a uniform and "guarded" by a bounds check; for example, dispatchThreads(5) runs the callback for thread ids 0 through 4.
The callback can have up to three parameters (dimensions). One parameter means n threads will be executed in parallel:

const fooPipeline = root
  .createGuardedComputePipeline((x) => {
    'use gpu';
    if (x % 16 === 0) {
      // Logging every 16th thread
      console.log('I am the', x, 'thread');
    }
  });

// executing 512 threads
fooPipeline.dispatchThreads(512);
// [GPU] I am the 256 thread
// [GPU] I am the 272 thread
// ... (30 hidden logs)
// [GPU] I am the 16 thread
// [GPU] I am the 240 thread

createGuardedComputePipeline also streamlines initializing buffers with data directly on the GPU, which reduces serialization overhead. Buffer initialization commonly relies on random number generators; for that, you can use the @typegpu/noise library.
Threads do not share the generator's state. As a result, unless you change the seed in each thread, every thread will produce the same sequence. randf.seed2 sets the private seed of the calling thread; for best results, all elements of the seed should be in the [-1000, 1000] range.

const pipeline = root
  .createGuardedComputePipeline((x, y) => {
    'use gpu';
    randf.seed2(d.vec2f(x, y));
    // ... generate and write random values
  });

pipeline.dispatchThreads(1024, 512);
// callback will be called for x in range 0..1023 and y in range 0..511
// (optional) read values in JS
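Why per-thread seeding matters can be demonstrated on the CPU with a toy generator (a plain linear congruential generator for illustration; @typegpu/noise uses its own algorithm):

```typescript
// Toy linear congruential generator (illustration only;
// NOT the algorithm used by @typegpu/noise).
function makeLcg(seed: number) {
  let state = seed >>> 0;
  return () => {
    state = (state * 1664525 + 1013904223) >>> 0;
    return state / 2 ** 32; // uniform-ish float in [0, 1)
  };
}

// Two "threads" with the same seed produce identical sequences...
const threadA = makeLcg(42);
const threadB = makeLcg(42);
// ...while a distinct per-thread seed decorrelates them.
const threadC = makeLcg(43);
```

Seeding each GPU thread with a value derived from its invocation ID (as in randf.seed2(d.vec2f(x, y)) above) plays the role of the distinct seed here.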
Render pipelines require specifying a color attachment for each target.
The attachments are specified in the same way as in the WebGPU API (but accept both TypeGPU resources and regular WebGPU ones). However, similar to the targets argument, multiple targets need to be passed in as a record, with each target identified by name.
Similarly, when using withDepthStencil it is necessary to pass in a depth stencil attachment, via the withDepthStencilAttachment method.
Before executing pipelines, it is necessary to bind all of the utilized resources, like bind groups, vertex buffers and slots. It is done using the with method. It accepts either a bind group (render and compute pipelines) or a vertex layout and a vertex buffer (render pipelines only).
Pipelines also expose the withPerformanceCallback and withTimestampWrites methods for timing the execution time on the GPU.
For more info about them, refer to the Timing Your Pipelines guide.
After creating the render pipeline and setting all of the attachments, it can be put to use by calling the draw method.
It accepts the number of vertices and optionally the instance count, first vertex index and first instance index.
Calling the method schedules the shader for execution immediately.
Compute pipelines are executed using the dispatchWorkgroups method, which accepts the number of workgroups in each dimension.
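Since dispatchWorkgroups counts workgroups rather than threads, covering a given thread count takes a ceiling division per dimension. This is exactly the arithmetic that dispatchThreads hides (general WebGPU arithmetic; the helper name is hypothetical):

```typescript
// How many workgroups are needed to cover a given thread count per dimension.
// (General WebGPU arithmetic; the helper itself is hypothetical.)
function workgroupsFor(threads: number[], workgroupSize: number[]): number[] {
  // Missing workgroup-size entries default to 1, as in WGSL's @workgroup_size.
  return threads.map((t, i) => Math.ceil(t / (workgroupSize[i] ?? 1)));
}
```

For instance, covering 1024 × 512 threads with an 8 × 8 workgroup takes a 128 × 64 dispatch.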
The drawIndexed is analogous to draw, but takes advantage of index buffer to explicitly map vertex data onto primitives. When using an index buffer, you don’t need to list every vertex for every primitive explicitly. Instead, you provide a list of unique vertices in a vertex buffer. Then, the index buffer defines how these vertices are connected to form primitives.
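For example, a quad in the 'triangle-list' topology needs only four unique vertices when indexed (illustrative data, not tied to any particular pipeline):

```typescript
// A quad drawn as two triangles. Without indexing, 6 vertices would be
// listed, two of them duplicated; with an index buffer, 4 unique vertices
// suffice.
const vertices: [number, number][] = [
  [-1, -1], // 0: bottom-left
  [ 1, -1], // 1: bottom-right
  [ 1,  1], // 2: top-right
  [-1,  1], // 3: top-left
];

// Each consecutive triple of indices forms one triangle;
// vertices 0 and 2 are shared between the two triangles.
const indices = [0, 1, 2, 0, 2, 3];
```

Here a drawIndexed call with a count of 6 draws both triangles while the vertex buffer holds only the 4 unique vertices.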
The higher-level API has several limitations, so another way of executing pipelines is exposed for more demanding, custom scenarios. For example, with the high-level API it is not possible to execute multiple pipelines within a single render pass, and some more niche features of the WebGPU API may be missing.
root['~unstable'].beginRenderPass is a method that mirrors the WebGPU API, but enriches it with direct TypeGPU resource support.
root['~unstable'].beginRenderPass(
  {
    colorAttachments: [{
      ...
    }],
  },
  (pass) => {
    pass.setPipeline(renderPipeline);
    pass.setBindGroup(layout, group);
    pass.draw(3);
  },
);
It is also possible to access the underlying WebGPU resources for the TypeGPU pipelines, by calling root.unwrap(pipeline).
That way, they can be used with the regular WebGPU API; unlike the root['~unstable'].beginRenderPass API, however, this approach also requires unwrapping all the necessary resources.