TypeGPU functions let you define shader logic in a modular and type-safe way.
Their signatures are fully visible to TypeScript, enabling tooling and static checks.
Dependencies, including GPU resources or other functions, are resolved automatically, with no duplication or name clashes.
This also supports distributing shader logic across multiple modules or packages.
Imported functions from external sources are automatically resolved and embedded into the final shader when referenced.
const neighborhood = (a: number, r: number) => {
  'use gpu';
  return d.vec2f(a - r, a + r);
};
The 'use gpu' directive allows the function to be picked up by our dedicated build plugin, unplugin-typegpu, and transformed into a format TypeGPU can understand. This doesn't alter the fact that the function is still callable from JavaScript, and behaves the same on the CPU and GPU.
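Since the function stays a regular JavaScript function, you can call it directly on the CPU. A quick sketch (the arguments and printed values are just for illustration):

// Runs on the CPU like any other arrow function
const range = neighborhood(2, 0.5);
console.log(range.x, range.y); // 1.5 2.5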
There are three main ways to use TypeGPU functions.
One of them is passing a function as an external to tgpu.resolve, which resolves a template with external values: each external gets resolved to a code string and replaced in the template, and any dependencies of the externals are also resolved and included in the output.
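A rough sketch of what that can look like, assuming the neighborhood function from earlier and the usual tgpu import from 'typegpu' (the template contents are just for illustration):

// Produces a single WGSL string containing `neighborhood` and everything it depends on
const wgsl = tgpu.resolve({
  template: 'fn main() { let r = neighborhood(1.0, 0.5); }',
  externals: { neighborhood },
});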
Another is running a function as part of a pipeline. For example, root.createGuardedComputePipeline(callback) creates a compute pipeline that executes the given callback in an exact number of threads. It differs from withCompute(...).createPipeline() in that it does a bounds check on the thread id, whereas regular pipelines do not and work in units of workgroups. The callback is converted to WGSL and executed on the GPU; it can accept up to 3 parameters (x, y, z), which correspond to the global invocation ID of the executing thread. If no parameters are provided, the callback is executed once, in a single thread; one parameter means n threads will be executed in parallel.

const fooPipeline = root
  .createGuardedComputePipeline(() => {
    'use gpu';
    console.log('Hello, GPU!');
  });

fooPipeline.dispatchThreads();
// [GPU] Hello, GPU!
To make this all work, we perform a small transformation to functions marked with 'use gpu'. Every project's setup is different, and we want to be as non-invasive as possible. The unplugin-typegpu package hooks into existing bundlers and build tools, extracts ASTs from TypeGPU functions and compacts them into our custom format called tinyest. This metadata is injected into the final JS bundle, then used to efficiently generate equivalent WGSL at runtime.
Let's take a closer look at neighborhood versus the WGSL it generates.
// TS
const neighborhood = (a: number, r: number) => {
  'use gpu';
  return d.vec2f(a - r, a + r);
};

// WGSL
fn neighborhood(a: f32, r: f32) -> vec2f {
  return vec2f(a - r, a + r);
}
How does TypeGPU determine that a and r are of type f32, and that the return type is vec2f? You might think that we parse the TypeScript source file and use the types that the user provided in the function signature, but that's not the case.
While generating WGSL, TypeGPU infers the type of each expression, which means it knows the types of values passed in at each call site.
const main = () => {
  'use gpu';
  // A very easy case, just floating point literals, so f32 by default
  return neighborhood(1.1, 0.5);
};
TypeGPU then propagates those types into the function body and analyses the types returned by the function.
If it cannot unify them into a single type, it will throw an error.
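The next paragraph refers to an example along the following lines. This is an illustrative reconstruction (only the names from, to and constantMix come from the text; the rest, including the exact generated WGSL, is assumed):

// TS
const from = d.vec3f(0, 0, 0);
const to = d.vec3f(1, 0, 1);
const constantMix = 0.25;

const shade = (runtimeMix: number) => {
  'use gpu';
  const animated = std.mix(from, to, runtimeMix); // depends on a runtime value
  const base = std.mix(from, to, constantMix);    // fully known when the shader is compiled
  return std.add(animated, base);
};

// WGSL (approximately)
fn shade(runtimeMix: f32) -> vec3f {
  var animated = mix(vec3f(0, 0, 0), vec3f(1, 0, 1), runtimeMix);
  var base = vec3f(0.25, 0, 0.25);
  return animated + base;
}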
Notice how from and to are inlined, and how std.mix(from, to, constantMix) was precomputed. TypeGPU leverages the
fact that these values are known at shader compilation time, and can be optimized away. All other instructions are kept as is,
since they use values known only during shader execution.
After seeing this, you might be tempted to use this mechanism for sharing data between the CPU and GPU, or for defining
global variables used across functions, but values referenced by TypeGPU functions are assumed to be constant.
You can generally assume that all JavaScript syntax is supported, and in the rare cases where it is not, we'll throw a descriptive error either at build time or at runtime (when compiling the shader).
Calling other functions
Only functions marked with 'use gpu' can be called from within a shader. An exception to that rule is console.log, which allows for tracking runtime behavior
of shaders in a familiar way.
Operators
JavaScript does not support operator overloading.
This means that, while you can still use operators for numbers, you have to use supplementary functions from typegpu/std (add, mul, eq, lt, ge…) for operations involving vectors and matrices, or use a fluent interface (abc.mul(xyz), …).
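A short sketch of what that looks like in practice (assuming std is imported from 'typegpu/std'):

const transformPoint = (p: d.v2f, offset: d.v2f, scale: number) => {
  'use gpu';
  // Plain numbers can use regular operators...
  const s = scale * 2;
  // ...but vector math goes through typegpu/std, or the fluent interface: p.mul(s).add(offset)
  return std.add(std.mul(s, p), offset);
};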
Math.*
Utility functions on the Math object can't automatically run on the GPU, but can usually be swapped with functions exported from typegpu/std.
Additionally, if you're able to pull the call to Math.* out of the function, you can store the result in a constant and use it in the function without any problem.
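For example (a minimal sketch):

// Math.sqrt itself can't run on the GPU, but its precomputed result can
const invSqrt2 = 1 / Math.sqrt(2);

const halfDiagonal = (side: number) => {
  'use gpu';
  return side * invSqrt2; // the constant gets inlined into the generated WGSL
};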
TypeGPU provides a set of standard functions under typegpu/std, which you can use in your own TypeGPU functions. Our goal is for all functions to have matching behavior on the CPU and GPU, which unlocks many possibilities (shader unit testing, shared business logic, and more…).
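For instance, the same call can run in plain JavaScript and inside a shader. A sketch, assuming std.clamp mirrors the WGSL builtin of the same name:

import * as std from 'typegpu/std';

// On the CPU, e.g. in a unit test or regular app code
const clamped = std.clamp(1.5, 0, 1); // 1

// On the GPU, inside a TypeGPU function
const saturate = (x: number) => {
  'use gpu';
  return std.clamp(x, 0, 1);
};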
In order to limit a function's signature to specific types, you can wrap it in a shell, an object holding only the input and output types.
The shell constructor tgpu.fn relies on TypeGPU schemas, objects that represent WGSL data types and assist in generating shader code at runtime.
It accepts two arguments:
- an array of schemas representing argument types,
- (optionally) a schema representing the return type.
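For example, a shell for the neighborhood function from earlier could look like this (a sketch; the exact way a shell is paired with its implementation may differ between TypeGPU versions):

// Two f32 arguments, returning a vec2f
const neighborhoodShell = tgpu.fn([d.f32, d.f32], d.vec2f);

// Applying the shell to a JavaScript implementation
const neighborhoodFn = neighborhoodShell((a, r) => {
  return d.vec2f(a - r, a + r);
});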
Shelled WGSL functions can use external resources passed via the $uses method.
Externals can include anything that can be resolved to WGSL by TypeGPU (numbers, vectors, matrices, constants, TypeGPU functions, buffer usages, textures, samplers, slots, accessors etc.).
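As a sketch of the WGSL side (the tagged-template syntax and the tint/tintColor names are assumptions, not taken from this guide):

const PURPLE = d.vec4f(0.6, 0.3, 0.9, 1);

const tint = tgpu.fn([d.vec4f], d.vec4f)`(color: vec4f) -> vec4f {
  return color * tintColor;
}`.$uses({ tintColor: PURPLE });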
Writing shader code in JavaScript has a few significant advantages.
It allows defining utilities once and using them both on the GPU and CPU,
and it enables complete syntax highlighting and autocomplete in TypeGPU function definitions, leading to a better developer experience.
However, there are cases where WGSL might be more suitable.
Since JavaScript doesn't support operator overloading, functions involving complex matrix or vector operations can be more readable in WGSL.
Writing WGSL becomes a necessity whenever TypeGPU does not yet support some feature or standard library function.
Luckily, you don't have to choose one or the other for the entire project. It is possible to mix and match WGSL and JavaScript at every step of the way, so you're not locked into either.
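For instance, a function implemented in WGSL can be called from a 'use gpu' JavaScript function just like any other TypeGPU function. Continuing the tint sketch above:

const fade = (color: d.v4f, amount: number) => {
  'use gpu';
  // `tint` is implemented in WGSL, yet it can be called here like a regular function
  return std.mul(amount, tint(color));
};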
To describe the input and output of an entry point function, we use IORecords, JavaScript objects that map argument names to their types.
const vertexInput = {
  idx: d.builtin.vertexIndex,
  position: d.vec4f,
  color: d.vec4f,
};
As you may note, builtin inter-stage inputs and outputs are available on the d.builtin object,
and require no further type clarification.
Another thing to note is that there is no need to specify locations of the arguments,
as TypeGPU tries to assign locations automatically.
If you wish to, you can assign the locations manually with the d.location decorator.
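For example (a sketch; the vertexOutput record is an assumed counterpart to vertexInput above):

const vertexOutput = {
  pos: d.builtin.position,
  // Pinned to @location(0) instead of the automatically assigned one
  color: d.location(0, d.vec4f),
};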
During WGSL generation, TypeGPU automatically generates structs corresponding to the passed IORecords.
In WGSL implementations, the input and output structs of a given function can be referenced as In and Out respectively.
Headers in WGSL implementations must be omitted; all input values are accessible through the struct named in.
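Putting it together, a vertex entry point built from the records above might look roughly like this (a sketch; the entry-point constructor shown here, tgpu['~unstable'].vertexFn, is an assumption and may differ between versions):

const mainVertex = tgpu['~unstable'].vertexFn({
  in: vertexInput,
  out: vertexOutput,
})`{
  var out: Out;
  out.pos = in.position;
  out.color = in.color;
  return out;
}`;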
// Part of a color attachment in a render pass descriptor
view: context.getCurrentTexture().createView(),
clearValue: [0, 0, 0, 0],
loadOp: