Functions
TypeGPU functions let you define shader logic in a modular and type-safe way. Their signatures are fully visible to TypeScript, enabling tooling and static checks. Dependencies, including GPU resources or other functions, are resolved automatically, with no duplication or name clashes. This also supports distributing shader logic across multiple modules or packages. Imported functions from external sources are automatically resolved and embedded into the final shader when referenced.
Defining a function
The simplest and most powerful way to define TypeGPU functions is to just place `'use gpu'` at the beginning of the function body.

```ts
const neighborhood = (a: number, r: number) => {
  'use gpu';
  return d.vec2f(a - r, a + r);
};
```

The `'use gpu'` directive allows the function to be picked up by our dedicated build plugin, unplugin-typegpu, and transformed into a format TypeGPU can understand. This doesn't alter the fact that the function is still callable from JavaScript, and behaves the same on the CPU and GPU.
There are three main ways to use TypeGPU functions.
```ts
const main = () => {
  'use gpu';
  return neighborhood(1.1, 0.5);
};

// #1) Can be called in JS
const range = main();

// #2) Used to generate WGSL
const wgsl = tgpu.resolve([main]);

// #3) Executed on the GPU (generates WGSL underneath)
root['~unstable']
  .createGuardedComputePipeline(main)
  .dispatchThreads();
```

The contents of the `wgsl` variable would contain the following:

```wgsl
// Generated WGSL
fn neighborhood(a: f32, r: f32) -> vec2f {
  return vec2f(a - r, a + r);
}

fn main() -> vec2f {
  return neighborhood(1.1, 0.5);
}

// ...
```

You can already notice a few things about TypeGPU functions:
- Using operators like `+`, `-`, `*`, `/`, etc. is perfectly valid on numbers.
- TS types are properly inferred, feel free to hover over the variables to see their types.
- The generated code closely matches your source code.
Code transformation
To make this all work, we perform a small transformation to functions marked with `'use gpu'`. Every project's setup is different, and we want to be as non-invasive as possible. The unplugin-typegpu package hooks into existing bundlers and build tools, extracts ASTs from TypeGPU functions and compacts them into our custom format called tinyest. This metadata is injected into the final JS bundle, then used to efficiently generate equivalent WGSL at runtime.
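Registering the plugin usually amounts to a single line in your build config. The snippet below is a minimal sketch assuming the Vite entry point of unplugin-typegpu; the package follows the unplugin convention of exposing entry points per bundler, so check its documentation for the exact import path and options for your setup.

```ts
// vite.config.ts (minimal sketch, assumes the Vite variant of unplugin-typegpu)
import { defineConfig } from 'vite';
import typegpuPlugin from 'unplugin-typegpu/vite';

export default defineConfig({
  // The plugin picks up 'use gpu' functions and embeds their tinyest metadata.
  plugins: [typegpuPlugin()],
});
```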
Type inference
Let's take a closer look at `neighborhood` versus the WGSL it generates.

```ts
// TS
const neighborhood = (a: number, r: number) => {
  'use gpu';
  return d.vec2f(a - r, a + r);
};
```

```wgsl
// WGSL
fn neighborhood(a: f32, r: f32) -> vec2f {
  return vec2f(a - r, a + r);
}
```

How does TypeGPU determine that `a` and `r` are of type f32, and that the return type is vec2f? You might think that we parse the TypeScript source file and use the types that the user provided in the function signature, but that's not the case.

While generating WGSL, TypeGPU infers the type of each expression, which means it knows the types of values passed in at each call site.

```ts
const main = () => {
  'use gpu';
  // A very easy case, just floating point literals, so f32 by default
  return neighborhood(1.1, 0.5);
};
```

TypeGPU then propagates those types into the function body and analyses the types returned by the function. If it cannot unify them into a single type, it will throw an error.
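For example, a function whose branches return values of different types cannot be given a single WGSL signature. A hypothetical case (not from the original example) that would fail during generation:

```ts
const ambiguous = (flag: number) => {
  'use gpu';
  if (flag > 0) {
    return d.vec2f(1, 2); // inferred as vec2f here...
  }
  return d.vec3f(1, 2, 3); // ...but as vec3f here; the return types cannot be unified
};
```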
Polymorphism
For each set of input types, TypeGPU generates a specialized version of the function.

```ts
const main = () => {
  'use gpu';
  const a = neighborhood(0, 1);
  // We can also use casts to coerce values into a specific type.
  const b = neighborhood(d.u32(1), d.f16(5.25));
};
```

```wgsl
// WGSL
fn neighborhood(a: i32, r: i32) -> vec2f {
  return vec2f(f32(a - r), f32(a + r));
}

fn neighborhood2(a: u32, r: f16) -> vec2f {
  return vec2f(f32(f16(a) - r), f32(f16(a) + r));
}

fn main() {
  var a = neighborhood(0, 1);
  var b = neighborhood2(1, 5.25);
}
```

You can limit the types that a function can accept by wrapping it in a shell.
Generics
Since TypeScript types are not taken into account when generating the shader code, there is no limitation on the use of generic types.

```ts
const double = <T extends d.v2f | d.v3f | d.v4f>(a: T): T => {
  'use gpu';
  return std.mul(a, a);
};
```

You can explore the set of standard functions in the API Reference.
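As a quick illustration (not part of the original example), each call site below would specialize `double` for a different vector type, while TypeScript keeps the return types in sync:

```ts
const useDouble = () => {
  'use gpu';
  const a = double(d.vec2f(1, 2));    // a is inferred as d.v2f
  const b = double(d.vec3f(1, 2, 3)); // b is inferred as d.v3f
};
```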
The outer scope
Things from the outer scope can be referenced inside TypeGPU functions, and they'll be automatically included in the generated shader code.

```ts
const from = d.vec3f(1, 0, 0);
const to = d.vec3f(0, 1, 0);
const constantMix = 0.5;

const getColor = (t: number) => {
  'use gpu';
  if (t > 0.5) {
    // Above a certain threshold, mix the colors with a constant value
    return std.mix(from, to, constantMix);
  }
  return std.mix(from, to, t);
};
```

The above generates the following WGSL:

```wgsl
fn getColor(t: f32) -> vec3f {
  if (t > 0.5) {
    return vec3f(0.5, 0.5, 0);
  }
  return mix(vec3f(1, 0, 0), vec3f(0, 1, 0), t);
}
```

Notice how `from` and `to` are inlined, and how `std.mix(from, to, constantMix)` was precomputed. TypeGPU leverages the fact that these values are known at shader compilation time, and can be optimized away. All other instructions are kept as is, since they use values known only during shader execution.
After seeing this, you might be tempted to use this mechanism for sharing data between the CPU and GPU, or for defining global variables used across functions, but values referenced by TypeGPU functions are assumed to be constant.
```ts
const settings = {
  speed: 1,
};

const pipeline = root['~unstable'].createGuardedComputePipeline(() => {
  'use gpu';
  const speed = settings.speed;
  //    ^ generates: var speed = 1;

  // ...
});

pipeline.dispatchThreads();

// 🚫🚫🚫 This is NOT allowed 🚫🚫🚫
settings.speed = 1.5;

// the shader doesn't get recompiled with the new value
// of `speed`, so it's still 1.
pipeline.dispatchThreads();
```

There are explicit mechanisms that allow you to achieve this:

- Use buffers to efficiently share data between the CPU and GPU (sketched below)
- Use variables to share state between functions
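As a rough sketch of the buffer-based approach (assuming the buffer API covered in the Buffers guide: `root.createBuffer(...).$usage('uniform')`, `.as('uniform')`, reading the usage via `.value` inside a function, and `.write(...)` on the CPU), the earlier `settings` example could become:

```ts
const speedBuffer = root.createBuffer(d.f32, 1).$usage('uniform');
const speedUniform = speedBuffer.as('uniform');

const pipeline = root['~unstable'].createGuardedComputePipeline(() => {
  'use gpu';
  const speed = speedUniform.value; // read at shader execution time, not inlined
  // ...
});

pipeline.dispatchThreads();

// Updating the buffer is reflected in subsequent dispatches.
speedBuffer.write(1.5);
pipeline.dispatchThreads();
```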
Supported JavaScript functionality
You can generally assume that all JavaScript syntax is supported, and on the occasion that it is not, we'll throw a descriptive error either at build time or at runtime (when compiling the shader).

- Calling other functions: Only functions marked with `'use gpu'` can be called from within a shader. An exception to that rule is `console.log`, which allows for tracking runtime behavior of shaders in a familiar way.
- Operators: JavaScript does not support operator overloading. This means that, while you can still use operators for numbers, you have to use supplementary functions from `typegpu/std` (`add`, `mul`, `eq`, `lt`, `ge`…) for operations involving vectors and matrices, or use a fluent interface (`abc.mul(xyz)`, …).
- Math.*: Utility functions on the `Math` object can't automatically run on the GPU, but can usually be swapped with functions exported from `typegpu/std`. Additionally, if you're able to pull the call to `Math.*` out of the function, you can store the result in a constant and use it in the function no problem, as shown in the sketch below.
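A small sketch of those workarounds (the function itself is illustrative): vector math goes through `typegpu/std`, and the `Math.*` call is hoisted into a constant outside the function body.

```ts
const TWO_PI = Math.PI * 2; // computed once on the CPU, inlined as a constant

const onUnitCircle = (t: number) => {
  'use gpu';
  const angle = t * TWO_PI; // operators on plain numbers are fine
  const dir = d.vec2f(std.cos(angle), std.sin(angle));
  return std.add(dir, d.vec2f(1, 0)); // vector operations use typegpu/std
};
```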
Standard library
TypeGPU provides a set of standard functions under `typegpu/std`, which you can use in your own TypeGPU functions. Our goal is for all functions to have matching behavior on the CPU and GPU, which unlocks many possibilities (shader unit testing, shared business logic, and more…).

```ts
import * as d from 'typegpu/data';
import * as std from 'typegpu/std';

function manhattanDistance(a: d.v3f, b: d.v3f) {
  'use gpu';
  const dx = std.abs(a.x - b.x);
  const dy = std.abs(a.y - b.y);
  const dz = std.abs(a.z - b.z);

  return std.max(dx, dy, dz);
}
```
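Because the body above only relies on `typegpu/std` calls, the very same function can also be executed on the CPU, for example in a unit test (illustrative usage):

```ts
const dist = manhattanDistance(d.vec3f(1, 2, 3), d.vec3f(0, 0, 0));
console.log(dist); // 3
```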
Function shells

In order to limit a function's signature to specific types, you can wrap it in a shell, an object holding only the input and output types.

The shell constructor `tgpu.fn` relies on TypeGPU schemas, objects that represent WGSL data types and assist in generating shader code at runtime. It accepts two arguments:

- An array of schemas representing argument types,
- (Optionally) a schema representing the return type.
```ts
const neighborhoodShell = tgpu.fn([d.f32, d.f32], d.vec2f);

// Works the same as `neighborhood`, but more strictly typed
const neighborhoodF32 = neighborhoodShell(neighborhood);
```

Although you can define the function and shell separately, the most common way to use shells is immediately wrapping functions with them:
```ts
const neighborhood = tgpu.fn([d.f32, d.f32], d.vec2f)((a, r) => {
  'use gpu';
  return d.vec2f(a - r, a + r);
});
```

Implementing functions in WGSL
Instead of passing JavaScript functions to shells, you can pass WGSL code directly:
```ts
const neighborhood = tgpu.fn([d.f32, d.f32], d.vec2f)`(a: f32, r: f32) -> vec2f {
  return vec2f(a - r, a + r);
}`;
```

Since type information is already present in the shell, the WGSL header can be simplified to include only the argument names.
```ts
const neighborhood = tgpu.fn([d.f32, d.f32], d.vec2f)`(a, r) {
  return vec2f(a - r, a + r);
}`;
```

Including external resources
Shelled WGSL functions can use external resources passed via the `$uses` method. Externals can include anything that can be resolved to WGSL by TypeGPU (numbers, vectors, matrices, constants, TypeGPU functions, buffer usages, textures, samplers, slots, accessors, etc.).

```ts
const getBlue = tgpu.fn([], d.vec4f)`() {
  return vec4f(0.114, 0.447, 0.941, 1);
}`;

// Calling a schema to create a value on the JS side
const purple = d.vec4f(0.769, 0.392, 1.0, 1);

const getGradientColor = tgpu.fn([d.f32], d.vec4f)`(ratio) {
  return mix(purple, get_blue(), ratio);
}`.$uses({ purple, get_blue: getBlue });
```

You can see for yourself what `getGradientColor` resolves to by calling `tgpu.resolve`; all relevant definitions will be automatically included:

```wgsl
// results of calling tgpu.resolve([getGradientColor])

fn getBlue_1() -> vec4f {
  return vec4f(0.114, 0.447, 0.941, 1);
}

fn getGradientColor_0(ratio: f32) -> vec4f {
  return mix(vec4f(0.769, 0.392, 1, 1), getBlue_1(), ratio);
}
```

Notice how `purple` was inlined in the final shader, and the reference to `get_blue` was replaced with the function's eventual name of `getBlue_1`.
When to use JavaScript / WGSL
Writing shader code in JavaScript has a few significant advantages. It allows defining utilities once and using them both on the GPU and CPU, and it enables complete syntax highlighting and autocomplete in TypeGPU function definitions, leading to a better developer experience.

However, there are cases where WGSL might be more suitable. Since JavaScript doesn't support operator overloading, functions including complex matrix or vector operations can be more readable in WGSL. Writing WGSL becomes a necessity whenever TypeGPU does not yet support some feature or standard library function.

Luckily, you don't have to choose one or the other for the entire project. It is possible to mix and match WGSL and JavaScript at every step of the way, so you're never locked into a single approach.
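For instance (a hypothetical pairing, not taken from the examples above), a WGSL-implemented helper can be called from a JavaScript-defined function:

```ts
// WGSL reads well for terse vector math...
const lengthSquared = tgpu.fn([d.vec2f], d.f32)`(v) {
  return dot(v, v);
}`;

// ...while the surrounding logic stays in JavaScript.
const isLongerThan = (v: d.v2f, limit: number) => {
  'use gpu';
  return lengthSquared(v) > limit * limit;
};
```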
Entry functions
Instead of annotating a TgpuFn with attributes, entry functions are defined using dedicated shell constructors:

- `tgpu['~unstable'].computeFn`,
- `tgpu['~unstable'].vertexFn`,
- `tgpu['~unstable'].fragmentFn`.
Entry point function I/O
To describe the input and output of an entry point function, we use IORecords, JavaScript objects that map argument names to their types.

```ts
const vertexInput = {
  idx: d.builtin.vertexIndex,
  position: d.vec4f,
  color: d.vec4f,
};
```

As you may note, builtin inter-stage inputs and outputs are available on the `d.builtin` object, and require no further type clarification.

Another thing to note is that there is no need to specify locations of the arguments, as TypeGPU tries to assign locations automatically. If you wish to, you can assign the locations manually with the `d.location` decorator.
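For example (an illustrative IORecord, assuming the `d.location` decorator mentioned above), automatic and manual locations can be mixed:

```ts
const fragmentInput = {
  pos: d.builtin.position,       // builtin, no location needed
  uv: d.vec2f,                   // location assigned automatically
  color: d.location(2, d.vec4f), // location pinned explicitly
};
```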
During WGSL generation, TypeGPU automatically generates structs corresponding to the passed IORecords. In WGSL implementations, the input and output structs of the given function can be referenced as `In` and `Out` respectively. Headers in WGSL implementations must be omitted; all input values are accessible through the struct named `in`.
Compute
TgpuComputeFn accepts an object with two properties:

- `in`: an IORecord describing the input of the function,
- `workgroupSize`: a JS array of 1-3 numbers that corresponds to the `@workgroup_size` attribute.

```ts
const mainCompute = tgpu['~unstable'].computeFn({
  in: { gid: d.builtin.globalInvocationId },
  workgroupSize: [1],
}) /* wgsl */`{
  let index = in.gid.x;
  if index == 0 {
    time += deltaTime;
  }
  let phase = (time / 300) + particleData[index].seed;
  particleData[index].position += particleData[index].velocity * deltaTime / 20 +
    vec2f(sin(phase) / 600, cos(phase) / 500);
}`.$uses({ particleData: particleDataStorage, deltaTime, time });
```

Resolved WGSL for the compute function above is equivalent (with respect to some cleanup) to the following:

```wgsl
@group(0) @binding(0) var<storage, read_write> particleData: array<u32, 100>;
@group(0) @binding(1) var<uniform> deltaTime: f32;
@group(0) @binding(2) var<storage, read_write> time: f32;

struct mainCompute_Input {
  @builtin(global_invocation_id) gid: vec3u,
}

@compute @workgroup_size(1) fn mainCompute(in: mainCompute_Input) {
  let index = in.gid.x;
  if index == 0 {
    time += deltaTime;
  }
  let phase = (time / 300) + particleData[index].seed;
  particleData[index].position += particleData[index].velocity * deltaTime / 20 +
    vec2f(sin(phase) / 600, cos(phase) / 500);
}
```

Vertex and fragment
TgpuVertexFn accepts an object with two properties:

- `in`: an IORecord describing the input of the function,
- `out`: an IORecord describing the output of the function.

TgpuFragmentFn accepts an object with two properties:

- `in`: an IORecord describing the input of the function,
- `out`: `d.vec4f`, or an IORecord describing the output of the function.
```ts
const mainVertex = tgpu['~unstable'].vertexFn({
  in: { vertexIndex: d.builtin.vertexIndex },
  out: { outPos: d.builtin.position, uv: d.vec2f },
}) /* wgsl */`{
  var pos = array<vec2f, 3>(
    vec2(0.0, 0.5),
    vec2(-0.5, -0.5),
    vec2(0.5, -0.5)
  );

  var uv = array<vec2f, 3>(
    vec2(0.5, 1.0),
    vec2(0.0, 0.0),
    vec2(1.0, 0.0),
  );

  return Out(vec4f(pos[in.vertexIndex], 0.0, 1.0), uv[in.vertexIndex]);
}`;

const mainFragment = tgpu['~unstable'].fragmentFn({
  in: { uv: d.vec2f },
  out: d.vec4f,
}) /* wgsl */`{
  return getGradientColor((in.uv[0] + in.uv[1]) / 2);
}`.$uses({ getGradientColor });
```

Resolved WGSL for the pipeline including the two entry point functions above is equivalent (with respect to some cleanup) to the following:

```wgsl
struct mainVertex_Input {
  @builtin(vertex_index) vertexIndex: u32,
}

struct mainVertex_Output {
  @builtin(position) outPos: vec4f,
  @location(0) uv: vec2f,
}

@vertex fn mainVertex(in: mainVertex_Input) -> mainVertex_Output {
  var pos = array<vec2f, 3>(
    vec2(0.0, 0.5),
    vec2(-0.5, -0.5),
    vec2(0.5, -0.5)
  );

  var uv = array<vec2f, 3>(
    vec2(0.5, 1.0),
    vec2(0.0, 0.0),
    vec2(1.0, 0.0),
  );

  return mainVertex_Output(vec4f(pos[in.vertexIndex], 0.0, 1.0), uv[in.vertexIndex]);
}

fn getGradientColor(ratio: f32) -> vec4f {
  return mix(vec4f(0.769, 0.392, 1, 1), vec4f(0.114, 0.447, 0.941, 1), ratio);
}

struct mainFragment_Input {
  @location(0) uv: vec2f,
}

@fragment fn mainFragment(in: mainFragment_Input) -> @location(0) vec4f {
  return getGradientColor((in.uv[0] + in.uv[1]) / 2);
}
```

Usage in pipelines
Typed functions are crucial for simplified pipeline creation offered by TypeGPU. You can define and run pipelines as follows:

```ts
const pipeline = root['~unstable']
  .withVertex(mainVertex, {})
  .withFragment(mainFragment, { format: presentationFormat })
  .createPipeline();

pipeline
  .withColorAttachment({
    view: context.getCurrentTexture().createView(),
    clearValue: [0, 0, 0, 0],
    loadOp: 'clear',
    storeOp: 'store',
  })
  .draw(3);
```

The rendering result looks like this:

[rendering result image]
You can check out the full example on our examples page.