Functions

TypeGPU functions let you define shader logic in a modular and type-safe way. Their signatures are fully visible to TypeScript, enabling tooling and static checks. Dependencies, including GPU resources or other functions, are resolved automatically, with no duplication or name clashes. This also supports distributing shader logic across multiple modules or packages. Imported functions from external sources are automatically resolved and embedded into the final shader when referenced.

The simplest and most powerful way to define TypeGPU functions is to just place 'use gpu' at the beginning of the function body.

const neighborhood = (a: number, r: number) => {
  'use gpu';
  return d.vec2f(a - r, a + r);
};

The 'use gpu' directive allows the function to be picked up by our dedicated build plugin, unplugin-typegpu, and transformed into a format TypeGPU can understand. This doesn't alter the fact that the function is still callable from JavaScript, and behaves the same on the CPU and GPU.

There are three main ways to use TypeGPU functions.

const main = () => {
  'use gpu';
  return neighborhood(1.1, 0.5);
};

// #1) Can be called in JS
const range = main();

// #2) Used to generate WGSL
const wgsl = tgpu.resolve({ externals: { main } });

// #3) Executed on the GPU (generates WGSL underneath)
root['~unstable']
  .createGuardedComputePipeline(main)
  .dispatchThreads();

The wgsl variable would contain the following:

// Generated WGSL
fn neighborhood(a: f32, r: f32) -> vec2f {
  return vec2f(a - r, a + r);
}

fn main() -> vec2f {
  return neighborhood(1.1, 0.5);
}

// ...

You can already notice a few things about TypeGPU functions:

  • Using operators like +, -, *, /, etc. is perfectly valid on numbers.
  • TS types are properly inferred, feel free to hover over the variables to see their types.
  • The generated code closely matches your source code.

To make this all work, we perform a small transformation to functions marked with 'use gpu'. Every project’s setup is different, and we want to be as non-invasive as possible. The unplugin-typegpu package hooks into existing bundlers and build tools, extracts ASTs from TypeGPU functions and compacts them into our custom format called tinyest. This metadata is injected into the final JS bundle, then used to efficiently generate equivalent WGSL at runtime.
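
For reference, here is a minimal sketch of wiring the plugin into a Vite project. The exact entry point and options may differ for your bundler and plugin version, so treat the import path as an assumption and check the unplugin-typegpu documentation:

// vite.config.ts (a sketch; assumes the standard unplugin-style entry point)
import { defineConfig } from 'vite';
import typegpu from 'unplugin-typegpu/vite';

export default defineConfig({
  plugins: [
    // Transforms functions marked with 'use gpu' into the tinyest format
    // at build time, so TypeGPU can generate equivalent WGSL at runtime.
    typegpu(),
  ],
});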

Let’s take a closer look at neighborhood versus the WGSL it generates.

// TS
const neighborhood = (a: number, r: number) => {
  'use gpu';
  return d.vec2f(a - r, a + r);
};

// WGSL
fn neighborhood(a: f32, r: f32) -> vec2f {
  return vec2f(a - r, a + r);
}

How does TypeGPU determine that a and r are of type f32, and that the return type is vec2f? You might think that we parse the TypeScript source file and use the types that the user provided in the function signature, but that’s not the case.

While generating WGSL, TypeGPU infers the type of each expression, which means it knows the types of values passed in at each call site.

const main = () => {
  'use gpu';
  // A very easy case, just floating point literals, so f32 by default
  return neighborhood(1.1, 0.5);
};

TypeGPU then propagates those types into the function body and analyses the types returned by the function. If it cannot unify them into a single type, it will throw an error.
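
As an illustration of what cannot be unified, here is a sketch of a function whose branches return different vector types; the exact error message and the point at which it is reported may vary:

const broken = (flag: number) => {
  'use gpu';
  if (flag > 0) {
    return d.vec2f(1, 2); // this branch returns a vec2f...
  }
  return d.vec3f(1, 2, 3); // ...while this one returns a vec3f
};
// When `broken` is used in a shader, TypeGPU cannot unify vec2f and
// vec3f into a single return type and reports an error instead.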

For each set of input types, TypeGPU generates a specialized version of the function.

const main = () => {
  'use gpu';
  const a = neighborhood(0, 1);

  // We can also use casts to coerce values into a specific type.
  const b = neighborhood(d.u32(1), d.f16(5.25));
};

// WGSL
fn neighborhood(a: i32, r: i32) -> vec2f {
  return vec2f(f32(a - r), f32(a + r));
}

fn neighborhood2(a: u32, r: f16) -> vec2f {
  return vec2f(f32(f16(a) - r), f32(f16(a) + r));
}

fn main() {
  var a = neighborhood(0, 1);
  var b = neighborhood2(1, 5.25);
}

You can limit the types that a function can accept by wrapping it in a shell.

Since TypeScript types are not taken into account when generating the shader code, there is no limitation on the use of generic types.

const double = <T extends d.v2f | d.v3f | d.v4f>(a: T): T => {
  'use gpu';
  return std.mul(a, a);
};

You can explore the set of standard functions in the API Reference.

Things from the outer scope can be referenced inside TypeGPU functions, and they’ll be automatically included in the generated shader code.

const from = d.vec3f(1, 0, 0);
const to = d.vec3f(0, 1, 0);
const constantMix = 0.5;

const getColor = (t: number) => {
  'use gpu';
  if (t > 0.5) {
    // Above a certain threshold, mix the colors with a constant value
    return std.mix(from, to, constantMix);
  }
  return std.mix(from, to, t);
};

The above generates the following WGSL:

fn getColor(t: f32) -> vec3f {
  if (t > 0.5) {
    return vec3f(0.5, 0.5, 0);
  }
  return mix(vec3f(1, 0, 0), vec3f(0, 1, 0), t);
}

Notice how from and to are inlined, and how std.mix(from, to, constantMix) was precomputed. TypeGPU leverages the fact that these values are known at shader compilation time, and can be optimized away. All other instructions are kept as is, since they use values known only during shader execution.

After seeing this, you might be tempted to use this mechanism for sharing data between the CPU and GPU, or for defining global variables used across functions, but values referenced by TypeGPU functions are assumed to be constant.

const settings = {
  speed: 1,
};

const pipeline = root['~unstable'].createGuardedComputePipeline(() => {
  'use gpu';
  const speed = settings.speed;
  //    ^ generates: var speed = 1;
  // ...
});

pipeline.dispatchThreads();

// 🚫🚫🚫 This is NOT allowed 🚫🚫🚫
settings.speed = 1.5;

// the shader doesn't get recompiled with the new value
// of `speed`, so it's still 1.
pipeline.dispatchThreads();

There are explicit mechanisms, such as uniforms and mutable buffers, that allow you to achieve this.
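
For illustration, here is a rough sketch of the same example using a uniform instead of a captured constant. It assumes root.createUniform, the GPU-side .value accessor, and the JS-side .write method behave as they do elsewhere in TypeGPU, so double-check against the buffers documentation:

// A sketch: a uniform holds a value that can change between dispatches.
const speed = root.createUniform(d.f32, 1);

const pipeline = root['~unstable'].createGuardedComputePipeline(() => {
  'use gpu';
  const s = speed.value; // reads the current uniform value on the GPU
  // ...
});

pipeline.dispatchThreads(); // runs with speed == 1
speed.write(1.5);           // updates the value visible to the shader
pipeline.dispatchThreads(); // runs with speed == 1.5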

You can generally assume that all JavaScript syntax is supported, and when it is not, we'll throw a descriptive error either at build time or at runtime (when compiling the shader). A few things to keep in mind:

  • Calling other functions: Only functions marked with 'use gpu' can be called from within a shader. An exception to that rule is console.log, which allows for tracking runtime behavior of shaders in a familiar way.

  • Operators: JavaScript does not support operator overloading. This means that, while you can still use operators for numbers, you have to use supplementary functions from typegpu/std (add, mul, eq, lt, ge…) for operations involving vectors and matrices, or use a fluent interface (abc.mul(xyz), …).

  • Math.*: Utility functions on the Math object can't automatically run on the GPU, but can usually be swapped with functions exported from typegpu/std. Additionally, if you're able to pull the call to Math.* out of the function, you can store the result in a constant and use it in the function with no problem, as in the sketch below.
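
Here is a minimal sketch of that last point; GOLDEN_ANGLE is just an illustrative constant computed once in JavaScript and inlined into the shader, exactly like constantMix above:

// Computed once on the CPU; referenced values are treated as constants,
// so the literal result is inlined into the generated WGSL.
const GOLDEN_ANGLE = Math.PI * (3 - Math.sqrt(5));

const nthAngle = (n: number) => {
  'use gpu';
  return n * GOLDEN_ANGLE; // fine: operates on plain numbers
};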

TypeGPU provides a set of standard functions under typegpu/std, which you can use in your own TypeGPU functions. Our goal is for all functions to have matching behavior on the CPU and GPU, which unlocks many possibilities (shader unit testing, shared business logic, and more…).

import * as d from 'typegpu/data';
import * as std from 'typegpu/std';

function manhattanDistance(a: d.v3f, b: d.v3f) {
  'use gpu';
  const dx = std.abs(a.x - b.x);
  const dy = std.abs(a.y - b.y);
  const dz = std.abs(a.z - b.z);
  return std.max(dx, std.max(dy, dz));
}

In order to limit a function’s signature to specific types, you can wrap it in a shell, an object holding only the input and output types. The shell constructor tgpu.fn relies on TypeGPU schemas, objects that represent WGSL data types and assist in generating shader code at runtime. It accepts two arguments:

  • An array of schemas representing argument types,
  • (Optionally) a schema representing the return type.
const neighborhoodShell = tgpu.fn([d.f32, d.f32], d.vec2f);

// Works the same as `neighborhood`, but more strictly typed
const neighborhoodF32 = neighborhoodShell(neighborhood);

Although you can define the function and shell separately, the most common way to use shells is immediately wrapping functions with them:

const neighborhood = tgpu.fn([d.f32, d.f32], d.vec2f)((a, r) => {
  'use gpu';
  return d.vec2f(a - r, a + r);
});

Instead of passing JavaScript functions to shells, you can pass WGSL code directly:

const neighborhood = tgpu.fn([d.f32, d.f32], d.vec2f)`(a: f32, r: f32) -> vec2f {
  return vec2f(a - r, a + r);
}`;

Since type information is already present in the shell, the WGSL header can be simplified to include only the argument names.

const neighborhood = tgpu.fn([d.f32, d.f32], d.vec2f)`(a, r) {
  return vec2f(a - r, a + r);
}`;

Shelled WGSL functions can use external resources passed via the $uses method. Externals can include anything that can be resolved to WGSL by TypeGPU (numbers, vectors, matrices, constants, TypeGPU functions, buffer usages, textures, samplers, slots, accessors etc.).

const getBlue = tgpu.fn([], d.vec4f)`() {
  return vec4f(0.114, 0.447, 0.941, 1);
}`;

// Calling a schema to create a value on the JS side
const purple = d.vec4f(0.769, 0.392, 1.0, 1);

const getGradientColor = tgpu.fn([d.f32], d.vec4f)`(ratio) {
  return mix(purple, get_blue(), ratio);
}`.$uses({ purple, get_blue: getBlue });

You can see for yourself what getGradientColor resolves to by calling tgpu.resolve; all relevant definitions will be automatically included:

// results of calling tgpu.resolve({ externals: { getGradientColor } })
fn getBlue_1() -> vec4f {
  return vec4f(0.114, 0.447, 0.941, 1);
}

fn getGradientColor_0(ratio: f32) -> vec4f {
  return mix(vec4f(0.769, 0.392, 1, 1), getBlue_1(), ratio);
}

Notice how purple was inlined in the final shader, and the reference to get_blue was replaced with the function’s eventual name of getBlue_1.

Writing shader code in JavaScript has a few significant advantages. It allows defining utilities once and using them both on the GPU and CPU, as well as enables complete syntax highlighting and autocomplete in TypeGPU function definitions, leading to a better developer experience.
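
For example, because a 'use gpu' function is still a plain JavaScript function, it can be unit-tested on the CPU with any test runner; the sketch below uses vitest purely as an illustration:

import { expect, test } from 'vitest';

test('neighborhood returns (a - r, a + r)', () => {
  // Runs entirely on the CPU; the very same function is embedded into WGSL.
  const result = neighborhood(1.1, 0.5);
  expect(result.x).toBeCloseTo(0.6);
  expect(result.y).toBeCloseTo(1.6);
});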

However, there are cases where WGSL might be more suitable. Since JavaScript doesn't support operator overloading, functions involving complex matrix or vector operations can be more readable in WGSL. Writing WGSL becomes a necessity whenever TypeGPU does not yet support some feature or standard library function.

Luckily, you don’t have to choose one or the other for the entire project. It is possible to mix and match WGSL and JavaScript at every step of the way, so you’re not locked into one or the other.
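
As a small sketch of such mixing, a JavaScript-implemented helper can be referenced from a WGSL-implemented function through $uses. This assumes a 'use gpu' function is accepted as an external, just like the shelled functions shown earlier:

// A JavaScript-implemented helper, also callable on the CPU.
const saturate = (x: number) => {
  'use gpu';
  return std.clamp(x, 0, 1);
};

// A WGSL-implemented function that calls the helper via $uses.
const fade = tgpu.fn([d.f32], d.f32)`(x) {
  return saturate(x * 2 - 0.5);
}`.$uses({ saturate });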

Instead of annotating a TgpuFn with attributes, entry functions are defined using dedicated shell constructors:

  • tgpu['~unstable'].computeFn,
  • tgpu['~unstable'].vertexFn,
  • tgpu['~unstable'].fragmentFn.

To describe the input and output of an entry point function, we use IORecords, JavaScript objects that map argument names to their types.

const vertexInput = {
  idx: d.builtin.vertexIndex,
  position: d.vec4f,
  color: d.vec4f,
};

As you may note, builtin inter-stage inputs and outputs are available on the d.builtin object, and require no further type clarification.

Another thing to note is that there is no need to specify locations of the arguments, as TypeGPU tries to assign locations automatically. If you wish to, you can assign the locations manually with the d.location decorator.
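
A minimal sketch of a manual assignment; it assumes the d.location decorator takes the location index followed by the schema:

const vertexOutput = {
  pos: d.builtin.position,
  // Pin `uv` to @location(3) instead of relying on automatic assignment.
  uv: d.location(3, d.vec2f),
};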

During WGSL generation, TypeGPU automatically generates structs corresponding to the passed IORecords. In WGSL implementations, the input and output structs of a given function can be referenced as In and Out respectively. Headers in WGSL implementations must be omitted; all input values are accessible through the struct named in.

TgpuComputeFn accepts an object with two properties:

  • in: an IORecord describing the input of the function,
  • workgroupSize: a JS array of 1-3 numbers that corresponds to the @workgroup_size attribute.
const mainCompute = tgpu['~unstable'].computeFn({
  in: { gid: d.builtin.globalInvocationId },
  workgroupSize: [1],
}) /* wgsl */`{
  let index = in.gid.x;
  if index == 0 {
    time += deltaTime;
  }
  let phase = (time / 300) + particleData[index].seed;
  particleData[index].position += particleData[index].velocity * deltaTime / 20 + vec2f(sin(phase) / 600, cos(phase) / 500);
}`.$uses({ particleData: particleDataStorage, deltaTime, time });

Resolved WGSL for the compute function above is equivalent (up to some cleanup) to the following:

@group(0) @binding(0) var<storage, read_write> particleData: array<u32, 100>;
@group(0) @binding(1) var<uniform> deltaTime: f32;
@group(0) @binding(2) var<storage, read_write> time: f32;

struct mainCompute_Input {
  @builtin(global_invocation_id) gid: vec3u,
}

@compute @workgroup_size(1) fn mainCompute(in: mainCompute_Input) {
  let index = in.gid.x;
  if index == 0 {
    time += deltaTime;
  }
  let phase = (time / 300) + particleData[index].seed;
  particleData[index].position += particleData[index].velocity * deltaTime / 20 + vec2f(sin(phase) / 600, cos(phase) / 500);
}

TgpuVertexFn accepts an object with two properties:

  • in: an IORecord describing the input of the function,
  • out: an IORecord describing the output of the function.

TgpuFragmentFn accepts an object with two properties:

  • in: an IORecord describing the input of the function,
  • out: d.vec4f, or an IORecord describing the output of the function.
const mainVertex = tgpu['~unstable'].vertexFn({
  in: { vertexIndex: d.builtin.vertexIndex },
  out: { outPos: d.builtin.position, uv: d.vec2f },
}) /* wgsl */`{
  var pos = array<vec2f, 3>(
    vec2(0.0, 0.5),
    vec2(-0.5, -0.5),
    vec2(0.5, -0.5)
  );
  var uv = array<vec2f, 3>(
    vec2(0.5, 1.0),
    vec2(0.0, 0.0),
    vec2(1.0, 0.0),
  );
  return Out(vec4f(pos[in.vertexIndex], 0.0, 1.0), uv[in.vertexIndex]);
}`;

const mainFragment = tgpu['~unstable'].fragmentFn({
  in: { uv: d.vec2f },
  out: d.vec4f,
}) /* wgsl */`{
  return getGradientColor((in.uv[0] + in.uv[1]) / 2);
}`.$uses({ getGradientColor });

Resolved WGSL for the pipeline including the two entry point functions above is equivalent (up to some cleanup) to the following:

struct mainVertex_Input {
  @builtin(vertex_index) vertexIndex: u32,
}

struct mainVertex_Output {
  @builtin(position) outPos: vec4f,
  @location(0) uv: vec2f,
}

@vertex fn mainVertex(in: mainVertex_Input) -> mainVertex_Output {
  var pos = array<vec2f, 3>(
    vec2(0.0, 0.5),
    vec2(-0.5, -0.5),
    vec2(0.5, -0.5)
  );
  var uv = array<vec2f, 3>(
    vec2(0.5, 1.0),
    vec2(0.0, 0.0),
    vec2(1.0, 0.0),
  );
  return mainVertex_Output(vec4f(pos[in.vertexIndex], 0.0, 1.0), uv[in.vertexIndex]);
}

fn getGradientColor(ratio: f32) -> vec4f {
  return mix(vec4f(0.769, 0.392, 1, 1), vec4f(0.114, 0.447, 0.941, 1), ratio);
}

struct mainFragment_Input {
  @location(0) uv: vec2f,
}

@fragment fn mainFragment(in: mainFragment_Input) -> @location(0) vec4f {
  return getGradientColor((in.uv[0] + in.uv[1]) / 2);
}

Typed functions are crucial for simplified pipeline creation offered by TypeGPU. You can define and run pipelines as follows:

const pipeline = root['~unstable']
  .withVertex(mainVertex, {})
  .withFragment(mainFragment, { format: presentationFormat })
  .createPipeline();

pipeline
  .withColorAttachment({
    view: context.getCurrentTexture().createView(),
    clearValue: [0, 0, 0, 0],
    loadOp: 'clear',
    storeOp: 'store',
  })
  .draw(3);

The rendering result is a gradient triangle.

You can check out the full example on our examples page.