
Pipelines

TypeGPU introduces a custom API to easily define and execute render and compute pipelines. It abstracts away the standard WebGPU procedures to offer a convenient, type-safe way to run shaders on the GPU.

A pipeline can be defined with one of the following methods on the root object:

```ts
const renderPipeline = root.createRenderPipeline({
  vertex: mainVertex,
  fragment: mainFragment,
  targets: {
    format: presentationFormat,
  },
});

const computePipeline1 = root.createComputePipeline({
  compute: mainCompute,
});

const computePipeline2 = root.createGuardedComputePipeline((x, y, z) => {
  'use gpu';
  // ...
});
```

The createRenderPipeline method creates a render pipeline by accepting an options object that specifies the vertex function, fragment function, targets, and optional additional settings.

  • vertex: The TgpuVertexFn or 'use gpu' callback to use as the vertex shader.
  • fragment: The TgpuFragmentFn or 'use gpu' callback to use as the fragment shader.
  • targets: A record defining the formats and behaviors of the color targets, similar to WebGPU’s GPUColorTargetState, but as a record with named targets.
  • depthStencil (optional): Depth-stencil state, same as WebGPU’s GPUDepthStencilState.
  • multisample (optional): Multisample state, same as WebGPU’s GPUMultisampleState.
  • primitive (optional): Primitive state, same as WebGPU’s GPUPrimitiveState.

The vertex function’s input parameters (non-builtin) are matched to vertex attributes specified in the pipeline’s vertex layout when executing. Vertex attributes are validated at the type level for compatibility.

```ts
const vertexLayout = tgpu.vertexLayout(d.arrayOf(d.vec2f));

const renderPipeline = root.createRenderPipeline({
  attribs: {
    pos: vertexLayout.attrib,
  },
  vertex: mainVertex,
  fragment: mainFragment,
  targets: {
    format: presentationFormat,
  },
  // Additional options can be specified here
  depthStencil: {
    format: 'depth24plus',
    depthWriteEnabled: true,
    depthCompare: 'less',
  },
  multisample: {
    count: 4,
  },
  primitive: { topology: 'triangle-list' },
});
```

TypeGPU pipelines validate the compatibility of the vertex output and fragment input at the type level. These parameters are identified by their names, not by numeric location indices. In general, when using vertex and fragment functions with TypeGPU pipelines, it is not necessary to set locations on the IO struct properties: the library automatically matches corresponding members by name and assigns common locations to them. When a custom location is provided by the user (via the d.location attribute function), the automatic assignment procedure respects it, as long as the vertex and fragment location values do not conflict.
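The assignment procedure described above can be sketched in plain TypeScript. The following is a simplified illustration of the idea, not TypeGPU's actual implementation: explicitly located members keep their slots, and the remaining members, matched by name between the two shader stages, receive the lowest free slots in declaration order.

```typescript
// Simplified sketch of name-based IO matching with automatic location
// assignment (illustrative only, not TypeGPU's internal algorithm).
type IOMember = { name: string; location?: number };

function assignLocations(members: IOMember[]): Map<string, number> {
  // Slots claimed by explicit locations are off-limits for auto-assignment.
  const used = new Set<number>(
    members.flatMap((m) => (m.location !== undefined ? [m.location] : [])),
  );
  const result = new Map<string, number>();
  let next = 0;
  for (const m of members) {
    if (m.location !== undefined) {
      result.set(m.name, m.location);
    } else {
      while (used.has(next)) next++; // skip slots taken by explicit locations
      result.set(m.name, next);
      used.add(next);
    }
  }
  return result;
}

// Because matching is by name, 'uv' resolves to the same location on both
// sides of the vertex/fragment interface.
const locations = assignLocations([
  { name: 'uv' },
  { name: 'color', location: 0 },
  { name: 'normal' },
]);
// → uv: 1, color: 0, normal: 2
```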

```ts
import tgpu, { d } from 'typegpu';

const vertex = tgpu.vertexFn({
  out: { pos: d.builtin.position },
})`(...)`;

const fragment = tgpu.fragmentFn({
  in: { uv: d.vec2f },
  out: d.vec4f,
})`(...)`;

const root = await tgpu.init();

root.createRenderPipeline({
  vertex,
  fragment,
  // Error ts(2769): No overload matches this call.
  //   Property 'uv' is missing in type '{} & Record<string, AnyFragmentInputBuiltin>'
  //   but required in type '{ uv: Vec2f; }'.
  targets: { format: 'bgra8unorm' },
});
```

The createComputePipeline method creates a compute pipeline by accepting an options object with the compute function.

  • compute: The TgpuComputeFn to use as the compute shader.
```ts
const computePipeline = root.createComputePipeline({
  compute: mainCompute,
});
```

The createGuardedComputePipeline method streamlines running simple computations on the GPU. Instead of dispatching workgroups, the guarded pipeline allows calling an exact number of GPU threads. Think of it as a parallelized for loop. Under the hood, it creates a compute pipeline that calls the provided callback only if the current thread ID is within the requested range.

```ts
const data = root.createMutable(d.arrayOf(d.u32, 8), [0, 1, 2, 3, 4, 5, 6, 7]);

const doubleUpPipeline = root.createGuardedComputePipeline((x) => {
  'use gpu';
  data.$[x] *= 2;
});

doubleUpPipeline.dispatchThreads(8);
doubleUpPipeline.dispatchThreads(8);
doubleUpPipeline.dispatchThreads(5);

// the command encoder will queue the read after `doubleUpPipeline`
console.log(await data.read()); // [0, 8, 16, 24, 32, 20, 24, 28]
```
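The guard mechanism can be sketched on the CPU. The sketch below assumes a fixed workgroup size (the constant is hypothetical; the actual size is an internal detail of the pipeline): the requested thread count is rounded up to whole workgroups, and each thread checks its global ID against that count before running the body.

```typescript
// CPU-side sketch of guarded dispatch (illustrative only). A hypothetical
// fixed workgroup size stands in for the pipeline's internal choice.
const WORKGROUP_SIZE = 64;

// Threads are launched in whole workgroups, so the count is rounded up.
function workgroupCount(threads: number): number {
  return Math.ceil(threads / WORKGROUP_SIZE);
}

function simulateGuardedDispatch(threads: number, body: (x: number) => void) {
  const launched = workgroupCount(threads) * WORKGROUP_SIZE;
  for (let x = 0; x < launched; x++) {
    if (x < threads) body(x); // the bounds check that regular pipelines skip
  }
}

const values = [0, 1, 2, 3, 4, 5, 6, 7];
simulateGuardedDispatch(5, (x) => {
  values[x] *= 2;
});
// → [0, 2, 4, 6, 8, 5, 6, 7]: 64 threads launched, only 5 pass the guard
```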

The callback can accept up to three arguments (dimensions). createGuardedComputePipeline also simplifies initializing buffers with data directly on the GPU, reducing serialization overhead. Buffer initialization commonly relies on random number generators; for that, you can use the @typegpu/noise library.

```ts
import { randf } from '@typegpu/noise';

const root = await tgpu.init();

// buffer of 1024x512 floats
const waterLevelMutable = root.createMutable(
  d.arrayOf(d.arrayOf(d.f32, 512), 1024),
);
```

@example const value = f32(true); // 1

f32
, 512), 1024),
);
const root: TgpuRoot
root
.
WithBinding.createGuardedComputePipeline<[x: number, y: number]>(callback: (x: number, y: number) => void): TgpuGuardedComputePipeline<[x: number, y: number]>

Creates a compute pipeline that executes the given callback in an exact number of threads. This is different from withCompute(...).createPipeline() in that it does a bounds check on the thread id, where as regular pipelines do not and work in units of workgroups.

@paramcallback A function converted to WGSL and executed on the GPU. It can accept up to 3 parameters (x, y, z) which correspond to the global invocation ID of the executing thread.

@example

If no parameters are provided, the callback will be executed once, in a single thread.

const fooPipeline = root
.createGuardedComputePipeline(() => {
'use gpu';
console.log('Hello, GPU!');
});
fooPipeline.dispatchThreads();
// [GPU] Hello, GPU!

@example

One parameter means n-threads will be executed in parallel.

const fooPipeline = root
.createGuardedComputePipeline((x) => {
'use gpu';
if (x % 16 === 0) {
// Logging every 16th thread
console.log('I am the', x, 'thread');
}
});
// executing 512 threads
fooPipeline.dispatchThreads(512);
// [GPU] I am the 256 thread
// [GPU] I am the 272 thread
// ... (30 hidden logs)
// [GPU] I am the 16 thread
// [GPU] I am the 240 thread

createGuardedComputePipeline
((
x: number
x
,
y: number
y
) => {
'use gpu';
const randf: {
seed: typeof randSeed;
seed2: typeof randSeed2;
seed3: typeof randSeed3;
seed4: typeof randSeed4;
sample: typeof randFloat01;
sampleExclusive: typeof randUniformExclusive;
normal: typeof randNormal;
exponential: typeof randExponential;
cauchy: typeof randCauchy;
bernoulli: typeof randBernoulli;
... 7 more ...;
onUnitSphere: typeof randOnUnitSphere;
}
randf
.
seed2: (seed: d.v2f) => void

Threads do not share the generator's State. As a result, unless you change the seed in each thread, each thread will produce the same sequence. randf.randSeed2 sets the private seed of the thread.

@paramseed seed value to set. For the best results, all elements should be in [-1000, 1000] range.

seed2
(
import d
d
.
function vec2f(x: number, y: number): d.v2f (+3 overloads)
export vec2f

Schema representing vec2f - a vector with 2 elements of type f32. Also a constructor function for this vector value.

@example const vector = d.vec2f(); // (0.0, 0.0) const vector = d.vec2f(1); // (1.0, 1.0) const vector = d.vec2f(0.5, 0.1); // (0.5, 0.1)

@example const buffer = root.createBuffer(d.vec2f, d.vec2f(0, 1)); // buffer holding a d.vec2f value, with an initial value of vec2f(0, 1);

vec2f
(
x: number
x
,
y: number
y
).
vecInfixNotation<v2f>.div(other: number | d.v2f): d.v2f
div
(1024));
const waterLevelMutable: TgpuMutable<d.WgslArray<d.WgslArray<d.F32>>>
waterLevelMutable
.
TgpuMutable<WgslArray<WgslArray<F32>>>.$: number[][]
$
[
x: number
x
][
y: number
y
] = 10 +
const randf: {
seed: typeof randSeed;
seed2: typeof randSeed2;
seed3: typeof randSeed3;
seed4: typeof randSeed4;
sample: typeof randFloat01;
sampleExclusive: typeof randUniformExclusive;
normal: typeof randNormal;
exponential: typeof randExponential;
cauchy: typeof randCauchy;
bernoulli: typeof randBernoulli;
... 7 more ...;
onUnitSphere: typeof randOnUnitSphere;
}
randf
.
sample: () => number

Returns a random f32 value in [0, 1) range.

sample
();
}).
TgpuGuardedComputePipeline<[x: number, y: number]>.dispatchThreads(x: number, y: number): void

Dispatches the pipeline. Unlike TgpuComputePipeline.dispatchWorkgroups(), this method takes in the number of threads to run in each dimension.

Under the hood, the number of expected threads is sent as a uniform, and "guarded" by a bounds check.

dispatchThreads
(1024, 512);
// callback will be called for x in range 0..1023 and y in range 0..511
// (optional) read values in JS
var console: Console

The console module provides a simple debugging console that is similar to the JavaScript console mechanism provided by web browsers.

The module exports two specific components:

  • A Console class with methods such as console.log(), console.error() and console.warn() that can be used to write to any Node.js stream.
  • A global console instance configured to write to process.stdout and process.stderr. The global console can be used without importing the node:console module.

Warning: The global console object's methods are neither consistently synchronous like the browser APIs they resemble, nor are they consistently asynchronous like all other Node.js streams. See the note on process I/O for more information.

Example using the global console:

console.log('hello world');
// Prints: hello world, to stdout
console.log('hello %s', 'world');
// Prints: hello world, to stdout
console.error(new Error('Whoops, something bad happened'));
// Prints error message and stack trace to stderr:
// Error: Whoops, something bad happened
// at [eval]:5:15
// at Script.runInThisContext (node:vm:132:18)
// at Object.runInThisContext (node:vm:309:38)
// at node:internal/process/execution:77:19
// at [eval]-wrapper:6:22
// at evalScript (node:internal/process/execution:76:60)
// at node:internal/main/eval_string:23:3
const name = 'Will Robinson';
console.warn(`Danger ${name}! Danger!`);
// Prints: Danger Will Robinson! Danger!, to stderr

Example using the Console class:

const out = getStreamSomehow();
const err = getStreamSomehow();
const myConsole = new console.Console(out, err);
myConsole.log('hello world');
// Prints: hello world, to out
myConsole.log('hello %s', 'world');
// Prints: hello world, to out
myConsole.error(new Error('Whoops, something bad happened'));
// Prints: [Error: Whoops, something bad happened], to err
const name = 'Will Robinson';
myConsole.warn(`Danger ${name}! Danger!`);
// Prints: Danger Will Robinson! Danger!, to err

@seesource

console
.
Console.log(message?: any, ...optionalParams: any[]): void (+1 overload)

Prints to stdout with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar to printf(3) (the arguments are all passed to util.format()).

const count = 5;
console.log('count: %d', count);
// Prints: count: 5, to stdout
console.log('count:', count);
// Prints: count: 5, to stdout

See util.format() for more information.

@sincev0.1.100

log
(await
const waterLevelMutable: TgpuMutable<d.WgslArray<d.WgslArray<d.F32>>>
waterLevelMutable
.
TgpuBufferShorthandBase<WgslArray<WgslArray<F32>>>.read(): Promise<number[][]>
read
());
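The guard that createGuardedComputePipeline adds can be pictured as a plain bounds check over a rounded-up thread grid: whole workgroups are launched, and threads outside the requested range do nothing. A CPU sketch of that model (dispatchThreadsModel and the single workgroupSize parameter are illustrative, not TypeGPU API):

```typescript
// Conceptual model of a guarded 2D dispatch: the GPU launches whole
// workgroups, so the thread grid is rounded up to a workgroup multiple,
// and every thread checks its global id before running the callback.
function dispatchThreadsModel(
  countX: number,
  countY: number,
  workgroupSize: number,
  callback: (x: number, y: number) => void,
): void {
  const gridX = Math.ceil(countX / workgroupSize) * workgroupSize;
  const gridY = Math.ceil(countY / workgroupSize) * workgroupSize;
  for (let x = 0; x < gridX; x++) {
    for (let y = 0; y < gridY; y++) {
      // The guard: extra threads launched only to fill out
      // the last workgroup are skipped.
      if (x < countX && y < countY) {
        callback(x, y);
      }
    }
  }
}
```

A regular pipeline created via withCompute(...).createPipeline() has no such guard, which is why its shaders typically need to perform this check themselves.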
renderPipeline
  .withColorAttachment({ view: context })
  .draw(3);
computePipeline.dispatchWorkgroups(16);
guardedComputePipeline.dispatchThreads(4);

Render pipelines require specifying a color attachment for each target. The attachments are specified in the same way as in the WebGPU API (but accept both TypeGPU resources and regular WebGPU ones). However, as with the targets argument, multiple attachments need to be passed in as a record, with each attachment identified by its target's name.

Similarly, when using withDepthStencil, it is necessary to pass in a depth stencil attachment via the withDepthStencilAttachment method.

renderPipeline
  .withColorAttachment({
    color: {
      view: msaaTextureView,
      resolveTarget: context,
      loadOp: 'clear',
      storeOp: 'store',
    },
    shadow: {
      view: shadowTextureView,
      clearValue: [1, 1, 1, 1],
      loadOp: 'clear',
      storeOp: 'store',
    },
  })
  .withDepthStencilAttachment({
    view: depthTextureView,
    depthClearValue: 1,
    depthLoadOp: 'clear',
    depthStoreOp: 'store',
  })
  .draw(vertexCount);
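For intuition about what the depth attachment's options control, here is a CPU sketch of a depth buffer test (a simplification of what the GPU does per fragment; tryDrawFragment is an illustrative name, and the 'less' compare function is an assumption for the sketch, not something the example above configures):

```typescript
// A tiny 1D "depth buffer", cleared to the far plane,
// mirroring depthClearValue: 1 with depthLoadOp: 'clear'.
const depth = new Float32Array(4).fill(1);

// Writing depth[i] back corresponds to depthStoreOp: 'store'.
function tryDrawFragment(i: number, fragDepth: number): boolean {
  if (fragDepth < depth[i]) { // a 'less' depth compare
    depth[i] = fragDepth;     // store the new closest depth
    return true;              // fragment passes, color is written
  }
  return false;               // occluded fragment is discarded
}
```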

Before executing pipelines, it is necessary to bind all of the utilized resources, such as bind groups, vertex buffers, and slots. This is done using the with method, which accepts either a bind group (render and compute pipelines) or a vertex layout together with a vertex buffer (render pipelines only).

// vertex layout
const vertexLayout = tgpu.vertexLayout(
  d.disarrayOf(d.float16),
  'vertex',
);
const vertexBuffer = root
  .createBuffer(d.disarrayOf(d.float16, 8), [0, 0, 1, 0, 0, 1, 1, 1])
  .$usage('vertex');

// bind group layout
const bindGroupLayout = tgpu.bindGroupLayout({
  size: { uniform: d.vec2u },
});
const sizeBuffer = root
  .createBuffer(d.vec2u, d.vec2u(64, 64))
  .$usage('uniform');
const bindGroup = root.createBindGroup(bindGroupLayout, {
  size: sizeBuffer,
});

// binding and execution
renderPipeline
  .with(vertexLayout, vertexBuffer)
  .with(bindGroup)
  .draw(8);
computePipeline
  .with(bindGroup)
  .dispatchWorkgroups(1);

Pipelines also expose the withPerformanceCallback and withTimestampWrites methods for measuring execution time on the GPU. For more info about them, refer to the Timing Your Pipelines guide.

After creating the render pipeline and setting all of the attachments, it can be put to use by calling the draw method. It accepts the number of vertices and, optionally, the instance count, first vertex index, and first instance index. Calling the method schedules the shader for execution immediately.

Compute pipelines are executed using the dispatchWorkgroups method, which accepts the number of workgroups in each dimension.
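Since dispatchWorkgroups counts workgroups rather than threads, covering n items requires a ceil-division by the workgroup size. A small illustrative helper (workgroupsFor is not part of the TypeGPU API):

```typescript
// Number of workgroups needed to cover `threadCount` invocations
// when each workgroup runs `workgroupSize` threads along one axis.
function workgroupsFor(threadCount: number, workgroupSize: number): number {
  return Math.ceil(threadCount / workgroupSize);
}

// A @workgroup_size(64) shader covering 1000 items needs 16 workgroups,
// i.e. 1024 launched threads, so the shader itself must still guard
// against ids >= 1000:
// computePipeline.dispatchWorkgroups(workgroupsFor(1000, 64));
```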

The drawIndexed method is analogous to draw, but takes advantage of an index buffer to explicitly map vertex data onto primitives. When using an index buffer, you don’t need to list every vertex of every primitive explicitly. Instead, you provide a list of unique vertices in a vertex buffer, and the index buffer defines how these vertices are connected to form primitives.

const indexBuffer = root
  .createBuffer(d.arrayOf(d.u16, 6), [0, 2, 1, 0, 3, 2])
  .$usage('index');

const pipeline = root
  .createRenderPipeline({
    attribs: { color: vertexLayout.attrib },
    vertex,
    fragment: mainFragment,
    targets: { format: presentationFormat },
  })
  .withIndexBuffer(indexBuffer);

pipeline
  .with(vertexLayout, colorBuffer)
  .drawIndexed(6);
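The mapping the index buffer performs can be sketched on the CPU. The following plain TypeScript (no GPU involved; assembleTriangles is an illustrative name, not TypeGPU API) expands the index list [0, 2, 1, 0, 3, 2] over four unique vertices into two triangles, the same way the GPU assembles a triangle-list primitive topology:

```typescript
type Vec2 = [number, number];

// Four unique corners of a quad...
const vertices: Vec2[] = [[0, 0], [0, 1], [1, 1], [1, 0]];
// ...and six indices describing two triangles that share an edge.
const indices = [0, 2, 1, 0, 3, 2];

// Primitive assembly for a triangle list: every 3 indices form a triangle.
function assembleTriangles(verts: Vec2[], idx: number[]): Vec2[][] {
  const triangles: Vec2[][] = [];
  for (let i = 0; i + 2 < idx.length; i += 3) {
    triangles.push([verts[idx[i]], verts[idx[i + 1]], verts[idx[i + 2]]]);
  }
  return triangles;
}

// 6 indices -> 2 triangles, while only 4 vertices are stored.
const tris = assembleTriangles(vertices, indices);
```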

The higher-level API has several limitations, so another way of executing pipelines is exposed for custom, more demanding scenarios. For example, the high-level API does not allow executing multiple pipelines in a single render pass, and it may lack some of the more niche features of the WebGPU API.

root['~unstable'].beginRenderPass is a method that mirrors the WebGPU API, but enriches it with direct TypeGPU resource support.

root['~unstable'].beginRenderPass(
  {
    colorAttachments: [{
      ...
    }],
  },
  (pass) => {
    pass.setPipeline(renderPipeline);
    pass.setBindGroup(layout, group);
    pass.draw(3);
  },
);

It is also possible to access the underlying WebGPU resources for TypeGPU pipelines by calling root.unwrap(pipeline). That way, they can be used with the regular WebGPU API, but unlike the root['~unstable'].beginRenderPass API, this approach also requires unwrapping all the necessary resources.

const pipeline = root.createRenderPipeline({
  vertex: mainVertex,
  fragment: mainFragment,
  targets: { format: 'rg8unorm' },
});

const rawPipeline = root.unwrap(pipeline);
//    ^? GPURenderPipeline