Pipelines

TypeGPU introduces a custom API to easily define and execute render and compute pipelines. It abstracts away the standard WebGPU procedures to offer a convenient, type-safe way to run shaders on the GPU.

A pipeline definition starts with the root object and follows a builder pattern.

const renderPipeline = root['~unstable']
  .withVertex(mainVertex, {})
  .withFragment(mainFragment, { format: presentationFormat })
  .createPipeline();

const computePipeline = root['~unstable']
  .withCompute(mainCompute)
  .createPipeline();

Creating a render pipeline requires calling the withVertex method first, which accepts a TgpuVertexFn and matching vertex attributes. The attributes are passed in a record, where the keys match the vertex function’s (non-builtin) input parameters and the values are attributes retrieved from a specific tgpu.vertexLayout. If the vertex function does not use vertex attributes, the second argument should be an empty object. The compatibility between vertex input types and vertex attribute formats is validated at the type level.

const VertexStruct = d.struct({
  position: d.vec2f,
  velocity: d.vec2f,
});

const vertexLayout = tgpu.vertexLayout(
  (n) => d.arrayOf(d.vec2f, n),
  'vertex',
);

const instanceLayout = tgpu.vertexLayout(
  (n) => d.arrayOf(VertexStruct, n),
  'instance',
);

root['~unstable']
  .withVertex(mainVertex, {
    v: vertexLayout.attrib,
    center: instanceLayout.attrib.position,
    velocity: instanceLayout.attrib.velocity,
  })
  // ...

The next step is calling the withFragment method, which accepts a TgpuFragmentFn and a targets argument defining the formats and behaviors of the color targets the pipeline writes to. Each target is specified the same way as in the WebGPU API (GPUColorTargetState). The difference is that when there are multiple targets, they should be passed in a record, not an array. This way each target is identified by a name and can be validated against the outputs of the fragment function.

const mainFragment = tgpu['~unstable'].fragmentFn({
  out: {
    color: d.vec4f,
    shadow: d.vec4f,
  },
})`{ ... }`;

const renderPipeline = root['~unstable']
  .withVertex(mainVertex, {})
  .withFragment(mainFragment, {
    color: {
      format: 'rg8unorm',
      blend: {
        color: {
          srcFactor: 'one',
          dstFactor: 'one-minus-src-alpha',
          operation: 'add',
        },
        alpha: {
          srcFactor: 'one',
          dstFactor: 'one-minus-src-alpha',
          operation: 'add',
        },
      },
    },
    shadow: { format: 'r16uint' },
  })
  .createPipeline();

Using the pipelines ensures the compatibility of the vertex output and fragment input at the type level: withFragment only accepts fragment functions whose non-builtin parameters are all returned by the vertex stage. These parameters are identified by their names, not by their numeric location indices. In general, when using vertex and fragment functions with TypeGPU pipelines, it is not necessary to set locations on the IO struct properties. The library automatically matches up the corresponding members (by name) and assigns common locations to them. When a custom location is provided by the user (via the d.location attribute function), it is respected by the automatic assignment procedure, as long as there is no conflict between the vertex and fragment location values.

import tgpu from 'typegpu';
import * as d from 'typegpu/data';

const vertex = tgpu['~unstable'].vertexFn({
  out: {
    pos: d.builtin.position,
  },
})`(...)`;

const fragment = tgpu['~unstable'].fragmentFn({
  in: { uv: d.vec2f },
  out: d.vec4f,
})`(...)`;

const root = await tgpu.init();

root['~unstable']
  .withVertex(vertex, {})
  .withFragment(fragment, { format: 'bgra8unorm' });
// Error ts(2554) ― Expected 3 arguments, but got 2.
// The fragment input `uv` is missing from the vertex output.

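For example, a custom location can be pinned to a vertex output member via d.location; the fragment input of the same name then shares that location. The snippet below is a minimal sketch with hypothetical pos and uv members, illustrating the name-based matching described above.

const vertexWithUv = tgpu['~unstable'].vertexFn({
  out: {
    pos: d.builtin.position,
    uv: d.location(3, d.vec2f), // explicitly pinned to @location(3)
  },
})`(...)`;

const fragmentWithUv = tgpu['~unstable'].fragmentFn({
  // matched to the vertex output `uv` by name, so it also receives location 3
  in: { uv: d.vec2f },
  out: d.vec4f,
})`(...)`;
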
After calling withFragment, but before createPipeline, it is possible to configure additional pipeline settings. This is done through builder methods like withDepthStencil, withMultisample, and withPrimitive, which accept the same arguments as their corresponding descriptors in the WebGPU API.

const renderPipeline = root['~unstable']
.withVertex(vertexShader, modelVertexLayout.attrib)
.withFragment(fragmentShader, { format: presentationFormat })
.withDepthStencil({
format: 'depth24plus',
depthWriteEnabled: true,
depthCompare: 'less',
})
.withMultisample({
count: 4,
})
.withPrimitive({ topology: 'triangle-list' })
.createPipeline();

Creating a compute pipeline is even easier: the withCompute method accepts just a TgpuComputeFn, with no additional parameters. Please note that compute pipelines are separate entities from render pipelines; the withVertex and withFragment methods cannot be combined with withCompute in a single pipeline.

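For completeness, here is a minimal sketch of what such a compute entry function and pipeline might look like; the workgroup size and input shown here are illustrative assumptions, not taken from the examples above.

const mainCompute = tgpu['~unstable'].computeFn({
  in: { gid: d.builtin.globalInvocationId },
  workgroupSize: [64], // assumed workgroup size for illustration
})`{ ... }`;

const computePipeline = root['~unstable']
  .withCompute(mainCompute)
  .createPipeline();
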
The creation of a TypeGPU pipeline ends with calling the createPipeline method on the builder.

renderPipeline
.withColorAttachment({
view: context.getCurrentTexture().createView(),
loadOp: 'clear',
storeOp: 'store',
})
.draw(3);
computePipeline.dispatchWorkgroups(16);

Render pipelines require specifying a color attachment for each target. The attachments are specified in the same way as in the WebGPU API (but accept both TypeGPU resources and regular WebGPU ones). However, similarly to the targets argument, when there are multiple targets the attachments need to be passed in as a record, with each attachment identified by its target’s name.

Similarly, when using withDepthStencil it is necessary to pass in a depth stencil attachment, via the withDepthStencilAttachment method.

renderPipeline
.withColorAttachment({
color: {
view: msaaTextureView,
resolveTarget: context.getCurrentTexture().createView(),
loadOp: 'clear',
storeOp: 'store',
},
shadow: {
view: shadowTextureView,
clearValue: [1, 1, 1, 1],
loadOp: 'clear',
storeOp: 'store',
},
})
.withDepthStencilAttachment({
view: depthTextureView,
depthClearValue: 1,
depthLoadOp: 'clear',
depthStoreOp: 'store',
})
.draw(vertexCount);

Before executing a pipeline, it is necessary to bind all of the resources it uses, such as bind groups, vertex buffers, and slots. This is done using the with method, which accepts a pair of arguments: a bind group layout and a bind group (render and compute pipelines), or a vertex layout and a vertex buffer (render pipelines only).

// vertex layout
const vertexLayout = tgpu.vertexLayout(
(n) => d.disarrayOf(d.float16, n),
'vertex',
);
const vertexBuffer = root
.createBuffer(d.disarrayOf(d.float16, 8), [0, 0, 1, 0, 0, 1, 1, 1])
.$usage('vertex');
// bind group layout
const bindGroupLayout = tgpu.bindGroupLayout({
size: { uniform: d.vec2u },
});
const sizeBuffer = root
.createBuffer(d.vec2u, d.vec2u(64, 64))
.$usage('uniform');
const bindGroup = root.createBindGroup(bindGroupLayout, {
size: sizeBuffer,
});
// binding and execution
renderPipeline
.with(vertexLayout, vertexBuffer)
.with(bindGroupLayout, bindGroup)
.draw(8);
computePipeline
.with(bindGroupLayout, bindGroup)
.dispatchWorkgroups(1);

Pipelines also expose the withPerformanceCallback and withTimestampWrites methods for measuring execution time on the GPU. For more information about them, refer to the Timing Your Pipelines guide.

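As a rough sketch only (the callback parameters are assumed here to be start and end GPU timestamps in nanoseconds; see the guide for the exact shape), attaching a performance callback might look like this:

const timedPipeline = computePipeline
  .withPerformanceCallback((start, end) => {
    // assumption: `start` and `end` are bigint timestamps in nanoseconds
    console.log(`Execution took ${Number(end - start)} ns`);
  });
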
After creating the render pipeline and setting all of the attachments, it can be put to use by calling the draw method. It accepts the number of vertices and, optionally, the instance count, first vertex index, and first instance index. After calling the method, the work is submitted for execution immediately.

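For instance, a single call can render several instances at once (the counts below are arbitrary):

renderPipeline
  .withColorAttachment({
    view: context.getCurrentTexture().createView(),
    loadOp: 'clear',
    storeOp: 'store',
  })
  .draw(6, 1000); // 6 vertices per instance, 1000 instances
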
Compute pipelines are executed using the dispatchWorkgroups method, which accepts the number of workgroups in each dimension. Unlike render pipelines, calling this method does not immediately submit the work to the GPU. To do so, root['~unstable'].flush() needs to be run. However, that is usually not necessary, as it happens automatically when the result of the computation is read.

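For example (the workgroup counts here are arbitrary):

computePipeline.dispatchWorkgroups(8, 8);
// Explicitly submit the recorded work to the GPU; usually optional,
// since reading the result flushes automatically.
root['~unstable'].flush();
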
The higher-level API has several limitations, so another way of executing pipelines is exposed for custom, more demanding scenarios. For example, with the high-level API it is not possible to execute multiple pipelines in a single render pass. It may also be missing some of the more niche features of the WebGPU API.

root['~unstable'].beginRenderPass is a method that mirrors the WebGPU API, but enriches it with direct support for TypeGPU resources.

root['~unstable'].beginRenderPass(
{
colorAttachments: [{
...
}],
},
(pass) => {
pass.setPipeline(renderPipeline);
pass.setBindGroup(layout, group);
pass.draw(3);
},
);
root['~unstable'].flush();

It is also possible to access the underlying WebGPU resource of a TypeGPU pipeline by calling root.unwrap(pipeline). That way, it can be used with the regular WebGPU API directly. Unlike the root['~unstable'].beginRenderPass API, however, this also requires unwrapping all of the other necessary resources.

const pipeline = root['~unstable']
  .withVertex(mainVertex, {})
  .withFragment(mainFragment, { format: 'rg8unorm' })
  .createPipeline();

const rawPipeline = root.unwrap(pipeline);
// rawPipeline: GPURenderPipeline