Pipelines
TypeGPU introduces a custom API to easily define and execute render and compute pipelines. It abstracts away the standard WebGPU procedures to offer a convenient, type-safe way to run shaders on the GPU.
Creating pipelines
A pipeline definition starts with the root object and follows a builder pattern.
```ts
const renderPipeline = root['~unstable']
  .withVertex(mainVertex, {})
  .withFragment(mainFragment, { format: presentationFormat })
  .createPipeline();

const computePipeline = root['~unstable']
  .withCompute(mainCompute)
  .createPipeline();
```
withVertex
Creating a render pipeline requires calling the `withVertex` method first, which accepts a `TgpuVertexFn` and matching vertex attributes.
The attributes are passed in a record, where the keys match the vertex function's (non-builtin) input parameters and the values are attributes retrieved from a specific `tgpu.vertexLayout`.
If the vertex shader does not use vertex attributes, the latter argument should be an empty object.

The compatibility between vertex input types and vertex attribute formats is validated at the type level.
```ts
const VertexStruct = d.struct({
  position: d.vec2f,
  velocity: d.vec2f,
});

const vertexLayout = tgpu.vertexLayout(
  d.arrayOf(d.vec2f),
  'vertex',
);

const instanceLayout = tgpu.vertexLayout(
  d.arrayOf(VertexStruct),
  'instance',
);

root['~unstable']
  .withVertex(mainVertex, {
    v: vertexLayout.attrib,
    center: instanceLayout.attrib.position,
    velocity: instanceLayout.attrib.velocity,
  })
  // ...
```
withFragment
The next step is calling the `withFragment` method, which accepts a `TgpuFragmentFn` and a targets argument defining the formats and behaviors of the color targets the pipeline writes to.
Each target is specified the same way as in the WebGPU API (`GPUColorTargetState`).
The difference is that when there are multiple targets, they should be passed in a record, not an array.
This way each target is identified by a name and can be validated against the outputs of the fragment function.
```ts
const mainFragment = tgpu['~unstable'].fragmentFn({
  out: {
    color: d.vec4f,
    shadow: d.vec4f,
  },
})`{ ... }`;

const renderPipeline = root['~unstable']
  .withVertex(mainVertex, {})
  .withFragment(mainFragment, {
    color: {
      format: 'rg8unorm',
      blend: {
        color: {
          srcFactor: 'one',
          dstFactor: 'one-minus-src-alpha',
          operation: 'add',
        },
        alpha: {
          srcFactor: 'one',
          dstFactor: 'one-minus-src-alpha',
          operation: 'add',
        },
      },
    },
    shadow: { format: 'r16uint' },
  })
  .createPipeline();
```
Type-level validation
Using the pipelines ensures the compatibility of the vertex output and fragment input at the type level —
`withFragment` only accepts fragment functions whose non-builtin parameters are all returned by the vertex function.
These parameters are identified by their names, not by their numeric location indices.

In general, when using vertex and fragment functions with TypeGPU pipelines, it is not necessary to set locations on the IO struct properties.
The library automatically matches up the corresponding members (by their names) and assigns common locations to them.
When a custom location is provided by the user (via the `d.location` attribute function), it is respected by the automatic assignment procedure,
as long as there is no conflict between the vertex and fragment location values.
```ts
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

const vertex = tgpu['~unstable'].vertexFn({
  out: { pos: d.builtin.position },
})`(...)`;

const fragment = tgpu['~unstable'].fragmentFn({
  in: { uv: d.vec2f },
  out: d.vec4f,
})`(...)`;

const root = await tgpu.init();

root['~unstable']
  .withVertex(vertex, {})
  .withFragment(fragment, { format: 'bgra8unorm' });
// ^ Error ts(2554): the fragment function expects a `uv` input,
//   which is missing from the vertex function's output.
```
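As for the custom-location case mentioned above, a minimal sketch might look like this (the `uv` member name and the location index 3 are arbitrary choices for illustration, not taken from this guide):

```ts
// Sketch: an explicit location on one IO struct member; TypeGPU assigns
// locations to the remaining members automatically and matches them by name.
const vertexWithCustomLocation = tgpu['~unstable'].vertexFn({
  out: {
    pos: d.builtin.position,
    uv: d.location(3, d.vec2f),
  },
})`(...)`;
```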
Additional render pipeline methods
After calling `withFragment`, but before `createPipeline`, it is possible to set additional pipeline settings.
This is done through builder methods like `withDepthStencil`, `withMultisample`, and `withPrimitive`.
They accept the same arguments as their corresponding descriptors in the WebGPU API.
```ts
const renderPipeline = root['~unstable']
  .withVertex(vertexShader, modelVertexLayout.attrib)
  .withFragment(fragmentShader, { format: presentationFormat })
  .withDepthStencil({
    format: 'depth24plus',
    depthWriteEnabled: true,
    depthCompare: 'less',
  })
  .withMultisample({
    count: 4,
  })
  .withPrimitive({ topology: 'triangle-list' })
  .createPipeline();
```
withCompute
Creating a compute pipeline is even easier — the `withCompute` method accepts just a `TgpuComputeFn`, with no additional parameters.

Please note that compute pipelines are entirely separate entities from render pipelines. You cannot combine the `withVertex` and `withFragment` methods with `withCompute` in a single pipeline.
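For illustration, a compute entry function and its pipeline could look roughly like this (a minimal sketch — the workgroup size, the input builtin, and the shader body are placeholder assumptions, not taken from this guide):

```ts
// Sketch of a compute entry function; the actual shader body is elided.
const mainCompute = tgpu['~unstable'].computeFn({
  in: { gid: d.builtin.globalInvocationId },
  workgroupSize: [64],
})`{ /* ... compute shader body ... */ }`;

const computePipeline = root['~unstable']
  .withCompute(mainCompute)
  .createPipeline();
```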
createPipeline
The creation of TypeGPU pipelines ends with calling the `createPipeline` method on the builder.
Execution
```ts
renderPipeline
  .withColorAttachment({
    view: context.getCurrentTexture().createView(),
    loadOp: 'clear',
    storeOp: 'store',
  })
  .draw(3);

computePipeline.dispatchWorkgroups(16);
```
Attachments
Render pipelines require specifying a color attachment for each target. The attachments are specified the same way as in the WebGPU API (but accept both TypeGPU resources and regular WebGPU ones). However, just like the targets argument, multiple attachments need to be passed in a record, with each one identified by its target's name.

Similarly, when using `withDepthStencil`, it is necessary to pass in a depth-stencil attachment via the `withDepthStencilAttachment` method.
```ts
renderPipeline
  .withColorAttachment({
    color: {
      view: msaaTextureView,
      resolveTarget: context.getCurrentTexture().createView(),
      loadOp: 'clear',
      storeOp: 'store',
    },
    shadow: {
      view: shadowTextureView,
      clearValue: [1, 1, 1, 1],
      loadOp: 'clear',
      storeOp: 'store',
    },
  })
  .withDepthStencilAttachment({
    view: depthTextureView,
    depthClearValue: 1,
    depthLoadOp: 'clear',
    depthStoreOp: 'store',
  })
  .draw(vertexCount);
```
Resource bindings
Before executing pipelines, it is necessary to bind all of the utilized resources, such as bind groups, vertex buffers, and slots. This is done using the `with` method. It accepts a pair of arguments: a bind group layout and a bind group (for render and compute pipelines), or a vertex layout and a vertex buffer (for render pipelines only).
```ts
// vertex layout
const vertexLayout = tgpu.vertexLayout(
  d.disarrayOf(d.float16),
  'vertex',
);
const vertexBuffer = root
  .createBuffer(d.disarrayOf(d.float16, 8), [0, 0, 1, 0, 0, 1, 1, 1])
  .$usage('vertex');

// bind group layout
const bindGroupLayout = tgpu.bindGroupLayout({
  size: { uniform: d.vec2u },
});

const sizeBuffer = root
  .createBuffer(d.vec2u, d.vec2u(64, 64))
  .$usage('uniform');

const bindGroup = root.createBindGroup(bindGroupLayout, {
  size: sizeBuffer,
});

// binding and execution
renderPipeline
  .with(vertexLayout, vertexBuffer)
  .with(bindGroupLayout, bindGroup)
  .draw(8);

computePipeline
  .with(bindGroupLayout, bindGroup)
  .dispatchWorkgroups(1);
```
Timing performance
Pipelines also expose the `withPerformanceCallback` and `withTimestampWrites` methods for measuring execution time on the GPU.
For more information about them, refer to the Timing Your Pipelines guide.
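A rough sketch of the callback-based variant is shown below; the exact callback signature and timestamp units are described in that guide, and here the timestamps are assumed to be bigint nanoseconds:

```ts
// Sketch only — see the Timing Your Pipelines guide for the exact API.
const timedPipeline = renderPipeline
  .withPerformanceCallback((start, end) => {
    // assumption: start/end are GPU timestamps in nanoseconds (bigint)
    console.log(`render took ${Number(end - start) / 1e6} ms`);
  });
```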
draw, dispatchWorkgroups
After creating the render pipeline and setting all of the attachments, it can be put to use by calling the `draw` method.
It accepts the number of vertices and, optionally, the instance count, first vertex index, and first instance index.
After calling this method, the work is submitted for execution immediately.

Compute pipelines are executed using the `dispatchWorkgroups` method, which accepts the number of workgroups in each dimension.
Unlike render pipelines, after running this method the execution is not submitted to the GPU immediately.
To do so, `root['~unstable'].flush()` needs to be called.
However, that is usually not necessary, as it happens automatically when trying to read the result of the computation.
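The calls below sketch both paths (not verbatim from this guide — `targetView`, `instanceCount`, and `resultBuffer` are placeholders assumed to be defined elsewhere):

```ts
// Sketch: draw 3 vertices per instance, for `instanceCount` instances.
renderPipeline
  .withColorAttachment({ view: targetView, loadOp: 'clear', storeOp: 'store' })
  .draw(3, instanceCount);

// Dispatch 16×1×1 workgroups; unlike draw, this is not submitted right away.
computePipeline.dispatchWorkgroups(16, 1, 1);

// Reading a buffer flushes pending GPU work, so an explicit
// root['~unstable'].flush() call is usually unnecessary here.
const result = await resultBuffer.read();
```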
Drawing with drawIndexed
The `drawIndexed` method is analogous to `draw`, but takes advantage of an index buffer to explicitly map vertex data onto primitives. When using an index buffer, you don't need to list every vertex of every primitive explicitly. Instead, you provide a list of unique vertices in a vertex buffer, and the index buffer defines how these vertices are connected to form primitives.
```ts
const indexBuffer = root
  .createBuffer(d.arrayOf(d.u16, 6), [0, 2, 1, 0, 3, 2])
  .$usage('index');

const pipeline = root['~unstable']
  .withVertex(vertex, { color: vertexLayout.attrib })
  .withFragment(mainFragment, { format: presentationFormat })
  .createPipeline()
  .withIndexBuffer(indexBuffer);

pipeline
  .with(vertexLayout, colorBuffer)
  .drawIndexed(6);
```
Low-level render pipeline execution API
The higher-level API has several limitations, therefore another way of executing pipelines is exposed for custom, more demanding scenarios. For example, with the high-level API it is not possible to execute multiple pipelines in a single render pass. It may also be missing some of the more niche features of the WebGPU API.

`root['~unstable'].beginRenderPass` is a method that mirrors the WebGPU API, but enriches it with direct TypeGPU resource support.
```ts
root['~unstable'].beginRenderPass(
  {
    colorAttachments: [{ ... }],
  },
  (pass) => {
    pass.setPipeline(renderPipeline);
    pass.setBindGroup(layout, group);
    pass.draw(3);
  },
);

root['~unstable'].flush();
```
It is also possible to access the underlying WebGPU resources of TypeGPU pipelines by calling `root.unwrap(pipeline)`.
That way, they can be used with the regular WebGPU API, but unlike the `root['~unstable'].beginRenderPass` API, this also requires unwrapping all the necessary resources.
```ts
const pipeline = root['~unstable']
  .withVertex(mainVertex, {})
  .withFragment(mainFragment, { format: 'rg8unorm' })
  .createPipeline();

const rawPipeline = root.unwrap(pipeline);
//    ^? GPURenderPipeline
```
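For instance, a raw render pass built around the unwrapped pipeline could look roughly like this (a sketch only — the canvas `context`, `vertexLayout`, and `vertexBuffer` are assumed to exist elsewhere, and the vertex buffer slot index is an assumption):

```ts
const encoder = root.device.createCommandEncoder();

const pass = encoder.beginRenderPass({
  colorAttachments: [{
    view: context.getCurrentTexture().createView(),
    loadOp: 'clear',
    storeOp: 'store',
  }],
});

pass.setPipeline(rawPipeline);
// vertex buffers (and bind groups) have to be unwrapped manually as well
pass.setVertexBuffer(0, root.unwrap(vertexBuffer));
pass.draw(3);
pass.end();

root.device.queue.submit([encoder.finish()]);
```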