Vertices and fragments

Compute pipelines are great for, well, computations, but they are not the best choice for drawing shapes.

The other type of pipeline supported by WebGPU is the render pipeline. Render pipelines are used for drawing points, lines, and triangles onto textures.

Whereas compute pipelines execute threads on a grid you define, render pipelines work in stages. You configure the pipeline to draw a kind of shape (points, lines, or triangles), and during the draw call:

  1. The vertex function runs once per vertex and outputs a screen position for it.
  2. The GPU calculates which pixels each shape covers in a process called rasterization.
  3. The fragment function runs once per covered pixel and outputs a color.

The following example draws three red points. Each point lands on a single pixel, since 'point-list' topology rasterizes each vertex to one pixel regardless of canvas size.

```ts
import tgpu, { d } from 'typegpu';

const root = await tgpu.init();

const pipeline = root.createRenderPipeline({
  primitive: { topology: 'point-list' },
  vertex: ({ $vertexIndex: vid }) => {
    'use gpu';
    const positions = [
      d.vec2f(0.0, 0.5),
      d.vec2f(-0.5, -0.5),
      d.vec2f(0.5, -0.5),
    ];
    return { $position: d.vec4f(positions[vid], 0, 1) };
  },
  fragment: () => {
    'use gpu';
    return d.vec4f(1, 0, 0, 1);
  },
});

const canvas = document.querySelector('canvas') as HTMLCanvasElement;
const context = root.configureContext({ canvas });

pipeline.withColorAttachment({ view: context }).draw(3);
```
Three red pixels

This is a little more involved than creating and dispatching a compute pipeline, so let's go through the code step by step.

  1. Initialize the root, just like we did in a previous guide.

    import tgpu, { d } from 'typegpu';
    const root = await tgpu.init();
  2. Create the render pipeline with root.createRenderPipeline, setting its topology to 'point-list'. This will make the pipeline draw a point for each position returned from the vertex function.

    const pipeline = root.createRenderPipeline({
      primitive: { topology: 'point-list' },
      // ...
    });
  3. Define a vertex function, which tells the GPU where each vertex is located. When dispatching the pipeline later, we’ll specify how many vertices to draw. The GPU runs this function once per vertex, with $vertexIndex going from 0 up to count - 1.

    vertex: ({ $vertexIndex: vid }) => {
      'use gpu';
      const positions = [
        d.vec2f(0.0, 0.5),
        d.vec2f(-0.5, -0.5),
        d.vec2f(0.5, -0.5),
      ];
      return { $position: d.vec4f(positions[vid], 0, 1) };
    },

    The returned position is a 4-dimensional vector in clip space: X goes from -1 on the left to 1 on the right, and Y goes from -1 at the bottom to 1 at the top. Z is used for depth testing (irrelevant here), and W can stay 1 for now (see homogeneous coordinates if you're curious).

  4. Define a fragment function, which the GPU calls for each pixel covered by a shape. For 'point-list' topology, that’s exactly once per returned vertex. This particular fragment function always returns red.

    fragment: () => {
      'use gpu';
      return d.vec4f(1, 0, 0, 1);
    },
  5. Query the canvas. This example assumes your page has a `<canvas>` element.

    const canvas = document.querySelector('canvas') as HTMLCanvasElement;
  6. Call root.configureContext. This creates and configures a context for the provided canvas, which can then be used to render onto it.

    const context = root.configureContext({ canvas });
  7. Dispatch the pipeline by providing a color attachment and calling .draw(count), where count is the number of vertices to draw. The color attachment specifies the draw target, plus optional props like clearValue and loadOp.

    pipeline.withColorAttachment({ view: context }).draw(3);
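As a side note, the mapping between pixel coordinates and clip space can be sketched in plain TypeScript. The `pixelToClip` helper below is hypothetical (not a TypeGPU API); it just shows the arithmetic, including the Y flip caused by pixel coordinates growing downward:

```typescript
// Map a pixel coordinate on a canvas to WebGPU clip space.
// Clip-space X spans -1 (left) to 1 (right), Y spans -1 (bottom) to 1 (top),
// while pixel Y grows downward, hence the sign flip on Y.
function pixelToClip(
  px: number,
  py: number,
  width: number,
  height: number,
): [number, number] {
  const x = (px / width) * 2 - 1;
  const y = 1 - (py / height) * 2;
  return [x, y];
}

// The center of a 300x150 canvas maps to the clip-space origin.
console.log(pixelToClip(150, 75, 300, 150)); // [0, 0]
// The top-left pixel corner maps to (-1, 1).
console.log(pixelToClip(0, 0, 300, 150)); // [-1, 1]
```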

In the example above, the $position is the only value returned from the vertex function. We can actually return more values. In the example below, we pass in an additional color prop.

```ts
const pipeline = root.createRenderPipeline({
  primitive: { topology: 'point-list' },
  vertex: ({ $vertexIndex: vid }) => {
    'use gpu';
    const positions = [
      d.vec2f(0.0, 0.5),
      d.vec2f(-0.5, -0.5),
      d.vec2f(0.5, -0.5),
    ];
    const colors = [
      d.vec3f(1, 0, 0),
      d.vec3f(0, 1, 0),
      d.vec3f(0, 0, 1),
    ];
    return { $position: d.vec4f(positions[vid], 0, 1), color: colors[vid] };
  },
  fragment: ({ color }) => {
    'use gpu';
    return d.vec4f(color, 1);
  },
});

pipeline.withColorAttachment({ view: context }).draw(3);
```
Three pixels, one red, one green and one blue.

Each pixel ends up colored by the value returned by its vertex.

Let’s change the topology from 'point-list' to 'line-strip'. The vertex and fragment functions stay exactly the same. This changes how the GPU connects the vertices together.

```diff
- primitive: { topology: 'point-list' },
+ primitive: { topology: 'line-strip' },
```
A slanted line going smoothly from red to green to blue.

The fragment function now runs for every pixel along the line between the first and second vertices, and again between the second and third.

Notice that the line’s color transitions smoothly from red to green to blue. Each fragment receives values that are interpolated across the shape based on its position. If the first vertex returns color d.vec3f(1, 0, 0) and the second returns color d.vec3f(0, 1, 0), then a pixel exactly halfway between them sees d.vec3f(0.5, 0.5, 0). This applies to every extra prop returned by vertex, and this is a core principle of render pipelines.
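The interpolation itself is just a component-wise weighted average. Here's a minimal sketch in plain TypeScript (the `mix` helper is ours, not a TypeGPU API) of what the rasterizer computes for a pixel a fraction `t` of the way between two vertices:

```typescript
// Linearly interpolate between two per-vertex values, component by component.
// t = 0 yields the first vertex's value, t = 1 the second's.
function mix(a: number[], b: number[], t: number): number[] {
  return a.map((ai, i) => ai * (1 - t) + b[i] * t);
}

const red = [1, 0, 0];
const green = [0, 1, 0];
// A pixel exactly halfway along the line receives the average of both colors.
console.log(mix(red, green, 0.5)); // [0.5, 0.5, 0]
```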

Let’s change topology once more, this time to 'triangle-list' - the most widely used option.

```diff
- primitive: { topology: 'line-strip' },
+ primitive: { topology: 'triangle-list' },
```
A triangle with a red-green-blue gradient.

Here’s how it looks on a high-resolution canvas.

A smooth triangle with a red-green-blue gradient.

A triangle is drawn for every 3 vertices passed, and the fragment function runs for every pixel covered by the triangle they define. To draw more triangles, return more positions from the vertex function and pass a matching count to draw(): two triangles require 6 vertices, three triangles require 9, and so on.
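For example, covering a rectangle takes two triangles. The vertex bookkeeping can be sketched in plain TypeScript (the `rectToTriangles` helper is ours; its output is the kind of position list you'd return from a vertex function before calling `.draw(6)`):

```typescript
// Split an axis-aligned rectangle, given by two opposite corners in clip
// space, into two triangles: 6 vertices total, 3 per triangle.
function rectToTriangles(
  x0: number,
  y0: number,
  x1: number,
  y1: number,
): [number, number][] {
  return [
    [x0, y0], [x1, y0], [x0, y1], // first triangle
    [x0, y1], [x1, y0], [x1, y1], // second triangle
  ];
}

// Two triangles covering the whole clip-space square -> draw(6).
console.log(rectToTriangles(-1, -1, 1, 1).length); // 6
```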

In all examples above, we draw once and stop. To animate, two things change: we call pipeline.draw() every frame, and we pass a value from JavaScript into the shader that can vary from frame to frame.

```ts
const timeUniform = root.createUniform(d.f32);

const pipeline = root.createRenderPipeline({
  primitive: { topology: 'triangle-list' },
  vertex: ({ $vertexIndex: vid }) => {
    'use gpu';
    const positions = [
      d.vec2f(0.0, 0.5),
      d.vec2f(-0.5, -0.5),
      d.vec2f(0.5, -0.5),
    ];
    const offset = d.vec2f(std.sin(timeUniform.$), std.cos(timeUniform.$)) * 0.3;
    return { $position: d.vec4f(positions[vid] + offset, 0, 1) };
  },
  fragment: () => {
    'use gpu';
    return d.vec4f(1, 0, 0, 1);
  },
});

function frame(timestamp: number) {
  timeUniform.write(timestamp / 1000);
  pipeline.withColorAttachment({ view: context }).draw(3);
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```

requestAnimationFrame calls frame once per repaint and passes a timestamp in milliseconds. We divide by 1000 to get seconds and write that into timeUniform, which the vertex function reads as timeUniform.$.

Before, we drew points and lines at a very low resolution. On high-resolution canvases, they are barely visible. Unfortunately, there is no way to change the thickness of the drawn points and lines. To draw thicker points and lines, people usually approximate the shape with many triangles.
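To make that concrete, here's a sketch of the triangle-based approach in plain TypeScript (the `thickLine` helper is hypothetical): it expands a 2D segment into a quad of the desired thickness, built from two triangles.

```typescript
// Expand a 2D line segment (a -> b) into a quad of the given thickness,
// expressed as two triangles (6 vertices), by offsetting both endpoints
// along the segment's unit normal.
function thickLine(
  ax: number, ay: number,
  bx: number, by: number,
  thickness: number,
): [number, number][] {
  const dx = bx - ax;
  const dy = by - ay;
  const len = Math.hypot(dx, dy);
  // Unit normal to the segment, scaled to half the thickness.
  const nx = (-dy / len) * (thickness / 2);
  const ny = (dx / len) * (thickness / 2);
  return [
    [ax + nx, ay + ny], [ax - nx, ay - ny], [bx + nx, by + ny], // triangle 1
    [bx + nx, by + ny], [ax - nx, ay - ny], [bx - nx, by - ny], // triangle 2
  ];
}

// A horizontal segment expanded to thickness 0.1 -> feed the positions to a
// 'triangle-list' pipeline and call .draw(6).
console.log(thickLine(-0.5, 0, 0.5, 0, 0.1).length); // 6
```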

There is, however, a different approach that skips the geometry altogether.

```ts
import tgpu, { d, std } from 'typegpu';
import { fullScreenTriangle } from 'typegpu/common';

const root = await tgpu.init();

const canvas = document.querySelector('canvas') as HTMLCanvasElement;
const context = root.configureContext({ canvas });
```

configureContext
({
canvas: HTMLCanvasElement | OffscreenCanvas

The canvas for which a context will be created and configured.

canvas
,
alphaMode?: GPUCanvasAlphaMode

Determines the effect that alpha values will have on the content of textures returned by

GPUCanvasContext#getCurrentTexture

when read, displayed, or used as an image source.

alphaMode
: 'premultiplied' });
const
const pipeline: TgpuRenderPipeline<d.Vec4f>
pipeline
=
const root: TgpuRoot
root
.
WithBinding.createRenderPipeline<{}, {}, {
uv: d.Vec2f;
}, d.v4f>(descriptor: TgpuRenderPipeline<in Targets = never>.DescriptorBase & {
attribs?: {};
vertex: TgpuVertexFn<{}, {
uv: d.Vec2f;
}> | ((input: AutoVertexIn<InferGPURecord<AttribRecordToDefaultDataTypes<{}>>>) => AutoVertexOut<AnyAutoCustoms>);
fragment: TgpuFragmentFn<{
uv: d.Vec2f;
} & Record<string, AnyFragmentInputBuiltin>, TgpuFragmentFn<in Varying extends TgpuFragmentFn.In = Record<...>, out Output extends TgpuFragmentFn.Out = TgpuFragmentFn.Out>.Out> | ((input: AutoFragmentIn<...>) => d.v4f);
targets?: TgpuColorTargetState;
}): TgpuRenderPipeline<...> (+2 overloads)
createRenderPipeline
({
TgpuRenderPipeline<in Targets = never>.DescriptorBase.primitive?: TgpuPrimitiveState

Describes the primitive-related properties of the pipeline.

primitive
: {
topology: "triangle-list"
topology
: 'triangle-list' },
vertex: TgpuVertexFn<{}, {
uv: d.Vec2f;
}> | ((input: AutoVertexIn<InferGPURecord<AttribRecordToDefaultDataTypes<{}>>>) => AutoVertexOut<AnyAutoCustoms>)
vertex
:
const fullScreenTriangle: TgpuVertexFn<{}, {
uv: d.Vec2f;
}>

A vertex function that defines a single full-screen triangle out of three points.

@example

import { common } from 'typegpu';
const pipeline = root.createRenderPipeline({
vertex: common.fullScreenTriangle,
fragment: yourFragmentShader,
});
pipeline.draw(3);

fullScreenTriangle
,
fragment: TgpuFragmentFn<{
uv: d.Vec2f;
} & Record<string, AnyFragmentInputBuiltin>, TgpuFragmentFn.Out> | ((input: AutoFragmentIn<InferGPURecord<{
uv: d.Vec2f;
}>>) => d.v4f)
fragment
: ({
uv: d.v2f
uv
}) => {
'use gpu';
if (
import std
std
.
distance<d.v2f>(a: d.v2f, b: d.v2f): number (+1 overload)
export distance
distance
(
uv: d.v2f
uv
,
import d
d
.
function vec2f(xy: number): d.v2f (+3 overloads)
export vec2f

Schema representing vec2f - a vector with 2 elements of type f32. Also a constructor function for this vector value.

@example const vector = d.vec2f(); // (0.0, 0.0) const vector = d.vec2f(1); // (1.0, 1.0) const vector = d.vec2f(0.5, 0.1); // (0.5, 0.1)

@example const buffer = root.createBuffer(d.vec2f, d.vec2f(0, 1)); // buffer holding a d.vec2f value, with an initial value of vec2f(0, 1);

vec2f
(0.5)) < 0.2) {
return
import d
d
.
function vec4f(x: number, y: number, z: number, w: number): d.v4f (+9 overloads)
export vec4f

Schema representing vec4f - a vector with 4 elements of type f32. Also a constructor function for this vector value.

@example const vector = d.vec4f(); // (0.0, 0.0, 0.0, 0.0) const vector = d.vec4f(1); // (1.0, 1.0, 1.0, 1.0) const vector = d.vec4f(1, 2, 3, 4.5); // (1.0, 2.0, 3.0, 4.5)

@example const buffer = root.createBuffer(d.vec4f, d.vec4f(0, 1, 2, 3)); // buffer holding a d.vec4f value, with an initial value of vec4f(0, 1, 2, 3);

vec4f
(1, 0, 0, 1);
}
return
import d
d
.
function vec4f(x: number, y: number, z: number, w: number): d.v4f (+9 overloads)
export vec4f

Schema representing vec4f - a vector with 4 elements of type f32. Also a constructor function for this vector value.

@example const vector = d.vec4f(); // (0.0, 0.0, 0.0, 0.0) const vector = d.vec4f(1); // (1.0, 1.0, 1.0, 1.0) const vector = d.vec4f(1, 2, 3, 4.5); // (1.0, 2.0, 3.0, 4.5)

@example const buffer = root.createBuffer(d.vec4f, d.vec4f(0, 1, 2, 3)); // buffer holding a d.vec4f value, with an initial value of vec4f(0, 1, 2, 3);

vec4f
(0, 0, 0, 1);
},
});
const pipeline: TgpuRenderPipeline<d.Vec4f>
pipeline
.
TgpuRenderPipeline<Vec4f>.withColorAttachment(attachment: ColorAttachment): TgpuRenderPipeline<d.Vec4f>

Attaches texture views to the pipeline's targets (outputs).

@example // Draw 3 vertices onto the context's canvas pipeline .withColorAttachment({ view: context }) .draw(3)

@paramattachment The object should match the shape returned by the fragment shader, with values matching the ColorAttachment type.

withColorAttachment
({
ColorAttachment.view: GPUCanvasContext | (ColorTextureConstraint & RenderFlag) | GPUTextureView | TgpuTextureView<d.WgslTexture<WgslTextureProps>> | TgpuTextureRenderView

A

GPUTextureView

describing the texture subresource that will be output to for this color attachment.

view
:
const context: GPUCanvasContext
context
}).
TgpuRenderPipeline<Vec4f>.draw(vertexCount: number, instanceCount?: number, firstVertex?: number, firstInstance?: number): void
draw
(3);
A perfect red circle on a black canvas.

The fullScreenTriangle vertex function (from typegpu/common) returns positions that extend past clip space, so a single triangle ends up covering the entire visible canvas. Alongside $position it outputs uv, a 2D coordinate that runs from (0, 0) to (1, 1) across the canvas. The fragment function receives the interpolated uv at each pixel.
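
To build intuition, here is a CPU-side sketch of the classic full-screen-triangle trick that functions like fullScreenTriangle are typically based on (this is an illustration of the math, not necessarily the exact typegpu/common implementation): three vertices whose uv coordinates span 0 to 2, mapped to clip-space positions that reach past the [-1, 1] range, so the visible canvas (uv from 0 to 1) is fully covered by one triangle.

```typescript
// Illustrative sketch: derive uv and clip-space position from the vertex index.
function fullScreenVertex(
  vertexIndex: number,
): { pos: [number, number]; uv: [number, number] } {
  // uv = (0,0), (2,0), (0,2) for indices 0, 1, 2
  const uv: [number, number] = [(vertexIndex << 1) & 2, vertexIndex & 2];
  // Map uv in [0, 2] to clip space in [-1, 3], flipping y so uv.y grows downward
  const pos: [number, number] = [uv[0] * 2 - 1, 1 - uv[1] * 2];
  return { pos, uv };
}
```

The triangle's corners land at (-1, 1), (3, 1), and (-1, -3) in clip space; everything outside [-1, 1] is clipped away, leaving exactly the full canvas rasterized.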

Each pixel then colors itself based on the distance from its uv to the center (0.5, 0.5): if the distance is less than 0.2, the pixel is red; otherwise it is black.
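
The same per-pixel decision can be restated as a plain CPU-side function, which makes the circle test easy to check in isolation (colorAt is a hypothetical helper for illustration, not part of the TypeGPU API):

```typescript
// Red inside a radius-0.2 circle around (0.5, 0.5), black everywhere else;
// mirrors the fragment function's std.distance comparison.
function colorAt(u: number, v: number): [number, number, number, number] {
  const dist = Math.hypot(u - 0.5, v - 0.5);
  return dist < 0.2 ? [1, 0, 0, 1] : [0, 0, 0, 1];
}
```

For example, the canvas center (0.5, 0.5) is red, while a corner like (0, 0) sits at distance ~0.707 and stays black.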