When working on top of existing shader code, you may sometimes know for certain that a variable is already defined and accessible in the code.
In such a scenario you can use tgpu['~unstable'].rawCodeSnippet, an advanced API that creates a typed shader expression which is injected into the final shader bundle upon use.
@param ― expression The code snippet that will be injected in place of foo.$
@param ― type The type of the expression
@param ― origin Where the value originates from.
The optional third parameter, origin, lets the TypeGPU transpiler know how to optimize the code snippet, and also allows for some transpilation-time validity checks.
Usually 'runtime' (the default) is a safe bet, but if you’re sure that the expression or
computation is constant (either a reference to a constant, a numeric literal,
or an operation on constants), then pass 'constant' as it might lead to better
optimizations.
If the expression is a direct reference to an existing value (e.g. a uniform, a
storage binding, …), then choose from 'uniform', 'mutable', 'readonly', 'workgroup',
'private' or 'handle' depending on the address space of the referred value.
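For illustration, a minimal sketch based on the parameters described above (the identifier EXISTING_GLOBAL, the f32 type, and the wrapping function are assumptions, not part of any real setup):

```ts
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

// `EXISTING_GLOBAL` is an identifier that we know will be in the
// final shader bundle (an assumption for this sketch).
const existing = tgpu['~unstable'].rawCodeSnippet('EXISTING_GLOBAL', d.f32);

const doubled = tgpu.fn([], d.f32)(() => {
  'use gpu';
  // `existing.$` resolves to `EXISTING_GLOBAL` in the generated WGSL
  return existing.$ * 2;
});
```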
tgpu.comptime(func) creates a version of func that, instead of being transpiled to WGSL, is called during WGSL code generation.
The resulting function can be called safely in a TypeGPU function to precompute and inject a value into the final shader code.
Note how the function passed into comptime doesn't have to be marked with
'use gpu' and can freely use Math. That's because the function doesn't execute on the GPU; it
executes before the shader code is sent to the GPU.
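As a sketch (the function names and the computed constant are illustrative):

```ts
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

// Runs on the CPU during WGSL generation, so Math is available.
const getInvSqrt2 = tgpu.comptime(() => 1 / Math.sqrt(2));

const scale = tgpu.fn([d.f32], d.f32)((x) => {
  'use gpu';
  // The precomputed value is injected into the final shader code.
  return x * getInvSqrt2();
});
```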
If a condition is known at resolution time (comptime), then TypeGPU prunes the unvisited block.
Comptime-known conditions include:
referenced JS values and operations on them, like std.pow(userSelection, 2) < THRESHOLD (note that these values are concretized during shader resolution; if userSelection may change over time, use buffers to provide it).
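A sketch of such pruning (the DEBUG flag and the function are illustrative):

```ts
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

const DEBUG = false; // a referenced JS value, known at resolution time

const shade = tgpu.fn([d.f32], d.f32)((brightness) => {
  'use gpu';
  if (DEBUG) {
    // Pruned from the generated WGSL: the condition is
    // known to be false when the shader is resolved.
    console.log('brightness:', brightness);
  }
  return brightness * 0.5;
});
```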
Yes, you read that correctly, TypeGPU implements logging to the console on the GPU!
Just call console.log like you would in plain JavaScript, and open the console to see the results.
root.createMutable allocates memory on the GPU that can be mutated in-place and allows passing data between host and shader (for a general-purpose buffer, use TgpuRoot.createBuffer). Here we create a mutable u32 counter:

const callCountMutable = root.createMutable(d.u32);
Creates a compute pipeline that executes the given callback in an exact number of threads.
This is different from withCompute(...).createPipeline() in that it does a bounds check on the
thread id, whereas regular pipelines do not and work in units of workgroups.
@param ― callback A function converted to WGSL and executed on the GPU.
It can accept up to 3 parameters (x, y, z) which correspond to the global invocation ID
of the executing thread.
@example
If no parameters are provided, the callback will be executed once, in a single thread.
const fooPipeline = root
  .createGuardedComputePipeline(() => {
    'use gpu';
    console.log('Hello, GPU!');
  });
fooPipeline.dispatchThreads();
// [GPU] Hello, GPU!
@example
One parameter means n-threads will be executed in parallel.
const fooPipeline = root
  .createGuardedComputePipeline((x) => {
    'use gpu';
    if (x % 16 === 0) {
      // Logging every 16th thread
      console.log('I am the', x, 'thread');
    }
  });
// executing 512 threads
fooPipeline.dispatchThreads(512);
// [GPU] I am the 256 thread
// [GPU] I am the 272 thread
// ... (30 hidden logs)
// [GPU] I am the 16 thread
// [GPU] I am the 240 thread
const countPipeline = root.createGuardedComputePipeline(() => {
  'use gpu';
  callCountMutable.$ += 1;
  console.log('Call number', callCountMutable.$);
});
Dispatches the pipeline.
Unlike TgpuComputePipeline.dispatchWorkgroups(), this method takes in the
number of threads to run in each dimension.
Under the hood, the number of expected threads is sent as a uniform, and
"guarded" by a bounds check.
countPipeline.dispatchThreads();
// Eventually...
// "[GPU] Call number 1"
// "[GPU] Call number 2"
Currently supported data types for logging include scalars, vectors, matrices, structs, and fixed-size arrays.
Under the hood, TypeGPU translates console.log to a series of serializing functions that write the logged arguments to a buffer that is read and deserialized after every draw/dispatch call.
The buffer is of fixed size, which may limit the total amount of information that can be logged; if the buffer overflows, additional logs are dropped.
If that’s an issue, you may specify the size manually when creating the root object.
logCountLimit ― The maximum number of logs that can appear during a single draw/dispatch call.
If this number is exceeded, a warning containing the total number of calls is logged and further logs are dropped.
@default ― 64
logSizeLimit ― The total number of bytes reserved for each log call.
If this number is exceeded, an exception is thrown during resolution.
const mainFragment = tgpu['~unstable'].fragmentFn({
  in: { pos: d.builtin.position },
  out: d.vec4f,
})(({ pos }) => {
  // this log fits in 8 bytes
  // (static strings do not count towards the serialized log size)
  console.log('pos:', pos.x, pos.y);
  return d.vec4f(0, 1, 1, 1);
});
/* pipeline creation and draw call */
Other supported console functionalities include console.debug, console.info, console.warn, console.error and console.clear.
There are some limitations (some of which we intend to alleviate in the future):
console.log only works when used in TypeGPU functions that are transitively called in a TypeGPU pipeline.
Otherwise, for example when using tgpu.resolve on a WGSL template, logs are ignored.
console.log only works in fragment and compute shaders.
This is due to a WebGPU limitation that does not allow modifying buffers during the vertex shader stage.
console.log currently does not support template literals (but you can use string substitutions, or just pass multiple arguments instead).
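For instance (a sketch; the function and argument names are illustrative):

```ts
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

const logX = tgpu.fn([d.f32])((x) => {
  'use gpu';
  // Not supported: console.log(`x is ${x}`);
  // Pass multiple arguments instead:
  console.log('x is', x);
});
```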
TypeGPU supports for...of loops in shader functions. The only constraints are that the loop variable must be declared with const and the iterable must be stored in a variable.
const processNeighbors = (cell: d.v2i) => {
  'use gpu';
  // the iterable is stored in a variable...
  const offsets = [d.vec2i(-1, 0), d.vec2i(1, 0), d.vec2i(0, -1), d.vec2i(0, 1)];
  // ...and the loop variable is declared with `const`
  for (const offset of offsets) {
    const neighbor = d.vec2i(cell.x + offset.x, cell.y + offset.y);
    console.log('visiting neighbor at', neighbor);
  }
};
For code with small, fixed iteration counts, you can use tgpu.unroll to unroll loops at compile time. This eliminates branch prediction overhead and can significantly improve performance.