Version: Next

Interface: GenerationConfig

Defined in: types/llm.ts:350

Object configuring generation settings.
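A config might look like the following sketch. All fields are optional; the values below are illustrative, not the library's defaults, and the unit of `batchTimeInterval` is not specified here.

```typescript
// Illustrative GenerationConfig values — examples only, not defaults.
const config = {
  temperature: 0.7,        // < 1 sharpens the distribution, > 1 flattens it
  topP: 0.9,               // nucleus sampling threshold
  minP: 0.05,              // drop tokens with prob < 0.05 * max_prob
  repetitionPenalty: 1.1,  // values > 1 discourage repetition
  outputTokenBatchSize: 8, // soft cap on tokens per emitted batch
  batchTimeInterval: 100,  // upper bound on time between batches
};
```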

Properties

batchTimeInterval?

optional batchTimeInterval: number

Defined in: types/llm.ts:358

Upper limit on the time interval between consecutive token batches.


minP?

optional minP: number

Defined in: types/llm.ts:355

Minimum probability threshold: tokens with prob < minP * max_prob are excluded. 0 disables filtering.
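The filtering rule above can be sketched as follows (a hypothetical helper, not the library's implementation):

```typescript
// Min-p filtering sketch: tokens with probability < minP * maxProb are
// excluded; minP = 0 disables filtering entirely.
function minPFilter(probs: number[], minP: number): number[] {
  if (minP === 0) return probs.slice(); // 0 disables filtering
  const maxProb = Math.max(...probs);
  return probs.filter((p) => p >= minP * maxProb);
}
```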


outputTokenBatchSize?

optional outputTokenBatchSize: number

Defined in: types/llm.ts:357

Soft upper limit on the number of tokens in each token batch. In some cases a batch can contain more tokens, e.g. when the batch would otherwise end with a special emoji-joining character.


repetitionPenalty?

optional repetitionPenalty: number

Defined in: types/llm.ts:356

Multiplicative penalty applied to logits of recently generated tokens. Values > 1 discourage repetition. 1 disables the penalty.
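One common way to apply such a multiplicative penalty is sketched below (a hypothetical helper; the library's exact formula may differ). Dividing positive logits and multiplying negative ones keeps the penalty direction consistent:

```typescript
// Repetition penalty sketch: logits of recently generated tokens are
// penalized multiplicatively; penalty = 1 is a no-op.
function applyRepetitionPenalty(
  logits: number[],
  recentTokenIds: Set<number>,
  penalty: number,
): number[] {
  return logits.map((logit, id) =>
    recentTokenIds.has(id)
      ? logit > 0
        ? logit / penalty // positive logit: shrink toward 0
        : logit * penalty // negative logit: push further down
      : logit,
  );
}
```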


temperature?

optional temperature: number

Defined in: types/llm.ts:351

Scales the output logits by the inverse of the temperature. Controls the randomness and creativity of generated text.
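The scaling step can be sketched as follows (a hypothetical helper, not library code): dividing logits by the temperature before softmax means values below 1 sharpen the distribution and values above 1 flatten it.

```typescript
// Temperature scaling sketch: logits are multiplied by 1/temperature
// before being converted to probabilities.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const maxLogit = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map((l) => Math.exp(l - maxLogit));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}
```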


topp?

optional topp: number

Defined in: types/llm.ts:354

Deprecated. Use topP instead.


topP?

optional topP: number

Defined in: types/llm.ts:352

Samples only from the smallest set of tokens whose cumulative probability exceeds topP.
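The truncation step of top-p (nucleus) sampling can be sketched as follows (a hypothetical helper, not the library's implementation): tokens are taken in descending probability order until their cumulative probability exceeds the threshold.

```typescript
// Top-p truncation sketch: returns the indices of the smallest set of
// tokens, in descending probability order, whose cumulative probability
// exceeds topP. Sampling would then draw from this reduced set.
function topPIndices(probs: number[], topP: number): number[] {
  const order = probs
    .map((p, i) => [p, i] as const)
    .sort((a, b) => b[0] - a[0]); // descending by probability
  const kept: number[] = [];
  let cumulative = 0;
  for (const [p, i] of order) {
    kept.push(i);
    cumulative += p;
    if (cumulative > topP) break; // smallest set exceeding topP
  }
  return kept;
}
```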