Version: 0.7.x

Interface: GenerationConfig

Defined in: packages/react-native-executorch/src/types/llm.ts:247

Object configuring generation settings.
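A hypothetical usage sketch, assuming only the field names documented on this page (the call site that consumes the object is not shown here):

```typescript
// Hypothetical GenerationConfig literal using the optional fields
// documented on this page. Values are illustrative, not defaults.
const generationConfig = {
  temperature: 0.7,        // lower = more deterministic output
  topp: 0.9,               // nucleus-sampling threshold
  outputTokenBatchSize: 8, // soft cap on tokens per emitted batch
  batchTimeInterval: 100,  // upper bound on time between batches
};
```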

Properties

batchTimeInterval?

optional batchTimeInterval: number

Defined in: packages/react-native-executorch/src/types/llm.ts:251

Upper limit on the time interval between consecutive token batches.


outputTokenBatchSize?

optional outputTokenBatchSize: number

Defined in: packages/react-native-executorch/src/types/llm.ts:250

Soft upper limit on the number of tokens in each token batch. In some cases a batch can contain more tokens, e.g. when the batch would otherwise end with a special emoji join character.


temperature?

optional temperature: number

Defined in: packages/react-native-executorch/src/types/llm.ts:248

Scales the output logits by the inverse of the temperature, controlling the randomness (creativity) of text generation: lower values make output more deterministic, higher values make it more varied.
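The effect of inverse-temperature scaling can be sketched with a plain softmax; this is an illustration of the general technique, not code from the library:

```typescript
// Sketch: divide logits by the temperature before softmax.
// temperature < 1 sharpens the distribution toward the top logit;
// temperature > 1 flattens it toward uniform.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature); // inverse-temperature scaling
  const maxL = Math.max(...scaled);                  // subtract max for stability
  const exps = scaled.map((l) => Math.exp(l - maxL));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const logits = [2.0, 1.0, 0.5];
const sharp = softmaxWithTemperature(logits, 0.5); // top token dominates
const flat = softmaxWithTemperature(logits, 2.0);  // closer to uniform
```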


topp?

optional topp: number

Defined in: packages/react-native-executorch/src/types/llm.ts:249

Samples only from the smallest set of tokens whose cumulative probability exceeds topp (nucleus sampling).
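Nucleus (top-p) filtering can be sketched as follows; this is a minimal illustration of the selection rule, not the library's implementation:

```typescript
// Sketch: keep the smallest prefix of tokens, ordered by descending
// probability, whose cumulative probability exceeds topp; sampling
// then draws only from these indices.
function topPIndices(probs: number[], topp: number): number[] {
  const order = probs
    .map((p, i) => ({ p, i }))
    .sort((a, b) => b.p - a.p); // most probable first
  const kept: number[] = [];
  let cumulative = 0;
  for (const { p, i } of order) {
    kept.push(i);
    cumulative += p;
    if (cumulative >= topp) break; // smallest set reaching topp
  }
  return kept;
}

// With probs [0.5, 0.3, 0.15, 0.05] and topp = 0.7,
// tokens 0 and 1 (cumulative 0.8) are kept.
```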