Version: Next

Interface: LLMTypeBase

Defined in: types/llm.ts:98

Base return type for useLLM. Contains all fields except sendMessage.


Properties

configure()

configure: (configuration) => void

Defined in: types/llm.ts:139

Configures chat and tool calling. See Configuring the model for details.

Parameters

configuration

LLMConfig

Configuration object containing chatConfig, toolsConfig, and generationConfig.

Returns

void
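A minimal sketch of building a configuration object for configure(). The top-level fields (chatConfig, toolsConfig, generationConfig) come from this page; the inner field names below are illustrative assumptions, not the library's exact API.

```typescript
// Sketch of an LLMConfig-shaped object. Top-level keys match this page;
// the nested fields (systemPrompt, temperature) are assumptions for illustration.
interface LLMConfigSketch {
  chatConfig?: Record<string, unknown>;
  toolsConfig?: Record<string, unknown>;
  generationConfig?: Record<string, unknown>;
}

const config: LLMConfigSketch = {
  chatConfig: { systemPrompt: "You are a helpful assistant." }, // assumed field
  generationConfig: { temperature: 0.7 },                       // assumed field
};

// In a component, this would be passed as: llm.configure(config);
```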


deleteMessage()

deleteMessage: (index) => void

Defined in: types/llm.ts:169

Deletes the message at the given index together with all messages after it. After deletion, messageHistory is updated.

Parameters

index

number

The index of the message to delete from history.

Returns

void
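The documented semantics can be sketched as a plain array operation: deleting at an index drops that message and everything after it. The Message shape below (role/content) is an assumption for illustration.

```typescript
// Sketch of deleteMessage semantics: keep only messages before `index`.
type MessageSketch = { role: "user" | "assistant"; content: string };

function deleteFrom(history: MessageSketch[], index: number): MessageSketch[] {
  return history.slice(0, index);
}

const history: MessageSketch[] = [
  { role: "user", content: "Hi" },
  { role: "assistant", content: "Hello!" },
  { role: "user", content: "How are you?" },
];

// Deleting at index 1 removes the assistant reply and the follow-up question.
const updated = deleteFrom(history, 1);
```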


downloadProgress

downloadProgress: number

Defined in: types/llm.ts:127

Download progress of the model file, represented as a value between 0 and 1.
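Because the value is a 0-to-1 fraction, a UI typically converts it to a percentage before display. A minimal helper:

```typescript
// Convert the 0..1 downloadProgress fraction to a display percentage.
function formatProgress(downloadProgress: number): string {
  return `${Math.round(downloadProgress * 100)}%`;
}
```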


error

error: RnExecutorchError | null

Defined in: types/llm.ts:132

Contains the error message if the model failed to load.


generate()

generate: (messages, tools?) => Promise<string>

Defined in: types/llm.ts:153

Runs the model to complete the chat passed in the messages argument. It does not manage conversation context. For multimodal models, set mediaPath on user messages to include images.

Parameters

messages

Message[]

Array of messages representing the chat history. User messages may include a mediaPath field with a local image path.

tools?

Object[]

Optional array of tools that can be used during generation.

Returns

Promise<string>

The generated tokens as a string.
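A sketch of calling generate() with a chat history, including a multimodal user message. The role/content fields on Message are assumptions based on this page's description; the mock below stands in for the real hook's generate, which runs the model.

```typescript
// Illustrative Message shape; mediaPath is documented for user messages.
type Msg = { role: string; content: string; mediaPath?: string };

const messages: Msg[] = [
  { role: "system", content: "Answer briefly." },
  { role: "user", content: "What is in this picture?", mediaPath: "file:///tmp/photo.jpg" },
];

// Stand-in for llm.generate(messages): returns a Promise<string>,
// here simply echoing the last message instead of running a model.
async function mockGenerate(msgs: Msg[]): Promise<string> {
  return `echo: ${msgs[msgs.length - 1].content}`;
}
```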


getGeneratedTokenCount()

getGeneratedTokenCount: () => number

Defined in: types/llm.ts:145

Returns the number of tokens generated so far in the current generation.

Returns

number

The count of generated tokens.


getPromptTokenCount()

getPromptTokenCount: () => number

Defined in: types/llm.ts:163

Returns the number of prompt tokens in the last message.

Returns

number

The count of prompt tokens.


getTotalTokenCount()

getTotalTokenCount: () => number

Defined in: types/llm.ts:158

Returns the number of total tokens from the previous generation. This is a sum of prompt tokens and generated tokens.

Returns

number

The count of prompt and generated tokens.
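The invariant across the three counters is that the total is the sum of prompt and generated tokens. A minimal mock of that relationship:

```typescript
// Mock of the three token counters; only the documented invariant
// (total = prompt + generated) is modeled here.
function makeCounters(promptTokens: number, generatedTokens: number) {
  return {
    getPromptTokenCount: () => promptTokens,
    getGeneratedTokenCount: () => generatedTokens,
    getTotalTokenCount: () => promptTokens + generatedTokens,
  };
}

const counters = makeCounters(12, 30);
```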


interrupt()

interrupt: () => void

Defined in: types/llm.ts:174

Function to interrupt the current inference.

Returns

void
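A sketch of the state transition interrupt() implies: while inference runs, isGenerating is true; interrupting returns it to false. The mock below models only that transition, not actual inference.

```typescript
// Mock generation state: interrupt() flips isGenerating back to false.
function makeGenerationState() {
  let generating = true;
  return {
    get isGenerating() {
      return generating;
    },
    interrupt: () => {
      generating = false;
    },
  };
}

const state = makeGenerationState();
state.interrupt();
```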


isGenerating

isGenerating: boolean

Defined in: types/llm.ts:122

Indicates whether the model is currently generating a response.


isReady

isReady: boolean

Defined in: types/llm.ts:117

Indicates whether the model is ready.


messageHistory

messageHistory: Message[]

Defined in: types/llm.ts:102

History containing all messages in the conversation. This field is updated after the model responds to sendMessage.


response

response: string

Defined in: types/llm.ts:107

State of the generated response. This field is updated with each token generated by the model.


token

token: string

Defined in: types/llm.ts:112

The most recently generated token.
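The relationship between the two fields above can be sketched as follows: response is the running concatenation of every token the model has emitted, and token is the most recent one. This is a simplified model of the documented behavior, not the hook's implementation.

```typescript
// Sketch: derive `response` and `token` from a stream of emitted tokens.
function accumulate(tokens: string[]): { token: string; response: string } {
  const response = tokens.join("");
  return { token: tokens[tokens.length - 1] ?? "", response };
}

const { token, response } = accumulate(["Hel", "lo", "!"]);
```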