Interface: LLMTypeBase
Defined in: types/llm.ts:112
Base return type for useLLM. Contains all fields except sendMessage.
Extended by
Properties
configure()
configure: (configuration) => void
Defined in: types/llm.ts:153
Configures chat and tool calling. See Configuring the model for details.
Parameters
configuration
Configuration object containing chatConfig, toolsConfig, and generationConfig.
Returns
void
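Since configure takes a single configuration object, the call shape can be sketched with a hand-written stand-in. The concrete fields inside each sub-object (such as systemPrompt) are illustrative assumptions, not a documented guarantee:

```typescript
// Shape of the configuration argument as described above; the concrete
// fields inside each sub-object are assumptions for illustration.
type Configuration = {
  chatConfig?: Record<string, unknown>;
  toolsConfig?: Record<string, unknown>;
  generationConfig?: Record<string, unknown>;
};

// Hand-written stand-in for the hook's configure field: it only records
// the calls so the shape can be demonstrated off-device.
const recorded: Configuration[] = [];
const configure = (configuration: Configuration): void => {
  recorded.push(configuration);
};

configure({
  chatConfig: { systemPrompt: 'You are a helpful assistant.' }, // assumed field name
});
```

In the real hook, configure comes from the object returned by useLLM and updates the model's chat and tool-calling settings.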
deleteMessage()
deleteMessage: (index) => void
Defined in: types/llm.ts:183
Deletes all messages in the history starting from the message at the given index. After deletion, messageHistory is updated.
Parameters
index
number
The index of the message to delete from history.
Returns
void
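The deletion semantics above (the message at the index and everything after it is removed) can be mirrored with a pure function on a plain array; the Message shape is simplified for illustration:

```typescript
// Simplified Message shape for illustration; the real type has more fields.
type Message = { role: 'user' | 'assistant' | 'system'; content: string };

// Mirrors the documented deleteMessage semantics on a plain array:
// removing index i drops the message at i and all messages after it.
function deleteFromIndex(history: Message[], index: number): Message[] {
  return history.slice(0, index);
}

const history: Message[] = [
  { role: 'user', content: 'Hi' },
  { role: 'assistant', content: 'Hello!' },
  { role: 'user', content: 'Tell me a joke' },
];
const updated = deleteFromIndex(history, 1); // keeps only the first message
```

Calling the hook's deleteMessage(1) on a three-message history therefore leaves only the first message in messageHistory.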
downloadProgress
downloadProgress:
number
Defined in: types/llm.ts:141
Download progress of the model file, expressed as a value between 0 and 1 (1 means the file has been fully retrieved).
error
error:
RnExecutorchError | null
Defined in: types/llm.ts:146
Contains the error if the model failed to load; null otherwise.
generate()
generate: (messages, tools?) => Promise<string>
Defined in: types/llm.ts:167
Runs the model to complete the chat passed in the messages argument. It does not manage conversation context.
For multimodal models, set mediaPath on user messages to include images.
Parameters
messages
Message[]
Array of messages representing the chat history. User messages may include a mediaPath field with a local image path.
tools?
Object[]
Optional array of tools that can be used during generation.
Returns
Promise<string>
The generated response as a string.
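A usage sketch of generate with a multimodal user message. The llm object below is a hand-written stub standing in for the hook's return value (the real call runs the model on-device), and the mediaPath value is a placeholder:

```typescript
// Message shape simplified for illustration; the real type has more fields.
type Message = {
  role: 'user' | 'assistant' | 'system';
  content: string;
  mediaPath?: string; // local image path for multimodal models
};

// Hand-written stub of the hook's generate field, so the call shape can be
// shown off-device. The real implementation runs the model and resolves
// with the full generated response.
const llm = {
  generate: async (messages: Message[], tools?: object[]): Promise<string> =>
    `stub reply to ${messages.length} message(s)`,
};

const messages: Message[] = [
  { role: 'system', content: 'You are a concise assistant.' },
  { role: 'user', content: 'Describe this image.', mediaPath: 'file:///tmp/photo.jpg' },
];

llm.generate(messages).then((reply) => {
  // `reply` is the complete generated response as a single string.
});
```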
getGeneratedTokenCount()
getGeneratedTokenCount: () =>
number
Defined in: types/llm.ts:159
Returns the number of tokens generated so far in the current generation.
Returns
number
The count of generated tokens.
getPromptTokenCount()
getPromptTokenCount: () =>
number
Defined in: types/llm.ts:177
Returns the number of prompt tokens in the last message.
Returns
number
The count of prompt tokens.
getTotalTokenCount()
getTotalTokenCount: () =>
number
Defined in: types/llm.ts:172
Returns the number of total tokens from the previous generation. This is a sum of prompt tokens and generated tokens.
Returns
number
The count of prompt and generated tokens.
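Since getTotalTokenCount is documented as the sum of the other two counters, the relationship is simple arithmetic (values below are illustrative, not real measurements):

```typescript
// Illustrative values; on a device these come from the getters above.
const promptTokenCount = 42;     // what getPromptTokenCount() would return
const generatedTokenCount = 128; // what getGeneratedTokenCount() would return

// Per the docs, the total is the sum of prompt and generated tokens.
const totalTokenCount = promptTokenCount + generatedTokenCount; // 170
```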
interrupt()
interrupt: () =>
void
Defined in: types/llm.ts:188
Function to interrupt the current inference.
Returns
void
isGenerating
isGenerating:
boolean
Defined in: types/llm.ts:136
Indicates whether the model is currently generating a response.
isReady
isReady:
boolean
Defined in: types/llm.ts:131
Indicates whether the model is ready.
messageHistory
messageHistory:
Message[]
Defined in: types/llm.ts:116
History containing all messages in the conversation. This field is updated after the model responds to sendMessage.
response
response:
string
Defined in: types/llm.ts:121
State of the generated response. This field is updated with each token generated by the model.
token
token:
string
Defined in: types/llm.ts:126
The most recently generated token.
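The relationship between token and response can be pictured as accumulation: response grows by one token per update, while token always holds the latest one. A simplified mirror of that behavior, not the hook's internals:

```typescript
// Simplified mirror of how `response` grows: each generated token is
// appended to the accumulated response string.
const tokens = ['Hel', 'lo', ', ', 'world', '!'];
let response = '';
let token = '';
for (const t of tokens) {
  token = t;     // `token` always holds the most recent token
  response += t; // `response` holds everything generated so far
}
// response === 'Hello, world!'
```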