Interface: LLMTypeBase
Defined in: types/llm.ts:101
Base return type for useLLM. Contains all fields except sendMessage.
Extended by
Properties
configure()
configure: (configuration) => void
Defined in: types/llm.ts:142
Configures chat and tool calling. See Configuring the model for details.
Parameters
configuration
Configuration object containing chatConfig, toolsConfig, and generationConfig.
Returns
void
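A minimal sketch of building the configuration object described above. The three top-level keys (chatConfig, toolsConfig, generationConfig) come from this reference; the nested field names (systemPrompt, contextWindowLength, temperature) are illustrative assumptions, not the library's confirmed option names.

```typescript
// Hypothetical configure() payload — nested field names are assumptions.
type ChatConfig = { systemPrompt?: string; contextWindowLength?: number };
type ToolsConfig = { tools?: object[] };
type GenerationConfig = { temperature?: number };

const configuration: {
  chatConfig: ChatConfig;
  toolsConfig: ToolsConfig;
  generationConfig: GenerationConfig;
} = {
  chatConfig: {
    systemPrompt: 'You are a helpful assistant.',
    contextWindowLength: 6,
  },
  toolsConfig: { tools: [] },
  generationConfig: { temperature: 0.7 },
};

// llm.configure(configuration); // llm is the object returned by useLLM
```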
deleteMessage()
deleteMessage: (index) => void
Defined in: types/llm.ts:172
Deletes all messages from the given index onward (inclusive). messageHistory is updated after the deletion.
Parameters
index
number
The index of the message to delete from history.
Returns
void
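A pure-function sketch of deleteMessage semantics: deleting at index i removes message i and everything after it. This only models the resulting history; the hook itself updates messageHistory internally.

```typescript
// Illustrative model of deleteMessage() — not the library's implementation.
type Message = { role: 'user' | 'assistant' | 'system'; content: string };

const history: Message[] = [
  { role: 'user', content: 'Hi' },
  { role: 'assistant', content: 'Hello! How can I help?' },
  { role: 'user', content: 'Never mind.' },
];

// llm.deleteMessage(1) would leave only the messages before index 1:
const afterDelete = history.slice(0, 1);
```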
downloadProgress
downloadProgress: number
Defined in: types/llm.ts:130
Represents the download progress as a value between 0 and 1, indicating the extent of the model file retrieval.
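Since the value is normalized to the 0–1 range, displaying it as a percentage is a one-line conversion. The 0.37 figure below is a stand-in value, not real API output.

```typescript
// downloadProgress is in [0, 1]; convert to a percentage for display.
const downloadProgress = 0.37; // stand-in for the hook's field
const percent = Math.round(downloadProgress * 100);
const label = `Downloading model… ${percent}%`;
```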
error
error: RnExecutorchError | null
Defined in: types/llm.ts:135
Contains the error if the model failed to load; null otherwise.
generate()
generate: (messages, tools?) => Promise&lt;string&gt;
Defined in: types/llm.ts:156
Runs the model to complete the chat passed in the messages argument. It does not manage conversation context.
For multimodal models, set mediaPath on user messages to include images.
Parameters
messages
Message[]
Array of messages representing the chat history. User messages may include a mediaPath field with a local image path.
tools?
Object[]
Optional array of tools that can be used during generation.
Returns
Promise<string>
The generated response as a string.
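A sketch of the messages argument for generate(), including a multimodal user message. The Message shape (role, content, mediaPath) follows the description above; the file path is purely illustrative.

```typescript
// Illustrative messages array for generate(); the path is hypothetical.
type Message = {
  role: 'user' | 'assistant' | 'system';
  content: string;
  mediaPath?: string; // local image path, user messages only
};

const messages: Message[] = [
  { role: 'system', content: 'You describe images.' },
  {
    role: 'user',
    content: 'What is in this picture?',
    mediaPath: 'file:///data/user/0/app/cache/photo.jpg',
  },
];

// const reply = await llm.generate(messages); // llm returned by useLLM
```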
getGeneratedTokenCount()
getGeneratedTokenCount: () => number
Defined in: types/llm.ts:148
Returns the number of tokens generated so far in the current generation.
Returns
number
The count of generated tokens.
getPromptTokenCount()
getPromptTokenCount: () => number
Defined in: types/llm.ts:166
Returns the number of prompt tokens in the last message.
Returns
number
The count of prompt tokens.
getTotalTokenCount()
getTotalTokenCount: () => number
Defined in: types/llm.ts:161
Returns the total number of tokens from the previous generation, i.e. the sum of prompt tokens and generated tokens.
Returns
number
The count of prompt and generated tokens.
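The relationship between the three token counters can be written out directly. The numbers below are stand-ins, not real API output; only the invariant comes from the description above.

```typescript
// Documented invariant:
// getTotalTokenCount() = getPromptTokenCount() + getGeneratedTokenCount()
const promptTokens = 12;    // stand-in for getPromptTokenCount()
const generatedTokens = 34; // stand-in for getGeneratedTokenCount()
const totalTokens = promptTokens + generatedTokens; // getTotalTokenCount()
```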
interrupt()
interrupt: () => void
Defined in: types/llm.ts:177
Interrupts the current inference.
Returns
void
isGenerating
isGenerating: boolean
Defined in: types/llm.ts:125
Indicates whether the model is currently generating a response.
isReady
isReady: boolean
Defined in: types/llm.ts:120
Indicates whether the model is ready.
messageHistory
messageHistory: Message[]
Defined in: types/llm.ts:105
History containing all messages in the conversation. This field is updated after the model responds to sendMessage.
response
response: string
Defined in: types/llm.ts:110
State of the generated response. This field is updated with each token generated by the model.
token
token: string
Defined in: types/llm.ts:115
The most recently generated token.
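How token and response relate during streaming: token holds the latest chunk, while response accumulates everything generated so far. A plain-TypeScript model of that behaviour, not the hook itself; the sample chunks are made up.

```typescript
// Model of the streaming state fields — illustrative only.
const stream = ['Hel', 'lo', ' world']; // stand-in token stream
let response = '';
let token = '';
for (const chunk of stream) {
  token = chunk;     // `token`: the most recently generated token
  response += chunk; // `response`: updated with each generated token
}
```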