Interface: LLMTypeMultimodal<C>
Defined in: types/llm.ts:182
Return type for useLLM when model.capabilities is provided.
sendMessage accepts a typed media object based on declared capabilities.
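The capability-driven typing can be sketched as follows. The names LLMCapability and MediaArg appear in this interface, but the concrete mapping below ('vision' to imagePath, 'audio' to audioPath) is an assumption for illustration, not the library's exact definition:

```typescript
// Sketch: the media object's shape is derived from the declared capabilities.
// The concrete capability -> field mapping here is assumed, not verified.
type LLMCapability = 'vision' | 'audio';

type MediaArg<C extends readonly LLMCapability[]> =
  ('vision' extends C[number] ? { imagePath?: string } : unknown) &
  ('audio' extends C[number] ? { audioPath?: string } : unknown);

// With capabilities ['vision'], only imagePath is accepted:
const visionMedia: MediaArg<readonly ['vision']> = {
  imagePath: 'file:///tmp/cat.jpg',
};
```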
Extends
LLMTypeBase
Type Parameters
C
C extends readonly LLMCapability[] = readonly LLMCapability[]
Properties
configure()
configure: (configuration) => void
Defined in: types/llm.ts:139
Configures chat and tool calling. See Configuring the model for details.
Parameters
configuration
Configuration object containing chatConfig, toolsConfig, and generationConfig.
Returns
void
Inherited from
LLMTypeBase.configure
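A hypothetical shape of the configure() argument. Only the three top-level keys (chatConfig, toolsConfig, generationConfig) come from the description above; the nested field names are assumptions for illustration:

```typescript
// Hypothetical configuration object; nested field names are assumed.
const configuration = {
  chatConfig: {
    systemPrompt: 'You are a helpful assistant.', // assumed field name
    contextWindowLength: 6,                       // assumed field name
  },
  toolsConfig: undefined, // tool calling left unconfigured in this sketch
  generationConfig: {},   // model-specific generation options
};

// llm.configure(configuration);
```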
deleteMessage()
deleteMessage: (index) => void
Defined in: types/llm.ts:169
Deletes all messages from the history, starting with the message at the given index position. After deletion, messageHistory will be updated.
Parameters
index
number
The index of the message to delete from history.
Returns
void
Inherited from
LLMTypeBase.deleteMessage
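The deletion semantics described above can be mirrored with a plain helper, so the behavior is visible without the actual hook: the message at `index` and everything after it are removed.

```typescript
// Mirrors deleteMessage semantics: drop the message at `index` and all later ones.
type Message = { role: 'user' | 'assistant'; content: string };

function deleteFrom(history: Message[], index: number): Message[] {
  return history.slice(0, index);
}

const history: Message[] = [
  { role: 'user', content: 'Hi' },
  { role: 'assistant', content: 'Hello!' },
  { role: 'user', content: 'Tell me a joke' },
];

const updated = deleteFrom(history, 1); // keeps only the first message
```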
downloadProgress
downloadProgress: number
Defined in: types/llm.ts:127
Represents the download progress as a value between 0 and 1, indicating the extent of the model file retrieval.
Inherited from
LLMTypeBase.downloadProgress
error
error: RnExecutorchError | null
Defined in: types/llm.ts:132
Contains the error if the model failed to load; null otherwise.
Inherited from
LLMTypeBase.error
generate()
generate: (messages, tools?) => Promise&lt;string&gt;
Defined in: types/llm.ts:153
Runs the model to complete the chat passed in the messages argument. It does not manage conversation context.
For multimodal models, set mediaPath on user messages to include images.
Parameters
messages
Message[]
Array of messages representing the chat history. User messages may include a mediaPath field with a local image path.
tools?
Object[]
Optional array of tools that can be used during generation.
Returns
Promise<string>
The generated tokens as a string.
Inherited from
LLMTypeBase.generate
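A sketch of the message array for a multimodal generate() call. The role names and the mediaPath field follow the description above; the call itself is commented out, since it requires a loaded model:

```typescript
// Message-array shape for a multimodal generate() call.
type Message = {
  role: 'system' | 'user' | 'assistant';
  content: string;
  mediaPath?: string; // local image path, used by multimodal models
};

const messages: Message[] = [
  { role: 'system', content: 'You are a concise assistant.' },
  { role: 'user', content: 'What is in this photo?', mediaPath: 'file:///tmp/photo.jpg' },
];

// const answer = await llm.generate(messages); // does not update messageHistory
```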
getGeneratedTokenCount()
getGeneratedTokenCount: () => number
Defined in: types/llm.ts:145
Returns the number of tokens generated so far in the current generation.
Returns
number
The count of generated tokens.
Inherited from
LLMTypeBase.getGeneratedTokenCount
getPromptTokenCount()
getPromptTokenCount: () => number
Defined in: types/llm.ts:163
Returns the number of prompt tokens in the last message.
Returns
number
The count of prompt tokens.
Inherited from
LLMTypeBase.getPromptTokenCount
getTotalTokenCount()
getTotalTokenCount: () => number
Defined in: types/llm.ts:158
Returns the number of total tokens from the previous generation. This is a sum of prompt tokens and generated tokens.
Returns
number
The count of prompt and generated tokens.
Inherited from
LLMTypeBase.getTotalTokenCount
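The identity stated above, that the total token count is the sum of prompt tokens and generated tokens, can be expressed as a plain function for illustration:

```typescript
// getTotalTokenCount() = getPromptTokenCount() + getGeneratedTokenCount()
function expectedTotal(promptTokens: number, generatedTokens: number): number {
  return promptTokens + generatedTokens;
}
```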
interrupt()
interrupt: () => void
Defined in: types/llm.ts:174
Function to interrupt the current inference.
Returns
void
Inherited from
LLMTypeBase.interrupt
isGenerating
isGenerating: boolean
Defined in: types/llm.ts:122
Indicates whether the model is currently generating a response.
Inherited from
LLMTypeBase.isGenerating
isReady
isReady: boolean
Defined in: types/llm.ts:117
Indicates whether the model is ready.
Inherited from
LLMTypeBase.isReady
messageHistory
messageHistory: Message[]
Defined in: types/llm.ts:102
History containing all messages in the conversation. This field is updated after the model responds to sendMessage.
Inherited from
LLMTypeBase.messageHistory
response
response: string
Defined in: types/llm.ts:107
State of the generated response. This field is updated with each token generated by the model.
Inherited from
LLMTypeBase.response
sendMessage()
sendMessage: (message, media?) => Promise&lt;string&gt;
Defined in: types/llm.ts:193
Function to add a user message to the conversation.
Pass a media object whose shape is determined by the declared capabilities.
After the model responds, messageHistory will be updated.
Parameters
message
string
The message string to send.
media?
MediaArg<C>
Optional media object (e.g. { imagePath } for vision).
Returns
Promise<string>
The model's response as a string.
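A minimal mock of the sendMessage contract described above: the user message and the model's reply are both appended to messageHistory. The canned reply stands in for real model output, and the media argument is omitted for brevity:

```typescript
// Mock illustrating sendMessage: both sides of the exchange land in messageHistory.
type Message = { role: 'user' | 'assistant'; content: string };

const llm = {
  messageHistory: [] as Message[],
  async sendMessage(message: string): Promise<string> {
    this.messageHistory.push({ role: 'user', content: message });
    const reply = '(model reply)'; // placeholder for generated text
    this.messageHistory.push({ role: 'assistant', content: reply });
    return reply;
  },
};
```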
token
token: string
Defined in: types/llm.ts:112
The most recently generated token.