# Capacitor Local LLM plugin

Repository: https://github.com/ionic-team/capacitor-local-llm

## Install

```bash
npm install @capacitor/local-llm
npx cap sync
```
## API

- `systemAvailability()`
- `download()`
- `prompt(...)`
- `endSession(...)`
- `generateImage(...)`

The main plugin interface for interacting with on-device LLMs.
### systemAvailability()

```typescript
systemAvailability() => Promise<SystemAvailabilityResponse>
```
Checks the availability status of the on-device LLM.
Use this method to determine if the LLM is ready to use, needs to be downloaded, or is unavailable on the device.
Returns: `Promise<SystemAvailabilityResponse>`
Since: 1.0.0
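A minimal usage sketch. It assumes the plugin is exported as `LocalLLM` from `@capacitor/local-llm`; the typed stub below stands in for the real import, since the on-device model only runs on a device:

```typescript
// In an app: import { LocalLLM } from '@capacitor/local-llm';
// A typed stub stands in here, because the on-device LLM is unavailable off-device.
type LLMAvailability = 'available' | 'unavailable' | 'notready' | 'downloadable' | 'responding';
interface SystemAvailabilityResponse { status: LLMAvailability; }

const LocalLLM = {
  async systemAvailability(): Promise<SystemAvailabilityResponse> {
    return { status: 'downloadable' }; // stubbed result for illustration
  },
};

// Branch on each documented status before using the model.
async function describeAvailability(): Promise<string> {
  const { status } = await LocalLLM.systemAvailability();
  switch (status) {
    case 'available':    return 'Model is ready to use.';
    case 'downloadable': return 'Model must be downloaded first.';
    case 'notready':     return 'Model is initializing; try again shortly.';
    case 'responding':   return 'Model is busy handling another request.';
    default:             return 'On-device LLM is unavailable on this device.';
  }
}
```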
### download()

```typescript
download() => Promise<void>
```
Downloads the on-device LLM model.
This method initiates the download of the LLM model when it's not already present on the device. Only available on Android.
Since: 1.0.0
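A sketch of the download-when-needed pattern (stubbed plugin again; on real devices `download()` is Android-only and may take a while):

```typescript
// In an app: import { LocalLLM } from '@capacitor/local-llm';
// The stub flips its own state so the flow can run anywhere.
const LocalLLM = {
  status: 'downloadable' as 'available' | 'downloadable',
  async systemAvailability() { return { status: this.status }; },
  async download(): Promise<void> { this.status = 'available'; }, // stub: instant; real download is long-running
};

// Ensure the model is present before prompting.
async function ensureModel(): Promise<boolean> {
  const { status } = await LocalLLM.systemAvailability();
  if (status === 'downloadable') {
    await LocalLLM.download();
  }
  return (await LocalLLM.systemAvailability()).status === 'available';
}
```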
### prompt(...)

```typescript
prompt(options: PromptOptions) => Promise<PromptResponse>
```
Sends a prompt to the on-device LLM and receives a response.
Use this method to interact with the LLM. You can optionally provide a sessionId to maintain conversation context across multiple prompts.
| Param | Type | Description |
|---|---|---|
| `options` | `PromptOptions` | The prompt options including the text prompt and optional configuration |
Returns: `Promise<PromptResponse>`
Since: 1.0.0
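A sketch of a prompt call that uses all of the documented `PromptOptions` fields (the stub simply echoes the prompt so the flow can run off-device):

```typescript
// In an app: import { LocalLLM } from '@capacitor/local-llm';
interface PromptOptions {
  prompt: string;
  sessionId?: string;
  instructions?: string;
  options?: { temperature?: number; maximiumOutputTokens?: number }; // "maximium" typo is part of the API
}
// Stub: echoes the prompt back.
const LocalLLM = {
  async prompt(opts: PromptOptions): Promise<{ text: string }> {
    return { text: `echo: ${opts.prompt}` };
  },
};

async function ask(): Promise<string> {
  const sessionId = 'chat-1'; // reuse the same id across calls to keep conversation context
  const { text } = await LocalLLM.prompt({
    sessionId,
    instructions: 'You are a concise assistant.',
    options: { temperature: 0.2, maximiumOutputTokens: 256 },
    prompt: 'Summarize Capacitor in one sentence.',
  });
  return text;
}
```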
### endSession(...)

```typescript
endSession(options: EndSessionOptions) => Promise<void>
```
Ends an active LLM session.
Use this method to clean up resources when you're done with a conversation session. This is important for managing memory and preventing resource leaks.
| Param | Type | Description |
|---|---|---|
| `options` | `EndSessionOptions` | The options containing the sessionId to end |

Since: 1.0.0
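Since sessions hold native resources, a `try`/`finally` block is a reasonable way to guarantee cleanup. A sketch with a stubbed plugin (the real calls come from `@capacitor/local-llm`):

```typescript
// In an app: import { LocalLLM } from '@capacitor/local-llm';
// Stub tracks open sessions so the cleanup can be observed.
const sessions = new Set<string>();
const LocalLLM = {
  async prompt(o: { sessionId?: string; prompt: string }) {
    if (o.sessionId) sessions.add(o.sessionId);
    return { text: 'ok' };
  },
  async endSession(o: { sessionId: string }): Promise<void> {
    sessions.delete(o.sessionId);
  },
};

// Returns the number of sessions still open afterwards (should be 0).
async function oneShotConversation(): Promise<number> {
  const sessionId = 'chat-42';
  try {
    await LocalLLM.prompt({ sessionId, prompt: 'Hello' });
    await LocalLLM.prompt({ sessionId, prompt: 'Tell me more' });
  } finally {
    // Always release the session so native resources are freed, even on error.
    await LocalLLM.endSession({ sessionId });
  }
  return sessions.size;
}
```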
### generateImage(...)

```typescript
generateImage(options: GenerateImageOptions) => Promise<GenerateImageResponse>
```
Generates images from a text prompt using the on-device LLM.
Use this method to create images based on text descriptions. Optionally provide reference images to influence the generation. The generated images are returned as base64-encoded PNG strings in an array.
| Param | Type | Description |
|---|---|---|
| `options` | `GenerateImageOptions` | The image generation options including the prompt, optional reference images, and count |
Returns: `Promise<GenerateImageResponse>`
Since: 1.0.0
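A sketch that requests two variations and adds the data URI prefix the response deliberately omits (stubbed plugin; the fixed base64 string stands in for real model output):

```typescript
// In an app: import { LocalLLM } from '@capacitor/local-llm';
// Stub returns a fixed base64 payload per requested image.
const LocalLLM = {
  async generateImage(o: { prompt: string; promptImages?: string[]; count?: number }) {
    const n = o.count ?? 1;
    return { pngBase64Images: Array.from({ length: n }, () => 'iVBORw0KGgo=') };
  },
};

// The response contains raw base64; prefix it before use in an <img> tag.
const toDataUri = (b64: string) => `data:image/png;base64,${b64}`;

async function makeThumbnails(): Promise<string[]> {
  const { pngBase64Images } = await LocalLLM.generateImage({
    prompt: 'A watercolor lighthouse at dusk',
    count: 2,
  });
  return pngBase64Images.map(toDataUri);
}
```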
## Interfaces

### SystemAvailabilityResponse

Response containing the system availability status of the on-device LLM.
| Prop | Type | Description | Since |
|---|---|---|---|
| `status` | `LLMAvailability` | The current availability status of the LLM. | 1.0.0 |
### PromptResponse

Response from the LLM after processing a prompt.
| Prop | Type | Description | Since |
|---|---|---|---|
| `text` | `string` | The text response generated by the LLM. | 1.0.0 |
### PromptOptions

Options for sending a prompt to the LLM.
| Prop | Type | Description | Since |
|---|---|---|---|
| `sessionId` | `string` | Optional session identifier for maintaining conversation context. Provide the same sessionId across multiple prompts to maintain context. If not provided, each prompt is treated as independent. | 1.0.0 |
| `instructions` | `string` | System-level instructions to guide the LLM's behavior. Use this to set the role, tone, or constraints for the LLM's responses. | 1.0.0 |
| `options` | `LLMOptions` | Configuration options for controlling LLM inference behavior. | 1.0.0 |
| `prompt` | `string` | The text prompt to send to the LLM. | 1.0.0 |
### LLMOptions

Configuration options for LLM inference behavior.
| Prop | Type | Description | Since |
|---|---|---|---|
| `temperature` | `number` | Controls randomness in the model's output. Higher values (e.g., 0.8) make output more random, while lower values (e.g., 0.2) make it more focused and deterministic. | 1.0.0 |
| `maximiumOutputTokens` | `number` | The maximum number of tokens to generate in the response. Note: This property name contains a typo ("maximium" instead of "maximum") but is kept for API consistency. | 1.0.0 |
### EndSessionOptions

Options for ending an active LLM session.
| Prop | Type | Description | Since |
|---|---|---|---|
| `sessionId` | `string` | The identifier of the session to end. This should match the sessionId used in previous prompt() calls. | 1.0.0 |
### GenerateImageResponse

Response containing the generated image data.
| Prop | Type | Description | Since |
|---|---|---|---|
| `pngBase64Images` | `string[]` | Array of generated images as base64-encoded PNG strings. Each string contains raw base64 data (without data URI prefix). To use in an img tag, prefix with 'data:image/png;base64,'. | 1.0.0 |
### GenerateImageOptions

Options for generating an image from a text prompt.
| Prop | Type | Description | Default | Since |
|---|---|---|---|---|
| `prompt` | `string` | The text prompt describing the image to generate. | | 1.0.0 |
| `promptImages` | `string[]` | Optional array of reference images to influence the generated output. Provide base64-encoded image strings (with or without data URI prefix) that will be used as visual context or inspiration for the image generation. This allows you to combine text and image concepts for more controlled output. | | 1.0.0 |
| `count` | `number` | The number of image variations to generate. Defaults to 1 if not specified. | `1` | 1.0.0 |
## Type Aliases

### LLMAvailability

Availability status of the on-device LLM.

```typescript
'available' | 'unavailable' | 'notready' | 'downloadable' | 'responding'
```