Generic LLM provider framework for macOS apps. Defines protocols and types — you bring the provider configs and API implementations.
| Type | Description |
|---|---|
| `LLMProvider` | Protocol: implement `sendStreaming` + `fetchModels` |
| `LLMProviderConfig` | Full config: endpoint, auth, model, capabilities |
| `LLMEndpoint` | Connection: chat URL, models URL, auth, headers, port |
| `LLMProviderKind` | Hosting: `cloudAPI`, `localServer`, `remoteServer`, `embedded`, `custom` |
| `LLMAPIProtocol` | Format: `anthropic`, `openAI`, `ollama`, `foundationModel`, `custom` |
| `LLMCapability` | Features: `streaming`, `tools`, `vision`, `caching`, `thinking`, `webSearch` |
| `LLMModelInfo` | Model: id, name, context window, capabilities |
| `LLMResponse` | Result: content blocks, stop reason, token counts |
| `LLMRegistry` | Registry: O(1) lookup, register/update/remove |
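Conformance centers on a small surface. A sketch of what the protocol might declare, inferred from the quick-start below — the exact declarations ship with the package, so treat these signatures as an approximation:

```swift
// Sketch only — the real protocol lives in AgentLLM.
protocol LLMProvider: AnyObject {
    var config: LLMProviderConfig { get }

    /// Stream a chat completion, emitting text deltas as they arrive.
    func sendStreaming(
        messages: [[String: Any]],
        activeGroups: Set<String>?,
        onDelta: @escaping @Sendable (String) -> Void
    ) async throws -> LLMResponse

    /// List the models the endpoint advertises.
    func fetchModels() async throws -> [LLMModelInfo]
}
```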
```swift
import AgentLLM

let myProvider = LLMProviderConfig(
    id: "my-llm",
    displayName: "My LLM",
    kind: .cloudAPI,
    apiProtocol: .openAI,
    endpoint: LLMEndpoint(
        chatURL: "https://api.example.com/v1/chat/completions",
        modelsURL: "https://api.example.com/v1/models"
    ),
    apiKey: "sk-...",
    model: "my-model-v1",
    capabilities: [.streaming, .tools, .systemPrompt]
)

LLMRegistry.shared.register(myProvider)
```

```swift
class MyLLMService: LLMProvider {
    var config: LLMProviderConfig
    var systemPrompt: String = ""
    var overrideSystemPrompt: String?
    var temperature: Double = 0.2
    var compactTools: Bool = false

    init(config: LLMProviderConfig) {
        self.config = config
    }

    func sendStreaming(
        messages: [[String: Any]],
        activeGroups: Set<String>?,
        onDelta: @escaping @Sendable (String) -> Void
    ) async throws -> LLMResponse {
        // Your API call here
    }

    func fetchModels() async throws -> [LLMModelInfo] {
        // Query config.endpoint.modelsURL here
        []
    }
}
```

| Kind | Auth | Example |
|---|---|---|
| `.cloudAPI` | API key required | Claude, OpenAI, DeepSeek |
| `.localServer` | Usually none | Ollama local, LM Studio |
| `.remoteServer` | Optional | vLLM, Ollama cloud |
| `.embedded` | None | Apple Intelligence |
| `.custom` | Varies | Hybrid setups |
| Protocol | Format | Used By |
|---|---|---|
| `.anthropic` | Messages API | Claude, LM Studio (Anthropic mode) |
| `.openAI` | Chat Completions | OpenAI, DeepSeek, HuggingFace, Z.ai, vLLM, LM Studio |
| `.ollama` | Ollama native | Ollama local + cloud |
| `.foundationModel` | Apple on-device | Apple Intelligence |
| `.custom` | App-defined | LM Studio Native, future providers |
```swift
LLMEndpoint.ollamaPort    // 11434
LLMEndpoint.lmStudioPort  // 1234
LLMEndpoint.vLLMPort      // 8000
```

Some providers support multiple API formats (e.g. LM Studio):
```swift
let lmStudio = LLMProviderConfig(
    id: "lmStudio",
    displayName: "LM Studio",
    kind: .localServer,
    apiProtocol: .openAI, // default
    endpoint: LLMEndpoint(
        chatURL: "http://localhost:1234/v1/chat/completions",
        modelsURL: "http://localhost:1234/v1/models",
        authHeader: "", authPrefix: "",
        defaultPort: LLMEndpoint.lmStudioPort
    ),
    supportedProtocols: [.openAI, .anthropic, .custom]
)
```

Switch protocols by changing `apiProtocol` and `endpoint.chatURL`.
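Switching the `lmStudio` config above into Anthropic mode might look like this — assuming `LLMProviderConfig` and `LLMEndpoint` are value types with settable fields, and with the `/v1/messages` path an assumption modeled on the Anthropic Messages API:

```swift
// Assumes LLMProviderConfig and LLMEndpoint are mutable value types.
var anthropicMode = lmStudio
anthropicMode.apiProtocol = .anthropic
// Path is an assumption, modeled on the Anthropic Messages API.
anthropicMode.endpoint.chatURL = "http://localhost:1234/v1/messages"
```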
- Zero hardcoded providers — the package has no provider-specific code
- App owns the config — URLs, keys, models, capabilities defined in your app
- Protocol-based — implement `LLMProvider` to add any LLM
- Modular — each type is independent, compose as needed
- Future-proof — add new providers without touching the package
- macOS 26+
- Swift 6.2+