` where semantic elements are appropriate
+- Missing `main`, `nav`, `header`, `footer` landmarks
+- Lists (`<ul>`, `<ol>`) not used for list content
+- Missing `lang` attribute on `<html>`
+
+**Impact:** Screen reader users cannot navigate efficiently
+
+**Remediation:**
+
+- Maintain logical heading order (don't skip levels)
+- Use semantic HTML5 elements
+- Add ARIA landmarks if semantic HTML not possible
+- Wrap list items in proper list elements
+- Add `lang` attribute: `<html lang="en">`
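A minimal sketch of the remediated structure (element choices are illustrative and depend on the actual page):

```html
<html lang="en">
  <body>
    <header>…site banner…</header>
    <nav aria-label="Primary">…</nav>
    <main>
      <h1>Page title</h1>
      <h2>Section heading</h2>
      <ul>
        <li>First item</li>
        <li>Second item</li>
      </ul>
    </main>
    <footer>…</footer>
  </body>
</html>
```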
+
+### Category 6: ARIA Usage (WCAG A - Medium)
+
+**Detection:**
+
+- ARIA attributes on semantic HTML (redundant)
+- Invalid ARIA attribute values
+- `aria-label` or `aria-labelledby` missing on custom components
+- `role="presentation"` misused
+- `aria-hidden="true"` on focusable elements
+
+**Impact:** Screen readers receive incorrect or confusing information
+
+**Remediation:**
+
+- Remove redundant ARIA on semantic HTML
+- Validate ARIA values against spec
+- Add proper labels to custom interactive components
+- Use `role="presentation"` only for layout tables/images
+- Ensure hidden elements are not focusable
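Illustrative snippets for these rules:

```html
<!-- Redundant: a native button already has role="button" -->
<button role="button">Save</button>
<!-- Better -->
<button>Save</button>

<!-- Custom widget: needs an accessible name and valid ARIA values -->
<div role="slider" tabindex="0" aria-label="Volume"
     aria-valuemin="0" aria-valuemax="100" aria-valuenow="50"></div>

<!-- Decorative content hidden from assistive tech must not be focusable -->
<span aria-hidden="true">★</span>
```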
+
+### Category 7: Media Accessibility (WCAG A - High)
+
+**Detection:**
+
+- `<video>` without captions/subtitles
+- `<audio>` without transcripts
+- Autoplay media without user control
+- Missing media controls
+
+**Impact:** Deaf/hard-of-hearing users cannot access audio content
+
+**Remediation:**
+
+- Add `<track>` elements for captions (WebVTT)
+- Provide transcript links for audio
+- Remove `autoplay` or add `muted` attribute
+- Ensure native controls are enabled or custom controls are accessible
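For example (file names are placeholders):

```html
<video controls>
  <source src="intro.mp4" type="video/mp4">
  <track kind="captions" src="intro.en.vtt" srclang="en" label="English">
</video>

<audio controls src="episode.mp3"></audio>
<a href="episode-transcript.html">Read the transcript</a>
```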
+
+### Category 8: Dynamic Content (WCAG A - Medium)
+
+**Detection:**
+
+- Content updates without `aria-live` regions
+- Focus not managed during route changes
+- Infinite scroll without keyboard alternatives
+- Loading states not announced
+
+**Impact:** Screen reader users miss dynamic updates
+
+**Remediation:**
+
+- Add `aria-live="polite"` for non-critical updates
+- Use `aria-live="assertive"` for critical updates
+- Manage focus on route/content changes
+- Provide "Load More" button alternative
+- Use `aria-busy="true"` during loading
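For example:

```html
<!-- Announced politely after the current screen reader utterance finishes -->
<div role="status" aria-live="polite">3 results loaded</div>

<!-- Interrupts immediately; reserve for critical errors -->
<div role="alert" aria-live="assertive">Session expired – please sign in again</div>

<!-- Region currently being refreshed -->
<section aria-busy="true">Loading…</section>
```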
+
+### Category 9: Mobile Accessibility (WCAG AA - Medium)
+
+**Detection:**
+
+- Touch targets < 44x44px
+- Viewport zoom disabled (`user-scalable=no`)
+- Horizontal scrolling required on mobile
+- Content not responsive to text resize
+
+**Impact:** Users with motor disabilities or low vision struggle on mobile
+
+**Remediation:**
+
+- Increase touch target sizes to 44x44px minimum
+- Remove viewport zoom restrictions
+- Implement responsive design
+- Test with 200% text zoom
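For example (selector names are placeholders):

```html
<!-- Allow pinch-zoom: no user-scalable=no, no maximum-scale cap -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<style>
  /* Comfortable touch targets */
  button,
  a.nav-link {
    min-width: 44px;
    min-height: 44px;
  }
</style>
```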
+
+### Category 10: Skip Links & Navigation (WCAG A - Medium)
+
+**Detection:**
+
+- Missing "Skip to main content" link
+- Skip links not keyboard accessible
+- Multiple navigation menus without labels
+- Breadcrumbs without proper markup
+
+**Impact:** Keyboard users must tab through navigation repeatedly
+
+**Remediation:**
+
+- Add skip link as first focusable element
+- Ensure skip link is visible on focus
+- Add `aria-label` to multiple `nav` elements
+- Use `<nav>` with proper ARIA for breadcrumbs
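A sketch combining these points (class names are illustrative; style `.skip-link` so it becomes visible on `:focus`):

```html
<body>
  <a class="skip-link" href="#main-content">Skip to main content</a>
  <nav aria-label="Primary">…</nav>
  <nav aria-label="Breadcrumb">
    <ol>
      <li><a href="/">Home</a></li>
      <li aria-current="page">Product</li>
    </ol>
  </nav>
  <main id="main-content">…</main>
</body>
```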
+
+## GitHub Issue Template
+
+```markdown
+## Accessibility Issue: [Brief Description]
+
+**WCAG Level:** [A/AA/AAA]
+**Severity:** [Critical/High/Medium/Low]
+**Category:** [Category Name]
+
+### Issue Description
+[Clear explanation of the accessibility violation and why it matters]
+
+### User Impact
+- **Affected Users:** [Blind/Low Vision/Deaf/Motor Disability/Cognitive/etc.]
+- **Severity:** [What functionality is blocked or degraded]
+
+### Violations Found
+
+#### File: `[path/to/file.jsx]`
+**Lines:** [line numbers]
+```[language]
+
+[problematic code snippet]
+```
+
+**Issue:** [Specific problem with this code]
+
+---
+### Recommended Fix
+```[language]
+
+[corrected code snippet]
+```
+
+**Changes Made:**
+1. [Specific change 1]
+2. [Specific change 2]
+
+---
+### Additional Instances
+[If multiple files affected, list them here]
+
+- `file1.jsx` (line 45)
+- `file2.tsx` (line 120)
+- `file3.html` (line 89)
+
+### Testing Instructions
+1. [Step-by-step testing with screen reader]
+2. [Keyboard navigation testing]
+3. [Color contrast verification]
+4. [Tool to use: WAVE, axe DevTools, Lighthouse]
+
+### Resources
+- [WCAG Success Criterion link]
+- [MDN documentation link]
+- [WebAIM article link]
+
+### Acceptance Criteria
+- [ ] Code updated per recommendations
+- [ ] Tested with screen reader ([specify: NVDA/JAWS/VoiceOver])
+- [ ] Keyboard navigation works as expected
+- [ ] Automated tests pass (Lighthouse/axe)
+- [ ] Manual testing completed
+
+---
+
+
+```
+
+## Commit Message Format
+
+When fixing accessibility issues:
+
+```
+fix(a11y): [Brief description of fix]
+
+- Add alt text to product images (Issue #123)
+- Implement keyboard navigation for modal
+- Meets WCAG [Level] [Criterion]
+
+WCAG: [Success Criterion Number]
+Severity: [Critical/High/Medium/Low]
+
+```
+
+## HTML Comment Marker Format
+
+```html
+<!-- a11y-issue: #[issue-number] [brief description] -->
+[code that needs fixing]
+<!-- /a11y-issue -->
+```
+
+For fixed issues:
+
+```html
+<!-- a11y-fixed: #[issue-number] [date fixed] -->
+[corrected code]
+<!-- /a11y-fixed -->
+```
+
+## Tools Integration
+
+### Required Tools
+
+- **GitHub API:** For issue creation and label management
+- **File System Access:** To scan and mark files
+
+### Recommended Testing Tools (reference in issues)
+
+- Chrome DevTools Lighthouse
+- axe DevTools browser extension
+- WAVE Web Accessibility Evaluation Tool
+- WebAIM Contrast Checker
+- Screen readers: NVDA (Windows), JAWS, VoiceOver (macOS/iOS)
+
+## Automated Checks
+
+Run the following automated checks during scan:
+
+1. Missing alt attributes on images
+2. Form inputs without labels
+3. Buttons/links without accessible names
+4. Heading hierarchy violations
+5. Missing ARIA labels on custom components
+6. Color contrast issues (if tools available)
+7. Missing lang attribute
+8. HTML validation errors
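Several of these checks can be approximated with the standard library alone. A heuristic sketch (illustrative only — real scans should still use axe or Lighthouse):

```python
from html.parser import HTMLParser


class A11yScanner(HTMLParser):
    """Collect simple, mechanically detectable accessibility issues."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        line, _ = self.getpos()
        if tag == "img" and "alt" not in a:
            self.issues.append((line, "img missing alt attribute"))
        if tag == "input" and a.get("type") not in ("hidden", "submit", "button"):
            # crude heuristic: a labelled input usually has an id
            # (for <label for="…">) or an aria-label
            if "id" not in a and "aria-label" not in a:
                self.issues.append((line, "input possibly missing label"))
        if tag == "html" and "lang" not in a:
            self.issues.append((line, "html missing lang attribute"))


def scan(html: str):
    scanner = A11yScanner()
    scanner.feed(html)
    return scanner.issues
```

Heading-hierarchy and contrast checks need more state (and rendered styles) and are deliberately left out of this sketch.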
+
+## Reporting
+
+After completing the scan, create a summary comment in the weekly digest (if available) or as a standalone GitHub Discussion:
+
+```markdown
+# Accessibility Scan Results - [Date]
+
+## Summary
+- **Total Issues Found:** [number]
+- **Critical:** [number]
+- **High:** [number]
+- **Medium:** [number]
+- **Low:** [number]
+
+## Issues by WCAG Level
+- **Level A:** [number] issues
+- **Level AA:** [number] issues
+- **Level AAA:** [number] issues
+
+## New Issues Created
+[Links to GitHub issues]
+
+## Previously Tracked Issues
+[Status updates on existing accessibility issues]
+
+## Recommendations
+[Priority fixes based on severity and user impact]
+
+---
+
+
+```
+
+## Best Practices
+
+1. **Group Similar Issues:** Create one issue for multiple instances of the same problem
+2. **Prioritize Critical Path:** Focus on issues affecting core user journeys first
+3. **Provide Context:** Explain why each fix improves accessibility, not just what to change
+4. **Include Testing Steps:** Make fixes verifiable with specific testing instructions
+5. **Reference Standards:** Link to WCAG success criteria and documentation
+6. **Progressive Enhancement:** Suggest fixes that work across all browsers and assistive technologies
+
+## Notes
+
+- This agent focuses on **detectable** accessibility issues; manual testing with real assistive technologies is still required
+- Some issues (like semantic appropriateness) require human judgment
+- Color contrast can only be checked if you have access to rendered styles
+- Regular scans help catch regressions as code evolves
+- Consider running after major UI changes or before releases
\ No newline at end of file
diff --git a/.continue/checks/agentsmd-updater.md b/.continue/checks/agentsmd-updater.md
new file mode 100644
index 0000000000..b22f5b4a53
--- /dev/null
+++ b/.continue/checks/agentsmd-updater.md
@@ -0,0 +1,5 @@
+---
+name: agentsmd-updater
+---
+
+You are maintaining the project's AGENTS.md file. Review the pull request and identify new build steps, scripts, directory changes, dependencies, environment variables, architectures, code style rules, or workflows that an AI coding agent should know. Compare these findings with the existing AGENTS.md and update the file so it stays accurate, complete, and practical for automated agents. Keep the structure clean and keep explanations brief. If the file is missing you should create one. Do not modify any other file.
\ No newline at end of file
diff --git a/.continue/checks/improve-test-coverage.md b/.continue/checks/improve-test-coverage.md
new file mode 100644
index 0000000000..0dd1fcbadd
--- /dev/null
+++ b/.continue/checks/improve-test-coverage.md
@@ -0,0 +1,8 @@
+---
+name: Improve Test Coverage
+description: Adds missing tests to improve coverage
+---
+
+Run tests for this repo with coverage reporting (e.g. for vitest: `npx vitest run --coverage | head -n 50`). Pick a file that is under-tested and add tests.
+
+Focus on unit, integration, and other backend-esque tests. Only test client components if many other components in the repo are tested. Don't add tests for test files or DB entities/models.
\ No newline at end of file
diff --git a/.continue/rules/CONTINUE.md b/.continue/rules/CONTINUE.md
new file mode 100644
index 0000000000..d369bba3f8
--- /dev/null
+++ b/.continue/rules/CONTINUE.md
@@ -0,0 +1,444 @@
+# llama-cpp-python – Project Guide
+
+## 1. Project Overview
+
+**llama-cpp-python** is a Python binding for [`llama.cpp`](https://github.com/ggerganov/llama.cpp), enabling efficient local inference of large language models (LLMs) in GGUF format directly from Python.
+
+### Key Technologies
+
+| Layer | Technology |
+|---|---|
+| Core inference engine | `llama.cpp` (C/C++, vendored as a git submodule in `vendor/llama.cpp`) |
+| Python–C bridge | `ctypes` (no Cython / pybind11 required) |
+| Build system | CMake + [scikit-build-core](https://scikit-build-core.readthedocs.io/) |
+| Web server | FastAPI + Uvicorn (OpenAI-compatible REST API) |
+| Testing | pytest |
+| Linting / formatting | [Ruff](https://docs.astral.sh/ruff/) |
+| Documentation | MkDocs + mkdocstrings |
+| Python versions | 3.8 – 3.13 |
+
+### High-level Architecture
+
+```
+llama-cpp-python
+├── vendor/llama.cpp ← upstream C++ inference engine (git submodule)
+├── CMakeLists.txt ← builds llama.cpp shared libraries
+├── llama_cpp/ ← Python package
+│ ├── llama_cpp.py ← low-level ctypes bindings (mirrors llama.h)
+│ ├── llama.py ← high-level Llama class
+│ ├── llama_chat_format.py ← chat template handling
+│ ├── llama_grammar.py ← grammar / constrained generation
+│ ├── llama_cache.py ← KV-cache helpers
+│ ├── llama_speculative.py ← speculative decoding
+│ ├── llama_tokenizer.py ← HuggingFace tokenizer bridge
+│ ├── llava_cpp.py ← LLaVA multimodal C bindings
+│ └── server/ ← OpenAI-compatible HTTP server
+└── tests/ ← pytest test suite
+```
+
+---
+
+## 2. Getting Started
+
+### Prerequisites
+
+- **Python 3.8+**
+- A C/C++ compiler:
+ - Linux: `gcc` or `clang`
+ - Windows: Visual Studio or MinGW (`w64devkit`)
+ - macOS: Xcode command-line tools
+- **CMake ≥ 3.21** (installed automatically by scikit-build-core if needed)
+- *(Optional)* CUDA toolkit, ROCm, or Xcode for GPU-accelerated builds
+
+### Installation
+
+```bash
+# Basic CPU install (builds llama.cpp from source)
+pip install llama-cpp-python
+
+# Pre-built CPU wheel (faster, no compiler needed)
+pip install llama-cpp-python \
+ --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
+
+# GPU-accelerated builds (set CMAKE_ARGS before installing)
+CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python # CUDA
+CMAKE_ARGS="-DGGML_METAL=on" pip install llama-cpp-python # macOS Metal
+CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python # Vulkan
+
+# Install with the optional HTTP server extras
+pip install "llama-cpp-python[server]"
+
+# Install all extras (server + dev + test)
+pip install "llama-cpp-python[all]"
+```
+
+> **Reinstalling after a change** – add `--upgrade --force-reinstall --no-cache-dir` to rebuild from scratch.
+
+### Basic Usage
+
+```python
+from llama_cpp import Llama
+
+# Load a GGUF model
+llm = Llama(model_path="./models/llama-model.gguf")
+
+# Text completion
+output = llm("Q: What is 2+2? A:", max_tokens=16, stop=["\n"])
+print(output["choices"][0]["text"])
+
+# Pull a model directly from Hugging Face Hub
+llm = Llama.from_pretrained(
+ repo_id="lmstudio-community/Qwen3.5-0.8B-GGUF",
+ filename="*Q8_0.gguf",
+)
+
+# Chat completion (OpenAI-style)
+response = llm.create_chat_completion(
+ messages=[{"role": "user", "content": "Hello!"}]
+)
+print(response["choices"][0]["message"]["content"])
+```
+
+### Running the HTTP Server
+
+```bash
+# Start the OpenAI-compatible server
+python3 -m llama_cpp.server --model path/to/model.gguf
+
+# With explicit chat format
+python3 -m llama_cpp.server --model path/to/model.gguf --chat_format chatml
+
+# All options
+python3 -m llama_cpp.server --help
+```
+
+The server exposes standard OpenAI endpoints (e.g. `/v1/completions`, `/v1/chat/completions`, `/v1/embeddings`) and an interactive Swagger UI at `http://localhost:8000/docs`.
+
+### Running Tests
+
+```bash
+# Install test dependencies first
+pip install "llama-cpp-python[test]"
+
+# Run the full test suite
+make test
+# or directly
+python3 -m pytest --full-trace -v
+```
+
+---
+
+## 3. Project Structure
+
+```
+llama-cpp-python/
+├── .continue/rules/ ← Continue AI project rules (this file)
+├── .github/ ← CI workflows, issue / PR templates
+├── CMakeLists.txt ← Top-level CMake build for llama.cpp shared libs
+├── Makefile ← Developer convenience targets
+├── pyproject.toml ← Package metadata, build config, tool settings
+├── mkdocs.yml ← Documentation site config
+├── docs/ ← MkDocs markdown sources
+│ ├── index.md
+│ ├── server.md ← Server usage & configuration guide
+│ ├── api-reference.md
+│ └── install/macos.md
+├── docker/ ← Dockerfile examples
+├── examples/ ← Usage examples
+│ ├── high_level_api/
+│ ├── low_level_api/
+│ ├── gradio_chat/
+│ ├── batch-processing/
+│ ├── hf_pull/
+│ ├── ray/ ← Distributed inference with Ray
+│ └── notebooks/
+├── llama_cpp/ ← Main Python package
+│ ├── __init__.py ← Public API surface + version string
+│ ├── llama_cpp.py ← Low-level ctypes C-API bindings
+│ ├── llava_cpp.py ← LLaVA / vision C-API bindings
+│ ├── mtmd_cpp.py ← Multimodal C-API bindings
+│ ├── llama.py ← High-level Llama class
+│ ├── llama_cache.py ← DiskCache / RAM KV-cache wrappers
+│ ├── llama_chat_format.py ← Chat format registry & handlers
+│ ├── llama_grammar.py ← GBNF grammar & JSON schema support
+│ ├── llama_speculative.py ← Speculative decoding helpers
+│ ├── llama_tokenizer.py ← HF tokenizer integration
+│ ├── llama_types.py ← Pydantic / TypedDict response types
+│ ├── _ctypes_extensions.py ← ctypes helpers
+│ ├── _ggml.py ← GGML tensor type constants
+│ ├── _internals.py ← Internal C-struct wrappers
+│ ├── _logger.py ← Logging configuration
+│ ├── _utils.py ← Shared utilities
+│ └── server/ ← OpenAI-compatible HTTP server
+│ ├── __main__.py ← Entry point (`python -m llama_cpp.server`)
+│ ├── app.py ← FastAPI app factory
+│ ├── cli.py ← CLI argument parsing
+│ ├── model.py ← Per-request model management
+│ ├── settings.py ← Pydantic settings (ServerSettings, ModelSettings)
+│ ├── types.py ← OpenAI request/response Pydantic models
+│ └── errors.py ← HTTP error handlers
+├── scripts/ ← Helper scripts (release, etc.)
+├── tests/ ← pytest test suite
+│ ├── test_llama.py
+│ ├── test_llama_chat_format.py
+│ ├── test_llama_grammar.py
+│ └── test_llama_speculative.py
+└── vendor/llama.cpp/ ← llama.cpp C++ source (git submodule)
+```
+
+### Key Configuration Files
+
+| File | Purpose |
+|---|---|
+| `pyproject.toml` | Package metadata, scikit-build-core options, Ruff & pytest config |
+| `CMakeLists.txt` | Builds `libllama` and related shared libraries from `vendor/llama.cpp` |
+| `Makefile` | Convenience targets: `build`, `test`, `lint`, `format`, `docker` |
+| `.gitmodules` | Points `vendor/llama.cpp` to upstream `ggerganov/llama.cpp` |
+| `mkdocs.yml` | Documentation site structure |
+
+---
+
+## 4. Development Workflow
+
+### Setting Up a Dev Environment
+
+```bash
+git clone --recurse-submodules https://github.com/abetlen/llama-cpp-python.git
+cd llama-cpp-python
+pip install ".[all]" # installs in editable mode with all extras
+# or use the Makefile shortcut:
+make deps
+make build
+```
+
+### Coding Standards
+
+- **Linter / formatter**: [Ruff](https://docs.astral.sh/ruff/) (target Python 3.8).
+- Line length is **88 characters**.
+- Run checks before committing:
+
+```bash
+make lint # check for errors
+make format # auto-fix and reformat
+```
+
+### Testing
+
+```bash
+make test
+# Tests live in tests/; pytest config is in pyproject.toml [tool.pytest.ini_options]
+```
+
+Most tests require an actual GGUF model file at a path provided via environment variables; see `tests/test_llama.py` for the expected variables.
+
+### Build Variants
+
+```bash
+make build # standard CPU build (editable install)
+make build.debug # debug symbols, no optimisation
+make build.cuda # CUDA GPU build
+make build.metal # macOS Metal build
+make build.openblas # OpenBLAS CPU BLAS build
+make build.vulkan # Vulkan build
+```
+
+### Updating the vendored llama.cpp
+
+```bash
+make update.vendor # pulls latest master of llama.cpp
+git add vendor/llama.cpp
+git commit -m "chore: bump llama.cpp to <commit-sha>"
+```
+
+### Documentation
+
+```bash
+mkdocs serve # live-reload local preview
+make deploy.gh-docs # build & push to GitHub Pages
+```
+
+### Release / Publishing
+
+```bash
+make build.sdist # create source distribution
+make deploy.pypi # upload to PyPI with twine
+```
+
+---
+
+## 5. Key Concepts
+
+### GGUF Format
+Models must be in the [GGUF](https://huggingface.co/docs/hub/gguf) file format — the successor to GGML. GGUF encodes weights, tokenizer, and metadata in a single file and supports various quantisation levels (e.g. Q4_0, Q8_0, F16).
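A GGUF file can be identified from its fixed-size header alone. A minimal parsing sketch (field layout per the GGUF v3 spec: 4-byte magic `GGUF`, uint32 version, uint64 tensor count, uint64 metadata KV count, all little-endian):

```python
import struct


def read_gguf_header(buf: bytes) -> dict:
    """Parse the fixed-size GGUF header from the first 24 bytes of a file."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", buf, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "n_tensors": n_tensors, "n_kv": n_kv}
```

In practice llama.cpp does this parsing for you; the sketch only shows what "GGUF format" means at the byte level.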
+
+### ctypes Bindings (`llama_cpp.py` / `llava_cpp.py`)
+The low-level layer wraps `libllama` with pure Python `ctypes`. Every exported C function is declared with its argument types and return type. Consumers of this layer work directly with C structs and pointers.
+
+### High-level `Llama` Class (`llama.py`)
+Manages model loading, context creation, sampling, and streaming. Exposes OpenAI-compatible methods:
+- `__call__` / `create_completion` – text completion
+- `create_chat_completion` / `create_chat_completion_openai_v1` – chat
+- `create_embedding` – embeddings
+- `from_pretrained` – download + load from Hugging Face Hub
+
+### Chat Formats (`llama_chat_format.py`)
+A registry of named chat templates (`chatml`, `llama-2`, `gemma`, `mistral`, `functionary-v2`, etc.). Each format converts a list of messages into a single prompt string and handles stop tokens. Custom formats can be registered with `@register_chat_format`.
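To illustrate what a format handler does, here is a rough sketch of ChatML-style rendering (the real handlers in `llama_chat_format.py` also deal with system prompts, stop tokens, and function calling):

```python
def chatml_prompt(messages: list[dict]) -> str:
    """Render OpenAI-style messages as a ChatML prompt (illustrative sketch)."""
    parts = []
    for m in messages:
        # each turn is wrapped in <|im_start|>role … <|im_end|> markers
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # leave an open assistant turn for the model to complete
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)
```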
+
+### Grammar / Constrained Generation (`llama_grammar.py`)
+Supports GBNF (grammar-based constrained generation) to enforce structured output (e.g. valid JSON, JSON Schema). Pass `grammar` or `response_format` to `create_chat_completion`.
+
+### KV Cache (`llama_cache.py`)
+Optional caching of key-value pairs across calls. Supports in-memory (`LlamaRAMCache`) and disk-persistent (`LlamaDiskCache`) backends.
+
+### Speculative Decoding (`llama_speculative.py`)
+Accelerates generation using a smaller draft model to propose tokens that the target model verifies in parallel.
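The draft-and-verify idea in miniature, using plain functions as stand-in "models" (a toy sketch, not the actual API):

```python
def speculative_step(target, draft, prefix, k=4):
    """One round of speculative decoding over integer 'tokens'.

    target/draft are callables mapping a token list to the next token.
    """
    # 1) the cheap draft model proposes k tokens autoregressively
    proposed = []
    ctx = list(prefix)
    for _ in range(k):
        t = draft(ctx)
        proposed.append(t)
        ctx.append(t)

    # 2) the target verifies; keep the longest agreeing prefix,
    #    then emit one corrected token from the target itself
    accepted = []
    ctx = list(prefix)
    for t in proposed:
        if target(ctx) == t:
            accepted.append(t)
            ctx.append(t)
        else:
            break
    accepted.append(target(ctx))
    return accepted
```

When the draft agrees often, several tokens are accepted per expensive target evaluation, which is where the speed-up comes from.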
+
+### OpenAI-Compatible Server (`llama_cpp/server/`)
+A FastAPI application that implements the OpenAI REST API, allowing drop-in replacement for OpenAI clients. Supports multi-model configurations via a YAML/JSON config file.
+
+---
+
+## 6. Common Tasks
+
+### Load a Model with GPU Acceleration
+
+```python
+llm = Llama(
+ model_path="model.gguf",
+ n_gpu_layers=-1, # offload all layers to GPU
+ n_ctx=4096, # context window size
+)
+```
+
+### Stream a Chat Completion
+
+```python
+for chunk in llm.create_chat_completion(
+ messages=[{"role": "user", "content": "Tell me a joke"}],
+ stream=True,
+):
+ delta = chunk["choices"][0]["delta"]
+ if "content" in delta:
+ print(delta["content"], end="", flush=True)
+```
+
+### Constrain Output to a JSON Schema
+
+```python
+llm.create_chat_completion(
+ messages=[{"role": "user", "content": "Give me a user object"}],
+ response_format={
+ "type": "json_object",
+ "schema": {
+ "type": "object",
+ "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
+ "required": ["name", "age"],
+ },
+ },
+)
+```
+
+### Use the Low-level C API
+
+```python
+from llama_cpp import llama_cpp # ctypes bindings module
+
+# Example: list available model metadata
+ctx_params = llama_cpp.llama_context_default_params()
+```
+
+### Run Multi-model Server with Config File
+
+Create a `config.yaml`:
+
+```yaml
+models:
+ - model: /path/to/model1.gguf
+ model_alias: "llama3"
+ chat_format: chatml
+ - model: /path/to/model2.gguf
+ model_alias: "mistral"
+ chat_format: mistral
+```
+
+```bash
+python3 -m llama_cpp.server --config_file config.yaml
+```
+
+### Add a Custom Chat Format
+
+```python
+from llama_cpp import llama_chat_format
+
+@llama_chat_format.register_chat_format("my-format")
+def my_format(messages, **kwargs):
+ # Build and return a ChatFormatterResponse
+ ...
+```
+
+---
+
+## 7. Troubleshooting
+
+### Build fails: `Can't find 'nmake'` or `CMAKE_C_COMPILER` (Windows)
+
+Add MinGW to the path and set the generator:
+
+```powershell
+$env:CMAKE_GENERATOR = "MinGW Makefiles"
+$env:CMAKE_ARGS = "-DCMAKE_C_COMPILER=C:/w64devkit/bin/gcc.exe -DCMAKE_CXX_COMPILER=C:/w64devkit/bin/g++.exe"
+pip install llama-cpp-python
+```
+
+### macOS: `incompatible architecture (have 'x86_64', need 'arm64')`
+
+Force an arm64 build:
+
+```bash
+CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DGGML_METAL=on" \
+ pip install --upgrade --force-reinstall --no-cache-dir llama-cpp-python
+```
+
+### Slow inference on Apple Silicon
+
+Ensure you are using an `arm64` Python interpreter (e.g. via Miniforge/Mambaforge) and that Metal is enabled.
+
+### Debugging verbose build output
+
+```bash
+pip install --verbose llama-cpp-python
+# or use the Makefile target:
+make build.debug
+```
+
+### `libllama.so` / `.dylib` not found at runtime
+
+Run `make clean` then reinstall:
+
+```bash
+make clean
+pip install --force-reinstall -e .
+```
+
+### Tests fail: model file not found
+
+Most tests expect a GGUF model file. Check the environment variables in `tests/test_llama.py` and set the appropriate path before running.
+
+---
+
+## 8. References
+
+| Resource | URL |
+|---|---|
+| Official documentation | https://llama-cpp-python.readthedocs.io/en/latest/ |
+| API Reference | https://llama-cpp-python.readthedocs.io/en/latest/api-reference/ |
+| Server guide | https://llama-cpp-python.readthedocs.io/en/latest/server/ |
+| macOS install guide | https://llama-cpp-python.readthedocs.io/en/latest/install/macos/ |
+| Changelog | https://llama-cpp-python.readthedocs.io/en/latest/changelog/ |
+| PyPI package | https://pypi.org/project/llama-cpp-python/ |
+| GitHub repository | https://github.com/abetlen/llama-cpp-python |
+| upstream llama.cpp | https://github.com/ggerganov/llama.cpp |
+| GGUF model format | https://huggingface.co/docs/hub/gguf |
+| Hugging Face Hub | https://huggingface.co/models?library=gguf |
+| LangChain integration | https://python.langchain.com/docs/integrations/llms/llamacpp |
+| LlamaIndex integration | https://docs.llamaindex.ai/en/stable/examples/llm/llama_2_llama_cpp.html |
diff --git a/.github/workflows/codeql.yml b/.github/workflows/codeql.yml
new file mode 100644
index 0000000000..ebbb37ee58
--- /dev/null
+++ b/.github/workflows/codeql.yml
@@ -0,0 +1,101 @@
+# For most projects, this workflow file will not need changing; you simply need
+# to commit it to your repository.
+#
+# You may wish to alter this file to override the set of languages analyzed,
+# or to provide custom queries or build logic.
+#
+# ******** NOTE ********
+# We have attempted to detect the languages in your repository. Please check
+# the `language` matrix defined below to confirm you have the correct set of
+# supported CodeQL languages.
+#
+name: "CodeQL Advanced"
+
+on:
+ push:
+ branches: [ "main" ]
+ pull_request:
+ branches: [ "main" ]
+ schedule:
+ - cron: '40 8 * * 6'
+
+jobs:
+ analyze:
+ name: Analyze (${{ matrix.language }})
+ # Runner size impacts CodeQL analysis time. To learn more, please see:
+ # - https://gh.io/recommended-hardware-resources-for-running-codeql
+ # - https://gh.io/supported-runners-and-hardware-resources
+ # - https://gh.io/using-larger-runners (GitHub.com only)
+ # Consider using larger runners or machines with greater resources for possible analysis time improvements.
+ runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
+ permissions:
+ # required for all workflows
+ security-events: write
+
+ # required to fetch internal or private CodeQL packs
+ packages: read
+
+ # only required for workflows in private repositories
+ actions: read
+ contents: read
+
+ strategy:
+ fail-fast: false
+ matrix:
+ include:
+ - language: actions
+ build-mode: none
+ - language: python
+ build-mode: none
+ # CodeQL supports the following values for 'language': 'actions', 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'rust', 'swift'
+ # Use `c-cpp` to analyze code written in C, C++ or both
+ # Use 'java-kotlin' to analyze code written in Java, Kotlin or both
+ # Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
+ # To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
+ # see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
+ # If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
+ # your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
+ steps:
+ - name: Checkout repository
+ uses: actions/checkout@v4
+
+ # Add any setup steps before running the `github/codeql-action/init` action.
+ # This includes steps like installing compilers or runtimes (`actions/setup-node`
+ # or others). This is typically only required for manual builds.
+ # - name: Setup runtime (example)
+ # uses: actions/setup-example@v1
+
+ # Initializes the CodeQL tools for scanning.
+ - name: Initialize CodeQL
+ uses: github/codeql-action/init@v4
+ with:
+ languages: ${{ matrix.language }}
+ build-mode: ${{ matrix.build-mode }}
+ # If you wish to specify custom queries, you can do so here or in a config file.
+ # By default, queries listed here will override any specified in a config file.
+ # Prefix the list here with "+" to use these queries and those in the config file.
+
+ # For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
+ # queries: security-extended,security-and-quality
+
+ # If the analyze step fails for one of the languages you are analyzing with
+ # "We were unable to automatically build your code", modify the matrix above
+ # to set the build mode to "manual" for that language. Then modify this step
+ # to build your code.
+ # ℹ️ Command-line programs to run using the OS shell.
+ # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
+ - name: Run manual build steps
+ if: matrix.build-mode == 'manual'
+ shell: bash
+ run: |
+ echo 'If you are using a "manual" build mode for one or more of the' \
+ 'languages you are analyzing, replace this with the commands to build' \
+ 'your code, for example:'
+ echo ' make bootstrap'
+ echo ' make release'
+ exit 1
+
+ - name: Perform CodeQL Analysis
+ uses: github/codeql-action/analyze@v4
+ with:
+ category: "/language:${{matrix.language}}"
diff --git a/.github/workflows/protect-branches.yml b/.github/workflows/protect-branches.yml
new file mode 100644
index 0000000000..3d5a772ef1
--- /dev/null
+++ b/.github/workflows/protect-branches.yml
@@ -0,0 +1,55 @@
+name: Protect Important Branches
+
+on:
+ workflow_dispatch:
+
+permissions:
+ administration: write
+
+jobs:
+ create-ruleset:
+ runs-on: ubuntu-latest
+ steps:
+ - name: Create branch ruleset for main
+ uses: actions/github-script@v7
+ with:
+ script: |
+ try {
+ const { data: ruleset } = await github.request(
+ 'POST /repos/{owner}/{repo}/rulesets',
+ {
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ name: 'Protect main branch',
+ target: 'branch',
+ enforcement: 'active',
+ conditions: {
+ ref_name: {
+ include: ['refs/heads/main'],
+ exclude: [],
+ },
+ },
+ rules: [
+ { type: 'deletion' },
+ { type: 'non_fast_forward' },
+ { type: 'required_linear_history' },
+ {
+ type: 'required_status_checks',
+ parameters: {
+ strict_required_status_checks_policy: false,
+ do_not_enforce_on_create: false,
+ required_status_checks: [
+ { context: 'ruff' },
+ ],
+ },
+ },
+ ],
+ }
+ );
+ console.log(`Ruleset created: ${ruleset.name} (id: ${ruleset.id})`);
+ } catch (error) {
+ core.setFailed(
+ `Failed to create ruleset: ${error.message}. ` +
+ 'Ensure the token has administration:write permission and no duplicate ruleset exists.'
+ );
+ }
diff --git a/SECURITY.md b/SECURITY.md
new file mode 100644
index 0000000000..034e848032
--- /dev/null
+++ b/SECURITY.md
@@ -0,0 +1,21 @@
+# Security Policy
+
+## Supported Versions
+
+Use this section to tell people about which versions of your project are
+currently being supported with security updates.
+
+| Version | Supported |
+| ------- | ------------------ |
+| 5.1.x | :white_check_mark: |
+| 5.0.x | :x: |
+| 4.0.x | :white_check_mark: |
+| < 4.0 | :x: |
+
+## Reporting a Vulnerability
+
+Use this section to tell people how to report a vulnerability.
+
+Tell them where to go, how often they can expect to get an update on a
+reported vulnerability, what to expect if the vulnerability is accepted or
+declined, etc.
diff --git a/docs/server.md b/docs/server.md
index 9c09a1f1cf..7c7528d298 100644
--- a/docs/server.md
+++ b/docs/server.md
@@ -25,12 +25,21 @@ python3 -m llama_cpp.server --model
You can also pass chat-template kwargs at model load time from the CLI:
```bash
+# Linux / macOS (bash)
python3 -m llama_cpp.server \
--model \
--chat_format chatml \
--chat_template_kwargs '{"enable_thinking": true}'
```
+```powershell
+# Windows (PowerShell) – use a backtick ` for line continuation, not a backslash \
+python -m llama_cpp.server `
+  --model <model_path> `
+ --chat_format chatml `
+ --chat_template_kwargs '{\"enable_thinking\": true}'
+```
+
### Server options
For a full list of options, run:
diff --git a/examples/high_level_api/legion_slim5_rtx4060.py b/examples/high_level_api/legion_slim5_rtx4060.py
new file mode 100644
index 0000000000..90a00454cf
--- /dev/null
+++ b/examples/high_level_api/legion_slim5_rtx4060.py
@@ -0,0 +1,223 @@
+"""
+Optimized llama-cpp-python configuration for:
+ Lenovo Legion Slim 5 (16" RH8)
+ - CPU: Intel Core i7-13700H (6P + 8E cores)
+ - GPU: NVIDIA GeForce RTX 4060 Laptop (8 GB VRAM, GDDR6)
+ - RAM: 16 GB DDR5-5200
+ - SSD: 1 TB NVMe
+
+Install with CUDA support first:
+
+ Bash / Linux / macOS:
+ CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --force-reinstall --no-cache-dir
+
+ PowerShell (Windows):
+ $env:CMAKE_ARGS = "-DGGML_CUDA=on"
+ python -m pip install llama-cpp-python --force-reinstall --no-cache-dir
+
+ Tip (Windows): install into a virtual environment to avoid dependency conflicts
+ with other tools in your global environment:
+ python -m venv .venv-llama
+ ./.venv-llama/Scripts/Activate.ps1
+ $env:CMAKE_ARGS = "-DGGML_CUDA=on"
+ python -m pip install llama-cpp-python --force-reinstall --no-cache-dir
+"""
+
+import argparse
+import json
+import os
+import sys
+
+from llama_cpp import Llama
+
+# ---------------------------------------------------------------------------
+# Hardware constants for this machine
+# ---------------------------------------------------------------------------
+VRAM_GB = 8 # RTX 4060 Laptop VRAM
+N_PHYSICAL_CORES = 6 # P-cores only (best single-thread perf on i7-13700H)
+
+# ---------------------------------------------------------------------------
+# Recommended quantisation levels (pick one based on your model size)
+# ---------------------------------------------------------------------------
+# Model 7B / 8B:
+# Q5_K_M → ~5.5 GB VRAM ✅ recommended
+# Q6_K → ~6.5 GB VRAM ✅ excellent quality
+# Q8_0 → ~8.5 GB VRAM ⚠️ tight fit, may spill to CPU RAM
+#
+# Model 13B:
+# Q4_K_M → ~7.5 GB VRAM ✅ fits
+# Q5_K_M → ~9.0 GB VRAM ❌ exceeds VRAM
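+#
+# Rough rule of thumb behind these numbers (a sketch, not exact): a model with
+# P billion parameters at roughly B bits per weight needs about P * B / 8 GB
+# for the weights alone, plus KV-cache and CUDA overhead. E.g. 7B at Q5_K_M
+# (~5.5 bits/weight): 7 * 5.5 / 8 ≈ 4.8 GB of weights, leaving ~3 GB of
+# headroom on this 8 GB card.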
+
+
+def build_llm(
+ model_path: str,
+ n_ctx: int = 4096,
+ n_gpu_layers: int = -1, # -1 = offload all layers to GPU
+ n_batch: int = 512,
+ verbose: bool = False,
+) -> Llama:
+ """
+ Create a Llama instance tuned for the Legion Slim 5 / RTX 4060 laptop.
+
+ Args:
+ model_path: Path to the .gguf model file.
+ n_ctx: Context window size (tokens). 4096 is safe for 8 GB VRAM.
+ n_gpu_layers: Number of transformer layers to offload to the GPU.
+ Use -1 to offload everything (default). Reduce if you
+ see CUDA out-of-memory errors.
+ n_batch: Batch size for prompt evaluation.
+ verbose: Print llama.cpp loading messages.
+
+ Returns:
+ A ready-to-use Llama instance.
+ """
+ return Llama(
+ model_path=model_path,
+ # --- GPU offload ---
+ n_gpu_layers=n_gpu_layers, # RTX 4060 has 8 GB – offload as much as fits
+ offload_kqv=True, # keep KV-cache on GPU for faster inference
+ # --- CPU threads ---
+ n_threads=N_PHYSICAL_CORES, # use P-cores only for best throughput
+ n_threads_batch=N_PHYSICAL_CORES,
+ # --- Context / batching ---
+ n_ctx=n_ctx,
+ n_batch=n_batch,
+ # --- Memory ---
+ use_mmap=True, # fast model loading from NVMe SSD
+ use_mlock=False, # don't pin 16 GB RAM – OS needs headroom
+ # --- Misc ---
+ verbose=verbose,
+ )
+
+
+def main() -> None:
+ parser = argparse.ArgumentParser(
+ description="Run inference optimised for the Lenovo Legion Slim 5 / RTX 4060"
+ )
+ parser.add_argument(
+ "-m", "--model",
+ required=True,
+ help="Path to the .gguf model file (e.g. mistral-7b-Q5_K_M.gguf)",
+ )
+ parser.add_argument(
+ "-p", "--prompt",
+ default="What are the names of the planets in the solar system?",
+ help="Prompt text",
+ )
+ parser.add_argument(
+ "--system-prompt",
+ default=None,
+ help="Optional system prompt prepended before the user prompt",
+ )
+ parser.add_argument(
+ "--max-tokens", type=int, default=256,
+ help="Maximum number of tokens to generate",
+ )
+ parser.add_argument(
+ "--n-ctx", type=int, default=4096,
+ help="Context window size",
+ )
+ parser.add_argument(
+ "--n-gpu-layers", type=int, default=-1,
+ help="GPU layers to offload (-1 = all)",
+ )
+ parser.add_argument(
+ "--seed", type=int, default=-1,
+ help="RNG seed for reproducible output (-1 = random)",
+ )
+ parser.add_argument(
+ "--temperature", type=float, default=0.8,
+ help="Sampling temperature (0.0 = greedy, higher = more creative)",
+ )
+ parser.add_argument(
+ "--top-p", type=float, default=0.95,
+ help="Nucleus sampling probability threshold",
+ )
+ parser.add_argument(
+ "--repeat-penalty", type=float, default=1.1,
+ help="Penalty applied to repeated tokens (1.0 = disabled)",
+ )
+ parser.add_argument(
+ "--json-output", action="store_true",
+ help="Print only raw JSON output (no banner); useful for piping",
+ )
+ parser.add_argument(
+ "--verbose", action="store_true",
+ help="Print llama.cpp loading messages",
+ )
+ args = parser.parse_args()
+
+ # --- Validate model path -------------------------------------------------
+ model_path = os.path.abspath(args.model)
+ if not os.path.isfile(model_path):
+ print(
+ f"ERROR: model file not found: {model_path}\n"
+ " Make sure the path is correct and the file exists.",
+ file=sys.stderr,
+ )
+ sys.exit(1)
+
+ if not args.json_output:
+ print(f"Loading model: {model_path}")
+ print(f"GPU layers : {'all' if args.n_gpu_layers == -1 else args.n_gpu_layers}")
+ print(f"Context size : {args.n_ctx} tokens\n")
+
+ # --- Load model ----------------------------------------------------------
+ try:
+ llm = build_llm(
+ model_path=model_path,
+ n_ctx=args.n_ctx,
+ n_gpu_layers=args.n_gpu_layers,
+ verbose=args.verbose,
+ )
+ except Exception as exc:
+ err = str(exc)
+ print(f"ERROR: failed to load model – {err}", file=sys.stderr)
+ if args.n_gpu_layers == -1 and (
+ "out of memory" in err.lower() or "cuda" in err.lower()
+ ):
+ print(
+ " Hint: GPU ran out of VRAM while loading all layers.\n"
+ " Try reducing --n-gpu-layers (e.g. --n-gpu-layers 28) to keep\n"
+ " some layers on CPU RAM instead.",
+ file=sys.stderr,
+ )
+ sys.exit(1)
+
+ # --- Build prompt --------------------------------------------------------
+ if args.system_prompt:
+ full_prompt = f"{args.system_prompt}\n\n{args.prompt}"
+ else:
+ full_prompt = args.prompt
+
+ # --- Run inference -------------------------------------------------------
+ try:
+ output = llm(
+ full_prompt,
+ max_tokens=args.max_tokens,
+ stop=["Q:", "\n\n"],
+ echo=True,
+ seed=args.seed,
+ temperature=args.temperature,
+ top_p=args.top_p,
+ repeat_penalty=args.repeat_penalty,
+ )
+ except Exception as exc:
+ err = str(exc)
+ print(f"ERROR: inference failed – {err}", file=sys.stderr)
+ if args.n_gpu_layers == -1 and (
+ "out of memory" in err.lower() or "cuda" in err.lower()
+ ):
+ print(
+ " Hint: GPU ran out of VRAM during inference.\n"
+ " Try reducing --n-gpu-layers (e.g. --n-gpu-layers 28) to keep\n"
+ " some layers on CPU RAM instead.",
+ file=sys.stderr,
+ )
+ sys.exit(1)
+
+ print(json.dumps(output, indent=2, ensure_ascii=False))
+
+
+if __name__ == "__main__":
+ main()
diff --git a/llama_cpp/server/app.py b/llama_cpp/server/app.py
index f776fe159c..253012db76 100644
--- a/llama_cpp/server/app.py
+++ b/llama_cpp/server/app.py
@@ -2,6 +2,7 @@
import os
import json
+import logging
import typing
import contextlib
@@ -17,6 +18,7 @@
from fastapi import Depends, FastAPI, APIRouter, Request, HTTPException, status, Body
from fastapi.middleware import Middleware
from fastapi.middleware.cors import CORSMiddleware
+from fastapi.middleware.httpsredirect import HTTPSRedirectMiddleware
from fastapi.security import HTTPBearer
from sse_starlette.sse import EventSourceResponse
from starlette_context.plugins import RequestIdPlugin # type: ignore
@@ -130,7 +132,18 @@ def create_app(
)
set_server_settings(server_settings)
+
+ logger = logging.getLogger(__name__)
+ ssl_enabled = bool(server_settings.ssl_keyfile and server_settings.ssl_certfile)
+ if not ssl_enabled:
+ logger.warning(
+ "SSL is not configured. The server is running over plain HTTP, "
+ "which is not secure. Pass --ssl_keyfile and --ssl_certfile to enable HTTPS."
+ )
+
middleware = [Middleware(RawContextMiddleware, plugins=(RequestIdPlugin(),))]
+ if ssl_enabled:
+ middleware.append(Middleware(HTTPSRedirectMiddleware))
app = FastAPI(
middleware=middleware,
title="🦙 llama.cpp Python API",
diff --git a/llama_cpp/server/settings.py b/llama_cpp/server/settings.py
index 3c2bb7fd07..722be80659 100644
--- a/llama_cpp/server/settings.py
+++ b/llama_cpp/server/settings.py
@@ -208,7 +208,7 @@ class ServerSettings(BaseSettings):
# Uvicorn Settings
host: str = Field(default="localhost", description="Listen address")
- port: int = Field(default=8000, description="Listen port")
+ port: int = Field(default=8080, description="Listen port")
ssl_keyfile: Optional[str] = Field(
default=None, description="SSL key file for HTTPS"
)