{
"project_name": "code2llm",
"description": "Generated Analysis Files",
"sections": [
{
"name": "contents",
"type": "unknown",
"content": "- [Metadata](#metadata)\n- [Architecture](#architecture)\n- [Interfaces](#interfaces)\n- [Workflows](#workflows)\n- [Quality Pipeline (`pyqual.yaml`)](#quality-pipeline-pyqualyaml)\n- [Configuration](#configuration)\n- [Dependencies](#dependencies)\n- [Deployment](#deployment)\n- [Environment Variables (`.env.example`)](#environment-variables-envexample)\n- [Release Management (`goal.yaml`)](#release-management-goalyaml)\n- [Makefile Targets](#makefile-targets)\n- [Code Analysis](#code-analysis)\n- [Source Map](#source-map)\n- [Call Graph](#call-graph)\n- [Intent](#intent)",
"level": 2
},
{
"name": "metadata",
"type": "metadata",
"content": "- **name**: `code2llm`\n- **version**: `0.5.141`\n- **python_requires**: `>=3.8`\n- **license**: Apache-2.0\n- **ai_model**: `openrouter/qwen/qwen3-coder-next`\n- **ecosystem**: SUMD + DOQL + testql + taskfile\n- **generated_from**: pyproject.toml, requirements.txt, Taskfile.yml, Makefile, app.doql.less, pyqual.yaml, goal.yaml, .env.example, src(5 mod), project/(2 analysis files)",
"level": 2
},
{
"name": "architecture",
"type": "architecture",
"content": "```\nSUMD (description) → DOQL/source (code) → taskfile (automation) → testql (verification)\n```",
"level": 2
},
{
"name": "# doql application declaration (`app.doql.less`)",
"type": "unknown",
"content": "```less markpact:doql path=app.doql.less\n// LESS format — define @variables here as needed\n// Generated by sumd for code2llm\n\napp {\n name: code2llm;\n version: 0.5.137;\n}\n\ninterface[type=\"cli\"] {\n framework: click;\n}\n\nworkflow[name=\"install\"] {\n trigger: manual;\n step-1: run cmd=pip install -e .;\n}\n\nworkflow[name=\"dev\"] {\n trigger: manual;\n step-1: run cmd=pip install -e \".[dev]\";\n}\n\nworkflow[name=\"build\"] {\n trigger: manual;\n step-1: run cmd=python -m build;\n}\n\nworkflow[name=\"test\"] {\n trigger: manual;\n step-1: run cmd=pytest -q;\n}\n\nworkflow[name=\"lint\"] {\n trigger: manual;\n step-1: run cmd=ruff check .;\n}\n\nworkflow[name=\"fmt\"] {\n trigger: manual;\n step-1: run cmd=ruff format .;\n}\n\nworkflow[name=\"clean\"] {\n trigger: manual;\n step-1: run cmd=rm -rf build/ dist/ *.egg-info;\n}\n\nworkflow[name=\"help\"] {\n trigger: manual;\n step-1: run cmd=task --list;\n}\n\ndeploy {\n target: pip;\n}\n\nenvironment[name=\"local\"] {\n runtime: python;\n}\n```",
"level": 2
},
{
"name": "# source modules",
"type": "unknown",
"content": "- `code2llm.api`\n- `code2llm.cli`\n- `code2llm.cli_analysis`\n- `code2llm.cli_commands`\n- `code2llm.cli_parser`",
"level": 2
},
{
"name": "interfaces",
"type": "interfaces",
"content": "",
"level": 2
},
{
"name": "# cli entry points",
"type": "unknown",
"content": "- `code2llm`",
"level": 2
},
{
"name": "workflows",
"type": "workflows",
"content": "",
"level": 2
},
{
"name": "# taskfile tasks (`taskfile.yml`)",
"type": "unknown",
"content": "```yaml markpact:taskfile path=Taskfile.yml\nversion: '1'\nname: code2llm\ndescription: Minimal Taskfile\nvariables:\n APP_NAME: code2llm\nenvironments:\n local:\n container_runtime: docker\n compose_command: docker compose\npipeline:\n python_version: \"3.12\"\n runner_image: ubuntu-latest\n branches: [main]\n cache: [~/.cache/pip]\n artifacts: [dist/]\n\n stages:\n - name: lint\n tasks: [lint]\n\n - name: test\n tasks: [test]\n\n - name: build\n tasks: [build]\n when: \"branch:main\"\n\ntasks:\n install:\n desc: Install Python dependencies (editable)\n cmds:\n - pip install -e .[dev]\n test:\n desc: Run pytest suite\n cmds:\n - pytest -q\n build:\n desc: Build wheel + sdist\n cmds:\n - python -m build\n clean:\n desc: Remove build artefacts\n cmds:\n - rm -rf build/ dist/ *.egg-info\n help:\n desc: '[imported from Makefile] help'\n cmds:\n - echo \"code2llm - Python Code Flow Analysis Tool with LLM Integration and TOON\n Format\"\n - echo \"\"\n - \"echo \\\"\\U0001F680 Installation:\\\"\"\n - echo \" make install - Install package\"\n - echo \" make dev-install - Install with development dependencies\"\n - echo \"\"\n - \"echo \\\"\\U0001F9EA Testing:\\\"\"\n - echo \" make test - Run test suite\"\n - echo \" make test-toon - Test TOON format only\"\n - echo \" make validate-toon - Validate TOON format output\"\n - echo \" make test-all-formats - Test all output formats\"\n - echo \"\"\n - \"echo \\\"\\U0001F527 Code Quality:\\\"\"\n - echo \" make lint - Run linters (flake8, black --check)\"\n - echo \" make format - Format code with black\"\n - echo \" make typecheck - Run mypy type checking\"\n - echo \" make check - Run all quality checks\"\n - echo \"\"\n - \"echo \\\"\\U0001F4CA Analysis:\\\"\"\n - echo \" make analyze - Run analysis on current project (TOON format)\"\n - echo \" make run - Run with example arguments\"\n - echo \" make analyze-all - Run analysis with all formats\"\n - echo \"\"\n - \"echo \\\"\\U0001F3AF TOON Format:\\\"\"\n - 
echo \" make toon-demo - Quick TOON format demo\"\n - echo \" make toon-compare - Compare TOON vs YAML formats\"\n - echo \" make toon-validate - Validate TOON format structure\"\n - echo \"\"\n - \"echo \\\"\\U0001F4E6 Building & Release:\\\"\"\n - echo \" make build - Build distribution packages\"\n - echo \" make publish - Publish to PyPI (with version bump)\"\n - echo \" make publish-test - Publish to TestPyPI\"\n - echo \" make bump-patch - Bump patch version\"\n - echo \" make bump-minor - Bump minor version\"\n - echo \" make bump-major - Bump major version\"\n - echo \"\"\n - \"echo \\\"\\U0001F3A8 Visualization:\\\"\"\n - echo \" make mermaid-png - Generate PNG from all Mermaid files\"\n - echo \" make install-mermaid - Install Mermaid CLI renderer\"\n - echo \" make check-mermaid - Check available Mermaid renderers\"\n - echo \"\"\n - \"echo \\\"\\U0001F9F9 Maintenance:\\\"\"\n - echo \" make clean - Remove build artifacts\"\n - echo \" make clean-png - Clean PNG files\"\n - echo \"\"\n dev-install:\n desc: '[imported from Makefile] dev-install'\n cmds:\n - $(PYTHON) -m pip install -e \".[dev]\"\n - \"echo \\\"\\u2713 code2llm installed with dev dependencies\\\"\"\n test-cov:\n desc: '[imported from Makefile] test-cov'\n cmds:\n - $(PYTHON) -m pytest tests/ --cov=code2llm --cov-report=html --cov-report=term\n 2>/dev/null || echo \"No tests yet\"\n test-toon:\n desc: '[imported from Makefile] test-toon'\n cmds:\n - \"echo \\\"\\U0001F3AF Testing TOON format...\\\"\"\n - $(PYTHON) -m code2llm ./ -v -o ./test_toon -m hybrid -f toon\n - $(PYTHON) validate_toon.py test_toon/analysis.toon\n - \"echo \\\"\\u2713 TOON format test complete\\\"\"\n validate-toon:\n desc: '[imported from Makefile] validate-toon'\n deps:\n - test-toon\n test-all-formats:\n desc: '[imported from Makefile] test-all-formats'\n cmds:\n - \"echo \\\"\\U0001F4CA Testing all output formats...\\\"\"\n - $(PYTHON) -m code2llm ./ -v -o ./test_all -m hybrid -f all\n - $(PYTHON) validate_toon.py 
test_all/analysis.toon\n - \"echo \\\"\\u2713 All formats test complete\\\"\"\n test-comprehensive:\n desc: '[imported from Makefile] test-comprehensive'\n cmds:\n - \"echo \\\"\\U0001F680 Running comprehensive test suite...\\\"\"\n - bash project.sh\n - \"echo \\\"\\u2713 Comprehensive tests complete\\\"\"\n lint:\n desc: '[imported from Makefile] lint'\n cmds:\n - $(PYTHON) -m flake8 code2llm/ --max-line-length=100 --ignore=E203,W503 2>/dev/null\n || echo \"flake8 not installed\"\n - $(PYTHON) -m black --check code2llm/ 2>/dev/null || echo \"black not installed\"\n - \"echo \\\"\\u2713 Linting complete\\\"\"\n format:\n desc: '[imported from Makefile] format'\n cmds:\n - '$(PYTHON) -m black code2llm/ --line-length=100 2>/dev/null || echo \"black not\n installed, run: pip install black\"'\n - \"echo \\\"\\u2713 Code formatted\\\"\"\n typecheck:\n desc: '[imported from Makefile] typecheck'\n cmds:\n - $(PYTHON) -m mypy code2llm/ --ignore-missing-imports 2>/dev/null || echo \"mypy\n not installed\"\n check:\n desc: '[imported from Makefile] check'\n cmds:\n - \"echo \\\"\\u2713 All checks passed\\\"\"\n deps:\n - lint\n - typecheck\n - test\n run:\n desc: '[imported from Makefile] run'\n cmds:\n - $(PYTHON) -m code2llm ../python/stts_core -v -o ./output\n analyze:\n desc: '[imported from Makefile] analyze'\n cmds:\n - \"echo \\\"\\U0001F3AF Running TOON format analysis on current project...\\\"\"\n - $(PYTHON) -m code2llm ./ -v -o ./analysis -m hybrid -f toon\n - $(PYTHON) validate_toon.py analysis/analysis.toon\n - \"echo \\\"\\u2713 TOON analysis complete - check analysis/analysis.toon\\\"\"\n analyze-all:\n desc: '[imported from Makefile] analyze-all'\n cmds:\n - \"echo \\\"\\U0001F4CA Running analysis with all formats...\\\"\"\n - $(PYTHON) -m code2llm ./ -v -o ./analysis_all -m hybrid -f all\n - $(PYTHON) validate_toon.py analysis_all/analysis.toon\n - \"echo \\\"\\u2713 All formats analysis complete - check analysis_all/\\\"\"\n toon-demo:\n desc: '[imported 
from Makefile] toon-demo'\n cmds:\n - \"echo \\\"\\U0001F3AF Quick TOON format demo...\\\"\"\n - $(PYTHON) -m code2llm ./ -v -o ./demo -m hybrid -f toon\n - \"echo \\\"\\U0001F4C1 Generated: demo/analysis.toon\\\"\"\n - \"echo \\\"\\U0001F4CA Size: $$(du -h demo/analysis.toon | cut -f1)\\\"\"\n - \"echo \\\"\\U0001F50D Preview:\\\"\"\n - head -20 demo/analysis.toon\n toon-compare:\n desc: '[imported from Makefile] toon-compare'\n cmds:\n - \"echo \\\"\\U0001F4CA Comparing TOON vs YAML formats...\\\"\"\n - $(PYTHON) -m code2llm ./ -v -o ./compare -m hybrid -f toon,yaml\n - \"echo \\\"\\U0001F4C1 Files generated:\\\"\"\n - 'echo \" - TOON: compare/analysis.toon ($$(du -h compare/analysis.toon | cut\n -f1))\"'\n - 'echo \" - YAML: compare/analysis.yaml ($$(du -h compare/analysis.yaml | cut\n -f1))\"'\n - 'echo \" - Ratio: $$(echo \"scale=1; $$(du -k compare/analysis.yaml | cut -f1)\n / $$(du -k compare/analysis.toon | cut -f1)\" | bc)x smaller\"'\n - $(PYTHON) validate_toon.py compare/analysis.yaml compare/analysis.toon\n toon-validate:\n desc: '[imported from Makefile] toon-validate'\n cmds:\n - \"echo \\\"\\U0001F50D Validating TOON format structure...\\\"\"\n - $(PYTHON) validate_toon.py analysis/analysis.toon 2>/dev/null || $(PYTHON) validate_toon.py\n test_toon/analysis.toon 2>/dev/null || echo \"Run 'make test-toon' first\"\n publish-test:\n desc: '[imported from Makefile] publish-test'\n cmds:\n - \"echo \\\"\\U0001F680 Publishing to TestPyPI...\\\"\"\n - $(PYTHON) -m venv publish-test-env\n - publish-test-env/bin/pip install twine\n - publish-test-env/bin/python -m twine upload --repository testpypi dist/*\n - rm -rf publish-test-env\n - \"echo \\\"\\u2713 Published to TestPyPI\\\"\"\n deps:\n - build\n bump-patch:\n desc: '[imported from Makefile] bump-patch'\n cmds:\n - \"echo \\\"\\U0001F522 Bumping patch version...\\\"\"\n - $(PYTHON) scripts/bump_version.py patch 2>/dev/null || echo \"Create scripts/bump_version.py\n or edit pyproject.toml manually\"\n 
bump-minor:\n desc: '[imported from Makefile] bump-minor'\n cmds:\n - \"echo \\\"\\U0001F522 Bumping minor version...\\\"\"\n - $(PYTHON) scripts/bump_version.py minor 2>/dev/null || echo \"Create scripts/bump_version.py\n or edit pyproject.toml manually\"\n bump-major:\n desc: '[imported from Makefile] bump-major'\n cmds:\n - \"echo \\\"\\U0001F522 Bumping major version...\\\"\"\n - $(PYTHON) scripts/bump_version.py major 2>/dev/null || echo \"Create scripts/bump_version.py\n or edit pyproject.toml manually\"\n publish:\n desc: '[imported from Makefile] publish'\n cmds:\n - \"echo \\\"\\U0001F680 Publishing to PyPI...\\\"\"\n - \"echo \\\"\\U0001F522 Bumping patch version...\\\"\"\n - $(MAKE) bump-patch\n - \"echo \\\"\\U0001F528 Rebuilding package with new version...\\\"\"\n - $(MAKE) build\n - \"echo \\\"\\U0001F4E6 Publishing to PyPI...\\\"\"\n - $(PYTHON) -m venv publish-env\n - publish-env/bin/pip install twine\n - publish-env/bin/python -m twine upload dist/*\n - rm -rf publish-env\n - \"echo \\\"\\u2713 Published to PyPI\\\"\"\n deps:\n - build\n mermaid-png:\n desc: '[imported from Makefile] mermaid-png'\n cmds:\n - $(PYTHON) mermaid_to_png.py --batch output output\n install-mermaid:\n desc: '[imported from Makefile] install-mermaid'\n cmds:\n - npm install -g @mermaid-js/mermaid-cli\n check-mermaid:\n desc: '[imported from Makefile] check-mermaid'\n cmds:\n - echo \"Checking available Mermaid renderers...\"\n - \"which mmdc > /dev/null && echo \\\"\\u2713 mmdc (mermaid-cli)\\\" || echo \\\"\\u2717\\\n \\ mmdc (run: npm install -g @mermaid-js/mermaid-cli)\\\"\"\n - \"which npx > /dev/null && echo \\\"\\u2713 npx (for @mermaid-js/mermaid-cli)\\\"\\\n \\ || echo \\\"\\u2717 npx (install Node.js)\\\"\"\n - \"which puppeteer > /dev/null && echo \\\"\\u2713 puppeteer\\\" || echo \\\"\\u2717 puppeteer\\\n \\ (run: npm install -g puppeteer)\\\"\"\n clean-png:\n desc: '[imported from Makefile] clean-png'\n cmds:\n - rm -f output/*.png\n - \"echo \\\"\\u2713 
Cleaned PNG files\\\"\"\n quickstart:\n desc: '[imported from Makefile] quickstart'\n cmds:\n - \"echo \\\"\\U0001F680 Quick Start with code2llm TOON format:\\\"\"\n - echo \"\"\n - 'echo \"1. Install: make install\"'\n - 'echo \"2. Test TOON: make test-toon\"'\n - 'echo \"3. Analyze: make analyze\"'\n - 'echo \"4. Compare: make toon-compare\"'\n - 'echo \"5. All formats: make test-all-formats\"'\n - echo \"\"\n - \"echo \\\"\\U0001F4D6 For more: make help\\\"\"\n health:\n desc: '[from doql] workflow: health'\n cmds:\n - docker compose ps\n - docker compose exec app echo \"Health check passed\"\n import-makefile-hint:\n desc: '[from doql] workflow: import-makefile-hint'\n cmds:\n - 'echo ''Run: taskfile import Makefile to import existing targets.'''\n all:\n desc: Run install, lint, test\n cmds:\n - taskfile run install\n - taskfile run lint\n - taskfile run test\n fmt:\n desc: Auto-format with ruff\n cmds:\n - ruff format .\n sumd:\n desc: Generate SUMD (Structured Unified Markdown Descriptor) for AI-aware project description\n cmds:\n - |\n echo \"# $(basename $(pwd))\" > SUMD.md\n echo \"\" >> SUMD.md\n echo \"$(python3 -c \"import tomllib; f=open('pyproject.toml','rb'); d=tomllib.load(f); print(d.get('project',{}).get('description','Project description'))\" 2>/dev/null || echo 'Project description')\" >> SUMD.md\n echo \"\" >> SUMD.md\n echo \"## Contents\" >> SUMD.md\n echo \"\" >> SUMD.md\n echo \"- [Metadata](#metadata)\" >> SUMD.md\n echo \"- [Architecture](#architecture)\" >> SUMD.md\n echo \"- [Dependencies](#dependencies)\" >> SUMD.md\n echo \"- [Source Map](#source-map)\" >> SUMD.md\n echo \"- [Intent](#intent)\" >> SUMD.md\n echo \"\" >> SUMD.md\n echo \"## Metadata\" >> SUMD.md\n echo \"\" >> SUMD.md\n echo \"- **name**: \\`$(basename $(pwd))\\`\" >> SUMD.md\n echo \"- **version**: \\`$(python3 -c \"import tomllib; f=open('pyproject.toml','rb'); d=tomllib.load(f); print(d.get('project',{}).get('version','unknown'))\" 2>/dev/null || echo 
'unknown')\\`\" >> SUMD.md\n echo \"- **python_requires**: \\`>=$(python3 --version 2>/dev/null | cut -d' ' -f2 | cut -d. -f1,2)\\`\" >> SUMD.md\n echo \"- **license**: $(python3 -c \"import tomllib; f=open('pyproject.toml','rb'); d=tomllib.load(f); print(d.get('project',{}).get('license',{}).get('text','MIT'))\" 2>/dev/null || echo 'MIT')\" >> SUMD.md\n echo \"- **ecosystem**: SUMD + DOQL + testql + taskfile\" >> SUMD.md\n echo \"- **generated_from**: pyproject.toml, Taskfile.yml, Makefile, src/\" >> SUMD.md\n echo \"\" >> SUMD.md\n echo \"## Architecture\" >> SUMD.md\n echo \"\" >> SUMD.md\n echo '```' >> SUMD.md\n echo \"SUMD (description) → DOQL/source (code) → taskfile (automation) → testql (verification)\" >> SUMD.md\n echo '```' >> SUMD.md\n echo \"\" >> SUMD.md\n echo \"## Source Map\" >> SUMD.md\n echo \"\" >> SUMD.md\n find . -name '*.py' -not -path './.venv/*' -not -path './venv/*' -not -path './__pycache__/*' -not -path './.git/*' | head -50 | sed 's|^./||' | sed 's|^|- |' >> SUMD.md\n echo \"Generated SUMD.md\"\n - |\n python3 -c \"\n import json, os, subprocess\n from pathlib import Path\n project_name = Path.cwd().name\n py_files = list(Path('.').rglob('*.py'))\n py_files = [f for f in py_files if not any(x in str(f) for x in ['.venv', 'venv', '__pycache__', '.git'])]\n data = {\n 'project_name': project_name,\n 'description': 'SUMD - Structured Unified Markdown Descriptor for AI-aware project refactorization',\n 'files': [{'path': str(f), 'type': 'python'} for f in py_files[:100]]\n }\n with open('sumd.json', 'w') as f:\n json.dump(data, f, indent=2)\n print('Generated sumd.json')\n \" 2>/dev/null || echo 'Python generation failed, using fallback'\n sumr:\n desc: Generate SUMR (Summary Report) with project metrics and health status\n cmds:\n - |\n echo \"# $(basename $(pwd)) - Summary Report\" > SUMR.md\n echo \"\" >> SUMR.md\n echo \"SUMR - Summary Report for project analysis\" >> SUMR.md\n echo \"\" >> SUMR.md\n echo \"## Contents\" >> SUMR.md\n 
echo \"\" >> SUMR.md\n echo \"- [Metadata](#metadata)\" >> SUMR.md\n echo \"- [Quality Status](#quality-status)\" >> SUMR.md\n echo \"- [Metrics](#metrics)\" >> SUMR.md\n echo \"- [Refactoring Analysis](#refactoring-analysis)\" >> SUMR.md\n echo \"- [Intent](#intent)\" >> SUMR.md\n echo \"\" >> SUMR.md\n echo \"## Metadata\" >> SUMR.md\n echo \"\" >> SUMR.md\n echo \"- **name**: \\`$(basename $(pwd))\\`\" >> SUMR.md\n echo \"- **version**: \\`$(python3 -c \"import tomllib; f=open('pyproject.toml','rb'); d=tomllib.load(f); print(d.get('project',{}).get('version','unknown'))\" 2>/dev/null || echo 'unknown')\\`\" >> SUMR.md\n echo \"- **generated_at**: \\`$(date -Iseconds)\\`\" >> SUMR.md\n echo \"\" >> SUMR.md\n echo \"## Quality Status\" >> SUMR.md\n echo \"\" >> SUMR.md\n if [ -f pyqual.yaml ]; then\n echo \"- **pyqual_config**: ✅ Present\" >> SUMR.md\n echo \"- **last_run**: $(stat -c %y .pyqual/pipeline.db 2>/dev/null | cut -d' ' -f1 || echo 'N/A')\" >> SUMR.md\n else\n echo \"- **pyqual_config**: ❌ Missing\" >> SUMR.md\n fi\n echo \"\" >> SUMR.md\n echo \"## Metrics\" >> SUMR.md\n echo \"\" >> SUMR.md\n py_files=$(find . -name '*.py' -not -path './.venv/*' -not -path './venv/*' | wc -l)\n echo \"- **python_files**: $py_files\" >> SUMR.md\n lines=$(find . 
-name '*.py' -not -path './.venv/*' -not -path './venv/*' -exec cat {} \\; 2>/dev/null | wc -l)\n echo \"- **total_lines**: $lines\" >> SUMR.md\n echo \"\" >> SUMR.md\n echo \"## Refactoring Analysis\" >> SUMR.md\n echo \"\" >> SUMR.md\n echo \"Run \\`code2llm ./ -f evolution\\` for detailed refactoring queue.\" >> SUMR.md\n echo \"Generated SUMR.md\"\n - |\n python3 -c \"\n import json, os, subprocess\n from pathlib import Path\n from datetime import datetime\n project_name = Path.cwd().name\n py_files = len([f for f in Path('.').rglob('*.py') if not any(x in str(f) for x in ['.venv', 'venv', '__pycache__', '.git'])])\n data = {\n 'project_name': project_name,\n 'report_type': 'SUMR',\n 'generated_at': datetime.now().isoformat(),\n 'metrics': {\n 'python_files': py_files,\n 'has_pyqual_config': Path('pyqual.yaml').exists()\n }\n }\n with open('SUMR.json', 'w') as f:\n json.dump(data, f, indent=2)\n print('Generated SUMR.json')\n \" 2>/dev/null || echo 'Python generation failed, using fallback'\n```",
"level": 2
},
{
"name": "quality pipeline (`pyqual.yaml`)",
"type": "unknown",
"content": "```yaml markpact:pyqual path=pyqual.yaml\npipeline:\n name: code2llm-quality\n\n metrics:\n cc_max: 15\n critical_max: 0\n\n custom_tools:\n - name: code2llm_code2llm\n binary: code2llm\n command: >-\n code2llm {workdir} -f toon -o ./project --no-chunk\n --exclude .git .venv .venv_test build dist __pycache__ .pytest_cache .code2llm_cache .benchmarks .mypy_cache .ruff_cache node_modules\n output: \"\"\n allow_failure: false\n\n - name: vallm_code2llm\n binary: vallm\n command: >-\n vallm batch {workdir} --recursive --format toon --output ./project\n --exclude .git,.venv,.venv_test,build,dist,__pycache__,.pytest_cache,.code2llm_cache,.benchmarks,.mypy_cache,.ruff_cache,node_modules\n output: \"\"\n allow_failure: false\n\n stages:\n - name: analyze\n tool: code2llm_code2llm\n optional: true\n timeout: 0\n\n - name: validate\n tool: vallm_code2llm\n optional: true\n timeout: 0\n\n - name: lint\n tool: ruff\n optional: true\n\n - name: fix\n tool: prefact\n optional: true\n when: metrics_fail\n timeout: 900\n\n - name: test\n run: python3 -m pytest -q\n when: always\n\n loop:\n max_iterations: 3\n on_fail: report\n\n env:\n LLM_MODEL: openrouter/qwen/qwen3-coder-next\n```",
"level": 2
},
{
"name": "configuration",
"type": "configuration",
"content": "```yaml\nproject:\n name: code2llm\n version: 0.5.141\n env: local\n```",
"level": 2
},
{
"name": "dependencies",
"type": "dependencies",
"content": "",
"level": 2
},
{
"name": "# runtime",
"type": "unknown",
"content": "```text markpact:deps python\nnetworkx>=2.6\nmatplotlib>=3.4\npyyaml>=5.4\nnumpy>=1.20\njinja2>=3.0\nradon>=5.1\nastroid>=3.0\ncode2logic\nvulture>=2.10\ntiktoken>=0.5\ntree-sitter>=0.21\ntree-sitter-python>=0.21\ntree-sitter-javascript>=0.21\ntree-sitter-typescript>=0.21\ntree-sitter-go>=0.21\ntree-sitter-rust>=0.21\ntree-sitter-java>=0.21\ntree-sitter-c>=0.21\ntree-sitter-cpp>=0.22\ntree-sitter-c-sharp>=0.21\ntree-sitter-php>=0.22\ntree-sitter-ruby>=0.21\n```",
"level": 2
},
{
"name": "# development",
"type": "unknown",
"content": "```text markpact:deps python scope=dev\npytest>=6.2\npytest-cov>=2.12\npytest-xdist>=3.0\nblack>=21.0\nflake8>=3.9\nmypy>=0.910\ngoal>=2.1.0\ncosts>=0.1.20\npfix>=0.1.60\n```",
"level": 2
},
{
"name": "deployment",
"type": "deployment",
"content": "```bash markpact:run\npip install code2llm\n\n# development install\npip install -e .[dev]\n```",
"level": 2
},
{
"name": "# requirements files",
"type": "unknown",
"content": "",
"level": 2
},
{
"name": "## `requirements.txt`",
"type": "unknown",
"content": "- `networkx>=3.0`\n- `matplotlib>=3.6.0`\n- `numpy>=1.21.0`\n- `pyyaml>=6.0`\n- `scipy>=1.7.0`\n- `radon>=5.1`\n- `psutil>=5.8.0`\n- `astroid>=3.0`\n- `code2logic`",
"level": 2
},
{
"name": "environment variables (`.env.example`)",
"type": "unknown",
"content": "| Variable | Default | Description |\n|----------|---------|-------------|\n| `CODE2FLOW_CALLS_SPLIT` | `1` | Enable/disable splitting |\n| `CODE2FLOW_CALLS_KEEP_MAIN` | `0` | Keep writing the full calls.mmd in addition to parts |\n| `CODE2FLOW_CALLS_MIN_NODES` | `30` | Minimum number of functions per part |\n| `CODE2FLOW_CALLS_MAX_NODES` | `250` | Maximum number of functions per part |\n| `CODE2FLOW_CALLS_MAX_PARTS` | `20` | Maximum number of parts to generate |\n| `CODE2FLOW_CALLS_INCLUDE_SINGLETONS` | `0` | Include singleton components (1 function with no edges) |\n| `CODE2FLOW_MERMAID_MAX_EDGES` | `20000` | Increase if Mermaid CLI reports edge/text limits. |\n| `CODE2FLOW_MERMAID_MAX_TEXT_SIZE` | `2000000` | |",
"level": 2
},
{
"name": "release management (`goal.yaml`)",
"type": "unknown",
"content": "- **versioning**: `semver`\n- **commits**: `conventional` scope=`code2flow`\n- **changelog**: `keep-a-changelog`\n- **build strategies**: `python`, `nodejs`, `rust`\n- **version files**: `VERSION`, `pyproject.toml:version`, `setup.py:version`, `code2llm/__init__.py:__version__`",
"level": 2
},
{
"name": "makefile targets",
"type": "unknown",
"content": "- `PYTHON`\n- `help` — Default target\n- `install`\n- `dev-install`\n- `test`\n- `test-cov`\n- `test-toon`\n- `validate-toon`\n- `test-all-formats`\n- `test-comprehensive`\n- `lint`\n- `format`\n- `typecheck`\n- `check`\n- `run`\n- `analyze`\n- `analyze-all`\n- `toon-demo`\n- `toon-compare`\n- `toon-validate`\n- `build`\n- `publish-test`\n- `bump-patch`\n- `bump-minor`\n- `bump-major`\n- `publish`\n- `mermaid-png`\n- `install-mermaid`\n- `check-mermaid`\n- `clean`\n- `clean-png`\n- `quickstart`",
"level": 2
},
{
"name": "code analysis",
"type": "unknown",
"content": "",
"level": 2
},
{
"name": "# `project/map.toon.yaml`",
"type": "unknown",
"content": "```toon markpact:analysis path=project/map.toon.yaml\n# code2llm | 195f 27349L | shell:3,python:191,java:1 | 2026-04-20\n# stats: 1188 func | 0 cls | 195 mod | CC̄=3.9 | critical:1 | cycles:0\n# alerts[5]: fan-out analyze_rust=24; fan-out _summarize_functions=21; fan-out _analyze_go_regex=20; fan-out ProjectYAMLExporter._build_project_yaml=18; fan-out _run_exports=18\n# hotspots[5]: analyze_rust fan=24; _summarize_functions fan=21; _analyze_go_regex fan=20; run_benchmark fan=18; ProjectYAMLExporter._build_project_yaml fan=18\n# evolution: CC̄ 4.1→3.9 (improved -0.2)\n# Keys: M=modules, D=details, i=imports, e=exports, c=classes, f=functions, m=methods\nM[195]:\n badges/server.py,110\n benchmarks/benchmark_constants.py,29\n benchmarks/benchmark_evolution.py,137\n benchmarks/benchmark_format_quality.py,143\n benchmarks/benchmark_optimizations.py,157\n benchmarks/benchmark_performance.py,306\n benchmarks/format_evaluator.py,138\n benchmarks/project_generator.py,233\n benchmarks/reporting.py,179\n code2llm/__init__.py,52\n code2llm/__main__.py,6\n code2llm/analysis/__init__.py,37\n code2llm/analysis/call_graph.py,198\n code2llm/analysis/cfg.py,280\n code2llm/analysis/coupling.py,77\n code2llm/analysis/data_analysis.py,375\n code2llm/analysis/dfg.py,219\n code2llm/analysis/pipeline_classifier.py,100\n code2llm/analysis/pipeline_detector.py,362\n code2llm/analysis/pipeline_resolver.py,91\n code2llm/analysis/side_effects.py,294\n code2llm/analysis/smells.py,192\n code2llm/analysis/type_inference.py,290\n code2llm/analysis/utils/__init__.py,5\n code2llm/analysis/utils/ast_helpers.py,86\n code2llm/api.py,73\n code2llm/cli.py,69\n code2llm/cli_analysis.py,331\n code2llm/cli_commands.py,317\n code2llm/cli_exports/__init__.py,54\n code2llm/cli_exports/code2logic.py,127\n code2llm/cli_exports/formats.py,315\n code2llm/cli_exports/orchestrator.py,293\n code2llm/cli_exports/orchestrator_chunked.py,87\n code2llm/cli_exports/orchestrator_constants.py,52\n 
code2llm/cli_exports/orchestrator_handlers.py,149\n code2llm/cli_exports/prompt.py,475\n code2llm/cli_parser.py,327\n code2llm/core/__init__.py,53\n code2llm/core/analyzer.py,442\n code2llm/core/ast_registry.py,102\n code2llm/core/config.py,269\n code2llm/core/export_pipeline.py,153\n code2llm/core/file_analyzer.py,392\n code2llm/core/file_cache.py,107\n code2llm/core/file_filter.py,127\n code2llm/core/gitignore.py,138\n code2llm/core/incremental.py,150\n code2llm/core/lang/__init__.py,171\n code2llm/core/lang/base.py,464\n code2llm/core/lang/cpp.py,35\n code2llm/core/lang/csharp.py,42\n code2llm/core/lang/generic.py,71\n code2llm/core/lang/go_lang.py,102\n code2llm/core/lang/java.py,43\n code2llm/core/lang/php.py,66\n code2llm/core/lang/ruby.py,164\n code2llm/core/lang/rust.py,94\n code2llm/core/lang/ts_extractors.py,180\n code2llm/core/lang/ts_parser.py,158\n code2llm/core/lang/typescript.py,53\n code2llm/core/large_repo.py,488\n code2llm/core/models.py,193\n code2llm/core/persistent_cache.py,322\n code2llm/core/refactoring.py,195\n code2llm/core/repo_files.py,174\n code2llm/core/streaming/__init__.py,7\n code2llm/core/streaming/cache.py,50\n code2llm/core/streaming/incremental.py,75\n code2llm/core/streaming/prioritizer.py,131\n code2llm/core/streaming/scanner.py,201\n code2llm/core/streaming/strategies.py,68\n code2llm/core/streaming_analyzer.py,181\n code2llm/core/toon_size_manager.py,265\n code2llm/exporters/__init__.py,82\n code2llm/exporters/article_view.py,159\n code2llm/exporters/base.py,174\n code2llm/exporters/context_exporter.py,250\n code2llm/exporters/context_view.py,136\n code2llm/exporters/dashboard_data.py,163\n code2llm/exporters/dashboard_renderer.py,342\n code2llm/exporters/evolution/__init__.py,78\n code2llm/exporters/evolution/computation.py,167\n code2llm/exporters/evolution/constants.py,25\n code2llm/exporters/evolution/exclusion.py,17\n code2llm/exporters/evolution/render.py,195\n code2llm/exporters/evolution/yaml_export.py,103\n 
code2llm/exporters/evolution_exporter.py,74\n code2llm/exporters/flow_constants.py,46\n code2llm/exporters/flow_exporter.py,385\n code2llm/exporters/flow_renderer.py,188\n code2llm/exporters/html_dashboard.py,68\n code2llm/exporters/index_generator/__init__.py,73\n code2llm/exporters/index_generator/renderer.py,637\n code2llm/exporters/index_generator/scanner.py,116\n code2llm/exporters/json_exporter.py,27\n code2llm/exporters/llm_exporter.py,12\n code2llm/exporters/map/__init__.py,60\n code2llm/exporters/map/alerts.py,84\n code2llm/exporters/map/details.py,115\n code2llm/exporters/map/header.py,71\n code2llm/exporters/map/module_list.py,26\n code2llm/exporters/map/utils.py,69\n code2llm/exporters/map/yaml_export.py,106\n code2llm/exporters/map_exporter.py,50\n code2llm/exporters/mermaid/__init__.py,66\n code2llm/exporters/mermaid/calls.py,62\n code2llm/exporters/mermaid/classic.py,93\n code2llm/exporters/mermaid/compact.py,67\n code2llm/exporters/mermaid/flow_compact.py,157\n code2llm/exporters/mermaid/flow_detailed.py,70\n code2llm/exporters/mermaid/flow_full.py,70\n code2llm/exporters/mermaid/utils.py,99\n code2llm/exporters/mermaid_exporter.py,74\n code2llm/exporters/mermaid_flow_helpers.py,263\n code2llm/exporters/project_yaml/__init__.py,15\n code2llm/exporters/project_yaml/constants.py,15\n code2llm/exporters/project_yaml/core.py,120\n code2llm/exporters/project_yaml/evolution.py,46\n code2llm/exporters/project_yaml/health.py,103\n code2llm/exporters/project_yaml/hotspots.py,106\n code2llm/exporters/project_yaml/modules.py,151\n code2llm/exporters/project_yaml_exporter.py,15\n code2llm/exporters/readme/__init__.py,40\n code2llm/exporters/readme/content.py,348\n code2llm/exporters/readme/files.py,26\n code2llm/exporters/readme/insights.py,52\n code2llm/exporters/readme/sections.py,67\n code2llm/exporters/readme_exporter.py,66\n code2llm/exporters/report_generators.py,76\n code2llm/exporters/toon/__init__.py,199\n code2llm/exporters/toon/helpers.py,111\n 
code2llm/exporters/toon/metrics.py,98\n code2llm/exporters/toon/metrics_core.py,305\n code2llm/exporters/toon/metrics_duplicates.py,78\n code2llm/exporters/toon/metrics_health.py,98\n code2llm/exporters/toon/module_detail.py,162\n code2llm/exporters/toon/renderer.py,471\n code2llm/exporters/toon_view.py,153\n code2llm/exporters/validate_project.py,118\n code2llm/exporters/yaml_exporter.py,354\n code2llm/generators/__init__.py,15\n code2llm/generators/_utils.py,15\n code2llm/generators/llm_flow/__init__.py,98\n code2llm/generators/llm_flow/analysis.py,173\n code2llm/generators/llm_flow/cli.py,76\n code2llm/generators/llm_flow/generator.py,118\n code2llm/generators/llm_flow/nodes.py,103\n code2llm/generators/llm_flow/parsing.py,39\n code2llm/generators/llm_flow/utils.py,84\n code2llm/generators/llm_task.py,309\n code2llm/generators/mermaid/__init__.py,70\n code2llm/generators/mermaid/fix.py,147\n code2llm/generators/mermaid/png.py,264\n code2llm/generators/mermaid/validation.py,119\n code2llm/nlp/__init__.py,23\n code2llm/nlp/config.py,174\n code2llm/nlp/entity_resolution.py,326\n code2llm/nlp/intent_matching.py,297\n code2llm/nlp/normalization.py,122\n code2llm/nlp/pipeline.py,388\n code2llm/parsers/toon_parser.py,147\n code2llm/patterns/__init__.py,0\n code2llm/patterns/detector.py,168\n code2llm/refactor/__init__.py,0\n code2llm/refactor/prompt_engine.py,150\n demo_langs/valid/sample.py,53\n examples/functional_refactoring/__init__.py,6\n examples/functional_refactoring/cache.py,121\n examples/functional_refactoring/cli.py,45\n examples/functional_refactoring/entity_preparers.py,129\n examples/functional_refactoring/generator.py,61\n examples/functional_refactoring/models.py,25\n examples/functional_refactoring/template_engine.py,104\n examples/functional_refactoring_example.py,61\n examples/litellm/run.py,120\n examples/streaming-analyzer/demo.py,251\n examples/streaming-analyzer/sample_project/__init__.py,1\n 
examples/streaming-analyzer/sample_project/api.py,76\n examples/streaming-analyzer/sample_project/auth.py,88\n examples/streaming-analyzer/sample_project/database.py,157\n examples/streaming-analyzer/sample_project/main.py,158\n examples/streaming-analyzer/sample_project/utils.py,84\n orchestrator.sh,58\n project.sh,49\n project2.sh,35\n scripts/benchmark_badges.py,392\n scripts/bump_version.py,96\n setup.py,72\n test_langs/invalid/sample_bad.java,18\n test_langs/valid/sample.py,40\n test_python_only/invalid/__init__.py,1\n test_python_only/valid/__init__.py,1\n test_python_only/valid/sample.py,40\n validate_toon.py,379\nD:\n code2llm/generators/llm_task.py:\n e: _strip_bom,_ensure_list,_deep_get,normalize_llm_task,_parse_bullets,_parse_sections,_create_empty_task_data,_apply_simple_sections,_apply_bullet_sections,_parse_acceptance_tests,parse_llm_task_text,load_input,create_parser,main\n _strip_bom(text)\n _ensure_list(value)\n _deep_get(d;path)\n normalize_llm_task(data)\n _parse_bullets(lines)\n _parse_sections(lines)\n _create_empty_task_data()\n _apply_simple_sections(sections;data)\n _apply_bullet_sections(sections;data)\n _parse_acceptance_tests(sections)\n parse_llm_task_text(text)\n load_input(path)\n create_parser()\n main(argv)\n code2llm/cli_analysis.py:\n e: _run_analysis,_run_standard_analysis,_build_config,_print_analysis_summary,_run_chunked_analysis,_print_chunked_plan,_filter_subprojects,_analyze_all_subprojects,_analyze_subproject,_merge_chunked_results,_run_streaming_analysis\n _run_analysis(args;source_path;output_dir)\n _run_standard_analysis(args;source_path;output_dir)\n _build_config(args;output_dir)\n _print_analysis_summary(result)\n _run_chunked_analysis(args;source_path;output_dir)\n _print_chunked_plan(subprojects)\n _filter_subprojects(args;subprojects)\n _analyze_all_subprojects(args;subprojects;output_dir)\n _analyze_subproject(args;subproject;output_dir)\n _merge_chunked_results(all_results;source_path)\n 
_run_streaming_analysis(args;config;source_path)\n code2llm/analysis/data_analysis.py:\n e: DataAnalyzer,DataFlowAnalyzer,OptimizationAdvisor,_categorize_functions,_make_stage\n DataAnalyzer: analyze_data_flow(1),analyze_data_structures(1),_find_data_pipelines(1),_find_state_patterns(1),_find_data_dependencies(1),_find_event_flows(1),_detect_types_from_name(2),_create_type_entry(4),_update_type_stats(4),_analyze_data_types(1),_infer_parameter_types(1),_infer_return_types(1),_build_data_flow_graph(1),_get_function_data_types(1),_identify_process_patterns(1),_analyze_optimization_opportunities(3) # Analyze data flows, structures, and optimization opportuniti...\n DataFlowAnalyzer: analyze(1),find_data_pipelines(1),find_state_patterns(1),find_data_dependencies(1),find_event_flows(1) # Analyze data flows: pipelines, state patterns, dependencies,...\n OptimizationAdvisor: analyze(1),analyze_data_types(1),build_data_flow_graph(1),identify_process_patterns(1),analyze_optimization_opportunities(3) # Analyze optimization opportunities: data types and process p...\n _categorize_functions(result)\n _make_stage(label;func_name;func)\n code2llm/analysis/side_effects.py:\n e: SideEffectInfo,SideEffectDetector\n SideEffectInfo: __init__(2),to_dict(0) # Side-effect analysis result for a single function...\n SideEffectDetector: __init__(1),analyze_function(1),analyze_all(1),get_purity_score(1),_scan_node(2),_check_calls(2),_check_assignments(2),_check_globals(2),_check_yield(2),_check_delete(2),_classify(1),_heuristic_classify(2),_get_call_name(1) # Detect side effects in Python functions via AST analysis.\n\nS...\n code2llm/core/streaming/scanner.py:\n e: StreamingScanner\n StreamingScanner: __init__(2),quick_scan_file(1),deep_analyze_file(1),build_call_graph_streaming(1),select_important_files(2),collect_files(1) # Handles file scanning operations...\n code2llm/core/lang/ruby.py:\n e: RubyParser,_extract_ruby_body,_adjust_ruby_module_qualnames,analyze_ruby\n RubyParser: 
analyze(4) # Ruby language parser - registered via @register_language in ...\n _extract_ruby_body(content;start_line)\n _adjust_ruby_module_qualnames(result;module_name;current_module)\n analyze_ruby(content;file_path;module_name;ext;stats)\n code2llm/core/lang/base.py:\n e: extract_function_body,calculate_complexity_regex,_resolve_call,extract_calls_regex,_extract_declarations,_update_brace_tracking,_process_decorators,_process_classes,_process_standalone_function,_match_method_name,_process_class_method,_process_functions,_clear_orphaned_decorators,analyze_c_family\n extract_function_body(content;start_line)\n calculate_complexity_regex(content;result;lang)\n _resolve_call(simple_call;func_qname;module_name;known_simple;calls_seen;func_info)\n extract_calls_regex(content;module_name;result)\n _extract_declarations(content;file_path;module_name;patterns;stats;lang_config)\n _update_brace_tracking(raw_line;brace_depth;current_class;class_brace_depth;track_braces)\n _process_decorators(decorator_re;line;pending_decorators)\n _process_classes(class_re;interface_re;line;line_no;file_path;module_name;result;stats;current_class;class_brace_depth;pending_decorators)\n _process_standalone_function(func_re;arrow_re;line;line_no;file_path;module_name;result;stats;pending_decorators;reserved)\n _match_method_name(arrow_prop_re;method_re;func_re;line;reserved)\n _process_class_method(method_re;arrow_prop_re;func_re;line;line_no;file_path;module_name;result;stats;current_class;pending_decorators;reserved)\n _process_functions(func_re;arrow_re;method_re;arrow_prop_re;line;line_no;file_path;module_name;result;stats;current_class;pending_decorators;reserved)\n _clear_orphaned_decorators(line;pending_decorators;func_re;arrow_re;class_re;interface_re;method_re)\n analyze_c_family(content;file_path;module_name;stats;patterns;lang_config;cc_lang;ext)\n code2llm/exporters/flow_renderer.py:\n e: FlowRenderer\n FlowRenderer: 
render_header(0),render_pipelines(0),render_transforms(0),render_contracts(0),render_data_types(0),render_side_effects(0) # Renderer for flow.toon format sections...\n code2llm/generators/llm_flow/analysis.py:\n e: FuncSummary,_node_counts_by_function,_pick_relevant_functions,_summarize_functions,_build_call_graph,_reachable\n FuncSummary:\n _node_counts_by_function(nodes)\n _pick_relevant_functions()\n _summarize_functions(nodes;limit_decisions;limit_calls)\n _build_call_graph(func_summaries;known_functions)\n _reachable(g;roots;max_nodes)\n code2llm/cli_exports/prompt.py:\n e: _export_prompt_txt,_export_chunked_prompt_txt,_get_prompt_paths,_build_prompt_header,_find_existing_prompt_file,_build_prompt_file_lines,_build_main_files_section,_build_optional_files_section,_format_size,_get_missing_files,_build_subprojects_section,_build_missing_files_section,_analyze_generated_files,_build_dynamic_focus_areas,_build_dynamic_tasks,_build_priority_order,_build_strategy_section,_build_prompt_footer\n _export_prompt_txt(args;output_dir;formats;source_path)\n _export_chunked_prompt_txt(args;output_dir;formats;source_path;subprojects)\n _get_prompt_paths(source_path;output_dir)\n _build_prompt_header(project_path)\n _find_existing_prompt_file(output_dir;candidates)\n _build_prompt_file_lines(output_dir;output_rel_path;files)\n _build_main_files_section(output_dir;output_rel_path)\n _build_optional_files_section(output_dir;output_rel_path)\n _format_size(size_bytes)\n _get_missing_files(output_dir)\n _build_subprojects_section(subprojects;output_dir;output_rel_path)\n _build_missing_files_section(output_dir;output_rel_path)\n _analyze_generated_files(output_dir;subprojects)\n _build_dynamic_focus_areas(file_analysis)\n _build_dynamic_tasks(file_analysis)\n _build_priority_order(file_analysis)\n _build_strategy_section(file_analysis)\n _build_prompt_footer(chunked;file_analysis)\n code2llm/cli_exports/orchestrator.py:\n e: 
_build_export_config,_collect_dry_run_files,_show_dry_run_plan,_run_exports,_copy_cached_export,_copy_to_cache,_expand_all_formats,_export_single,_export_registry_formats,_get_format_kwargs,_export_chunked\n _build_export_config(args;formats)\n _collect_dry_run_files(formats;output_dir)\n _show_dry_run_plan(formats;output_dir;is_chunked;result)\n _run_exports(args;result;output_dir;source_path)\n _copy_cached_export(cached_dir;output_dir;verbose)\n _copy_to_cache(output_dir;cache_dir;verbose)\n _expand_all_formats(requested;include_png)\n _export_single(args;result;output_dir;formats;requested_formats;source_path)\n _export_registry_formats(args;result;output_dir;formats)\n _get_format_kwargs(fmt;args)\n _export_chunked(args;result;output_dir;source_path;formats;requested_formats)\n benchmarks/benchmark_evolution.py:\n e: parse_evolution_metrics,load_previous,save_current,run_benchmark\n parse_evolution_metrics(toon_content)\n load_previous(history_file)\n save_current(history_file;metrics)\n run_benchmark(project_path)\n scripts/benchmark_badges.py:\n e: get_shield_url,parse_evolution_metrics,parse_format_quality_report,parse_performance_report,generate_badges,generate_format_quality_badges,generate_performance_badges,create_html,main\n get_shield_url(label;message;color)\n parse_evolution_metrics(toon_content)\n parse_format_quality_report(report_path)\n parse_performance_report(report_path)\n generate_badges(metrics)\n generate_format_quality_badges(format_scores)\n generate_performance_badges(performance_data)\n create_html(badges;title)\n main()\n code2llm/nlp/entity_resolution.py:\n e: Entity,EntityResolutionResult,EntityResolver\n Entity: # Resolved entity...\n EntityResolutionResult: get_by_type(1),get_best_match(0) # Result of entity resolution...\n EntityResolver: 
__init__(2),resolve(3),_extract_candidates(2),_extract_from_patterns(2),_disambiguate(2),_resolve_hierarchical(1),_resolve_aliases(1),_name_similarity(2),load_from_analysis(1),step_3a_extract_entities(2),step_3b_match_threshold(1),step_3c_disambiguate(2),step_3d_hierarchical_resolve(1),step_3e_alias_resolve(1) # Resolve entities (functions, classes, etc.) from queries...\n code2llm/exporters/report_generators.py:\n e: load_project_yaml\n load_project_yaml(path)\n code2llm/exporters/readme/insights.py:\n e: extract_insights\n extract_insights(output_dir)\n code2llm/exporters/project_yaml/health.py:\n e: build_health,build_alerts,count_duplicates\n build_health(result;modules)\n build_alerts(result)\n count_duplicates(result)\n code2llm/exporters/project_yaml/hotspots.py:\n e: build_hotspots,hotspot_note,build_refactoring\n build_hotspots(result)\n hotspot_note(fi;fan_out)\n build_refactoring(result;modules;hotspots)\n code2llm/exporters/map/details.py:\n e: render_details,_rank_modules,_render_map_module,_render_map_class,_function_signature\n render_details(result;is_excluded_path)\n _rank_modules(result;is_excluded_path)\n _render_map_module(result;mi;lines;is_excluded_path)\n _render_map_class(result;ci;lines)\n _function_signature(fi)\n code2llm/exporters/mermaid/compact.py:\n e: export_compact\n export_compact(result;output_path)\n code2llm/exporters/mermaid/calls.py:\n e: export_calls\n export_calls(result;output_path)\n code2llm/cli_exports/formats.py:\n e: _export_evolution,_export_data_structures,_export_context_fallback,_export_readme,_export_project_yaml,_export_project_toon,_run_report,_export_simple_formats,_export_yaml,_export_mermaid_pngs,_export_calls_format,_export_calls,_export_calls_toon,_export_mermaid,_export_refactor_prompts,_export_index_html\n _export_evolution(args;result;output_dir)\n _export_data_structures(args;result;output_dir)\n _export_context_fallback(args;result;output_dir;formats)\n _export_readme(args;result;output_dir)\n 
_export_project_yaml(args;result;output_dir)\n _export_project_toon(args;result;output_dir)\n _run_report(args;project_yaml_path;output_dir)\n _export_simple_formats(args;result;output_dir;formats)\n _export_yaml(args;result;output_dir)\n _export_mermaid_pngs(args;output_dir)\n _export_calls_format(args;result;output_dir;toon)\n _export_calls(args;result;output_dir)\n _export_calls_toon(args;result;output_dir)\n _export_mermaid(args;result;output_dir)\n _export_refactor_prompts(args;result;output_dir)\n _export_index_html(args;output_dir)\n code2llm/analysis/pipeline_detector.py:\n e: PipelineStage,Pipeline,PipelineDetector\n PipelineStage: # A single stage in a detected pipeline...\n Pipeline: to_dict(0) # A detected pipeline with stages, purity info, and domain...\n PipelineDetector: __init__(2),detect(2),_build_graph(1),_find_pipeline_paths(1),_longest_path_from(3),_longest_path_in_dag(1),_build_pipelines(3),_build_stages(3) # Detect pipelines in a codebase using networkx graph analysis...\n code2llm/cli_commands.py:\n e: handle_special_commands,handle_cache_command,handle_report_command,validate_and_setup,print_start_info,validate_chunked_output,_get_chunk_dirs,_validate_chunks,_validate_single_chunk,_get_file_sizes,_print_chunk_errors,_print_validation_summary,generate_llm_context\n handle_special_commands()\n handle_cache_command(args_list)\n handle_report_command(args_list)\n validate_and_setup(args)\n print_start_info(args;source_path;output_dir)\n validate_chunked_output(output_dir;args)\n _get_chunk_dirs(output_dir)\n _validate_chunks(chunk_dirs;required_files)\n _validate_single_chunk(chunk_dir;required_files)\n _get_file_sizes(chunk_dir;required_files)\n _print_chunk_errors(chunk_name;chunk_issues)\n _print_validation_summary(chunk_dirs;valid_chunks;issues)\n generate_llm_context(args_list)\n code2llm/core/streaming_analyzer.py:\n e: StreamingAnalyzer\n StreamingAnalyzer: 
__init__(2),set_progress_callback(1),cancel(0),analyze_streaming(2),_estimate_eta(3),_report_progress(4) # Memory-efficient streaming analyzer with progress tracking...\n code2llm/core/file_analyzer.py:\n e: FileAnalyzer,_analyze_single_file\n FileAnalyzer: __init__(2),_route_to_language_analyzer(4),analyze_file(2),_analyze_python(3),_analyze_ast(4),_calculate_complexity(3),_perform_deep_analysis(4),_process_class(4),_process_function(5),_build_cfg(4),_process_cfg_block(8),_process_if_stmt(8),_process_loop_stmt(7),_process_return_stmt(6),_get_base_name(1),_get_decorator_name(1),_get_call_name(1) # Analyzes a single file...\n _analyze_single_file(args)\n code2llm/core/lang/generic.py:\n e: analyze_generic\n analyze_generic(content;file_path;module_name;ext;stats)\n code2llm/exporters/toon/metrics_core.py:\n e: CoreMetricsComputer\n CoreMetricsComputer: __init__(2),compute_file_metrics(1),_new_file_record(1),_compute_fan_in(1),_process_function_calls(2),_process_called_by(3),_process_callee_calls(3),_handle_suffix_match(3),compute_package_metrics(2),compute_function_metrics(1),compute_class_metrics(1),compute_coupling_matrix(1),_build_function_to_module_map(1),_build_coupling_matrix(2),_resolve_callee_module(3),_compute_package_fan(1) # Computes core structural and complexity metrics...\n code2llm/exporters/project_yaml/core.py:\n e: ProjectYAMLExporter\n ProjectYAMLExporter(BaseExporter): export(2),_build_project_yaml(2),_detect_primary_language(1) # Export unified project.yaml — single source of truth for dia...\n code2llm/generators/llm_flow/utils.py:\n e: _strip_bom,_safe_read_yaml,_as_dict,_as_list,_shorten\n _strip_bom(text)\n _safe_read_yaml(path)\n _as_dict(d)\n _as_list(v)\n _shorten(s;max_len)\n code2llm/generators/mermaid/validation.py:\n e: validate_mermaid_file,_strip_label_segments,_is_balanced_node_line,_check_bracket_balance,_scan_brackets,_check_node_ids\n validate_mermaid_file(mmd_path)\n _strip_label_segments(s)\n _is_balanced_node_line(line)\n 
_check_bracket_balance(lines;errors)\n _scan_brackets(text;line_num;bracket_stack;paren_stack;errors)\n _check_node_ids(lines;errors)\n code2llm/refactor/prompt_engine.py:\n e: PromptEngine\n PromptEngine: __init__(2),generate_prompts(0),_generate_prompt_for_smell(1),_get_template_for_type(1),_build_context_for_smell(1),_get_source_context(3),_get_instruction_for_smell(1) # Generate refactoring prompts from analysis results and detec...\n code2llm/core/analyzer.py:\n e: ProjectAnalyzer\n ProjectAnalyzer: __init__(2),analyze_project(1),_resolve_project_path(1),_load_from_persistent_cache(2),_run_analysis(1),_store_to_persistent_cache(3),_build_stats(4),_print_summary(1),_post_process(4),_collect_files(1),_analyze_parallel(1),_analyze_sequential(1),_merge_results(2),_build_simple_name_map(1),_resolve_call(4),_collect_call_edges(2),_find_entry_points(1),_build_call_graph(1),analyze_files(2),_detect_patterns(1) # Main analyzer with parallel processing...\n code2llm/exporters/context_view.py:\n e: ContextViewGenerator\n ContextViewGenerator(ViewGeneratorMixin): _render(1),_render_overview(1),_render_architecture(0),_render_exports(0),_render_hotspots(0),_render_refactoring(0),_render_guidelines(-1) # Generate context.md from project.yaml data...\n code2llm/exporters/validate_project.py:\n e: validate_project_yaml,_check_required_keys,_cross_check_toon\n validate_project_yaml(output_dir;verbose)\n _check_required_keys(data)\n _cross_check_toon(data;toon_path)\n code2llm/exporters/mermaid_flow_helpers.py:\n e: _filtered_functions,_entry_points,_group_functions_by_module,_classify_architecture_module,_group_architecture_functions,_select_key_functions,_append_flow_node,_render_module_subgraphs,_render_flow_edges,_append_entry_styles,_render_flow_styles,_render_architecture_view\n _filtered_functions(result;module_of;should_skip_module;include_examples)\n _entry_points(filtered_funcs;result;is_entry_point)\n _group_functions_by_module(funcs;module_of)\n 
_classify_architecture_module(func_name;module)\n _group_architecture_functions(funcs;module_of)\n _select_key_functions(func_names;funcs;entry_points;critical_path;get_cc;threshold)\n _append_flow_node(lines;func_name;fi;short_len;entry_points;readable_id;get_cc;high_threshold;med_threshold)\n _render_module_subgraphs(lines;modules;entry_points;short_len;readable_id;safe_module;get_cc;sort_funcs;max_funcs;high_threshold;med_threshold)\n _render_flow_edges(lines;funcs;readable_id;resolve;calls_per_function;limit;name_index)\n _append_entry_styles(lines;entry_points;readable_id;entry_limit)\n _render_flow_styles(lines;funcs;entry_points;readable_id;get_cc;high_threshold;med_threshold;high_limit;med_limit;entry_limit)\n _render_architecture_view(lines;filtered_funcs;entry_points;critical_path;module_of;readable_id;get_cc)\n code2llm/exporters/project_yaml/modules.py:\n e: build_modules,group_by_file,compute_module_entry,compute_inbound_deps,build_exports,build_class_export,build_function_exports\n build_modules(result;line_counts)\n group_by_file(result)\n compute_module_entry(fpath;result;line_counts;file_funcs;file_classes)\n compute_inbound_deps(funcs;fpath;result)\n build_exports(funcs;classes;result)\n build_class_export(ci;result)\n build_function_exports(funcs;classes)\n code2llm/exporters/evolution/yaml_export.py:\n e: export_to_yaml\n export_to_yaml(result;output_path)\n code2llm/exporters/mermaid/flow_compact.py:\n e: should_skip_module,is_entry_point,build_callers_graph,find_leaves,_longest_path_dfs,_select_longest_path,find_critical_path,export_flow_compact\n should_skip_module(module;include_examples)\n is_entry_point(func_name;fi;result)\n build_callers_graph(result;name_index)\n find_leaves(result;name_index)\n _longest_path_dfs(result;start;visited;name_index)\n _select_longest_path(result;entry_points;name_index)\n find_critical_path(result;entry_points)\n export_flow_compact(result;output_path;include_examples)\n 
code2llm/exporters/toon/renderer.py:\n e: ToonRenderer\n ToonRenderer: render_header(1),_detect_language_label(0),render_health(1),render_refactor(1),render_coupling(1),_select_top_packages(2),_render_coupling_header(1),_render_coupling_rows(4),_build_coupling_row(3),_coupling_row_tag(1),_render_coupling_summary(2),render_layers(1),_render_layer_package(5),_render_layer_files(4),_format_layer_file_row(1),_render_zero_line_files(1),render_duplicates(1),render_functions(1),_format_function_row(1),_render_cc_summary(2),render_hotspots(1),render_classes(1),render_pipelines(1),_trace_pipeline(3),_calculate_purity(2),render_external(1) # Renders all sections for TOON export...\n code2llm/analysis/pipeline_resolver.py:\n e: PipelineResolver\n PipelineResolver: resolve(3),_strip_self_prefix(1),_try_same_class_resolution(3),_get_suffix_candidates(2),_select_same_class_candidate(3) # Resolves callee names to qualified function names...\n code2llm/core/toon_size_manager.py:\n e: get_file_size_kb,should_split_toon,split_toon_file,_parse_modules,_split_by_modules,_split_by_lines,_write_chunk,manage_toon_size\n get_file_size_kb(filepath)\n should_split_toon(filepath;max_kb)\n split_toon_file(source_file;output_dir;max_kb;prefix)\n _parse_modules(content)\n _split_by_modules(source_file;output_dir;modules;max_kb;prefix)\n _split_by_lines(source_file;output_dir;max_kb;prefix)\n _write_chunk(output_dir;prefix;chunk_num;content)\n manage_toon_size(source_file;output_dir;max_kb;prefix;verbose)\n code2llm/core/persistent_cache.py:\n e: PersistentCache,_pack,_unpack,get_all_projects,clear_all\n PersistentCache: __init__(2),content_hash(1),get_file_result(1),put_file_result(2),get_changed_files(1),get_export_cache_dir(1),create_export_cache_dir(1),mark_export_complete(1),save(0),cache_size_mb(0),gc(2),clear(0),_load_manifest(0),_compute_run_hash(1) # Content-addressed persistent cache stored in ~/.code2llm/.\n\n...\n _pack(obj)\n _unpack(data)\n get_all_projects(cache_root)\n 
clear_all(cache_root)\n code2llm/core/lang/go_lang.py:\n e: _analyze_go_regex,analyze_go\n _analyze_go_regex(content;file_path;module_name;stats)\n analyze_go(content;file_path;module_name;ext;stats)\n code2llm/exporters/flow_exporter.py:\n e: FlowExporter\n FlowExporter(BaseExporter): __init__(0),export(2),_build_context(1),_pipeline_to_dict(1),_compute_transforms(1),_transform_label(2),_compute_type_usage(2),_normalize_type(1),_type_label(3),_classify_side_effects(2),_compute_contracts(4),_build_stage_contract(4),_infer_invariant(2),_is_excluded(1) # Export to flow.toon — data-flow focused format.\n\nSections: P...\n code2llm/exporters/context_exporter.py:\n e: ContextExporter\n ContextExporter(BaseExporter): export(2),_get_overview(1),_detect_languages(0),_get_architecture_by_module(1),_get_important_entries(1),_get_key_entry_points(1),_get_process_flows(2),_get_key_classes(1),_get_data_transformations(1),_get_behavioral_patterns(1),_get_api_surface(1),_get_system_interactions(1),_group_calls_by_module(2),_format_sub_flow(3),_trace_flow(5) # Export LLM-ready analysis summary with architecture and flow...\n code2llm/exporters/evolution/render.py:\n e: render_header,render_next,render_risks,render_metrics_target,render_patterns,render_history\n render_header(ctx)\n render_next(ctx)\n render_risks(ctx)\n render_metrics_target(ctx)\n render_patterns(ctx)\n render_history(ctx;output_path)\n code2llm/exporters/evolution/computation.py:\n e: compute_func_data,scan_file_sizes,aggregate_file_stats,make_relative_path,filter_god_modules,compute_god_modules,compute_hub_types,build_context\n compute_func_data(result)\n scan_file_sizes(project_path)\n aggregate_file_stats(result;file_lines)\n make_relative_path(fpath;project_path)\n filter_god_modules(file_stats;project_path)\n compute_god_modules(result)\n compute_hub_types(result)\n build_context(result)\n code2llm/generators/llm_flow/generator.py:\n e: generate_llm_flow,render_llm_flow_md\n 
generate_llm_flow(analysis;max_functions;limit_decisions;limit_calls)\n render_llm_flow_md(flow)\n code2llm/nlp/pipeline.py:\n e: PipelineStage,NLPPipelineResult,NLPPipeline\n PipelineStage: # Single pipeline stage result...\n NLPPipelineResult: is_successful(0),get_intent(0),get_entities(0),to_dict(0) # Complete NLP pipeline result (4b-4e aggregation)...\n NLPPipeline: __init__(1),process(2),_step_normalize(2),_step_match_intent(1),_step_resolve_entities(3),_infer_entity_types(1),_calculate_overall_confidence(1),_calculate_entity_confidence(1),_apply_fallback(1),_format_action(1),_format_response(1),step_4a_orchestrate(1),step_4b_aggregate(1),step_4c_confidence(1),step_4d_fallback(1),step_4e_format(1) # Main NLP processing pipeline (4a-4e)...\n code2llm/analysis/type_inference.py:\n e: TypeInferenceEngine\n TypeInferenceEngine: __init__(1),enrich_function(1),get_arg_types(1),get_return_type(1),get_typed_signature(1),extract_all_types(1),_extract_from_node(2),_extract_args(1),_annotation_to_str(1),_ann_constant(1),_ann_name(1),_ann_attribute(1),_ann_subscript(1),_ann_tuple(1),_ann_binop(1),_infer_from_name(1),_infer_arg_type(1) # Extract and infer type information from Python source files...\n code2llm/core/refactoring.py:\n e: RefactoringAnalyzer\n RefactoringAnalyzer: __init__(2),perform_refactoring_analysis(1),_build_call_graph(1),_calculate_centrality(2),_detect_cycles(2),_detect_communities(2),_analyze_coupling(1),_detect_smells(1),_detect_dead_code(1),_map_dead_code_to_items(2),_mark_reachable_items(1) # Performs refactoring analysis on code...\n code2llm/core/file_filter.py:\n e: FastFileFilter\n FastFileFilter: __init__(2),should_skip_dir(1),_passes_gitignore(1),_passes_excludes(2),_passes_includes(1),should_process(1),_passes_line_count(1),_passes_visibility(3),should_skip_function(4) # Fast file filtering with pattern matching...\n code2llm/core/streaming/prioritizer.py:\n e: FilePriority,SmartPrioritizer\n FilePriority: # Priority scoring for file 
analysis order...\n SmartPrioritizer: __init__(1),prioritize_files(2),_build_import_graph(1),_check_has_main(1) # Smart file prioritization for optimal analysis order...\n code2llm/core/lang/rust.py:\n e: analyze_rust\n analyze_rust(content;file_path;module_name;ext;stats)\n code2llm/analysis/call_graph.py:\n e: CallGraphExtractor\n CallGraphExtractor(ast.NodeVisitor): __init__(1),extract(3),_calculate_metrics(0),visit_Import(1),visit_ImportFrom(1),visit_ClassDef(1),visit_FunctionDef(1),visit_AsyncFunctionDef(1),visit_Call(1),_resolve_call(1),_resolve_with_astroid(1),_expr_to_str(1) # Extract call graph from AST...\n code2llm/exporters/toon/__init__.py:\n e: ToonExporter\n ToonExporter(BaseExporter): __init__(0),export(2),export_to_yaml(2),_build_header_dict(1),_build_health_dict(1),_build_refactor_dict(1),_build_pipelines_dict(1),_build_layers_dict(1),_build_coupling_dict(1),_build_external_dict(1),_is_excluded(1) # Export to toon v2 plain-text format — scannable, sorted by s...\n code2llm/cli_exports/orchestrator_chunked.py:\n e: _export_chunked,_get_filtered_subprojects,_process_subproject\n _export_chunked(args;result;output_dir;source_path;formats;requested_formats)\n _get_filtered_subprojects(args;source_path)\n _process_subproject(args;sp;output_dir)\n code2llm/patterns/detector.py:\n e: PatternDetector\n PatternDetector: __init__(1),detect_patterns(1),_detect_recursion(1),_detect_state_machines(1),_detect_factory_pattern(1),_detect_singleton(1),_detect_strategy_pattern(1),_check_returns_classes(2) # Detect behavioral patterns in code...\n code2llm/core/large_repo.py:\n e: SubProject,HierarchicalRepoSplitter,should_use_chunking,get_analysis_plan\n SubProject: # Represents a sub-project within a larger repository...\n HierarchicalRepoSplitter: 
__init__(2),get_analysis_plan(1),_split_hierarchically(1),_merge_small_l1_dirs(2),_split_level2_consolidated(3),_categorize_subdirs(2),_process_large_dirs(3),_process_level1_files(2),_merge_small_dirs(3),_chunk_by_files(5),_collect_files_in_dir(2),_collect_files_recursive(2),_collect_root_files(1),_count_py_files(1),_contains_python_files(1),_should_skip_file(1),_calculate_priority(2),_get_level1_dirs(1) # Splits large repositories using hierarchical approach.\n\nStra...\n should_use_chunking(project_path;size_threshold_kb)\n get_analysis_plan(project_path;size_limit_kb)\n code2llm/analysis/pipeline_classifier.py:\n e: PipelineClassifier\n PipelineClassifier: __init__(1),classify_domain(2),derive_pipeline_name(3),get_entry_type(1),get_exit_type(1) # Classify pipelines by domain and derive human-readable names...\n code2llm/analysis/utils/ast_helpers.py:\n e: get_ast,find_function_node,ast_unparse,qualified_name,expr_to_str\n get_ast(filepath;registry)\n find_function_node(tree;name;line)\n ast_unparse(node;default_none)\n qualified_name(module_name;class_stack;name)\n expr_to_str(node)\n code2llm/core/repo_files.py:\n e: _get_gitignore_parser,should_skip_file,collect_files_in_dir,collect_root_files,count_py_files,contains_python_files,get_level1_dirs,calculate_priority\n _get_gitignore_parser(project_path)\n should_skip_file(file_str;project_path;gitignore_parser)\n collect_files_in_dir(dir_path;project_path)\n collect_root_files(project_path)\n count_py_files(path)\n contains_python_files(dir_path)\n get_level1_dirs(project_path)\n calculate_priority(name;level)\n code2llm/core/models.py:\n e: BaseModel,FlowNode,FlowEdge,FunctionInfo,ClassInfo,ModuleInfo,Pattern,CodeSmell,Mutation,DataFlow,AnalysisResult\n BaseModel: to_dict(1),_filter_compact(1) # Base class for models with automated serialization...\n FlowNode(BaseModel): # Represents a node in the control flow graph...\n FlowEdge(BaseModel): # Represents an edge in the control flow graph...\n 
FunctionInfo(BaseModel): # Information about a function/method...\n ClassInfo(BaseModel): # Information about a class...\n ModuleInfo(BaseModel): # Information about a module/package...\n Pattern(BaseModel): # Detected behavioral pattern...\n CodeSmell(BaseModel): # Represents a detected code smell...\n Mutation(BaseModel): # Represents a mutation of a variable/object...\n DataFlow(BaseModel): # Represents data flow for a variable...\n AnalysisResult(BaseModel): get_function_count(0),get_class_count(0),get_node_count(0),get_edge_count(0) # Complete analysis result for a project...\n code2llm/core/lang/php.py:\n e: _parse_php_metadata,_adjust_qualified_names,_extract_php_traits,analyze_php\n _parse_php_metadata(content;module_name;result)\n _adjust_qualified_names(result;module_name;namespace)\n _extract_php_traits(content;file_path;module_name;namespace;result;stats)\n analyze_php(content;file_path;module_name;ext;stats)\n code2llm/exporters/yaml_exporter.py:\n e: YAMLExporter\n YAMLExporter(BaseExporter): __init__(0),_get_name_index(1),export(4),export_grouped(2),export_data_flow(3),export_data_structures(3),export_separated(3),export_split(3),export_calls(4),_collect_edges(3),_process_function_calls(8),_should_add_edge(2),_create_edge(2),_build_nodes(2),_create_node(3),_compute_calls_in_counts(1),_group_by_module(1),_build_calls_data(4),_resolve_callee(2),_get_cc(0),export_calls_toon(4),_render_calls_header(4),_render_hubs(1),_render_modules(3),_render_edges(1) # Export to YAML format...\n code2llm/exporters/toon/metrics_duplicates.py:\n e: DuplicatesMetricsComputer\n DuplicatesMetricsComputer: __init__(1),detect_duplicates(1),_check_class_for_duplicates(5),_calculate_duplicate_info(7) # Detects duplicate classes in the codebase...\n code2llm/exporters/toon/helpers.py:\n e: _rel_path,_package_of,_package_of_module,_traits_from_cfg,_dup_file_set,_hotspot_description,_scan_line_counts\n _rel_path(fpath;project_path)\n _package_of(rel_path)\n 
_package_of_module(module_name)\n _traits_from_cfg(fi;result)\n _dup_file_set(ctx)\n _hotspot_description(fi;fan_out)\n _scan_line_counts(project_path)\n code2llm/exporters/map/alerts.py:\n e: build_alerts,build_hotspots,load_evolution_trend,_read_previous_cc_avg\n build_alerts(funcs)\n build_hotspots(funcs)\n load_evolution_trend(evolution_path;current_cc)\n _read_previous_cc_avg(evolution_path)\n code2llm/exporters/map/header.py:\n e: render_header,_render_stats_line,_render_alerts_line,_render_hotspots_line\n render_header(result;output_path;is_excluded_path)\n _render_stats_line(funcs;files;total_lines;lang_str)\n _render_alerts_line(funcs)\n _render_hotspots_line(funcs)\n code2llm/exporters/map/utils.py:\n e: rel_path,file_line_count,count_total_lines,detect_languages\n rel_path(fpath;project_path)\n file_line_count(fpath)\n count_total_lines(result;is_excluded_path)\n detect_languages(result;is_excluded_path)\n code2llm/exporters/map/yaml_export.py:\n e: export_to_yaml,_build_module_entry,_build_module_exports,_build_module_classes_data,_build_module_functions_data\n export_to_yaml(result;output_path;is_excluded_path)\n _build_module_entry(mi;result;is_excluded_path)\n _build_module_exports(mi;result)\n _build_module_classes_data(mi;result)\n _build_module_functions_data(mi;result)\n code2llm/exporters/mermaid/classic.py:\n e: export_classic,_render_subgraphs,_render_edges,_render_cc_styles\n export_classic(result;output_path)\n _render_subgraphs(result;lines)\n _render_edges(result;lines;name_index;limit)\n _render_cc_styles(result;lines)\n code2llm/generators/mermaid/png.py:\n e: _is_png_fresh,_prepare_and_render,generate_pngs,_setup_puppeteer_config,_build_renderers,_run_mmdc_subprocess,generate_single_png,generate_with_puppeteer\n _is_png_fresh(mmd_file;output_dir)\n _prepare_and_render(mmd_file;output_dir;timeout)\n generate_pngs(input_dir;output_dir;timeout;max_workers)\n _setup_puppeteer_config()\n _build_renderers(mmd_file;output_file;cfg_path)\n 
_run_mmdc_subprocess(renderers;mmd_file;output_file;timeout;max_text_size;max_edges)\n generate_single_png(mmd_file;output_file;timeout)\n generate_with_puppeteer(mmd_file;output_file;timeout;max_text_size;max_edges)\n code2llm/generators/mermaid/fix.py:\n e: _sanitize_label_text,_sanitize_node_id,fix_mermaid_file,_fix_edge_line,_fix_edge_label_pipes,_fix_subgraph_line,_fix_class_line\n _sanitize_label_text(txt)\n _sanitize_node_id(node_id)\n fix_mermaid_file(mmd_path)\n _fix_edge_line(line)\n _fix_edge_label_pipes(line)\n _fix_subgraph_line(line)\n _fix_class_line(line)\n code2llm/parsers/toon_parser.py:\n e: _parse_header_line,_parse_stats_line,_parse_health_line,_parse_functions_line,_parse_classes_line,_parse_hotspots_line,_detect_section,parse_toon_content,is_toon_file,load_toon\n _parse_header_line(line;data)\n _parse_stats_line(line;data)\n _parse_health_line(line_stripped;data)\n _parse_functions_line(line_stripped;data)\n _parse_classes_line(line_stripped;data)\n _parse_hotspots_line(line_stripped;data)\n _detect_section(line)\n parse_toon_content(content)\n is_toon_file(filepath)\n load_toon(filepath)\n validate_toon.py:\n e: load_yaml,load_file,extract_functions_from_yaml,_extract_names_from_toon,extract_functions_from_toon,_extract_keys_from_yaml,extract_classes_from_yaml,extract_classes_from_toon,analyze_class_differences,extract_modules_from_yaml,extract_modules_from_toon,compare_basic_stats,compare_functions,compare_classes,compare_modules,validate_toon_completeness,_run_single_file_mode,_run_comparison_mode,_compare_all_aspects,_print_comparison_summary,main\n load_yaml(filepath)\n load_file(filepath)\n extract_functions_from_yaml(yaml_data)\n _extract_names_from_toon(toon_data;key)\n extract_functions_from_toon(toon_data)\n _extract_keys_from_yaml(yaml_data;key)\n extract_classes_from_yaml(yaml_data)\n extract_classes_from_toon(toon_data)\n analyze_class_differences(yaml_data;toon_data)\n extract_modules_from_yaml(yaml_data)\n 
extract_modules_from_toon(toon_data)\n compare_basic_stats(yaml_data;toon_data)\n compare_functions(yaml_data;toon_data)\n compare_classes(yaml_data;toon_data)\n compare_modules(yaml_data;toon_data)\n validate_toon_completeness(toon_data)\n _run_single_file_mode(file_path)\n _run_comparison_mode(yaml_path;toon_path)\n _compare_all_aspects(yaml_data;toon_data)\n _print_comparison_summary(results)\n main()\n examples/streaming-analyzer/sample_project/utils.py:\n e: validate_input,format_output,calculate_metrics,filter_data,transform_data\n validate_input(data)\n format_output(data)\n calculate_metrics(data)\n filter_data(data;criteria)\n transform_data(data;transformations)\n benchmarks/benchmark_optimizations.py:\n e: clear_caches,run_analysis,benchmark_cold_vs_warm,print_summary,main\n clear_caches(project_path)\n run_analysis(project_path;config)\n benchmark_cold_vs_warm(project_path;runs)\n print_summary(results)\n main()\n code2llm/cli.py:\n e: main\n main()\n code2llm/analysis/coupling.py:\n e: CouplingAnalyzer\n CouplingAnalyzer: __init__(1),analyze(0),_analyze_module_interactions(0),_detect_data_leakage(0),_detect_shared_state(0) # Analyze coupling between modules...\n code2llm/analysis/smells.py:\n e: SmellDetector\n SmellDetector: __init__(1),detect(0),_detect_god_functions(0),_detect_god_modules(0),_detect_feature_envy(0),_detect_data_clumps(0),_detect_shotgun_surgery(0),_detect_bottlenecks(0),_detect_circular_dependencies(0) # Detect code smells from analysis results...\n code2llm/core/gitignore.py:\n e: _GitIgnoreEntry,GitIgnoreParser,load_gitignore_patterns\n _GitIgnoreEntry: __init__(3) # Single parsed gitignore rule...\n GitIgnoreParser: __init__(1),_load_gitignore(1),_parse_entry(1),_pattern_to_regex(1),is_ignored(2) # Parse and apply .gitignore patterns to file paths...\n load_gitignore_patterns(project_path)\n code2llm/core/lang/ts_extractors.py:\n e: _get_node_text,_find_name_node,_extract_functions_ts,_extract_classes_ts,extract_declarations_ts\n 
_get_node_text(node;source_bytes)\n _find_name_node(node)\n _extract_functions_ts(tree;source_bytes;lang;module_name;file_path)\n _extract_classes_ts(tree;source_bytes;lang;module_name;file_path)\n extract_declarations_ts(tree;source_bytes;ext;file_path;module_name)\n code2llm/core/lang/ts_parser.py:\n e: TreeSitterParser,_init_tree_sitter,_get_language,_get_parser,get_parser,parse_source,is_available\n TreeSitterParser: __init__(0),parse(2),supports(1) # Unified tree-sitter parser for all supported languages.\n\nUsa...\n _init_tree_sitter()\n _get_language(ext)\n _get_parser(ext)\n get_parser()\n parse_source(content;ext)\n is_available()\n code2llm/nlp/intent_matching.py:\n e: IntentMatch,IntentMatchingResult,IntentMatcher\n IntentMatch: # Single intent match result...\n IntentMatchingResult: get_best_intent(0),get_confidence(0) # Result of intent matching...\n IntentMatcher: __init__(2),match(2),_fuzzy_match(1),_keyword_match(1),_apply_context(3),_combine_matches(1),_resolve_multi_intent(1),_calculate_similarity(2),step_2a_fuzzy_match(2),step_2b_semantic_match(2),step_2c_keyword_match(2),step_2d_context_score(2),step_2e_resolve_intents(1) # Match queries to intents using fuzzy and keyword matching...\n code2llm/exporters/article_view.py:\n e: ArticleViewGenerator\n ArticleViewGenerator(ViewGeneratorMixin): _render(1),_render_frontmatter(0),_render_health_summary(1),_render_alerts(0),_render_hotspots(0),_render_roadmap(0),_render_evolution(0),_render_footer(-1) # Generate status.md — publishable project health article...\n code2llm/exporters/dashboard_data.py:\n e: DashboardDataBuilder\n DashboardDataBuilder: health_verdict(0),build_evolution_section(0),build_language_breakdown(0),build_module_lines_chart(0),build_module_funcs_chart(0),build_top_modules_html(0),build_alerts_html(0),build_hotspots_html(0),build_refactoring_html(0) # Build dashboard data structures from project analysis result...\n code2llm/exporters/readme/sections.py:\n e: 
build_core_files_section,build_llm_files_section,build_viz_files_section\n build_core_files_section(existing;insights)\n build_llm_files_section(existing)\n build_viz_files_section(existing)\n code2llm/exporters/toon/metrics_health.py:\n e: HealthMetricsComputer\n HealthMetricsComputer: __init__(0),compute_health(1),_check_duplicates_health(2),_check_god_modules_health(2),_check_smells_health(2),_check_high_cc_health(2) # Computes health issues and quality alerts...\n code2llm/exporters/toon/module_detail.py:\n e: ModuleDetailRenderer\n ModuleDetailRenderer: render_details(1),_rank_modules_by_cc(1),_render_module_detail(4),_get_module_exports(2),_render_module_classes(4),_get_method_items(2),_find_root_method(1),_render_standalone_funcs(3),_render_call_chain(5) # Renders detailed module information...\n code2llm/generators/llm_flow/nodes.py:\n e: _collect_nodes,_group_nodes_by_file,_is_entrypoint_file,_extract_entrypoint_info,_deduplicate_entrypoints,_collect_entrypoints,_collect_functions\n _collect_nodes(analysis)\n _group_nodes_by_file(nodes)\n _is_entrypoint_file(filepath)\n _extract_entrypoint_info(node;filepath)\n _deduplicate_entrypoints(entrypoints)\n _collect_entrypoints(nodes)\n _collect_functions(nodes)\n code2llm/analysis/dfg.py:\n e: DFGExtractor\n DFGExtractor(ast.NodeVisitor): __init__(1),extract(3),visit_FunctionDef(1),visit_Assign(1),visit_AugAssign(1),visit_For(1),visit_Call(1),_extract_targets(1),_get_names(1),_extract_names(1),_expr_to_str(1),_build_data_flow_edges(0) # Extract Data Flow Graph from AST...\n benchmarks/reporting.py:\n e: _print_header,_print_scores_table,_print_problems_detail,_print_pipelines_detail,_print_structural_features,_print_gap_analysis,print_results,build_report,save_report\n _print_header()\n _print_scores_table(scores)\n _print_problems_detail(scores)\n _print_pipelines_detail(scores)\n _print_structural_features(scores)\n _print_gap_analysis(scores)\n print_results(scores)\n build_report(scores)\n 
save_report(report;filename)\n examples/streaming-analyzer/sample_project/main.py:\n e: UserRequest,Application,main\n UserRequest: # User request data structure...\n Application: __init__(0),start(0),get_next_request(0),process_request(1),handle_get_request(1),handle_set_request(1),handle_delete_request(1),handle_default_request(1) # Main application class with multiple responsibilities...\n main()\n examples/functional_refactoring/cli.py:\n e: generate\n generate(query;intent;dry_run;cache_dir)\n code2llm/core/__init__.py:\n e: __getattr__\n __getattr__(name)\n code2llm/nlp/normalization.py:\n e: NormalizationResult,QueryNormalizer\n NormalizationResult: # Result of query normalization...\n QueryNormalizer: __init__(1),normalize(2),_unicode_normalize(1),_lowercase(1),_remove_punctuation(1),_normalize_whitespace(1),_remove_stopwords(2),_tokenize(1),step_1a_lowercase(1),step_1b_remove_punctuation(1),step_1c_normalize_whitespace(1),step_1d_unicode_normalize(1),step_1e_remove_stopwords(2) # Normalize queries for consistent processing...\n code2llm/exporters/dashboard_renderer.py:\n e: DashboardRenderer\n DashboardRenderer: render(17),_assemble_html(0),_render_evolution_section(0),_render_evolution_script(0) # Render HTML dashboard from prepared data structures...\n code2llm/exporters/flow_constants.py:\n e: is_excluded_path\n is_excluded_path(path)\n code2llm/exporters/toon_view.py:\n e: ToonViewGenerator\n ToonViewGenerator(ViewGeneratorMixin): _render(1),_render_header(0),_render_health(0),_render_alerts(0),_render_modules(0),_render_hotspots(0),_render_refactoring(0),_render_evolution(0) # Generate project.toon.yaml from project.yaml data...\n benchmarks/benchmark_performance.py:\n e: save_report,create_test_project,benchmark_original_analyzer,benchmark_streaming_analyzer,benchmark_with_strategies,print_comparison,main\n save_report(results;filename)\n create_test_project(size)\n benchmark_original_analyzer(project_path;runs)\n 
benchmark_streaming_analyzer(project_path;runs)\n benchmark_with_strategies(project_path)\n print_comparison(original;streaming)\n main()\n code2llm/exporters/project_yaml/evolution.py:\n e: build_evolution,load_previous_evolution\n build_evolution(health;total_lines;prev_evolution)\n load_previous_evolution(output_path)\n code2llm/exporters/mermaid/utils.py:\n e: readable_id,safe_module,_sanitize_identifier,module_of,build_name_index,resolve_callee,write_file,get_cc\n readable_id(name)\n safe_module(name)\n _sanitize_identifier(name;prefix)\n module_of(func_name)\n build_name_index(funcs)\n resolve_callee(callee;funcs;name_index)\n write_file(path;lines)\n get_cc(fi)\n code2llm/cli_exports/code2logic.py:\n e: _export_code2logic,_should_run_code2logic,_check_code2logic_installed,_build_code2logic_cmd,_run_code2logic,_handle_code2logic_error,_find_code2logic_output,_normalize_code2logic_output\n _export_code2logic(args;source_path;output_dir;formats)\n _should_run_code2logic(formats)\n _check_code2logic_installed()\n _build_code2logic_cmd(args;source_path;output_dir)\n _run_code2logic(cmd;verbose)\n _handle_code2logic_error(res;cmd)\n _find_code2logic_output(output_dir;res)\n _normalize_code2logic_output(found;target;args)\n code2llm/cli_exports/orchestrator_handlers.py:\n e: _export_mermaid,_export_mermaid_pngs,_export_calls,_export_context_fallback,_export_data_structures,_export_project_toon,_export_readme,_export_index_html\n _export_mermaid(args;result;output_dir)\n _export_mermaid_pngs(args;output_dir)\n _export_calls(args;result;output_dir;formats)\n _export_context_fallback(args;result;output_dir)\n _export_data_structures(args;result;output_dir)\n _export_project_toon(args;result;output_dir)\n _export_readme(args;result;output_dir)\n _export_index_html(args;output_dir)\n examples/streaming-analyzer/sample_project/database.py:\n e: DatabaseConnection\n DatabaseConnection: 
__init__(1),_load_data(0),_save_data(0),get_user(1),get_user_settings(1),get_user_logs(1),update_user_settings(2),update_user_profile(2),delete_user(1),clear_user_data(1),create_user(1),_log_action(3),get_stats(0) # Simple database connection simulator...\n examples/streaming-analyzer/demo.py:\n e: demo_quick_strategy,demo_standard_strategy,demo_deep_strategy,demo_incremental_analysis,demo_memory_limited,demo_custom_progress,main\n demo_quick_strategy()\n demo_standard_strategy()\n demo_deep_strategy()\n demo_incremental_analysis()\n demo_memory_limited()\n demo_custom_progress()\n main()\n examples/streaming-analyzer/sample_project/api.py:\n e: APIHandler\n APIHandler: __init__(0),process_request(1),_check_rate_limit(1),_get_stats(0),_get_user_info(1),_health_check(0),format_response(1) # Handles API requests and responses...\n code2llm/core/ast_registry.py:\n e: ASTRegistry\n ASTRegistry: __init__(0),get_global(0),reset_global(0),get_ast(1),get_source(1),invalidate(1),clear(0),__len__(0),__repr__(0) # Parse each file exactly once; share the AST across all analy...\n code2llm/core/file_cache.py:\n e: FileCache,make_cache_key\n FileCache: __init__(2),_get_cache_key_stat(1),_get_cache_key(2),_get_cache_path(1),get(2),put(3),get_fast(1),put_fast(2),clear(0) # Cache for parsed AST files...\n make_cache_key(file_path;content)\n code2llm/core/streaming/incremental.py:\n e: IncrementalAnalyzer\n IncrementalAnalyzer: __init__(1),_load_state(0),_save_state(1),get_changed_files(1),_get_module_name(2) # Incremental analysis with change detection...\n code2llm/exporters/readme_exporter.py:\n e: READMEExporter\n READMEExporter(BaseExporter): export(2) # Export README.md with documentation of all generated files...\n benchmarks/format_evaluator.py:\n e: FormatScore,_detect_problems,_detect_pipelines,_detect_hub_types,_check_structural_features,evaluate_format\n FormatScore: # Result of evaluating a single format...\n _detect_problems(content)\n _detect_pipelines(content)\n 
_detect_hub_types(content)\n _check_structural_features(content)\n evaluate_format(name;content;path)\n code2llm/exporters/toon/metrics.py:\n e: MetricsComputer\n MetricsComputer: __init__(0),compute_all_metrics(1),_compute_hotspots(1),_get_cycles(1) # Computes all metrics for TOON export.\n\nOrchestrates speciali...\n code2llm/analysis/cfg.py:\n e: CFGExtractor\n CFGExtractor(ast.NodeVisitor): __init__(1),extract(3),new_node(2),connect(4),visit_FunctionDef(1),visit_AsyncFunctionDef(1),visit_If(1),visit_For(1),visit_While(1),visit_Try(1),visit_Assign(1),visit_Return(1),visit_Expr(1),_extract_condition(1),_expr_to_str(1),_format_except(1) # Extract Control Flow Graph from AST...\n code2llm/exporters/evolution/exclusion.py:\n e: is_excluded\n is_excluded(path)\n code2llm/exporters/index_generator/scanner.py:\n e: FileScanner,get_file_types,get_default_file_info\n FileScanner: __init__(1),scan(0),_read_file_content(2),_escape_html(1),_format_size(1) # Scan output directory and collect file metadata...\n get_file_types()\n get_default_file_info(ext)\n code2llm/generators/llm_flow/parsing.py:\n e: _parse_call_label,_parse_func_label\n _parse_call_label(label)\n _parse_func_label(label)\n examples/functional_refactoring/entity_preparers.py:\n e: EntityPreparer,ShellEntityPreparer,DockerEntityPreparer,SQLEntityPreparer,KubernetesEntityPreparer,EntityPreparationPipeline\n EntityPreparer(Protocol): supports(1),prepare(3) # Protocol for domain-specific entity preparation...\n ShellEntityPreparer: supports(1),prepare(3),_apply_path_defaults(4),_apply_pattern_defaults(2),_apply_find_flags(3) # Prepares entities for shell commands...\n DockerEntityPreparer: supports(1),prepare(3),_resolve_container_name(1) # Prepares entities for docker commands...\n SQLEntityPreparer: supports(1),prepare(3),_sanitize_identifier(1),_sanitize_columns(1) # Prepares entities for SQL commands...\n KubernetesEntityPreparer: supports(1),prepare(3) # Prepares entities for kubernetes commands...\n 
EntityPreparationPipeline: __init__(0),prepare(3) # Coordinates entity preparation across domains...\n examples/functional_refactoring/cache.py:\n e: CacheEntry,EvolutionaryCache\n CacheEntry: # Single cache entry with evolution metadata...\n EvolutionaryCache: __init__(2),_load(0),_save(0),get(2),put(3),report_success(2),report_failure(2),_make_key(2),_calculate_score(1),_evict_worst(0) # Cache that evolves based on usage patterns.\n\nUnlike simple L...\n examples/litellm/run.py:\n e: run_analysis,get_refactoring_advice,main\n run_analysis(project_path)\n get_refactoring_advice(outputs;model)\n main()\n examples/functional_refactoring/generator.py:\n e: CommandGenerator\n CommandGenerator: __init__(3),generate(3) # Generates commands from natural language intents...\n scripts/bump_version.py:\n e: get_current_version,parse_version,format_version,bump_version,update_pyproject_toml,update_version_file,main\n get_current_version()\n parse_version(version_str)\n format_version(major;minor;patch)\n bump_version(version_type)\n update_pyproject_toml(new_version)\n update_version_file(new_version)\n main()\n badges/server.py:\n e: index,generate_badges,get_badges\n index()\n generate_badges()\n get_badges()\n examples/functional_refactoring_example.py:\n e: TemplateGenerator\n TemplateGenerator: __init__(0),generate(3),_load_templates_from_json(0),_load_defaults_from_json(0),_prepare_shell_entities(0),_prepare_docker_entities(0),_prepare_sql_entities(0),_find_alternative_template(0),_render_template(0) # Original - handles EVERYTHING: loading, matching, rendering,...\n benchmarks/benchmark_format_quality.py:\n e: _print_benchmark_header,_print_ground_truth_info,_generate_format_outputs,_create_offline_scores,run_benchmark\n _print_benchmark_header()\n _print_ground_truth_info(project_path)\n _generate_format_outputs(result;output_dir)\n _create_offline_scores()\n run_benchmark()\n code2llm/core/incremental.py:\n e: IncrementalAnalyzer,_file_signature\n 
IncrementalAnalyzer: __init__(1),needs_analysis(1),get_cached_result(1),update(2),invalidate(1),save(0),clear(0),_load_cache(0),_normalize_key(1) # Track file signatures to skip unchanged files on subsequent ...\n _file_signature(filepath)\n code2llm/core/export_pipeline.py:\n e: SharedExportContext,ExportPipeline\n SharedExportContext: __init__(1),_compute_metrics_summary(0),_compute_cc_distribution(0) # Pre-computed context shared across all exporters.\n\nLazy-comp...\n ExportPipeline: __init__(1),run(2) # Run multiple exporters with a single shared context.\n\nUsage:...\n code2llm/core/streaming/cache.py:\n e: StreamingFileCache\n StreamingFileCache: __init__(2),_get_cache_key(2),_evict_if_needed(0),get(2),put(3) # Memory-efficient cache with LRU eviction...\n code2llm/exporters/map/module_list.py:\n e: render_module_list\n render_module_list(result;is_excluded_path)\n test_langs/valid/sample.py:\n e: Product,ProductRepository,main\n Product:\n ProductRepository: __init__(0),add(1),find_by_id(1),list_all(0)\n main()\n code2llm/__init__.py:\n e: __getattr__\n __getattr__(name)\n test_python_only/valid/sample.py:\n e: User,UserService,main\n User:\n UserService: __init__(0),add_user(1),get_user(1),process_users(0)\n main()\n examples/streaming-analyzer/sample_project/auth.py:\n e: AuthManager\n AuthManager: __init__(0),_hash(1),authenticate(2),_verify_password(2),create_session(1),validate_session(1),revoke_session(1),get_user_role(1),has_permission(3),list_active_sessions(0) # Manages user authentication and authorization...\n code2llm/exporters/json_exporter.py:\n e: JSONExporter\n JSONExporter(BaseExporter): export(4) # Export to JSON format...\n code2llm/generators/llm_flow/cli.py:\n e: create_parser,main\n create_parser()\n main(argv)\n demo_langs/valid/sample.py:\n e: User,UserService,main\n User:\n UserService: __init__(0),add_user(1),get_user(1),process_users(0),list_users(0),remove_user(1),count(0)\n main()\n 
examples/functional_refactoring/template_engine.py:\n e: Template,TemplateLoader,TemplateRenderer\n Template: # Command template...\n TemplateLoader: __init__(1),load(0),_load_templates(0),_load_defaults(0),get_template(1),get_default(2),find_alternative_template(2) # Loads templates from various sources...\n TemplateRenderer: render(2),_manual_render(2),render_with_conditionals(2) # Renders templates with entity substitution...\n code2llm/core/config.py:\n e: AnalysisMode,PerformanceConfig,FilterConfig,DepthConfig,OutputConfig,Config,_get_optimal_workers\n AnalysisMode(str,Enum): # Available analysis modes...\n PerformanceConfig: get_workers(0) # Performance optimization settings...\n FilterConfig: # Filtering options to reduce analysis scope...\n DepthConfig: # Depth limiting for control flow analysis...\n OutputConfig: # Output formatting options...\n Config: # Analysis configuration with performance optimizations...\n _get_optimal_workers(default;max_per_gb)\n code2llm/api.py:\n e: analyze,analyze_file\n analyze(project_path;config)\n analyze_file(file_path;config)\n code2llm/cli_parser.py:\n e: get_version,create_parser\n get_version()\n create_parser()\n setup.py:\n e: read_version,read_readme\n read_version()\n read_readme()\n code2llm/analysis/__init__.py:\n e: __getattr__\n __getattr__(name)\n code2llm/exporters/base.py:\n e: BaseExporter,ViewGeneratorMixin,export_format,get_exporter,list_exporters\n BaseExporter(ABC): export(2),generate(2),_ensure_dir(1),_write_text(2) # Abstract base class for all code2llm exporters.\n\nAll exporte...\n ViewGeneratorMixin: generate(2) # Mixin providing the shared ``generate`` implementation for v...\n export_format(name;description;extension;supports_project_yaml)\n get_exporter(name)\n list_exporters()\n code2llm/exporters/readme/files.py:\n e: get_existing_files\n get_existing_files(output_dir)\n test_langs/invalid/sample_bad.java:\n e: User,UserService\n User: User(-1)\n UserService: addUser(-1)\n 
benchmarks/project_generator.py:\n e: create_core_py,create_etl_py,create_validation_py,create_utils_py,add_validator_to_core,create_ground_truth_project\n create_core_py(project)\n create_etl_py(project)\n create_validation_py(project)\n create_utils_py(project)\n add_validator_to_core(project)\n create_ground_truth_project(base_dir)\n code2llm/core/lang/cpp.py:\n e: analyze_cpp\n analyze_cpp(content;file_path;module_name;ext;stats)\n code2llm/core/lang/__init__.py:\n e: LanguageParser,register_language,get_parser,list_parsers\n LanguageParser(ABC): analyze(4),can_parse(1) # Abstract base class for language-specific parsers.\n\nAll lang...\n register_language()\n get_parser(extension)\n list_parsers()\n code2llm/core/lang/csharp.py:\n e: analyze_csharp\n analyze_csharp(content;file_path;module_name;ext;stats)\n code2llm/core/lang/java.py:\n e: analyze_java\n analyze_java(content;file_path;module_name;ext;stats)\n code2llm/core/lang/typescript.py:\n e: get_typescript_patterns,get_typescript_lang_config,analyze_typescript_js\n get_typescript_patterns()\n get_typescript_lang_config()\n analyze_typescript_js(content;file_path;module_name;ext;stats)\n code2llm/exporters/map_exporter.py:\n e: MapExporter\n MapExporter(BaseExporter): export(2),export_to_yaml(2) # Export to map.toon.yaml — structural map with a compact proj...\n code2llm/exporters/evolution_exporter.py:\n e: EvolutionExporter\n EvolutionExporter(BaseExporter): _is_excluded(1),export(2),export_to_yaml(2) # Export evolution.toon.yaml — prioritized refactoring queue...\n code2llm/exporters/html_dashboard.py:\n e: HTMLDashboardGenerator\n HTMLDashboardGenerator: __init__(0),generate(2),_render(1) # Generate dashboard.html from project.yaml data.\n\nOrchestrate...\n code2llm/exporters/index_generator/__init__.py:\n e: IndexHTMLGenerator,generate_index_html\n IndexHTMLGenerator: __init__(1),generate(0),scan_files(0),render_html(1) # Generate index.html for browsing all generated files.\n\nThis ...\n 
generate_index_html(output_dir)\n code2llm/exporters/readme/content.py:\n e: generate_readme_content\n generate_readme_content(project_path;output_dir;total_functions;total_classes;total_modules;insights;core_files_section;llm_files_section;viz_files_section)\n code2llm/exporters/index_generator/renderer.py:\n e: HTMLRenderer\n HTMLRenderer: render(1) # Render the index.html page with CSS and JavaScript...\n code2llm/exporters/mermaid/flow_detailed.py:\n e: export_flow_detailed\n export_flow_detailed(result;output_path;include_examples)\n code2llm/generators/_utils.py:\n e: dump_yaml\n dump_yaml(data)\n code2llm/exporters/mermaid/flow_full.py:\n e: export_flow_full\n export_flow_full(result;output_path;include_examples)\n code2llm/nlp/config.py:\n e: NormalizationConfig,IntentMatchingConfig,EntityResolutionConfig,MultilingualConfig,NLPConfig\n NormalizationConfig: # Configuration for query normalization...\n IntentMatchingConfig: # Configuration for intent matching...\n EntityResolutionConfig: # Configuration for entity resolution...\n MultilingualConfig: # Configuration for multilingual processing...\n NLPConfig: from_yaml(1),to_yaml(1) # Main NLP pipeline configuration...\n orchestrator.sh:\n project2.sh:\n project.sh:\n benchmarks/benchmark_constants.py:\n test_python_only/valid/__init__.py:\n test_python_only/invalid/__init__.py:\n code2llm/__main__.py:\n examples/streaming-analyzer/sample_project/__init__.py:\n examples/functional_refactoring/__init__.py:\n code2llm/analysis/utils/__init__.py:\n code2llm/core/streaming/__init__.py:\n examples/functional_refactoring/models.py:\n e: CommandContext,CommandResult\n CommandContext: # Context for command generation...\n CommandResult: # Result of command generation...\n code2llm/nlp/__init__.py:\n code2llm/exporters/project_yaml_exporter.py:\n code2llm/exporters/mermaid_exporter.py:\n e: MermaidExporter\n MermaidExporter(BaseExporter): # Export call graph to Mermaid format...\n code2llm/exporters/__init__.py:\n 
code2llm/core/streaming/strategies.py:\n e: ScanStrategy\n ScanStrategy: # Scanning methodology configuration...\n code2llm/exporters/llm_exporter.py:\n code2llm/exporters/readme/__init__.py:\n code2llm/exporters/project_yaml/__init__.py:\n code2llm/exporters/project_yaml/constants.py:\n code2llm/exporters/evolution/__init__.py:\n code2llm/exporters/evolution/constants.py:\n code2llm/exporters/map/__init__.py:\n code2llm/exporters/mermaid/__init__.py:\n code2llm/generators/__init__.py:\n code2llm/generators/mermaid/__init__.py:\n code2llm/generators/llm_flow/__init__.py:\n code2llm/cli_exports/__init__.py:\n code2llm/cli_exports/orchestrator_constants.py:\n code2llm/refactor/__init__.py:\n code2llm/patterns/__init__.py:\n```",
"level": 2
},
{
"name": "source map",
"type": "unknown",
"content": "*Top 5 modules by symbol density — signatures for LLM orientation.*",
"level": 2
},
{
"name": "# `code2llm.cli_commands` (`code2llm/cli_commands.py`)",
"type": "unknown",
"content": "```python\ndef handle_special_commands() # CC=9, fan=5\ndef handle_cache_command(args_list) # CC=12, fan=17 ⚠\ndef handle_report_command(args_list) # CC=4, fan=9\ndef validate_and_setup(args) # CC=3, fan=6\ndef print_start_info(args, source_path, output_dir) # CC=2, fan=1\ndef validate_chunked_output(output_dir, args) # CC=3, fan=6\ndef _get_chunk_dirs(output_dir) # CC=3, fan=2\ndef _validate_chunks(chunk_dirs, required_files) # CC=3, fan=7\ndef _validate_single_chunk(chunk_dir, required_files) # CC=4, fan=3\ndef _get_file_sizes(chunk_dir, required_files) # CC=3, fan=3\ndef _print_chunk_errors(chunk_name, chunk_issues) # CC=2, fan=1\ndef _print_validation_summary(chunk_dirs, valid_chunks, issues) # CC=3, fan=2\ndef generate_llm_context(args_list) # CC=3, fan=12\n```",
"level": 2
},
{
"name": "# `code2llm.cli_analysis` (`code2llm/cli_analysis.py`)",
"type": "unknown",
"content": "```python\ndef _run_analysis(args, source_path, output_dir) # CC=5, fan=4\ndef _run_standard_analysis(args, source_path, output_dir) # CC=5, fan=8\ndef _build_config(args, output_dir) # CC=9, fan=9\ndef _print_analysis_summary(result) # CC=1, fan=2\ndef _run_chunked_analysis(args, source_path, output_dir) # CC=3, fan=8\ndef _print_chunked_plan(subprojects) # CC=4, fan=5\ndef _filter_subprojects(args, subprojects) # CC=10, fan=4 ⚠\ndef _analyze_all_subprojects(args, subprojects, output_dir) # CC=4, fan=8\ndef _analyze_subproject(args, subproject, output_dir) # CC=14, fan=16 ⚠\ndef _merge_chunked_results(all_results, source_path) # CC=9, fan=5\ndef _run_streaming_analysis(args, config, source_path) # CC=7, fan=9\n```",
"level": 2
},
{
"name": "# `code2llm.api` (`code2llm/api.py`)",
"type": "unknown",
"content": "```python\ndef analyze(project_path, config) # CC=2, fan=2\ndef analyze_file(file_path, config) # CC=1, fan=4\n```",
"level": 2
},
{
"name": "# `code2llm.cli_parser` (`code2llm/cli_parser.py`)",
"type": "unknown",
"content": "```python\ndef get_version() # CC=2, fan=5\ndef create_parser() # CC=1, fan=5\n```",
"level": 2
},
{
"name": "# `code2llm.cli` (`code2llm/cli.py`)",
"type": "unknown",
"content": "```python\ndef main() # CC=7, fan=9\n```",
"level": 2
},
{
"name": "call graph",
"type": "unknown",
"content": "*455 nodes · 500 edges · 113 modules · CC̄=3.9*",
"level": 2
},
{
"name": "# hubs (by degree)",
"type": "unknown",
"content": "| Function | CC | in | out | total |\n|----------|----|----|-----|-------|\n| `create_parser` *(in code2llm.cli_parser)* | 1 | 1 | 45 | **46** |\n| `normalize_llm_task` *(in code2llm.generators.llm_task)* | 14 ⚠ | 1 | 43 | **44** |\n| `render_llm_flow_md` *(in code2llm.generators.llm_flow.generator)* | 10 ⚠ | 1 | 42 | **43** |\n| `main` *(in benchmarks.benchmark_performance)* | 1 | 0 | 41 | **41** |\n| `analyze_class_differences` *(in validate_toon)* | 6 | 1 | 39 | **40** |\n| `_summarize_functions` *(in code2llm.generators.llm_flow.analysis)* | 14 ⚠ | 1 | 35 | **36** |\n| `run_benchmark` *(in benchmarks.benchmark_evolution)* | 9 | 0 | 34 | **34** |\n| `handle_cache_command` *(in code2llm.cli_commands)* | 12 ⚠ | 1 | 33 | **34** |\n\n```toon markpact:analysis path=project/calls.toon.yaml\n# code2llm call graph | /home/tom/github/semcod/code2llm\n# nodes: 455 | edges: 500 | modules: 113\n# CC̄=3.9\n\nHUBS[20]:\n code2llm.cli_parser.create_parser\n CC=1 in:1 out:45 total:46\n code2llm.generators.llm_task.normalize_llm_task\n CC=14 in:1 out:43 total:44\n code2llm.generators.llm_flow.generator.render_llm_flow_md\n CC=10 in:1 out:42 total:43\n benchmarks.benchmark_performance.main\n CC=1 in:0 out:41 total:41\n validate_toon.analyze_class_differences\n CC=6 in:1 out:39 total:40\n code2llm.generators.llm_flow.analysis._summarize_functions\n CC=14 in:1 out:35 total:36\n benchmarks.benchmark_evolution.run_benchmark\n CC=9 in:0 out:34 total:34\n code2llm.cli_commands.handle_cache_command\n CC=12 in:1 out:33 total:34\n code2llm.core.lang.base._extract_declarations\n CC=9 in:4 out:28 total:32\n code2llm.core.lang.rust.analyze_rust\n CC=9 in:1 out:31 total:32\n benchmarks.benchmark_optimizations.benchmark_cold_vs_warm\n CC=7 in:1 out:30 total:31\n benchmarks.benchmark_performance.create_test_project\n CC=5 in:1 out:29 total:30\n code2llm.core.toon_size_manager._split_by_modules\n CC=10 in:1 out:27 total:28\n code2llm.exporters.mermaid.compact.export_compact\n CC=13 
in:0 out:27 total:27\n code2llm.core.lang.go_lang._analyze_go_regex\n CC=10 in:1 out:26 total:27\n code2llm.cli_exports.orchestrator._run_exports\n CC=14 in:1 out:26 total:27\n code2llm.cli_exports.orchestrator_handlers._export_mermaid\n CC=6 in:1 out:26 total:27\n validate_toon.compare_modules\n CC=5 in:1 out:26 total:27\n code2llm.exporters.mermaid.calls.export_calls\n CC=13 in:0 out:26 total:26\n code2llm.exporters.evolution_exporter.EvolutionExporter._is_excluded\n CC=1 in:24 out:1 total:25\n\nMODULES:\n benchmarks.benchmark_evolution [3 funcs]\n load_previous CC=3 out:3\n run_benchmark CC=9 out:34\n save_current CC=1 out:3\n benchmarks.benchmark_format_quality [3 funcs]\n _print_benchmark_header CC=1 out:4\n _print_ground_truth_info CC=1 out:7\n run_benchmark CC=2 out:22\n benchmarks.benchmark_optimizations [5 funcs]\n benchmark_cold_vs_warm CC=7 out:30\n clear_caches CC=3 out:7\n main CC=3 out:13\n print_summary CC=1 out:18\n run_analysis CC=1 out:7\n benchmarks.benchmark_performance [2 funcs]\n create_test_project CC=5 out:29\n main CC=1 out:41\n benchmarks.format_evaluator [5 funcs]\n _check_structural_features CC=1 out:16\n _detect_hub_types CC=2 out:2\n _detect_pipelines CC=5 out:5\n _detect_problems CC=1 out:16\n evaluate_format CC=4 out:22\n benchmarks.project_generator [6 funcs]\n add_validator_to_core CC=1 out:3\n create_core_py CC=1 out:2\n create_etl_py CC=1 out:2\n create_ground_truth_project CC=1 out:6\n create_utils_py CC=1 out:2\n create_validation_py CC=1 out:2\n benchmarks.reporting [8 funcs]\n _print_gap_analysis CC=6 out:9\n _print_header CC=1 out:3\n _print_pipelines_detail CC=5 out:11\n _print_problems_detail CC=5 out:13\n _print_scores_table CC=3 out:7\n _print_structural_features CC=5 out:11\n build_report CC=3 out:8\n print_results CC=1 out:6\n code2llm.analysis.call_graph [2 funcs]\n _expr_to_str CC=1 out:1\n visit_FunctionDef CC=2 out:4\n code2llm.analysis.cfg [2 funcs]\n _expr_to_str CC=1 out:1\n visit_FunctionDef CC=5 out:8\n 
code2llm.analysis.data_analysis [3 funcs]\n _find_data_pipelines CC=7 out:7\n _categorize_functions CC=8 out:8\n _make_stage CC=2 out:0\n code2llm.analysis.dfg [1 funcs]\n _expr_to_str CC=1 out:1\n code2llm.analysis.pipeline_resolver [1 funcs]\n resolve CC=4 out:5\n code2llm.analysis.side_effects [1 funcs]\n analyze_function CC=3 out:6\n code2llm.analysis.type_inference [1 funcs]\n enrich_function CC=3 out:4\n code2llm.analysis.utils.ast_helpers [3 funcs]\n ast_unparse CC=4 out:4\n find_function_node CC=8 out:4\n qualified_name CC=2 out:3\n code2llm.api [2 funcs]\n analyze CC=2 out:2\n analyze_file CC=1 out:4\n code2llm.cli [1 funcs]\n main CC=7 out:11\n code2llm.cli_analysis [11 funcs]\n _analyze_all_subprojects CC=4 out:8\n _analyze_subproject CC=14 out:19\n _build_config CC=9 out:13\n _filter_subprojects CC=10 out:5\n _merge_chunked_results CC=9 out:7\n _print_analysis_summary CC=1 out:9\n _print_chunked_plan CC=4 out:9\n _run_analysis CC=5 out:4\n _run_chunked_analysis CC=3 out:13\n _run_standard_analysis CC=5 out:8\n code2llm.cli_commands [13 funcs]\n _get_chunk_dirs CC=3 out:2\n _get_file_sizes CC=3 out:3\n _print_chunk_errors CC=2 out:2\n _print_validation_summary CC=3 out:12\n _validate_chunks CC=3 out:11\n _validate_single_chunk CC=4 out:4\n generate_llm_context CC=3 out:21\n handle_cache_command CC=12 out:33\n handle_report_command CC=4 out:17\n handle_special_commands CC=9 out:8\n code2llm.cli_exports.code2logic [8 funcs]\n _build_code2logic_cmd CC=2 out:3\n _check_code2logic_installed CC=2 out:4\n _export_code2logic CC=6 out:13\n _find_code2logic_output CC=6 out:6\n _handle_code2logic_error CC=6 out:7\n _normalize_code2logic_output CC=2 out:4\n _run_code2logic CC=3 out:4\n _should_run_code2logic CC=2 out:0\n code2llm.cli_exports.formats [9 funcs]\n _export_calls CC=1 out:1\n _export_calls_format CC=4 out:7\n _export_calls_toon CC=1 out:1\n _export_mermaid_pngs CC=11 out:11\n _export_project_toon CC=2 out:8\n _export_project_yaml CC=2 out:5\n 
_export_simple_formats CC=13 out:24\n _export_yaml CC=6 out:10\n _run_report CC=6 out:12\n code2llm.cli_exports.orchestrator [8 funcs]\n _build_export_config CC=1 out:7\n _collect_dry_run_files CC=3 out:4\n _expand_all_formats CC=2 out:0\n _export_registry_formats CC=9 out:12\n _export_single CC=10 out:13\n _get_format_kwargs CC=2 out:0\n _run_exports CC=14 out:26\n _show_dry_run_plan CC=4 out:16\n code2llm.cli_exports.orchestrator_chunked [3 funcs]\n _export_chunked CC=6 out:9\n _get_filtered_subprojects CC=9 out:7\n _process_subproject CC=5 out:5\n code2llm.cli_exports.orchestrator_handlers [7 funcs]\n _export_calls CC=5 out:7\n _export_context_fallback CC=3 out:5\n _export_index_html CC=5 out:6\n _export_mermaid CC=6 out:26\n _export_mermaid_pngs CC=6 out:4\n _export_project_toon CC=2 out:7\n _export_readme CC=4 out:6\n code2llm.cli_exports.prompt [18 funcs]\n _analyze_generated_files CC=14 out:11\n _build_dynamic_focus_areas CC=9 out:17\n _build_dynamic_tasks CC=8 out:16\n _build_main_files_section CC=1 out:1\n _build_missing_files_section CC=6 out:5\n _build_optional_files_section CC=2 out:1\n _build_priority_order CC=9 out:21\n _build_prompt_file_lines CC=4 out:5\n _build_prompt_footer CC=5 out:7\n _build_prompt_header CC=1 out:0\n code2llm.cli_parser [1 funcs]\n create_parser CC=1 out:45\n code2llm.core.config [2 funcs]\n get_workers CC=2 out:1\n _get_optimal_workers CC=3 out:5\n code2llm.core.file_analyzer [1 funcs]\n _route_to_language_analyzer CC=10 out:10\n code2llm.core.file_cache [2 funcs]\n _get_cache_key CC=1 out:1\n make_cache_key CC=1 out:4\n code2llm.core.file_filter [1 funcs]\n __init__ CC=9 out:13\n code2llm.core.gitignore [1 funcs]\n load_gitignore_patterns CC=3 out:4\n code2llm.core.incremental [3 funcs]\n needs_analysis CC=2 out:5\n update CC=1 out:2\n _file_signature CC=2 out:1\n code2llm.core.lang.base [10 funcs]\n _extract_declarations CC=9 out:28\n _match_method_name CC=14 out:9\n _process_class_method CC=2 out:7\n _process_functions CC=9 
out:2\n _process_standalone_function CC=10 out:11\n _resolve_call CC=7 out:7\n analyze_c_family CC=5 out:6\n calculate_complexity_regex CC=6 out:5\n extract_calls_regex CC=9 out:11\n extract_function_body CC=10 out:4\n code2llm.core.lang.cpp [1 funcs]\n analyze_cpp CC=1 out:1\n code2llm.core.lang.csharp [1 funcs]\n analyze_csharp CC=1 out:1\n code2llm.core.lang.generic [1 funcs]\n analyze_generic CC=12 out:20\n code2llm.core.lang.go_lang [2 funcs]\n _analyze_go_regex CC=10 out:26\n analyze_go CC=4 out:6\n code2llm.core.lang.java [1 funcs]\n analyze_java CC=1 out:1\n code2llm.core.lang.php [4 funcs]\n _adjust_qualified_names CC=3 out:6\n _extract_php_traits CC=4 out:8\n _parse_php_metadata CC=8 out:9\n analyze_php CC=2 out:10\n code2llm.core.lang.ruby [3 funcs]\n analyze CC=1 out:1\n _adjust_ruby_module_qualnames CC=4 out:10\n analyze_ruby CC=14 out:19\n code2llm.core.lang.rust [1 funcs]\n analyze_rust CC=9 out:31\n code2llm.core.lang.ts_extractors [5 funcs]\n _extract_classes_ts CC=1 out:6\n _extract_functions_ts CC=1 out:9\n _find_name_node CC=7 out:0\n _get_node_text CC=1 out:1\n extract_declarations_ts CC=1 out:5\n code2llm.core.lang.ts_parser [9 funcs]\n __init__ CC=1 out:1\n parse CC=3 out:3\n supports CC=2 out:1\n _get_language CC=7 out:6\n _get_parser CC=4 out:3\n _init_tree_sitter CC=2 out:1\n get_parser CC=2 out:1\n is_available CC=1 out:1\n parse_source CC=1 out:3\n code2llm.core.lang.typescript [3 funcs]\n analyze_typescript_js CC=1 out:5\n get_typescript_lang_config CC=1 out:0\n get_typescript_patterns CC=1 out:8\n code2llm.core.large_repo [9 funcs]\n _categorize_subdirs CC=7 out:10\n _collect_files_in_dir CC=1 out:1\n _collect_files_recursive CC=1 out:1\n _merge_small_l1_dirs CC=7 out:19\n _process_level1_files CC=5 out:13\n _split_hierarchically CC=8 out:14\n _split_level2_consolidated CC=9 out:19\n get_analysis_plan CC=2 out:4\n should_use_chunking CC=1 out:1\n code2llm.core.persistent_cache [5 funcs]\n get_file_result CC=4 out:5\n put_file_result 
CC=3 out:7\n _pack CC=2 out:2\n _unpack CC=2 out:2\n get_all_projects CC=6 out:8\n code2llm.core.repo_files [8 funcs]\n _get_gitignore_parser CC=2 out:2\n calculate_priority CC=7 out:1\n collect_files_in_dir CC=6 out:10\n collect_root_files CC=3 out:5\n contains_python_files CC=3 out:4\n count_py_files CC=3 out:4\n get_level1_dirs CC=8 out:9\n should_skip_file CC=7 out:4\n code2llm.core.streaming.cache [1 funcs]\n _get_cache_key CC=1 out:1\n code2llm.core.toon_size_manager [8 funcs]\n _parse_modules CC=6 out:7\n _split_by_lines CC=8 out:20\n _split_by_modules CC=10 out:27\n _write_chunk CC=2 out:1\n get_file_size_kb CC=1 out:1\n manage_toon_size CC=8 out:11\n should_split_toon CC=1 out:1\n split_toon_file CC=3 out:6\n code2llm.exporters.base [1 funcs]\n get_exporter CC=1 out:1\n code2llm.exporters.evolution.computation [8 funcs]\n aggregate_file_stats CC=7 out:11\n build_context CC=10 out:12\n compute_func_data CC=3 out:10\n compute_god_modules CC=2 out:4\n compute_hub_types CC=7 out:8\n filter_god_modules CC=3 out:5\n make_relative_path CC=3 out:3\n scan_file_sizes CC=6 out:7\n code2llm.exporters.evolution.exclusion [1 funcs]\n is_excluded CC=5 out:4\n code2llm.exporters.evolution.yaml_export [1 funcs]\n export_to_yaml CC=11 out:24\n code2llm.exporters.evolution_exporter [2 funcs]\n _is_excluded CC=1 out:1\n export CC=1 out:22\n code2llm.exporters.flow_constants [1 funcs]\n is_excluded_path CC=6 out:4\n code2llm.exporters.flow_exporter [1 funcs]\n _is_excluded CC=1 out:1\n code2llm.exporters.flow_renderer [1 funcs]\n render_header CC=4 out:4\n code2llm.exporters.map.alerts [3 funcs]\n _read_previous_cc_avg CC=6 out:6\n build_hotspots CC=5 out:4\n load_evolution_trend CC=5 out:2\n code2llm.exporters.map.details [4 funcs]\n _rank_modules CC=5 out:7\n _render_map_class CC=7 out:8\n _render_map_module CC=13 out:16\n render_details CC=2 out:2\n code2llm.exporters.map.header [4 funcs]\n _render_alerts_line CC=2 out:3\n _render_hotspots_line CC=2 out:3\n 
_render_stats_line CC=5 out:7\n render_header CC=8 out:18\n code2llm.exporters.map.module_list [1 funcs]\n render_module_list CC=4 out:8\n code2llm.exporters.map.utils [4 funcs]\n count_total_lines CC=5 out:5\n detect_languages CC=8 out:10\n file_line_count CC=2 out:4\n rel_path CC=6 out:9\n code2llm.exporters.map.yaml_export [5 funcs]\n _build_module_classes_data CC=6 out:5\n _build_module_entry CC=2 out:6\n _build_module_exports CC=6 out:4\n _build_module_functions_data CC=7 out:2\n export_to_yaml CC=8 out:19\n code2llm.exporters.map_exporter [1 funcs]\n export CC=1 out:10\n code2llm.exporters.mermaid.calls [1 funcs]\n export_calls CC=13 out:26\n code2llm.exporters.mermaid.classic [4 funcs]\n _render_cc_styles CC=6 out:12\n _render_edges CC=8 out:9\n _render_subgraphs CC=6 out:14\n export_classic CC=1 out:5\n code2llm.exporters.mermaid.compact [1 funcs]\n export_compact CC=13 out:27\n code2llm.exporters.mermaid.flow_compact [8 funcs]\n _longest_path_dfs CC=7 out:5\n _select_longest_path CC=4 out:4\n build_callers_graph CC=4 out:4\n export_flow_compact CC=1 out:9\n find_critical_path CC=2 out:6\n find_leaves CC=4 out:5\n is_entry_point CC=11 out:7\n should_skip_module CC=3 out:2\n code2llm.exporters.mermaid.flow_detailed [1 funcs]\n export_flow_detailed CC=1 out:14\n code2llm.exporters.mermaid.flow_full [1 funcs]\n export_flow_full CC=1 out:14\n code2llm.exporters.mermaid.utils [8 funcs]\n _sanitize_identifier CC=4 out:3\n build_name_index CC=2 out:3\n get_cc CC=3 out:2\n module_of CC=4 out:4\n readable_id CC=1 out:1\n resolve_callee CC=6 out:3\n safe_module CC=1 out:1\n write_file CC=1 out:5\n code2llm.exporters.mermaid_flow_helpers [12 funcs]\n _append_entry_styles CC=3 out:3\n _append_flow_node CC=4 out:6\n _classify_architecture_module CC=4 out:3\n _entry_points CC=3 out:2\n _filtered_functions CC=4 out:4\n _group_architecture_functions CC=2 out:3\n _group_functions_by_module CC=2 out:4\n _render_architecture_view CC=6 out:13\n _render_flow_edges CC=11 
out:10\n _render_flow_styles CC=6 out:10\n code2llm.exporters.project_yaml.core [3 funcs]\n _build_project_yaml CC=12 out:25\n _detect_primary_language CC=9 out:11\n export CC=1 out:6\n code2llm.exporters.project_yaml.evolution [2 funcs]\n build_evolution CC=3 out:4\n load_previous_evolution CC=6 out:5\n code2llm.exporters.project_yaml.health [3 funcs]\n build_alerts CC=13 out:11\n build_health CC=7 out:13\n count_duplicates CC=5 out:8\n code2llm.exporters.project_yaml.hotspots [3 funcs]\n build_hotspots CC=5 out:7\n build_refactoring CC=13 out:20\n hotspot_note CC=7 out:5\n code2llm.exporters.project_yaml.modules [7 funcs]\n build_class_export CC=11 out:10\n build_exports CC=2 out:3\n build_function_exports CC=7 out:6\n build_modules CC=5 out:11\n compute_inbound_deps CC=5 out:3\n compute_module_entry CC=4 out:12\n group_by_file CC=5 out:8\n code2llm.exporters.readme.content [1 funcs]\n generate_readme_content CC=1 out:2\n code2llm.exporters.readme.files [1 funcs]\n get_existing_files CC=2 out:1\n code2llm.exporters.readme.insights [1 funcs]\n extract_insights CC=13 out:14\n code2llm.exporters.readme.sections [3 funcs]\n build_core_files_section CC=4 out:10\n build_llm_files_section CC=5 out:12\n build_viz_files_section CC=7 out:13\n code2llm.exporters.readme_exporter [1 funcs]\n export CC=5 out:17\n code2llm.exporters.report_generators [1 funcs]\n load_project_yaml CC=13 out:17\n code2llm.exporters.toon.helpers [7 funcs]\n _dup_file_set CC=2 out:3\n _hotspot_description CC=8 out:5\n _package_of CC=2 out:2\n _package_of_module CC=4 out:4\n _rel_path CC=6 out:9\n _scan_line_counts CC=6 out:10\n _traits_from_cfg CC=7 out:7\n code2llm.exporters.toon.metrics [2 funcs]\n _compute_hotspots CC=5 out:7\n compute_all_metrics CC=1 out:15\n code2llm.exporters.toon.metrics_core [7 funcs]\n _build_coupling_matrix CC=8 out:6\n _build_function_to_module_map CC=3 out:2\n _resolve_callee_module CC=9 out:5\n compute_class_metrics CC=7 out:14\n compute_file_metrics CC=12 out:25\n 
compute_function_metrics CC=8 out:14\n compute_package_metrics CC=5 out:9\n code2llm.exporters.toon.metrics_duplicates [2 funcs]\n _calculate_duplicate_info CC=6 out:14\n detect_duplicates CC=4 out:5\n code2llm.exporters.toon.module_detail [2 funcs]\n _render_module_detail CC=3 out:10\n render_details CC=3 out:2\n code2llm.exporters.toon.renderer [2 funcs]\n _detect_language_label CC=10 out:12\n render_layers CC=2 out:8\n code2llm.exporters.validate_project [2 funcs]\n _check_required_keys CC=9 out:6\n validate_project_yaml CC=11 out:17\n code2llm.generators._utils [1 funcs]\n dump_yaml CC=1 out:1\n code2llm.generators.llm_flow.analysis [3 funcs]\n _node_counts_by_function CC=4 out:4\n _pick_relevant_functions CC=8 out:11\n _summarize_functions CC=14 out:35\n code2llm.generators.llm_flow.cli [1 funcs]\n main CC=3 out:18\n code2llm.generators.llm_flow.generator [2 funcs]\n generate_llm_flow CC=5 out:12\n render_llm_flow_md CC=10 out:42\n code2llm.generators.llm_flow.nodes [7 funcs]\n _collect_entrypoints CC=5 out:6\n _collect_functions CC=7 out:10\n _collect_nodes CC=5 out:5\n _deduplicate_entrypoints CC=5 out:4\n _extract_entrypoint_info CC=4 out:6\n _group_nodes_by_file CC=3 out:5\n _is_entrypoint_file CC=2 out:2\n code2llm.generators.llm_flow.parsing [1 funcs]\n _parse_func_label CC=4 out:4\n code2llm.generators.llm_flow.utils [4 funcs]\n _as_dict CC=2 out:1\n _as_list CC=2 out:1\n _safe_read_yaml CC=12 out:14\n _strip_bom CC=2 out:1\n code2llm.generators.llm_task [12 funcs]\n _apply_bullet_sections CC=6 out:10\n _apply_simple_sections CC=5 out:4\n _create_empty_task_data CC=1 out:0\n _ensure_list CC=3 out:1\n _parse_acceptance_tests CC=3 out:4\n _parse_bullets CC=4 out:5\n _parse_sections CC=7 out:8\n _strip_bom CC=2 out:1\n load_input CC=15 out:20\n main CC=4 out:13\n code2llm.generators.mermaid [1 funcs]\n run_cli CC=1 out:8\n code2llm.generators.mermaid.fix [7 funcs]\n _fix_class_line CC=6 out:11\n _fix_edge_label_pipes CC=8 out:10\n _fix_edge_line CC=5 
out:9\n _fix_subgraph_line CC=3 out:8\n _sanitize_label_text CC=1 out:9\n _sanitize_node_id CC=3 out:3\n fix_mermaid_file CC=5 out:10\n code2llm.generators.mermaid.png [8 funcs]\n _build_renderers CC=2 out:8\n _is_png_fresh CC=2 out:3\n _prepare_and_render CC=4 out:8\n _run_mmdc_subprocess CC=8 out:7\n _setup_puppeteer_config CC=4 out:9\n generate_pngs CC=7 out:9\n generate_single_png CC=3 out:5\n generate_with_puppeteer CC=2 out:7\n code2llm.generators.mermaid.validation [6 funcs]\n _check_bracket_balance CC=7 out:8\n _check_node_ids CC=12 out:12\n _is_balanced_node_line CC=6 out:0\n _scan_brackets CC=10 out:6\n _strip_label_segments CC=1 out:6\n validate_mermaid_file CC=6 out:10\n code2llm.parsers.toon_parser [6 funcs]\n _detect_section CC=3 out:2\n _parse_header_line CC=2 out:2\n _parse_stats_line CC=5 out:5\n is_toon_file CC=4 out:5\n load_toon CC=2 out:4\n parse_toon_content CC=8 out:9\n demo_langs.valid.sample [10 funcs]\n Order CC=1 out:0\n getId CC=1 out:0\n getItem CC=1 out:0\n addOrder CC=1 out:1\n getOrder CC=3 out:1\n main CC=2 out:6\n processOrders CC=2 out:2\n addUser CC=1 out:1\n service CC=1 out:1\n main CC=2 out:5\n examples.litellm.run [3 funcs]\n get_refactoring_advice CC=2 out:5\n main CC=1 out:17\n run_analysis CC=4 out:8\n examples.streaming-analyzer.sample_project.main [2 funcs]\n handle_get_request CC=4 out:6\n process_request CC=6 out:11\n examples.streaming-analyzer.sample_project.utils [2 funcs]\n format_output CC=3 out:5\n validate_input CC=4 out:2\n scripts.benchmark_badges [3 funcs]\n create_html CC=4 out:3\n get_shield_url CC=1 out:3\n main CC=5 out:23\n scripts.bump_version [7 funcs]\n bump_version CC=4 out:5\n format_version CC=1 out:0\n get_current_version CC=3 out:9\n main CC=3 out:11\n parse_version CC=2 out:3\n update_pyproject_toml CC=1 out:5\n update_version_file CC=1 out:3\n test_langs.invalid.sample_bad [6 funcs]\n AddUser CC=1 out:2\n NewUserService CC=1 out:1\n User CC=1 out:0\n addUser CC=1 out:1\n service CC=1 out:1\n 
main CC=1 out:3\n test_langs.valid.sample [8 funcs]\n User CC=1 out:0\n getId CC=1 out:0\n getName CC=1 out:0\n addUser CC=1 out:1\n getUser CC=3 out:1\n main CC=2 out:6\n processUsers CC=2 out:2\n service CC=1 out:1\n test_python_only.valid.sample [1 funcs]\n main CC=2 out:5\n validate_toon [21 funcs]\n _compare_all_aspects CC=1 out:5\n _extract_keys_from_yaml CC=1 out:2\n _extract_names_from_toon CC=3 out:4\n _print_comparison_summary CC=5 out:5\n _run_comparison_mode CC=7 out:12\n _run_single_file_mode CC=6 out:12\n analyze_class_differences CC=6 out:39\n compare_basic_stats CC=4 out:11\n compare_classes CC=1 out:19\n compare_functions CC=6 out:24\n\nEDGES:\n test_langs.invalid.sample_bad.main → test_langs.invalid.sample_bad.NewUserService\n test_langs.invalid.sample_bad.main → test_langs.invalid.sample_bad.AddUser\n test_langs.valid.sample.UserService.getUser → test_langs.valid.sample.User.getId\n test_langs.valid.sample.UserService.processUsers → test_langs.valid.sample.User.getName\n test_langs.invalid.sample_bad.UserService.service → test_langs.invalid.sample_bad.UserService.addUser\n test_langs.valid.sample.UserService.service → test_langs.valid.sample.UserService.addUser\n test_langs.valid.sample.UserService.main → test_langs.valid.sample.UserService.addUser\n test_langs.valid.sample.UserService.main → test_langs.valid.sample.User.User\n test_langs.valid.sample.UserService.main → test_langs.valid.sample.UserService.getUser\n test_langs.valid.sample.UserService.main → test_langs.valid.sample.User.getName\n validate_toon.load_file → code2llm.parsers.toon_parser.is_toon_file\n validate_toon.load_file → validate_toon.load_yaml\n validate_toon.load_file → code2llm.parsers.toon_parser.load_toon\n validate_toon.extract_functions_from_toon → validate_toon._extract_names_from_toon\n validate_toon.extract_classes_from_yaml → validate_toon._extract_keys_from_yaml\n validate_toon.extract_classes_from_toon → validate_toon._extract_names_from_toon\n 
validate_toon.extract_modules_from_yaml → validate_toon._extract_keys_from_yaml\n validate_toon.compare_functions → validate_toon.extract_functions_from_yaml\n validate_toon.compare_functions → validate_toon.extract_functions_from_toon\n validate_toon.compare_classes → validate_toon.extract_classes_from_yaml\n validate_toon.compare_classes → validate_toon.extract_classes_from_toon\n validate_toon.compare_classes → validate_toon.analyze_class_differences\n validate_toon.compare_modules → validate_toon.extract_modules_from_yaml\n validate_toon.compare_modules → validate_toon.extract_modules_from_toon\n validate_toon._run_single_file_mode → validate_toon.load_file\n validate_toon._run_single_file_mode → validate_toon.validate_toon_completeness\n validate_toon._run_comparison_mode → validate_toon.load_yaml\n validate_toon._run_comparison_mode → validate_toon.load_file\n validate_toon._run_comparison_mode → validate_toon._compare_all_aspects\n validate_toon._run_comparison_mode → validate_toon._print_comparison_summary\n validate_toon._compare_all_aspects → validate_toon.compare_basic_stats\n validate_toon._compare_all_aspects → validate_toon.compare_functions\n validate_toon._compare_all_aspects → validate_toon.compare_classes\n validate_toon._compare_all_aspects → validate_toon.compare_modules\n validate_toon._compare_all_aspects → validate_toon.validate_toon_completeness\n validate_toon.main → validate_toon._run_single_file_mode\n validate_toon.main → validate_toon._run_comparison_mode\n examples.litellm.run.main → examples.litellm.run.run_analysis\n examples.litellm.run.main → examples.litellm.run.get_refactoring_advice\n benchmarks.benchmark_evolution.run_benchmark → benchmarks.benchmark_evolution.load_previous\n benchmarks.benchmark_evolution.run_benchmark → benchmarks.benchmark_evolution.save_current\n benchmarks.reporting.print_results → benchmarks.reporting._print_header\n benchmarks.reporting.print_results → benchmarks.reporting._print_scores_table\n 
benchmarks.reporting.print_results → benchmarks.reporting._print_problems_detail\n benchmarks.reporting.print_results → benchmarks.reporting._print_pipelines_detail\n benchmarks.reporting.print_results → benchmarks.reporting._print_structural_features\n benchmarks.reporting.print_results → benchmarks.reporting._print_gap_analysis\n examples.streaming-analyzer.sample_project.main.Application.process_request → examples.streaming-analyzer.sample_project.utils.validate_input\n examples.streaming-analyzer.sample_project.main.Application.handle_get_request → examples.streaming-analyzer.sample_project.utils.format_output\n benchmarks.project_generator.create_ground_truth_project → benchmarks.project_generator.create_core_py\n```",
"level": 2
},
{
"name": "intent",
"type": "intent",
"content": "High-performance Python code flow analysis with optimized TOON format - CFG, DFG, call graphs, and intelligent code queries",
"level": 2
}
]
}