Mining Performance Analysis #83

Open · markoceri wants to merge 39 commits into dev from feat/mining-performance

@markoceri (Collaborator)

This PR delivers the new Mining Performance Analysis domain for Edge Mining and fixes issue #33.

Why this change

Edge Mining's rule engine already knows about energy state and miner state, but it has no visibility into how the mining pool is actually doing. As a result, useful rules like "don't switch miners off right before a payout" or "pause mining if worker shares have dried up for an hour" are impossible to write today.

This PR adds live pool-side data to the decisional context through a new domain and two concrete pool integrations (Ocean and Braiins), backed by a rate-limit-aware HTTP client under the hood.

What's new at a glance

1. Mining Performance domain

A new MiningPerformanceTrackerPort that any mining pool can implement. Value objects capture the data that actually matters to decisions:

  • PoolStats — current hashrate, 24h/7d averages, unpaid balance, estimated next payout, worker list.
  • PoolWorkerStats — per-worker hashrate, last share timestamp, valid/stale/rejected shares when the pool exposes them.
  • PayoutSchedule — how the pool pays out (per-block / hourly / daily / threshold), with optional threshold and next-payout hint.
  • MiningReward — individual reward entries for the recent history.
  • MiningPerformanceSnapshot — a consolidated snapshot grouping the three live fields above under one timestamp, so the rule engine can consume them as a coherent unit.
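The value objects above could be sketched roughly as follows. This is an illustrative reconstruction only: the class names come from the PR description, but the exact fields, types, and units are assumptions, not the PR's actual domain module.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical sketch of the domain value objects; real field names
# and types live in the PR's Mining Performance domain module.

@dataclass(frozen=True)
class HashRate:
    value: float          # assumed numeric magnitude, e.g. in TH/s
    unit: str = "TH/s"    # assumed default unit

@dataclass(frozen=True)
class PoolWorkerStats:
    worker_name: str
    hashrate: HashRate
    last_share_at: Optional[datetime] = None
    # share counters are optional because not every pool exposes them
    valid_shares: Optional[int] = None
    stale_shares: Optional[int] = None
    rejected_shares: Optional[int] = None

@dataclass(frozen=True)
class MiningPerformanceSnapshot:
    """Groups the live fields under one timestamp so the rule engine
    can consume them as a coherent unit."""
    timestamp: datetime
    current_hashrate: HashRate
    pool_stats: Optional[dict] = None       # simplified stand-in for PoolStats
    payout_schedule: Optional[dict] = None  # simplified stand-in for PayoutSchedule
```

Freezing the dataclasses keeps snapshots immutable, which fits their role as point-in-time inputs to the rule engine.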

2. Two pool adapters

  • Ocean.xyz — public REST API, address-based, no authentication needed.
  • Braiins Pool — token-based authentication (Pool-Auth-Token header), aligned to the current post-FPPS API schema (the pool rewrote its public API in November 2023, removing threshold-based payouts in favour of daily FPPS).

Both adapters are wired through a matching factory, a config class, and full registration in the adapter service — so adding a third pool in the future is just "plug another adapter in".
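The "plug another adapter in" idea can be sketched as a registry mapping an adapter-type key to a factory callable. All names below are illustrative assumptions, not the PR's actual factory or service classes.

```python
from typing import Any, Callable, Dict

# Hypothetical adapter registry: each pool adapter registers a factory
# under its type key; the service builds trackers by key lookup.
ADAPTER_FACTORIES: Dict[str, Callable[[dict], Any]] = {}

def register_adapter(adapter_type: str):
    """Decorator registering a factory under its adapter-type key."""
    def decorator(factory: Callable[[dict], Any]):
        ADAPTER_FACTORIES[adapter_type] = factory
        return factory
    return decorator

@register_adapter("ocean")
def make_ocean_tracker(config: dict) -> dict:
    # Ocean.xyz is address-based, no authentication needed
    return {"type": "ocean", "address": config["address"]}

@register_adapter("braiins")
def make_braiins_tracker(config: dict) -> dict:
    # Braiins uses a Pool-Auth-Token header for authentication
    return {"type": "braiins", "auth_token": config["auth_token"]}

def build_tracker(adapter_type: str, config: dict) -> Any:
    """Adding a third pool means registering one more factory."""
    return ADAPTER_FACTORIES[adapter_type](config)
```

With this shape, a future adapter only needs one `@register_adapter("...")` function plus its config class.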

3. Integration with the decisional context

  • DecisionalContext now carries a mining_performance field. Rules can reference things like mining_performance.current_hashrate.value, mining_performance.pool_stats.estimated_next_payout, or mining_performance.payout_schedule.frequency.
  • The optimization service builds the snapshot once per cycle and reuses it across all flows.
  • Two ready-to-adapt YAML files of example rules are shipped in data/examples/rules/start/ and data/examples/rules/stop/, covering pre-payout pushes, hashrate-drop detection against the 24h average, unprofitable weekly trends, and stale pool data.
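A rule against the new fields might look like the fragment below. This is illustrative only: the field paths follow this PR description, but the concrete rule schema and operator names are defined by the shipped example files, not by this sketch.

```yaml
# Illustrative rule shape — see data/examples/rules/stop/ for the
# actual schema shipped with this PR.
- name: stop-on-hashrate-drop
  description: Pause mining if the current hashrate falls well below the 24h average
  conditions:
    - field: mining_performance.current_hashrate.value
      operator: less_than          # operator name is an assumption
      value_field: mining_performance.pool_stats.hashrate_24h
  action: stop_mining
```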

4. Rate-limit aware pool clients

Public pool APIs throttle. The two new adapters share a small base class that:

  • Caches each call for a tuned TTL (60s for current hashrate, 5 min for worker/pool stats, 10 min for recent rewards, 1 h for payout schedule). The pools update those fields at roughly the same cadence, so polling faster would just burn through the rate limit without adding freshness.
  • Backs off exponentially on HTTP 429: 5s → 10s → 20s → 40s → 80s with small random jitter, and honours Retry-After when the pool sends it.
  • Serves stale data gracefully: if the pool keeps throttling, the cached value — even past its TTL — is served rather than letting the error bubble up and break the decision loop. The error is only propagated when no cached value exists.
  • Surfaces rate-limit errors end-to-end: REST responds with HTTP 429 + Retry-After, CLI prints a human-friendly message.
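The caching and fallback behaviour above can be sketched in a few lines. This is a minimal illustration of the described policy (TTL cache, exponential backoff with jitter, Retry-After, stale-while-error), assuming hypothetical names; it is not the PR's actual base class.

```python
import random
import time
from typing import Any, Callable, Dict, Optional, Tuple

class RateLimitError(Exception):
    """Stand-in for the error raised when the pool answers HTTP 429."""

class CachedPoolClient:
    """Sketch of the shared base-class policy: per-call TTL cache,
    exponential backoff on 429, stale-while-error fallback."""

    BACKOFF_SECONDS = [5, 10, 20, 40, 80]  # 5s -> 10s -> 20s -> 40s -> 80s

    def __init__(self, clock: Callable[[], float] = time.monotonic):
        self._clock = clock
        self._cache: Dict[str, Tuple[float, Any]] = {}  # key -> (stored_at, value)

    def fetch(self, key: str, ttl: float, call: Callable[[], Any]) -> Any:
        now = self._clock()
        cached = self._cache.get(key)
        if cached and now - cached[0] < ttl:
            return cached[1]  # fresh cache hit: no API call
        try:
            value = call()
        except RateLimitError:
            if cached:
                return cached[1]  # serve stale rather than break the decision loop
            raise  # no cached value exists: propagate the error
        self._cache[key] = (now, value)
        return value

    def backoff_delay(self, attempt: int, retry_after: Optional[float] = None) -> float:
        """Delay before retry `attempt` (0-based), honouring Retry-After."""
        if retry_after is not None:
            return retry_after
        base = self.BACKOFF_SECONDS[min(attempt, len(self.BACKOFF_SECONDS) - 1)]
        return base + random.uniform(0, 1)  # small random jitter
```

Each call site would pick its tuned TTL (e.g. `fetch("current_hashrate", 60, ...)` versus `fetch("payout_schedule", 3600, ...)`), matching the cadences listed above.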

5. REST API and interactive CLI

  • 13 REST endpoints under /api/v1/mining-performance-trackers covering tracker CRUD, adapter-type and config-schema discovery, connectivity test, and live data (stats, workers, rewards history, payout schedule).
  • A new CLI menu "Manage Mining Performance Trackers" with an adapter-aware configuration wizard.

How to try it

  1. Set up a tracker:
    • CLI: python -m edge_mining → "Manage Mining Performance Trackers" → "Add" → pick Ocean or Braiins and fill in the config.
    • Or REST: POST /api/v1/mining-performance-trackers/ with the adapter-specific config.
  2. Test connectivity: REST GET /.../{id}/test or CLI "Test reachability".
  3. Inspect live data: REST endpoints for stats, workers, rewards, payout schedule — all cached and rate-limit aware.
  4. Write rules that reference the new fields. See data/examples/rules/start/mining_performance_start_rules.yaml and .../stop/mining_performance_stop_rules.yaml for ready-to-adapt examples.

Notes

  • Breaking change on DecisionalContext: the single tracker_current_hashrate: Optional[HashRate] field was replaced with a richer mining_performance: Optional[MiningPerformanceSnapshot]. Existing rules referencing tracker_current_hashrate need to migrate to mining_performance.current_hashrate.value. The rule engine's built-in operator examples have been updated accordingly.
  • Braiins FPPS alignment: the initial Braiins adapter was written against the pre-FPPS response shape (unconfirmed_reward, hash_rate_1h, threshold-based payouts). The final code follows the current documented schema — current_balance for the unpaid balance, hash_rate_60m for the one-hour hashrate, and PayoutFrequency.DAILY with no threshold/next-payout (FPPS pays daily automatically).
  • No persistence for live pool data: reward history, stats, workers are passthrough. The caching layer is in-memory only; we can revisit persistence later if the decision loop needs historical trend analysis beyond what the pools serve.
  • Port stays pure: the cache + backoff plumbing lives in a shared base class used by both adapters. The port itself is contract-only, so a future adapter can opt in or ship its own policy.
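For the breaking change in the first note above, migration is a field-path rename in each affected rule. The surrounding rule shape below is illustrative; only the two field paths come from this PR.

```yaml
# Before this PR:
- field: tracker_current_hashrate
  operator: greater_than
  value: 100

# After this PR:
- field: mining_performance.current_hashrate.value
  operator: greater_than
  value: 100
```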

Test coverage

  • Suite at the end of the branch: 384 tests passing, 0 failures.
  • New tests cover the new domain, the two adapters (happy path, HTTP-level error mapping, 429 handling with Retry-After), the cache + backoff base class (cache hit/miss within and past TTL, backoff progression, stale-while-error fallback), the configuration service, the REST router, and the CLI interactions.
  • ruff and mypy are clean on all files touched by this PR.


Labels: enhancement (New feature or request)

Linked issue (may be closed by merging): Add REST API for Mining Performance Analysis (performance) domain