Prepared by: Neo, Chief Code Architect
Date: 2026-03-28
Status: For Will's review
Before evaluating options, you need to know this: in November 2024, Strava updated its API agreement with an explicit clause:
"You may not use the Strava API Materials (including Strava Data), directly or indirectly, for any model training related to artificial intelligence, machine learning or similar applications."
They also added:
"You may only display or disclose to an end user the specific Strava Data related to that user."
There is no personal-use exception in the agreement. Feeding your own Strava API data into Claude for analysis technically falls under "use in AI applications."
Practical reality: Strava is not going to detect or enforce against a single user querying their own data through a personal API app and discussing it with Claude. Multiple MCP servers exist openly on GitHub doing exactly this. But the legal footing is not clean. Your bulk-exported data (GDPR download) is yours outright — no API agreement applies to it.
Multiple open-source MCP servers exist that connect Claude directly to your Strava account via OAuth 2.0. The strongest:
| Server | Language | Tools | Stars | Maturity |
|---|---|---|---|---|
| r-huijts/strava-mcp | TypeScript/Node.js | 25 tools | — | Active; most complete |
| MariyaFilippova/mcp-strava | Kotlin/JVM | 18 tools | 18 | Route suggestions, Google Maps links |
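Whichever server you pick, authentication works the same way: a short-lived access token refreshed against Strava's OAuth 2.0 token endpoint. A minimal sketch of that refresh exchange, using only the standard library (the endpoint and field names follow Strava's OAuth flow; the credentials are placeholders):

```python
import json
import urllib.parse
import urllib.request

TOKEN_URL = "https://www.strava.com/oauth/token"

def build_refresh_request(client_id: str, client_secret: str, refresh_token: str):
    """Build the POST payload Strava expects when refreshing an access token."""
    payload = {
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
    }
    return TOKEN_URL, urllib.parse.urlencode(payload).encode()

def refresh_access_token(client_id: str, client_secret: str, refresh_token: str) -> dict:
    """Exchange a refresh token for a fresh access token (makes a network call).

    The response JSON carries access_token, refresh_token, and expires_at;
    persist the new refresh_token, since Strava may rotate it.
    """
    url, data = build_refresh_request(client_id, client_secret, refresh_token)
    with urllib.request.urlopen(urllib.request.Request(url, data=data)) as resp:
        return json.load(resp)
```

The MCP servers above handle this internally; the sketch just shows what credentials your API app (Step 2 below) has to supply.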
Strava lets you download your complete data archive via Settings > My Account > Download or Delete Your Account > Request Your Archive. You receive a ZIP file within 2-4 hours containing:
- activities.csv — every activity with metadata
- .gpx / .fit / .fit.gz files for every activity with full GPS tracks
- profile.csv, goals.csv, and other account data

This data is yours under GDPR. No API agreement applies. You can do whatever you want with it.
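Parsing the archive is trivial since activities.csv is plain CSV with a header row. A sketch using the standard library — note the column names ("Activity ID", "Activity Type", "Distance") are what recent exports use, but verify them against your own file before relying on them:

```python
import csv
import io

def load_activities(csv_text: str) -> list[dict]:
    """Parse the bulk export's activities.csv into dicts keyed by column header.

    Header names are assumed from recent Strava exports; check your own file.
    """
    return list(csv.DictReader(io.StringIO(csv_text)))

# Two-line stand-in for the real file (quoted date field contains commas):
sample = (
    "Activity ID,Activity Date,Activity Type,Distance\n"
    '123,"Mar 1, 2026, 8:00:00 AM",Ride,42.5\n'
)
activities = load_activities(sample)
```

`csv.DictReader` handles the quoted, comma-containing date fields correctly, which is why it beats naive `str.split(",")` here.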
Layer 1: Strava MCP Server (real-time)
- Daily/conversational access to your recent activities
- Quick stats, comparisons, training summaries
- "How was my ride?" workflows

Layer 2: Bulk Export (periodic, legally clean)
- Full historical archive in mainframe
- activities.csv -> strava_activities table
- GPS tracks stored locally for route analysis
- Re-export quarterly or before major analysis

Layer 3: Chrome MCP (targeted, on-demand)
- Segment crowd intelligence for trip planning
- Heatmap analysis for route discovery in new zones
- Leaderboard data for competitive segments
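Layer 2's landing table can be as simple as one SQLite table. This schema is a sketch — the table name matches the plan, but the columns and types are my assumptions, not the mainframe's actual DDL:

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS strava_activities (
    activity_id   INTEGER PRIMARY KEY,  -- Strava's "Activity ID"
    started_at    TEXT,                 -- ISO-8601 start time
    activity_type TEXT,                 -- Ride, Run, ...
    distance_m    REAL,                 -- distance in meters
    gpx_path      TEXT                  -- local path to the GPS track, if any
)
"""

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Open the database and create strava_activities if it does not exist."""
    conn = sqlite3.connect(path)
    conn.execute(SCHEMA)
    return conn
```

`INTEGER PRIMARY KEY` on the Strava activity ID makes quarterly re-imports idempotent: an `INSERT OR REPLACE` keyed on it never duplicates rows.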
| Step | Action | Time | Depends On |
|---|---|---|---|
| 1 | Request Strava bulk export | 5 min | Nothing |
| 2 | Create Strava API app | 10 min | Nothing |
| 3 | Install r-huijts/strava-mcp server | 15 min | Step 2 |
| 4 | Download bulk export, extract | 10 min | Step 1 (wait 2-4 hrs) |
| 5 | Add strava_activities table to mainframe | 30 min | Step 4 |
| 6 | Write sync_strava_bulk() | 1 hr | Step 5 |
| 7 | Write match_activity_to_zone() | 30 min | Step 6 |
| 8 | Build segment intelligence scraping | 2-3 hrs | As needed |
| Option | Verdict | Why |
|---|---|---|
| Strava MCP Server | DO IT | 15-minute setup, immediate conversational access to your training data. |
| Bulk Export | DO IT | Your data, your rules. No legal gray area. Foundation for mainframe integration. |
| Chrome MCP Scraping | USE SELECTIVELY | Good for segment intel and heatmaps. Not worth a full pipeline. |
| Composio/third-party | SKIP | Adds a middleman for no benefit. |
Bottom line: Install the MCP server for daily use. Request the bulk export for deep analysis and mainframe integration. Use Chrome MCP for segment crowd intelligence when planning activities in new zones.
Standing by for your call.
-- Neo