From: Neo, Chief Code Architect
To: Will (The Admiral)
Date: March 28, 2026
Re: Maximum-efficiency vessel intelligence via Chrome MCP, with focus on past track extraction and itinerary reconstruction
| Capability | Method | Quality | Friction Level |
|---|---|---|---|
| Daily fleet positions (51 vessels) | React Fiber extraction from EXPLORE DATA grid | Excellent — all fields captured in single JS call | Low — ~2 min per pull |
| Weekly intelligence briefs | Positions + analysis + zone cross-reference | Excellent — professional-grade output | Low — automated from position data |
| Activity zone alerts | Haversine distance check against 17 zones | Excellent — 9 alerts this week | Zero — runs on every pull |
| Proximity clustering | Pairwise distance calculation | Excellent — 43 pairs detected | Zero — runs on every pull |
| Dense Track (past track) | Browser endpoint gettrackjson | Good — ~77 positions/day | Medium-High — manual, per-vessel |
| Itinerary reconstruction | Manual analysis from positions | Adequate — weekly brief does this narratively | High — no structured data |
| Historical pattern analysis | Manual review of past briefs | Poor — no time-series database | Very High — no historical store |
The bottleneck is past track and historical data. Daily positions work well, but when Will asks "Where has OCTOPUS been for the last 3 months?" or "Show me HAMPSHIRE II's seasonal pattern," the system currently has no stored history to answer from. The key tool is MarineTraffic's Dense Track endpoint:

https://www.marinetraffic.com/map/gettrackjson/shipid:{SHIP_ID}/stdate:{YYYY-MM-DD}/endate:{YYYY-MM-DD}/trackorigin:livetrack
This is an internal browser endpoint — not the paid API. It works as long as there's an active MarineTraffic session cookie in Chrome. The Essential plan ($1,000/year, currently active) gives 7 days of past track.
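As a sketch of how a harvest script might assemble that URL for a given vessel and date window (the template is the one documented above; `build_track_url` is a hypothetical helper name, and the lowercase placeholders are just Python format fields):

```python
from datetime import date, timedelta

# Template mirrors the documented gettrackjson endpoint.
TRACK_URL = ("https://www.marinetraffic.com/map/gettrackjson/"
             "shipid:{ship_id}/stdate:{start}/endate:{end}/trackorigin:livetrack")

def build_track_url(ship_id: int, end: date, days: int = 7) -> str:
    """Return the Dense Track URL for the `days`-day window ending on `end`.

    7 days is the Essential plan's past-track limit.
    """
    start = end - timedelta(days=days - 1)
    return TRACK_URL.format(ship_id=ship_id,
                            start=start.isoformat(),
                            end=end.isoformat())

# Example with M&EM's documented SHIP_ID:
url = build_track_url(6996019, date(2026, 3, 28))
```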
The Dense Track endpoint requires MarineTraffic's internal SHIP_ID, not MMSI or IMO. Currently we only have M&EM's SHIP_ID (6996019) documented. We need all 54.
How to extract SHIP_IDs in bulk (one-time operation):
// Run this in Chrome MCP on the EXPLORE DATA page after loading a fleet
const gridEl = document.querySelector('[role="grid"]');
const fiberKey = Object.keys(gridEl).find(k => k.startsWith('__reactFiber'));
let fiber = gridEl[fiberKey];
let rows = null;
let attempts = 0;
// Walk up the fiber tree until a node carrying the grid's row data appears
while (fiber && attempts < 50) {
  if (fiber.memoizedProps?.rows?.length > 5) {
    rows = fiber.memoizedProps.rows;
    break;
  }
  fiber = fiber.return;
  attempts++;
}
// Extract the SHIP_ID mapping (rows stays null if the walk failed;
// the failure-mode table covers the fallback)
rows.map(r => [r.SHIPNAME, r.SHIP_ID, r.MMSI, r.IMO].join(',')).join('\n');
For a single vessel: fetch the Dense Track URL with the javascript_tool in Chrome MCP, then write the parsed positions to the vessel_positions table. Practical constraint: Chrome MCP JS execution has output length limits. For 17 TIER ONE vessels x 7 days x ~77 positions/day = ~9,200 positions. Solution: batch in groups of 3-5 vessels per call.
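The batching itself is trivial; a minimal sketch (the vessel names here are placeholders, and the chunk size of 4 is just a midpoint of the 3-5 range):

```python
def batch(items, size=4):
    """Yield successive chunks of at most `size` items, so each
    Chrome MCP call stays under the output length limit."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

tier_one = [f"VESSEL_{n}" for n in range(17)]  # placeholder vessel list
batches = list(batch(tier_one, size=4))
# 17 vessels at 4 per call -> 5 calls (4 + 4 + 4 + 4 + 1)
```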
INPUT: [(lat, lon, speed, course, timestamp), ...] sorted by timestamp
OUTPUT: [
{ type: 'port_call', port: 'Nassau', arrival: T1, departure: T2, duration: '3d 4h' },
{ type: 'passage', from: 'Nassau', to: 'Palm Beach', distance: 185, avg_speed: 11.2 },
{ type: 'anchorage', location: 'Exuma Cays', lat: 24.2, lon: -76.4, duration: '1d 8h' },
]
Group consecutive positions where speed < 0.5kn into stops. Merge nearby stops (vessel swinging on anchor) within 0.5nm.
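The stop-detection rule above can be sketched as follows. This is an illustrative implementation under stated assumptions (position tuples of `(lat, lon, speed_kn, timestamp)`; the distance helper mirrors the haversine function in the appendix; `detect_stops` is a hypothetical name):

```python
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Distance in nautical miles; same formula as the appendix helper."""
    R = 3440.065
    lat1, lon1, lat2, lon2 = map(math.radians, [lat1, lon1, lat2, lon2])
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def detect_stops(positions, speed_cutoff=0.5, merge_radius_nm=0.5):
    """positions: (lat, lon, speed_kn, ts) tuples sorted by ts.
    Returns stops as dicts with mean position and start/end timestamps."""
    runs, current = [], []
    for lat, lon, speed, ts in positions:
        if speed < speed_cutoff:
            current.append((lat, lon, ts))
        elif current:          # speed rose: close out the current run
            runs.append(current)
            current = []
    if current:
        runs.append(current)

    # Collapse each slow run into a single stop record at the mean position.
    records = []
    for run in runs:
        records.append({
            "lat": sum(p[0] for p in run) / len(run),
            "lon": sum(p[1] for p in run) / len(run),
            "start": run[0][2], "end": run[-1][2],
        })

    # Merge consecutive stops within merge_radius_nm (vessel swinging
    # on anchor produces brief speed spikes that split one real stop).
    merged = []
    for rec in records:
        if merged and haversine_nm(merged[-1]["lat"], merged[-1]["lon"],
                                   rec["lat"], rec["lon"]) <= merge_radius_nm:
            merged[-1]["end"] = rec["end"]
        else:
            merged.append(rec)
    return merged
```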
With sufficient historical data (collected daily over weeks/months), detect primary cruising grounds per month, typical ports, average days at sea vs in port, and repositioning months.
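One way the seasonal aggregation could work, as a sketch: assume the daily_snapshots table yields one `(iso_date, status)` row per vessel per day, with a status vocabulary like `'at_sea'` / `'in_port'` (those values are an assumption, not an existing schema):

```python
from collections import Counter

def monthly_profile(snapshots):
    """snapshots: iterable of (iso_date, status) for one vessel.
    Returns {'YYYY-MM': Counter({'at_sea': n, 'in_port': m})}."""
    profile = {}
    for iso_date, status in snapshots:
        month = iso_date[:7]  # 'YYYY-MM'
        profile.setdefault(month, Counter())[status] += 1
    return profile

snaps = [("2026-01-01", "in_port"), ("2026-01-02", "at_sea"),
         ("2026-02-01", "at_sea")]
profile = monthly_profile(snaps)
```

Months dominated by `at_sea` days flag repositioning; the modal port per month falls out of the same grouping once port_calls are joined in.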
| Pattern | Detection Rule | Meaning |
|---|---|---|
| At anchor | speed < 0.5kn for > 2hrs | Stationary — port call or anchorage |
| Harbor maneuver | 0.5-3kn, frequent course changes | Entering/leaving port |
| Coastal cruise | 6-12kn, frequent heading changes | Sightseeing, island hopping |
| Passage | 10-16kn, steady heading for > 6hrs | Repositioning between cruising grounds |
| Delivery | > 14kn sustained for > 24hrs | Crew only, no guests aboard |
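The detection rules in the table translate directly into a classifier. A sketch, with thresholds taken from the table; `heading_var_deg` is a hypothetical measure of heading change frequency (e.g. standard deviation of heading over the segment), since the table only says "frequent" vs "steady":

```python
def classify_segment(avg_speed_kn, duration_hr, heading_var_deg):
    """Classify one track segment per the pattern table.

    Delivery is checked before passage so sustained >14kn runs are not
    absorbed by the broader 10-16kn passage band.
    """
    if avg_speed_kn < 0.5 and duration_hr > 2:
        return "at_anchor"
    if avg_speed_kn > 14 and duration_hr > 24:
        return "delivery"
    if 10 <= avg_speed_kn <= 16 and duration_hr > 6 and heading_var_deg < 10:
        return "passage"
    if 0.5 <= avg_speed_kn <= 3 and heading_var_deg >= 10:
        return "harbor_maneuver"
    if 6 <= avg_speed_kn <= 12 and heading_var_deg >= 10:
        return "coastal_cruise"
    return "unclassified"
```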
Six tables: vessels, vessel_positions (time-series AIS data), port_calls (derived from position clustering), voyages (reconstructed legs between stops), anchorages (known anchorage locations), and daily_snapshots (one row per vessel per day).
After each pull, automatically write positions to the mainframe vessel_positions and daily_snapshots tables. This builds the historical record passively.
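A minimal sketch of the write path, assuming SQLite via mainframe.py; the column names and the composite primary key are illustrative, not the implemented schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # mainframe.py would use its own DB file
conn.execute("""
    CREATE TABLE IF NOT EXISTS vessel_positions (
        ship_id    INTEGER NOT NULL,
        ts         TEXT    NOT NULL,   -- ISO-8601 UTC
        lat        REAL, lon REAL,
        speed_kn   REAL, course_deg REAL,
        PRIMARY KEY (ship_id, ts)      -- re-pulls never duplicate rows
    )""")

def record_positions(conn, ship_id, positions):
    """positions: iterable of (ts, lat, lon, speed_kn, course_deg).
    INSERT OR IGNORE makes repeated pulls of overlapping windows safe."""
    conn.executemany(
        "INSERT OR IGNORE INTO vessel_positions VALUES (?, ?, ?, ?, ?, ?)",
        [(ship_id, *p) for p in positions])
    conn.commit()
```

The idempotent insert is what makes passive accumulation safe: the daily pull and the weekly Dense Track harvest can overlap without inflating the history.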
Run once per week for each TIER ONE vessel (17 vessels). 17 vessels x 539 positions (7 days x ~77/day) = ~9,163 positions per week. ~2 minutes total execution.
Same as weekly, but for all 51 MT-tracked vessels. ~27,500 positions per month. Still tiny for SQLite.
Run the bulk extraction script once, store in vessels.ship_id. Takes ~30 seconds.
| Failure Mode | Detection | Recovery |
|---|---|---|
| Session expired | Dense Track returns 401/403 or HTML instead of JSON | Re-authenticate: navigate to MT login page |
| Rate limited | Dense Track returns 429 or empty response | Increase delay between requests to 2 seconds |
| React Fiber changed | rows is null after 50 fiber walks | Fall back to scroll-capture method |
| SHIP_ID missing | Vessel not in vessels table with ship_id | Look up via vessel page URL, update mainframe |
| MT site redesign | All methods fail | Check MT release notes, adapt selectors |
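The first two rows of the failure table can be detected mechanically before parsing. A sketch, assuming `status` and `body` come from whatever fetch wrapper the Chrome MCP call returns (the label strings are hypothetical):

```python
import json

def classify_response(status, body):
    """Map a Dense Track response to a failure mode per the table above."""
    if status in (401, 403):
        return "session_expired"
    if status == 429 or not body.strip():
        return "rate_limited"
    try:
        json.loads(body)
    except ValueError:
        return "session_expired"  # HTML login page came back instead of JSON
    return "ok"
```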
                     Chrome MCP (Browser)
                              |
                +-------------+--------------+
                |                            |
        EXPLORE DATA Grid          Dense Track Endpoint
       (React Fiber extract)       (gettrackjson fetch)
    ~51 vessels, live positions   ~77 pos/day per vessel
                |                            |
                v                            v
         Parse + Normalize           Parse + Normalize
                |                            |
                +-------------+--------------+
                              |
                         mainframe.py
                         (SQLite DB)
                              |
       +---------+---------+--+------+----------+-----------+
       |         |         |         |          |           |
    vessels   vessel_    port_    voyages   anchorages   daily_
              positions  calls                           snapshots
                              |
                      Intelligence Layer
                              |
                  +-----------+-----------+
                  |           |           |
                Daily       Weekly    On-Demand
                Brief       Report     Queries
Each daily pull feeds the database. The database enables pattern detection. Patterns enable better briefs. Better briefs justify the pull. The system gets smarter every day with zero additional effort from Will.
After 90 days: ~90 daily snapshots per vessel, ~8,100 Dense Track positions per TIER ONE vessel, enough data to reconstruct full seasonal itineraries.
After 12 months: full annual patterns — the ability to predict where any vessel is likely to be in any given month.
The current Essential plan limits past track to 7 days. To build longer histories, we must harvest weekly and stitch the overlapping 7-day windows together. This is why the weekly harvest cadence is critical: miss a week and that week's Dense Track resolution is lost forever.
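Stitching is just a timestamp-keyed merge. A sketch, assuming each harvested window is a list of position tuples whose first element is the timestamp (`stitch` is a hypothetical helper name):

```python
def stitch(*windows):
    """Merge overlapping 7-day harvest windows into one deduplicated,
    time-sorted track. Later windows win on duplicate timestamps."""
    merged = {}
    for window in windows:
        for pos in window:
            merged[pos[0]] = pos  # key on timestamp
    return [merged[ts] for ts in sorted(merged)]
```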
import math

def haversine(lat1, lon1, lat2, lon2):
    """Return distance in nautical miles between two coordinates."""
    R = 3440.065  # Earth radius in nautical miles
    lat1, lon1, lat2, lon2 = map(math.radians, [lat1, lon1, lat2, lon2])
    dlat = lat2 - lat1
    dlon = lon2 - lon1
    a = math.sin(dlat/2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon/2)**2
    return 2 * R * math.asin(math.sqrt(a))
This document is the blueprint. The mainframe schema is implemented and seeded. The fleet is loaded. The system is ready to start accumulating intelligence.
-- Neo