I went through this same thing, and the "what actually changed" part hurt way more than the scraper. Text-only diffing plus normalization got me to about 80% too, but structural moves were the silent killers.

What helped was treating each plan as an object with invariants instead of as lines of text: anchor on stable cues (exact plan name, the closest recurring price pattern, a few "signature" features), then cluster on those anchors every run. If a feature's cluster jumps to a different plan, I tag it as an upgrade/downgrade even when the text is identical. I also kept a small manual mapping layer for the edge cases where regex kept screwing up.

ChartMogul and Baremetrics alerts gave me downstream revenue signals, and I ended up on Pulse for Reddit after trying F5Bot and Brand24, mostly to catch users complaining about "surprise" pricing changes that slipped past my rules, then tune the detectors around those cases.
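Rough sketch of the anchor-and-cluster idea, in case it helps. All the names here (`Plan`, `anchor_score`, the score thresholds) are made up for illustration, not from any real tool; it just shows matching plans across two scrapes on stable anchors, then flagging features whose plan assignment jumped even though their text didn't change:

```python
# Hypothetical data model: a scraped plan reduced to its stable anchors.
from dataclasses import dataclass

@dataclass(frozen=True)
class Plan:
    name: str                 # exact plan name (stable anchor)
    price: str                # closest recurring price pattern, e.g. "$29/mo"
    features: frozenset       # normalized feature strings

def anchor_score(a: Plan, b: Plan) -> int:
    """Count matching anchor signals: name, price, and feature overlap."""
    score = 0
    if a.name == b.name:
        score += 2
    if a.price == b.price:
        score += 1
    overlap = len(a.features & b.features) / max(len(a.features | b.features), 1)
    if overlap > 0.5:         # arbitrary threshold for "signature" similarity
        score += 1
    return score

def match_plans(old: list, new: list) -> dict:
    """Greedily pair each old plan with the new plan sharing the most anchors."""
    matches = {}
    for o in old:
        best = max(new, key=lambda n: anchor_score(o, n))
        if anchor_score(o, best) >= 2:   # require at least two anchor signals
            matches[o.name] = best
    return matches

def moved_features(old: list, new: list) -> set:
    """Features whose plan assignment jumped, even if the text is identical."""
    matches = match_plans(old, new)
    moved = set()
    for o in old:
        n = matches.get(o.name)
        if n is None:
            continue
        for feat in o.features - n.features:
            # Feature left this plan; if it landed on a different plan,
            # treat it as an upgrade/downgrade move, not a removal.
            if any(feat in other.features for other in new if other is not n):
                moved.add(feat)
    return moved
```

So if "SSO" disappears from Pro and shows up under Enterprise with identical wording, `moved_features` surfaces it instead of a pure text diff reporting one deletion and one unrelated addition.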