This started as a side project that got out of hand.
I wanted to see if an AI could find patterns in live poker footage that human analysts miss — not just track stats, but identify behavioral clusters, timing anomalies, and tell patterns across large samples. Things that would take a human hundreds of hours to catalog manually.
So I fed SpotMyTell's analysis system 500 hours of footage: a mix of public stream content, hand history files from poker sites, and recorded home game sessions (with permission), spread across about four weeks.
Here's what happened week by week.
Week 1: The Basics Land Hard
The first week wasn't glamorous. The AI did what any decent pattern recognition system would do: it found the obvious stuff.
Snap-calls were weak or medium holdings 67% of the time across the sample. Long tanks before big bets correlated with strong hands at 71%. Instant checks correlated with weak holdings at 68%.
I already knew these patterns. Anyone who's read a poker book knows them. I almost shelved the project.
Then the AI gave me something I didn't expect: a breakdown by stake level.
At $1/2, these patterns held even stronger than average — snap-calls were weak 74% of the time, long tanks were strong 78% of the time. At $5/10 and above, the numbers were noisier — 58% and 61% respectively.
That split makes sense in retrospect. Lower-stakes players have less discipline about masking their reactions. Higher-stakes regulars have either learned to standardize their timing or vary it deliberately. But seeing it quantified across 500 hours made it concrete in a way my intuition wasn't.
The actionable takeaway: at $1/2-$2/5, lean harder on timing tells. At $5/10+, rely more on sizing and verbal patterns where deception is harder to maintain consistently.
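For anyone who wants to reproduce this kind of stake-level breakdown on their own data, here's a minimal sketch in Python. The record format, field names, and sample numbers are hypothetical illustrations, not SpotMyTell's internal schema:

```python
from collections import defaultdict

# Hypothetical hand records: (stake, observed tell, hand was weak at showdown)
hands = [
    ("1/2", "snap_call", True),
    ("1/2", "snap_call", True),
    ("1/2", "snap_call", False),
    ("5/10", "snap_call", True),
    ("5/10", "snap_call", False),
]

def tell_reliability(hands, tell):
    """Fraction of hands showing `tell` that were weak, grouped by stake."""
    counts = defaultdict(lambda: [0, 0])  # stake -> [weak count, total count]
    for stake, action, weak in hands:
        if action != tell:
            continue
        counts[stake][1] += 1
        if weak:
            counts[stake][0] += 1
    return {stake: weak / total for stake, (weak, total) in counts.items()}

print(tell_reliability(hands, "snap_call"))  # '1/2' -> ~0.67, '5/10' -> 0.5
```

With real hand histories, the same grouping logic extends to any tell and any stratification (stake, position, pot size) by swapping the key.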
Week 2: Verbal Tells Nobody Talks About
This is where it got interesting.
In week 2, the AI spent most of its processing time on audio analysis, parsing verbal patterns against hand outcomes at showdown. The verbal tell literature in poker is thin. Most of it traces back to Caro's Book of Poker Tells, which is 40 years old.
What the AI found:
Unsolicited narration correlates with missed draws at 64%. When a player explains their bet without being asked — "I'm representing the flush here" or "I put you on air so I'm bluffing" — they usually have a weak hand. Strong hands don't need narration.
Asking "how much is it?" after a raise correlates with strong hands at 69%. The question is a stall tactic while someone calculates pot odds or thinks through their play. Players who already know they're folding don't ask, and players who have already decided to call (often with weak holdings in live games) tend to just call without asking.
"I'm probably going to call" said out loud correlates with folding at 58%. A classic speech-act tell: players who announce calls often don't make them. The verbalization is a negotiation with themselves.
Volume changes matter more than content. A player who normally speaks at medium volume who drops to a near-whisper in a big pot is usually either very strong or very uncomfortable. The AI couldn't reliably distinguish which without additional signals, but the volume shift alone flagged big hands at 73%.
The AI got some of these wrong. It flagged one player as "narrates when weak" who turned out to be a talker — he narrated constantly regardless of hand strength. The pattern held in aggregate but broke down for that individual. By week 3, the system had built individual baselines that corrected this.
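The baseline correction can be sketched simply: fall back to the aggregate rate until you have enough observations of the individual, then let the individual's own rate override it. The function names and the 30-event threshold here are illustrative, not the system's actual parameters:

```python
def read_rate(events):
    """Fraction of observed events where the flagged pattern held
    (e.g. the player narrated AND was weak)."""
    return sum(events) / len(events)

def effective_rate(player_events, population_rate, min_sample=30):
    """Use the population rate until the individual sample is large enough."""
    if len(player_events) < min_sample:
        return population_rate
    return read_rate(player_events)

# A constant talker: narrates every hand, so narration carries no signal.
# 50 narration events, only half of which came with weak hands.
talker = [True, False] * 25
print(effective_rate(talker, population_rate=0.64))  # 0.5: aggregate read discarded
```

Once the individual rate drops back toward 50%, the tell is effectively neutralized for that player, which is how the week-3 system stopped misreading the talker.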
Week 3: Sizing Clusters
The third week produced what I think is the most practically useful output: sizing cluster maps.
Instead of looking at single bet sizes, the AI tracked sizing sequences. A player who bets 60% on the flop, 55% on the turn, and 50% on the river is in a different cluster than someone who bets 60%, 80%, and 120%.
Across the 500 hours, five distinct sizing clusters emerged:
Cluster 1: Consistent decrease (60%→50%→40%) — Almost always value bet protection. Player has a strong hand, gets increasingly scared of action, decreases sizing to "keep them in." Profitable to call down and raise rivers.
Cluster 2: Consistent increase (40%→60%→100%+) — Mixed. In position, this often means building a pot with a made hand. Out of position, it frequently signals a bluff escalating in size as the player commits to the line.
Cluster 3: Large, large, large (75%+/75%+/75%+) — Autopilot betting. This is solver-pattern mimicry without understanding. The player learned "bet 75%" and applies it everywhere. Exploit by finding spots where their range is weak and raising.
Cluster 4: Small, large (30%→100%+) — Classic "I found my hand." Player bet small hoping for a cheap street, hit something on the turn or river, and now bets big. The small bet was the weak holding; the large bet is the made hand.
Cluster 5: Any, any, overbet (X→X→150%+) — As covered in the sizing tells post, river overbets at low stakes are more bluff than value. This cluster shows up 3x more often in the bottom half of the data when sorted by showdown hand strength.
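A toy version of the cluster assignment, using the pot-fraction cutoffs from the list above. The real system learned its clusters from the data; these hard-coded thresholds are only an illustration:

```python
def classify_sequence(sizes):
    """Assign a (flop, turn, river) pot-fraction sequence to one of the
    five clusters. Thresholds are illustrative, not learned."""
    flop, turn, river = sizes
    if river >= 1.5:                              # river overbet trumps the rest
        return "5: any-any-overbet"
    if min(sizes) >= 0.75:                        # large on every street
        return "3: large-large-large"
    if flop <= 0.35 and max(turn, river) >= 1.0:  # small probe, then big bet
        return "4: small-then-large"
    if flop > turn > river:
        return "1: consistent decrease"
    if flop < turn < river:
        return "2: consistent increase"
    return "unclustered"

print(classify_sequence((0.60, 0.55, 0.50)))  # 1: consistent decrease
print(classify_sequence((0.40, 0.60, 1.20)))  # 2: consistent increase
```

The ordering of the checks matters: an overbet river is classified as cluster 5 even if the earlier streets fit another pattern, mirroring the "any, any, overbet" definition.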
73% of players at $2/5 in the sample had at least one reliable timing tell. 81% had at least one reliable sizing tell. These numbers are higher than I expected.
Week 4: The Reverse Tells That Fooled the AI
The fourth week was humbling.
After three weeks, the system started flagging patterns it was confident about. Then I ran the model against footage of four professional or semi-professional players (players known to actively think about tells and counter them).
The error rate went from about 29% wrong to 47% wrong. Against these players, the AI was barely better than a coin flip.
Here's what happened: these players had active reverse tell strategies. They'd learned the patterns the AI was trained on and deliberately inverted them.
One player — I'll call him "Counter" — consistently snap-called with strong hands and tanked with draws. He'd inverted the most reliable tell in the system. Against him, my snap-call read was exactly backwards.
Another player had trained himself to give the same verbal non-responses in every spot. No tell. Complete silence, consistent tempo, identical sizing across hand strength ranges. The AI had nothing to work with.
The lesson: the system works on players who aren't thinking about tells. That's most players at most stakes. But the higher you go, the more likely you are to find players who've actively countered the common patterns.
SpotMyTell's database now flags players with known reverse tell tendencies — if we have enough footage to identify the inversion, we mark it. But it's a reminder that no tool replaces active table observation.
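One simple way to flag an inversion like Counter's: check whether a player's individual rate for a tell lands on the opposite side of 50% from the population rate, by some margin and with a minimum sample. The margin and threshold here are illustrative, not the database's actual logic:

```python
def flag_reverse_tell(events, population_rate, min_sample=40, margin=0.15):
    """Flag a player whose individual tell rate sits on the opposite side
    of 50% from the population rate by at least `margin`."""
    if len(events) < min_sample:
        return False  # not enough footage to call it an inversion
    rate = sum(events) / len(events)
    if population_rate > 0.5 and rate < 0.5 - margin:
        return True
    if population_rate < 0.5 and rate > 0.5 + margin:
        return True
    return False

# "Counter": 50 observed snap-calls, only 10 with weak hands (rate 0.2),
# against a population snap-call-weak rate of 0.67.
counter_events = [True] * 10 + [False] * 40
print(flag_reverse_tell(counter_events, population_rate=0.67))  # True
```

A player like the silent regular never trips this flag; his individual rates just hover near 50%, which correctly reads as "no tell" rather than "reverse tell."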
What 500 Hours Actually Proved
Three things I now believe confidently:
1. Tells are real, but they're per-player, not universal. The aggregate patterns are starting points. The individual baseline is what matters.
2. Sizing tells are more reliable than timing tells at higher stakes. Timing is easier to control intentionally. Sizing is harder to vary consistently across all decision points.
3. Verbal tells are underrated. The poker community focuses on physical and timing tells. Verbal patterns are less discussed, less defended against, and surprisingly predictive — especially the volume change finding.
What to Do Next
If you want to run your own sample, upload hand histories to SpotMyTell and let the AI categorize the patterns. It won't give you 500 hours of insight overnight, but even 20-30 hours of data on specific opponents starts to produce useful cluster maps.
For stream review analysis — particularly if you study specific players' footage — the stream review tool can process footage and flag tell patterns automatically.
The 500-hour project taught me that systematic analysis finds things intuition misses. But it also taught me to stay honest about what the data can and can't do. Tools help. Attention at the table is still irreplaceable.
Drop any questions below about the methodology — happy to go deeper on any of the four weeks.