Form spam protection without reCAPTCHA: seven techniques that work better

April 5, 2026 · 8 min read


reCAPTCHA has had a good run. It's been the default "am I a robot" solution on the internet since 2007. It is also, in 2026, the wrong default for most contact forms — and I mean "wrong" in a measurable, numbers-on-a-dashboard sense, not a philosophical one.

Here's what reCAPTCHA costs you when you put it on a form:

  • 140–300ms of added page load on every page it's on (not just the form page — the script loads globally)
  • A measurable drop in form completion rate, particularly on mobile Safari and on slow connections
  • Accessibility friction for users on screen readers and keyboard-only navigation
  • Privacy and GDPR questions that nobody on a small team wants to answer
  • A Google dependency you can't easily remove once it's in place
  • The "select all crosswalks" experience that makes users hate your site a little

And in exchange for all of that, it catches some bots. Not more bots than the alternatives I'm about to list. Just some.

This post is seven techniques that individually or in combination will outperform reCAPTCHA on a standard contact form, without any of the downsides. I've tested most of these directly (the numbers come from the 30-day form spam experiment), and the rest are techniques I've seen work reliably on friends' production sites.

1. The honeypot field

What it is: a hidden input field that real humans never see and bots happily fill in. Your backend checks the field and drops any submission where it's not empty.

The code:

<input type="text" name="website" tabindex="-1" autocomplete="off"
       style="position:absolute;left:-9999px" aria-hidden="true">

Why it works: most form-spam bots scrape the HTML, find every input, and fill in something for each one. They don't read CSS, they don't evaluate JavaScript, they don't check aria-hidden or tabindex. The hidden field looks like a normal input to them. They fill it. They get dropped.
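The matching server-side check is a one-liner. A sketch in plain JavaScript (the field name `website` matches the markup above; `fields` stands in for whatever your framework's body parser hands you):

```javascript
// Returns true when the hidden honeypot field was filled in, i.e. the
// submission almost certainly came from a bot and should be dropped.
function isHoneypotTripped(fields) {
  const value = fields["website"]; // must match the honeypot input's name
  return typeof value === "string" && value.trim() !== "";
}
```

Drop the submission silently (return a 200 and discard it) rather than returning an error — an error message tells bot authors exactly which field to stop filling in.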

What it catches: the cheap, high-volume spam bots that make up the majority of form abuse. In my 30-day experiment, one honeypot field cut daily spam from ~1,577/day to ~412/day. A 74% reduction from a single line of HTML.

What it doesn't catch: sophisticated bots that render JavaScript and CSS (residential-proxy scraper pools, some advanced SEO outreach tools).

Cost: zero milliseconds, zero accessibility impact, zero user friction. This is the first thing you should do. If you do only one thing from this list, do this one.

2. The minimum submission time check

What it is: you record when the form was rendered (either server-side or as a hidden timestamp field added by JavaScript), and reject any submission that arrives less than ~1.2 seconds later.

The reasoning: humans cannot fill in a contact form in under a second. Even submitting a fully autocompleted form requires a human to click the submit button, which takes at least 200–500ms of real-world latency. Bots, on the other hand, routinely POST within milliseconds of loading the page.

Implementation:

<input type="hidden" name="_formto_ts" id="_formto_ts" value="">
<script>
  document.getElementById('_formto_ts').value = Date.now()
</script>

Your backend checks Date.now() - submittedTs and drops anything under 1,200ms. Most serious form backends do this automatically if you include the field.
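The server-side half of the check, as a sketch (the 1,200ms threshold matches the text; note that the timestamp comes from the client's `Date.now()`, so in production you'd either tolerate a little client/server clock skew or issue the timestamp server-side):

```javascript
const MIN_FILL_MS = 1200; // humans take longer than this to fill a form

// Returns true when the form was submitted suspiciously fast, or when the
// timestamp is missing or garbled — which is itself a bot signal.
function isTooFast(renderedTs, now = Date.now()) {
  const ts = Number(renderedTs);
  if (!Number.isFinite(ts) || ts <= 0) return true;
  return now - ts < MIN_FILL_MS;
}
```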

What it catches: everything the honeypot misses that isn't sophisticated enough to wait. In my experiment, adding this check cut spam from ~412/day to ~186/day — another ~55% reduction on top of the honeypot.

Cost: one line of JavaScript, negligible latency, no user-visible change.

3. Request header heuristics

What it is: server-side checks on headers that real browsers always send and naive bots often don't.

Things to check:

  • Accept-Language header present (browsers always send it; many scripts don't)
  • User-Agent doesn't scream "curl" or "python-requests"
  • Referer header matches a page on your site (when the form has a known origin)
  • Sec-Fetch-* headers are present on modern browsers (Chrome, Firefox, Safari all send them)
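Put together, those checks might look like this (a sketch: the user-agent blocklist and the "suspicious at 2+" threshold are illustrative, and the header names assume a Node-style lowercased headers object):

```javascript
const BAD_UA = /curl|python-requests|wget|libwww|scrapy/i; // illustrative blocklist

// Returns a suspicion score: 0 looks like a real browser;
// 2 or more is worth rejecting or routing to a stricter check.
function headerSuspicion(headers, siteOrigin) {
  let score = 0;
  if (!headers["accept-language"]) score += 1;
  const ua = headers["user-agent"] || "";
  if (ua === "" || BAD_UA.test(ua)) score += 1;
  const referer = headers["referer"] || "";
  if (siteOrigin && !referer.startsWith(siteOrigin)) score += 1;
  if (!headers["sec-fetch-site"]) score += 1; // modern browsers send Sec-Fetch-*
  return score;
}
```

Treat the score as one signal, not a verdict — unusual browsers and VPN users will occasionally miss a header.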

Why it works: writing a spam script is easy. Writing one that perfectly emulates a modern browser's request headers is harder. Cheap bots skip the headers and get caught.

What it catches: the middle tier of spam bots — the ones that bypass honeypots by being slightly more careful, but aren't sophisticated enough to set every header a real browser sends.

What it doesn't catch: bots running inside headless Chrome (which sets all the headers correctly) or bots using residential proxy networks (which send real browser traffic).

Cost: small backend-side cost, zero user-facing cost. Some false positives for users on unusual browsers or VPNs, so apply carefully.

4. Per-IP rate limiting

What it is: limit how many submissions a single IP address can make in a given window — typically something like "5 per minute" or "20 per hour."

Why it works: cheap spam comes in bursts from the same IPs. Rate limits absorb those bursts without affecting legitimate users, who rarely submit the same form twice within a minute.

What it catches: brute-force burst attacks, misconfigured bots that loop on the same form, and some accidental self-inflicted damage from load tests.

What it doesn't catch: slow drip attacks (one submission per IP per hour), residential-proxy attacks that rotate through hundreds of IPs.

Cost: a small amount of backend infrastructure. Zero user-facing cost unless someone is genuinely retrying a form many times in a row, which usually means something else is already wrong.
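For a single server, a minimal in-memory sliding-window limiter is enough; a sketch (with more than one server you'd back this with something shared, like Redis):

```javascript
// Returns an allow(ip) function enforcing `limit` submissions per `windowMs`.
function makeRateLimiter(limit, windowMs) {
  const hits = new Map(); // ip -> timestamps of recent submissions
  return function allow(ip, now = Date.now()) {
    const recent = (hits.get(ip) || []).filter((t) => now - t < windowMs);
    if (recent.length >= limit) {
      hits.set(ip, recent);
      return false; // over the limit: reject
    }
    recent.push(now);
    hits.set(ip, recent);
    return true;
  };
}

// e.g. "5 per minute", as in the text:
// const allow = makeRateLimiter(5, 60_000);
```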

5. Content heuristics and keyword filters

What it is: server-side inspection of the submitted message for obvious spam patterns — things like crypto-related keyword combinations, casino affiliate language, SEO outreach templates, and known spam URL patterns.

The tradeoff: this is where you enter false-positive territory. A legitimate customer asking a question about cryptocurrency pricing might hit your crypto filter. A real partnership proposal might match your SEO outreach pattern. You have to tune carefully.

The approach I recommend: don't block based on keywords alone. Use them as one signal in a composite score. A submission with two keyword matches AND a suspicious user agent AND a missing Accept-Language header is almost certainly spam. Any one of those signals alone is not.
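A sketch of that composite scoring (the patterns, weights, and block threshold here are illustrative placeholders, not a tuned production list):

```javascript
// Placeholder patterns — a real list needs ongoing tuning.
const SPAM_PATTERNS = [/casino/i, /crypto.{0,30}(profit|guaranteed)/i, /guest\s+post/i];

// Combine weak signals into one score; block only when several agree.
function spamScore({ message = "", headerSuspicion = 0 }) {
  const keywordHits = SPAM_PATTERNS.filter((p) => p.test(message)).length;
  return keywordHits + headerSuspicion;
}

const BLOCK_AT = 3; // two keyword hits alone (score 2) won't block on their own
```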

What it catches: the long tail of bot spam that gets past earlier filters by being well-formed but still obviously promotional.

Cost: ongoing maintenance (keyword lists rot), occasional false positives, requires either rule engineering or a classifier you trust.

6. A challenge-based CAPTCHA alternative (Cloudflare Turnstile, hCaptcha)

What it is: CAPTCHAs that don't use image puzzles. They work by running a passive browser challenge — JavaScript that checks for real-browser characteristics, mouse movements, fingerprint signals — and either passing invisibly or showing a small checkbox.

Cloudflare Turnstile is currently the best of breed. It's free, privacy-friendly, doesn't feed a Google tracking pipeline, and works without user interaction ~95% of the time. You get a small Turnstile badge instead of "select all bicycles."

hCaptcha is a similar option, slightly older, with a paid "privacy-pass" mode that lets users skip challenges entirely.

When to use these: after you've already added the honeypot, timing, and header checks. CAPTCHA should be the last layer, not the first. And even then, I'd pick Turnstile over reCAPTCHA every time.

Cost: some JavaScript weight, occasional user friction, a third-party dependency — but dramatically less of all three than reCAPTCHA.

7. Behavioral signals and reputation

What it is: a form backend (or service) that maintains a reputation score for IPs, ASNs, and known bad actors across many sites. When a request comes in, it gets scored against that database before any other check runs.

Why it works: spam bots are not unique to your site. The same IPs hitting your contact form are hitting thousands of other contact forms. Shared reputation data lets any one site benefit from the aggregate detection of every other site.

Who does this well: most serious form backends run some version of this. Akismet (the WordPress one) is the oldest. FormTo has its own internal version. Commercial services like Distil, PerimeterX, and Arkose Labs do heavier versions for enterprise.

Cost: this is usually something you get by picking the right backend, not something you build yourself. DIY reputation systems are hard.

The layered approach

Here's the dirty secret of form spam protection: no single technique is enough, and no single technique needs to be. The right setup is a sequence of cheap, fast checks that run in order, with each one catching what the previous ones missed.

My recommended stack for a new contact form in 2026:

  1. Honeypot field (blocks 74% of bots for free)
  2. Submission timing check (blocks another 55% of what's left)
  3. Request header heuristics (catches the middle tier)
  4. Per-IP rate limit (absorbs bursts)
  5. Reputation scoring (catches what gets past everything else)
  6. Content filter as last resort for very high-volume sites
  7. Turnstile only if you're still getting spam after all of the above, which is rare

Notice what's missing from that list: reCAPTCHA. And notice what's missing from the user's perspective: any visible spam-protection UI at all. The whole stack runs invisibly. Real users never see a challenge, a checkbox, a crosswalk puzzle, or a loading spinner. They fill in the form, they click Send, they move on with their day.
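The whole stack reduces to a short chain of predicates run cheapest-first; a sketch (each check name is illustrative and would wrap one of the techniques above):

```javascript
// Each check takes the request and returns true if it thinks this is spam.
// Order cheapest-first so most spam is rejected before expensive checks run.
function makeSpamFilter(checks) {
  return (req) => checks.some((check) => check(req));
}

// Illustrative wiring — these names are placeholders, not a real API:
// const isSpam = makeSpamFilter([honeypot, tooFast, badHeaders, rateLimited, badReputation]);
```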

The math on CAPTCHA vs the alternatives

Let me put numbers on this, because "reCAPTCHA is bad" is easy to say and hard to justify without data.

In my 30-day experiment:

Stack                                       Daily spam after   Daily completion rate   User complaints
Nothing                                     ~1,577             100% (baseline)         0
reCAPTCHA v3 only                           ~58                94%                     3
Honeypot only                               ~412               100%                    0
Honeypot + timing                           ~186               100%                    0
Honeypot + timing + headers + rate limit    ~71                100%                    0
Full stack (minus CAPTCHA)                  ~4                 100%                    0

The layered approach without CAPTCHA catches more spam than reCAPTCHA v3 alone, without any user-visible friction, and without the 6% drop in completion rate that reCAPTCHA introduced.

That's the whole argument. The numbers are boring. The implications are not.

The bit I want you to remember

reCAPTCHA is not a security feature. It's a legacy default. The defaults have moved on. In 2026, a well-configured form with a honeypot, timing check, header heuristics, rate limits, and a form backend that does reputation scoring will catch more spam than reCAPTCHA, with zero user friction and zero Google tracking.

If you're still putting reCAPTCHA on contact forms because "that's what you do," this is your official notice to stop.


FormTo ships with all of the above enabled by default — honeypot detection, timing checks, header heuristics, per-IP rate limits, and our own internal reputation layer. Create a free form, point it at your contact page, and watch the spam not arrive.

For the full experiment with numbers: 30 days of form spam. For the related "why do my form emails go to spam" problem (different kind of spam, same kind of frustration): the email-deliverability post.
