HTTP 100 Continue

HTTP 100 Continue is an interim response indicating the server has received the request headers and the client should proceed to send the request body. The status matters for large payload uploads: the client sends an Expect: 100-continue header and pauses; the server either replies 100 to accept the body, or ends the exchange early with a final status such as 401, 413, or 417 Expectation Failed, saving the bandwidth of a doomed upload. Without this mechanism, clients would blindly transmit potentially megabytes of data only to receive a 401 or 413 after the full upload completes.


Try it (live endpoint)

Response includes the status code, standard headers (including Content-Type), and a small diagnostic JSON body describing the request and returned status.

Simulator URL (copy in the app after load — not a normal link):

https://httpstatus.com/api/status/100

Example request:

curl -i "https://httpstatus.com/api/status/100"

Meaning

The server has received the request headers, and the client should proceed to send the request body (used with Expect: 100-continue header).

What it guarantees
  • The server received the request headers, and an interim response was sent before the final one.
What it does NOT guarantee
  • That the request will succeed; the final status may still be 4xx or 5xx.
  • That this is a final outcome; clients must keep waiting for the final response.

When to use this status

  • Respond 100 only when the request carried Expect: 100-continue and you intend to read the body.
  • Use it to defer large uploads until headers (authentication, size, content type) have been validated.

When NOT to use this status (common misuses)

  • Sending 100 when the request did not include Expect: 100-continue.
  • Sending any 1xx response to an HTTP/1.0 client, which cannot parse interim responses.
  • Treating 100 as a final response; a final status must always follow.

Critical headers that matter

  • Expect: 100-continue, the client's request for a go-ahead before transmitting the body.
  • Content-Length / Transfer-Encoding, which tell the server how large the deferred body will be.
  • Authorization and Content-Type, the headers most commonly validated before accepting the body.

Tool interpretation

Browsers
Handle 100 transparently inside the network stack; pages and scripts only ever see the final response, and interim responses are never cached.
API clients
Most HTTP libraries consume the interim response internally and surface only the final status; verbose modes (e.g. curl -v) expose it.
Crawlers / SEO tools
Never index a 100; indexing and canonical decisions are made on the final response.
Uptime monitors
Report the final status; a 100 with no final response indicates a hung upload and should be flagged.
CDNs / reverse proxies
Forward 1xx responses (except to HTTP/1.0 clients) and never cache them; request buffering (e.g. nginx defaults) can break end-to-end 100-continue.

Inspector preview (read-only)

On this code, Inspector focuses on semantics, headers, and correctness warnings that commonly affect clients and caches.

Signals it will highlight
  • Status semantics vs method and body expectations
  • Header sanity (Content-Type, Cache-Control, Vary) and evidence completeness
Correctness warnings
  • 100 sent without a matching Expect: 100-continue request header
  • A 1xx response sent to an HTTP/1.0 client
  • No final response following the interim one

Guided Lab outcome

  • Reproduce HTTP 100 Continue using a controlled endpoint and capture the full exchange.
  • Practice distinguishing status semantics from transport issues (redirects, caching, proxies).

Technical deep dive

The 100 Continue mechanism is defined in RFC 7231 Section 5.1.1 (carried forward into RFC 9110). When a client includes 'Expect: 100-continue' in the request headers, it pauses before sending the body. The server inspects the headers (authentication, content-length limits, content-type) and either sends '100 Continue' to greenlight the body, or a final status like 413 Payload Too Large. Key detail: the server MUST eventually send a final response regardless of whether it sent 100. Proxies handle 100 specially: they forward it (never to HTTP/1.0 clients) but must not cache it. HTTP/2 keeps the same semantics; the Expect header remains valid, and interim responses travel as additional HEADERS frames before the final response. The timeout sits on the client side: a client should not wait indefinitely for the 100, and most implementations (curl waits about one second) send the body anyway if no interim response arrives.
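The full handshake can be reproduced with Python's standard library alone: http.server invokes a handle_expect_100 hook when a request carries Expect: 100-continue. The sketch below (the /upload path and five-byte payload are invented for illustration) pairs such a server with a raw-socket client so both the interim and the final status line are visible.

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class UploadHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # 1xx responses require HTTP/1.1

    def handle_expect_100(self):
        # Called by http.server when the request carries
        # "Expect: 100-continue". This mirrors the default behavior;
        # returning True tells the framework to go on and read the body.
        self.send_response_only(100)
        self.end_headers()
        return True

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        reply = f"got {len(body)} bytes".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):
        pass  # keep the demo output clean

server = HTTPServer(("127.0.0.1", 0), UploadHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

head = (
    "POST /upload HTTP/1.1\r\n"
    f"Host: {host}:{port}\r\n"
    "Content-Length: 5\r\n"
    "Expect: 100-continue\r\n\r\n"
).encode()

with socket.create_connection((host, port)) as sock:
    sock.sendall(head)                  # headers only; body is held back
    interim = sock.recv(4096).decode()
    print(interim.splitlines()[0])      # HTTP/1.1 100 Continue
    sock.sendall(b"hello")              # body flows after the green light
    final = sock.recv(4096).decode()
    print(final.splitlines()[0])        # HTTP/1.1 200 OK

server.shutdown()
```

Note the two-phase client: the body is only written after the interim response arrives, which is exactly the pause the mechanism exists to create.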

Real-world examples

S3 multipart upload
AWS S3 uses the 100-continue mechanism for PUT requests. The client sends headers with Expect: 100-continue, and S3 validates the bucket policy, ACLs, and storage class before accepting the potentially gigabyte-sized body. This prevents uploading large files only to get a 403 Forbidden.
API gateway with auth validation
An API gateway receives a POST with a 50MB JSON payload. Before accepting the body, it checks the Authorization header against the identity provider. If the token is expired, it returns 401 immediately — the 100-continue flow saved 50MB of wasted bandwidth.
File upload service with size limits
A document management system checks Content-Length against per-user quotas before accepting the upload body. Users exceeding their quota get an immediate 413 instead of waiting for the full file to transfer.
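The quota pattern above can be sketched with the same stdlib hook: handle_expect_100 runs after the headers are parsed and before any body is read, so sending an error and returning False rejects the upload without transferring a single body byte. The 1 KB limit and /upload path are invented for the example.

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

MAX_BYTES = 1024  # invented per-user quota for the sketch

class QuotaHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def handle_expect_100(self):
        # Runs after headers are parsed, before any body is read.
        declared = int(self.headers.get("Content-Length", 0))
        if declared > MAX_BYTES:
            self.send_response(413)            # reject before the upload starts
            self.send_header("Content-Length", "0")
            self.end_headers()
            return False                       # http.server skips the body
        self.send_response_only(100)
        self.end_headers()
        return True

    def do_POST(self):
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), QuotaHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

head = (
    "POST /upload HTTP/1.1\r\n"
    f"Host: {host}:{port}\r\n"
    "Content-Length: 5000000\r\n"   # claims ~5 MB, over quota
    "Expect: 100-continue\r\n\r\n"
).encode()
with socket.create_connection((host, port)) as sock:
    sock.sendall(head)
    reply = sock.recv(4096).decode()
    print(reply.splitlines()[0])    # rejected before any body was sent
server.shutdown()
```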

Framework behavior

Express.js (Node)
Express/Node.js automatically handles 100-continue. The http.Server emits a 'checkContinue' event; if no listener is attached, it auto-sends 100. To add validation: server.on('checkContinue', (req, res) => { if (authorized(req)) res.writeContinue(); else res.writeHead(401).end(); });
Django / DRF (Python)
Django's WSGI layer (via gunicorn/uwsgi) typically handles 100-continue transparently. The WSGI server sends 100 when it starts reading the body. Custom middleware cannot easily intercept this — it happens at the server level before Django sees the request.
Spring Boot (Java)
Spring Boot with embedded Tomcat handles 100-continue automatically. Tomcat sends the interim response when the servlet input stream is first read. To customize, implement a Filter that checks headers before reading the body and returns an error status if validation fails.
FastAPI (Python)
FastAPI via Uvicorn handles 100-continue at the ASGI server level. Uvicorn automatically sends the 100 response when the application starts consuming the request body. Custom handling requires middleware that inspects headers before calling receive().
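A minimal sketch of that kind of middleware, written as a plain ASGI callable (the expect_guard name and 1 KB limit are invented): because servers like uvicorn only emit the 100 once the application first awaits receive(), responding before that point means neither the interim response nor the body transfer ever happens. The demo at the bottom fakes the ASGI messages, so no server is required.

```python
import asyncio

MAX_BYTES = 1024  # invented limit for the sketch

def expect_guard(app):
    """Wrap an ASGI app; reject oversize uploads before the body is pulled."""
    async def wrapped(scope, receive, send):
        if scope["type"] == "http":
            headers = dict(scope["headers"])
            declared = int(headers.get(b"content-length", b"0"))
            if declared > MAX_BYTES:
                # Answer without awaiting receive(): no 100, no body transfer.
                await send({"type": "http.response.start", "status": 413,
                            "headers": [(b"content-length", b"0")]})
                await send({"type": "http.response.body", "body": b""})
                return
        await app(scope, receive, send)
    return wrapped

async def echo_app(scope, receive, send):
    # Consuming the body here is what triggers the server's 100 Continue.
    body = b""
    while True:
        msg = await receive()
        body += msg.get("body", b"")
        if not msg.get("more_body"):
            break
    payload = b"got %d bytes" % len(body)
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-length", str(len(payload)).encode())]})
    await send({"type": "http.response.body", "body": payload})

app = expect_guard(echo_app)

# In-process check without a server: fake an oversize request.
async def demo():
    sent = []
    scope = {"type": "http", "headers": [(b"content-length", b"5000")]}
    async def receive():  # never awaited on the oversize path
        return {"type": "http.request", "body": b"", "more_body": False}
    async def send(message):
        sent.append(message)
    await app(scope, receive, send)
    return sent[0]["status"]

print(asyncio.run(demo()))  # → 413
```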

Debugging guide

  1. Check if client is sending 'Expect: 100-continue' header — curl does this automatically for POST bodies over 1024 bytes
  2. Use tcpdump or Wireshark to verify the 100 interim response is sent before the body flows
  3. If uploads hang, check if a proxy (nginx, HAProxy) is stripping the Expect header — nginx needs 'proxy_request_buffering off' for large uploads
  4. Verify server timeout for 100-continue — some servers drop the connection if no body arrives within the timeout window
  5. Test with: curl -v -H 'Expect: 100-continue' -X POST -d @largefile.bin https://api.example.com/upload

Code snippets

Node.js
const http = require('http');

const server = http.createServer((req, res) => {
  // Requests WITHOUT Expect: 100-continue arrive here as usual.
  res.writeHead(200).end('OK');
});

server.on('checkContinue', (req, res) => {
  // Fired only for requests carrying Expect: 100-continue.
  if (parseInt(req.headers['content-length'], 10) > 10_000_000) {
    res.writeHead(413, { 'Content-Type': 'application/json' });
    return res.end(JSON.stringify({ error: 'Payload too large' }));
  }
  res.writeContinue(); // sends "HTTP/1.1 100 Continue"
  let body = '';
  req.on('data', chunk => body += chunk);
  req.on('end', () => {
    res.writeHead(200).end('OK');
  });
});

server.listen(3000);
Python
# With aiohttp server
from aiohttp import web

async def upload_handler(request):
    # aiohttp handles 100-continue via expect_handler
    content_length = request.content_length or 0
    if content_length > 10_000_000:
        raise web.HTTPRequestEntityTooLarge(
            max_size=10_000_000,
            actual_size=content_length
        )
    data = await request.read()
    return web.Response(text=f'Received {len(data)} bytes')

app = web.Application()
app.router.add_post('/upload', upload_handler)
web.run_app(app, port=8080)
Java (Spring)
// Spring Boot Filter for 100-continue validation
@Component
public class ContinueFilter extends OncePerRequestFilter {
    @Override
    protected void doFilterInternal(
            HttpServletRequest req, HttpServletResponse res,
            FilterChain chain) throws ServletException, IOException {
        String expect = req.getHeader("Expect");
        long length = req.getContentLengthLong();
        if ("100-continue".equalsIgnoreCase(expect) && length > 10_000_000) {
            res.sendError(413, "Payload too large");
            return;
        }
        chain.doFilter(req, res);
    }
}
Go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func uploadHandler(w http.ResponseWriter, r *http.Request) {
	if r.ContentLength > 10_000_000 {
		http.Error(w, "Payload too large", http.StatusRequestEntityTooLarge)
		return
	}
	// Go's net/http automatically sends 100 Continue
	// when the handler reads the body
	body, _ := io.ReadAll(r.Body)
	fmt.Fprintf(w, "Received %d bytes", len(body))
}

func main() {
	http.HandleFunc("/upload", uploadHandler)
	http.ListenAndServe(":8080", nil)
}

FAQ

What is the difference between HTTP 100 Continue and HTTP 102 Processing?
100 Continue is specifically about the request body — it tells the client 'go ahead and send your payload.' 102 Processing indicates the server is still working on the request and hasn't finished yet. 100 is sent BEFORE the body arrives; 102 is sent AFTER the full request is received but while processing takes time.
Does HTTP/2 support the 100 Continue mechanism?
HTTP/2 keeps the Expect: 100-continue header valid; servers send 100 interim responses as informational HEADERS frames on the stream before the final response. In practice, HTTP/2's multiplexing and flow control make the bandwidth-saving benefit of 100-continue less critical, since a stream can be reset without affecting other requests on the connection.
Why does curl sometimes send Expect: 100-continue automatically?
curl sends 'Expect: 100-continue' for POST/PUT requests with a body larger than 1024 bytes. This is a bandwidth optimization. You can disable it with -H 'Expect:' (empty value). When a server never answers with 100, curl waits about one second (tunable with --expect100-timeout) before sending the body anyway, which shows up as a mysterious per-request delay against servers that don't handle 100-continue properly.
Can a proxy strip the 100 Continue response?
Yes, and this is a common source of bugs. Nginx by default buffers the request body (proxy_request_buffering on) and never forwards Expect: 100-continue to the upstream. This means the upstream never gets to validate headers before the body. Set 'proxy_request_buffering off' if you need end-to-end 100-continue support.
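A hedged sketch of the nginx side (the location path and upstream name are illustrative); proxy_http_version 1.1 matters because nginx speaks HTTP/1.0 to upstreams by default, which rules out interim responses entirely:

```nginx
location /upload {
    # Stream the request body to the upstream instead of buffering it,
    # so the upstream can veto the upload before the transfer happens.
    proxy_request_buffering off;
    proxy_http_version 1.1;
    proxy_pass http://upstream_app;
}
```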

Client expectation contract

Client can assume
    The request headers were received and the server is prepared to read the body; a final response will still follow.
Client must NOT assume
    That the request will succeed; the final status may still be 4xx or 5xx.
Retry behavior
    Not applicable: 100 is interim, and retry decisions belong to the final status that follows it.
Monitoring classification
    Informational (1xx), never a server error; classify checks on the final response.
    1xx responses are never cacheable, so Cache-Control/ETag/Vary matter only for the final response.

    Related status codes

    101 Switching Protocols
    The server is switching protocols as requested by the client via the Upgrade header (e.g., upgrading to WebSocket).
    417 Expectation Failed
    The server cannot meet the requirements of the Expect header; the early-rejection counterpart of the 100-continue handshake.
