Nitter Polling — How It Works

Why we don't rely on X.com's default feed, and how we build a complete, unfiltered timeline instead.

1. The X.com Problem

When you open X.com and scroll your home timeline, you are not seeing all posts from people you follow. X runs every post through a recommendation algorithm that decides what to show — and what to hide.

Property             X.com Home Feed                                    Our Polling Approach
--------             ---------------                                    --------------------
Coverage             Partial — algorithm filters silently               Complete — every post from every followed account
Sort order           Relevance score, not time                          Strictly chronological
Ads                  Injected every few posts                           None
Unsolicited content  "For You" posts from strangers                     Only accounts you chose
API access           Paid API, strict rate limits, v1.1 mostly removed  Browser-session auth, no developer account needed
Data ownership       X controls what you see                            You control what you see

The old Twitter API (v1.1) allowed developers to fetch a chronological home timeline. X removed this in 2023. Since then, the only way to get unfiltered data is to behave like a browser.

2. What Is Nitter?

Nitter is an open-source, self-hosted X/Twitter frontend. Rather than running X's client-side JavaScript, it acts as a translation layer between Twitter's internal GraphQL API and standard web formats such as RSS and plain HTML.

Our instance runs at heissa.de:9495 and is authenticated using a real browser session (cookies auth_token + ct0) from account @Assieh7. This is the same credential a browser uses — so Twitter sees it as a normal user session, not a bot.

Your script            Self-hosted          Twitter internal      Response
nitter_poll.py  ---->  Nitter :9495  ---->  GraphQL API    ---->  RSS / HTML

Session token auto-refresh

Browser cookies expire. refresh_tokens.py runs a headless Chromium browser (--headless=new, no display server needed), logs into X with @Assieh7's password, and saves fresh auth_token + ct0 cookies into sessions.jsonl. A cron job runs this every 8 hours.
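The exact record layout of sessions.jsonl is internal to this setup; assuming one JSON object per line carrying the two cookies (the refreshed_at field below is an illustrative addition, not confirmed), the write/read path might look like this:

```python
import json
import time
from pathlib import Path

SESSIONS_FILE = Path.home() / "nitter" / "sessions.jsonl"

def save_session(auth_token: str, ct0: str, path: Path = SESSIONS_FILE) -> dict:
    """Append a fresh cookie pair as one JSON line (JSONL format)."""
    record = {
        "auth_token": auth_token,
        "ct0": ct0,
        "refreshed_at": int(time.time()),  # assumed field, handy for debugging
    }
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

def load_latest_session(path: Path = SESSIONS_FILE):
    """Return the most recently appended session, or None if the file is empty."""
    lines = path.read_text(encoding="utf-8").splitlines() if path.exists() else []
    return json.loads(lines[-1]) if lines else None
```

Appending rather than overwriting keeps a history of sessions, so a bad refresh never destroys the last working cookie pair.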

3. RSS Polling

Nitter exposes every profile and search as a standard RSS feed. RSS is pull-based: you request it, get XML back, parse it. No JavaScript, no cookie banners.

# Profile feed
GET http://heissa.de:9495/SZwanglos/rss

# Search feed (newest first)
GET http://heissa.de:9495/search/rss?q=lang%3Ade+KI&order=recency

nitter_poll.py uses the feedparser library to fetch these feeds. It tracks the last seen post ID per feed in ~/.nitter_seen.json, so each run reports only genuinely new posts.
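The internal structure of ~/.nitter_seen.json is not shown in this document; assuming it is a simple feed-name-to-last-ID map, the dedup step could be sketched like this (tweet IDs are snowflakes, so numeric order matches chronological order):

```python
import json
from pathlib import Path

SEEN_FILE = Path.home() / ".nitter_seen.json"

def filter_new_posts(feed: str, post_ids: list, path: Path = SEEN_FILE) -> list:
    """Return only post IDs newer than the last seen ID for this feed,
    then advance the stored marker so the next run skips them."""
    state = json.loads(path.read_text()) if path.exists() else {}
    last_seen = state.get(feed, 0)
    new = sorted(pid for pid in post_ids if pid > last_seen)
    if new:
        state[feed] = new[-1]  # remember the newest ID we have shown
        path.write_text(json.dumps(state))
    return new
```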

Configured feeds

Usage

python3 nitter_poll.py                        # all configured feeds
python3 nitter_poll.py --feed SZwanglos       # single feed
python3 nitter_poll.py --search "lang:de KI"  # ad-hoc search
python3 nitter_poll.py --watch --interval 300 # loop every 5 min
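The flag set above maps naturally onto argparse. This is a minimal sketch of such a parser, not the actual nitter_poll.py implementation; defaults are assumptions:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Parser mirroring the CLI flags shown above (illustrative sketch)."""
    p = argparse.ArgumentParser(prog="nitter_poll.py")
    p.add_argument("--feed", help="poll a single profile feed")
    p.add_argument("--search", help="run an ad-hoc search query")
    p.add_argument("--following", help="poll accounts followed by this user")
    p.add_argument("--batch", type=int, default=50, help="accounts per run")
    p.add_argument("--delay", nargs=2, type=float, default=[2, 5],
                   metavar=("MIN", "MAX"), help="random delay range in seconds")
    p.add_argument("--watch", action="store_true", help="poll in a loop")
    p.add_argument("--interval", type=int, default=300, help="loop interval (s)")
    p.add_argument("--playwright", action="store_true", help="force Playwright mode")
    return p
```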

4. Following-List Polling

Instead of relying on X's algorithmic home feed, we fetch the complete list of accounts that @SZwanglos follows, then poll each account's RSS feed individually. This yields a complete, unfiltered, strictly chronological timeline of all followed accounts.

@SZwanglos follows 947 accounts. All of them are discovered automatically and cached in ~/.nitter_following/SZwanglos.json.

How the following list is fetched

Twitter's following page uses a virtualised list — old entries are removed from the DOM as you scroll. Scraping the DOM alone only yields ~60 accounts. Instead, we intercept Twitter's internal GraphQL API calls directly:

  1. A headless Chromium browser navigates to x.com/SZwanglos/following.
  2. We listen for all network responses and capture any whose URL contains Following.
  3. From the first response we extract the Bearer token, CSRF token, and the numeric user ID.
  4. We then call the GraphQL endpoint directly from inside the browser (page.evaluate()) with a cursor, paginating through all ~950 accounts in batches of 50.
  5. All screen names are saved to the cache file.
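The cursor loop in step 4 can be sketched generically. In the real script the fetch happens via page.evaluate() against Twitter's GraphQL Following endpoint; here the fetch is an injected function, so the pagination logic stands on its own:

```python
from typing import Callable, Iterator, Optional, Tuple, List

def paginate_following(
    fetch_page: Callable[[Optional[str]], Tuple[List[str], Optional[str]]]
) -> Iterator[str]:
    """Walk a cursor-based endpoint until the cursor runs out.

    fetch_page(cursor) returns (screen_names, next_cursor). Starting with
    cursor=None fetches the first page; an empty page or missing cursor
    signals the end of the list."""
    cursor = None
    while True:
        names, cursor = fetch_page(cursor)
        yield from names
        if not cursor or not names:
            break
```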

Slow batch polling

Polling 947 accounts one after another at full speed would look like bot traffic. Instead, nitter_poll.py polls accounts sequentially with randomised delays:

# Poll 50 accounts per run, rotate every call, delay 2–5 s
python3 nitter_poll.py --following SZwanglos --batch 50 --delay 2 5

# Continuous: every 15 min poll next 50 accounts (full cycle ≈ 5 h)
python3 nitter_poll.py --following SZwanglos --batch 50 --delay 2 5 \
                       --watch --interval 900
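Batch rotation needs a persisted position so successive runs walk the whole list. Assuming the position file (~/.nitter_batch_SZwanglos in the state-file table below) stores a plain integer offset, the mechanism might look like this:

```python
import random
import time
from pathlib import Path

def next_batch(accounts: list, batch_size: int, pos_file: Path) -> list:
    """Return the next batch_size accounts, advancing a persisted
    rotation position with wraparound at the end of the list."""
    pos = int(pos_file.read_text()) if pos_file.exists() else 0
    pos %= max(len(accounts), 1)
    batch = (accounts + accounts)[pos:pos + batch_size]  # wrap past the end
    pos_file.write_text(str((pos + batch_size) % len(accounts)))
    return batch

def polite_sleep(lo: float = 2.0, hi: float = 5.0) -> None:
    """Randomised delay between polls so the traffic does not look scripted."""
    time.sleep(random.uniform(lo, hi))
```

With 947 accounts, batches of 50, and a 900-second interval, one full pass over the list takes roughly 19 runs, which matches the ~5 h cycle quoted above.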

5. Playwright Fallback (429 Protection)

When Nitter returns HTTP 429 Too Many Requests for a profile RSS feed, we fall back to scraping x.com directly using a headless Chromium browser with the real @Assieh7 session.

Why this bypasses rate limits

Twitter applies different rate limits to programmatic API calls vs. real browser sessions. The key difference is the x-client-transaction-id header — a value generated by Twitter's obfuscated frontend JavaScript that encodes browser fingerprint information. A real Chromium browser generates this header automatically; Nitter (a server-side app) cannot replicate it.

Request type               Rate limit                       x-client-transaction-id
------------               ----------                       -----------------------
Nitter / programmatic      ~180 req / 15 min per endpoint   Missing or invalid
Real browser (Playwright)  Normal interactive user quota    Auto-generated by Chromium JS

For rate-limited accounts, a single Playwright browser session is opened (not one per account), visiting each profile page serially with random delays between page loads.
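The dispatch between the cheap RSS path and the shared browser session can be sketched as follows; both fetchers are injected here (hypothetical signatures), so the fallback logic is independent of Nitter and Playwright themselves:

```python
def poll_with_fallback(accounts, fetch_rss, fetch_playwright):
    """Try the Nitter RSS path first; accounts answered with HTTP 429
    are collected and retried afterwards through one shared Playwright
    session rather than one browser per account.

    fetch_rss(name) -> (status_code, posts); fetch_playwright(name) -> posts.
    """
    results, rate_limited = {}, []
    for name in accounts:
        status, posts = fetch_rss(name)
        if status == 429:
            rate_limited.append(name)  # defer to the browser pass
        else:
            results[name] = posts
    for name in rate_limited:  # single browser session, serial visits
        results[name] = fetch_playwright(name)
    return results
```

Deferring all 429s to one serial browser pass keeps the expensive Chromium session open only once per run.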

# Force Playwright mode for testing
python3 nitter_poll.py --playwright --search "china"

6. Full Pipeline

Step 1:  Load following list
Step 2:  Shuffle & batch
Step 3a: Nitter RSS fetch
Step 3b: Playwright fallback (on 429)
Step 4:  Sort & display

State files

File                                 Purpose
----                                 -------
~/.nitter_seen.json                  Last seen post ID per feed — prevents re-showing old posts
~/.nitter_following/SZwanglos.json   Cached list of 947 followed accounts (refreshed daily)
~/.nitter_batch_SZwanglos            Current rotation position for --batch mode
~/nitter/.auth_state.json            Playwright browser state (cookies, localStorage) for @Assieh7
~/nitter/sessions.jsonl              Cookie session file read by the Nitter Docker container

Infrastructure