Why we don't rely on X.com's default feed, and how we build a complete, unfiltered timeline instead.
When you open X.com and scroll your home timeline, you are not seeing all posts from people you follow. X runs every post through a recommendation algorithm that decides what to show — and what to hide.
| Property | X.com Home Feed | Our Polling Approach |
|---|---|---|
| Coverage | Partial — algorithm filters silently | Complete — every post from every followed account |
| Sort order | Relevance score, not time | Strictly chronological |
| Ads | Injected every few posts | None |
| Unsolicited content | "For You" posts from strangers | Only accounts you chose |
| API access | Paid API, strict rate limits, v1.1 mostly removed | Browser-session auth, no developer account needed |
| Data ownership | X controls what you see | You control what you see |
Nitter is an open-source, self-hosted X/Twitter frontend. Instead of showing you a web page, it acts as a translation layer between Twitter's internal GraphQL API and standard web formats like RSS.
Our instance runs at heissa.de:9495 and is authenticated with a real browser session (cookies `auth_token` + `ct0`) from the account @Assieh7. These are the same credentials a browser uses — so Twitter sees a normal user session, not a bot.
Browser cookies expire. `refresh_tokens.py` runs a headless Chromium browser (`--headless=new`, no display server needed), logs into X with @Assieh7's password, and saves fresh `auth_token` + `ct0` cookies into `sessions.jsonl`. A cron job runs this every 8 hours.
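The persistence step can be sketched as appending one JSON object per line (JSONL). The field names below are an assumption — the real schema is whatever the Nitter container expects to read from `sessions.jsonl`.

```python
# Sketch: append a refreshed cookie pair to sessions.jsonl.
# Field names are assumed; a timestamp helps debug stale sessions.
import json
import time
from pathlib import Path

def save_session(path: Path, auth_token: str, ct0: str) -> None:
    entry = {
        "auth_token": auth_token,
        "ct0": ct0,
        "refreshed_at": int(time.time()),  # when this pair was captured
    }
    with path.open("a") as fh:             # JSONL: one object per line
        fh.write(json.dumps(entry) + "\n")
```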
Nitter exposes every profile and search as a standard RSS feed. RSS is pull-based: you request it, get XML back, parse it. No JavaScript, no cookie banners.
```
# Profile feed
GET http://heissa.de:9495/SZwanglos/rss

# Search feed (newest first)
GET http://heissa.de:9495/search/rss?q=lang%3Ade+KI&order=recency
```
`nitter_poll.py` uses the feedparser library to fetch these feeds. It keeps track of the last seen post ID in `~/.nitter_seen.json`, so each run only returns genuinely new posts.
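The dedup step can be sketched as follows: keep the last-seen post ID per feed in a JSON file and return only the entries above it. Each feedparser entry carries an `id`; the state-file layout is an assumption, not the actual `nitter_poll.py` code.

```python
# Sketch of last-seen dedup against ~/.nitter_seen.json.
# Entries arrive newest-first in Nitter RSS, so we stop at the
# first ID we have already seen.
import json
from pathlib import Path

STATE = Path.home() / ".nitter_seen.json"

def new_entries(feed_name, entries, state_path=STATE):
    state = json.loads(state_path.read_text()) if state_path.exists() else {}
    last = state.get(feed_name)
    fresh = []
    for e in entries:            # newest first
        if e["id"] == last:      # everything from here on is old
            break
        fresh.append(e)
    if fresh:
        state[feed_name] = fresh[0]["id"]   # remember the newest ID
        state_path.write_text(json.dumps(state))
    return fresh
```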
The configured feeds:

- `SZwanglos` — profile feed of @SZwanglos
- `Assieh7` — profile feed of @Assieh7
- `deutsch` — search: `lang:de`, newest first

```
python3 nitter_poll.py                          # all configured feeds
python3 nitter_poll.py --feed SZwanglos         # single feed
python3 nitter_poll.py --search "lang:de KI"    # ad-hoc search
python3 nitter_poll.py --watch --interval 300   # loop every 5 min
```
Instead of relying on X's algorithmic home feed, we fetch the complete list of accounts that @SZwanglos follows, then poll each account's RSS feed individually. This gives a 100 % chronological, unfiltered timeline of all followed accounts.
The result is cached in `~/.nitter_following/SZwanglos.json`.
Twitter's following page uses a virtualised list — old entries are removed from the DOM as you scroll. Scraping the DOM alone only yields ~60 accounts. Instead, we intercept Twitter's internal GraphQL API calls directly:
We load x.com/SZwanglos/following in the browser, capture the internal `Following` GraphQL request, and replay it (via `page.evaluate()`) with a cursor, paginating through all ~950 accounts in batches of 50.
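The pagination loop above reduces to a standard cursor walk. Here `fetch_page` is a hypothetical stand-in for the replayed GraphQL call inside the browser; its `(accounts, next_cursor)` return shape is an assumption.

```python
# Sketch of cursor pagination over the Following endpoint.
# fetch_page(cursor) -> (list_of_accounts, next_cursor_or_None)
def collect_following(fetch_page):
    accounts, cursor = [], None
    while True:
        batch, cursor = fetch_page(cursor)   # one page, ~50 entries
        accounts.extend(batch)
        if not cursor or not batch:          # no cursor / empty page = done
            break
    return accounts
```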
Polling 947 accounts one after another at full speed would look like bot traffic. Instead, `nitter_poll.py` polls accounts sequentially with randomised delays:
- `--batch N`: only poll N accounts per run; the rotation position is saved in `~/.nitter_batch_SZwanglos`
```
# Poll 50 accounts per run, rotate every call, delay 2–5 s
python3 nitter_poll.py --following SZwanglos --batch 50 --delay 2 5

# Continuous: every 15 min poll next 50 accounts (full cycle ≈ 5 h)
python3 nitter_poll.py --following SZwanglos --batch 50 --delay 2 5 \
    --watch --interval 900
```
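The `--batch` rotation can be sketched as a position file that remembers where the previous run stopped, so successive runs walk the full following list in slices. The single-integer file layout and helper names are assumptions, not the actual implementation.

```python
# Sketch of --batch rotation plus randomised per-account delays.
import random
import time
from pathlib import Path

def next_batch(accounts, batch_size, pos_file: Path):
    """Return the next slice of accounts, wrapping around the list."""
    pos = int(pos_file.read_text()) if pos_file.exists() else 0
    batch = [accounts[(pos + i) % len(accounts)] for i in range(batch_size)]
    pos_file.write_text(str((pos + batch_size) % len(accounts)))
    return batch

def poll(batch, poll_one, delay=(2, 5)):
    """Poll sequentially with a random 2–5 s gap, mimicking human pacing."""
    for acct in batch:
        poll_one(acct)
        time.sleep(random.uniform(*delay))
```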
When Nitter returns HTTP 429 Too Many Requests for a profile RSS feed, we fall back to scraping x.com directly using a headless Chromium browser with the real @Assieh7 session.
Twitter applies different rate limits to programmatic API calls vs. real browser sessions. The key difference is the `x-client-transaction-id` header — a value generated by Twitter's obfuscated frontend JavaScript that encodes browser-fingerprint information. A real Chromium browser generates this header automatically; Nitter (a server-side app) cannot replicate it.
| Request type | Rate limit | x-client-transaction-id |
|---|---|---|
| Nitter / programmatic | ~180 req / 15 min per endpoint | Missing or invalid |
| Real browser (Playwright) | Normal interactive user quota | Auto-generated by Chromium JS |
For rate-limited accounts, a single Playwright browser session is opened (not one per account), visiting each profile page serially with random delays between page loads.
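The fallback logic can be sketched as: try the RSS feed first, collect every account that answers 429, then hand all of them to a single browser pass. `fetch_rss` and `scrape_with_browser` are hypothetical stand-ins for the real feed fetch and the Playwright session.

```python
# Sketch of the 429 fallback path. Accounts whose RSS feed is
# rate-limited are deferred to one shared Playwright pass rather
# than opening a browser per account.
def poll_with_fallback(accounts, fetch_rss, scrape_with_browser):
    rate_limited, results = [], {}
    for acct in accounts:
        status, posts = fetch_rss(acct)        # (http_status, posts)
        if status == 429:
            rate_limited.append(acct)          # defer to browser pass
        else:
            results[acct] = posts
    if rate_limited:
        # one browser session for all deferred accounts
        results.update(scrape_with_browser(rate_limited))
    return results
```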
```
# Force Playwright mode for testing
python3 nitter_poll.py --playwright --search "china"
```
| File | Purpose |
|---|---|
| `~/.nitter_seen.json` | Last seen post ID per feed — prevents re-showing old posts |
| `~/.nitter_following/SZwanglos.json` | Cached list of 947 followed accounts (refreshed daily) |
| `~/.nitter_batch_SZwanglos` | Current rotation position for `--batch` mode |
| `~/nitter/.auth_state.json` | Playwright browser state (cookies, localStorage) for @Assieh7 |
| `~/nitter/sessions.jsonl` | Cookie session file read by the Nitter Docker container |
All `/search?q=` requests are logged to `wagodb.nitter_searches`: IP, query, feed, timestamp.