Technical Documentation · February 2026
A self-hosted Nitter Docker instance runs on heissa.de:9495 and
mirrors Twitter/X content as RSS feeds. A Python script
(nitter_poll.py) fetches these feeds regularly via cron and stores
new posts in a MariaDB table (nitter_posts). PHP pages
visualise the collected posts.
X.com / Twitter
│ (internal Twitter API)
▼
Nitter-Docker heissa.de:9495
│ RSS feed /<account>/rss
▼
nitter_poll.py (cron, hourly)
│ INSERT IGNORE
▼
MariaDB · wagodb · nitter_posts
│
▼
SZwanglos.php / Impf_Info.php / StHomburg.php
To avoid overloading Nitter and Twitter, only 20 accounts are polled per cron run. Between requests the script sleeps for a random 1.5–4.5 seconds (natural timing). The three feeds run offset from each other to avoid overlap:
| Feed | Accounts followed | Batch / hour | Runs until complete | Total duration |
|---|---|---|---|---|
| @SZwanglos | 947 | 20 · Cron :00 | 48 runs | ≈ 48 hours (2 days) |
| @Impf_Info | 173 | 20 · Cron :15 | 9 runs | ≈ 9 hours |
| @StHomburg | 3 | 20 · Cron :30 | 1 run | ≈ 1 hour |
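The "Runs until complete" column is simply a ceiling division of the number of followed accounts by the batch size of 20. A quick check of the figures above:

```python
import math

BATCH_SIZE = 20
following_counts = {"SZwanglos": 947, "Impf_Info": 173, "StHomburg": 3}

# ceil(947/20) = 48, ceil(173/20) = 9, ceil(3/20) = 1
runs = {feed: math.ceil(n / BATCH_SIZE) for feed, n in following_counts.items()}
```

With one batch per hourly cron tick, the run count equals the total duration in hours.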
A single batch (20 accounts) takes 30–90 seconds of net time. The cron job then exits; at the next hourly tick the next 20 accounts are polled seamlessly (the rotating batch position is stored in ~/.nitter_batch_SZwanglos).
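The rotation can be sketched as follows. This is a minimal illustration, not the actual nitter_poll.py code: the state file and batch size of 20 come from the description above, while the function and variable names are hypothetical.

```python
import os

BATCH_SIZE = 20  # accounts polled per cron run, per the description above

def next_batch(accounts, state_file, batch_size=BATCH_SIZE):
    """Return the next batch of accounts and advance the stored rotation position."""
    pos = 0
    if os.path.exists(state_file):
        with open(state_file) as f:
            pos = int(f.read().strip() or 0)
    batch = accounts[pos:pos + batch_size]
    if len(batch) < batch_size:                 # wrap around at the end of the list
        batch += accounts[:batch_size - len(batch)]
    with open(state_file, "w") as f:            # the next cron run continues here
        f.write(str((pos + batch_size) % len(accounts)))
    # between the individual feed requests the script would then sleep a random
    # 1.5-4.5 s, e.g. time.sleep(random.uniform(1.5, 4.5))
    return batch
```

Persisting only the integer position keeps the state file trivial; a fresh run (no file yet) starts at position 0.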
The script does not fetch all posts of an account every time. Instead it remembers the ID of the last seen post. On the next run it stops processing as soon as that ID appears – only newer entries are processed.
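This checkpoint logic amounts to a few lines, assuming feed entries arrive newest-first as in an RSS feed (the function and field names here are illustrative, not taken from nitter_poll.py):

```python
def new_entries(feed_entries, last_seen_id):
    """Collect entries until the previously seen post ID appears.

    feed_entries is assumed to be ordered newest-first; last_seen_id is the
    checkpoint from the previous run (None on the very first run).
    """
    fresh = []
    for entry in feed_entries:
        if entry["id"] == last_seen_id:
            break                      # everything from here on was already imported
        fresh.append(entry)
    return fresh
```

On the first run last_seen_id matches nothing, so the loop never breaks and every feed entry is imported.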
Each run performs the following steps per account:

1. GET http://heissa.de:9495/<account>/rss fetches the feed.
2. The last seen post ID (state key following:SZwanglos:<account>) is read from the state file ~/.nitter_seen.json. On the first run this is empty → all feed entries are imported.
3. New entries are written with INSERT IGNORE into nitter_posts. The IGNORE prevents duplicates if the same post ID appears twice (e.g. during batch overlap).
4. The newest post ID is written back to ~/.nitter_seen.json and used as the starting point on the next cron run.
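The deduplication hinges on the UNIQUE constraint on post_id. The same behavior can be demonstrated self-contained with SQLite, whose INSERT OR IGNORE is the equivalent of MariaDB's INSERT IGNORE (the table is reduced to two columns for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE nitter_posts (post_id TEXT UNIQUE, content TEXT)")

# The same post arrives twice, e.g. during batch overlap.
for post_id, content in [("100", "first"), ("100", "duplicate"), ("101", "second")]:
    con.execute("INSERT OR IGNORE INTO nitter_posts VALUES (?, ?)", (post_id, content))

# The duplicate insert is silently skipped: only two rows remain.
rows = con.execute(
    "SELECT post_id, content FROM nitter_posts ORDER BY post_id"
).fetchall()
```

Pushing deduplication into the database keeps the polling script stateless with respect to duplicates: it can simply insert everything it sees.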
If Nitter returns 429 Too Many Requests, the script
automatically switches to Playwright: a headless
Chromium browser opens the corresponding x.com page with
a stored login session (~/.auth_state.json) and
extracts posts directly from the rendered HTML – without RSS.
nitter_posts
├── id            BIGINT AUTO_INCREMENT
├── post_id       VARCHAR(200) UNIQUE  ← Twitter Snowflake ID or URL
├── for_user      VARCHAR(100)         ← SZwanglos / Impf_Info / StHomburg
├── account       VARCHAR(100)         ← polled account
├── author        VARCHAR(100)         ← display name
├── title         TEXT                 ← tweet text (120 chars)
├── link          VARCHAR(500)         ← Nitter URL to the tweet
├── post_time     VARCHAR(20)          ← "DD.MM HH:MM" (from RSS)
├── published_at  DATETIME             ← reconstructed from Snowflake ID
├── content       TEXT                 ← tweet text (200 chars)
└── created_at    TIMESTAMP            ← import timestamp
The published_at timestamp is calculated from the
Twitter Snowflake ID:
published_at = (snowflake_id >> 22) + 1288834974657   (milliseconds since the Unix epoch)
This enables correct chronological sorting regardless of
the import time.
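The formula maps directly to a few lines of Python; 1288834974657 is the documented Snowflake epoch offset (2010-11-04 UTC), and the top bits above bit 22 hold the millisecond timestamp:

```python
from datetime import datetime, timezone

TWITTER_EPOCH_MS = 1288834974657  # Snowflake epoch offset in ms

def snowflake_to_datetime(snowflake_id: int) -> datetime:
    """Recover the creation time encoded in the upper bits of a Snowflake ID."""
    ms = (snowflake_id >> 22) + TWITTER_EPOCH_MS
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
```

A round trip confirms the arithmetic: shifting a millisecond offset left by 22 bits and converting it back yields the original timestamp.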
~/python/nitter_poll.py             Polling script (RSS + Playwright fallback)
~/.nitter_seen.json                 Checkpoint: last known post ID per account
~/.nitter_batch_SZwanglos           Batch position: where rotation continues
~/.nitter_following/SZwanglos.json  Following-list cache (24h TTL)
~/python/nitter_notify.py           Bot: @mention tweets about new feed page