---
name: twitter-auto-engage
description: Automated Twitter/X engagement script that scans a curated pool of target accounts, generates authentic GPT-powered replies, and posts them via the Twitter GraphQL API. Use when you want to run a scheduled engagement routine that builds presence through high-quality, non-sycophantic replies to thought leaders in your niche.
metadata: {"requires": ["python3", "openai", "rnet_twitter"], "env": ["OPENAI_API_KEY", "TWITTER_COOKIE_PATH"], "tags": ["twitter", "social-media", "engagement", "automation", "openai", "gpt"]}
---

Twitter Auto-Engage

Automated Twitter/X engagement system that scans a pool of target accounts, selects their highest-engagement recent tweets, generates context-aware replies using GPT, and posts them with human-like timing. Built for founders and builders who want to grow through genuine technical conversation, not follower-baiting.

Requirements

pip install openai
# rnet_twitter — async Twitter GraphQL client (see rnet_twitter.py)

You also need:

  • A valid Twitter/X session cookie file (twitter_cookies.json)
  • An OpenAI API key

Setup

export OPENAI_API_KEY="your_openai_api_key"
export TWITTER_COOKIE_PATH="/path/to/twitter_cookies.json"

The script reads cookies from a JSON file exported from a logged-in browser session. See Obtaining Twitter Cookies below.
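The exact JSON schema is whatever rnet_twitter.py expects, so check that file first. As an illustrative guess, a minimal export usually carries the two standard Twitter session cookies, `auth_token` and `ct0`, as a name-to-value map:

```json
{
  "auth_token": "…",
  "ct0": "…"
}
```

If your export tool produces a list of cookie objects instead, adapt it to the shape rnet_twitter.py reads.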

Usage

# Run a single engagement session (5 replies max)
python auto_engage.py

# Schedule 4x daily via cron (recommended)
# Morning, midday, afternoon, evening sessions
0 8,12,16,21 * * * cd {skillDir} && python auto_engage.py >> logs/engage.log 2>&1

Configuration

Edit the following constants at the top of the script to customize behavior:

Constant             Default  Description
TARGETS_PER_RUN      20       Accounts to check each session
MAX_REPLIES_PER_RUN  5        Maximum replies to post per session
RUN_PROBABILITY      0.85     Probability that a scheduled session actually runs (adds human variability)
MIN_REPLY_LENGTH     80       Minimum character count for a reply
MAX_REPLY_LENGTH     260      Maximum character count for a reply
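In the script these appear as a block of module-level constants. A sketch of that block with the default values (names taken from the table above):

```python
# Tuning constants near the top of auto_engage.py (defaults shown).
TARGETS_PER_RUN = 20      # accounts sampled from the pool each session
MAX_REPLIES_PER_RUN = 5   # hard cap on replies posted per session
RUN_PROBABILITY = 0.85    # chance a scheduled session actually runs
MIN_REPLY_LENGTH = 80     # replies shorter than this are rejected
MAX_REPLY_LENGTH = 260    # replies longer than this are rejected
```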

Target Account Pool

Organize accounts into named categories with per-session quotas. The script randomly samples TARGETS_PER_RUN accounts from the full pool each session.

Example structure:

TARGET_POOL = {
    "ai_builders": [
        "simonw",     # Example: ML tools, Datasette
        "swyx",       # Example: AI engineering
    ],
    "indie_builders": [
        "dvassallo",  # Example: indie hacker
        "tdinh_me",   # Example: bootstrapped SaaS
    ],
    "marketing": [
        "harrydry",   # Example: Marketing Examples
        "wes_kao",    # Example: content strategy
    ],
}

CATEGORY_QUOTAS = {
    "ai_builders":   2,  # replies per session
    "indie_builders": 2,
    "marketing":     1,
}
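The sampling logic can be sketched as below: fill each category's quota first, then pad with random accounts from the full pool up to `TARGETS_PER_RUN`. This is an illustrative reconstruction, not the script's exact code.

```python
import random

def sample_targets(pool, quotas, per_run=20):
    """Pick each category's quota, then pad randomly up to per_run.

    pool:   {category: [handles]}, as in TARGET_POOL above
    quotas: {category: count},     as in CATEGORY_QUOTAS above
    """
    picked = []
    for category, quota in quotas.items():
        handles = pool.get(category, [])
        picked += random.sample(handles, min(quota, len(handles)))
    # Pad with any remaining accounts from the full pool, shuffled.
    remaining = [h for hs in pool.values() for h in hs if h not in picked]
    random.shuffle(remaining)
    picked += remaining[: max(0, per_run - len(picked))]
    return picked
```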

Replace handles with accounts relevant to your niche and goals.

Reply Generation

Replies are generated by GPT using a structured voice prompt. The system enforces:

Voice Rules (Configurable)

  • Direct, clear sentences — no filler words
  • Specific over vague: numbers, tool names, concrete observations
  • Intellectually curious tone — genuine questions, not rhetorical
  • Comfortable acknowledging uncertainty or failure

Hard Bans (Built-in Filters)

The generated reply is rejected and skipped if it contains:

  • Sycophantic openers: "Great post", "Love this", "So true", "Spot on", etc.
  • Casual slang: lol, damn, wild, ngl, fr, bro, lowkey, fire, bussin, no cap
  • Emojis (any Unicode > U+1F600)
  • Exclamation marks (configurable tolerance)
  • Corporate speak: leverage, synergy, paradigm, game-changer
  • Self-promotion or product name-dropping
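A minimal sketch of such a filter, using a subset of the ban lists above (the script's actual lists and thresholds live in auto_engage.py):

```python
import re

BANNED_OPENERS = ("great post", "love this", "so true", "spot on")
BANNED_SLANG = {"lol", "damn", "wild", "ngl", "fr", "bro", "lowkey", "fire", "bussin"}
BANNED_CORPORATE = {"leverage", "synergy", "paradigm", "game-changer"}

def violates_bans(reply: str, max_exclamations: int = 0) -> bool:
    """Return True if the reply trips any hard ban (illustrative subset)."""
    lowered = reply.lower()
    if lowered.startswith(BANNED_OPENERS):
        return True
    words = set(re.findall(r"[a-z'-]+", lowered))
    if words & (BANNED_SLANG | BANNED_CORPORATE):
        return True
    if reply.count("!") > max_exclamations:
        return True
    # Reject pictographs/emoji by code point range.
    if any(ord(ch) >= 0x1F300 for ch in reply):
        return True
    return False
```

A rejected reply is simply skipped; the script moves on rather than retrying.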

Three Reply Archetypes

GPT is prompted to pick ONE of:

A) Micro-insight + Question: Share a specific observation from your own experience, then ask something you genuinely want answered.

"Ran into this building our brand tool — embeddings drift ~15% after 3 months without retraining. How often are you recalibrating?"

B) Respectful Challenge: Politely push back with data or a counterexample, then invite their perspective.

"Counterpoint: Stripe's docs are famously good but they still need a sales team for enterprise. Isn't the real question where self-serve stops working?"

C) Pattern Recognition: Connect their point to something non-obvious from a different domain.

"This mirrors what happened in recommendation systems — Netflix found that optimizing for clicks killed retention. The proxy metric trap is everywhere. What's your equivalent of 'watch time'?"

Customizing Your Voice

Edit the USER_CONTEXT variable (or equivalent prompt section) to describe your background:

USER_CONTEXT = """You are writing a Twitter reply as YOUR_NAME.

BACKGROUND:
- Brief description of who you are
- What you're building
- Genuine interests relevant to the accounts you target

VOICE:
- [Your preferred communication style]
- [Specific things to include or avoid]
"""

The more specific and authentic your context, the better the replies.

State Management

The script maintains auto_reply_state.json to track replied tweet IDs, preventing duplicate replies across sessions:

{
  "replied_tweets": ["tweet_id_1", "tweet_id_2"],
  "last_updated": "2026-03-06T09:30:00"
}

State is capped at the last 100 tweet IDs to prevent unbounded growth.

Engagement Logic

For each selected account the script:

  1. Fetches the 10 most recent original tweets (excluding retweets and @-replies)
  2. Filters out tweet IDs already replied to
  3. Selects the tweet with highest combined engagement (likes + replies)
  4. Generates a reply via GPT — if GPT outputs SKIP, the tweet is marked seen and skipped
  5. Likes the tweet first (pre-reply like is an algorithmic signal)
  6. Posts the reply
  7. Waits 3-6 seconds (randomized human-like delay) before moving to the next account
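Steps 2 and 3 amount to a simple filter-then-max over the fetched tweets. A sketch, assuming each tweet is a dict with `id`, `likes`, and `replies` keys (match these names to whatever rnet_twitter actually returns):

```python
def pick_tweet(tweets, replied_ids):
    """Drop already-replied tweets, then return the one with the
    highest combined engagement (likes + replies), or None."""
    fresh = [t for t in tweets if t["id"] not in replied_ids]
    if not fresh:
        return None
    return max(fresh, key=lambda t: t["likes"] + t["replies"])
```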

Output

Each session prints a summary and appends to your log. When called by a bot framework, the script outputs a JSON block after the ---JSON_OUTPUT--- separator for programmatic consumption:

[
  {
    "target": "simonw",
    "tweet": "LLMs are increasingly being used for...",
    "reply": "The retrieval side of this is underrated...",
    "url": "https://x.com/simonw/status/...",
    "liked": true
  }
]
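A caller can recover that block by splitting stdout on the separator. A minimal sketch:

```python
import json

def parse_session_output(stdout: str):
    """Return the parsed reply list from a session's stdout,
    or an empty list if the separator is absent."""
    marker = "---JSON_OUTPUT---"
    if marker not in stdout:
        return []
    return json.loads(stdout.split(marker, 1)[1])
```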

Obtaining Twitter Cookies

  1. Log in to Twitter/X in Chrome or Firefox
  2. Open DevTools > Application > Cookies > https://x.com
  3. Export the cookie values to a JSON file in the format expected by rnet_twitter.py
  4. Store the file at the path referenced by TWITTER_COOKIE_PATH

Do not commit cookie files to version control.

Rate Limiting Guidelines

  • 5 replies per session, 4 sessions per day = 20 replies/day maximum
  • 85% probability of running each session adds natural variability
  • 3-6 second delay between each reply action
  • 1-2 second delay between like and reply on the same tweet

Twitter's informal limits for reply actions are not publicly documented, but staying under 50 replies/day and spacing them across multiple sessions avoids most friction.
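The probability gate and the randomized delays can be sketched in a few lines (illustrative; the script's actual implementation may differ):

```python
import random
import sys
import time

RUN_PROBABILITY = 0.85  # see Configuration above

def maybe_skip_session():
    """Randomly skip ~15% of scheduled sessions so the posting
    pattern is not perfectly periodic."""
    if random.random() > RUN_PROBABILITY:
        print("Skipping this session (randomized variability)")
        sys.exit(0)

def human_delay(low=3.0, high=6.0):
    """Sleep a randomized interval between reply actions; returns
    the delay actually used."""
    d = random.uniform(low, high)
    time.sleep(d)
    return d
```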

Troubleshooting

Reply not posting: Verify your cookie file is fresh. Twitter sessions expire — re-export cookies from your browser.

GPT always outputs SKIP: Your USER_CONTEXT may not align with the selected accounts. Ensure your background gives GPT enough to work with for the topic domain.

Too many sycophancy rejections: GPT may default to affirmative openers. Increase the system prompt emphasis on the ban list or lower the temperature slightly.

Account flagged / rate limited by Twitter: Reduce MAX_REPLIES_PER_RUN to 3 and RUN_PROBABILITY to 0.6 for a more conservative cadence.
