In this tutorial, we will explore a quick way to create a Twitter bot to automatically post a fun piece of text to Twitter. The example text we will use is a list of unique ice cream flavors — each day, the bot will generate a new flavor by using the example text as the baseline pattern. Feel free to use your own example text if you’d like to focus on a topic that isn’t ice cream flavors.
See how it works with this example prompt! Simply click “Execute” and see what flavor you get back. To learn more about prompts, check out this article about how to write a prompt.
We will create a simple Twitter bot whose sole job will be to post at a regularly-scheduled interval with text generated by a large language model. Its only tasks will be to generate a piece of creative text, connect to the Twitter API, and publish the text in a Tweet. In later tutorials, we will explore how to add functionality such as liking and replying to Tweets!
Note: Twitter is very strict about transparency around where posted information comes from. Please be sure to disclose that your application is a bot.
If you already have accounts set up with these services, feel free to skip any steps you’ve already completed. You’re also welcome to choose your own scheduler if you would prefer to not use Heroku’s.
Go to Integrations > AI Providers, then paste your API key into the corresponding provider form. Click here for details on obtaining API keys by provider.

The simplest way to prototype a prompt to create your desired output is to go through the Mantium Platform. Navigate to AI Manager > Prompts > Add new prompt.

Next, add security policies: under Security Policies, drag Default Policies to Selected Policies. Click Done to save. You will know the policy has been applied when its name is visible under Security Policies.

For this simple prompt, most of the settings can be left at their default values. The most important settings to configure are Max Tokens, Temperature, and Stop Sequence.
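If you are unsure where to start, values along these lines work well for a short, single-line completion (illustrative suggestions, not Mantium’s defaults):

Max Tokens: 20       # caps completion length; a flavor name only needs a few tokens
Temperature: 0.9     # higher values give more varied, creative flavors
Stop Sequence: \n    # a newline stops generation after a single flavor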
Click Test Run to test the model’s output! Clicking Test Run multiple times will likely yield different results each time. You can use this as an opportunity to test and tweak prompt settings. Once you are consistently happy with the results, click Save.
This tutorial assumes that you are somewhat familiar with initializing repositories and version control. Set up your project’s top-level directory however you are most comfortable, or use the examples provided below. Make sure you have your virtual environment active before you start installing packages.
Example command for project and virtual environment setup:
$ mkdir twitter_bot
$ cd twitter_bot
$ python -m venv twitter-bot-venv
$ source twitter-bot-venv/bin/activate
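One note in case you’re on Windows: the activation command differs there. Assuming the same environment name, it would be:

twitter-bot-venv\Scripts\activate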
Initialize the files you’ll need for this project:
$ touch .gitignore .env requirements.txt
$ mkdir app
$ cd app
$ touch app.py load_env.py mantium_call.py
Your directory structure should now look like this:
twitter_bot
├── .gitignore
├── .env
├── requirements.txt
└── app
    ├── app.py
    ├── load_env.py
    └── mantium_call.py
Add the following libraries to your active virtual environment:
$ pip install mantiumapi tweepy python-dotenv
Then, use pip freeze to add your dependencies to the requirements file:
$ pip freeze > requirements.txt
If you end up adding more libraries to your dependencies using pip install, you can always re-run this command to update your requirements file. Next, populate the .env file with the credentials the bot will need:
# Environment Variables
# Mantium
MANTIUM_USER=your_account_email_address
MANTIUM_PASSWORD=your_mantium_password
PROMPT_ID=prompt_id_from_mantium_logs
# Twitter
TWITTER_API_KEY=your_value_here
TWITTER_API_SECRET=your_value_here
TWITTER_ACCESS_TOKEN=your_value_here
TWITTER_ACCESS_SECRET=your_value_here
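Because .env now holds real credentials, make sure it is never committed. The tutorial doesn’t list the .gitignore contents, so here is a minimal suggestion covering the secrets file, the virtual environment, and Python bytecode caches:

# .gitignore (suggested contents)
.env
twitter-bot-venv/
__pycache__/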
In the main body of the application, we accomplish several things:

- Load the Twitter credentials using the load_twitter_env() function that we define in load_env.py.
- Set the prompt_output variable by calling prompt_results() from the mantium_call.py module. Calling this function is what executes the prompt we configured earlier in the tutorial.
- Using the api.update_status method, post prompt_output to the bot’s Twitter timeline as a status update!

import tweepy
from load_env import load_twitter_env
import mantium_call

# load the Twitter credentials from the environment
consumer_key, consumer_secret, access_token, access_secret = load_twitter_env()

# call mantium_call to return the results of the prompt to post
prompt_output = mantium_call.prompt_results()

# authenticate to Twitter using Tweepy
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)

# connect to the API
api = tweepy.API(auth)

# test authentication; there is no point continuing if this fails
try:
    api.verify_credentials()
    print("Authenticated to Twitter!")
except Exception:
    raise SystemExit("Error authenticating to Twitter. Please re-configure credentials.")

# send the tweet
try:
    api.update_status(f"Today's Ice Cream Flavor is {prompt_output}")
    print(f"Successful Tweet: Today's Ice Cream Flavor is {prompt_output}")
except Exception:
    print("Error posting.")
Now let’s take a look at the helper modules: load_env.py and mantium_call.py.
Using the python-dotenv library, we call load_dotenv() to load the credentials that we set in the .env file above. I separated the process into two functions: one that loads the Mantium credentials and one that loads the Twitter credentials.
import os
from dotenv import load_dotenv

load_dotenv()

def load_mantium_env():
    mantium_user = os.getenv('MANTIUM_USER')
    mantium_password = os.getenv('MANTIUM_PASSWORD')
    prompt_id = os.getenv('PROMPT_ID')
    return mantium_user, mantium_password, prompt_id

def load_twitter_env():
    consumer_key = os.getenv("TWITTER_API_KEY")
    consumer_secret = os.getenv("TWITTER_API_SECRET")
    access_token = os.getenv("TWITTER_ACCESS_TOKEN")
    access_secret = os.getenv("TWITTER_ACCESS_SECRET")
    return consumer_key, consumer_secret, access_token, access_secret
Here, we use the load_mantium_env() function from load_env.py to grab the Mantium credentials, and we confirm a successful login by checking whether we were able to get mantium_token by calling client.BearerAuth().get_token().
The prompt_id value is used to retrieve the specific prompt you configured earlier in this guide.
The prompt_results() function executes the prompt with an empty string as input, since this prompt does not require any. It then polls the execution, refreshing until the status is ‘COMPLETED’, and checks that the result is not an empty string, re-running the prompt if it is. Once the status is complete and the output is a non-empty string, the function returns the prompt output.
import time
from load_env import load_mantium_env

# load Mantium credentials before importing the Mantium modules
mantium_user, mantium_password, prompt_id = load_mantium_env()

from mantiumapi import prompt
from mantiumapi import client

# obtain bearer token and confirm log-in success
def authenticate():
    mantium_token = client.BearerAuth().get_token()
    if mantium_token:
        print(f"Mantium Login Successful with User {mantium_user}")
    else:
        return "Unable to obtain bearer token - please check your Mantium credentials"

# authenticate to Mantium
authenticate()

# retrieve the Ice Cream Flavor Generator prompt by ID from Mantium
prompt = prompt.Prompt.from_id(prompt_id)

def prompt_results():
    """Retrieve results from the prompt above - uses a pre-configured prompt from ID."""
    executed_prompt = prompt.execute("")
    executed_prompt.refresh()
    time.sleep(1)
    while executed_prompt.status != 'COMPLETED':
        time.sleep(1)  # prompt execution takes a small amount of time; this ensures a response
        executed_prompt.refresh()
        print("Prompt Execution Status: ", executed_prompt.status)
    assert isinstance(executed_prompt.output, str)
    prompt_result = executed_prompt.output
    # check that the result is not an empty value and re-run the prompt if it is
    while prompt_result == "" or prompt_result == "{}":
        print("Prompt Result Empty. Re-running prompt.")
        executed_prompt = prompt.execute("")
        executed_prompt.refresh()
        time.sleep(1)
        while executed_prompt.status != 'COMPLETED':
            time.sleep(1)  # prompt execution takes a small amount of time; this ensures a response
            executed_prompt.refresh()
            print("Prompt Execution Status: ", executed_prompt.status)
        assert isinstance(executed_prompt.output, str)
        prompt_result = executed_prompt.output
    return prompt_result

if __name__ == "__main__":
    result = prompt_results()
    assert isinstance(result, str)
    print(result)
The code at the bottom allows for testing the individual script to make sure that it successfully authenticates to Mantium and can retrieve a prompt result. To test this script on its own (without worrying about app.py posting to Twitter), run the following command from inside the twitter_bot directory:
$ python3 app/mantium_call.py
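One caveat with the polling loops in mantium_call.py: if an execution never reaches ‘COMPLETED’ (because it errored, for example), the while loop will spin forever. A bounded variant is sketched below; the helper name and the 30-second timeout are my own choices, not part of the original tutorial:

import time

def wait_for_completion(executed_prompt, timeout_seconds=30):
    """Poll the executed prompt, giving up if it never completes."""
    deadline = time.time() + timeout_seconds
    while executed_prompt.status != 'COMPLETED':
        if time.time() > deadline:
            raise TimeoutError("Prompt did not complete in time.")
        time.sleep(1)
        executed_prompt.refresh()
    return executed_prompt.output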
In your terminal, you can test the full application by running the following from the twitter_bot directory:
$ python3 app/app.py
If successful, you should have a fresh Tweet on your Twitter bot’s Timeline!
Note: If you use Python Poetry, you’ll need a Poetry Buildpack — follow the instructions to create a requirements file for Heroku to use.
To deploy using the Heroku CLI, first log in to your Heroku account from your web browser, click New > Create new app in the top right corner of your main dashboard, and name your app.
Using the CLI, you can log in to your Heroku account and follow the prompts to set up an SSH key.
$ heroku login
Once you’re set up, use the following commands while in your twitter_bot directory. If you’ve already initialized a git repository, skip running git init.
$ cd twitter_bot
$ git init
$ heroku git:remote -a your-application-name
$ git add -A
$ git commit -m 'your commit message'
$ git push heroku master
To rename your deploy branch from master to main (the modern convention) and re-deploy, run the following:
$ git checkout -b main
$ git branch -D master
$ git push heroku main
Your main branch is now named main, and any subsequent pushes will be done using git push heroku main.
Because we are not pushing up the .env file to Heroku, we can set the configuration variables in the Heroku CLI with the following commands:
$ cd twitter_bot
$ heroku config:set MANTIUM_USER=your_account_email_address
$ heroku config:set MANTIUM_PASSWORD=your_mantium_password
$ heroku config:set PROMPT_ID=prompt_id_from_mantium_logs
$ heroku config:set TWITTER_API_KEY=your_value_here
$ heroku config:set TWITTER_API_SECRET=your_value_here
$ heroku config:set TWITTER_ACCESS_TOKEN=your_value_here
$ heroku config:set TWITTER_ACCESS_SECRET=your_value_here
You can also set multiple variables at the same time by combining the declarations into one command (see the example just below), or you can configure them in the Heroku UI. If you ever want to remove any of your config variables, you can run a command that looks like:
$ heroku config:unset CONFIG_VARIABLE
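For instance, a single command that sets two variables at once (values are placeholders) would look like:

$ heroku config:set TWITTER_API_KEY=your_value_here TWITTER_API_SECRET=your_value_here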
More on Heroku configuration variables here.
To add a scheduler, click on your application in your main dashboard and locate the “Configure Add-ons” link in the left half of the menu. Click there and search for “Heroku Scheduler” in the “Add-ons” search bar. From there, you will be able to configure a command and time for the command to run. Click “Add Job” in the top right corner, select the timing and paste in the following command:
python3 app/app.py
Click “Save Job” and you are all set!
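At the time of writing, Heroku Scheduler offers every-10-minutes, hourly, and daily frequencies (times are in UTC), so a once-a-day flavor bot should use the daily option.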
Once your application has been run by the Heroku Scheduler, you should be able to access your most recent logs by typing in the following command in your command line:
$ heroku logs
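If you’d rather stream new log lines as they arrive instead of getting a one-time dump, the Heroku CLI’s --tail flag keeps the session open:

$ heroku logs --tail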
To view only your logged prompt outputs in the Mantium interface, log in to your Mantium account and click Monitoring > Logs.
This article was originally published on Medium.
With this code, you now have a template for creating other fun applications that leverage text generation with a large language model! The setup is relatively straightforward and has plenty of room for additional creativity. If you’d like to take this project further, or have any questions about this tutorial, please don’t hesitate to get in contact with us, or join us in the Mantium Discord!