I was using PyGithub to access the GitHub API v3 and read the language information for all the repositories of around 700,000 users. The GitHub API allows 5,000 authenticated requests per hour, but the server started refusing my connections after roughly 30,000 requests, even though my requests were authenticated and well within the limit.
To keep the program running until all of the user data was collected, I used Python's subprocess module. It lets a Python script run other scripts, along with some handy shell commands. The subprocess.Popen(args) constructor launches the program you want to run. For example,
```python
programPath = "python"
scriptPath = "/foo/bar/script.py"
proc = subprocess.Popen([programPath, scriptPath])
```
The code above acts like the shell command python /foo/bar/script.py.
So here is the whole script (suppose its filename is process.py) that keeps your Python script running in case of some unexpected error:
```python
# process.py
import subprocess
import sys

if len(sys.argv) > 1:
    programPath = "python"
    scriptPath = sys.argv[1]

    # Launch the target script and wait for it to finish.
    proc = subprocess.Popen([programPath, scriptPath])
    pid = proc.pid
    exitcode = proc.wait()
    print(pid)

    # Relaunch it until it exits cleanly (exit code 0).
    while exitcode != 0:
        proc = subprocess.Popen([programPath, scriptPath])
        pid = proc.pid
        exitcode = proc.wait()
        print(pid)
```
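One caveat worth noting (my addition, not part of the original script): if the wrapped script crashes immediately, this loop restarts it in a tight cycle. A small sketch of a refinement, assuming the same layout, that uses sys.executable instead of a hard-coded "python" and pauses between restarts:

```python
import subprocess
import sys
import time

RESTART_DELAY = 5  # seconds to wait before relaunching; an arbitrary choice

def supervise(script_path):
    """Re-run script_path until it exits with status 0; return the attempt count."""
    attempts = 0
    while True:
        attempts += 1
        # sys.executable is the interpreter running this script,
        # so the child runs under the same Python version.
        proc = subprocess.Popen([sys.executable, script_path])
        print("started pid", proc.pid)
        if proc.wait() == 0:
            return attempts
        time.sleep(RESTART_DELAY)

if __name__ == "__main__":
    if len(sys.argv) > 1:
        supervise(sys.argv[1])
```

The delay is a judgment call: without it, a script that dies on startup (for example, on a syntax error) would be respawned as fast as the OS can fork.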
So if you run

```shell
python process.py foo-bar.py
```

from a terminal, process.py will keep re-running foo-bar.py until it terminates gracefully.
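To see the restart behavior end to end, here is a self-contained sketch (my own demo, not from the original post) that generates a hypothetical flaky worker script which crashes on its first two runs, then supervises it with the same Popen/wait loop as process.py:

```python
import os
import subprocess
import sys
import tempfile

# A hypothetical worker that tracks its run count in a file passed as
# argv[1], exits with status 1 on the first two runs, and succeeds on the third.
worker_source = """\
import os, sys
counter_file = sys.argv[1]
runs = int(open(counter_file).read()) if os.path.exists(counter_file) else 0
runs += 1
open(counter_file, "w").write(str(runs))
if runs < 3:
    sys.exit(1)  # simulate a crash on the first two runs
print("finished after", runs, "runs")
"""

with tempfile.TemporaryDirectory() as tmp:
    script_path = os.path.join(tmp, "flaky.py")
    counter_file = os.path.join(tmp, "runs.txt")
    with open(script_path, "w") as f:
        f.write(worker_source)

    # The same restart loop as process.py: relaunch until exit code 0.
    attempts = 0
    exitcode = None
    while exitcode != 0:
        proc = subprocess.Popen([sys.executable, script_path, counter_file])
        exitcode = proc.wait()
        attempts += 1

print("worker succeeded after", attempts, "attempts")
```

Running this prints the worker's success message on its third attempt, which is exactly the behavior process.py provides for a long-running crawler.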