A number of customers have asked how to work with jobs using Deadline’s internal Python API. This knowledge base article presents a script that covers most of the work needed to access tasks, reports, and the jobs themselves. Documentation for the various data types is linked from the function calls that use them. Hopefully this script does enough to be inspiring. 🙂
While it was tested against Deadline 10.3.1.3, the script should work with previous versions as far back as 6.0.0 when we introduced compressed job reports.
Comments along the way explain what the script is doing, and you can copy and paste as you need. There are also a few pre-baked scripts over on GitHub if you want more ideas. If you’re a little new to Deadline’s use of Python, execution starts at the “__main__()” function at the bottom.
To run this yourself, save the script as “job.py” and run it like so:
Windows:
%DEADLINE_PATH%\deadlinecommand ExecuteScriptNoGui C:\Path\To\job.py -j <job id>
Linux:
$DEADLINE_PATH/deadlinecommand ExecuteScriptNoGui /path/to/job.py -j <job id>
macOS:
/Applications/Thinkbox/Deadline10/Resources/deadlinecommand ExecuteScriptNoGui /path/to/job.py -j <job id>
And now, on with the show!
from Deadline.Scripting import RepositoryUtils # RepoUtils accesses almost everything
from argparse import ArgumentParser # This is just to make running this easier
import System # C# stuff for getting arguments from the user when running from DeadlineCommand
import sys # Just for exiting with an error if no job was found
def job_states(job):
    '''
    Examples of working with job objects

    Helpful docs:
    * https://docs.thinkboxsoftware.com/products/deadline/10.3/2_Scripting%20Reference/class_deadline_1_1_jobs_1_1_job.html
    '''
    print("Loaded job '{}'".format(job.JobName))

    # Progress is broken up into different pieces so we can build those
    # nicely coloured progress bars if you want to use them
    _total_tasks = job.JobCompletedTasks + job.JobPendingTasks + job.JobQueuedTasks + job.JobRenderingTasks + job.JobSuspendedTasks
    progress = job.JobCompletedTasks * 100 / job.JobTaskCount
    print("Current status is {} and is {}% complete.".format(job.JobStatus, progress))
def task_states(tasks):
    '''
    Here are examples of working with task objects

    Helpful docs:
    * https://docs.thinkboxsoftware.com/products/deadline/10.3/2_Scripting%20Reference/class_deadline_1_1_jobs_1_1_task_collection.html
    * https://docs.thinkboxsoftware.com/products/deadline/10.3/2_Scripting%20Reference/class_deadline_1_1_jobs_1_1_task.html
    '''
    print("Tasks:")
    for task in tasks.TaskCollectionAllTasks:
        print("* Id: {}, {} ({} with {} errors)".format(task.TaskId, task.TaskProgress, task.TaskStatus, task.TaskErrorCount))
def report_states(job_reports):
    '''
    Here are examples of working with job reports (errors, requeues, etc)

    Helpful docs:
    * https://docs.thinkboxsoftware.com/products/deadline/10.3/2_Scripting%20Reference/class_deadline_1_1_reports_1_1_job_report_collection.html
    * https://docs.thinkboxsoftware.com/products/deadline/10.3/2_Scripting%20Reference/class_deadline_1_1_reports_1_1_report.html
    '''
    error_reports = job_reports.GetErrorReports()
    print("The top five error reports were:")
    for index, report in enumerate(error_reports):
        print("* {} (average CPU usage was {}%)".format(report.ReportMessage, report.ReportAverageCpu))

        # If you want the uncompressed contents of the log as a giant in-memory string you
        # can use RepositoryUtils.GetJobReportLog(). This is unbounded though, so you can
        # technically run out of memory
        log_location = RepositoryUtils.GetJobReportLogFileName(report)
        print("  * Log is compressed and stored at {}".format(log_location))

        # Stop after the fifth report
        if index >= 4:
            break
    if len(error_reports) == 0:
        print("* None")
def job_dependencies(job):
    '''
    Here is an example of cloning an existing job and making the new
    job dependent on the existing one
    '''
    new_job = RepositoryUtils.ResubmitJob(job, job.JobFrames, job.JobFramesPerTask, submitSuspended=True)
    new_job.JobName = "Copy: {}".format(new_job.JobName)
    new_job.SetJobDependencyIDs([job.JobId])
    RepositoryUtils.SaveJob(new_job)
    RepositoryUtils.PendJob(new_job)
    print("Submitted '{}'".format(new_job.JobName))
def __main__(*kwargs):
    parser = ArgumentParser(prog='jobs', description='Examples of reading and manipulating job objects in Deadline')
    parser.add_argument('-j', '--job', required=False)
    parser.add_argument('-d', '--dependency', action='store_true', help='If present, create a dependent job')

    # We need to be clever because we're running inside of C#, so strip off
    # deadlinecommand's own leading arguments. This is fairly brittle because we
    # can't be sure where this script will run; it'll only work via
    # `deadlinecommand ExecuteScript` or `deadlinecommand ExecuteScriptNoGui`
    args = parser.parse_args(list(System.Environment.GetCommandLineArgs())[3:])
    job_id = args.job
    create_dependency = args.dependency

    # Whether to use what's cached in memory or not. This updates every 10
    # seconds or so in the Monitor and is always fresh when running through
    # `deadlinecommand ExecuteScript`
    invalidate_cache = False
    job = RepositoryUtils.GetJob(job_id, invalidate_cache)
    if job is None:
        print("Could not find job with id '{}'. Giving up early".format(job_id))
        sys.exit(1)

    # Some examples of reading job info
    job_states(job)
    print('-' * 79)

    # Some examples of reading task info
    tasks = RepositoryUtils.GetJobTasks(job, invalidate_cache)
    task_states(tasks)
    print('-' * 79)

    # Some examples of reading job reports
    job_reports = RepositoryUtils.GetJobReports(job_id)
    report_states(job_reports)
    print('-' * 79)

    # An example of creating a dependent job
    if create_dependency:
        RepositoryUtils.AddJobHistoryEntry(job_id, "Creating a copy of this job")
        job_dependencies(job)
        print('-' * 79)
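As an aside, the argument-handling pattern in `__main__()` is easy to test outside Deadline: pass a hand-built list to `parse_args()` in place of the sliced `System.Environment.GetCommandLineArgs()`. A minimal sketch in plain Python (no Deadline imports; the job id shown is a made-up placeholder):

```python
from argparse import ArgumentParser

# Mirror of the parser built in __main__ above, runnable anywhere
parser = ArgumentParser(prog='jobs', description='Examples of reading and manipulating job objects in Deadline')
parser.add_argument('-j', '--job', required=False)
parser.add_argument('-d', '--dependency', action='store_true', help='If present, create a dependent job')

# Stand-in for GetCommandLineArgs()[3:] -- the flags left over once
# deadlinecommand's own leading arguments have been stripped off
args = parser.parse_args(['-j', '5f3a1b2c4d5e6f7a8b9c0d1e', '-d'])
print(args.job)         # the job id string
print(args.dependency)  # True, because -d was present
```

This makes it easy to check your flag handling before running the script through `deadlinecommand`.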
If you have follow-up questions, feel free to post in the forums or open a support request.