Version: Deadline 8.0
INTRODUCTION - WHY YOU SHOULD CARE
If you’ve ever worried about your Deadline Database becoming too large, if you hate scrolling through thousands of completed jobs in the Deadline Monitor, or if you just want to back up jobs, then archiving is the method for you! Archiving a job removes the job’s information from the Database and puts it into a zip file in your Deadline Repository. What you do with the archived job is then up to you! This gives you a non-destructive workflow where you can re-run old jobs without the performance hit of having every job stored in your Database, and it lets you focus your attention on jobs currently going through your pipeline. All without losing the jobs’ information!
THE BASICS
So archiving jobs appeals to you, or you’re at least curious, and you’re wondering how much effort this’ll be for your tens, hundreds, or thousands of jobs. The good news is that after you’ve configured a few settings, archiving completed jobs can be an automated process! This means that any past, present, or future jobs in your Repository can be archived to your liking.
MANUAL JOB ARCHIVING
Even if you want to automatically archive every completed job, you should know how to archive manually first. To archive a job manually, just right-click a job in the jobs panel that is suspended, completed, or failed (it wouldn’t make much sense to archive a job that’s currently rendering) and select “Archive Job”:
This will bring up a dialog with a few options:
Selecting multiple jobs will allow you to archive a handful of jobs at once. As per usual, if you prefer the command line, you can use this Deadline Command:
ArchiveJob
    Archives the job.
    [Job ID(s)]    The job ID, or a list of job IDs separated by commas
Which could look like this:
"%DEADLINE_PATH%/deadlinecommand.exe" ArchiveJob [jobid_To_Archive]
We also provide this functionality in our Scripting API. Consider this example in the form of a context menu job script:
from Deadline.Scripting import *

def __main__( *args ):
    deleteFromDatabase = False
    customArchiveFolder = None  # If this is None, stores it in the repository jobsArchived folder

    selectedJobs = MonitorUtils.GetSelectedJobs()
    for job in selectedJobs:
        RepositoryUtils.ArchiveJob( job, deleteFromDatabase, customArchiveFolder )
        print( "Archived %s" % job.JobName )
You can find out more by browsing the Scripting API’s ArchiveJob documentation.
From here it’s up to you whether you’d like to store the archive in your Repository or in an external location. My personal recommendation is to archive your jobs in one location so you don’t lose track of them. If you’re archiving a job for a special purpose, like sending it to Support, then you probably don’t want to delete it from the Database, since that archive isn’t meant for long-term storage.
WHAT’S HALFWAY BETWEEN MANUAL AND AUTOMATIC?
You can also pre-emptively tell Deadline that you’d like a job archived upon its completion. Open any Monitor or integrated submitter and you’ll have the ability to specify an action on job completion. In this case, if you want the job archived the moment it’s completed, select this option (you can also do nothing, or even delete the job if that floats your boat).
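If you submit jobs through your own scripts rather than a submitter, you can set the same behaviour at submission time with the OnJobComplete key in the job info file you hand to Deadline Command. Here’s a minimal sketch in Python; the path and the Plugin/Name values are just placeholders:

# A minimal sketch: writing OnJobComplete into the job info file used for submission.
# The path and the Plugin/Name values here are placeholders.
with open( "path/to/job_info.job", "w" ) as f:
    f.write( "Plugin=Python\n" )
    f.write( "Name=Testing Python Plugin\n" )
    # OnJobComplete accepts Nothing, Archive, or Delete
    f.write( "OnJobComplete=Archive\n" )

Submitting that file (along with a plugin info file) through Deadline Command produces a job that archives itself the moment it completes.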
Either way, on a large scale operation with thousands of jobs, this can be a hassle.
AUTOMATIC JOB ARCHIVING
Thankfully, the easiest way to handle archiving a large number of jobs is to just tell Deadline to do it! To set this up in the Deadline Monitor, assuming you have the permissions to do so, head to Tools > Configure Repository Options
And then select Job Settings > Cleanup
Note that these options will only apply to completed jobs. Suspended and failed jobs won’t be cleaned up automatically (for good reason).
Here you can choose how and when Deadline cleans up your completed jobs (or whether it should at all). Since we’re talking about archiving jobs, you’ll want to set the cleanup mode to ‘Archive’ and select how many days need to pass before jobs are cleaned up. In this case, I’ve told Deadline to only clean up my jobs after 2 weeks have passed, and only if they haven’t been modified in the last two hours. Now, whenever ‘House Cleaning’ occurs (performed by Pulse or by Workers, depending on your configuration), Deadline will scan the completed jobs and determine if any meet these requirements. If they do, they’ll be archived and disappear from the Monitor. See the House Cleaning documentation for more information.
DUDE, WHERE’S MY DATA?!
No need to worry! When a job is archived, a zip file is created with all the necessary information for the job inside of it, and the job is deleted from the Database (unless you archived it manually and chose to keep it). Using this archive, you can then import the job into any Repository of the same major version, or even back into the same Repository if you need to redo the job.
However, you should know we don’t remove ALL information about the job from the Database. We actually continue to store the job’s stats even after it’s been archived. In fact, even if you delete the job, we won’t delete the stats associated with it!
IMPORTING ARCHIVED JOBS
The other part of archiving is the ability to load archived jobs back into a Repository. If the archived jobs you wish to import are located in the Repository, they’ll be organized into folders by the year and month those jobs were created:
Z:\DeadlineRepository8\jobsArchived\2016-09\morgan__Python__Testing Python Plugin__57d83eaa99bb592db889957f.zip
The name of the archived job file is generated by combining the user who submitted the job, the plugin used for the job, the name of the job, and the job ID. This is to ensure that each archived job is unique, but more readable than a random assortment of characters.
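Since those pieces are joined with double underscores, it’s easy to pull them back apart if you’re scripting against a folder of archives. A minimal sketch in Python, reusing the example file above (and assuming the job name itself contains no double underscores):

import os

# The example path is Windows-style, so run this on Windows.
archivePath = r"Z:\DeadlineRepository8\jobsArchived\2016-09\morgan__Python__Testing Python Plugin__57d83eaa99bb592db889957f.zip"

# The file name joins user, plugin, job name, and job ID with double underscores.
baseName = os.path.splitext( os.path.basename( archivePath ) )[0]
user, plugin, jobName, jobId = baseName.split( "__" )
print( "%s submitted '%s' (%s) with the %s plugin" % ( user, jobName, jobId, plugin ) )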
Importing the archived jobs is fairly simple and can be done either on the command line or through the Deadline Monitor. If you’re using Deadline Monitor, select ‘File’ > ‘Import Archived Jobs...’ and then select (or multi-select) the jobs you wish to import.
Or if you prefer, you can do this with the command line interface using the following Deadline Command:
ImportJob
    Imports the archived job.
    [Delete Archive]       If the original archive should be deleted (true/false).
    [Archived Job File]    The archived job file. Specify multiple files as separate parameters.
Which could look like this:
"%DEADLINE_PATH%/deadlinecommand.exe" ImportJob false "path/to/archived/job.zip"
And of course, here’s how you can import archived jobs with the Scripting API:
# import_jobs.py
from Deadline.Scripting import *

def __main__( *args ):
    archivedJobFiles = [
        "path/to/archived/job1.zip",
        "path/to/archived/job2.zip",
        "path/to/archived/job3.zip"
    ]
    deleteArchive = True

    for archivedJob in archivedJobFiles:
        RepositoryUtils.ImportJob( archivedJob, deleteArchive )
        print( "Imported %s" % archivedJob )
This needs to be run in the context of Deadline though, so here’s how you’d execute this Python file:
"%DEADLINE_PATH%/deadlinecommand.exe" ExecuteScript "path/to/import_jobs.py"
You can find out more by browsing the Scripting API’s ImportJob documentation.
VERSION DIFFERENCES
There’s no guarantee that you can import an archived job created by a different major version of Deadline. As time goes on we add more and more features, and some of those features require us to overhaul how we store information (including jobs). But fear not, for we don’t make those breaking changes on minor versions, only on major versions.
UNPACKING IT ALL - TINKER AWAY
This next portion is for those who love to tinker; feel free to skip this part if browsing the archive doesn’t interest you.
Although I’ll give you an example, if you’re curious as to what one of your archived jobs would look like, it’s easy to take a look yourself by manually archiving a job to your desktop and perusing the resulting zip file. With that out of the way, it’s time to examine the contents of an archived job. Consider the following archived job:
Z:\DeadlineRepository8\jobsArchived\2016-09\morgan__Python__Testing Python Plugin__57d83eaa99bb592db889957f.zip
This archive contains all the information necessary for a job stored in Deadline. Looking inside the archived job, we’ll see an assortment of files (contents will vary by job):
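If you’d rather poke around programmatically, Python’s standard zipfile module can list the archive’s contents without extracting anything. A minimal sketch, reusing the example path above:

import zipfile

archivePath = r"Z:\DeadlineRepository8\jobsArchived\2016-09\morgan__Python__Testing Python Plugin__57d83eaa99bb592db889957f.zip"

# Print every file stored in the archived job, along with its uncompressed size.
with zipfile.ZipFile( archivePath ) as archive:
    for info in archive.infolist():
        print( "%s (%d bytes)" % ( info.filename, info.file_size ) )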
The first two files are compressed report files (named after their report IDs, with a .bz2 extension), but since they’re compressed, opening them in an editor will show gibberish. If you wish to browse their contents without importing the job into Deadline, you’ll have to decompress them. An easy way to do this is to run your favourite graphical unarchiver on the file, such as 7-Zip or WinRAR. If you prefer the command line interface, then you can use one of the following commands on the report you want to read:
bzip2 -dk "path/to/archived/log.bz2"
7z.exe e "path/to/archived/log.bz2"
WinRAR.exe e "path/to/archived/log.bz2"
There’s obviously more, but you get the idea. These commands will decompress the file and keep the original archive around. Super handy!
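You can also decompress a report from Python with the standard bz2 module. A minimal sketch, with a placeholder path:

import bz2

# Decompress an archived report, keeping the original .bz2 file around.
reportPath = "path/to/archived/log.bz2"
with bz2.BZ2File( reportPath ) as compressed:
    reportText = compressed.read()

# Write the decompressed log next to the original, minus the .bz2 extension.
with open( reportPath[:-4], "wb" ) as decompressed:
    decompressed.write( reportText )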
The next three files are job.json, tasks.json, and reports.json. I won’t explain everything stored in these files, but thankfully their names reflect their contents. The job.json file contains job information like the name of the job, the user who submitted it, a list of auxiliary files, etc. The tasks.json file contains information about each task for the job, like the Deadline Worker that rendered it, how many errors it generated, etc. At this point you can probably guess that reports.json contains information on all the reports for the job.
If you open any of those files to see what the JSON objects look like, you’ll be greeted with this mess:
While this isn’t compressed in the same sense as the report files, it has been minified, so it’s not as readable as a normal text document. If you really want to browse the contents of these files, you’ll probably want to pretty-print the JSON (use your preferred text editor or website to do this), which will format it like this:
Much better!
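If you’d rather stay on the command line, Python’s json module can do the prettifying for you. A minimal sketch, with a placeholder path to an extracted job.json:

import json

# Load the minified JSON and re-serialize it with indentation.
with open( "path/to/extracted/job.json" ) as f:
    data = json.load( f )

print( json.dumps( data, indent=4, sort_keys=True ) )

The stdlib one-liner python -m json.tool job.json accomplishes the same thing.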
The last file in this example is the auxiliary file submitted with this job, though a job can have more than one. In this case, the scene file (scenefile.py) was submitted to Deadline, so it counts as an auxiliary file and is stored when the job is archived. The contents of this particular file don’t matter for this blog post; any file submitted with the job is an auxiliary file. Unlike the other files, auxiliary files aren’t generated from the Database. They keep the same contents as when they were submitted, so if it’s a plaintext file like this one, you can easily look at it, as the sketch below shows.
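Here’s a minimal sketch of reading a plaintext auxiliary file straight out of the archive, reusing this post’s example and assuming the auxiliary file sits at the top level of the zip as it does here:

import zipfile

archivePath = r"Z:\DeadlineRepository8\jobsArchived\2016-09\morgan__Python__Testing Python Plugin__57d83eaa99bb592db889957f.zip"

# Read the auxiliary scene file directly from the archived job and print it.
with zipfile.ZipFile( archivePath ) as archive:
    print( archive.read( "scenefile.py" ).decode( "utf-8" ) )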
Best of luck if you’re trying to look through binary scene files though...
HELP ME HELP YOU
Another reason I’d like to bring archiving to light is that it’s extremely beneficial on our side of things for diagnosing problems. Support often needs a lot of information from you regarding an error, and sending us the archived job makes this a lot easier for everyone involved. There’s no need for you to hunt down the correct log to send, or search for which settings the job uses, and Support then has a job that exhibits the problematic behaviour. This saves everyone time and headaches by preventing back-and-forth responses asking for more information, which allows us to react quicker to issues that pop up.
It’s worth noting that most jobs you want to send to Support won’t be ‘Completed’ as they’ll have failed in some way. Due to this you’ll have to manually archive the job since it won’t be picked up automatically by House Cleaning.
CONCLUSION
Archiving your jobs is a simple process, and hopefully this blog post has encouraged you to give it a try with your own Repository. Besides, why wouldn’t you want to keep your Repository neat and tidy?!