Version 1.3

1.3.0
This version adds a few features and updates:

pVACvector now accepts a list of spacers to use when testing junction epitopes. These can be specified using the --spacers parameter with a comma-separated list of spacer peptides. Including the string None will also test each junction without spacers. The default is None,HH,HHC,HHH,HHHD,HHHC,AAY,HHHH,HHAA,HHL,AAL.
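As a sketch of how such a value might be interpreted (illustrative code, not pVACvector's actual implementation; the parameter name and default string come from the note above):

```python
# Default taken from the release note above.
DEFAULT_SPACERS = "None,HH,HHC,HHH,HHHD,HHHC,AAY,HHHH,HHAA,HHL,AAL"

def parse_spacers(value):
    """Split a comma-separated spacer list. The literal string 'None'
    means 'also test this junction without any spacer'."""
    return [None if token == "None" else token for token in value.split(",")]
```

With the default value, the resulting list starts with None (no spacer) followed by the ten spacer peptides.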
The --expn-val cutoff parameter has been updated to be a float instead of an integer. This allows the user to provide a decimal cutoff for the filtering on gene and transcript expression values. Previously, only whole numbers were accepted.

Decimal numbers in the pVACseq reports are now rounded to three decimal places. Previously, they were not rounded.
In addition, this version also fixes a few bugs:

The --normal-vaf cutoff value was incorrectly defaulting to 0.2 instead of 0.02. This resulted in the coverage filter not being as stringent as it should have been.

There were a number of bugs in pVACapi and pVACviz that would prevent a user from submitting jobs using the interface under certain conditions. These have been resolved.

pVACseq previously did not support SVs in the input VCF where the alt had a value of <DEL>. These kinds of variants are now supported.
1.3.1
This version is a hotfix release. It fixes the following issues:

Some prediction algorithms might predict a binding affinity of 0, which could lead to division-by-zero errors when calculating the fold change. In this situation we now set the fold change to inf (infinity).

Previously, the --maximum-transcript-support-level threshold was not getting propagated to the main pipeline step correctly, resulting in errors in the transcript support level filter.

There was a bug in the multiprocessing logic that would result in certain steps getting executed more than once, which in turn would lead to FileNotFound errors when these duplicate executions happened at the same time.
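The zero-affinity guard described above can be sketched as follows (illustrative code, not pVACseq's actual implementation; it assumes fold change is computed as wild-type score over mutant score):

```python
import math

def fold_change(wt_score, mt_score):
    """Return WT/MT binding-affinity fold change, mapping a mutant
    score of 0 to infinity instead of raising ZeroDivisionError."""
    if mt_score == 0:
        return math.inf
    return wt_score / mt_score
```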
1.3.2
This version is a hotfix release. It fixes the following issues:
A bug in the parsing code of the binding prediction output files would result in only some binding prediction output files getting processed when using multiprocessing. This could cause incomplete output reports that were missing predictions for some input variants. pVACseq, pVACfuse, and pVACvector runs that were done without multiprocessing should have been unaffected by this bug.
1.3.3
This version is a hotfix release. It fixes the following issues:
We were previously using our own locking logic while running in multiprocessing mode, which contained a bug that could result in runs getting stuck waiting on a lock. This release switches to using the locking implementation provided by the pymp-pypi multiprocessing package.

In an attempt to reduce cluttered output generated by TensorFlow, we were previously suppressing any message generated during the import of MHCflurry and MHCnuggets. As a side effect, this would also suppress any legitimate error messages generated during these imports, which would result in the pvacseq, pvacfuse, and pvacvector commands exiting without output. This release updates the code so that actual errors still get output.
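The import behavior described above can be sketched like this (illustrative code, not the actual pVACtools implementation): chatter written to stdout/stderr during the import is captured, but genuine import errors still propagate to the caller.

```python
import contextlib
import io

def quiet_import(module_name):
    """Import a noisy module while discarding anything it prints,
    without swallowing real import errors."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer), contextlib.redirect_stderr(buffer):
        # An ImportError raised here is NOT caught, so failures
        # surface to the caller instead of silently exiting.
        module = __import__(module_name)
    return module
```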
1.3.4
This version is a hotfix release. It fixes the following issues:
We were previously using nested multiprocessing, which would cause defunct child jobs and stalled runs. Switching to single-level multiprocessing fixes this issue.

When running pVACvector from a pVACseq result file, the creation of the peptide fasta file might cause an error if the epitope was situated near the beginning of the transcript. This issue has been fixed.
1.3.5
This version is a hotfix release. It fixes the following issues:
While the previous release fixed the issue of stalled processes when running IEDB-based prediction algorithms in multiprocessing mode, we were still experiencing a similar problem when running MHCflurry and MHCnuggets. These two prediction algorithms are TensorFlow-based, and the way TensorFlow is currently used in pVACtools is not compatible with multiprocessing. As a stop-gap measure, this release removes MHCnuggets and MHCflurry from multiprocessing mode. This resolves the problem until we can change our usage of these prediction algorithms to be multiprocessing-compatible.
1.3.6
This version is a hotfix release. It fixes the following issues:
TensorFlow is incompatible with multiprocessing when the parent process imports TensorFlow or a TensorFlow-dependent module. For this reason, MHCflurry and MHCnuggets were previously removed from parallelization. In this release we moved to calling MHCflurry and MHCnuggets on the command line, which allowed us to remove our direct imports of these modules and lets us parallelize the calls to these two prediction algorithms. All prediction algorithms supported by pVACtools can now be used in multiprocessing mode.
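The approach described above can be sketched as follows (illustrative code, not the actual pVACtools implementation): the TensorFlow-based predictor runs in a separate process via its command line, so the parent never imports TensorFlow itself. The child command shown in the usage note is a stand-in for a real predictor invocation.

```python
import subprocess
import sys

def run_predictor(command):
    """Run a predictor as an external command and return its stdout.
    Because the predictor runs in its own process, TensorFlow is only
    ever imported in the child, keeping the parent fork-safe."""
    result = subprocess.run(
        command,
        capture_output=True,
        text=True,
        check=True,  # raise CalledProcessError if the predictor fails
    )
    return result.stdout
```

For example, `run_predictor([sys.executable, "-c", "print('scored')"])` returns the child's output without importing anything into the parent.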
Some users were reporting Illegal instruction (core dumped) errors because their hardware was incompatible with the version of TensorFlow we were using. Pinning the TensorFlow version to 1.5.0 with this release should solve this problem.

When running in multiprocessing mode while using the IEDB API, users would experience a higher probability of failed requests to the API. The IEDB API throws a 403 error when rejecting requests due to too many simultaneous requests, and pVACtools would previously not retry on this type of error. This release adds retries on this error code. We also improved the random wait time calculation between requests so that the likelihood of multiple retries hitting at the same time is reduced.
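The retry behavior described above can be sketched like this (illustrative code, not the actual pVACtools implementation): requests that come back with a 403 are retried after a randomized wait, so simultaneous retries are unlikely to collide again.

```python
import random
import time

def request_with_retries(send_request, max_attempts=5):
    """Call send_request() until it returns a non-403 status or the
    attempt budget runs out. send_request returns (status, body)."""
    for attempt in range(1, max_attempts + 1):
        status, body = send_request()
        if status != 403:
            return status, body
        if attempt < max_attempts:
            # Randomized, attempt-scaled jitter so concurrent workers
            # do not all retry at the same instant.
            time.sleep(random.uniform(0, 0.01 * 2 ** attempt))
    return status, body
```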
When encountering a truncated input VCF, the VCF parser used by pVACtools would throw an error that was not indicative of the real error source. pVACseq now catches these errors and emits a more descriptive error message when encountering a truncated VCF.
One option when annotating a VCF with VEP is the --total_length flag. When using this flag, the total length is written to the Protein_position field. pVACseq previously did not support a VCF with a Protein_position field in this format. This release adds support for it.

When creating the combined MHC class I and MHC class II all_epitopes file, we were previously not correctly determining all necessary headers, which would lead to incorrect output of the individual prediction algorithm score columns. This release fixes this issue.
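The Protein_position format mentioned above can be handled with a small sketch like this (illustrative code, not pVACseq's actual parser; the "position/total_length" shape, e.g. "13/245", is what VEP writes with the flag enabled):

```python
def parse_protein_position(field):
    """Return the position part of a Protein_position value, stripping
    a trailing '/total_length' if present. Ranges like '10-12' are
    passed through unchanged."""
    return field.split("/")[0]
```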
1.3.7
This version is a hotfix release. It fixes the following issues:
The previous version accidentally removed the --additional-input-file-list option. It has been restored in this version. Please note that it is slated for permanent removal in the next feature release (1.4.0).