Step by step one goes very far
Project context and summary:
When dealing with high-depth read data, a simple way to combine accurate analyses with moderate computational resources is to extract a subset of the raw reads that yields a homogeneous, moderate coverage depth. Unfortunately, current implementations are often unexpectedly slow and require significant pre-processing of large files to be usable in practice. In the current scientific context, much effort must go into algorithm design and efficient programming to process large datasets within reasonable running times. An efficient implementation should therefore be developed to perform read coverage homogenization quickly. Such a tool will help to deal with highly redundant sequencing data by creating read subsets with useful properties. As the read coverage homogenization step is expected to be systematically used for pre-processing the large volume of raw reads generated in the PIBnet context, a development carried out by members of the CIB platform is expected to lead to efficient solutions that take advantage of the computing resources hosted by the Institut Pasteur.
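The text does not specify which algorithm would perform the coverage homogenization; one widely used approach to this problem is digital normalization, which keeps a read only while the k-mer coverage it represents is still below a target depth. The sketch below illustrates that idea under stated assumptions: the function name `homogenize`, the k-mer size, and the target depth are all hypothetical choices for illustration, not the project's actual design.

```python
from collections import defaultdict

def kmers(seq, k):
    """Yield all overlapping k-mers of a sequence."""
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def homogenize(reads, k=4, target=2):
    """Digital-normalization sketch: keep a read only if the median
    count of its k-mers (over reads kept so far) is below `target`;
    otherwise treat it as redundant and drop it."""
    counts = defaultdict(int)  # k-mer -> occurrences among kept reads
    kept = []
    for read in reads:
        ks = list(kmers(read, k))
        if not ks:
            continue  # read shorter than k: nothing to estimate from
        median = sorted(counts[km] for km in ks)[len(ks) // 2]
        if median < target:
            kept.append(read)
            for km in ks:
                counts[km] += 1
    return kept
```

With a target depth of 2, repeated copies of the same read stop being accepted once two copies are kept, while a read covering a new region is always retained, which is the "homogeneous and moderate coverage" property described above.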