Performance of copy number variants detection based on whole-genome sequencing by DNBSEQ platforms

Background: DNBSEQ™ platforms are new massively parallel sequencing (MPS) platforms that use DNA nanoball technology. Data generated on DNBSEQ™ platforms have proven effective for detecting single nucleotide variants (SNVs) and small insertions and deletions (indels), but the feasibility of copy number variant (CNV) detection remains unclear.

Results: Here, we first benchmarked different CNV detection tools on Illumina whole-genome sequencing (WGS) data of NA12878 and then assessed the same tools on DNBSEQ™ sequencing data from the same sample. When the same tool was used, the CNVs detected from DNBSEQ™ and Illumina data were similar in quantity, length and distribution, whereas results from different tools differed greatly, even when derived from data from a single platform. We further estimated CNV detection power against the available CNV benchmarks of NA12878 and found similar precision and sensitivity between the DNBSEQ™ and Illumina platforms. For CNVs shorter than 1 kbp, precision was higher on DNBSEQ™ platforms than on Illumina platforms when using Pindel, DELLY and LUMPY. We also carefully compared the two available benchmarks and found a large proportion of CNVs specific to each. We therefore constructed a more complete CNV benchmark of NA12878 containing 3512 CNV regions.

Conclusions: We assessed and benchmarked CNV detection based on WGS with DNBSEQ™ platforms and provide guidelines for future studies.

Initially, fluorescence in situ hybridization (FISH), array-based comparative genomic hybridization (array CGH) [9,10] and single nucleotide polymorphism (SNP) arrays were used to detect CNVs. Then, with improvements in sequencing technologies, whole-genome sequencing (WGS) and whole-exome sequencing (WES) became more widely used for detecting CNVs, without the limitation to specified target regions associated with hybridization probes or arrays.
Several tools for CNV detection have been developed based on WES [11] or WGS [12,13] data from Illumina platforms. These tools employ five main strategies: (1) read pair (RP), (2) read depth (RD), (3) split read (SR), (4) de novo assembly (AS) and (5) combination of approaches (CA). Each strategy has its own advantages and limitations. RD-based tools can estimate copy numbers accurately but are limited in resolving the breakpoints of CNVs. SR-based tools can detect breakpoints at base-pair resolution but perform poorly in repetitive regions [12]. AS-based tools can detect CNVs without a known reference but require more computational resources [12]. Thus, different tools were designed for different samples and sequencing strategies. For example, BreakDancer [14] applies the RP strategy and is suitable for CNV detection in a single sample, while HYDRA [15], based on the CA strategy, is suited for multiple samples, and CNAnorm [16], based on the RD strategy, is designed for case-control studies.
DNBSEQ™ sequencing technology, developed by MGI Tech Co., Ltd. (MGI), has been applied in several sequencing platforms, including BGISEQ-500, DNBSEQ-G400 and DNBSEQ-T7. Unlike other sequencing technologies, DNBSEQ™ combines DNA nanoballs (DNBs), which have low amplification error rates, with a high-density patterned array and combinatorial probe-anchor synthesis (cPAS) [17]. With these technologies, DNBSEQ™ sequencing platforms generate data with high sequencing accuracy, low duplication rates and reduced index hopping [18]. Previously, we explored the performance of single nucleotide variant (SNV) and small insertion and deletion (indel) detection on DNBSEQ™-based WGS data [19,20], while the performance of CNV detection remained unexplored. Several benchmarking analyses of CNV detection on WGS data from Illumina platforms have been reported [6,21]; it is therefore important to understand how DNBSEQ™ performs in CNV detection relative to Illumina platforms across various CNV tools, so that users of DNBSEQ™ can choose the appropriate tools for their needs. Here, we present the detection and performance evaluation of CNVs based on WGS data sequenced on DNBSEQ™ platforms.

Detecting CNVs based on Illumina WGS data with different tools
From the various CNV detection tools available for Illumina WGS data, we selected five representative tools (BreakDancer [14], CNVnator [22], Pindel [23], DELLY [24] and LUMPY [25]) that are commonly used and recently updated (see "Methods" for more details) to detect CNVs from a single WGS sample. We detected CNVs on two Illumina WGS datasets of NA12878 with an average depth of 30.61X (31.91X on HiSeq2500_PE150 and 29.30X on NovaSeq6000_PE150, Additional file 1: Table S1). We obtained on average 933 CNVs (968 and 897) using BreakDancer (see "Methods" for more details), 1945 (2660 and 1229) using CNVnator, 4888 (4045 and 5730) using Pindel, 1741 (1709 and 1773) using DELLY and 1365 (1380 and 1350) using LUMPY (Table 1, Fig. 1 and Additional file 1: Table S2). The consistency ratios between CNVs on the two Illumina platforms were 85.50% using BreakDancer, 55.77% using CNVnator, 53.90% using Pindel, 74.38% using DELLY and 83.49% using LUMPY (Additional file 2: Fig. S1). To compare features of CNVs detected by different tools, we first examined the numbers and lengths of the detected CNVs. We found that on average, 90.22% (8819/9775) of CNVs detected by Pindel were shorter than 1000 bp (Additional file 2: Fig. S2), consistent with previous findings that Pindel is an effective tool for detecting small CNVs [12]. Moreover, on average, 99.20% (3858/3889) of CNVs detected using CNVnator were longer than 1000 bp (Additional file 2: Fig. S2), indicating that CNVnator is more suitable for large CNV detection, consistent with a previous report [12]. Furthermore, BreakDancer, DELLY and LUMPY yielded similar length distributions, mostly falling in the range from 100 to 5000 bp (Additional file 2: Fig. S2).
We also analysed the regional distribution of detected CNVs across the genome. CNVs detected by CNVnator were more enriched in exonic regions than those identified by the other four tools (average 41.43% vs. 28.98%, P = 0.007409, t-test) (Additional file 1: Table S3 and Additional file 2: Fig. S3), while CNVs detected by Pindel were more enriched in intronic regions (average 32.63% vs. 23.33%, P = 3.406e−05, t-test). No notable bias was apparent in CNVs detected by BreakDancer, DELLY or LUMPY.

Detecting CNVs based on DNBSEQ™ WGS data with different tools
We further detected CNVs on eight WGS datasets of NA12878 sequenced on DNBSEQ™ platforms, with an average depth of 31.43X (ranging from 29.51X to 37.44X, Additional file 1: Table S1). We obtained on average 1827 CNVs (ranging from 1519 to 2329) using BreakDancer, 3523 (ranging from 2209 to 5778) using CNVnator, 4506 (ranging from 4172 to 4924) using Pindel, 1838 (ranging from 1624 to 209) using DELLY and 1750 (ranging from 1422 to 2378) using LUMPY (Table 1, Fig. 1 and Additional file 1: Table S2). Consistency ratios between these CNV sets across the DNBSEQ™ platforms were high for all tools except CNVnator (56.54%): on average, 75.96% of CNVs were consistent using BreakDancer, 73.19% using Pindel, 75.66% using DELLY and 76.99% using LUMPY (Additional file 2: Fig. S1). Moreover, in contrast to the comparisons between Illumina platforms, CNVs detected on different DNBSEQ™ platforms showed similar length and regional distributions (Additional file 1: Table S3).
Moreover, we divided CNVs into four groups according to their length (50-100 bp, 100 bp-1 kbp, 1-10 kbp and 10 kbp-1 Mbp) and evaluated the precision of CNVs in each group. We found higher precision of CNVs in the 100 bp-1 kbp length group from DNBSEQ™ platforms than from Illumina platforms by Pindel (61.15% on average from DNBSEQ™ platforms vs. 28% on average from Illumina platforms, Fig. 4). These results showed that short CNVs were detected more accurately from WGS data sequenced on DNBSEQ™ platforms.

Constructing a complete CNV benchmark of NA12878
As described above, we found variable consistency ratios in pairwise comparisons between any two CNV sets (Additional file 2: Fig. S1). In addition, significant differences in precision (P = 3.57e−12, t-test, Additional file 2: Fig. S6) were observed between the two available CNV benchmarks. To construct a more complete CNV benchmark of NA12878, we first built a predicted CNV set from all 50 CNV sets by incorporating any CNV region detected by at least two tools and at least two platforms. Then, we integrated the union of the predicted CNV set and the two available benchmarks (see "Methods" for more details). Ultimately, we produced a novel, more complete CNV benchmark of NA12878, named "Benchmark3", with 3512 CNVs, including 3168 deletions and 344 duplications (Additional file 1: Table S4). The length of CNVs in Benchmark3 ranged from 50 to 557,758 bp; 70.62% (2480/3512) were larger than 1000 bp (Additional file 2: Fig. S7a). Concerning the components of Benchmark3, 70.44% (2474/3512) of CNVs were derived only from the two available benchmarks, and 7.60% (267/3512) were predicted only from the ten WGS datasets in our study (Additional file 2: Fig. S7b). We used Benchmark3 to evaluate the CNVs from DNBSEQ™ and Illumina platforms and found that the precision, sensitivity and F1-scores obtained with the three benchmarks were consistent among platforms (Additional file 2: Fig. S7c).

Discussion
CNVs are important genomic variants and have been reported to play important roles in diseases such as cancer. Because of their importance, sequencing-based CNV detection has become widely applied with the development of sequencing technologies. Many tools based on WGS data have been developed for CNV detection, but these tools were evaluated with in-house simulated data or non-uniform real datasets, such as the trios used for CNVnator [22] and NA18507 used for Pindel [23].
In this study, we explored the performance of different germline CNV detection methods on different WGS datasets of NA12878. Our results showed that the consistency ratios of CNVs detected by the same tool (on average 68.45%) were higher than those detected by different tools (39.00%), consistent with previous research [12]. Furthermore, the consistency ratios of CNVs from the same platform detected by BreakDancer (on average 76.29%), Pindel (72.52%), DELLY (75.61%) and LUMPY (77.21%) were higher than those detected by CNVnator (56.52%, Additional file 2: Fig. S1). Moreover, the consistency rates of CNV results among different platforms detected by BreakDancer and CNVnator were quite different. Interestingly, the CNVs detected by DELLY and LUMPY were highly consistent, both between tools (on average 72.20%) and within tools (74.34%), and both between platforms (70.58%) and within platforms (76.41%), which shows the advantage of tools based on the CA strategy, which combines multiple approaches [12]. These results suggest that DELLY and LUMPY might be good choices for germline CNV detection based on DNBSEQ™ platforms.

Conclusion
Many comparative analyses of CNV detection tools based on arrays [26], WES [11] and WGS [12,13,27] have been published. Most WES and WGS analyses were based on datasets sequenced on Illumina platforms; until now, there has been no comprehensive analysis of CNV detection on datasets sequenced on DNBSEQ™ platforms. This study represents the first systematic investigation and characterization of CNV detection using WGS data from DNBSEQ™ platforms with five representative tools. We found that the quantity, length, distribution, sensitivity and precision of CNVs detected across the genome on DNBSEQ™ datasets were comparable to those detected on Illumina datasets. We also found that DNBSEQ™ platforms detected small CNVs more accurately than Illumina platforms. We constructed a relatively complete CNV benchmark by integrating the union of CNV sets from different datasets detected by different tools with two public benchmarks of NA12878. In summary, our study provides a comprehensive guide for CNV researchers using DNBSEQ™ platforms, with benchmarks and performance measures.

Published WGS data
All FASTQ data of NA12878 were downloaded from the following repositories: the GigaScience Database (GigaDB), the National Center for Biotechnology Information (NCBI) and the China National GeneBank Sequence Archive (CNSA). All data were downsampled to approximately 30× coverage and aligned to the human reference genome hg19 following our previous WGS approach [19]. The ten WGS datasets, with an average coverage of 31.27× (ranging from 29.30× to 37.44×), a high mapping rate (average 99.46%, ranging from 99.06% to 99.60%) and high genome coverage (average 99.02%, ranging from 98.97% to 99.12%), provided a good foundation for CNV detection (Additional file 1: Table S1).

Tools and parameters
There are approximately 44 published CNV detection tools for WGS data, distributed across the five strategies (Additional file 1: Table S5). We carefully selected five representative tools mainly considering four factors: support for single-sample detection, widespread use, continuous updates and detection strategy. The selected tools were BreakDancer (RP strategy, ver. 1.4.5) [14], CNVnator (RD strategy, ver. 0.3.3) [22], Pindel (SR strategy, ver. 0.2.5b9) [23], DELLY (CA strategy, ver. 0.7.8) [24] and LUMPY (CA strategy, ver. 0.2.13) [25]. All tools were run with default parameters, except that the bin size in CNVnator was chosen according to the authors' recommendation that the ratio of the average read-depth signal to its standard deviation be between 4 and 5.
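The bin-size recommendation above can be expressed as a small selection routine. A minimal sketch in Python, assuming the per-bin read-depth values for each candidate bin size have already been extracted (in practice CNVnator computes these histograms itself; the input layout here is hypothetical):

```python
import statistics

def pick_bin_size(depth_profiles):
    """Pick the bin size whose mean/std read-depth ratio falls in [4, 5].

    depth_profiles: dict mapping a candidate bin size (bp) to a list of
    per-bin read-depth values. Returns the smallest qualifying bin size,
    or None if no candidate satisfies the recommendation.
    """
    for bin_size in sorted(depth_profiles):
        depths = depth_profiles[bin_size]
        ratio = statistics.mean(depths) / statistics.stdev(depths)
        if 4 <= ratio <= 5:
            return bin_size
    return None
```

Larger bins smooth the depth signal (raising the mean/std ratio), so scanning candidate sizes in increasing order returns the finest resolution that still meets the ratio criterion.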

CNV benchmarks of NA12878
As reference CNV benchmarks, two available benchmarks of NA12878 were used for evaluating CNVs: 2819 CNVs from Ryan et al. 2014 (Benchmark1) [25] and 2171 CNVs from Peter et al. 2015 (Benchmark2) [28]. We carefully compared these two CNV benchmarks and found a large proportion of specific CNVs in both Benchmark1 and Benchmark2. Defining a CNV in one benchmark as specific if it did not reciprocally overlap any CNV in the other benchmark by ≥ 90% in size, we found that 56.05% of CNVs in Benchmark1 and 32.43% of CNVs in Benchmark2 were specific. With a 50.00% reciprocal overlap threshold, 47.82% and 30.49% of CNVs in Benchmark1 and Benchmark2, respectively, were specific (Additional file 2: Fig. S8).

CNV detection and filtration
CNV sets were detected on each dataset with each tool. First, to reduce the number of false positives, non-CNV calls were filtered from the initial outputs.
The outputs from Pindel were filtered according to the following criteria: (1) SVTYPE was not 'DEL' or 'DUP:TANDEM' and (2) fewer than 3 supporting read pairs. Additionally, CNVs were filtered according to the following criteria: (1) not on an autosome or chrX and (2) overlapping a gap in the reference genome.
The outputs from LUMPY were filtered according to the following criteria: (1) SVTYPE was not 'DEL' or 'DUP' and (2) PE < 3. Additionally, CNVs were filtered according to the following criteria: (1) not on an autosome or chrX and (2) overlapping a gap in the reference genome.
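Both filtering steps follow the same pattern: keep a call only if its SVTYPE is allowed, its read support is sufficient, it lies on an autosome or chrX, and it does not overlap a reference gap. A minimal sketch, assuming calls have already been parsed into plain dicts (the record layout and field names are illustrative, not Pindel's or LUMPY's actual VCF schema):

```python
# Autosomes plus chrX, per the paper's chromosome filter.
ALLOWED_CHROMS = {f"chr{i}" for i in range(1, 23)} | {"chrX"}

def keep_cnv(record, kept_svtypes, min_support, gaps):
    """Return True if a CNV call survives all four filters.

    record: dict with 'chrom', 'start', 'end', 'svtype', 'support'
    (a simplified stand-in for a parsed VCF record).
    gaps: list of (chrom, start, end) reference-gap intervals.
    """
    if record["svtype"] not in kept_svtypes:
        return False
    if record["support"] < min_support:
        return False
    if record["chrom"] not in ALLOWED_CHROMS:
        return False
    for chrom, gstart, gend in gaps:
        # half-open interval overlap test against assembly gaps
        if chrom == record["chrom"] and record["start"] < gend and gstart < record["end"]:
            return False
    return True
```

For Pindel one would pass `kept_svtypes={'DEL', 'DUP:TANDEM'}` and for LUMPY `{'DEL', 'DUP'}`, with `min_support=3` in both cases.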
Within each obtained CNV set, we further filtered CNVs that overlapped other CNVs in the same set, as such cases probably indicate complex structural variations. We required at least 1 bp of overlap between CNVs to define them as overlapping; the remainder were defined as non-overlapping. On average, 9.82% (ranging from 3.75% to 20.36%) of the CNVs detected by BreakDancer across the ten datasets overlapped, notably less than for Pindel (39.43%, ranging from 30.97% to 49.96%), DELLY (19.87%, ranging from 16.55% to 26.81%) and LUMPY (30.23%, ranging from 23.73% to 43.67%) (Additional file 2: Fig. S9a). No overlapping CNVs were found in any of the ten datasets with CNVnator (Additional file 2: Fig. S9a). Next, we assessed the precision of overlapping and non-overlapping CNVs against the two benchmarks. For Benchmark1, the precision of non-overlapping CNVs was significantly higher than that of overlapping CNVs (Additional file 2: Fig. S9b; average precision 51.17% for non-overlapping vs. 12.83% for overlapping CNVs, P = 5.64e−13, t-test). Similarly, for Benchmark2, the precision of non-overlapping CNVs was significantly higher (Additional file 2: Fig. S9c; average 39.23% vs. 9.08%, P = 1.11e−16, t-test). These results clearly show the negative effect of overlapping CNVs, prompting us to remove them from the subsequent evaluation analyses.
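The overlap filter described above (flag any call sharing ≥ 1 bp with another call in the same set) can be implemented with a single sweep over position-sorted intervals. A sketch, assuming half-open (start, end) coordinates:

```python
def split_overlapping(cnvs):
    """Partition one call set into overlapping and non-overlapping CNVs.

    cnvs: list of (chrom, start, end) half-open intervals. A CNV counts
    as overlapping if it shares >= 1 bp with any other CNV in the set.
    Returns (overlapping, non_overlapping) in the original input order.
    """
    flagged = [False] * len(cnvs)
    order = sorted(range(len(cnvs)), key=lambda i: (cnvs[i][0], cnvs[i][1]))
    prev = None  # running (chrom, start, max_end, index_of_max_end)
    for i in order:
        chrom, start, end = cnvs[i]
        if prev is not None:
            pc, ps, pe, pi = prev
            if pc == chrom and start < pe:  # >= 1 bp overlap
                flagged[i] = True
                flagged[pi] = True
                # extend the running interval so overlap chains are caught
                prev = (chrom, ps, max(pe, end), i if end > pe else pi)
                continue
        prev = (chrom, start, end, i)
    overlapping = [cnvs[i] for i in range(len(cnvs)) if flagged[i]]
    clean = [cnvs[i] for i in range(len(cnvs)) if not flagged[i]]
    return overlapping, clean
```

Carrying the maximum end seen so far ensures that a long CNV spanning several later calls flags all of them, not just its immediate neighbour.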

Comparison between CNV sets
Any two CNV sets were compared to identify common (consistent) and specific CNVs and marked according to their platforms. For example, a comparison between BGISEQ-500_PE100 and DNBSEQ-G400_PE100 was marked as "DNBSEQ_vs_DNBSEQ", and a comparison between BGISEQ-500_PE100 and HiSeq2500_PE150 was marked as "DNBSEQ_vs_Illumina". A CNV in one set was considered common if it reciprocally overlapped a single CNV in the other set by ≥ 50% of each CNV's size, or if there existed a group of CNVs in the other set such that each member had ≥ 50% of its size overlapping the CNV and together they covered ≥ 50% of the CNV. The remainder were considered specific. All three parts (common CNVs, CNVs specific to one set and CNVs specific to the other set) were evaluated against the benchmarks.
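The "common" rule above (a single reciprocal ≥ 50% match, or several partners that each lie ≥ 50% inside the CNV and jointly cover ≥ 50% of it) can be sketched as follows; this simplified version assumes all intervals are on the same chromosome and ignores potential double-counting when the partner CNVs overlap one another:

```python
def overlap_len(a, b):
    """Length of overlap between two (start, end) intervals."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def is_common(cnv, other_set, frac=0.5):
    """Check the 'common' rule for one CNV against another call set.

    cnv: (start, end); other_set: list of (start, end) on the same
    chromosome (a simplifying assumption of this sketch).
    """
    length = cnv[1] - cnv[0]
    covered = 0
    for other in other_set:
        olen = overlap_len(cnv, other)
        if olen == 0:
            continue
        other_len = other[1] - other[0]
        if olen >= frac * length and olen >= frac * other_len:
            return True  # one-to-one reciprocal overlap
        if olen >= frac * other_len:
            covered += olen  # candidate member of a many-to-one match
    return covered >= frac * length
```

CNVs for which `is_common` returns False against every candidate are the "specific" calls used in the pairwise consistency ratios.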

Evaluation of CNVs
The evaluation of CNVs against each benchmark was performed with in-house scripts, with deletions and duplications evaluated separately. A CNV was considered valid if it reciprocally overlapped a single CNV in the benchmark by ≥ 50% of each CNV's size, or if there existed a group of benchmark CNVs such that each member had ≥ 50% of its size overlapping the CNV and together they covered ≥ 50% of the CNV.
The precision, sensitivity and F1-score were calculated for each CNV set using the following equations: Precision = TP / (TP + FP), Sensitivity = TP / (TP + FN) and F1-score = 2 × Precision × Sensitivity / (Precision + Sensitivity), where TP is the number of validated CNVs, FP is the number of detected CNVs absent from the benchmark and FN is the number of benchmark CNVs that were not detected.
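Given the counts of validated calls (TP), unvalidated calls (FP) and missed benchmark CNVs (FN), the three metrics are mechanical to compute:

```python
def cnv_metrics(tp, fp, fn):
    """Precision, sensitivity and F1-score from TP/FP/FN counts.

    tp: validated calls; fp: calls absent from the benchmark;
    fn: benchmark CNVs that were not detected.
    """
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return precision, sensitivity, f1
```

For example, a call set with 80 validated calls, 20 unvalidated calls and 120 missed benchmark CNVs has precision 0.80, sensitivity 0.40 and an F1-score of about 0.53.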

Construction of the complete CNV benchmark
All CNV regions were classified as deletions or duplications. For each class, a fragment collection was built according to the breakpoints of the CNV regions from all 50 CNV sets. Then, all fragments were classified according to sequencing platform and CNV detection tool: a fragment was assigned to BGISEQ-500 or DNBSEQ-G400 for a given tool if it was detected in all datasets from that platform (three datasets for BGISEQ-500 and five for DNBSEQ-G400) by that tool. A fragment was marked as a potential benchmark fragment if it was supported by at least two platforms and at least two tools. A predicted benchmark was built by merging all potential benchmark fragments. Finally, the novel CNV benchmark was built by integrating the union of CNVs from the predicted benchmark, Benchmark1 and Benchmark2.
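A sketch of the fragment-based construction for one chromosome, assuming the 50 call sets have already been reduced to a mapping from (platform, tool) to intervals (the platform and tool keys in the usage below are illustrative). Fragments are the atomic intervals between consecutive breakpoints, kept when supported by at least two platforms and at least two tools, then merged when adjacent:

```python
def supported_fragments(calls, min_platforms=2, min_tools=2):
    """Candidate benchmark fragments for one chromosome.

    calls: dict mapping (platform, tool) -> list of (start, end) CNV
    intervals, already reduced so an interval is present only if the
    tool called it on all datasets of that platform (a simplification
    of the paper's per-dataset rule). Returns merged fragments with
    support from >= min_platforms platforms and >= min_tools tools.
    """
    # atomic fragments between consecutive breakpoints
    points = sorted({p for ivs in calls.values() for iv in ivs for p in iv})
    kept = []
    for start, end in zip(points, points[1:]):
        hits = {key for key, ivs in calls.items()
                if any(s <= start and end <= e for s, e in ivs)}
        if (len({p for p, _ in hits}) >= min_platforms
                and len({t for _, t in hits}) >= min_tools):
            kept.append((start, end))
    merged = []  # merge fragments that abut
    for frag in kept:
        if merged and merged[-1][1] == frag[0]:
            merged[-1] = (merged[-1][0], frag[1])
        else:
            merged.append(frag)
    return merged
```

Because every fragment lies entirely inside or outside each original call, support counting reduces to a containment test per fragment.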

Annotation of CNVs
Genomic region annotations and CpG islands (CGIs) were downloaded from the UCSC Genome Browser (https://hgdownload.soe.ucsc.edu/goldenPath/hg19/database/). The 2000 bp flanking region on each side of a CGI was defined as the CGI-shore. The region from −2000 bp to the transcription start site (TSS) was defined as upstream, and the region from the transcription end site (TES) to +2000 bp was defined as downstream. A CNV was assigned to any of the seven region types (upstream, exonic, intronic, intergenic, downstream, CGI and CGI-shore) whose interval contained the midpoint of the CNV.
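The midpoint rule can be sketched directly; the region intervals and labels below are illustrative:

```python
def annotate_by_midpoint(cnv, regions):
    """Assign a CNV to every annotated region containing its midpoint.

    cnv: (chrom, start, end); regions: list of (chrom, start, end, label)
    half-open intervals. Returns the matching labels in input order.
    """
    chrom, start, end = cnv
    mid = (start + end) // 2
    return [label for rc, rs, re, label in regions
            if rc == chrom and rs <= mid < re]
```

Because region types such as exonic and CGI can overlap, a single CNV may legitimately receive more than one label under this rule.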
Additional file 1. Table S1: Whole-genome sequencing data of NA12878 based on four sequencing platforms. Table S2: Number and length statistics of all 50 CNV sets. Table S3: Regional distribution of all 50 CNV sets across the genome. Table S4: List of the complete CNV benchmark of NA12878. Table S5: CNV detection tools using whole-genome sequencing data.
Additional file 2. Figure S1: Consistency ratios of pairwise comparisons between all 50 CNV sets. Heatmap shows the consistency ratio distribution between any two CNV sets or benchmarks. Figure S2: Summary of the density distribution of CNV length for ten datasets using five tools. Each inner chart represents the CNV results of the ten datasets detected by each tool. In each inner chart, the line plot shows the density (y-axis) of the CNV count at a given CNV length (x-axis), and the two black vertical lines indicate the lengths of Alu elements (left) and LINE1 elements (right). Figure S3: Annotation of CNVs across the genome. Histogram shows the number (upper) and proportion (lower) of CNVs occurring in different regions across the genome. CGI: CpG island. CGI-shore: CpG island-shore. Figure S4: Comparison of CNVs by data on DNBSEQ™ platforms. Box plot shows the number (upper), precision based on Benchmark1 (middle) and precision based on Benchmark2 (lower) of common and specific CNVs between platforms by different tools (columns). Precision1, precision based on Benchmark1; Precision2, precision based on Benchmark2. Figure S5: Comparison of CNVs by data on Illumina platforms. Box plot shows the number (upper), precision based on Benchmark1 (middle) and precision based on Benchmark2 (lower) of common and specific CNVs between platforms by different tools (columns). Precision1, precision based on Benchmark1; Precision2, precision based on Benchmark2. Figure S6: Comparison of the precision and sensitivity between two benchmarks on all 50 CNV sets. (A) Box plot shows the difference in precision between Benchmark1 (left) and Benchmark2 (right).
(B) Box plot shows the difference in sensitivity between the two benchmarks. Box plots represent the rates across all 50 CNV sets, and dashed lines connect results based on the same dataset. **, P < 0.01, t-test. Benchmark1, data from Ryan et al. 2014; Benchmark2, data from Peter et al. 2015. Figure S7: Novel, complete CNV benchmark of NA12878. (a) Histogram shows the number of CNVs of different lengths in the complete NA12878 CNV benchmark. (b) Pie chart shows the components of the complete NA12878 CNV benchmark. Labels without a plus sign, such as "Benchmark1", "Benchmark2" and "Predicted", represent a single source of benchmark CNVs. Labels with a plus sign indicate CNVs supported by at least two sources (e.g., "Benchmark1+Predicted" indicates CNVs provided by both Benchmark1 and the predicted benchmark).