Fast(er) Reconstruction of Shredded Text Documents
via Self-Supervised Deep Asymmetric Metric Learning
Abstract
The reconstruction of shredded documents consists of arranging the pieces of paper (shreds) in order to reassemble the original aspect of such documents. This task is particularly relevant for supporting forensic investigation, as documents may contain criminal evidence. As an alternative to the laborious and time-consuming manual process, several researchers have been investigating ways to perform automatic digital reconstruction. A central problem in the automatic reconstruction of shredded documents is the pairwise compatibility evaluation of the shreds, notably for binary text documents. In this context, deep learning has enabled great progress towards accurate reconstructions in the domain of mechanically-shredded documents. A sensitive issue, however, is that current deep model solutions require an inference whenever a pair of shreds has to be evaluated. This work proposes a scalable deep learning approach for measuring pairwise compatibility in which the number of inferences scales linearly (rather than quadratically) with the number of shreds. Instead of predicting compatibility directly, deep models are leveraged to asymmetrically project the raw shred content onto a common metric space in which distance is proportional to the compatibility. Experimental results show that our method has accuracy comparable to the state-of-the-art with a speedup of about 21 times for a test instance with 505 shreds (20 mixed shredded pages from different documents).
1 Introduction
Paper documents are of great value in forensics because they may contain supporting evidence for criminal investigations (e.g., fingerprints, bloodstains, textual information). Damage to these documents, however, may hamper or even prevent their analysis, particularly in cases of chemical destruction. Nevertheless, recent news [news2018thehill] shows that documents are still being physically damaged by hand-tearing or by using specialized paper shredder machines (mechanical shredding). In this context, a forensic document examiner (FDE) is typically required to reconstruct the original document for further analysis.
To accomplish this task, FDEs usually handle paper fragments (shreds) manually, verifying the compatibility of pieces and grouping them incrementally. Despite its relevance, this manual process is time-consuming, laborious, and potentially damaging to the shreds. For these reasons, research on automatic digital reconstruction has emerged since the last decade [ukovich2004, justino2006]. Traditionally, the hand-tearing and mechanical-shredding scenarios are addressed differently, since the shreds' shape tends to be less relevant in the latter. Instead, shreds' compatibility is almost exclusively determined by appearance features, such as color similarity around the shreds' extremities [skeoch2006, marques2013].
As with mechanical shredding, ad hoc strategies have also been developed for binary text documents to cope with the absence of discriminative color information [lin2012, sleit2013, gong2016, chen2017a]. More recently, Paixão et al. [paixao2018tifs] substantially improved the state-of-the-art accuracy on the reconstruction of strip-shredded text documents, i.e., documents cut in the longitudinal direction only. Nevertheless, time efficiency is a bottleneck because the shreds' compatibility demands a costly similarity assessment of character shapes. In a follow-up work [paixao2018deep], the group proposed a deep learning-based compatibility measure, which further improved the accuracy as well as the time efficiency of the reconstruction. In [paixao2018deep], shreds' compatibility is estimated pairwise by a CNN trained in a self-supervised way, learning from intact (non-shredded) documents. Human annotation is not required at any stage of the learning process. A sensitive issue, however, is that a model inference is required whenever a pair of shreds has to be evaluated. Although this is not critical for a low number of shreds, scalability is compromised in a more realistic scenario comprising hundreds or thousands of shreds from different sources.
To deal with this issue, we propose a model in which the number of inferences scales linearly with the number of shreds, rather than quadratically. For that, the raw content of each shred is projected onto a space in which the distance metric is proportional to the compatibility. The projection is performed by a deep model trained using a metric learning approach. The goal of metric learning is to learn a distance function for a particular task. It has been used in several domains, ranging from the seminal work on Siamese networks [bromley1994neurips] for signature verification, to an application of the triplet loss [triplet2009jmlr] in face verification [facenet2015cvpr], to the lifted structured loss [lifted2016cvpr], to the recent connection with mutual information maximization [tschannen2019mutual], and many others. Unlike most of these works, however, the proposed method does not apply the same model to semantically different samples. In our case, the right and left shred boundaries are (asymmetrically) projected by two different models onto a common space. After that, the distances between the right and left embeddings are measured, the compatibility matrix is built and passed on to the actual reconstruction. To enable fair comparisons, the actual reconstruction was performed by coupling the compatibility evaluation methods to an external optimizer. The experimental results show that our method achieves accuracy comparable to the state-of-the-art while taking only 3.73 minutes to reconstruct 20 mixed pages with 505 shreds, compared to 1 hour and 20 minutes for [paixao2018deep], i.e., a speedup of roughly 21 times.
In summary, the main contributions of our work are:

This work proposes a compatibility evaluation method leveraging metric learning and the asymmetric nature of the problem;

The proposed method requires neither manual labels (it is trained in a self-supervised way) nor real data (the model is trained using artificial data);

The experimentation protocol is extended from single-page reconstruction to a more realistic and time-demanding scenario: multi-page, multi-document reconstruction;

Our proposal scales the number of inferences linearly, rather than quadratically as in the current state-of-the-art, achieving a speedup of roughly 21 times for 505 shreds, and even more for larger instances.
2 The Problem Definition
For simplicity of explanation, let us first consider the scenario where all shreds belong to the same page: single-page reconstruction of strip-shredded documents. Let $\mathcal{S} = \{s_1, s_2, \ldots, s_n\}$ denote the set of shreds resulting from longitudinally shredding (strip-cut) a single page. Assume that the indices determine the ground-truth order of the shreds: $s_1$ is the leftmost shred, $s_{i+1}$ is the right neighbor of $s_i$, and so on. A pair $(s_i, s_j)$ – meaning $s_j$ placed right after $s_i$ – is said to be "positive" if $j = i + 1$, otherwise it is "negative". A solution of the reconstruction problem can be represented as a permutation $\pi = (\pi_1, \pi_2, \ldots, \pi_n)$ of $\mathcal{S}$. A perfect reconstruction is that for which $\pi_i = s_i$, for all $i \in \{1, 2, \ldots, n\}$.
Automatic reconstruction is classically formulated as an optimization problem [prandtstetter2008, morandell2008] whose objective function derives from pairwise compatibility (Figure 1). Compatibility or cost, depending on the perspective, is given by a function that quantifies the (un)fitting of two shreds when placed side-by-side (order matters). Assuming a cost interpretation, $c(s_i, s_j)$, $i \neq j$, denotes the cost of placing $s_j$ to the right of $s_i$. In theory, $c(s_i, s_j)$ should be low when $j = i + 1$ (positive pair), and high for the other cases (negative pairs). Typically, $c(s_i, s_j) \neq c(s_j, s_i)$ due to the asymmetric nature of the reconstruction problem.
The cost values are the inputs for a search procedure that aims to find the optimal permutation $\pi^\ast$, i.e., the arrangement of the shreds that best resembles the original document. The objective function to be minimized is the accumulated pairwise cost computed only for consecutive shreds in the solution:

$$\pi^\ast = \operatorname*{arg\,min}_{\pi} \sum_{i=1}^{n-1} c(\pi_i, \pi_{i+1}). \tag{1}$$
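To make the objective concrete, the following Python sketch evaluates the accumulated cost of Equation (1) for a toy instance; the cost matrix values are illustrative, not from the paper:

```python
import numpy as np

def reconstruction_cost(cost, perm):
    """Objective of Equation (1): sum of c(pi_i, pi_{i+1}) over the
    consecutive shreds of the candidate solution `perm`."""
    return sum(cost[perm[i], perm[i + 1]] for i in range(len(perm) - 1))

# Toy 3-shred instance whose ground-truth order is 0, 1, 2:
# low cost for the positive pairs (0,1) and (1,2), high cost otherwise.
cost = np.array([[9.0, 0.1, 0.8],
                 [0.7, 9.0, 0.2],
                 [0.9, 0.6, 9.0]])

best = reconstruction_cost(cost, [0, 1, 2])   # 0.1 + 0.2 = 0.3
worst = reconstruction_cost(cost, [2, 1, 0])  # 0.6 + 0.7 = 1.3
```

The optimizer's job is to search over permutations for the one minimizing this quantity; here the ground-truth order indeed attains the lowest cost.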
The same optimization model can be applied to the reconstruction of several shredded pages from one or more documents (multi-page reconstruction). In a stricter formulation, a perfect solution in this scenario can be represented by a sequence of shreds which respects the ground-truth order in each page, as well as the expected order (if any) of the pages themselves. If page order is not relevant (or does not apply), the definition of a positive pair of shreds can be relaxed, such that a pair $(s_i, s_j)$ is also positive if $s_i$ and $s_j$ are, respectively, the last and first shreds of different pages, even when $j \neq i + 1$. The optimization problem of minimizing Equation 1 has been extensively investigated in the literature, mainly using genetic algorithms [biesinger2013, xu2014, ge2015reconstructing, gong2016] and other metaheuristics [prandtstetter2009meta, schauer2010, badawy2018discrete]. The focus of this work is, nevertheless, on the compatibility evaluation between shreds (i.e., the function $c$), which is critical to lead the search towards accurate reconstructions.
To address text documents, the literature started to evolve from the application of pixel-level similarity metrics [balme2007, gong2016, marques2013], which are fast but inaccurate, towards stroke continuity analysis [phienthrakul2015, guo2015] and symbol-level matching [xing2017a, paixao2018tifs]. Stroke continuity across shreds, however, cannot be ensured since physical shredding damages the shreds' borders. Techniques based on symbol-level features, in turn, tend to be more robust. However, they may struggle to segment symbols in complex documents, and to cope efficiently with the large variability of the symbols' shape and size. These issues have been addressed in [paixao2018deep], wherein deep learning has been successfully used for accurate reconstruction of strip-shredded documents. Nonetheless, the large number of network inferences required for compatibility assessment hinders scalability for multi-page reconstruction.
This work addresses precisely the scalability issue. Although our selfsupervised approach shares some similarities with their work, the training paradigm is completely distinct since the deep models here do not provide compatibility (or cost) values. Instead, deep models are used to convert pixels into embedding representations, so that a simple distance metric can be applied to measure shreds’ compatibility. This is better detailed in the next section.
3 Compatibility Evaluation via Deep Metric Learning
The general intuition behind the proposed approach for compatibility evaluation is illustrated in Figure 2. The underlying assumption is that two side-by-side shreds are globally compatible if they locally fit each other along the touching boundaries. The local approach relies on small samples (denoted by $x$) cropped from the boundary regions. Instead of comparing pixels directly, the samples are first converted to an intermediary representation (denoted by $\mathbf{x}$) by projecting them onto a common embedding space $\mathbb{R}^d$. Projection is accomplished by two models (CNNs), $f_l$ and $f_r$, specialized on the left and right boundaries, respectively.
Assuming that these models are properly trained, boundary samples (indicated by the orange and blue regions in Figure 2) are then projected, so that embeddings generated from compatible regions (mostly found in positive pairings) are expected to be close in this metric space, whereas those from non-fitting regions should be farther apart. Therefore, the global compatibility of a pair of shreds is measured as a function of the distances between corresponding embeddings. More formally, the cost function in Equation 1 is such that:
$$c(s_i, s_j) = \frac{1}{H} \sum_{k=1}^{H} d\big(\mathbf{x}^{r}_{i,k},\, \mathbf{x}^{l}_{j,k}\big), \tag{2}$$

where $\mathbf{x}^{r}_{i,k}$ and $\mathbf{x}^{l}_{j,k}$ represent the $k$-th (top-down) of the $H$ local embeddings associated with the shreds $s_i$ and $s_j$, respectively, and $d$ is a distance metric (e.g., Euclidean).
The interesting property of this evaluation process is that the projection step can be decoupled from the distance computation. In other words, the number of inferences scales linearly since each shred is processed once by each model, and pairwise evaluation can then be performed directly on the produced embeddings. Before diving into the details of the evaluation, we first describe the self-supervised learning of these models. Then, a more in-depth view of the evaluation will be presented, including the formal definition of the cost function that composes the objective function in Equation 1.
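A minimal NumPy sketch of this decoupling, with hypothetical fixed random projections standing in for the two trained boundary models: each shred is embedded once per model (a linear number of inferences), and the quadratic part reduces to cheap distance computations on the embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the trained boundary models: fixed random
# projections from a 64-pixel "boundary" to an 8-D embedding space.
W_left = rng.standard_normal((64, 8))
W_right = rng.standard_normal((64, 8))
project_left = lambda s: s @ W_left    # embeds a shred's LEFT boundary
project_right = lambda s: s @ W_right  # embeds a shred's RIGHT boundary

def embed(shreds):
    """Each shred is processed once by each model: 2n inferences in total."""
    L = np.stack([project_left(s) for s in shreds])
    R = np.stack([project_right(s) for s in shreds])
    return L, R

def cost_matrix(L, R):
    """C[i, j] = Euclidean distance between shred i's right-boundary embedding
    and shred j's left-boundary embedding (cost of placing j after i).
    No model inference happens here, only distance computations."""
    return np.linalg.norm(R[:, None, :] - L[None, :, :], axis=-1)

shreds = [rng.standard_normal(64) for _ in range(5)]
L, R = embed(shreds)
C = cost_matrix(L, R)  # 5 x 5 cost matrix for the optimizer
```

Note that adding one shred adds only two inferences, while the pairwise distances remain vectorized array operations.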
3.1 Learning Projection Models
For producing the shreds' embeddings, the models $f_l$ and $f_r$ are trained simultaneously on small sample pairs. The two models have the same fully convolutional architecture: a base network for feature extraction appended with a convolutional layer. The added layer is intended to work as a fully connected layer when the base network is fed with individual samples. Nonetheless, weight sharing is disabled since the models specialize on different sides of the shreds, hence deep asymmetric metric learning. The base network comprises the first three convolutional blocks of the SqueezeNet [iandola2016squeezenet] architecture (i.e., until the fire3 block).
SqueezeNet has been effectively used for distinguishing between valid and invalid symbol patterns in the context of compatibility evaluation [paixao2018deep]. Nevertheless, preliminary evaluations have shown that the metric learning approach is more effective with shallower models, which explains the use of only the first three blocks. For projection onto the embedding space $\mathbb{R}^d$, a convolutional layer with $d$ filters – whose kernel dimensions match those of the base network's output for a single sample – and sigmoid activation was added to the base network.
Figure 3 outlines the self-supervised learning of the models with samples extracted from digital documents. First, the shredding process is simulated so that the digital documents are cut into equally shaped rectangular "virtual" shreds. Next, shreds of the same page are paired side-by-side and sample pairs are extracted top-down along the touching edge: one sample from the right edge of the left shred (r-sample), and the other from the left edge of the right shred (l-sample). Since the shreds' adjacency relationship comes for free with virtual shredding, sample pairs can be automatically labeled as "positive" (green boxes) or "negative" (red boxes). Self-supervision comes exactly from the fact that labels are automatically acquired by exploiting intrinsic properties of the data.
Training data comprise tuples $(x^r, x^l, y)$, where $x^r$ and $x^l$ denote, respectively, the r- and l-samples of a sample pair, and $y$ is the associated ground-truth label: $y = 1$ if the sample pair is positive, and $y = 0$ otherwise. Training is driven by the contrastive loss function [chopra2005learning]:

$$\mathcal{L}(x^r, x^l, y) = y\, d^2 + (1 - y) \max(0,\, m - d)^2, \tag{3}$$

where $d = \lVert f_r(x^r) - f_l(x^l) \rVert_2$, and $m$ is the margin parameter. For better understanding, an illustration is provided in Figure 4. The models handle a positive sample pair that, together, composes the pattern "word". Since the pair is positive ($y = 1$), the loss value is low if the resulting embeddings are close in $\mathbb{R}^d$, and high otherwise. Note that weight sharing would result in the same loss value for the swapped samples (pattern "rdwo"), which is undesirable for the reconstruction application. Implementation details of the sample extraction and training procedure are described in the experimental methodology (Section 4.3).
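The contrastive loss of Equation (3) can be sketched in a few lines of NumPy; the margin value below is an illustrative choice, not the paper's setting:

```python
import numpy as np

def contrastive_loss(xr, xl, y, margin=1.0):
    """Contrastive loss: pulls positive pairs (y = 1) together and pushes
    negative pairs (y = 0) at least `margin` apart in the embedding space.
    `xr` and `xl` are the embeddings produced by the two (asymmetric) models."""
    d = np.linalg.norm(xr - xl)  # Euclidean distance between embeddings
    return y * d**2 + (1 - y) * max(0.0, margin - d)**2

a = np.array([0.2, 0.4])
b = np.array([0.2, 0.4])
far = np.array([5.0, 5.0])

loss_pos_close = contrastive_loss(a, b, y=1)   # identical positives: no loss
loss_neg_close = contrastive_loss(a, b, y=0)   # identical negatives: margin^2
loss_neg_far = contrastive_loss(a, far, y=0)   # distant negatives: no loss
```

Because the two embeddings come from different models, swapping the inputs generally changes `d`, which preserves the asymmetry discussed above.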
3.2 Compatibility Evaluation
In compatibility evaluation, shreds' embedding and distance computation are two decoupled steps. Figure 5 presents a joint view of these two steps for a better understanding of the models' operation. A strided sliding window is implicitly performed by the fully convolutional models. To accomplish this, two vertically centered regions of interest are cropped from the shreds' boundaries (one from the rightmost pixels of the left shred and one from the leftmost pixels of the right shred, with width given by the sample size). Inference on the models produces feature volumes represented by the tensors $\mathbf{X}^l$ (l-embeddings) and $\mathbf{X}^r$ (r-embeddings). The rows from the top to the bottom of the tensors represent exactly the top-down sequence of $d$-dimensional local embeddings illustrated in Figure 2.
If it is assumed that vertical misalignment among shreds is not significant, compatibility could be obtained by simply computing the distances between corresponding rows of $\mathbf{X}^r$ and $\mathbf{X}^l$. For a more robust definition, shreds can be vertically "shifted" in the image domain to account for misalignment [paixao2018deep]. Alternatively, we propose to shift one of the tensors "up" and "down" by $\delta$ rows (limited to $\delta \le \delta_{\max}$) in order to determine the best-fitting pairing, i.e., that which yields the lowest cost. This formulation helps to save time since it does not require new inferences on the models. Given a tensor $\mathbf{X}$ with $H$ rows, let $\mathbf{X}[a:b]$ denote its vertical slice from row $a$ to row $b$, and let the distance $d$ be extended to equal-sized tensors as the sum of the row-wise distances. Let $\mathbf{R} = \mathbf{X}^r_i$ and $\mathbf{L} = \mathbf{X}^l_j$ represent, respectively, the r- and l-embeddings for a pair of shreds $(s_i, s_j)$. When shifts are restricted to the upward direction, compatibility is defined by the function:
$$c^{\uparrow}(s_i, s_j) = \min_{0 \le \delta \le \delta_{\max}} \frac{1}{m}\, d\big(\mathbf{R}[1 + \delta : H],\, \mathbf{L}[1 : H - \delta]\big), \tag{4}$$

where $m = H - \delta$ is the number of rows effectively used for the distance computation. Analogously, for the downward direction:

$$c^{\downarrow}(s_i, s_j) = \min_{0 \le \delta \le \delta_{\max}} \frac{1}{m}\, d\big(\mathbf{R}[1 : H - \delta],\, \mathbf{L}[1 + \delta : H]\big). \tag{5}$$
Finally, the proposed cost function is a straightforward combination of Equations (4) and (5):

$$c(s_i, s_j) = \min\big(c^{\uparrow}(s_i, s_j),\, c^{\downarrow}(s_i, s_j)\big). \tag{6}$$
Note that, if $\delta_{\max}$ is set to $0$ (i.e., shifts are not allowed), then $c^{\uparrow}(s_i, s_j) = c^{\downarrow}(s_i, s_j)$, therefore:

$$c(s_i, s_j) = \frac{1}{H}\, d\big(\mathbf{X}^r_i,\, \mathbf{X}^l_j\big). \tag{7}$$
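The shift-based cost of Equations (4)-(6) can be sketched as follows, with row-wise Euclidean distances averaged over the rows effectively used; the toy tensors are illustrative:

```python
import numpy as np

def pairwise_cost(R, L, max_shift=2):
    """Cost of placing shred j (l-embeddings L) right after shred i
    (r-embeddings R): best average row-wise distance over vertical shifts
    of up to `max_shift` rows, in both directions (Eqs. 4-6)."""
    H = R.shape[0]
    best = np.inf
    for delta in range(max_shift + 1):
        m = H - delta  # number of rows effectively used
        up = np.linalg.norm(R[delta:] - L[:m], axis=1).sum() / m
        down = np.linalg.norm(R[:m] - L[delta:], axis=1).sum() / m
        best = min(best, up, down)
    return best

rng = np.random.default_rng(1)
R = rng.standard_normal((10, 8))  # 10 local 8-D embeddings per boundary
L = np.roll(R, -1, axis=0)        # L matches R when shifted by one row

with_shift = pairwise_cost(R, L, max_shift=2)     # recovers the alignment
without_shift = pairwise_cost(R, L, max_shift=0)  # Eq. (7): no shifts allowed
```

Since the shifts only slice already-computed embeddings, enabling them adds array arithmetic but no new model inferences.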
4 Experimental Methodology
The experiments aim to evaluate the accuracy and time performance of the proposed method, as well as to compare it with the literature on document reconstruction, focusing on the state-of-the-art deep learning method proposed by Paixão et al. [paixao2018deep] (hereafter referred to as Paixão-b). For this purpose, we followed the basic protocol proposed in [paixao2018tifs], in which the methods are coupled to an exact optimizer and tested on two datasets (D1 and D2). Two different scenarios are considered here: single- and multi-page reconstruction.
4.1 Evaluation Datasets
D1.
Produced by Marques and Freitas [marques2013], it comprises 60 shredded pages scanned at 300 dpi. Most pages are from academic documents (e.g., books and theses), part of them belonging to the same document. Also, some instances have only textual content, whereas the others have some graphic element (e.g., tables, diagrams, photos). Although a real machine (Cadence FRG712) was used, the shreds present almost uniform dimensions and shapes. Additionally, the text direction is nearly horizontal in most cases.
D2.
This dataset was produced by Paixão et al. [paixao2018tifs] and comprises 20 single-page documents (legal documents and business letters) from the ISRI-Tk OCR collection [nartker2005]. The pages were shredded with a Leadership 7348 strip-cut machine and their respective shreds were arranged side-by-side on a yellow support paper sheet, so that they could be scanned at once and, further, easily segmented from the background. In comparison to D1, the shreds in D2 have less uniform shapes and their borders are significantly more damaged due to the wear of the machine blades. Besides, the handling of the shreds before scanning caused slight rotation and (vertical) misalignment between the shreds. These factors render D2 a more realistic dataset compared to D1.
4.2 Accuracy Measure
Similar to the neighbor comparison measure [andalo2017], the accuracy of a solution is defined here as the fraction of adjacent pairs of shreds which are "positive". For multi-page reconstruction, the relaxed definition of "positive" is assumed (as discussed in Section 2), i.e., the order in which the pages appear is irrelevant. More formally, let $\pi = (\pi_1, \pi_2, \ldots, \pi_n)$ be a solution for the reconstruction problem for a set of $n$ shreds. Then, the accuracy of $\pi$ is calculated as

$$\mathrm{acc}(\pi) = \frac{1}{n - 1} \sum_{i=1}^{n-1} \mathbb{1}\big[(\pi_i, \pi_{i+1}) \text{ is a positive pair}\big], \tag{8}$$

where $\mathbb{1}[\cdot]$ denotes the 0-1 indicator function.
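A short Python sketch of Equation (8) with the relaxed multi-page rule; shreds are identified here by hypothetical (page, index) pairs:

```python
def is_positive(a, b, page_len):
    """a and b are (page, index) identifiers of two consecutive shreds in the
    solution; page_len maps a page id to its number of shreds."""
    (pa, ia), (pb, ib) = a, b
    if pa == pb:
        return ib == ia + 1                    # ground-truth neighbors
    return ia == page_len[pa] - 1 and ib == 0  # last shred -> first of another page

def accuracy(solution, page_len):
    """Equation (8): fraction of adjacent pairs in the solution that are positive."""
    pairs = list(zip(solution, solution[1:]))
    return sum(is_positive(a, b, page_len) for a, b in pairs) / len(pairs)

page_len = {0: 2, 1: 2}
perfect = [(1, 0), (1, 1), (0, 0), (0, 1)]    # page order swapped: still perfect
scrambled = [(0, 1), (1, 1), (0, 0), (1, 0)]  # one positive pair out of three
```

Note that `perfect` scores 1.0 even though the pages appear out of order, matching the relaxed definition.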
4.3 Implementation Details
Sample Extraction.
Training data consist of samples extracted from binary documents (forms, emails, memos, etc.) of the IIT-CDIP Test Collection 1.0 [lewis2006building], scanned at 300 dpi. For sampling, the pages are split longitudinally into equally sized virtual shreds (the number of shreds being that expected from usual A4 paper shredders). Next, the shreds are individually thresholded with Sauvola's algorithm [sauvola2000adaptive] to cope with small fluctuations in the pixel values of the original images. Sample pairs are extracted page-wise, which means that the samples in a pair come from the same document. The extraction process starts with adjacent shreds in order to collect positive sample pairs (capped at a fixed number of pairs per document). Negative pairs are collected subsequently, limited to the number of positive pairs. During extraction, the shreds are scanned from top to bottom, cropping samples every two pixels. Pairs that are predominantly blank are considered ambiguous, and are therefore discarded from training. Finally, the damage caused by mechanical shredding is roughly simulated by applying salt-and-pepper random noise to the two rightmost pixel columns of the r-samples and the two leftmost pixel columns of the l-samples.
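The extraction procedure can be sketched as below for binary images (white = 1, black = 0). The sample size, stride, blank threshold, and noise model are assumed values for illustration, and only positive pairs (adjacent shreds) are shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_positive_pairs(shreds, size=32, stride=2, blank_thresh=0.95):
    """Slide top-down along the touching edge of each pair of adjacent
    virtual shreds, cropping one r-sample / l-sample pair every `stride`
    pixels; mostly blank pairs are discarded as ambiguous."""
    pairs = []
    for left, right in zip(shreds, shreds[1:]):
        for top in range(0, left.shape[0] - size, stride):
            r = left[top:top + size, -size:].copy()   # rightmost pixels of left shred
            l = right[top:top + size, :size].copy()   # leftmost pixels of right shred
            if (r.mean() + l.mean()) / 2 > blank_thresh:
                continue  # mostly white (blank) pair: ambiguous, skip it
            # Roughly simulate shredding damage: salt-and-pepper noise on the
            # two pixel columns nearest the cut.
            r[:, -2:] = rng.integers(0, 2, r[:, -2:].shape)
            l[:, :2] = rng.integers(0, 2, l[:, :2].shape)
            pairs.append((r, l, 1))  # adjacency is known, so the label is free
    return pairs

# Toy binary page split into three 32-pixel-wide virtual shreds.
page = rng.integers(0, 2, (128, 96)).astype(float)
shreds = [page[:, i * 32:(i + 1) * 32] for i in range(3)]
pairs = extract_positive_pairs(shreds)
```

Negative pairs would be produced the same way from non-adjacent shreds, which is what makes the labeling fully self-supervised.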
Training.
The training stage leverages the sample pairs extracted from the collection of digital documents. From the entire collection, the sample pairs of a few randomly picked documents are reserved for validation, from which the best-epoch models are selected. By default, the embedding dimension $d$ is fixed across the experiments (alternative values are assessed in the sensitivity analysis). The models are trained from scratch (i.e., the weights are randomly initialized) for a fixed number of epochs using stochastic gradient descent (SGD) on mini-batches. After each epoch, the models' state is stored, and the training data are shuffled for the next epoch (if any). The best-epoch model selection is based on the ability to project positive pairs close together in the embedding space, and negative pairs far apart. This is quantified via the standardized mean difference (SMD) measure [cohn1988statistical] as follows: for a given epoch, the respective $f_l$ and $f_r$ models are fed with the validation sample pairs, and the distances between the corresponding embeddings are measured. Then, the distance values are separated into two sets: $D^{+}$, comprising the distances calculated for positive pairs, and $D^{-}$, for the negative ones. Ideally, the difference between the mean values of the two sets should be high, while the standard deviations within the sets should be low. Since these assumptions are addressed by the SMD, the best $f_l$ and $f_r$ are taken as those which maximize the SMD between $D^{-}$ and $D^{+}$.
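The SMD-based selection can be sketched as follows; the pooled-standard-deviation form of the SMD used here is an assumption (the measure only needs the separation between $D^{+}$ and $D^{-}$ to grow while their spreads shrink):

```python
import numpy as np

def smd(d_pos, d_neg):
    """Standardized mean difference between negative-pair and positive-pair
    distances: higher when negatives are far, positives are close, and both
    sets have low spread."""
    d_pos, d_neg = np.asarray(d_pos), np.asarray(d_neg)
    pooled_std = np.sqrt((d_pos.std() ** 2 + d_neg.std() ** 2) / 2)
    return (d_neg.mean() - d_pos.mean()) / pooled_std

# Hypothetical validation distances for two stored epochs.
epoch_a = smd(d_pos=[0.10, 0.20, 0.15], d_neg=[1.0, 1.1, 0.9])  # well separated
epoch_b = smd(d_pos=[0.40, 0.60, 0.50], d_neg=[0.6, 0.8, 0.7])  # poorly separated
best_epoch = max([("A", epoch_a), ("B", epoch_b)], key=lambda t: t[1])[0]
```

The epoch whose models best separate the two distance distributions is the one kept for the reconstruction experiments.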
4.4 Experiments
The experiments rely on the trained models $f_l$ and $f_r$, as well as on Paixão-b's deep model. The latter was retrained (following the procedure described in [paixao2018deep]) on the CDIP documents to avoid training and testing on documents of the same collection (ISRI-Tk OCR). In practice, no significant change was observed in the reconstruction accuracy with this procedure.
The shreds of the evaluation datasets were also binarized [sauvola2000adaptive] for consistency with the training samples. The default configuration of the proposed method uses the default embedding dimension $d$ with vertical shifts enabled ($\delta_{\max} > 0$). Non-default assignments are considered in two of the three conducted experiments, as described in the following subsections.
Singlepage Reconstruction.
This experiment aims to show whether the proposed method is able to individually reconstruct pages with accuracy similar to Paixão-b, and how the time performance of both methods is affected when the vertical-shift functionality is enabled, since it increases the number of pairwise evaluations. To this end, the shredded pages of D1 and D2 were individually reconstructed with the proposed and Paixão-b methods, first using their default configuration, and then disabling the vertical shifts (in our case, this is equivalent to setting $\delta_{\max} = 0$). Time and accuracy were measured for each run. For a more detailed analysis, time was measured for each reconstruction stage: projection (pro) – applicable only to the proposed method –, pairwise compatibility evaluation (pw), and optimization search (opt).
Multipage Reconstruction.
This experiment focuses on the scalability with respect to time while increasing the number of shreds in multi-page reconstruction. In addition to the time performance, it is essential to confirm whether the accuracy of both methods remains comparable. Rather than individual pages, there are two large reconstruction instances in this experiment: the mixed shreds of D1 and the 505 mixed shreds of D2. Each instance was reconstructed with the proposed and Paixão-b methods, but now only with their default configuration (i.e., vertical shifts enabled). Accuracy and time (segmented by stage) were measured. Additionally, processing time was estimated for different instance sizes based on the average elapsed time observed for D2.
Sensitivity Analysis.
The last experiment assesses how the proposed method is affected (in time and accuracy) by different embedding dimensions ($d$). Note that this demands retraining $f_l$ and $f_r$ for each value of $d$. After training, the D1 and D2 instances were individually reconstructed, and then accuracy and processing time were measured.
4.5 Experimental Platform
The experiments were carried out on an Intel Core i7-4770 CPU @ 3.40 GHz with 16 GB of RAM running Linux Ubuntu 16.04, equipped with a TITAN X (Pascal) GPU with 12 GB of memory. The implementation^{1} was written in Python 3.5, using TensorFlow for training and inference, and OpenCV for basic image manipulation.

^{1}https://github.com/thiagopx/deeprec-cvpr20
5 Results and Discussion
5.1 Singlepage Reconstruction
A comparison with the literature for single-page reconstruction of strip-shredded documents is summarized in Table 1. Given the clear improvement in performance, the following discussions focus on the comparison with [paixao2018deep]. The box plots in Figure 6 show the accuracy distribution obtained with both the proposed method and Paixão-b for single-page reconstruction. As in [paixao2018deep], we also observe that vertical shifts affect only D2, since D1's shreds are practically aligned in the vertical direction. The methods did not present a significant difference in accuracy for dataset D2. For D1, however, Paixão-b slightly outperformed ours in mean accuracy. The higher variability of our approach is mainly explained by the presence of documents with large areas covered by filled graphic elements, such as photos and colorful diagrams (which were not present in the training data). Disregarding these cases (12 of the 60 pages), the mean accuracy of our method increases and its standard deviation drops.
Time performance is shown in Figure 7. The stacked bars represent the average elapsed time in seconds (s) for each reconstruction stage: projection (pro), pairwise compatibility evaluation (pw), and optimization search (opt). With vertical shifts disabled (left chart), the proposed method spent more time producing the embeddings than on pairwise evaluation and optimization together. Although Paixão-b does not pay the cost of embedding projection, its pairwise evaluation took several times longer than the same stage in our method. This difference becomes more significant (in absolute values) when the number of pairwise evaluations increases, as can be seen with vertical shifts enabled (right chart): even including the execution time of the projection stage, our approach yielded a substantial speedup in compatibility evaluation. Note that, without vertical shifts, the accuracy of Paixão-b would drop noticeably on D2.
Finally, we provide an insight into what the embedding space might look like by showing a local sample and its three nearest neighbors. As shown in Figure 9, the models tend to form pairs that resemble realistic patterns. It is worth noting that the matched samples are very well aligned vertically, even in cases where a sample is shifted slightly to the top or bottom and the letters appear only in half (see more samples in the Supplementary Material).
5.2 Multipage Reconstruction
For multi-page reconstruction, the proposed method and Paixão-b achieved comparably high accuracies on both D1 and D2. Overall, both methods yielded high-quality reconstructions with a difference in accuracy of only a few percentage points, which is an indication that their accuracy is not affected by the increase in instance size.
Concerning time efficiency, however, the methods behave notably differently, as evidenced in Figure 8. The left chart shows the average elapsed time of each stage to process the 505 shreds of D2. In this context, with a larger number of shreds, the optimization cost becomes negligible compared to the time required for pairwise evaluation. Remarkably, Paixão-b demanded roughly 1 hour and 20 minutes to complete the evaluation, whereas our method took less than 4 minutes (a speedup of approximately 21 times). Based on the average time for projection and pairwise evaluation, estimation curves were plotted (right chart) indicating the predicted processing time as a function of the number of shreds ($n$). Viewed comparatively, the growth of the proposed method's curve (in blue) appears linear, although the pairwise evaluation time (not the number of inferences) grows quadratically with $n$. In summary, the greater the number of shreds, the higher the speedup ratio.
5.3 Sensitivity Analysis
Figure 10 shows, for single-page reconstruction, how accuracy and processing time (mean values over pages) are affected by the embedding dimension ($d$). Remarkably, projecting onto a 2-D space ($d = 2$) is already sufficient to achieve high average accuracy. The highest accuracies for D1 and D2 were observed at small dimensions, which also reduce the average reconstruction time considerably compared to the default configuration. For higher dimensions, accuracy tends to decay slowly. Overall, the results suggest that there is room for improvement in both accuracy and processing time by focusing on small values of $d$, which will be better investigated in future work.
6 Conclusion
This work addressed the problem of reconstructing mechanically-shredded text documents, more specifically the critical part of evaluating the compatibility between shreds. Focus was given to the time performance of the evaluation. To improve it, we proposed a deep metric learning-based method as a compatibility function in which the number of inferences scales linearly, rather than quadratically [paixao2018deep], with the number of shreds of the reconstruction instance. In addition, the proposed method is trained on artificially generated data (i.e., it does not require real-world data) in a self-supervised way (i.e., it does not require annotation).
Comparative experiments for single-page reconstruction showed that the proposed method achieves accuracy comparable to the state-of-the-art with a substantial speedup in compatibility evaluation. Moreover, the experimentation protocol was extended to a more realistic scenario in this work: multi-page, multi-document reconstruction. In this scenario, the benefit of the proposed approach is even greater: our compatibility evaluation method takes less than 4 minutes for a set of 20 pages (505 shreds), compared to approximately 1 hour and 20 minutes for the current state-of-the-art, i.e., a speedup of roughly 21 times, while preserving high accuracy. Additionally, we showed that the embedding dimension is not critical to the performance of our method, although more careful tuning can lead to better accuracy and time performance.
Future work should include the generalization of the proposed method to other types of cut (e.g., cross-cut and hand-torn documents), as well as to other problems related to jigsaw puzzle solving [andalo2017].
Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. We thank NVIDIA for providing the GPU used in this research. We also acknowledge the Productivity on Research scholarships (grants 311120/2016-4 and 311504/2017-5) supported by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq, Brazil).