Datasets
Given that rodents are widely used in behavioral research, and mice are the most studied rodents35, we chose two publicly available datasets featuring mice engaging in a range of behaviors for our main analysis. The first dataset, referred to as the “home-cage dataset,” was collected by Jhuang et al.7 and features 12 videos (approximately 10.5 h and 1.13 million frames in total) of singly housed mice in their home cages, recorded from the side view. Video resolution is \(320\times 240\) pixels. The authors annotate each video in full and identify eight mutually exclusive behaviors (Fig. S1A) of varying incidence (Fig. 3A). This dataset allows us to benchmark our approach against existing methods and to evaluate it on a common use case, and it is relatively well-balanced in terms of the incidence of each behavior.
The second dataset used is the Caltech Resident-Intruder Mouse dataset (CRIM13), collected by Burgos-Artizzu et al.6. It consists of 237 pairs of videos, recorded from synchronized top- and side-view cameras at 25 frames per second and an 8-bit pixel depth. Videos are approximately 10 min long, and the authors label 13 mutually exclusive actions (Fig. S1B). Of these actions, 12 are social behaviors, and the remaining action is the category “other,” which denotes periods where no behavior of interest occurs6. This dataset features a number of challenges absent from the Jhuang et al.7 dataset. In addition to including social behavior (in contrast to the home-cage dataset, which features singly-housed mice), it presents two algorithmic challenges. First, videos are recorded using a pair of synchronized cameras, which allows us to test the toolbox’s multiple-camera integration (see “Methods: Feature extraction” section) by evaluating classifier performance using features from both cameras. Second, it is highly unbalanced, with a slight majority of all annotations being the category “other” (periods during which no social behavior occurred; Fig. 3D).
We also include an exploratory dataset, to demonstrate the applicability of our model to non-rodent models, comprising seven unique videos of singly-housed Octopus bimaculoides recorded during a study of octopus habituation behaviors in the Dartmouth octopus lab. One video (approximately 62 min in length) was annotated by two different annotators, allowing us to assess inter-observer reliability by calculating the agreement between these two independent annotations. The videos span approximately 6.75 h in total, with 6.15 h annotated. Video was recorded at 10 frames per second with a resolution of \(640\times 436\) pixels. We define five behaviors of interest: crawling, fixed pattern (crawling in a fixed formation along the tank wall), relaxation, jetting (quick acceleration away from stimuli), and expanding (tentacle spread in an alarm reaction or aggressive display), as well as an indicator for when none of these behaviors occur (none). In the original dataset, there were three additional behaviors (inking/jetting, display of dominance, color change), comprising a very small number of the total frames, which could co-occur with the other six categories (crawling, fixed pattern, relaxation, jetting, expanding, none). However, because our classification model can only predict mutually exclusive classes at the current time, we removed these three behaviors from our input annotations.
Inter-observer reliability
Both mouse datasets include a set of annotations performed by two groups of annotators. The primary set of annotations was produced by the first group of annotators and covers all video in the dataset. The secondary set of annotations was performed by a second, independent set of annotators on a subset of videos. We use the primary set of annotations to train and evaluate our method, and the secondary set to establish inter-observer reliability; that is, how much two independent human annotators’ annotations can be expected to differ. Given this, classifier-produced labels can be most precisely interpreted as the predicted behavior if the video were annotated by the first group of annotators. This distinction becomes important because we benchmark the accuracy of our method (i.e., the agreement between the classifier’s predictions and the primary set of annotations) relative to the inter-observer agreement (i.e., the agreement between the first and second groups of annotators, on the subset of video labeled by both groups). So, for example, when we note that our model achieves accuracy “above human agreement,” we mean that our classifier predicts the labels from the first human annotator group better than the second human annotator group does. In the case of the home-cage dataset, the agreement between the primary and secondary sets was 78.3 percent, evaluated on a 1.6 h subset of all dataset video7. For CRIM13, agreement was 69.7 percent, evaluated on a random selection of 12 videos6.
Simulating labeled data
To simulate our approach’s performance with varying amounts of training data, in our primary analyses we train the classifier using the following proportions of labeled data:
$$\mathrm{prop}_{\mathrm{labeled}}=\left[0.02:0.02:0.20,\ 0.25:0.05:0.90\right].$$
That is, we use a proportion of all data, \(\mathrm{prop}_{\mathrm{labeled}}\), to construct our training and validation sets (i.e., \(\mathcal{D}^{\mathrm{labeled}}\)), and the remaining \(1-\mathrm{prop}_{\mathrm{labeled}}\) data to create our test set, \(\mathcal{D}^{\mathrm{test}}\) (Fig. 1B,D). We use an increment of \(0.02\) for low training proportions (up to \(0.20\)), because that is when we see the greatest change relative to a small change in added training data (Fig. 2A,E). We increment values from \(0.25\) to \(0.90\) by \(0.05\). This gives us a set of 24 training proportions per analysis. Additionally, for each training proportion, unless otherwise noted, we evaluate the model on 10 random splits of the data. In our main analyses, we use a clip length of one minute for both datasets.
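As a concrete illustration, the sketch below shows how such a sweep over training proportions and random clip-level splits might be set up in MATLAB; the clip count, loop body, and variable names are placeholders rather than the toolbox’s actual code.

```matlab
% Sketch of the training-proportion sweep over random clip-level splits.
% The number of clips and all variable names are hypothetical.
propLabeled = [0.02:0.02:0.20, 0.25:0.05:0.90];   % 24 training proportions
numClips    = 630;                                % placeholder number of one-minute clips
numSplits   = 10;                                 % random splits per proportion

for p = propLabeled
    for s = 1:numSplits
        idx        = randperm(numClips);
        nLabeled   = round(p * numClips);
        labeledIdx = idx(1:nLabeled);             % clips forming D^labeled (train + validation)
        testIdx    = idx(nLabeled+1:end);         % clips forming D^test
        % ... train on labeledIdx, evaluate on testIdx ...
    end
end
```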
Comparison with existing methods
When comparing our model to existing methods, we employ k-fold cross-validation instead of evaluating on random splits of the data. In the case of the home-cage dataset, the existing methods cited employ a “leave one out” approach, using 11 of the 12 videos to train their methods and the remaining video to test them. Our approach, however, relies on splitting the data into clips, so instead we use 12-fold cross-validation, where we randomly split the dataset clips into 12 folds and then employ cross-validation on the clips, rather than entire videos. In evaluating their approach’s performance on the CRIM13 dataset, Burgos-Artizzu et al.6 selected 104 videos for training and 133 for testing, meaning that they trained their program on 44 percent of the data and tested it on 56 percent. Here, we evaluate our method relative to theirs using two-fold cross-validation (50 percent test and 50 percent train split) to retain similar levels of training data.
Frame extraction
To generate spatial frames, we extract raw video frames from each video file. Rather than save each image as an image file in a directory, we save the entire sequence of images corresponding to a single video to a sequence file, using the implementation provided by Dollár36 with JPG compression. This has the advantage of making the video frames easier to transfer between file systems and readable on any operating system (which is useful for users running the toolbox on high-performance computing clusters). To generate the temporal component, we use the TV-L1 algorithm19,37, which shows superior performance to alternative optical flow algorithms15, to calculate the dense optical flow between pairs of sequential video frames, and we represent it visually via the MATLAB implementation by Cun38. In the visual representation of optical flow fields, the hue and brightness of a pixel represent the orientation and magnitude of that pixel’s motion between sequential frames. By representing the motion information of the video as a set of images, we can use a similar feature extraction method for both spatial and temporal frames. Just as the features derived from the spatial images represent the spatial information in the video, the features derived from the temporal images should provide a representation of the motion information in the video.
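To make the temporal-frame step concrete, the sketch below generates flow images from a video using MATLAB’s built-in Farneback optical flow as a stand-in for the TV-L1 implementation actually used in the toolbox; the file name and the HSV encoding details are illustrative.

```matlab
% Sketch of temporal-frame generation: compute dense optical flow between
% sequential frames and encode orientation as hue and magnitude as brightness.
% opticalFlowFarneback is a stand-in for the TV-L1 implementation used here.
reader  = VideoReader('example_video.avi');       % hypothetical video file
flowObj = opticalFlowFarneback;

while hasFrame(reader)
    frameGray = im2gray(readFrame(reader));
    flow = estimateFlow(flowObj, frameGray);      % flow relative to the previous frame
    hue  = (flow.Orientation + pi) / (2*pi);      % orientation mapped to hue in [0, 1]
    val  = mat2gray(flow.Magnitude);              % magnitude mapped to brightness
    flowRGB = hsv2rgb(cat(3, hue, ones(size(hue)), val));
    % ... append flowRGB to the temporal frame sequence file ...
end
```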
Feature extraction
We utilize the pretrained ResNet18 convolutional neural network (CNN) to extract high-level features from the spatial and temporal video frames. Often used in image processing applications, CNNs consist of a series of layers that take an image as an input and generate an output based on the content of that image. Intuitively, classification CNNs can be broken down into two components: feature extraction and classification. In the feature extraction component, the network uses a series of layers to extract increasingly complex features from the image. In the classification component, the network uses the highest-level features to generate a final classification for the image (e.g., “dog” or “cat”). The network learns to extract important features from the input image through training: by generating predictions for a set of images for which the ground truth is known, and then modifying the network based on the deviation of the predicted classification from the true classification, the network learns which features in the image are important in discriminating one object class from another. In pretrained CNNs, such as the ResNet18, which was trained to categorize millions of images from the ImageNet database into one thousand distinct classes39, early layers detect generic features (e.g., edges, textures, and simple patterns) and later layers represent image data more abstractly40.
Here, we leverage transfer learning—where a network trained for one context is used in another—to extract a low-dimensional representation of the data in the spatial and temporal video frames. The idea is that, since the ResNet18 is trained on a large, general object dataset, the generality of the network allows us to obtain an abstract representation of the salient visual features in the underlying video by extracting activations from the later layers of the network in response to a completely different set of images (in this case, laboratory video of animal behavior). To extract features from the ResNet18 network for a given image, we input the image into the network and record the response (“activations”) from a specified layer of the network. In this work, we chose to extract activations from the global average pooling layer (“pool5” in MATLAB) of the ResNet18, close to the end of the network (to obtain high-level feature representations). This generates a feature vector of length \(512\), representing high-level CNN features for each image.
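For reference, extracting these pool5 activations in MATLAB can look like the following sketch; the frame file is a placeholder, and only the layer name is taken from the text above.

```matlab
% Sketch of spatial feature extraction from the ResNet18 "pool5" layer
% (requires the Deep Learning Toolbox Model for ResNet-18 support package).
net = resnet18;
img = imresize(imread('example_frame.jpg'), [224 224]);                % placeholder frame
spatialFeatures = activations(net, img, 'pool5', 'OutputAs', 'rows');  % 1 x 512 feature vector
```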
By default, the ResNet18 accepts input images of size \([224, 224, 3]\) (i.e., images with a width and height of 224 pixels and three color channels), so we preprocess frames by first resizing them to a width and height of 224 pixels. In the case of spatial frames, the resized images are input directly into the unmodified network. For temporal frames, however, rather than inputting frames into the network individually, we “stack” each input frame to the CNN with the five frames preceding it and the five frames following it, resulting in an input size of \([224, 224, 33]\). This approach allows the network to extract features with longer-term motion information and has been shown to improve discriminative performance14,18. We select a stack size of 11 based on the findings of Simonyan and Zisserman18. Because the ResNet18 only accepts inputs of size \([224, 224, 3]\), we modify it to accept inputs of size \([224, 224, 33]\) by replicating the weights of the first convolutional layer (normally three channels) 11 times. This allows the modified “flow ResNet18” to accept stacks of images as inputs, while retaining the pretrained weights to extract salient image features.
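One way to perform this first-layer modification in MATLAB is sketched below; the layer names follow MATLAB’s pretrained resnet18, and the handling of input normalization is an assumption rather than the toolbox’s exact implementation.

```matlab
% Minimal sketch of the "flow ResNet18" modification: replicate the first
% convolutional layer's pretrained weights 11 times so the network accepts
% [224 224 33] optical-flow stacks. Details may differ from the toolbox.
net = resnet18;
lg  = layerGraph(net);

oldConv    = lg.Layers(strcmp({lg.Layers.Name}, 'conv1'));  % weights are 7x7x3x64
newWeights = repmat(oldConv.Weights, [1 1 11 1]);           % replicated to 7x7x33x64
newConv    = convolution2dLayer(oldConv.FilterSize, oldConv.NumFilters, ...
    'Stride', oldConv.Stride, 'Padding', oldConv.PaddingSize, ...
    'Weights', newWeights, 'Bias', oldConv.Bias, 'Name', oldConv.Name);

inputName = lg.Layers(1).Name;                              % input layer ('data')
newInput  = imageInputLayer([224 224 33], 'Name', inputName, ...
    'Normalization', 'none');                               % normalization handling is an assumption

lg = replaceLayer(lg, inputName, newInput);
lg = replaceLayer(lg, oldConv.Name, newConv);
flowNet = assembleNetwork(lg);                              % modified "flow ResNet18"
```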
After spatial features and temporal features have been separately extracted from the spatial and temporal frames, respectively, we combine them to produce the spatiotemporal features that will be used to train the classifier (Fig. 1E). To do so, we simply concatenate the spatial and temporal features for each frame. That is, for a given segment of video with \(n\) frames, the initial spatiotemporal features are a matrix of size \(\left[n, 512\times 2\right]=[n, 1024]\), where \(512\) represents the dimensionality of the features extracted from the ResNet18. If multiple synchronized cameras are used (as is the case in one of our benchmark datasets), we employ the same process, concatenating the spatial and temporal features for each frame and each camera. In the case of two cameras, for example, the initial spatiotemporal features are a matrix of size \(\left[n,512\times 2\times 2\right]=\left[n,2048\right]\). To decrease training time and memory requirements, and to improve performance41,42, we use dimensionality reduction to reduce the initial spatiotemporal features to final spatiotemporal features of size \(\left[n,512\right]\). We selected reconstruction independent component analysis43,44 as our dimensionality reduction method, which creates a linear transformation by minimizing an objective function that balances the independence of output features with the capacity to reconstruct the input features from the output features.
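A minimal sketch of the concatenation and reconstruction ICA step, using MATLAB’s rica function and random placeholder features in place of the extracted CNN activations:

```matlab
% Sketch of combining spatial and temporal features and reducing them to 512
% dimensions with reconstruction ICA (placeholder features, single camera).
n = 1000;                                    % hypothetical number of frames
spatialFeat  = randn(n, 512);                % placeholder spatial features
temporalFeat = randn(n, 512);                % placeholder temporal features

initialFeat = [spatialFeat, temporalFeat];   % [n x 1024] initial spatiotemporal features
Mdl = rica(initialFeat, 512);                % learn the linear transformation
finalFeat = transform(Mdl, initialFeat);     % [n x 512] final spatiotemporal features
```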
Classifier architecture
The labeled and unlabeled data consist of a set of clips, generated from project video, which the classifier uses to predict behavior. Clips in \(\mathcal{D}^{\mathrm{labeled}}\) and \(\mathcal{D}^{\mathrm{unlabeled}}\) both consist of a segment of video and a corresponding array of spatiotemporal features extracted from that video. Clips in \(\mathcal{D}^{\mathrm{labeled}}\) also include an accompanying set of manual annotations (Fig. 1B). For a given clip in \(\mathcal{D}^{\mathrm{labeled}}\) with \(n_{\mathrm{labeled}}\) frames, the classifier takes a \([n_{\mathrm{labeled}},512]\)-dimensional matrix of spatiotemporal features (Fig. 1E) and a one-dimensional array of \(n_{\mathrm{labeled}}\) manually-produced labels (e.g., “eat,” “drink,” etc.) as inputs, and learns to predict the \(n_{\mathrm{labeled}}\) labels from the features. After training, for a given clip in \(\mathcal{D}^{\mathrm{unlabeled}}\) with \(n_{\mathrm{unlabeled}}\) frames, the classifier takes as an input a \([n_{\mathrm{unlabeled}},512]\)-dimensional matrix of spatiotemporal features and outputs a set of \(n_{\mathrm{unlabeled}}\) behavioral labels, corresponding to the predicted behavior in each of the \(n_{\mathrm{unlabeled}}\) frames. To implement this transformation from features to labels, we rely on recurrent neural networks (RNNs). Prior to inputting clips into the RNN, we further divide them into shorter “sequences,” corresponding to 15 s of video, to reduce overfitting45 and the amount of sequence padding46. Unlike traditional neural networks, recurrent neural networks contain cyclical connections that allow information to persist over time, enabling them to learn dependencies in sequential data47. Given that predicting behavior accurately requires the integration of information over time (i.e., annotators generally must view more than one frame to classify most behaviors, since behaviors are often distinguished by movement over time), this persistence is critical.
We opt for a long short-term memory (LSTM) network with bidirectional LSTM (BiLSTM) layers as the core of our classification model. LSTMs are better able to learn long-term dependencies in data than traditional RNNs in practice48,49, and the use of bidirectional layers allows the network to process information in both temporal directions50 (i.e., forward and backward in time, rather than forward only, as in a traditional LSTM layer). As shown in Figure S4, our network’s architecture begins with a sequence input layer, which accepts a two-dimensional array corresponding to spatiotemporal video features (with one row per frame and one column per feature). We then apply two BiLSTM layers, which increases model complexity and allows the model to learn more abstract relationships between input sequences and correct output labels51. To reduce the likelihood of overfitting, we use a dropout layer after each BiLSTM layer; this randomly sets a proportion of input units (here, \(50\) percent) to \(0\), curbing the power of any individual neuron to generate the output52. The second dropout layer is followed by a fully-connected layer with an output size of \([n,K]\), where \(K\) is the number of classes and \(n\) is the number of frames in the input clip. The softmax layer then normalizes the fully-connected layer’s output into a set of class probabilities with shape \([n,K]\), where the sum of each row is \(1\) and the softmax probability of class \(k\) in frame \(j\) is given by the entry \(jk\). Following the softmax layer, the sequence-to-sequence classification layer generates a one-dimensional categorical array of \(n\) labels corresponding to the behavior with the highest softmax probability in each frame. We select cross-entropy loss for \(K\) mutually exclusive classes53 as our loss function, since the behaviors in both datasets are mutually exclusive. All classifiers were trained using a single Nvidia Tesla K80 GPU running on the Dartmouth College high performance computing cluster.
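For concreteness, the layer stack described above might be declared in MATLAB as follows; the number of hidden units and classes are placeholders (the values used in this work are given in Table 2 and Fig. S4).

```matlab
% Sketch of the sequence-to-sequence BiLSTM classifier (sizes illustrative).
numFeatures    = 512;   % spatiotemporal feature dimensionality
numHiddenUnits = 128;   % placeholder; see Table 2 for the value actually used
numClasses     = 8;     % e.g., the eight home-cage behaviors

layers = [
    sequenceInputLayer(numFeatures)
    bilstmLayer(numHiddenUnits, 'OutputMode', 'sequence')
    dropoutLayer(0.5)
    bilstmLayer(numHiddenUnits, 'OutputMode', 'sequence')
    dropoutLayer(0.5)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];   % sequence-to-sequence classification, cross-entropy loss
```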
Classifier training
In this analysis, we use the hyperparameters specified in Table 2 when training the network. To avoid overfitting, we select 20 percent of \(\mathcal{D}^{\mathrm{labeled}}\) to use in our validation set (i.e., \(\mathrm{prop}_{\mathrm{train}}=0.20\); see Fig. 1C). We then evaluate the network on this validation set every epoch (where an “epoch” is defined as a single pass of the entire training set through the network) and record its cross-entropy loss. If the loss on the validation set after a given epoch is greater than or equal to the smallest previously recorded validation loss more than twice, training terminates.
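The early-stopping behavior described above corresponds to MATLAB’s validation-patience mechanism; a sketch is given below, with placeholder validation sequences and option values that are illustrative rather than taken from Table 2.

```matlab
% Sketch of validation-based early stopping. The validation sequences and the
% option values here are placeholders; see Table 2 for those actually used.
XValidation = {randn(512, 150)};                         % one placeholder feature sequence
YValidation = {categorical(randi(8, 1, 150), 1:8)};      % matching placeholder label sequence
iterationsPerEpoch = 50;                                 % so validation runs once per epoch

opts = trainingOptions('adam', ...
    'ValidationData', {XValidation, YValidation}, ...
    'ValidationFrequency', iterationsPerEpoch, ...
    'ValidationPatience', 2, ...                         % stop after repeated non-improvement
    'Shuffle', 'every-epoch');
% net = trainNetwork(XTrain, YTrain, layers, opts);
```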
Classifier evaluation
To evaluate the classifier, we consider its performance on the test set, \(\mathcal{D}^{\mathrm{test}}\) (Fig. 1B,D). For each clip, the classifier outputs a predicted label for each frame, corresponding to the predicted behavior in that frame. In evaluating the classifier, we are interested in how closely these predicted labels match the true ones. We first consider overall prediction accuracy. We let \(\mathrm{correct}\) denote the number of labels for which the network’s prediction is the same as the true label and \(\mathrm{incorrect}\) the number of labels for which the network’s prediction is not the same as the true label. Then accuracy can be quantified as the following proportion:
$$\mathrm{accuracy}=\frac{\mathrm{correct}}{\mathrm{correct}+\mathrm{incorrect}}.$$
Next, we consider the performance of the network by behavior. To do so, we let \(\mathrm{TP}_k\) denote the number of true positives (predicted class \(k\) and true class \(k\)), \(\mathrm{FP}_k\) the number of false positives (predicted class \(k\), but true label not class \(k\)), and \(\mathrm{FN}_k\) the number of false negatives (true class \(k\), predicted not class \(k\)) for class \(k\) (where \(k\) is between \(1\) and the total number of classes, \(K\)).
We then calculate the precision, recall, and F1 score for each label11,55, where the precision and recall for class \(k\) are defined as follows:
$$\mathrm{precision}_k=\frac{\mathrm{TP}_k}{\mathrm{TP}_k+\mathrm{FP}_k},$$
$$\mathrm{recall}_k=\frac{\mathrm{TP}_k}{\mathrm{TP}_k+\mathrm{FN}_k}.$$
Precision is the proportion of correct predictions out of all cases in which the predicted class is class \(k\). Recall, meanwhile, denotes the proportion of correct predictions out of all the cases in which the true class is class \(k\). From the precision and recall, we calculate the F1 score for class \(k\). The F1 score is the harmonic mean of precision and recall, where a high F1 score indicates both high precision and recall, and deficits in either decrease it:
$$\mathrm{F1}_k=2\cdot \frac{\mathrm{precision}_k\cdot \mathrm{recall}_k}{\mathrm{precision}_k+\mathrm{recall}_k}$$
After calculating the F1 score for each class, we calculate the average F1 score, \(\mathrm{F1}_{\mathrm{all}}\), as follows: \(\mathrm{F1}_{\mathrm{all}}=\frac{1}{K}\sum_{k=1}^{K}\mathrm{F1}_k\).
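The sketch below computes these metrics directly from per-frame true and predicted labels; the label values are placeholders.

```matlab
% Sketch of the evaluation metrics, given true and predicted labels for all
% test frames as categorical vectors (placeholder values).
yTrue = categorical(["eat" "drink" "rest" "eat"]);     % placeholder true labels
yPred = categorical(["eat" "rest"  "rest" "eat"]);     % placeholder predicted labels

accuracy = mean(yPred == yTrue);                       % overall prediction accuracy

classes = categories(yTrue);
K = numel(classes);
[precision, recall, f1] = deal(zeros(K, 1));
for k = 1:K
    TP = sum(yPred == classes{k} & yTrue == classes{k});   % true positives for class k
    FP = sum(yPred == classes{k} & yTrue ~= classes{k});   % false positives for class k
    FN = sum(yPred ~= classes{k} & yTrue == classes{k});   % false negatives for class k
    precision(k) = TP / (TP + FP);
    recall(k)    = TP / (TP + FN);
    f1(k)        = 2 * precision(k) * recall(k) / (precision(k) + recall(k));
end
f1All = mean(f1);                                      % average F1 across classes
```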
Confidence score definition
For each input clip, the classifier returns a set of predicted annotations corresponding to the predicted behavior (e.g., “walk,” “drink,” “rest,” etc.) occurring in each frame of that clip. We denote the set of classifier-predicted labels for clip number \(i\), \(\text{clip}_i\), as \(\left\{\widehat{y}_j : j\in \text{clip}_i\right\}\). Each clip also has a set of “true” labels, corresponding to those that would be produced if the clip were manually annotated. In the case of the labeled data, the true labels are known (and used to train the classifier). In the case of unlabeled data, they are not known (prior to manual review). We denote the set of true labels for \(\text{clip}_i\) as \(\left\{y_j : j\in \text{clip}_i\right\}\). For each frame in a clip, in addition to outputting a prediction for the behavior occurring in that frame, we also generate an estimate of how likely it is that the frame’s classifier-assigned label is correct. That is, for each clip, we generate a set of predicted probabilities \(\left\{\widehat{p}_j : j\in \text{clip}_i\right\}\) such that \(\widehat{p}_j\) denotes the estimated likelihood that \(\widehat{y}_j\) is equal to \(y_j\). In an optimal classifier, \(\mathbb{P}\left(\widehat{y}_j=y_j\right)=\widehat{p}_j\). That is, \(\widehat{p}_j\) is an estimate of the probability the classification is correct; and, in an optimal confidence-scorer, the estimated probability the classification is correct will be the ground truth likelihood the classification is correct56.
Now that we have established an estimated probability that a given frame in a clip is correct, we extend the confidence score to an entire clip. As in training data annotation, the review process is conducted at the level of an entire clip, not individual video frames. That is, even if there are a handful of frames in a clip that the classifier is relatively unconfident about, we assume that a human reviewer would need to see the entire clip to have enough context to accurately correct any misclassified frames. Since \(\widehat{p}_j\) is the estimated probability a given frame \(j\) is correct, it follows that the average \(\widehat{p}_j\) for \(j\in \text{clip}_i\) is the estimated probability that a randomly selected frame in \(\text{clip}_i\) is correct. We define this quantity to be the clip confidence score; formally, \(\mathrm{conf}\left(\text{clip}_i\right)=\frac{1}{\left|\text{clip}_i\right|}\sum_{j\in \text{clip}_i}\widehat{p}_j\), where \(\mathrm{conf}\left(\text{clip}_i\right)\) is the clip confidence score of \(\text{clip}_i\) and \(\left|\text{clip}_i\right|\) is the number of frames in \(\text{clip}_i\). We then consider that accuracy is, by definition, the true probability a randomly selected frame in \(\text{clip}_i\) is correct. That is, \(\mathrm{acc}\left(\text{clip}_i\right)=\frac{1}{\left|\text{clip}_i\right|}\sum_{j\in \text{clip}_i}\mathbf{I}(\widehat{y}_j=y_j)\), where \(\mathrm{acc}\left(\text{clip}_i\right)\) is the accuracy of \(\text{clip}_i\) and \(\mathbf{I}\) is the indicator function. In the case of an optimal confidence score, we will have \(\mathrm{conf}\left(\text{clip}_i\right)=\mathrm{acc}\left(\text{clip}_i\right)\). If we compare \(\mathrm{conf}\left(\text{clip}_i\right)\) with \(\mathrm{acc}\left(\text{clip}_i\right)\) on our test data, we can establish how well the confidence score can be expected to perform when the ground truth accuracy, \(\mathrm{acc}\left(\text{clip}_i\right)\), is unknown. In “Methods: Confidence score calculation,” we discuss our approach for obtaining \(\widehat{p}_j\), after which finding clip-wise confidence scores is trivial.
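As a small worked sketch (with placeholder per-frame values), the clip confidence score and, when ground truth is available, the clip accuracy can be computed as:

```matlab
% Sketch of the clip-level confidence score and (test-set) accuracy for a
% single clip, given per-frame values (all values are placeholders).
pHat  = [0.95 0.80 0.99 0.70];                         % estimated P(correct) per frame
yHat  = categorical(["walk" "walk" "rest" "rest"]);    % predicted labels
yTrue = categorical(["walk" "walk" "rest" "walk"]);    % true labels (known only on test data)

clipConf = mean(pHat);                                 % conf(clip_i)
clipAcc  = mean(yHat == yTrue);                        % acc(clip_i), requires ground truth
```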
Confidence score calculation
Here, we first examine how to calculate the frame-wise confidence score \(\widehat{p}_j\). To do so, we consider the classifier structure (Fig. S4) in more detail. In particular, we focus on the last three layers: the fully-connected layer, the softmax layer, and the classification layer. To generate a classification for a given frame, the softmax layer takes in a logits vector from the fully-connected layer. This logits vector represents the raw (unnormalized) predictions of the model. The softmax layer then normalizes these predictions into a set of probabilities, where each probability is proportional to the exponential of the input. That is, given \(K\) classes, the \(K\)-dimensional vector from the fully-connected layer is normalized to a set of probabilities, representing the probability of each class. The class with the highest probability is then returned as the network’s predicted label (e.g., “eat” or “walk”) for that frame. We can then interpret this probability as a confidence score derived from the softmax function56. Formally, if we let the logits vector \(\mathbf{z}_j\) represent the output from the fully-connected layer corresponding to frame \(j\), the softmax-estimated probability that the predicted label of frame \(j\) is correct is \(\widehat{p}_j^{\mathrm{SM}}=\underset{k}{\max}\,\sigma\left(\mathbf{z}_j\right)^{(k)}\), where \(\sigma\) is the softmax function. We refer to this confidence score as the “max softmax score,” since it is derived from the maximum softmax probability.
One of the challenges with using the max softmax probability as a confidence score, however, is that it is often poorly scaled. Ideally, the estimated accuracy for a prediction would closely match its actual expected accuracy, but in practice the softmax function tends to be “overconfident”56. That is, \(\widehat{p}_j^{\mathrm{SM}}\) tends to be larger than \(\mathbb{P}(\widehat{y}_j=y_j)\). To generate a more well-calibrated confidence score (i.e., one in which \(\widehat{p}_j\) is closer to \(\mathbb{P}(\widehat{y}_j=y_j)\)), we use an approach called temperature scaling. Temperature scaling uses a learned parameter \(T\) (where \(T>1\) indicates decreased confidence and \(T<1\) increased confidence) to rescale class probabilities so that the confidence score more closely matches the true accuracy of a prediction57. We define the temperature scaling-based confidence for frame \(j\) as \(\widehat{p}_j^{\mathrm{TS}}=\underset{k}{\max}\,\sigma\left(\mathbf{z}_j/T\right)^{(k)}\), where \(T\) is selected to minimize the negative log likelihood on the validation set. Now that we have established the process for generating a frame-wise confidence score, we can generate the clip-wise confidence score that is used in the confidence-based review. As previously described, for \(\text{clip}_i\) this is simply \(\mathrm{conf}\left(\text{clip}_i\right)=\frac{1}{\left|\text{clip}_i\right|}\sum_{j\in \text{clip}_i}\widehat{p}_j\), where \(\widehat{p}_j\) is either generated via the softmax function (\(\widehat{p}_j=\widehat{p}_j^{\mathrm{SM}}\)) or temperature scaling (\(\widehat{p}_j=\widehat{p}_j^{\mathrm{TS}}\)).
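A minimal sketch of the two frame-wise scores follows, assuming placeholder logits and validation labels; fminbnd stands in for whatever optimizer is used to fit \(T\) in practice.

```matlab
% Sketch of the max-softmax and temperature-scaled confidence scores
% (all data are placeholders; T is fit by minimizing validation NLL).
softmaxFn = @(z) exp(z - max(z)) ./ sum(exp(z - max(z)));   % numerically stable softmax

z   = [2.1; 0.3; -1.0];            % logits for one frame (K = 3 classes, placeholder)
pSM = max(softmaxFn(z));           % max softmax score for that frame

zVal = randn(3, 500);              % placeholder validation logits (K x N)
yVal = randi(3, 1, 500);           % placeholder validation class indices

nll = @(T) computeNLL(zVal / T, yVal);
T   = fminbnd(nll, 0.1, 10);       % temperature minimizing validation NLL
pTS = max(softmaxFn(z / T));       % temperature-scaled confidence for the frame

function v = computeNLL(Z, y)
    % Negative log-likelihood of the true classes under a column-wise softmax.
    P   = exp(Z - max(Z, [], 1)) ./ sum(exp(Z - max(Z, [], 1)), 1);
    idx = sub2ind(size(P), y, 1:numel(y));
    v   = -sum(log(P(idx)));
end
```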
Confidence-based review
Now that we have generated a confidence score for a given clip, we use it in two ways. First, recall that one of the purposes of the confidence-based review is to estimate the accuracy of the unlabeled data, \(\mathcal{D}^{\mathrm{unlabeled}}\). If, for example, a user decided that an accuracy of 80 percent was acceptable for their given behavior analysis application (i.e., \(\mathrm{acc}(\mathcal{D}^{\mathrm{unlabeled}})\ge 0.8\)), then, given an acceptably reliable confidence score, unlabeled data for which \(\mathrm{conf}\left(\mathcal{D}^{\mathrm{unlabeled}}\right)\ge 0.8\) would be sufficient for export and use in their given analysis without manual review. Before obtaining an estimate for \(\mathrm{conf}\left(\mathcal{D}^{\mathrm{unlabeled}}\right)\), we first consider that the true (unknown) accuracy of the annotations in \(\mathcal{D}^{\mathrm{unlabeled}}\) is the weighted sum of the accuracies of the clips in \(\mathcal{D}^{\mathrm{unlabeled}}\), where the weight is determined by the number of frames in each clip. Formally, we can express the accuracy of \(\mathcal{D}^{\mathrm{unlabeled}}\) as:
$$\mathrm{acc}\left(\mathcal{D}^{\mathrm{unlabeled}}\right)=\sum_{i\in \mathcal{D}^{\mathrm{unlabeled}}}\left(\mathrm{acc}\left(\text{clip}_i\right)\times \frac{\left|\text{clip}_i\right|}{\sum_{j\in \mathcal{D}^{\mathrm{unlabeled}}}\left|\text{clip}_j\right|}\right),$$
where \(\frac{\left|\text{clip}_i\right|}{\sum_{j\in \mathcal{D}^{\mathrm{unlabeled}}}\left|\text{clip}_j\right|}\) weights \(\mathrm{acc}\left(\text{clip}_i\right)\) by the number of frames in \(\text{clip}_i\) (i.e., \(\left|\text{clip}_i\right|\)) relative to the total number of frames in \(\mathcal{D}^{\mathrm{unlabeled}}\) (i.e., \(\sum_{j\in \mathcal{D}^{\mathrm{unlabeled}}}\left|\text{clip}_j\right|\)). We then estimate the accuracy of the unlabeled data by substituting the known \(\mathrm{conf}(\text{clip}_i)\) for the unknown \(\mathrm{acc}(\text{clip}_i)\):
$$\mathrm{conf}\left(\mathcal{D}^{\mathrm{unlabeled}}\right)=\sum_{i\in \mathcal{D}^{\mathrm{unlabeled}}}\left(\mathrm{conf}\left(\text{clip}_i\right)\times \frac{\left|\text{clip}_i\right|}{\sum_{j\in \mathcal{D}^{\mathrm{unlabeled}}}\left|\text{clip}_j\right|}\right).$$
In this way, \(\mathrm{conf}\left(\mathcal{D}^{\mathrm{unlabeled}}\right)\) represents the approximate accuracy of the classifier on unlabeled data. If the confidence score functions well, then \(\mathrm{conf}\left(\mathcal{D}^{\mathrm{unlabeled}}\right)\) will closely match \(\mathrm{acc}\left(\mathcal{D}^{\mathrm{unlabeled}}\right)\).
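Concretely, the frame-weighted estimate reduces to a weighted average of per-clip confidence scores, as in the following sketch (placeholder values):

```matlab
% Sketch of the frame-weighted estimate of unlabeled-data accuracy from
% per-clip confidence scores and clip lengths (placeholder values).
clipConf   = [0.92 0.75 0.88];          % conf(clip_i) for each unlabeled clip
clipFrames = [900  1500 1200];          % |clip_i|: number of frames per clip

datasetConf = sum(clipConf .* clipFrames) / sum(clipFrames);   % conf(D^unlabeled)
```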
Next, we consider the confidence-based review. In this component of the workflow, users can review and correct labels automatically generated by the classifier for \(\mathcal{D}^{\mathrm{unlabeled}}\). A naïve approach would be to review all the video clips contained in \(\mathcal{D}^{\mathrm{unlabeled}}\). While this would indeed ensure all the labels produced by the classifier are correct, if \(\mathcal{D}^{\mathrm{unlabeled}}\) is large it can prove quite time-consuming. So instead, we leverage confidence scores to allow users to annotate only the subset of clips with relatively low confidence scores (i.e., relatively low predicted accuracy), for which review is most productive, while omitting those with relatively high confidence scores.
If a user reviews only a portion of the clips, it should be the portion with the lowest accuracy, for which correction is the most important. To express this formally, consider an ordered sequence of the \(n\) clips in \(\mathcal{D}^{\mathrm{unlabeled}}\), \((\mathrm{clip}_1, \mathrm{clip}_2, \dots , \mathrm{clip}_n)\), sorted in ascending order by accuracy (i.e., \(\mathrm{acc}\left(\mathrm{clip}_i\right)\le \mathrm{acc}\left(\mathrm{clip}_j\right)\) for \(i<j\) and all \(i,j\le n\)). If we review only \(k\) of the \(n\) clips, where \(k\le n\), we are best off reviewing clips \(\mathrm{clip}_1, \mathrm{clip}_2, \dots , \mathrm{clip}_k\) from the list, since they have the lowest accuracy. For unlabeled data, however, recall that we can’t precisely sort clips by accuracy, since without ground truth annotations \(\mathrm{acc}\left(\text{clip}_i\right)\) is unknown. However, since \(\mathrm{conf}\left(\text{clip}_i\right)\) approximates \(\mathrm{acc}\left(\text{clip}_i\right)\), we can instead sort unlabeled clips by their (known) confidence scores, and then select the clips with the lowest confidence scores to review first. This forms the basis of the confidence-based review. Given a set of clips in \(\mathcal{D}^{\mathrm{unlabeled}}\), we simply create a sequence of clips \((\mathrm{clip}_1, \mathrm{clip}_2, \dots , \mathrm{clip}_n)\) sorted by confidence score (i.e., such that \(\mathrm{conf}\left(\text{clip}_i\right)\le \mathrm{conf}\left(\mathrm{clip}_j\right)\) for all \(i<j\)) and then have users review clips in ascending order. If the confidence score is an effective estimate of the clip accuracies, sorting based on confidence score will approximate sorting by accuracy.
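In code, the confidence-based review ordering amounts to an ascending sort of per-clip confidence scores, as sketched below with placeholder values:

```matlab
% Sketch of the confidence-based review ordering: sort unlabeled clips by
% confidence and review the k least-confident clips first (placeholders).
clipConf = [0.92 0.75 0.88 0.60];            % per-clip confidence scores
k = 2;                                       % number of clips the user will review

[~, order] = sort(clipConf, 'ascend');       % least confident first
clipsToReview = order(1:k);                  % indices of the clips to review
```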
Evaluating confidence score calibration
To examine the relationship between confidence scores and accuracy, we first consider the relationship between individual clips’ predicted accuracy (as derived from confidence scores) and actual accuracy. The prediction error (PE) for a given clip is defined as the signed difference between its predicted accuracy and its actual accuracy. For \(\mathrm{clip}_i\), the PE is then \(\mathrm{PE}(\mathrm{clip}_i)=\mathrm{conf}\left(\mathrm{clip}_i\right)-\mathrm{acc}(\mathrm{clip}_i)\). Positive values indicate an overconfident score, and negative values an underconfident one. The absolute error (AE) is the magnitude of the prediction error and is defined as \(\mathrm{AE}\left(\mathrm{clip}_i\right)=\left|\mathrm{PE}(\mathrm{clip}_i)\right|\). The AE is always nonnegative, with a higher \(\mathrm{AE}\left(\mathrm{clip}_i\right)\) indicating a greater absolute deviation between \(\mathrm{conf}\left(\mathrm{clip}_i\right)\) and \(\mathrm{acc}(\mathrm{clip}_i)\).
While PE and AE are defined for a single clip, we also consider the mean absolute error and mean prediction error across all the clips in \(\mathcal{D}^{\mathrm{unlabeled}}\). Here, we let \(\mathrm{clip}_1, \mathrm{clip}_2,\dots ,\mathrm{clip}_n\) denote a set of \(n\) clips. The mean absolute error (MAE) is defined as \(\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\mathrm{AE}(\mathrm{clip}_i)\). MAE expresses the average magnitude of the difference between predicted accuracy and actual accuracy for a randomly selected clip in the set. So, for example, if \(\mathrm{MAE}=0.1\), then a randomly selected clip’s confidence score will differ from its accuracy by about 10 percentage points, in expectation. The mean signed difference (MSD), meanwhile, is defined as \(\mathrm{MSD}=\frac{1}{n}\sum_{i=1}^{n}\mathrm{PE}(\mathrm{clip}_i)\). MSD expresses the signed difference between the total expected accuracy across clips and the total actual accuracy. So, for example, if \(\mathrm{MSD}=-0.05\), then the total estimated accuracy of annotations for the set \(\mathrm{clip}_1, \mathrm{clip}_2,\dots ,\mathrm{clip}_n\) is five percentage points lower than the true accuracy.
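A short sketch of these calibration metrics, using placeholder per-clip confidence and accuracy values:

```matlab
% Sketch of the calibration metrics over a set of clips (placeholder values).
clipConf = [0.90 0.70 0.85];     % predicted accuracy (confidence scores)
clipAcc  = [0.95 0.60 0.80];     % actual accuracy (known on test data)

PE  = clipConf - clipAcc;        % per-clip prediction error
MAE = mean(abs(PE));             % mean absolute error
MSD = mean(PE);                  % mean signed difference
```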
Evaluating review efficiency
To develop a metric for the performance of the confidence-based review, we first consider a case where a user has generated predicted labels for \(n\) clips, none of which has been manually labeled, and selects \(k\) of them to review, where \(k\le n\). The remaining \(n-k\) clips are not reviewed and are exported with unrevised classifier-generated labels. Then, for each of the \(k\) clips the user has selected, he or she reviews the clip and corrects any incorrect classifier-generated labels. In this formulation, after reviewing a given clip, that clip’s accuracy (defined as the agreement between the clip’s labels and the labels that would be produced by manual annotation) is \(1\), since any incorrect classifier-produced labels would have been corrected.
Next, we assume that we have been provided with a sequence of \(n\) clips, \(\mathcal{D}=(\mathrm{clip}_1, \mathrm{clip}_2, \dots , \mathrm{clip}_n)\), from which we select the first \(k\) clips in the sequence to review. If we denote \(\mathrm{clip}_i^{\mathrm{unrev}}\) as clip \(i\) prior to being reviewed, and \(\mathrm{clip}_i^{\mathrm{rev}}\) as clip \(i\) after being reviewed, then we can express the sequence of the first \(k\) clips after they have been reviewed as \(\mathcal{D}_k^{\mathrm{rev}}=(\mathrm{clip}_1^{\mathrm{rev}}, \mathrm{clip}_2^{\mathrm{rev}}, \dots , \mathrm{clip}_k^{\mathrm{rev}})\). We then express the remaining \(n-k\) clips as the sequence \(\mathcal{D}_k^{\mathrm{unrev}}=(\mathrm{clip}_{k+1}^{\mathrm{unrev}}, \mathrm{clip}_{k+2}^{\mathrm{unrev}}, \dots , \mathrm{clip}_n^{\mathrm{unrev}})\). We then consider that the overall accuracy of the sequence of clips, \(\mathrm{acc}(\mathcal{D})\), is simply the weighted average of the accuracy of the reviewed clips, \(\mathcal{D}_k^{\mathrm{rev}}\), and the unreviewed ones, \(\mathcal{D}_k^{\mathrm{unrev}}\), where the weight is a function of the number of frames in each clip. Formally,
$$\mathrm{acc}\left(\mathcal{D}_k\right)=\mathrm{acc}\left(\mathcal{D}_k^{\mathrm{rev}}\right)\times \frac{\left|\mathcal{D}_k^{\mathrm{rev}}\right|}{\left|\mathcal{D}\right|}+\mathrm{acc}\left(\mathcal{D}_k^{\mathrm{unrev}}\right)\times \frac{\left|\mathcal{D}_k^{\mathrm{unrev}}\right|}{\left|\mathcal{D}\right|},$$
where \(\left|\mathcal{D}\right|\) is the total number of video frames in the clips in set \(\mathcal{D}\) (i.e., \(\left|\mathcal{D}\right|=\sum_{i\in \mathcal{D}}\left|\mathrm{clip}_i\right|\)). We then consider that, after reviewing and correcting the first \(k\) clips, the accuracy of each reviewed clip is now \(1\). That is, \(\mathrm{acc}\left(\mathrm{clip}_i^{\mathrm{rev}}\right)=1\) for all \(\mathrm{clip}_i^{\mathrm{rev}}\in \mathcal{D}_k^{\mathrm{rev}}\). Therefore, the total accuracy of sequence \(\mathcal{D}\), after reviewing the first \(k\) clips, is
$$\mathrm{acc}\left(\mathcal{D}_k\right)=\frac{\left|\mathcal{D}_k^{\mathrm{rev}}\right|}{\left|\mathcal{D}\right|}+\mathrm{acc}\left(\mathcal{D}_k^{\mathrm{unrev}}\right)\times \frac{\left|\mathcal{D}_k^{\mathrm{unrev}}\right|}{\left|\mathcal{D}\right|}.$$
This method for calculating the accuracy of dataset \(\mathcal{D}\) after reviewing the first \(k\) clips becomes useful for analyzing the performance of the confidence-based review. To see why, we first consider the lower bound on \(\mathrm{acc}\left(\mathcal{D}_k\right)\). In the worst case, our confidence score will convey no information about the relative accuracies of the clips in \(\mathcal{D}\). Without a relationship between \(\mathrm{acc}\left(\mathrm{clip}_i\right)\) and \(\mathrm{conf}\left(\mathrm{clip}_i\right)\), sorting based on confidence score is effectively the same as randomly selecting clips. In this way, we can compare the accuracy after reviewing the first \(k\) clips sorted by confidence score with the accuracy that would have been obtained if \(k\) clips had been selected at random for review. We denote this improvement in accuracy using confidence metric \(\mathrm{conf}\) as the “improvement over random” and formalize it as \(\mathrm{IOR}_k^{\mathrm{conf}}=\mathrm{acc}\left(\mathcal{D}_k^{\mathrm{conf}}\right)-\mathrm{acc}\left(\mathcal{D}_k^{\mathrm{rand}}\right)\), where \(\mathcal{D}_k^{\mathrm{conf}}\) and \(\mathcal{D}_k^{\mathrm{rand}}\) denote dataset \(\mathcal{D}\) sorted by confidence score and randomly, respectively.
Next, we place an upper bound on \(\mathrm{IOR}_k\) by considering the maximum accuracy that \(\mathcal{D}\) could have after reviewing \(k\) clips. In the best case, the first \(k\) clips reviewed would be the \(k\) clips with the lowest accuracy. Here, since we’re evaluating on \(\mathcal{D}^{\mathrm{test}}\), where accuracy is known, we can calculate this. If we let \(\mathcal{D}^{\mathrm{acc}}\) denote the sequence of clips sorted in ascending order by their true accuracy, then the maximum accuracy of \(\mathcal{D}\) after reviewing \(k\) clips is \(\mathrm{acc}\left(\mathcal{D}_k^{\mathrm{acc}}\right)\). Then, similar to the analysis above, we calculate the improvement of optimal review (i.e., review based on true accuracy) over random review as \(\mathrm{IOR}_k^{\mathrm{opt}}=\mathrm{acc}\left(\mathcal{D}_k^{\mathrm{acc}}\right)-\mathrm{acc}\left(\mathcal{D}_k^{\mathrm{rand}}\right).\) Semantically, \(\mathrm{IOR}_k^{\mathrm{opt}}\) expresses how much higher the accuracy of the test set is after reviewing \(k\) clips in the optimal order than it would be if clips had been reviewed randomly.
We can then derive a series of global measures for the confidence-based review. While \(\mathrm{IOR}_k\) is defined for a single number of clips reviewed, \(k\), we look to generate a measure that expresses \(\mathrm{IOR}_k\) across a range of \(k\) values. To do so, we calculate the average improvement over random across the number of clips reviewed, from \(0\) to the total number, \(n\), as follows:
$$\overline{\mathrm{IOR}}_n^{\mathrm{method}}=\frac{1}{n}\sum_{k=0}^{n}\mathrm{IOR}_k^{\mathrm{method}}.$$
\(\overline{\mathrm{IOR}}_n^{\mathrm{method}}\) expresses the mean improvement over random of method \(\mathrm{method}\) over \(n\) clips. After calculating \(\overline{\mathrm{IOR}}_n^{\mathrm{conf}}\) and \(\overline{\mathrm{IOR}}_n^{\mathrm{opt}}\) (i.e., \(\overline{\mathrm{IOR}}_n\) for confidence-based and optimal sorting), we can generate a final measure for the review efficiency by expressing the average improvement of confidence score \(\mathrm{conf}\) over random relative to the maximum possible improvement over random (optimal review):
$$\mathrm{review\_efficiency}_n^{\mathrm{conf}}=\frac{\overline{\mathrm{IOR}}_n^{\mathrm{conf}}}{\overline{\mathrm{IOR}}_n^{\mathrm{opt}}}.$$
This metric expresses how close review using metric \(\mathrm{conf}\) is to optimal. If the sort order based on \(\mathrm{conf}\) exactly matches that of sorting by accuracy, \(\mathrm{review\_efficiency}_n^{\mathrm{conf}}=1\). If the sort order is no better than random, \(\mathrm{review\_efficiency}_n^{\mathrm{conf}}=0\).
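The sketch below ties these definitions together on a toy test set with known per-clip accuracies; clips are assumed to be of equal length, and a single random ordering stands in for the expectation over random review orders.

```matlab
% Sketch of the review-efficiency calculation on a toy test set. Equal-length
% clips are assumed, one random draw stands in for the expectation over random
% orderings, and all values are placeholders.
clipAcc  = [0.55 0.90 0.70 0.95 0.60];        % true per-clip accuracies (test data)
clipConf = [0.60 0.85 0.75 0.92 0.58];        % per-clip confidence scores
n = numel(clipAcc);

% Dataset accuracy after reviewing the first k clips of a given ordering,
% for k = 0..n (reviewed clips are corrected, so their accuracy becomes 1).
accAfterK = @(order) arrayfun(@(k) (k + sum(clipAcc(order(k+1:end)))) / n, 0:n);

[~, confOrder] = sort(clipConf, 'ascend');    % confidence-based review order
[~, optOrder]  = sort(clipAcc,  'ascend');    % optimal (true-accuracy) order
randOrder      = randperm(n);                 % random order (one draw)

IORconf = accAfterK(confOrder) - accAfterK(randOrder);
IORopt  = accAfterK(optOrder)  - accAfterK(randOrder);

IORbarConf = sum(IORconf) / n;                % mean improvement over random (confidence)
IORbarOpt  = sum(IORopt)  / n;                % mean improvement over random (optimal)
reviewEfficiency = IORbarConf / IORbarOpt;    % 1 = optimal ordering, 0 = no better than random
```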
Implementation details and code availability
We implement the toolbox in MATLAB version 2020b. The GUI for annotation and confidence-based review is included in the toolbox as a MATLAB application. Figures are produced using Prism 9 and OmniGraffle. The entire toolbox, along with example scripts, documentation, and additional implementation details, is hosted via a public GitHub repository at: https://github.com/carlwharris/DeepAction. We provide the intermediary data generated for the home-cage dataset (e.g., spatial and temporal frames and features, annotations, etc.) as an example project linked in the GitHub repository. Data produced to generate results for the CRIM13 project is available upon request, but not provided as an example project due to its large file sizes. Full data (i.e., results for each test split in both projects) needed to replicate the results is also available on request. Data for the exploratory dataset is proprietary.