One of the major concerns with using supervised ML for this classification task is that the model may fail to recognize and classify novel, unseen issues at the inference stage. To address this, a human-in-the-loop model feedback mechanism is implemented. The main idea is that a proportion of model predictions is validated daily by a group of experts. The experts manually analyze textual logs together with the associated model predictions and propose corrections to the labels, including the addition of new labels. The authors primarily focus on validating low-confidence predictions but use a staggered, confidence-based random sampling strategy to select predictions for validation: 70% of the validation budget is allocated to low-confidence predictions (below a fixed low-confidence threshold), 10% to high-confidence predictions (above a high-confidence threshold), and the remaining 20% to randomly sampled predictions.
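As a rough illustration of how this staggered sampling split might be realized, the following Python sketch allocates a daily validation budget across the three strata. The threshold values, the tuple layout of `predictions`, and the function name `sample_for_validation` are assumptions made for illustration only; the paper does not specify these details.

```python
import random

# Hypothetical confidence cutoffs; the paper mentions fixed low- and
# high-confidence thresholds but does not report their values.
LOW_CONF_THRESHOLD = 0.4
HIGH_CONF_THRESHOLD = 0.9


def sample_for_validation(predictions, daily_budget):
    """Select predictions for expert review using the staggered,
    confidence-based split described above (70% low-confidence,
    10% high-confidence, 20% random).

    `predictions` is assumed to be a list of
    (log_id, predicted_label, confidence) tuples.
    """
    low = [p for p in predictions if p[2] < LOW_CONF_THRESHOLD]
    high = [p for p in predictions if p[2] >= HIGH_CONF_THRESHOLD]

    n_low = int(0.7 * daily_budget)
    n_high = int(0.1 * daily_budget)
    n_random = daily_budget - n_low - n_high  # remaining ~20%

    selected = random.sample(low, min(n_low, len(low)))
    selected += random.sample(high, min(n_high, len(high)))

    # Fill the random slice from predictions not already selected.
    chosen_ids = {p[0] for p in selected}
    remaining = [p for p in predictions if p[0] not in chosen_ids]
    selected += random.sample(remaining, min(n_random, len(remaining)))
    return selected
```

In this sketch the random slice acts as a safety net, giving experts visibility into predictions the model is confident about, so that systematically wrong but high-confidence labels can still be caught and corrected.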