Six of the nine cameras in our lab simulation show scenes from inside a typical bus. We discuss how we tested configuring analytics for different regions of the bus in the following sections.
The following screenshot shows the three ROI zones we created using the DeepInsights portal tools. The largest blue rectangle defines the driver’s seating area, where we are interested in detecting movement. The smaller blue rectangle is labeled the Inside Drive Zone. Rules for this zone primarily detect motion of the driver’s arms, hands, and the steering wheel that is consistent with active control of the bus while it is moving.
The detection of driver movement is used for alerting and for producing trending statistics using a Zone Counting rule. We configured a Reduce aggregation type with a maximum count aggregation and a collection rate of every 3 seconds. The following two screenshots show the workflow for configuring this rule.
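Although rule configuration is done entirely through the DeepInsights portal UI, the effect of the Reduce aggregation can be sketched in a few lines of code. The following Python snippet is only an illustration of a maximum count aggregation over 3-second intervals; the function name and data shapes are our own assumptions, not part of the DeepInsights API.

```python
from collections import defaultdict

# Illustrative sketch only -- not the DeepInsights API.
# Given per-frame person counts inside the Drive Zone, a Reduce aggregation
# with a maximum count and a 3-second collection rate emits one event per
# interval containing the highest count observed in that interval.
def reduce_max_count(frame_counts, interval_secs=3):
    """frame_counts: iterable of (timestamp_secs, count) tuples."""
    peaks = defaultdict(int)
    for ts, count in frame_counts:
        bucket = int(ts // interval_secs)            # 3-second window index
        peaks[bucket] = max(peaks[bucket], count)
    # One aggregated event per window instead of one event per frame.
    return [(bucket * interval_secs, peak) for bucket, peak in sorted(peaks.items())]

# Example: per-frame detections collapse to one event every 3 seconds.
print(reduce_max_count([(0.1, 1), (1.5, 1), (2.9, 2), (3.2, 1), (5.8, 1)]))
# -> [(0, 2), (3, 1)]
```

The aggregation trades per-frame granularity for a compact series of peak counts, which is what the trending statistics described above are built from.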
The following screenshot shows the UI after configuring a single ROI consisting of a line drawn across the scene that defines a “threshold” for tracking objects moving onto and off the bus through the front door. The UI also shows that two rules are configured for the ROI: one for tracking movement from outside the bus toward the interior (In) and one for tracking movement from the bus’s interior to the area outside (Out).
As shown in the following screenshots, we configured both a People Enter and a People Exit rule associated with the Inside Bus Entrance zone. These rules use the Line Crossing rule type to count the number of passengers entering and exiting the bus. An object that crosses the virtual line in the scene must match the model’s person class to be counted. The Split post-processing type is a form of aggregation that attaches both the dwell time (the amount of time the object took to move through the scene) and the object class name as data for the events generated by either rule.
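To make the Line Crossing and Split behavior concrete, the sketch below shows one way the enter/exit counting and event data could be expressed. It is a hypothetical Python illustration of the logic described above, not the DeepInsights implementation; the Track fields and event keys are assumptions.

```python
from dataclasses import dataclass

# Illustrative sketch only -- not the DeepInsights implementation.
# Track fields and event keys are assumptions for this example.
@dataclass
class Track:
    class_name: str      # object class assigned by the model
    first_seen: float    # seconds when the object appeared in the scene
    last_seen: float     # seconds when the object left the scene
    start_side: str      # "outside" or "inside" relative to the virtual line
    end_side: str

def line_crossing_events(tracks):
    """Emit a People Enter or People Exit event for each person object that
    crosses the virtual line at the front door. The Split post-processing is
    represented by attaching the class name and dwell time to each event."""
    events = []
    for t in tracks:
        if t.class_name != "person" or t.start_side == t.end_side:
            continue                                    # not a person, or no crossing
        rule = "People Enter" if t.end_side == "inside" else "People Exit"
        events.append({
            "rule": rule,
            "class_name": t.class_name,                 # Split data item 1
            "dwell_time": t.last_seen - t.first_seen,   # Split data item 2
        })
    return events

print(line_crossing_events([Track("person", 10.0, 14.5, "outside", "inside")]))
# -> [{'rule': 'People Enter', 'class_name': 'person', 'dwell_time': 4.5}]
```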
Four cameras monitor the main areas occupied by passengers. For this use case, we configured a single ROI that combines the video coverage of all four camera scenes, as shown in the following screenshot. We defined the blue rectangle that spans all four video display panels as the Inside Zone and used it to configure the following rules.
We then configured two rules to monitor the total count of passengers and the associated dwell time (the time that objects appear in the scene). For the Active People Count rule, we used a Zone Counting rule type that matches any person object. We used a Reduce processing type with a maximum count aggregation over each 3-second interval to reduce the number of generated events while still obtaining a series of counts over time with sufficient detail to produce accurate and valuable reports.
The People Dwell Time rule starts with the detection of a match with a person object and is then post-processed with a Split type to produce two data items: the first indicates that a person object was detected, and the second is the dwell time (defined previously), which is the total time the person object was present in the zone.
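The sketch below illustrates the two Inside Zone rules together: combining person counts from the four camera panels for the Active People Count rule and producing the two Split data items for the People Dwell Time rule. It is a hypothetical Python illustration, not the DeepInsights implementation; in particular, summing per-camera counts is our simplification of how the combined ROI might be evaluated.

```python
# Illustrative sketch only -- not the DeepInsights implementation.
# Summing per-camera counts and the field names below are assumptions.
def inside_zone_count(per_camera_person_counts):
    """Combine person detections from the four camera panels that the Inside
    Zone spans into a single frame-level count. The resulting series can then
    be reduced with the same 3-second maximum aggregation shown earlier for
    the Drive Zone."""
    return sum(per_camera_person_counts.values())

def people_dwell_time_split(tracks):
    """tracks: iterable of (class_name, first_seen_secs, last_seen_secs).
    The Split post-processing emits two data items per person object:
    an indicator that a person was detected and the dwell time in the zone."""
    return [{"person_detected": True, "dwell_time": last - first}
            for cls, first, last in tracks if cls == "person"]

print(inside_zone_count({"cam3": 2, "cam4": 1, "cam5": 0, "cam6": 3}))   # -> 6
print(people_dwell_time_split([("person", 12.0, 95.5)]))
# -> [{'person_detected': True, 'dwell_time': 83.5}]
```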