Sizing universally for LiDAR streams is challenging because the following factors determine the system resources required:
- The density of points produced by the LiDAR sensor
- The size of the coverage area that requires object detection analytics
- The number of LiDAR sensors positioned to cover the scene
We followed the common practice of sizing by the number of LiDAR points per second that can be processed while monitoring end-to-end latency in the system.
For this sizing exercise, each simulated LiDAR sensor published at a rate of 1.2 million points per second. Up to four sensors ran on an Algo node at any given time, for a maximum of 4.8 million points processed per second.
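The aggregate throughput figure above can be sketched as a small sizing calculation. This is a minimal illustration, not part of the original test harness; the constants mirror the numbers stated in the text, and the function name is hypothetical.

```python
# Sketch of the sizing arithmetic described above (hypothetical helper,
# constants taken from the test setup in the text).
POINTS_PER_SENSOR_PER_SEC = 1_200_000  # publish rate of each simulated sensor
MAX_SENSORS_PER_NODE = 4               # sensors running on one Algo node

def aggregate_rate(num_sensors: int) -> int:
    """Total points per second an Algo node must process."""
    if not 1 <= num_sensors <= MAX_SENSORS_PER_NODE:
        raise ValueError("sensor count outside the tested range")
    return num_sensors * POINTS_PER_SENSOR_PER_SEC

# Print the load at each tested sensor count, up to the 4.8M/sec maximum.
for n in range(1, MAX_SENSORS_PER_NODE + 1):
    print(f"{n} sensor(s): {aggregate_rate(n):,} points/sec")
```

A real deployment would replace the constants with the measured point density of the actual sensors and the coverage area being monitored.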