Who’s watching your IP cameras?
Thu, 20 Jul 2023 18:05:50 -0000|
Read Time: 0 minutes
In today’s world, deploying security cameras is common practice. In some public facilities, such as airports, travelers can be in view of a security camera 100% of the time. The days of security guards watching banks of video panels fed from hundreds of security cameras are quickly being replaced by computer vision systems powered by artificial intelligence (AI). Today’s advanced analytics can be performed on many camera streams in real time without a human in the loop. These systems not only enhance personal safety but also provide other benefits, such as better passenger and shopping experiences.
Modern IP cameras are complex devices. In addition to recording video streams at increasingly higher resolutions (4K is now common), they can also encode those streams and send them over standard Internet Protocol (IP) networks to downstream systems for additional analytic processing and eventual archiving. Some cameras on the market today have enough onboard computing power and storage to evaluate AI models and perform analytics right on the camera.
The development of IP-connected cameras provided great flexibility in deployment by eliminating the need for specialized cables. IP cameras are so easy to plug into existing IT infrastructure that almost anyone can do it. However, since most camera vendors use a modified version of an open-source Linux operating system, IT and security professionals are realizing that there are hundreds or thousands of customized Linux servers mounted on walls and ceilings all over their facilities. Whether you are responsible for fewer than 10 cameras at a small retail outlet or more than 5,000 at an airport facility, the question remains: “How much exposure to cyber-attacks do all those cameras pose?”
To understand the potential risk posed by IP cameras, we assembled a lab environment with multiple camera models from different vendors. Some cameras were thought to be up to date with the latest firmware, and some were not.
Working in collaboration with the Secureworks team and their suite of vulnerability and threat management tools, we assessed a strategy for detecting IP camera vulnerabilities. Our first choice was to use their Secureworks Taegis™ VDR vulnerability scanning software to scan our lab IP network and discover any camera vulnerabilities. VDR provides a risk-based approach to managing vulnerabilities, driven by automated and intelligent machine learning.
We planned to discover the cameras with older firmware and document their vulnerabilities. Then we would have the engineers upgrade all firmware and software to the latest patches available and rescan to see if all the vulnerabilities were resolved.
Once the Secureworks Edge agent was set up in the lab, we could easily add all the IP ranges that might be connected to our cameras. All the cameras on those networks were identified by Secureworks VDR and automatically added to the VDR AWS cloud-based reporting console.
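Conceptually, the first step a scanner performs is expanding each configured range into candidate hosts to probe. As a rough sketch (this is not how VDR is implemented, and the VLAN address block is made up), Python’s standard ipaddress module can enumerate a camera subnet:

```python
import ipaddress

def hosts_in_range(cidr: str) -> list[str]:
    """Expand a CIDR block into the individual host addresses a scanner would probe."""
    return [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]

# A hypothetical /29 camera VLAN yields six usable host addresses to probe.
camera_vlan = hosts_in_range("192.168.50.0/29")
print(len(camera_vlan), camera_vlan[0])
```

Each resulting address would then be probed for camera-typical services such as RTSP (port 554) and HTTP/HTTPS (ports 80/443).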
Discovering Camera Vulnerabilities
The results of the scans were surprising. Almost all discovered cameras had some Critical issues identified by the VDR scanning. In one case, even after a camera was upgraded to the latest firmware available from the vendor, VDR found Critical software and configuration vulnerabilities, described below.
One of the remaining critical issues was an insecure FTP username/password that had not been changed from the vendor’s default settings before the camera was put into service. These kinds of procedural lapses should not happen, but inevitably they do. The VDR scan easily caught the password-hardening mistake so that this common cybersecurity risk could be addressed. This is an example of an issue not related to firmware: it stems from a combination of vendors shipping cameras with a well-known FTP login and users forgetting to harden that login.
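A check for this class of lapse can be as simple as comparing a camera’s configured login against a list of well-known defaults. The sketch below is illustrative only; the credential pairs are generic examples, not drawn from any specific vendor or from VDR’s database:

```python
# Well-known default FTP logins often left unchanged on IP cameras
# (illustrative entries only, not tied to any specific vendor).
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", "12345"),
    ("root", "root"),
    ("ftpuser", "ftpuser"),
}

def uses_default_login(username: str, password: str) -> bool:
    """Flag a camera whose configured FTP login matches a known vendor default."""
    return (username, password) in DEFAULT_CREDENTIALS

print(uses_default_login("admin", "admin"))        # flagged as a default login
print(uses_default_login("svc_cam7", "N0t-a-default!"))
```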
Another example of the types of Critical issues you can expect when dealing with IP cameras relates to discovering an outdated library dependency found on the camera. The library is required by the vendor software but was not updated when the latest camera firmware patches were applied.
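At its core, detecting this class of issue is a version comparison between what is installed and the earliest patched release. A deliberately simplified sketch (real version schemes, such as OpenSSL’s letter suffixes, need more careful parsing; the version numbers here are illustrative):

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted numeric version string like '1.0.2' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_outdated(installed: str, minimum_patched: str) -> bool:
    """True when the installed library predates the earliest patched release."""
    return parse_version(installed) < parse_version(minimum_patched)

print(is_outdated("1.0.2", "1.1.1"))  # an old library flagged as vulnerable
```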
Camera Administration Consoles
The VDR tool will also detect if a camera is exposing any HTTP sites/services and look for vulnerabilities there. Most IP cameras ship with an embedded HTTP server so administrators can access the cameras' functionality and perform maintenance. Again, considering the number of deployed cameras, this represents a huge number of websites that may be susceptible to hacking. Our testing found some examples of the type of issues that a camera’s web applications can expose:
The scan of this device found an older version of the Apache web server software and outdated SSL libraries in use for this camera’s website; both should be considered critical vulnerabilities.
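Scanners typically fingerprint such services from service banners like the HTTP Server header. A minimal sketch of that idea, using a made-up banner string rather than output from our lab cameras:

```python
import re

def parse_server_banner(banner):
    """Extract product and version from an HTTP Server header like 'Apache/2.2.15 (CentOS)'."""
    match = re.match(r"([A-Za-z-]+)/([\d.]+)", banner)
    return (match.group(1), match.group(2)) if match else None

# Hypothetical banner for illustration; the parsed version can then be
# compared against known-vulnerable releases.
print(parse_server_banner("Apache/2.2.15 (CentOS)"))
```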
In this article, we have tried to raise awareness of the significant cybersecurity risk that IP cameras pose to organizations both large and small. Providing effective video recording and analysis capabilities is much more than simply mounting cameras on the wall and walking away. IT and security professionals must ask, “Who’s watching our IP cameras?” Each camera should be continuously patched to the latest firmware and software versions - and scanned with a tool like Secureworks VDR. If vulnerabilities still exist after scanning and patching, it is critical to engage with your camera vendor to remediate issues that could adversely impact your organization if neglected. Someone will be watching your IP cameras; let’s ensure their interests don’t conflict with yours.
Dell Technologies is at the forefront of delivering enterprise-class computer vision solutions. Our extensive partner network and key industry stakeholders have allowed us to develop an award-winning process that takes customers from ideation to full-scale implementation faster and with less risk. Our outcomes-based process for computer vision delivers:
- Increased operational efficiencies: Leverage all the data you’re capturing to deliver high-quality services and improve resource allocation.
- Optimized safety and security: Provide a safer, more real-time aware environment.
- Enhanced experience: Provide a more positive, personalized, and engaging experience for customers and employees.
- Improved sustainability: Measure and lower your environmental impact.
- New revenue opportunities: Unlock more monetization opportunities from your data with more actionable insights.
A Quick Run Down: Mission Critical Architecture for Modern Security Solutions
Tue, 20 Jun 2023 20:21:51 -0000|
Nowadays, end users view their security data as mission critical. This means having uninterrupted access to recorded video and ensuring that recorders and archivers remain functional, which are essential for daily business operations. As a result, designs need to prioritize uptime while also taking other end user considerations into account.
A key consideration is storage efficiency. Stacking 2U NVRs for storage is highly inefficient, both in terms of sustainability and rack space. Rather than over-procuring compute nodes just to expand storage, it is better to build a proper on-premises storage solution that can scale to petabytes. This ensures that the end user has sufficient storage capacity without compromising the availability of compute nodes.
In addition to availability, regulatory compliance is another critical issue for end users. Achieving six 9s availability is possible, but if a node requires servicing, there may be no access to recorded video, leading to potential compliance issues. To mitigate this risk, it is recommended to shift from a RAID appliance to a node-based NAS storage system that uses erasure coding. This enables the end user to drop a full node of 20 HDDs and still record and access the video with six 9s availability.
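The arithmetic behind that claim is straightforward: a k+m erasure-coded layout tolerates the loss of any m shards at a raw-capacity overhead of m/k. A small sketch, using a hypothetical 16+4 layout rather than any specific product’s geometry:

```python
def erasure_profile(data_shards: int, parity_shards: int) -> dict:
    """Basic properties of a k+m erasure-coded layout."""
    total = data_shards + parity_shards
    return {
        "tolerated_failures": parity_shards,       # shards that can be lost
        "usable_fraction": data_shards / total,    # share of raw capacity usable
        "raw_per_usable": total / data_shards,     # raw bytes per usable byte
    }

# A hypothetical 16+4 layout survives 4 lost shards at 25% raw overhead,
# versus the 2-drive limit of a RAID 6 group.
print(erasure_profile(16, 4))
```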
In mission-critical environments such as airports and casinos, operations must continue during storage maintenance. For that reason, it’s important to have a storage solution that allows for maintenance that won’t affect daily operations. To sum it up: designing a storage solution that prioritizes efficiency, scalability, and regulatory compliance is critical for meeting the needs of end users in mission-critical environments.
What about Cloud?
Over the past several years, the trend of implementing a cloud-first strategy has been no secret. However, this strategy has posed numerous challenges for end users. Although ramping up with a third-party cloud provider is simple and fast, and the financing model does not require large capital outlays, over time costs escalate with network access and egress fees. This has surprised many clients and caused them to re-evaluate the model.
The SIA Security Industry Privacy Guide suggests that best practice is not to share security data with a third party. These third-party contracts can also run through yet another third party, such as a hardware manufacturer that holds a third-party cloud contract and resells cloud services. As a result, it becomes challenging to manage risk: control of the data is no longer within the organization, yet the organization still holds liability for the security of the data being collected. This has led many organizations to repatriate to an on-premises or hybrid model for better control of their data.
We are now seeing a shift from a cloud-first to a cloud-smart strategy, where clients are becoming more selective about what they keep on-premises and what they store in a public cloud so that they can maintain control and apply their policies to data volumes more easily. Adopting a cloud-smart strategy allows organizations to balance the benefits of cloud technology with the need for data control and security.
Cyber Security by Design
In today's security landscape, protecting sensitive privacy data and intellectual property is a top priority for most organizations. Unfortunately, we have witnessed high-profile incidents, such as the loss of biometric data for millions of people, resulting in privacy breaches and legislative action against end users. Even with the use of cloud providers, organizations must recognize that they are ultimately responsible for the security of the data they collect and store. Cloud environments are also susceptible to cyber breaches, which can be costly and time-consuming to recover from. Many organizations are still struggling to recover from incidents that occurred a year ago, causing significant operational disruptions and financial losses.
To mitigate the risks associated with cyber threats, today's mission critical architecture should prioritize cyber recovery by design. One effective strategy is to secure data in a cyber recovery vault that is offline and separated from the production cluster through an air gap. This approach allows for rapid recovery, enabling an end user to quickly regain control and resume operations with a clean data set. The recovery of a PB of data can now be achieved within an hour, as opposed to months in the past. By adopting a cyber recovery by design approach, organizations can proactively safeguard their data and minimize the impact of cyber incidents.
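The back-of-envelope arithmetic shows why that recovery-time claim matters: restoring 1 PB (10^15 bytes, decimal) within one hour requires roughly 278 GB/s of sustained throughput. A quick sketch of the calculation:

```python
PB = 10**15  # bytes (decimal petabyte)

def required_throughput_gbps(data_bytes: float, hours: float) -> float:
    """Sustained GB/s needed to restore a data set within a time window."""
    return data_bytes / (hours * 3600) / 10**9

# Restoring 1 PB within one hour implies roughly 278 GB/s sustained.
print(round(required_throughput_gbps(1 * PB, 1), 1))
```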
Video is no longer just a tool for security purposes: it has become a valuable source of metadata that can be leveraged by multiple stakeholders within an organization. In fact, I have seen up to 14 different stakeholders within a single organization look to video metadata for a variety of non-security-related business outcomes. One such example is the retail industry, where video data has been used to enhance the overall customer experience.
As we shift towards using video for data science, new challenges arise for data scientists in achieving meaningful outcomes for their stakeholders. According to recent studies, data scientists spend 79% of their time searching various silos for data they can leverage. This is particularly true in LUN environments, where data silos are numerous and correlating the appropriate camera feed to the disk group can be time-consuming and complex.
To address this challenge, organizations can create a single-volume data lake to store all necessary data. This allows data scientists to quickly map data targets and dramatically reduces the time-to-market for AI projects, making their time more productive and allowing them to focus on actual data science work rather than wrangling with data.
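The practical benefit is that locating a feed becomes a path computation rather than a search across silos. A minimal sketch, with a hypothetical directory layout and camera naming scheme:

```python
from datetime import date

def feed_path(camera_id: str, day: date, root: str = "/datalake/video") -> str:
    """Deterministic single-namespace path for a camera's daily recordings.

    With one volume, finding a feed is a path computation instead of a
    search across LUN-backed silos. Layout and names are hypothetical.
    """
    return f"{root}/{day:%Y/%m/%d}/{camera_id}"

print(feed_path("cam-0042", date(2023, 6, 20)))
```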
To sum up, we’ve discussed the importance of mission-critical architecture for modern security solutions, and highlighted the need for storage efficiency, scalability, and regulatory compliance to meet the needs of end-users in mission-critical environments. We discussed the shift from a cloud-first to a cloud-smart strategy, with a focus on maintaining control and security of data. Cybersecurity is a top priority for organizations, and we stress the need to prioritize cyber recovery by design to minimize the impact of cyber incidents. Finally, we touched upon the use of AI and the challenges of data silos and suggested creating a single-volume data lake to store all necessary data for AI projects, allowing data scientists to focus on actual data science work.
Thanks for reading! To learn more about modern security solutions, see:
Mordekhay Shushan, Solution Architect
Brian St.Onge, Business Development Manager, Video Surveillance
Sharing the Love for GPUs in Machine Learning
Wed, 17 Mar 2021 16:53:14 -0000|
Anyone who works with machine learning models trained by optimization methods like stochastic gradient descent (SGD) knows the power of specialized hardware accelerators for performing the large number of matrix operations needed. Wouldn’t it be great if we all had our own accelerator-dense supercomputers? Unfortunately, the people who manage budgets aren’t approving that plan, so we need to find a workable mix of technology and, yes, the dreaded concept of process to improve our ability to work with hardware accelerators in shared environments.
We have gotten a lot of questions from customers trying to increase the utilization rates of machines with specialized accelerators. The good news is that a lot of big technology companies are working on solutions. The rest of the article focuses on technology from Dell EMC, NVIDIA, and VMware, some of it available today and some coming soon. We also sprinkle in some comments about process that you can consider. Please add your thoughts and questions in the comments section below.
We started this latest round of GPU-as-a-service research with a small amount of kit in the Dell EMC Customer Solutions Center in Austin. We have one Dell EMC PowerEdge R740 with 4 NVIDIA T4 GPUs connected to the system on the PCIe bus. Our research question is “how can a group of data scientists working on different models with different development tools share these four GPUs?” We are going to compare two different technology options:
- VMware Direct Path I/O
- NVIDIA GPU GRID 9.0
Our server has ESXi installed and is configured as a 1 node cluster in vCenter. I’m going to skip the configuration of the host BIOS and ESXi and jump straight to creating VMs. We started off with the Direct Path I/O option. You should review the article “Using GPUs with Virtual Machines on vSphere – Part 2: VMDirectPath I/O” from VMware before trying this at home. It has a lot of details that we won’t repeat here.
There are many approaches available for virtual machine image management that can be set up by the VMware administrators but for this project, we are assuming that our data scientists are building and maintaining the images they use. Our scenario is to show how a group of Python users can have one image and the R users can have another image that both use GPUs when needed. Both groups are using primarily TensorFlow and Keras.
Before installing an OS we changed the firmware setting to EFI in the VM Boot Options menu per the article above. We also used the VM options to assign one physical GPU to the VM using Direct Path I/O before proceeding with any software installs. It is important for there to be a device present during configuration even though the VM may get used later with or without an assigned GPU to facilitate sharing among users and/or teams.
Once the OS was installed and configured with user accounts and updates, we installed the NVIDIA GPU related software and made two clones of that image since both the R and Python environment setups need the same supporting libraries and drivers to use the GPUs when added to the VM through Direct Path I/O. Having the base image with an OS plus NVIDIA libraries saves a lot of time if you want a new type of developer environment.
With this much of the setup done, we can start testing assigning and removing GPU devices between our two VMs. We use VM options to add and remove the devices, but only while the VM is powered off. For example, we can assign two GPUs to each VM, four GPUs to one VM and none to the other, or any other combination that doesn’t exceed our four available devices. Devices currently assigned to other VMs are not available in the UI for assignment, so it is not possible to create conflicts between VMs. We can use NVIDIA’s System Management Interface (nvidia-smi) to list the devices available on each VM.
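For example, `nvidia-smi -L` prints one line per visible device, which makes it easy to confirm how many GPUs a VM currently has. A small sketch that parses a sample of that output (the sample lines and UUIDs below are fabricated for illustration; in practice you would capture the command’s real output, for example with subprocess):

```python
# Fabricated sample of `nvidia-smi -L` output for a VM with two T4s assigned.
SAMPLE = """\
GPU 0: Tesla T4 (UUID: GPU-11111111-2222-3333-4444-555555555555)
GPU 1: Tesla T4 (UUID: GPU-66666666-7777-8888-9999-000000000000)
"""

def count_gpus(nvidia_smi_l_output: str) -> int:
    """Count devices in `nvidia-smi -L` output; 0 means no GPU is assigned to this VM."""
    return sum(1 for line in nvidia_smi_l_output.splitlines() if line.startswith("GPU "))

print(count_gpus(SAMPLE))
```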
Remember when we talked about process above? Here is where we need to revisit that. The only way a setup like this works is if people release GPUs from VMs when they don’t need them. Going a level deeper, there will probably be times when one user or group could take advantage of a GPU but chooses not to take one so that other, potentially more critical work can have it. This type of resource sharing is not new to research and development. All useful resources are scarce, and a lot of efficiency can be gained with the right technology, process, and attitude.
Before we talk about installing the developer frameworks and libraries, let’s review the outcome we want. We have two or more groups of developers that could benefit from GPUs at different times in their workflow, but not always. They would like to minimize the number of VM images they maintain and would also like fewer versions of code to maintain, even when switching between tasks that may or may not have access to GPUs at run time. We talked above about switching GPUs between machines, but what happens on the software side? Next, we’ll talk about some TensorFlow properties that make this easier.
TensorFlow comes in two main flavors for installation: tensorflow and tensorflow-gpu. The first one should probably be called “tensorflow-cpu” for clarity. For this work, we are only installing the GPU-enabled version, since we want our VMs to be able to use a GPU for any operations that TF supports on GPU devices. The reason we don’t also need the CPU version when a VM has no GPU assigned is that many operations in the GPU-enabled version of TF have both a CPU and a GPU implementation. When an operation is run without a specific device assignment, any available GPU device is given priority in the placement. When the VM does not have a GPU device available, the operation uses the CPU implementation.
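That fallback behavior is what lets one image serve both cases. The toy function below mimics the placement priority just described; it is a stand-in for illustration, not TensorFlow’s actual placement code:

```python
def place_operation(available_gpus: int) -> str:
    """Mimic TF's default placement: prefer a GPU kernel, else fall back to CPU."""
    return "/device:GPU:0" if available_gpus > 0 else "/device:CPU:0"

# The same application code runs whether or not the VM currently has a GPU assigned.
print(place_operation(4))  # VM with GPUs attached via Direct Path I/O
print(place_operation(0))  # same VM after the GPUs are released
```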
There are many examples online for testing whether you have a properly configured system with a functioning GPU device. A simple matrix multiplication sample is a good starting point. Once that is working, you can move on to full-blown model training with a sample data set like the MNIST character recognition model. Try setting up a sandbox environment using this article and the VMware blog series above. Then get some experience allocating and deallocating GPUs to VMs, and prove that things are working with a small app. If you have any questions or comments, post them in the feedback section below.
Thanks for reading.
Phil Hummel - Twitter @GotDisk