Seeing Eye To AI – How Smart Video Is Shaping The Edge
The evolution of smart video technology continues at pace. As in many other industries, the onset of the COVID-19 pandemic expedited timelines and the artificial intelligence (AI) video world is continuing its rapid evolution in 2021.
As video demand and the use of AI to make sense of the visual data increase, the number of cameras and the volume of data they produce are growing rapidly, forcing the creation of new edge architectures.
Cameras and AI in traffic management
In addition, a new generation of ‘smart’ use cases has developed. For example, in ‘smart cities’, cameras and AI analyze traffic patterns and adjust traffic lights, in order to improve vehicle flow, reduce congestion and pollution, and increase pedestrian safety.
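To make the traffic use case concrete, here is a minimal sketch of how detected vehicle counts could feed back into signal timing. All names, thresholds, and the seconds-per-vehicle figure are illustrative assumptions, not any city's actual control logic.

```python
# Hypothetical sketch: extend a signal's green phase in proportion to
# the vehicle count an AI detector reports for an approach.
MIN_GREEN_S = 15   # assumed minimum green time, seconds
MAX_GREEN_S = 60   # assumed maximum green time, seconds

def green_time(vehicle_count: int, seconds_per_vehicle: float = 2.0) -> float:
    """Scale green time with detected demand, clamped to safe bounds."""
    proposed = MIN_GREEN_S + vehicle_count * seconds_per_vehicle
    return min(MAX_GREEN_S, max(MIN_GREEN_S, proposed))

print(green_time(0))    # light demand -> minimum green (15)
print(green_time(30))   # heavy demand -> capped at maximum (60)
```

Clamping to fixed minimum and maximum phases keeps the adaptive behavior within safety limits even when the detector misreads the scene.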
‘Smart factories’ can leverage AI to detect flaws or deviations in the production line in real time, adjusting to reduce errors and implement effective quality assurance measures. As a result, costs can be greatly reduced through automation and earlier fault detection.
Evolution of smart video
The evolution of smart video is also happening alongside other technological and data infrastructure advancements, such as 5G. As these technologies come together, they’re impacting how we architect the edge. And, they’re driving a demand for specialized storage.
Listed below are some of the biggest trends that we’re seeing:
Greater volume means greater quality
The volume and variety of cameras continue to increase with each new advancement, bringing new capabilities. Having more cameras allows more to be seen and captured. This could mean more coverage or more angles. It also means more real-time video can be captured and used to train AI.
Quality also continues to improve with higher resolutions (4K video and above). The more detailed the video, the more insights can be extracted from it. And, the more effective the AI algorithms can become. In addition, new cameras transmit not just one video stream, but also additional low-bitrate streams used for low-bandwidth monitoring and AI pattern matching.
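A consumer of these multiple streams has to decide which one to pull. The sketch below shows one simple policy: choose the highest-bitrate stream that fits the available link. The stream names and bitrates are illustrative assumptions, not any vendor's actual profiles.

```python
# Hypothetical stream table: one full-detail main stream plus
# low-bitrate substreams for monitoring and AI pattern matching.
STREAMS = {
    "main_4k": 25_000,       # kbps, full-detail stream for recording
    "sub_hd": 4_000,         # kbps, mid-quality stream for live viewing
    "sub_analytics": 512,    # kbps, low-bitrate stream for AI matching
}

def pick_stream(available_kbps: int) -> str:
    """Choose the highest-bitrate stream that fits the link."""
    fitting = {name: bps for name, bps in STREAMS.items()
               if bps <= available_kbps}
    if not fitting:
        raise ValueError("link too slow for any stream")
    return max(fitting, key=fitting.get)

print(pick_stream(30_000))  # fast link -> "main_4k"
print(pick_stream(1_000))   # slow link -> "sub_analytics"
```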
Smart cameras operate 24/7
Whether used for traffic management, security or manufacturing, many of these smart cameras operate 24/7, 365 days a year, which poses a unique challenge. Storage technology must be able to keep up.
For one thing, storage has evolved to deliver the high transfer and sustained write speeds needed to ensure high-quality video capture. And, on-camera storage technology must deliver the longevity and reliability that are critical to any workflow.
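Some quick arithmetic shows why continuous recording stresses storage. The 16 Mbps figure below is an assumed bitrate for a compressed 4K stream, chosen for illustration rather than taken from any camera's specification.

```python
# Back-of-envelope: storage consumed by one camera recording 24/7.
BITRATE_MBPS = 16                 # assumed compressed 4K bitrate
SECONDS_PER_DAY = 24 * 60 * 60

gb_per_day = BITRATE_MBPS * SECONDS_PER_DAY / 8 / 1000  # Mb -> MB -> GB
print(f"{gb_per_day:.0f} GB/day per camera")            # ~173 GB/day
print(f"{gb_per_day * 30 / 1000:.1f} TB for 30 days")   # ~5.2 TB
```

At roughly 173 GB per camera per day, even a modest installation of a few dozen cameras fills multiple terabytes each week, which is why sustained write performance and endurance matter as much as raw capacity.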
Real world context is vital to understanding endpoints
Whether used for business, in scientific research or in our personal lives, we’re seeing new types of cameras that can capture new types of data. With the potential benefits of utilizing and analyzing this data, the importance of reliable data storage has never been more apparent.
Considering context when designing storage technology
As we design storage technology, we must take context into consideration, such as location and form factor. We need to think about the accessibility of cameras (or lack thereof): are they atop a tall building, or perhaps deep in a remote jungle?
Such locations might also need to withstand extreme temperature variations. All of these possibilities need to be taken into account, to ensure long-lasting, reliable, continuous recording of critical video data.
Chipsets are improving artificial intelligence (AI) capability
Improved compute capabilities in cameras mean processing happens at the device level, enabling real-time decisions at the edge.
We’re seeing new chipsets arrive for cameras that deliver improved AI capability, and more advanced chipsets add deep neural network processing for on-camera deep learning analytics. AI keeps getting smarter and more capable.
Cloud must support deep learning technology
Even as camera and recorder chipsets gain more compute power, most of the video analytics and deep learning in today’s smart video solutions is still done with discrete video analytics appliances or in the Cloud. To support these new AI workloads, the Cloud has gone through a transformation of its own, with neural network processing moving to massive GPU clusters or custom FPGAs.
These systems are being fed thousands of hours of training video, and petabytes of data. Such workloads depend on high-capacity enterprise-class hard drives (HDDs), which can already support 20TB per drive, as well as high-performance enterprise SSD flash devices, platforms, or arrays.
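To put the petabyte-scale figure against the 20TB drive capacity mentioned above, here is a rough sizing calculation. The corpus size and replication factor are illustrative assumptions; only the 20TB per-drive figure comes from the text.

```python
import math

# Back-of-envelope: 20TB enterprise HDDs needed for a petabyte-scale
# training corpus. Corpus size and replication are assumed values.
CORPUS_PB = 2        # assumed corpus size, petabytes
DRIVE_TB = 20        # per-drive capacity cited above
REPLICATION = 3      # assumed replication factor for durability

raw_tb = CORPUS_PB * 1000 * REPLICATION
drives = math.ceil(raw_tb / DRIVE_TB)
print(f"{raw_tb} TB raw -> {drives} drives")  # 6000 TB -> 300 drives
```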
Reliance on the network
Wired and wireless internet have enabled the scalability and ease of installation that have fueled the explosive adoption of security cameras, but they could only do so where LAN and WAN infrastructures already exist.
5G technology aids camera installations
5G removes many barriers to deployment, allowing expansive options for placement and ease of installation of cameras at a metropolitan level. With this ease of deployment comes greater scalability, which drives new use cases and further advancements in both camera and cloud design. For example, cameras can now be standalone, with direct connectivity to a centralized cloud, as they’re no longer dependent on a local network.
Emerging cameras that are 5G-ready are being designed to load and run third-party applications, which can bring broader capabilities. Yet with greater autonomy, these cameras will need even more dynamic storage: new combinations of endurance, capacity, performance, and power efficiency to optimally handle the variability of new app-driven functions.
Paving the way for the edge storage revolution
It’s a brave new world for smart video, and it is as complex as it is exciting. Architectural changes are being made to handle new workloads and prepare for even more dynamic capabilities at the edge and at endpoints. At the same time, deep learning analytics continue to evolve at the back end and in the Cloud.
Understanding workload changes, whether at the camera, recorder, or the Cloud level, is critical to ensuring that new architectural changes are augmented by continuous innovation in storage technology.