Outside surveillance is being improved further through advances in location technology, such as geotracking.
Paul Brewer, cofounder and vice president of technology at ObjectVideo, says these geographic tools provide a new approach to analytics. “If we have a camera, and we tell it its place in the world, now we have the ability to look at an area or street corner, so that we can make the data...searchable.... It’s not just camera number 32 anymore, it’s camera at the corner of Constitution and 23rd street, for example. And there are a number of ways we exploit that. But I think you’re going to start to see the GIS [geographic information system] data become more important in this world.”
Brewer adds that geographic information can also reduce the amount of data transmitted to the back end. That's because the system can represent detected objects as icons on a map, so the user isn't sent every pixel over the network.
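As a rough sketch of what Brewer describes, the snippet below indexes cameras by a human-readable location rather than only a camera number, and emits a compact map-icon event instead of raw video. The camera registry, coordinates, and event format here are illustrative assumptions, not ObjectVideo's actual schema:

```python
import json
from dataclasses import dataclass

@dataclass
class Camera:
    cam_id: int
    corner: str   # human-readable GIS label, e.g. an intersection
    lat: float
    lon: float

# hypothetical registry: cameras searchable by where they are,
# not just "camera number 32"
cameras = [
    Camera(32, "Constitution Ave & 23rd St", 38.8922, -77.0503),
]

def find_by_corner(name):
    """Search camera feeds by location instead of camera number."""
    return [c for c in cameras if name.lower() in c.corner.lower()]

def detection_event(cam, label):
    """A small metadata record sent to the back end as a map icon,
    instead of streaming every pixel over the network."""
    return json.dumps({"cam": cam.cam_id, "lat": cam.lat,
                       "lon": cam.lon, "label": label})
```

Querying `find_by_corner("constitution")` returns camera 32, and a person detection becomes a few hundred bytes of JSON rather than a video stream.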
Intelligent video analytics added to outdoor surveillance cameras are also improving as the algorithms continue to develop. The technology was perhaps overpromoted in the last decade, but it is now helping analytics providers make good on their promises. Another positive change is that providers have learned to promise less and be more realistic. And in addition to better algorithms, other technological advances are improving surveillance and analytics options outdoors.
GeoRegistering. SightLogix offers what the company calls geoRegistering. According to John Romanowich, CEO of SightLogix, this means that the software uses geometry by considering the height and angle of the camera in interpreting the scene; the camera knows the location of each object in the scene. “In knowing that, we can actually infer its size very precisely. And by inferring its size very precisely, we can provide very accurate filters based upon size,” says Romanowich.
“So, for example, human beings are usually somewhere between three feet and seven feet tall. So if you were to say, ‘well, it’s not likely that the human being is going to be less than a foot tall’…we put a filter that says anything smaller than a foot, we’ll ignore, which means that we’ve now eliminated 90 percent of the small animals that are likely to be a problem. You might say, ‘Well, gee, what about the bigger animals?’ and my comment about the bigger animals is, you’re kind of stuck with detecting them…. You better let the person watching for awhile alert them to it and let them decide intelligently whether or not they think that’s an animal or a potential likely intruder.”
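The size filtering Romanowich describes can be approximated with simple pinhole-camera geometry: if the camera's mounting height, tilt, and focal length are known, an object's real-world height can be inferred from its pixel height and its distance along the ground plane. The sketch below is an illustrative simplification (the mounting parameters are made-up values, and SightLogix's actual geoRegistration is more sophisticated):

```python
import math

def ground_distance(cam_height_ft, tilt_deg, f_px, foot_y, cy):
    """Distance along the ground to the pixel where the object
    touches the ground, for a camera mounted cam_height_ft up,
    tilted tilt_deg below horizontal (flat-ground assumption)."""
    angle = math.radians(tilt_deg) + math.atan2(foot_y - cy, f_px)
    return cam_height_ft / math.tan(angle)

def real_height_ft(pixel_height, distance_ft, f_px):
    """Pinhole model: real size scales with distance / focal length."""
    return pixel_height * distance_ft / f_px

def keep_detection(pixel_height, foot_y, cam_height_ft=20.0,
                   tilt_deg=30.0, f_px=1000.0, cy=540.0,
                   min_height_ft=1.0):
    """Romanowich's filter: ignore anything under a foot tall,
    which screens out most small animals."""
    d = ground_distance(cam_height_ft, tilt_deg, f_px, foot_y, cy)
    return real_height_ft(pixel_height, d, f_px) >= min_height_ft
```

With these example parameters, a 100-pixel-tall object at image center works out to roughly 3.5 feet tall and is kept as a possible person, while a 20-pixel-tall object at the same spot is under a foot and is ignored.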
Edge processing. Edge processing means processing video analytics and other data in the camera rather than only after it is sent over the network to a central server. There are pros and cons to doing this, and it is not optimal for all analytics applications.
One factor is that it adds to the cost of the camera. However, many of the sources interviewed advocate edge processing because it allows the analytics to work on the highest quality picture (rather than a compressed version sent over the network). Analytics are often only as good as the video they are using. Edge processing also saves on bandwidth and storage costs.
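The bandwidth argument can be made concrete with back-of-envelope arithmetic. The bit rate and event sizes below are illustrative assumptions, not figures from any vendor:

```python
# Illustrative comparison: continuously streaming compressed video
# to a central server vs. an edge camera that analyzes locally and
# pushes only small detection records.

STREAM_BPS = 4_000_000        # assumed 1080p stream at 4 Mbps
SECONDS_PER_DAY = 86_400

# streaming everything: bytes/sec * seconds, in gigabytes
stream_gb = STREAM_BPS / 8 * SECONDS_PER_DAY / 1e9   # ~43.2 GB/day

EVENTS_PER_DAY = 200          # assumed alarms pushed from the edge
EVENT_BYTES = 500             # small JSON metadata record each

edge_mb = EVENTS_PER_DAY * EVENT_BYTES / 1e6          # ~0.1 MB/day
```

Under these assumptions the edge camera sends a fraction of a megabyte per day instead of tens of gigabytes, while its analytics still run on the full-quality, uncompressed picture.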
Self-calibration. One of the original problems with analytics was the time it took the end user to program the system to recognize what was a normal state of affairs and what should cause an alarm. Newer systems can learn the scene on their own and don’t need all the rules programmed into them by an integrator or security manager. This capability is called self-calibration.
This feature also allows the system to self-correct. Thus, if the weather shifts the camera slightly, a technician doesn't have to come back out and recalibrate it.
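A minimal self-calibrating background model can be sketched with an exponential moving average: the model slowly absorbs gradual scene changes (including a small permanent camera shift) while still flagging sudden ones. This is a toy illustration of the idea, not any vendor's algorithm:

```python
def update_background(bg, frame, alpha=0.05):
    """Blend the new frame into the learned background.
    Gradual changes (lighting, a nudged camera) get absorbed;
    no one has to reprogram the 'normal' scene by hand."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30.0):
    """Pixels that differ sharply from the learned background
    are candidate alarm activity."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]
```

A sudden bright object trips the mask immediately, but if the same change persists frame after frame, the background model learns it and the false alarm disappears on its own.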
Electronic stabilization. Another issue with outdoor surveillance is that wind and the other elements can cause camera vibration. That can confuse analytics that are trying to pick up motion.
To combat this issue, SightLogix electronically stabilizes its cameras. “By electronically stabilizing, you now don’t have to tune back the sensitivity. You can electronically stabilize and get all that information directly from the image of targets that are moving within the scene,” says Romanowich.
The reason the SightLogix cameras can electronically stabilize is that they have very high processing power on the cameras. “The processors are so powerful and so fast that they are looking at every single pixel of every frame, and they are studying the global motion of the entire image. Is it rotating? Is it moving up? Is it moving down? And it can basically then stitch it all together in real time in what they call the optical flow,” says Romanowich. From frame to frame it can actually line up the images, he notes. “It would be almost like if you had a deck of cards, and you were to throw them down, they’d be all scattered apart, but if you grab them all and...put them all together, and line them back up with your hands, it’s effectively the same idea,” he explains.
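The global-motion estimation Romanowich describes can be illustrated in one dimension: brute-force search for the shift that best aligns two frames, then shift the new frame back into registration. This toy sketch operates on a single scan line of pixel intensities; real stabilization works on full 2-D frames and also handles rotation:

```python
def estimate_shift(prev, cur, max_shift=3):
    """Find the global shift (in pixels) that best lines up
    cur with prev, by minimizing mean squared error over the
    overlapping region -- a crude 1-D 'optical flow' estimate."""
    best_s, best_err = 0, float("inf")
    n = len(prev)
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, -s), min(n, n - s)
        err = sum((prev[i] - cur[i + s]) ** 2
                  for i in range(lo, hi)) / (hi - lo)
        if err < best_err:
            best_err, best_s = err, s
    return best_s

def stabilize(prev, cur, max_shift=3):
    """Shift cur back by the estimated global motion so the two
    frames line up -- like restacking a scattered deck of cards.
    Edge pixels are repeated where data slides out of frame."""
    s = estimate_shift(prev, cur, max_shift)
    n = len(cur)
    return [cur[min(max(i + s, 0), n - 1)] for i in range(n)]
```

Given a frame that has been jostled one pixel sideways by wind, the estimator recovers the shift and the stabilized frame matches the previous one, so motion analytics see only the targets that actually moved.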
It does cost more to have very high processing power on a camera. Romanowich says the SightLogix cameras have the equivalent of five servers' worth of processing power.
Similarly, cloud movement, snow, and rain can make the analytics think there are changes in the scene, which can make it hard to detect a person amid those changes. VideoIQ deals with this issue by including about 250,000 algorithms describing what people look like, so that the technology can identify an individual in the frame.