A group of academics has designed a new system known as "Privid" that enables video analytics in a privacy-preserving manner, addressing concerns about invasive tracking.
"We're at a stage right now where cameras are practically ubiquitous. If there's a camera on every street corner, every place you go, and if someone could actually process all of those videos in aggregate, you can imagine that entity building a very precise timeline of when and where a person has gone," Frank Cangialosi, the lead author of the study and a researcher at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), said in a statement.
"People are already worried about location privacy with GPS — video data in aggregate could capture not only your location history, but also moods, behaviors, and more at each location," Cangialosi added.
Privid is built on the foundation of differential privacy, a statistical technique that makes it possible to collect and share aggregate information about users, while safeguarding individual privacy.
This is achieved by adding random noise to the results to thwart re-identification attacks. The amount of noise is a trade-off: adding more noise makes the data more anonymous, but it also makes the data less useful. How much noise to add is governed by a privacy budget, which must be set high enough to keep the results accurate yet low enough to prevent data leakage.
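As a rough illustration, the Laplace mechanism is the textbook way to implement this kind of noise addition (the paper's exact mechanism is not detailed here, so treat this as a generic sketch). The `epsilon` parameter plays the role of the privacy budget:

```python
import math
import random

def laplace_sample(scale: float) -> float:
    # Inverse-CDF sampling from a zero-mean Laplace distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    # A smaller epsilon (a tighter privacy budget) means a larger noise
    # scale: more anonymity, but a less accurate released value.
    return true_count + laplace_sample(sensitivity / epsilon)
```

For example, releasing a pedestrian count of 100 with `epsilon=1.0` typically lands within a few units of the truth, while `epsilon=0.1` spreads the released value over tens of units, illustrating the trade-off described above.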
The querying framework involves an approach called "duration-based privacy," wherein the target video is chopped temporally into chunks of equal duration, each of which is fed separately into the analyst's video processing module to produce the "noisy" aggregate result.
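A minimal sketch of that duration-based pipeline, assuming a video is simply a sequence of per-frame labels and `analyst_module` is any user-supplied function mapping a chunk to a number (both names are illustrative, not Privid's actual API):

```python
import math
import random

def laplace_sample(scale: float) -> float:
    # Inverse-CDF sampling from a zero-mean Laplace distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_query(frames, analyst_module, chunk_len, epsilon, sensitivity):
    # 1. Chop the video temporally into chunks of equal duration.
    chunks = [frames[i:i + chunk_len] for i in range(0, len(frames), chunk_len)]
    # 2. Feed each chunk separately into the analyst's processing module.
    per_chunk = [analyst_module(chunk) for chunk in chunks]
    # 3. Release only the noisy aggregate of the per-chunk results.
    return sum(per_chunk) + laplace_sample(sensitivity / epsilon)
```

An analyst counting people, for instance, might call `private_query(frames, lambda c: c.count("person"), chunk_len=2, epsilon=1.0, sensitivity=1.0)` and receive only the noised total, never per-chunk values.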
The underlying idea is that adding specialized types of noise to the data or analysis methods prevents relevant parties from identifying an individual, while not obscuring the societal patterns that emerge from analyzing the video inputs, such as counting the number of people who passed by a camera in one day, or computing the average speed of cars observed.
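For a query like average vehicle speed, a common differential-privacy idiom (an assumption here, not necessarily Privid's exact approach) is to clip each observation so no single vehicle can shift the sum by more than a bounded amount, then calibrate the noise to that bound; the number of observations is treated as public for simplicity:

```python
import math
import random

def laplace_sample(scale: float) -> float:
    # Inverse-CDF sampling from a zero-mean Laplace distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_average_speed(speeds, epsilon, lo=0.0, hi=120.0):
    # Clip each speed to [lo, hi] so one vehicle's contribution to the
    # sum is bounded by (hi - lo); that bound is the query's sensitivity.
    clipped = [min(max(s, lo), hi) for s in speeds]
    noisy_sum = sum(clipped) + laplace_sample((hi - lo) / epsilon)
    # Dividing by the (assumed public) count yields the noisy average.
    return noisy_sum / len(clipped)
```

Clipping is what makes the noise scale independent of any one outlier: an unusually fast car changes the clipped sum by at most `hi - lo`, so its presence or absence stays hidden in the noise.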
This also prevents a malicious actor from singling out specific individuals and determining their presence (or lack thereof) in the videos.
"In building Privid, we do not advocate for the increase of public video surveillance and analysis. Instead, we observe that it is already prevalent, and is driven by strong economic and public safety incentives," the researchers concluded.
"Consequently, it is undeniable that the analysis of public video will continue, and thus, it is paramount that we provide tools to improve the privacy landscape for such analytics."