Looking to the Sky
Carnegie Mellon University is leading a research effort sponsored by Intel Corp. that will enable cloud-based services to process a rapidly increasing volume of online video and put new analytics and immersive technologies within reach of consumers, businesses and public officials.
The Intel Science and Technology Center (ISTC) for Visual Cloud Systems, now underway, is tapping Carnegie Mellon expertise in computer vision, storage systems and databases, and networking. The goal of the $4.125 million project, supported by Intel over the next three years, is to accelerate large-scale development and adoption of cloud-based visual computing systems.
This work will enable systems to handle the rapidly increasing amount of video content generated by Internet of Things (IoT) devices such as online cameras and drones, as well as by content creators and broadcasters.
In addition to its technical expertise in visual cloud systems computing and architectures, Intel provides technologies including Intel® Xeon® processors, edge devices, and imaging and camera technology.
“Online video is one of the richest data sources we have,” said David Andersen, associate professor of computer science and co-director of the new center with Kayvon Fatahalian, assistant professor of computer science. “Amazing progress is being made at analyzing and searching images, thanks to deep learning, but the approaches that work so well in still images don’t scale to video. Unlocking the immense amount of information that now goes unused will be a major focus of our work.”
“Intel and Carnegie Mellon University are extending our historic collaborations in cloud computing to tackle the challenges and opportunities these applications bring to the worldwide infrastructure of content creation, content distribution and video analytics,” said Jim Blakley, general manager of Intel Corporation’s Visual Cloud Division. “Intel’s investment and collaboration with Carnegie Mellon should accelerate our understanding of these big challenges, and bring more breakthrough ideas to light.”
The sheer volume of video being uploaded to the web is daunting. YouTube alone processes 300 hours of video uploads every minute and Twitch, a live-streaming platform, hosts 1.7 million at-home video broadcasters per month. By 2030, Fatahalian said, nearly all vehicles will be collecting video streams, and at least a billion security cameras will be linked to the web, generating tens of billions of images each second across the world.
“By 2030, just processing online video would require the equivalent of the planet’s entire power budget,” Andersen said.
The ISTC for Visual Cloud Systems will focus on developing new system architectures and data processing techniques optimized for these data- and bandwidth-intensive workloads. Much of the video will never be seen by human eyes, so researchers are developing methods to analyze these video streams at scale with intelligent computing systems, and to index and store videos in such a way that they can be readily searched as new questions and concerns arise.
“For instance, someone might decide to count the number of cyclists with and without helmets,” Andersen said. “We want to make it possible to go back in time, to review years of traffic camera input, to do this sort of ad hoc analytics.”
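The kind of retrospective, ad hoc query Andersen describes can be sketched in a few lines, assuming the archive has already been indexed with per-frame detection labels (the record layout, label names, and helper function here are all hypothetical illustrations, not the center's actual system):

```python
from collections import namedtuple

# Hypothetical record for one indexed frame: a timestamp plus the set of
# labels a vision model assigned to it at ingest time.
Frame = namedtuple("Frame", ["timestamp", "labels"])

def count_cyclists(archive, with_helmet):
    """Count archived frames containing a cyclist, split by helmet use.

    `archive` is any iterable of Frame records; in a real system it would
    be a queryable index spanning years of traffic-camera footage.
    """
    wanted = "cyclist_helmet" if with_helmet else "cyclist_no_helmet"
    return sum(1 for frame in archive if wanted in frame.labels)

# Toy stand-in for an indexed video archive.
archive = [
    Frame(0, {"car"}),
    Frame(1, {"cyclist_helmet"}),
    Frame(2, {"cyclist_no_helmet", "car"}),
    Frame(3, {"cyclist_helmet"}),
]

print(count_cyclists(archive, with_helmet=True))   # 2
print(count_cyclists(archive, with_helmet=False))  # 1
```

The hard research problems are hidden inside the two assumptions this sketch makes: producing reliable labels from video at scale, and storing the index so a question no one anticipated can still be answered years later.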
Such advances also will drive new user experiences that are important across a number of industries. These visual computing applications include virtual reality, augmented reality, 3-D scene understanding and immersive live experiences powered by data from billions of connected IoT devices.
Andersen and Fatahalian are among 10 researchers from Carnegie Mellon's School of Computer Science and College of Engineering participating in this ISTC. Pat Hanrahan, professor of computer science and electrical engineering at Stanford University, is also part of this ISTC.
The visual cloud systems research is part of the Intel Science and Technology Center program that began in 2010 to serve academic partners and power innovative research across a wide range of technology initiatives such as Cloud Computing, Big Data and Internet of Things. Results of the work will be available to the public and will help drive the implementation of visual cloud solutions on a broad scale.