The shift from analog to digital has been difficult for voice and video. In the last 30 years, the communications industry has focused on the process of acquiring raw data, compressing it and sending it across the globe via the internet.
More recently, the focus has shifted from pure delivery -- simply connecting remote people in a meaningful conversation -- toward improving the effectiveness of people in a meeting or removing people from the conversation altogether.
Video was the last communications medium to address this shift because it contains more data that needs to be processed in order to gain insight. Now, AI has entered the video space with smart cameras that glean insights into the effectiveness of meetings and collaboration.
Today, AI in cameras falls into two categories. The first is general-purpose cameras built for IoT. The second is cameras purpose-built for video meetings. Many enterprises will end up adopting and deploying both camera types.
Enterprises will use cloud-based smart cameras for specific automation projects, such as automating warehouse work or understanding buyer behavior in stores. Organizations will acquire smart video meeting cameras once they select a video conferencing provider and complementary hardware.
Bringing AI in cameras to the meeting room
Conferencing hardware manufacturers have made a shift toward AI in cameras. These types of cameras, however, are not as open as the general-purpose smart cameras. Instead, they focus on a specific use case: conducting remote conferences from meeting rooms.
The baseline capability of these cameras is identifying people in a room. But many AI-driven capabilities can be baked into a camera when the need arises, such as counting people, identifying objects, blurring or replacing the background, automatically framing participants throughout a session and focusing on a room's whiteboard.
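To make the automatic framing capability concrete, here is a minimal sketch of the kind of logic involved. This is illustrative only -- features such as Logitech's RightSense or Huddly's Genius Framing are proprietary, and the function below is a hypothetical simplification: given bounding boxes for each person the camera has detected, it computes one crop rectangle that frames everyone with a margin.

```python
# Hypothetical auto-framing logic (not any vendor's actual implementation).
# Each detected person is a bounding box (x, y, w, h) in pixel coordinates.

def frame_people(boxes, frame_w, frame_h, margin=0.1):
    """Return an (x, y, w, h) crop covering all person boxes plus a margin."""
    if not boxes:
        return (0, 0, frame_w, frame_h)  # no one detected: show the whole room
    left = min(x for x, y, w, h in boxes)
    top = min(y for x, y, w, h in boxes)
    right = max(x + w for x, y, w, h in boxes)
    bottom = max(y + h for x, y, w, h in boxes)
    # Pad the tight bounding region by a fraction of its size,
    # clamped to the edges of the full frame.
    pad_x = int((right - left) * margin)
    pad_y = int((bottom - top) * margin)
    x0 = max(0, left - pad_x)
    y0 = max(0, top - pad_y)
    x1 = min(frame_w, right + pad_x)
    y1 = min(frame_h, bottom + pad_y)
    return (x0, y0, x1 - x0, y1 - y0)

# Two people seated on the left side of a 1080p frame:
people = [(200, 300, 120, 260), (420, 280, 130, 280)]
print(frame_people(people, 1920, 1080))  # a crop around the pair, not the room
```

A real camera would recompute this crop as people move and smooth the transitions, which is why the framing appears to follow the conversation.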
Some vendors are starting to introduce these capabilities into their video room systems. Sometimes, they're added through software running locally or in the cloud. Other times, these features are integrated into the camera itself.
Logitech offers an automatic zoom capability under the name RightSense. Automatic zoom enables the camera to zoom in and out on areas of interest based on the location of people inside the room. Dolby offers the same capability in its Dolby Voice Room.
Huddly's IQ camera offers a similar capability with its Genius Framing feature. The camera also includes AI-driven analytics, such as counting people in the room, for planning and understanding the use of the various meeting rooms within the enterprise.
It's worth noting that vendors that build conferencing hardware and services often resell their devices through partners as well.
Testing use cases with cloud-based smart cameras
In many cases, it makes sense to run inference -- the process by which a trained machine learning model categorizes new data -- close to the camera input to reduce the amount of data sent over the network and maintain better privacy. In the last two years, major cloud vendors have developed smart camera strategies built around the needs of developers and tied closely to their respective cloud-based machine learning tools.
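The bandwidth and privacy argument for edge inference can be sketched in a few lines. In this hypothetical example (not any vendor's API), an on-camera model has already produced a list of detections; instead of uploading the raw frame, the device sends only a compact JSON summary, so the footage itself never leaves the camera.

```python
# Illustrative sketch of edge inference: send a per-frame summary, not the frame.
import json

FRAME_BYTES = 1920 * 1080 * 3  # one uncompressed 1080p RGB frame

def summarize(detections):
    """Build the compact payload an edge camera might upload to the cloud.

    `detections` is a hypothetical list of (label, confidence) pairs produced
    by an on-camera model. Only counts and labels leave the device.
    """
    payload = {
        "people": sum(1 for label, _ in detections if label == "person"),
        "objects": sorted({label for label, _ in detections}),
    }
    return json.dumps(payload).encode("utf-8")

detections = [("person", 0.93), ("person", 0.88), ("whiteboard", 0.71)]
summary = summarize(detections)
print(len(summary), "bytes instead of", FRAME_BYTES)
```

The summary is a few dozen bytes versus roughly 6 MB for the raw frame, which is the core of the case for running the model next to the sensor.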
In November 2017, Amazon introduced AWS DeepLens, which combines a camera with an Intel CPU. A developer can place a machine learning model on the camera and have it run inference on the camera's data feed.
Google took a different route for AI in cameras and introduced do-it-yourself AI kits called AIY. One of the first kits was Vision Kit, which uses machine learning and enables users to build an intelligent camera based on Raspberry Pi. The intent is to let creators experiment and prototype rapidly with vision-related machine learning projects, while connected to Google Cloud.
Earlier this year, Microsoft introduced Azure Kinect. Taking the technology from its Xbox Kinect gaming camera, Microsoft created a device capable of capturing vision, depth and sound with great accuracy. Azure Kinect targets cloud developers who can develop and deploy their machine learning vision products on the Microsoft device, while making use of Azure cloud infrastructure.