From Automation to Integration: The Evolution of Digital Signage


In the early days of digital signage, most projects used two pieces of software. Assets like images and videos were generated in creative tools, often from Adobe. These assets were then loaded into a CMS that would automatically present them on designated displays at designated times. The only form of interoperability was file compatibility between the creative tools and the CMS, typically achieved by using industry-standard file formats such as JPEG for still images and MP4 for video.

Today, the digital signage ecosystem has become more complex. Within it, new tools like analytics need to work with the CMS both to provide decision support to management and to select media in real time based on considerations like weather, location, and audience characteristics. In addition, today's digital signage systems may be called on to work with other systems across the enterprise. For example, an inventory system may direct the CMS to overweight ads promoting products that are selling more slowly than projected and sitting in excess inventory. While this is all based on simple business concepts, getting system components from different vendors to interoperate can be a challenge.
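A minimal sketch of how such an inventory-driven rule might look, in Python. The field names, the sample SKU, and the weighting cap are illustrative assumptions, not any particular CMS's or inventory system's actual schema.

```python
# Hypothetical sketch: boost rotation weight for overstocked, slow-moving
# products. Field names and the weighting rule are illustrative assumptions.

def rotation_weight(ad, inventory):
    """Return a play-frequency weight for an ad based on inventory data."""
    item = inventory[ad["sku"]]
    weight = 1.0  # baseline: the ad plays at normal frequency
    if item["units_on_hand"] > item["projected_units"]:
        # Overweight ads for products sitting in excess inventory.
        overage = item["units_on_hand"] / item["projected_units"]
        weight *= min(overage, 3.0)  # cap the boost at 3x normal rotation
    return weight

inventory = {"SKU-1001": {"units_on_hand": 500, "projected_units": 200}}
ad = {"sku": "SKU-1001", "asset": "promo_1001.mp4"}
print(rotation_weight(ad, inventory))  # 2.5 -> play this ad 2.5x as often
```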

In general, there are three steps to achieving interoperability. The first step is to deliver the data from one application to another. Jeffrey Weitzman, managing director at Navori, said they use common URLs to point to accessible locations on the internet. In some cases, the producing application leaves files in that location for the receiving application to retrieve as needed. In other, more sophisticated implementations, the producing application generates a fresh file for that location every time a request is made, ensuring that the receiving application gets the latest possible information. Irina Magdenko, project manager for audience measurement provider Seemetrix, explained that they use HTTP requests for data delivery. The advantage of these approaches is that they use existing technologies for data transfer, eliminating the need to reinvent data delivery mechanisms and ensuring interoperability with a greater number of complementary products.
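Either variant boils down to an ordinary HTTP GET against an agreed-upon location. A minimal sketch using Python's standard library; the URL is a placeholder, not a real endpoint.

```python
# Minimal sketch of URL-based data delivery: the receiving application
# issues an HTTP GET against an agreed-upon location. The URL is a
# placeholder for illustration.
from urllib.request import urlopen

FEED_URL = "https://example.com/feeds/audience/latest.json"

with urlopen(FEED_URL, timeout=10) as response:
    payload = response.read().decode("utf-8")

print(payload)  # raw text; parsing it is the next step
```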

The second step is to parse the data. This is often facilitated by using a standard data format like CSV, XML, or JSON. The result is typically the separation of the data into a set of fields and values, like age:28 or gender:male. Since these formats are well understood and parsers for them are available in most modern programming languages, there is relatively little need for new development in this area.
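For instance, a delivered JSON payload decomposes into field:value pairs with a few lines of standard-library code. The sample payload below is invented for illustration.

```python
# Parsing a delivered payload into field:value pairs with the standard
# library. The sample payload is invented for illustration.
import json

payload = '{"age": 28, "gender": "male", "dwell_seconds": 14}'

record = json.loads(payload)      # dict of field -> value
for field, value in record.items():
    print(f"{field}:{value}")     # age:28, gender:male, dwell_seconds:14
```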

The third step is to infer semantic meaning from the field:value pairs. For example, interpreting age:5 appears to be a relatively straightforward operation. But what if the facial detection system that produced this data recognizes some imprecision in its own estimates and segments people into age ranges? In that case, the value 5 might mean the fifth range, covering people between ages 35 and 45.
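The point is that the meaning of age:5 lives outside the data itself, so the receiving application needs a mapping supplied by, or agreed with, the producer. A sketch with invented range boundaries:

```python
# Illustrative only: the range boundaries are invented. Decoding age:5
# requires knowledge that the raw payload does not carry.
AGE_RANGES = {
    1: (0, 12),
    2: (13, 17),
    3: (18, 24),
    4: (25, 34),
    5: (35, 45),   # the "5" in age:5 really means 35-45 years old
}

def decode_age(code: int) -> tuple[int, int]:
    return AGE_RANGES[code]

low, high = decode_age(5)
print(f"age:5 -> viewers aged {low}-{high}")
```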

Another example is a weather feed. If the goal is to display the weather on the digital signage for the convenience of customers, it may be simple enough to reproduce the exact text that comes in from a weather service. But if the goal is to use the feed to decide whether to promote coffee for cold weather or ice cream for hot weather, then the receiving application must understand the meaning of the feed so it can provide useful guidance for the day's ad rotation schedule. The need to understand the data can also become critical in specialized applications. Smart Shelf founder Kevin Howard explains that their system can place a thin strip of LED displays along the front edge of every inch of every shelf in a store. This requires up-to-date data from the planogram system so they can be sure to place the correct pricing or promotional messaging at the right location on each shelf.
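The coffee-versus-ice-cream distinction makes the display/decide gap concrete. A minimal sketch; the feed structure and the temperature threshold are assumptions made for illustration.

```python
# Sketch: acting on a weather feed rather than just displaying it.
# The feed fields and the 10-degree threshold are illustrative assumptions.
import json

feed = '{"location": "Boston", "temp_c": 4, "conditions": "Snow"}'
weather = json.loads(feed)

# Displaying the feed needs no interpretation:
ticker = f"{weather['location']}: {weather['temp_c']}°C, {weather['conditions']}"

# Driving the ad rotation does:
if weather["temp_c"] < 10:
    promotion = "hot_coffee_promo.mp4"   # cold day -> push coffee
else:
    promotion = "ice_cream_promo.mp4"    # warm day -> push ice cream

print(ticker, "->", promotion)
```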

In 2017, Samsung introduced its Brightics system. Technical sales director Philip Chan says the system uses AI to learn how to interpret the data. The result is normalized data that should be easier to interpret and use for making and implementing automated business decisions.

Late last year, NEC introduced its Analytic Learning Platform (ALP). Vice president of strategy Rich Ventura explains that ALP is a middleware layer that can use APIs to access data from a wide variety of sources and combine those feeds to make more intelligent decisions based on information from many disparate sources.
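NEC's interfaces are not described here, so the sketch below is only a generic illustration of the middleware pattern Ventura describes: pull several independent feeds, normalize them into one snapshot, and decide once. The feed names and fields are invented; this is not ALP's API.

```python
# Generic illustration of the middleware pattern: combine disparate feeds
# into a single decision. All names and fields are invented stand-ins.

def fetch_weather():   return {"temp_c": 31}
def fetch_audience():  return {"age_range": (18, 24), "count": 12}
def fetch_inventory(): return {"ice_cream_units": 480}

def combined_decision():
    snapshot = {
        "weather": fetch_weather(),
        "audience": fetch_audience(),
        "inventory": fetch_inventory(),
    }
    # One decision informed by several disparate sources:
    if (snapshot["weather"]["temp_c"] > 25
            and snapshot["inventory"]["ice_cream_units"] > 100):
        return "ice_cream_promo.mp4"
    return "default_loop.mp4"

print(combined_decision())
```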

Each of these solutions requires custom programming or manual data entry for one system to receive and understand the data from another. While APIs are a convenient way to accomplish this, each API requires implementation resources and carries the risk of obsolescence if the provider updates the interface.
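One common way to contain that obsolescence risk is to wrap each vendor API behind a thin adapter, so a provider's interface change means rewriting one small class rather than the whole application. A generic sketch; the class, fields, and payload are invented for illustration.

```python
# Generic sketch of the adapter pattern for containing API churn. Each
# vendor integration lives behind one small class; names are invented.

class AudienceFeedAdapter:
    """Translates one vendor's payload into the fields our CMS expects."""

    def normalize(self, raw: dict) -> dict:
        # If the vendor renames "ageBucket" in v2 of their API, this
        # method is the only code that changes.
        return {
            "age_range": raw["ageBucket"],
            "gender": raw["gender"],
        }

adapter = AudienceFeedAdapter()
print(adapter.normalize({"ageBucket": 5, "gender": "male"}))
```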

Standardization could reduce the workload on vendors and enable each product to interoperate with a wider variety of data sources and analytics systems. It would also give end users the flexibility to build customized solutions by combining the products from various vendors best suited to their specific needs. As the digital signage industry continues to grow, the need for easier interoperability will increase, and the pressure on vendors to adopt standards may increase with it.