The Next Revolution in Supply Chain Management: from Big Data Analytics towards Real-Time Digital Twins?
By Sven Verstrepen, Head of Supply Network Innovation & Analytics, Ahlers
During the last two decades, the logistics industry has seen an invasion of new business models and disruptive technological innovations. Most manufacturing companies have at some point invested heavily in Enterprise Resource Planning (ERP) or similar software systems that were meant to bridge the gap between the different company silos and create “a single version of the truth”. However, perhaps not surprisingly, very few companies have succeeded in running all of their operations on a single ERP platform. Most have instead chosen to use a best-of-breed mix of different software solutions for manufacturing, accounting, personnel planning, warehouse management, transportation, etc. The last two activities in particular have mostly been outsourced to 3PL or 4PL logistics service providers, who have happily added their own ICT solutions and complexity to the mix.
Although attempts have been made to standardize some of the most necessary and obvious information elements in the supply chain on a global level (think of GS1 barcodes or EDI messaging between suppliers and customers), most logistics networks still run on a patchwork of operational systems. Even if companies succeed in feeding data from those transactional systems into their “Business Intelligence (BI)” or similar reporting systems, this often still generates a rather inflexible, incomplete, and fragmented view of reality.
At a certain moment it was believed that Data Warehouses would be the solution to this problem. By pulling data from a multitude of transactional systems, transforming it into a more or less standard format, and making it available to a community of power users with analytical tools and skills, they certainly enabled significant progress. However, data warehouses have their limitations as well. They are expensive to set up and maintain, not easy to scale, and available only to a limited audience. Moreover, they cover only a fraction of reality and do so with a significant time delay (usually at least 24 hours, as most data warehouses are fed with batch uploads during the night). On top of this, many data warehouses contain a large amount of obsolete or polluted data fields which are difficult to evaluate or interpret for non-technical users.
Last but not least, data warehouses contain only internal information about one company, not about crucial supply chain partners such as suppliers, customers or third-party logistics service providers. Ultimately, they are of little use in supporting the holy grail of logistics, i.e. vertical and horizontal collaboration along the entire end-to-end supply chain.
This has significant repercussions for supply chain professionals such as logistics buyers, transport planners, warehouse managers, network design consultants, etc. Although all these roles rely heavily on large amounts of accurate facts and figures to optimize their work, they mostly have only fragmented and outdated data sets about their logistics environment at their disposal. This is of course far better than nothing, but it has also helped instigate a culture of sub-optimization and “driving the supply chain by looking in the rear-view mirror”.
But as Bob Dylan sings, the times, they are a-changin’. A number of technological innovations have recently become available that may well cause a dramatic and disruptive shift in the way supply chains are managed.
Yet such technology alone does not solve the problem of “managing the supply chain by looking in the rear-view mirror”, nor does it enable collaboration along the supply chain.
In order to gauge the future performance of a supply chain and to detect problems before they occur, we also need two other innovations which are becoming widely available: Data Lakes and Streaming Analytics. Data lakes are flexible and scalable databases which make it easy and cheap to grab high volumes of real-time data flows from a wide variety of sources, and make them available to a large number of users in the cloud. Such data sources can be the ERP or legacy systems from a multitude of collaborating companies, but also real-time vehicle location data, traffic congestion information, weather forecasts, mobile applications or Internet of Things (IoT) sensor data, etc.
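To make the idea concrete, here is a minimal sketch (in Python, with entirely made-up sources and field names) of how a data lake ingests heterogeneous events as-is, deferring any schema decisions to read time:

```python
import json
from datetime import datetime, timezone

# Hypothetical raw events from three very different sources: a vehicle
# GPS feed, an IoT temperature sensor and an ERP order update.
raw_events = [
    {"source": "gps", "truck_id": "T-17", "lat": 51.22, "lon": 4.40},
    {"source": "iot", "sensor": "RT-03", "temp_c": 4.8},
    {"source": "erp", "order": "SO-9912", "status": "shipped"},
]

def land_in_lake(event: dict) -> str:
    """Store the event untouched ("schema-on-read"), adding only an
    ingestion timestamp and a source tag as a thin envelope."""
    envelope = {
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "source": event["source"],
        "payload": event,  # original payload preserved as-is
    }
    return json.dumps(envelope)

lake = [land_in_lake(e) for e in raw_events]
print(len(lake))  # three records, each still in its native shape
```

The point of the sketch is that, unlike a data warehouse, nothing is discarded or forced into a fixed schema at ingestion time; each consumer can interpret the raw payloads later, in its own way.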
Streaming analytics refers to powerful software that makes it possible to visualize, explore, analyse, and forecast real-time data streams on-the-fly.
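As a toy illustration of the principle (all numbers and names are invented), the sketch below flags a transport lane the moment the moving average of its delay readings crosses a threshold, while the data is still arriving rather than in next morning's batch report:

```python
from collections import deque

def rolling_eta_delay(stream, window=3, alert_at=30):
    """Consume ETA-delay readings (in minutes) one by one and flag the
    lane as soon as the moving average over the last `window` readings
    exceeds `alert_at` minutes."""
    recent = deque(maxlen=window)  # keeps only the last `window` values
    for minutes in stream:
        recent.append(minutes)
        avg = sum(recent) / len(recent)
        yield (minutes, round(avg, 1), avg > alert_at)

readings = [5, 10, 40, 55, 60]  # hypothetical delay feed for one lane
for reading, avg, alert in rolling_eta_delay(readings):
    print(reading, avg, alert)
```

Real streaming platforms add windowing, persistence and scale, but the essential shift is the same: the analysis runs continuously over the live stream instead of over yesterday's snapshot.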
These technologies combined make it possible to generate “Digital Twins” of the supply chain, i.e. accurate virtual representations of the physical reality as it unfolds. By making this reality available to logistics decision makers via user-friendly dashboards and allowing them to run “what-if scenarios” in real time, important improvements can be expected, for example in inventory management, transport efficiency or on-shelf availability. Add other predictive technologies such as machine learning and artificial intelligence to the mix, and logistics may never be the same again.
A growing number of companies are already taking their first baby steps into this uncharted territory. This digital transformation will surely require significant investments in technology and human capital. Some board members and shareholders will ask if the short-term financial return justifies the cost. However, a better question to ask is whether in 10 years’ time it will still be possible to run a competitive supply chain without these new technologies. The times they are a-changin’ indeed, and rear-view mirrors just won’t cut it anymore.