
Data Analytics: the potential you may be missing

Industry 4.0 is about data and connectivity. In other words, robust Data Analytics is an essential success factor for Industry 4.0 and the Factory of the Future, which we discussed in an earlier article.

Consider, for example, the massive amount of complex and unstructured data produced by industrial machinery. To gain value from such Big Data, two steps are crucial. First, the data needs to be aggregated and structured. Second, it needs to be analyzed. Only then does Big Data become valuable. The latest data analytics methods allow for real-time changes in data structure and analysis.

In our other article on the general characteristics of and potential for Industry 4.0, we also discussed the clear benefits and opportunities of data analytics, chief among them proactive and forward-looking decision-making.

First, data from all types of sources provides a holistic and real-time image of the current situation, giving you a clear competitive advantage. Second, with the right data analytics, decision-making becomes much clearer and faster. Through the use of a digital twin, for example, it is possible to virtually map an entire value chain and visualize different scenarios for various parameters. This makes it very transparent what the biggest risk factors are and what the best course of action ought to be.
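
To make the digital twin idea more tangible, here is a minimal, purely illustrative sketch: a toy model of a three-station value chain, simulated under different unplanned-downtime scenarios. All station names, capacities and probabilities are hypothetical.

```python
import random

# Toy "digital twin" of a three-station value chain (all values hypothetical).
STATIONS = {"stamping": 120, "assembly": 100, "packaging": 110}  # units/hour

def simulate_day(downtime_prob: float, hours: int = 8, runs: int = 500) -> float:
    """Monte Carlo estimate of average daily output under a downtime scenario."""
    total = 0.0
    for _ in range(runs):
        for _ in range(hours):
            # Hourly output is limited by the slowest station; a station
            # that is down this hour contributes zero capacity.
            capacities = [cap if random.random() > downtime_prob else 0
                          for cap in STATIONS.values()]
            total += min(capacities)
    return total / runs

# Visualize scenarios for one parameter: unplanned downtime probability.
for scenario, prob in [("optimistic", 0.02), ("expected", 0.05), ("pessimistic", 0.15)]:
    print(f"{scenario:>12}: avg daily output ~ {simulate_day(prob):.0f} units")
```

Even a toy model like this makes risk factors visible; an actual digital twin would of course model the value chain in far more detail.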

Third and finally, a remarkable and underappreciated consequence of good data analytics is decentralization. Local sensors and software make it possible to immediately pinpoint specific issues. These issues can then be resolved swiftly by local teams, which relieves pressure on central units and increases efficiency, effectiveness and reaction speed.

Below, we will zoom in on the potential that is typically wasted when trying to apply analytics to Big Data. Afterwards, we will address how Big Data and data analytics projects can help avoid this waste.

Missed opportunities with Big Data & Data Analytics

So far, we have established the massive opportunities offered by Big Data and Data Analytics. In practice, however, most organizations struggle to get the most out of this potential. This can be due to a number of reasons.

Some of the recurring reasons we encounter are low data quality, data separated into different organizational silos, insufficient data gathered throughout the organization, and so on. In short, fundamental data management is not in place. Starting from this point, most organizations have difficulty taking their data management and data analytics to the next level. The end goal should be to establish data analysis as a core cross-functional competence and to make sure it is embedded at all levels of the organization.

EFESO provides support in Big Data and Data Analytics projects for a variety of clients. Together with our clients, we work to realize the full potential of their data. Starting from the client's specific situation and requirements, we customize our approach. The six criteria we look at to improve the data's potential, however, remain consistent:

  1. Volume: Companies collect an enormous amount of data, oftentimes without even realizing it. This means terabytes or even petabytes of untouched potential.
  2. Variety: As human beings, we like to take the easy route. Nonetheless, it pays to leverage more than just your structured data: semi-structured and even unstructured data can be extremely useful and valuable.
  3. Veracity: The basis of data analysis is data of sufficient quality. Note that quality is independent of whether the data is structured or unstructured.
  4. Visualization: To exploit the data's full potential, gather insights with Advanced Data Visualizations. Make the data speak for itself.
  5. Velocity: The speed at which information can be retrieved and analyzed determines whether you can make the right calls in the moment. Choices range from working with data batches to real-time retrieval.
  6. Value: The bottom line is and remains an essential dimension to be taken into consideration when defining your data strategy.

Typical process for a Big Data Project

Up until now, we have discussed the bigger picture and potential for Big Data and Data Analytics, as well as the dimensions to be taken into consideration to get the full potential out of this data. Let us now focus on the typical steps to take in a Big Data project.

Phase 1: Target vector 

There is no universal right way to use Big Data. For any company, the optimal usage of and goals for Big Data should follow from the overall company strategy and specific use cases which promise to be the most beneficial. The questions you try to answer in this phase include:

  • Which overarching objective do we strive for?
  • How could Big Data contribute to this strategy?
  • Is it possible (and desirable) that new business models emerge?
  • Which potential use cases could provide direct value?

Concrete examples of such use cases could be:

  • Determining different product strategies
  • Improving capacity efficiency at plant level
  • Optimizing manufacturing and supply chain costs
  • Improving time to market
  • Using predictive maintenance to increase production availability
  • Increasing cross-departmental collaboration
  • Establishing a decision-making tool and platform for potential new business models

Phase 2: Data Architecture

During the next step, the overall requirements are investigated and defined: more specifically, the architecture necessary to make the previously determined Big Data use cases a reality. Depending on the identified strategy and use cases, several changes will be necessary. These can be on a process or organizational level and need to be implemented together with the purely technological changes to the architecture. For any effort to succeed, it is key to apply the right effort on all levels: Process, Human and Digital. The questions to be answered in this phase include the following:

  • Where could we benefit from a Digital Twin?
  • How should cloud services be integrated?
  • What level of analytics capability do we require from our own people?
  • Are there any systems to be retrofitted?
  • What possible (cyber)security issues need to be taken into account?
  • How should we (re)design our processes to support the potential use cases?
  • What kind of organizational structure will we need to enable the new way of working?

Phase 3: Proof of Concept

Before a full rollout, first comes a proof of concept for one or several priority use cases. What does this entail? Usually, a first validation of the analysis models is run using a digital twin. In this way, using actual data from production, initial results can already be retrieved and used without the need to fully implement the solution within live operations. The decision on whether or not to move into live operations can then be made after checking the results. Furthermore, with this approach the necessary data interfaces will already have been investigated, making the transition towards productive use much easier.

Phase 4: Data Cluster

Following a successful proof of concept, the so-called Big Data Cluster is built and migrated from the proof-of-concept environment to the live environment. This involves several steps: the connection to the various data sources is made and, ideally, automated, and the visualization environment is set up, together with the integration of a view for end-users.

Phase 5: Towards service-oriented Cloud

Once the data cluster is up and running, the following phase involves transferring this operation to a service provider. Based on an operating model, smart devices are connected and combined into the Data Lake. The final objective is a Data-as-a-Service (DaaS) or Analytics-as-a-Service (AaaS) solution.

Phase 6: Taking the development even further in Collaborative Cloud

All the next steps and potential further evolutions of extra services happen in a collaborative cloud. This makes it possible for both internal and external developers to contribute their own developments and to make more services available.

Potential improvements using Big Data

What we have not discussed so far are some specific examples of how Big Data can be exploited in real-life scenarios. Note that this list is far from exhaustive.

  • Pattern recognition: As the name suggests, different patterns are automatically detected throughout the data. The possible applications are nearly endless, ranging from image analysis to biosecurity.
  • KPI monitoring: The most relevant KPIs, such as OEE, can be monitored and aggregated based on the required view, ranging from a global end-to-end (E2E) level down to a local plant level.
  • Condition monitoring: A major part of predictive maintenance. Several key variables are monitored continuously for specific conditions, allowing for timely and correct actions (a minimal sketch follows this list).
  • Production monitoring: Similar to condition monitoring, but the focus here is on environmental variables, which are checked and followed for outliers. A potential use case could be a production process that needs to remain within a very specific temperature range.
  • Fraud Detection: Autonomous check of different data streams with real-time detection of outliers or anomalies that could imply fraud.
  • Simulation-based optimization: Using Big Data to accurately simulate models of several processes, such as planning or scheduling, in order to optimize each.
  • Logistics optimization: Leveraging massive amounts of unstructured data such as geodata, weather data and even real-time traffic information to optimize the flow of products (especially relevant for Just in Time). Linked to these flows, supply and stocking requirements can also be optimized.
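
As promised above, here is a minimal, hedged sketch of the condition monitoring idea: flag readings that deviate strongly from a rolling baseline. The window size, threshold and temperature stream are assumptions for illustration; production systems would use more robust methods.

```python
from collections import deque
from statistics import mean, stdev

def monitor(stream, window: int = 20, z_threshold: float = 3.0):
    """Flag readings that deviate strongly from the recent rolling baseline."""
    recent = deque(maxlen=window)
    for t, value in enumerate(stream):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                yield t, value, mu  # timestamp, anomalous reading, expected level
        recent.append(value)

# Hypothetical temperature stream: stable around 70 degrees with one excursion.
readings = [70.0 + 0.1 * (i % 5) for i in range(40)] + [78.5] + [70.2] * 10
for t, value, expected in monitor(readings):
    print(f"t={t}: reading {value} deviates from baseline ~{expected:.1f}")
```

The same rolling-baseline pattern is also the simplest form of the outlier detection used in fraud detection.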

Procedure of a Data Analytics Project

Having discussed Big Data, how to run such a project and what some real-life applications could look like, let us now focus on the typical stages within a data analytics project. First, however, note that data analytics is a very broad term: it is much more than simply collecting your data and visualizing it in a nice way. Ideally, your analytics allow you, first, to generate better insights into your reality. Second, data analytics can allow you to model your specific scope (product, machine, plant, …) in such a way that you can make predictions based on input data. This, logically, benefits decision-making in a transformative way. When we discuss running a data analytics project, the goal is to set up a predictive model which allows for better (or even automated) decision-making in real time.

Phase 1: Creating an understanding of processes and data within scope

The data analytics project starts by creating an understanding of the relevant processes in scope and the available data structures. It is key to see what types of data sources there are, what types of data are available and how the data relates to the actual business processes.
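
In practice, this understanding often starts with a simple exploratory pass over an export from each source. A minimal sketch, assuming a CSV export with hypothetical column names:

```python
import pandas as pd

# Hypothetical export from one data source in scope (assumed file and schema).
df = pd.read_csv("production_log_export.csv")

# What types of data are available, and in what shape?
print(df.dtypes)         # column types: numeric sensors, text fields, dates
print(df.describe())     # ranges and distributions of numeric columns
print(df.isna().mean())  # share of missing values per column (a veracity check)

# How does the data relate to the business process? A quick check of
# granularity: one row per order, per machine-hour, per sensor tick, ...?
print(df.groupby("machine_id")["timestamp"].agg(["min", "max", "count"]))
```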

Phase 2: Analyze the existing data and create the prediction model 

If sufficient data of high enough quality is available, it is possible to immediately create a proof-of-concept model. This can be done in a test environment or even by taking a data sample offline. The first hypotheses are defined and tested against the existing data. If there is not enough data, or no patterns can be found in the available data, it is possible to either broaden the scope or obtain more data, for example by retrofitting the existing processes. After these steps, the prediction model can be created and its predictive power tested against historical data.
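
As a hedged illustration of this step, the sketch below trains a simple classifier and tests its predictive power on held-out historical data. The file, feature columns, target label and model choice are all assumptions; any standard library and model could take their place.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical historical data set: sensor features plus a known outcome.
df = pd.read_csv("machine_history.csv")           # assumed file and schema
X = df[["temperature", "vibration", "pressure"]]  # assumed feature columns
y = df["failure_within_7_days"]                   # assumed target label

# Hold out part of the historical data so predictive power is judged
# on periods the model has never seen, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# If precision and recall are too weak here, broaden the scope or
# gather more data before rolling anything out.
print(classification_report(y_test, model.predict(X_test)))
```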

Phase 3: Implementation of the prediction model 

Should the predictive quality be high enough, the data analytics model can be rolled out to the live environment. A key value driver is the correct set-up of ETL/ELT processes (Extract-Transform-Load / Extract-Load-Transform): the automated provision and preparation of raw data for integration into the overall IT architecture. From a front-end perspective, visualizations and reporting processes are put in place to allow end-users to benefit from the solution. Finally, the data analytics model needs to be tested and validated in the live environment.
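
To make the ETL idea concrete, here is a minimal sketch of an extract-transform-load pipeline. The file name, column names and SQLite target are assumptions for illustration; real pipelines typically target a data warehouse and run under a scheduler.

```python
import sqlite3

import pandas as pd

def extract(path: str) -> pd.DataFrame:
    """Extract: read a raw export from a machine or MES (assumed CSV)."""
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Transform: clean and standardize the raw data before integration."""
    df = df.dropna(subset=["machine_id", "timestamp"])  # drop unusable rows
    df["timestamp"] = pd.to_datetime(df["timestamp"])   # enforce types
    df["machine_id"] = df["machine_id"].str.strip().str.upper()
    return df

def load(df: pd.DataFrame, db_path: str = "analytics.db") -> None:
    """Load: append the prepared data into the reporting database."""
    with sqlite3.connect(db_path) as conn:
        df.to_sql("machine_readings", conn, if_exists="append", index=False)

# In production, a scheduler (cron, Airflow, ...) runs this automatically.
load(transform(extract("raw_machine_export.csv")))
```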

How we can help with your Big Data and Data Analytics project

EFESO helps you overcome the challenges you face when taking the next step in Big Data and Data Analytics, and thus realize this potential step by step in your company. You can request a call for any queries on Big Data or Data Analytics.

Want to know more?

Should you have any questions or need further information, we would be happy to help!