Big Data Analytics — An Architectural View

Big Data Analytics is a comprehensive, business-driven discipline. At a high level, it aims to enable quick business decisions, reduce the cost of products and services, and test new markets to create new products and services.


Big Data Analytics is used by organisations across many industries. Industry reports show that Big Data Analytics solutions are common in health care, life sciences, manufacturing, government, retail, education, and several other sectors. The use cases and requirements may differ from one industry to another; therefore, industry knowledge is also essential for Big Data architects.

We need methods and tools to perform Big Data Analytics. There are established methods and many tools available on the market. Most of the methods are proprietary, but some are available through open-source projects.

We need to familiarise ourselves with standard Big Data Analytics tools to help with the selection and procurement process. Among the many tools available, some popular ones frequently mentioned in Big Data Analytics publications are Aqua Data Studio, Azure HDInsight, IBM SPSS Modeler, Skytree, Talend, Splice Machine, Plotly, Lumify, and Elasticsearch.

As we can all observe, open source has progressed well in the Big Data Analytics area and produced multiple powerful tools. Some commonly used open-source analytics tools come from the Apache Software Foundation, such as Hadoop, Spark, Storm, Cassandra, and SAMOA; others include Neo4j, MongoDB, and the R programming environment.

Big Data Analytics is a broad and rapidly growing area. We can better understand it by looking at its inherent characteristics as documented in the body of knowledge for Big Data.

We can easily remember these characteristics using nine terms starting with the letter C: connection, conversion, cognition, configuration, content, customisation, cloud, cyber, and community. As these terms are largely self-explanatory, I won't explain each in detail here.

Big Data Analytics uses various methods and techniques such as natural language processing, machine learning, data mining, association pattern mining, behavioural analytics, predictive analytics, descriptive analytics, prescriptive analytics, and diagnostic analytics. Let's take an overview of these Big Data Analytics types.

I'd like to touch briefly on the types of Big Data Analytics. As Big Data architects, even though we don't perform the analytics ourselves, we still need to understand the four major types to create solutions in these areas.

There are four common analytics types, namely descriptive, predictive, prescriptive, and diagnostic. Each type differs in scope, aims to answer different business questions, and provides different insights. Let's briefly explain each.

Descriptive analytics covers the historical aspect of data to understand what happened in the past. It aims to interpret historical data and draw conclusions from data analysis to gain business insights. Common themes of descriptive analytics are sales growth, new customers, the number of products sold, and many other financial metrics that inform sales and business executives.
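To make the idea concrete, here is a minimal sketch of descriptive metrics computed over invented sales records; the records, months, and customer IDs are all hypothetical:

```python
from collections import defaultdict

# Hypothetical sales records: (month, customer_id, amount).
sales = [
    ("2023-01", "c1", 120.0), ("2023-01", "c2", 80.0),
    ("2023-02", "c1", 150.0), ("2023-02", "c3", 200.0),
]

# Revenue per month: a classic descriptive aggregation.
revenue = defaultdict(float)
for month, _, amount in sales:
    revenue[month] += amount

# Month-over-month sales growth.
months = sorted(revenue)
growth = (revenue[months[1]] - revenue[months[0]]) / revenue[months[0]]

# New customers: first seen in the later month.
early_customers = {c for m, c, _ in sales if m == months[0]}
new_customers = {c for m, c, _ in sales if m == months[1]} - early_customers

print(dict(revenue))      # → {'2023-01': 200.0, '2023-02': 350.0}
print(round(growth, 2))   # → 0.75
print(new_customers)      # → {'c3'}
```

In a real solution the same aggregations would run over a warehouse or a distributed engine rather than an in-memory list, but the metrics themselves are of this descriptive shape.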

Predictive analytics covers techniques that predict future outcomes based on current and historical data. Predictive analytics looks for patterns and captures relationships in data. For example, linear regression techniques in machine learning and neural networks are commonly used to capture the interdependencies of variables in the data. Predictive analytics can serve many disciplines and various business purposes; predicting customers' purchase intent by analysing their shopping behaviour is an everyday use case for Big Data solutions.
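The linear regression mentioned above can be sketched in a few lines with the closed-form least-squares fit; the data points (ad spend versus units sold) are invented for illustration:

```python
# Hypothetical data: monthly ad spend (thousands) vs. units sold (thousands).
x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Closed-form least-squares fit: y ≈ a*x + b.
a = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
    sum((xi - mean_x) ** 2 for xi in x)
b = mean_y - a * mean_x

def predict(spend):
    """Predict units sold for a given ad spend."""
    return a * spend + b

print(predict(5.0))  # → 11.0 (extrapolated prediction)
```

Production solutions would typically delegate this to a library (for example Spark MLlib or scikit-learn) and validate the model on held-out data, but the underlying pattern-capture idea is the same.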

Prescriptive analytics aims to find the best action for a given situation. This type of analysis looks for ways to determine the best outcome among various choices. Prescriptive analytics can be instrumental in mitigating risks, improving the accuracy of predictions, and taking advantage of opportunities. It analyses the interactions and potential decisions and proposes the best course of action.
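Choosing the best outcome among various choices can be sketched as an expected-value comparison; the actions, probabilities, and payoffs below are entirely hypothetical:

```python
# Hypothetical candidate actions, each with (probability, payoff) outcomes.
actions = {
    "discount_campaign": [(0.6, 50_000), (0.4, -10_000)],
    "new_market_entry":  [(0.3, 120_000), (0.7, -20_000)],
    "do_nothing":        [(1.0, 0)],
}

def expected_value(outcomes):
    """Probability-weighted payoff of one action."""
    return sum(p * payoff for p, payoff in outcomes)

# Prescriptive step: recommend the action with the best expected outcome.
best_action = max(actions, key=lambda a: expected_value(actions[a]))
print(best_action)                                   # → discount_campaign
print(expected_value(actions[best_action]))          # → 26000.0
```

Real prescriptive systems add constraints, simulation, and optimisation solvers on top of this basic idea, but the decision-ranking core is the same.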

Diagnostic analytics asks why something has happened by examining the data and proposes an answer to this fundamental question. It uses multiple techniques such as discovery, mining, correlation, and contrasting.
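A typical diagnostic starting point is a correlation check: for instance, did churn rise because response times rose? The figures below are invented, and Pearson's r is computed directly:

```python
# Hypothetical monthly figures: service response time vs. customer churn.
response_ms = [120, 180, 240, 300, 360]
churn_pct   = [1.0, 1.4, 2.1, 2.6, 3.1]

n = len(response_ms)
mean_x = sum(response_ms) / n
mean_y = sum(churn_pct) / n

# Pearson correlation coefficient from its definition.
cov   = sum((x - mean_x) * (y - mean_y) for x, y in zip(response_ms, churn_pct))
var_x = sum((x - mean_x) ** 2 for x in response_ms)
var_y = sum((y - mean_y) ** 2 for y in churn_pct)
r = cov / (var_x * var_y) ** 0.5

print(round(r, 3))  # strong positive correlation, close to 1.0
```

Correlation alone does not establish causation, of course; in practice it narrows down candidate explanations that the other diagnostic techniques (discovery, mining, contrasting) then probe further.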

In addition to these four types of analytics, another trending analytics type for Big Data solutions relates to semantic technologies. Semantic data analytics is an emerging and complicated process requiring various techniques and tools.

As the volume, variety, and velocity of data rise in the enterprise, there is a need to apply semantic technologies to our Big Data solutions at the programme level. Using semantic technologies can help our solutions improve the veracity of data and gain insights from the murky data growing exponentially from multiple sources and in various formats, especially unstructured formats.

Semantic data analytics can be used to derive new meaning from unstructured data, particularly text. This type of analytics also helps with finding context and relationships in unstructured data. We can apply semantic analytics to search for the required keywords to make sense of cluttered text.

Keyword search via semantic analysis can help us automate data mining techniques. One of the most useful aspects of semantic technology, such as deep learning mechanisms, is the provision to annotate the content, context, and real meaning of the data. The graphical representation capability of a semantic model can be beneficial for representing data in a human-understandable way.

Automating semantic technologies for Big Data requires integrating semantic knowledge into the data systems and databases. This capability allows consumers to make easily understandable queries with the necessary meaning behind them, as semantic search is not limited to explicit statements. Traditional search mechanisms are not capable of capturing meaning, context, and essential relationships from data sources.

The automation can involve connecting metadata models with semantic technologies to the traditional data warehouse systems in the enterprise. For example, in data science terms, relationships at various levels, such as unary, binary, and ternary, can be described in the semantic taxonomy for automation purposes.
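A common way to express such relationships is the RDF-style triple, where each binary relationship becomes a (subject, predicate, object) statement. The sketch below uses invented identifiers to show the pattern-matching queries this representation enables:

```python
# Tiny RDF-style triple store; subjects, predicates, objects are illustrative.
triples = {
    ("Customer:42", "purchased", "Product:7"),
    ("Product:7", "category", "Electronics"),
    ("Customer:42", "locatedIn", "Berlin"),
}

def query(subject=None, predicate=None, obj=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return {
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    }

# Everything we know about Customer:42.
facts = query(subject="Customer:42")
print(len(facts))  # → 2
```

Unary relationships fit the same shape as type assertions (subject, "is-a", class), while ternary and higher-arity relationships are typically handled by reification, that is, introducing an intermediate node that the extra participants attach to.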

This integration can transform data into better quality and more human-understandable formats. This automation process can help our solutions structure unstructured data to a certain degree. However, semantic technologies are still evolving; hence, they are no panacea for full-fledged transformation of data from unstructured to structured. Even so, they are worth architectural consideration for our Big Data solutions.