Don't Just Collect Data, Manage It

2 Minutes Read

These days, most companies have already taken steps to understand their data better.

Having data stored randomly is like having your clothes scattered across the floor of your closet: you may be able to find them, but it's going to take time and effort. You need a specific process for handling your information so you can retrieve data quickly and use it wisely.

When implemented correctly, this strategy, known as data-driven decisions and applications, can guide a company's business and operations. The strategy has several stages, which generally increase in difficulty and effort the longer they take to implement. Here is a typical sequence of events in a data lake and data platform project.

First things first: you need to create a data lake.

All relevant information and data sources must be organised and stored in a common place: a central repository that can hold multiple types of data and file formats. In some cases, this might be a single database; in others, it might involve several.

Spark's favourite choice is usually Amazon S3. The service lets data be queried and data formats be identified rapidly, making it one of the best options for organising and storing data in a common place. You can do this directly using S3's query tooling and other tools like AWS CloudShell.
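As a rough illustration, a minimal PySpark sketch for exploring mixed-format raw files in an S3 data lake might look like the following; the bucket name and prefixes are hypothetical placeholders, not real resources.

```python
# Minimal sketch: reading mixed-format raw data from a hypothetical S3 data lake.
# Bucket name and prefixes are placeholders. On EMR the s3:// scheme is available;
# open-source Spark typically uses s3a:// with the hadoop-aws package configured.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("data-lake-exploration").getOrCreate()

# Raw CSV files landed by an ingestion process.
orders_raw = spark.read.option("header", True).csv("s3://example-data-lake/raw/orders/")

# JSON events stored alongside them in the same lake.
events_raw = spark.read.json("s3://example-data-lake/raw/events/")

# Quick look at what the lake actually contains.
orders_raw.printSchema()
events_raw.printSchema()
```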

It's important to note that data lakes store data as is.  

Sensitive information (subject to compliance and privacy requirements) may therefore sit in the data lake in raw format. Because of that, it's important to control who has access: limiting permissions and implementing the relevant restrictions and roles for developers and engineers is crucial.
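To make that concrete, here is a hedged sketch of how a team might restrict the raw-data prefix to a single engineering role using boto3. The bucket name, account ID and role are hypothetical placeholders, and this is a simplified illustration rather than a recommended production policy.

```python
# Sketch: deny reads on the raw/ prefix to everyone except a named engineering role.
# Bucket name, account ID and role are hypothetical placeholders.
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRawPrefixExceptDataEngineers",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-data-lake/raw/*",
            "Condition": {
                "StringNotLike": {
                    "aws:PrincipalArn": "arn:aws:iam::123456789012:role/data-engineer*"
                }
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="example-data-lake", Policy=json.dumps(policy))
```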

This phase of the project usually includes scheduled ingestion processes that automatically migrate databases and other sources of information into the selected data lake service. Services like Kinesis and Kinesis Data Firehose are typically used to ingest real-time information. Various services can perform these ingestions. For example, to migrate database information to S3, Database Migration Service (DMS) is an excellent out-of-the-box option. Other teams prefer Elastic MapReduce (EMR) as a more customisable way to ingest data, with greater control over connectivity, input and output formats, output file sizes, and so on.
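As an illustration of the more hands-on EMR route, the sketch below shows a Spark job pulling a table over JDBC and landing it in the lake as Parquet. The connection details, credentials, table and bucket are hypothetical, and a real job would read secrets from a secure store rather than hard-coding them.

```python
# Sketch of an EMR-style ingestion job: copy a relational table into the S3 data lake.
# JDBC URL, credentials, table and bucket names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-ingestion").getOrCreate()

# Extract: read the source table over JDBC (requires the matching JDBC driver on the classpath).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://source-db.example.com:5432/shop")
    .option("dbtable", "public.orders")
    .option("user", "ingestion_user")
    .option("password", "example-secret")
    .load()
)

# Load: write to the raw zone, partitioned by date to control the output file layout.
(
    orders.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-data-lake/raw/orders/")
)
```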

When the data lake is ready, it is time to start thinking about what information and insights can be obtained from the data – this is usually the second stage of most data pipelines.

At Spark, this phase includes meetings with various business stakeholders to gather more insight. By the time a business partners with a consultancy, its stakeholders have usually already collected data and performed analyses to reach conclusions and prepare reports for other stakeholders.

For example, a transport company may want to know about the different profiles of customers using its services, and an online shop may want to know which colour to print its football shoes in. We refer to these as use cases. The best way to start a use case is by replicating the customer's current reports on the new platform. It is essential to determine the level of aggregation and detail of the output data, because reporting and analysis information must be accessible quickly. Big Data tools such as Spark generally run pieces of code that Extract, Transform and Load (ETL) raw data into reliable, trusted data. Irrelevant and duplicated information must be dropped from the ETL outputs, and inadequate or incomplete information should not be considered for analytics and insights.
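A typical ETL step of that kind, sketched in PySpark with hypothetical paths and column names, might look like this:

```python
# Sketch of a simple ETL step: turn raw trip data into a trusted, aggregated output.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-profiles-etl").getOrCreate()

# Extract: raw data exactly as it was ingested into the lake.
trips_raw = spark.read.parquet("s3://example-data-lake/raw/trips/")

# Transform: drop duplicates and rows too incomplete to be trusted.
trips_clean = (
    trips_raw.dropDuplicates(["trip_id"])
    .dropna(subset=["customer_id", "trip_date"])
)

# Aggregate to the level of detail the report actually needs.
customer_profiles = trips_clean.groupBy("customer_id").agg(
    F.count("trip_id").alias("trip_count"),
    F.avg("fare").alias("avg_fare"),
)

# Load: write the trusted output where reporting tools can query it quickly.
customer_profiles.write.mode("overwrite").parquet(
    "s3://example-data-lake/trusted/customer_profiles/"
)
```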

Are you ready to build a data lake? Speak with our experts to learn more about gathering insights (and taking action) from your data.