Synopsis
Chapter 1: Data Lake Concepts Overview
Chapter Goal: This chapter highlights the key concepts of a Data Lake and its technology stack. It briefs readers on the background of data management, the need for a Data Lake, and the latest trends.
Page count: 20
Sub-Topics:
1. Familiarization with Enterprise Data Lake ecosystem
2. Understand key components of Data Lake
3. Data understanding - Structured vs Unstructured
Chapter 2: Data Replication Strategies
Chapter Goal: The chapter will focus on how to replicate data into Hadoop from source systems. Depending on the nature of the source systems, strategies may change. The chapter will start with conventional approaches to ETL data into Hadoop and then dive into the latest trends in change data capture.
Page count: 25
Sub-Topics:
1. Conventional ETL strategies
2. Change data capture for relational data
3. Change data capture for time-series data
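To give a flavour of the change data capture pattern this chapter covers, below is a minimal, hypothetical sketch of timestamp-based polling against a relational source; the table, columns, and database file are assumptions, and a log-based CDC tool would normally replace such polling in production.

# Hypothetical sketch: timestamp-based change data capture by polling a
# relational table. Table/column names and the database are assumptions.
import sqlite3          # stand-in for any DB-API compatible driver
from datetime import datetime

def fetch_changes(conn, last_watermark):
    """Return rows modified since the previous watermark."""
    cur = conn.execute(
        "SELECT id, payload, updated_at FROM orders WHERE updated_at > ?",
        (last_watermark,),
    )
    return cur.fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect("source.db")        # placeholder source system
    watermark = "1970-01-01 00:00:00"          # start from the beginning
    changed_rows = fetch_changes(conn, watermark)
    # In a real pipeline these rows would be written to HDFS or a Kafka topic.
    print(f"captured {len(changed_rows)} changed rows at {datetime.now()}")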
Chapter 3: Bring Data into Hadoop
Chapter Goal: The chapter will focus on how to get data into a Hadoop cluster. It will cover several approaches and utilities that can be used to bring data into Hadoop for processing.
Page count: 30
Sub-Topics:
1. RDBMS to Hadoop
2. MPP database systems to Hadoop
3. Unstructured data into Hadoop
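As one example of the RDBMS-to-Hadoop tooling discussed here, the sketch below invokes a typical Sqoop import from Python; the JDBC URL, credentials, table, and target directory are placeholders, not a prescribed setup.

# Hedged sketch: invoking a Sqoop import from Python. The JDBC URL, table
# name, and HDFS target directory are placeholders, not real systems.
import subprocess

sqoop_cmd = [
    "sqoop", "import",
    "--connect", "jdbc:mysql://source-host/sales",   # assumed source database
    "--username", "etl_user",
    "--password-file", "/user/etl/.password",        # avoid passwords on the CLI
    "--table", "orders",
    "--target-dir", "/data/raw/orders",              # landing zone in HDFS
    "--num-mappers", "4",                             # parallel import tasks
]

# Runs only where the Sqoop client and Hadoop configuration are installed.
subprocess.run(sqoop_cmd, check=True)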
Chapter 4: Data Streaming Strategies
Chapter Goal: The chapter will deep dive into the data streaming principles of Kafka. It will explain how Kafka works and how it resolves the challenge of getting data into a Data Lake.
Page count: 50
Sub-Topics:
1. How to stream the data - Kafka
2. How to persist the changes
3. How to batch the data
4. How to massage the data
5. Tools and technologies - HVR, Oracle GoldenGate for Big Data
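To illustrate the Kafka-based streaming approach the chapter develops, here is a minimal producer sketch using the kafka-python client; the broker address, topic name, and message shape are assumptions for illustration only.

# Minimal producer sketch with the kafka-python client. Broker address,
# topic name, and message shape are assumptions for illustration only.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                      # assumed broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish one change event; downstream consumers land it in the Data Lake.
producer.send("orders-changes", {"order_id": 42, "status": "SHIPPED"})
producer.flush()   # block until the message is acknowledged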
Chapter 5: Data Processing in Hadoop
Chapter Goal: This chapter will provide an insight into various data querying platforms. It all started with MapReduce, but Hive is quickly acquiring de facto status in the industry. The chapter will deep dive into Hive and its SQL-like semantics and showcase its most recent capabilities. A dedicated section on Spark will give a detailed walk-through of the Spark approach to processing data in Hadoop.
Page count: 30
Sub-Topics:
1. MapReduce
2. Query engines - introduction, Big Data SQL, Big SQL
3. Hive - focus
4. Spark - focus
5. Presto
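As a taste of the Hive and Spark material, the sketch below runs a Hive-style SQL query through PySpark with Hive support enabled; the database, table, and column names are placeholders and assume a cluster already configured against a Hive metastore.

# Sketch of querying a Hive table through Spark SQL; the table name and
# column are placeholders and assume a cluster with Hive support enabled.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("data-lake-query")
    .enableHiveSupport()          # read Hive metastore tables directly
    .getOrCreate()
)

# Hive-style SQL executed by the Spark engine.
daily_counts = spark.sql(
    "SELECT order_date, COUNT(*) AS orders FROM raw.orders GROUP BY order_date"
)
daily_counts.show()
spark.stop()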
Chapter 6: Data Security and Compliance
Chapter Goal: This chapter will talk about the security aspects of a data lake in Hadoop. The fact that organizations have deliberately compromised on security in the past carries weight. The chapter talks about how to build a safety net around a data lake and mitigate the risks of unauthorized access or injection attacks on a Data Lake.
Page count: 20
Sub-Topics:
1. Encryption in-transit and at rest
2. Data masking
3. Kerberos security and LDAP authentication
4. Ranger
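To illustrate the data masking sub-topic, below is a small, generic sketch that hashes a sensitive field before a record lands in the lake; the field names and salt handling are assumptions, not a technique prescribed by the chapter.

# Illustrative data masking sketch: one-way hash of a sensitive field before
# the record lands in the lake. Field names and salt handling are assumptions.
import hashlib

SALT = b"replace-with-a-secret-salt"   # placeholder; manage via a key store

def mask(value: str) -> str:
    """Return an irreversible, consistent token for a sensitive value."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()

record = {"customer_id": "C-1001", "email": "jane@example.com", "amount": 25.0}
record["email"] = mask(record["email"])   # mask PII, keep a join-friendly token
print(record)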
Chapter 7: Ensure Availability of a Data Lake
Chapter Goal: This chapter sheds light on yet another key aspect of the data landscape: availability. It will discuss topics such as disaster recovery strategies, how to set up replication between two data centers, and how to tackle the consistency and integrity of data.
Page count: 20
Sub-Topics:
1. Disaster Recovery Strategies
2. Setup data center replication
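As a minimal sketch of cross-data-center replication for disaster recovery, the snippet below invokes DistCp from Python; the NameNode addresses and paths are placeholders.

# Hedged sketch: replicating a dataset between two clusters with DistCp.
# NameNode addresses and paths are placeholders for illustration.
import subprocess

distcp_cmd = [
    "hadoop", "distcp",
    "-update",                                    # copy only changed files
    "hdfs://primary-nn:8020/data/lake/orders",    # assumed primary cluster path
    "hdfs://dr-nn:8020/data/lake/orders",         # assumed DR cluster path
]

# Requires Hadoop client configuration for both clusters on this node.
subprocess.run(distcp_cmd, check=True)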
"synopsis" may belong to another edition of this title.