Apache Pinot works with Kafka and Presto to provide low-latency, high-throughput user-facing analytics

Apache Pinot is a real-time, distributed OLAP datastore built for low-latency, high-throughput analytics, which makes it a strong fit for user-facing analytical workloads. Paired with Kafka for real-time ingestion and Presto for querying, Pinot can serve analytics directly to end users.

Access Modifiers in Scala

Access modifiers, also known as access specifiers, determine the accessibility and scope of classes, methods, and other members. Scala's access modifiers closely resemble Java's, but they offer more granular and powerful visibility control.
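As a rough illustration (a minimal Scala 2 sketch; the package, class, and member names are made up), the qualifier in private[X] is what gives Scala its finer-grained control, scoping a member to an enclosing package or class X:

```scala
// Illustrative packages and classes; the point is the visibility of each member.
package store {

  package inventory {

    class Item {
      private val sku: String = "SKU-1"            // visible only inside Item
      protected val cost: Double = 10.0            // visible to Item and its subclasses
      private[inventory] val shelf: Int = 3        // visible anywhere in package inventory
      private[store] val supplier: String = "Acme" // visible anywhere in package store
      val name: String = "Widget"                  // public, the default in Scala
    }

    class DiscountedItem extends Item {
      def margin(price: Double): Double = price - cost // ok: cost is protected
      // def id: String = sku                          // would not compile: sku is private to Item
    }
  }

  object Report {
    def supplierOf(item: inventory.Item): String = item.supplier // ok: private[store]
    // def shelfOf(item: inventory.Item): Int = item.shelf        // would not compile outside inventory
  }
}
```

Java's closest equivalents, package-private and protected, cannot target an arbitrary enclosing scope the way private[store] does here.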

Overview of Amazon EMR

Amazon EMR is a managed cluster platform that makes it easier to run big data frameworks like Apache Hadoop and Apache Spark on AWS to process and analyze huge amounts of data.

An Introduction to Algorithms and Data Structures

An algorithm is a series of instructions in a particular order for performing a specific task, and a data structure is a way of organizing data so that it can be stored and retrieved efficiently.
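To make those ideas concrete, here is a small, self-contained Scala sketch (the names and sample values are made up) of one classic algorithm, binary search, running over one simple data structure, a sorted array:

```scala
object BinarySearch {
  // Returns the index of target in the sorted array, or None if it is absent.
  // The algorithm halves the search range on every step, so it runs in O(log n).
  def search(sorted: Array[Int], target: Int): Option[Int] = {
    var lo = 0
    var hi = sorted.length - 1
    while (lo <= hi) {
      val mid = lo + (hi - lo) / 2        // midpoint, written to avoid integer overflow
      if (sorted(mid) == target) return Some(mid)
      else if (sorted(mid) < target) lo = mid + 1
      else hi = mid - 1
    }
    None
  }

  def main(args: Array[String]): Unit = {
    val data = Array(2, 5, 8, 12, 16, 23, 38) // the data structure: a sorted array
    println(search(data, 16))                 // Some(4)
    println(search(data, 7))                  // None
  }
}
```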

Terraform Basics

Terraform is an open-source infrastructure-as-code tool that lets us programmatically provision the infrastructure resources an application needs to run.

AWS Command Line Interface (AWS CLI)

The AWS CLI is an open-source tool that lets us interact with AWS services using commands in a command-line shell.

Data Product vs. Data as a Product

A data product is not the same as data as a product. A data product uses data to help accomplish the product's goal, whereas with data as a product, the data itself is treated as the product.

Data Governance

Data governance is the process of defining security guidelines and policies and ensuring they are followed, by exercising authority and control over how data assets are managed.

Data Catalog

A data catalog is an ordered inventory of an organization's data assets that makes it easy to find the most relevant data quickly.

Getting to Know the Parquet File Format

Parquet is an open-source file format for the Hadoop ecosystem that provides columnar storage and is built from the ground up with complex nested data structures in mind.
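To show the nested-structure point in practice, here is a hedged Spark-in-Scala sketch (the path, schema, and sample rows are invented) that writes nested case classes to Parquet and then reads back only a couple of columns, which columnar storage makes cheap:

```scala
import org.apache.spark.sql.SparkSession

object ParquetNestedExample {
  // Nested case classes map naturally onto Parquet's nested (group) types.
  case class Address(city: String, zip: String)
  case class Customer(id: Long, name: String, addresses: Seq[Address])

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("parquet-nested-example")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val customers = Seq(
      Customer(1L, "Ada", Seq(Address("London", "NW1"), Address("Leeds", "LS1"))),
      Customer(2L, "Linus", Seq(Address("Helsinki", "00100")))
    ).toDS()

    // Write the nested structure as Parquet files.
    customers.write.mode("overwrite").parquet("/tmp/customers_parquet")

    // Column pruning: only the id column and the nested city field are read from disk.
    spark.read.parquet("/tmp/customers_parquet")
      .select("id", "addresses.city")
      .show(truncate = false)

    spark.stop()
  }
}
```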

Partitions and Bucketing in Spark

Partitioning and bucketing improve data reads by reducing the cost of shuffles, the need for serialization, and the amount of network traffic.
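As a hedged sketch of how this looks in practice (the dataset, column, and path names are invented), partitionBy lays the files out on disk by a column so that filters can skip whole directories, while bucketBy pre-hashes rows into a fixed number of buckets so that joins and aggregations on that column can avoid a full shuffle:

```scala
import org.apache.spark.sql.SparkSession

object PartitionBucketExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("partition-bucket-example")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val orders = Seq(
      (1L, "US", "2023-01-01", 120.0),
      (2L, "DE", "2023-01-01", 80.0),
      (3L, "US", "2023-01-02", 45.0)
    ).toDF("order_id", "country", "order_date", "amount")

    // Partitioning: one sub-directory per country value; a filter on country
    // reads only the matching directories.
    orders.write.mode("overwrite")
      .partitionBy("country")
      .parquet("/tmp/orders_by_country")

    // Bucketing: rows are hashed on order_id into 8 buckets. Note that bucketBy
    // must be combined with saveAsTable (a catalog-backed table), not a plain path.
    orders.write.mode("overwrite")
      .bucketBy(8, "order_id")
      .sortBy("order_id")
      .saveAsTable("orders_bucketed")

    spark.stop()
  }
}
```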

How To Set SLA in Apache Airflow

Apache Airflow enables us to schedule tasks as code. In Airflow, an SLA determines the maximum completion time for a task or DAG. Note that SLAs are evaluated relative to the DAG execution date, not the task start time.