Apache Hive Definitions


Apache Hive: is an open source data warehouse system built on top of Hadoop, used for querying and analyzing large datasets stored in Hadoop files. It processes structured and semi-structured data in Hadoop.

Hive Architecture: After the introduction to Apache Hive, we can now discuss the major components of the Hive architecture. The Apache Hive components are:

Metastore – It stores metadata for each of the tables, such as their schema and location, and it also includes the partition metadata. This helps the driver track the progress of the various data sets distributed over the cluster. The metadata is stored in a traditional RDBMS format. Hive metadata helps the driver keep track of the data, so it is highly crucial; a backup server regularly replicates the metastore data so it can be recovered in case of data loss.

Driver – It acts like a controller that receives the HiveQL statements. The driver starts the execution of a statement by creating sessions, and it monitors the life cycle and progress of the execution. It stores the necessary metadata generated during the execution of a HiveQL statement, and it also acts as a collection point for the data or query results obtained after the Reduce operation.

Compiler – It performs the compilation of the HiveQL query, converting the query into an execution plan. The plan contains the tasks, and the steps MapReduce needs to perform, to produce the output the query describes. The compiler first converts the query to an Abstract Syntax Tree (AST); after checking for compatibility and compile-time errors, it converts the AST into a Directed Acyclic Graph (DAG).

Optimizer – It performs various transformations on the execution plan to produce an optimized DAG. It aggregates transformations together, such as converting a pipeline of joins into a single join, for better performance. The optimizer can also split tasks, such as applying a transformation on data before a reduce operation, to improve performance.

Executor – Once compilation and optimization are complete, the executor executes the tasks, taking care of pipelining them.
CLI, UI, and Thrift Server – The CLI (command-line interface) provides a user interface for an external user to interact with Hive. The Thrift server allows external clients to interact with Hive over a network, similar to the JDBC or ODBC protocols.
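As a hedged illustration of that Thrift path, the sketch below connects to a HiveServer2 instance using the third-party PyHive package; the host, port, username, and table name are assumptions made for the example, not details from this article.

    from pyhive import hive  # third-party client for HiveServer2's Thrift interface

    # Connect to a HiveServer2 instance (illustrative host, port, and user).
    conn = hive.Connection(host="localhost", port=10000, username="hadoop")
    cursor = conn.cursor()

    # Run a HiveQL query over the Thrift connection and fetch the rows.
    cursor.execute("SELECT * FROM my_table LIMIT 10")
    for row in cursor.fetchall():
        print(row)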

Hive Shell: The shell is the primary way we interact with Hive; we issue our commands or queries in HiveQL inside the Hive shell. The Hive shell is quite similar to the MySQL shell: it is the command-line interface for Hive, in which users can run HQL queries. Like SQL, HiveQL is case-insensitive (except for string comparisons).

We can run the Hive shell in two modes: non-interactive mode and interactive mode.

Hive in non-interactive mode – The Hive shell can be run in non-interactive mode: with the -f option we specify the location of a file that contains HQL queries. For example: hive -f my-script.q

Hive in interactive mode – The Hive shell can also be run in interactive mode. In this mode, we go directly to the Hive shell and run queries there, submitting the required queries manually and getting the results. For example, $ bin/hive takes us to the Hive shell.

What is Hive Partitioning and Bucketing? Apache Hive is an open source data warehouse system used for querying and analyzing large datasets. Data in Apache Hive can be organized into tables, partitions, and buckets. A table in Hive is logically made up of the data being stored, and it is of two types: internal tables and external tables. Refer to this guide to learn what internal and external tables are and the difference between the two. Let us now discuss partitioning and bucketing in Hive in detail.

Partitioning – Apache Hive organizes tables into partitions, grouping the same type of data together based on a column or partition key. Each table in Hive can have one or more partition keys that identify a particular partition. Using partitions makes it faster to run queries on slices of the data.

Bucketing – In Hive, tables or partitions are subdivided into buckets, based on the hash of a column in the table, to give extra structure to the data that can be used for more efficient queries.
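To make partitioning and bucketing concrete, here is a minimal PySpark sketch that writes a Hive table partitioned by one column and bucketed by another. The table name sales, the columns, and the sample rows are all invented for the example, and the code assumes Spark is configured against a Hive metastore.

    from pyspark.sql import SparkSession

    # Assumes Spark is set up with Hive support and a metastore.
    spark = (SparkSession.builder
             .appName("partition-bucket-demo")
             .enableHiveSupport()
             .getOrCreate())

    df = spark.createDataFrame(
        [("2019-06-12", "US", 101, 25.0), ("2019-06-12", "DE", 102, 40.0)],
        ["ds", "country", "user_id", "amount"])

    (df.write
       .partitionBy("ds")        # one partition directory per distinct ds value
       .bucketBy(4, "user_id")   # hash user_id into 4 buckets within each partition
       .sortBy("user_id")
       .format("parquet")
       .saveAsTable("sales"))    # registers the table in the Hive metastore

Queries that filter on ds can then skip whole partitions, and queries that join or aggregate on user_id can exploit the bucket layout.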

Apache Spark: is a lightning-fast cluster computing technology designed for fast computation. It was built on top of Hadoop MapReduce, and it extends the MapReduce model to efficiently support more types of computation, including interactive queries and stream processing. The following is a brief overview of the basics of Spark Core programming.

Apache Spark provides three types of APIs:

RDD: The main abstraction Spark provides is a resilient distributed dataset (RDD), which is a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel.
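A minimal PySpark sketch of the idea (the numbers and the partition count are arbitrary): a local collection is split across partitions and transformed in parallel.

    from pyspark import SparkContext

    sc = SparkContext("local[2]", "rdd-demo")

    # Distribute a local list across 2 partitions, then operate on it in parallel.
    rdd = sc.parallelize([1, 2, 3, 4, 5], numSlices=2)
    total = rdd.map(lambda x: x * x).reduce(lambda a, b: a + b)
    print(total)  # 55

    sc.stop()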

DataFrame: is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or an R/Python DataFrame. Along with the DataFrame API, Spark also introduced the Catalyst optimizer, which leverages advanced programming features to build an extensible query optimizer.
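A short, hedged sketch of the DataFrame API (the column names and rows are invented for the example); the filter and projection below are planned and optimized by Catalyst before execution.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("dataframe-demo").getOrCreate()

    # Named columns make this behave like a relational table.
    df = spark.createDataFrame([("alice", 34), ("bob", 29)], ["name", "age"])

    # Catalyst builds and optimizes the query plan for this transformation.
    df.filter(df.age > 30).select("name").show()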

Dataset: The Dataset API is an extension of DataFrames that provides a type-safe, object-oriented programming interface. It is a strongly typed, immutable collection of objects that are mapped to a relational schema. (The typed Dataset API is available in Scala and Java; in Python, the untyped DataFrame API fills this role.)

Data Warehouse (DW or DWH): also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, and is considered a core component of business intelligence. DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data in a single place and are used for creating analytical reports for workers throughout the enterprise.

Star schema: is the simplest style of data mart schema and is the approach most widely used to develop data warehouses and dimensional data marts. The star schema consists of one or more fact tables referencing any number of dimension tables. The star schema is an important special case of the snowflake schema, and is more effective for handling simpler queries.
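As a hedged sketch of what "fact tables referencing dimension tables" looks like in a query, the Spark SQL example below joins an illustrative fact table (fact_sales) to two illustrative dimension tables (dim_date and dim_product); none of these names come from the article, and the tables are assumed to already exist.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

    # A typical star-schema query: the central fact table joins out to its
    # dimension tables via foreign keys, then the measures are aggregated.
    spark.sql("""
        SELECT d.year, p.category, SUM(f.amount) AS revenue
        FROM fact_sales f
        JOIN dim_date d    ON f.date_key = d.date_key
        JOIN dim_product p ON f.product_key = p.product_key
        GROUP BY d.year, p.category
    """).show()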

Amazon Web Services (AWS): is a subsidiary of Amazon.com that provides on-demand cloud computing platforms to individuals, companies and governments, on a paid subscription basis.

AWS Lambda: is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time you consume – there is no charge when your code is not running. With AWS Lambda, you can run code for virtually any type of application or backend service – all with zero administration. AWS Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring, and logging. All you need to do is supply your code in one of the languages that AWS Lambda supports (currently Node.js, Java, C#, Go and Python).
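A minimal Python handler sketch (the event field and the greeting are invented for the example): Lambda invokes the handler with the triggering event, and there are no servers to provision or manage.

    import json

    def lambda_handler(event, context):
        # Lambda calls this function with the event that triggered it; scaling
        # and infrastructure are handled entirely by the service.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }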

Apache Kafka: is an open-source distributed pub-sub messaging solution that was initially developed at LinkedIn. Apache Kafka consists of multiple nodes referred to as brokers (message brokers). Brokers are responsible for accepting messages (leaders) and replicating the messages to the rest of the brokers in the cluster (followers). The distributed nature of Apache Kafka allows the system to scale out and provides high availability (HA) in case of node failure. The membership (leaders and followers) of brokers in a cluster is tracked and administered via Apache ZooKeeper, yet another open-source distributed coordination framework.
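A hedged producer and consumer sketch using the third-party kafka-python package (not part of Kafka itself); the broker address and the topic name events are assumptions made for the example.

    from kafka import KafkaProducer, KafkaConsumer

    # Publish a message: the partition leader accepts it and the
    # followers replicate it across the cluster.
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("events", b"hello kafka")
    producer.flush()

    # Subscribe and read the topic back from the earliest offset.
    consumer = KafkaConsumer("events",
                             bootstrap_servers="localhost:9092",
                             auto_offset_reset="earliest",
                             consumer_timeout_ms=5000)
    for msg in consumer:
        print(msg.topic, msg.offset, msg.value)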

Amazon Kinesis: also a pub-sub messaging solution, is hosted by Amazon Web Services (AWS) and provides a similar set of capabilities to Apache Kafka. Amazon Kinesis is a fully managed service hosted within a given AWS region (e.g. us-east-1) that spans multiple Availability Zones (e.g. us-east-1a). Similar to Apache Kafka, Amazon Kinesis is responsible for accepting end users' messages and replicating them across multiple Availability Zones for high availability and durability. The fully managed nature of Amazon Kinesis eliminates the need for users to maintain infrastructure or to be concerned with details such as replication and other system configuration.
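For comparison, a hedged sketch that publishes one record to a Kinesis stream with boto3, the AWS SDK for Python; the stream name clickstream and the payload are illustrative.

    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    # Kinesis replicates the record across Availability Zones in the region;
    # the partition key determines which shard receives the record.
    kinesis.put_record(
        StreamName="clickstream",
        Data=b'{"event": "page_view"}',
        PartitionKey="user-42")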

