Codemash Thursday 9:15a – A guided tour of the BigData technologies zoo


Speaker: Itamar Syn-Hershko

Big Data is a buzzword. “Big Data is any thing which will crash Excel” (@devopsborat)

Even if you don’t have Big Data, these tools and technologies can still be useful.

Agenda – Data at Rest, Streams, Moving data around

There are a LOT of tools and technologies around big data.

Where are we today:
* Database Schemas
* Unreliable at scale
* Expensive at scale
* Relational mindset
* Data is being moved from storage to compute

Schemas assume structured data, can be hard to set up, and are hard to adapt (lack agility)
The traditional scaling strategy is bigger machines (scaling up), which is more expensive than scaling out across multiple commodity machines

Quote from Grace Hopper (heavily paraphrased):
You can’t grow a larger ox, so you get another ox to move a bigger load.

Hadoop – based on Google File System and MapReduce
Commodity hardware
Created by Doug Cutting and Mike Cafarella. Open sourced under Apache.
The original product was called Nutch in 2002. Became Hadoop in 2006.
HDFS – Hadoop Distributed File System
Basically takes a big file and stores it across a lot of servers. The file is divided into partitions (blocks) and each partition is stored on a different server; essentially sharding. Each partition is also replicated on more than one machine for protection.
There is a NameNode that manages how the data is partitioned and how to reassemble it. Losing the NameNode is a problem, so it needs to have redundancy.
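
Not from the talk, but to make the NameNode/DataNode split concrete: a minimal sketch of reading a file through the HDFS Java API. The NameNode address and file path are made-up placeholders.

```java
import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsCat {
    public static void main(String[] args) throws Exception {
        // Placeholder NameNode address and file path.
        String uri = "hdfs://namenode:8020/data/logs/2015-01-08.log";
        Configuration conf = new Configuration();

        // The client asks the NameNode which DataNodes hold the blocks,
        // then streams the block contents from those DataNodes.
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        try (InputStream in = fs.open(new Path(uri))) {
            IOUtils.copyBytes(in, System.out, 4096, false);
        }
    }
}
```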

More DFS: S3, CephFS, GlusterFS, Lustre

Dedicated file formats: SequenceFile, RCFile, Avro

MapReduce – parallel computations on data, based on functional programming concepts
Map processes documents in some way (take sentences and break them into words, for example), producing tuples.
Reduce takes the tuples and combines them (take each word and add up its counts, for example).

Hadoop does this in Java – you write a Mapper and a Reducer (implementing Hadoop’s interfaces). Then you put the .jar file on Hadoop and it runs the job in place using TaskTrackers. The TaskTrackers are controlled by a JobTracker, which runs the job and spins the work up on the TaskTrackers. A map task runs for each partition of the data; this is how parallelism is achieved.
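
To make the Mapper/Reducer shape concrete, here is a minimal word-count sketch against the org.apache.hadoop.mapreduce API (newer Hadoop versions expose these as abstract base classes rather than interfaces); the input and output paths come from the command line.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map: break each line into words and emit (word, 1) tuples.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce: sum the counts emitted for each word.
    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```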

Hadoop now has a bunch of distributions; Apache, Cloudera, and Hortonworks are the key ones. The distributions beyond plain Apache add other technologies on top (Cloudera adds Impala; Hortonworks adds HCatalog and Tez). Cloud vendors also add features to make managing Hadoop in the cloud easier.

Apache Hive – Runs SQL over HDFS using HiveQL
HiveQL is not exactly SQL, but very similar
Compiles down to MapReduce; later versions can compile down to a DAG instead (Tez). (A small client-side sketch of running HiveQL follows below.)
Think of MapReduce as assembly language; there are various abstractions you can use above it, each with their own advantages (HiveQL is obviously one of them).
Apache Pig is a procedural language for expressing data processing steps that also compiles down to MapReduce (the scripts are called Pig Latin). You can write user-defined functions in your own language (JavaScript, for example) and use them in Pig.
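
Not from the talk: a hedged sketch of what “SQL over HDFS” looks like from the client side, using Hive’s JDBC driver against HiveServer2. The host, port, credentials, and the word_counts table are assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveTopWords {
    public static void main(String[] args) throws Exception {
        // The Hive JDBC driver must be on the classpath.
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Placeholder HiveServer2 host/port/credentials; word_counts is an assumed table.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://hive-server:10000/default", "hive", "");
             Statement stmt = conn.createStatement();
             // HiveQL reads like SQL, but Hive compiles it into MapReduce
             // (or Tez) jobs that scan files sitting in HDFS.
             ResultSet rs = stmt.executeQuery(
                 "SELECT word, COUNT(*) AS cnt FROM word_counts "
                 + "GROUP BY word ORDER BY cnt DESC LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getString("word") + "\t" + rs.getLong("cnt"));
            }
        }
    }
}
```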

Apache HCatalog – comes with the Hortonworks distro
A table/metadata layer that defines another way to look at your data files and figure out which files you want
HBase is another way to store and query data on top of HDFS

Workflow schedulers – Apache Oozie, LinkedIn Azkaban, Spotify Luigi are examples

The bad and the ugly:
* Data is not always local
* Still too much I/O
* Slow to compute
* Hard to make the JobTracker highly available (HA)
* Poor resource utilization (cluster slots are statically split into map slots and reduce slots)
* The NameNode is a single point of failure

YARN and MapReduce 2.0
YARN does cluster resource management. People call it an operating system for data processing. It improves on the Hadoop/MapReduce issues listed above.

Apache Spark
Resilient Distributed Datasets (RDD) – represented as DAG (Directed Acyclic Graph)
Combines data and operations: take data, transform it, perform actions on it.
An RDD is split into partitions so work can be done in parallel as much as possible.
Transformation: map, filter, union, distinct, join, etc.
Actions: take, count, first, reduce, foreach, etc.
Works continuously instead of in batches
Out of the box – Scala and Python
Has integrations with SparkR, Spark SQL, GraphX, and Spark Streaming
Spark runs in clusters – can self-manage, or you can run in YARN or Apache Mesos
The driver program sends work to the cluster manager, and worker nodes do the work. If a worker crashes, processing picks up after the last processed data, so it is somewhat crash tolerant.
Spark has a large ecosystem of its own, similar to Hadoop’s. (A minimal RDD sketch follows below.)
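
Not from the talk, but a minimal sketch of the transformation/action split using Spark’s Java API (the out-of-the-box languages mentioned above, Scala and Python, are more concise). The master setting and HDFS path are placeholders.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkErrorCount {
    public static void main(String[] args) {
        // "local[*]" runs Spark inside this JVM; in a real deployment the master
        // would be a standalone Spark cluster, YARN, or Mesos.
        SparkConf conf = new SparkConf().setAppName("error count").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Transformations are lazy: they only extend the DAG of steps.
        JavaRDD<String> lines = sc.textFile("hdfs://namenode:8020/data/logs/*.log"); // placeholder path
        JavaRDD<String> errors = lines.filter(line -> line.contains("ERROR"));       // transformation
        JavaRDD<String> distinctErrors = errors.distinct();                          // transformation

        // Actions (count, take, first, ...) trigger the actual distributed computation.
        long count = distinctErrors.count();
        System.out.println("distinct error lines: " + count);

        sc.stop();
    }
}
```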

Stream Processing
Iterative batch processing (Deterministic batch operations)

Apache Storm
Handles streams
Takes data from sources (spouts)
Processes it in bolts
You define a topology of spouts and bolts connected together
Runs continuously, not in batches (see the sketch below)
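
Not from the talk: a rough sketch of wiring a spout and a bolt into a topology with Storm’s Java API. The sentence spout is a toy stand-in for a real source (a queue, a log, etc.), and package names vary by Storm version (older releases use backtype.storm instead of org.apache.storm).

```java
import java.util.Map;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class WordTopology {

    // Toy spout: endlessly emits a couple of fixed sentences.
    // A real spout would pull from a queue, a log, Kafka, etc.
    public static class SentenceSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private final String[] sentences = {
            "big data is anything that crashes excel",
            "a guided tour of the big data zoo"
        };
        private int index = 0;

        @Override
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void nextTuple() {
            collector.emit(new Values(sentences[index]));
            index = (index + 1) % sentences.length;
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("sentence"));
        }
    }

    // Bolt: splits each incoming sentence into words and emits them downstream.
    public static class SplitBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            for (String word : input.getStringByField("sentence").split("\\s+")) {
                collector.emit(new Values(word));
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    public static void main(String[] args) throws Exception {
        // Wire spouts and bolts into a topology.
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("sentences", new SentenceSpout());
        builder.setBolt("split", new SplitBolt()).shuffleGrouping("sentences");

        // Submit to an in-process cluster; the topology then runs continuously
        // until it is killed, rather than finishing like a batch job.
        new LocalCluster().submitTopology("word-topology", new Config(), builder.createTopology());
    }
}
```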

Apache Samza
Similar to Storm
Handles each message as it arrives
Guarantees ordering

Data Pipes – how do we stream into Hadoop?
RabbitMQ, Cassandra, Redis, Kafka, etc.
Apache Flume

ZooKeeper
Configuration management and synchronization for distributed systems

Since we are talking about distributed systems: read Aphyr’s “Call Me Maybe” blog series (https://aphyr.com/tags/jepsen)

ELK – Elasticsearch, Logstash, and Kibana – to work with log streams

Apache Mahout – Machine learning framework
