Wednesday, July 22, 2015

Review of “Monitoring Hadoop” by Gurmukh Singh

This book was recently published, in April 2015, and it covers Nagios, Ganglia, Hadoop monitoring, and monitoring best practices.
The first part is rightfully devoted to Nagios, which is covered in some depth: installation, verification, and configuration. The book strikes the right balance: it does not repeat everything in the Nagios manual, but gives you enough information to install Nagios and prepare it to monitor specific Hadoop daemons, ports, and hardware.
The same goes for Ganglia: it is covered in sufficient detail for one to be able to install and run it, with enough attention to Hadoop specifics.
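As a minimal illustration of the kind of check involved (not an example from the book), here is a sketch of a Nagios-style TCP port probe in Python. The exit codes (0 = OK, 2 = CRITICAL) follow the standard Nagios plugin convention; the host and port you would point it at (for example, the NameNode RPC port) are deployment-specific assumptions.

```python
#!/usr/bin/env python
"""Minimal Nagios-style TCP port check, usable as a custom plugin."""
import socket
import sys

OK, CRITICAL = 0, 2  # standard Nagios plugin exit codes

def check_port(host, port, timeout=2.0):
    """Return OK if host:port accepts a TCP connection, CRITICAL otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return OK
    except OSError:
        return CRITICAL

if __name__ == "__main__" and len(sys.argv) >= 3:
    host, port = sys.argv[1], int(sys.argv[2])
    status = check_port(host, port)
    print("%s: %s:%d" % ("OK" if status == OK else "CRITICAL", host, port))
    sys.exit(status)
```

In a real deployment you would more likely wire the stock `check_tcp` plugin into Nagios via NRPE, but a custom script like this shows how little is needed to watch a Hadoop daemon's port.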
What I did not find in the book, and what could be useful...

Review of “Hadoop in Action,” second edition

Four years have passed since the first publication, and as Russians say, “A lot of water has passed (under the bridge) since then,” so let’s look at what’s new in this edition.

Tuesday, July 7, 2015

The power of text analytics at DARPA/Memex

One of the things we are doing in the DARPA Memex program is text analytics. One of its outcomes is an open source project called MemexGATE.

By itself, GATE stands for General Architecture for Text Engineering, and it is a mature and widely used tool. It is up to you to create something useful with GATE, and MemexGATE is our first step: an application configured to understand court documents. It will detect people, dates, and places mentioned in the documents, and many more characteristics that take you beyond plain keyword searches.

To achieve this, GATE combines processing pipelines (such as a sentence splitter, a language-specific word tokenizer, a part-of-speech tagger, etc.) with gazetteers. Now, what is a gazetteer? It is a list of people, places, etc. that can occur in your documents. MemexGATE includes scripts that collect all US judges, for example, so that they can be detected when they appear in a document.
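To make the idea concrete, here is a toy sketch of gazetteer-style matching in Python. This is not GATE's actual implementation (GATE keeps gazetteers in `.lst`/`.def` files and matches them efficiently); the names and labels below are made up for illustration.

```python
def build_gazetteer(entries):
    """Map each lowercased phrase to its annotation label."""
    return {name.lower(): label for name, label in entries}

def annotate(tokens, gazetteer, max_len=3):
    """Greedy longest-match of token n-grams against the gazetteer.

    Returns (start, end, label) spans over the token list.
    """
    annotations = []
    i = 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n]).lower()
            if phrase in gazetteer:
                annotations.append((i, i + n, gazetteer[phrase]))
                i += n
                break
        else:
            i += 1  # no match starting here; move on
    return annotations
```

For example, with a gazetteer containing `("John Roberts", "Person")` and `("Albany", "Location")`, annotating the tokens of "Judge John Roberts ruled in Albany" yields the spans `(1, 3, "Person")` and `(5, 6, "Location")`.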

But MemexGATE does more: it is scalable. Building on the Behemoth framework, it can parallelize processing on a Hadoop cluster, thus putting no limit on the size of the corpus. MemexGATE was designed and implemented by the Jet Propulsion Lab team, and the project committer is Lewis McGibbney.

The picture shown above gives an example of a processed document (from the NY Court of Appeals), with specific finds color-coded. In this way, we process more than 100,000 documents. Why is this useful for us at Memex? Because we are trying to find and parse court documents related to labor trafficking, so that we can analyze them and better understand the indicators of labor trafficking.

It is very exciting to work on the Memex program. Our team is called "Hyperion Gray" and was recently featured in Forbes.

What's next? One of the plans is to add document understanding to FreeEed, the open source eDiscovery tool. Instead of just doing keyword searches through the documents, lawyers will be able, with the addition of text analytics, to make more sense of them: detect people, dates, organizations, etc. This will, in turn, help build a picture of the case in an automated way.

Disclaimer: we are not official spokespeople for Memex.

Friday, July 3, 2015

Big Data Cartoons - Summer of Big Data

Since nothing much happens in Big Data in the summer (just kidding!), our artist took to drawing the breakfasts that an artist needs. Here are some examples.

Once this page is visited by more than a million people, it will itself qualify as a "Big Data" page.

Wednesday, June 10, 2015

Joe Witt of Onyara presented Apache NiFi

Joe Witt and the Onyara team came to present Apache NiFi at the Houston Hadoop Meetup. The NiFi project is the result of eight years of development at the NSA and was open sourced in November 2014.

The project is for automating enterprise dataflows, and its salient use cases are:
  • Remote sensor delivery
  • Inter-site/global distribution
  • Intra-site distribution
  • "Big Data" ingest
  • Data Processing (enrichment, filtering, sanitization)
For the rest, in the words of Shakespeare

"Let Lion, Moonshine, Wall, and lovers twain

At large discourse, while here they do remain."

Meaning, in our case: here are the slides, kindly provided by Joe.

Oh, and there WAS a live demo, so those who missed it - missed it.

As always, pizza was provided by Elephant Scale LLC, Big Data training and consulting.

Monday, June 8, 2015

Big Data Cartoon - Summer Fun

Summer is the time to have fun and to get some rest! While moms and dads are presumably coding away at some new Big Data app, their kids can go to summer camp. So did our Big Data cartoonist, who is now working as a summer camp art director. (These "cartoons" are really the large-size decorations there.)

But you can see the same themes, albeit hidden: the tiger is no doubt the new elephant, and the magicians are the software engineers.

Thursday, June 4, 2015

Review of "Apache Flume" by Steve Hoffman (Packt)

This is the second edition of the Apache Flume book, and it covers the latest Flume version, 1.5.2. The author works at Orbitz, so he can draw on a lot of practical Big Data experience.

The intro chapter takes you through the history, versions, requirements, installation, and a sample run of Flume. The author points out useful undocumented options and takes you to the cutting edge of submitting new requests to the Flume team (using his own request as an example).

That should be enough, but here is the justification for the book and all the additional architectural options in Flume: real life will give you data collection troubles you never thought of before. There will be memory and storage limitations on any node where you install Flume, and that is why your real-world architectures will be multi-tiered, with parts of the system being down for significant lengths of time. This is where more knowledge will be required.

Channels and sinks get their own individual chapters. You will learn about file rotation, data compression, and the serialization mechanisms (such as Avro) used in Flume. The descriptions of load balancing and failover will help you create robust data collection.
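As a sketch of what such a failover setup looks like in Flume's properties-file configuration (the agent and sink names here are invented; the property keys follow the standard Flume sink-group format):

```properties
# Two sinks in one sink group with a failover processor:
# traffic goes to the higher-priority sink (k1) and fails over to k2.
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 10
a1.sinkgroups.g1.processor.priority.k2 = 5
# how long (ms) a failed sink is blacklisted before being retried
a1.sinkgroups.g1.processor.maxpenalty = 10000
```

Swapping the processor type to `load_balance` turns the same sink group into a load-balancing configuration.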

Flume can collect data from a variety of sources, and chapter five describes them, with a lot of in-the-know information, best practices, and potential gotchas.

Interceptors (and in particular the Morphline interceptor) are less known but very powerful libraries for improving your data flows in Flume. Morphlines are part of the Kite SDK.

Chapter seven, “Putting it all together,” leads you through a practical example of collecting data and storing it in Elasticsearch under specific Service Level Agreements (SLAs), and then setting up Kibana for viewing the results.

The chapter on monitoring is useful because monitoring, while important, is not yet complete in Flume, and the more up-to-date information you can get on it, the better, to avoid flying in the dark. Imagine someone telling you that you have been losing data for a month, and that parts of your system were not working, unbeknownst to you. To avoid this, use monitoring!

The last chapter gives advice on deploying Flume in multiple data centers and on the “evils” of time zones.

All in all, a must-read for anyone needing data collection skills in Big Data and Flume.