Monday 25 November 2013

Hadoop Basics

What is Hadoop?

Hadoop is a paradigm-shifting technology that lets you do things you could not do before – namely compile and analyze vast stores of data that your business has collected. “What would you want to analyze?” you may ask. How about customer click and/or buying patterns? How about buying recommendations? How about personalized ad targeting, or more efficient use of marketing dollars?

From a business perspective, Hadoop is often used to build deeper relationships with external customers, providing them with valuable features like recommendations, fraud detection, and social graph analysis. In-house, Hadoop is used for log analysis, data mining, image processing, extract-transform-load (ETL), and network monitoring – anywhere you’d want to process gigabytes, terabytes, or petabytes of data.


Pillars of Hadoop

HDFS exists to split, distribute, and manage chunks of the overall data set, which could be a single file or a directory full of files. These chunks of data are pre-loaded onto the worker nodes, which later process them in the MapReduce phase. By keeping the data local at processing time, HDFS avoids the headache and inefficiency of shuffling data back and forth across the network.

In the MapReduce phase, each worker node spins up one or more tasks, which can be either Map or Reduce. Map tasks are assigned based on data locality where possible: a Map task is sent to the worker node where its data resides. Reduce tasks (which are optional) then typically aggregate the output of the dozens, hundreds, or thousands of Map tasks and produce the final output.

The Map and Reduce programs are where your specific logic lies, and seasoned programmers will immediately recognize Map as a common built-in function in many languages, for example, map(function, iterable) in Python, or array_map(callback, array) in PHP. All map does is run a user-defined function (your logic) on every element of a given iterable. For example, we could define a function squareMe, which does nothing but return the square of a number. We could then pass a list of numbers to a map call, telling it to run squareMe on each. So an input of (2, 3, 4, 5) would return (4, 9, 16, 25), and our call (in Python) would look like map(squareMe, [2, 3, 4, 5]).
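The map example above runs as-is in Python (square_me here is just squareMe under a Python-style name):

```python
def square_me(n):
    """Return the square of a single number."""
    return n * n

# map() applies square_me to every element; list() materializes the result.
result = list(map(square_me, [2, 3, 4, 5]))
print(result)  # [4, 9, 16, 25]
```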

Hadoop will parse the data in HDFS into user-defined keys and values, and each key and value will then be passed to your Mapper code. In the case of image processing, each value may be the binary contents of your image file, and your Mapper may simply run a user-defined convertToPdf function against each file. In this case, you wouldn’t even need a Reducer, as the Mappers would simply write out the PDF file to some datastore (like HDFS or S3). This is what the New York Times did when converting their archives.
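A Mapper-only job like that PDF conversion can be sketched in the Hadoop Streaming style, where each line on stdin is one record and there is no Reducer. Note that convert_to_pdf and the file names below are illustrative placeholders, not the actual New York Times code:

```python
import sys

def convert_to_pdf(path):
    """Hypothetical stand-in: a real job would render the image at `path`
    to PDF and write it to a datastore like HDFS or S3. Here we simply
    compute the output file name."""
    return path.rsplit(".", 1)[0] + ".pdf"

def run_mapper(lines):
    """Mapper-only job: one output record per input record, no Reducer."""
    return [convert_to_pdf(line.strip()) for line in lines if line.strip()]

if __name__ == "__main__":
    # Hadoop Streaming feeds records to the mapper on stdin.
    for out in run_mapper(sys.stdin):
        print(out)
```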

Consider, however, if you wished to count the occurrences of a list of “good/bad” keywords in all customer chat sessions, Twitter feeds, public Facebook posts, and/or e-mails in order to gauge customer satisfaction. Your good list might include happy, appreciate, “great job”, and awesome, while your bad list might include unhappy, angry, mad, and horrible, and your total data set of all chat sessions and e-mails may run to hundreds of GB. In this case, each Mapper would work on only a subset of that overall data, and the Reducer would be used to compile the final count, summing up the outputs of all the Map tasks.
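The keyword-count flow can be simulated in plain Python, with the mapper and reducer as ordinary functions. The keyword lists are illustrative, and for simplicity only single-word keywords are matched, so a phrase like “great job” would need extra handling:

```python
from collections import Counter

GOOD = {"happy", "appreciate", "awesome"}
BAD = {"unhappy", "angry", "mad", "horrible"}

def mapper(chunk):
    """Emit (keyword, 1) for each good/bad keyword found in one chunk of text."""
    for word in chunk.lower().split():
        word = word.strip(".,!?")
        if word in GOOD or word in BAD:
            yield (word, 1)

def reducer(pairs):
    """Sum the counts per keyword across all mapper outputs."""
    totals = Counter()
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

# Each chunk stands in for the subset of data one Mapper would see.
chunks = ["I am so happy, great job!", "This is horrible, I am angry and unhappy."]
pairs = [kv for chunk in chunks for kv in mapper(chunk)]
print(reducer(pairs))  # {'happy': 1, 'horrible': 1, 'angry': 1, 'unhappy': 1}
```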

At its core, Hadoop is really that simple. It takes care of all the underlying complexity, making sure that each record is processed, that the overall job runs quickly, and that failure of any individual task (or hardware/network failure) is handled gracefully. You simply bring your Map (and optionally Reduce) logic, and Hadoop processes every record in your dataset with that logic.


Why Hadoop?
The fact that Hadoop can do all of the above is not the compelling argument for its use. Other technologies have been around for a long while that can and do address everything we’ve listed so far. What makes Hadoop shine, however, is that it performs these tasks in minutes or hours, for little or no cost, versus the days or weeks and substantial costs (licensing, product, specialized hardware) of previous solutions.

Hadoop does this by abstracting away all of the difficult work in analyzing large data sets, performing its work on commodity hardware, and scaling linearly: add twice as many worker nodes, and your processing will generally complete about twice as fast. With data sets growing larger and larger, Hadoop has become the default solution businesses turn to when they need fast, reliable processing of large, growing data sets at little cost.

Where to Start Learning?

Here are five steps to start learning Hadoop:

  1.     Download and install Ubuntu Linux server (32-bit)
  2.     Read about Hadoop (what Hadoop is, the Hadoop architecture, MapReduce, and HDFS)
  3.     Start by installing Hadoop on a single node
  4.     Run some examples (like wordcount) to see how it works
  5.     Move on to a multi-node cluster


References: wiki.apache.org/hadoop/

Monday 18 November 2013

Performance Engineering

Performance engineering, within systems engineering, encompasses the set of roles, skills, activities, practices, tools, and deliverables applied at every phase of the systems development life cycle to ensure that a solution will be designed, implemented, and operationally supported to meet the non-functional performance requirements defined for it. As the connection between application success and business success continues to gain recognition, particularly in the mobile space, application performance engineering has taken on a preventative and perfective role within the software development life cycle. As such, the term is typically used to describe the processes, people, and technologies required to effectively test non-functional requirements, ensure adherence to service levels, and optimize application performance prior to deployment. Adherence to the non-functional requirements is also validated post-deployment by monitoring the production system.
Objectives
  • Increase business revenue by ensuring the system can process transactions within the requisite timeframe
  • Eliminate the need to scrap and write off a system development effort because it failed its performance objectives
  • Eliminate late system deployment due to performance issues
  • Eliminate avoidable system rework due to performance issues
  • Eliminate avoidable system tuning efforts
  • Avoid additional and unnecessary hardware acquisition costs
  • Reduce increased software maintenance costs due to performance problems in production
  • Reduce increased software maintenance costs due to software impacted by ad hoc performance fixes
  • Reduce additional operational overhead for handling system issues due to performance problems.
Approach

Tuesday 10 September 2013

Evolving Customer Performance Requirement

Customer performance requirements are often ambiguous and unrealistic. This became clear from my recent experience with one of our customers. The key challenges were:

1. Performance requirements are very high level, with accepted metrics and workload not defined.
2. The tools and procedures for performance evaluation are not defined.

Let’s take each of these and see how we can involve the customer in a continuous engagement and avoid the last-minute rush!

The requirements were defined as a desired number of users and an acceptable response time. The initial gap we found was around the distribution of users and the associated workload: was this a realistic workload, or were we stressing the system in the wrong way and committing to customer needs we could not achieve? We raised these issues with the customer, and though we did not get complete details, we did get two things: the workload scenarios and the user distribution. On seeing the scenarios we immediately knew, based on our experience, that they were not realistic and that achieving the response-time criteria with them was not possible, but we went ahead anyway so we could gather additional data, such as hits/sec and throughput, and compare it against market standards for similar tools. After multiple rounds of testing we did various performance tunings but could not achieve the desired expectation. We did, however, have enough data to show the customer that their targets were not realistic.

The next challenge was the evaluation procedure. Think times between transactions were 7-10 seconds, user ramp-up was not clear, peak load duration was not defined, content checks were performed at every step, and run-time settings were left completely at their defaults; all of these made the tests fail at much lower concurrency. After some discussion, one key difference turned out to be the content checks, which were adding to the LoadRunner response times because the check statement was executed for every vuser.

I suggest adding these lessons to your own playbook to better manage customer requirements!