Volume 11 Supplement 12
Hybrid cloud and cluster computing paradigms for life science applications
- Judy Qiu1,2 (corresponding author)
- Jaliya Ekanayake†1,2
- Thilina Gunarathne†1,2
- Jong Youl Choi†1,2
- Seung-Hee Bae†1,2
- Hui Li†1,2
- Bingjing Zhang†1,2
- Tak-Lon Wu†1,2
- Yang Ruan†1,2
- Saliya Ekanayake†1,2
- Adam Hughes†1,2
- Geoffrey Fox†1,2
© Qiu et al; licensee BioMed Central Ltd. 2010
Published: 21 December 2010
Clouds and MapReduce have proven to be broadly useful approaches to scientific computing, especially for parallel, data-intensive applications. However, they have limited applicability in some areas, such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of Twister, an open source Iterative MapReduce system.
Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications show encouraging performance and usability in several important non-iterative cases. These cases are linked to MPI applications for the final stages of data analysis. Further, we have released the open source Twister Iterative MapReduce system and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life science applications.
The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life science applications.
We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce, and Twister in these different environments.
Clouds are the largest computer centers ever constructed, and so they have the capacity to be important to large-scale science problems as well as those at small scale.
Clouds exploit these economies of scale and so can be expected to be a cost-effective approach to computing. Their architecture explicitly addresses the important issue of fault tolerance.
Clouds are commercially supported, so one can expect reasonably robust software without the sustainability difficulties seen in the academic software systems critical to much current cyberinfrastructure.
There are three major cloud vendors (Amazon, Google, and Microsoft) and many other infrastructure and software cloud technology vendors, including Eucalyptus Systems, which spun off from UC Santa Barbara HPC research. This competition should ensure that clouds develop in a healthy, innovative fashion. Further, attention is already being given to cloud standards.
A growing number of academic and science cloud systems support users through NSF programs for Google/IBM and Microsoft Azure systems. Within NSF OCI, FutureGrid offers a cloud testbed, and Magellan is a major DoE experimental cloud system. The EU Framework 7 project VENUS-C is just starting, with an emphasis on Azure.
Clouds offer attractive "on-demand" elastic and interactive computing.
The centralized computing model of clouds runs counter to the principle of "bringing the computing to the data", and moving data to a commercial cloud facility may be slow and expensive.
There are many security, legal, and privacy issues, often mimicking those of the Internet, that are especially problematic in areas such as health informatics.
The virtualized networking currently used by the virtual machines (VMs) in today's commercial clouds, together with jitter from complex operating system functions, increases synchronization/communication costs. This is especially serious in large-scale parallel computing and leads to significant overheads in many MPI applications [13–15]. Indeed, the usual (and attractive) fault tolerance model for clouds runs counter to the tight synchronization needed in most MPI applications. Specialized VMs and operating systems can give excellent MPI performance, but we consider commodity approaches here. Amazon has just announced Cluster Compute instances in this area.
Private clouds do not currently offer the rich platform features seen in commercial clouds.
Some of these issues can be addressed with customized (private) clouds and enhanced bandwidth from research systems like TeraGrid to commercial cloud networks. However, it seems likely that clouds will not supplant traditional approaches for very large-scale parallel (MPI) jobs in the near future. Thus we consider a hybrid model with jobs running on classic HPC systems, clouds, or both, as workflows can link HPC and cloud systems. Commercial clouds support "massively parallel" or "many task" applications, but only those that are loosely coupled and thus insensitive to higher synchronization costs. We focus on the MapReduce programming model, which can be implemented on any cluster using the open source Hadoop software for Linux or the Microsoft Dryad system [20, 21] for Windows. MapReduce is currently available on Amazon systems, and we have developed a prototype MapReduce for Azure.
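The MapReduce programming model referred to above can be illustrated with the canonical word-count example: map tasks emit key-value pairs, the framework groups pairs by key (the "shuffle"), and reduce tasks aggregate each group. The sketch below is a minimal single-process simulation of this pattern, not code from Hadoop, Dryad, or the systems benchmarked in this paper.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word occurrence.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the values for each key.
    return {key: sum(values) for key, values in groups.items()}

docs = ["cloud cluster cloud", "cluster mpi"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'cloud': 2, 'cluster': 2, 'mpi': 1}
```

In a real MapReduce runtime the map and reduce tasks run on many nodes and the shuffle moves data across the network; this locality-aware, loosely coupled structure is what makes the model a good fit for the "massively parallel" applications discussed above.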
Metagenomics - a data-intensive application vignette
One can study in [22, 25, 26] which applications run well on MapReduce and relate this to an old classification of Fox. One finds that Pleasingly Parallel applications and a subset of what was called "Loosely Synchronous" applications run well on MapReduce. However, current MapReduce addresses problems with only a single (or a "few") MapReduce iterations, whereas there is a large class of data-parallel applications that involve many iterations and are not suitable for basic MapReduce. Such iterative algorithms include linear algebra and many data mining algorithms, and here we introduce the open source Twister to address these problems. Twister [25, 29] supports applications needing either a few or many iterations, using a subset of MPI: the reduction and broadcast operations, but not the latency-sensitive MPI point-to-point operations.
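The iterative pattern Twister targets can be seen in k-means clustering: a driver loop broadcasts the current centroids to all map tasks, maps assign points to their nearest centroid, and a reduction combines the assignments into new centroids. The sketch below illustrates that pattern in a single process; it is a hypothetical illustration of the structure, not Twister's actual API.

```python
def nearest(point, centroids):
    # Map-side work: find the index of the closest broadcast centroid.
    return min(range(len(centroids)), key=lambda i: (point - centroids[i]) ** 2)

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):  # driver loop: one MapReduce round per iteration
        # "Map": assign each point to its nearest centroid (centroids
        # play the role of broadcast data shared by all map tasks).
        pairs = [(nearest(p, centroids), p) for p in points]
        # "Reduce": average the points assigned to each centroid.
        new = []
        for i in range(len(centroids)):
            members = [p for c, p in pairs if c == i]
            new.append(sum(members) / len(members) if members else centroids[i])
        centroids = new
    return centroids

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
final = kmeans(points, [0.0, 10.0])
print(final)  # converges near [1.0, 9.0]
```

Because only the small centroid vector crosses iteration boundaries, reduction and broadcast suffice; no latency-sensitive point-to-point messaging is needed, which is exactly the MPI subset the text says Twister adopts.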
Twister supports iterative computations of the type needed in clustering and MDS. This programming paradigm is attractive because Twister supports all phases of the pipeline in Figure 1 with performance that is better than or comparable to basic MapReduce and, on large enough problems, similar to MPI for the iterative cases where basic MapReduce is inadequate. The current Twister system is just a prototype, and further research will focus on scalability and fault tolerance. The key idea is to combine the fault tolerance and flexibility of MapReduce with the performance of MPI.
We showed that "doubly data parallel" (all-pairs) applications, such as pairwise distance calculation using the Smith-Waterman-Gotoh algorithm, can be implemented with Hadoop, Dryad, and MPI. Further, Figure 5 shows a classic MapReduce application already studied in Figure 2 and demonstrates that Twister performs well in this limit, although its iterative extensions are not needed. We use the conventional efficiency defined as T(1)/(pT(p)), where T(p) is the runtime on p cores. The results shown in Figure 5 were obtained using 744 cores (31 24-core nodes). Twister outperforms Hadoop because of its faster data communication mechanism and the lower overhead of its static task scheduling. Moreover, in Hadoop each map/reduce task is executed as a separate process, whereas Twister uses a hybrid approach in which the map/reduce tasks assigned to a given daemon are executed within one Java Virtual Machine (JVM). The lower efficiency of DryadLINQ shown in Figure 5 was mainly due to an inefficient task scheduling mechanism used in the initial academic release. We also investigated Twister PageRank performance using a ClueWeb data set collected in January 2009. We built the adjacency matrix from this data set and tested the PageRank application using 32 8-core nodes. Figure 6 shows that Twister performs much better than Hadoop on this algorithm, which has the iterative structure for which Twister was designed.
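The iterative structure of PageRank mentioned above maps naturally onto one MapReduce round per power iteration: map distributes each page's rank along its out-links, and reduce sums the contributions arriving at each page. The toy sketch below shows that structure on a three-page graph; it is illustrative only, not the ClueWeb experiment's code, and the tiny link graph is invented for the example.

```python
def pagerank(links, iterations=50, d=0.85):
    # links: adjacency list {page: [pages it links to]}; d: damping factor.
    n = len(links)
    rank = {page: 1.0 / n for page in links}
    for _ in range(iterations):           # each iteration = one MapReduce round
        contrib = {page: 0.0 for page in links}
        # "Map": each page emits rank/outdegree along every out-link.
        for page, outs in links.items():
            for target in outs:
                contrib[target] += rank[page] / len(outs)
        # "Reduce": sum contributions per page and apply damping.
        rank = {page: (1 - d) / n + d * contrib[page] for page in links}
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
print(ranks)  # ranks sum to ~1; "c", with two in-links, ranks highest
```

In a basic MapReduce implementation each of these iterations pays full job startup and data reload costs, which is the overhead Twister's long-lived tasks and cached static data avoid.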
We have shown that MapReduce gives good performance for several applications and is comparable in performance to, but easier to use than (due to its high-level support of parallelism), conventional master-worker approaches, which are automated in Azure with its concept of roles. However, many data mining steps cannot use MapReduce efficiently, and we propose a hybrid cloud-cluster architecture to link MPI and MapReduce components. We introduced the MapReduce extension Twister [25, 29] to allow a uniform programming paradigm across all processing steps in a pipeline typified by Figure 1.
We used three major computational infrastructures: Azure, Amazon, and FutureGrid. FutureGrid offers a flexible environment for our rigorous benchmarking of virtual machine and "bare-metal" (non-VM) approaches, and an early prototype of FutureGrid software was used in our initial work. We used four distinct parallel computing paradigms: the master-worker model, MPI, MapReduce, and Twister.
List of abbreviations
- MPI: Message Passing Interface
- NSF: National Science Foundation
- UC Santa Barbara HPC Research: University of California Santa Barbara High Performance Computing Research
- OCI: Office of Cyberinfrastructure
- DoE: Department of Energy
- HPC: High Performance Computing
- BLAST: Basic Local Alignment Search Tool
- JVM: Java Virtual Machine
We thank Microsoft for their technical support. This work was made possible by the computing use grant provided by Amazon Web Services titled "Proof of concepts linking FutureGrid users to AWS". This work is partially funded by the Microsoft "CRMC" grant and NIH Grant Number RC2HG005806-02. This document was developed with support from the National Science Foundation (NSF) under Grant No. 0910812 to Indiana University for "FutureGrid: An Experimental, High-Performance Grid Test-bed." Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.
This article has been published as part of BMC Bioinformatics Volume 11 Supplement 12, 2010: Proceedings of the 11th Annual Bioinformatics Open Source Conference (BOSC) 2010. The full contents of the supplement are available online at http://www.biomedcentral.com/1471-2105/11?issue=S12.
- Armbrust M, Fox A, Griffith R, Joseph AD, Katz R, Konwinski A, Lee G, Patterson D, Rabkin A, Stoica I, Zaharia M: Above the Clouds: A Berkeley View of Cloud Computing. Technical report. [http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf]
- Press Release: Gartner's 2009 Hype Cycle Special Report Evaluates Maturity of 1,650 Technologies.[http://www.gartner.com/it/page.jsp?id=1124212]
- Cloud Computing Forum & Workshop. NIST Information Technology Laboratory, Washington DC; 2010. [http://www.nist.gov/itl/cloud.cfm]
- Nimbus Cloud Computing for Science[http://www.nimbusproject.org/]
- OpenNebula Open Source Toolkit for Cloud Computing[http://www.opennebula.org/]
- Sector and Sphere Data Intensive Cloud Computing Platform[http://sector.sourceforge.net/doc.html]
- Eucalyptus Open Source Cloud Software[http://open.eucalyptus.com/]
- FutureGrid Grid Testbed[http://www.futuregrid.org]
- Magellan Cloud for Science[http://magellan.alcf.anl.gov/, http://www.nersc.gov/nusers/systems/magellan/]
- VENUS-C: Virtual multidisciplinary EnviroNments USing Cloud infrastructure. European Framework 7 project starting June 1, 2010.
- Recordings of Presentations, Cloud Futures 2010. Redmond WA; 2010. [http://research.microsoft.com/en-us/events/cloudfutures2010/videos.aspx]
- Lockheed Martin Cyber Security Alliance: Cloud Computing Whitepaper2010. [http://www.lockheedmartin.com/data/assets/isgs/documents/CloudComputingWhitePaper.pdf]
- Walker E: Benchmarking Amazon EC2 for High Performance Scientific Computing. USENIX ;login: 2008, 33(5). [http://www.usenix.org/publications/login/2008-10/openpdfs/walker.pdf]
- Ekanayake J, Qiu XH, Gunarathne T, Beason S, Fox G: High Performance Parallel Computing with Clouds and Cloud Technologies. Book chapter to Cloud Computing and Software Services: Theory and Techniques CRC Press (Taylor and Francis); 2010. [http://grids.ucs.indiana.edu/ptliupages/publications/cloud_handbook_final-with-diagrams.pdf]
- Evangelinos C, Hill CN: Cloud Computing for parallel Scientific HPC Applications: Feasibility of running Coupled Atmosphere-Ocean Climate Models on Amazon's EC2. In CCA08: Cloud Computing and its Applications. Chicago IL, USA; 2008.
- Lange J, Pedretti K, Hudson T, Dinda P, Cui Z, Xia L, Bridges P, Gocke A, Jaconette S, Levenhagen M, Brightwell R: Palacios and Kitten: New High Performance Operating Systems For Scalable Virtualized and Native Supercomputing. In 24th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2010). Atlanta, GA, USA; 2010.
- Fox G: White Paper: FutureGrid Platform FGPlatform: Rationale and Possible Directions).2010. [http://grids.ucs.indiana.edu/ptliupages/publications/FGPlatform.docx]
- Dean J, Ghemawat S: MapReduce: simplified data processing on large clusters. Commun ACM 2008, 51:107-113.
- Open source MapReduce Apache Hadoop[http://hadoop.apache.org/core/]
- Ekanayake J, Gunarathne T, Qiu J, Fox G, Beason S, Choi JY, Ruan Y, Bae SH, Li H: Technical Report: Applicability of DryadLINQ to Scientific Applications.2010. [http://grids.ucs.indiana.edu/ptliupages/publications/DryadReport.pdf]
- Ekanayake J, Balkir A, Gunarathne T, Fox G, Poulain C, Araujo N, Barga R: DryadLINQ for Scientific Analyses. In 5th IEEE International Conference on e-Science. Oxford UK; 2009.
- Fox GC, Qiu XH, Beason S, Choi JY, Rho M, Tang H, Devadasan N, Liu G: Biomedical Case Studies in Data Intensive Computing. In Keynote talk at The 1st International Conference on Cloud Computing (CloudCom 2009) at Beijing Jiaotong University, China, December 1-4, 2009. Edited by: Jaatun M, Zhao G, Rong C. Springer Verlag LNCS 5931 "Cloud Computing"; 2009:2-18.
- Bae SH, Choi JY, Qiu J, Fox G: Dimension Reduction and Visualization of Large High-dimensional Data via Interpolation. In Proceedings of ACM HPDC 2010 conference. Chicago, Illinois; 2010.
- Sammon JW: A nonlinear mapping for data structure analysis. IEEE Trans Computers 1969, C-18:401-409. 10.1109/T-C.1969.222678
- Ekanayake J: Architecture and Performance of Runtime Environments for Data Intensive Scalable Computing. PhD thesis. Indiana University, Bloomington; 2010.
- Fox GC: Algorithms and Application for Grids and Clouds. 22nd ACM Symposium on Parallelism in Algorithms and Architectures 2010. [http://grids.ucs.indiana.edu/ptliupages/presentations/SPAAJune14–10.pptx]
- Fox GC, Williams RD, Messina PC: Parallel Computing Works! Morgan Kaufmann Publishers; 1994. [http://www.old-npac.org/copywrite/pcw/node278.html#SECTION001440000000000000000]
- Chu CT, Kim SK, Lin YA, Yu Y, Bradski G, Ng AY, Olukotun K: Map-Reduce for Machine Learning on Multicore. In NIPS. MIT Press; 2006.
- Twister Home page[http://www.iterativemapreduce.org/]
- Qiu X, Ekanayake J, Beason S, Gunarathne T, Fox G, Barga R, Gannon D: Cloud Technologies for Bioinformatics Applications. Proceedings of the 2nd ACM Workshop on Many-Task Computing on Grids and Supercomputers (SC09) Portland, Oregon; 2009. [http://grids.ucs.indiana.edu/ptliupages/publications/MTAGSOct22–09A.pdf]
- The ClueWeb09 Dataset[http://boston.lti.cs.cmu.edu/Data/clueweb09/]
- Ekanayake J, Li H, Zhang B, Gunarathne T, Bae SH, Qiu J, Fox G: Twister: A Runtime for Iterative MapReduce. In Proceedings of the First International Workshop on MapReduce and its Applications of ACM HPDC 2010 conference. Chicago, Illinois; 2010.
- Gunarathne T, Wu TL, Qiu J, Fox G: Cloud Computing Paradigms for Pleasingly Parallel Biomedical Applications. In Proceedings of Emerging Computational Methods for the Life Sciences Workshop of ACM HPDC 2010 conference. Chicago, Illinois; 2010:20–25.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.