Cost based Model for Big Data Processing with Hadoop Architecture

Keywords

big data
Hadoop
cloud computing
MapReduce

How to Cite

Mayank Bhushan, & Sumit Kumar Yadav. (2014). Cost based Model for Big Data Processing with Hadoop Architecture. Global Journal of Computer Science and Technology, 14(C2), 13–17. Retrieved from https://gjcst.com/index.php/gjcst/article/view/1242

Abstract

With the fast pace of growth in technology, we have ever more options for building better and more optimized systems. Handling huge amounts of data requires scalable resources, and moving that data to the point of computation takes a measurable amount of time. Hadoop addresses this through its distributed file system, in which huge amounts of data are stored in a distributed manner for computation. Data is saved in blocks across many racks, with fault tolerance provided by keeping at least three copies of each block. The MapReduce framework handles all computation and produces the result, with the JobTracker and TaskTracker coordinating jobs over both current and historical data. This paper calculates the cost of such processing.
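For readers unfamiliar with the programming model the abstract refers to, the following is a minimal, self-contained sketch of a Hadoop MapReduce job in Java (the standard word-count example, not the authors' cost model, which is developed in the paper itself). In an MRv1 deployment of the kind the abstract describes, the JobTracker would schedule this job's map and reduce tasks on TaskTrackers running on the nodes that hold the replicated input blocks.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: runs on the nodes storing the input blocks,
  // emitting (word, 1) for every token in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: receives all counts for one word and sums them.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation cuts shuffle cost
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}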
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright (c) 2014 Authors and Global Journals Private Limited