Big Data Analysis using Hadoop Technologies
Author(s):
Swathi.V, UG Student, Sri Krishna Arts and Science College; Aiswariya.M, UG Student, Sri Krishna Arts and Science College; Vivekavarthini.K, UG Student, Sri Krishna Arts and Science College; Brindha.K, UG Student, Sri Krishna Arts and Science College; Nanthini.S, UG Student, Sri Krishna Arts and Science College
Keywords:
Big Data, Volume, Variety, Velocity, Value, Veracity, Hadoop, HDFS, MapReduce, Hadoop Ecosystem
Abstract:
Big data is a term that describes data sets, containing both structured and unstructured data, that are so large and complex that they are difficult to process with traditional data processing applications. Hadoop can be used to process this enormous amount of data. Hadoop is an open-source software project that enables the distributed processing of large data sets across clusters of commodity servers. It is designed to scale from a single server to thousands of machines, with a very high degree of fault tolerance. The technologies used by big data applications to handle massive data include Hadoop, MapReduce, Apache Hive, NoSQL and HPCC. These technologies handle data at scales ranging through KB, MB, GB, TB, PB, EB, ZB, YB and BB.
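The MapReduce model named in the abstract can be illustrated with the classic word-count job. The sketch below is a minimal in-process simulation of the map, shuffle and reduce phases, not the Hadoop API itself: in a real cluster the framework distributes the mappers and reducers across nodes (e.g. via Hadoop Streaming), and all function names here are illustrative.

```python
# Minimal in-process sketch of the MapReduce word-count job.
# A real Hadoop deployment runs these phases distributed across a cluster;
# the function names here are illustrative, not part of the Hadoop API.
from collections import defaultdict

def map_phase(line):
    # Mapper: emit an intermediate (word, 1) pair for every word in a line.
    for word in line.split():
        yield (word.lower(), 1)

def reduce_phase(word, counts):
    # Reducer: sum all counts emitted for a single word.
    return (word, sum(counts))

def word_count(lines):
    # Shuffle: group intermediate pairs by key, as the framework would
    # before handing each key's values to one reducer.
    groups = defaultdict(list)
    for line in lines:
        for word, count in map_phase(line):
            groups[word].append(count)
    return dict(reduce_phase(w, c) for w, c in groups.items())

result = word_count(["big data needs big tools", "hadoop handles big data"])
print(result["big"])   # 3
print(result["data"])  # 2
```

Splitting the job into a stateless mapper and a per-key reducer is what lets Hadoop parallelize it: any node can run any mapper on any input split, and failed tasks can simply be re-run elsewhere, which is the source of the fault tolerance the abstract mentions.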
Other Details:
Manuscript Id : IJSTEV5I1036
Published in : Volume 5, Issue 1
Publication Date : 01/08/2018
Page(s) : 97-101