The Historical Development of Big Data (technologies and techniques)
The History and Timeline of Big Data.
Big data has been in existence since the 1600s; its story runs from the first programmable computers of the 1940s through the internet, Hadoop, IoT, AI and more.
(Image: the first programmable electronic computer, as seen above.)
Big data is a collection of information that is mined for business value, and it has revolved around the modern business environment in recent years. As I said in my previous post, it is made up of structured, unstructured and semi-structured data, and it can be described with the first three of the 5Vs: volume, velocity and variety.
The Foundational History of Big Data.
The history that led to today's advanced big data analysis begins with a foundational period in which clever people started seeing the value of statistics and making sense of data.
In 1666 John Graunt introduced statistical data analysis when he recorded the first collection of public health data, tracking the death rate of the bubonic plague in London, England.
In 1865 Richard Millar Devens introduced the term "business intelligence": the process of analysing data and using it to deliver actionable information. Because such data is used to make profit, this practice foreshadowed the data-protection concerns that regulations like today's GDPR address.
In 1884 Herman Hollerith invented the punch-card tabulating machine, marking the beginning of data processing. The machine was used during the U.S. census of 1890, and in 1911 Hollerith founded the Computing-Tabulating-Recording Company, known today as IBM.
In 1926 Nikola Tesla predicted that the world would one day have access to wireless devices that would create more data than anyone expected. Tesla managed to predict our modern smartphones and other handheld devices, and how wireless technology would increase data collection: "When wireless is perfectly applied, the whole earth will be converted into a huge brain, which in fact it is. We shall be able to communicate with one another instantly, irrespective of distance."
In 1928 Fritz Pfleumer invented a way to store information on tape. His process for putting metal stripes on magnetic papers led him to create magnetic tape, which formed the foundation for video cassettes, movie reels and more.
In 1943 the U.K. created one of the first data processing machines to decipher Nazi codes during WWII. The Colossus, as it was called, performed Boolean and counting operations to analyse large volumes of data.
In 1959 Arthur Samuel, a programmer at IBM and pioneer of artificial intelligence, coined the term machine learning (ML).
In 1965 the U.S. government built its first data processing centre; the buildings were used to store millions of tax returns and fingerprints on magnetic tape.
In 1969 the Advanced Research Projects Agency Network (ARPANET) created the first wide area network, which included distributed control and the TCP/IP protocols; this formed the foundation of today's internet.
The Internet Age and Today's Big Data.
As computers and other smart devices started sharing information at ever greater rates thanks to the internet, the next stage in the history of big data was born.
Between 1989 and 1990, Tim Berners-Lee and Robert Cailliau invented the World Wide Web and developed HTML, URLs and HTTP while working at CERN, ushering in the internet age with widespread, easy access to data.
In 1996 digital data storage became more cost-effective than storing information on paper for the first time, as reported by R.J.T. Morris and B.J. Truskowski in their 2003 IBM Systems Journal paper, "The Evolution of Storage Systems."
In 1998 Carlo Strozzi developed NoSQL, an open source relational database. The name was later adopted for databases that store and retrieve data modelled differently from the traditional tabular methods found in relational databases.
In 2000 Hal R. Varian and Peter Lyman published the first edition of the influential study "How Much Information?", which attempted to quantify the amount of digital information available in the world to date.
Big Data in the 21st century.
Big data as we know it finally arrived, and it brought with it a great change whose impact cannot be overestimated. Everyone, and everything, is affected.
In 2001 Doug Laney of analyst firm Gartner coined the 3Vs (volume, variety and velocity). The Vs encapsulated the definition of big data and ushered in a new period in which big data could be viewed as a dominant feature of the 21st century. Additional Vs, such as veracity, value and variability, have since been added to the list.
In 2005 computer scientists Doug Cutting and Mike Cafarella created Apache Hadoop, the open source framework used to store and process large data sets, developed with a team of engineers spun off from Yahoo. Then in 2006 Amazon Web Services (AWS) started offering web-based computing infrastructure services, now known as cloud computing. Today, AWS dominates the cloud services industry with roughly one-third of the global market share.
In 2008 the world's CPUs processed over 9.57 zettabytes (9.57 trillion gigabytes) of data, roughly 12 gigabytes per person. Global production of new information hit an estimated 14.7 exabytes.
In 2009 Gartner reported business intelligence as the top priority for CIOs: as companies faced economic volatility and uncertainty during the Great Recession, squeezing value out of data became paramount. In 2011 McKinsey reported that by 2018 the U.S. would face a shortage of analytics talent, lacking between 140,000 and 190,000 people with deep analytical skills and a further 1.5 million analysts and managers with the ability to make accurate data-driven decisions.
In 2011 Facebook also launched the Open Compute Project to share specifications for energy-efficient data centres. The initiative's goal was to deliver a 38% increase in energy efficiency at a 24% lower cost.
In 2012 the Obama administration announced the Big Data Research and Development Initiative with a $200 million commitment, citing a need to improve the ability to extract valuable insights from data, accelerate the growth of STEM (science, technology, engineering and mathematics), enhance national security and transform learning. The acronym has since become STEAM, with an A added for the arts.
In 2014, for the first time, more mobile devices than desktop computers accessed the internet in the U.S., and the rest of the world followed suit two years later. In 2016 it was reported that 90% of the world's data had been created in the previous two years alone, and IBM reported that 2.5 quintillion bytes of data were created every day (that's 18 zeroes).
In 2017 IDC forecast that the big data analytics market would reach $203 billion by 2020. Allied Market Research reported that the big data and business analytics market hit $193.14 billion in 2019 and estimated that it would grow to $420.98 billion by 2027, at a compound annual growth rate of 10.9%.
The question is: where does big data go from here?
Growth of data

The growth of data, often referred to as data proliferation or the data explosion, is driven by factors such as technological advancement, the digitalisation of processes, increased connectivity and the widespread adoption of internet-connected devices. It is characterised by an exponential increase in the volume, velocity, variety and veracity of data.

These measures of data growth present both opportunities and challenges for organisations seeking to harness the power of data for insights, innovation and competitive advantage. Effective management, processing and analysis of big data require a combination of technology, expertise and organisational capabilities.

The world of data growth and its impact: the total amount of data created, captured, copied and consumed globally has been rising rapidly. In 2020, the world witnessed 64.2 zettabytes of data, and over the next five years (up to 2025), global data creation...
The Characteristics of big data analysis (including visualisations)

Big data analysis is characterised by several key features that distinguish it from traditional data analytics. These characteristics reflect the challenges and opportunities posed by the vast volume, variety, velocity and complexity of big data. Here are some key characteristics:

1. Volume: the sheer amount of data generated and stored. Big data analysis deals with extremely large volumes of data, often ranging from terabytes to petabytes and beyond; traditional data analysis techniques may struggle to handle such massive datasets efficiently.

2. Variety: big data comes in diverse formats and types, including structured data (e.g., databases), semi-structured data (e.g., JSON, XML) and unstructured data (e.g., text, images, videos). Big data analysis must be able to handle this variety and extract insights from multiple data sources.

3. Velocity: big data i...
What is Big Data?

Big data can be defined as a large, ever-growing collection of structured, unstructured and semi-structured data; these datasets are too large for a regular hard drive to hold. As technology grows, more data from sources such as connectivity, mobility, the Internet of Things (IoT) and artificial intelligence (AI) is collected, and the data keeps growing. More big data tools are being created to help companies collect and process all this data at the required speed and scale.

The 3Vs of Big Data

Big data can also be described using the 3Vs (volume, velocity and variety), first defined by Gartner in 2001.

Volume: the high volume of data available to be collected and processed from sources and devices on a daily basis.

Velocity: the speed at which data is generated; much of this big data is processed in real time, which requires high-speed systems.

Variety: big data ...
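To make the variety characteristic concrete, here is a minimal sketch (the sample records, field names and sensor labels are invented for illustration) showing how the three data types behave differently: structured data has a fixed schema, semi-structured data is self-describing but flexible, and unstructured data forces the analysis to impose its own structure.

```python
import csv
import io
import json

# Structured data: rows and columns with a fixed schema (e.g. a CSV export).
structured = io.StringIO("device,reading\nsensor-1,42\nsensor-2,17\n")
rows = list(csv.DictReader(structured))

# Semi-structured data: self-describing tags, but no rigid schema (e.g. JSON).
semi_structured = json.loads('{"device": "sensor-3", "tags": ["iot", "edge"]}')

# Unstructured data: free text with no inherent schema; even a simple
# word count means imposing structure that the data itself lacks.
unstructured = "Big data keeps growing as connected devices multiply."
word_count = len(unstructured.split())

print(len(rows), semi_structured["device"], word_count)
```

The same analysis pipeline would need a different ingestion step for each of the three shapes, which is exactly why variety is called out as a defining challenge of big data.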