
2013 Cloud Computing Trends, Part 3: Decoding Big Data

The term “big data” gets thrown around a lot by cloud providers and businesses alike, with both struggling to understand what it means and how it affects the bottom line. One important trend for cloud computing through 2013 is decoding big data: figuring out where it fits into IT best practices and how it can improve business effectiveness.

The Right Definition

The simplest way to define big data is as every usable bit of information a company creates—this includes financial data, customer data, employee data, and correspondence, just to name a few. What’s troublesome for companies isn’t generating this data, but rather storing and then making the best use of it.

Cloud computing has helped enable massive storage capacities, while a number of industry front runners like Microsoft, IBM and SAP have developed real-time or near real-time analytics applications designed to structure this data into meaningful patterns, patterns some industry experts see as a natural resource akin to oil. Once processed properly, this cloud computing oil is invaluable.

Building a New Empire

Quoted in a recent ITWire article, senior analyst Laurent Lachal of research firm Ovum says that while “cloud computing has barely reached the adolescence phase and it will take at least another five years for cloud computing to mature into adulthood,” the data generated by this growing cloud can’t be ignored. Social media sites, CRM suites and a host of other cloud services generate massive amounts of data, which in turn require the development of specific applications to make sense of it all. As the “new cloud computing oil in 2013,” big data fuels growth across the IT market but, unlike its real-world counterpart, doesn’t appear to have a finite limit.

The result is an upswing in two separate big data trends through 2013: cloud-based analytics software to interpret data, as mentioned above, and deduplication software and strategies to help prevent cloud sprawl.

Welcome to the Suburbs

Just like a suburban neighborhood, the cloud often gets accused of contributing to sprawl: taking up huge amounts of space that could be used more efficiently. In a neighborhood, the fix might be condos or apartment buildings replacing rows of near-identical houses; in cloud computing, the fix is deduplication, which ensures that what gets sent to cloud storage isn’t a duplicate of something already recorded. Henrik Rosendahl of data management and backup provider Quantum says that deduplication will “increasingly be viewed as an integral component for cost-effective cloud-based backup,” and services that don’t include optimization for deduplication won’t be able to compete effectively.
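To make the idea concrete, here is a minimal sketch of deduplication by content hashing, assuming a hypothetical local index of hashes (stored_hashes), a made-up backup folder ("backup_set") and a placeholder upload step; commercial backup products, including Quantum’s, work at the block level with far more sophisticated techniques.

```python
import hashlib
from pathlib import Path

# Hypothetical in-memory index of content hashes already sent to cloud storage.
# Real deduplication engines hash fixed- or variable-size blocks rather than
# whole files, and persist this index durably.
stored_hashes = set()

def should_upload(path: Path) -> bool:
    """Return True only if this file's content hasn't been stored already."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest in stored_hashes:
        return False           # duplicate content: skip the upload
    stored_hashes.add(digest)  # remember it so later copies are skipped
    return True

# Example: only unique files in a backup set get pushed to cloud storage.
for file in Path("backup_set").glob("**/*"):
    if file.is_file() and should_upload(file):
        print(f"uploading {file}")  # placeholder for the actual cloud upload call
```

Whole-file hashing is the simplest form of the idea; block-level deduplication also catches files that overlap only partially, which is why it dominates in cloud backup services.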

A recent InformationWeek article advises companies to also develop sprawl-combating strategies: finding a custom cloud, setting specific limits on cloud resources and usage, and assigning a manager to watch over data. Without oversight, the combination of big data and the cloud can quickly get out of hand and become another case of sprawl rather than the solution.

Expect big data services to perform well in 2013, especially those with a focus on real-time analysis and deduplication. Big data may well be the oil that fuels cloud computing, but how well it’s processed significantly affects both quality and performance.


Doug Bonderud is a freelance writer, cloud proponent, business technology analyst and a contributor on the Dataprise website, a New York cloud service provider.
