The documentation and expression of data has changed greatly in recent years with the rise of the internet and social networking among the public. Information is no longer found and devised solely by smaller groups of people who analyse and curate it; instead it is overly abundant, unstructured, and produced by the masses.
This data constructed by the masses is collected and analysed. As time continues, the world's population grows, and more people use the internet and social media through sites such as Facebook and Twitter, where the number of users continually expands. As a result, the amount of data produced by the public also vastly increases.
The increase in online data collection generates both positive and negative effects. On social networking and micro-blogging sites like Twitter and Facebook, people choose to make certain data and information available in the public domain. This data can be beneficial to businesses and helps give voice to the opinions of individuals rather than only those of organisations. Moreover, because large amounts of data are collected almost instantly, it enables better understanding and assistance in larger-scale events such as humanitarian aid and disaster relief.
Yet as the amount of data increases in size and variety, problems arise in the way data is stored: the links between pieces of data become complex, changing how they can be managed and stored. Problems also arise in analysis, where data becomes generalised and loses its specificity to personal situations. Data collated from the internet is not always truly representative, due to false searches and searches made with no real intention. For example, apps and plugins have been produced in an attempt to protect the privacy of searches by flooding and confusing the system with periodically and randomly generated false searches. In addition, much of the information collected in the online space is by-product data that never had a specific intention, and is hence harder to classify with the current systems in place.
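The flooding technique these privacy plugins use can be illustrated with a minimal sketch. The decoy vocabulary and function names below are hypothetical, not taken from any real plugin; an actual tool would draw decoy terms from live sources and submit the queries to a search engine in the background.

```python
import random
import time

# Hypothetical decoy vocabulary; a real plugin would draw terms
# from changing sources (e.g. news feeds) to look plausible.
DECOY_TERMS = ["weather", "recipes", "football", "gardening", "history", "travel"]

def generate_decoy_query(n_terms=2):
    """Build one random, meaningless search query from the decoy vocabulary."""
    return " ".join(random.sample(DECOY_TERMS, n_terms))

def flood_searches(count=3, min_delay=0.0, max_delay=0.1):
    """Issue `count` decoy queries at random intervals.

    Here the queries are only collected and returned; a real plugin
    would send each one to a search engine, mixing false searches
    in with the user's genuine ones to obscure their profile.
    """
    issued = []
    for _ in range(count):
        time.sleep(random.uniform(min_delay, max_delay))
        issued.append(generate_decoy_query())
    return issued
```

Because the decoy queries are indistinguishable from genuine ones on the server side, the collected search data no longer reflects the user's real interests, which is precisely why such data loses representativeness.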
The sheer extent of online data can also lend itself to rumour propagation and the spread of misinformation to the public. Rather than being "raw" and "objective", the data can become subjective in the way it is examined and interpreted. This leads to problems in the way solutions are produced: they no longer focus on the true cause of an issue but instead follow the subjective data, creating solutions that do not attend to the real cause.
This could be mitigated, however, if extra information were provided to the public (a basic form of metadata) about the origins of data posted online:
How the piece of information was modified as it was propagated through social media and how an owner of the piece of information is connected to the transmission of the statement – provides additional context to the piece of information. (Barbier et al. 2013, p. 9)
Such metadata would enable the public to better determine how false, or how reliable, the posted information really is.
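The kind of provenance metadata described above can be pictured as a simple record attached to each statement. The structure below is a minimal sketch with illustrative field names, not a scheme taken from Barbier et al.: it tracks the original author, the statement as first posted, and each modification made as it propagates.

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    """Basic metadata for a piece of information shared on social media.

    Field names are illustrative only: who first posted the statement,
    what it originally said, and how it changed as it was passed on.
    """
    original_author: str
    original_text: str
    modifications: list = field(default_factory=list)  # (editor, new_text) pairs

    def record_modification(self, editor, new_text):
        """Log a change made as the statement propagates through the network."""
        self.modifications.append((editor, new_text))

    def current_text(self):
        """The statement as it reads after all recorded modifications."""
        return self.modifications[-1][1] if self.modifications else self.original_text
```

For example, a reader who sees that "Bridge has collapsed!" began life as "Bridge closed for repairs" posted by a named transport authority, and was altered twice along the way, is far better placed to judge the statement's reliability than one who sees only the final text.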
Barbier, G., Feng, Z., Gundecha, P. & Liu, H. 2013, Provenance data in social media, Morgan & Claypool, US.
Raley, R. 2013, 'Dataveillance and countervailance', in L. Gitelman (ed.), "Raw data" is an oxymoron, Cambridge University Press, Cambridge, MA, pp. 121-45.
Bloomberg Business 2015, Twitter is about mining data for insights: Chris Moody, viewed 27 August 2015, <http://www.bloomberg.com/news/videos/2015-01-19/twitter-is-about-mining-data-for-insights-chris-moody>.
Ford, P. 2013, The hidden technology that makes Twitter huge, Bloomberg Business, viewed 27 August 2015, <http://www.bloomberg.com/bw/articles/2013-11-07/the-hidden-technology-that-makes-twitter-huge>.
Statista 2015, Number of monthly active Facebook users worldwide as of 2nd quarter 2015, viewed 27 August 2015, <http://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/>.