Data Management Plan

Use of data for the targeted advertisement of products and services that will promote health and well-being.

Project Title

Author Name

Student Number

Overview

A scenario in which data is used for targeted product advertisement to promote health and well-being.

Assumptions

  1. During the process, the Master Data Management (MDM) teams assume that all MDM software platforms used for the advertisement fully support both the organization and product entity types. This capability is known as Multi-Domain MDM.
  2. The product to be advertised to promote health and well-being is a health supplement.
  3. Automated predictive modeling and visualization will assist the company’s promoters in identifying patterns in the data that will inform decision-making in real time.
  4. The marketing team advertising the product understands data analysis and how to interpret its results.
  5. Obsolete data management systems and reporting have the disadvantage of denying the marketing team insights that are likely to affect sales, customer relations, and media spend.
  6. The marketing team can estimate the cost of holding on to obsolete systems as the volume of accessible data grows.
  7. The marketing team will use a management system that gives a single, real-time view of all the data.

 

The 5Vs: Volume, Velocity, Variety, Veracity, Value

This framework addresses the managerial and technical aspects of massive data and its emerging issues, opportunities, promises, and challenges. The model has evolved from its early stages, when it used only 3Vs, namely Volume, Variety, and Velocity; two more Vs, Veracity and Value, were added later (Chen et al., 2014). These five dimensions, namely Volume, Velocity, Variety, Veracity, and Value, are referred to as the big data 5Vs and provide a useful approach to addressing significant data issues. Big data is defined as a holistic approach to managing, processing, and analyzing the 5Vs in order to create actionable insights for sustained value delivery, performance measurement, and the establishment of competitive advantage (Boyd & Crawford, 2012). The 5Vs are discussed as follows:

  1. Velocity: This refers to how frequently data is created and delivered. Velocity can be either a challenge or an opportunity. While there are advantages to the fast creation and production of structured and semi-structured data, data speed also has a downside: any data generated in an instant must be authentically obtained and verified, stored, gleaned for insight, processed, updated, monitored, and maintained at the same speed at which it was obtained if it is to add utility and provide a competitive advantage. This places additional pressure on internal and external technology systems and on the responsible team.
  2. Value: One of the main intentions of big data management and handling is to deliver value to the user of the data (Hagen, Khan, & Honavar, 2014); without value, big data serves no purpose. It is recognized that value is relative and, in some cases, time-bound, meaning that the value of a portion of data changes with time. Data by itself has no value; it gains value at the point where it yields insight into something, and that insight becomes the basis of competitive advantage or goal achievement.
  3. Veracity (verification): Invalid data is unreliable. Data can become unreliable when it is corrupt, untimely, analyzed using a weak system, or misinterpreted. Data that is hard to locate also does not help decision-making, since searching through an excess of it wastes time. The 1:10:100 rule is used by many experts to explain the cost of failure versus the cost of prevention. As the speed, or velocity, at which data is generated increases, the challenge of verifying it also increases. Big data can never fix bad-data problems; it can only override their effects.
  4. Validity: This is the integrity of the data from which the value of big data is derived. If data lacks integrity, it is not fit for its intended purpose, and the insight drawn from it may be used wrongly or rejected by the user. For data to be valid, the minimum requirement is that relevant data remains valid from the beginning to the end of the process, flowing through all the necessary steps. Data interpretation must therefore be grounded in facts and logic.
  5. Visibility: Organizations have no shortage of data; instead, they are frustrated by the fact that valuable data is hidden or buried in storage systems and application software that are difficult to access. Businesses that can surface this data gain a strong market position, because they hold valuable data that other businesses cannot access and can use it to challenge their market. Visibility therefore means data that an authorized individual can access, locate, secure, and process in a timely and reliable manner (Walker et al., 2013). Keeping relevant data visible keeps it alive and adds to its value and usefulness; even useful data is of no value if it is invisible to its users. External and internal data must be visible in a timely and accurate manner, and to the right user.

Data Lifecycle

  • Data acquisition, including data sources and how data will be acquired

Data acquisition is the process of collecting data and identifying its sources, together with the common considerations for acquiring it. Data sources include primary data, which is collected directly from the field, and secondary data, which already exists in records. Data can be collected through interviews, surveys, focus groups, documents, and records (a short sketch of combining such sources is given at the end of this section).

  • business owners of the data

These are the groups or functional institutions that keep specific data collected in the course of their work. For example, the Red Cross collects data during calamities to establish a disaster's impact on the population.

  • The analysis software specification determines the data format.
  • Integration

This is the combination of business processes and technical processes used to answer a particularly important question.

  • Reproducibility

This is the ability of the results of analyzed data to be replicated and used in a different task without requiring any new materials.

Plans for data publishing and re-use, and plans for data retirement.

Data publishing is the public disclosure of data. Data re-use is the use of data for more than one purpose, or for a purpose different from the one originally intended. Data retirement is the decommissioning or shutdown of inactive data, mainly for cost-saving.
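As an illustration of the acquisition step described at the start of this section, the short Python sketch below combines primary (survey) and secondary (records) data. The file names and column names are hypothetical placeholders, not part of the plan itself.

```python
# A minimal sketch of data acquisition, assuming primary data from a
# customer survey and secondary data from existing sales records.
# File names and column names are hypothetical placeholders.
import pandas as pd

# Primary data: collected directly from the field through a survey.
survey = pd.read_csv("survey_responses.csv")   # e.g. customer_id, health_score

# Secondary data: already existing in the company's sales records.
records = pd.read_csv("sales_records.csv")     # e.g. customer_id, units_sold

# Combine both sources on a common customer identifier for later analysis.
combined = survey.merge(records, on="customer_id", how="inner")
combined.to_csv("acquired_data.csv", index=False)
```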

Data Analysis

Data from the potential customers of the health product to be advertised will be analyzed using statistical tools to establish the relationship between customers' use of the product and their health outcomes. Data on product sales and the health changes associated with the product will therefore be entered into a statistical package, which will generate results to be interpreted. A regression model will be used to predict the future performance of the product.
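As an illustration, a minimal sketch of such a regression analysis in Python is shown below. The file name and the columns "units_sold" and "health_score" are hypothetical placeholders, not the actual dataset.

```python
# A minimal sketch of the planned regression analysis (an assumption of how
# it might look, not the final implementation). The file name and the columns
# "units_sold" and "health_score" are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

# Load the combined sales and health data collected from customers.
data = pd.read_csv("sales_health.csv")

# Fit a simple linear regression: health outcomes explained by product use.
X = sm.add_constant(data["units_sold"])
model = sm.OLS(data["health_score"], X).fit()
print(model.summary())

# Use the fitted model to predict health outcomes at future sales levels.
future = sm.add_constant(pd.Series([100, 200, 300], name="units_sold"))
print(model.predict(future))
```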

Data Quality, Security, and Privacy

Data quality describes the condition of data in terms of accuracy, consistency, completeness, reliability, and how up to date it is. Data integrity covers the accuracy, consistency, and completeness of the data, as well as its safety in compliance with the regulations of the governing bodies. It is maintained through a collection of processes, standards, and rules implemented during the design phase.
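For illustration, the sketch below shows how such quality checks might be automated in Python. The column names and the plausible-range rule are assumptions, not requirements of the plan.

```python
# A minimal sketch of basic data-quality checks (completeness, consistency,
# accuracy), assuming a pandas DataFrame with hypothetical columns.
import pandas as pd

def check_quality(df: pd.DataFrame) -> dict:
    """Return simple quality indicators for the customer dataset."""
    return {
        # Completeness: share of missing values per column.
        "missing_ratio": df.isna().mean().to_dict(),
        # Consistency: duplicate records should not exist.
        "duplicate_rows": int(df.duplicated().sum()),
        # Accuracy: values outside a plausible range (hypothetical rule).
        "invalid_health_scores": int((~df["health_score"].between(0, 100)).sum()),
    }

# Example usage with a small, made-up sample.
sample = pd.DataFrame({"customer_id": [1, 2, 2], "health_score": [85, 120, 90]})
print(check_quality(sample))
```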

A data dictionary is made up of metadata. It contains important information such as what the database contains, who the allowed users are, and where the dictionary is located. Only the database administrators handle this dictionary, and this is how it provides data safety: users can access data only when required and only for the exact purpose for which that data is intended.
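The sketch below illustrates, in Python, what a minimal data dictionary entry of this kind might look like. All names and values are hypothetical examples, not the actual dictionary.

```python
# A minimal sketch of a data dictionary entry as described above: what the
# database contains, who the allowed users are, and where the data lives.
# All names and values are hypothetical examples.
data_dictionary = {
    "customer_health": {
        "description": "Sales and self-reported health data for customers",
        "location": "analytics_db.customer_health",   # hypothetical location
        "allowed_users": ["marketing_analyst", "database_admin"],
        "columns": {
            "customer_id": "Unique customer identifier (integer)",
            "units_sold": "Number of supplement units purchased (integer)",
            "health_score": "Self-reported health score, 0-100 (integer)",
        },
    },
}

# A database administrator could use the entry to answer access questions.
entry = data_dictionary["customer_health"]
print("marketing_analyst" in entry["allowed_users"])  # True
```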

  • Appropriate and inappropriate uses of the data

Appropriate use of data is the use of data to achieve the main objective. Inappropriate use is the use of data for purposes other than its original one, which is unethical, for example using the data to leak information to a competitor of the business.

  • privacy, including how data will be safeguarded

Privacy and data safeguarding can be achieved through the use of security codes and strong passwords (a brief sketch of these measures appears at the end of this section).

  • security, including who has access to data

This is the restriction of data access so that individuals who are not the intended users of the data cannot reach it. This can be achieved by having a data dictionary in place (see the sketch at the end of this section).

  • Ethical considerations
  1. Proper document keeping
  2. Reasons for accessing data
  3. Data permission
  4. Data access procedure
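As a rough illustration of the password and access-restriction points above, the Python sketch below hashes a password and checks a user against an allowed list. The user names, roles, and salt handling are simplified assumptions, not the actual security design.

```python
# A minimal sketch of the privacy and security measures described above:
# strong password storage and restricting data access to intended users.
# User names, roles, and salt handling are simplified assumptions.
import hashlib
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a random salt using PBKDF2 (never store plaintext)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# Hypothetical list of users allowed to access the customer health data.
AUTHORIZED_USERS = {"marketing_analyst", "database_admin"}

def can_access_data(username: str) -> bool:
    """Only intended users of the data are granted access."""
    return username in AUTHORIZED_USERS

salt, stored_hash = hash_password("a-strong-passphrase")
print(can_access_data("marketing_analyst"))  # True
print(can_access_data("unknown_visitor"))    # False
```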

Bibliography

“Big Data Analysis.” SpringerLink. Accessed October 17, 2020. https://link.springer.com/chapter/10.1007/978-3-319-06245-7_5.

“Big Data and the Business Case.” Big Data Analytics, 2015, 21-28. doi:10.1002/9781119205005.ch3.

“Critical Questions for Big Data.” Taylor & Francis. Last modified May 25, 2012. https://doi.org/10.1080/1369118X.2012.678878.

“Louisiana Clinical Data Research Network: Establishing an Infrastructure for Efficient Conduct of Clinical Research.” OUP Academic. Last modified May 12, 2014. https://doi.org/10.1136/amiajnl-2014-002740.
