Why you should launch a data management program – A (data) message to the C-suite

How do you create quality information from data silos?

If you are an executive in a rapidly growing organization whose stated mission is “to be the most successful business in the world”, you will be aware that rapid growth brings a lot of excitement, and that this very growth changes the way the company operates.

Instinctive understanding of the business becomes more challenging as more operating sites and businesses are added to the group. Executives can no longer rely solely on their knowledge of individual sites, and as the business grows, they rely more on reports compiled from the different businesses to keep a grip on the business metrics.

Any significant increase in new data brings new dynamics. Data silos multiply, making it more difficult to aggregate data across departments; across sites; across regions; and across national borders. Recently acquired business units will inevitably lack a common language. Some common phrases may even have a different meaning in different parts of the organization. This lack of a common understanding makes it more likely that business opportunities will be missed.

Good quality master data is the foundation for making sound business decisions, and also for identifying potential risks to the business. Why is master data important? Master data is the data that identifies and describes individuals; organizations; locations; goods; services; processes; rules; and regulations, so master data runs right through the business.

All departments in the business need structured master data. What is often misunderstood is that the customer relationship management (CRM) system, the enterprise resource planning (ERP) system, the payroll system, and the finance and accounting systems are not going to fix poor master data: software performance always suffers because of poor quality data, but the software itself will not cure that problem.

The key is to recognise what master data is not: master data is not an information technology (IT) function; master data is a business function. Improving master data is not a project. Managing master data is a program and a function that should be at the heart of the business process.

Foundational master data that is well structured and of good quality is a necessity if your business is going to process commercial data, transaction reporting, and business activity efficiently and effectively. The reality is that well-structured, good quality master data is also the most effective way to connect multiple systems and processes, both internally and externally.

Good quality data is the pathway to good quality information. With insight from good quality information, business leaders can identify – and no longer need to accept – the basic business inefficiencies that they know exist but cannot pin down. For a manufacturing company making 5% profit on sales, every $50,000 in operational savings is equivalent to $1,000,000 in sales. It follows, then, that if you can identify and implement $50,000 of efficiency savings as a result of better quality information, you have solved a million-dollar problem.
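To make that arithmetic explicit, here is a minimal sketch (the 5% margin and the figures are illustrative, not a prescribed model):

```python
# Illustrative only: the sales revenue needed to generate the same profit
# as a given cost saving, assuming a flat 5% net profit margin on sales.
def equivalent_sales(savings: float, profit_margin: float = 0.05) -> float:
    """Return the sales revenue whose profit equals the given cost saving."""
    return savings / profit_margin

print(equivalent_sales(50_000))  # 1000000.0 -> a $50,000 saving equals $1,000,000 of sales
```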

If you, as an executive, are unsure of the integrity and accuracy of the very data that is the foundation for the reports you rely on in your organization, launching a master data management program is the best course of action you can take.

Contact us

Give us a call and find out how we can help you.

+44 (0)23 9387 7599

info@koiosmasterdata.com

About the author

Peter Eales is a subject matter expert on MRO (maintenance, repair, and operations) material management and industrial data quality. Peter is an experienced consultant, trainer, writer, and speaker on these subjects. Peter is recognised by BSI and ISO as an expert in the subject of industrial data. Peter is a member of ISO/TC 184/SC 4/WG 13, the ISO standards development committee that develops standards for industrial data and industrial interfaces, including ISO 8000, ISO 29002, and ISO 22745. Peter is the project leader for edition 2 of ISO 29002, due to be published in late 2020. Peter is also a committee member of ISO/TC 184/WG 6, which published the standard for asset intensive industry interoperability, ISO 18101.

Peter has previously held positions as the global technical authority for materials management at a global EPC, and as the global subject matter expert for master data at a major oil and gas owner/operator. Peter is currently chief executive of MRO Insyte, and chairman of KOIOS Master Data.

KOIOS Master Data is a world-leading cloud MDM solution enabling ISO 8000 compliant data exchange

Non Covid-19 deaths by occupation – a closer look

ONS data raises important questions about non COVID-19 deaths by occupation

Why have non COVID-19 related deaths in the hairdressing industry risen by 30%?

Following a freedom of information request, on 25th January 2021 the Office for National Statistics (ONS) released the dataset: Coronavirus (COVID-19) related deaths by occupation, England and Wales. [1]

The summary accompanying the dataset concluded that “those working in close proximity to others continue to have higher COVID-19 death rates when compared with the rest of the working age population.” [2]

This data is clearly vital in understanding the impact of lockdown legislation on COVID-19 deaths, and it informs the growing conjecture about the disease’s disproportionate impact on workers with low or irregular incomes.

Without doubt, we are fortunate in this country that the ONS provides such valuable insight to enable us to make sense of what is happening. However, the summary drew no conclusions about the increases in non COVID-19 related deaths by occupation, prompting the author to take a closer look. That closer look highlighted a worrying increase in non COVID-19 deaths in one particular occupation – hairdressing.

Delving deeper into the deaths by occupation data

The ONS dataset provides context by setting deaths involving COVID-19 against the average “expected” deaths over the same period for the past five years. [3]

The main media commentary following the release of the dataset focused on the fact that more men than women of working age had COVID-19 recorded on their death certificates. Overall, the excess deaths for women in the period covered by the dataset were 1,891. The deaths of women attributed to COVID-19 were 1,742, so there is no statistically significant difference. However, that total figure hides a range of outcomes across the 369 occupations listed in the dataset. When you look at the dataset in more detail, some interesting numbers emerge.

In Table 1 at the end of this article (adapted from table 9 of the ONS report), I have added two extra columns: Non COVID-19 excess mortality 2020; and Percentage change Non COVID-19 excess mortality 2020.

At the “top” of the table, now sorted by percentage of non COVID-19 deaths, are hairdressers with an increase in Non COVID-19 excess mortality of 30%. But what accounts for such a marked increase and what are the leading causes of these excess deaths?
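For transparency, here is a minimal sketch of how those two added columns can be derived from the three figures the ONS publishes per occupation (the function and parameter names are mine, not the ONS column headings):

```python
# Illustrative derivation of the two columns added in Table 1, using the three
# figures the ONS supplies per occupation (names are placeholders, not ONS headings).
def non_covid_excess_columns(deaths_involving_covid: int,
                             all_cause_deaths: int,
                             average_all_cause_2015_to_2019: float) -> tuple[float, float]:
    """Return (non COVID-19 excess mortality 2020, percentage change) for one occupation."""
    non_covid_deaths_2020 = all_cause_deaths - deaths_involving_covid
    excess = non_covid_deaths_2020 - average_all_cause_2015_to_2019
    percentage_change = 100 * excess / average_all_cause_2015_to_2019
    return excess, percentage_change

# Example with round numbers close to the hairdresser figures discussed below.
print(non_covid_excess_columns(20, 398, 290))  # (88, ~30.3)
```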

Delving deeper still – some concerning increases in several causes of death of hairdressers

Following a request for more detailed information on the mortality rate of the “top” group – hairdressers – the ONS responded very promptly on the 12th February, publishing a new dataset breaking down the leading causes of death. [4]

The total number of deaths, for men and women, was 398, an increase of 37% compared with the average number of deaths over the same reporting period in the past five years. COVID-19 accounts for 20 of those deaths.

Table 2 at the end of this article (adapted from table 1 of the second ONS report) lists the top ten causes of death (out of 63), showing dramatic increases in suicide and accidental poisoning among hairdressers, as well as a startling rise in deaths from breast cancer and strokes.
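A quick sketch of the arithmetic behind those figures, and one reading of the “less than 7%” point raised in the questions below (the five-year average is inferred from the stated 37% increase, so treat it as approximate):

```python
# Approximate reconstruction of the headline figures for hairdressers.
total_deaths_2020 = 398        # deaths registered 9 March to 28 December 2020
covid_deaths = 20              # deaths involving COVID-19
stated_increase = 0.37         # 37% rise on the five-year average

five_year_average = total_deaths_2020 / (1 + stated_increase)     # ~290
excess_deaths = total_deaths_2020 - five_year_average             # ~108
covid_points_of_increase = 100 * covid_deaths / five_year_average # ~6.9 of the 37 percentage points

print(round(five_year_average), round(excess_deaths), round(covid_points_of_increase, 1))
```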

Questions we should ask next

This paper was specifically written to draw attention to a trend overlooked by most commentary on the original dataset release, namely a steep rise in non COVID-19 related deaths in certain professions, and in particular hairdressers.

As more datasets are released covering longer periods of time, new trends in the data will become apparent. It is still too early to draw definite conclusions, and whilst we must always be careful to remember that correlation does not imply causation, these datasets make it imperative to ask more questions, such as:

  • Why is it that, during this pandemic, COVID-19 was responsible for less than 7% of the 37% increase in deaths among hairdressers?
  • What is driving the increase in nine of the top ten causes of death among hairdressers?
  • Breast cancer deaths among hairdressers are up by 44%. Is this figure an outlier and, if not, what is driving the increase?
  • What is behind the doubling of deaths from strokes among hairdressers?
  • Deaths from suicide and accidental poisoning are up nearly 50% and, together, are more than double the deaths from COVID-19. Why?

Increased deaths across this many categories in a single occupation cannot simply be dismissed as an outlier, or a one-off event. There will almost certainly be an underlying cause.

Many hairdressers are self-employed and have been unable to work for long periods since March 2020. A lot of money was spent by these businesses to make their salons safe when they reopened after the first lockdown.

There has been a lot of recent commentary in the media about how many excess deaths may have been caused as a result of the lockdown policies. Is this an early indicator of that effect? Certainly, the rises in accidental poisoning and suicides in this – generally low paid – occupation are extremely worrying.

The original dataset, published in January, lacked the context of the occupation size and the median income of each occupation. Obtaining these additional data elements may tell us more about the anecdotal evidence that it is the poor, or those with irregular incomes, who are suffering disproportionately from the lockdown. Perhaps the ONS will add these data fields to the next release.

Hopefully, the NHBF, the trade body for hairdressers, will also study this dataset in more detail and work with their membership to reduce some of the tragic, avoidable deaths in these categories.

Acknowledgement: Open data and the Office for National Statistics

We are very fortunate to have the ONS and an open data policy in the UK. I would like to thank the ONS for their prompt response to my request, and the great work they do in regularly publishing datasets that allow us to examine for ourselves what is really happening. This open data policy allows anyone to delve beyond the headlines we see every day.

Tables

Table 1: Deaths for women by occupation involving ten or more instances of COVID-19

Table 2: Top 10 causes of death among hairdressers

References

[1] https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/causesofdeath/bulletins/coronavirusCOVID19relateddeathsbyoccupationenglandandwales/deathsregisteredbetween9marchand28december2020

[2] “Today’s analysis shows that jobs with regular exposure to COVID-19 and those working in close proximity to others continue to have higher COVID-19 death rates when compared with the rest of the working age population. Men continue to have higher rates of death than women, making up nearly two thirds of these deaths.”

Ben Humberstone, ONS, Head of Health Analysis and Life Events, 25th January 2021

[3] The dataset covers deaths involving COVID-19 and all causes by sex (those aged 20 to 64 years), England and Wales, for deaths registered between 9th March and 28th December 2020.

Deaths are defined using the International Classification of Diseases, 10th Revision (ICD-10). Deaths involving COVID-19 include those with an underlying cause, or any mention, of ICD-10 codes:

  • U07.1 (COVID-19, virus identified) or
  • U07.2 (COVID-19, virus not identified).

All causes of death is the total number of deaths registered during the same time period, including those that involved COVID-19.

Table 9 in the dataset breaks the figures down by occupation. Occupation is defined using the Standard Occupation Classification (SOC 2010). The table lists 369 occupations. Table 9 breaks the dataset down further by male and female.
The three columns of figures supplied in the dataset are titled:

  • Deaths involving COVID-19;
  • All causes of death;
  • Average all-cause mortality (2015 to 2019)

[4] https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/causesofdeath/adhocs/12888numberofdeathsamonghairdressersandbarbersthoseaged20to64yearsbyleadingcausesofdeathsdeathsregisteredbetween9marchand28december2020englandandwales

About the author

Peter Eales is chair of KOIOS Master Data, a provider of cloud-based data quality software. KOIOS also provides data quality consultancy and training services based on International Standards for data quality. Peter is an internationally recognised expert in the field of characteristic data exchange, and industrial data quality. Peter is a member of a number of International Organization for Standardization (ISO) working groups drafting International Standards in these areas.  

Peter has a daughter who is a self-employed hairdresser.

Contact us

+44 (0)23 9387 7599

info@koiosmasterdata.com

Data quality: How do you quantify yours?

Being able to measure the quality of your data is vital to the success of any data management programme. Here, Peter Eales, Chairman of KOIOS Master Data, explores how you can define what data quality means to your organization, and how you can quantify the quality of your dataset.

In the business world today, it is important to provide evidence of what we do, so, let me pose this question to you: how do you currently quantify the quality of your data?

If you have recently undertaken an outsourced data cleansing project, it is quite likely that you underestimated the internal resource that it takes to check this data when you are preparing to onboard it. Whether that data is presented to you in the form of a load file, or viewed in the data cleansing software the outsourced party used, you are faced with thousands of records to check the quality of. How did you do that? Did you start by using statistical sampling? Did you randomly check some records in each category? Either way, what were you checking for? Were you just scanning to see if it looked right?

The answer to these questions lies in understanding what, in your organization, constitutes good quality data, and then understanding what that means in ways that can be measured efficiently and effectively.
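As a purely illustrative sketch (the records, fields, and rules below are hypothetical, not a prescribed method), sampling within each category against explicit checks might look like this:

```python
import random

# Hypothetical records and rules, purely to illustrate sampled quality checking.
records = [
    {"category": "bearing", "description": "Bearing, ball, single row, 25 mm bore", "uom": "EA"},
    {"category": "bearing", "description": "", "uom": "EA"},
    {"category": "valve", "description": "Valve, gate, 2 in, class 150", "uom": ""},
]

def passes_checks(record: dict) -> bool:
    """Example checks: a non-empty description and a unit of measure are present."""
    return bool(record["description"]) and bool(record["uom"])

def sampled_pass_rate(rows: list, category: str, sample_size: int = 2) -> float:
    """Randomly sample records in one category and report the share passing the checks."""
    pool = [r for r in rows if r["category"] == category]
    sample = random.sample(pool, min(sample_size, len(pool)))
    return sum(passes_checks(r) for r in sample) / len(sample)

print(sampled_pass_rate(records, "bearing"))
```

Checks like these only have value once you have agreed what “good” means for each field – which is exactly the question the rest of this article addresses.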

The Greek philosophers Aristotle and Plato captured and shaped many of the ideas we have adopted today for managing data quality. Plato’s Theory of Forms tells us that whilst we have never seen a perfectly straight line, we know what one would look like, whilst Aristotle’s Categories showed us the value of categorising the world around us. In the modern world of data quality management, we know what good data should look like, and we categorise our data in order to help us break down the larger datasets into manageable groups.

In order to quantify the quality of the data, you need to understand, and then define, the properties (attributes or characteristics) of the data you plan to measure. Data quality properties are frequently termed “dimensions”. Many organizations have set out what they regard as the key data quality dimensions, and there are plenty of scholarly and business articles on the subject. Two of the most commonly cited sources for lists of dimensions are DAMA International and ISO, in the international standard ISO 25012.

There are a number of published books on the subject of data quality. In her seminal work Executing Data Quality Projects: Ten Steps to Quality Data and Trusted Information™ (Morgan Kaufmann, 2008), Danette McGilvray emphasises the importance of understanding what these dimensions are and how to use them in the context of executing data quality projects. A key callout in the book captures this concept:

“A data quality dimension is a characteristic, aspect, or feature of data. Data quality dimensions provide a way to classify information and data quality needs. Dimensions are used to define, measure, improve, and manage the quality of data and information.

The data quality dimensions in The Ten Steps methodology are categorized roughly by the techniques or approach used to assess each dimension. This helps to better scope and plan a project by providing input when estimating the time, money, tools, and human resources needed to do the data quality work.

Differentiating the data quality dimensions in this way helps to:
1) match dimensions to business needs and data quality issues;
2) prioritize which dimensions to assess and in which order;
3) understand what you will (and will not) learn from assessing each data quality dimension; and
4) better define and manage the sequence of activities in your project plan within time and resource constraints”.

Laura Sebastian-Coleman, in her work Measuring Data Quality for Ongoing Improvement (2013), sums up the use of dimensions as follows:

“if a quality is a distinctive attribute or characteristic possessed by someone or something, then a data quality dimension is a general, measurable category for a distinctive characteristic (quality) possessed by data.

Data quality dimensions function in the way that length, width, and height function to express the size of a physical object. They allow us to understand quality in relation to a scale or different scales whose relation is defined. A set of data quality dimensions can be used to define expectations (the standard against which to measure) for the quality of a desired dataset, as well as to measure the condition of an existing dataset”.

Tim King and Julian Schwarzenbach, in their work Managing Data Quality – A practical guide (2020), include a short section on data characteristics, which reminds readers that any chosen set of dimensions depends on the perspective of the user – back to Plato and his Theory of Forms, and the idea that beauty lies in the eye of the beholder. According to King and Schwarzenbach, quoting DAMA UK (2013), the six most common dimensions to consider are:

  • Accuracy
  • Completeness
  • Consistency
  • Validity
  • Timeliness
  • Uniqueness

The book also offers a timely reminder that ISO 8000-8 is an important standard to reference when looking at how to measure data quality. ISO 8000-8 describes fundamental concepts of information and data quality, and how these concepts apply to quality management processes and quality management systems. The standard specifies prerequisites for measuring information and data quality and identifies three types of data quality: syntactic; semantic; and pragmatic. Measuring syntactic and semantic quality is performed through a verification process, while measuring pragmatic quality is performed through a validation process.
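To make a few of those dimensions concrete, here is a minimal, purely illustrative sketch of scoring completeness, validity, and uniqueness over a handful of records; the field names and rules are hypothetical and are not taken from DAMA or ISO 8000-8:

```python
import re

# Hypothetical material records, purely to illustrate scoring three common dimensions.
records = [
    {"id": "M-001", "description": "Bearing, ball, 6205-2RS", "unit_price": "12.50"},
    {"id": "M-002", "description": "", "unit_price": "9.99"},
    {"id": "M-001", "description": "Valve, gate, 2 in", "unit_price": "abc"},
]

def completeness(rows: list, field: str) -> float:
    """Share of records with a non-empty value in the given field."""
    return sum(bool(r[field]) for r in rows) / len(rows)

def validity(rows: list, field: str, pattern: str) -> float:
    """Share of records whose field matches an agreed format."""
    return sum(bool(re.fullmatch(pattern, r[field])) for r in rows) / len(rows)

def uniqueness(rows: list, field: str) -> float:
    """Share of values in the field that are distinct."""
    values = [r[field] for r in rows]
    return len(set(values)) / len(values)

print(completeness(records, "description"))            # ~0.67
print(validity(records, "unit_price", r"\d+\.\d{2}"))  # ~0.67
print(uniqueness(records, "id"))                       # ~0.67
```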

In summary, there is plenty of resource out there that can help you understand how to measure the quality of your data, and at KOIOS Master Data, we are experts in this field.

Contact us

Give us a call and find out how we can help you.

+44 (0)23 9387 7599

info@koiosmasterdata.com

About the author

Peter Eales is a subject matter expert on MRO (maintenance, repair, and operations) material management and industrial data quality. Peter is an experienced consultant, trainer, writer, and speaker on these subjects. Peter is recognised by BSI and ISO as an expert in the subject of industrial data. Peter is a member of ISO/TC 184/SC 4/WG 13, the ISO standards development committee that develops standards for industrial data and industrial interfaces, including ISO 8000, ISO 29002, and ISO 22745. Peter is the project leader for edition 2 of ISO 29002, due to be published in late 2020. Peter is also a committee member of ISO/TC 184/WG 6, which published the standard for asset intensive industry interoperability, ISO 18101.

Peter has previously held positions as the global technical authority for materials management at a global EPC, and as the global subject matter expert for master data at a major oil and gas owner/operator. Peter is currently chief executive of MRO Insyte, and chairman of KOIOS Master Data.

KOIOS Master Data is a world-leading cloud MDM solution enabling ISO 8000 compliant data exchange

International trade and counterfeiting challenges: a new digital solution that will traverse the borders – Part 2

Part 2 – Introducing K:blok – the digital solution to international trade and counterfeit challenges

Introduction

In February 2019, we (KOIOS Master Data) embarked on a successful year-long research and development project focusing on “Using ISO 8000 Authoritative Identifiers and machine-readable data to address international trade and counterfeiting challenges”. This project was funded by Innovate UK, part of UK Research and Innovation. ISO 8000 is the international standard for data quality.

Part one of this article explains the challenges HMRC and the UK PLC face due to counterfeiting and misclassification when importing into the UK, and outlines the digital solution to those challenges upon which we won our Innovate UK grant.

This part of the article (part two) outlines the development progress made towards building a digital solution, how machine learning and natural language processing techniques were used during the year-long project and how the project can move forward.

K:blok – technology to traverse borders

To tackle the challenges outlined in part one, we developed a new software product, K:blok.

K:blok is a cloud application that allows importers to create a digital contract between the parties involved in the cross border movement of goods from the manufacturer to the importer/buyer. These parties can include: manufacturers, shippers, freighters, insurers and lawyers, amongst others.

The contract brings together, in a single source, the various pieces of data that are required to import a product into the UK successfully and efficiently, together with data that is not currently captured in any software system:

  • ISO 8000 compliant, machine readable, multilingual product descriptions produced by the manufacturer of the products;
  • ISO 8000 compliant Authoritative Legal Entity Identifiers (ALEIs) for each organisation that participates in the trade;
  • Accurate commodity codes for each product, the quantity of products, serial numbers and anti-counterfeit information (only visible to the manufacturer, the buyer and HMRC) to help validate the authenticity of the product;
  • Trade specific information required for insurance and accountability, for example: the trade incoterm;
  • Licensing and trading information about the parties in the contract, for example: Economic Operators Registration and Identification (EORI) number;
  • Information regarding the route the product is taking, for example: the port of import into the UK, port of export from the original country of export, vessel/aircraft numbers and locations of the change of custody of the consignments.

The contract is digital, machine readable, can be exchanged without loss of meaning and is suitable for interoperating with distributed ledger technology, like blockchain.
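As a purely illustrative sketch of what a machine-readable record of such a contract might look like (the field names below are hypothetical and are not the K:blok schema), serialised to JSON it could carry the elements listed above:

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical, simplified contract record; field names are illustrative only
# and do not represent the actual K:blok data model.
@dataclass
class TradeContract:
    contract_id: str
    manufacturer_alei: str     # ISO 8000 authoritative legal entity identifier
    importer_alei: str
    importer_eori: str         # Economic Operators Registration and Identification number
    incoterm: str
    port_of_export: str
    port_of_import: str
    lines: list = field(default_factory=list)   # one entry per product line

contract = TradeContract(
    contract_id="C-2020-0001",
    manufacturer_alei="ALEI-EXAMPLE-MFR",
    importer_alei="ALEI-EXAMPLE-IMP",
    importer_eori="GB123456789000",
    incoterm="FOB",
    port_of_export="Shanghai",
    port_of_import="Southampton",
    lines=[{
        "description": "Bearing, ball, single row, 25 mm bore",
        "commodity_code": "8482.10",
        "quantity": 500,
        "serial_numbers": ["SN0001", "SN0002"],
    }],
)

print(json.dumps(asdict(contract), indent=2))
```

Because the record is plain structured data, it can be exchanged between the parties, anchored to a distributed ledger, or analysed by HMRC without re-keying.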

This data can be accessed and used by any of the participants of the contract and analysed by HMRC. All of this data is captured before the goods are moved which, in turn, provides an intelligence layer and pre-arrival data on goods for HMRC analytics, to enable resources to be targeted at consignments deemed high risk.

This single source of data also provides buyers with an audit trail for their purchased products that begins with the original manufacturer, which assists with authenticating the product received and can form the basis of an efficient global trusted trader scheme.

Natural language processing will help avoid misclassification

As discussed in part one, misclassification leads to the UK losing billions in tax revenue. Misclassification is both intentional and unintentional. Reducing the unintentional misclassification could save the UK millions in tax revenue.

There is a fundamental flaw in the current process of tariff code assignment. The party that currently assigns the tariff code is not usually the manufacturer of the product and therefore often lacks the technical knowledge to classify the product correctly. This party also rarely has a full description of the product, and resorts to using a basic description from an invoice to assign the code.

Currently, HMRC provides an online lookup and email service to enable UK businesses to assign the correct tariff code. However, there are concerns that the service is not time efficient. This concern will only get worse as more companies may have to classify their goods once the UK leaves the European Union (EU).

Therefore, as part of our project, we worked with two students from the University of Southampton, studying Computer Science with Machine Learning, to create an additional application programming interface (API). It links with the government tariff code API and uses natural language processing techniques to score the similarity between an input product description and potential mappings to the correct tariff code.

This is accessible by manufacturers using the KOIOS software to link their ISO 8000 compliant product specifications to the correct commodity code for trading with the UK.

Techniques such as term frequency-inverse document frequency (tf-idf) and k-means clustering were integrated into this API. Support Vector Machine (SVM), Random Forest, and two-layer Deep Neural Network models have also been explored to improve the accuracy of the algorithm.
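As a rough illustration of the tf-idf approach (this is not the project’s API or model, and the heading texts and product description below are made up for the example), scoring a product description against candidate tariff headings might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up tariff heading descriptions; the real classifier searched the government tariff data.
tariff_headings = {
    "8482.10": "ball bearings",
    "8482.20": "tapered roller bearings, including cone and tapered roller assemblies",
    "8483.60": "clutches and shaft couplings, including universal joints",
}

product_description = "deep groove ball bearings, single row, 25 mm bore, rubber sealed"

vectorizer = TfidfVectorizer()
heading_vectors = vectorizer.fit_transform(list(tariff_headings.values()))
query_vector = vectorizer.transform([product_description])

# Cosine similarity between the product description and each candidate heading.
scores = cosine_similarity(query_vector, heading_vectors).ravel()
for (code, text), score in sorted(zip(tariff_headings.items(), scores),
                                  key=lambda pair: pair[1], reverse=True):
    print(f"{code}  {score:.2f}  {text}")
```

In the project itself, this kind of similarity scoring was combined with clustering and the supervised models listed above to improve ranking accuracy.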

The API successfully improves on the searching capabilities of the government online lookup service within the product areas explored in this project – which were bearings and couplings.

KOIOS are uniquely positioned to continue the development of digital solutions for the UK PLC

Our Innovate UK project provides a foundation to achieve more efficient, cost-effective, cross-border trading and to reduce counterfeit activities. We believe that data standards, including ISO 8000, can play a huge part in digitising and automating this process further.

We are ideally suited and uniquely positioned to continue the research and development of both the K:blok platform and the machine learning tariff classifier.

We also believe there is an opportunity to digitise the outdated, human readable tariff classification into a digital classification, using the international standards ISO 22745 and ISO 29002. These data standards sit at the core of all of the products in the KOIOS Software Suite. A digital version of the tariff classification will improve the accuracy, speed and reliability of computer automation.

Join us in our vision

Our successful Innovate UK project was a step in the right direction to improving international trade and reducing counterfeiting. Brexit also provides a great opportunity for the UK to become a world leader in using technology across borders and to set the standard for countries to follow.

In the coming months, we will continue to engage with the UK Government/HMRC and continue to look for opportunities to fund our research and development.

If you think that you can add value to this project and would like to explore how we could collaborate then please get in touch at info@koiosmasterdata.com

Contact us

If you think that you can add value to this project and would like to explore how we could collaborate then please get in touch.  

+44 (0)23 9387 7599

info@koiosmasterdata.com

International trade and counterfeiting challenges: a new digital solution that will traverse the borders – Part 1

Part 1 – The cost of counterfeit goods and misclassification to the UK

Introduction

In February 2019, we (KOIOS Master Data) embarked on a successful year-long research and development project focusing on “Using ISO 8000 Authoritative Identifiers and machine-readable data to address international trade and counterfeiting challenges”. This project was funded by Innovate UK, part of UK Research and Innovation. ISO 8000 is the international standard for data quality.

This part of the article (part one) explains the impact that counterfeit goods and the misclassification of products have on the UK PLC, and the proposed solution which won us the Innovate UK Government grant.

A GBP 11 billion impact – and poor data exchange is at the root of the problem

Counterfeit products and misclassification of products, when importing into the UK, cause major challenges for commercial organisations and the economy in the UK. These challenges increase a business’s exposure to risk, including consumer health, safety and well-being.

The impact of global counterfeiting on the UK economy is increasing. The Organisation for Economic Co-operation and Development (OECD) states that forgone sales for UK companies due to infringement of their intellectual property (IP) rights in global trade amounted to GBP 11 billion and at least 86,300 jobs were lost due to counterfeiting and piracy in 2019.

Protection from counterfeiting could save some organisations thousands of pounds: for example, Greek customs seized 17,000 bearings purporting to be from SKF, worth €1m, in a single anti-counterfeiting operation.

When importing into the UK, importers are required to declare a commodity code for the products being imported. The commodity code is used to collect duty and VAT and dictates the restrictions and regulations, including the requirement for licensing, when importing or exporting the product.

Often the importer of the product does not have the technical knowledge to classify the product correctly. This, in combination with the complexity of the tariff code system currently adopted by the European Union (EU), and subsequently by the UK, causes many cases of misclassification. These cases are both intentional and unintentional.

Misclassification causes incorrect duty and VAT ratings to be applied to companies importing products, and also distorts trade statistics. Fraudulent misclassification leads to the UK losing billions in tax revenue.

Importers currently make customs declarations using the Customs Handling of Import and Export Freight (CHIEF) system, with some importers transitioning to the newer Customs Declaration Service (CDS).

The current importing process is not stringent enough and information is declared too late in the process. This results in a lack of transparency of the origin of products and a lack of quality data supporting the import and trade.

Therefore, Customs have the near impossible task of identifying and intercepting counterfeit or misclassified products. Customs activities increase spending by the UK Government on customs checks and delay trading activities.

International trade and counterfeit challenges: there is a digital solution

We believe that the challenges facing HMRC and the organisations that suffer from counterfeit goods can be solved with a stringent digital solution. A digital solution that captures:

  • A quality description of the products in a consignment;
  • The regulatory/licensing requirements on the products and the importer – for example: the commodity code of the product and the Economic Operators Registration and Identification (EORI) number; and
  • The parties involved in the trade – for example: manufacturers, shippers, freighters, insurers and lawyers, amongst others,

in a timely manner (pre-arrival to the UK border). This assured, single source of data can then be used by all parties in the supply chain, including HMRC and border forces. HMRC will then be able to use this trusted data to better target resources on more risky consignments and the platform can be a requirement for inclusion in a trusted trader programme.

This digital solution can be taken further so that the importer using the platform can establish a purchase order with the seller.

We also believe that we can help to reduce the misclassification of products by:

  1. Putting the responsibility of classifying the product on the manufacturer of the product, rather than the importer; and
  2. Assisting the manufacturers with classifying the product by using ISO 8000 compliant, machine readable product specifications and machine learning techniques to search the current human readable tariff classification.

Without a digital data solution for the automating of tariff code assignment and the provenance of products, no significant improvements to the current state of play can be achieved.

The proposed solution would enable HMRC to: 

  • reduce administration;
  • eliminate errors;
  • restrict growing levels of fraud in the digital economy; and
  • target resources effectively by collecting pre-arrival data on goods.

This proposal formed the fundamental basis of our successful Innovate UK grant application.

The next part of this article outlines the development progress made towards building a digital solution, how machine learning and natural language processing techniques were used during the year-long project and how the project can move forward.

Contact us

If you think that you can add value to this project and would like to explore how we could collaborate then please get in touch.  

+44 (0)23 9387 7599

info@koiosmasterdata.com

What is K:spir and how can it revolutionise the SPIR process?

The SPIR process urgently needs to enter the 21st century

At KOIOS Master Data we have a unique understanding of the difficulties caused by the current SPIR (Spare Parts Interchangeability Record) process. Through our team’s years of MRO consultancy work, we have first-hand experience of how damaging the poor-quality data supplied in SPIRs can be to oil and gas projects. It can have a profound effect on cost, time and resource – cost, time and resource that could be spent innovating and developing a competitive advantage. Not to mention the unnecessary wastage it can lead to, in an industry that can hardly accommodate it in the current climate. In this age of Industry 4.0, digital transformation and international data standards such as ISO 8000, the question has to be asked: why is data quality consistently letting the side down? When we struggled to find an effective SPIR solution, we set out to create one, and KOIOS Master Data was born.

K:spir is the only SPIR software designed this century using ISO 8000 standard data. It creates machine-readable data that retains quality throughout the chain, enabling accurate decision making and resulting in reduced cost, time and resource.

Here, we look at the importance of master data management, the challenges created by the SPIR process, and how K:spir is uniquely positioned to resolve those challenges.

Why is data management so important to the SPIR process?

In this age of ‘data explosion’, most businesses are aware of how poorly-managed data can put them on the back foot. Experian’s 2019 Global Data Management Research found that 95% of organizations surveyed see a negative impact from poor data quality.

Similarly, the Aberdeen Group’s Big Data Survey in 2017 found that the biggest challenges for Executives arise from data disparity, including inaccessible data, poor quality data informing decisions and the growing need for faster analysis. 

The overall effect is a lack of trust in data, to the great detriment of strategic decision making. And when you can’t trust your data to inform business decisions, then cost, time and resource will inevitably suffer.

In the context of the SPIR process, accurate decision making is everything. The SPIR exists as a tool for forecasting spares requirements for the life of a project, its sole purpose being to assist the Owner Operator (O/O) to make accurate decisions. Yet, as many will attest, the data supplied is often inaccurate, hard to access and sometimes supplied by the Engineering Procurement Contractor (EPC) at handover, by which time it is often too late to inform anything at all. 

Experts have raised the question – if you can’t trust SPIRs to make accurate procurement decisions, then are they worth the paper they’re written on? The process is clearly out-of-date, yet it continues to blight the efficiency of many oil and gas upstream projects.

SPIRs dissected 

The shortcomings of the antiquated SPIR process can be summarised into three key areas:

1. DATA IS INACCURATE AND OVER-SIMPLIFIED

SPIRs are generated from paper forms and are transcribed many times, so part descriptions become distorted. Often, parts have multiple descriptions.

Solution: K:spir locks in data quality right at the start of the process, using ISO 8000 standard data. Part descriptions are consistent and safe from misinterpretation, providing confidence in forecasting and reordering. 

SPIRs are usually completed by an Original Equipment Manufacturer (OEM), who is not necessarily aware of the O/O’s operating and maintenance procedures. Therefore, they do not take into account equipment criticality or maintenance capability.

Solution: K:spir uses the maintenance and repair strategy to determine the spares requirement, reducing wastage and taking cost off the bottom line.

2. DATA IS INACCESSIBLE AND DIFFICULT TO ANALYZE

SPIRs often provide information in spreadsheets or pdfs, which are impossible to extract data from quickly, if at all. To extract anything meaningful is very cost and time-intensive, and relies on support from IT specialists.

Solution: K:spir provides instant reporting on the completeness and cost of spares, allowing for accurate decision making. The information is fully configurable to the requirements of the O/O. It can also create a Maintenance Bill of Materials (BoM) and is interoperable with maintenance systems.

Information is not portable and has to be re-entered for different systems.

Solution: K:spir generates portable (machine-readable) data saving significant time spent re-keying information and unnecessary data handling costs.

Data exists on many platforms and is not available to all stakeholders, all of the time.

Solution: K:spir is cloud-based, providing simultaneous access to all stakeholders in the chain. This allows for more transparency and accountability at all stages of the project lifecycle.

3. DATA IS SUPPLIED TOO LATE

Sometimes even as late as handover, by which time it’s too late for the O/O to minimize the operating risk. There is no opportunity to make informed decisions, such as ordering spares with long lead times, or calculating warehouse space. This can lead to unnecessary wastage and operational difficulties along the line.

Solution: K:spir provides transparency right from the beginning of the project, allowing for critical decisions to be made early on. 

With its unique set of features and benefits, it’s clear that K:spir can relieve the symptoms of the current SPIR process with immediate effect, saving valuable cost, time and resource.

A SPIR – this is not what efficiency looks like!

SPIRs and effective MDM – who is responsible for getting it right?

As confident as we are in the KOIOS software suite to advance the world of Master Data Management (MDM), there are clearly other factors that need to be addressed, most notably, ownership. It is a thorny area, and one that is being more keenly contested as digital transformation rattles on apace. As the Aberdeen Group puts it, there is a “growing urgency for better data management”, as businesses see the shortfalls of their inability to harness data.

Experian’s report shows that in 84% of cases, data is still managed primarily by IT departments. Revealingly, 75% of their sample thought that ownership should lie within the business, with support from IT. They conclude that organizations should develop their MDM strategy to fulfill the needs of a much larger group of stakeholders, who wish to harness the power of their data to improve decision making and efficiency.

In the context of SPIRs and oil and gas projects, we believe that O/Os should become more demanding over the quality of data supplied to them by manufacturers. It is unrealistic for their IT experts to have sight of the broader operational requirements, with their own priorities being diverse and demanding. It is the Executives who suffer the consequences of the risk taken by ignoring poor data, and the operations and maintenance departments that will experience the pain. Clearly, they need to make their voices heard much earlier in the process. That said, manufacturers and EPCs also need a better understanding of the challenges faced by O/Os, and in our view should share the responsibility for getting the data right from the start.

It is, as previously stated, a tough subject, but we are constantly encouraged by the conversations we have with manufacturers and O/Os alike. More and more key stakeholders are waking up to the power that effective MDM can have in driving business forwards, by freeing up cost, time and resource and supporting strategic decision making. Not just to their own ends, but for industry as a whole to fully realize its digital transformation goals.

Join us in our vision to revolutionize the SPIR process

A radical change to the SPIR process and MDM as a whole is on the horizon. While there may be no silver bullet, we firmly believe that the right software is an essential move forward. The KOIOS software suite is geared towards this larger shift in MDM, but in the case of K:spir, the results can be felt immediately.

Our hope is that O/Os and manufacturers alike will unite in becoming more discerning and demanding about data quality, working as one to create harmony along the chain. At KOIOS Master Data, we are committed to leading the conversation and driving better data quality.

Contact us

If you wish to become part of the change and join us in our vision to revolutionize the SPIR process, we would love to discuss it further with you. 

+44 (0)23 9387 7599

info@koiosmasterdata.com