Statistical Process Control – or SPC for short – can help your organization progress in great strides when applied correctly, but it does come with a few caveats that you'll need to observe. Most importantly, SPC requires you to spend considerable time working out exactly how you're going to collect your data, because proper data collection practices are the foundation of a good SPC implementation. Get that part right, and the rest of the implementation becomes much easier.
The Two Types of Data
There are two main types of data you’ll be dealing with in SPC – attribute data and variable data. The main difference between them is that attribute data is based on “attributes” – discrete, mutually exclusive properties. A product is good or bad, a condition is true or false, a certain percentage of products fall in some category, etc. The point is that attribute data can easily be grouped according to these values, allowing you to aggregate the data quite effectively.
Variable data, on the other hand, is continuous rather than discrete. It comes from measurement: a value anywhere along a continuous scale, the reading of an analog control signal, and so on. Variable data also tends to be more flexible in its application to your research, as it can give you a more objective overview of the current situation.
That's because attribute data can be quite susceptible to the perception of whoever records it, which in the end makes it subjective. How do you define whether something is good or bad? Unless you have a very strict table of product requirements that covers every single aspect of the output (which is not a bad idea at some point anyway), this can be quite open to individual interpretation.
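The distinction matters because the two data types are charted differently. Below is a minimal sketch in Python contrasting the two: a p-chart (proportion defective) for attribute data and simple mean ± 3σ limits for variable data. All sample values, the sample size of 100, and the use of the sample standard deviation instead of a moving-range estimate are illustrative simplifications, not a full SPC implementation.

```python
import statistics

# Attribute data: each unit is simply pass/fail (discrete, mutually
# exclusive). Hypothetical daily samples: (units inspected, defective).
daily_samples = [(100, 4), (100, 7), (100, 3), (100, 5), (100, 6)]

total_inspected = sum(n for n, _ in daily_samples)
total_defective = sum(d for _, d in daily_samples)
p_bar = total_defective / total_inspected  # average defective fraction

# p-chart control limits (constant sample size n = 100):
n = 100
sigma_p = (p_bar * (1 - p_bar) / n) ** 0.5
p_ucl = p_bar + 3 * sigma_p
p_lcl = max(0.0, p_bar - 3 * sigma_p)  # a proportion can't go below 0

# Variable data: a continuous measurement per unit, e.g. a shaft
# diameter in millimetres (hypothetical values).
diameters = [10.02, 9.98, 10.01, 10.05, 9.97, 10.00, 10.03]
x_bar = statistics.mean(diameters)
sd = statistics.stdev(diameters)

# Simplified limits at mean +/- 3 standard deviations:
x_ucl = x_bar + 3 * sd
x_lcl = x_bar - 3 * sd

print(f"p-chart: centre={p_bar:.3f}, LCL={p_lcl:.3f}, UCL={p_ucl:.3f}")
print(f"X chart: centre={x_bar:.3f}, LCL={x_lcl:.3f}, UCL={x_ucl:.3f}")
```

Note how the attribute side only needed counts per sample, while the variable side needed the individual measurements, which is exactly why variable data preserves more information about the process.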
Combining Your Data Sets
Sometimes it can make sense to draw conclusions from more than one data set, including ones of different types. You can, for example, look for overlaps between an attribute data set that covers the final quality verdict for a product, and a variable data set that shows what signals a certain machine in the production chain has been receiving when each of those products arrived at the end of the line.
This can lead to the discovery of some interesting relationships between the work of individual parts of the production line and the final output, in some cases surfacing unexpected results that you would never have guessed on your own. That's one of the strong points of SPC, and the main reason why you should spend as much time as you can developing a good, reliable system for data collection and aggregation.
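The overlap described above boils down to joining the two sets on a shared key and comparing the variable measurements across the attribute categories. Here is a minimal sketch, assuming the units carry serial numbers; the serial numbers, verdicts, and signal readings are all hypothetical:

```python
# Hypothetical attribute data: final quality verdict per serial number.
verdicts = {
    "A001": "good", "A002": "good", "A003": "bad",
    "A004": "good", "A005": "bad", "A006": "good",
}

# Hypothetical variable data: the control-signal reading one machine
# recorded while each unit passed through it.
signals = {
    "A001": 4.8, "A002": 5.1, "A003": 6.9,
    "A004": 5.0, "A005": 7.2, "A006": 4.9,
}

# Join the two sets on the serial number, grouping signals by verdict.
by_verdict = {"good": [], "bad": []}
for serial, verdict in verdicts.items():
    if serial in signals:  # only units present in both sets
        by_verdict[verdict].append(signals[serial])

means = {v: sum(vals) / len(vals) for v, vals in by_verdict.items()}
for verdict, mean in means.items():
    print(f"{verdict}: n={len(by_verdict[verdict])}, mean signal={mean:.2f}")
```

In this toy data the "bad" units cluster around a visibly higher signal level than the "good" ones, which is precisely the kind of lead you would then investigate at that machine.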
Aligning Old Data with New Discoveries
Sometimes – quite often, in fact – it can be useful to compare old findings with new data to see how some of your company's processes have evolved over time. However, it's quite common to have modified your data collection practices in some way since that older data was collected, leaving the two sets incompatible.
This can be avoided by setting up your data structures and databases to be flexible and easily adaptable from the beginning. Use a common standard across the board, and implement everything in a modular way that leaves it open to change in the future. Aligning data sets taken under different conditions doesn't have to be hard as long as it was planned for from the start, and if you get it right early, you can keep your entire historical data set useful for as long as the company itself exists – which, as you'll eventually find out, can be a huge benefit.
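One common way to keep old and new data compatible is to tag every record with the schema version it was collected under, and keep a small normalization step that maps any version onto the current one. The sketch below assumes a hypothetical measurement that was once recorded in inches under a field called "dia_in" and is now recorded in millimetres; the field names, version numbers, and values are all illustrative:

```python
# Hypothetical records from two schema versions. Each record carries
# its schema version so it can always be normalized later.
old_records = [
    {"schema": 1, "serial": "A001", "dia_in": 0.394},
    {"schema": 1, "serial": "A002", "dia_in": 0.398},
]
new_records = [
    {"schema": 2, "serial": "B001", "diameter_mm": 10.02},
    {"schema": 2, "serial": "B002", "diameter_mm": 10.05},
]

def normalize(record):
    """Map any known schema version onto the current one (millimetres)."""
    if record["schema"] == 1:
        return {"serial": record["serial"],
                "diameter_mm": round(record["dia_in"] * 25.4, 2)}
    if record["schema"] == 2:
        return {"serial": record["serial"],
                "diameter_mm": record["diameter_mm"]}
    raise ValueError(f"unknown schema version: {record['schema']}")

# Old and new data become one homogeneous set, ready for comparison.
combined = [normalize(r) for r in old_records + new_records]
print(combined)
```

The key design choice is that old records are never rewritten in place; they keep their original form and version tag, and the conversion lives in one function that you extend whenever the schema changes again.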
Proper data collection practices are a critical aspect of SPC, and something you’ll want to master as early as possible. Don’t rush the job thinking that you’ll just come back to fix things later – the longer you wait between separate iterations, the more problematic it will become to compare different sets of data in the future, defeating the whole point of running a continuous statistical analysis on your organization’s work in the first place. On the other hand, when you do this right, it can save you tremendous amounts of effort in the long run.