How Much Does Sensor Quality Matter for Anomaly Detection?
Anomaly detection and applications such as predictive maintenance promise a lot: reduced downtime, lower costs and improved productivity. But in practice, adoption is often limited by two key challenges:
The need for machine-specific models
Uncertainty around data quality
At Anomalyse, we take a different approach: a general-purpose machine learning platform that learns “normal” behaviour directly from sensor data, without needing prior knowledge of the machine. But this raises an important question:
“How much does the quality of sensor data actually affect the results?”
Testing the Question with NPL
Through the Innovation for Machinery (I4M) programme, funded by the Advanced Machinery and Productivity Initiative (AMPI), we partnered with the National Physical Laboratory (NPL) to investigate this. We designed a controlled experiment using a CNC milling machine, a common asset in manufacturing. Two sets of vibration sensors were installed side by side:
High-quality industrial sensors
Lower-cost, less precise sensors
This allowed us to compare “good” and “noisy” data under identical operating conditions.
To simulate a real-world fault, we gradually reduced the pressure in the machine’s vice over multiple runs. This created a progressive loss of grip, a condition that would eventually lead to failure as the workpiece being milled slips or is dropped.
Crucially, this pressure data was not used in the analysis. Instead, both our platform and NPL’s independent Early Warning Signal methods relied solely on vibration data to detect the issue.
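To make the idea of vibration-only detection concrete, here is a minimal sketch of learning a “normal” baseline from vibration windows and scoring new windows by their deviation from it. This is an illustration only, not Anomalyse’s or NPL’s actual method: the RMS/peak features and the simple z-score distance are assumptions chosen for clarity.

```python
# Minimal anomaly-scoring sketch (illustrative only, not the platform's method).
# Assumptions: vibration arrives as fixed-length windows; "normal" is summarised
# by simple per-window statistics with a Gaussian-style baseline.
import numpy as np

def extract_features(window):
    """Summarise a vibration window with simple statistics: RMS and peak."""
    rms = np.sqrt(np.mean(window ** 2))
    peak = np.max(np.abs(window))
    return np.array([rms, peak])

def fit_baseline(normal_windows):
    """Learn the mean and spread of features over known-normal windows."""
    feats = np.array([extract_features(w) for w in normal_windows])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-9  # avoid divide-by-zero

def anomaly_score(window, mean, std):
    """Distance, in standard deviations, of a window from the baseline."""
    z = (extract_features(window) - mean) / std
    return float(np.linalg.norm(z))

rng = np.random.default_rng(0)
# Healthy vibration: moderate amplitude. Faulty: loss of grip raises vibration.
normal_windows = [rng.normal(0.0, 1.0, 1024) for _ in range(50)]
faulty_window = rng.normal(0.0, 2.5, 1024)

mean, std = fit_baseline(normal_windows)
print(anomaly_score(normal_windows[0], mean, std))  # low: close to baseline
print(anomaly_score(faulty_window, mean, std))      # high: clear deviation
```

In this toy setup, the score for the faulty window is far larger than for a healthy one, which mirrors the idea of anomaly scores rising as a fault develops; real systems use far richer features and models, but the scoring principle is the same.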
What We Found
The results were clear - and encouraging!
Both high-quality and low-quality sensor data enabled detection of the fault. Even with less precise data, the system could identify that something was wrong. However, the difference emerged in how well the fault could be understood:
Low-quality sensors were effective at detecting abnormal behaviour, but provided limited visibility into how the fault developed over time
High-quality sensors revealed a much clearer progression, with anomaly scores increasing as the fault worsened
This means that while lower-cost setups can deliver value, higher-quality data enables earlier detection and better insight into emerging issues.
What This Means for Manufacturers
One of the biggest barriers to predictive maintenance is the perceived need for perfect data and expensive instrumentation.
This project shows that isn’t the case.
Manufacturers can start with existing or low-cost sensors and still gain meaningful insights. The decision to invest in higher-quality sensors then becomes a strategic one:
Do you just need to know when something is wrong?
Or do you need early warning and deeper diagnostics?
In other words, it’s not about whether your data is “good enough” - it’s about what level of insight you need.
Building Trust Through Evidence
For us, the most important outcome of this work is credibility.
By working with NPL and validating our results against independent analytical methods, we’ve demonstrated that our platform can detect faults in real industrial scenarios, even with imperfect data.
That evidence allows us to have more confident, data-driven conversations with manufacturers about how predictive maintenance can work in their environment.
Interested in learning more?
Get in touch to explore how anomaly detection could work with your data - whatever its quality.