Sensors and Signal Acquisition
Turning Physical Reality into Reliable Digital Data
Every data collection system starts with a simple but critical step: measuring the real world. No matter how advanced your cloud analytics, dashboards, or machine learning models are, the quality of the resulting insights will never exceed the quality of the data captured at the very beginning of the pipeline.
In this article, we narrow the focus deliberately. Instead of covering the full data stack, we concentrate on sensors and signal acquisition—the foundation upon which all data collection systems are built. This is where physical reality is translated into digital information, and where many of the most costly and difficult-to-fix mistakes are made.
Figure: A high-level architecture showing how physical sensors, signal acquisition, and edge processing interact.
What Is a Sensor?
A sensor is a device that converts a physical phenomenon into an electrical signal that can be measured and processed digitally. Depending on the application, this phenomenon might be temperature, pressure, vibration, position, electrical current, or any number of other physical quantities.
However, the sensor itself is only part of the picture. Signal acquisition also includes everything required to turn that raw physical response into usable digital data. Amplification, filtering, sampling, and analog-to-digital conversion are all essential steps. Treating the sensor as an isolated component rather than part of a signal chain is a common source of system-level problems.
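To make the idea of a signal chain concrete, here is a minimal Python sketch of its final step: converting a raw ADC count back into a physical quantity. The 12-bit converter, 3.3 V reference, divider resistor, and NTC thermistor parameters are illustrative assumptions, not values from any particular part.

    import math

    VREF = 3.3          # ADC reference voltage (assumed)
    ADC_BITS = 12       # 12-bit converter (assumed)
    R_FIXED = 10_000.0  # fixed divider resistor, ohms (assumed)
    R0, T0, BETA = 10_000.0, 298.15, 3950.0  # nominal NTC parameters (assumed)

    def counts_to_celsius(raw: int) -> float:
        """Walk the signal chain backwards: counts -> volts -> ohms -> kelvin."""
        volts = raw * VREF / (2 ** ADC_BITS - 1)        # ADC transfer function
        r_ntc = R_FIXED * volts / (VREF - volts)        # divider, NTC on the low side
        inv_t = 1.0 / T0 + math.log(r_ntc / R0) / BETA  # Beta-model thermistor equation
        return 1.0 / inv_t - 273.15

    print(counts_to_celsius(2048))  # mid-scale reading: roughly 25 degC here

Every constant in this function corresponds to a physical design decision made long before any software runs, which is exactly why the sensor cannot be treated in isolation.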
Analog and Digital Sensors
Sensors are often broadly classified as analog or digital, based on how they present their output to the system.
Analog sensors produce a continuous electrical signal—typically a voltage or current—that is proportional to the measured quantity. Examples include thermistors, strain gauges, pressure transducers, and industrial 4–20 mA sensors. These devices offer a high degree of flexibility and, in many cases, excellent resolution. At the same time, they place greater demands on the surrounding electronics. Noise sensitivity, careful PCB layout, analog front-end design, and proper ADC configuration all become critical.
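As a sketch of what handling an analog sensor involves in software, consider a hypothetical 4–20 mA pressure transmitter spanning 0–10 bar, measured across a 250 ohm sense resistor; all numbers are assumptions for illustration.

    # Hypothetical setup: a 4-20 mA transmitter spanning 0-10 bar, measured
    # across a 250 ohm sense resistor (so 4-20 mA appears as 1-5 V at the ADC).
    SENSE_OHMS = 250.0
    SPAN_MIN, SPAN_MAX = 0.0, 10.0  # bar, the transmitter's assumed range

    def loop_to_bar(volts: float) -> float:
        ma = volts / SENSE_OHMS * 1000.0  # sense voltage -> loop current in mA
        if ma < 3.6:                      # far below the 4 mA "live zero": broken loop?
            raise ValueError(f"loop current {ma:.2f} mA suggests an open circuit")
        return SPAN_MIN + (ma - 4.0) / 16.0 * (SPAN_MAX - SPAN_MIN)

    print(loop_to_bar(3.0))  # 12 mA, mid-span -> 5.0 bar

The offset zero is a deliberate feature of the 4–20 mA standard: a reading of 0 mA can never be a valid measurement, so a dead loop is immediately distinguishable from a genuine zero.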
Digital sensors, by contrast, integrate parts of the signal conditioning internally and output already-digitised data over interfaces such as I2C, SPI, or UART. This often simplifies hardware design and firmware development and makes systems more robust to electrical noise. The trade-off is reduced transparency into the raw signal, limited control over sampling behaviour, and dependence on vendor-specific implementations.
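For comparison, a minimal sketch of the digital case: reading a hypothetical temperature sensor over I2C on Linux with the smbus2 library. The device address, register, and fixed-point scaling are placeholders that would come from the actual datasheet.

    from smbus2 import SMBus

    # Address 0x48, register 0x00, and the 1/256 degC-per-LSB scaling are
    # placeholders -- the real values come from the vendor's datasheet.
    I2C_BUS, DEV_ADDR, TEMP_REG = 1, 0x48, 0x00

    def read_temperature_c() -> float:
        with SMBus(I2C_BUS) as bus:
            msb, lsb = bus.read_i2c_block_data(DEV_ADDR, TEMP_REG, 2)
        raw = (msb << 8) | lsb
        if raw & 0x8000:          # sign-extend the 16-bit two's-complement value
            raw -= 1 << 16
        return raw / 256.0        # fixed-point scaling defined by the vendor

    print(read_temperature_c())

Note what this code does not control: when the device sampled, how it filtered, and how it rounded. That convenience is precisely the loss of transparency described above.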
Neither approach is inherently better. The right choice depends on system constraints, performance requirements, and how much control is needed over the measurement process.
Sampling: Capturing the Signal Correctly
Sampling is the process of converting a continuous signal into discrete digital values. This step determines what information is preserved and what is permanently lost.
The sampling frequency must be high enough to capture the dynamics of the signal being measured. The well-known Nyquist criterion provides a theoretical minimum, but real-world systems often require higher sampling rates to account for noise, filtering, and non-ideal behaviour. Resolution is equally important. The number of ADC bits defines the smallest detectable change in the signal and has a direct impact on accuracy and noise performance. As a concrete example, a 12-bit ADC with a 3.3 V reference resolves steps of about 0.8 mV, while a 16-bit converter over the same range resolves about 50 µV.
Poor sampling choices are difficult to correct later. Oversampling wastes power, processing capacity, and bandwidth, while undersampling causes aliasing: frequency content above the Nyquist limit folds back and appears as spurious low-frequency signals that no later processing can remove. Both problems scale downstream as data volumes grow.
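Aliasing is easy to demonstrate numerically. The short numpy experiment below, with arbitrarily chosen frequencies, samples a 60 Hz sine first at 1 kHz and then at 80 Hz; in the undersampled recording the tone shows up at 20 Hz, and nothing in the data itself reveals that it was ever anything else.

    # Sampling a 60 Hz sine comfortably above Nyquist (1 kHz) and below it (80 Hz).
    import numpy as np

    F_SIGNAL = 60.0  # Hz

    def dominant_frequency(fs: float, seconds: float = 2.0) -> float:
        t = np.arange(0.0, seconds, 1.0 / fs)
        x = np.sin(2 * np.pi * F_SIGNAL * t)
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        return freqs[np.argmax(spectrum)]

    print(dominant_frequency(fs=1000.0))  # ~60 Hz, as expected
    print(dominant_frequency(fs=80.0))    # ~20 Hz: the 60 Hz tone has folded over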
Signal Conditioning and Noise
Raw sensor outputs are rarely usable without conditioning. Amplification brings signals into the measurable range of the ADC, filtering removes unwanted frequency components, and level shifting ensures compatibility between components.
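Some conditioning also happens in software, after the ADC. Below is a sketch of a single-pole IIR low-pass filter, the digital cousin of a simple RC stage; the sample rate and cutoff are arbitrary. One caveat worth stressing: a digital filter cannot undo aliasing, so anti-alias filtering must still happen in hardware before the converter.

    import math

    def lowpass(samples, fs: float, f_cutoff: float):
        # Standard discretisation of an RC filter: alpha = dt / (RC + dt)
        rc = 1.0 / (2.0 * math.pi * f_cutoff)
        alpha = (1.0 / fs) / (rc + 1.0 / fs)
        y = samples[0]
        for x in samples:
            y += alpha * (x - y)  # move a fraction alpha toward each new sample
            yield y

    noisy = [25.0, 25.3, 24.8, 30.0, 25.1, 24.9]  # one obvious spike
    print(list(lowpass(noisy, fs=10.0, f_cutoff=1.0)))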
Noise is an unavoidable reality in physical systems. Electromagnetic interference, ground loops, power supply ripple, and mechanical vibrations all influence measurement quality. Addressing these issues is not just a matter of software filtering. PCB layout, grounding strategy, shielding, cable selection, and connector choice often have a greater impact than any algorithm applied later.
Good signal integrity starts with acknowledging that the physical world is imperfect—and designing accordingly.
Calibration and Drift
Sensors change over time. Temperature cycles, mechanical stress, and component aging all cause drift. Without calibration, these changes silently degrade data quality.
Calibration can take many forms, from factory calibration to periodic field adjustments or software-based offset and gain correction. Some systems cross-check multiple sensors to detect deviations automatically. Regardless of method, calibration must be treated as an ongoing process rather than a one-time activity. Ignoring it often means discovering data quality issues only after decisions have already been made based on faulty data.
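The simplest software-based approach, two-point offset-and-gain correction, can be sketched in a few lines; the reference values and readings here are made up for illustration.

    # Two-point calibration: measure two known references, solve for gain and
    # offset, then apply the resulting correction to every subsequent reading.
    def two_point_calibration(raw_lo, raw_hi, true_lo, true_hi):
        gain = (true_hi - true_lo) / (raw_hi - raw_lo)
        offset = true_lo - gain * raw_lo
        return lambda raw: gain * raw + offset

    # Hypothetical drift: the sensor reads 0.4 at a 0.0 reference, 99.1 at 100.0.
    correct = two_point_calibration(0.4, 99.1, 0.0, 100.0)
    print(correct(50.0))  # about 50.25: the drifted mid-scale reading, corrected

Storing the calibration constants together with the date they were determined makes it possible to watch drift accumulate across successive calibrations, rather than discovering it after the fact.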
Sensor Diagnostics and Fault Detection
In any long-lived system, sensors will fail. Cables break, connectors corrode, components drift outside acceptable limits, and installation issues introduce subtle errors. A robust data collection system must therefore be able not only to measure, but also to detect when measurements can no longer be trusted.
Sensor diagnostics focus on identifying abnormal or implausible behaviour as early as possible. This can include detecting values that are out of range, signals that are stuck at constant levels, excessive noise, sudden discontinuities, or readings that violate known physical constraints. In multi-sensor systems, plausibility checks and cross-comparisons are often used to identify faulty inputs.
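Three of these checks can be sketched in a few lines of Python; the thresholds below are placeholders that would be tuned per sensor and installation.

    RANGE = (-40.0, 125.0)  # physically plausible limits (assumed)
    MAX_STEP = 5.0          # largest credible sample-to-sample jump (assumed)

    def diagnose(window: list[float]) -> list[str]:
        faults = []
        if not all(RANGE[0] <= v <= RANGE[1] for v in window):
            faults.append("out-of-range")
        if len(set(window)) == 1:  # no variation at all across the window
            faults.append("stuck-at")
        if any(abs(b - a) > MAX_STEP for a, b in zip(window, window[1:])):
            faults.append("discontinuity")
        return faults

    print(diagnose([21.0, 21.2, 21.1, 60.0]))  # ['discontinuity']
    print(diagnose([21.0] * 10))               # ['stuck-at']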
Equally important is how faults are reported. Diagnostic information should be propagated upstream as part of the data stream or metadata, rather than handled only locally. This allows backend systems, operators, and analytics pipelines to distinguish between valid data, degraded measurements, and complete sensor failures.
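One possible shape for such a data stream is a reading that always carries its own quality verdict; the field names and categories below are illustrative, not a standard.

    # Every reading carries an explicit quality flag and fault list, so that
    # downstream consumers never have to interpret a bare number on its own.
    from dataclasses import dataclass
    from enum import Enum

    class Quality(Enum):
        GOOD = "good"          # valid data
        DEGRADED = "degraded"  # usable, but flagged (e.g. excess noise)
        BAD = "bad"            # complete sensor failure

    @dataclass(frozen=True)
    class Reading:
        value: float
        unit: str
        timestamp_ms: int
        quality: Quality
        faults: tuple[str, ...] = ()

    print(Reading(21.4, "degC", 1_700_000_000_000, Quality.DEGRADED, ("excess-noise",)))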
Treating diagnostics as a first-class part of signal acquisition prevents silent data corruption and ensures that downstream processing, storage, and decision-making can react appropriately when the physical world—or the sensors observing it—behave unexpectedly.
Sensor Selection as a System Decision
Choosing a sensor is never just about accuracy or price. Power consumption, environmental conditions, data rate requirements, lifetime expectations, certification needs, and maintenance strategy all influence the decision.
A sensor that looks ideal in isolation may be a poor fit when considered in the context of the full data collection system. Viewing sensor selection through a system-level lens helps avoid expensive redesigns later on.
How This Connects to the Bigger Picture
Everything that follows in the data collection pipeline depends on this layer. Edge processing can only filter what has been captured. Connectivity can only transmit what has been sampled. Data platforms and data lakes can only store what exists in the first place.
Poor input data scales just as efficiently as good data.
What’s Next in the Series?
In the next article, we move one layer up the stack and look at what happens immediately after sensing:
Edge Processing and Data Conditioning
How local computation reduces bandwidth, power consumption, and latency—and enables intelligent behaviour close to the physical system.