Measurement Science and Technology - IOPscience
Launched in 1923
Measurement Science and Technology was the world's first scientific instrumentation and measurement journal and the first research journal produced by the Institute of Physics. It covers all aspects of the theory, practice and application of measurement, instrumentation and sensing across science and engineering.
Median submission to first decision before peer review: 8 days
Median submission to first decision after peer review: 52 days
Impact factor: 3.4
CiteScore: 4.4
Open access
Roadmap: Integrating artificial intelligence in structural health monitoring systems
Simon Laflamme et al 2026 Meas. Sci. Technol. 37 103001
Advances in computing and machine learning (ML) methods have led to a rapid rise in artificial intelligence (AI) research and applications in many fields. AI research has benefited from advances in computation hardware, the collection and distribution of large data sets, and the proliferation of software techniques. AI techniques include ML for provable results, deep learning for data exploration, reinforcement learning for control, and active learning for adaptive systems. Likewise, AI algorithms can handle large amounts of data, construct unknown representations, and provide a direct link between data and classification for decision making. These unmatched capabilities have been seen as a path to solving hard engineering problems, including that of structural health monitoring (SHM). SHM consists of automating the condition assessment task of civil, health, mechanical, and aerospace systems using measurements obtained from temporarily or permanently installed sensors. Often, the systems of interest are geometrically large and/or technically complex, which complicates the development and application of physics-based methods. It follows that AI is seen as a key potential contributor enabling SHM in field applications for data-driven analysis. As with many research endeavors, many concepts using AI for SHM have been explored in the literature. Nevertheless, very few AI methods have been deployed in the context of SHM, which may be due to the lack of available data supporting their capabilities, the limited number of integrated AI-SHM systems capable of providing results to users and operators with decision-making capabilities, or the difficulty of certifying AI methods for safety-critical applications. The objective of this Roadmap publication is to discuss the integration of AI at the system level enabling SHM, including associated challenges and opportunities such as those found in common metrics of concern (e.g. transparency, interpretability, explainability, security, certifiability), with a particular focus on providing a path to research and development efforts that could yield impactful field applications. The overview of available methods and directions will provide readers with the applicability of AI for certain SHM designs (software), the availability of common data sets for further AI comparisons (data), and lessons learned in implementation (hardware).
Open access
Uncertainty quantification in particle image velocimetry
A Sciacchitano 2019 Meas. Sci. Technol. 30 092001
Particle image velocimetry (PIV) has become the chief experimental technique for velocity field measurements in fluid flows. The technique yields quantitative visualizations of the instantaneous flow patterns, which are typically used to support the development of phenomenological models for complex flows or for validation of numerical simulations. However, due to the complex relationship between measurement errors and experimental parameters, the quantification of the PIV uncertainty is far from being a trivial task and has often relied upon subjective considerations. Recognizing the importance of methodologies for the objective and reliable uncertainty quantification (UQ) of experimental data, several PIV-UQ approaches have been proposed in recent years that aim at the determination of objective uncertainty bounds in PIV measurements.
This topical review on PIV uncertainty quantification aims to provide the reader with an overview of error sources in PIV measurements and to inform them of the most up-to-date approaches for PIV uncertainty quantification and propagation. The paper first introduces the general definitions and classifications of measurement errors and uncertainties, following the guidelines of the International Organization for Standardization (ISO) and of renowned books on the topic. Details on the main PIV error sources are given, considering the entire measurement chain from timing and synchronization of the data acquisition system, to illumination, mechanical properties of the tracer particles, particle imaging, analysis of the particle motion, data validation and reduction. The focus is on planar PIV experiments for the measurement of two- or three-component velocity fields.
Approaches for the quantification of the uncertainty of PIV data are discussed. These are divided into a-priori UQ approaches, which provide a general figure for the uncertainty of PIV measurements, and a-posteriori UQ approaches, which are data-based and aim at quantifying the uncertainty of specific sets of data. The findings of a-priori PIV-UQ based on theoretical modelling of the measurement chain as well as on numerical or experimental assessments are discussed. The most up-to-date approaches for a-posteriori PIV-UQ are introduced, highlighting their capabilities and limitations.
As many PIV experiments aim at determining flow properties derived from the velocity fields (e.g. vorticity, time-average velocity, Reynolds stresses, pressure), the topic of PIV uncertainty propagation is tackled considering the recent investigations based on Taylor series and Monte Carlo methods. Finally, the uncertainty quantification of 3D velocity measurements by volumetric approaches (tomographic PIV and Lagrangian particle tracking) is discussed.
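The Monte Carlo propagation idea mentioned above can be illustrated with a minimal sketch (not the procedure of any specific paper): perturb each velocity vector by its standard uncertainty many times and take the spread of the recomputed vorticity as its propagated uncertainty. The field, grid, and uncertainty values below are invented for illustration.

```python
import numpy as np

def vorticity(u, v, dx, dy):
    """Out-of-plane vorticity w_z = dv/dx - du/dy via central differences."""
    return np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)

def mc_vorticity_uncertainty(u, v, sigma_u, dx, dy, n_samples=500, seed=0):
    """Monte Carlo propagation: perturb every velocity vector by its
    standard uncertainty sigma_u and report the std of the vorticity."""
    rng = np.random.default_rng(seed)
    draws = [vorticity(u + rng.normal(0, sigma_u, u.shape),
                       v + rng.normal(0, sigma_u, v.shape), dx, dy)
             for _ in range(n_samples)]
    return np.std(draws, axis=0)

# Synthetic check: solid-body rotation u = -y, v = x has vorticity 2 everywhere.
y, x = np.mgrid[-1:1:32j, -1:1:32j]
u, v = -y, x
dx = dy = x[0, 1] - x[0, 0]
sigma_w = mc_vorticity_uncertainty(u, v, sigma_u=0.05, dx=dx, dy=dy)
print(sigma_w.mean())  # propagated vorticity uncertainty (spatially averaged)
```

A Taylor-series treatment would instead linearize the vorticity stencil and combine the per-vector variances analytically; the Monte Carlo route trades computation for generality.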
Open access
Time-gated Raman spectroscopy – a review
Martin Kögler and Bryan Heilala 2020 Meas. Sci. Technol. 32 012002
Time-gated (TG) Raman spectroscopy (RS) has been shown to be an effective technical solution for the major problem whereby sample-induced fluorescence masks the Raman signal during spectral detection. Technical methods of fluorescence rejection have come a long way since the early implementations of large and expensive laboratory equipment, such as the optical Kerr gate. Today, smaller and more affordable options are available. These improvements are largely due to advances in the production of spectroscopic and electronic components, leading to the reduction of device complexity and costs. An integral part of TG Raman spectroscopy is the temporally precise synchronization (picosecond range) between the pulsed laser excitation source and the sensitive and fast detector. The detector is able to collect the Raman signal during the short laser pulses, while fluorescence emission, which has a longer delay, is rejected during the detector dead-time. TG Raman is also resistant to ambient light and thermal emission, due to its short measurement duty cycle.
In recent years, the focus in the study of ultra-sensitive and fast detectors has been on gated and intensified charge coupled devices (ICCDs), or on CMOS single-photon avalanche diode (SPAD) arrays, which are also suitable for performing TG RS. SPAD arrays have the advantage of being even more sensitive, with better temporal resolution compared to gated CCDs, and without the requirement for excessive detector cooling. This review aims to provide an overview of TG Raman from early to recent developments, its applications and extensions.
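The gating principle described above can be sketched numerically. All time constants and amplitudes below are illustrative, not measured values: because the Raman response follows the picosecond excitation pulse while fluorescence decays over nanoseconds, accumulating signal only during a short gate strongly favours the Raman contribution.

```python
import numpy as np

# Illustrative (not measured) time constants: the Raman response follows the
# picosecond excitation pulse, while fluorescence decays over nanoseconds.
t = np.linspace(0, 20e-9, 4000)            # 20 ns observation window
raman = np.exp(-t / 50e-12)                # ~50 ps effective Raman response
fluor = 5.0 * np.exp(-t / 5e-9)            # 5 ns fluorescence, 5x stronger peak

def gated_ratio(gate):
    """Raman-to-fluorescence signal ratio when the detector only
    accumulates photons during [0, gate]."""
    m = t <= gate
    return raman[m].sum() / fluor[m].sum()

print(gated_ratio(100e-12))  # 100 ps gate: fluorescence largely rejected
print(gated_ratio(20e-9))    # no gating: fluorescence dominates
```

With these example decay constants the 100 ps gate improves the Raman-to-fluorescence ratio by well over an order of magnitude relative to ungated detection.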
Open access
Medical & healthcare robotics: a roadmap for enhanced precision, safety, and efficacy
Dimitris K Iakovidis et al 2025 Meas. Sci. Technol. 36 103001
Medical robotics holds transformative potential for healthcare. Robots excel in tasks requiring precision, including surgery and minimally invasive interventions, and they can enhance diagnostics through improved automated imaging techniques. Despite this potential, the adoption of robotics still faces obstacles, such as high costs, technological limitations, regulatory issues, and concerns about patient safety and data security. This roadmap, authored by an international team of experts, critically assesses the state of medical robotics, highlighting existing challenges and emphasizing the need for novel research contributions to improve patient care and clinical outcomes. It explores advancements in machine learning, highlighting the importance of trustworthiness and interpretability in robotics, the development of soft robotics for surgical and rehabilitation applications, and the role of image-guided robotic systems in diagnostics and therapy. Mini, micro, and nano robotics for surgical interventions, as well as rehabilitation and assistive robots, are also discussed. Furthermore, the roadmap addresses service robots in healthcare, covering navigation, logistics, and telemedicine. For each of the topics addressed, current challenges and future directions to improve patient care through medical robotics are suggested.
Open access
Characterization of film condensation: a review of experimental methods and emerging technologies
Samah A Albdour et al 2026 Meas. Sci. Technol. 37 102002
Liquid-film condensation underpins heat-transfer efficiency and safety in nuclear-reactor cooling loops, industrial heat exchangers, and spacecraft thermal-control systems. Yet accurately characterizing film thickness and dynamics remains challenging: although a wide range of diagnostic methods is available, each occupies a distinct and often non-overlapping window in spatial and temporal resolution, accuracy, intrusiveness, cost, and adaptability, which complicates the choice of technique and the comparison and synthesis of data across studies. In this review, we apply a unified six-criteria framework to benchmark leading techniques: classical calorimetric and thermal-probe approaches, thin-film interferometry, infrared thermography, pulse-echo ultrasound, acoustic-emission monitoring, chromatic-confocal sensing, total-internal-reflection imaging, particle-based velocimetry, laser-induced fluorescence, x-ray tomography, and high-speed particle tracking. We also introduce two decision-support schematics: a multi-axis radar chart that maps each method's performance envelope and a decision-tree flowchart that aligns experimental requirements with optimal approaches. Our analysis reveals four critical gaps: noninvasive nanometer-scale mapping over large areas; real-time capture of microsecond-scale transients; co-located measurement of thickness, temperature, and heat flux; and robust deployment in harsh environments. Finally, we survey emerging solutions, including fiber-optic Bragg grating arrays, MEMS-based capacitive and piezoelectric sensors, terahertz time-domain spectroscopy, benchtop x-ray phase-contrast imaging, and digital holographic interferometry, and discuss their integration with machine-learning-driven data fusion and CFD, laying out a roadmap for next-generation, high-fidelity condensation modeling in both terrestrial and microgravity applications.
Open access
Roadmap on measurement technologies for next generation structural health monitoring systems
Simon Laflamme et al 2023 Meas. Sci. Technol. 34 093001
Structural health monitoring (SHM) is the automation of the condition assessment process of an engineered system. When applied to geometrically large components or structures, such as those found in civil and aerospace infrastructure and systems, a critical challenge is in designing the sensing solution that could yield actionable information. This is a difficult task to conduct cost-effectively, because of the large surfaces under consideration and the localized nature of typical defects and damages. There have been significant research efforts in empowering conventional measurement technologies for applications to SHM in order to improve performance of the condition assessment process. Yet, the field implementation of these SHM solutions is still in its infancy, attributable to various economic and technical challenges. The objective of this Roadmap publication is to discuss modern measurement technologies that were developed for SHM purposes, along with their associated challenges and opportunities, and to provide a path to research and development efforts that could yield impactful field applications. The Roadmap is organized into four sections: distributed embedded sensing systems, distributed surface sensing systems, multifunctional materials, and remote sensing. Recognizing that many measurement technologies may overlap between sections, we define distributed sensing solutions as those that involve or imply the utilization of numbers of sensors geometrically organized within (embedded) or over (surface) the monitored component or system. Multifunctional materials are sensing solutions that combine multiple capabilities, for example those also serving structural functions. Remote sensing solutions are contactless, for example cell phones, drones, and satellites; this category also includes remotely controlled robots.
Open access
Physics-informed deep-learning applications to experimental fluid mechanics
Hamidreza Eivazi et al 2024 Meas. Sci. Technol. 35 075303
High-resolution reconstruction of flow-field data from low-resolution and noisy measurements is of interest due to the prevalence of such problems in experimental fluid mechanics, where the measurement data are in general sparse, incomplete and noisy. Deep-learning approaches have been shown to be suitable for such super-resolution tasks. However, a large number of high-resolution examples is needed, which may not be available for many cases. Moreover, the obtained predictions may fail to comply with physical principles, e.g. mass and momentum conservation. Physics-informed deep learning provides frameworks for integrating data and physical laws for learning. In this study, we apply physics-informed neural networks (PINNs) for super-resolution of flow-field data both in time and space from a limited set of noisy measurements without having any high-resolution reference data. Our objective is to obtain a continuous solution of the problem, providing a physically-consistent prediction at any point in the solution domain. We demonstrate the applicability of PINNs for the super-resolution of flow-field data in time and space through three canonical cases: Burgers’ equation, two-dimensional vortex shedding behind a circular cylinder and the minimal turbulent channel flow. The robustness of the models is also investigated by adding synthetic Gaussian noise. Furthermore, we show the capabilities of PINNs to improve the resolution and reduce the noise in a real experimental dataset consisting of hot-wire-anemometry measurements. Our results show the adequate capabilities of PINNs in the context of data augmentation for experiments in fluid mechanics.
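The physics-informed ingredient can be sketched as follows. A PINN adds the residual of the governing equation to its training loss; the sketch below evaluates the viscous Burgers residual on a gridded field with finite differences. In an actual PINN the derivatives come from automatic differentiation of the network output, which is omitted here; this only illustrates the physics-loss term.

```python
import numpy as np

def burgers_residual(u, x, t, nu):
    """Residual u_t + u*u_x - nu*u_xx of the viscous Burgers equation,
    evaluated on a gridded field u[t, x] with finite differences (a
    stand-in for the automatic differentiation a PINN would use)."""
    dt, dx = t[1] - t[0], x[1] - x[0]
    u_t = np.gradient(u, dt, axis=0)
    u_x = np.gradient(u, dx, axis=1)
    u_xx = np.gradient(u_x, dx, axis=1)
    return u_t + u * u_x - nu * u_xx

def physics_loss(u, x, t, nu):
    """Mean-squared equation residual, added to the data-fit loss."""
    return float(np.mean(burgers_residual(u, x, t, nu) ** 2))

# u(x, t) = x / (1 + t) solves Burgers exactly (u_xx = 0), so its
# residual is near zero up to finite-difference error.
x = np.linspace(-1.0, 1.0, 101)
t = np.linspace(0.0, 1.0, 101)
T, X = np.meshgrid(t, x, indexing="ij")
u_exact = X / (1.0 + T)
print(physics_loss(u_exact, x, t, nu=0.01))  # near zero
```

During training, the total loss would be a weighted sum of this residual term and the misfit to the sparse, noisy measurements.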
Open access
Nitrate and nitrite detection in water environment by UV absorption spectroscopy—application to fish farm samples
Sasa-Alexandra Yehia-Alexe et al 2026 Meas. Sci. Technol. 37 065801
The detection of NO₃⁻ and NO₂⁻ in NaNO₃ and NaNO₂ solutions and fish farm water samples was carried out in the 265–365 nm UV range. The equipment used was an optoelectronic platform of 200 × 400 × 250 mm³, composed of electroluminescent diodes with a narrow emission spectrum of 12 nm at full width at half maximum (FWHM), a long-path optical cell, and a broadband photodiode. An optical emission spectrometer was employed as an alternative detector. Both detectors were involved in the calibration procedure of the platform for NO₃⁻ and NO₂⁻ quantification in liquids in the concentration range of 0.3–365 mg l⁻¹. In the case of the photodiode, the limit of detection of NO₃⁻ was 2.2 ± 0.04 mg l⁻¹ at 295 nm, and for NO₂⁻ it was 0.3 ± 0.007 mg l⁻¹ at 340 nm. These values represent improvements of 1.4 mg l⁻¹ and 2.3 mg l⁻¹, respectively, compared to those provided by a UV–Vis spectrophotometer calibrated at the NO₃⁻ and NO₂⁻ maximum absorption wavelengths of 303 and 354 nm. Measurements of 27.5 ± 3.3 mg l⁻¹ NO₃⁻ at 310 nm and 5 ± 0.12 mg l⁻¹ NO₂⁻ at 340 nm were obtained from fish farm water samples. These overcame the issue that UV–Vis spectrophotometric data were unquantifiable due to poor sensitivity and interference from other ions.
Open access
BOS/schlieren synthetic image generation via MIRAGE (MATLAB implementation of ray tracing for analysis of variable density Gradient Environments)
Chandler J Moy et al 2026 Meas. Sci. Technol. 37 137001
This work presents a MATLAB-based, open source tool for generating synthetic images, e.g. background oriented schlieren (BOS), in non-homogeneous refractive index field environments induced by the presence of density gradients. The tool allows the user to model their specific optical setup and input a known flow density field (e.g. from CFD simulations) to generate experiment-like images using geometric ray tracing. MIRAGE (MATLAB Implementation of Ray tracing for Analysis of variable density Gradient Environments) can be used by researchers for designing experiments and assessing post-processing schemes, error analyses, and uncertainty quantification models. The simulation begins by initializing light rays from the light source or background pattern. The light rays are then propagated through the user-defined optical components and density gradient field before intensity is accumulated at the camera sensor plane using a cosine-fourth-power law and bilinear weighting scheme. Ray tracing through the density gradient field is performed using a fourth-order Runge–Kutta scheme and is validated using a simulation of a Luneburg lens. The entire program is also validated by generating a synthetic BOS image using CFD data of a low Mach number turbulent mixing layer. The density gradients calculated from the synthetic BOS image are in good qualitative and quantitative agreement with the CFD span-averaged density gradients. The entire program is housed in a graphical user interface (GUI), making it easy and intuitive for the user to conduct simulations using their own datasets and experimental setups.
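The ray-marching step described above can be sketched in a few lines, assuming the standard ray equation d/ds(n dr/ds) = ∇n written as the first-order system dr/ds = T/n, dT/ds = ∇n and advanced with a fourth-order Runge–Kutta step. The Gaussian index bump is an invented stand-in for a density-gradient field, not MIRAGE's actual input format.

```python
import numpy as np

def n_field(r, n0=1.0, dn=0.05, sigma=0.2):
    """Hypothetical refractive-index field: uniform background plus a
    Gaussian bump at the origin (stand-in for a density-gradient region)."""
    return n0 + dn * np.exp(-np.dot(r, r) / (2.0 * sigma**2))

def grad_n(r, eps=1e-6):
    """Central-difference gradient of the index field."""
    g = np.zeros_like(r)
    for i in range(r.size):
        e = np.zeros_like(r)
        e[i] = eps
        g[i] = (n_field(r + e) - n_field(r - e)) / (2.0 * eps)
    return g

def rk4_ray_step(r, T, ds):
    """One fourth-order Runge-Kutta step of the ray equations
    dr/ds = T/n, dT/ds = grad(n), where T = n * dr/ds."""
    f = lambda r, T: (T / n_field(r), grad_n(r))
    k1r, k1T = f(r, T)
    k2r, k2T = f(r + 0.5 * ds * k1r, T + 0.5 * ds * k1T)
    k3r, k3T = f(r + 0.5 * ds * k2r, T + 0.5 * ds * k2T)
    k4r, k4T = f(r + ds * k3r, T + ds * k3T)
    return (r + ds / 6.0 * (k1r + 2 * k2r + 2 * k3r + k4r),
            T + ds / 6.0 * (k1T + 2 * k2T + 2 * k3T + k4T))

# Launch a ray along +x, slightly above the bump: it bends toward higher n.
r = np.array([-1.0, 0.1])
T = n_field(r) * np.array([1.0, 0.0])
for _ in range(400):
    r, T = rk4_ray_step(r, T, ds=0.005)
print(r)  # final y below 0.1: deflection toward the index maximum
```

In a BOS simulation this per-ray deflection is what displaces the apparent background pattern, from which density gradients are later inferred.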
Open access
Advancing machine fault diagnosis: a detailed examination of convolutional neural networks
Govind Vashishtha et al 2025 Meas. Sci. Technol. 36 022001
The growing complexity of machinery and the increasing demand for operational efficiency and safety have driven the development of advanced fault diagnosis techniques. Among these, convolutional neural networks (CNNs) have emerged as a powerful tool, offering robust and accurate fault detection and classification capabilities. This comprehensive review delves into the application of CNNs in machine fault diagnosis, covering their theoretical foundations, architectural variations, and practical implementations. The strengths and limitations of CNNs are analyzed in this domain, discussing their effectiveness in handling various fault types, data complexities, and operational environments. Furthermore, we explore the evolving landscape of CNN-based fault diagnosis, examining recent advancements in data augmentation, transfer learning, and hybrid architectures. Finally, the future research directions and potential challenges to further enhance the application of CNNs for reliable and proactive machine fault diagnosis are highlighted.
Reconfiguration sequence planning algorithm of modular robot based on maximum similarity matching
Huawei Liu et al 2026 Meas. Sci. Technol. 37 166205
With the aim of bridging the gap between the increasing complexity of space missions and the difficulty of reconfiguration planning for modular robots, this study focuses on a self-developed modular robot composed of custom jointed modular units and proposes a strategy for solving the reconfiguration planning problem. First, a self-reconfiguration sequence planner for the modular robot is designed, and a method for generating robot self-reconfiguration sequences that satisfy topological graph transformations is proposed. This approach reduces the dimensionality of the node-matching search space. Second, an optimal selection model for self-reconfiguration sequences is established. Topology decomposition rules and topological matching strategies are developed, and a set of self-reconfiguration sequences is constructed based on sequence planning methods with known node correspondences. Finally, by analyzing the maximum common subchains of topological structures and constructing virtual subchains, the self-reconfiguration process is optimized using multiple metrics including energy consumption, reconfiguration time, and number of steps. This enables the evaluation of modular robot performance and the selection of optimal reconfiguration sequences. Taking transformations between configurations with fewer than three robot branches as an example, the proposed method is demonstrated and analyzed through simulations. Experimental data on various configuration transformations show that, compared with random matching, nearest Euclidean matching, layer-by-layer matching methods, double-nested A* method, batch informed tree method, graph network method, and reinforcement learning method, the proposed method reduces cost by 68.58%, 83.92%, 84.1%, 62.76%, 64.67%, 41.5%, and 32.41%, respectively, verifying its feasibility and effectiveness.
High-precision leakage detection in fresh-air ducts: an adaptive wavelet denoising approach with enhanced energy tracking
Jian Shi et al 2026 Meas. Sci. Technol. 37 166115
Conventional wavelet denoising methods based on hard and soft threshold functions exhibit significant limitations in terms of signal discontinuity and systematic deviation, resulting in compromised signal smoothness and reconstruction fidelity. To address these challenges, this paper proposes a novel wavelet denoising method incorporating three key innovations: (1) An adaptive threshold calculation method is proposed, which is based on all wavelet coefficients across different decomposition levels to achieve cross-scale noise suppression; (2) To overcome the critical drawbacks of traditional threshold functions, a novel continuous nonlinear threshold function is developed. The proposed method simultaneously eliminates the discontinuity associated with hard thresholding and corrects the systematic bias caused by soft thresholding; (3) An adaptive energy detection algorithm is proposed to further improve the time positioning accuracy of the denoised signal. This algorithm autonomously tracks signal energy fluctuations and dynamically optimizes detection thresholds in real-time, achieving unprecedented accuracy in rising edge identification. Finally, the proposed method is evaluated on a real-world fresh-air duct leakage dataset and compared against five baselines. The experimental results demonstrate that the proposed wavelet threshold denoising method outperforms all baseline methods across the established evaluation criteria. Furthermore, the proposed energy detection algorithm achieves a remarkable time positioning error of only 0.0009 s, significantly surpassing the performance of the traditional envelope method. These findings conclusively validate the robustness and practical efficacy of the proposed method for accurate leakage detection in complex multi-source noise environments characteristic of fresh-air duct systems.
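For context, the two traditional threshold functions criticized above can be written in a few lines, alongside the classical non-negative garrote as one example of a continuous, low-bias compromise. The garrote is shown for illustration only; it is not the function proposed in the paper.

```python
import numpy as np

def hard_threshold(w, lam):
    """Hard thresholding: keeps large coefficients unchanged but is
    discontinuous at |w| = lam."""
    return np.where(np.abs(w) > lam, w, 0.0)

def soft_threshold(w, lam):
    """Soft thresholding: continuous, but shrinks every surviving
    coefficient by lam (a systematic bias)."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def garrote_threshold(w, lam):
    """Non-negative garrote: continuous like soft thresholding, yet its
    bias lam**2/|w| vanishes for large coefficients."""
    out = np.zeros_like(w, dtype=float)
    big = np.abs(w) > lam
    out[big] = w[big] - lam**2 / w[big]
    return out

w = np.array([10.0, 1.5, 0.5])
print(hard_threshold(w, 1.0))     # 10.0, 1.5, 0.0
print(soft_threshold(w, 1.0))     # 9.0, 0.5, 0.0  (large coeff biased by 1.0)
print(garrote_threshold(w, 1.0))  # 9.9, ~0.833, 0.0  (bias only 0.1 at w=10)
```

A continuous threshold function whose bias decays with coefficient magnitude preserves both smoothness (no hard-threshold jumps) and reconstruction fidelity (no constant soft-threshold offset), which is the trade-off the proposed function targets.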
Open access
Revisiting rotating bending fatigue in FFF polymer specimens using geometry-based cross-section properties for stress estimation
Carolina Bermudo Gamboa et al 2026 Meas. Sci. Technol. 37 165601
The fatigue performance of polymer components fabricated by fused filament fabrication (FFF) under cyclic loading is strongly influenced by their shell–infill configuration. Standard rotating bending fatigue procedures, such as ISO 1143:2010, assume solid and homogeneous circular sections, which can bias stress estimations when applied to FFF specimens. In the absence of a specific fatigue standard for additively manufactured polymers, this work compares four analytical approaches to calculate the section modulus and associated moment of inertia of FFF-printed specimens subjected to rotating bending fatigue tests, enabling accurate estimation of the maximum surface stress through the classical flexural relation. The approaches range from the solid-section assumption to an analytical geometry-based (AGB) formulation that represents horizontally printed morphologies with ovalized shell–infill interfaces and non-uniform shell thickness. The impact of each approach is assessed by reconstructing S–N curves using (i) new rotating bending tests on polylactic acid (PLA) specimens manufactured with grid and honeycomb infill patterns and varying infill density under constant bending moment, and (ii) literature data recalculated using geometry-aware section properties. Reprocessing published PLA datasets shows that geometry-aware stress estimation in some cases can approximately double the inferred fatigue life relative to ISO-based calculations. Experimentally, grid infill provides longer fatigue lives than honeycomb under otherwise identical conditions. Overall, the AGB approach provides the most consistent stress estimation for horizontally printed cylinders and improves the comparability of rotating bending fatigue results across different FFF configurations.
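The classical flexural relation invoked above is σ_max = M/Z with section modulus Z = I/c. A minimal sketch of how the section-property choice changes the inferred stress, using a plain annulus as a deliberately simplified shell-only model (the paper's AGB formulation is more detailed, and all dimensions below are invented):

```python
import math

def solid_section_modulus(d):
    """Z = I/c for a solid circular section: I = pi*d**4/64, c = d/2."""
    return math.pi * d**3 / 32.0

def shell_section_modulus(d_outer, shell_t):
    """Annulus approximation: only the printed shell carries load
    (infill stiffness deliberately ignored in this simplified model)."""
    d_inner = d_outer - 2.0 * shell_t
    return math.pi * (d_outer**4 - d_inner**4) / (32.0 * d_outer)

def max_bending_stress(moment, Z):
    """Classical flexural relation: sigma_max = M / Z."""
    return moment / Z

# Invented example: 8 mm gauge diameter, 1.2 mm shell, 0.5 N m moment.
M = 0.5
d = 8.0e-3
stress_solid = max_bending_stress(M, solid_section_modulus(d))
stress_shell = max_bending_stress(M, shell_section_modulus(d, 1.2e-3))
print(stress_solid / 1e6, "MPa under the solid-section assumption")
print(stress_shell / 1e6, "MPa when only the shell carries load")
```

Because the shell-only section modulus is smaller, the same applied moment corresponds to a higher surface stress, which shifts every point of the reconstructed fatigue curve relative to the ISO solid-section calculation.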
Open access
Perspective on measurements and modeling of Earth’s climate
Graziano Coppa and Laura Teresa Massano 2026 Meas. Sci. Technol. 37 161001
This paper celebrates the achievements in the modeling of the Earth’s atmosphere, ocean, and land that led to the discovery of anthropogenic climate change and, ultimately, to the awarding of the 2021 Nobel Prize in Physics to Syukuro Manabe and Klaus Hasselmann. The paper will succinctly recap the history of the field, from the first pioneering years of Tyndall and Arrhenius, to the introduction of computers, to the latest breakthroughs and refinements. It will connect the work of modellers, who strive to create ‘digital twins’ of our planet in order to simulate its hydro-dynamical, chemical, and physical evolution through computerized models, with the observations needed to initialize the models and to validate them through comparisons and reanalysis, bridging the delicate gap between theory and measurements. Finally, we will present an overview of the future direction of this field of research, trying to highlight the challenges but also the opportunities and the importance of understanding the evolution of the Earth, especially for thermal-related quantities.
A machine learning-based noise pollution detection system for urban traffic
Nurullah Sari et al 2026 Meas. Sci. Technol. 37 165102
Urban road-traffic noise is commonly reported using long-term averaged indicators, which limits analysis of short-duration high-level events and their association with individual vehicles. We present an event-driven acoustic–vision measurement framework that converts short-window A-weighted equivalent-level exceedances (L_Aeq,0.1 s) into time-stamped, vehicle-attributed records. Audio from two directional microphones is sampled at 48 kHz and processed in 100 ms windows. A digital A-weighting filter is implemented in software following the IEC 61672-1:2013 A-weighting specification to estimate L_Aeq,0.1 s at the receiver. When a user-defined threshold is exceeded, the system triggers synchronised image capture and performs licence-plate detection (using YOLOv5) and optical character recognition (OCR) to document the corresponding vehicle. A multi-threaded implementation processes audio and video concurrently on a laptop, while the resulting database links exceedance time, lane/channel metadata, geolocation and plate-text outputs to support operationally actionable vehicle-level attribution. The detector achieved mAP@0.5 = 0.988 (mAP@0.5:0.95 = 0.73). In peak-hour field deployment on a two-lane urban road, plate-detection success reached 94.2%, and OCR accuracy was 82.6% for exceedance frames.
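The short-window equivalent level can be sketched as follows. This is a minimal illustration in which the IEC 61672-1 A-weighting filter itself is omitted; the input is assumed to be already A-weighted pressure samples.

```python
import numpy as np

P_REF = 20e-6  # reference sound pressure, 20 uPa

def short_leq(p, fs, window_s=0.1):
    """Equivalent continuous level per non-overlapping window:
    Leq = 10*log10(mean(p**2) / P_REF**2). Input p is assumed to be
    A-weighted pressure samples (the digital A-weighting filter that
    the paper applies upstream is omitted here)."""
    n = int(round(fs * window_s))
    blocks = p[: (len(p) // n) * n].reshape(-1, n)
    return 10.0 * np.log10(np.mean(blocks**2, axis=1) / P_REF**2)

# A 1 kHz tone at 1 Pa RMS corresponds to ~94 dB.
fs = 48_000
t = np.arange(fs) / fs
p = np.sqrt(2.0) * np.sin(2.0 * np.pi * 1000.0 * t)  # 1 Pa RMS, 1 s long
levels = short_leq(p, fs)   # ten 100 ms values, each ~94 dB
exceed = levels > 90.0      # the trigger condition at a 90 dB threshold
print(levels.round(1), exceed.all())
```

In the described system, each `True` in the exceedance mask would trigger the synchronised image capture and plate-recognition pipeline for that 100 ms window.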
The following article is
Open access
Measurement and analysis of point patterns in materials science: methods, applications, and prospects
Efi-Maria Papia
et al
2026
Meas. Sci. Technol.
37
152001
View article
, Measurement and analysis of point patterns in materials science: methods, applications, and prospects
PDF
, Measurement and analysis of point patterns in materials science: methods, applications, and prospects
The spatial distribution of material structures, defects, and microstructural features plays a crucial role in determining material properties such as mechanical strength, electrical conductivity, and thermal transport. Advances in high-resolution characterization techniques, such as atom probe tomography, now generate microscopy datasets with unprecedented precision and volume. These developments have created a growing need for robust statistical and metrological tools to quantify spatial organization in complex materials. Point pattern analysis (PPA) offers a powerful metrological framework to quantify these spatial arrangements, providing insights into clustering, ordering, and spatial correlations across multiple length scales. This review explores key PPA methodologies relevant to materials science, including distance-, density- and geometry-based approaches, while also surveying recent machine learning applications. The integration of modern computational approaches has further enhanced PPA’s ability to automate feature detection and predict microstructural transformations with greater precision. While challenges remain in handling large-scale datasets, experimental noise, and complex anisotropic structures, ongoing advancements in high-performance computing, artificial intelligence, and in-situ analysis continue to expand the applicability of PPA in materials science.
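As a concrete flavour of the distance-based methods such a review covers, a minimal Ripley's K estimator for a 2D point pattern might look like the sketch below. It ignores edge correction and is an illustration only, not code from the review; under complete spatial randomness the estimate approaches πr².

```python
import numpy as np


def ripley_k(points, radii, area):
    """Naive Ripley's K estimate for a 2D point pattern (no edge correction).

    Counts ordered point pairs closer than each radius, normalised by the
    number of points and the pattern intensity n/area."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    lam = n / area  # intensity (points per unit area)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude self-pairs
    return np.array([np.sum(d < r) / (n * lam) for r in radii])
```

Comparing the estimate against the πr² baseline is the usual way to read off clustering (above) versus ordering (below) at each length scale.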
Deep learning for wind turbine fault diagnosis: advances, challenges, and future perspectives
Wenyi Liu
et al
2026
Meas. Sci. Technol.
37
132001
View article
, Deep learning for wind turbine fault diagnosis: advances, challenges, and future perspectives
PDF
, Deep learning for wind turbine fault diagnosis: advances, challenges, and future perspectives
With the continuous expansion of wind turbine scale, their fault identification and operation & maintenance (O&M) work are facing increasingly severe challenges. Traditional fault diagnosis methods, characterized by complex system modeling, high costs, and limited generalization capabilities, struggle to meet the requirements of large-scale applications. As an emerging intelligent technology, deep learning can automatically extract potential patterns from massive operational data, significantly reducing manual dependence. Consequently, it has garnered widespread attention in recent years and been applied in practice. This paper systematically reviews the research progress of deep learning in the field of wind turbine fault diagnosis, focusing on elaborating the principles and learning strategies of mainstream deep learning model architectures applicable to this domain. Through in-depth case analysis of two types of application scenarios—component-oriented and task-oriented—this paper reveals the breakthrough progress achieved by deep learning in fault diagnosis, and details its practical application processes in fault detection, diagnosis, and prediction through specific cases. Despite the advantage of high diagnostic accuracy and its role in advancing intelligent fault diagnosis, this paper critically points out that current research still faces challenges such as insufficient data, high model complexity, poor interpretability, and limitations in industrial on-site deployment. On this basis, future research should focus on breaking through key directions including few-shot learning, model lightweighting, cross-domain adaptation, and improved interpretability, while calling for the establishment of public datasets and unified evaluation standards. 
Only by addressing these challenges through interdisciplinary collaboration can the large-scale and reliable application of deep learning in wind power intelligent O&M be realized, thereby supporting cost reduction, efficiency improvement, and the safe and stable operation of the wind power industry. In summary, this paper provides a systematic reference for the further research and application of deep learning in the field of wind turbine fault diagnosis.
The following article is
Open access
Characterization of film condensation: a review of experimental methods and emerging technologies
Samah A Albdour
et al
2026
Meas. Sci. Technol.
37
102002
View article
, Characterization of film condensation: a review of experimental methods and emerging technologies
PDF
, Characterization of film condensation: a review of experimental methods and emerging technologies
Liquid‐film condensation underpins heat‐transfer efficiency and safety in nuclear‐reactor cooling loops, industrial heat exchangers, and spacecraft thermal‐control systems; yet accurately characterizing film thickness and dynamics remains challenging: although a wide range of diagnostic methods is available, each occupies a distinct and often non-overlapping window in spatial and temporal resolution, accuracy, intrusiveness, cost, and adaptability, which complicates the choice of technique and the comparison and synthesis of data across studies. In this review, we apply a unified six‐criteria framework to benchmark the leading techniques: classical calorimetric and thermal‐probe approaches, thin‐film interferometry, infrared thermography, pulse‐echo ultrasound, acoustic‐emission monitoring, chromatic‐confocal sensing, total‐internal‐reflection imaging, particle‐based velocimetry, laser‐induced fluorescence, x-ray tomography, and high‐speed particle tracking, and introduce two decision‐support schematics: a multi‐axis radar chart that maps each method’s performance envelope and a decision‐tree flowchart that aligns experimental requirements with optimal approaches. Our analysis reveals four critical gaps: noninvasive nanometer‐scale mapping over large areas; real‐time capture of microsecond‐scale transients; co‐located measurement of thickness, temperature, and heat flux; and robust deployment in harsh environments. Finally, we survey emerging solutions: fiber-optic Bragg grating arrays, MEMS‐based capacitive and piezoelectric sensors, terahertz time‐domain spectroscopy, benchtop x-ray phase-contrast imaging, and digital holographic interferometry, and discuss their integration with machine-learning–driven data fusion and CFD, laying out a roadmap for next‐generation, high‐fidelity condensation modeling in both terrestrial and microgravity applications.
The following article is
Open access
Roadmap: Integrating artificial intelligence in structural health monitoring systems
Simon Laflamme
et al
2026
Meas. Sci. Technol.
37
103001
View article
, Roadmap: Integrating artificial intelligence in structural health monitoring systems
PDF
, Roadmap: Integrating artificial intelligence in structural health monitoring systems
Advances in computing and machine learning (ML) methods have led to a rapid rise in artificial intelligence (AI) research and applications in many fields. AI research benefitted from advances in computation hardware, collection and distribution of large data sets, and proliferation of software techniques. AI techniques include ML for provable results, deep learning for data exploration, reinforcement learning for control, and active learning for adaptive systems. Likewise, AI algorithms can handle large amounts of data, construct unknown representations, and provide a direct link between data and classification for decision making. These unmatched capabilities have been seen as a path to solving hard engineering problems, including that of structural health monitoring (SHM). SHM consists of automating the condition assessment task of civil, health, mechanical, and aerospace systems using measurements obtained from temporary or permanently installed sensors. Often, the systems of interest are geometrically large and/or technically complex, which complicates the development and application of physics-based methods. It follows that AI is seen as a key potential contributor enabling SHM in field applications for data-driven analysis. As with many research endeavors, many concepts using AI for SHM have been explored in the literature. Nevertheless, very few AI methods have been deployed in the context of SHM, which may be due to the lack of available data supporting their capabilities, limited integrated AI-SHM systems capable of providing results to users and operators with decision-making capabilities, or certification of AI methods for safety-critical applications. The objective of this Roadmap publication is to discuss the integration of AI at the system level enabling SHM, including associated challenges and opportunities such as those found in common metrics of concern (e.g. 
transparency, interpretability, explainability, security, certifiability, etc), with a particular focus on providing a path to research and development efforts that could yield impactful field applications. The overview of available methods and directions will provide the readers with applicability of AI for certain SHM designs (software), availability of common data sets for further AI comparisons (data), and lessons learned in implementation (hardware).
Review on deep learning-assisted soft sensors using imperfect measurements
Yi Liu
et al
2026
Meas. Sci. Technol.
37
102001
View article
, Review on deep learning-assisted soft sensors using imperfect measurements
PDF
, Review on deep learning-assisted soft sensors using imperfect measurements
Deep learning has boosted process soft sensing, but field performance often falls short because measurements are imperfect. Outliers, noise, and missing data are common in plants, and they distort the learned mapping between inputs and targets. This review outlines a practical agenda for deep learning-assisted soft sensors under imperfect measurement. First, a measurement-centered view is adopted to scope three major imperfection classes in practice. Second, methods are organized along three fronts that align with deployment needs: resisting outliers, reducing noise, and learning with incomplete data. The main ideas are synthesized for each front, and their strengths, limits, and suitability to plant constraints are summarized. The review goes beyond single imperfections. Interactions among outliers, noise, and missing values are analyzed, and their joint impact on training, validation, and online use is explained. A deployment pathway is then given, including stress testing under mixed imperfection, calibration of uncertainty, and rules for safe action when confidence is low. Compared with prior surveys focusing on outlier detection, robust modeling, or data cleaning in isolation, this work provides a unified and deployment-focused view specific to soft sensing, with clear links from measurement defects to reliable practice in plants.
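One widely used measurement-cleaning step of the kind this review organises under "resisting outliers" is the Hampel filter, which replaces point outliers in a sensor series with the local median. The sketch below is illustrative, not code from the review, and assumes a univariate series.

```python
import numpy as np


def hampel_filter(x, k=3, n_sigma=3.0):
    """Replace point outliers with the local median.

    Uses a sliding window of 2k+1 samples; a sample is flagged when it
    deviates from the window median by more than n_sigma scaled MADs."""
    x = np.asarray(x, dtype=float).copy()
    for i in range(len(x)):
        lo, hi = max(0, i - k), min(len(x), i + k + 1)
        med = np.median(x[lo:hi])
        mad = 1.4826 * np.median(np.abs(x[lo:hi] - med))  # robust sigma estimate
        if mad > 0 and abs(x[i] - med) > n_sigma * mad:
            x[i] = med
    return x
```

In a soft-sensing pipeline this would run before model training, so that a single spiked reading does not distort the learned input–target mapping; noise reduction and missing-data handling would be separate stages.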
Fault Diagnosis of Bearings Under Small Sample and Variable Working Conditions Based on DG-FWC-KELM
Xu et al
View accepted manuscript
, Fault Diagnosis of Bearings Under Small Sample and Variable Working Conditions Based on DG-FWC-KELM
PDF
, Fault Diagnosis of Bearings Under Small Sample and Variable Working Conditions Based on DG-FWC-KELM
In actual industrial scenarios, bearing fault diagnosis under variable working conditions suffers from data scarcity (small samples) and data distribution shift (domain generalization), and traditional deep learning methods often exhibit defects such as severe overfitting, poor domain generalization ability, and time-consuming training. Therefore, this paper proposes a Domain Generalization-oriented Wide-Field Convolution Kernel Extreme Learning Machine fault diagnosis method with strong generalization capabilities. First, a Wide-Field Convolution module is constructed, utilizing large-size convolution kernels to simulate "wide-angle" perception, which efficiently captures global noise-resistant features while reducing computational complexity. Second, a high-dimensional feature constraint mechanism is designed, introducing LeakyReLU combined with Tanh functions and L2 norms to impose physical constraints, effectively resolving numerical explosion and enhancing feature separability. Finally, the Kernel Extreme Learning Machine is adopted to replace the traditional Softmax layer, leveraging its analytical solution characteristics to construct a robust classification hyperplane under small sample conditions, significantly improving training speed and generalization accuracy. Experimental results on cross-domain variable working conditions using Jiangnan University and Case Western Reserve University datasets demonstrate that the method can effectively identify fault states in unseen target domains using only a small number of source domain samples for training. Furthermore, to deeply examine the global feature capture capability of the proposed "Wide-Field Convolution" module under strong background noise and complex vibration coupling, targeted high-difficulty test data from Shanghai Maritime University is introduced for verification.
The results confirm that the proposed DG-FWC-KELM model can effectively overcome interference under extremely harsh working conditions and maintains excellent diagnostic robustness in domain generalization scenarios.
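The "analytical solution" property of the kernel extreme learning machine mentioned above can be illustrated in a few lines: with kernel matrix K, one-hot targets T, and regularisation C, the output weights solve (K + I/C)β = T in closed form, with no iterative training. The class below is an illustrative sketch with an RBF kernel, not the paper's DG-FWC-KELM implementation.

```python
import numpy as np


def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)


class KELM:
    """Kernel extreme learning machine: closed-form ridge solution in kernel
    space, replacing an iteratively trained softmax head."""

    def __init__(self, C=1.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        T = np.eye(int(y.max()) + 1)[y]              # one-hot class targets
        K = rbf_kernel(self.X, self.X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(K)) / self.C, T)
        return self

    def predict(self, X):
        K = rbf_kernel(np.asarray(X, dtype=float), self.X, self.gamma)
        return (K @ self.beta).argmax(axis=1)
```

Because training reduces to one linear solve, the head remains cheap and stable even when only a handful of labelled samples per class are available, which is the small-sample setting the paper targets.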
RwoTracker: Wheel–rail lateral displacement detection algorithm based on point tracking
Wang et al
View accepted manuscript
, RwoTracker: Wheel–rail lateral displacement detection algorithm based on point tracking
PDF
, RwoTracker: Wheel–rail lateral displacement detection algorithm based on point tracking
Effective wheel–rail contact is fundamental to the safe operation of railway vehicles. To address the susceptibility of conventional vision-based methods to motion blur and their lack of temporal consistency, this study proposes a novel dynamic trajectory tracking and measurement paradigm based on an enhanced CoTracker3 framework. First, a lightweight initial point selection network, RWPointNet, built upon MobileNetV4, is developed to enable targeted sampling and accurate initialization of key measurement points within the wheel–rail contact region. Second, a multi-scale four-dimensional (4D) correlation modeling strategy is introduced to capture pixel-level motion correspondences across both spatial and temporal domains. In addition, an as-rigid-as-possible (ARAP) constraint is incorporated as a physical prior to preserve the geometric consistency of the wheel–rail structure under high-speed motion, thereby effectively suppressing trajectory drift. Experimental results on a self-constructed dataset demonstrate an accuracy of 99.9% under an 8-pixel threshold, with a mean threshold accuracy of 84.8%, alongside substantial reductions in both MAE and RMSE. MAE is 1.31 pixels, corresponding to a physical displacement of 1.02 mm. Furthermore, the proposed method achieves an inference speed of 62 FPS on the Jetson Orin Nano edge computing platform. Overall, this work achieves a favorable balance between robustness and real-time performance, offering a promising new approach for wheel–rail displacement monitoring.
Design and Field Validation of a High-Precision and High-Efficiency Seismic Receiving System for Extreme Antarctic Environments
Zhou et al
View accepted manuscript
, Design and Field Validation of a High-Precision and High-Efficiency Seismic Receiving System for Extreme Antarctic Environments
PDF
, Design and Field Validation of a High-Precision and High-Efficiency Seismic Receiving System for Extreme Antarctic Environments
Antarctic ice-sheet sounding is critical for advancing insights into polar environmental systems. Seismic exploration constitutes an effective technique for Antarctic ice-sheet detection, as seismic waves exhibit substantial penetration depth and high resolution in homogeneous snow-ice media. Nevertheless, extreme low temperatures, harsh ambient conditions, and the heterogeneous geographic and environmental characteristics of the Antarctic region impose significant constraints on both detection efficiency and precision. To address these limitations, this paper proposes a seismic signal receiving system specifically engineered for the heterogeneous and harsh Antarctic environment. For detection efficiency, the system incorporates wired, wireless, and autonomous storage nodes to enable a distributed hybrid network, coupled with a power management module reducing the standby power from 7 W to 30 mW, and a wind-solar hybrid energy supply unit supporting operation down to −50 °C. To enhance precision, a neural network-based calibration method is utilized to mitigate temperature-induced drift in the analog-to-digital converter (ADC), achieving a calibration accuracy of 9.3 × 10⁻³ ppm. Additionally, high-precision acquisition and synchronization schemes are adopted, which realize a time synchronization precision of ±50 ns, input-referred noise of 0.20 μVrms @ 40 dB, and a dynamic range of 130 dB. Field validation experiments were performed near Taishan Station. The system successfully recorded signals from an artificial seismic source, based on which the ice thickness at the survey site was estimated to be 3.6 km; this estimation is highly consistent with data acquired via ground-penetrating radar (GPR). These findings verify the system’s performance and reliability, thereby laying a critical technological foundation for subsequent Antarctic ice-sheet investigations.
Prediction of assembly posture errors between microhemispherical resonators and electrode plates
Li et al
View accepted manuscript
, Prediction of assembly posture errors between microhemispherical resonators and electrode plates
PDF
, Prediction of assembly posture errors between microhemispherical resonators and electrode plates
As the core sensitive unit of a resonant gyroscope, the micro-hemispherical resonator has the advantage of a high quality factor, enabling the development of high-precision micro-hemispherical resonant gyroscopes. A key challenge lies in the assembly accuracy between the micro-hemispherical resonator and the electrode plate. Because assembly errors often require repeated adjustments, the assembly process is time-consuming and inefficient. In this work, an assembly posture error identification method for a micro-hemispherical resonator with in-plane electrodes is proposed. First, based on the defined assembly posture error parameters of the in-plane electrodes and the micro-hemispherical resonator, a mathematical model relating assembly posture errors to capacitance is established. Second, this model is used to analyse the influence of posture error parameters on capacitance and to generate capacitance data under different prescribed assembly posture errors. Third, a neural-network regression model is established to map capacitance values to assembly posture error parameters. The capacitance-pose dataset used for model training and testing is generated from the theoretical capacitance model, and the corresponding pose parameters are used as predefined reference labels during training and evaluation. The results show that the proposed model has good regression capability on the constructed capacitance-pose dataset. In addition, assembly experiments were carried out to compare the capacitance uniformity under coarse alignment and fine alignment. The experimental results show that fine alignment can significantly reduce assembly errors and improve capacitance uniformity. This work provides a theoretical basis for capacitance-guided assembly optimisation of micro-hemispherical resonators.
A bearing remaining useful life prediction method integrating cosine similarity change point detection
Jia et al
View accepted manuscript
, A bearing remaining useful life prediction method integrating cosine similarity change point detection
PDF
, A bearing remaining useful life prediction method integrating cosine similarity change point detection
Rolling bearings are key wear-prone components whose degradation exhibits strong nonlinearity, phased evolution, and heavy noise. These characteristics increase the difficulty of degradation modeling and remaining useful life (RUL) prediction. To address inaccurate change point detection and experience-dependent model selection in existing studies, this paper proposes an adaptive RUL prediction framework based on data-driven phase identification and segmented modeling. An adaptive Particle Filter-Sliding Window Cosine Similarity (PF-SCS) method is developed to detect degradation trend change points. It couples state recursion with shape similarity to accurately locate the accelerated deterioration point and avoids subjective threshold settings. After change point identification, a Fusion Fitting Criterion (FFC) is constructed by standardizing and combining RSS and AIC, enabling objective model selection for the post-change-point degradation stage. A Wiener-process-based state-space model is further established, and maximum likelihood estimation is used for online parameter updating to obtain a time-varying RUL posterior distribution. Experiments on the XJTU-SY bearing dataset and self-built accelerated life tests verify the proposed framework. The PF-SCS method precisely detects degradation mutations, and the Wiener-based model achieves accurate and reliable RUL prediction.
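The sliding-window shape-similarity idea behind the change-point detector can be sketched as below. Cosine similarity is scale-invariant, so it responds to changes in the local shape of the degradation indicator rather than its level; this is a plain illustration of that idea, not the paper's PF-SCS method (which couples it with particle-filter state recursion).

```python
import numpy as np


def cosine_change_scores(signal, w):
    """Cosine similarity between each pair of adjacent length-w windows.

    A sharp drop in the score flags a change in the local shape of the
    degradation indicator, e.g. the onset of accelerated deterioration."""
    signal = np.asarray(signal, dtype=float)
    scores = []
    for i in range(w, len(signal) - w + 1):
        a, b = signal[i - w:i], signal[i:i + w]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        scores.append(float(np.dot(a, b) / denom) if denom else 1.0)
    return np.array(scores)
```

A threshold on the score (or on its drop relative to a running baseline) then locates the candidate change point without manual tuning of absolute indicator levels.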
More Accepted manuscripts
The following article is
Open access
Revisiting rotating bending fatigue in FFF polymer specimens using geometry-based cross-section properties for stress estimation
Carolina Bermudo Gamboa
et al
2026
Meas. Sci. Technol.
37
165601
View article
, Revisiting rotating bending fatigue in FFF polymer specimens using geometry-based cross-section properties for stress estimation
PDF
, Revisiting rotating bending fatigue in FFF polymer specimens using geometry-based cross-section properties for stress estimation
The fatigue performance of polymer components fabricated by fused filament fabrication (FFF) under cyclic loading is strongly influenced by their shell–infill configuration. Standard rotating bending fatigue procedures, such as ISO 1143:2010, assume solid and homogeneous circular sections, which can bias stress estimations when applied to FFF specimens. In the absence of a specific fatigue standard for additively manufactured polymers, this work compares four analytical approaches to calculate the section modulus and associated moment of inertia of FFF-printed specimens subjected to rotating bending fatigue tests, enabling accurate estimation of the maximum surface stress through the classical flexural relation. The approaches range from the solid-section assumption to an analytical geometry-based (AGB) formulation that represents horizontally printed morphologies with ovalized shell–infill interfaces and non-uniform shell thickness. The impact of each approach is assessed by reconstructing S–N curves using (i) new rotating bending tests on polylactic acid (PLA) specimens manufactured with grid and honeycomb infill patterns and varying infill density under constant bending moment, and (ii) literature data recalculated using geometry-aware section properties. Reprocessing published PLA datasets shows that geometry-aware stress estimation in some cases can approximately double the inferred fatigue life relative to ISO-based calculations. Experimentally, grid infill provides longer fatigue lives than honeycomb under otherwise identical conditions. Overall, the AGB approach provides the most consistent stress estimation for horizontally printed cylinders and improves the comparability of rotating bending fatigue results across different FFF configurations.
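The stress-estimation issue at the heart of this comparison reduces to which moment of inertia enters the flexural formula σ = M·c/I. As a simplified illustration (not the paper's AGB formulation, which models ovalized interfaces and non-uniform shell thickness), crediting only an outer shell instead of a solid circular section raises the estimated surface stress for the same bending moment:

```python
import math


def surface_stress_solid(M, D):
    """Max bending stress assuming a solid circular section (ISO-style).

    M: bending moment [N·m], D: specimen diameter [m]."""
    I = math.pi * D ** 4 / 64          # second moment of area, solid circle
    return M * (D / 2) / I


def surface_stress_shell(M, D, t):
    """Max bending stress crediting only an outer shell of thickness t.

    A crude stand-in for geometry-aware section properties: the infill
    is treated as carrying no bending load."""
    d = D - 2 * t                      # inner diameter of the load-bearing shell
    I = math.pi * (D ** 4 - d ** 4) / 64
    return M * (D / 2) / I
```

Because the shell-only I is smaller, the same applied moment maps to a higher surface stress, which shifts each test point on the S–N curve and hence the inferred fatigue life.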
The following article is
Open access
Perspective on measurements and modeling of Earth’s climate
Graziano Coppa and Laura Teresa Massano 2026
Meas. Sci. Technol.
37
161001
View article
, Perspective on measurements and modeling of Earth’s climate
PDF
, Perspective on measurements and modeling of Earth’s climate
This paper celebrates the achievements in the modeling of the Earth’s atmosphere, ocean, and land that led to the discovery of anthropogenic climate change and, ultimately, to the awarding of the 2021 Nobel Prize in Physics to Syukuro Manabe and Klaus Hasselmann. The paper will succinctly recap its history, from the first pioneering years of Tyndall and Arrhenius, to the introduction of computers, to the latest breakthroughs and refinements. It will connect the work of modelists, who strive to create ‘digital twins’ of our planet in order to simulate its hydro-dynamical, chemical, and physical evolution through computerized models, and the observations needed to initialize the models themselves and validate them through comparisons and reanalysis, bridging the delicate gap between theory and measurements. Finally, we will present an overview of the future direction of this field of research, trying to highlight the challenges but also the opportunities and the importance of understanding the evolution of the Earth, especially for thermal-related quantities.
The following article is
Open access
Experimental study and prediction of the effects of freeze–thaw times and freeze temperatures on the mechanical characteristics of silty clays
Haotian Guo
et al
2026
Meas. Sci. Technol.
37
165802
View article
, Experimental study and prediction of the effects of freeze–thaw times and freeze temperatures on the mechanical characteristics of silty clays
PDF
, Experimental study and prediction of the effects of freeze–thaw times and freeze temperatures on the mechanical characteristics of silty clays
The structure and characteristics of soils are influenced by freeze–thaw cycles, and the mechanical characteristics of silty clays exhibit distinct patterns under these conditions. A silty clay common in engineering is taken as the research object to analyze the effects of freeze–thaw times and freeze temperatures on its mechanical properties. The freeze–thaw cycle test, direct shear test, and consolidation compression test are used to examine these effects, and a neural network model is used to predict the mechanical properties of the silty clay based on the outcomes of the indoor experiments. Test results indicate that, as the number of freeze–thaw cycles increases, the shear stress–displacement curves of soil samples gradually transition from a strain-softening pattern to a strain-hardening pattern. The compression coefficient progressively increases, while the compression modulus and the magnitude of compression modulus variation gradually decrease. As the freeze temperature decreases, the peak values of the shear stress–displacement curve diminish and exhibit a strain-hardening pattern. The changes in shear strength, compressive modulus, and compression coefficient all gradually diminish. The strength and compression indices of silty clay were predicted using a BP neural network with genetic algorithm optimization. The results can provide reference and guidance for the prediction of the strength and compression indices of silty clay in the quaternary freeze zone.
The following article is
Open access
Accurate laboratory testing of low-frequency triaxial vibration sensors under various temperature conditions
Tomofumi Shimoda
et al
2026
Meas. Sci. Technol.
37
165001
View article
, Accurate laboratory testing of low-frequency triaxial vibration sensors under various temperature conditions
PDF
, Accurate laboratory testing of low-frequency triaxial vibration sensors under various temperature conditions
Triaxial vibration sensors are widely used in various applications. Recently, low-cost sensors based on micro-electro-mechanical system (MEMS) technology have also become more widely adopted. However, their measurement accuracy can be affected by environmental factors such as temperature. In this study, we developed an environmental testing system integrated with a triaxial vibration exciter. The system can reproduce long-stroke, low-frequency triaxial vibrations—such as those caused by huge earthquakes—over a controlled range of temperatures. Using this system, the measurement accuracy of vibration sensors can be evaluated under different environmental conditions. The system provides highly accurate reference measurements using a laser interferometer and reference accelerometers that are primarily calibrated within the system. The overall accuracy of the reference vibration measurement is estimated to be approximately 1.1% below 10 Hz. Based on these reference measurements, we investigated the accuracy of earthquake observations using a MEMS accelerometer as a demonstration. The system configuration and testing procedures are presented in this paper.
The following article is
Open access
Evaluating temperature and humidity measurement biases in RS92 and RS41 radiosondes using radio occultation data
Frederick Mokibelo Mashao
et al
2026
Meas. Sci. Technol.
View article
, Evaluating temperature and humidity measurement biases in RS92 and RS41 radiosondes using radio occultation data
PDF
, Evaluating temperature and humidity measurement biases in RS92 and RS41 radiosondes using radio occultation data
Upper-air temperature and humidity measurements from radiosondes are critical for monitoring atmospheric stability and climate trends. However, historical records can be affected by biases, inhomogeneities, and discontinuities, particularly during instrument transitions such as from RS92 to RS41. Ensuring long-term data homogeneity is therefore essential for reliable climate assessment. This study quantifies systematic biases in RS92 and RS41 radiosonde measurements at the Beltsville GRUAN Station, USA (2017–2020), and compares them with COSMIC-1 radio-occultation (RO) temperature and humidity profiles. Both daytime and nighttime profiles were analysed using mean differences, Bland–Altman plots, Taylor diagrams, and quadratic linear regression. Results indicate mean temperature biases between RS92 and RS41 of 0.08–0.39 K, with RMSE ranging from 0.47 to 0.93 K in the stratosphere (15–35 km). Bland–Altman analysis revealed temperature differences of −1 to 2 K and relative humidity biases of −9% to 5% across 1000–200 hPa. Comparisons with COSMIC-1 revealed strong correlations (r = 0.98–0.99, p < 0.05) and normalized SDs of 0.15–0.6 K for temperature and relative specific humidity errors of 5–25%, with larger deviations occurring in the upper troposphere where absolute humidity is low. These findings underscore the importance of global, long-term monitoring and calibration to enhance the quality of radiosonde data, which is essential for understanding atmospheric processes and improving climate change detection.
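The Bland–Altman analysis this study relies on reports the mean difference (bias) between two measurement methods and the 95% limits of agreement, bias ± 1.96 SD of the differences. A minimal sketch of that statistic (illustrative, not the authors' analysis code):

```python
import numpy as np


def bland_altman(a, b):
    """Bland–Altman statistics for paired measurements from two methods.

    Returns the mean bias and the 95% limits of agreement
    (bias ± 1.96 × sample SD of the paired differences)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

In the paper's setting, `a` and `b` would be matched RS92 and RS41 profile values at common pressure levels; the reported −1 to 2 K temperature spread corresponds to such limits of agreement.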
The following article is
Open access
Finite state machine control optimized by simulated annealing for reducing combined sewer overflows
Wellington Teixeira Teixeira Martins
et al
2026
Meas. Sci. Technol.
View article
, Finite state machine control optimized by simulated annealing for reducing combined sewer overflows
PDF
, Finite state machine control optimized by simulated annealing for reducing combined sewer overflows
Urban drainage networks that combine stormwater and wastewater can experience combined sewer overflows (CSOs) during intense rainfall, discharging untreated effluent to receiving waters. This paper proposes a supervisory control strategy based on finite state machines (FSMs) that coordinate pump and valve actions in sewer storage tanks. The FSM parameters (level thresholds and a flow-distribution factor) are calibrated offline using simulated annealing over bounded, discretized parameter sets to minimize the cumulative overflow volume. The tuned controller is evaluated in the Benchmark Simulation Model for Integrated Urban Wastewater Systems (BSM-UWS) under multiple rainfall scenarios. The optimized FSM reduces CSO volumes by up to 56.9%, decreases pollutant loads by up to 61%, and lowers overflow frequency by up to 40%, while improving dissolved oxygen conditions in the receiving river. After calibration, the controller runs online with fixed parameters as a set of state-transition rules, requiring negligible runtime computation and remaining compatible with resource-limited automation infrastructures.
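The offline calibration step described above, simulated annealing over a bounded, discretized parameter grid, can be sketched as follows. The two-parameter grid and the surrogate overflow cost are purely illustrative stand-ins, not the BSM-UWS model or the authors' parameterization.

```python
import math, random

def anneal(cost, grid, iters=2000, t0=1.0, seed=0):
    """Simulated annealing over a discretized parameter grid.

    `grid` maps each parameter name to its allowed discrete values;
    `cost` scores a candidate dict (here: a surrogate overflow volume).
    """
    rng = random.Random(seed)
    x = {k: rng.choice(v) for k, v in grid.items()}
    best, best_c = dict(x), cost(x)
    c = best_c
    for i in range(iters):
        t = t0 * (1 - i / iters) + 1e-9   # linear cooling schedule
        y = dict(x)
        k = rng.choice(list(grid))        # perturb one parameter at a time
        y[k] = rng.choice(grid[k])
        cy = cost(y)
        # accept improvements always, worse moves with Boltzmann probability
        if cy < c or rng.random() < math.exp((c - cy) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = dict(x), c
    return best, best_c

# Toy surrogate: overflow grows when the level threshold and flow split
# stray from their (here arbitrary) optimal values.
grid = {"level": [0.2, 0.4, 0.6, 0.8], "split": [0.25, 0.5, 0.75]}
cost = lambda p: (p["level"] - 0.4) ** 2 + (p["split"] - 0.5) ** 2
best, best_cost = anneal(cost, grid)
```

Once tuned, the resulting parameter set is frozen, matching the paper's point that the online controller is just a fixed rule table.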
The following article is Open access
Evaluation of real-time precise time transfer with Galileo HAS and QZSS MADOCA-PPP service
Daqian Lyu
et al
2026
Meas. Sci. Technol.
The Quasi-Zenith Satellite System (QZSS) Multi-GNSS Advanced Orbit and Clock Augmentation - Precise Point Positioning (MADOCA-PPP) service and Europe’s Galileo High Accuracy Service (HAS) are two freely accessible real-time augmentation services that provide orbit and clock corrections for PPP. This paper evaluates the two services for precise time transfer. Archived corrections and GNSS observations collected from May 10 to 20, 2024, were processed to assess (i) orbit and clock correction quality and (ii) PPP timing performance under static and kinematic conditions. For MADOCA-PPP, the mean orbit differences in the radial (R), along-track (A), and cross-track (C) directions were 0.033/0.041/0.025 m for GPS, 0.030/0.072/0.047 m for GLONASS, and 0.042/0.056/0.031 m for Galileo. The standard deviations of the clock differences for GPS and Galileo are smaller than that for GLONASS. For HAS, the orbit and clock difference sequences show higher noise levels, and the Galileo orbit differences exhibit significant systematic fluctuations during the evaluation period. In the PPP time transfer experiment, the timing statistics obtained with the MADOCA-PPP corrections are close to those obtained with the final CODE products, whereas the HAS-based solution shows greater dispersion, reduced frequency stability over medium-to-long averaging intervals, and increased sensitivity in kinematic tests. Overall, this study characterizes the correction behaviour and end-user PPP timing performance of MADOCA-PPP and HAS, providing a practical reference for real-time positioning, navigation and timing (PNT) applications.
The following article is Open access
Cross-domain transfer learning for reliable condition monitoring of primary batteries under discharge-only operation
Sultan Zeybek and Imen Turki 2026
Meas. Sci. Technol.
37
156007
Primary lithium-based coin-cell batteries are widely used in embedded systems, low-power sensors, and Internet of Things devices due to their long operational life and maintenance-free characteristics. In these applications, accurate estimation of the remaining battery life is essential to ensure system reliability. However, conventional methods for estimating battery health rely on repeated charge and discharge cycles, which are not applicable to primary batteries. This study presents a deep learning-based transfer learning framework to infer the condition of primary coin-cell batteries using discharge-only data. A new dataset is introduced, consisting of voltage versus time profiles obtained under constant current discharge from batteries manufactured by four different brands. The prediction target, referred to as the discharge progression indicator (DPI), is formally defined as the normalised elapsed discharge time and shown to be mathematically equivalent to Depth of Discharge under constant-current operation, making it fully observable from a single discharge event without requiring cycling data or explicit capacity measurements. Neural network models are initially pretrained on large-scale lithium-ion datasets and then adapted to the new dataset through a partial transfer strategy with input layer re-initialisation, enabling cross-chemistry knowledge transfer across fundamentally different battery chemistries. A range of model architectures is evaluated, including fully connected networks, convolutional networks, recurrent memory-based models, and attention mechanisms. The results demonstrate that temporal models, particularly those with memory structures, achieve superior predictive performance and robustness against domain-induced variability. A sensitivity analysis further confirms that standard 8-bit or 10-bit analogue-to-digital converters are sufficient for reliable DPI prediction, supporting deployment in resource-constrained embedded systems. 
The proposed framework enables early and accurate condition estimation in the absence of charging data or domain-specific calibration.
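The DPI definition above, and its equivalence to depth of discharge under constant current, can be verified in a few lines; the current draw and cut-off time below are hypothetical values for illustration.

```python
def dpi(t, t_end):
    """Discharge progression indicator: normalised elapsed discharge time."""
    return t / t_end

def depth_of_discharge(current, t, capacity):
    """DoD = charge removed / total capacity (constant-current case)."""
    return current * t / capacity

# Under constant current I, the total usable charge is I * t_end,
# so DoD = I*t / (I*t_end) = t / t_end = DPI.
I, t_end = 2.0, 3600.0       # 2 A draw, 1 h to cut-off (hypothetical)
cap = I * t_end              # capacity in coulombs
t = 900.0                    # 15 minutes into the discharge
assert dpi(t, t_end) == depth_of_discharge(I, t, cap) == 0.25
```

This is why the target is fully observable from a single discharge event: only elapsed time and the cut-off time are needed, not cycling data.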
The following article is Open access
Fractional-order modelling and system identification of a permanent-magnet induction flowmeter for ionically conductive liquids
Radek Boháč
et al
2026
Meas. Sci. Technol.
37
155007
The theoretical and experimental feasibility of employing a passive induction flowmeter based on permanent magnets for measuring the flow of liquids with ionic conductivity is investigated. In contrast to conventional electromagnetic flowmeters, which rely on time-varying magnetic fields to eliminate electrochemical interference, a static magnetic field generated by a Halbach array of NdFeB magnets is utilised in the present approach. Whereas conventional electromagnetic flowmeters require continuous coil excitation, with power consumption typically on the order of a few watts to tens of watts, the proposed permanent-magnet configuration operates with zero excitation power. A physics-based electrical model incorporating electrochemical double-layer effects is presented, and an equivalent circuit capturing the transient voltage behaviour at the measuring electrodes is formulated. Although the prototype has not yet reached a functional stage suitable for precise flow quantification (the relative flow measurement error, expressed as the 95th percentile of the full dataset, lies in the range of 4%–47%), the observed voltage dynamics agree well with the proposed R-constant-phase-element model and reveal key physical constraints.
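A minimal sketch of the constant-phase-element (CPE) impedance underlying such an R-CPE equivalent circuit is below. The component values are illustrative, and the parallel R‖CPE topology shown is one common choice for an electrode double layer, not necessarily the authors' exact circuit.

```python
def z_cpe(omega, Q, alpha):
    """Constant-phase-element impedance: Z = 1 / (Q * (j*omega)**alpha)."""
    return 1.0 / (Q * (1j * omega) ** alpha)

def z_r_cpe(omega, R, Q, alpha):
    """R in parallel with a CPE -- a common double-layer interface model."""
    zc = z_cpe(omega, Q, alpha)
    return (R * zc) / (R + zc)

# Sanity check: for alpha = 1 the CPE degenerates to an ideal capacitor,
# so the parallel R-CPE reduces to the textbook R||C impedance.
w, R, Q = 100.0, 1e3, 1e-6          # rad/s, ohms, F (illustrative)
z = z_r_cpe(w, R, Q, alpha=1.0)
z_rc = R / (1 + 1j * w * R * Q)     # R||C for comparison
```

The fractional exponent alpha (0 < alpha < 1) is what gives the model its fractional-order transient response at the electrodes.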
The following article is Open access
Model-informed double image prior for thermochemical process monitoring using chemical species tomography
Yalei Fu
et al
2026
Meas. Sci. Technol.
37
155404
Accurate monitoring of thermochemical processes in combustion-based power generation systems is critical for optimizing combustion efficiency and minimizing harmful emissions. Chemical species tomography (CST), enhanced by in situ sensing, stands out for reconstructing thermochemical parameters in industrial applications. However, the inverse problem inherent in CST is ill-posed, resulting in poorly reconstructed reactive-flow images, even with regularization methods. Although most data-driven algorithms offer potential improvements, their heavy reliance on simulated datasets leads to degraded generalizability in harsh industrial environments. To address this concern, we propose a model-informed double image prior (MI–DIP), an untrained neural network (UNN) for dataset-free, dual-path reconstruction of two-dimensional thermochemical parameters, namely temperature and species concentration. The network incorporates the physical model of CST, regularizing the inverse problem and stabilizing image reconstruction. The approach is validated through lab-scale experiments on combustion devices with various burner profiles and further assessed using numerical simulations of real reactive flows obtained via large eddy simulations in a fire dynamics simulator. The results demonstrate that the developed algorithm outperforms both the traditional regularization method and the key UNN-based algorithm, deep image prior (DIP), in terms of reconstruction accuracy. In addition, a robustness study and an ablation study are performed to demonstrate the reliability of MI–DIP in industrial thermochemical processes.
More Open Access articles
Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review
Bing Pan
et al
2009
Meas. Sci. Technol.
20
062001
As a practical and effective tool for quantitative in-plane deformation measurement of a planar object surface, two-dimensional digital image correlation (2D DIC) is now widely accepted and commonly used in the field of experimental mechanics. It directly provides full-field displacements to sub-pixel accuracy and full-field strains by comparing the digital images of a test object surface acquired before and after deformation. In this review, methodologies of the 2D DIC technique for displacement field measurement and strain field estimation are systematically reviewed and discussed. Detailed analyses of the measurement accuracy considering the influences of both experimental conditions and algorithm details are provided. Measures for achieving high accuracy deformation measurement using the 2D DIC technique are also recommended. Since microscale and nanoscale deformation measurement can easily be realized by combining the 2D DIC technique with high-spatial-resolution microscopes, the 2D DIC technique should find more applications in broad areas.
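The subset-matching core of 2D DIC, zero-normalised cross-correlation (ZNCC) at integer-pixel resolution, can be sketched as follows. A synthetic speckle image and a rigid translation stand in for real test images; practical DIC adds sub-pixel interpolation and shape functions on top of this step.

```python
import numpy as np

def zncc(f, g):
    """Zero-normalised cross-correlation between two equal-size subsets."""
    f = f - f.mean()
    g = g - g.mean()
    return float((f * g).sum() / np.sqrt((f**2).sum() * (g**2).sum()))

def match_subset(ref, deformed, top_left, size, search=3):
    """Integer-pixel DIC: slide the reference subset over the deformed
    image and return the displacement (u, v) maximising the ZNCC score."""
    y, x = top_left
    sub = ref[y:y+size, x:x+size]
    best, best_uv = -2.0, (0, 0)
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            yy, xx = y + v, x + u
            if (0 <= yy and 0 <= xx and
                    yy + size <= deformed.shape[0] and
                    xx + size <= deformed.shape[1]):
                c = zncc(sub, deformed[yy:yy+size, xx:xx+size])
                if c > best:
                    best, best_uv = c, (u, v)
    return best_uv, best

# Synthetic test: a random speckle image translated by (2, 1) pixels.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(np.roll(img, 1, axis=0), 2, axis=1)  # down 1, right 2
(u, v), score = match_subset(img, shifted, top_left=(10, 10), size=8)
```

The ZNCC criterion is insensitive to uniform intensity offset and scale, which is one reason it is the standard matching criterion in DIC.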
Digital image correlation for surface deformation measurement: historical developments, recent advances and future goals
Bing Pan 2018
Meas. Sci. Technol.
29
082001
This article is a personal review of the historical developments of digital image correlation (DIC) techniques, together with recent important advances and future goals. The historical developments of DIC techniques over the past 35 years are divided into a foundation-laying phase (1982–1999) and a boom phase (2000 to the present), and are traced by describing some of the milestones that have enabled new and/or better DIC measurements to be made. Important advances made to DIC since 2010 are reviewed, with an emphasis on new insights into the 2D-DIC system, new improvements to the correlation algorithm, and new developments in stereo-DIC systems. A summary of the current state-of-the-art DIC techniques is provided. Some further improvements that are needed and the future goals in the field are also envisioned.
A review on deep learning in planetary gearbox health state recognition: methods, applications, and dataset publication
Dongdong Liu
et al
2024
Meas. Sci. Technol.
35
012002
Planetary gearboxes have various merits in mechanical transmission, but their complex structure and intricate operation modes bring large challenges in terms of fault diagnosis. Deep learning has attracted increasing attention in intelligent fault diagnosis and has been successfully adopted for planetary gearbox fault diagnosis, avoiding the difficulty in manually analyzing complex fault features with signal processing methods. This paper presents a comprehensive review of deep learning-based planetary gearbox health state recognition. First, the challenges caused by the complex vibration characteristics of planetary gearboxes in fault diagnosis are analyzed. Second, according to the popularity of deep learning in planetary gearbox fault diagnosis, we briefly introduce six mainstream algorithms, i.e. autoencoder, deep Boltzmann machine, convolutional neural network, transformer, generative adversarial network, and graph neural network, and some variants of them. Then, the applications of these methods to planetary gearbox fault diagnosis are reviewed. Finally, the research prospects and challenges in this research are discussed. According to the challenges, a dataset is introduced in this paper to facilitate future investigations. We expect that this paper can provide new graduate students, institutions and companies with a preliminary understanding of methods used in this field. The dataset can be downloaded from
Optical gas sensing: a review
Jane Hodgkinson and Ralph P Tatam 2013
Meas. Sci. Technol.
24
012004
The detection and measurement of gas concentrations using the characteristic optical absorption of the gas species is important for both understanding and monitoring a variety of phenomena from industrial processes to environmental change. This study reviews the field, covering several individual gas detection techniques including non-dispersive infrared, spectrophotometry, tunable diode laser spectroscopy and photoacoustic spectroscopy. We present the basis for each technique, recent developments in methods and performance limitations. The technology available to support this field, in terms of key components such as light sources and gas cells, has advanced rapidly in recent years and we discuss these new developments. Finally, we present a performance comparison of different techniques, taking data reported over the preceding decade, and draw conclusions from this benchmarking.
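Absorption-based gas sensing of the kind reviewed above rests on the Beer–Lambert law; a minimal round-trip sketch with hypothetical values:

```python
import math

def transmitted_intensity(I0, alpha, C, L):
    """Beer-Lambert law: I = I0 * exp(-alpha * C * L)."""
    return I0 * math.exp(-alpha * C * L)

def concentration_from_absorbance(I0, I, alpha, L):
    """Invert the law to retrieve the gas concentration."""
    return math.log(I0 / I) / (alpha * L)

# Round trip with hypothetical numbers: path length 1 m, absorption
# coefficient 0.5 per (concentration unit * m), concentration 2 units.
I0, alpha, C, L = 1.0, 0.5, 2.0, 1.0
I = transmitted_intensity(I0, alpha, C, L)
C_back = concentration_from_absorbance(I0, I, alpha, L)
```

Techniques such as NDIR and tunable diode laser spectroscopy differ mainly in how they isolate the wavelength at which alpha is measured, not in this underlying relation.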
Enhanced feature extraction YOLO industrial small object detection algorithm based on receptive-field attention and multi-scale features
Hongfeng Tao
et al
2024
Meas. Sci. Technol.
35
105023
To guarantee the stability and safety of industrial production, it is necessary to regulate the behavior of employees. However, high background complexity, low pixel count, occlusion and fuzzy appearance can result in a high miss rate and poor detection accuracy for small objects. Considering the above problems, this paper proposes the Enhanced Feature Extraction-You Only Look Once (EFE-YOLO) algorithm to improve the detection of industrial small objects. To enhance the detection of fuzzy and occluded objects, the PixelShuffle and Receptive-Field Attention (PSRFA) upsampling module is designed to preserve and reconstruct more detailed information and extract receptive-field attention weights. Furthermore, the multi-scale and efficient (MSE) downsampling module is designed to merge global and local semantic features to alleviate the problem of false and missed detections. Subsequently, the Adaptive Feature Adjustment and Fusion (AFAF) module is designed to highlight important features and suppress background information that is not beneficial for detection. Finally, the EIoU loss function is used to improve convergence speed and localization accuracy. All experiments are conducted on a homemade dataset. The improved YOLOv5 algorithm proposed in this paper improves mAP@0.50 (mean average precision at a threshold of 0.50) by 2.8% compared to the baseline YOLOv5 algorithm. The average precision and recall for small objects improve by 8.1% and 7.5%, respectively. The detection performance remains leading in comparison with other advanced algorithms.
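The IoU at the heart of loss functions such as EIoU can be sketched as follows; this is plain IoU only, while EIoU adds penalty terms (centre distance and width/height differences relative to the enclosing box) on top of it.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Two unit boxes overlapping in a 0.5 x 1 strip: IoU = 0.5 / 1.5 = 1/3
overlap = iou((0, 0, 1, 1), (0.5, 0, 1.5, 1))
```

For small objects, a few pixels of localization error move IoU sharply, which is why small-object detectors are so sensitive to the choice of IoU-based loss.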
Process defects and in situ monitoring methods in metal powder bed fusion: a review
Marco Grasso and Bianca Maria Colosimo 2017
Meas. Sci. Technol.
28
044005
Despite continuous technological enhancements of metal Additive Manufacturing (AM) systems, the lack of process repeatability and stability still represents a barrier to industrial breakthrough. The most relevant metal AM applications currently involve industrial sectors (e.g. aerospace and bio-medical) where defect avoidance is fundamental. Because of this, there is a need to develop novel in situ monitoring tools able to keep the stability of the process under control on a layer-by-layer basis, and to detect the onset of defects as soon as possible. On the one hand, AM systems must be equipped with in situ sensing devices able to measure relevant quantities during the process, a.k.a. process signatures. On the other hand, in-process data analytics and statistical monitoring techniques are required to detect and localize defects in an automated way. This paper reviews the literature and the commercial tools for in situ monitoring of powder bed fusion (PBF) processes. It explores the different categories of defects and their main causes, the most relevant process signatures and the in situ sensing approaches proposed so far. Particular attention is devoted to the development of automated defect detection rules and the study of process control strategies, which represent two critical fields for the development of future smart PBF systems.
The following article is Open access
PIV uncertainty quantification from correlation statistics
Bernhard Wieneke 2015
Meas. Sci. Technol.
26
074002
The uncertainty of a PIV displacement field is estimated using a generic post-processing method based on statistical analysis of the correlation process using differences in the intensity pattern in the two images. First the second image is dewarped back onto the first one using the computed displacement field which provides two almost perfectly matching images. Differences are analyzed regarding the effect of shifting the peak of the correlation function. A relationship is derived between the standard deviation of intensity differences in each interrogation window and the expected asymmetry of the correlation peak, which is then converted to the uncertainty of a displacement vector. This procedure is tested with synthetic data for various types of noise and experimental conditions (pixel noise, out-of-plane motion, seeding density, particle image size, etc) and is shown to provide an accurate estimate of the true error.
A review of recent developments in schlieren and shadowgraph techniques
Gary S Settles and Michael J Hargather 2017
Meas. Sci. Technol.
28
042001
Schlieren and shadowgraph techniques are used around the world for imaging and measuring phenomena in transparent media. These optical methods originated long ago in parallel with telescopes and microscopes, and although it might seem that little new could be expected of them on the timescale of 15 years, in fact several important things have happened that are reviewed here. The digital revolution has had a transformative effect, replacing clumsy photographic film methods with excellent—though expensive—high-speed video cameras, making digital correlation and processing of shadow and schlieren images routine, and providing an entirely-new synthetic schlieren technique that has attracted a lot of attention: background-oriented schlieren or BOS. Several aspects of modern schlieren and shadowgraphy depend upon laptop-scale computer processing of images using an image-capable language such as MATLAB. BOS, shock-wave tracking, schlieren velocimetry, synthetic streak-schlieren, and straightforward quantitative density measurements in 2D flows are all recent developments empowered by this digital and computational capability.
The following article is Open access
PIV uncertainty propagation
Andrea Sciacchitano and Bernhard Wieneke 2016
Meas. Sci. Technol.
27
084006
This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the errors of the x and y particle image displacements, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5–10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
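The sample-size scaling of statistical uncertainty mentioned above can be sketched as follows. The correlated-sample correction shown is a simple AR(1)-style model of the effective sample size, not necessarily the paper's exact formulation.

```python
import math

def mean_uncertainty(sigma, n, rho=0.0):
    """Random uncertainty of a sample mean.

    For n samples with standard deviation sigma, U = sigma / sqrt(n_eff).
    Correlated samples reduce the effective sample size; here a simple
    AR(1)-style model is assumed: n_eff = n * (1 - rho) / (1 + rho).
    """
    n_eff = n * (1 - rho) / (1 + rho) if rho else n
    return sigma / math.sqrt(n_eff)

# Doubling the number of independent samples shrinks the uncertainty by
# sqrt(2); sample-to-sample correlation (rho > 0) inflates it again.
u1 = mean_uncertainty(sigma=0.1, n=1000)
u2 = mean_uncertainty(sigma=0.1, n=2000)
u3 = mean_uncertainty(sigma=0.1, n=2000, rho=0.3)
```

This is why long time series of statistically independent PIV snapshots matter more for converged statistics than raw recording length.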
Combining PIV, POD and vortex identification algorithms for the study of unsteady turbulent swirling flows
Laurent Graftieaux
et al
2001
Meas. Sci. Technol.
12
1422
Particle image velocimetry (PIV) measurements are made in a highly turbulent swirling flow. In this flow, we observe a coexistence of turbulent fluctuations and an unsteady swirling motion. The proper orthogonal decomposition (POD) is used to separate these two contributions to the total energy. POD is combined with two new vortex identification functions, Γ1 and Γ2. These functions identify the locations of the centre and boundary of the vortex on the basis of the velocity field. The POD computed for the measured velocity fields shows that two spatial modes are responsible for most of the fluctuations observed in the vicinity of the location of the mean vortex centre. These two modes are also responsible for the large-scale coherence of the fluctuations. The POD computed from the Γ2 scalar field shows that the displacement and deformation of the large-scale vortex are correlated to these modes. We suggest the use of such a method to separate pseudo-fluctuations due to the unsteady nature of the large-scale vortices from fluctuations due to small-scale turbulence.
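The Γ1 vortex-centre criterion of Graftieaux et al can be sketched as follows; the grid and flow field are a synthetic solid-body rotation used only to exercise the function.

```python
import numpy as np

def gamma1(px, py, X, Y, U, V):
    """Gamma_1 vortex-centre indicator evaluated at point P = (px, py).

    Averages sin(theta) over neighbourhood points M, where theta is the
    angle between the radius vector PM and the velocity at M; |Gamma_1|
    approaches 1 at the centre of a vortex.
    """
    rx, ry = X - px, Y - py
    cross = rx * V - ry * U                    # z-component of PM x u
    norm = np.hypot(rx, ry) * np.hypot(U, V)
    mask = norm > 1e-12                        # exclude P itself
    return float(np.mean(cross[mask] / norm[mask]))

# Solid-body rotation about the origin: u = (-y, x).  Every velocity is
# perpendicular to its radius vector, so Gamma_1 at the centre is 1.
x = np.linspace(-1, 1, 21)
X, Y = np.meshgrid(x, x)
U, V = -Y, X
g_centre = gamma1(0.0, 0.0, X, Y, U, V)
```

Scanning gamma1 over all grid points and taking the maximum locates the vortex centre; in practice the average is taken over a local window rather than the whole field.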
Journal information
1990-present
Measurement Science and Technology
doi: 10.1088/issn.0957-0233
Online ISSN: 1361-6501
Print ISSN: 0957-0233
Journal history
1990-present
Measurement Science and Technology
1968-1989
Journal of Physics E: Scientific Instruments
1923-1967
Journal of Scientific Instruments