Density Meter Types: Coriolis, Tuning Fork, DP, Ultrasonic, and Gamma Compared

Updated: May 5, 2026 — by Sino-Inst Engineering Team

A density meter measures the mass per unit volume of a liquid or slurry in real time, while the fluid is moving through the pipe or sitting in a tank. Five technologies dominate industrial use: Coriolis, vibrating tuning fork, hydrostatic differential pressure, ultrasonic, and gamma (radioactive). Pick the wrong one and you get a 5 % error from entrained gas, a 6-month sensor life from abrasion, or a radioactive-source licence (plus Class 7 transport paperwork) you did not budget for.

This guide walks through the five density-meter technologies, where each one fits, and how to read a spec sheet without being misled by accuracy figures that only apply to clean water at 20 °C.


What is a density meter, and what does it actually measure?

A density meter outputs density in kg/m³ or g/cm³, often along with a derived concentration (Brix, Plato, % H₂SO₄, API gravity). What it physically senses depends on the technology: Coriolis senses tube vibration frequency, tuning fork senses fork resonance, DP senses hydrostatic head, ultrasonic senses sound speed, and gamma senses photon absorption. None of them measure density “directly” — every reading is a derived value with assumptions about temperature, pressure, and entrained gas.

If your specification calls for ±0.1 kg/m³ accuracy, you are in Coriolis or tuning-fork territory. ±1 kg/m³ opens up DP and ultrasonic. ±5 kg/m³ on slurry usually means gamma is the only thing that survives. The accuracy you can buy depends on what the fluid does, not just what the sensor is rated to do. For the underlying static-head physics behind DP density measurement, see our static vs dynamic pressure guide.
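The accuracy bands above can be written down as a first-cut shortlist. This is a sketch only: the function name and the `slurry` flag are illustrative, and real selection also weighs wetted materials, pressure rating, and budget.

```python
def candidate_technologies(accuracy_kg_m3, slurry=False):
    """Shortlist density-meter technologies from a required accuracy.

    Bands follow the paragraph above; this is a first cut, not a spec.
    """
    if slurry:
        # Abrasive, high-density service: DP for low cost,
        # gamma when nothing else survives.
        return ["dp", "gamma"]
    if accuracy_kg_m3 <= 0.1:
        return ["coriolis", "tuning_fork"]
    if accuracy_kg_m3 <= 2.0:
        return ["coriolis", "tuning_fork", "dp", "ultrasonic"]
    return ["coriolis", "tuning_fork", "dp", "ultrasonic", "gamma"]

candidate_technologies(0.1)              # tight spec: Coriolis or tuning fork
candidate_technologies(5, slurry=True)   # slurry: DP or gamma
```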

The five density meter technologies, side by side

| Technology | Sensing principle | Accuracy (clean fluid) | Best for | Avoid for |
|---|---|---|---|---|
| Coriolis | Tube oscillation frequency | ±0.1 kg/m³ | Custody transfer, concentration, mass flow + density together | Heavy slurries, gas-laden fluids |
| Vibrating tuning fork | Fork resonance frequency | ±0.5 kg/m³ | Tank-side or in-line monitoring, hydrocarbons | Crystallising or fouling fluids |
| Hydrostatic DP | Pressure head between two taps | ±1 to 2 kg/m³ | Open tanks, tall vessels, slurry | Variable level or free-surface motion |
| Ultrasonic (concentration) | Speed of sound | ±2 kg/m³ | Acid/base concentration, brine, sugar | Two-phase or bubbly flow |
| Gamma (radioactive) | Cs-137 / Am-241 absorption | ±5 kg/m³ | Heavy slurries, blast-furnace tap-off, abrasive service | Anywhere a source licence is impractical |

Coriolis dominates clean-fluid custody transfer because it gives mass flow and density from one transmitter — see our Coriolis flow meter density measurement guide for the underlying physics. For sticky or scaling fluids the tuning fork wins because fouling shifts the resonance predictably and can be auto-compensated.

Which density meter for which process? A picker by fluid type

  • Crude oil, refined products, LPG: Coriolis or tuning fork. Coriolis if you also need mass flow; tuning fork if density-only at lower cost.
  • Sugar syrup, fruit juice, dairy concentrate (Brix): Tuning fork or ultrasonic. Tuning fork preferred for in-line, ultrasonic for clamp-on retrofits.
  • Sulfuric acid, caustic, brine (concentration): Ultrasonic or Coriolis. Ultrasonic survives without wetted electronics; Coriolis with Hastelloy tubes.
  • Mineral slurry, mining tailings, paper stock: Hydrostatic DP for low-cost monitoring; gamma for high-density abrasive service.
  • Polymer melts, asphalt, heavy fuel oil: Tuning fork with heated insertion probe.
  • Cryogenic LNG, liquid CO₂: Coriolis with low-thermal-mass tubes.

For viscous fluids that fool every other technology, see our note on flow meters for molasses and high-viscosity liquids — the same viscosity bias that wrecks orifice plates also shifts tuning-fork zero by 0.3 kg/m³ per 100 cP.
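The quoted bias can be applied as a first-order correction. A sketch only: the 0.3 kg/m³ per 100 cP figure comes from the sentence above, and the sign and exact coefficient are device-specific, so confirm against your transmitter's own viscosity-compensation table before trusting it.

```python
def fork_viscosity_corrected(reading_kg_m3, viscosity_cP, bias_per_100cP=0.3):
    """First-order viscosity correction for a tuning-fork density reading.

    Assumes the fork reads high by bias_per_100cP for every 100 cP of
    viscosity (figure from the text); the real coefficient is device-specific.
    """
    return reading_kg_m3 - bias_per_100cP * (viscosity_cP / 100.0)

fork_viscosity_corrected(1420.0, 500)   # 500 cP molasses -> 1418.5 kg/m3
```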

How to read a density-meter spec sheet without being misled

Five lines on a density-meter spec sheet decide whether the quoted accuracy means anything in your service:

  1. Reference conditions. “±0.1 kg/m³” almost always assumes 20 °C, water, no entrained gas. Expect roughly an order of magnitude worse under real process conditions.
  2. Temperature coefficient. Look for ppm/°C on density. A 50 ppm/°C device drifts 1 kg/m³ over a 20 °C process swing — bigger than the headline accuracy.
  3. Pressure coefficient. Often 0.005 % per bar. Matters for high-pressure pipelines.
  4. Gas-bubble tolerance. Coriolis loses lock above 2 % gas; tuning fork degrades above 5 %; gamma is gas-blind.
  5. Sample-line correction. If the meter is fed by a slipstream, it reads slipstream conditions, not main-line. Always declare this on the spec sheet.
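Items 2 and 3 are two lines of arithmetic. A sketch, assuming both coefficients apply to a nominal 1000 kg/m³ reading, as in the worked figure in item 2:

```python
def temp_drift_kg_m3(coeff_ppm_per_C, swing_C, nominal_kg_m3=1000.0):
    """Density drift from the temperature coefficient alone."""
    return coeff_ppm_per_C * 1e-6 * nominal_kg_m3 * swing_C

def pressure_drift_kg_m3(coeff_pct_per_bar, delta_bar, nominal_kg_m3=1000.0):
    """Density drift from the pressure coefficient alone."""
    return coeff_pct_per_bar / 100.0 * nominal_kg_m3 * delta_bar

temp_drift_kg_m3(50, 20)         # 50 ppm/°C over a 20 °C swing -> 1.0 kg/m3
pressure_drift_kg_m3(0.005, 40)  # 0.005 %/bar over a 40 bar swing -> 2.0 kg/m3
```

Both numbers should be compared against the headline accuracy before the purchase order, not after commissioning.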

Four install pitfalls that ruin field accuracy

  1. Air pockets trapped at the top of the tube arc. For liquid service on a horizontal line, mount the Coriolis with the tubes pointing down so gas cannot collect in the measuring section.
  2. Bottom DP tap above the sediment line. A DP density meter on a sludge tank reads the supernatant if the lower tap is in the wrong place. Place the lower tap below the sediment cone, with a flushing connection.
  3. Tuning fork in a swirl pattern. The fork sees flow-induced noise on its tines. Mount in a 5D straight run, not just downstream of a pump elbow.
  4. Gamma source not centred on the pipe. Misalignment by 5 mm on a 100 mm pipe shifts the calibration by 8 kg/m³.

Background reading on Coriolis-specific install rules: our Coriolis mass flowmeter primer covers the same balance and zero-flow stability issues that affect density mode.

Featured density meters

Online Density Meter (DP Type)

Hydrostatic head measurement, ±1 kg/m³, slurry-tolerant, low-cost tank-mount.

Portable Tuning Fork Density Meter

Hand-held insertion probe, ±0.5 kg/m³, hydrocarbons and refined products.

In-line Tuning Fork Density Meter

Permanent in-line probe, ±0.2 kg/m³, 4-20 mA / Modbus, 100 °C continuous.

FAQ

What types of density meters are there?

Five main types: Coriolis (tube oscillation), vibrating tuning fork (resonance), hydrostatic differential pressure, ultrasonic (sound-speed), and gamma (radioactive absorption). Each suits a different fluid type and accuracy band.

How does a density meter work?

It measures a physical property that varies with density — tube vibration frequency, fork resonance, hydrostatic head, sound speed, or gamma absorption — then converts that signal to kg/m³ using a calibration curve and temperature/pressure compensation.

Which density meter is most accurate?

Coriolis, at ±0.1 kg/m³ on clean liquids. But on slurries or gas-laden fluids the relative ranking changes — gamma can be the only thing that gives any reading at all.

Can a density meter measure concentration?

Yes. Once density is calibrated against a reference fluid, the transmitter outputs Brix, Plato, % concentration, or API gravity directly. Most modern transmitters carry 8-20 pre-loaded fluid tables.

What is the difference between a hydrometer and an online density meter?

A hydrometer is a manual lab tool, single sample at a time. An online density meter measures continuously in the pipe or tank, outputs 4-20 mA or Modbus, and applies live temperature compensation.

Does a density meter need temperature compensation?

Yes. Most fluids change density by 0.5-1 kg/m³ per °C. Modern transmitters apply ASTM D1250 / API MPMS 11.1 corrections automatically; legacy meters need an external Pt100.

How much does a density meter cost?

Tuning fork: USD 3-6 k. DP type: USD 1-2 k. Coriolis: USD 8-25 k depending on size. Gamma: USD 30 k+ plus source-handling costs.

Need help picking a density meter for your fluid, accuracy target and pipe size? Send us your fluid name, line size, temperature and pressure and we will quote within 24 hours.

Request a Quote


Dew Point Meter for Compressed Air: PDP, ISO 8573-1 Classes, Sensor Placement

Updated: May 5, 2026 — by Sino-Inst Engineering Team

A dew point meter for compressed air tells you the moisture floor your dryer is actually delivering, expressed as pressure dew point (PDP). For instrument air on a 7 barg system, PDP must sit at or below the ISO 8573-1 humidity class your downstream equipment requires — typically Class 2 (-40 °C PDP) for pneumatic controls and Class 4 (+3 °C PDP) for general plant air. Get it wrong and you get rusted manifolds, frozen valve actuators, and contaminated paint lines.

This guide covers what PDP is, the ISO 8573-1 humidity classes that drive sensor selection, how to size and place a probe, and the four mistakes that cause field readings to drift within months.


What is pressure dew point and how is it different from atmospheric dew point?

Pressure dew point is the temperature at which water vapour condenses out of compressed air at line pressure. Atmospheric dew point is the dew point of the same air after it has been expanded back to 1 atm. The two numbers are not interchangeable — a sample at 7 barg with a +3 °C PDP corresponds to roughly -23 °C atmospheric dew point, a 26 °C gap.

This matters because compressed air specifications are written in PDP, but cheap psychrometric instruments often report atmospheric dew point. If you take a hand-held meter, vent the sample, and read -23 °C, you have not exceeded ISO 8573-1 Class 4 — you have met it. Reading the wrong column on the spec sheet has flunked more compressed-air audits than any actual dryer fault. Always confirm whether the figure is at line pressure or after expansion.
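The conversion between the two numbers can be sketched with the Magnus saturation-vapour-pressure formula: on expansion, the water-vapour partial pressure falls in proportion to total pressure. This is a sketch, not a calibration-grade psychrometric routine — it ignores enhancement factors and non-ideal gas behaviour.

```python
import math

def svp_hPa(t_C):
    """Saturation vapour pressure over water, Magnus formula (WMO constants)."""
    return 6.112 * math.exp(17.62 * t_C / (243.12 + t_C))

def dew_point_C(e_hPa):
    """Invert the Magnus formula: dew point from vapour pressure."""
    x = math.log(e_hPa / 6.112)
    return 243.12 * x / (17.62 - x)

def pdp_to_atmospheric(pdp_C, line_bar_abs, atm_bar_abs=1.013):
    """Atmospheric dew point of air expanded from line pressure to 1 atm."""
    e_atm = svp_hPa(pdp_C) * atm_bar_abs / line_bar_abs
    return dew_point_C(e_atm)

pdp_to_atmospheric(3.0, 8.0)   # +3 °C PDP at 7 barg -> about -23 °C atmospheric
```

Running the example reproduces the roughly 26 °C gap quoted above for a 7 barg line.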

ISO 8573-1 humidity classes: which one does your application need?

ISO 8573-1:2010 defines seven humidity classes. The class number you have to meet depends on what the air feeds, not on the dryer you happen to own. Pick the class first, then the sensor range falls out of it.

| Class | PDP target | Typical use | Sensor range needed |
|---|---|---|---|
| 1 | ≤ -70 °C | Pharma, semiconductor, breathing air | -100 to -40 °C |
| 2 | ≤ -40 °C | Instrument air, paint spray, food packaging | -80 to -20 °C |
| 3 | ≤ -20 °C | Plant control air in cold climates | -60 to 0 °C |
| 4 | ≤ +3 °C | General plant air, pneumatic tools | -20 to +20 °C |
| 5 | ≤ +7 °C | Light pneumatic load (refrigerant dryer) | -10 to +20 °C |
| 6 | ≤ +10 °C | Coarse air, agitation | 0 to +30 °C |
| X | User-defined | Process-specific | By spec |

One mistake to watch: a Class 2 sensor (-80 to -20 °C) loses resolution above -20 °C, so it cannot reliably tell you whether you have exceeded Class 4. Spec to your worst-case PDP target plus 20 °C of headroom, not your best-case.
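That headroom rule can be written down directly. A sketch: the class limits are those tabulated above, and the 20 °C figure is the headroom suggested in the preceding paragraph, not a standard requirement.

```python
ISO8573_PDP = {1: -70.0, 2: -40.0, 3: -20.0, 4: 3.0, 5: 7.0, 6: 10.0}  # °C PDP limits

def sensor_covers_class(sensor_lo_C, sensor_hi_C, iso_class, headroom_C=20.0):
    """True if the sensor range brackets the class limit with headroom both ways."""
    target = ISO8573_PDP[iso_class]
    return sensor_lo_C <= target - headroom_C and sensor_hi_C >= target + headroom_C

sensor_covers_class(-80, -20, 2)   # Class 2 sensor on a Class 2 target: True
sensor_covers_class(-80, -20, 4)   # same sensor asked to police Class 4: False
```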

Which dryer technology hits which PDP?

The dryer fixes the floor your sensor will see; pick the right pair so the sensor sits in the middle of its calibrated range.

  • Refrigerant dryer: +3 to +10 °C PDP. Cheapest, used for Class 4–6.
  • Heatless desiccant dryer: -40 °C PDP nominal, -70 °C achievable. Class 2 standard, Class 1 with tight switching.
  • Heated desiccant dryer: -40 to -70 °C PDP, lower purge loss than heatless.
  • Membrane dryer: -20 to -40 °C PDP for low-flow point-of-use.

If your specification calls for Class 2 but you only own a refrigerant dryer, no amount of sensor calibration fixes that — you need to add a desiccant tower. The dew point meter for compressed air is a diagnostic tool, not a corrective one. For broader gas-dew-point context (CO₂, N₂, hydrocarbons), see our guide to what gases a dew point meter can detect.

Where should you install a dew point probe in a compressed air line?

Install the probe at least 2 metres downstream of the dryer outlet and upstream of any after-filter that might trap moisture. Sensor response time is dominated by gas exchange around the polymer film, not by the electronics, so use a sample cell with a constant 1–2 NL/min purge to reach 90 % response inside 5 minutes. Without the purge, dead-end probes can take an hour to settle after a flow upset. The same straight-run logic that shapes flow-meter placement applies — see our upstream and downstream straight pipe guide for the underlying sampling principle.

Three placement rules from field installations:

  1. Mount the probe horizontally, never sensor-down. Liquid water collecting on the polymer destroys the calibration in hours.
  2. Use stainless or PTFE in the sample line. PVC and rubber outgas plasticisers that load the sensor.
  3. Keep the sample line under 5 m. Long lines act as moisture buffers and slow the reading.

For background on differential pressure across the sample cell, see our static vs dynamic pressure guide.

Calibration and drift: why a 1-year-old sensor reads 8 °C high

Polymer-capacitive dew point sensors drift by 2–3 °C per year in clean air and 5–10 °C in oily air. Four practical errors accelerate that:

  1. Skipping the after-filter. Compressor oil mist coats the polymer and shifts the calibration warm.
  2. Wet exposure. A single bulk-water hit can damage the dielectric layer permanently.
  3. Neglecting auto-cal cycles. Modern sensors run a 200 °C bake every 24 h to drive moisture out; if power is interrupted, drift compounds.
  4. Annual factory cal that ignores process conditions. A sensor returned for cal at -40 °C reference will not match a +3 °C process. Cal at the band you actually run in.

For pressure-side troubleshooting that often masquerades as dew point drift, our pressure transmitter installation guide covers the same impulse-line issues from the moisture side.

Featured dew point meters for compressed air

Dew Point Transmitter 608 Series

In-line probe, -80 to +20 °C PDP, 4-20 mA / RS485 Modbus, ±2 °C accuracy.

Dew Point Meter 602 Series

Wall-mount display, -60 to +60 °C PDP, alarm relays, 35 bar service.

Portable Dew Point Meter

Hand-held audit tool, integrated sample cell, -50 to +20 °C PDP, data-log.

FAQ

What is the dew point limit for compressed air?

It depends on the ISO 8573-1 class your downstream equipment requires. Instrument air is usually Class 2 at -40 °C PDP; general plant air is Class 4 at +3 °C PDP. There is no single number.

How do you measure the dew point of compressed air?

With a polymer-capacitive sensor mounted in a sample cell at line pressure, with 1-2 NL/min purge through the cell. Allow 5-15 minutes for the reading to settle on each new measurement.

What is the difference between pressure dew point and atmospheric dew point?

Pressure dew point is measured at line pressure; atmospheric dew point after expansion to 1 atm. PDP is the higher number — 7 barg air at +3 °C PDP equals roughly -23 °C atmospheric dew point.

What is the best dew point for instrument air?

ISA-7.0.01 calls for instrument air at least 10 °C below the lowest ambient temperature the air will see. In most temperate plants this means -40 °C PDP (Class 2); in arctic service, -70 °C PDP (Class 1).

How often should a compressed air dew point sensor be calibrated?

Annually for clean instrument air, every 6 months for plant air with oil-lubricated compressors. Send the sensor back at the PDP band you actually operate in, not the factory default.

Can a dew point meter be installed downstream of an oil filter?

Yes — and it should be. Place the probe after the coalescing oil filter but before the after-filter; oil mist on the polymer is the fastest way to ruin the sensor.

What gases other than air can a dew point meter measure?

Nitrogen, hydrogen, CO₂, natural gas and most non-corrosive process gases — calibration constants are gas-specific.

Need help picking a dew point meter for your dryer and ISO 8573-1 class? Our engineers can quote and ship within 24 hours — message us with your line pressure, target PDP and flow rate.

Request a Quote


Shaft Torque Sensors: 3 Failure Modes, Diagnostic Checklist, and Maintenance Intervals

Updated: May 5, 2026 — by Sino-Inst Engineering Team

Shaft torque sensors fail in three predictable ways: slip-ring brush wear (signal noise that climbs above 3000 rpm), zero drift after thermal cycling (1-3 % FS shift overnight), and span shift after an overload above 120 % FS (often non-recoverable). Catch these symptoms early and you can re-zero, re-cal, or replace brushes during a planned shutdown. Miss them and a wind-turbine gearbox test or a marine engine dyno run gives you data that cannot be defended in the report.

This guide is a diagnostic playbook for shaft torque sensor problems: what each failure mode looks like on the trace, what causes it, and the maintenance interval that keeps it from happening twice.


What is a shaft torque sensor and where does it live in the drivetrain?

A shaft torque sensor is a rotary transducer that sits in-line between a prime mover and its load — usually engine to dyno, motor to gearbox, or turbine to generator. It senses the twist angle of a calibrated shaft section under torque, converts that angle to a voltage via a strain-gauge bridge bonded to the shaft, and transmits the rotating signal off-shaft through slip rings, a rotary transformer, or a digital telemetry link.

The thing that breaks most often is not the strain bridge itself. It is the rotating-to-stationary signal path: the slip-ring brushes that wear, the rotary-transformer coupling that goes off-axis after a thermal expansion event, or the telemetry battery that flat-lines mid-test. Knowing which coupling type you have decides which failure modes to expect — see our torque transducer selection guide for the architecture overview.

Three dominant failure modes and what they look like on the trace

| Failure mode | Symptom on trace | Root cause | Recoverable? |
|---|---|---|---|
| Slip-ring brush wear | Random spikes / noise that grow with rpm; usually visible above 3000 rpm | Brush face polished smooth, contact pressure dropped, carbon dust contamination | Yes — replace brushes, clean ring |
| Zero drift after thermal cycling | 1-3 % FS offset visible at zero load after overnight temperature swing | Differential expansion between shaft and gauge backing; bonding stress relief | Yes — re-zero on warm sensor |
| Span shift after overload | Permanent gain change of 0.5-5 % above former span | Plastic deformation of gauge or shaft after >120 % FS event | Sometimes — needs full re-cal, often replacement |
| EMC pickup | Sinusoidal noise locked to line frequency or VFD switching frequency | Shielded cable broken, dyno cabinet bonding lost | Yes — fix shielding |

The first three are intrinsic to the sensor and its mechanical mount. EMC pickup is intrinsic to the test cell and gets blamed on the sensor unfairly. Always check shielding before sending the unit back for cal.

A 5-step diagnostic checklist when readings look wrong

  1. Re-zero at temperature. Bring the sensor to operating temperature, no load, then capture the zero. Most “drift” is just an unstabilised zero.
  2. Run a known shunt cal. Internal shunt resistor injects a fixed simulated load — confirms the bridge electronics are intact independent of the shaft.
  3. Compare two run-ups. Same speed sweep twice. If the noise is rpm-locked, it is mechanical (slip ring, alignment). If frequency-locked, it is electrical.
  4. Check torsional alignment. Use a dial gauge on the coupling face. Misalignment above 0.05 mm/100 mm on a flange-to-flange mount loads the sensor in bending and reads as torque.
  5. Compare to derived torque. For motor-driven rigs, compute torque from electrical power × efficiency / speed. A 5 % gap is normal; a 20 % gap is a sensor problem.
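Step 5 is one line of mechanics: shaft power is electrical power times drive efficiency, and torque is power divided by angular speed. A sketch; the 92 % efficiency in the usage example is illustrative, not a measured value.

```python
import math

def derived_torque_Nm(electrical_kW, efficiency, rpm):
    """Torque inferred from motor electrical power (step 5 above)."""
    omega = rpm * 2.0 * math.pi / 60.0          # shaft speed, rad/s
    return electrical_kW * 1e3 * efficiency / omega

def gap_pct(measured_Nm, derived_Nm):
    """Disagreement between sensor reading and derived torque, in percent."""
    return abs(measured_Nm - derived_Nm) / derived_Nm * 100.0

t = derived_torque_Nm(55, 0.92, 1500)   # ~322 Nm
gap_pct(338, t)                          # ~5 % gap: normal, not a sensor fault
```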

For the static-pressure analogue of zero shift in transmitters, our static vs dynamic pressure guide explains the same calibration-reference problem in a different sensor family.

Maintenance intervals by signal-coupling type

  • Slip ring: brush inspection every 500 hours; brush replacement every 2000 hours or 10 % length loss; ring resurfacing every 5000 hours.
  • Rotary transformer: air-gap check every 2000 hours; bearing change every 8000 hours.
  • Digital telemetry: battery replacement every 12-18 months; antenna alignment check every 4000 hours.
  • SAW (surface acoustic wave): no rotating contact, no scheduled service; functional check at the annual cal.

If you are running a 24/7 wind-turbine gearbox endurance test, picking the right coupling at the start saves the test from being interrupted at 3000 hours by a brush change. For straight-run mounting and bonding rules that minimise EMC pickup, our upstream and downstream straight pipe guide covers the analogous geometry constraints in process measurement.

When to re-zero, when to re-cal, when to replace

  1. Re-zero (in field): after every cold start, after every coupling re-mount, after a temperature swing >15 °C.
  2. Shunt-cal verification (in field): at the start of every test campaign and when the trace looks suspect.
  3. Full cal (factory or accredited lab): annually, or after any overload above 100 % FS, or when shunt cal disagrees with the previous reading by more than 0.2 %.
  4. Replace: after an overload above 150 % FS, after any mechanical shock that bent the shaft, or when the noise floor at full speed exceeds 1 % FS even with new brushes.
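The four rules above reduce to a small triage function. A sketch that encodes only the numeric thresholds in the list; the bent-shaft and coupling-re-mount cases need a human judgement call and are deliberately omitted.

```python
def triage(overload_pct_FS=0.0, shunt_delta_pct=0.0,
           noise_pct_FS=0.0, new_brushes=False):
    """Map the thresholds above to one action: replace > full_cal > re_zero."""
    if overload_pct_FS > 150 or (noise_pct_FS > 1.0 and new_brushes):
        return "replace"
    if overload_pct_FS > 100 or shunt_delta_pct > 0.2:
        return "full_cal"
    return "re_zero"

triage(overload_pct_FS=160)                  # 'replace'
triage(shunt_delta_pct=0.3)                  # 'full_cal'
triage(noise_pct_FS=1.5, new_brushes=True)   # 'replace'
```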

For installation hygiene that prevents most of these problems, see our pressure transmitter installation guide — the same EMC, bonding and stress-relief rules apply to shaft torque mounts.

Featured shaft torque sensors

807 Rotary Torque Sensor

Slip-ring rotary, 0-20 kNm, up to 15000 rpm, ±0.2 % FS, engine and gearbox dyno service.

120 Reaction Torque Sensor

Static reaction, 0-2 kNm, no slip rings, ±0.1 % FS, motor bench and torque-wrench QC.

56 Micro Reaction Torque Sensor

Compact reaction, 0-50 Nm, hand-screwdriver and small-motor QC, tight-space mount.

FAQ

What are the most common shaft torque sensor failure modes?

Three: slip-ring brush wear (signal noise above 3000 rpm), zero drift after thermal cycling (1-3 % FS), and span shift after overload above 120 % FS (often non-recoverable).

How do I know my shaft torque sensor needs re-calibration?

If shunt-cal disagrees with the previous reading by more than 0.2 %, or if the sensor has seen any overload above 100 % FS, send it for a full cal. Annual re-cal is the default for traceable test work.

How long do slip-ring brushes last in a torque sensor?

Typically 2000 operating hours or 10 % brush length loss, whichever comes first. Inspect every 500 hours; replace before noise exceeds 0.5 % FS at full speed.

What rpm range can a shaft torque sensor handle?

Slip-ring designs to 8000 rpm, rotary transformer to 12000 rpm, digital telemetry and SAW to 25000 rpm or higher. Pick the coupling type by your worst-case test speed plus 20 %.

Can I re-zero a shaft torque sensor in the field?

Yes. With no load on the shaft and the sensor at operating temperature, hold the zero command for 10 seconds. Field re-zero corrects thermal drift but not span shift.

What causes the noise on my torque trace at high speed?

Three usual causes: worn slip-ring brushes, mechanical misalignment, or VFD-driven EMC pickup. Diagnose by repeating the run-up — rpm-locked noise is mechanical, frequency-locked is electrical.

When should a shaft torque sensor be replaced rather than re-calibrated?

After any overload above 150 % FS, after a bent shaft from mechanical shock, or when noise at full speed exceeds 1 % FS with new brushes installed.

Need help diagnosing a torque trace or picking a replacement sensor? Send us your model, the symptom on the trace, and the rpm range — our test-rig engineers can usually triage in one email.

Request a Quote
