Technical Terms

16-bit refers to a method of representing numbers using 16 binary digits (bits). Since 2^16 = 65,536, a 16-bit value can express 65,536 distinct values. In computing, 16 bits are equivalent to 2 bytes. When representing unsigned integers, the range of values is from 0 to 65,535; when signed integers are represented using two's complement, the range is from -32,768 to +32,767. In the context of programmable power supplies and measuring instruments, 16-bit resolution is frequently used in digital-to-analog converters (DACs) for setting output values and in analog-to-digital converters (ADCs) for measuring them.
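As a minimal sketch of these ranges in C, using the fixed-width types from <stdint.h>, the following prints the unsigned and signed limits and shows how the same 16-bit pattern is interpreted under two's complement:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Unsigned 16-bit: 0 to 65,535 */
    uint16_t u_max = 0xFFFF;                    /* all 16 bits set */
    printf("unsigned max: %u\n", (unsigned)u_max);

    /* Signed 16-bit (two's complement): -32,768 to +32,767 */
    printf("signed range: %d to %d\n", INT16_MIN, INT16_MAX);

    /* On a two's-complement machine, the bit pattern 0x8000
       reinterpreted as a signed 16-bit value reads as -32,768. */
    uint16_t raw = 0x8000;
    int16_t as_signed = (int16_t)raw;
    printf("0x%04X as signed: %d\n", (unsigned)raw, (int)as_signed);

    return 0;
}
```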

A 16-bit DAC can divide the maximum voltage or current range of a power supply into 65,536 fine steps, allowing very precise control over the output. For example, on a 65 V power supply, a 16-bit DAC provides a theoretical setting resolution of approximately 1 mV per step (65 V / 65,536 ≈ 0.001 V). Similarly, a 16-bit ADC offers high-resolution measurement, enabling the capture of small fluctuations in voltage or current.
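The same arithmetic can be expressed as a short C sketch. The 65 V full-scale range and the ideal, linear transfer function here are illustrative assumptions, not a description of any particular instrument; real converters add offset, gain, and linearity errors:

```c
#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Hypothetical values for illustration: an ideal, linear 16-bit DAC
   driving a power supply with a 65 V full-scale output range. */
#define FULL_SCALE_V 65.0
#define MAX_CODE     65535u   /* highest 16-bit code (2^16 - 1) */

/* Map a desired output voltage to the nearest DAC code,
   clamping to the valid 16-bit range. */
static uint16_t volts_to_code(double volts)
{
    if (volts <= 0.0)          return 0;
    if (volts >= FULL_SCALE_V) return MAX_CODE;
    return (uint16_t)lround(volts / FULL_SCALE_V * MAX_CODE);
}

int main(void)
{
    double step = FULL_SCALE_V / 65536.0;   /* ~0.99 mV per step */
    printf("step size: %.4f mV\n", step * 1e3);

    double target = 12.0;                   /* example set-point */
    uint16_t code = volts_to_code(target);
    printf("%.3f V -> code %u (actual %.6f V)\n",
           target, (unsigned)code, (double)code / MAX_CODE * FULL_SCALE_V);
    return 0;
}
```

The quantization step is what the prose calls the setting resolution: any requested voltage is rounded to the nearest of the 65,536 available codes, so the actual output can differ from the set-point by up to half a step.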

The 16-bit architecture was central to early personal computers and microcontrollers. While modern general-purpose CPUs have moved to 32-bit and 64-bit architectures, 16-bit microcontrollers remain widely used in embedded systems, where their lower cost and sufficient performance suit dedicated control tasks such as those inside power supplies and other instruments.
