Character | Matsusada Precision


Technical Terms

A character in computing is a single unit of text, such as a letter (A, b, c), a digit (1, 2, 3), a punctuation mark (., !, ?), or a symbol (@, #, $). Since computers can only process binary data, each character is represented by a numerical code, and the mapping between characters and their codes is defined by a character encoding standard. The best-known early standard is ASCII (American Standard Code for Information Interchange), which uses 7 bits (or 8 bits in its extended forms) to represent 128 (or 256) common English characters, digits, and control codes. Modern systems predominantly use Unicode (most often in its UTF-8 encoding), which supports characters from virtually all of the world's writing systems while remaining byte-compatible with ASCII for the first 128 code points.
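The mapping from characters to numeric codes, and the ASCII/UTF-8 relationship, can be sketched in a few lines of Python (shown here purely as an illustration; the specific strings are arbitrary examples):

```python
text = "A1!"

# Each character maps to a numeric code point (identical in ASCII and Unicode
# for these characters): 'A' -> 65, '1' -> 49, '!' -> 33.
codes = [ord(c) for c in text]
print(codes)  # [65, 49, 33]

# UTF-8 encodes ASCII characters as single bytes with the same values...
ascii_bytes = text.encode("utf-8")
print(list(ascii_bytes))  # [65, 49, 33]

# ...while characters outside ASCII take multiple bytes, e.g. 'é' -> 0xC3 0xA9.
accented = "é".encode("utf-8")
print(accented.hex())  # c3a9
```

This byte-compatibility is why a plain-ASCII text file is also a valid UTF-8 file.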

In the context of controlling programmable instruments, characters are fundamental. Command languages like SCPI are text-based, meaning commands are sent as sequences of characters, or "strings." For example, the command to set voltage is sent as the string "VOLT 5.0". The instrument's firmware receives this string, parses the characters, and executes the corresponding action. In programming languages, there is often a specific data type called char to store a single character, while a sequence of characters is stored in a string data type.
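The idea that a SCPI command is just a sequence of character codes can be made concrete with standard-library Python. This is a minimal sketch, not a complete instrument driver: the newline terminator is an assumption (many, but not all, instruments expect one), and in practice the bytes would be written to a VISA, serial, or socket connection rather than printed.

```python
# Build the SCPI command string and convert it to the byte values that
# are actually transmitted to the instrument.
command = "VOLT 5.0"

# SCPI commands are ASCII text; encoding yields one byte per character.
# The trailing newline terminator is assumed here; check your instrument's manual.
payload = command.encode("ascii") + b"\n"

# The first four bytes are the character codes for 'V', 'O', 'L', 'T'.
print(list(payload[:4]))  # [86, 79, 76, 84]
print(len(payload))       # 9 bytes: 8 characters plus the terminator
```

In a language like C, each of those bytes would fit in a `char`; Python has no separate character type, so a single character is simply a string of length 1.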
