double precision
C1 · Technical, Academic
Definition
Meaning
A computing term referring to a numeric data type that uses two consecutive computer words (typically 64 bits) to store a single number, allowing for a wider range of values and greater precision (more significant digits) compared to a 'single precision' type.
In computing and numerical analysis, a standard for representing floating-point numbers with approximately 15-16 decimal digits of precision and a wide exponent range. It often serves as the default high-precision numeric type in many programming languages and systems.
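The figures above (roughly 15-16 significant decimal digits, wide exponent range) can be inspected directly. A minimal sketch in Python, assuming its built-in float is an IEEE 754 64-bit double (true on virtually all modern platforms):

```python
import sys

# Python's built-in float is an IEEE 754 double-precision (64-bit) number,
# so sys.float_info describes the double-precision format.
info = sys.float_info

print(info.dig)       # decimal digits reliably representable: 15
print(info.mant_dig)  # bits in the significand: 53
print(info.epsilon)   # gap between 1.0 and the next double: 2.220446049250313e-16
print(info.max)       # largest finite double: 1.7976931348623157e+308
```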
Linguistics
Semantic Notes
Always a compound noun ('double precision') or a hyphenated adjective ('double-precision arithmetic'). It is a term of art in computer science, mathematics, and engineering. It contrasts with 'single precision' (less precise) and 'quadruple precision' (more precise).
Dialectal Variation
British vs American Usage
Differences
No significant lexical differences. Note that even in British English the computing sense is spelled 'program'; 'programme' is reserved for other senses (a TV programme, a programme of events).
Connotations
Identical technical meaning in both varieties.
Frequency
Equally common in UK and US technical contexts.
Vocabulary
Collocations
Grammar
Valency Patterns
- [verb] + double precision (e.g., use, require, support)
- double-precision + [noun] (e.g., arithmetic, format, number)
- in + double precision
Vocabulary
Synonyms
Strong
Neutral
Weak
Vocabulary
Antonyms
Phrases
Idioms & Phrases
- “[No common idioms for this technical term]”
Usage
Context Usage
Business
Rare. Might appear in contexts discussing data analysis, financial modelling software specifications, or high-performance computing requirements.
Academic
Common in computer science, engineering, physics, and mathematics papers and textbooks when discussing numerical methods, simulation accuracy, or computational results.
Everyday
Extremely rare. Unlikely to be used outside of technical discussions.
Technical
The primary domain. Used in programming (e.g., C's 'double'; Python's 'float', which is itself double precision), scientific computing, CAD software, and numerical analysis to specify data representation and ensure calculation accuracy.
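The parenthetical above can be demonstrated: a Python 'float' is stored as a C 'double'. A short sketch using the standard struct module ('d' = C double, 'f' = C float):

```python
import struct

x = 0.1  # not exactly representable in binary floating point

# Packing into a C 'double' (8 bytes) and back preserves the value exactly,
# because Python's float already is a double.
assert struct.unpack('d', struct.pack('d', x))[0] == x

# Packing into a C 'float' (4 bytes, single precision) loses digits.
y = struct.unpack('f', struct.pack('f', x))[0]
print(y)  # 0.10000000149011612
```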
Examples
By Part of Speech
verb
British English
- [No standard verb use]
American English
- [No standard verb use]
adverb
British English
- [No standard adverb use]
American English
- [No standard adverb use]
adjective
British English
- The simulation requires double-precision arithmetic for stable results.
- We stored the data in a double-precision format.
American English
- Use a double-precision variable for this calculation.
- The software defaults to double-precision floating-point numbers.
Examples
By CEFR Level
- [Too technical for A2]
- [Too technical for B1]
- For accurate scientific results, the program uses double precision.
- A double-precision number can store very large or very small values.
- The numerical instability vanished when we switched the model's calculations from single to double precision.
- Most modern CPUs have dedicated hardware for accelerating double-precision floating-point operations.
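The 'switching from single to double precision' effect in the examples above can be reproduced in a few lines. A sketch that emulates single precision by round-tripping each value through a 32-bit C float with the standard struct module:

```python
import struct

def to_single(x: float) -> float:
    """Round a Python double to the nearest single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Sum 0.1 one hundred thousand times in double and in emulated single precision.
n = 100_000
exact = 10000.0
double_sum = 0.0
single_sum = 0.0
for _ in range(n):
    double_sum += 0.1
    single_sum = to_single(single_sum + to_single(0.1))

# The double-precision error stays tiny; the single-precision error,
# accumulated over many additions, is orders of magnitude larger.
print(abs(double_sum - exact))
print(abs(single_sum - exact))
```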
Learning
Memory Aids
Mnemonic
Think of 'double' as 'twice as much' – double precision uses twice the memory (often 64 bits vs. 32 bits) of single precision to store a number, giving you double the detail and accuracy.
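The 'twice the memory' point in the mnemonic can be checked with Python's struct module, which exposes the C-level storage sizes:

```python
import struct

# '<f' and '<d' are the standard-size IEEE 754 formats.
f_size = struct.calcsize('<f')  # single precision: 4 bytes (32 bits)
d_size = struct.calcsize('<d')  # double precision: 8 bytes (64 bits)
print(f_size, d_size)  # 4 8
```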
Conceptual Metaphor
PRECISION IS A CONTAINER OF INFORMATION: More precision means a larger, more detailed container for numerical data.
Watch out
Common Pitfalls
Translation Traps (for Russian speakers)
- Do not translate 'double' as 'двойной' in isolation. The established term is 'число двойной точности' or 'double precision'.
- When calquing 'double precision' as 'двойная точность', pair it with a noun ('число двойной точности', 'вычисления с двойной точностью') rather than using it in isolation.
- Remember that the programming type name 'double' is itself shorthand for a double-precision number; it does not need a separate translation.
Common Mistakes
- Writing 'double-precise' (incorrect adjective form).
- Using 'double precision' as a verb (e.g., 'to double precision the calculation' – incorrect).
- Confusing with 'double' meaning twice the amount in non-technical contexts.
Practice
Quiz
What is the primary advantage of using a 'double precision' data type over 'single precision'?
FAQ
Frequently Asked Questions
Is the 'double' type in programming the same as 'double precision'?
Yes, typically. In most programming languages, the 'double' data type is the standard implementation of double-precision floating-point numbers (usually following the IEEE 754 standard for 64-bit numbers).
When should I use double precision instead of single precision?
Use double precision when you need higher accuracy (more significant digits), a wider range of values, or when cumulative rounding errors could become significant, such as in scientific simulations, financial calculations, or 3D graphics.
Does double precision affect performance?
It can. Double-precision values take twice the memory of single-precision ones, and on some hardware double-precision operations are slower. On modern desktop CPUs, however, the performance difference is often small for general use.
Can you explain the difference with an everyday analogy?
Think of measuring a distance. Single precision is like using a ruler marked in millimetres; double precision is like using a calliper that measures to a tenth of a millimetre. The latter can reliably represent much finer differences.
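The ruler analogy can be put in numbers: the gap between adjacent representable values (the 'tick marks' on the ruler) near 1.0 is about 1.2e-7 in single precision but about 2.2e-16 in double precision. A sketch in Python (3.9+ for math.ulp):

```python
import math
import struct

def to_single(x: float) -> float:
    """Round a double to the nearest single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Distance from 1.0 to the next representable double: the double "tick spacing".
print(math.ulp(1.0))  # 2.220446049250313e-16

# The single-precision spacing at 1.0 is 2**-23, about a billion times coarser:
print(2**-23)         # 1.1920928955078125e-07

# A nudge smaller than one single-precision tick is simply rounded away.
single_gap = to_single(1.0 + 2**-24)
print(single_gap == 1.0)  # True
```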