Opened 6 years ago
Last modified 3 months ago
#172 assigned defect
Grammar allows zero
| Reported by: | dmcclean | Owned by: | Simon Cox |
|---|---|---|---|
| Priority: | minor | Milestone: | |
| Component: | help | Keywords: | semantics, grammar |
| Cc: | | | |
Description
The UCUM 1.9 grammar (I'm not sure where to find any work-in-progress newer version, so I apologize if this has already been addressed) allows 0 to appear as a <digits>, and therefore as a <factor>.
It shouldn't be permitted as a <factor> because it doesn't have a multiplicative inverse. Its inclusion breaks the algebraic property noted in section 18 that "For each unit u ∈ U there is an inverse unit u⁻¹ such that u · u⁻¹ = 1. Thus, (U, ·) is an Abelian group."
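To make the group-theoretic objection concrete, here is a minimal sketch (my own illustration, not part of the specification): every non-zero factor has an exact multiplicative inverse, while 0 does not.

```python
from fractions import Fraction

def inverse(factor: int) -> Fraction:
    """Multiplicative inverse of a UCUM <factor>; undefined for 0."""
    if factor == 0:
        # 0 has no inverse, so it cannot belong to the group (U, ·).
        raise ValueError("0 has no multiplicative inverse")
    return Fraction(1, factor)

print(inverse(10) * 10)  # prints 1, the group identity
```

Any grammar that admits 0 as a <factor> therefore admits a unit expression with no inverse.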
Change History (6)
comment:5 Changed 3 months ago by
| Component: | → help |
|---|---|
| Owner: | set to Simon Cox |
| Status: | new → assigned |
comment:6 Changed 3 months ago by
Indeed, in version 2.1 (https://ucum.org/ucum.html#section-Syntax-Rules) the grammar still allows the factor to be zero. Here are the relevant pieces:
<digit> ::= “0” | “1” | “2” | “3” | “4” | “5” | “6” | “7” | “8” | “9”
<digits> ::= <digit><digits> | <digit>
<factor> ::= <digits>
In order to suppress this problem, it would be necessary to replace the above with something like
<digit> ::= “0” | “1” | “2” | “3” | “4” | “5” | “6” | “7” | “8” | “9”
<digits> ::= <digit><digits> | <digit>
<non-zero-digit> ::= “1” | “2” | “3” | “4” | “5” | “6” | “7” | “8” | “9”
<non-zero-digits> ::= “0”<non-zero-digits> | <non-zero-digit><digits> | <non-zero-digit>
<factor> ::= <non-zero-digits>
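As a quick sanity check, the intent of <non-zero-digits> — a digit string containing at least one non-zero digit — can be sketched with regular expressions. The names below are my own, not part of UCUM:

```python
import re

# Current grammar: <factor> ::= <digits> -- any non-empty digit string, "0" included.
FACTOR_CURRENT = re.compile(r"[0-9]+\Z")
# Proposed intent: at least one non-zero digit, so the factor never denotes zero.
FACTOR_NONZERO = re.compile(r"0*[1-9][0-9]*\Z")

for s in ["0", "00", "10", "007"]:
    print(s, bool(FACTOR_CURRENT.match(s)), bool(FACTOR_NONZERO.match(s)))
```

Both patterns accept "10" and "007", but only the current grammar accepts "0" and "00".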
Yes, floating point number has an assumed definition as a decimal with optional scientific notation. There are additional considerations relating to the implicit specification of the number of significant digits. These issues have been discussed in other work (the HL7 v3 Data Types standard). We have tried to keep this outside the UCUM specification, because otherwise we might have to include standardization of real numbers in computer notation in UCUM.
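As an illustration of the significant-digits point (my own example, not drawn from HL7): a binary float discards the written precision, whereas a decimal type can preserve it.

```python
from decimal import Decimal

# "1.20" carries three significant digits; Decimal keeps them, float does not.
print(Decimal("1.20"))  # prints 1.20
print(float("1.20"))    # prints 1.2
```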
You may use the Java floating point notation as an example. Most other languages, e.g., SQL, are quite similar, and differences occur only in edge cases.
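A hedged sketch of what Java floating point notation amounts to here — an optional sign, a decimal significand, and an optional exponent (type suffixes and hexadecimal floats left out); the pattern is my own simplification:

```python
import re

# Java-style float literal, simplified: sign, digits with optional point, optional exponent.
JAVA_FLOAT = re.compile(r"[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?\Z")

for s in ["1", "2.5", "6.022e23", "-1.5E-3", "1e", "."]:
    print(s, bool(JAVA_FLOAT.match(s)))
```

The edge cases mentioned above are exactly the sort of strings the last two probes exercise: a bare exponent marker and a lone decimal point are rejected.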
In the writing of numbers internally we use some rules:
But this is of interest only in the internal notation we might use in any formal data tables. The UCUM standard as published in text does not make that distinction.
Perhaps we should speak about this, but for us a floating point number is a primitive.