The Julian date is the most common way of expressing the time at which an astronomical event occurs. It is independent of the time zone. It is defined as the number of days since noon on January 1, 4713 BC at the Greenwich meridian. Current Julian dates have seven digits to the left of the decimal point: for example, noon on January 1, 2000 is Julian date 2,451,545.0.
Your Python program can handle a JulianDate instance as if it were an ordinary floating-point number. However, since IEEE 754 double-precision values are good only to about 15 significant digits, the precision of times of day is on the order of 1×10^{-8} days, or in the neighborhood of a millisecond.
Assuming that modern astronomy is more important than historical research, we can buy a bit more precision by internally storing the Julian date minus 2,200,000. This leaves at least nine or ten digits after the decimal point, for a precision of roughly 10 microseconds.
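The effect of removing the bias can be seen directly with the standard library's math.ulp, which reports the spacing between adjacent floats at a given magnitude. This is a minimal sketch using the bias of 2,200,000 described above; the exact precision figures depend on the magnitude of the date.

```python
import math

FULL = 2451545.0      # noon, January 1, 2000, as a full Julian date
BIAS = 2_200_000      # the internal bias described above
BIASED = FULL - BIAS  # 251545.0

dt = 1e-10  # about 8.6 microseconds, expressed in days

# With the full Julian date, this step is below half an ulp and vanishes:
print(FULL + dt == FULL)        # True: the increment is lost
# With the biased value, the ulp is 16 times smaller, so it survives:
print(BIASED + dt == BIASED)    # False: the increment is preserved

print(math.ulp(FULL))    # spacing near 2451545.0: about 4.7e-10 days
print(math.ulp(BIASED))  # spacing near 251545.0: about 2.9e-11 days
```

An ulp of 4.7e-10 days is about 40 microseconds; 2.9e-11 days is about 2.5 microseconds, in line with the rough figures above.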
Through the magic of Python classes, though, converting a JulianDate instance into a Python float restores the bias, giving the correct full value. Subtracting two JulianDate values, however, loses less significance than subtracting two float values converted from JulianDate instances.
Here is the class interface.
JulianDate(j, f=0.0)

    j
        The Julian date, as a float or an int.

    f
        If you would like to work with greater precision, pass the
        integral part of the Julian date as j and the fractional part
        as f. The bias (JULIAN_BIAS) will be removed from j before f
        is added, giving you extended precision.

Instance attribute .j contains the biased value, that is,
j+f-JULIAN_BIAS.
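The interface above can be sketched as a small class. This is an illustration of the described behavior, not the library's actual code; it assumes JULIAN_BIAS is the 2,200,000 mentioned earlier and implements only the pieces documented here.

```python
JULIAN_BIAS = 2_200_000  # assumed value of the bias described in the text

class JulianDate:
    """Sketch of the documented interface; not the real implementation."""

    def __init__(self, j, f=0.0):
        # .j holds the biased value: j + f - JULIAN_BIAS.
        # The bias is removed from j before f is added, so the small
        # fractional part is combined at reduced magnitude.
        self.j = (j - JULIAN_BIAS) + f

    def __float__(self):
        # Conversion to float restores the bias, giving the full value.
        return self.j + JULIAN_BIAS

    def __sub__(self, other):
        # Subtracting the biased values directly avoids the extra
        # loss of significance incurred by converting to float first.
        return self.j - other.j

# Usage: noon on January 1, 2000, plus a quarter day
j2000 = JulianDate(2451545, 0.25)
print(float(j2000))   # 2451545.25
print(j2000.j)        # 251545.25 (the biased value)

later = JulianDate(2451545, 0.75)
print(later - j2000)  # 0.5 days
```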