PEP: 564
Title: Add new time functions with nanosecond resolution
Version: $Revision$
Last-Modified: $Date$
Author: Victor Stinner <vstinner@python.org>
Status: Final
Type: Standards Track
Content-Type: text/x-rst
Created: 16-Oct-2017
Python-Version: 3.7
Resolution: https://mail.python.org/pipermail/python-dev/2017-October/150046.html


Abstract
========

Add six new "nanosecond" variants of existing functions to the ``time``
module: ``clock_gettime_ns()``, ``clock_settime_ns()``, ``monotonic_ns()``,
``perf_counter_ns()``, ``process_time_ns()`` and ``time_ns()``. While similar
to the existing functions without the ``_ns`` suffix, they provide nanosecond
resolution: they return a number of nanoseconds as a Python ``int``.

The ``time.time_ns()`` resolution is 3 times better than the ``time.time()``
resolution on Linux and Windows.


Rationale
=========

Float type limited to 104 days
------------------------------

The resolution of clocks in desktop and laptop computers is getting closer to
nanosecond resolution. More and more clocks have a frequency in MHz, up to
GHz for the CPU TSC clock.

The Python ``time.time()`` function returns the current time as a
floating-point number which is usually a 64-bit binary floating-point number
(in the IEEE 754 format).

The problem is that the ``float`` type starts to lose nanoseconds after 104
days. Converting from nanoseconds (``int``) to seconds (``float``) and then
back to nanoseconds (``int``) to check if conversions lose precision::

    # no precision loss
    >>> x = 2 ** 52 + 1; int(float(x * 1e-9) * 1e9) - x
    0
    # precision loss! (1 nanosecond)
    >>> x = 2 ** 53 + 1; int(float(x * 1e-9) * 1e9) - x
    -1
    >>> print(datetime.timedelta(seconds=2 ** 53 / 1e9))
    104 days, 5:59:59.254741

``time.time()`` returns seconds elapsed since the UNIX epoch: January 1st,
1970. This function hasn't had nanosecond precision since April 1970
(47 years ago)::

    >>> import datetime
    >>> unix_epoch = datetime.datetime(1970, 1, 1)
    >>> print(unix_epoch + datetime.timedelta(seconds=2**53 / 1e9))
    1970-04-15 05:59:59.254741


Previous rejected PEP
---------------------

Five years ago, :pep:`410` proposed a large and complex change to all Python
functions returning time, to support nanosecond resolution using the
``decimal.Decimal`` type.

The PEP was rejected for different reasons:

* The idea of adding a new optional parameter to change the result type was
  rejected. It's an uncommon (and bad?) programming practice in Python.

* It was not clear if hardware clocks really had a resolution of
  1 nanosecond, or if that made sense at the Python level.

* The ``decimal.Decimal`` type is uncommon in Python and so requires
  adapting code to handle it.


Issues caused by precision loss
-------------------------------

Example 1: measure time delta in long-running process
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A server has been running for longer than 104 days. A clock is read before
and after running a function to measure its performance, in order to detect
performance issues at runtime. Such a benchmark only loses precision because
of the float type used by clocks, not because of the clock resolution.

On Python microbenchmarks, it is common to see function calls taking less
than 100 ns. A difference of a few nanoseconds might become significant.
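For illustration, consider made-up clock readings from a process whose clock
has been running for about 200 days, used to time an event lasting exactly
100 ns. Computed as ``int`` nanoseconds, the delta is exact; computed as
``float`` seconds, it is not::

    # Hypothetical clock readings (in nanoseconds) after ~200 days of uptime.
    t1_ns = 200 * 24 * 3600 * 10 ** 9   # first clock read, as int nanoseconds
    t2_ns = t1_ns + 100                 # second read, exactly 100 ns later

    print(t2_ns - t1_ns)                # int nanoseconds: exactly 100

    t1 = t1_ns * 1e-9                   # the same readings as float seconds
    t2 = t2_ns * 1e-9
    print((t2 - t1) * 1e9)              # float seconds: about 100.6, not 100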
Example 2: compare times with different resolution
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Two programs "A" and "B" are running on the same system and use the system
clock. Program A reads the system clock with nanosecond resolution and writes
a timestamp with nanosecond resolution. Program B reads the timestamp with
nanosecond resolution, but compares it to the system clock read with a worse
resolution. To simplify the example, let's say that B reads the clock with
second resolution. In that case, there is a window of 1 second during which
program B can see the timestamp written by A as "in the future".

Nowadays, more and more databases and filesystems support storing times with
nanosecond resolution.

.. note::
   This issue was already fixed for file modification time by adding the
   ``st_mtime_ns`` field to the ``os.stat()`` result, and by accepting
   nanoseconds in ``os.utime()``. This PEP proposes to generalize the fix.


CPython enhancements of the last 5 years
----------------------------------------

Since :pep:`410` was rejected:

* The ``os.stat_result`` structure got 3 new fields for timestamps as
  nanoseconds (Python ``int``): ``st_atime_ns``, ``st_ctime_ns`` and
  ``st_mtime_ns``.

* :pep:`418` was accepted; Python 3.3 got 3 new clocks:
  ``time.monotonic()``, ``time.perf_counter()`` and ``time.process_time()``.

* The CPython private "pytime" C API handling time now uses a new
  ``_PyTime_t`` type: a simple 64-bit signed integer (C ``int64_t``). The
  ``_PyTime_t`` unit is an implementation detail and not part of the API.
  The unit is currently ``1 nanosecond``.


Existing Python APIs using nanoseconds as int
---------------------------------------------

The ``os.stat_result`` structure has 3 fields for timestamps as nanoseconds
(``int``): ``st_atime_ns``, ``st_ctime_ns`` and ``st_mtime_ns``.

The ``ns`` parameter of the ``os.utime()`` function accepts a
``(atime_ns: int, mtime_ns: int)`` tuple: nanoseconds.


Changes
=======

New functions
-------------

This PEP adds six new functions to the ``time`` module:

* ``time.clock_gettime_ns(clock_id)``
* ``time.clock_settime_ns(clock_id, time: int)``
* ``time.monotonic_ns()``
* ``time.perf_counter_ns()``
* ``time.process_time_ns()``
* ``time.time_ns()``

These functions are similar to the versions without the ``_ns`` suffix, but
return a number of nanoseconds as a Python ``int``. For example,
``time.monotonic_ns() == int(time.monotonic() * 1e9)`` if the ``monotonic()``
value is small enough to not lose precision. A short usage sketch is given at
the end of this section.

These functions are needed because they may return "large" timestamps, like
``time.time()`` which uses the UNIX epoch as reference, and so their
``float``-returning variants are likely to lose precision at the nanosecond
resolution.


Unchanged functions
-------------------

Since the ``time.clock()`` function was deprecated in Python 3.3, no
``time.clock_ns()`` is added.

Python has other time-returning functions. No nanosecond variant is proposed
for these other functions, either because their internal resolution is
greater than or equal to 1 us, or because their maximum value is small enough
to not lose precision. For example, the maximum value of
``time.clock_getres()`` should be 1 second.

Examples of unchanged functions:

* ``os`` module: ``sched_rr_get_interval()``, ``times()``, ``wait3()`` and
  ``wait4()``

* ``resource`` module: ``ru_utime`` and ``ru_stime`` fields of
  ``getrusage()``

* ``signal`` module: ``getitimer()``, ``setitimer()``

* ``time`` module: ``clock_getres()``

See also the `Annex: Clocks Resolution in Python`_.

A new nanosecond-returning flavor of these functions may be added later if an
operating system exposes new functions providing better resolution.
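For illustration, the new functions can be combined with plain integer
arithmetic (the loop below is an arbitrary workload)::

    import time

    # Measure a short duration with the nanosecond flavor of perf_counter():
    # the result is an int, so the subtraction involves no float rounding.
    t1 = time.perf_counter_ns()
    sum(range(1000))
    t2 = time.perf_counter_ns()
    print("elapsed:", t2 - t1, "ns")

    # The same timestamp in both flavors: int nanoseconds and float seconds.
    print("time_ns():", time.time_ns())
    print("time():", time.time())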
Alternatives and discussion
===========================

Sub-nanosecond resolution
-------------------------

The ``time.time_ns()`` API is not theoretically future-proof: if clock
resolutions continue to increase below the nanosecond level, new Python
functions may be needed.

In practice, the 1 nanosecond resolution is currently enough for all
structures returned by all common operating system functions.

Hardware clocks with a resolution better than 1 nanosecond already exist. For
example, the frequency of a CPU TSC clock is the CPU base frequency: the
resolution is around 0.3 ns for a CPU running at 3 GHz. Users who have access
to such hardware and really need sub-nanosecond resolution can however extend
Python for their needs. Such a rare use case doesn't justify designing the
Python standard library to support sub-nanosecond resolution.

For the CPython implementation, nanosecond resolution is convenient: the
standard and well supported ``int64_t`` type can be used to store a
nanosecond-precise timestamp. It supports a timespan of -292 years to
+292 years. Using the UNIX epoch as reference, it therefore supports
representing times from the year 1677 to the year 2262::

    >>> 1970 - 2 ** 63 / (10 ** 9 * 3600 * 24 * 365.25)
    1677.728976954687
    >>> 1970 + 2 ** 63 / (10 ** 9 * 3600 * 24 * 365.25)
    2262.271023045313


Modifying time.time() result type
---------------------------------

It was proposed to modify ``time.time()`` to return a different number type
with better precision.

:pep:`410` proposed to return ``decimal.Decimal``, which already exists and
supports arbitrary precision, but it was rejected. Apart from
``decimal.Decimal``, no portable real number type with better precision is
currently available in Python.

Changing the built-in Python ``float`` type is out of the scope of this PEP.

Moreover, changing existing functions to return a new type introduces a risk
of breaking backward compatibility, even if the new type is designed
carefully.


Different types
---------------

Many new types were proposed to support larger or arbitrary precision:
fractions, structures or 2-tuples using integers, fixed-point numbers, etc.

See also :pep:`410` for a previous long discussion on other types.

Adding a new type requires more effort to support it than reusing the
existing ``int`` type. The standard library, third party code and
applications would have to be modified to support it.

The Python ``int`` type is well known, well supported, easy to manipulate,
and supports all arithmetic operations such as ``dt = t2 - t1``.

Moreover, taking/returning an integer number of nanoseconds is not a new
concept in Python, as witnessed by ``os.stat_result`` and
``os.utime(ns=(atime_ns, mtime_ns))``.

.. note::
   If the Python ``float`` type becomes larger (e.g. decimal128 or float128),
   the ``time.time()`` precision will increase as well.
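For illustration, here is how the existing nanosecond APIs mentioned above
are used (the file name is arbitrary)::

    import os
    import time

    path = "example.txt"
    with open(path, "w") as fp:
        fp.write("hello")

    # Read the modification time as an integer number of nanoseconds.
    print(os.stat(path).st_mtime_ns)

    # Set the access and modification times from integer nanoseconds.
    now_ns = time.time_ns()
    os.utime(path, ns=(now_ns, now_ns))

    # Whether the timestamp round-trips exactly depends on the timestamp
    # resolution of the filesystem.
    print(os.stat(path).st_mtime_ns == now_ns)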
Different API
-------------

The ``time.time(ns=False)`` API was proposed to avoid adding new functions.
It's an uncommon (and bad?) programming practice in Python to change the
result type depending on a parameter.

Different options were proposed to allow the user to choose the time
resolution. If each Python module uses a different resolution, it can become
difficult to handle different resolutions, instead of just seconds
(``time.time()`` returning ``float``) and nanoseconds (``time.time_ns()``
returning ``int``). Moreover, as written above, there is no need for a
resolution better than 1 nanosecond in practice in the Python standard
library.


A new module
------------

It was proposed to add a new ``time_ns`` module containing the following
functions:

* ``time_ns.clock_gettime(clock_id)``
* ``time_ns.clock_settime(clock_id, time: int)``
* ``time_ns.monotonic()``
* ``time_ns.perf_counter()``
* ``time_ns.process_time()``
* ``time_ns.time()``

The first question is whether the ``time_ns`` module should expose exactly
the same API (constants, functions, etc.) as the ``time`` module. It can be
painful to maintain two flavors of the ``time`` module. How are users
supposed to choose between these two modules?

If tomorrow other nanosecond variants are needed in the ``os`` module, will
we have to add a new ``os_ns`` module as well? There are functions related
to time in many modules: ``time``, ``os``, ``signal``, ``resource``,
``select``, etc.

Another idea is to add a ``time.ns`` submodule or a nested namespace to get
the ``time.ns.time()`` syntax, but it suffers from the same issues.


Annex: Clocks Resolution in Python
==================================

This annex contains the resolution of clocks as measured in Python, and not
the resolution announced by the operating system or the resolution of the
internal structure used by the operating system.

Script
------

Example of a script to measure the smallest non-zero difference between two
``time.time()`` reads and between two ``time.time_ns()`` reads::

    import math
    import time

    LOOPS = 10 ** 6

    print("time.time_ns(): %s" % time.time_ns())
    print("time.time(): %s" % time.time())

    min_dt = [abs(time.time_ns() - time.time_ns()) for _ in range(LOOPS)]
    min_dt = min(filter(bool, min_dt))
    print("min time_ns() delta: %s ns" % min_dt)

    min_dt = [abs(time.time() - time.time()) for _ in range(LOOPS)]
    min_dt = min(filter(bool, min_dt))
    print("min time() delta: %s ns" % math.ceil(min_dt * 1e9))

Linux
-----

Clock resolutions measured in Python on Fedora 26 (kernel 4.12):

====================  ==========
Function              Resolution
====================  ==========
clock()               1 us
monotonic()           81 ns
monotonic_ns()        84 ns
perf_counter()        82 ns
perf_counter_ns()     84 ns
process_time()        2 ns
process_time_ns()     1 ns
resource.getrusage()  1 us
time()                **239 ns**
time_ns()             **84 ns**
times().elapsed       10 ms
times().user          10 ms
====================  ==========

Notes on resolutions:

* ``clock()`` frequency is ``CLOCKS_PER_SEC`` which is 1,000,000 Hz (1 MHz):
  resolution of 1 us.

* ``times()`` frequency is ``os.sysconf("SC_CLK_TCK")`` (or the ``HZ``
  constant) which is equal to 100 Hz: resolution of 10 ms.

* ``resource.getrusage()``, ``os.wait3()`` and ``os.wait4()`` use the
  ``rusage`` structure. The type of its ``ru_utime`` and ``ru_stime`` fields
  is the ``timeval`` structure which has a resolution of 1 us.

Windows
-------

Clock resolutions measured in Python on Windows 8.1:

=================  ============
Function           Resolution
=================  ============
monotonic()        15 ms
monotonic_ns()     15 ms
perf_counter()     100 ns
perf_counter_ns()  100 ns
process_time()     15.6 ms
process_time_ns()  15.6 ms
time()             **894.1 us**
time_ns()          **318 us**
=================  ============

The frequency of ``perf_counter()`` and ``perf_counter_ns()`` comes from
``QueryPerformanceFrequency()``. The frequency is usually 10 MHz: resolution
of 100 ns. In old Windows versions, the frequency was sometimes 3,579,545 Hz
(3.6 MHz): resolution of 279 ns.

Analysis
--------

The resolution of ``time.time_ns()`` is much better than ``time.time()``:
**84 ns (2.8x better) vs 239 ns on Linux, and 318 us (2.8x better) vs 894 us
on Windows**.
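The 239 ns step measured for ``time.time()`` on Linux is close to the gap
between two consecutive ``float`` values around the current epoch timestamp
(``2**-22`` seconds, about 238 ns, in 2017), which suggests that on Linux the
limit comes from the ``float`` rounding rather than from the clock itself.
This gap can be estimated with a short sketch::

    import math
    import time

    # t == mantissa * 2 ** exponent with 0.5 <= mantissa < 1, so the gap
    # between consecutive floats around t is 2 ** (exponent - 53).
    t = time.time()
    mantissa, exponent = math.frexp(t)
    gap = math.ldexp(1.0, exponent - 53)
    print("float gap near time():", gap * 1e9, "ns")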
The ``time.time()`` resolution will only become larger (worse) as years pass,
since every day adds 86,400,000,000,000 nanoseconds to the system clock,
which increases the precision loss.

The difference between ``time.perf_counter()``, ``time.monotonic()``,
``time.process_time()`` and their respective nanosecond variants is not
visible in this quick script since the script runs for less than 1 minute,
and the uptime of the computer used to run the script was smaller than
1 week. A significant difference may be seen if the uptime reaches 104 days
or more.

``resource.getrusage()`` and ``times()`` have a resolution greater than or
equal to 1 microsecond, and so don't need a variant with nanosecond
resolution.

.. note::
   Internally, Python starts the ``monotonic()`` and ``perf_counter()``
   clocks at zero on some platforms, which indirectly reduces the precision
   loss.


Links
=====

* `bpo-31784: Implementation of the PEP 564
  <https://bugs.python.org/issue31784>`_


Copyright
=========

This document has been placed in the public domain.