https://nhigham.com/2023/09/05/what-is-a-flop/


What Is a Flop?

A flop is one of the elementary arithmetic operations +, -, *, /
carried out on floating-point numbers. For example, evaluating the
expression (b - ax)/c takes three flops. A square root, which occurs
infrequently in numerical computation, is also counted as one flop.

As an example, the computation of the inner product x^Ty of two n
-vectors x and y can be written

s = x(1)*y(1)
for i = 2:n
    s = s + x(i)*y(i)
end

which requires n multiplications and n-1 additions, or 2n-1 flops. A
matrix multiplication AB of two n\times n matrices involves computing
the inner product of every row of A with every column of B, that is,
n^2 inner products, costing n^2(2n-1) = 2n^3 - n^2 flops. As we are
usually interested in flop counts for large dimensions, we retain
only the highest order terms, so we regard AB as costing 2n^3 flops.
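
As a check on this count, here is a minimal MATLAB sketch (mine, not
from the original post) that forms AB by explicit inner products and
tallies the flops as it goes; the final tally is exactly
n^2(2n-1) = 2n^3 - n^2.

n = 4; A = randn(n); B = randn(n);  % small test matrices
C = zeros(n); flops = 0;            % result and running flop tally
for i = 1:n
    for j = 1:n
        s = A(i,1)*B(1,j);          % one multiplication
        flops = flops + 1;
        for k = 2:n
            s = s + A(i,k)*B(k,j);  % one multiplication and one addition
            flops = flops + 2;
        end
        C(i,j) = s;                 % in practice one would call the built-in A*B;
    end                             % the loops only make the count explicit
end
fprintf('flops = %d, 2n^3 - n^2 = %d\n', flops, 2*n^3 - n^2)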

The number of flops required by a numerical computation is one
measure of its cost. However, it ignores the cost of data movement,
which can be as large as, or even larger than, the cost of
floating-point arithmetic for large computations implemented on
modern machines with hierarchical memories. Nevertheless, flop counts
provide a useful way to compare the cost of competing algorithms when
most of the flops occur in similar types of operations, such as
matrix multiplications, and they can be used to choose between
algorithm variants (Lopez et al., 2022).

This table summarizes some flop counts, where \alpha is a scalar,
x,y are n-vectors, A,B are n\times n matrices, and A^{-1} is
computed via LU factorization.

+--------------+------------+
| Computation  | Cost       |
+--------------+------------+
| x + \alpha y | 2n flops   |
| x^Ty         | 2n flops   |
| Ax           | 2n^2 flops |
| AB           | 2n^3 flops |
| A^{-1}       | 2n^3 flops |
+--------------+------------+

As the table suggests, most standard problems involving n\times n
matrices can be solved with a cost of order n^3 flops or less, so the
interest is in the exponent (1, 2, or 3) of the dominant term and the
multiplicative constant. For m\times n matrices there can be several
potentially dominant terms m^kn^\ell and comparing competing
algorithms is more complicated.
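
For example (an illustration using standard flop counts from the
literature, which are an assumption here as the post does not state
them), a full rank least squares problem with an m\times n
coefficient matrix, m \ge n, costs about 2mn^2 - 2n^3/3 flops by
Householder QR factorization and about mn^2 + n^3/3 flops by the
normal equations method. A short sketch compares the two:

m = 1000;                         % assumed example row count
for n = [10 100 500 1000]         % vary the column count
    qr_cost = 2*m*n^2 - 2*n^3/3;  % Householder QR, leading terms
    ne_cost = m*n^2 + n^3/3;      % normal equations, leading terms
    fprintf('n = %4d: QR %9.3e flops, normal equations %9.3e flops\n', ...
            n, qr_cost, ne_cost)
end

For m \gg n the QR approach costs about twice as much as the normal
equations, while for m = n the leading order costs coincide at
4n^3/3 flops, so neither term dominates for all matrix shapes.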

Early Usage

Moler and Van Loan give a different definition of "flop" in their
classic paper "Nineteen Dubious Ways to Compute the Exponential of a
Matrix" (1978), and their definition was adopted in the flop command
in the original Fortran version of MATLAB, which counts the number of
flops executed in the most recent command or since MATLAB started. In
the 1981 MATLAB Users' Guide, Cleve Moler explains that

    One flop is one execution of a Fortran statement like S = S + X
    (I)*Y(I) or Y(I) = Y(I) + T*X(I). In other words, it consists of
    one floating point multiplication, together with one floating
    point addition and the associated indexing and storage reference
    operations.

This original definition attempted to account for indexing and
storage costs in the computer implementation of an algorithm as well
as the cost of the floating-point arithmetic. It was motivated by the
fact that in linear algebra computations most arithmetic appears in
statements of the form s = s + x*y, which appear in evaluating inner
products and in adding one multiple of a vector to another.

In the LINPACK Users' Guide (1979, App. B) floating-point operation
counts were used to normalize timing data, but the word "flop" itself
was not used.

The widespread adoption of the flop was ensured by its use in Golub
and Van Loan's book Matrix Computations (1981). In the 1989 second
edition of the book a flop was redefined as in our first paragraph
and this definition quickly became the standard usage. The Fortran
statements given by Moler now involve two flops.

The MATLAB flop function survived until around 2000, when MATLAB
started using LAPACK in place of LINPACK for its core linear algebra
computations and it was no longer feasible to keep track of the
number of flops executed.

It is interesting to note that an operation of the form S + X(I)*Y(I)
that was used in the original definition of flop is now supported in
the hardware of certain processors. The operation is called a fused
multiply-add and is done in the same time as one multiplication or
addition and with one rounding error.
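
MATLAB has no built-in fused multiply-add, but its effect can be
mimicked at single precision (a sketch of my own, not from the post)
by computing the product and sum in double precision and then
rounding once to single. For the values below the double precision
computation is exact, so single applies exactly one rounding, and the
result differs from the doubly rounded a*b + c.

a = single(1 + 2^-12); b = a; c = single(-1);
fma_val = single(double(a)*double(b) + double(c));  % one rounding: exact in double,
                                                    % rounded once to single
two_rnd = a*b + c;              % two roundings: one after *, one after +
fprintf('one rounding:  %.8e\n', fma_val)
fprintf('two roundings: %.8e\n', two_rnd)

Here a*b = 1 + 2^{-11} + 2^{-24} rounds to 1 + 2^{-11} in single, so
the two-rounding result is 2^{-11} exactly, while the simulated fused
multiply-add retains the 2^{-24} term.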

Flops as a Measure of Computer Performance

Another usage of "flop", in the plural, is in measuring computer
performance, with "flops" meaning floating-point operations per
second and having a prefix specifying a power of ten. Thus we have
the terms megaflops, gigaflops, through to exaflops (10^{18}
floating-point operations per second) for today's fastest computers.
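
As a minimal illustration (my sketch, not part of the post), the rate
a machine achieves on matrix multiplication can be estimated by
dividing the 2n^3 flop count from above by the measured run time:

n = 2000; A = randn(n); B = randn(n);
t = timeit(@() A*B);       % time one matrix multiplication
gflops = 2*n^3/(1e9*t);    % achieved gigaflops, using the 2n^3 flop count
fprintf('n = %d: %.1f gigaflops\n', n, gflops)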

The earliest citation given in the Oxford English Dictionary for this
use of the word "flops" is in a 1977 paper by Calahan, who writes

    The most common vector performance measure is the number of
    floating point operations per second (FLOPS), obtained by
    dividing the number of floating point operations--known from the
    algorithmic complexity--by the computation time.

The term "MFLOPS" for "million floating point operations per second"
is used in an earlier Cray-1 evaluation report (Keller, 1976).

If you are aware of any earlier references please put them in the
comments below.

Dictionary Definitions

The word "flop" appears in general dictionaries, but there is some
variation in whether it appears under "flop" or "flops", in
capitalization, and whether it means an operation or an execution
rate.

+-------------------------------------------------------------------+
|       Dictionary       | Term  |            Definition            |
|------------------------+-------+----------------------------------|
| Oxford English         |       | "A floating-point operation per  |
| Dictionary (online,    | FLOP  | second"                          |
| 2023)                  |       |                                  |
|------------------------+-------+----------------------------------|
| Concise Oxford English |       | "Floating-point operations per   |
| Dictionary (11th ed.,  | flop  | second"                          |
| 2004)                  |       |                                  |
|------------------------+-------+----------------------------------|
| The Chambers           |       |                                  |
| Dictionary (13th ed.,  | flop  | "A floating-point operation"     |
| 2014)                  |       |                                  |
|------------------------+-------+----------------------------------|
| Collins English        | flops | "Floating-point operations per   |
| Dictionary (13th ed.,  | or    | second"                          |
| 2018)                  | FLOPS |                                  |
|------------------------+-------+----------------------------------|
| The American Heritage  |       | "A measure of the speed of a     |
| Dictionary of the      | flops | computer in operations per       |
| English Language (5th  | or    | second, especially operations    |
| ed., 2018)             | flop  | involving floating-point         |
|                        |       | numbers"                         |
|------------------------+-------+----------------------------------|
|                        |       | "A unit of measure for           |
| Merriam-Webster        |       | calculating the speed of a       |
| (online, 2023)         | flops | computer equal to one            |
|                        |       | floating-point operation per     |
|                        |       | second"                          |
+-------------------------------------------------------------------+

Notes and References

I thank Jack Dongarra, Cleve Moler, and Charlie Van Loan for comments
on the history of flops.

  * D. A. Calahan, Algorithmic and Architectural Issues Related to
    Vector Processors, in Large Engineering Systems: Proceedings of
    the International Symposium Held at the University of Manitoba,
    Winnipeg, Manitoba, Canada, August 9-12, 1976, Alvin Wexler, ed.,
    Pergamon Press, Oxford, 1977.
  * T. Keller, CRAY-1 Evaluation. Final Report, Report LA-6456-MS,
    Los Alamos Scientific Laboratory, Los Alamos, New Mexico, USA,
    December 1976.
  * Francisco Lopez, Lars Karlsson, and Paolo Bientinesi, FLOPs as a
    Discriminant for Dense Linear Algebra Algorithms, in Proceedings
    of the 51st International Conference on Parallel Processing, ACM
    Press, 2022.
  * Cleve B. Moler and Charles F. Van Loan, Nineteen Dubious Ways to
    Compute the Exponential of a Matrix, SIAM Rev. 20(4), 801-836,
    1978.

Related Blog Posts

  * What Is Floating-Point Arithmetic?
  * What Is IEEE Standard Arithmetic?

This article is part of the "What Is" series, available from https://
nhigham.com/index-of-what-is-articles/ and in PDF form from the
GitHub repository https://github.com/higham/what-is.


Posted on September 5, 2023 by Nick Higham in what-is
