Computer Graphics

   Computer graphics (CG or just graphics) is a field of [1]computer science
   that focuses on visual information. The field doesn't have strict
   boundaries and blends and overlaps with other, possibly standalone topics
   such as physics simulations, [2]multimedia and [3]machine learning. It
   usually deals with creating or analyzing 2D and 3D images and as such CG
   is used in [4]data visualization, [5]game development, [6]virtual reality,
   [7]optical character recognition and even astrophysics or medicine.

   We can divide computer graphics in different ways, traditionally e.g.:

     * by direction:
          * [8]rendering: Creating images.
          * [9]computer vision: Extracting information from existing images.
     * by basic elements:
          * [10]raster: Deals with images composed of a uniform grid of
            points called [11]pixels (in 2D) or [12]voxels (in 3D).
          * [13]vector: Deals with images composed of geometrical primitives
            such as curves or triangles (both representations are sketched
            in code right after this list).
     * by dimension:
          * [14]2D: Deals with images of a 2D plane.
          * [15]3D: Deals with images that capture three dimensional space.
     * by speed:
          * [16]real time: Working with images in real time, e.g. being
            able to produce or analyze 60 frames per second.
          * offline: Processes or creates images over longer time-spans, even
            hours or days, e.g. in 3D movie rendering.
     * ...
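
   To make the raster vs vector division a bit more tangible, below is a
   tiny sketch in C of what the two kinds of image data may look like in
   memory (the type names and sizes are made up just for illustration):

   #include <stdint.h>

   /* raster image: a uniform grid of pixel values, here 8 bit grayscale */
   typedef struct
   {
     int width, height;
     uint8_t pixels[64 * 64]; /* one value per pixel, width * height total */
   } RasterImage;

   /* vector image element: a shape described by coordinates, resolution
      independent, i.e. drawable at any scale without losing quality */
   typedef struct
   {
     float x0, y0, x1, y1; /* line segment from (x0,y0) to (x1,y1) */
   } LineSegment;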

   Since the 90s computers have been using dedicated hardware to accelerate
   graphics: so called [17]graphics processing units (GPUs). These have
   allowed rendering of high quality images at high [18]FPS, and thanks to
   the entertainment and media industry (especially gaming), GPUs have been
   pushed towards greater performance each year. Nowadays they are among the
   most consumerist pieces of [19]hardware, also due to general purpose
   computations being moved to GPUs (GPGPU), lately especially mining of
   [20]cryptocurrencies and training of [21]AI. Most programs dealing with
   graphics nowadays lazily expect and require a GPU, which creates a bad
   [22]dependency and [23]bloat. At [24]LRS we prefer the [25]suckless
   [26]software rendering, i.e. rendering on the [27]CPU without a GPU, or
   we at least offer it as an option in case a GPU isn't available. This
   often takes us on the adventure of rediscovering old, forgotten
   algorithms from the times before GPUs.
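
   To give a taste of software rendering, the following is a minimal sketch
   in C (not taken from any particular library) that draws a color gradient
   into a framebuffer purely on the CPU and writes the result to stdout as
   a PPM image -- no GPU, no external library:

   #include <stdio.h>

   #define W 256 /* image width */
   #define H 256 /* image height */

   unsigned char framebuffer[W * H * 3]; /* RGB, one byte per channel */

   void setPixel(int x, int y,
     unsigned char r, unsigned char g, unsigned char b)
   {
     int index = (y * W + x) * 3;

     framebuffer[index] = r;
     framebuffer[index + 1] = g;
     framebuffer[index + 2] = b;
   }

   int main(void)
   {
     for (int y = 0; y < H; ++y)
       for (int x = 0; x < W; ++x)
         setPixel(x,y,x,y,128); /* color depends on pixel position */

     printf("P6 %d %d 255\n",W,H); /* PPM header */
     fwrite(framebuffer,1,sizeof(framebuffer),stdout);

     return 0;
   }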

3D Graphics

   This is a general overview of 3D graphics; for a more technical overview
   of 3D rendering see [28]its own article.

   3D graphics is a big part of CG but is a lot more complicated than 2D.
   It tries to achieve realism through the use of [29]perspective, i.e.
   looking at least a bit like what we see in the real world. 3D graphics
   can very often be seen as simulating the behavior of [30]light; there
   exists a so called [31]rendering equation that describes how light
   ideally behaves, and 3D computer graphics tries to approximate solutions
   of this equation, i.e. the idea is to use [32]math and [33]physics to
   describe the real-life behavior of light and then simulate this model to
   literally create "virtual photos". The theory of realistic rendering is
   centered around the rendering equation and achieving [34]global
   illumination (accurately computing the interaction of light not just in
   small parts of space but in the scene as a whole) -- studying this
   requires basic knowledge of [35]radiometry and [36]photometry (fields
   that define various measures and units related to light such as
   [37]radiance, radiant intensity etc.).
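
   For reference, the rendering equation itself is most commonly written
   like this (in LaTeX notation; L_o is outgoing radiance from point x in
   direction omega_o, L_e is radiance emitted by the point itself, f_r is
   the BRDF describing the surface material, L_i is incoming radiance from
   direction omega_i and n is the surface normal):

   L_o(x, \omega_o) = L_e(x, \omega_o) +
     \int_\Omega f_r(x, \omega_i, \omega_o) L_i(x, \omega_i)
     (\omega_i \cdot n) \, d\omega_i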

   In the 2010s mainstream 3D graphics started to employ so called
   [38]physically based rendering (PBR), which uses yet more physically
   correct models of [39]materials (e.g. physically measured [40]BRDFs of
   various materials) to achieve higher photorealism. This is in contrast
   to the simpler (both mathematically and computationally), more
   [41]empirical models (such as a single texture plus [42]Phong lighting)
   used in earlier 3D graphics, like the one sketched below.
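
   As an example of such a simple empirical model, here is a sketch of
   Phong lighting in C; the vector type, function names and the lighting
   constants are all made up for illustration (they'd normally be material
   and light parameters):

   #include <math.h>

   typedef struct { float x, y, z; } Vec3;

   float dot(Vec3 a, Vec3 b)
   {
     return a.x * b.x + a.y * b.y + a.z * b.z;
   }

   /* reflection of direction l about normal n: r = 2 (n . l) n - l */
   Vec3 reflect(Vec3 n, Vec3 l)
   {
     float d = 2 * dot(n,l);
     Vec3 r = { d * n.x - l.x, d * n.y - l.y, d * n.z - l.z };
     return r;
   }

   /* Phong lighting: ambient + diffuse + specular; all input vectors are
      assumed to be normalized, returned intensity is in [0,1]. */
   float phongLight(Vec3 normal, Vec3 toLight, Vec3 toViewer)
   {
     float ambient = 0.1f;
     float diffuse = 0.6f * fmaxf(0,dot(normal,toLight));
     float specular = 0.3f * /* 32 is the shininess exponent */
       powf(fmaxf(0,dot(reflect(normal,toLight),toViewer)),32);

     return ambient + diffuse + specular;
   }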

   Because 3D is not very easy (for example [43]rotations are pretty
   complicated), there exist many [44]3D engines and libraries that you'll
   probably want to use. These engines/libraries work on different levels
   of abstraction: the lowest ones, such as [45]OpenGL and [46]Vulkan,
   offer a portable API for communicating with the GPU that lets you
   quickly draw triangles and write small programs that run in parallel on
   the GPU -- so called [47]shaders. Higher level ones, such as
   [48]OpenSceneGraph, work with [49]abstractions such as a virtual camera
   and a virtual scene into which we place specific 3D objects such as
   models and lights (the scene is often represented as a hierarchical
   graph of objects that can be "attached" to other objects, a so called
   [50]scene graph).
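
   A scene graph can be as simple as a tree of nodes, each carrying a
   transform relative to its parent; a minimal sketch in C (the names and
   the fixed child count are just for illustration):

   #define MAX_CHILDREN 8

   typedef struct Node
   {
     float transform[16];                 /* 4x4 matrix, relative to parent */
     struct Node *children[MAX_CHILDREN]; /* objects attached to this one */
     int childCount;
   } Node;

   /* The world transform of a node is the product of all transforms on
      the path from the root to the node, so moving a parent automatically
      moves everything attached to it. */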

   There is a tiny [51]suckless/[52]LRS library for real-time 3D:
   [53]small3dlib. It uses software rendering (no GPU) and can be used for
   simple 3D programs that can run even on low-spec embedded devices.
   [54]TinyGL is a similar software-rendering library that implements a
   subset of [55]OpenGL.

   Real-time 3D typically uses object-order rendering, i.e. iterating over
   objects in the scene and drawing them onto the screen (we draw object by
   object). This is a fast approach but has disadvantages such as (usually)
   needing a memory inefficient [56]z-buffer so as not to overwrite closer
   objects with more distant ones. It is also pretty difficult to implement
   effects such as shadows or reflections in object-order rendering. The 3D
   models used in real-time 3D are practically always made of triangles (or
   other polygons) because the established GPU pipelines work on the
   principle of drawing polygons.
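
   The z-buffer logic itself is simple: along with the color of every drawn
   pixel we also keep its depth and only overwrite the pixel if the new one
   is closer. A sketch in C (names and resolution made up for illustration):

   #define W 320
   #define H 240

   unsigned char colorBuffer[W * H];
   float zBuffer[W * H]; /* the memory cost: one depth value per pixel */

   /* NOTE: before each frame the z-buffer has to be cleared to "very far
      away", e.g.: for (int i = 0; i < W * H; ++i) zBuffer[i] = 1e30f; */

   void drawPixel(int x, int y, unsigned char color, float depth)
   {
     int index = y * W + x;

     if (depth < zBuffer[index]) /* closer than what's already drawn? */
     {
       colorBuffer[index] = color;
       zBuffer[index] = depth;
     }
   }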

   Offline rendering (non-real-time, e.g. 3D movies) on the other hand
   mostly uses image-order algorithms which go pixel by pixel and for each
   one determine what color the pixel should have. This is basically done
   by casting a ray from the camera's position through the pixel's position
   and calculating which objects in the scene get hit by the ray; this then
   determines the color of the pixel. This more accurately models how rays
   of light behave in real life (even though in real life the rays go the
   opposite way, from lights to the camera, but that is extremely
   inefficient to simulate). The advantages of this approach are a much
   higher realism, the simplicity of implementing many effects like
   shadows, reflections and refractions, and also the possibility of having
   other than polygonal 3D models (in fact smooth, mathematically described
   shapes are normally much easier to check ray intersections with, as
   sketched below). Algorithms in this category include [57]ray tracing and
   [58]path tracing. In recent years we've seen these methods brought, in a
   limited way, to real-time graphics on high end GPUs.
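
   For example testing a ray against a sphere, a shape that polygons can
   only ever approximate, reduces to solving a quadratic equation; a
   minimal sketch in C (names made up for illustration):

   #include <math.h>

   typedef struct { float x, y, z; } Vec3;

   float dot(Vec3 a, Vec3 b)
   {
     return a.x * b.x + a.y * b.y + a.z * b.z;
   }

   /* Does a ray (origin o, normalized direction d) hit a sphere (center
      c, radius r)? Substituting the ray into the sphere's equation gives
      a quadratic in the ray parameter t; a real solution means a hit.
      (Simplified: hits behind the ray origin, t < 0, count too.) */
   int raySphereHit(Vec3 o, Vec3 d, Vec3 c, float r)
   {
     Vec3 oc = { o.x - c.x, o.y - c.y, o.z - c.z };

     float b = 2 * dot(oc,d);
     float cTerm = dot(oc,oc) - r * r;

     return b * b - 4 * cTerm >= 0; /* a == 1 since d is normalized */
   }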

See Also

     * [59]computational photography

Links:
1. compsci.md
2. multimedia.md
3. machine_learning.md
4. data.md
5. game.md
6. vr.md
7. ocr.md
8. rendering.md
9. computer_vision.md
10. raster_graphics.md
11. pixel.md
12. voxel.md
13. vector_graphics.md
14. 2d.md
15. 3d.md
16. real_time.md
17. gpu.md
18. fps.md
19. hardware.md
20. crypto.md
21. ai.md
22. dependency.md
23. bloat.md
24. lrs.md
25. suckless.md
26. sw_rendering.md
27. cpu.md
28. 3d_rendering.md
29. perspective.md
30. light.md
31. rendering_equation.md
32. math.md
33. physics.md
34. global_illumination.md
35. radiometry.md
36. photometry.md
37. radiance.md
38. pbr.md
39. material.md
40. brdf.md
41. empiricism.md
42. phong_lighting.md
43. rotation.md
44. 3d_engine.md
45. opengl.md
46. vulkan.md
47. shader.md
48. osg.md
49. abstraction.md
50. scene_graph.md
51. suckless.md
52. lrs.md
53. small3dlib.md
54. tinygl.md
55. opengl.md
56. z_buffer.md
57. ray_tracing.md
58. path_tracing.md
59. computational_photo.md