Ray Tracing Magic
Introduction: The “Information Field” beneath Our Reality
Beneath perceived “Reality” there is an “Information Field” (the Matrix) which describes “everything” by numbers. Furthermore, any number can be expressed as binary code, a combination of “0” and “1” (or “on/off” states).
Deoxyribonucleic acid (DNA) is a nucleic acid present in the cells of all living organisms. It is often referred to as the “building blocks of life,” since DNA encodes the genetic material which determines what an organism will develop into. In addition to maintaining the genetic blueprints for its parent organism, DNA also performs a number of other functions which are critical to life.
DNA is composed of chains of nucleotides built on a sugar and phosphate backbone and wrapped around each other in the form of a double helix. The backbone supports four bases: guanine, cytosine, adenine, and thymine. Guanine and cytosine are complementary, always appearing opposite each other on the helix, as are adenine and thymine. This is critical in the reproduction of DNA, as it allows a strand to divide and copy itself, since it only needs half of the material in the helix to duplicate successfully.
What is Real?
It is amazing that everything in nature can be expressed by numbers (and, on the most fundamental level, all numbers can be expressed as combinations of 0 and 1).
Computer models are just a simulated reflection of the “real” world. In the virtual world of computer models, numbers are used to generate images that look very “real.”
Ray tracing can achieve a very high degree of visual realism, as you can see in the images created by Gilles Tran with POV-Ray 3.6 using radiosity.
Source: http://en.wikipedia.org/wiki/Ray_tracing_%28graphics%29
Creating such photo-realistic images requires hardware (a computer), software, and data describing a scene in numbers (a model of the environment, with objects in it and light sources). Running the commands given by the ray tracing software (a program reflecting the physical rules governing the behavior of light) generates images that our brain often cannot distinguish from real ones.
A computer program is a sequence of instructions that are executed by a CPU. Machine code, or machine language, is a system of instructions and data executed directly by a computer’s central processing unit. Machine code may be regarded as a primitive (and cumbersome) programming language or as the lowest-level representation of a compiled and/or assembled computer program. Programs in interpreted languages, however, are not represented by machine code, although their interpreter (which may be seen as a processor executing the higher-level program) often is.
In computing and telecommunication, binary codes are used for a variety of methods of encoding data, such as character strings, into bit strings. A bit string, interpreted as a binary number, can be translated into a decimal number. For example, the lowercase “a”, represented by the bit string 01100001, can also be represented as the decimal number 97 (reading the bits as place values: 64 + 32 + 1 = 97).
Binary code, then, is a way of representing text or computer processor instructions using the binary number system’s two digits, 0 and 1. This is accomplished by assigning a bit string to each particular symbol or instruction.
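Curiously, the scene language used later in this article can show this mapping itself. The two lines below are a hypothetical fragment (not part of any scene here) that could be dropped into a .pov file; asc, chr, str and concat are standard POV-Ray functions, and #debug prints to the message stream.

// Hypothetical fragment: ask POV-Ray for the number behind the letter a.
#debug concat("the letter a is stored as the number ", str(asc("a"), 0, 0), "\n")   // prints 97
#debug concat("and the number 97 decodes back to the letter ", chr(97), "\n")       // prints a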
Binary code was first introduced by the German mathematician and philosopher Gottfried Wilhelm Leibniz during the 17th century. Leibniz was trying to find a system that would convert logic’s verbal statements into purely mathematical ones. After his ideas were ignored, he came across a classic Chinese text called the ‘I Ching’ or ‘Book of Changes’, which used a type of binary code. The book confirmed his theory that life could be simplified, or reduced down, to a series of straightforward propositions. He created a system consisting of rows of zeros and ones. During this time period, Leibniz had not yet found a use for the system.
Besides computers, many other things use binary, including:
* CDs, which have a series of hills and valleys on their surface that either reflect the light of the thin laser shone on them, representing a one, or do not, representing a zero.
* Radios, which search for a series of radio waves and translate a radio wave into a one and no radio wave into a zero.
It has been said that machine code is so unreadable that the Copyright Office cannot even identify whether a particular encoded program is an original work of authorship. “Looking at a program written in machine language is vaguely comparable to looking at a DNA molecule atom by atom.” [Hofstadter]
It is amazing that everything in nature can be expressed by numbers, and on the most fundamental level all numbers can be expressed as combinations of 0 and 1.
The Stage
Coordinate system
In geometry, a coordinate system is a system which uses one or more numbers, or coordinates, to uniquely determine the position of a point or other geometric element.
In physics, a coordinate system used to describe points in space is called a frame of reference.
Cartesian coordinate system
In three dimensions, three perpendicular planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space.
For example, to describe a sphere in space we would need three numbers describing its location and a fourth number describing its radius: (x, y, z, r).
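In POV-Ray’s scene language those four numbers map directly onto the sphere object. The values below are arbitrary examples chosen for illustration, not taken from any scene in this article:

// A sphere centered at x = 2, y = 1, z = 0 with radius 0.5 (example values).
sphere {
  <2, 1, 0>, 0.5
  pigment { color rgb <0.9, 0.9, 1.0> }   // a pale blue surface so the shape is visible
}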
Ray Tracing
In computer graphics, ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects. The technique is capable of producing a very high degree of visual realism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for applications where the image can be rendered slowly ahead of time, such as in still images and film and television special effects, and more poorly suited for real-time applications like video games where speed is critical. Ray tracing is capable of simulating a wide variety of optical effects, such as reflection and refraction, scattering, and chromatic aberration.
The ray tracing algorithm builds an image by extending rays into a scene
Algorithm Overview
Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments, with more photo-realism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it.
Scenes in raytracing are described mathematically by a programmer or by a visual artist (typically using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography.
Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene.
It may at first seem counterintuitive or “backwards” to send rays away from the camera, rather than into it (as actual light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer’s eye, a “forward” simulation could potentially waste a tremendous amount of computation on light paths that are never recorded. A computer simulation that starts by casting rays from the light source is called Photon mapping, and it takes much longer than a comparable ray trace.
Therefore, the shortcut taken in raytracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and the pixel’s value is updated. The light intensity of this pixel is computed using a number of algorithms, which may include the classic rendering equation and may also incorporate techniques such as radiosity.
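POV-Ray exposes exactly this cutoff as scene settings. The block below is only an illustration with hedged example values (POV-Ray’s defaults are max_trace_level 5 and adc_bailout 1/255); it is not taken from any scene in this article:

// Example only: limit how deep the recursion of reflected/refracted rays may go,
// and stop following rays whose remaining contribution to the pixel is negligible.
global_settings {
  max_trace_level 10     // a ray may be reflected/refracted at most 10 times
  adc_bailout 1/255      // adaptive depth control: drop rays contributing less than this
}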
What Happens in Nature
In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this “ray” as a stream of photons traveling along the same path. In a perfect vacuum this ray will be a straight line (ignoring relativistic effects). In reality, any combination of four things might happen with this light ray: absorption, reflection, refraction and fluorescence. A surface may reflect all or part of the light ray, in one or more directions. It might also absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Less commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength in a random direction, though this is rare enough that it can be discounted from most rendering applications. Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray and refract 50%, since the two would add up to 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image.
Ray Tracing Algorithm
The next important research breakthrough came from Turner Whitted in 1979. Previous algorithms cast rays from the eye into the scene, but the rays were traced no further. Whitted continued the process. When a ray hits a surface, it can generate up to three new types of rays: reflection, refraction, and shadow. A reflected ray continues on in the mirror-reflection direction from a shiny surface. It is then intersected with objects in the scene; the closest object it intersects is what will be seen in the reflection. Refraction rays traveling through transparent material work similarly, with the addition that a refractive ray could be entering or exiting a material. To further avoid tracing all rays in a scene, a shadow ray is used to test if a surface is visible to a light. A ray hits a surface at some point. If the surface at this point faces a light, a ray (to the computer, a line segment) is traced between this intersection point and the light. If any opaque object is found in between the surface and the light, the surface is in shadow and so the light does not contribute to its shade. This new layer of ray calculation added more realism to ray traced images.
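In POV-Ray’s scene language, a surface invites reflected and refracted rays through its material settings, while shadow rays are traced automatically toward every light_source (unless it is declared shadowless). The object below is an illustrative sketch with made-up values, not part of any scene in this article:

// Illustrative only: a glassy sphere that causes all three of Whitted's ray types.
sphere {
  <0, 1, 0>, 1
  pigment { color rgbf <1, 1, 1, 0.9> }   // mostly transparent, so refracted rays are spawned
  finish { reflection 0.2 specular 0.6 }  // reflection spawns mirror-reflected rays
  interior { ior 1.5 }                    // index of refraction used for the refracted rays
}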
Example of the Scene Description Language
The following is an example of the scene description language used by POV-Ray to describe a scene to render. It demonstrates use of the camera, lights, a simple box shape and the transforming effects of scaling, rotation and translation.
// "Red" is a named color defined in the standard include file colors.inc,
// so the scene needs this include in order to parse.
#include "colors.inc"

global_settings {
  assumed_gamma 1.0
}

background {
  color rgb <0.25, 0.25, 0.25>
}

camera {
  location <0.0, 0.5, -4.0>
  direction 1.5*z
  right x*image_width/image_height
  look_at <0.0, 0.0, 0.0>
}

// Main light, placed up and to the left of the camera.
light_source {
  <0, 0, 0>
  color rgb <1, 1, 1>
  translate <-5, 5, -5>
}

// Dim fill light from below and to the right.
light_source {
  <0, 0, 0>
  color rgb <0.25, 0.25, 0.25>
  translate <6, -6, -6>
}

// A unit cube centered on the origin, given a red, slightly bumpy surface
// and rotated so that several faces are visible.
box {
  <-0.5, -0.5, -0.5>
  <0.5, 0.5, 0.5>
  texture {
    pigment {
      color Red
    }
    finish {
      specular 0.6
    }
    normal {
      agate 0.25
      scale 1/2
    }
  }
  rotate <45, 46, 47>
}
POV-Ray image output based on the above script
Virtual Refracting Telescope
The following section was created in order to test POV-Ray’s accuracy in refracting light. We have seen sample models with a magnifying glass and a glass ball; however, would a ray tracing program allow us to create a working telescope?
Telescope Basics
A simple refractor, or refracting telescope, is a hollow tube which uses a primary lens at its opening to refract, or change the path of, incoming light waves. This primary lens is called the “objective lens” and is used to collect more light than the human eye can. When light passes through the objective lens, it is bent – or refracted. Light waves that enter on a parallel path converge, or meet together, at a focal point. Light waves which enter at an angle converge on the focal plane. It is the combination of both which forms an image that is further refracted and magnified by a secondary lens called the eyepiece.
Convex lenses are thicker at the middle. Rays of light that pass through the lens are brought closer together (they converge). A convex lens is a converging lens.
When parallel rays of light pass through a convex lens the refracted rays converge at one point called the principal focus. The distance between the principal focus and the centre of the lens is called the focal length.
Concave lenses are thinner at the middle. Rays of light that pass through the lens are spread out (they diverge). A concave lens is a diverging lens.
When parallel rays of light pass through a concave lens the refracted rays diverge so that they appear to come from one point called the principal focus.
The distance between the principal focus and the centre of the lens is called the focal length. The image formed is virtual and diminished (smaller).
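For a Galilean telescope of the kind described next, textbook optics also predicts the design numbers: the angular magnification is the objective’s focal length divided by the (magnitude of the) eyepiece’s focal length, and the two lenses sit roughly that difference apart. The declarations below use hypothetical focal lengths in scene units, not the values used in our actual scene:

// Hypothetical focal lengths (scene units), for illustration only.
#declare F_Objective = 8.0;                            // convex objective lens
#declare F_Eyepiece  = 2.0;                            // magnitude of the concave eyepiece's focal length
#declare Magnification   = F_Objective / F_Eyepiece;   // 8 / 2 = 4x angular magnification
#declare Lens_Separation = F_Objective - F_Eyepiece;   // Galilean layout: lenses about 6 units apart
#debug concat("magnification = ", str(Magnification, 0, 1), "x\n")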
Virtual Telescope
We created convex and concave lenses and made a telescope similar to the first telescope of Galileo. The eyepiece lens was built as the difference of a cylinder and a sphere, and the magnifying (objective) lens was created as the intersection of a cylinder and a sphere.
By trial and error we found the best IOR (index of refraction) value and the correct relative positions of the camera, the two lenses and an object.
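The sketch below shows how such lenses can be modelled with POV-Ray’s CSG operations; the dimensions, IOR and placement are placeholders for illustration, not the values we arrived at by trial and error:

// Placeholder geometry and IOR - the real scene required careful tuning.
#declare Glass_Look = texture {
  pigment { color rgbf <1, 1, 1, 0.97> }   // almost fully transparent
  finish  { specular 0.4 }
}

// Objective (converging) lens: intersection of a thin cylinder and a large sphere.
#declare Objective_Lens = intersection {
  cylinder { <0, 0, -0.2>, <0, 0, 0.2>, 1.0 }
  sphere   { <0, 0, -4.8>, 5.0 }
  texture  { Glass_Look }
  interior { ior 1.5 }
}

// Eyepiece (diverging) lens: a small cylinder with a sphere carved out of one face.
#declare Eyepiece_Lens = difference {
  cylinder { <0, 0, -0.15>, <0, 0, 0.15>, 0.4 }
  sphere   { <0, 0,  2.1>, 2.0 }
  texture  { Glass_Look }
  interior { ior 1.5 }
}

object { Objective_Lens translate <0, 0, 6> }   // objective farther from the camera
object { Eyepiece_Lens }                        // eyepiece near the camera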
To test our virtual telescope further, we added water and clouds to the stage.
Next, we imported a wire-frame model of the Golden Buddha statue [the Buddha.inc file was downloaded from http://www.multimania.com/froux/modeles/page6.htm].
Both Buddhas were placed at the same distance from the camera (compare the heads) – this way we could test the accuracy and the power of this virtual telescope.
The result shows very realistic magnification generated by the telescope.
Another Example
Ray tracing can achieve a very high degree of visual realism. Here is an example created by a true master using the same basic principles of raytracing.
Image created by Gilles Tran with POV-Ray 3.6
Description
Full scene containing:
- 3 types of glasses
- A pitcher
- An ashtray
- Dice
The pitcher and the ashtray are also available as Cinema 4D and OBJ files.
This image was created as a demo scene for several objects modelled with Rhino (glasses, pitcher, ashtray) for Closing time, and a die modelled with Cinema 4D was added later. It is also available on Wikimedia Commons. It was Picture of the Day on Wikipedia on August 2, 2006 and illustrates several articles on digital images.
Here are some general comments about this image:
- No, it’s not made with Photoshop. It’s amusing how many people now just assume that everything is photoshopped. If you don’t believe me, just download POV-Ray and the image source code (the latter released in the public domain) and run it yourself!
- Yes, this is not state-of-the-art photorealistic computer generated imagery. There’s a lot that could be done to improve it from a photorealistic standpoint, but that was not the purpose. It was just a quick-and-dirty demonstration scene for POV-Ray featuring some old, simple models that I wanted to give away. It just happened that some Wikipedians thought it nice enough.
- The technology is raytracing + radiosity. Focal blur is camera-based (using the basic POV-Ray implementation, no choice for bokeh etc.) and is the major culprit of the long render time. No post-process of any kind.
- No, no matter the tool (and I’ve been using other and better renderers for a while now), creating images like this is never done by just pressing a button and letting the machine work. This was a quick job but it still took a lot of tinkering to get right.
Sources and Related Links
- http://en.wikipedia.org/wiki/Ray_tracing_%28graphics%29
- http://hof.povray.org/
- http://www.oyonale.com/
- https://blog.world-mysteries.com/science/the-minds-eye/
- https://blog.world-mysteries.com/science/ancient-wisdom-about-the-universe/
- https://blog.world-mysteries.com/science/the-self-aware-universe/