Ray Tracing Magic

Introduction: The “Information Field” beneath Our Reality

Beneath perceived “Reality” lies an “Information Field” (the Matrix) which describes everything with numbers. Furthermore, any number can be expressed in binary code as a combination of “0” and “1” (or “on/off” states).

Our universe is simply an unfolding expression of that information field, just as a fertilized egg, at first invisible to the naked eye, eventually develops into the visible body of a life form. The egg essentially contains the genome of an organism (inscribed in DNA). In Greek, the word genome means “I become, I am born, I come into being”. In modern molecular biology and genetics, the genome is the entirety of an organism’s hereditary information. It is encoded either in DNA or, for many types of virus, in RNA.


Deoxyribonucleic acid (DNA) is a nucleic acid present in the cells of all living organisms. It is often referred to as the “building blocks of life,” since DNA encodes the genetic material which determines what an organism will develop into. In addition to maintaining the genetic blueprints for its parent organism, DNA also performs a number of other functions which are critical to life.

DNA is composed of chains of nucleotides built on a sugar and phosphate backbone and wrapped around each other in the form of a double helix. The backbone supports four bases, guanine, cytosine, adenine, and thymine. Guanine and cytosine are complementary, always appearing opposite each other on the helix, as are adenine and thymine. This is critical in the reproduction of DNA, as it allows a strand to divide and copy itself, since it only needs half of the material in the helix to duplicate successfully.

What is Real?

It is amazing that everything in nature can be expressed by numbers (and, at the most fundamental level, all numbers can be expressed as combinations of 0 and 1).

Computer models are just a simulated reflection of the “real” world. In the virtual-reality world of computer models, numbers are used to generate images that look very “real”.

Ray tracing can achieve a very high degree of visual realism, as you can see in this image created by Gilles Tran with POV-Ray 3.6 using radiosity.
Source: http://en.wikipedia.org/wiki/Ray_tracing_%28graphics%29

Creating such photo-realistic images requires hardware (a computer), software, and data describing a scene in numbers (a model of the environment with objects and light sources in it). Running the commands issued by ray-tracing software (a program reflecting the physical rules governing the behavior of light) generates images that our brain often cannot distinguish from real ones.

A computer program is a sequence of instructions that are executed by a CPU. Machine code or machine language is a system of instructions and data executed directly by a computer’s central processing unit. Machine code may be regarded as a primitive (and cumbersome) programming language or as the lowest-level representation of a compiled and/or assembled computer program. Programs in interpreted languages, however, are not represented by machine code, although their interpreter (which may be seen as a processor executing the higher-level program) often is.

In computing and telecommunication, binary codes are used for any of a variety of methods of encoding data, such as character strings, into bit strings. A bit string, interpreted as a binary number, can be translated into a decimal number. For example, the lowercase “a”, represented by the bit string 01100001, can also be represented as the decimal number 97.

Binary code is a way of representing text or computer processor instructions using the binary number system’s two digits, 0 and 1. This is accomplished by assigning a bit string to each particular symbol or instruction.
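This symbol-to-bit-string mapping is easy to demonstrate. The short Python sketch below (an illustration, not from the original article) converts characters to their 8-bit ASCII codes and back:

```python
# Map characters to 8-bit strings and back, as described above.
# The codes are standard ASCII, e.g. "a" -> 97 -> 01100001.

def to_bits(text):
    """Return the 8-bit binary string for each character."""
    return [format(ord(ch), "08b") for ch in text]

def from_bits(bit_strings):
    """Reverse the mapping: bit strings back to characters."""
    return "".join(chr(int(b, 2)) for b in bit_strings)

print(ord("a"))                 # 97
print(to_bits("a"))             # ['01100001']
print(from_bits(["01100001"]))  # a
```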

Binary code was first introduced by the German mathematician and philosopher Gottfried Wilhelm Leibniz during the 17th century. Leibniz was trying to find a system that would convert logic’s verbal statements into purely mathematical ones. After his ideas were ignored, he came across a classic Chinese text called the ‘I Ching’ or ‘Book of Changes’, which used a type of binary code. The book confirmed his theory that life could be simplified, or reduced, to a series of straightforward propositions. He created a system consisting of rows of zeros and ones. During this period, Leibniz had not yet found a use for the system.
Besides computers, many other things use binary, including:
* CDs, which have a series of hills and valleys on the surface that either reflect the light of the thin laser shone on them, representing a one, or do not, representing a zero.
* Radios, which translate the presence of a radio wave into a one and its absence into a zero.

It has been said that machine code is so unreadable that the Copyright Office cannot even identify whether a particular encoded program is an original work of authorship. “Looking at a program written in machine language is vaguely comparable to looking at a DNA molecule atom by atom.” [Hofstadter]

The Stage

Coordinate system

In geometry, a coordinate system is a system that uses one or more numbers, or coordinates, to uniquely determine the position of a point or other geometric element.
In physics, a coordinate system used to describe points in space is called a frame of reference.

Cartesian coordinate system

In three dimensions, three perpendicular planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space.

For example, to describe a sphere in space we need three numbers for its location and a fourth number for its radius: (x, y, z, r):
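As a sketch (not from the article), those four numbers are enough to answer geometric questions about the sphere, such as whether a given point lies inside it:

```python
import math

# A sphere described by four numbers (x, y, z, r): three for its
# location and one for its radius.
sphere = (0.0, 0.0, 0.0, 1.0)  # unit sphere centred at the origin

def contains(sphere, point):
    """True if the point lies inside or on the sphere."""
    x, y, z, r = sphere
    return math.dist((x, y, z), point) <= r

print(contains(sphere, (0.5, 0.5, 0.5)))  # True
print(contains(sphere, (1.0, 1.0, 1.0)))  # False
```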


Ray Tracing

In computer graphics, ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects. The technique is capable of producing a very high degree of visual realism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for applications where the image can be rendered slowly ahead of time, such as in still images and film and television special effects, and more poorly suited for real-time applications like video games where speed is critical. Ray tracing is capable of simulating a wide variety of optical effects, such as reflection and refraction, scattering, and chromatic aberration.

The ray-tracing algorithm builds an image by extending rays into a scene.


Algorithm Overview

Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments, with more photo-realism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it.
Scenes in raytracing are described mathematically by a programmer or by a visual artist (typically using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography.

Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene.
It may at first seem counterintuitive or “backwards” to send rays away from the camera, rather than into it (as actual light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer’s eye, a “forward” simulation could potentially waste a tremendous amount of computation on light paths that are never recorded. A computer simulation that starts by casting rays from the light source is called photon mapping, and it takes much longer than a comparable ray trace.
Therefore, the shortcut taken in raytracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and the pixel’s value is updated. The light intensity of this pixel is computed using a number of algorithms, which may include the classic rendering algorithm and may also incorporate techniques such as radiosity.
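The core of the algorithm described above, one ray per pixel with the nearest intersection winning, can be sketched in a few dozen lines. The following Python illustration (my own minimal example, not POV-Ray code) traces rays against a single sphere and prints an ASCII “image”:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Distance along a unit-length ray to the nearest sphere hit, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # a = 1 because direction is unit-length
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

# One ray per pixel, cast from the eye at the origin through a virtual
# screen at z = 1; a hit marks the pixel.
width, height = 24, 12
center, radius = (0.0, 0.0, 3.0), 1.0
for y in range(height):
    row = ""
    for x in range(width):
        px = (x + 0.5) / width * 2 - 1   # pixel -> image-plane x in [-1, 1]
        py = 1 - (y + 0.5) / height * 2  # pixel -> image-plane y in [1, -1]
        length = math.sqrt(px * px + py * py + 1)
        d = (px / length, py / length, 1 / length)  # unit ray direction
        hit = intersect_sphere((0, 0, 0), d, center, radius)
        row += "#" if hit is not None else "."
    print(row)
```

A full tracer would, at each hit, estimate incoming light and material properties instead of printing a “#”, but the backward-casting structure is the same.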

What Happens in Nature

In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this “ray” as a stream of photons traveling along the same path. In a perfect vacuum this ray will be a straight line (ignoring relativistic effects). In reality, any combination of four things might happen with this light ray: absorption, reflection, refraction and fluorescence. A surface may reflect all or part of the light ray, in one or more directions. It might also absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Less commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength colour in a random direction, though this is rare enough that it can be discounted from most rendering applications. Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add up to be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image.
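The accounting rule in this paragraph, that reflection, refraction, absorption, and fluorescence must sum to exactly the incoming light and no more, can be expressed as a simple check (a sketch with illustrative coefficients):

```python
def energy_conserved(reflect, refract, absorb, fluoresce=0.0):
    """All of the incoming light must be accounted for, and no more."""
    total = reflect + refract + absorb + fluoresce
    return abs(total - 1.0) < 1e-9  # fractions must sum to 100%

print(energy_conserved(0.66, 0.50, 0.0))   # False: 116% of the light
print(energy_conserved(0.66, 0.24, 0.10))  # True: exactly 100%
```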

Ray Tracing algorithm

The next important research breakthrough came from Turner Whitted in 1979. Previous algorithms cast rays from the eye into the scene, but the rays were traced no further. Whitted continued the process. When a ray hits a surface, it can generate up to three new types of rays: reflection, refraction, and shadow. A reflected ray continues in the mirror-reflection direction from a shiny surface; it is then intersected with the objects in the scene, and the closest object it intersects is what will be seen in the reflection. Refraction rays traveling through transparent material work similarly, with the addition that a refractive ray may be entering or exiting a material. To further avoid tracing all rays in a scene, a shadow ray is used to test whether a surface is visible to a light. A ray hits a surface at some point; if the surface at this point faces a light, a ray (to the computer, a line segment) is traced between the intersection point and the light. If any opaque object is found between the surface and the light, the surface is in shadow and that light does not contribute to its shade. This new layer of ray calculation added more realism to ray-traced images.
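The shadow-ray test introduced by Whitted can be sketched as follows (an illustration of the idea, not Whitted's original code; spheres stand in for arbitrary scene objects):

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def hits_sphere(origin, direction, center, radius, max_t):
    """True if a unit-length ray hits the sphere before travelling max_t."""
    oc = sub(origin, center)
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / 2
    return 0 < t < max_t  # blocker must sit between the point and the light

def in_shadow(point, light, blockers):
    """Trace a segment from the hit point to the light, looking for blockers."""
    to_light = sub(light, point)
    dist = math.sqrt(sum(x * x for x in to_light))
    d = tuple(x / dist for x in to_light)  # unit direction toward the light
    return any(hits_sphere(point, d, c, r, dist) for c, r in blockers)

light = (0.0, 5.0, 0.0)
blockers = [((0.0, 2.5, 0.0), 1.0)]  # opaque sphere between point and light
print(in_shadow((0.0, 0.0, 0.0), light, blockers))  # True: light is blocked
print(in_shadow((3.0, 0.0, 0.0), light, blockers))  # False: clear path
```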

Example of the Scene Description Language

The following is an example of the scene description language used by POV-Ray to describe a scene to render. It demonstrates the use of a camera, lights, a simple box shape, and the transforming effects of scaling, rotation, and translation.

#include "colors.inc"

global_settings {
  assumed_gamma 1.0
}

background {
  color rgb <0.25, 0.25, 0.25>
}

camera {
  location  <0.0, 0.5, -4.0>
  direction 1.5*z
  right     x*image_width/image_height
  look_at   <0.0, 0.0, 0.0>
}

light_source {
  <0, 0, 0>
  color rgb <1, 1, 1>
  translate <-5, 5, -5>
}

light_source {
  <0, 0, 0>
  color rgb <0.25, 0.25, 0.25>
  translate <6, -6, -6>
}

box {
  <-0.5, -0.5, -0.5>,
  <0.5, 0.5, 0.5>
  texture {
    pigment { color Red }    // "Red" is defined in colors.inc
    finish { specular 0.6 }  // specular highlights go in a finish block
    normal {
      agate 0.25
      scale 1/2
    }
  }
  rotate <45, 46, 47>
}

POV-Ray image output based on the above script

Virtual Refracting Telescope

The following section was created to test the accuracy of POV-Ray’s light refraction. We have seen sample models with a magnifying glass and a glass ball; however, would a ray-tracing program allow us to create a working telescope?

Telescope Basics

A simple refractor, or refracting telescope, is a hollow tube that uses a primary lens at its opening to refract, or change the path of, incoming light waves. This primary lens is called the “objective lens” and is used to collect more light than the human eye can. When light passes through the objective lens, it is bent, or refracted. Light waves that enter on a parallel path converge, or meet together, at a focal point. Light waves that enter at an angle converge on the focal plane. It is the combination of both which forms an image that is further refracted and magnified by a secondary lens called the eyepiece.

Convex lenses are thicker at the middle. Rays of light that pass through the lens are brought closer together (they converge). A convex lens is a converging lens.
When parallel rays of light pass through a convex lens, the refracted rays converge at one point called the principal focus. The distance between the principal focus and the centre of the lens is called the focal length.

Concave lenses are thinner at the middle. Rays of light that pass through the lens are spread out (they diverge). A concave lens is a diverging lens.
When parallel rays of light pass through a concave lens, the refracted rays diverge so that they appear to come from one point called the principal focus.
The distance between the principal focus and the centre of the lens is called the focal length. The image formed is virtual and diminished (smaller).
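The relationships above can be put into numbers with the thin-lens equation and the standard telescope magnification formula (a sketch with illustrative focal lengths, not values taken from the article's model):

```python
def image_distance(f, d_object):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for d_i."""
    return 1 / (1 / f - 1 / d_object)

def magnification(f_objective, f_eyepiece):
    """Angular magnification of a simple two-lens telescope."""
    return f_objective / abs(f_eyepiece)

# A lens with a 0.5 m focal length images an object 2 m away:
print(image_distance(0.5, 2.0))   # ~0.667 m behind the lens
# A hypothetical 0.98 m objective paired with a 50 mm eyepiece:
print(magnification(0.98, 0.05))  # ~19.6x
```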

Virtual Telescope

We created convex and concave lenses and made a telescope similar to Galileo’s first telescope. The eyepiece lens was built as the difference of a cylinder and a sphere, and the magnifying lens was created as the intersection of a cylinder and a sphere.

By trial and error we found the best IOR (index of refraction) value and the correct relative positions of the camera, the two lenses, and an object.

To test our virtual telescope further, we added water and clouds to the stage.

Next, we imported a wire-frame model of the Golden Buddha statue. [The Buddha.inc file was downloaded from http://www.multimania.com/froux/modeles/page6.htm]

Both Buddhas were placed at the same distance from the camera (compare the heads); this way we could test the accuracy and the power of this virtual telescope.

The result shows very realistic magnification generated by the telescope.

Another Example

Ray tracing can achieve a very high degree of visual realism. Here is an example created by a true master using the same basic principles of raytracing.

Image created by Gilles Tran with POV-Ray 3.6


Full scene containing:

  • 3 types of glasses
  • A pitcher
  • An ashtray
  • Dice

The pitcher and the ashtray are also available as Cinema 4D and OBJ files.

This image was created as a demo scene for several objects modelled with Rhino (glasses, pitcher, ashtray) for Closing time, and a die modelled with Cinema 4D was added later. It is also available on Wikimedia Commons. It was Picture of the Day on Wikipedia on August 2, 2006 and illustrates several articles on digital images.

Here are some general comments about this image:

  • No, it’s not made with Photoshop. It’s amusing how many people now just assume that everything is photoshopped. If you don’t believe me, just download POV-Ray and the image source code (the latter released in the public domain) and run it yourself!
  • Yes, this is not state-of-the-art photorealistic computer generated imagery. There’s a lot that could be done to improve it from a photorealistic standpoint, but that was not the purpose. It was just a quick-and-dirty demonstration scene for POV-Ray featuring some old, simple models that I wanted to give away. It just happened that some Wikipedians thought it nice enough.
  • The technology is raytracing + radiosity. Focal blur is camera-based (using the basic POV-Ray implementation, no choice for bokeh etc.) and is the major culprit of the long render time. No post-process of any kind.
  • No, no matter the tool (and I’ve been using other and better renderers for a while now), creating images like this is never done by just pressing a button and letting the machine work. This was a quick job but it still took a lot of tinkering to get right.

Sources and Related Links

  • http://en.wikipedia.org/wiki/Ray_tracing_%28graphics%29
  • http://hof.povray.org/
  • http://www.oyonale.com/
  • http://blog.world-mysteries.com/science/the-minds-eye/
  • http://blog.world-mysteries.com/science/ancient-wisdom-about-the-universe/
  • http://blog.world-mysteries.com/science/the-self-aware-universe/


  1. J.A. says

    World reality check the media for giving us knowledge to make our future a magic one leaving now for the wild open spaces with victory sprayed everywhere thanks for the experience never loose hope the scene changes with angels in the mist.

  2. M.Stramm says

    I just wanted to say that the world cannot be fully described as a sequence of ones and zeros. We cannot, for instance, take an atom and accurately measure its position *and* speed. It gets much, much worse for subatomic particles, and nobody even knows if there is something which is the smallest. (Quarks and strings are promising, because it gets rather hard to measure anything smaller than quarks. Hell, it’s already hard to measure quarks. Have you seen what they do in Geneva?) I wouldn’t be surprised if particles could be split infinitely often. Anyway, even if that were not the case, we could only ever hope to encode a very small, very limited model of our universe due to entropy.
    And whatever, my point is: please put more science stuff on your blog, k?

  3. Robbie Campbell says

    What a well-structured and readable article. The holographic space we live in is indeed a wonderful place. It would be great if we all enjoyed it rather than trying to destroy it. If the brain is a “node” in the matrix of the universal subconscious, it would make so much sense. Madness, genius, even forgetfulness, and all the other anomalies of the brain’s function might well be some kind of de-tuning of filters between the brain and the mass of information which surrounds us. I just think this article is so important and should be read by everyone.

  4. says

    This web site: http://ray-trace.zxq.net/
    Uses ray-trace rendering to show how, in virtual reality, a complex object can be created by certain actions on very simple starting conditions. This is analogous to Stephen Wolfram’s examples in his book “A New Kind of Science”, where he suggests that our complex reality is the result of actions on an underlying simple structure.

  5. says

    We are all one massive thought expressed in a multifaceted, multidimensional broadcast of information of a Fiery Mind localized in this thing called LIFE. We never die, and have existed ALWAYS. There is the “You” and there is the “Other”. The “You” would not exist without the feedback looping of the Other bringing you data to know yourself. We are the Other. We are the One. It is a Pity we have not recognized Ourselves as of late.

    • joebanana says

      Dude, that’s some heavy stuff, go on…….
      Could there be more than one “other”? as in a multiverse?
      Please, more detail.

  6. says

    Old Stuff…BTDT

    Hidden deeply in the current research and development surrounding the disciplines of Virtual Reality, there lurks a potentially new understanding of immortality and Divine Providence. Virtual Reality, which means the experiencing of an implied state of alternate being in a tacit environment of sensual stimuli that contributes to an illusionary or augmented reality for the one who experiences, is about to blow the lid off of current philosophical and religious understanding. VR will literally shake the groundworks of all philosophies of reality and establish a new vortex of universal understanding in the mind of future humanity. We will soon see the roots and constructions of life, spirit, and matter through this new lens on tomorrow.

    Virtual Reality is a human endeavor to participate in a computer-created world via computerized sensors such as viewing goggles, gloves or a total sensor-suit. Those who travel in this newly created world can experience perceptual stimuli that approaches the visual and tactile expressions of reality. The Department of Defense and NASA have plans to converge robotics with VR to allow a cybernaut to travel, work, and fight in hostile environs while controlled by a human at a remote site. With such capabilities, we would be able to explore the universe from our own living rooms. Imagine putting on a head-mounted display and body-sensor suit that would correspond to sensors in a robot that might be millions of miles away — say on Mars. You could actually become part of a cyber-being that would interact with the Martian environment. You would be able to experience all the wonders of such a far away place in a direct sensory manner.

    No discovery or invention of man can hold a candle to the blue-white laser blast that is about to flash across the horizon of humanity’s existence in this manifest state of earthly being. The automobile, television, space travel– you name it, nothing means more to us than the idea that is hidden in the physics and metaphysics of Virtual Reality. This idea will serve as a point of departure to new dimensions of amplified being. The idea is the understanding of the meaning of Logos or Reason. If we are careful in this first step on the threshold of the absolute, we will begin to decipher the codes that will send us into the occupation of universe-building.

    How can this be? The last time mankind came so close to the gods, they gave the tower of Babylon its namesake with many languages to confuse the feebleminded being, that would dare to learn the meaning of life. This time, if we are cautious, we may finally join the lesser gods in springing into a whole new dimension of understanding. Virtual Reality, along with the understandings of Fractal Geometry, Nanotechnology, Holography, Laser-tech, and other leading-edge, computer driven technologies, will show us the way to how Deity and Its powerful energy or spirit can broadcast itself into the virtual reality that we all are experiencing at present…the universe.

    According to Michael Heim, who wrote the 1993 version of The Metaphysics of Virtual Reality, there are seven divergent directions that VR research is moving through. They are: Simulation, Interaction (augmented reality), Artificiality, Immersion, Telepresence, Full-body Immersion, and Networked Communications. These seven directions will someday reconvene, offering humanity the tools to recreate itself. Marry the newly advocated theory of a Holographic Universe touted by David Bohm, Karl Pribram and Michael Talbot, then charge its mathematics with Benoit Mandelbrot’s Fractal Geometry (which can emulate all known visual universes), and we are on our way to seeing the deeper aspects of existence.

    Virtual reality will soon allow us to become anyone, anything, anywhere, anyhow, anywho-beings when we flip the switch and step into a high resolution, 3-dimensional visual-sound, pheromone aided environment. There, our real sensory systems will suffer a paradigm shift into a world augmented by heads-up displays of hypertext, cyberspace/form, and amplified knowledge available in a mirror-world context, at speeds that will seem like reality is blending into super-reality. We will be able to learn everything we learned in high school in a few short sessions. If Marshall McLuhan thought that television rendered high school obsolete, then VR will render education meaningless. All knowledge will be gestalt and inherent.

    Total access will be at our behest. You will be able to live through any animal; dive to the depths of your own blood stream; repair your own body from within; live in deep space in a robot body that will become your home or work away from home, and when the robot sleeps you repair and feed (energize) it and yourself. You may even share in your robot’s dreams. You will be able to visit virtual worlds within virtual worlds and might even lose yourself and die in a virtual world before realizing that you really are alive in your own living room. Of course, you may really be dead.

    Can you see what I am showing you? Who is to say whether or not our lives are really virtual realities for ourselves or a higher form? Localization is suspect. Where do you truly exist? The ancient Essenes said that you exist as spirit or animator occupying a form. You are eternal; the form is an experience. Modern-day religions say the same thing. Is religion really technology that we think is magical but is really VR in true syntax? Sumerian texts say that the gods came to earth to work the gold mines and became tired. They made a lesser man (via genetic engineering?) and dismembered (divided) a god to use his breath or spirit to drive the beings they made. Many ancient religions, such as those of the Aztecs, Egyptians and others, speak of the many-faceted god that occupies all of us in our clay-based bodies.

    What we are learning today about evolving Physics (another word for Godhood) in Quantum areas, shows us that locating the being on one level is extremely difficult at another level. Flesh and blood or “meat” as William Gibson referred to our bodies in his book Neuromancer, is matter that is glued together by a matrix of being driven by the irradiating force of a peculiar light. This thinking, vibrant mind-form seems to be on board a body but is extremely elusive when brain studies are conducted. What animates our brains and remembers, acts exactly like a holographic entity because it does not necessarily lose its memory even when 90% of the brain is inactive or cut away. A hologram that is fragmented can reconstitute the original when light is passed through the part. The part knows the whole. The holographic-like brain seems to act as an interface to reality–a lens of complex sensory capability that allows a spirit to manipulate the toys of a dimensional virtual reality via waves and interference waves of information animated in feedback loops.

    Now do you see the power that virtual reality will have to totally transform our civilization? What will virtual reality do to our beliefs, our professions, our culture, our lives? Suffice to say it will be like kicking in the after-burners to our psyches. When we learn to program with DNA and genetics in a Nanotechnological fashion with output devices linked into the program, we will create new universes. Get ready. We are in for the trip of our lives. Hang on!

    • navs says

      hey, this is a very nice thought, and after reading a lot about this stuff, more than a thought: it is a reality. Reality can’t be described in words because it is different for everyone, and it could be possible that we don’t even live in reality. So what is reality? It could well be an illusion. Then why is reality for me also reality for another person? Strange. Or could it be that we don’t know reality, but live in a world of experiences through which we connect with everyone……

    • M.Stramm says

      Well yeah, that’s what people thought like half a century ago..
      Now there’s world of warcraft and facebook. That’s basically as good as it gets.
      Oh yeah, and Google Goggles, that’s pretty cool, too.
      There will be some more development, some new hype, some better graphics (realtime raytracing is slowly manifesting) and some more augmented reality stuff, but people get lazy as far as the virtual reality crap goes. Have you ever played Second Life? It’s user generated content, nothing fancy there. Have you played Minecraft? Some generated stuff, a lot of user content, no universe building going on. It’s still people who do stuff. No gods, no demigods, except for those you slay in WOW using your new tier6 equipment set.

      Kind of a letdown, huh?

    • joebanana says

      Sort of a Holodeck à la mode. I’m still trying to grasp photons. What is their power supply? Are they related to neutrinos?
