The Virtual Reality Modeling Language

4 Concepts

4.1 Introduction and table of contents

4.1.1 Introduction

This clause describes key concepts in ISO/IEC 14772. This includes how nodes are combined into scene graphs, how nodes receive and generate events, how to create node types using prototypes, how to add node types to VRML and export them for use by others, how to incorporate scripts into a VRML file, and various general topics on nodes.

4.1.2 Table of contents

See Table 4.1 for the table of contents for this clause.

Table 4.1 -- Table of contents, Concepts

4.1 Introduction and table of contents
  4.1.1 Introduction
  4.1.2 Table of contents
  4.1.3 Conventions used

4.2 Overview
  4.2.1 The structure of a VRML file
  4.2.2 Header
  4.2.3 Scene graph
  4.2.4 Prototypes
  4.2.5 Event routing
  4.2.6 Generating VRML files
  4.2.7 Presentation and interaction
  4.2.8 Profiles

4.3 UTF-8 file syntax
  4.3.1 Clear text (UTF-8) encoding
  4.3.2 Statements
  4.3.3 Node statement syntax
  4.3.4 Field statement syntax
  4.3.5 PROTO statement syntax
  4.3.6 IS statement syntax
  4.3.7 EXTERNPROTO statement syntax
  4.3.8 USE statement syntax
  4.3.9 ROUTE statement syntax

4.4 Scene graph structure
  4.4.1 Root nodes
  4.4.2 Scene graph hierarchy
  4.4.3 Descendant and ancestor nodes
  4.4.4 Transformation hierarchy
  4.4.5 Standard units and coordinate system
  4.4.6 Run-time name scope

4.5 VRML and the World Wide Web
  4.5.1 File extension and MIME type
  4.5.2 URLs
  4.5.3 Relative URLs
  4.5.4 Scripting language protocols

4.6 Node semantics
  4.6.1 Introduction
  4.6.2 DEF/USE semantics
  4.6.3 Shapes and geometry
  4.6.4 Bounding boxes
  4.6.5 Grouping and children nodes
  4.6.6 Light sources
  4.6.7 Sensor nodes
  4.6.8 Interpolator nodes
  4.6.9 Time-dependent nodes
  4.6.10 Bindable children nodes
  4.6.11 Texture maps

4.7 Field, eventIn, and eventOut semantics

4.8 Prototype semantics
  4.8.1 Introduction
  4.8.2 PROTO interface declaration semantics
  4.8.3 PROTO definition semantics
  4.8.4 Prototype scoping rules

4.9 External prototype semantics
  4.9.1 Introduction
  4.9.2 EXTERNPROTO interface semantics
  4.9.3 EXTERNPROTO URL semantics

4.10 Event processing
  4.10.1 Introduction
  4.10.2 Route semantics
  4.10.3 Execution model
  4.10.4 Loops
  4.10.5 Fan-in and fan-out

4.11 Time
  4.11.1 Introduction
  4.11.2 Time origin
  4.11.3 Discrete and continuous changes

4.12 Scripting
  4.12.1 Introduction
  4.12.2 Script execution
  4.12.3 Initialize() and shutdown()
  4.12.4 eventsProcessed()
  4.12.5 Scripts with direct outputs
  4.12.6 Asynchronous scripts
  4.12.7 Script languages
  4.12.8 EventIn handling
  4.12.9 Accessing fields and events
  4.12.10 Browser script interface

4.13 Navigation
  4.13.1 Introduction
  4.13.2 Navigation paradigms
  4.13.3 Viewing model
  4.13.4 Collision detection and terrain following

4.14 Lighting model
  4.14.1 Introduction
  4.14.2 Lighting 'off'
  4.14.3 Lighting 'on'
  4.14.4 Lighting equations
  4.14.5 References

4.1.3 Conventions used

The following conventions are used throughout this part of ISO/IEC 14772:

Italics are used for event and field names, and are also used when new terms are introduced and equation variables are referenced.

A fixed-space font is used for URL addresses and source code examples. ISO/IEC 14772 UTF-8 encoding examples appear in bold, fixed-space font.

Node type names are appropriately capitalized (e.g., "The Billboard node is a grouping node..."). However, the concept of the node is often referred to in lower case in order to refer to the semantics of the node, not the node itself (e.g., "To rotate the billboard...").

The form "0xhh" expresses a byte as a hexadecimal number representing the bit configuration for that byte.

Throughout this part of ISO/IEC 14772, references are denoted using the "x.[ABCD]" notation, where "x" denotes which clause or annex the reference is described in and "[ABCD]" is an abbreviation of the reference title. For example, 2.[ABCD] refers to a reference described in clause 2 and E.[ABCD] refers to a reference described in annex E.

4.2 Overview

4.2.1 The structure of a VRML file

A VRML file consists of the following major functional components: the header, the scene graph, the prototypes, and event routing. The contents of this file are processed for presentation and interaction by a program known as a browser.

4.2.2 Header

For easy identification of VRML files, every VRML file shall begin with:

#VRML V2.0 <encoding type> [optional comment] <line terminator>

The header is a single line of UTF-8 text identifying the file as a VRML file and identifying the encoding type of the file. It may also contain additional semantic information. There shall be exactly one space separating "#VRML" from "V2.0" and "V2.0" from "<encoding type>". Also, the "<encoding type>" shall be followed by a linefeed (0x0a) or carriage-return (0x0d) character, or by one or more space (0x20) or tab (0x09) characters followed by any other characters, which are treated as a comment, and terminated by a linefeed or carriage-return character.

The <encoding type> is either "utf8" or any other authorized values defined in other parts of ISO/IEC 14772. The identifier "utf8" indicates a clear text encoding that allows for international characters to be displayed in ISO/IEC 14772 using the UTF-8 encoding defined in ISO/IEC 10646-1 (otherwise known as Unicode); see 2.[UTF8]. The usage of UTF-8 is detailed in 6.47, Text, node. The header for a UTF-8 encoded VRML file is

#VRML V2.0 utf8 [optional comment] <line terminator>

Any characters after the <encoding type> on the first line may be ignored by a browser. The header line ends at the occurrence of a <line terminator>. A <line terminator> is a linefeed character (0x0a) or a carriage-return character (0x0d).

4.2.3 Scene graph

The scene graph contains nodes which describe objects and their properties. It contains hierarchically grouped geometry to provide an audio-visual representation of objects, as well as nodes that participate in the event generation and routing mechanism.

4.2.4 Prototypes

Prototypes allow the set of VRML node types to be extended by the user. Prototype definitions can be included in the file in which they are used or defined externally. Prototypes may be defined in terms of other VRML nodes or may be defined using a browser-specific extension mechanism. While ISO/IEC 14772 has a standard format for identifying such extensions, their implementation is browser-dependent.

4.2.5 Event routing

Some VRML nodes generate events in response to environmental changes or user interaction. Event routing gives authors a mechanism, separate from the scene graph hierarchy, through which these events can be propagated to effect changes in other nodes. Once generated, events are sent to their routed destinations in time order and processed by the receiving node. This processing can change the state of the node, generate additional events, or change the structure of the scene graph.

Script nodes allow arbitrary, author-defined event processing. An event received by a Script node causes the execution of a function within a script which has the ability to send events through the normal event routing mechanism, or bypass this mechanism and send events directly to any node to which the Script node has a reference. Scripts can also dynamically add or delete routes and thereby change the event-routing topology.

The ideal event model processes all events instantaneously in the order that they are generated. A timestamp serves two purposes. First, it is a conceptual device used to describe the chronological flow of the event mechanism. It ensures that deterministic results can be achieved by real-world implementations that address processing delays and asynchronous interaction with external devices. Second, timestamps are also made available to Script nodes to allow events to be processed based on the order of user actions or the elapsed time between events.

4.2.6 Generating VRML files

A generator is a human or computerized creator of VRML files. It is the responsibility of the generator to ensure the correctness of the VRML file and the availability of supporting assets (e.g., images, audio clips, other VRML files) referenced therein.

4.2.7 Presentation and interaction

The interpretation, execution, and presentation of VRML files will typically be undertaken by a mechanism known as a browser, which displays the shapes and sounds in the scene graph. This presentation is known as a virtual world and is navigated in the browser by a human or mechanical entity, known as a user. The world is displayed as if experienced from a particular location; that position and orientation in the world is known as the viewer. The browser provides navigation paradigms (such as walking or flying) that enable the user to move the viewer through the virtual world.

In addition to navigation, the browser provides a mechanism allowing the user to interact with the world through sensor nodes in the scene graph hierarchy. Sensors respond to user interaction with geometric objects in the world, the movement of the user through the world, or the passage of time.

The visual presentation of geometric objects in a VRML world follows a conceptual model designed to resemble the physical characteristics of light. The VRML lighting model describes how appearance properties and lights in the world are combined to produce displayed colours (see 4.14, Lighting Model, for details).

Figure 4.1 illustrates a conceptual model of a VRML browser. The browser is portrayed as a presentation application that accepts user input in the forms of file selection (explicit and implicit) and user interface gestures (e.g., manipulation and navigation using an input device). The three main components of the browser are: Parser, Scene Graph, and Audio/Visual Presentation. The Parser component reads the VRML file and creates the Scene Graph. The Scene Graph component consists of the Transformation Hierarchy (the nodes) and the Route Graph. The Scene Graph also includes the Execution Engine that processes events, reads and edits the Route Graph, and makes changes to the Transform Hierarchy (nodes). User input generally affects sensors and navigation, and thus is wired to the Route Graph component (sensors) and the Audio/Visual Presentation component (navigation). The Audio/Visual Presentation component performs the graphics and audio rendering of the Transform Hierarchy that feeds back to the user.

Figure 4.1 -- Conceptual model of a VRML browser

4.2.8 Profiles

ISO/IEC 14772 supports the concept of profiles. A profile is a named collection of functionality and requirements which shall be supported in order for an implementation to conform to that profile. Only one profile is defined in this part of ISO/IEC 14772. The functionality and minimum support requirements described in ISO/IEC 14772-1 form the Base profile. Additional profiles may be defined in other parts of ISO/IEC 14772. Such profiles shall incorporate the entirety of the Base profile.

4.3 UTF-8 file syntax

4.3.1 Clear text (UTF-8) encoding

This section describes the syntax of UTF-8-encoded, human-readable VRML files. A more formal description of the syntax may be found in annex A, Grammar definition. The semantics of VRML in terms of the UTF-8 encoding are presented in this part of ISO/IEC 14772. Other encodings may be defined in other parts of ISO/IEC 14772. Such encodings shall describe how to map the UTF-8 descriptions to and from the corresponding encoding elements.

For the UTF-8 encoding, the # character begins a comment. The first line of the file, the header, also starts with a "#" character. Otherwise, all characters following a "#", until the next line terminator, are ignored. The only exception is within double-quoted SFString and MFString fields where the "#" character is defined to be part of the string.

Commas, spaces, tabs, linefeeds, and carriage-returns are separator characters wherever they appear outside of string fields. Separator characters and comments are collectively termed whitespace.

A VRML document server may strip comments and extra separators, including the comment portion of the header line, from a VRML file before transmitting it. WorldInfo nodes should be used for persistent information such as copyrights or author information.

Field, event, PROTO, EXTERNPROTO, and node names shall not contain control characters (0x0-0x1f, 0x7f), space (0x20), double or single quotes (0x22: ", 0x27: '), sharp (0x23: #), comma (0x2c: ,), period (0x2e: .), brackets (0x5b, 0x5d: []), backslash (0x5c: \) or braces (0x7b, 0x7d: {}). Further, their first character shall not be a digit (0x30-0x39), plus (0x2b: +), or minus (0x2d: -) character. Otherwise, names may contain any ISO 10646 character encoded using UTF-8. VRML is case-sensitive; "Sphere" is different from "sphere" and "BEGIN" is different from "begin."

The following reserved keywords shall not be used for field, event, PROTO, EXTERNPROTO, or node names:

    DEF           EXTERNPROTO   FALSE         IS
    NULL          PROTO         ROUTE         TO
    TRUE          USE           eventIn       eventOut
    exposedField  field

4.3.2 Statements

After the required header, a VRML file may contain any combination of the following:

  1. Any number of PROTO or EXTERNPROTO statements (see 4.8, Prototype semantics);
  2. Any number of root node statements (see 4.4.1, Root nodes);
  3. Any number of USE statements (see 4.6.2, DEF/USE semantics);
  4. Any number of ROUTE statements (see 4.10.2, Route semantics).

4.3.3 Node statement syntax

A node statement consists of an optional name for the node followed by the node's type and then the body of the node. A node is given a name using the keyword DEF followed by the name of the node. The node's body is enclosed in matching braces ("{ }"). Whitespace shall separate the DEF, name of the node, and node type, but is not required before or after the curly braces that enclose the node's body. See A.3, Nodes, for details on node grammar rules.

    [DEF <name>] <nodeType> { <body> }

A node's body consists of any number of field statements, IS statements, ROUTE statements, PROTO statements or EXTERNPROTO statements, in any order.

See 4.6.2, DEF/USE semantics, for more details on node naming. See 4.3.4, Field statement syntax, for a description of field statement syntax and 4.7, Field, eventIn, and eventOut semantics, for a description of field statement semantics. See 4.6, Node semantics, for a description of node statement semantics.
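For example, the following node statement (the name Box1 is illustrative) defines a named Shape node whose body contains a single field statement:

    DEF Box1 Shape {
      geometry Box { size 2 2 2 }
    }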

4.3.4 Field statement syntax

A field statement consists of the name of the field followed by the field's value(s). The following illustrates the syntax for a single-valued field:

    <fieldName> <fieldValue>

The following illustrates the syntax for a multiple-valued field:

    <fieldName> [ <fieldValues> ]

See A.4, Fields, for details on field statement grammar rules.

Each node type defines the names and types of the fields that each node of that type contains. The same field name may be used by multiple node types. See 5, Field and event reference, for the definition and syntax of specific field types.

See 4.7, Field, eventIn, and eventOut semantics, for a description of field statement semantics.
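For example, the Transform node's translation field is single-valued while its children field is multiple-valued; the particular values below are arbitrary:

    Transform {
      translation 0 2 0
      children [
        Shape { geometry Sphere { } }
      ]
    }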

4.3.5 PROTO statement syntax

A PROTO statement consists of the PROTO keyword, followed in order by the prototype name, prototype interface declaration, and prototype definition:

    PROTO <name> [ <declaration> ] { <definition> }

See A.2, General, for details on prototype statement grammar rules.

A prototype interface declaration consists of eventIn, eventOut, field, and exposedField declarations (see 4.7, Field, eventIn, and eventOut semantics) enclosed in square brackets. Whitespace is not required before or after the brackets.

EventIn declarations consist of the keyword "eventIn" followed by an event type and a name:

    eventIn <eventType> <name>

EventOut declarations consist of the keyword "eventOut" followed by an event type and a name:

    eventOut <eventType> <name>

Field and exposedField declarations consist of either the keyword "field" or "exposedField" followed by a field type, a name, and an initial field value of the given field type.

    field <fieldType> <name> <initial field value>

    exposedField <fieldType> <name> <initial field value>

Field, eventIn, eventOut, and exposedField names shall be unique in each PROTO statement, but are not required to be unique between different PROTO statements. If a PROTO statement contains an exposedField with a given name (e.g., zzz), it shall not contain eventIns or eventOuts with the prefix set_ or the suffix _changed and the given name (e.g., set_zzz or zzz_changed).

A prototype definition consists of at least one node statement and any number of ROUTE statements, PROTO statements, and EXTERNPROTO statements in any order.

See 4.8, Prototype semantics, for a description of prototype semantics.

4.3.6 IS statement syntax

The body of a node statement that is inside a prototype definition may contain IS statements. An IS statement consists of the name of a field, exposedField, eventIn or eventOut from the node's public interface followed by the keyword IS followed by the name of a field, exposedField, eventIn or eventOut from the prototype's interface declaration:

    <field/eventName> IS <field/eventName>

See A.3, Nodes, for details on prototype node body grammar rules. See 4.8, Prototype semantics, for a description of IS statement semantics.
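The following informative example (the prototype name and field name are illustrative) shows a complete PROTO statement whose definition associates the declared field with the diffuseColor field of a Material node by means of an IS statement:

    PROTO ColoredSphere [ field SFColor sphereColor 1 0 0 ] {
      Shape {
        appearance Appearance {
          material Material { diffuseColor IS sphereColor }
        }
        geometry Sphere { }
      }
    }

An instance such as ColoredSphere { sphereColor 0 0 1 } then behaves as the expanded prototype definition with the supplied field value.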

4.3.7 EXTERNPROTO statement syntax

An EXTERNPROTO statement consists of the EXTERNPROTO keyword followed in order by the prototype's name, its interface declaration, and a list (possibly empty) of double-quoted strings enclosed in square brackets. If there is only one member of the list, the brackets are optional.

    EXTERNPROTO <name> [ <external declaration> ] URL or [ URLs ]

See A.2, General, for details on external prototype statement grammar rules.

An EXTERNPROTO interface declaration is the same as a PROTO interface declaration, with the exception that field and exposedField initial values are not specified and the prototype definition is specified in a separate VRML file to which the URL(s) refer.
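For example (the URL is illustrative), an EXTERNPROTO statement for a ColoredSphere prototype like the one sketched above could read as follows; note that the interface declaration repeats the field's type and name but omits its initial value:

    EXTERNPROTO ColoredSphere [ field SFColor sphereColor ]
        "http://example.com/shapes.wrl#ColoredSphere"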

4.3.8 USE statement syntax

A USE statement consists of the USE keyword followed by a node name:

    USE <name>

See A.2, General, for details on USE statement grammar rules.

4.3.9 ROUTE statement syntax

A ROUTE statement consists of the ROUTE keyword followed in order by a node name, a period character, a field name, the TO keyword, a node name, a period character, and a field name. Whitespace is allowed but not required before or after the period characters:

    ROUTE <name>.<field/eventName> TO <name>.<field/eventName>

See A.2, General, for details on ROUTE statement grammar rules.
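For example (node names are illustrative), the following statement routes the touchTime eventOut of a TouchSensor named Clicker to the set_startTime eventIn of a TimeSensor named Clock; both events are of type SFTime, as required by 4.10.2, Route semantics:

    ROUTE Clicker.touchTime TO Clock.set_startTime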

4.4 Scene graph structure

4.4.1 Root nodes

A VRML file contains zero or more root nodes. The root nodes for a VRML file are those nodes defined by the node statements or USE statements that are not contained in other node or PROTO statements. Root nodes shall be children nodes (see 4.6.5, Grouping and children nodes).

4.4.2 Scene graph hierarchy

A VRML file contains a directed acyclic graph. Node statements can contain SFNode or MFNode field statements that, in turn, contain node (or USE) statements. This hierarchy of nodes is called the scene graph. Each arc in the graph from A to B means that node A has an SFNode or MFNode field whose value directly contains node B. See E.[FOLE] for details on hierarchical scene graphs.

4.4.3 Descendant and ancestor nodes

The descendants of a node are all of the nodes in its SFNode or MFNode fields, as well as all of those nodes' descendants. The ancestors of a node are all of the nodes that have the node as a descendant.

4.4.4 Transformation hierarchy

The transformation hierarchy includes all of the root nodes and root node descendants that are considered to have one or more particular locations in the virtual world. VRML includes the notion of local coordinate systems, defined in terms of transformations from ancestor coordinate systems (using Transform or Billboard nodes). The coordinate system in which the root nodes are displayed is called the world coordinate system.

A VRML browser's task is to present a VRML file to the user; it does this by presenting the transformation hierarchy to the user. The transformation hierarchy describes the directly perceptible parts of the virtual world.

The following node types are in the scene graph but not affected by the transformation hierarchy: ColorInterpolator, CoordinateInterpolator, NavigationInfo, NormalInterpolator, OrientationInterpolator, PositionInterpolator, Script, ScalarInterpolator, TimeSensor, and WorldInfo. Of these, only Script nodes may have descendants. A descendant of a Script node is not part of the transformation hierarchy unless it is also the descendant of another node that is part of the transformation hierarchy or is a root node.

Nodes that are descendants of LOD or Switch nodes are affected by the transformation hierarchy, even if the setting of a Switch node's whichChoice field or the position of the viewer with respect to a LOD node makes them imperceptible.

The transformation hierarchy shall be a directed acyclic graph; results are undefined if a node in the transformation hierarchy is its own ancestor.
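For example, in the following informative fragment the Sphere is centred at (3, 1, 0) in the world coordinate system, because the translations of the two nested Transform nodes accumulate:

    Transform {
      translation 3 0 0
      children Transform {
        translation 0 1 0
        children Shape { geometry Sphere { } }
      }
    }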

4.4.5 Standard units and coordinate system

ISO/IEC 14772 defines the unit of measure of the world coordinate system to be metres. All other coordinate systems are built from transformations based upon the world coordinate system. Table 4.2 lists standard units for ISO/IEC 14772.

Table 4.2 -- Standard units

Category         Unit
Linear distance  Metres
Angles           Radians
Time             Seconds
Colour space     RGB ([0., 1.], [0., 1.], [0., 1.])


ISO/IEC 14772 uses a Cartesian, right-handed, three-dimensional coordinate system. By default, the viewer is on the Z-axis looking down the -Z-axis toward the origin with +X to the right and +Y straight up. A modelling transformation (see 6.52, Transform, and 6.6, Billboard) or viewing transformation (see 6.53, Viewpoint) can be used to alter this default projection.

4.4.6 Run-time name scope

Each VRML file defines a run-time name scope that contains all of the root nodes of the file and all of the descendant nodes of the root nodes, with the exception of:

  1. descendant nodes that are inside Inline nodes;
  2. descendant nodes that are inside a prototype instance and are not part of the prototype's interface (i.e., are not in an SF/MFNode field or eventOut of the prototype).

Each Inline node and prototype instance also defines a run-time name scope, consisting of all of the root nodes of the file referred to by the Inline node or all of the root nodes of the prototype definition, restricted as above.

Nodes created dynamically (e.g., by a Script node invoking the browser's createVrmlFromString() or createVrmlFromURL() calls) are not part of any name scope until they are added to the scene graph, at which point they become part of the same name scope as their parent node(s). A node may be part of more than one run-time name scope. A node shall be removed from a name scope when it is removed from the scene graph.

4.5 VRML and the World Wide Web

4.5.1 File extension and MIME type

The file extension for VRML files is .wrl (for world).

The official MIME type for VRML files is defined as:

    model/vrml

where the MIME major type for 3D data descriptions is model, and the minor type for VRML documents is vrml.

For compatibility with earlier versions of VRML, the following MIME type shall also be supported:

    x-world/x-vrml

where the MIME major type is x-world, and the minor type for VRML documents is x-vrml.

See E.[MIME] for details.

4.5.2 URLs

A URL (Uniform Resource Locator), described in 2.[URL], specifies a file located on a particular server and accessed through a specified protocol (e.g., http). In ISO/IEC 14772, the upper-case term URL refers to a Uniform Resource Locator, while the italicized lower-case version url refers to a field which may contain URLs or in-line encoded data.

All url fields are of type MFString. The strings in these fields indicate multiple locations to search for data in decreasing order of preference. If the browser cannot locate or interpret the data specified by the first location, it shall try the second and subsequent locations in order until a URL containing interpretable data is encountered. If no interpretable URLs are located, the node type defines the resultant default behaviour. The url field entries are delimited by double quotation marks " ". Due to 4.5.4, Scripting language protocols, url fields use a superset of the standard URL syntax defined in 2.[URL]. Details on the string field are located in 5.9, SFString and MFString.

More general information on URLs is described in 2.[URL].

4.5.3 Relative URLs

Relative URLs are handled as described in 2.[RURL]. The base document for EXTERNPROTO statements or nodes that contain URL fields is:

  1. The VRML file in which the prototype is instantiated, if the statement is part of a prototype definition.
  2. The file containing the script code, if the statement is part of a string passed to the createVrmlFromURL() or createVrmlFromString() browser calls in a Script node.
  3. Otherwise, the VRML file from which the statement is read, in which case the RURL information provides the data itself.

4.5.4 Scripting language protocols

The Script node's url field may also support custom protocols for the various scripting languages. For example, a script url prefixed with javascript: shall contain ECMAScript source, with line terminators allowed in the string. The details of each language protocol are defined in the annex for each language. Browsers are not required to support any specific scripting language. However, browsers shall adhere to the protocol defined in the corresponding annex of ISO/IEC 14772 for any scripting language which is supported. The following example illustrates the use of mixing custom protocols and standard protocols in a single url field (order of precedence determines priority):

    #VRML V2.0 utf8 
    Script {
        url [ "javascript: ...",           # custom protocol ECMAScript
              "http://bar.com/foo.js",     # std protocol ECMAScript
              "http://bar.com/foo.class" ] # std protocol Java platform bytecode
    }

In the example above, the "..." represents in-line ECMAScript source code.

4.6 Node semantics

4.6.1 Introduction

Each node has the following characteristics:

  1. A type name. Examples include Box, Color, Group, Sphere, Sound, or SpotLight.
  2. Zero or more fields that define how each node differs from other nodes of the same type. Field values are stored in the VRML file along with the nodes, and encode the state of the virtual world.
  3. A set of events that it can receive and send. Each node may receive zero or more different kinds of events which will result in some change to the node's state. Each node may also generate zero or more different kinds of events to report changes in the node's state.
  4. An implementation. The implementation of each node defines how it reacts to events it can receive, when it generates events, and its visual or auditory appearance in the virtual world (if any). The VRML standard defines the semantics of built-in nodes (i.e., nodes with implementations that are provided by the VRML browser). The PROTO statement may be used to define new types of nodes, with behaviours defined in terms of the behaviours of other nodes.
  5. A name. Nodes can be named. This is used by other statements to reference a specific instantiation of a node.

4.6.2 DEF/USE semantics

A node given a name using the DEF keyword may be referenced by name later in the same file with USE or ROUTE statements. The USE statement does not create a copy of the node. Instead, the same node is inserted into the scene graph a second time, resulting in the node having multiple parents. Using an instance of a node multiple times is called instantiation.

Node names are limited in scope to a single VRML file, prototype definition, or string submitted to either the createVrmlFromString() browser extension or a construction mechanism for SFNodes within a script. Given a node named "NewNode" (i.e., DEF NewNode), any "USE NewNode" statements in SFNode or MFNode fields inside NewNode's scope refer to NewNode (see 4.4.4, Transformation hierarchy, for restrictions on self-referential nodes).

If multiple nodes are given the same name, each USE statement refers to the closest node with the given name preceding it in either the VRML file or prototype definition.
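For example (the name Ball is illustrative), the following fragment inserts the same Shape node into the scene graph twice, under two different Transform parents; the one sphere is displayed in two places:

    Transform {
      translation -2 0 0
      children DEF Ball Shape { geometry Sphere { } }
    }
    Transform {
      translation 2 0 0
      children USE Ball
    }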

4.6.3 Shapes and geometry

4.6.3.1 Introduction

The Shape node associates a geometry node with nodes that define that geometry's appearance. Shape nodes shall be part of the transformation hierarchy to have any visible result, and the transformation hierarchy shall contain Shape nodes for any geometry to be visible (the only nodes that render visible results are Shape nodes and the Background node). A Shape node contains exactly one geometry node in its geometry field. The following node types are geometry nodes:

    Box, Cone, Cylinder, ElevationGrid, Extrusion, IndexedFaceSet,
    IndexedLineSet, PointSet, Sphere, Text

4.6.3.2 Geometric property nodes

Several geometry nodes contain Coordinate, Color, Normal, and TextureCoordinate as geometric property nodes. The geometric property nodes are defined as individual nodes so that instancing and sharing is possible between different geometry nodes.

4.6.3.3 Appearance nodes

Shape nodes may specify an Appearance node that describes the appearance properties (material and texture) to be applied to the Shape's geometry. Nodes of the following type may be specified in the material field of the Appearance node:

    Material

Nodes of the following types may be specified by the texture field of the Appearance node:

    ImageTexture, MovieTexture, PixelTexture

Nodes of the following types may be specified in the textureTransform field of the Appearance node:

    TextureTransform

The interaction between such appearance nodes and the Color node is described in 4.14, Lighting Model.

4.6.3.4 Shape hint fields

The Extrusion and IndexedFaceSet nodes each have three SFBool fields that provide hints about the geometry. These hints specify the vertex ordering, whether the shape is solid, and whether the shape contains convex faces. These fields are ccw, solid, and convex, respectively. The ElevationGrid node has the ccw and solid fields.

The ccw field defines the ordering of the vertex coordinates of the geometry with respect to user-given or automatically generated normal vectors used in the lighting model equations. If ccw is TRUE, the normals shall follow the right hand rule; the orientation of each normal with respect to the vertices (taken in order) shall be such that the vertices appear to be oriented in a counterclockwise order when the vertices are viewed (in the local coordinate system of the Shape) from the opposite direction as the normal. If ccw is FALSE, the normals shall be oriented in the opposite direction. If normals are not generated but are supplied using a Normal node, and the orientation of the normals does not match the setting of the ccw field, results are undefined.

The solid field determines whether one or both sides of each polygon shall be displayed. If solid is FALSE, each polygon shall be visible regardless of the viewing direction (i.e., no backface culling shall be done, and two-sided lighting shall be performed to illuminate both sides of lit surfaces). If solid is TRUE, the visibility of each polygon shall be determined as follows: Let V be the position of the viewer in the local coordinate system of the geometry. Let N be the geometric normal vector of the polygon, and let P be any point (besides the local origin) in the plane defined by the polygon's vertices. Then if (V dot N) - (N dot P) is greater than zero, the polygon shall be visible; if it is less than or equal to zero, the polygon shall be invisible (backface culled).

The convex field indicates whether all polygons in the shape are convex (TRUE). A polygon is convex if it is planar, does not intersect itself, and all of the interior angles at its vertices are less than 180 degrees. Non-planar and self-intersecting polygons may produce undefined results even if the convex field is FALSE.

4.6.3.5 Crease angle field

The creaseAngle field, used by the ElevationGrid, Extrusion, and IndexedFaceSet nodes, affects how default normals are generated. If the angle between the geometric normals of two adjacent faces is less than the crease angle, normals shall be calculated so that the faces are smooth-shaded across the edge; otherwise, normals shall be calculated so that a lighting discontinuity across the edge is produced. For example, a crease angle of 0.5 radians means that an edge between two adjacent polygonal faces will be smooth shaded if the geometric normals of the two faces form an angle that is less than 0.5 radians. Otherwise, the faces will appear faceted. Crease angles shall be greater than or equal to 0.0.
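For example (informative; the coordinates are arbitrary), the following IndexedFaceSet requests smooth shading across any edge shared by faces whose geometric normals differ by less than 0.5 radians:

    Shape {
      geometry IndexedFaceSet {
        creaseAngle 0.5
        coord Coordinate {
          point [ 0 0 0, 1 0 0, 1 1 0, 0 0 1 ]
        }
        coordIndex [ 0 1 2 -1, 0 2 3 -1 ]
      }
    }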

4.6.4 Bounding boxes

Several of the nodes include a bounding box specification comprised of two fields, bboxSize and bboxCenter. A bounding box is a rectangular parallelepiped of dimension bboxSize centred on the location bboxCenter in the local coordinate system. This is typically used by grouping nodes to provide a hint to the browser on the group's approximate size for culling optimizations. The default size for bounding boxes (-1, -1, -1) indicates that the user did not specify the bounding box and the effect shall be as if the bounding box were infinitely large. A bboxSize value of (0, 0, 0) is valid and represents a point in space (i.e., an infinitely small box). Specified bboxSize field values shall be >= 0.0 or equal to (-1, -1, -1). The bboxCenter fields specify a position offset from the local coordinate system.

The bboxCenter and bboxSize fields may be used to specify a maximum possible bounding box for the objects inside a grouping node (e.g., Transform). These are used as hints to optimize certain operations such as determining whether or not the group needs to be drawn. The bounding box shall be large enough at all times to enclose the union of the group's children's bounding boxes; it shall not include any transformations performed by the group itself (i.e., the bounding box is defined in the local coordinate system of the children). Results are undefined if the specified bounding box is smaller than the true bounding box of the group.
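For example, a grouping node whose children are known to remain within a 10 metre cube centred 5 metres above the local origin could declare (informative; the children are elided):

    Transform {
      bboxCenter 0 5 0
      bboxSize   10 10 10
      children   [ ... ]
    }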

4.6.5 Grouping and children nodes

Grouping nodes have a field that contains a list of children nodes. Each grouping node defines a coordinate space for its children. This coordinate space is relative to the coordinate space of the node of which the group node is a child. Such a node is called a parent node. This means that transformations accumulate down the scene graph hierarchy.

The following node types are grouping nodes:

    Anchor, Billboard, Collision, Group, Inline, LOD, Switch, Transform

The following node types are children nodes:

    Anchor, Background, Billboard, Collision, ColorInterpolator,
    CoordinateInterpolator, CylinderSensor, DirectionalLight, Fog, Group,
    Inline, LOD, NavigationInfo, NormalInterpolator, OrientationInterpolator,
    PlaneSensor, PointLight, PositionInterpolator, ProximitySensor,
    ScalarInterpolator, Script, Shape, Sound, SphereSensor, SpotLight,
    Switch, TimeSensor, TouchSensor, Transform, Viewpoint, VisibilitySensor,
    WorldInfo

The following node types are not valid as children nodes:

    Appearance, AudioClip, Box, Color, Cone, Coordinate, Cylinder,
    ElevationGrid, Extrusion, FontStyle, ImageTexture, IndexedFaceSet,
    IndexedLineSet, Material, MovieTexture, Normal, PixelTexture, PointSet,
    Sphere, Text, TextureCoordinate, TextureTransform

All grouping nodes except Inline, LOD, and Switch also have addChildren and removeChildren eventIn definitions. The addChildren event appends nodes to the grouping node's children field. Any nodes passed to the addChildren event that are already in the group's children list are ignored. For example, if the children field contains the nodes Q, L and S (in order) and the group receives an addChildren eventIn containing (in order) nodes A, L, and Z, the result is a children field containing (in order) nodes Q, L, S, A, and Z.

The removeChildren event removes nodes from the grouping node's children field. Any nodes in the removeChildren event that are not in the grouping node's children list are ignored. If the children field contains the nodes Q, L, S, A and Z and it receives a removeChildren eventIn containing nodes A, L, and Z, the result is Q, S.

Note that a variety of node types reference other node types through fields. Some of these are parent-child relationships, while others are not (there are node-specific semantics). Table 4.3 lists all node types that reference other nodes through fields.

Table 4.3 -- Nodes with SFNode or MFNode fields

Node Type       Field             Valid Node Types for Field
Anchor          children          Valid children nodes
Appearance      material          Material
                texture           ImageTexture, MovieTexture, PixelTexture
                textureTransform  TextureTransform
Billboard       children          Valid children nodes
Collision       children          Valid children nodes
ElevationGrid   color             Color
                normal            Normal
                texCoord          TextureCoordinate
Group           children          Valid children nodes
IndexedFaceSet  color             Color
                coord             Coordinate
                normal            Normal
                texCoord          TextureCoordinate
IndexedLineSet  color             Color
                coord             Coordinate
LOD             level             Valid children nodes
Shape           appearance        Appearance
                geometry          Box, Cone, Cylinder, ElevationGrid,
                                  Extrusion, IndexedFaceSet, IndexedLineSet,
                                  PointSet, Sphere, Text
Sound           source            AudioClip, MovieTexture
Switch          choice            Valid children nodes
Text            fontStyle         FontStyle
Transform       children          Valid children nodes

4.6.6 Light sources

Shape nodes are illuminated by the sum of all of the lights in the world that affect them. This includes the contribution of both the direct and ambient illumination from light sources. Ambient illumination results from the scattering and reflection of light originally emitted directly by light sources. The amount of ambient light is associated with the individual lights in the scene. This is a gross approximation to how ambient reflection actually occurs in nature.

The following node types are light source nodes:

    DirectionalLight, PointLight, SpotLight

All light source nodes contain an intensity, a color, and an ambientIntensity field. The intensity field specifies the brightness of the direct emission from the light, and the ambientIntensity specifies the intensity of the ambient emission from the light. Light intensity may range from 0.0 (no light emission) to 1.0 (full intensity). The color field specifies the spectral colour properties of both the direct and ambient light emission as an RGB value.

PointLight and SpotLight illuminate all objects in the world that fall within their volume of lighting influence regardless of location within the transformation hierarchy. PointLight defines this volume of influence as a sphere centred at the light (defined by a radius). SpotLight defines the volume of influence as a solid angle defined by a radius and a cutoff angle. DirectionalLight nodes illuminate only the objects descended from the light's parent grouping node, including any descendant children of the parent grouping nodes.
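The scoping of a DirectionalLight can be sketched as follows (informative): the light affects only the Sphere, which descends from the light's parent Group; the Box outside that group is not illuminated by it:

    Group {
      children [
        DirectionalLight { direction 0 0 -1 }
        Shape { geometry Sphere { } }   # illuminated by the light
      ]
    }
    Shape { geometry Box { } }          # outside the light's scope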

4.6.7 Sensor nodes

4.6.7.1 Introduction to sensors

The following node types are sensor nodes:

    Anchor, Collision, CylinderSensor, PlaneSensor, ProximitySensor,
    SphereSensor, TimeSensor, TouchSensor, VisibilitySensor

Sensors are children nodes in the hierarchy and therefore may be parented by grouping nodes as described in 4.6.5, Grouping and children nodes.

Each type of sensor defines when an event is generated. The state of the scene graph after several sensors have generated events shall be as if each event is processed separately, in order. If sensors generate events at the same time, the state of the scene graph will be undefined if the results depend on the ordering of the events.

It is possible to create dependencies between various types of sensors. For example, a TouchSensor may result in a change to a VisibilitySensor node's transformation, which in turn may cause the VisibilitySensor node's visibility status to change.

The following two sections classify sensors into two categories: environmental sensors and pointing-device sensors.

4.6.7.2 Environmental sensors

The following node types are environmental sensors:

    Collision, ProximitySensor, TimeSensor, VisibilitySensor

The ProximitySensor detects when the user navigates into a specified region in the world. The ProximitySensor itself is not visible. The TimeSensor is a clock that has no geometry or location associated with it; it is used to start and stop time-based nodes such as interpolators. The VisibilitySensor detects when a specific part of the world becomes visible to the user. The Collision grouping node detects when the user collides with objects in the virtual world. Proximity, time, collision, and visibility sensors are each processed independently of whether others exist or overlap.

When environmental sensors are inserted into the transformation hierarchy and before the presentation is updated (i.e., read from file or created by a script), they shall generate events indicating any conditions which the sensor is intended to detect (see 4.10.3, Execution model). The conditions for individual sensor types to generate these initial events are defined in the individual node specifications in 6, Node reference.

4.6.7.3 Pointing-device sensors

Pointing-device sensors detect user pointing events such as the user clicking on a piece of geometry (i.e., TouchSensor). The following node types are pointing-device sensors:

    Anchor, CylinderSensor, PlaneSensor, SphereSensor, TouchSensor

A pointing-device sensor is activated when the user locates the pointing device over geometry that is influenced by that specific pointing-device sensor. Pointing-device sensors have influence over all geometry that is descended from the sensor's parent groups. In the case of the Anchor node, the Anchor node itself is considered to be the parent group. Typically, the pointing-device sensor is a sibling to the geometry that it influences. In other cases, the sensor is a sibling to groups which contain geometry (i.e., are influenced by the pointing-device sensor).

The appearance properties of the geometry do not affect activation of the sensor. In particular, transparent materials or textures shall be treated as opaque with respect to activation of pointing-device sensors.

For a given user activation, the lowest enabled pointing-device sensor in the hierarchy is activated. All other pointing-device sensors above the lowest enabled pointing-device sensor are ignored. The hierarchy is defined by the geometry node over which the pointing-device sensor is located and the entire hierarchy upward. If there are multiple pointing-device sensors tied for lowest, each of these is activated simultaneously and independently, possibly resulting in multiple sensors activating and generating output simultaneously. This feature allows combinations of pointing-device sensors (e.g., TouchSensor and PlaneSensor). If a pointing-device sensor appears in the transformation hierarchy multiple times (DEF/USE), it shall be tested for activation in all of the coordinate systems in which it appears.

If a pointing-device sensor is not enabled when the pointing-device button is activated, it will not generate events related to the pointing device until after the pointing device is deactivated and the sensor is enabled (i.e., enabling a sensor in the middle of dragging does not result in the sensor activating immediately).

The Anchor node is considered to be a pointing-device sensor when trying to determine which sensor (or Anchor node) to activate. For example, a click on Shape3 is handled by SensorD, a click on Shape2 is handled by SensorC and the AnchorA, and a click on Shape1 is handled by SensorA and SensorB:

    Group {
      children [
        DEF Shape1  Shape       { ... }
        DEF SensorA TouchSensor { ... }
        DEF SensorB PlaneSensor { ... }
        DEF AnchorA Anchor {
          url "..."
          children [
            DEF Shape2  Shape { ... }
            DEF SensorC TouchSensor { ... }
            Group {
              children [
                DEF Shape3  Shape { ... }
                DEF SensorD TouchSensor { ... }
              ]
            }
          ]
        }
      ]
    }

4.6.7.4 Drag sensors

Drag sensors are a subset of pointing-device sensors. There are three types of drag sensors: CylinderSensor, PlaneSensor, and SphereSensor. Drag sensors have two eventOuts in common, trackPoint_changed and <value>_changed. These eventOuts send events for each movement of the activated pointing device according to their "virtual geometry" (e.g., cylinder for CylinderSensor). The trackPoint_changed eventOut sends the intersection point of the bearing with the drag sensor's virtual geometry. The <value>_changed eventOut sends the sum of the relative change since activation plus the sensor's offset field. The type and name of <value>_changed depends on the drag sensor type: rotation_changed for CylinderSensor, translation_changed for PlaneSensor, and rotation_changed for SphereSensor.

To simplify the application of these sensors, each node has an offset and an autoOffset exposed field. When the sensor generates events in response to the activated pointing device's motion, <value>_changed sends the sum of the relative change since the initial activation plus the offset field value. If autoOffset is TRUE when the pointing device is deactivated, the offset field is set to the sensor's last <value>_changed value and an offset_changed eventOut is sent. This enables subsequent grabbing operations to accumulate the changes. If autoOffset is FALSE, the sensor does not set the offset field value at deactivation (or any other time).
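For example (node names are illustrative), the following fragment makes a Box draggable: the PlaneSensor influences its sibling geometry, and routing its translation_changed eventOut to the parent Transform moves the geometry with the pointer. Because autoOffset defaults to TRUE, successive drags accumulate:

    DEF Dragger Transform {
      children [
        DEF Mover PlaneSensor { }
        Shape { geometry Box { } }
      ]
    }
    ROUTE Mover.translation_changed TO Dragger.set_translation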

4.6.7.5 Activating and manipulating sensors

The pointing device controls a pointer in the virtual world. While activated by the pointing device, a sensor will generate events as the pointer moves. Typically the pointing device may be categorized as either 2D (e.g., conventional mouse) or 3D (e.g., wand). It is suggested that the pointer controlled by a 2D device is mapped onto a plane a fixed distance from the viewer and perpendicular to the line of sight. The mapping of a 3D device may describe a 1:1 relationship between movement of the pointing device and movement of the pointer.

The position of the pointer defines a bearing which is used to determine which geometry is being indicated. When implementing a 2D pointing device it is suggested that the bearing is defined by the vector from the viewer position through the location of the pointer. When implementing a 3D pointing device it is suggested that the bearing is defined by extending a vector from the current position of the pointer in the direction indicated by the pointer.

In all cases the pointer is considered to be indicating a specific geometry when that geometry is intersected by the bearing. If the bearing intersects multiple sensors' geometries, only the sensor nearest to the pointer will be eligible for activation.

4.6.8 Interpolator nodes

Interpolator nodes are designed for linear keyframed animation. An interpolator node defines a piecewise-linear function, f(t), on the interval (-infinity, +infinity). The piecewise-linear function is defined by n values of t, called key, and the n corresponding values of f(t), called keyValue. The keys shall be monotonically non-decreasing, otherwise the results are undefined. The keys are not restricted to any interval.

An interpolator node evaluates f(t) given any value of t (via the set_fraction eventIn) as follows: Let the n keys t_0, t_1, t_2, ..., t_(n-1) partition the domain (-infinity, +infinity) into the n+1 subintervals given by (-infinity, t_0), [t_0, t_1), [t_1, t_2), ..., [t_(n-1), +infinity). Also, let the n values v_0, v_1, v_2, ..., v_(n-1) be the values of f(t) at the associated key values. The piecewise-linear interpolating function, f(t), is defined to be

     f(t) = v_0,                       if t <= t_0
          = v_(n-1),                   if t >= t_(n-1)
          = linterp(t, v_i, v_(i+1)),  if t_i <= t <= t_(i+1)

     where linterp(t, x, y) is the linear interpolant and i belongs to {0, 1, ..., n-2}.

The third conditional value of f(t) allows the defining of multiple values for a single key (i.e., limits from both the left and right at a discontinuity in f(t)). The first specified value is used as the limit of f(t) from the left, and the last specified value is used as the limit of f(t) from the right. The value of f(t) at a multiply defined key is indeterminate, but should be one of the associated limit values.

The following node types are interpolator nodes, each based on the type of value that is interpolated:

    ColorInterpolator, CoordinateInterpolator, NormalInterpolator,
    OrientationInterpolator, PositionInterpolator, ScalarInterpolator

All interpolator nodes share a common set of fields and semantics:

    eventIn      SFFloat      set_fraction
    exposedField MFFloat      key           [...]
    exposedField MF<type>     keyValue      [...]
    eventOut     [S|M]F<type> value_changed

The type of the keyValue field is dependent on the type of the interpolator (e.g., the ColorInterpolator's keyValue field is of type MFColor).

The set_fraction eventIn receives an SFFloat event and causes the interpolator function to evaluate, resulting in a value_changed eventOut with the same timestamp as the set_fraction event.

ColorInterpolator, OrientationInterpolator, PositionInterpolator, and ScalarInterpolator output a single-value field to value_changed. Each value in the keyValue field corresponds in order to the parameter value in the key field. Results are undefined if the number of values in the key field of an interpolator is not the same as the number of values in the keyValue field.

CoordinateInterpolator and NormalInterpolator send multiple-value results to value_changed. In this case, the keyValue field is an n×m array of values, where n is the number of values in the key field and m is the number of values at each keyframe. Each set of m values in the keyValue field corresponds, in order, to a parameter value in the key field. Each value_changed event shall contain m interpolated values. Results are undefined if the number of values in the keyValue field divided by the number of values in the key field is not a positive integer.

If an interpolator node's value eventOut is read before it receives any inputs, keyValue[0] is returned if keyValue is not empty. If keyValue is empty (i.e., [ ]), the initial value for the eventOut type is returned (e.g., (0, 0, 0) for SFVec3f); see 5, Field and event reference, for initial event values.

The location of an interpolator node in the transformation hierarchy has no effect on its operation. For example, if a parent of an interpolator node is a Switch node with whichChoice set to -1 (i.e., ignore its children), the interpolator continues to operate as specified (receives and sends events).
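The following informative example (node names are illustrative) combines these pieces into a minimal keyframed animation: a looping TimeSensor drives a PositionInterpolator's set_fraction, and the interpolator's value_changed moves a Transform back and forth along the X-axis every five seconds:

    DEF Target Transform {
      children Shape { geometry Sphere { } }
    }
    DEF Clock TimeSensor { cycleInterval 5 loop TRUE }
    DEF Path PositionInterpolator {
      key      [ 0, 0.5, 1 ]
      keyValue [ 0 0 0, 5 0 0, 0 0 0 ]
    }
    ROUTE Clock.fraction_changed TO Path.set_fraction
    ROUTE Path.value_changed     TO Target.set_translation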

4.6.9 Time-dependent nodes

AudioClip, MovieTexture, and TimeSensor are time-dependent nodes that activate and deactivate themselves at specified times. Each of these nodes contains the exposedFields startTime, stopTime, and loop, and the eventOut isActive. The values of the exposedFields are used to determine when the node becomes active or inactive. Also, under certain conditions, these nodes ignore events to some of their exposedFields. A node ignores an eventIn by not accepting the new value and not generating an eventOut_changed event. In this subclause, an abstract time-dependent node can be any one of AudioClip, MovieTexture, or TimeSensor.

Time-dependent nodes can execute for 0 or more cycles. A cycle is defined by field data within the node. If, at the end of a cycle, the value of loop is FALSE, execution is terminated (see below for events at termination). Conversely, if loop is TRUE at the end of a cycle, a time-dependent node continues execution into the next cycle. A time-dependent node with loop TRUE at the end of every cycle continues cycling forever if startTime >= stopTime, or until stopTime if startTime < stopTime.

A time-dependent node generates an isActive TRUE event when it becomes active and generates an isActive FALSE event when it becomes inactive. These are the only times at which an isActive event is generated. In particular, isActive events are not sent at each tick of a simulation.

A time-dependent node is inactive until its startTime is reached. When time now becomes greater than or equal to startTime, an isActive TRUE event is generated and the time-dependent node becomes active (now refers to the time at which the browser is simulating and displaying the virtual world). When a time-dependent node is read from a VRML file and the ROUTEs specified within the VRML file have been established, the node should determine if it is active and, if so, generate an isActive TRUE event and begin generating any other necessary events. However, if a node would have become inactive at any time before the reading of the VRML file, no events are generated upon the completion of the read.

An active time-dependent node will become inactive when stopTime is reached if stopTime > startTime. The value of stopTime is ignored if stopTime <= startTime. Also, an active time-dependent node will become inactive at the end of the current cycle if loop is FALSE. If an active time-dependent node receives a set_loop FALSE event, execution continues until the end of the current cycle or until stopTime (if stopTime > startTime), whichever occurs first. The termination at the end of cycle can be overridden by a subsequent set_loop TRUE event.

Any set_startTime events to an active time-dependent node are ignored. Any set_stopTime event where stopTime <= startTime sent to an active time-dependent node is also ignored. A set_stopTime event where startTime < stopTime <= now sent to an active time-dependent node results in events being generated as if stopTime has just been reached. That is, final events, including an isActive FALSE, are generated and the node becomes inactive. The stopTime_changed event will have the set_stopTime value. Other final events are node-dependent (cf. TimeSensor).

A time-dependent node may be restarted while it is active by sending a set_stopTime event equal to the current time (which will cause the node to become inactive) and a set_startTime event setting it to the current time or any time in the future. These events will have the same timestamp and should be processed as set_stopTime, then set_startTime, to produce the correct behaviour.

The default values for each of the time-dependent nodes are specified such that any node with default values is already inactive (and, therefore, will generate no events upon loading). A time-dependent node can be defined such that it will be active upon reading by specifying loop TRUE. This use of a non-terminating time-dependent node should be used with caution since it incurs continuous overhead on the simulation.

Figure 4.2 illustrates the behaviour of several common cases of time-dependent nodes. In each case, the initial conditions of startTime, stopTime, loop, and the time-dependent node's cycle interval are labelled; the red region denotes the time period during which the time-dependent node is active; the arrows represent eventIns received by and eventOuts sent by the time-dependent node; and the horizontal axis represents time.

Figure 4.2 -- Examples of time-dependent node execution

4.6.10 Bindable children nodes

The Background, Fog, NavigationInfo, and Viewpoint nodes have the unique behaviour that only one of each type can be bound (i.e., affecting the user's experience) at any instant in time. The browser shall maintain an independent, separate stack for each type of bindable node. Each of these nodes includes a set_bind eventIn and an isBound eventOut. The set_bind eventIn is used to move a given node to and from its respective top of stack. A TRUE value sent to the set_bind eventIn moves the node to the top of the stack; sending a FALSE value removes it from the stack. The isBound event is output when a given node is:

  1. moved to the top of the stack;
  2. removed from the top of the stack;
  3. pushed down from the top of the stack by another node being placed on top.

That is, isBound events are sent when a given node becomes, or ceases to be, the active node. The node at the top of the stack (the most recently bound node) is the active node for its type and is used by the browser to set the world state. If the stack is empty (i.e., either the VRML file has no bindable nodes for a given type or the stack has been popped until empty), the default field values for that node type are used to set the world state. The results are undefined if a multiply instanced (DEF/USE) bindable node is bound.

The following rules describe the behaviour of the binding stack for a node of type <bindable node> (Background, Fog, NavigationInfo, or Viewpoint):

  1. During read, the first encountered <bindable node> is bound by pushing it to the top of the <bindable node> stack. Nodes contained within Inlines, within the strings passed to the Browser.createVrmlFromString() method, or within VRML files passed to the Browser.createVrmlFromURL() method (see 4.12.10, Browser script interface) are not candidates for the first encountered <bindable node>. The first node within a prototype instance is a valid candidate for the first encountered <bindable node>. The first encountered <bindable node> sends an isBound TRUE event.
  2. When a set_bind TRUE event is received by a <bindable node>,
    1. If it is not on the top of the stack: the current top of stack node sends an isBound FALSE event. The new node is moved to the top of the stack and becomes the currently bound <bindable node>. The new <bindable node> (top of stack) sends an isBound TRUE event.
    2. If the node is already at the top of the stack, this event has no effect.
  3. When a set_bind FALSE event is received by a <bindable node> in the stack, it is removed from the stack. If it was on the top of the stack,
    1. it sends an isBound FALSE event;
    2. the next node in the stack becomes the currently bound <bindable node> (i.e., pop) and issues an isBound TRUE event.
  4. If a set_bind FALSE event is received by a node not in the stack, the event is ignored and isBound events are not sent.
  5. When a node replaces another node at the top of the stack, the isBound TRUE and FALSE eventOuts from the two nodes are sent simultaneously (i.e., with identical timestamps).
  6. If a bound node is deleted, it behaves as if it received a set_bind FALSE event (see rule 3 above).
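
For example, the following non-normative sketch (node names are illustrative) pushes a Viewpoint node onto its binding stack while the user presses a box:

    DEF Overhead Viewpoint { position 0 10 0 }
    Group {
      children [
        DEF Touch TouchSensor { }
        Shape { geometry Box { } }
      ]
    }
    # Pressing the box sends TRUE to set_bind: Overhead is pushed onto the
    # Viewpoint stack and sends isBound TRUE. Releasing sends FALSE:
    # Overhead is removed from the stack and sends isBound FALSE.
    ROUTE Touch.isActive TO Overhead.set_bind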

4.6.11 Texture maps

4.6.11.1 Texture map formats

Four node types specify texture maps: Background, ImageTexture, MovieTexture, and PixelTexture. In all cases, texture maps are defined by 2D images that contain an array of colour values describing the texture. The texture map values are interpreted differently depending on the number of components in the texture map and the specifics of the image format. In general, texture maps may be described using one of the following forms:

  1. Intensity textures (one-component)
  2. Intensity plus alpha opacity textures (two-component)
  3. Full RGB textures (three-component)
  4. Full RGB plus alpha opacity textures (four-component)

Note that most image formats specify an alpha opacity, not transparency (where alpha = 1 - transparency).

See Table 4.5 and Table 4.6 for a description of how the various texture types are applied.
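
As a non-normative illustration of texture components, the following sketch defines a 2×2 two-component (intensity plus alpha) texture with a PixelTexture node; each pixel value packs an intensity byte followed by an alpha byte:

    Shape {
      appearance Appearance {
        texture PixelTexture {
          # width height components, then one value per pixel (0xIIAA)
          image 2 2 2  0xFFFF 0xFF80 0x80FF 0x8080
        }
      }
      geometry Box { }
    }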

4.6.11.2 Texture map image formats

Texture nodes that require support for the PNG (see 2.[PNG]) image format (6.5, Background, and 6.22, ImageTexture) shall interpret the PNG pixel formats in the following way:

  1. Greyscale pixels without alpha or simple transparency are treated as intensity textures.
  2. Greyscale pixels with alpha or simple transparency are treated as intensity plus alpha textures.
  3. RGB pixels without alpha channel or simple transparency are treated as full RGB textures.
  4. RGB pixels with alpha channel or simple transparency are treated as full RGB plus alpha textures.

If the image specifies colours as indexed-colour (i.e., palettes or colourmaps), the following semantics should be used (note that 'greyscale' refers to a palette entry with equal red, green, and blue values):

  1. If all the colours in the palette are greyscale and there is no transparency chunk, it is treated as an intensity texture.
  2. If all the colours in the palette are greyscale and there is a transparency chunk, it is treated as an intensity plus opacity texture.
  3. If any colour in the palette is not grey and there is no transparency chunk, it is treated as a full RGB texture.
  4. If any colour in the palette is not grey and there is a transparency chunk, it is treated as a full RGB plus alpha texture.

Texture nodes that require support for JPEG files (see 2.[JPEG], 6.5, Background, and 6.22, ImageTexture) shall interpret JPEG files as follows:

  1. Greyscale files (number of components equals 1) are treated as intensity textures.
  2. YCbCr files are treated as full RGB textures.
  3. No other JPEG file types are required. It is recommended that other JPEG files are treated as full RGB textures.

Texture nodes that support MPEG files (see 2.[MPEG] and 6.28, MovieTexture) shall treat MPEG files as full RGB textures.

Texture nodes that recommend support for GIF files (see E.[GIF], 6.5, Background, and 6.22, ImageTexture) shall follow the applicable semantics described above for the PNG format.

--- VRML separator bar ---

4.7 Field, eventIn, and eventOut semantics

Fields are placed inside node statements in a VRML file, and define the persistent state of the virtual world. Results are undefined if multiple values for the same field are declared in the same node (e.g., Sphere { radius 1.0 radius 2.0 }).

EventIns and eventOuts define the types and names of events that each type of node may receive or generate. Events are transient and event values are not written to VRML files. Each node interprets the values of the events sent to it or generated by it according to its implementation.

Field, eventIn, and eventOut types, and field encoding syntax, are described in 5, Field and event reference.

An exposedField can receive events like an eventIn, can generate events like an eventOut, and can be stored in VRML files like a field. An exposedField named zzz can be referred to as 'set_zzz' and treated as an eventIn, and can be referred to as 'zzz_changed' and treated as an eventOut. The initial value of an exposedField is its value in the VRML file, or the default value for the node in which it is contained, if a value is not specified. When an exposedField receives an event it shall generate an event with the same value and timestamp. The following sources, in precedence order, shall be used to determine the initial value of the exposedField:

  1. the user-defined value in the instantiation (if one is specified);
  2. the default value for that field as specified in the node or prototype definition.
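
For example, the translation exposedField of a Transform node may be treated as both an eventIn and an eventOut; the following non-normative sketch (node names are illustrative) routes into set_translation and out of translation_changed:

    DEF Clock TimeSensor { cycleInterval 2 loop TRUE }
    DEF Mover PositionInterpolator {
      key      [ 0, 1 ]
      keyValue [ 0 0 0, 5 0 0 ]
    }
    DEF Target Transform { children [ Shape { geometry Box { } } ] }
    DEF Shadow Transform { children [ Shape { geometry Sphere { } } ] }
    ROUTE Clock.fraction_changed     TO Mover.set_fraction
    # translation accepts set_translation events and, having received one,
    # generates a translation_changed event with the same value and timestamp.
    ROUTE Mover.value_changed        TO Target.set_translation
    ROUTE Target.translation_changed TO Shadow.set_translation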

The rules for naming fields, exposedFields, eventOuts, and eventIns for the built-in nodes are as follows:

  1. All names containing multiple words start with a lower case letter, and the first letter of all subsequent words is capitalized (e.g., addChildren), with the exception of set_ and _changed, as described below.
  2. All eventIns have the prefix "set_", with the exception of the addChildren and removeChildren eventIns.
  3. Certain eventIns and eventOuts of type SFTime do not use the "set_" prefix or "_changed" suffix.
  4. All other eventOuts have the suffix "_changed" appended, with the exception of eventOuts of type SFBool. Boolean eventOuts begin with the word "is" (e.g., isFoo) for better readability.

--- VRML separator bar ---

4.8 Prototype semantics

4.8.1 Introduction

The PROTO statement defines a new node type in terms of already defined (built-in or prototyped) node types. Once defined, prototyped node types may be instantiated in the scene graph exactly like the built-in node types.

Node type names shall be unique in each VRML file. The results are undefined if a prototype is given the same name as a built-in node type or a previously defined prototype in the same scope.

4.8.2 PROTO interface declaration semantics

The prototype interface defines the fields, eventIns, and eventOuts for the new node type. The interface declaration includes the types and names for the eventIns and eventOuts of the prototype, as well as the types, names, and default values for the prototype's fields.

The interface declaration may contain exposedField declarations, which are a convenient way of defining a field, eventIn, and eventOut at the same time. If an exposedField named zzz is declared, it is equivalent to declaring a field named zzz, an eventIn named set_zzz, and an eventOut named zzz_changed.

Each prototype instance can be considered to be a complete copy of the prototype, with its own fields, events, and copy of the prototype definition. A prototyped node type is instantiated using standard node syntax. For example, the following prototype (which has an empty interface declaration):

    PROTO Cube [ ] { Box { } }

may be instantiated as follows:

    Shape { geometry Cube { } }

It is recommended that user-defined field or event names defined in PROTO interface declarations follow the naming conventions described in 4.7, Field, eventIn, and eventOut semantics.

If an eventOut in the prototype declaration is associated with an exposedField in the prototype definition, the initial value of the eventOut shall be the initial value of the exposedField. If the eventOut is associated with multiple exposedFields, the results are undefined.

4.8.3 PROTO definition semantics

A prototype definition consists of one or more nodes, nested PROTO statements, and ROUTE statements. The first node type determines how instantiations of the prototype can be used in a VRML file. An instantiation is created by filling in the parameters of the prototype declaration and inserting copies of the first node (and its scene graph) wherever the prototype instantiation occurs. For example, if the first node in the prototype definition is a Material node, instantiations of the prototype can be used wherever a Material node can be used. Any other nodes and accompanying scene graphs are not part of the transformation hierarchy, but may be referenced by ROUTE statements or Script nodes in the prototype definition.

Nodes in the prototype definition may have their fields, eventIns, or eventOuts associated with the fields, eventIns, and eventOuts of the prototype interface declaration. This is accomplished using IS statements in the body of the node. When prototype instances are read from a VRML file, field values for the fields of the prototype interface may be given. If given, the field values are used for all nodes in the prototype definition that have IS statements for those fields. Similarly, when a prototype instance is sent an event, the event is delivered to all nodes that have IS statements for that event. When a node in a prototype instance generates an event that has an IS statement, the event is sent to any eventIns connected (via ROUTE) to the prototype instance's eventOut.

IS statements may appear inside the prototype definition wherever fields may appear. IS statements shall refer to fields or events defined in the prototype declaration. Results are undefined if an IS statement refers to a non-existent declaration. Results are undefined if the type of the field or event being associated by the IS statement does not match the type declared in the prototype's interface declaration. For example, it is illegal to associate an SFColor with an SFVec3f. It is also illegal to associate an SFColor with an MFColor or vice versa.
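
For example, the following non-normative sketch (the prototype name and interface members are illustrative) associates an interface exposedField and field with like-typed members of the definition:

    PROTO ColouredSphere [
      exposedField SFColor colour 1 0 0
      field        SFFloat radius 1.0
    ] {
      Shape {
        appearance Appearance {
          material Material { diffuseColor IS colour }   # exposedField IS exposedField
        }
        geometry Sphere { radius IS radius }             # field IS field
      }
    }
    ColouredSphere { colour 0 0 1  radius 2 }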

Results are undefined if an IS statement associates a field, eventIn, eventOut, or exposedField of a node in the prototype definition with an interface member whose class is not a legal mapping (see Table 4.4).

An exposedField in the prototype interface may be associated only with an exposedField in the prototype definition, but an exposedField in the prototype definition may be associated with either a field, eventIn, eventOut, or exposedField in the prototype interface. When associating an exposedField in a prototype definition with an eventIn or eventOut in the prototype declaration, it is valid to use either the shorthand exposedField name (e.g., translation) or the explicit event name (e.g., set_translation or translation_changed). Table 4.4 defines the rules for mapping between the prototype declarations and the primary scene graph's nodes (yes denotes a legal mapping, no denotes an error).

Table 4.4 -- Rules for mapping PROTOTYPE declarations to node instances

                       Prototype declaration
Prototype definition   exposedField   field   eventIn   eventOut
exposedField           yes            yes     yes       yes
field                  no             yes     no        no
eventIn                no             no      yes       no
eventOut               no             no      no        yes

Results are undefined if a field, eventIn, or eventOut of a node in the prototype definition is associated with more than one field, eventIn, or eventOut in the prototype's interface (i.e., multiple IS statements for a single field, eventIn, or eventOut in a node in the prototype definition), but multiple IS statements for the fields, eventIns, and eventOuts in the prototype interface declaration are valid. Results are undefined if a field of a node in a prototype definition is both given an initial value (i.e., by a field statement) and associated by an IS statement with a field in the prototype's interface. If a prototype interface has an eventOut E associated with multiple eventOuts EDi in the prototype definition, the value of E is the value of the eventOut that generated the event with the greatest timestamp. If two or more of the eventOuts generated events with identical timestamps, results are undefined.

4.8.4 Prototype scoping rules

Prototype definitions appearing inside a prototype definition (i.e., nested) are local to the enclosing prototype. IS statements inside a nested prototype's implementation may refer to the prototype declarations of the innermost prototype.

A PROTO statement establishes a DEF/USE name scope separate from the rest of the scene and separate from any nested PROTO statements. Nodes given a name by a DEF construct inside the prototype may not be referenced in a USE construct outside of the prototype's scope. Nodes given a name by a DEF construct outside the prototype scope may not be referenced in a USE construct inside the prototype scope.

A prototype may be instantiated in a file anywhere after the completion of the prototype definition. A prototype may not be instantiated inside its own implementation (i.e., recursive prototypes are illegal).

--- VRML separator bar ---

4.9 External prototype semantics

4.9.1 Introduction

The EXTERNPROTO statement defines a new node type. It is equivalent to the PROTO statement, with two exceptions. First, the implementation of the node type is stored externally, either in a VRML file containing an appropriate PROTO statement or using some other implementation-dependent mechanism. Second, default values for fields are not given since the implementation will define appropriate defaults.

4.9.2 EXTERNPROTO interface semantics

The semantics of the EXTERNPROTO are exactly the same as for a PROTO statement, except that default field and exposedField values are not specified locally. In addition, events sent to an instance of an externally prototyped node may be ignored until the implementation of the node is found.

Until the definition has been loaded, the browser shall determine the initial value of exposedFields using the following rules (in order of precedence):

  1. the user-defined value in the instantiation (if one is specified);
  2. the default value for that field type.

For eventOuts, the initial value on startup will be the default value for that field type. During the loading of an EXTERNPROTO, if an initial value of an eventOut is found, that value is applied to the eventOut and no event is generated.

The names and types of the fields, exposedFields, eventIns, and eventOuts of the interface declaration shall be a subset of those defined in the implementation. Declaring a field or event with a non-matching name is an error, as is declaring a field or event with a matching name but a different type.

It is recommended that user-defined field or event names defined in EXTERNPROTO interface statements follow the naming conventions described in 4.7, Field, eventIn, and eventOut semantics.

4.9.3 EXTERNPROTO URL semantics

The string or strings specified after the interface declaration give the location of the prototype's implementation. If multiple strings are specified, the browser searches in the order of preference (see 4.5.2, URLs).

If a URL in an EXTERNPROTO statement refers to a VRML file, the first PROTO statement found in the VRML file (excluding EXTERNPROTOs) is used to define the external prototype's definition. The name of that prototype does not need to match the name given in the EXTERNPROTO statement. Results are undefined if a URL in an EXTERNPROTO statement refers to a non-VRML file.

To enable the creation of libraries of reusable PROTO definitions, browsers shall recognize EXTERNPROTO URLs that end with "#name" to mean the PROTO statement for "name" in the given VRML file. For example, a library of standard materials might be stored in a VRML file called "materials.wrl" that looks like:

    #VRML V2.0 utf8
    PROTO Gold   [] { Material { ... } }
    PROTO Silver [] { Material { ... } }
    ...etc.

A material from this library could be used as follows:

    #VRML V2.0 utf8
    EXTERNPROTO GoldFromLibrary [] "http://.../materials.wrl#Gold"
    ...
    Shape {
        appearance Appearance { material GoldFromLibrary {} }
        geometry   ...
    }
    ...

--- VRML separator bar ---

4.10 Event processing

4.10.1 Introduction

Most node types have at least one eventIn definition and thus can receive events. Incoming events are data messages sent by other nodes to change some state within the receiving node. Some nodes also have eventOut definitions; these are used to send data messages to destination nodes, indicating that some state has changed within the source node.

If an eventOut is read before it has sent any events, the initial value as specified in 5, Field and event reference, for each field/event type is returned.

4.10.2 Route semantics

The connection between the node generating the event and the node receiving the event is called a route. Routes are not nodes. The ROUTE statement is a construct for establishing event paths between nodes. ROUTE statements may appear at the top level of a VRML file, in a prototype definition, or inside a node wherever fields may appear. Nodes referenced in a ROUTE statement shall be defined before the ROUTE statement.

The types of the eventIn and the eventOut shall match exactly. For example, it is illegal to route from an SFFloat to an SFInt32 or from an SFFloat to an MFFloat.

Routes may be established only from eventOuts to eventIns. For convenience, when routing to or from an eventIn or eventOut (or the eventIn or eventOut part of an exposedField), the set_ or _changed part of the event's name is optional. If the browser is trying to establish a ROUTE to an eventIn named zzz and an eventIn of that name is not found, the browser shall then try to establish the ROUTE to the eventIn named set_zzz. Similarly, if establishing a ROUTE from an eventOut named zzz and an eventOut of that name is not found, the browser shall try to establish the ROUTE from zzz_changed.

Redundant routing is ignored. If a VRML file repeats a routing path, the second and subsequent identical routes are ignored. This also applies for routes created dynamically via a scripting language supported by the browser.
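
The following non-normative sketch (node names are illustrative) shows this name resolution; both ROUTE statements resolve to the same event path, so the second is ignored as redundant:

    DEF Press TouchSensor { }
    DEF Lamp  PointLight  { on FALSE }
    ROUTE Press.isActive TO Lamp.set_on
    ROUTE Press.isActive TO Lamp.on    # resolves to set_on; redundant, ignored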

4.10.3 Execution model

Once a sensor or Script has generated an initial event, the event is propagated from the eventOut producing the event along any ROUTEs to other nodes. These other nodes may respond by generating additional events, continuing until all routes have been honoured. This process is called an event cascade. All events generated during a given event cascade are assigned the same timestamp as the initial event, since all are considered to happen instantaneously.

Some sensors generate multiple events simultaneously. Similarly, it is possible that asynchronously generated events could arrive at the identical time as one or more sensor-generated events. In these cases, all events generated are part of the same initial event cascade and each event has the same timestamp.

After all events of the initial event cascade are honoured, post-event processing performs actions stimulated by the event cascade. The entire sequence of events occurring at a single timestamp is as follows:

  1. Perform event cascade evaluation.
  2. Call shutdown() on scripts that have received set_url events or are being removed from the scene.
  3. Send final events from environmental sensors being removed from the transformation hierarchy.
  4. Add or remove routes specified in addRoute() or deleteRoute() from any script execution in the preceding event cascade.
  5. Call eventsProcessed() for scripts that have sent events in the just-ended event cascade.
  6. Send initial events from any dynamically created environmental sensors.
  7. Call initialize() of newly loaded script code.
  8. If any events were generated from steps 2 through 7, go to step 2 and continue.

Figure 4.3 provides a conceptual illustration of the execution model.

Figure 4.3 -- Conceptual execution model

Nodes that contain eventOuts or exposedFields shall produce at most one event per timestamp. If a field is connected to another field via a ROUTE, an implementation shall send only one event per ROUTE per timestamp. This also applies to scripts where the rules for determining the appropriate action for sending eventOuts are defined in 4.12.9.3, Sending eventOuts.

D.19, Execution model, provides an example that demonstrates the execution model. Figure 4.4 illustrates event processing for a single timestamp in the example of D.19, Execution model:


Figure 4.4 -- Example D.19, event processing order

 
In Figure 4.4, arrows coming out of a script at ep are events generated during the eventsProcessed() call for the script. The other arrows are events sent during an eventIn method. One possible compliant order of execution is as follows:

  1. User activates TouchSensor
  2. Run initial event cascade (step 1)
    1. Script 1 runs, generates an event for Script 2
    2. Script 2 runs
    3. end of initial event cascade
  3. Execute eventsProcessed calls (step 5)
    1. eventsProcessed for Script 1 runs, sends event to Script 3
    2. Script 3 runs, generates events for Script 5
    3. Script 5 runs
    4. eventsProcessed for Script 2 runs, sends events to Script 4
    5. Script 4 runs
    6. end of eventsProcessed processing
  4. Go to step 2 for generated events (step 8)
  5. Execute eventsProcessed calls (step 5)
    1. eventsProcessed for Script 3 runs, sends event to Script 6
    2. Script 6 runs, sends event to Script 7
    3. Script 7 runs
    4. eventsProcessed for Script 4 runs, does not generate any events
    5. eventsProcessed for Script 5 runs, does not generate any events
    6. end of eventsProcessed processing
  6. Go to step 2 for generated events (step 8)
  7. Execute eventsProcessed calls (step 5)
    1. eventsProcessed for Script 6 runs, does not generate any events
    2. eventsProcessed for Script 7 runs, does not generate any events
    3. end of eventsProcessed processing
  8. No more events to handle.

The above is not the only possible compliant order of execution. If multiple eventsProcessed() methods are pending when step 5 is executed, the order in which these methods are called is not defined. For instance, in the third step of the example, the eventsProcessed method is pending for both Script 1 and Script 2. The order of execution in this case is not defined, so executing the eventsProcessed method of Script 2 before that of Script 1 would have been compliant. However, executing the eventsProcessed method for Script 3 before that of Script 2 would not have been compliant because any methods made pending during processing must wait until the next iteration of the event cascade for execution.

4.10.4 Loops

Event cascades may contain loops where an event E is routed to a node that generates an event that eventually results in E being generated again. See 4.10.3, Execution model, for the loop breaking rule that limits each eventOut to one event per timestamp. This rule shall also be used to break loops created by cyclic dependencies between different sensor nodes.

4.10.5 Fan-in and fan-out

Fan-in occurs when two or more routes write to the same eventIn. Events coming into an eventIn from different eventOuts with the same timestamp shall be processed, but the order of evaluation is implementation dependent.

Fan-out occurs when one eventOut routes to two or more eventIns. This results in sending any event generated by the eventOut to all of the eventIns.
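
The following non-normative sketch (node names are illustrative) shows both patterns:

    DEF Touch1 TouchSensor { }
    DEF Touch2 TouchSensor { }
    DEF Lamp1  PointLight  { }
    DEF Lamp2  PointLight  { }
    # Fan-in: Touch1 and Touch2 both write to Lamp1.set_on
    ROUTE Touch1.isActive TO Lamp1.set_on
    ROUTE Touch2.isActive TO Lamp1.set_on
    # Fan-out: Touch1.isActive also drives Lamp2.set_on
    ROUTE Touch1.isActive TO Lamp2.set_on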

--- VRML separator bar ---

4.11 Time

4.11.1 Introduction

The browser controls the passage of time in a world by causing TimeSensors to generate events as time passes. Specialized browsers or authoring applications may cause time to pass more quickly or slowly than in the real world, but typically the times generated by TimeSensors will approximate "real" time. A world's creator should make no assumptions about how often a TimeSensor will generate events but can safely assume that each time event generated will have a timestamp greater than any previous time event.

4.11.2 Time origin

Time (0.0) is equivalent to 00:00:00 GMT January 1, 1970. Absolute times are specified in SFTime or MFTime fields as double-precision floating point numbers representing seconds. Negative absolute times are interpreted as happening before 1970.

Processing an event with timestamp t may only result in generating events with timestamps greater than or equal to t.

4.11.3 Discrete and continuous changes

ISO/IEC 14772 does not distinguish between discrete events (such as those generated by a TouchSensor) and events that are the result of sampling a conceptually continuous set of changes (such as the fraction events generated by a TimeSensor). An ideal VRML implementation would generate an infinite number of samples for continuous changes, each of which would be processed infinitely quickly.

Before processing a discrete event, all continuous changes that are occurring at the discrete event's timestamp shall behave as if they generate events at that same timestamp.

Beyond the requirement that continuous changes be up-to-date during the processing of discrete changes, the sampling frequency of continuous changes is implementation dependent. Typically, a TimeSensor affecting a visible (or otherwise perceptible) portion of the world will generate events once per frame, where a frame is a single rendering of the world or one time-step in a simulation.

--- VRML separator bar ---

4.12 Scripting

4.12.1 Introduction

Authors often require that VRML worlds change dynamically in response to user inputs, external events, and the current state of the world. The proposition "if the vault is currently closed AND the correct combination is entered, open the vault" illustrates the type of problem that may need to be addressed. Such decisions are expressed as Script nodes (see 6.40, Script) that receive events from other nodes, process them, and send events to other nodes. A Script node can also keep track of information between subsequent executions (i.e., retaining internal state over time).

This subclause describes the general mechanisms and semantics of all scripting language access protocols. Note that no scripting language is required by ISO/IEC 14772. Details for two scripting languages are given in annex B, Java platform scripting reference, and annex C, ECMAScript scripting reference. If either of these scripting languages is implemented, the Script node implementation shall conform to the definition described in the corresponding annex.

Event processing is performed by a program or script contained in (or referenced by) the Script node's url field. This program or script may be written in any programming language that the browser supports.

4.12.2 Script execution

A Script node is activated when it receives an event. The browser shall then execute the program in the Script node's url field (passing the program to an external interpreter if necessary). The program can perform a wide variety of actions including sending out events (and thereby changing the scene), performing calculations, and communicating with servers elsewhere on the Internet. A detailed description of the ordering of event processing is contained in 4.10, Event processing.

Script nodes may also be executed after they are created (see 4.12.3, Initialize() and shutdown()). Some scripting languages may allow the creation of separate processes from scripts, resulting in continuous execution (see 4.12.6, Asynchronous scripts).

Script nodes receive events in timestamp order. Any events generated as a result of processing an event are given timestamps corresponding to the event that generated them. Conceptually, it takes no time for a Script node to receive and process an event, even though in practice it does take some amount of time to execute a Script.

When a set_url event is received by a Script node that contains a script that has been previously initialized for a different URL, the shutdown() method of the current script is called (see 4.12.3, Initialize() and shutdown()). Until the new script becomes available, the script shall behave as though it has no executable content. When the new script becomes available, the initialize() method is invoked as defined in 4.10.3, Execution model. The limiting case is when the URL contains inline code that can be executed immediately upon receipt of the set_url event (e.g., the javascript: protocol). In this case, it can be assumed that the old code is unloaded and the new code loaded instantaneously, after any dynamic route requests have been performed.

4.12.3 Initialize() and shutdown()

The scripting language binding may define an initialize() method. This method shall be invoked before the browser presents the world to the user and before any events are processed by any nodes in the same VRML file as the Script node containing this script. Events generated by the initialize() method shall have timestamps less than any other events generated by the Script node. This allows script initialization tasks to be performed prior to the user interacting with the world.

Likewise, the scripting language binding may define a shutdown() method. This method shall be invoked when the corresponding Script node is deleted or the world containing the Script node is unloaded or replaced by another world. This method may be used as a clean-up operation, such as informing external mechanisms to remove temporary files. No other methods of the script may be invoked after the shutdown() method has completed, though the shutdown() method may invoke methods or send events while shutting down. Events generated by the shutdown() method that are routed to nodes that are being deleted by the same action that caused the shutdown() method to execute will not be delivered. The deletion of the Script node containing the shutdown() method is not complete until the execution of its shutdown() method is complete.

4.12.4 eventsProcessed()

The scripting language binding may define an eventsProcessed() method that is called after one or more events are received. This method allows Scripts that do not rely on the order of events received to generate fewer events than an equivalent Script that generates events whenever events are received. If it is used in some other time-dependent way, eventsProcessed() may be nondeterministic, since different browser implementations may call eventsProcessed() at different times.

For a single event cascade, a given Script node's eventsProcessed method shall be called at most once. Events generated from an eventsProcessed() method are given the timestamp of the last event processed.

4.12.5 Scripts with direct outputs

Scripts that have access to other nodes (via SFNode/MFNode fields or eventIns) and that have their directOutput field set to TRUE may directly post eventIns to those nodes. They may also read the last value sent from any of the node's eventOuts.

When setting a value in another node, implementations are free to either immediately set the value or to defer setting the value until the Script is finished. When getting a value from another node, the value returned shall be up-to-date; that is, it shall be the value immediately before the time of the current timestamp (the current timestamp returned is the timestamp of the event that caused the Script node to execute).

If multiple directOutput Scripts read from and/or write to the same node, the results are undefined.

4.12.6 Asynchronous scripts

Some languages supported by VRML browsers may allow Script nodes to spontaneously generate events, allowing users to create Script nodes that function like new Sensor nodes. In these cases, the Script generates the initial events that cause the event cascade, and the scripting language and/or the browser shall determine an appropriate timestamp for that initial event. Such events are then sorted into the event stream and processed like any other event, following all of the same rules, including those for looping.

4.12.7 Script languages

The Script node's url field may specify a URL that refers to a file (e.g., using protocol http:) or that incorporates scripting language code directly in-line. The MIME-type of the returned data defines the language type. Additionally, instructions can be included in-line using the scripting language protocols defined for the specific language (see 4.5.4, Scripting language protocols), from which the language type is inferred.

For example, the following Script node has one eventIn named start and three different URL values specified in the url field: Java, ECMAScript, and inline ECMAScript:

    Script {
      eventIn SFBool start
      url [ "http://foo.com/fooBar.class",
            "http://foo.com/fooBar.js",
            "javascript:function start(value, timestamp) { ... }"
      ]
    }

In the above example, when a start eventIn is received by the Script node, one of the scripts found in the url field is executed. The Java platform bytecode is the first choice, the ECMAScript code is the second choice, and the inline ECMAScript code is the third choice. A description of the order of preference for multiple-valued URL fields may be found in 4.5.2, URLs.

4.12.8 EventIn handling

Events received by the Script node are passed to the appropriate scripting language method in the script. The method's name depends on the language type used. In some cases, it is identical to the name of the eventIn; in others, it is a general callback method for all eventIns (see the scripting language annexes for details). The method is passed two arguments: the event value and the event timestamp.

4.12.9 Accessing fields and events

The fields, eventIns, and eventOuts of a Script node are accessible from scripting language methods. Events can be routed to eventIns of Script nodes and the eventOuts of Script nodes can be routed to eventIns of other nodes. Another Script node with access to this node can access the eventIns and eventOuts just like any other node (see 4.12.5, Scripts with direct outputs).

It is recommended that user-defined field or event names defined in Script nodes follow the naming conventions described in 4.7, Field, eventIn, and eventOut semantics.

4.12.9.1 Accessing fields and eventOuts of the script

Fields defined in the Script node are available to the script through a language-specific mechanism (e.g., a variable is automatically defined for each field and event of the Script node). The field values can be read or written and are persistent across method calls. EventOuts defined in the Script node may also be read; the returned value is the last value sent to that eventOut.

4.12.9.2 Accessing eventIns and eventOuts of other nodes

The script can access any eventIn or eventOut of any node to which it has access. The syntax of this mechanism is language dependent. The following example illustrates how a Script node accesses and modifies an exposed field of another node (i.e., sends a set_translation eventIn to the Transform node) using ECMAScript:

    DEF SomeNode Transform { }
    Script {
      field   SFNode  tnode USE SomeNode
      eventIn SFVec3f pos
      directOutput TRUE
      url "javascript:
        function pos(value, timestamp) {
          tnode.set_translation = value;
        }"
    }

The language-dependent mechanism for accessing eventIns or eventOuts (or the eventIn or eventOut part of an exposedField) shall support accessing them without their "set_" or "_changed" prefix or suffix, to match the ROUTE statement semantics. When accessing an eventIn named "zzz" and an eventIn of that name is not found, the browser shall try to access the eventIn named "set_zzz". Similarly, if accessing an eventOut named "zzz" and an eventOut of that name is not found, the browser shall try to access the eventOut named "zzz_changed".

4.12.9.3 Sending eventOuts

Each scripting language provides a mechanism for allowing scripts to send a value through an eventOut defined by the Script node. For example, one scripting language may define an explicit method for sending each eventOut, while another may use assignment to automatically defined eventOut variables to implicitly send the eventOut. Sending multiple values through an eventOut during a single script execution will result in the "last" event being sent, where "last" is determined by the semantics of the scripting language being used.
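
For instance, in the ECMAScript binding (see annex C), assignment to an automatically defined eventOut variable sends the event; in the following non-normative sketch (names are illustrative), only the value of the last assignment is sent:

    Script {
      eventIn  SFBool  touched
      eventOut SFColor colour_changed
      url "javascript:
        function touched(value, timestamp) {
          colour_changed = new SFColor(1, 0, 0);
          colour_changed = new SFColor(0, 1, 0);  // only this last value is sent
        }"
    }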

4.12.10 Browser script interface

4.12.10.1 Introduction

The browser interface provides a mechanism for scripts contained by Script nodes to get and set browser state (e.g., the URL of the current world). This subclause describes the semantics of methods that the browser interface supports. An arbitrary syntax is used to define the type of parameters and returned values. The specific annex for a language contains the actual syntax required. In this abstract syntax, types are given as VRML field types. Mapping of these types into those of the underlying language (as well as any type conversion needed) is described in the appropriate language annex.

4.12.10.2 SFString getName( ) and SFString getVersion( )

The getName() and getVersion() methods return a string representing the "name" and "version" of the browser currently in use. These values are defined by the browser writer, and identify the browser in some (unspecified) way. They are not guaranteed to be unique or to adhere to any particular format and are for information only. If the information is unavailable these methods return empty strings.

4.12.10.3 SFFloat getCurrentSpeed( )

The getCurrentSpeed() method returns the average navigation speed for the currently bound NavigationInfo node in meters per second, in the coordinate system of the currently bound Viewpoint node. If speed of motion is not meaningful in the current navigation type, or if the speed cannot be determined for some other reason, 0.0 is returned.

4.12.10.4 SFFloat getCurrentFrameRate( )

The getCurrentFrameRate() method returns the current frame rate in frames per second. The way in which frame rate is measured and whether or not it is supported at all is browser dependent. If frame rate measurement is not supported or cannot be determined, 0.0 is returned.

4.12.10.5 SFString getWorldURL( )

The getWorldURL() method returns the URL for the root of the currently loaded world.

4.12.10.6 void replaceWorld( MFNode nodes )

The replaceWorld() method replaces the current world with the world represented by the passed nodes. An invocation of this method will usually not return since the world containing the running script is being replaced. Scripts that may call this method shall have mustEvaluate set to TRUE.

4.12.10.7 void loadURL( MFString url, MFString parameter )

The loadURL() method loads the first recognized URL from the specified url field with the passed parameters. The parameter and url arguments are treated identically to the Anchor node's parameter and url fields (see 6.2, Anchor). This method returns immediately. However, if the URL is loaded into this browser window (e.g., there is no TARGET parameter to redirect it to another frame), the current world will be terminated and replaced with the data from the specified URL at some time in the future. Scripts that may call this method shall set mustEvaluate to TRUE. If loadURL() is invoked with a URL of the form "#name", the Viewpoint node with the given name ("name") in the Script node's run-time name scope(s) shall be bound. However, if the Script node containing the script that invokes loadURL("#name") is not part of any run-time name scope or is part of more than one run-time name scope, results are undefined. See 4.4.6, Run-time name scope, for a description of run-time name scope.

4.12.10.8 void setDescription( SFString description )

The setDescription() method sets the passed string as the current description. This message is displayed in a browser dependent manner. An empty string clears the current description. Scripts that call this method shall have mustEvaluate set to TRUE.

4.12.10.9 MFNode createVrmlFromString( SFString vrmlSyntax )

The createVrmlFromString() method parses a string consisting of VRML statements, establishes any PROTO and EXTERNPROTO declarations and routes, and returns an MFNode value containing the set of nodes in those statements. The string shall be self-contained (i.e., USE statements inside the string may refer only to nodes DEF'ed in the string, and non-built-in node types used by the string shall be prototyped using EXTERNPROTO or PROTO statements inside the string).
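
The following non-normative sketch (names are illustrative) shows typical usage in the ECMAScript binding, routing the returned nodes into the scene through a grouping node's addChildren eventIn:

    DEF Holder Group { }
    DEF Builder Script {
      eventIn  SFBool build
      eventOut MFNode newNodes
      url "javascript:
        function build(value, timestamp) {
          // The string is self-contained; its nodes become the eventOut value
          newNodes = Browser.createVrmlFromString('Shape { geometry Cone { } }');
        }"
    }
    ROUTE Builder.newNodes TO Holder.addChildren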

4.12.10.10 void createVrmlFromURL( MFString url, SFNode node, SFString event )

The createVrmlFromURL() method instructs the browser to load a VRML scene description from the given URL or URLs. The VRML file referred to shall be self-contained (i.e., USE statements inside the file may refer only to nodes DEF'ed in the file, and non-built-in node types used by the file shall be prototyped using EXTERNPROTO or PROTO statements inside the file). After the scene is loaded, event is sent to the passed node returning the root nodes of the corresponding VRML scene. The event parameter contains a string naming an MFNode eventIn on the passed node.

4.12.10.11 void addRoute(...) and void deleteRoute(...)

void addRoute( SFNode fromNode, SFString fromEventOut,
                       SFNode toNode, SFString toEventIn );

void deleteRoute( SFNode fromNode, SFString fromEventOut,
                          SFNode toNode, SFString toEventIn );

These methods respectively add and delete a route between the given event names for the given nodes. Scripts that call this method shall have directOutput set to TRUE. Routes that are added and deleted shall obey the execution order defined in 4.10.3, Execution model.
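
The following non-normative sketch (names are illustrative) adds a route dynamically in the ECMAScript binding; the Script node declares directOutput TRUE and holds SFNode fields giving it access to both nodes:

    DEF Press TouchSensor { }
    DEF Lamp  PointLight  { }
    DEF Wiring Script {
      eventIn SFBool connect
      field   SFNode fromNode USE Press
      field   SFNode toNode   USE Lamp
      directOutput TRUE
      url "javascript:
        function connect(value, timestamp) {
          // Dynamically establish Press.isActive -> Lamp.set_on
          Browser.addRoute(fromNode, 'isActive', toNode, 'set_on');
        }"
    }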

--- VRML separator bar ---

4.13 Navigation

4.13.1 Introduction

Conceptually speaking, every VRML world contains a viewpoint from which the world is currently being viewed. Navigation is the action taken by the user to change the position and/or orientation of this viewpoint thereby changing the user's view. This allows the user to move through a world or examine an object. The NavigationInfo node (see 6.29, NavigationInfo) specifies the characteristics of the desired navigation behaviour, but the exact user interface is browser-dependent. The Viewpoint node (see 6.53, Viewpoint) specifies key locations and orientations in the world to which the user may be moved via scripts or browser-specific user interfaces.

4.13.2 Navigation paradigms

The browser may allow the user to modify the location and orientation of the viewer in the virtual world using a navigation paradigm. Many different navigation paradigms are possible, depending on the nature of the virtual world and the task the user wishes to perform. For instance, a walking paradigm would be appropriate in an architectural walkthrough application, while a flying paradigm might be better in an application exploring interstellar space. Examination is another common use for VRML, where the world is considered to be a single object which the user wishes to view from many angles and distances.

The NavigationInfo node has a type field that specifies the navigation paradigm for this world. The actual user interface provided to accomplish this navigation is browser-dependent. See 6.29, NavigationInfo, for details.

4.13.3 Viewing model

The browser controls the location and orientation of the viewer in the world, based on input from the user (using the browser-provided navigation paradigm) and the motion of the currently bound Viewpoint node (and its coordinate system). The VRML author can place any number of viewpoints in the world at important places from which the user might wish to view the world. Each viewpoint is described by a Viewpoint node. Viewpoint nodes exist in their parent's coordinate system, and both the viewpoint and the coordinate system may be changed to affect the view of the world presented by the browser. Only one viewpoint is bound at a time. How the Viewpoint node operates is described in 4.6.10, Bindable children nodes, and 6.53, Viewpoint.

Navigation is performed relative to the Viewpoint's location and does not affect the location and orientation values of a Viewpoint node. The location of the viewer may be determined with a ProximitySensor node (see 6.38, ProximitySensor).

4.13.4 Collision detection and terrain following

A VRML file can contain Collision nodes (see 6.8, Collision) and NavigationInfo nodes that influence the browser's navigation paradigm. The browser is responsible for detecting collisions between the viewer and the objects in the virtual world, and is also responsible for adjusting the viewer's location when a collision occurs. Browsers shall not disable collision detection except for the special cases listed below. Collision nodes can be used to generate events when viewer and objects collide, and can be used to designate that certain objects should be treated as transparent to collisions. Support for inter-object collision is not specified. The NavigationInfo types of WALK, FLY, and NONE shall strictly support collision detection. However, the NavigationInfo types ANY and EXAMINE may temporarily disable collision detection during navigation, but shall not disable collision detection during the normal execution of the world. See 6.29, NavigationInfo, for details on the various navigation types.

NavigationInfo nodes can be used to specify certain parameters often used by browser navigation paradigms. The size and shape of the viewer's avatar determines how close the avatar may be to an object before a collision is considered to take place. These parameters can also be used to implement terrain following by keeping the avatar a certain distance above the ground. They can additionally be used to determine how short an object must be for the viewer to automatically step up onto it instead of colliding with it.

--- VRML separator bar ---

4.14 Lighting model

4.14.1 Introduction

The VRML lighting model provides detailed equations which define the colours to apply to each geometric object. For each object, the values of the Material node, Color node and texture currently being applied to the object are combined with the lights illuminating the object and the currently bound Fog node. These equations are designed to simulate the physical properties of light striking a surface.

4.14.2 Lighting 'off'

A Shape node is unlit if either of the following is true:

  1. The shape's appearance field is NULL (default).
  2. The material field in the Appearance node is NULL (default).

Note the special cases of geometry nodes that do not support lighting (see 6.24, IndexedLineSet, and 6.36, PointSet, for details).

If the shape is unlit, the colour (Irgb) and alpha (A, 1-transparency) of the shape at each point on the shape's geometry are given in Table 4.5.

Table 4.5 -- Unlit colour and alpha mapping

Texture type                     Colour per-vertex or per-face     Color NULL
No texture                       Irgb = ICrgb;  A = 1              Irgb = (1, 1, 1);  A = 1
Intensity (one-component)        Irgb = IT × ICrgb;  A = 1         Irgb = (IT, IT, IT);  A = 1
Intensity+Alpha (two-component)  Irgb = IT × ICrgb;  A = AT        Irgb = (IT, IT, IT);  A = AT
RGB (three-component)            Irgb = ITrgb;  A = 1              Irgb = ITrgb;  A = 1
RGBA (four-component)            Irgb = ITrgb;  A = AT             Irgb = ITrgb;  A = AT

where:

AT    = normalized [0, 1] alpha value from 2- or 4-component texture image
ICrgb = interpolated per-vertex colour, or per-face colour, from Color node
IT    = normalized [0, 1] intensity from 1- or 2-component texture image
ITrgb = colour from 3- or 4-component texture image

4.14.3 Lighting 'on'

If the shape is lit (i.e., a Material and an Appearance node are specified for the Shape), the Color and Texture nodes determine the diffuse colour for the lighting equation as specified in Table 4.6.

Table 4.6 -- Lit colour and alpha mapping

Texture type                             Colour per-vertex or per-face      Color node NULL
No texture                               ODrgb = ICrgb;  A = 1-TM           ODrgb = IDrgb;  A = 1-TM
Intensity texture (one-component)        ODrgb = IT × ICrgb;  A = 1-TM      ODrgb = IT × IDrgb;  A = 1-TM
Intensity+Alpha texture (two-component)  ODrgb = IT × ICrgb;  A = AT        ODrgb = IT × IDrgb;  A = AT
RGB texture (three-component)            ODrgb = ITrgb;  A = 1-TM           ODrgb = ITrgb;  A = 1-TM
RGBA texture (four-component)            ODrgb = ITrgb;  A = AT             ODrgb = ITrgb;  A = AT

where:

IDrgb = material diffuseColor
ODrgb = diffuse factor, used in lighting equations below
TM    = material transparency

All other terms are as defined in 4.14.2, Lighting 'off'.

4.14.4 Lighting equations

An ideal VRML implementation will evaluate the following lighting equation at each point on a lit surface. RGB intensities at each point on a geometry (Irgb) are given by:

Irgb = IFrgb × (1 - f0) + f0 × (OErgb + SUM( oni × attenuationi × spoti × ILrgb × (ambienti + diffusei + speculari) ))

where:

attenuationi = 1 / max(c1 + c2 × dL + c3 × dL², 1)
ambienti     = Iia × ODrgb × Oa
diffusei     = Ii × ODrgb × (N · L)
speculari    = Ii × OSrgb × (N · ((L + V) / |L + V|))^(shininess × 128)

and:

·          = modified vector dot product: if dot product < 0, then 0.0; otherwise, dot product
c1, c2, c3 = light i attenuation
dV         = distance from point on geometry to viewer's position, in coordinate system of current fog node
dL         = distance from light to point on geometry, in light's coordinate system
f0         = fog interpolant, see Table 4.8 for calculation
IFrgb      = currently bound fog's colour
ILrgb      = light i colour
Ii         = light i intensity
Iia        = light i ambientIntensity
L          = (Point/SpotLight) normalized vector from point on geometry to light source i position
L          = (DirectionalLight) -direction of light source i
N          = normalized normal vector at this point on geometry (interpolated from vertex normals specified in Normal node or calculated by browser)
Oa         = Material ambientIntensity
ODrgb      = diffuse colour, from Material node, Color node, and/or texture node
OErgb      = Material emissiveColor
OSrgb      = Material specularColor
oni        = 1, if light source i affects this point on the geometry;
             0, if light source i does not affect this geometry (i.e., if it is farther away than radius for PointLight or SpotLight, outside the enclosing Group/Transform for DirectionalLights, or its on field is FALSE)
shininess  = Material shininess
spotAngle  = acos(-L · spotDiri)
spotBW     = SpotLight i beamWidth
spotCO     = SpotLight i cutOffAngle
spoti      = spotlight factor, see Table 4.7 for calculation
spotDiri   = normalized SpotLight i direction
SUM        = sum over all light sources i
V          = normalized vector from point on geometry to viewer's position

Table 4.7 -- Calculation of the spotlight factor

Condition (in order)                        spoti =
lighti is PointLight or DirectionalLight    1
spotAngle >= spotCO                         0
spotAngle <= spotBW                         1
spotBW < spotAngle < spotCO                 (spotAngle - spotCO) / (spotBW - spotCO)



Table 4.8 -- Calculation of the fog interpolant

Condition                                    f0 =
no fog                                       1
fogType "LINEAR", dV < fogVisibility         (fogVisibility - dV) / fogVisibility
fogType "LINEAR", dV > fogVisibility         0
fogType "EXPONENTIAL", dV < fogVisibility    exp(-dV / (fogVisibility - dV))
fogType "EXPONENTIAL", dV > fogVisibility    0


4.14.5 References

The VRML lighting equations are based on the simple illumination equations given in E.[FOLE] and E.[OPEN].

--- VRML separator bar ---
Copyright © 1997 The VRML Consortium Incorporated.
http://www.vrml.org/Specifications/VRML97/part1/concepts.html

