CASA, School of
Architecture, Bath University, BA2 7AY, UK
Tel: 01225 826475
Fax: 01225 826691
e-mail: V.Bourdakis@bath.ac.uk
Abstract: The paper addresses a number of issues related to the creation
of virtual models of real-world environments. The problems faced during the
construction of the London West End computer model are presented, the solutions
devised are demonstrated and suggestions regarding the management of similar
projects are given. The focus is on the practical issues rather than on the
advantages and disadvantages of VRML and/or VR for the particular application.
Keywords: architecture, visualisation, virtual world design,
VRML
Paper presented at the 3rd UK VRSIG Conference, 3rd July 1996.
The Centre for Advanced Studies
in Architecture (CASA) has been involved in three-dimensional (3D) computer
modelling for the last six years. In 1991 CASA received a grant to construct a
3D computer model of the city of Bath. The project was supported by Bath City
Council and since its completion the model has been used by the city planners to
test the visual impact of a number of proposed developments in the city. The
model was created exclusively from aerial photographs of the city at 1:1250
scale using a stereo digitiser. It is accurate to half a metre and covers the
whole historic city centre, an area of approximately 2.5 x 3.0 km. This was
followed by a model of the Edinburgh New Town, created using similar techniques
and a slightly higher level of detail. A low detail version of the Bath model in VRML is at: http://www.bath.ac.uk/Centres/CASA/VRML/bath_x.wrl.gz
All 3D city models built by CASA were modelled on 486 and Pentium PCs. AutoCAD R12 and R13 were used due to the need for compatibility with engineering practices, local authorities and anyone who might need to use, add to or modify the existing model in the future. For the same reason, none of the AutoCAD-specific modelling facilities were used (such as the solid modelling extensions AME in R12 and ACIS in R13). Various in-house utilities, AutoLisp programs, scripts etc. were used to customise AutoCAD, improving the speed of modelling and the quality of output. 3DStudio releases 3 and 4 were used for visualisation, producing both single images and animations. Finally, an ADAM Technology stereo photo digitiser was used as an input device for an AutoCAD station.
For VRML publishing, AutoCAD DXF files were initially translated to Inventor
using a freely available Silicon Graphics (SG)
utility - DxfToIV. Using standard SG applications, such as SceneViewer and
gview, lights, materials and camera positions were added. Finally the Inventor
files were translated to VRML, again using SG free tools - ivToVRML. It should
be noted that VRML1.0 is based on Inventor (it is a subset of it) so this
process was quite simple and trouble-free. SG has recently introduced WebspaceAuthor; a tool that makes editing,
linking and publishing VRML files much easier. On the PC front, there are at
least half a dozen CAD, rendering or translating packages that will directly
output VRML1.0 files (Caligari Truespace, V-Realm builder, VRCreator by VReam,
VRML IPAS plugin for 3DStudio, WorldView, Interchange, etc.). It should be noted
that the quality of the output varies greatly, with many packages adding
non-standard extensions to the format and creating incompatible files.
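As an indication of what this pipeline produces, the sketch below shows a minimal
VRML1.0 file of the kind obtained once lights, materials and a camera have been
added around the translated geometry (all values are illustrative, not taken from
the CASA models):

    #VRML V1.0 ascii
    # Minimal illustrative output: camera, light and material wrapped
    # around the translated geometry.
    Separator {
        PerspectiveCamera { position 0 50 200  orientation 1 0 0 -0.2 }
        DirectionalLight  { direction -0.5 -1 -0.3  intensity 1 }
        Material          { diffuseColor 0.76 0.72 0.6 }
        Separator {
            # ... translated building geometry (IndexedFaceSet nodes) ...
        }
    }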
The first VR software used in CASA two years ago was PolyTRIM, created by the Centre for Landscape Research (CLR), University of Toronto, Canada. It is a landscape design program written for SGs using the GL graphics library, allowing a degree of interactive design, placing and editing of objects, integrating video, animation and a few other features, all in fully shaded views. However, it was not tuned to CASA's work: it was proprietary and could not be modified, it did not import AutoCAD or DXF files reliably enough (forcing modifications to the drawing database), it was a research tool (and thus slow) and it was limited to SGs only. CLR, and Rodney Hoinkes in particular, were very helpful in improving and modifying the program to CASA's needs and have also promised an NT version soon. When VRML was announced, a lot of time was spent examining its potential. Admittedly, VRML1.0 did not allow editing or modification of the drawing database, but it had the potential to grow into a very powerful system: open, programmable, compatible with a variety of platforms, free and Internet aware. VRML2.0 allows scripting, sensors, animations, 3D sound, geometry manipulation via programming, etc., which, based on the tests carried out already, will cover all of CASA's needs.
A few commercial VR solutions were tested, which proved either unsatisfactory
or outside CASA's budget. There seems to be a trend of focusing commercial VR
applications on immersiveness and the human-computer interface within
restrictively small geometrical databases. As a result, such applications would
not handle the size of a city's database at all well. The native SG Inventor
format was also tested successfully but the lack of compatibility with other
platforms forced us to use VRML. Inventor toolkits are currently available for
PCs (running WinNT and Win95, as well as one for the freeware PC UNIX - Linux)
and it is possible to use Inventor based programs in the future if a need for
proprietary applications arises.
There is often a need to apologise for not working in fully immersive environments, as per the definition of VR. It was decided not to work immersively because of problems that are quite serious, difficult to solve and computationally expensive.
British Telecom Labs commissioned CASA to create a 3D model of an area 500 by 1000 metres in London's West End (including Piccadilly Circus, Leicester Square and Shaftesbury Avenue). The model would be used for research into cellular phone transmitter signal propagation. Following an Engineering and Physical Sciences Research Council (EPSRC) grant, the model was modified, translated to VRML, published on the web and demonstrated during the 1995 Interbuild exhibition (21-27 Nov, Birmingham). The VRML demo can be viewed at http://www.bath.ac.uk/Centres/CASA/london/londonmain.html
The main argument was that although the current generation of digital maps is a significant improvement over paper alternatives, maps are still two dimensional and only provide a restructuring of existing information. This VRML model shows what a map of central London could look like in years to come.
The entire area is modelled in 3D with each building accurately represented in both form and materials used. As well as what happens above ground, underground features are included. Links are also made between objects in the model (buildings, streets, telephone boxes, etc.) and data held on the Internet using the World Wide Web (WWW).
This map provides a tool for analysing the relationships between built form
and various kinds of data. It is strongly believed that in the future such maps
will offer a publicly accessible interface to a wide range of urban
information.
Quite a few problems were identified in the process of building the London
model. These ranged from the decision making stage to the publishing
specifications, bandwidth needed and compatibility of the existing browsers on
different platforms. A 3D computer model of a city is a task that, to a certain
extent, takes more time to plan than to actually construct. More
specifically:
One of the initial questions was the target degree of accuracy, which was decided early in the process to be under half a metre; sufficient for all current and forecast uses. A 3D model that will be used by designers and engineers should take into account the engineers' primary source of information. Therefore, creating a computer model exclusively from aerial photographs using a 3D digitiser is not a satisfactory solution when all engineers use Ordnance Survey 2D maps. This was a lesson learned from the Bath city model, which was built exclusively from aerial photographs. Small discrepancies exist with the 2D Ordnance Survey maps and, since OS used the same aerial photographs and similar techniques, no one can be sure which is more accurate. A couple of packages claiming direct creation of 3D models from photographs were examined, but the output looked more like an elastic stretched membrane than buildings with accurately defined walls. Such packages are more suitable for landscape design or, even better, relatively small object modelling rather than urban scale work; not to mention that the process is often quite laborious.
It was finally decided that the London model would be a hybrid construction. OS digital terrain data (scale 1:5000) were used for the landscape, especially since it was quite flat. The OS LandLine data (scale 1:1250) were used for all X and Y co-ordinates (i.e. streets, buildings, parks, bridges, and all other objects). Finally aerial photographs were used for the extraction of the Z co-ordinates (elevations) of the buildings, height of trees, chimneys, roof geometry information, dormer windows etc. This combination worked very well, creating a model that is both accurate and compatible with existing 2D and 3D models.
The London model was also built faster than the exclusively aerial photograph
based one of Bath, albeit with a higher capital cost due to OS maps licensing.
The reduction in overall development time, however, justifies the extra
cost.
When the model of Bath was built 4-5 years ago, it was decided to use the OS
UK co-ordinates with an origin point at Lands End, Cornwall. According to this,
the Circus is at 374750m on X, 165280m on Y and 48.2m above sea level (Z).
London is even further away, at about 500km on X. The CAAD software used for the
creation of the initial model had no problems dealing with such numbers or with
the units: a metre with three decimal places. However, following the
exporting/translating of the model to VRML it was found that navigation was
limited to "jumping" 50 metres at a time. Most of the browsers use integer
mathematics for geometry calculations. This leads to a series of rounding errors
and great problems with Z buffering. Trying to rotate or spin the whole area was
impossible since browsers rotate about the co-ordinate origin; in our case
Cornwall. After much discussion with browser writers, it was decided that the
co-ordinate origin should be translated to somewhere in the middle of the model;
that is Leicester Square for the London one, and the Abbey in Bath. This solved
all navigation, rotation and most Z buffering problems as well as reducing the
size of the model.
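The sketch below illustrates the re-basing (co-ordinate values are hypothetical): full OS grid values are replaced at export time by values relative to a point near the middle of the model, so the browser no longer works with numbers hundreds of kilometres from the origin.

    #VRML V1.0 ascii
    # Illustrative only.  A full OS grid co-ordinate such as
    #     374750.0  165280.0  48.2
    # becomes a small offset from the chosen local origin, e.g.
    #     -12.5  40.0  48.2
    # which avoids the rounding and Z buffering problems described above.
    Separator {
        Coordinate3 {
            point [ -12.5  40.0  48.2,
                     -8.0  40.0  48.2,
                     -8.0  46.5  48.2 ]
        }
        IndexedFaceSet { coordIndex [ 0, 1, 2, -1 ] }
    }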
CAAD software uses a world-based co-ordinate system: X and Y define the plane and Z the heights. VRML defines X and Y across the computer screen and Z out of the screen. This effectively means that Y and Z have to be swapped when moving from a CAAD model to a VRML one. Unfortunately most translators do not do this, but a simple transformation matrix at the top of the VRML file corrects the problem. On the downside, the transformation matrix affects previously defined viewpoints; excluding these from its effects makes the file difficult, if not impossible, to comprehend and edit manually. Failing to exchange Y and Z results in models that cannot be "walked" through, since browser-perceived walking is carried out in X and Z, meaning the visitor comes flying from the sky down to the ground.
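One possible form of that correction is sketched below (the exact transform depends on the handedness conventions of the exporting package; a MatrixTransform node can be used in the same position):

    #VRML V1.0 ascii
    # Wrap the translated geometry in a rotation that maps the CAAD axes
    # (Z up) onto the VRML axes (Y up): a -90 degree turn about X.
    Separator {
        Rotation { rotation 1 0 0 -1.5708 }   # CAAD Z becomes VRML Y
        # ... translated city geometry ...
    }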
One of the problems with translations is that there is always something not translated the way it is expected, or some information lost in the process. It is quite normal to spend a few hours or even days hand-optimising the translated file. Such optimisation results in files that are usable on slower computers and more network friendly (smaller in size).
The greatest problem with model translations is the structure of the
geometrical description itself. In most CAAD packages, the operator can define
surfaces that are perceived as double sided. That means, considering three
points in 3D space, the surface defined by triangle (A, B, C) is the same as the
one by triangle (A, C, B). VRML browsers, and the VRML language description,
define surfaces as single sided with anticlockwise vertex ordering. Some browsers
have the option of rendering double sided faces - at the penalty of a
considerable speed reduction, usually by a factor of two. Furthermore, the VRML
language supports geometry hints, which most PC browsers discard, producing
unacceptable results.
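For reference, a sketch of the relevant hints, assuming the exported faces are wound consistently anticlockwise:

    #VRML V1.0 ascii
    # Declaring vertex ordering and solidity lets a browser back-face cull
    # instead of rendering every face twice; many PC browsers ignore this.
    Separator {
        ShapeHints {
            vertexOrdering  COUNTERCLOCKWISE
            shapeType       SOLID
            faceType        CONVEX
            creaseAngle     0.5
        }
        # ... building geometry ...
    }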
One of the most essential features of CAAD programs is instancing: the definition of a part of the model which is later repeated without having to re-specify its geometrical properties, colour etc. VRML does support this feature but, until now, there is not a single translation program that will generate instance-aware VRML files. In the London model's case that was not a serious problem, since the only repetitive elements were the telephone boxes and the information flag-posts on top of buildings. However, there are cases where the file size difference is fairly substantial; a VRML file of the Bath Abbey is 9MB, versus 1.5MB with instancing. There is also a limit to what detail should be included in such models, since there is a tendency for CAAD users to instance everything and eventually create unmanageable files. For example, instancing a fairly complex modelled chimney pot or roof-line ornament 200 times will place all the load on the browser, leading to a poor, unusable virtual world.
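A sketch of instancing in VRML1.0, with a hypothetical telephone box reduced to a single primitive for brevity:

    #VRML V1.0 ascii
    # The geometry is defined once; each further instance repeats only a transform.
    Separator {
        DEF PHONE_BOX Separator {
            Material { diffuseColor 0.8 0.1 0.1 }
            Cube { width 1 height 2.5 depth 1 }   # stand-in for the real geometry
        }
        Separator {
            Translation { translation 35 0 -120 }
            USE PHONE_BOX                         # second instance, no geometry repeated
        }
    }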
Another area of concern is the use of primitives (spheres, cones, cubes,
cylinders, etc.). As an example, following the translation stage, a CAAD sphere
will be expressed as 64, 128 or even more triangles, hardly an efficient way of
representing a sphere. Similarly, a box will be defined as 12 triangles, making
the VRML file inefficient, bigger and difficult to read.
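The difference is easy to see in a sketch comparing the two forms of the same box (the translated index list is abbreviated here; a real exporter writes out all twelve triangles):

    #VRML V1.0 ascii
    Separator {
        # Kept as a primitive: one node, three numbers.
        Cube { width 10 height 3 depth 6 }

        # The same box after a typical translation: 8 points and 12 triangles.
        Separator {
            Translation { translation 20 0 0 }
            Coordinate3 {
                point [ -5 -1.5 -3,  5 -1.5 -3,  5 1.5 -3,  -5 1.5 -3,
                        -5 -1.5  3,  5 -1.5  3,  5 1.5  3,  -5 1.5  3 ]
            }
            IndexedFaceSet {
                # index list abbreviated: only 4 of the 12 triangles shown
                coordIndex [ 0, 1, 2, -1,  0, 2, 3, -1,
                             4, 6, 5, -1,  4, 7, 6, -1 ]
            }
        }
    }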
Following the main problem of creating the 3D model, its structure is of vital importance. An area 500 metres square in the West End of London was modelled in approximately 25,000 triangles. An urban block of the city of Bath modelled fully with textures, windows etc. reaches 10,000-15,000 triangles. One cannot expect, and should not attempt, to create models with such high triangle counts and hope users will be able to navigate smoothly within them. Instead, low polygon count representations of areas should be used when the camera is further than a user defined distance. This is the basic concept behind Level of Detail (LOD). A 10,000 triangle urban block can also be represented using 30-50 triangles, showing roughly the borders of the built urban block, when the camera is more than a few hundred metres away. The browser switches to the high triangle version according to the creator's instructions; in some browsers this is also done automatically when there is not enough power to render and keep a constant frame rate. For reference, the same area of London was represented with only 3,500 triangles at the low level of detail.
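A sketch of an LOD node for one urban block follows (range, centre and file names are hypothetical; the two representations could equally be written inline):

    #VRML V1.0 ascii
    # The browser draws the first child while the camera is within 300 m of
    # the centre, and the second child beyond that distance.
    LOD {
        center 250 0 -250
        range  [ 300 ]
        Separator {
            WWWInline { name "block07_high.wrl" }   # full facades and roofs
        }
        Separator {
            WWWInline { name "block07_low.wrl" }    # 30-50 triangle envelope
        }
    }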
A few of the currently available VRML toolkits have a LOD auto-creation feature. However, such features are only suited to freeform shapes and not to buildings, since the criteria for face elimination can never match actual building situations.
LOD calculations are computationally intensive and there is a threshold of
acceptable LOD use versus geometry / texture use. As an example, deciding to add
textures to building facades and switching them on and off per building using
LOD nodes will bring the browser to a halt, not because of the burden of loading
all these textures, but due to the need to do all the LOD checks for each
building on each camera movement! It is better for the browser to do tests for 4
LODs per urban block than for 50. This leads to a sub-structuring of the model
into streets within each main urban block. Navigation in the London model,
utilising 2 levels of detail on its 40 urban blocks (per 500 metre square), slows
down noticeably even on powerful PCs when the camera reaches points where the
geometry representation needs to change. On SG machines there is no noticeable
speed change.
The use of LOD in order to improve navigational speed within the VRML worlds
is extremely difficult to implement in landscapes. The reason is the need for a
smooth transition from one level to another. A 20x20 grid landscape for the
500 metre square piece of the model has to be split into at least four parts (in
plan) in order to use LOD effectively. However the edges, where the low level
representation joins the high level one, create gaps and steps that cannot be
corrected unless the low and high polygon count versions of the landscape have
common edges (which defeats the purpose of having a low polygon count
representation of the landscape in the first place). Tests carried out with a
high triangle count landscape did show a considerable slowdown and therefore this
issue is still under consideration.
Contrary to the overused slogan of "What You See Is What You Get", in VRML everything depends on the browser. There are currently over a dozen browsers for a variety of platforms including Macs, PCs, Acorns, and a variety of UNIX workstations. The underlying rendering libraries that these browsers use determine to a large extent the rendering quality (or lack of it). Consequently, the typical limestone coloured Bath city buildings are rendered red on one browser, brown on another and shiny white on a third. This problem is one of the most difficult to tackle and one that the VRML content creator has little or no power over. The VRML2.0 specification, currently under its 3rd revision, will attempt to set guidelines for browser authors, although everyone admits that even that will not solve the problem.
Another such problem is the use of textures. The graphics libraries used by all PC based browsers impose limitations on texture size; most limit each texture to 128 by 128 pixels. This is fine for small signs and tiled textures, but it badly affects building facades or shop-front textures. Furthermore, it indirectly affects bandwidth by prohibiting the use of "texture strips": a series of textures stitched together in a long strip, loaded once and mapped onto the relevant buildings. Additionally, PC graphics capabilities are, on average, quite low, forcing the browser to use software emulation for texturing, which slows the process substantially.
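The texture strip idea is sketched below (image name and co-ordinates are hypothetical): one long image is loaded once and each face maps its own slice of it through texture co-ordinates, which the 128 by 128 pixel limit makes impractical.

    #VRML V1.0 ascii
    Separator {
        Texture2 { filename "facade_strip.jpg" }         # hypothetical strip image
        Coordinate3 {
            point [ 0 0 0,  12 0 0,  12 18 0,  0 18 0 ]  # one 12 m x 18 m facade
        }
        TextureCoordinate2 {
            point [ 0.25 0,  0.375 0,  0.375 1,  0.25 1 ]   # this facade's slice
        }
        IndexedFaceSet {
            coordIndex        [ 0, 1, 2, 3, -1 ]
            textureCoordIndex [ 0, 1, 2, 3, -1 ]
        }
    }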
Finally, lights cause serious problems across the available range of
browsers. Directional, omni and spot lights with different intensities and
colours are not supported in a uniform way. A few PC browsers disregard all
lights and place default lights in pre-calculated positions; others use a
head-mounted light which produces very flat looking rendering. In many cases
browsers modify the ambient light, and some of them do not support coloured
lights.
As mentioned earlier, the lack of integrated tools for the production of VRML models and the constant need for translation make editing a very laborious process.
The notion of layering, of organising data, geometry etc. into groups, cannot be carried over to the VRML files successfully. Similarly, the 2 to 3 level hierarchical structure of VRML models cannot be easily and intuitively emulated in CAAD software. In the case of the London model, the hierarchy included 5 materials for each building, individual buildings for each urban block, and two levels of detail for each urban block. The five materials were mapped as colours on the CAAD drawing, but the exporting programs would not create objects out of each set of multicoloured triangles unless they were grouped. This created extra problems since it was not possible to have groups within groups, leading to the segmented exporting of the CAAD drawings and extra manual editing time to re-assemble them.
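For reference, a sketch of the intended hierarchy that had to be re-assembled by hand (names and colours are hypothetical):

    #VRML V1.0 ascii
    # Urban block -> level of detail -> building -> materials.
    DEF BLOCK_07 LOD {
        range [ 300 ]
        Separator {                                       # high level of detail
            DEF BUILDING_07_01 Separator {
                Material { diffuseColor 0.76 0.72 0.60 }  # walls (one of five materials)
                # ... wall faces ...
                Material { diffuseColor 0.35 0.30 0.28 }  # roof
                # ... roof faces ...
            }
            # ... remaining buildings of the block ...
        }
        Separator {                                       # low level of detail
            # ... 30-50 triangle envelope of the block ...
        }
    }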
The LOD behaviour has to be manually edited into the VRML files. In many cases colour information is repeated for each object in the VRML file, making global colour editing impossible. The concept of a VRML object is completely different from the way CAAD programs structure data, and the colours used to denote materials in VRML usually have a completely different meaning in CAAD.
Editing a VRML project is a balancing act, where it sometimes pays off to do the editing in the initial CAAD package and go through the translation and optimisation stages again rather than trying to modify the VRML file directly; not to mention that the latter is usually done using a text editor and not visually.
This is expected to change as more and more CAAD packages begin to support
VRML and new VRML-specific editors appear. However, in a similar way to HTML
production, there will be many cases where editing the file manually is faster,
more efficient and produces better results.
The London model is used to demonstrate how a 3D model of a city can be used as the front end to various underlying databases of information. The model should therefore be recognisable by its users. Early experiments showed that "walking" at ground level was very confusing and users would disorientate themselves in a matter of minutes. The lack of textures and road level elements that they could identify was possibly the reason, as well as the lack of atmospheric properties, sun position etc. The ease with which someone could move from one end of the model to the other, and the lack of any sense of the time and effort needed for a kilometre's walk, were also important. Elements like moving cars and buses would be needed to add a sense of scale to the whole model as well as providing traffic direction hints. The camera viewing angles used are also crucial in creating a correct image. Consequently, most of the navigation is done above roof level with much better results (the whole model is closer to a normal 2D map, and finding streets, buildings and areas was vastly improved). This is close to the concept that users walk and, when in doubt, "jump" or "fly" until they find their destination, where they can safely land and continue.
The lack of collision detection and "walk on the ground" features in many browsers only makes things worse. Frustration and confusion resulted from the ability to "enter" buildings through their walls and to go underground by failing to follow the landscape. This is more apparent in the Bath city model, with elevation changes of over twenty metres in less than 200 metres of horizontal movement.
Following the construction of the model, a series of links to existing
databases was added. A pointer mechanism was needed so that visitors could
identify the underlying links. It was initially decided to texture map the
facades of the buildings with links but, due to various problems with model
size, texturing quality, speed of navigation etc., a simpler method was devised.
Using flag-posts piercing through the buildings and reaching a height of 40
metres above ground level, it was possible to show clearly to both "flying" and
"walking" visitors the position and type of the links. A generic information
node / feedback form was added to all other model objects so that visitors can
get a minimal amount of information by simply highlighting buildings or streets.
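A sketch of one such link follows (URL and geometry are hypothetical; the real flag-posts carry more detailed geometry and descriptions):

    #VRML V1.0 ascii
    # A flag-post wrapped in a WWWAnchor: clicking any part of it follows the link.
    WWWAnchor {
        name        "http://www.example.org/block07/info.html"
        description "Building information and feedback form"
        Separator {
            Translation { translation 120 20 -85 }   # post rises through the building
            Cube { width 0.5 height 40 depth 0.5 }   # stand-in for the flag-post
        }
    }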
There is an ever-increasing interest in 3D computer models of buildings, large complexes and cities. Museums, local authorities, city councils, airport authorities and organisations managing big engineering projects are among the interested parties; the potential uses of such models vary enormously.
Evaluation of ideas and proposals for "sensitive" sites is an area where 3D models can help engineers, decision makers and the public understand and appreciate solutions.
Data presentation, analysis and visualisation within the context of the site/city is another use for such models. Engineering, structures and architectural history tutorials as well as architectural studio modules can utilise 3D computer models.
Security management is another field that can benefit from 3D models: Closed Circuit TV (CCTV) cameras can be positioned and controlled, and the instruction of fire-fighting, emergency and police crews can be done remotely and accurately. Emergency evacuation procedures and routes can be evaluated, fire and smoke spread can be simulated, and disaster action plans can be easily assessed.
In the field of advertising, virtual tours of shopping malls have already been built using models of extremely low detail; the focus being the 2D interface of the commercial ventures involved.
As VRML matures it should be possible to utilise it better, and with an increasing level of integration with CAAD packages there should be fewer problems to solve.