
Visual Programming Languages

Trevor Smedley


Faculty of Computer Science
Dalhousie University


Source of information: http://users.cs.dal.ca/~smedley/Research.html


The following are descriptions of research areas and projects which I am involved in with my colleagues and graduate students.

Areas of Interest

Visual Programming Languages

The primary advantage of visual programming languages is that they provide direct representations of software structures such as algorithms and data. This is in contrast to traditional textual programming languages, where such multi-dimensional structures are encoded into one-dimensional strings according to some intricate syntax. Visual languages remove this layer of abstraction, allowing the programmer to directly observe and manipulate complex software structures. Such directness of representation, termed “closeness of mapping,” is seen by Green and Petre as an important factor in enhancing the programmer’s ability to build and comprehend such structures, and is well supported by practical experience, as well as experimental study.

However, comparing the relative effectiveness of visual and textual programming languages is a complex undertaking, involving many criteria, for example: the class of users the languages are intended for; the kinds of programming tasks to be tested; the environment in which the languages are implemented. One study purports to prove that programs in textual languages are more comprehensible than visually represented programs. Other studies report tests which lead to the opposite conclusion. Furthermore, although there have been commercially successful visual programming languages, such as Prograph and LabView, visual languages have by no means replaced textual languages for programming by professionals or by end users. Visual Basic, for example, which, in spite of the name, uses a textual language, is used extensively by end users. Clearly, simply being visual is not enough to make a language more effective than a textual counterpart.

End User Programming

Perhaps the earliest example of an end user programming language is the spreadsheet VisiCalc, invented by Dan Bricklin and Bob Frankston and introduced at the National Computer Conference in New York City in 1979. Spreadsheets continue to be one of the most frequently-used types of computer application, second only to text editors. Their popularity is due largely to the fact that spreadsheets allow end users to write what are effectively computer programs — performing the same computations repeatedly with varying inputs.
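As a rough illustration of this point (an invented sketch, not drawn from the VisiCalc work; the cell names and formula are made up), a single formula plays the role of a small program that is simply re-evaluated as its inputs change:

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A spreadsheet formula, viewed as a small program: written once,
// re-evaluated whenever the input cells change.
public class FormulaCell {
    public static void main(String[] args) {
        // "Cells" holding the current input values.
        Map<String, Double> cells = new HashMap<>();

        // A formula such as "=A1*B1+10", written once by the end user...
        Function<Map<String, Double>, Double> total =
                c -> c.get("A1") * c.get("B1") + 10.0;

        // ...is evaluated repeatedly with varying inputs.
        cells.put("A1", 3.0); cells.put("B1", 4.0);
        System.out.println(total.apply(cells));   // 22.0

        cells.put("A1", 5.0);
        System.out.println(total.apply(cells));   // 30.0
    }
}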

Nardi sees tremendous demand for end user programming systems, which she defines as systems “…which allow end users to create useful applications with no more than a few hours of instruction.” She also expresses skepticism about the superlative claims made by some researchers about visual languages and their role in end user programming, and points out some of the difficulties in designing effective visual languages, emphasising that ‘visualness’ on its own is not enough to make a system that end users can use effectively, and that the effectiveness of visual programming depends to a high degree on the task for which it is employed. Workshops on end user programming have come to similar conclusions, suggesting that much more study is required into the appropriate uses of text and graphics in end user programming systems. It is interesting to note that the research work of many, if not all, of the participants in this workshop includes investigation of languages which include at least some significant visual aspect.

There have been a significant number of research projects involving visual languages for end user programming which have reported successes, either through informal studies and observations, or commercial success, such as AgentSheets, a system for creating simulation environments, KidSim, a simulation programming environment for children, and LabView, a system for creating virtual instrumentation. However, there are also many which have been unsuccessful, and there is still much research required to characterise when, and what sort of, visual expressions are valuable to end users.

Visual Design Languages

Much of our research involves the investigation of a particular class of domain-specific visual languages, visual design languages. If we take design to mean “to prepare a plan or sketch or model etc.”, and language to be defined as “any apparently organised system of communication”, then a very wide range of computer software can be considered to involve the use of a design language. Any drawing program, spreadsheet, or database definition application clearly fits into this category, and certainly any computer-aided design package involves the use of one or more design languages. Of particular interest to us are parametric design languages. That is, languages where the designer can specify a whole family of related designs with a single, parameterised description. Parametric design languages are becoming more and more prominent, particularly in advanced tools. For VLSI design, textual languages such as VHDL and Verilog support parametric design of VLSI devices, in a manner similar to programming languages. Computer aided design tools, such as Pro/Engineer, support parametric design at a number of levels. Simple families of designs can be specified by choosing options and entering values in dialog windows, while more complex structures can be defined using a textual language similar to programming languages. Less advanced tools, such as spreadsheets, often provide some facilities for parametric specification through the use of macros, or, as in the case of Excel with Visual Basic, through the integration of a textual programming language.
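As a loose illustration of the idea (a made-up example, not taken from VHDL, Verilog or Pro/Engineer; the bracket and its parameters are invented), a single parameterised description can stand for a whole family of designs:

import java.util.ArrayList;
import java.util.List;

// One parameterised description yields a family of related designs,
// in the spirit of VHDL generics or Pro/Engineer relations.
public class ParametricBracket {
    // One description, parameterised by overall width and hole count.
    static List<Double> holePositions(double width, int holes) {
        List<Double> xs = new ArrayList<>();
        double spacing = width / (holes + 1);   // derived dimension
        for (int i = 1; i <= holes; i++) {
            xs.add(i * spacing);                // hole centres along the part
        }
        return xs;
    }

    public static void main(String[] args) {
        // Two members of the same design family, from the same description.
        System.out.println(holePositions(100.0, 4));  // [20.0, 40.0, 60.0, 80.0]
        System.out.println(holePositions(60.0, 2));   // [20.0, 40.0]
    }
}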

Research Projects

JGraph

JGraph combines the semantics of the Java language with an entirely visual syntax based on Prograph, producing a new visual language that draws on the features of both and provides:

• Visual handling of exceptions;
• Immediate and visual type checking;
• An explicit syntax;
• Visualization of the definite assignment principle of Java;
• Visual indication of errors; and
• An integrated program development environment.

JGraph was not created with the intention of producing a visual syntax with a one-to-one correspondence to textual Java. Instead, JGraph is based on the early detection and elimination of compile-time errors, a feature which we believe will reduce the edit-compile-edit cycle of traditional textual development. We also believe it will help novice programmers by assisting them in understanding the types of errors that can occur, which may help reduce the frequency of such errors.
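For readers unfamiliar with the definite assignment rule mentioned in the list above, the following plain Java fragment (an illustrative example, not JGraph output) shows the kind of compile-time error involved: a local variable must be assigned on every path before it is read, and JGraph aims to make violations of this rule visible as the program is constructed.

// Minimal illustration of Java's definite assignment rule, one of the
// compile-time checks listed above.
public class DefiniteAssignment {
    static int describe(boolean flag) {
        int code;            // declared but not yet assigned
        if (flag) {
            code = 1;
        } else {
            code = 0;        // every path must assign before use; if this
        }                    // branch were missing, javac would reject the
        return code;         // read below ("might not have been initialized")
    }

    public static void main(String[] args) {
        System.out.println(describe(true));   // 1
        System.out.println(describe(false));  // 0
    }
}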

PDAGraph

With the ability to put more and more computing power into a smaller package, computer hardware companies started in the early 1990s to develop handheld computers, such as Apple’s Newton and Palm’s Pilot. These devices are now commonplace, and are expected to become even more widely used than desktop computers. The widespread use of desktop computers created a demand from end users for customisability and programmability: it is clear that there is a similar demand from users of handheld devices. Already, there are simple tools such as spreadsheets and database products for handhelds that allow some level of end user programmability.

We are particularly interested in investigating end user programmability of the Handspring Visor (and other devices with expansion capabilities). This device uses the PalmOS, but includes a hardware expansion slot which accepts Springboard modules. Various modules exist today for data acquisition, communications, and a variety of other functions. We feel that with this flexibility of use will come an increased demand for user programmability.

We are using an approach in this area that is similar in underlying concept to that used for our robot programming research. That is, a two ‘module’ system: one module is used to define the interface between the programming environment and the objects to be programmed, and the other is the programming environment itself. The first of these modules would be used to define the concrete representations of the problem domain objects: for example, representations of the data and functions available from a barcode reader. The second module would then allow the user/programmer to combine these representations with those of the programming concepts needed to complete their task.
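The following sketch suggests, in ordinary Java, how the division of labour between the two modules might look; the DomainObject interface and the BarcodeReader names are hypothetical and serve only to illustrate the idea.

import java.util.List;

// Sketch of the two-module idea: the first module declares what a domain
// object (here a barcode reader) offers; the second module works only
// against that declaration.
interface DomainObject {
    String name();                 // concrete representation shown to the user
    List<String> operations();     // functions the programming module may combine
}

// Module 1: a concrete description of one Springboard device.
class BarcodeReader implements DomainObject {
    public String name() { return "Barcode reader"; }
    public List<String> operations() {
        return List.of("scan", "lastCode", "beep");
    }
}

// Module 2: the programming environment sees only DomainObject, so any
// device described by module 1 can be programmed in the same way.
public class PDAGraphSketch {
    public static void main(String[] args) {
        DomainObject device = new BarcodeReader();
        System.out.println(device.name() + " offers " + device.operations());
    }
}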

Language for Structured Design

We initially approached this problem in a manner similar to that which we used for robot control. We adapted the visual language Prograph by adding operations and datatypes for graphical representations of structured objects. However, even though all aspects of this language are visual, the visualisations of algorithms and objects do not mix. When viewing the algorithms, the objects are not visible, and vice versa. The only exception to this strict separation is a new graphical rewrite rule in which productions that show how an object can be transformed are directly expressed in terms of the pictorial representation of the object. We became interested in exploring the possibilities of representing algorithms and data structures homogeneously, and determined that a declarative language would most likely be well suited to this. Thus we explored the use of Lograph, a visual logic programming language, for the description of structured objects.

Our initial descriptions of the language focus on describing a particular design operation, ‘bonding,’ which fuses two components to create a new one. However, other operations are obviously necessary, and may vary from one design domain to another. In order to address this issue, we proposed a model for parametrised solid objects and operations on such objects. The aim was to concentrate on the design-space side of the interface between language and objects, so that our language for structured design could be generalised to account for any kind of transformation and manipulation of solids.
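As a very rough sketch of the bonding operation (an invented illustration; the Component type and its single length parameter are not the model proposed in this work), two parametrised components are fused into a new component whose parameters are derived from its parts:

// Sketch of 'bonding': two parametrised components are fused into a new
// component whose parameters are derived from those of its parts.
public class Bonding {
    record Component(String name, double length) {
        // Bond two components end to end, producing a new, larger component.
        Component bond(Component other) {
            return new Component(name + "+" + other.name,
                                 length + other.length);
        }
    }

    public static void main(String[] args) {
        Component beam = new Component("beam", 2.0);
        Component post = new Component("post", 1.5);
        Component frame = beam.bond(post);   // a new component, not a pair
        System.out.println(frame);           // Component[name=beam+post, length=3.5]
    }
}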

Vivid Framework

The Vivid framework is a component-based software framework designed for the development of visual programming languages. The framework implements visual language elements as components, allowing elements to be mixed and matched by researchers to develop more interesting designs. The framework is being developed in Java and uses JavaBeans as its component model.
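A visual-language element written to the JavaBeans conventions that the framework relies on might look roughly like the following (the NodeElement class and its properties are hypothetical, not part of Vivid): a public no-argument constructor plus getter/setter pairs, so that tools can discover, configure and compose the component.

import java.io.Serializable;

// A visual-language element packaged as a JavaBean: no-argument
// constructor and get/set property pairs, discoverable by bean-aware tools.
public class NodeElement implements Serializable {
    private String label = "";
    private int inputs = 0;

    public NodeElement() { }                       // required by the bean model

    public String getLabel() { return label; }
    public void setLabel(String label) { this.label = label; }

    public int getInputs() { return inputs; }
    public void setInputs(int inputs) { this.inputs = inputs; }

    public static void main(String[] args) {
        NodeElement add = new NodeElement();       // a tool would do this reflectively
        add.setLabel("add");
        add.setInputs(2);
        System.out.println(add.getLabel() + "/" + add.getInputs());
    }
}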

Robot Programming

Our work in this area began with our participation in the “VL’96 Challenge”, where we used an existing visual programming language, Prograph, and adapted it for robot programming. From this work we determined that, although we were able to adapt a general-purpose visual language for use in programming robots, more benefit could be derived from the graphical representations of a visual language if these graphics represented things more directly related to robot control, as opposed to general programming structures. Thus we developed VBBL (Visual Behaviour Based Language), a visual language which directly represents the concepts and structures involved in the subsumption architecture approach to robot control.
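As a rough, non-VBBL sketch of the subsumption idea (the behaviour names and the Java encoding are invented for illustration): behaviours are arranged in layers, and a higher-priority behaviour suppresses the output of the layers below it whenever it produces a command.

import java.util.List;
import java.util.Optional;

// Sketch of the subsumption-architecture concepts VBBL represents
// graphically: layered behaviours, with higher layers suppressing lower ones.
public class SubsumptionSketch {
    interface Behaviour {
        Optional<String> act(boolean obstacleAhead);   // a command, if this layer has one
    }

    public static void main(String[] args) {
        // Highest-priority behaviour first; "avoid" suppresses "wander".
        List<Behaviour> layers = List.of(
            obstacle -> obstacle ? Optional.of("turn-left")
                                 : Optional.<String>empty(),   // avoid obstacles
            obstacle -> Optional.of("drive-forward")           // wander
        );

        boolean obstacleAhead = true;
        String command = layers.stream()
                .map(b -> b.act(obstacleAhead))
                .filter(Optional::isPresent)
                .map(Optional::get)
                .findFirst()
                .orElse("stop");
        System.out.println(command);   // turn-left
    }
}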

Although the graphical representations in VBBL are of entities directly related to robot programming, they are still abstract entities: states, behaviours, suppressors, etc. We are exploring the possibility of directly representing the concrete entities involved, i.e. the parts of the robot itself. We developed a two-module approach, consisting of a Hardware Definition Module (HDM) and Software Definition Module (SDM). HDM is used to describe the structure, function and visual representation of a specific robot, and SDM uses this description to enable programming by direct manipulation.

Reactants

Tools for developing user interfaces have evolved to incorporate visual techniques for parameterizing controls and for specifying the structural relationships between them. The application of these visual techniques has produced measurable gains in productivity, simplicity and speed in the development of user interface structure.

Underlying these tools, however, is a non-visual representation used for defining the behavior of user interfaces. The resulting design of these development tools requires that the developer maintain two different mental models of a user interface. As both structure and behavior of an interface are often developed concurrently, a developer is required to regularly switch between these two modes of thought during development. This dual mode approach, produced by the combination of visual and non-visual representation of structure and behavior within user interface tools, results in a heavier cognitive load.

Our approach is to investigate the possibility of bridging this duality, by allowing not only the structure of a user interface but also its behavior to be created visually. To this end, we have developed the Reactant, a visual programming construct designed to provide behavioral definition of user interfaces in a visual way. Reactants combine visuality, prototypes, dataflow, and message-passing, and adopt an editing style based on direct manipulation.
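The following sketch is only a rough approximation of that combination, not the Reactants implementation itself; the Reactant class, its methods, and the click-counting example are invented for illustration. A construct receives a message, transforms it, and forwards the result along a dataflow connection to the next construct.

import java.util.function.Consumer;
import java.util.function.Function;

// Sketch of a dataflow/message-passing construct for UI behaviour:
// each element reacts to an incoming message and forwards the result
// to whatever is connected downstream.
public class ReactantSketch {
    static class Reactant<I, O> {
        private final Function<I, O> transform;
        private Consumer<O> downstream = o -> { };    // the dataflow connection

        Reactant(Function<I, O> transform) { this.transform = transform; }

        void connect(Consumer<O> next) { downstream = next; }

        void receive(I message) { downstream.accept(transform.apply(message)); }
    }

    public static void main(String[] args) {
        // "button clicked" -> count clicks -> update a label, all as connections.
        Reactant<String, String> label = new Reactant<>(text -> "Label: " + text);
        label.connect(System.out::println);

        Reactant<Integer, String> counter = new Reactant<>(n -> "clicks = " + n);
        counter.connect(label::receive);

        counter.receive(1);   // Label: clicks = 1
        counter.receive(2);   // Label: clicks = 2
    }
}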