Авторы: Andrew Blake, Michael Isard

Active Contours

The Application of Techniques from Graphics, Vision, Control Theory and Statistics to Visual Tracking of Shapes in Motion

Contents

1 Introduction
1.1 Organisation of the book
1.2 Applications

2 Active shape models
2.1 Snakes
2.2 Deformable templates
2.3 Dynamic contours

I Geometrical Fundamentals

3 Spline curves
3.1 B-spline functions
3.2 Finite bases
3.3 Multiple knots
3.4 Norm and inner product for spline functions
3.5 B-spline parametric curves
3.6 Curves with vertices
3.7 Control vector
3.8 Norm for curves
3.9 Areas and moments

4 Shape-space models
4.1 Representing transformations in shape-space
4.2 The space of Euclidean similarities
4.3 Planar affine shape-space
4.4 Norms and moments in a shape-space
4.5 Perspective and weak perspective
4.6 Three-dimensional affine shape-space
4.7 Key-frames
4.8 Articulated motion

5 Image processing techniques for feature location
5.1 Linear scanning
5.2 Image filtering
5.3 Using colour
5.4 Correlation matching
5.5 Background subtraction

6 Fitting spline templates
6.1 Regularised matching
6.2 Normal displacement in curve fitting
6.3 Recursive solution of curve-fitting problems
6.4 Examples

7 Pose recovery
7.1 Calculating the pose of a planar object
7.2 Pose recovery for three-dimensional objects
7.3 Separation of rigid and non-rigid motion

II Probabilistic Modelling

8 Probabilistic models of shape
8.1 Probability distributions over curves
8.2 Posterior distribution
8.3 Probabilistic modelling of image features
8.4 Validation gate
8.5 Learning the prior
8.6 Principal Components Analysis (PCA)

9 Dynamical models
9.1 Some simple dynamical prior distributions
9.2 First-order auto-regressive processes
9.3 Limitations of first-order dynamical models
9.4 Second-order dynamical models
9.5 Second-order AR processes in shape-space
9.6 Setting dynamical parameters

10 Dynamic contour tracking
10.1 Temporal fusion by Kalman filter
10.2 Tracking performance
10.3 Choosing dynamical parameters
10.4 Case study

11 Learning motion
11.1 Learning one-dimensional dynamics
11.2 Learning AR process dynamics in shape-space
11.3 Dynamical modes
11.4 Performance of trained trackers

12 Non-Gaussian models and random sampling algorithms
12.1 Factored sampling
12.2 The Condensation algorithm
12.3 An observation model
12.4 Applications of the Condensation algorithm

Appendix

A Mathematical background
A.1 Vectors and matrices
A.2 B-spline basis functions
A.3 Probability

B Stochastic dynamical systems
B.1 Continuous-time first-order dynamics
B.2 Second-order dynamics in continuous time
B.3 Accuracy of learning

C Further shape-space models
C.1 Recursive synthesis of shape-spaces

Glossary of notation

Bibliography

Author Index

Index

Chapter 1

Introduction

Psychologists of vision have delighted in various demonstrations in which prior knowledge helps with interpreting an image. Sometimes the effects are dramatic, to the point that the viewer can make no sense of the image at all until, when cued with a single word, the object pops out of the image. Priming in that example is rather “high-level”, calling on some intricate and diverse common-sense knowledge concerning wheels, hats and so on. The aim of this book is to look at how prior knowledge can be applied in machine vision at the lower level of shapes and outlines.

The attraction of using prior knowledge in machine vision is simply that it is so hard to make progress without it, as a decade or more of research around the 1970s showed. There was considerable success in converting images into something like line drawings without resorting to any but the most general prior knowledge about smoothness and continuity. That led to the problem of “grouping” together the lines belonging to each object, which is difficult in principle and very demanding of computing power. One effective escape from this bind has been to design vision processes in a more goal-directed fashion, and this is part of the philosophy of the notably successful “Active Vision” paradigm of the 1980s. Consider the task of examining visually the field of view immediately in front of a driverless vehicle, in order to steer automatically along the road. If the nature of the task is taken into account from the outset, it is quite unnecessary to examine an entire image; it is sufficient to focus on the expected appearance and position of the road edge at successive times. Deviations of actual from expected position can be treated as an error signal to control steering. This has two great advantages. First, there is no need to organise or group features in the image; the relevant area of the image is simply tested directly against its expected appearance. Secondly, the fact that analysis can be restricted to a relatively narrow “region of interest” (around the road edge) eases the computational load. Active Vision, then, uses task-related prior knowledge to simplify and focus the processing that is applied to each image.

This book is concerned with the application of prior knowledge of a particular kind, namely geometrical knowledge. The aim is to strengthen the visual interpretation of shape via the stabilising influence of prior expectations of the shapes that are likely to be seen. There have been many influences in the development of this approach and two in particular are outstanding. First is the seminal work in 1987 of M. Kass, A. Witkin and D. Terzopoulos on “snakes”, which represented a fundamentally new approach to visual analysis of shape. A snake is an elastic contour which is fitted to features detected in an image. The nature of its elastic energy draws it more or less strongly to certain preferred configurations, representing prior information about shape which is to be balanced with evidence from an image. If inertia is also attributed to a snake, it acquires dynamic behaviour which can be used to apply prior knowledge of motion, not just of shape. Snakes are described in detail in the next chapter. The second outstanding influence is “Pattern Theory”, founded by U. Grenander in the 70s and 80s and a popular basis for image interpretation in the statistical community. It puts the treatment of prior knowledge about shape into a probabilistic context by regarding any shape as the result of applying some distortion to an ideal prototype shape. The nature and extent of the distortion is governed by an appropriate probability distribution which then effectively defines the range of likely shapes.
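The prototype-plus-distortion view can be sketched in a few lines of code. The circular prototype, the Gaussian distortion model and the parameter values below are invented for illustration; the book's own distortion distributions are developed in part II.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prototype: control points of an ideal closed contour (a unit circle here).
t = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
prototype = np.stack([np.cos(t), np.sin(t)], axis=1)

def sample_shape(prototype, sigma):
    """Draw one shape from the prior: the ideal prototype plus a Gaussian
    distortion of its control points; sigma governs how far likely shapes
    stray from the ideal."""
    return prototype + rng.normal(0.0, sigma, size=prototype.shape)

samples = [sample_shape(prototype, sigma=0.05) for _ in range(100)]
spread = float(np.mean([np.linalg.norm(s - prototype) for s in samples]))
```

Tightening `sigma` concentrates the prior on shapes close to the prototype; widening it admits larger deformations.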

Defining a prior distribution for shape is only part of the problem. The complete image interpretation task is to modify the prior distribution to take account of image features, arriving at a “posterior” distribution for what shape is actually likely to be present in a particular image. Mechanisms for fusing a prior distribution with “observations” are of crucial importance. Suffice it to say here that a key idea of pattern theory is “recognition by synthesis,” in which predictions of likely shapes, based on the prior distribution, are tested against a particular image. Any discrepancy between what is predicted and what is actually observed can be used as an error signal, to correct the estimated shape. Fusion mechanisms of this general type exist in the snake, in the ubiquitous “Kalman filter” described in the next chapter, and in other more general forms described later in the book.
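The fusion mechanism — a prediction corrected by a discrepancy acting as an error signal — can be sketched in one dimension. This minimal Kalman-style estimator uses a constant-state model and invented noise levels; it illustrates only the predict-and-correct idea, not the contour trackers developed later in the book.

```python
import numpy as np

def kalman_1d(observations, x0, P0, q, r):
    """Minimal 1-D Kalman filter for a constant state: at each step the
    prediction is corrected by the innovation (observed minus predicted),
    weighted by a gain balancing prior and observation uncertainty."""
    x, P = x0, P0
    estimates = []
    for z in observations:
        P = P + q                 # predict: uncertainty grows slightly
        K = P / (P + r)           # gain: how much to trust the observation
        x = x + K * (z - x)       # correct by the discrepancy (error signal)
        P = (1.0 - K) * P
        estimates.append(x)
    return np.array(estimates)

# Noisy observations of a true value of 5.0 (all figures invented).
rng = np.random.default_rng(1)
z = 5.0 + rng.normal(0.0, 0.5, 200)
est = kalman_1d(z, x0=0.0, P0=1.0, q=1e-4, r=0.25)
```

Starting from a poor prior estimate of 0.0, the corrections driven by the innovation pull the estimate towards the true value as observations accumulate.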

1.1 Organisation of the book

The organisation of material in the book is as follows. This chapter concludes by illustrating a range of applications and the next introduces active contour models. The book is then divided into two parts. Part I deals with the fundamentals of representing curves geometrically using splines, including basic machinery for least-squares approximation of spline functions, an essential topic not normally dealt with in graphics texts. Chapter 4 lays out a design methodology for linear, image-based, parametric models of shape, an important tool in applying shape constraints. Then algorithms for image processing and fitting splines to image features are introduced, leading to practical deformable templates in chapter 6. At this stage, a tool-set has been amassed sufficient for fitting curves to individual images, under a whole spectrum of prior assumptions, ranging from the least constrained snake to a two-dimensional rigid template. The treatment of part I aims to be thorough and complete, accessible to readers who are not necessarily familiar with the techniques of computer vision, given just a reasonable background in computing and vector algebra. (Appendix A reviews the necessary background in vectors and matrices, and gives some additional implementation details on spline curves.)

Part II develops the probabilistic treatment of shape and motion. It begins (chapter 8) by reinterpreting the deformable templates of part I, in probabilistic terms. This is extended to dynamical models in chapter 9, as a preparation for fully probabilistic dynamical contour tracking, by Kalman filter, in chapter 10. By this stage, there are numerous parameters to be chosen to build a competent tracker and clear design guidelines are given on setting those parameters and on their intuitive physical interpretations. The most effective dynamical models derive, however, from learning procedures, as described in chapter 11, in which tracking performance improves automatically with experience. Finally, probabilistic modelling up to this point has been based on Gaussian distributions. Chapter 12 shows that for the hardest tracking problems, involving dense background clutter, non-Gaussian models are essential. They can be applied via random sampling algorithms, at increased computational cost, but to very considerable effect in terms of enhanced robustness.

As far as writing conventions go, references to books and papers have been kept out of the main text, to improve readability, and collected in separate bibliographic notes, appearing at the end of each chapter. These notes give sources for the ideas introduced in the body of the text and pointers to references on related ideas. Again for readability, mathematical derivations are kept from intruding on the main text by the use of two devices. The most important derivations are sandwiched (stealing a convention from Knuth’s TeX manual) between double-bend and all-clear road signs in the margins. These are optional reading for those who want the mathematical details. Still more optional are the results and proofs in appendix B which support chapter 9 on dynamical models.

1.2 Applications

A decade ago, it seemed unlikely that the research effort invested in Computer Vision would be harvested practically in the foreseeable future. Partly this reflected the lack of computational power of hardware available at the time, a limitation which has been greatly eased by the passing years. Partly though it was the result of an ambitious view of the problems of vision, in which the aim was to build a general purpose vision engine, rather than particular applications. More recently, that view has been rather overtaken by a more focused, algorithmically driven approach. The result is that Computer Vision ideas are working their way into a variety of practical applications, particularly in the areas of robotics, medical imaging and video technology.

The active contour approach is a prime candidate for practical exploitation. This is because active contours make effective use of specific prior information about objects and this makes them inherently efficient algorithms. Furthermore, active contours apply image processing selectively to regions of the image, rather than processing the entire image. This enhances efficiency further, allowing, in many cases, images to be processed at the full video rate of 50/60 Hz. Incidentally, the ability to do vision at real-time rate has an important spin-off in stiffening criteria of acceptability, amounting to a qualitative re-evaluation of standards. As an example, an algorithm that locates the outline of a mouth in a single image nine times out of ten might be considered quite successful. Let loose on a real-time image sequence of a talking mouth, this is re-interpreted as abject failure — the mouth is virtually certain to be “lost” within a second or so, and the loss is usually unrecoverable. The ability to follow the mouth while it speaks an entire paragraph, tracking through perhaps 1000 video frames is an altogether more stringent test.

Ten examples of applications follow. Earlier ones are already promising candidates for commercial application while later ones are more speculative.

Actor-driven facial animation

A deforming face is reliably tracked to relay information about the variation over time of expression and head position to a Computer Graphics animated face. The relayed expression can be reproduced or systematically exaggerated. Tracking can be accomplished in real time, keeping pace with the rate of acquisition of video frames, so the actor can be furnished with valuable visual feedback. Systems currently available commercially rely on markers affixed to the face. Visual contour tracking allows marker-free monitoring of expression, given a modicum of make-up applied to the face, something to which actors are well accustomed. An example of real-time re-animation is illustrated for a cartoon cat in figure 1.2. This was done using two SGI INDY workstations, linked by network, one for visual tracking and one for mapping tracked motion onto the cat animation channels and display.

Traffic monitoring

Roadside video cameras are already familiar in systems for automated speed checks. Overhead cameras, sited on existing poles, can relay information about the state of traffic — its density and speed — and anomalies in traffic patterns. Contour tracking is particularly suited to this task because vehicle outlines form a tightly constrained class of shapes, undergoing predictable patterns of motion. Already the state of California has sponsored research leading to successful prototype systems.

Work in our laboratory, monitoring the motion of traffic along the M40 motorway near Oxford, is illustrated in figure 1.3. Vehicle velocity is estimated by recording the distance traversed by the base of a tracked vehicle contour over a known elapsed time. The measured distance is in image coordinates and this must be converted to world coordinates to give true distance. The mapping between coordinate systems is determined as a projective mapping between the image plane and the ground plane. The mapping is calibrated in standard fashion from the corners of a rectangle on the ground of known dimensions (known by reference to roadside markers which are standard fittings on British motorways), and the corresponding rectangle in the image plane, as in figure 1.4. Analysis of speeds shows clearly a typical pattern of UK motorway traffic with successively increasing vehicle speeds towards the centre lanes of the carriageway.
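The projective mapping between image plane and ground plane described above can be sketched as a standard four-point homography calibration by the direct linear transform (DLT). The pixel coordinates and rectangle dimensions below are invented for illustration; the actual M40 calibration used roadside markers of known spacing.

```python
import numpy as np

def homography_from_points(img_pts, world_pts):
    """Estimate the 3x3 homography taking image points to ground-plane
    points, by the direct linear transform on four correspondences."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null vector holds the homography entries
    return H / H[2, 2]

def to_ground(H, pt):
    """Map an image point to ground-plane coordinates (homogeneous divide)."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

# Hypothetical calibration: image corners (pixels) of a ground rectangle
# measuring 10 x 50 yards -- all figures invented for this sketch.
img_corners = [(100, 400), (540, 400), (420, 120), (220, 120)]
world_corners = [(0, 0), (10, 0), (10, 50), (0, 50)]
H = homography_from_points(img_corners, world_corners)

# Speed = ground distance traversed by the tracked contour base / elapsed time.
p0 = to_ground(H, (300, 380))
p1 = to_ground(H, (310, 200))
yards = np.linalg.norm(p1 - p0)
mph = yards / 4.0 * 3600 / 1760       # 4 s elapsed (invented); yards/s -> mph
```

Mapping each calibration corner through `H` recovers its world coordinates exactly, which is a useful sanity check on the calibration.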

Automatic crop spraying

The use of chemical sprays in agriculture has increasingly become a target of legislators. It is clearly also highly desirable to ensure that toxic chemicals used to control weeds are directed away from plants intended for human consumption. Segmentation of video images on the basis of colour can be an effective means of visually separating plant from soil but is disrupted by shadows cast by the moving tractor. Contour tracking, as in figure 1.6, offers an alternative means of detecting plants that is somewhat immune to such disruption.

Robot grasping

The use of vision in robotics is commonplace in commercial practice, both for inspection and for coordination of grasp. Figure 1.7 shows an experimental system designed for use with a camera mounted on the robot’s wrist, to determine stable two-fingered grasps. A snake is used to capture the outline shape, and geometric calculations along the B-spline curve, using first and second derivatives to calculate orientation and curvature, establish a set of safe grasps.
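The geometric calculations along the B-spline can be sketched as follows, assuming SciPy's parametric spline routines (`splprep`/`splev`) stand in for the book's own spline machinery: tangent orientation comes from the first derivative and signed curvature from the standard first/second-derivative formula. The radius-2 test circle is invented for the sanity check.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def orientation_and_curvature(x, y, u):
    """Fit a periodic cubic B-spline to a closed contour and return tangent
    orientation and signed curvature at parameter values u, computed from
    first and second derivatives along the curve."""
    tck, _ = splprep([x, y], s=0, per=True)       # interpolating, closed curve
    dx, dy = splev(u, tck, der=1)
    ddx, ddy = splev(u, tck, der=2)
    theta = np.arctan2(dy, dx)                    # tangent orientation
    kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return theta, kappa

# Sanity check on a circle of radius 2: curvature should be close to 1/2.
t = np.linspace(0.0, 2.0 * np.pi, 41)             # closed: ends coincide
theta, kappa = orientation_and_curvature(2 * np.cos(t), 2 * np.sin(t),
                                         np.linspace(0.0, 1.0, 100))
```

A grasp planner can then search such curvature profiles for pairs of opposed, suitably flat contour segments.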

(Vehicle speed measurements from the traffic-monitoring example.)

region  start    exit     distance  speed  av spd
(lane)  (sec)    (sec)    (yards)   (mph)  (mph)

1       269.28   273.96   132       58
1       275.92   279.72   127       68
1       297.86   301.56   129       72
1       303.96   308.40   130       60     68
1       314.12   317.24   133       87
1       321.76   325.24   126       74
1       330.20   334.04   132       70
1       343.16   347.58   123       57

2       687.38   692.18   158       67
2       708.46   712.36   164       86
2       727.26   731.20   155       80     76
2       733.12   737.72   164       73
2       749.12   753.64   169       77

3       506.78   510.66   156       83     79
3       513.04   517.04   148       75

A tractor equipped for video analysis has the task of spraying earth and plants automatically, using an array of independently controlled spray nozzles. Plants can be segmented dynamically from the earth and weeds around them, allowing the spraying of fertiliser and weed-killer to be directed onto or away from plants as appropriate. (Figures courtesy of David Reynard, Andrew Wildenberg and John Marchant.)

Surveillance

A combination of visual motion sensing and contour tracking is used to follow an intruder on a security camera in figure 1.8. The camera is mounted on a computer controlled pan-tilt platform driven by visual feedback from the tracked contour.

Biometrics: body motion

This application (figure 1.9) involves the measurement of limb motion for the purposes of analysis of gait as a tool for planning corrective surgery. The tool is also useful for ergonomic studies and anatomical analysis in sport. It is related to the facial animation application above, but more taxing technically. Again, marker based systems exist and are commercially successful as measurement tools both in biology and medicine but it is attractive to replace them with marker-free techniques. There are also increasingly applications in Computer Graphics for whole body animation. Capture of the motion of an entire body from its outline looks feasible but several problems remain to be solved: the relatively large number of degrees of freedom of the articulating body poses stability problems for trackers; the agility of, say, a dancing figure requires careful treatment of “occlusion” — periods during which some limbs and body parts are obscured by others.

Audio-visual speech analysis

Automatic speech-driven dictation systems are now available commercially with large vocabularies though often restricted to separately articulated words. The functioning of such a system is dependent on very reliable recognition of a small set of keywords. In practice, adequately reliable keyword recognition has been realised in low-noise environments but is problematic in the presence of background noise, especially cross-talk from other speakers. Independent experiments in several laboratories have suggested that lip-reading has an important role to play in augmenting the acoustic signal with independent information that is immune to cross-talk. Active contour tracking has been shown to be capable of providing this information (figures 1.10 and 1.11), robustly and in real time, resulting in substantial improvements in recognition-error rates.

Medical diagnosis

Ultrasound scanners are medical diagnostic imaging devices that are very widely available owing to their low cost. They are especially suited to dynamic analysis owing to their ability to deliver real-time video sequences. There are numerous potential applications for automated analysis of the real-time image sequences, for example the analysis of abnormalities in cardiac action as in figure 1.12. Noisy artifacts — ultrasound speckle — make these images especially hard to analyse. In this context, active contours are particularly powerful because speckle-induced error tends to be smoothed by the averaging along the contour that is a characteristic of active contour fitting. Broadly tuned, learned models of motion are used in tracking as prior constraints on the moving subject, to aid automated perception. The research issue here is how to learn more finely tuned models to classify normal and aberrant motions.

Another important imaging modality for medical applications is “Magnetic Resonance Imaging” (MRI). It is an expensive technology, but popular because it is as benign as ultrasound, yet as detailed as tomographic X-rays. Applications are pervasive, and one specific example concerning measurements of the cerebral hemispheres of the brain is illustrated in figure 1.13. In each of successive slices of the brain image, two separate snakes lock onto the outlines of the left and the right hemispheres. Geometric coherence in successive slices means that a fitted snake from one slice can be used as the initial snake for the next. The entire fitting process can therefore be initialised by hand fitting snakes around outlines in the first slice. The degree of symmetry of the reconstructed hemispheres has been proposed as a possible diagnostic indicator for schizophrenia.

Automated video editing

It is standard practice to generate photo-composites by “blue-screening” in which a foreground object, photographed in motion against a blue background, is isolated electronically. It can then be superimposed against a new background to create special effects. Contour tracking raises the possibility of doing this with objects photographed against backgrounds that have not been prepared specially in any way, as in figure 1.14. This increases the versatility of the technique and raises the possibility of extracting moving objects from existing footage for re-incorporation in new video sequences. In a second example, the motion of a cluster of leaves is not only tracked, but also interpreted as a three-dimensional displacement, so that a computer-generated object can be “hung” from the cluster and added to the animation. This is achieved despite the heavy clutter in the background that makes tracking harder by tending to camouflage the moving leaves.

User interface

The use of body parts as input devices for graphics has of course been thoroughly explored in “Virtual Reality” applications. Current devices such as data-gloves and infra-red helmets are cumbersome and restrictive to the wearer. Visual tracking technology raises the possibility of flexible, non-contact input devices. One aim is to use tracking to realise the “digital desk” concept in which a user manipulates a mixture of real and virtual documents on a desk, the virtual ones generated by an overhead video-projector.

(Figure caption: the hand translates in x, y and z and rotates, and the object follows the hand’s motion; with the thumb closed the object is “locked” while the hand returns to the start; with the thumb open the object again follows the hand, translating and rotating.)

Bibliographic notes

Despite enormous research effort, the pinnacle of which is represented by (Marr, 1982), the goal of defining general low-level processes for vision has proved obstinate and elusive. Much effort was directed towards finding significant features in images. The theory and practice of image-feature detection is very fully developed — some of the landmarks include (Roberts, 1965; O’Gorman, 1978; Haralick, 1980; Marr and Hildreth, 1980; Canny, 1986; Perona and Malik, 1990) on feature detection and (Montanari, 1971; Ramer, 1975; Zucker et al., 1977) on grouping them into linear structures. See also (Ballard and Brown, 1982) for a broad review. The challenge lies in recovering features undamaged and free of breaks, and in successfully grouping them according to the object to which they belong. In some cases subsequent processes can tolerate errors — gaps in contours and spurious fragments — and this is particularly true of certain approaches to object recognition, for instance (Ballard, 1981; Grimson and Lozano-Perez, 1984; Faugeras and Hebert, 1986; Mundy and Heller, 1990). Another important theme in “low-level” vision has been matching using features, including (Baker and Binford, 1981; Buxton and Buxton, 1984; Grimson, 1985; Ohta and Kanade, 1985; Pollard et al., 1985; Ayache and Faverjon, 1987; Belhumeur, 1993), mostly applied to matching pairs of stereoscopic images.

One notably successful reaction against the tyranny of low-level vision was “active vision” (Aloimonos et al., 1987; Bajcsy, 1988) whose progress and achievements are reviewed in (Blake and Yuille, 1992; Aloimonos, 1993; Brown and Terzopoulos, 1994). Another radical departure was the “snake”, for which the original paper is (Kass et al., 1987), and many related papers are given in the bibliography to the following chapter. Pattern theory is a general statistical framework that is important in the study of active contours. It was developed over a number of years by Grenander (Grenander, 1981), and a lucid summary and interpretation can be found in (Mumford, 1996). Again, many related papers following the pattern theory approach are given in the course of the book.

Applications

Actor-driven animation is a classic application for virtual reality systems. Tracking of changing expressions can be done using VR hardware, or visually with reflective markers (Williams, 1990), using active contours (Terzopoulos and Waters, 1990; Terzopoulos and Waters, 1993; Lanitis et al., 1995) or using so-called “optical flow” (Essa and Pentland, 1995; Black and Yacoob, 1995). Underlying muscular motion may be modelled to constrain tracked expressions.

Traffic monitoring is firmly established as a viable application for machine vision, for traffic information systems, non-contact sensors, and autonomous vehicle control (Dreschler and Nagel, 1981; Dickmanns and Graefe, 1988a; Sullivan, 1992; Dickmanns, 1992; Koller et al., 1994; Ferrier et al., 1994). Projective (homogeneous) transformations (Mundy and Zisserman, 1992; Foley et al., 1990) are used for the conversion between image and world coordinates.

Automated crop-handling based on vision has become a realistic possibility in the last decade (Marchant, 1991; Pla et al., 1993), and active contour tracking has a role to play here (Reynard et al., 1996).

A series of theories of determining stable grasps based on an outline have been proposed (Faverjon and Ponce, 1991; Blake, 1992; Rimon and Burdick, 1995a; Rimon and Burdick, 1995b; Rimon and Blake, 1996; Ponce et al., 1995; Davidson and Blake, 1998) and are particularly suited to real-time grasp planning with active contours (Taylor et al., 1994).

A pioneering advance in the visual tracking of human motion was Hogg’s “Walker” (Hogg, 1983) which used an articulated model of limb motion to constrain search for body parts. Active contours have been applied with some success to tracking whole bodies and body parts (Waite and Welsh, 1990; Baumberg and Hogg, 1994; Lanitis et al., 1995; Goncalves et al., 1995), though methods based on point features can also be useful for coarse tracking (Rao et al., 1993; Murray et al., 1993).

Audio-visual speech analysis, or speech-reading, has been the subject of psychological study for some time (Dodd and Campbell, 1987). The computational problem has received a good deal of attention recently, using both active contours (Bregler and Konig, 1994; Bregler and Omohundro, 1995; Kaucic et al., 1996) and methods based more directly on image intensities (Petajan et al., 1988), or using artificial facial markers (Finn and Montgomery, 1988; Stork et al., 1992). Generally, as in conventional speech recognition, Hidden Markov Models (HMMs) (Rabiner and Bing-Hwang, 1993) are used for classification of utterances, e.g. (Adjoudani and Benoit, 1995).

Several researchers have investigated the application of active contours to the interpretation of medical images, for example (Amini et al., 1991; Ayache et al., 1992; Cootes et al., 1994).

The technique of rotoscoping allows film-makers to transfer a complex object from one image sequence to another. This can be done automatically using blue-screening (Smith, 1996) if the object can be filmed against a specially prepared background.

Computer-aided techniques for object segmentation are also of great interest for augmented reality systems, which attach computer-generated imagery to real scenes. Traditionally mechanical or magnetic 3D tracking devices have been used (Grimson et al., 1994; Pelizzari et al., 1993; Wloka and Anderson, 1995) to solve this problem, but they are inaccurate and cumbersome. Vision-based tracking has been used instead (Kutulakos and Valliano, 1996; Uenohara and Kanade, 1995; State et al., 1996; Heuring and Murray, 1996), especially for medical applications, mostly restricted to tracking artificial markers. Graphical objects can be made to pass behind real ones (State et al., 1996), by building models of the real-world objects off-line, using scanned range maps.

Effective ways of using a gesturing hand as an interface are yet to be generally established. One very appealing paradigm is the “digital desk” (Wellner, 1993) in which moving hands interact both with real pieces of paper and with virtual (projected) ones, on the surface of a real desk. Other body parts may also be useful for controlling graphics, for instance head (Azarbayejani et al., 1993) and eyes (Gee and Cipolla, 1994). Gestures need not only to be tracked but also to be interpreted, by classifying segments of trajectories, either in configuration space or phase space (Mardia et al., 1993; Campbell and Bobick, 1995; Bobick and Wilson, 1995). This is related both to classification of speech signals (see above) and to classification of signals in other domains, such as electro-encephalograph (EEG) in sleep (Pardey et al., 1995).

Chapter 2

Active shape models

Active shape models encompass a variety of forms, principally snakes, deformable templates and dynamic contours. Snakes are a mechanism for bringing a certain degree of prior knowledge to bear on low-level image interpretation. Rather than expecting desirable properties such as continuity and smoothness to emerge from image data, those properties are imposed from the start. Specifically, an elastic model of a continuous, flexible curve is imposed upon and matched to an image. By varying elastic parameters, the strength of prior assumptions can be controlled. Prior modelling can be made more specific by constructing assemblies of flexible curves in which a set of parameters controls kinematic variables, for instance the sizes of various subparts and the angles of hinges which join them. Such a model is known as a deformable template, and is a powerful mechanism for locating structures in an image.

Things become more difficult when it is necessary to locate moving objects in image sequences — the problem of tracking. This calls for dynamic modelling, for instance invoking inertia, restoring forces and damping, another key component of the original snake conception. We refer to curve trackers that use prior dynamical models as “dynamic contours.” Later parts of the book are all about understanding, specifying and learning dynamical prior models of varying strength, and applying them in dynamic contour tracking.

2.1 Snakes

The art of feature detection has been much studied (see bibliographic notes for previous chapter). The principle is that a “mask” or “operator” is designed which produces an output signal which is greatest wherever there is a strong presence in an image of a feature of a particular chosen type. The result is a new image or “feature map” which codes the strength of response for the chosen feature type, at each pixel. Examples of feature maps for three different kinds of feature are illustrated in figure 2.1. Details of the designs of masks and the application to images by digital convolution are given in chapter 5. For now it is sufficient to say that the operator is a sub-image which is scanned over an image using “mathematical correlation” or “convolution” (this is explained in chapter 5). The mask is a prototype image, typically of small size, of the feature being sought: for a valley feature, for instance, the mask would be a V-shaped intensity function. The output of the correlation process is a measure of goodness of fit of the prototype to the image, in each of the image locations evaluated.

However, feature maps are only the beginning. They enhance features of the desired type but do not unambiguously detect them. Detection requires a decision to be made at each pixel, the simplest decision rule being that a feature is marked wherever feature strength exceeds some preset threshold. A constant threshold is rarely adequate except for the simplest of situations such as an opaque object on a back-lit table, as commonly used in machine vision systems. However, the features on a face cannot be back-lit and, if the threshold is set high, gaps appear in edges. If the threshold is low, spurious edges appear, generated by fine texture. Often no happy medium exists. More subtle decision schemes than simple thresholds have been explored but after around two decades of concerted research effort, one cannot expect to do very much better than the example in figure 2.2. The main structure is present but the topology of hand contours is disrupted by gaps and spurious fragments.

The lesson is that "low-level" feature detection processes are effective up to a point but cannot be expected to retrieve entire geometric structures. Snakes constitute a fundamentally new approach to dealing with these limitations of low-level processing. The essential idea is to take a feature map F(r) like the ones in figure 2.1, and to treat −F(r) as a "landscape" on which the snake, a deformable curve r(s), 0 < s < 1, can slither. For instance, a filter that gives a particularly high output where image contrast is high will tend to attract a snake towards object edges. Equilibrium equations for r(s) are set up in such a way that r(s) tends to cling to high responses of F, that is, maximising F(r(s)) over 0 < s < 1, in some appropriate sense. This tendency to maximise F is formalised as the "external" potential energy of the dynamical system. It is counterbalanced by "internal" potential energy which tends to preserve smoothness of the curve. The equilibrium equation balances the external force against the internal forces:

∇F(r) + (w1 r_s)_s − (w2 r_ss)_ss = 0.    (2.1)

(Note: s and t subscripts denote differentiation with respect to space and time, and ∇F is the spatial gradient of F.) If (2.1) is solved iteratively, from a suitable initial configuration, the snake will tend to settle on a ridge of the feature map F. The coefficients w1 and w2, which must be positive, govern the restoring forces associated with the elasticity and stiffness of the snake respectively. Either of these coefficients may be allowed to vary with s, along the snake. For example, allowing w2 to dip to 0 at a certain point s = s0 will allow the snake to kink there, as illustrated at the mouth corners. Increasing w2 encourages the snake to be smooth, like a stiff but flexible rod, but also increases its tendency to regress towards a straight line.


[Figure: initial configuration (left) and final configuration (right). The eyebrow snake moves over an edge-feature map. The mouth snake is also attracted to edge-features; smoothness constraints are suspended at mouth corners, to allow the snake to kink there. Given that the strongest feature on the nose is a ridge, the nose snake is chosen to be attracted to ridges.]

Increasing w1 makes the snake behave like stretched elastic, which encourages an even parameterisation of the curve, but increases the tendency to shortness, even collapsing to a point unless counterbalanced by external energy or constraints.

Discrete approximation

Practical computations of r(s) must occur over discrete time and space, and approximate the continuous trajectories of (2.1) as closely as possible. The original snake represented r(s) by a sequence of samples at s = s_i, i = 1, ..., N, spaced at intervals of length h, and used "finite differences" to approximate the spatial derivatives r_s and r_ss:

r_s(s_i) ≈ (r(s_i) − r(s_{i−1})) / h  and  r_ss(s_i) ≈ (r(s_{i+1}) − 2 r(s_i) + r(s_{i−1})) / h²,

and solved the resulting simultaneous equations in the variables r(s_1), ..., r(s_N). The system of equations is "sparse," so that it can be solved efficiently, in time O(N) in fact.
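Under the semi-implicit time-stepping scheme commonly used for snakes (Kass et al., 1987), each iteration solves a sparse linear system built from these finite-difference stencils. The following is a minimal sketch for a closed snake with constant coefficients and h = 1; the function name, parameter values and the external-force callback grad_F are illustrative, not taken from the book:

```python
import numpy as np

def snake_step(r, grad_F, w1=0.1, w2=0.05, gamma=1.0):
    """One semi-implicit iteration for a closed snake, sample spacing h = 1.

    r      : (N, 2) array of contour samples r(s_i)
    grad_F : callback returning the (N, 2) external force, the gradient
             of the feature map F evaluated at each sample
    w1, w2 : elasticity and stiffness coefficients (constant here)
    gamma  : step-size parameter of the semi-implicit scheme
    """
    N = len(r)
    A = np.zeros((N, N))
    for i in range(N):
        # elastic force -w1 r_ss, finite-difference stencil (-1, 2, -1)
        for off, c in zip((-1, 0, 1), (-1.0, 2.0, -1.0)):
            A[i, (i + off) % N] += w1 * c
        # stiffness force +w2 r_ssss, stencil (1, -4, 6, -4, 1)
        for off, c in zip((-2, -1, 0, 1, 2), (1.0, -4.0, 6.0, -4.0, 1.0)):
            A[i, (i + off) % N] += w2 * c
    # semi-implicit step: (A + gamma I) r_new = gamma r_old + grad F(r_old)
    return np.linalg.solve(A + gamma * np.eye(N), gamma * r + grad_F(r))

# usage: with zero external force the internal forces smooth and shrink the contour
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
shrunk = snake_step(circle, lambda pts: np.zeros_like(pts))
```

With the external force set to zero, the internal forces alone contract the contour towards its centroid, illustrating the tendency to shortness noted above.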

In finite difference approximations, the variables r(s_i) are samples of the curve r(s) at certain discrete points, conveying no information about curve shape between samples. Modern numerical analysis favours the "finite element" method in which the variables r(s_i) are regarded as "nodal" variables or parameters from which the continuous curve r(s) can be completely reconstructed. The simplest form of finite-element representation for r(s) is as a polygon with the nodal variables as vertices. Smoother approximations can be obtained by modelling r(s) as a polynomial "spline curve" which passes near but not necessarily through the nodal points. This is particularly efficient because the spline maintains a degree of smoothness, a role which otherwise falls entirely on the spatial derivative terms. The practical upshot is that with B-splines the smoothness terms can be omitted, allowing a substantial reduction in the number of nodal variables required, and improving computational efficiency considerably. For this reason, the B-spline representation of curves is used throughout this book. Details are given in chapter 3.

Robustness and stability

Regularising terms in the dynamical equations are helpful to stabilise snakes but are rather restricted in their action. They represent very general constraints on shape, encouraging the snake to be short and smooth. Very often this is simply not enough, and more prior knowledge needs to be compiled into the snake model to achieve stable behaviour. Consider the following example, in which a snake is set up with internal constraints reined back to allow it to follow the complex outline of a leaf. In fact it is realised as a B-spline snake with sufficient control points to do justice to the geometric detail of the complex shape. Suppose now the snake is required to follow an image sequence of the leaf in motion, seeking energy minima repeatedly, on successive images in the sequence. If all those control points are allowed to vary somewhat freely over time, the tracked curve can rapidly tie itself into unrecoverable knots, as the figure shows. This is a prime example of the sort of insight that can be gained from real-time experimentation. A regular snake, with suitably chosen internal energy, may succeed in tracking several dozen frames off-line. However, once tracking is seen as a continuous process, and this is the viewpoint that real-time experiments enforce, the required standards of robustness are altogether more stringent. What was an occasional failure, in one computation out of every few, becomes virtually certain eventual failure once the real-time process is allowed to run. It is of paramount importance that recovery from transients, such as a gust of wind causing the leaf to twitch, is robust.

This need for robustness is what drives the account of active contours given in this book. General mechanisms for setting internal shape models are not sufficient. Finely tunable mechanisms are needed, representing specific prior knowledge about classes of objects and their motions. The book aims to give a thorough understanding of the components of such models, initially in geometric terms, and later in terms of probability, as a means of describing families of plausible shapes and motions.

2.2 Deformable templates

The prior shape constraints implicit in a snake model are soft, encouraging rather than enforcing a particular class of favoured shapes. What is more, those favoured shapes have rather limited variety. For example, in the case that w1 = 0, they are solutions of r_ss = 0, which are simply straight lines. Models of more specific classes of shapes demand some use of hard constraints, and "default" shapes more interesting than a simple straight line. This can be achieved by using a parametric shape-model r(s; X), with relatively few degrees of freedom, known as a "deformable template." The template is matched to an image, in a manner similar to the snake, by searching for the value of the parameter vector X that minimises an external energy Eext(X). Internal energy Eint(X) may be included as a "regulariser" to favour certain shapes.

As an example of a deformable template, consider Yuille and Hallinan's eye template, illustrated with its parameterisation and the results of fitting to an image of a face. The template r(s; X) has a total of 11 geometric parameters in the parameter vector X and it varies non-linearly with X. The non-linearity is evident because, for example, one of the parameters is an angle θ whose sine and cosine appear in the functional form of r(s; X). The bounding curves of the eye are parabolas which also vary non-linearly, as a function of length parameters a, b and c. The internal energy Eint(X) is a quadratic function of X that encourages the template to relax back to a default shape. The external energy Eext(X) comprises a sum of various integrals over the image-feature maps for edges, ridges and valleys. Each integral is taken over one of the two regions delineated by the eye model or along a template curve, which causes Eext to vary with X. Finally the total energy is minimised by iterative, non-linear gradient descent which will tend to find a good minimum, in the sense of giving a good fit to image data, provided the initial configuration is not too far from the desired final fit.

A methodology for setting up linearly parameterised deformable templates — we term them “shape-spaces” — will be described in chapter 4. Restriction to linear parameterisation has certain advantages in simplifying fitting algorithms and avoiding problems with local minima. It is nonetheless surprisingly versatile geometrically. It should be pointed out that some elegant work has been done with three-dimensional parametric models (see bibliographic notes) but this is somewhat outside the scope of this book. Here we deal with three-dimensional motion by modelling directly its effects on image-based contour models using “affine spaces” amongst other devices.

2.3 Dynamic contours

Active contours can be applied either statically, to single images, or dynamically, to temporal image sequences. In dynamic applications, an additional layer of modelling is required to convey any prior knowledge about likely object motions and deformations. Now both the active contour r(s, t) and the feature map F(t) vary over time. The contour r(s, t) is drawn towards high responses of F(t) as if it were riding the crest of a wave on the feature map. The equation of motion for such a system extends the snake in (2.1) with additional terms governing inertia and viscosity, balancing the inertial and viscous forces against the internal and external forces:

ρ r_tt + γ r_t = ∇F(r) + (w1 r_s)_s − (w2 r_ss)_ss.

This is Newton's law of motion for a snake with mass, driven by internal and external forces. New coefficients, in addition to the elastic coefficients w1 and w2, are ρ, the mass density, and γ, the coefficient of viscous resistance from a medium surrounding the snake. Given that all coefficients are allowed to vary spatially, there is clearly considerable scope for setting them to impose different forms of prior knowledge. The spatial variation also introduces a multiplicity of degrees of freedom and potentially complex effects. One of the principal aims of the book is to attain a detailed understanding of those effects, and to harness them in the design of active contours.

Most powerful of all is to combine dynamical modelling of this kind with the rich geometrical structures used in deformable templates, and this is the basis of the dynamic contour. It involves defining parameterised shapes r(s; X) as for deformable templates and then specifying a dynamical equation for the shape parameter X. In the dynamic snake equation, prior constraints on shape and motion were implicit, but to facilitate systematic design it is far more attractive that they should be explicit. This can be achieved by separating out a dynamical model for likely motions from the influence of image measurements. The dynamic contour becomes a two-phase process in which a dynamical model is used for prediction, to extrapolate motion from one discrete time to the next. Then the predicted position for each new time-step is refined using measured image features, as in figure 2.6. The "Kalman filter" is a ready-made engine for applying the two-phase cycle, and for this reason has been a very popular and successful paradigm for tracking (see bibliographic notes). It is a probabilistic mechanism, and this is one reason that probabilistic modelling pervades the treatment of the second part of this book.

Intuitively, predictive models demand probabilistic treatment in order to avoid being too strong. The two-phase cycle fuses a prediction with some measurements. If the prediction were deterministic with no allowance for uncertainty, it would dominate the measurements, which would therefore be ignored. As an example, consider the task of tracking a pendulum in motion. If the pendulum is believed to be executing perfect harmonic motion, free of external disturbances, then provided initial conditions are known, the future motion of the pendulum is entirely determined. Knowing initial conditions, any subsequent observation of the pendulum is redundant. Realistic visual tracking problems are more like observing a pendulum oscillating in a turbulent airflow. The mean behaviour of the pendulum may be explained as deterministic simple harmonic motion, but the airflow drives the motion with random external forces. In terms of the shape parameter X, this implies a dynamical equation of the form

Ẍ = f(X, Ẋ, w)

where Ẋ and Ẍ are the first and second temporal derivatives of X and w is a random disturbance. Thus the value of the initial conditions weakens over time, as the motion of the pendulum is progressively perturbed away from the ideal deterministic motion. This increasing uncertainty generates a "gap" in information which sensory observations can fill. A primary aim of the book is to define design principles for probabilistic models of shape and motion and to explain those principles in terms of their effects both on representation of prior knowledge and in constraining and conditioning tracking performance.
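The two-phase cycle can be illustrated in the simplest case: a one-dimensional parameter X obeying constant-velocity dynamics perturbed by a random disturbance. The following is a generic Kalman filter sketch, not the book's specific tracker; the function name and the noise levels q and r_var are illustrative assumptions:

```python
import numpy as np

def kalman_cycle(x, P, z, dt=1.0, q=0.01, r_var=0.1):
    """One predict/refine cycle for a scalar shape parameter X with
    constant-velocity dynamics driven by a random disturbance w.

    x = (X, Xdot) is the state estimate, P its 2x2 covariance, and z a
    noisy measurement of X. The noise levels q (disturbance strength)
    and r_var (measurement variance) are illustrative values.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])      # deterministic part of the dynamics
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],  # covariance injected by w
                      [dt**2 / 2, dt]])

    # Phase 1: prediction -- extrapolate the motion; uncertainty grows
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Phase 2: refinement -- fuse the prediction with the measurement of X
    innovation = z - x_pred[0]
    S = P_pred[0, 0] + r_var                   # innovation variance
    K = P_pred[:, 0] / S                       # Kalman gain
    x_new = x_pred + K * innovation
    P_new = P_pred - np.outer(K, P_pred[0, :])
    return x_new, P_new

# usage: track X(t) = t from noiseless position measurements
x, P = np.array([0.0, 0.0]), np.eye(2)
for t in range(1, 30):
    x, P = kalman_cycle(x, P, float(t))
```

Because the process noise Q keeps the predicted covariance from collapsing, the measurements are never ignored: this is exactly the "gap" in information, generated by the random disturbance, that the observations fill.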

Bibliographic notes

The seminal paper on snakes is (Kass et al., 1987). This spawned many variations and extensions including the use of Fourier parameterisation (Scott, 1987), incorporation of hard constraints (Amini et al., 1988) and incorporation of explicit dynamics (Terzopoulos and Waters, 1990; Terzopoulos and Szeliski, 1992). Realisation of snakes using B-splines was developed by (Cipolla and Blake, 1990; Menet et al., 1990; Hinton et al., 1992) and combined with Lagrangian dynamics in (Curwen et al., 1991). B-splines used in this way are a form of "finite element," a standard technique of numerical analysis for solving differential equations by computer (Strang and Fix, 1973; Zienkiewicz and Morgan, 1983).

The idea of deformable templates predates the development of snakes (Fischler and Elschlager, 1973; Burr, 1981; Bookstein, 1989) but has enjoyed a revival inspired by the snake. Variations on the deformable template theme rapidly emerged (Yuille et al., 1989; Yuille, 1990; Bennett and Craw, 1991; Yuille and Hallinan, 1992; Hinton et al., 1992; Cootes and Taylor, 1992; Cootes et al., 1993; Cootes et al., 1995). A good deal of research has been done on matching with three-dimensional models, both rigid (Thompson and Mundy, 1987; Lowe, 1991; Sullivan, 1992; Lowe, 1992; Harris, 1992b; Gennery, 1992) and deformable (Terzopoulos et al., 1988; Terzopoulos and Fleischer, 1988; Cohen, 1991; Terzopoulos and Metaxas, 1991; Rehg and Kanade, 1994), but this is somewhat outside the scope of this book. As models become more detailed, and search becomes more exhaustive, the three-dimensional approach merges into visual object recognition (Grimson, 1990).

The Kalman filter (Gelb, 1974; Bar-Shalom and Fortmann, 1988) is very widely used in control theory and for target tracking (Rao et al., 1993) and sensor fusion (Hallam, 1983; Durrant-Whyte, 1988; Hager, 1990) and has become a standard tool of computer vision (Ayache and Faugeras, 1987; Dickmanns and Graefe, 1988b; Dickmanns and Graefe, 1988a; Matthies et al., 1989; Deriche and Faugeras, 1990; Harris, 1992b; Terzopoulos and Szeliski, 1992; Faugeras, 1993).

Finally, it seems appropriate at least to give some pointers to approaches to visual tracking that are rather outside the active contour paradigm.

• (Black and Yacoob, 1995) uses the visual motion field over a region to track and identify movement

• (Bray, 1990) tracks using a mixture of polyhedral, model-based vision to initialise and optic-flow vectors along contours for incremental displacement

• (Fischler and Bolles, 1981; Gee and Cipolla, 1996) are very elegant uses of random generation and testing of point-correspondence hypotheses, respectively for static and dynamic image matching problems

• (Huttenlocher et al., 1993) used the “Hausdorff metric” to match successive views in a sequence; the beauty of the approach is that it requires almost no prior model of shape or motion

• (Allen et al., 1991; Papanikolopoulos et al., 1991; Mayhew et al., 1992; Brown et al., 1992; Murray et al., 1992; Heuring and Murray, 1996) are control theoretic approaches to visual-servoing, real-time tracking with robot hands and heads.

Spline curves

Throughout this book, visual curves are represented in terms of parametric spline curves, as is common in computer graphics. These are curves (x(s),y(s)) in which s is a parameter that increases as the curve is traversed, and x and y are particular functions of s, known as splines. A spline of order d is a piecewise polynomial function, consisting of concatenated polynomial segments or spans, each of some polynomial order d, joined together at breakpoints. Parametric spline curves are attractive because they are capable of representing efficiently sets of boundary curves in an image (figure 3.1). Simple shapes can be represented by a curve with just a few spans. More complex shapes could be accommodated by raising the polynomial order d but it is preferable to increase the number of spans used. Usually the polynomial order is fixed at quadratic (d = 3) or cubic (d = 4). Maintaining a fixed, low polynomial degree, even in the face of geometric complexity, makes for computational stability and simplicity.

The chapter begins by explaining spline functions and their construction. Later sections explain how parametric curves are constructed from spline functions and introduce methods for matching one curve to another. This forms the basis for the algorithms developed in subsequent chapters for active contour matching.

The order of a polynomial is the number of its coefficients. Hence a quadratic function a + bx + cx² has order d = 3. Its degree, the highest power of x, is 2.

3.1 B-spline functions

B-splines are a particular, computationally convenient representation for spline functions. In the B-spline form, a spline function x(s) is constructed as a weighted sum of N_B basis functions (hence 'B'-splines) B_n(s), n = 0, ..., N_B − 1. In the simplest ("regular") case, each basis function consists of d polynomial pieces, each defined over a span of the s-axis. We take each span to have unit length. The spans are joined at knots. In the simplest case the knots are evenly spaced and the joins between polynomials are regular, that is, as smooth as possible, having d − 2 continuous derivatives. The quadratic spline, for instance, has continuous gradient in the regular case. The constructed spline function is

x(s) = Σ_{n=0}^{N_B−1} x_n B_n(s)

where x_n are the weights applied to the respective basis functions B_n(s). This can be expressed compactly in matrix notation as

x(s) = B(s)^T Q^x,

a matrix product between a vector of B-spline functions

B(s) = (B_0(s), B_1(s), ..., B_{N_B−1}(s))^T

and a vector of weights Q^x = (x_0, x_1, ..., x_{N_B−1})^T. By convention, B-spline basis functions are constructed in such a way that they sum to 1 at all points:

Σ_{n=0}^{N_B−1} B_n(s) = 1 for all s.

This summation or "convex hull" property is the underlying reason that the B-spline function follows the "control polygon," made up of the weights x_0, x_1, ..., quite closely.

In the simple case of a quadratic B-spline with knots spaced regularly at unit intervals, the first B-spline basis function has the form

B_0(s) = s²/2 for 0 ≤ s < 1,  (−2s² + 6s − 3)/2 for 1 ≤ s < 2,  (3 − s)²/2 for 2 ≤ s < 3,  and 0 otherwise,

and the others are simply translated copies:

B_n(s) = B_0(s − n).
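These basis functions are easy to evaluate directly. A quick numerical sketch (function names illustrative), verifying the sum-to-1 property stated earlier:

```python
import numpy as np

def B0(s):
    """The first regular quadratic B-spline basis function (knots 0, 1, 2, 3)."""
    s = np.asarray(s, dtype=float)
    return np.where((0 <= s) & (s < 1), 0.5 * s**2,
           np.where((1 <= s) & (s < 2), 0.5 * (-2 * s**2 + 6 * s - 3),
           np.where((2 <= s) & (s < 3), 0.5 * (3 - s)**2, 0.0)))

def Bn(s, n):
    """Translated copies Bn(s) = B0(s - n) of the bi-infinite basis."""
    return B0(s - n)

# on any span only three quadratic basis functions are non-zero,
# and at every point they sum to 1
s = np.linspace(2.0, 3.0, 101, endpoint=False)
total = sum(Bn(s, n) for n in range(5))
```

The joins are regular: at each knot the pieces agree in value and gradient, as the quadratic case requires.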

However, this basis is bi-infinite: there are infinitely many B_n, and the functions x(s) they are used to construct are bi-infinite too, extending over the whole s-axis. For practical applications finite bases are needed.

3.2 Finite bases

A finite spline basis can be either periodic or aperiodic over a closed interval 0 < s < L. The periodic basis is simply the bi-infinite basis suitably wrapped around. For example, the basis functions for regular, quadratic splines are B_0, ..., B_{L−1} (N_B = L), defined as above, but treated as periodic over the interval 0 < s < L. The four periodic basis functions for the case that L = 4 are illustrated in figure 3.4. A non-periodic basis on a finite interval is more complex to construct, requiring so-called "multiple knots" (see later) at its endpoints. This allows full control over boundary conditions: the value of the function x(s) and its derivatives at the ends s = 0, L of the interval. Details of the construction of the B_n for this case can be found in appendix A. It is no longer the case that the number of basis functions N_B is equal to the interval length L. Additional basis functions are needed to control boundary conditions (values of the spline function and its derivatives at s = 0, L). In the regular case, d − 1 extra functions are needed (d is the order of the polynomial) so that N_B = L + d − 1. In figure 3.5, for example, L = 5 and d = 3 (quadratic) so there must be N_B = 7 basis functions.

There is an efficient algorithm for generating spline functions from the weights xn in which the basis functions are represented in terms of a matrix of polynomial coefficients. The standard method is given in appendix A.

3.3 Multiple knots

Sometimes it is desirable to allow a reduced degree of continuity at some point within the domain of a function x(s). This can be achieved by forming a multiple knot, in which two knots in the B-spline basis approach one another and coincide. The spline then consists of a sequence of polynomial spans joined at breakpoints, some of which are single knots while others are multiple knots. At a regular breakpoint (single knot), the degree of smoothness is at its maximal value, that is C^{d−2}: continuity of all derivatives up to the (d−2)th. At a double knot, however, continuity is reduced to C^{d−3} and, generally, continuity at a knot of multiplicity m is C^{d−m−1}. Forming a multiple knot is a limiting process in which m consecutive regular knots approach one another, as illustrated in figure 3.7 for the quadratic case. Once a double knot has been introduced into the basis, any constructed spline function generally loses one order of continuity at that breakpoint. In the quadratic case for instance, a spline function is C^0 at the knot: it remains continuous but its gradient becomes discontinuous (see figure 3.8 for an illustration). If a triple knot is introduced, the function becomes discontinuous, broken into two continuous pieces, one on each side of the knot. Hence triple knots are used to terminate a quadratic B-spline basis over a finite interval as in figure 3.5.
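The loss of continuity at a double knot can be demonstrated with a standard B-spline implementation. A minimal sketch using scipy, with illustrative knot values and weights:

```python
import numpy as np
from scipy.interpolate import BSpline

k = 2  # quadratic: polynomial order d = 3
# Knot vector with a double knot at s = 2 (continuity C^{d-m-1} = C^0 there);
# triple knots terminate the basis at the ends of the interval [0, 4].
t = [0, 0, 0, 1, 2, 2, 3, 4, 4, 4]
c = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]  # illustrative spline weights
spl = BSpline(t, c, k)
d1 = spl.derivative()

eps = 1e-6
# The function itself is continuous across the double knot...
jump_value = abs(spl(2 + eps) - spl(2 - eps))
# ...but its first derivative jumps there.
jump_slope = abs(d1(2 + eps) - d1(2 - eps))
```

Replacing the double knot at s = 2 by a single knot would restore C^1 continuity; promoting it to a triple knot would break the function into two separate pieces.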

3.4 Norm and inner product for spline functions

A suitable norm for spline functions is the L2 norm

||x|| = ( (1/L) ∫_0^L x(s)² ds )^{1/2},

which is precisely the "root-mean-square" value of x(s) over the range 0 < s < L. The functional norm is especially useful for measuring the difference between two functions x1(s), x2(s) as ||x1 − x2||, for instance when it is necessary to measure how closely a function x1 is approximated by another function x2.

The norm has a corresponding "inner product," denoted ⟨·, ·⟩, which is bilinear and is applied to a pair of functions x, y as ⟨x, y⟩. The relationship between an inner product and a norm is that ⟨x, x⟩ = ||x||², so in the L2 case the inner product between two functions works out to be:

⟨x, y⟩ = (1/L) ∫_0^L x(s) y(s) ds.

Conventionally the 1/L scaling factor in the definition of the norm would be omitted; we include it so that ||x|| is truly a root-mean-square measure.

The inner product will be used later to express function approximations concisely.

Since we are representing functions compactly as vectors Q^x of spline weights, it is natural to express norms and inner products in terms of these vectors, that is, to define || · || for weight vectors such that ||Q^x|| = ||x||. This holds provided

||Q^x||² = (Q^x)^T B Q^x,  where B = (1/L) ∫_0^L B(s) B(s)^T ds

is the "metric matrix" for the basis.

The B-matrices are sparse. This reflects the fact that each weight x_n affects the function x(s) only over a short sub-interval (the "support" of the corresponding basis function B_n) and this lends efficiency to least-squares approximation algorithms. In the periodic, quadratic case the B-matrices are sparse circulants of order 5, and in general they have order 2d − 1 (the number of non-zero elements in each row). For a given polynomial order d, therefore, the sparsity is most significant when the number N_B of basis functions is large. For periodic quadratic splines with N_B = 8 the B-matrix is:

B = (1/8) ×
( 0.55   0.217  0.008  0.0    0.0    0.0    0.008  0.217 )
( 0.217  0.55   0.217  0.008  0.0    0.0    0.0    0.008 )
( 0.008  0.217  0.55   0.217  0.008  0.0    0.0    0.0   )
( 0.0    0.008  0.217  0.55   0.217  0.008  0.0    0.0   )
( 0.0    0.0    0.008  0.217  0.55   0.217  0.008  0.0   )
( 0.0    0.0    0.0    0.008  0.217  0.55   0.217  0.008 )
( 0.008  0.0    0.0    0.0    0.008  0.217  0.55   0.217 )
( 0.217  0.008  0.0    0.0    0.0    0.008  0.217  0.55  )

— note the circulant structure (repeating rows), characteristic of the periodic case. For a non-periodic quadratic function, B is still sparse, no longer a circulant, but now pentadiagonal, as shown here for the case NB = 6 (L = 4):

B = (1/4) ×
( 0.2    0.117  0.017  0.0    0.0    0.0   )
( 0.117  0.333  0.208  0.008  0.0    0.0   )
( 0.017  0.208  0.55   0.217  0.008  0.0   )
( 0.0    0.008  0.217  0.55   0.208  0.017 )
( 0.0    0.0    0.008  0.208  0.333  0.117 )
( 0.0    0.0    0.0    0.017  0.117  0.2   )

and would be heptadiagonal for cubic splines.
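These entries can be checked directly from the definition of the metric matrix. A numerical sketch for the periodic quadratic case (function names illustrative), recovering the values 0.55, 0.217 and 0.008 up to the 1/L factor in the definition of the norm:

```python
import numpy as np

def B0(s):
    """Regular quadratic B-spline basis function (knots 0, 1, 2, 3)."""
    s = np.asarray(s, dtype=float)
    return np.where((0 <= s) & (s < 1), 0.5 * s**2,
           np.where((1 <= s) & (s < 2), 0.5 * (-2 * s**2 + 6 * s - 3),
           np.where((2 <= s) & (s < 3), 0.5 * (3 - s)**2, 0.0)))

def periodic_metric_matrix(N):
    """B_nm = (1/L) * integral of Bn(s) Bm(s) over [0, L), with L = N_B = N."""
    s = np.linspace(0.0, N, 20000, endpoint=False)
    ds = N / len(s)
    # periodic basis: wrap the translated copies around the interval
    basis = np.array([B0((s - n) % N) for n in range(N)])  # N x samples
    return (basis @ basis.T) * ds / N

B = periodic_metric_matrix(8)
```

Only three basis functions overlap any given one, which is why each row has just five non-zero entries.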

Armed with the inner product, some approximation problems become straightforward. The simplest tutorial example is the problem of approximating a spline function x in terms of two other spline functions x1 and x2, as a linear combination x ≈ λ1 x1 + λ2 x2. The least-squares approximation, minimising ||x − λ1 x1 − λ2 x2||, can be expressed using inner products: the coefficients satisfy the simultaneous equations

⟨x1, x1⟩ λ1 + ⟨x1, x2⟩ λ2 = ⟨x1, x⟩ and ⟨x2, x1⟩ λ1 + ⟨x2, x2⟩ λ2 = ⟨x2, x⟩.

The same machinery applies to curves as well as to functions.

Another important type of problem is to approximate some function f (s), not necessarily a spline, as a spline function x(s), represented, as usual, by weights Qx. Again, the solution can be derived neatly using inner products to give:

Q^x = B^{-1} (1/L) ∫_0^L B(s) f(s) ds (3.15)

and an example is shown in figure 3.10. Functional approximation of this kind can be developed to construct approximations of curves in image data, and this is close to what will be required for active contour algorithms. Of course it is not possible to evaluate integrals over data exactly, so in practice the data must be sampled and the integrals replaced by sums over the samples, giving the simplest approximation of a sampled function as a spline.
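Equation (3.15), with the integrals replaced by sums over samples, gives a simple least-squares fitting procedure. A sketch for the periodic quadratic case (function names and the test function are illustrative):

```python
import numpy as np

def B0(s):
    """Regular quadratic B-spline basis function (knots 0, 1, 2, 3)."""
    s = np.asarray(s, dtype=float)
    return np.where((0 <= s) & (s < 1), 0.5 * s**2,
           np.where((1 <= s) & (s < 2), 0.5 * (-2 * s**2 + 6 * s - 3),
           np.where((2 <= s) & (s < 3), 0.5 * (3 - s)**2, 0.0)))

def spline_fit(f_samples, s, N):
    """Weights Qx = B^{-1} (1/L) * integral of B(s) f(s) ds, with the
    integrals approximated by sums over the samples (periodic case, L = N)."""
    ds = N / len(s)
    basis = np.array([B0((s - n) % N) for n in range(N)])  # N x samples
    Bmat = (basis @ basis.T) * ds / N                      # metric matrix B
    rhs = (basis @ f_samples) * ds / N
    return np.linalg.solve(Bmat, rhs)

# approximate one period of a sine wave by a periodic quadratic spline, 8 spans
N = 8
s = np.linspace(0.0, N, 4000, endpoint=False)
f = np.sin(2 * np.pi * s / N)
Qx = spline_fit(f, s, N)
x = np.array([B0((s - n) % N) for n in range(N)]).T @ Qx   # reconstructed spline
```

The sparsity of the metric matrix is what makes this solve cheap even for many basis functions.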

The application to image curves is discussed in chapter 6.

3.5 B-spline parametric curves

Spline functions were introduced to serve as a tool for constructing curves in the plane, which they do in the following manner. Parametric spline curves have coordinates x(s), y(s), each of which is a spline function of the curve parameter s. First it is necessary to choose an appropriate interval 0 < s < L covering L spans and an appropriate basis B_0, B_1, ..., B_{N_B−1} of N_B B-spline basis functions. If the interval [0, L] is taken to be periodic the resulting parametric curve will be closed. Alternatively, an open curve requires a B-spline basis over a finite interval as in figure 3.6. For each basis function B_n a control point q_n = (q^x_n, q^y_n)^T must now be defined, and the curve is a weighted vector sum of control points,

r(s) = Σ_{n=0}^{N_B−1} B_n(s) q_n for 0 < s < L,

a smooth curve that follows approximately the "control polygon" defined by linking control points by lines (figure 3.11). The component functions of r(s) do, of course, turn out to be spline functions, for instance:

x(s) = Σ_{n=0}^{N_B−1} B_n(s) q^x_n for 0 < s < L,

a weighted sum of basis functions with weights q^x_n. The example curve in figure 3.11 uses the basis functions B_n for regular, periodic, quadratic splines that were defined earlier in (3.6) on page 45 and, as before, N_B = L so that the number of control points of the curve is equal to the number of its spans.
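A closed B-spline curve of this kind is easy to evaluate from its control points. A minimal sketch for the regular, periodic, quadratic case (function names illustrative); by the convex hull property, the curve stays within the bounding box of its control polygon:

```python
import numpy as np

def B0(s):
    """Regular quadratic B-spline basis function (knots 0, 1, 2, 3)."""
    s = np.asarray(s, dtype=float)
    return np.where((0 <= s) & (s < 1), 0.5 * s**2,
           np.where((1 <= s) & (s < 2), 0.5 * (-2 * s**2 + 6 * s - 3),
           np.where((2 <= s) & (s < 3), 0.5 * (3 - s)**2, 0.0)))

def spline_curve(q, s):
    """Closed curve r(s) = sum_n Bn(s) q_n with the periodic quadratic basis.

    q : (N, 2) array of control points, one per span (N_B = L = N)
    s : curve parameter values in [0, N)
    """
    N = len(q)
    basis = np.array([B0((s - n) % N) for n in range(N)])  # N x samples
    return basis.T @ q                                     # samples x 2

# usage: a square control polygon produces a smooth closed curve
q = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
s = np.linspace(0.0, 4.0, 400, endpoint=False)
r = spline_curve(q, s)
```

Note that the curve passes near, but not through, the control points; coincident control points, discussed next, are one way to force it closer.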

3.6 Curves with vertices

It is often necessary to introduce a vertex or hinge at a certain point along a parametric curve, to fit around sharp corners on an object outline. One straightforward way of doing this is to allow two or more consecutive control points to coincide to form a "multiple control point." When n consecutive control points coincide, the order of continuity of the curve is reduced by n − 1. A quadratic spline, for instance, has continuous first derivative but discontinuous second derivative when all control points are distinct. A hinge (discontinuous first derivative) is formed therefore when 2 consecutive control points coincide, as in figure 3.12. Unfortunately, introducing a hinge in this way generates spurious linearity constraints (see figure). What is more, the parameterisation of the curve behaves badly in the vicinity of the hinge, in the sense that r'(s) = 0 there, so that the curve point momentarily comes to rest as the parameter s passes through the hinge. In the curve-fitting algorithms to be described in chapter 6, this would have the effect of giving undue weight to the region of the curve around the hinge. A good alternative is to use multiple knots: not quite as simple, but having good geometric behaviour. The formation of multiple knots in a B-spline basis was explained earlier, in section 3.3. Parametric curves defined with the new basis inherit its reduced continuity. For example, a hinge can be formed in a quadratic, parametric curve by introducing a double knot into the underlying quadratic B-spline basis, as in figure 3.13. Alternatively a triple knot introduces a break in a quadratic B-spline curve.

3.7 Control vector

Dealing with control points explicitly is cumbersome so, as a first step towards a more compact notation, let us first define a space S_Q of control vectors Q consisting of control-point coordinates: first all the x-coordinates, gathered into the vector Q^x = (q^x_0, ..., q^x_{N_B−1})^T, and then all the y-coordinates, gathered similarly into Q^y. Then the coordinate functions can be written as

x(s) = B(s)T Qf,

where B(s) is a vector of B-spline basis functions as defined earlier, and similarly for y(s), so that

r(s) = U(s)Q for 0 < s < L

where

B(s)T 0

0 B(s)T,

a matrix of size 2 x 2. U(s) = I2 <g> B(s)T =
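The construction of U(s) is easy to sketch in code; the basis values and control points below are made-up illustrations, with numpy's Kronecker product standing in for I2 ⊗ B(s)^T:

```python
# Sketch (illustrative values, not from the book): building U(s) = I2 ⊗ B(s)^T
# and checking that r(s) = U(s) Q reproduces x(s) = B(s)^T Qx, y(s) = B(s)^T Qy.
import numpy as np

B = np.array([0.1, 0.6, 0.3, 0.0])      # B(s) at some fixed s (weights sum to 1)
Qx = np.array([0.0, 1.0, 2.0, 3.0])     # x-coordinates of control points
Qy = np.array([1.0, 1.0, 0.0, 0.0])     # y-coordinates
Q = np.concatenate([Qx, Qy])            # control vector: x block, then y block

U = np.kron(np.eye(2), B.reshape(1, -1))  # a 2 x 2NB matrix
r = U @ Q                                  # point on the curve
assert np.allclose(r, [B @ Qx, B @ Qy])
print(r)
```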

3.8 Norm for curves

Now that we have set up a representation of curves as parametric splines, the next step is to extend the norm and inner product to curves, for use in curve approximation. We can define a norm || . || for B-spline curves which is induced by the L2 distance measure in the image plane:

||Q||^2 = (1/L) ∫0^L |r(s)|^2 ds = Q^T U Q,

where the metric matrix U = I2 ⊗ B for curves is defined in terms of the metric matrix B for B-spline functions.

Of course, the norm also implies an inner product for curves:

< Q1, Q2 > = Q1^T U Q2.

The curve norm is particularly meaningful when used as a means of comparison between two curves, using the distance ||Q1 - Q2||. This is the basis for approximation of visual data by curves, and is illustrated in figure 3.14.

There are potentially simpler norms than the one above, the obvious candidate being the Euclidean norm |Q|^2 = Q^T Q of the control vector. Compared with the L2 norm || . || defined above, the Euclidean norm is simpler to compute because the banded matrix U is replaced by the identity matrix. However, attractive as this short cut may be, the simpler norm does not work satisfactorily. This is made clear by the counter-example of figure 3.15, in which decreasing the displacement between two curves produces an increase in the Euclidean norm. This example makes it clear that the Euclidean norm is not suitable for ranking the closeness of curve approximations.

Invariance to re-parameterisation

It is important to note that the curve norm, as a measure of the difference between a pair of curves, does not allow for possible re-parameterisation of one of the curves. For example, a curve r(s), 0 < s < 1 could be reparameterised to give a new curve r*(s) = r(1 — s), 0 < s < 1. Geometrically, the two curves are identical; the difference between them is simply a reversal of parameterisation. We would like ideally to measure shape differences in a way that is invariant to re-parameterisation, so that the vector difference function r* — r would ideally have a norm of zero. However, the L2 norm will not behave in this way:

||r* - r||^2 = (1/L) ∫0^L |r(1 - s) - r(s)|^2 ds ≠ 0,

in general. As a result, any curve-fitting algorithm that uses the norm will be disrupted if the parameterisation of the target curve fails to match that of the template.

A general solution to the problem of parameterisation invariance would require a search over possible parameterisations. The proximity of a curve r(s) to a second curve r*(s) could be evaluated as min_g ||r(s) - r*(g(s))||, minimising over allowed re-parameterisations g.

3.9 Areas and moments

Applications in computer vision often require the computation of gross properties of a curve. Curve moments (area, centroid and higher moments) are useful for computing approximate curve position and orientation, and for obtaining gross shape information sufficient for some coarse discrimination between objects.

Generally, moments have two roles in active contours. The first is initialisation, in which a spline template is positioned sufficiently close to the tracked object to “lock” onto it. At this stage moments may also be used for coarse shape discrimination to confirm the identity of the object being tracked. The second role is in interpreting the position and orientation of a tracked object, for example the 3D visual mouse in figure 1.16 on page 20. For example, if hand motion is restricted to 3D translation and rotation in the image plane, those four parameters can be recovered from the zeroth moment (area), the first moment (centroid) and the second moment (inertia).

Centroid

A conventional definition for the centroid of a curve is

r = (1/L) ∫0^L r(s) ds,

which can be computed straightforwardly from the spline-vector Q using inner products:

r = (< x, 1 >, < y, 1 >)^T.

This simple definition of centroid is computationally convenient but has the drawback that it is not invariant to re-parameterisation of the curve, as figure 3.17 shows. This is because s is not generally true arclength; it is simply a convenient spline parameter.

The length of an infinitesimal segment of curve is not ds but |r'(s)| ds, so that an invariant centroid of the curve would be

r = ∫0^L r(s) |r'(s)| ds / ∫0^L |r'(s)| ds.

The square root implicit in |r'(s)| means that this invariant centroid cannot be computed directly in terms of the spline-vector Q. An alternative invariant centroid for closed curves is the centroid of area described below, which can be computed directly, as a cubic function of Q. For many purposes, non-invariant moments are adequate. For example, in a hand-tracking application the parameterisation of the tracked curve is, typically, strongly stabilised by a template. The parameterisation of the tracked curve does not, in practice, deviate much from the standard parameterisation inherited from the template. In that case the 2D translational motion of the tracked object can be recovered satisfactorily from the non-invariant centroid.

Invariant moments

Suppose an active contour is to be initialised from an area of pixels detected by image processing based on brightness, colour or motion. The vector Q for the initial configuration of the tracked curve is set by manipulating it to bring the moments of the area enclosed by the curve into close agreement with the moments of the active area.

The simplest available parameterisation-invariant measure for a closed curve is the area

A = (1/2) ∫0^L |r(s), r'(s)| ds,

where |x, y| denotes the determinant of the matrix whose columns are x, y. This is neatly expressible as a quadratic form in Q, reminiscent of the norm above, but in place of the symmetric matrix U we have a matrix A:

A(Q) = Q^T A Q.

(Details of efficient computation of A are given in appendix A.2.) As with the matrix U, A is NQ x NQ where

NQ = 2NB,

the dimension of the spline space. The matrix A is sparse, which makes the computation of the area quadratic form relatively efficient. One direct application for curve area computation is in visual navigation, and a picturesque example is given in figure 3.18.

The centroid r of the area enclosed by a closed B-spline curve, which is invariant to curve re-parameterisation, is given by

r = (1/(3A)) ∫0^L r(s) |r(s), r'(s)| ds.

In principle this is a useful measure for positioning, but is moderately costly, O(NQ^3), to compute exactly. This improves to O(NX^3) if curves are restricted to a “shape-space” of reduced dimension NX, and this is discussed in the next chapter. Similarly, the second moment is invariant and useful in principle for orienting a shape, but the computational cost is O(NQ^4), again reduced if a shape-space is used.

Bibliographic notes

This chapter has outlined a framework for representing curves in the image plane. It has been common both in robotics and in computer vision to represent curves algebraically as f (x,y) = 0 where f is a polynomial (Faverjon and Ponce, 1991; Petitjean et al., 1992; Forsyth et al., 1990). Although such representations are often attractive mathematically, for the purpose of constructing proofs, they are cumbersome from the computational point of view. Practical systems using curve approximation are better founded on B-splines. Tutorials on splines can be found in graphics books such as (Foley et al., 1990) or in books on computer-aided design such as (Faux and Pratt, 1979). Some essential details and algorithms are given also in the appendix of this book. A more complete book on splines, oriented toward computer graphics is (Bartels et al., 1987) and a mathematical source on spline functions (but not curves) is (de Boor, 1978).

Splines, common in computer graphics, have also been used in computer vision for some years, for shape-warping (Bookstein, 1989), representing corners and edges in static scenes (Medioni and Yasumoto, 1986; Arbogast and Mohr, 1990) and for shape approximation (Menet et al., 1990) and tracking (Cipolla and Blake, 1990).

Shape approximation using spline curves is an application of the “normal equations” for approximation problems (Press et al., 1988). Equivalently it uses a “pseudo-inverse” (Barnett, 1990) of which the B-spline metric matrix B is a component. Function norms are a standard mathematical tool for functional approximation (Kreyszig, 1988) and in signal processing (Papoulis, 1991) and image processing (Gonzalez and Wintz, 1987) for least-squares restoration problems.

Measures of curve difference that are more economical to compute than the L2 norm can be made by replacing the metric matrix with the identity to give the Euclidean distance between control vectors, as done for polygons in (Cootes and Taylor, 1992). However, such measures do have some undesirable properties, as explained earlier. Curve matching using norms is not invariant to re-parameterisation; matching algorithms do exist that deal with re-parameterisation, for example ones developed for stereoscopic image matching (Ohta and Kanade, 1985; Witkin et al., 1986) but they are computationally expensive, too much so for use in real-time tracking systems. This is discussed again in chapter 6.

4 Shape-space models

In practice, it is very desirable to distinguish between the spline-vector Q ∈ SQ that describes the basic shape of an object and the shape-vector, which we denote X ∈ S, where S is a shape-space. Whereas SQ is a vector space of B-splines and has dimension NQ = 2NB, the shape-space S is constructed from an underlying vector space of dimension NX which is typically considerably smaller than NQ. The shape-space is a linear parameterisation of the set of allowed deformations of a base curve. The necessity for the distinction is made clear in figure 4.1. To obtain a spline that does justice to the geometric complexity of the face shape, thirteen control points have been used. However, if all of the resulting 26 degrees of freedom of the spline-vector Q are manipulated arbitrarily, many uninteresting shapes are generated that are not at all reminiscent of faces. Restricting the displacements of control points to a lower-dimensional shape-space is more meaningful if it preserves the face-like quality of the shape. Conversely, using the unconstrained control-vector Q leads to unstable active contours, as was illustrated in figure 2.4 on page 31.

The requirement that a shape-space be a linear parameterisation is made for the sake of computational simplicity. The curve-fitting and tracking procedures described in the book are substantially simplified by linearity and in many cases exact algorithms are available only for linear parameterisations. Linearly parameterised, image-based models work well for rigid objects, however, and for simpler non-rigid ones. Linearity can certainly be a limitation when the allowed motions of an object become more complex, for example a three-dimensional object with articulated parts. Articulation can in fact be dealt with in linearly parameterised, image-based models but only at the cost of relaxing certain geometric constraints. This is explored further in the discussion of shape-spaces below. A more detailed discussion of the trade-off between image-based models and three-dimensional models is given in appendix C.

4.1 Representing transformations in shape-space

Rigid motion

A simple example of a shape-space is the space of Euclidean similarities of a template curve r0(s). This is a space of dimension 4 corresponding exactly to the variation of an image curve as a camera with a zoom lens looks directly down on a planar object that is free to move on a table top. The effect on the curve is that it moves rigidly in the image plane and may also magnify or diminish in size, but its shape is preserved, as in figure 4.2. Alternatively it could be that the camera is able to translate in three dimensions and to rotate about an axis perpendicular to the table top (something that occurs, for example, when a camera is mounted on a “SCARA” arm, popular in robot automation, which translates freely and rotates in the plane of the table). This combination also sweeps out a shape-space of Euclidean similarities.

Another important shape-space is the one that arises when a planar object has complete freedom to move in three dimensions. Its motion has 6 degrees of freedom, three for translation and three for rotation. Provided perspective effects are not too great, the image of a planar object contour is well described as a shape-space of planar affine transformations, a space with dimension 6. It can be thought of as the space of linear transformations of a template. Alternatively, it is the space of transformations which preserve parallelism between lines. The planar affine group of transformations is depicted in figure 4.3. A further figure illustrates how the planar affine shape-space can enhance active contours when used appropriately: it shows tracking of an outstretched hand which, being almost planar, is well modelled by a planar affine space. The increased degree of constraint enhances immunity to distraction from clutter in the background.

The planar affine and Euclidean similarity shape-spaces work efficiently in the sense that the dimension of the shape-space is exactly equal to the number (six and four respectively) of the degrees of freedom of camera movement. Unfortunately this happy state of affairs does not persist in general because transformation groups do not necessarily form vector spaces; it is not always possible to find a vector space which matches exactly the degrees of freedom of camera/object motion. Consider the case of a camera with fixed magnification, viewing a planar object moving rigidly on a table. The image curve translates and rotates rigidly without any change of size. This is now simply the planar Euclidean group, which does not however form a vector space. To see this, consider the template r0(s) and a copy of it rotated through 180° to give -r0(s); when these two are added vectorially they give r0(s) + (-r0(s)) = 0, which is not a rotated version of the template at all. The rotation operation is therefore not “closed” under addition and therefore cannot form a vector space. Rotation and scaling taken jointly do form a vector space, of dimension 2. Combining them with translation gives the Euclidean similarities, of dimension 4. The smallest vector space that encompasses Euclidean transformations is therefore the space of Euclidean similarities. The price of insisting on a linear representation of the Euclidean transformations is that 4 dimensions are needed to represent 3 degrees of freedom; the resulting space is underconstrained by one degree of freedom.

At this stage, a more precise definition of shape-space is called for. A shape-space S = L(W, Q0) is a linear mapping of a “shape-space vector” X ∈ R^NX to a spline-vector Q ∈ R^NQ:

Q = WX + Q0,   (4.1)

where W is an NQ x NX “shape-matrix.” The constant offset Q0 is a template curve against which shape variations are measured; for instance, a class of shapes consisting of Q0 and curves close to Q0 could be expressed by restricting the shape-space S to “small” X. The image of R^NX need not necessarily be a vector space itself but is a “coset”: an underlying vector space {WX, X ∈ R^NX} plus an offset Q0. We talk of the “basis” V of a shape-space, meaning a basis for the underlying vector space; the columns of the matrix W are the vectors of the basis V. In fact the two spaces discussed in this chapter, Euclidean similarity and planar affine, are vector spaces, because there exists an X0 for which Q0 = WX0. In chapter 8 we encounter shape-spaces whose images are not vector spaces because the offset Q0 is linearly independent of the basis V. In fact the simplest shape-space that is not a pure vector space is the space of translations of a template Q0.
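The mapping Q = WX + Q0 is easy to sketch directly; W, X and Q0 below are small made-up examples (a pure-translation shape-space), not values from the book:

```python
# Minimal sketch of the shape-space mapping Q = W X + Q0 (illustrative values).
import numpy as np

Q0 = np.array([1.0, 0.0, -1.0, 0.0, 1.0, -1.0])  # template spline-vector (NQ = 6)
W = np.column_stack([                            # NQ x NX shape-matrix (NX = 2)
    np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0]),    # horizontal translation
    np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0]),    # vertical translation
])

def to_spline(X):
    # Map a shape-vector X into spline space.
    return W @ X + Q0

Q = to_spline(np.array([2.0, -1.0]))  # translate the template by (2, -1)
print(Q)
```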

4.2 The space of Euclidean similarities

The Euclidean similarities of a template curve r0(s), represented by Q0, form a 4-dimensional shape-space S with shape-matrix

W = ( 1  0  Q0x  -Q0y
      0  1  Q0y   Q0x ),

where Q0 = (Q0x^T, Q0y^T)^T and 0, 1 are NB-vectors of zeros and ones.

The first two columns of W govern horizontal and vertical translations respectively. The third and fourth columns, made up from components of the spline-vector Q0 for the template, cover scaling and rotation. By convention, we choose Q0 to have its centroid at the origin (< Q0x, 1 > = < Q0y, 1 > = 0) so that the third and fourth columns are associated with pure rotation and scaling, free of translation. In practice the template is obtained by fitting a spline interactively around a standard view of the shape, and translating it so that its centroid lies over the origin.

Some examples of shape representations in the space of Euclidean similarities follow.

1. X = (0,0,0,0)T represents the original template shape Q0

2. X = (1, 0, 0, 0)T represents the template translated 1 unit to the right, so that Q = Q0 + (1^T, 0^T)^T, where the NB-vectors 0 and 1 are:

0 = (0, 0,...,0)T, 1 = (1,1,...,1)T

3. X = (0, 0, 1, 0)T represents the template doubled in size: Q = 2Q0

4. X = (0, 0, cos θ - 1, sin θ)T represents the template rotated through angle θ:

Q = ( cos θ Q0x - sin θ Q0y
      sin θ Q0x + cos θ Q0y )
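These examples can be verified numerically. The sketch below assumes a made-up square template with centroid at the origin and the column ordering described in the text (translation, translation, scaling, rotation):

```python
# Sketch of the Euclidean-similarity shape-matrix (template is illustrative).
import numpy as np

Q0x = np.array([1.0, -1.0, -1.0, 1.0])   # template x-coordinates
Q0y = np.array([1.0, 1.0, -1.0, -1.0])   # template y-coordinates (centroid at 0)
one, zero = np.ones(4), np.zeros(4)

W = np.column_stack([
    np.concatenate([one, zero]),     # horizontal translation
    np.concatenate([zero, one]),     # vertical translation
    np.concatenate([Q0x, Q0y]),      # scaling
    np.concatenate([-Q0y, Q0x]),     # rotation
])
Q0 = np.concatenate([Q0x, Q0y])

def shape(X):
    return W @ np.asarray(X, float) + Q0

assert np.allclose(shape([0, 0, 0, 0]), Q0)          # the template itself
assert np.allclose(shape([0, 0, 1, 0]), 2 * Q0)      # doubled in size
th = 0.3                                             # rotation through angle th
Q = shape([0, 0, np.cos(th) - 1, np.sin(th)])
rot = np.concatenate([np.cos(th) * Q0x - np.sin(th) * Q0y,
                      np.sin(th) * Q0x + np.cos(th) * Q0y])
assert np.allclose(Q, rot)                           # rotated template
print("similarity examples verified")
```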

As an example, the lotion bottle in figure 4.5 moves rigidly from the template configuration X = 0 in shape-space to the configuration

X = (0.465, 0.047, -0.282, -0.698)T,

representing a translation through (0.465, 0.047)T, almost horizontal as the figure shows, a magnification by a factor

sqrt((1 - 0.282)^2 + 0.698^2) = 1.001,

and a rotation through

arctan(-0.698 / (1 - 0.282)) = -44.2°,

all consistent with the figure.
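The interpretation of these numbers is simple arithmetic, sketched here with the shape-vector quoted above:

```python
# Checking the bottle's shape-vector interpretation (X taken from the text).
import math

X = (0.465, 0.047, -0.282, -0.698)
scale = math.hypot(1 + X[2], X[3])                # sqrt((1 - 0.282)^2 + 0.698^2)
angle = math.degrees(math.atan2(X[3], 1 + X[2]))  # rotation angle in degrees
print(round(scale, 3), round(angle, 1))  # 1.001 -44.2
```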

4.3 Planar affine shape-space

It was claimed that for a planar shape just six affine degrees of freedom are required to describe, to a good approximation, the possible shapes of its bounding curve. The planar affine group can be viewed as the class of all linear transformations that can be applied to a template curve r0(s):

r(s) = u + M r0(s), (4.3)

where u = (u1, u2)T is a two-dimensional translation vector and M is a 2 x 2 matrix, so that M, u between them represent the 6 degrees of freedom of the space. This class can be represented as a shape-space with template Q0 and shape-matrix

W = ( 1  0  Q0x   0    0   Q0y        (4.4)
      0  1   0   Q0y  Q0x   0  ).

(A derivation is given below.) The first two columns of W represent horizontal and vertical translation. As before, by convention, the template r0(s) represented by Q0 is chosen with its centroid at the origin. Then the remaining four affine motions (figure 4.3), which do not correspond one-for-one with the last four columns of W, can however be expressed as simple linear combinations of those columns. Recall that the shape-space transformation is Q = WX + Q0, so that the elements of X act as weights on the columns of W. The interpretation of those weights in terms of planar transformations (4.3) of the template is:

X = (u1, u2, M11 - 1, M22 - 1, M21, M12)^T.   (4.5)

Some examples of transformations are:

1. X = (0,0,0, 0, 0, 0)T represents the original template shape Q0

2. X = (1,0,0, 0, 0, 0)T represents the template translated 1 unit to the right,

3. X = (0,0,1,1, 0, 0)T represents the template doubled in size

4. X = (0, 0, cos θ - 1, cos θ - 1, sin θ, - sin θ)T represents the template rotated through angle θ

5. X = (0,0,1, 0, 0, 0)T represents the template doubled in width
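These examples, and the general correspondence with (u, M), can be checked numerically; the sketch below assumes a made-up square template and the column ordering X = (u1, u2, M11 - 1, M22 - 1, M21, M12) given in the text:

```python
# Sketch of the planar affine shape-matrix (template is illustrative).
import numpy as np

Q0x = np.array([1.0, -1.0, -1.0, 1.0])
Q0y = np.array([1.0, 1.0, -1.0, -1.0])
one, zero = np.ones(4), np.zeros(4)

W = np.column_stack([
    np.concatenate([one, zero]),    # u1: horizontal translation
    np.concatenate([zero, one]),    # u2: vertical translation
    np.concatenate([Q0x, zero]),    # M11 - 1
    np.concatenate([zero, Q0y]),    # M22 - 1
    np.concatenate([zero, Q0x]),    # M21
    np.concatenate([Q0y, zero]),    # M12
])
Q0 = np.concatenate([Q0x, Q0y])

def shape(X):
    return W @ np.asarray(X, float) + Q0

assert np.allclose(shape([0, 0, 1, 1, 0, 0]), 2 * Q0)             # doubled in size
assert np.allclose(shape([0, 0, 1, 0, 0, 0]),                     # doubled in width
                   np.concatenate([2 * Q0x, Q0y]))
u, M = np.array([0.5, -0.2]), np.array([[1.1, 0.3], [-0.4, 0.9]])
X = [u[0], u[1], M[0, 0] - 1, M[1, 1] - 1, M[1, 0], M[0, 1]]
expect = np.concatenate([u[0] + M[0, 0] * Q0x + M[0, 1] * Q0y,
                         u[1] + M[1, 0] * Q0x + M[1, 1] * Q0y])
assert np.allclose(shape(X), expect)                              # general r = u + M r0
print("affine examples verified")
```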

In practice it is convenient to arrange for the elements of the affine basis to have similar magnitudes to improve numerical stability. If the control-vector Q0 is expressed in pixels, for computational simplicity, the magnitudes of the last four columns of the shape-matrix may be several hundred times larger than those of the first two, and it is then necessary to scale the translation columns to match.

Derivation of affine basis. Using (3.19), (4.3) can be rewritten:

r(s) - r0(s) = u + (M - I)U(s)Q0.

Now using the definition (3.20) of U(s) and noting that B(s)T 1 = 1 (3.5), this becomes:

r(s) - r0(s) = ( u1 B(s)^T 1 + (M11 - 1) B(s)^T Q0x + M12 B(s)^T Q0y
                 u2 B(s)^T 1 + M21 B(s)^T Q0x + (M22 - 1) B(s)^T Q0y ).

The right-hand side clearly lies in a vector space of dimension 6, for which the columns of W in (4.4) form a basis, and furthermore, given that Q = WX + Q0, X is composed of elements of M and u as in (4.5).

4.4 Norms and moments in a shape-space

Given that it is generally preferred to work in a shape-space S, a formula for the curve norm is needed that applies to the shape-space parameter X. We require a consistent definition so that, for a given space, ||Q1 - Q2|| = ||X1 - X2||. The L2 norm in shape-space S is said to be “induced” from the norm over SQ, which was in turn induced from the L2 norm over the space of curves r(s). From (4.1), this is achieved by defining

||X|| = sqrt(X^T H X),  where  H = W^T U W.

The norm over S then has a geometric interpretation:

||X|| = ||Q - Q0||,

the average displacement of the curve parameterised by X from the template curve. We can also now define a natural mapping from SQ onto the shape-space S. Of course there is in general no inverse of the mapping W in (4.1) from SQ to X but, provided W has full rank (its columns are linearly independent), a pseudo-inverse W+ can be defined:

X = W+(Q - Q0)  where  W+ = H^-1 W^T U.

It turns out (see chapter 6) that W+ can be naturally interpreted as an error-minimising projection onto shape-space.

In the case of spline space, it was argued in the previous chapter, the Euclidean norm | . | defined by |Q|^2 = Q^T Q is not as natural as the L2-norm ||Q||, although the two can have approximately similar values in practice, especially when curvature is small. Their approximate similarity derives from the fact that the metric matrix U is banded. However, the matrix H above is dense, and so the Euclidean norm |X| in shape-space does not approximate the induced L2-norm; in fact |X| has no obvious geometric interpretation. Therefore, while one might get away with using the Euclidean norm in spline space, it is of no use at all in shape-space: only the L2-norm will do.
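The relations H = W^T U W and W+ = H^-1 W^T U can be checked numerically; this sketch uses random stand-ins for W and the metric matrix (not values from the book):

```python
# Sketch: induced norm and pseudo-inverse in shape-space (random stand-ins).
import numpy as np

rng = np.random.default_rng(0)
Nq, Nx = 8, 3
W = rng.standard_normal((Nq, Nx))          # full-rank shape-matrix
A_ = rng.standard_normal((Nq, Nq))
U = A_ @ A_.T + Nq * np.eye(Nq)            # stand-in symmetric positive definite metric

H = W.T @ U @ W                            # metric in shape-space
Wplus = np.linalg.solve(H, W.T @ U)        # pseudo-inverse W+ = H^-1 W^T U

# W+ inverts W exactly on shape-space...
assert np.allclose(Wplus @ W, np.eye(Nx))
# ...and ||X||^2 = X^T H X matches the curve norm ||WX||^2 induced by U.
X = rng.standard_normal(Nx)
assert np.isclose(X @ H @ X, (W @ X) @ U @ (W @ X))
print("projection checks passed")
```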

Computing Area

As with the norm, the area form A(X) can be expressed in a shape-space as a function

A(X) = (W X + Q0)T A(W X + Q0)

that is quadratic in X, and whose quadratic and linear terms involve just NX(NX + 3)/2 coefficients so that, in the case of Euclidean similarity, there are just 14 independent coefficients.

Centroid and inertia

The centroid r of the area enclosed by a closed B-spline curve ((3.28) on page 66) is a symmetric cubic function of the configuration X. Such a function has O(NX^3) terms, which is obviously large for larger shape-spaces, but works out to be just 20 terms in the case of Euclidean similarities: quite practical to compute. (The exact formula for the number of terms is NX(NX + 1)(NX + 2)/6.) The invariant second moment or inertia matrix ((3.29) on page 66) could be expressed in terms of a symmetric quartic form which, surprisingly, has only 23 terms in the case of Euclidean similarity (NX = 4) but of course this number is O(NX^4) in general. In cases where the size of the configuration space is too large for efficient computation of invariant moments, the alternative is to compute them by numerical integration.

Finally, note that there is an important special case in which invariant moments are easily computed. The special properties of the affine space mean that moments can be computed efficiently under affine transformations. For instance, although the invariant second moment for a 6-dimensional shape-space turns out generally to be a quartic polynomial with 101 terms, in the affine space it can be computed simply as a product of three 2 x 2 matrices:

I = MI0MT

where I0 is the inertia matrix of the template, computed numerically from (3.28) on page 66. Similarly, for area:

A = (det M )A0.

Note that I represents only 3 constraints on M and one of those is duplicated by the area A. So I, r and A between them give only 5 constraints on the 6 affine degrees of freedom. It would be necessary to compute higher moments to fix all 6 constraints. On the other hand, those moments up to second-order are sufficient to fix a vector in the space of Euclidean similarities.
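The affine behaviour of area, A = (det M)A0, is easy to confirm on a polygonal approximation to a closed curve; the pentagon and matrix below are arbitrary:

```python
# Numerical check of A = (det M) A0 under an affine map (illustrative values).
import numpy as np

def shoelace_area(P):
    # Signed area of a closed polygon with vertex rows P.
    x, y = P[:, 0], P[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

t = np.linspace(0, 2 * np.pi, 6)[:-1]
P0 = np.column_stack([np.cos(t), np.sin(t)])   # template pentagon
M = np.array([[1.5, 0.4], [-0.2, 0.8]])        # affine matrix (translation drops out)

A0 = shoelace_area(P0)
A1 = shoelace_area(P0 @ M.T + np.array([3.0, -1.0]))
assert np.isclose(A1, np.linalg.det(M) * A0)
print("area scales by det M")
```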

Using moments for initialisation

For the Euclidean similarities, a shape-vector X can be recovered from the moments up to second order, as follows.

1. The displacement of the centroid r gives the translational component of X.

2. The scaling is given by sqrt(A/A0).

3. The rotation is the angle θ through which the largest eigenvector of I rotates.

This procedure is illustrated in figure 4.6 below. In the illustration given here, moments were computed over a foreground patch segmented from the background on the basis of colour. An effective method for certain problems such as vehicle tracking, in which the foreground moves over a stationary background, is to use image motion. So-called “optical flow” is computed over the image, and a region of moving pixels is delineated. Either a snake may be wrapped around this region directly or, in a shape-space, moments can be computed for initialisation as above. Background subtraction methods are also useful here — see chapter 5 for an explanation.
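The recipe can be sketched on a polygonal region; the ellipse and pose below are made up, and only translation and scale are recovered here (rotation would come from the inertia eigenvectors as described above):

```python
# Sketch of moment-based initialisation for Euclidean similarities.
import numpy as np

def area_and_centroid(P):
    # Shoelace area and area centroid of a closed polygon (vertex rows P).
    x, y = P[:, 0], P[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - y * xn
    A = 0.5 * np.sum(cross)
    cx = np.sum((x + xn) * cross) / (6 * A)
    cy = np.sum((y + yn) * cross) / (6 * A)
    return A, np.array([cx, cy])

t = np.linspace(0, 2 * np.pi, 33)[:-1]
P0 = np.column_stack([np.cos(t), 0.5 * np.sin(t)])     # template, centroid at origin
s, d = 1.7, np.array([4.0, -2.0])                      # "true" scale and translation
P1 = s * P0 + d                                        # detected foreground region

A0, c0 = area_and_centroid(P0)
A1, c1 = area_and_centroid(P1)
scale = np.sqrt(A1 / A0)       # step 2: scaling from sqrt(A/A0)
translation = c1 - c0          # step 1: translation from the centroid shift
assert np.isclose(scale, s)
assert np.allclose(translation, d)
print("recovered scale and translation")
```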

4.5 Perspective and weak perspective

The next task is to set up notation for perspective projection in order to show that, under modest approximations, the set of possible shapes of image contours do indeed form affine spaces. Standard camera geometry is shown in figure 4.7 and leads to the following relationship between a three-dimensional object contour R(s) = (X(s), Y(s), Z(s))^T and its two-dimensional image r(s):

r(s) = (f / Z(s)) (X(s), Y(s))^T,

where f is the focal length.

The 1/Z term is intuitively reasonable as it represents the tendency of objects to appear smaller as they recede from the camera. However, it makes the projection function non-linear which is problematic given that shape-spaces, being vector spaces, imply linearity. Fortunately there are well-established methods for making good linear approximations to perspective. The most general of these is the weak perspective projection.

Note that a good approximate value for f can be obtained simply by using the nominal value usually printed on the side of a lens housing, which we denote f'. To a first approximation, f = f', but a better one, taking into account the working distance Zc, is

f = f' Zc / (Zc - f').   (4.10)

Since image positions x, y available to a computer are measured in units of pixels relative to one corner of the camera array, a scale factor is needed to convert pixel units into length units (mm). This can be done quite effectively by taking a picture of a ruler lying in a plane parallel to the image plane. Moving a cursor over the image shows that 597 pixels correspond to 200 mm at a distance of Zc = 1060 mm from the camera iris, and the nominal focal length is f' = 25 mm. From (4.10) we have f = 25.6 mm, from which the scaling factor for distances on the image plane follows.

Finally, note that more precise calculations, including allowances for minor mechanical defects such as asymmetry of lens placement with respect to the image array, can be made using automatic but somewhat involved “camera calibration” procedures. However, the simple procedure above has proved sufficient for active contour interpretation in most cases.
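The calibration arithmetic, with the numbers quoted above and assuming the thin-lens form of (4.10):

```python
# Focal length and pixel-scale calibration (numbers from the text; thin-lens
# form of the correction assumed).
f_nominal = 25.0          # mm, printed on the lens housing
Zc = 1060.0               # mm, working distance to the ruler
f = f_nominal * Zc / (Zc - f_nominal)   # corrected focal length
ruler_scale = 200.0 / 597.0             # mm per pixel on the ruler plane
image_scale = ruler_scale * f / Zc      # mm per pixel on the image plane
print(round(f, 1))  # 25.6
```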

Weak Perspective

The weak perspective approximation is valid provided that the three-dimensional object is bounded so that its diameter is small compared with the distance from camera to object. Taking Rc = (Xc, Yc, Zc)^T to be a point close to the object (think of it as the object’s centre), replace R(s) in the projection formula by Rc + R(s); then the assumption about object diameter can be written

|R(s)| << Zc,   (4.11)

and the projection is approximated by

r(s) = (f / Zc) ( (Xc, Yc)^T + (X(s), Y(s))^T - (Z(s)/Zc)(Xc, Yc)^T ),   (4.12)

which is linear in R(s) and approximates perspective projection to first order in |R(s)|/Zc. The tendency of image size to diminish as an object recedes is present in the f/Zc term, now approximated to an “average” value for a given object. As individual points of R(s) recede they tend to move towards the centre of the image, and the third term expresses this. In typical views the approximation works well, as figure 4.9 shows. Errors in the weak perspective approximation become significant only if the camera has a large field of view and the object also fills that field of view. That is not a situation that commonly arises in object tracking, however. If the camera is mounted on a pan-tilt head, the camera’s field of view is likely to be narrow in order to obtain the improved resolution that the movable head allows. Alternatively, when the camera is fixed, the object’s image diameter is likely to be several times smaller than the field of view to allow for object movement. Since the field of view of a wide-angle camera lens is of the order of 1 radian, it follows that the angle subtended by the object is likely to be considerably less than 1 radian, precisely the condition for the weak perspective approximation to hold good.
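The first-order accuracy of weak perspective can be checked against true perspective; the pose below is made up, and the weak projection is written to first order in |R(s)|/Zc:

```python
# Sketch comparing true perspective with weak perspective (illustrative pose).
import numpy as np

f = 25.6                                    # mm
Rc = np.array([50.0, -30.0, 1000.0])        # object centre (Xc, Yc, Zc)
R = np.array([20.0, 10.0, 15.0])            # point on the object, relative to Rc

def perspective(P):
    return f * P[:2] / P[2]

Zc = Rc[2]
true_r = perspective(Rc + R)
weak_r = (f / Zc) * (Rc[:2] + R[:2] - (R[2] / Zc) * Rc[:2])

# First-order approximation: the residual is second order in |R|/Zc.
err = np.linalg.norm(true_r - weak_r)
assert err < 0.01 * np.linalg.norm(true_r)
print("weak perspective error:", err, "mm")
```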

Orthographic projection

For a camera with a narrow field of view (substantially less than one radian) it can further be assumed, in addition to the assumption (4.11) about object diameter, that

|Xc| << Zc and |Yc| << Zc

— simply the condition that the contour centre is close enough to the centre of the image for the object actually to be visible. In that case, the third term in (4.12) is negligible and image perspective is well approximated by the orthographic projection

r(s) = (f / Zc) ( Xc + X(s)
                  Yc + Y(s) ).   (4.13)

Suppose the object contour Rc + R(s) derives from a contour R0(s) in a base coordinate frame which has then been rotated to give R(s) = RR0(s) and translated through Rc, so that R, Rc are parameters for three-dimensional motion. Suppose also that the object is planar so (without loss of generality) Z0(s) = 0. Then the orthographic projection equation becomes

r(s) = u + (f/Zc) R2x2 r0(s),   (4.14)

where u is the orthographic projection of the three-dimensional displacement vector Rc, R2x2 is the upper-left 2 x 2 block of the rotation matrix R, and r0(s) = (X0(s), Y0(s))^T. Finally, take M = (f/Zc)R2x2 and adopt the convention that Zc = f in the standard view, so that r0(s) is the image template. This gives a general planar affine transformation as in (4.3), so the image of a planar object moving rigidly in three dimensions does indeed sweep out a planar affine shape-space.

If the orthographic constraint is relaxed again, to allow general weak perspective, it turns out that, when R(s) is planar, r(s) still inhabits the planar affine shape-space. Later we return to weak perspective for a general analysis of planar affine configurations, in particular to work out the three-dimensional pose of an object from its affine coordinates. This is used, for example, to calculate the three-dimensional position and attitude of the hand in the mouse application of figure 1.16 on page 20. That general method of pose calculation will work even when the camera is positioned obliquely relative to table-top coordinates and when the hand moves over the whole of a wide field of view.
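That a rigidly moving planar contour sweeps out a planar affine space can be confirmed numerically; the sketch below uses a made-up pose, with M = (f/Zc)R2x2 as in the text:

```python
# Sketch: orthographic projection of a rigidly rotated planar contour equals
# the affine transform u + M r0 (illustrative pose values).
import numpy as np

f, Zc = 25.6, 1000.0
ax, az = 0.3, 0.7                           # rotations about the x and z axes
Rx = np.array([[1, 0, 0],
               [0, np.cos(ax), -np.sin(ax)],
               [0, np.sin(ax), np.cos(ax)]])
Rz = np.array([[np.cos(az), -np.sin(az), 0],
               [np.sin(az), np.cos(az), 0],
               [0, 0, 1]])
Rot = Rz @ Rx
Rc = np.array([40.0, 20.0, Zc])             # three-dimensional displacement

t = np.linspace(0, 2 * np.pi, 12)[:-1]
R0 = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])  # planar: Z0 = 0

proj = (f / Zc) * (Rc[:2] + (R0 @ Rot.T)[:, :2])   # orthographic projection
u = (f / Zc) * Rc[:2]
M = (f / Zc) * Rot[:2, :2]
affine = u + R0[:, :2] @ M.T                       # u + M r0
assert np.allclose(proj, affine)
print("planar contour sweeps an affine shape-space")
```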

4.6 Three-dimensional affine shape-space

Shape-space for a non-planar object is derived as a modest extension of the planar case. The object concerned should be visualised as a piece of bent wire, rather than a smooth three-dimensional surface. Smooth surfaces are, of course, of great interest but shape-space treatment is more difficult because of the complex geometrical behaviour of silhouettes. The bent wire model also implies freedom from hidden lines; the approach described here deals with parallax effects arising from three-dimensional shape but not with the problem of “occlusion” for which additional machinery is needed.

Clearly the 6-dimensional planar affine shape-space cannot be expected to suffice for non-planar surfaces and this is illustrated in figure 4.10. The new shape-space is “three-dimensional affine” with 8 degrees of freedom, made up of the six-parameter planar affine space and a two-parameter extension. Consider the object to be a three-dimensional curve, which is projected orthographically as in (4.14) to give an image curve; this can be expressed as the standard planar affine transformation (u, M) of (4.3) with an additional depth-dependent term:

r(s) = u + M r0(s) + v Z0(s),   (4.15)

where

(M | v) = (f/Zc) R2x3,

R2x3 being the top two rows of the rotation matrix R.

The three-dimensional shape-space therefore consists of the planar affine space generated by template Q0, with two added components to account for the depth variation that is not visible in the template view. The two additional basis elements are built from a spline-vector Q0z approximating the depth variation Z0(s), one for each image coordinate.

The extra two elements are tacked onto the planar affine W-matrix (4.4) to form the W-matrix for the three-dimensional case.
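As a concrete sketch of this construction (in NumPy; the function name, the column ordering, and the stacked (x; y) layout of the control vector are illustrative conventions, not the book's notation):

```python
import numpy as np

def w_matrix_3d_affine(X0, Y0, Z0):
    """W-matrix for the three-dimensional affine shape-space: the six
    planar affine columns generated by the template (X0, Y0), plus two
    depth columns generated by Z0. Control vectors are stacked as
    (x-components; y-components)."""
    N = len(X0)
    one, zero = np.ones(N), np.zeros(N)
    cols = [np.r_[one, zero],   # x-translation
            np.r_[zero, one],   # y-translation
            np.r_[X0, zero],    # entries of M acting on X0, Y0 ...
            np.r_[zero, Y0],
            np.r_[zero, X0],
            np.r_[Y0, zero],
            np.r_[Z0, zero],    # ... plus the two extra columns:
            np.r_[zero, Z0]]    # the components of v multiplying Z0
    return np.column_stack(cols)  # shape (2N, 8)
```

For a genuinely non-planar template the eight columns are independent; if Z0 is an affine function of X0 and Y0 (a planar object), the two depth columns are linear combinations of the planar affine columns and the space collapses back to 6 dimensions.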

Just as equation (4.5) provided a conversion from the planar affine shape-space to the real-world transformation, the components of the three-dimensional affine shape-space have an analogous interpretation.

The expanded space now encompasses the outlines of all views of the three-dimensional contour, as figure 4.11 shows. Automatic methods for determining Q0 from example views are discussed in chapter 7.

4.7 Key-frames

Affine spaces are appropriate shape-spaces for modelling the appearance of three-dimensional rigid body motion. In many applications, for instance facial animation, speech-reading and cardiac ultrasound, as described in chapter 1, motion is decidedly non-rigid. In the absence of any prior analytical description of the motion, the most effective strategy is to learn a shape-space from a training set of sample motions. A general approach to this, based on statistical modelling, is described in chapter 8.

In the meantime, a simpler methodology is presented here, based on “key-frames” or representative image outlines of the moving shape. Often, an effective shape-space can be built by linear combination of such key-frames. What is more, the shape-space coordinates have clear interpretations, for example:

• X = (0, 0)T represents the closed mouth;

• X = (1/2, 0)T represents the half-open mouth;

• X = (1/4,1/2)T represents the mouth, half-protruding and slightly open.
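The linear combination underlying such a key-frame space can be sketched as follows (NumPy; the function name and argument layout are illustrative, and the key-frames are represented simply by their spline control vectors):

```python
import numpy as np

def keyframe_contour(Q0, keyframes, X):
    """Control vector for shape-space coordinates X. The basis columns
    are the differences (Qi - Q0), so X = 0 reproduces the template Q0
    (the closed mouth) and the i-th unit vector reproduces key-frame Qi
    exactly; intermediate X blends expressions linearly."""
    W = np.column_stack([Qi - Q0 for Qi in keyframes])
    return Q0 + W @ np.asarray(X, dtype=float)
```

With two key-frames this gives exactly the two-dimensional expression space of the bullets above.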

A little more ambitiously, the same three frames can be used to build a more versatile shape-space that allows for translation, zooming and rotation of any of the expressions from the simple two-dimensional shape-space. Minimally, this should require 2 parameters for expression plus 4 for Euclidean similarity, a total of 6 parameters. However, the linearity of shape-space leads to a wastage of 2 degrees of freedom, and the shape-space is 8-dimensional, with template Q0 as before and a shape-matrix W.

This is based on the shape-matrix for Euclidean similarities (4.2) on page 75, extended to all three frames. Again, expressions are naturally represented by the shape-vector, for example:

• X = (u, 0, 0, 0, 1, 0, 0, 0)T represents the fully open mouth, shifted to the right by u;

• X = (0, 0, cos θ − 1, sin θ, 0, 0, (1/2) cos θ, (1/2) sin θ)T represents the closed mouth, half-protruding and rotated through an angle θ.

Of course this technique, illustrated here for 2 key-frames under Euclidean similarity, applies to an arbitrary number of key-frames Qi, i = 0, 1, ..., Nk, and to a general space of rigid transformations spanned by a set {Tj, j = 1, ..., Nr}. In that case any contour corresponding to the appearance of the ith key-frame is composed of a linear combination of the vectors Tj Qi, for an Nr-dimensional space of rigid transformations. The W-matrix is then composed of columns which are the vectors Tj Qi, i = 0, 1, ..., Nk, j = 1, 2, ..., Nr. To avoid introducing linear dependencies into the W-matrix, it is best to omit translation from the space of rigid transformations and treat it separately, as in the two key-frame example above; this gives the W-matrix for the composite shape-space of rigid and non-rigid transformations.

One final caveat is in order. With Nr degrees of transformational freedom (excluding translation) and Nk key-frames, there are a total of Nr + Nk degrees of freedom in the system. However the linear representation as a shape-space with a W-matrix as above has dimension Nr x (Nk + 1), a “wastage” of Nk(Nr — 1) degrees of freedom. The two key-frame example above has Nr = 2,Nk = 2 so the wastage is just 2 degrees of freedom, in a shape-space of total dimension 8 (including translation). With more key-frames and larger transformational spaces such as three-dimensional affine (Nr = 6), the wastage is more severe — 5 degrees of freedom per key-frame. In such cases, the constructed shape-space is likely to be too big for efficient or robust contour fitting. However, it is often possible to construct a smaller space by other means such as “PCA” (chapter 8) and use the large shape-space constructed as above for interpretation. In particular, shape displacements can be factored into components due to rigid and non-rigid transformations respectively, and this is explained at the end of chapter 7.

4.8 Articulated motion

When an object (e.g. a hand) is allowed, in addition to its freedom to move rigidly, to have articulated appendages (fingers), more general shape-spaces are needed. Clearly, one route is to take a kinematic model in the style used in robotics for multi-jointed arms and hands and use it as the basis of a configuration space. The advantage is that the resulting configuration space represents legal motions efficiently, because it has minimal dimension. The drawback is that the resulting measurement models (see next chapter) are non-linear. This is due to trigonometric non-linearities, as in the previous section on rigid motion, exacerbated by the hinges added onto the base body. The result is that classical linear Kalman filtering is no longer usable, though non-linear variants exist which are, however, not probabilistically rigorous. Furthermore, linear state-spaces admit motion models which apply globally throughout the space. In a non-linear space, motion models could perhaps be represented as a set of local linear models in tangent spaces placed strategically over a manifold. This is hard enough to represent, and the task of learning such models seems a little daunting.

As with rigid motion, there is a way to avoid the non-linearities by generating appropriate shape-spaces. Again, there is some inefficiency in doing this, and the resulting space underconstrains the modelled motion. The degree of wastage depends on the precise nature of the hinging of appendages, and this is summarised below. Proofs are not given here, but there is a more detailed discussion in appendix C.

Two-dimensional hinge

For a body in two dimensions, or equivalently a three-dimensional body constrained to lie on a plane, each additional hinged appendage increments the dimension of shape-space by 2, despite adding only one degree of kinematic freedom. Hence the wastage amounts to 1 degree of freedom per appendage.

Two-dimensional telescopic appendage

Still in two dimensions, each telescopic element added to the rigid body increments the shape-space dimension by 2, causing a wastage of one degree of freedom, as for the hinge.

Hinges on a planar body in three dimensions

The rigid planar body above, with its co-planar hinged appendages, is now allowed to move out of the ground plane, so that it can adopt any three-dimensional configuration. Each hinged appendage now adds 4 to the dimension of shape-space, resulting in the wastage of 3 degrees of freedom.

Universal joints on a rigid three-dimensional body

Given a three-dimensional rigid body, whose shape-space is 3D affine, each appendage attached with full rotational freedom (via a ball joint, for instance) increments the dimension of shape-space by 6. Such an appendage introduces 3 kinematic degrees of freedom, so the wastage is 3.

Hinges on a rigid three-dimensional body

For appendages attached to the three-dimensional body by planar hinges, with just 1 kinematic degree of freedom, the dimension of shape-space increases by 4, so again the wastage is 3 degrees of freedom per appendage.

Note that the above results hold regardless of how the appendages are attached — whether directly to the main body (parallel), or in a chain (serial) or a combination of the two.
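The counts above can be summarised in a small calculator (a sketch; the dictionary keys and base dimensions are illustrative, e.g. 8 for the three-dimensional affine space of a rigid 3D body):

```python
# (kinematic degrees of freedom, shape-space increment) per appendage,
# as listed above; wastage is the difference of the two.
APPENDAGE = {
    "2d_hinge":           (1, 2),
    "2d_telescopic":      (1, 2),
    "planar_hinge_in_3d": (1, 4),
    "3d_ball_joint":      (3, 6),
    "3d_hinge":           (1, 4),
}

def shape_space_size(base_dim, appendages):
    """Total shape-space dimension and total wastage for a rigid base of
    dimension base_dim; the counts are the same whether appendages are
    attached in parallel, serially, or in combination."""
    dim = base_dim
    waste = 0
    for a in appendages:
        dof, inc = APPENDAGE[a]
        dim += inc
        waste += inc - dof
    return dim, waste
```

For example, a 3D affine body (dimension 8) with two ball-jointed appendages yields a 20-dimensional shape-space with a wastage of 6.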

Bibliographic notes

This chapter has explained how shape-spaces can be constructed for various classes of motion. The value of shape-spaces of modest dimensionality was illustrated in (Blake et al., 1993) as a cure for the instability that can arise in tracking with high-dimensional representations of curves such as the original finite-element snake (Kass et al., 1987) or unconstrained B-splines (Menet et al., 1990; Curwen and Blake, 1992). Shape-spaces are linear, parametric models in image-space, but non-linear models or deformable templates are also powerful tools (Fischler and Elschlager, 1973; Yuille, 1990; Yuille and Hallinan, 1992). Linear shape-spaces have been used effectively in recognition (Murase and Nayar, 1995). Shape-spaces discussed so far have been image-based, but a related topic is the use of three-dimensional parametric models for tracking, either rigid (Harris, 1992b) or non-rigid (Terzopoulos and Waters, 1990; Terzopoulos and Metaxas, 1991; Lowe, 1991; Rehg and Kanade, 1994).

Initialisation from moments is discussed in (Blake and Marinos, 1990; Wildenberg, 1997), and the use of 3rd moments to recover a full planar affine transformation is described in (Cipolla and Blake, 1992a). In some circumstances, region-based optical flow computation (Buxton and Buxton, 1983; Horn and Schunck, 1981; Nagel, 1983; Horn, 1986; Nagel and Enkelmann, 1986; Enkelmann, 1986; Heeger, 1987; Bulthoff et al., 1989) can be used to define the region for snake initialisation. This has been shown to be particularly effective in traffic surveillance (Koller et al., 1994).

Shape-spaces are based on perspective projection and its linear approximations in terms of vector spaces (Strang, 1986). Mathematically, projective geometry is a somewhat old-fashioned topic and so the standard textbook (Semple and Kneebone, 1952) is rather old-fashioned too. More accessible is a graphics book such as (Foley et al., 1990) for the basics of camera geometry and perspective transformations. Computer vision has been concerned with camera calibration (Tsai, 1987), in which test images of grids are analysed to deduce the projective parameters for a particular camera, including both extrinsic parameters (the camera-to-world transformation) and intrinsic parameters such as focal length.

The most general linear approximation to perspective is known variously as paraperspective (Aloimonos, 1990) or weak perspective (Mundy and Zisserman, 1992) and can be particularly effective if separate approximations are constructed for different neighbourhoods of an image (Lawn and Cipolla, 1994). The gamut of possible appearances of three-dimensional contours under a particular weak perspective transformation forms an affine space (Ullman and Basri, 1991; Koenderink and van Doorn, 1991). This idea led to a series of studies on using affine models to analyse motion, including (Harris, 1990; Demey et al., 1992; Bergen et al., 1992a; Reid and Murray, 1993; Bascle and Deriche, 1995; Black and Yacoob, 1995; Ivins and Porrill, 1995; Shapiro et al., 1995).

The first three chapters of (Faugeras, 1993) are an excellent introduction to projective and affine geometry and to camera calibration.

Articulated structures are most naturally described in terms of non-linear kinematics (Craig, 1986), in which the non-linearities arise from the trigonometry of rotary joints. Such a model has been incorporated into a hand tracker, for instance (Rehg and Kanade, 1994), in which the articulation of the fingers is fully treated. Articulated structures can be embedded in linear shape-spaces, but this can be very “inefficient,” in the sense of section 4.1, in that kinematic constraints have to be relaxed — see appendix C.

Finally, smooth silhouette curves and their shape-spaces are beyond the scope of this book. However, it can be shown that a shape-space of dimension 11 is appropriate. This shape-space representation of the curve is an approximation, valid for sufficiently small changes of viewpoint. Its validity follows from results in the computer vision literature about the projection of silhouettes into images (Giblin and Weiss, 1987; Blake and Cipolla, 1990; Vaillant, 1990; Koenderink, 1990; Cipolla and Blake, 1992b).

Image processing techniques for feature location

The use of image-filtering operations to highlight image features was illustrated in chapter 2. Figure 2.1 on page 27 illustrated operators for emphasising edges, valleys and ridges, and it was shown how the emphasised image could be used as a landscape for a snake. However, for efficiency, the deformable templates described in the next two chapters are driven towards a distinguished feature curve rf (s) rather than over the entire image landscape F that is used in the snake model. This is rather like making a quadratic approximation to the external snake energy:

Eext ≈ constant + k ∫ |r(s) − rf (s)|² ds, for some constant k,

where rf (s) lies along a ridge of the feature-map function F. The increase in efficiency comes from being able to move directly to the curve rf, rather than having to iterate towards it as in the original snake algorithm described in section 2.1.

It is therefore necessary to extract rf (s) from an image. One way of doing this is to mark high-strength values on the feature maps and group them to form point sets to which spline curves can be fitted. An example of feature curves grouped in this way was given, for edge-features, in figure 3.1 on page 42. However, the wholesale application of filters across entire images is excessively costly computationally. At any given instant, an estimate is available of the position of a tracked image-contour, and this can be used to define a “search-region” in which the corresponding image feature is likely to lie. Image processing can then effectively be restricted to this search region. The search region displayed in the figure is formed by sweeping normal vectors of a chosen length along the entire contour. Features can then be detected by performing image filtering along each of the sampled normals, and this is very efficient. If normals are constructed at points s = si, i = 1, ..., N, along the curve r(s), this gives a sequence of sampled points rf (si), i = 1, ..., N, along the feature curve rf (s). It is of course possible that more than one feature may be found on each normal, but for now it is assumed that just one — the favourite feature — is retained.


5.1 Linear scanning

In order to perform one-dimensional image processing, image intensity is sampled at regularly spaced intervals along each image normal. An arbitrarily placed normal line generally intersects image pixels in an irregular fashion, as in figure 5.2. This is well known to produce undesirable artifacts in computer graphics — “jaggies” in static images and twinkling effects in moving ones — for which the usual cure is “anti-aliasing.” Unlike graphics, in which the task is to map from a mathematical line onto pixels, the problem here is to generate the opposite mapping, from image to line. This calls for a sampling scheme of its own.

An effective sampling scheme, spatially regular and temporally smooth (when the line moves), involves interpolation as follows. A sequence of regularly spaced sample points is chosen along the line. The intensity I at a particular sample point (x, y) is computed as a weighted sum of the intensities at 4 neighbouring pixels, as in figure 5.3. A pixel with centre sited at integer coordinates (i, j) has intensity Ii,j. The intensity I at (x, y) is then computed by bilinear interpolation:

I = Σi,j wi,j Ii,j,   (5.2)

so that at most four pixels, the ones whose centres are closest to (x, y), have non-zero weights, as the figure depicts.
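This interpolated sampling along a normal might be sketched as follows (NumPy; the function name, the (row, column) image convention and the normal parameterisation are assumptions for illustration):

```python
import numpy as np

def sample_along_normal(image, p, n, half_length, step=1.0):
    """Sample intensities at regular intervals along the normal n through
    point p, by bilinear interpolation of the four nearest pixels.
    image[i, j] is the intensity of the pixel centred at x = j, y = i."""
    n = np.asarray(n, float) / np.hypot(*n)
    ts = np.arange(-half_length, half_length + step, step)
    out = []
    for t in ts:
        x, y = p[0] + t * n[0], p[1] + t * n[1]
        j0, i0 = int(np.floor(x)), int(np.floor(y))
        fx, fy = x - j0, y - i0
        # weights of the four neighbouring pixel centres
        out.append((1 - fx) * (1 - fy) * image[i0, j0]
                   + fx * (1 - fy) * image[i0, j0 + 1]
                   + (1 - fx) * fy * image[i0 + 1, j0]
                   + fx * fy * image[i0 + 1, j0 + 1])
    return np.array(out)
```

Because the weights vary continuously with (x, y), the sampled signal changes smoothly as the normal moves, avoiding the twinkling artifacts described above.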

5.2 Image filtering

Analysis of image intensities now concentrates on the one-dimensional signals along normals. The intensity I(x) along a particular normal is sampled regularly at x = xi, and the intensities are stored in an array Ii = I(xi), i = 1, ..., N. A variety of feature detection operators can be applied to the line, popular ones being edges, valleys and ridges. Features are located by applying an appropriate operator or mask Cn, −Nc ≤ n ≤ Nc, by discrete convolution, to the sampled intensity signal In, 1 ≤ n ≤ N, to give a feature-strength signal Fn.

Maxima of that signal are then located, and a feature is marked wherever the value at a maximum exceeds a preset threshold (chosen to exclude spurious, noise-generated maxima). This is illustrated for edges in figure 5.4 and for valleys in figure 5.5.
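A sketch of this one-dimensional feature detection (NumPy; the particular smoothed-derivative edge mask and the threshold value are illustrative):

```python
import numpy as np

def feature_points(I, mask, threshold):
    """Convolve the operator mask with the sampled intensities I and
    return indices of local maxima of feature strength that exceed the
    threshold, rejecting spurious noise-generated maxima."""
    F = np.convolve(I, mask, mode="same")
    peaks = [n for n in range(1, len(F) - 1)
             if F[n] > threshold and F[n] >= F[n - 1] and F[n] >= F[n + 1]]
    return peaks, F

# np.convolve reverses the mask, so this acts as the smoothed-derivative
# correlation kernel [-1, -2, 0, 2, 1], responding positively where the
# intensity steps upward (an edge).
edge_mask = np.array([1.0, 2.0, 0.0, -2.0, -1.0])
```

Applied to a step in intensity, the peak of feature strength straddles the step location.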

Corners

Effective operators for corners also exist and have been used for visual tracking. However, corners do not quite fit into the search paradigm described here. Being discrete points, a corner is likely to be missed by search along normals unless it happens to lie exactly on some normal; more generally it will be located in the gap between two adjacent normals. This problem does not arise with edges because they are extended and should generally intersect one or more of the normals. If corners are to be used they must be located by an exhaustive search over the region of interest, which is rather more expensive computationally than a search that is restricted to normals.

One operator, the “Harris” corner detector, works by computing a discrete approximation to the moment matrix

S(x, y) = ∫ G(x', y') [∇I(x + x', y + y')] [∇I(x + x', y + y')]T dx' dy'

at each image point (x, y), where ∇I = (∂I/∂x, ∂I/∂y)T is the image-gradient vector at a point, and G is a two-dimensional Gaussian mask for smoothing, typically 2-4 pixels in diameter. The trace tr(S) and the determinant det(S) are examined at each point (x, y). Wherever tr(S) exceeds some threshold, signalling a significantly large image gradient, and the ratio tr(S)/(2√det(S)) is sufficiently close to its lower bound of 1, a corner feature is marked. The Harris detector responds reliably to objects that are largely polyhedral, marking corners that are likely to be stable to changing viewpoint. Natural shapes may, however, fire the corner detector in less stable locations.

[Figure caption: an image (top) is searched to find valleys — locations of minimum intensity. The valley operator (left) convolved with the intensity signal (right) produces a ridge (bottom) corresponding to the dark line between adjacent fingers.]
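A discrete sketch of the detector (NumPy; the Gaussian width and the two threshold values are illustrative parameters, and the ratio test follows the description above, marking corners where tr(S) is large and tr(S)/(2√det S) is near its lower bound of 1, i.e. the eigenvalues of S are comparable):

```python
import numpy as np

def _smooth(A, g):
    # separable convolution with the 1-D Gaussian mask g
    A = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, A)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, A)

def harris(I, sigma=1.5, trace_thresh=1.0, ratio_max=1.15):
    """Boolean corner map from the smoothed moment matrix of gradients."""
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    Iy, Ix = np.gradient(I.astype(float))          # axis 0 is y (rows)
    Sxx = _smooth(Ix * Ix, g)
    Syy = _smooth(Iy * Iy, g)
    Sxy = _smooth(Ix * Iy, g)
    tr = Sxx + Syy
    det = Sxx * Syy - Sxy**2
    ratio = tr / (2 * np.sqrt(np.maximum(det, 1e-12)))
    return (tr > trace_thresh) & (ratio < ratio_max)
```

On a synthetic bright square the response fires near the corners but not along edge midpoints, where one eigenvalue of S vanishes and the ratio diverges.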

5.3 Using colour

Colour in images is a valuable source of information, especially where contrast is weak, as in discriminating lips from facial skin. Colour information is commonly presented as a vector I = (r, g, b) of red-green-blue values at each pixel. The most economical way to treat the colour vector is to reduce it to a single value by computing a suitable scalar function of I. The scalar can then be treated as a single intensity value and subjected to the same image-feature operators as were used above for processing monochrome images. Two scalar functions are described here: one that is general, the hue function, and one that is customised — learned from training data: the Fisher linear discriminant.

The hue function is used commonly to represent colour in graphics, vision and colorimetry, and corresponds roughly to the polar angle for polar coordinates in r, g, b-space. It separates out what is roughly a correlate of spectral colour — i.e. colour wavelength — from two other colour components: intensity (overall brightness) and saturation (“colouredness” as opposed to whiteness). This explains why it is particularly effective when contrast is low, so that changes in intensity (r + g + b) are hard to perceive. Hue is defined as follows:

hue(r, g, b) = arctan( √3 (g − b) / (2r − g − b) ).

Note that there is a linear “hexcone” approximation to the hue function for applications where it is important to compute it fast, for example to convert entire video images.
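The polar-angle form of hue can be sketched as follows (the √3 factor places the r, g, b axes 120° apart in the chromaticity plane, and the two-argument arctangent resolves the quadrant):

```python
import numpy as np

def hue(r, g, b):
    """Polar hue angle in r,g,b-space, in degrees: unchanged when
    (r, g, b) is scaled by overall brightness."""
    return np.degrees(np.arctan2(np.sqrt(3.0) * (g - b), 2.0 * r - g - b))
```

Pure red, green and blue map to 0°, 120° and −120° respectively, and scaling all three channels together leaves the angle unchanged, which is the brightness-invariance exploited above.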

The Fisher linear discriminant is an alternative scalar function that attempts to represent as much of the relevant colour information as possible in a single scalar value. Its efficiency is optimal in a certain sense, but this comes at the cost of re-deriving the function, in a learning step, for each new application. Learning is straightforward and works as follows. First, a foreground area F and a background area B are delineated in a training image. For instance, in figure 5.7, F would be the lip region and B would be part of the immediate surround. The Fisher discriminant function is defined as a simple scalar product:

fisher(I) = f • I where I = (r,g,b)T. (5.5)

The function is learned from the foreground and background areas F and B by the algorithm of figure 5.8, which determines the coefficient vector f. The effect of the algorithm is to choose the vector f in colour (r, g, b) space which best separates the background and foreground populations of colours. When the Fisher function is used in place of intensity, feature detection works effectively, as figure 5.7 shows.
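The algorithm of figure 5.8 is not reproduced here, but the standard two-class Fisher discriminant it computes can be sketched as follows (NumPy; sample arrays of shape (n, 3) holding r, g, b values are an assumed input format):

```python
import numpy as np

def fisher_discriminant(F, B):
    """Coefficient vector f of (5.5), learned from foreground and
    background colour samples F and B. f maximises the separation of
    the projected class means relative to the within-class scatter."""
    mF, mB = F.mean(axis=0), B.mean(axis=0)
    # pooled within-class scatter matrix in colour space
    Sw = np.cov(F.T) * (len(F) - 1) + np.cov(B.T) * (len(B) - 1)
    f = np.linalg.solve(Sw, mF - mB)
    return f / np.linalg.norm(f)
```

Projected onto f, the foreground and background colour populations are maximally separated relative to their spread, which is what makes the scalar fisher(I) a good substitute for intensity.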

5.4 Correlation matching

The oldest idea in visual matching and tracking, and one that is widely used in practical tracking systems, is correlation. Given a template T in the form of a small array of image intensities, the aim is to find the likely locations of that template in some larger test image I. In one dimension, for example, with image I(x) and template T(x), 0 ≤ x ≤ A, the problem is to find the offset x' for which T(x) best matches I(x + x') over the range 0 ≤ x ≤ A of the template. This is done by minimising

M(x') = ∫ (I(x + x') − T(x))² dx

with respect to x', the integral being taken over the template range. In practice this theoretical measure must be computed as a sum over image pixels.

An illustration is given in figure 5.9, in which the problem is to locate the position of the eyebrow along a particular line. A template T is available that represents an ideal distribution of intensity along a cross-section located in a standard position. The task is to find the position on the line which best corresponds to the given intensity template, and this is achieved by minimising M(x') as above. The nomenclature “correlation” derives from the idea that, in the case that T(x) and I(x) span the same x-interval and are periodic, minimising M(x') is equivalent to maximising the “mathematical correlation”.
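In discrete form, the minimisation can be sketched as follows (an exhaustive scan over offsets, the straightforward implementation; the function name is illustrative):

```python
import numpy as np

def match_offset(I, T):
    """Slide template T along signal I, computing the discrete form of
    M(x') = sum over the template support of (I(x + x') - T(x))^2, and
    return the minimising offset together with the whole profile M."""
    M = np.array([np.sum((I[x:x + len(T)] - T) ** 2)
                  for x in range(len(I) - len(T) + 1)])
    return int(np.argmin(M)), M
```

When the template occurs exactly in the signal, M vanishes at the true offset and the minimum locates it.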

Correlation matching can be used to considerable effect as a generalised substitute for edge and valley location. Position of the located feature along each normal can be reported in the same way as for edges and used for curve matching. This can be particularly effective in problems where image contrast is poor. In the lip-tracking application of figure 1.10 on page 14, tracking without lip make-up proved possible only when edge detection was replaced by correlation.

There are numerous variations on the basic theme (see bibliographic notes, at the end of the chapter). One variation is to pre-process I(x), for example to emphasise its spatial derivative, which tends to generate a sharper valley in M(x'), which can then be located more accurately. More modern approaches dispense with intensities altogether, representing I and T simply as lists of the positions of prominent features. The problem then is to match those lists, using discrete algorithms. This has the advantage of efficiency because of the data-compression involved in reducing the intensity arrays to lists. It is also more robust for two reasons. First, intensity profiles vary as ambient illumination changes whereas the locations of features are approximately invariant to illumination. Secondly, there is the additional flexibility that different amounts of offset x' can be associated with different features, for example when performing stereo matching over a large image region, whereas in the correlation framework x' is fixed. For these reasons, feature-based matching is considered superior to correlation for many problems.

Correlation matching is often used in two dimensions with a template T(x,y) matched to an offset image I(x + x',y + y'), as in figure 5.10. This is considerably more computation-intensive than in one dimension as it involves a double integral over x,y and also a two-dimensional search to minimise M with respect to x',y'. Various techniques such as Discrete Fourier Transforms and “pyramid” processing at multiple spatial scales can be used to improve efficiency. In higher dimensions than two, for example when rotation and scaling are to be allowed in addition to translation, exhaustive correlation becomes prohibitively expensive and alternative algorithms are needed. One approach is to generate the offsets in higher dimensions in a more sparing fashion, using gradient descent for instance. Numerous authors have shown that this can be very successful.

5.5 Background subtraction

A widely used technique for separating moving objects from their backgrounds is based on subtraction. It is used as a pre-process, in advance of feature detection, to suppress background features so that they do not distract fitting and tracking processes. It is particularly suited to applications such as surveillance, where the background is often largely stationary.

An image IB(x, y) of the background is stored before the introduction of a foreground object. Then, given an image I(x, y) captured with the object present, feature detection is restricted to areas of I(x, y) that are labelled as foreground because they satisfy

|I(x, y) − IB(x, y)| > a,

where a is a suitably chosen noise threshold. As figure 5.11 shows, background features tend to be successfully inhibited by this procedure. Cancellation can disrupt the foreground, as the figure shows, where the background intensity happens to match the foreground too closely. This results in some loss of genuine foreground features, a cost which is eminently justified by the effectiveness of background suppression.

Finally, it should be noted that the expense of computing the entire difference image ΔI can be largely saved by computing differences “lazily”, just for those pixels actually required for interpolation along normals in (5.2). This is an important consideration for real-time tracking systems.
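The lazy evaluation might be sketched as follows (NumPy; the point-list interface stands in for the pixels touched by the sampled normals, and the names are illustrative):

```python
import numpy as np

def foreground_mask_at(IB, I, points, a):
    """Evaluate the background-subtraction test |I - IB| > a lazily,
    only at the (row, col) sample points actually required, instead of
    differencing the whole image."""
    pts = np.asarray(points)
    diff = np.abs(I[pts[:, 0], pts[:, 1]].astype(float)
                  - IB[pts[:, 0], pts[:, 1]].astype(float))
    return diff > a
```

Only the handful of pixels along the search-line normals are ever differenced, which is what makes the scheme cheap enough for real-time use.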

Bibliographic notes

The Bresenham algorithm (Foley et al., 1990) is routinely used in graphics to convert a mathematical line to a sequence of pixels and “anti-aliasing” is employed to achieve an interpolated pattern of pixel intensities which varies smoothly, without flicker, as the line is moved. The interpolated sampling scheme described in this chapter is similar in spirit, but uses a different sampling pattern which performs the inverse function of mapping from an array of pixels to an arbitrary point on a mathematical line.

A general reference on feature detection and the use of convolution masks is (Ballard and Brown, 1982). A much-consulted study of trade-offs in the design of operators for edge detection is (Canny, 1986) and the design of operators for ridges and valleys is described in (Haralick, 1980); the discussions relate to two-dimensional image processing whereas in this chapter the simpler one-dimensional problem is addressed. Effective detectors for corners exist (Kitchen and Rosenfeld, 1982; Zuniga and Haralick, 1983; Noble, 1988) and have been used to good effect in motion analysis, e.g. (Harris, 1992a) and tracking (Reid and Murray, 1996). Operators that respond to regions rather than curves are also important, for example texture masks (Jain and Farrokhnia, 1991) which can be used effectively in snakes that settle on texture boundaries (Ivins and Porrill, 1995).

Correlation matching is based on the idea of “mathematical correlation” which is central to the processing of one-dimensional signals (Bracewell, 1978). It is also used in two-dimensional processing of images to locate patterns (Ballard and Brown, 1982), track motion (Bergen et al., 1992b) and register images for stereo vision (Lucas and Kanade, 1981).
8 .PDZcpp\q59yD?Adn5o a&@ "W "g p!@@@@@@@Vi      u% ^/-@@@W >z\>F fԓ@p.BA  *[7@({ϡx7݌ {ʈ       ".x!x'pfHvD n./qG@MՅ #oDj@= PO@@@@@@jF 8@VB@@:d   "= {D@2#D W 6@8WS   ?@%+PcZ #{D W.^r*m3Ɲ{̞~_w`c,B>_sl&2cݹ~刔A@@Z%KGNumqmX["kۢX$D" ݸ@jE``{;x"GH$DqԊ/dd:;8I)[j!e2eW$R(ًfX|@L2L긁mp4ob8  Bߨp x =o 4E _0n. dqZg3LxNݟwT"&$ȵI+L|TK]/-9{iJif.2?/', .4: xG!"z $`,zH5~Lߏy=lYds@.^xYBrlg#W=fD;[ !&L'/Kw\C:z<m6!v0 !@t4ɸ]p4 I.Ƭ7oR?{щG}m?tdO;w%/>{9Weӏ=cMrIٷ}[GDŽ3ϝۏ9ôiD+^o#I  +5Ƨtjc~]z癧΅w4(B vs \ͭI(]# kͫ:@Gj7gvwUw}辿pmͽ|I}ݝNU~ïtڴCt'3_^翙fiW1@@MȢ9VxqoZwn=W,{?r뮾AР\]( {1Oߟ%:O?gdȶ?|+f}WY=Z1Z>us_| &'c}Wn+Wf⠌ 5$䲙-M7uޅ%:29%(JhdG|&NL#%)bg8J2+(}Vyf,.]aU hdsyUP#"Cth`$T|!i6hQd@6h-B#jlhJɸ>tub.{ّa\:cܹgNqc.GlU$wdSS"$=66FCIO<鴽O}q4ȿ'/|ܔ,ɦ{=xz&[x݉Otbv7"'ioOsr;<'I%9vB9Opы,B(9' gc?dc|gq'+{rw*a"uiȴi7A\PD{<;Nh,N e @6>.> @@AXmg$SSOt:gFF&L#,T%9g5Γ_xtS̶eـ9J?>0KO4Lvw##COGs 1\\wugVZuM7%emmm}}}y{}Q6ͮB6'ӟ[oz-*4F?JHjɠ( ۰aÏぁq6tQ28t(/) |̟˖<џܾ+ȮO hFɋAד{jOduC$rqҙLS=[u2|!3{;sG@癑䗾t'>{+.u20FCQ{G#Ϟ6_qY} ą/}5Ο~ +!Rڳ=}Oq÷^`š]ǎؕlҤO>M_/"2@$֑SX} TM$a{{`FYǮCZ5"E+\    f.+n*!̅l  O:a1/gM>a:$Ǟ|N?W{i#͖ /eRGsfɯ~+3:Ue xr'Wq?d`}iJ7mc/MCU-BLǩS %znC-yё_w駳H}?k##3N?(iGa |}l$A>E?/y۶mY}QzvIc_מ{ NNjOF#b#AZA\hSOev~1S Ǐ?|0ޙ[   MI @ TBj^1MSGGs;3_G;ХӴ`^>.Rȥ"=LH22 h[ܰS&F_2NM]|3.74_'hc=w{ĉs3.̜v#jI,YrئcJ( &!ٴM!v!e4 ͚ 1` .mhFHIds?wνM؉Pev> ׿o~WC;:u7| 7 iשԬ}~Zm%|i_o⴩Oej]}(ez4xKeEz'a }߰fLZNG͟ԧo9YɘQVU?YyTuUUc*sյGO>zLsKG`Am~٢Ew}L ijtַ̛UqךkLh=OtZ"=Oq_G<Ȃ {]sgiQ/^?O`,YI$0${!XYJ;mt8lcwGwI_ jsS}nnAkkcy:Bo."=%E?1kګ# [/*2))޿n@Y:.lNow7xtDG2e^W]U4Ul7Act{0Ӗ6ɌO}+p #iEȾuhU=xrVۿ?_~f]-vx~?0/9$J? 8|0"d\fo#4O_!+| Ҳ_ ^do~ߋ2&~h$#?آ`8Td? 
"\0 a"qo1"I`$ |l͐#$px$ƲI!{7tv[/v"iaʚufTDÿ>W9?E3DSkZ#(Rɶؽp[_޾ZφA8sn%aVjnQ%c++}YZzݪrHi7v;p SP i | РbIӴx PEUX5wRGd ( NЦz-&BcN` P~c>($'5﫯9gwz\(ʆF 'Xu\&I>fC&ӗ[RT-y8P9EXm]95lX 7h"*qx+VkQV6dB尻\*e8 ݭ.+0b\ U%e8o_.{ ;NYa!-;p|"Uw,69(:%dVfKF ^%KjUvDt7Y.<9sRce$0hm⬏&4&tq$Ma`0flK6}5m:s hF˜[2Jfwz_C#  zΎ6)`4]͞  SKF-=^xhe,ݻmò/9yu];xZ zְ4g޹x<<=UVcQA)2[233322gPY|}BFoti]Si9,y 0Y% L\0h|@su9)"մ XYsi\L@$0rarS׬Y3r5'ivb{Cޏ$5v[Ɔ#ڐ}qC{τ$:Z{ SZ ESƙ&/C?6N2<"<^Gg׮].\8mht5w*ϝ9qWWi gxSS?l*̙Cnب]mȐ o~Q:]EE",^sw1j(>?o×o=x  ##޹){'ňIHH`qCBN8l( াf,MZ:V;9vK~o4wXb6ĉWEޛ6ml#bt\~oZR.8GTXEI{Ν;7v({'CNŢ;Q=e˖2b )J??v؁Іe#RZ,AIHH 5,H@F'7 a'FN/Imv@wP{lTmQeau}rmku˫5\̗Nuׯr̞tq/\`J/X50 Q{N,MfT&(??jyEyY`~ ںK_m/^08gVLǺ+XY^>=++sY+W,X}˖-mjh()wd;NfhҸ2{Z8G7mɹ ,F!X8w`C k ":w3A1s`MTN"ja`hqkje.˄Ы c"RJ/v8kx p3fԔd(͍t*c=^fq j`uå 9TL5r1lFJrx1Pӧͦ Bs-R>liiYdi ֬Y_UU} 7* A0ï2]xy֬[n烀ҧ(ښŋգ[O  ѣG/X }(Livj9@QPO8le(YY 0VZҊX`];^v@܌`۷mKT{sg+ľ}к3@Kwx .,JP-[ -UVajuʝޱu떥K:sW*wqC?<#ښC͘9ԩSS^uUPȣ g%up1s@Qaڶly(nƢh^]]M㢂>;4]belw^b9$ T DQS]gW^Y]Sd(ў0~&LII}o<{eW` =O0e0Y K&xI1 L<4L> oў[-߃m۶ xΜyy!} ![[Bc… ۷oNOK AXM\B}3g@A馛2t/|2B_nw8+! 젮,Y8$h_xb[SS믯:1x^nj%^oQ_Ə;=EVLwgggA0M==?x'._v9@җ- ~;3#3r %=T@}v3gN:я~#G5[o]y?o}t`>SG1fbK,?eK/[p%1t`hkf͝ wXk @k.듟$tNd[F[WW+~}]1P8qDwW|+_ ,!(677!]ooQZa6 (ޗ]|$=w܆hljzEb <)S 4?EC/þBYYi AzQW߰iӛ (`BJɰo92{.ڨ5ܹs{Yp[a#S8bš,CR4N<+a˸+" kgs 9v+Ϝ;`aR9q\0.BN60 0^.J~H@w/Ӎ'xlVΝ[zhxɶX$@@LҞk:Pq7P! pP81ŊP(s8viJ8ו~ 핋XRSSBC:_1@QO:oLBQJFb̔.NXh JH@q1VycdSrxWE@NeW(0k1Qx8 xF[ mVki4Q\s5:λdG!I0" ⊕軜l*5k]柯D uAC4fRSQ,`3P8p ݽr(hZ}WGp h,cTtwkK˅/^icjL&$H!b1x-: s`$Z-#<.Ŕ)0  PG0h0? 
'Y]x<Vdx-Bت=(fSP:lƄvYt {DEQJ:nkȆ5(C(==tTLy/`y!4*E&YWA!:(PYYUUϝ7DAXx_S]1N0>Qxhٳg 7HzznNn>໢iƌ(<6`{W> Y}-7kѺu;v;3f8} hcTL6- ԍ7<9sˠlFQ:k(rGWl!?⡸N& J-ʨ $C ]7,@JGFf}}3<x\`#w:V4dv̂u(+L H(Aؾ}|Qk U*nDFE NXN Ab , >,t2*L*s~l,?zƩ)gΞŲ*24삂5Ԇ zBh kĈS)'pnEwg;:'x)II!Uoqqq‚nyhd H`̠C#yk-W]ucӿ[^o*\l;w9yn\N/߸(Ky1?^~Cy@9ȅ 7(f )a2+d@H9j5 ?}[NyE޸IB J«vdihlpӎVCdlRT9I{廨EWSہپ5x˫9c&ԉP RTc_ 'g( ~kZpF·J'lߏSֽ] "H._ {_F`Rz2ò5XД~-<ȉ*`J(lѺ e5Z FcȨE`l0)wPNP$@uwlˮ` uPZPRP`V?BGU@Owc`VF  Z2+W:@P 2a7 #h`H nTl7~Z7`EEӺ?Fdhoo.07@Qs."O"Yg;/^l>lCE`$q-1EIP"h e0 (wGȨ@/;Moh-fub,^y{z 9.*B@3X Px) rsSڵ'#]?8_5W_3uH 3/F/E9' {>4vAٻLqf;I (&{)☻o&S*L!1 Mב9-#dě.-8ǘlɝ" qDuC NswW62FB#jCC Ɉw}ߣNɦ,s~l]`L?a MwgfCJn9gU^ ~P0yD+?o0A8fuIeQڽiSS3c?!u"r =PنV> xVG`,lL|^{D{Nom7H!Mc^9` +2`TDlQP"3qac<ľe< (62ӤY!!Qfy`+`0  ĘTO6rE4ҝt;=qEr8"#n]fNtdi @gPհhKwtS}9cUrQJIi<Ɠ~ [@1k!/%4Lkhu7x{=]w# ǣcOBl" =kcfrOU!؋4č I` 8wLD$@Z5jNIN796e6gB&~FE}υLD#B/b6؎'#V` 8¹ \- >J`; D&0H 9p@r K&pwǏ>UL1a5NnSA$@$l`2#rsϗ5 j CGEQ'aH `}LƖ&#@ S,=EHH&lo6ZjSXl-Uk@/Q#4@/cNd$@Dw 7R߰ tt,HF-.'-rHB nw} NmH  z2 iϋZxaHH`ψlݶm˖7Z[u O$lHS3HH` tw_Zl&?4mO$@ 涮5w޵tMm(YHH 4k>su9)Vei}a\L@$0!L\~<>'ua͚5#Y#mV)է1ciiE}͕|]/^~Y'>-bf*.^k2ΝA0h)#|͛3%-3gN3pdÆ -vZUk4o9.ÏKWu\"^ :Mz4L!nX)2R. ` a4auIH1 U.njp[SEWZ.{ F HosJ%n AdY  k#G'A $3*ۋu3ĔR]$gb)%ҋDtQX!2T$( Meh.PO4$@$@$@$hLʴ?CN-# $o!%K24! nG{{pQ    F`wPO6߈H`=؜^;8zzJ+~{_:߈_lŁGd?={sO?-o+~s&pF_mHH z=;ra<ޯp-H I 15xDxIBb   P&ѣ3UUbt9{׋-L*kbqLq(+  ZQT4hPD  Q%@$l2{bl(( 1`)+++??_]'GtxQ$].KDYao8pLZ剖nҋnЧ I[Ď*:ET1c8qNˮ]q@" &ZzIF`OL(YHHF@y쟏Uovڳ8]=Od6ը;R}vS)=3kRN/8n)p,UGRV{ֳp휓oYx L 4LdSH&Sng4H;ymI͕or +^!ƕyԧ/+l\9 zEicK/.-h*iUSh*y+50-]3/ 'Z>76(ѕum[؂m~`\L@$0r|><.gmz|1 V o-]*RӥkVL@gSHH`\6٪ >\[/-[nݱLޠzGgNѪZjy O}svncq#fĉĭC<^+ `: 8zL& 0p5X'ą36N7J{BV#R Kwj{Jgۻݳfx˧ =oXҤ]>jW.JQ^եHlnNiѵzq nK24ϬN-.IAݨSi\^5 w ]qzwg&%Z E$@#D @ w#iK0"0)G = 0{5MтZaJ.!6aJN; IحrվN/la0,2Ap9}&cܗ.r:Dz0dhޟ%<.aL[P. 
% &C$@$'gs/,cP@  [ @$$H^Mp)-ӒH&g0jtޤ:5bzHg䓩Vc5NwWGM#:OJ|{0}IH Q0/_-K(bÓ?>xGh$@$@$@$@@Lɏ !0IcX [EEb~#1'?QW^7N⩧OO>l1 .PQX2<:H`7 &{zA$@$H>l6wttzႜҿZ2@J%g-]&,5K߿X:TEii"cY$@$@!WU> k}O ii!H6cxa#BL^zZ8|B9-KŴbthh Kħ?-DMĜEg9Y( @0Nu $!aW O}CgsNhcx J> FQ+  @@f  DH_u5L]VyI+Z?w%ږ7\ynN~^Vqe3*["`Y$@$@!T\>mA$@cK@~S}76(Tʺ-g2n~As1  Xh}AG1( 6yyDu>;Lc"l0 dmf#lN,֭[w=K3j 8F1&Lh$@$h[aqhonxkxqoE?4pLs"R haL^O5k`z3Ю?7W:$ϗM:6MZ%,f`_"Nirr* {NA #bq 4HE01?/<Y`>ye^QX(wy Yɓ򃟺ƍL$~e`lM^MMRX/RG C>at*VYuAԆ ZFF⣓8hj Z|܏  '"x^Ewgu8.L)Q;午/wz8 v @oJy0twwAZ>uJ~{N,nj-{^l.u MA~o$EJ8xP~\J*ȈN@Ajkߒ-[o/ L^/ Sw%%21=+^zIZVPF\Gu Gi1S۵9Yjs=X٣InjRvtn] XAFxpT-ih@s\G],OySh}n.-ͧnGbiT.V [r(<3}A)R1A$@VlxUN,Б!c^i4[}<GI`]75"M㵳)7  `2 A#4ՁTt&TnKq8rA?z=rſ&3EO0"]OH\uk/B#ZDJpم' DEXv0 K #EKʱt)Ӱ3`2H$0Y 0Y{&Q!0&ãFVB$@ Uo6gfzH#oSS]j[,PV4:WqI' OHK%[xPyL2YOMmnm==2Ipo\G8 C|>$@$@$/@|c:I`O8f  ap\N ;ŝw C {1qx_We?xi? Cf!q ZIDAT   hFI&-gX0v@w+ l &o";pqlQR"oFOZl8 L:L/t` Hp##[8'NT;f@U;/Zű¥WĽsD]Kƿw ,"( eL$V@il.^H~=jHmwR'g;^v@Z鍇f4<ɩ5g>;90rT=€  QM!nD%0[>.Ke  'ŹMmWjuܴ37/޳mY~bLiXTa^QP`'MGk$   1 0 ǠOX% \>,vLzKe+nοܻWeߐU ?sKGV{oʽkUgޟ󁫵J#a $@$0a9vl幺̔A մU7 HF| a~Ǔ_fȢGUnoży2`vTMFaRy@耶6s8zT(o)Z[̅_qEiJ8^#C;'{Oӧ7-y, Ξ;v[EZ I(svY @(efd^ MR=vL&@ʷߖB DCQ'FYQ]L;Y! \h}>xŌwÆ -vZt'jJ& yȇ5 "hnywgu8.JFrWP|>&/zQ$"(uӉ\RK7LIJ8!~\A; K\)5g4. 
vIp|P 2؜YrmTaشIF.,(( +  $ 0~_(9 L<C~$$0亙HH``5Li4;0*jEY A\(,6</ VNh)ʅ+f\'wt S0K")br\1DR\u0Yw̖埫ij)[ۥ$bbbn#XpʫDKȒyz 2KwevtB)I^HoP1= %P}cɐu $@74'J&C$00j,@0?c>g/KUw8 *Q^&l_QBn˥׫J &PRL1tZ_iCU<.3%Umά4wz$ߙb𪅯"r2a.Ob&7L!@" ,;}T7 pulӌ[-28kr<ۻ@3}ɲH`nҳsC-铙b|Ё0x/98rژvTf*'pk]?9a:}^\I1b`] lNa{.':OU$vur`@ۗj6+f̘wlfwzVuFպ¤XPʗbvv!xYerZ^n[LíZ֥6ʼnQ5ESD;\[/7bݺu':szVgMDoh jN[Zv)E>qsvn΁ܙP;+7DB.l#@Ј 7%Ffp~}p9e?܂ Mћ%0 ._)v81w%(,m5vyO5b@ j>i0KPk.wA[yP9JWnyB) x @Ӧtis^u*b!HWKe&H`~LèYHHJY,eC*;W̛'`hkfe)S76 U' TՌiyRIIH`0PY$ uʚhqZ)>1شMT_=iGoRQ0M|bBmbeB6b`  ?VpB$$@$0&(3 LHx(1^p=fڷg:{xzힼc3mwjh>P?SPwiu,Ύ"֒ @gtkNxm #O7+6o\73 $*;sYqyeemg*w޲7ϝZPfy,K/^^8.J,HHHH i o٦+/gzLge]ۖ3>`$0(A^] N[h"%O" Y]/Qa @sXXC1)Jiy d ?vZZ nݺ]yP@]_\@AuhڨR[MƧA$0ZYs}BjH2/l$09 ֣-OEuuع]<ϋ KFp:ƞ{N#A$@$0^ Lw*%xzA$@$Xs84jq5fbN9u?uӟK/ģW_Wcg?>-cxiA)^{M@GdA(,{Ģ7(pzbp&ez>Zg⋢Vtvڏ2~BޒYބ``!$@$@$@$@qH i #&-L4 (`gggKK"6(.gc*ic~L}>.WR{o ;BmΝvX|SJ,, PmA :B0e 8@f͚% @EdS]vlaA$@$0 Lڗ+6 4 nf  t%y _KY:}hnY9§ω6qbJ1}pD41{$ āCB]liQqA8g@\/^aJ=V!2c^,3'_:+0).4v3 U^9ĴibBQU+ R\LI-])cHHHH`fH6c!Sْ<-z#f[rZ?sF;TYru3]E))vQf?cUhnqY4KyfyTJGEe\vWN16{q#WkֹEv\^*J,>$|]gw1~{s>\in;tJH3rKYd 2s+jHGHH` @ah $a<[=1ꙅH Q#umgDDMr#`e]2&٥Ls@3VZDZSe2ؐbnj*Ljy gv6["\X gS}p`wLŮ6PfWc\ BnWW:3S.)9TF=jWM-%k\^$x) $ċpI7>fX ?xP#0$LLK&I-(`h >px)F"p90!VV*]$ H.4$WPpFp qEXx"jp9qx8dm@=܃>l?7☄HH` L4@w\3j U:FWkujN֨TjVd$@Ev?D  !Z}Ƿ"!B<)#S |pȏ|_)N~Z44Ç ??A ğ,^~y\B $M "HH`4 X@1_S#.ǎfI 1k>,NYEIܿ'h2     IB&Il& !mYYYgkřZ,\l*DY:! b"'E* ļB6AȰ$@$@$@$@H "R `>`B09~`_9svWCZ,j]{ջg6}kj<8iCGw\M(l Hy{ ݞ07 P#pbeP\>sR[,:{jէt͝Z8ǽ|v%e-+fu̻"=X$  !/ %0[]~S}pffZPxPm{O HF@HP@7vneP>&%G rEQ JYI̝+rDXVJ$@$ӎy:AѪ\֭[wgh@t@dPqڟ{$>(޵3A8$@$0Q@y_JGđ.6&@ )"   HN  n`$@!C.liiQ h?(~Jug2 (Nݻ]?2`{ذAZP  ^9pF$0kR2(̿r+CHHJ %%%=CRr[cbA[]mSL.:J+N&&3+pm+"0    $0*eMP$@$0T*^o46jpi|xqw{jg,]gWХۮ^h_gv]^Xe_P5|RGKA$@$@$@$0 $p#Ƒ @RX_Y):;EMX-=歩G6^i߲iS7܃ol8-9x_ݘw£S}d@h$@$@$@$@@;=Aq$‚::0{mE pz% Fq O p ]2) -LBջ`R1= @40(0`Z"JR5鍆 hWTP80.X5 @DqܸL2tw8v"EEJ$K ui$Ŕ\h )$@$0r |"HC IP  !?a;ٳN'\{uAEU8o6' -|"H6ø7 tf! 
H,t:;RSM7z;[o^O>)6moEzbdi$@$@$@$@K`p y[@HH`ZXvԟΉsr1?TL.b¿[̚-JJ.s ?csIHHH`0$@$ti)+++?➛+*Źq§KWMD}谈GEJX\\_,{D^DӒHHHH`$l"( =7Xl04ٟupGSQ}Fuh۾.:?>no.lܮڵꑷ;.XIHH H  8@<$@C~ 0&]жԫW³>ou7>1ˌ kV./*u"%+SW,_5%=7[NIHF_Hę^7s ${JJ1e}՗uԫ m,^jEKayWd}ڌ.ϸۯI_uyKj635!b    $'@@w# A _C@8DQW)t>kЅF@p>@9$Jx^Q $p{kW6[WOp(3"ǐF! /AIHӄh; (͚|=\]NfʠUvtjk/ HF*0IUYfkdѣN[G _GP@GOZ3G&@8#Gd ;JL( &6xQ+&EYeA<#LhmfIG(mTFjeƍƔ˲dV<߰AJ*R "`VIwgu8.G w;pHU[\\gvH ֓*x:;;[ZZZEB80 33R }kʌR t> 7SPW/TehƋ r:9Zh(>$*(14BCu|ϛ'cdDv(ЙaTg=[,9SKHLFA$Sȃ˖I%H5 ;RX @RݦGk5&HH`2d#= &8bۓ@F7$C$Tl-15 fyKѕ&o'7mvm=v7dYtj߬)橙4wf^*vh}lhD\f3hsOIl;WcP{;f3|SRNUWw9У{nTs^/Gk5Y;[%FIWԓfWn6[}nl]6COSLddj4lTY鎲1ŚcޢTsaè ׌ROO3StRڮƮTov9-͝mjigpSj-ILqOwhn7`eժ&dtt;?ݵ`Z Cڡq6tTL3mN1xTk;#|ᬽ޷ۨr 8ԛ*lS[-ij{^7WcV9SH9+fj5MB #c*뱹:DclqV߷@p#@ԘFoPT, M3Jsm˙+n\L@$0r{qyik=6p@nw@W*T< ƙwdb1LlӪVBgJ'$c-Z|Xn4:;}Y*x Ydl=TY#lӥ*]]a܅\g0,f"]>AM/%uI9SLXEþQI}Z'YЅ._g;s˅Z4{Gu'ietV>Isʧ׫v_yj_Lx!]޹Xآ*Wtua5`\.G6Z,DEP3U.)w0;2_svn;ѐ`_jx'>^bELh#O& Vih `"lpM-V;r[ ¥v q[*.yJy T^hH$@Q^|t(=KV.7ʌ8Q^I(A %> u.Vg- *6 Xp0d o.&}$אz4ȀPM,+ |< ȫVˆ#/J!.P W: N;!D6\??dNnk.n? RM XhHaY$/:&xd:  7x*FX-~ZYsĒEc)'& #mh8o-^ѫܹl:#LZ F'Q%[Wu@"jCY LJZmHKi&b:jg fMF4]ՕqaO*A$@$0bT:4%65W[*4  #\0vȚI ` wwF>52:]TTb|߳X҃  KŬYr?   u[AxuUak `t8a `HM7HH?0Low} JK:H 9yAoZ&    W^9!+C{nxj5%F IHHHH ^+G;  ocv @$B ޻HHHH >#߻H@|7npę^7s F 5+"      !Pw>Լwwv{ &BIHHHHHH`.f̹2$N@$L`8HHH` @Rs vkwgBM\0Il& ';m.ֻ[p<      (gڻ>Ax5$j C$w<   $+ d#pQ+@xnq9ZX~ȒIHH&u֝Fބ 5:FWkjFG%A$0ZnjPJ*H M/q$@$@$@s{|'SCj nw} d       H"pI$2E!      :Mq9Y; @88o^&#   x( E$<sC$0HHHHHHH`$<JB$A$@$@$@%|$"hw~M}4       PhVęBaH`j   @o\$@cK #ZgIHHHHHHFnNB `d      U޵b > X; 곕 #ՋH >Z>8 M3Js}Uֵm9c Nr- HK|u=nl~} LN֭;kuz`Sp4ZZU5*>* "pZ7 ӓ $0@2 $@z哉B $5G^$@cK`ωQ7 ę? -ڃ L^r#cl_X; L6C$0HHHHHHH`0jY %Mu \2M>>\       Knl!v ܵ=qLH$@$@$@x i.[Dh$@$@$@$@$@$@$@#A?Yfl^H~#Xa$@$@$@@OP@#$0 ??phH `2? 
v      [d*h0}IHHHHHH ab#&|IbHs @a $'ӳt^:c#?1ܻ>|-pg @_9"S|l<$@$@$@$@$@$@$@ _DN$@$@$@$@$@$@$be @$jSg7%!P    ]]H I d@.';^E0$@C$0 B@H  ōCF $@$@$@$@$@$@$0 $ @$ALIHH`\+ko$0H `u3 )ڧ>(k"ez{6bAi%  L]QMd3X @GHH`־p+_'ĝ)^oѠxu>} k3B|;$9D\ >ٽ"[rUu0 $'0 @1_AHBq2 @u/f1_ַV]a#NFa*eiK4Z  "w- d#0p@& C$03 &SڿIAfG嚂 V 6U̎&󾶮ZПjU+boxoH3[OE_eGu@@D6Y>T." G"M82 XX>"&=\ $?7 L   qK d]z }[+ʊ+/>ק?n~96*Nkv/g@Y-{:f&@+Ѿ$ęǕ= ?)MNS lnp’Pt}TYԾ )ZC"IJ5R>U~ڰ/hiy$"! *{$rQ,vbtalim)HHMGHH`>-y_>EU9~@U/P̯߰Kmϯ^nغq`^EyTD 3Wn@*譱lg[ȭVeB5Gz/V +QukdfkP*063hHdiC)k"qoUU N(Zc$޾Z"FlCaeCGS8: !@+JJ$@$0bz}yUb[րZ]uF^Qc~%:Fiav)R0X\o9| DпAaݴY-.""㍐wJn)H -R!뱾!h|(e[i@∬8bF1KB#v`  K'hH]&9KD"ȵ!a@am{^w+D ?ퟢ|(BeoUkzE-b̐/B!oc#F2G'!<7gk1xw LA9?<~{W`uG!+exw7/[.s;3zf: ^䦰CEB(\qf!UJa`q~ d|Ed]="&ƈ[9(h=;XՉX PvIH]/044wP*ڶ'{i\L@$0r|^zqy.˞vׯY2 @fW>oʿ~oM6r|ߦJ^y|HyA #P^ E\b޼'xhJE7W*5cnO\)~Pp11E<#1bVğ/ kk 7(DRXE8v\c^"P*P` bXn]\k0iF!huFVQQ |x fWpD\0 ޵S) `@hӑ" : e_yÉ_=0 Ce5@1UDr&&B/ @DZ +07@$l"( {ȱA$@ 0k[fN LH /dxR  !BǣkWP,66G X `,n  H>%/GL|Dvz5 &@wGN /0ٻ 8! w&'G1',ˆ&h`HHHHHHH cDHHHHHHH&<dIHHHH&G^:ճkQl ɋQfrVn^asz'gjL qT2ST  H~9ݱpBSKt$9-onEIqQTTʤG89naTR7v,m1T_~ZWosK.n;CI:: O6"  H /M~+6 :PRU5$hzW|s~vri5VsRyBXbbŒ!5-Q>|ɚUQœ9S̛dnY 9QUΔA5$T 6qb obna%Nܭ˄C%9}#|*ٻo+w \l  &NK?T#tf i.-O.~h5ut LK$@$@$0"`ۿkN62|y!UѧlkY1\3s)y!5-! 
j{#G^<|jJ~leתof1%3](S}/_"tBhCט6h/\ a%<'.pYIca6lԳ ,8Z޿ uu4[.>+&n>sz֬,QSSsPzxwڻ С'߇:f#  HtCOnpB濿Oy=CGcL~5äWiM\J[wܶ:v%$A+[- M& aZ%4jڰSMwܸBRcϡ8KG^xgo]||B瘮R +.TqV<)KH~ϝڶ$:㨫Dnn8>k榔 ڡ6h}QG$oM7MO@I <Կd¬.[n5Nߺ%V {s:5e$@$@$@q>ko>v:ﯾħSݶ6\iptA2)9{7<+~w>-|_$ͨ9pAɅ,*+Ζ;R}m,Rݙ7^¶ FQ U{>xU:frɧU|X 6h/ j&}nn7K$ip9W.Qrdڻ$ P >OZShx%v9.Vwr\oZv웇 6Qc)D^e0c>&cAg0ݘ% PC[}K8./|$AxziHHHFV詷 ?k/(3?.ѿRjs+}ߞmHI+~tYS\z!=~Cx?xdE߼_'qa8R?BEJve;-6__ Υ |bȳ[x1c_į@+)<ˤݾ{~VGt:ZկbQ a{OҲNxO+Vc[.;MM3˦ؽ9/oڳbNXӖ}B=T)*aՆC5}C_~_#J [#{OčEu+}HѴ$=sWծFmZϟ?fffUu]tuyLxE[:%DMgb aqtxcHHHF>?vݎS̾}{MO~~bܻ_O!&[o WG_orտկ?y eHE7q&xɥ `+F; Z /%%9Fg4 J& #]>]yݓc>~#gd?kޫw.2#溻{ccH!NzA-2W?{{ljʈ__UO(@+!Yy# "NjTx+b.s a z S9RľGCG(m$=֎c/}3Yjղe˞~iw*q=|伇9[4{* A'1Z@JO>_Zߟ^RYY}O֩Z[*4{)G |_b_6>DwP^N<`‡DĻ/KXw:\'&m\һ M/*y^Tާ f'  H8مn뺻v;P~搽%ʃۮyG}$3SkfM/RrY&*oƞ*%בWF[wg2\J6]zٲhuWG厃'Hd4%Sdÿw?p FiY}?@]p Vɥ^cש__}g=-g*ouQi(0|/#X+'UcBʹ~ӫ7=+xXND5|/Z@J ,8QJg1W8b10a8q1ֲ`2qbQ[p`-0)9h =ȱͩy6|ܘnzuF]\{?8Jj=zbkϬڪf.;k /8W8|O8WNsDȈ_ƻ/L=}QvP9C~S6n5>_q\ 4).JjQCxD+98/'!*~x {amw.ù]"ɭ[bO9_ީjɔ=}FjtmX7oi'JXH8#^ )X5FC11<>d'JhH|+'ъŢ5EiF#CC_ hҪ{Ɉ,sn6ퟖ/-?q+ZU}b lԨ*RQ}H8`~7}_ >x~Q>j ^xx9S2F/!H^uxJA?1bX%Nn)`stЬ9YY hl6ۑܲ?_vxFqvG& Aɐ)1,#1| tH)W;;!%ea?&W^"?WR~$!$@$@$@#D@&99`YjR|i'CQok|X QJh J2Fȅ◾ԡ֮]++ӕw|sx@o=_8w' .bx+JsE;$ ?A*1b5Sl6gz&͠=qV|{灭oޱ۸e';;]zQ;G_o=Kr\(/Ks\u%6dGMfTLxߙڱn7xP8ӒHXҪ-ڱ/)?m}w%-2.JH,JhC$ $#8r=\ΐ* :bh%c Xo]ˀH7pоTeؕCJ4 - !i~{￶Du,x8)ǫ' Ϝ|_[ ޹o}ܐr%f}|/qK_vCJf$_ jH%IHHB!|Oݺd@'Ϝݧ{:{F 3y5YKevYRΟ7eV?FGw?K8M>~AXcє9Yӗ m}o($W?2glf޻?@b b m}/}ru^^vzzZj m&f mFhkWgcϾ}KfT7uʙ?OKG;]vW?s;.f'&aՁAL[{ _ڱ%&K'5_1JxH=] `D%J|CV4fl]Sw.Sc9) Jc_>[Xe]y OD5 :CIHH`@~㞛n:hݙp@ _M(Ӹ_)3g|Ph +'16Ӷ+n;1ϭ:'F_x!}+u&Gy,(xeޒW/LOOM5Z5,jxoy׶{H >q㝫潼W>}+y(ӋpS2*j׾p0c؄UjS_HV}q(p˿t cp7m,6p d~gZf<#1e% W1َlw_~Ͽqky"ZxFW0 };>ok9\O[0u)z\Qx{+L:7,.n멥WaR<1Rϯݕ~;Xuu)c犆nx~e%)ZҌLx{בJּoQnFNH ݞ.6ses<],cAY&j7-89pj E]%V$@rI:hވdny+Z2XrB2; @0LCnw>pg+qګ5+/RTXeK2ROQVpH F}DՉ}eV`z.<΅3J- ԕpzrrC9) ªimeYSFyXn$7*ͣ($J8RZ.EB%  !wݽj{?]l- .n)Ͽ('}kK\ӊԴD%N3ifToRw%g畗L)/ɯm4[DU7 6a 4Ԁ9Q|͢ G&K%8>NN-(5 $)4Cxڀ=fU>hЄfom||#Нq0 
4V?V+MoјҌTzUjF;HާEVZoKe\/! ;^)6ޝ^/v ý_?;ħX\֔0Z5fH'#.ȉ>%G"ƕed!l{b #"~X#7|hF|Qn1Gm)zq<& :leDmB"{)1}ch9{;R8{ .g%:;RE g`'ǔ|qRط2-|ORpuUhoOV,]S0@o0q0Jw;W{P?:3c 4Ѯ+vAoW@(hW {U3VmI _P Tzz,a'*SB+/,/$ъ?F _|P^q_аF\ԚB@9C2 D|qYG#A[{ǟrC٣zHcf_>=ңO{r*hwWbEP~T@8o7SK6 x`!Yט蹆i^W_`|+ i&D7qE}DnfxRIr|*Ϲ'פ45*}`OkJ@}I1!8RIf)/,XH)ܡy~r1` a?Е}BjS^/BczuDj;L/IENDB`Dd '0  # Abn3Bܭn=ZmJ+8nB3Bܭn=ZmPNG  IHDRP @osRGBIDATx^} $E7= 8 ܈5( * \PDX9[A`YWaaȌʪ5=2##kdƍ?nԧ>CAO`AL, ~J>###* `?67JPrZT.eddc:a44%ֲLbӮdp-:DDLĺhn#?bp^iNBmZu4N!FnBRZzv_~{A3o6{]J95kGY3qϫ8RʀN Tڝ^H\$hJAC:sK閯4z*;6d˗K,=_˿p vmK9N/3w5-'\v_G38gszʡԺʩe Pԯi]N"IMQ3,ЫyI=sv-,B#;EMfַ̘K/K߽zq2+K;,w<ٳg+ֿk/ӟSg@]HQ6wy}#M;@4pqQY[玭m) P e`}68# R/yK_"𝏌4dΝ;;gqOz;@& d3oUo=JTbZÎ^t豍f=9|k뭷~>~~G7MgΜy;7R~̕㈿Ot0˜:Ce:_Xd.;?z!p<iA\sKNևt :8g0FQ5F *Q %{jȆ?@Po|4_JIq3Zgd'M88tCª Fi47d}<ϡS'+Z"5s" 6&'c 'I@pDA'xEDdJkӋ쳂#4R9`mJf6O:h&!y픁"zs/Gqua'0gΜo׼ew'TG?inci'ٸQ0όgos?te_կ~Bv۵kR?7v^vb43j9܄6a'sCa6*Vͩ{=4'_LUΠgIkqo|w {^yp^?O}͉o?y ӏ*?9z%@uA*ns2zgsS>_U`} N?묳8|Of-/{N<[="G~yMcHN/&T/;<5+ڿբ:g%"˯" ֑doC|LPy6 ґt™aE9ϳfC;"ѡn7'9Ůt ߐQz✘NLV͈҇pg6uv SNy@Ep :TRX|+^σGW7ݶϐˎ~/啯80Z߻8Goa#&E~=!P|ؾy Egʼe|OįmЯΈM4k_Gևm.䒑E?O_^ߝ]`ҟxaD yxz";8k:D:Gsl\|VAɌbϡf)73Ո}yc}g]~/d;84Tb+PQit ےMID&AMfpm$ibqEʼnlI?- '~. #éIGϽS>{OtK-XGt#|fuu`{Ͼx'}>y>ꌾBlg+E{/!(f)S6IԩSO2m\Ǒ21PBa6'əf7'ŷ0$v>D|$Q?Q5w✓.fPuG=%!Q ~dl-)p]Dw u9=."!؏S8z)n v.g(tt ZtL3 .gݺuWsܸ>|9 h>pכa~8q~>$|='\XNJ}6hT;ߢc,Vj~횱ukp ep'#L $-Rę=7AAʇ!W eHV+w𪷼{qXy*lםO$reyL)|"Qp!lSym^;B';x6l5u6g1 zS]8?* )~S!'gKH_n\Uj-bQ'VH)H8qҾDiR':rJjWdzEN_99y$JG yxІ03aR*@LXShc*Uծ '8[Ԙfb6މRj#M#!FK#ȁwJ8 ppxDqw`oy=:L< )t6`}E;w֍w?)eݺ#6g<^fC o\r_ Ԁc>"3;tMxq???8 3GgM*^!`S2H;~q&#s|(@N I2Gؽ|\I`~ߣ|y/)?USxTt^Cp|*8'v/ 2[^ţ!c:MV:HPH$?w̛5r1']Ԏ~utp^ ;ClZ]/<|F'm2'i~HkR)]5gg,Ÿo;/ZOޫN߰awZY&M{gz?_v˝t^|MO wh[n+wl"{,u) + 7b+W;l`|J|r~y}Uwr$8INdNש`,蜄64"TZDk+]Qzʙצ|S?3ݠ`,r$Ht]<8$*O|\t DbKkw>l/$ K @6Hg7, n Aok26n׾ L{>zi7x~?nXD%+V$p9:7uȘP|?{<@s )];aoI " `'#*&Aúpb[ȍo/. 
T$>%F?3 &.@5uFbWd24CKy%wԐe [-en/J1$D5dgG'z dM}H߿>ѳOσάi]&׬z ϔ)L1uҬ#3M1 'Ϝ>:s3FϜ4mȴag HQs̭>r)fp{?7t8:i" %jt̑ ɺљӠiSԴsCV9SNX}?h_Oo/xҥswCc7z'x1 G??я]!CG?Qڇ./2 9pꫯC@yHWUpN^`"' 7toa{WT CTHX <|''ร鼦˵UjF @`fu\B^[LEppk1#QLBb%VwtJ9p5sj|2FC/b? h*cMO-30CP&`ϼPIç;r$Y1t@-DCbG>3d?acCYHM97?rHMWґL#̤9.z^C9- !8hOi~0ڪwݭkoЄ} {Ǡ\z( O=W|߂WJIXjg};%:4vǩc=%ש6[NX:;x٧t'ӈghH40X}p,P3TMfn6'Ӑ!DJ:NykGH,SCуo;8&4&>@s>k iamr5S Q% vʢmg i+îhݶ;Qq x9ɈJ-oy.J珍Kz"xlOh܎q !$SvҌ]j8RPZ*+߽S3S | X!Ç4!R?0\{:=v =y4؁I ,~ƦrEs>ת3_Թs'ϛ;: F,??e 6Λ6o3oԹƩL`gϜ` 9f>gPW$+WBW^ q6'K>l<GaU7'7'aaVs ؇(рmx!; 9.5p ?L2 'Co?VW\3Ax7_G)fv00=9/| P4>ͺ;utKZ)Ց\շ&Nn=Nԍ(=6v8n0T!%ښ FO Po _=WF ^Z_Έ ,1>?YYuq:2 :W?!Uf lP3dNh%1x[?{ӦOF;lQH6NO6}޴9sf̝=ui sN>S̛2g9s͞=%ӑ9P~BY~RŃP>o sƂr/g#/`xdɒ%p2oV( `{>KwyQGa;y)pPM^{-,S~H`kXu3@@Ðsłp=ʧP'sXDxm0@"Ù{v?cH8o$ d`6t!WO|~r#<sٛn'g% p @8spEo~t{/L&N8ᄭ ?o}e+oUjYVOa.xTu ׎(`Ox~ WoP1>2e@ ʬV|G+F27η^/ޮɞNxSxigzo̿Wwk;T!Y eB#s[a0¨Y؉lFR6AQo޵:isnAnCA؉}'3X̹+G[܃;2^pq/CXٰ̲$Pyܴ{cj/3DzvP)÷q(5Fc$ %!0iTySc{I 7mfg Q]^ױe>b US3@j}FCt }Sb5tYw*Q#s:+1ɶ}Il;so]af>Ul g j؍Y6AC;}f1Qb/^ W^N>M60U\@IWԿ?~ϝ:` ݏՅp0E7V}d߫`{u<Ҕi֬/e_ m z #:794"#id\gO,½}{p= DѯsG{Y*;dYʖSxZ"*Ջ8Y5(| 60i! i~q,JFB.}jnɘ<W֚ka >wc}=deoP>DPZdmiM,ȞAf8-4W@do ™GY`=vM0AjY4x;+` 4h ӓzqu%6PUYV [s˗ͬ,隭I3.K?E ʪ0!v:*,k뻓Ԧƾ|@8՚m;s5a\޴]6Oo-StI a,AՓ!M  TlDcKº!l ߶ :vMv2*6N̐PGLG0\)q~ Q * 74jgbpBXC+Cuo:Ο} Bm|v"/xN! 
4`b6_a9vaǿy9_ܨ@]>fU!3I2ة[񛼒uk7Ҏ>lOƯcLs^YGH h3WΛm{TXf:l$ENj/P6 +]xJÛ2_j7`ҡ=o0lSn̗ جvGD**9飆Lf&zGxFYیdm`M.1*[׹j F1bA3!t8Q9sD14E 2:!`"8ngl s`\f/NuE(^74vO]M 3qzr- U;r/f}ȁkLF1Mji3u㛩)껦lЎ=Q-c} L  q ~cZYVM쏖i xCW7|`զc|`A8l_ͼtj`Ëz%>So; kמm\4i"[8z;6Ȇ\2)w<$z&4#MVqװ1Sv1ZƱ CZB$v5QZP)zRh΀hm OaϽ6;{`f!p!}E(H*PCu=bçF-zn#2ӌ0 BfgθnsT8z^"fe=&%Il4K= ۚ؃>۱SFtM5c2/C.!W,҈_΃CD *+tȣ UJ38" pR>g&d;^d<[AFFեy1nT똆#8Lt Ph) ec 7PGcאL:M؂:ʼn]snbe=USY=FOOsT~s6P٥י}y_ j>hx '>`by< sLܥ;(ѽJp4u*YէdL1Q26BH;A7|9Uk|͌ ,nldL BVu#Ȃ-C #x>nsp6BS޺b:p ر0& ~*7,scŰ fp>nH過kt!3iM룎I)1ܲ>;=Xl17of+ڍ= ^ ={⛎qT5#|4 lƒPLRq3pMZBG bnUQWnW1z0P:7f6،?4 OlM'tI-0-vCkMv9Wh?V, W0.!.nd2 [Bֱ᝝2 n4a8 c$7}Zla(O O}|75M @፟g.mSY4ÚE= Oi{sj6 *:2|s)KTnzhNQu[h/{x٭Q+y̲kyq``o3ˍdո:AYsgc4'}L*p;yrg?fb}x I:*$C3t~q`gZy:*@Ҝw<@ j]~~׆uL/bq,*S:]5>t⫮'צ:$̸;c:Z9crؘb](՞|lĶj^./44l80b4xRٌwx,7BkZy8f|LGBX (aA _1b`}alËEx\19uq]3 + ]ڔ##vu36$+ʚ߅EF0o.7uŧ[(LkDlx+7DE *oy[VX.]r  @{Xl)=tA(* @C7A@ZazѢE\5VZuW~'Kֿ~}$~_RJA[~c=v…eyd?0Xw @Yg@Zk֬)[P`HF0 ,/.$ 91 -'w%%A@B> 4Xۚr~kEA@$D Oy֯|>0ߣl$+ Ei)$E@dmoBؠ- c &Rs4;:)~HOyc:j ")K|醮t@oʫMyD4esH] ~t:u6ٍ-o~w z w2)Bj_~&KstR֡&$vu2!J`D+RHeW(\2# :4O?q~"O̼S#E.l:Zm)W(3 ~[hm²2.P6?lD'sjXj)"C bxifc7ا pyXʑL)҅qEò ;tN|N$O7TN^JO{ q#vRx0 )Lct?/d@`6eI8(9)<u]m]PBa![xԗ~a\ا<<=ŔSy|v+(ǭ4InMQx0%XI`bJY%  10yy?ֿ;C91 w gkdT{Hڃ Mޯ?cӜJ-XUN ~8(E)ClͳqQLqĦRd~7 [_~7}WW;zy5Ca A`2O$Kh'IJV87aWh'A@ڃ@hփz{SEbS"cf'b'A@jAؤ([F/^|w\T`ɒ%oy[VXQK._t)A  9%y}x;f͚?/ AzֳV^O}3Ylg>npo<a!1A6#p%L>?~ln=7kמxy0`}:7Yb67&#pE=#ׯtCԩSὃ'tRP?rK^p_pe/+  ]"YE1֧ vq`?qpBۥoIqA@!B>^ַ~5mk)GA@SzP; ՊA@A@@zCWpxN8oB-zǍ#ONA@A VvRaK/~Gru}կ~2vAXBMq   ?]~y}\͋؇ _1GA@hfU_nf_跭DA@A{I}/~A@AS>*'IA@jA/`>!B,BA@ 3<%M#j 5#@>E_>5 A@aG8ԣS>qO?`aoOA@wקhPg%e|5 >O2@IyZ(! 01=/@v`Yo[9ynBUA@ oX_vﻷq᱂)\tǕAoZ/ 8 ɓ;Zr^4X{?,9>/;)"mxʜ._*Oќ"'oE ^Vg/@_bWMy}=Aʿ v4ar`4('OB{NiS|`=&fVF! ~>dz{Sqd䆩g*d!)Xa<('jK)Lg2gDO+e2ɬV*Q%&@ 9~vޙgpنX?L))Nf^Wv9(?jº:R /+  Pܫ@"kvsm=0%\(=6rmOӈ& 0aq&ew:<DT1ee0 'j @;pX4wD_c3wҒT}bܯ>=VLA@N^ǃ`! 
/:  Dyod bc=vŊf\|ˏ梁}+n)GݤjA@;o J%-[}+Y>wmQ1#_zC(b >k׮]r)SMFqXW5kN?ta>W\ [so˄7S !U!"VA@h΃z@1)=iGEƅz%'JêXD  Q~9x^ӷq8PwA@!?{)#iY9w:_q9Rz$We%4gHA@%jl֛:ff`OKkIt' s@xE/z'?I!OYB0It9sWYf$:~ |KWb8gٝjA)Dž`܏y<8<1 @^:x{ܿ>?/7vMGaνx+&r/w9w5CL$xHE ccc]v>x|ҥK8=/}xՉjp6Qh|5T";K+`7>h-l T,,s$ F`O! GX֯_߸A@*|N|7ʤ~4̯Dyu4p=O̓ﰻpĨف޴""@#b}[i;CD_bŊG+c_4JG1_xwh aJ5d +bA7z{k_,;#_#^=tx}^=1Oc.j?Jv9}),u5Ւr_~(A@* rʅ "Ml0~Q5O=O>cO\П/ؾ H sr7wQh.eKRC(~-?$AAԽ"A 0׮]VoO?SDI kS?;eߗWtra)'UgtR\4K7mm:xBf9yhCqdWьEo8|W佛'W$mMKgbn)i̳EriB/=DA@R3|)Dx??v+R^\Oun`{>")+ {5AU ݐɗZ~~?n>!}C bA@#/R{իW?k J;p<]YV!+{W9g[*Ӫ gP%MBvVh|$Zܨy+ t0'2:+F/k"@ҺK lƹ'`K' tjIfCA@`ï .WׅA@Q}ڌ͝;M6Yp&sV.禭6WS =wcEA@ .=w9O~֊3fLLq<޶'7b v"M"p>]-Jk<'Yx-/} /ib ǀ ._uU?OW\6jg6=P46O)r饗!{";R :ß6f+|G#9jq`(IA@h?k׮?7Qwvؾ9266{O>ϽZv駟~yߴapٲesε^#3|Cn<03wey %R?o~Ww`(D|L CD8ϙR N9E@ @?::6;B @[ 翤?YրY1dM>}>v!_s5pF&_ic'o>Wqwuז[=맄‚yXYϫE8'~ 6̙3gܹl(ǟxGh"V+{`ՙ}/X:X/~ae&M~[l&RZuݳ~f&z; ; <\" AgԾgz ,XfMϬG}3> >]_=f=,̜9.H?XWʸPO~N?K9^5 F#Bkׯ_pBnZ B!|L.}ޔ)SN 7k"ڥ| rB/)ߩ:B%  KºKgϽz6۔UG_Bi!sfwQxkCrȞXm8S".ԃNRx"MZ\r,ywjg͛I2I%UɑXÀab AHP9dO)8=\&aQspI6A@u֭]n͚?c=#>#O>TY%ohFkI,'8?[ )yCv_T)ZIA@ڀ ,op9m Cv+< )B7֧P>ou=XW傉#IJM((Cpnni,<h攆p+)ut~O{Yr^`/A@R\ uj؇{ 9a7Ϡ\2Wuj/elbBA^E7P|f-P9d\% "axjis=PRߐ7 )XyHT_$S">0ag=Є vXE~6 m}cqLn|LЛ1C吽rD$ QnnMW@"|SOҐiZO.}"~<%XSz>De"^)Azu9ij25-YΙ~LKr?+?1QqN7Og S",ȧiGw%w"T no;h?ᨓey) XW0O^b?w3gg~]rBv^Kb Wj˛3 gԛ.$%ga7HR9OVJ D6,LqQߝ!.%2)lp)4ȊxQ{^9?'DMF^s*u_/RCv_f傥ԓ̂ ~R¡:zQZ _;Mq~"7%ۼaO'oT'~X0F Eq_XgSO{L7Uٱ`w7*lڃ11K-ȣ2|sBq0e8p Y GTy]HÈn߯Eu׮O'ZeKWKh˪c֎ύ #lV= "@0{;<`rŋo}s /N~߿b .[o{ֳ5vv<`}/,uꩧ'/?>/)r饗Mh56?#oZ**" &DRjqLvV%"tڵk?Ͽ oذa S ?>mX6Sn;jQ~y睗W9oglzQG5&ݘ=7|N]lYguW`u '#aH}g~S9~n, C_m?w;)/<=4p*yx屸S/ O\Jo8" AsU?>} 'jW:Csq _?:N >yߜ>)7X~0Gې\dV_;G`zL?q0K<ֿ[`}-`7 ^Puꪫd)=4Gp~}'QH߳M}Y /@yK_zKۿ}+_ʽګ7*I-qk<+O_*[i}rN%rw 1_V[~@:֑ o#P C C41AA.|Nw֤nҭ'H_NAA97WD\9d\p!,  p}aT5穈. 
RY.$ׂ)5;jum<RƼ %"Z'wYkwxֳHUڇCBGyHP,JŻ@B;nmI'HjC:lk#yxj Sh5Ͽ֛gC$2T0~(^z2&UB &U1L7xf\FwUV"@4~ ?24?OC9q@6 O@ĨrR~Z g8tr5r/<eq>5EҡETv8\\leOYFatm EIARBmJ]󟁛,6/$pPi)m!3T(S  8և@I;=' nK5ly T E 8,ǯKK6R,QJT0s\B^(Y tF\8wRpRUQtĂ9~$UϠn5vi*#@12Ojuyb˪\ ;H7 L3 ȃ%8U;W~^?ox Q&2LwfDxsPUQjpR ;aR-$.) csH" Ow1P5=~yJP_$#m,[6_ioKx gU"Ԫ6)i K<}y? s2bO 9+ C "WPҰB}JeS>nT0歎07I2N+>UMXY p0ZPYʓ% @QX'ʍdv9@JKv[ L䙝uN>NΣIgte5AiqQ-"0y9bul o5^T)EK9{tD$ c87{qǦIOXYAz 'P $9)6yR ܥJE hԧr Fc+HK\dӆ;!noiԀ*!8Ʈ?Oh潹'5< AY&Z.%eA,uK(hN[zqN{)3BJ uWd툒IoZ'uܫpP|*P9 !_ >j0R%OPZ$^J`0s^Kw_W(ل"S h7)H]e5Bh*Hp5>; *Z*A{ ;ZA j2C#g1|0+6 2tämJf2d``J*J\ x4qYx6(*]>UH;ybH-jKe:U||(˯5S 5"Wh=/_#≑_c*h?"Ku񳶼jD&ȬqQ  fsɓNXo9eՈ(5.֞aeH 6 Ѝb:ܨKD +5_?0U+%JnJcˤAUSXɃ?`e}ݸ`u^~-B:ԫF2*-vaEX/*TM7*ZߗR@koi}+X/ 3ՋHu)`%qy4A\cMV\z*% c _2 "XAǨy`(hA@ `ST{_AD @k{˱Ez) a970}^Z! BKܛv_'?Yjؐn=fl+X"dTŋtMoF?z+V8f_pe'/?G?~3`ǡeM7?<O9K/p/xWy;Һ+Xi:)ružSWm]Xd<ν}~6艄!A@~%!'f3o+qM޾TZU dK5 0p8ZmQL@닰 /mnK悀 LZ]."{~|ܡ5X ;{!~Hݽ'd@,X`PFbAi)S @b}E /=Ut6NKDB7L^44f{? b 0W1ܿ nIn ߼wAn+/@-BL46M+oQwo8T`pq>7p5J~Av<]!~0(t :JS*kVyPx>*RʗW-G}  GEڽ6Ei|6p.7ݮ}J3o`P$BMNa4*^NZ'=u>o'jc|@$ @hQQ "{HX@~XGr00L&CkkN7PHYA`"##9!~LٓsWN'xGէti*Hw(ԤzБئKNA@ -K\>9jTbBciy0g3TT50h3kyf&NrWO/Z2IQu_x, }G BpШx"c}п_~]~rxBY$9{4:VpA@HG }=ENnas9֏,D@﯇!|yS`s՘TFTVe'h ը `i)UKA@h!],A-S1bpT"tX)?銑`E\`)MH *PEt lDDl *:XGwj~KXA@1n-%֯rJ_At_HHn3Q@&ZtWwhDIA@&m<}4ak#U{_` #FުHX% !j  ;}^PQu@ڑPT*_Ұ~Xt!j obEOA/ W-茕q^CGFD' LIi ڇ!SPPZ&ogxGVdT @- Gs:nP9y  _R-ys,NJ<}|8J*ׁQJԷ˗1퍴caӓ܄#.@ˣXOCǧѐ}rJ8B˭| CypYş vhRHSc^Q>_NT0p^ L^euo5dD2KE.1X֙Rș ~%Eڀ@V> yp@ڀ)ЯX}pp[?)4A3i"Ro]JK4/&P+̣%E _#wP^{b}d$ 8ui $4nllk &*jiGRR M ЇXv3胙 `NP`$Ʃ@d>OE])' 49,Wixk&VYnT C2O`OE DIBJb,BZ4C{'(8\vj]X<6}^W yze/ >q_X^okc-@K@% !j~k^vYJ'fK9eA"]ZUMZJ -"Ba]Bbт /Yr '"A@~K "ZA<u\:!]>'Np+!~9$ #彠l. 
u>ͯV)?~B<9fH;R=~UOkc͏]m?C3Hlʟ (MY M9=Pp^W3w a&m!نYnhR /M7t!XP_E z*r?+κB^N38hjm\OR^i^D.674%!t9ql p!5 A~/Whyg˧3Oژ\EIndYpr:};yzR)AG;3Mʫ"@$ׁH`"B%z35B$ð"PO/g:c?"'*J^۷Y'Y/.bإŠ*ЗJԙ7TJɁY(D c} e{gy;ZaD sKXiPf*`[H9@䙜cC]2RJwn/OwgS6KNQiM":Ҫ6K]f~-BTjD_1mX??KqO Ip}eyD+8 Z>DDmӗITJ.դV'!1<~{l^_̯mc#TF@X2t;4,VX!'#hdwXrF"gup͛Tցk3>h ϳrg~`Qr|<~YrC~XAE @oq7hNY h Dӕ6!h i)aXApE @yc\Bbd$ @/oeWD  G@XA݂5R57O$z`5D +{N`ϔOһ$83<V,Y"ߐ.VXvH3ecի? ˆDyT;ØVk7-hTu!Eu^A3n\T^qі<N ѐDgY},hT2Ke N;izt%x+X?}Q \/!`!˦KݴcT\ZHz )PA.T3+-eNfҨ抣7:B'mKzoذA(9o]~VBUAq% fчi0r0GghHǝ1XKPB"T>^BZ!5Ҕ)Ps`6jiHmY[0 I\5Vsr.6Oқ%#ZаPhd# _*?Azyz?)1l^~)_"켘&(E#2 .MѤ>;O[X0Ps mOZqjyl@契hj.|^+9{~=OT.ؠ1"ZjB eEj>TPCj2AdR>]`IdO5)TW丨~3zܬ=0-fM:yAKYx7|S'?!o|wm7a0…' Lz{G,_nXo3.L@> P ;_~9s|^93hC4Oj#j ~Rn8؁ m ~FӧϚ5 !5cs,1i໅fm_^![qή )-SVai0(*=SozӛpdPt=+ ^wb}?`msE4| >R:g)cufl/:v͟/vS/y5W4χ]2?։k_܃kR>:|ԏ;<͵[5g}6}̙‰>x=8蠃[af5yO?nzxeEs̛7O]ϾΡKn?DeRQ`Fj֬5i!M_#<V+;0!r 8^גwܯ?0~AruA馛 y}9Ru;rKeoh6-Z.l@rmS>-b}]t˛nY=@t) X.ZnDE'[vVΙ3فɜRن18 e_wi-Q6aqz[Mz>+ ]mo{,>\/[K(g뭷;wW ({?}d6G?6o߹k9Ӧ̀9dO/"98!) *ʇ}\gT>p߹}AJOos8): u!wk§"#÷ `:'aO;7ʝs9 0AZFν8Gmz Y<@Awz{l}A`X/UgV* l>iBA@A@vݯ_myҶ\NoZA@"0Y/> |/R  K O]{gR @b}|UY0hKy'WڝR_G-Xmk/ 0tA3<ƅt?C) : RJN epYr8P{d!e6P,/  4kwqP~M%_#"Jڀg;m2{֬6SO^}׷_x%̚9裏^hQ-AȪUO?}}X+[v@Q空tjC mp<om`q~=؅  <ۃ/)f])#i8" 0kaZ0 2ׯΠ4V`3;S,G_:)(p#m)Ɖuϟb /dA`!01)iֱ /kt?.& 0pL(wgYq8A@foHag߿Ǧ-MVlE /hA)맼̏RcA@}bp;8lgϞ QǭB馛 r~ fOY~RDA ?7z:_C0ܿrJ*?oBhࡔ( Q`ZܨS>#[mUw9`=ŋ|~ʃ:Nxŗar- Ҧb 0]gΜ14P<~w-?.uQ{o|k׮=}s9kCn&vab [IKZ!Dw?[_tE_U |xI',XiOn&Q^A /ֿkht/  @ r5@4() TA9#_D)#  "j  SQ| "@/?-(j  / A@%K\y_閾}?u]7&-@`$o#۸JogdY@ͬ\aƍ*F+XReTUo&{ga% `(?&FemMYMB*9I q7ΌXi7=A::~۞>H4~n ˗/yGviG_Wϟg}g͚ՀFU/U67p3B4çsP.dF&Q~h92fD@.e} +#*}Ս΅E d#ZQ؞&J${kG!YM,8HP4W4)M8hlV!YK^ QM@ 嫣45=qvmJ݉zZeJgʇ?@ IHdW)UgfirU-LTJ gʙ@q֙n~ܻ.?1lRk$>3x(ѮRPcɴRv_nP1ƍccc`|Fod G6b6LUB=zaҸHя:ёQQ#u!! 
veBu$u,P8҈b}LX00[fU#Q128y bR mFavPGX@899DZNgcvl)rv Lڭl ,]Zii峾Q( ֥A78c%}΅S:-C!} z[/Q͉+t3Ucpj7N*vT^;즛nzGn׿gqx^6'RJlڗQ9u *|"zJ4-!Z.:DADaЇA=aGǯ8YN 0Wcn?Eݐlbu=&"8b`c&`* ki|+a Z)Z3Vʥt 4ᾆ¡LJi0se˘qs*MVYMnQ+ԌPA3:=;nWg=vMcC%LWbQ92MӨ8c}L0%)7ZuXֿC#ڄ̢>la}#=Y%mb}T-o@NOr(vs]w]pq/}IE7̙3o.sk_^6 Tf=H ͗%`~kpe' A.bt|?A`odQjtV..mΆlpd[փR qHMcp)SX k6/Y_?1_[jl2D.zK\P3 (-Iq;;ln nFl;" Xw8|oS56sp1t 9(ʇ+^į'|g=KS6n21 "ۃ R(d;&4 H]*SMAalp! S@ՉwmǴi\tؽTrC5' +nٚALat>5lReȜfkEJOa *`׌LkhT_ItKwP ,"j.7ʚxA+ ݒ8t֠u )\=ׂ2(Ex}5Z[W@ QQ.[\Md)T&3xCclEkFpCL`}@ĬoľL>a k& BQQ`gb!b~ B E'E@\CItUf$Ea|-5f< ڴE؉" Zc T&zQ'>wI!9ONvƻpvN9wjcSSiI / xxS/ڸacsSL-X?_o)#$yzmy S|E:&Wq0w?pY Ƭ@Ov Oifsr]Y+mxBAhq~ݛOįauG1=AqeXMCI8 (ECޚ&+,RNv'fKo5̴J4FU3wqƔ'[]l@UR?W)E(mh kcBF1ԬE& _lXu3O TZvTգ>'.< $CUM&(3n6uMȅlX:@SWR-: hSr\ #. ^G:kRim7Șh D-ߏm^ ijȭE5QqdUط@hN=Ĺvt@胷.3qys?$:( 8_zZBJ g$ %R8eGj]g)L .g|@5͚8+L0iDJLuiu⢪ZUwa^Rb01 tMm"҆mԓenAr.몺ڮRJj͊E5;^ ; !!7"F!z^׃M&6A/1ﰩC5i "D5%N]6q嗟vBnȐU_;YrJ`gϞ]\JkA6DJ٣f=|_Oe\v)/+E`J`A%ءf8$JGNG3ټ>Gj{ߏ;he??q qu5̿ }ov հ(4 <}6Šu%ѳ)8kj'5^rsi nHP %Fg+n&Tj1mp`],GyABz1F!xrxZ!C!P0beJȷ8SLTe|42lSm"JG.p2)ФHW܈)6(W_LNKO X'`[CbFb,ǻ-!>..>tX9iMr|<4}Q6*ɘ] B1?R":3Fw9!4,]vZ; )P25Q`Er`Re7NVz1 61&Z"\m)EI-@ǤET^ԾMx)H*i&ȘUPP~ֹ웶^Dh '\wCOf t#귘QZ;%D5i@V_%n*Ke62]}J5"FI7MbAB\ne' 3'&$IVc|ܖJUsͲ؎2*#ͣOgh(P_a$n\ph yD o2Txg % kQG2zk.9` OG4PHPֈ)cf! <4yW <_ 4r#&,ʅ[ -0pe?^y%99嬳64|+[fR]YSޫ@ 3< B8ج`2Av NbCf ,|Q$~)? \C.0Weuyºgs!z* jg\_ fT6Y*t ԥibkY9e@^u_T*[ېKD'fP׃mT%#g<8YyeW;?{S )0<`X'G NP͍VWNgF09`p IKZKq\ ?S%$k F@F CoĶ%E3#DVJԊI~Ě7 4iQЄ.q2RJ<)&WяK 31S}V.ZZGUQ@T!E>6)?'U $_eu`9kJ_* 3_EV4D!,HRdi#5Ұ(`1gdq[ 4K RW [ZCFӭ. d_xq3PC UK(RTͣ#B#P+I|ҐO# {P u]s@P 0W.;jVx%`D54Գ5i *CU&`H.J X|Ӈv{8r7EFC,0)LPM@N?_vW*7rCaTCW./X`w"býܜ }ÀFo u67;`5%! N-Ͱ$ %\d|$ʤ:Q<+N;皵kUMX2)(jz/^>mڊ"K8i!CjWϼ^E I])&"F}}mLk)eٮL˂[*.$hx NQl 7u~G|ρ >'ҵےŋ{y/"q]5,KMb0W'@jJ%A<`!ßƆFP Yec{?z4{#nd;?0Sޮz VLՠV晆NINJFfzY^fcUO4ȩ1blJ)t{BH:fq: XWFW/1-6:7~on[CeƱ\mNb/'1Mqphnz4XSkŢ@U?!\'S5Ϋ +jzN/fbPm PTO*}F)ezn5Rh`5ܼ$(S3rf6Ah.?z1SCW74Qx?ǭGRq1н 1-EXQ˱u2̩-soT4h T?q+9wz Ö*B٩3U1 M28Nv ٲt-&r⡒ᗞ.y1uI17Z2^! 
n=ZEa v1S:9A6Cjhd=Q f) gZ|/3%VT J()Os5U7Ih lw.G1hLԦ|C[  Uژi?\8 =&h動3,&@?@b``=E дNʿv/,쫇@" 8IiM]߯+b%"@uwx筧 `?@4I9RAoT)^LI-wBˬ1#AJeaX>LCPEyqL[X]bg8Z(|Sj5Y 4x,41rrL%g\ G\8b<,Wmhhe{%+4BpG#(8G_5~eY".R15Bi)O$S6ʖ0Gl$ K0ăll-%}X_ſԑ--G͜igb"?#4^Z48IBhLl#([kVUk<4upvZ22rs|@'$^౮* \GR=S]IVnJI!@F[ ְF7WFY0/Ro1* xx2뎥XQk碦À[@X'3(La^' DȳRE[cqזyx{ճ qB"k lS"-^CŌ3 ȩĶ<&;QK\` ]MrWL"+JL5Щ?k>Pcn:`Q7F,c7Rd6cZEy.yWDdW4E,Q5tyA/P` '>XVmD-cfY0f\5PaDxFAU"J§`K8n3`-'-5 []xƒuK2SHa II=EXaOR+MfX&C4ZbVWa閲pjB.]z`!èŕF E1Z2i#揫V->WYUf+XGEs6-7*29 5k\ҝaaJ̱AJkvibU MrK 4:R`,u5 $TkȧJP {o5oc40;=-9塎3c`;'w)MM Vd  2@ZPL=}}^(PQȬ6"ЛUFAز܇VdQ XkgdMBke?f( "]*K+H6 $8$3Y4+ 7*Wi\G8XPT c7_I/Ï'?=%5Uܫ:F\/l䬯tEAg@X?yPaKX&/*'*K  ;| _G <.#>2{,"UٱfEE0[h^Za&;fzq ~xLb[Kyx,dLo沼:~a2Ғ!= =$0 MV崙klLIIņp`g&]v{GIKvv,qeA1e7; !cet7UW$edozV|V#o-<G=ldgKFȽz56y-v찕Gd}5׮ٻgym56qP6س18#,Tf஦EexC"]aݔcWx+Vb5D3~6&!1\[S-f!JJ`QxxSiP?UW_&@6 ka~h 4Jsӯj~bȨ9wuu]kpJ1E !2tK٤ͨ 8>vp ]wݥ\)F Ny%Y8JUL a~f;OYZ2;!s/hKh t\&"%Yn njQ-uŞIy%QjupdYs>pi.y!@! NSZM[S8ʿlXf CqR|3Tjdba2Tbţ/ ˆKegW Q#!%)bRu2ޚ>( =҃5x3^ҸgA ;& 45HU1^pbrR"s5+%k薛o[A4zOlmHY͝BBN QagM.>4s C2L+Y:y%?,}ge=CRQa47|=wOr ^ V3„#*=3`kt9>8U#%wxPvF[Z-!eqҴib|Ή w=^= ^{%r2 € 1 S>G?LYOw o:xeHR!{l%/̿7mo<菧y„pU D66WIh2x <Ay)UUة3KJ@<1Y"}2QQ}$p hP ¢n,(pNloKK*fֲ 7\+lk*P3g[Ȑte(Ӳzt3.R 꺍pV`DBp'(l^KJj倕@ 3"0: T2hP9q2v7,^bQe߃]ʵ!LB r!C#95QeD.PHTOUR7f`qRx! E%Q'cw7J;G:.[nAe ѭM]!_fr$OKށ@a5wy'_^} R[8B3GU*Ɯe$Y"2.IHDiڑS )P<0R*?rʥTTKnn=qRUiqI (7ӧªDJJBwNQR2Ix H-{8*؃)ɘ! 
l(StrME9)KL㖅d'*Fy' pp-yX8\:VJD&Rﰊb7u*~fXd322b$V_g@ZSa}7poqg@%%5Sx_"yB8ʃe6L.dPnRJz^ Y|>)K +Y{!.>٭g20<;V8{hm<Ԉkɸƫ'AK>LJ0ēO^|E5Rzz.I#)BFX2t'>B+gƌ[nf 􀙸FՕM7Hn2no⡢Q >sD7|ӭzGؽN{シ̜DP)+ kB3 | _{-z+`tB2?A.(Fu =ύsv 7oѸ t(Z$ o{-+b%&^}ՕæGW_wՐ0×``/8_(-1,g@;%Wg0udJx}9,nk1me2{gI]z饗\LQ&ak CA\p5Am&'"0J}+^%!ļ[a 2~+C F4q;H]wqםwqwq;`H@2IHٳ θ}4]=wuw%܆do5s{[95>6rL@OE]w @SOĹ*^I 4΍7\Zbnۡ]PSh6^s4a%u暫i挼?䓾`T(-] (*Dp976;w+I-N7][WI= I\V^ ,c&(LY_7U<#ѷGh^#2{g' -Vj 7[zt챙m8tguV_jc}oh(]p%dvG'sJ MPI r wC%GHzN\$eelDjVHDžwtˊb)UV˪,b~lUyl۫O{69q[=I՞tGSv[qMYE||g@uFx 64zkj!;a̠A#'6zZ Ԇ H#--Rfʚh u'1 +gnx,Q =n99|:zv &[_}"3Eze1 aDbDۇJ2SD8q_|eױJo!n렧^}|D0 =PJ#[rw2cMWAC@0ho7yP%yLAt: .sΑBgN/M !l}x] d,A^Bɷ TU.I6V7@.-%]F! dndao;GcafV4=Ѡ*"51?(J8=E8lv,w%UY!s')$n[zCn8Q֤Tò-"XGNQЈQK)ɰ.IQDT*Cbi*dt.%us16w%\ AI151 {bVvBVv|fVbFV+;.;YYAS.̞@qXDBF&tْ*P 6 uA"Tsljs;s&XQ/yܟKolql6֥7ո+." V_` WV׃^?jP}%:Ylݶe3ygQBmcgg~ kJLR8y)p |n+j| CJ9s-|5 j=hJ`0˧o cbk I".,_'ď ;̓R +8kT(6lp>xExl0]s cTHYqm b+:iMaW2bh#P45dkWCeWa7|R$-͸+K6m\Q^%I XHtxRC I"9ؕ^TA&2R E!JzjΦ I}bXMZdVÕQaGuC/;Bd1KYD, p&D)/Ndt481T:tn>\ 7 s+85'_r =F4)'{ ;)$HAj+xsV+.^KU4 =WH8s\+~`Y!ŜcVn|c=I; kFQOOlKtslTSUFpr |ouIqav P02=벺Wx0qf J0 oAcAZ&]v˔3?lL J(L(\jw/kpKx~w_cFA/{QN!q><8^PRMNI-+ eָx&x4SL+x2sXMK2ccHU`N*u\Q 1I!!PDI^F5=#GHS!K@S-kD p"mP@(\,Hkj0 Fu6Z/ ax=gqPgu3>G  0pv y"N$"cfXØ|%u~!@[zzU@ //`}M`|KOէ/KOR3{7Nb 4m5USi9 q~58UtvĉVB qF5E6+0fKa. 
{$Oߜ@yCC&O< FW:$4𛛓7!}?7c C=w{p1wsE{v} ]nc]O?1a埾a\ߒ27'`&zCFv,@r2𘱞q'lESF'}(6\ g Ϡ c)!dp EcAW1>gdTz!)܆jU8Kㄗ d“eg V9iiySgZAGEl뉶\5h uOA8jV80Ϩ 6g?b8E,Tf!x6"2% qB&;a5(81S?„Ԭ$7y}YOߗ2WWO@<T_Dk넙azj-).]r23t.~ݺfdeRYRרa# zÅ3f9sĉ/͚Hv̜5pӦ*.*mBfMN\^yPBڣ>&p&3N5_XU-%_\|v{U5uv[ _s8,jF8M%7$'\>23?ߓ"\#BIBneWJ*Љf= xd& ;$.ir-f׼zcŴ|Uz8Kp (Ma*=i7(@~˝5N:%)kʢ xtMԑ '4+R{WyzL}ep ADdA$&A4KMҁ .:F0 q$ 1V\kTXFxI%\gQCqtt &4{xu'2"Qr+ޒhvқy743k}·H|\ 2;_k$^{gIjTUJi08g^B'ij r{<&fqe1Gb1ۻ/-ع!JD.d$?a"vë2x uX˦:B-0xtJ.ɭL=ugA,[/5?4֖Wp}UAī<ӧA$p̲RVQCX<$1%{ 7nRs0U-- /(LV+ci1:Bx~yٵz`gIV=xa,,fc]T ~CV|#(*PIxP2ۼ?oT,ƫd%~r"Jdr>ЉztZdDwT?ۨ9]6f:)An Nd JD)Jgf&oWZeY={:{xQ|[)!VKyaS>߈eZ~ Hݙ{jNjybMh;wˎڞVqw|vʱ֮ul暗yuzG ~&ud-^iW ݻw œ߸qC.lb] hfqi'O:Ksdg"DCi 㹚wd~>]Z|`%=K.C< f5vPJz޻LgRMS͚MeoP."By&O=w$J&L<דKpgYrvԘu6Y X(nZtǑRN7&LR,_7kJٕ!'AO%p[o%qB9Ŵ>eRQ>龫՝S S'o\'<ư (&}(sSY$baaL^f"Vu>̝4ox#)-ė]r#=6O\ik#bm1U/<3tzDgɺEǩY8I5ҭ@k)@6S13rzYQ%`wO" $PʥPryUWGUd=M &TwEY1'SEAiK͈VWj`TpCks\{QI6&{]r4\?^Xiu%wd;6ZxK!@8 k!R"E ˪P=5C3íYd(@ETUGA !=Բ %Z-Z L##׉ᕔdciUl{܂ dՐ% tQc}qGBZHFVҚ7ՅC׍alm\xVgR)+L#Փ>=^j3 1xG㡆Q @:!4mx+n)6KssT_ ,CH䵞qfѣO~ wcFy9V+=zuI頡lw֝^e?@ \(/HZxV;ZU^kd= ۷lMLIr<y0!B-&|*H'<Q4S UՂ P`뱭hMÃ!T='S* 9BCE0`Ahm .0DBC՛ȹ+Y(%iMQءz4SMU 1kj6 $`Ճ*_O?]Q7y?R9ѯ.'9/?A+si/2 (M^YaHHYxKdK_9yc\HYK[Q Q]G/êbgO*)DJ~bO~P8o3p N_7 Fg%k o@6 .dnu^Tc9xQZjC"-+IBȤG4REQ6S73758M* K/E 'ˋTK<+Ehʨϟ=1SgJ=Qlm)|ꫢ6  T1%@ş;.O$KkocJ躀emȕbbݢJ2N,\e%t|_P :mU{7y ?R&g@\>sӮްXn1;ӿ+|ʓO |U֢!j6tgfœ\xZ[Wo$nrOsgɪIIOςo@<28ĆFW_fddddҒ-9]{IO wYZtIJ 3^Gx ljX$+7MZE`aɠLH22v,VGij&r5`P3O X&d% K#RNrbͤDAQ~30&ƛQ2Ys_!&$"vٔ6ǠPerlx9Q\jnY㔳C)%~aU|v 7 D2T\2<(ĖA{&N$Z}rjo4WaHDNy%j$҆L