
SENSOR AND DATA FUSION CONTEST: Comparison of Visual Analysis between SAR and Optical Sensors

ANKE BELLMANN1, OLAF HELLWICH1

Abstract: The "EuroSDR Sensor and Data Fusion Contest" is intended to show to what extent comparable results can be achieved in the interpretation of optical sensor data and SAR data. In the first phase, both optical data and radar data of up to four regions were interpreted visually. To compare the visual interpretations uniformly, reference data sets were created beforehand, which provide the basis for the data comparison. The four test areas are introduced and the creation of the reference data is explained. The result of the comparison is, on the one hand, a percentage figure for the interpretation accuracy with respect to the reference data and, on the other hand, a visual interpretation of the data. The methods of the analysis are described before the results of the first phase are presented and discussed.

1 Introduction

The EuroSDR Sensor and Data Fusion Contest "Information for Mapping from Airborne SAR and Optical Imagery" seeks to compare the potential of airborne synthetic aperture radar data with imagery from optical sensors for topographic mapping. The test is organised by EuroSDR in conjunction with the IEEE GRSS Data Fusion Technical Committee (DFC) and the ISPRS working group III/6 "Multi-Source Vision". In the first phase of the contest, a competitive information extraction is performed on both state-of-the-art airborne SAR data and high-resolution optical imagery of different terrain types. The contest aims to answer two questions: (1) Can airborne SAR compete with optical sensors for the given domain? (2) What can be gained when SAR and optical images are combined? A preliminary answer to these questions is given in this paper.

2 Basic Principles of the Contest

2.1 General

The "EuroSDR Sensor and Data Fusion Contest" is divided into three phases:

- Phase I: Visual Image Interpretation
- Phase II: Automatic Object Extraction and Classification
- Phase III: Sensor Fusion

Currently, a final report on Phase I is complete, while some test participants are still in the process of completing Phase II. At the end of summer 2005, the second phase will be concluded and work on Phase III will begin. For the first phase, nine interpreters from six institutions are participating in the contest.

1 Dipl.-Ing. Anke Bellmann, Prof. Dr.-Ing. Olaf Hellwich, Berlin University of Technology, Computer Vision & Remote Sensing, Franklinstraße 28/29, 13353 Berlin, e-mail: [email protected], [email protected], URL: http://www.cv.tu-berlin.de


The participating institutions are the University of Hannover, Germany; the Technische Universität München, Germany; the Photogrammetry Department, General Command of Mapping, Ankara, Turkey; the University of Rome, Italy; the Budapest University of Technology and Economics, Hungary; and the Berlin University of Technology, Germany. The results of information extraction consisted of the segmentation of the scenes into separate objects; depending on their experience, the interpreters analysed one imagery type, and only two interpreters analysed both. The object types of interest for the fusion contest include built-up areas, agricultural and forest areas as well as lakes, and linear objects such as highways, roads, alleys and rivers. Depending on the test data, additional objects like airports, sport fields and railway tracks were added.

2.2 Imagery

The data used for the contest are optical airborne colour or black-and-white photography of photogrammetric quality and state-of-the-art airborne polarimetric SAR imagery (Tab. 1).

Test site                   Characteristics          Sensor   Polarisation    λ        Test data resolution
Trudering, Germany          industrial, rural        AeS-1    none            X-Band   1.5 m
Oberpfaffenhofen, Germany   airport, agricultural    E-SAR    lexicographic   L-Band   3.0 m
Copenhagen, Denmark         urban, suburban          EMISAR   Pauli           C-Band   4.0 m
Fjärdhundra, Sweden         agricultural, forested   EMISAR   Pauli           C-Band   4.0 m

Tab. 1: SAR images used for the different test areas

The optical imagery was resampled to the pixel size of the corresponding SAR images to avoid taking advantage of the higher resolution of the optical images. The data sets used are described in detail by HELLWICH et al. (2002).

2.3 Reference Data

As a basis for analysing the interpretations produced as part of the contest, it was necessary to create reference data sets for the different test areas. The reference data should ideally combine the information of the optical and SAR imagery with information from topographic maps (Fig. 1). Since the acquisition times were not the same for the optical and SAR images, the data sets have marginal discrepancies. The topographic maps used are at scales of 1:25,000 and 1:50,000 (in Sweden, topographic maps at a larger scale are not available). Because topographic maps are highly generalised (streets are broadened, buildings are merged and objects are shown as signatures) and cannot be referenced to the other images with pixel accuracy, they were only used to check the content.

Page 3: S DATA FUSION CONTEST: Comparison of Visual Analysis ......ISPRS working group III/6 “Multi-Source Vision”. In the first phase of the contest, a com-petitive information extraction

Fig. 1: From three sources (top left: optical image, middle left: topographic map, bottom left: SAR image) to the reference map (right)

The reference maps were created with the vector-based desktop GIS ArcView by ESRI. As mentioned above, there are some differences between the optical and SAR images due to differences in acquisition time (Fig. 2).

Fig. 2: Differing content in the topographic map, SAR and optical images (left); reference map with mask (right)

Nevertheless, to obtain comparable data from both sensors, masks were created to cut out objects and terrain that were not clearly defined. Areas covered by the masks are excluded from the analysis in order to avoid a false impression of the information content available in the contest. Since the data analysis is conducted on raster images, the reference maps were converted into raster images with pixel sizes corresponding to the optical and SAR images. This conversion was carried out with ERDAS Imagine by Leica Geosystems.
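The actual conversion was done in ArcView and ERDAS Imagine; purely as an illustration of the same step with open-source tooling, the following hedged Python sketch rasterises one digitised reference polygon and one mask polygon onto a grid matching the SAR pixel size. All geometries, class codes and grid parameters are invented for the example and are not taken from the paper.

```python
from rasterio import features
from rasterio.transform import from_origin

# Illustrative stand-in geometries (GeoJSON-like); the real reference
# polygons were digitised in ArcView. Coordinates and class codes are
# assumptions for this sketch.
lake = {"type": "Polygon",
        "coordinates": [[(40, 40), (40, 160), (160, 160), (160, 40), (40, 40)]]}
mask_poly = {"type": "Polygon",
             "coordinates": [[(0, 0), (0, 32), (32, 32), (32, 0), (0, 0)]]}

shape = (64, 64)                            # target raster: 64 x 64 pixels
transform = from_origin(0, 256, 4.0, 4.0)   # 4 m pixels, like the EMISAR data

# Burn the lake polygon into the label raster with (hypothetical) class code 4.
labels = features.rasterize([(lake, 4)], out_shape=shape,
                            transform=transform, fill=0, dtype="uint8")

# Rasterise the exclusion mask; masked pixels are dropped from the analysis.
mask = features.rasterize([(mask_poly, 1)], out_shape=shape,
                          transform=transform, fill=0,
                          dtype="uint8").astype(bool)

print(labels.sum(), mask.sum())
```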



3 Analysis

3.1 Basic Information

In the initial stages of analysing the interpreted data, numerous problems became obvious. One problem that needs to be dealt with is the individual interpretation style of each interpreter. As previously shown by ALBERTZ (1970), the quality of image interpretations is strongly affected by the interpreter's level of experience. This is especially true for SAR imagery, because the interpretation of different imaging characteristics is required. The second issue is the different programs the interpreters work with: some use vector-based programs with separate layers for different object types; others work with raster-based programs and do not separate layers. To obtain a uniform analysis, the following basic principles of data analysis were predefined (a small sketch of the layer separation follows this list):

- Interpretations delivered by participants as vector data were converted into raster images, since the data analyses are conducted on raster images.
- Raster images were separated into different layers, one layer for each object class.
- Comparisons of the results were conducted for corresponding layers.
- Results of the data analysis are percentage values; distances and area sizes were not calculated.
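As a minimal sketch of the second and third principles, assuming the rasterised interpretations arrive as a single class-coded array; the class codes below are invented for the example, and the actual contest layers were produced with ERDAS Imagine, not with a script like this:

```python
import numpy as np

# Hypothetical class codes for the contest's object classes (assumption).
CLASSES = {1: "built-up", 2: "agriculture", 3: "forest", 4: "lake", 5: "road"}

def split_into_layers(label_raster: np.ndarray) -> dict:
    """Separate a class-coded raster into one binary layer per object
    class, so each layer can be compared with the matching reference layer."""
    return {name: label_raster == code for code, name in CLASSES.items()}

# Example: a tiny 3x3 interpretation with built-up (1) and lake (4) pixels.
demo = np.array([[1, 1, 0], [0, 4, 4], [0, 0, 4]], dtype=np.uint8)
layers = split_into_layers(demo)
print(layers["lake"].sum())   # -> 3 lake pixels
```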

3.2 Evaluation Procedure

To evaluate the interpretation results, objects were separated into areal and linear object types, since the analysis differs between the two. Image processing for both types of analysis was carried out with ERDAS Imagine.

The comparison of areal objects takes place with pixel accuracy: the pixels of the interpreted image are subtracted from the pixels of the reference image. Figure 3 shows an example of a built-up area analysis. The reference data set is displayed on the left, the interpreter's result in the middle, and the result of the subtraction on the right. The colours in the resulting image are defined as follows:

- Black: correctly interpreted area
- Dark grey: false positively interpreted area
- Light grey: false negatively interpreted area

Correctly interpreted areas are those marked both by the interpreter and in the reference map; false positive areas are those marked by the interpreter but not in the reference map; false negative areas are those shown in the reference but not detected by the interpreter.

Fig. 3: Reference data (left); interpreted data (middle); result of image processing (right)
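The subtraction of areal layers can be expressed directly on binary rasters. The following sketch is an illustration, not the contest's ERDAS workflow; the mask argument implements the exclusion masks of section 2.3, and all names are assumptions:

```python
import numpy as np

def compare_areal_layer(reference: np.ndarray, interpreted: np.ndarray,
                        mask: np.ndarray) -> dict:
    """Pixel-accurate comparison of one areal object layer.

    reference, interpreted: boolean rasters of equal shape.
    mask: True where a pixel is covered by an exclusion mask and is
    therefore left out of the analysis (section 2.3).
    """
    valid = ~mask
    return {
        "correct":        int(np.count_nonzero(reference & interpreted & valid)),   # black
        "false_positive": int(np.count_nonzero(~reference & interpreted & valid)),  # dark grey
        "false_negative": int(np.count_nonzero(reference & ~interpreted & valid)),  # light grey
    }
```

Percentages relative to the reference then follow by dividing these counts by the number of unmasked reference pixels.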


To compare linear objects, buffer zones were created around the reference and interpreted linear objects, as it is not meaningful to compare one-pixel-wide lines with pixel accuracy. To determine false positively classified linear objects (located by the interpreter but not contained in the reference), the reference object line was broadened with two buffers of adequate width. The result of the image processing is a classification of the interpreted linear objects into correctly located and incorrectly located pixels (both inside the buffers) and falsely positioned pixels (outside the buffers), where the transition from correctly to falsely located objects is buffered by an incorrectly located zone (Fig. 4, left).

Fig. 4: Definition of false positively classified linear objects (left); definition of false negatively classified linear objects (right)

To determine false negatively classified linear objects (contained in the reference map but not located by the interpreter), the interpreted line was broadened with the same buffer widths as before. The result is similar for the correctly and incorrectly positioned pixels (both inside the buffers) of positively classified roads, but the line outside the buffers is now classified as false negative (Fig. 4, right).

To obtain standardised values for assessing the quality of an interpretation, the numbers of correctly classified pixels from the two analyses (positive and negative) were averaged to give AV_cor, and the numbers of incorrectly classified pixels were averaged to give AV_inc. The number of reference pixels was then defined by the following measure:

100 % = AV_cor + AV_inc + fn,   where fn is the number of false negative pixels   (1)

With this measure, a clear diagram can be generated in which the reference pixels represent 100 %. False positively classified pixels are added on top of the 100 % to display the total number of interpreted pixels (Fig. 5).
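A hedged sketch of the buffer logic and of equation (1), using morphological dilation as a stand-in for the GIS buffering; the inner and outer buffer widths are assumptions, since the paper does not state the widths used:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def classify_line(line: np.ndarray, other: np.ndarray,
                  inner: int, outer: int):
    """Classify the pixels of boolean line raster `line` against two
    buffers grown around `other`: inside the inner buffer -> correctly
    located; between the buffers -> incorrectly located; outside both
    -> mislocated."""
    buf_in = binary_dilation(other, iterations=inner)
    buf_out = binary_dilation(other, iterations=outer)
    correct = np.count_nonzero(line & buf_in)
    incorrect = np.count_nonzero(line & buf_out & ~buf_in)
    outside = np.count_nonzero(line & ~buf_out)
    return correct, incorrect, outside

def linear_percentages(reference: np.ndarray, interpreted: np.ndarray,
                       inner: int = 2, outer: int = 4) -> dict:
    # Run 1: interpreted pixels against the buffered reference;
    # pixels outside both buffers are false positives (Fig. 4, left).
    cor1, inc1, false_pos = classify_line(interpreted, reference, inner, outer)
    # Run 2: reference pixels against the buffered interpretation;
    # pixels outside both buffers are false negatives (Fig. 4, right).
    cor2, inc2, false_neg = classify_line(reference, interpreted, inner, outer)
    av_cor = (cor1 + cor2) / 2.0              # AV_cor
    av_inc = (inc1 + inc2) / 2.0              # AV_inc
    total = av_cor + av_inc + false_neg       # equation (1), taken as 100 %
    scale = 100.0 / total if total else 0.0
    return {"correct": av_cor * scale,
            "incorrect": av_inc * scale,
            "false_negative": false_neg * scale,
            "false_positive": false_pos * scale}
```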

3.3 Qualitative Comparison

In addition to the pixel-accurate analysis, a qualitative comparison is conducted to evaluate special objects which were not detected by every interpreter, such as sport fields or land-use boundaries. The qualitative comparison is done purely visually. It is necessary because some interpreters provide a more detailed classification than others. Likewise, it is not possible to evaluate the detection of boundaries, ruins, barracks and parking lots, because these detections depend on the interpreter's classification and can neither be examined pixel by pixel nor verified against the topographic maps.

4 Results

The results look similar for the four different test sites. As described in section 3.2, figure 5 shows an example of a comparison with pixel accuracy.


Fig. 5: Examples of pixel-accurate comparison of areal objects for lakes and built-up areas of Copenhagen

It is obvious that main roads and highways, i.e. large linear objects, are detected and well located in the imagery of both sensors. Although locating large linear objects poses no problem, classification appears to be much more difficult in SAR images, as railways were sometimes classified as streets. Smaller linear objects, like minor roads, alleys and small rivers, are reliably detected in optical but not in SAR images. When comparing areal objects, large contiguous areas are mostly well detected, while small areas are hardly detectable in SAR imagery, whereas they can be found in optical images. Nevertheless, large buildings can also be detected and well located in SAR, while, as before, the smaller ones are not always located correctly. The same results are obtained for sport fields, which are only partially detected in SAR but nearly completely detected in optical imagery. Small forest areas are hardly detectable in SAR because they do not look like forest areas in the images (which may also depend on the interpreters' experience). Another problem is the detection of water surfaces: large ones can be located and classified well in SAR images, but smaller ones cannot be found, unlike in optical images.

5 Conclusion

To answer one of the main questions of this contest: SAR appears to perform as well as optical sensing for large areal and linear objects, but is not satisfactory with respect to smaller objects. Even though the optical imagery was resampled to the pixel size of the SAR data, it allows a much more detailed analysis. This might change when using modern SAR data with resolutions in the range of decimetres, as speckle filtering could then be applied intensively without losing the interpretability of the data. Without training in finding railways and other objects they are not familiar with, it is nearly impossible for interpreters to arrive at the correct classification in SAR images. Due to the similarities between the human visual system and optical imaging, this is much easier with optical images. The analysis shows that, under the conditions set by the test regarding the resolution of the data and the experience of the interpreters, SAR sensing on its own is not satisfactory for the mapping of small objects having a size of only a few pixels in one or two dimensions.

[Fig. 5 chart data: four bar-chart panels ("Built-Up Areas - SAR", "Built-Up Areas - optical", "Lakes - optical", "Lakes - SAR"), each showing per interpreter (Int 1 to Int 8) the percentages of correctly, false positively and false negatively classified pixels.]


In the next phase of the contest, automatic methods for image analysis and interpretation will be used, and exciting results are expected before the last phase, in which data fusion takes place, begins.

6 Acknowledgments

We would like to express our thanks to the several authorities that provided data for the fusion contest, especially Intermap Corp. and the Bayrisches Landesvermessungsamt München, IHR/DLR (E-SAR data), Aerosensing GmbH (AeS-1 data) as well as the DDRE (EMISAR data). Further thanks go to all participants of the contest for contributing their work and analyses.

7 References

ALBERTZ, J., 1970: Sehen und Wahrnehmen bei der Luftbildinterpretation. Bildmessung und Luftbildwesen 38, Karlsruhe.

HELLWICH, O., HEIPKE, C., WESSEL, B., 2001: Sensor and Data Fusion Contest: Information for Mapping from Airborne SAR and Optical Imagery. In: International Geoscience and Remote Sensing Symposium 2001, Sydney, IEEE, vol. VI, pp. 2793-2795.

HELLWICH, O., REIGBER, A., LEHMANN, H., 2002: OEEPE Sensor and Data Fusion Contest: Test Imagery to Compare and Combine Airborne SAR and Optical Sensors for Mapping. In: Publikationen der DGPF, Band 11, pp. 279-283.

LOHMANN, P., JACOBSEN, K., PAKZAD, K., KOCH, A., 2004: Comparative Information Extraction from SAR and Optical Imagery. In: IntArchPhRS, Band XXXV, Part B3, Istanbul, pp. 535-540.

