"Understanding in this context means the transformation of visual images (the input to the retina in the human analog) into descriptions of the world that make sense to thought processes and can elicit appropriate action."
Image classification: The process of assigning pixels in a remote sensing image to specific land cover or land use categories based on their spectral characteristics, often using machine learning algorithms.
Image preprocessing: The process of enhancing, normalizing, and filtering raw image data to improve its quality and prepare it for further analysis.
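A minimal sketch of one common preprocessing step, min-max normalization of a raw band (the array values below are made up for illustration):

```python
import numpy as np

# Hypothetical raw digital numbers (DNs) for one spectral band.
raw_band = np.array([[120, 200, 255],
                     [ 80, 140, 190]], dtype=np.float64)

# Min-max normalization rescales values to [0, 1], a common step
# before handing bands to a classifier.
normalized = (raw_band - raw_band.min()) / (raw_band.max() - raw_band.min())

print(normalized.min(), normalized.max())  # 0.0 1.0
```

Other preprocessing steps (atmospheric correction, filtering, geometric registration) follow the same pattern of transforming the raw array before analysis.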
Feature extraction: The process of identifying and extracting relevant features (e.g. shape, texture, color) from an image for classification purposes.
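As a sketch, simple per-band statistics can serve as a basic feature vector for a region: the mean captures overall brightness/color, the standard deviation captures texture (the image here is random synthetic data, not real imagery):

```python
import numpy as np

# Hypothetical 4x4 image region with 3 spectral bands.
rng = np.random.default_rng(0)
image = rng.random((4, 4, 3))

# Per-band mean (color/brightness) and standard deviation (texture)
# concatenated into one feature vector for the region.
features = np.concatenate([image.mean(axis=(0, 1)),
                           image.std(axis=(0, 1))])

print(features.shape)  # (6,)
```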
Machine learning algorithms: The use of algorithms such as decision trees, random forests, and support vector machines to train models for image classification.
Image classification techniques: The use of supervised and unsupervised classification methods to classify pixels or regions of an image into different categories based on their spectral characteristics.
Spectral signature analysis: The process of creating a spectral signature for each class of interest using spectral bands and vegetation indices, and then using these signatures for classification.
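One widely used vegetation index is NDVI (Normalized Difference Vegetation Index), which exploits the fact that healthy vegetation reflects strongly in the near-infrared (NIR) band but absorbs red light. The reflectance values below are illustrative, not measured:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red)

# Hypothetical reflectances for three pixels: dense vegetation,
# sparse vegetation, and bare soil (roughly decreasing NIR).
nir_band = np.array([0.50, 0.40, 0.10])
red_band = np.array([0.08, 0.10, 0.09])

print(ndvi(nir_band, red_band))
```

NDVI always falls in [-1, 1]; higher values indicate denser, healthier vegetation, which is why per-class NDVI ranges are often part of a spectral signature.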
Accuracy assessment: The process of validating the accuracy of a classification by comparing it to ground truth data or other reference datasets.
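The standard tool for accuracy assessment is the confusion (error) matrix, from which overall accuracy is derived. A minimal sketch with made-up labels (0 = water, 1 = forest, 2 = urban):

```python
import numpy as np

def confusion_matrix(truth, predicted, n_classes):
    """Rows = ground-truth class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(truth, predicted):
        cm[t, p] += 1
    return cm

# Hypothetical reference (ground truth) and classified labels.
truth     = [0, 0, 1, 1, 1, 2, 2, 2, 2, 0]
predicted = [0, 0, 1, 2, 1, 2, 2, 1, 2, 0]

cm = confusion_matrix(truth, predicted, 3)
overall_accuracy = np.trace(cm) / cm.sum()
print(cm)
print(overall_accuracy)  # 0.8
```

Producer's and user's accuracies per class, and the kappa coefficient, are computed from the same matrix.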
Remote sensing platforms: Understanding the basics of remote sensing platforms such as satellite and airborne sensors, their limitations, and how to select the appropriate platform.
Types of satellite imagery: Understanding the differences between multispectral and hyperspectral images and how to extract useful information from them.
Classification software: Understanding the characteristics and functionalities of different software platforms for image classification, such as ERDAS Imagine, ENVI, and ArcGIS.
Applications of image classification: Understanding the broad range of applications where image classification is used, such as land use/cover mapping, vegetation monitoring, change detection, and disaster management.
Binary classification: This type of classification involves the categorization of an image into two classes, such as land and water.
Multi-class classification: This type of classification involves the categorization of an image into three or more classes, such as forest, water, and urban areas.
Object-based classification: This type of classification involves the identification and delineation of objects or features in an image, such as buildings, vegetation, and roads.
Pixel-based classification: This type of classification involves the classification of individual pixels in an image based on their spectral characteristics.
Feature-based classification: This type of classification involves the use of contextual and ancillary data, such as terrain features, to aid in the classification process.
Rule-based classification: This type of classification involves the use of a set of pre-defined rules to classify an image, such as using certain spectral bands to identify land cover types.
Supervised classification: This type of classification involves the use of ground truth data or training sites to teach an algorithm to classify an image automatically.
Unsupervised classification: This type of classification involves the automatic clustering of pixels in an image based on their spectral characteristics, without the use of ground truth data.
Hybrid classification: This type of classification involves the combination of two or more classification methods, such as object-based and supervised classification.
Deep learning-based classification: This type of classification involves the use of neural networks and deep learning algorithms to classify images based on their features and patterns.
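To make the supervised case above concrete, here is a minimal sketch of a minimum-distance-to-means classifier, a classic pixel-based supervised method. The training pixels are invented; in practice they would come from ground-truth training sites:

```python
import numpy as np

# Hypothetical training pixels (rows = pixels, columns = spectral bands)
# collected from ground-truth sites for two classes: water and vegetation.
water_pixels = np.array([[0.05, 0.04], [0.06, 0.03]])
veg_pixels   = np.array([[0.10, 0.50], [0.12, 0.45]])

# Supervised step: learn a mean spectral signature per class.
class_means = np.stack([water_pixels.mean(axis=0), veg_pixels.mean(axis=0)])

def classify(pixels, means):
    """Assign each pixel to the class with the nearest mean signature."""
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Unknown pixels: one water-like, one vegetation-like.
unknown = np.array([[0.055, 0.035], [0.11, 0.48]])
print(classify(unknown, class_means))  # [0 1]
```

An unsupervised method such as k-means would instead cluster the pixels by spectral similarity alone, and an analyst would label the resulting clusters afterwards.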
"This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory."
"The image data can take many forms, such as video sequences, views from multiple cameras, multi-dimensional data from a 3D scanner, 3D point clouds from LiDAR sensors, or medical scanning devices."
"The scientific discipline of computer vision is concerned with the theory behind artificial systems that extract information from images."
"The technological discipline of computer vision seeks to apply its theories and models to the construction of computer vision systems."
"Sub-domains of computer vision include scene reconstruction, object detection, event detection, activity recognition, video tracking, object recognition, 3D pose estimation, learning, indexing, motion estimation, visual servoing, 3D scene modeling, and image restoration."
"Adopting computer vision technology can be painstaking for organizations, as there is no single point solution for it."
"There are very few companies that provide a unified and distributed platform, or an operating system, where computer vision applications can be easily deployed and managed."