What is the difference between remote sensing and aerial photography?
Radiance values are reconstructed over a region by a series of detectors, each gathering data over a small pocket of the entire region. Detail is lower than in aerial photographs: the degree of detail is limited by the pixel resolution of the sensors. Stereo views are harder to attain from satellite altitudes, although some satellite systems do provide them. Satellite surveys are largely unconstrained by weather; however, clouds may conceal some of the information available in the NIR bands.


Visual image interpretation includes not only those features of a satellite image that can be seen directly, but also features that cannot be seen, going beyond what machine processing alone can recognize.

It involves two basic objectives: 1. the recognition of objects that can be seen, and 2. the true interpretation of features whether or not they can be seen directly, including the ability to obtain knowledge beyond human visual interpretation and the ability to obtain a historical image record to document change. The elements of interpretation include: Size: the size of an object, in terms of length, width, perimeter, area and occasionally volume.

Shape: the geometric characteristics of an object. Shadow: a silhouette caused by solar illumination from the side. Texture: the characteristic placement and arrangement of repetitions of tone or color. Pattern: the spatial arrangement of objects on the ground, for example systematic, random, circular, elliptical, rectangular, parallel, centripetal, braided or striated.

Site (elevation, slope, aspect): elevation, slope, aspect, exposure, adjacency to water, transportation and utilities. Situation: objects placed in a particular order or orientation relative to one another. Association: related phenomena that are usually present together. Metadata can be stored as an inherent part of the GIS data, or it may be stored as a separate document.

Examples of information contained within metadata (this is not a comprehensive list) are: the creation date of the GIS data, the data author, contact information, source agency, map projection and coordinate system, scale, error, explanation of symbology and attributes, a data dictionary, data restrictions, and licensing. Essentially, metadata is a description of the GIS data set that helps the user understand the context of the data.

This file specifies the basic characteristics of an image; it helps us learn about the satellite image and calculate the reflectance for every pixel. Radar images have specific characteristics that are a consequence of the imaging-radar technique, related to radiometry (speckle, texture) or to geometry. During radar image analysis, the interpreter must keep in mind that, even if the image is presented as an analog product on photographic paper, the radar "sees" the scene in a very different way from the human eye or from an optical sensor; the grey levels of the scene are related to the relative strength of the microwave energy backscattered by the landscape elements.

High-intensity returns appear as light tones on a positive image, while low signal returns appear as dark tones on the imagery. Some features (streets, bridges, airports) are identifiable by their shape, though it should be noted that the shape is as seen by the oblique illumination at the slant-range distance of the radar. The size of known features on the imagery provides a relative evaluation of the scale and dimensions of other terrain features.
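As a rough illustration of how backscatter strength maps to image tone, a linear dB-to-grey-level scaling can be sketched; the dB display limits used here are illustrative assumptions, not values from the text:

```python
def backscatter_to_grey(sigma0_db, db_min=-25.0, db_max=0.0):
    """Map a backscatter coefficient in dB to an 8-bit grey level:
    strong returns -> light tones, weak returns -> dark tones,
    as on a positive radar image. The dB limits are illustrative."""
    clamped = max(db_min, min(db_max, sigma0_db))  # clip to display range
    return round(255 * (clamped - db_min) / (db_max - db_min))

bright = backscatter_to_grey(-2.0)    # strong return: light tone
dark = backscatter_to_grey(-20.0)     # weak return: dark tone
```

Values outside the chosen display range are simply clipped to black or white, which is one common (but not the only) way to handle the large dynamic range of radar data.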

A granular, salt-and-pepper appearance is characteristic of radar imagery. This effect, caused by the coherent radiation used by radar systems, is called speckle. It happens because each resolution cell associated with an extended target contains several scattering centres whose elementary returns, by constructive or destructive interference, produce light or dark image brightness. In remote sensing, data processing is predominantly treated as digital image processing.
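The statistical character of speckle can be illustrated with a small simulation. A common model (an assumption here, not stated in the text) treats single-look intensity speckle as unit-mean multiplicative exponential noise; averaging N independent looks shrinks the speckle spread by roughly the square root of N:

```python
import random

random.seed(42)

# True (constant) reflectivity of a uniform extended target.
true_value = 100.0

def speckled_sample():
    # Single-look speckle: multiplicative, unit-mean exponential noise.
    return true_value * random.expovariate(1.0)

def multilook(n_looks):
    # Multi-look processing: average N independent looks of the same cell.
    return sum(speckled_sample() for _ in range(n_looks)) / n_looks

one_look = [speckled_sample() for _ in range(2000)]
sixteen_look = [multilook(16) for _ in range(2000)]

def spread(samples):
    # Standard deviation: a proxy for how "grainy" the image looks.
    mean = sum(samples) / len(samples)
    return (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
```

Both images have the same mean brightness, but the 16-look image is far less grainy, which is why multi-look processing is a standard speckle-reduction step.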

Digital image processing comprises: 1. image pre-processing, 2. image enhancement, 3. image transformation and 4. image classification.

Pre-processing is classified into: a. detector response calibration (i. de-striping, ii. removal of missing scan lines, iii. vignetting removal, iv. random noise removal); b. sun angle and topographic correction; and c. atmospheric correction. The resultant geometrically corrected image is called a geocoded image.

Two major types of enhancement are: a. contrast stretching, to increase the tonal distinction between various features in a scene; and b. spatial filtering, to enhance specific spatial patterns in an image. Contrast enhancement changes the original pixel values to increase the contrast between targets and their backgrounds.
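Contrast stretching can be sketched as a minimal linear stretch that remaps an image's actual value range onto the full display range (pure Python, list-based for clarity):

```python
def linear_stretch(pixels, out_min=0, out_max=255):
    """Linear contrast stretch: map the image's actual min..max
    onto the full display range (a minimal sketch)."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return [out_min for _ in pixels]
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# A low-contrast band whose values occupy only 60..120 of 0..255.
band = [60, 75, 90, 105, 120]
stretched = linear_stretch(band)
```

After stretching, the darkest input maps to 0 and the brightest to 255, widening the tonal distinction between features without changing their relative order.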

The procedure of image enhancement: 1. image reduction, 2. image magnification, 3. color compositing, 4. transect extraction and 5. contrast enhancement. Image transformations generate "new" images from two or more sources which highlight particular features or properties of interest better than the original input images. For example, global land cover data sets classify images into forest, urban, agriculture and other classes.
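A classic example of such a band transformation is the Normalised Difference Vegetation Index (NDVI), computed from the red and near-infrared bands; the reflectance values below are illustrative assumptions:

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index, a classic two-band
    transformation: (NIR - Red) / (NIR + Red), bounded in [-1, 1]."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

vegetation = ndvi(nir=0.5, red=0.08)   # healthy vegetation: high NDVI
water = ndvi(nir=0.02, red=0.05)       # water: negative NDVI
```

The "new" NDVI image highlights vegetation vigour far more directly than either input band alone, which is exactly the point of an image transformation.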

For the most part, free remote sensing software can be used to create land cover maps. Supervised classification: the process of using samples of known identity to classify pixels of unknown identity. Users select representative samples for each land cover class.

Supervised classification uses the spectral signatures defined in the training set: it assigns each pixel to the class it resembles most in the training set. Common supervised classification algorithms are maximum likelihood and minimum-distance classification.
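A minimal sketch of minimum-distance classification follows; the class names and mean spectral signatures (green, red, NIR reflectance) stand in for values that would really come from training samples and are illustrative assumptions:

```python
def euclidean(a, b):
    # Euclidean distance in spectral feature space.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def minimum_distance_classify(pixel, class_means):
    """Assign the pixel to the class whose training-set mean
    spectral signature is nearest in feature space."""
    return min(class_means, key=lambda c: euclidean(pixel, class_means[c]))

# Hypothetical mean signatures (green, red, NIR) for three classes.
class_means = {
    "water":  (0.04, 0.03, 0.01),
    "forest": (0.08, 0.05, 0.40),
    "urban":  (0.20, 0.22, 0.25),
}

label = minimum_distance_classify((0.07, 0.06, 0.35), class_means)
```

A maximum-likelihood classifier would additionally model each class's covariance; minimum distance uses only the class means, which keeps it fast but less discriminating.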

Pixel-based classification, in other words, works with square pixels, each of which receives a class. Object-based image classification instead groups pixels into representative shapes and sizes; this process is called multi-resolution segmentation or segment mean shift. Normally, air photos are taken vertically from an aircraft using a highly accurate camera. There are several things you can look for to determine what makes one photograph different from another of the same area, including type of film, scale and overlap.

Other important concepts used in aerial photography are stereoscopic coverage, fiducial marks, focal length, roll and frame numbers, and flight lines and index maps. Focal length: the distance from the middle of the camera lens to the focal plane (i.e. the film). As focal length increases, image distortion decreases. The focal length is precisely measured when the camera is calibrated. Scale: the ratio of a distance between two points on a photo to the actual distance between the same two points on the ground.

Large scale - a large-scale photo simply means that ground features appear at a larger, more detailed size. The area of ground coverage that is seen on the photo is less than at smaller scales. Small scale - a small-scale photo simply means that ground features appear at a smaller, less detailed size.

The area of ground coverage that is seen on the photo is greater than at larger scales. Fiducial marks: small registration marks exposed on the edges of a photograph. The distances between fiducial marks are precisely measured when a camera is calibrated, and this information is used by cartographers when compiling a topographic map.
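The scale concept above can be put in numbers. For a vertical photo over flat terrain a standard formula is scale = f / (H − h), where f is the focal length, H the flying height above datum and h the terrain elevation; the figures below are illustrative (152 mm is a common mapping-camera focal length):

```python
def photo_scale(focal_length_m, flying_height_m, terrain_elevation_m=0.0):
    """Scale of a vertical photo: S = f / (H - h), where H is flying
    height above datum and h is terrain elevation."""
    return focal_length_m / (flying_height_m - terrain_elevation_m)

# A 152 mm lens flown at 3,048 m above flat ground gives a scale
# of roughly 1:20,000.
scale = photo_scale(0.152, 3048.0)
denominator = round(1 / scale)
```

Halving the flying height doubles the scale, which is why low-altitude photography yields large-scale, high-detail imagery over a smaller ground area.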

Overlap: the amount by which one photograph includes the area covered by another photograph, expressed as a percentage. Each photograph of a stereo pair provides a slightly different view of the same area, which the brain combines and interprets as a 3-D view.
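The overlap percentage can be sketched from the ground coverage of one photo and the distance between successive exposure stations; the figures below are illustrative assumptions (roughly 60% forward overlap is commonly used to guarantee stereo coverage):

```python
def forward_overlap_percent(ground_coverage_m, air_base_m):
    """Forward overlap between successive photos: the shared ground
    distance as a percentage of one photo's ground coverage.
    air_base_m is the ground distance between exposure stations."""
    shared = max(0.0, ground_coverage_m - air_base_m)
    return 100.0 * shared / ground_coverage_m

# Photos each covering 4,600 m on the ground, exposed every 1,840 m,
# overlap by 60% -- enough for stereo viewing.
overlap = forward_overlap_percent(4600.0, 1840.0)
```

If the exposures are spaced farther apart than one photo's coverage, the overlap drops to zero and stereo viewing is impossible.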

Roll and photo numbers: each aerial photo is assigned a unique index number according to the photo's roll and frame. This identifying number allows you to find the photo in the NAPL's archive, along with metadata such as the date it was taken, the plane's altitude above sea level, the focal length of the camera, and the weather conditions.

Flight lines and index maps: at the end of a photo mission, the aerial survey contractor plots the location of the first, last, and every fifth photo centre, along with its roll and frame number, on a National Topographic System (NTS) map. Photo centres are represented by small circles, and straight lines are drawn connecting the circles to show photos on the same flight line.

Types of aerial photographs: a. vertical, b. low oblique, c. high oblique, d. trimetrogon, e. multiple-lens photography, f. convergent photography, g. panoramic.

a. Vertical. A vertical photograph is taken with the camera pointed as straight down as possible, so that the camera axis is as nearly coincident with the vertical as possible. A vertical photograph has the following characteristics: 1. The lens axis is perpendicular to the surface of the earth. 2. It covers a relatively small area. 3. The shape of the ground area covered on a single vertical photo closely approximates a square or rectangle. 4. Being a view from above, it gives an unfamiliar view of the ground. 5. Distances and directions may approach the accuracy of maps if taken over flat terrain.

Relief is not readily apparent. (Figure: relationship of the vertical aerial photograph to the ground; a vertical photograph.) b. Low oblique. A low oblique is taken with the camera axis tilted from the vertical so that the horizon does not appear on the photograph. It is used to study an area before an attack, to substitute for a reconnaissance, to substitute for a map, or to supplement a map. A low oblique has the following characteristics: 1. The ground area covered is a trapezoid, although the photo is square or rectangular. 2. The objects have a more familiar view, comparable to viewing from the top of a high hill or tall building.

No scale is applicable to the entire photograph, and distance cannot be measured. Parallel lines on the ground are not parallel on this photograph; therefore, direction (azimuth) cannot be measured.

Relief is discernible but distorted. It does not show the horizon. (Figure: relationship of a low oblique photograph to the ground; a low oblique photograph.) c. High oblique. A high oblique is taken with the camera axis tilted far enough from the vertical that the horizon appears on the photograph. It has limited military application; it is used primarily in the making of aeronautical charts.

However, it may be the only photography available. A high oblique has the following characteristics: 1. It covers a very large area (not all of it usable). 2. The ground area covered is a trapezoid, but the photograph is square or rectangular.

The view varies from the very familiar to the unfamiliar, depending on the height at which the photograph is taken. Distances and directions are not measured on this photograph for the same reasons they are not measured on the low oblique. Relief may be quite discernible but distorted, as in any oblique view; relief is not apparent in a high-altitude high oblique. The horizon is always visible. (Figure: relationship of a high oblique photograph to the ground; a high oblique photograph.) d. Trimetrogon. This is an assemblage of three photographs taken at the same time, one vertical and two high obliques, in directions at right angles to the line of flight.

(Figure: relationship of the cameras to the ground for trimetrogon photography, which uses three cameras.) e. Multiple-lens photography. These are composite photographs taken with one camera having two or more lenses, or by two or more cameras.

The photographs are combinations of two, four, or eight obliques around a vertical. The obliques are rectified to permit assembly as verticals on a common plane. f. Convergent photography. These are low obliques taken with two cameras whose axes converge; again, the cameras are exposed at the same time. For precision mapping, the optical axes of the cameras are parallel to the line of flight; for reconnaissance photography, the camera axes are at high angles to the line of flight. g. Panoramic. The development and increasing use of panoramic photography in aerial reconnaissance has resulted from the need to cover more and more areas of the world in greater detail.

To cover the large areas involved, and to resolve the desired ground detail, present-day reconnaissance systems must operate at extremely high-resolution levels.

Unfortunately, high resolution and wide angular coverage are fundamentally conflicting requirements. A panoramic camera is a scanning type of camera that sweeps the terrain of interest from side to side, across the direction of flight. This permits the panoramic camera to record a much wider area of ground than either frame or strip cameras. As with frame cameras, continuous coverage is obtained by properly spaced exposures timed to give sufficient overlap between frames.
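The coverage advantage of the side-to-side sweep can be made concrete with a rough flat-terrain sketch of ground swath versus scan angle; the flying height and angles below are illustrative assumptions:

```python
import math

def swath_width_m(flying_height_m, scan_half_angle_deg):
    """Ground swath recorded by a side-to-side scan:
    w = 2 * H * tan(theta), a flat-terrain approximation."""
    theta = math.radians(scan_half_angle_deg)
    return 2.0 * flying_height_m * math.tan(theta)

# A panoramic camera at 10 km sweeping 60 degrees to each side
# covers a far wider swath than a frame camera with a narrower
# field of view at the same altitude.
pano = swath_width_m(10000.0, 60.0)
frame = swath_width_m(10000.0, 20.0)   # assumed frame-camera half-angle
```

The widening swath comes at the cost of increasing scale distortion toward the edges of the sweep, which is the resolution-versus-coverage trade-off described above.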

Panoramic cameras are most advantageous for applications requiring the resolution of small ground detail from high altitudes.

Uses of aerial photographs include relating the photograph to the map (working from photo to map), utilization of photographic production and capabilities, utilization by the air force, and utilization by the naval force.

Applications include: 1. defense, 2. area survey, 3. urban planning, 4. transportation and road networks, 5. forest conservation, 6. resource management, 7. tourism, 8. water resource management and 9. weather and climate. Others include topographic mapping, delineating the shape of land masses, features of historical and archaeological sites, environmental studies, civilian and military surveillance, recreational purposes, artistic projects, and property surveys and analysis.

Geomorphology is concerned with patterns of landforms, their materials and related processes. As a science it deals with the evolutionary processes of the earth's relief forms and leads us to understand how landforms evolve.

Land use and land cover change (LULC) is one of the focal themes in geomorphic study. With the possible exception of the last item, there has not been extensive use of remotely sensed data in hydrological models.

Nevertheless, remotely sensed data are essential for global and continental applications, for which useful estimates of hydrologic parameters can be made. A soil map is typically the end result of a soil survey. Soil maps are most commonly used for land evaluation, spatial planning, agricultural extension, environmental protection and similar projects. Soil survey is the process of determining the pattern of the soil cover, characterizing it, and presenting it in an understandable and interpretable form to various users.

The advancement of remote sensing technology is a boon for conducting efficient soil surveys and mapping soil efficiently. Recent technological advances in satellite remote sensing have helped to overcome the limitation of conventional soil survey, thus providing a new outlook for soil survey and mapping.

Remote sensing has made progress in mapping various soil properties such as moisture, salinity, mineralogy and vegetation. Various disasters such as earthquakes, landslides, floods, fires, tsunamis, volcanic eruptions and cyclones are natural hazards that kill many people and destroy property and infrastructure every year. Landslides are the most frequent geological hazards in mountain regions, particularly in the Sikkim Himalaya.

Remotely sensed data can be used very efficiently to assess the severity and impact of damage due to these disasters and to support disaster relief. Disaster mapping is the delineation of areas that have suffered excessive natural or man-made disturbance to the normal environment, with loss of life, property and national infrastructure.

In addition, active sensors require their own energy source and passive sensors must rely on the sun; these are disadvantages as well. The sharpness of the image on a display depends on the resolution and the size of the monitor. A photograph refers specifically to images that have been both detected and recorded on photographic film.

French balloonist Gaspar Felix Tournachon patented the first aerial photography process, though it took three years to produce the first image. Early experiments included using pigeons equipped with automatic cameras, and using biplanes in World War I to capture images of enemy trenches.

Aerial photography was successfully commercialized by Sherman Fairchild for aerial surveys of land and cities after World War I and has been used in government and civil applications ever since. The United States launched the first satellite imaging system to spy on the Soviet Union. Since then, in addition to military applications, satellite imagery has been used for mapping, environmental monitoring, archaeological surveys and weather prediction.

Governments, large corporations and educational institutions make the most use of these images. It can be used to track weather systems, especially dangerous storms like hurricanes, with great accuracy. Satellites circle the Earth, so their imaging activity can be repeated easily.

Remote sensing relies on detecting different wavelengths of light radiation. Objects may emit or reflect this radiation, and remote sensing can identify and process even small differences across an extensive array of wavelengths and spatial orientations. Professionals use these differences to identify objects and categorize them according to their type, material or location. They can also use them to measure slopes and distances. What is remote sensing used for?

Satellites have used remote sensing in meteorological operations for decades. Remote sensing first came into use because of the high number of color bands in satellite imagery; those color bands were used to collect 2D information for weather tracking and geographic information system (GIS) mapping, for instance.

Today, many satellites in orbit still use remote sensing to gather a range of information from the Earth to evaluate weather and land cover and generate maps. This method is also useful for gathering data for terrestrial projects, like surveying or earthworks construction. Remote sensing encompasses any observation and measurement methods that do not rely on direct contact with the object or landform in question. Photogrammetry uses imaging rather than collecting light wavelength data.

It involves determining the spatial properties and dimensions of objects captured in photographic pictures. Albrecht Meydenbauer, a Prussian architect who made some of the first elevation drawings and topographic maps, first used the term. Today, an airplane, satellite, drone or even a close-range camera might record digital images for photogrammetric use.

Photogrammetry relies on a technique known as aerial triangulation to measure changes in position. This method involves taking aerial photographs from more than one location and using measurements from both places to pinpoint locations and distances more accurately.

The various photographs provide different lines of sight or rays from the camera to specific points. The trigonometric intersection of these lines of sight can then produce accurate 3D coordinates for those points. Modern photogrammetry also sometimes relies on laser scanning as a complement to traditional images.
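The intersection step can be sketched as follows: given two camera stations and their lines of sight, find the midpoint of the shortest segment between the two rays. This is a standard closest-point construction between two 3D lines; the coordinates below are illustrative assumptions:

```python
def triangulate(p1, d1, p2, d2):
    """Intersect two lines of sight (camera position p, direction d)
    by finding the midpoint of the shortest segment between them --
    a minimal sketch of the intersection used in aerial triangulation."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))
    def along(a, d, t):
        return tuple(x + t * y for x, y in zip(a, d))

    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero only if the rays are parallel
    t1 = (b * e - c * d) / denom   # parameter of closest point on ray 1
    t2 = (a * e - b * d) / denom   # parameter of closest point on ray 2
    q1, q2 = along(p1, d1, t1), along(p2, d2, t2)
    return tuple((x + y) / 2 for x, y in zip(q1, q2))

# Two camera stations 1,000 m apart at 2,000 m altitude, both
# sighting the same ground point at the origin.
point = triangulate((0.0, 0.0, 2000.0), (0.0, 0.0, -1.0),
                    (1000.0, 0.0, 2000.0), (-1000.0, 0.0, -2000.0))
```

With perfect rays the midpoint coincides with the true ground point; with real, noisy measurements the residual gap between the two rays gives a direct measure of the triangulation error.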

Light detection and ranging (LIDAR), for instance, which uses pulsed lasers to measure distances, often assists in photogrammetry performed from aircraft and satellites, as well as on the ground. Photogrammetry breaks down into two main branches: metric and interpretive.
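The LIDAR ranging principle is simple enough to sketch directly: the pulse travels to the target and back, so range = c·t/2; the time of flight below is an illustrative value:

```python
def lidar_range_m(round_trip_time_s, c=299_792_458.0):
    """Distance from a pulsed-laser time of flight: the pulse travels
    out and back, so range = c * t / 2."""
    return c * round_trip_time_s / 2.0

# A return received 6.67 microseconds after emission corresponds
# to a target roughly 1 km away.
distance = lidar_range_m(6.67e-6)
```

Timing resolution therefore translates directly into range resolution: a 1-nanosecond timing error corresponds to about 15 cm of range.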

What is photogrammetry used for? Photogrammetry is exceptionally common in applications such as measuring landforms and terrain and developing topographic maps.

Many industries, including fields as diverse as architecture, construction, engineering, forensics, forestry, geoscience, law and medicine, rely on the precise and accurate 3D data photogrammetry provides.


