Agisoft PhotoScan User Manual: Professional Edition, Version 1.2 (excerpt)

Hello bisenberger, you need to comment out or remove the following line: [code block omitted]. Hello Mathias, are you using the same parameters? Maybe you can post screenshots and attach the processing logs? Hello Alexey, I was using the same parameters (standard options). Here is a screenshot and the log file. During the first attempt my computer crashed. What is the right workflow to create the Google Map tiles?

Impossible to have real zoom. Dear Alexey, as I mentioned earlier, in this beta version I experienced some problems when creating a DEM or orthomosaic from the dense cloud. Although I "clean" the dense cloud, when I create a new DEM or ortho in a different coordinate system, the result ignores the editing I did to the dense cloud and instead uses the original cloud.

Cheers, G. Hello Giancan, we are still not able to reproduce the problem: we built a dense cloud in one coordinate system, then cropped the cloud and built a DEM in another coordinate system; the resulting DEM is based only on the cropped part of the cloud, though it may be interpolated up to the bounding box limits.

Hi, we still run the 1. version; the update to 1. changed things. Today the mesh is based on Arbitrary and the mesh resolution cannot be set manually. Previously we could export, for example, Collada using the already meshed model. Question: is it possible to add a manual choice for the 1. workflow? The mesh should be built at High and then decimated to create a sharper mesh, with only the tiled texture auto-generated based on Generic and a resolution of choice. We use Agisoft for city projects and love it, but the removal of manual control of the tiled models will soon force us to look at other programs.

Thanks for a great program! To align the model orientation with the default coordinate system, use the Rotate Object button from the Toolbar. In some cases editing the model geometry in external software may be required. PhotoScan supports exporting the model for editing in external software and then importing it back, as described in the Editing model geometry section of the manual.

Main export commands are available from the File menu, and the rest from the Export submenu of the Tools menu. Browse to the destination folder, choose the file type, and type in the file name.

Click the Save button. Specify the coordinate system and indicate the export parameters applicable to the selected file type, including the dense cloud classes to be saved. The Split in blocks option in the Export Points dialog can be useful for exporting large projects. It is available for referenced models only. You can indicate the size of the section in the xy plane (in meters) for the point cloud to be divided into respective rectangular blocks. The total volume of the 3D scene is limited by the Bounding Box.

The whole volume will be split into equal blocks, starting from the point with minimum x and y values. Note that empty blocks will not be saved. In some cases it may be reasonable to edit the point cloud before exporting it; to read about point cloud editing, refer to the Editing point cloud section of the manual. In the Export Matches dialog box set the export parameters. The Precision value sets the limit on the number of decimal digits in the tie point coordinates to be saved.
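The block-splitting rule described above (equal xy blocks indexed from the point with minimum x and y values, with empty blocks skipped) can be sketched in plain Python. The function and the point data are illustrative, not PhotoScan's internal code:

```python
from collections import defaultdict

def split_into_blocks(points, block_size):
    """Group (x, y, z) points into square xy blocks of side block_size,
    indexed from the minimum x and y values; empty blocks never appear."""
    min_x = min(p[0] for p in points)
    min_y = min(p[1] for p in points)
    blocks = defaultdict(list)
    for x, y, z in points:
        ix = int((x - min_x) // block_size)
        iy = int((y - min_y) // block_size)
        blocks[(ix, iy)].append((x, y, z))
    return dict(blocks)

pts = [(0.5, 0.5, 1.0), (1.5, 0.2, 2.0), (10.1, 10.1, 3.0)]
blocks = split_into_blocks(pts, 5.0)
# the first two points share block (0, 0); the third lands in (1, 1)
```

Because only occupied keys are created, "empty blocks are not saved" falls out of the data structure for free.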

Later on, estimated camera data can be imported back into PhotoScan using the Import Cameras command from the Tools menu to proceed with the 3D model reconstruction procedure. Camera calibration and orientation data export: to export camera calibration and camera orientation data, select Export Cameras. Note: camera data export in Bundler and Boujou file formats will save sparse point cloud data in the same file.

Camera data export in Bundler file format will not save distortion coefficients k3 and k4. Panorama export: PhotoScan is capable of panorama stitching for images taken from the same camera position (camera station). To indicate to the software that the loaded images have been taken from one camera station, move those photos to a camera group and assign the Camera Station type to it.

For information on camera groups, refer to the Loading photos section. Choose the panorama orientation in the file with the help of the navigation buttons to the right of the preview window in the Export Panorama dialog. Set the export parameters: select the camera groups for which the panorama should be exported and indicate the export file name mask. Additionally, you can set boundaries for the region of the panorama to be exported using the Setup boundaries section of the Export Panorama dialog.

Text boxes in the first line (X) set the angle limits in the horizontal plane, and the second line (Y) sets the angle limits in the vertical plane. The Image size option controls the size of the exported file. In the Export Model dialog, specify the coordinate system and indicate the export parameters applicable to the selected file type. If a model generated with PhotoScan is to be imported into a 3D editor program for inspection or further editing, it might be helpful to use the Shift function while exporting the model.

It allows setting a value to be subtracted from the respective coordinate of every vertex in the mesh. Essentially, this means translating the origin of the model coordinate system, which may be useful since some 3D editors truncate coordinate values to about 8 significant digits, while in some projects the decimal places are meaningful for the model positioning task. It is therefore recommended to subtract a value equal to the whole part of a certain coordinate value (see the Camera coordinates values on the Reference pane) before exporting the model, thus providing a reasonable scale for the model to be processed in a 3D editor program.
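The rounding problem motivating the Shift function can be demonstrated with single-precision storage, which keeps roughly 7 significant digits. The 435000 m shift value and the coordinate below are purely illustrative:

```python
import struct

def to_float32(x):
    """Round-trip a float through 32-bit storage, as a 3D editor that
    truncates coordinates to ~7-8 significant digits effectively does."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

easting = 435987.384          # a typical projected coordinate in metres
shifted = easting - 435000.0  # subtracting the whole part, as Shift does

lost = abs(to_float32(easting) - easting)   # centimetre-level loss
kept = abs(to_float32(shifted) - shifted)   # far below a millimetre
```

Subtracting the whole part before export keeps the decimal places, which carry the positioning information, within the editor's precision budget.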

The texture file should be kept in the same directory as the main file describing the geometry. If the texture atlas was not built, only the model geometry is exported. PhotoScan supports direct uploading of models to the Sketchfab resource; to publish your model online, use Upload Model. Thanks to the hierarchical tiles format, it allows large models to be visualized responsively.

In the Export Orthomosaic dialog box, specify the coordinate system for the orthomosaic to be saved in. Note: the Write KML file option is available only if the model is georeferenced in the WGS84 coordinate system, since Google Earth supports only this coordinate system. The world file specifies the coordinates of the four corner vertices of the exported orthomosaic. This information is already included in the GeoTIFF file, but you may want to duplicate it: if you need to export the orthomosaic in JPEG or PNG file format and would like to have georeferencing data, this information is useful.

If an export file of a fixed size is needed, it is possible to set the length of the longer side of the export file in the Max. dimension field; the length should be indicated in pixels. The Split in blocks option in the Export Orthomosaic dialog can be useful for exporting large projects. You can indicate the size of the blocks (in pixels) for the orthomosaic to be divided into. The whole area will be split into equal blocks, starting from the point with minimum x and y values.
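The relationship between the maximum dimension value and the resulting pixel size is simple arithmetic; the extent figures below are hypothetical:

```python
def pixel_size_for_max_dimension(extent_x_m, extent_y_m, max_dim_px):
    """Pixel size (m/pix) that makes the longer side of the export
    fit into max_dim_px pixels -- illustrative, ignoring any rounding
    PhotoScan itself may apply."""
    return max(extent_x_m, extent_y_m) / max_dim_px

ps = pixel_size_for_max_dimension(1200.0, 800.0, 4096)
# the 1200 m side maps to exactly 4096 px, the 800 m side to ~2731 px
```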

To export a particular part of the project, use the Region section of the Export Orthomosaic dialog. Alternatively, you can indicate the region to be exported using the polygon drawing option on the Ortho view tab of the program window.

For instructions on polygon drawing, refer to the Shapes section of the manual. Once the polygon is drawn, right-click on it and set it as a boundary of the region to be exported using the Set Boundary Type option from the context menu. The default value for pixel size in the Export Orthomosaic dialog refers to the ground sampling resolution; thus, it is useless to set a smaller value: the number of pixels would increase, but the effective resolution would not. The Max. dimension option can be skipped if you have chosen to export the orthomosaic with a certain pixel size instead.

Additionally, the file may be saved without compression (None value of the compression type parameter). The Total size textbox in the Export Orthomosaic dialog helps to estimate the size of the resulting file.

However, it is recommended to make sure that the application you are planning to open the orthomosaic with supports the BigTIFF format. Alternatively, you can split a large orthomosaic into blocks, with each block fitting the limits of a standard TIFF file.

Google Map tiles. World Wind tiles. PhotoScan supports direct uploading of orthomosaics to the MapBox platform; to publish your orthomosaic online, use Upload Orthomosaic. Note: MapBox upload requires a secure token with the uploads:write scope, which should be obtained on the account page of the MapBox website.

The secure token shouldn't be confused with the public token, as the latter doesn't allow uploading orthomosaics from PhotoScan. A multispectral orthomosaic has all the channels of the original imagery plus an alpha channel, with transparency used for no-data areas of the orthomosaic. Vegetation index data can be saved in two forms: as a grid of floating point index values calculated per pixel of the orthomosaic, or as an orthomosaic in pseudocolors according to a palette set by the user. The None value allows exporting the orthomosaic generated from the data before any index calculation procedure was performed.

The world file specifies the coordinates of the four corner vertices of the exported DEM. This information is already included in the GeoTIFF elevation data, as well as in the other supported file formats for DEM export, but you may want to duplicate it.

If an export file of a fixed size is needed, it is possible to set the length of the longer side of the export file in the Max. dimension field. Unlike orthophoto export, it is sensible to set a smaller pixel size compared to the default value in the DEM export dialog; the effective resolution will increase. The Max. dimension option can be skipped if you have chosen to export the DEM with a certain pixel size instead. The No-data value is used for the points of the grid where the elevation value could not be calculated based on the source data.

The default value is suggested according to the industry standard, but it can be changed by the user. See the Orthomosaic export section for details. Similarly to orthomosaic export, polygons drawn over the DEM on the Ortho tab of the program window can be set as boundaries for DEM export.

Extra products to export: in addition to the main targeted products, PhotoScan allows exporting several other processing results:

Undistorted photos (Undistort Photos command).
Depth map for any image (Export Depth command).
Orthophotos for individual images (Export Orthophotos command).

PhotoScan supports direct uploading of models to the Sketchfab resource and of orthomosaics to the MapBox platform.

Processing report generation. PhotoScan supports automatic processing report generation in PDF format, which contains the basic parameters of the project, processing results and accuracy evaluations.

The PhotoScan processing report presents the following data: orthomosaic sketch; survey data including coverage area, flying altitude, GSR, general camera info, and overlap statistics.

Camera calibration results: figures and an illustration for every sensor involved in the project. Camera positioning error estimates. Ground control point error estimates. Scale bars: estimated distances and measurement errors. Digital elevation model sketch with resolution and point density info.

Processing parameters used at every stage of the project. Note: the processing report can be exported after the alignment step.

The processing report export option is available for georeferenced projects only. Survey data: Number of images – total number of images uploaded into the project. Camera stations – number of aligned images.

Flying altitude – average height above ground level. Tie points – total number of valid tie points (equal to the number of points in the sparse cloud). Ground resolution – effective ground resolution averaged over all aligned images. Projections – total number of projections of valid tie points. Coverage area – size of the area that has been surveyed. Reprojection error – root mean square reprojection error averaged over all tie points on all images. The reprojection error is the distance between the point on the image where a reconstructed 3D point can be projected and the original projection of that 3D point detected on the photo and used as a basis for the 3D point reconstruction procedure.
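The RMS reprojection error defined above can be computed from pairs of detected and reprojected image points; the coordinates here are made up for illustration:

```python
import math

def rms_reprojection_error(pairs):
    """Root mean square distance (pixels) between detected tie point
    projections and the reprojections of the reconstructed 3D points."""
    sq = [(dx - rx) ** 2 + (dy - ry) ** 2
          for (dx, dy), (rx, ry) in pairs]
    return math.sqrt(sum(sq) / len(sq))

pairs = [((100.0, 200.0), (100.3, 199.6)),   # 0.5 px off
         ((50.0, 80.0), (49.8, 80.1))]       # ~0.22 px off
err = rms_reprojection_error(pairs)
```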

Camera calibration: for precalibrated cameras, the internal parameters input by the user are shown on the report page. If a camera was not precalibrated, the internal camera parameters estimated by PhotoScan are presented.

Camera locations: X error (m) – root mean square error of the X coordinate over all cameras. Y error (m) – root mean square error of the Y coordinate over all cameras.

XY error (m) – root mean square error of the X and Y coordinates over all cameras. Z error (m) – root mean square error of the Z coordinate over all cameras. Total error (m) – root mean square error of the X, Y, Z coordinates over all cameras. In the error formulas, X(i),in is the input value and X(i),est the estimated value of the X coordinate for the i-th camera position, and likewise Y(i),in / Y(i),est and Z(i),in / Z(i),est for the Y and Z coordinates.
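These per-axis figures are ordinary root mean square errors over the input-minus-estimated differences, and the total combines them; the coordinate pairs below are hypothetical:

```python
import math

def rmse(pairs):
    """Root mean square of (input - estimated) differences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / len(pairs))

x_pairs = [(100.0, 100.2), (200.0, 199.9)]  # (input, estimated), metres
y_pairs = [(10.0, 10.1), (20.0, 20.1)]
z_pairs = [(50.0, 50.1), (60.0, 59.8)]

x_err, y_err, z_err = rmse(x_pairs), rmse(y_pairs), rmse(z_pairs)
total = math.sqrt(x_err ** 2 + y_err ** 2 + z_err ** 2)
```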

Scale bars: Distance (m) – scale bar length estimated by PhotoScan. Error (m) – difference between the input and estimated values of the scale bar length.

The value depends on the Quality parameter used at the Build point cloud step, provided that the DEM has been generated from the dense point cloud. Point density – average number of dense cloud points per square meter. Processing parameters: the processing report contains processing parameter information, which is also available from the Chunk context menu.

Along with the values of the parameters used at the various processing stages, this page of the report presents information on processing time. Processing time attributed to the Dense point cloud processing step will exclude the time spent on depth map reconstruction, unless the Keep depth maps option is checked on the Advanced tab of the Preferences dialog (available from the Tools menu). For projects calculated over the network, processing time will not be shown. PhotoScan matches images at different scales to improve robustness with blurred or difficult-to-match images.

The accuracy of tie point projections depends on the scale at which they were located. PhotoScan uses this scale information to weight tie point reprojection errors. In the Reference pane Settings dialog, the tie point accuracy parameter corresponds to normalized accuracy, i.e. the accuracy of tie points detected at the original image scale. Tie points detected at other scales will have accuracy proportional to their scales. This helps to obtain more accurate bundle adjustment results.

On the processing parameters page of the report, as well as in the chunk information dialog, two reprojection errors are provided: the reprojection error in the units of tie point scale (this is the quantity that is minimized during bundle adjustment), and the reprojection error in pixels, for convenience.

The mean key point size value is the mean tie point scale averaged across all projections.

Chapter 4. Referencing

Camera calibration. Calibration groups: while carrying out photo alignment, PhotoScan estimates both internal and external camera orientation parameters, including nonlinear radial distortions.

For the estimation to be successful it is crucial to apply the estimation procedure separately to photos taken with different cameras. All the actions described below can and should be applied (or not applied) to each calibration group individually.

Calibration groups can be rearranged manually. A new group will be created and shown in the left-hand part of the Camera Calibration dialog box.

In the Camera Calibration dialog box, choose the source group in the left-hand part of the dialog. Select the photos to be moved and drag them to the target group in the left-hand part of the Camera Calibration dialog box. To place each photo into a separate group you can use the Split Groups command, available by right-clicking on a calibration group name in the left-hand part of the Camera Calibration dialog.

Camera types: PhotoScan supports four major camera types: frame camera, fisheye camera, spherical camera and cylindrical camera. The camera type can be set in the Camera Calibration dialog box available from the Tools menu.

Frame camera. If the source data within a calibration group was shot with a frame camera, successful estimation of the camera orientation parameters requires information on the approximate focal length (in pixels). To calculate the focal length value in pixels, it is enough to know the focal length in mm along with the sensor pixel size in mm. Normally this data is extracted automatically from the EXIF metadata.
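The conversion the paragraph describes is a single division; the 18 mm lens and 5 um pixel pitch below are assumed values for illustration:

```python
def focal_length_px(focal_mm, pixel_size_mm):
    """Approximate focal length in pixels from focal length (mm)
    and sensor pixel size (mm)."""
    return focal_mm / pixel_size_mm

f_pix = focal_length_px(18.0, 0.005)  # 18 mm lens, 5 um pixels -> 3600 px
```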

Frame camera with fisheye lens. If extra-wide lenses were used to capture the source data, the standard PhotoScan camera model will not allow estimating the camera parameters successfully.

The Fisheye camera type setting will initialize a different camera model to fit ultra-wide lens distortions. Spherical camera (equirectangular projection). If the source data within a calibration group was shot with a spherical camera, the camera type setting will be enough for the program to calculate camera orientation parameters.

No additional information is required except the image in equirectangular representation. Spherical camera (cylindrical projection). If the source data within a calibration group is a set of panoramic images stitched according to the cylindrical model, the camera type setting will be enough for the program to calculate camera orientation parameters.

No additional information is required. If the source images lack EXIF data, or the EXIF data is insufficient to calculate the focal length in pixels, PhotoScan will assume that the focal length equals 50 mm (35 mm film equivalent). However, if the initial guess differs significantly from the actual focal length, the alignment process is likely to fail. So, if photos do not contain EXIF metadata, it is preferable to specify the focal length (mm) and sensor pixel size (mm) manually.

This can be done in the Camera Calibration dialog box available from the Tools menu. Generally, this data is indicated in the camera specification or can be found in some online source. To indicate to the program that the camera orientation parameters should be estimated based on the focal length and pixel size information, set the Type parameter on the Initial tab to the Auto value.

Camera calibration parameters. If you have run the estimation procedure and obtained poor results, you can improve them using additional data on the calibration parameters. Select the calibration group which needs re-estimation of the camera orientation parameters on the left side of the Camera Calibration dialog box.

Note: alternatively, initial calibration data can be imported from a file using the Load button on the Initial tab of the Camera Calibration dialog box. Initial calibration data will be adjusted during the Align Photos processing step. Once the Align Photos processing step is finished, the adjusted calibration data will be displayed on the Adjusted tab of the Camera Calibration dialog box.

If very precise calibration data is available, check the Fix calibration box to protect it from recalculation. In this case the initial calibration data will not be changed during the Align Photos process. Adjusted camera calibration data can be saved to a file using the Save button on the Adjusted tab of the Camera Calibration dialog box. Estimated camera distortions can be seen on the distortion plot, available from the context menu of a camera group in the Camera Calibration dialog.

In addition, the residuals graph (the second tab of the same Distortion Plot dialog) allows evaluating how adequately the camera is described by the applied mathematical model. Note that residuals are averaged per cell of an image and then across all the images in a camera group. Calibration parameters list: fx, fy – focal length in the x and y dimensions, measured in pixels.

Setting coordinate system. Many applications require data with a defined coordinate system.

Setting the coordinate system also provides correct scaling of the model, allowing for surface area and volume measurements, and makes model loading in geoviewers and geoinformation software much easier. Some functionality, like digital elevation model export, is available only after the coordinate system is defined. PhotoScan supports setting a coordinate system based on either ground control point (marker) coordinates or camera coordinates.

In both cases the coordinates are specified in the Reference pane and can be either loaded from an external file or typed in manually. Setting the coordinate system based on recorded camera positions is often used in aerial photography processing.

However, it may also be useful for processing photos captured with GPS-enabled cameras. Placing markers is not required if recorded camera coordinates are used to initialize the coordinate system. When ground control points are used to set up the coordinate system, the markers should be placed in the corresponding locations of the scene. Using camera positioning data for georeferencing the model is faster since manual marker placement is not required.

On the other hand, ground control point coordinates are usually more accurate than telemetry data, allowing for more precise georeferencing. Placing markers: PhotoScan uses markers to specify locations within the scene.

Markers are used for setting up a coordinate system, photo alignment optimization, measuring distances and volumes within the scene, as well as for marker-based chunk alignment. Marker positions are defined by their projections on the source photos.

The more photos are used to specify the marker position, the higher the accuracy of marker placement. To define a marker location within a scene, the marker should be placed on at least 2 photos.

Note: marker placement is not required for setting the coordinate system based on recorded camera coordinates. This section can be safely skipped if the coordinate system is to be defined based on recorded camera locations. PhotoScan supports two approaches to marker placement: manual and guided. The manual approach implies that the marker projections are indicated manually on each photo where the marker is visible.

Manual marker placement does not require a 3D model and can be performed even before photo alignment. In the guided approach, the marker projection is specified for a single photo only. PhotoScan automatically projects the corresponding ray onto the model surface and calculates the marker projections on the rest of the photos where the marker is visible.

Marker projections defined automatically on individual photos can be further refined manually. A reconstructed 3D model surface is required for the guided approach. Guided marker placement usually speeds up the marker placement procedure significantly and also reduces the chance of incorrect marker placement.

It is recommended in most cases unless there are specific reasons preventing this operation. Open a photo where the marker is visible by double-clicking on its name. Switch to the marker editing mode using the corresponding toolbar button. Select the Create Marker command from the context menu.

A new marker will be created and its projections on the other photos will be defined automatically. Note: if the 3D model is not available, or the ray at the selected point does not intersect the model surface, the marker projection will be defined on the current photo only. Guided marker placement can be performed in the same way from the 3D view, by right-clicking on the corresponding point on the model surface and using the Create Marker command from the context menu.

While the accuracy of marker placement in the 3D view is usually much lower, it may still be useful for quickly locating the photos that observe the specified location on the model. To view the corresponding photos, use the Filter by Markers command, again from the 3D view context menu. If the command is inactive, please make sure that the marker in question is selected on the Reference pane. Create a marker instance using the Add Marker button on the Workspace pane, or with the Add Marker command from the Chunk context menu (available by right-clicking on the chunk title on the Workspace pane).

Open the photo where the marker projection needs to be added by double-clicking on the photo's name. Right-click at the point on the photo where the marker projection needs to be placed. From the context menu, open the Place Marker submenu and select the marker instance previously created. The marker projection will be added to the current photo. To save time on the manual marker placement procedure, PhotoScan offers a guiding lines feature.

When a marker is placed on an aligned photo, PhotoScan highlights the lines the marker is expected to lie on, on the rest of the aligned photos. Note: if a marker has been placed on at least two aligned images, PhotoScan will find the marker projections on the rest of the photos. The calculated marker positions will be indicated with the corresponding icon on the aligned photos in Photo View mode.

Automatically defined marker locations can later be refined manually by dragging their projections on the corresponding photos. Open the photo where the marker is visible by double-clicking on the photo's name. An automatically placed marker is indicated with its own icon. Move the marker projection to the desired location by dragging it with the left mouse button. Once the marker location is refined by the user, the marker icon will change accordingly. Note: to list the photos where the marker locations are defined, select the corresponding marker on the Workspace pane.

The photos where the marker is placed will be flagged on the Photos pane. To filter photos by marker, use the context menu. In those cases when there is uncertainty about the features depicted on a photo, comparative inspection of two photos can prove useful.

To open two photos in the PhotoScan window simultaneously, the Move to Other Tab Group command is available from the photo tab header context menu. In the Photos pane, double-click on one photo to be opened. The photo will be opened in a new tab of the main program window.

Right-click on the tab header and choose the Move to Other Tab Group command from the context menu. The main program window will be divided into two parts and the photo will be moved to the second part.

The next photo you choose to open with a double-click will be displayed in the active tab group.

PhotoScan automatically assigns default labels to each newly created marker. These labels can be changed using the Rename command.

Assigning reference coordinates. To reference the model, the real-world coordinates of at least 3 points of the scene should be specified.

Depending on the requirements, the model can be referenced using marker coordinates, camera coordinates, or both. The real-world coordinates used for referencing the model, along with the type of coordinate system used, are specified in the Reference pane. The model can be located in either local Euclidean coordinates or in georeferenced coordinates. For model georeferencing, a wide range of geographic and projected coordinate systems is supported, including the widely used WGS84 coordinate system.

Besides, almost all coordinate systems from the EPSG registry are supported as well. Reference coordinates can be specified in one of the following ways: loaded from a separate text file (using character-separated values format) or entered manually in the Reference pane. Click the Import toolbar button on the Reference pane (to open the Reference pane, use the Reference command from the View menu). Browse to the file containing the recorded reference coordinates and click the Open button.

In the Import CSV dialog, set the coordinate system if the data represents geographical coordinates. Select the delimiter and indicate the number of the data column for each coordinate. Indicate the columns for the orientation data if present. Note: in the data file, columns and rows are numbered starting from 0. An example of a coordinates data file in the CSV format is given in the next section.

Information on the accuracy of the source coordinates (x, y, z) can be loaded with the CSV file as well. Check the Load Accuracy option and indicate the number of the column the accuracy data should be read from. The same figure will be used as the accuracy for all three coordinates. To remove unnecessary reference coordinates, select the corresponding items from the list and press the Del key.

Additionally, it is possible to indicate accuracy data for the coordinates. Select the Set Accuracy command; it is possible to select several cameras and apply the Set Accuracy command to them at once. Alternatively, you can select the Accuracy (m) or Accuracy (deg) text box for a certain camera on the Reference pane and press the F2 key to type the data directly into the Reference pane. The reference coordinates will be loaded into the Reference pane. After reference coordinates have been assigned, PhotoScan automatically estimates coordinates in a local Euclidean system and calculates the referencing errors.

To see the results, switch to the View Estimated or View Errors mode respectively, using the corresponding toolbar buttons; the largest error will be highlighted.

In the Reference Settings dialog box, select the coordinate system used to compile the reference coordinates data, if it has not been set at the previous step. Rotation angles in PhotoScan are defined around the following axes: the yaw axis runs from top to bottom, the pitch axis runs from the left to the right wing of the drone, and the roll axis runs from the tail to the nose of the drone. Zero values of the rotation angle triple define the following camera position aboard: the camera looks down at the ground, frames are taken in landscape orientation, and the horizontal axis of the frame is perpendicular to the central tail-nose axis of the drone.

If the camera is fixed in a different position, the respective yaw, pitch and roll values should be input in the camera correction section of the Settings dialog. The senses of the angles are defined according to the right-hand rule. Note: Step 5 can be safely skipped if you are using a standard GPS system rather than one of very high precision.

In the Select Coordinate System dialog it is possible to simplify the search for the required georeferencing system using the Filter option: enter the respective EPSG code to filter the systems. To view the estimated geographic coordinates and reference errors, switch between the View Estimated and View Errors modes respectively, using the corresponding toolbar buttons.

A click on a column name on the Reference pane sorts the markers and photos by the data in that column. At this point you can review the errors and decide whether additional refinement of the marker locations is required (in the case of marker-based referencing), or whether certain reference points should be excluded.

To reset a chunk's georeferencing, use the Reset Transform command from the chunk context menu on the Workspace pane. Note: unchecked reference points on the Reference pane are not used for georeferencing. After adjusting marker locations on the photos, the coordinate system will not be updated automatically; it should be updated manually using the Update button on the Reference pane. PhotoScan allows converting the estimated geographic coordinates into a different coordinate system.

Each reference point is specified in this file on a separate line. Individual entries on each line should be separated with a tab, space, semicolon, comma or similar character. All lines starting with the # character are treated as comments.

Records from the coordinate file are matched to the corresponding photos or markers based on the label field. Camera coordinate labels should match the file name of the corresponding photo, including the extension.

Marker coordinate labels should match the labels of the corresponding markers in the project file. All labels are case insensitive. Note: the character-separated reference coordinates format does not include a specification of the type of coordinate system used.

The kind of coordinate system used should be selected separately. PhotoScan requires the z value to indicate height above the ellipsoid. Using different vertical datums. By default PhotoScan requires all the source altitude values for both cameras and markers to be input as values measured above the ellipsoid.

However, PhotoScan allows for the utilization of different geoid models as well. The PhotoScan installation package includes only the EGM96 geoid model, but additional geoid models can be downloaded from Agisoft's website if they are required by the coordinate system selected in the Reference pane settings dialog; alternatively, a geoid model can be loaded from a custom PRJ file. Optimization of camera alignment. PhotoScan estimates internal and external camera orientation parameters during photo alignment.

This estimation is performed using image data alone, and there may be some errors in the final estimates. The accuracy of the final estimates depends on many factors, such as the overlap between neighboring photos and the shape of the object surface. These errors can lead to non-linear deformations of the final model. During georeferencing the model is linearly transformed using a 7-parameter similarity transformation (3 parameters for translation, 3 for rotation and 1 for scaling).

Such a transformation can compensate only for linear model misalignment; the non-linear component cannot be removed with this approach. This is usually the main reason for georeferencing errors. Possible non-linear deformations of the model can be removed by optimizing the estimated point cloud and camera parameters based on the known reference coordinates. During this optimization PhotoScan adjusts estimated point coordinates and camera parameters, minimizing the sum of the reprojection error and the reference coordinate misalignment error.
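For illustration, a similarity transform applies p' = s·R·p + t to every point. The sketch below restricts rotation to a single angle about Z for brevity; the full 7-parameter version uses three rotation angles.

```python
import math

def similarity(point, scale, angle_deg, translation):
    """Similarity transform, rotation restricted to Z for brevity:
    p' = s * R * p + t. Such a transform compensates only linear
    misalignment; non-linear (bending) deformation is untouched."""
    a = math.radians(angle_deg)
    x, y, z = point
    rx = math.cos(a) * x - math.sin(a) * y
    ry = math.sin(a) * x + math.cos(a) * y
    tx, ty, tz = translation
    return (scale * rx + tx, scale * ry + ty, scale * z + tz)

# Scale by 2, rotate 90 degrees about Z, then translate.
p = similarity((1.0, 0.0, 0.0), 2.0, 90.0, (10.0, 0.0, 5.0))
# p is approximately (10.0, 2.0, 5.0)
```

Because scale, rotation and translation are all global, no choice of these seven parameters can straighten a bent model, which is why the optimization step below is needed.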

To achieve better optimization results it may be useful to edit the sparse point cloud beforehand, deleting obviously mislocated points. Georeferencing accuracy can be improved significantly after optimization. It is recommended to perform optimization if the final model is to be used for any kind of measurements.

In the Reference pane Settings dialog box specify the assumed accuracy of the measured values as well as the assumed accuracy of marker projections on the source photos. Click the Optimize toolbar button. In the Optimize Camera Alignment dialog box check additional camera parameters to be optimized if needed. Click the OK button to start optimization. After the optimization is complete, the georeferencing errors will be updated. Note: Step 5 can be safely skipped if you are using a standard GPS system rather than one of extremely high precision.

Tangential distortion parameters p3 and p4 are available for optimization only if the p1 and p2 values are not equal to zero after the alignment step. The model data (if any) is cleared by the optimization procedure.

You will have to rebuild the model geometry after optimization. Image coordinates accuracy for markers indicates how precisely the markers were placed by the user, or adjusted by the user after being automatically placed by the program. The Ground altitude parameter is used to make the reference preselection mode of the alignment procedure work effectively for oblique imagery. See Aligning photos for details. Camera, marker and scale bar accuracy can be set per item, i.e. individually.

Accuracy values can be typed in on the pane per item or for a group of selected items. Generally it is reasonable to run the optimization procedure based on marker data only. This is due to the fact that GCP coordinates are measured with significantly higher accuracy compared to the GPS data that indicates camera positions. Thus, marker data is likely to give more precise optimization results.

Moreover, quite often GCP and camera coordinates are measured in different coordinate systems, which also prevents using both camera and marker data in optimization simultaneously.

The results of the optimization procedure can be evaluated with the help of the error information on the Reference pane. In addition, the distortion plot can be inspected along with mean residuals visualised per calibration group. This data is available from the Camera Calibration dialog (Tools menu), via the Distortion Plot command from the context menu of a camera group. In case the optimization results do not seem satisfactory, you can try recalculating with lower values of the accuracy parameters.

Scale bar based optimization. A scale bar is the program's representation of any known distance within the scene. It can be a standard ruler or a specially prepared bar of a known length.

A scale bar is a handy tool for adding supportive reference data to the project. Scale bars can prove useful when there is no way to locate ground control points all over the scene. They allow you to save field work time, since it is significantly easier to place several scale bars of precisely known length than to measure the coordinates of a few markers using special equipment.

In addition, PhotoScan allows you to place scale bar instances between cameras, thus making it possible to avoid not only marker but also ruler placement within the scene. Of course, scale bar based information will not be enough to set a coordinate system; however, the information can be successfully used while optimizing the results of photo alignment.

It will also be enough to perform measurements in the PhotoScan software. See Performing measurements on mesh. Place markers at the start and end points of the bar. For information on marker placement please refer to the Setting coordinate system section of the manual. Select the Create Scale Bar command from the Model view context menu. The scale bar will be created and an instance added to the Scale Bar list on the Reference pane.

Switch to the source values mode using the Reference pane toolbar button. Double click on the Distance (m) box next to the newly created scale bar name and enter the known length of the bar in meters. Select the two cameras on the Workspace or Reference pane while holding the Ctrl button. Alternatively, the cameras can be selected in the Model view window using the selection tools from the Toolbar. Select the Create Scale Bar command from the context menu. On the Reference pane check all scale bars to be used in the optimization procedure.

Click the Settings toolbar button on the Reference pane. In the Reference pane Settings dialog box specify the assumed accuracy of the scale bar measurements. Click the OK button. After the optimization is complete, the estimated coordinates of cameras and markers will be updated, as well as all the georeferencing errors.

To analyze optimization results switch to the View Estimated mode using the Reference pane toolbar button. In the scale bar section of the Reference pane the estimated scale bar distance will be displayed. Error (pix) – root mean square reprojection error calculated over all feature points detected on the photo.

Error (pix) – root mean square reprojection error for the marker calculated over all photos where the marker is visible. Error (m) – difference between the input (source) scale bar length and the measured distance between the two cameras or markers representing the start and end points of the scale bar. If the total reprojection error for some marker seems too large, it is recommended to inspect reprojection errors for the marker on individual photos.
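The root mean square reprojection error reported per photo or per marker can be sketched as follows, where each residual is the pixel offset between a point's predicted projection and its measured image position.

```python
import math

def rms_reprojection_error(residuals):
    """Root mean square of per-observation reprojection errors (pixels).
    Each residual is the (dx, dy) offset between the projection predicted
    from the estimated 3D position and the measured image location."""
    if not residuals:
        return 0.0
    sq = [dx * dx + dy * dy for dx, dy in residuals]
    return math.sqrt(sum(sq) / len(sq))

# Three observations, each 0.5 px off in some direction.
err = rms_reprojection_error([(0.3, -0.4), (0.0, 0.5), (-0.5, 0.0)])
```

A single photo with an unusually large residual for a marker is a good candidate for refining that marker's placement.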

The information is available via the Show Info command from the marker context menu on the Reference pane. Working with coded and non-coded targets. Overview. Coded and non-coded targets are specially prepared, yet quite simple, real-world markers that can contribute to successful 3D model reconstruction of a scene. Coded targets advantages and limitations. Coded targets (CTs) can be used as markers to define the local coordinate system and scale of the model, or as true matches to improve the photo alignment procedure.

PhotoScan functionality includes automatic detection and matching of CTs on source photos, which allows you to benefit from marker implementation in the project.

Moreover, automatic CT detection and marker placement is more precise than manual marker placement. PhotoScan supports three types of circular CTs: 12 bit, 16 bit and 20 bit. While the 12 bit pattern is considered to be decoded more precisely, the 16 bit and 20 bit patterns allow for a greater number of CTs to be used within the same project. To be detected successfully, CTs must take up a significant number of pixels on the original photos. This leads to a natural limitation of CT implementation: while they generally prove to be useful in close-range imagery projects, aerial photography projects would demand impractically large CTs to be placed on the ground for the CTs to be detected correctly.
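A rough back-of-the-envelope check of this limitation: if a target must span some minimum number of pixels to be decoded, its required physical size scales with the ground sampling distance (GSD). The 30-pixel figure below is an illustrative assumption, not a PhotoScan specification.

```python
def min_target_size(gsd_m_per_px, min_pixels=30):
    """Rough minimum physical size for a coded target to be detectable:
    the target must span at least min_pixels pixels in the image, so its
    ground size is min_pixels * GSD. The 30 px default is an assumption."""
    return gsd_m_per_px * min_pixels

# At a 5 cm/px aerial GSD a target would need to be roughly 1.5 m
# across, which is why CTs are mostly practical for close-range work.
size = min_target_size(0.05)
```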

Coded targets in workflow Sets of all patterns of CTs supported by PhotoScan can be generated by the program itself. Once generated, the pattern set can be printed and the CTs can be placed over the scene to be shot and reconstructed. When the images with CTs seen on them are uploaded to the program, PhotoScan can detect and match the CTs automatically.

PhotoScan will detect and match CTs and add corresponding markers to the Reference pane. CTs generated with PhotoScan software contain an even number of sectors. However, previous versions of the PhotoScan software had no restriction of the kind.

Thus, if the project to be processed contains CTs from previous versions of the PhotoScan software, it is required to disable the parity check in order to make the detector work.

Non-coded targets implementation. Non-coded targets can also be automatically detected by PhotoScan (see the Detect Markers dialog). However, for non-coded targets to be matched automatically, it is necessary to run the align photos procedure first.

Non-coded targets are more appropriate for aerial surveying projects due to the simplicity of the pattern to be printed on a large scale. However, since they all look alike, they do not allow for automatic identification, so manual assignment of an identifier is required if referencing coordinates are to be imported from a file correctly.

Chapter 5. Measurements. Performing measurements on mesh. PhotoScan supports measuring distances between control points, as well as the surface area and volume of the reconstructed 3D model. Distance measurement. PhotoScan enables measurement of direct distances between points of the reconstructed 3D scene.

The points used for distance measurement must be defined by placing markers in the corresponding locations. The model coordinate system must also be initialized before distance measurements can be performed. Alternatively, the model can be scaled based on known distance (scale bar) information to become suitable for measurements. For instructions on placing markers, refining their positions and setting the coordinate system please refer to the Setting coordinate system section of the manual.

The scale bar concept is described in the Optimization section. Place the markers in the scene at the locations to be used for distance measurement. Select both markers to be used for distance measurement on the Reference pane while holding the Ctrl button.

Select the Create Scale Bar command from the 3D view context menu. Switch to the estimated values mode using the Reference pane toolbar button. The estimated distance for the newly created scale bar equals the distance to be measured. Note: the scale bar used for distance measurements must be unchecked on the Reference pane. Surface area and volume measurement. Surface area or volume measurements of the reconstructed 3D model can be performed only after the scale or coordinate system of the scene is defined.
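Once markers have metric coordinates, the estimated scale bar distance is simply the Euclidean distance between its end points, as in this sketch:

```python
import math

def marker_distance(a, b):
    """Straight-line distance between two marker positions given in a
    metric (e.g. projected) coordinate system, in the same units."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

# Two markers 3 m apart in X and 4 m apart in Y.
d = marker_distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))
# d == 5.0
```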

For instructions on setting coordinate system please refer to the Setting coordinate system section of the manual. The whole model surface area and volume will be displayed in the Measure Area and Volume dialog box. Surface area is measured in square meters, while mesh volume is measured in cubic meters.

Volume measurement can be performed only for models with closed geometry. If there are any holes in the model surface, PhotoScan will report zero volume. Existing holes in the mesh surface can be filled in before performing volume measurements using the Close Holes tool. Performing measurements on DEM. PhotoScan is capable of DEM-based point, distance, area, and volume measurements, as well as of generating cross-sections for a part of the scene selected by the user.
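One common way to compute the volume of a closed, consistently oriented triangle mesh (and the reason holes break the measurement) is to sum signed tetrahedron volumes via the divergence theorem. A sketch, not PhotoScan's actual implementation:

```python
def mesh_volume(vertices, faces):
    """Volume of a closed triangle mesh: sum of signed volumes of the
    tetrahedra formed by each face and the origin. Requires watertight,
    consistently wound geometry -- with holes the signed contributions
    no longer add up to the enclosed volume."""
    vol = 0.0
    for i, j, k in faces:
        ax, ay, az = vertices[i]
        bx, by, bz = vertices[j]
        cx, cy, cz = vertices[k]
        # Signed volume of tetrahedron (origin, a, b, c) = det(a,b,c)/6.
        vol += (ax * (by * cz - bz * cy)
                - ay * (bx * cz - bz * cx)
                + az * (bx * cy - by * cx)) / 6.0
    return abs(vol)

# Unit cube as a closed, consistently wound triangle mesh (volume 1).
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
tris = [(0, 2, 6), (0, 6, 4), (1, 5, 7), (1, 7, 3),
        (0, 1, 3), (0, 3, 2), (4, 6, 7), (4, 7, 5),
        (0, 4, 5), (0, 5, 1), (2, 3, 7), (2, 7, 6)]
volume = mesh_volume(verts, tris)
```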

Measurements on the DEM are controlled with shapes: points, polylines and polygons. Alternatively, shapes can be loaded from a .SHP file using the Import Shapes command. Shapes created in PhotoScan can be exported using the Export Shapes command. Double click on the last point to indicate the end of a polyline.

To complete a polygon, place the end point over the starting one. Once the shape is drawn, a shape label will be added to the chunk data structure on the Workspace pane. All shapes drawn on the same DEM and on the corresponding orthomosaic will be shown under the same label on the Workspace pane. The program will switch to a navigation mode once a shape is completed.

The Delete Vertex command is active only in a vertex context menu. To get access to the vertex context menu, select the shape with a double click first, and then select the vertex with a double click on it.

To change the position of a vertex, drag and drop it to the selected position with the cursor. Point measurement. The Ortho view allows you to measure the coordinates of any point on the reconstructed model. The X and Y coordinates of the point indicated with the cursor, as well as the height of the point above the vertical datum selected by the user, are shown in the bottom right corner of the Ortho view.

In the Measure Shape dialog inspect the results. The perimeter value equals the distance to be measured. In addition to the polyline length value (see the perimeter value in the Measure Shape dialog), the coordinates of the vertices of the polyline are shown on the Planar tab of the Measure Shape dialog. Note: the Measure option is available from the context menu of a selected polyline.

To select a polyline, double-click on it. A selected polyline is coloured in red. In the Measure Shape dialog inspect the results: see the area value on the Planar tab and the volume values on the Volume tab. Best fit and mean level planes are calculated based on the drawn polygon vertices. Volume measured against a custom level plane allows you to trace volume changes for the same area over the course of time.

Note: the Measure option is available from the context menu of a selected polygon. To select a polygon, double-click on it. A selected polygon is coloured in red.

Cross sections and contour lines. PhotoScan enables you to calculate cross sections, using shapes to indicate the plane(s) for the cut(s), the cut being made with a plane parallel to the Z axis. Select the Generate Contours command. Set values for the Minimal altitude and Maximal altitude parameters, as well as the Interval for the contours. All the values should be indicated in meters. When the procedure is finished, a contour lines label will be added to the project file structure shown on the Workspace pane.
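The Minimal altitude, Maximal altitude and Interval parameters together define the set of contour levels; conceptually this is just an arithmetic progression, as in this sketch:

```python
def contour_levels(min_alt, max_alt, interval):
    """Generate contour altitudes from the minimal altitude up to the
    maximal altitude with a fixed interval (all values in meters),
    mirroring the Generate Contours parameters."""
    if interval <= 0 or max_alt < min_alt:
        raise ValueError("interval must be positive and max >= min")
    levels, level = [], min_alt
    while level <= max_alt + 1e-9:  # tolerance for float accumulation
        levels.append(round(level, 6))
        level += interval
    return levels

# Contours every 0.5 m between 100 m and 102 m.
levels = contour_levels(100.0, 102.0, 0.5)
# [100.0, 100.5, 101.0, 101.5, 102.0]
```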

Contour lines can be shown over the DEM or orthomosaic on the Ortho tab of the program window. Use the Show Contour Lines tool from the Ortho tab toolbar to switch the function on and off. Contour lines can be deleted using the Remove Contours command from the contour lines label context menu on the Workspace pane. Contour lines can be exported using the Export Contours command from the contour lines label context menu on the Workspace pane. Alternatively the command is available from the Tools menu.

In the Export Contour Lines dialog it is necessary to select the type of contour lines to be exported. A .SHP file can store lines of only one type: either polygons or polylines. Vegetation indices calculation. PhotoScan enables calculation of NDVI and other vegetation indices based on multispectral imagery input.

A vegetation index formula can be set by the user, thus allowing for great flexibility in data analysis. Calculated data can be exported as a grid of floating point index values calculated per pixel of the orthomosaic, or as an orthomosaic in pseudocolors according to a palette set by the user. Open the orthomosaic in the Ortho tab by double-clicking on the orthomosaic label on the Workspace pane.

Open the Raster Calculator tool using the corresponding toolbar button. Input an index expression using keyboard input and the operator buttons of the raster calculator if necessary.
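As an example of an index expression, the classic NDVI formula is (NIR - Red) / (NIR + Red). The sketch below computes it per pixel; which band number corresponds to NIR or Red depends on your camera, so the mapping is an assumption here.

```python
def ndvi(nir, red):
    """Classic NDVI index: (NIR - Red) / (NIR + Red), computed per
    pixel. The band mapping in an actual Raster Calculator expression
    depends on the camera and is an assumption in this sketch."""
    if nir + red == 0:
        return 0.0  # avoid division by zero on empty pixels
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in NIR, giving values near +1;
# bare or non-vegetated surfaces trend toward 0 or negative values.
values = [ndvi(n, r) for n, r in [(0.6, 0.1), (0.3, 0.3), (0.1, 0.6)]]
```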

Once the operation is completed, the result will be shown in the Ortho view, index values being visualised with colours according to the palette set in the Raster Calculator dialog. The palette defines the colour with which each index value is shown. PhotoScan offers several standard palette presets on the Palette tab of the Raster Calculator dialog. For each new line added to the palette a certain index value should be typed in. Double click on the newly added line to type the value in. A customised palette can be saved for future projects.

Select the Generate Contours command. The contour lines will be shown over the index data on the Ortho tab. Note: PhotoScan keeps only the latest contour lines data calculated. After the vegetation index results have been inspected, the original orthomosaic can be opened by unchecking the Enable Transform box in the Raster Calculator and pressing the OK button.

Index data can be saved with the Export Orthomosaic command from the File menu. For guidance on the export procedure, please refer to the NDVI data export section of the manual. Masks are used in PhotoScan to specify areas on the photos which can otherwise confuse the program or lead to incorrect reconstruction results.

Masks can be applied at the following stages of processing: alignment of the photos, building dense point cloud, building 3D model texture, and exporting orthomosaic. Alignment of the photos. Masked areas can be excluded during feature point detection. Thus, the objects on the masked parts of the photos are not taken into account while estimating camera positions. This is important in setups where the object of interest is not static with respect to the scene, such as when using a turntable to capture the photos.

Masking may also be useful when the object of interest occupies only a small part of the photo. In this case a small number of useful matches can be mistakenly filtered out as noise among a much greater number of matches between background objects. Building dense point cloud. While building the dense point cloud, masked areas are not used in the depth maps computation process. Masking can be used to reduce the resulting dense cloud complexity by eliminating areas on the photos that are not of interest.

Masked areas are always excluded from processing during the dense point cloud and texture generation stages. Let's take for instance a set of photos of some object. Along with the object itself, some background areas are present on each photo. These areas may be useful for more precise camera positioning, so it is better to use them while aligning the photos. However, the impact of these areas on dense point cloud generation is exactly the opposite: the resulting model will contain the object of interest and its background.

Background geometry will „consume“ some part of the mesh polygons that could otherwise be used for modeling the main object. Setting masks for such background areas allows you to avoid this problem and increases the precision and quality of geometry reconstruction.

Building texture atlas During texture atlas generation, masked areas on the photos are not used for texturing. Masking areas on the photos that are occluded by outliers or obstacles helps to prevent the „ghosting“ effect on the resulting texture atlas. Loading masks Masks can be loaded from external sources, as well as generated automatically from background images if such data is available. PhotoScan supports loading masks from the following sources: From alpha channel of the source photos.

From separate images. Generated from background photos based on the background differencing technique. Based on the reconstructed 3D model. When generating masks from separate or background images, the folder selection dialog will appear. Browse to the folder containing the corresponding images and select it.
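Background differencing can be sketched as a per-pixel threshold on the difference between a photo and a background plate. This single-channel toy version (threshold value chosen arbitrarily) only illustrates the idea, not PhotoScan's actual algorithm:

```python
def background_mask(photo, background, threshold=10):
    """Binary mask by background differencing: pixels whose intensity
    differs from the background plate by more than the threshold are
    kept (1), the rest are masked out (0). Single-channel sketch."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(prow, brow)]
            for prow, brow in zip(photo, background)]

# A 2x3 grayscale photo against a nearly identical background plate.
photo      = [[200, 200, 50], [200, 60, 55]]
background = [[198, 201, 52], [199, 203, 53]]
mask = background_mask(photo, background)
# Only the one pixel where the object differs from the backdrop
# survives: [[0, 0, 0], [0, 1, 0]]
```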

The following parameters can be specified during mask import: Import masks for – specifies whether masks should be imported for the currently opened photo, the active chunk or the entire workspace. Current photo – load the mask for the currently opened photo (if any).

Active chunk – load masks for active chunk. Entire workspace – load masks for all chunks in the project.

On the contrary, the set of camera positions is required for further 3D model reconstruction by PhotoScan. The next stage is building dense point cloud. Based on the estimated camera positions and pictures themselves a dense point cloud is built by PhotoScan.

Dense point cloud may be edited prior to export or proceeding to 3D mesh model generation. The third stage is building mesh. PhotoScan reconstructs a 3D polygonal mesh representing the object surface based on the dense or sparse point cloud according to the user’s choice.

Generally there are two algorithmic methods available in PhotoScan that can be applied to 3D mesh generation: Height Field – for planar type surfaces, Arbitrary – for any kind of object. Once the mesh is built, it may be necessary to edit it. Some corrections, such as mesh decimation, removal of detached components, closing of holes in the mesh, smoothing, etc., can be performed directly in PhotoScan.

For more complex editing you have to engage external 3D editing tools. PhotoScan allows you to export the mesh, edit it in another software package and import it back. After the geometry, i.e. the mesh, is reconstructed, it can be textured. Several texturing modes are available in PhotoScan; they are described in the corresponding section of this manual, as are the orthomosaic and DEM generation procedures.

About the manual. Basically, the sequence of actions described above covers most data processing needs. All these operations are carried out automatically according to the parameters set by the user. Instructions on how to get through these operations and descriptions of the parameters controlling each step are given in the corresponding sections of Chapter 3, General workflow. In some cases, however, additional actions may be required to get the desired results.

Pictures taken using uncommon lenses, such as fisheye lenses, may require preliminary calibration of optical system parameters or the use of a different calibration model specially implemented for ultra-wide angle lenses. Chapter 4, Improving camera alignment results covers that part of the software functionality. In some capturing scenarios masking of certain regions of the photos may be required to exclude them from the calculations. The application of masks in the PhotoScan processing workflow, as well as the editing options available, are described in Chapter 5, Editing.

Chapter 6, Automation describes opportunities to minimize manual intervention in the processing workflow. Reconstructing a 3D model can take quite a long time. PhotoScan allows you to export obtained results and save intermediate data in the form of project files at any stage of the process. If you are not familiar with the concept of projects, a brief description is given at the end of Chapter 3, General workflow. In the manual you can also find instructions on the PhotoScan installation procedure and basic rules for taking „good“ photographs, i.e. photographs that provide the data most suitable for 3D reconstruction.

For this information refer to Chapter 1, Installation and Chapter 2, Capturing photos. Chapter 1. Installation. Supported GPUs include the NVidia GeForce 8xxx series and later. PhotoScan is likely to be able to utilize the processing power of any OpenCL-enabled device during the Dense Point Cloud generation stage, provided that OpenCL drivers for the device are properly installed.

However, because of the large number of various combinations of video chips, driver versions and operating systems, Agisoft is unable to test and guarantee PhotoScan’s compatibility with every device and on every platform. The table below lists currently supported devices on Windows platform only.

We will pay particular attention to possible problems with PhotoScan running on these devices. Using OpenCL acceleration with mobile or integrated graphics video chips is not recommended because of the low performance of such GPUs.

Start PhotoScan by running the photoscan executable. Restrictions of the Demo mode. Once PhotoScan is downloaded and installed on your computer you can run it either in the Demo mode or in the full function mode. On every start until you enter a serial number it will show a registration box offering two options: (1) use PhotoScan in the Demo mode or (2) enter a serial number to confirm the purchase.

The first choice is set by default, so if you are still exploring PhotoScan click the Continue button and PhotoScan will start in the Demo mode.

Use of PhotoScan in the Demo mode is not time limited; several functions, however, are not available in this mode. On purchasing you will get a serial number to enter into the registration box when starting PhotoScan. Once the serial number is entered, the registration box will not appear again and you will get full access to all functions of the program. Chapter 2. Capturing photos. Before loading your photographs into PhotoScan you need to take them and select those suitable for 3D model reconstruction.

Photographs can be taken by any digital camera (both metric and non-metric), as long as you follow some specific capturing guidelines. This section explains general principles of taking and selecting pictures that provide the most appropriate data for 3D model generation.

Make sure you have studied the following rules and read the list of restrictions before you go out to shoot photographs. Equipment. Use a digital camera with reasonably high resolution (5 MPix or more). Avoid ultra-wide angle and fisheye lenses.

The best choice is a 50 mm focal length (35 mm film equivalent) lens. It is recommended to use focal lengths in the 20 to 80 mm range (35 mm equivalent). If a data set was captured with a fisheye lens, the appropriate camera sensor type should be selected in the PhotoScan Camera Calibration dialog prior to processing. Fixed lenses are preferred. If zoom lenses are used, the focal length should be set either to the maximal or to the minimal value during the entire shooting session for more stable results.

Take images at the maximal possible resolution. ISO should be set to the lowest value, since high ISO values induce additional noise in images. The aperture value should be high enough to result in sufficient depth of field: it is important to capture sharp, not blurred, photos. Shutter speed should not be too slow, otherwise blur can occur due to slight movements. If you still have to shoot shiny objects, do so under a cloudy sky.

Avoid unwanted foregrounds. Avoid moving objects within the scene to be reconstructed. Avoid absolutely flat objects or scenes. Image preprocessing. PhotoScan operates with the original images, so do not crop or geometrically transform (i.e. resize or rotate) the images. Capturing scenarios. Generally, spending some time planning your shooting session can be very useful.

Number of photos: more than required is better than not enough. The number of „blind zones“ should be minimized, since PhotoScan is able to reconstruct only geometry visible from at least two cameras. Each photo should effectively use the frame size: the object of interest should take up the maximum area. In some cases portrait camera orientation should be used. Do not try to place the full object in the image frame; if some parts are missing it is not a problem, provided that these parts appear in other images.

Good lighting is required to achieve better quality results, yet glare should be avoided. It is recommended to remove sources of light from the camera's field of view.

Avoid using flash. The following figures represent advice on appropriate capturing scenarios. Restrictions. In some cases it might be very difficult or even impossible to build a correct 3D model from a set of pictures. A short list of typical reasons for photograph unsuitability is given below. Modifications of photographs. PhotoScan can process only unmodified photos as they were taken by a digital photo camera.

Processing photos which were manually cropped or geometrically warped is likely to fail or to produce highly inaccurate results. Photometric modifications do not affect reconstruction results. If the focal length information is missing from the EXIF data, PhotoScan assumes that the focal length in 35 mm equivalent equals 50 mm and tries to align the photos in accordance with this assumption. If the correct focal length value differs significantly from 50 mm, the alignment can give incorrect results or even fail.

In such cases it is required to specify initial camera calibration manually. The details of necessary EXIF tags and instructions for manual setting of the calibration parameters are given in the Camera calibration section.

Lens distortion. The distortion of the lenses used to capture the photos should be well approximated by Brown's distortion model; otherwise it is most unlikely that processing results will be accurate.

Fisheye and ultra-wide angle lenses are poorly modeled by the common distortion model implemented in the PhotoScan software, so it is required to choose the proper camera type in the Camera Calibration dialog prior to processing. Chapter 3. General workflow. Processing of images with PhotoScan includes the following main steps: loading photos into PhotoScan; inspecting loaded images, removing unnecessary images; aligning photos; building dense point cloud; building mesh (3D polygonal model); generating texture; exporting results.

If you are using PhotoScan in the full function not the Demo mode, intermediate results of the image processing can be saved at any stage in the form of project files and can be used later. The concept of projects and project files is briefly explained in the Saving intermediate results section. The list above represents all the necessary steps involved in the construction of a textured 3D model from your photos.

Some additional tools, which you may find to be useful, are described in the successive chapters. Preferences settings Before starting a project with PhotoScan it is recommended to adjust the program settings for your needs. In Preferences dialog General Tab available through the Tools menu you can indicate the path to the PhotoScan log file to be shared with the Agisoft support team in case you face any problem during the processing.

Here you can also change GUI language to the one that is most convenient for you. PhotoScan exploits GPU processing power that speeds up the process significantly. If you have decided to switch on GPUs for photogrammetric data processing with PhotoScan, it is recommended to free one physical CPU core per each active GPU for overall control and resource managing tasks.

Loading photos Before starting any operation it is necessary to point out what photos will be used as a source for 3D reconstruction. In fact, photographs themselves are not loaded into PhotoScan until they are needed.

So, when you „load photos“ you only indicate photographs that will be used for further processing. In the Add Photos dialog box browse to the folder containing the images and select files to be processed. Then click Open button. Photos in any other format will not be shown in the Add Photos dialog box.

To work with such photos you will need to convert them in one of the supported formats. If you have loaded some unwanted photos, you can easily remove them at any moment. Right-click on the selected photos and choose Remove Items command from the opened context menu, or click Remove Items toolbar button on the Workspace pane.

The selected photos will be removed from the working set. Camera groups If all the photos or a subset of photos were captured from one camera position – camera station, for PhotoScan to process them correctly it is obligatory to move those photos to a camera group and mark the group as Camera Station.

It is important that for all the photos in a Camera Station group distances between camera centers were negligibly small compared to the camera-object minimal distance. However, it is possible to export panoramic picture for the data captured from only one camera station.

Refer to Exporting results section for guidance on panorama export. Alternatively, camera group structure can be used to manipulate the image data in a chunk easily, e.g. to apply disable/enable functions to all the cameras in a group at once. Right-click on the selected photos and choose Move Cameras – New Camera Group command from the opened context menu.

A new group will be added to the active chunk structure and selected photos will be moved to that group. To mark group as camera station, right click on the camera group name and select Set Group Type command from the context menu.

Inspecting loaded photos Loaded photos are displayed on the Workspace pane along with flags reflecting their status. The following flags can appear next to the photo name:

NC (Not calibrated) Notifies that the EXIF data available is not sufficient to estimate the camera focal length. In this case PhotoScan assumes that the corresponding photo was taken using 50mm lens (35mm film equivalent). If the actual focal length differs significantly from this value, manual calibration may be required. More details on manual camera calibration can be found in the Camera calibration section. NA (Not aligned) Notifies that external camera orientation parameters have not been estimated for the current photo yet. Images loaded to PhotoScan will not be aligned until you perform the next step – photos alignment.


In this case the arrangement of images into cameras and planes will be performed automatically based on the available metadata. Once the data is properly organized, it can be loaded into PhotoScan to form multiplane cameras. The exact procedure will depend on whether the multilayer layout (variants a and b), the multifolder layout (variants c and d) or MicaSense data is used.

In the Add Photos dialog box browse to the folder containing multilayer images and select files to be processed. In the Add Photos dialog select the data layout "Create multispectral cameras from files as cameras". In the Add Folder dialog box browse to the parent folder containing subfolders with images. Then click Select Folder button. In the Add Photos dialog select the data layout "Create multispectral cameras from folders as bands".

In the Add Photos dialog box browse to the folder containing MicaSense images and select files to be processed. After chunk with multispectral cameras is created, it can be processed in the same way as normal chunks.

For these chunks additional parameters allowing to manipulate the data properly will be provided where appropriate. Aligning photos Once photos are loaded into PhotoScan, they need to be aligned. At this stage PhotoScan finds the camera position and orientation for each photo and builds a sparse point cloud model. The progress dialog box will appear displaying the current processing status. To cancel processing click Cancel button. Alignment having been completed, computed camera positions and a sparse point cloud will be displayed.

You can inspect alignment results and remove incorrectly positioned photos, if any. To see the matches between any two photos use the View Matches... command. Incorrectly positioned photos can be realigned. Reset alignment for incorrectly positioned cameras using Reset Camera Alignment command from the photo context menu. Set markers (at least 4 per photo) on these photos and indicate their projections on at least two photos from the already aligned subset.

PhotoScan will consider these points to be true matches. For information on markers placement refer to the Setting coordinate system section. Select photos to be realigned and use Align Selected Cameras command from the photo context menu.

When the alignment step is completed, the point cloud and estimated camera positions can be exported for processing with another software if needed. Image quality Poor input, e.g. vague photos, can influence alignment results badly.

To help you to exclude poorly focused images from processing PhotoScan suggests automatic image quality estimation feature. Images with quality value of less than 0.5 units are recommended to be disabled and thus excluded from photogrammetric processing. To disable a photo use the Disable button from the Photos pane toolbar. PhotoScan estimates image quality for each input image. The value of the parameter is calculated based on the sharpness level of the most focused part of the picture. Switch to the detailed view in the Photos pane using the Details command from the Change menu on the Photos pane toolbar.

Select all photos to be analyzed on the Photos pane. Right button click on the selected photo s and choose Estimate Image Quality command from the context menu. Once the analysis procedure is over, a figure indicating estimated image quality value will be displayed in the Quality column on the Photos pane.
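As a minimal illustration of the threshold-based filtering described above (the photo names and quality values below are hypothetical, standing in for the figures shown in the Quality column):

```python
# Hypothetical per-photo quality values, as they would appear in the
# Quality column of the Photos pane after Estimate Image Quality.
photos = {"IMG_001.jpg": 0.82, "IMG_002.jpg": 0.47, "IMG_003.jpg": 0.91}

THRESHOLD = 0.5  # images below this value are candidates for disabling

def poorly_focused(quality_by_photo, threshold=THRESHOLD):
    """Return the names of photos whose quality falls below the threshold."""
    return sorted(name for name, q in quality_by_photo.items() if q < threshold)
```

Photos returned by such a filter would then be disabled manually via the Photos pane.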

Alignment parameters The following parameters control the photo alignment procedure and can be modified in the Align Photos dialog box: Accuracy Higher accuracy settings help to obtain more accurate camera position estimates.

Lower accuracy settings can be used to get the rough camera positions in a shorter period of time. While at High accuracy setting the software works with the photos of the original size, Medium setting causes image downscaling by factor of 4 (2 times by each side), at Low accuracy source files are downscaled by factor of 16, and Lowest value means further downscaling by 4 times more. Highest accuracy setting upscales the image by factor of 4.

Since tie point positions are estimated on the basis of feature spots found on the source images, it may be meaningful to upscale a source photo to accurately localize a tie point.
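The downscaling scheme above can be summarized in a small sketch (the setting names follow the Align Photos dialog; the factors are by total pixel count, i.e. each step is 2x per side):

```python
# Downscale factor (by total pixel count) applied to source images
# for each Align Photos accuracy setting, as described above.
ACCURACY_DOWNSCALE = {
    "Highest": 0.25,  # upscale by factor of 4
    "High": 1,        # original image size
    "Medium": 4,      # 2 times by each side
    "Low": 16,
    "Lowest": 64,
}

def matching_resolution(width, height, accuracy):
    """Approximate image size (in pixels) used for feature matching."""
    per_side = ACCURACY_DOWNSCALE[accuracy] ** 0.5  # per-side scale factor
    return round(width / per_side), round(height / per_side)
```

For example, a 4000x3000 source image is matched at roughly 2000x1500 under the Medium setting.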

However, Highest accuracy setting is recommended only for very sharp image data and mostly for research purposes due to the corresponding processing being quite time consuming. Pair preselection The alignment process of large photo sets can take a long time. A significant portion of this time period is spent on matching of detected features across the photos.

Image pair preselection option may speed up this process due to selection of a subset of image pairs to be matched. In the Generic preselection mode the overlapping pairs of photos are selected by matching photos using lower accuracy setting first. In the Reference preselection mode the overlapping pairs of photos are selected based on the measured camera locations if present.

For oblique imagery it is necessary to set Ground altitude value average ground height in the same coordinate system which is set for camera coordinates data in the Settings dialog of the Reference pane to make the preselection procedure work efficiently.

Ground altitude information must be accompanied with yaw, pitch, roll data for cameras. Yaw, pitch, roll data should be input in the Reference pane. Additionally the following advanced parameters can be adjusted. Key point limit The number indicates upper limit of feature points on every image to be taken into account during current processing stage.

Using zero value allows PhotoScan to find as many key points as possible, but it may result in a big number of less reliable points. Tie point limit The number indicates upper limit of matching points for every image.

Using zero value doesn’t apply any tie point filtering. Constrain features by mask When this option is enabled, masked areas are excluded from feature detection procedure. For additional information on the usage of masks please refer to the Using masks section.

Note Tie point limit parameter allows to optimize performance for the task and does not generally affect the quality of the further model. Too high or too low a tie point limit value may cause some parts of the dense point cloud model to be missed.

The reason is that PhotoScan generates depth maps only for pairs of photos for which number of matching points is above certain limit. As a result the sparse point cloud will be thinned, yet the alignment will be kept unchanged. Point cloud generation based on imported camera data PhotoScan supports import of external and internal camera orientation parameters. Thus, if precise camera data is available for the project, it is possible to load them into PhotoScan along with the photos, to be used as initial information for 3D reconstruction job.

The data will be loaded into the software. Camera calibration data can be inspected in the Camera Calibration dialog, Adjusted tab, available from Tools menu. If the input file contains some reference data camera position data in some coordinate system , the data will be shown on the Reference pane, View Estimated tab. Once the data is loaded, PhotoScan will offer to build point cloud. This step involves feature points detection and matching procedures. As a result, a sparse point cloud – 3D representation of the tie-points data, will be generated.

Parameters controlling Build Point Cloud procedure are the same as the ones used at Align Photos step see above. Building dense point cloud PhotoScan allows to generate and visualize a dense point cloud model. Based on the estimated camera positions the program calculates depth information for each camera to be combined into a single dense point cloud. PhotoScan tends to produce extra dense point clouds, which are of almost the same density, if not denser, as LIDAR point clouds.

A dense point cloud can be edited and classified within PhotoScan environment or exported to an external tool for further analysis. Rotate the bounding box and then drag corners of the box to the desired positions. In the Build Dense Cloud dialog box select the desired reconstruction parameters.

Click OK button when done. Reconstruction parameters Quality Specifies the desired reconstruction quality. Higher quality settings can be used to obtain more detailed and accurate geometry, but they require longer time for processing. Interpretation of the quality parameters here is similar to that of accuracy settings given in Photo Alignment section. The only difference is that in this case Ultra High quality setting means processing of original photos, while each following step implies preliminary image size downscaling by factor of 4 (2 times by each side).

Depth Filtering modes At the stage of dense point cloud generation reconstruction PhotoScan calculates depth maps for every image. Due to some factors, like noisy or badly focused images, there can be some outliers among the points.

To sort out the outliers PhotoScan has several built-in filtering algorithms that answer the challenges of different projects. If there are important small details which are spatially distinguished in the scene to be reconstructed, then it is recommended to set Mild depth filtering mode, for important features not to be sorted out as outliers.

This value of the parameter may also be useful for aerial projects in case the area contains poorly textured roofs, for example. If the area to be reconstructed does not contain meaningful small details, then it is reasonable to choose Aggressive depth filtering mode to sort out most of the outliers. This value of the parameter is normally recommended for aerial data processing; however, mild filtering may be useful in some projects as well (see the poorly textured roofs comment in the Mild parameter value description above).
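The guidance above can be condensed into a toy decision helper (the boolean scene descriptor is an illustrative assumption, not a PhotoScan parameter):

```python
def depth_filtering_mode(has_small_details):
    """Suggest a depth filtering mode from a simple scene descriptor.

    Mild keeps spatially distinguished fine features (or poorly textured
    roofs in aerial data) from being culled as outliers; Aggressive removes
    most outliers and is typical for aerial data without meaningful small
    details.
    """
    return "Mild" if has_small_details else "Aggressive"
```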

Moderate depth filtering mode brings results that are in between the Mild and Aggressive approaches. You can experiment with the setting in case you have doubts which mode to choose. Additionally depth filtering can be Disabled. But this option is not recommended as the resulting dense cloud could be extremely noisy. Building mesh Check the reconstruction volume bounding box. If the model has already been referenced, the bounding box will be properly positioned automatically. Otherwise, it is important to control its position manually.

To adjust the bounding box manually, use the Resize Region and Rotate Region toolbar buttons. Rotate the bounding box and then drag corners of the box to the desired positions – only part of the scene inside the bounding box will be reconstructed. If the Height field reconstruction method is to be applied, it is important to control the position of the red side of the bounding box: it defines reconstruction plane.

In this case make sure that the bounding box is correctly oriented. In the Build Mesh dialog box select the desired reconstruction parameters. Reconstruction parameters PhotoScan supports several reconstruction methods and settings, which help to produce optimal reconstructions for a given data set. Surface type Arbitrary surface type can be used for modeling of any kind of object. It should be selected for closed objects, such as statues, buildings, etc. It doesn’t make any assumptions on the type of the object being modeled, which comes at a cost of higher memory consumption.

Height field surface type is optimized for modeling of planar surfaces, such as terrains or bas-reliefs. It should be selected for aerial photography processing as it requires lower amount of memory and allows for larger data sets processing.

Source data Specifies the source for the mesh generation procedure. Sparse cloud can be used for fast 3D model generation based solely on the sparse point cloud. Dense cloud setting will result in longer processing time but will generate high quality output based on the previously reconstructed dense point cloud. Polygon count Specifies the maximum number of polygons in the final mesh.

The preset values (High, Medium, Low) present the optimal number of polygons for a mesh of a corresponding level of detail. It is still possible for a user to indicate the target number of polygons in the final mesh according to their choice.

It could be done through the Custom value of the Polygon count parameter. Please note that while too small a number of polygons is likely to result in too rough a mesh, too large a custom number (over 10 million polygons) is likely to cause model visualization problems in external software. Interpolation If interpolation mode is Disabled it leads to accurate reconstruction results since only areas corresponding to dense point cloud points are reconstructed.

Manual hole filling is usually required at the post processing step. With Enabled (default) interpolation mode PhotoScan will interpolate some surface areas within a circle of a certain radius around every dense cloud point.

As a result some holes can be automatically covered. Yet some holes can still be present on the model and are to be filled at the post processing step. In Extrapolated mode the program generates holeless model with extrapolated geometry.

Large areas of extra geometry might be generated with this method, but they could be easily removed later using selection and cropping tools. Point classes Specifies the classes of the dense point cloud to be used for mesh generation.

Preliminary Classifying dense cloud points procedure should be performed for this option of mesh generation to be active. Note PhotoScan tends to produce 3D models with excessive geometry resolution, so it is recommended to perform mesh decimation after geometry computation. More information on mesh decimation and other 3D model geometry editing tools is given in the Editing model geometry section.

Select the desired texture generation parameters in the Build Texture dialog box. Texture mapping modes The texture mapping mode determines how the object texture will be packed in the texture atlas. Proper texture mapping mode selection helps to obtain optimal texture packing and, consequently, better visual quality of the final model. Generic The default mode is the Generic mapping mode; it allows to parametrize texture atlas for arbitrary geometry.

No assumptions regarding the type of the scene to be processed are made; the program tries to create as uniform a texture as possible.

Adaptive orthophoto In the Adaptive orthophoto mapping mode the object surface is split into the flat part and vertical regions. The flat part of the surface is textured using the orthographic projection, while vertical regions are textured separately to maintain accurate texture representation in such regions.

When in the Adaptive orthophoto mapping mode, program tends to produce more compact texture representation for nearly planar scenes, while maintaining good texture quality for vertical surfaces, such as walls of the buildings. Orthophoto In the Orthophoto mapping mode the whole object surface is textured in the orthographic projection.

The Orthophoto mapping mode produces even more compact texture representation than the Adaptive orthophoto mode at the expense of texture quality in vertical regions. Spherical Spherical mapping mode is appropriate only to a certain class of objects that have a ball-like form.

It allows for continuous texture atlas being exported for this type of objects, so that it is much easier to edit it later. When generating texture in Spherical mapping mode it is crucial to set the Bounding box properly. The whole model should be within the Bounding box. The red side of the Bounding box should be under the model; it defines the axis of the spherical projection. The marks on the front side determine the 0 meridian. Single photo The Single photo mapping mode allows to generate texture from a single photo.

The photo to be used for texturing can be selected from ‚Texture from‘ list. Keep uv The Keep uv mapping mode generates texture atlas using current texture parametrization. It can be used to rebuild texture atlas using different resolution or to generate the atlas for the model parametrized in the external software. Texture generation parameters The following parameters control various aspects of texture atlas generation: Texture from Single photo mapping mode only Specifies the photo to be used for texturing.

Available only in the Single photo mapping mode. Blending mode (not used in Single photo mode) Selects the way how pixel values from different photos will be combined in the final texture. Mosaic – implies a two-step approach: it blends the low frequency component for overlapping images to avoid the seamline problem (a weighted average, the weight being dependent on a number of parameters including proximity of the pixel in question to the center of the image), while the high frequency component, which is in charge of picture details, is taken from a single image – the one that presents good resolution for the area of interest while the camera view is almost along the normal to the reconstructed surface in that point.

Average – uses the weighted average value of all pixels from individual photos, the weight being dependent on the same parameters that are considered for the high frequency component in Mosaic mode.

Max Intensity – the photo which has maximum intensity of the corresponding pixel is selected. Min Intensity – the photo which has minimum intensity of the corresponding pixel is selected. Disabled – the photo to take the color value for the pixel from is chosen like the one for the high frequency component in Mosaic mode. Exporting texture to several files allows to achieve greater resolution of the final model texture, while export of high resolution texture to a single file can fail due to RAM limitations.
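The weighted-average idea behind the Average mode (and the low frequency step of Mosaic) can be sketched as follows; the weights here are illustrative inputs, whereas PhotoScan derives them internally from parameters such as pixel proximity to the image center:

```python
def blend_average(samples):
    """Blend pixel values from several photos by weighted average.

    samples: list of (pixel_value, weight) pairs, one per photo that
    sees the surface point. Returns the blended pixel value.
    """
    total_weight = sum(w for _, w in samples)
    return sum(value * w for value, w in samples) / total_weight
```

For instance, a pixel seen near the center of one photo (high weight) dominates the same pixel seen near the edge of another.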

Enable color correction The feature is useful for processing of data sets with extreme brightness variation. However, please note that color correction process takes up quite a long time, so it is recommended to enable the setting only for the data sets that proved to present results of poor quality. Improving texture quality To improve resulting texture quality it may be reasonable to exclude poorly focused images from processing at this step.

PhotoScan suggests automatic image quality estimation feature. PhotoScan estimates image quality as a relative sharpness of the photo with respect to other images in the data set. Building tiled model Hierarchical tiles format is a good solution for city scale modeling. It allows for responsive visualisation of large area 3D models in high resolution, a tiled model being opened with Agisoft Viewer – a complementary tool included in PhotoScan installer package.

Tiled model is built based on dense point cloud data. Hierarchical tiles are textured from the source imagery. Check the reconstruction volume bounding box – tiled model will be generated for the area within bounding box only.

To adjust the bounding box use the Resize Region and Rotate Region toolbar buttons. In the Build Tiled Model dialog box select the desired reconstruction parameters. Reconstruction parameters Pixel size (m) Suggested value shows the automatically estimated pixel size based on the effective resolution of the input imagery. It can be set by the user in meters. Tile size Tile size can be set in pixels. For smaller tiles faster visualisation should be expected.
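The suggested Pixel size reflects the ground footprint of a single image pixel. The standard photogrammetric estimate of that footprint (ground sample distance) can be sketched as below, assuming near-nadir imagery; the variable names are ours:

```python
def ground_sample_distance(pixel_pitch_mm, focal_length_mm, altitude_m):
    """Estimate the ground footprint of one image pixel, in meters.

    pixel_pitch_mm: physical size of a sensor pixel (mm),
    focal_length_mm: lens focal length (mm),
    altitude_m: flying height above ground (m).
    Standard relation: GSD = pixel_pitch * altitude / focal_length.
    """
    return pixel_pitch_mm * altitude_m / focal_length_mm
```

For example, a 4 um pixel pitch, 24 mm lens and 120 m altitude give a footprint of about 2 cm per pixel.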

Building digital elevation model PhotoScan allows to generate and visualize a digital elevation model DEM. A DEM represents a surface model as a regular grid of height values. DEM can be rasterized from a dense point cloud, a sparse point cloud or a mesh.

Most accurate results are calculated based on dense point cloud data. PhotoScan enables to perform DEM-based point, distance, area, volume measurements as well as generate cross-sections for a part of the scene selected by the user. Additionally, contour lines can be calculated for the model and depicted either over DEM or Orthomosaic in Ortho view within PhotoScan environment. More information on measurement functionality can be found in Performing measurements on DEM section. Note Build DEM procedure can be performed only for projects saved in PSX format. DEM can be calculated for referenced models only. So make sure that you have set a coordinate system for your model before going to build DEM operation. For guidance please refer to the Setting coordinate system section. DEM is calculated for the part of the model within the bounding box.
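To make the idea of rasterizing a point cloud into a regular height grid concrete, here is a deliberately simplified sketch (mean height per cell); it illustrates the concept only and is not PhotoScan's actual algorithm:

```python
def rasterize_dem(points, cell_size):
    """Rasterize (x, y, z) points into a regular grid of height values.

    Each grid cell stores the mean height of the points falling into it.
    Returns a dict mapping (col, row) cell indices to mean elevation.
    """
    sums, counts = {}, {}
    for x, y, z in points:
        cell = (int(x // cell_size), int(y // cell_size))
        sums[cell] = sums.get(cell, 0.0) + z
        counts[cell] = counts.get(cell, 0) + 1
    return {cell: sums[cell] / counts[cell] for cell in sums}
```

Cells that receive no points would be the "holes" that the interpolation modes described below deal with.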

Preliminary elevation data results can be generated from a sparse point cloud, avoiding Build Dense Cloud step for time limitation reasons. With Enabled (default) interpolation mode PhotoScan will calculate DEM for all areas of the scene that are visible on at least one image.

Enabled (default) setting is recommended for DEM generation. In Extrapolated mode the program generates a holeless model with some elevation data being extrapolated. Point classes The parameter allows to select a point class (classes) that will be used for DEM calculation.

To generate digital terrain model (DTM), it is necessary to classify dense cloud points first in order to divide them in at least two classes: ground points and the rest. Please refer to the Classifying dense cloud points section. Indicate coordinates of the bottom left and top right corners of the region to be exported in the left and right columns of the textboxes respectively.

Error m – distance between the input source and estimated positions of the marker. Error pix – root mean square reprojection error for the marker calculated over all photos where marker is visible. If the total reprojection error for some marker seems to be too large, it is recommended to inspect reprojection errors for the marker on individual photos. The information is available with Show Info command from the marker context menu on the Reference pane.
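The pixel error described above is the standard root mean square of per-photo reprojection residuals, which can be written out as a short sketch (the residual values are illustrative):

```python
import math

def rms_reprojection_error(residuals_px):
    """Root mean square of marker reprojection residuals.

    residuals_px: one value per photo where the marker is visible — the
    distance (in pixels) between the marker's measured position and its
    reprojected position on that photo.
    """
    return math.sqrt(sum(r * r for r in residuals_px) / len(residuals_px))
```

A single large residual inflates the RMS noticeably, which is why inspecting per-photo errors via Show Info helps locate a badly placed projection.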

Moreover, automatic CT detection and marker placement is more precise than manual marker placement. PhotoScan supports three types of circular CTs: 12 bit, 16 bit and 20 bit.

While the 12 bit pattern is considered to be decoded more precisely, the 16 bit and 20 bit patterns allow a greater number of CTs to be used within the same project. To be detected successfully, CTs must occupy a significant number of pixels in the original photos. This leads to a natural limitation of CT use: while they generally prove useful in close-range imagery projects, aerial photography projects would require impractically large CTs to be placed on the ground for them to be detected correctly.

Coded targets in workflow
Sets of all patterns of CTs supported by PhotoScan can be generated by the program itself. To create a printable PDF with coded targets: 1. Select Print Markers... Once generated, the pattern set can be printed and the CTs can be placed over the scene to be shot and reconstructed.

When the images with CTs seen on them are uploaded to the program, PhotoScan can detect and match the CTs automatically. To detect coded targets on source images: 1. Select Detect Markers... CTs generated with PhotoScan contain an even number of sectors.

However, previous versions of PhotoScan had no such restriction. Thus, if the project to be processed contains CTs generated by a previous version of PhotoScan, the parity check must be disabled for the detector to work.

Chapter 5. Measurements

Performing measurements on model
PhotoScan supports measuring distances on the model, as well as the surface area and volume of the reconstructed 3D model.

All the instructions in this section apply to working in the Model view of the program window, both for analysis of dense point cloud and of mesh data. When working in Model view, all measurements are performed in 3D space, unlike measurements in Ortho view, which are planar.

Distance measurement PhotoScan enables measurements of distances between the points of the reconstructed 3D scene.

The model coordinate system must be initialized before distance measurements can be performed. Alternatively, the model can be scaled based on known distance (scale bar) information to become suitable for measurements. For instructions on setting the coordinate system please refer to the Setting coordinate system section of the manual. The scale bar concept is described in the Optimization section. To measure distance: 1.

Select the Ruler instrument from the Model view toolbar and click on the first point. Upon the second click on the model, the distance between the indicated points will be shown directly in the Model view. To complete the measurement and to proceed to a new one, press the Escape key.

The result of the measurement will be shown on the Console pane. Shape drawing is enabled in Model view as well. See the Shapes section of the manual for information on shape drawing.

The Measure... command, available from the context menu of a selected shape, shows the coordinates of the vertices as well as the perimeter of the shape. To measure several distances between pairs of points and automatically keep the resulting data, markers can be used.
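For a referenced (or scaled) project, the distance between two points reduces to the Euclidean distance between their estimated 3D coordinates. A sketch, with hypothetical marker coordinates in a local metric system:

```python
import math

def distance_3d(a, b):
    """Euclidean distance between two 3D points (e.g. estimated marker positions)."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Hypothetical estimated marker coordinates in a local metric system
marker_1 = (10.0, 4.0, 1.5)
marker_2 = (13.0, 8.0, 1.5)
print(distance_3d(marker_1, marker_2))  # 5.0
```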

To measure the distance between two markers: 1. Place the markers in the scene at the targeted locations. To measure the distance between cameras: 1. Switch to the estimated values mode using the View Estimated button on the Reference pane toolbar. The estimated distance for the newly created scale bar equals the distance that should have been measured.

Surface area and volume measurement
Surface area or volume measurements of the reconstructed 3D model can be performed only after the scale or coordinate system of the scene is defined.

To measure surface area and volume: 1. Select Measure Area and Volume... The surface area and volume of the whole model will be displayed in the Measure Area and Volume dialog box. Surface area is measured in square meters, while mesh volume is measured in cubic meters. Volume measurement can be performed only for models with closed geometry.
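PhotoScan's internal volume routine is not documented here, but the standard way to measure the volume of a closed triangular mesh is to sum the signed volumes of the tetrahedra formed by each face and the origin (divergence theorem) — which is also why the mesh must be closed. A self-contained sketch, illustrated with a unit tetrahedron:

```python
def mesh_volume(vertices, faces):
    """Volume of a closed triangular mesh via signed tetrahedron volumes.

    Each face contributes the signed volume of the tetrahedron it forms
    with the origin; for a consistently oriented closed surface the
    contributions of the space outside the object cancel out.
    """
    def cross(u, v):
        return (u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0])

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    total = 0.0
    for i, j, k in faces:
        total += dot(vertices[i], cross(vertices[j], vertices[k]))
    return abs(total) / 6.0

# Unit tetrahedron with consistently outward-oriented faces: volume 1/6
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 3, 2), (0, 1, 3), (1, 2, 3)]
print(mesh_volume(verts, tris))  # ~0.1667
```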

If there are any holes in the model surface, PhotoScan will report zero volume. Existing holes in the mesh surface can be filled in before performing volume measurements using the Close Holes... tool.

Distance measurement
To measure distance with the Ruler: 1.

Select the Ruler instrument from the Ortho view toolbar. Upon the second click on the DEM, the distance between the indicated points will be shown directly in the Ortho view. To measure distance with shapes: 1. Connect the points of interest with a polyline using the Draw Polyline tool from the Ortho view toolbar. 2. Right-click on the polyline and select Measure... 3. In the Measure Shape dialog inspect the results.

The perimeter value equals the distance that should have been measured. In addition to the polyline length (see the perimeter value in the Measure Shape dialog), the coordinates of the vertices of the polyline are shown on the Planar tab of the Measure Shape dialog. To select a polyline, double-click on it. A selected polyline is coloured red.

Surface area and volume measurement
To measure area and volume: 1. Right-click on the polygon and select Measure...

Cross sections and contour lines
PhotoScan can calculate cross sections, using shapes to indicate the plane(s) for the cut(s), the cut being made with a plane parallel to the Z axis.

To calculate contour lines: 1. Select Generate Contours... 2. Set values for the Minimal altitude and Maximal altitude parameters, as well as the Interval for the contours. All values should be indicated in meters. When the procedure is finished, a contour lines label will be added to the project file structure shown on the Workspace pane.
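The Minimal altitude, Maximal altitude and Interval parameters together define the set of elevations at which contours are cut. A small sketch of how those levels can be enumerated (the altitude values are illustrative):

```python
def contour_levels(min_alt, max_alt, interval):
    """Elevation values (meters) at which contour lines are drawn,
    from the minimal altitude up to the maximal one at a fixed interval."""
    levels = []
    level = min_alt
    while level <= max_alt + 1e-9:   # small epsilon guards float drift
        levels.append(round(level, 6))
        level += interval
    return levels

print(contour_levels(100.0, 102.0, 0.5))  # [100.0, 100.5, 101.0, 101.5, 102.0]
```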

Contour lines can be shown over the DEM or orthomosaic on the Ortho tab of the program window. Use the Show contour lines tool from the Ortho tab toolbar to switch the function on and off.

Contour lines can be deleted using the Remove Contours command from the contour lines label context menu on the Workspace pane. To calculate a vegetation index: 1. Open the orthomosaic in the Ortho tab by double-clicking on the orthomosaic label on the Workspace pane.

Input an index expression using the keyboard and the operator buttons of the Raster Calculator, if necessary. Once the operation is completed, the result will be shown in the Ortho view, with index values visualised in colours according to the palette set in the Raster Calculator dialog.

The palette defines the colour with which each index value is shown. PhotoScan offers several standard palette presets on the Palette tab of the Raster Calculator dialog. For each new line added to the palette, a certain index value should be typed in.

Double-click on the newly added line to type the value in. A customised palette can be saved for future projects using the Export Palette button on the Palette tab of the Raster Calculator dialog.
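A typical index expression is NDVI, (NIR - Red) / (NIR + Red), computed per pixel and then mapped to a colour through the palette. The band values and palette stops below are made up; PhotoScan's own band names depend on the loaded imagery:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    if nir + red == 0:
        return 0.0  # guard against division by zero (no-data pixel)
    return (nir - red) / (nir + red)

def palette_colour(value, palette):
    """Pick the colour of the highest palette stop not exceeding the value.

    palette: list of (index_value, colour) stops sorted ascending,
    mimicking the Raster Calculator palette tab (stops are illustrative).
    """
    colour = palette[0][1]
    for stop, c in palette:
        if value >= stop:
            colour = c
    return colour

palette = [(-1.0, "blue"), (0.0, "brown"), (0.3, "yellow"), (0.6, "green")]
v = ndvi(nir=0.8, red=0.2)   # 0.6 -> dense vegetation
print(v, palette_colour(v, palette))
```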

To calculate contour lines based on vegetation index data: 1. Select Generate Contours... The contour lines will be shown over the index data on the Ortho tab.

Masks are used in PhotoScan to specify areas on the photos which could otherwise confuse the program or lead to incorrect reconstruction results. Masks can be applied at the following stages of processing:

Alignment of the photos
Masked areas can be excluded during feature point detection. Thus, the objects on the masked parts of the photos are not taken into account while estimating camera positions. This is important in setups where the object of interest is not static with respect to the scene, such as when using a turntable to capture the photos. Masking may also be useful when the object of interest occupies only a small part of the photo.

In this case a small number of useful matches can be mistakenly filtered out as noise among a much greater number of matches between background objects.

Building dense point cloud
While building the dense point cloud, masked areas are not used in the depth maps computation process. Setting masks for such background areas helps avoid this problem and increases the precision and quality of geometry reconstruction.

Building texture atlas
During texture atlas generation, masked areas on the photos are not used for texturing. Masking areas on the photos that are occluded by outliers or obstacles helps to prevent the "ghosting" effect on the resulting texture atlas.

Loading masks
Masks can be loaded from external sources, as well as generated automatically from background images if such data is available. PhotoScan supports loading masks from the following sources:

When generating masks from separate or background images, the folder selection dialog will appear. Browse to the folder containing the corresponding images and select it.

Import masks for
Specifies whether masks should be imported for the currently opened photo, the active chunk or the entire workspace. Entire workspace – load masks for all chunks in the project.

Mask file names (not used in From Alpha mode)
Specifies the file name template used to generate mask file names. This template can contain special tokens that will be substituted by the corresponding data for each photo being processed.

The following tokens are supported:

Tolerance (From Background method only)
Specifies the tolerance threshold used for background differencing. The tolerance value should be set according to the color separation between foreground and background pixels. For larger separation, higher tolerance values can be used.

Editing masks
Modification of the current mask is performed by adding or subtracting selections. A selection is created with one of the supported selection tools and is not incorporated in the current mask until it is merged with the mask using the Add Selection or Subtract Selection operations.
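The background differencing described above can be sketched as follows; the grayscale values are illustrative, and a real implementation would operate per colour channel:

```python
def background_mask(photo, background, tolerance):
    """Per-pixel mask by background differencing: a pixel is masked
    (treated as background) when its value differs from the background
    image by no more than the tolerance threshold.

    photo, background: 2D lists of grayscale values; returns True where masked.
    """
    return [[abs(p - b) <= tolerance
             for p, b in zip(prow, brow)]
            for prow, brow in zip(photo, background)]

# Tiny 2x2 example: left column matches the background, right column is the object
photo      = [[10, 200], [12, 205]]
background = [[11, 14], [13, 12]]
mask = background_mask(photo, background, tolerance=5)
print(mask)  # [[True, False], [True, False]]
```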

To edit the mask: 1. Open the photo to be masked; the photo will be opened in the main window. The existing mask will be displayed as a shaded region on the photo. 2. Click on the Add Selection toolbar button to add the current selection to the mask, or Subtract Selection to subtract the selection from the mask. The Invert Selection button allows inverting the current selection prior to adding or subtracting it from the mask.

Intelligent paint tool
The Intelligent Paint tool is used to "paint" a selection with the mouse, continuously adding small image regions bounded by object boundaries.

Magic wand tool
The Magic Wand tool is used to select uniform areas of the image. To make a selection with the Magic Wand tool, click inside the region to be selected. The range of pixel colors selected by the Magic Wand is controlled by the tolerance value. At lower tolerance values the tool selects fewer colors, those most similar to the pixel you click with the Magic Wand tool; a higher value broadens the range of colors selected. A mask can be inverted using the Invert Mask command from the Photo menu.
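Conceptually, the Magic Wand performs a flood fill from the clicked pixel, limited by the tolerance. A sketch on a tiny grayscale image (real images have colour channels, and the 4-connectivity rule here is an assumption):

```python
from collections import deque

def magic_wand(image, seed, tolerance):
    """Flood-fill selection of pixels 4-connected to the seed whose
    values are within the tolerance of the seed pixel's value."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    selected = {(sy, sx)}
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w
                    and (ny, nx) not in selected
                    and abs(image[ny][nx] - base) <= tolerance):
                selected.add((ny, nx))
                queue.append((ny, nx))
    return selected

# Dark region (values ~10) next to a bright one (values ~90)
img = [[10, 12, 90],
       [11, 95, 92],
       [13, 14, 91]]
print(sorted(magic_wand(img, seed=(0, 0), tolerance=5)))
```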

The Invert Mask command is active in Photo view only. Alternatively, you can invert masks either for selected cameras or for all cameras in a chunk using the Invert Masks... command. Masks are generated individually for each image. If some object should be masked out, it should be masked out on all photos where that object appears. Image with alpha channel – generates color images from the source photos combined with mask data in the alpha channel.

Mask file names
Specifies the file name template used to generate mask file names. The Mask file names parameter will not be used in this case.

Editing point cloud
The following point cloud editing tools are available in PhotoScan:

Reprojection error
A high reprojection error usually indicates poor localization accuracy of the corresponding point projections at the point matching step.

It is also typical for false matches. Removing such points can improve accuracy of the subsequent optimization step.

Reconstruction uncertainty High reconstruction uncertainty is typical for points, reconstructed from nearby photos with small baseline. Such points can noticeably deviate from the object surface, introducing noise in the point cloud.

While removal of such points should not affect the accuracy of optimization, it may be useful to remove them before building geometry in Point Cloud mode or for better visual appearance of the point cloud.

Image count
PhotoScan reconstructs all the points that are visible on at least two photos. However, points that are visible on only two photos are likely to be located with poor accuracy. Image count filtering enables removal of such unreliable points from the cloud.

Projection accuracy
This criterion allows filtering out points whose projections were relatively poorly localised due to their larger size. To remove points based on a specified criterion: 1.

Switch to Point Cloud view mode using Point Cloud toolbar button. In the Gradual Selection dialog box specify the criterion to be used for filtering. Adjust the threshold level using the slider. You can observe how the selection changes while dragging the slider. Click OK button to finalize the selection.
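Conceptually, gradual selection thresholds a per-point criterion and marks the points exceeding it as candidates for deletion. A sketch with made-up per-point values:

```python
def gradual_selection(points, criterion, threshold):
    """Indices of points whose criterion value exceeds the threshold,
    mimicking Gradual Selection: these points become selection candidates.

    points: list of dicts with per-point criterion values (made-up data).
    """
    return [i for i, p in enumerate(points) if p[criterion] > threshold]

cloud = [{"reprojection_error": 0.4, "image_count": 5},
         {"reprojection_error": 1.8, "image_count": 2},
         {"reprojection_error": 0.7, "image_count": 3}]

to_delete = gradual_selection(cloud, "reprojection_error", 1.0)
print(to_delete)  # [1]

# Equivalent of Delete Selection: keep only the unselected points
cloud = [p for i, p in enumerate(cloud) if i not in to_delete]
```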

To remove selected points use Delete Selection command from the Edit menu or click Delete Selection toolbar button or simply press Del button on the keyboard. Filtering points based on applied masks To remove points based on applied masks 1. Switch to Dense Cloud view mode using Dense Cloud toolbar button. In the Select Masked Points dialog box indicate the photos whose masks to be taken into account. Adjust the edge softness level using the slider.

Click the OK button to run the selection procedure. Choose Select Points by Color... In the Select Points by Color dialog box specify the color to be used as the criterion. Adjust the tolerance level using the slider.

Tie point per photo limit
The Tie point limit parameter can be adjusted before the Align Photos procedure.

The number indicates the upper limit of matching points for every image. A zero value disables tie point filtering. The number of tie points can also be reduced after the alignment process with the Tie Points – Thin Point Cloud command available from the Tools menu.

To add new points to the current selection hold the Ctrl key during selection of additional points. To remove some points from the current selection hold the Shift key during selection of points to be removed. To delete selected points click the Delete Selection toolbar button or select Delete Selection command from the Edit menu. To crop selection to the selected points click the Crop Selection toolbar button or select Crop Selection command from the Edit menu.

To classify ground points automatically: 1. Select Classify Ground Points... 2. In the Classify Ground Points dialog box select the source point data for the classification procedure. 3. Click the OK button to run the classification procedure. The automatic classification procedure consists of two steps. At the first step the dense cloud is divided into cells of a certain size.

In each cell the lowest point is detected. Triangulation of these points gives the first approximation of the terrain model. At the second step a new point is added to the ground class provided that it satisfies two conditions: it lies within a certain distance of the terrain model, and the angle between the terrain model and the line connecting this new point with a point from the ground class is less than a certain angle.

The second step is repeated while there are still points to be checked.

Max angle (deg)
Determines one of the conditions to be checked while testing a point as a ground one, i.e. it limits the angle between the terrain model and the line connecting the candidate point with a point from the ground class.

For nearly flat terrain it is recommended to use the default value of 15 degrees for this parameter. It is reasonable to set a higher value if the terrain contains steep slopes.

Max distance (m)
Determines one of the conditions to be checked while testing a point as a ground one, i.e. it limits the distance between the candidate point and the terrain model.

In fact, this parameter determines the assumption for the maximum variation of the ground elevation at a time.

Cell size (m)
Determines the size of the cells the point cloud is divided into as a preparatory step of the ground point classification procedure. Cell size should be indicated with respect to the size of the largest area within the scene that does not contain any ground points, e.g. the footprint of the largest building or densest patch of forest.
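The first classification step described above (divide the cloud into XY cells, take the lowest point of each) can be sketched as follows; the coordinates and cell size are illustrative:

```python
from math import floor

def lowest_points_per_cell(points, cell_size):
    """First step of ground point classification: divide the cloud into
    square cells in the XY plane and pick the lowest point (minimum Z)
    in each cell. Triangulating these points gives the first
    approximation of the terrain model."""
    cells = {}
    for x, y, z in points:
        key = (floor(x / cell_size), floor(y / cell_size))
        if key not in cells or z < cells[key][2]:
            cells[key] = (x, y, z)
    return list(cells.values())

# Hypothetical points: ground (z < 1 m) plus one tree return at z = 8 m
pts = [(0.5, 0.5, 0.2), (1.5, 0.5, 8.0), (1.7, 0.4, 0.4), (5.2, 5.1, 0.9)]
print(sorted(lowest_points_per_cell(pts, cell_size=2.0)))
```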

Manual classification of dense cloud points
PhotoScan allows associating all the points within the dense cloud with a certain standard class (see LIDAR data classification). This makes it possible to diversify export of the processing results with respect to different types of objects within the scene, e.g. DTM for ground, mesh for buildings and point cloud for vegetation.

To assign a class to a group of points: 1. Switch to Dense Cloud view mode using the Dense Cloud toolbar button. Dense point cloud classification can be reset with the Reset Classification command from the Tools – Dense Cloud menu.

Editing model geometry
The following mesh editing tools are available in PhotoScan:

More complex editing can be done in external 3D editing tools. PhotoScan allows exporting the mesh and then importing it back for this purpose.

Decimation tool
Decimation is a tool used to decrease the geometric resolution of the model by replacing a high resolution mesh with a lower resolution one, which is still capable of representing the object geometry with high accuracy. PhotoScan tends to produce 3D models with excessive geometry resolution, so mesh decimation is usually a desirable step after geometry computation. Highly detailed models may contain hundreds of thousands of polygons. While it is acceptable to work with such complex models in 3D editor tools, in most conventional tools like Adobe Reader or Google Earth the high complexity of 3D models may noticeably decrease application performance.

High complexity also results in longer times required to build texture and to export the model in PDF file format. In some cases it is desirable to keep as much geometric detail as possible, e.g. for scientific or archival purposes. Note that you will have to rebuild the texture atlas after decimation is complete.

Close Holes tool
The Close Holes tool provides the possibility to repair your model if the reconstruction procedure resulted in a mesh with several holes, due to insufficient image overlap for example. Some tasks require a continuous surface regardless of missing data. It is necessary to generate a closed model, for instance, to fulfill a volume measurement task with PhotoScan.

The Close Holes tool closes void areas on the model, substituting extrapolated data for photogrammetric reconstruction. It is possible to control the acceptable level of accuracy by indicating the maximum size of a hole to be covered with extrapolated data.


In the Build Orthomosaic dialog box set the Coordinate system for the orthomosaic referencing. Select the type of surface data for the orthorectified imagery to be projected onto.

To generate an orthomosaic in planar projection, preliminary generation of mesh data is required.

Parameters
Surface
Orthomosaic creation based on DEM data is especially efficient for aerial survey data processing scenarios, allowing time savings at the mesh generation step. Alternatively, the mesh surface type allows creating orthomosaics for less common yet quite popular applications, such as orthomosaic generation for building facades or other models that might not be referenced at all.

Blending mode
Mosaic (default) – implements an approach in which the data is divided into several frequency domains which are blended independently. The highest frequency component is blended along the seamline only; each further step away from the seamline results in fewer domains being subject to blending. Average – uses the weighted average value of all pixels from the individual photos.

Disabled – the color value for the pixel is taken from the photo whose camera view is almost along the normal to the reconstructed surface at that point.

Enable color correction
The color correction feature is useful for processing data sets with extreme brightness variation. However, please note that the color correction process takes quite a long time, so it is recommended to enable this setting only for data sets that previously produced results of poor quality.
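The Average blending mode reduces to a weighted mean of the colour samples covering each output pixel. A sketch with illustrative weights (PhotoScan's actual weighting scheme is not documented here):

```python
def average_blend(samples):
    """Weighted average of colour samples for one orthomosaic pixel.

    samples: list of (colour_value, weight) pairs from the individual
    photos covering the pixel (the weights here are illustrative).
    """
    total_w = sum(w for _, w in samples)
    return sum(c * w for c, w in samples) / total_w

# Three photos see the same ground point with different exposure
print(average_blend([(120, 1.0), (130, 2.0), (110, 1.0)]))  # 122.5
```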

Pixel size Default value for pixel size in Export Orthomosaic dialog refers to ground sampling resolution, thus, it is useless to set a smaller value: the number of pixels would increase, but the effective resolution would not. However, if it is meaningful for the purpose, pixel size value can be changed by the user. PhotoScan generates orthomosaic for the whole area, where surface data is available. Bounding box limitations are not applied.

To build orthomosaic for a particular rectangular part of the project use Region section of the Build Orthomosaic dialog. Estimate button allows you to see the coordinates of the bottom left and top right corners of the whole area.

The Estimate button enables control of the total size of the resulting orthomosaic data for the currently selected reconstruction area (all available data by default, or a certain region via the Region parameter) and resolution (Pixel size or Max. dimension parameters).

The information is shown in the Total size (pix) textbox. The saved project includes the mesh and texture if they were built. Note that since PhotoScan tends to generate extra dense point clouds and highly detailed polygonal models, the project saving procedure can take quite a long time.

You can decrease the compression level to speed up the saving process. However, please note that this will result in a larger project file. The compression level setting can be found on the Advanced tab of the Preferences dialog available from the Tools menu. This format enables responsive loading of large data (dense point clouds, meshes, etc.).

You can save the project at the end of any processing stage and return to it later. To restart work simply load the corresponding file into PhotoScan. Project files can also serve as backup files or be used to save different versions of the same model. Project files use relative paths to reference original photos. Thus, when moving or copying the project file to another location do not forget to move or copy photographs with all the folder structure involved as well.

Otherwise, PhotoScan will fail to run any operation requiring source images, although the project file, including the reconstructed model, will be loaded correctly. Alternatively, you can enable the Store absolute image paths option on the Advanced tab of the Preferences dialog available from the Tools menu.

Exporting results
PhotoScan supports export of processing results in various representations: sparse and dense point clouds, camera calibration and camera orientation data, mesh, etc. Orthomosaics and digital elevation models (both DSM and DTM), as well as tiled models, can be generated according to the user's requirements.

Point cloud and camera calibration data can be exported right after photo alignment is completed. All other export options become available after the corresponding processing step.

Point cloud export
To export a sparse or dense point cloud: 1. Select Export Points... 2. Browse to the destination folder, choose the file type, and type in the file name.

Click the Save button. Specify the coordinate system and indicate the export parameters applicable to the selected file type, including the dense cloud classes to be saved. The Split in blocks option in the Export Points dialog can be useful for exporting large projects. It is available for referenced models only. You can indicate the size of the section in the XY plane (in meters) for the point cloud to be divided into respective rectangular blocks.

The total volume of the 3D scene is limited by the Bounding Box. The whole volume will be split into equal blocks starting from the point with minimum x and y values. Note that empty blocks will not be saved. In some cases it may be reasonable to edit the point cloud before exporting it.

To read about point cloud editing refer to the Editing point cloud section of the manual.

Tie points data export
To export matching points: 1. Select Export Matches... 2. In the Export Matches dialog box set the export parameters. The Precision value sets the limit on the number of decimal digits in the tie point coordinates to be saved.

Matching points exported from PhotoScan can be used as a basis for an AT (aerial triangulation) procedure performed in external software. Later on, the estimated camera data can be imported back into PhotoScan using the Import Cameras command from the Tools menu to proceed with the 3D model reconstruction procedure.

Camera calibration and orientation data export
To export camera calibration and camera orientation data, select Export Cameras...

Panorama export
PhotoScan is capable of panorama stitching for images taken from the same camera position (camera station).

To indicate to the software that the loaded images were taken from one camera station, move those photos into a camera group and assign the Camera Station type to it. For information on camera groups refer to the Loading photos section. To export a panorama: 1. Select Export – Export Panorama... 2. Choose the panorama orientation in the file with the help of the navigation buttons to the right of the preview window in the Export Panorama dialog.

Set the export parameters: select the camera groups for which the panorama should be exported and indicate the export file name mask. Additionally, you can set boundaries for the region of the panorama to be exported using the Setup boundaries section of the Export Panorama dialog.

Text boxes in the first line (X) allow indicating the angle limits in the horizontal plane, while the second line (Y) serves for the angle limits in the vertical plane. The Image size option enables control of the size of the exported file. If a model generated with PhotoScan is to be imported into a 3D editor program for inspection or further editing, it might be helpful to use the Shift function while exporting the model. It allows setting a value to be subtracted from the respective coordinate value for every vertex in the mesh.

Essentially, this means translating the origin of the model coordinate system, which may be useful since some 3D editors truncate coordinate values to 8 or so digits, while in some projects the decimals matter for the model positioning task.

It can therefore be recommended to subtract a value equal to the whole part of a certain coordinate value (see the camera coordinate values on the Reference pane) before exporting the model, thus providing a reasonable scale for the model to be processed in a 3D editor program. The texture file should be kept in the same directory as the main file describing the geometry.
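The Shift function amounts to subtracting a constant vector from every vertex before export. A sketch with made-up UTM-like coordinates:

```python
def shift_vertices(vertices, shift):
    """Subtract a constant shift from every vertex, translating the model
    coordinate system origin before export (values are illustrative)."""
    sx, sy, sz = shift
    return [(x - sx, y - sy, z - sz) for x, y, z in vertices]

# Coordinates with large whole parts; shifting keeps only the meaningful decimals
verts = [(431250.37, 4581300.12, 251.80),
         (431251.02, 4581301.55, 252.10)]
shifted = shift_vertices(verts, (431250.0, 4581300.0, 0.0))
print(shifted)
```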

Thanks to the hierarchical tiles format, it allows large models to be visualised responsively.

Orthomosaic export
To export an orthomosaic: 1. Select Export Orthomosaic... 2. In the Export Orthomosaic dialog box specify the coordinate system for the orthomosaic to be saved in.

This information is already included in the GeoTIFF file; however, you can duplicate it if needed. If you need to export the orthomosaic in JPEG or PNG file format and would like to have georeferencing data, this information could be useful.

Alternatively, you can indicate the region to be exported using polygon drawing option in the Ortho view tab of the program window. For instructions on polygon drawing refer to Shapes section of the manual. Once the polygon is drawn, right-click on the polygon and set it as a boundary of the region to be exported using Set Boundary Type option from the context menu.

Default value for pixel size in Export Orthomosaic dialog refers to ground sampling resolution, thus, it is useless to set a smaller value: the number of pixels would increase, but the effective resolution would not.

If you have chosen to export the orthomosaic with a certain pixel size (rather than using the Max. dimension option), the file may additionally be saved without compression (None value of the compression type parameter). The Total size textbox in the Export Orthomosaic dialog helps to estimate the size of the resulting file. However, it is recommended to make sure that the application you are planning to open the orthomosaic with supports the BigTIFF format.

Alternatively, you can split a large orthomosaic into blocks, with each block fitting the limits of a standard TIFF file. PhotoScan supports direct uploading of orthomosaics to the MapBox platform. To export a multispectral orthomosaic: 1. Vegetation index data can be saved as two types of data: as a grid of floating point index values calculated per pixel of the orthomosaic, or as an orthomosaic in pseudocolors according to a palette set by the user. The None value allows exporting the orthomosaic generated for the data before any index calculation procedure was performed.

The No-data value is used for the points of the grid where the elevation value could not be calculated from the source data. The default value is suggested according to the industry standard; however, it can be changed by the user. See the Orthomosaic export section for details.

Similarly to orthomosaic export, polygons drawn over the DEM on the Ortho tab of the program window can be set as boundaries for DEM export.

Extra products to export
In addition to the main targeted products, PhotoScan allows exporting several other processing results. PhotoScan supports direct uploading of models to the Sketchfab resource and of orthomosaics to the MapBox platform. The processing report export option is available for georeferenced projects only. Tie points – the total number of valid tie points equals the number of points in the sparse cloud.

Reprojection error – root mean square reprojection error averaged over all tie points on all images. Reprojection error is the distance between the point on the image where a reconstructed 3D point can be projected and the original projection of that 3D point detected on the photo and used as a basis for the 3D point reconstruction procedure.

Advanced tab of Preferences dialog available from Tools menu. For projects calculated over network processing time will not be shown. PhotoScan matches images on different scales to improve robustness with blurred or difficult to match images.

The accuracy of tie point projections depends on the scale at which they were located. PhotoScan uses information about scale to weight tie point reprojection errors. In the Reference pane settings dialog tie point accuracy parameter now corresponds to normalized accuracy – i. Tie points detected on other scales will have accuracy proportional to their scales. This helps to obtain more accurate bundle adjustment results.
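This scale-dependent weighting can be sketched as follows. It is an illustration of the stated principle only (accuracy proportional to detection scale); PhotoScan's internal weighting scheme is not published in this form:

```python
def weighted_squared_error(residual_px, scale, base_accuracy_px=1.0):
    """Squared reprojection error in units of tie point accuracy.

    A projection found at detection scale `scale` is assumed to be
    `scale` times less accurate than one found at full resolution,
    so its residual is down-weighted accordingly.
    """
    sigma = base_accuracy_px * scale  # accuracy proportional to scale
    return (residual_px / sigma) ** 2

# A 1 px residual at full resolution counts the same as a 2 px residual
# for a point detected on an image downscaled by a factor of 2.
print(weighted_squared_error(1.0, 1.0) == weighted_squared_error(2.0, 2.0))
```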

On the processing parameters page of the report, as well as in the chunk information dialog, two reprojection errors are provided: the reprojection error in the units of tie point scale (this is the quantity that is minimized during bundle adjustment), and the reprojection error in pixels for convenience.

The mean key point size value is a mean tie point scale averaged across all projections. Chapter 4. Referencing Camera calibration Calibration groups While carrying out photo alignment PhotoScan estimates both internal and external camera orientation parameters, including nonlinear radial distortions.

For the estimation to be successful it is crucial to apply the estimation procedure separately to photos taken with different cameras. All the actions described below could and should be applied or not applied to each calibration group individually. To create a new calibration group 1. Select Camera Calibration… command from the Tools menu. A new group will be created and depicted on the left-hand part of the Camera Calibration dialog box.

To move photos from one group to another 1. In the Camera Calibration dialog box choose the source group on the left-hand part of the dialog. Select photos to be moved and drag them to the target group on the left-hand part of the Camera Calibration dialog box. To place each photo into a separate group you can use Split Groups command available at the right button click on a calibration group name in the left-hand part of the Camera Calibration dialog.

Camera types PhotoScan supports four major types of camera: frame camera, fisheye camera, spherical camera and cylindrical camera. Camera type can be set in Camera Calibration dialog box available from Tools menu.

No additional information is required except the image in equirectangular representation. Spherical camera Cylindrical projection. In case the source data within a calibration group is a set of panoramic images stitched according to cylindrical model, camera type setting will be enough for the program to calculate camera orientation parameters.

No additional information is required. In case source images lack EXIF data or the EXIF data is insufficient to calculate focal length in pixels, PhotoScan will assume that focal length equals to 50 mm 35 mm film equivalent.

However, if the initial guess values differ significantly from the actual focal length, it is likely to lead to failure of the alignment process. So, if photos do not contain EXIF metadata, it is preferable to specify focal length mm and sensor pixel size mm manually.

It can be done in Camera Calibration dialog box available from Tools menu. Generally, this data is indicated in the camera specification or can be obtained from an online source. To indicate to the program that camera orientation parameters should be estimated based on the focal length and pixel size information, it is necessary to set the Type parameter on the Initial tab to Auto value. Camera calibration parameters. If you have run the estimation procedure and obtained poor results, you can improve them using additional data on calibration parameters.

To specify camera calibration parameters 1. Select calibration group, which needs reestimation of camera orientation parameters on the left side of the Camera Calibration dialog box. Initial calibration data will be adjusted during the Align Photos processing step. Note that residuals are averaged per cell of an image and then across all the images in a camera group.

Calibration parameters list fx, fy Focal length in x- and y-dimensions measured in pixels. Setting coordinate system Many applications require data with a defined coordinate system.

Setting the coordinate system also provides a correct scaling of the model allowing for surface area and volume measurements and makes model loading in geoviewers and geoinformation software much easier. Some functionality like digital elevation model export is available only after the coordinate system is defined. PhotoScan supports setting a coordinate system based on either ground control point marker coordinates or camera coordinates.

In both cases the coordinates are specified in the Reference pane and can be either loaded from the external file or typed in manually. Setting coordinate system based on recorded camera positions is often used in aerial photography processing.

However, it may also be useful for processing photos captured with GPS enabled cameras. Placing markers is not required if recorded camera coordinates are used to initialize the coordinate system. In the case when ground control points are used to set up the coordinate system the markers should be placed in the corresponding locations of the scene. Using camera positioning data for georeferencing the model is faster since manual marker placement is not required.

On the other hand, ground control point coordinates are usually more accurate than telemetry data, allowing for more precise georeferencing. PhotoScan supports two approaches to marker placement: manual marker placement and guided marker placement. Manual approach implies that the marker projections should be indicated manually on each photo where the marker is visible. Manual marker placement does not require 3D model and can be performed even before photo alignment.

In the guided approach marker projection is specified for a single photo only. PhotoScan automatically projects the corresponding ray onto the model surface and calculates marker projections on the rest of the photos where marker is visible.

Marker projections defined automatically on individual photos can be further refined manually. Reconstructed 3D model surface is required for the guided approach. Guided marker placement usually speeds up the procedure of marker placement significantly and also reduces the chance of incorrect marker placement.

It is recommended in most cases unless there are any specific reasons preventing this operation. To place a marker using guided approach 1. Open a photo where the marker is visible by double clicking on its name. Select Create Marker command from the context menu. New marker will be created and its projections on the other photos will be automatically defined.

While the accuracy of marker placement in the 3D view is usually much lower, it may still be useful for quickly locating the photos observing the specified location on the model. To view the corresponding photos use Filter by Markers command again from the 3D view context menu. If the command is inactive, please make sure that the marker in question is selected on the Reference pane. To place a marker using manual approach 1. To save time on the manual marker placement procedure, PhotoScan offers the guiding lines feature.

When a marker is placed on an aligned photo, PhotoScan highlights lines, which the marker is expected to lie on, on the rest of the aligned photos. The calculated marker positions will be indicated with icon on the corresponding aligned photos in Photo View mode.

Automatically defined marker locations can be later refined manually by dragging their projections on the corresponding photos. To refine marker location 1. Open the photo where the marker is visible by double clicking on the photo’s name. Automatically placed marker will be indicated with icon. Move the marker projection to the desired location by dragging it using left mouse button. Once the marker location is refined by user, the marker icon will change to.

The photos where the marker is placed will be marked with a icon on the Photos pane. To filter photos by marker use Filter by Markers command from the 3D view context menu.

In those cases when there are hesitations about the features depicted on the photo, comparative inspection of two photos can prove to be useful. To open two photos in PhotoScan window simultaneously Move to Other Tab Group command is available from photo tab header context menu. To open two photos simultaneously 1.

In the Photos pane double click on one photo to be opened. The photo will be opened in a new tab of the main program window. Right click on the tab header and choose Move to Other Tab Group command from the context menu. The main program window will be divided into two parts and the photo will be moved to the second part.

Real world coordinates used for referencing the model along with the type of coordinate system used are specified using the Reference pane. The model can be located in either local Euclidean coordinates or in georeferenced coordinates.

For model georeferencing a wide range of various geographic and projected coordinate systems are supported, including widely used WGS84 coordinate system. Besides, almost all coordinate systems from the EPSG registry are supported as well.

To load reference coordinates from a text file 1. Click Import toolbar button on the Reference pane. To open Reference pane use Reference command from the View menu. Browse to the file containing recorded reference coordinates and click Open button. In the Import CSV dialog set the coordinate system if the data presents geographical coordinates. Select the delimiter and indicate the number of the data column for each coordinate.

Indicate columns for the orientation data if present. Information on the accuracy of the source coordinates x, y, z can be loaded with a CSV file as well.

Check Load Accuracy option and indicate the number of the column where the accuracy for the data should be read from. The same figure will be used as the accuracy value for all three coordinates. To assign reference coordinates manually 1.

Additionally, it is possible to indicate accuracy data for the coordinates. Select Set Accuracy… command. It is possible to select several cameras and apply the Set Accuracy… command to all of them at once. Alternatively, you can select the Accuracy (m) or Accuracy (deg) text box for a certain camera on the Reference pane and press the F2 button on the keyboard to type the data directly onto the Reference pane. The reference coordinates data will be loaded into the Reference pane. After reference coordinates have been assigned PhotoScan automatically estimates coordinates in a local Euclidean system and calculates the referencing errors.
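A reader for delimiter-separated reference files of the kind described above can be sketched in a few lines. This is plain Python, not PhotoScan's own importer; the column layout (a label followed by three coordinates) and the sample values are assumptions for illustration:

```python
import re

def parse_reference_file(text, comment_char="#"):
    """Parse a simple reference coordinates file.

    Assumes one entry per line: a camera/marker label followed by three
    coordinates, separated by tabs, spaces, semicolons or commas.
    Lines starting with the comment character are skipped.
    """
    records = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(comment_char):
            continue
        fields = [f for f in re.split(r"[\t ;,]+", line) if f]
        label, x, y, z = fields[0], *map(float, fields[1:4])
        records[label] = (x, y, z)
    return records

# Hypothetical sample file content.
sample = """# label  easting  northing  altitude
IMG_0001.JPG;482345.2;6335763.1;130.5
IMG_0002.JPG;482354.7;6335768.9;131.1
"""
print(parse_reference_file(sample)["IMG_0001.JPG"])  # (482345.2, 6335763.1, 130.5)
```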

The largest error will be highlighted. To set a georeferenced coordinate system 1. Assign reference coordinates using one of the options described above.

In the Reference Settings dialog box select the Coordinate System used to compile reference coordinates data if it has not been set at the previous step. Rotation angles in PhotoScan are defined around the following axes: yaw axis runs from top to bottom, pitch axis runs from left to right wing of the drone, roll axis runs from tail to nose of the drone.

Zero values of the rotation angle triple define the following camera position aboard: camera looks down to the ground, frames are taken in landscape orientation, and horizontal axis of the frame is perpendicular to the central tail-nose axis of the drone. If the camera is fixed in a different position, respective yaw, pitch, roll values should be input in the camera correction section of the Settings dialog.

The senses of the angles are defined according to the right-hand rule. A click on the column name on the Reference pane sorts the markers and photos by the data in the column.
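The yaw/pitch/roll convention above can be illustrated by composing a rotation matrix from the three angles. The sketch below uses the common aerospace axis order R = Rz(yaw)·Ry(pitch)·Rx(roll) under the right-hand rule; the exact order and sign conventions used internally by PhotoScan may differ, so treat this as an illustration only:

```python
import math

def rot_z(a):  # yaw: rotation about the vertical (top-to-bottom) axis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_y(a):  # pitch: rotation about the left-to-right (wing) axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_x(a):  # roll: rotation about the tail-to-nose axis
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def ypr_to_matrix(yaw_deg, pitch_deg, roll_deg):
    """Compose a rotation matrix from yaw, pitch, roll in degrees."""
    y, p, r = (math.radians(v) for v in (yaw_deg, pitch_deg, roll_deg))
    return matmul(rot_z(y), matmul(rot_y(p), rot_x(r)))

# Zero angles (the default camera position described above) give identity.
m = ypr_to_matrix(0, 0, 0)
```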

At this point you can review the errors and decide whether additional refinement of marker locations is required in case of marker based referencing , or if certain reference points should be excluded.

To reset a chunk georeferencing use Reset Transform command from the chunk context menu on the Workspace pane. It should be updated manually using Update toolbar button on the Reference pane. Each reference point is specified in this file on a separate line. Sample reference coordinates file is provided below:.

Individual entries on each line should be separated with a tab, space, semicolon or comma character. All lines starting with the # character are treated as comments. Using different vertical datums. By default PhotoScan requires all the source altitude values for both cameras and markers to be input as values measured above the ellipsoid. However, PhotoScan allows for different geoid models to be utilized as well.

PhotoScan installation package includes only EGM96 geoid model, but additional geoid models can be downloaded from Agisoft’s website if they are required by the coordinate system selected in the Reference pane settings dialog; alternatively, a geoid model can be loaded from a custom PRJ file. Optimization Optimization of camera alignment PhotoScan estimates internal and external camera orientation parameters during photo alignment.

This estimation is performed using image data alone, and there may be some errors in the final estimates. The accuracy of the final estimates depends on many factors, like overlap between the neighboring photos, as well as on the shape of the object surface.

These errors can lead to non-linear deformations of the final model. During georeferencing the model is linearly transformed using 7 parameter similarity transformation 3 parameters for translation, 3 for rotation and 1 for scaling.

Such transformation can compensate only a linear model misalignment. The non-linear component cannot be removed with this approach. This is usually the main reason for georeferencing errors. Possible non-linear deformations of the model can be removed by optimizing the estimated point cloud and camera parameters based on the known reference coordinates.
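The 7-parameter similarity transformation mentioned above can be written as p' = s·R·p + t: one scale factor, a rotation (3 parameters) and a translation vector (3 parameters). A minimal stand-alone sketch with made-up numbers:

```python
def apply_similarity(point, scale, rotation, translation):
    """Apply a 7-parameter similarity transform: p' = scale * R * p + t.

    `rotation` is a 3x3 matrix (3 underlying parameters), `translation`
    a 3-vector, `scale` a single factor -- 7 parameters in total.  This
    is the linear transform georeferencing can compensate; non-linear
    model deformations are outside its reach.
    """
    x, y, z = point
    rotated = [sum(r * c for r, c in zip(row, (x, y, z))) for row in rotation]
    return tuple(scale * v + t for v, t in zip(rotated, translation))

# 90-degree rotation about Z, doubled scale, shifted origin (made-up numbers).
R = [[0, -1, 0],
     [1,  0, 0],
     [0,  0, 1]]
print(apply_similarity((1.0, 0.0, 0.0), 2.0, R, (10.0, 20.0, 30.0)))
# prints (10.0, 22.0, 30.0)
```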

During this optimization PhotoScan adjusts estimated point coordinates and camera parameters, minimizing the sum of reprojection error and reference coordinate misalignment error. To achieve better optimization results it may be useful to edit the sparse point cloud beforehand, deleting obviously mislocated points. Georeferencing accuracy can be improved significantly after optimization. It is recommended to perform optimization if the final model is to be used for any kind of measurements.

Click Optimize toolbar button. In Optimize Camera Alignment dialog box check additional camera parameters to be optimized if needed. Click OK button to start optimization.

You will have to rebuild the model geometry after optimization. Image coordinates accuracy for markers indicates how precisely the markers were placed by the user or adjusted by the user after being automatically placed by the program. Ground altitude parameter is used to make reference preselection mode of alignment procedure work effectively for oblique imagery. See Aligning photos for details. Camera, marker and scale bar accuracy can be set per item, i.e. individually.

Accuracy values can be typed in on the pane per item or for a group of selected items. Generally it is reasonable to run optimization procedure based on markers data only.

This is due to the fact that GCP coordinates are measured with significantly higher accuracy than the GPS data that indicates camera positions. Thus, marker data is sure to give more precise optimization results. Moreover, quite often GCP and camera coordinates are measured in different coordinate systems, which also prevents using both camera and marker data in optimization simultaneously.

The results of the optimization procedure can be evaluated with the help of error information on the Reference pane. In addition, distortion plot can be inspected along with mean residuals visualised per calibration group.

They can prove to be useful when there is no way to locate ground control points all over the scene. Scale bars allow to save field work time, since it is significantly easier to place several scale bars with precisely known length than to measure coordinates of a few markers using special equipment.

In addition, PhotoScan allows to place scale bar instances between cameras, thus making it possible to avoid not only marker but also ruler placement within the scene. Certainly, scale bar based information will not be enough to set a coordinate system; however, it can be successfully used while optimizing the results of photo alignment. It will also be enough to perform measurements in PhotoScan software. See Performing measurements on model.

To add a scale bar 1. Place markers at the start and end points of the bar. For information on marker placement please refer to the Setting coordinate system section of the manual. Select Create Scale Bar command from the Model view context menu. The scale bar will be created and an instance added to the Scale Bar list on the Reference pane. Double click on the Distance (m) box next to the newly created scale bar name and enter the known length of the bar in meters.
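Conceptually, a scale bar defines a scale factor equal to the known bar length divided by the estimated distance between its end points in model coordinates. A stand-alone sketch of that idea (illustrative only; PhotoScan derives the scale during optimization, together with the other parameters):

```python
import math

def scale_factor(marker_a, marker_b, known_length_m):
    """Scale factor that brings the model to real-world units.

    `marker_a`/`marker_b` are the scale bar end points in the model's
    (unscaled) coordinates; `known_length_m` is the measured bar length.
    """
    estimated = math.dist(marker_a, marker_b)
    if estimated == 0:
        raise ValueError("scale bar end points coincide")
    return known_length_m / estimated

# Bar end points 0.5 model units apart, real bar 1 m long -> scale factor 2.
print(scale_factor((0.0, 0.0, 0.0), (0.5, 0.0, 0.0), 1.0))  # prints 2.0
```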

To add a scale bar between cameras 1. Select the two cameras on the Workspace or Reference pane using Ctrl button. Alternatively, the cameras can be selected in the Model view window using selecting tools from the Toolbar. Select Create Scale Bar command from the context menu. To run scale bar based optimization 1. On the Reference pane check all scale bars to be used in optimization procedure. Click Settings toolbar button on the Reference pane. To delete a scale bar 1.

Select the scale bar to be deleted on the Reference pane. What do the errors in the Reference pane mean? Cameras section 1. Error (m) – distance between the input (source) and estimated positions of the camera. Error (pix) – root mean square reprojection error calculated over all feature points detected on the photo. Markers section 1.

Error (m) – distance between the input (source) and estimated positions of the marker. Error (pix) – root mean square reprojection error for the marker calculated over all photos where the marker is visible. If the total reprojection error for some marker seems to be too large, it is recommended to inspect reprojection errors for the marker on individual photos.

The information is available with Show Info command from the marker context menu on the Reference pane. Moreover, automatic CT detection and marker placement is more precise than manual marker placement.

PhotoScan supports three types of circle CTs: 12 bit, 16 bit and 20 bit. While the 12 bit pattern is considered to be decoded more precisely, 16 bit and 20 bit patterns allow for a greater number of CTs to be used within the same project. To be detected successfully, CTs must take up a significant number of pixels on the original photos. This leads to a natural limitation of CT implementation: while they generally prove to be useful in close-range imagery projects, aerial photography projects would require overly large CTs to be placed on the ground for the CTs to be detected correctly.

Coded targets in workflow. Sets of all patterns of CTs supported by PhotoScan can be generated by the program itself. To create a printable PDF with coded targets 1. Select Print Markers… command. Once generated, the pattern set can be printed and the CTs can be placed over the scene to be shot and reconstructed.

When the images with CTs seen on them are uploaded to the program, PhotoScan can detect and match the CTs automatically. To detect coded targets on source images 1. Select Detect Markers… command. CTs generated with PhotoScan software contain an even number of sectors. However, previous versions of PhotoScan software had no restriction of this kind.

Thus, if the project to be processed contains CTs from previous versions of PhotoScan software, it is required to disable parity check in order to make the detector work.
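The even-number-of-sectors restriction amounts to an even-parity check on the target bit pattern. A hedged sketch of such a check, assuming a target code is represented as an integer bit mask with one bit per sector (a hypothetical representation for illustration; PhotoScan's internal encoding is not documented):

```python
def has_even_parity(code, bits=12):
    """Check that a coded target pattern has an even number of filled
    sectors (even parity of the bit mask).

    Targets failing the check would only be accepted with parity
    checking disabled, as for patterns from older PhotoScan versions.
    """
    ones = bin(code & ((1 << bits) - 1)).count("1")
    return ones % 2 == 0

print(has_even_parity(0b101000000011))  # 4 sectors set -> True
print(has_even_parity(0b100000000011))  # 3 sectors set -> False
```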

Chapter 5. Measurements Performing measurements on model PhotoScan supports measuring of distances on the model, as well as of surface area and volume of the reconstructed 3D model. All the instructions of this section are applicable for working in the Model view of the program window, both for analysis of Dense Point Cloud or of Mesh data.

When working in the model view, all measurements are performed in 3D space, unlike measurements in Ortho view, which are planar ones. Distance measurement PhotoScan enables measurements of distances between the points of the reconstructed 3D scene. Obviously, model coordinate system must be initialized before the distance measurements can be performed. Alternatively, the model can be scaled based on known distance scale bar information to become suitable for measurements.

For instructions on setting coordinate system please refer to the Setting coordinate system section of the manual. Scale bar concept is described in the Optimization section. To measure distance 1. Select Ruler instrument from the Toolbar of the Model view. 2. Click on the model to indicate the first point of the distance to be measured. Upon the second click on the model the distance between the indicated points will be shown right in the Model view. To complete the measurement and to proceed to a new one, please press the Escape button on the keyboard.

The result of the measurement will be shown on the Console pane. Shape drawing is enabled in Model view as well. See Shapes section of the manual for information on shape drawing. Measure command available from the context menu of a selected shape allows to view the coordinates of the vertices as well as the perimeter of the shape. To measure several distances between pairs of points and automatically keep the resulting data, markers can be used. To measure distance between two markers 1.

Place the markers in the scene at the targeted locations. To measure distance between cameras 1. Switch to the estimated values mode using View Estimated button from the Reference pane toolbar. The estimated distance for the newly created scale bar equals the distance that should have been measured. Surface area and volume measurement. Surface area or volume measurements of the reconstructed 3D model can be performed only after the scale or coordinate system of the scene is defined.

To measure surface area and volume 1. Select Measure Area and Volume… command. The whole model surface area and volume will be displayed in the Measure Area and Volume dialog box.
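The volume of a closed triangulated mesh can be computed as a sum of signed tetrahedron volumes (an application of the divergence theorem), which also shows why a mesh with holes does not yield a meaningful figure. A stand-alone sketch, not the PhotoScan implementation:

```python
def mesh_volume(vertices, faces):
    """Volume of a closed triangulated mesh via signed tetrahedra.

    Each outward-oriented triangle (a, b, c) contributes det(a, b, c)/6;
    the contributions sum to the enclosed volume.  For a mesh with holes
    the sum is meaningless, which is why a closed surface is required.
    """
    total = 0.0
    for ia, ib, ic in faces:
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (
            vertices[ia], vertices[ib], vertices[ic])
        total += (ax * (by * cz - bz * cy)
                  - ay * (bx * cz - bz * cx)
                  + az * (bx * cy - by * cx))
    return total / 6.0

# Unit tetrahedron, faces wound so normals point outward: volume 1/6.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 3, 2), (0, 1, 3), (1, 2, 3)]
print(mesh_volume(verts, tris))
```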

Surface area is measured in square meters, while mesh volume is measured in cubic meters. Volume measurement can be performed only for the models with closed geometry. If there are any holes in the model surface PhotoScan will report zero volume. Existing holes in the mesh surface can be filled in before performing volume measurements using the Close Holes… tool. Distance measurement. To measure distance with a Ruler 1.

Select Ruler instrument from the Toolbar of the Ortho view. Upon the second click on the DEM the distance between the indicated points will be shown right in the Ortho view.



Table of Contents: Overview; Capturing photos; General workflow; Improving camera alignment results; Graphical user interface; Supported formats; Camera models. Overview. Agisoft PhotoScan is an advanced image-based 3D modeling solution aimed at creating professional quality 3D content from still images. Based on the latest multi-view 3D reconstruction technology, it operates with arbitrary images and is efficient in both controlled and uncontrolled conditions.

Photos can be taken from any position, providing that the object to be reconstructed is visible on at least two photos. Both image alignment and 3D model reconstruction are fully automated. How it works. Generally the final goal of photographs processing with PhotoScan is to build a textured 3D model.

The procedure of photographs processing and 3D model construction comprises four main stages. The first stage is camera alignment. At this stage PhotoScan searches for common points on photographs and matches them, as well as finds the position of the camera for each picture and refines camera calibration parameters.

As a result a sparse point cloud and a set of camera positions are formed. The sparse point cloud represents the results of photo alignment and will not be directly used in the further 3D model construction procedure, except for the sparse point cloud based reconstruction method. However, it can be exported for further usage in external programs.

For instance, the sparse point cloud model can be used in a 3D editor as a reference. On the contrary, the set of camera positions is required for further 3D model reconstruction by PhotoScan.

The next stage is building dense point cloud. Based on the estimated camera positions and pictures themselves a dense point cloud is built by PhotoScan. Dense point cloud may be edited prior to export or proceeding to 3D mesh model generation. The third stage is building mesh. PhotoScan reconstructs a 3D polygonal mesh representing the object surface based on the dense or sparse point cloud according to the user's choice. Generally there are two algorithmic methods available in PhotoScan that can be applied to 3D mesh generation: Height Field – for planar type surfaces, Arbitrary – for any kind of object.

The mesh having been built, it may be necessary to edit it. Some corrections, such as mesh decimation, removal of detached components, closing of holes in the mesh, smoothing, etc., can be performed in PhotoScan. For more complex editing you have to engage external 3D editor tools. PhotoScan allows to export the mesh, edit it in another software and import it back. After geometry, i.e. mesh, is reconstructed, it can be textured. Several texturing modes are available in PhotoScan; they are described in the corresponding section of this manual, as well as orthomosaic and DEM generation procedures.

About the manual. Basically, the sequence of actions described above covers most of the data processing needs.

All these operations are carried out automatically according to the parameters set by the user. Instructions on how to get through these operations and descriptions of the parameters controlling each step are given in the corresponding sections of the Chapter 3, General workflow chapter of the manual. In some cases, however, additional actions may be required to get the desired results. Pictures taken using uncommon lenses, such as fisheye ones, may require preliminary calibration of optical system parameters or usage of a different calibration model specially implemented for ultra-wide angle lenses.

Chapter 4, Improving camera alignment results covers that part of the software functionality. In some capturing scenarios masking of certain regions of the photos may be required to exclude them from the calculations. Application of masks in PhotoScan processing workflow as well as the available editing options are described in Chapter 5, Editing.

Chapter 6, Automation describes opportunities to save up on manual intervention to the processing workflow. It can take quite a long time to reconstruct a 3D model.

PhotoScan allows to export obtained results and save intermediate data in a form of project files at any stage of the process. If you are not familiar with the concept of projects, its brief description is given at the end of Chapter 3, General workflow. In the manual you can also find instructions on the PhotoScan installation procedure and basic rules for taking "good" photographs, i.e. images appropriate for 3D model reconstruction.

For the information refer to Chapter 1, Installation and Chapter 2, Capturing photos. Chapter 1. Installation. NVidia GeForce 8xxx series and later. PhotoScan is likely to be able to utilize processing power of any OpenCL enabled device during Dense Point Cloud generation stage, provided that OpenCL drivers for the device are properly installed.

However, because of the large number of various combinations of video chips, driver versions and operating systems, Agisoft is unable to test and guarantee PhotoScan's compatibility with every device and on every platform.

The table below lists currently supported devices on Windows platform only. We will pay particular attention to possible problems with PhotoScan running on these devices. Using OpenCL acceleration with mobile or integrated graphics video chips is not recommended because of the low performance of such GPUs.

Start PhotoScan by running photoscan. Restrictions of the Demo mode Once PhotoScan is downloaded and installed on your computer you can run it either in the Demo mode or in the full function mode. On every start until you enter a serial number it will show a registration box offering two options: 1 use PhotoScan in the Demo mode or 2 enter a serial number to confirm the purchase.

The first choice is set by default, so if you are still exploring PhotoScan click the Continue button and PhotoScan will start in the Demo mode. The employment of PhotoScan in the Demo mode is not time limited. Several functions, however, are not available in the Demo mode. After purchasing you will get the serial number to enter into the registration box on starting PhotoScan. Once the serial number is entered the registration box will not appear again and you will get full access to all functions of the program.

Chapter 2. Capturing photos. Before loading your photographs into PhotoScan you need to take them and select those suitable for 3D model reconstruction. Photographs can be taken by any digital camera (both metric and non-metric), as long as you follow some specific capturing guidelines.

This section explains general principles of taking and selecting pictures that provide the most appropriate data for 3D model generation. Make sure you have studied the following rules and read the list of restrictions before you go out to shoot photographs. Equipment. Use a digital camera with reasonably high resolution (5 MPix or more). Avoid ultra-wide angle and fisheye lenses. The best choice is 50 mm focal length (35 mm film equivalent) lenses. It is recommended to use focal length from the 20 to 80 mm interval in 35 mm equivalent.

If a data set was captured with a fisheye lens, appropriate camera type should be selected in PhotoScan Camera Calibration dialog prior to processing. Fixed lenses are preferred. If zoom lenses are used, focal length should be set either to maximal or to minimal value during the entire shooting session for more stable results.

Take images at maximal possible resolution. ISO should be set to the lowest value, otherwise high ISO values will induce additional noise to images.

Aperture value should be high enough to result in sufficient focal depth: it is important to capture sharp, not blurred photos. Shutter speed should not be too slow, otherwise blur can occur due to slight movements. If you still have to shoot shiny objects, do it under a cloudy sky. Avoid unwanted foregrounds. Avoid moving objects within the scene to be reconstructed.

Avoid absolutely flat objects or scenes.

Image preprocessing

PhotoScan operates with the original images, so do not crop or geometrically transform them.

Capturing scenarios

Generally, spending some time planning your shot session might be very useful. Number of photos: more than required is better than not enough. The number of blind zones should be minimized since PhotoScan is able to reconstruct only geometry visible from at least two cameras.

Each photo should effectively use the frame size: the object of interest should take up the maximum area. In some cases portrait camera orientation should be used. Do not try to place the full object in the image frame; if some parts are missing, it is not a problem providing that these parts appear on other images. Good lighting is required to achieve better quality of the results, yet blinks should be avoided.

It is recommended to remove sources of light from camera fields of view. Avoid using flash. The following figures represent advice on appropriate capturing scenarios.

Restrictions

In some cases it might be very difficult or even impossible to build a correct 3D model from a set of pictures. A short list of typical reasons for photograph unsuitability is given below.

Modifications of photographs

PhotoScan can process only unmodified photos as they were taken by a digital photo camera. Processing photos which were manually edited or geometrically warped is likely to fail or to produce highly inaccurate results. Photometric modifications do not affect reconstruction results.

 
 


Area, volume, profile measurement procedures are tackled in Chapter 5, Measurements, which also includes information on vegetation indices calculations. While Chapter 7, Automation describes opportunities to save up on manual intervention to the processing workflow, Chapter 8, Network processing presents guidelines on how to organize distributed processing of the imagery data on several nodes.

PhotoScan allows to export obtained results and save intermediate data in a form of project files at any stage of the process. If you are not familiar with the concept of projects, its brief description is given at the end of Chapter 3, General workflow. In the manual you can also find instructions on the PhotoScan installation procedure and basic rules for taking "good" photographs, i.e. images that provide the most necessary information for 3D reconstruction.



You can indicate the size of the section in the xy plane in meters for the point cloud to be divided into respective rectangular blocks.

The total volume of the 3D scene is limited by the Bounding Box. The whole volume will be split into equal blocks starting from the point with minimum x and y values. Note that empty blocks will not be saved. In some cases it may be reasonable to edit the point cloud before exporting it.
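The block-splitting rule described above can be sketched in a few lines. The helper below is a hypothetical illustration, not part of PhotoScan: it groups points into xy blocks of a given size counted from the minimum x/y corner, and empty blocks simply never appear in the result.

```python
def split_into_blocks(points, block_size):
    """Group points into rectangular xy blocks of a given size (meters).

    Blocks are indexed from the corner with minimum x and y values,
    mirroring the export behaviour described above. Empty blocks are
    never created, matching the note that they are not saved.
    """
    min_x = min(p[0] for p in points)
    min_y = min(p[1] for p in points)
    blocks = {}
    for p in points:
        ix = int((p[0] - min_x) // block_size)
        iy = int((p[1] - min_y) // block_size)
        blocks.setdefault((ix, iy), []).append(p)
    return blocks

# Three points, 1 m blocks: two nearby points land in adjacent blocks,
# the far point in its own block; no empty blocks in between.
blocks = split_into_blocks([(0.5, 0.5), (1.5, 0.2), (10.0, 10.0)], 1.0)
```

With a real export the splitting happens inside PhotoScan; the sketch only shows why the block count stays proportional to occupied area rather than bounding-box area.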

To read about point cloud editing refer to the Editing point cloud section of the manual.

Tie points data export

To export matching points:
1. Select Export Matches...
2. In the Export Matches dialog box set export parameters. Precision value sets the limit to the number of decimal digits in the tie points coordinates to be saved.

Matching points exported from PhotoScan can be used as a basis for AT procedure to be performed in some external software. Later on, estimated camera data can be imported back to PhotoScan using Import Cameras command from the Tools menu to proceed with 3D model reconstruction procedure.

Camera calibration and orientation data export

To export camera calibration and camera orientation data select Export Cameras...

Panorama export

PhotoScan is capable of panorama stitching for images taken from the same camera position – camera station. To indicate for the software that loaded images have been taken from one camera station, one should move those photos to a camera group and assign Camera Station type to it.

For information on camera groups refer to Loading photos section.

To export panorama:
1. Select Export – Export Panorama...
2. Choose panorama orientation in the file with the help of navigation buttons to the right of the preview window in the Export Panorama dialog.
3. Set exporting parameters: select camera groups which panorama should be exported for and indicate export file name mask.

Additionally, you can set boundaries for the region of panorama to be exported using the Setup boundaries section of the Export Panorama dialog. Text boxes in the first line (X) allow to indicate the angle limits in the horizontal plane, and the second line (Y) serves for the angle limits in the vertical plane.

Image size option enables to control the size of the exporting file. If a model generated with PhotoScan is to be imported in a 3D editor program for inspection or further editing, it might be helpful to use Shift function while exporting the model.

It allows to set the value to be subtracted from the respective coordinate value for every vertex in the mesh. Essentially, this means translation of the model coordinate system origin, which may be useful since some 3D editors, for example, truncate the coordinates values up to 8 or so digits, while in some projects they are decimals that make sense with respect to model positioning task.

So it can be recommended to subtract a value equal to the whole part of a certain coordinate value (see Reference pane, Camera coordinates values) before exporting the model, thus providing a reasonable scale for the model to be processed in a 3D editor program. The texture file should be kept in the same directory as the main file describing the geometry.
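The effect of the Shift function on coordinate precision can be shown numerically. The sketch below uses made-up values: it round-trips a UTM-scale coordinate through 32-bit floats, which is effectively what a 3D editor storing single-precision vertices does, with and without subtracting the whole part first.

```python
import struct

def to_single_precision(x):
    # Round-trip a value through a 32-bit float, as a 3D editor
    # storing single-precision vertex coordinates effectively does.
    return struct.unpack('f', struct.pack('f', x))[0]

easting = 4305123.456  # a UTM-scale coordinate in meters (made-up value)
shift = 4305000.0      # whole part subtracted on export via Shift

# Precision lost when the full coordinate is squeezed into 32 bits:
err_unshifted = abs(to_single_precision(easting) - easting)
# Precision lost for the shifted (small) coordinate:
err_shifted = abs(to_single_precision(easting - shift) - (easting - shift))
```

The shifted coordinate survives single precision to well under a millimeter, while the unshifted one loses several centimeters, which is exactly why the manual recommends the subtraction.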

Thanks to hierarchical tiles format, it allows to responsively visualise large models.

Orthomosaic export

To export orthomosaic:
1. Select Export Orthomosaic...
2. In the Export Orthomosaic dialog box specify the coordinate system for the orthomosaic to be saved in.

This information is already included in the GeoTIFF file, however, you can duplicate it if needed. If you need to export the orthomosaic in JPEG or PNG file formats and would like to have georeferencing data, this information could be useful. Alternatively, you can indicate the region to be exported using the polygon drawing option in the Ortho view tab of the program window. For instructions on polygon drawing refer to the Shapes section of the manual.

Once the polygon is drawn, right-click on the polygon and set it as a boundary of the region to be exported using Set Boundary Type option from the context menu. Default value for pixel size in Export Orthomosaic dialog refers to ground sampling resolution, thus, it is useless to set a smaller value: the number of pixels would increase, but the effective resolution would not.

If you have chosen to export orthomosaic with a certain pixel size not using Max. Additionally, the file may be saved without compression None value of the compression type parameter. Total size textbox in the Export Orthomosaic dialog helps to estimate the size of the resulting file. However, it is recommended to make sure that the application you are planning to open the orthomosaic with supports BigTIFF format. Alternatively, you can split a large orthomosaic in blocks, with each block fitting the limits of a standard TIFF file.
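The size considerations above are simple arithmetic, sketched below. The 4 GB threshold for classic TIFF and the 4 bytes-per-pixel figure are assumptions of the example (actual size depends on band count, bit depth and compression).

```python
import math

def ortho_dimensions(extent_x_m, extent_y_m, pixel_size_m):
    """Pixel dimensions of an orthomosaic for a given ground extent
    and pixel size (ground sampling distance), both in meters."""
    return (math.ceil(extent_x_m / pixel_size_m),
            math.ceil(extent_y_m / pixel_size_m))

def needs_bigtiff(width, height, bytes_per_pixel=4, limit=2**32):
    """Rough uncompressed-size check against the classic 4 GB TIFF
    limit; past it, BigTIFF or block-wise export is needed."""
    return width * height * bytes_per_pixel >= limit

# A 2 x 1.5 km survey at 5 cm pixel size:
w, h = ortho_dimensions(2000.0, 1500.0, 0.05)
big = needs_bigtiff(w, h)        # exceeds the classic TIFF limit
small = needs_bigtiff(1000, 1000)  # a small preview clearly does not
```

This also illustrates the note on pixel size: halving the pixel size quadruples the pixel count (and file size) without adding effective resolution beyond the ground sampling distance of the source imagery.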

PhotoScan supports direct uploading of the orthomosaics to MapBox platform. When exporting multispectral orthomosaic, vegetation index data can be saved as two types of data: as a grid of floating point index values calculated per pixel of orthomosaic, or as an orthomosaic in pseudocolors according to a palette set by user. None value allows to export orthomosaic generated for the data before any index calculation procedure was performed. No-data value is used for the points of the grid where elevation value could not be calculated based on the source data.

Default value is suggested according to the industry standard, however it can be changed by user. See Orthomosaic export section for details. Similarly to orthomosaic export, polygons drawn over the DEM on the Ortho tab of the program window can be set as boundaries for DEM export. Extra products to export In addition to main targeted products PhotoScan allows to export several other processing results, like.

PhotoScan supports direct uploading of the models to Sketchfab resource and of the orthomosaics to MapBox platform. Processing report export option is available for georeferenced projects only. Tie points – total number of valid tie points equals to the number of points in the sparse cloud. Reprojection error – root mean square reprojection error averaged over all tie points on all images.

Reprojection error is the distance between the point on the image where a reconstructed 3D point can be projected and the original projection of that 3D point detected on the photo and used as a basis for the 3D point reconstruction procedure. Advanced tab of Preferences dialog available from Tools menu. For projects calculated over network processing time will not be shown.
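The root mean square reprojection error defined above can be written as a short helper. This is an illustrative sketch operating on already collected projection pairs, not PhotoScan's internal code.

```python
import math

def rms_reprojection_error(projections):
    """RMS distance between reprojected and detected image points.

    `projections` is a list of ((x_proj, y_proj), (x_det, y_det))
    pairs pooled over all tie points on all images, matching the
    definition of the report statistic above.
    """
    sq = [(px - dx) ** 2 + (py - dy) ** 2
          for (px, py), (dx, dy) in projections]
    return math.sqrt(sum(sq) / len(sq))

# Two projections: one off by (0.3, -0.4) px, one exact.
err = rms_reprojection_error([((10.0, 10.0), (10.3, 9.6)),
                              ((5.0, 5.0), (5.0, 5.0))])
```

Because the errors are squared before averaging, a few badly reprojected points dominate the statistic, which is why inspecting per-point errors is useful before optimization.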

PhotoScan matches images on different scales to improve robustness with blurred or difficult to match images. The accuracy of tie point projections depends on the scale at which they were located. PhotoScan uses information about scale to weight tie point reprojection errors. In the Reference pane settings dialog tie point accuracy parameter now corresponds to normalized accuracy – i. Tie points detected on other scales will have accuracy proportional to their scales.

This helps to obtain more accurate bundle adjustment results. On the processing parameters page of the report as well as in chunk information dialog two reprojection errors are provided: the reprojection error in the units of tie point scale this is the quantity that is minimized during bundle adjustment , and the reprojection error in pixels for convenience.
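The scale-based weighting can be illustrated with a minimal sketch, assuming accuracy scales linearly with key point scale as the text states; the function names are hypothetical.

```python
def tie_point_accuracy(normalized_accuracy, key_point_scale):
    """Per-projection accuracy used to weight reprojection errors.

    The Reference pane's tie point accuracy applies to points detected
    at scale 1.0; points found at coarser scales are proportionally
    less accurate (assumption consistent with the text above).
    """
    return normalized_accuracy * key_point_scale

def weighted_error(residual_px, normalized_accuracy, key_point_scale):
    # Residual expressed in units of tie point scale -- the quantity
    # minimized during bundle adjustment, per the report description.
    return residual_px / tie_point_accuracy(normalized_accuracy,
                                            key_point_scale)

acc = tie_point_accuracy(1.0, 4.0)      # a point found at scale 4
w_err = weighted_error(2.0, 1.0, 2.0)   # 2 px residual at scale 2
```

So a 2-pixel residual on a scale-2 key point counts the same as a 1-pixel residual on a scale-1 key point, which is how coarse-scale matches are prevented from dominating the adjustment.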

The mean key point size value is a mean tie point scale averaged across all projections.

Chapter 4. Referencing

Camera calibration

Calibration groups

While carrying out photo alignment PhotoScan estimates both internal and external camera orientation parameters, including nonlinear radial distortions.

For the estimation to be successful it is crucial to apply the estimation procedure separately to photos taken with different cameras. All the actions described below could and should be applied (or not applied) to each calibration group individually.

To create a new calibration group:
1. Select Camera Calibration... A new group will be created and depicted on the left-hand part of the Camera Calibration dialog box.

To move photos from one group to another:
1. In the Camera Calibration dialog box choose the source group on the left-hand part of the dialog.

2. Select photos to be moved and drag them to the target group on the left-hand part of the Camera Calibration dialog box.

To place each photo into a separate group you can use Split Groups command, available by right button click on a calibration group name in the left-hand part of the Camera Calibration dialog.

Camera types

PhotoScan supports four major types of camera: frame camera, fisheye camera, spherical camera and cylindrical camera.

Camera type can be set in Camera Calibration dialog box available from Tools menu. For a spherical camera (equirectangular projection) no additional information is required except the image in equirectangular representation. For cylindrical projection, in case the source data within a calibration group is a set of panoramic images stitched according to cylindrical model, the camera type setting will be enough for the program to calculate camera orientation parameters.

No additional information is required. In case source images lack EXIF data or the EXIF data is insufficient to calculate focal length in pixels, PhotoScan will assume that focal length equals 50 mm (35 mm film equivalent). However, if the initial guess values differ significantly from the actual focal length, it is likely to lead to failure of the alignment process. So, if photos do not contain EXIF metadata, it is preferable to specify focal length (mm) and sensor pixel size (mm) manually.

It can be done in Camera Calibration dialog box available from Tools menu. Generally, this data is indicated in camera specification or can be received from some online source.

To indicate to the program that camera orientation parameters should be estimated based on the focal length and pixel size information, it is necessary to set the Type parameter on the Initial tab to Auto value.

Camera calibration parameters

Once you have tried to run the estimation procedure and got poor results, you can improve them thanks to the additional data on calibration parameters.

To specify camera calibration parameters:
1. Select calibration group which needs re-estimation of camera orientation parameters on the left side of the Camera Calibration dialog box.

Initial calibration data will be adjusted during the Align Photos processing step. Note that residuals are averaged per cell of an image and then across all the images in a camera group.

Calibration parameters list: fx, fy: focal length in x- and y-dimensions measured in pixels.

Setting coordinate system

Many applications require data with a defined coordinate system. Setting the coordinate system also provides a correct scaling of the model allowing for surface area and volume measurements and makes model loading in geoviewers and geoinformation software much easier.

Some functionality like digital elevation model export is available only after the coordinate system is defined. PhotoScan supports setting a coordinate system based on either ground control point marker coordinates or camera coordinates. In both cases the coordinates are specified in the Reference pane and can be either loaded from the external file or typed in manually.

Setting coordinate system based on recorded camera positions is often used in aerial photography processing. However it may be also useful for processing photos captured with GPS enabled cameras. Placing markers is not required if recorded camera coordinates are used to initialize the coordinate system. In the case when ground control points are used to set up the coordinate system the markers should be placed in the corresponding locations of the scene.

Using camera positioning data for georeferencing the model is faster since manual marker placement is not required. On the other hand, ground control point coordinates are usually more accurate than telemetry data, allowing for more precise georeferencing. PhotoScan supports two approaches to marker placement: manual marker placement and guided marker placement.

Manual approach implies that the marker projections should be indicated manually on each photo where the marker is visible. Manual marker placement does not require 3D model and can be performed even before photo alignment. In the guided approach marker projection is specified for a single photo only.

PhotoScan automatically projects the corresponding ray onto the model surface and calculates marker projections on the rest of the photos where marker is visible. Marker projections defined automatically on individual photos can be further refined manually. Reconstructed 3D model surface is required for the guided approach. Guided marker placement usually speeds up the procedure of marker placement significantly and also reduces the chance of incorrect marker placement.

It is recommended in most cases unless there are any specific reasons preventing this operation.

To place a marker using guided approach:
1. Open a photo where the marker is visible by double clicking on its name.

2. Select Create Marker command from the context menu. The new marker will be created and its projections on the other photos will be automatically defined.

While the accuracy of marker placement in the 3D view is usually much lower, it may be still useful for quickly locating the photos observing the specified location on the model. To view the corresponding photos use Filter by Markers command again from the 3D view context menu. If the command is inactive, please make sure that the marker in question is selected on the Reference pane.

To place a marker using manual approach, indicate the marker projections manually on each photo where the marker is visible. To save up time on manual marker placement procedure PhotoScan offers guiding lines feature. When a marker is placed on an aligned photo, PhotoScan highlights lines, which the marker is expected to lie on, on the rest of the aligned photos.

The calculated marker positions will be indicated with icon on the corresponding aligned photos in Photo View mode. Automatically defined marker locations can be later refined manually by dragging their projections on the corresponding photos. To refine marker location 1. Open the photo where the marker is visible by double clicking on the photo’s name. Automatically placed marker will be indicated with icon. Move the marker projection to the desired location by dragging it using left mouse button.

Once the marker location is refined by user, the marker icon will change. The photos where the marker is placed will be marked with an icon on the Photos pane. To filter photos by marker use Filter by Markers command from the 3D view context menu. In those cases when there are hesitations about the features depicted on the photo, comparative inspection of two photos can prove to be useful.

To open two photos in PhotoScan window simultaneously Move to Other Tab Group command is available from photo tab header context menu. To open two photos simultaneously 1. In the Photos pane double click on one photo to be opened. The photo will be opened in a new tab of the main program window. Right click on the tab header and choose Move to Other Tab Group command from the context menu. The main program window will be divided into two parts and the photo will be moved to the second part.

Real world coordinates used for referencing the model along with the type of coordinate system used are specified using the Reference pane. The model can be located in either local Euclidean coordinates or in georeferenced coordinates. For model georeferencing a wide range of various geographic and projected coordinate systems are supported, including widely used WGS84 coordinate system.

Besides, almost all coordinate systems from the EPSG registry are supported as well. To load reference coordinates from a text file 1. Click Import toolbar button on the Reference pane. To open Reference pane use Reference command from the View menu. Browse to the file containing recorded reference coordinates and click Open button. In the Import CSV dialog set the coordinate system if the data presents geographical coordinates.

Select the delimiter and indicate the number of the data column for each coordinate. Indicate columns for the orientation data if present. Information on the accuracy of the source coordinates x, y, z can be loaded with a CSV file as well.

Check Load Accuracy option and indicate the number of the column where the accuracy for the data should be read from. The same figure will be tackled as accuracy information for all three coordinates. To assign reference coordinates manually 1. Additionally, it is possible to indicate accuracy data for the coordinates.

Select Set Accuracy... command. It is possible to select several cameras and apply the Set Accuracy... command simultaneously. Alternatively, you can select Accuracy (m) or Accuracy (deg) text box for a certain camera on the Reference pane and press F2 button on the keyboard to type the text data directly onto the Reference pane.

The reference coordinates data will be loaded into the Reference pane. After reference coordinates have been assigned PhotoScan automatically estimates coordinates in a local Euclidean system and calculates the referencing errors. The largest error will be highlighted. To set a georeferenced coordinate system:

Assign reference coordinates using one of the options described above. In the Reference Settings dialog box select the Coordinate System used to compile reference coordinates data if it has not been set at the previous step.

Rotation angles in PhotoScan are defined around the following axes: yaw axis runs from top to bottom, pitch axis runs from left to right wing of the drone, roll axis runs from tail to nose of the drone.

Zero values of the rotation angle triple define the following camera position aboard: camera looks down to the ground, frames are taken in landscape orientation, and horizontal axis of the frame is perpendicular to the central tail-nose axis of the drone. If the camera is fixed in a different position, respective yaw, pitch, roll values should be input in the camera correction section of the Settings dialog.
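One way to turn yaw/pitch/roll angles into a rotation matrix is sketched below. The yaw-then-pitch-then-roll composition order and the axis assignment are assumptions of the example, since exact conventions vary between tools; only the right-hand rule and the axis descriptions above are taken from the text.

```python
import math

def ypr_to_matrix(yaw_deg, pitch_deg, roll_deg):
    """Rotation matrix from yaw, pitch, roll in degrees (right-hand rule).

    Yaw about the vertical (z) axis, pitch about the wing (y) axis,
    roll about the tail-nose (x) axis, composed as R = Rz * Ry * Rx.
    Treat the ordering as an illustrative assumption.
    """
    y, p, r = (math.radians(a) for a in (yaw_deg, pitch_deg, roll_deg))
    cy, sy = math.cos(y), math.sin(y)
    cp, sp = math.cos(p), math.sin(p)
    cr, sr = math.cos(r), math.sin(r)
    Rz = [[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]]   # yaw
    Ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]   # pitch
    Rx = [[1, 0, 0], [0, cr, -sr], [0, sr, cr]]   # roll

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]

    return matmul(Rz, matmul(Ry, Rx))

R_identity = ypr_to_matrix(0, 0, 0)   # zero angles: no rotation
R_yaw90 = ypr_to_matrix(90, 0, 0)     # pure 90-degree yaw
```

With zero angles the matrix is the identity, matching the default camera position described above.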

The senses of the angles are defined according to the right-hand rule. A click on the column name on the Reference pane sorts the markers and photos by the data in the column.

At this point you can review the errors and decide whether additional refinement of marker locations is required in case of marker based referencing , or if certain reference points should be excluded. To reset a chunk georeferencing use Reset Transform command from the chunk context menu on the Workspace pane. It should be updated manually using Update toolbar button on the Reference pane. Each reference point is specified in this file on a separate line.

Sample reference coordinates file is provided below:. JPG Individual entries on each line should be separated with a tab space, semicolon, comma, etc character. All lines starting with character are treated as comments. Using different vertical datums On default PhotoScan requires all the source altitude values for both cameras and markers to be input as values mesuared above the ellipsoid.

However, PhotoScan allows for the different geoid models utilization as well. PhotoScan installation package includes only EGM96 geoid model, but additional geoid models can be downloaded from Agisoft’s website if they are required by the coordinate system selected in the Reference pane settings dialog; alternatively, a geoid model can be loaded from a custom PRJ file.

Optimization

Optimization of camera alignment

PhotoScan estimates internal and external camera orientation parameters during photo alignment. This estimation is performed using image data alone, and there may be some errors in the final estimates. The accuracy of the final estimates depends on many factors, like overlap between the neighboring photos, as well as on the shape of the object surface.

These errors can lead to non-linear deformations of the final model. During georeferencing the model is linearly transformed using 7 parameter similarity transformation 3 parameters for translation, 3 for rotation and 1 for scaling.

Such transformation can compensate only a linear model misalignment. The non-linear component can not be removed with this approach. This is usually the main reason for georeferencing errors. Possible non-linear deformations of the model can be removed by optimizing the estimated point cloud and camera parameters based on the known reference coordinates.
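The 7-parameter similarity transformation can be written out explicitly. The sketch below applies p' = s * R * p + t to a point in plain Python; it is an illustration of the georeferencing step just described, not PhotoScan's implementation.

```python
def similarity_transform(point, scale, R, t):
    """Apply a 7-parameter similarity transform: p' = s * R @ p + t.

    3 translation + 3 rotation (encoded in the 3x3 matrix R) + 1 scale
    parameter. Being linear, it cannot remove non-linear model
    deformations -- the limitation noted above.
    """
    x, y, z = point
    rx = R[0][0] * x + R[0][1] * y + R[0][2] * z
    ry = R[1][0] * x + R[1][1] * y + R[1][2] * z
    rz = R[2][0] * x + R[2][1] * y + R[2][2] * z
    return (scale * rx + t[0], scale * ry + t[1], scale * rz + t[2])

# Doubling the scale and shifting along x, with no rotation:
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
p = similarity_transform((1.0, 2.0, 3.0), 2.0, identity, (10.0, 0.0, 0.0))
```

Whatever residual curvature the model has before this transform, it has exactly the same curvature after it, which is why optimization of the point cloud and camera parameters is needed on top.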

During this optimization PhotoScan adjusts estimated point coordinates and camera parameters minimizing the sum of reprojection error and reference coordinate misalignment error. To achieve better optimization results it may be useful to edit the sparse point cloud, deleting obviously mislocated points beforehand. Georeferencing accuracy can be improved significantly after optimization.

It is recommended to perform optimization if the final model is to be used for any kind of measurements.

To optimize camera alignment:
1. Click Optimize toolbar button.
2. In Optimize Camera Alignment dialog box check additional camera parameters to be optimized if needed.
3. Click OK button to start optimization.

You will have to rebuild the model geometry after optimization. Image coordinates accuracy for markers indicates how precisely the markers were placed by the user or adjusted by the user after being automatically placed by the program.

Ground altitude parameter is used to make reference preselection mode of alignment procedure work effectively for oblique imagery. See Aligning photos for details. Camera, marker and scale bar accuracy can be set per item. Accuracy values can be typed in on the pane per item or for a group of selected items.

Generally it is reasonable to run optimization procedure based on markers data only. It is due to the fact that GCPs coordinates are measured with significantly higher accuracy compared to GPS data that indicates camera positions. Thus, markers data are sure to give more precise optimization results. Moreover, quite often GCP and camera coordinates are measured in different coordinate systems, that also prevents from using both cameras and markers data in optimization simultaneously.

The results of the optimization procedure can be evaluated with the help of error information on the Reference pane.

In addition, distortion plot can be inspected along with mean residuals visualised per calibration group.

Scale bar based optimization

Scale bars can prove to be useful when there is no way to locate ground control points all over the scene.

Scale bars allow to save field work time, since it is significantly easier to place several scale bars with precisely known length, than to measure coordinates of a few markers using special equipment. In addition, PhotoScan allows to place scale bar instances between cameras, thus making it possible to avoid not only marker but ruler placement within the scene as well.

Surely, scale bar based information will not be enough to set a coordinate system, however, the information can be successfully used while optimizing the results of photo alignment. It will also be enough to perform measurements in PhotoScan software. See Performing measurements on model. To add a scale bar 1. Place markers at the start and end points of the bar. For information on marker placement please refer to the Setting coordinate system section of the manual.

Select Create Scale Bar command from the Model view context menu. The scale bar will be created and an instance added to the Scale Bar list on the Reference pane. Double click on the Distance (m) box next to the newly created scale bar name and enter the known length of the bar in meters. To add a scale bar between cameras: 1. Select the two cameras on the Workspace or Reference pane using Ctrl button. Alternatively, the cameras can be selected in the Model view window using selecting tools from the Toolbar.

Select Create Scale Bar command from the context menu. To run scale bar based optimization: 1. On the Reference pane check all scale bars to be used in optimization procedure. 2. Click Settings toolbar button on the Reference pane. To delete a scale bar: 1. Select the scale bar to be deleted on the Reference pane.
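The arithmetic behind scale bars is simple enough to sketch. Both helpers below are hypothetical illustrations of the relation between a bar's known and estimated length, not PhotoScan functions.

```python
def scale_factor(known_length_m, estimated_length_m):
    """Factor the model must be scaled by so the bar's estimated
    length matches its known length (illustrative)."""
    return known_length_m / estimated_length_m

def scale_bar_error(known_length_m, estimated_length_m):
    # Difference between estimated and known bar length, the kind of
    # per-bar discrepancy reviewed on the Reference pane (assumption).
    return estimated_length_m - known_length_m

# A 2 m bar measured as 4 m in model units means the model is twice
# too large; a 1 m bar estimated at 1.05 m leaves a 5 cm discrepancy.
factor = scale_factor(2.0, 4.0)
bar_err = scale_bar_error(1.0, 1.05)
```

With several bars the optimizer balances all such discrepancies together with reprojection errors, rather than applying one factor directly.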

What do the errors in the Reference pane mean?

Cameras section:
1. Error (m) – distance between the input (source) and estimated positions of the camera.
2. Error (pix) – root mean square reprojection error calculated over all feature points detected on the photo.

Markers section:
1. Error (m) – distance between the input (source) and estimated positions of the marker.

Error pix – root mean square reprojection error for the marker calculated over all photos where marker is visible. If the total reprojection error for some marker seems to be too large, it is recommended to inspect reprojection errors for the marker on individual photos. The information is available with Show Info command from the marker context menu on the Reference pane.

Moreover, automatic CTs detection and marker placement is more precise than manual marker placement. PhotoScan supports three types of circle CTs: 12 bit, 16 bit and 20 bit. While 12 bit pattern is considered to be decoded more precisely, 16 bit and 20 bit patterns allow for a greater number of CTs to be used within the same project. To be detected successfully CTs must take up a significant number of pixels on the original photos.

This leads to a natural limitation of CTs implementation: while they generally prove to be useful in close-range imagery projects, aerial photography projects would demand too large CTs to be placed on the ground for the CTs to be detected correctly.

Coded targets in workflow

Sets of all patterns of CTs supported by PhotoScan can be generated by the program itself. To create a printable PDF with coded targets, select Print Markers... command. Once generated, the pattern set can be printed and the CTs can be placed over the scene to be shot and reconstructed.

When the images with CTs seen on them are uploaded to the program, PhotoScan can detect and match the CTs automatically. To detect coded targets on source images, select Detect Markers... command. CTs generated with PhotoScan software contain an even number of sectors.

However, previous versions of PhotoScan software had no restriction of the kind. Thus, if the project to be processed contains CTs from previous versions of PhotoScan software, it is required to disable parity check in order to make the detector work.

Chapter 5. Measurements

Performing measurements on model

PhotoScan supports measuring of distances on the model, as well as of surface area and volume of the reconstructed 3D model.

All the instructions of this section are applicable for working in the Model view of the program window, both for analysis of Dense Point Cloud or of Mesh data.

When working in the model view, all measurements are performed in 3D space, unlike measurements in Ortho view, which are planar.

Distance measurement
PhotoScan enables measurements of distances between points of the reconstructed 3D scene. Obviously, the model coordinate system must be initialized before distance measurements can be performed. Alternatively, the model can be scaled based on known distance (scale bar) information to become suitable for measurements. For instructions on setting coordinate system please refer to the Setting coordinate system section of the manual.
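The difference between the two measurement modes can be illustrated with a small sketch (plain Python; the coordinates are hypothetical):

```python
def distance_3d(a, b):
    """Distance between two points in 3D space (Model view measurement)."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def distance_planar(a, b):
    """Planar distance: the Z (altitude) difference is ignored (Ortho view)."""
    return distance_3d(a[:2], b[:2])

p1 = (10.0, 20.0, 5.0)   # hypothetical model coordinates, metres
p2 = (13.0, 24.0, 17.0)
```

For points at different altitudes the 3D distance is always the larger of the two, which is why a slope measured in the Model view exceeds the same span measured on the orthomosaic.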

Scale bar concept is described in the Optimization section.

To measure distance:
1. Select Ruler instrument from the Toolbar of the Model view.
Upon the second click on the model the distance between the indicated points will be shown right in the Model view. To complete the measurement and proceed to a new one, press the Escape button on the keyboard. The result of the measurement will be shown on the Console pane. Shape drawing is enabled in Model view as well.

See the Shapes section of the manual for information on shape drawing. The Measure command, available from the context menu of a selected shape, shows the coordinates of the vertices as well as the perimeter of the shape. To measure several distances between pairs of points and automatically keep the resulting data, markers can be used.

To measure distance between two markers:
1. Place the markers in the scene at the targeted locations.

To measure distance between cameras:
1. Switch to the estimated values mode using View Estimated button from the Reference pane toolbar.
The estimated distance for the newly created scale bar equals the distance that should have been measured.

Surface area and volume measurement
Surface area or volume measurements of the reconstructed 3D model can be performed only after the scale or coordinate system of the scene is defined.
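The idea behind scale-bar based scaling can be sketched as follows (plain Python, not the PhotoScan API; the distances are invented for illustration):

```python
def scale_factor(measured_dist, known_dist):
    """Ratio that rescales arbitrary model units to real-world metres."""
    return known_dist / measured_dist

def rescale(points, s):
    """Apply the scale factor to every model coordinate."""
    return [tuple(c * s for c in p) for p in points]

# The scale bar endpoints are 2.0 units apart in the unscaled model,
# but the printed bar is known to be 0.5 m long:
s = scale_factor(2.0, 0.5)
model = rescale([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)], s)
```

After rescaling, the bar endpoints are exactly the known 0.5 m apart, and every other measurement on the model inherits metric units.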

To measure surface area and volume:
1. Select Measure Area and Volume command.
The whole model surface area and volume will be displayed in the Measure Area and Volume dialog box. Surface area is measured in square meters, while mesh volume is measured in cubic meters.
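How the two numbers can be obtained from a closed triangle mesh is sketched below using the divergence theorem (plain Python; a unit cube stands in for a real model — this is an illustration, not PhotoScan's internal code):

```python
def _cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def _dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def _sub(u, v):
    return (u[0]-v[0], u[1]-v[1], u[2]-v[2])

def surface_area(verts, faces):
    """Sum of triangle areas (square units of the model coordinate system)."""
    total = 0.0
    for i, j, k in faces:
        c = _cross(_sub(verts[j], verts[i]), _sub(verts[k], verts[i]))
        total += 0.5 * _dot(c, c) ** 0.5
    return total

def mesh_volume(verts, faces):
    """Signed volume of a *closed* mesh with outward-facing triangles.

    For a mesh with holes the per-triangle contributions no longer
    cancel correctly -- this is why a volume is only meaningful for
    closed geometry.
    """
    total = 0.0
    for i, j, k in faces:
        total += _dot(verts[i], _cross(verts[j], verts[k]))
    return total / 6.0

# A unit cube as a closed, consistently wound triangle mesh:
verts = [(0,0,0), (1,0,0), (1,1,0), (0,1,0), (0,0,1), (1,0,1), (1,1,1), (0,1,1)]
faces = [(0,2,1), (0,3,2), (4,5,6), (4,6,7), (0,1,5), (0,5,4),
         (2,3,7), (2,7,6), (0,4,7), (0,7,3), (1,2,6), (1,6,5)]
```

For the cube the sketch returns an area of 6 square units and a volume of 1 cubic unit, matching the analytic values.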

Volume measurement can be performed only for models with closed geometry. If there are any holes in the model surface, PhotoScan will report zero volume. Existing holes in the mesh surface can be filled in before performing volume measurements using the Close Holes tool.

Distance measurement
To measure distance with a Ruler:
1. Select Ruler instrument from the Toolbar of the Ortho view.
Upon the second click on the DEM the distance between the indicated points will be shown right in the Ortho view.

To measure distance with shapes:
1. Connect the points of interest with a polyline using Draw Polyline tool from the Ortho view toolbar.

Right-click on the polyline and select Measure command. In the Measure Shape dialog inspect the results. The Perimeter value equals the distance that should have been measured. In addition to the polyline length value (see the Perimeter value in the Measure Shape dialog), the coordinates of the vertices of the polyline are shown on the Planar tab of the Measure Shape dialog. To select a polyline, double-click on it. A selected polyline is coloured in red.

Surface area and volume measurement
To measure area and volume:
1. Right-click on the polygon and select Measure command.

Cross sections and contour lines
PhotoScan enables calculating cross sections, using shapes to indicate the plane(s) for the cut(s), the cut being made with a plane parallel to the Z axis.

To calculate cross section:
1. Select Generate Contours command.
Set values for the Minimal altitude and Maximal altitude parameters, as well as the Interval for the contours. All the values should be indicated in meters. When the procedure is finished, a contour lines label will be added to the project file structure shown on the Workspace pane.
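The relation between the three contour parameters can be sketched as follows (plain Python; the altitude range is hypothetical):

```python
def contour_levels(min_alt, max_alt, interval):
    """Altitudes (in metres) at which contour lines will be traced."""
    if interval <= 0:
        raise ValueError("interval must be positive")
    levels = []
    level = min_alt
    while level <= max_alt + 1e-9:   # tolerance guards against float drift
        levels.append(round(level, 6))
        level += interval
    return levels

# Hypothetical DEM range: 100 m to 120 m, with a 5 m interval.
levels = contour_levels(100.0, 120.0, 5.0)
```

Each value in the list corresponds to one contour line drawn over the DEM.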

Contour lines can be shown over the DEM or orthomosaic on the Ortho tab of the program window. Use Show contour lines tool from the Ortho tab toolbar to switch the function on and off. Contour lines can be deleted using Remove Contours command from the contour lines label context menu on the Workspace pane.

To calculate a vegetation index:
1. Open the orthomosaic in the Ortho tab by double-clicking on the orthomosaic label on the Workspace pane.
Input an index expression using keyboard input and the operator buttons of the raster calculator, if necessary.

Once the operation is completed, the result will be shown in the Ortho view, index values being visualised with colours according to the palette set in the Raster Calculator dialog. The palette defines the colour each index value is to be shown with. PhotoScan offers several standard palette presets on the Palette tab of the Raster Calculator dialog. For each new line added to the palette, a certain index value should be typed in. Double-click on the newly added line to type the value in.
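As an illustration of how an index expression and a palette work together, here is a sketch computing the common NDVI expression and mapping the result to the nearest palette entry (plain Python; the band values and palette colours are invented):

```python
def ndvi(nir, red):
    """A common index expression: (NIR - Red) / (NIR + Red)."""
    return 0.0 if nir + red == 0 else (nir - red) / (nir + red)

def colour_for(value, palette):
    """Pick the palette colour whose index value is closest to `value`.

    `palette` maps an index value to an (R, G, B) colour, mirroring the
    lines added on the Palette tab of the Raster Calculator dialog.
    """
    nearest = min(palette, key=lambda v: abs(v - value))
    return palette[nearest]

palette = {-1.0: (120, 120, 120),   # bare soil / water
            0.0: (255, 255, 0),    # sparse vegetation
            1.0: (0, 128, 0)}      # dense vegetation
pixel = colour_for(ndvi(nir=0.8, red=0.2), palette)
```

A real palette would usually interpolate between entries rather than snap to the nearest one; the snapping version keeps the sketch short.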

A customised palette can be saved for future projects using Export Palette button on the Palette tab of the Raster Calculator dialog.

To calculate contour lines based on vegetation index data:
1. Select Generate Contours command.
The contour lines will be shown over the index data on the Ortho tab.

The Orthophoto mapping mode produces even more compact texture representation than the Adaptive orthophoto mode at the expense of texture quality in vertical regions.

Single photo
The Single photo mapping mode allows generating texture from a single photo. The photo to be used for texturing can be selected from the 'Texture from' list.

Keep uv
The Keep uv mapping mode generates texture atlas using the current texture parametrization.

It can be used to rebuild the texture atlas using a different resolution or to generate the atlas for a model parametrized in external software.

Texture generation parameters
The following parameters control various aspects of texture atlas generation:

Texture from (Single photo mapping mode only)
Specifies the photo to be used for texturing.

Available only in the Single photo mapping mode.

Blending mode (not used in Single photo mode)
Selects the way pixel values from different photos are combined in the final texture.

Mosaic – implies a two-step approach: it blends the low frequency component of overlapping images to avoid the seamline problem (a weighted average, the weight depending on a number of parameters including proximity of the pixel in question to the center of the image), while the high frequency component, which is in charge of picture details, is taken from a single image – the one that offers good resolution for the area of interest and whose camera view is almost along the normal to the reconstructed surface at that point.

Average – uses the weighted average value of all pixels from individual photos, the weight depending on the same parameters that are considered for the high frequency component in Mosaic mode.

Max Intensity – the photo which has maximum intensity of the corresponding pixel is selected.

Min Intensity – the photo which has minimum intensity of the corresponding pixel is selected.

Disabled – the photo to take the color value for the pixel from is chosen like the one for the high frequency component in Mosaic mode.

Exporting texture to several files makes it possible to achieve greater resolution of the final model texture, while export of a high resolution texture to a single file can fail due to RAM limitations.

Enable color correction
The feature is useful for processing data sets with extreme brightness variation. However, please note that the color correction process takes quite a long time, so it is recommended to enable the setting only for data sets that have shown results of poor quality.

Improving texture quality
To improve resulting texture quality it may be reasonable to exclude poorly focused images from processing at this step. PhotoScan offers an automatic image quality estimation feature.
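PhotoScan's exact estimator is not spelled out here; a common sharpness proxy that behaves the same way — a higher score for a sharper image — is the variance of a Laplacian filter response. A sketch (plain Python, illustrative only):

```python
def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian over a greyscale image
    (given as a list of rows of numbers).  Higher value -> sharper image."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y-1][x] + gray[y+1][x] + gray[y][x-1]
                   + gray[y][x+1] - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A high-contrast checkerboard vs. a featureless (blurred-out) frame:
sharp  = [[(x + y) % 2 * 255 for x in range(8)] for y in range(8)]
blurry = [[128 for _ in range(8)] for _ in range(8)]
```

Ranking images by such a score and dropping the lowest-scoring ones mirrors the idea of excluding poorly focused photos before texturing.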

PhotoScan estimates image quality as a relative sharpness of the photo with respect to other images in the data set.

Saving intermediate results
Certain stages of 3D model reconstruction can take a long time. The full chain of operations could eventually last for hours when building a model from hundreds of photos.

It is not always possible to complete all the operations in one run. PhotoScan allows saving intermediate results in a project file. The following information is stored:

Photo alignment data, such as information on camera positions, the sparse point cloud model and the set of refined camera calibration parameters for each calibration group.

Masks applied to the photos in project.
Depth maps for cameras.
Dense point cloud model.
Reconstructed 3D polygonal model with any changes made by user. This includes mesh and texture if it was built.
Structure of the project.

Note that since PhotoScan tends to generate extra dense point clouds and highly detailed polygonal models, the project saving procedure can take quite a long time.

You can decrease compression level to speed up the saving process. However, please note that it will result in a larger project file. Compression level setting can be found on the Advanced tab of the Preferences dialog available from Tools menu. You can save the project at the end of any processing stage and return to it later. To restart work simply load the corresponding file into PhotoScan. Project files can also serve as backup files or be used to save different versions of the same model.

Project files use relative paths to reference original photos. Thus, when moving or copying the project file to another location do not forget to move or copy photographs with all the folder structure involved as well. Otherwise, PhotoScan will fail to run any operation requiring source images, although the project file including the reconstructed model will be loaded up correctly. Alternatively, you can enable Store absolute image paths option on the Advanced tab of the Preferences dialog available from Tools menu.

Exporting results
PhotoScan supports export of processing results in various representations: sparse and dense point clouds, camera calibration and camera orientation data, mesh, etc. Point cloud and camera calibration data can be exported right after photo alignment is completed. All other export options are available after the corresponding processing step.

To align the model orientation with the default coordinate system use Rotate object button from the Toolbar. In some cases editing model geometry in the external software may be required. PhotoScan supports model export for editing in external software and then allows to import it back as it is described in the Editing model geometry section of the manual.

Main export commands are available from the File menu, and the rest from the Export submenu of the Tools menu. Browse to the destination folder, choose the file type, and type in the file name. Click Save button. In some cases it may be reasonable to edit the point cloud before exporting it. To read about point cloud editing refer to the Editing point cloud section of the manual. In the Export Matches dialog box set the export parameters.

Precision value sets the limit to the number of decimal digits in the tie point coordinates to be saved. Later on, estimated camera data can be imported back into PhotoScan using Import Cameras command from the Tools menu to proceed with the 3D model reconstruction procedure.

Camera calibration and orientation data export
To export camera calibration and camera orientation data select Export Cameras command.

Note
Camera data export in Bundler and Boujou file formats will save sparse point cloud data in the same file.

Camera data export in Bundler file format does not save distortion coefficients k3, k4.

Panorama export
PhotoScan is capable of stitching panoramas for images taken from the same camera position – a camera station. To indicate to the software that the loaded images have been taken from one camera station, move those photos to a camera group and assign the Camera Station type to it. For information on camera groups refer to the Loading photos section. Choose the panorama orientation in the file with the help of navigation buttons to the right of the preview window in the Export Panorama dialog.

Set the export parameters: select the camera groups for which the panorama should be exported and indicate the export file name mask. Additionally, you can set boundaries for the region of the panorama to be exported using the Setup boundaries section of the Export Panorama dialog.

Text boxes in the first line (X) allow indicating the angle limits in the horizontal plane, and the second line (Y) serves for the angle limits in the vertical plane. The Image size option enables control over the size of the exported file. The texture file should be kept in the same directory as the main file describing the geometry. If the texture atlas was not built, only the model geometry is exported.

PhotoScan supports direct uploading of models to the Sketchfab resource. To publish your model online use Upload Model command.

Extra products to export
In addition to the main targeted products, PhotoScan allows exporting several other processing results, such as:
Undistorted photos (Undistort Photos command).
Depth maps for images (Export Depth command).
PhotoScan supports direct uploading of models to the Sketchfab resource and of orthomosaics to the MapBox platform.

Chapter 4. Improving camera alignment results
Camera calibration
Calibration groups
While carrying out photo alignment PhotoScan estimates both internal and external camera orientation parameters, including nonlinear radial distortions. For the estimation to be successful it is crucial to apply the estimation procedure separately to photos taken with different cameras.

All the actions described below could and should be applied or not applied to each calibration group individually. Calibration groups can be rearranged manually. A new group will be created and depicted on the left-hand part of the Camera Calibration dialog box. In the Camera Calibration dialog box choose the source group on the left-hand part of the dialog.

Select photos to be moved and drag them to the target group on the left-hand part of the Camera Calibration dialog box. To place each photo into a separate group you can use Split Groups command available at the right button click on a calibration group name in the left-hand part of the Camera Calibration dialog.

Camera types
PhotoScan supports two major types of camera: frame camera and fisheye camera. Camera type can be set in the Camera Calibration dialog box available from the Tools menu.

Frame camera
If the source data within a calibration group was shot with a frame camera, successful estimation of camera orientation parameters requires information on the approximate focal length (pix).

Obviously, to calculate the focal length value in pixels it is enough to know the focal length in mm along with the sensor pixel size in mm. Normally this data is extracted automatically from the EXIF metadata.

Frame camera with Fisheye lens
If extra wide lenses were used to get the source data, the standard PhotoScan camera model will not allow estimating camera parameters successfully. The Fisheye camera type setting will initialize a different camera model to fit ultra-wide lens distortions. In case source images lack EXIF data, or the EXIF data is insufficient to calculate the focal length in pixels, PhotoScan will assume that the focal length equals 50 mm (35 mm film equivalent).

However, if the initial guess values differ significantly from the actual focal length, the alignment process is likely to fail. So, if photos do not contain EXIF metadata, it is preferable to specify focal length (mm) and sensor pixel size (mm) manually. It can be done in the Camera Calibration dialog box available from the Tools menu. Generally, this data is indicated in the camera specification or can be found in online sources.
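The conversion mentioned above is straightforward; a sketch with hypothetical camera values:

```python
def focal_length_px(focal_mm, pixel_size_mm):
    """Focal length in pixels, as needed for the initial camera model.

    focal_mm      -- focal length from the lens / camera specification
    pixel_size_mm -- physical size of one sensor pixel (e.g. 0.006 mm for 6 um)
    """
    return focal_mm / pixel_size_mm

# Hypothetical camera: 24 mm lens on a sensor with 6 um (0.006 mm) pixels.
f_pix = focal_length_px(24.0, 0.006)
```

The same division explains the 50 mm fallback: without a pixel size from EXIF, no reliable pixel-unit value can be derived, so the guess is only a starting point for refinement.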

To indicate to the program that camera orientation parameters should be estimated based on the focal length and pixel size information, it is necessary to set the Type parameter on the Initial tab to Auto value.

Camera calibration parameters Once you have tried to run the estimation procedure and got poor results, you can improve them thanks to the additional data on calibration parameters. Select calibration group, which needs reestimation of camera orientation parameters on the left side of the Camera Calibration dialog box.

Note Alternatively, initial calibration data can be imported from file using Load button on the Initial tab of the Camera Calibration dialog box. Initial calibration data will be adjusted during the Align Photos processing step. Once Align Photos processing step is finished adjusted calibration data will be displayed on the Adjusted tab of the Camera Calibration dialog box. If very precise calibration data is available, to protect it from recalculation one should check Fix calibration box.

In this case initial calibration data will not be changed during Align Photos process. Adjusted camera calibration data can be saved to file using Save button on the Adjusted tab of the Camera Calibration dialog box. Estimated camera distortions can be seen on the distortion plot available from context menu of a camera group in the Camera Calibration dialog.

In addition, the residuals graph (the second tab of the same Distortion Plot dialog) allows evaluating how adequately the camera is described by the applied mathematical model. Note that residuals are averaged per cell of an image and then across all the images in a camera group.

Calibration parameters list
fx, fy
Focal length in x- and y-dimensions measured in pixels.

Optimization Optimization of camera alignment During photo alignment step PhotoScan automatically finds tie points and estimates intrinsic and extrinsic camera parameters. However, the accuracy of the estimates depends on many factors, like overlap between the neighbouring photos, as well as on the shape of the object surface. Thus, it is recommended to inspect alignment results in order to delete tie points with too large reprojection error if any.

Please refer to Editing point cloud section for information on point cloud editing. Once the set of tie points has been edited, it is necessary to run optimization procedure to reestimate intrinsic and extrinsic camera parameters. Optimization procedure calculates intrinsic and extrinsic camera parameters based on the tie points left after editing procedure. Providing that outliers have been removed, the estimates will be more accurate.

In addition, this step involves estimation of a number of intrinsic camera parameters which are fixed at the alignment step: aspect, skew, and the distortion parameters p3, p4, k4. In the Optimize Camera Alignment dialog box check the camera parameters to be optimized. Click OK button to start optimization. After optimization is complete, estimated intrinsic camera parameters can be inspected on the Adjusted tab of the Camera Calibration dialog available from the Tools menu.

Note
The model data (if any) is cleared by the optimization procedure. You will have to rebuild the model geometry after optimization.

Masks are used in PhotoScan to specify the areas on the photos which can otherwise be confusing to the program or lead to incorrect reconstruction results. Masks can be applied at the following stages of processing:
Alignment of the photos
Building dense point cloud
Building 3D model texture

Alignment of the photos
Masked areas can be excluded during feature point detection.

Thus, the objects on the masked parts of the photos are not taken into account while estimating camera positions. This is important in the setups, where the object of interest is not static with respect to the scene, like when using a turn table to capture the photos. Masking may be also useful when the object of interest occupies only a small part of the photo.

In this case a small number of useful matches can be filtered out mistakenly as a noise among a much greater number of matches between background objects. Building dense point cloud While building dense point cloud, masked areas are not used in the depth maps computation process.

Masking can be used to reduce the resulting dense cloud complexity, by eliminating the areas on the photos that are not of interest. Masked areas are always excluded from processing during dense point cloud and texture generation stages.

Let’s take for instance a set of photos of some object. Along with an object itself on each photo some background areas are present. These areas may be useful for more precise camera positioning, so it is better to use them while aligning the photos.

However, impact of these areas at the building dense point cloud is exactly opposite: the resulting model will contain object of interest and its background. Background geometry will „consume“ some part of mesh polygons that could be otherwise used for modeling the main object. Setting the masks for such background areas allows to avoid this problem and increases the precision and quality of geometry reconstruction.

Building texture atlas During texture atlas generation, masked areas on the photos are not used for texturing. Masking areas on the photos that are occluded by outliers or obstacles helps to prevent the „ghosting“ effect on the resulting texture atlas.

Loading masks Masks can be loaded from external sources, as well as generated automatically from background images if such data is available. PhotoScan supports loading masks from the following sources: From alpha channel of the source photos. From separate images. Generated from background photos based on background differencing technique.

Based on reconstructed 3D model. When generating masks from separate or background images, the folder selection dialog will appear. Browse to the folder containing corresponding images and select it. The following parameters can be specified during mask import: Import masks for Specifies whether masks should be imported for the currently opened photo, active chunk or entire Workspace. Current photo – load mask for the currently opened photo if any.

Active chunk – load masks for active chunk. Entire workspace – load masks for all chunks in the project. Method Specifies the source of the mask data. From Alpha – load masks from alpha channel of the source photos. From File – load masks from separate images. From Background – generate masks from background photos.

From Model – generate masks based on reconstructed model.

Mask file names (not used in From alpha mode)
Specifies the file name template used to generate mask file names. This template can contain special tokens that will be substituted by the corresponding data for each photo being processed. The following tokens are supported:

Tolerance (From Background method only)
Specifies the tolerance threshold used for background differencing. Tolerance value should be set according to the color separation between foreground and background pixels.
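The role of the tolerance threshold can be sketched as a per-pixel comparison against the background image (plain Python; the tiny images are invented):

```python
def background_mask(photo, background, tolerance):
    """Per-pixel foreground mask via background differencing.

    A pixel is kept (True = foreground) when the largest per-channel
    difference from the background image exceeds the tolerance.
    """
    mask = []
    for row_p, row_b in zip(photo, background):
        mask.append([max(abs(pc - bc) for pc, bc in zip(p, b)) > tolerance
                     for p, b in zip(row_p, row_b)])
    return mask

# Two hypothetical 1x3 RGB images; only the middle pixel differs strongly.
photo      = [[(10, 10, 10), (200, 50, 50), (12, 11, 10)]]
background = [[(10, 10, 10), ( 20, 50, 50), (10, 10, 10)]]
mask = background_mask(photo, background, tolerance=30)
```

Raising the tolerance absorbs small lighting differences into the background; lowering it keeps weakly contrasting foreground pixels at the cost of more noise.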

For larger separation higher tolerance values can be used.

Editing masks
Modification of the current mask is performed by adding or subtracting selections. A selection is created with one of the supported selection tools and is not incorporated into the current mask until it is merged with the mask using Add Selection or Subtract Selection operations. The photo will be opened in the main window. The existing mask will be displayed as a shaded region on the photo. Use Subtract Selection to subtract the selection from the mask.

Invert Selection button allows inverting the current selection prior to adding it to or subtracting it from the mask. The following tools can be used for creating selections:

Rectangle selection tool
Rectangle selection tool is used to select large areas or to clean up the mask after other selection tools were applied.

Intelligent scissors tool
Intelligent scissors is used to generate a selection by specifying its boundary. The boundary is formed by selecting a sequence of vertices with the mouse, which are automatically connected with segments. The segments can be formed either by straight lines, or by curved contours snapped to the object boundaries. To enable snapping, hold the Ctrl key while selecting the next vertex. To complete the selection, the boundary should be closed by clicking on the first boundary vertex.

Intelligent paint tool
Intelligent paint tool is used to "paint" a selection with the mouse, continuously adding small image regions bounded by object boundaries.

Magic wand tool
Magic Wand tool is used to select uniform areas of the image. To make a selection with the Magic Wand tool, click inside the region to be selected. The range of pixel colors selected by the Magic Wand is controlled by the tolerance value. At lower tolerance values the tool selects fewer colors similar to the pixel you click with the Magic Wand tool.

Higher values broaden the range of colors selected.

Note
To add a new area to the current selection hold the Ctrl key during selection of the additional area. To reset mask selection on the current photo press the Esc key.

A mask can be inverted using Invert Mask command from the Photo menu. The command is active in Photo View only. Alternatively, you can invert masks either for selected cameras or for all cameras in a chunk using Invert Masks command. The masks are generated individually for each image.
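The tolerance behaviour of the Magic Wand can be sketched as a flood fill from the clicked pixel (plain Python on a toy greyscale image; illustrative only):

```python
from collections import deque

def magic_wand(image, seed, tolerance):
    """Flood-fill selection of 4-connected pixels whose value is within
    `tolerance` of the clicked (seed) pixel."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    target = image[sy][sx]
    selected = {(sy, sx)}
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in selected
                    and abs(image[ny][nx] - target) <= tolerance):
                selected.add((ny, nx))
                queue.append((ny, nx))
    return selected

image = [[10, 12, 90],
         [11, 95, 92],
         [13, 14, 91]]
sel = magic_wand(image, seed=(0, 0), tolerance=5)
```

With a low tolerance only the dark, connected region around the seed is selected; raising the tolerance eventually swallows the bright pixels too.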

If some object should be masked out, it should be masked out on all photos where that object appears. The following parameters can be specified during mask export:

Export masks for
Specifies whether masks should be exported for the currently opened photo, the active chunk or the entire Workspace.

Current photo – save mask for the currently opened photo if any. Active chunk – save masks for active chunk. Entire workspace – save masks for all chunks in the project.

File type
Specifies the type of generated files.
Single channel mask image – generates single channel black and white mask images.
Image with alpha channel – generates color images from source photos combined with mask data in the alpha channel.

Mask file names
Specifies the file name template used to generate mask file names. Mask file names parameter will not be used in this case.

Editing point cloud
The following point cloud editing tools are available in PhotoScan:
Automatic filtering based on specified criterion (sparse cloud only)
Automatic filtering based on applied masks (dense cloud only)
Automatic filtering based on points colours (dense cloud only)
Reducing number of points in cloud by setting tie point per photo limit (sparse cloud only)
Manual points removal

Filtering points based on specified criterion
In some cases it may be useful to find out where the points with high reprojection error are located within the sparse cloud, or to remove points representing a high amount of noise. Point cloud filtering helps to select such points, which usually are supposed to be removed.

PhotoScan supports the following criteria for point cloud filtering:

Reprojection error
High reprojection error usually indicates poor localization accuracy of the corresponding point projections at the point matching step.

It is also typical for false matches. Removing such points can improve accuracy of the subsequent optimization step.

Reconstruction uncertainty
High reconstruction uncertainty is typical for points reconstructed from nearby photos with a small baseline. Such points can noticeably deviate from the object surface, introducing noise in the point cloud.

While removal of such points should not affect the accuracy of optimization, it may be useful to remove them before building geometry in Point Cloud mode or for better visual appearance of the point cloud.

Image count
PhotoScan reconstructs all the points that are visible on at least two photos.

However, points that are visible on only two photos are likely to be located with poor accuracy. Image count filtering enables removing such unreliable points from the cloud.

Projection Accuracy
This criterion allows filtering out points whose projections were localised relatively poorly due to their larger size.

In the Gradual Selection dialog box specify the criterion to be used for filtering.

Adjust the threshold level using the slider. You can observe how the selection changes while dragging the slider. Click OK button to finalize the selection. To remove selected points use Delete Selection command from the Edit menu, click the Delete Selection toolbar button, or simply press the Del button on the keyboard.

Filtering points based on applied masks
To remove points based on applied masks:
1. In the Select Masked Points dialog box indicate the photos whose masks are to be taken into account.

Adjust the edge softness level using the slider. Click OK button to run the selection procedure.

Filtering points based on point colours
To remove points based on point colours:
1. In the Select Points by Color dialog box specify the color to be used as the criterion. Adjust the tolerance level using the slider.

Tie point per photo limit
The Tie point limit parameter can be adjusted before the Align Photos procedure. The number indicates the upper limit of matching points for every image. A zero value disables tie-point filtering.

The number of tie points can also be reduced after the alignment process with Tie Points – Thin Point Cloud command available from Tools menu. To add new points to the current selection hold the Ctrl key during selection of additional points. To remove some points from the current selection hold the Shift key during selection of points to be removed.
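The effect of the tie point limit and of thinning can be sketched as keeping only the strongest matches per image (plain Python; the match scores are invented — PhotoScan's actual ranking criterion may differ):

```python
def thin_tie_points(matches, limit):
    """Keep at most `limit` matching points per image, preferring the
    strongest ones (here: highest score).  limit == 0 disables filtering."""
    if limit == 0:
        return matches
    return {image: sorted(pts, key=lambda p: p[1], reverse=True)[:limit]
            for image, pts in matches.items()}

# Hypothetical (point_id, score) matches for two images:
matches = {"IMG_001": [(1, 0.9), (2, 0.4), (3, 0.7)],
           "IMG_002": [(1, 0.8), (4, 0.3)]}
thinned = thin_tie_points(matches, limit=2)
```

Images that already have fewer matches than the limit are left untouched, so thinning mostly trims the densest images.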

To delete selected points click the Delete Selection toolbar button or select Delete Selection command from the Edit menu. To crop selection to the selected points click the Crop Selection toolbar button or select Crop Selection command from the Edit menu.

Editing model geometry
The following mesh editing tools are available in PhotoScan:
Decimation tool
Close Holes tool
Automatic filtering based on specified criterion
Manual polygon removal
Fixing mesh topology

More complex editing can be done in external 3D editing tools.

PhotoScan allows exporting the mesh and then importing it back for this purpose.

Note
For polygon removal operations, such as manual removal and connected component filtering, it is possible to undo the last mesh editing operation.

Decimation tool
Decimation is a tool used to decrease the geometric resolution of the model by replacing a high resolution mesh with a lower resolution one, which is still capable of representing the object geometry with high accuracy.

PhotoScan tends to produce 3D models with excessive geometry resolution, so mesh decimation is usually a desirable step after geometry computation. Highly detailed models may contain hundreds of thousands of polygons. While it is acceptable to work with such complex models in 3D editor tools, in most conventional tools like Adobe Reader or Google Earth the high complexity of 3D models may noticeably decrease application performance.

High complexity also results in a longer time required to build texture and to export the model in PDF file format.

In some cases it is desirable to keep as much geometry detail as possible, as needed for scientific and archival purposes. However, if there are no special requirements it is recommended to decimate the model down to – polygons for exporting in PDF, and to or even less for displaying in Google Earth and similar tools.
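PhotoScan does not document its decimation algorithm, but the general idea of trading polygon count for fidelity can be illustrated with a classic simplification technique, vertex clustering on a uniform grid. All names below are invented for this sketch; this is not PhotoScan's method:

```python
def decimate_by_clustering(vertices, faces, cell_size):
    """Merge all vertices falling into the same grid cell, then drop faces
    that collapse (two or more corners merged into one vertex)."""
    cell_of = {}       # grid cell -> index of the merged vertex
    remap = []         # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        if cell not in cell_of:
            cell_of[cell] = len(new_vertices)
            new_vertices.append((x, y, z))
        remap.append(cell_of[cell])
    new_faces = []
    for a, b, c in faces:
        a, b, c = remap[a], remap[b], remap[c]
        if len({a, b, c}) == 3:  # keep only non-degenerate faces
            new_faces.append((a, b, c))
    return new_vertices, new_faces
```

Larger cells merge more vertices and remove more faces, which is the same trade-off the Decimate Mesh dialog exposes through its target polygon count.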

In the Decimate Mesh dialog box specify the target number of polygons which should remain in the final model. Click the OK button to start decimation. To cancel processing click the Cancel button.

Note: Texture atlas is discarded during the decimation process. You will have to rebuild the texture atlas after decimation is complete.

Close Holes tool
The Close Holes tool provides the possibility to repair your model if the reconstruction procedure resulted in a mesh with several holes, due to insufficient image overlap for example.

Close holes tool enables to close void areas on the model substituting photogrammetric reconstruction with extrapolation data. It is possible to control an acceptable level of accuracy indicating the maximum size of a hole to be covered with extrapolated data.

In the Close Holes dialog box indicate the maximum size of a hole to be covered with the slider. Click the OK button to start the procedure.

Note: The slider allows to set the size of a hole in relation to the size of the whole model surface.

Polygon filtering on specified criterion
In some cases reconstructed geometry may contain a cloud of small isolated mesh fragments surrounding the „main“ model, or big unwanted polygons.

Mesh filtering based on different criteria helps to select polygons which usually are supposed to be removed. PhotoScan supports the following criteria for face filtering:

Connected component size
This filtering criterion allows to select isolated fragments with a certain number of polygons. The number of polygons in all isolated components to be selected is set with a slider and is indicated in relation to the number of polygons in the whole model. The components are ranked by size, so that the selection proceeds from the smallest component to the largest one.

Polygon size
This filtering criterion allows to select polygons up to a certain size. The size of the polygons to be selected is set with a slider and is indicated in relation to the size of the whole model.

This function can be useful, for example, in case the geometry was reconstructed in Smooth type and there is a need to remove extra polygons automatically added by PhotoScan to fill the gaps; these polygons are often of a larger size than the rest. Select the size of isolated components to be removed using the slider.

To remove the selected components use Delete Selection command from the Edit menu or click Delete Selection toolbar button or simply press Del button on the keyboard.
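The component-size criterion can be modelled as follows. This is a toy sketch with invented names; `max_fraction` plays the role of the slider value, expressed as a fraction of the whole model's polygon count, and a single-component model yields an empty selection as noted below:

```python
def select_small_components(component_sizes, max_fraction):
    """component_sizes: list of polygon counts, one per isolated component.
    Return indices of components whose size is at most max_fraction of the
    total polygon count, ordered smallest first (candidates for removal)."""
    if len(component_sizes) <= 1:  # a single-component model: nothing to select
        return []
    total = sum(component_sizes)
    ranked = sorted(range(len(component_sizes)),
                    key=lambda i: component_sizes[i])
    return [i for i in ranked if component_sizes[i] <= max_fraction * total]
```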

Select the size of polygons to be removed using the slider. Note that PhotoScan always selects the fragments starting from the smallest ones. If the model contains only one component the selection will be empty.

Manual face removal
Unnecessary and excessive sections of model geometry can also be removed manually. Make the selection using the mouse. To add new polygons to the current selection hold the Ctrl key during selection of additional polygons.

To remove some polygons from the current selection hold the Shift key during selection of polygons to be excluded. To crop selection to the selected polygons click the toolbar button or use Crop Selection command from the Edit menu. To grow current selection press PageUp key in the selection mode.

To grow selection by even a larger amount, press PageUp while holding Shift key pressed. To shrink current selection press PageDown key in the selection mode. To shrink selection by even a larger amount, press PageDown while holding Shift key pressed. In the Mesh Statistics dialog box you can inspect mesh parameters. If there are any topological problems, Fix Topology button will be active and can be clicked to solve the problems.

Editing mesh in an external program
To export mesh for editing in an external program:
1. In the Save As dialog box, specify the desired mesh format in the Save as type combo box.
2. Select the file name to be used for the model and click the Save button.
3. In the opened dialog box specify additional parameters specific to the selected file format.
Please make sure to select one of these file formats when exporting the model from the external 3D editor.

Chapter 6. Automation

Using chunks
When working with typical data sets, automation of the general processing workflow allows to perform routine operations efficiently. PhotoScan allows to assign several processing steps to be run one by one without user intervention thanks to the Batch Processing feature.

Manual user intervention can be minimized even further thanks to the 'multiple chunk project' concept, with each chunk including one typical data set. For a project with several chunks of the same nature, common operations available in the Batch Processing dialog are applied to each selected chunk individually, thus allowing to set several data sets for automatic processing following a predefined workflow pattern. In addition, a multiple chunk project could be useful when it turns out to be hard or even impossible to generate a 3D model of the whole scene in one go.

This could happen, for instance, if the total amount of photographs is too large to be processed at a time. To overcome this difficulty PhotoScan offers a possibility to split the set of photos into several separate chunks within the same project. Alignment of photos, building dense point cloud, building mesh, and forming texture atlas operations can be performed for each chunk separately and then resulting 3D models can be combined together.

Working with chunks is not more difficult than using PhotoScan following the general workflow. In fact, there always exists at least one active chunk in PhotoScan, and all the 3D model processing workflow operations are applied to this chunk.

To work with several chunks you need to know how to create chunks and how to combine resulting 3D models from separate chunks into one model.

Creating a chunk
To create a new chunk click on the Add Chunk toolbar button on the Workspace pane or select the Add Chunk command from the Workspace context menu, available by right-clicking on the root element on the Workspace pane.

After the chunk is created you may load photos in it, align them, generate dense point cloud, generate mesh surface model, build texture atlas, export the models at any stage and so on. The models in the chunks are not linked with each other. The list of all the chunks created in the current project is displayed in the Workspace pane along with flags reflecting their status.

The following flags can appear next to the chunk name:
R (Referenced): will appear when two or more chunks are aligned with each other.
To move photos from one chunk to another simply select them in the list of photos on the Workspace pane, and then drag and drop them to the target chunk.

Working with chunks
All operations within the chunk are carried out following the common workflow: loading photographs, aligning them, generating dense point cloud, building mesh, building texture atlas, exporting 3D model and so on. Note that all these operations are applied to the active chunk. When a new chunk is created it is activated automatically. The Save project operation saves the content of all chunks.

To save selected chunks as a separate project use the Save Chunks command from the chunk context menu.

Aligning chunks
After the „partial“ 3D models are built in several chunks they can be merged together. Before merging, the chunks need to be aligned. In the Align Chunks dialog box select chunks to be aligned and indicate the reference chunk with a double-click.

Set desired alignment options. To cancel processing click the Cancel button.

Aligning chunks parameters
The following parameters control the chunks alignment procedure and can be modified in the Align Chunks dialog box:
Method
Defines the chunks alignment method. Point based method aligns chunks by matching photos across all the chunks. Camera based method is used to align chunks based on estimated camera locations. Corresponding cameras should have the same label.

Accuracy (Point based alignment only)
Higher accuracy setting helps to obtain more accurate chunk alignment results. Lower accuracy setting can be used to get a rough chunk alignment in a shorter time.
Point limit (Point based alignment only)
The number indicates the upper limit of feature points on every image to be taken into account during Point based chunks alignment.
Fix scale
Option is to be enabled in case the scales of the models in different chunks were set precisely and should be left unchanged during the chunks alignment process.

Preselect image pairs (Point based alignment only)
The alignment process of many chunks may take a long time.
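The camera based method's reliance on matching labels can be illustrated with a toy pairing function. The data layout here is assumed for the sketch; PhotoScan handles this internally:

```python
def matching_camera_pairs(chunk_a, chunk_b):
    """chunk_a, chunk_b: {camera_label: estimated_position} per chunk.
    Return [(position_in_a, position_in_b), ...] for labels present in
    both chunks: the correspondences a rigid transform would be fitted to."""
    shared = sorted(set(chunk_a) & set(chunk_b))
    return [(chunk_a[label], chunk_b[label]) for label in shared]
```

Cameras whose labels appear in only one chunk contribute nothing, which is why corresponding cameras must carry the same label for this method to work.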


Note: Camera data export in Bundler and Boujou file formats will save sparse point cloud data in the same file. Camera data export in Bundler file format would not save distortion coefficients k3, k4.

Panorama export
PhotoScan is capable of panorama stitching for images taken from the same camera position – camera station.

To indicate for the software that loaded images have been taken from one camera station, one should move those photos to a camera group and assign Camera Station type to it. For information on camera groups refer to Loading photos section.

Choose panorama orientation in the file with the help of navigation buttons to the right of the preview window in the Export Panorama dialog. Set exporting parameters: select the camera groups for which the panorama should be exported and indicate the export file name mask. Additionally, you can set boundaries for the region of panorama to be exported using the Setup boundaries section of the Export Panorama dialog. Text boxes in the first line (X) allow to indicate the angle limits in the horizontal plane, while the second line (Y) serves for the angle limits in the vertical plane.

The Image size option enables to control the size of the exported file. The texture file should be kept in the same directory as the main file describing the geometry. If the texture atlas was not built, only the model geometry is exported. PhotoScan supports direct uploading of the models to the Sketchfab resource. To publish your model online use the Upload Model command.

Extra products to export
In addition to the main targeted products, PhotoScan allows to export several other processing results, such as undistorted photos, i.e. photos with lens distortions removed.

A depth map for any image (Export Depth). PhotoScan supports direct uploading of the models to the Sketchfab resource and of the orthomosaics to the MapBox platform.

Chapter 4. Improving camera alignment results

Camera calibration
Calibration groups
While carrying out photo alignment PhotoScan estimates both internal and external camera orientation parameters, including nonlinear radial distortions.

For the estimation to be successful it is crucial to apply the estimation procedure separately to photos taken with different cameras. All the actions described below could and should be applied or not applied to each calibration group individually. Calibration groups can be rearranged manually.

A new group will be created and depicted on the left-hand part of the Camera Calibration dialog box. In the Camera Calibration dialog box choose the source group on the left-hand part of the dialog. Select photos to be moved and drag them to the target group on the left-hand part of the Camera Calibration dialog box.

To place each photo into a separate group you can use the Split Groups command, available by right-clicking on a calibration group name in the left-hand part of the Camera Calibration dialog.

Camera types
PhotoScan supports two major types of camera: frame camera and fisheye camera.

Camera type can be set in the Camera Calibration dialog box available from the Tools menu. Frame camera. If the source data within a calibration group was shot with a frame camera, for successful estimation of camera orientation parameters the information on approximate focal length (pix) is required. Obviously, to calculate the focal length value in pixels it is enough to know the focal length in mm along with the sensor pixel size in mm.

Normally this data is extracted automatically from the EXIF metadata. Frame camera with Fisheye lens. If extra wide lenses were used to get the source data, standard PhotoScan camera model will not allow to estimate camera parameters successfully.

Fisheye camera type setting will initialize implementation of a different camera model to fit ultra-wide lens distortions. In case source images lack EXIF data, or the EXIF data is insufficient to calculate the focal length in pixels, PhotoScan will assume that the focal length equals 50 mm (35 mm film equivalent). However, if the initial guess values differ significantly from the actual focal length, it is likely to lead to failure of the alignment process.

So, if photos do not contain EXIF metadata, it is preferable to specify focal length (mm) and sensor pixel size (mm) manually. It can be done in the Camera Calibration dialog box available from the Tools menu. Generally, this data is indicated in the camera specification or can be received from some online source. To indicate to the program that camera orientation parameters should be estimated based on the focal length and pixel size information, it is necessary to set the Type parameter on the Initial tab to Auto value.
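The conversion described above is simple enough to state directly (the helper names are illustrative): the focal length in pixels is the focal length in mm divided by the pixel size in mm, and the pixel size can in turn be derived from the sensor width and the image width in pixels, assuming square pixels.

```python
def pixel_size_mm(sensor_width_mm, image_width_px):
    """Physical size of one pixel in mm, assuming square pixels."""
    return sensor_width_mm / image_width_px

def focal_length_pix(focal_mm, pixel_mm):
    """Focal length expressed in pixels: f[pix] = f[mm] / pixel size[mm]."""
    return focal_mm / pixel_mm
```

For example, a 50 mm lens over 5 micron (0.005 mm) pixels gives a focal length of 10000 pixels.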

Camera calibration parameters
Once you have tried to run the estimation procedure and got poor results, you can improve them thanks to the additional data on calibration parameters.

Select calibration group, which needs reestimation of camera orientation parameters on the left side of the Camera Calibration dialog box. Note Alternatively, initial calibration data can be imported from file using Load button on the Initial tab of the Camera Calibration dialog box. Initial calibration data will be adjusted during the Align Photos processing step. Once Align Photos processing step is finished adjusted calibration data will be displayed on the Adjusted tab of the Camera Calibration dialog box.

If very precise calibration data is available, to protect it from recalculation one should check Fix calibration box. In this case initial calibration data will not be changed during Align Photos process.

Adjusted camera calibration data can be saved to file using Save button on the Adjusted tab of the Camera Calibration dialog box. Estimated camera distortions can be seen on the distortion plot available from context menu of a camera group in the Camera Calibration dialog. In addition, residuals graph the second tab of the same Distortion Plot dialog allows to evaluate how adequately the camera is described with the applied mathematical model.

Note that residuals are averaged per cell of an image and then across all the images in a camera group.

Calibration parameters list
fx, fy
Focal length in the x- and y-dimensions measured in pixels.
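The distortion coefficients mentioned throughout this chapter (k1, k2, k3 for radial and p1, p2 for tangential distortion) act on normalized image coordinates. As a hedged sketch, the widely used Brown-style model below shows how such coefficients work; PhotoScan's exact formulation and its tangential-term convention are not reproduced here and may differ:

```python
def distort(x, y, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Apply radial (k1..k3) and tangential (p1, p2) distortion to
    normalized image coordinates (offset from the principal point
    divided by the focal length). Returns distorted coordinates."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    yd = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return xd, yd
```

With all coefficients at zero the mapping is the identity, which is why a well-corrected lens shows a nearly flat distortion plot.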

Optimization
Optimization of camera alignment
During the photo alignment step PhotoScan automatically finds tie points and estimates intrinsic and extrinsic camera parameters. However, the accuracy of the estimates depends on many factors, like overlap between the neighbouring photos, as well as on the shape of the object surface. Thus, it is recommended to inspect alignment results in order to delete tie points with too large reprojection error, if any.

Please refer to Editing point cloud section for information on point cloud editing. Once the set of tie points has been edited, it is necessary to run optimization procedure to reestimate intrinsic and extrinsic camera parameters. Optimization procedure calculates intrinsic and extrinsic camera parameters based on the tie points left after editing procedure. Providing that outliers have been removed, the estimates will be more accurate.

In addition, this step involves estimation of a number of intrinsic camera parameters which are fixed at the alignment step: aspect, skew; and distortion parameters p3, p4, k4. In Optimize Camera Alignment dialog box check camera parameters to be optimized. Click OK button to start optimization.

After optimization is complete, estimated intrinsic camera parameters can be inspected on the Adjusted tab of the Camera Calibration dialog available from the Tools menu. Note The model data if any is cleared by the optimization procedure. You will have to rebuild the model geometry after optimization.

Masks are used in PhotoScan to specify the areas on the photos which can otherwise be confusing to the program or lead to incorrect reconstruction results. Masks can be applied at the following stages of processing:
Alignment of the photos
Building dense point cloud
Building 3D model texture

Alignment of the photos
Masked areas can be excluded during feature point detection.

Thus, the objects on the masked parts of the photos are not taken into account while estimating camera positions. This is important in the setups, where the object of interest is not static with respect to the scene, like when using a turn table to capture the photos. Masking may be also useful when the object of interest occupies only a small part of the photo.

In this case a small number of useful matches can be filtered out mistakenly as a noise among a much greater number of matches between background objects. Building dense point cloud While building dense point cloud, masked areas are not used in the depth maps computation process. Masking can be used to reduce the resulting dense cloud complexity, by eliminating the areas on the photos that are not of interest.

Masked areas are always excluded from processing during dense point cloud and texture generation stages. Let’s take for instance a set of photos of some object. Along with an object itself on each photo some background areas are present.

These areas may be useful for more precise camera positioning, so it is better to use them while aligning the photos. However, the impact of these areas on dense point cloud building is exactly the opposite: the resulting model will contain the object of interest and its background. Background geometry will „consume“ some part of the mesh polygons that could be otherwise used for modeling the main object.

Setting the masks for such background areas allows to avoid this problem and increases the precision and quality of geometry reconstruction.

Building texture atlas
During texture atlas generation, masked areas on the photos are not used for texturing.

Masking areas on the photos that are occluded by outliers or obstacles helps to prevent the „ghosting“ effect on the resulting texture atlas.

Loading masks
Masks can be loaded from external sources, as well as generated automatically from background images if such data is available.

PhotoScan supports loading masks from the following sources:
From alpha channel of the source photos.
From separate images.
Generated from background photos based on background differencing technique.
Based on reconstructed 3D model.

When generating masks from separate or background images, the folder selection dialog will appear. Browse to the folder containing corresponding images and select it. The following parameters can be specified during mask import:
Import masks for
Specifies whether masks should be imported for the currently opened photo, active chunk or entire Workspace.
Current photo – load mask for the currently opened photo (if any).
Active chunk – load masks for active chunk.

Entire workspace – load masks for all chunks in the project.
Method
Specifies the source of the mask data.
From Alpha – load masks from alpha channel of the source photos.
From File – load masks from separate images.
From Background – generate masks from background photos.
From Model – generate masks based on reconstructed model.
Mask file names (not used in From alpha mode)
Specifies the file name template used to generate mask file names.

This template can contain special tokens that will be substituted by corresponding data for each photo being processed. The following tokens are supported:
Tolerance (From Background method only)
Specifies the tolerance threshold used for background differencing. Tolerance value should be set according to the color separation between foreground and background pixels.
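Background differencing with a tolerance can be pictured as a per-pixel comparison. This is a toy model over flat pixel lists with invented names; PhotoScan's actual implementation is not public:

```python
def background_mask(photo, background, tolerance):
    """photo, background: equal-length lists of (r, g, b) pixels.
    Return a list of booleans, True where the pixel matches the
    background within `tolerance` and is therefore masked out."""
    mask = []
    for (r, g, b), (br, bg, bb) in zip(photo, background):
        diff = max(abs(r - br), abs(g - bg), abs(b - bb))
        mask.append(diff <= tolerance)
    return mask
```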

For larger separation higher tolerance values can be used.

Editing masks
Modification of the current mask is performed by adding or subtracting selections. A selection is created with one of the supported selection tools and is not incorporated in the current mask until it is merged with a mask using Add Selection or Subtract Selection operations.
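Conceptually, merging a selection into the mask, subtracting it, and inverting it are plain set operations on pixel positions. The toy model below (names invented for this sketch) mirrors the Add Selection, Subtract Selection and Invert Selection operations:

```python
def add_selection(mask, selection):
    """Add Selection: the mask grows by the selected pixels."""
    return mask | selection

def subtract_selection(mask, selection):
    """Subtract Selection: the selected pixels are removed from the mask."""
    return mask - selection

def invert_selection(selection, all_pixels):
    """Invert Selection: everything outside the selection becomes selected."""
    return all_pixels - selection
```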

The photo will be opened in the main window. The existing mask will be displayed as a shaded region on the photo. Use the Add Selection button to add the current selection to the mask, or the Subtract Selection button to subtract the selection from the mask. The Invert Selection button allows to invert the current selection prior to adding or subtracting it from the mask.

The following tools can be used for creating selections:
Rectangle selection tool
Rectangle selection tool is used to select large areas or to clean up the mask after other selection tools were applied.

Intelligent scissors tool Intelligent scissors is used to generate a selection by specifying its boundary. The boundary is formed by selecting a sequence of vertices with a mouse, which are automatically connected with segments.

The segments can be formed either by straight lines, or by curved contours snapped to the object boundaries. To enable snapping, hold Ctrl key while selecting the next vertex.

To complete the selection, the boundary should be closed by clicking on the first boundary vertex.
Intelligent paint tool
Intelligent paint tool is used to „paint“ a selection by the mouse, continuously adding small image regions, bounded by object boundaries.
Magic wand tool
Magic Wand tool is used to select uniform areas of the image. To make a selection with the Magic Wand tool, click inside the region to be selected. The range of pixel colors selected by Magic Wand is controlled by the tolerance value.

At lower tolerance values the tool selects fewer colors similar to the pixel you click with the Magic Wand tool. A higher value broadens the range of colors selected.

Note: To add a new area to the current selection hold the Ctrl key during selection of the additional area. To reset mask selection on the current photo press the Esc key.

A mask can be inverted using the Invert Mask command from the Photo menu. The command is active in Photo View only. Alternatively, you can invert masks either for selected cameras or for all cameras in a chunk using the Invert Masks command. The masks are generated individually for each image.
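The magic wand's tolerance-bounded region growing can be sketched as a flood fill from the clicked pixel. This grayscale toy example is only an illustration; the real tool works on color images and is more sophisticated:

```python
from collections import deque

def magic_wand(image, seed, tolerance):
    """image: 2D list of grayscale values; seed: (row, col) of the click.
    Return the set of selected (row, col) positions reachable from the
    seed through pixels within `tolerance` of the seed value."""
    rows, cols = len(image), len(image[0])
    seed_value = image[seed[0]][seed[1]]
    selected, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in selected
                    and abs(image[nr][nc] - seed_value) <= tolerance):
                selected.add((nr, nc))
                queue.append((nr, nc))
    return selected
```

Raising the tolerance widens the accepted value range and therefore grows the selected region, matching the behaviour described above.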

If some object should be masked out, it should be masked out on all photos where that object appears. The following parameters can be specified during mask export:
Export masks for
Specifies whether masks should be exported for the currently opened photo, active chunk or entire Workspace.
Current photo – save mask for the currently opened photo (if any).
Active chunk – save masks for active chunk.
Entire workspace – save masks for all chunks in the project.

File type
Specifies the type of generated files.
Single channel mask image – generates single channel black and white mask images.
Image with alpha channel – generates color images from source photos combined with mask data in alpha channel.

Mask file names
Specifies the file name template used to generate mask file names. Mask file names parameter will not be used in this case.

Editing point cloud
The following point cloud editing tools are available in PhotoScan:
Automatic filtering based on specified criterion (sparse cloud only)
Automatic filtering based on applied masks (dense cloud only)
Automatic filtering based on point colors (dense cloud only)
Reducing number of points in cloud by setting tie point per photo limit (sparse cloud only)
Manual points removal

Filtering points based on specified criterion
In some cases it may be useful to find out where the points with high reprojection error are located within the sparse cloud, or remove points representing high amount of noise.

Point cloud filtering helps to select such points, which usually are supposed to be removed. PhotoScan supports the following criteria for point cloud filtering:
Reprojection error
High reprojection error usually indicates poor localization accuracy of the corresponding point projections at the point matching step. It is also typical for false matches. Removing such points can improve accuracy of the subsequent optimization step.

Reconstruction uncertainty
High reconstruction uncertainty is typical for points reconstructed from nearby photos with a small baseline. Such points can noticeably deviate from the object surface, introducing noise in the point cloud. While removal of such points should not affect the accuracy of optimization, it may be useful to remove them before building geometry in Point Cloud mode or for better visual appearance of the point cloud.

Image count
PhotoScan reconstructs all the points that are visible on at least two photos. However, points that are visible only on two photos are likely to be located with poor accuracy. Image count filtering enables to remove such unreliable points from the cloud.
Projection Accuracy
This criterion allows to filter out points whose projections were relatively poorly localized due to their bigger size.

In the Gradual Selection dialog box specify the criterion to be used for filtering.

Adjust the threshold level using the slider. You can observe how the selection changes while dragging the slider. Click the OK button to finalize the selection. To remove selected points use Delete Selection command from the Edit menu or click Delete Selection toolbar button or simply press Del button on the keyboard.
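For the reprojection error criterion specifically, the quantity behind the slider can be sketched as follows. The data layout is assumed for this illustration; PhotoScan computes these errors internally during matching:

```python
import math

def select_by_reprojection_error(points, threshold):
    """points: {point_id: [((mx, my), (px, py)), ...]} mapping each tie point
    to its (matched, reprojected) pixel pairs over all photos seeing it.
    Select points whose maximum reprojection error exceeds the threshold."""
    selected = set()
    for point_id, projections in points.items():
        error = max(math.dist(matched, reprojected)
                    for matched, reprojected in projections)
        if error > threshold:
            selected.add(point_id)
    return selected
```

Lowering the threshold (dragging the slider) selects progressively more points, exactly as the interactive selection behaves.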

Filtering points based on applied masks To remove points based on applied masks 1. In the Select Masked Points dialog box indicate the photos whose masks to be taken into account. Adjust the edge softness level using the slider. Click OK button to run the selection procedure. Filtering points based on points colors To remove points based on points colors 1.

In the Select Points by Color dialog box the color to be used as the criterion. Adjust the tolerance level using the slider. Tie point per photo limit Tie point limit parameter could be adjusted before Align photos procedure. The number indicates the upper limit for matching points for every image. Using zero value doesn’t apply any tie-point filtering. The number of tie points can also be reduced after the alignment process with Tie Points – Thin Point Cloud command available from Tools menu.

To add new points to the current selection hold the Ctrl key during selection of additional points. To remove some points from the current selection hold the Shift key during selection of points to be removed.

To delete selected points click the. To crop selection to the selected points click the toolbar button or select Crop Selection command from the Edit menu. Editing model geometry The following mesh editing tools are available in PhotoScan: Decimation tool Close Holes tool Automatic filtering based on specified criterion Manual polygon removal Fixing mesh topology More complex editing can be done in the external 3D editing tools.

PhotoScan allows to export mesh and then import it back for this purpose. Note For polygon removal operations such as manual removal and connected component filtering it is possible to undo the last mesh editing operation. Decimation tool Decimation is a tool used to decrease the geometric resolution of the model by replacing high resolution mesh with a lower resolution one, which is still capable of representing the object geometry with high accuracy.

PhotoScan tends to produce 3D models with excessive geometry resolution, so mesh decimation is usually a desirable step after geometry computation. Highly detailed models may contain hundreds of thousands polygons. While it is acceptable to work with such a complex models in 3D editor tools, in most conventional tools like Adobe Reader or Google Earth high complexity of 3D models may noticeably decrease application performance.

High complexity also results in longer time required to build texture and to export model in pdf file format.

In some cases it is desirable to keep as much geometry details as possible like it is needed for scientific and archive purposes. However, if there are no special requirements it is recommended to decimate the model down to – polygons for exporting in PDF, and to or even less for displaying in Google Earth and alike tools. In the Decimate Mesh dialog box specify the target number of polygons, which should remain in the final model.

Click on the OK button to start decimation. To cancel processing click on the Cancel button. Note Texture atlas is discarded during decimation process.

You will have to rebuild texture atlas after decimation is complete. Close Holes tool Close Holes tool provides possibility to repair your model if the reconstruction procedure resulted in a mesh with several holes, due to insufficient image overlap for example. Close holes tool enables to close void areas on the model substituting photogrammetric reconstruction with extrapolation data.

It is possible to control an acceptable level of accuracy indicating the maximum size of a hole to be covered with extrapolated data. In the Close Holes dialog box indicate the maximum size of a hole to be covered with the slider. Click on the OK button to start the procedure.

Note The slider allows to set the size of a hole in relation to the size of the whole model surface. Polygon filtering on specified criterion In some cases reconstructed geometry may contain the cloud of small isolated mesh fragments surrounding the „main“ model or big unwanted polygons. Mesh filtering based on different criteria helps to select polygons, which usually are supposed to be removed. PhotoScan supports the following criteria for face filtering:.

Connected component size This filtering criteria allows to select isolated fragments with a certain number of polygons. The number of polygons in all isolated components to be selected is set with a slider and is indicated in relation to the number of polygons in the whole model. The components are ranged in size, so that the selection proceeds from the smallest component to the largest one.

Polygon size This filtering criteria allows to select polygons up to a certain size. The size of the polygons to be selected is set with a slider and is indicated in relation to the size of the whole model. This function can be useful, for example, in case the geometry was reconstructed in Smooth type and there is a need to remove extra polygons automatically added by PhotoScan to fill the gaps; these polygons are often of a larger size that the rest.

Select the size of isolated components to be removed using the slider. To remove the selected components use Delete Selection command from the Edit menu or click Delete Selection toolbar button or simply press Del button on the keyboard.

Select the size of polygons to be removed using the slider. Note that PhotoScan always selects the fragments starting from the smallest ones.

If the model contains only one component, the selection will be empty.

Manual face removal
Unnecessary and excessive sections of model geometry can also be removed manually.

Make the selection using the mouse. To add new polygons to the current selection hold the Ctrl key during selection of additional polygons. To remove some polygons from the current selection hold the Shift key during selection of polygons to be excluded. To crop selection to the selected polygons click the toolbar button or use Crop Selection command from the Edit menu. To grow current selection press PageUp key in the selection mode.

To grow selection by an even larger amount, press PageUp while holding the Shift key. To shrink the current selection press PageDown in the selection mode. To shrink selection by an even larger amount, press PageDown while holding the Shift key.

Fixing mesh topology
In the Mesh Statistics dialog box you can inspect mesh parameters.

If there are any topological problems, the Fix Topology button will be active and can be clicked to solve them.

Editing mesh in the external program
To export mesh for editing in an external program:
1. In the Save As dialog box, specify the desired mesh format in the Save as type combo box.

2. Select the file name to be used for the model and click the Save button.
3. In the opened dialog box, specify additional parameters specific to the selected file format.
Please make sure to select one of these file formats when exporting the model from the external 3D editor.

Chapter 6. Automation
Using chunks
When working with typical data sets, automation of the general processing workflow allows routine operations to be performed efficiently.

PhotoScan allows several processing steps to be assigned to run one by one without user intervention, thanks to the Batch Processing feature. Manual user intervention can be minimized even further due to the 'multiple chunk project' concept, with each chunk including one typical data set. For a project with several chunks of the same nature, common operations available in the Batch Processing dialog are applied to each selected chunk individually, thus allowing several data sets to be processed automatically following a predefined workflow pattern.

In addition, multiple chunk project could be useful when it turns out to be hard or even impossible to generate a 3D model of the whole scene in one go. This could happen, for instance, if the total amount of photographs is too large to be processed at a time. To overcome this difficulty PhotoScan offers a possibility to split the set of photos into several separate chunks within the same project.

Alignment of photos, building dense point cloud, building mesh, and forming texture atlas operations can be performed for each chunk separately and then resulting 3D models can be combined together.

Working with chunks is no more difficult than using PhotoScan following the general workflow. In fact, there always exists at least one active chunk in PhotoScan, and all the 3D model processing workflow operations are applied to this chunk. To work with several chunks you need to know how to create chunks and how to combine resulting 3D models from separate chunks into one model.

Creating a chunk
To create a new chunk, click on the Add Chunk toolbar button on the Workspace pane or select the Add Chunk command from the Workspace context menu, available by right-clicking on the root element of the Workspace pane.

After the chunk is created you may load photos in it, align them, generate dense point cloud, generate mesh surface model, build texture atlas, export the models at any stage and so on. The models in the chunks are not linked with each other. The list of all the chunks created in the current project is displayed in the Workspace pane along with flags reflecting their status. The following flags can appear next to the chunk name: R Referenced Will appear when two or more chunks are aligned with each other.

To move photos from one chunk to another simply select them in the list of photos on the Workspace pane, and then drag and drop to the target chunk.

Working with chunks
All operations within the chunk are carried out following the common workflow: loading photographs, aligning them, generating dense point cloud, building mesh, building texture atlas, exporting 3D model and so on. Note that all these operations are applied to the active chunk.

When a new chunk is created it is activated automatically. Save project operation saves the content of all chunks. To save selected chunks as a separate project use Save Chunks command from the chunk context menu.

Aligning chunks
After the "partial" 3D models are built in several chunks they can be merged together. Before merging, the chunks need to be aligned. In the Align Chunks dialog box select the chunks to be aligned and indicate the reference chunk with a double-click.

Set the desired alignment options. To cancel processing click the Cancel button.

Aligning chunks parameters
The following parameters control the chunks alignment procedure and can be modified in the Align Chunks dialog box:

Method
Defines the chunks alignment method. Point based method aligns chunks by matching photos across all the chunks. Camera based method is used to align chunks based on estimated camera locations.

Corresponding cameras should have the same label.

Accuracy (Point based alignment only)
Higher accuracy setting helps to obtain more accurate chunk alignment results. Lower accuracy setting can be used to get a rough chunk alignment in a shorter time.

Point limit (Point based alignment only)
The number indicates the upper limit of feature points on every image to be taken into account during Point based chunks alignment.

Fix scale
Option to be enabled in case the scales of the models in different chunks were set precisely and should be left unchanged during the chunks alignment process.

Preselect image pairs (Point based alignment only)
The alignment process of many chunks may take a long time. A significant portion of this time is spent on matching detected features across the photos. The image pair preselection option can speed up this process by selecting a subset of image pairs to be matched.

Constrain features by mask (Point based alignment only)
When this option is enabled, features detected in the masked image regions are discarded. For additional information on the usage of masks refer to the Using masks section.

Merging chunks
After alignment is complete the separate chunks can be merged into a single chunk. In the Merge Chunks dialog box select the chunks to be merged and the desired merging options.

PhotoScan will merge the separate chunks into one. The merged chunk will be displayed in the project content list on the Workspace pane. The following parameters control the chunks merging procedure and can be modified in the Merge Chunks dialog box:

Merge dense clouds
Defines whether dense clouds from the selected chunks are combined.

Merge models
Defines whether models from the selected chunks are combined.

Batch processing
PhotoScan allows general workflow operations to be performed with multiple chunks automatically. It is useful when dealing with a large number of chunks to be processed.

Batch processing can be applied to all chunks in the Workspace, to unprocessed chunks only, or to the chunks selected by the user.

Each operation chosen in the Batch processing dialog will be applied to every selected chunk before processing moves on to the next step. In the Add Job dialog select the kind of operation to be performed, the list of chunks it should be applied to, and the desired processing parameters.
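As a concrete illustration of saving such a job list to an XML structured file, the sketch below serializes it with Python's standard library. The element and attribute names are assumptions chosen for illustration, not PhotoScan's actual schema:

```python
import xml.etree.ElementTree as ET

def batch_to_xml(jobs):
    """Serialize a list of (job_name, params) pairs to an XML string.

    The <batchjobs>/<job> names are hypothetical, used only to show how
    a task list could be stored in a structured XML file.
    """
    root = ET.Element("batchjobs")
    for name, params in jobs:
        job = ET.SubElement(root, "job", name=name)
        for key, value in params.items():
            job.set(key, str(value))  # each processing parameter as an attribute
    return ET.tostring(root, encoding="unicode")

print(batch_to_xml([
    ("AlignPhotos", {"accuracy": "high"}),
    ("BuildDenseCloud", {"quality": "medium"}),
]))
```

Reading such a file back in a different project would be the reverse: parse the XML and recreate one job entry per element.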

The progress dialog box will appear displaying the list and status of batch jobs and the current operation progress. The list of tasks for batch processing can be exported to an XML structured file using the Batch processing dialog and imported in a different project.

Model view
Model view tab is used for displaying 3D data as well as for mesh and point cloud editing.

The view of the model depends on the current processing stage and is also controlled by mode selection buttons on the PhotoScan toolbar. Model can be shown as a dense cloud or as a mesh in shaded, solid, wireframe or textured mode. Along with the model the results of photo alignment can be displayed.

These include the sparse point cloud and visualised camera positions. PhotoScan supports a set of navigation tools in the 3D view. All navigation tools are accessible in the navigation mode only. To enter the navigation mode click the Navigation toolbar button.

Photo view
Photo view tab is used for displaying individual photos as well as masks on them. Photo view is visible only if any photo is opened. To open a photo, double-click on its name on the Workspace or Photos pane. Switching to Photo view mode changes the contents of the Toolbar, presenting related instruments and hiding irrelevant buttons.

Workspace pane
On the Workspace pane all elements comprising the current project are displayed.

These elements can include:
- List of chunks in the project
- List of cameras and camera groups in each chunk
- Tie points in each chunk
- Depth maps in each chunk
- Dense point clouds in each chunk
- 3D models in each chunk

Buttons located on the Workspace pane toolbar allow to:
- Add chunk
- Add photos
- Enable or disable certain cameras or chunks for processing at further stages

- Remove items

Each element in the list is linked with a context menu providing quick access to some common operations.

Console pane
Console pane is used for:
- Displaying auxiliary information
- Displaying error messages
Buttons located on the pane toolbar allow:
- Save log
- Clear log

Menu and toolbar command reference (excerpts):
- Quits the application. Prompts to save any unsaved changes applied to the current project.
- Resets the viewport to display the model fully in Top XY projection.
- Shows or hides Workspace pane.
- Loads additional photos from folders to be processed by PhotoScan.
- Generates camera positions and sparse point cloud.
- Thins sparse point cloud by reducing the number of projections on the individual photos to the given limit.
- Selects dense cloud points according to color and tolerance.
- Decimates mesh to the target face count.
- Resets reconstruction volume selector to default position based on the sparse point cloud.
- Switch between orthographic and perspective view modes (5).
- Change the angle for perspective view.

Appendix C. Camera models A camera model specifies the transformation from point coordinates in the local camera coordinate system to the pixel coordinates in the image frame. The local camera coordinate system has origin at the camera projection center.

The Z axis points towards the viewing direction, X axis points to the right, Y axis points down. The image coordinate system has origin at the top left image pixel, with the center of the top left pixel having coordinates 0. The X axis in the image coordinate system points to the right, Y axis points down.

Image coordinates are measured in pixels. Equations used to project a point in the local camera coordinate system to the image plane are provided below for each supported camera model. The following definitions are used in the equations:
X, Y, Z – point coordinates in the local camera coordinate system,
u, v – projected point coordinates in the image coordinate system (in pixels),
fx, fy – focal lengths,
cx, cy – principal point coordinates,
K1, K2, K3, K4 – radial distortion coefficients,
P1, P2, P3, P4 – tangential distortion coefficients,
skew – skew coefficient between the x and the y axis,
width – image width in pixels,
height – image height in pixels.
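To make these definitions concrete, here is a hedged sketch of a Brown-style frame-camera projection using the symbols above. It is illustrative only: it keeps just the first two tangential terms (P1, P2), and the authoritative per-model equations are the ones given in this appendix.

```python
# Illustrative Brown-model projection from the local camera frame to pixels.
# Assumption: standard Brown radial/tangential form; not PhotoScan's exact text.

def project(X, Y, Z, fx, fy, cx, cy, K=(0, 0, 0, 0), P=(0, 0), skew=0.0):
    """Project a 3D point in the local camera frame to pixel coordinates."""
    x, y = X / Z, Y / Z                      # perspective division
    r2 = x * x + y * y
    radial = 1 + K[0]*r2 + K[1]*r2**2 + K[2]*r2**3 + K[3]*r2**4
    xd = x * radial + P[0] * (r2 + 2*x*x) + 2 * P[1] * x * y
    yd = y * radial + P[1] * (r2 + 2*y*y) + 2 * P[0] * x * y
    u = cx + fx * xd + skew * yd             # principal point + focal scaling
    v = cy + fy * yd
    return u, v

# With no distortion, a point on the optical axis maps to the principal point:
print(project(0, 0, 5, 2400, 2400, 1500, 1000))  # (1500.0, 1000.0)
```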

To load a set of photos:
1. Select Add Photos... command from the Workflow menu or click Add Photos toolbar button on the Workspace pane.
2. In the Add Photos dialog box browse to the folder containing the images and select files to be processed.

Selected photos will appear on the Workspace pane.

To remove unwanted photos:
1. On the Workspace pane select the photos to be removed.

To move photos to a camera group:
1. On the Workspace pane or Photos pane select the photos to be moved.

Notifies that Camera Station type was assigned to the group.

To align a set of photos:
1. Select Align Photos... command from the Workflow menu.
2. In the Align Photos dialog box select the desired alignment options.

To realign a subset of photos, first disable the rest of the images: to disable a photo use the Disable button from the Photos pane toolbar.

PhotoScan estimates image quality for each input image.

To estimate image quality:
1. Switch to the detailed view in the Photos pane using the Details command from the Change menu on the Photos pane toolbar.
2. Select all photos to be analyzed on the Photos pane.

To import external and internal camera parameters:
1. Select Import Cameras command from the Tools menu.

2. Select the format of the file to be imported.
3. Browse to the file and click Open button.

To build a dense point cloud:
1. To adjust the bounding box use the Resize Region and Rotate Region toolbar buttons.
2. Select the Build Dense Cloud... command from the Workflow menu.

To build a mesh:
1. Select the Build Mesh... command from the Workflow menu.

To generate 3D model texture:
1. Select Build Texture... command from the Workflow menu.

To align the model orientation with the default coordinate system use the Rotate Object button from the Toolbar. In some cases editing model geometry in external software may be required.

Point cloud export
To export sparse or dense point cloud:
1. Select Export Points... command.
2. Indicate export parameters applicable to the selected file type.
3. Click OK button to start export.

Tie points data export
To export matching points:
1. Select Export Matches... command.

To export panorama:
1. Select Export – Export Panorama... command.
2. Select the camera group which the panorama should be previewed for.

3. Click OK button.
4. Browse to the destination folder and click Save button.

To export a model:
1. Select Export Model... command.
2. In the Export Model dialog indicate export parameters applicable to the selected file type.

To create a new calibration group:
1. Select Camera Calibration... command.
2. In the Camera Calibration dialog box, select photos to be arranged in a new group.
3. In the right-click context menu choose Create Group command.

To move photos from one group to another, use the Camera Calibration dialog. To place each photo into a separate group you can use Split Groups command, available at the right button click on a calibration group name in the left-hand part of the Camera Calibration dialog.

Camera types
PhotoScan supports two major types of camera: frame camera and fisheye camera.

To specify camera calibration parameters 1. In the Camera Calibration dialog box, select Initial tab. Modify the calibration parameters displayed in the corresponding edit boxes. Set the Type to the Precalibrated value.

Repeat for every calibration group where applicable. Click OK button to set the calibration.

To optimize camera alignment:
1. Choose Optimize Cameras... command.

Editing
Using masks
Overview
Masks are used in PhotoScan to specify the areas on the photos which can otherwise be confusing to the program or lead to incorrect reconstruction results.

To import masks:
1. Select Import Masks... command.
2. In the Import Mask dialog select suitable parameters.

To edit the mask:
1. Select the desired selection tool and generate a selection.
2. Click on Add Selection toolbar button to add the current selection to the mask, or Subtract Selection to subtract the selection from the mask.

Saving masks
Created masks can also be saved for external editing or storage.
To export masks:
1. Select Export Masks... command.
2. In the Export Mask dialog select suitable parameters.
3. Browse to the folder where the masks should be saved and select it.

To remove points based on a specified criterion:
1. Switch to Point Cloud view mode using the Point Cloud toolbar button.

2. Select Gradual Selection... command and delete the selected points.

Filtering points based on applied masks
To remove points based on applied masks:
1. Switch to Dense Cloud view mode using the Dense Cloud toolbar button.
2. Choose Select Masked Points... command and delete the selected points.

Filtering points based on point colors
To remove points based on point colors:
1. Choose Select Points by Color... command and delete the selected points.

Tie point per photo limit
The tie point limit parameter could be adjusted before the Align Photos procedure.

To remove points from a point cloud manually:
1. Switch to Sparse Cloud view mode using the Sparse Cloud toolbar button.
2. To delete selected points click the Delete Selection toolbar button or select Delete Selection command from the Edit menu.

Crop Selection

Editing model geometry
The following mesh editing tools are available in PhotoScan: Decimation tool, Close Holes tool, automatic filtering based on a specified criterion, manual polygon removal, and mesh topology fixing. More complex editing can be done in external 3D editing tools.

To decimate a 3D model:
1. Select Decimate Mesh... command.

To close holes in a 3D model:
1. Select Close Holes... command.

To remove small isolated mesh fragments:
1. In the Gradual Selection dialog box select the Connected component size criterion.

To remove large polygons:
1. In the Gradual Selection dialog box select the Polygon size criterion.

To remove part of the mesh polygons manually:
1. Select the rectangle, circle or free-form selection tool.
2. To delete selected polygons click the Delete Selection toolbar button or use Delete Selection command from the Edit menu.

Crop Selection

To grow or shrink current selection, use the PageUp and PageDown keys in the selection mode.

Fixing mesh topology
PhotoScan is capable of basic mesh topology fixing.
To fix mesh topology:
1. Select View Mesh Statistics... command.

To import edited mesh:
1. Select Import Mesh... command.
2. In the Open dialog box, browse to the file with the edited model and click Open.

To set another chunk as active:
1. Right-click on the chunk title on the Workspace pane.
2. Select Set Active command from the context menu.

To remove a chunk:
1. Select Remove Chunks command from the context menu.

To align separate chunks:
1. Select Align Chunks command from the Workflow menu.
Note: Chunk alignment can be performed only for chunks containing aligned photos.

To merge chunks:
1. Select Merge Chunks command from the Workflow menu.

To start batch processing:
1. Select Batch Process... command.
2. Click Add to add the desired processing stages.
3. Repeat the previous steps to add other processing steps as required.
4. Arrange jobs by clicking Up and Down arrows at the right of the Batch Process dialog.
5. Click OK button to start processing.

The list of tasks for batch processing can be exported to an XML structured file using the Batch processing dialog and imported in a different project using the Open button.

Graphical user interface
Application window: General view
General view of the application window.
Note: Zooming into the model can also be controlled by the mouse wheel.

File menu commands include:
- New
- Open – Opens PhotoScan project file.
- Append – Appends existing PhotoScan project file to the current one.
- Save – Saves PhotoScan project file.
- Save As – Saves PhotoScan project file with a new name.
- Export Points...
- Export Model...
- Upload Model... – Uploads reconstructed polygonal model to one of the supported web-sites.
- Exit – Quits the application.

Edit menu commands include:
- Undo – Undo the last editing operation.
- Redo – Redo the previously undone editing operation.
- Delete Selection – Removes selected faces from the mesh or selected points from the point cloud.

Multi-camera projects support. Scanned images with fiducial marks support.

Dense point cloud: editing and classification
- Elaborate model editing for accurate results.
- Automatic multi-class points classification to customize further reconstruction.
- Configurable vertical datums based on the geoid undulation grids.
- Export in blocks for huge projects.
- Color correction for homogeneous texture.
- Inbuilt ghosting filter to combat artefacts due to moving objects.
- Custom planar and cylindrical projection options for close range projects.

Terrestrial laser scanning (TLS) registration
- Simultaneous adjustment of both laser scanner and camera positions.
- Capability to combine TLS and photogrammetric depth maps.
- Markers support and automatic targets detection for manual alignment of scanner data.
- Masking instruments to ignore unwanted objects in scanner data.
- Scale bar tool to set reference distance without implementation of positioning equipment.

Measurements: distances, areas, volumes
- Inbuilt tools to measure distances, areas and volumes.

Stereoscopic measurements
- Professional 3D monitors and 3D controllers support for accurate and convenient stereoscopic vectorization of features and for measurement purposes.
- Direct upload to various online resources and export to many popular formats.
- Photorealistic textures: HDR and multifile support (incl. UDIM layout).

Hierarchical tiled model generation
- City scale modeling preserving the original image resolution for texturing.
- Cesium publishing.
- Basis for numerous visual effects with 3D models reconstructed in time sequence.

 


If you are not familiar with the concept of projects, a brief description is given at the end of Chapter 3, General workflow. In the manual you can also find instructions on the PhotoScan installation procedure and basic rules for taking "good" photographs. For this information refer to Chapter 1, Installation and Chapter 2, Capturing photos.

Chapter 1. Installation
Supported GPUs include NVidia GeForce 8xxx series and later. PhotoScan is likely to be able to utilize the processing power of any OpenCL enabled device during the Dense Point Cloud generation stage, provided that OpenCL drivers for the device are properly installed.

However, because of the large number of various combinations of video chips, driver versions and operating systems, Agisoft is unable to test and guarantee PhotoScan’s compatibility with every device and on every platform.

The table below lists currently supported devices on Windows platform only. We will pay particular attention to possible problems with PhotoScan running on these devices. Using OpenCL acceleration with mobile or integrated graphics video chips is not recommended because of the low performance of such GPUs.

Start PhotoScan by running photoscan.

Restrictions of the Demo mode
Once PhotoScan is downloaded and installed on your computer you can run it either in the Demo mode or in the full function mode. On every start until you enter a serial number it will show a registration box offering two options: (1) use PhotoScan in the Demo mode or (2) enter a serial number to confirm the purchase. The first choice is set by default, so if you are still exploring PhotoScan click the Continue button and PhotoScan will start in the Demo mode.

The employment of PhotoScan in the Demo mode is not time limited. Several functions, however, are not available in the Demo mode. These functions are the following:

Once the serial number is entered the registration box will not appear again and you will get full access to all functions of the program. Chapter 2. Capturing photos Before loading your photographs into PhotoScan you need to take them and select those suitable for 3D model reconstruction. Photographs can be taken by any digital camera both metric and non-metric , as long as you follow some specific capturing guidelines.

This section explains general principles of taking and selecting pictures that provide the most appropriate data for 3D model generation. Make sure you have studied the following rules and read the list of restrictions before you go out to shoot photographs.

Equipment
- Use a digital camera with reasonably high resolution (5 MPix or more).
- Avoid ultra-wide angle and fisheye lenses. The best choice is 50 mm focal length (35 mm film equivalent) lenses.

It is recommended to use a focal length in the 20 to 80 mm interval (35 mm equivalent). If a data set was captured with a fisheye lens, the appropriate camera sensor type should be selected in the PhotoScan Camera Calibration dialog prior to processing.
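A hedged helper for checking this advice: it converts a lens's real focal length to its 35 mm-film equivalent using the diagonal crop factor (the full-frame diagonal is about 43.27 mm). Sensor dimensions must come from your camera's specification sheet.

```python
# Convert a real focal length to its 35 mm equivalent and check it against
# the manual's recommended 20-80 mm interval. Diagonal-based crop factor
# is a common approximation, not an exact standard.

def focal_35mm_equiv(focal_mm, sensor_w_mm, sensor_h_mm):
    """35 mm-equivalent focal length via the diagonal crop factor."""
    sensor_diag = (sensor_w_mm ** 2 + sensor_h_mm ** 2) ** 0.5
    return focal_mm * 43.27 / sensor_diag

def in_recommended_range(focal_mm, sensor_w_mm, sensor_h_mm):
    """True if the 35 mm equivalent falls in the 20-80 mm interval."""
    return 20 <= focal_35mm_equiv(focal_mm, sensor_w_mm, sensor_h_mm) <= 80

# An 18 mm lens on a 23.6 x 15.7 mm APS-C sensor is roughly 27.5 mm equivalent:
print(round(focal_35mm_equiv(18, 23.6, 15.7), 1))
```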

Fixed lenses are preferred. If zoom lenses are used – focal length should be set either to maximal or to minimal value during the entire shooting session for more stable results. Take images at maximal possible resolution. ISO should be set to the lowest value, otherwise high ISO values will induce additional noise to images. Aperture value should be high enough to result in sufficient focal depth: it is important to capture sharp, not blurred photos.

Shutter speed should not be too slow, otherwise blur can occur due to slight movements.

Object and scene
- Avoid shiny objects; if you still have to shoot them, do so under a cloudy sky.
- Avoid unwanted foregrounds.
- Avoid moving objects within the scene to be reconstructed.
- Avoid absolutely flat objects or scenes.

Image preprocessing
PhotoScan operates with the original images. So do not crop or geometrically transform, i.e. resize or rotate, the images.

Capturing scenarios
Generally, spending some time planning your shot session might be very useful.
- Number of photos: more than required is better than not enough.
- The number of "blind zones" should be minimized, since PhotoScan is able to reconstruct only geometry visible from at least two cameras.

- Each photo should effectively use the frame size: the object of interest should take up the maximum area. In some cases portrait camera orientation should be used.
- Do not try to place the full object in the image frame; if some parts are missing, it is not a problem provided that these parts appear in other images.
- Good lighting is required to achieve better quality of the results, yet blinks should be avoided.

It is recommended to remove sources of light from camera fields of view. Avoid using flash. The following figures represent advice on appropriate capturing scenarios.

A short list of typical reasons for photograph unsuitability is given below.

Modifications of photographs
PhotoScan can process only unmodified photos as they were taken by a digital photo camera. Processing photos which were manually cropped or geometrically warped is likely to fail or to produce highly inaccurate results. Photometric modifications do not affect reconstruction results.

Lack of EXIF data
In this case PhotoScan assumes that the focal length in 35 mm equivalent equals 50 mm and tries to align the photos in accordance with this assumption.

If the correct focal length value differs significantly from 50 mm, the alignment can give incorrect results or even fail. In such cases it is required to specify the initial camera calibration manually. The details of necessary EXIF tags and instructions for manual setting of the calibration parameters are given in the Camera calibration section.

Lens distortion
The distortion of the lenses used to capture the photos should be well simulated by the Brown's distortion model.

Otherwise it is most unlikely that processing results will be accurate. Fisheye and ultra-wide angle lenses are poorly modeled by the common distortion model implemented in PhotoScan software, so it is required to choose the proper camera type in the Camera Calibration dialog prior to processing.

Chapter 3. General workflow
Processing of images with PhotoScan includes the following main steps:
- loading photos into PhotoScan;
- inspecting loaded images, removing unnecessary images;
- aligning photos;
- building dense point cloud;
- building mesh (3D polygonal model);
- generating texture;
- exporting results.
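The main workflow steps can also be driven from PhotoScan's built-in Python console. The following is a pseudocode-style sketch in the spirit of the version 1.2 Python API: method names, enum values, and file names are assumptions to verify against the Python API reference for your release, and the script runs only inside PhotoScan itself.

```python
# Pseudocode-style sketch; verify each call against the PhotoScan 1.2
# Python API reference before use. File names here are hypothetical.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(["IMG_0001.jpg", "IMG_0002.jpg"])   # loading photos

chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)  # aligning photos
chunk.alignCameras()
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)
chunk.buildModel(surface=PhotoScan.Arbitrary)       # building mesh
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending)

chunk.exportModel("model.obj")                      # exporting results
doc.save("project.psz")
```

Each call corresponds to one bullet in the workflow list above, which is why the Batch Processing feature can sequence the same operations without scripting.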

If you are using PhotoScan in the full function not the Demo mode, intermediate results of the image processing can be saved at any stage in the form of project files and can be used later. The concept of projects and project files is briefly explained in the Saving intermediate results section.

The list above represents all the necessary steps involved in the construction of a textured 3D model from your photos.

Some additional tools, which you may find useful, are described in the successive chapters.

Preferences settings
Before starting a project with PhotoScan it is recommended to adjust the program settings for your needs.

In Preferences dialog General Tab available through the Tools menu you can indicate the path to the PhotoScan log file to be shared with the Agisoft support team in case you face any problem during the processing.

Here you can also change GUI language to the one that is most convenient for you. PhotoScan exploits GPU processing power that speeds up the process significantly. If you have decided to switch on GPUs for photogrammetric data processing with PhotoScan, it is recommended to free one physical CPU core per each active GPU for overall control and resource managing tasks.

Loading photos
Before starting any operation it is necessary to point out what photos will be used as a source for 3D reconstruction. In fact, photographs themselves are not loaded into PhotoScan until they are needed. So, when you "load photos" you only indicate the photographs that will be used for further processing.

In the Add Photos dialog box browse to the folder containing the images and select files to be processed. Then click Open button. Photos in any other format will not be shown in the Add Photos dialog box.

To work with such photos you will need to convert them in one of the supported formats. If you have loaded some unwanted photos, you can easily remove them at any moment.

Right-click on the selected photos and choose Remove Items command from the opened context menu, or click Remove Items toolbar button on the Workspace pane. The selected photos will be removed from the working set.

Camera groups
If all the photos or a subset of photos were captured from one camera position – a camera station – for PhotoScan to process them correctly it is obligatory to move those photos to a camera group and mark the group as Camera Station.

It is important that for all the photos in a Camera Station group the distances between camera centers are negligibly small compared to the minimal camera-object distance. However, it is possible to export a panoramic picture for the data captured from only one camera station. Refer to Exporting results section for guidance on panorama export.

Alternatively, camera group structure can be used to manipulate the image data in a chunk easily. Right-click on the selected photos and choose Move Cameras – New Camera Group command from the opened context menu.

A new group will be added to the active chunk structure and selected photos will be moved to that group. To mark group as camera station, right click on the camera group name and select Set Group Type command from the context menu.

Inspecting loaded photos Loaded photos are displayed on the Workspace pane along with flags reflecting their status. The following flags can appear next to the photo name: NC (Not calibrated) Notifies that the EXIF data available is not sufficient to estimate the camera focal length. In this case PhotoScan assumes that the corresponding photo was taken using 50mm lens (35mm film equivalent).

If the actual focal length differs significantly from this value, manual calibration may be required. More details on manual camera calibration can be found in the Camera calibration section. NA Not aligned Notifies that external camera orientation parameters have not been estimated for the current photo yet. Images loaded to PhotoScan will not be aligned until you perform the next step – photos alignment.

Aligning photos Once photos are loaded into PhotoScan, they need to be aligned. At this stage PhotoScan finds the camera position and orientation for each photo and builds a sparse point cloud model.

The progress dialog box will appear displaying the current processing status. To cancel processing click Cancel button. Alignment having been completed, computed camera positions and a sparse point cloud will be displayed. You can inspect alignment results and remove incorrectly positioned photos, if any. To see the matches between any two photos use View Matches... command from a photo context menu. Incorrectly positioned photos can be realigned.

Reset alignment for incorrectly positioned cameras using Reset Camera Alignment command from the photo context menu. Select photos to be realigned and use Align Selected Cameras command from the photo context menu. When the alignment step is completed, the point cloud and estimated camera positions can be exported for processing with another software if needed.

Image quality Poor input, e.g. vague photos, can influence alignment results badly. To help you exclude poorly focused images from processing, PhotoScan offers an automatic image quality estimation feature. PhotoScan estimates image quality for each input image; the value of the parameter is calculated based on the sharpness level of the most focused part of the picture. Images with a quality value of less than 0.5 units are recommended to be disabled. To disable a photo use Disable button from the Photos pane toolbar. Right button click on the selected photo(s) and choose Estimate Image Quality command from the context menu.
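As an illustration only (not part of PhotoScan), the screening step can be thought of as a simple threshold filter over the estimated quality values; the 0.5 cutoff below is the commonly recommended one:

```python
def select_disabled(cameras, threshold=0.5):
    """Return labels of cameras whose estimated image quality falls below
    the threshold; these are candidates for disabling before alignment.

    cameras: list of (label, quality) pairs with quality already estimated.
    """
    return [label for label, quality in cameras if quality < threshold]
```

For example, `select_disabled([("IMG_001", 0.92), ("IMG_002", 0.31)])` would flag only the second photo for disabling.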

Once the analysis procedure is over, a figure indicating estimated image quality value will be displayed in the Quality column on the Photos pane. Alignment parameters The following parameters control the photo alignment procedure and can be modified in the Align Photos dialog box: Accuracy Higher accuracy settings help to obtain more accurate camera position estimates.

Lower accuracy settings can be used to get rough camera positions in a shorter period of time. While at High accuracy setting the software works with the photos of the original size, Medium setting causes image downscaling by a factor of 4 (2 times by each side), at Low accuracy source files are downscaled by a factor of 16, and Lowest value means further downscaling by 4 times more. Highest accuracy setting upscales the image by a factor of 4. Since tie point positions are estimated on the basis of feature spots found on the source images, it may be meaningful to upscale a source photo to accurately localize a tie point.

However, Highest accuracy setting is recommended only for very sharp image data and mostly for research purposes due to the corresponding processing being quite time consuming.
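The downscaling scheme described above can be summarized in a short sketch (the helper function and its name are illustrative, not part of PhotoScan):

```python
# Total downscaling factor applied to each source image at a given
# alignment accuracy (area factor; each step halves width and height again).
ACCURACY_FACTOR = {
    "Highest": 0.25,  # upscaled by a factor of 4
    "High": 1,        # original image size
    "Medium": 4,      # 2 times by each side
    "Low": 16,        # 4 times by each side
    "Lowest": 64,     # 8 times by each side
}

def processed_size(width, height, accuracy):
    """Approximate pixel dimensions actually used for feature detection."""
    side = ACCURACY_FACTOR[accuracy] ** 0.5  # per-side scale
    return round(width / side), round(height / side)
```

For a 4000x3000 photo, Medium accuracy works on roughly a 2000x1500 image, Low on 1000x750.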

Pair preselection The alignment process of large photo sets can take a long time. A significant portion of this time period is spent on matching of detected features across the photos. Image pair preselection option may speed up this process due to selection of a subset of image pairs to be matched.

In the Generic preselection mode the overlapping pairs of photos are selected by matching photos using lower accuracy setting first. Additionally the following advanced parameters can be adjusted.

Key point limit The number indicates upper limit of feature points on every image to be taken into account during current processing stage. Using zero value allows PhotoScan to find as many key points as possible, but it may result in a big number of less reliable points. Tie point limit The number indicates upper limit of matching points for every image.

Using zero value doesn't apply any tie point filtering. Constrain features by mask When this option is enabled, masked areas are excluded from the feature detection procedure. For additional information on the usage of masks please refer to the Using masks section. Note Tie point limit parameter allows to optimize performance for the task and does not generally affect the quality of the further model. Recommended value is 4000. Too high or too low tie point limit value may cause some parts of the dense point cloud model to be missed.

The reason is that PhotoScan generates depth maps only for pairs of photos for which the number of matching points is above a certain limit. As a result the sparse point cloud will be thinned, yet the alignment will be kept unchanged. Point cloud generation based on imported camera data PhotoScan supports import of external and internal camera orientation parameters. Thus, if precise camera data is available for the project, it is possible to load it into PhotoScan along with the photos, to be used as initial information for the 3D reconstruction job.

The data will be loaded into the software. Camera calibration data can be inspected in the Camera Calibration dialog, Adjusted tab, available from Tools menu. Once the data is loaded, PhotoScan will offer to build point cloud.

This step involves feature points detection and matching procedures. As a result, a sparse point cloud – 3D representation of the tie-points data, will be generated. Parameters controlling Build Point Cloud procedure are the same as the ones used at Align Photos step see above. Building dense point cloud PhotoScan allows to generate and visualize a dense point cloud model. Based on the estimated camera positions the program calculates depth information for each camera to be combined into a single dense point cloud.

PhotoScan tends to produce extra dense point clouds, which are of almost the same density as, if not denser than, LIDAR point clouds. A dense point cloud can be edited within PhotoScan environment or exported to an external tool for further analysis. Rotate the bounding box and then drag corners of the box to the desired positions.

In the Build Dense Cloud dialog box select the desired reconstruction parameters. Click OK button when done. Reconstruction parameters Quality Specifies the desired reconstruction quality. Higher quality settings can be used to obtain more detailed and accurate geometry, but they require longer time for processing. Interpretation of the quality parameters here is similar to that of accuracy settings given in Photo Alignment section. The only difference is that in this case Ultra High quality setting means processing of original photos, while each following step implies preliminary image size downscaling by a factor of 4 (2 times by each side).
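As a rough illustration (not PhotoScan code), the effective image area used for depth map computation at each quality level follows the same geometric progression:

```python
# Quality levels from best to worst; each step below Ultra High divides
# the processed pixel count by 4, as described in the text.
QUALITY_STEPS = ["UltraHigh", "High", "Medium", "Low", "Lowest"]

def dense_cloud_image_factor(quality):
    """Area downscaling factor applied to source photos before
    depth map computation at the given quality setting."""
    return 4 ** QUALITY_STEPS.index(quality)
```

So Ultra High uses the original photos (factor 1), Medium works on images with 16 times fewer pixels.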

Depth Filtering modes At the stage of dense point cloud generation PhotoScan calculates depth maps for every image. Due to some factors, like noisy or badly focused images, there can be some outliers among the points. To sort out the outliers PhotoScan has several built-in filtering algorithms that answer the challenges of different projects. If there are important small details which are spatially distinguished in the scene to be reconstructed, then it is recommended to set Mild depth filtering mode, for important features not to be sorted out as outliers.

This value of the parameter may also be useful for aerial projects in case the area contains poorly textured roofs, for example.

If the area to be reconstructed does not contain meaningful small details, then it is reasonable to choose Aggressive depth filtering mode to sort out most of the outliers. This value of the parameter is normally recommended for aerial data processing; however, mild filtering may be useful in some projects as well (see the poorly textured roofs comment in the Mild parameter value description above).

Moderate depth filtering mode brings results that are in between the Mild and Aggressive approaches. You can experiment with the setting in case you have doubts which mode to choose. Additionally depth filtering can be Disabled. But this option is not recommended as the resulting dense cloud could be extremely noisy.
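PhotoScan's actual filtering algorithms are not published; the toy sketch below only illustrates the trade-off between the modes as progressively stricter outlier thresholds on a one-dimensional depth profile (the tolerance numbers are invented for demonstration):

```python
import statistics

# Illustrative outlier tolerances: Aggressive discards anything that strays
# from the local median; Mild tolerates larger deviations so small real
# details survive; Disabled keeps everything.
FILTER_TOLERANCE = {"Mild": 3.0, "Moderate": 2.0,
                    "Aggressive": 1.0, "Disabled": float("inf")}

def filter_depths(depths, mode):
    """Drop depth samples deviating from the median by more than a
    mode-dependent multiple of the median absolute deviation."""
    med = statistics.median(depths)
    mad = statistics.median(abs(d - med) for d in depths) or 1e-9
    tol = FILTER_TOLERANCE[mode]
    return [d for d in depths if abs(d - med) <= tol * mad]
```

On a profile like `[10, 10.1, 9.9, 10.2, 50]` the 50 outlier is removed by every mode except Disabled, while Mild retains more of the legitimate small variation than Aggressive.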

Check the reconstruction volume bounding box. If the model has already been referenced, the bounding box will be properly positioned automatically. Otherwise, it is important to control its position manually. To adjust the bounding box manually, use the Resize Region and Rotate Region toolbar buttons. Rotate the bounding box and then drag corners of the box to the desired positions – only part of the scene inside the bounding box will be reconstructed. If the Height field reconstruction method is to be applied, it is important to control the position of the red side of the bounding box: it defines reconstruction plane.

In this case make sure that the bounding box is correctly oriented. In the Build Mesh dialog box select the desired reconstruction parameters. Reconstruction parameters PhotoScan supports several reconstruction methods and settings, which help to produce optimal reconstructions for a given data set. Surface type Arbitrary surface type can be used for modeling of any kind of object. It should be selected for closed objects, such as statues, buildings, etc. It doesn’t make any assumptions on the type of the object being modeled, which comes at a cost of higher memory consumption.

Height field surface type is optimized for modeling of planar surfaces, such as terrains or bas-reliefs. It should be selected for aerial photography processing as it requires lower amount of memory and allows for larger data sets processing. Source data Specifies the source for the mesh generation procedure. Sparse cloud can be used for fast 3D model generation based solely on the sparse point cloud. Dense cloud setting will result in longer processing time but will generate high quality output based on the previously reconstructed dense point cloud.

Polygon count Specifies the maximum number of polygons in the final mesh. The suggested values (High, Medium, Low) present the optimal number of polygons for a mesh of a corresponding level of detail. It is still possible for a user to indicate the target number of polygons in the final mesh; this can be done through the Custom value of the Polygon count parameter. Please note that while too small a number of polygons is likely to result in too rough a mesh, a too large custom number (over 10 million polygons) is likely to cause model visualization problems in external software.

Interpolation If interpolation mode is Disabled it leads to accurate reconstruction results since only areas corresponding to dense point cloud points are reconstructed. Manual hole filling is usually required at the post processing step. With Enabled default interpolation mode PhotoScan will interpolate some surface areas within a circle of a certain radius around every dense cloud point.

As a result some holes can be automatically covered. Yet some holes can still be present on the model and are to be filled at the post processing step. In Extrapolated mode the program generates holeless model with extrapolated geometry.

Large areas of extra geometry might be generated with this method, but they could be easily removed later using selection and cropping tools.

Note PhotoScan tends to produce 3D models with excessive geometry resolution, so it is recommended to perform mesh decimation after geometry computation. More information on mesh decimation and other 3D model geometry editing tools is given in the Editing model geometry section.

Select the desired texture generation parameters in the Build Texture dialog box. Texture mapping modes The texture mapping mode determines how the object texture will be packed in the texture atlas.

Proper texture mapping mode selection helps to obtain optimal texture packing and, consequently, better visual quality of the final model. Generic The default mode is the Generic mapping mode; it allows to parametrize texture atlas for arbitrary geometry. No assumptions regarding the type of the scene to be processed are made; program tries to create as uniform texture as possible. Adaptive orthophoto In the Adaptive orthophoto mapping mode the object surface is split into the flat part and vertical regions.

The flat part of the surface is textured using the orthographic projection, while vertical regions are textured separately to maintain accurate texture representation in such regions. When in the Adaptive orthophoto mapping mode, program tends to produce more compact texture representation for nearly planar scenes, while maintaining good texture quality for vertical surfaces, such as walls of the buildings.

Orthophoto In the Orthophoto mapping mode the whole object surface is textured in the orthographic projection. The Orthophoto mapping mode produces even more compact texture representation than the Adaptive orthophoto mode at the expense of texture quality in vertical regions. Single photo The Single photo mapping mode allows to generate texture from a single photo.

The photo to be used for texturing can be selected from ‚Texture from‘ list. Keep uv The Keep uv mapping mode generates texture atlas using current texture parametrization. It can be used to rebuild texture atlas using different resolution or to generate the atlas for the model parametrized in the external software. Texture generation parameters The following parameters control various aspects of texture atlas generation:.

Texture from (Single photo mapping mode only) Specifies the photo to be used for texturing. Available only in the Single photo mapping mode.

Blending mode (not used in Single photo mode) Selects the way how pixel values from different photos will be combined in the final texture. Mosaic – implies a two-step approach: it blends the low frequency component for overlapping images to avoid the seamline problem (a weighted average, the weight being dependent on a number of parameters including proximity of the pixel in question to the center of the image), while the high frequency component, which is in charge of picture details, is taken from a single image – the one that presents good resolution for the area of interest, with the camera view almost along the normal to the reconstructed surface at that point.

Average – uses the weighted average value of all pixels from individual photos, the weight being dependent on the same parameters that are considered for the high frequency component in Mosaic mode. Max Intensity – the photo which has maximum intensity of the corresponding pixel is selected. Min Intensity – the photo which has minimum intensity of the corresponding pixel is selected. Disabled – the photo to take the color value for the pixel from is chosen like the one for the high frequency component in Mosaic mode.
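The simple blending modes can be illustrated with a per-texel sketch (a hypothetical helper, not PhotoScan's implementation; Mosaic's two-step frequency split is omitted):

```python
def blend_pixel(samples, mode):
    """Combine per-photo samples for one texel.

    samples: list of (value, weight) pairs, one per photo seeing the texel.
    """
    if mode == "Average":
        # Weighted average of all contributing photos.
        total = sum(w for _, w in samples)
        return sum(v * w for v, w in samples) / total
    if mode == "Max Intensity":
        # Take the value from the brightest contributing photo.
        return max(samples, key=lambda s: s[0])[0]
    if mode == "Min Intensity":
        # Take the value from the darkest contributing photo.
        return min(samples, key=lambda s: s[0])[0]
    raise ValueError(mode)
```

For samples `[(100, 1), (200, 3)]`, Average yields 175.0, Max Intensity 200, Min Intensity 100.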

Exporting texture to several files allows to achieve greater resolution of the final model texture, while export of high resolution texture to a single file can fail due to RAM limitations.

Enable color correction The feature is useful for processing of data sets with extreme brightness variation. However, please note that color correction process takes up quite a long time, so it is recommended to enable the setting only for the data sets that proved to present results of poor quality. Improving texture quality To improve resulting texture quality it may be reasonable to exclude poorly focused images from processing at this step.

PhotoScan offers an automatic image quality estimation feature. PhotoScan estimates image quality as a relative sharpness of the photo with respect to other images in the data set.
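As an illustrative sketch of the idea (not PhotoScan's actual metric), a sharpness score can be derived from local brightness differences and then normalized against the sharpest image in the set:

```python
def sharpness(row):
    """Toy sharpness score: mean absolute difference between neighbouring
    pixels of a 1-D brightness profile. Focused edges give larger values."""
    return sum(abs(a - b) for a, b in zip(row, row[1:])) / (len(row) - 1)

def relative_quality(rows):
    """Score each profile relative to the sharpest one in the set,
    mimicking the idea of quality relative to other images."""
    scores = [sharpness(r) for r in rows]
    best = max(scores)
    return [s / best for s in scores]
```

A high-contrast profile scores 1.0, while a smoothly varying (out-of-focus) one scores close to zero.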

Saving intermediate results Certain stages of 3D model reconstruction can take a long time. The full chain of operations could eventually last for hours when building a model from hundreds of photos. It is not always possible to complete all the operations in one run. PhotoScan allows to save intermediate results in a project file. Photo alignment data such as information on camera positions, sparse point cloud model and set of refined camera calibration parameters for each calibration group.

Masks applied to the photos in project. Depth maps for cameras.


Palette defines the colour for each index value to be shown with. PhotoScan offers several standard palette presets on the Palette tab of the Raster Calculator dialog. For each new line added to the palette a certain index value should be typed in. Double click on the newly added line to type the value in. A customised palette can be saved for future projects using Export Palette button on the Palette tab of the Raster Calculator dialog.
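Index-to-colour mapping with a palette can be sketched as piecewise linear interpolation between palette stops (an illustrative helper, not part of PhotoScan):

```python
def palette_color(value, palette):
    """Interpolate an (r, g, b) colour for an index value from a palette
    given as a sorted list of (value, (r, g, b)) stops."""
    if value <= palette[0][0]:
        return palette[0][1]
    if value >= palette[-1][0]:
        return palette[-1][1]
    for (v0, c0), (v1, c1) in zip(palette, palette[1:]):
        if v0 <= value <= v1:
            # Linear blend between the two surrounding stops.
            t = (value - v0) / (v1 - v0)
            return tuple(round(a + t * (b - a)) for a, b in zip(c0, c1))
```

With stops at 0.0 (red) and 1.0 (green), an index of 0.5 maps to a mid blend, and out-of-range values clamp to the end stops.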

To calculate contour lines based on vegetation index data 1. Select Generate Contours... command. The contour lines will be shown over the index data on the Ortho tab. Masks are used in PhotoScan to specify the areas on the photos which can otherwise be confusing to the program or lead to incorrect reconstruction results. Masks can be applied at the following stages of processing:

Alignment of the photos Masked areas can be excluded during feature point detection. Thus, the objects on the masked parts of the photos are not taken into account while estimating camera positions. This is important in the setups, where the object of interest is not static with respect to the scene, like when using a turn table to capture the photos.

Masking may be also useful when the object of interest occupies only a small part of the photo. In this case a small number of useful matches can be filtered out mistakenly as a noise among a much greater number of matches between background objects. Building dense point cloud While building dense point cloud, masked areas are not used in the depth maps computation process.

Setting the masks for such background areas allows to avoid this problem and increases the precision and quality of geometry reconstruction. Building texture atlas During texture atlas generation, masked areas on the photos are not used for texturing.

Masking areas on the photos that are occluded by outliers or obstacles helps to prevent the "ghosting" effect on the resulting texture atlas. Loading masks Masks can be loaded from external sources, as well as generated automatically from background images if such data is available.

PhotoScan supports loading masks from the following sources: from alpha channel of the source photos, from separate images, generated from background photos, or based on the reconstructed 3D model. When generating masks from separate or background images, the folder selection dialog will appear. Browse to the folder containing corresponding images and select it. Import masks for Specifies whether masks should be imported for the currently opened photo, active chunk or entire Workspace.

Entire workspace – load masks for all chunks in the project. Mask file names (not used in From alpha mode) Specifies the file name template used to generate mask file names. This template can contain special tokens that will be substituted by corresponding data for each photo being processed. Tolerance (From Background method only) Specifies the tolerance threshold used for background differencing.
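Background differencing with a tolerance threshold can be sketched as follows (a grayscale toy example; PhotoScan's actual procedure is more involved):

```python
def background_mask(photo, background, tolerance):
    """Mark pixels as foreground where they differ from the background
    plate by more than the tolerance (grayscale values 0-255).

    photo, background: equally sized 2-D lists of brightness values.
    """
    return [
        [abs(p - b) > tolerance for p, b in zip(prow, brow)]
        for prow, brow in zip(photo, background)
    ]
```

A pixel close to its background value stays masked out; raising the tolerance masks out more near-background pixels.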

Tolerance value should be set according to the color separation between foreground and background pixels. For larger separation higher tolerance values can be used. Editing masks Modification of the current mask is performed by adding or subtracting selections. A selection is created with one of the supported selection tools and is not incorporated in the current mask until it is merged with a mask using Add Selection or Subtract Selection operations.

To edit the mask 1. Open the photo to be masked by double clicking on its name on the Workspace pane. The photo will be opened in the main window. The existing mask will be displayed as a shaded region on the photo.

Click on Add Selection toolbar button to add current selection to the mask, or Subtract Selection to subtract the selection from the mask.

Invert Selection button allows to invert current selection prior to adding or subtracting it from the mask. Intelligent paint tool Intelligent paint tool is used to "paint" a selection by the mouse, continuously adding small image regions bounded by object boundaries. Magic wand tool Magic Wand tool is used to select uniform areas of the image. To make a selection with a Magic Wand tool, click inside the region to be selected.

The range of pixel colors selected by Magic Wand is controlled by the tolerance value. At lower tolerance values the tool selects fewer colors similar to the pixel you click with the Magic Wand tool. Higher value broadens the range of colors selected. A mask can be inverted using Invert Mask command from the Photo menu. The command is active in Photo View only. Alternatively, you can invert masks either for selected cameras or for all cameras in a chunk using Invert Masks... command. The masks are generated individually for each image.
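The Magic Wand behaviour described above amounts to a tolerance-bounded flood fill, sketched here for a grayscale image (illustrative only):

```python
from collections import deque

def magic_wand(image, seed, tolerance):
    """Flood-fill selection: starting at seed (row, col), grow over
    4-connected pixels whose value is within tolerance of the seed pixel."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    selected = {(sy, sx)}
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in selected
                    and abs(image[ny][nx] - base) <= tolerance):
                selected.add((ny, nx))
                queue.append((ny, nx))
    return selected
```

Clicking in a near-uniform region selects only pixels reachable through values close to the clicked one; a larger tolerance grows the selection, matching the dialog's behaviour.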

If some object should be masked out, it should be masked out on all photos, where that object appears. Image with alpha channel – generates color images from source photos combined with mask data in alpha channel.

Mask file names Specifies the file name template used to generate mask file names. Mask file names parameter will not be used in this case. Editing point cloud The following point cloud editing tools are available in PhotoScan: Reprojection error High reprojection error usually indicates poor localization accuracy of the corresponding point projections at the point matching step. It is also typical for false matches.

Removing such points can improve accuracy of the subsequent optimization step. Reconstruction uncertainty High reconstruction uncertainty is typical for points, reconstructed from nearby photos with small baseline. Such points can noticeably deviate from the object surface, introducing noise in the point cloud. While removal of such points should not affect the accuracy of optimization, it may be useful to remove them before building geometry in Point Cloud mode or for better visual appearance of the point cloud.

Image count PhotoScan reconstructs all the points that are visible on at least two photos. However, points that are visible on only two photos are likely to be located with poor accuracy.

Image count filtering enables to remove such unreliable points from the cloud. Projection Accuracy This criterion allows to filter out points whose projections were relatively poorly localized due to their bigger size. To remove points based on specified criterion 1.

Switch to Point Cloud view mode using Point Cloud toolbar button. In the Gradual Selection dialog box specify the criterion to be used for filtering. Adjust the threshold level using the slider. You can observe how the selection changes while dragging the slider. Click OK button to finalize the selection. To remove selected points use Delete Selection command from the Edit menu or click Delete Selection toolbar button or simply press Del button on the keyboard.
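Conceptually, gradual selection is a threshold filter over a per-point criterion; a minimal sketch follows (hypothetical data layout, not PhotoScan's API):

```python
def gradual_selection(points, criterion, threshold):
    """Select point ids whose value for the given criterion exceeds the
    slider threshold, e.g. criterion='reprojection_error'.

    points: dict mapping point id -> dict of criterion values.
    """
    return {pid for pid, attrs in points.items()
            if attrs[criterion] > threshold}
```

Dragging the slider corresponds to changing `threshold` and re-evaluating the selection; the selected ids would then be deleted.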

Filtering points based on applied masks To remove points based on applied masks 1. Switch to Dense Cloud view mode using Dense Cloud toolbar button. In the Select Masked Points dialog box indicate the photos whose masks should be taken into account. Adjust the edge softness level using the slider. Click OK button to run the selection procedure. Choose Select Points by Color... command. In the Select Points by Color dialog box indicate the color to be used as the criterion.

Adjust the tolerance level using the slider. Tie point per photo limit Tie point limit parameter could be adjusted before Align photos procedure. The number indicates the upper limit for matching points for every image.

Using zero value doesn’t apply any tie-point filtering. The number of tie points can also be reduced after the alignment process with Tie Points – Thin Point Cloud command available from Tools menu. To add new points to the current selection hold the Ctrl key during selection of additional points.

To remove some points from the current selection hold the Shift key during selection of points to be removed. To delete selected points click the Delete Selection toolbar button or select Delete Selection command from the Edit menu. To crop selection to the selected points click the Crop Selection toolbar button or select Crop Selection command from the Edit menu. To classify ground points automatically 1.

Select Classify Ground Points... command. In the Classify Ground Points dialog box select the source point data for the classification procedure. Click OK button to run the classification procedure. The automatic classification procedure consists of two steps.

At the first step the dense cloud is divided into cells of a certain size. In each cell the lowest point is detected. Triangulation of these points gives the first approximation of the terrain model.

At the second step a new point is added to the ground class provided that it satisfies two conditions: it lies within a certain distance from the terrain model, and the angle between the terrain model and the line connecting this new point with a point from the ground class is less than a certain angle.

The second step is repeated while there still are points to be checked. Max angle (deg) Determines one of the conditions to be checked while testing a point as a ground one, i.e. the maximum slope of the line connecting the point in question with a point from the ground class. For nearly flat terrain it is recommended to use the default value of 15 deg for the parameter. It is reasonable to set a higher value if the terrain contains steep slopes.

Max distance (m) Determines one of the conditions to be checked while testing a point as a ground one, i.e. the maximum distance between the point and the terrain model. In fact, this parameter determines the assumption for the maximum variation of the ground elevation at a time. Cell size (m) Determines the size of the cells for the point cloud to be divided into as a preparatory step in the ground points classification procedure. Cell size should be indicated with respect to the size of the largest area within the scene that does not contain any ground points, e.g. the largest building.
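The two-step procedure can be sketched in a simplified one-dimensional form (illustrative only; PhotoScan operates on a full 3D cloud and a triangulated terrain model):

```python
import math

def classify_ground(points, cell_size, max_angle_deg, max_distance):
    """Simplified 1-D version of the two-step classification.

    points: list of (x, z) samples. Returns the set of indices
    classified as ground.
    """
    # Step 1: seed the ground class with the lowest point of each cell.
    cells = {}
    for i, (x, z) in enumerate(points):
        c = int(x // cell_size)
        if c not in cells or z < points[cells[c]][1]:
            cells[c] = i
    ground = set(cells.values())
    # Step 2: grow the ground class until no more points qualify. A point
    # joins if, for some current ground point, the vertical offset is
    # within max_distance and the connecting line's slope is within
    # max_angle_deg.
    changed = True
    while changed:
        changed = False
        for i, (x, z) in enumerate(points):
            if i in ground:
                continue
            for g in ground:
                gx, gz = points[g]
                dz = z - gz
                angle = math.degrees(math.atan2(abs(dz), abs(x - gx) or 1e-9))
                if abs(dz) <= max_distance and angle <= max_angle_deg:
                    ground.add(i)
                    changed = True
                    break
    return ground
```

In the test below, a 5 m high roof point fails both the distance and angle conditions and stays out of the ground class, while the gently varying terrain points are all classified as ground.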

Manual classification of dense cloud points PhotoScan allows to associate all the points within the dense cloud with a certain standard class (see LIDAR data classification). This provides the possibility to diversify export of the processing results with respect to different types of objects within the scene, e.g. DTM for ground, mesh for buildings and point cloud for vegetation. To assign a class to a group of points 1. Switch to Dense Cloud view mode using Dense Cloud toolbar button.

Dense point cloud classification can be reset with Reset Classification command from Tools – Dense Cloud menu. Editing model geometry The following mesh editing tools are available in PhotoScan:

More complex editing can be done in the external 3D editing tools. PhotoScan allows to export mesh and then import it back for this purpose. Decimation tool Decimation is a tool used to decrease the geometric resolution of the model by replacing high resolution mesh with a lower resolution one, which is still capable of representing the object geometry with high accuracy. PhotoScan tends to produce 3D models with excessive geometry resolution, so mesh decimation is usually a desirable step after geometry computation.

Highly detailed models may contain hundreds of thousands of polygons. While it is acceptable to work with such complex models in 3D editor tools, in most conventional tools like Adobe Reader or Google Earth the high complexity of 3D models may noticeably decrease application performance.

High complexity also results in longer time required to build texture and to export the model in pdf file format. In some cases it is desirable to keep as many geometry details as possible, as may be needed for scientific and archival purposes.

You will have to rebuild texture atlas after decimation is complete. Close Holes tool Close Holes tool provides possibility to repair your model if the reconstruction procedure resulted in a mesh with several holes, due to insufficient image overlap for example.

Some tasks require a continuous surface disregarding the fact of information shortage. It is necessary to generate a closed model, for instance, to fulfill a volume measurement task with PhotoScan.

Close holes tool enables to close void areas on the model substituting photogrammetric reconstruction with extrapolation data. It is possible to control an acceptable level of accuracy indicating the maximum size of a hole to be covered with extrapolated data.

To close holes in a 3D model 1. Select Close Holes... command. In the Close Holes dialog box indicate the maximum size of a hole to be covered with the slider. Click on the OK button to start the procedure. To cancel processing click on the Cancel button. Polygon filtering on specified criterion In some cases reconstructed geometry may contain a cloud of small isolated mesh fragments surrounding the "main" model, or big unwanted polygons.

Mesh filtering based on different criteria helps to select polygons which usually are supposed to be removed. Connected component size This filtering criterion allows to select isolated fragments with a certain number of polygons.

The number of polygons in all isolated components to be selected is set with a slider and is indicated in relation to the number of polygons in the whole model.

The components are ranged in size, so that the selection proceeds from the smallest component to the largest one. Select the size of isolated components to be removed using the slider. To remove the selected components use Delete Selection command from the Edit menu, or click Delete Selection toolbar button, or simply press Del button on the keyboard. To remove large polygons 1. Select Gradual Selection... command. Select the size of polygons to be removed using the slider.
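The connected-component selection above can be sketched as a graph traversal over face adjacency (hypothetical data layout, not PhotoScan's implementation):

```python
def small_components(faces, adjacency, max_fraction):
    """Select faces belonging to connected components whose size is at
    most max_fraction of the model's total face count.

    faces: iterable of face ids; adjacency: dict face -> neighbour faces.
    """
    faces = set(faces)
    total = len(faces)
    selected = set()
    seen = set()
    for f in faces:
        if f in seen:
            continue
        # Collect the whole component with a simple depth-first walk.
        stack, comp = [f], set()
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(adjacency.get(cur, ()))
        seen |= comp
        if len(comp) <= max_fraction * total:
            selected |= comp
    return selected
```

Moving the slider corresponds to changing `max_fraction`: small isolated fragments are selected first, and the "main" component is included only once the fraction covers it.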

Note that PhotoScan always selects the fragments starting from the smallest ones. If the model contains only one component the selection will be empty. Manual face removal Unnecessary and excessive sections of model geometry can be also removed manually. To remove part of the mesh polygons manually 1. Select rectangle, circle or free-form selection tool using Rectangle Selection, Circle Selection or Free-Form Selection toolbar buttons.

Make the selection using the mouse. To add new polygons to the current selection hold the Ctrl key during selection of additional polygons. To remove some polygons from the current selection hold the Shift key during selection of polygons to be excluded. To delete selected polygons click the Delete Selection toolbar button or use Delete Selection command from the Edit menu.

To crop the mesh to the selected polygons click the Crop Selection toolbar button or use the Crop Selection command from the Edit menu.

To fix mesh topology
1. Select the View Mesh Statistics command.
2. In the Mesh Statistics dialog box you can inspect mesh parameters.

If there are any topological problems, the Fix Topology button will be active and can be clicked to solve them.

Editing mesh in an external program
To export mesh for editing in an external program
1. Select the Export Model command.
2. In the Save As dialog box, specify the desired mesh format in the Save as type combo box.
3. Select the file name to be used for the model and click the Save button.

4. In the opened dialog box specify additional parameters specific to the selected file format.

To import the edited mesh
1. Select the Import Mesh command. Please make sure to select one of these file formats when exporting the model from the external 3D editor.

Alternatively, shapes can be loaded from .KML files using the Import Shapes command. Shapes created in PhotoScan can be exported using the Export Shapes command. Double click on the last point to indicate the end of a polyline. To complete a polygon, place the end point over the starting one. Once the shape is drawn, a shape label will be added to the chunk data structure on the Workspace pane. All shapes drawn on the same DEM and on the corresponding orthomosaic will be shown under the same label on the Workspace pane. The Delete Vertex command is active only for a vertex context menu.

To get access to the vertex context menu, select the shape with a double click first, and then select the vertex with a double click on it. To change position of a vertex, drag and drop it to a selected position with the cursor.

Shapes allow to measure distances both on a DEM and a 3D model, and to measure coordinates, surface areas and volumes on a 3D model.

Orthomosaic seamlines editing
PhotoScan software offers various blending options at the orthomosaic generation step for the user to adjust processing to their data and task.

However, in some projects moving objects can cause artifacts which interfere with visual quality of the orthomosaic.

The same problem may result from oblique aerial imagery processing if the area of interest contains high buildings or if the user has captured facades from too oblique positions. To eliminate the mentioned artifacts PhotoScan offers a seamline editing tool. The functionality allows to manually choose the image or images from which to texture the indicated part of the orthomosaic.

Thus, the final orthomosaic can be improved visually according to the user's expectations. Automatic seamlines can be turned on for inspection in the Ortho view by pressing the Show Seamlines button on the Ortho view toolbar.

To edit orthomosaic seamlines
1. Draw a polygon on the orthomosaic using the Draw Polygon instrument to indicate the area to be retextured.

2. In the Assign Images dialog box select the image from which to texture the area inside the polygon. The orthomosaic preview on the Ortho tab allows to evaluate the results of the selection.
3. Click the OK button to finalise the image selection process.

The Assign Images dialog, alternatively, allows to exclude selected images from texturing the area of interest. Check the Exclude selected images option to follow this way. Please note that in this case the preview illustrates the image to be excluded.

Automation

Using chunks
When working with typical data sets, automation of the general processing workflow allows to perform routine operations efficiently.

PhotoScan allows to assign several processing steps to be run one by one without user intervention thanks to Batch Processing feature. Manual user intervention can be minimized even further due to ‚multiple chunk project‘ concept, each chunk to include one typical data set. For a project with several chunks of the same nature, common operations available in Batch Processing dialog are applied to each selected chunk individually, thus allowing to set several data sets for automatic processing following predefined workflow pattern.

In addition, multiple chunk project could be useful when it turns out to be hard or even impossible to generate a 3D model of the whole scene in one go. This could happen, for instance, if the total amount of photographs is too large to be processed at a time. To overcome this difficulty PhotoScan offers a possibility to split the set of photos into several separate chunks within the same project.

Alignment of photos, building dense point cloud, building mesh, and forming texture atlas operations can be performed for each chunk separately and then resulting 3D models can be combined together. Working with chunks is not more difficult than using PhotoScan following the general workflow. In fact, in PhotoScan always exists at least one active chunk and all the 3D model processing workflow operations are applied to this chunk.

To work with several chunks you need to know how to create chunks and how to combine the resulting 3D models from separate chunks into one model.

Creating a chunk
To create a new chunk click on the Add Chunk toolbar button on the Workspace pane or select the Add Chunk command from the Workspace context menu, available by right-clicking on the root element on the Workspace pane. After the chunk is created you may load photos in it, align them, generate dense point cloud, generate mesh surface model, build texture atlas, export the models at any stage and so on.

The models in the chunks are not linked with each other. The list of all the chunks created in the current project is displayed in the Workspace pane along with flags reflecting their status.

Working with chunks
All operations within the chunk are carried out following the common workflow: loading photographs, aligning them, generating dense point cloud, building mesh, building texture atlas, exporting 3D model and so on. Note that all these operations are applied to the active chunk.

When a new chunk is created it is activated automatically. The Save project operation saves the content of all chunks. To save selected chunks as a separate project use the Save Chunks command from the chunk context menu.

To set another chunk as active
1. Right-click on the chunk title on the Workspace pane.

Aligning chunks
After the „partial" 3D models are built in several chunks they can be merged together. Before merging, the chunks need to be aligned.

To align separate chunks
1. Select the Align Chunks command from the Workflow menu.

2. In the Align Chunks dialog box select the chunks to be aligned and indicate the reference chunk with a double-click.
3. Set the desired alignment options. To cancel processing click the Cancel button.

Aligning chunks parameters
The following parameters control the chunks alignment procedure and can be modified in the Align Chunks dialog box:

Method
Defines the chunks alignment method. Point based method aligns chunks by matching photos across all the chunks.

Marker based method uses markers as common points for different chunks.

Fix scale
Option is to be enabled in case the scales of the models in different chunks were set precisely and should be left unchanged during the chunks alignment process.

Preselect image pairs (Point based alignment only)
The alignment process of many chunks may take a long time. A significant portion of this time is spent on matching the detected features across the photos. The image pair preselection option can speed up this process by selecting a subset of image pairs to be matched.
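One common way to preselect image pairs is to match only cameras whose measured positions lie close together. The sketch below is an assumption about how such a filter could work, not PhotoScan's actual implementation; the function name and distance threshold are hypothetical.

```python
import math

def preselect_pairs(positions, max_distance):
    """Return index pairs of cameras whose recorded locations lie within
    max_distance of each other; only these pairs would get the expensive
    feature-matching step."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) <= max_distance:
                pairs.append((i, j))
    return pairs

# Three camera stations 30 m apart along a flight line, plus a distant one.
cams = [(0, 0, 100), (30, 0, 100), (60, 0, 100), (500, 0, 100)]
print(preselect_pairs(cams, 50))  # → [(0, 1), (1, 2)]
```

Only neighbouring stations are matched; the distant camera is skipped, which is where the speed-up comes from.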

Constrain features by mask (Point based alignment only)
When this option is enabled, features detected in the masked image regions are discarded. For additional information on the usage of masks refer to the Using masks section.

Merging chunks
After alignment is complete the separate chunks can be merged into a single chunk. In the Merge Chunks dialog box select the chunks to be merged and the desired merging options. PhotoScan will merge the separate chunks into one.

The merged chunk will be displayed in the project content list on the Workspace pane. The following parameters control the chunks merging procedure and can be modified in the Merge Chunks dialog box:

Merge dense clouds
Defines if dense clouds from the selected chunks are combined.

Batch processing can be applied to all chunks in the Workspace, to unprocessed chunks only, or to the chunks selected by the user. Each operation chosen in the Batch processing dialog will be applied to every selected chunk before processing will move on to the next step.
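The ordering described above, with each job applied to every selected chunk before the next job starts, can be illustrated with a minimal scheduler sketch. The names are hypothetical; PhotoScan's Batch Process dialog performs this iteration internally.

```python
def run_batch(jobs, chunks):
    """Apply each job to every chunk before moving on to the next job,
    mirroring how Batch Process iterates (illustrative only)."""
    log = []
    for job in jobs:            # jobs run in the arranged order
        for chunk in chunks:    # ...across all selected chunks
            log.append(f"{job}:{chunk}")
    return log

print(run_batch(["Align Photos", "Build Dense Cloud"], ["chunk 1", "chunk 2"]))
# → ['Align Photos:chunk 1', 'Align Photos:chunk 2',
#    'Build Dense Cloud:chunk 1', 'Build Dense Cloud:chunk 2']
```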

To start batch processing
1. Select the Batch Process command.
2. In the Add Job dialog select the kind of operation to be performed, the list of chunks it should be applied to, and the desired processing parameters.
3. Arrange jobs by clicking the Up and Down arrows at the right of the Batch Process dialog.
The progress dialog box will appear displaying the list and status of batch jobs and the current operation progress.

For this purpose, multiple image frames captured at different time moments can be loaded for each camera location, forming a multiframe chunk.

In fact, normal chunks capturing a static scene are multiframe chunks with only a single frame loaded. Navigation through the frame sequence is performed using the Timeline pane. Although a separate static chunk can be used to process photos for each time moment, the aggregate multiframe chunk implementation has several advantages:
There is no need to align chunks to each other after processing.
There is no need to use batch processing, which simplifies the workflow.

Multiframe chunks can also be efficient (with some limitations) for processing of disordered photo sets of the same object or even different objects, provided that cameras remain static throughout the sequence.

Managing multiframe chunks
Multiframe layout is formed at the moment of adding photos to the chunk.

It will reflect the data layout used to store image files. Therefore it is necessary to organize files on the disk appropriately in advance. The following data layouts can be used with PhotoScan:


Table of Contents
Overview
Capturing photos
General workflow
Network processing
Graphical user interface
Application window
Menu commands
Toolbar buttons
Hot keys
Supported formats
Camera calibration
Camera flight log
GCP locations
Interior and exterior camera orientation parameters
Tie points
Mesh model
Tiled models
Shapes and contours
Camera models
Frame cameras
Fisheye cameras
Spherical cameras (equirectangular projection)
Spherical cameras (cylindrical projection)

Overview
Agisoft PhotoScan is an advanced image-based 3D modeling solution aimed at creating professional quality 3D content from still images.

Based on the latest multi-view 3D reconstruction technology, it operates with arbitrary images and is efficient in both controlled and uncontrolled conditions.

Photos can be taken from any position, providing that the object to be reconstructed is visible on at least two photos. Both image alignment and 3D model reconstruction are fully automated.

How it works
Generally the final goal of photographs processing with PhotoScan is to build a textured 3D model. The procedure of photographs processing and 3D model construction comprises four main stages.

The first stage is camera alignment. At this stage PhotoScan searches for common points on photographs and matches them, as well as finds the position of the camera for each picture and refines camera calibration parameters.

As a result a sparse point cloud and a set of camera positions are formed. The sparse point cloud represents the results of photo alignment and will not be directly used in the further 3D model construction procedure except for the sparse point cloud based reconstruction method.

However it can be exported for further usage in external programs. For instance, the sparse point cloud model can be used in a 3D editor as a reference.

On the contrary, the set of camera positions is required for further 3D model reconstruction by PhotoScan. The next stage is building dense point cloud. Based on the estimated camera positions and pictures themselves a dense point cloud is built by PhotoScan. Dense point cloud may be edited and classified prior to export or proceeding to 3D mesh model generation. The third stage is building mesh. PhotoScan reconstructs a 3D polygonal mesh representing the object surface based on the dense or sparse point cloud according to the user’s choice.

Generally there are two algorithmic methods available in PhotoScan that can be applied to 3D mesh generation: Height Field – for planar type surfaces, Arbitrary – for any kind of object.

The mesh having been built, it may be necessary to edit it. Some corrections, such as mesh decimation, removal of detached components, closing of holes in the mesh, smoothing, etc.

For more complex editing you have to engage external 3D editor tools. PhotoScan allows to export mesh, edit it by another software and import it back. After geometry i. Several texturing modes are available in PhotoScan, they are described in the corresponding section of this manual, as well as orthomosaic and DEM generation procedures.

About the manual
Basically, the sequence of actions described above covers most of the data processing needs. All these operations are carried out automatically according to the parameters set by the user. Instructions on how to get through these operations and descriptions of the parameters controlling each step are given in the corresponding sections of the Chapter 3, General workflow chapter of the manual.

In some cases, however, additional actions may be required to get the desired results. In some capturing scenarios masking of certain regions of the photos may be required to exclude them from the calculations. Application of masks in the PhotoScan processing workflow, as well as available editing options, are described in Chapter 6, Editing.

Camera calibration issues are discussed in Chapter 4, Referencing, that also describes functionality to optimize camera alignment results and provides guidance on model referencing. A referenced model, be it a mesh or a DEM serves as a ground for measurements. Area, volume, profile measurement procedures are tackled in Chapter 5, Measurements, which also includes information on vegetation indices calculations.

While Chapter 7, Automation describes opportunities to save up on manual intervention to the processing workflow, Chapter 8, Network processing presents guidelines on how to organize distributed processing of the imagery data on several nodes. It can take up quite a long time to reconstruct a 3D model. PhotoScan allows to export obtained results and save intermediate data in a form of project files at any stage of the process. If you are not familiar with the concept of projects, its brief description is given at the end of the Chapter 3, General workflow.

In the manual you can also find instructions on the PhotoScan installation procedure and basic rules for taking „good“ photographs, i. For the information refer to Chapter 1, Installation and Chapter 2, Capturing photos.

Chapter 1. Installation

NVidia GeForce 8xxx series and later. PhotoScan is likely to be able to utilize the processing power of any OpenCL enabled device during the Dense Point Cloud generation stage, provided that OpenCL drivers for the device are properly installed. However, because of the large number of various combinations of video chips, driver versions and operating systems, Agisoft is unable to test and guarantee PhotoScan's compatibility with every device and on every platform.

The table below lists currently supported devices on Windows platform only. We will pay particular attention to possible problems with PhotoScan running on these devices. Using OpenCL acceleration with mobile or integrated graphics video chips is not recommended because of the low performance of such GPUs.

Start PhotoScan by running photoscan.

Restrictions of the Demo mode
Once PhotoScan is downloaded and installed on your computer you can run it either in the Demo mode or in the full function mode. On every start until you enter a serial number it will show a registration box offering two options: (1) use PhotoScan in the Demo mode or (2) enter a serial number to confirm the purchase.

The first choice is set by default, so if you are still exploring PhotoScan click the Continue button and PhotoScan will start in the Demo mode. The employment of PhotoScan in the Demo mode is not time limited. Several functions, however, are not available in the Demo mode.

These functions are the following: saving the project;

On purchasing you will get the serial number to enter into the registration box on starting PhotoScan. Once the serial number is entered the registration box will not appear again and you will get full access to all functions of the program.

Chapter 2. Capturing photos

Before loading your photographs into PhotoScan you need to take them and select those suitable for 3D model reconstruction. Photographs can be taken by any digital camera (both metric and non-metric), as long as you follow some specific capturing guidelines. This section explains general principles of taking and selecting pictures that provide the most appropriate data for 3D model generation.

Make sure you have studied the following rules and read the list of restrictions before you get out for shooting photographs.

Equipment
Use a digital camera with reasonably high resolution (5 MPix or more).

Avoid ultra-wide angle and fisheye lenses. The best choice is 50 mm focal length (35 mm film equivalent) lenses. It is recommended to use focal lengths from the 20 to 80 mm interval in 35 mm equivalent. If a data set was captured with a fisheye lens, the appropriate camera sensor type should be selected in the PhotoScan Camera Calibration dialog prior to processing. Fixed lenses are preferred. If zoom lenses are used, focal length should be set either to the maximal or to the minimal value during the entire shooting session for more stable results.

Take images at the maximal possible resolution. ISO should be set to the lowest value, otherwise high ISO values will induce additional noise in the images. Aperture value should be high enough to result in sufficient focal depth: it is important to capture sharp, not blurred photos. Shutter speed should not be too slow, otherwise blur can occur due to slight movements. If you still have to, shoot shiny objects under a cloudy sky.

Avoid unwanted foregrounds. Avoid moving objects within the scene to be reconstructed. Avoid absolutely flat objects or scenes.

To perform more sophisticated metric analysis the products of photogrammetric processing can be smoothly transferred to external tools thanks to a variety of export formats. Multi camera rig data processing for creative projects in cinematographic art, game industry, etc.

Multichannel orthomosaic generation and user-defined vegetation indices (e.g. NDVI) calculation and export. Python scripting and Java bindings offer sophisticated automation and customization options, from adding custom processing operations to the application GUI up to complete job automation and integration into a Python or Java pipeline.

Photogrammetric triangulation
Processing of various types of imagery: aerial (nadir, oblique), close-range, satellite. Auto calibration: frame incl. Multi-camera projects support. Scanned images with fiducial marks support.

Dense point cloud: editing and classification
Elaborate model editing for accurate results. Automatic multi-class points classification to customize further reconstruction.

Configurable vertical datums based on the geoid undulation grids. Export in blocks for huge projects. Color correction for homogeneous texture. Inbuilt ghosting filter to combat artefacts due to moving objects. Custom planar and cylindrical projection options for close range projects.

Terrestrial laser scanning (TLS) registration
Simultaneous adjustment of both laser scanner and camera positions. Capability to combine TLS and photogrammetric depth maps. Markers support and automatic targets detection for manual alignment of scanner data. Masking instruments to ignore unwanted objects in scanner data. Scale bar tool to set reference distance without implementation of positioning equipment.

Measurements: distances, areas, volumes
Inbuilt tools to measure distances, areas and volumes.

Stereoscopic measurements
Professional 3D monitors and 3D controllers support for accurate and convenient stereoscopic vectorization of features and measurement purposes.

Direct upload to various online resources and export to many popular formats.

Photorealistic textures: HDR and multifile support, incl. UDIM layout.

Hierarchical tiled model generation
City scale modeling preserving the original image resolution for texturing. Cesium publishing. Basis for numerous visual effects with 3D models reconstructed in time sequence.

Panorama stitching
3D reconstruction for data captured from the same camera position (camera station), provided that at least 2 camera stations are present. Fast reconstruction based on preferable channel.

Automatic powerlines detection
Straightforward and time-efficient for large-scale projects, since it requires only aligned images as the input. Results export in a form of a 3D polyline model for every wire. Robust results thanks to catenary curve fitting algorithm.

Satellite imagery processing
Common processing workflow for panchromatic and multispectral satellite images is supported, provided that sufficiently accurate RPC data is available for each image.

Python and Java API
In addition to Batch processing – a way to save on human intervention – Python scripting and Java bindings offer sophisticated automation and customization options.

Network processing
Distributed calculations over a local computer network to use the combined power of multiple nodes for huge data sets processing in one project.

Cloud processing
Cloud processing interface allows to save on the hardware infrastructure for the photogrammetric pipeline, with a further option to visualize and share the variety of the processing results online with colleagues or customers, as well as to embed published projects in your own web platforms.


Frame camera with Fisheye lens
If extra wide lenses were used to get the source data, the standard PhotoScan camera model will not allow to estimate camera parameters successfully.

Fisheye camera type setting will initialize implementation of a different camera model to fit ultra-wide lens distortions. In case source images lack EXIF data or the EXIF data is insufficient to calculate focal length in pixels, PhotoScan will assume that focal length equals 50 mm (35 mm film equivalent).

However, if the initial guess values differ significantly from the actual focal length, it is likely to lead to failure of the alignment process. So, if photos do not contain EXIF metadata, it is preferable to specify focal length (mm) and sensor pixel size (mm) manually.

It can be done in Camera Calibration dialog box available from Tools menu. Generally, this data is indicated in camera specification or can be received from some online source. To indicate to the program that camera orientation parameters should be estimated based on the focal length and pixel size information, it is necessary to set the Type parameter on the Initial tab to Auto value.
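The relation between focal length in millimetres, sensor pixel size and focal length in pixels is straightforward: f[pix] = f[mm] / pixel size[mm/pix]. A small helper illustrates the conversion (the function name is hypothetical; PhotoScan performs this internally from the Camera Calibration dialog values).

```python
def focal_length_px(focal_mm, pixel_size_mm):
    """Convert focal length from millimetres to pixels, as needed when
    EXIF data is missing: f[pix] = f[mm] / pixel size[mm per pixel]."""
    return focal_mm / pixel_size_mm

# A 50 mm lens on a sensor with 0.005 mm (5 micron) pixels.
print(focal_length_px(50.0, 0.005))  # → 10000.0
```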

Camera calibration parameters
Once you have tried to run the estimation procedure and got poor results, you can improve them thanks to the additional data on calibration parameters.
1. Select the calibration group which needs re-estimation of camera orientation parameters on the left side of the Camera Calibration dialog box.

Note: Alternatively, initial calibration data can be imported from file using the Load button on the Initial tab of the Camera Calibration dialog box.

Initial calibration data will be adjusted during the Align Photos processing step. Once Align Photos processing step is finished adjusted calibration data will be displayed on the Adjusted tab of the Camera Calibration dialog box.

If very precise calibration data is available, to protect it from recalculation one should check Fix calibration box. In this case initial calibration data will not be changed during Align Photos process. Adjusted camera calibration data can be saved to file using Save button on the Adjusted tab of the Camera Calibration dialog box. Estimated camera distortions can be seen on the distortion plot available from context menu of a camera group in the Camera Calibration dialog.

In addition, residuals graph the second tab of the same Distortion Plot dialog allows to evaluate how adequately the camera is described with the applied mathematical model.

Note that residuals are averaged per cell of an image and then across all the images in a camera group.

Calibration parameters list
fx, fy
Focal length in x- and y-dimensions measured in pixels.

Optimization

Optimization of camera alignment
During the photo alignment step PhotoScan automatically finds tie points and estimates intrinsic and extrinsic camera parameters.

However, the accuracy of the estimates depends on many factors, like overlap between the neighbouring photos, as well as on the shape of the object surface. Thus, it is recommended to inspect alignment results in order to delete tie points with too large reprojection error if any. Please refer to Editing point cloud section for information on point cloud editing.
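Deleting tie points with too large reprojection error is essentially a threshold filter. The sketch below illustrates the idea in plain Python; the data layout, function name and threshold are assumptions for the example, since in PhotoScan itself this is done interactively on the point cloud.

```python
def filter_by_reprojection_error(points, max_error):
    """Split tie points into kept and removed sets by reprojection error
    (in pixels); the removed outliers would be deleted before running the
    optimization procedure again."""
    kept = [p for p in points if p["error"] <= max_error]
    removed = [p for p in points if p["error"] > max_error]
    return kept, removed

pts = [{"id": 1, "error": 0.4}, {"id": 2, "error": 2.7}, {"id": 3, "error": 0.9}]
kept, removed = filter_by_reprojection_error(pts, 1.0)
print([p["id"] for p in kept], [p["id"] for p in removed])  # → [1, 3] [2]
```

With the outliers gone, re-running optimization recomputes camera parameters from the cleaner tie point set.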

Once the set of tie points has been edited, it is necessary to run optimization procedure to reestimate intrinsic and extrinsic camera parameters. Optimization procedure calculates intrinsic and extrinsic camera parameters based on the tie points left after editing procedure.

Providing that outliers have been removed, the estimates will be more accurate. In addition, this step involves estimation of a number of intrinsic camera parameters which are fixed at the alignment step: aspect, skew; and distortion parameters p3, p4, k4. In Optimize Camera Alignment dialog box check camera parameters to be optimized.

Click OK button to start optimization. After optimization is complete, estimated intrinsic camera parameters can be inspected on the Adjusted tab of the Camera Calibration dialog available from the Tools menu.

Note: The model data (if any) is cleared by the optimization procedure. You will have to rebuild the model geometry after optimization.

Masks are used in PhotoScan to specify the areas on the photos which can otherwise be confusing to the program or lead to incorrect reconstruction results.

Masks can be applied at the following stages of processing: alignment of the photos, building dense point cloud, building 3D model texture.

Alignment of the photos
Masked areas can be excluded during feature point detection.

Thus, the objects on the masked parts of the photos are not taken into account while estimating camera positions. This is important in the setups, where the object of interest is not static with respect to the scene, like when using a turn table to capture the photos.

Masking may be also useful when the object of interest occupies only a small part of the photo. In this case a small number of useful matches can be filtered out mistakenly as a noise among a much greater number of matches between background objects.

Building dense point cloud While building dense point cloud, masked areas are not used in the depth maps computation process. Masking can be used to reduce the resulting dense cloud complexity, by eliminating the areas on the photos that are not of interest.

Masked areas are always excluded from processing during dense point cloud and texture generation stages. Let’s take for instance a set of photos of some object. Along with an object itself on each photo some background areas are present. These areas may be useful for more precise camera positioning, so it is better to use them while aligning the photos.

However, impact of these areas at the building dense point cloud is exactly opposite: the resulting model will contain object of interest and its background. Background geometry will „consume“ some part of mesh polygons that could be otherwise used for modeling the main object.

Setting the masks for such background areas allows to avoid this problem and increases the precision and quality of geometry reconstruction. Building texture atlas During texture atlas generation, masked areas on the photos are not used for texturing. Masking areas on the photos that are occluded by outliers or obstacles helps to prevent the „ghosting“ effect on the resulting texture atlas. Loading masks Masks can be loaded from external sources, as well as generated automatically from background images if such data is available.

PhotoScan supports loading masks from the following sources: From alpha channel of the source photos. From separate images. Generated from background photos based on background differencing technique. Based on reconstructed 3D model. When generating masks from separate or background images, the folder selection dialog will appear.

Browse to the folder containing corresponding images and select it. The following parameters can be specified during mask import: Import masks for Specifies whether masks should be imported for the currently opened photo, active chunk or entire Workspace. Current photo – load mask for the currently opened photo if any.

Active chunk – load masks for active chunk. Entire workspace – load masks for all chunks in the project. Method Specifies the source of the mask data. From Alpha – load masks from alpha channel of the source photos.

From File – load masks from separate images. From Background – generate masks from background photos. From Model – generate masks based on reconstructed model. Mask file names (not used in From Alpha mode) Specifies the file name template used to generate mask file names. This template can contain special tokens, that will be substituted by corresponding data for each photo being processed. The following tokens are supported:
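The token list itself is not preserved in this copy; as a hedged sketch of how such a template expands, here is a plain-Python helper using a hypothetical {filename} token (consult the Import Masks dialog for the actual supported tokens):

```python
import os

def mask_filename(photo_path, template="{filename}_mask.png"):
    # "{filename}" is a hypothetical token standing for the source file
    # name without its extension; the real token list is in the dialog.
    stem = os.path.splitext(os.path.basename(photo_path))[0]
    return template.replace("{filename}", stem)

print(mask_filename("photos/IMG_0042.JPG"))  # IMG_0042_mask.png
```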

Tolerance (From Background method only) Specifies the tolerance threshold used for background differencing.
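The differencing rule can be sketched in plain Python. This is an illustrative model, not PhotoScan's actual implementation: a pixel is treated as foreground when its color separation from the corresponding background pixel exceeds the tolerance.

```python
def background_mask(photo, background, tolerance):
    """Return a per-pixel mask: True = foreground (kept for processing),
    False = background (masked out). Illustrative sketch only.

    photo, background: equally sized lists of (r, g, b) tuples.
    """
    mask = []
    for (r1, g1, b1), (r2, g2, b2) in zip(photo, background):
        separation = max(abs(r1 - r2), abs(g1 - g2), abs(b1 - b2))
        mask.append(separation > tolerance)
    return mask

photo      = [(200, 10, 10), (52, 50, 51), (48, 49, 50)]
background = [(50, 50, 50), (50, 50, 50), (50, 50, 50)]
print(background_mask(photo, background, tolerance=10))  # [True, False, False]
```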

Tolerance value should be set according to the color separation between foreground and background pixels. For larger separation higher tolerance values can be used.

Editing masks

Modification of the current mask is performed by adding or subtracting selections.

A selection is created with one of the supported selection tools and is not incorporated in the current mask until it is merged with a mask using Add Selection or Subtract Selection operations.

The photo will be opened in the main window. The existing mask will be displayed as a shaded region on the photo. Use Add Selection to merge the current selection with the mask, or Subtract Selection to subtract the selection from the mask. Invert Selection button allows to invert current selection prior to adding or subtracting it from the mask.

The following tools can be used for creating selections: Rectangle selection tool Rectangle selection tool is used to select large areas or to clean up the mask after other selection tools were applied. Intelligent scissors tool Intelligent scissors is used to generate a selection by specifying its boundary.

The boundary is formed by selecting a sequence of vertices with a mouse, which are automatically connected with segments. The segments can be formed either by straight lines, or by curved contours snapped to the object boundaries.

To enable snapping, hold Ctrl key while selecting the next vertex. To complete the selection, the boundary should be closed by clicking on the first boundary vertex. Intelligent paint tool Intelligent paint tool is used to „paint“ a selection by the mouse, continuously adding small image regions, bounded by object boundaries. Magic wand tool Magic Wand tool is used to select uniform areas of the image.

To make a selection with a Magic Wand tool, click inside the region to be selected. The range of pixel colors selected by Magic Wand is controlled by the tolerance value. At lower tolerance values the tool selects fewer colors similar to the pixel you click with the Magic Wand tool.

Higher value broadens the range of colors selected. Note To add new area to the current selection hold the Ctrl key during selection of additional area. To reset mask selection on the current photo press Esc key. A mask can be inverted using Invert Mask command from the Photo menu. The command is active in Photo View only.

Alternatively, you can invert masks either for selected cameras or for all cameras in a chunk using Invert Masks command. The masks are generated individually for each image. If some object should be masked out, it should be masked out on all photos, where that object appears.
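The Magic Wand behaviour described earlier (growing a selection from the clicked pixel, limited by the tolerance value) can be sketched as a flood fill over a grayscale image. This is illustrative only, not PhotoScan's implementation:

```python
from collections import deque

def magic_wand(image, seed, tolerance):
    """Select the connected region of pixels whose value differs from the
    clicked (seed) pixel by at most `tolerance` (grayscale sketch)."""
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    base = image[sr][sc]
    selected = {(sr, sc)}
    queue = deque([(sr, sc)])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in selected
                    and abs(image[nr][nc] - base) <= tolerance):
                selected.add((nr, nc))
                queue.append((nr, nc))
    return selected

image = [[10, 12, 90],
         [11, 13, 91],
         [80, 82, 92]]
print(sorted(magic_wand(image, seed=(0, 0), tolerance=5)))
# [(0, 0), (0, 1), (1, 0), (1, 1)]
```

A lower tolerance keeps the selection tight around colors similar to the seed; raising it lets the fill spill into the brighter regions, which matches the slider behaviour described above.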

The following parameters can be specified during mask export: Export masks for Specifies whether masks should be exported for the currently opened photo, active chunk or entire Workspace. Current photo – save mask for the currently opened photo if any. Active chunk – save masks for active chunk.

Entire workspace – save masks for all chunks in the project. File type Specifies the type of generated files. Single channel mask image – generates single channel black and white mask images. Image with alpha channel – generates color images from source photos combined with mask data in alpha channel. Mask file names Specifies the file name template used to generate mask file names.

Mask file names parameter will not be used in this case.

Editing point cloud

The following point cloud editing tools are available in PhotoScan:
Automatic filtering based on specified criterion (sparse cloud only)
Automatic filtering based on applied masks (dense cloud only)
Automatic filtering based on points colors (dense cloud only)
Reducing number of points in cloud by setting tie point per photo limit (sparse cloud only)
Manual points removal

Filtering points based on specified criterion In some cases it may be useful to find out where the points with high reprojection error are located within the sparse cloud, or remove points representing high amount of noise.

Point cloud filtering helps to select such points, which usually are supposed to be removed. PhotoScan supports the following criteria for point cloud filtering: Reprojection error High reprojection error usually indicates poor localization accuracy of the corresponding point projections at the point matching step.

It is also typical for false matches. Removing such points can improve accuracy of the subsequent optimization step. Reconstruction uncertainty High reconstruction uncertainty is typical for points, reconstructed from nearby photos with small baseline. Such points can noticeably deviate from the object surface, introducing noise in the point cloud. While removal of such points should not affect the accuracy of optimization, it may be useful to remove them before building geometry in Point Cloud mode or for better visual appearance of the point cloud.

Image count PhotoScan reconstructs all the points that are visible on at least two photos. However, points that are visible only on two photos are likely to be located with poor accuracy.

Image count filtering enables to remove such unreliable points from the cloud. Projection Accuracy This criterion allows to filter out points whose projections were localised relatively poorly due to their larger size. In the Gradual Selection dialog box specify the criterion to be used for filtering.

Adjust the threshold level using the slider. You can observe how the selection changes while dragging the slider. Click OK button to finalize the selection. To remove selected points use Delete Selection command from the Edit menu or click Delete Selection toolbar button or simply press Del button on the keyboard.
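The Gradual Selection logic amounts to thresholding a per-point criterion. A minimal sketch with an illustrative point data model (not PhotoScan's internal representation):

```python
def gradual_selection(points, criterion, threshold):
    """Select points whose value for `criterion` exceeds `threshold`,
    mimicking the Gradual Selection slider (illustrative data model)."""
    return [p for p in points if p[criterion] > threshold]

points = [
    {"id": 0, "reprojection_error": 0.4},
    {"id": 1, "reprojection_error": 2.1},
    {"id": 2, "reprojection_error": 0.9},
]
selected = gradual_selection(points, "reprojection_error", threshold=1.0)
remaining = [p for p in points if p not in selected]  # after Delete Selection
print([p["id"] for p in selected], [p["id"] for p in remaining])  # [1] [0, 2]
```

Dragging the slider corresponds to changing `threshold`; the other criteria (reconstruction uncertainty, image count, projection accuracy) follow the same pattern with a different key.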

Filtering points based on applied masks To remove points based on applied masks 1. In the Select Masked Points dialog box indicate the photos whose masks to be taken into account. Adjust the edge softness level using the slider.

Click OK button to run the selection procedure. Filtering points based on points colors To remove points based on points colors 1. In the Select Points by Color dialog box specify the color to be used as the criterion. Adjust the tolerance level using the slider. Tie point per photo limit Tie point limit parameter could be adjusted before Align photos procedure. The number indicates the upper limit for matching points for every image. Using zero value doesn’t apply any tie point filtering.

The number of tie points can also be reduced after the alignment process with Tie Points – Thin Point Cloud command available from Tools menu. To add new points to the current selection hold the Ctrl key during selection of additional points. To remove some points from the current selection hold the Shift key during selection of points to be removed. To delete selected points click the Delete Selection toolbar button or select Delete Selection command from the Edit menu.

To crop selection to the selected points click the toolbar button or select Crop Selection command from the Edit menu. Editing model geometry The following mesh editing tools are available in PhotoScan: Decimation tool Close Holes tool Automatic filtering based on specified criterion Manual polygon removal Fixing mesh topology More complex editing can be done in the external 3D editing tools.

PhotoScan allows to export mesh and then import it back for this purpose. Note For polygon removal operations such as manual removal and connected component filtering it is possible to undo the last mesh editing operation. Decimation tool Decimation is a tool used to decrease the geometric resolution of the model by replacing high resolution mesh with a lower resolution one, which is still capable of representing the object geometry with high accuracy.

PhotoScan tends to produce 3D models with excessive geometry resolution, so mesh decimation is usually a desirable step after geometry computation. Highly detailed models may contain hundreds of thousands of polygons. While it is acceptable to work with such complex models in 3D editor tools, in most conventional tools like Adobe Reader or Google Earth high complexity of 3D models may noticeably decrease application performance. High complexity also results in longer time required to build texture and to export model in pdf file format.

In some cases it is desirable to keep as much geometry details as possible like it is needed for scientific and archive purposes. However, if there are no special requirements it is recommended to decimate the model down to – polygons for exporting in PDF, and to or even less for displaying in Google Earth and alike tools. In the Decimate Mesh dialog box specify the target number of polygons, which should remain in the final model. Click on the OK button to start decimation. To cancel processing click on the Cancel button.

Note Texture atlas is discarded during decimation process. You will have to rebuild texture atlas after decimation is complete. Close Holes tool Close Holes tool provides possibility to repair your model if the reconstruction procedure resulted in a mesh with several holes, due to insufficient image overlap for example.

Close holes tool enables to close void areas on the model substituting photogrammetric reconstruction with extrapolation data. It is possible to control an acceptable level of accuracy indicating the maximum size of a hole to be covered with extrapolated data.

In the Close Holes dialog box indicate the maximum size of a hole to be covered with the slider. Click on the OK button to start the procedure. Note The slider allows to set the size of a hole in relation to the size of the whole model surface. Polygon filtering on specified criterion In some cases reconstructed geometry may contain the cloud of small isolated mesh fragments surrounding the „main“ model or big unwanted polygons. Mesh filtering based on different criteria helps to select polygons, which usually are supposed to be removed.

PhotoScan supports the following criteria for face filtering: Connected component size This filtering criterion allows to select isolated fragments with a certain number of polygons. The number of polygons in all isolated components to be selected is set with a slider and is indicated in relation to the number of polygons in the whole model.

The components are ranged in size, so that the selection proceeds from the smallest component to the largest one. Polygon size This filtering criterion allows to select polygons up to a certain size. The size of the polygons to be selected is set with a slider and is indicated in relation to the size of the whole model. This function can be useful, for example, in case the geometry was reconstructed in Smooth type and there is a need to remove extra polygons automatically added by PhotoScan to fill the gaps; these polygons are often of a larger size than the rest.

Select the size of isolated components to be removed using the slider. To remove the selected components use Delete Selection command from the Edit menu or click Delete Selection toolbar button or simply press Del button on the keyboard.

Select the size of polygons to be removed using the slider. Note that PhotoScan always selects the fragments starting from the smallest ones. If the model contains only one component the selection will be empty. Manual face removal Unnecessary and excessive sections of model geometry can be also removed manually. Make the selection using the mouse. To add new polygons to the current selection hold the Ctrl key during selection of additional polygons. To remove some polygons from the current selection hold the Shift key during selection of polygons to be excluded.

To crop selection to the selected polygons click the toolbar button or use Crop Selection command from the Edit menu. To grow current selection press PageUp key in the selection mode. To grow selection by even a larger amount, press PageUp while holding Shift key pressed. To shrink current selection press PageDown key in the selection mode. To shrink selection by even a larger amount, press PageDown while holding Shift key pressed.
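The connected component size criterion described above can be sketched with a union-find over triangular faces that share an edge. This is an illustrative model of the criterion, not PhotoScan's code:

```python
def component_sizes(faces):
    """Group triangular mesh faces into connected components (faces sharing
    an edge) and return component sizes as fractions of the whole model."""
    # Union-find over face indices, linked through shared edges.
    parent = list(range(len(faces)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    edge_owner = {}
    for i, face in enumerate(faces):
        for k in range(3):
            edge = frozenset((face[k], face[(k + 1) % 3]))
            if edge in edge_owner:
                parent[find(i)] = find(edge_owner[edge])
            else:
                edge_owner[edge] = i
    counts = {}
    for i in range(len(faces)):
        root = find(i)
        counts[root] = counts.get(root, 0) + 1
    return sorted(c / len(faces) for c in counts.values())

# Two triangles sharing edge (1, 2) form one component; face (5, 6, 7)
# is an isolated fragment that a size filter would select for removal.
faces = [(0, 1, 2), (1, 2, 3), (5, 6, 7)]
print([round(s, 2) for s in component_sizes(faces)])  # [0.33, 0.67]
```

Setting the slider to a fraction selects every component whose size falls below it, smallest components first, matching the ranking behaviour described above.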

In the Mesh Statistics dialog box you can inspect mesh parameters. If there are any topological problems, Fix Topology button will be active and can be clicked to solve the problems. Editing mesh in the external program To export mesh for editing in the external program 1. In the Save As dialog box, specify the desired mesh format in the Save as type combo box. Select the file name to be used for the model and click Save button.

In the opened dialog box specify additional parameters specific to the selected file format. Please make sure to select one of these file formats when exporting model from the external 3D editor.

Chapter 6. Automation

Using chunks

When working with typical data sets, automation of general processing workflow allows to perform routine operations efficiently. PhotoScan allows to assign several processing steps to be run one by one without user intervention thanks to Batch Processing feature.

Manual user intervention can be minimized even further due to ‚multiple chunk project‘ concept, each chunk to include one typical data set. For a project with several chunks of the same nature, common operations available in Batch Processing dialog are applied to each selected chunk individually, thus allowing to set several data sets for automatic processing following predefined workflow pattern.

In addition, multiple chunk project could be useful when it turns out to be hard or even impossible to generate a 3D model of the whole scene in one go. This could happen, for instance, if the total amount of photographs is too large to be processed at a time. To overcome this difficulty PhotoScan offers a possibility to split the set of photos into several separate chunks within the same project. Alignment of photos, building dense point cloud, building mesh, and forming texture atlas operations can be performed for each chunk separately and then resulting 3D models can be combined together.
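The "same workflow per chunk" idea behind Batch Processing can be sketched with a generic loop over chunk states. The step functions below are illustrative stand-ins for the workflow operations, not the actual PhotoScan Python API:

```python
def align(chunk):
    # Stand-in for "align photos" (illustrative, not the PhotoScan API).
    chunk["aligned"] = True

def build_dense_cloud(chunk):
    # Stand-in for "build dense cloud"; requires prior alignment.
    assert chunk.get("aligned"), "chunk must be aligned first"
    chunk["dense_cloud"] = True

def run_batch(chunks, steps):
    """Apply the same ordered steps to every selected chunk, the way
    Batch Processing applies common operations per chunk."""
    for chunk in chunks:
        for step in steps:
            step(chunk)
    return chunks

chunks = [{"name": "chunk 1"}, {"name": "chunk 2"}]
run_batch(chunks, [align, build_dense_cloud])
print(all(c["dense_cloud"] for c in chunks))  # True
```

The key property, mirrored here, is that the step list is defined once and each selected chunk runs through it independently, following the predefined workflow pattern.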

Working with chunks is not more difficult than using PhotoScan following the general workflow. In fact, in PhotoScan always exists at least one active chunk and all the 3D model processing workflow operations are applied to this chunk. To work with several chunks you need to know how to create chunks and how to combine resulting 3D models from separate chunks into one model. Creating a chunk To create new chunk click on the Add Chunk toolbar button on the Workspace pane or select Add Chunk command from the Workspace context menu available by right-clicking on the root element on the Workspace pane.

After the chunk is created you may load photos in it, align them, generate dense point cloud, generate mesh surface model, build texture atlas, export the models at any stage and so on. The models in the chunks are not linked with each other. The list of all the chunks created in the current project is displayed in the Workspace pane along with flags reflecting their status.

The following flags can appear next to the chunk name: R (Referenced) Will appear when two or more chunks are aligned with each other. To move photos from one chunk to another simply select them in the list of photos on the Workspace pane, and then drag and drop to the target chunk.

Working with chunks

All operations within the chunk are carried out following the common workflow: loading photographs, aligning them, generating dense point cloud, building mesh, building texture atlas, exporting 3D model and so on.

Note that all these operations are applied to the active chunk. When a new chunk is created it is activated automatically. Save project operation saves the content of all chunks. To save selected chunks as a separate project use Save Chunks command from the chunk context menu.

Aligning chunks After the „partial“ 3D models are built in several chunks they can be merged together. Before merging chunks they need to be aligned. In the Align Chunks dialog box select chunks to be aligned, indicate reference chunk with a double-click. Set desired alignment options. To cancel processing click the Cancel button.

Aligning chunks parameters The following parameters control the chunks alignment procedure and can be modified in the Align Chunks dialog box: Method Defines the chunks alignment method. Point based method aligns chunks by matching photos across all the chunks. Camera based method is used to align chunks based on estimated camera locations. Corresponding cameras should have the same label. Accuracy Point based alignment only Higher accuracy setting helps to obtain more accurate chunk alignment results.

Lower accuracy setting can be used to get the rough chunk alignment in the shorter time. Point limit Point based alignment only The number indicates upper limit of feature points on every image to be taken into account during Point based chunks alignment. Fix scale Option is to be enabled in case the scales of the models in different chunks were set precisely and should be left unchanged during chunks alignment process.

Preselect image pairs Point based alignment only The alignment process of many chunks may take a long time. A significant portion of this time is spent for matching of detected features across the photos. Image pair preselection option can speed up this process by selection of a subset of image pairs to be matched. Constrain features by mask Point based alignment only When this option is enabled, features detected in the masked image regions are discarded.

For additional information on the usage of masks refer to the Using masks section. Merging chunks After alignment is complete the separate chunks can be merged into a single chunk.

In the Merge Chunks dialog box select chunks to be merged and the desired merging options. PhotoScan will merge the separate chunks into one. The merged chunk will be displayed in the project content list on Workspace pane.

The following parameters control the chunks merging procedure and can be modified in the Merge Chunks dialog box: Merge dense clouds Defines if dense clouds from the selected chunks are combined. Terrestrial laser scanning TLS registration Simultaneous adjustment of both laser scanner and camera positions.

Capability to combine TLS and photogrammetric depth maps. Markers support and automatic targets detection for manual alignment of scanner data. Masking instruments to ignore unwanted objects in scanner data. Scale bar tool to set reference distance without implementation of positioning equipment.

Measurements: distances, areas, volumes Inbuilt tools to measure distances, areas and volumes. Stereoscopic measurements Professional 3D monitors and 3D controllers support for accurate and convenient stereoscopic vectorization of features and measurement purposes. Direct upload to various online resources and export to many popular formats. Photorealistic textures: HDR and multifile support incl.

UDIM layout. Hierarchical tiled model generation City scale modeling preserving the original image resolution for texturing.

The progress dialog box will appear displaying the current processing status. To cancel processing click Cancel button. Alignment having been completed, computed camera positions and a sparse point cloud will be displayed. You can inspect alignment results and remove incorrectly positioned photos, if any. To see the matches between any two photos use View Matches command from the photo context menu. Incorrectly positioned photos can be realigned.

Reset alignment for incorrectly positioned cameras using Reset Camera Alignment command from the photo context menu.

Set markers at least 4 per photo on these photos and indicate their projections on at least two photos from the already aligned subset.

PhotoScan will consider these points to be true matches. For information on markers placement refer to the Setting coordinate system section. Select photos to be realigned and use Align Selected Cameras command from the photo context menu. When the alignment step is completed, the point cloud and estimated camera positions can be exported for processing with another software if needed. Image quality Poor input, e. To help you to exclude poorly focused images from processing PhotoScan suggests automatic image quality estimation feature.

Images with quality value of less than 0.5 units are recommended to be disabled and thus excluded from photogrammetric processing. To disable a photo use Disable button from the Photos pane toolbar. PhotoScan estimates image quality for each input image. The value of the parameter is calculated based on the sharpness level of the most focused part of the picture. Switch to the detailed view in the Photos pane using the corresponding command on the Photos pane toolbar.

Select all photos to be analyzed on the Photos pane. Right button click on the selected photo(s) and choose Estimate Image Quality command from the context menu. Once the analysis procedure is over, a figure indicating estimated image quality value will be displayed in the Quality column on the Photos pane.

Alignment parameters

The following parameters control the photo alignment procedure and can be modified in the Align Photos dialog box: Accuracy Higher accuracy settings help to obtain more accurate camera position estimates.

Lower accuracy settings can be used to get the rough camera positions in a shorter period of time. While at High accuracy setting the software works with the photos of the original size, Medium setting causes image downscaling by factor of 4 (2 times by each side), at Low accuracy source files are downscaled by factor of 16, and Lowest value means further downscaling by 4 times more.

Highest accuracy setting upscales the image by factor of 4. Since tie point positions are estimated on the basis of feature spots found on the source images, it may be meaningful to upscale a source photo to accurately localize a tie point. However, Highest accuracy setting is recommended only for very sharp image data and mostly for research purposes due to the corresponding processing being quite time consuming.
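The downscaling factors described above can be summarized in a small helper. The mapping mirrors the text; the function and dictionary names are illustrative:

```python
# Downscale factor applied to the total pixel count at each accuracy
# setting, per the description above (Highest upscales by factor of 4).
ACCURACY_SCALE = {
    "Highest": 0.25,  # image upscaled by factor of 4
    "High": 1,        # original size
    "Medium": 4,      # 2 times by each side
    "Low": 16,        # 4 times by each side
    "Lowest": 64,     # 8 times by each side
}

def working_megapixels(source_mpix, accuracy):
    """Approximate image size (in megapixels) actually used for matching."""
    return source_mpix / ACCURACY_SCALE[accuracy]

print(working_megapixels(24, "Medium"))  # 6.0
print(working_megapixels(24, "Low"))     # 1.5
```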

Pair preselection The alignment process of large photo sets can take a long time. A significant portion of this time period is spent on matching of detected features across the photos. Image pair preselection option may speed up this process due to selection of a subset of image pairs to be matched.

In the Generic preselection mode the overlapping pairs of photos are selected by matching photos using lower accuracy setting first. In the Reference preselection mode the overlapping pairs of photos are selected based on the measured camera locations if present. For oblique imagery it is necessary to set Ground altitude value (average ground height in the same coordinate system which is set for camera coordinates data) in the Settings dialog of the Reference pane to make the preselection procedure work efficiently.

Ground altitude information must be accompanied with yaw, pitch, roll data for cameras. Yaw, pitch, roll data should be input in the Reference pane. Additionally the following advanced parameters can be adjusted. Key point limit The number indicates upper limit of feature points on every image to be taken into account during current processing stage.

Using zero value allows PhotoScan to find as many key points as possible, but it may result in a big number of less reliable points. Tie point limit The number indicates upper limit of matching points for every image. Using zero value doesn’t apply any tie point filtering.

Constrain features by mask When this option is enabled, masked areas are excluded from feature detection procedure. For additional information on the usage of masks please refer to the Using masks section.

Note Tie point limit parameter allows to optimize performance for the task and does not generally affect the quality of the further model. Too high or too low tie point limit value may cause some parts of the dense point cloud model to be missed. The reason is that PhotoScan generates depth maps only for pairs of photos with a sufficient number of matching points.

As a result the sparse point cloud will be thinned, yet the alignment will be kept unchanged. Point cloud generation based on imported camera data PhotoScan supports import of external and internal camera orientation parameters. Thus, if precise camera data is available for the project, it is possible to load them into PhotoScan along with the photos, to be used as initial information for 3D reconstruction job.

The data will be loaded into the software. Camera calibration data can be inspected in the Camera Calibration dialog, Adjusted tab, available from Tools menu. If the input file contains some reference data camera position data in some coordinate system , the data will be shown on the Reference pane, View Estimated tab. Once the data is loaded, PhotoScan will offer to build point cloud.

This step involves feature points detection and matching procedures. As a result, a sparse point cloud – 3D representation of the tie-points data, will be generated. Parameters controlling Build Point Cloud procedure are the same as the ones used at Align Photos step see above.

Building dense point cloud PhotoScan allows to generate and visualize a dense point cloud model. Based on the estimated camera positions the program calculates depth information for each camera to be combined into a single dense point cloud.

PhotoScan tends to produce extra dense point clouds, which are of almost the same density, if not denser, as LIDAR point clouds. A dense point cloud can be edited and classified within PhotoScan environment or exported to an external tool for further analysis.

Rotate the bounding box and then drag corners of the box to the desired positions. In the Build Dense Cloud dialog box select the desired reconstruction parameters. Click OK button when done. Reconstruction parameters Quality Specifies the desired reconstruction quality.

Higher quality settings can be used to obtain more detailed and accurate geometry, but they require longer time for processing. Interpretation of the quality parameters here is similar to that of accuracy settings given in Photo Alignment section.

The only difference is that in this case Ultra High quality setting means processing of original photos, while each following step implies preliminary image size downscaling by factor of 4 (2 times by each side). Depth Filtering modes At the stage of dense point cloud generation PhotoScan calculates depth maps for every image.

Due to some factors, like noisy or badly focused images, there can be some outliers among the points. To sort out the outliers PhotoScan has several built-in filtering algorithms that answer the challenges of different projects. If there are important small details which are spatially distinguished in the scene to be reconstructed, then it is recommended to set Mild depth filtering mode, for important features not to be sorted out as outliers. This value of the parameter may also be useful for aerial projects in case the area contains poorly textured roofs, for example.

If the area to be reconstructed does not contain meaningful small details, then it is reasonable to choose Aggressive depth filtering mode to sort out most of the outliers. This value of the parameter is normally recommended for aerial data processing, however, mild filtering may be useful in some projects as well (see the poorly textured roofs comment in the Mild parameter value description above).

Moderate depth filtering mode brings results that are in between the Mild and Aggressive approaches. You can experiment with the setting in case you have doubts which mode to choose. Additionally depth filtering can be Disabled, but this option is not recommended as the resulting dense cloud could be extremely noisy.

Check the reconstruction volume bounding box. If the model has already been referenced, the bounding box will be properly positioned automatically.

Otherwise, it is important to control its position manually. To adjust the bounding box manually, use the Resize Region and Rotate Region toolbar buttons. Rotate the bounding box and then drag corners of the box to the desired positions – only part of the scene inside the bounding box will be reconstructed.

If the Height field reconstruction method is to be applied, it is important to control the position of the red side of the bounding box: it defines reconstruction plane. In this case make sure that the bounding box is correctly oriented. In the Build Mesh dialog box select the desired reconstruction parameters. Reconstruction parameters PhotoScan supports several reconstruction methods and settings, which help to produce optimal reconstructions for a given data set.

Surface type Arbitrary surface type can be used for modeling of any kind of object. It should be selected for closed objects, such as statues, buildings, etc.

It doesn’t make any assumptions on the type of the object being modeled, which comes at a cost of higher memory consumption.

Height field surface type is optimized for modeling of planar surfaces, such as terrains or basereliefs. It should be selected for aerial photography processing as it requires lower amount of memory and allows for larger data sets processing. Source data Specifies the source for the mesh generation procedure. Sparse cloud can be used for fast 3D model generation based solely on the sparse point cloud. Dense cloud setting will result in longer processing time but will generate high quality output based on the previously reconstructed dense point cloud.

Polygon count Specifies the maximum number of polygons in the final mesh. The suggested values present the optimal number of polygons for a mesh of the corresponding level of detail. It is still possible for a user to indicate the target number of polygons in the final mesh according to their choice. It could be done through the Custom value of the Polygon count parameter. Please note that while too small a number of polygons is likely to result in too rough a mesh, a custom value that is too large (over 10 million polygons) is likely to cause model visualization problems in external software.

Interpolation If interpolation mode is Disabled, the reconstruction is accurate since only areas corresponding to dense point cloud points are reconstructed. Manual hole filling is usually required at the post-processing step. With the Enabled (default) interpolation mode PhotoScan will interpolate some surface areas within a circle of a certain radius around every dense cloud point. As a result some holes can be covered automatically. Yet some holes can still be present on the model and are to be filled at the post-processing step.

In Extrapolated mode the program generates holeless model with extrapolated geometry. Large areas of extra geometry might be generated with this method, but they could be easily removed later using selection and cropping tools. Point classes Specifies the classes of the dense point cloud to be used for mesh generation.

Preliminary Classifying dense cloud points procedure should be performed for this option of mesh generation to be active. Note PhotoScan tends to produce 3D models with excessive geometry resolution, so it is recommended to perform mesh decimation after geometry computation. More information on mesh decimation and other 3D model geometry editing tools is given in the Editing model geometry section.

Select the desired texture generation parameters in the Build Texture dialog box. Texture mapping modes The texture mapping mode determines how the object texture will be packed in the texture atlas. Proper texture mapping mode selection helps to obtain optimal texture packing and, consequently, better visual quality of the final model. Generic The default mode is the Generic mapping mode; it allows parametrization of the texture atlas for arbitrary geometry.

No assumptions regarding the type of the scene to be processed are made; the program tries to create as uniform a texture as possible.

Adaptive orthophoto In the Adaptive orthophoto mapping mode the object surface is split into the flat part and vertical regions. The flat part of the surface is textured using the orthographic projection, while vertical regions are textured separately to maintain accurate texture representation in such regions. When in the Adaptive orthophoto mapping mode, program tends to produce more compact texture representation for nearly planar scenes, while maintaining good texture quality for vertical surfaces, such as walls of the buildings.

Orthophoto In the Orthophoto mapping mode the whole object surface is textured in the orthographic projection. The Orthophoto mapping mode produces even more compact texture representation than the Adaptive orthophoto mode at the expense of texture quality in vertical regions. Spherical Spherical mapping mode is appropriate only to a certain class of objects that have a ball-like form. It allows for continuous texture atlas being exported for this type of objects, so that it is much easier to edit it later.

When generating texture in Spherical mapping mode it is crucial to set the Bounding box properly. The whole model should be within the Bounding box. The red side of the Bounding box should be under the model; it defines the axis of the spherical projection. The marks on the front side determine the 0 meridian.

Single photo The Single photo mapping mode allows to generate texture from a single photo. The photo to be used for texturing can be selected from the 'Texture from' list.

Keep uv The Keep uv mapping mode generates texture atlas using current texture parametrization. It can be used to rebuild texture atlas using different resolution or to generate the atlas for the model parametrized in the external software.

Texture generation parameters The following parameters control various aspects of texture atlas generation: Texture from (Single photo mapping mode only) Specifies the photo to be used for texturing. Available only in the Single photo mapping mode. Blending mode (not used in Single photo mode) Selects how pixel values from different photos will be combined in the final texture.

Mosaic – implies a two-step approach: it blends the low frequency component for overlapping images to avoid the seamline problem (a weighted average, the weight being dependent on a number of parameters, including proximity of the pixel in question to the center of the image), while the high frequency component, which is in charge of picture details, is taken from a single image – the one that presents good resolution for the area of interest, while the camera view is almost along the normal to the reconstructed surface in that point.

Average – uses the weighted average value of all pixels from individual photos, the weight being dependent on the same parameters that are considered for the high frequency component in mosaic mode. Max Intensity – the photo which has maximum intensity of the corresponding pixel is selected. Min Intensity – the photo which has minimum intensity of the corresponding pixel is selected.

Disabled – the photo to take the color value for the pixel from is chosen like the one for the high frequency component in mosaic mode. Exporting texture to several files allows achieving greater resolution of the final model texture, while export of high resolution texture to a single file can fail due to RAM limitations.
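The per-pixel selection logic of the blending modes described above can be sketched in a few lines of Python. This is a toy illustration only: the values and weights below are invented, and the weights merely stand in for PhotoScan's internal criteria (proximity to the image center, viewing angle).

```python
# Toy sketch of the per-pixel blending modes described above.  Each
# overlapping photo contributes a (value, weight) pair for one output
# pixel; the weights are invented stand-ins for PhotoScan's internal
# weighting criteria.

def blend_pixel(candidates, mode):
    """candidates: list of (value, weight) pairs from overlapping photos."""
    if mode == "average":
        total_w = sum(w for _, w in candidates)
        return sum(v * w for v, w in candidates) / total_w
    if mode == "max_intensity":
        return max(v for v, _ in candidates)
    if mode == "min_intensity":
        return min(v for v, _ in candidates)
    if mode == "disabled":
        # the single best-weighted photo provides the value
        return max(candidates, key=lambda c: c[1])[0]
    raise ValueError(mode)

pixels = [(120, 0.25), (130, 0.5), (90, 0.25)]
print(blend_pixel(pixels, "average"))        # 117.5 (weighted mean)
print(blend_pixel(pixels, "max_intensity"))  # 130
print(blend_pixel(pixels, "disabled"))       # 130 (largest weight)
```

The mosaic mode is not reproduced here, since it additionally splits the image into frequency components before blending.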

Enable color correction The feature is useful for processing of data sets with extreme brightness variation. However, please note that color correction process takes up quite a long time, so it is recommended to enable the setting only for the data sets that proved to present results of poor quality. Improving texture quality To improve resulting texture quality it may be reasonable to exclude poorly focused images from processing at this step.
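The idea of spotting poorly focused images can be illustrated with a toy sharpness score: rate each image by its mean absolute horizontal gradient and normalize by the best score in the set. The pixel grids below are invented, and the actual estimator in PhotoScan is more sophisticated; this only mirrors the notion of sharpness measured relative to the other images.

```python
# Toy ranking of photos by relative sharpness: score each image by the
# mean absolute horizontal gradient, then normalize by the best score
# in the set.  The pixel grids are invented; PhotoScan's real quality
# estimator is more sophisticated.
def sharpness(img):
    """img: 2D list of grayscale values."""
    total, n = 0, 0
    for row in img:
        for a, b in zip(row, row[1:]):
            total += abs(b - a)
            n += 1
    return total / n

sharp = [[0, 255, 0, 255], [255, 0, 255, 0]]
blurry = [[10, 12, 11, 13], [12, 11, 13, 12]]
scores = [sharpness(i) for i in (sharp, blurry)]
best = max(scores)
print([round(s / best, 3) for s in scores])  # [1.0, 0.006]
```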

PhotoScan offers an automatic image quality estimation feature. PhotoScan estimates image quality as a relative sharpness of the photo with respect to other images in the data set. Building tiled model Hierarchical tiles format is a good solution for city scale modeling.

It allows for responsive visualisation of large area 3D models in high resolution; a tiled model can be opened with Agisoft Viewer – a complementary tool included in the PhotoScan installer package. The tiled model is built based on dense point cloud data.

Hierarchical tiles are textured from the source imagery. Check the reconstruction volume bounding box – tiled model will be generated for the area within bounding box only. To adjust the bounding box use the Resize Region and Rotate Region toolbar buttons. In the Build Tiled model dialog box select the desired reconstruction parameters.

Reconstruction parameters Pixel size (m) The suggested value shows the pixel size automatically estimated from the effective resolution of the input imagery. It can be set by the user in meters. Tile size Tile size can be set in pixels. For smaller tiles faster visualisation should be expected.

Building digital elevation model PhotoScan allows to generate and visualize a digital elevation model (DEM). A DEM represents a surface model as a regular grid of height values. DEM can be rasterized from a dense point cloud, a sparse point cloud or a mesh. Most accurate results are calculated based on dense point cloud data. PhotoScan enables to perform DEM-based point, distance, area, volume measurements as well as generate cross-sections for a part of the scene selected by the user.
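The idea of a DEM as a regular grid of height values can be sketched by binning points into cells and averaging their heights. This is a minimal sketch with invented points; real DEM generation also interpolates empty cells and honors point classes.

```python
# Minimal sketch of rasterizing points into a regular grid of height
# values: each cell stores the mean height of the points falling into
# it.  Real DEM generation also interpolates empty cells and honors
# point classes; the points below are invented.
from collections import defaultdict

def rasterize(points, x0, y0, cell, nx, ny):
    """points: iterable of (x, y, z). Returns {(col, row): mean height}."""
    acc = defaultdict(lambda: [0.0, 0])
    for x, y, z in points:
        col, row = int((x - x0) // cell), int((y - y0) // cell)
        if 0 <= col < nx and 0 <= row < ny:
            acc[(col, row)][0] += z
            acc[(col, row)][1] += 1
    return {cr: s / n for cr, (s, n) in acc.items()}

pts = [(0.2, 0.3, 10.0), (0.8, 0.1, 12.0), (1.5, 0.5, 20.0)]
print(rasterize(pts, 0.0, 0.0, 1.0, 2, 2))  # {(0, 0): 11.0, (1, 0): 20.0}
```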

Additionally, contour lines can be calculated for the model and depicted either over DEM or Orthomosaic in Ortho view within PhotoScan environment.

More information on measurement functionality can be found in the Performing measurements on DEM section. Note Build DEM procedure can be performed only for projects saved in .PSX format. DEM can be calculated for referenced models only, so make sure that you have set a coordinate system for your model before running the build DEM operation. For guidance please refer to the Setting coordinate system section. DEM is calculated for the part of the model within the bounding box.

Preliminary elevation data results can be generated from a sparse point cloud, avoiding Build Dense Cloud step for time limitation reasons.

With the Enabled (default) interpolation mode PhotoScan will calculate DEM for all areas of the scene that are visible on at least one image. The Enabled (default) setting is recommended for DEM generation. In Extrapolated mode the program generates a holeless model with some elevation data being extrapolated.

Point classes The parameter allows to select a point class (classes) that will be used for DEM calculation. To generate a digital terrain model (DTM), it is necessary to classify dense cloud points first in order to divide them into at least two classes: ground points and the rest. Please refer to the Classifying dense cloud points section.

Indicate coordinates of the bottom left and top right corners of the region to be exported in the left and right columns of the textboxes respectively. Suggested values indicate coordinates of the bottom left and top right corners of the whole area to be rasterized, the area being defined with the bounding box.
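The resulting raster dimensions follow directly from the export region extents and the chosen ground resolution: extent divided by pixel size, rounded up. A sketch with invented numbers:

```python
# Sketch of how the output raster dimensions follow from the export
# region extents and the chosen ground resolution: extent divided by
# pixel size, rounded up.  The region and pixel size are invented.
import math

def raster_size(x_min, y_min, x_max, y_max, pixel_size):
    width = math.ceil((x_max - x_min) / pixel_size)
    height = math.ceil((y_max - y_min) / pixel_size)
    return width, height

w, h = raster_size(0.0, 0.0, 250.0, 100.0, 0.25)  # 25 cm/pix resolution
print(w, h, w * h)  # 1000 400 400000 (total size in pixels)
```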

Resolution value shows effective ground resolution for the DEM estimated for the source data. Size of the resulting DEM, calculated with respect to the ground resolution, is presented in Total size textbox. Building orthomosaic Orthomosaic export is normally used for generation of high resolution imagery based on the source photos and reconstructed model.

The most common application is aerial photographic survey data processing, but it may be also useful when a detailed view of the object is required. PhotoScan enables to perform orthomosaic seamline editing for better visual results see Orthomosaic seamlines editing section of the manual.

For multispectral imagery processing workflow Ortho view tab presents Raster Calculator tool for NDVI and other vegetation indices calculation to analyze crop problems and generate prescriptions for variable rate farming equipment.

More information on NDVI calculation functionality can be found in Performing measurements on mesh section. PhotoScan allows to project the orthomosaic onto a plane set by the user, providing that mesh is selected as a surface type.

To generate orthomosaic in a planar projection choose Planar Projection Type in Build Orthomosaic dialog. You can select projection plane and orientation of the orthomosaic. PhotoScan provides an option to project the model to a plane determined by a set of markers (if there are not 3 markers in the desired projection plane, it can be specified with 2 vectors instead).

Planar projection type may be useful for orthomosaic generation in projects concerning facades or surfaces that are not described with Z X,Y function. To generate an orthomosaic in planar projection, preliminary generation of mesh data is required.

Parameters Surface Orthomosaic creation based on DEM data is especially efficient for aerial survey data processing scenarios, allowing for time saving on the mesh generation step. Alternatively, the mesh surface type allows orthomosaic generation for surfaces that cannot be described as a Z(X,Y) function, such as building facades. Blending mode Mosaic (default) – implements an approach with data division into several frequency domains which are blended independently.

The highest frequency component is blended along the seamline only, each further step away from the seamline resulting in a smaller number of domains being subject to blending.

Average – uses the weighted average value of all pixels from individual photos. Disabled – the color value for the pixel is taken from the photo with the camera view being almost along the normal to the reconstructed surface in that point. Enable color correction Color correction feature is useful for processing of data sets with extreme brightness variation.

However, please note that color correction process takes up quite a long time, so it is recommended to enable the setting only for the data sets that proved to present results of poor quality before. Pixel size Default value for pixel size in Export Orthomosaic dialog refers to ground sampling resolution, thus, it is useless to set a smaller value: the number of pixels would increase, but the effective resolution would not. However, if it is meaningful for the purpose, pixel size value can be changed by the user.

PhotoScan generates orthomosaic for the whole area, where surface data is available. Bounding box limitations are not applied. To build orthomosaic for a particular rectangular part of the project use Region section of the Build Orthomosaic dialog. Estimate button allows you to see the coordinates of the bottom left and top right corners of the whole area.

Estimate button enables to control the total size of the resulting orthomosaic data for the currently selected reconstruction area (all available data by default, or a certain region set with the Region parameter) and resolution (Pixel size or Max. dimension). The information is shown in the Total size (pix) textbox. Saving intermediate results Certain stages of 3D model reconstruction can take a long time. The full chain of operations could eventually last for hours when building a model from hundreds of photos. It is not always possible to complete all the operations in one run.

PhotoScan allows to save intermediate results in a project file. A project file can contain:
- Photo alignment data, such as information on camera positions, the sparse point cloud model and the set of refined camera calibration parameters for each calibration group.
- Masks applied to the photos in the project.
- Depth maps for cameras.
- Dense point cloud model with information on point classification.
- Reconstructed 3D polygonal model with any changes made by the user, including mesh and texture if they were built.
- List of added markers as well as of scale bars, and information on their positions.
- Structure of the project, i.e. the chunks and their content.
Note that since PhotoScan tends to generate extra dense point clouds and highly detailed polygonal models, the project saving procedure can take up quite a long time.

You can decrease compression level to speed up the saving process. However, please note that it will result in a larger project file. Compression level setting can be found on the Advanced tab of the Preferences dialog available from Tools menu. This format enables responsive loading of large data (dense point clouds, meshes, etc.).

You can save the project at the end of any processing stage and return to it later. To restart work simply load the corresponding file into PhotoScan. Project files can also serve as backup files or be used to save different versions of the same model. Project files use relative paths to reference original photos. Thus, when moving or copying the project file to another location do not forget to move or copy photographs with all the folder structure involved as well.

Otherwise, PhotoScan will fail to run any operation requiring source images, although the project file including the reconstructed model will be loaded up correctly. Alternatively, you can enable Store absolute image paths option on the Advanced tab of the Preferences dialog available from Tools menu.
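Why moving a project file without its photo folders breaks source-image operations can be seen from how a stored relative path resolves against the project file's directory. A sketch with hypothetical paths:

```python
# Sketch of why a moved project loses its photos: the stored relative
# path is resolved against the project file's directory.  All paths
# here are hypothetical.
import posixpath

def resolve_photo(project_file, stored_path):
    """Resolve a photo path stored relative to the project file."""
    return posixpath.normpath(
        posixpath.join(posixpath.dirname(project_file), stored_path))

p = resolve_photo("/data/site_a/survey.psx", "../photos/IMG_0001.JPG")
print(p)  # /data/photos/IMG_0001.JPG
```

If the project file alone is moved to a new directory, the same stored path now resolves to a location where the photo does not exist.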

Exporting results PhotoScan supports export of processing results in various representations: sparse and dense point clouds, camera calibration and camera orientation data, mesh, etc.

Orthomosaics and digital elevation models (both DSM and DTM), as well as tiled models, can be generated according to the user requirements. Point cloud and camera calibration data can be exported right after photo alignment is completed. All other export options are available after the corresponding processing step. To align the model orientation with the default coordinate system use Rotate object button from the Toolbar.

In some cases editing model geometry in the external software may be required. PhotoScan supports model export for editing in external software and then allows to import it back as it is described in the Editing model geometry section of the manual.

Professional Edition, Version 1.2. Based on the latest multi-view 3D reconstruction technology, PhotoScan operates with arbitrary images and is efficient in both controlled and uncontrolled conditions.

Photos can be taken from any position, providing that the object to be reconstructed is visible on at least two photos. Both image alignment and 3D model reconstruction are fully automated. How it works Generally the final goal of photographs processing with PhotoScan is to build a textured 3D model.

The procedure of photographs processing and 3D model construction comprises four main stages. The first stage is camera alignment. At this stage PhotoScan searches for common points on photographs and matches them, as well as it finds the position of the camera for each picture and refines camera calibration parameters. As a result a sparse point cloud and a set of camera positions are formed. The sparse point cloud represents the results of photo alignment and will not be directly used in the further 3D model construction procedure except for the sparse point cloud based reconstruction method.

However it can be exported for further usage in external programs. For instance, the sparse point cloud model can be used in a 3D editor as a reference. On the contrary, the set of camera positions is required for further 3D model reconstruction by PhotoScan. The next stage is building dense point cloud. Based on the estimated camera positions and pictures themselves a dense point cloud is built by PhotoScan. Dense point cloud may be edited and classified prior to export or proceeding to 3D mesh model generation.

The third stage is building mesh. PhotoScan reconstructs a 3D polygonal mesh representing the object surface based on the dense or sparse point cloud according to the user’s choice. Generally there are two algorithmic methods available in PhotoScan that can be applied to 3D mesh generation: Height Field – for planar type surfaces, Arbitrary – for any kind of object.

Once the mesh has been built, it may be necessary to edit it. Some corrections, such as mesh decimation, removal of detached components, closing of holes in the mesh, smoothing, etc. For more complex editing you have to engage external 3D editor tools. PhotoScan allows to export mesh, edit it in another software and import it back.

After geometry (i.e. mesh) is reconstructed, it can be textured. Several texturing modes are available in PhotoScan; they are described in the corresponding section of this manual, as well as orthomosaic and DEM generation procedures. Camera calibration issues are discussed in Chapter 4, Referencing, that also describes functionality to optimize camera alignment results and provides guidance on model referencing. A referenced model, be it a mesh or a DEM, serves as a ground for measurements.

Area, volume, profile measurement procedures are tackled in Chapter 5, Measurements, which also includes information on vegetation indices calculations. While Chapter 7, Automation describes opportunities to save up on manual intervention to the processing workflow, Chapter 8, Network processing presents guidelines on how to organize distributed processing of the imagery data on several nodes.

It can take up quite a long time to reconstruct a 3D model. PhotoScan allows to export obtained results and save intermediate data in a form of project files at any stage of the process. If you are not familiar with the concept of projects, its brief description is given at the end of Chapter 3, General workflow. In the manual you can also find instructions on the PhotoScan installation procedure and basic rules for taking "good" photographs, i.e. pictures that provide the most appropriate data for 3D model generation.

For the information refer to Chapter 1, Installation and Chapter 2, Capturing photos. Chapter 1. Installation The number of photos that can be processed by PhotoScan depends on the available RAM and reconstruction parameters used. PhotoScan is likely to be able to utilize processing power of any OpenCL enabled device during Dense Point Cloud generation stage, provided that OpenCL drivers for the device are properly installed. However, because of the large number of various combinations of video chips, driver versions and operating systems, Agisoft is unable to test and guarantee PhotoScan's compatibility with every device and on every platform.

The table below lists currently supported devices on Windows platform only. We will pay particular attention to possible problems with PhotoScan running on these devices. Although PhotoScan is supposed to be able to utilize other GPU models and being run under a different operating system, Agisoft does not guarantee that it will work correctly.

Start PhotoScan by running photoscan. To use PhotoScan in the full function mode you have to purchase it. On purchasing you will get the serial number to enter into the registration box on starting PhotoScan.

Once the serial number is entered the registration box will not appear again and you will get full access to all functions of the program.

Chapter 2. Capturing photos Before loading your photographs into PhotoScan you need to take them and select those suitable for 3D model reconstruction. Photographs can be taken by any digital camera both metric and non-metric , as long as you follow some specific capturing guidelines. This section explains general principles of taking and selecting pictures that provide the most appropriate data for 3D model generation. Make sure you have studied the following rules and read the list of restrictions before you get out for shooting photographs.

The best choice is 50 mm focal length 35 mm film equivalent lenses. It is recommended to use focal length from 20 to 80 mm interval in 35mm equivalent. If a data set was captured with fisheye lens, appropriate camera sensor type should be selected in PhotoScan Camera Calibration dialog prior to processing.
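The 35 mm-equivalent focal length mentioned above can be computed from the real focal length and the sensor width, using the common convention of scaling by the 36 mm full-frame width. The sensor width below (roughly APS-C) is illustrative only.

```python
# 35 mm-film equivalent focal length, using the common convention of
# scaling by the 36 mm full-frame width.  The sensor width below
# (roughly APS-C) is illustrative only.
def focal_35mm_equiv(focal_mm, sensor_width_mm):
    return focal_mm * 36.0 / sensor_width_mm

print(focal_35mm_equiv(18.0, 24.0))  # 27.0 -> inside the 20-80 mm range
```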

If zoom lenses are used – focal length should be set either to maximal or to minimal value during the entire shooting session for more stable results. Capturing scenarios Generally, spending some time planning your shot session might be very useful. In some cases portrait camera orientation should be used. It is recommended to remove sources of light from camera fields of view.

Avoid using flash. Alternatively, you could place a ruler within the shooting area. [Figures: interior capture scenarios – incorrect vs. correct.] Restrictions In some cases it might be very difficult or even impossible to build a correct 3D model from a set of pictures.

A short list of typical reasons for photographs unsuitability is given below. Modifications of photographs PhotoScan can process only unmodified photos as they were taken by a digital photo camera.

Processing the photos which were manually cropped or geometrically warped is likely to fail or to produce highly inaccurate results. Photometric modifications do not affect reconstruction results. Lens distortion The distortion of the lenses used to capture the photos should be well simulated with the Brown’s distortion model. Otherwise it is most unlikely that processing results will be accurate. Fisheye and ultra-wide angle lenses are poorly modeled by the common distortion model implemented in PhotoScan software, so it is required to choose proper camera type in Camera Calibration dialog prior to processing.

Chapter 3. General workflow Processing of images with PhotoScan includes the following main steps. If you are using PhotoScan in the full function (not the Demo) mode, intermediate results of the image processing can be saved at any stage in the form of project files and can be used later.

The concept of projects and project files is briefly explained in the Saving intermediate results section. The list above represents all the necessary steps involved in the construction of a textured 3D model, DEM and orthomosaic from your photos.

Some additional tools, which you may find to be useful, are described in the successive chapters. Preferences settings Before starting a project with PhotoScan it is recommended to adjust the program settings for your needs. In the Preferences dialog (General tab, available through the Tools menu) you can indicate the path to the PhotoScan log file to be shared with the Agisoft support team in case you face any problem during the processing.

Here you can also change GUI language to the one that is most convenient for you. PhotoScan exploits GPU processing power that speeds up the process significantly. If you have decided to switch on GPUs for photogrammetric data processing with PhotoScan, it is recommended to free one physical CPU core per each active GPU for overall control and resource managing tasks. In the Add Photos dialog box browse to the folder containing the images and select files to be processed.

Then click Open button. Photos in any other format will not be shown in the Add Photos dialog box. To work with such photos you will need to convert them in one of the supported formats.
To remove unwanted photos
1. On the Workspace pane select the photos to be removed.
2. Right-click on the selected photos and choose Remove Items command from the opened context menu, or click Remove Items toolbar button on the Workspace pane. The selected photos will be removed from the working set.

Camera groups If all the photos or a subset of photos were captured from one camera position – camera station, for PhotoScan to process them correctly it is obligatory to move those photos to a camera group and mark the group as Camera Station. It is important that for all the photos in a Camera Station group distances between camera centers were negligibly small compared to the camera-object minimal distance. However, it is possible to export panoramic picture for the data captured from only one camera station.

Refer to Exporting results section for guidance on panorama export.
Browse to the file containing recorded reference coordinates and click Open button. In the Import CSV dialog set the coordinate system if the data presents geographical coordinates. Select the delimiter and indicate the number of the data column for each coordinate.

Indicate columns for the orientation data if present. Information on the accuracy of the source coordinates (x, y, z) can be loaded with a CSV file as well. Check Load Accuracy option and indicate the number of the column where the accuracy for the data should be read from. The same figure will be treated as accuracy information for all three coordinates. To assign reference coordinates manually:

Additionally, it is possible to indicate accuracy data for the coordinates. Select the Set Accuracy... command from the context menu. It is possible to select several cameras and apply the Set Accuracy... command to them simultaneously. Alternatively, you can select the Accuracy (m) or Accuracy (deg) text box for a certain camera on the Reference pane and press F2 button on the keyboard to type the data directly onto the Reference pane.

The reference coordinates data will be loaded into the Reference pane. After reference coordinates have been assigned PhotoScan automatically estimates coordinates in a local Euclidean system and calculates the referencing errors.

The largest error will be highlighted. To set a georeferenced coordinate system 1. Assign reference coordinates using one of the options described above. In the Reference Settings dialog box select the Coordinate System used to compile reference coordinates data if it has not been set at the previous step. Rotation angles in PhotoScan are defined around the following axes: yaw axis runs from top to bottom, pitch axis runs from left to right wing of the drone, roll axis runs from tail to nose of the drone.

Zero values of the rotation angle triple define the following camera position aboard: camera looks down to the ground, frames are taken in landscape orientation, and horizontal axis of the frame is perpendicular to the central tail-nose axis of the drone. If the camera is fixed in a different position, respective yaw, pitch, roll values should be input in the camera correction section of the Settings dialog. The senses of the angles are defined according to the right-hand rule. A click on the column name on the Reference pane sorts the markers and photos by the data in the column.
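How the three angles combine under the right-hand rule can be sketched as a single rotation matrix. The generic Z-Y-X composition below is for illustration only; PhotoScan's exact axis convention is the one described in the text.

```python
# Sketch of how yaw, pitch and roll combine into a single rotation
# matrix (angles in degrees, right-hand rule).  The generic Z-Y-X
# composition below is for illustration; PhotoScan's exact axis
# convention is the one described in the text.
import math

def rot(yaw, pitch, roll):
    cy, sy = math.cos(math.radians(yaw)), math.sin(math.radians(yaw))
    cp, sp = math.cos(math.radians(pitch)), math.sin(math.radians(pitch))
    cr, sr = math.cos(math.radians(roll)), math.sin(math.radians(roll))
    # R = Rz(yaw) * Ry(pitch) * Rx(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp, cp * sr, cp * cr],
    ]

R = rot(90.0, 0.0, 0.0)  # a pure 90 degree yaw
print([round(v, 6) for v in R[0]])  # [0.0, -1.0, 0.0]
```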

At this point you can review the errors and decide whether additional refinement of marker locations is required in case of marker based referencing , or if certain reference points should be excluded. To reset a chunk georeferencing use Reset Transform command from the chunk context menu on the Workspace pane. It should be updated manually using Update toolbar button on the Reference pane.

Each reference point is specified in this file on a separate line: the image file name followed by its coordinates. Individual entries on each line should be separated with a tab, space, semicolon, comma, etc. character. All lines starting with the # character are treated as comments. Using different vertical datums By default PhotoScan requires all the source altitude values for both cameras and markers to be input as values measured above the ellipsoid.
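A toy parser for a reference coordinates file of the kind described above can make the format concrete. The file contents below (labels and coordinates) are invented, and this sketch handles only a label plus three coordinates per line.

```python
# Toy parser for a reference coordinates file: one point per line
# (label followed by x, y, z), '#' lines treated as comments, entries
# split on tab/semicolon/comma.  The file contents are invented.
import re

def parse_reference(text):
    records = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        label, *coords = re.split(r"[\t;,]+", line)
        records[label] = tuple(float(c) for c in coords[:3])
    return records

sample = """# label;x;y;z
IMG_0001.JPG;25.805;48.123;155.5
IMG_0002.JPG;25.806;48.124;155.7
"""
print(parse_reference(sample)["IMG_0001.JPG"])  # (25.805, 48.123, 155.5)
```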

However, PhotoScan allows for the different geoid models utilization as well. PhotoScan installation package includes only EGM96 geoid model, but additional geoid models can be downloaded from Agisoft’s website if they are required by the coordinate system selected in the Reference pane settings dialog; alternatively, a geoid model can be loaded from a custom PRJ file. Optimization Optimization of camera alignment PhotoScan estimates internal and external camera orientation parameters during photo alignment.

This estimation is performed using image data alone, and there may be some errors in the final estimates. The accuracy of the final estimates depends on many factors, like overlap between the neighboring photos, as well as on the shape of the object surface.

These errors can lead to non-linear deformations of the final model. During georeferencing the model is linearly transformed using 7 parameter similarity transformation 3 parameters for translation, 3 for rotation and 1 for scaling.

Such transformation can compensate only a linear model misalignment. The non-linear component can not be removed with this approach. This is usually the main reason for georeferencing errors. Possible non-linear deformations of the model can be removed by optimizing the estimated point cloud and camera parameters based on the known reference coordinates.

During this optimization PhotoScan adjusts estimated point coordinates and camera parameters minimizing the sum of reprojection error and reference coordinate misalignment error. To achieve greater optimizing results it may be useful to edit sparse point cloud deleting obviously mislocated points beforehand. Georeferencing accuracy can be improved significantly after optimization. It is recommended to perform optimization if the final model is to be used for any kind of measurements.
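The 7-parameter similarity transformation mentioned above has the form x' = s·R·x + t. A minimal sketch follows; for brevity the rotation here is about the Z axis only (a full version composes all three rotations), and the numbers are invented.

```python
# Sketch of the 7-parameter similarity transformation (3 translations,
# 3 rotations, 1 scale): x' = s * R * x + t.  For brevity the rotation
# here is about the Z axis only; the numbers are invented.
import math

def similarity(point, scale, rz_deg, t):
    c, s = math.cos(math.radians(rz_deg)), math.sin(math.radians(rz_deg))
    x, y, z = point
    xr, yr = c * x - s * y, s * x + c * y
    tx, ty, tz = t
    return (scale * xr + tx, scale * yr + ty, scale * z + tz)

p = similarity((1.0, 0.0, 2.0), 2.0, 90.0, (10.0, 0.0, 0.0))
print([round(v, 6) for v in p])  # [10.0, 2.0, 4.0]
```

Being linear, such a transform cannot remove the non-linear deformations discussed above, which is why the separate optimization step exists.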

1. Click the Optimize toolbar button. 2. In the Optimize Camera Alignment dialog box check additional camera parameters to be optimized if needed. 3. Click OK button to start optimization. You will have to rebuild the model geometry after optimization. Image coordinates accuracy for markers indicates how precisely the markers were placed by the user or adjusted by the user after being automatically placed by the program. The Ground altitude parameter is used to make the reference preselection mode of the alignment procedure work effectively for oblique imagery.

See Aligning photos for details. Camera, marker and scale bar accuracy can be set per item: accuracy values can be typed in on the pane for an individual item or for a group of selected items. Generally it is reasonable to run the optimization procedure based on marker data only, since GCP coordinates are measured with significantly higher accuracy than the GPS data that indicates camera positions.

Thus, marker data is likely to give more precise optimization results. Moreover, quite often GCP and camera coordinates are measured in different coordinate systems, which also prevents using both camera and marker data in optimization simultaneously.

The results of the optimization procedure can be evaluated with the help of the error information on the Reference pane. In addition, the distortion plot can be inspected along with the mean residuals visualised per calibration group.

Scale bar based optimization

Scale bars can prove to be useful when there is no way to locate ground control points all over the scene. Scale bars save field work time, since it is significantly easier to place several scale bars of precisely known length than to measure the coordinates of a few markers using special equipment. In addition, PhotoScan allows scale bar instances to be placed between cameras, thus making it possible to avoid not only marker but also ruler placement within the scene.
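The effect of a known-length scale bar can be sketched as a simple scale correction. This is an illustrative standalone example, not PhotoScan code; function names are hypothetical:

```python
def scale_from_bar(known_length_m, estimated_length):
    """Scale factor that brings the model into metric units: the
    ratio of the known scale-bar length to the distance between its
    two markers as currently estimated in model space."""
    return known_length_m / estimated_length

def bar_error(known_length_m, estimated_length_m, scale=1.0):
    """Residual reported per scale bar after scaling (metres)."""
    return estimated_length_m * scale - known_length_m

s = scale_from_bar(2.0, 0.5)      # a 2 m bar measures 0.5 model units
e = bar_error(2.0, 0.5, scale=s)  # residual after applying the scale
```

With several bars, the optimizer balances their residuals instead of satisfying one bar exactly, which is why non-zero per-bar errors remain on the Reference pane.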

Surely, scale bar based information will not be enough to set a coordinate system; however, the information can be successfully used while optimizing the results of photo alignment. It will also be enough to perform measurements in PhotoScan software. See Performing measurements on model.
To add a scale bar:
1. Place markers at the start and end points of the bar.

For information on marker placement please refer to the Setting coordinate system section of the manual.
2. Select the Create Scale Bar command from the Model view context menu. The scale bar will be created and an instance added to the Scale Bar list on the Reference pane.

3. Double click on the Distance (m) box next to the newly created scale bar name and enter the known length of the bar in meters.
To add a scale bar between cameras:
1. Select the two cameras on the Workspace or Reference pane using the Ctrl button. Alternatively, the cameras can be selected in the Model view window using the selection tools from the Toolbar.
2. Select the Create Scale Bar command from the context menu.
To run scale bar based optimization:
1. On the Reference pane, check all scale bars to be used in the optimization procedure.

2. Click the Settings toolbar button on the Reference pane.
To delete a scale bar:
1. Select the scale bar to be deleted on the Reference pane.
What do the errors in the Reference pane mean?
Cameras section:
1. Error (m) – distance between the input (source) and estimated positions of the camera.

2. Error (pix) – root mean square reprojection error calculated over all feature points detected on the photo.
Markers section:
1. Error (m) – distance between the input (source) and estimated positions of the marker.

2. Error (pix) – root mean square reprojection error for the marker calculated over all photos on which the marker is visible. If the total reprojection error for some marker seems too large, it is recommended to inspect the reprojection errors for the marker on individual photos. This information is available via the Show Info command from the marker context menu on the Reference pane.
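The Error (pix) figure is a root mean square over per-observation reprojection residuals. A minimal standalone sketch of how such an RMS value is computed (illustrative only, not PhotoScan code):

```python
import math

def rms_reprojection_error(residuals):
    """residuals: list of (dx, dy) offsets in pixels between a
    projected 3D point and its measured image position.
    Returns the root mean square error over all observations."""
    if not residuals:
        return 0.0
    sq = [dx * dx + dy * dy for dx, dy in residuals]
    return math.sqrt(sum(sq) / len(sq))

err = rms_reprojection_error([(0.3, -0.4), (0.0, 0.5)])  # pixels
```

One grossly mislocated observation inflates the RMS quadratically, which is why inspecting per-photo errors for a suspicious marker is worthwhile.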

Coded targets (CTs) allow the marker placement workflow to be automated; moreover, automatic CT detection and marker placement is more precise than manual marker placement. PhotoScan supports three types of circular CTs: 12 bit, 16 bit and 20 bit. While the 12 bit pattern is considered to be decoded more precisely, 16 bit and 20 bit patterns allow a greater number of CTs to be used within the same project. To be detected successfully, CTs must take up a significant number of pixels on the original photos.

This leads to a natural limitation of CT use: while they generally prove useful in close-range imagery projects, aerial photography projects would require impractically large CTs to be placed on the ground for them to be detected correctly.

Coded targets in workflow

Sets of all CT patterns supported by PhotoScan can be generated by the program itself.
To create a printable PDF with coded targets:
1. Select the Print Markers... command.
Once generated, the pattern set can be printed and the CTs can be placed over the scene to be shot and reconstructed.

When images with CTs seen on them are loaded into the program, PhotoScan can detect and match the CTs automatically.
To detect coded targets on source images:
1. Select the Detect Markers... command.
CTs generated with PhotoScan software contain an even number of sectors. However, previous versions of PhotoScan software had no such restriction. Thus, if the project to be processed contains CTs from previous versions of PhotoScan, it is required to disable the parity check in order to make the detector work.
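The parity check mentioned above can be pictured as counting the filled sectors of a decoded ring. The sketch below is hypothetical — PhotoScan's actual target encoding is not documented — and only illustrates why an odd sector count from a legacy target would be rejected:

```python
def passes_parity_check(code, bits=12):
    """Return True if the decoded ring pattern has an even number of
    filled sectors. Targets produced by older PhotoScan versions may
    have an odd sector count, which is why the detector offers an
    option to disable the parity check for legacy projects."""
    filled = bin(code & ((1 << bits) - 1)).count("1")
    return filled % 2 == 0

ok = passes_parity_check(0b101000000011)   # 4 filled sectors: even
bad = passes_parity_check(0b100000000011)  # 3 filled sectors: odd
```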

Chapter 5. Measurements

Performing measurements on model

PhotoScan supports measuring distances on the model, as well as the surface area and volume of the reconstructed 3D model. All the instructions of this section are applicable for working in the Model view of the program window, both for analysis of Dense Point Cloud and of Mesh data. When working in the Model view, all measurements are performed in 3D space, unlike measurements in the Ortho view, which are planar.
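The mesh volume measurement covered in this chapter can be understood as a sum of signed tetrahedron volumes (the divergence theorem). A standalone sketch, not PhotoScan code; it assumes a consistently oriented triangulated mesh:

```python
def mesh_volume(vertices, triangles):
    """Volume of a triangulated mesh as the sum of signed volumes of
    tetrahedra formed by each triangle and the origin. The result is
    only meaningful for closed, consistently oriented meshes -- which
    is why a surface with holes yields a useless (reported as zero)
    volume and should be repaired first."""
    total = 0.0
    for i, j, k in triangles:
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (
            vertices[i], vertices[j], vertices[k])
        total += (ax * (by * cz - bz * cy)
                  - ay * (bx * cz - bz * cx)
                  + az * (bx * cy - by * cx)) / 6.0
    return abs(total)

# Unit tetrahedron (closed, outward-oriented): volume = 1/6
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
vol = mesh_volume(verts, tris)
```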

Distance measurement

PhotoScan enables measurement of distances between points of the reconstructed 3D scene. Obviously, the model coordinate system must be initialized before distance measurements can be performed. Alternatively, the model can be scaled based on known distance (scale bar) information to become suitable for measurements. For instructions on setting the coordinate system please refer to the Setting coordinate system section of the manual.

The scale bar concept is described in the Optimization section.
To measure distance:
1. Select the Ruler instrument from the Toolbar of the Model view.
2. Click on the model at the points of interest; upon the second click, the distance between the indicated points will be shown right in the Model view.
3. To complete the measurement and to proceed to a new one, press the Escape button on the keyboard.

The result of the measurement will be shown on the Console pane. Shape drawing is enabled in the Model view as well. See the Shapes section of the manual for information on shape drawing. The Measure command, available from the context menu of a selected shape, shows the coordinates of the vertices as well as the perimeter of the shape. To measure several distances between pairs of points and automatically keep the resulting data, markers can be used.
To measure distance between two markers:
1. Place the markers in the scene at the targeted locations.

To measure distance between cameras:
1. Create a scale bar between the two cameras (see the Optimization section).
2. Switch to the estimated values mode using the View Estimated button from the Reference pane toolbar.
The estimated distance for the newly created scale bar equals the distance to be measured.

Surface area and volume measurement

Surface area or volume measurements of the reconstructed 3D model can be performed only after the scale or coordinate system of the scene is defined. To measure surface area and volume:

1. Select the Measure Area and Volume... command.
The whole model surface area and volume will be displayed in the Measure Area and Volume dialog box. Surface area is measured in square meters, while mesh volume is measured in cubic meters. Volume measurement can be performed only for models with closed geometry. If there are any holes in the model surface, PhotoScan will report zero volume. Existing holes in the mesh surface can be filled in before performing volume measurements using the Close Holes... tool.

Distance measurement

To measure distance with a Ruler:

1. Select the Ruler instrument from the Toolbar of the Ortho view.
2. Click on the DEM at the points of interest; upon the second click, the distance between the indicated points will be shown right in the Ortho view.
To measure distance with shapes:
1. Connect the points of interest with a polyline using the Draw Polyline tool from the Ortho view toolbar.

2. Right-click on the polyline and select the Measure... command.
3. In the Measure Shape dialog inspect the results. The Perimeter value equals the distance to be measured.
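The perimeter of an open polyline is simply the sum of its segment lengths. A standalone sketch (illustrative, not PhotoScan code), here for a planar polyline as measured in the Ortho view:

```python
import math

def polyline_length(vertices):
    """Sum of Euclidean distances between consecutive vertices --
    the 'perimeter' reported for an open polyline drawn over the DEM
    (a planar measurement, unlike 3D measurements in Model view)."""
    return sum(
        math.dist(vertices[i], vertices[i + 1])
        for i in range(len(vertices) - 1)
    )

length = polyline_length([(0.0, 0.0), (3.0, 4.0), (3.0, 10.0)])
```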

In addition to the polyline length value (the Perimeter value in the Measure Shape dialog), the coordinates of the vertices of the polyline are shown on the Planar tab of the Measure Shape dialog. To select a polyline, double-click on it. A selected polyline is coloured in red.

Surface area and volume measurement

To measure area and volume:
1. Right-click on the polygon and select the Measure... command.

Cross sections and contour lines

PhotoScan enables calculation of cross sections, using shapes to indicate the plane(s) for the cut(s), the cut being made with a plane parallel to the Z axis.

To calculate a cross section or contour lines:
1. Select the Generate Contours... command.
2. Set values for the Minimal altitude and Maximal altitude parameters, as well as the Interval for the contours. All values should be indicated in meters.
When the procedure is finished, a contour lines label will be added to the project file structure shown on the Workspace pane. Contour lines can be shown over the DEM or orthomosaic on the Ortho tab of the program window.

Use the Show contour lines tool from the Ortho tab toolbar to switch the function on and off. Contour lines can be deleted using the Remove Contours command from the contour lines label context menu on the Workspace pane.

Vegetation index calculation

To calculate a vegetation index:
1. Open the orthomosaic in the Ortho tab by double-clicking on the orthomosaic label on the Workspace pane.
2. Input an index expression using keyboard input and the operator buttons of the raster calculator, if necessary.
Once the operation is completed, the result will be shown in the Ortho view, index values being visualised with colours according to the palette set in the Raster Calculator dialog.
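A typical index expression is NDVI; a standalone sketch of the expression and a simplified stand-in for the palette mapping (illustrative only — not the Raster Calculator itself, and the palette stops below are made up):

```python
def ndvi(nir, red):
    """Classic NDVI expression: (NIR - Red) / (NIR + Red).
    Values range from -1 to 1; dense healthy vegetation tends
    toward high positive values."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

def colour_for(value, palette):
    """Pick the colour of the highest palette stop not above the
    value -- a simplified stand-in for how a palette maps index
    values to display colours."""
    colour = palette[0][1]
    for stop, c in sorted(palette):
        if value >= stop:
            colour = c
    return colour

palette = [(-1.0, "brown"), (0.2, "yellow"), (0.6, "green")]
v = ndvi(0.8, 0.1)          # strongly vegetated pixel
c = colour_for(v, palette)
```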

The palette defines the colour with which each index value is shown. PhotoScan offers several standard palette presets on the Palette tab of the Raster Calculator dialog. For each new line added to the palette a certain index value should be typed in; double click on the newly added line to type the value in.

A customised palette can be saved for future projects using the Export Palette button on the Palette tab of the Raster Calculator dialog.
To calculate contour lines based on vegetation index data:
1. Select the Generate Contours... command.
The contour lines will be shown over the index data on the Ortho tab.

Masks are used in PhotoScan to specify areas on the photos which could otherwise confuse the program or lead to incorrect reconstruction results.

Masks can be applied at the following stages of processing: alignment of the photos, building dense point cloud, building texture atlas.
Alignment of the photos: masked areas can be excluded during feature point detection, so the objects on the masked parts of the photos are not taken into account while estimating camera positions.

This is important in setups where the object of interest is not static with respect to the scene, such as when using a turntable to capture the photos. Masking may also be useful when the object of interest occupies only a small part of the photo.

In this case a small number of useful matches can be mistakenly filtered out as noise among a much greater number of matches between background objects.
Building dense point cloud: while building the dense point cloud, masked areas are not used in the depth map computation process. Setting masks for such background areas helps to avoid this problem and increases the precision and quality of geometry reconstruction.
Building texture atlas: during texture atlas generation, masked areas on the photos are not used for texturing.

Masking areas on the photos that are occluded by outliers or obstacles helps to prevent the "ghosting" effect in the resulting texture atlas.

Loading masks

Masks can be loaded from external sources, as well as generated automatically from background images if such data is available.

PhotoScan supports loading masks from several sources, such as the alpha channel of the source photos, separate mask images or background images. When generating masks from separate or background images, the folder selection dialog will appear.

Browse to the folder containing the corresponding images and select it.
Import masks for – specifies whether masks should be imported for the currently opened photo, the active chunk or the entire Workspace. Entire workspace – load masks for all chunks in the project.
Mask file names (not used in From alpha mode) – specifies the file name template used to generate mask file names.

This template can contain special tokens that will be substituted by the corresponding data for each photo being processed.
Tolerance (From Background method only) – specifies the tolerance threshold used for background differencing. The tolerance value should be set according to the color separation between foreground and background pixels; for larger separation, higher tolerance values can be used.
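Tolerance-based background differencing can be sketched as a per-pixel colour comparison. A standalone illustration of the idea, not PhotoScan's implementation (images are given here as nested lists of RGB tuples):

```python
def mask_from_background(photo, background, tolerance):
    """Per-pixel background differencing: a pixel is masked
    (excluded) when its colour is within `tolerance` of the same
    pixel in the clean background shot. Larger foreground/background
    colour separation permits higher tolerance values."""
    mask = []
    for prow, brow in zip(photo, background):
        row = []
        for (pr, pg, pb), (br, bg, bb) in zip(prow, brow):
            diff = max(abs(pr - br), abs(pg - bg), abs(pb - bb))
            row.append(diff <= tolerance)  # True = background, masked
        mask.append(row)
    return mask

photo = [[(250, 250, 250), (30, 120, 40)]]       # 1x2 test image
background = [[(255, 255, 255), (255, 255, 255)]]
m = mask_from_background(photo, background, tolerance=10)
```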

Editing masks

Modification of the current mask is performed by adding or subtracting selections. A selection is created with one of the supported selection tools and is not incorporated into the current mask until it is merged with the mask using the Add Selection or Subtract Selection operations.

To edit the mask:
1. Open the photo to be masked; the photo will be opened in the main window. The existing mask will be displayed as a shaded region on the photo.
If you are not familiar with the concept of projects, a brief description is given at the end of Chapter 3, General workflow.

In the manual you can also find instructions on the PhotoScan installation procedure and basic rules for taking "good" photographs.


Shutter speed should not be too slow, otherwise blur can occur due to slight movements. Avoid shooting shiny objects; if you still have to, shoot them under a cloudy sky. Avoid unwanted foregrounds. Avoid moving objects within the scene to be reconstructed. Avoid absolutely flat objects or scenes.

Image preprocessing

PhotoScan operates with the original images, so do not crop or geometrically transform (i.e. resize or rotate) the images.

Capturing scenarios

Generally, spending some time planning your shot session can be very useful. Number of photos: more than required is better than not enough. The number of "blind zones" should be minimized, since PhotoScan is able to reconstruct only geometry visible from at least two cameras. Each photo should effectively use the frame size: the object of interest should take up the maximum area.

In some cases portrait camera orientation should be used. Do not try to fit the full object into the image frame: if some parts are missing, it is not a problem, provided that those parts appear in other images. Good lighting is required to achieve better quality results, yet glare should be avoided.

It is recommended to remove sources of light from camera fields of view. Avoid using flash. The following figures represent advice on appropriate capturing scenarios.

Restrictions

In some cases it might be very difficult or even impossible to build a correct 3D model from a set of pictures. A short list of typical reasons for photograph unsuitability is given below.

Modifications of photographs

PhotoScan can process only unmodified photos as they were taken by a digital photo camera.

Processing photos which were manually cropped or geometrically warped is likely to fail or to produce highly inaccurate results. Photometric modifications do not affect reconstruction results. If the focal length is missing from the image EXIF metadata, PhotoScan assumes that the focal length in 35 mm equivalent equals 50 mm and tries to align the photos in accordance with this assumption.

If the correct focal length value differs significantly from 50 mm, the alignment can give incorrect results or even fail. In such cases it is required to specify the initial camera calibration manually. The details of the necessary EXIF tags and instructions for manual setting of the calibration parameters are given in the Camera calibration section.

Lens distortion

The distortion of the lenses used to capture the photos should be well approximated by Brown's distortion model; otherwise it is most unlikely that the processing results will be accurate. Fisheye and ultra-wide-angle lenses are poorly modeled by the common distortion model implemented in PhotoScan software, so it is required to choose the proper camera type in the Camera Calibration dialog prior to processing.
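The radial part of Brown's model can be sketched as a polynomial in the squared radius. A standalone illustration (tangential terms omitted; not PhotoScan's calibration code):

```python
def brown_radial(x, y, k1, k2=0.0, k3=0.0):
    """Radial part of Brown's distortion model applied to a point in
    normalized image coordinates:
        x' = x * (1 + k1*r^2 + k2*r^4 + k3*r^6)
    Tangential terms (p1, p2) are omitted for brevity. Fisheye lenses
    violate this polynomial model, hence the separate camera type."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
    return x * factor, y * factor

# Mild barrel distortion (negative k1) pulls points toward the centre
xd, yd = brown_radial(0.5, 0.0, k1=-0.1)
```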

Chapter 3. General workflow

Processing of images with PhotoScan includes the following main steps: loading photos into PhotoScan; inspecting loaded images, removing unnecessary images; aligning photos; building dense point cloud; building mesh (3D polygonal model); generating texture; exporting results. If you are using PhotoScan in the full function (not the Demo) mode, intermediate results of the image processing can be saved at any stage in the form of project files and can be used later. The concept of projects and project files is briefly explained in the Saving intermediate results section.

The list above represents all the necessary steps involved in the construction of a textured 3D model from your photos. Some additional tools which you may find useful are described in the subsequent chapters.

Preferences settings

Before starting a project with PhotoScan it is recommended to adjust the program settings to your needs. In the Preferences dialog (General tab, available through the Tools menu) you can indicate the path to the PhotoScan log file, to be shared with the Agisoft support team in case you face any problems during processing.

Here you can also change the GUI language to the one that is most convenient for you. PhotoScan exploits GPU processing power, which speeds up the process significantly. If you have decided to switch on GPUs for photogrammetric data processing with PhotoScan, it is recommended to free one physical CPU core per active GPU for overall control and resource managing tasks.

Loading photos

Before starting any operation it is necessary to point out what photos will be used as a source for 3D reconstruction. In fact, photographs themselves are not loaded into PhotoScan until they are needed, so when you "load photos" you only indicate the photographs that will be used for further processing. In the Add Photos dialog box, browse to the folder containing the images and select the files to be processed.

Then click the Open button. Photos in any other format will not be shown in the Add Photos dialog box. To work with such photos you will need to convert them to one of the supported formats. If you have loaded some unwanted photos, you can easily remove them at any moment.

Right-click on the selected photos and choose the Remove Items command from the opened context menu, or click the Remove Items toolbar button on the Workspace pane. The selected photos will be removed from the working set.

Camera groups

If all the photos or a subset of photos were captured from one camera position (camera station), for PhotoScan to process them correctly it is obligatory to move those photos to a camera group and mark the group as Camera Station.

It is important that for all the photos in a Camera Station group the distances between camera centers be negligibly small compared to the minimal camera-object distance. However, it is possible to export a panoramic picture for data captured from only one camera station; refer to the Exporting results section for guidance on panorama export. Alternatively, the camera group structure can be used to manipulate the image data in a chunk easily. Right-click on the selected photos and choose the Move Cameras – New Camera Group command from the opened context menu.

A new group will be added to the active chunk structure and the selected photos will be moved to it. To mark a group as a camera station, right-click on the camera group name and select the Set Group Type command from the context menu.

Inspecting loaded photos

Loaded photos are displayed on the Workspace pane along with flags reflecting their status.

The following flags can appear next to the photo name:
NC (Not calibrated) – notifies that the available EXIF data is insufficient; in this case PhotoScan assumes that the corresponding photo was taken using a 50 mm lens (35 mm film equivalent). If the actual focal length differs significantly from this value, manual calibration may be required. More details on manual camera calibration can be found in the Camera calibration section.

NA (Not aligned) – notifies that external camera orientation parameters have not been estimated for the current photo yet. Images loaded into PhotoScan will not be aligned until you perform the next step – photo alignment.

Aligning photos

Once photos are loaded into PhotoScan, they need to be aligned. At this stage PhotoScan finds the camera position and orientation for each photo and builds a sparse point cloud model.

The progress dialog box will appear, displaying the current processing status. To cancel processing, click the Cancel button. Once alignment is completed, the computed camera positions and a sparse point cloud will be displayed. You can inspect the alignment results and remove incorrectly positioned photos, if any.

To see the matches between any two photos use the View Matches... command. Incorrectly positioned photos can be realigned:
1. Reset alignment for incorrectly positioned cameras using the Reset Camera Alignment command from the photo context menu.
2. Select the photos to be realigned and use the Align Selected Cameras command from the photo context menu.

When the alignment step is completed, the point cloud and estimated camera positions can be exported for processing in other software if needed.

Image quality

Poor input, e.g. vague or blurred photos, can influence alignment results badly. To help you exclude poorly focused images from processing, PhotoScan offers an automatic image quality estimation feature.

Images with a quality value of less than 0.5 units are recommended to be disabled and thus excluded from photogrammetric processing. To disable a photo use the Disable button from the Photos pane toolbar. PhotoScan estimates image quality for each input image; the value of the parameter is calculated based on the sharpness level of the most focused part of the picture. Right-click on the selected photo(s) and choose the Estimate Image Quality command from the context menu. Once the analysis procedure is over, a figure indicating the estimated image quality value will be displayed in the Quality column on the Photos pane.
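Filtering a photo set by the estimated quality value can be sketched as below (a standalone illustration, not PhotoScan's API; the labels and the 0.5 cut-off follow the recommendation above):

```python
def select_low_quality(qualities, threshold=0.5):
    """Return labels of images whose estimated quality falls below
    the threshold; these are the candidates for disabling before
    alignment, so blurred frames do not degrade camera estimates."""
    return [label for label, q in qualities.items() if q < threshold]

bad = select_low_quality(
    {"IMG_001": 0.83, "IMG_002": 0.41, "IMG_003": 0.77})
```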

Alignment parameters

The following parameters control the photo alignment procedure and can be modified in the Align Photos dialog box:
Accuracy – higher accuracy settings help to obtain more accurate camera position estimates. Lower accuracy settings can be used to get rough camera positions in a shorter period of time.

While at the High accuracy setting the software works with photos of the original size, the Medium setting causes image downscaling by a factor of 4 (2 times per side), at Low accuracy source files are downscaled by a factor of 16, and the Lowest value means further downscaling by 4 times more.
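The downscaling steps above can be sketched as effective pixel counts per accuracy setting (a standalone illustration; the function name is hypothetical, and the factors follow the text: Highest upscales by 4, High is original size, each step below divides the pixel count by 4):

```python
def effective_megapixels(width, height, accuracy):
    """Approximate image size (in megapixels) used at each alignment
    accuracy setting, relative to the original pixel count."""
    factors = {"highest": 4.0, "high": 1.0, "medium": 1 / 4,
               "low": 1 / 16, "lowest": 1 / 64}
    return width * height * factors[accuracy.lower()] / 1e6

mp_high = effective_megapixels(6000, 4000, "High")      # original size
mp_medium = effective_megapixels(6000, 4000, "Medium")  # quarter pixels
```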

The Highest accuracy setting upscales the image by a factor of 4. Since tie point positions are estimated on the basis of feature spots found on the source images, it may be meaningful to upscale a source photo to localize a tie point more accurately. However, the Highest accuracy setting is recommended only for very sharp image data, and mostly for research purposes, as the corresponding processing is quite time consuming.
Pair preselection – the alignment process of large photo sets can take a long time.

A significant portion of this time is spent on matching detected features across the photos. The image pair preselection option may speed up this process by selecting a subset of image pairs to be matched. In the Generic preselection mode the overlapping pairs of photos are selected by matching photos using a lower accuracy setting first.

Additionally, the following advanced parameters can be adjusted:
Key point limit – the upper limit of feature points on every image to be taken into account during the current processing stage. Using a zero value allows PhotoScan to find as many key points as possible, but may result in a large number of less reliable points.
Tie point limit – the upper limit of matching points for every image.

Using zero value doesn’t apply any tie point filtering. Constrain features by mask When this option is enabled, masked areas are excluded from feature detection procedure.

For additional information on the usage of masks please refer to the Using masks section.
Note: the Tie point limit parameter allows you to optimize performance for the task and does not generally affect the quality of the resulting model.

The recommended value is 4,000. Too high or too low a tie point limit value may cause some parts of the dense point cloud model to be missed. The reason is that PhotoScan generates depth maps only for pairs of photos for which the number of matching points is above a certain limit.

As a result the sparse point cloud will be thinned, yet the alignment will be kept unchanged.

Point cloud generation based on imported camera data

PhotoScan supports import of external and internal camera orientation parameters. Thus, if precise camera data is available for the project, it is possible to load it into PhotoScan along with the photos, to be used as initial information for the 3D reconstruction job. The data will be loaded into the software. Camera calibration data can be inspected in the Camera Calibration dialog, Adjusted tab, available from the Tools menu.

Once the data is loaded, PhotoScan will offer to build the point cloud. This step involves feature point detection and matching procedures. As a result, a sparse point cloud – a 3D representation of the tie-point data – will be generated.

The parameters controlling the Build Point Cloud procedure are the same as those used at the Align Photos step (see above).

Building dense point cloud

PhotoScan allows you to generate and visualize a dense point cloud model. Based on the estimated camera positions, the program calculates depth information for each camera, to be combined into a single dense point cloud.

PhotoScan tends to produce extra dense point clouds, which are of almost the same density, if not denser, than LIDAR point clouds. A dense point cloud can be edited within the PhotoScan environment or exported to an external tool for further analysis. Rotate the bounding box and then drag the corners of the box to the desired positions. In the Build Dense Cloud dialog box, select the desired reconstruction parameters.

Click the OK button when done.
Reconstruction parameters
Quality – specifies the desired reconstruction quality. Higher quality settings can be used to obtain more detailed and accurate geometry, but they require longer processing time. Interpretation of the quality parameter here is similar to that of the accuracy settings given in the Photo Alignment section.

The only difference is that in this case the Ultra High quality setting means processing of the original photos, while each following step implies preliminary image downscaling by a factor of 4 (2 times per side).
Depth filtering modes – at the dense point cloud generation stage, PhotoScan calculates depth maps for every image.

Due to some factors, like noisy or badly focused images, there can be some outliers among the points. To sort out the outliers, PhotoScan has several built-in filtering algorithms that answer the challenges of different projects. If there are important small details which are spatially distinguished in the scene to be reconstructed, it is recommended to set the Mild depth filtering mode, so that important features are not sorted out as outliers.

This value of the parameter may also be useful for aerial projects where the area contains poorly textured roofs, for example. If the area to be reconstructed does not contain meaningful small details, it is reasonable to choose the Aggressive depth filtering mode to sort out most of the outliers. This value of the parameter is normally recommended for aerial data processing; however, mild filtering may be useful in some projects as well (see the poorly textured roofs comment in the Mild parameter value description above).

The Moderate depth filtering mode brings results that are in between the Mild and Aggressive approaches. You can experiment with the setting if you have doubts about which mode to choose.

Additionally, depth filtering can be Disabled, but this option is not recommended, as the resulting dense cloud could be extremely noisy.

Building mesh

Check the reconstruction volume bounding box.

If the model has already been referenced, the bounding box will be positioned properly automatically; otherwise, it is important to control its position manually. To adjust the bounding box manually, use the Resize Region and Rotate Region toolbar buttons. Rotate the bounding box and then drag the corners of the box to the desired positions – only the part of the scene inside the bounding box will be reconstructed.

If the Height field reconstruction method is to be applied, it is important to control the position of the red side of the bounding box: it defines the reconstruction plane. In this case, make sure that the bounding box is correctly oriented. In the Build Mesh dialog box, select the desired reconstruction parameters.
Reconstruction parameters
PhotoScan supports several reconstruction methods and settings, which help to produce optimal reconstructions for a given data set.

Surface type – the Arbitrary surface type can be used for modeling any kind of object. It should be selected for closed objects, such as statues, buildings, etc. It does not make any assumptions about the type of the object being modeled, which comes at the cost of higher memory consumption. The Height field surface type is optimized for modeling planar surfaces, such as terrains or bas-reliefs.

It should be selected for aerial photography processing, as it requires a lower amount of memory and allows larger data sets to be processed.
Source data – specifies the source for the mesh generation procedure. Sparse cloud can be used for fast 3D model generation based solely on the sparse point cloud. The Dense cloud setting will result in longer processing time but will generate high quality output based on the previously reconstructed dense point cloud.

Polygon count – specifies the maximum number of polygons in the final mesh. The default presets represent the optimal number of polygons for a mesh of the corresponding level of detail. It is still possible for a user to indicate the target number of polygons in the final mesh through the Custom value of the Polygon count parameter. Please note that while too small a number of polygons is likely to result in too rough a mesh, a very large custom number (over 10 million polygons) is likely to cause model visualization problems in external software.

Interpolation – if interpolation mode is Disabled, it leads to accurate reconstruction results, since only areas corresponding to dense point cloud points are reconstructed. Manual hole filling is usually required at the post-processing step. With the Enabled (default) interpolation mode, PhotoScan will interpolate some surface areas within a circle of a certain radius around every dense cloud point. As a result some holes can be covered automatically.

Yet some holes can still be present in the model and are to be filled at the post-processing step. In Extrapolated mode the program generates a hole-free model with extrapolated geometry.

Large areas of extra geometry might be generated with this method, but they can easily be removed later using selection and cropping tools.
Note: PhotoScan tends to produce 3D models with excessive geometry resolution, so it is recommended to perform mesh decimation after geometry computation.

More information on mesh decimation and other 3D model geometry editing tools is given in the Editing model geometry section. Select the desired texture generation parameters in the Build Texture dialog box.

Texture mapping modes. The texture mapping mode determines how the object texture will be packed in the texture atlas. Proper texture mapping mode selection helps to obtain optimal texture packing and, consequently, better visual quality of the final model. Generic. The default mode is the Generic mapping mode; it allows the texture atlas to be parametrized for arbitrary geometry.

No assumptions regarding the type of the scene to be processed are made; the program tries to create as uniform a texture as possible. Adaptive orthophoto. In the Adaptive orthophoto mapping mode the object surface is split into a flat part and vertical regions. The flat part of the surface is textured using the orthographic projection, while vertical regions are textured separately to maintain accurate texture representation in such regions.

When in the Adaptive orthophoto mapping mode, program tends to produce more compact texture representation for nearly planar scenes, while maintaining good texture quality for vertical surfaces, such as walls of the buildings.

Orthophoto In the Orthophoto mapping mode the whole object surface is textured in the orthographic projection. The Orthophoto mapping mode produces even more compact texture representation than the Adaptive orthophoto mode at the expense of texture quality in vertical regions.

Single photo. The Single photo mapping mode allows texture to be generated from a single photo, which can be selected from the 'Texture from' list. Keep uv. The Keep uv mapping mode generates the texture atlas using the current texture parametrization. It can be used to rebuild the texture atlas at a different resolution, or to generate the atlas for a model parametrized in external software. Texture generation parameters. The following parameters control various aspects of texture atlas generation:

Texture from (Single photo mapping mode only). Specifies the photo to be used for texturing. Available only in the Single photo mapping mode.

Blending mode (not used in Single photo mode). Selects the way pixel values from different photos will be combined in the final texture. Mosaic implies a two-step approach: it blends the low frequency component for overlapping images to avoid the seamline problem (a weighted average, the weight depending on a number of parameters including proximity of the pixel in question to the center of the image), while the high frequency component, which is in charge of picture details, is taken from a single image: the one that presents good resolution for the area of interest while the camera view is almost along the normal to the reconstructed surface at that point.

Average – uses the weighted average value of all pixels from individual photos, the weight depending on the same parameters that are considered for the high frequency component in Mosaic mode. Max Intensity – the photo with the maximum intensity of the corresponding pixel is selected. Min Intensity – the photo with the minimum intensity of the corresponding pixel is selected. Disabled – the photo to take the pixel's color value from is chosen in the same way as for the high frequency component in Mosaic mode.
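As a rough illustration of how these modes differ, the toy sketch below (an assumption for illustration, not PhotoScan's implementation) combines candidate pixel values from overlapping photos; the weights stand in for the view-quality criteria described above (resolution, viewing angle, proximity to the image center):

```python
def blend(candidates, mode):
    """candidates: list of (pixel_value, weight) pairs from overlapping photos."""
    if mode == "average":
        # weighted mean of all candidate values
        total = sum(w for _, w in candidates)
        return sum(v * w for v, w in candidates) / total
    if mode == "max_intensity":
        return max(candidates, key=lambda c: c[0])[0]
    if mode == "min_intensity":
        return min(candidates, key=lambda c: c[0])[0]
    if mode == "disabled":
        # single best-weighted photo wins, no blending at all
        return max(candidates, key=lambda c: c[1])[0]
    raise ValueError(mode)

pixels = [(200, 0.9), (180, 0.5), (220, 0.2)]
print(blend(pixels, "average"))        # weighted mean (~196.25)
print(blend(pixels, "max_intensity"))  # brightest candidate
print(blend(pixels, "disabled"))       # highest-weight photo wins
```

Mosaic mode would apply the "average" rule only to the low frequency component and the "disabled" rule to the high frequency component.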

PhotoScan requires the z value to indicate height above the ellipsoid. Using different vertical datums. By default PhotoScan requires all source altitude values, for both cameras and markers, to be input as values measured above the ellipsoid. However, PhotoScan allows different geoid models to be utilized as well.
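The relationship behind the two vertical datums is simple: ellipsoidal height h, orthometric height H (above the geoid) and the geoid undulation N at a location satisfy h = H + N. A minimal sketch (the numbers are made up for illustration):

```python
def ellipsoidal_height(orthometric_h, geoid_undulation):
    """h = H + N: height above the ellipsoid from height above the geoid
    (e.g. EGM96) plus the geoid undulation N at that location."""
    return orthometric_h + geoid_undulation

def orthometric_height(ellipsoidal_h, geoid_undulation):
    """The inverse conversion: H = h - N."""
    return ellipsoidal_h - geoid_undulation

# A point 100 m above the geoid, where the geoid sits 30 m above the ellipsoid:
print(ellipsoidal_height(100.0, 30.0))   # 130.0
print(orthometric_height(130.0, 30.0))   # 100.0
```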

PhotoScan installation package includes only EGM96 geoid model, but additional geoid models can be downloaded from Agisoft’s website if they are required by the coordinate system selected in the Reference pane settings dialog; alternatively, a geoid model can be loaded from a custom PRJ file. Optimization Optimization of camera alignment PhotoScan estimates internal and external camera orientation parameters during photo alignment.

This estimation is performed using image data alone, and there may be some errors in the final estimates. The accuracy of the final estimates depends on many factors, like overlap between the neighboring photos, as well as on the shape of the object surface. These errors can lead to non-linear deformations of the final model.

During georeferencing the model is linearly transformed using a 7-parameter similarity transformation (3 parameters for translation, 3 for rotation and 1 for scaling). Such a transformation can compensate only for linear model misalignment. The non-linear component cannot be removed with this approach. This is usually the main reason for georeferencing errors. Possible non-linear deformations of the model can be removed by optimizing the estimated point cloud and camera parameters based on the known reference coordinates.
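For intuition, a 7-parameter similarity transformation maps a model point p to s·R·p + t (one scale, a rotation, a translation). The sketch below restricts the rotation to the Z axis for brevity (a full transform uses three rotation angles), which is enough to see why it can never bend a warped model straight:

```python
import math

def similarity_transform(p, scale, yaw_deg, t):
    """Apply p' = scale * R * p + t with R a rotation about Z only
    (illustrative; the full 7-parameter transform has 3 rotation angles)."""
    a = math.radians(yaw_deg)
    x, y, z = p
    xr = math.cos(a) * x - math.sin(a) * y
    yr = math.sin(a) * x + math.cos(a) * y
    tx, ty, tz = t
    return (scale * xr + tx, scale * yr + ty, scale * z + tz)

# Rotate 90 degrees, double the scale, shift 10 m along X:
print(similarity_transform((1.0, 0.0, 2.0), 2.0, 90.0, (10.0, 0.0, 0.0)))
# approximately (10.0, 2.0, 4.0)
```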

During this optimization PhotoScan adjusts estimated point coordinates and camera parameters, minimizing the sum of the reprojection error and the reference coordinate misalignment error. To achieve better optimization results it may be useful to edit the sparse point cloud beforehand, deleting obviously mislocated points. Georeferencing accuracy can be improved significantly after optimization.
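Conceptually (this is an illustrative sketch, not PhotoScan's actual solver), the objective is a weighted least-squares cost in which the accuracy values entered in the Reference pane Settings act as weights, so that looser accuracies contribute less to the cost:

```python
def optimization_cost(reproj_errors_px, ref_errors_m, marker_acc_px, ref_acc_m):
    """Weighted least-squares cost combining reprojection errors (pixels)
    and reference-coordinate misalignment errors (metres). Each residual
    is normalized by the assumed accuracy of its measurement."""
    cost = sum((e / marker_acc_px) ** 2 for e in reproj_errors_px)
    cost += sum((e / ref_acc_m) ** 2 for e in ref_errors_m)
    return cost

print(optimization_cost([0.5, 1.0], [0.02, 0.04],
                        marker_acc_px=0.5, ref_acc_m=0.02))
```

Under this view, deleting a grossly mislocated sparse point removes a large residual that would otherwise pull the solution away from the correct geometry.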

It is recommended to perform optimization if the final model is to be used for any kind of measurements. In the Reference pane Settings dialog box specify the assumed accuracy of measured values as well as the assumed accuracy of marker projections on the source photos. Click Optimize toolbar button. In Optimize Camera Alignment dialog box check additional camera parameters to be optimized if needed.

Click OK button to start optimization. After the optimization is complete, the georeferencing errors will be updated. Note Step 5 can be safely skipped if you are using a standard GPS (not one of extremely high precision). Tangential distortion parameters p3, p4 are available for optimization only if p1, p2 values are not equal to zero after the alignment step. The model data (if any) is cleared by the optimization procedure.

You will have to rebuild the model geometry after optimization. Image coordinates accuracy for markers indicates how precisely the markers were placed by the user, or adjusted by the user after being automatically placed by the program. Ground altitude parameter is used to make the reference preselection mode of the alignment procedure work effectively for oblique imagery. See Aligning photos for details. Camera, marker and scale bar accuracy can be set per item, i.e. individually. Accuracy values can be typed in on the pane per item or for a group of selected items.

Generally it is reasonable to run optimization procedure based on markers data only. It is due to the fact that GCPs coordinates are measured with significantly higher accuracy compared to GPS data that indicates camera positions.

Thus, markers data are sure to give more precise optimization results. Moreover, quite often GCP and camera coordinates are measured in different coordinate systems, which also prevents using both camera and marker data in optimization simultaneously.

The results of the optimization procedure can be evaluated with the help of error information on the Reference pane. In addition, the distortion plot can be inspected along with mean residuals visualised per calibration group. This data is available from the Camera Calibration dialog (Tools menu), from the context menu of a camera group (Distortion Plot... command). In case optimization results do not seem satisfactory, you can try recalculating with lower values of the accuracy parameters, i.e. assuming the source measurements to be less accurate.

Scale bar based optimization. A scale bar is the program's representation of any known distance within the scene. It can be a standard ruler or a specially prepared bar of a known length.

A scale bar is a handy tool for adding supportive reference data to the scene. Scale bars can prove useful when there is no way to locate ground control points all over the scene. They allow field work time to be saved, since it is significantly easier to place several scale bars of precisely known length than to measure coordinates of a few markers using special equipment.

In addition, PhotoScan allows scale bar instances to be placed between cameras, thus making it possible to avoid not only marker but also ruler placement within the scene.

Surely, scale bar based information will not be enough to set a coordinate system, however, the information can be successfully used while optimizing the results of photo alignment.

It will also be enough to perform measurements in PhotoScan software. See Performing measurements on mesh. Place markers at the start and end points of the bar. For information on marker placement please refer to the Setting coordinate system section of the manual. Select Create Scale Bar command from the Model view context menu.

The scale bar will be created and an instance added to the Scale Bar list on the Reference pane. Double click on the Distance (m) box next to the newly created scale bar name and enter the known length of the bar in meters.

Select the two cameras on the Workspace or Reference pane using the Ctrl button. Alternatively, the cameras can be selected in the Model view window using selection tools from the Toolbar. Select Create Scale Bar command from the context menu. On the Reference pane check all scale bars to be used in the optimization procedure. Click Settings toolbar button on the Reference pane. In the Reference pane Settings dialog box specify the assumed accuracy of scale bar measurements.

Click OK button. After the optimization is complete, the estimated coordinates of cameras and markers will be updated, as well as all the georeferencing errors. To analyze optimization results switch to the View Estimated mode using the Reference pane toolbar button. In the scale bar section of the Reference pane the estimated scale bar distance will be displayed.

Error (pix) – root mean square reprojection error calculated over all feature points detected on the photo. Error (pix) – root mean square reprojection error for the marker calculated over all photos where the marker is visible.
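Both Error (pix) values are root mean square figures over the individual per-projection errors. A minimal sketch of the definition:

```python
import math

def rms_reprojection_error(errors_px):
    """Root mean square of per-projection reprojection errors, in pixels,
    matching how the per-photo / per-marker Error (pix) values are defined."""
    return math.sqrt(sum(e * e for e in errors_px) / len(errors_px))

# A marker seen on three photos with these individual reprojection errors:
print(rms_reprojection_error([0.3, 0.4, 0.5]))  # about 0.408 px
```

Because the errors are squared before averaging, a single badly placed projection dominates the total, which is why inspecting per-photo errors for a suspicious marker is worthwhile.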

Error (m) – difference between the input (source) scale bar length and the measured distance between the two cameras or markers representing the start and end points of the scale bar. If the total reprojection error for some marker seems too large, it is recommended to inspect reprojection errors for the marker on individual photos. The information is available with the Show Info command from the marker context menu on the Reference pane. Working with coded and non-coded targets. Overview. Coded and non-coded targets are specially prepared, yet quite simple, real-world markers that can contribute to successful 3D model reconstruction of a scene.

Coded targets advantages and limitations. Coded targets (CTs) can be used as markers to define the local coordinate system and scale of the model, or as true matches to improve the photo alignment procedure. PhotoScan functionality includes automatic detection and matching of CTs on source photos, which allows the project to benefit from marker implementation.

Moreover, automatic CT detection and marker placement is more precise than manual marker placement. PhotoScan supports three types of circular CTs: 12 bit, 16 bit and 20 bit.

While 12 bit pattern is considered to be decoded more precisely, 16 bit and 20 bit patterns allow for a greater number of CTs to be used within the same project. To be detected successfully CTs must take up a significant number of pixels on the original photos.

This leads to a natural limitation of CT implementation: while they generally prove useful in close-range imagery projects, aerial photography projects would demand impractically large CTs to be placed on the ground for the CTs to be detected correctly. Coded targets in workflow. Sets of all patterns of CTs supported by PhotoScan can be generated by the program itself.
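The pixel-footprint limitation above can be turned into back-of-the-envelope arithmetic. The 30 px minimum used below is an assumption for illustration (the manual only says a "significant number of pixels"); the physical target size is that pixel count times the ground sample distance:

```python
def min_target_size_m(gsd_m_per_px, min_pixels=30):
    """Rough minimum physical size of a coded target, given the ground
    sample distance (metres per pixel) and an assumed minimum pixel
    footprint required for reliable detection."""
    return gsd_m_per_px * min_pixels

# Close-range scan at 0.2 mm/px vs an aerial survey at 5 cm/px:
print(min_target_size_m(0.0002))  # ~6 mm target suffices
print(min_target_size_m(0.05))    # ~1.5 m target, impractically large
```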

Once generated, the pattern set can be printed and the CTs placed over the scene to be shot and reconstructed. When the images with CTs seen on them are uploaded to the program, PhotoScan can detect and match the CTs automatically. PhotoScan will detect and match CTs and add corresponding markers to the Reference pane. CTs generated with PhotoScan software contain an even number of sectors. However, previous versions of PhotoScan software had no restriction of the kind.

Thus, if the project to be processed contains CTs from previous versions of PhotoScan software, it is required to disable the parity check in order to make the detector work. Non-coded targets implementation. Non-coded targets can also be automatically detected by PhotoScan (see Detect Markers dialog). However, for non-coded targets to be matched automatically, it is necessary to run the align photos procedure first.

Non-coded targets are more appropriate for aerial surveying projects due to the simplicity of the pattern to be printed on a large scale. However, since they all look alike, they do not allow for automatic identification, so manual assignment of an identifier is required if referencing coordinates are to be imported from a file correctly.

Chapter 5. Measurements. Performing measurements on mesh. PhotoScan supports measurement of distances between control points, as well as of the surface area and volume of the reconstructed 3D model. Distance measurement. PhotoScan enables measurement of direct distances between points of the reconstructed 3D scene. The points used for distance measurement must be defined by placing markers in the corresponding locations.

The model coordinate system must also be initialized before distance measurements can be performed. Alternatively, the model can be scaled based on known distance (scale bar) information to become suitable for measurements.

For instructions on placing markers, refining their positions and setting coordinate system please refer to the Setting coordinate system section of the manual. Scale bar concept is described in the Optimization section. Place the markers in the scene at the locations to be used for distance measurement.

Select both markers to be used for distance measurement on the Reference pane using the Ctrl button. Select Create Scale Bar command from the 3D view context menu. Switch to the estimated values mode using the toolbar. The estimated distance for the newly created scale bar equals the distance to be measured. Note Please note that the scale bar used for distance measurements must be unchecked on the Reference pane.
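The distance reported for such a measurement scale bar is simply the straight-line (Euclidean) distance between the two marker positions in the model coordinate system:

```python
import math

def marker_distance(a, b):
    """Euclidean distance between two marker positions; the result is in
    the units of the model coordinate system (metres once georeferenced
    or scaled)."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

print(marker_distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # 5.0
```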

Surface area and volume measurement Surface area or volume measurements of the reconstructed 3D model can be performed only after the scale or coordinate system of the scene is defined. For instructions on setting coordinate system please refer to the Setting coordinate system section of the manual. The whole model surface area and volume will be displayed in the Measure Area and Volume dialog box.

Surface area is measured in square meters, while mesh volume is measured in cubic meters. Volume measurement can be performed only for models with closed geometry. If there are any holes in the model surface, PhotoScan will report zero volume. Existing holes in the mesh surface can be filled in before performing volume measurements using the Close Holes tool. Performing measurements on DEM. PhotoScan is capable of DEM-based point, distance, area, and volume measurements, as well as of generating cross-sections for a part of the scene selected by the user.
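The closed-geometry requirement follows from how mesh volume is computed. A common approach (shown here as an illustrative sketch, not necessarily PhotoScan's exact algorithm) sums signed tetrahedron volumes over the faces via the divergence theorem, which is only meaningful for a watertight surface:

```python
def mesh_volume(vertices, faces):
    """Volume of a closed triangle mesh: sum of signed volumes of the
    tetrahedra formed by each face and the origin. Holes in the surface
    make this sum meaningless, hence the watertightness requirement."""
    vol = 0.0
    for i, j, k in faces:
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = vertices[i], vertices[j], vertices[k]
        # scalar triple product a . (b x c) / 6 = signed tetrahedron volume
        vol += (ax * (by * cz - bz * cy)
                - ay * (bx * cz - bz * cx)
                + az * (bx * cy - by * cx)) / 6.0
    return abs(vol)

# Unit tetrahedron with legs along the axes: volume = 1/6
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(mesh_volume(verts, faces))  # 1/6
```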

Measurements on the DEM are controlled with shapes: points, polylines and polygons. Alternatively, shapes can be loaded from a .SHP file using the Import Shapes... command. Shapes created in PhotoScan can be exported using the Export Shapes... command. Double click on the last point to indicate the end of a polyline. To complete a polygon, place the end point over the starting one. Once the shape is drawn, a shape label will be added to the chunk data structure on the Workspace pane. All shapes drawn on the same DEM and on the corresponding orthomosaic will be shown under the same label on the Workspace pane.

The program will switch to a navigation mode once a shape is completed. Delete Vertex command is active only for a vertex context menu. To get access to the vertex context menu, select the shape with a double click first, and then select the vertex with a double click on it. To change the position of a vertex, drag and drop it to the desired position with the cursor. Point measurement. Ortho view allows measuring coordinates of any point on the reconstructed model. The X and Y coordinates of the point indicated with the cursor, as well as the height of the point above the vertical datum selected by the user, are shown in the bottom right corner of the Ortho view.

In the Measure Shape dialog inspect the results. The perimeter value equals the distance to be measured. In addition to the polyline length value (see the perimeter value in the Measure Shape dialog), coordinates of the vertices of the polyline are shown on the Planar tab of the Measure Shape dialog. Note Measure option is available from the context menu of a selected polyline.

To select a polyline, double-click on it. A selected polyline is coloured in red. In the Measure Shape dialog inspect the results: see area value on the Planar tab and volume values on the Volume tab. Best fit and mean level planes are calculated based on the drawn polygon vertices.

Volume measured against custom level plane allows to trace volume changes for the same area in the course of time.

Note Measure option is available from the context menu of a selected polygon. To select a polygon, double-click on it. A selected polygon is coloured in red. Cross sections and contour lines. PhotoScan enables calculation of cross sections, using shapes to indicate the plane(s) for the cut(s), the cut being made with a plane parallel to the Z axis. Select Generate Contours... command. Set values for the Minimal altitude and Maximal altitude parameters as well as the Interval for the contours. All the values should be indicated in meters.
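The three parameters simply define the set of contour altitudes to draw. A sketch of the levels they produce:

```python
def contour_levels(min_alt, max_alt, interval):
    """Contour altitudes from Minimal altitude up to Maximal altitude,
    stepped by Interval (all in metres, as the dialog expects)."""
    levels = []
    level = min_alt
    while level <= max_alt + 1e-9:  # small epsilon guards float round-off
        levels.append(round(level, 6))
        level += interval
    return levels

print(contour_levels(100.0, 125.0, 5.0))
# [100.0, 105.0, 110.0, 115.0, 120.0, 125.0]
```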

When the procedure is finished, a contour lines label will be added to the project file structure shown on the Workspace pane. Contour lines can be shown over the DEM or orthomosaic on the Ortho tab of the program window. Use the Show contour lines tool from the Ortho tab toolbar to switch the function on and off. Contour lines can be deleted using the Remove Contours command from the contour lines label context menu on the Workspace pane.

Contour lines can be exported using the Export Contours command from the contour lines label context menu on the Workspace pane. Alternatively the command is available from the Tools menu. In the Export Contour Lines dialog it is necessary to select the type of contour lines to be exported, since a .SHP file can store lines of one type only: either polygons or polylines.

Vegetation indices calculation. PhotoScan enables calculation of NDVI and other vegetation indices based on multispectral imagery input. A vegetation index formula can be set by the user, thus allowing for great flexibility in data analysis. Calculated data can be exported as a grid of floating point index values calculated per pixel of the orthomosaic, or as an orthomosaic in pseudocolors according to a palette set by the user.
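For example, the classic NDVI expression is (NIR − RED) / (NIR + RED). The sketch below also shows a simplified nearest-stop palette lookup; band naming and PhotoScan's actual palette interpolation depend on the project and the dialog settings, so treat this as illustrative:

```python
def ndvi(nir, red):
    """NDVI = (NIR - RED) / (NIR + RED); ranges from -1 to 1,
    with healthy vegetation typically well above zero."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def to_palette(value, palette):
    """Map an index value to the colour of the nearest palette stop
    (a simplified stand-in for the user-defined palette)."""
    return min(palette, key=lambda stop: abs(stop[0] - value))[1]

palette = [(-1.0, "blue"), (0.0, "brown"), (0.5, "yellow"), (1.0, "green")]
v = ndvi(0.8, 0.2)           # about 0.6 for a well-vegetated pixel
print(v, to_palette(v, palette))
```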

Open the orthomosaic in the Ortho tab by double-clicking on the orthomosaic label on the Workspace pane. Open the Raster Calculator tool. Input an index expression using keyboard input and the operator buttons of the raster calculator if necessary. Once the operation is completed, the result will be shown in the Ortho view, index values being visualised with colours according to the palette set in the Raster Calculator dialog.

The palette defines the colour each index value is shown with. PhotoScan offers several standard palette presets on the Palette tab of the Raster Calculator dialog. For each new line added to the palette a certain index value should be typed in. Double click on the newly added line to type the value in.

A customised palette can be saved for future projects. Select Generate Contours... command to calculate contour lines for the index data. The contour lines will be shown over the index data on the Ortho tab. Note PhotoScan keeps only the latest contour lines data calculated.

After the vegetation index results have been inspected, the original orthomosaic can be opened by unchecking the Enable transform box in the Raster Calculator and pressing the OK button. Index data can be saved with the Export orthomosaic command from the File menu. For guidance on the export procedure, please refer to the NDVI data export section of the manual. Masks are used in PhotoScan to specify areas on the photos which can otherwise confuse the program or lead to incorrect reconstruction results.

Masks can be applied at the following stages of processing: alignment of the photos, building dense point cloud, building 3D model texture, exporting orthomosaic. Alignment of the photos. Masked areas can be excluded during feature point detection. Thus, the objects on the masked parts of the photos are not taken into account while estimating camera positions.

This is important in setups where the object of interest is not static with respect to the scene, such as when using a turntable to capture the photos. Masking may also be useful when the object of interest occupies only a small part of the photo. In this case a small number of useful matches can be mistakenly filtered out as noise among a much greater number of matches between background objects.

Building dense point cloud While building dense point cloud, masked areas are not used in the depth maps computation process. Masking can be used to reduce the resulting dense cloud complexity, by eliminating the areas on the photos that are not of interest. Masked areas are always excluded from processing during dense point cloud and texture generation stages. Let’s take for instance a set of photos of some object.

Along with the object itself, some background areas are present on each photo. These areas may be useful for more precise camera positioning, so it is better to use them while aligning the photos. However, the impact of these areas at the dense point cloud building stage is exactly the opposite: the resulting model will contain the object of interest and its background. Background geometry will "consume" some part of the mesh polygons that could otherwise be used for modeling the main object.

Setting masks for such background areas avoids this problem and increases the precision and quality of geometry reconstruction. Building texture atlas. During texture atlas generation, masked areas on the photos are not used for texturing.

Masking areas on the photos that are occluded by outliers or obstacles helps to prevent the „ghosting“ effect on the resulting texture atlas. Loading masks Masks can be loaded from external sources, as well as generated automatically from background images if such data is available. PhotoScan supports loading masks from the following sources: From alpha channel of the source photos.

From separate images. Generated from background photos based on background differencing technique. Based on reconstructed 3D model. When generating masks from separate or background images, the folder selection dialog will appear. Browse to the folder containing corresponding images and select it.
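Background differencing itself is conceptually simple. A toy version (an assumption for illustration; PhotoScan's actual detector is more robust to lighting and noise) keeps pixels whose intensity deviates from the background shot beyond a threshold and masks out the rest:

```python
def mask_from_background(photo, background, threshold=25):
    """Binary mask from background differencing: 1 keeps a pixel for
    processing, 0 masks it out. Both images are 2D grids of intensities
    taken from the same (fixed) camera position."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(prow, brow)]
            for prow, brow in zip(photo, background)]

background = [[10, 10], [10, 10]]
photo      = [[12, 200], [9, 180]]   # object pixels are much brighter
print(mask_from_background(photo, background))  # [[0, 1], [0, 1]]
```

This is why the technique needs a dedicated background shot per camera position: the per-pixel comparison is only meaningful when the two images line up.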


Table of Contents. Overview.

Overview. Agisoft PhotoScan is an advanced image-based 3D modeling solution aimed at creating professional quality 3D content from still images. Based on the latest multi-view 3D reconstruction technology, it operates with arbitrary images and is efficient in both controlled and uncontrolled conditions. Photos can be taken from any position, providing that the object to be reconstructed is visible on at least two photos.

How it works. Generally the final goal of photograph processing with PhotoScan is to build a textured 3D model. The procedure of photograph processing and 3D model construction comprises four main stages.

At the first stage PhotoScan searches for common points on photographs and matches them, as well as finding the position of the camera for each picture and refining camera calibration parameters. The sparse point cloud represents the results of photo alignment and will not be directly used in the further 3D model construction procedure (except for the sparse point cloud based reconstruction method).

For instance, the sparse point cloud model can be used in a 3D editor as a reference. On the contrary, the set of camera positions is required for further 3D model reconstruction by PhotoScan. Based on the estimated camera positions and the pictures themselves, a dense point cloud is built by PhotoScan. The dense point cloud may be edited and classified prior to export or proceeding to 3D mesh model generation. The third stage is building mesh.

PhotoScan reconstructs a 3D polygonal mesh representing the object surface based on the dense or sparse point cloud, according to the user's choice. Generally there are two algorithmic methods available in PhotoScan that can be applied to 3D mesh generation: Height Field for planar type surfaces, Arbitrary for any kind of object.

The mesh having been built, it may be necessary to edit it. Some corrections, such as mesh decimation, removal of detached components, closing of holes in the mesh, smoothing, etc., may be required. After geometry (i.e. the mesh) is reconstructed, it can be textured. Several texturing modes are available in PhotoScan; they are described in the corresponding section of this manual, as well as orthomosaic and DEM generation procedures.

About the manual. Basically, the sequence of actions described above covers most of the data processing needs. All these operations are carried out automatically according to the parameters set by the user.

Instructions on how to get through these operations and descriptions of the parameters controlling each step are given in the corresponding sections of Chapter 3, General workflow. In some capturing scenarios masking of certain regions of the photos may be required to exclude them from the calculations; see Overview in Chapter 6, Editing. Camera calibration issues are discussed in Chapter 4, Referencing, which also describes functionality to optimize camera alignment results and provides guidance on model referencing.

Area, volume and profile measurement procedures are tackled in Chapter 5, Measurements, which also includes information on vegetation indices calculations. While Chapter 7, Automation describes opportunities to save upon manual intervention to the processing workflow, Chapter 8, Network processing presents guidelines on how to organize distributed processing of the imagery data on several nodes. PhotoScan allows exporting obtained results and saving intermediate data in the form of project files at any stage of the process.

If you are not familiar with the concept of projects, a brief description is given at the end of Chapter 3, General workflow. In the manual you can also find instructions on the PhotoScan installation procedure and basic rules for taking "good" photographs, i.e. photographs that provide the data most useful for 3D reconstruction.

Explore Magazines. Editors‘ Picks All magazines. Explore Podcasts All agksoft. Difficulty Beginner Intermediate Advanced. Explore Documents. Agisoft Photoscan User Manual. Document Information click to expand document information Description: Agisoft photoscan user manual. Did you find this document useful?

Is this content inappropriate? Report this Document. Description: Agisoft photoscan user manual. Flag for inappropriate content. Download now. Jump to Page. Search inside document. Grit: The Power of Passion and Perseverance. Cost of Capital. Yes Please. Cheezy Muff Spaghetti. Working Capital Mgt. Twelve Tribes New Pamphlet. Financial Statement Analysis. Promotion Mix of Retail Sector. Principles: Life and Work. Fear: Trump in the White House. Http://replace.me/12543.txt Annual Report Breast Cancer.

Agisoft photoscan user manual professional edition version 1.2 free World Is Flat 3. Introduction and History of Company. The Outsider: A Novel.

Becoming and Effective Leader. The Handmaid’s Tale. Samsung Case Review. The Alice Network: A Novel. Life of Pi. The Professionap of Being a Wallflower.

Manhattan Beach: A Novel. Bat Sample Questionsfgdfg. Little Women. Mobile Broadband A4 Dabur India Report. A Tree Grows in Brooklyn. Mail Survey. Sing, Unburied, Sing: A Novel. Everything Is Illuminated. The Constant Gardener: A Novel. Alert when Licenses Consumed reaches 1,2,80,1, Percent- use this one most current – Copy. Ec Ocn Qp. Printer Comparison 22 Sep Amiga Shopper Magazine Issue 0 April Chapter 7 – Scheduling. Introduction Читать. Glam Agisoft photoscan user manual professional edition version 1.2 free Advertising Agreement.

Working With Mapinfo Professional 11 0 English. Total Supra DML. Automatic transfer function synthesis from a Bode plot. Vereion Code of the Extraordinary Mind. Flexem HMI sistemas de suplencia.

