2D Digital Image Correlation Applications
Created by Elisha Byrne, Last modified by Micah Simonsen on 19 June 2017 11:14 AM
Vic-2D Applications
In order for a setup to be a good 2D application (with just one camera), the specimen must meet the following conditions:
▪ The specimen must be flat
▪ The specimen must be parallel to the camera sensor
▪ No part of the specimen can have any out-of-plane motion towards or away from the camera
Since 2D DIC works with only one camera, the software must assume that the specimen in the image moves only in-plane and is completely flat and parallel to the sensor within that image. If the test does not fall within these guidelines, erroneous strains will be produced. For example, in a tensile test, if the specimen necks and a region moves away from the camera, there will be a compression bias. Any motion away from the sensor will be reported as a compressive strain, and any motion towards the sensor will be reported as a tensile strain.
Tips for 2D applications
Using a longer focal length lens will minimize bias due to out-of-plane motion. The false strain produced by out-of-plane motion is equal to the amount of out-of-plane motion divided by the standoff distance between the lens and the specimen.
2D does not involve a calibration in the 3D sense (only a simple scale calibration), so low-distortion lenses are preferred. Since 2D is not calibrated the same way that 3D is, viewing through windows and other media is more problematic in 2D. However, there is an inverse mapping method in 2D to remove these distortions. This method requires translation stages and a high-quality, flat speckle pattern that is larger than the field of view. Procedurally, it is not as simple as standard 2D applications, and sometimes it is not even logistically possible.
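The rule of thumb above can be sketched as a short calculation; the function and variable names here are illustrative and are not part of Vic-2D:

```python
# Sketch: estimate the false strain a single-camera (2D DIC) setup would report
# due to out-of-plane motion, per the rule of thumb in the text:
# false strain = out-of-plane motion / standoff distance.

def false_strain(out_of_plane_mm, standoff_mm):
    """Apparent (false) strain caused by out-of-plane motion in 2D DIC."""
    return out_of_plane_mm / standoff_mm

# 0.1 mm of motion at a 500 mm working distance appears as ~200 microstrain
# of bias; doubling the standoff halves the bias, which is why longer focal
# length lenses (used at a longer working distance) reduce this error.
bias = false_strain(0.1, 500.0)
print(f"{bias * 1e6:.0f} microstrain")
```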
SEM Drift Correction
Posted by Micah Simonsen, Last modified by Micah Simonsen on 13 October 2016 01:09 PM
The linked PDF details the procedure for correcting SEM drift and distortion in Vic-2D. The ZIP file contains a demo set of distortion correction data together with an image list.
Attachments
SEM Drift Correction (instructions)
Sample Files
Subset, Step Size and Strain Filter Selection
Posted by Elisha Byrne, Last modified by Micah Simonsen
Subset, Step Size and Strain Filter Selection
Summary and Overview
Here we will discuss subset size, step size and strain filter selection. In short, you want your subset to be larger than your speckle sizes and for most applications your step size to be roughly 1/4 of the subset size. As a rule of thumb, if you use a small step size then you'll want to use a larger strain filter and if you use a large step size then you'll want to use a smaller strain filter.
This article will detail the search/tracking function and why the subsets are speckle size/quality dependent, and will also discuss the subset size's effect on holes within a plot, noise, and edge data. This article will also discuss the implications of a high suggested subset size, and will then address how the step size and filter size affect strain calculations and virtual strain gauge sizes. Finally, we will discuss how subset size, step size and filter size affect the actual run time of the software.
Tracking Function
These subsets allow us to track points on the speckle pattern. Digital image correlation requires that the specimen is properly and densely speckled. This provides us with markers/fingerprints to search for and track. We need speckles that are at least 5 pixels in size with at least a 5 pixel spacing in order to resolve the speckles in the images. It's also important that these speckles are consistent in size and spacing.
However, we don't track the actual speckles; the way our software works is that we assign a mesh of "subsets" or windows across the image. We need a unique speckle pattern within each subset in order to find a unique point to track for each subset. So the subset size is user-defined and depends on the speckle size. For example, if you have a small, dense pattern of 5 pixel speckles, you can use a small subset (the smallest our software can track is 9x9 pixels). However, it's hard to get a pattern small enough and dense enough for a subset of 9, so we allow the user to look at the pattern and adjust the subset size accordingly. For current software versions, the default subset size is 29 and the default step size is 7 (in previous versions the default subset size ranged from 21-29). This means that we are tracking a 29x29 pixel area for every 7 pixels.
The visual grid that is displayed in Vic-3D is a nice display of what the selected subset size looks like, and you can use that visual tool to compare to your speckle sizes. However, that tool can be misleading because the mesh of data is actually much denser than the grid you see. We overlap the subsets and track for every step size. In the default case, we are obtaining data points every 7 pixels. The overlapping subsets won't be independent of each other, which is why we don't default to a step size of 1 (that would significantly increase processing time while typically providing little to no gain). To get independent and non-repetitive data, we typically choose a step size about 1/4 of the size of the subset.
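The overlap between subset size and step size can be made concrete with a rough point count. This sketch assumes one data point per step with the subset center required to fit inside the AOI; Vic-3D's exact count at the edges may differ:

```python
# Sketch: approximate number of correlation points for a rectangular AOI,
# given that one point is reported per step and subsets overlap.

def grid_points(aoi_width, aoi_height, subset, step):
    """Approximate correlation point count (one per step along each axis)."""
    nx = (aoi_width - subset) // step + 1   # points across the AOI width
    ny = (aoi_height - subset) // step + 1  # points across the AOI height
    return nx * ny

# Default-style settings: a 29x29 subset tracked every 7 pixels over a
# 1000x500 pixel AOI yields far more points than the on-screen grid suggests.
print(grid_points(1000, 500, 29, 7))  # 9452 points
```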
Why Does My Contour Plot Contain Holes?
Holes in your data can be attributed to several things. If your reference image has a lot of holes throughout the contour plot, it's likely that you need a bigger subset size. Data will be dropped if there is no speckle information within the subset. You need black AND white information within each subset. So if you have a subset size of 29 and some speckles that are larger than 29 pixels, each 29x29 pixel area that is all black will be dropped from the data and you'll get a hole there in your contour plot. Similarly, if your pattern isn't dense enough, those areas of white (the areas in between the speckles) that are larger than 29 pixels will be dropped as well. As a side note, this description assumes a black-on-white speckle pattern, but you may have a white-on-black speckle pattern too. Other reasons for holes can be areas of glare/reflection (if there's glare, each camera will see the light reflected off that point differently and won't be able to make the match; if it does match, you will see a spike in the data), blur, poor contrast, a poor speckle pattern (too sparse or inconsistent speckle size), or de-focus. If you see a hole start to occur in the deformed images, it's likely a crack (either of the pattern or the sample itself), but it can also be de-focus (often due to moving out of the depth of field), glare, or shrapnel. Adjusting your thresholds in the Run menu can help you bring data back in. However, remember that if data is dropped, it is most likely for a reason. If the data is dropped due to issues such as cracks, glare or shrapnel, it's best to leave that data out of the analysis. It is always better to have an absence of data than to include erroneous data points that can contribute to artificial displacements/strains.
Subset size and sigma
The larger a subset is, the more information it will contain. Therefore, the larger the subset, the more unique each subset is. The more unique the subsets are from subset to subset, the better our tracking confidence will be.
How Speckle Pattern Quality Affects the Tracking Function
We want each subset throughout the speckle pattern to have nice, unique information within it. For this reason, a pattern with uniformly sized speckles, 50% coverage, and high contrast will result in the most trackable features and the lowest noise levels in our data. We want bright whites and dark blacks. Grey areas are hard to track. Areas with big "blobs" mixed with grey mists of small speckles are particularly hard to track. For more information on speckle pattern quality and how it affects noise, please refer to our Minimizing Bias and Noise presentation in the Downloads section of our Support site (http://www.correlatedsolutions.com/supportcontent/dic-noise-bias.pdf).
Why Don't I See Edge Data?
We have one data point for every subset. We report the data at the center of the subset. For this reason, the closest we can report data to the edge is one half of the subset size. If you draw the area of interest right up to the edge, the software tracks all of the data in the drawn area of interest, but the edge data will be reported at the center of the edge subset. To get the contour plot closer to the edge you can use a smaller subset, which means you'll need a finer speckle pattern. With an ideal speckle pattern of 5 pixel speckles that are densely spaced 5 pixels apart, you can likely use a subset size of 9 pixels. This means that in the ideal case, you can get your contour plot within 4-5 pixels of the edge. Also, physically zooming in on the edge will enable you to use a finer speckle pattern, and that 9x9 subset will also be physically smaller, so the center of that subset will be physically closer to the edge. Again, if you zoom in on the sample, you'll need to adjust your speckle size so that you'll be able to track a small 9x9 subset. On a side note, since subsets and filters report data at the center of the subset/filter, we need that center point. This is the reason we must always use odd numbers for subset sizes and filter sizes; even numbers do not provide the center point that we need in order to report the values.
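The half-subset edge offset above is a one-line calculation; the function name is illustrative:

```python
# Sketch: the closest a reported data point can sit to the AOI edge is half a
# subset, since values are reported at subset centers.

def closest_data_to_edge(subset_size):
    """Distance in pixels from the edge to the nearest reported data point."""
    return subset_size // 2

print(closest_data_to_edge(29))  # 14 pixels for the default subset
print(closest_data_to_edge(9))   # 4 pixels for the smallest trackable subset
```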
Why is My Suggested Subset So High?
In the Vic-3D AOI (area of interest) tools, you may click the ? for a suggested subset size. If your suggested subset is much larger than what your pattern seems to dictate, that might be an indication that some other aspects of the image and/or experimental setup could be improved. The suggested subset function is based on an estimated sigma (one standard deviation confidence interval) for the subset tracking function. If the subsets are hard to track, this drives up the sigma and thus the suggested subset size. In this case, it's likely an issue of poor contrast, defocus, the diffraction limit (the aperture is closed too far; hard to avoid in high magnification situations), or poor speckle pattern quality (meaning the speckles are not a consistent size or are not dense enough).
Step Size and Strain Calculations
When selecting the filter size for strain, keep in mind that it is in terms of data points, which are separated by the step size. So if your filter size is 15 and your step is 5, the total smoothing area is 15*5 = 75 pixels. This is your virtual strain gauge size. If you reduce the step to 1 and use a 15 filter size, you will only be smoothing over 15*1 = 15 pixels, so the strain will be noisier. As a rule of thumb, if you use a small step size then you'll want to use a larger strain filter, and if you use a large step size then you'll want to use a smaller strain filter. Also note that this strain filter is center-weighted, so the edge values are weighted at 10% of the center values.
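The virtual strain gauge arithmetic above can be wrapped in a tiny helper (an illustrative sketch, not a Vic-3D API):

```python
# Sketch: virtual strain gauge (VSG) size in pixels, per the rule in the text.
# The strain filter is counted in data points, and data points are spaced by
# the step size, so the smoothed region spans filter * step pixels.

def vsg_pixels(strain_filter, step):
    """Virtual strain gauge size in pixels."""
    return strain_filter * step

print(vsg_pixels(15, 5))  # 75 pixel smoothing area
print(vsg_pixels(15, 1))  # 15 pixels -> noisier strain for the same filter
```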
One more thing to consider when selecting your step size is your specimen geometry. For most applications, geometries are flat enough that you do not need to consider this, but for instances of complicated geometries, we need to consider how many points we sample along a curved surface. For each data point, the strain is calculated using 3 neighboring data points (similar to FEA models). The spacing of the 3 data points is determined by the step size. The step size must always be tangent along the curved surface in order to ensure that we are calculating the surface strain along the surface and not cutting through it, which would produce erroneous strains. For geometries with sharp radii, we'll need to use a small step size. For more on the strain calculation, please refer to Strain Calculations in Vic-3D in the Downloads section of the Support site (http://www.correlatedsolutions.com/supportcontent/strain.pdf).
How Do Subset Size, Step Size and Strain Filter Size Affect Run Time?
The subset size determines how large each data point is that you are tracking. A larger subset will take longer to track than a smaller subset. The step size determines how many data points you are tracking. A smaller step size (which means more data points) will take longer to track than a larger step size. It will also take longer to filter over more data points. So a larger strain filter will take longer to process than a smaller strain filter.
Strain Filter Selection
Posted by Elisha Byrne on 22 August 2017 05:50 PM
Strain Filter Selection
Strain filters in Vic-2D and Vic-3D are user-defined so that users can select how localized or how averaged the strains will be presented in the data. The strain filter helps determine the virtual strain gauge size for all of the individual points on the contour plot. Small strain filters provide better resolution and more localized data; however, large strain filters increase accuracy because they include more data, which results in less uncertainty. This document provides the strain calculation background, explains the effects of different strain filter sizes, and provides strain filter selection advice for different situations.
Attachments
Strain Tensors and Criteria in Vic
Posted by Micah Simonsen, Last modified by Micah Simonsen
Strain Tensors
When comparing strain values obtained from DIC to analytical results or to strain results obtained from other measurement methods, it's important to select the strain tensor which matches your expected values. At low strains, many of these tensors will give very similar results, but at larger strains they can diverge and selecting the wrong tensor can give unexpected results.
The tensor can be selected either at run-time - in the Postprocessing tab of the Run dialog - or in the strain calculation dialog. Some of the more commonly used tensors are:
▪ Lagrange: this is the default tensor. The Lagrangian finite strain tensor, also known as the Green-Lagrangian strain tensor, is a finite strain measure which includes higher order displacement terms; it defines gradients in terms of the original configuration. This measure is commonly used for materials undergoing large strains such as elastomers. Please note that at large strains, the Lagrangian strain can become much larger than the extension or engineering strain due to the higher order term. The Lagrangian strain formulations are as follows:

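The equation images from the original article are not reproduced here. For reference, the standard textbook Green-Lagrange components in two dimensions (supplied here as standard formulas, not copied from the source) are:

```latex
E_{xx} = \frac{\partial u}{\partial x} + \frac{1}{2}\left[\left(\frac{\partial u}{\partial x}\right)^{2} + \left(\frac{\partial v}{\partial x}\right)^{2}\right]

E_{yy} = \frac{\partial v}{\partial y} + \frac{1}{2}\left[\left(\frac{\partial u}{\partial y}\right)^{2} + \left(\frac{\partial v}{\partial y}\right)^{2}\right]

E_{xy} = \frac{1}{2}\left[\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} + \frac{\partial u}{\partial x}\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\frac{\partial v}{\partial y}\right]
```

The quadratic terms are the "higher order displacement terms" mentioned above; dropping them recovers the small-strain (infinitesimal) tensor.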
▪ Engineering: The engineering strain, also known as the Cauchy strain, is the ratio of total deformation to initial length. This strain measure is frequently used for small strains in structural mechanics. To keep the result insensitive to arbitrary rigid-body rotations, the engineering strain is not computed directly from displacement derivatives; instead, it is calculated from the Lagrangian strain tensor, and it should provide a measure that can be compared to data from strain gauges/clip extensometers/etc.

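The corresponding equation image is also missing. One common conversion, given here as an assumption consistent with the description above, recovers the engineering strain from the principal Lagrangian strains via the stretch:

```latex
\varepsilon_{\mathrm{eng},i} = \sqrt{1 + 2E_{i}} - 1, \qquad i = 1, 2
```

For small strains the square root linearizes and the two measures coincide.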
▪ Hencky (Logarithmic): The Hencky strain, also known as true, natural, or logarithmic strain, is an incremental strain measure.
▪ Euler-Almansi: This is a finite strain tensor which is referenced to the deformed configuration.
All of these tensors will be given in terms of exx (the strain along the X axis), eyy (strain in the Y axis), exy (the shear strain tensor - note that this is equal to half the engineering shear strain), as well as e1 (major strain), e2 (minor strain), and gamma (the major strain angle - the angle, in radians, between the +x axis and the major strain axis).
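A quick numerical illustration of how these measures diverge at large strain (closed-form 1-D relations for a uniaxial stretch; a toy calculation, not Vic-3D output):

```python
# Sketch: compare the strain measures for a uniaxial stretch lambda = L/L0.
import math

lam = 1.5  # 50% extension

lagrange    = (lam**2 - 1) / 2   # Green-Lagrange (higher order term included)
engineering = lam - 1            # Cauchy/engineering strain
hencky      = math.log(lam)      # true/natural/logarithmic strain
almansi     = (1 - lam**-2) / 2  # Euler-Almansi (deformed configuration)

print(lagrange, engineering, round(hencky, 4), round(almansi, 4))
# 0.625 0.5 0.4055 0.2778 -- far apart at 50% extension,
# nearly identical below ~1% strain.
```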
Shear strain sign convention
To visualize what shear strain sign corresponds to what angle change, we can start from the (engineering) shear definition gamma = du/dy + dv/dx. Assume we have an initially square material element and deform it such that the top and bottom edges move horizontally, i.e., dv/dx = dv/dy = 0. This turns the square into a parallelogram. In a right-hand coordinate system, where x points right, y points up, and z points towards us, the shear strain is positive if the top edge moves to the right relative to the bottom edge, and negative if it moves to the left.
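The parallelogram picture above can be sketched numerically (a toy calculation with illustrative names, not Vic output):

```python
# Sketch: sign of engineering shear gamma = du/dy + dv/dx for a unit-height
# square whose top edge slides horizontally while the bottom edge stays fixed.

def gamma(top_edge_shift, height=1.0):
    du_dy = top_edge_shift / height  # horizontal motion grows linearly with y
    dv_dx = 0.0                      # no vertical motion in this deformation
    return du_dy + dv_dx

print(gamma(+0.1))  # top edge moves right -> positive shear
print(gamma(-0.1))  # top edge moves left  -> negative shear
```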
Von Mises strain calculation
Because Vic-3D only calculates surface strains, the built-in Von Mises calculation uses a principal plane strain formulation; other formulations would require assumptions/inferences about the through-thickness behavior of the material. The equation is as follows:

Tresca calculation
The Tresca calculation also uses a plane strain constraint.
▪ Where ε1 and ε2 have opposite signs, the result is (ε1-ε2)/2.
▪ Where they have the same sign, we use εmax/2 where εmax is the higher magnitude strain component.
Additional formulations can be easily composed using the function editor.
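The two-branch rule above translates directly into a short function, much like one you might compose in the function editor (the Python function name is illustrative, not a Vic-3D API):

```python
# Sketch of the Tresca rule described above (plane-strain constraint assumed).

def tresca(e1, e2):
    """Tresca value from principal strains e1, e2, per the two-case rule."""
    if e1 * e2 < 0:                      # opposite signs
        return (e1 - e2) / 2
    return max(e1, e2, key=abs) / 2      # same sign: larger-magnitude component

print(tresca(0.010, -0.005))  # opposite signs -> (e1 - e2)/2 = 0.0075
print(tresca(0.010,  0.004))  # same sign      -> e_max/2     = 0.005
```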
Strain axes
For many setups, the X axis or Y axis will naturally align with the axial or transverse strain axis, but if alignment is critical, you should either use major strains or specify a coordinate system (for instance, force the Y axis to the longitudinal axis of your specimen). Note that applying a coordinate transform will not transform the associated strains - you will be warned of this when applying transforms; re-calculate the strains to get values in your new coordinate system.
For some additional information about deformation descriptors and tensors, see the respective Wikipedia articles:
File Naming in Vic-3D
Posted by Elisha Byrne, Last modified by Micah Simonsen
Vic-Snap automatically names the images with the correct naming conventions for Vic-3D. So other than remembering not to use an underscore in the filename, no action is necessary for naming files correctly when using Vic-Snap.
When using acquisition software other than Vic-Snap, however, the user must follow the naming convention needed for Vic-3D. File names and file paths should not contain any underscores except for the _0 or _1 camera designation (which, again, is automatically generated by Vic-Snap). The _0 and _1 indicate to Vic-3D which camera the image corresponds to, so an underscore elsewhere in the file name or file path is incompatible with Vic-3D.
The file name for camera 0 and camera 1 must be identical except for the _0 or _1 indicator.
Vic-Snap automatically names both the images with the same user-defined prefix, assigns the proper sequential numbering system, and designates each image with the _0 and _1 suffix. Only the prefix must be entered (so in the example below, only "test-image" was entered as the file name and Vic-Snap assigned the rest of the name).
For example:
test-image-000_0.tif is the image for camera 0 and
test-image-000_1.tif is the image for camera 1
The next image pair will be test-image-001_0.tif and test-image-001_1.tif.
It is preferred that the camera indicator (_0 and _1) is at the end of the file name. However, some software packages (Photron's PFV software, for example) don't allow for that. So if the camera designation is in the middle of the filename, it should be surrounded by underscores (_0_ and _1_). For example:
filename_0_00001.tif
filename_1_00001.tif
When files are loaded into Vic-2D and Vic-3D they will be sorted in alphanumerical order. This means that if you have a lot of images, you should make sure that you have enough digits in the file numbering so that the images will be ordered correctly. For example, if there are only three file number digits, then the images will be numbered 000-999 and then jump to 1000 and so on. The problem here is that image 1000 will not be after 999, rather it'll be between 100 and 101. In that case you should check to see that the assigned file number digits parameter is at least 4, so the first image will be 0000. This can be set in File>Advanced Options>System Settings>File Number Digits.
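The ordering pitfall can be demonstrated directly (the filenames here are illustrative):

```python
# Sketch: why too few file-number digits breaks alphanumeric ordering, and how
# zero-padding fixes it.

names = [f"test-image-{i}.tif" for i in (100, 101, 999, 1000)]
print(sorted(names))
# 'test-image-1000.tif' lands between 100 and 101 instead of after 999.

# With at least 4 file number digits, alphanumeric order matches numeric order:
padded = [f"test-image-{i:04d}.tif" for i in (100, 101, 999, 1000)]
print(sorted(padded))  # 0100, 0101, 0999, 1000 -- correct order
```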
Troubleshooting
Blank Images: When importing images into Vic-3D, if the images come in blank or all white, it's likely a naming issue. Also, if you see that images for both camera 0 and camera 1 are listed separately in the image tab, it is most likely a naming issue. In these cases, check that there are no underscores in the file name or even in the file path/folder name, except for the _0 and _1 at the end, which must be present.
Missing Images: If it looks like not all of your images were imported into the software, check that you had enough digits for all of the images (so if you have over 9999 images, your first image should be 00000 and not 0000). If you have an incorrect number of digits for the number of files, it's likely that all of the images loaded into the software correctly and are just hard to find because they are ordered incorrectly. Also, with a large number of files it is best to load the images by group rather than selecting all of them: go to File>Speckle Images By Group and select the speckle image group prefix. This will import all images with that prefix.
Projection Error: Explanation and Causes
Posted by Micah Simonsen, Last modified by Micah Simonsen on 13 October 2016 01:05 PM
Introduction
When running a correlation in Vic-3D, one of the values given for each image is the projection error. This article will explain how the projection error is calculated and what can cause a high projection error. Based on the calibration, we can take a given point in the Camera 1 image and predict a line along which it must lie in the Camera 2 image. This constraint is called the epipolar constraint*, and the line is called the epipolar line. If we find the point away from this line, the distance away is called the projection error; this distance is reported in terms of pixels.
Expected Values
With a good test setup, the projection error should be low - approximately on the same order as the calibration score, or typically in the 0.02-0.05 range. Slightly higher errors are generally not an indication of a problem, although errors of approximately 0.1 or above may indicate issues.
Troubleshooting
If the error is significant, it means that we either made an incorrect match, or that we made a good match but our epipolar line is not where we expected it. Here are some possible causes:
For cases where the calibration has been bumped or disturbed, it may be possible to correct the calibration without recalibrating. This will only work if the camera orientations - but not the focus or aperture - have been disturbed. Otherwise, it may be necessary to fully recalibrate. Note that it is just as valid to calibrate after a test as before, as long as the cameras do not experience any motion between the test and the calibration, regardless of order.
*For more information on epipolar geometry, reference the Wikipedia article.
Troubleshooting Calibration Problems
Posted by Elisha Byrne, Last modified by Micah Simonsen
Calibration in Vic-3D is well automated but certain situations can lead to problems with either high calibration scores, or a failure to calibrate at all. This article lists some common causes for each situation.
What is the calibration score?
Calibration in Vic-3D is a process where the software builds up a model that includes the cameras' intrinsic calibration (focal length, distortion, perspective), the extrinsic relationship between the cameras (distance and angles), and the geometry of the calibration grid. Once the model is established, we can use it to project theoretical locations for all of the grid points in each image. The calibration score is the average distance between the theoretical point location and the actual position where the point was found in the camera image.
High calibration scores
If the calibration score is high, it means that either our theoretical model is not an accurate model of our camera system, or, more likely, that the grid points were extracted incorrectly or noisily. Note that even just one very bad image can cause all the scores to be high, since it can bias the calibration result; so in some cases, even if all the image scores are slightly high, removing one very bad image will drop all the rest of the scores.
Some common causes of incorrect grid point extraction:
Glare/highlighting on grid: with some lighting setups, the black ink of the grid dots can appear slightly reflective, and white or light grey areas can appear within the black dots. This will bias the dot center and can increase error. You can compensate by either moving the light or simply avoiding angles that cause this reflection. Also, note that for high-speed setups where very bright direct lighting is required, you can always calibrate at a different frame rate - in many cases, you can perform your calibration under room lighting only at low frame rates (i.e., 60 fps), while performing your actual test with the supplemental lighting at your test frame rate. As long as the aperture is not changed, the calibration will hold.
Objects in front of grid points: if you are calibrating behind a frame or structure, it may be that some grid points are partially blocked; these points will be extracted incorrectly. It's also easy to accidentally hold the grid so that your hand blocks part of a point, giving the same result. Reposition the grid, or simply remove these images from consideration (right-click, and select "Remove row").
Uneven backlighting: for the small, glass grids, it's important that the transparent dots be very evenly backlit. Use a very diffuse and even light source, and try to adjust it so all the dots look even.
If all the points are extracted well but the score is still high, the algorithm might not be modelling the system correctly. Some possible causes:
Higher order distortions: the default distortion order for calibration is 1, but some short lenses (12mm, 8mm) can have 2nd and 3rd order distortions. You can try raising the distortion order spin box to 2 or 3 and see if the score improves.
To obtain an accurate estimate of 2nd and 3rd order distortion, it will be necessary to take quite a few images - approximately 30 or more. You should also be sure to use a grid that fills the field of view - using a much smaller grid will make accurate distortion modelling impossible.
Check to see that the distortion coefficients (kappa 1, kappa 2, kappa 3) approximately match between the two cameras - if they do not match, they are either poorly estimated, or not present. Running with a poor higher order distortion estimate can be worse than not using higher order distortions at all as it can cause false strains away from the image center.
The required distortion order for a given lens won't change from test to test, so once you establish the required distortion order for that lens, you should use the same setting in future tests.
Other (non-radial) distortions: if your test involves other distortion sources such as a glass furnace window, a liquid medium, or non-standard optics (such as stereomicroscopes), the standard calibration may not work. In this case, you may have to either change the physical setup or apply parametric distortion correction - contact support directly for more information on this.
Calibration grid flexing: the calibration grid does not have to be precisely flat or even, but it does need to be rigid. If you have printed your own grid, be sure it is fixed to something very rigid; also, the largest (50mm and 70mm) provided aluminum grids can flex if they are torqued while being held. Be sure to support these grids in a way that doesn't apply excessive twisting force.
Not enough data to converge well: in some setups where the grid does not or cannot tilt out-of-plane very much, the algorithm may converge to poor values. You can try taking images with more tilt, or if this is impossible, you can use the "High magnification" option to force the Center values to the sensor center; this may allow the calibration to converge.
Failure to calibrate
If no points are extracted at all, there are a few potential causes.
Incorrect grid parameters: if the Offset or Length values are entered incorrectly, the grid geometry will not be recognized. Check the Help for an explanation of each parameter; or, if you have a clear, head-on shot of the grid, you can click the "New" or "+" button, and Vic-3D will guess the parameters - only spacing must be entered, in this case.
Transparent grid held backwards: for the glass grids, if the grid is held with the 'back' side to the camera, the geometry will appear inverted and points may be extracted incorrectly or not at all. Consult the glass grid document for details on identifying the 'front' side.
Grid points too small: for grids that are very small in the field of view, there may be too few pixels to accurately represent an ellipse. In these cases, use a larger grid, or use a custom target with larger dots, if needed. Please note that even for reduced-resolution high speed tests, you may (and should) calibrate at the full resolution of the camera, for best accuracy. Use the "Adjust for cropping" menu option to correct for the resolution change.
In some cases, points will be extracted, but the calibration will not converge at all and will return an error such as "Linear calibration failed". There are a few likely causes for this.
Points extracted incorrectly: it may be that one or more of the images has points that are extracted in the wrong location or order; for example, a solid dot is identified as hollow, or a grid point is placed on a background feature. You can check your images to see if this is the case; when the number of points is displayed, any image which displays a very low number of points might be the culprit. Right-click on the image, and select "Remove row", then recalibrate.
Some points covered up: the new calibration grids are designed with little runoff to allow the best possible calibration; because of this, it's easy to hold the grid so that some points are blocked by your hand. Be sure to hold the grid from behind, by the edges, to avoid this.
Not enough data to calibrate: if all of your grid images are taken in very similar positions, or all grid images are taken in the same plane, the algorithm may not have enough data to model the system. In this case, it's best to retake the calibration images, adding some more tilt and out-of-plane motion to the grid.
In some cases, you may be very limited in grid position by a low depth of field - this can occur in high magnification setups, or where you must run with very large apertures, as in some high speed tests. In these cases, Vic-3D can have trouble estimating values for the pinhole centers - the values labeled "Center (X)" and "Center (Y)". To remedy this, you can click the "High magnification" checkbox and try recalibrating; this forces the center values to the geometric center of the sensor and can often allow a calibration to proceed.
High Magnification Calibration
Posted by Micah Simonsen, Last modified by Micah Simonsen
Calibrating for a high magnification setup can present a few challenges. This article will discuss techniques for getting the best calibration result at small fields of view.
What is 'high magnification'?
A high magnification test is one where the lens magnification is roughly in the range of 1-4x; for example, with a standard 2/3" sensor, this would be fields of view between about 8mm and about 30mm.
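The relationship between magnification, sensor size, and field of view is simple to check by hand. A minimal sketch, assuming a nominal 2/3" sensor width of 8.8 mm:

```python
# Relation between optical magnification, sensor size, and field of view:
#   magnification = sensor_width / fov_width
# The 8.8 mm value below is the nominal width of a 2/3" sensor
# (an assumption; check your camera's datasheet for the actual size).

SENSOR_WIDTH_MM = 8.8

def fov_at_magnification(magnification, sensor_width_mm=SENSOR_WIDTH_MM):
    """Horizontal field of view (mm) for a given optical magnification."""
    return sensor_width_mm / magnification

print(fov_at_magnification(1.0))  # 8.8 mm - near the ~8 mm lower end above
```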
Challenges & techniques
At small fields of view, the depth of field can become very limited. It may be difficult or impossible to achieve enough depth to allow for good tilting of the calibration grid. This will result in calibrations which have very poorly estimated, unrealistic values for "Center (X)" and "Center (Y)". In extreme cases you may see a "Sync Error" warning caused by these poor estimates.
Use small apertures (high F-numbers). This will maximize depth of field; note that, at very high F-numbers, the resolution of a lens will suffer. Apertures above F/8 or F/11 may result in blurry-appearing images. It may be necessary to balance the need for depth of field with the increased measurement noise due to this blur.
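The trade-off can be estimated with a common close-focus depth-of-field rule of thumb, DOF ≈ 2·N·c·(m+1)/m², where N is the f-number, c the circle of confusion, and m the magnification. This is a textbook approximation, not a Vic-3D formula, and the circle-of-confusion value below is an illustrative assumption:

```python
# Rough close-focus depth-of-field approximation (a common textbook
# rule of thumb, not a Vic-3D formula):
#   DOF ~ 2 * N * c * (m + 1) / m**2
# N = f-number, c = circle of confusion, m = magnification.

def depth_of_field_mm(f_number, magnification, coc_mm=0.005):
    """Approximate total depth of field in mm.

    coc_mm is an assumed circle of confusion (~one pixel on a small
    sensor); treat the result as an order-of-magnitude estimate only.
    """
    return 2 * f_number * coc_mm * (magnification + 1) / magnification**2

# Stopping down from F/5.6 to F/11 roughly doubles depth of field:
print(depth_of_field_mm(11, 2.0) / depth_of_field_mm(5.6, 2.0))
```

Note that the m² in the denominator is why depth of field collapses so quickly at high magnification.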
If you cannot get enough tilt to give a good center estimate, you can check the "High mag" option in the calibration dialog. This forces the center values to be at the geometrical center of the sensor. This is not optimal, but may allow the calibration to converge when it otherwise wouldn't, and is better than having a very poor value.
Additionally, it may be difficult to select a calibration target which fits well in a very small field of view.
For fields down to approximately 10-15mm, a printed paper grid can work well, provided the paper is coated smooth, and a very high quality laser printer is used. Using standard copy paper will result in very visible paper texture at these magnifications; likewise, using a lower end printer will result in poorly formed target circles. This will result in images which fail to extract, or high calibration scores.
Correlated Solutions provides glass calibration grids which have very accurate targets for fields of view down to 5mm. These grids must be backlit with a diffuse, even white field. Be sure that the transparent dots are evenly shaded, and that the correct side of the grid faces the camera (see grid documentation for details.) If the dots show shadows or uneven lighting, you may see consistently slightly high calibration scores in the 0.2-0.5 range.
Calibrating for Reduced Resolution
Posted by Micah Simonsen, Last modified by Micah Simonsen on 28 February 2018 09:41 AM
Overview
Many high-speed cameras allow speed increases by reducing (cropping) image resolution. However, calibration can be difficult or impossible at the reduced resolution; in most cases calibrating at the full sensor resolution is easier and will also give a more accurate result.
Problems with calibrating at reduced resolution
A typical high speed camera may have a resolution of 1024 x 1024; at this resolution, a standard 14 x 10 calibration grid, chosen to fill the field of view, will calibrate well - all coding and target dots should be recognized.
As the resolution decreases much below 1024 x 1024, the smallest dots on the grid - the two coding dots - will no longer be rendered as recognizable ellipses, instead looking more like this (greatly zoomed in):

In this case, you can still manually select the correct grid and proceed to calibrate. However, once the resolution starts to decrease more, towards 512 x 512*, the small circles concentric with the three orientation dots will also become poorly resolved.

In this case, calibration will be impossible. To avoid this, you can calibrate at full resolution, and then perform a simple adjustment to correct for cropping.
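The scaling above can be sketched with a simple proportionality: if the grid is always chosen to fill the (shrinking) field of view, the number of pixels across a dot falls linearly with resolution. The 8-pixel full-resolution dot diameter below is an illustrative assumption, not a measured value:

```python
# If the calibration grid scales with the field of view, the pixel
# diameter of a given dot scales linearly with image resolution.
# The 8 px full-resolution diameter is an illustrative assumption.

def dot_diameter_px(resolution_px, full_resolution_px=1024, full_dot_px=8.0):
    """Approximate pixel diameter of a dot at a reduced resolution."""
    return full_dot_px * resolution_px / full_resolution_px

print(dot_diameter_px(1024))  # 8.0 px - a well-resolved ellipse
print(dot_diameter_px(512))   # 4.0 px - too few pixels to fit an ellipse
```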
Even if calibration is marginally possible at reduced resolution, you can often get a better result with the full field, because data from the corners of the sensor gives better estimates of parameters like distortion. Because of this, it is always recommended to calibrate at full resolution, especially for critical tests.
Note: For cameras with a limited maximum resolution (IR cameras, ultra high speed cameras), special grids with sparser but larger dots - e.g., an 8 x 6 grid with very large dots - can also be used. These can be generated with the target generator, or you can contact Correlated Solutions to inquire about purchasing a finished grid.
Calibrating at full resolution
For full resolution calibration, set your camera for its max resolution. Speed and exposure time can be set as necessary - these will not change the calibration parameters - but aperture must not be adjusted. Black reference the cameras, if necessary; choose a grid which fills the full field of view of the sensor, and take a good calibration set.
You can then return to the reduced resolution and set up for your test - do not move the cameras or change the aperture, but lighting, FPS, and exposure time adjustment are all allowable.
Software procedure and theory
Two of the parameters we calibrate for are Center (X) and Center (Y). These are the coordinates of the pinhole center of the sensor; they tend to be roughly in the geometric center of the sensor, but never exactly, because of real-world manufacturing variation. This variation does not harm accuracy but must be calibrated for.
Vic-3D represents this as a pixel coordinate referenced to the top left of the sensor.

When we reduce the resolution - for our example, to 512 x 384 - the camera crops the image to the center of the sensor. A Center value near (512, 512) no longer corresponds to the center of this reduced image - it must be offset into the cropped coordinate system.

To calculate this offset automatically, add both the calibration images as well as at least one speckle image (at reduced resolution) to the project in Vic-3D. Calibrate as usual, and then click File... Adjust for cropping.

Assuming the image was cropped to the center, the correct values will be filled in. Click Ok and the Center (X) and Center (Y) values will be offset as necessary. You should do this once and only once - if you click through again, the values will be offset again. Check the Calibration tab in your project - the Center (X) and Center (Y) values should be roughly in the center of your reduced resolution image.
If the image was not cropped to the optical center, you must manually enter the necessary offset values.
This correction does not affect accuracy in any way - the digital nature of sensors means that the offset is an exact, knowable integer value.
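The arithmetic behind this adjustment can be sketched as follows, assuming the reduced image is cropped symmetrically about the sensor center (the function names and example center values are illustrative, not Vic-3D internals):

```python
# Center-crop offset arithmetic - a sketch of the idea behind
# "Adjust for cropping", not Vic-3D's actual implementation.

def crop_offset(full_res, reduced_res):
    """Integer pixel offset of a centered crop's top-left corner."""
    full_w, full_h = full_res
    red_w, red_h = reduced_res
    return ((full_w - red_w) // 2, (full_h - red_h) // 2)

def adjust_center(center_xy, full_res, reduced_res):
    """Shift a full-resolution Center (X)/(Y) into cropped coordinates.

    As in the software, apply this once and only once.
    """
    off_x, off_y = crop_offset(full_res, reduced_res)
    cx, cy = center_xy
    return (cx - off_x, cy - off_y)

# Example from the article: 1024 x 1024 sensor cropped to 512 x 384,
# with a hypothetical calibrated center near (but not at) mid-sensor:
print(adjust_center((516.3, 509.8), (1024, 1024), (512, 384)))
```

Because the crop offsets are exact integers, this shift introduces no additional error, which is why the correction does not affect accuracy.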
Note: If you fail to correct for cropping, or the values are incorrect, you will most likely see a very high Projection Error in your analysis. In this case, check through the steps above and try again.
Save the project at this point, and run as usual.
*All numerical values in this application note are examples and will not apply to every case.