To be used efficiently, Amira, Avizo, Avizo Trueput, Amira-Avizo2D and PerGeos Software need specific hardware components or other software resources to be present on a computer. These prerequisites are known as (computer) system requirements and serve as a guideline rather than an absolute rule.
As newer versions of our software may bring advances which potentially require higher processing power and resources, system requirements tend to increase over time and we recommend that you regularly check the information provided below.
Amira-Avizo Software supports the following operating systems:
Note: From release 2022.2, CentOS 7 is discontinued and Ubuntu 20.04 becomes the officially supported Linux platform. There will be no new product development or updates tested on CentOS 7 after the 2022.1 version. You can still use the CentOS 7 versions of our Software Products and we will continue to provide bug fixes for 12 months after the 2022.1 release date.
Limitations
Some of the Editions and Extensions or functionalities are limited to some platforms:
Extension/Edition/Feature | Limitation |
---|---|
Avizo XReadIGES Extension, Avizo XReadSTEP Extension | Microsoft Windows only |
Avizo XMetrology Extension | Microsoft Windows only |
Olympus and TXM file formats | Microsoft Windows only |
Deep Learning Prediction, DL Training - Segmentation 2D, DL Training - Segmentation 3D, DL Training - StarDist 2D, DL Training - StarDist 3D, DL Training - Noise to Void 2D and the AI Assisted Segmentation Tool | Microsoft Windows only. An NVIDIA GPU supporting at least CUDA Compute Capability 3.5 is also required for 2D, and 5.2 for 3D. Drivers should be up to date. Compatible GPUs are listed at https://developer.nvidia.com/cuda-gpus. The compatible cuda_runtime version is found in the List of Python Packages. A CPU that supports the AVX2 extensions is also required. See the GPU table below for drivers tested for the current release. Due to potential library conflicts between the Deep Learning modules and the Calculus MATLAB module, it is not possible to instantiate these modules at the same time. |
Other deep learning related modules: Compute Ambient Occlusion Anisotropic Diffusion Non-Local Means Filter / GPU Adaptive Manifold | No longer compatible with Windows platforms older than Windows 10, because of the CUDA toolkit version used by Amira-Avizo Software. |
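Since the Deep Learning modules require a CPU with AVX2 support, a quick way to verify this on a Linux workstation is to inspect the kernel's CPU flags (a sketch; on Windows, the CPU vendor's specifications or a system-information tool can be consulted instead):

```shell
# Check whether the CPU advertises the AVX2 instruction set (Linux)
if grep -q -m1 avx2 /proc/cpuinfo; then
  echo "AVX2 supported"
else
  echo "AVX2 not supported"
fi
```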
Prioritizing Hardware
The four most important components that need to be considered are the graphics card (GPU), the CPU, the RAM and the hard drive. These general criteria need to be considered:
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
In this document, the term Amira-Avizo Software refers to all Amira-Avizo editions and all Amira-Avizo extensions.
Graphics Cards
The single most important determinant of Amira-Avizo Software performance for visualization is the graphics card.
Amira-Avizo Software should run on any graphics system (this includes GPU and its driver) that provides a complete implementation of OpenGL 4.2 or higher. However, graphics card and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require having graphics memory large enough to hold the actual data.
High-end graphics cards have 32 GB of memory or more. Optimal performance for volumetric visualization at full resolution requires that the data fit in graphics memory (some volume rendering modules are able to work around this limitation).
Amira-Avizo Software will not benefit from multiple graphics cards for the purpose of visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation, and while the computation can run on the single CUDA-enabled graphics card, this computation can also run on a second CUDA-enabled graphics card installed on the system. A multiple graphics card configuration can be useful to drive many screens or in immersive environments.
Modules leveraging CUDA features contain compiled kernels in binary form only for certain target GPU "compute capabilities". Newer GPUs, whose compute capability major version is greater than those for which the binaries were compiled, require a just-in-time (JIT) compilation step that can take up to tens of minutes. Its result is stored in a file-system cache whose size is controlled by the CUDA_CACHE_MAXSIZE environment variable. Please be aware that GPU driver updates can reset this CUDA cache.
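For instance, the JIT cache can be enlarged before launching the application so that recompiled kernels survive between sessions (a sketch; the 4 GiB value is an arbitrary example, not an official recommendation):

```shell
# CUDA JIT cache size in bytes (here 4 GiB); set before starting the application
export CUDA_CACHE_MAXSIZE=4294967296
echo "CUDA_CACHE_MAXSIZE=${CUDA_CACHE_MAXSIZE}"
```

Launch the application from the same shell so that it inherits the variable.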
When comparing graphics cards, there are many different criteria and performance numbers to consider. Some are more important than others, and some are more important for certain kinds of rendering. Thus, it's important to consider your specific visualization requirements. Integrated graphics cards are not recommended for graphics-intensive applications such as Amira-Avizo Software, except for basic visualization.
Wikipedia articles on NVIDIA GeForce/Quadro cards will detail specific performance metrics:
Feature | Impact on performance |
---|---|
Memory size | Very important for volume visualization (both volume rendering and slices) and geometry rendering (with a very large number of triangles). More memory maximizes image quality and performance because volume data is stored in the GPU's texture memory for rendering. |
Memory interface / bandwidth | Important for volume rendering as large amounts of texture data need to be moved from the system to the GPU during rendering. The PCI Express 3 buses are the fastest interfaces available today. |
Number of cores (stream processors) | Important for volume rendering as every additional high-quality rendering feature enabled requires additional code to be executed on the GPU during rendering. |
Triangles per second | Important for geometry rendering (surfaces, meshes). |
Texels per second / Fill rate | Very important for volume visualization (especially for volume rendering), because a large number of textures will be rendered and pixels will be "filled" multiple times to blend the final image. |
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release.
Due to vendor support policies, we are not able to commit to providing a fix for bugs caused by the driver on standard graphics cards.
Our products are tested on the following configurations:
Platform | GPU | Driver number |
---|---|---|
Windows 10 | NVIDIA Tesla T4 | 551.61 |
Windows 11 | NVIDIA RTX A4500 | 551.61 |
Ubuntu 20.04 | NVIDIA T1000 | 550 |
Note: If your system has multiple display adapters, you should ensure that you are starting Amira-Avizo Software using a compatible one. Amira-Avizo Software uses the first display adapter by default. If this device is not compatible, please manually select a compatible device. If manually selecting a device is not possible, please deactivate other display adapters.
System Memory
System memory is the second most important factor for users who need to process large data.
You may need much more memory than the actual size of the data you want to load. Some processing may require several times the memory required by the original data set. If you want to load, for instance, a 4 GB data set in memory and apply a non-local means filter to the original data and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results of your processing. Commonly you need 2 or 3 times the memory footprint of the data being processed for basic operations. For more complex workflows you may need up to 6 or 8 times that amount, so 32 GB may be required for a 4 GB dataset.
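As a back-of-the-envelope check of the multipliers above (the 3x and 8x factors are the rough rules of thumb quoted in the text, not exact figures):

```shell
DATASET_GB=4
echo "basic operations:  ~$((DATASET_GB * 3)) GB"   # 2-3x the data footprint
echo "complex workflows: ~$((DATASET_GB * 8)) GB"   # 6-8x the data footprint
```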
Also be aware that the size of the data on disk may be much smaller than the memory needed to load it, as the file format may have compressed the data (for instance, loading a stack of JPEG files).
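For example, an uncompressed 8-bit image stack of 1000 slices at 2048x2048 pixels needs about 4 GB of memory once loaded, however small its compressed files are on disk (illustrative arithmetic, not tied to any particular format):

```shell
W=2048; H=2048; SLICES=1000; BYTES_PER_VOXEL=1   # 8-bit grayscale volume
echo "$((W * H * SLICES * BYTES_PER_VOXEL / 1024 / 1024)) MiB uncompressed"
```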
Data that exceed your system's physical memory are handled using Large Data Access (LDA) or Smart Multichannel Series (SMS) technologies (SMS requires the XPlore5D Extension). These are excellent ways to stretch performance, but they are not a direct substitute for having more physical memory. The best performance and optimal resolution are achieved by using our large data technologies in combination with a large amount of system memory.
Amira-Avizo 3D Pro provides another loading option that supports 2D and 3D image processing from disk to disk, without requiring loading the entire data set into memory; modules then operate per data slab. This enables processing and quantification of large image data even with limited hardware memory. Since processing each slab requires loading data and saving results from/to the hard drive, it dramatically increases processing time. Thus, processing data fully loaded in memory is always preferred for best performance.
Hard Drives
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD) (e.g., 7200rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; your actual experience is likely to be closer to 40 MB/second. When you want to read a 1 GB file from the disk, you likely have to wait 25 seconds. For a 10 GB file, the wait is 250 seconds, over 4 minutes. Large data technologies will greatly reduce wait time for data visualization, but disk access will still be a limiting factor when you want to read data files at full resolution for data processing. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
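The wait times quoted above follow directly from the sustained transfer rate; for example, at the realistic 40 MB/second figure:

```shell
RATE_MB_S=40                                       # realistic sustained HDD throughput
echo "1 GB file:  $((1 * 1000 / RATE_MB_S)) s"     # ~25 seconds
echo "10 GB file: $((10 * 1000 / RATE_MB_S)) s"    # ~250 seconds, over 4 minutes
```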
For best performance, the recommended solution is to configure multiple hard drives (3 or more HDD or SSD) in RAID5 mode; note that RAID configurations may require substantially more system administration. For performance only, RAID 0 could be used, but be warned of risk of data loss upon hard-drive failure. If you want performance and data redundancy then RAID 5 is recommended.
Reading data across the network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and number/size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. Large data technologies may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
CPU
While Amira-Avizo Software mostly relies on GPU performance for visualization, many modules are computationally intensive and their performance will be strongly affected by CPU performance.
More and more Amira-Avizo modules are multi-threaded and thus can take advantage of multiple CPUs or multiple CPU cores available on your system. This is the case for most of the quantification modules provided with Amira-Avizo Software, a number of modules of the Avizo XLabSuite Extension, and various other computation modules.
A fast CPU clock, the number of cores, and the memory cache are the three most important factors affecting performance. While most multi-threaded modules will scale up nicely according to the number of cores, the scaling bottleneck may come from memory access. From experience, up to 8 cores show almost linear scalability, while more than 8 cores do not show much gain in performance. A larger memory cache improves performance.
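Since multi-threaded modules scale with core count (with diminishing returns beyond about 8 cores), it can be useful to check how many logical processors a workstation exposes; on Linux, for example:

```shell
# Number of logical processors visible to the operating system (Linux)
nproc
```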
Optimizing for Specific Tasks
To optimize the software for specific tasks, the following should be considered:
Task | Optimization |
---|---|
Visualizing large data (LDA or SMS) | |
Basic volume rendering | GPU fill rate (texels per second) |
Advanced volume rendering (Volume Rendering module) | |
Large geometry rendering such as large surfaces from Isosurface or Generate Surface, large point clusters, large numerical simulation meshes | GPU clock frequency, number of triangles per second |
Image processing and quantification (Amira-Avizo 3D Pro Software) | |
Anisotropic Diffusion, Non-Local Means Filter (high-performance smoothing and noise reduction image filters), Avizo XLabSuite Extension (absolute permeability computation) | GPU speed, number of GPU cores (stream processors), CUDA-compatible (NVIDIA) |
Other compute modules, display module data extraction | |
GPU computing using custom module programmed using Avizo XPand C++ API | |
Environment Variables
QT_PLUGIN_PATH must not be exported as a system-wide environment variable because it can interfere with this application.
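A quick way to check for (and clear) a stray system-wide setting in the current shell before launching (a sketch):

```shell
# Report and clear any inherited QT_PLUGIN_PATH so it cannot interfere
if [ -n "${QT_PLUGIN_PATH:-}" ]; then
  echo "QT_PLUGIN_PATH was set to: ${QT_PLUGIN_PATH}"
  unset QT_PLUGIN_PATH
fi
echo "QT_PLUGIN_PATH is ${QT_PLUGIN_PATH:-unset}"
```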
Firewall
Internet access is necessary to activate the software. Your firewall may prevent the connection to the license server.
Linux
Amira-Avizo Software is only available for Intel64/AMD64 systems.
The official Linux distribution is Ubuntu 20.04 (64-bit PC desktop image). Nevertheless, some other 64-bit Linux distributions may be compatible if the required version of system libraries can be found, but technical support for those platforms will be limited.
Note that on some Linux systems you may need to disable the X server's Composite extension by adding the following section to your X configuration file (e.g., /etc/X11/xorg.conf):
Section "Extensions"
Option "Composite" "disable"
EndSection
Windows
In Windows, the maximum path length (MAX_PATH) is 260 characters. You may hit this limitation if you are installing the software under an initial "long-length" folder path, which will result in a failed or incomplete installation. One way to overcome this issue is to activate Windows support for extended-length paths, as explained at https://docs.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=cmd.
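On Windows 10 version 1607 and later, extended-length paths can be enabled through the LongPathsEnabled registry value described in the linked Microsoft article; for example, from an elevated command prompt (a sketch; verify against the Microsoft documentation before applying registry changes):

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f
```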
XPand C++ API
To create custom extensions using the C++ API on Windows, you will need Microsoft Visual Studio® or an equivalent IDE with a Microsoft VS2019 toolchain (MSVC v142 - VS 2019 C++ x64/x86). It is important to install Visual Studio prior to running Amira-Avizo Software in debug mode.
To create custom extensions with the C++ API on Linux, you will need gcc 9.x on Ubuntu 20.04 (64-bit PC desktop image). Use the following command to determine the version of the GNU compiler:
gcc --version
The specific compiler version to use depends on the version of the Amira-Avizo Software application on which you want to run the extension. To obtain the required compiler version, launch your target version of Amira-Avizo Software and type app uname in the Tcl console.
MATLAB
The currently supported version of MATLAB on all platforms is 2020a. To use the Calculus MATLAB module, which establishes a connection to MATLAB (MathWorks, Inc.), follow these installation instructions:
Windows
If you did not register during installation, enter the following command on the Windows command line: matlab /regserver.
In addition, add MATLAB_INSTALLATION_PATH/bin and MATLAB_INSTALLATION_PATH/bin/win64 to your PATH environment variable so that the software can find the MATLAB libraries.
Linux
You must have the C shell csh installed at /bin/csh. If it is not present, you can install it using apt-get install csh.
The LD_LIBRARY_PATH environment variable should be set to MATLAB_INSTALLATION_PATH/bin/glnxa64 on Linux 64-bit.
The PATH environment variable should also be set to include MATLAB_INSTALLATION_PATH/bin.
If you still have trouble starting Calculus MATLAB after setting the environment variables, it might be because the GNU Standard C++ Library (libstdc++) installed on your platform is older than the one required by MATLAB. You can check MATLAB's embedded libstdc++ version in MATLAB_INSTALLATION_PATH/sys/os/glnxa64 on Linux 64-bit.
If needed, add this path to LD_LIBRARY_PATH.
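The Linux steps above can be put together in one shell snippet (a sketch; the MATLAB path below is a placeholder example, so substitute your actual installation directory):

```shell
# Placeholder install location - adjust to your system
MATLAB_INSTALLATION_PATH=/usr/local/MATLAB/R2020a
# Make MATLAB's runtime libraries and executables visible to the software
export LD_LIBRARY_PATH="${MATLAB_INSTALLATION_PATH}/bin/glnxa64:${LD_LIBRARY_PATH:-}"
export PATH="${MATLAB_INSTALLATION_PATH}/bin:${PATH}"
echo "LD_LIBRARY_PATH starts with: ${LD_LIBRARY_PATH%%:*}"
```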
Dell Backup and Recovery Application
We have detected some incompatibility issues with former versions (<1.9) of the Dell Backup and Recovery Application, which can make the software crash when opening files with the file dialog. Please update your Dell Backup and Recovery Application to 1.9.2.8 or higher if you encounter this issue.
Remote display
Amira-Avizo Software is not tested in remote sessions; remote display is not supported.
For technical or license issues, please contact VDS Customer Support using the online portal at: https://www.thermofisher.com/software-em-3d-vis/customersupport
Thermo Scientific™ Amira-Avizo 3D v2024.1, Copyright© 2024 by Thermo Fisher Scientific.
Thermo Scientific Avizo Trueput Software runs on:
Prioritizing hardware for Avizo Trueput Software
Introduction
This document provides recommendations about choosing a suitable workstation to run Avizo Trueput Software.
The four most important components to consider are the graphics card (GPU), the CPU, the RAM, and the hard drive.
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
Graphics Cards
Avizo Trueput Software should run on any graphics system (this includes the GPU and its driver) that provides a complete implementation of OpenGL 4.2 or higher. However, graphics board and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require graphics memory large enough to hold the actual data.
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release. Due to vendor support policies, we are not able to commit to providing a fix for bugs caused by the driver on standard graphics boards. We recommend using a professional graphics board to benefit from the professional support offered by the vendors (driver bugs fixes, etc.).
List of tested GPUs:
Vendor | Model |
---|---|
NVIDIA | Quadro P400, Quadro RTX 4000, Quadro T1200 Laptop GPU |
Intel | UHD 630, UHD 770 |
System Memory
System memory is the second most important determinant for processing large data with Avizo Trueput Software.
You may need much more memory than the actual size of the data you want to load within Avizo Trueput Software. Some processing may require several times the memory required by the original data set. If you want to load, for instance, a 4 GB data set in memory and apply a non-local means filter to the original data and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results of your processing. It is common to need two to three times the memory footprint of the data being processed for basic operations. For more complex workflows, you may need up to six or eight times the memory (e.g. 32 GB may be required for a 4 GB dataset).
Similarly, the size of the data on disk may be much smaller than the memory needed to load the data as the file format may have compressed the data (for instance, loading a stack of JPEG files).
Hard Drives
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD) (e.g., 7200rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; your actual experience is likely to be closer to 40 MB/second. To read a 1 GB file from the disk, you will likely have to wait 25 seconds. For a 10 GB file, the wait is 250 seconds, over four minutes. Large data technologies will greatly reduce the wait time for data visualization, but disk access will still be a limiting factor when you want to read data files at full resolution for data processing. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
Reading data across a network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and the number and size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. Large data technologies may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
CPU
While Avizo Trueput Software mostly relies on GPU performance for visualization, many modules are computationally intensive and their performance will be strongly affected by CPU performance.
More and more modules inside Avizo Trueput Software are multi-threaded and thus can take advantage of multiple CPUs or multiple CPU cores available on your system. This is the case for most of the quantification modules provided with Avizo Trueput Software and various computation modules.
CPU clock speed, number of cores, and memory cache are the three most important factors affecting the performance of Avizo Trueput Software. While most multi-threaded modules will scale up nicely according to the number of cores, the scaling bottleneck may come from memory access. For example, up to eight cores show almost linear scalability while more than eight cores do not show much gain in performance. A larger memory cache will improve performance.
Each sub-application (Analyzer, Trainer, and Labeler) of Amira-Avizo2D Software has its own system requirements.
The Analyzer and Trainer applications require more processing power than Labeler. For these applications, refer to the respective system requirements documentation below.
The following recommendations are intended to help you choose a suitable workstation to run the application.
Analyzer runs on Microsoft™ Windows 10 (64-bit). Other than the operating system requirement, the most important components to consider are the graphics card (GPU), the CPU, the RAM, and the hard drive.
The performance of image processing algorithms depends heavily on the performance of the CPU, the GPU, or both. The GPU performance is important for CUDA®-optimized algorithms. Loading or saving large amounts of data depends on the hard drive performance. The amount of system RAM is the main limitation on the size of the data that can be loaded and processed.
Analyzer is a 2D application; therefore, it does not require a high-end graphics card for visualization. Any graphics system (GPU+driver) that provides a complete implementation of OpenGL 2.1 or higher is sufficient. However, some algorithms are optimized with a CUDA implementation. The amount of GPU memory required depends mainly on the use of CUDA-optimized algorithms. The minimum recommendation is 1 GB of GPU memory if your sole use of Analyzer is for visualization. For CUDA usage, we highly recommend either 16 or 32 GB of GPU memory. When choosing the graphics card for your workstation, consider whether you require CUDA support. The CUDA technology is available only on NVIDIA graphics cards.
Analyzer does not benefit from multiple graphics boards for visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation, and while the computation can run on the single CUDA-enabled graphics card, this computation can also run on a second CUDA-enabled graphics card installed on the system.
If you need to process a large amount of data, system memory is an important consideration. At a minimum, you need at least the size of your complete tile set. In practice, you are likely to need much more memory than the actual size of the data being loaded. Some processing can require several times the memory required by the original data set. For example, if you load a 4 GB data set in memory, apply a non-local means filter to it and then compute a distance map, you might need as much as 16 to 20 GB of additional memory for the intermediate results.
Workflow processing occurs separately for each tile of the tile set; therefore, when computation is performed, only a single tile is loaded in memory at a time. For a basic workflow, you need, in addition to the size of the input data set, 2 or 3 times the memory footprint of a single tile in the tile set. For a complex workflow, you need up to 6 or 8 times the size of a tile.
Also keep in mind that certain file formats might compress the data so that the disk size of the data is significantly smaller than the memory required to load it.
When working with large files, reading data from the disk can slow productivity. A standard hard disk drive (for example, a 7200 rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; the actual performance is likely to be closer to 40 MB/second. Therefore, reading a 1 GB file from disk typically takes 25 seconds. For a 10 GB file, the wait is over 4 minutes. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
For best performance, the recommended solution is to configure multiple hard drives (3 or more HDD or SSD) in RAID 5 mode; however, be aware that RAID configurations might require substantially more system administration. For performance only, you could use RAID 0, but at the risk of data loss upon a hard-drive failure. If you want both performance and data redundancy, then RAID 5 is recommended.
A fast CPU clock, the number of cores, and the memory cache are the most important factors affecting performance. While most multi-threaded modules scale up nicely according to the number of cores, a scaling bottleneck might come from memory access. From experience, up to 8 cores show almost linear scalability while more than 8 cores do not show much gain in performance. A larger memory cache improves performance.
Internet access is necessary to activate the product; however, your firewall might prevent the connection to the license server. For more information, refer to activation documentation. Also be aware that reading data across the network (a file server, for example) is normally much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and the number and size of other requests to the file server, so in practice you are unlikely to achieve the theoretical bandwidth.
Trainer runs on Microsoft Windows 10 (64-bit).
The following recommendations are intended to help you choose a suitable workstation to run the Trainer application.
Other than the operating system requirement, the most important components to consider for the Trainer application are the graphics card (GPU), and the hard drive. RAM and CPU are used mostly for pre-processing tasks, so the following can be considered sufficient:
Trainer requires an NVIDIA graphics board that supports CUDA Compute Capability 3.5 or higher. Compatible GPUs can be found here: https://developer.nvidia.com/cuda-gpus.
The minimum amount of dedicated GPU memory is 4 GB. However, deep learning is a compute-intensive task, and performance is directly related to the GPU memory and speed. Therefore, a high-end GPU is recommended, and a recent graphics driver must be installed.
Also note that Trainer does not take advantage of multi-GPU configurations.
It is recommended to store data on a local fast hard drive (SSD preferred) for quicker data access.
Labeler runs on Microsoft Windows 10 (64-bit).
The following recommendations are intended to help you choose a suitable workstation to run the Labeler application.
Labeler can run with any graphics card that supports OpenGL 2.1 or higher. A high-end graphics card is not required.
It is recommended that you store data on a local fast hard drive (SSD preferred) for quicker data access.
PerGeos Software runs on:
Note: Starting with release 2022.2, CentOS 7 support is discontinued and Ubuntu 20.04 becomes the officially supported Linux platform. There will be no new product development or updates tested on CentOS 7 after the 2022.1 version. You can still use the CentOS 7 versions of our Software Products and we will continue to provide bug fixes for 12 months after the 2022.1 release date.
Some of the extensions or functionalities are limited to some platforms:
This document is intended to give recommendations about choosing a suitable workstation to run PerGeos.
The four most important components that need to be considered are the graphics card (GPU), the CPU, the RAM and the hard drive.
The performance of direct volume rendering of large volumetric data or large triangulated surface visualization extracted from the data depends heavily on the GPU capability. The performance of image processing algorithms depends heavily on the performance of the CPU. The ability to quickly load or save large data depends heavily on the hard drive performance. And, of course, the amount of available memory in the system will be the main limitation on the size of the data that can be loaded and processed.
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
The single most important determinant of PerGeos performance for visualization is the graphics card.
PerGeos should run on any graphics system (this includes GPU and its driver) that provides a complete implementation of OpenGL 4.2 or higher. However, graphics board and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require having graphics memory large enough to hold the actual data.
High-end graphics cards have 16 to 32 GB of memory. Optimal performance for volumetric visualization at full resolution requires that the data fit in graphics memory (some volume rendering modules of PerGeos are able to work around this limitation).
PerGeos will not benefit from multiple graphics boards for the purpose of visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation, and while the computation can run on the single CUDA-enabled graphics board, this computation can also run on a second CUDA-enabled graphics card installed on the system. A multiple graphics board configuration can be useful to drive many screens or in immersive environments.
Modules leveraging CUDA features contain compiled kernels in binary form only for certain target GPU "compute capabilities". Newer GPUs, whose compute capability major version is greater than those for which the binaries were compiled, require a just-in-time (JIT) compilation step that can take up to tens of minutes. Its result is stored in a file-system cache whose size is controlled by the CUDA_CACHE_MAXSIZE environment variable. Please be aware that GPU driver updates can reset this CUDA cache.
When comparing graphics boards, there are many different criteria and performance numbers to consider. Some are more important than others, and some are more important for certain kinds of rendering. Thus, it's important to consider your specific visualization requirements. Integrated graphics boards are not recommended for graphics-intensive applications such as PerGeos, except for basic visualization.
Wikipedia articles on NVIDIA GeForce/Quadro cards will detail specific performance metrics:
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release.
Due to vendor support policies, we are not able to commit to providing a fix for bugs caused by the driver on standard graphics boards.
Our products are tested on the following configurations:
Platform | GPU | Driver number |
---|---|---|
Windows 10 | NVIDIA Tesla M60 | 512.78 |
Windows 11 | NVIDIA RTX A4500 | 528.02 |
Ubuntu 20.04 | NVIDIA T1000 | 525.105.17 |
System memory is the second most important determinant for PerGeos users who need to process large data.
You may need much more memory than the actual size of the data you want to load within PerGeos. Some processing may require several times the memory required by the original data set. If, for instance, you load a 4 GB data set in memory, apply a non-local means filter to the original data, and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results of your processing. Commonly, you will need 2 or 3 times the memory footprint of the data being processed for basic operations. For more complex workflows you may need up to 6 or 8 times that amount of memory, so 32 GB may be required for a 4 GB dataset.
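As a quick sizing aid, the multipliers above can be applied directly to your dataset size; this is an illustrative rule of thumb taken from the guidance in this section, not a guarantee:

```shell
# Rough RAM sizing from the rule of thumb above:
# basic operations need ~2-3x the dataset size, complex workflows up to ~8x.
DATASET_GB=4
BASIC_GB=$((DATASET_GB * 3))
COMPLEX_GB=$((DATASET_GB * 8))
echo "basic: ${BASIC_GB} GB, complex: ${COMPLEX_GB} GB"   # basic: 12 GB, complex: 32 GB
```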
Also note that the size of the data on disk may be much smaller than the memory needed to load it, because the file format may have compressed the data (for instance, a stack of JPEG files).
PerGeos's Large Data Access (LDA) technology will enable you to work with data sizes exceeding your system's physical memory. LDA is an excellent way to stretch the performance, but it is not a direct substitute for having more physical memory. The best performance and optimal resolution will be achieved by using PerGeos's LDA technology in combination with a large amount of system memory. LDA provides a very convenient way to quickly load and browse your whole dataset. Note that LDA data will not work with most compute modules, which require the full resolution data to be loaded in memory.
PerGeos provides another loading option to support 2D and 3D image processing from disk to disk ("read as external disk data"), without loading the entire data set into memory; modules then operate per data slab. This enables processing and quantification of large image data even with limited hardware memory. Since processing each slab requires loading data from and saving results to the hard drive, it dramatically increases processing time. Processing data fully loaded in memory is therefore always preferred for best performance.
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD) (e.g., 7200rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; your actual experience is likely to be closer to 40 MB/second. When you want to read a 1 GB file from the disk, you will likely have to wait 25 seconds. For a 10 GB file, the wait is 250 seconds, over 4 minutes. LDA technology will greatly reduce wait time for data visualization, but disk access will still be a limiting factor when you want to read data files at full resolution for data processing. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
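The wait times quoted above follow directly from file size divided by sustained throughput; a quick sanity check, assuming 1 GB ≈ 1000 MB:

```shell
# Estimated read time = file size (MB) / sustained rate (MB/s).
RATE_MB_S=40
echo "1 GB:  $((1000 / RATE_MB_S)) s"     # 25 s
echo "10 GB: $((10000 / RATE_MB_S)) s"    # 250 s
```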
For best performance, the recommended solution is to configure multiple hard drives (3 or more HDD or SSD) in RAID5 mode; note that RAID configurations may require substantially more system administration. For performance only, RAID 0 could be used, but be warned of risk of data loss upon hard-drive failure. If you want performance and data redundancy then RAID 5 is recommended.
Reading data across the network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and number/size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. LDA technology may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
While PerGeos mostly relies on GPU performance for visualization, many modules are computationally intensive and their performance will be strongly affected by CPU performance.
More and more modules inside PerGeos are multi-threaded and thus can take advantage of multiple CPUs or multiple CPU cores available on your system. This is the case for most of the quantification modules provided with PerGeos, a number of modules of the Petrophysics Extension and also various computation modules.
Fast CPU clock, number of cores, and memory cache are the three most important factors affecting PerGeos performance. While most multi-threaded modules will scale up nicely according to the number of cores, the scaling bottleneck may come from memory access. From experience, up to 8 cores show almost linear scalability while more than 8 cores do not show much gain in performance. A larger memory cache improves performance.
Here is a summary of hardware characteristics to consider for optimizing particular tasks.
Visualizing large data (LDA):
Basic volume rendering:
Advanced volume rendering (Volume Rendering module):
Large geometry rendering such as large surfaces from Isosurface or Generate Surface, large point clusters, large numerical simulation meshes:
Image processing and quantification:
Anisotropic Diffusion, Non-Local Means Filter (high-performance smoothing and noise reduction image filters):
Other compute modules, display module data extraction:
GPU computing using custom module programmed using PerGeos XPand C++ API and GPU API:
QT_PLUGIN_PATH must not be exported as a system-wide environment variable because it can interfere with this application.
PerGeos documentation is rendered by a sandboxed embedded browser (WebEngine). If PerGeos is run locally from a remote executable file, you must manually disable the sandbox to access the documentation: set the environment variable QTWEBENGINE_DISABLE_SANDBOX to 1. The same setting may be needed on some Linux systems if the anonymous namespaces feature is disabled and you do not want to, or cannot, activate it. More details can be found here.
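On Linux, the sandbox can be disabled for a single session before launching the application; the launcher name below is an assumption — substitute your actual PerGeos start script:

```shell
# Disable the WebEngine sandbox for this session only,
# then start the application (launcher path is illustrative).
export QTWEBENGINE_DISABLE_SANDBOX=1
# ./PerGeos    # uncomment and adjust to your installation
echo "$QTWEBENGINE_DISABLE_SANDBOX"
```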
Internet access is necessary to activate PerGeos. Your firewall may prevent the connection to the license server.
PerGeos is only available for Intel64/AMD64 systems.
The official Linux distribution for PerGeos is Ubuntu 20.04 (64-bit PC desktop image). Nevertheless, PerGeos is likely to work on some other 64-bit Linux distributions if the required version of system libraries can be found, but technical support for those platforms will be limited.
Notes:
On Windows, the maximum path length (MAX_PATH) is 260 characters. You may hit this limitation if you install PerGeos under a folder path that is already long, which will result in a failed or incomplete installation. One way to overcome this issue is to activate Windows support for extended-length paths, as explained here.
To create custom extensions for PerGeos with the C++ API available in PerGeos on Windows, you will need Microsoft Visual Studio® or an equivalent IDE with a Microsoft VS2019 toolchain (MSVC v142 - VS 2019 C++ x64/x86). Visual Studio must be installed before running PerGeos in debug mode.
To create custom extensions for PerGeos with the C++ API available in PerGeos on Linux, you will need gcc 9.x on Ubuntu 20.04 (64-bit PC desktop image). Use the following command to determine the version of the GNU compiler:
gcc --version
The currently supported version of MATLAB on all platforms is 2020a. To use the Calculus MATLAB module, which establishes a connection to MATLAB (MathWorks, Inc.), follow these installation instructions:
Windows
If you did not register during installation, enter the following command on the Windows command line: matlab /regserver.
In addition, add MATLAB_INSTALLATION_PATH/bin and MATLAB_INSTALLATION_PATH/bin/win64 in your PATH environment variable to allow PerGeos to find MATLAB libraries.
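A sketch of this setup for the current Command Prompt session only; the installation path is an assumed example — substitute your own, and use the System Properties dialog for a permanent change:

```shell
:: Assumed MATLAB location; adjust to your installation.
set "MATLAB_INSTALLATION_PATH=C:\Program Files\MATLAB\R2020a"
set "PATH=%MATLAB_INSTALLATION_PATH%\bin;%MATLAB_INSTALLATION_PATH%\bin\win64;%PATH%"
```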
Linux
You must have the C shell csh installed at /bin/csh. If it is not present, you can install it using apt-get install csh.
The LD_LIBRARY_PATH environment variable should be set to MATLAB_INSTALLATION_PATH/bin/glnxa64 on Linux 64-bit.
The PATH environment variable should also be set to include MATLAB_INSTALLATION_PATH/bin.
If you still have trouble starting Calculus MATLAB after setting the environment variable, it might be because the GNU Standard C++ Library (libstdc++) installed on your platform is older than the one required by MATLAB. You can check MATLAB's embedded libstdc++ version in MATLAB_INSTALLATION_PATH/sys/os/glnxa64 on Linux 64-bit.
If needed, add this path to LD_LIBRARY_PATH.
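The Linux environment setup above can be sketched as follows; the installation path is an assumed example — adjust it to your system:

```shell
# Assumed MATLAB location; adjust to your installation.
MATLAB_INSTALLATION_PATH=/usr/local/MATLAB/R2020a
export LD_LIBRARY_PATH="$MATLAB_INSTALLATION_PATH/bin/glnxa64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export PATH="$MATLAB_INSTALLATION_PATH/bin:$PATH"
echo "$LD_LIBRARY_PATH"
```

Add these lines to your shell profile (e.g. ~/.bashrc) to make the setting permanent.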
We have detected incompatibility issues with earlier versions (<1.9) of the Dell Backup and Recovery application, which can make PerGeos crash when opening files with the file dialog. Please update Dell Backup and Recovery to version 1.9.2.8 or higher if you encounter this issue.
PerGeos is not tested in remote sessions; remote display is not supported.
Services
Shorten your learning curve and maximize your investment with this introductory training specifically designed for new users of Amira, Avizo and PerGeos Software.
The course consists of a lecture with hands-on sessions. The training material highlights the basic features and functionalities of Amira, Avizo and PerGeos Software.
Maximize your investment and reduce your time-to-results with this advanced training specifically designed for existing users of Amira, Avizo and PerGeos Software.
The course consists of a lecture with hands-on sessions. The training material highlights advanced features and functionalities of Amira, Avizo and PerGeos Software.
With over 25 years of experience in 3D and image processing and hundreds of custom projects delivered to organizations small and large, Thermo Fisher Scientific can provide you with a solution tailored to fit your specific needs.
We can customize and expand our software solutions at various levels.
Amira runs on:
Note: Starting with the release 2022.2, CentOS 7 is discontinued and Ubuntu 20.04 becomes the officially supported Linux platform. There will be no new product development nor update tested on CentOS 7 after the 2022.1 version. You can still use the CentOS 7 versions of our Software Products and we will continue to provide bug fixes for 12 months after the 2022.1 release date.
Some of the Editions and Extensions or functionalities are limited to some platforms:
Prioritizing hardware for Amira
Introduction
This document is intended to give recommendations about choosing a suitable workstation to run Amira.
The four most important components that need to be considered are the graphics card (GPU), the CPU, the RAM and the hard drive.
The performance of direct volume rendering of large volumetric data or large triangulated surface visualization extracted from the data depends heavily on the GPU capability. The performance of image processing algorithms depends heavily on the performance of the CPU. The ability to quickly load or save large data depends heavily on the hard drive performance. And, of course, the amount of available memory in the system will be the main limitation on the size of the data that can be loaded and processed.
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
In this document, the term Amira refers to Amira and all Amira extensions.
Graphics Cards
The single most important determinant of Amira performance for visualization is the graphics card.
Amira should run on any graphics system (this includes GPU and its driver) that provides a complete implementation of OpenGL 4.2 or higher. However, graphics board and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require having graphics memory large enough to hold the actual data.
High-end graphics cards have 16 to 32 GB of memory. For optimal performance, volumetric visualization at full resolution requires that the data fit in graphics memory (some volume rendering modules of Amira are able to work around this limitation).
Amira will not benefit from multiple graphics boards for the purpose of visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation, and while the computation can run on the single CUDA-enabled graphics board, this computation can also run on a second CUDA-enabled graphics card installed on the system. A multiple graphics board configuration can be useful to drive many screens or in immersive environments.
Modules that use CUDA ship their kernels compiled in binary form only for certain target GPU "compute capabilities". Newer GPUs, whose compute capability major version is greater than those the binaries were compiled for, require a just-in-time (JIT) compilation step. This step can take up to several tens of minutes, and its result is stored in a file system cache whose size is controlled by the CUDA_CACHE_MAXSIZE environment variable. Be aware that GPU driver updates can reset this CUDA cache.
When comparing graphics boards, there are many different criteria and performance numbers to consider. Some are more important than others, and some are more important for certain kinds of rendering. Thus, it's important to consider your specific visualization requirements. Integrated graphics boards are not recommended for graphics-intensive applications such as Amira except for basic visualization.
Wikipedia articles on NVIDIA GeForce/Quadro cards will detail specific performance metrics:
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release.
Due to vendor support policies, on standard graphics boards we are not able to commit to providing a fix for bugs caused by the driver.
Our products are tested on the following configurations:
Platform | GPU | Driver number |
Windows 10 | NVIDIA Tesla M60 | 512.78 |
Windows 11 | NVIDIA RTX A4500 | 528.02 |
Ubuntu-20.04 | NVIDIA T1000 | 525.105.17 |
Note: If your system has multiple display adapters, ensure that you start Amira using a compatible one. By default, Amira uses the first display adapter. If this device is not compatible, please manually select a proper device. If manually selecting a device is not possible, please deactivate the other display adapters.
System Memory
System memory is the second most important determinant for Amira users who need to process large data.
You may need much more memory than the actual size of the data you want to load within Amira. Some processing may require several times the memory required by the original data set. If, for instance, you load a 4 GB data set in memory, apply a non-local means filter to the original data, and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results of your processing. Commonly, you will need 2 or 3 times the memory footprint of the data being processed for basic operations. For more complex workflows you may need up to 6 or 8 times that amount of memory, so 32 GB may be required for a 4 GB dataset.
Also note that the size of the data on disk may be much smaller than the memory needed to load it, because the file format may have compressed the data (for instance, a stack of JPEG files).
Amira can handle data that exceeds your system's physical memory using Large Data Access (LDA) or Smart Multichannel Series (SMS) technologies (SMS requires the Xplore5D extension). These are excellent ways to stretch performance, but they are not a direct substitute for having more physical memory. The best performance and optimal resolution are achieved by using Amira's large data technologies in combination with a large amount of system memory.
Amira 3D Pro provides another loading option to support 2D and 3D image processing from disk to disk, without loading the entire data set into memory; modules then operate per data slab. This enables processing and quantification of large image data even with limited hardware memory. Since processing each slab requires loading data from and saving results to the hard drive, it dramatically increases processing time. Processing data fully loaded in memory is therefore always preferred for best performance.
Hard Drives
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD) (e.g., 7200rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; your actual experience is likely to be closer to 40 MB/second. When you want to read a 1 GB file from the disk, you likely have to wait 25 seconds. For a 10 GB file, the wait is 250 seconds, over 4 minutes. Large data technologies will greatly reduce wait time for data visualization, but disk access will still be a limiting factor when you want to read data files at full resolution for data processing. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
For best performance, the recommended solution is to configure multiple hard drives (3 or more HDD or SSD) in RAID5 mode; note that RAID configurations may require substantially more system administration. For performance only, RAID 0 could be used, but be warned of risk of data loss upon hard-drive failure. If you want performance and data redundancy then RAID 5 is recommended.
Reading data across the network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and number/size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. Large data technologies may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
CPU
While Amira mostly relies on GPU performance for visualization, many modules are computationally intensive and their performance will be strongly affected by CPU performance.
More and more modules inside Amira are multi-threaded and thus can take advantage of multiple CPUs or multiple CPU cores available on your system. This is the case for most of the quantification modules provided with Amira XImagePAQ and also various computation modules.
Fast CPU clock, number of cores, and memory cache are the three most important factors affecting Amira performance. While most multi-threaded modules will scale up nicely according to the number of cores, the scaling bottleneck may come from memory access. From experience, up to 8 cores show almost linear scalability while more than 8 cores do not show much gain in performance. A larger memory cache improves performance.
How hardware can help optimizing
Here is a summary of hardware characteristics to consider for optimizing particular tasks.
Visualizing large data (LDA or SMS):
Basic volume rendering:
Advanced volume rendering (Volume Rendering module):
Large geometry rendering such as large surfaces from Isosurface or Generate Surface, large point clusters, large numerical simulation meshes:
Image processing and quantification (Amira 3D Pro):
Anisotropic Diffusion, Non-Local Means Filter (high-performance smoothing and noise reduction image filters):
Other compute modules, display module data extraction:
GPU computing using custom module programmed using Amira XPand C++ API and GPU API:
Special considerations
Environment variables
QT_PLUGIN_PATH must not be exported as a system-wide environment variable because it can interfere with this application.
Embedded documentation browser
Amira documentation is rendered by a sandboxed embedded browser (WebEngine). If Amira is run locally from a remote executable file, you must manually disable the sandbox to access the documentation: set the environment variable QTWEBENGINE_DISABLE_SANDBOX to 1. The same setting may be needed on some Linux systems if the anonymous namespaces feature is disabled and you do not want to, or cannot, activate it. More details can be found here.
Firewall
Internet access is necessary to activate Amira. Your firewall may prevent the connection to the license server.
Linux
Amira is only available for Intel64/AMD64 systems.
The official Linux distribution for Amira is Ubuntu 20.04 (64-bit PC desktop image). Nevertheless, Amira is likely to work on some other 64-bit Linux distributions if the required version of system libraries can be found, but technical support for those platforms will be limited.
Notes:
Windows
On Windows, the maximum path length (MAX_PATH) is 260 characters. You may hit this limitation if you install Amira under a folder path that is already long, which will result in a failed or incomplete installation. One way to overcome this issue is to activate Windows support for extended-length paths, as explained here.
XPand C++ API
To create custom extensions for Amira with the C++ API available in Amira 3D Pro on Windows, you will need Microsoft Visual Studio® or an equivalent IDE with a Microsoft VS2019 toolchain (MSVC v142 - VS 2019 C++ x64/x86). Visual Studio must be installed before running Amira in debug mode.
To create custom extensions for Amira with the C++ API available in Amira 3D Pro on Linux, you will need gcc 9.x on Ubuntu 20.04 (64-bit PC desktop image). Use the following command to determine the version of the GNU compiler:
gcc --version
Notes:
MATLAB
The currently supported version of MATLAB on all platforms is 2020a. To use the Calculus MATLAB module, which establishes a connection to MATLAB (MathWorks, Inc.), follow these installation instructions:
Windows
If you did not register during installation, enter the following command on the Windows command line: matlab /regserver.
In addition, add MATLAB_INSTALLATION_PATH/bin and MATLAB_INSTALLATION_PATH/bin/win64 in your PATH environment variable to allow Amira to find MATLAB libraries.
Linux
You must have the C shell csh installed at /bin/csh. If it is not present, you can install it using apt-get install csh.
The LD_LIBRARY_PATH environment variable should be set to MATLAB_INSTALLATION_PATH/bin/glnxa64 on Linux 64-bit.
The PATH environment variable should also be set to include MATLAB_INSTALLATION_PATH/bin.
If you still have trouble starting Calculus MATLAB after setting the environment variable, it might be because the GNU Standard C++ Library (libstdc++) installed on your platform is older than the one required by MATLAB. You can check MATLAB's embedded libstdc++ version in MATLAB_INSTALLATION_PATH/sys/os/glnxa64 on Linux 64-bit.
If needed, add this path to LD_LIBRARY_PATH.
Dell Backup and Recovery Application
We have detected incompatibility issues with earlier versions (<1.9) of the Dell Backup and Recovery application, which can make Amira crash when opening files with the file dialog. Please update Dell Backup and Recovery to version 1.9.2.8 or higher if you encounter this issue.
Remote display
Amira is not tested in remote sessions; remote display is not supported.
File path limitation
Support for file paths containing non-ASCII characters is not guaranteed. Some files (project files, data, etc.) may not be readable from (or writable to) a directory containing such characters.
Amira runs on:
Note: Starting with the release 2022.2, CentOS 7 is discontinued and Ubuntu 20.04 becomes the officially supported Linux platform. There will be no new product development nor update tested on CentOS 7 after the 2022.1 version. You can still use the CentOS 7 versions of our Software Products and we will continue to provide bug fixes for 12 months after the 2022.1 release date.
Some of the Editions and Extensions or functionalities are limited to some platforms:
Prioritizing hardware for Amira
Introduction
This document is intended to give recommendations about choosing a suitable workstation to run Amira.
The four most important components that need to be considered are the graphics card (GPU), the CPU, the RAM and the hard drive.
The performance of direct volume rendering of large volumetric data or large triangulated surface visualization extracted from the data depends heavily on the GPU capability. The performance of image processing algorithms depends heavily on the performance of the CPU. The ability to quickly load or save large data depends heavily on the hard drive performance. And, of course, the amount of available memory in the system will be the main limitation on the size of the data that can be loaded and processed.
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
In this document, the term Amira refers to Amira and all Amira extensions.
Graphics Cards
The single most important determinant of Amira performance for visualization is the graphics card.
Amira should run on any graphics system (this includes GPU and its driver) that provides a complete implementation of OpenGL 2.1 or higher (certain features may not be available depending on the OpenGL version and extensions supported). However, graphics board and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require having graphics memory large enough to hold the actual data.
High-end graphics cards have 16 to 32 GB of memory. Optimal performance volumetric visualization at full resolution requires that data fit in graphics memory (some volume rendering modules of Amira are able to go around this limitation).
Amira will not benefit from multiple graphics boards for the purpose of visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation, and while the computation can run on the single CUDA-enabled graphics board, this computation can also run on a second CUDA-enabled graphics card installed on the system. A multiple graphics board configuration can be useful to drive many screens or in immersive environments.
Modules leveraging CUDA features contain compiled kernels in binary form just for some target GPUs' "compute capabilities". New, more recent, GPUs having a compute capability major greater that those for which binaries have been compiled requires a Just in Time compilation step (that can take till some tenth of minutes) and whose result is stored in a file system cache whose size is controlled by this environment variable: CUDA_CACHE_MAXSIZE. Please be warned that GPU driver updates can reset this CUDA cache.
When comparing graphics boards, there are many different criteria and performance numbers to consider. Some are more important than others, and some are more important for certain kinds of rendering. Thus, it's important to consider your specific visualization requirements. Integrated graphics boards are not recommended for graphics-intensive applications such as Amira except for basic visualization.
Wikipedia articles on NVIDIA GeForce/Quadro cards will detail specific performance metrics:
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release.
Due to vendor support policies, on standard graphics boards we are not able to commit to providing a fix for bugs caused by the driver.
Our products are tested on the following configurations:
Platform | GPU | Driver number |
Windows 10 | NVIDIA Tesla M60 | 512.78 |
Windows 11 | NVIDIA RTX A4500 | 528.02 |
Ubuntu-20.04 | NVIDIA T1000 | 525.105.17 |
Note: If your system has multiple display adapters, you should ensure that you are starting Amira using a compatible one. Amira is using by default the first display adapter. If this device is not compatible, please manually select a proper device. If manually selecting a device is not possible, please deactivate other display adapters.
System Memory
System memory is the second most important determinant for Amira users who need to process large data.
You may need much more memory than the actual size of the data you want to load within Amira. Some processing may require several times the memory required by the original data set. If you want to load, for instance, a 4 GB data set in memory and apply a non-local means filter to the original data and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results of your processing. Commonly you need 2 or 3 times the memory footprint of the data being processed for basic operations. For more complex workflows you may need up to 6 or 8 times amount of memory, so 32 GB may be required for a 4 GB dataset.
Also notice that size of the data on disk may be much smaller than memory needed to load the data as the file format may have compressed the data (for instance, loading a stack of JPEG files).
Also notice that size of the data on disk may be much smaller than memory needed to load the data as the file format may have compressed the data (for instance, loading a stack of JPEG files).
Amira can handle data that exceed your system's physical memory using Large Data Access (LDA) or Smart Multichannel Series (SMS) technologies - SMS requires Xplore5D extension. They are excellent ways to stretch the performance, but it is not a direct substitute for having more physical memory. The best performance and optimal resolution is achieved by using Amira large data technologies in combination with a large amount of system memory.
Amira 3D Pro provides another loading option to support for 2D and 3D image processing from disk to disk, without requiring loading the entire data into memory; modules then operate per data slab. This enables processing and quantification of large image data even with limited hardware memory. Since processing of each slab requires loading data and saving results from/to the hard drive, it dramatically increases processing time. Thus, processing data fully loaded in memory is always preferred for best performance.
Hard Drives
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD) (e.g., 7200rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; your actual experience is likely to be closer to 40 MB/second. When you want to read a 1 GB file from the disk, you likely have to wait 25 seconds. For a 10 GB file, the wait is 250 seconds, over 4 minutes. Large data technologies will greatly reduce wait time for data visualization, but disk access will still be a limiting factor when you want to read data files at full resolution for data processing. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
For best performance, the recommended solution is to configure multiple hard drives (3 or more HDD or SSD) in RAID5 mode; note that RAID configurations may require substantially more system administration. For performance only, RAID 0 could be used, but be warned of risk of data loss upon hard-drive failure. If you want performance and data redundancy then RAID 5 is recommended.
Reading data across the network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and number/size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. Large data technologies may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
CPU
While Amira relies mostly on GPU performance for visualization, many modules are computationally intensive and their performance is strongly affected by the CPU.
More and more modules inside Amira are multi-threaded and can thus take advantage of the multiple CPUs or CPU cores available on your system. This is the case for most of the quantification modules provided with Amira XImagePAQ, as well as for various computation modules.
CPU clock speed, core count, and memory cache size are the three most important CPU factors affecting Amira performance. While most multi-threaded modules scale well with the number of cores, memory access can become the bottleneck: in our experience, scalability is almost linear up to 8 cores, with little additional gain beyond that. A larger memory cache also improves performance.
How hardware can help optimize performance
Here is a summary of hardware characteristics to consider for optimizing particular tasks.
Visualizing large data (LDA or SMS):
Basic volume rendering:
Advanced volume rendering (Volume Rendering module):
Large geometry rendering such as large surfaces from Isosurface or Generate Surface, large point clusters, large numerical simulation meshes:
Image processing and quantification (Amira 3D Pro):
Anisotropic Diffusion, Non-Local Means Filter (high-performance smoothing and noise reduction image filters):
Other compute modules, display module data extraction:
GPU computing using custom module programmed using Amira XPand C++ API and GPU API:
Special considerations
Environment variables
QT_PLUGIN_PATH must not be exported as a system-wide environment variable because it can interfere with this application.
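If the variable has been exported system-wide, a workaround is to clear it for the launching shell only. A minimal sketch (the `./Amira` launch line is a placeholder for your actual start script or binary):

```shell
# Check whether QT_PLUGIN_PATH is set in the current session.
if [ -n "${QT_PLUGIN_PATH:-}" ]; then
    echo "QT_PLUGIN_PATH is set to: $QT_PLUGIN_PATH"
    # Unset it for this session only; the system-wide setting is untouched.
    unset QT_PLUGIN_PATH
fi

# Launch the application from this same shell so the variable stays unset.
# ./Amira
```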
Embedded documentation browser
Amira documentation is rendered by a sandboxed embedded browser (WebEngine). If Amira is executed locally from a remote executable file, you must manually disable the sandbox to access the documentation: set the environment variable QTWEBENGINE_DISABLE_SANDBOX to 1. The same setting may be needed on some Linux systems if the anonymous namespaces feature is disabled and you do not want to, or cannot, activate it. More details can be found here.
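On Linux this can be done in the shell used to start the application; again, the launch line is a placeholder:

```shell
# Disable the WebEngine sandbox for this session only.
export QTWEBENGINE_DISABLE_SANDBOX=1

# Then start the application from the same shell.
# ./Amira
```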
Firewall
Internet access is necessary to activate Amira. Note that your firewall may prevent the connection to the license server.
Linux
Amira is only available for Intel64/AMD64 systems.
The official Linux distribution for Amira is Ubuntu 20.04 (64-bit PC desktop image). Nevertheless, Amira is likely to work on some other 64-bit Linux distributions if the required version of system libraries can be found, but technical support for those platforms will be limited.
Notes:
Windows
On Windows, the maximum path length (MAX_PATH) is 260 characters. You may hit this limitation if you install Amira under a long initial folder path, which will result in a failed or incomplete installation. One way to overcome this issue is to activate Windows support for extended-length paths, as explained here.
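For reference, on Windows 10 version 1607 and later, extended-length path support can be enabled through the registry (administrator rights required); this fragment is an illustration of that generic Windows setting, not an Amira-specific switch, and whether a given application benefits also depends on how it was built:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"LongPathsEnabled"=dword:00000001
```

A reboot may be needed for the change to take effect.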
XPand C++ API
To create custom extensions for Amira with the C++ API available in Amira 3D Pro on Windows, you need Microsoft Visual Studio® or an equivalent IDE with a Microsoft VS2019 toolchain (MSVC v142 - VS 2019 C++ x64/x86). Visual Studio must be installed before running Amira in debug mode.
To create custom extensions for Amira with the C++ API available in Amira 3D Pro on Linux, you will need gcc 9.x on Ubuntu 20.04 (64-bit PC desktop image). Use the following command to determine the version of the GNU compiler:
gcc --version
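To check the requirement programmatically, the major version can be extracted from the first line of that output. A small sketch; the sample string below mimics what gcc prints on Ubuntu 20.04, and on a real system you would feed in `gcc --version | head -n1` instead:

```shell
# Required major version of gcc for building XPand extensions on Linux.
required_major=9

# Sample first line of `gcc --version` output (illustrative only).
sample="gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0"

# The version number is the last whitespace-separated token; keep its major part.
major=$(printf '%s\n' "$sample" | sed -E 's/.* ([0-9]+)\.[0-9]+\.[0-9]+$/\1/')
echo "detected gcc major version: $major"

if [ "$major" -eq "$required_major" ]; then
    echo "gcc version is suitable"
fi
```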
Notes:
MATLAB
The currently supported version of MATLAB on all platforms is R2020a. To use the Calculus MATLAB module, which establishes a connection to MATLAB (MathWorks, Inc.), follow these installation instructions:
Windows
If you did not register during installation, enter the following command on the Windows command line: matlab /regserver.
In addition, add MATLAB_INSTALLATION_PATH/bin and MATLAB_INSTALLATION_PATH/bin/win64 to your PATH environment variable so that Amira can find the MATLAB libraries.
Linux
You must have the C shell csh installed at /bin/csh. If it is not present, you can install it using apt-get install csh.
The LD_LIBRARY_PATH environment variable should be set to MATLAB_INSTALLATION_PATH/bin/glnxa64 on Linux 64-bit.
The PATH environment variable should also include MATLAB_INSTALLATION_PATH/bin.
If you still have trouble starting Calculus MATLAB after setting the environment variable, it might be because the GNU Standard C++ Library (libstdc++) installed on your platform is older than the one required by MATLAB. You can check MATLAB's embedded libstdc++ version in MATLAB_INSTALLATION_PATH/sys/os/glnxa64 on Linux 64-bit.
If needed, add this path to LD_LIBRARY_PATH.
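The Linux environment setup above can be sketched as a short snippet; the path /usr/local/MATLAB/R2020a is only an illustrative placeholder for MATLAB_INSTALLATION_PATH:

```shell
# Placeholder: substitute your actual MATLAB installation directory.
MATLAB_INSTALLATION_PATH=/usr/local/MATLAB/R2020a

# Let the dynamic linker find MATLAB's 64-bit Linux libraries.
export LD_LIBRARY_PATH="$MATLAB_INSTALLATION_PATH/bin/glnxa64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Let the application find the matlab executable.
export PATH="$MATLAB_INSTALLATION_PATH/bin:$PATH"

# Only if the system libstdc++ is older than the one MATLAB requires,
# prepend MATLAB's bundled copy as well (see the note above):
# export LD_LIBRARY_PATH="$MATLAB_INSTALLATION_PATH/sys/os/glnxa64:$LD_LIBRARY_PATH"
```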
Dell Backup and Recovery Application
We have detected incompatibility issues with older versions (before 1.9) of the Dell Backup and Recovery Application, which can make Amira crash when opening files with the file dialog. Please update the Dell Backup and Recovery Application to version 1.9.2.8 or higher if you encounter this issue.
Remote display
Amira is not tested in remote sessions; remote display is not supported.
Avizo Software runs on:
Note: Starting with release 2022.2, CentOS 7 support is discontinued and Ubuntu 20.04 becomes the officially supported Linux platform. No new product development or updates will be tested on CentOS 7 after version 2022.1. You can still use the CentOS 7 versions of our Software Products, and we will continue to provide bug fixes for 12 months after the 2022.1 release date.
Some of the Editions and Extensions or functionalities are limited to some platforms:
Prioritizing hardware for Avizo
Introduction
This document is intended to give recommendations about choosing a suitable workstation to run Avizo.
The four most important components that need to be considered are the graphics card (GPU), the CPU, the RAM and the hard drive.
The performance of direct volume rendering of large volumetric data or large triangulated surface visualization extracted from the data depends heavily on the GPU capability. The performance of image processing algorithms depends heavily on the performance of the CPU. The ability to quickly load or save large data depends heavily on the hard drive performance. And, of course, the amount of available memory in the system will be the main limitation on the size of the data that can be loaded and processed.
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
In this document, the term Avizo refers to all Avizo editions and all Avizo extensions.
Graphics Cards
The single most important determinant of Avizo performance for visualization is the graphics card.
Avizo should run on any graphics system (this includes GPU and its driver) that provides a complete implementation of OpenGL 4.2 or higher. However, graphics board and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require having graphics memory large enough to hold the actual data.
High-end graphics cards have 16 to 32 GB of memory. Optimal-performance volumetric visualization at full resolution requires that the data fit in graphics memory (some volume rendering modules of Avizo can work around this limitation).
Avizo will not benefit from multiple graphics boards for visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation; while the computation can run on the single CUDA-enabled graphics board driving the display, it can also run on a second CUDA-enabled graphics card installed in the system. A multiple-graphics-board configuration can also be useful for driving many screens or for immersive environments.
Modules leveraging CUDA features ship their kernels compiled in binary form only for certain target GPU "compute capabilities". Newer GPUs, whose compute capability major version is greater than those for which the binaries were compiled, require a just-in-time (JIT) compilation step that can take up to tens of minutes; its result is stored in a file-system cache whose size is controlled by the CUDA_CACHE_MAXSIZE environment variable. Be warned that GPU driver updates can reset this CUDA cache.
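To keep JIT-compiled kernels cached across sessions, the cache limit can be raised before launching the application. A minimal sketch; the 2 GiB value is an illustrative choice, not a product recommendation:

```shell
# CUDA_CACHE_MAXSIZE is the size limit, in bytes, of the CUDA JIT
# compilation cache. 2147483648 bytes = 2 GiB.
export CUDA_CACHE_MAXSIZE=2147483648
```

Set this in the environment of the shell (or desktop session) that starts the application so the CUDA runtime picks it up.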
When comparing graphics boards, there are many different criteria and performance numbers to consider. Some are more important than others, and some are more important for certain kinds of rendering. Thus, it's important to consider your specific visualization requirements. Integrated graphics boards are not recommended for graphics-intensive applications such as Avizo except for basic visualization.
Wikipedia articles on NVIDIA GeForce/Quadro cards will detail specific performance metrics:
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release.
Due to vendor support policies, on standard graphics boards we are not able to commit to providing a fix for bugs caused by the driver.
Our products are tested on the following configurations:
Platform | GPU | Driver number |
Windows 10 | NVIDIA Tesla M60 | 512.78 |
Windows 11 | NVIDIA RTX A4500 | 528.02 |
Ubuntu-20.04 | NVIDIA T1000 | 525.105.17 |
Note: If your system has multiple display adapters, ensure that you start Avizo with a compatible one. By default, Avizo uses the first display adapter. If this device is not compatible, please manually select a suitable device; if that is not possible, deactivate the other display adapters.
System Memory
System memory is the second most important determinant for Avizo users who need to process large data.
You may need much more memory than the actual size of the data you want to load into Avizo: some processing steps require several times the memory footprint of the original data set. For instance, if you load a 4 GB data set into memory, apply a non-local means filter to the original data, and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results. As a rule of thumb, basic operations need 2 to 3 times the memory footprint of the data being processed; more complex workflows may need up to 6 or 8 times that footprint, so 32 GB may be required for a 4 GB dataset.
Also note that the size of the data on disk may be much smaller than the memory needed to load it, since the file format may compress the data (for instance, a stack of JPEG files).
Avizo can handle data that exceeds your system's physical memory using the Large Data Access (LDA) or Smart Multichannel Series (SMS) technologies (SMS requires the Xplore5D extension). These are excellent ways to stretch performance, but they are not a direct substitute for more physical memory; the best performance and optimal resolution are achieved by combining Avizo's large data technologies with a large amount of system memory.
Avizo 3D Pro provides another loading option that supports 2D and 3D image processing from disk to disk, without loading the entire dataset into memory; modules then operate on one data slab at a time. This enables processing and quantification of large image data even with limited hardware memory. Because each slab must be read from and written back to the hard drive, this mode dramatically increases processing time; processing data fully loaded in memory therefore remains preferable for best performance.
Hard Drives
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD), e.g., a 7200 rpm SATA disk, can stream data to your application at a sustained rate of only about 60 MB/second. That is the theoretical limit; in practice, throughput is likely to be closer to 40 MB/second. At that rate, reading a 1 GB file from disk takes about 25 seconds; a 10 GB file takes 250 seconds, over 4 minutes. Large data technologies greatly reduce the wait time for data visualization, but disk access will still be a limiting factor when you read data files at full resolution for data processing. Compared to traditional HDDs, solid-state drives (SSDs) offer significantly better read and write speeds.
For best performance with data redundancy, the recommended solution is to configure multiple hard drives (3 or more HDDs or SSDs) in RAID 5 mode; note that RAID configurations may require substantially more system administration. RAID 0 can be used for performance alone, but be warned that a single drive failure then results in data loss.
Reading data across the network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and number/size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. Large data technologies may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
CPU
While Avizo relies mostly on GPU performance for visualization, many modules are computationally intensive and their performance is strongly affected by the CPU.
More and more modules inside Avizo are multi-threaded and can thus take advantage of the multiple CPUs or CPU cores available on your system. This is the case for most of the quantification modules provided with Avizo, for a number of modules of the Avizo XLabSuite Extension, and for various computation modules.
CPU clock speed, core count, and memory cache size are the three most important CPU factors affecting Avizo performance. While most multi-threaded modules scale well with the number of cores, memory access can become the bottleneck: in our experience, scalability is almost linear up to 8 cores, with little additional gain beyond that. A larger memory cache also improves performance.
How hardware can help optimize performance
Here is a summary of hardware characteristics to consider for optimizing particular tasks.
Visualizing large data (LDA or SMS):
Basic volume rendering:
Advanced volume rendering (Volume Rendering module):
Large geometry rendering such as large surfaces from Isosurface or Generate Surface, large point clusters, large numerical simulation meshes:
Image processing and quantification (Avizo 3D Pro):
Anisotropic Diffusion, Non-Local Means Filter (high-performance smoothing and noise reduction image filters), Avizo XLabSuite Extension (absolute permeability computation):
Other compute modules, display module data extraction:
GPU computing using custom module programmed using Avizo XPand C++ API and GPU API:
Special considerations
Environment variables
QT_PLUGIN_PATH must not be exported as a system-wide environment variable because it can interfere with this application.
Embedded documentation browser
Avizo documentation is rendered by a sandboxed embedded browser (WebEngine). If Avizo is executed locally from a remote executable file, you must manually disable the sandbox to access the documentation: set the environment variable QTWEBENGINE_DISABLE_SANDBOX to 1. The same setting may be needed on some Linux systems if the anonymous namespaces feature is disabled and you do not want to, or cannot, activate it. More details can be found here.
Firewall
Internet access is necessary to activate Avizo. Note that your firewall may prevent the connection to the license server.
Linux
Avizo is only available for Intel64/AMD64 systems.
The official Linux distribution for Avizo is Ubuntu 20.04 (64-bit PC desktop image). Nevertheless, Avizo is likely to work on some other 64-bit Linux distributions if the required version of system libraries can be found, but technical support for those platforms will be limited.
Notes:
Windows
On Windows, the maximum path length (MAX_PATH) is 260 characters. You may hit this limitation if you install Avizo under a long initial folder path, which will result in a failed or incomplete installation. One way to overcome this issue is to activate Windows support for extended-length paths, as explained here.
XPand C++ API
To create custom extensions for Avizo with the C++ API available in Avizo 3D Pro on Windows, you need Microsoft Visual Studio® or an equivalent IDE with a Microsoft VS2019 toolchain (MSVC v142 - VS 2019 C++ x64/x86). Visual Studio must be installed before running Avizo in debug mode.
To create custom extensions for Avizo with the C++ API available in Avizo 3D Pro on Linux, you will need gcc 9.x on Ubuntu 20.04 (64-bit PC desktop image). Use the following command to determine the version of the GNU compiler:
gcc --version
Notes:
MATLAB
The currently supported version of MATLAB on all platforms is R2020a. To use the Calculus MATLAB module, which establishes a connection to MATLAB (MathWorks, Inc.), follow these installation instructions:
Windows
If you did not register during installation, enter the following command on the Windows command line: matlab /regserver.
In addition, add MATLAB_INSTALLATION_PATH/bin and MATLAB_INSTALLATION_PATH/bin/win64 to your PATH environment variable so that Avizo can find the MATLAB libraries.
Linux
You must have the C shell csh installed at /bin/csh. If it is not present, you can install it using apt-get install csh.
The LD_LIBRARY_PATH environment variable should be set to MATLAB_INSTALLATION_PATH/bin/glnxa64 on Linux 64-bit.
The PATH environment variable should also include MATLAB_INSTALLATION_PATH/bin.
If you still have trouble starting Calculus MATLAB after setting the environment variable, it might be because the GNU Standard C++ Library (libstdc++) installed on your platform is older than the one required by MATLAB. You can check MATLAB's embedded libstdc++ version in MATLAB_INSTALLATION_PATH/sys/os/glnxa64 on Linux 64-bit.
If needed, add this path to LD_LIBRARY_PATH.
Dell Backup and Recovery Application
We have detected incompatibility issues with older versions (before 1.9) of the Dell Backup and Recovery Application, which can make Avizo crash when opening files with the file dialog. Please update the Dell Backup and Recovery Application to version 1.9.2.8 or higher if you encounter this issue.
Remote display
Avizo is not tested in remote sessions; remote display is not supported.
File path limitation
Support for file paths containing non-ASCII characters is not guaranteed. Some files (project files, data, ...) may not be readable from, or writable to, a directory whose path contains such characters.
Avizo Software runs on:
Note: Starting with release 2022.2, CentOS 7 support is discontinued and Ubuntu 20.04 becomes the officially supported Linux platform. No new product development or updates will be tested on CentOS 7 after version 2022.1. You can still use the CentOS 7 versions of our Software Products, and we will continue to provide bug fixes for 12 months after the 2022.1 release date.
Some of the Editions and Extensions or functionalities are limited to some platforms:
Prioritizing hardware for Avizo
Introduction
This document is intended to give recommendations about choosing a suitable workstation to run Avizo.
The four most important components that need to be considered are the graphics card (GPU), the CPU, the RAM and the hard drive.
The performance of direct volume rendering of large volumetric data or large triangulated surface visualization extracted from the data depends heavily on the GPU capability. The performance of image processing algorithms depends heavily on the performance of the CPU. The ability to quickly load or save large data depends heavily on the hard drive performance. And, of course, the amount of available memory in the system will be the main limitation on the size of the data that can be loaded and processed.
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
In this document, the term Avizo refers to all Avizo editions and all Avizo extensions.
Graphics Cards
The single most important determinant of Avizo performance for visualization is the graphics card.
Avizo should run on any graphics system (this includes GPU and its driver) that provides a complete implementation of OpenGL 2.1 or higher (certain features may not be available depending on the OpenGL version and extensions supported). However, graphics board and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require having graphics memory large enough to hold the actual data.
High-end graphics cards have 16 to 32 GB of memory. Optimal-performance volumetric visualization at full resolution requires that the data fit in graphics memory (some volume rendering modules of Avizo can work around this limitation).
Avizo will not benefit from multiple graphics boards for visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation; while the computation can run on the single CUDA-enabled graphics board driving the display, it can also run on a second CUDA-enabled graphics card installed in the system. A multiple-graphics-board configuration can also be useful for driving many screens or for immersive environments.
Modules leveraging CUDA features ship their kernels compiled in binary form only for certain target GPU "compute capabilities". Newer GPUs, whose compute capability major version is greater than those for which the binaries were compiled, require a just-in-time (JIT) compilation step that can take up to tens of minutes; its result is stored in a file-system cache whose size is controlled by the CUDA_CACHE_MAXSIZE environment variable. Be warned that GPU driver updates can reset this CUDA cache.
When comparing graphics boards, there are many different criteria and performance numbers to consider. Some are more important than others, and some are more important for certain kinds of rendering. Thus, it's important to consider your specific visualization requirements. Integrated graphics boards are not recommended for graphics-intensive applications such as Avizo except for basic visualization.
Wikipedia articles on NVIDIA GeForce/Quadro and AMD Radeon/FirePro cards will detail specific performance metrics:
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release.
Due to vendor support policies, on standard graphics boards we are not able to commit to providing a fix for bugs caused by the driver.
Our products are tested on the following configurations:
Platform | GPU | Driver number |
Windows 10 | NVIDIA Tesla M60 | 512.78 |
Windows 11 | NVIDIA RTX A4500 | 528.02 |
Ubuntu-20.04 | NVIDIA T1000 | 525.105.17 |
Note: If your system has multiple display adapters, ensure that you start Avizo with a compatible one. By default, Avizo uses the first display adapter. If this device is not compatible, please manually select a suitable device; if that is not possible, deactivate the other display adapters.
System Memory
System memory is the second most important determinant for Avizo users who need to process large data.
You may need much more memory than the actual size of the data you want to load into Avizo: some processing steps require several times the memory footprint of the original data set. For instance, if you load a 4 GB data set into memory, apply a non-local means filter to the original data, and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results. As a rule of thumb, basic operations need 2 to 3 times the memory footprint of the data being processed; more complex workflows may need up to 6 or 8 times that footprint, so 32 GB may be required for a 4 GB dataset.
Also note that the size of the data on disk may be much smaller than the memory needed to load it, since the file format may compress the data (for instance, a stack of JPEG files).
Avizo can handle data that exceeds your system's physical memory using the Large Data Access (LDA) or Smart Multichannel Series (SMS) technologies (SMS requires the Xplore5D extension). These are excellent ways to stretch performance, but they are not a direct substitute for more physical memory; the best performance and optimal resolution are achieved by combining Avizo's large data technologies with a large amount of system memory.
Avizo 3D Pro provides another loading option that supports 2D and 3D image processing from disk to disk, without loading the entire dataset into memory; modules then operate on one data slab at a time. This enables processing and quantification of large image data even with limited hardware memory. Because each slab must be read from and written back to the hard drive, this mode dramatically increases processing time; processing data fully loaded in memory therefore remains preferable for best performance.
Hard Drives
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD), e.g., a 7200 rpm SATA disk, can stream data to your application at a sustained rate of only about 60 MB/second. That is the theoretical limit; in practice, throughput is likely to be closer to 40 MB/second. At that rate, reading a 1 GB file from disk takes about 25 seconds; a 10 GB file takes 250 seconds, over 4 minutes. Large data technologies greatly reduce the wait time for data visualization, but disk access will still be a limiting factor when you read data files at full resolution for data processing. Compared to traditional HDDs, solid-state drives (SSDs) offer significantly better read and write speeds.
For best performance with data redundancy, the recommended solution is to configure multiple hard drives (3 or more HDDs or SSDs) in RAID 5 mode; note that RAID configurations may require substantially more system administration. RAID 0 can be used for performance alone, but be warned that a single drive failure then results in data loss.
Reading data across the network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and number/size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. Large data technologies may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
CPU
While Avizo relies mostly on GPU performance for visualization, many modules are computationally intensive and their performance is strongly affected by the CPU.
More and more modules inside Avizo are multi-threaded and can thus take advantage of the multiple CPUs or CPU cores available on your system. This is the case for most of the quantification modules provided with Avizo, for a number of modules of the Avizo XLabSuite Extension, and for various computation modules.
CPU clock speed, core count, and memory cache size are the three most important CPU factors affecting Avizo performance. While most multi-threaded modules scale well with the number of cores, memory access can become the bottleneck: in our experience, scalability is almost linear up to 8 cores, with little additional gain beyond that. A larger memory cache also improves performance.
How hardware can help optimize performance
Here is a summary of hardware characteristics to consider for optimizing particular tasks.
Visualizing large data (LDA or SMS):
Basic volume rendering:
Advanced volume rendering (Volume Rendering module):
Large geometry rendering such as large surfaces from Isosurface or Generate Surface, large point clusters, large numerical simulation meshes:
Image processing and quantification (Avizo 3D Pro):
Anisotropic Diffusion, Non-Local Means Filter (high-performance smoothing and noise reduction image filters), Avizo XLabSuite Extension (absolute permeability computation):
Other compute modules, display module data extraction:
GPU computing using custom module programmed using Avizo XPand C++ API and GPU API:
Special considerations
Environment variables
QT_PLUGIN_PATH must not be exported as a system-wide environment variable because it can interfere with this application.
Embedded documentation browser
Avizo documentation is rendered by a sandboxed embedded browser (WebEngine). If Avizo is executed locally from a remote executable file, you must manually disable the sandbox to access the documentation: set the environment variable QTWEBENGINE_DISABLE_SANDBOX to 1. The same setting may be needed on some Linux systems if the anonymous namespaces feature is disabled and you do not want to, or cannot, activate it. More details can be found here.
Firewall
Internet access is necessary to activate Avizo. Note that your firewall may prevent the connection to the license server.
Linux
Avizo is only available for Intel64/AMD64 systems.
The official Linux distribution for Avizo is Ubuntu 20.04 (64-bit PC desktop image). Nevertheless, Avizo is likely to work on some other 64-bit Linux distributions if the required version of system libraries can be found, but technical support for those platforms will be limited.
Notes:
Windows
On Windows, the maximum path length (MAX_PATH) is 260 characters. You may hit this limitation if you install Avizo under a long initial folder path, which will result in a failed or incomplete installation. One way to overcome this issue is to activate Windows support for extended-length paths, as explained here.
XPand C++ API
To create custom extensions for Avizo with the C++ API available in Avizo 3D Pro on Windows, you need Microsoft Visual Studio® or an equivalent IDE with a Microsoft VS2019 toolchain (MSVC v142 - VS 2019 C++ x64/x86). Visual Studio must be installed before running Avizo in debug mode.
To create custom extensions for Avizo with the C++ API available in Avizo 3D Pro on Linux, you will need gcc 9.x on Ubuntu 20.04 (64-bit PC desktop image). Use the following command to determine the version of the GNU compiler:
gcc --version
Notes:
MATLAB
The currently supported version of MATLAB on all platforms is R2020a. To use the Calculus MATLAB module, which establishes a connection to MATLAB (MathWorks, Inc.), follow these installation instructions:
Windows
If you did not register during installation, enter the following command on the Windows command line: matlab /regserver.
In addition, add MATLAB_INSTALLATION_PATH/bin and MATLAB_INSTALLATION_PATH/bin/win64 to your PATH environment variable so that Avizo can find the MATLAB libraries.
Linux
You must have the C shell csh installed at /bin/csh. If it is not present, you can install it using apt-get install csh.
The LD_LIBRARY_PATH environment variable should be set to MATLAB_INSTALLATION_PATH/bin/glnxa64 on Linux 64-bit.
The PATH environment variable should also include MATLAB_INSTALLATION_PATH/bin.
If you still have trouble starting Calculus MATLAB after setting the environment variable, it might be because the GNU Standard C++ Library (libstdc++) installed on your platform is older than the one required by MATLAB. You can check MATLAB's embedded libstdc++ version in MATLAB_INSTALLATION_PATH/sys/os/glnxa64 on Linux 64-bit.
If needed, add this path to LD_LIBRARY_PATH.
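Putting the Linux steps together, a minimal sketch for a shell profile (the MATLAB path shown is an assumed default install location; adjust it to your system):

```shell
# Assumed default install location for MATLAB R2020a; adjust as needed.
MATLAB_INSTALLATION_PATH=/usr/local/MATLAB/R2020a

# Let Avizo find MATLAB's shared libraries and the matlab launcher.
export LD_LIBRARY_PATH="$MATLAB_INSTALLATION_PATH/bin/glnxa64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export PATH="$MATLAB_INSTALLATION_PATH/bin:$PATH"

# If the system libstdc++ is older than the one MATLAB requires,
# also expose MATLAB's bundled copy:
# export LD_LIBRARY_PATH="$MATLAB_INSTALLATION_PATH/sys/os/glnxa64:$LD_LIBRARY_PATH"
```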
Dell Backup and Recovery Application
We have detected incompatibility issues with older versions (earlier than 1.9) of the Dell Backup and Recovery application, which can cause Avizo to crash when opening files with the file dialog. Please update Dell Backup and Recovery to version 1.9.2.8 or later if you encounter this issue.
Remote display
Avizo is not tested in remote sessions; remote display is not supported.
PerGeos Software runs on:
Note: Starting with the release 2022.2, CentOS 7 support is discontinued and Ubuntu 20.04 becomes the officially supported Linux platform. There will be no new product development nor update tested on CentOS 7 after the 2022.1 version. You can still use the CentOS 7 versions of our Software Products and we will continue to provide bug fixes for 12 months after the 2022.1 release date.
Some of the extensions or functionalities are limited to some platforms:
This document is intended to give recommendations about choosing a suitable workstation to run PerGeos.
The four most important components that need to be considered are the graphics card (GPU), the CPU, the RAM and the hard drive.
The performance of direct volume rendering of large volumetric data or large triangulated surface visualization extracted from the data depends heavily on the GPU capability. The performance of image processing algorithms depends heavily on the performance of the CPU. The ability to quickly load or save large data depends heavily on the hard drive performance. And, of course, the amount of available memory in the system will be the main limitation on the size of the data that can be loaded and processed.
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
The single most important determinant of PerGeos performance for visualization is the graphics card.
PerGeos should run on any graphics system (this includes GPU and its driver) that provides a complete implementation of OpenGL 2.1 or higher (certain features may not be available depending on the OpenGL version and extensions supported). However, graphics board and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require having graphics memory large enough to hold the actual data.
High-end graphics cards have 16 to 32 GB of memory. Optimal-performance volumetric visualization at full resolution requires that the data fit in graphics memory (some volume rendering modules of PerGeos can work around this limitation).
PerGeos will not benefit from multiple graphics boards for the purpose of visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation; while this computation can run on the same CUDA-enabled graphics board that drives the display, it can also run on a second CUDA-enabled graphics card installed in the system. A multiple graphics board configuration can be useful to drive many screens or in immersive environments.
Modules leveraging CUDA features ship compiled kernels in binary form only for certain target GPU "compute capabilities". Newer GPUs with a compute capability higher than those the binaries were compiled for require a just-in-time (JIT) compilation step, which can take up to tens of minutes; its result is stored in a file-system cache whose size is controlled by the CUDA_CACHE_MAXSIZE environment variable. Be aware that GPU driver updates can reset this CUDA cache.
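For example, the cache can be enlarged so JIT-compiled kernels survive between sessions (the 4 GiB value below is an arbitrary illustration, not a product requirement):

```shell
# CUDA_CACHE_MAXSIZE is expressed in bytes; here, 4 GiB.
export CUDA_CACHE_MAXSIZE=$((4 * 1024 * 1024 * 1024))
echo "CUDA JIT cache limit: $CUDA_CACHE_MAXSIZE bytes"
```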
When comparing graphics boards, there are many different criteria and performance numbers to consider. Some are more important than others, and some are more important for certain kinds of rendering. Thus, it's important to consider your specific visualization requirements. Integrated graphics boards are not recommended for graphics-intensive applications such as PerGeos except for basic visualization.
Wikipedia articles on NVIDIA GeForce/Quadro cards will detail specific performance metrics:
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release.
Due to vendor support policies, on standard graphics boards we are not able to commit to providing a fix for bugs caused by the driver.
Our products are tested on the following configurations:
Platform | GPU | Driver number |
Windows 10 | NVIDIA Tesla M60 | 512.78 |
Windows 11 | NVIDIA RTX A4500 | 528.02 |
Ubuntu 20.04 | NVIDIA T1000 | 525.105.17 |
System memory is the second most important determinant for PerGeos users who need to process large data.
You may need much more memory than the actual size of the data you want to load within PerGeos. Some processing may require several times the memory required by the original data set. If you want to load, for instance, a 4 GB data set in memory, apply a non-local means filter to the original data, and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results of your processing. Commonly you will need 2 or 3 times the memory footprint of the data being processed for basic operations. For more complex workflows you may need up to 6 or 8 times the memory footprint, so 32 GB may be required for a 4 GB dataset.
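The rule of thumb above can be sketched as a quick budget calculation (the multipliers are the rough factors quoted in the text, not exact requirements):

```shell
# Rough peak-memory budget: dataset size times a workflow-dependent factor.
dataset_gb=4
basic_factor=3      # simple filtering / basic operations
complex_factor=8    # multi-step workflows with intermediate results
echo "basic workflow:   ~$(( dataset_gb * basic_factor )) GB"
echo "complex workflow: ~$(( dataset_gb * complex_factor )) GB"
```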
Also note that the size of the data on disk may be much smaller than the memory needed to load it, as the file format may compress the data (for instance, when loading a stack of JPEG files).
PerGeos's Large Data Access (LDA) technology will enable you to work with data sizes exceeding your system's physical memory. LDA is an excellent way to stretch the performance, but it is not a direct substitute for having more physical memory. The best performance and optimal resolution will be achieved by using PerGeos's LDA technology in combination with a large amount of system memory. LDA provides a very convenient way to quickly load and browse your whole dataset. Note that LDA data will not work with most compute modules, which require the full resolution data to be loaded in memory.
PerGeos provides another loading option to support 2D and 3D image processing from disk to disk ("read as external disk data"), without requiring loading the entire data into memory; modules then operate per data slab. This enables processing and quantification of large image data even with limited hardware memory. Since processing of each slab requires loading data and saving results from/to the hard drive, it dramatically increases processing time. Thus, processing data fully loaded in memory is always preferred for best performance.
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD) (e.g., 7200rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; your actual experience is likely to be closer to 40 MB/second. When you want to read a 1 GB file from the disk, you will likely have to wait 25 seconds. For a 10 GB file, the wait is 250 seconds, over 4 minutes. LDA technology will greatly reduce wait time for data visualization, but disk access will still be a limiting factor when you want to read data files at full resolution for data processing. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
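The wait times quoted above follow directly from file size divided by sustained throughput; a quick sketch:

```shell
# Estimated read time at a sustained ~40 MB/s (typical 7200rpm SATA HDD).
rate_mb_s=40
for size_gb in 1 10; do
    # 1 GB is treated as 1000 MB for this back-of-the-envelope estimate.
    echo "${size_gb} GB file: ~$(( size_gb * 1000 / rate_mb_s )) s"
done
```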
For best performance, the recommended solution is to configure multiple hard drives (3 or more HDDs or SSDs) in RAID 5 mode; note that RAID configurations may require substantially more system administration. For performance only, RAID 0 could be used, but be warned of the risk of data loss upon hard-drive failure. If you want both performance and data redundancy, RAID 5 is recommended.
Reading data across the network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and number/size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. LDA technology may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
While PerGeos mostly relies on GPU performance for visualization, many modules are computationally intensive and their performance will be strongly affected by CPU performance.
More and more modules inside PerGeos are multi-threaded and thus can take advantage of multiple CPUs or multiple CPU cores available on your system. This is the case for most of the quantification modules provided with PerGeos, a number of modules of the Petrophysics Extension and also various computation modules.
Fast CPU clock, number of cores, and memory cache are the three most important factors affecting PerGeos performance. While most multi-threaded modules will scale up nicely according to the number of cores, the scaling bottleneck may come from memory access. From experience, up to 8 cores show almost linear scalability while more than 8 cores do not show much gain in performance. A larger memory cache improves performance.
Here is a summary of hardware characteristics to consider for optimizing particular tasks.
Visualizing large data (LDA):
Basic volume rendering:
Advanced volume rendering (Volume Rendering module):
Large geometry rendering such as large surfaces from Isosurface or Generate Surface, large point clusters, large numerical simulation meshes:
Image processing and quantification:
Anisotropic Diffusion, Non-Local Means Filter (high-performance smoothing and noise reduction image filters):
Other compute modules, display module data extraction:
GPU computing using custom module programmed using PerGeos XPand C++ API and GPU API:
QT_PLUGIN_PATH must not be exported as a system-wide environment variable because it can interfere with this application.
PerGeos documentation is rendered by a sandboxed embedded browser (WebEngine). If PerGeos is executed locally from a remote executable file, the sandbox must be disabled manually to access the documentation. Set the environment variable QTWEBENGINE_DISABLE_SANDBOX to 1 to disable the WebEngine sandbox. The same setting may be needed on some Linux systems if the anonymous namespaces feature is disabled and you do not want to, or cannot, activate it. More details can be found here.
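For example, before launching PerGeos from a shell:

```shell
# Disable the WebEngine sandbox so the embedded documentation browser can start
# in remote/restricted setups; unset it again when no longer needed, since the
# sandbox exists for security reasons.
export QTWEBENGINE_DISABLE_SANDBOX=1
```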
Internet access is required to activate PerGeos. Your firewall may prevent the connection to the license server.
PerGeos is only available for Intel64/AMD64 systems.
The official Linux distribution for PerGeos is Ubuntu 20.04 (64-bit PC desktop image). Nevertheless, PerGeos is likely to work on some other 64-bit Linux distributions if the required version of system libraries can be found, but technical support for those platforms will be limited.
Notes:
In Windows, the maximum path length (MAX_PATH) is 260 characters. You may hit this limitation if you install PerGeos into a folder whose path is already long, which will result in a failed or incomplete installation. One way to overcome this issue is to activate Windows support for extended-length paths, as explained here.
To create custom extensions for PerGeos with the C++ API on Windows, you will need Microsoft Visual Studio® or an equivalent IDE with a Microsoft VS2019 toolchain (MSVC v142 - VS 2019 C++ x64/x86). Visual Studio must be installed before running PerGeos in debug mode.
To create custom extensions for PerGeos with the C++ API available in PerGeos on Linux, you will need gcc 9.x on Ubuntu 20.04 (64-bit PC desktop image). Use the following command to determine the version of the GNU compiler:
gcc --version
The currently supported version of MATLAB on all platforms is R2020a. To use the Calculus MATLAB module, which establishes a connection to MATLAB (MathWorks, Inc.), follow these installation instructions:
Windows
If you did not register during installation, enter the following command on the Windows command line: matlab /regserver.
In addition, add MATLAB_INSTALLATION_PATH/bin and MATLAB_INSTALLATION_PATH/bin/win64 in your PATH environment variable to allow PerGeos to find MATLAB libraries.
Linux
You must have the C shell csh installed at /bin/csh. If it is not present, you can install it using apt-get install csh.
The LD_LIBRARY_PATH environment variable should be set to MATLAB_INSTALLATION_PATH/bin/glnxa64 on Linux 64-bit.
The PATH environment variable should also be set to MATLAB_INSTALLATION_PATH/bin.
If you still have trouble starting Calculus MATLAB after setting the environment variable, it might be because the GNU Standard C++ Library (libstdc++) installed on your platform is older than the one required by MATLAB. You can check MATLAB's embedded libstdc++ version in MATLAB_INSTALLATION_PATH/sys/os/glnxa64 on Linux 64-bit.
If needed, add this path to LD_LIBRARY_PATH.
We have detected incompatibility issues with older versions (earlier than 1.9) of the Dell Backup and Recovery application, which can cause PerGeos to crash when opening files with the file dialog. Please update Dell Backup and Recovery to version 1.9.2.8 or later if you encounter this issue.
PerGeos is not tested in remote sessions; remote display is not supported.
Amira runs on:
Note: Starting with the release 2022.2, CentOS 7 is discontinued and Ubuntu 20.04 becomes the officially supported Linux platform. There will be no new product development nor update tested on CentOS 7 after the 2022.1 version. You can still use the CentOS 7 versions of our Software Products and we will continue to provide bug fixes for 12 months after the 2022.1 release date.
Some of the Editions and Extensions or functionalities are limited to some platforms:
Prioritizing hardware for Amira
Introduction
This document is intended to give recommendations about choosing a suitable workstation to run Amira.
The four most important components that need to be considered are the graphics card (GPU), the CPU, the RAM and the hard drive.
The performance of direct volume rendering of large volumetric data or large triangulated surface visualization extracted from the data depends heavily on the GPU capability. The performance of image processing algorithms depends heavily on the performance of the CPU. The ability to quickly load or save large data depends heavily on the hard drive performance. And, of course, the amount of available memory in the system will be the main limitation on the size of the data that can be loaded and processed.
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
In this document, the term Amira refers to Amira and all Amira extensions.
Graphics Cards
The single most important determinant of Amira performance for visualization is the graphics card.
Amira should run on any graphics system (this includes GPU and its driver) that provides a complete implementation of OpenGL 2.1 or higher (certain features may not be available depending on the OpenGL version and extensions supported). However, graphics board and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require having graphics memory large enough to hold the actual data.
High-end graphics cards have 16 to 32 GB of memory. Optimal-performance volumetric visualization at full resolution requires that the data fit in graphics memory (some volume rendering modules of Amira can work around this limitation).
Amira will not benefit from multiple graphics boards for the purpose of visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation; while this computation can run on the same CUDA-enabled graphics board that drives the display, it can also run on a second CUDA-enabled graphics card installed in the system. A multiple graphics board configuration can be useful to drive many screens or in immersive environments.
Modules leveraging CUDA features ship compiled kernels in binary form only for certain target GPU "compute capabilities". Newer GPUs with a compute capability higher than those the binaries were compiled for require a just-in-time (JIT) compilation step, which can take up to tens of minutes; its result is stored in a file-system cache whose size is controlled by the CUDA_CACHE_MAXSIZE environment variable. Be aware that GPU driver updates can reset this CUDA cache.
When comparing graphics boards, there are many different criteria and performance numbers to consider. Some are more important than others, and some are more important for certain kinds of rendering. Thus, it's important to consider your specific visualization requirements. Integrated graphics boards are not recommended for graphics-intensive applications such as Amira except for basic visualization.
Wikipedia articles on NVIDIA GeForce/Quadro cards will detail specific performance metrics:
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release.
Due to vendor support policies, on standard graphics boards we are not able to commit to providing a fix for bugs caused by the driver.
Our products are tested on the following configurations:
Platform | GPU | Driver number |
Windows 10 | NVIDIA Tesla M60 | 512.78 |
Windows 11 | NVIDIA RTX A4500 | 528.02 |
Ubuntu 20.04 | NVIDIA T1000 | 525.105.17 |
Note: If your system has multiple display adapters, ensure that you start Amira using a compatible one. By default, Amira uses the first display adapter. If this device is not compatible, please manually select a suitable device. If manually selecting a device is not possible, please deactivate the other display adapters.
System Memory
System memory is the second most important determinant for Amira users who need to process large data.
You may need much more memory than the actual size of the data you want to load within Amira. Some processing may require several times the memory required by the original data set. If you want to load, for instance, a 4 GB data set in memory, apply a non-local means filter to the original data, and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results of your processing. Commonly you will need 2 or 3 times the memory footprint of the data being processed for basic operations. For more complex workflows you may need up to 6 or 8 times the memory footprint, so 32 GB may be required for a 4 GB dataset.
Also note that the size of the data on disk may be much smaller than the memory needed to load it, as the file format may compress the data (for instance, when loading a stack of JPEG files).
Amira can handle data that exceeds your system's physical memory using the Large Data Access (LDA) or Smart Multichannel Series (SMS) technologies (SMS requires the Xplore5D extension). These are excellent ways to stretch performance, but they are not a direct substitute for having more physical memory. The best performance and optimal resolution are achieved by using Amira's large data technologies in combination with a large amount of system memory.
Amira 3D Pro provides another loading option supporting 2D and 3D image processing from disk to disk, without requiring the entire data to be loaded into memory; modules then operate per data slab. This enables processing and quantification of large image data even with limited hardware memory. Since processing each slab requires loading data and saving results from/to the hard drive, it dramatically increases processing time. Thus, processing data fully loaded in memory is always preferred for best performance.
Hard Drives
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD) (e.g., 7200rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; your actual experience is likely to be closer to 40 MB/second. When you want to read a 1 GB file from the disk, you likely have to wait 25 seconds. For a 10 GB file, the wait is 250 seconds, over 4 minutes. Large data technologies will greatly reduce wait time for data visualization, but disk access will still be a limiting factor when you want to read data files at full resolution for data processing. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
For best performance, the recommended solution is to configure multiple hard drives (3 or more HDDs or SSDs) in RAID 5 mode; note that RAID configurations may require substantially more system administration. For performance only, RAID 0 could be used, but be warned of the risk of data loss upon hard-drive failure. If you want both performance and data redundancy, RAID 5 is recommended.
Reading data across the network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and number/size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. Large data technologies may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
CPU
While Amira mostly relies on GPU performance for visualization, many modules are computationally intensive and their performance will be strongly affected by CPU performance.
More and more modules inside Amira are multi-threaded and thus can take advantage of multiple CPUs or multiple CPU cores available on your system. This is the case for most of the quantification modules provided with Amira XImagePAQ and also various computation modules.
Fast CPU clock, number of cores, and memory cache are the three most important factors affecting Amira performance. While most multi-threaded modules will scale up nicely according to the number of cores, the scaling bottleneck may come from memory access. From experience, up to 8 cores show almost linear scalability while more than 8 cores do not show much gain in performance. A larger memory cache improves performance.
How hardware can help optimizing
Here is a summary of hardware characteristics to consider for optimizing particular tasks.
Visualizing large data (LDA or SMS):
Basic volume rendering:
Advanced volume rendering (Volume Rendering module):
Large geometry rendering such as large surfaces from Isosurface or Generate Surface, large point clusters, large numerical simulation meshes:
Image processing and quantification (Amira 3D Pro):
Anisotropic Diffusion, Non-Local Means Filter (high-performance smoothing and noise reduction image filters):
Other compute modules, display module data extraction:
GPU computing using custom module programmed using Amira XPand C++ API and GPU API:
Special considerations
Environment variables
QT_PLUGIN_PATH must not be exported as a system-wide environment variable because it can interfere with this application.
Embedded documentation browser
Amira documentation is rendered by a sandboxed embedded browser (WebEngine). If Amira is executed locally from a remote executable file, the sandbox must be disabled manually to access the documentation. Set the environment variable QTWEBENGINE_DISABLE_SANDBOX to 1 to disable the WebEngine sandbox. The same setting may be needed on some Linux systems if the anonymous namespaces feature is disabled and you do not want to, or cannot, activate it. More details can be found here.
Firewall
Internet access is required to activate Amira. Your firewall may prevent the connection to the license server.
Linux
Amira is only available for Intel64/AMD64 systems.
The official Linux distribution for Amira is Ubuntu 20.04 (64-bit PC desktop image). Nevertheless, Amira is likely to work on some other 64-bit Linux distributions if the required version of system libraries can be found, but technical support for those platforms will be limited.
Notes:
Windows
In Windows, the maximum path length (MAX_PATH) is 260 characters. You may hit this limitation if you install Amira into a folder whose path is already long, which will result in a failed or incomplete installation. One way to overcome this issue is to activate Windows support for extended-length paths, as explained here.
XPand C++ API
To create custom extensions for Amira with the C++ API available in Amira 3D Pro on Windows, you will need Microsoft Visual Studio® or an equivalent IDE with a Microsoft VS2019 toolchain (MSVC v142 - VS 2019 C++ x64/x86). Visual Studio must be installed before running Amira in debug mode.
To create custom extensions for Amira with the C++ API available in Amira 3D Pro on Linux, you will need gcc 9.x on Ubuntu 20.04 (64-bit PC desktop image). Use the following command to determine the version of the GNU compiler:
gcc --version
Notes:
MATLAB
The currently supported version of MATLAB on all platforms is R2020a. To use the Calculus MATLAB module, which establishes a connection to MATLAB (MathWorks, Inc.), follow these installation instructions:
Windows
If you did not register during installation, enter the following command on the Windows command line: matlab /regserver.
In addition, add MATLAB_INSTALLATION_PATH/bin and MATLAB_INSTALLATION_PATH/bin/win64 in your PATH environment variable to allow Amira to find MATLAB libraries.
Linux
You must have the C shell csh installed at /bin/csh. If it is not present, you can install it using apt-get install csh.
The LD_LIBRARY_PATH environment variable should be set to MATLAB_INSTALLATION_PATH/bin/glnxa64 on Linux 64-bit.
The PATH environment variable should also be set to MATLAB_INSTALLATION_PATH/bin.
If you still have trouble starting Calculus MATLAB after setting the environment variable, it might be because the GNU Standard C++ Library (libstdc++) installed on your platform is older than the one required by MATLAB. You can check MATLAB's embedded libstdc++ version in MATLAB_INSTALLATION_PATH/sys/os/glnxa64 on Linux 64-bit.
If needed, add this path to LD_LIBRARY_PATH.
Dell Backup and Recovery Application
We have detected incompatibility issues with older versions (earlier than 1.9) of the Dell Backup and Recovery application, which can cause Amira to crash when opening files with the file dialog. Please update Dell Backup and Recovery to version 1.9.2.8 or later if you encounter this issue.
Remote display
Amira is not tested in remote sessions; remote display is not supported.
Avizo Software runs on:
Note: Starting with the release 2022.2, CentOS 7 is discontinued and Ubuntu 20.04 becomes the officially supported Linux platform. There will be no new product development nor update tested on CentOS 7 after the 2022.1 version. You can still use the CentOS 7 versions of our Software Products and we will continue to provide bug fixes for 12 months after the 2022.1 release date.
Some of the Editions and Extensions or functionalities are limited to some platforms:
Prioritizing hardware for Avizo
Introduction
This document is intended to give recommendations about choosing a suitable workstation to run Avizo.
The four most important components that need to be considered are the graphics card (GPU), the CPU, the RAM and the hard drive.
The performance of direct volume rendering of large volumetric data or large triangulated surface visualization extracted from the data depends heavily on the GPU capability. The performance of image processing algorithms depends heavily on the performance of the CPU. The ability to quickly load or save large data depends heavily on the hard drive performance. And, of course, the amount of available memory in the system will be the main limitation on the size of the data that can be loaded and processed.
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
In this document, the term Avizo refers to all Avizo editions and all Avizo extensions.
Graphics Cards
The single most important determinant of Avizo performance for visualization is the graphics card.
Avizo should run on any graphics system (this includes GPU and its driver) that provides a complete implementation of OpenGL 2.1 or higher (certain features may not be available depending on the OpenGL version and extensions supported). However, graphics board and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require having graphics memory large enough to hold the actual data.
High-end graphics cards have 16 to 32 GB of memory. Optimal-performance volumetric visualization at full resolution requires that the data fit in graphics memory (some volume rendering modules of Avizo can work around this limitation).
Avizo will not benefit from multiple graphics boards for the purpose of visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation; while this computation can run on the same CUDA-enabled graphics board that drives the display, it can also run on a second CUDA-enabled graphics card installed in the system. A multiple graphics board configuration can be useful to drive many screens or in immersive environments.
Modules leveraging CUDA features ship compiled kernels in binary form only for certain target GPU "compute capabilities". Newer GPUs with a compute capability higher than those the binaries were compiled for require a just-in-time (JIT) compilation step, which can take up to tens of minutes; its result is stored in a file-system cache whose size is controlled by the CUDA_CACHE_MAXSIZE environment variable. Be aware that GPU driver updates can reset this CUDA cache.
When comparing graphics boards, there are many different criteria and performance numbers to consider. Some are more important than others, and some are more important for certain kinds of rendering. Thus, it's important to consider your specific visualization requirements. Integrated graphics boards are not recommended for graphics-intensive applications such as Avizo except for basic visualization.
Wikipedia articles on NVIDIA GeForce/Quadro and AMD Radeon/FirePro cards will detail specific performance metrics:
Professional graphics boards
Vendor | Family | Series |
NVIDIA | Quadro | Maxwell, Kepler, Pascal, RTX, Turing, Ampere |
AMD | FirePro | W, V |
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release.
Standard graphics boards
Vendor | Family | Series |
NVIDIA | GeForce | Maxwell, Kepler, Pascal, RTX, Turing, Ampere |
AMD | Radeon | since GCN 1.1 |
Intel | HD Graphics | Broadwell, Skylake |
Due to vendor support policies, on standard graphics boards we are not able to commit to providing a fix for bugs caused by the driver.
System Memory
System memory is the second most important determinant for Avizo users who need to process large data.
You may need much more memory than the actual size of the data you want to load within Avizo. Some processing may require several times the memory required by the original data set. If, for instance, you load a 4 GB data set in memory, apply a non-local means filter to the original data, and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results of your processing. Basic operations commonly need 2 to 3 times the memory footprint of the data being processed; more complex workflows may need up to 6 or 8 times that amount, so 32 GB may be required for a 4 GB dataset.
Also note that the size of the data on disk may be much smaller than the memory needed to load it, as the file format may compress the data (for instance, loading a stack of JPEG files).
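The sizing rule of thumb above can be sketched as a quick calculation (the factors are the ranges quoted in this document, not guarantees):

```shell
# Rough RAM estimate: dataset size in GB times a workflow factor
# (2-3x for basic operations, up to 8x for complex workflows).
DATASET_GB=4
FACTOR=8
NEEDED_GB=$((DATASET_GB * FACTOR))
echo "Plan for at least ${NEEDED_GB} GB of RAM"
```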
Avizo can handle data that exceed your system's physical memory using the Large Data Access (LDA) or Smart Multichannel Series (SMS) technologies (SMS requires the Xplore5D extension). These are excellent ways to stretch performance, but they are not a direct substitute for having more physical memory. The best performance and optimal resolution are achieved by using Avizo's large data technologies in combination with a large amount of system memory.
Avizo 3D Pro provides another loading option that supports 2D and 3D image processing from disk to disk, without loading the entire data set into memory; modules then operate per data slab. This enables processing and quantification of large image data even with limited hardware memory. Since processing each slab requires loading data and saving results from/to the hard drive, it dramatically increases processing time. Thus, processing data fully loaded in memory is always preferred for best performance.
Hard Drives
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD) (e.g., 7200rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; your actual experience is likely to be closer to 40 MB/second. When you want to read a 1 GB file from the disk, you likely have to wait 25 seconds. For a 10 GB file, the wait is 250 seconds, over 4 minutes. Large data technologies will greatly reduce wait time for data visualization, but disk access will still be a limiting factor when you want to read data files at full resolution for data processing. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
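The wait times quoted above follow directly from the sustained transfer rate; a quick back-of-the-envelope check (taking 1 GB as 1000 MB):

```shell
# Time to read a file at a realistic sustained HDD rate of ~40 MB/s.
FILE_GB=10
RATE_MB_S=40
WAIT_S=$(( FILE_GB * 1000 / RATE_MB_S ))
echo "${WAIT_S} seconds"   # 250 seconds, i.e. over 4 minutes
```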
For best performance, the recommended solution is to configure multiple hard drives (3 or more, HDD or SSD) in RAID 5 mode; note that RAID configurations may require substantially more system administration. For performance only, RAID 0 can be used, but be aware of the risk of data loss upon hard-drive failure. If you want both performance and data redundancy, RAID 5 is recommended.
Reading data across the network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and number/size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. Large data technologies may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
CPU
While Avizo mostly relies on GPU performance for visualization, many modules are computationally intensive and their performance will be strongly affected by CPU performance.
More and more modules inside Avizo are multi-threaded and thus can take advantage of multiple CPUs or multiple CPU cores available on your system. This is the case for most of the quantification modules provided with Avizo, a number of modules of the Avizo XLabSuite Extension, and various computation modules.
Fast CPU clock, number of cores, and memory cache are the three most important factors affecting Avizo performance. While most multi-threaded modules will scale up nicely according to the number of cores, the scaling bottleneck may come from memory access. From experience, up to 8 cores show almost linear scalability while more than 8 cores do not show much gain in performance. A larger memory cache improves performance.
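On Linux, the core count and memory figures that these recommendations refer to can be checked from the command line:

```shell
# Number of available CPU cores; multi-threaded modules scale roughly
# linearly up to about 8 cores.
nproc

# Total installed system memory, for comparison with the dataset sizing
# guidance above.
grep MemTotal /proc/meminfo
```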
How hardware can help with optimization
Here is a summary of hardware characteristics to consider for optimizing particular tasks.
Visualizing large data (LDA or SMS):
Basic volume rendering:
Advanced volume rendering (Volume Rendering module):
Large geometry rendering such as large surfaces from Isosurface or Generate Surface, large point clusters, large numerical simulation meshes:
Image processing and quantification (Avizo 3D Pro):
Anisotropic Diffusion, Non-Local Means Filter (high-performance smoothing and noise reduction image filters), Avizo XLabSuite Extension (absolute permeability computation):
Other compute modules, display module data extraction:
GPU computing using a custom module programmed with the Avizo XPand C++ API and a GPU API:
Special considerations
Environment variables
QT_PLUGIN_PATH must not be exported as a system-wide environment variable because it can interfere with this application.
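If the variable has been exported system-wide, you can check for it and clear it in the current session before launching the application:

```shell
# QT_PLUGIN_PATH must not be set system-wide; detect it in the current
# environment and unset it for this session.
if env | grep -q '^QT_PLUGIN_PATH='; then
    unset QT_PLUGIN_PATH
fi
```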
Embedded documentation browser
Avizo documentation is rendered by a sandboxed embedded browser (WebEngine). If Avizo is executed locally via a remote executable file, you must manually disable the sandbox to access the documentation: set the environment variable QTWEBENGINE_DISABLE_SANDBOX to 1. The same setting may be needed on some Linux systems if the anonymous namespaces feature is disabled and you do not want to, or cannot, activate it. More details can be found here.
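For example, on Linux the sandbox can be disabled for the current session before launching the application:

```shell
# Disable the WebEngine sandbox so the embedded documentation browser
# can start in the restricted setups described above.
export QTWEBENGINE_DISABLE_SANDBOX=1

# Confirm the variable is set for the session.
echo "$QTWEBENGINE_DISABLE_SANDBOX"
```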
Firewall
Internet access is required to activate Avizo. Your firewall may prevent the connection to the license server.
Linux
Avizo is only available for Intel64/AMD64 systems.
The official Linux distribution for Avizo is Ubuntu 20.04 (64-bit PC desktop image). Nevertheless, Avizo is likely to work on some other 64-bit Linux distributions if the required version of system libraries can be found, but technical support for those platforms will be limited.
Notes:
Windows
In Windows, the maximum path length (MAX_PATH) is 260 characters. You may hit this limitation if you install Avizo under an already long folder path, resulting in a failed or incomplete installation. One way to overcome this issue is to activate Windows support for extended-length paths, as explained here.
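On Windows 10 version 1607 and later, extended-length path support can be enabled through the LongPathsEnabled registry value; a sketch of the command, to be run from an elevated prompt (a reboot or re-login may be required for it to take effect):

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f
```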
XPand C++ API
To create custom extensions for Avizo with the C++ API available in Avizo 3D Pro on Windows, you will need Microsoft Visual Studio® or an equivalent IDE with a Microsoft VS2019 toolchain (MSVC v142 - VS 2019 C++ x64/x86). It is important to install Visual Studio prior to running Avizo in debug mode.
To create custom extensions for Avizo with the C++ API available in Avizo 3D Pro on Linux, you will need gcc 9.x on Ubuntu 20.04 (64-bit PC desktop image). Use the following command to determine the version of the GNU compiler:
gcc --version
Notes:
MATLAB
Currently supported version of MATLAB on all platforms is 2020a. To use the Calculus MATLAB module that establishes a connection to MATLAB (MathWorks, Inc.), follow these installation instructions:
Windows
If you did not register during installation, enter the following command on the Windows command line: matlab /regserver.
In addition, add MATLAB_INSTALLATION_PATH/bin and MATLAB_INSTALLATION_PATH/bin/win64 in your PATH environment variable to allow Avizo to find MATLAB libraries.
Linux
You must have the C shell csh installed at /bin/csh. If it is not present, you can install it using apt-get install csh.
The LD_LIBRARY_PATH environment variable should be set to MATLAB_INSTALLATION_PATH/bin/glnxa64 on Linux 64-bit.
The PATH environment variable should be also set to MATLAB_INSTALLATION_PATH/bin.
If you still have trouble starting Calculus MATLAB after setting the environment variable, it might be because the GNU Standard C++ Library (libstdc++) installed on your platform is older than the one required by MATLAB. You can check MATLAB's embedded libstdc++ version in MATLAB_INSTALLATION_PATH/sys/os/glnxa64 on Linux 64-bit.
If needed, add this path to LD_LIBRARY_PATH.
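Putting the Linux steps together (the MATLAB installation path below is a hypothetical placeholder; substitute your actual install location):

```shell
# Hypothetical install location; adjust to your actual MATLAB 2020a path.
MATLAB_INSTALLATION_PATH=/usr/local/MATLAB/R2020a

# Let the application find MATLAB's shared libraries and the matlab
# executable, preserving any existing values.
export LD_LIBRARY_PATH="$MATLAB_INSTALLATION_PATH/bin/glnxa64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export PATH="$MATLAB_INSTALLATION_PATH/bin:$PATH"
```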
Dell Backup and Recovery Application
We have detected some incompatibility issues with former versions (< 1.9) of the Dell Backup and Recovery Application which can make Avizo crash when opening files with the file dialog. Please update your Dell Backup and Recovery Application to 1.9.2.8 or higher if you encounter this issue.
Remote display
Avizo is not tested in remote sessions; remote display is not supported.
PerGeos Software runs on:
Note: Starting with the release 2022.2, CentOS 7 support is discontinued and Ubuntu 20.04 becomes the officially supported Linux platform. There will be no new product development nor update tested on CentOS 7 after the 2022.1 version. You can still use the CentOS 7 versions of our Software Products and we will continue to provide bug fixes for 12 months after the 2022.1 release date.
Some of the extensions or functionalities are limited to some platforms:
This document is intended to give recommendations about choosing a suitable workstation to run PerGeos.
The four most important components that need to be considered are the graphics card (GPU), the CPU, the RAM and the hard drive.
The performance of direct volume rendering of large volumetric data or large triangulated surface visualization extracted from the data depends heavily on the GPU capability. The performance of image processing algorithms depends heavily on the performance of the CPU. The ability to quickly load or save large data depends heavily on the hard drive performance. And, of course, the amount of available memory in the system will be the main limitation on the size of the data that can be loaded and processed.
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
The single most important determinant of PerGeos performance for visualization is the graphics card.
PerGeos should run on any graphics system (this includes GPU and its driver) that provides a complete implementation of OpenGL 2.1 or higher (certain features may not be available depending on the OpenGL version and extensions supported). However, graphics board and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require having graphics memory large enough to hold the actual data.
High-end graphics cards have 16 to 32 GB of memory. For optimal performance, volumetric visualization at full resolution requires that the data fit in graphics memory (some volume rendering modules of PerGeos are able to work around this limitation).
PerGeos will not benefit from multiple graphics boards for the purpose of visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation, and while the computation can run on the single CUDA-enabled graphics board, this computation can also run on a second CUDA-enabled graphics card installed on the system. A multiple graphics board configuration can be useful to drive many screens or in immersive environments.
Modules leveraging CUDA features ship compiled kernels in binary form only for certain target GPU compute capabilities. Newer GPUs whose compute capability major version is greater than those the binaries were compiled for require a just-in-time (JIT) compilation step, which can take up to several tens of minutes. The JIT result is stored in a file system cache whose size is controlled by the CUDA_CACHE_MAXSIZE environment variable. Be aware that GPU driver updates can reset this CUDA cache.
When comparing graphics boards, there are many different criteria and performance numbers to consider. Some are more important than others, and some are more important for certain kinds of rendering. Thus, it's important to consider your specific visualization requirements. Integrated graphics boards are not recommended for graphics-intensive applications such as PerGeos except for basic visualization.
Wikipedia articles on NVIDIA GeForce/Quadro and AMD Radeon/FirePro cards will detail specific performance metrics:
Professional graphics boards
Vendor | Family | Series |
NVIDIA | Quadro | Maxwell, Kepler, Pascal, RTX, Turing, Ampere |
AMD | FirePro | W, V |
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release.
Standard graphics boards
Vendor | Family | Series |
NVIDIA | GeForce | Maxwell, Kepler, Pascal, RTX, Turing, Ampere |
AMD | Radeon | since GCN 1.1 |
Intel | HD Graphics | Broadwell, Skylake |
Due to vendor support policies, on standard graphics boards we are not able to commit to providing a fix for bugs caused by the driver.
Our products are tested on the following configurations:
Platform | GPU | Driver number |
Windows 10 | NVIDIA Tesla M60 | 512.78 |
Windows 11 | NVIDIA RTX A4500 | 528.02 |
Ubuntu-20.04 | NVIDIA T1000 | 525.105.17 |
System memory is the second most important determinant for PerGeos users who need to process large data.
You may need much more memory than the actual size of the data you want to load within PerGeos. Some processing may require several times the memory required by the original data set. If, for instance, you load a 4 GB data set in memory, apply a non-local means filter to the original data, and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results of your processing. Basic operations commonly need 2 to 3 times the memory footprint of the data being processed; more complex workflows may need up to 6 or 8 times that amount, so 32 GB may be required for a 4 GB dataset.
Also note that the size of the data on disk may be much smaller than the memory needed to load it, as the file format may compress the data (for instance, loading a stack of JPEG files).
PerGeos's Large Data Access (LDA) technology will enable you to work with data sizes exceeding your system's physical memory. LDA is an excellent way to stretch the performance, but it is not a direct substitute for having more physical memory. The best performance and optimal resolution will be achieved by using PerGeos's LDA technology in combination with a large amount of system memory. LDA provides a very convenient way to quickly load and browse your whole dataset. Note that LDA data will not work with most compute modules, which require the full resolution data to be loaded in memory.
PerGeos provides another loading option that supports 2D and 3D image processing from disk to disk ("read as external disk data"), without loading the entire data set into memory; modules then operate per data slab. This enables processing and quantification of large image data even with limited hardware memory. Since processing each slab requires loading data and saving results from/to the hard drive, it dramatically increases processing time. Thus, processing data fully loaded in memory is always preferred for best performance.
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD) (e.g., 7200rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; your actual experience is likely to be closer to 40 MB/second. When you want to read a 1 GB file from the disk, you will likely have to wait 25 seconds. For a 10 GB file, the wait is 250 seconds, over 4 minutes. LDA technology will greatly reduce wait time for data visualization, but disk access will still be a limiting factor when you want to read data files at full resolution for data processing. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
For best performance, the recommended solution is to configure multiple hard drives (3 or more, HDD or SSD) in RAID 5 mode; note that RAID configurations may require substantially more system administration. For performance only, RAID 0 can be used, but be aware of the risk of data loss upon hard-drive failure. If you want both performance and data redundancy, RAID 5 is recommended.
Reading data across the network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and number/size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. LDA technology may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
While PerGeos mostly relies on GPU performance for visualization, many modules are computationally intensive and their performance will be strongly affected by CPU performance.
More and more modules inside PerGeos are multi-threaded and thus can take advantage of multiple CPUs or multiple CPU cores available on your system. This is the case for most of the quantification modules provided with PerGeos, a number of modules of the Petrophysics Extension and also various computation modules.
Fast CPU clock, number of cores, and memory cache are the three most important factors affecting PerGeos performance. While most multi-threaded modules will scale up nicely according to the number of cores, the scaling bottleneck may come from memory access. From experience, up to 8 cores show almost linear scalability while more than 8 cores do not show much gain in performance. A larger memory cache improves performance.
Here is a summary of hardware characteristics to consider for optimizing particular tasks.
Visualizing large data (LDA):
Basic volume rendering:
Advanced volume rendering (Volume Rendering module):
Large geometry rendering such as large surfaces from Isosurface or Generate Surface, large point clusters, large numerical simulation meshes:
Image processing and quantification:
Anisotropic Diffusion, Non-Local Means Filter (high-performance smoothing and noise reduction image filters):
Other compute modules, display module data extraction:
GPU computing using a custom module programmed with the PerGeos XPand C++ API and a GPU API:
QT_PLUGIN_PATH must not be exported as a system-wide environment variable because it can interfere with this application.
PerGeos documentation is rendered by a sandboxed embedded browser (WebEngine). If PerGeos is executed locally via a remote executable file, you must manually disable the sandbox to access the documentation: set the environment variable QTWEBENGINE_DISABLE_SANDBOX to 1. The same setting may be needed on some Linux systems if the anonymous namespaces feature is disabled and you do not want to, or cannot, activate it. More details can be found here.
Internet access is required to activate PerGeos. Your firewall may prevent the connection to the license server.
PerGeos is only available for Intel64/AMD64 systems.
The official Linux distribution for PerGeos is Ubuntu 20.04 (64-bit PC desktop image). Nevertheless, PerGeos is likely to work on some other 64-bit Linux distributions if the required version of system libraries can be found, but technical support for those platforms will be limited.
Notes:
In Windows, the maximum path length (MAX_PATH) is 260 characters. You may hit this limitation if you install PerGeos under an already long folder path, resulting in a failed or incomplete installation. One way to overcome this issue is to activate Windows support for extended-length paths, as explained here.
To create custom extensions for PerGeos with the C++ API available in PerGeos on Windows, you will need Microsoft Visual Studio® or an equivalent IDE with a Microsoft VS2019 toolchain (MSVC v142 - VS 2019 C++ x64/x86). It is important to install Visual Studio prior to running PerGeos in debug mode.
To create custom extensions for PerGeos with the C++ API available in PerGeos on Linux, you will need gcc 9.x on Ubuntu 20.04 (64-bit PC desktop image). Use the following command to determine the version of the GNU compiler:
gcc --version
Currently supported version of MATLAB on all platforms is 2020a. To use the Calculus MATLAB module that establishes a connection to MATLAB (MathWorks, Inc.), follow these installation instructions:
Windows
If you did not register during installation, enter the following command on the Windows command line: matlab /regserver.
In addition, add MATLAB_INSTALLATION_PATH/bin and MATLAB_INSTALLATION_PATH/bin/win64 in your PATH environment variable to allow PerGeos to find MATLAB libraries.
Linux
You must have the C shell csh installed at /bin/csh. If it is not present, you can install it using apt-get install csh.
The LD_LIBRARY_PATH environment variable should be set to MATLAB_INSTALLATION_PATH/bin/glnxa64 on Linux 64-bit.
The PATH environment variable should be also set to MATLAB_INSTALLATION_PATH/bin.
If you still have trouble starting Calculus MATLAB after setting the environment variable, it might be because the GNU Standard C++ Library (libstdc++) installed on your platform is older than the one required by MATLAB. You can check MATLAB's embedded libstdc++ version in MATLAB_INSTALLATION_PATH/sys/os/glnxa64 on Linux 64-bit.
If needed, add this path to LD_LIBRARY_PATH.
We have detected some incompatibility issues with former versions (<1.9) of Dell Backup and Recovery Application which can make PerGeos crash when opening files with the file dialog. Please update your Dell Backup and Recovery Application to 1.9.2.8 or higher if you encounter this issue.
PerGeos is not tested in remote sessions; remote display is not supported.
PerGeos Software runs on:
Note: Starting with the release 2022.2, CentOS 7 support is discontinued and Ubuntu 20.04 becomes the officially supported Linux platform. There will be no new product development nor update tested on CentOS 7 after the 2022.1 version. You can still use the CentOS 7 versions of our Software Products and we will continue to provide bug fixes for 12 months after the 2022.1 release date.
Some of the extensions or functionalities are limited to some platforms:
This document is intended to give recommendations about choosing a suitable workstation to run PerGeos.
The four most important components that need to be considered are the graphics card (GPU), the CPU, the RAM and the hard drive.
The performance of direct volume rendering of large volumetric data or large triangulated surface visualization extracted from the data depends heavily on the GPU capability. The performance of image processing algorithms depends heavily on the performance of the CPU. The ability to quickly load or save large data depends heavily on the hard drive performance. And, of course, the amount of available memory in the system will be the main limitation on the size of the data that can be loaded and processed.
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
The single most important determinant of PerGeos performance for visualization is the graphics card.
PerGeos should run on any graphics system (this includes GPU and its driver) that provides a complete implementation of OpenGL 2.1 or higher (certain features may not be available depending on the OpenGL version and extensions supported). However, graphics board and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require having graphics memory large enough to hold the actual data.
High-end graphics cards have 16 to 32 GB of memory. Optimal performance volumetric visualization at full resolution requires that data fit in graphics memory (some volume rendering modules of PerGeos are able to go around this limitation).
PerGeos will not benefit from multiple graphics boards for the purpose of visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation, and while the computation can run on the single CUDA-enabled graphics board, this computation can also run on a second CUDA-enabled graphics card installed on the system. A multiple graphics board configuration can be useful to drive many screens or in immersive environments.
Modules leveraging CUDA features contain compiled kernels in binary form just for some target GPUs' "compute capabilities". New, more recent, GPUs having a compute capability major greater that those for which binaries have been compiled requires a Just in Time compilation step (that can take till some tenth of minutes) and whose result is stored in a file system cache whose size is controlled by this environment variable: CUDA_CACHE_MAXSIZE. Please be warned that GPU driver updates can reset this CUDA cache.
When comparing graphics boards, there are many different criteria and performance numbers to consider. Some are more important than others, and some are more important for certain kinds of rendering. Thus, it's important to consider your specific visualization requirements. Integrated graphics boards are not recommended for graphics-intensive applications such as PerGeos except for basic visualization.
Wikipedia articles on NVIDIA GeForce/Quadro and AMD Radeon/FirePro cards will detail specific performance metrics:
Professional graphics boards
Vendor | Family | Series |
NVIDIA | Quadro | Maxwell, Kepler, Pascal, RTX, Turing, Ampere |
AMD | FirePro | W, V |
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release.
Standard graphics boards
Vendor | Family | Series |
NVIDIA | GeForce | Maxwell, Kepler, Pascal, RTX, Turing, Ampere |
AMD | Radeon | since GCN 1.1 |
Intel | HD Graphics | Broadwell, Skylake |
Due to vendor support policies, on standard graphics boards we are not able to commit to providing a fix for bugs caused by the driver.
System memory is the second most important determinant for PerGeos users who need to process large data.
You may need much more memory than the actual size of the data you want to load within PerGeos. Some processing may require several times the memory required by the original data set. If you want to load, for instance, a 4 GB data set in memory and apply a non-local means filter to the original data and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results of your processing. Commonly you will need 2 or 3 times the memory footprint of the data being processed for basic operations. For more complex workflows you may need up to 6 or 8 times amount of memory, so 32 GB may be required for a 4 GB dataset.
Also notice that size of the data on disk may be much smaller than memory needed to load the data as the file format may have compressed the data (for instance, loading a stack of JPEG files).
PerGeos's Large Data Access (LDA) technology will enable you to work with data sizes exceeding your system's physical memory. LDA is an excellent way to stretch the performance, but it is not a direct substitute for having more physical memory. The best performance and optimal resolution will be achieved by using PerGeos's LDA technology in combination with a large amount of system memory. LDA provides a very convenient way to quickly load and browse your whole dataset. Note that LDA data will not work with most compute modules, which require the full resolution data to be loaded in memory.
PerGeos provides another loading option to support 2D and 3D image processing from disk to disk (``read as external disk data''), without requiring loading the entire data into memory; modules then operate per data slab. This enables processing and quantification of large image data even with limited hardware memory. Since processing of each slab requires loading data and saving results from/to the hard drive, it dramatically increases processing time. Thus, processing data fully loaded in memory is always preferred for best performance.
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD) (e.g., 7200rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; your actual experience is likely to be closer to 40 MB/second. When you want to read a 1 GB file from the disk, you will likely have to wait 25 seconds. For a 10 GB file, the wait is 250 seconds, over 4 minutes. LDA technology will greatly reduce wait time for data visualization, but disk access will still be a limiting factor when you want to read data files at full resolution for data processing. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
For best performance, the recommended solution is to configure multiple hard drives (3 or more HDD or SSD) in RAID5 mode; note that RAID configurations may require substantially more system administration. For performance only, RAID 0 could be used, but be warned of risk of data loss upon hard-drive failure. If you want performance and data redundancy then RAID 5 is recommended.
Reading data across the network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and number/size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. LDA technology may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
While PerGeos mostly relies on GPU performance for visualization, many modules are computational intensive and their performance will be strongly affected by CPU performance.
More and more modules inside PerGeos are multi-threaded and thus can take advantage of multiple CPUs or multiple CPU cores available on your system. This is the case for most of the quantification modules provided with PerGeos, a number of modules of the Petrophysics Extension and also various computation modules.
Fast CPU clock, number of cores, and memory cache are the three most important factors affecting PerGeos performance. While most multi-threaded modules will scale up nicely according to the number of cores, the scaling bottleneck may come from memory access. From experience, up to 8 cores show almost linear scalability while more than 8 cores do not show much gain in performance. A larger memory cache improves performance.
Here is a summary of hardware characteristics to consider for optimizing particular tasks.
Visualizing large data (LDA):
Basic volume rendering:
Advanced volume rendering (Volume Rendering module):
Large geometry rendering such as large surfaces from Isosurface or Generate Surface, large point clusters, large numerical simulation meshes,...:
Image processing and quantification:
Anisotropic Diffusion, Non-Local Means Filter (high-performance smoothing and noise reduction image filters) :
Other compute modules, display module data extraction:
GPU computing using custom modules programmed with the PerGeos XPand C++ API and GPU API:
QT_PLUGIN_PATH must not be exported as a system-wide environment variable because it can interfere with this application.
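For example, before launching the application from a shell, you can check for a system-wide setting and clear it for the current session only:

```shell
# A system-wide QT_PLUGIN_PATH can make the application pick up incompatible Qt plugins.
# Check whether it is set, then clear it for this session.
echo "QT_PLUGIN_PATH is: ${QT_PLUGIN_PATH:-<not set>}"
unset QT_PLUGIN_PATH
```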
PerGeos documentation is rendered by a sandboxed embedded browser (WebEngine). If PerGeos is executed locally via a remote executable file, you must disable the sandbox manually to access the documentation: set the environment variable QTWEBENGINE_DISABLE_SANDBOX to 1. The same setting may be needed on some Linux systems if the anonymous namespaces feature is disabled and you do not want to or cannot activate it. More details can be found here.
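For example, on Linux the variable can be set for the current session before starting PerGeos:

```shell
# Disable the WebEngine sandbox so the embedded documentation browser can start
export QTWEBENGINE_DISABLE_SANDBOX=1
```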
Internet access is required to activate PerGeos. Your firewall may prevent the connection to the license server.
PerGeos is only available for Intel64/AMD64 systems.
The official Linux distribution for PerGeos is Ubuntu 20.04 (64-bit PC desktop image). Nevertheless, PerGeos is likely to work on some other 64-bit Linux distributions if the required version of system libraries can be found, but technical support for those platforms will be limited.
Notes:
In Windows, the maximum path length (MAX_PATH) is 260 characters. You may hit this limitation if you install PerGeos into a folder whose path is already long, which will result in a failed or incomplete installation. One way to overcome this issue is to enable Windows support for extended-length paths, as explained here.
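For reference, long-path support can be enabled from an elevated Windows command prompt via the registry (Windows 10 version 1607 or later; a sketch only, verify against Microsoft's documentation before use):

```shell
rem Enable Win32 long-path support (requires administrator rights; sign out/in afterwards)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f
```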
To create custom extensions for PerGeos with the C++ API available in PerGeos on Windows, you will need Microsoft Visual Studio® or an equivalent IDE with a Microsoft VS2019 toolchain (MSVC v142 - VS 2019 C++ x64/x86). It is important to install Visual Studio prior to running PerGeos in debug mode.
To create custom extensions for PerGeos with the C++ API available in PerGeos on Linux, you will need gcc 9.x on Ubuntu 20.04 (64-bit PC desktop image). Use the following command to determine the version of the GNU compiler:
gcc --version
Currently supported version of MATLAB on all platforms is 2020a. To use the Calculus MATLAB module that establishes a connection to MATLAB (MathWorks, Inc.), follow these installation instructions:
Windows
If you did not register during installation, enter the following command on the Windows command line: matlab /regserver.
In addition, add MATLAB_INSTALLATION_PATH/bin and MATLAB_INSTALLATION_PATH/bin/win64 in your PATH environment variable to allow PerGeos to find MATLAB libraries.
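For example, from a Windows command prompt (the MATLAB installation path below is illustrative; adjust it to your system):

```shell
rem Register MATLAB as a COM server, then expose its libraries on PATH for the current session
matlab /regserver
set PATH=%PATH%;C:\Program Files\MATLAB\R2020a\bin;C:\Program Files\MATLAB\R2020a\bin\win64
```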
Linux
You must have the C shell csh installed at /bin/csh. If it is not present, you can install it using apt-get install csh.
The LD_LIBRARY_PATH environment variable should be set to MATLAB_INSTALLATION_PATH/bin/glnxa64 on Linux 64-bit.
The PATH environment variable should also be set to MATLAB_INSTALLATION_PATH/bin.
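For example, for a default MATLAB R2020a installation (the path is an assumption; adjust it to your installation):

```shell
# Point PerGeos at the MATLAB runtime libraries for the current session
MATLAB_INSTALLATION_PATH=/usr/local/MATLAB/R2020a
export LD_LIBRARY_PATH="$MATLAB_INSTALLATION_PATH/bin/glnxa64:$LD_LIBRARY_PATH"
export PATH="$MATLAB_INSTALLATION_PATH/bin:$PATH"
```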
If you still have trouble starting Calculus MATLAB after setting the environment variable, it might be because the GNU Standard C++ Library (libstdc++) installed on your platform is older than the one required by MATLAB. You can check MATLAB's embedded libstdc++ version in MATLAB_INSTALLATION_PATH/sys/os/glnxa64 on Linux 64-bit.
If needed, add this path to LD_LIBRARY_PATH.
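One way to compare the two libraries is to list their highest GLIBCXX symbol versions (a diagnostic sketch; the paths assume a 64-bit Ubuntu system and a default MATLAB R2020a install):

```shell
# Highest GLIBCXX version provided by the system libstdc++ ...
strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX | sort -V | tail -n 1
# ... and by MATLAB's bundled copy; the system one should be at least as new
strings /usr/local/MATLAB/R2020a/sys/os/glnxa64/libstdc++.so.6 | grep GLIBCXX | sort -V | tail -n 1
```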
We have detected incompatibility issues with earlier versions of the Dell Backup and Recovery application that can make PerGeos crash when opening files with the file dialog. Please update Dell Backup and Recovery to version 1.9.2.8 or higher if you encounter this issue.
PerGeos is not tested in remote sessions; remote display is not supported.
Amira Software runs on:
Note: For the next release, 2022.2, CentOS 7 will be discontinued and replaced by Ubuntu 20.04 as the officially supported Linux platform. There will be no new product development nor updates on CentOS 7 after this version. You can still use the CentOS 7 versions of our Software Products, and we will continue to provide bug fixes for 12 months.
Some of the Editions and Extensions or functionalities are limited to some platforms:
This document is intended to give recommendations about choosing a suitable workstation to run Amira.
The four most important components that need to be considered are the graphics card (GPU), the CPU, the RAM and the hard drive.
The performance of direct volume rendering of large volumetric data or large triangulated surface visualization extracted from the data depends heavily on the GPU capability. The performance of image processing algorithms depends heavily on the performance of the CPU. The ability to quickly load or save large data depends heavily on the hard drive performance. And, of course, the amount of available memory in the system will be the main limitation on the size of the data that can be loaded and processed.
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
In this document, the term Amira refers to Amira Software and all Amira Software extensions.
The single most important determinant of Amira performance for visualization is the graphics card.
Amira should run on any graphics system (this includes GPU and its driver) that provides a complete implementation of OpenGL 2.1 or higher (certain features may not be available depending on the OpenGL version and extensions supported). However, graphics board and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require having graphics memory large enough to hold the actual data.
High-end graphics cards have 16 to 32 GB of memory. Volumetric visualization at full resolution performs best when the data fit in graphics memory (some volume rendering modules of Amira are able to work around this limitation).
Amira will not benefit from multiple graphics boards for the purpose of visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation; while this computation can run on a single CUDA-enabled graphics board, it can also run on a second CUDA-enabled graphics card installed in the system. A multiple graphics board configuration can be useful to drive many screens or in immersive environments.
When comparing graphics boards, there are many different criteria and performance numbers to consider. Some are more important than others, and some are more important for certain kinds of rendering. Thus, it's important to consider your specific visualization requirements. Integrated graphics boards are not recommended for graphics-intensive applications such as Amira except for basic visualization.
Wikipedia articles on NVIDIA GeForce/Quadro and AMD Radeon/FirePro cards will detail specific performance metrics:
Professional graphics boards
Vendor | Family | Series |
---|---|---|
NVIDIA | Quadro | Maxwell, Kepler, Pascal, RTX, Turing |
AMD | FirePro | W, V |
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release.
Standard graphics boards
Vendor | Family | Series |
---|---|---|
NVIDIA | GeForce | Maxwell, Kepler, Pascal, RTX, Turing |
AMD | Radeon | since GCN 1.1 |
Intel | HD Graphics | Broadwell, Skylake |
Due to vendor support policies, on standard graphics boards we are not able to commit to providing a fix for bugs caused by the driver.
System memory is the second most important determinant for Amira users who need to process large data.
You may need much more memory than the actual size of the data you want to load into Amira. Some processing may require several times the memory of the original data set. If, for instance, you load a 4 GB data set into memory, apply a non-local means filter to the original data, and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results of your processing. Commonly you need 2 to 3 times the memory footprint of the data being processed for basic operations. For more complex workflows you may need 6 to 8 times that amount, so 32 GB may be required for a 4 GB dataset.
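As a back-of-the-envelope sizing check (the multipliers are the rules of thumb above, not guarantees):

```shell
# Rough RAM sizing for a given dataset footprint
dataset_gb=4
basic=$(( dataset_gb * 3 ))     # basic operations: 2-3x the data footprint
complex=$(( dataset_gb * 8 ))   # complex workflows: 6-8x the data footprint
echo "plan for roughly ${basic}-${complex} GB of RAM for a ${dataset_gb} GB dataset"
```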
Also note that the size of the data on disk may be much smaller than the memory needed to load it, as the file format may compress the data (for instance, a stack of JPEG files).
Amira can handle data that exceed your system's physical memory using the Large Data Access (LDA) or Smart Multichannel Series (SMS) technologies (SMS requires the Xplore5D extension). These are excellent ways to stretch performance, but they are not a direct substitute for more physical memory. The best performance and optimal resolution are achieved by using Amira large data technologies in combination with a large amount of system memory.
Amira 3D Pro provides another loading option that supports 2D and 3D image processing from disk to disk, without loading the entire data set into memory; modules then operate per data slab. This enables processing and quantification of large image data even with limited hardware memory. Since processing each slab requires loading data from and saving results to the hard drive, it dramatically increases processing time. Thus, processing data fully loaded in memory is always preferred for best performance.
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD) (e.g., 7200rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; your actual experience is likely to be closer to 40 MB/second. When you want to read a 1 GB file from the disk, you likely have to wait 25 seconds. For a 10 GB file, the wait is 250 seconds, over 4 minutes. Large data technologies will greatly reduce wait time for data visualization, but disk access will still be a limiting factor when you want to read data files at full resolution for data processing. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
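The wait times above follow directly from the sustained throughput; for instance:

```shell
# Time to read a file at a sustained 40 MB/s (realistic HDD throughput)
file_mb=10240                     # a 10 GB file
rate_mb_s=40
echo "about $(( file_mb / rate_mb_s )) seconds"   # ~256 s, in line with the ~4 minutes above
```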
For best performance, the recommended solution is to configure multiple hard drives (3 or more HDDs or SSDs) in RAID 5 mode; note that RAID configurations may require substantially more system administration. For performance only, RAID 0 could be used, but be aware of the risk of data loss upon hard-drive failure. If you want both performance and data redundancy, RAID 5 is recommended.
Reading data across the network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and number/size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. Large data technologies may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
While Amira mostly relies on GPU performance for visualization, many modules are computationally intensive and their performance will be strongly affected by CPU performance.
More and more modules inside Amira are multi-threaded and thus can take advantage of multiple CPUs or multiple CPU cores available on your system. This is the case for most of the quantification modules provided with Amira XImagePAQ and also various computation modules.
Fast CPU clock, number of cores, and memory cache are the three most important factors affecting Amira performance. While most multi-threaded modules will scale up nicely according to the number of cores, the scaling bottleneck may come from memory access. From experience, up to 8 cores show almost linear scalability while more than 8 cores do not show much gain in performance. A larger memory cache improves performance.
Here is a summary of hardware characteristics to consider for optimizing particular tasks.
Visualizing large data (LDA or SMS):
Basic volume rendering:
Advanced volume rendering (Volume Rendering module):
Large geometry rendering such as large surfaces from Isosurface or Generate Surface, large point clusters, large numerical simulation meshes:
Image processing and quantification (Amira 3D Pro):
Anisotropic Diffusion, Non-Local Means Filter (high-performance smoothing and noise reduction image filters):
Other compute modules, display module data extraction:
GPU computing using custom modules programmed with the Amira XPand C++ API and GPU API:
QT_PLUGIN_PATH must not be exported as a system-wide environment variable because it can interfere with this application.
Internet access is required to activate Amira. Your firewall may prevent the connection to the license server.
Amira is only available for Intel64/AMD64 systems.
The official Linux distribution for Amira is CentOS 7 64-bit. Nevertheless, Amira is likely to work on some other 64-bit Linux distributions if the required versions of system libraries can be found, but technical support for those platforms will be limited. Here is a non-exhaustive list of these 64-bit Linux distributions:
Notes:
xorg.conf fragment (disables the Composite extension):
Section "Extensions"
    Option "Composite" "disable"
EndSection
To create custom extensions for Amira with the C++ API available in Amira 3D Pro on Windows, you will need Microsoft Visual Studio® 2013, Update 4. It is important to install Visual Studio prior to running Amira in debug mode.
To create custom extensions for Amira with the C++ API available in Amira 3D Pro on Linux, you will need gcc 4.8.x on RHEL 7. Use the following command to determine the version of the GNU compiler:
gcc --version
Notes:
Currently supported version of MATLAB on all platforms is 2020a. To use the Calculus MATLAB module that establishes a connection to MATLAB (MathWorks, Inc.), follow these installation instructions:
Windows
If you did not register during installation, enter the following command on the Windows command line: matlab /regserver.
In addition, add MATLAB_INSTALLATION_PATH/bin and MATLAB_INSTALLATION_PATH/bin/win64 in your PATH environment variable to allow Amira to find MATLAB libraries.
Linux
The LD_LIBRARY_PATH environment variable should be set to MATLAB_INSTALLATION_PATH/bin/glnxa64 on Linux 64-bit.
The PATH environment variable should also be set to MATLAB_INSTALLATION_PATH/bin.
If you still have trouble starting Calculus MATLAB after setting the environment variable, it might be because the GNU Standard C++ Library (libstdc++) installed on your platform is older than the one required by MATLAB. You can check MATLAB's embedded libstdc++ version in MATLAB_INSTALLATION_PATH/sys/os/glnxa64 on Linux 64-bit.
If needed, add this path to LD_LIBRARY_PATH.
We have detected incompatibility issues with earlier versions of the Dell Backup and Recovery application that can make Amira crash when opening files with the file dialog. Please update Dell Backup and Recovery to version 1.9.2.8 or higher if you encounter this issue.
Amira is not tested in remote sessions; remote display is not supported.
Avizo Software runs on:
Note: For the next release, 2022.2, CentOS 7 will be discontinued and replaced by Ubuntu 20.04 as the officially supported Linux platform. There will be no new product development nor updates on CentOS 7 after this version. You can still use the CentOS 7 versions of our Software Products, and we will continue to provide bug fixes for 12 months.
Some of the Editions and Extensions or functionalities are limited to some platforms:
This document is intended to give recommendations about choosing a suitable workstation to run Avizo Software.
The four most important components that need to be considered are the graphics card (GPU), the CPU, the RAM and the hard drive.
The performance of direct volume rendering of large volumetric data or large triangulated surface visualization extracted from the data depends heavily on the GPU capability. The performance of image processing algorithms depends heavily on the performance of the CPU. The ability to quickly load or save large data depends heavily on the hard drive performance. And, of course, the amount of available memory in the system will be the main limitation on the size of the data that can be loaded and processed.
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
In this document, the term Avizo refers to all Avizo Software editions and all Avizo Software extensions.
The single most important determinant of Avizo performance for visualization is the graphics card.
Avizo should run on any graphics system (this includes GPU and its driver) that provides a complete implementation of OpenGL 2.1 or higher (certain features may not be available depending on the OpenGL version and extensions supported). However, graphics board and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require having graphics memory large enough to hold the actual data.
High-end graphics cards have 16 to 32 GB of memory. Volumetric visualization at full resolution performs best when the data fit in graphics memory (some volume rendering modules of Avizo are able to work around this limitation).
Avizo will not benefit from multiple graphics boards for the purpose of visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation; while this computation can run on a single CUDA-enabled graphics board, it can also run on a second CUDA-enabled graphics card installed in the system. A multiple graphics board configuration can be useful to drive many screens or in immersive environments.
When comparing graphics boards, there are many different criteria and performance numbers to consider. Some are more important than others, and some are more important for certain kinds of rendering. Thus, it's important to consider your specific visualization requirements. Integrated graphics boards are not recommended for graphics-intensive applications such as Avizo except for basic visualization.
Wikipedia articles on NVIDIA GeForce/Quadro and AMD Radeon/FirePro cards will detail specific performance metrics:
Professional graphics boards
Vendor | Family | Series |
---|---|---|
NVIDIA | Quadro | Maxwell, Kepler, Pascal, RTX, Turing |
AMD | FirePro | W, V |
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release.
Standard graphics boards
Vendor | Family | Series |
---|---|---|
NVIDIA | GeForce | Maxwell, Kepler, Pascal, RTX, Turing |
AMD | Radeon | since GCN 1.1 |
Intel | HD Graphics | Broadwell, Skylake |
Due to vendor support policies, on standard graphics boards we are not able to commit to providing a fix for bugs caused by the driver.
System memory is the second most important determinant for Avizo users who need to process large data.
You may need much more memory than the actual size of the data you want to load into Avizo. Some processing may require several times the memory of the original data set. If, for instance, you load a 4 GB data set into memory, apply a non-local means filter to the original data, and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results of your processing. Commonly you need 2 to 3 times the memory footprint of the data being processed for basic operations. For more complex workflows you may need 6 to 8 times that amount, so 32 GB may be required for a 4 GB dataset.
Also note that the size of the data on disk may be much smaller than the memory needed to load it, as the file format may compress the data (for instance, a stack of JPEG files).
Avizo can handle data that exceed your system's physical memory using the Large Data Access (LDA) or Smart Multichannel Series (SMS) technologies (SMS requires the Xplore5D extension). These are excellent ways to stretch performance, but they are not a direct substitute for more physical memory. The best performance and optimal resolution are achieved by using Avizo large data technologies in combination with a large amount of system memory.
Avizo 3D Pro provides another loading option that supports 2D and 3D image processing from disk to disk, without loading the entire data set into memory; modules then operate per data slab. This enables processing and quantification of large image data even with limited hardware memory. Since processing each slab requires loading data from and saving results to the hard drive, it dramatically increases processing time. Thus, processing data fully loaded in memory is always preferred for best performance.
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD) (e.g., 7200rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; your actual experience is likely to be closer to 40 MB/second. When you want to read a 1 GB file from the disk, you likely have to wait 25 seconds. For a 10 GB file, the wait is 250 seconds, over 4 minutes. Large data technologies will greatly reduce wait time for data visualization, but disk access will still be a limiting factor when you want to read data files at full resolution for data processing. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
For best performance, the recommended solution is to configure multiple hard drives (3 or more HDDs or SSDs) in RAID 5 mode; note that RAID configurations may require substantially more system administration. For performance only, RAID 0 could be used, but be aware of the risk of data loss upon hard-drive failure. If you want both performance and data redundancy, RAID 5 is recommended.
Reading data across the network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and number/size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. Large data technologies may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
While Avizo mostly relies on GPU performance for visualization, many modules are computationally intensive and their performance will be strongly affected by CPU performance.
More and more modules inside Avizo are multi-threaded and thus can take advantage of multiple CPUs or multiple CPU cores available on your system. This is the case for most of the quantification modules provided with Avizo, a number of modules of the Avizo XLabSuite Extension, and also various computation modules.
Fast CPU clock, number of cores, and memory cache are the three most important factors affecting Avizo performance. While most multi-threaded modules will scale up nicely according to the number of cores, the scaling bottleneck may come from memory access. From experience, up to 8 cores show almost linear scalability while more than 8 cores do not show much gain in performance. A larger memory cache improves performance.
Here is a summary of hardware characteristics to consider for optimizing particular tasks.
Visualizing large data (LDA or SMS):
Basic volume rendering:
Advanced volume rendering (Volume Rendering module):
Large geometry rendering such as large surfaces from Isosurface or Generate Surface, large point clusters, large numerical simulation meshes:
Image processing and quantification (Avizo 3D Pro):
Anisotropic Diffusion, Non-Local Means Filter (high-performance smoothing and noise reduction image filters), Avizo XLabSuite Extension (absolute permeability computation):
Other compute modules, display module data extraction:
GPU computing using custom modules programmed with the Avizo XPand C++ API and GPU API:
QT_PLUGIN_PATH must not be exported as a system-wide environment variable because it can interfere with this application.
Internet access is required to activate Avizo. Your firewall may prevent the connection to the license server.
Avizo is only available for Intel64/AMD64 systems.
The official Linux distribution for Avizo is CentOS 7 64-bit. Nevertheless, Avizo is likely to work on some other 64-bit Linux distributions if the required versions of system libraries can be found, but technical support for those platforms will be limited. Here is a non-exhaustive list of these 64-bit Linux distributions:
Notes:
xorg.conf fragment (disables the Composite extension):
Section "Extensions"
    Option "Composite" "disable"
EndSection
To create custom extensions for Avizo with the C++ API available in Avizo 3D Pro on Windows, you will need Microsoft Visual Studio® 2013, Update 4. It is important to install Visual Studio prior to running Avizo in debug mode.
To create custom extensions for Avizo with the C++ API available in Avizo 3D Pro on Linux, you will need gcc 4.8.x on RHEL 7. Use the following command to determine the version of the GNU compiler:
gcc --version
Notes:
Currently supported version of MATLAB on all platforms is 2020a. To use the Calculus MATLAB module that establishes a connection to MATLAB (MathWorks, Inc.), follow these installation instructions:
Windows
If you did not register during installation, enter the following command on the Windows command line: matlab /regserver.
In addition, add MATLAB_INSTALLATION_PATH/bin and MATLAB_INSTALLATION_PATH/bin/win64 in your PATH environment variable to allow Avizo to find MATLAB libraries.
Linux
The LD_LIBRARY_PATH environment variable should be set to MATLAB_INSTALLATION_PATH/bin/glnxa64 on Linux 64-bit.
The PATH environment variable should also be set to MATLAB_INSTALLATION_PATH/bin.
If you still have trouble starting Calculus MATLAB after setting the environment variable, it might be because the GNU Standard C++ Library (libstdc++) installed on your platform is older than the one required by MATLAB. You can check MATLAB's embedded libstdc++ version in MATLAB_INSTALLATION_PATH/sys/os/glnxa64 on Linux 64-bit.
If needed, add this path to LD_LIBRARY_PATH.
We have detected incompatibility issues with earlier versions of the Dell Backup and Recovery application that can make Avizo crash when opening files with the file dialog. Please update Dell Backup and Recovery to version 1.9.2.8 or higher if you encounter this issue.
Avizo is not tested in remote sessions; remote display is not supported.
PerGeos Software runs on:
Some of the extensions or functionalities are limited to some platforms:
This document is intended to give recommendations about choosing a suitable workstation to run PerGeos Software.
The four most important components that need to be considered are the graphics card (GPU), the CPU, the RAM and the hard drive.
The performance of direct volume rendering of large volumetric data or large triangulated surface visualization extracted from the data depends heavily on the GPU capability. The performance of image processing algorithms depends heavily on the performance of the CPU. The ability to quickly load or save large data depends heavily on the hard drive performance. And, of course, the amount of available memory in the system will be the main limitation on the size of the data that can be loaded and processed.
Because the hardware requirements will widely vary according to the size of your data and your workflow, we strongly suggest that you take advantage of our supported evaluation version to try working with one of your typical data sets.
The single most important determinant of PerGeos performance for visualization is the graphics card.
PerGeos should run on any graphics system (this includes GPU and its driver) that provides a complete implementation of OpenGL 2.1 or higher (certain features may not be available depending on the OpenGL version and extensions supported). However, graphics board and driver bugs are not unusual.
The amount of GPU memory needed depends on the size of the data. We recommend a minimum of 1 GB on the card. Some visualization modules may require having graphics memory large enough to hold the actual data.
High-end graphics cards have 16 to 32 GB of memory. Volumetric visualization at full resolution performs best when the data fit in graphics memory (some volume rendering modules of PerGeos are able to work around this limitation).
PerGeos will not benefit from multiple graphics boards for the purpose of visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation; while this computation can run on a single CUDA-enabled graphics board, it can also run on a second CUDA-enabled graphics card installed in the system. A multiple graphics board configuration can be useful to drive many screens or in immersive environments.
When comparing graphics boards, there are many different criteria and performance numbers to consider. Some are more important than others, and some are more important for certain kinds of rendering. Thus, it's important to consider your specific visualization requirements. Integrated graphics boards are not recommended for graphics-intensive applications such as PerGeos except for basic visualization.
Wikipedia articles on NVIDIA GeForce/Quadro and AMD Radeon/FirePro cards will detail specific performance metrics:
Professional graphics boards
Vendor | Family | Series |
---|---|---|
NVIDIA | Quadro | Maxwell, Kepler, Pascal, RTX, Turing |
AMD | FirePro | W, V |
All driver bugs are submitted to the vendors. A fix may be expected in a future driver release.
Standard graphics boards
Vendor | Family | Series |
---|---|---|
NVIDIA | GeForce | Maxwell, Kepler, Pascal, RTX, Turing |
AMD | Radeon | since GCN 1.1 |
Intel | HD Graphics | Broadwell, Skylake |
Due to vendor support policies, on standard graphics boards we are not able to commit to providing a fix for bugs caused by the driver.
System memory is the second most important determinant for PerGeos users who need to process large data.
You may need much more memory than the actual size of the data you want to load into PerGeos. Some processing may require several times the memory of the original data set. If, for instance, you load a 4 GB data set into memory, apply a non-local means filter to the original data, and then compute a distance map, you may need up to 16 or 20 GB of additional memory for the intermediate results of your processing. Commonly you will need 2 to 3 times the memory footprint of the data being processed for basic operations. For more complex workflows you may need 6 to 8 times that amount, so 32 GB may be required for a 4 GB dataset.
Also note that the size of the data on disk may be much smaller than the memory needed to load it, as the file format may compress the data (for instance, a stack of JPEG files).
PerGeos's Large Data Access (LDA) technology will enable you to work with data sizes exceeding your system's physical memory. LDA is an excellent way to stretch the performance, but it is not a direct substitute for having more physical memory. The best performance and optimal resolution will be achieved by using PerGeos's LDA technology in combination with a large amount of system memory. LDA provides a very convenient way to quickly load and browse your whole dataset. Note that LDA data will not work with most compute modules, which require the full resolution data to be loaded in memory.
PerGeos provides another loading option that supports 2D and 3D image processing from disk to disk ("read as external disk data"), without loading the entire data set into memory; modules then operate per data slab. This enables processing and quantification of large image data even with limited hardware memory. Since processing each slab requires loading data from and saving results to the hard drive, it dramatically increases processing time. Thus, processing data fully loaded in memory is always preferred for best performance.
When working with large files, reading data from the disk can slow down your productivity. A standard hard drive (HDD) (e.g., 7200rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; your actual experience is likely to be closer to 40 MB/second. When you want to read a 1 GB file from the disk, you will likely have to wait 25 seconds. For a 10 GB file, the wait is 250 seconds, over 4 minutes. LDA technology will greatly reduce wait time for data visualization, but disk access will still be a limiting factor when you want to read data files at full resolution for data processing. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
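The wait times above follow from simple arithmetic; as a quick sketch using the realistic sustained HDD rate quoted in this section (1 GB taken as 1000 MB here):

```shell
# Back-of-the-envelope read-time estimate at ~40 MB/s sustained,
# the realistic HDD figure quoted above (theoretical limit ~60 MB/s).
file_gb=10
rate_mb_s=40
seconds=$((file_gb * 1000 / rate_mb_s))
echo "reading ${file_gb} GB takes about ${seconds} seconds"
```

With `file_gb=1` this gives the 25-second wait mentioned above; with `file_gb=10`, about 250 seconds.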
For best performance, the recommended solution is to configure multiple hard drives (3 or more HDD or SSD) in RAID 5 mode; note that RAID configurations may require substantially more system administration. For performance only, RAID 0 could be used, but be warned of the risk of data loss upon hard-drive failure. If you want both performance and data redundancy, then RAID 5 is recommended.
Reading data across the network, for example from a file server, will normally be much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and the number and size of other requests to the file server. Remember, you are (usually) sharing the network and server and will not get the theoretical bandwidth. LDA technology may also facilitate visualization of volume data through the network, but if data loading is a bottleneck for your workflow, we recommend making a local copy of your data.
While PerGeos mostly relies on GPU performance for visualization, many modules are computationally intensive and their performance will be strongly affected by CPU performance.
More and more modules inside PerGeos are multi-threaded and thus can take advantage of multiple CPUs or multiple CPU cores available on your system. This is the case for most of the quantification modules provided with PerGeos, a number of modules of the Petrophysics Extension and also various computation modules.
A fast CPU clock, the number of cores, and the memory cache are the three most important factors affecting PerGeos performance. While most multi-threaded modules will scale up nicely according to the number of cores, the scaling bottleneck may come from memory access. From experience, up to 8 cores show almost linear scalability, while more than 8 cores do not show much gain in performance. A larger memory cache improves performance.
Here is a summary of hardware characteristics to consider for optimizing particular tasks.
Visualizing large data (LDA):
Basic volume rendering:
Advanced volume rendering (Volume Rendering module):
Large geometry rendering such as large surfaces from Isosurface or Generate Surface, large point clusters, large numerical simulation meshes, etc.:
Image processing and quantification:
Anisotropic Diffusion, Non-Local Means Filter (high-performance smoothing and noise reduction image filters):
Other compute modules, display module data extraction:
GPU computing using custom modules programmed with the PerGeos XPand C++ API and a GPU API:
QT_PLUGIN_PATH must not be exported as a system-wide environment variable because it can interfere with this application.
Internet access is necessary to activate PerGeos. Your firewall may prevent the connection to the license server.
PerGeos is only available for Intel64/AMD64 systems.
The official Linux distribution for PerGeos is CentOS 7 64-bit. Nevertheless, PerGeos is likely to work on some other 64-bit Linux distributions if the required versions of the system libraries can be found, but technical support for those platforms will be limited. Here is a non-exhaustive list of these 64-bit Linux distributions:
Notes:
Section "Extensions"
Option "Composite" "disable"
EndSection
To add custom extensions to PerGeos with the PerGeos XPand C++ API on Windows, you will need Microsoft Visual Studio 2013 Update 4. The compiler you need depends on the version of PerGeos you have. You can obtain the version information by typing app uname into the PerGeos console. It is important to install Visual Studio prior to running PerGeos in debug mode.
To add custom extensions to PerGeos with PerGeos XPand C++ API on Linux, you will need gcc 4.8.x on RHEL 7. Use the following command to determine the version of the GNU compiler:
gcc --version
The currently supported version of MATLAB on all platforms is 2020a. To use the Calculus MATLAB module, which establishes a connection to MATLAB (MathWorks, Inc.), follow these installation instructions:
Windows
If you did not register during installation, enter the following command on the Windows command line: matlab /regserver.
In addition, add MATLAB_INSTALLATION_PATH/bin and MATLAB_INSTALLATION_PATH/bin/win64 in your PATH environment variable to allow PerGeos to find MATLAB libraries.
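For example, assuming MATLAB is installed under C:\Program Files\MATLAB\R2020a (a hypothetical location; substitute your own installation path), the directories can be appended to PATH for the current Command Prompt session:

```
rem Hypothetical install path -- adjust to your MATLAB installation.
set MATLAB_ROOT=C:\Program Files\MATLAB\R2020a
set PATH=%PATH%;%MATLAB_ROOT%\bin;%MATLAB_ROOT%\bin\win64
```

To make the change permanent, set PATH through the Environment Variables dialog in System Properties instead.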
Linux
The LD_LIBRARY_PATH environment variable should be set to MATLAB_INSTALLATION_PATH/bin/glnxa64 on Linux 64-bit.
The PATH environment variable should be also set to MATLAB_INSTALLATION_PATH/bin.
If you still have trouble starting Calculus MATLAB after setting the environment variables, it might be because the GNU Standard C++ Library (libstdc++) installed on your platform is older than the one required by MATLAB. You can check MATLAB's embedded libstdc++ version in MATLAB_INSTALLATION_PATH/sys/os/glnxa64 on Linux 64-bit.
If needed, add this path to LD_LIBRARY_PATH.
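For example, assuming MATLAB is installed under /usr/local/MATLAB/R2020a (a hypothetical path; adjust to your installation), the variables can be exported in the shell that launches PerGeos:

```shell
# Hypothetical MATLAB location -- adjust to your installation.
MATLAB_INSTALLATION_PATH=/usr/local/MATLAB/R2020a
export PATH="$MATLAB_INSTALLATION_PATH/bin:$PATH"
export LD_LIBRARY_PATH="$MATLAB_INSTALLATION_PATH/bin/glnxa64:$LD_LIBRARY_PATH"
# Only if the system libstdc++ is older than the one MATLAB requires,
# add MATLAB's bundled copy as well:
# export LD_LIBRARY_PATH="$MATLAB_INSTALLATION_PATH/sys/os/glnxa64:$LD_LIBRARY_PATH"
```

Placing these lines in the shell profile of the user running PerGeos makes them persistent across sessions.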
We have detected some incompatibility issues with earlier versions (prior to 1.9.2.8) of the Dell Backup and Recovery Application, which can make PerGeos crash when opening files with the file dialog. Please update your Dell Backup and Recovery Application to 1.9.2.8 or higher if you encounter this issue.
PerGeos is not tested in remote sessions; remote display is not supported.
Each sub-application (Analyzer, Trainer and Labeler) of Amira-Avizo2D Software has its own system requirements.
The Analyzer and Trainer applications require more processing power than Labeler. For these applications, refer to the respective system requirements documentation below.
The following recommendations are intended to help you choose a suitable workstation to run the application.
Analyzer runs on Microsoft™ Windows 10 (64-bit). Other than the operating system requirement, the most important components to consider are the graphics card (GPU), the CPU, the RAM, and the hard drive.
The performance of image processing algorithms depends heavily on the performance of the CPU, the GPU, or both. The GPU performance is important for CUDA®-optimized algorithms. Loading or saving large amounts of data depends on the hard drive performance. The amount of system RAM is the main limitation on the size of the data that can be loaded and processed.
Analyzer is a 2D application; therefore, it does not require a high-end graphics card for visualization. Any graphics system (GPU+driver) that provides a complete implementation of OpenGL 2.1 or higher is sufficient. However, some algorithms are optimized with a CUDA implementation. The amount of GPU memory required depends mainly on the use of CUDA-optimized algorithms. The minimum recommendation is 1 GB of GPU memory if your sole use of Analyzer is for visualization. For CUDA usage, we highly recommend either 16 or 32 GB of GPU memory. When choosing the graphics card for your workstation, consider whether you require CUDA support. The CUDA technology is available only on NVIDIA graphics cards.
Analyzer does not benefit from multiple graphics boards for visualization on a single monitor. However, some of the image processing algorithms rely on CUDA for computation, and while the computation can run on a single CUDA-enabled graphics card, it can also run on a second CUDA-enabled graphics card installed in the system.
If you need to process a large amount of data, system memory is an important consideration. At a minimum, you need at least the size of your complete tile set. In practice, you are likely to need much more memory than the actual size of the data being loaded. Some processing can require several times the memory required by the original data set. For example, if you load a 4 GB data set in memory, apply a non-local means filter to it and then compute a distance map, you might need as much as 16 to 20 GB of additional memory for the intermediate results.
Workflow processing occurs separately for each tile of the tile set; therefore, when computation is performed, only a single tile is loaded in memory at a time. For a basic workflow, you need, in addition to the size of the input data set, 2 or 3 times the memory footprint of a single tile in the tile set. For a complex workflow, you need up to 6 or 8 times the size of a tile.
Also keep in mind that certain file formats might compress the data so that the disk size of the data is significantly smaller than the memory required to load it.
When working with large files, reading data from the disk can slow productivity. A standard hard disk drive (for example, a 7200 rpm SATA disk) can only stream data to your application at a sustained rate of about 60 MB/second. That is the theoretical limit; the actual performance is likely to be closer to 40 MB/second. Therefore, reading a 1 GB file from disk typically takes 25 seconds. For a 10 GB file, the wait is over 4 minutes. Compared to traditional HDDs, solid state drives (SSD) can improve read and write speeds.
For best performance, the recommended solution is to configure multiple hard drives (3 or more HDD or SSD) in RAID 5 mode; however, be aware that RAID configurations might require substantially more system administration. For performance only, you could use RAID 0, but at the risk of data loss upon a hard-drive failure. If you want both performance and data redundancy, then RAID 5 is recommended.
A fast CPU clock, the number of cores, and the memory cache are the most important factors affecting performance. While most multi-threaded modules scale up nicely according to the number of cores, a scaling bottleneck might come from memory access. From experience, up to 8 cores show almost linear scalability while more than 8 cores do not show much gain in performance. A larger memory cache improves performance.
Internet access is necessary to activate the product; however, your firewall might prevent the connection to the license server. For more information, refer to activation documentation. Also be aware that reading data across the network (a file server, for example) is normally much slower than reading from a local disk. The performance of your network depends on the network technology (100 Mb, 1 Gb, etc.), the amount of other traffic on the network, and the number and size of other requests to the file server, so in practice you are unlikely to achieve the theoretical bandwidth.
Trainer runs on Microsoft Windows 10 (64-bit).
The following recommendations are intended to help you choose a suitable workstation to run the Trainer application.
Other than the operating system requirement, the most important components to consider for the Trainer application are the graphics card (GPU) and the hard drive. RAM and CPU are used mostly for pre-processing tasks, so the following can be considered sufficient:
Trainer requires an NVIDIA graphics board that supports CUDA Compute Capability 3.5 or higher. Compatible GPUs can be found here: https://developer.nvidia.com/cuda-gpus.
The minimum amount of dedicated GPU memory is 4 GB. However, deep learning is a compute-intensive task, and performance is directly related to the GPU memory and speed. Therefore, a high-end GPU is recommended, and a recent graphics driver must be installed.
Also note that Trainer does not take advantage of multi-GPU configurations.
It is recommended to store data on a local fast hard drive (SSD preferred) for quicker data access.
Labeler runs on Microsoft Windows 10 (64-bit).
The following recommendations are intended to help you choose a suitable workstation to run the Labeler application.
Labeler can run with any graphics card that supports OpenGL 2.1 or higher. A high-end graphics card is not required.
It is recommended that you store data on a local fast hard drive (SSD preferred) for quicker data access.