Automated testing
GDAL includes a comprehensive test suite, implemented using a combination of Python (via pytest) and C++ (via gtest).
After building GDAL using CMake, the complete test suite can be run using ctest -v --output-on-failure. This will automatically set environment variables so that tests are run on the built version of GDAL, rather than an installed system copy.
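For example, from the CMake build directory:
# Build GDAL (if not already built), then run the complete test suite
cmake --build .
ctest -v --output-on-failure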
Running a subset of tests using ctest
The complete set of test suites known to ctest can be viewed by running ctest -N.
A subset of tests can be run using the -R argument to ctest, which selects tests using a provided regular expression. For example, ctest -R autotest would run the Python-based tests.
The -E argument can be used to exclude tests using a regular expression. For example, ctest -E gdrivers would exclude the suite of driver tests.
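For example, combining these options:
# List all test suites known to ctest, without running them
ctest -N
# Run only the Python-based test suites
ctest -R autotest
# Run everything except the driver tests
ctest -E gdrivers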
Running a subset of tests using pytest
The test subsets exposed by ctest are still rather large, and some may take several minutes to run. If a higher level of specificity is needed, pytest can be called directly to run groups of tests or individual tests.
Before running pytest, it is important to set development environment variables so that the development build of GDAL is tested, rather than a system version.
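One way to do this on Linux or macOS, assuming the scripts/setdevenv.sh helper present in the GDAL source tree and an out-of-tree build directory, is to source it from the build directory:
# Adjusts PATH, the library search path and PYTHONPATH so that the
# development build is picked up instead of any installed copy
source ../scripts/setdevenv.sh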
Tests can then be run by calling pytest, for example on an individual file.
On Linux and macOS builds, the tests are symlinked into the build directory, so this can be done by running the following from the build directory:
pytest autotest/gcore/vrt_read.py
On Windows, the test files remain in the source tree, but the pytest configuration file pytest.ini is only available in the build directory. To accommodate this, the above command would be modified as follows:
pytest -c pytest.ini ../autotest/gcore/vrt_read.py
A subset of tests within an individual test file can be run by providing a regular expression to the -k argument of pytest.
pytest autotest/gcore/vrt_read.py -k test_vrt_read_non_existing_source
pytest can also report information on the tests without running them. For example, to list tests containing "tiff" in the name:
pytest --collect-only autotest -k tiff
Warning
Not all Python tests can be run independently; some tests depend on state set by previous tests in the same file.
Checking for memory leaks and access errors using Valgrind
The GDAL unit test suite can be run using the Valgrind tool to detect memory errors such as leaks and incorrect reads/writes.
The test suite will run considerably slower under Valgrind (perhaps by a factor of ten) so it is generally advisable to run a subset of the tests using the methods described above.
Warning
Calling valgrind ctest will _not_ run the tests under Valgrind. Although it is possible to use Valgrind with ctest, it is simpler to call pytest or gdal_unit_test directly.
The following preparatory steps are necessary to avoid numerous false-positive errors from Valgrind:
Many false-positive errors are generated by Python itself. Most of these can be removed by obtaining a suppression file that corresponds to the version of the Python interpreter used to run the tests. This file can be located in a source distribution of Python, or downloaded directly from GitHub (for example, at https://raw.githubusercontent.com/python/cpython/3.11/Misc/valgrind-python.supp)
A few false-positive errors are generated by the GDAL test suite or libraries that it uses (e.g., SWIG, numpy). These can be removed by using the autotest/valgrind-gdal.supp file in the GDAL repository.
When running Python unit tests, the default system memory allocator should be used instead of Python's internal memory allocator. This can be done by setting the PYTHONMALLOC environment variable to malloc.
When running Python unit tests, Valgrind will report numerous "Invalid file descriptor" warnings that cannot currently be suppressed. These can be removed from the output using grep -v "invalid file descriptor\|alternative log fd".
Combining the above, we can run Valgrind on a subset of the Python tests as follows:
PYTHONMALLOC=malloc valgrind \
--leak-check=full \
--suppressions=/home/dan/dev/valgrind-python.supp \
--suppressions=/home/dan/dev/gdal/autotest/valgrind-gdal.supp \
pytest -v autotest/utilities 2>&1 | (grep -v "invalid file descriptor\|alternative log fd" || true) | tee valgrind-out
Note
To avoid verbose commands such as the one above, it may be useful to reference the suppression files and other common arguments in a ~/.valgrindrc file.
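Valgrind reads ~/.valgrindrc as one option per line, so a file carrying the suppressions used above (paths illustrative) would let them be omitted from the command line:
--leak-check=full
--suppressions=/home/dan/dev/valgrind-python.supp
--suppressions=/home/dan/dev/gdal/autotest/valgrind-gdal.supp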
Recommendations on how to write new tests
Python-based tests should be preferred when possible, as productivity is higher in Python and there is no associated compilation time (compilation time slows down the feedback received from continuous integration).
C/C++-based tests should be reserved for C++-specific aspects that cannot be tested with the SWIG Python bindings, which use the C interface: for example, testing of C++ operators (copy/move constructors, assignment operators, iterator interfaces, etc.) or C/C++ functionality not mapped to SWIG (e.g., CPL utility functions/classes).
Python tests
Python tests use the pytest framework since RFC 72: Update autotest suite to use pytest.
Test cases should be written in a way where they are independent from other ones, so they can potentially be run in an isolated way or in parallel with other test cases. In particular, temporary files should be created with a name that cannot conflict with other tests: preferably use pytest's tmp_path fixture (https://docs.pytest.org/en/7.1.x/how-to/tmp_path.html#the-tmp-path-fixture).
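For example, a minimal sketch of a test writing a temporary file (the file name and test body are illustrative):
from osgeo import gdal

def test_create_tiff(tmp_path):
    # tmp_path is a unique, per-test pathlib.Path directory provided
    # by pytest, so this file cannot conflict with other tests
    out_filename = str(tmp_path / "out.tif")
    ds = gdal.GetDriverByName("GTiff").Create(out_filename, 1, 1)
    assert ds is not None
    ds = None  # close the dataset so it is flushed to disk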
Use @pytest.mark.require_driver(driver_name) as an annotation for a test case that requires an optional driver to be present.
Use pytestmark = pytest.mark.require_driver("driver_name") towards the beginning of a test file that requires a given driver to be available for all of its test cases. This is typically the case when writing tests for a particular driver.
Use @pytest.mark.require_run_on_demand as an annotation to signal a test that should not be run by default, typically because it requires special pre-conditions, uses a lot of RAM, etc., and is thus not appropriate to be run automatically by continuous integration.
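For illustration, a hypothetical test file using these markers (the driver name and test body are placeholders):
import pytest

# skip all test cases in this file if the driver is not available
pytestmark = pytest.mark.require_driver("FileGDB")

@pytest.mark.require_run_on_demand
def test_very_large_dataset():
    # only run when explicitly requested, e.g. due to high RAM usage
    ...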
Use @pytest.mark.parametrize(...) as an annotation for test functions that test for variations, instead of for loops. More details at https://docs.pytest.org/en/latest/parametrize.html
e.g.:
@pytest.mark.parametrize("dt,expected_size", [(gdal.GDT_Byte, 1),
(gdal.GDT_UInt16, 2)]
def test_datatypesize(dt,expected_size):
assert gdal.GetDataTypeSizeBytes(dt) == expected_size
instead of
def test_datatypesize_DO_NOT_DO_THAT():
    for dt, expected_size in [(gdal.GDT_Byte, 1), (gdal.GDT_UInt16, 2)]:
        assert gdal.GetDataTypeSizeBytes(dt) == expected_size
Fixtures can be used to share set-up and tear-down code between test cases.
e.g., a fixture automatically applied to all test cases of a test file, which takes care of unregistering a given driver before the tests are run and re-registering it afterwards:
@pytest.fixture(scope="module", autouse=True)
def without_filegdb_driver():
# remove FileGDB driver before running tests
filegdb_driver = ogr.GetDriverByName("FileGDB")
if filegdb_driver is not None:
filegdb_driver.Deregister()
yield
if filegdb_driver is not None:
print("Reregistering FileGDB driver")
filegdb_driver.Register()
or a fixture that runs preliminary checks to discover whether a driver has some optional capabilities, and skips a test case if not:
@pytest.fixture()
def require_auto_load_extension():
    if ogr.GetDriverByName("SQLite") is None:
        pytest.skip()
    ds = ogr.Open(":memory:")
    with gdaltest.error_handler():
        sql_lyr = ds.ExecuteSQL("PRAGMA compile_options")
    if sql_lyr:
        for f in sql_lyr:
            if f.GetField(0) == "OMIT_LOAD_EXTENSION":
                ds.ReleaseResultSet(sql_lyr)
                pytest.skip("SQLite3 built with OMIT_LOAD_EXTENSION")
        ds.ReleaseResultSet(sql_lyr)
def test_ogr_virtualogr_1(require_auto_load_extension):
    # Invalid syntax
    assert not ogr_virtualogr_run_sql("CREATE VIRTUAL TABLE poly USING VirtualOGR()")
C++ tests
GDAL C++ tests use the GoogleTest framework since RFC 88: Use GoogleTest framework for C/C++ unit tests.
Common non-failing assertions are: EXPECT_TRUE(cond), EXPECT_FALSE(cond), EXPECT_EQ(a, b), EXPECT_NE(a, b), EXPECT_STREQ(a, b), EXPECT_LE(a, b), EXPECT_LT(a, b), EXPECT_GE(a, b), EXPECT_GT(a, b), EXPECT_NEAR(a, b, tolerance).
If one of those assertions fails, execution of the rest of the test case continues; hence they should typically not be used when testing a pointer against NULL and then dereferencing it unconditionally. The ASSERT_xxxx family of assertions should be used for such cases, where early exit from the test case is desired.
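A minimal sketch of this distinction (the helper function returning the dataset is hypothetical):
#include "gdal_priv.h"
#include "gtest/gtest.h"

TEST(MyTest, DatasetHasBands)
{
    // hypothetical helper returning a GDALDataset pointer
    GDALDataset *poDS = GetTestDataset();
    // ASSERT_NE aborts this test case on failure, so the pointer
    // is never dereferenced when it is null
    ASSERT_NE(poDS, nullptr);
    // EXPECT_GE records a failure but lets the test continue
    EXPECT_GE(poDS->GetRasterCount(), 1);
}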
GoogleTest also offers capabilities for parametrized tests. For example:
class DataTypeTupleFixture :
    public test_gdal,
    public ::testing::WithParamInterface<std::tuple<GDALDataType, GDALDataType>>
{
  public:
    static std::vector<std::tuple<GDALDataType, GDALDataType>> GetTupleValues()
    {
        std::vector<std::tuple<GDALDataType, GDALDataType>> ret;
        for( GDALDataType eIn = GDT_Byte; eIn < GDT_TypeCount;
             eIn = static_cast<GDALDataType>(eIn + 1) )
        {
            for( GDALDataType eOut = GDT_Byte; eOut < GDT_TypeCount;
                 eOut = static_cast<GDALDataType>(eOut + 1) )
            {
                ret.emplace_back(std::make_tuple(eIn, eOut));
            }
        }
        return ret;
    }
};
// Test GDALDataTypeUnion() on all (GDALDataType, GDALDataType) combinations
TEST_P(DataTypeTupleFixture, GDALDataTypeUnion_generic)
{
    GDALDataType eDT1 = std::get<0>(GetParam());
    GDALDataType eDT2 = std::get<1>(GetParam());
    GDALDataType eDT = GDALDataTypeUnion(eDT1, eDT2);
    EXPECT_EQ( eDT, GDALDataTypeUnion(eDT2, eDT1) );
    EXPECT_GE( GDALGetDataTypeSize(eDT), GDALGetDataTypeSize(eDT1) );
    EXPECT_GE( GDALGetDataTypeSize(eDT), GDALGetDataTypeSize(eDT2) );
    EXPECT_TRUE( (GDALDataTypeIsComplex(eDT) && (GDALDataTypeIsComplex(eDT1) || GDALDataTypeIsComplex(eDT2))) ||
                 (!GDALDataTypeIsComplex(eDT) && !GDALDataTypeIsComplex(eDT1) && !GDALDataTypeIsComplex(eDT2)) );
    EXPECT_TRUE( !(GDALDataTypeIsFloating(eDT1) || GDALDataTypeIsFloating(eDT2)) || GDALDataTypeIsFloating(eDT) );
    EXPECT_TRUE( !(GDALDataTypeIsSigned(eDT1) || GDALDataTypeIsSigned(eDT2)) || GDALDataTypeIsSigned(eDT) );
}
INSTANTIATE_TEST_SUITE_P(
    test_gdal, DataTypeTupleFixture,
    ::testing::ValuesIn(DataTypeTupleFixture::GetTupleValues()),
    [](const ::testing::TestParamInfo<DataTypeTupleFixture::ParamType>& l_info) {
        GDALDataType eDT1 = std::get<0>(l_info.param);
        GDALDataType eDT2 = std::get<1>(l_info.param);
        return std::string(GDALGetDataTypeName(eDT1)) + "_" +
               GDALGetDataTypeName(eDT2);
    });
Test coverage reports
GDAL continuous integration has a coverage configuration that builds GDAL with the gcov GCC module to obtain the line coverage of running the Python and C++ autotest suites.
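As a rough illustration (not necessarily the exact flags used by GDAL's CI), such a build can be configured by passing GCC's --coverage option:
# illustrative only: instrument the build for gcov, then run the tests
cmake .. -DCMAKE_BUILD_TYPE=Debug \
    -DCMAKE_C_FLAGS="--coverage" -DCMAKE_CXX_FLAGS="--coverage"
cmake --build .
ctest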
This is used by the Coveralls GitHub Action to upload results to https://coveralls.io/github/OSGeo/gdal, for both push and pull requests events.
A somewhat nicer-looking presentation of line coverage results for the latest master build, generated by lcov, is also available at https://gdalautotest-coverage-results.github.io/coverage_html/index.html
Post-commit testing
A weekly static analysis is run by Coverity. Developers/maintainers can request access on the GDAL project page.