Document status - Public


QA Software Build Report

ClickHouse 23.8.8.21.altinitystable

(c) 2023 Altinity Inc. All Rights Reserved.

Approval

Status: Approved for release by QA

Reviewed by: vzakaznikov@altinity.com

Date: Wed 27 Dec 2023 03:23:25 PM EST

Table of Contents

- Test Results
- Results Analysis
  - x86_64
  - Aarch64
  - ClickHouse Keeper

Test Results

Stage           Status  Note

x86_64
  Integration   Pass    With known fails.
  Stateful      Pass
  Stateless     Pass    With known fails.
  Regression    Pass    With a known fail.
  Trivy         Pass
  Scout         Pass

Aarch64
  Stateful      Pass
  Stateless     Pass    With known fails.
  Regression    Pass
  Trivy         Pass
  Scout         Pass

ClickHouse Keeper
  Trivy         Pass
  Scout         Pass

Results https://altinity-test-reports.s3.amazonaws.com/index.html#builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/
GitLab Pipeline https://gitlab.com/altinity-qa/clickhouse/cicd/release/-/pipelines/1119689211
GitHub Actions https://github.com/Altinity/ClickHouse/actions/runs/7328846900

Results Analysis

x86_64

Integration x86_64 Results

Results

https://s3.amazonaws.com/altinity-test-reports/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/integration/integration_results_1.html
https://s3.amazonaws.com/altinity-test-reports/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/integration/integration_results_2.html
https://s3.amazonaws.com/altinity-test-reports/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/integration/integration_results_3.html
https://s3.amazonaws.com/altinity-test-reports/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/integration/integration_results_4.html
https://s3.amazonaws.com/altinity-test-reports/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/integration/integration_results_5.html
https://s3.amazonaws.com/altinity-test-reports/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/integration/integration_results_6.html

Fails

Test: test_format_schema_on_server/test.py::test_protobuf_format_output


Reason:

_________________________ test_protobuf_format_output __________________________
[gw8] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f0b165ac4c0>

    def test_protobuf_format_output(started_cluster):
        create_simple_table()
        instance.query("INSERT INTO test.simple VALUES (1, 'abc'), (2, 'def')")
>       assert (
            instance.http_query(
                "SELECT * FROM test.simple FORMAT Protobuf SETTINGS format_schema='simple:KeyValuePair'"
            )
            == "\x07\x08\x01\x12\x03abc\x07\x08\x02\x12\x03def"
        )
E       AssertionError: assert '܈Ē͡扣܈Ȓͤ敦' == '\x07\x08\x01\x12\x03abc\x07\x08\x02\x12\x03def'
E         - abcdef
E         + ܈Ē͡扣܈Ȓͤ敦

test_format_schema_on_server/test.py:41: AssertionError



Comment: Environment misconfiguration (output must be in UTF-8)
Status: OK to FAIL
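
Note: the garbled string in the assertion equals the expected Protobuf bytes decoded as UTF-16-BE rather than UTF-8, which is consistent with the codec misconfiguration noted above. A minimal Python sketch reproducing the mismatch (an illustration only, not part of the test suite):

    # Raw Protobuf output the test expects from the HTTP query.
    raw = b"\x07\x08\x01\x12\x03abc\x07\x08\x02\x12\x03def"

    expected = raw.decode("utf-8")     # the string the test asserts against
    garbled = raw.decode("utf-16-be")  # reproduces the mojibake in the failure

    assert expected == "\x07\x08\x01\x12\x03abc\x07\x08\x02\x12\x03def"
    print(garbled)  # prints the same garbled characters seen in the assertion error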

Test: test_system_merges/test.py::test_mutation_simple[]


Reason:

___________________________ test_mutation_simple[] ____________________________
[gw5] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f896afabe50>
replicated = ''

    @pytest.mark.parametrize("replicated", ["", "replicated"])
    def test_mutation_simple(started_cluster, replicated):
        clickhouse_path = "/var/lib/clickhouse"
        db_name = "test"
        table_name = "mutation_simple"
        name = db_name + "." + table_name
        table_path = "data/" + db_name + "/" + table_name
        nodes = [node1, node2] if replicated else [node1]
        engine = (
            "ReplicatedMergeTree('/clickhouse/test_mutation_simple', '{replica}')"
            if replicated
            else "MergeTree()"
        )
        node_check = nodes[-1]
        starting_block = 0 if replicated else 1

        try:
            for node in nodes:
                node.query(
                    f"create table {name} (a Int64) engine={engine} order by tuple()"
                )

            node1.query(f"INSERT INTO {name} VALUES (1), (2), (3)")

            part = "all_{}_{}_0".format(starting_block, starting_block)
            result_part = "all_{}_{}_0_{}".format(
                starting_block, starting_block, starting_block + 1
            )

            # ALTER will sleep for 3s * 3 (rows) = 9s
            def alter():
                node1.query(
                    f"ALTER TABLE {name} UPDATE a = 42 WHERE sleep(9) = 0",
                    settings=settings,
                )

            t = threading.Thread(target=alter)
            t.start()

            # Wait for the mutation to actually start
            assert_eq_with_retry(
                node_check,
                f"select count() from system.merges where table='{table_name}'",
                "1\n",
                retry_count=30,
                sleep_time=0.1,
            )

>           assert (
                split_tsv(
                    node_check.query(
                        """
                SELECT database, table, num_parts, source_part_names, source_part_paths, result_part_name, result_part_path, partition_id, is_mutation
                    FROM system.merges
                    WHERE table = '{name}'
            """.format(
                            name=table_name
                        )
                    )
                )
                == [
                    [
                        db_name,
                        table_name,
                        "1",
                        "['{}']".format(part),
                        "['{clickhouse}/{table_path}/{}/']".format(
                            part, clickhouse=clickhouse_path, table_path=table_path
                        ),
                        result_part,
                        "{clickhouse}/{table_path}/{}/".format(
                            result_part, clickhouse=clickhouse_path, table_path=table_path
                        ),
                        "all",
                        "1",
                    ],
                ]
            )
E           assert [] == [['test', 'mutation_simple', '1', "['all_1_1_0']", "['/var/lib/clickhouse/data/test/mutation_simple/all_1_1_0/']", 'all_1_1_0_2', '/var/lib/clickhouse/data/test/mutation_simple/all_1_1_0_2/', 'all', '1']]
E             Right contains one more item: ['test', 'mutation_simple', '1', "['all_1_1_0']", "['/var/lib/clickhouse/data/test/mutation_simple/all_1_1_0/']", 'all_1_1_0_2', ...]
E             Full diff:
E               [
E             +  ,
E             -  ['test',
E             -   'mutation_simple',
E             -   '1',
E             -   "['all_1_1_0']",
E             -   "['/var/lib/clickhouse/data/test/mutation_simple/all_1_1_0/']",
E             -   'all_1_1_0_2',
E             -   '/var/lib/clickhouse/data/test/mutation_simple/all_1_1_0_2/',
E             -   'all',
E             -   '1'],
E               ]

test_system_merges/test.py:205: AssertionError



Comment: Environment-dependent, can be fixed by adjusting sleep times.
Status: OK to FAIL

Test: test_system_merges/test.py::test_mutation_simple[replicated]


Reason:

_______________________ test_mutation_simple[replicated] _______________________
[gw5] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f896afabe50>
replicated = 'replicated'

    @pytest.mark.parametrize("replicated", ["", "replicated"])
    def test_mutation_simple(started_cluster, replicated):
        clickhouse_path = "/var/lib/clickhouse"
        db_name = "test"
        table_name = "mutation_simple"
        name = db_name + "." + table_name
        table_path = "data/" + db_name + "/" + table_name
        nodes = [node1, node2] if replicated else [node1]
        engine = (
            "ReplicatedMergeTree('/clickhouse/test_mutation_simple', '{replica}')"
            if replicated
            else "MergeTree()"
        )
        node_check = nodes[-1]
        starting_block = 0 if replicated else 1

        try:
            for node in nodes:
                node.query(
                    f"create table {name} (a Int64) engine={engine} order by tuple()"
                )

            node1.query(f"INSERT INTO {name} VALUES (1), (2), (3)")

            part = "all_{}_{}_0".format(starting_block, starting_block)
            result_part = "all_{}_{}_0_{}".format(
                starting_block, starting_block, starting_block + 1
            )

            # ALTER will sleep for 3s * 3 (rows) = 9s
            def alter():
                node1.query(
                    f"ALTER TABLE {name} UPDATE a = 42 WHERE sleep(9) = 0",
                    settings=settings,
                )

            t = threading.Thread(target=alter)
            t.start()

            # Wait for the mutation to actually start
>           assert_eq_with_retry(
                node_check,
                f"select count() from system.merges where table='{table_name}'",
                "1\n",
                retry_count=30,
                sleep_time=0.1,
            )

test_system_merges/test.py:197: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

instance = <helpers.cluster.ClickHouseInstance object at 0x7f896b01c1c0>
query = "select count() from system.merges where table='mutation_simple'"
expectation = '1\n', retry_count = 30, sleep_time = 0.1, stdin = None
timeout = None, settings = None, user = None, ignore_error = False
get_result = <function <lambda> at 0x7f896c478b80>

    def assert_eq_with_retry(
        instance,
        query,
        expectation,
        retry_count=20,
        sleep_time=0.5,
        stdin=None,
        timeout=None,
        settings=None,
        user=None,
        ignore_error=False,
        get_result=lambda x: x,
    ):
        expectation_tsv = TSV(expectation)
        for i in range(retry_count):
            try:
                if (
                    TSV(
                        get_result(
                            instance.query(
                                query,
                                user=user,
                                stdin=stdin,
                                timeout=timeout,
                                settings=settings,
                                ignore_error=ignore_error,
                            )
                        )
                    )
                    == expectation_tsv
                ):
                    break
                time.sleep(sleep_time)
            except Exception as ex:
                logging.exception(f"assert_eq_with_retry retry {i+1} exception {ex}")
                time.sleep(sleep_time)
        else:
            val = TSV(
                get_result(
                    instance.query(
                        query,
                        user=user,
                        stdin=stdin,
                        timeout=timeout,
                        settings=settings,
                        ignore_error=ignore_error,
                    )
                )
            )
            if expectation_tsv != val:
>               raise AssertionError(
                    "'{}' != '{}'\n{}".format(
                        expectation_tsv,
                        val,
                        "\n".join(expectation_tsv.diff(val, n1="expectation", n2="query")),
                    )
                )
E               AssertionError: '1' != '0'
E               @@ -1 +1 @@
E               -1
E               +0

helpers/test_tools.py:114: AssertionError



Comment: Environment-dependent, can be fixed by adjusting sleep times.
Status: OK to FAIL
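
As a hedged illustration of the comments on both test_mutation_simple variants (a local test adjustment sketch, not a change shipped in this build), the retry budget in the test's own helper call could be loosened so the mutation has more time to appear in system.merges:

    # Hypothetical local tweak to the existing test; values are illustrative only.
    assert_eq_with_retry(
        node_check,
        f"select count() from system.merges where table='{table_name}'",
        "1\n",
        retry_count=100,  # the test currently uses 30
        sleep_time=0.5,   # the test currently uses 0.1
    )

Raising the per-row sleep inside the ALTER would likewise keep the mutation visible in system.merges for longer, which is what the non-replicated variant needed.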

Stateful x86_64 Results

Results https://s3.amazonaws.com/altinity-test-reports/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/stateful/amd64/stateful_results.html

Stateless x86_64 Results

Results https://s3.amazonaws.com/altinity-test-reports/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/stateless/amd64/stateless_results.html

Fails

Test: 02888_system_tables_with_inaccsessible_table_function


Reason:

2023-12-27 00:50:47 Expected server error code: 279 but got: 1000 (query: CREATE TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc02_without_schema AS mysql('127.123.0.1:3306', 'mysql_db', 'mysql_table', 'mysql_user','123123'); -- {serverError 279 }).
2023-12-27 00:50:47 Received exception from server (version 23.8.8):
2023-12-27 00:50:47 Code: 1000. DB::Exception: Received from localhost:9000. DB::Exception: Exception: Connections to mysql failed: mysql_db@127.123.0.1:3306 as user mysql_user, ERROR 2002 : mysqlxx::ConnectionFailed: Can't connect to MySQL server on '127.123.0.1' (115) ((nullptr):0). (POCO_EXCEPTION)
2023-12-27 00:50:47 (query: CREATE TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc02_without_schema AS mysql('127.123.0.1:3306', 'mysql_db', 'mysql_table', 'mysql_user','123123'); -- {serverError 279 })
2023-12-27 00:50:47 , result:
2023-12-27 00:50:47 
2023-12-27 00:50:47 
2023-12-27 00:50:47 
2023-12-27 00:50:47 stdout:
2023-12-27 00:50:47 
2023-12-27 00:50:47 
2023-12-27 00:50:47 Settings used in the test: --max_insert_threads 0 --group_by_two_level_threshold 574287 --group_by_two_level_threshold_bytes 46318675 --distributed_aggregation_memory_efficient 1 --fsync_metadata 0 --output_format_parallel_formatting 0 --input_format_parallel_parsing 1 --min_chunk_bytes_for_parallel_parsing 7572807 --max_read_buffer_size 643268 --prefer_localhost_replica 0 --max_block_size 63771 --max_threads 21 --optimize_or_like_chain 0 --optimize_read_in_order 1 --enable_multiple_prewhere_read_steps 0 --read_in_order_two_level_merge_threshold 82 --optimize_aggregation_in_order 1 --aggregation_in_order_max_block_bytes 36270021 --min_compress_block_size 1143303 --max_compress_block_size 535587 --use_uncompressed_cache 1 --min_bytes_to_use_direct_io 1 --min_bytes_to_use_mmap_io 193508573 --local_filesystem_read_method io_uring --remote_filesystem_read_method threadpool --local_filesystem_read_prefetch 1 --remote_filesystem_read_prefetch 1 --allow_prefetched_read_pool_for_remote_filesystem 0 --filesystem_prefetch_max_memory_usage 64Mi --filesystem_prefetches_limit 0 --filesystem_prefetch_min_bytes_for_single_read_task 1Mi --filesystem_prefetch_step_marks 0 --filesystem_prefetch_step_bytes 0 --compile_aggregate_expressions 0 --compile_sort_description 0 --merge_tree_coarse_index_granularity 20 --optimize_distinct_in_order 0 --optimize_sorting_by_input_stream_properties 1 --http_response_buffer_size 6558478 --http_wait_end_of_query False --enable_memory_bound_merging_of_aggregation_results 0 --min_count_to_compile_expression 0 --min_count_to_compile_aggregate_expression 0 --min_count_to_compile_sort_description 0 --session_timezone Africa/Juba
2023-12-27 00:50:47 
2023-12-27 00:50:47 MergeTree settings used in test: --ratio_of_defaults_for_sparse_serialization 0.0 --prefer_fetch_merged_part_size_threshold 1 --vertical_merge_algorithm_min_rows_to_activate 1 --vertical_merge_algorithm_min_columns_to_activate 100 --allow_vertical_merges_from_compact_to_wide_parts 1 --min_merge_bytes_to_use_direct_io 1 --index_granularity_bytes 5373291 --merge_max_block_size 22623 --index_granularity 51083 --min_bytes_for_wide_part 1073741824 --marks_compress_block_size 16791 --primary_key_compress_block_size 87953
2023-12-27 00:50:47 
2023-12-27 00:50:47 Database: test_k0fbpflh



Comment: No MySQL server is reachable at 127.123.0.1:3306 from the test environment, so the query failed with code 1000 (POCO_EXCEPTION) instead of the expected error code 279.
Status: OK to FAIL.

Test: 02907_backup_mv_with_no_inner_table


Reason:

2023-12-27 00:50:48 [d56b6b90ca41] 2023.12.26 05:50:47.889228 [ 1163 ] {5ae35194-4d7f-452c-8342-a23bd3da57f1} <Error> BackupsWorker: Failed to start backup Disk('backups', '02907_backup_mv_with_no_inner_table_test_bud14gtt'): Code: 36. DB::Exception: Disk ''backups'' is not allowed for backups, see the 'backups.allowed_disk' configuration parameter. (BAD_ARGUMENTS), Stack trace (when copying this message, always include the lines below):
2023-12-27 00:50:48 
2023-12-27 00:50:48 0. ./build_docker/./src/Common/Exception.cpp:98: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c644ed7 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-27 00:50:48 1. DB::Exception::Exception<String>(int, FormatStringHelperImpl<std::type_identity<String>::type>, String&&) @ 0x0000000007154f6d in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-27 00:50:48 2. ./build_docker/./src/Backups/registerBackupEnginesFileAndDisk.cpp:0: std::shared_ptr<DB::IBackup> std::__function::__policy_invoker<std::shared_ptr<DB::IBackup> (DB::BackupFactory::CreateParams const&)>::__call_impl<std::__function::__default_alloc_func<DB::registerBackupEnginesFileAndDisk(DB::BackupFactory&)::$_0, std::shared_ptr<DB::IBackup> (DB::BackupFactory::CreateParams const&)>>(std::__function::__policy_storage const*, DB::BackupFactory::CreateParams const&) @ 0x00000000110b6c02 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-27 00:50:48 3. ./build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:0: DB::BackupsWorker::doBackup(std::shared_ptr<DB::ASTBackupQuery> const&, String const&, String const&, DB::BackupInfo const&, DB::BackupSettings, std::shared_ptr<DB::IBackupCoordination>, std::shared_ptr<DB::Context const> const&, std::shared_ptr<DB::Context>, bool) @ 0x0000000010fdd9a9 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-27 00:50:48 4. ./build_docker/./src/Backups/BackupsWorker.cpp:0: DB::BackupsWorker::startMakingBackup(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context const> const&) @ 0x0000000010fd9800 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-27 00:50:48 5. ./build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:606: DB::BackupsWorker::start(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context>) @ 0x0000000010fd90c3 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-27 00:50:48 6. ./build_docker/./src/Interpreters/InterpreterBackupQuery.cpp:0: DB::InterpreterBackupQuery::execute() @ 0x0000000011ebd4b3 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-27 00:50:48 7. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x00000000122e2d15 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-27 00:50:48 8. ./build_docker/./src/Interpreters/executeQuery.cpp:1229: DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x00000000122de475 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-27 00:50:48 9. ./build_docker/./src/Server/TCPHandler.cpp:0: DB::TCPHandler::runImpl() @ 0x0000000013155799 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-27 00:50:48 10. ./build_docker/./src/Server/TCPHandler.cpp:2161: DB::TCPHandler::run() @ 0x0000000013167b79 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-27 00:50:48 11. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x0000000015b5e154 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-27 00:50:48 12. ./build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x0000000015b5f351 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-27 00:50:48 13. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x0000000015c95b87 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-27 00:50:48 14. ./build_docker/./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000015



Comment: Test environment misconfiguration; the 'backups' disk is not allowed for backups (see the backups.allowed_disk server setting referenced in the error).
Status: OK to FAIL.
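
For context, the error text points at the backups.allowed_disk server setting; the runner's server config presumably does not list the 'backups' disk there. A hedged Python sketch for inspecting such a config override (the file path is an assumption, not taken from this run):

    # Hypothetical check of a ClickHouse config override for the backups disk allowance.
    import xml.etree.ElementTree as ET

    tree = ET.parse("/etc/clickhouse-server/config.d/backups.xml")  # assumed path
    allowed = [e.text for e in tree.getroot().iter("allowed_disk")]
    print("backups.allowed_disk =", allowed)  # the test expects 'backups' to be listed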

Test: 02833_local_with_dialect


Reason:

2023-12-26 10:14:51 --- /usr/share/clickhouse-test/queries/0_stateless/02833_local_with_dialect.reference   2023-12-26 09:56:02.219277292 -0600
2023-12-26 10:14:51 +++ /tmp/clickhouse-test/0_stateless/02833_local_with_dialect.stdout    2023-12-26 10:14:51.754916853 -0600
2023-12-26 10:14:51 @@ -1,2 +1,2 @@
2023-12-26 10:14:51  0
2023-12-26 10:14:51 -[?2004h[?2004lBye.
2023-12-26 10:14:51 +[?2004h[?2004lHappy new year.
2023-12-26 10:14:51 
2023-12-26 10:14:51 
2023-12-26 10:14:51 
2023-12-26 10:14:51 Database: test_jqi6yhua



Comment: The client printed "Happy new year." instead of the expected "Bye." farewell, so the output did not match the reference. Passed in previous run.
Status: OK to FAIL.

Regression x86_64 Results

Results
- Aes Encryption
- Aggregate Functions
- Atomic Insert
- Base58
- Benchmark AWS
- Benchmark Minio
- Benchmark GCS
- ClickHouse Keeper
- ClickHouse Keeper SSL
- Data Types
- DateTime64 Extended Range
- Disk Level Encryption
- DNS
- Example
- Extended Precision Data Types
- Kafka
- Kerberos
- Key Value
- LDAP Authentication
- LDAP External User Directory
- LDAP Role Mapping
- Lightweight Delete
- Parquet
- Parquet AWS
- Parquet Minio
- Part Moves Between Shards
- RBAC
- Selects
- Session Timezone
- SSL Server
- S3 AWS
- S3 Minio
- S3 GCS
- Tiered Storage
- Tiered Storage AWS
- Tiered Storage Minio
- Tiered Storage GCS
- Window Functions

Failed:

Suite: Tiered Storage (GCS)


Reason:

[ Fail ] /tiered storage/with s3gcs/background move/adding another volume (4m 3s)



Comment: Unstable test, passed in previous run.
Status: OK to FAIL.

Trivy x86_64 Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/staging-docker-trivy-ubuntu-server-amd64/results.html

Scout x86_64 Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/staging-docker-scout-ubuntu-server-amd64/results.html

Aarch64

Stateful Aarch64 Results

Results https://s3.amazonaws.com/altinity-test-reports/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/stateful/aarch64/stateful_results.html

Stateless Aarch64 Results

Results https://s3.amazonaws.com/altinity-test-reports/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/stateless/aarch64/stateless_results.html

Fails

Test: 02888_system_tables_with_inaccsessible_table_function


Reason:

2023-12-26 17:45:11 Expected server error code: 279 but got: 1000 (query: CREATE TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc02_without_schema AS mysql('127.123.0.1:3306', 'mysql_db', 'mysql_table', 'mysql_user','123123'); -- {serverError 279 }).
2023-12-26 17:45:11 Received exception from server (version 23.8.8):
2023-12-26 17:45:11 Code: 1000. DB::Exception: Received from localhost:9000. DB::Exception: Exception: Connections to mysql failed: mysql_db@127.123.0.1:3306 as user mysql_user, ERROR 2002 : mysqlxx::ConnectionFailed: Can't connect to MySQL server on '127.123.0.1' (115) ((nullptr):0). (POCO_EXCEPTION)
2023-12-26 17:45:11 (query: CREATE TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc02_without_schema AS mysql('127.123.0.1:3306', 'mysql_db', 'mysql_table', 'mysql_user','123123'); -- {serverError 279 })
2023-12-26 17:45:11 , result:
2023-12-26 17:45:11 
2023-12-26 17:45:11 
2023-12-26 17:45:11 
2023-12-26 17:45:11 stdout:
2023-12-26 17:45:11 
2023-12-26 17:45:11 
2023-12-26 17:45:11 Settings used in the test: --max_insert_threads 0 --group_by_two_level_threshold 459845 --group_by_two_level_threshold_bytes 21449018 --distributed_aggregation_memory_efficient 0 --fsync_metadata 0 --output_format_parallel_formatting 0 --input_format_parallel_parsing 1 --min_chunk_bytes_for_parallel_parsing 20533988 --max_read_buffer_size 774209 --prefer_localhost_replica 1 --max_block_size 86154 --max_threads 3 --optimize_or_like_chain 1 --optimize_read_in_order 0 --enable_multiple_prewhere_read_steps 0 --read_in_order_two_level_merge_threshold 18 --optimize_aggregation_in_order 1 --aggregation_in_order_max_block_bytes 555302 --min_compress_block_size 2610232 --max_compress_block_size 1004130 --use_uncompressed_cache 0 --min_bytes_to_use_direct_io 4757515468 --min_bytes_to_use_mmap_io 10737418240 --local_filesystem_read_method io_uring --remote_filesystem_read_method read --local_filesystem_read_prefetch 0 --remote_filesystem_read_prefetch 0 --allow_prefetched_read_pool_for_remote_filesystem 1 --filesystem_prefetch_max_memory_usage 32Mi --filesystem_prefetches_limit 0 --filesystem_prefetch_min_bytes_for_single_read_task 16Mi --filesystem_prefetch_step_marks 50 --filesystem_prefetch_step_bytes 100Mi --compile_aggregate_expressions 1 --compile_sort_description 0 --merge_tree_coarse_index_granularity 31 --optimize_distinct_in_order 0 --optimize_sorting_by_input_stream_properties 1 --http_response_buffer_size 2183594 --http_wait_end_of_query True --enable_memory_bound_merging_of_aggregation_results 1 --min_count_to_compile_expression 0 --min_count_to_compile_aggregate_expression 0 --min_count_to_compile_sort_description 0 --session_timezone America/Hermosillo
2023-12-26 17:45:11 
2023-12-26 17:45:11 MergeTree settings used in test: --ratio_of_defaults_for_sparse_serialization 1.0 --prefer_fetch_merged_part_size_threshold 8521445138 --vertical_merge_algorithm_min_rows_to_activate 1000000 --vertical_merge_algorithm_min_columns_to_activate 1 --allow_vertical_merges_from_compact_to_wide_parts 0 --min_merge_bytes_to_use_direct_io 1 --index_granularity_bytes 10441177 --merge_max_block_size 23015 --index_granularity 8943 --min_bytes_for_wide_part 0 --marks_compress_block_size 70779 --primary_key_compress_block_size 81619
2023-12-26 17:45:11 
2023-12-26 17:45:11 Database: test_dir5ktlz



Comment: No MySQL server is reachable at 127.123.0.1:3306 from the test environment, so the query failed with code 1000 (POCO_EXCEPTION) instead of the expected error code 279.
Status: OK to FAIL.

Test: 02907_backup_mv_with_no_inner_table


Reason:

2023-12-26 17:45:13 [0b1d8008f100] 2023.12.26 05:45:13.139926 [ 1601 ] {ce82f733-9ee3-473c-9add-36972ee0f649} <Error> BackupsWorker: Failed to start backup Disk('backups', '02907_backup_mv_with_no_inner_table_test_znk42fzk'): Code: 36. DB::Exception: Disk ''backups'' is not allowed for backups, see the 'backups.allowed_disk' configuration parameter. (BAD_ARGUMENTS), Stack trace (when copying this message, always include the lines below):
2023-12-26 17:45:13 
2023-12-26 17:45:13 0. ./build_docker/./src/Common/Exception.cpp:98: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000bfe9dc4 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-26 17:45:13 1. DB::Exception::Exception<String>(int, FormatStringHelperImpl<std::type_identity<String>::type>, String&&) @ 0x00000000075f5fb0 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-26 17:45:13 2. ./build_docker/./src/Backups/registerBackupEnginesFileAndDisk.cpp:43: std::shared_ptr<DB::IBackup> std::__function::__policy_invoker<std::shared_ptr<DB::IBackup> (DB::BackupFactory::CreateParams const&)>::__call_impl<std::__function::__default_alloc_func<DB::registerBackupEnginesFileAndDisk(DB::BackupFactory&)::$_0, std::shared_ptr<DB::IBackup> (DB::BackupFactory::CreateParams const&)>>(std::__function::__policy_storage const*, DB::BackupFactory::CreateParams const&) @ 0x0000000010476ec8 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-26 17:45:13 3. ./build_docker/./src/Backups/BackupsWorker.cpp:0: DB::BackupsWorker::doBackup(std::shared_ptr<DB::ASTBackupQuery> const&, String const&, String const&, DB::BackupInfo const&, DB::BackupSettings, std::shared_ptr<DB::IBackupCoordination>, std::shared_ptr<DB::Context const> const&, std::shared_ptr<DB::Context>, bool) @ 0x000000001039b1bc in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-26 17:45:13 4. ./build_docker/./src/Backups/BackupsWorker.cpp:304: DB::BackupsWorker::startMakingBackup(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context const> const&) @ 0x0000000010397e3c in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-26 17:45:13 5. ./build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: DB::BackupsWorker::start(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context>) @ 0x0000000010397800 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-26 17:45:13 6. ./build_docker/./src/Interpreters/InterpreterBackupQuery.cpp:41: DB::InterpreterBackupQuery::execute() @ 0x000000001113f838 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-26 17:45:13 7. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x00000000114e62f4 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-26 17:45:13 8. ./build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x00000000114e2be8 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-26 17:45:13 9. ./build_docker/./src/Server/TCPHandler.cpp:496: DB::TCPHandler::runImpl() @ 0x00000000121d8b78 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-26 17:45:13 10. ./build_docker/./src/Common/CurrentThread.cpp:94: DB::TCPHandler::run() @ 0x00000000121e8cc8 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-26 17:45:13 11. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x00000000146f4284 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-26 17:45:13 12. ./build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x00000000146f5780 in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-26 17:45:13 13. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001486707c in /usr/lib/debug/usr/bin/clickhouse.debug
2023-12-26 17:45:13 14. ./build_docker/./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000014



Comment: Test environment misconfiguration; the 'backups' disk is not allowed for backups (see the backups.allowed_disk server setting referenced in the error).
Status: OK to FAIL.

Regression Aarch64 Results

Results
- Aes Encryption
- Atomic Insert
- Base58
- Benchmark AWS
- Benchmark Minio
- Benchmark GCS
- ClickHouse Keeper
- ClickHouse Keeper SSL
- Data Types
- DateTime64 Extended Range
- Disk Level Encryption
- DNS
- Example
- Extended Precision Data Types
- Kafka
- Kerberos
- Key Value
- LDAP Authentication
- LDAP External User Directory
- LDAP Role Mapping
- Lightweight Delete
- Parquet
- Parquet AWS
- Parquet Minio
- Part Moves Between Shards
- RBAC
- Selects
- Session Timezone
- SSL Server
- S3 AWS
- S3 Minio
- S3 GCS
- Tiered Storage
- Tiered Storage AWS
- Tiered Storage Minio
- Tiered Storage GCS
- Window Functions

Failed:

Suite: Aggregate Functions
Comment: Timeout, took over 3 hours. Successful run - https://github.com/Altinity/clickhouse-regression/actions/runs/7324808186/job/19948716863
Status: OK to FAIL.

Suite: S3 (AWS)
Comment: Issue with test. Successful run - https://github.com/Altinity/clickhouse-regression/actions/runs/7341138782/job/19988336729
Status: Fixed.

Suite: S3 (Minio)
Comment: Issue with test. Successful run - https://github.com/Altinity/clickhouse-regression/actions/runs/7341138782/job/19988336986
Status: Fixed.

Suite: Window Functions
Comment: Issue with test. Successful run - https://github.com/Altinity/clickhouse-regression/actions/runs/7339424527
Status: Fixed on rerun.

Trivy Aarch64 Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/staging-docker-trivy-ubuntu-server-arm64/results.html

Scout Aarch64 Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/staging-docker-scout-ubuntu-server-arm64/results.html

ClickHouse Keeper

Trivy Keeper x86_64 Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/staging-docker-trivy-ubuntu-keeper-amd64/results.html

Scout Keeper x86_64 Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/staging-docker-scout-ubuntu-keeper-amd64/results.html

Trivy Keeper Aarch64 Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/staging-docker-trivy-ubuntu-keeper-arm64/results.html

Scout Keeper Aarch64 Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.8.8.21.altinitystable/2023-12-26T23-40-44.341/staging-docker-scout-ubuntu-keeper-arm64/results.html