Document status - Public


QA Software Build Report

ClickHouse 22.3.15.34 / x86_64

(c) 2022 Altinity Inc. All Rights Reserved.

Approval

Status: Approved for release by QA [vzakaznikov@altinity.com Tue 27 Dec 2022 10:16:03 AM EST]

Reviewed by: azvonov@altinity.com

Date: 26 December 2022

Table of Contents

Test Results

Stage         Status
Integration   Fail
Stateful      Pass
Stateless     Fail
TestFlows     Fail

Results https://altinity-test-reports.s3.amazonaws.com/index.html#builds/stable/v22.3.15.34/2022-12-24T18-03-18.017/
GitLab Pipeline https://gitlab.com/altinity-qa/clickhouse/cicd/release/-/pipelines/731468355
GitHub Pipeline https://github.com/Altinity/ClickHouse/actions/runs/3758181630

Results Analysis

Integration Results

Results
https://altinity-test-reports.s3.amazonaws.com/builds/stable/v22.3.15.34/2022-12-24T18-03-18.017/integration/integration_results_1.html
https://altinity-test-reports.s3.amazonaws.com/builds/stable/v22.3.15.34/2022-12-24T18-03-18.017/integration/integration_results_2.html

Test: /integration/test_cgroup_limit/test.py::test_cgroup_cpu_limit

Reason:

____________________________ test_cgroup_cpu_limit _____________________________
[gw4] linux -- Python 3.8.10 /usr/bin/python3

    def test_cgroup_cpu_limit():
        for num_cpus in (1, 2, 4, 2.8):
            result = run_with_cpu_limit(
                "clickhouse local -q \"select value from system.settings where name='max_threads'\"",
                num_cpus,
            )
            expect_output = (r"\'auto({})\'".format(math.ceil(num_cpus))).encode()
>           assert (
                result.strip() == expect_output
            ), f"fail for cpu limit={num_cpus}, result={result.strip()}, expect={expect_output}"
E           AssertionError: fail for cpu limit=2, result=b"\\'auto(1)\\'", expect=b"\\'auto(2)\\'"
E           assert b"\\'auto(1)\\'" == b"\\'auto(2)\\'"
E             At index 7 diff: b'1' != b'2'
E             Full diff:
E             - b"\\'auto(2)\\'"
E             ?           ^
E             + b"\\'auto(1)\\'"
E             ?           ^

test_cgroup_limit/test.py:48: AssertionError

Comment: Misconfiguration: only 1 CPU/hardware thread is available to ClickHouse instead of at least 2.
Status: FAIL (OK to fail)
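
For context, the check that fails here can be reproduced outside the test harness. The sketch below is illustrative only: it assumes Docker is available, the image tag and entrypoint are placeholders to be adjusted to the build under test, and the exact quoting of the returned value may differ slightly from the test's expectation.

    # Illustrative sketch (not the harness code): run clickhouse local under an
    # explicit CPU quota and inspect how max_threads is deduced.
    import math
    import subprocess

    IMAGE = "altinity/clickhouse-server:22.3.15.34"  # hypothetical tag, adjust as needed

    def max_threads_under_cpu_limit(num_cpus):
        query = "select value from system.settings where name='max_threads'"
        return subprocess.check_output(
            ["docker", "run", "--rm", f"--cpus={num_cpus}",
             "--entrypoint", "clickhouse", IMAGE, "local", "-q", query]
        ).strip()

    for num_cpus in (1, 2, 4, 2.8):
        expected = "'auto({})'".format(math.ceil(num_cpus))
        print(num_cpus, max_threads_under_cpu_limit(num_cpus), "expected:", expected)

The failure for limit 2 in this run ('auto(1)' instead of 'auto(2)') is consistent with the runner exposing only one hardware thread to the container.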

Test: /integration/test_http_handlers_config/test.py::test_predefined_query_handler

Reason:

________________________ test_predefined_query_handler _________________________
[gw3] linux -- Python 3.8.10 /usr/bin/python3

    def test_predefined_query_handler():
        with contextlib.closing(
            SimpleCluster(
                ClickHouseCluster(__file__), "predefined_handler", "test_predefined_handler"
            )
        ) as cluster:
            assert (
                404
                == cluster.instance.http_request(
                    "?max_threads=1", method="GET", headers={"XXX": "xxx"}
                ).status_code
            )

            assert (
                404
                == cluster.instance.http_request(
                    "test_predefined_handler_get?max_threads=1",
                    method="GET",
                    headers={"XXX": "bad"},
                ).status_code
            )

            assert (
                404
                == cluster.instance.http_request(
                    "test_predefined_handler_get?max_threads=1",
                    method="POST",
                    headers={"XXX": "xxx"},
                ).status_code
            )

            assert (
                500
                == cluster.instance.http_request(
                    "test_predefined_handler_get?max_threads=1",
                    method="GET",
                    headers={"XXX": "xxx"},
                ).status_code
            )

>           assert (
                b"max_threads\t1\n"
                == cluster.instance.http_request(
                    "test_predefined_handler_get?max_threads=1&setting_name=max_threads",
                    method="GET",
                    headers={"XXX": "xxx"},
                ).content
            )
E           assert b'max_threads\t1\n' == b"max_threads\t\\'auto(1)\\'\n"
E             At index 12 diff: b'1' != b'\\'
E             Full diff:
E             - b"max_threads\t\\'auto(1)\\'\n"
E             + b'max_threads\t1\n'

test_http_handlers_config/test.py:119: AssertionError

Comment: Minor: the number of threads was not explicitly set to 1 but implicitly deduced as 1. Could be due to a misconfiguration in the test environment coupled with a minor issue in ClickHouse.
Status: FAIL (OK to fail)
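
For reference, the failing assertion can be probed manually against a running test instance. The sketch below is an assumption-laden illustration: it assumes the requests package, the default HTTP port 8123, and the handler path and custom header taken from the test.

    # Illustrative sketch: query the predefined handler directly and compare the body.
    import requests

    resp = requests.get(
        "http://localhost:8123/test_predefined_handler_get",
        params={"max_threads": 1, "setting_name": "max_threads"},
        headers={"XXX": "xxx"},
    )
    print(resp.status_code, resp.content)
    # Expected by the test: b"max_threads\t1\n"
    # Observed in this run:  b"max_threads\t\\'auto(1)\\'\n"  (value deduced, not taken from the URL)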

Test: test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node1]

Reason:

____________________ test_mysql_settings[clickhouse_node1] _____________________
[gw2] linux -- Python 3.8.10 /usr/bin/python3

started_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f59347359a0>
started_mysql_8_0 = <test_materialized_mysql_database.test.MySQLConnection object at 0x7f5934632550>
started_mysql_5_7 = <test_materialized_mysql_database.test.MySQLConnection object at 0x7f5932516430>
clickhouse_node = <helpers.cluster.ClickHouseInstance object at 0x7f5934735970>

    @pytest.mark.parametrize(
        ("clickhouse_node"), [node_disable_bytes_settings, node_disable_rows_settings]
    )
    def test_mysql_settings(
        started_cluster, started_mysql_8_0, started_mysql_5_7, clickhouse_node
    ):
>       materialize_with_ddl.mysql_settings_test(
            clickhouse_node, started_mysql_5_7, "mysql57"
        )

test_materialized_mysql_database/test.py:448: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

clickhouse_node = <helpers.cluster.ClickHouseInstance object at 0x7f5934735970>
mysql_node = <test_materialized_mysql_database.test.MySQLConnection object at 0x7f5932516430>
service_name = 'mysql57'

    def mysql_settings_test(clickhouse_node, mysql_node, service_name):
        mysql_node.query("DROP DATABASE IF EXISTS test_database")
        clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
        mysql_node.query("CREATE DATABASE test_database")
        mysql_node.query(
            "CREATE TABLE test_database.a (id INT(11) NOT NULL PRIMARY KEY, value VARCHAR(255))"
        )
        mysql_node.query("INSERT INTO test_database.a VALUES(1, 'foo')")
        mysql_node.query("INSERT INTO test_database.a VALUES(2, 'bar')")

        clickhouse_node.query(
            "CREATE DATABASE test_database ENGINE = MaterializedMySQL('{}:3306', 'test_database', 'root', 'clickhouse')".format(
                service_name
            )
        )
        check_query(
            clickhouse_node, "SELECT COUNT() FROM test_database.a FORMAT TSV", "2\n"
        )

>       assert (
            clickhouse_node.query(
                "SELECT COUNT(DISTINCT  blockNumber()) FROM test_database.a FORMAT TSV"
            )
            == "2\n"
        )
E       AssertionError

test_materialized_mysql_database/materialize_with_ddl.py:1795: AssertionError

Comment: Misconfiguration: only 1 CPU/hardware thread is available to ClickHouse, while the test expects more.
Status: FAIL (OK to fail)

Test: test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node0]

Reason:

____________________ test_mysql_settings[clickhouse_node0] _____________________
[gw2] linux -- Python 3.8.10 /usr/bin/python3

started_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f59347359a0>
started_mysql_8_0 = <test_materialized_mysql_database.test.MySQLConnection object at 0x7f5934632550>
started_mysql_5_7 = <test_materialized_mysql_database.test.MySQLConnection object at 0x7f5932516430>
clickhouse_node = <helpers.cluster.ClickHouseInstance object at 0x7f5934735940>

    @pytest.mark.parametrize(
        ("clickhouse_node"), [node_disable_bytes_settings, node_disable_rows_settings]
    )
    def test_mysql_settings(
        started_cluster, started_mysql_8_0, started_mysql_5_7, clickhouse_node
    ):
>       materialize_with_ddl.mysql_settings_test(
            clickhouse_node, started_mysql_5_7, "mysql57"
        )

test_materialized_mysql_database/test.py:448: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

clickhouse_node = <helpers.cluster.ClickHouseInstance object at 0x7f5934735940>
mysql_node = <test_materialized_mysql_database.test.MySQLConnection object at 0x7f5932516430>
service_name = 'mysql57'

    def mysql_settings_test(clickhouse_node, mysql_node, service_name):
        mysql_node.query("DROP DATABASE IF EXISTS test_database")
        clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
        mysql_node.query("CREATE DATABASE test_database")
        mysql_node.query(
            "CREATE TABLE test_database.a (id INT(11) NOT NULL PRIMARY KEY, value VARCHAR(255))"
        )
        mysql_node.query("INSERT INTO test_database.a VALUES(1, 'foo')")
        mysql_node.query("INSERT INTO test_database.a VALUES(2, 'bar')")

        clickhouse_node.query(
            "CREATE DATABASE test_database ENGINE = MaterializedMySQL('{}:3306', 'test_database', 'root', 'clickhouse')".format(
                service_name
            )
        )
        check_query(
            clickhouse_node, "SELECT COUNT() FROM test_database.a FORMAT TSV", "2\n"
        )

>       assert (
            clickhouse_node.query(
                "SELECT COUNT(DISTINCT  blockNumber()) FROM test_database.a FORMAT TSV"
            )
            == "2\n"
        )
E       AssertionError

test_materialized_mysql_database/materialize_with_ddl.py:1795: AssertionError

Comment: Minor: the number of threads was not explicitly set to 1 but implicitly deduced as 1. Could be due to a misconfiguration in the test environment coupled with a minor issue in ClickHouse.
Status: FAIL (OK to fail)

Test: test_storage_kafka/test.py::test_kafka_consumer_hang

Reason:

___________________________ test_kafka_consumer_hang ___________________________
[gw1] linux -- Python 3.8.10 /usr/bin/python3

kafka_cluster = <helpers.cluster.ClickHouseCluster object at 0x7fd952f0e490>

    def test_kafka_consumer_hang(kafka_cluster):
        admin_client = KafkaAdminClient(
            bootstrap_servers="localhost:{}".format(kafka_cluster.kafka_port)
        )

        topic_name = "consumer_hang"
        kafka_create_topic(admin_client, topic_name, num_partitions=8)

>       instance.query(
            f"""
            DROP TABLE IF EXISTS test.kafka;
            DROP TABLE IF EXISTS test.view;
            DROP TABLE IF EXISTS test.consumer;

            CREATE TABLE test.kafka (key UInt64, value UInt64)
                ENGINE = Kafka
                SETTINGS kafka_broker_list = 'kafka1:19092',
                         kafka_topic_list = '{topic_name}',
                         kafka_group_name = '{topic_name}',
                         kafka_format = 'JSONEachRow',
                         kafka_num_consumers = 8;
            CREATE TABLE test.view (key UInt64, value UInt64) ENGINE = Memory();
            CREATE MATERIALIZED VIEW test.consumer TO test.view AS SELECT * FROM test.kafka;
            """
        )

test_storage_kafka/test.py:1016: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/cluster.py:2802: in query
    return self.client.query(
helpers/client.py:31: in query
    return self.get_query_request(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <helpers.client.CommandRequest object at 0x7fd92a736c40>

    def get_answer(self):
        self.process.wait()
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)

        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (self.process.returncode != 0 or stderr) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 36, stderr: Received exception from server (version 22.3.15):
E           Code: 36. DB::Exception: Received from 172.16.5.8:9000. DB::Exception: Number of consumers can not be bigger than 1. Stack trace:
E           
E           0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb3a9eda in /usr/bin/clickhouse
E           1. DB::Exception::Exception<unsigned int&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int&) @ 0x151e9411 in /usr/bin/clickhouse
E           2. ? @ 0x151e87c9 in /usr/bin/clickhouse
E           3. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x16037afa in /usr/bin/clickhouse
E           4. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0x158bcff9 in /usr/bin/clickhouse
E           5. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x158b7d1f in /usr/bin/clickhouse
E           6. DB::InterpreterCreateQuery::execute() @ 0x158c054b in /usr/bin/clickhouse
E           7. ? @ 0x15e1150f in /usr/bin/clickhouse
E           8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x15e0efb5 in /usr/bin/clickhouse
E           9. DB::TCPHandler::runImpl() @ 0x16787970 in /usr/bin/clickhouse
E           10. DB::TCPHandler::run() @ 0x16797519 in /usr/bin/clickhouse
E           11. Poco::Net::TCPServerConnection::start() @ 0x19a625af in /usr/bin/clickhouse
E           12. Poco::Net::TCPServerDispatcher::run() @ 0x19a64a01 in /usr/bin/clickhouse
E           13. Poco::PooledThread::run() @ 0x19c22389 in /usr/bin/clickhouse
E           14. Poco::ThreadImpl::runnableEntry(void*) @ 0x19c1f6e0 in /usr/bin/clickhouse
E           15. ? @ 0x7fcbe8f32609 in ?
E           16. clone @ 0x7fcbe8e57133 in ?
E           . (BAD_ARGUMENTS)
E           (query: CREATE TABLE test.kafka (key UInt64, value UInt64)
E                       ENGINE = Kafka
E                       SETTINGS kafka_broker_list = 'kafka1:19092',
E                                kafka_topic_list = 'consumer_hang',
E                                kafka_group_name = 'consumer_hang',
E                                kafka_format = 'JSONEachRow',
E                                kafka_num_consumers = 8;)

helpers/client.py:187: QueryRuntimeException

Comment: Misconfiguration: only 1 CPU/hardware thread is available to ClickHouse, while the Kafka table is configured to use more consumers.
Status: FAIL (OK to fail)
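
This and the following Kafka failures share the same root cause: the server rejects kafka_num_consumers values larger than the number of hardware threads it can see, so on a 1-CPU runner the CREATE TABLE itself fails. A minimal sketch of that check outside the harness (assumes a local clickhouse-client and a reachable broker; table, topic and group names are illustrative):

    # Illustrative sketch: on a host where ClickHouse sees only 1 hardware thread,
    # this CREATE is rejected with BAD_ARGUMENTS ("Number of consumers can not be
    # bigger than 1"); with enough threads it succeeds.
    import subprocess

    create = """
    CREATE TABLE default.kafka_probe (key UInt64, value UInt64)
        ENGINE = Kafka
        SETTINGS kafka_broker_list = 'kafka1:19092',
                 kafka_topic_list = 'probe',
                 kafka_group_name = 'probe',
                 kafka_format = 'JSONEachRow',
                 kafka_num_consumers = 8
    """

    proc = subprocess.run(["clickhouse-client", "-q", create],
                          capture_output=True, text=True)
    print(proc.returncode, proc.stderr.strip())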

Test: test_storage_kafka/test.py::test_kafka_virtual_columns2

Reason:

_________________________ test_kafka_virtual_columns2 __________________________
[gw1] linux -- Python 3.8.10 /usr/bin/python3

kafka_cluster = <helpers.cluster.ClickHouseCluster object at 0x7fd952f0e490>

    def test_kafka_virtual_columns2(kafka_cluster):
        admin_client = KafkaAdminClient(
            bootstrap_servers="localhost:{}".format(kafka_cluster.kafka_port)
        )

        topic_config = {
            # default retention, since predefined timestamp_ms is used.
            "retention.ms": "-1",
        }
        kafka_create_topic(admin_client, "virt2_0", num_partitions=2, config=topic_config)
        kafka_create_topic(admin_client, "virt2_1", num_partitions=2, config=topic_config)

>       instance.query(
            """
            CREATE TABLE test.kafka (value UInt64)
                ENGINE = Kafka
                SETTINGS kafka_broker_list = 'kafka1:19092',
                         kafka_topic_list = 'virt2_0,virt2_1',
                         kafka_group_name = 'virt2',
                         kafka_num_consumers = 2,
                         kafka_format = 'JSONEachRow';

            CREATE MATERIALIZED VIEW test.view Engine=Log AS
            SELECT value, _key, _topic, _partition, _offset, toUnixTimestamp(_timestamp), toUnixTimestamp64Milli(_timestamp_ms), _headers.name, _headers.value FROM test.kafka;
            """
        )

test_storage_kafka/test.py:2150: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/cluster.py:2802: in query
    return self.client.query(
helpers/client.py:31: in query
    return self.get_query_request(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <helpers.client.CommandRequest object at 0x7fd926221df0>

    def get_answer(self):
        self.process.wait()
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)

        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (self.process.returncode != 0 or stderr) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 36, stderr: Received exception from server (version 22.3.15):
E           Code: 36. DB::Exception: Received from 172.16.5.8:9000. DB::Exception: Number of consumers can not be bigger than 1. Stack trace:
E           
E           0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb3a9eda in /usr/bin/clickhouse
E           1. DB::Exception::Exception<unsigned int&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int&) @ 0x151e9411 in /usr/bin/clickhouse
E           2. ? @ 0x151e87c9 in /usr/bin/clickhouse
E           3. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x16037afa in /usr/bin/clickhouse
E           4. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0x158bcff9 in /usr/bin/clickhouse
E           5. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x158b7d1f in /usr/bin/clickhouse
E           6. DB::InterpreterCreateQuery::execute() @ 0x158c054b in /usr/bin/clickhouse
E           7. ? @ 0x15e1150f in /usr/bin/clickhouse
E           8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x15e0efb5 in /usr/bin/clickhouse
E           9. DB::TCPHandler::runImpl() @ 0x16787970 in /usr/bin/clickhouse
E           10. DB::TCPHandler::run() @ 0x16797519 in /usr/bin/clickhouse
E           11. Poco::Net::TCPServerConnection::start() @ 0x19a625af in /usr/bin/clickhouse
E           12. Poco::Net::TCPServerDispatcher::run() @ 0x19a64a01 in /usr/bin/clickhouse
E           13. Poco::PooledThread::run() @ 0x19c22389 in /usr/bin/clickhouse
E           14. Poco::ThreadImpl::runnableEntry(void*) @ 0x19c1f6e0 in /usr/bin/clickhouse
E           15. ? @ 0x7fcbe8f32609 in ?
E           16. clone @ 0x7fcbe8e57133 in ?
E           . (BAD_ARGUMENTS)
E           (query: CREATE TABLE test.kafka (value UInt64)
E                       ENGINE = Kafka
E                       SETTINGS kafka_broker_list = 'kafka1:19092',
E                                kafka_topic_list = 'virt2_0,virt2_1',
E                                kafka_group_name = 'virt2',
E                                kafka_num_consumers = 2,
E                                kafka_format = 'JSONEachRow';)

helpers/client.py:187: QueryRuntimeException

Comment: Misconfiguration: only 1 CPU/hardware thread is available to ClickHouse, while the Kafka table is configured to use more consumers (DB::Exception: Number of consumers can not be bigger than 1).
Status: FAIL (OK to fail)

Test: test_storage_kafka/test.py::test_kafka_read_consumers_in_parallel

Reason:

____________________ test_kafka_read_consumers_in_parallel _____________________
[gw1] linux -- Python 3.8.10 /usr/bin/python3

kafka_cluster = <helpers.cluster.ClickHouseCluster object at 0x7fd952f0e490>

    def test_kafka_read_consumers_in_parallel(kafka_cluster):
        admin_client = KafkaAdminClient(
            bootstrap_servers="localhost:{}".format(kafka_cluster.kafka_port)
        )

        topic_name = "read_consumers_in_parallel"
        kafka_create_topic(admin_client, topic_name, num_partitions=8)

        cancel = threading.Event()

        def produce():
            while not cancel.is_set():
                messages = []
                for _ in range(100):
                    messages.append(json.dumps({"key": 0, "value": 0}))
                kafka_produce(kafka_cluster, "read_consumers_in_parallel", messages)
                time.sleep(1)

        kafka_thread = threading.Thread(target=produce)
        kafka_thread.start()

        # when we have more than 1 consumer in a single table,
        # and kafka_thread_per_consumer=0
        # all the consumers should be read in parallel, not in sequence.
        # then reading in parallel 8 consumers with 1 seconds kafka_poll_timeout_ms and less than 1 sec limit
        # we should have exactly 1 poll per consumer (i.e. 8 polls) every 1 seconds (from different threads)
        # in case parallel consuming is not working we will have only 1 poll every 1 seconds (from the same thread).
>       instance.query(
            f"""
            DROP TABLE IF EXISTS test.kafka;
            DROP TABLE IF EXISTS test.view;
            DROP TABLE IF EXISTS test.consumer;

            CREATE TABLE test.kafka (key UInt64, value UInt64)
                ENGINE = Kafka
                SETTINGS kafka_broker_list = 'kafka1:19092',
                         kafka_topic_list = '{topic_name}',
                         kafka_group_name = '{topic_name}',
                         kafka_format = 'JSONEachRow',
                         kafka_num_consumers = 8,
                         kafka_thread_per_consumer = 0,
                         kafka_poll_timeout_ms = 1000,
                         kafka_flush_interval_ms = 999;
            CREATE TABLE test.view (key UInt64, value UInt64) ENGINE = Memory();
            CREATE MATERIALIZED VIEW test.consumer TO test.view AS SELECT * FROM test.kafka;
            """
        )

test_storage_kafka/test.py:1179: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/cluster.py:2802: in query
    return self.client.query(
helpers/client.py:31: in query
    return self.get_query_request(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <helpers.client.CommandRequest object at 0x7fd950c704c0>

    def get_answer(self):
        self.process.wait()
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)

        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (self.process.returncode != 0 or stderr) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 36, stderr: Received exception from server (version 22.3.15):
E           Code: 36. DB::Exception: Received from 172.16.5.8:9000. DB::Exception: Number of consumers can not be bigger than 1. Stack trace:
E           
E           0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb3a9eda in /usr/bin/clickhouse
E           1. DB::Exception::Exception<unsigned int&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int&) @ 0x151e9411 in /usr/bin/clickhouse
E           2. ? @ 0x151e87c9 in /usr/bin/clickhouse
E           3. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x16037afa in /usr/bin/clickhouse
E           4. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0x158bcff9 in /usr/bin/clickhouse
E           5. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x158b7d1f in /usr/bin/clickhouse
E           6. DB::InterpreterCreateQuery::execute() @ 0x158c054b in /usr/bin/clickhouse
E           7. ? @ 0x15e1150f in /usr/bin/clickhouse
E           8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x15e0efb5 in /usr/bin/clickhouse
E           9. DB::TCPHandler::runImpl() @ 0x16787970 in /usr/bin/clickhouse
E           10. DB::TCPHandler::run() @ 0x16797519 in /usr/bin/clickhouse
E           11. Poco::Net::TCPServerConnection::start() @ 0x19a625af in /usr/bin/clickhouse
E           12. Poco::Net::TCPServerDispatcher::run() @ 0x19a64a01 in /usr/bin/clickhouse
E           13. Poco::PooledThread::run() @ 0x19c22389 in /usr/bin/clickhouse
E           14. Poco::ThreadImpl::runnableEntry(void*) @ 0x19c1f6e0 in /usr/bin/clickhouse
E           15. ? @ 0x7fcbe8f32609 in ?
E           16. clone @ 0x7fcbe8e57133 in ?
E           . (BAD_ARGUMENTS)
E           (query: CREATE TABLE test.kafka (key UInt64, value UInt64)
E                       ENGINE = Kafka
E                       SETTINGS kafka_broker_list = 'kafka1:19092',
E                                kafka_topic_list = 'read_consumers_in_parallel',
E                                kafka_group_name = 'read_consumers_in_parallel',
E                                kafka_format = 'JSONEachRow',
E                                kafka_num_consumers = 8,
E                                kafka_thread_per_consumer = 0,
E                                kafka_poll_timeout_ms = 1000,
E                                kafka_flush_interval_ms = 999;)

helpers/client.py:187: QueryRuntimeException

Comment: Misconfiguration: only 1 CPU/hardware thread is available to ClickHouse, while the Kafka table is configured to use more consumers (DB::Exception: Number of consumers can not be bigger than 1).
Status: FAIL (OK to fail)

Test: test_storage_kafka/test.py::test_kafka_recreate_kafka_table

Reason:

_______________________ test_kafka_recreate_kafka_table ________________________
[gw1] linux -- Python 3.8.10 /usr/bin/python3

kafka_cluster = <helpers.cluster.ClickHouseCluster object at 0x7fd952f0e490>

    def test_kafka_recreate_kafka_table(kafka_cluster):
        """
        Checks that materialized view work properly after dropping and recreating the Kafka table.
        """
        # line for backporting:
        # admin_client = KafkaAdminClient(bootstrap_servers="localhost:9092")
        admin_client = KafkaAdminClient(
            bootstrap_servers="localhost:{}".format(kafka_cluster.kafka_port)
        )

        topic_name = "recreate_kafka_table"
        kafka_create_topic(admin_client, topic_name, num_partitions=6)

>       instance.query(
            """
            DROP TABLE IF EXISTS test.view;
            DROP TABLE IF EXISTS test.consumer;
            CREATE TABLE test.kafka (key UInt64, value UInt64)
                ENGINE = Kafka
                SETTINGS kafka_broker_list = 'kafka1:19092',
                         kafka_topic_list = 'recreate_kafka_table',
                         kafka_group_name = 'recreate_kafka_table_group',
                         kafka_format = 'JSONEachRow',
                         kafka_num_consumers = 6,
                         kafka_flush_interval_ms = 1000,
                         kafka_skip_broken_messages = 1048577;

            CREATE TABLE test.view (key UInt64, value UInt64)
                ENGINE = MergeTree()
                ORDER BY key;
            CREATE MATERIALIZED VIEW test.consumer TO test.view AS
                SELECT * FROM test.kafka;
        """
        )

test_storage_kafka/test.py:1556: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/cluster.py:2802: in query
    return self.client.query(
helpers/client.py:31: in query
    return self.get_query_request(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <helpers.client.CommandRequest object at 0x7fd92fb267c0>

    def get_answer(self):
        self.process.wait()
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)

        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (self.process.returncode != 0 or stderr) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 36, stderr: Received exception from server (version 22.3.15):
E           Code: 36. DB::Exception: Received from 172.16.5.8:9000. DB::Exception: Number of consumers can not be bigger than 1. Stack trace:
E           
E           0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb3a9eda in /usr/bin/clickhouse
E           1. DB::Exception::Exception<unsigned int&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int&) @ 0x151e9411 in /usr/bin/clickhouse
E           2. ? @ 0x151e87c9 in /usr/bin/clickhouse
E           3. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x16037afa in /usr/bin/clickhouse
E           4. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0x158bcff9 in /usr/bin/clickhouse
E           5. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x158b7d1f in /usr/bin/clickhouse
E           6. DB::InterpreterCreateQuery::execute() @ 0x158c054b in /usr/bin/clickhouse
E           7. ? @ 0x15e1150f in /usr/bin/clickhouse
E           8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x15e0efb5 in /usr/bin/clickhouse
E           9. DB::TCPHandler::runImpl() @ 0x16787970 in /usr/bin/clickhouse
E           10. DB::TCPHandler::run() @ 0x16797519 in /usr/bin/clickhouse
E           11. Poco::Net::TCPServerConnection::start() @ 0x19a625af in /usr/bin/clickhouse
E           12. Poco::Net::TCPServerDispatcher::run() @ 0x19a64a01 in /usr/bin/clickhouse
E           13. Poco::PooledThread::run() @ 0x19c22389 in /usr/bin/clickhouse
E           14. Poco::ThreadImpl::runnableEntry(void*) @ 0x19c1f6e0 in /usr/bin/clickhouse
E           15. ? @ 0x7fcbe8f32609 in ?
E           16. clone @ 0x7fcbe8e57133 in ?
E           . (BAD_ARGUMENTS)
E           (query: CREATE TABLE test.kafka (key UInt64, value UInt64)
E                       ENGINE = Kafka
E                       SETTINGS kafka_broker_list = 'kafka1:19092',
E                                kafka_topic_list = 'recreate_kafka_table',
E                                kafka_group_name = 'recreate_kafka_table_group',
E                                kafka_format = 'JSONEachRow',
E                                kafka_num_consumers = 6,
E                                kafka_flush_interval_ms = 1000,
E                                kafka_skip_broken_messages = 1048577;)

helpers/client.py:187: QueryRuntimeException

Comment: Misconfiguration: only 1 CPU/hardware thread is available to ClickHouse, while the Kafka table is configured to use more consumers (DB::Exception: Number of consumers can not be bigger than 1).
Status: FAIL (OK to fail)

Test: test_storage_mysql/test.py::test_settings

Reason:

________________________________ test_settings _________________________________
[gw2] linux -- Python 3.8.10 /usr/bin/python3

started_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f61af888c40>

    def test_settings(started_cluster):
        table_name = "test_settings"
        node1.query(f"DROP TABLE IF EXISTS {table_name}")
        wait_timeout = 123
        rw_timeout = 10123001
        connect_timeout = 10123002
        connection_pool_size = 1

        conn = get_mysql_conn(started_cluster, cluster.mysql_ip)
        drop_mysql_table(conn, table_name)
        create_mysql_table(conn, table_name)

        node1.query(
            f"""
            CREATE TABLE {table_name}
            (
                id UInt32,
                name String,
                age UInt32,
                money UInt32
            )
            ENGINE = MySQL('mysql57:3306', 'clickhouse', '{table_name}', 'root', 'clickhouse')
            SETTINGS connection_wait_timeout={wait_timeout}, connect_timeout={connect_timeout}, read_write_timeout={rw_timeout}, connection_pool_size={connection_pool_size}
            """
        )

        node1.query(f"SELECT * FROM {table_name}")
        assert node1.contains_in_log(
            f"with settings: connect_timeout={connect_timeout}, read_write_timeout={rw_timeout}"
        )

        rw_timeout = 20123001
        connect_timeout = 20123002
        node1.query(f"SELECT * FROM mysql(mysql_with_settings)")
        assert node1.contains_in_log(
            f"with settings: connect_timeout={connect_timeout}, read_write_timeout={rw_timeout}"
        )

        rw_timeout = 30123001
        connect_timeout = 30123002
>       node1.query(
            f"""
            SELECT *
                FROM mysql('mysql57:3306', 'clickhouse', '{table_name}', 'root', 'clickhouse',
                           SETTINGS
                               connection_wait_timeout={wait_timeout},
                               connect_timeout={connect_timeout},
                               read_write_timeout={rw_timeout},
                               connection_pool_size={connection_pool_size})
        """
        )

test_storage_mysql/test.py:775: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/cluster.py:2802: in query
    return self.client.query(
helpers/client.py:31: in query
    return self.get_query_request(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <helpers.client.CommandRequest object at 0x7f61af5d7190>

    def get_answer(self):
        self.process.wait()
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)

        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (self.process.returncode != 0 or stderr) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 62, stderr: Code: 62. DB::Exception: Syntax error: failed at position 161 ('connection_wait_timeout') (line 4, col 28): connection_wait_timeout=123,
E                                      connect_timeout=30123002,
E                                      read_write_timeout=30123001,
E                                 . Expected one of: token, Comma, Arrow, Dot, UUID, DoubleColon, MOD, DIV, NOT, BETWEEN, LIKE, ILIKE, NOT LIKE, NOT ILIKE, IN, NOT IN, GLOBAL IN, GLOBAL NOT IN, IS, AND, OR, QuestionMark, alias, AS, end of query. (SYNTAX_ERROR), Stack trace (when copying this message, always include the lines below):
E           
E           0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb3a9eda in /clickhouse
E           1. DB::parseQueryAndMovePosition(DB::IParser&, char const*&, char const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, unsigned long, unsigned long) @ 0x16d1eabf in /clickhouse
E           2. DB::ClientBase::parseQuery(char const*&, char const*, bool) const @ 0x166715d0 in /clickhouse
E           3. DB::ClientBase::analyzeMultiQueryText(char const*&, char const*&, char const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&, std::__1::shared_ptr<DB::IAST>&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::optional<DB::Exception>&) @ 0x16681a9c in /clickhouse
E           4. DB::ClientBase::executeMultiQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x16682861 in /clickhouse
E           5. DB::ClientBase::processQueryText(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x166837ce in /clickhouse
E           6. DB::ClientBase::runNonInteractive() @ 0x166861c6 in /clickhouse
E           7. DB::Client::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xb49d9bf in /clickhouse
E           8. Poco::Util::Application::run() @ 0x19a7b0a6 in /clickhouse
E           9. mainEntryClickHouseClient(int, char**) @ 0xb4ab321 in /clickhouse
E           10. main @ 0xb3a42aa in /clickhouse
E           11. __libc_start_main @ 0x7f3a9bdd0083 in ?
E           12. _start @ 0xb18be2e in /clickhouse

helpers/client.py:187: QueryRuntimeException

Comment: SYNTAX_ERROR: the SETTINGS clause inside the mysql() table function is not recognized by the 22.3.15 parser; the test appears to exercise newer syntax than this build supports.
Status: FAIL (OK to fail)
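
The first two queries of the test (engine-level SETTINGS and the named collection mysql(mysql_with_settings)) succeeded; only the third form, with a SETTINGS clause inside the mysql() table function call, was rejected. A side-by-side sketch of the two forms for clarity (values copied from the test):

    # Accepted in this run: SETTINGS attached to the MySQL table engine.
    accepted = """
    CREATE TABLE test_settings (id UInt32, name String, age UInt32, money UInt32)
        ENGINE = MySQL('mysql57:3306', 'clickhouse', 'test_settings', 'root', 'clickhouse')
        SETTINGS connection_wait_timeout=123, connect_timeout=10123002,
                 read_write_timeout=10123001, connection_pool_size=1
    """

    # Rejected in this run (SYNTAX_ERROR): SETTINGS inside the mysql() table function.
    rejected = """
    SELECT *
    FROM mysql('mysql57:3306', 'clickhouse', 'test_settings', 'root', 'clickhouse',
               SETTINGS connection_wait_timeout=123, connect_timeout=30123002,
                        read_write_timeout=30123001, connection_pool_size=1)
    """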

Test: test_storage_kafka/test.py::test_kafka_csv_with_thread_per_consumer

Reason:

___________________ test_kafka_csv_with_thread_per_consumer ____________________
[gw1] linux -- Python 3.8.10 /usr/bin/python3

kafka_cluster = <helpers.cluster.ClickHouseCluster object at 0x7fd952f0e490>

    def test_kafka_csv_with_thread_per_consumer(kafka_cluster):
>       instance.query(
            """
            CREATE TABLE test.kafka (key UInt64, value UInt64)
                ENGINE = Kafka
                SETTINGS kafka_broker_list = 'kafka1:19092',
                         kafka_topic_list = 'csv_with_thread_per_consumer',
                         kafka_group_name = 'csv_with_thread_per_consumer',
                         kafka_format = 'CSV',
                         kafka_row_delimiter = '\\n',
                         kafka_num_consumers = 4,
                         kafka_commit_on_select = 1,
                         kafka_thread_per_consumer = 1;
            """
        )

test_storage_kafka/test.py:3304: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/cluster.py:2802: in query
    return self.client.query(
helpers/client.py:31: in query
    return self.get_query_request(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <helpers.client.CommandRequest object at 0x7fd950510f40>

    def get_answer(self):
        self.process.wait()
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)

        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (self.process.returncode != 0 or stderr) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 36, stderr: Received exception from server (version 22.3.15):
E           Code: 36. DB::Exception: Received from 172.16.5.8:9000. DB::Exception: Number of consumers can not be bigger than 1. Stack trace:
E           
E           0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb3a9eda in /usr/bin/clickhouse
E           1. DB::Exception::Exception<unsigned int&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int&) @ 0x151e9411 in /usr/bin/clickhouse
E           2. ? @ 0x151e87c9 in /usr/bin/clickhouse
E           3. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x16037afa in /usr/bin/clickhouse
E           4. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0x158bcff9 in /usr/bin/clickhouse
E           5. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x158b7d1f in /usr/bin/clickhouse
E           6. DB::InterpreterCreateQuery::execute() @ 0x158c054b in /usr/bin/clickhouse
E           7. ? @ 0x15e1150f in /usr/bin/clickhouse
E           8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x15e0efb5 in /usr/bin/clickhouse
E           9. DB::TCPHandler::runImpl() @ 0x16787970 in /usr/bin/clickhouse
E           10. DB::TCPHandler::run() @ 0x16797519 in /usr/bin/clickhouse
E           11. Poco::Net::TCPServerConnection::start() @ 0x19a625af in /usr/bin/clickhouse
E           12. Poco::Net::TCPServerDispatcher::run() @ 0x19a64a01 in /usr/bin/clickhouse
E           13. Poco::PooledThread::run() @ 0x19c22389 in /usr/bin/clickhouse
E           14. Poco::ThreadImpl::runnableEntry(void*) @ 0x19c1f6e0 in /usr/bin/clickhouse
E           15. ? @ 0x7fcbe8f32609 in ?
E           16. clone @ 0x7fcbe8e57133 in ?
E           . (BAD_ARGUMENTS)
E           (query: CREATE TABLE test.kafka (key UInt64, value UInt64)
E                       ENGINE = Kafka
E                       SETTINGS kafka_broker_list = 'kafka1:19092',
E                                kafka_topic_list = 'csv_with_thread_per_consumer',
E                                kafka_group_name = 'csv_with_thread_per_consumer',
E                                kafka_format = 'CSV',
E                                kafka_row_delimiter = '\n',
E                                kafka_num_consumers = 4,
E                                kafka_commit_on_select = 1,
E                                kafka_thread_per_consumer = 1;)

helpers/client.py:187: QueryRuntimeException

Comment: Misconfiguration: only 1 CPU/hardware thread is available to ClickHouse, while the Kafka table is configured to use more consumers.
Status: FAIL (OK to fail)

Test: test_storage_kafka/test.py::test_issue26643

Reason:

_______________________________ test_issue26643 ________________________________
[gw1] linux -- Python 3.8.10 /usr/bin/python3

kafka_cluster = <helpers.cluster.ClickHouseCluster object at 0x7fd952f0e490>

    def test_issue26643(kafka_cluster):

        # for backporting:
        # admin_client = KafkaAdminClient(bootstrap_servers="localhost:9092")
        admin_client = KafkaAdminClient(
            bootstrap_servers="localhost:{}".format(kafka_cluster.kafka_port)
        )
        producer = KafkaProducer(
            bootstrap_servers="localhost:{}".format(kafka_cluster.kafka_port),
            value_serializer=producer_serializer,
        )

        topic_list = []
        topic_list.append(
            NewTopic(name="test_issue26643", num_partitions=4, replication_factor=1)
        )
        admin_client.create_topics(new_topics=topic_list, validate_only=False)

        msg = message_with_repeated_pb2.Message(
            tnow=1629000000,
            server="server1",
            clien="host1",
            sPort=443,
            cPort=50000,
            r=[
                message_with_repeated_pb2.dd(
                    name="1", type=444, ttl=123123, data=b"adsfasd"
                ),
                message_with_repeated_pb2.dd(name="2"),
            ],
            method="GET",
        )

        data = b""
        serialized_msg = msg.SerializeToString()
        data = data + _VarintBytes(len(serialized_msg)) + serialized_msg

        msg = message_with_repeated_pb2.Message(tnow=1629000002)

        serialized_msg = msg.SerializeToString()
        data = data + _VarintBytes(len(serialized_msg)) + serialized_msg

        producer.send(topic="test_issue26643", value=data)

        data = _VarintBytes(len(serialized_msg)) + serialized_msg
        producer.send(topic="test_issue26643", value=data)
        producer.flush()

>       instance.query(
            """
            CREATE TABLE IF NOT EXISTS test.test_queue
            (
                `tnow` UInt32,
                `server` String,
                `client` String,
                `sPort` UInt16,
                `cPort` UInt16,
                `r.name` Array(String),
                `r.class` Array(UInt16),
                `r.type` Array(UInt16),
                `r.ttl` Array(UInt32),
                `r.data` Array(String),
                `method` String
            )
            ENGINE = Kafka
            SETTINGS
                kafka_broker_list = 'kafka1:19092',
                kafka_topic_list = 'test_issue26643',
                kafka_group_name = 'test_issue26643_group',
                kafka_format = 'Protobuf',
                kafka_schema = 'message_with_repeated.proto:Message',
                kafka_num_consumers = 4,
                kafka_skip_broken_messages = 10000;

            SET allow_suspicious_low_cardinality_types=1;

            CREATE TABLE test.log
            (
                `tnow` DateTime('Asia/Istanbul') CODEC(DoubleDelta, LZ4),
                `server` LowCardinality(String),
                `client` LowCardinality(String),
                `sPort` LowCardinality(UInt16),
                `cPort` UInt16 CODEC(T64, LZ4),
                `r.name` Array(String),
                `r.class` Array(LowCardinality(UInt16)),
                `r.type` Array(LowCardinality(UInt16)),
                `r.ttl` Array(LowCardinality(UInt32)),
                `r.data` Array(String),
                `method` LowCardinality(String)
            )
            ENGINE = MergeTree
            PARTITION BY toYYYYMMDD(tnow)
            ORDER BY (tnow, server)
            TTL toDate(tnow) + toIntervalMonth(1000)
            SETTINGS index_granularity = 16384, merge_with_ttl_timeout = 7200;

            CREATE MATERIALIZED VIEW test.test_consumer TO test.log AS
            SELECT
                toDateTime(a.tnow) AS tnow,
                a.server AS server,
                a.client AS client,
                a.sPort AS sPort,
                a.cPort AS cPort,
                a.`r.name` AS `r.name`,
                a.`r.class` AS `r.class`,
                a.`r.type` AS `r.type`,
                a.`r.ttl` AS `r.ttl`,
                a.`r.data` AS `r.data`,
                a.method AS method
            FROM test.test_queue AS a;
            """
        )

test_storage_kafka/test.py:4040: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/cluster.py:2802: in query
    return self.client.query(
helpers/client.py:31: in query
    return self.get_query_request(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <helpers.client.CommandRequest object at 0x7fd924f915e0>

    def get_answer(self):
        self.process.wait()
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)

        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (self.process.returncode != 0 or stderr) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 36, stderr: Received exception from server (version 22.3.15):
E           Code: 36. DB::Exception: Received from 172.16.5.8:9000. DB::Exception: Number of consumers can not be bigger than 1. Stack trace:
E           
E           0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb3a9eda in /usr/bin/clickhouse
E           1. DB::Exception::Exception<unsigned int&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int&) @ 0x151e9411 in /usr/bin/clickhouse
E           2. ? @ 0x151e87c9 in /usr/bin/clickhouse
E           3. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x16037afa in /usr/bin/clickhouse
E           4. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0x158bcff9 in /usr/bin/clickhouse
E           5. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x158b7d1f in /usr/bin/clickhouse
E           6. DB::InterpreterCreateQuery::execute() @ 0x158c054b in /usr/bin/clickhouse
E           7. ? @ 0x15e1150f in /usr/bin/clickhouse
E           8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x15e0efb5 in /usr/bin/clickhouse
E           9. DB::TCPHandler::runImpl() @ 0x16787970 in /usr/bin/clickhouse
E           10. DB::TCPHandler::run() @ 0x16797519 in /usr/bin/clickhouse
E           11. Poco::Net::TCPServerConnection::start() @ 0x19a625af in /usr/bin/clickhouse
E           12. Poco::Net::TCPServerDispatcher::run() @ 0x19a64a01 in /usr/bin/clickhouse
E           13. Poco::PooledThread::run() @ 0x19c22389 in /usr/bin/clickhouse
E           14. Poco::ThreadImpl::runnableEntry(void*) @ 0x19c1f6e0 in /usr/bin/clickhouse
E           15. ? @ 0x7fcbe8f32609 in ?
E           16. clone @ 0x7fcbe8e57133 in ?
E           . (BAD_ARGUMENTS)
E           (query: CREATE TABLE IF NOT EXISTS test.test_queue
E                   (
E                       `tnow` UInt32,
E                       `server` String,
E                       `client` String,
E                       `sPort` UInt16,
E                       `cPort` UInt16,
E                       `r.name` Array(String),
E                       `r.class` Array(UInt16),
E                       `r.type` Array(UInt16),
E                       `r.ttl` Array(UInt32),
E                       `r.data` Array(String),
E                       `method` String
E                   )
E                   ENGINE = Kafka
E                   SETTINGS
E                       kafka_broker_list = 'kafka1:19092',
E                       kafka_topic_list = 'test_issue26643',
E                       kafka_group_name = 'test_issue26643_group',
E                       kafka_format = 'Protobuf',
E                       kafka_schema = 'message_with_repeated.proto:Message',
E                       kafka_num_consumers = 4,
E                       kafka_skip_broken_messages = 10000;)

helpers/client.py:187: QueryRuntimeException

Comment: Misconfiguration: only 1 CPU/hardware thread is available to ClickHouse, while the Kafka table is configured to use more consumers (DB::Exception: Number of consumers can not be bigger than 1).
Status: FAIL (OK to fail)

Stateful Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v22.3.15.34/2022-12-24T18-03-18.017/stateful/stateful_results.html

Stateless Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v22.3.15.34/2022-12-24T18-03-18.017/stateless/stateless_results.html

Test: /stateless/02381_arrow_dict_to_lc

Reason:

2022-12-22 18:21:34 --- /usr/share/clickhouse-test/queries/0_stateless/02381_arrow_dict_to_lc.reference 2022-12-22 18:18:55.750345866 +0000
2022-12-22 18:21:34 +++ /tmp/clickhouse-test/0_stateless/02381_arrow_dict_to_lc.stdout  2022-12-22 18:21:34.329591719 +0000
2022-12-22 18:21:34 @@ -1,5 +1,5 @@
2022-12-22 18:21:34  id lc_nullable lc_int_nullable bool_nullable
2022-12-22 18:21:34 -Nullable(Int64)    LowCardinality(Nullable(String))    LowCardinality(Nullable(Int64)) Nullable(UInt8)
2022-12-22 18:21:34 +Int64  LowCardinality(String)  LowCardinality(Int64)   UInt8
2022-12-22 18:21:34  1  onee    1   1
2022-12-22 18:21:34  2  twoo    2   0
2022-12-22 18:21:34  3  three   3   1
2022-12-22 18:21:34 
2022-12-22 18:21:34 
2022-12-22 18:21:34 Settings used in the test: --max_insert_threads=0 --group_by_two_level_threshold=1152921504606846976 --group_by_two_level_threshold_bytes=50000000 --distributed_aggregation_memory_efficient=0 --fsync_metadata=1 --priority=1 --output_format_parallel_formatting=0 --input_format_parallel_parsing=1
2022-12-22 18:21:34 
2022-12-22 18:21:34 Database: test_fyker2

Comment: No support for Nullable(X) and LowCardinality(Nullable(X)) in Arrow/Parquet in this build: a backported Parquet fix added a test for Nullable support that was introduced previously.
Status: FAIL (OK to fail)
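
To make the limitation concrete, a minimal round-trip sketch (not part of the test suite; assumes an interactive clickhouse-local session on this build, and the file name is arbitrary):

-- Write a LowCardinality(Nullable(String)) value to an Arrow file.
SET output_format_arrow_low_cardinality_as_dictionary = 1;

SELECT toLowCardinality(toNullable('onee')) AS lc_nullable
INTO OUTFILE 'lc_nullable.arrow'
FORMAT Arrow;

-- On an affected build this is expected to report LowCardinality(String)
-- rather than LowCardinality(Nullable(String)).
DESCRIBE TABLE file('lc_nullable.arrow', 'Arrow');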

Test: /stateless/02381_arrow_dict_of_nullable_string_to_lc

Reason:

2022-12-22 18:21:34 Code: 349. DB::Exception: Cannot convert NULL value to non-Nullable type: while converting column `lc_nullable_string` from type LowCardinality(Nullable(String)) to type LowCardinality(String): While executing ArrowBlockInputFormat: While executing File. (CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN)
2022-12-22 18:21:34 , result:
2022-12-22 18:21:34 
2022-12-22 18:21:34 
2022-12-22 18:21:34 
2022-12-22 18:21:34 stdout:
2022-12-22 18:21:34 
2022-12-22 18:21:34 
2022-12-22 18:21:34 Settings used in the test: --max_insert_threads=12 --group_by_two_level_threshold=1 --group_by_two_level_threshold_bytes=1 --distributed_aggregation_memory_efficient=0 --fsync_metadata=1 --priority=3 --output_format_parallel_formatting=1 --input_format_parallel_parsing=0
2022-12-22 18:21:34 
2022-12-22 18:21:34 Database: test_4v6zzu

Comment: No support for Nullable(X) and LowCardinality(Nullable(X)) in Arrow/Parquet in this build: a backported Parquet fix added a test for Nullable support that was introduced previously.
Status: FAIL (OK to fail)

Test: /stateless/02149_read_in_order_fixed_prefix

Reason:

2022-12-22 18:22:08 --- /usr/share/clickhouse-test/queries/0_stateless/02149_read_in_order_fixed_prefix.reference   2022-12-22 18:18:55.738339866 +0000
2022-12-22 18:22:08 +++ /tmp/clickhouse-test/0_stateless/02149_read_in_order_fixed_prefix.stdout    2022-12-22 18:22:08.242538830 +0000
2022-12-22 18:22:08 @@ -29,10 +29,8 @@
2022-12-22 18:22:08        ExpressionTransform × 2
2022-12-22 18:22:08          (SettingQuotaAndLimits)
2022-12-22 18:22:08            (ReadFromMergeTree)
2022-12-22 18:22:08 -          ReverseTransform
2022-12-22 18:22:08 -            MergeTreeReverse 01
2022-12-22 18:22:08 -              ReverseTransform
2022-12-22 18:22:08 -                MergeTreeReverse 01
2022-12-22 18:22:08 +          ReverseTransform × 2
2022-12-22 18:22:08 +            MergeTreeReverse × 2 01
2022-12-22 18:22:08  2020-10-01 9
2022-12-22 18:22:08  2020-10-01 9
2022-12-22 18:22:08  2020-10-01 9
2022-12-22 18:22:08 
2022-12-22 18:22:08 
2022-12-22 18:22:08 Settings used in the test: --max_insert_threads=0 --group_by_two_level_threshold=100000 --group_by_two_level_threshold_bytes=50000000 --distributed_aggregation_memory_efficient=0 --fsync_metadata=0 --priority=0 --output_format_parallel_formatting=0 --input_format_parallel_parsing=1
2022-12-22 18:22:08 
2022-12-22 18:22:08 Database: test_21cabz

Comment: Not enough threads available to ClickHouse on the test runner.
Status: FAIL (OK to fail)
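
A quick check of the runner limitation, not part of the test itself (assumes a client session against the same instance):

-- With the default max_threads = 'auto', the value reflects how many CPU
-- cores ClickHouse detects on this runner.
SELECT getSetting('max_threads') AS max_threads;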

Test: /stateless/01701_parallel_parsing_infinite_segmentation

Reason:

2022-12-22 18:24:26 --- /usr/share/clickhouse-test/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.reference   2022-12-22 18:18:55.722331867 +0000
2022-12-22 18:24:26 +++ /tmp/clickhouse-test/0_stateless/01701_parallel_parsing_infinite_segmentation.stdout    2022-12-22 18:24:26.611685179 +0000
2022-12-22 18:24:26 @@ -1 +1 @@
2022-12-22 18:24:26 -Ok.
2022-12-22 18:24:26 +FAIL
2022-12-22 18:24:26 
2022-12-22 18:24:26 
2022-12-22 18:24:26 Settings used in the test: --max_insert_threads=0 --group_by_two_level_threshold=100000 --group_by_two_level_threshold_bytes=50000000 --distributed_aggregation_memory_efficient=0 --fsync_metadata=0 --priority=0 --output_format_parallel_formatting=0 --input_format_parallel_parsing=0
2022-12-22 18:24:26 
2022-12-22 18:24:26 Database: test_lsmfu7

Comment: Not enough threads available to ClickHouse on the test runner.
Status: FAIL (OK to fail)

Test: /stateless/01532_primary_key_without_order_by_zookeeper

Reason:

2022-12-22 18:25:04 --- /usr/share/clickhouse-test/queries/0_stateless/01532_primary_key_without_order_by_zookeeper.reference   2022-12-22 18:18:55.706323866 +0000
2022-12-22 18:25:04 +++ /tmp/clickhouse-test/0_stateless/01532_primary_key_without_order_by_zookeeper.stdout    2022-12-22 18:25:04.650694186 +0000
2022-12-22 18:25:04 @@ -9,8 +9,8 @@
2022-12-22 18:25:04  1  c
2022-12-22 18:25:04  2  b
2022-12-22 18:25:04  1  c   0
2022-12-22 18:25:04 -2  e   555
2022-12-22 18:25:04  2  b   0
2022-12-22 18:25:04 +2  e   555
2022-12-22 18:25:04  CREATE TABLE default.merge_tree_pk_sql\n(\n    `key` UInt64,\n    `value` String,\n    `key2` UInt64\n)\nENGINE = ReplacingMergeTree\nPRIMARY KEY key\nORDER BY (key, key2)\nSETTINGS index_granularity = 8192
2022-12-22 18:25:04  CREATE TABLE default.replicated_merge_tree_pk_sql\n(\n    `key` UInt64,\n    `value` String\n)\nENGINE = ReplicatedReplacingMergeTree(\'/clickhouse/test/01532_primary_key_without\', \'r1\')\nPRIMARY KEY key\nORDER BY key\nSETTINGS index_granularity = 8192
2022-12-22 18:25:04  1  a
2022-12-22 18:25:04 @@ -18,6 +18,6 @@
2022-12-22 18:25:04  1  c
2022-12-22 18:25:04  2  b
2022-12-22 18:25:04  1  c   0
2022-12-22 18:25:04 -2  e   555
2022-12-22 18:25:04  2  b   0
2022-12-22 18:25:04 +2  e   555
2022-12-22 18:25:04  CREATE TABLE default.replicated_merge_tree_pk_sql\n(\n    `key` UInt64,\n    `value` String,\n    `key2` UInt64\n)\nENGINE = ReplicatedReplacingMergeTree(\'/clickhouse/test/01532_primary_key_without\', \'r1\')\nPRIMARY KEY key\nORDER BY (key, key2)\nSETTINGS index_granularity = 8192
2022-12-22 18:25:04 
2022-12-22 18:25:04 
2022-12-22 18:25:04 Settings used in the test: --max_insert_threads=0 --group_by_two_level_threshold=100000 --group_by_two_level_threshold_bytes=1152921504606846976 --distributed_aggregation_memory_efficient=1 --fsync_metadata=1 --priority=0 --output_format_parallel_formatting=0 --input_format_parallel_parsing=0
2022-12-22 18:25:04 
2022-12-22 18:25:04 Database: test_1wjwt1

Comment: Minor: rows are returned in a different order because the compared column is not part of ORDER BY, so the result order is not deterministic.
Status: FAIL (OK to fail)
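
An illustrative workaround sketch, not a proposed test change (table and column names taken from the CREATE statements in the log above):

-- Ordering explicitly by every selected column makes the result order
-- deterministic, which is what the reference output implicitly relies on.
SELECT key, value, key2
FROM default.merge_tree_pk_sql
ORDER BY key, value, key2;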

Test: /stateless/01524_do_not_merge_across_partitions_select_final

Reason:

2022-12-22 18:25:06 --- /usr/share/clickhouse-test/queries/0_stateless/01524_do_not_merge_across_partitions_select_final.reference  2022-12-22 18:18:55.706323866 +0000
2022-12-22 18:25:06 +++ /tmp/clickhouse-test/0_stateless/01524_do_not_merge_across_partitions_select_final.stdout   2022-12-22 18:25:06.699718129 +0000
2022-12-22 18:25:06 @@ -6,4 +6,4 @@
2022-12-22 18:25:06  2020-01-01 00:00:00    2   
2022-12-22 18:25:06  1
2022-12-22 18:25:06  499999
2022-12-22 18:25:06 -5
2022-12-22 18:25:06 +2
2022-12-22 18:25:06 
2022-12-22 18:25:06 
2022-12-22 18:25:06 Settings used in the test: --max_insert_threads=0 --group_by_two_level_threshold=100000 --group_by_two_level_threshold_bytes=1152921504606846976 --distributed_aggregation_memory_efficient=1 --fsync_metadata=0 --priority=2 --output_format_parallel_formatting=1 --input_format_parallel_parsing=1
2022-12-22 18:25:06 
2022-12-22 18:25:06 Database: test_x1g19d

Comment: Depends on the number of threads available to the system. Expects 5, but only 2 were used (same configuration issue as the other thread-related failures).
Status: FAIL (OK to fail)

Test: /stateless/01275_parallel_mv

Reason:

2022-12-22 18:26:01 --- /usr/share/clickhouse-test/queries/0_stateless/01275_parallel_mv.reference  2022-12-22 18:18:55.698319867 +0000
2022-12-22 18:26:01 +++ /tmp/clickhouse-test/0_stateless/01275_parallel_mv.gen.stdout   2022-12-22 18:26:01.691198678 +0000
2022-12-22 18:26:01 @@ -113,7 +113,7 @@
2022-12-22 18:26:01      Settings['parallel_view_processing'] = '1' and
2022-12-22 18:26:01      Settings['optimize_trivial_insert_select'] = '0' and
2022-12-22 18:26:01      Settings['max_insert_threads'] = '0';
2022-12-22 18:26:01 -5
2022-12-22 18:26:01 +2
2022-12-22 18:26:01  select count() from testX;
2022-12-22 18:26:01  50
2022-12-22 18:26:01  select count() from testXA;
2022-12-22 18:26:01 @@ -137,7 +137,7 @@
2022-12-22 18:26:01      Settings['parallel_view_processing'] = '1' and
2022-12-22 18:26:01      Settings['optimize_trivial_insert_select'] = '0' and
2022-12-22 18:26:01      Settings['max_insert_threads'] = '16';
2022-12-22 18:26:01 -5
2022-12-22 18:26:01 +2
2022-12-22 18:26:01  select count() from testX;
2022-12-22 18:26:01  60
2022-12-22 18:26:01  select count() from testXA;
2022-12-22 18:26:01 @@ -161,7 +161,7 @@
2022-12-22 18:26:01      Settings['parallel_view_processing'] = '1' and
2022-12-22 18:26:01      Settings['optimize_trivial_insert_select'] = '1' and
2022-12-22 18:26:01      Settings['max_insert_threads'] = '0';
2022-12-22 18:26:01 -5
2022-12-22 18:26:01 +2
2022-12-22 18:26:01  select count() from testX;
2022-12-22 18:26:01  70
2022-12-22 18:26:01  select count() from testXA;
2022-12-22 18:26:01 @@ -185,7 +185,7 @@
2022-12-22 18:26:01      Settings['parallel_view_processing'] = '1' and
2022-12-22 18:26:01      Settings['optimize_trivial_insert_select'] = '1' and
2022-12-22 18:26:01      Settings['max_insert_threads'] = '16';
2022-12-22 18:26:01 -5
2022-12-22 18:26:01 +2
2022-12-22 18:26:01  select count() from testX;
2022-12-22 18:26:01  80
2022-12-22 18:26:01  select count() from testXA;
2022-12-22 18:26:01 
2022-12-22 18:26:01 
2022-12-22 18:26:01 Settings used in the test: --max_insert_threads=0 --group_by_two_level_threshold=100000 --group_by_two_level_threshold_bytes=1 --distributed_aggregation_memory_efficient=0 --fsync_metadata=0 --priority=3 --output_format_parallel_formatting=0 --input_format_parallel_parsing=1
2022-12-22 18:26:01 
2022-12-22 18:26:01 Database: test_ujxovv

Comment: Not enough available threads/CPUs for ClickHouse.
Status: FAIL (OK to fail)
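
A hedged way to inspect what the test is comparing, not taken from the test code (assumes query_log is enabled; the table name testX and the LIKE pattern are taken from the output above and are illustrative):

-- Count the threads the most recent matching INSERT actually used.
SELECT length(thread_ids) AS threads_used
FROM system.query_log
WHERE type = 'QueryFinish'
  AND query LIKE 'insert into testX%'
ORDER BY event_time DESC
LIMIT 1;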

Test: /stateless/01091_num_threads

Reason:

2022-12-22 18:26:19 --- /usr/share/clickhouse-test/queries/0_stateless/01091_num_threads.reference  2022-12-22 18:18:55.690315867 +0000
2022-12-22 18:26:19 +++ /tmp/clickhouse-test/0_stateless/01091_num_threads.stdout   2022-12-22 18:26:19.512104213 +0000
2022-12-22 18:26:19 @@ -3,4 +3,4 @@
2022-12-22 18:26:19  499999500000
2022-12-22 18:26:19  1
2022-12-22 18:26:19  499999500000
2022-12-22 18:26:19 -1
2022-12-22 18:26:19 +0
2022-12-22 18:26:19 
2022-12-22 18:26:19 
2022-12-22 18:26:19 Settings used in the test: --max_insert_threads=4 --group_by_two_level_threshold=100000 --group_by_two_level_threshold_bytes=50000000 --distributed_aggregation_memory_efficient=1 --fsync_metadata=0 --priority=1 --output_format_parallel_formatting=0 --input_format_parallel_parsing=1
2022-12-22 18:26:19 
2022-12-22 18:26:19 Database: test_434d7p

Comment: Not enough available threads/CPUs for ClickHouse.
Status: FAIL (OK to fail)

Test: /stateless/01193_metadata_loading

Reason:

2022-12-22 18:44:11 --- /usr/share/clickhouse-test/queries/0_stateless/01193_metadata_loading.reference 2022-12-22 18:18:55.694317867 +0000
2022-12-22 18:44:11 +++ /tmp/clickhouse-test/0_stateless/01193_metadata_loading.stdout  2022-12-22 18:44:11.143624150 +0000
2022-12-22 18:44:11 @@ -1,5 +1,5 @@
2022-12-22 18:44:11  1000   0   2020-06-25  hello   [1,2]   [3,4]
2022-12-22 18:44:11  1000   1   2020-06-26  word    [10,20] [30,40]
2022-12-22 18:44:11 -ok
2022-12-22 18:44:11 +[4399,4137,4191,4274,4242]
2022-12-22 18:44:11  8000   0   2020-06-25  hello   [1,2]   [3,4]
2022-12-22 18:44:11  8000   1   2020-06-26  word    [10,20] [30,40]
2022-12-22 18:44:11 
2022-12-22 18:44:11 
2022-12-22 18:44:11 Settings used in the test: --max_insert_threads=9 --group_by_two_level_threshold=1 --group_by_two_level_threshold_bytes=50000000 --distributed_aggregation_memory_efficient=0 --fsync_metadata=0 --priority=0 --output_format_parallel_formatting=0 --input_format_parallel_parsing=0
2022-12-22 18:44:11 
2022-12-22 18:44:11 Database: test_ueuirg

Comment: Minor: known failure that depends on execution speed; the test runner hardware differs from the upstream CI/CD runners.
Status: FAIL (OK to fail)

TestFlows Results

Skipped:
- Base85
Reason: Only supported on ClickHouse version >= 22.7.
- Lightweight Delete
Reason: Only supported on ClickHouse version >= 22.8.

Passed:
- AES Encryption
- Aggregate Functions
- ClickHouse Keeper
- DateTime64 Extended Range
- Disk Level Encryption
- Example
- Extended Precision Data Types
- Kafka
- Kerberos
- LDAP
- Map Type
- Part Moves Between Shards
- RBAC
- S3 AWS
- S3 GCS
- SSL Server
- Tiered Storage AWS
- Tiered Storage GCS
- Tiered Storage Minio
- Tiered Storage original
- Window Functions

Failed: