Document status - Public


QA Software Build Report

ClickHouse 22.3.10.24 / x86_64

(c) 2022 Altinity Inc. All Rights Reserved.

Approval

Status: Not approved for release

Reviewed by: azvonov@altinity.com

Date: 19 September 2022

Table of Contents

Test Results

Stage         Status
Integration   Fail
Stateful      Pass
Stateless     Fail
TestFlows     Fail

Results https://altinity-test-reports.s3.amazonaws.com/index.html#builds/stable/v22.3.10.24/2022-09-16T18-02-49.256/
Pipeline https://gitlab.com/altinity-qa/clickhouse/cicd/release/-/pipelines/642655063

Results Analysis

Integration Results

Results
https://altinity-test-reports.s3.amazonaws.com/builds/stable/v22.3.10.24/2022-09-16T18-02-49.256/integration/integration_results_1.html
https://altinity-test-reports.s3.amazonaws.com/builds/stable/v22.3.10.24/2022-09-16T18-02-49.256/integration/integration_results_2.html

Test: /integration/test_cgroup_limit/test.py::test_cgroup_cpu_limit

Reason:

[gw0] linux -- Python 3.8.10 /usr/bin/python3

    def test_cgroup_cpu_limit():
        for num_cpus in (1, 2, 4, 2.8):
            result = run_with_cpu_limit(
                "clickhouse local -q \"select value from system.settings where name='max_threads'\"",
                num_cpus,
            )
            expect_output = (r"\'auto({})\'".format(math.ceil(num_cpus))).encode()
>           assert (
                result.strip() == expect_output
            ), f"fail for cpu limit={num_cpus}, result={result.strip()}, expect={expect_output}"
E           AssertionError: fail for cpu limit=2, result=b"\\'auto(1)\\'", expect=b"\\'auto(2)\\'"
E           assert b"\\'auto(1)\\'" == b"\\'auto(2)\\'"
E             At index 7 diff: b'1' != b'2'
E             Full diff:
E             - b"\\'auto(2)\\'"
E             ?           ^
E             + b"\\'auto(1)\\'"
E             ?           ^

test_cgroup_limit/test.py:48: AssertionError

Comment: Misconfiguration: only 1 CPU/hardware thread is available to ClickHouse instead of at least 2.
Status: FAIL (OK to fail)
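
A minimal triage sketch (not part of the test suite; it assumes a Linux runner and a clickhouse binary on PATH, and reuses the query from the failing test) to confirm how many CPUs the runner exposes and what max_threads is deduced from them:

    import os
    import subprocess

    # CPUs usable by this process; respects cpuset/affinity limits on the container.
    print("CPUs visible to the process:", len(os.sched_getaffinity(0)))

    # Same query the failing test runs via clickhouse local.
    out = subprocess.check_output(
        ["clickhouse", "local", "-q",
         "select value from system.settings where name='max_threads'"]
    )
    # The test expects 'auto(ceil(cpu_limit))', e.g. 'auto(2)' under a 2-CPU limit.
    print("max_threads:", out.decode().strip())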

Test: /integration/test_http_handlers_config/test.py::test_predefined_query_handler

Reason:

[gw0] linux -- Python 3.8.10 /usr/bin/python3

    def test_predefined_query_handler():
        with contextlib.closing(
            SimpleCluster(
                ClickHouseCluster(__file__), "predefined_handler", "test_predefined_handler"
            )
        ) as cluster:
            assert (
                404
                == cluster.instance.http_request(
                    "?max_threads=1", method="GET", headers={"XXX": "xxx"}
                ).status_code
            )

            assert (
                404
                == cluster.instance.http_request(
                    "test_predefined_handler_get?max_threads=1",
                    method="GET",
                    headers={"XXX": "bad"},
                ).status_code
            )

            assert (
                404
                == cluster.instance.http_request(
                    "test_predefined_handler_get?max_threads=1",
                    method="POST",
                    headers={"XXX": "xxx"},
                ).status_code
            )

            assert (
                500
                == cluster.instance.http_request(
                    "test_predefined_handler_get?max_threads=1",
                    method="GET",
                    headers={"XXX": "xxx"},
                ).status_code
            )

>           assert (
                b"max_threads\t1\n"
                == cluster.instance.http_request(
                    "test_predefined_handler_get?max_threads=1&setting_name=max_threads",
                    method="GET",
                    headers={"XXX": "xxx"},
                ).content
            )
E           assert b'max_threads\t1\n' == b"max_threads\t\\'auto(1)\\'\n"
E             At index 12 diff: b'1' != b'\\'
E             Full diff:
E             - b"max_threads\t\\'auto(1)\\'\n"
E             + b'max_threads\t1\n'

test_http_handlers_config/test.py:119: AssertionError

Comment: Minor: the number of threads was not explicitly set to 1 but was implicitly deduced as 'auto(1)'. Could be due to a misconfiguration in the test environment combined with a minor issue in ClickHouse.
Status: FAIL (OK to fail)
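
A manual reproduction sketch (the handler path, query parameters and the XXX header come from the test; localhost and port 8123 are assumptions about a reachable instance running the same predefined_handler configuration):

    import requests

    resp = requests.get(
        "http://localhost:8123/test_predefined_handler_get",
        params={"max_threads": "1", "setting_name": "max_threads"},
        headers={"XXX": "xxx"},
    )
    # Expected by the test: b'max_threads\t1\n'
    # Observed in this run:  b"max_threads\t\\'auto(1)\\'\n"
    print(resp.status_code, resp.content)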

Test: /integration/test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node0]

Reason:

[gw0] linux -- Python 3.8.10 /usr/bin/python3

started_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f2f1afd7ac0>
started_mysql_8_0 = <test_materialized_mysql_database.test.MySQLConnection object at 0x7f2f1aeb0d90>
started_mysql_5_7 = <test_materialized_mysql_database.test.MySQLConnection object at 0x7f2f1aeb09d0>
clickhouse_node = <helpers.cluster.ClickHouseInstance object at 0x7f2f1afd7a30>

    @pytest.mark.parametrize(
        ("clickhouse_node"), [node_disable_bytes_settings, node_disable_rows_settings]
    )
    def test_mysql_settings(
        started_cluster, started_mysql_8_0, started_mysql_5_7, clickhouse_node
    ):
>       materialize_with_ddl.mysql_settings_test(
            clickhouse_node, started_mysql_5_7, "mysql57"
        )

test_materialized_mysql_database/test.py:448:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

clickhouse_node = <helpers.cluster.ClickHouseInstance object at 0x7f2f1afd7a30>
mysql_node = <test_materialized_mysql_database.test.MySQLConnection object at 0x7f2f1aeb09d0>
service_name = 'mysql57'

    def mysql_settings_test(clickhouse_node, mysql_node, service_name):
        mysql_node.query("DROP DATABASE IF EXISTS test_database")
        clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
        mysql_node.query("CREATE DATABASE test_database")
        mysql_node.query(
            "CREATE TABLE test_database.a (id INT(11) NOT NULL PRIMARY KEY, value VARCHAR(255))"
        )
        mysql_node.query("INSERT INTO test_database.a VALUES(1, 'foo')")
        mysql_node.query("INSERT INTO test_database.a VALUES(2, 'bar')")

        clickhouse_node.query(
            "CREATE DATABASE test_database ENGINE = MaterializedMySQL('{}:3306', 'test_database', 'root', 'clickhouse')".format(
                service_name
            )
        )
        check_query(
            clickhouse_node, "SELECT COUNT() FROM test_database.a FORMAT TSV", "2\n"
        )

>       assert (
            clickhouse_node.query(
                "SELECT COUNT(DISTINCT  blockNumber()) FROM test_database.a FORMAT TSV"
            )
            == "2\n"
        )
E       AssertionError

test_materialized_mysql_database/materialize_with_ddl.py:1795: AssertionError

Comment:
Status: FAIL
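
A triage sketch for this failure (it assumes the MaterializedMySQL database created by the test is still reachable and that clickhouse-client is on PATH; the query is taken from the failing assertion):

    import subprocess

    query = "SELECT COUNT(DISTINCT blockNumber()) FROM test_database.a FORMAT TSV"
    out = subprocess.check_output(["clickhouse-client", "-q", query])
    # The test expects '2' (one block per replicated row);
    # '1' means both rows arrived in a single block.
    print(out.decode().strip())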

Test: /integration/test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node1]

Reason:

[gw0] linux -- Python 3.8.10 /usr/bin/python3

started_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f2f1afd7ac0>
started_mysql_8_0 = <test_materialized_mysql_database.test.MySQLConnection object at 0x7f2f1aeb0d90>
started_mysql_5_7 = <test_materialized_mysql_database.test.MySQLConnection object at 0x7f2f1aeb09d0>
clickhouse_node = <helpers.cluster.ClickHouseInstance object at 0x7f2f1afd7370>

    @pytest.mark.parametrize(
        ("clickhouse_node"), [node_disable_bytes_settings, node_disable_rows_settings]
    )
    def test_mysql_settings(
        started_cluster, started_mysql_8_0, started_mysql_5_7, clickhouse_node
    ):
>       materialize_with_ddl.mysql_settings_test(
            clickhouse_node, started_mysql_5_7, "mysql57"
        )

test_materialized_mysql_database/test.py:448:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

clickhouse_node = <helpers.cluster.ClickHouseInstance object at 0x7f2f1afd7370>
mysql_node = <test_materialized_mysql_database.test.MySQLConnection object at 0x7f2f1aeb09d0>
service_name = 'mysql57'

    def mysql_settings_test(clickhouse_node, mysql_node, service_name):
        mysql_node.query("DROP DATABASE IF EXISTS test_database")
        clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
        mysql_node.query("CREATE DATABASE test_database")
        mysql_node.query(
            "CREATE TABLE test_database.a (id INT(11) NOT NULL PRIMARY KEY, value VARCHAR(255))"
        )
        mysql_node.query("INSERT INTO test_database.a VALUES(1, 'foo')")
        mysql_node.query("INSERT INTO test_database.a VALUES(2, 'bar')")

        clickhouse_node.query(
            "CREATE DATABASE test_database ENGINE = MaterializedMySQL('{}:3306', 'test_database', 'root', 'clickhouse')".format(
                service_name
            )
        )
        check_query(
            clickhouse_node, "SELECT COUNT() FROM test_database.a FORMAT TSV", "2\n"
        )

>       assert (
            clickhouse_node.query(
                "SELECT COUNT(DISTINCT  blockNumber()) FROM test_database.a FORMAT TSV"
            )
            == "2\n"
        )
E       AssertionError

test_materialized_mysql_database/materialize_with_ddl.py:1795: AssertionError

Comment:
Status: FAIL

Test: /integration/test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication

Reason:

[gw4] linux -- Python 3.8.10 /usr/bin/python3

started_cluster = <helpers.cluster.ClickHouseCluster object at 0x7fbe88563160>

    def test_abrupt_connection_loss_while_heavy_replication(started_cluster):
        def transaction(thread_id):
            if thread_id % 2:
                conn = get_postgres_conn(
                    ip=started_cluster.postgres_ip,
                    port=started_cluster.postgres_port,
                    database=True,
                    auto_commit=True,
                )
            else:
                conn = get_postgres_conn(
                    ip=started_cluster.postgres_ip,
                    port=started_cluster.postgres_port,
                    database=True,
                    auto_commit=False,
                )
            cursor = conn.cursor()
            for query in queries:
                cursor.execute(query.format(thread_id))
                print("thread {}, query {}".format(thread_id, query))
            if thread_id % 2 == 0:
                conn.commit()

        NUM_TABLES = 6
        pg_manager.create_and_fill_postgres_tables(NUM_TABLES, numbers=0)

        threads_num = 6
        threads = []
        for i in range(threads_num):
            threads.append(threading.Thread(target=transaction, args=(i,)))

        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )

        for thread in threads:
            time.sleep(random.uniform(0, 0.5))
            thread.start()

        for thread in threads:
            thread.join()  # Join here because it takes time for data to reach wal

        time.sleep(2)
        started_cluster.pause_container("postgres1")

        # for i in range(NUM_TABLES):
        #     result = instance.query(f"SELECT count() FROM test_database.postgresql_replica_{i}")
        #     print(result) # Just debug

        started_cluster.unpause_container("postgres1")
>       check_several_tables_are_synchronized(instance, NUM_TABLES)

test_postgresql_replica_database_engine_1/test.py:818:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:342: in check_several_tables_are_synchronized
    check_tables_are_synchronized(instance, f"postgresql_replica_{i}")
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

instance = <helpers.cluster.ClickHouseInstance object at 0x7fbe88563700>
table_name = 'postgresql_replica_0', order_by = 'key'
postgres_database = 'postgres_database', materialized_database = 'test_database'
schema_name = ''

    def check_tables_are_synchronized(
        instance,
        table_name,
        order_by="key",
        postgres_database="postgres_database",
        materialized_database="test_database",
        schema_name="",
    ):
        assert_nested_table_is_created(
            instance, table_name, materialized_database, schema_name
        )

        table_path = ""
        if len(schema_name) == 0:
            table_path = f"{materialized_database}.{table_name}"
        else:
            table_path = f"{materialized_database}.`{schema_name}.{table_name}`"

        print(f"Checking table is synchronized: {table_path}")
        result_query = f"select * from {table_path} order by {order_by};"

        expected = instance.query(
            f"select * from {postgres_database}.{table_name} order by {order_by};"
        )
        result = instance.query(result_query)

        for _ in range(30):
            if result == expected:
                break
            else:
                time.sleep(0.5)
            result = instance.query(result_query)

>       assert result == expected
E       AssertionError

helpers/postgres_utility.py:330: AssertionError

Comment:
Status: FAIL
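
A triage sketch for the unsynchronized table (database and table names are taken from the traceback; a locally reachable clickhouse-client is an assumption):

    import subprocess

    def ch(query: str) -> str:
        return subprocess.check_output(["clickhouse-client", "-q", query]).decode()

    # Compare the PostgreSQL side (mapped as postgres_database) with the
    # materialized side for the first table reported as out of sync.
    expected = ch("select * from postgres_database.postgresql_replica_0 order by key;")
    result = ch("select * from test_database.postgresql_replica_0 order by key;")
    print("synchronized" if result == expected else "still diverged")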

Test: /integration/test_storage_kafka/test.py::test_kafka_virtual_columns2

Reason:

[gw0] linux -- Python 3.8.10 /usr/bin/python3

kafka_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f26aa8dbfa0>

    def test_kafka_virtual_columns2(kafka_cluster):
        admin_client = KafkaAdminClient(
            bootstrap_servers="localhost:{}".format(kafka_cluster.kafka_port)
        )

        topic_config = {
            # default retention, since predefined timestamp_ms is used.
            "retention.ms": "-1",
        }
        kafka_create_topic(admin_client, "virt2_0", num_partitions=2, config=topic_config)
        kafka_create_topic(admin_client, "virt2_1", num_partitions=2, config=topic_config)

>       instance.query(
            """
            CREATE TABLE test.kafka (value UInt64)
                ENGINE = Kafka
                SETTINGS kafka_broker_list = 'kafka1:19092',
                         kafka_topic_list = 'virt2_0,virt2_1',
                         kafka_group_name = 'virt2',
                         kafka_num_consumers = 2,
                         kafka_format = 'JSONEachRow';

            CREATE MATERIALIZED VIEW test.view Engine=Log AS
            SELECT value, _key, _topic, _partition, _offset, toUnixTimestamp(_timestamp), toUnixTimestamp64Milli(_timestamp_ms), _headers.name, _headers.value FROM test.kafka;
            """
        )

test_storage_kafka/test.py:2150:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:2802: in query
    return self.client.query(
helpers/client.py:31: in query
    return self.get_query_request(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <helpers.client.CommandRequest object at 0x7f26a842bf70>

    def get_answer(self):
        self.process.wait()
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)

        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (self.process.returncode != 0 or stderr) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 36, stderr: Received exception from server (version 22.3.10):
E           Code: 36. DB::Exception: Received from 172.16.1.8:9000. DB::Exception: Number of consumers can not be bigger than 1. Stack trace:
E           
E           0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb3763ba in /usr/bin/clickhouse
E           1. DB::Exception::Exception<unsigned int&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int&) @ 0x15302851 in /usr/bin/clickhouse
E           2. ? @ 0x15301c09 in /usr/bin/clickhouse
E           3. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x15f59fba in /usr/bin/clickhouse
E           4. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0x159fe1b9 in /usr/bin/clickhouse
E           5. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x159f8edf in /usr/bin/clickhouse
E           6. DB::InterpreterCreateQuery::execute() @ 0x15a0170b in /usr/bin/clickhouse
E           7. ? @ 0x15d38a2f in /usr/bin/clickhouse
E           8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x15d364d5 in /usr/bin/clickhouse
E           9. DB::TCPHandler::runImpl() @ 0x168c133a in /usr/bin/clickhouse
E           10. DB::TCPHandler::run() @ 0x168d0c99 in /usr/bin/clickhouse
E           11. Poco::Net::TCPServerConnection::start() @ 0x19b9314f in /usr/bin/clickhouse
E           12. Poco::Net::TCPServerDispatcher::run() @ 0x19b955a1 in /usr/bin/clickhouse
E           13. Poco::PooledThread::run() @ 0x19d52569 in /usr/bin/clickhouse
E           14. Poco::ThreadImpl::runnableEntry(void*) @ 0x19d4f8c0 in /usr/bin/clickhouse
E           15. ? @ 0x7f6a3f41b609 in ?
E           16. clone @ 0x7f6a3f340133 in ?
E           . (BAD_ARGUMENTS)
E           (query: CREATE TABLE test.kafka (value UInt64)
E                       ENGINE = Kafka
E                       SETTINGS kafka_broker_list = 'kafka1:19092',
E                                kafka_topic_list = 'virt2_0,virt2_1',
E                                kafka_group_name = 'virt2',
E                                kafka_num_consumers = 2,
E                                kafka_format = 'JSONEachRow';)

helpers/client.py:187: QueryRuntimeException

Comment: Misconfiguration: only 1 CPU/thread is available to ClickHouse, while the Kafka engine is configured to use more than one consumer (DB::Exception: Number of consumers can not be bigger than 1).
Status: FAIL (OK to fail).
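
All Kafka failures below share this cause. A sketch of a pre-flight skip marker (an assumption, not part of the upstream suite) that would make the limitation explicit on under-provisioned runners instead of producing FAIL (OK to fail) entries:

    import os

    import pytest

    def _cpus_visible_to_runner() -> int:
        # Respects cpuset/affinity limits imposed on the test container.
        return len(os.sched_getaffinity(0))

    # The smallest consumer count used by the failing tests is 2.
    requires_multiple_cpus = pytest.mark.skipif(
        _cpus_visible_to_runner() < 2,
        reason="kafka_num_consumers > 1 is rejected when the server sees one CPU",
    )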

Test: /integration/test_storage_kafka/test.py::test_kafka_consumer_hang

Reason:

[gw0] linux -- Python 3.8.10 /usr/bin/python3

kafka_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f26aa8dbfa0>

    def test_kafka_consumer_hang(kafka_cluster):
        admin_client = KafkaAdminClient(
            bootstrap_servers="localhost:{}".format(kafka_cluster.kafka_port)
        )

        topic_name = "consumer_hang"
        kafka_create_topic(admin_client, topic_name, num_partitions=8)

>       instance.query(
            f"""
            DROP TABLE IF EXISTS test.kafka;
            DROP TABLE IF EXISTS test.view;
            DROP TABLE IF EXISTS test.consumer;

            CREATE TABLE test.kafka (key UInt64, value UInt64)
                ENGINE = Kafka
                SETTINGS kafka_broker_list = 'kafka1:19092',
                         kafka_topic_list = '{topic_name}',
                         kafka_group_name = '{topic_name}',
                         kafka_format = 'JSONEachRow',
                         kafka_num_consumers = 8;
            CREATE TABLE test.view (key UInt64, value UInt64) ENGINE = Memory();
            CREATE MATERIALIZED VIEW test.consumer TO test.view AS SELECT * FROM test.kafka;
            """
        )

test_storage_kafka/test.py:1016:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:2802: in query
    return self.client.query(
helpers/client.py:31: in query
    return self.get_query_request(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <helpers.client.CommandRequest object at 0x7f26a8a594f0>

    def get_answer(self):
        self.process.wait()
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)

        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (self.process.returncode != 0 or stderr) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 36, stderr: Received exception from server (version 22.3.10):
E           Code: 36. DB::Exception: Received from 172.16.1.8:9000. DB::Exception: Number of consumers can not be bigger than 1. Stack trace:
E           
E           0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb3763ba in /usr/bin/clickhouse
E           1. DB::Exception::Exception<unsigned int&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int&) @ 0x15302851 in /usr/bin/clickhouse
E           2. ? @ 0x15301c09 in /usr/bin/clickhouse
E           3. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x15f59fba in /usr/bin/clickhouse
E           4. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0x159fe1b9 in /usr/bin/clickhouse
E           5. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x159f8edf in /usr/bin/clickhouse
E           6. DB::InterpreterCreateQuery::execute() @ 0x15a0170b in /usr/bin/clickhouse
E           7. ? @ 0x15d38a2f in /usr/bin/clickhouse
E           8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x15d364d5 in /usr/bin/clickhouse
E           9. DB::TCPHandler::runImpl() @ 0x168c133a in /usr/bin/clickhouse
E           10. DB::TCPHandler::run() @ 0x168d0c99 in /usr/bin/clickhouse
E           11. Poco::Net::TCPServerConnection::start() @ 0x19b9314f in /usr/bin/clickhouse
E           12. Poco::Net::TCPServerDispatcher::run() @ 0x19b955a1 in /usr/bin/clickhouse
E           13. Poco::PooledThread::run() @ 0x19d52569 in /usr/bin/clickhouse
E           14. Poco::ThreadImpl::runnableEntry(void*) @ 0x19d4f8c0 in /usr/bin/clickhouse
E           15. ? @ 0x7f6a3f41b609 in ?
E           16. clone @ 0x7f6a3f340133 in ?
E           . (BAD_ARGUMENTS)
E           (query: CREATE TABLE test.kafka (key UInt64, value UInt64)
E                       ENGINE = Kafka
E                       SETTINGS kafka_broker_list = 'kafka1:19092',
E                                kafka_topic_list = 'consumer_hang',
E                                kafka_group_name = 'consumer_hang',
E                                kafka_format = 'JSONEachRow',
E                                kafka_num_consumers = 8;)

helpers/client.py:187: QueryRuntimeException

Comment: Misconfiguration: only 1 CPU/thread is available to ClickHouse, while the Kafka engine is configured to use more than one consumer (DB::Exception: Number of consumers can not be bigger than 1).
Status: FAIL (OK to fail)

Test: /integration/test_storage_kafka/test.py::test_kafka_recreate_kafka_table

Reason:

[gw0] linux -- Python 3.8.10 /usr/bin/python3

kafka_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f26aa8dbfa0>

    def test_kafka_recreate_kafka_table(kafka_cluster):
        """
        Checks that materialized view work properly after dropping and recreating the Kafka table.
        """
        # line for backporting:
        # admin_client = KafkaAdminClient(bootstrap_servers="localhost:9092")
        admin_client = KafkaAdminClient(
            bootstrap_servers="localhost:{}".format(kafka_cluster.kafka_port)
        )

        topic_name = "recreate_kafka_table"
        kafka_create_topic(admin_client, topic_name, num_partitions=6)

>       instance.query(
            """
            DROP TABLE IF EXISTS test.view;
            DROP TABLE IF EXISTS test.consumer;
            CREATE TABLE test.kafka (key UInt64, value UInt64)
                ENGINE = Kafka
                SETTINGS kafka_broker_list = 'kafka1:19092',
                         kafka_topic_list = 'recreate_kafka_table',
                         kafka_group_name = 'recreate_kafka_table_group',
                         kafka_format = 'JSONEachRow',
                         kafka_num_consumers = 6,
                         kafka_flush_interval_ms = 1000,
                         kafka_skip_broken_messages = 1048577;

            CREATE TABLE test.view (key UInt64, value UInt64)
                ENGINE = MergeTree()
                ORDER BY key;
            CREATE MATERIALIZED VIEW test.consumer TO test.view AS
                SELECT * FROM test.kafka;
        """
        )

test_storage_kafka/test.py:1556:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:2802: in query
    return self.client.query(
helpers/client.py:31: in query
    return self.get_query_request(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <helpers.client.CommandRequest object at 0x7f26a8422cd0>

    def get_answer(self):
        self.process.wait()
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)

        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (self.process.returncode != 0 or stderr) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 36, stderr: Received exception from server (version 22.3.10):
E           Code: 36. DB::Exception: Received from 172.16.1.8:9000. DB::Exception: Number of consumers can not be bigger than 1. Stack trace:
E           
E           0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb3763ba in /usr/bin/clickhouse
E           1. DB::Exception::Exception<unsigned int&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int&) @ 0x15302851 in /usr/bin/clickhouse
E           2. ? @ 0x15301c09 in /usr/bin/clickhouse
E           3. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x15f59fba in /usr/bin/clickhouse
E           4. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0x159fe1b9 in /usr/bin/clickhouse
E           5. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x159f8edf in /usr/bin/clickhouse
E           6. DB::InterpreterCreateQuery::execute() @ 0x15a0170b in /usr/bin/clickhouse
E           7. ? @ 0x15d38a2f in /usr/bin/clickhouse
E           8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x15d364d5 in /usr/bin/clickhouse
E           9. DB::TCPHandler::runImpl() @ 0x168c133a in /usr/bin/clickhouse
E           10. DB::TCPHandler::run() @ 0x168d0c99 in /usr/bin/clickhouse
E           11. Poco::Net::TCPServerConnection::start() @ 0x19b9314f in /usr/bin/clickhouse
E           12. Poco::Net::TCPServerDispatcher::run() @ 0x19b955a1 in /usr/bin/clickhouse
E           13. Poco::PooledThread::run() @ 0x19d52569 in /usr/bin/clickhouse
E           14. Poco::ThreadImpl::runnableEntry(void*) @ 0x19d4f8c0 in /usr/bin/clickhouse
E           15. ? @ 0x7f6a3f41b609 in ?
E           16. clone @ 0x7f6a3f340133 in ?
E           . (BAD_ARGUMENTS)
E           (query: CREATE TABLE test.kafka (key UInt64, value UInt64)
E                       ENGINE = Kafka
E                       SETTINGS kafka_broker_list = 'kafka1:19092',
E                                kafka_topic_list = 'recreate_kafka_table',
E                                kafka_group_name = 'recreate_kafka_table_group',
E                                kafka_format = 'JSONEachRow',
E                                kafka_num_consumers = 6,
E                                kafka_flush_interval_ms = 1000,
E                                kafka_skip_broken_messages = 1048577;)

helpers/client.py:187: QueryRuntimeException

Comment: Misconfiguration: only 1 CPU/thread is available to ClickHouse, while the Kafka engine is configured to use more than one consumer (DB::Exception: Number of consumers can not be bigger than 1).
Status: FAIL (OK to fail)

Test: /integration/test_storage_kafka/test.py::test_issue26643

Reason:

[gw0] linux -- Python 3.8.10 /usr/bin/python3

kafka_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f26aa8dbfa0>

    def test_issue26643(kafka_cluster):

        # for backporting:
        # admin_client = KafkaAdminClient(bootstrap_servers="localhost:9092")
        admin_client = KafkaAdminClient(
            bootstrap_servers="localhost:{}".format(kafka_cluster.kafka_port)
        )
        producer = KafkaProducer(
            bootstrap_servers="localhost:{}".format(kafka_cluster.kafka_port),
            value_serializer=producer_serializer,
        )

        topic_list = []
        topic_list.append(
            NewTopic(name="test_issue26643", num_partitions=4, replication_factor=1)
        )
        admin_client.create_topics(new_topics=topic_list, validate_only=False)

        msg = message_with_repeated_pb2.Message(
            tnow=1629000000,
            server="server1",
            clien="host1",
            sPort=443,
            cPort=50000,
            r=[
                message_with_repeated_pb2.dd(
                    name="1", type=444, ttl=123123, data=b"adsfasd"
                ),
                message_with_repeated_pb2.dd(name="2"),
            ],
            method="GET",
        )

        data = b""
        serialized_msg = msg.SerializeToString()
        data = data + _VarintBytes(len(serialized_msg)) + serialized_msg

        msg = message_with_repeated_pb2.Message(tnow=1629000002)

        serialized_msg = msg.SerializeToString()
        data = data + _VarintBytes(len(serialized_msg)) + serialized_msg

        producer.send(topic="test_issue26643", value=data)

        data = _VarintBytes(len(serialized_msg)) + serialized_msg
        producer.send(topic="test_issue26643", value=data)
        producer.flush()

>       instance.query(
            """
            CREATE TABLE IF NOT EXISTS test.test_queue
            (
                `tnow` UInt32,
                `server` String,
                `client` String,
                `sPort` UInt16,
                `cPort` UInt16,
                `r.name` Array(String),
                `r.class` Array(UInt16),
                `r.type` Array(UInt16),
                `r.ttl` Array(UInt32),
                `r.data` Array(String),
                `method` String
            )
            ENGINE = Kafka
            SETTINGS
                kafka_broker_list = 'kafka1:19092',
                kafka_topic_list = 'test_issue26643',
                kafka_group_name = 'test_issue26643_group',
                kafka_format = 'Protobuf',
                kafka_schema = 'message_with_repeated.proto:Message',
                kafka_num_consumers = 4,
                kafka_skip_broken_messages = 10000;

            SET allow_suspicious_low_cardinality_types=1;

            CREATE TABLE test.log
            (
                `tnow` DateTime('Asia/Istanbul') CODEC(DoubleDelta, LZ4),
                `server` LowCardinality(String),
                `client` LowCardinality(String),
                `sPort` LowCardinality(UInt16),
                `cPort` UInt16 CODEC(T64, LZ4),
                `r.name` Array(String),
                `r.class` Array(LowCardinality(UInt16)),
                `r.type` Array(LowCardinality(UInt16)),
                `r.ttl` Array(LowCardinality(UInt32)),
                `r.data` Array(String),
                `method` LowCardinality(String)
            )
            ENGINE = MergeTree
            PARTITION BY toYYYYMMDD(tnow)
            ORDER BY (tnow, server)
            TTL toDate(tnow) + toIntervalMonth(1000)
            SETTINGS index_granularity = 16384, merge_with_ttl_timeout = 7200;

            CREATE MATERIALIZED VIEW test.test_consumer TO test.log AS
            SELECT
                toDateTime(a.tnow) AS tnow,
                a.server AS server,
                a.client AS client,
                a.sPort AS sPort,
                a.cPort AS cPort,
                a.`r.name` AS `r.name`,
                a.`r.class` AS `r.class`,
                a.`r.type` AS `r.type`,
                a.`r.ttl` AS `r.ttl`,
                a.`r.data` AS `r.data`,
                a.method AS method
            FROM test.test_queue AS a;
            """
        )

test_storage_kafka/test.py:4040:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:2802: in query
    return self.client.query(
helpers/client.py:31: in query
    return self.get_query_request(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <helpers.client.CommandRequest object at 0x7f26a8988100>

    def get_answer(self):
        self.process.wait()
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)

        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (self.process.returncode != 0 or stderr) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 36, stderr: Received exception from server (version 22.3.10):
E           Code: 36. DB::Exception: Received from 172.16.1.8:9000. DB::Exception: Number of consumers can not be bigger than 1. Stack trace:
E           
E           0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb3763ba in /usr/bin/clickhouse
E           1. DB::Exception::Exception<unsigned int&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int&) @ 0x15302851 in /usr/bin/clickhouse
E           2. ? @ 0x15301c09 in /usr/bin/clickhouse
E           3. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x15f59fba in /usr/bin/clickhouse
E           4. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0x159fe1b9 in /usr/bin/clickhouse
E           5. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x159f8edf in /usr/bin/clickhouse
E           6. DB::InterpreterCreateQuery::execute() @ 0x15a0170b in /usr/bin/clickhouse
E           7. ? @ 0x15d38a2f in /usr/bin/clickhouse
E           8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x15d364d5 in /usr/bin/clickhouse
E           9. DB::TCPHandler::runImpl() @ 0x168c133a in /usr/bin/clickhouse
E           10. DB::TCPHandler::run() @ 0x168d0c99 in /usr/bin/clickhouse
E           11. Poco::Net::TCPServerConnection::start() @ 0x19b9314f in /usr/bin/clickhouse
E           12. Poco::Net::TCPServerDispatcher::run() @ 0x19b955a1 in /usr/bin/clickhouse
E           13. Poco::PooledThread::run() @ 0x19d52569 in /usr/bin/clickhouse
E           14. Poco::ThreadImpl::runnableEntry(void*) @ 0x19d4f8c0 in /usr/bin/clickhouse
E           15. ? @ 0x7f6a3f41b609 in ?
E           16. clone @ 0x7f6a3f340133 in ?
E           . (BAD_ARGUMENTS)
E           (query: CREATE TABLE IF NOT EXISTS test.test_queue
E                   (
E                       `tnow` UInt32,
E                       `server` String,
E                       `client` String,
E                       `sPort` UInt16,
E                       `cPort` UInt16,
E                       `r.name` Array(String),
E                       `r.class` Array(UInt16),
E                       `r.type` Array(UInt16),
E                       `r.ttl` Array(UInt32),
E                       `r.data` Array(String),
E                       `method` String
E                   )
E                   ENGINE = Kafka
E                   SETTINGS
E                       kafka_broker_list = 'kafka1:19092',
E                       kafka_topic_list = 'test_issue26643',
E                       kafka_group_name = 'test_issue26643_group',
E                       kafka_format = 'Protobuf',
E                       kafka_schema = 'message_with_repeated.proto:Message',
E                       kafka_num_consumers = 4,
E                       kafka_skip_broken_messages = 10000;)

helpers/client.py:187: QueryRuntimeException

Comment: Misconfiguration: only 1 CPU/thread is available to ClickHouse, while the Kafka engine is configured to use more than one consumer (DB::Exception: Number of consumers can not be bigger than 1).
Status: FAIL (OK to fail)

Test: /integration/test_storage_kafka/test.py::test_kafka_read_consumers_in_parallel

Reason:

[gw0] linux -- Python 3.8.10 /usr/bin/python3

kafka_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f26aa8dbfa0>

    def test_kafka_read_consumers_in_parallel(kafka_cluster):
        admin_client = KafkaAdminClient(
            bootstrap_servers="localhost:{}".format(kafka_cluster.kafka_port)
        )

        topic_name = "read_consumers_in_parallel"
        kafka_create_topic(admin_client, topic_name, num_partitions=8)

        cancel = threading.Event()

        def produce():
            while not cancel.is_set():
                messages = []
                for _ in range(100):
                    messages.append(json.dumps({"key": 0, "value": 0}))
                kafka_produce(kafka_cluster, "read_consumers_in_parallel", messages)
                time.sleep(1)

        kafka_thread = threading.Thread(target=produce)
        kafka_thread.start()

        # when we have more than 1 consumer in a single table,
        # and kafka_thread_per_consumer=0
        # all the consumers should be read in parallel, not in sequence.
        # then reading in parallel 8 consumers with 1 seconds kafka_poll_timeout_ms and less than 1 sec limit
        # we should have exactly 1 poll per consumer (i.e. 8 polls) every 1 seconds (from different threads)
        # in case parallel consuming is not working we will have only 1 poll every 1 seconds (from the same thread).
>       instance.query(
            f"""
            DROP TABLE IF EXISTS test.kafka;
            DROP TABLE IF EXISTS test.view;
            DROP TABLE IF EXISTS test.consumer;

            CREATE TABLE test.kafka (key UInt64, value UInt64)
                ENGINE = Kafka
                SETTINGS kafka_broker_list = 'kafka1:19092',
                         kafka_topic_list = '{topic_name}',
                         kafka_group_name = '{topic_name}',
                         kafka_format = 'JSONEachRow',
                         kafka_num_consumers = 8,
                         kafka_thread_per_consumer = 0,
                         kafka_poll_timeout_ms = 1000,
                         kafka_flush_interval_ms = 999;
            CREATE TABLE test.view (key UInt64, value UInt64) ENGINE = Memory();
            CREATE MATERIALIZED VIEW test.consumer TO test.view AS SELECT * FROM test.kafka;
            """
        )

test_storage_kafka/test.py:1179:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:2802: in query
    return self.client.query(
helpers/client.py:31: in query
    return self.get_query_request(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <helpers.client.CommandRequest object at 0x7f26a8150700>

    def get_answer(self):
        self.process.wait()
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)

        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (self.process.returncode != 0 or stderr) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 36, stderr: Received exception from server (version 22.3.10):
E           Code: 36. DB::Exception: Received from 172.16.1.8:9000. DB::Exception: Number of consumers can not be bigger than 1. Stack trace:
E           
E           0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb3763ba in /usr/bin/clickhouse
E           1. DB::Exception::Exception<unsigned int&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int&) @ 0x15302851 in /usr/bin/clickhouse
E           2. ? @ 0x15301c09 in /usr/bin/clickhouse
E           3. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x15f59fba in /usr/bin/clickhouse
E           4. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0x159fe1b9 in /usr/bin/clickhouse
E           5. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x159f8edf in /usr/bin/clickhouse
E           6. DB::InterpreterCreateQuery::execute() @ 0x15a0170b in /usr/bin/clickhouse
E           7. ? @ 0x15d38a2f in /usr/bin/clickhouse
E           8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x15d364d5 in /usr/bin/clickhouse
E           9. DB::TCPHandler::runImpl() @ 0x168c133a in /usr/bin/clickhouse
E           10. DB::TCPHandler::run() @ 0x168d0c99 in /usr/bin/clickhouse
E           11. Poco::Net::TCPServerConnection::start() @ 0x19b9314f in /usr/bin/clickhouse
E           12. Poco::Net::TCPServerDispatcher::run() @ 0x19b955a1 in /usr/bin/clickhouse
E           13. Poco::PooledThread::run() @ 0x19d52569 in /usr/bin/clickhouse
E           14. Poco::ThreadImpl::runnableEntry(void*) @ 0x19d4f8c0 in /usr/bin/clickhouse
E           15. ? @ 0x7f6a3f41b609 in ?
E           16. clone @ 0x7f6a3f340133 in ?
E           . (BAD_ARGUMENTS)
E           (query: CREATE TABLE test.kafka (key UInt64, value UInt64)
E                       ENGINE = Kafka
E                       SETTINGS kafka_broker_list = 'kafka1:19092',
E                                kafka_topic_list = 'read_consumers_in_parallel',
E                                kafka_group_name = 'read_consumers_in_parallel',
E                                kafka_format = 'JSONEachRow',
E                                kafka_num_consumers = 8,
E                                kafka_thread_per_consumer = 0,
E                                kafka_poll_timeout_ms = 1000,
E                                kafka_flush_interval_ms = 999;)

helpers/client.py:187: QueryRuntimeException

Comment: Misconfiguration: only 1 CPU/thread is available to ClickHouse, while the Kafka engine is configured to use more than one consumer (DB::Exception: Number of consumers can not be bigger than 1).
Status: FAIL (OK to fail)

Test: /integration/test_storage_kafka/test.py::test_kafka_csv_with_thread_per_consumer

Reason:

[gw0] linux -- Python 3.8.10 /usr/bin/python3

kafka_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f26aa8dbfa0>

    def test_kafka_csv_with_thread_per_consumer(kafka_cluster):
>       instance.query(
            """
            CREATE TABLE test.kafka (key UInt64, value UInt64)
                ENGINE = Kafka
                SETTINGS kafka_broker_list = 'kafka1:19092',
                         kafka_topic_list = 'csv_with_thread_per_consumer',
                         kafka_group_name = 'csv_with_thread_per_consumer',
                         kafka_format = 'CSV',
                         kafka_row_delimiter = '\\n',
                         kafka_num_consumers = 4,
                         kafka_commit_on_select = 1,
                         kafka_thread_per_consumer = 1;
            """
        )

test_storage_kafka/test.py:3304:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/cluster.py:2802: in query
    return self.client.query(
helpers/client.py:31: in query
    return self.get_query_request(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <helpers.client.CommandRequest object at 0x7f26a8a4dbe0>

    def get_answer(self):
        self.process.wait()
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)

        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (self.process.returncode != 0 or stderr) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 36, stderr: Received exception from server (version 22.3.10):
E           Code: 36. DB::Exception: Received from 172.16.1.8:9000. DB::Exception: Number of consumers can not be bigger than 1. Stack trace:
E           
E           0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb3763ba in /usr/bin/clickhouse
E           1. DB::Exception::Exception<unsigned int&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int&) @ 0x15302851 in /usr/bin/clickhouse
E           2. ? @ 0x15301c09 in /usr/bin/clickhouse
E           3. DB::StorageFactory::get(DB::ASTCreateQuery const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, std::__1::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x15f59fba in /usr/bin/clickhouse
E           4. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0x159fe1b9 in /usr/bin/clickhouse
E           5. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x159f8edf in /usr/bin/clickhouse
E           6. DB::InterpreterCreateQuery::execute() @ 0x15a0170b in /usr/bin/clickhouse
E           7. ? @ 0x15d38a2f in /usr/bin/clickhouse
E           8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x15d364d5 in /usr/bin/clickhouse
E           9. DB::TCPHandler::runImpl() @ 0x168c133a in /usr/bin/clickhouse
E           10. DB::TCPHandler::run() @ 0x168d0c99 in /usr/bin/clickhouse
E           11. Poco::Net::TCPServerConnection::start() @ 0x19b9314f in /usr/bin/clickhouse
E           12. Poco::Net::TCPServerDispatcher::run() @ 0x19b955a1 in /usr/bin/clickhouse
E           13. Poco::PooledThread::run() @ 0x19d52569 in /usr/bin/clickhouse
E           14. Poco::ThreadImpl::runnableEntry(void*) @ 0x19d4f8c0 in /usr/bin/clickhouse
E           15. ? @ 0x7f6a3f41b609 in ?
E           16. clone @ 0x7f6a3f340133 in ?
E           . (BAD_ARGUMENTS)
E           (query: CREATE TABLE test.kafka (key UInt64, value UInt64)
E                       ENGINE = Kafka
E                       SETTINGS kafka_broker_list = 'kafka1:19092',
E                                kafka_topic_list = 'csv_with_thread_per_consumer',
E                                kafka_group_name = 'csv_with_thread_per_consumer',
E                                kafka_format = 'CSV',
E                                kafka_row_delimiter = '\n',
E                                kafka_num_consumers = 4,
E                                kafka_commit_on_select = 1,
E                                kafka_thread_per_consumer = 1;)

helpers/client.py:187: QueryRuntimeException

Comment: Misconfiguration: only 1 CPU/thread is available to ClickHouse, while the Kafka engine is configured to use more than one consumer (DB::Exception: Number of consumers can not be bigger than 1).
Status: FAIL (OK to fail).

Stateful Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v22.3.10.24/2022-09-16T18-02-49.256/stateful/stateful_results.html

Stateless Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v22.3.10.24/2022-09-16T18-02-49.256/stateless/stateless_results.html

Test: /stateless/02149_read_in_order_fixed_prefix

Reason:

2022-08-19 02:39:18 --- /usr/share/clickhouse-test/queries/0_stateless/02149_read_in_order_fixed_prefix.reference   2022-08-19 02:36:17.097020306 -0930
2022-08-19 02:39:18 +++ /tmp/clickhouse-test/0_stateless/02149_read_in_order_fixed_prefix.stdout    2022-08-19 02:39:18.415628096 -0930
2022-08-19 02:39:18 @@ -29,10 +29,8 @@
2022-08-19 02:39:18        ExpressionTransform × 2
2022-08-19 02:39:18          (SettingQuotaAndLimits)
2022-08-19 02:39:18            (ReadFromMergeTree)
2022-08-19 02:39:18 -          ReverseTransform
2022-08-19 02:39:18 -            MergeTreeReverse 01
2022-08-19 02:39:18 -              ReverseTransform
2022-08-19 02:39:18 -                MergeTreeReverse 01
2022-08-19 02:39:18 +          ReverseTransform × 2
2022-08-19 02:39:18 +            MergeTreeReverse × 2 01
2022-08-19 02:39:18  2020-10-01 9
2022-08-19 02:39:18  2020-10-01 9
2022-08-19 02:39:18  2020-10-01 9
2022-08-19 02:39:18
2022-08-19 02:39:18
2022-08-19 02:39:18 Settings used in the test: --max_insert_threads=13 --group_by_two_level_threshold=100000 --group_by_two_level_threshold_bytes=50000000 --distributed_aggregation_memory_efficient=1 --fsync_metadata=0 --priority=1 --output_format_parallel_formatting=0 --input_format_parallel_parsing=1
2022-08-19 02:39:18
2022-08-19 02:39:18 Database: test_bcznak

Comment:
Status: FAIL

Test: /stateless/01701_parallel_parsing_infinite_segmentation

Reason:

2022-08-19 02:41:24 --- /usr/share/clickhouse-test/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.reference   2022-08-19 02:36:17.073008307 -0930
2022-08-19 02:41:24 +++ /tmp/clickhouse-test/0_stateless/01701_parallel_parsing_infinite_segmentation.stdout    2022-08-19 02:41:24.214491786 -0930
2022-08-19 02:41:24 @@ -1 +1 @@
2022-08-19 02:41:24 -Ok.
2022-08-19 02:41:24 +FAIL
2022-08-19 02:41:24
2022-08-19 02:41:24
2022-08-19 02:41:24 Settings used in the test: --max_insert_threads=4 --group_by_two_level_threshold=100000 --group_by_two_level_threshold_bytes=50000000 --distributed_aggregation_memory_efficient=1 --fsync_metadata=1 --priority=4 --output_format_parallel_formatting=0 --input_format_parallel_parsing=0
2022-08-19 02:41:24
2022-08-19 02:41:24 Database: test_c8bx4g

Comment: Not enough threads/CPUs available to ClickHouse.
Status: FAIL (OK to fail)

Test: /stateless/01532_primary_key_without_order_by_zookeeper

Reason:

2022-08-19 02:42:06 --- /usr/share/clickhouse-test/queries/0_stateless/01532_primary_key_without_order_by_zookeeper.reference   2022-08-19 02:36:17.061002308 -0930
2022-08-19 02:42:06 +++ /tmp/clickhouse-test/0_stateless/01532_primary_key_without_order_by_zookeeper.stdout    2022-08-19 02:42:06.703724333 -0930
2022-08-19 02:42:06 @@ -9,8 +9,8 @@
2022-08-19 02:42:06  1  c
2022-08-19 02:42:06  2  b
2022-08-19 02:42:06  1  c   0
2022-08-19 02:42:06 -2  e   555
2022-08-19 02:42:06  2  b   0
2022-08-19 02:42:06 +2  e   555
2022-08-19 02:42:06  CREATE TABLE default.merge_tree_pk_sql\n(\n    `key` UInt64,\n    `value` String,\n    `key2` UInt64\n)\nENGINE = ReplacingMergeTree\nPRIMARY KEY key\nORDER BY (key, key2)\nSETTINGS index_granularity = 8192
2022-08-19 02:42:06  CREATE TABLE default.replicated_merge_tree_pk_sql\n(\n    `key` UInt64,\n    `value` String\n)\nENGINE = ReplicatedReplacingMergeTree(\'/clickhouse/test/01532_primary_key_without\', \'r1\')\nPRIMARY KEY key\nORDER BY key\nSETTINGS index_granularity = 8192
2022-08-19 02:42:06  1  a
2022-08-19 02:42:06 @@ -18,6 +18,6 @@
2022-08-19 02:42:06  1  c
2022-08-19 02:42:06  2  b
2022-08-19 02:42:06  1  c   0
2022-08-19 02:42:06 -2  e   555
2022-08-19 02:42:06  2  b   0
2022-08-19 02:42:06 +2  e   555
2022-08-19 02:42:06  CREATE TABLE default.replicated_merge_tree_pk_sql\n(\n    `key` UInt64,\n    `value` String,\n    `key2` UInt64\n)\nENGINE = ReplicatedReplacingMergeTree(\'/clickhouse/test/01532_primary_key_without\', \'r1\')\nPRIMARY KEY key\nORDER BY (key, key2)\nSETTINGS index_granularity = 8192
2022-08-19 02:42:06
2022-08-19 02:42:06
2022-08-19 02:42:06 Settings used in the test: --max_insert_threads=15 --group_by_two_level_threshold=100000 --group_by_two_level_threshold_bytes=1152921504606846976 --distributed_aggregation_memory_efficient=0 --fsync_metadata=1 --priority=1 --output_format_parallel_formatting=1 --input_format_parallel_parsing=1
2022-08-19 02:42:06
2022-08-19 02:42:06 Database: test_k84p48

Comment: Minor: rows appear in a different order (the differing column is not included in ORDER BY, so their relative order is not guaranteed).
Status: FAIL (OK to fail).

Test: /stateless/01524_do_not_merge_across_partitions_select_final

Reason:

2022-08-19 02:42:08 --- /usr/share/clickhouse-test/queries/0_stateless/01524_do_not_merge_across_partitions_select_final.reference  2022-08-19 02:36:17.061002308 -0930
2022-08-19 02:42:08 +++ /tmp/clickhouse-test/0_stateless/01524_do_not_merge_across_partitions_select_final.stdout   2022-08-19 02:42:08.852798262 -0930
2022-08-19 02:42:08 @@ -6,4 +6,4 @@
2022-08-19 02:42:08  2020-01-01 00:00:00    2
2022-08-19 02:42:08  1
2022-08-19 02:42:08  499999
2022-08-19 02:42:08 -5
2022-08-19 02:42:08 +2
2022-08-19 02:42:08
2022-08-19 02:42:08
2022-08-19 02:42:08 Settings used in the test: --max_insert_threads=8 --group_by_two_level_threshold=100000 --group_by_two_level_threshold_bytes=50000000 --distributed_aggregation_memory_efficient=1 --fsync_metadata=1 --priority=0 --output_format_parallel_formatting=0 --input_format_parallel_parsing=0
2022-08-19 02:42:08
2022-08-19 02:42:08 Database: test_pm9tfg

Comment:
Status: FAIL

Test: /stateless/01275_parallel_mv

Reason:

2022-08-19 02:43:17 --- /usr/share/clickhouse-test/queries/0_stateless/01275_parallel_mv.reference  2022-08-19 02:36:17.044994309 -0930
2022-08-19 02:43:17 +++ /tmp/clickhouse-test/0_stateless/01275_parallel_mv.stdout   2022-08-19 02:43:17.611157884 -0930
2022-08-19 02:43:17 @@ -3,7 +3,7 @@
2022-08-19 02:43:17  insert into testX select number from numbers(10) settings log_queries=1; -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO }
2022-08-19 02:43:17  system flush logs;
2022-08-19 02:43:17  select length(thread_ids) >= 8 from system.query_log where current_database = currentDatabase() and type != 'QueryStart' and query like '%insert into testX %' and Settings['parallel_view_processing'] = '1';
2022-08-19 02:43:17 -1
2022-08-19 02:43:17 +0
2022-08-19 02:43:17  select count() from testX;
2022-08-19 02:43:17  10
2022-08-19 02:43:17  select count() from testXA;
2022-08-19 02:43:17
2022-08-19 02:43:17
2022-08-19 02:43:17 Settings used in the test: --max_insert_threads=8 --group_by_two_level_threshold=100000 --group_by_two_level_threshold_bytes=50000000 --distributed_aggregation_memory_efficient=0 --fsync_metadata=0 --priority=1 --output_format_parallel_formatting=1 --input_format_parallel_parsing=0
2022-08-19 02:43:17
2022-08-19 02:43:17 Database: test_jllszi

Comment: Not enough available threads/CPUs for ClickHouse.
Status: FAIL (OK to fail)

Test: /stateless/01091_num_threads

Reason:

2022-08-19 02:43:37 --- /usr/share/clickhouse-test/queries/0_stateless/01091_num_threads.reference  2022-08-19 02:36:17.040992309 -0930
2022-08-19 02:43:37 +++ /tmp/clickhouse-test/0_stateless/01091_num_threads.stdout   2022-08-19 02:43:37.180937215 -0930
2022-08-19 02:43:37 @@ -3,4 +3,4 @@
2022-08-19 02:43:37  499999500000
2022-08-19 02:43:37  1
2022-08-19 02:43:37  499999500000
2022-08-19 02:43:37 -1
2022-08-19 02:43:37 +0
2022-08-19 02:43:37
2022-08-19 02:43:37
2022-08-19 02:43:37 Settings used in the test: --max_insert_threads=0 --group_by_two_level_threshold=1 --group_by_two_level_threshold_bytes=50000000 --distributed_aggregation_memory_efficient=1 --fsync_metadata=1 --priority=3 --output_format_parallel_formatting=0 --input_format_parallel_parsing=0
2022-08-19 02:43:37
2022-08-19 02:43:37 Database: test_nd2x1k

Comment: Not enough available threads/CPUs for ClickHouse.
Status: FAIL (OK to fail)

Test: /stateless/01193_metadata_loading

Reason:

2022-08-19 03:01:07 --- /usr/share/clickhouse-test/queries/0_stateless/01193_metadata_loading.reference 2022-08-19 02:36:17.044994309 -0930
2022-08-19 03:01:07 +++ /tmp/clickhouse-test/0_stateless/01193_metadata_loading.stdout  2022-08-19 03:01:07.293695138 -0930
2022-08-19 03:01:07 @@ -1,5 +1,5 @@
2022-08-19 03:01:07  1000   0   2020-06-25  hello   [1,2]   [3,4]
2022-08-19 03:01:07  1000   1   2020-06-26  word    [10,20] [30,40]
2022-08-19 03:01:07 -ok
2022-08-19 03:01:07 +[4656,4025,4291,4371,4124]
2022-08-19 03:01:07  8000   0   2020-06-25  hello   [1,2]   [3,4]
2022-08-19 03:01:07  8000   1   2020-06-26  word    [10,20] [30,40]
2022-08-19 03:01:07
2022-08-19 03:01:07
2022-08-19 03:01:07 Settings used in the test: --max_insert_threads=0 --group_by_two_level_threshold=1 --group_by_two_level_threshold_bytes=50000000 --distributed_aggregation_memory_efficient=1 --fsync_metadata=0 --priority=0 --output_format_parallel_formatting=1 --input_format_parallel_parsing=1
2022-08-19 03:01:07
2022-08-19 03:01:07 Database: test_zjwatq

Comment: Minor: known failure that depends on execution speed; the test hardware differs from the upstream CI/CD runners.
Status: FAIL (OK to fail)

TestFlows Results

Passed:
- AES Encryption
- ClickHouse Keeper
- DateTime64 Extended Range
- Disk Level Encryption
- Example
- Extended Precision Data Types
- Kafka
- Kerberos
- LDAP
- Lightweight Delete
- Map Type
- Part Moves Between Shards
- RBAC
- S3 AWS
- S3 GCS
- SSL Server
- Tiered Storage AWS
- Tiered Storage original
- Tiered Storage Minio
- Window Functions

Failed:
- S3 Minio

Test: /s3/minio zero copy replication/lost data during mutation

Details:

Description
  Clickhouse1   9000    57  Code: 57. DB::Exception: Table default.table_d39a32c4_3823_11ed_8707_0242ac110005 already exists. (TABLE_ALREADY_EXISTS) (version 22.3.10.24.altinitystable (altinity build))   2   0
Received exception from server (version 22.3.10):
Code: 57. DB::Exception: Received from localhost:9000. DB::Exception: There was an error on [clickhouse1:9000]: Code: 57. DB::Exception: Table default.table_d39a32c4_3823_11ed_8707_0242ac110005 already exists. (TABLE_ALREADY_EXISTS) (version 22.3.10.24.altinitystable (altinity build)). (TABLE_ALREADY_EXISTS)
(query: create table table_d39a32c4_3823_11ed_8707_0242ac110005 on cluster 'sharded_cluster' (key UInt32, value1 String, value2 String, value3 String) engine=ReplicatedMergeTree('/table_d39a32c4_3823_11ed_8707_0242ac110005', '{replica}')
            order by key
            partition by (key % 4)
            settings storage_policy='external'

)

Assertion values
  assert False, error(r.output)
  ^ is False

Where
  File '/builds/altinity-qa/clickhouse/cicd/release/regression/s3/../helpers/cluster.py', line 653 in 'query'

645|              ) if steps else NullStep():
646|                  assert message in r.output, error(r.output)
647|
648|          if message is None or "Exception:" not in message:
649|              with Then("check if output has exception") if steps else NullStep():
650|                  if "Exception:" in r.output:
651|                      if raise_on_exception:
652|                          raise QueryRuntimeException(r.output)
653|>                     assert False, error(r.output)
654|
655|          return r
656|
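
A triage sketch, not a confirmed fix: the CREATE ... ON CLUSTER above failed because the table already existed on clickhouse1, which usually points at leftovers from an earlier step of the scenario. The host and clickhouse-client invocation are assumptions about the regression environment:

    import subprocess

    table = "table_d39a32c4_3823_11ed_8707_0242ac110005"

    def ch(query: str) -> str:
        return subprocess.check_output(
            ["clickhouse-client", "--host", "clickhouse1", "-q", query]
        ).decode()

    # Is the table still present on the node that reported TABLE_ALREADY_EXISTS?
    print(ch(f"EXISTS TABLE default.{table}"))

    # If so, drop it cluster-wide before re-running
    # (SYNC waits for the drop to complete instead of deferring it).
    ch(f"DROP TABLE IF EXISTS default.{table} ON CLUSTER 'sharded_cluster' SYNC")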