Document status - Public
(c) 2023 Altinity Inc. All Rights Reserved. All information contained herein is, and remains the property of Altinity Inc. Any dissemination of this information or reproduction of this material is strictly forbidden unless prior written permission is obtained from Altinity Inc.
Status: Approved for release by QA
Reviewed by: vzakaznikov@altinity.com
Date: Fri 14 Jul 2023 06:10:43 PM EDT
| Stage | Status | Note | 
|---|---|---|
| Integration | Pass | with error | 
| Stateful | Pass | |
| Stateless | Pass | with known fail | 
| TestFlows | Pass | |
| Trivy | Pass | |
| Scout | Pass | |
Results  https://altinity-test-reports.s3.amazonaws.com/index.html#builds/stable/v23.3.8.22.altinitystable/2023-07-14T19-23-19.127/ 
GitLab Pipeline  https://gitlab.com/altinity-qa/clickhouse/cicd/release/-/pipelines/931650333 
GitHub Actions  https://github.com/Altinity/ClickHouse/actions/runs/5553084583 
GitHub Actions for the TestFlows Base58, ClickHouse Keeper, Engines, Parquet, Lightweight Delete, SSL Server, and Tiered Storage suites  https://github.com/Altinity/clickhouse-regression/actions/runs/5555121278
Results 
https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.3.8.22.altinitystable/2023-07-14T19-23-19.127/integration/integration_results_1.html 
https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.3.8.22.altinitystable/2023-07-14T19-23-19.127/integration/integration_results_2.html  
Test: /integration/test_cgroup_limit/test.py::test_cgroup_cpu_limit 
Reason:
  
____________________________ test_cgroup_cpu_limit _____________________________
[gw0] linux -- Python 3.8.10 /usr/bin/python3
    def test_cgroup_cpu_limit():
        for num_cpus in (1, 2, 4, 2.8):
>           result = run_with_cpu_limit(
                "clickhouse local -q \"select value from system.settings where name='max_threads'\"",
                num_cpus,
            )
test_cgroup_limit/test.py:43: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
test_cgroup_limit/test.py:38: in run_with_cpu_limit
    return run_command_in_container(cmd, *args)
test_cgroup_limit/test.py:19: in run_command_in_container
    return subprocess.check_output(
/usr/lib/python3.8/subprocess.py:415: in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
input = None, capture_output = False, timeout = None, check = True
popenargs = (['docker', 'run', '--rm', '--cpus', '1', '--volume', ...],)
kwargs = {'stdout': -1}, process = <subprocess.Popen object at 0x7f3ed7d9c730>
stdout = b'', stderr = None, retcode = 125
    def run(*popenargs,
            input=None, capture_output=False, timeout=None, check=False, **kwargs):
        """Run command with arguments and return a CompletedProcess instance.
        The returned instance will have attributes args, returncode, stdout and
        stderr. By default, stdout and stderr are not captured, and those attributes
        will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them.
        If check is True and the exit code was non-zero, it raises a
        CalledProcessError. The CalledProcessError object will have the return code
        in the returncode attribute, and output & stderr attributes if those streams
        were captured.
        If timeout is given, and the process takes too long, a TimeoutExpired
        exception will be raised.
        There is an optional argument "input", allowing you to
        pass bytes or a string to the subprocess's stdin.  If you use this argument
        you may not also use the Popen constructor's "stdin" argument, as
        it will be used internally.
        By default, all communication is in bytes, and therefore any "input" should
        be bytes, and the stdout and stderr will be bytes. If in text mode, any
        "input" should be a string, and stdout and stderr will be strings decoded
        according to locale encoding, or by "encoding" if set. Text mode is
        triggered by setting any of text, encoding, errors or universal_newlines.
        The other arguments are the same as for the Popen constructor.
        """
        if input is not None:
            if kwargs.get('stdin') is not None:
                raise ValueError('stdin and input arguments may not both be used.')
            kwargs['stdin'] = PIPE
        if capture_output:
            if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
                raise ValueError('stdout and stderr arguments may not be used '
                                 'with capture_output.')
            kwargs['stdout'] = PIPE
            kwargs['stderr'] = PIPE
        with Popen(*popenargs, **kwargs) as process:
            try:
                stdout, stderr = process.communicate(input, timeout=timeout)
            except TimeoutExpired as exc:
                process.kill()
                if _mswindows:
                    # Windows accumulates the output in a single blocking
                    # read() call run on child threads, with the timeout
                    # being done in a join() on those threads.  communicate()
                    # _after_ kill() is required to collect that and add it
                    # to the exception.
                    exc.stdout, exc.stderr = process.communicate()
                else:
                    # POSIX _communicate already populated the output so
                    # far into the TimeoutExpired exception.
                    process.wait()
                raise
            except:  # Including KeyboardInterrupt, communicate handled that.
                process.kill()
                # We don't call process.wait() as .__exit__ does that for us.
                raise
            retcode = process.poll()
            if check and retcode:
>               raise CalledProcessError(retcode, process.args,
                                         output=stdout, stderr=stderr)
E               subprocess.CalledProcessError: Command '['docker', 'run', '--rm', '--cpus', '1', '--volume', '/clickhouse:/usr/bin/clickhouse', 'ubuntu:20.04', 'sh', '-c', 'clickhouse local -q "select value from system.settings where name=\'max_threads\'"']' returned non-zero exit status 125.
/usr/lib/python3.8/subprocess.py:516: CalledProcessError
Comment: Misconfiguration: only 1 CPU/hardware thread was available to ClickHouse instead of at least 2.
Status: FAIL (OK to fail)  
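For reference, a minimal sketch of what this test exercises, assuming a ClickHouse binary is available at /clickhouse on the host (as in the traceback); the helper name is hypothetical:

```python
import subprocess

def max_threads_under_cpu_limit(num_cpus):
    # Run `clickhouse local` in a container capped at `num_cpus` CPUs and
    # return the max_threads value ClickHouse derives from the cgroup limit.
    # The /clickhouse volume path and ubuntu:20.04 image are taken from the
    # traceback above; the helper name is illustrative.
    cmd = [
        "docker", "run", "--rm",
        "--cpus", str(num_cpus),
        "--volume", "/clickhouse:/usr/bin/clickhouse",
        "ubuntu:20.04",
        "sh", "-c",
        "clickhouse local -q \"select value from system.settings"
        " where name='max_threads'\"",
    ]
    return subprocess.check_output(cmd).decode().strip()

# The test iterates over limits of 1, 2, 4 and 2.8 CPUs and expects the
# reported max_threads to track the limit, so the runner must expose more
# than one hardware thread; on this runner the `docker run` invocation
# failed outright with exit status 125.
```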
Test: test_https_replication/test_change_ip.py::test_replication_when_node_ip_changed 
Reason:
  
___________ ERROR at setup of test_replication_when_node_ip_changed ____________
[gw1] linux -- Python 3.8.10 /usr/bin/python3
    @pytest.fixture(scope="module")
    def both_https_cluster():
        try:
>           cluster.start()
test_https_replication/test_change_ip.py:56: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/cluster.py:2541: in start
    run_and_check(self.base_zookeeper_cmd + common_opts, env=self.env)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
args = ['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_https_replication/_instances_change_ip_0/.env', '--project-name', 'roottesthttpsreplicationchangeip', '--file', ...]
env = {'CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH': '/clickhouse-library-bridge', 'CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH': '/clickh...dge', 'CLICKHOUSE_TESTS_BASE_CONFIG_DIR': '/clickhouse-config', 'CLICKHOUSE_TESTS_CLIENT_BIN_PATH': '/clickhouse', ...}
shell = False, stdout = -1, stderr = -1, timeout = 300, nothrow = False
detach = False
    def run_and_check(
        args,
        env=None,
        shell=False,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        timeout=300,
        nothrow=False,
        detach=False,
    ):
        if detach:
            subprocess.Popen(
                args,
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
                env=env,
                shell=shell,
            )
            return
        logging.debug(f"Command:{args}")
        res = subprocess.run(
            args, stdout=stdout, stderr=stderr, env=env, shell=shell, timeout=timeout
        )
        out = res.stdout.decode("utf-8")
        err = res.stderr.decode("utf-8")
        # check_call(...) from subprocess does not print stderr, so we do it manually
        for outline in out.splitlines():
            logging.debug(f"Stdout:{outline}")
        for errline in err.splitlines():
            logging.debug(f"Stderr:{errline}")
        if res.returncode != 0:
            logging.debug(f"Exitcode:{res.returncode}")
            if env:
                logging.debug(f"Env:{env}")
            if not nothrow:
>               raise Exception(
                    f"Command {args} return non-zero code {res.returncode}: {res.stderr.decode('utf-8')}"
                )
E               Exception: Command ['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_https_replication/_instances_change_ip_0/.env', '--project-name', 'roottesthttpsreplicationchangeip', '--file', '/compose/docker_compose_keeper.yml', '--file', '/compose/docker_compose_net.yml', '--verbose', 'up', '-d'] return non-zero code 1: compose.config.config.find: Using configuration files: /compose/docker_compose_keeper.yml,/compose/docker_compose_net.yml
E               compose.cli.docker_client.get_client: docker-compose version 1.29.2, build unknown
E               docker-py version: <module 'docker.version' from '/usr/local/lib/python3.8/dist-packages/docker/version.py'>
E               CPython version: 3.8.10
E               OpenSSL version: OpenSSL 1.1.1f  31 Mar 2020
E               compose.cli.docker_client.get_client: Docker base_url: http+docker://localhost
E               compose.cli.docker_client.get_client: Docker version: Platform={'Name': 'Docker Engine - Community'}, Components=[{'Name': 'Engine', 'Version': '24.0.4', 'Details': {'ApiVersion': '1.43', 'Arch': 'amd64', 'BuildTime': '2023-07-07T14:50:57.000000000+00:00', 'Experimental': 'false', 'GitCommit': '4ffc614', 'GoVersion': 'go1.20.5', 'KernelVersion': '5.15.0-60-generic', 'MinAPIVersion': '1.12', 'Os': 'linux'}}, {'Name': 'containerd', 'Version': '1.6.21', 'Details': {'GitCommit': '3dce8eb055cbb6872793272b4f20ed16117344f8'}}, {'Name': 'runc', 'Version': '1.1.7', 'Details': {'GitCommit': 'v1.1.7-0-g860f061'}}, {'Name': 'docker-init', 'Version': '0.19.0', 'Details': {'GitCommit': 'de40ad0'}}], Version=24.0.4, ApiVersion=1.43, MinAPIVersion=1.12, GitCommit=4ffc614, GoVersion=go1.20.5, Os=linux, Arch=amd64, KernelVersion=5.15.0-60-generic, BuildTime=2023-07-07T14:50:57.000000000+00:00
E               compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('roottesthttpsreplicationchangeip_default')
E               compose.cli.verbose_proxy.proxy_callable: docker info <- ()
E               compose.cli.verbose_proxy.proxy_callable: docker info -> {'Architecture': 'x86_64',
E                'BridgeNfIp6tables': True,
E                'BridgeNfIptables': True,
E                'CPUSet': True,
E                'CPUShares': True,
E                'CgroupDriver': 'cgroupfs',
E                'CgroupVersion': '2',
E                'ContainerdCommit': {'Expected': '3dce8eb055cbb6872793272b4f20ed16117344f8',
E                                     'ID': '3dce8eb055cbb6872793272b4f20ed16117344f8'},
E                'Containers': 8,
E               ...
E               compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('roottesthttpsreplicationchangeip_default')
E               compose.network.ensure: Creating network "roottesthttpsreplicationchangeip_default" with driver "bridge"
E               compose.cli.verbose_proxy.proxy_callable: docker create_network <- (name='roottesthttpsreplicationchangeip_default', driver='bridge', options=None, ipam={'Driver': 'default', 'Config': [{'Subnet': '10.5.0.0/12', 'IPRange': None, 'Gateway': '10.5.1.1', 'AuxiliaryAddresses': None}, {'Subnet': '2001:3984:3989::/64', 'IPRange': None, 'Gateway': '2001:3984:3989::1', 'AuxiliaryAddresses': None}]}, internal=None, enable_ipv6=True, labels={'com.docker.compose.project': 'roottesthttpsreplicationchangeip', 'com.docker.compose.network': 'default', 'com.docker.compose.version': '1.29.2'}, attachable=True, check_duplicate=True)
E               compose.cli.errors.log_api_error: Pool overlaps with other one on this address space
helpers/cluster.py:113: Exception
Comment: Error during cluster setup: creating the test network failed because its address pool overlaps with an existing Docker network ("Pool overlaps with other one on this address space").
Status: ERROR  
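A sketch of how the overlap can be diagnosed on the runner, assuming only standard `docker network` commands (the helper name is hypothetical; 10.5.0.0/12 is the subnet the cluster requests per the log above):

```python
import json
import subprocess

def list_network_subnets():
    # Print every Docker network together with its configured subnets so a
    # stale network occupying 10.5.0.0/12 can be identified and removed.
    names = subprocess.check_output(
        ["docker", "network", "ls", "--format", "{{.Name}}"]
    ).decode().split()
    for name in names:
        info = json.loads(
            subprocess.check_output(["docker", "network", "inspect", name])
        )[0]
        for cfg in info.get("IPAM", {}).get("Config") or []:
            print(name, cfg.get("Subnet"))

# Removing unused networks (e.g. with `docker network prune`) typically
# clears the "Pool overlaps with other one on this address space" error
# before re-running the suite.
```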
Test: 01680_date_time_add_ubsan 
Reason:
  
2023-07-14 17:25:46 The query succeeded but the server error '407' was expected (query: SELECT DISTINCT result FROM (SELECT toStartOfFifteenMinutes(toDateTime(toStartOfFifteenMinutes(toDateTime(1000.0001220703125) + (number * 65536))) + (number * 9223372036854775807)) AS result FROM system.numbers LIMIT 1048576) ORDER BY result DESC NULLS FIRST FORMAT Null; -- { serverError 407 }).
2023-07-14 17:25:46 
2023-07-14 17:25:46 stdout:
2023-07-14 17:25:46 \N
2023-07-14 17:25:46 
2023-07-14 17:25:46 Settings used in the test: --max_insert_threads=10 --group_by_two_level_threshold=501968 --group_by_two_level_threshold_bytes=6605032 --distributed_aggregation_memory_efficient=1 --fsync_metadata=0 --output_format_parallel_formatting=1 --input_format_parallel_parsing=0 --min_chunk_bytes_for_parallel_parsing=3929023 --max_read_buffer_size=653605 --prefer_localhost_replica=1 --max_block_size=81269 --max_threads=38 --optimize_or_like_chain=0 --optimize_read_in_order=0 --read_in_order_two_level_merge_threshold=18 --optimize_aggregation_in_order=0 --aggregation_in_order_max_block_bytes=7041507 --min_compress_block_size=1484905 --max_compress_block_size=2498167 --use_uncompressed_cache=1 --min_bytes_to_use_direct_io=10737418240 --min_bytes_to_use_mmap_io=1 --local_filesystem_read_method=mmap --remote_filesystem_read_method=read --local_filesystem_read_prefetch=1 --remote_filesystem_read_prefetch=0 --compile_expressions=1 --compile_aggregate_expressions=0 --compile_sort_description=0 --merge_tree_coarse_index_granularity=29 --optimize_distinct_in_order=0 --optimize_sorting_by_input_stream_properties=1 --http_response_buffer_size=6779096 --http_wait_end_of_query=False --enable_memory_bound_merging_of_aggregation_results=1 --min_count_to_compile_expression=0 --min_count_to_compile_aggregate_expression=0 --min_count_to_compile_sort_description=0
2023-07-14 17:25:46 
2023-07-14 17:25:46 MergeTree settings used in test: --ratio_of_defaults_for_sparse_serialization=1.0 --prefer_fetch_merged_part_size_threshold=10737418240 --vertical_merge_algorithm_min_rows_to_activate=285769 --vertical_merge_algorithm_min_columns_to_activate=100 --min_merge_bytes_to_use_direct_io=10737418240 --index_granularity_bytes=8791451 --merge_max_block_size=7050 --index_granularity=28105 --min_bytes_for_wide_part=976741469
2023-07-14 17:25:46 
2023-07-14 17:25:46 Database: test_f9qn7hlf
Comment: Query was expected to fail with server error 407 but passed.
Status: FAIL (OK to fail)  
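A simplified, illustrative re-implementation of the `-- { serverError 407 }` annotation the stateless runner applies to this query (the helper name is hypothetical; `clickhouse-client` reports error codes as `Code: N` on stderr):

```python
import subprocess

QUERY = """SELECT DISTINCT result FROM (
    SELECT toStartOfFifteenMinutes(
        toDateTime(toStartOfFifteenMinutes(
            toDateTime(1000.0001220703125) + (number * 65536)))
        + (number * 9223372036854775807)) AS result
    FROM system.numbers LIMIT 1048576)
ORDER BY result DESC NULLS FIRST FORMAT Null"""

def expect_server_error(query, code):
    # The annotated query must fail, and the expected error code must
    # appear in the client's stderr output.
    res = subprocess.run(
        ["clickhouse-client", "--query", query],
        capture_output=True, text=True,
    )
    assert res.returncode != 0, "query succeeded but an error was expected"
    assert f"Code: {code}" in res.stderr

# expect_server_error(QUERY, 407) reproduces the check: on this run the
# query returned \N instead of raising error 407, so the first assertion
# fails, which is the failure recorded above.
```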
Passed: 
- Aes Encryption 
- Aggregate Functions 
- Atomic Insert 
- Base58 
- Benchmark AWS 
- Benchmark GCS 
- Benchmark Minio 
- ClickHouse Keeper 
- DateTime64 Extended Range 
- Disk Level Encryption 
- DNS 
- Engines
- Example 
- Extended Precision Data Types 
- Kafka 
- Kerberos 
- LDAP Authentication 
- LDAP External User Directory 
- LDAP Role Mapping 
- Lightweight Delete 
- Map Type 
- Parquet AWS 
- Parquet Minio 
- Parquet No S3 
- Part Moves Between Shards 
- RBAC 
- Selects 
- SSL Server 
- S3 AWS 
- S3 GCS 
- S3 Minio 
- Tiered Storage 
- Tiered Storage AWS 
- Tiered Storage GCS 
- Tiered Storage Minio 
- Window Functions