Document status - Public


QA Software Build Report

ClickHouse 23.3.8.23.altinityfips / x86_64

(c) 2023 Altinity Inc. All Rights Reserved.

Approval

Status: Approved for release by QA

Reviewed by: vzakaznikov@altinity.com

Date: Thu 27 Jul 2023 12:39:51 PM EDT

Table of Contents

Test Results
Results Analysis
    Integration Results
    Stateful Results
    Stateless Results
    TestFlows Results
    Trivy Results
    Scout Results

Test Results

Stage        Status  Note
Integration  Fail
Stateful     Pass
Stateless    Fail
TestFlows    Pass
Trivy        Pass
Scout        Pass

Results https://altinity-test-reports.s3.amazonaws.com/index.html#builds/stable/v23.3.8.23.altinityfips/2023-07-26T01-49-29.915/
GitLab Pipeline https://gitlab.com/altinity-qa/clickhouse/cicd/release/-/pipelines/945413584
GitHub Actions https://github.com/Altinity/ClickHouse/actions/runs/5601674688
TestFlows (ClickHouse Keeper, Engines, Parquet, and SSL Server suites) GitHub Actions https://github.com/Altinity/clickhouse-regression/actions/runs/5612444287

Results Analysis

Integration Results

Results
https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.3.8.23.altinityfips/2023-07-26T01-49-29.915/integration/integration_results_1.html
https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.3.8.23.altinityfips/2023-07-26T01-49-29.915/integration/integration_results_2.html

Test: /integration/test_cgroup_limit/test.py::test_cgroup_cpu_limit

Reason:

____________________________ test_cgroup_cpu_limit _____________________________
[gw0] linux -- Python 3.8.10 /usr/bin/python3

    def test_cgroup_cpu_limit():
        for num_cpus in (1, 2, 4, 2.8):
>           result = run_with_cpu_limit(
                "clickhouse local -q \"select value from system.settings where name='max_threads'\"",
                num_cpus,
            )

test_cgroup_limit/test.py:43: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
test_cgroup_limit/test.py:38: in run_with_cpu_limit
    return run_command_in_container(cmd, *args)
test_cgroup_limit/test.py:19: in run_command_in_container
    return subprocess.check_output(
/usr/lib/python3.8/subprocess.py:415: in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

input = None, capture_output = False, timeout = None, check = True
popenargs = (['docker', 'run', '--rm', '--cpus', '1', '--volume', ...],)
kwargs = {'stdout': -1}, process = <subprocess.Popen object at 0x7f3ed7d9c730>
stdout = b'', stderr = None, retcode = 125

    def run(*popenargs,
            input=None, capture_output=False, timeout=None, check=False, **kwargs):
        """Run command with arguments and return a CompletedProcess instance.

        The returned instance will have attributes args, returncode, stdout and
        stderr. By default, stdout and stderr are not captured, and those attributes
        will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them.

        If check is True and the exit code was non-zero, it raises a
        CalledProcessError. The CalledProcessError object will have the return code
        in the returncode attribute, and output & stderr attributes if those streams
        were captured.

        If timeout is given, and the process takes too long, a TimeoutExpired
        exception will be raised.

        There is an optional argument "input", allowing you to
        pass bytes or a string to the subprocess's stdin.  If you use this argument
        you may not also use the Popen constructor's "stdin" argument, as
        it will be used internally.

        By default, all communication is in bytes, and therefore any "input" should
        be bytes, and the stdout and stderr will be bytes. If in text mode, any
        "input" should be a string, and stdout and stderr will be strings decoded
        according to locale encoding, or by "encoding" if set. Text mode is
        triggered by setting any of text, encoding, errors or universal_newlines.

        The other arguments are the same as for the Popen constructor.
        """
        if input is not None:
            if kwargs.get('stdin') is not None:
                raise ValueError('stdin and input arguments may not both be used.')
            kwargs['stdin'] = PIPE

        if capture_output:
            if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
                raise ValueError('stdout and stderr arguments may not be used '
                                 'with capture_output.')
            kwargs['stdout'] = PIPE
            kwargs['stderr'] = PIPE

        with Popen(*popenargs, **kwargs) as process:
            try:
                stdout, stderr = process.communicate(input, timeout=timeout)
            except TimeoutExpired as exc:
                process.kill()
                if _mswindows:
                    # Windows accumulates the output in a single blocking
                    # read() call run on child threads, with the timeout
                    # being done in a join() on those threads.  communicate()
                    # _after_ kill() is required to collect that and add it
                    # to the exception.
                    exc.stdout, exc.stderr = process.communicate()
                else:
                    # POSIX _communicate already populated the output so
                    # far into the TimeoutExpired exception.
                    process.wait()
                raise
            except:  # Including KeyboardInterrupt, communicate handled that.
                process.kill()
                # We don't call process.wait() as .__exit__ does that for us.
                raise
            retcode = process.poll()
            if check and retcode:
>               raise CalledProcessError(retcode, process.args,
                                         output=stdout, stderr=stderr)
E               subprocess.CalledProcessError: Command '['docker', 'run', '--rm', '--cpus', '1', '--volume', '/clickhouse:/usr/bin/clickhouse', 'ubuntu:20.04', 'sh', '-c', 'clickhouse local -q "select value from system.settings where name=\'max_threads\'"']' returned non-zero exit status 125.

/usr/lib/python3.8/subprocess.py:516: CalledProcessError

Comment: Misconfiguration: only 1 CPU/hardware thread was available to the ClickHouse container instead of the at least 2 the test requires; the docker run command exited with status 125. A reproduction sketch follows.
Status: FAIL (OK to fail)
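
For local triage, the failing check can be replayed outside pytest. A minimal sketch, assuming Docker is installed, the host exposes at least 2 CPUs, and a clickhouse binary is mounted from /clickhouse as in the test harness:

    import subprocess

    # Minimal replay of test_cgroup_cpu_limit outside pytest. Paths mirror
    # the integration test; adjust the /clickhouse mount for your setup.
    def max_threads_with_cpu_limit(num_cpus):
        cmd = [
            "docker", "run", "--rm",
            "--cpus", str(num_cpus),
            "--volume", "/clickhouse:/usr/bin/clickhouse",
            "ubuntu:20.04", "sh", "-c",
            "clickhouse local -q \"select value from system.settings"
            " where name='max_threads'\"",
        ]
        # check_output raises CalledProcessError on a non-zero exit status,
        # e.g. the 125 seen above when docker run itself fails.
        return subprocess.check_output(cmd).decode().strip()

    for n in (1, 2, 4, 2.8):
        print(n, "->", max_threads_with_cpu_limit(n))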

Test: test_https_replication/test_change_ip.py::test_replication_when_node_ip_changed

Reason:

___________ ERROR at setup of test_replication_when_node_ip_changed ____________
[gw1] linux -- Python 3.8.10 /usr/bin/python3

    @pytest.fixture(scope="module")
    def both_https_cluster():
        try:
>           cluster.start()

test_https_replication/test_change_ip.py:56: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/cluster.py:2541: in start
    run_and_check(self.base_zookeeper_cmd + common_opts, env=self.env)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = ['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_https_replication/_instances_change_ip_0/.env', '--project-name', 'roottesthttpsreplicationchangeip', '--file', ...]
env = {'CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH': '/clickhouse-library-bridge', 'CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH': '/clickh...dge', 'CLICKHOUSE_TESTS_BASE_CONFIG_DIR': '/clickhouse-config', 'CLICKHOUSE_TESTS_CLIENT_BIN_PATH': '/clickhouse', ...}
shell = False, stdout = -1, stderr = -1, timeout = 300, nothrow = False
detach = False

    def run_and_check(
        args,
        env=None,
        shell=False,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        timeout=300,
        nothrow=False,
        detach=False,
    ):
        if detach:
            subprocess.Popen(
                args,
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
                env=env,
                shell=shell,
            )
            return

        logging.debug(f"Command:{args}")
        res = subprocess.run(
            args, stdout=stdout, stderr=stderr, env=env, shell=shell, timeout=timeout
        )
        out = res.stdout.decode("utf-8")
        err = res.stderr.decode("utf-8")
        # check_call(...) from subprocess does not print stderr, so we do it manually
        for outline in out.splitlines():
            logging.debug(f"Stdout:{outline}")
        for errline in err.splitlines():
            logging.debug(f"Stderr:{errline}")
        if res.returncode != 0:
            logging.debug(f"Exitcode:{res.returncode}")
            if env:
                logging.debug(f"Env:{env}")
            if not nothrow:
>               raise Exception(
                    f"Command {args} return non-zero code {res.returncode}: {res.stderr.decode('utf-8')}"
                )
E               Exception: Command ['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_https_replication/_instances_change_ip_0/.env', '--project-name', 'roottesthttpsreplicationchangeip', '--file', '/compose/docker_compose_keeper.yml', '--file', '/compose/docker_compose_net.yml', '--verbose', 'up', '-d'] return non-zero code 1: compose.config.config.find: Using configuration files: /compose/docker_compose_keeper.yml,/compose/docker_compose_net.yml
E               compose.cli.docker_client.get_client: docker-compose version 1.29.2, build unknown
E               docker-py version: <module 'docker.version' from '/usr/local/lib/python3.8/dist-packages/docker/version.py'>
E               CPython version: 3.8.10
E               OpenSSL version: OpenSSL 1.1.1f  31 Mar 2020
E               compose.cli.docker_client.get_client: Docker base_url: http+docker://localhost
E               compose.cli.docker_client.get_client: Docker version: Platform={'Name': 'Docker Engine - Community'}, Components=[{'Name': 'Engine', 'Version': '24.0.4', 'Details': {'ApiVersion': '1.43', 'Arch': 'amd64', 'BuildTime': '2023-07-07T14:50:57.000000000+00:00', 'Experimental': 'false', 'GitCommit': '4ffc614', 'GoVersion': 'go1.20.5', 'KernelVersion': '5.15.0-60-generic', 'MinAPIVersion': '1.12', 'Os': 'linux'}}, {'Name': 'containerd', 'Version': '1.6.21', 'Details': {'GitCommit': '3dce8eb055cbb6872793272b4f20ed16117344f8'}}, {'Name': 'runc', 'Version': '1.1.7', 'Details': {'GitCommit': 'v1.1.7-0-g860f061'}}, {'Name': 'docker-init', 'Version': '0.19.0', 'Details': {'GitCommit': 'de40ad0'}}], Version=24.0.4, ApiVersion=1.43, MinAPIVersion=1.12, GitCommit=4ffc614, GoVersion=go1.20.5, Os=linux, Arch=amd64, KernelVersion=5.15.0-60-generic, BuildTime=2023-07-07T14:50:57.000000000+00:00
E               compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('roottesthttpsreplicationchangeip_default')
E               compose.cli.verbose_proxy.proxy_callable: docker info <- ()
E               compose.cli.verbose_proxy.proxy_callable: docker info -> {'Architecture': 'x86_64',
E                'BridgeNfIp6tables': True,
E                'BridgeNfIptables': True,
E                'CPUSet': True,
E                'CPUShares': True,
E                'CgroupDriver': 'cgroupfs',
E                'CgroupVersion': '2',
E                'ContainerdCommit': {'Expected': '3dce8eb055cbb6872793272b4f20ed16117344f8',
E                                     'ID': '3dce8eb055cbb6872793272b4f20ed16117344f8'},
E                'Containers': 8,
E               ...
E               compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('roottesthttpsreplicationchangeip_default')
E               compose.network.ensure: Creating network "roottesthttpsreplicationchangeip_default" with driver "bridge"
E               compose.cli.verbose_proxy.proxy_callable: docker create_network <- (name='roottesthttpsreplicationchangeip_default', driver='bridge', options=None, ipam={'Driver': 'default', 'Config': [{'Subnet': '10.5.0.0/12', 'IPRange': None, 'Gateway': '10.5.1.1', 'AuxiliaryAddresses': None}, {'Subnet': '2001:3984:3989::/64', 'IPRange': None, 'Gateway': '2001:3984:3989::1', 'AuxiliaryAddresses': None}]}, internal=None, enable_ipv6=True, labels={'com.docker.compose.project': 'roottesthttpsreplicationchangeip', 'com.docker.compose.network': 'default', 'com.docker.compose.version': '1.29.2'}, attachable=True, check_duplicate=True)
E               compose.cli.errors.log_api_error: Pool overlaps with other one on this address space

helpers/cluster.py:113: Exception

Comment: Error during cluster setup: docker-compose could not create the test network because its subnet overlaps with an existing Docker network ("Pool overlaps with other one on this address space"). A cleanup sketch follows.
Status: ERROR
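
The overlap is typically caused by a stale network left behind by an earlier run. A minimal cleanup sketch, assuming the docker CLI is on PATH and that networks from previous roottest* compose projects are safe to remove:

    import subprocess

    # Remove Docker networks left over from previous integration runs.
    # Assumes harness-created networks are named "roottest..." (as in the
    # log above) and that no live containers are still attached to them.
    names = subprocess.check_output(
        ["docker", "network", "ls", "--format", "{{.Name}}"]
    ).decode().split()

    for name in names:
        if name.startswith("roottest"):
            # "docker network rm" fails if the network is still in use;
            # check=True surfaces that error instead of hiding it.
            subprocess.run(["docker", "network", "rm", name], check=True)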

Test: test_hedged_requests_parallel/test.py::test_send_data

Reason:

________________________________ test_send_data ________________________________
[gw2] linux -- Python 3.8.10 /usr/bin/python3

started_cluster = <helpers.cluster.ClickHouseCluster object at 0x7f3bbd166070>

    def test_send_data(started_cluster):
        update_configs(
            node_1_sleep_in_send_data=sleep_time, node_2_sleep_in_send_data=sleep_time
        )
>       check_query()

test_hedged_requests_parallel/test.py:187: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
test_hedged_requests_parallel/test.py:81: in check_query
    NODES["node"].query(query)
helpers/cluster.py:3239: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:63: in query
    return self.get_query_request(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <helpers.client.CommandRequest object at 0x7f3bbd9eda00>

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)

        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (self.process.returncode != 0 or stderr) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 0, stderr: E0720 03:56:44.031312383   38114 completion_queue.cc:743]              Kick failed: UNKNOWN:Bad file descriptor {created_time:"2023-07-20T03:56:44.031193272+00:00", errno:9, os_error:"Bad file descriptor", syscall:"eventfd_write"}

helpers/client.py:222: QueryRuntimeException

Comment: Query exception: the client exited with return code 0, but the gRPC runtime wrote a "Kick failed: ... Bad file descriptor" message to stderr, and the harness treats any stderr output as a failure.
Status: FAIL

Test: test_tlsv1_3/test.py::test_https_wrong_cert
test_tlsv1_3/test.py::test_https
test_tlsv1_3/test.py::test_create_user
test_tlsv1_3/test.py::test_https_non_ssl_auth

Reason:

_______________________________ test_create_user _______________________________
[gw2] linux -- Python 3.8.10 /usr/bin/python3

self = <urllib.request.HTTPSHandler object at 0x7f93902de550>
http_class = <class 'http.client.HTTPSConnection'>
req = <urllib.request.Request object at 0x7f93902deb80>
http_conn_args = {'check_hostname': None, 'context': <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f93903df040>}
host = '172.16.9.2:8443'
h = <http.client.HTTPSConnection object at 0x7f93902de340>

    def do_open(self, http_class, req, **http_conn_args):
        """Return an HTTPResponse object for the request, using http_class.

        http_class must implement the HTTPConnection API from http.client.
        """
        host = req.host
        if not host:
            raise URLError('no host given')

        # will parse host:port
        h = http_class(host, timeout=req.timeout, **http_conn_args)
        h.set_debuglevel(self._debuglevel)

        headers = dict(req.unredirected_hdrs)
        headers.update({k: v for k, v in req.headers.items()
                        if k not in headers})

        # TODO(jhylton): Should this be redesigned to handle
        # persistent connections?

        # We want to make an HTTP/1.1 request, but the addinfourl
        # class isn't prepared to deal with a persistent connection.
        # It will try to read all remaining data from the socket,
        # which will block while the server waits for the next request.
        # So make sure the connection gets closed after the (only)
        # request.
        headers["Connection"] = "close"
        headers = {name.title(): val for name, val in headers.items()}

        if req._tunnel_host:
            tunnel_headers = {}
            proxy_auth_hdr = "Proxy-Authorization"
            if proxy_auth_hdr in headers:
                tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr]
                # Proxy-Authorization should not be sent to origin
                # server.
                del headers[proxy_auth_hdr]
            h.set_tunnel(req._tunnel_host, headers=tunnel_headers)

        try:
            try:
>               h.request(req.get_method(), req.selector, req.data, headers,
                          encode_chunked=req.has_header('Transfer-encoding'))

/usr/lib/python3.8/urllib/request.py:1354: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/lib/python3.8/http/client.py:1256: in request
    self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.8/http/client.py:1302: in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.8/http/client.py:1251: in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.8/http/client.py:1011: in _send_output
    self.send(msg)
/usr/lib/python3.8/http/client.py:951: in send
    self.connect()
/usr/lib/python3.8/http/client.py:1425: in connect
    self.sock = self._context.wrap_socket(self.sock,
helpers/ssl_context.py:12: in wrap_socket
    return super().wrap_socket(sock, *args, **kwargs)
/usr/lib/python3.8/ssl.py:500: in wrap_socket
    return self.sslsocket_class._create(
/usr/lib/python3.8/ssl.py:1040: in _create
    self.do_handshake()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>
block = False

    @_sslcopydoc
    def do_handshake(self, block=False):
        self._check_connected()
        timeout = self.gettimeout()
        try:
            if timeout == 0.0 and block:
                self.settimeout(None)
>           self._sslobj.do_handshake()
E           ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1131)

/usr/lib/python3.8/ssl.py:1309: SSLEOFError

During handling of the above exception, another exception occurred:

    def test_create_user():
        instance.query("CREATE USER emma IDENTIFIED WITH ssl_certificate CN 'client3'")
>       assert (
            execute_query_https("SELECT currentUser()", user="emma", cert_name="client3")
            == "emma\n"
        )

test_tlsv1_3/test.py:206: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
test_tlsv1_3/test.py:64: in execute_query_https
    response = urllib.request.urlopen(
/usr/lib/python3.8/urllib/request.py:222: in urlopen
    return opener.open(url, data, timeout)
/usr/lib/python3.8/urllib/request.py:525: in open
    response = self._open(req, data)
/usr/lib/python3.8/urllib/request.py:542: in _open
    result = self._call_chain(self.handle_open, protocol, protocol +
/usr/lib/python3.8/urllib/request.py:502: in _call_chain
    result = func(*args)
/usr/lib/python3.8/urllib/request.py:1397: in https_open
    return self.do_open(http.client.HTTPSConnection, req,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <urllib.request.HTTPSHandler object at 0x7f93902de550>
http_class = <class 'http.client.HTTPSConnection'>
req = <urllib.request.Request object at 0x7f93902deb80>
http_conn_args = {'check_hostname': None, 'context': <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f93903df040>}
host = '172.16.9.2:8443'
h = <http.client.HTTPSConnection object at 0x7f93902de340>

    def do_open(self, http_class, req, **http_conn_args):
        """Return an HTTPResponse object for the request, using http_class.

        http_class must implement the HTTPConnection API from http.client.
        """
        host = req.host
        if not host:
            raise URLError('no host given')

        # will parse host:port
        h = http_class(host, timeout=req.timeout, **http_conn_args)
        h.set_debuglevel(self._debuglevel)

        headers = dict(req.unredirected_hdrs)
        headers.update({k: v for k, v in req.headers.items()
                        if k not in headers})

        # TODO(jhylton): Should this be redesigned to handle
        # persistent connections?

        # We want to make an HTTP/1.1 request, but the addinfourl
        # class isn't prepared to deal with a persistent connection.
        # It will try to read all remaining data from the socket,
        # which will block while the server waits for the next request.
        # So make sure the connection gets closed after the (only)
        # request.
        headers["Connection"] = "close"
        headers = {name.title(): val for name, val in headers.items()}

        if req._tunnel_host:
            tunnel_headers = {}
            proxy_auth_hdr = "Proxy-Authorization"
            if proxy_auth_hdr in headers:
                tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr]
                # Proxy-Authorization should not be sent to origin
                # server.
                del headers[proxy_auth_hdr]
            h.set_tunnel(req._tunnel_host, headers=tunnel_headers)

        try:
            try:
                h.request(req.get_method(), req.selector, req.data, headers,
                          encode_chunked=req.has_header('Transfer-encoding'))
            except OSError as err: # timeout error
>               raise URLError(err)
E               urllib.error.URLError: <urlopen error EOF occurred in violation of protocol (_ssl.c:1131)>

/usr/lib/python3.8/urllib/request.py:1357: URLError

Comment: TLSv1.3 is disabled in the FIPS build, so the handshake from the TLSv1.3-only test client is cut off (SSLEOFError) and these tests are expected to fail. A probe sketch follows.
Status: FAIL (OK to fail)
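
The behavior can be confirmed against a running server by forcing a TLSv1.3-only handshake. A minimal sketch, using the host/port from the traceback above as placeholders and an unverified context purely for protocol probing:

    import socket
    import ssl

    # Probe whether the server negotiates TLSv1.3. Against this FIPS build
    # the handshake should fail; lowering minimum_version to TLSv1_2 should
    # succeed. Adjust HOST/PORT for your environment.
    HOST, PORT = "172.16.9.2", 8443

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # probe only; no certificate validation
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3

    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock) as tls:
                print("negotiated:", tls.version())
    except ssl.SSLError as err:
        print("TLSv1.3 handshake refused (expected on FIPS builds):", err)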

Test: test_keeper_internal_secure/test.py::test_secure_raft_works

Reason:

____________________________ test_secure_raft_works ____________________________
[gw2] linux -- Python 3.8.10 /usr/bin/python3

started_cluster = <helpers.cluster.ClickHouseCluster object at 0x7ff094fdcc10>

    def test_secure_raft_works(started_cluster):
        try:
            node1_zk = get_fake_zk("node1")
            node2_zk = get_fake_zk("node2")
            node3_zk = get_fake_zk("node3")

>           node1_zk.create("/test_node", b"somedata1")

test_keeper_internal_secure/test.py:70: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/local/lib/python3.8/dist-packages/kazoo/client.py:955: in create
    return self.create_async(
/usr/local/lib/python3.8/dist-packages/kazoo/handlers/utils.py:86: in get
    raise self._exception
/usr/local/lib/python3.8/dist-packages/kazoo/handlers/utils.py:292: in captured_function
    return function(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/kazoo/handlers/utils.py:313: in captured_function
    value = function(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/kazoo/client.py:1022: in create_completion
    return self.unchroot(result.get())
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <kazoo.handlers.threading.AsyncResult object at 0x7ff094dca070>
block = True, timeout = None

    def get(self, block=True, timeout=None):
        """Return the stored value or raise the exception.

        If there is no value raises TimeoutError.

        """
        with self._condition:
            if self._exception is not _NONE:
                if self._exception is None:
                    return self.value
>               raise self._exception
E               kazoo.exceptions.ConnectionLoss

/usr/local/lib/python3.8/dist-packages/kazoo/handlers/utils.py:80: ConnectionLoss

Comment: Keeper connection failure: the kazoo client lost its connection (ConnectionLoss) before the first znode could be created. A retry probe sketch follows.
Status: FAIL
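
To tell a transient drop from a persistent failure of the secure Raft setup, the create can be retried with kazoo's built-in retry helper. A minimal sketch, with the host a placeholder for one of the test's Keeper nodes on the standard client port:

    from kazoo.client import KazooClient
    from kazoo.retry import KazooRetry

    # Retry the znode create a few times to see whether ConnectionLoss is
    # transient. "node1:2181" is a placeholder for a test Keeper node.
    client = KazooClient(
        hosts="node1:2181",
        connection_retry=KazooRetry(max_tries=5, delay=1.0),
    )
    client.start(timeout=30)
    try:
        # client.retry re-runs the call on recoverable errors
        # such as ConnectionLoss.
        client.retry(client.create, "/test_node", b"somedata1")
        print("create succeeded")
    finally:
        client.stop()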

Stateful Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.3.8.23.altinityfips/2023-07-26T01-49-29.915/stateful/stateful_results.html

Stateless Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.3.8.23.altinityfips/2023-07-26T01-49-29.915/stateless/stateless_results.html

Test: 02417_opentelemetry_insert_on_distributed_table

Reason:

2023-07-20 03:23:17 --- /usr/share/clickhouse-test/queries/0_stateless/02417_opentelemetry_insert_on_distributed_table.reference    2023-07-20 03:18:17.576290658 +0000
2023-07-20 03:23:17 +++ /tmp/clickhouse-test/0_stateless/02417_opentelemetry_insert_on_distributed_table.stdout 2023-07-20 03:23:17.457623766 +0000
2023-07-20 03:23:17 @@ -4,9 +4,8 @@
2023-07-20 03:23:17  1
2023-07-20 03:23:17  ===2===
2023-07-20 03:23:17  {"operation_name":"void DB::DistributedAsyncInsertDirectoryQueue::processFile(const std::string &)","cluster":"test_cluster_two_shards_localhost","shard":"1","rows":"1","bytes":"8"}
2023-07-20 03:23:17 -{"operation_name":"void DB::DistributedAsyncInsertDirectoryQueue::processFile(const std::string &)","cluster":"test_cluster_two_shards_localhost","shard":"2","rows":"1","bytes":"8"}
2023-07-20 03:23:17 -3
2023-07-20 03:23:17  2
2023-07-20 03:23:17 +1
2023-07-20 03:23:17  ===3===
2023-07-20 03:23:17  {"operation_name":"auto DB::DistributedSink::runWritingJob(DB::DistributedSink::JobReplica &, const DB::Block &, size_t)::(anonymous class)::operator()() const","cluster":"test_cluster_two_shards_localhost","shard":"1","rows":"1","bytes":"8"}
2023-07-20 03:23:17  {"operation_name":"auto DB::DistributedSink::runWritingJob(DB::DistributedSink::JobReplica &, const DB::Block &, size_t)::(anonymous class)::operator()() const","cluster":"test_cluster_two_shards_localhost","shard":"2","rows":"1","bytes":"8"}
2023-07-20 03:23:17 
2023-07-20 03:23:17 
2023-07-20 03:23:17 Settings used in the test: --max_insert_threads=0 --group_by_two_level_threshold=1 --group_by_two_level_threshold_bytes=5302740 --distributed_aggregation_memory_efficient=0 --fsync_metadata=1 --output_format_parallel_formatting=1 --input_format_parallel_parsing=0 --min_chunk_bytes_for_parallel_parsing=6141237 --max_read_buffer_size=602854 --prefer_localhost_replica=1 --max_block_size=85850 --max_threads=37 --optimize_or_like_chain=0 --optimize_read_in_order=0 --read_in_order_two_level_merge_threshold=100 --optimize_aggregation_in_order=1 --aggregation_in_order_max_block_bytes=11197259 --min_compress_block_size=1609565 --max_compress_block_size=2688120 --use_uncompressed_cache=1 --min_bytes_to_use_direct_io=10737418240 --min_bytes_to_use_mmap_io=10737418240 --local_filesystem_read_method=read --remote_filesystem_read_method=threadpool --local_filesystem_read_prefetch=0 --remote_filesystem_read_prefetch=0 --compile_expressions=1 --compile_aggregate_expressions=0 --compile_sort_description=0 --merge_tree_coarse_index_granularity=23 --optimize_distinct_in_order=1 --optimize_sorting_by_input_stream_properties=1 --http_response_buffer_size=3396332 --http_wait_end_of_query=True --enable_memory_bound_merging_of_aggregation_results=1 --min_count_to_compile_expression=3 --min_count_to_compile_aggregate_expression=0 --min_count_to_compile_sort_description=0
2023-07-20 03:23:17 
2023-07-20 03:23:17 MergeTree settings used in test: --ratio_of_defaults_for_sparse_serialization=1.0 --prefer_fetch_merged_part_size_threshold=10737418240 --vertical_merge_algorithm_min_rows_to_activate=1000000 --vertical_merge_algorithm_min_columns_to_activate=1 --min_merge_bytes_to_use_direct_io=678297125 --index_granularity_bytes=10427201 --merge_max_block_size=7155 --index_granularity=57350 --min_bytes_for_wide_part=1073741824
2023-07-20 03:23:17 
2023-07-20 03:23:17 Database: test_4iw7v2vr

Comment: Output does not match the reference: the trace is missing the DistributedAsyncInsertDirectoryQueue span for shard 2, and the row counts below it differ. A rerun sketch follows.
Status: FAIL
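
To gauge whether the mismatch is deterministic or flaky, the single test can be rerun in a loop with clickhouse-test. A minimal sketch, assuming clickhouse-test is on PATH and a local server is running (any extra flags your environment needs are omitted):

    import subprocess

    # Rerun the failing stateless test several times to gauge flakiness.
    # clickhouse-test accepts a test-name filter as a positional argument.
    TEST = "02417_opentelemetry_insert_on_distributed_table"

    failures = 0
    for attempt in range(5):
        res = subprocess.run(["clickhouse-test", TEST])
        if res.returncode != 0:
            failures += 1
    print(f"{failures}/5 runs failed")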

TestFlows Results

Passed:
- Aes Encryption
- Aggregate Functions
- Atomic Insert
- Base58
- Benchmark AWS
- Benchmark GCS
- Benchmark Minio
- ClickHouse Keeper
- ClickHouse Keeper SSL FIPS
- DateTime64 Extended Range
- Disk Level Encryption
- DNS
- Engines
- Example
- Extended Precision Data Types
- Kafka
- Kerberos
- LDAP Authentication
- LDAP External User Directory
- LDAP Role Mapping
- Lightweight Delete
- Map Type
- Parquet AWS
- Parquet Minio
- Parquet No S3
- Part Moves Between Shards
- RBAC
- Selects
- SSL Server
- S3 AWS
- S3 GCS
- S3 Minio
- Tiered Storage
- Tiered Storage AWS
- Tiered Storage GCS
- Tiered Storage Minio
- Window Functions

Trivy Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.3.8.23.altinityfips/2023-07-26T01-49-29.915/staging-docker-trivy-ubuntu/results.html

Scout Results

Results https://altinity-test-reports.s3.amazonaws.com/builds/stable/v23.3.8.23.altinityfips/2023-07-26T01-49-29.915/staging-docker-scout-ubuntu/results.html