Document status - Public
Copyright 2022, Altinity Inc. All Rights Reserved. All information contained herein is, and remains, the property of Altinity Inc. Any dissemination of this information or reproduction of this material is strictly forbidden unless prior written permission is obtained from Altinity Inc.
(c) 2023 Altinity Inc. All Rights Reserved.
Status: Approved for release by QA
Reviewed by: vzakaznikov@altinity.com
Date: Fri 26 May 2023 06:19:41 PM EDT
| Stage       | Status | Note             |
|-------------|--------|------------------|
| Integration | Pass   | with known fails |
| Stateful    | Pass   |                  |
| Stateless   | Pass   |                  |
| TestFlows   | Pass   | with known fails |
| Trivy       | Pass   |                  |
| Scout       | Pass   |                  |
- Results: https://altinity-test-reports.s3.amazonaws.com/index.html#builds/stable/v22.8.15.25.altinityfips/2023-05-25T20-01-58.093/
- GitLab Pipeline: https://gitlab.com/altinity-qa/clickhouse/cicd/release/-/pipelines/879432090
- GitHub Actions: https://github.com/Altinity/ClickHouse/actions/runs/5080932159
- GitHub Extended Precision Data Types: https://github.com/Altinity/clickhouse-regression/actions/runs/5084456281
- GitHub Part Moves Between Shards: https://github.com/Altinity/clickhouse-regression/actions/runs/5086078006
- GitHub Lightweight Delete: https://github.com/Altinity/clickhouse-regression/actions/runs/5086188464
Results:
- https://altinity-test-reports.s3.amazonaws.com/builds/stable/v22.8.15.25.altinityfips/2023-05-25T20-01-58.093/integration/integration_results_1.html
- https://altinity-test-reports.s3.amazonaws.com/builds/stable/v22.8.15.25.altinityfips/2023-05-25T20-01-58.093/integration/integration_results_2.html
Fails:
Test: /integration/test_cgroup_limit/test.py::test_cgroup_cpu_limit
Reason:
____________________________ test_cgroup_cpu_limit _____________________________
[gw0] linux -- Python 3.8.10 /usr/bin/python3
def test_cgroup_cpu_limit():
for num_cpus in (1, 2, 4, 2.8):
> result = run_with_cpu_limit(
"clickhouse local -q \"select value from system.settings where name='max_threads'\"",
num_cpus,
)
test_cgroup_limit/test.py:43:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_cgroup_limit/test.py:38: in run_with_cpu_limit
return run_command_in_container(cmd, *args)
test_cgroup_limit/test.py:19: in run_command_in_container
return subprocess.check_output(
/usr/lib/python3.8/subprocess.py:415: in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
input = None, capture_output = False, timeout = None, check = True
popenargs = (['docker', 'run', '--rm', '--cpus', '1', '--volume', ...],)
kwargs = {'stdout': -1}, process = <subprocess.Popen object at 0x7eff47d75850>
stdout = b'', stderr = None, retcode = 125
def run(*popenargs,
input=None, capture_output=False, timeout=None, check=False, **kwargs):
"""Run command with arguments and return a CompletedProcess instance.
The returned instance will have attributes args, returncode, stdout and
stderr. By default, stdout and stderr are not captured, and those attributes
will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them.
If check is True and the exit code was non-zero, it raises a
CalledProcessError. The CalledProcessError object will have the return code
in the returncode attribute, and output & stderr attributes if those streams
were captured.
If timeout is given, and the process takes too long, a TimeoutExpired
exception will be raised.
There is an optional argument "input", allowing you to
pass bytes or a string to the subprocess's stdin. If you use this argument
you may not also use the Popen constructor's "stdin" argument, as
it will be used internally.
By default, all communication is in bytes, and therefore any "input" should
be bytes, and the stdout and stderr will be bytes. If in text mode, any
"input" should be a string, and stdout and stderr will be strings decoded
according to locale encoding, or by "encoding" if set. Text mode is
triggered by setting any of text, encoding, errors or universal_newlines.
The other arguments are the same as for the Popen constructor.
"""
if input is not None:
if kwargs.get('stdin') is not None:
raise ValueError('stdin and input arguments may not both be used.')
kwargs['stdin'] = PIPE
if capture_output:
if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
raise ValueError('stdout and stderr arguments may not be used '
'with capture_output.')
kwargs['stdout'] = PIPE
kwargs['stderr'] = PIPE
with Popen(*popenargs, **kwargs) as process:
try:
stdout, stderr = process.communicate(input, timeout=timeout)
except TimeoutExpired as exc:
process.kill()
if _mswindows:
# Windows accumulates the output in a single blocking
# read() call run on child threads, with the timeout
# being done in a join() on those threads. communicate()
# _after_ kill() is required to collect that and add it
# to the exception.
exc.stdout, exc.stderr = process.communicate()
else:
# POSIX _communicate already populated the output so
# far into the TimeoutExpired exception.
process.wait()
raise
except: # Including KeyboardInterrupt, communicate handled that.
process.kill()
# We don't call process.wait() as .__exit__ does that for us.
raise
retcode = process.poll()
if check and retcode:
> raise CalledProcessError(retcode, process.args,
output=stdout, stderr=stderr)
E subprocess.CalledProcessError: Command '['docker', 'run', '--rm', '--cpus', '1', '--volume', '/clickhouse:/usr/bin/clickhouse', 'ubuntu:20.04', 'sh', '-c', 'clickhouse local -q "select value from system.settings where name=\'max_threads\'"']' returned non-zero exit status 125.
/usr/lib/python3.8/subprocess.py:516: CalledProcessError
----------------------------- Captured stderr call -----------------------------
Unable to find image 'ubuntu:20.04' locally
20.04: Pulling from library/ubuntu
ca1778b69356: Already exists
Digest: sha256:db8bf6f4fb351aa7a26e27ba2686cf35a6a409f65603e59d4c203e58387dc6b3
Status: Downloaded newer image for ubuntu:20.04
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: unable to apply cgroup configuration: cannot enter cgroupv2 "/sys/fs/cgroup/docker" with domain controllers -- it is in threaded mode: unknown.
Comment: Test environment misconfiguration: only 1 CPU/hardware thread was available to ClickHouse instead of at least 2.
Status: FAIL (OK to fail)
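The failing check simply runs `clickhouse local` inside a CPU-limited container and reads `max_threads`. Below is a minimal standalone sketch of that check, assuming Docker is available and the ClickHouse binary is mounted at /clickhouse as in the test's volume mount; exit status 125, as seen in the traceback above, means Docker could not start the container at all (here because of the cgroupv2 "threaded mode" error), not that ClickHouse reported the wrong thread count.

```python
# Minimal reproduction sketch (not the test harness itself): query max_threads
# from clickhouse local while Docker caps the container's CPUs.
# Assumes Docker is installed and the ClickHouse binary is at /clickhouse,
# mirroring the volume mount used by the failing test.
import subprocess

def max_threads_with_cpu_limit(num_cpus):
    cmd = [
        "docker", "run", "--rm", "--cpus", str(num_cpus),
        "--volume", "/clickhouse:/usr/bin/clickhouse",
        "ubuntu:20.04", "sh", "-c",
        "clickhouse local -q \"select value from system.settings where name='max_threads'\"",
    ]
    # check_output raises CalledProcessError on a non-zero exit code;
    # status 125 indicates the container could not be started at all.
    return subprocess.check_output(cmd).decode().strip()

if __name__ == "__main__":
    for n in (1, 2, 4):
        print(f"--cpus {n}: max_threads = {max_threads_with_cpu_limit(n)}")
```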
Test: /integration/test_tlsv1_3/test.py::test_create_user
Reason:
_______________________________ test_create_user _______________________________
[gw2] linux -- Python 3.8.10 /usr/bin/python3
self = <urllib.request.HTTPSHandler object at 0x7f219d2df310>
http_class = <class 'http.client.HTTPSConnection'>
req = <urllib.request.Request object at 0x7f219d3cda90>
http_conn_args = {'check_hostname': None, 'context': <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d380a40>}
host = '172.16.11.2:8443'
h = <http.client.HTTPSConnection object at 0x7f219d2ab670>
def do_open(self, http_class, req, **http_conn_args):
"""Return an HTTPResponse object for the request, using http_class.
http_class must implement the HTTPConnection API from http.client.
"""
host = req.host
if not host:
raise URLError('no host given')
# will parse host:port
h = http_class(host, timeout=req.timeout, **http_conn_args)
h.set_debuglevel(self._debuglevel)
headers = dict(req.unredirected_hdrs)
headers.update({k: v for k, v in req.headers.items()
if k not in headers})
# TODO(jhylton): Should this be redesigned to handle
# persistent connections?
# We want to make an HTTP/1.1 request, but the addinfourl
# class isn't prepared to deal with a persistent connection.
# It will try to read all remaining data from the socket,
# which will block while the server waits for the next request.
# So make sure the connection gets closed after the (only)
# request.
headers["Connection"] = "close"
headers = {name.title(): val for name, val in headers.items()}
if req._tunnel_host:
tunnel_headers = {}
proxy_auth_hdr = "Proxy-Authorization"
if proxy_auth_hdr in headers:
tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr]
# Proxy-Authorization should not be sent to origin
# server.
del headers[proxy_auth_hdr]
h.set_tunnel(req._tunnel_host, headers=tunnel_headers)
try:
try:
> h.request(req.get_method(), req.selector, req.data, headers,
encode_chunked=req.has_header('Transfer-encoding'))
/usr/lib/python3.8/urllib/request.py:1354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d2ab670>, method = 'GET'
url = '/?query=SELECT%20currentUser%28%29', body = None
headers = {'Connection': 'close', 'Host': '172.16.11.2:8443', 'User-Agent': 'Python-urllib/3.8', 'X-Clickhouse-Ssl-Certificate-Auth': 'on', ...}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.8/http/client.py:1256:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d2ab670>, method = 'GET'
url = '/?query=SELECT%20currentUser%28%29', body = None
headers = {'Connection': 'close', 'Host': '172.16.11.2:8443', 'User-Agent': 'Python-urllib/3.8', 'X-Clickhouse-Ssl-Certificate-Auth': 'on', ...}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.8/http/client.py:1302:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d2ab670>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.8/http/client.py:1251:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d2ab670>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.8/http/client.py:1011:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d2ab670>
data = b'GET /?query=SELECT%20currentUser%28%29 HTTP/1.1\r\nAccept-Encoding: identity\r\nHost: 172.16.11.2:8443\r\nUser-Agent: Python-urllib/3.8\r\nX-Clickhouse-User: emma\r\nX-Clickhouse-Ssl-Certificate-Auth: on\r\nConnection: close\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.8/http/client.py:951:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d2ab670>
def connect(self):
"Connect to a host on a given (SSL) port."
super().connect()
if self._tunnel_host:
server_hostname = self._tunnel_host
else:
server_hostname = self.host
> self.sock = self._context.wrap_socket(self.sock,
server_hostname=server_hostname)
/usr/lib/python3.8/http/client.py:1425:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d380a40>
sock = <socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>
args = (), kwargs = {'server_hostname': 'integration-tests.clickhouse.com'}
def wrap_socket(self, sock, *args, **kwargs):
kwargs["server_hostname"] = self._server_hostname
> return super().wrap_socket(sock, *args, **kwargs)
helpers/ssl_context.py:12:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d380a40>
sock = <socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>
server_side = False, do_handshake_on_connect = True, suppress_ragged_eofs = True
server_hostname = 'integration-tests.clickhouse.com', session = None
def wrap_socket(self, sock, server_side=False,
do_handshake_on_connect=True,
suppress_ragged_eofs=True,
server_hostname=None, session=None):
# SSLSocket class handles server_hostname encoding before it calls
# ctx._wrap_socket()
> return self.sslsocket_class._create(
sock=sock,
server_side=server_side,
do_handshake_on_connect=do_handshake_on_connect,
suppress_ragged_eofs=suppress_ragged_eofs,
server_hostname=server_hostname,
context=self,
session=session
)
/usr/lib/python3.8/ssl.py:500:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'ssl.SSLSocket'>
sock = <socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>
server_side = False, do_handshake_on_connect = True, suppress_ragged_eofs = True
server_hostname = 'integration-tests.clickhouse.com'
context = <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d380a40>
session = None
@classmethod
def _create(cls, sock, server_side=False, do_handshake_on_connect=True,
suppress_ragged_eofs=True, server_hostname=None,
context=None, session=None):
if sock.getsockopt(SOL_SOCKET, SO_TYPE) != SOCK_STREAM:
raise NotImplementedError("only stream sockets are supported")
if server_side:
if server_hostname:
raise ValueError("server_hostname can only be specified "
"in client mode")
if session is not None:
raise ValueError("session can only be specified in "
"client mode")
if context.check_hostname and not server_hostname:
raise ValueError("check_hostname requires server_hostname")
kwargs = dict(
family=sock.family, type=sock.type, proto=sock.proto,
fileno=sock.fileno()
)
self = cls.__new__(cls, **kwargs)
super(SSLSocket, self).__init__(**kwargs)
self.settimeout(sock.gettimeout())
sock.detach()
self._context = context
self._session = session
self._closed = False
self._sslobj = None
self.server_side = server_side
self.server_hostname = context._encode_hostname(server_hostname)
self.do_handshake_on_connect = do_handshake_on_connect
self.suppress_ragged_eofs = suppress_ragged_eofs
# See if we are connected
try:
self.getpeername()
except OSError as e:
if e.errno != errno.ENOTCONN:
raise
connected = False
else:
connected = True
self._connected = connected
if connected:
# create the SSL object
try:
self._sslobj = self._context._wrap_socket(
self, server_side, self.server_hostname,
owner=self, session=self._session,
)
if do_handshake_on_connect:
timeout = self.gettimeout()
if timeout == 0.0:
# non-blocking
raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets")
> self.do_handshake()
/usr/lib/python3.8/ssl.py:1040:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>
block = False
@_sslcopydoc
def do_handshake(self, block=False):
self._check_connected()
timeout = self.gettimeout()
try:
if timeout == 0.0 and block:
self.settimeout(None)
> self._sslobj.do_handshake()
E ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1131)
/usr/lib/python3.8/ssl.py:1309: SSLEOFError
During handling of the above exception, another exception occurred:
def test_create_user():
instance.query("CREATE USER emma IDENTIFIED WITH ssl_certificate CN 'client3'")
> assert (
execute_query_https("SELECT currentUser()", user="emma", cert_name="client3")
== "emma\n"
)
test_tlsv1_3/test.py:206:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_tlsv1_3/test.py:64: in execute_query_https
response = urllib.request.urlopen(
/usr/lib/python3.8/urllib/request.py:222: in urlopen
return opener.open(url, data, timeout)
/usr/lib/python3.8/urllib/request.py:525: in open
response = self._open(req, data)
/usr/lib/python3.8/urllib/request.py:542: in _open
result = self._call_chain(self.handle_open, protocol, protocol +
/usr/lib/python3.8/urllib/request.py:502: in _call_chain
result = func(*args)
/usr/lib/python3.8/urllib/request.py:1397: in https_open
return self.do_open(http.client.HTTPSConnection, req,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <urllib.request.HTTPSHandler object at 0x7f219d2df310>
http_class = <class 'http.client.HTTPSConnection'>
req = <urllib.request.Request object at 0x7f219d3cda90>
http_conn_args = {'check_hostname': None, 'context': <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d380a40>}
host = '172.16.11.2:8443'
h = <http.client.HTTPSConnection object at 0x7f219d2ab670>
def do_open(self, http_class, req, **http_conn_args):
"""Return an HTTPResponse object for the request, using http_class.
http_class must implement the HTTPConnection API from http.client.
"""
host = req.host
if not host:
raise URLError('no host given')
# will parse host:port
h = http_class(host, timeout=req.timeout, **http_conn_args)
h.set_debuglevel(self._debuglevel)
headers = dict(req.unredirected_hdrs)
headers.update({k: v for k, v in req.headers.items()
if k not in headers})
# TODO(jhylton): Should this be redesigned to handle
# persistent connections?
# We want to make an HTTP/1.1 request, but the addinfourl
# class isn't prepared to deal with a persistent connection.
# It will try to read all remaining data from the socket,
# which will block while the server waits for the next request.
# So make sure the connection gets closed after the (only)
# request.
headers["Connection"] = "close"
headers = {name.title(): val for name, val in headers.items()}
if req._tunnel_host:
tunnel_headers = {}
proxy_auth_hdr = "Proxy-Authorization"
if proxy_auth_hdr in headers:
tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr]
# Proxy-Authorization should not be sent to origin
# server.
del headers[proxy_auth_hdr]
h.set_tunnel(req._tunnel_host, headers=tunnel_headers)
try:
try:
h.request(req.get_method(), req.selector, req.data, headers,
encode_chunked=req.has_header('Transfer-encoding'))
except OSError as err: # timeout error
> raise URLError(err)
E urllib.error.URLError: <urlopen error EOF occurred in violation of protocol (_ssl.c:1131)>
/usr/lib/python3.8/urllib/request.py:1357: URLError
Comment: FIPS mode only supports TLSv1.2
Status: FAIL (OK to fail)
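Because the FIPS build only negotiates TLSv1.2, the TLSv1.3 client connections attempted by this suite are cut off during the handshake. The sketch below, using the standard library `ssl` and `urllib.request` modules, shows how a client could pin TLSv1.2 when querying the same HTTPS endpoint; the host, port, and query are taken from the failure above and may differ in another environment, and certificate verification is relaxed purely for illustration.

```python
# Sketch only: pin the client to TLSv1.2 so the handshake is acceptable to a
# FIPS-mode server. Host/port come from the failure above and may differ in
# other environments; real tests should verify server and client certificates.
import ssl
import urllib.request

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # do not attempt TLSv1.3
ctx.check_hostname = False       # illustration only
ctx.verify_mode = ssl.CERT_NONE  # illustration only

url = "https://172.16.11.2:8443/?query=SELECT%201"
with urllib.request.urlopen(url, context=ctx) as response:
    print(response.read().decode())
```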
Test: /integration/test_tlsv1_3/test.py::test_https
Reason:
__________________________________ test_https __________________________________
[gw2] linux -- Python 3.8.10 /usr/bin/python3
self = <urllib.request.HTTPSHandler object at 0x7f219d300dc0>
http_class = <class 'http.client.HTTPSConnection'>
req = <urllib.request.Request object at 0x7f219d300d60>
http_conn_args = {'check_hostname': None, 'context': <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d2c7040>}
host = '172.16.11.2:8443'
h = <http.client.HTTPSConnection object at 0x7f219d33d040>
def do_open(self, http_class, req, **http_conn_args):
"""Return an HTTPResponse object for the request, using http_class.
http_class must implement the HTTPConnection API from http.client.
"""
host = req.host
if not host:
raise URLError('no host given')
# will parse host:port
h = http_class(host, timeout=req.timeout, **http_conn_args)
h.set_debuglevel(self._debuglevel)
headers = dict(req.unredirected_hdrs)
headers.update({k: v for k, v in req.headers.items()
if k not in headers})
# TODO(jhylton): Should this be redesigned to handle
# persistent connections?
# We want to make an HTTP/1.1 request, but the addinfourl
# class isn't prepared to deal with a persistent connection.
# It will try to read all remaining data from the socket,
# which will block while the server waits for the next request.
# So make sure the connection gets closed after the (only)
# request.
headers["Connection"] = "close"
headers = {name.title(): val for name, val in headers.items()}
if req._tunnel_host:
tunnel_headers = {}
proxy_auth_hdr = "Proxy-Authorization"
if proxy_auth_hdr in headers:
tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr]
# Proxy-Authorization should not be sent to origin
# server.
del headers[proxy_auth_hdr]
h.set_tunnel(req._tunnel_host, headers=tunnel_headers)
try:
try:
> h.request(req.get_method(), req.selector, req.data, headers,
encode_chunked=req.has_header('Transfer-encoding'))
/usr/lib/python3.8/urllib/request.py:1354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d33d040>, method = 'GET'
url = '/?query=SELECT%20currentUser%28%29', body = None
headers = {'Connection': 'close', 'Host': '172.16.11.2:8443', 'User-Agent': 'Python-urllib/3.8', 'X-Clickhouse-Ssl-Certificate-Auth': 'on', ...}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.8/http/client.py:1256:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d33d040>, method = 'GET'
url = '/?query=SELECT%20currentUser%28%29', body = None
headers = {'Connection': 'close', 'Host': '172.16.11.2:8443', 'User-Agent': 'Python-urllib/3.8', 'X-Clickhouse-Ssl-Certificate-Auth': 'on', ...}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.8/http/client.py:1302:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d33d040>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.8/http/client.py:1251:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d33d040>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.8/http/client.py:1011:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d33d040>
data = b'GET /?query=SELECT%20currentUser%28%29 HTTP/1.1\r\nAccept-Encoding: identity\r\nHost: 172.16.11.2:8443\r\nUser-Agent: Python-urllib/3.8\r\nX-Clickhouse-User: john\r\nX-Clickhouse-Ssl-Certificate-Auth: on\r\nConnection: close\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.8/http/client.py:951:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d33d040>
def connect(self):
"Connect to a host on a given (SSL) port."
super().connect()
if self._tunnel_host:
server_hostname = self._tunnel_host
else:
server_hostname = self.host
> self.sock = self._context.wrap_socket(self.sock,
server_hostname=server_hostname)
/usr/lib/python3.8/http/client.py:1425:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d2c7040>
sock = <socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>
args = (), kwargs = {'server_hostname': 'integration-tests.clickhouse.com'}
def wrap_socket(self, sock, *args, **kwargs):
kwargs["server_hostname"] = self._server_hostname
> return super().wrap_socket(sock, *args, **kwargs)
helpers/ssl_context.py:12:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d2c7040>
sock = <socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>
server_side = False, do_handshake_on_connect = True, suppress_ragged_eofs = True
server_hostname = 'integration-tests.clickhouse.com', session = None
def wrap_socket(self, sock, server_side=False,
do_handshake_on_connect=True,
suppress_ragged_eofs=True,
server_hostname=None, session=None):
# SSLSocket class handles server_hostname encoding before it calls
# ctx._wrap_socket()
> return self.sslsocket_class._create(
sock=sock,
server_side=server_side,
do_handshake_on_connect=do_handshake_on_connect,
suppress_ragged_eofs=suppress_ragged_eofs,
server_hostname=server_hostname,
context=self,
session=session
)
/usr/lib/python3.8/ssl.py:500:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'ssl.SSLSocket'>
sock = <socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>
server_side = False, do_handshake_on_connect = True, suppress_ragged_eofs = True
server_hostname = 'integration-tests.clickhouse.com'
context = <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d2c7040>
session = None
@classmethod
def _create(cls, sock, server_side=False, do_handshake_on_connect=True,
suppress_ragged_eofs=True, server_hostname=None,
context=None, session=None):
if sock.getsockopt(SOL_SOCKET, SO_TYPE) != SOCK_STREAM:
raise NotImplementedError("only stream sockets are supported")
if server_side:
if server_hostname:
raise ValueError("server_hostname can only be specified "
"in client mode")
if session is not None:
raise ValueError("session can only be specified in "
"client mode")
if context.check_hostname and not server_hostname:
raise ValueError("check_hostname requires server_hostname")
kwargs = dict(
family=sock.family, type=sock.type, proto=sock.proto,
fileno=sock.fileno()
)
self = cls.__new__(cls, **kwargs)
super(SSLSocket, self).__init__(**kwargs)
self.settimeout(sock.gettimeout())
sock.detach()
self._context = context
self._session = session
self._closed = False
self._sslobj = None
self.server_side = server_side
self.server_hostname = context._encode_hostname(server_hostname)
self.do_handshake_on_connect = do_handshake_on_connect
self.suppress_ragged_eofs = suppress_ragged_eofs
# See if we are connected
try:
self.getpeername()
except OSError as e:
if e.errno != errno.ENOTCONN:
raise
connected = False
else:
connected = True
self._connected = connected
if connected:
# create the SSL object
try:
self._sslobj = self._context._wrap_socket(
self, server_side, self.server_hostname,
owner=self, session=self._session,
)
if do_handshake_on_connect:
timeout = self.gettimeout()
if timeout == 0.0:
# non-blocking
raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets")
> self.do_handshake()
/usr/lib/python3.8/ssl.py:1040:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>
block = False
@_sslcopydoc
def do_handshake(self, block=False):
self._check_connected()
timeout = self.gettimeout()
try:
if timeout == 0.0 and block:
self.settimeout(None)
> self._sslobj.do_handshake()
E ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1131)
/usr/lib/python3.8/ssl.py:1309: SSLEOFError
During handling of the above exception, another exception occurred:
def test_https():
> assert (
execute_query_https("SELECT currentUser()", user="john", cert_name="client1")
== "john\n"
)
test_tlsv1_3/test.py:71:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_tlsv1_3/test.py:64: in execute_query_https
response = urllib.request.urlopen(
/usr/lib/python3.8/urllib/request.py:222: in urlopen
return opener.open(url, data, timeout)
/usr/lib/python3.8/urllib/request.py:525: in open
response = self._open(req, data)
/usr/lib/python3.8/urllib/request.py:542: in _open
result = self._call_chain(self.handle_open, protocol, protocol +
/usr/lib/python3.8/urllib/request.py:502: in _call_chain
result = func(*args)
/usr/lib/python3.8/urllib/request.py:1397: in https_open
return self.do_open(http.client.HTTPSConnection, req,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <urllib.request.HTTPSHandler object at 0x7f219d300dc0>
http_class = <class 'http.client.HTTPSConnection'>
req = <urllib.request.Request object at 0x7f219d300d60>
http_conn_args = {'check_hostname': None, 'context': <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d2c7040>}
host = '172.16.11.2:8443'
h = <http.client.HTTPSConnection object at 0x7f219d33d040>
def do_open(self, http_class, req, **http_conn_args):
"""Return an HTTPResponse object for the request, using http_class.
http_class must implement the HTTPConnection API from http.client.
"""
host = req.host
if not host:
raise URLError('no host given')
# will parse host:port
h = http_class(host, timeout=req.timeout, **http_conn_args)
h.set_debuglevel(self._debuglevel)
headers = dict(req.unredirected_hdrs)
headers.update({k: v for k, v in req.headers.items()
if k not in headers})
# TODO(jhylton): Should this be redesigned to handle
# persistent connections?
# We want to make an HTTP/1.1 request, but the addinfourl
# class isn't prepared to deal with a persistent connection.
# It will try to read all remaining data from the socket,
# which will block while the server waits for the next request.
# So make sure the connection gets closed after the (only)
# request.
headers["Connection"] = "close"
headers = {name.title(): val for name, val in headers.items()}
if req._tunnel_host:
tunnel_headers = {}
proxy_auth_hdr = "Proxy-Authorization"
if proxy_auth_hdr in headers:
tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr]
# Proxy-Authorization should not be sent to origin
# server.
del headers[proxy_auth_hdr]
h.set_tunnel(req._tunnel_host, headers=tunnel_headers)
try:
try:
h.request(req.get_method(), req.selector, req.data, headers,
encode_chunked=req.has_header('Transfer-encoding'))
except OSError as err: # timeout error
> raise URLError(err)
E urllib.error.URLError: <urlopen error EOF occurred in violation of protocol (_ssl.c:1131)>
/usr/lib/python3.8/urllib/request.py:1357: URLError
Comment: FIPS mode only supports TLSv1.2
Status: FAIL (OK to fail)
Test: /integration/test_tlsv1_3/test.py::test_https_non_ssl_auth
Reason:
___________________________ test_https_non_ssl_auth ____________________________
[gw2] linux -- Python 3.8.10 /usr/bin/python3
self = <urllib.request.HTTPSHandler object at 0x7f219d3cd760>
http_class = <class 'http.client.HTTPSConnection'>
req = <urllib.request.Request object at 0x7f219d3cd2b0>
http_conn_args = {'check_hostname': None, 'context': <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d2a87c0>}
host = '172.16.11.2:8443'
h = <http.client.HTTPSConnection object at 0x7f219d3c21c0>
def do_open(self, http_class, req, **http_conn_args):
"""Return an HTTPResponse object for the request, using http_class.
http_class must implement the HTTPConnection API from http.client.
"""
host = req.host
if not host:
raise URLError('no host given')
# will parse host:port
h = http_class(host, timeout=req.timeout, **http_conn_args)
h.set_debuglevel(self._debuglevel)
headers = dict(req.unredirected_hdrs)
headers.update({k: v for k, v in req.headers.items()
if k not in headers})
# TODO(jhylton): Should this be redesigned to handle
# persistent connections?
# We want to make an HTTP/1.1 request, but the addinfourl
# class isn't prepared to deal with a persistent connection.
# It will try to read all remaining data from the socket,
# which will block while the server waits for the next request.
# So make sure the connection gets closed after the (only)
# request.
headers["Connection"] = "close"
headers = {name.title(): val for name, val in headers.items()}
if req._tunnel_host:
tunnel_headers = {}
proxy_auth_hdr = "Proxy-Authorization"
if proxy_auth_hdr in headers:
tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr]
# Proxy-Authorization should not be sent to origin
# server.
del headers[proxy_auth_hdr]
h.set_tunnel(req._tunnel_host, headers=tunnel_headers)
try:
try:
> h.request(req.get_method(), req.selector, req.data, headers,
encode_chunked=req.has_header('Transfer-encoding'))
/usr/lib/python3.8/urllib/request.py:1354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d3c21c0>, method = 'GET'
url = '/?query=SELECT%20currentUser%28%29', body = None
headers = {'Connection': 'close', 'Host': '172.16.11.2:8443', 'User-Agent': 'Python-urllib/3.8', 'X-Clickhouse-Key': 'qwe123', ...}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.8/http/client.py:1256:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d3c21c0>, method = 'GET'
url = '/?query=SELECT%20currentUser%28%29', body = None
headers = {'Connection': 'close', 'Host': '172.16.11.2:8443', 'User-Agent': 'Python-urllib/3.8', 'X-Clickhouse-Key': 'qwe123', ...}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.8/http/client.py:1302:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d3c21c0>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.8/http/client.py:1251:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d3c21c0>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.8/http/client.py:1011:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d3c21c0>
data = b'GET /?query=SELECT%20currentUser%28%29 HTTP/1.1\r\nAccept-Encoding: identity\r\nHost: 172.16.11.2:8443\r\nUser-Agent: Python-urllib/3.8\r\nX-Clickhouse-User: jane\r\nX-Clickhouse-Key: qwe123\r\nConnection: close\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.8/http/client.py:951:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <http.client.HTTPSConnection object at 0x7f219d3c21c0>
def connect(self):
"Connect to a host on a given (SSL) port."
super().connect()
if self._tunnel_host:
server_hostname = self._tunnel_host
else:
server_hostname = self.host
> self.sock = self._context.wrap_socket(self.sock,
server_hostname=server_hostname)
/usr/lib/python3.8/http/client.py:1425:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d2a87c0>
sock = <socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>
args = (), kwargs = {'server_hostname': 'integration-tests.clickhouse.com'}
def wrap_socket(self, sock, *args, **kwargs):
kwargs["server_hostname"] = self._server_hostname
> return super().wrap_socket(sock, *args, **kwargs)
helpers/ssl_context.py:12:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d2a87c0>
sock = <socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>
server_side = False, do_handshake_on_connect = True, suppress_ragged_eofs = True
server_hostname = 'integration-tests.clickhouse.com', session = None
def wrap_socket(self, sock, server_side=False,
do_handshake_on_connect=True,
suppress_ragged_eofs=True,
server_hostname=None, session=None):
# SSLSocket class handles server_hostname encoding before it calls
# ctx._wrap_socket()
> return self.sslsocket_class._create(
sock=sock,
server_side=server_side,
do_handshake_on_connect=do_handshake_on_connect,
suppress_ragged_eofs=suppress_ragged_eofs,
server_hostname=server_hostname,
context=self,
session=session
)
/usr/lib/python3.8/ssl.py:500:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'ssl.SSLSocket'>
sock = <socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>
server_side = False, do_handshake_on_connect = True, suppress_ragged_eofs = True
server_hostname = 'integration-tests.clickhouse.com'
context = <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d2a87c0>
session = None
@classmethod
def _create(cls, sock, server_side=False, do_handshake_on_connect=True,
suppress_ragged_eofs=True, server_hostname=None,
context=None, session=None):
if sock.getsockopt(SOL_SOCKET, SO_TYPE) != SOCK_STREAM:
raise NotImplementedError("only stream sockets are supported")
if server_side:
if server_hostname:
raise ValueError("server_hostname can only be specified "
"in client mode")
if session is not None:
raise ValueError("session can only be specified in "
"client mode")
if context.check_hostname and not server_hostname:
raise ValueError("check_hostname requires server_hostname")
kwargs = dict(
family=sock.family, type=sock.type, proto=sock.proto,
fileno=sock.fileno()
)
self = cls.__new__(cls, **kwargs)
super(SSLSocket, self).__init__(**kwargs)
self.settimeout(sock.gettimeout())
sock.detach()
self._context = context
self._session = session
self._closed = False
self._sslobj = None
self.server_side = server_side
self.server_hostname = context._encode_hostname(server_hostname)
self.do_handshake_on_connect = do_handshake_on_connect
self.suppress_ragged_eofs = suppress_ragged_eofs
# See if we are connected
try:
self.getpeername()
except OSError as e:
if e.errno != errno.ENOTCONN:
raise
connected = False
else:
connected = True
self._connected = connected
if connected:
# create the SSL object
try:
self._sslobj = self._context._wrap_socket(
self, server_side, self.server_hostname,
owner=self, session=self._session,
)
if do_handshake_on_connect:
timeout = self.gettimeout()
if timeout == 0.0:
# non-blocking
raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets")
> self.do_handshake()
/usr/lib/python3.8/ssl.py:1040:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>
block = False
@_sslcopydoc
def do_handshake(self, block=False):
self._check_connected()
timeout = self.gettimeout()
try:
if timeout == 0.0 and block:
self.settimeout(None)
> self._sslobj.do_handshake()
E ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1131)
/usr/lib/python3.8/ssl.py:1309: SSLEOFError
During handling of the above exception, another exception occurred:
def test_https_non_ssl_auth():
# Users with non-SSL authentication are allowed, in this case we can skip sending a client certificate at all (because "verificationMode" is set to "relaxed").
# assert execute_query_https("SELECT currentUser()", user="peter", enable_ssl_auth=False) == "peter\n"
> assert (
execute_query_https(
"SELECT currentUser()",
user="jane",
enable_ssl_auth=False,
password="qwe123",
)
== "jane\n"
)
test_tlsv1_3/test.py:114:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_tlsv1_3/test.py:64: in execute_query_https
response = urllib.request.urlopen(
/usr/lib/python3.8/urllib/request.py:222: in urlopen
return opener.open(url, data, timeout)
/usr/lib/python3.8/urllib/request.py:525: in open
response = self._open(req, data)
/usr/lib/python3.8/urllib/request.py:542: in _open
result = self._call_chain(self.handle_open, protocol, protocol +
/usr/lib/python3.8/urllib/request.py:502: in _call_chain
result = func(*args)
/usr/lib/python3.8/urllib/request.py:1397: in https_open
return self.do_open(http.client.HTTPSConnection, req,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <urllib.request.HTTPSHandler object at 0x7f219d3cd760>
http_class = <class 'http.client.HTTPSConnection'>
req = <urllib.request.Request object at 0x7f219d3cd2b0>
http_conn_args = {'check_hostname': None, 'context': <helpers.ssl_context.WrapSSLContextWithSNI object at 0x7f219d2a87c0>}
host = '172.16.11.2:8443'
h = <http.client.HTTPSConnection object at 0x7f219d3c21c0>
def do_open(self, http_class, req, **http_conn_args):
"""Return an HTTPResponse object for the request, using http_class.
http_class must implement the HTTPConnection API from http.client.
"""
host = req.host
if not host:
raise URLError('no host given')
# will parse host:port
h = http_class(host, timeout=req.timeout, **http_conn_args)
h.set_debuglevel(self._debuglevel)
headers = dict(req.unredirected_hdrs)
headers.update({k: v for k, v in req.headers.items()
if k not in headers})
# TODO(jhylton): Should this be redesigned to handle
# persistent connections?
# We want to make an HTTP/1.1 request, but the addinfourl
# class isn't prepared to deal with a persistent connection.
# It will try to read all remaining data from the socket,
# which will block while the server waits for the next request.
# So make sure the connection gets closed after the (only)
# request.
headers["Connection"] = "close"
headers = {name.title(): val for name, val in headers.items()}
if req._tunnel_host:
tunnel_headers = {}
proxy_auth_hdr = "Proxy-Authorization"
if proxy_auth_hdr in headers:
tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr]
# Proxy-Authorization should not be sent to origin
# server.
del headers[proxy_auth_hdr]
h.set_tunnel(req._tunnel_host, headers=tunnel_headers)
try:
try:
h.request(req.get_method(), req.selector, req.data, headers,
encode_chunked=req.has_header('Transfer-encoding'))
except OSError as err: # timeout error
> raise URLError(err)
E urllib.error.URLError: <urlopen error EOF occurred in violation of protocol (_ssl.c:1131)>
/usr/lib/python3.8/urllib/request.py:1357: URLError
Comment: FIPS mode only supports TLSv1.2
Status: FAIL (OK to fail)
Test: /integration/test_tlsv1_3/test.py::test_https_wrong_cert
Reason:
____________________________ test_https_wrong_cert _____________________________
[gw2] linux -- Python 3.8.10 /usr/bin/python3
def test_https_wrong_cert():
# Wrong certificate: different user's certificate
with pytest.raises(Exception) as err:
execute_query_https("SELECT currentUser()", user="john", cert_name="client2")
> assert "HTTP Error 403" in str(err.value)
E AssertionError: assert 'HTTP Error 403' in '<urlopen error EOF occurred in violation of protocol (_ssl.c:1131)>'
E + where '<urlopen error EOF occurred in violation of protocol (_ssl.c:1131)>' = str(URLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1131)')))
E + where URLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1131)')) = <ExceptionInfo URLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1131)')) tblen=8>.value
test_tlsv1_3/test.py:89: AssertionError
Comment: FIPS mode only supports TLSv1.2
Status: FAIL (OK to fail)
Passed:
- Aes Encryption
- Aggregate Functions
- Atomic Insert
- Base58
- Benchmark Minio
- ClickHouse Keeper
- DateTime64 Extended Range
- Disk Level Encryption
- DNS
- Example
- Extended Precision Data Types
- Kafka
- Kerberos
- LDAP Authentication
- LDAP External User Directory
- LDAP Role Mapping
- Lightweight Delete
- Map Type
- Parquet
- Part Moves Between Shards
- RBAC
- Selects
- SSL Server
- S3 Minio
- S3 AWS
- S3 GCS
- Tiered Storage
- Tiered Storage AWS
- Tiered Storage Minio
- Window Functions
Failed:
- Benchmark AWS
  Reason: timeout when dropping a table during cleanup.
- Benchmark GCS
  Reason: timeout when dropping a table during cleanup.
- Tiered Storage GCS
  Reason: timeout during parallel runs.