Parquet Test Run Report

Date Jul 22, 2024 9:27
Duration 41m 41s
Framework TestFlows 2.0.240705.1133204

Artifacts

Test artifacts can be found at https://altinity-build-artifacts.s3.amazonaws.com/index.html#0/b6642564dbf296d6e55b907d7af6c8087470b672/regression/

Attributes

project Altinity/ClickHouse
project.id 159717931
package https://s3.amazonaws.com/altinity-build-artifacts/24.3/b6642564dbf296d6e55b907d7af6c8087470b672/package_release/clickhouse-common-static_24.3.5.47.altinitystable_amd64.deb
version 24.3.5.47.altinitystable
user.name Enmk
repository https://github.com/Altinity/clickhouse-regression
commit.hash c5e1513a2214ee33696c29717935e0a94989ac2a
job.id 10037417827
job.url https://github.com/Altinity/ClickHouse/actions/runs/10037417827
arch x86_64
local True
clickhouse_version None
clickhouse_binary_path https://s3.amazonaws.com/altinity-build-artifacts/24.3/b6642564dbf296d6e55b907d7af6c8087470b672/package_release/clickhouse-common-static_24.3.5.47.altinitystable_amd64.deb
keeper_binary_path None
zookeeper_version None
use_keeper False
stress False
collect_service_logs True
thread_fuzzer False
with_analyzer False
reuse_env False
storages None
minio_uri http://minio1:9001
minio_root_user minio
minio_root_password minio123
aws_s3_bucket None
aws_s3_region Secret(name='aws_s3_region')
aws_s3_key_id Secret(name='aws_s3_key_id')
aws_s3_access_key Secret(name='aws_s3_access_key')
gcs_uri None
gcs_key_id None
gcs_key_secret None

Summary

100% OK
<1% Known

Statistics

             Units    Skip      OK    Fail   XFail
Modules          1               1
Suites           2               2
Features        40       2      37               1
Scenarios      218      13     201               4
Checks       59682           59682
Examples        12              12
Steps       379940      13  377731      13    2183

Known Fails

Test Name Result Message
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine XFail 16s 731ms
This fails because of a difference in snapshot values. The stored snapshot captured the datetime value `0` converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00, but the issue cannot be reproduced when the steps are repeated manually.
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 512, in execute_query_step
    execute_query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 582, in execute_query
    assert that(snapshot_result), error()
           ^^^^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert that(snapshot_result), error()

Assertion values
  assert that(snapshot_result), error()
         ^ is = SnapshotError(
    filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
    name=_parquet_postgresql_compression_type__NONE__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime
    snapshot_value="""

        {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
        {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
        {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"}
        {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"}
        {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"}
        {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"}
        {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"}
    """,
    actual_value="""

        {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
        {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
        {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"}
        {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"}
        {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"}
        {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"}
        {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"}
    """,
    diff="""
        --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
        +++ 
        @@ -1,6 +1,6 @@

         {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"}
        +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
         {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
         {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
         {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
    """)
  assert that(snapshot_result), error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 582 in 'execute_query'

574|                  with values() as that:
575|                      snapshot_result = snapshot(
576|                          "\n" + r.output.strip() + "\n",
577|                          id=snapshot_id,
578|                          name=snapshot_name,
579|                          encoder=str,
580|                          mode=snapshot.CHECK,
581|                      )
582|>                     assert that(snapshot_result), error()
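
For context on the values above (an illustration, not part of the test output): 2106-02-07 06:28:16 UTC is exactly 2^32 seconds past the Unix epoch, so the stale snapshot value looks like a 32-bit wraparound of the datetime value 0, while 1970-01-01 01:00:00 is what value 0 renders as in a UTC+1 server timezone (the offset implied by the expected snapshot). A minimal Python check of that arithmetic, assuming only the standard library:

    from datetime import datetime, timedelta, timezone

    # Value 0 advanced by 2**32 seconds: matches the anomalous snapshot value.
    wrapped = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=2**32)
    print(wrapped)  # 2106-02-07 06:28:16+00:00

    # Value 0 rendered in a UTC+1 timezone: matches the expected value.
    expected = datetime.fromtimestamp(0, tz=timezone(timedelta(hours=1)))
    print(expected)  # 1970-01-01 01:00:00+01:00

The same explanation applies to the =GZIP and =LZ4 variants of this test below.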
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine XFail 16s 833ms
This fails because of a difference in snapshot values. The stored snapshot captured the datetime value `0` converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00, but the issue cannot be reproduced when the steps are repeated manually.
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 512, in execute_query_step
    execute_query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 582, in execute_query
    assert that(snapshot_result), error()
           ^^^^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert that(snapshot_result), error()

Assertion values
  assert that(snapshot_result), error()
         ^ is = SnapshotError(
    filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
    name=_parquet_postgresql_compression_type__GZIP__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime
    snapshot_value="""

        {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
        {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
        {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"}
        {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"}
        {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"}
        {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"}
        {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"}
    """,
    actual_value="""

        {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
        {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
        {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"}
        {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"}
        {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"}
        {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"}
        {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"}
    """,
    diff="""
        --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
        +++ 
        @@ -1,6 +1,6 @@

         {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"}
        +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
         {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
         {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
         {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
    """)
  assert that(snapshot_result), error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 582 in 'execute_query'

574|                  with values() as that:
575|                      snapshot_result = snapshot(
576|                          "\n" + r.output.strip() + "\n",
577|                          id=snapshot_id,
578|                          name=snapshot_name,
579|                          encoder=str,
580|                          mode=snapshot.CHECK,
581|                      )
582|>                     assert that(snapshot_result), error()
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine XFail 16s 930ms
This fails because of a difference in snapshot values. The stored snapshot captured the datetime value `0` converted as 2106-02-07 06:28:16 instead of the correct 1970-01-01 01:00:00, but the issue cannot be reproduced when the steps are repeated manually.
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 512, in execute_query_step
    execute_query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py", line 582, in execute_query
    assert that(snapshot_result), error()
           ^^^^^^^^^^^^^^^^^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert that(snapshot_result), error()

Assertion values
  assert that(snapshot_result), error()
         ^ is = SnapshotError(
    filename=/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
    name=_parquet_postgresql_compression_type__LZ4__postgresql_engine_to_parquet_file_to_postgresql_engine_I_check_the_data_on_the_table_datetime
    snapshot_value="""

        {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        {"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
        {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
        {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"}
        {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"}
        {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"}
        {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"}
        {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"}
    """,
    actual_value="""

        {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
        {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
        {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
        {"datetime":"2011-09-30 02:34:30","toTypeName(datetime)":"DateTime"}
        {"datetime":"2019-04-27 02:01:34","toTypeName(datetime)":"DateTime"}
        {"datetime":"2017-02-11 09:09:25","toTypeName(datetime)":"DateTime"}
        {"datetime":"2009-06-14 11:12:39","toTypeName(datetime)":"DateTime"}
        {"datetime":"2020-01-24 15:10:50","toTypeName(datetime)":"DateTime"}
    """,
    diff="""
        --- /home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/snapshots/common.py.postgresql engine to parquet file to postgresql engine.snapshot
        +++ 
        @@ -1,6 +1,6 @@

         {"datetime":"2106-02-07 06:28:15","toTypeName(datetime)":"DateTime"}
        -{"datetime":"2106-02-07 06:28:16","toTypeName(datetime)":"DateTime"}
        +{"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
         {"datetime":"1970-01-01 01:00:00","toTypeName(datetime)":"DateTime"}
         {"datetime":"2006-11-27 02:50:49","toTypeName(datetime)":"DateTime"}
         {"datetime":"2015-06-29 09:43:07","toTypeName(datetime)":"DateTime"}
    """)
  assert that(snapshot_result), error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/common.py', line 582 in 'execute_query'

574|                  with values() as that:
575|                      snapshot_result = snapshot(
576|                          "\n" + r.output.strip() + "\n",
577|                          id=snapshot_id,
578|                          name=snapshot_name,
579|                          encoder=str,
580|                          mode=snapshot.CHECK,
581|                      )
582|>                     assert that(snapshot_result), error()
/parquet/chunked array XFail 20s 28ms
Not supported
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/chunked_array.py", line 30, in feature
    node.query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1079, in query
    assert False, error(r.output)
           ^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert False, error(r.output)

Description
  Error on processing query: Code: 33. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/chunked_array_test_file.parquet): While executing ParquetBlockInputFormat: While executing File: data for INSERT was parsed from file. (CANNOT_READ_ALL_DATA) (version 24.3.5.47.altinitystable (altinity build))
(query: INSERT INTO table_70ebdfc2_480f_11ef_a008_9600038d4026 FROM INFILE '/var/lib/clickhouse/user_files/chunked_array_test_file.parquet' FORMAT Parquet)

Assertion values
  assert False, error(r.output)
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1079 in 'query'

1071|                  assert message in r.output, error(r.output)
1072|
1073|          if not ignore_exception:
1074|              if message is None or "Exception:" not in message:
1075|                  with Then("check if output has exception") if steps else NullStep():
1076|                      if "Exception:" in r.output:
1077|                          if raise_on_exception:
1078|                              raise QueryRuntimeException(r.output)
1079|>                         assert False, error(r.output)
1080|
1081|          return r
1082|
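
For context (not part of the test output): the NotImplemented error above comes from Arrow returning a nested Parquet column as a multi-chunk ChunkedArray, a case the nested-type conversion path rejects. The chunked_array_test_file.parquet used by the suite is a prebuilt artifact; the sketch below is only a hypothetical illustration, assuming pyarrow is available, of one way to write a Parquet file whose single nested Map(String, String) column is large and spread across several row groups, the shape of input that tends to come back from the reader as a multi-chunk column. It is not the generator used by the suite.

    # Hypothetical sketch: one large nested Map column written as several
    # row groups (an assumption about how to provoke a multi-chunk read,
    # not the suite's own test-file generator).
    import pyarrow as pa
    import pyarrow.parquet as pq

    schema = pa.schema([("kv", pa.map_(pa.string(), pa.string()))])
    big_value = "x" * (256 * 1024 * 1024)  # a single 256 MiB string value
    batch = pa.record_batch(
        [pa.array([[("key", big_value)]], type=schema.field("kv").type)],
        schema=schema,
    )

    with pq.ParquetWriter("chunked_array_test_file.parquet", schema) as writer:
        for _ in range(10):  # roughly 2.5 GiB of string data in the column overall
            writer.write_table(pa.Table.from_batches([batch]))

Loading such a file with INSERT ... FROM INFILE ... FORMAT Parquet, as the test does, is the path on which this build raised CANNOT_READ_ALL_DATA.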
/parquet/datatypes/large string map XFail 8s 852ms
Will fail until https://github.com/apache/arrow/pull/35825 is merged.
AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1030, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 888, in feature
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/datatypes.py", line 762, in large_string_map
    import_export(snapshot_name="large_string_map_structure", import_file=import_file)
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../parquet/tests/outline.py", line 34, in import_export
    node.query(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py", line 1079, in query
    assert False, error(r.output)
           ^^^^^
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert False, error(r.output)

Description
  Received exception from server (version 24.3.5):
Code: 33. DB::Exception: Received from localhost:9000. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: (in file/uri /var/lib/clickhouse/user_files/arrow/large_string_map.brotli.parquet): While executing ParquetBlockInputFormat: While executing File. (CANNOT_READ_ALL_DATA)
(query: CREATE TABLE table_b90b90a8_480f_11ef_96b9_9600038d4026
            ENGINE = MergeTree
            ORDER BY tuple() AS SELECT * FROM file('arrow/large_string_map.brotli.parquet', Parquet) LIMIT 100 FORMAT TabSeparated
            )

Assertion values
  assert False, error(r.output)
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/parquet/../helpers/cluster.py', line 1079 in 'query'

1071|                  assert message in r.output, error(r.output)
1072|
1073|          if not ignore_exception:
1074|              if message is None or "Exception:" not in message:
1075|                  with Then("check if output has exception") if steps else NullStep():
1076|                      if "Exception:" in r.output:
1077|                          if raise_on_exception:
1078|                              raise QueryRuntimeException(r.output)
1079|>                         assert False, error(r.output)
1080|
1081|          return r
1082|

Results

Test Name Result Duration
/parquet OK 41m 41s
/parquet/file OK 23m 50s
/parquet/file/engine OK 23m 50s
/parquet/file/function OK 11m 46s
/parquet/file/engine/insert into engine OK 14m 55s
/parquet/file/engine/select from engine OK 6m 47s
/parquet/query OK 31m 43s
/parquet/query/compression type OK 31m 43s
/parquet/file/function/insert into function manual cast types OK 11m 25s
/parquet/query/compression type/=NONE OK 31m 40s
/parquet/query/compression type/=NONE /insert into memory table from file OK 6m 36s
/parquet/query/compression type/=GZIP OK 31m 42s
/parquet/query/compression type/=GZIP /insert into memory table from file OK 6m 37s
/parquet/query/compression type/=LZ4 OK 31m 43s
/parquet/file/engine/engine to file to engine OK 20m 27s
/parquet/file/function/insert into function auto cast types OK 11m 46s
/parquet/list in multiple chunks OK 27s 685ms
/parquet/file/function/select from function manual cast types OK 7m 24s
/parquet/file/engine/insert into engine from file OK 14m 25s
/parquet/file/function/select from function auto cast types OK 6m 46s
/parquet/file/engine/engine select output to file OK 23m 50s
/parquet/url OK 24m 52s
/parquet/query/compression type/=LZ4 /insert into memory table from file OK 6m 37s
/parquet/url/engine OK 24m 10s
/parquet/url/engine/insert into engine OK 15m 18s
/parquet/url/function OK 12m 34s
/parquet/url/engine/select from engine OK 6m 49s
/parquet/url/function/insert into function OK 11m 22s
/parquet/url/engine/engine to file to engine OK 20m 40s
/parquet/url/engine/insert into engine from file OK 19m 52s
/parquet/url/function/select from function manual cast types OK 12m 33s
/parquet/url/engine/engine select output to file OK 24m 10s
/parquet/url/function/select from function auto cast types OK 11m 10s
/parquet/mysql OK 44s 171ms
/parquet/mysql/compression type OK 44s 96ms
/parquet/mysql/compression type/=NONE OK 44s 39ms
/parquet/mysql/compression type/=NONE /mysql engine to parquet file to mysql engine OK 26s 212ms
/parquet/mysql/compression type/=GZIP OK 43s 134ms
/parquet/mysql/compression type/=GZIP /mysql engine to parquet file to mysql engine OK 25s 57ms
/parquet/mysql/compression type/=LZ4 OK 43s 971ms
/parquet/mysql/compression type/=LZ4 /mysql engine to parquet file to mysql engine OK 25s 947ms
/parquet/mysql/compression type/=GZIP /mysql function to parquet file to mysql function OK 18s 14ms
/parquet/mysql/compression type/=LZ4 /mysql function to parquet file to mysql function OK 17s 965ms
/parquet/mysql/compression type/=NONE /mysql function to parquet file to mysql function OK 17s 795ms
/parquet/postgresql OK 36s 482ms
/parquet/postgresql/compression type OK 36s 402ms
/parquet/postgresql/compression type/=NONE OK 36s 333ms
/parquet/postgresql/compression type/=GZIP OK 36s 325ms
/parquet/postgresql/compression type/=LZ4 OK 36s 39ms
/parquet/postgresql/compression type/=NONE /postgresql engine to parquet file to postgresql engine XFail 16s 731ms
/parquet/postgresql/compression type/=GZIP /postgresql engine to parquet file to postgresql engine XFail 16s 833ms
/parquet/postgresql/compression type/=LZ4 /postgresql engine to parquet file to postgresql engine XFail 16s 930ms
/parquet/postgresql/compression type/=NONE /postgresql function to parquet file to postgresql function OK 19s 402ms
/parquet/postgresql/compression type/=GZIP /postgresql function to parquet file to postgresql function OK 19s 292ms
/parquet/postgresql/compression type/=LZ4 /postgresql function to parquet file to postgresql function OK 18s 935ms
/parquet/remote OK 15m 47s
/parquet/remote/compression type OK 15m 47s
/parquet/remote/compression type/=NONE OK 15m 47s
/parquet/remote/compression type/=GZIP OK 15m 46s
/parquet/remote/compression type/=LZ4 OK 15m 46s
/parquet/remote/compression type/=NONE /outline OK 15m 47s
/parquet/remote/compression type/=LZ4 /outline OK 15m 46s
/parquet/remote/compression type/=GZIP /outline OK 15m 46s
/parquet/remote/compression type/=NONE /outline/insert into function OK 6m 7s
/parquet/remote/compression type/=LZ4 /outline/insert into function OK 6m 6s
/parquet/remote/compression type/=GZIP /outline/insert into function OK 6m 5s
/parquet/query/compression type/=NONE /insert into mergetree table from file OK 4m 22s
/parquet/query/compression type/=GZIP /insert into mergetree table from file OK 4m 24s
/parquet/query/compression type/=LZ4 /insert into mergetree table from file OK 4m 24s
/parquet/remote/compression type/=GZIP /outline/select from function OK 9m 41s
/parquet/remote/compression type/=LZ4 /outline/select from function OK 9m 40s
/parquet/remote/compression type/=NONE /outline/select from function OK 9m 40s
/parquet/query/compression type/=NONE /insert into replicated mergetree table from file OK 3m 13s
/parquet/query/compression type/=GZIP /insert into replicated mergetree table from file OK 3m 15s
/parquet/query/compression type/=LZ4 /insert into replicated mergetree table from file OK 3m 15s
/parquet/query/compression type/=NONE /insert into distributed table from file OK 2m 31s
/parquet/query/compression type/=LZ4 /insert into distributed table from file OK 2m 29s
/parquet/query/compression type/=GZIP /insert into distributed table from file OK 2m 29s
/parquet/query/compression type/=NONE /select from memory table into file OK 4m 14s
/parquet/query/compression type/=LZ4 /select from memory table into file OK 4m 13s
/parquet/query/compression type/=GZIP /select from memory table into file OK 4m 13s
/parquet/chunked array XFail 20s 28ms
/parquet/broken OK 364ms
/parquet/broken/file Skip 19ms
/parquet/broken/read broken bigint Skip 28ms
/parquet/broken/read broken date Skip 10ms
/parquet/broken/read broken int Skip 17ms
/parquet/broken/read broken smallint Skip 51ms
/parquet/broken/read broken timestamp ms Skip 6ms
/parquet/broken/read broken timestamp us Skip 12ms
/parquet/broken/read broken tinyint Skip 12ms
/parquet/broken/read broken ubigint Skip 26ms
/parquet/broken/read broken uint Skip 14ms
/parquet/broken/read broken usmallint Skip 10ms
/parquet/broken/read broken utinyint Skip 8ms
/parquet/broken/string Skip 47ms
/parquet/encoding OK 15s 544ms
/parquet/encoding/deltabytearray1 OK 2s 787ms
/parquet/encoding/deltabytearray2 OK 1s 658ms
/parquet/encoding/deltalengthbytearray OK 1s 733ms
/parquet/encoding/dictionary OK 1s 942ms
/parquet/encoding/plain OK 1s 974ms
/parquet/encoding/plainrlesnappy OK 3s 793ms
/parquet/encoding/rleboolean OK 1s 592ms
/parquet/compression OK 36s 884ms
/parquet/compression/arrow snappy OK 1s 725ms
/parquet/compression/brotli OK 1s 600ms
/parquet/compression/gzippages OK 3s 214ms
/parquet/compression/largegzip OK 2s 5ms
/parquet/compression/lz4 hadoop OK 1s 687ms
/parquet/compression/lz4 hadoop large OK 1s 681ms
/parquet/compression/lz4 non hadoop OK 1s 716ms
/parquet/compression/lz4 raw OK 1s 856ms
/parquet/compression/lz4 raw large OK 1s 636ms
/parquet/compression/lz4pages OK 3s 442ms
/parquet/compression/nonepages OK 3s 608ms
/parquet/compression/snappypages OK 3s 351ms
/parquet/compression/snappyplain OK 2s 106ms
/parquet/compression/snappyrle OK 1s 681ms
/parquet/compression/zstd OK 1s 795ms
/parquet/compression/zstdpages OK 3s 663ms
/parquet/datatypes OK 2m 22s
/parquet/datatypes/arrowtimestamp OK 1s 766ms
/parquet/datatypes/arrowtimestampms OK 1s 863ms
/parquet/datatypes/binary OK 1s 622ms
/parquet/datatypes/binary string OK 1s 863ms
/parquet/datatypes/blob OK 1s 630ms
/parquet/datatypes/boolean OK 1s 820ms
/parquet/datatypes/byte array OK 1s 556ms
/parquet/datatypes/columnname OK 1s 899ms
/parquet/datatypes/columnwithnull OK 1s 504ms
/parquet/datatypes/columnwithnull2 OK 1s 765ms
/parquet/datatypes/date OK 1s 535ms
/parquet/datatypes/decimal with filter OK 2s 28ms
/parquet/datatypes/decimalvariousfilters OK 1s 868ms
/parquet/datatypes/decimalwithfilter2 OK 1s 530ms
/parquet/datatypes/enum OK 2s 90ms
/parquet/datatypes/enum2 OK 1s 725ms
/parquet/datatypes/fixed length decimal OK 1s 478ms
/parquet/datatypes/fixed length decimal legacy OK 1s 860ms
/parquet/datatypes/fixedstring OK 1s 681ms
/parquet/datatypes/h2oai OK 2s 47ms
/parquet/datatypes/hive OK 3s 575ms
/parquet/datatypes/int32 OK 1s 725ms
/parquet/datatypes/int32 decimal OK 1s 756ms
/parquet/datatypes/int64 OK 1s 678ms
/parquet/datatypes/int64 decimal OK 1s 727ms
/parquet/datatypes/json OK 2s 260ms
/parquet/datatypes/large string map XFail 8s 852ms
/parquet/datatypes/largedouble OK 2s 403ms
/parquet/datatypes/manydatatypes OK 1s 804ms
/parquet/datatypes/manydatatypes2 OK 2s 278ms
/parquet/datatypes/maps OK 1s 631ms
/parquet/datatypes/nameswithemoji OK 1s 905ms
/parquet/datatypes/nandouble OK 1s 631ms
/parquet/datatypes/negativeint64 OK 2s 958ms
/parquet/datatypes/nullbyte OK 1s 604ms
/parquet/datatypes/nullbytemultiple OK 1s 823ms
/parquet/datatypes/nullsinid OK 1s 813ms
/parquet/datatypes/pandasdecimal OK 1s 665ms
/parquet/datatypes/pandasdecimaldate OK 1s 746ms
/parquet/datatypes/parquetgo OK 1s 564ms
/parquet/datatypes/selectdatewithfilter OK 34s 622ms
/parquet/datatypes/singlenull OK 1s 312ms
/parquet/datatypes/sparkv21 OK 2s 231ms
/parquet/datatypes/sparkv22 OK 1s 259ms
/parquet/datatypes/statdecimal OK 1s 286ms
/parquet/datatypes/string OK 1s 216ms
/parquet/datatypes/string int list inconsistent offset multiple batches OK 7s 33ms
/parquet/query/compression type/=NONE /select from mergetree table into file OK 2m 34s
/parquet/datatypes/stringtypes OK 1s 351ms
/parquet/query/compression type/=GZIP /select from mergetree table into file OK 2m 33s
/parquet/query/compression type/=LZ4 /select from mergetree table into file OK 2m 35s
/parquet/datatypes/struct OK 1s 576ms
/parquet/datatypes/supporteduuid OK 1s 146ms
/parquet/datatypes/timestamp1 OK 1s 76ms
/parquet/datatypes/timestamp2 OK 1s 916ms
/parquet/datatypes/timezone OK 2s 169ms
/parquet/datatypes/unsigned OK 2s 411ms
/parquet/datatypes/unsupportednull OK 310ms
/parquet/complex OK 27s 522ms
/parquet/complex/arraystring OK 1s 562ms
/parquet/complex/big tuple with nulls OK 1s 315ms
/parquet/complex/bytearraydictionary OK 1s 300ms
/parquet/complex/complex null OK 1s 282ms
/parquet/complex/lagemap OK 1s 405ms
/parquet/complex/largenestedarray OK 1s 293ms
/parquet/complex/largestruct OK 1s 210ms
/parquet/complex/largestruct2 OK 1s 520ms
/parquet/complex/largestruct3 OK 1s 557ms
/parquet/complex/list OK 1s 150ms
/parquet/complex/nested array OK 1s 66ms
/parquet/complex/nested map OK 1s 395ms
/parquet/complex/nestedallcomplex OK 1s 756ms
/parquet/complex/nestedarray2 OK 1s 564ms
/parquet/complex/nestedstruct OK 1s 211ms
/parquet/complex/nestedstruct2 OK 1s 55ms
/parquet/complex/nestedstruct3 OK 1s 420ms
/parquet/complex/nestedstruct4 OK 1s 387ms
/parquet/complex/tupleofnulls OK 1s 853ms
/parquet/complex/tuplewithdatetime OK 1s 160ms
/parquet/indexing OK 2s 604ms
/parquet/indexing/bigtuplewithnulls OK 1s 155ms
/parquet/indexing/bloom filter OK 1s 433ms
/parquet/cache OK 2s 510ms
/parquet/cache/cache1 OK 1s 219ms
/parquet/cache/cache2 OK 1s 281ms
/parquet/glob OK 37s 980ms
/parquet/glob/fastparquet globs OK 2s 148ms
/parquet/glob/glob1 OK 1s 683ms
/parquet/glob/glob2 OK 2s 67ms
/parquet/glob/glob with multiple elements OK 384ms
/parquet/glob/million extensions OK 31s 675ms
/parquet/rowgroups OK 2s 301ms
/parquet/rowgroups/manyrowgroups OK 1s 186ms
/parquet/rowgroups/manyrowgroups2 OK 1s 112ms
/parquet/encrypted Skip 1ms
/parquet/fastparquet OK 35s 532ms
/parquet/fastparquet/airlines OK 1s 277ms
/parquet/fastparquet/baz OK 1s 283ms
/parquet/fastparquet/empty date OK 3s 938ms
/parquet/fastparquet/evo OK 7s 29ms
/parquet/fastparquet/fastparquet OK 21s 997ms
/parquet/bloom Skip 1ms
/parquet/read and write OK 14m 56s
/parquet/read and write/read and write parquet file OK 14m 56s
/parquet/query/compression type/=GZIP /select from replicated mergetree table into file OK 2m 28s
/parquet/query/compression type/=NONE /select from replicated mergetree table into file OK 2m 26s
/parquet/query/compression type/=LZ4 /select from replicated mergetree table into file OK 2m 27s
/parquet/column related errors OK 1s 824ms
/parquet/column related errors/check error with 500 columns OK 1s 820ms
/parquet/query/compression type/=NONE /select from distributed table into file OK 2m 37s
/parquet/query/compression type/=GZIP /select from distributed table into file OK 2m 37s
/parquet/query/compression type/=LZ4 /select from distributed table into file OK 2m 37s
/parquet/query/compression type/=NONE /select from mat view into file OK 2m 11s
/parquet/query/compression type/=GZIP /select from mat view into file OK 2m 14s
/parquet/query/compression type/=LZ4 /select from mat view into file OK 2m 17s
/parquet/query/compression type/=NONE /insert into table with projection from file OK 51s 265ms
/parquet/query/compression type/=GZIP /insert into table with projection from file OK 48s 957ms
/parquet/query/compression type/=LZ4 /insert into table with projection from file OK 45s 872ms

Generated by TestFlows Open-Source Test Framework v2.0.240705.1133204