S3 Test Run Report

Date: Oct 05, 2023 14:19
Duration: 1h 1m
Framework: TestFlows 1.9.230315.1003122

Artifacts

Test artifacts can be found at https://altinity-test-reports.s3.amazonaws.com/index.html#clickhouse/23.3.13.7.altinitystable/6416627213/testflows/

Attributes

project: Altinity/ClickHouse
project.id: 159717931
package: https://s3.amazonaws.com/altinity-build-artifacts/23.3/f040635d5f373c5e4be08e5fbec2622a9889adde/package_aarch64/clickhouse-common-static_23.3.13.7.altinitystable_arm64.deb
version: 23.3.13.7.altinitystable
user.name: Enmk
repository: https://github.com/Altinity/clickhouse-regression
commit.hash: 19e8624c5e4ccc65b128d27b19836c0570e53991
job.id: 6416627213
job.url: https://github.com/Altinity/ClickHouse/actions/runs/6416627213
arch: aarch64
local: True
clickhouse_version: None
clickhouse_binary_path: https://s3.amazonaws.com/altinity-build-artifacts/23.3/f040635d5f373c5e4be08e5fbec2622a9889adde/package_aarch64/clickhouse-common-static_23.3.13.7.altinitystable_arm64.deb
stress: False
collect_service_logs: True
storages: ['aws_s3']
minio_uri: http://minio1:9001
minio_root_user: minio
minio_root_password: minio123
aws_s3_bucket: Secret(name='aws_s3_bucket')
aws_s3_region: Secret(name='aws_s3_region')
aws_s3_key_id: Secret(name='aws_s3_key_id')
aws_s3_access_key: Secret(name='aws_s3_access_key')
gcs_uri: Secret(name='gcs_uri')
gcs_key_id: Secret(name='gcs_key_id')
gcs_key_secret: Secret(name='gcs_key_secret')

Summary

78.4% OK
18.4% Known

Statistics

            Units   Skip     OK   Fail  XFail
Modules         1      -      1      -      -
Suites         11      3      8      -      -
Features       11      3      8      -      -
Scenarios     102      -     92      -     10
Examples       60      -     36      -     24
Steps       13229      -  13059    148     22
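
The Summary percentages follow from the Statistics table. A minimal sketch of the arithmetic, assuming the percentages are computed over all test units except Steps (the names and layout here are illustrative, not the report generator's actual code):

  # Counts taken from the Statistics table above (Steps row excluded).
  units = {"Modules": 1, "Suites": 11, "Features": 11, "Scenarios": 102, "Examples": 60}
  ok    = {"Modules": 1, "Suites": 8,  "Features": 8,  "Scenarios": 92,  "Examples": 36}
  xfail = {"Scenarios": 10, "Examples": 24}
  skip  = {"Suites": 3, "Features": 3}

  total     = sum(units.values())                 # 185
  ok_pct    = 100 * sum(ok.values()) / total      # 145 / 185 -> 78.4% OK
  known_pct = 100 * sum(xfail.values()) / total   #  34 / 185 -> 18.4% Known (XFail)
  skip_pct  = 100 * sum(skip.values()) / total    #   6 / 185 ->  3.2% Skip
  print(f"{ok_pct:.1f}% OK, {known_pct:.1f}% Known, {skip_pct:.1f}% Skip")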

Known Fails

Test Name  Result  Message
/s3/aws s3 disk/cache XFail 1ms
Under development for 22.8 and newer.
None
/s3/aws s3 disk/cache default XFail 719us
Under development for 22.8 and newer.
None
/s3/aws s3 disk/cache path XFail 919us
Under development for 22.8 and newer.
None
/s3/aws s3 disk/generic url XFail 3ms
not yet supported
Generic URL is treated as an invalid configuration; ClickHouse will not start if the config is added.
/s3/aws s3 disk/low cardinality offset XFail 42s 808ms
https://github.com/ClickHouse/ClickHouse/pull/44875
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 483, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 461, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 342, in aws_s3_regression
    Feature(test=load("s3.tests.disk", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2755, in aws_s3
    disk_tests()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2742, in disk_tests
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2711, in low_cardinality_offset
    assert output == "23999\n", error()
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert output == "23999\n", error()

Assertion values
  assert output == "23999\n", error()
         ^ is '23999'
  assert output == "23999\n", error()
                ^ is = False

  assert output == "23999\n", error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py', line 2711 in 'low_cardinality_offset'

2703\|                          "1",
2704\|                      ),
2705\|                      (
2706\|                          "merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem",
2707\|                          "1",
2708\|                      ),
2709\|                  ],
2710\|              ).output
2711\|>             assert output == "23999\n", error()
2712\|  
2713\|      finally:
2714\|          with Finally(f"I remove the table {name}"):
/s3/aws s3 invalid disk/cache path conflict XFail 1ms
Under development for 22.8 and newer.
None
/s3/aws s3 zero copy replication/add replica XFail 53s 911ms
Under investigation
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 483, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 461, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 351, in aws_s3_regression
    Feature(test=load("s3.tests.zero_copy_replication", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2148, in aws_s3
    outline()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2133, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 382, in add_replica
    assert size_after + 1 == get_bucket_size(
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()

Assertion values
  assert size_after + 1 == get_bucket_size(
         ^ is 228462742
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
                    ^ is = 228462743
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
           ^ is 'altinity-qa-test'
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
             ^ is 'data/zero-copy-replication'
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
                    ^ is False
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
                 ^ is 
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
                 ^ is = 
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
                 ^ is 
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
                 ^ is = '[masked]:Secret(name='aws_s3_access_key')'
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
             ^ is 
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
             ^ is = 
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
             ^ is 
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
             ^ is = '[masked]:Secret(name='aws_s3_key_id')'
  ), error()
  assert size_after + 1 == get_bucket_size(
                           ^ is = 228462742
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
                        ^ is = False
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
  ^ is False
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py', line 388 in 'add_replica'

380\|                        than previously because of the additional replica"""
381\|              ):
382\|                  assert size_after + 1 == get_bucket_size(
383\|                      name=bucket_name,
384\|                      prefix=bucket_path,
385\|                      minio_enabled=minio_enabled,
386\|                      access_key=self.context.secret_access_key,
387\|                      key_id=self.context.access_key_id,
388\|>                 ), error()
389\|  
390\|              with And("I check simple queries on the first node"):
391\|                  check_query_node(
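
The add replica failure above, and the delete all and insert multiple replicas failures that follow, all hinge on get_bucket_size(), which measures the total size of the objects stored under the bucket prefix before and after an operation. A minimal sketch of such a helper, assuming it sums object sizes via the S3 ListObjectsV2 API using boto3 (the actual helper in clickhouse-regression may differ in signature and behavior):

  import boto3

  def get_bucket_size(name, prefix, access_key, key_id,
                      minio_enabled=False, minio_uri=None):
      """Illustrative sketch: total size in bytes of objects under s3://<name>/<prefix>."""
      s3 = boto3.client(
          "s3",
          aws_access_key_id=key_id,
          aws_secret_access_key=access_key,
          # Point at the local MinIO endpoint when MinIO is used instead of AWS S3.
          endpoint_url=minio_uri if minio_enabled else None,
      )
      total = 0
      for page in s3.get_paginator("list_objects_v2").paginate(Bucket=name, Prefix=prefix):
          for obj in page.get("Contents", []):
              total += obj["Size"]
      return total

Read this way, all three assertions expect the measured prefix size to be larger than a previously recorded value, but in each failure the measurement stays at 228462742 bytes (added_size is 0 for insert multiple replicas), so the bucket contents did not grow as the tests expected.
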
/s3/aws s3 zero copy replication/delete all XFail 56s 37ms
Under investigation
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 483, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 461, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 351, in aws_s3_regression
    Feature(test=load("s3.tests.zero_copy_replication", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2148, in aws_s3
    outline()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2133, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 1386, in delete_all
    get_bucket_size(
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()

Assertion values
  assert (
      get_bucket_size(
          name=bucket_name,
               ^ is 'altinity-qa-test'
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
                 ^ is 'data/zero-copy-replication'
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
                        ^ is False
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is 
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is = 
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is 
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is = '[masked]:Secret(name='aws_s3_access_key')'
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is 
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is = 
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is 
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is = '[masked]:Secret(name='aws_s3_key_id')'
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
      ^ is = 228462742
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
        ^ is 228462742
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
      ^ is = False
  ), error()
  assert (
  ^ is False
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py', line 1394 in 'delete_all'

1386\|                      get_bucket_size(
1387\|                          name=bucket_name,
1388\|                          prefix=bucket_path,
1389\|                          minio_enabled=minio_enabled,
1390\|                          access_key=self.context.secret_access_key,
1391\|                          key_id=self.context.access_key_id,
1392\|                      )
1393\|                      > size_before
1394\|>                 ), error()
1395\|  
1396\|          finally:
1397\|              with Finally("I drop the table on each node"):
/s3/aws s3 zero copy replication/insert multiple replicas XFail 55s 829ms
Under investigation
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 483, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 461, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 351, in aws_s3_regression
    Feature(test=load("s3.tests.zero_copy_replication", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2148, in aws_s3
    outline()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2133, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 1195, in insert_multiple_replicas
    assert added_size >= expected * 0.99, error()
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert added_size >= expected * 0.99, error()

Assertion values
  assert added_size >= expected * 0.99, error()
         ^ is 0
  assert added_size >= expected * 0.99, error()
                       ^ is 6306510
  assert added_size >= expected * 0.99, error()
                                ^ is = 6243444.9
  assert added_size >= expected * 0.99, error()
                    ^ is = False
  assert added_size >= expected * 0.99, error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py', line 1195 in 'insert_multiple_replicas'

1187\|                      name=bucket_name,
1188\|                      prefix=bucket_path,
1189\|                      minio_enabled=minio_enabled,
1190\|                      access_key=self.context.secret_access_key,
1191\|                      key_id=self.context.access_key_id,
1192\|                  )
1193\|                  added_size = current_size - size_before
1194\|  
1195\|>                 assert added_size >= expected * 0.99, error()
1196\|                  assert added_size <= expected * 1.01, error()
1197\|  
1198\|          finally:

Results

Test Name Result Duration
/s3 OK 1h 1m
/s3/aws s3 table function OK 3m 40s
/s3/aws s3 table function/auto OK 14s 390ms
/s3/aws s3 table function/compression OK 14s 669ms
/s3/aws s3 table function/credentials OK 1s 629ms
/s3/aws s3 table function/data format OK 13s 90ms
/s3/aws s3 table function/multipart OK 6s 524ms
/s3/aws s3 table function/multiple columns OK 2s 46ms
/s3/aws s3 table function/partition OK 1s 554ms
/s3/aws s3 table function/remote host filter OK 41s 802ms
/s3/aws s3 table function/syntax OK 1s 635ms
/s3/aws s3 table function/wildcard OK 19s 29ms
/s3/aws s3 table function/ssec encryption check Skip 1ms
/s3/aws s3 table function/ssec OK 1m 43s
/s3/aws s3 table function/ssec/auto OK 10s 844ms
/s3/aws s3 table function/ssec/compression OK 11s 123ms
/s3/aws s3 table function/ssec/credentials OK 1s 540ms
/s3/aws s3 table function/ssec/data format OK 9s 913ms
/s3/aws s3 table function/ssec/multipart OK 5s 294ms
/s3/aws s3 table function/ssec/multiple columns OK 1s 912ms
/s3/aws s3 table function/ssec/partition OK 1s 291ms
/s3/aws s3 table function/ssec/remote host filter OK 42s 541ms
/s3/aws s3 table function/ssec/syntax OK 1s 600ms
/s3/aws s3 table function/ssec/wildcard OK 4s 780ms
/s3/aws s3 invalid table function OK 3s 442ms
/s3/aws s3 invalid table function/empty path OK 402ms
/s3/aws s3 invalid table function/empty structure OK 391ms
/s3/aws s3 invalid table function/invalid compression OK 437ms
/s3/aws s3 invalid table function/invalid credentials OK 529ms
/s3/aws s3 invalid table function/invalid format OK 831ms
/s3/aws s3 invalid table function/invalid path OK 415ms
/s3/aws s3 invalid table function/invalid structure OK 421ms
/s3/aws s3 disk OK 32m 21s
/s3/aws s3 disk/access OK 44s 50ms
/s3/aws s3 disk/access skip check OK 45s 501ms
/s3/aws s3 disk/add storage OK 1m 28s
/s3/aws s3 disk/alter move OK 56s 575ms
/s3/aws s3 disk/alter on cluster modify ttl OK 1m 32s
/s3/aws s3 disk/cache XFail 1ms
/s3/aws s3 disk/cache default XFail 719us
/s3/aws s3 disk/cache path XFail 919us
/s3/aws s3 disk/compact parts OK 42s 566ms
/s3/aws s3 disk/config over restart OK 1m 19s
/s3/aws s3 disk/default move factor OK 1m 4s
/s3/aws s3 disk/delete OK 2m 31s
/s3/aws s3 disk/download appropriate disk OK 1m 16s
/s3/aws s3 disk/environment credentials OK 1m 21s
/s3/aws s3 disk/exports OK 42s 416ms
/s3/aws s3 disk/generic url XFail 3ms
/s3/aws s3 disk/imports OK 44s 719ms
/s3/aws s3 disk/low cardinality offset XFail 42s 808ms
/s3/aws s3 disk/max single part upload size syntax OK 49s 50ms
/s3/aws s3 disk/mergetree OK 3m 21s
/s3/aws s3 disk/mergetree collapsing OK 50s 462ms
/s3/aws s3 disk/mergetree versionedcollapsing OK 47s 764ms
/s3/aws s3 disk/metadata OK 46s 929ms
/s3/aws s3 disk/min bytes for seek syntax OK 41s 25ms
/s3/aws s3 disk/multiple storage OK 52s 845ms
/s3/aws s3 disk/multiple storage query OK 55s 486ms
/s3/aws s3 disk/perform ttl move on insert OK 1m 25s
/s3/aws s3 disk/perform ttl move on insert default OK 50s 923ms
/s3/aws s3 disk/performance ttl move OK 1m 5s
/s3/aws s3 disk/remote host filter OK 1m 24s
/s3/aws s3 disk/restart OK 18s 455ms
/s3/aws s3 disk/specific url OK 43s 582ms
/s3/aws s3 disk/syntax OK 51s 227ms
/s3/aws s3 disk/wide parts OK 44s 94ms
/s3/aws s3 disk/ssec Skip 838us
/s3/aws s3 sanity OK 56s 740ms
/s3/aws s3 sanity/sanity OK 12s 327ms
/s3/aws s3 invalid disk OK 1m 49s
/s3/aws s3 invalid disk/access default OK 15s 941ms
/s3/aws s3 invalid disk/access failed OK 14s 959ms
/s3/aws s3 invalid disk/access failed skip check OK 42s 909ms
/s3/aws s3 invalid disk/cache path conflict XFail 1ms
/s3/aws s3 invalid disk/empty endpoint OK 8s 909ms
/s3/aws s3 invalid disk/invalid endpoint OK 8s 876ms
/s3/aws s3 invalid disk/invalid type OK 17s 699ms
/s3/aws s3 zero copy replication OK 17m 47s
/s3/aws s3 zero copy replication/add replica XFail 53s 911ms
/s3/aws s3 zero copy replication/alter OK 56s 258ms
/s3/aws s3 zero copy replication/alter repeat OK 1m 47s
/s3/aws s3 zero copy replication/default value OK 13s 5ms
/s3/aws s3 zero copy replication/delete OK 57s 803ms
/s3/aws s3 zero copy replication/delete all XFail 56s 37ms
/s3/aws s3 zero copy replication/drop alter replica OK 1m 13s
/s3/aws s3 zero copy replication/drop replica OK 1m 9s
/s3/aws s3 zero copy replication/global setting OK 52s 803ms
/s3/aws s3 zero copy replication/insert multiple replicas XFail 55s 829ms
/s3/aws s3 zero copy replication/lost data during mutation OK 4s 100ms
/s3/aws s3 zero copy replication/metadata OK 47s 889ms
/s3/aws s3 zero copy replication/performance alter OK 1m 29s
/s3/aws s3 zero copy replication/performance insert OK 1m 19s
/s3/aws s3 zero copy replication/performance select OK 1m 28s
/s3/aws s3 zero copy replication/ttl delete OK 57s 901ms
/s3/aws s3 zero copy replication/ttl move OK 1m 1s
/s3/aws s3 reconnect OK 2m 29s
/s3/aws s3 reconnect/local and s3 disk OK 49s 275ms
/s3/aws s3 reconnect/local and s3 volumes OK 48s 850ms
/s3/aws s3 reconnect/s3 disk OK 50s 973ms
/s3/aws s3 backup Skip 1ms

Generated by TestFlows Open-Source Test Framework v1.9.230315.1003122