S3 Test Run Report

Date         Sep 19, 2023 8:01
Duration     1h 7m
Framework    TestFlows 1.9.230315.1003122

Artifacts

Test artifacts can be found at https://altinity-test-reports.s3.amazonaws.com/index.html#clickhouse/22.8.15.25.altinitystable/6231175917/testflows/
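
If the bucket allows anonymous listing (an assumption; the public index URL above suggests it does), the run's artifacts can also be enumerated programmatically. A minimal sketch, with the bucket name and prefix taken from the URL above:

    # Sketch: enumerate this run's artifacts; anonymous (unsigned) access is assumed.
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    prefix = "clickhouse/22.8.15.25.altinitystable/6231175917/testflows/"

    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="altinity-test-reports", Prefix=prefix):
        for obj in page.get("Contents", []):
            print(obj["Key"], obj["Size"])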

Attributes

project                  Altinity/ClickHouse
project.id               159717931
package                  https://s3.amazonaws.com/altinity-build-artifacts/265/6b52da82b6ca003e8ebd34ead5f4eabbc45d1cd6/package_aarch64/clickhouse-common-static_22.8.15.25.altinitystable_arm64.deb
version                  22.8.15.25.altinitystable
user.name                MyroTk
repository               https://github.com/Altinity/clickhouse-regression
commit.hash              c23ebf091e566342377fb79fbd78ac3f7a42e678
job.id                   6231175917
job.url                  https://github.com/Altinity/ClickHouse/actions/runs/6231175917
arch                     aarch64
local                    True
clickhouse_version       None
clickhouse_binary_path   https://s3.amazonaws.com/altinity-build-artifacts/265/6b52da82b6ca003e8ebd34ead5f4eabbc45d1cd6/package_aarch64/clickhouse-common-static_22.8.15.25.altinitystable_arm64.deb
stress                   False
collect_service_logs     True
storages                 ['aws_s3']
minio_uri                http://minio1:9001
minio_root_user          minio
minio_root_password      minio123
aws_s3_bucket            Secret(name='aws_s3_bucket')
aws_s3_region            Secret(name='aws_s3_region')
aws_s3_key_id            Secret(name='aws_s3_key_id')
aws_s3_access_key        Secret(name='aws_s3_access_key')
gcs_uri                  Secret(name='gcs_uri')
gcs_key_id               Secret(name='gcs_key_id')
gcs_key_secret           Secret(name='gcs_key_secret')
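
The AWS and GCS values above are stored as masked secrets, while the MinIO endpoint and root credentials appear in the clear. As a quick connectivity check against that endpoint, here is a sketch; it assumes minio1:9001 is reachable from where you run it (note this run exercised the aws_s3 storage, per the storages attribute):

    # Sketch: verify the MinIO endpoint and root credentials listed above.
    # Assumes http://minio1:9001 is reachable from the current environment.
    import boto3

    minio = boto3.client(
        "s3",
        endpoint_url="http://minio1:9001",
        aws_access_key_id="minio",
        aws_secret_access_key="minio123",
    )
    print([bucket["Name"] for bucket in minio.list_buckets()["Buckets"]])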

Summary

78.4% OK
18.4% Known

Statistics

             Units   Skip      OK   Fail   XFail
Modules          1              1
Suites          11      3       8
Features        11      3       8
Scenarios      102             92             10
Examples        60             36             24
Steps        13221          13051    148      22
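
The summary percentages are consistent with this table if they are taken over all non-step test units (modules, suites, features, scenarios, and examples); that reading is an assumption, but the arithmetic works out:

    # Reproduce the summary percentages from the statistics above
    # (assumes percentages are computed over all non-step test units).
    units = 1 + 11 + 11 + 102 + 60    # modules + suites + features + scenarios + examples = 185
    ok = 1 + 8 + 8 + 92 + 36          # = 145
    known = 10 + 24                   # XFail scenarios + XFail examples = 34

    print(f"{ok / units:.1%}")        # 78.4% OK
    print(f"{known / units:.1%}")     # 18.4% Known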

Known Fails

Test Name    Result    Message
/s3/aws s3 disk/cache    XFail 1ms
Under development for 22.8 and newer.
None
/s3/aws s3 disk/cache default    XFail 903us
Under development for 22.8 and newer.
None
/s3/aws s3 disk/cache path    XFail 1ms
Under development for 22.8 and newer.
None
/s3/aws s3 disk/generic url    XFail 3ms
not yet supported
Generic URL is treated as invalid configuration, ClickHouse will not start if config is added
/s3/aws s3 disk/low cardinality offset    XFail 43s 568ms
https://github.com/ClickHouse/ClickHouse/pull/44875
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 486, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 464, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 344, in aws_s3_regression
    Feature(test=load("s3.tests.disk", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2755, in aws_s3
    disk_tests()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2742, in disk_tests
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2711, in low_cardinality_offset
    assert output == "23999\n", error()
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert output == "23999\n", error()

Assertion values
  assert output == "23999\n", error()
         ^ is '23999'
  assert output == "23999\n", error()
                ^ is = False

  assert output == "23999\n", error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py', line 2711 in 'low_cardinality_offset'

2703|                          "1",
2704|                      ),
2705|                      (
2706|                          "merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem",
2707|                          "1",
2708|                      ),
2709|                  ],
2710|              ).output
2711|>             assert output == "23999\n", error()
2712|  
2713|      finally:
2714|          with Finally(f"I remove the table {name}"):
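
Distilled from the assertion values above (copied verbatim): the captured output, as displayed, is the string '23999', while the test compares it against the literal "23999\n", so the comparison fails; the linked pull request tracks the underlying issue.

    # Values as shown in the assertion dump above for low cardinality offset.
    output = "23999"                  # captured output as displayed
    print(output == "23999\n")        # False, hence the XFail
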
/s3/aws s3 invalid disk/cache path conflict    XFail 2ms
Under development for 22.8 and newer.
None
/s3/aws s3 zero copy replication/add replica    XFail 57s 701ms
Under investigation
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 486, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 464, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 353, in aws_s3_regression
    Feature(test=load("s3.tests.zero_copy_replication", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2148, in aws_s3
    outline()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2133, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 382, in add_replica
    assert size_after + 1 == get_bucket_size(
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()

Assertion values
  assert size_after + 1 == get_bucket_size(
         ^ is 228462742
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
                    ^ is = 228462743
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
           ^ is 'altinity-qa-test'
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
             ^ is 'data/zero-copy-replication'
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
                    ^ is False
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
                 ^ is 
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
                 ^ is = 
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
                 ^ is 
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
                 ^ is = '[masked]:Secret(name='aws_s3_access_key')'
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
             ^ is 
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
             ^ is = 
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
             ^ is 
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
             ^ is = '[masked]:Secret(name='aws_s3_key_id')'
  ), error()
  assert size_after + 1 == get_bucket_size(
                           ^ is = 228462742
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
                        ^ is = False
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
  ^ is False
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py', line 388 in 'add_replica'

380|                        than previously because of the additional replica"""
381|              ):
382|                  assert size_after + 1 == get_bucket_size(
383|                      name=bucket_name,
384|                      prefix=bucket_path,
385|                      minio_enabled=minio_enabled,
386|                      access_key=self.context.secret_access_key,
387|                      key_id=self.context.access_key_id,
388|>                 ), error()
389|  
390|              with And("I check simple queries on the first node"):
391|                  check_query_node(
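
Distilled from the assertion values above: the check expects the bucket to be exactly one byte larger than size_after once the replica is added, but the measured size is unchanged.

    # Values as shown in the assertion dump above for add replica.
    size_after = 228462742                  # size_after
    bucket_size = 228462742                 # value returned by get_bucket_size(...)
    print(size_after + 1 == bucket_size)    # False: 228462743 != 228462742, hence the XFail
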
/s3/aws s3 zero copy replication/delete all    XFail 1m 0s
Under investigation
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 486, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 464, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 353, in aws_s3_regression
    Feature(test=load("s3.tests.zero_copy_replication", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2148, in aws_s3
    outline()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2133, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 1386, in delete_all
    get_bucket_size(
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()

Assertion values
  assert (
      get_bucket_size(
          name=bucket_name,
               ^ is 'altinity-qa-test'
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
                 ^ is 'data/zero-copy-replication'
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
                        ^ is False
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is 
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is = 
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is 
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is = '[masked]:Secret(name='aws_s3_access_key')'
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is 
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is = 
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is 
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is = '[masked]:Secret(name='aws_s3_key_id')'
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
      ^ is = 228462742
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
        ^ is 228462742
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
      ^ is = False
  ), error()
  assert (
  ^ is False
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py', line 1394 in 'delete_all'

1386|                      get_bucket_size(
1387|                          name=bucket_name,
1388|                          prefix=bucket_path,
1389|                          minio_enabled=minio_enabled,
1390|                          access_key=self.context.secret_access_key,
1391|                          key_id=self.context.access_key_id,
1392|                      )
1393|                      > size_before
1394|>                 ), error()
1395|  
1396|          finally:
1397|              with Finally("I drop the table on each node"):
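
Distilled from the assertion values above: get_bucket_size(...) returns the same value as size_before, so the strict greater-than check fails.

    # Values as shown in the assertion dump above for delete all.
    bucket_size = 228462742           # value returned by get_bucket_size(...)
    size_before = 228462742           # size_before
    print(bucket_size > size_before)  # False: the bucket did not grow, hence the XFail
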
/s3/aws s3 zero copy replication/insert multiple replicas    XFail 1m 4s
Under investigation
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 486, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 464, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 353, in aws_s3_regression
    Feature(test=load("s3.tests.zero_copy_replication", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2148, in aws_s3
    outline()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2133, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 1195, in insert_multiple_replicas
    assert added_size >= expected * 0.99, error()
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert added_size >= expected * 0.99, error()

Assertion values
  assert added_size >= expected * 0.99, error()
         ^ is 0
  assert added_size >= expected * 0.99, error()
                       ^ is 6306510
  assert added_size >= expected * 0.99, error()
                                ^ is = 6243444.9
  assert added_size >= expected * 0.99, error()
                    ^ is = False
  assert added_size >= expected * 0.99, error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py', line 1195 in 'insert_multiple_replicas'

1187|                      name=bucket_name,
1188|                      prefix=bucket_path,
1189|                      minio_enabled=minio_enabled,
1190|                      access_key=self.context.secret_access_key,
1191|                      key_id=self.context.access_key_id,
1192|                  )
1193|                  added_size = current_size - size_before
1194|  
1195|>                 assert added_size >= expected * 0.99, error()
1196|                  assert added_size <= expected * 1.01, error()
1197|  
1198|          finally:
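
Distilled from the assertion values above: no bytes were added to the bucket, while roughly 6.3 MB were expected (within a 1% tolerance).

    # Values as shown in the assertion dump above for insert multiple replicas.
    added_size = 0                          # added_size
    expected = 6306510                      # expected
    print(added_size >= expected * 0.99)    # False: 0 < 6243444.9, hence the XFail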

Results

Test Name Result Duration
/s3 OK 1h 7m
/s3/aws s3 table function OK 3m 56s
/s3/aws s3 table function/auto OK 14s 494ms
/s3/aws s3 table function/compression OK 14s 183ms
/s3/aws s3 table function/credentials OK 1s 538ms
/s3/aws s3 table function/data format OK 12s 172ms
/s3/aws s3 table function/multipart OK 6s 461ms
/s3/aws s3 table function/multiple columns OK 1s 992ms
/s3/aws s3 table function/partition OK 1s 703ms
/s3/aws s3 table function/remote host filter OK 40s 733ms
/s3/aws s3 table function/syntax OK 1s 688ms
/s3/aws s3 table function/wildcard OK 26s 523ms
/s3/aws s3 table function/ssec encryption check Skip 1ms
/s3/aws s3 table function/ssec OK 1m 55s
/s3/aws s3 table function/ssec/auto OK 11s 884ms
/s3/aws s3 table function/ssec/compression OK 11s 662ms
/s3/aws s3 table function/ssec/credentials OK 2s 880ms
/s3/aws s3 table function/ssec/data format OK 9s 528ms
/s3/aws s3 table function/ssec/multipart OK 6s 662ms
/s3/aws s3 table function/ssec/multiple columns OK 3s 415ms
/s3/aws s3 table function/ssec/partition OK 6s 319ms
/s3/aws s3 table function/ssec/remote host filter OK 41s 984ms
/s3/aws s3 table function/ssec/syntax OK 2s 951ms
/s3/aws s3 table function/ssec/wildcard OK 4s 931ms
/s3/aws s3 invalid table function OK 3s 451ms
/s3/aws s3 invalid table function/empty path OK 396ms
/s3/aws s3 invalid table function/empty structure OK 395ms
/s3/aws s3 invalid table function/invalid compression OK 411ms
/s3/aws s3 invalid table function/invalid credentials OK 548ms
/s3/aws s3 invalid table function/invalid format OK 845ms
/s3/aws s3 invalid table function/invalid path OK 469ms
/s3/aws s3 invalid table function/invalid structure OK 369ms
/s3/aws s3 disk OK 36m 4s
/s3/aws s3 disk/access OK 48s 438ms
/s3/aws s3 disk/access skip check OK 42s 838ms
/s3/aws s3 disk/add storage OK 1m 32s
/s3/aws s3 disk/alter move OK 1m 3s
/s3/aws s3 disk/alter on cluster modify ttl OK 1m 42s
/s3/aws s3 disk/cache XFail 1ms
/s3/aws s3 disk/cache default XFail 903us
/s3/aws s3 disk/cache path XFail 1ms
/s3/aws s3 disk/compact parts OK 43s 302ms
/s3/aws s3 disk/config over restart OK 1m 18s
/s3/aws s3 disk/default move factor OK 1m 11s
/s3/aws s3 disk/delete OK 3m 46s
/s3/aws s3 disk/download appropriate disk OK 1m 17s
/s3/aws s3 disk/environment credentials OK 1m 23s
/s3/aws s3 disk/exports OK 45s 258ms
/s3/aws s3 disk/generic url XFail 3ms
/s3/aws s3 disk/imports OK 45s 59ms
/s3/aws s3 disk/low cardinality offset XFail 43s 568ms
/s3/aws s3 disk/max single part upload size syntax OK 49s 466ms
/s3/aws s3 disk/mergetree OK 3m 24s
/s3/aws s3 disk/mergetree collapsing OK 51s 810ms
/s3/aws s3 disk/mergetree versionedcollapsing OK 53s 499ms
/s3/aws s3 disk/metadata OK 51s 850ms
/s3/aws s3 disk/min bytes for seek syntax OK 41s 697ms
/s3/aws s3 disk/multiple storage OK 55s 903ms
/s3/aws s3 disk/multiple storage query OK 55s 149ms
/s3/aws s3 disk/perform ttl move on insert OK 2m 30s
/s3/aws s3 disk/perform ttl move on insert default OK 54s 853ms
/s3/aws s3 disk/performance ttl move OK 1m 20s
/s3/aws s3 disk/remote host filter OK 1m 26s
/s3/aws s3 disk/restart OK 18s 85ms
/s3/aws s3 disk/specific url OK 45s 766ms
/s3/aws s3 disk/syntax OK 54s 500ms
/s3/aws s3 disk/wide parts OK 44s 948ms
/s3/aws s3 disk/ssec Skip 1ms
/s3/aws s3 sanity OK 58s 131ms
/s3/aws s3 sanity/sanity OK 14s 931ms
/s3/aws s3 invalid disk OK 1m 58s
/s3/aws s3 invalid disk/access default OK 17s 988ms
/s3/aws s3 invalid disk/access failed OK 19s 2ms
/s3/aws s3 invalid disk/access failed skip check OK 42s 74ms
/s3/aws s3 invalid disk/cache path conflict XFail 2ms
/s3/aws s3 invalid disk/empty endpoint OK 9s 863ms
/s3/aws s3 invalid disk/invalid endpoint OK 9s 793ms
/s3/aws s3 invalid disk/invalid type OK 19s 817ms
/s3/aws s3 zero copy replication OK 19m 15s
/s3/aws s3 zero copy replication/add replica XFail 57s 701ms
/s3/aws s3 zero copy replication/alter OK 1m 1s
/s3/aws s3 zero copy replication/alter repeat OK 1m 58s
/s3/aws s3 zero copy replication/default value OK 12s 858ms
/s3/aws s3 zero copy replication/delete OK 58s 587ms
/s3/aws s3 zero copy replication/delete all XFail 1m 0s
/s3/aws s3 zero copy replication/drop alter replica OK 1m 14s
/s3/aws s3 zero copy replication/drop replica OK 1m 16s
/s3/aws s3 zero copy replication/global setting OK 1m 1s
/s3/aws s3 zero copy replication/insert multiple replicas XFail 1m 4s
/s3/aws s3 zero copy replication/lost data during mutation OK 4s 725ms
/s3/aws s3 zero copy replication/metadata OK 52s 46ms
/s3/aws s3 zero copy replication/performance alter OK 1m 36s
/s3/aws s3 zero copy replication/performance insert OK 1m 27s
/s3/aws s3 zero copy replication/performance select OK 1m 35s
/s3/aws s3 zero copy replication/ttl delete OK 1m 4s
/s3/aws s3 zero copy replication/ttl move OK 1m 4s
/s3/aws s3 reconnect OK 2m 37s
/s3/aws s3 reconnect/local and s3 disk OK 51s 443ms
/s3/aws s3 reconnect/local and s3 volumes OK 49s 517ms
/s3/aws s3 reconnect/s3 disk OK 56s 361ms
/s3/aws s3 backup Skip 2ms

Generated by TestFlows Open-Source Test Framework v1.9.230315.1003122