S3 Test Run Report

Date: Oct 05, 2023 14:43
Duration: 1h 0m
Framework: TestFlows 1.9.230315.1003122

Artifacts

Test artifacts can be found at https://altinity-test-reports.s3.amazonaws.com/index.html#clickhouse/23.3.13.7.altinitystable/6416627213/testflows/

Attributes

project: Altinity/ClickHouse
project.id: 159717931
package: https://s3.amazonaws.com/altinity-build-artifacts/23.3/f040635d5f373c5e4be08e5fbec2622a9889adde/package_release/clickhouse-common-static_23.3.13.7.altinitystable_amd64.deb
version: 23.3.13.7.altinitystable
user.name: Enmk
repository: https://github.com/Altinity/clickhouse-regression
commit.hash: 53dd0b4af71fdeac6704e70f4ff79eacbae2159a
job.id: 6416627213
job.url: https://github.com/Altinity/ClickHouse/actions/runs/6416627213
arch: x86_64
local: True
clickhouse_version: None
clickhouse_binary_path: https://s3.amazonaws.com/altinity-build-artifacts/23.3/f040635d5f373c5e4be08e5fbec2622a9889adde/package_release/clickhouse-common-static_23.3.13.7.altinitystable_amd64.deb
stress: False
collect_service_logs: True
storages: ['aws_s3']
minio_uri: http://minio1:9001
minio_root_user: minio
minio_root_password: minio123
aws_s3_bucket: Secret(name='aws_s3_bucket')
aws_s3_region: Secret(name='aws_s3_region')
aws_s3_key_id: Secret(name='aws_s3_key_id')
aws_s3_access_key: Secret(name='aws_s3_access_key')
gcs_uri: Secret(name='gcs_uri')
gcs_key_id: Secret(name='gcs_key_id')
gcs_key_secret: Secret(name='gcs_key_secret')

Summary

78.4% OK
18.4% Known

Statistics

            Units   Skip     OK   Fail  XFail
Modules         1             1
Suites         11      3      8
Features       11      3      8
Scenarios     102            92            10
Examples       60            36            24
Steps       13229         13059    148     22

Known Fails

Test Name Result Message
/s3/aws s3 disk/cache XFail 805us
Under development for 22.8 and newer.
None
/s3/aws s3 disk/cache default XFail 542us
Under development for 22.8 and newer.
None
/s3/aws s3 disk/cache path XFail 576us
Under development for 22.8 and newer.
None
/s3/aws s3 disk/generic url XFail 2ms
not yet supported
Generic URL is treated as an invalid configuration; ClickHouse will not start if the config is added
/s3/aws s3 disk/low cardinality offset XFail 43s 658ms
https://github.com/ClickHouse/ClickHouse/pull/44875
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 456, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 434, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 324, in aws_s3_regression
    Feature(test=load("s3.tests.disk", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2755, in aws_s3
    disk_tests()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2742, in disk_tests
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2711, in low_cardinality_offset
    assert output == "23999\n", error()
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert output == "23999\n", error()

Assertion values
  assert output == "23999\n", error()
         ^ is '23999'
  assert output == "23999\n", error()
                ^ is = False

  assert output == "23999\n", error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py', line 2711 in 'low_cardinality_offset'

2703|                          "1",
2704|                      ),
2705|                      (
2706|                          "merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem",
2707|                          "1",
2708|                      ),
2709|                  ],
2710|              ).output
2711|>             assert output == "23999\n", error()
2712|  
2713|      finally:
2714|          with Finally(f"I remove the table {name}"):
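
The values the failed assertion reported make the mismatch easy to restate in isolation. The short sketch below is not part of the report or the regression suite; it simply replays the exact-match comparison with the reported values (note that the expected string includes a trailing newline):

    # Stand-alone restatement (not suite code) of the failed comparison above,
    # using the values the assertion reported.
    expected = "23999\n"   # expected query output, including the trailing newline
    output = "23999"       # value the assertion reported for `output`
    print(output == expected)                            # False: exact string match fails
    print(output.rstrip("\n") == expected.rstrip("\n"))  # True: the digits themselves match
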
/s3/aws s3 invalid disk/cache path conflict XFail 775us
Under development for 22.8 and newer.
None
/s3/aws s3 zero copy replication/add replica XFail 55s 171ms
Under investigation
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 456, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 434, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 333, in aws_s3_regression
    Feature(test=load("s3.tests.zero_copy_replication", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2148, in aws_s3
    outline()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2133, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 382, in add_replica
    assert size_after + 1 == get_bucket_size(
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()

Assertion values
  assert size_after + 1 == get_bucket_size(
         ^ is 228462742
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
                    ^ is = 228462743
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
           ^ is 'altinity-qa-test'
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
             ^ is 'data/zero-copy-replication'
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
                    ^ is False
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
                 ^ is 
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
                 ^ is = 
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
                 ^ is 
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
                 ^ is = '[masked]:Secret(name='aws_s3_access_key')'
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
             ^ is 
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
             ^ is = 
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
             ^ is 
  ), error()
  assert size_after + 1 == get_bucket_size(
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
             ^ is = '[masked]:Secret(name='aws_s3_key_id')'
  ), error()
  assert size_after + 1 == get_bucket_size(
                           ^ is = 228462742
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
                        ^ is = False
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()
  assert size_after + 1 == get_bucket_size(
  ^ is False
      name=bucket_name,
      prefix=bucket_path,
      minio_enabled=minio_enabled,
      access_key=self.context.secret_access_key,
      key_id=self.context.access_key_id,
  ), error()

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py', line 388 in 'add_replica'

380|                        than previously because of the additional replica"""
381|              ):
382|                  assert size_after + 1 == get_bucket_size(
383|                      name=bucket_name,
384|                      prefix=bucket_path,
385|                      minio_enabled=minio_enabled,
386|                      access_key=self.context.secret_access_key,
387|                      key_id=self.context.access_key_id,
388|>                 ), error()
389|  
390|              with And("I check simple queries on the first node"):
391|                  check_query_node(
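
Several of the zero copy replication checks above compare before/after bucket sizes returned by the suite's get_bucket_size helper. The sketch below is an assumption for illustration only, not the helper from Altinity/clickhouse-regression: a minimal boto3 implementation with the same parameter names as the call shown in the traceback, where the minio_enabled flag is assumed to switch the client to the MinIO endpoint listed in the attributes.

    # Hypothetical sketch of a get_bucket_size()-style helper; the real
    # implementation in the regression suite may differ.
    import boto3

    def get_bucket_size(name, prefix, key_id, access_key,
                        minio_enabled=False, minio_uri="http://minio1:9001"):
        """Total size in bytes of all objects under `prefix` in bucket `name`."""
        client = boto3.client(
            "s3",
            aws_access_key_id=key_id,
            aws_secret_access_key=access_key,
            # Assumption: with MinIO enabled, point the client at the MinIO endpoint.
            endpoint_url=minio_uri if minio_enabled else None,
        )
        total = 0
        # Paginate: a prefix such as 'data/zero-copy-replication' can hold >1000 objects.
        for page in client.get_paginator("list_objects_v2").paginate(Bucket=name, Prefix=prefix):
            for obj in page.get("Contents", []):
                total += obj["Size"]
        return total

With this reading, the add replica failure reduces to the helper returning 228462742 where the assertion expected 228462743, and the delete all failure below to the measured size not exceeding size_before (also 228462742).
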
/s3/aws s3 zero copy replication/delete all XFail 56s 420ms
Under investigation
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 456, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 434, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 333, in aws_s3_regression
    Feature(test=load("s3.tests.zero_copy_replication", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2148, in aws_s3
    outline()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2133, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 1386, in delete_all
    get_bucket_size(
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()

Assertion values
  assert (
      get_bucket_size(
          name=bucket_name,
               ^ is 'altinity-qa-test'
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
                 ^ is 'data/zero-copy-replication'
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
                        ^ is False
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is 
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is = 
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is 
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is = '[masked]:Secret(name='aws_s3_access_key')'
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is 
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is = 
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is 
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is = '[masked]:Secret(name='aws_s3_key_id')'
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
      ^ is = 228462742
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
        ^ is 228462742
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
      ^ is = False
  ), error()
  assert (
  ^ is False
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py', line 1394 in 'delete_all'

1386|                      get_bucket_size(
1387|                          name=bucket_name,
1388|                          prefix=bucket_path,
1389|                          minio_enabled=minio_enabled,
1390|                          access_key=self.context.secret_access_key,
1391|                          key_id=self.context.access_key_id,
1392|                      )
1393|                      > size_before
1394|>                 ), error()
1395|  
1396|          finally:
1397|              with Finally("I drop the table on each node"):
/s3/aws s3 zero copy replication/insert multiple replicas XFail 56s 164ms
Under investigation
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 456, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 434, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 333, in aws_s3_regression
    Feature(test=load("s3.tests.zero_copy_replication", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2148, in aws_s3
    outline()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2133, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 1195, in insert_multiple_replicas
    assert added_size >= expected * 0.99, error()
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert added_size >= expected * 0.99, error()

Assertion values
  assert added_size >= expected * 0.99, error()
         ^ is 0
  assert added_size >= expected * 0.99, error()
                       ^ is 6306510
  assert added_size >= expected * 0.99, error()
                                ^ is = 6243444.9
  assert added_size >= expected * 0.99, error()
                    ^ is = False
  assert added_size >= expected * 0.99, error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py', line 1195 in 'insert_multiple_replicas'

1187|                      name=bucket_name,
1188|                      prefix=bucket_path,
1189|                      minio_enabled=minio_enabled,
1190|                      access_key=self.context.secret_access_key,
1191|                      key_id=self.context.access_key_id,
1192|                  )
1193|                  added_size = current_size - size_before
1194|  
1195|>                 assert added_size >= expected * 0.99, error()
1196|                  assert added_size <= expected * 1.01, error()
1197|  
1198|          finally:
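
The insert multiple replicas scenario accepts a 1% tolerance band around the expected growth in bucket size (lines 1195 and 1196 above). Restated stand-alone with the values the failed assertion reported (illustration only, not suite code):

    # +/-1% band check from the scenario, replayed with the reported values.
    expected = 6306510   # bytes the insert is expected to add to the bucket
    added_size = 0       # reported growth of the bucket contents (current_size - size_before)

    print(added_size >= expected * 0.99)  # False: 0 < 6243444.9, so the first assertion fails
    print(added_size <= expected * 1.01)  # True, but the test never reaches this check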

Results

Test Name Result Duration
/s3 OK 1h 0m
/s3/aws s3 table function OK 3m 8s
/s3/aws s3 table function/auto OK 11s 239ms
/s3/aws s3 table function/compression OK 11s 331ms
/s3/aws s3 table function/credentials OK 1s 268ms
/s3/aws s3 table function/data format OK 11s 143ms
/s3/aws s3 table function/multipart OK 5s 875ms
/s3/aws s3 table function/multiple columns OK 1s 508ms
/s3/aws s3 table function/partition OK 1s 650ms
/s3/aws s3 table function/remote host filter OK 36s 396ms
/s3/aws s3 table function/syntax OK 1s 289ms
/s3/aws s3 table function/wildcard OK 19s 755ms
/s3/aws s3 table function/ssec encryption check Skip 664us
/s3/aws s3 table function/ssec OK 1m 26s
/s3/aws s3 table function/ssec/auto OK 7s 663ms
/s3/aws s3 table function/ssec/compression OK 7s 553ms
/s3/aws s3 table function/ssec/credentials OK 1s 101ms
/s3/aws s3 table function/ssec/data format OK 8s 541ms
/s3/aws s3 table function/ssec/multipart OK 4s 365ms
/s3/aws s3 table function/ssec/multiple columns OK 1s 334ms
/s3/aws s3 table function/ssec/partition OK 1s 99ms
/s3/aws s3 table function/ssec/remote host filter OK 37s 420ms
/s3/aws s3 table function/ssec/syntax OK 1s 85ms
/s3/aws s3 table function/ssec/wildcard OK 3s 353ms
/s3/aws s3 invalid table function OK 2s 32ms
/s3/aws s3 invalid table function/empty path OK 252ms
/s3/aws s3 invalid table function/empty structure OK 242ms
/s3/aws s3 invalid table function/invalid compression OK 228ms
/s3/aws s3 invalid table function/invalid credentials OK 394ms
/s3/aws s3 invalid table function/invalid format OK 467ms
/s3/aws s3 invalid table function/invalid path OK 223ms
/s3/aws s3 invalid table function/invalid structure OK 211ms
/s3/aws s3 disk OK 32m 1s
/s3/aws s3 disk/access OK 42s 762ms
/s3/aws s3 disk/access skip check OK 37s 821ms
/s3/aws s3 disk/add storage OK 1m 28s
/s3/aws s3 disk/alter move OK 55s 365ms
/s3/aws s3 disk/alter on cluster modify ttl OK 1m 33s
/s3/aws s3 disk/cache XFail 805us
/s3/aws s3 disk/cache default XFail 542us
/s3/aws s3 disk/cache path XFail 576us
/s3/aws s3 disk/compact parts OK 43s 513ms
/s3/aws s3 disk/config over restart OK 1m 16s
/s3/aws s3 disk/default move factor OK 1m 1s
/s3/aws s3 disk/delete OK 2m 27s
/s3/aws s3 disk/download appropriate disk OK 1m 14s
/s3/aws s3 disk/environment credentials OK 1m 19s
/s3/aws s3 disk/exports OK 45s 216ms
/s3/aws s3 disk/generic url XFail 2ms
/s3/aws s3 disk/imports OK 45s 112ms
/s3/aws s3 disk/low cardinality offset XFail 43s 658ms
/s3/aws s3 disk/max single part upload size syntax OK 45s 916ms
/s3/aws s3 disk/mergetree OK 3m 23s
/s3/aws s3 disk/mergetree collapsing OK 50s 914ms
/s3/aws s3 disk/mergetree versionedcollapsing OK 51s 655ms
/s3/aws s3 disk/metadata OK 49s 594ms
/s3/aws s3 disk/min bytes for seek syntax OK 45s 71ms
/s3/aws s3 disk/multiple storage OK 50s 970ms
/s3/aws s3 disk/multiple storage query OK 51s 6ms
/s3/aws s3 disk/perform ttl move on insert OK 1m 24s
/s3/aws s3 disk/perform ttl move on insert default OK 50s 535ms
/s3/aws s3 disk/performance ttl move OK 1m 5s
/s3/aws s3 disk/remote host filter OK 1m 23s
/s3/aws s3 disk/restart OK 16s 397ms
/s3/aws s3 disk/specific url OK 43s 229ms
/s3/aws s3 disk/syntax OK 50s 424ms
/s3/aws s3 disk/wide parts OK 43s 832ms
/s3/aws s3 disk/ssec Skip 832us
/s3/aws s3 sanity OK 52s 830ms
/s3/aws s3 sanity/sanity OK 11s 51ms
/s3/aws s3 invalid disk OK 1m 45s
/s3/aws s3 invalid disk/access default OK 14s 536ms
/s3/aws s3 invalid disk/access failed OK 15s 536ms
/s3/aws s3 invalid disk/access failed skip check OK 38s 269ms
/s3/aws s3 invalid disk/cache path conflict XFail 775us
/s3/aws s3 invalid disk/empty endpoint OK 8s 474ms
/s3/aws s3 invalid disk/invalid endpoint OK 9s 521ms
/s3/aws s3 invalid disk/invalid type OK 19s 43ms
/s3/aws s3 zero copy replication OK 17m 53s
/s3/aws s3 zero copy replication/add replica XFail 55s 171ms
/s3/aws s3 zero copy replication/alter OK 54s 261ms
/s3/aws s3 zero copy replication/alter repeat OK 1m 49s
/s3/aws s3 zero copy replication/default value OK 13s 281ms
/s3/aws s3 zero copy replication/delete OK 54s 446ms
/s3/aws s3 zero copy replication/delete all XFail 56s 420ms
/s3/aws s3 zero copy replication/drop alter replica OK 1m 7s
/s3/aws s3 zero copy replication/drop replica OK 1m 10s
/s3/aws s3 zero copy replication/global setting OK 56s 214ms
/s3/aws s3 zero copy replication/insert multiple replicas XFail 56s 164ms
/s3/aws s3 zero copy replication/lost data during mutation OK 4s 146ms
/s3/aws s3 zero copy replication/metadata OK 49s 159ms
/s3/aws s3 zero copy replication/performance alter OK 1m 30s
/s3/aws s3 zero copy replication/performance insert OK 1m 23s
/s3/aws s3 zero copy replication/performance select OK 1m 31s
/s3/aws s3 zero copy replication/ttl delete OK 58s 315ms
/s3/aws s3 zero copy replication/ttl move OK 57s 444ms
/s3/aws s3 reconnect OK 2m 38s
/s3/aws s3 reconnect/local and s3 disk OK 53s 323ms
/s3/aws s3 reconnect/local and s3 volumes OK 51s 171ms
/s3/aws s3 reconnect/s3 disk OK 53s 615ms
/s3/aws s3 backup Skip 1ms

Generated by TestFlows Open-Source Test Framework v1.9.230315.1003122