S3 Test Run Report

Date       Dec 26, 2023 12:45
Duration   1h 0m
Framework  TestFlows 2.0.231130.1212236

Artifacts

Test artifacts can be found at https://altinity-build-artifacts.s3.amazonaws.com/index.html#/

Attributes

project                 Altinity/ClickHouse
project.id              159717931
package                 https://s3.amazonaws.com/altinity-build-artifacts/23.8/57762a45dd8d7579a37ae9643654cde6896312dc/package_release/clickhouse-common-static_23.8.8.21.altinitystable_amd64.deb
version                 23.8.8.21.altinitystable
user.name               Enmk
repository              https://github.com/Altinity/clickhouse-regression
commit.hash             09db6160aac30e37169797b73fac77b4cbca41c6
job.id                  7328846900
job.url                 https://github.com/Altinity/ClickHouse/actions/runs/7328846900
arch                    x86_64
local                   True
clickhouse_version      None
clickhouse_binary_path  https://s3.amazonaws.com/altinity-build-artifacts/23.8/57762a45dd8d7579a37ae9643654cde6896312dc/package_release/clickhouse-common-static_23.8.8.21.altinitystable_amd64.deb
stress                  False
collect_service_logs    True
storages                ['aws_s3']
minio_uri               http://minio1:9001
minio_root_user         minio
minio_root_password     minio123
aws_s3_bucket           Secret(name='aws_s3_bucket')
aws_s3_region           Secret(name='aws_s3_region')
aws_s3_key_id           Secret(name='aws_s3_key_id')
aws_s3_access_key       Secret(name='aws_s3_access_key')
gcs_uri                 Secret(name='gcs_uri')
gcs_key_id              Secret(name='gcs_key_id')
gcs_key_secret          Secret(name='gcs_key_secret')

Summary

77.5% OK
19.3% Known

Statistics

              Units    Skip      OK    Fail   XFail
Modules           1               1
Suites           11       3       8
Features         11       3       8
Scenarios       103              92              11
Checks            1               1
Examples         60              36              24
Steps         13963           13789     152      22

Known Fails

Test Name Result Message
/s3/aws s3 disk/cache XFail 1ms
Under development for 22.8 and newer.
None
/s3/aws s3 disk/cache default XFail 925us
Under development for 22.8 and newer.
None
/s3/aws s3 disk/cache path XFail 763us
Under development for 22.8 and newer.
None
/s3/aws s3 disk/generic url XFail 2ms
not yet supported
Generic URL is treated as an invalid configuration; ClickHouse will not start if the config is added.
/s3/aws s3 disk/low cardinality offset XFail 43s 611ms
https://github.com/ClickHouse/ClickHouse/pull/44875
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 456, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 434, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 324, in aws_s3_regression
    Feature(test=load("s3.tests.disk", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2755, in aws_s3
    disk_tests()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2742, in disk_tests
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py", line 2711, in low_cardinality_offset
    assert output == "23999\n", error()
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert output == "23999\n", error()

Assertion values
  assert output == "23999\n", error()
         ^ is '23999'
  assert output == "23999\n", error()
                ^ is = False

  assert output == "23999\n", error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/disk.py', line 2711 in 'low_cardinality_offset'

2703|                          "1",
2704|                      ),
2705|                      (
2706|                          "merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem",
2707|                          "1",
2708|                      ),
2709|                  ],
2710|              ).output
2711|>             assert output == "23999\n", error()
2712|  
2713|      finally:
2714|          with Finally(f"I remove the table {name}"):
/s3/aws s3 invalid disk/cache path conflict XFail 683us
Under development for 22.8 and newer.
None
/s3/aws s3 zero copy replication/Bucket should be empty before test begins XFail 3s 315ms
Under investigation
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 456, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 434, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 333, in aws_s3_regression
    Feature(test=load("s3.tests.zero_copy_replication", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2219, in aws_s3
    outline()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2196, in outline
    check_bucket_size(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/common.py", line 767, in check_bucket_size
    assert expected_size == current_size, error()
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert expected_size == current_size, error()

Assertion values
  assert expected_size == current_size, error()
         ^ is 0
  assert expected_size == current_size, error()
                          ^ is 228462742
  assert expected_size == current_size, error()
                       ^ is = False
  assert expected_size == current_size, error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/common.py', line 767 in 'check_bucket_size'

759|      current_size = get_bucket_size(
760|          name=name,
761|          prefix=prefix,
762|          minio_enabled=minio_enabled,
763|          access_key=self.context.secret_access_key,
764|          key_id=self.context.access_key_id,
765|      )
766|      if tolerance is None or tolerance == 0:
767|>         assert expected_size == current_size, error()
768|      else:
769|          msg = f"{current_size} is not within {expected_size}±{tolerance}"
770|          lower_bound = expected_size - tolerance
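
For context, the tolerance handling that check_bucket_size applies can be summarized by the following standalone sketch (a simplified illustration of the logic in the common.py excerpt above, with an assumed helper name, not the framework's actual code):

    def bucket_size_matches(expected_size, current_size, tolerance=None):
        """Return True when current_size equals expected_size, or falls within
        expected_size ± tolerance when a non-zero tolerance is given."""
        if tolerance is None or tolerance == 0:
            return expected_size == current_size
        lower_bound = expected_size - tolerance
        upper_bound = expected_size + tolerance
        return lower_bound <= current_size <= upper_bound

    # In this run the check was strict (no tolerance): expected_size == 0
    # versus current_size == 228462742, hence the assertion failed.
    # bucket_size_matches(0, 228462742)  -> False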
/s3/aws s3 zero copy replication/add replica XFail 48s 477ms
Under investigation
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 456, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 434, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 333, in aws_s3_regression
    Feature(test=load("s3.tests.zero_copy_replication", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2219, in aws_s3
    outline()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2204, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 384, in add_replica
    assert size_after + 1 == size, error()
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert size_after + 1 == size, error()

Assertion values
  assert size_after + 1 == size, error()
         ^ is 228462742
  assert size_after + 1 == size, error()
                    ^ is = 228462743
  assert size_after + 1 == size, error()
                           ^ is 228462742
  assert size_after + 1 == size, error()
                        ^ is = False
  assert size_after + 1 == size, error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py', line 384 in 'add_replica'

376|              ):
377|                  size = get_bucket_size(
378|                      name=bucket_name,
379|                      prefix=bucket_path,
380|                      minio_enabled=minio_enabled,
381|                      access_key=self.context.secret_access_key,
382|                      key_id=self.context.access_key_id,
383|                  )
384|>                 assert size_after + 1 == size, error()
385|  
386|              with And("I check simple queries on the first node"):
387|                  check_query_node(
/s3/aws s3 zero copy replication/delete all XFail 51s 601ms
Under investigation
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 456, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 434, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 333, in aws_s3_regression
    Feature(test=load("s3.tests.zero_copy_replication", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2219, in aws_s3
    outline()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2204, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 1376, in delete_all
    get_bucket_size(
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()

Assertion values
  assert (
      get_bucket_size(
          name=bucket_name,
               ^ is 'altinity-qa-test'
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
                 ^ is 'data/zero-copy-replication'
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
                        ^ is False
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is 
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is = 
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is 
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
                     ^ is = '[masked]:Secret(name='aws_s3_access_key')'
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is 
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is = 
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is 
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
                 ^ is = '[masked]:Secret(name='aws_s3_key_id')'
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
      ^ is = 228462742
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
        ^ is 228462742
  ), error()
  assert (
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
      ^ is = False
  ), error()
  assert (
  ^ is False
      get_bucket_size(
          name=bucket_name,
          prefix=bucket_path,
          minio_enabled=minio_enabled,
          access_key=self.context.secret_access_key,
          key_id=self.context.access_key_id,
      )
      > size_before
  ), error()

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py', line 1384 in 'delete_all'

1376|                      get_bucket_size(
1377|                          name=bucket_name,
1378|                          prefix=bucket_path,
1379|                          minio_enabled=minio_enabled,
1380|                          access_key=self.context.secret_access_key,
1381|                          key_id=self.context.access_key_id,
1382|                      )
1383|                      > size_before
1384|>                 ), error()
1385|  
1386|          finally:
1387|              with Finally("I drop the table on each node"):
/s3/aws s3 zero copy replication/insert multiple replicas XFail 51s 919ms
Under investigation
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 456, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 434, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 333, in aws_s3_regression
    Feature(test=load("s3.tests.zero_copy_replication", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2219, in aws_s3
    outline()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2204, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 1189, in insert_multiple_replicas
    assert added_size >= expected * 0.99, error()
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert added_size >= expected * 0.99, error()

Assertion values
  assert added_size >= expected * 0.99, error()
         ^ is 0
  assert added_size >= expected * 0.99, error()
                       ^ is 6306510
  assert added_size >= expected * 0.99, error()
                                ^ is = 6243444.9
  assert added_size >= expected * 0.99, error()
                    ^ is = False
  assert added_size >= expected * 0.99, error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py', line 1189 in 'insert_multiple_replicas'

1181|                      name=bucket_name,
1182|                      prefix=bucket_path,
1183|                      minio_enabled=minio_enabled,
1184|                      access_key=self.context.secret_access_key,
1185|                      key_id=self.context.access_key_id,
1186|                  )
1187|                  added_size = current_size - size_before
1188|  
1189|>                 assert added_size >= expected * 0.99, error()
1190|                  assert added_size <= expected * 1.01, error()
1191|  
1192|          finally:
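
The failing assertion pair bounds the observed bucket growth to within 1% of the expected insert size; a minimal recap of that band check with this run's values (helper name is illustrative, not from the test source):

    def within_one_percent(added_size, expected):
        """Return True when added_size lies in [0.99 * expected, 1.01 * expected]."""
        return expected * 0.99 <= added_size <= expected * 1.01

    # Values from this run: expected = 6306510 and added_size = 0, so the
    # lower bound of 6243444.9 is not met and the first assertion fails.
    # within_one_percent(0, 6306510)  -> False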
/s3/aws s3 zero copy replication/ttl delete XFail 49s 394ms
https://github.com/ClickHouse/ClickHouse/issues/22679
AssertionError
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 456, in 
    regression()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 434, in regression
    aws_s3_regression(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/regression.py", line 333, in aws_s3_regression
    Feature(test=load("s3.tests.zero_copy_replication", "aws_s3"))(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2219, in aws_s3
    outline()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 2204, in outline
    scenario()
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/zero_copy_replication.py", line 1643, in ttl_delete
    check_query_node(
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/common.py", line 693, in check_query_node
    assert r == expected, error()
AssertionError: Oops! Assertion failed

The following assertion was not satisfied
  assert r == expected, error()

Assertion values
  assert r == expected, error()
         ^ is '1441794'
  assert r == expected, error()
              ^ is '1310721'
  assert r == expected, error()
           ^ is = False
    @@ -1 +1 @@
    -1441794
    +1310721
  assert r == expected, error()
  ^ is False

Where
  File '/home/ubuntu/_work/ClickHouse/ClickHouse/s3/../s3/tests/common.py', line 693 in 'check_query_node'

685|  
686|  @TestStep(Then)
687|  def check_query_node(self, node, num, query, expected):
688|      node = current().context.node
689|  
690|      with By(f"executing query {num}", description=query):
691|          r = node.query(query).output.strip()
692|          with Then(f"result should match the expected", description=expected):
693|>             assert r == expected, error()
694|  
695|  
696|  def get_s3_file_content(cluster, bucket, filename, decode=True):

Results

Test Name Result Duration
/s3 OK 1h 0m
/s3/aws s3 table function OK 2m 55s
/s3/aws s3 table function/auto OK 11s 447ms
/s3/aws s3 table function/compression OK 11s 600ms
/s3/aws s3 table function/credentials OK 1s 307ms
/s3/aws s3 table function/data format OK 8s 842ms
/s3/aws s3 table function/multipart OK 4s 754ms
/s3/aws s3 table function/multiple columns OK 1s 506ms
/s3/aws s3 table function/partition OK 1s 674ms
/s3/aws s3 table function/remote host filter OK 35s 127ms
/s3/aws s3 table function/syntax OK 1s 413ms
/s3/aws s3 table function/wildcard OK 20s 408ms
/s3/aws s3 table function/ssec encryption check Skip 917us
/s3/aws s3 table function/ssec OK 1m 17s
/s3/aws s3 table function/ssec/auto OK 7s 410ms
/s3/aws s3 table function/ssec/compression OK 7s 146ms
/s3/aws s3 table function/ssec/credentials OK 1s 96ms
/s3/aws s3 table function/ssec/data format OK 5s 808ms
/s3/aws s3 table function/ssec/multipart OK 2s 737ms
/s3/aws s3 table function/ssec/multiple columns OK 1s 342ms
/s3/aws s3 table function/ssec/partition OK 820ms
/s3/aws s3 table function/ssec/remote host filter OK 36s 382ms
/s3/aws s3 table function/ssec/syntax OK 1s 83ms
/s3/aws s3 table function/ssec/wildcard OK 3s 432ms
/s3/aws s3 invalid table function OK 2s 11ms
/s3/aws s3 invalid table function/empty path OK 260ms
/s3/aws s3 invalid table function/empty structure OK 256ms
/s3/aws s3 invalid table function/invalid compression OK 238ms
/s3/aws s3 invalid table function/invalid credentials OK 375ms
/s3/aws s3 invalid table function/invalid format OK 438ms
/s3/aws s3 invalid table function/invalid path OK 218ms
/s3/aws s3 invalid table function/invalid structure OK 209ms
/s3/aws s3 disk OK 31m 10s
/s3/aws s3 disk/access OK 40s 818ms
/s3/aws s3 disk/access skip check OK 38s 745ms
/s3/aws s3 disk/add storage OK 1m 24s
/s3/aws s3 disk/alter move OK 53s 195ms
/s3/aws s3 disk/alter on cluster modify ttl OK 1m 30s
/s3/aws s3 disk/cache XFail 1ms
/s3/aws s3 disk/cache default XFail 925us
/s3/aws s3 disk/cache path XFail 763us
/s3/aws s3 disk/compact parts OK 41s 370ms
/s3/aws s3 disk/config over restart OK 1m 19s
/s3/aws s3 disk/default move factor OK 54s 958ms
/s3/aws s3 disk/delete OK 2m 42s
/s3/aws s3 disk/download appropriate disk OK 1m 13s
/s3/aws s3 disk/environment credentials OK 1m 15s
/s3/aws s3 disk/exports OK 43s 93ms
/s3/aws s3 disk/generic url XFail 2ms
/s3/aws s3 disk/imports OK 42s 906ms
/s3/aws s3 disk/low cardinality offset XFail 43s 611ms
/s3/aws s3 disk/max single part upload size syntax OK 48s 514ms
/s3/aws s3 disk/mergetree OK 3m 7s
/s3/aws s3 disk/mergetree collapsing OK 47s 916ms
/s3/aws s3 disk/mergetree versionedcollapsing OK 47s 384ms
/s3/aws s3 disk/metadata OK 45s 539ms
/s3/aws s3 disk/min bytes for seek syntax OK 43s 91ms
/s3/aws s3 disk/multiple storage OK 52s 181ms
/s3/aws s3 disk/multiple storage query OK 54s 492ms
/s3/aws s3 disk/perform ttl move on insert OK 1m 21s
/s3/aws s3 disk/perform ttl move on insert default OK 45s 606ms
/s3/aws s3 disk/performance ttl move OK 1m 3s
/s3/aws s3 disk/remote host filter OK 1m 24s
/s3/aws s3 disk/restart OK 14s 861ms
/s3/aws s3 disk/specific url OK 41s 110ms
/s3/aws s3 disk/syntax OK 47s 281ms
/s3/aws s3 disk/wide parts OK 39s 896ms
/s3/aws s3 disk/ssec Skip 1ms
/s3/aws s3 sanity OK 50s 722ms
/s3/aws s3 sanity/sanity OK 8s 736ms
/s3/aws s3 invalid disk OK 3m 54s
/s3/aws s3 invalid disk/access default OK 17s 685ms
/s3/aws s3 invalid disk/access failed OK 18s 713ms
/s3/aws s3 invalid disk/access failed skip check OK 39s 593ms
/s3/aws s3 invalid disk/cache path conflict XFail 683us
/s3/aws s3 invalid disk/empty endpoint OK 8s 609ms
/s3/aws s3 invalid disk/invalid endpoint OK 2m 10s
/s3/aws s3 invalid disk/invalid type OK 19s 274ms
/s3/aws s3 zero copy replication OK 17m 6s
/s3/aws s3 zero copy replication/Bucket should be empty before test begins XFail 3s 315ms
/s3/aws s3 zero copy replication/add replica XFail 48s 477ms
/s3/aws s3 zero copy replication/alter OK 53s 978ms
/s3/aws s3 zero copy replication/alter repeat OK 1m 48s
/s3/aws s3 zero copy replication/check refcount after mutation OK 3s 505ms
/s3/aws s3 zero copy replication/consistency during double mutation OK 22s 214ms
/s3/aws s3 zero copy replication/default value OK 13s 506ms
/s3/aws s3 zero copy replication/delete OK 52s 695ms
/s3/aws s3 zero copy replication/delete all XFail 51s 601ms
/s3/aws s3 zero copy replication/drop alter replica OK 1m 6s
/s3/aws s3 zero copy replication/drop replica OK 1m 10s
/s3/aws s3 zero copy replication/global setting OK 52s 176ms
/s3/aws s3 zero copy replication/insert multiple replicas XFail 51s 919ms
/s3/aws s3 zero copy replication/metadata OK 43s 437ms
/s3/aws s3 zero copy replication/performance alter OK 1m 24s
/s3/aws s3 zero copy replication/performance insert OK 1m 11s
/s3/aws s3 zero copy replication/performance select OK 1m 20s
/s3/aws s3 zero copy replication/ttl delete XFail 49s 394ms
/s3/aws s3 zero copy replication/ttl move OK 55s 50ms
/s3/aws s3 reconnect OK 2m 22s
/s3/aws s3 reconnect/local and s3 disk OK 45s 745ms
/s3/aws s3 reconnect/local and s3 volumes OK 47s 81ms
/s3/aws s3 reconnect/s3 disk OK 49s 747ms
/s3/aws s3 backup Skip 1ms

Generated by TestFlows Open-Source Test Framework v2.0.231130.1212236