./cx_limit_integration_test.exe : TestRandomGenerator running with seed -56759716
At line:1 char:1
+ ./cx_limit_integration_test.exe --gtest_filter=*IpVersions/Connection ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (TestRandomGener... seed -56759716:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
[info][testing] [test/integration/fake_upstream.cc:465] starting fake server on socket 0.0.0.0:0. Address version is v4. UDP=false
[debug][misc] [test/integration/integration.cc:352] Setting up file-based LDS
[debug][misc] [test/config/utility.cc:607] No tap path set for tests
[debug][misc] [test/integration/integration.cc:384] Running Envoy with configuration:
static_resources:
  clusters:
  - name: cluster_0
    connect_timeout: 5s
    load_assignment:
      cluster_name: cluster_0
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 60253
  secrets:
  - name: secret_static_0
    tls_certificate:
      certificate_chain:
        inline_string: DUMMY_INLINE_BYTES
      private_key:
        inline_string: DUMMY_INLINE_BYTES
      password:
        inline_string: DUMMY_INLINE_BYTES
dynamic_resources:
  lds_config:
    path: c:\tmp/1644_15953080324178456
admin:
  access_log_path: NUL
  address:
    socket_address:
      address: 127.0.0.1
      port_value: 0
layered_runtime:
  layers:
  - name: static_layer
    static_layer:
      overload.global_downstream_max_connections: 4
      envoy.resource_limits.listener.listener_0.connection_limit: 100
  - name: admin
    admin_layer:
      {}
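
Note on the configuration above: the static runtime layer caps the whole process at 4 downstream connections (overload.global_downstream_max_connections: 4) while listener_0 itself would allow 100, so the global cap is the limit this test is exercising. As a rough illustration of the accept-time check such a limit implies — a minimal sketch under assumed semantics, not Envoy's actual implementation — consider:

#include <atomic>
#include <cstdint>
#include <iostream>

// Hypothetical global accept gate: one process-wide counter consulted on
// every new downstream connection, mirroring the semantics of the
// overload.global_downstream_max_connections runtime key. Illustrative only.
class GlobalConnectionGate {
public:
  explicit GlobalConnectionGate(uint64_t max) : max_(max) {}

  // Returns true and takes a slot if we are still under the cap.
  bool tryAccept() {
    uint64_t current = count_.load();
    while (current < max_) {
      if (count_.compare_exchange_weak(current, current + 1)) {
        return true;
      }
    }
    return false; // at the cap: the connection should be rejected/closed
  }

  void release() { count_.fetch_sub(1); }

private:
  const uint64_t max_;
  std::atomic<uint64_t> count_{0};
};

int main() {
  GlobalConnectionGate gate(4); // value from the test's static_layer
  for (int i = 0; i < 6; ++i) {
    std::cout << "connection " << i
              << (gate.tryAccept() ? " accepted" : " rejected") << "\n";
  }
}

With the cap at 4, the first four simulated connections are accepted and the rest rejected, which is the behavior the log's C0/C4/C8 connection attempts are probing.
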
[info][testing] [test/integration/server.cc:92] starting integration test server
[info][main] [source/server/server.cc:297] initializing epoch 0 (base id=0, hot restart version=disabled)
[info][main] [source/server/server.cc:299] statically linked extensions:
[info][main] [source/server/server.cc:301] envoy.udp_listeners: raw_udp_listener
[info][main] [source/server/server.cc:301] envoy.upstreams: envoy.filters.connection_pools.http.generic
[info][main] [source/server/server.cc:301] envoy.grpc_credentials: envoy.grpc_credentials.default
[info][main] [source/server/server.cc:301] envoy.resolvers: envoy.ip
[info][main] [source/server/server.cc:301] envoy.bootstrap: envoy.extensions.network.socket_interface.default_socket_interface
[info][main] [source/server/server.cc:301] envoy.transport_sockets.downstream: envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
[info][main] [source/server/server.cc:301] envoy.access_loggers: envoy.access_loggers.file, envoy.file_access_log
[info][main] [source/server/server.cc:301] envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns
[info][main] [source/server/server.cc:301] envoy.filters.network: envoy.filters.network.http_connection_manager, envoy.filters.network.tcp_proxy, envoy.http_connection_manager, envoy.tcp_proxy
[info][main] [source/server/server.cc:301] envoy.filters.http: add-body-filter, add-trailers-filter, call-decodedata-once-filter, decode-headers-only, decode-headers-return-stop-all-filter, encode-headers-only, encode-headers-return-stop-all-filter, envoy.filters.http.on_demand, envoy.filters.http.router, envoy.router, modify-buffer-filter, passthrough-filter, pause-filter, wait-for-whole-request-and-response-filter
[info][main] [source/server/server.cc:301] envoy.transport_sockets.upstream: envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
[info][main] [source/server/server.cc:317] HTTP header map info:
[info][main] [source/server/server.cc:320] request header map: 424 bytes: :authority,:method,:path,:protocol,:scheme,connection,content-length,content-type,expect,grpc-timeout,keep-alive,proxy-connection,te,transfer-encoding,upgrade,user-agent,via,x-client-trace-id,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-downstream-service-cluster,x-envoy-downstream-service-node,x-envoy-expected-rq-timeout-ms,x-envoy-external-address,x-envoy-force-trace,x-envoy-hedge-on-per-try-timeout,x-envoy-internal,x-envoy-ip-tags,x-envoy-max-retries,x-envoy-original-path,x-envoy-original-url,x-envoy-retriable-header-names,x-envoy-retriable-status-codes,x-envoy-retry-grpc-on,x-envoy-retry-on,x-envoy-upstream-alt-stat-name,x-envoy-upstream-rq-per-try-timeout-ms,x-envoy-upstream-rq-timeout-alt-response,x-envoy-upstream-rq-timeout-ms,x-forwarded-client-cert,x-forwarded-for,x-forwarded-proto,x-request-id
[info][main] [source/server/server.cc:320] request trailer map: 80 bytes:
[info][main] [source/server/server.cc:320] response header map: 272 bytes: :status,connection,content-length,content-type,date,grpc-message,grpc-status,keep-alive,location,proxy-connection,server,transfer-encoding,upgrade,via,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-degraded,x-envoy-immediate-health-check-fail,x-envoy-ratelimited,x-envoy-upstream-canary,x-envoy-upstream-healthchecked-cluster,x-envoy-upstream-service-time,x-request-id
[info][main] [source/server/server.cc:320] response trailer map: 104 bytes: grpc-message,grpc-status
[debug][main] [source/server/overload_manager_impl.cc:203] No overload action is configured for envoy.overload_actions.shrink_heap.
[debug][main] [source/server/overload_manager_impl.cc:203] No overload action is configured for envoy.overload_actions.stop_accepting_connections.
[info][main] [source/server/server.cc:439] admin address: 127.0.0.1:0
[info][main] [source/server/server.cc:567] runtime: layers:
  - name: static_layer
    static_layer:
      overload.global_downstream_max_connections: 4
      envoy.resource_limits.listener.listener_0.connection_limit: 100
  - name: admin
    admin_layer:
      {}
[info][config] [source/server/configuration_impl.cc:103] loading tracing configuration
[info][config] [source/server/configuration_impl.cc:69] loading 1 static secret(s)
[debug][config] [source/server/configuration_impl.cc:71] static secret #0: secret_static_0
[info][config] [source/server/configuration_impl.cc:75] loading 1 cluster(s)
[debug][grpc] [source/common/grpc/google_async_client_impl.cc:49] completionThread running
[debug][upstream] [source/common/upstream/upstream_impl.cc:285] transport socket match, socket default selected for host with address 127.0.0.1:60253
[debug][upstream] [source/common/upstream/cluster_manager_impl.cc:981] adding TLS initial cluster cluster_0
[debug][upstream] [source/common/upstream/upstream_impl.cc:983] initializing Primary cluster cluster_0 completed
[debug][init] [source/common/init/manager_impl.cc:45] init manager Cluster cluster_0 contains no targets
[debug][init] [source/common/init/watcher_impl.cc:14] init manager Cluster cluster_0 initialized, notifying ClusterImplBase
[debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1140] membership update for TLS cluster cluster_0 added 1 removed 0
[debug][upstream] [source/common/upstream/cluster_manager_impl.cc:107] cm init: init complete: cluster=cluster_0 primary=0 secondary=0
[debug][upstream] [source/common/upstream/cluster_manager_impl.cc:127] maybe finish initialize state: 0
[debug][upstream] [source/common/upstream/cluster_manager_impl.cc:79] cm init: adding: cluster=cluster_0 primary=0 secondary=0
[debug][upstream] [source/common/upstream/cluster_manager_impl.cc:127] maybe finish initialize state: 1
[debug][upstream] [source/common/upstream/cluster_manager_impl.cc:136] maybe finish initialize primary init clusters empty: true
[info][config] [source/server/configuration_impl.cc:79] loading 0 listener(s)
[info][config] [source/server/configuration_impl.cc:129] loading stats sink configuration
[debug][file] [source/common/filesystem/win32/watcher_impl.cc:102] created watch for directory: 'c:\tmp' handle: 0x1e0
[debug][file] [source/common/filesystem/win32/watcher_impl.cc:106] added watch for file '1644_15953080324178456' in directory 'c:\tmp'
[debug][init] [source/common/init/manager_impl.cc:20] added target LDS to init manager Server
[debug][init] [source/common/init/manager_impl.cc:45] init manager RTDS contains no targets
[debug][init] [source/common/init/watcher_impl.cc:14] init manager RTDS initialized, notifying RDTS
[info][runtime] [source/common/runtime/runtime_impl.cc:404] RTDS has finished initialization
[debug][upstream] [source/common/upstream/cluster_manager_impl.cc:198] continue initializing secondary clusters
[debug][upstream] [source/common/upstream/cluster_manager_impl.cc:127] maybe finish initialize state: 2
[debug][upstream] [source/common/upstream/cluster_manager_impl.cc:136] maybe finish initialize primary init clusters empty: true
[debug][upstream] [source/common/upstream/cluster_manager_impl.cc:151] maybe finish initialize secondary init clusters empty: true
[debug][upstream] [source/common/upstream/cluster_manager_impl.cc:174] maybe finish initialize cds api ready: false
[info][upstream] [source/common/upstream/cluster_manager_impl.cc:180] cm init: all clusters initialized
[debug][file] [source/common/filesystem/win32/watcher_impl.cc:183] notification: handle: 0x1e0 action: 1 file: A61F.tmp
[debug][file] [source/common/filesystem/win32/watcher_impl.cc:183] notification: handle: 0x1e0 action: 2 file: A61F.tmp
[debug][file] [source/common/filesystem/win32/watcher_impl.cc:183] notification: handle: 0x1e0 action: 1 file: A61F.tmp
[debug][file] [source/common/filesystem/win32/watcher_impl.cc:183] notification: handle: 0x1e0 action: 3 file: A61F.tmp
[debug][file] [source/common/filesystem/win32/watcher_impl.cc:183] notification: handle: 0x1e0 action: 1 file: A630.tmp
[debug][file] [source/common/filesystem/win32/watcher_impl.cc:183] notification: handle: 0x1e0 action: 2 file: A630.tmp
[debug][file] [source/common/filesystem/win32/watcher_impl.cc:183] notification: handle: 0x1e0 action: 1 file: A630.tmp
[debug][file] [source/common/filesystem/win32/watcher_impl.cc:183] notification: handle: 0x1e0 action: 3 file: A630.tmp
[info][testing] [test/integration/server.cc:82] listener wait complete
[info][main] [source/server/server.cc:644] all clusters initialized. initializing init manager
[debug][init] [source/common/init/manager_impl.cc:49] init manager Server initializing
[debug][init] [source/common/init/target_impl.cc:15] init manager Server initializing target LDS
[debug][config] [source/common/config/filesystem_subscription_impl.cc:50] Filesystem config refresh for c:\tmp/1644_15953080324178456
[debug][config] [source/server/listener_manager_impl.cc:391] begin add/update listener: name=listener_0 hash=10784813463872291260
[debug][config] [source/server/listener_manager_impl.cc:417] use full listener update path for listener name=listener_0 hash=10784813463872291260
[debug][config] [source/server/listener_manager_impl.cc:95] filter #0:
[debug][config] [source/server/listener_manager_impl.cc:96] name: tcp
[debug][config] [source/server/listener_manager_impl.cc:103] config: {
"@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
"stat_prefix": "tcp_stats",
"cluster": "cluster_0"
}
[debug][config] [source/server/filter_chain_manager_impl.cc:215] new fc_contexts has 1 filter chains, including 1 newly built
[debug][init] [source/common/init/target_impl.cc:15] init manager Server initializing target Listener-init-target listener_0
[debug][init] [source/common/init/manager_impl.cc:45] init manager Listener-local-init-manager listener_0 10784813463872291260 contains no targets
[debug][init] [source/common/init/watcher_impl.cc:14] init manager Listener-local-init-manager listener_0 10784813463872291260 initialized, notifying Listener-local-init-watcher listener_0
[debug][init] [source/common/init/watcher_impl.cc:14] target Listener-init-target listener_0 initialized, notifying init manager Server
[debug][config] [source/server/listener_impl.cc:105] Create listen socket for listener listener_0 on address 127.0.0.1:0
[debug][config] [source/server/listener_impl.cc:95] Set listener listener_0 socket factory local address to 127.0.0.1:60257
[debug][config] [source/server/listener_impl.cc:626] add active listener: name=listener_0, hash=10784813463872291260, address=127.0.0.1:0
[info][upstream] [source/server/lds_api.cc:72] lds: add/update listener 'listener_0'
[debug][init] [source/common/init/watcher_impl.cc:14] target LDS initialized, notifying init manager Server
[debug][init] [source/common/init/watcher_impl.cc:14] init manager Server initialized, notifying RunHelper
[info][config] [source/server/listener_manager_impl.cc:844] all dependencies initialized. starting workers
[debug][config] [source/server/listener_manager_impl.cc:855] starting worker 0
[debug][main] [source/server/worker_impl.cc:125] worker entering dispatch loop
[debug][upstream] [source/common/upstream/cluster_manager_impl.cc:981] adding TLS initial cluster cluster_0
[debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1140] membership update for TLS cluster cluster_0 added 1 removed 0
[debug][grpc] [source/common/grpc/google_async_client_impl.cc:49] completionThread running
[debug][config] [source/common/config/filesystem_subscription_impl.cc:64] Filesystem config update accepted for c:\tmp/1644_15953080324178456: version_info: "0"
resources {
  [type.googleapis.com/envoy.config.listener.v3.Listener] {
    name: "listener_0"
    address {
      socket_address {
        address: "127.0.0.1"
        port_value: 0
      }
    }
    filter_chains {
      filters {
        name: "tcp"
        typed_config {
          [type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy] {
            stat_prefix: "tcp_stats"
            cluster: "cluster_0"
          }
        }
      }
    }
  }
}
183412668: "envoy.api.v2.DiscoveryResponse"
[info][main] [source/server/server.cc:663] starting main dispatch loop
[debug][testing] [test/integration/integration.cc:459] registered 'listener_0' as port 60257.
[debug][connection] [source/common/network/connection_impl.cc:756] [C0] connecting to 127.0.0.1:60257
[debug][connection] [source/common/network/connection_impl.cc:772] [C0] connection in progress
[debug][testing] [test/integration/fake_upstream.cc:617] waiting for raw connection
[debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:236] [C1] new tcp proxy session
[debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:380] [C1] Creating connection to cluster cluster_0
[debug][pool] [source/common/tcp/original_conn_pool.cc:98] creating a new connection
[debug][pool] [source/common/tcp/original_conn_pool.cc:383] [C2] connecting
[debug][connection] [source/common/network/connection_impl.cc:756] [C2] connecting to 127.0.0.1:60253
[debug][connection] [source/common/network/connection_impl.cc:772] [C2] connection in progress
[debug][pool] [source/common/tcp/original_conn_pool.cc:125] queueing request due to no available connections
[debug][conn_handler] [source/server/connection_handler_impl.cc:423] [C1] new connection
[debug][connection] [source/common/network/connection_impl.cc:616] [C2] connected
[debug][conn_handler] [source/server/connection_handler_impl.cc:423] [C3] new connection
[debug][pool] [source/common/tcp/original_conn_pool.cc:303] [C2] assigning connection
[debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:631] TCP:onUpstreamEvent(), requestedServerName:
[debug][connection] [source/common/network/connection_impl.cc:756] [C4] connecting to 127.0.0.1:60257
[debug][connection] [source/common/network/connection_impl.cc:772] [C4] connection in progress
[debug][testing] [test/integration/fake_upstream.cc:617] waiting for raw connection
[debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:236] [C5] new tcp proxy session
[debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:380] [C5] Creating connection to cluster cluster_0
[debug][pool] [source/common/tcp/original_conn_pool.cc:98] creating a new connection
[debug][pool] [source/common/tcp/original_conn_pool.cc:383] [C6] connecting
[debug][connection] [source/common/network/connection_impl.cc:756] [C6] connecting to 127.0.0.1:60253
[debug][connection] [source/common/network/connection_impl.cc:772] [C6] connection in progress
[debug][pool] [source/common/tcp/original_conn_pool.cc:125] queueing request due to no available connections
[debug][conn_handler] [source/server/connection_handler_impl.cc:423] [C5] new connection
[debug][conn_handler] [source/server/connection_handler_impl.cc:423] [C7] new connection
[debug][connection] [source/common/network/connection_impl.cc:616] [C6] connected
[debug][pool] [source/common/tcp/original_conn_pool.cc:303] [C6] assigning connection
[debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:631] TCP:onUpstreamEvent(), requestedServerName:
[debug][connection] [source/common/network/connection_impl.cc:756] [C8] connecting to 127.0.0.1:60257
[debug][connection] [source/common/network/connection_impl.cc:772] [C8] connection in progress
[debug][testing] [test/integration/fake_upstream.cc:617] waiting for raw connection
[debug][connection] [source/common/network/connection_impl.cc:616] [C8] connected
[debug][connection] [source/common/network/connection_impl.cc:584] [C8] remote close
[debug][connection] [source/common/network/connection_impl.cc:208] [C8] closing socket: 0
[debug][connection] [source/common/network/connection_impl.cc:616] [C0] connected
[debug][connection] [source/common/network/connection_impl.cc:616] [C4] connected
[debug][connection] [source/common/network/connection_impl.cc:112] [C0] closing data_to_write=0 type=1
[debug][connection] [source/common/network/connection_impl.cc:208] [C0] closing socket: 1
[debug][connection] [source/common/network/connection_impl.cc:584] [C3] remote close
[debug][connection] [source/common/network/connection_impl.cc:208] [C3] closing socket: 0
[debug][conn_handler] [source/server/connection_handler_impl.cc:111] [C3] adding to cleanup list
[debug][connection] [source/common/network/connection_impl.cc:756] [C9] connecting to 127.0.0.1:60257
[debug][connection] [source/common/network/connection_impl.cc:772] [C9] connection in progress
[debug][testing] [test/integration/fake_upstream.cc:617] waiting for raw connection
[debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:236] [C10] new tcp proxy session
[debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:380] [C10] Creating connection to cluster cluster_0
[debug][pool] [source/common/tcp/original_conn_pool.cc:98] creating a new connection
[debug][pool] [source/common/tcp/original_conn_pool.cc:383] [C11] connecting
[debug][connection] [source/common/network/connection_impl.cc:756] [C11] connecting to 127.0.0.1:60253
[debug][connection] [source/common/network/connection_impl.cc:772] [C11] connection in progress
[debug][pool] [source/common/tcp/original_conn_pool.cc:125] queueing request due to no available connections
[debug][conn_handler] [source/server/connection_handler_impl.cc:423] [C10] new connection
[debug][connection] [source/common/network/connection_impl.cc:584] [C2] remote close
[debug][connection] [source/common/network/connection_impl.cc:208] [C2] closing socket: 0
[debug][pool] [source/common/tcp/original_conn_pool.cc:140] [C2] client disconnected
[debug][connection] [source/common/network/connection_impl.cc:112] [C1] closing data_to_write=0 type=0
[debug][connection] [source/common/network/connection_impl.cc:208] [C1] closing socket: 1
[debug][conn_handler] [source/server/connection_handler_impl.cc:111] [C1] adding to cleanup list
[debug][pool] [source/common/tcp/original_conn_pool.cc:255] [C2] connection destroyed
[debug][connection] [source/common/network/connection_impl.cc:616] [C11] connected
[debug][pool] [source/common/tcp/original_conn_pool.cc:303] [C11] assigning connection
[debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:631] TCP:onUpstreamEvent(), requestedServerName:
[info][main] [source/server/drain_manager_impl.cc:70] shutting down parent after drain
[debug][main] [source/server/server.cc:191] flushing stats
[debug][main] [source/server/server.cc:191] flushing stats
[critical][assert] [source/common/network/connection_impl.cc:85] assert failure: !ioHandle().isOpen() && delayed_close_timer_ == nullptr. Details: ConnectionImpl was unexpectedly torn down without being closed.
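
The fatal assert above is a teardown invariant: by the time a ConnectionImpl is destroyed, its socket must already be closed and no delayed-close timer may still be armed, so this failure means connection state was destroyed (here, during test server shutdown) without the close path running first. The pattern looks roughly like the following — an illustrative sketch of the invariant, not the actual Envoy source:

#include <cassert>
#include <memory>

// Sketch of a destructor-enforced close invariant. An object owning an OS
// resource asserts in its destructor that close() was called first, turning
// "torn down while still open" bugs into a loud crash in debug/test builds.
class Connection {
public:
  ~Connection() {
    // Mirrors: !ioHandle().isOpen() && delayed_close_timer_ == nullptr
    assert(!open_ && delayed_close_timer_ == nullptr &&
           "Connection was unexpectedly torn down without being closed.");
  }

  void close() {
    open_ = false;
    delayed_close_timer_.reset();
  }

private:
  bool open_{true};
  std::unique_ptr<int> delayed_close_timer_{new int(0)}; // stand-in for a timer
};

int main() {
  Connection c;
  c.close(); // proper shutdown: the destructor's invariant holds
  // Omitting close() before destruction would trip the assert, which is
  // exactly the failure mode reported in the log above.
}
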