I’m getting timeouts in production but no problems locally

For the past few hours I’ve been getting timeouts on API requests. Here are some logs:

[Nest] 2314  - 04/08/2023, 2:22:09 PM   ERROR [ExceptionsHandler] connect ETIMEDOUT 137.221.106.102:443
Error: connect ETIMEDOUT 137.221.106.102:443
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1247:16)
[Nest] 2314  - 04/08/2023, 2:22:11 PM   ERROR [ExceptionsHandler] connect ETIMEDOUT 137.221.106.102:443
Error: connect ETIMEDOUT 137.221.106.102:443
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1247:16)
[Nest] 2314  - 04/08/2023, 2:22:11 PM   ERROR [ExceptionsHandler] connect ETIMEDOUT 137.221.106.102:443
Error: connect ETIMEDOUT 137.221.106.102:443
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1247:16)
[Nest] 2314  - 04/08/2023, 2:22:11 PM   ERROR [ExceptionsHandler] connect ETIMEDOUT 137.221.106.102:443
Error: connect ETIMEDOUT 137.221.106.102:443
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1247:16)
[Nest] 2314  - 04/08/2023, 2:22:11 PM   ERROR [ExceptionsHandler] connect ETIMEDOUT 137.221.106.102:443
Error: connect ETIMEDOUT 137.221.106.102:443
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1247:16)
[Nest] 2314  - 04/08/2023, 2:22:11 PM   ERROR [ExceptionsHandler] connect ETIMEDOUT 137.221.106.102:443
Error: connect ETIMEDOUT 137.221.106.102:443
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1247:16)

Here is a traceroute:

$ traceroute 137.221.106.102
traceroute to 137.221.106.102 (137.221.106.102), 30 hops max, 60 byte packets
 1  217.182.174.252 (217.182.174.252)  0.798 ms  0.907 ms  1.081 ms
 2  10.50.40.253 (10.50.40.253)  0.124 ms 10.50.40.252 (10.50.40.252)  0.159 ms  0.194 ms
 3  10.17.49.8 (10.17.49.8)  0.187 ms  0.171 ms 10.17.49.10 (10.17.49.10)  0.095 ms
 4  10.95.66.16 (10.95.66.16)  0.240 ms 10.95.66.20 (10.95.66.20)  0.079 ms  0.113 ms
 5  10.95.64.154 (10.95.64.154)  1.190 ms 10.95.64.0 (10.95.64.0)  1.697 ms 10.95.64.142 (10.95.64.142)  1.187 ms
 6  lon-drch-sbb1-nc5.uk.eu (54.36.50.230)  4.534 ms lon-thw-sbb1-nc5.uk.eu (54.36.50.240)  4.270 ms  4.553 ms
 7  nyc-ny1-sbb1-8k.nj.us (192.99.146.127)  79.038 ms  72.929 ms nyc-ny1-sbb2-8k.nj.us (192.99.146.133)  73.407 ms
 8  10.200.3.137 (10.200.3.137)  79.414 ms  79.345 ms 10.200.3.133 (10.200.3.133)  77.858 ms
 9  * * *
10  * * *
11  * * *
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *

From my local machine, it works:

  1    <1 ms    <1 ms    <1 ms  192.168.1.1
  2    10 ms     9 ms     9 ms  XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  3    13 ms    13 ms    15 ms  XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  4    15 ms    13 ms    14 ms  57.146.6.194.rev.sfr.net [194.6.146.57]
  5    14 ms    13 ms    13 ms  57.146.6.194.rev.sfr.net [194.6.146.57]
  6    13 ms    13 ms    14 ms  ae27.parigi51.par.seabone.net [213.144.168.174]
  7   118 ms   120 ms   120 ms  et5-1-5.miami15.mia.seabone.net [89.221.41.177]
  8   122 ms   128 ms   121 ms  blizzard.miami15.mia.seabone.net [89.221.41.3]
  9   160 ms   160 ms   160 ms  137.221.76.35
 10   163 ms   160 ms   160 ms  xe-0-0-1-1-br01-eqda6.as57976.net [137.221.65.62]
 11   214 ms   176 ms   181 ms  et-0-0-2-br02-swlv10.as57976.net [137.221.65.67]
 12   174 ms   162 ms   160 ms  et-0-0-1-pe04-swlv10.cs57976.net [137.221.83.95]
 13   164 ms   160 ms   160 ms  las-swlv10-ia-bons-04.as57976.net [137.221.66.23]
 14   163 ms   160 ms   175 ms  137.221.105.17
 15   168 ms   165 ms   165 ms  137.221.106.102

Hi, could you please check since when the PvP ladder API has been affected? The reason I’m asking is that I suspect its data is outdated, which also results in odd estimated season cutoffs on third-party websites. Ideally they’d update that along with various other things, thanks.

The last call logged on my side is:

2023-04-08 03:25:23 AM UTC

Experiencing the same problem. It started for me about 16 hours ago. I could look into the logs for the exact timestamp if that matters.

I haven’t been trying to debug it beyond some basic troubleshooting. eu.api.blizzard.com and kr.api.blizzard.com were timing out and not responding to pings. us.api.blizzard.com still worked at least a few hours ago… but this whole thing seems unstable.

Same issue here. I have a feeling it’s related to the DDoS attacks that happened yesterday, and that Blizzard has blocked a few providers/IP ranges. I have tried several servers from the same provider and all of them are getting timeouts.

Temporary workaround that works for me; maybe it’ll help others until this gets sorted out. I’ve forced us.api.blizzard.com to resolve to the old DNS record from before they updated it.

This works for me, for now:

diff --git a/docker-compose.prod.yml b/docker-compose.prod.yml
index 3e352dd..2212337 100644
--- a/docker-compose.prod.yml
+++ b/docker-compose.prod.yml
@@ -80,6 +80,10 @@ services:
     depends_on:
       redis: { condition: service_healthy }
       mariadb: { condition: service_healthy }
+    extra_hosts:
+      - 'us.api.blizzard.com:117.52.35.145'
+      - 'eu.api.blizzard.com:117.52.35.145'
+      - 'kr.api.blizzard.com:117.52.35.145'

 networks:
   webproxy:

(You can do the same by editing /etc/hosts if you’re running without Docker.)
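For completeness, the /etc/hosts version looks like this (same assumption that 117.52.35.145 keeps answering for all three hostnames; remove the entries once normal routing is restored):

117.52.35.145 us.api.blizzard.com
117.52.35.145 eu.api.blizzard.com
117.52.35.145 kr.api.blizzard.com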

Currently this IP is only assigned to kr.api.blizzard.com in global DNS:

$ dig us.api.blizzard.com @1.1.1.1

; <<>> DiG 9.16.37-Debian <<>> us.api.blizzard.com @1.1.1.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2070
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;us.api.blizzard.com.		IN	A

;; ANSWER SECTION:
us.api.blizzard.com.	534	IN	A	137.221.106.121

;; Query time: 3 msec
;; SERVER: 1.1.1.1#53(1.1.1.1)
;; WHEN: Sat Apr 08 21:50:30 UTC 2023
;; MSG SIZE  rcvd: 64


kk@w6:~
$ dig eu.api.blizzard.com @1.1.1.1

; <<>> DiG 9.16.37-Debian <<>> eu.api.blizzard.com @1.1.1.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62037
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;eu.api.blizzard.com.		IN	A

;; ANSWER SECTION:
eu.api.blizzard.com.	484	IN	A	37.244.28.187

;; Query time: 3 msec
;; SERVER: 1.1.1.1#53(1.1.1.1)
;; WHEN: Sat Apr 08 21:50:36 UTC 2023
;; MSG SIZE  rcvd: 64


kk@w6:~
$ dig kr.api.blizzard.com @1.1.1.1

; <<>> DiG 9.16.37-Debian <<>> kr.api.blizzard.com @1.1.1.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4908
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;kr.api.blizzard.com.		IN	A

;; ANSWER SECTION:
kr.api.blizzard.com.	600	IN	A	117.52.35.145

;; Query time: 11 msec
;; SERVER: 1.1.1.1#53(1.1.1.1)
;; WHEN: Sat Apr 08 21:50:40 UTC 2023
;; MSG SIZE  rcvd: 64

But the other two, i.e.:

eu.api.blizzard.com.	484	IN	A	37.244.28.187
us.api.blizzard.com.	534	IN	A	137.221.106.121

don’t really work for me at the moment (they time out and don’t respond to pings when connecting from OVH PL @ 51.68.155.183).

They do work from home though (residential network).

Hi. I can also confirm the timeout problem on the Hetzner network, and I also get timeouts accessing EU battle.net (37.244.28.102) when requesting a token.

The workaround does not work for me; from 117.52.35.145 I get 403 Forbidden (my token is fresh, I renew it via my home internet connection).

For OAuth tokens, I’ve switched to kr.battle.net without any other modifications, since for some reason everything at kr. continues to work for me. Tokens were always global anyway: even before this incident, you could use a token from the KR gateway to talk to the EU and US gateways.
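A minimal sketch of that flow, assuming the standard client-credentials grant against the regional battle.net OAuth endpoint (Node 18+ for the global fetch; CLIENT_ID/CLIENT_SECRET are placeholders):

// Fetch an OAuth token via the KR gateway, which is still reachable for me.
const basic = Buffer.from(`${process.env.CLIENT_ID}:${process.env.CLIENT_SECRET}`).toString('base64');

const tokenRes = await fetch('https://kr.battle.net/oauth/token', {
  method: 'POST',
  headers: {
    Authorization: `Basic ${basic}`,
    'Content-Type': 'application/x-www-form-urlencoded',
  },
  body: 'grant_type=client_credentials',
});
const { access_token } = await tokenRes.json();
// Since tokens are global, the same access_token should be accepted by the
// EU and US gateways once they become reachable again.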

But hey, at least you’re getting an actual HTTP response :stuck_out_tongue: That’s something. The fact that the host behind it replies with a valid SSL cert for us.api.blizzard.com and the others also suggests they’re more or less interchangeable.
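If you want to check that yourself, here’s a rough sketch using Node’s built-in tls module: it connects to the shared IP from the workaround above with the us.api.blizzard.com SNI and prints the certificate subject and SANs the host presents:

import * as tls from 'node:tls';

// Connect to the KR gateway IP while asking for us.api.blizzard.com via SNI,
// then print the certificate it presents.
const socket = tls.connect(
  { host: '117.52.35.145', port: 443, servername: 'us.api.blizzard.com' },
  () => {
    const cert = socket.getPeerCertificate();
    console.log(cert.subject, cert.subjectaltname);
    socket.end();
  },
);
socket.on('error', (err) => console.error(err.message));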

In my case I talk to the SC2 community APIs and get correct responses back, even when requesting data from another region/realm.

Blocking entire providers is a mess. I know it’s probably meant to prevent attacks, but at least acknowledge it instead of ignoring the fact that there are apps using the API which are now unusable.

Has anyone figured out a workaround? I’m still getting timeouts and my app is basically dead right now. Even if I get a token from the KR servers, I still can’t use the API.

I’m using a proxy server, but it’s a degraded solution that adds a lot of latency… I hope Blizzard will communicate about this or unblock the situation…
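For anyone wanting to try the same, here’s a rough sketch of routing the API calls through an outbound proxy with axios (which Nest’s HttpModule wraps) and the https-proxy-agent package; the proxy URL is a placeholder:

import axios from 'axios';
import { HttpsProxyAgent } from 'https-proxy-agent';

// Route HTTPS traffic to the Blizzard API through an outbound proxy.
// 'http://proxy.example.com:3128' is a placeholder, not a real endpoint.
const agent = new HttpsProxyAgent('http://proxy.example.com:3128');

const client = axios.create({
  baseURL: 'https://eu.api.blizzard.com',
  httpsAgent: agent,
  proxy: false, // disable axios' built-in proxy handling so the agent is used
  timeout: 10000,
});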

Edit: 12-04-2023
The requests seem to work again without the proxy.

Unfortunately, DDoS mitigation efforts will be a nuisance from time to time; they are needed to ensure platform stability for our players. Are any of you still seeing impacts on your providers?


Thank you for the update. I can confirm the issue has been gone for me since ~12-04-2023. I’m able to communicate with the APIs again, without any workarounds.