http-client-brread-timeout: Http client with time-limited brRead

[ library, mit, network ]

Http client with timeouts applied in between body read events.

Note that the response timeout in http-client is applied only when receiving the response headers, which is not always satisfactory given that a slow server may send the rest of the response very slowly.


Versions [RSS] 0.1.0.0, 0.1.0.1, 0.1.0.2, 0.1.1.0
Change log Changelog.md
Dependencies base (>=4.8 && <5), bytestring, http-client (>=0.5.0) [details]
License MIT
Copyright 2022 Alexey Radkov
Author Alexey Radkov <alexey.radkov@gmail.com>
Maintainer Alexey Radkov <alexey.radkov@gmail.com>
Category Network
Home page https://github.com/lyokha/http-client-brread-timeout
Source repo head: git clone https://github.com/lyokha/http-client-brread-timeout
Uploaded by lyokha at 2022-06-28T16:29:42Z
Distributions NixOS:0.1.1.0
Reverse Dependencies 2 direct, 0 indirect [details]
Downloads 404 total (13 in the last 30 days)
Rating (no votes yet) [estimated by Bayesian average]
Status Docs available [build log]
Last success reported on 2022-06-28 [all 1 reports]

Readme for http-client-brread-timeout-0.1.1.0


Http client with time-limited brRead


Http client with timeouts applied in between body read events.

Note that the response timeout in http-client is applied only when receiving the response headers, which is not always satisfactory given that a slow server may send the rest of the response very slowly.
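The general technique is straightforward: instead of one timeout covering only the headers, wrap every individual body-read action in its own timeout. The following is a minimal sketch of that idea, not the package's actual internals; it uses System.Timeout from base, with a fake delayed chunk reader standing in for http-client's BodyReader.

```haskell
{-# LANGUAGE LambdaCase #-}

import Control.Concurrent (threadDelay)
import Data.IORef (newIORef, atomicModifyIORef')
import System.Timeout (timeout)

-- Drain a body reader, allowing at most 'limit' microseconds per chunk.
-- An empty chunk signals the end of the body, as with http-client's
-- BodyReader; Nothing means a single read stalled past the limit.
readWithTimeout :: Int -> IO String -> IO (Maybe String)
readWithTimeout limit next = go []
  where
    go acc = timeout limit next >>= \case
        Nothing    -> return Nothing
        Just ""    -> return $ Just $ concat $ reverse acc
        Just chunk -> go (chunk : acc)

-- Build a fake reader that sleeps the given number of microseconds
-- before yielding each chunk, then yields "" forever.
mkReader :: [(Int, String)] -> IO (IO String)
mkReader chunks = do
    ref <- newIORef chunks
    return $ do
        (delay, chunk) <- atomicModifyIORef' ref $ \case
            []       -> ([], (0, ""))
            (c : cs) -> (cs, c)
        threadDelay delay
        return chunk

main :: IO ()
main = do
    fast <- mkReader [(1000, "1\n"), (1000, "2\n")]
    readWithTimeout 50000 fast >>= print   -- prints Just "1\n2\n"
    slow <- mkReader [(1000, "1\n"), (200000, "2\n")]
    readWithTimeout 50000 slow >>= print   -- prints Nothing
```

The second call fails even though the first chunk arrived quickly, because the timeout is re-armed for every read rather than applied once up front.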

How do I test this?

A slow server can be emulated in Nginx with the following configuration.

user                    nobody;
worker_processes        2;

events {
    worker_connections  1024;
}

http {
    default_type        application/octet-stream;
    sendfile            on;

    server {
        listen          8010;
        server_name     main;

        location /slow {
            echo 1; echo_flush;
            # send extra chunks of the response body once every 20 sec
            echo_sleep 20; echo 2; echo_flush;
            echo_sleep 20; echo 3; echo_flush;
            echo_sleep 20; echo 4;
        }

        location /very/slow {
            echo 1; echo_flush;
            echo_sleep 20; echo 2; echo_flush;
            # chunk 3 is extremely slow (40 sec)
            echo_sleep 40; echo 3; echo_flush;
            echo_sleep 20; echo 4;
        }
    }
}

A GHCi session:

Prelude> import Network.HTTP.Client as HTTP.Client
Prelude HTTP.Client> import Network.HTTP.Client.BrReadWithTimeout as BrReadWithTimeout
Prelude HTTP.Client BrReadWithTimeout> httpManager = newManager defaultManagerSettings
Prelude HTTP.Client BrReadWithTimeout> man <- httpManager
Prelude HTTP.Client BrReadWithTimeout> reqVerySlow <- parseRequest "GET http://127.0.0.1:8010/very/slow"
Prelude HTTP.Client BrReadWithTimeout> reqSlow <- parseRequest "GET http://127.0.0.1:8010/slow"
Prelude HTTP.Client BrReadWithTimeout> :set +s
Prelude HTTP.Client BrReadWithTimeout> httpLbs reqVerySlow man
Response {responseStatus = Status {statusCode = 200, statusMessage = "OK"}, responseVersion = HTTP/1.1, responseHeaders = [("Server","nginx/1.22.0"),("Date","Thu, 23 Jun 2022 22:04:02 GMT"),("Content-Type","application/octet-stream"),("Transfer-Encoding","chunked"),("Connection","keep-alive")], responseBody = "1\n2\n3\n4\n", responseCookieJar = CJ {expose = []}, responseClose' = ResponseClose, responseOriginalRequest = Request {
  host                 = "127.0.0.1"
  port                 = 8010
  secure               = False
  requestHeaders       = []
  path                 = "/very/slow"
  queryString          = ""
  method               = "GET"
  proxy                = Nothing
  rawBody              = False
  redirectCount        = 10
  responseTimeout      = ResponseTimeoutDefault
  requestVersion       = HTTP/1.1
  proxySecureMode      = ProxySecureWithConnect
}
}
(80.09 secs, 1,084,840 bytes)
Prelude HTTP.Client BrReadWithTimeout> httpLbsBrReadWithTimeout reqVerySlow man
*** Exception: HttpExceptionRequest Request {
  host                 = "127.0.0.1"
  port                 = 8010
  secure               = False
  requestHeaders       = []
  path                 = "/very/slow"
  queryString          = ""
  method               = "GET"
  proxy                = Nothing
  rawBody              = False
  redirectCount        = 10
  responseTimeout      = ResponseTimeoutMicro 30000000
  requestVersion       = HTTP/1.1
  proxySecureMode      = ProxySecureWithConnect
}
 ResponseTimeout
Prelude HTTP.Client BrReadWithTimeout> httpLbsBrReadWithTimeout reqSlow man
Response {responseStatus = Status {statusCode = 200, statusMessage = "OK"}, responseVersion = HTTP/1.1, responseHeaders = [("Server","nginx/1.22.0"),("Date","Thu, 23 Jun 2022 22:08:46 GMT"),("Content-Type","application/octet-stream"),("Transfer-Encoding","chunked"),("Connection","keep-alive")], responseBody = "1\n2\n3\n4\n", responseCookieJar = CJ {expose = []}, responseClose' = ResponseClose, responseOriginalRequest = Request {
  host                 = "127.0.0.1"
  port                 = 8010
  secure               = False
  requestHeaders       = []
  path                 = "/slow"
  queryString          = ""
  method               = "GET"
  proxy                = Nothing
  rawBody              = False
  redirectCount        = 10
  responseTimeout      = ResponseTimeoutDefault
  requestVersion       = HTTP/1.1
  proxySecureMode      = ProxySecureWithConnect
}
}
(60.07 secs, 1,082,880 bytes)

Here, the first request uses the standard httpLbs which, after receiving the first part of the response in time (the headers and the first chunk of the body), no longer applies any timeout and may last as long as the response takes: in this case it lasts 80 seconds and returns successfully. In the second request, httpLbsBrReadWithTimeout also receives the first chunk in time, and the second chunk arrives in 20 seconds; but because the third chunk would take 40 seconds, which exceeds the default response timeout (30 seconds), the function throws a ResponseTimeout exception 50 seconds after the start of the request. In the third request, httpLbsBrReadWithTimeout returns successfully after 60 seconds, because every chunk of the response arrived within 20 seconds and never triggered the timeout.
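The exception in the second request suggests that the between-read limit is taken from the request's responseTimeout field (it shows ResponseTimeoutMicro 30000000, the default). Assuming that is the case, a hedged sketch of overriding the per-read limit in a standalone program might look like this; the URL and the 25-second value are illustrative only.

```haskell
import Network.HTTP.Client
import Network.HTTP.Client.BrReadWithTimeout

main :: IO ()
main = do
    man <- newManager defaultManagerSettings
    req <- parseRequest "GET http://127.0.0.1:8010/slow"
    -- Raise the limit to 25 s (the value is in microseconds); with the
    -- 20-second chunk cadence above, this request should still succeed.
    let req' = req { responseTimeout = responseTimeoutMicro 25000000 }
    resp <- httpLbsBrReadWithTimeout req' man
    print $ responseBody resp
```

With the default 30-second timeout the /slow location already succeeds, so lowering the value below 20 seconds instead would make this request fail with ResponseTimeout, mirroring the /very/slow case above.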