HAProxy Management Guide

version 2.6

This document describes how to start, stop, manage, and troubleshoot HAProxy, as well as some known limitations and traps to avoid. It does not describe how to configure it (for this please read configuration.txt).

Note to documentation contributors :
This document is formatted with 80 columns per line, with an even number of spaces for indentation and without tabs. Please follow these rules strictly so that it remains easily printable everywhere. If you add sections, please update the summary below for easier searching.

Summary

  • 1. Prerequisites
  • 2. Quick reminder about HAProxy's architecture
  • 3. Starting HAProxy
  • 4. Stopping and restarting HAProxy
  • 5. File-descriptor limitations
  • 6. Memory management
  • 7. CPU usage
  • 8. Logging
  • 9. Statistics and monitoring
  • 9.1. CSV format
  • 9.2. Typed output format
  • 9.3. Unix Socket commands
  • 9.4. Master CLI
  • 9.4.1. Master CLI commands
  • 10. Tricks for easier configuration management
  • 11. Well-known traps to avoid
  • 12. Debugging and performance issues
  • 13. Security considerations

1. Prerequisites

In this document it is assumed that the reader has sufficient administration skills on a UNIX-like operating system, uses the shell on a daily basis and is familiar with troubleshooting utilities such as strace and tcpdump.


2. Quick reminder about HAProxy's architecture

HAProxy is a multi-threaded, event-driven, non-blocking daemon. This means it uses event multiplexing to schedule all of its activities instead of relying on the system to schedule between multiple activities. Most of the time it runs as a single process, so the output of "ps aux" on a system will report only one "haproxy" process, unless a soft reload is in progress and an older process is finishing its job in parallel to the new one. It is thus always easy to trace its activity using the strace utility. In order to scale with the number of available processors, by default haproxy will start one worker thread per processor it is allowed to run on. Unless explicitly configured differently, the incoming traffic is spread over all these threads, all running the same event loop. Great care is taken to limit inter-thread dependencies to the strict minimum, so as to achieve near-linear scalability. One consequence is that a given connection is served by a single thread: in order to use all available processing capacity, there must be at least as many connections as there are threads, which is almost always the case.

HAProxy is designed to isolate itself into a chroot jail during startup, where it cannot perform any file-system access at all. This is also true for the libraries it depends on (eg: libc, libssl, etc). The immediate effect is that a running process will not be able to reload a configuration file to apply changes; instead, a new process will be started using the updated configuration file. Some other less obvious effects are that some timezone files or resolver files the libc might attempt to access at run time will not be found, though this should generally not happen as they're not needed after startup. A nice consequence of this principle is that the HAProxy process is totally stateless, and no cleanup is needed after it's killed, so any killing method that works will do the right thing.

HAProxy doesn't write log files, but it relies on the standard syslog protocol to send logs to a remote server (which is often located on the same system).

HAProxy uses an internal clock to enforce timeouts; it is derived from the system's time, with unexpected drift corrected. This is done by limiting the time spent waiting in poll() for an event, and by measuring the time it really took. In practice it never waits more than one second. This explains why, when running strace over a completely idle process, periodic calls to poll() (or any of its variants) surrounded by two gettimeofday() calls are noticed. They are normal, completely harmless and so cheap that the load they imply is totally undetectable at the system scale, so there's nothing abnormal there. Example :

  16:35:40.002320 gettimeofday({1442759740, 2605}, NULL) = 0
  16:35:40.002942 epoll_wait(0, {}, 200, 1000) = 0
  16:35:41.007542 gettimeofday({1442759741, 7641}, NULL) = 0
  16:35:41.007998 gettimeofday({1442759741, 8114}, NULL) = 0
  16:35:41.008391 epoll_wait(0, {}, 200, 1000) = 0
  16:35:42.011313 gettimeofday({1442759742, 11411}, NULL) = 0

HAProxy is a TCP proxy, not a router. It deals with established connections that have been validated by the kernel, and not with packets of any form nor with sockets in other states (eg: no SYN_RECV nor TIME_WAIT), though their existence may prevent it from binding a port. It relies on the system to accept incoming connections and to initiate outgoing connections. An immediate effect of this is that there is no relation between packets observed on the two sides of a forwarded connection, which can be of different size, numbers and even family. Since a connection may only be accepted from a socket in LISTEN state, all the sockets it is listening to are necessarily visible when using the "netstat" utility to show listening sockets. Example :

  # netstat -ltnp
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address   Foreign Address   State    PID/Program name
  tcp        0      0 0.0.0.0:22      0.0.0.0:*         LISTEN   1629/sshd
  tcp        0      0 0.0.0.0:80      0.0.0.0:*         LISTEN   2847/haproxy
  tcp        0      0 0.0.0.0:443     0.0.0.0:*         LISTEN   2847/haproxy

3. Starting HAProxy

HAProxy is started by invoking the "haproxy" program with a number of arguments passed on the command line. The actual syntax is :

  $ haproxy [<options>]*

where [<options>]* is any number of options. An option always starts with '-' followed by one or more letters, and possibly followed by one or multiple extra arguments. Without any option, HAProxy displays the help page with a reminder about supported options. Available options may vary slightly based on the operating system. A fair number of these options overlap with an equivalent one in the "global" section. In this case, the command line always has precedence over the configuration file, so that the command line can be used to quickly enforce some settings without touching the configuration files. The current list of options is :

  • -- <cfgfile>* : all the arguments following "--" are paths to configuration file/directory to be loaded and processed in the declaration order. It is mostly useful when relying on the shell to load many files that are numerically ordered. See also "-f". The difference between "--" and "-f" is that one "-f" must be placed before each file name, while a single "--" is needed before all file names. Both options can be used together, the command line ordering still applies. When more than one file is specified, each file must start on a section boundary, so the first keyword of each file must be one of "global", "defaults", "peers", "listen", "frontend", "backend", and so on. A file cannot contain just a server list for example.

  • -f <cfgfile|cfgdir> : adds <cfgfile> to the list of configuration files to be loaded. If <cfgdir> is a directory, all the files (and only files) it contains are added in lexical order (using LC_COLLATE=C) to the list of configuration files to be loaded ; only files with ".cfg" extension are added, only non hidden files (not prefixed with ".") are added. Configuration files are loaded and processed in their declaration order. This option may be specified multiple times to load multiple files. See also "--". The difference between "--" and "-f" is that one "-f" must be placed before each file name, while a single "--" is needed before all file names. Both options can be used together, the command line ordering still applies. When more than one file is specified, each file must start on a section boundary, so the first keyword of each file must be one of "global", "defaults", "peers", "listen", "frontend", "backend", and so on. A file cannot contain just a server list for example.

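    As an illustration, assume a hypothetical directory /etc/haproxy/conf.d containing the files 10-global.cfg, 20-frontends.cfg and 30-backends.cfg. The following command loads the three files in lexical order, exactly as if each had been passed with its own "-f" :

    ./haproxy -f /etc/haproxy/conf.d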

  • -C <dir> : changes to directory <dir> before loading configuration files. This is useful when using relative paths. Beware when using wildcards after "--", which are in fact replaced by the shell before starting haproxy.

  • -D : start as a daemon. The process detaches from the current terminal after forking, and errors are not reported anymore in the terminal. It is equivalent to the "daemon" keyword in the "global" section of the configuration. It is recommended to always force it in any init script so that a faulty configuration doesn't prevent the system from booting.

  • -L <name> : change the local peer name to <name>, which defaults to the local hostname. This is used only with peers replication. You can use the variable $HAPROXY_LOCALPEER in the configuration file to reference the peer name.
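
    For example, on two nodes of a hypothetical peers setup, the same configuration file can be started with distinct peer names, $HAPROXY_LOCALPEER expanding to "lb1" or "lb2" inside it :

    ./haproxy -L lb1 -f /etc/haproxy/haproxy.cfg    # on the first node
    ./haproxy -L lb2 -f /etc/haproxy/haproxy.cfg    # on the second node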

  • -N <limit> : sets the default per-proxy maxconn to <limit> instead of the builtin default value (usually 2000). Only useful for debugging.

  • -V : enable verbose mode (disables quiet mode). Reverts the effect of "-q" or "quiet".

  • -W : master-worker mode. It is equivalent to the "master-worker" keyword in the "global" section of the configuration. This mode will launch a "master" which will monitor the "workers". Using this mode, you can reload HAProxy directly by sending a SIGUSR2 signal to the master. The master-worker mode is compatible either with the foreground or daemon mode. It is recommended to use this mode with multiprocess and systemd.

  • -Ws : master-worker mode with support of `notify` type of systemd service. This option is only available when HAProxy was built with `USE_SYSTEMD` build option enabled.

  • -c : only performs a check of the configuration files and exits before trying to bind. The exit status is zero if everything is OK, or non-zero if an error is encountered. Presence of warnings will be reported if any.
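
    For example, a service script typically validates the configuration this way before attempting a reload (the path is illustrative) :

    ./haproxy -c -q -f /etc/haproxy/haproxy.cfg && echo "configuration valid"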

  • -cc : evaluates a condition as used within a conditional block of the configuration. The exit status is zero if the condition is true, 1 if the condition is false or 2 if an error is encountered.
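
    For example, a deployment script may verify that the binary is recent enough before installing a configuration that relies on newer keywords ("deploy_new_config" being a hypothetical helper) :

    ./haproxy -cc 'version_atleast(2.5)' && deploy_new_config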

  • -d : enable debug mode. This disables daemon mode, forces the process to stay in foreground and to show incoming and outgoing events. It must never be used in an init script.

  • -dD : enable diagnostic mode. This mode will output extra warnings about suspicious configuration statements. This will never prevent startup even in "zero-warning" mode nor change the exit status code.

  • -dG : disable use of getaddrinfo() to resolve host names into addresses. It can be used when suspecting that getaddrinfo() doesn't work as expected. This option was made available because many bogus implementations of getaddrinfo() exist on various systems and cause anomalies that are difficult to troubleshoot.

  • -dK<class[,class]*> : dumps the list of registered keywords in each class. The list of classes is available with "-dKhelp". All classes may be dumped using "-dKall", otherwise a selection of those shown in the help can be specified as a comma-delimited list. The output format will vary depending on what class of keywords is being dumped (e.g. "cfg" will show the known configuration keywords in a format resembling the config file format while "smp" will show sample fetch functions prefixed with a compatibility matrix with each rule set). These may rarely be used as-is by humans but can be of great help for external tools that try to detect the appearance of new keywords at certain places to automatically update some documentation, syntax highlighting files, configuration parsers, API etc. The output format may evolve a bit over time so it is really recommended to use this output mostly to detect differences with previous archives. Note that not all keywords are listed because many keywords have existed long before the different keyword registration subsystems were created, and they do not appear there. However since new keywords are only added via the modern mechanisms, it's reasonably safe to assume that this output may be used to detect language additions with a good accuracy. The keywords are only dumped after the configuration is fully parsed, so that even dynamically created keywords can be dumped. A good way to dump and exit is to run a silent config check on an existing configuration:

    ./haproxy -dKall -q -c -f foo.cfg

    If no configuration file is available, using "-f /dev/null" will work as well to dump all default keywords, but then the return status will not be zero since there will be no listener, and will have to be ignored.

  • -dL : dumps the list of dynamic shared libraries that are loaded at the end of the config processing. This will generally also include deep dependencies such as anything loaded from Lua code for example, as well as the executable itself. The list is printed in a format that ought to be easy enough to sanitize to directly produce a tarball of all dependencies. Since it doesn't stop the program's startup, it is recommended to only use it in combination with "-c" and "-q" where only the list of loaded objects will be displayed (or nothing in case of error). In addition, keep in mind that when providing such a package to help with a core file analysis, most libraries are in fact symbolic links that need to be dereferenced when creating the archive:

    ./haproxy -W -q -c -dL -f foo.cfg | tar -T - -hzcf archive.tgz

  • -dM[<byte>[,]][help|options,...] : forces memory poisoning, and/or changes other memory debugging options. Memory poisoning means that each and every memory region allocated with malloc() or pool_alloc() will be filled with <byte> before being passed to the caller. When <byte> is not specified, it defaults to 0x50 ('P'). While this slightly slows down operations, it is useful to reliably trigger issues resulting from missing initializations in the code that cause random crashes. Note that -dM0 has the effect of turning any malloc() into a calloc(). In any case if a bug appears or disappears when using this option it means there is a bug in haproxy, so please report it. A number of other options are available either alone or after a comma following the byte. The special option "help" will list the currently supported options and their current value. Each debugging option may be forced on or off. The most optimal options are usually chosen at build time based on the operating system and do not need to be adjusted, unless suggested by a developer. Supported debugging options include (set/clear) the following, and an example invocation is shown after the list :

    • fail / no-fail:
      This enables randomly failing memory allocations, in conjunction with the global "tune.fail-alloc" setting. This is used to detect missing error checks in the code.
    • no-merge / merge:
      By default, pools of very similar sizes are merged, resulting in more efficiency, but this complicates the analysis of certain memory dumps. This option allows to disable this mechanism, and may slightly increase the memory usage.
    • cold-first / hot-first:
      In order to optimize the CPU cache hit ratio, by default the most recently released objects ("hot") are recycled for new allocations. But doing so also complicates analysis of memory dumps and may hide use-after-free bugs. This option allows to instead pick the coldest objects first, which may result in a slight increase of CPU usage.
    • integrity / no-integrity:
      When this option is enabled, memory integrity checks are enabled on the allocated area to verify that it hasn't been modified since it was last released. This works best with "no-merge", "cold-first" and "tag". Enabling this option will slightly increase the CPU usage.
    • no-global / global:
      Depending on the operating system, a process-wide global memory cache may be enabled if it is estimated that the standard allocator is too slow or inefficient with threads. This option allows to forcefully disable it or enable it. Disabling it may result in a CPU usage increase with inefficient allocators. Enabling it may result in a higher memory usage with efficient allocators.
    • no-cache / cache:
      Each thread uses a very fast local object cache for allocations, which is always enabled by default. This option allows to disable it. Since the global cache also passes via the local caches, this will effectively result in disabling all caches and allocating directly from the default allocator. This may result in a significant increase of CPU usage, but may also result in small memory savings on tiny systems.
    • caller / no-caller:
      Enabling this option reserves some extra space in each allocated object to store the address of the last caller that allocated or released it. This helps developers go back in time when analysing memory dumps and to guess how something unexpected happened.
    • tag / no-tag:
      Enabling this option reserves some extra space in each allocated object to store a tag that allows to detect bugs such as double-free, freeing an invalid object, and buffer overflows. It offers much stronger reliability guarantees at the expense of 4 or 8 extra bytes per allocation. It usually is the first step to detect memory corruption.
    • poison / no-poison:
      Enabling this option will fill allocated objects with a fixed pattern that will make sure that some accidental values such as 0 will not be present if a newly added field was mistakenly forgotten in an initialization routine. Such bugs tend to rarely reproduce, especially when pools are not merged. This is normally enabled by directly passing the byte's value to -dM but using this option allows to disable/enable use of a previously set value.
  • -dS : disable use of the splice() system call. It is equivalent to the "global" section's "nosplice" keyword. This may be used when splice() is suspected to behave improperly or to cause performance issues, or when using strace to see the forwarded data (which do not appear when using splice()).
    Note: splice() moves data between two file descriptors without copying between kernel address space and user address space. The splice() system call first appeared in Linux 2.6.17; library support was added in glibc version 2.5.

  • -dV : disable SSL verify on the server side. It is equivalent to having "ssl-server-verify none" in the "global" section. This is useful when trying to reproduce production issues out of the production environment. Never use this in an init script as it degrades SSL security to the servers.

  • -dW : if set, haproxy will refuse to start if any warning was emitted while processing the configuration. This helps detect subtle mistakes and keep the configuration clean and portable across versions. It is recommended to set this option in service scripts when configurations are managed by humans, but it is recommended not to use it with generated configurations, which tend to emit more warnings. It may be combined with "-c" to cause warnings in checked configurations to fail. This is equivalent to global option "zero-warning".

  • -db : disable background mode and multi-process mode. The process remains in foreground. It is mainly used during development or during small tests, as Ctrl-C is enough to stop the process. Never use it in an init script.

  • -de : disable the use of the "epoll" poller. It is equivalent to the "global" section's keyword "noepoll". It is mostly useful when suspecting a bug related to this poller. On systems supporting epoll, the fallback will generally be the "poll" poller.
    -de : "epoll" 폴러 사용을 비활성화합니다. "global" 섹션의 키워드 "noepoll"과 동일합니다. 이 폴러와 관련된 버그가 의심될 때 주로 유용합니다. epoll을 지원하는 시스템에서 폴백은 일반적으로 "poll" 폴러입니다.

  • -dk : disable the use of the "kqueue" poller. It is equivalent to the "global" section's keyword "nokqueue". It is mostly useful when suspecting a bug related to this poller. On systems supporting kqueue, the fallback will generally be the "poll" poller.
    -dk : "kqueue" 폴러 사용을 비활성화합니다. "global" 섹션의 키워드 "nokqueue"와 동일합니다. 이 폴러와 관련된 버그가 의심될 때 주로 유용합니다. kqueue를 지원하는 시스템에서 폴백은 일반적으로 "poll" 폴러입니다.

  • -dp : disable the use of the "poll" poller. It is equivalent to the "global" section's keyword "nopoll". It is mostly useful when suspecting a bug related to this poller. On systems supporting poll, the fallback will generally be the "select" poller, which cannot be disabled and is limited to 1024 file descriptors.
    -dp : "poll" 폴러 사용을 비활성화합니다. "global" 섹션의 키워드 "nopoll"과 동일합니다. 이 폴러와 관련된 버그가 의심될 때 주로 유용합니다. 폴백을 지원하는 시스템에서 폴백은 일반적으로 비활성화할 수 없고 1024개의 파일 설명자로 제한되는 "select" 폴러입니다.

  • -dr : ignore server address resolution failures. It is very common when validating a configuration out of production not to have access to the same resolvers and to fail on server address resolution, making it difficult to test a configuration. This option simply appends the "none" method to the list of address resolution methods for all servers, ensuring that even if the libc fails to resolve an address, the startup sequence is not interrupted.

  • -m <limit> : limit the total allocatable memory to <limit> megabytes across all processes. This may cause some connection refusals or some slowdowns depending on the amount of memory needed for normal operations. This is mostly used to force the processes to work in a constrained resource usage scenario. It is important to note that the memory is not shared between processes, so in a multi-process scenario, this value is first divided by global.nbproc before forking.

  • -n <limit> : limits the per-process connection limit to <limit>. This is equivalent to the global section's keyword "maxconn". It has precedence over this keyword. This may be used to quickly force lower limits to avoid a service outage on systems where resource limits are too low.

  • -p <file> : write all processes' pids into <file> during startup. This is equivalent to the "global" section's keyword "pidfile". The file is opened before entering the chroot jail, and after doing the chdir() implied by "-C". Each pid appears on its own line.

  • -q : set "quiet" mode. This disables some messages during the configuration parsing and during startup. It can be used in combination with "-c" to just check if a configuration file is valid or not.
    -q : "조용한(quiet)" 모드를 설정합니다. 이렇게 하면 구성 구문 분석 및 시작 중에 일부 메시지가 비활성화됩니다. 구성 파일이 유효한지 여부를 확인하기 위해 "-c"와 함께 사용할 수 있습니다.

  • -S <bind>[,bind_options...]: in master-worker mode, bind a master CLI, which allows access to every process, running or exiting ones. For security reasons, it is recommended to bind the master CLI to a local UNIX socket. The bind options are the same as the keyword "bind" in the configuration file with words separated by commas instead of spaces.
    Note that this socket can't be used to retrieve the listening sockets from an old process during a seamless reload.
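
    For example (illustrative paths), a master CLI may be bound to a root-only UNIX socket and queried with the master CLI command "show proc" :

    ./haproxy -W -S /var/run/haproxy-master.sock,mode,600,level,admin \
              -f /etc/haproxy/haproxy.cfg
    echo "show proc" | socat /var/run/haproxy-master.sock -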

  • -sf <pid>* : send the "finish" signal (SIGUSR1) to older processes after boot completion to ask them to finish what they are doing and to leave. <pid> is a list of pids to signal (one per argument). The list ends on any option starting with a "-". It is not a problem if the list of pids is empty, so that it can be built on the fly based on the result of a command like "pidof" or "pgrep". QUIC connections will be aborted.

  • -st <pid>* : send the "terminate" signal (SIGTERM) to older processes after boot completion to terminate them immediately without finishing what they were doing. <pid> is a list of pids to signal (one per argument). The list ends on any option starting with a "-". It is not a problem if the list of pids is empty, so that it can be built on the fly based on the result of a command like "pidof" or "pgrep".

  • -v : report the version and build date.

  • -vv : display the version, build options, libraries versions and usable pollers. This output is systematically requested when filing a bug report.

  • -x <unix_socket> : connect to the specified socket and try to retrieve any listening sockets from the old process, and use them instead of trying to bind new ones. This is useful to avoid missing any new connection when reloading the configuration on Linux. The capability must be enabled on the stats socket using "expose-fd listeners" in your configuration. In master-worker mode, the master will use this option upon a reload with the "sockpair@" syntax, which allows the master to connect directly to a worker without using a stats socket declared in the configuration.
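
    For example (illustrative paths), a seamless reload on Linux may be performed by fetching the listening sockets from the old process through its stats socket :

    ./haproxy -f /etc/haproxy/haproxy.cfg -x /var/run/haproxy.sock \
              -sf $(cat /var/run/haproxy.pid)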

A safe way to start HAProxy from an init file consists in forcing the daemon mode, storing existing pids to a pid file and using this pid file to notify older processes to finish before leaving :

   haproxy -f /etc/haproxy.cfg \
           -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

     -f  : specifies the configuration file
     -D  : starts as a daemon
     -p  : specifies the pid file
     -sf : asks the listed running pids to finish their work and leave

When the configuration is split into a few specific files (eg: tcp vs http), it is recommended to use the "-f" option :

   haproxy -f /etc/haproxy/global.cfg -f /etc/haproxy/stats.cfg \
           -f /etc/haproxy/default-tcp.cfg -f /etc/haproxy/tcp.cfg \
           -f /etc/haproxy/default-http.cfg -f /etc/haproxy/http.cfg \
           -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

When an unknown number of files is expected, such as customer-specific files, it is recommended to assign them a name starting with a fixed-size sequence number and to use "--" to load them, possibly after loading some defaults :

   haproxy -f /etc/haproxy/global.cfg -f /etc/haproxy/stats.cfg \
           -f /etc/haproxy/default-tcp.cfg -f /etc/haproxy/tcp.cfg \
           -f /etc/haproxy/default-http.cfg -f /etc/haproxy/http.cfg \
           -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) \
           -f /etc/haproxy/default-customers.cfg -- /etc/haproxy/customers/*

Sometimes a failure to start may happen for whatever reason. Then it is important to verify if the version of HAProxy you are invoking is the expected version and if it supports the features you are expecting (eg: SSL, PCRE, compression, Lua, etc). This can be verified using "haproxy -vv". Some important information such as certain build options, the target system and the versions of the libraries being used are reported there. It is also what you will systematically be asked for when posting a bug report :

HAProxy version 1.6
  $ haproxy -vv
  HAProxy version 1.6-dev7-a088d3-4 2015/10/08
  Copyright 2000-2015 Willy Tarreau <willy@haproxy.org>

  Build options :
    TARGET  = linux2628
    CPU     = generic
    CC      = gcc
    CFLAGS  = -pg -O0 -g -fno-strict-aliasing -Wdeclaration-after-statement \
              -DBUFSIZE=8030 -DMAXREWRITE=1030 -DSO_MARK=36 -DTCP_REPAIR=19
    OPTIONS = USE_ZLIB=1 USE_DLMALLOC=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1

  Default settings :
    maxconn = 2000, bufsize = 8030, maxrewrite = 1030, maxpollevents = 200

  Encrypted password support via crypt(3): yes
  Built with zlib version : 1.2.6
  Compression algorithms supported : identity("identity"), deflate("deflate"), \
                                     raw-deflate("deflate"), gzip("gzip")
  Built with OpenSSL version : OpenSSL 1.0.1o 12 Jun 2015
  Running on OpenSSL version : OpenSSL 1.0.1o 12 Jun 2015
  OpenSSL library supports TLS extensions : yes
  OpenSSL library supports SNI : yes
  OpenSSL library supports prefer-server-ciphers : yes
  Built with PCRE version : 8.12 2011-01-15
  PCRE library supports JIT : no (USE_PCRE_JIT not set)
  Built with Lua version : Lua 5.3.1
  Built with transparent proxy support using: IP_TRANSPARENT IP_FREEBIND

  Available polling systems :
        epoll : pref=300,  test result OK
         poll : pref=200,  test result OK
       select : pref=150,  test result OK
  Total: 3 (3 usable), will use epoll.
HAProxy version 2.6
  $ haproxy -vv
  HAProxy version 2.6.6-274d1a4 2022/09/22 - https://haproxy.org/
  Status: long-term supported branch - will stop receiving fixes around Q2 2027.
  Known bugs: http://www.haproxy.org/bugs/bugs-2.6.6.html
  Running on: Linux 3.10.0-862.3.2.el7.x86_64 #1 SMP Mon May 21 23:36:36 UTC 2018 x86_64
  Build options :
    TARGET  = linux-glibc
    CPU     = generic
    CC      = cc
    CFLAGS  = -O2 -g -Wall -Wextra -Wundef -Wdeclaration-after-statement -Wfatal-errors \
        -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond \
        -Wnull-dereference -fwrapv -Wno-address-of-packed-member -Wno-unused-label \
        -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered \
        -Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int \
        -Wno-atomic-alignment
    OPTIONS = USE_PCRE=1 USE_OPENSSL=1 USE_SYSTEMD=1
    DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS
  
  Feature list : +EPOLL -KQUEUE +NETFILTER +PCRE -PCRE_JIT -PCRE2 -PCRE2_JIT +POLL \
        +THREAD +BACKTRACE -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY \
        +LINUX_SPLICE +LIBCRYPT +CRYPT_H -ENGINE +GETADDRINFO +OPENSSL -LUA \
        +ACCEPT4 -CLOSEFROM -ZLIB +SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS \
        -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER +PRCTL -PROCCTL +THREAD_DUMP \
        -EVPORTS -OT -QUIC -PROMEX -MEMORY_PROFILING
  
  Default settings :
    bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
  
  Built with multi-threading support (MAX_THREADS=64, default=8).
  Built with OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
  Running on OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
  OpenSSL library supports TLS extensions : yes
  OpenSSL library supports SNI : yes
  OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
  Built with network namespace support.
  Support for malloc_trim() is enabled.
  Built with libslz for stateless compression.
  Compression algorithms supported : identity("identity"), deflate("deflate"), \
                             raw-deflate("deflate"), gzip("gzip")
  Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
  Built with PCRE version : 8.32 2012-11-30
  Running on PCRE version : 8.32 2012-11-30
  PCRE library supports JIT : no (USE_PCRE_JIT not set)
  Encrypted password support via crypt(3): yes
  Built with gcc compiler version 8.3.1 20190311 (Red Hat 8.3.1-3)
  
  Available polling systems :
       epoll : pref=300,  test result OK
        poll : pref=200,  test result OK
      select : pref=150,  test result OK
  Total: 3 (3 usable), will use epoll.
  
  Available multiplexer protocols :
  (protocols marked as <default> cannot be specified using 'proto' keyword)
           h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
         fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
    <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
           h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
    <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
         none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG
  
  Available services : none
  
  Available filters :
          [CACHE] cache
          [COMP] compression
          [FCGI] fcgi-app
          [SPOE] spoe
          [TRACE] trace

The relevant information that many non-developer users can verify here is :

  • the version : 1.6-dev7-a088d3-4 above means the code is currently at commit ID "a088d3", which is the 4th one after official version "1.6-dev7". Version 1.6-dev7 would show as "1.6-dev7-8c1ad7". What matters here is in fact "1.6-dev7". This is the 7th development version of what will become version 1.6 in the future. A development version is not suitable for use in production (unless you know exactly what you are doing). A stable version will show as a 3-number version, such as "1.5.14-16f863", indicating the 14th level of fix on top of version 1.5. This is a production-ready version.

  • the release date : 2015/10/08. It is represented in the universal year/month/day format. Here this means October 8th, 2015. Given that stable releases are issued every few months (1-2 months at the beginning, sometimes 6 months once the product becomes very stable), if you're seeing an old date here, it means you're probably affected by a number of bugs or security issues that have since been fixed and that it might be worth checking on the official site.

  • build options : they are relevant to people who build their packages themselves, and they can explain why things are not behaving as expected. For example, the development version above was built for Linux 2.6.28 or later, targeting a generic CPU (no CPU-specific optimizations), and lacks any code optimization (-O0), so it will run noticeably slower than an optimized build.

  • libraries versions : zlib version is reported as found in the library itself. In general zlib is considered a very stable product and upgrades are almost never needed. OpenSSL reports two versions, the version used at build time and the one being used, as found on the system. These ones may differ by the last letter but never by the numbers. The build date is also reported because most OpenSSL bugs are security issues and need to be taken seriously, so this library absolutely needs to be kept up to date. Seeing a 4-month-old version here is highly suspicious and indeed an update was missed. PCRE provides very fast regular expressions and is highly recommended. Certain of its extensions such as JIT are not present in all versions and are still young, so some people prefer not to build with them, which is why the build status is reported as well. Regarding the Lua scripting language, HAProxy expects version 5.3, which is very young since it was released shortly before HAProxy 1.6. It is important to check on the Lua web site if some fixes are proposed for this branch.

  • Available polling systems will affect the process's scalability when dealing with more than about one thousand of concurrent connections. These ones are only available when the correct system was indicated in the TARGET variable during the build. The "epoll" mechanism is highly recommended on Linux, and the kqueue mechanism is highly recommended on BSD. Lacking them will result in poll() or even select() being used, causing a high CPU usage when dealing with a lot of connections.


4. Stopping and restarting HAProxy

HAProxy supports a graceful and a hard stop. The hard stop is simple: when the SIGTERM signal is sent to the haproxy process, it immediately quits and all established connections are closed. The graceful stop is triggered when the SIGUSR1 signal is sent to the haproxy process. It consists in only unbinding from the listening ports, but continuing to process existing connections until they close. Once the last connection is closed, the process leaves.

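For example, with the pid stored in /var/run/haproxy.pid (an illustrative path), the two stop modes translate to the following signals :

   kill -USR1 $(cat /var/run/haproxy.pid)    # graceful stop
   kill -TERM $(cat /var/run/haproxy.pid)    # hard stop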

The hard stop method is used for the "stop" or "restart" actions of the service management script. The graceful stop is used for the "reload" action which tries to seamlessly reload a new configuration in a new process.

Both of these signals may be sent by the new haproxy process itself during a reload or restart, so that they are sent at the latest possible moment and only if absolutely required. This is what is performed by the "-st" (hard) and "-sf" (graceful) options respectively.

In master-worker mode, it is not needed to start a new haproxy process in order to reload the configuration. The master process reacts to the SIGUSR2 signal by reexecuting itself with the -sf parameter followed by the PIDs of the workers. The master will then parse the configuration file and fork new workers.

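For example, assuming the master's pid was written to /var/run/haproxy-master.pid (an illustrative path), a reload only requires signaling the master, or letting the service manager do it when running under systemd with -Ws :

   kill -USR2 $(cat /var/run/haproxy-master.pid)
   systemctl reload haproxy    # equivalent under systemd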

To understand better how these signals are used, it is important to understand the whole restart mechanism.

First, an existing haproxy process is running. The administrator uses a system-specific command such as "/etc/init.d/haproxy reload" to indicate they want to take the new configuration file into effect. What happens then is the following. First, the service script (/etc/init.d/haproxy or equivalent) will verify that the configuration file parses correctly using "haproxy -c". After that it will try to start haproxy with this configuration file, using "-st" or "-sf".

Then HAProxy tries to bind to all listening ports. If some fatal errors happen (eg: address not present on the system, permission denied), the process quits with an error. If a socket binding fails because a port is already in use, then the process will first send a SIGTTOU signal to all the pids specified in the "-st" or "-sf" pid list. This is what is called the "pause" signal. It instructs all existing haproxy processes to temporarily stop listening to their ports so that the new process can try to bind again. During this time, the old process continues to process existing connections. If the binding still fails (because for example a port is shared with another daemon), then the new process sends a SIGTTIN signal to the old processes to instruct them to resume operations just as if nothing happened. The old processes will then restart listening to the ports and continue to accept connections. Note that this mechanism is system dependent and some operating systems may not support it in multi-process mode.

If the new process manages to bind correctly to all ports, then it sends either the SIGTERM (hard stop in case of "-st") or the SIGUSR1 (graceful stop in case of "-sf") to all processes to notify them that it is now in charge of operations and that the old processes will have to leave, either immediately or once they have finished their job.

It is important to note that during this timeframe, there are two small windows of a few milliseconds each where it is possible that a few connection failures will be noticed during high loads. Typically observed failure rates are around 1 failure during a reload operation every 10000 new connections per second, which means that a heavily loaded site running at 30000 new connections per second may see about 3 failed connections upon every reload. The two situations where this happens are :

  • if the new process fails to bind due to the presence of the old process, it will first have to go through the SIGTTOU+SIGTTIN sequence, which typically lasts about one millisecond for a few tens of frontends, and during which some ports will not be bound to the old process and not yet bound to the new one. HAProxy works around this on systems that support the SO_REUSEPORT socket options, as it allows the new process to bind without first asking the old one to unbind. Most BSD systems have been supporting this almost forever. Linux has been supporting this in version 2.0 and dropped it around 2.2, but some patches were floating around by then. It was reintroduced in kernel 3.9, so if you are observing a connection failure rate above the one mentioned above, please ensure that your kernel is 3.9 or newer, or that relevant patches were backported to your kernel (less likely).

  • when the old processes close the listening ports, the kernel may not always redistribute any pending connection that was remaining in the socket's backlog. Under high loads, a SYN packet may happen just before the socket is closed, and will lead to an RST packet being sent to the client. In some critical environments where even one drop is not acceptable, these ones are sometimes dealt with using firewall rules to block SYN packets during the reload, forcing the client to retransmit. This is totally system-dependent, as some systems might be able to visit other listening queues and avoid this RST. A second case concerns the ACK from the client on a local socket that was in SYN_RECV state just before the close. This ACK will lead to an RST packet while the haproxy process is still not aware of it. This one is harder to get rid of, though the firewall filtering rules mentioned above will work well if applied one second or so before restarting the process.

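As an illustration of the firewall workaround mentioned above (a sketch only, with an illustrative port and timing), SYN packets may be blocked for roughly one second around the reload so that clients silently retransmit :

   iptables -I INPUT -p tcp --dport 80 --syn -j DROP
   sleep 1
   systemctl reload haproxy
   iptables -D INPUT -p tcp --dport 80 --syn -j DROP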

For the vast majority of users, such drops will never ever happen since they don't have enough load to trigger the race conditions. And for most high traffic users, the failure rate is still fairly within the noise margin provided that at least SO_REUSEPORT is properly supported on their systems.

QUIC limitations: soft-stop is not supported. In case of reload, QUIC connections will not be preserved.
QUIC 제한: 소프트 스톱이 지원되지 않습니다. 다시 로드하는 경우 QUIC 연결이 유지되지 않습니다.


5. File-descriptor limitations

In order to ensure that all incoming connections will successfully be served, HAProxy computes at load time the total number of file descriptors that will be needed during the process's life. A regular Unix process is generally granted 1024 file descriptors by default, and a privileged process can raise this limit itself. This is one reason for starting HAProxy as root and letting it adjust the limit. The default limit of 1024 file descriptors roughly allows about 500 concurrent connections to be processed. The computation is based on the global maxconn parameter, which limits the total number of connections per process, plus the number of listeners, the number of servers which have a health check enabled, the agent checks, the peers, the loggers and possibly a few other technical requirements. A simple rough estimate of this number consists in simply doubling the maxconn value and adding a few tens to get the approximate number of file descriptors needed.

들어오는 모든 연결이 성공적으로 처리되도록 하기 위해 HAProxy는 로드 시 프로세스 수명 동안 필요한 총 파일 설명자 수를 계산합니다. 일반 Unix 프로세스에는 보통 기본적으로 1024개의 파일 설명자가 부여되며, 권한이 있는 프로세스는 이 제한을 스스로 높일 수 있습니다. 이것이 HAProxy를 루트로 시작하고 스스로 제한을 조정하게 하는 한 가지 이유입니다. 기본 제한인 1024개의 파일 설명자로는 대략 500개의 동시 연결을 처리할 수 있습니다. 계산은 프로세스당 총 연결 수를 제한하는 전역 maxconn 매개 변수에 더해, 리스너 수, 상태 확인이 활성화된 서버 수, 에이전트 확인, 피어, 로거 및 기타 몇 가지 기술적 요구 사항을 기반으로 합니다. 이 숫자를 간단히 어림잡으려면 maxconn 값을 두 배로 하고 수십 개를 더하면 필요한 파일 설명자 수의 근사치를 얻을 수 있습니다.
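
For example, with a hypothetical "maxconn 20000" in the global section, the rough estimate gives 2 * 20000 plus a few tens, i.e. about 40050 file descriptors. The limit actually granted to a running process can be checked as follows (the pgrep invocation is only an illustration) :

  $ grep 'Max open files' /proc/$(pgrep -o haproxy)/limits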

Originally HAProxy did not know how to compute this value, and it was necessary to pass the value using the "ulimit-n" setting in the global section. This explains why even today a lot of configurations are seen with this setting present. Unfortunately it was often miscalculated, resulting in connection failures when approaching maxconn instead of throttling incoming connections while waiting for the needed resources. For this reason it is important to remove any vestigial "ulimit-n" setting that can remain from very old versions.

원래 HAProxy는 이 값을 계산할 줄 몰랐기 때문에 전역 섹션의 "ulimit-n" 설정으로 값을 전달해야 했습니다. 오늘날에도 이 설정이 들어 있는 구성이 많이 보이는 이유가 여기에 있습니다. 불행히도 이 값은 종종 잘못 계산되어, 필요한 리소스를 기다리며 들어오는 연결을 조절하는 대신 maxconn에 가까워질 때 연결 실패가 발생했습니다. 이러한 이유로 아주 오래된 버전에서 물려받아 남아 있을 수 있는 "ulimit-n" 설정을 제거하는 것이 중요합니다.

Raising the number of file descriptors to accept even moderate loads is mandatory but comes with some OS-specific adjustments. First, the select() polling system is limited to 1024 file descriptors. In fact on Linux it used to be capable of handling more, but since certain OSes ship with excessively restrictive SELinux policies forbidding the use of select() with more than 1024 file descriptors, HAProxy now refuses to start in this case in order to avoid any issue at run time. On all supported operating systems, poll() is available and will not suffer from this limitation. It is automatically picked so there is nothing to do to get a working configuration. But poll() becomes very slow when the number of file descriptors increases. While HAProxy does its best to limit this performance impact (eg: via the use of the internal file descriptor cache and batched processing), a good rule of thumb is that using poll() with more than a thousand concurrent connections will use a lot of CPU.

적당한 부하라도 감당하려면 파일 설명자 수를 늘리는 것이 필수이지만, 일부 OS별 조정이 뒤따릅니다. 첫째, select() 폴링 시스템은 1024개의 파일 설명자로 제한됩니다. 사실 Linux에서는 더 많이 처리할 수 있었지만, 일부 OS가 1024개 이상의 파일 설명자에 대해 select() 사용을 금지하는 지나치게 제한적인 SELinux 정책을 탑재하고 있기 때문에, 이제 HAProxy는 런타임 문제를 피하기 위해 이 경우 시작을 거부합니다. 지원되는 모든 운영 체제에서 poll()을 사용할 수 있으며 이 제한의 영향을 받지 않습니다. 자동으로 선택되므로 동작하는 구성을 얻기 위해 할 일이 없습니다. 그러나 파일 설명자 수가 증가하면 poll()은 매우 느려집니다. HAProxy는 이러한 성능 영향을 제한하기 위해 최선을 다하지만(예: 내부 파일 설명자 캐시 및 일괄 처리 사용), 경험 법칙상 1,000개 이상의 동시 연결에서 poll()을 사용하면 CPU를 많이 사용하게 됩니다.

For Linux systems based on kernels 2.6 and above, the epoll() system call will be used. It's a much more scalable mechanism relying on callbacks in the kernel that guarantee a constant wake up time regardless of the number of registered monitored file descriptors. It is automatically used where detected, provided that HAProxy has been built for one of the Linux flavors. Its presence and support can be verified using "haproxy -vv".

커널 2.6 이상 기반의 Linux 시스템에서는 epoll() 시스템 호출이 사용됩니다. 이는 등록된 모니터링 대상 파일 설명자 수에 관계없이 일정한 웨이크업 시간을 보장하는 커널 콜백에 의존하는 훨씬 더 확장성 있는 메커니즘입니다. HAProxy가 Linux용으로 빌드된 경우, 감지되면 자동으로 사용됩니다. epoll()의 존재와 지원 여부는 "haproxy -vv"로 확인할 수 있습니다.
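
The pollers built into a given binary, and the one that will actually be used, can be listed this way; the exact output depends on the build but will resemble the following :

  $ haproxy -vv | grep -A4 'Available polling systems'
  Available polling systems :
        epoll : pref=300,  test result OK
         poll : pref=200,  test result OK
       select : pref=150,  test result FAILED
  Total: 3 (2 usable), will use epoll.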

For BSD systems which support it, kqueue() is available as an alternative. It is much faster than poll() and even slightly faster than epoll() thanks to its batched handling of changes. At least FreeBSD and OpenBSD support it. Just like with Linux's epoll(), its support and availability are reported in the output of "haproxy -vv".

이를 지원하는 BSD 시스템의 경우 대안으로 kqueue()를 사용할 수 있습니다. poll()보다 훨씬 빠르며 변경 사항의 일괄 처리 덕분에 epoll()보다 약간 더 빠릅니다. 적어도 FreeBSD와 OpenBSD는 이를 지원합니다. Linux의 epoll()과 마찬가지로 지원 및 가용성은 "haproxy -vv"의 출력에 보고됩니다.

Having a good poller is one thing, but it is mandatory that the process can reach the limits. When HAProxy starts, it immediately sets the new process's file descriptor limits and verifies if it succeeds. In case of failure, it reports it before forking so that the administrator can see the problem. As long as the process is started as root, there should be no reason for this setting to fail. However, it can fail if the process is started by an unprivileged user. If there is a compelling reason for *not* starting haproxy as root (eg: started by end users, or by a per-application account), then the file descriptor limit can be raised by the system administrator for this specific user. The effectiveness of the setting can be verified by issuing "ulimit -n" from the user's command line. It should reflect the new limit.

좋은 폴러를 갖는 것도 중요하지만 프로세스가 한계에 도달할 수 있어야 합니다. HAProxy가 시작되면 즉시 새 프로세스의 파일 설명자 제한을 설정하고 성공 여부를 확인합니다. 장애가 발생하면 관리자가 문제를 볼 수 있도록 포크하기 전에 보고합니다. 프로세스가 루트로 시작되는 한 이 설정이 실패할 이유가 없습니다. 그러나 권한이 없는 사용자가 프로세스를 시작한 경우 실패할 수 있습니다. haproxy를 루트로 시작하지 *않는* 강력한 이유가 있는 경우 (예: 최종 사용자 또는 응용 프로그램별 계정에 의해 시작) 시스템 관리자가 이 특정 사용자에 대해 파일 설명자 제한을 높일 수 있습니다. 설정의 유효성은 사용자의 명령줄에서 "ulimit -n"을 실행하여 확인할 수 있습니다. 새 제한을 반영해야 합니다.
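
As a sketch, on systems using pam_limits the limit for a hypothetical unprivileged "haproxy" user could be raised as shown below; on systemd-based systems a "LimitNOFILE" entry in the service unit is generally what applies instead :

  # /etc/security/limits.d/haproxy.conf
  haproxy  soft  nofile  100000
  haproxy  hard  nofile  100000

  # verify from a fresh login session
  $ su - haproxy -c 'ulimit -n'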

Warning: when an unprivileged user's limits are changed in this user's account, it is fairly common that these values are only considered when the user logs in, and not at all in some scripts run at system boot time nor in crontabs. This is totally dependent on the operating system, so keep in mind to check "ulimit -n" before starting haproxy when running this way. The general advice is never to start haproxy as an unprivileged user for production purposes. Another good reason is that it prevents haproxy from enabling some security protections.

경고: 권한이 없는 사용자의 제한이 이 사용자의 계정에서 변경되면 이러한 값은 사용자가 로그인할 때만 고려되고 시스템 부팅 시 또는 crontab에서 실행되는 일부 스크립트에서는 전혀 고려되지 않는 것이 일반적입니다. 이것은 전적으로 운영 체제에 따라 다르므로 이 방식으로 실행할 때 haproxy를 시작하기 전에 "ulimit -n"을 확인하십시오. 일반적인 조언은 프로덕션 목적으로 권한이 없는 사용자로 haproxy를 시작하지 않는 것입니다. 또 다른 좋은 이유는 haproxy가 일부 보안 보호를 활성화하는 것을 방지하기 때문입니다.

Once it is certain that the system will allow the haproxy process to use the requested number of file descriptors, two new system-specific limits may be encountered. The first one is the system-wide file descriptor limit, which is the total number of file descriptors opened on the system, covering all processes. When this limit is reached, accept() or socket() will typically return ENFILE. The second one is the per-process hard limit on the number of file descriptors; it prevents setrlimit() from being set higher. Both are very dependent on the operating system. On Linux, the system limit is set at boot based on the amount of memory. It can be changed with the "fs.file-max" sysctl. And the per-process hard limit is set to 1048576 by default, but it can be changed using the "fs.nr_open" sysctl.

시스템이 haproxy 프로세스가 요청된 수의 파일 디스크립터를 사용하도록 허용하는 것이 확실하면 두 가지 새로운 시스템별 제한이 발생할 수 있습니다. 첫 번째는 시스템 전체 파일 설명자 제한으로, 모든 프로세스를 포함하여 시스템에서 열린 총 파일 설명자 수입니다. 이 제한에 도달하면 accept() 또는 socket()은 일반적으로 ENFILE을 반환합니다. 두 번째는 파일 설명자 수에 대한 프로세스별 하드 제한으로, setrlimit()가 더 높게 설정되는 것을 방지합니다. 둘 다 운영 체제에 크게 의존합니다. Linux에서 시스템 제한은 부팅 시 메모리 양에 따라 설정됩니다. "fs.file-max" sysctl로 변경할 수 있습니다. 그리고 프로세스별 하드 제한은 기본적으로 1048576으로 설정되지만 "fs.nr_open" sysctl을 사용하여 변경할 수 있습니다.
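
Both limits can be inspected and adjusted with sysctl on Linux; the values below are purely illustrative :

  $ sysctl fs.file-max fs.nr_open
  # raise them (persist via /etc/sysctl.conf or a file under /etc/sysctl.d/)
  $ sysctl -w fs.file-max=2000000
  $ sysctl -w fs.nr_open=2000000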

File descriptor limitations may be observed on a running process when they are set too low. The strace utility will report that accept() and socket() return "-1 EMFILE" when the process's limits have been reached. In this case, simply raising the "ulimit-n" value (or removing it) will solve the problem. If these system calls return "-1 ENFILE" then it means that the kernel's limits have been reached and that something must be done on a system-wide parameter. These troubles must absolutely be addressed, as they result in high CPU usage (when accept() fails) and failed connections that are generally visible to the user. One solution also consists in lowering the global maxconn value to enforce serialization, and possibly to disable HTTP keep-alive to force connections to be released and reused faster.

파일 설명자 제한이 너무 낮게 설정되면 실행 중인 프로세스에서 관찰될 수 있습니다. strace 유틸리티는 프로세스의 한계에 도달했을 때 accept() 및 socket()이 "-1 EMFILE"을 반환한다고 보고합니다. 이 경우 단순히 "ulimit-n" 값을 올리거나 제거하면 문제가 해결됩니다. 이러한 시스템 호출이 "-1 ENFILE"을 반환하면 커널의 한계에 도달했으며 시스템 전체 매개변수에서 무언가를 수행해야 함을 의미합니다. 이러한 문제는 높은 CPU 사용량(accept() 실패 시)과 일반적으로 사용자에게 표시되는 연결 실패를 초래하므로 반드시 해결해야 합니다. 또한 한 가지 솔루션은 전역 maxconn 값을 낮추어 직렬화를 적용하고 HTTP 연결 유지를 비활성화하여 연결을 강제로 해제하고 더 빠르게 재사용하도록 하는 것입니다.
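
Such failures can be observed live with strace; on a saturated Linux process the output will typically contain something like the line below (accept4 may appear as accept depending on the system) :

  $ strace -tt -e trace=accept4,socket -p $(pgrep -o haproxy)
  10:35:02.123456 accept4(7, ..., SOCK_NONBLOCK) = -1 EMFILE (Too many open files)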


6. Memory management

HAProxy uses a simple and fast pool-based memory management. Since it relies on a small number of different object types, it's much more efficient to pick new objects from a pool which already contains objects of the appropriate size than to call malloc() for each different size. The pools are organized as a stack or LIFO, so that newly allocated objects are taken from recently released objects still hot in the CPU caches. Pools of similar sizes are merged together, in order to limit memory fragmentation.

HAProxy는 간단하고 빠른 풀 기반 메모리 관리를 사용합니다. 서로 다른 객체 유형의 수가 적기 때문에, 크기마다 malloc()을 호출하는 것보다 이미 적절한 크기의 객체가 들어 있는 풀에서 새 객체를 꺼내는 것이 훨씬 효율적입니다. 풀은 스택, 즉 LIFO로 구성되므로 새로 할당되는 객체는 CPU 캐시에 아직 남아 있는, 최근에 해제된 객체에서 가져옵니다. 메모리 단편화를 제한하기 위해 비슷한 크기의 풀은 함께 병합됩니다.

By default, since the focus is set on performance, each released object is put back into the pool it came from, and allocated objects are never freed since they are expected to be reused very soon.

기본적으로 초점은 성능에 맞춰져 있기 때문에 해제된 각 개체는 원래 풀에 다시 배치되고 할당된 개체는 곧 재사용될 것으로 예상되므로 해제되지 않습니다.

On the CLI, it is possible to check how memory is being used in pools thanks to the "show pools" command :
CLI에서 "show pools" 명령 덕분에 풀에서 메모리가 어떻게 사용되고 있는지 확인할 수 있습니다.

  > show pools
  Dumping pools usage. Use SIGQUIT to flush them.
    - Pool cache_st (16 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccc40=03 [SHARED]
    - Pool pipe (32 bytes) : 5 allocated (160 bytes), 5 used, 0 failures, 2 users, @0x9ccac0=00 [SHARED]
    - Pool comp_state (48 bytes) : 3 allocated (144 bytes), 3 used, 0 failures, 5 users, @0x9cccc0=04 [SHARED]
    - Pool filter (64 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 3 users, @0x9ccbc0=02 [SHARED]
    - Pool vars (80 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccb40=01 [SHARED]
    - Pool uniqueid (128 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9cd240=15 [SHARED]
    - Pool task (144 bytes) : 55 allocated (7920 bytes), 55 used, 0 failures, 1 users, @0x9cd040=11 [SHARED]
    - Pool session (160 bytes) : 1 allocated (160 bytes), 1 used, 0 failures, 1 users, @0x9cd140=13 [SHARED]
    - Pool h2s (208 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccec0=08 [SHARED]
    - Pool h2c (288 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cce40=07 [SHARED]
    - Pool spoe_ctx (304 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccf40=09 [SHARED]
    - Pool connection (400 bytes) : 2 allocated (800 bytes), 2 used, 0 failures, 1 users, @0x9cd1c0=14 [SHARED]
    - Pool hdr_idx (416 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cd340=17 [SHARED]
    - Pool dns_resolut (480 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccdc0=06 [SHARED]
    - Pool dns_answer_ (576 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccd40=05 [SHARED]
    - Pool stream (960 bytes) : 1 allocated (960 bytes), 1 used, 0 failures, 1 users, @0x9cd0c0=12 [SHARED]
    - Pool requri (1024 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cd2c0=16 [SHARED]
    - Pool buffer (8030 bytes) : 3 allocated (24090 bytes), 2 used, 0 failures, 1 users, @0x9cd3c0=18 [SHARED]
    - Pool trash (8062 bytes) : 1 allocated (8062 bytes), 1 used, 0 failures, 1 users, @0x9cd440=19
  Total: 19 pools, 42296 bytes allocated, 34266 used.

The pool name is only indicative, it's the name of the first object type using this pool. The size in parenthesis is the object size for objects in this pool. Object sizes are always rounded up to the closest multiple of 16 bytes. The number of objects currently allocated and the equivalent number of bytes is reported so that it is easy to know which pool is responsible for the highest memory usage. The number of objects currently in use is reported as well in the "used" field. The difference between "allocated" and "used" corresponds to the objects that have been freed and are available for immediate use. The address at the end of the line is the pool's address, and the following number is the pool index when it exists, or is reported as -1 if no index was assigned.

풀 이름은 참고용일 뿐이며, 이 풀을 사용하는 첫 번째 객체 유형의 이름입니다. 괄호 안의 크기는 이 풀에 있는 객체의 크기입니다. 객체 크기는 항상 가장 가까운 16바이트의 배수로 올림됩니다. 현재 할당된 객체 수와 이에 상응하는 바이트 수가 보고되므로 어떤 풀이 메모리를 가장 많이 사용하는지 쉽게 알 수 있습니다. 현재 사용 중인 객체 수는 "used" 필드에 보고됩니다. "allocated"와 "used"의 차이는 해제되어 즉시 사용할 수 있는 객체에 해당합니다. 줄 끝의 주소는 풀의 주소이며, 그 뒤의 숫자는 풀 인덱스가 있으면 풀 인덱스이고 인덱스가 할당되지 않았으면 -1로 보고됩니다.

It is possible to limit the amount of memory allocated per process using the "-m" command line option, followed by a number of megabytes. It covers all of the process's addressable space, so that includes memory used by some libraries as well as the stack, but it is a reliable limit when building a resource constrained system. It works the same way as "ulimit -v" on systems which have it, or "ulimit -d" for the other ones.

"-m" 명령줄 옵션을 사용하여 프로세스당 할당된 메모리 양을 제한할 수 있습니다. 이는 프로세스의 주소 지정 가능한 모든 공간을 포함하므로 일부 라이브러리와 스택에서 사용하는 메모리를 포함하지만 리소스 제약 시스템을 구축할 때 신뢰할 수 있는 제한입니다. 이 기능이 있는 시스템에서는 "ulimit -v" 또는 다른 시스템에서는 "ulimit -d"와 동일한 방식으로 작동합니다.

If a memory allocation fails due to the memory limit being reached or because the system doesn't have enough memory, then haproxy will first start to free all available objects from all pools before attempting to allocate memory again. This mechanism of releasing unused memory can be triggered by sending the signal SIGQUIT to the haproxy process. When doing so, the pools state prior to the flush will also be reported to stderr when the process runs in foreground.

메모리 제한에 도달했거나 시스템에 충분한 메모리가 없기 때문에 메모리 할당이 실패하면 haproxy는 메모리를 다시 할당하려고 시도하기 전에 먼저 모든 풀에서 사용 가능한 모든 개체를 해제하기 시작합니다. 사용하지 않는 메모리를 해제하는 이 메커니즘은 SIGQUIT 신호를 haproxy 프로세스로 전송하여 트리거할 수 있습니다. 이렇게 하면 플러시 이전의 풀 상태도 프로세스가 포그라운드에서 실행될 때 stderr에 보고됩니다.
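
The flush can be triggered manually as shown below (assuming a single haproxy process); when the process runs in the foreground, the pre-flush pools state is dumped to stderr :

  $ kill -QUIT $(pgrep -o haproxy)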

During a reload operation, the process switched to the graceful stop state also automatically performs some flushes after releasing any connection, so that as much memory as possible is released for the new process.
다시 로드 작업 중에 정상 중지 상태로 전환된 프로세스도 연결을 해제한 후 자동으로 플러시를 수행하여, 가능한 한 많은 메모리를 해제해 새 프로세스를 위해 남겨 둡니다.


7. CPU usage

HAProxy normally spends most of its time in the system and a smaller part in userland. A finely tuned 3.5 GHz CPU can sustain a rate of about 80000 end-to-end connection setups and closes per second at 100% CPU on a single core. When one core is saturated, typical figures are :

HAProxy는 일반적으로 대부분의 시간을 시스템에서 보내고 일부는 사용자 영역에서 보냅니다. 미세하게 조정된 3.5GHz CPU는 단일 코어에서 100% CPU로 초당 약 80000개의 엔드 투 엔드 연결 설정 속도를 유지할 수 있습니다. 하나의 코어가 포화되면 일반적인 수치는 다음과 같습니다.

  • 95% system, 5% user for long TCP connections or large HTTP objects
    긴 TCP 연결 또는 큰 HTTP 개체의 경우: 시스템 모드 95%, 사용자 모드 5%.
  • 85% system and 15% user for short TCP connections or small HTTP objects in close mode
    짧은 TCP 연결 또는 닫기 모드의 작은 HTTP 개체의 경우: 시스템 모드 85%, 사용자 모드 15%
  • 70% system and 30% user for small HTTP objects in keep-alive mode
    keep-alive 모드의 작은 HTTP 개체의 경우: 시스템 모드 70%, 사용자 모드 30%

The amount of rules processing and regular expressions will increase the userland part. The presence of firewall rules, connection tracking, complex routing tables in the system will instead increase the system part.

규칙 처리 및 정규 표현식의 양은 사용자 모드 부분을 증가시킵니다. 시스템의 방화벽 규칙, 연결 추적, 복잡한 라우팅 테이블의 존재는 대신 시스템 모드 부분을 증가시킵니다.

On most systems, the CPU time observed during network transfers can be cut into 4 parts :
대부분의 시스템에서 네트워크 전송 중에 관찰되는 CPU 시간은 4 부분으로 나눌 수 있습니다.

  • the interrupt part, which concerns all the processing performed upon I/O receipt, before the target process is even known. Typically Rx packets are accounted for in interrupt. On some systems such as Linux where interrupt processing may be deferred to a dedicated thread, it can appear as softirq, and the thread is called ksoftirqd/0 (for CPU 0). The CPU taking care of this load is generally defined by the hardware settings, though in the case of softirq it is often possible to remap the processing to another CPU. This interrupt part will often be perceived as parasitic since it's not associated with any process, but it actually is some processing being done to prepare the work for the process.

    인터럽트 부분: 대상 프로세스가 알려지기도 전에 I/O 수신 시 수행되는 모든 처리와 관련됩니다. 일반적으로 Rx 패킷은 인터럽트로 계산됩니다. 인터럽트 처리가 전용 스레드로 미뤄질 수 있는 Linux 같은 일부 시스템에서는 softirq로 나타날 수 있으며, 해당 스레드는 ksoftirqd/0(CPU 0의 경우)이라고 합니다. 이 부하를 처리하는 CPU는 일반적으로 하드웨어 설정에 의해 정해지지만, softirq의 경우 처리를 다른 CPU로 다시 매핑하는 것이 종종 가능합니다. 이 인터럽트 부분은 어떤 프로세스와도 연결되어 있지 않기 때문에 종종 기생적인 것으로 인식되지만, 실제로는 프로세스가 할 일을 준비하기 위해 수행되는 처리입니다.

  • the system part, which concerns all the processing done using kernel code called from userland. System calls are accounted as system for example. All synchronously delivered Tx packets will be accounted for as system time. If some packets have to be deferred due to queues filling up, they may then be processed in interrupt context later (eg: upon receipt of an ACK opening a TCP window).

    시스템 부분: 사용자 영역에서 호출된 커널 코드로 수행되는 모든 처리와 관련됩니다. 예를 들어 시스템 호출은 시스템 시간으로 계산됩니다. 동기적으로 전송되는 모든 Tx 패킷도 시스템 시간으로 계산됩니다. 대기열이 가득 차서 일부 패킷을 미뤄야 하는 경우, 나중에 인터럽트 컨텍스트에서 처리될 수 있습니다(예: TCP 창을 여는 ACK 수신 시).

  • the user part, which exclusively runs application code in userland. HAProxy runs exclusively in this part, though it makes heavy use of system calls. Rules processing, regular expressions, compression, encryption all add to the user portion of CPU consumption.

    사용자 부분: userland에서 애플리케이션 코드만 실행하는 부분입니다. HAProxy는 시스템 호출을 많이 사용하기는 하지만 오로지 이 부분에서 실행됩니다. 규칙 처리, 정규 표현식, 압축, 암호화는 모두 CPU 소비의 사용자 부분에 더해집니다.

  • the idle part, which is what the CPU does when there is nothing to do. For example HAProxy waits for an incoming connection, or waits for some data to leave, meaning the system is waiting for an ACK from the client to push these data.

    유휴 부분: 할 일이 없을 때 CPU가 하는 일입니다. 예를 들어 HAProxy는 들어오는 연결을 기다리거나 일부 데이터가 나갈 때까지 기다립니다. 즉, 시스템은 이러한 데이터를 푸시하기 위해 클라이언트의 ACK를 기다리고 있습니다.

In practice regarding HAProxy's activity, it is in general reasonably accurate (but totally inexact) to consider that interrupt/softirq are caused by Rx processing in kernel drivers, that user-land is caused by layer 7 processing in HAProxy, and that system time is caused by network processing on the Tx path.

실제로 HAProxy의 활동에 관해서는, interrupt/softirq는 커널 드라이버의 Rx 처리에서, 사용자 영역 시간은 HAProxy의 계층 7 처리에서, 시스템 시간은 Tx 경로의 네트워크 처리에서 비롯된다고 간주하는 것이 일반적으로 꽤 정확합니다(완전히 정확하지는 않지만).

Since HAProxy runs around an event loop, it waits for new events using poll() (or any alternative) and processes all these events as fast as possible before going back to poll() waiting for new events. It measures the time spent waiting in poll() compared to the time spent processing events. The ratio of polling time vs total time is called the "idle" time, it's the amount of time spent waiting for something to happen. This ratio is reported in the stats page on the "idle" line, or "Idle_pct" on the CLI. When it's close to 100%, it means the load is extremely low. When it's close to 0%, it means that there is constantly some activity. While it cannot be very accurate on an overloaded system due to other processes possibly preempting the CPU from the haproxy process, it still provides a good estimate about how HAProxy considers it is working : if the load is low and the idle ratio is low as well, it may indicate that HAProxy has a lot of work to do, possibly due to very expensive rules that have to be processed. Conversely, if HAProxy indicates the idle is close to 100% while things are slow, it means that it cannot do anything to speed things up because it is already waiting for incoming data to process. In the example below, haproxy is completely idle :

HAProxy는 이벤트 루프를 중심으로 실행되므로 poll()(또는 그 대안)을 사용해 새 이벤트를 기다리고, 이 이벤트들을 최대한 빨리 처리한 뒤 다시 poll()로 돌아가 새 이벤트를 기다립니다. 이벤트를 처리하는 데 쓴 시간과 비교하여 poll()에서 기다리는 데 쓴 시간을 측정합니다. 폴링 시간 대 총 시간의 비율을 "유휴(idle)" 시간이라고 하며, 무언가 일어나기를 기다리며 보낸 시간의 비중입니다. 이 비율은 통계 페이지의 "idle" 줄 또는 CLI의 "Idle_pct"에 보고됩니다. 100%에 가까우면 부하가 매우 낮다는 의미이고, 0%에 가까우면 지속적으로 활동이 있다는 의미입니다. 과부하 상태의 시스템에서는 다른 프로세스가 haproxy 프로세스로부터 CPU를 선점할 수 있어 아주 정확할 수는 없지만, HAProxy가 스스로 얼마나 일하고 있다고 보는지에 대한 좋은 추정치를 제공합니다. 부하가 낮은데 유휴 비율도 낮다면, 처리 비용이 매우 큰 규칙 때문에 HAProxy가 할 일이 많다는 것을 나타낼 수 있습니다. 반대로 작업이 느린데 HAProxy가 유휴 상태를 100%에 가깝게 표시한다면, 처리할 데이터가 들어오기를 이미 기다리고 있는 상태이므로 속도를 높이기 위해 할 수 있는 일이 없다는 의미입니다. 아래 예에서 haproxy는 완전히 유휴 상태입니다.

  $ echo "show info" | socat - /var/run/haproxy.sock | grep ^Idle
  Idle_pct: 100

When the idle ratio starts to become very low, it is important to tune the system and place processes and interrupts correctly to save the most possible CPU resources for all tasks. If a firewall is present, it may be worth trying to disable it or to tune it to ensure it is not responsible for a large part of the performance limitation. It's worth noting that unloading a stateful firewall generally reduces both the amount of interrupt/softirq and of system usage since such firewalls act both on the Rx and the Tx paths. On Linux, unloading the nf_conntrack and ip_conntrack modules will show whether there is anything to gain. If so, then the module runs with default settings and you'll have to figure out how to tune it for better performance. In general this consists in considerably increasing the hash table size. On FreeBSD, "pfctl -d" will disable the "pf" firewall and its stateful engine at the same time.

유휴 비율이 매우 낮아지기 시작하면, 시스템을 조정하고 프로세스와 인터럽트를 올바르게 배치하여 모든 작업에 쓸 수 있는 CPU 리소스를 최대한 확보하는 것이 중요합니다. 방화벽이 있다면 비활성화하거나 조정해 보아 성능 제한의 큰 부분을 차지하지 않는지 확인할 가치가 있습니다. 상태 저장 방화벽은 Rx와 Tx 경로 모두에서 동작하므로, 이를 언로드하면 일반적으로 interrupt/softirq와 시스템 사용량이 모두 감소한다는 점도 주목할 만합니다. Linux에서는 nf_conntrack 및 ip_conntrack 모듈을 언로드해 보면 얻을 것이 있는지 알 수 있습니다. 얻을 것이 있다면 모듈이 기본 설정으로 실행되고 있는 것이므로, 더 나은 성능을 위해 조정하는 방법을 찾아야 합니다. 일반적으로 이는 해시 테이블 크기를 상당히 늘리는 것입니다. FreeBSD에서 "pfctl -d"는 "pf" 방화벽과 그 상태 저장 엔진을 동시에 비활성화합니다.
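
On Linux a quick test could look like the sketch below; module names and sysctls may differ between kernel versions :

  # try unloading the stateful firewall (this fails if rules still reference it)
  $ rmmod nf_conntrack
  # if it must stay loaded, inspect and enlarge its tables instead
  $ sysctl net.netfilter.nf_conntrack_max
  $ cat /sys/module/nf_conntrack/parameters/hashsize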

If it is observed that a lot of time is spent in interrupt/softirq, it is important to ensure that they don't run on the same CPU. Most systems tend to pin the tasks on the CPU where they receive the network traffic because for certain workloads it improves things. But with heavily network-bound workloads it is the opposite, as the haproxy process will have to fight against its kernel counterpart. Pinning haproxy to one CPU core and the interrupts to another one, all sharing the same L3 cache, tends to noticeably increase network performance because in practice the amount of work for haproxy and the network stack are quite close, so they can almost fill an entire CPU each. On Linux this is done using taskset (for haproxy) or using cpu-map (from the haproxy config), and the interrupts are assigned under /proc/irq. Many network interfaces support multiple queues and multiple interrupts. In general it helps to spread them across a small number of CPU cores provided they all share the same L3 cache. Please always stop irqbalance, which always does the worst possible thing on such workloads.

interrupt/softirq에서 많은 시간이 소요되는 것이 관찰되면, 이들이 haproxy와 동일한 CPU에서 실행되지 않도록 하는 것이 중요합니다. 대부분의 시스템은 특정 워크로드에서 성능이 좋아지기 때문에 네트워크 트래픽을 수신하는 CPU에 작업을 고정하는 경향이 있습니다. 그러나 네트워크 부하가 심한 워크로드에서는 haproxy 프로세스가 커널 쪽 상대와 CPU를 두고 다퉈야 하므로 오히려 반대입니다. 실제로는 haproxy와 네트워크 스택의 작업량이 비슷해 각각 CPU 하나를 거의 채울 수 있기 때문에, haproxy를 하나의 CPU 코어에 고정하고 인터럽트를 같은 L3 캐시를 공유하는 다른 코어에 고정하면 네트워크 성능이 눈에 띄게 향상되는 경향이 있습니다. Linux에서는 taskset(haproxy용) 또는 cpu-map(haproxy 구성에서)으로 이를 수행하며, 인터럽트는 /proc/irq 아래에서 할당합니다. 많은 네트워크 인터페이스가 다중 대기열과 다중 인터럽트를 지원합니다. 일반적으로 모두 동일한 L3 캐시를 공유한다면 적은 수의 CPU 코어에 분산하는 것이 도움이 됩니다. 이러한 워크로드에서 항상 최악의 동작을 보이는 irqbalance는 반드시 중지하십시오.
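
As an illustrative sketch on Linux (the CPU and IRQ numbers are made up; check /proc/interrupts for the real ones) :

  # pin haproxy to CPU 1
  $ taskset -c 1 haproxy -f /etc/haproxy/haproxy.cfg
  # pin the NIC queue's interrupt (here IRQ 120) to CPU 2 on the same L3 cache
  $ echo 2 > /proc/irq/120/smp_affinity_list
  # make sure irqbalance does not undo this
  $ systemctl stop irqbalance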

For CPU-bound workloads consisting in a lot of SSL traffic or a lot of compression, it may be worth using multiple processes dedicated to certain tasks, though there is no universal rule here and experimentation will have to be performed.

많은 SSL 트래픽이나 많은 압축으로 구성된 CPU 바운드 워크로드의 경우, 특정 작업 전용으로 여러 프로세스를 사용하는 것이 좋을 수 있습니다. 다만 여기에는 보편적인 규칙이 없으므로 실험이 필요합니다.

In order to increase the CPU capacity, it is possible to make HAProxy run as several processes, using the "nbproc" directive in the global section. There are some limitations though :
CPU 용량을 늘리기 위해 전역 섹션의 "nbproc" 지시문을 사용하여 HAProxy를 여러 프로세스로 실행하도록 할 수 있습니다. 그러나 몇 가지 제한 사항이 있습니다.

  • health checks are run per process, so the target servers will get as many checks as there are running processes ;
    상태 검사는 프로세스별로 실행되므로 대상 서버는 실행 중인 프로세스만큼 많은 검사를 받습니다.
  • maxconn values and queues are per-process so the correct value must be set to avoid overloading the servers ;
    maxconn 값과 대기열은 프로세스별이므로 서버 과부하를 방지하려면 올바른 값을 설정해야 합니다.
  • outgoing connections should avoid using port ranges to avoid conflicts ;
    나가는 연결은 충돌을 피하기 위해 포트 범위를 사용하지 않아야 합니다.
  • stick-tables are per process and are not shared between processes ;
    스틱 테이블은 프로세스별로 제공되며 프로세스 간에 공유되지 않습니다.
  • each peers section may only run on a single process at a time ;
    각 피어 섹션은 한 번에 하나의 프로세스에서만 실행될 수 있습니다.
  • the CLI operations will only act on a single process at a time.
    CLI 작업은 한 번에 하나의 프로세스에서만 작동합니다.

With this in mind, it appears that the easiest setup often consists in having one first layer running on multiple processes and in charge of the heavy processing, passing the traffic to a second layer running in a single process. This mechanism is suited to SSL and compression which are the two CPU-heavy features. Instances can easily be chained over UNIX sockets (which are cheaper than TCP sockets and which do not waste ports), and the proxy protocol which is useful to pass client information to the next stage. When doing so, it is generally a good idea to bind all the single-process tasks to process number 1 and extra tasks to next processes, as this will make it easier to generate similar configurations for different machines.

이를 염두에 두면, 가장 쉬운 설정은 대개 여러 프로세스에서 실행되며 무거운 처리를 담당하는 첫 번째 계층이 트래픽을 단일 프로세스에서 실행되는 두 번째 계층으로 전달하는 구성입니다. 이 메커니즘은 CPU를 많이 사용하는 두 가지 기능인 SSL과 압축에 적합합니다. 인스턴스는 UNIX 소켓(TCP 소켓보다 저렴하고 포트를 낭비하지 않음)을 통해 쉽게 연결할 수 있으며, 클라이언트 정보를 다음 단계로 전달하는 데에는 프록시 프로토콜이 유용합니다. 이렇게 할 때는 일반적으로 모든 단일 프로세스 작업을 프로세스 번호 1에 바인딩하고 추가 작업을 다음 프로세스들에 바인딩하는 것이 좋습니다. 이렇게 하면 서로 다른 시스템에 대해 유사한 구성을 더 쉽게 생성할 수 있습니다.
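
A minimal configuration sketch of such a chaining, with made-up names and paths, could look like this; the SSL layer passes client information to the second layer over a UNIX socket using the PROXY protocol :

      frontend ssl-offload
          bind :443 ssl crt /etc/haproxy/site.pem
          default_backend pass-to-main

      backend pass-to-main
          server main unix@/var/run/haproxy-main.sock send-proxy-v2

      frontend main
          bind unix@/var/run/haproxy-main.sock accept-proxy
          default_backend app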

On Linux versions 3.9 and above, running HAProxy in multi-process mode is much more efficient when each process uses a distinct listening socket on the same IP:port ; this will make the kernel evenly distribute the load across all processes instead of waking them all up. Please check the "process" option of the "bind" keyword lines in the configuration manual for more information.

Linux 버전 3.9 이상에서는 각 프로세스가 동일한 IP:port에서 고유한 청취 소켓을 사용할 때 다중 프로세스 모드에서 HAProxy를 실행하는 것이 훨씬 더 효율적입니다. 이렇게 하면 커널이 모든 프로세스를 깨우는 대신 모든 프로세스에 부하를 고르게 분산시킵니다. 자세한 내용은 구성 매뉴얼에서 "bind" 키워드 행의 "process" 옵션을 확인하십시오.


8. Logging

For logging, HAProxy always relies on a syslog server since it does not perform any file-system access. The standard way of using it is to send logs over UDP to the log server (by default on port 514). Very commonly this is configured to 127.0.0.1 where the local syslog daemon is running, but it's also used over the network to log to a central server. The central server provides additional benefits especially in active-active scenarios where it is desirable to keep the logs merged in arrival order. HAProxy may also make use of a UNIX socket to send its logs to the local syslog daemon, but it is not recommended at all, because if the syslog server is restarted while haproxy runs, the socket will be replaced and new logs will be lost. Since HAProxy will be isolated inside a chroot jail, it will not have the ability to reconnect to the new socket. It has also been observed in the field that the log buffers in use on UNIX sockets are very small and lead to lost messages even at very light loads. This can be fine for testing, however.

로깅을 위해 HAProxy는 파일 시스템 액세스를 수행하지 않기 때문에 항상 syslog 서버에 의존합니다. 표준적인 사용 방법은 UDP를 통해 로그를 로그 서버(기본적으로 포트 514)로 보내는 것입니다. 매우 일반적으로 이것은 로컬 syslog 데몬이 실행 중인 127.0.0.1로 구성되지만, 네트워크를 통해 중앙 서버로 로그를 보내는 데에도 사용됩니다. 중앙 서버는 특히 로그를 도착 순서대로 병합해 두는 것이 바람직한 활성-활성 시나리오에서 추가적인 이점을 제공합니다. HAProxy는 UNIX 소켓을 사용하여 로그를 로컬 syslog 데몬으로 보낼 수도 있지만 전혀 권장되지 않습니다. haproxy가 실행되는 동안 syslog 서버가 다시 시작되면 소켓이 교체되고 새 로그가 손실되기 때문입니다. HAProxy는 chroot 감옥 안에 격리되므로 새 소켓에 다시 연결할 수 없습니다. 또한 UNIX 소켓에서 사용하는 로그 버퍼가 매우 작아 아주 가벼운 부하에서도 메시지가 손실된다는 사실이 현장에서 관찰되었습니다. 그러나 테스트 용도로는 괜찮을 수 있습니다.

It is recommended to add the following directive to the "global" section to make HAProxy log to the local daemon using facility "local0" :
HAProxy가 "local0" 기능을 사용하여 로컬 데몬에 로그하도록 하려면 "global" 섹션에 다음 지시문을 추가하는 것이 좋습니다.

      log 127.0.0.1:514 local0

and then to add the following one to each "defaults" section or to each frontend and backend section :
그런 다음 각 "defaults" 섹션이나 각 프런트엔드 및 백엔드 섹션에 다음 항목을 추가합니다.

      log global

This way, all logs will be centralized through the global definition of where the log server is.
이렇게 하면 모든 로그가 로그 서버가 있는 전역 정의를 통해 중앙 집중화됩니다.

Some syslog daemons do not listen to UDP traffic by default, so depending on the daemon being used, the syntax to enable this will vary :
일부 syslog 데몬은 기본적으로 UDP 트래픽을 수신하지 않으므로 사용 중인 데몬에 따라 이를 활성화하는 구문이 달라집니다.

  • on sysklogd, you need to pass argument "-r" on the daemon's command line so that it listens to a UDP socket for "remote" logs ; note that there is no way to limit it to address 127.0.0.1 so it will also receive logs from remote systems ;
    sysklogd에서 데몬의 명령줄에 "-r" 인수를 전달해야 "원격" 로그에 대한 UDP 소켓을 수신할 수 있습니다. 주소 127.0.0.1로 제한할 방법이 없으므로 원격 시스템에서도 로그를 수신합니다.
  • on rsyslogd, the following lines must be added to the configuration file :
    rsyslogd에서 구성 파일에 다음 행을 추가해야 합니다.
      $ModLoad imudp
      $UDPServerAddress *
      $UDPServerRun 514
  • on syslog-ng, a new source can be created the following way, it then needs to be added as a valid source in one of the "log" directives :
    syslog-ng에서 다음과 같은 방법으로 새 소스를 생성할 수 있으며 "log" 지시문 중 하나에 유효한 소스로 추가해야 합니다.
      source s_udp {
        udp(ip(127.0.0.1) port(514));
      };

Please consult your syslog daemon's manual for more information. If no logs are seen in the system's log files, please consider the following tests :
자세한 내용은 syslog 데몬 설명서를 참조하십시오. 시스템의 로그 파일에 로그가 표시되지 않으면 다음 테스트를 고려하십시오.

  • restart haproxy. Each frontend and backend logs one line indicating it's starting. If these logs are received, it means logs are working.
    haproxy를 다시 시작합니다. 각 프런트엔드와 백엔드는 시작 중임을 나타내는 한 줄을 기록합니다. 이러한 로그가 수신되면 로그가 작동 중임을 의미합니다.
  • run "strace -tt -s100 -etrace=sendmsg -p <haproxy's pid>" and perform some activity that you expect to be logged. You should see the log messages being sent using sendmsg() there. If they don't appear, restart using strace on top of haproxy. If you still see no logs, it definitely means that something is wrong in your configuration.
    "strace -tt -s100 -etrace=sendmsg -p <haproxy's pid>"를 실행하고 기록될 것으로 예상되는 일부 활동을 수행합니다. 거기에서 sendmsg()를 사용하여 전송되는 로그 메시지를 볼 수 있습니다. 표시되지 않으면 haproxy 위에 strace를 사용하여 다시 시작합니다. 여전히 로그가 표시되지 않으면 확실히 구성에 문제가 있음을 의미합니다.
  • run tcpdump to watch for port 514, for example on the loopback interface if the traffic is being sent locally : "tcpdump -As0 -ni lo port 514". If the packets are seen there, it's the proof they're sent, and it's then the syslogd daemon that needs to be investigated.
    tcpdump를 실행하여 포트 514를 감시합니다 (예: 트래픽이 로컬로 전송되는 경우 루프백 인터페이스에서 "tcpdump -As0 -ni lo port 514"). 패킷이 거기에 표시되면 패킷이 전송되었다는 증거이므로 syslogd 데몬의 문제를 해결해야 합니다.

While traffic logs are sent from the frontends (where the incoming connections are accepted), backends also need to be able to send logs in order to report a server state change consecutive to a health check. Please consult HAProxy's configuration manual for more information regarding all possible log settings.

프런트엔드(수신 연결이 수락되는 곳)에서 트래픽 로그가 전송되는 동안 백엔드도 상태 확인에 이어 서버 상태 변경을 보고하기 위해 로그를 보낼 수 있어야 합니다. 가능한 모든 로그 설정에 대한 자세한 내용은 HAProxy의 구성 설명서를 참조하십시오.

It is convenient to choose a facility that is not used by other daemons. HAProxy examples often suggest "local0" for traffic logs and "local1" for admin logs because they're never seen in the field. A single facility would be enough as well. Having separate logs is convenient for log analysis, but it's also important to remember that logs may sometimes convey confidential information, and as such they must not be mixed with other logs that may accidentally be handed out to unauthorized people.

다른 데몬이 사용하지 않는 facility를 선택하는 것이 편리합니다. HAProxy 예제에서는 현장에서 쓰이는 일이 없기 때문에 트래픽 로그에는 "local0"을, 관리자 로그에는 "local1"을 제안하는 경우가 많습니다. 하나의 facility만으로도 충분합니다. 로그를 분리해 두면 로그 분석에 편리하지만, 로그에 때때로 기밀 정보가 담길 수 있으므로 권한이 없는 사람에게 실수로 전달될 수 있는 다른 로그와 섞여서는 안 된다는 점도 기억해야 합니다.

For in-field troubleshooting without impacting the server's capacity too much, it is recommended to make use of the "halog" utility provided with HAProxy. This is sort of a grep-like utility designed to process HAProxy log files at a very fast data rate. Typical figures range between 1 and 2 GB of logs per second. It is capable of extracting only certain logs (eg: searching for some classes of HTTP status codes or connection termination status, searching by response time ranges, looking for errors only), counting lines, limiting the output to a number of lines, and performing some more advanced statistics such as sorting servers by response time or error counts, sorting URLs by time or count, sorting client addresses by access count, and so on. It is pretty convenient to quickly spot anomalies such as a bot looping on the site, and block them.

서버 용량에 큰 영향을 주지 않으면서 현장에서 문제를 해결하려면 HAProxy와 함께 제공되는 "halog" 유틸리티를 사용하는 것이 좋습니다. 이것은 HAProxy 로그 파일을 매우 빠른 속도로 처리하도록 설계된 일종의 grep 유사 유틸리티입니다. 일반적인 처리 속도는 초당 1~2GB의 로그입니다. 특정 로그만 추출하는 것(예: 특정 HTTP 상태 코드 군이나 연결 종료 상태 검색, 응답 시간 범위로 검색, 오류만 찾기)은 물론, 줄 수 세기, 출력 줄 수 제한, 그리고 응답 시간이나 오류 수로 서버 정렬, 시간이나 횟수로 URL 정렬, 접근 횟수로 클라이언트 주소 정렬 같은 고급 통계도 수행할 수 있습니다. 사이트를 반복 조회하는 봇 같은 이상 징후를 빠르게 발견하고 차단하는 데 매우 편리합니다.
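
A few illustrative invocations (the option names below are those found in recent halog versions; paths are placeholders) :

  # number of requests per HTTP status code
  $ halog -st < /var/log/haproxy.log
  # per-server report (errors, response times)
  $ halog -srv < /var/log/haproxy.log
  # URLs sorted by total time spent
  $ halog -ut < /var/log/haproxy.log | head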


9. Statistics and monitoring

It is possible to query HAProxy about its status. The most commonly used mechanism is the HTTP statistics page. This page also exposes an alternative CSV output format for monitoring tools. The same format is provided on the Unix socket.

Statistics are regrouped in categories labelled as domains, corresponding to the multiple components of HAProxy. There are two domains available: proxy and dns. If not specified, the proxy domain is selected. Note that only the proxy statistics are printed on the HTTP page.

9.1. CSV format

The statistics may be consulted either from the unix socket or from the HTTP page. Both means provide a CSV format whose fields follow. The first line begins with a sharp ('#') and has one word per comma-delimited field which represents the title of the column. All other lines starting at the second one use a classical CSV format using a comma as the delimiter, and the double quote ('"') as an optional text delimiter, but only if the enclosed text is ambiguous (if it contains a quote or a comma). The double-quote character ('"') in the text is doubled ('""'), which is the format that most tools recognize. Please do not insert any column before these ones in order not to break tools which use hard-coded column positions.

통계는 유닉스 소켓이나 HTTP 페이지에서 조회할 수 있습니다. 두 방법 모두 아래에 나열된 필드로 이루어진 CSV 형식을 제공합니다. 첫 번째 줄은 샤프('#')로 시작하며, 쉼표로 구분된 각 필드에는 열 제목을 나타내는 단어가 하나씩 들어 있습니다. 두 번째 줄부터의 모든 줄은 쉼표를 구분 기호로 사용하는 전형적인 CSV 형식을 사용하며, 감싼 텍스트가 모호한 경우(따옴표나 쉼표를 포함하는 경우)에만 큰따옴표('"')를 선택적 텍스트 구분 기호로 사용합니다. 텍스트 안의 큰따옴표 문자('"')는 두 번 겹쳐('""') 표기하는데, 이는 대부분의 도구가 인식하는 형식입니다. 하드 코딩된 열 위치를 사용하는 도구가 깨지지 않도록 이 열들 앞에 다른 열을 삽입하지 마십시오.
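
For example, the same CSV may be fetched from the socket or from the HTTP statistics page by appending ";csv" to the stats URI (the socket path and URI below are hypothetical) :

  $ echo "show stat" | socat - /var/run/haproxy.sock | head -3
  $ curl -s 'http://127.0.0.1:8404/stats;csv' | head -3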

For proxy statistics, after each field name, the types which may have a value for that field are specified in brackets. The types are L (Listeners), F (Frontends), B (Backends), and S (Servers). There is a fixed set of static fields that are always available in the same order. A column containing the character '-' delimits the end of the static fields, after which presence or order of the fields are not guaranteed.

프록시 통계의 경우 각 필드 이름 뒤에 해당 필드에 대한 값을 가질 수 있는 유형이 괄호 안에 지정됩니다. 유형은 L(리스너), F(프론트엔드), B(백엔드) 및 S(서버)입니다. 항상 동일한 순서로 사용할 수 있는 고정된 정적 필드 세트가 있습니다. '-' 문자를 포함하는 열은 정적 필드의 끝을 구분하며 그 이후에는 필드의 존재 또는 순서가 보장되지 않습니다.

Here is the list of static fields using the proxy statistics domain:
프록시 통계 도메인을 사용하는 정적 필드 목록은 다음과 같습니다.

  • 0. pxname [LFBS]: proxy name 프록시 이름
  • 1. svname [LFBS]: service name (FRONTEND for frontend, BACKEND for backend, any name for server/listener)
    서비스 이름(프런트엔드의 경우 FRONTEND, 백엔드의 경우 BACKEND, 서버/리스너의 모든 이름)
  • 2. qcur(Queue Cur) [..BS]: current queued requests. For the backend this reports the number queued without a server assigned.
    현재 대기중인 요청. 백엔드의 경우 할당된 서버 없이 대기 중인 번호를 보고합니다.
  • 3. qmax(Queue Max) [..BS]: max value of qcur qcur의 최대 값
  • 4. scur(Sessions Cur) [LFBS]: current sessions 현재 세션 수
  • 5. smax(Sessions Max) [LFBS]: max sessions 최대 세션 수
  • 6. slim(Sessions Limit) [LFBS]: configured session limit 구성된 세션 제한
  • 7. stot(Sessions Total) [LFBS]: cumulative number of sessions 누적 세션 수
  • 8. bin(Bytes In) [LFBS]: bytes in 입력 바이트 수
  • 9. bout(Bytes Out) [LFBS]: bytes out 출력 바이트 수
  • 10. dreq(Denied Req) [LFB.]: requests denied because of security concerns.
    보안 문제로 인해 거부된 요청 수
    • For tcp this is because of a matched tcp-request content rule.
      tcp의 경우 일치하는 tcp-request 콘텐츠 규칙 때문입니다.
    • For http this is because of a matched http-request or tarpit rule.
      http의 경우 일치하는 http-request 또는 tarpit 규칙 때문입니다.
  • 11. dresp(Denied Resp) [LFBS]: responses denied because of security concerns.
    보안 문제로 인해 응답이 거부되었습니다.
    • For http this is because of a matched http-request rule, or "option checkcache".
      http의 경우 일치하는 http 요청 규칙 또는 "option checkcache" 때문입니다.
  • 12. ereq(Errors Req) [LF..]: request errors. Some of the possible causes are:
    요청 오류. 가능한 원인 중 일부는 다음과 같습니다.
    • early termination from the client, before the request has been sent.
      요청이 전송되기 전에 클라이언트에서 조기 종료됩니다.
    • read error from the client   클라이언트에서 읽기 오류
    • client timeout   클라이언트 시간 초과
    • client closed connection   클라이언트 폐쇄 연결
    • various bad requests from the client.   클라이언트의 다양한 잘못된 요청.
    • request was tarpitted.   요청이 tarpit 처리되었습니다.
  • 13. econ(Errors Conn) [..BS]: number of requests that encountered an error trying to connect to a backend server. The backend stat is the sum of the stat for all servers of that backend, plus any connection errors not associated with a particular server (such as the backend having no active servers).
    백엔드 서버에 연결하는 동안 오류가 발생한 요청 수입니다. 백엔드 통계는 해당 백엔드의 모든 서버에 대한 통계와 특정 서버와 관련되지 않은 연결 오류(예: 활성 서버가 없는 백엔드)의 합계입니다.
  • 14. eresp(Errors Resp) [..BS]: response errors. srv_abrt will be counted here also.
    응답 오류. srv_abrt도 여기서 계산됩니다.
    Some other errors are: 다른 오류는 다음과 같습니다.
    • write error on the client socket (won't be counted for the server stat)
      클라이언트 소켓의 쓰기 오류(서버 통계에 포함되지 않음)
    • failure applying filters to the response.
      응답에 필터를 적용하지 못했습니다.
  • 15. wretr(Warnings Retr) [..BS]: number of times a connection to a server was retried.
    서버에 대한 연결을 재시도한 횟수입니다.
  • 16. wredis(Warnings Redis) [..BS]: number of times a request was redispatched to another server. The server value counts the number of times that server was switched away from.
    요청이 다른 서버로 재전송된 횟수입니다. 서버 값은 서버가 전환된 횟수를 계산합니다.
  • 17. status(Server Status) [LFBS]: status (UP/DOWN/NOLB/MAINT/MAINT(via)/MAINT(resolution)...)
  • 18. weight(Server Wght) [..BS]: total effective weight (backend), effective weight (server)
    총 유효 가중치(백엔드), 유효 가중치(서버)
  • 19. act(Server Act) [..BS]: number of active servers (backend), server is active (server)
    활성 서버 수(백엔드), 서버가 활성 상태임(서버)
  • 20. bck(Server Bck) [..BS]: number of backup servers (backend), server is backup (server)
    백업 서버 수(백엔드), 서버는 백업(서버)
  • 21. chkfail(Server Chk) [...S]: number of failed checks. (Only counts checks failed when the server is up.)
    실패한 검사 수. (서버가 UP 상태일 때 실패한 검사만 계산합니다.)
  • 22. chkdown(Server Dwn) [..BS]: number of UP->DOWN transitions. The backend counter counts transitions to the whole backend being down, rather than the sum of the counters for each server.
    UP->DOWN 전환 횟수. 백엔드 카운터는 각 서버에 대한 카운터의 합이 아니라 전체 백엔드로의 전환을 계산합니다.
  • 23. lastchg(Server LastChk) [..BS]: number of seconds since the last UP<->DOWN transition
    마지막 UP<->DOWN 전환 이후 초 수
  • 24. downtime(Server Dwntme) [..BS]: total downtime (in seconds). The value for the backend is the downtime for the whole backend, not the sum of the server downtime.
    총 가동 중지 시간(초). 백엔드 값은 서버 다운타임의 합이 아니라 전체 백엔드의 다운타임입니다.
  • 25. qlimit(Queue Limit) [...S]: configured maxqueue for the server, or nothing if the value is 0 (default, meaning no limit)
    서버에 대해 구성된 maxqueue. 값이 0이면 아무것도 표시되지 않음(기본값, 제한 없음)
  • 26. pid [LFBS]: process id (0 for first instance, 1 for second, ...)
    프로세스 ID(첫 번째 인스턴스의 경우 0, 두 번째 인스턴스의 경우 1, ...)
  • 27. iid [LFBS]: unique proxy id 고유한 프록시 ID
  • 28. sid [L..S]: server id (unique inside a proxy) 서버 ID(프록시 내에서 고유함)
  • 29. throttle(Server Thrtle) [...S]: current throttle percentage for the server, when slowstart is active, or no value if not in slowstart.
    slowstart가 활성화된 경우 서버의 현재 스로틀 백분율 또는 slowstart가 아닌 경우 값이 없습니다.
  • 30. lbtot(Sessions LbTot) [..BS]: total number of times a server was selected, either for new sessions, or when re-dispatching. The server counter is the number of times that server was selected.
    새 세션 또는 재분배 시 서버가 선택된 총 횟수입니다. 서버 카운터는 서버가 선택된 횟수입니다.
  • 31. tracked [...S]: id of proxy/server if tracking is enabled.
    추적이 활성화된 경우 프록시/서버의 ID입니다.
  • 32. type [LFBS]: (0=frontend, 1=backend, 2=server, 3=socket/listener)
  • 33. rate [.FBS]: number of sessions per second over last elapsed second
    마지막 경과 시간 동안 초당 세션 수
  • 34. rate_lim [.F..]: configured limit on new sessions per second
    초당 새 세션에 대해 구성된 제한
  • 35. rate_max [.FBS]: max number of new sessions per second
    초당 최대 새 세션 수
  • 36. check_status(Server Status) [...S]: status of last health check, one of:
    다음 중 하나인 마지막 상태 확인 상태:
    • UNK -> unknown 알 수 없음.
    • INI -> initializing 초기화
    • SOCKERR -> socket error 소켓 오류
    • L4OK -> check passed on layer 4, no upper layers testing enabled
      레이어 4에서 검사 통과, 상위 레이어 테스트가 활성화되지 않음
    • L4TOUT -> layer 1-4 timeout 계층 1-4 시간 초과
    • L4CON -> layer 1-4 connection problem, for example "Connection refused" (tcp rst) or "No route to host" (icmp)
      계층 1-4 연결 문제, 예: "연결 거부됨"(tcp rst) 또는 "호스트에 대한 경로 없음"(icmp)
    • L6OK -> check passed on layer 6. 레이어 6 검사 통과
    • L6TOUT -> layer 6 (SSL) timeout. 계층 6(SSL) 시간 초과
    • L6RSP -> layer 6 invalid response - protocol error. 계층 6 잘못된 응답 - 프로토콜 오류
    • L7OK -> check passed on layer 7. 레이어 7 검사 통과
    • L7OKC -> check conditionally passed on layer 7, for example 404 with disable-on-404
      예를 들어 disable-on-404가 있는 404와 같이 레이어 7에 조건부로 통과된 검사
    • L7TOUT -> layer 7 (HTTP/SMTP) timeout. 계층 7(HTTP/SMTP) 시간 초과
    • L7RSP -> layer 7 invalid response - protocol error. 계층 7 잘못된 응답 - 프로토콜 오류
    • L7STS -> layer 7 response error, for example HTTP 5xx. 계층 7 응답 오류(예: HTTP 5xx)
    Notice: If a check is currently running, the last known status will be reported, prefixed with "* ". e. g. "* L7OK".
    참고: 검사가 현재 실행 중인 경우 마지막으로 알려진 상태가 보고되며 접두어 "*"가 붙습니다. 예) "* L7OK".
  • 37. check_code [...S]: layer5-7 code, if available. layer5-7 코드(사용 가능한 경우)
  • 38. check_duration [...S]: time in ms took to finish last health check
    마지막 상태 확인을 완료하는 데 걸린 시간(ms)
  • 39. hrsp_1xx [.FBS]: http responses with 1xx code. 1xx 코드가 있는 http 응답
  • 40. hrsp_2xx [.FBS]: http responses with 2xx code. 2xx 코드가 있는 http 응답
  • 41. hrsp_3xx [.FBS]: http responses with 3xx code. 3xx 코드가 있는 http 응답
  • 42. hrsp_4xx [.FBS]: http responses with 4xx code. 4xx 코드가 있는 http 응답
  • 43. hrsp_5xx [.FBS]: http responses with 5xx code. 5xx 코드가 있는 http 응답
  • 44. hrsp_other [.FBS]: http responses with other codes (protocol error).
    다른 코드가 포함된 http 응답(프로토콜 오류)
  • 45. hanafail [...S]: failed health checks details. 실패한 상태 확인 세부정보
  • 46. req_rate [.F..]: HTTP requests per second over last elapsed second.
    마지막 경과 시간 동안 초당 HTTP 요청 수
  • 47. req_rate_max [.F..]: max number of HTTP requests per second observed.
    관찰된 초당 최대 HTTP 요청 수
  • 48. req_tot [.FB.]: total number of HTTP requests received.
    수신된 총 HTTP 요청 수
  • 49. cli_abrt [..BS]: number of data transfers aborted by the client.
    클라이언트가 중단한 데이터 전송 수
  • 50. srv_abrt [..BS]: number of data transfers aborted by the server(inc. in eresp).
    서버에 의해 중단된 데이터 전송 수(eresp에 포함)
  • 51. comp_in [.FB.]: number of HTTP response bytes fed to the compressor.
    압축기에 공급된 HTTP 응답 바이트 수
  • 52. comp_out [.FB.]: number of HTTP response bytes emitted by the compressor.
    압축기에서 내보낸 HTTP 응답 바이트 수입니다.
  • 53. comp_byp [.FB.]: number of bytes that bypassed the HTTP compressor (CPU/BW limit).
    HTTP 압축기를 우회한 바이트 수(CPU/BW 제한)
  • 54. comp_rsp [.FB.]: number of HTTP responses that were compressed.
    압축된 HTTP 응답 수
  • 55. lastsess [..BS]: number of seconds since last session assigned to server/backend.
    서버/백엔드에 할당된 마지막 세션 이후의 시간(초)
  • 56. last_chk [...S]: last health check contents or textual error.
    마지막 상태 확인 내용 또는 텍스트 오류
  • 57. last_agt [...S]: last agent check contents or textual error.
    마지막 에이전트 확인 내용 또는 텍스트 오류
  • 58. qtime [..BS]: the average queue time in ms over the 1024 last requests
    1024개의 마지막 요청에 대한 평균 대기열 시간(ms)
  • 59. ctime [..BS]: the average connect time in ms over the 1024 last requests
    1024개의 마지막 요청에 대한 평균 연결 시간(ms)
  • 60. rtime [..BS]: the average response time in ms over the 1024 last requests (0 for TCP)
    1024개의 마지막 요청에 대한 평균 응답 시간(ms)(TCP의 경우 0)
  • 61. ttime [..BS]: the average total session time in ms over the 1024 last requests.
    1024개의 마지막 요청에 대한 평균 총 세션 시간(ms)
  • 62. agent_status [...S]: status of last agent check, one of:
    마지막 에이전트 확인 상태, 다음 중 하나:
    • UNK -> unknown
    • INI -> initializing
    • SOCKERR -> socket error
    • L4OK -> check passed on layer 4, no upper layers testing enabled
    • L4TOUT -> layer 1-4 timeout
    • L4CON -> layer 1-4 connection problem, for example "Connection refused" (tcp rst) or "No route to host" (icmp)
    • L7OK -> agent reported "up"
    • L7STS -> agent reported "fail", "stop", or "down"
  • 63. agent_code [...S]: numeric code reported by agent if any (unused for now)
  • 64. agent_duration [...S]: time in ms taken to finish last check
  • 65. check_desc [...S]: short human-readable description of check_status
  • 66. agent_desc [...S]: short human-readable description of agent_status
  • 67. check_rise [...S]: server's "rise" parameter used by checks
  • 68. check_fall [...S]: server's "fall" parameter used by checks
  • 69. check_health [...S]: server's health check value between 0 and rise+fall-1
  • 70. agent_rise [...S]: agent's "rise" parameter, normally 1
  • 71. agent_fall [...S]: agent's "fall" parameter, normally 1
  • 72. agent_health [...S]: agent's health parameter, between 0 and rise+fall-1
  • 73. addr [L..S]: address:port or "unix". IPv6 has brackets around the address.
  • 74: cookie [..BS]: server's cookie value or backend's cookie name
  • 75: mode [LFBS]: proxy mode (tcp, http, health, unknown)
  • 76: algo [..B.]: load balancing algorithm
  • 77: conn_rate [.F..]: number of connections over the last elapsed second
  • 78: conn_rate_max [.F..]: highest known conn_rate
  • 79: conn_tot [.F..]: cumulative number of connections
  • 80: intercepted [.FB.]: cum. number of intercepted requests (monitor, stats)
  • 81: dcon [LF..]: requests denied by "tcp-request connection" rules
  • 82: dses [LF..]: requests denied by "tcp-request session" rules
  • 83: wrew [LFBS]: cumulative number of failed header rewriting warnings
  • 84: connect [..BS]: cumulative number of connection establishment attempts
  • 85: reuse [..BS]: cumulative number of connection reuses
  • 86: cache_lookups [.FB.]: cumulative number of cache lookups
  • 87: cache_hits [.FB.]: cumulative number of cache hits
  • 88: srv_icur [...S]: current number of idle connections available for reuse
  • 89: src_ilim [...S]: limit on the number of available idle connections
  • 90. qtime_max [..BS]: the maximum observed queue time in ms
  • 91. ctime_max [..BS]: the maximum observed connect time in ms
  • 92. rtime_max [..BS]: the maximum observed response time in ms (0 for TCP)
  • 93. ttime_max [..BS]: the maximum observed total session time in ms
  • 94. eint [LFBS]: cumulative number of internal errors
  • 95. idle_conn_cur [...S]: current number of unsafe idle connections
  • 96. safe_conn_cur [...S]: current number of safe idle connections
  • 97. used_conn_cur [...S]: current number of connections in use
  • 98. need_conn_est [...S]: estimated needed number of connections
  • 99. uweight [..BS]: total user weight (backend), server user weight (server)

For all other statistics domains, the presence or the order of the fields are not guaranteed. In this case, the header line should always be used to parse the CSV data.
다른 모든 통계 도메인의 경우 필드의 존재 또는 순서가 보장되지 않습니다. 이 경우 CSV 데이터를 구문 분석하는 데 항상 헤더 행을 사용해야 합니다.

9.2. Typed output format

Both "show info" and "show stat" support a mode where each output value comes with its type and sufficient information to know how the value is supposed to be aggregated between processes and how it evolves.
"show info" 및 "show stat" 모두 각 출력 값이 해당 유형과 함께 제공되는 모드와 값이 프로세스 간에 집계되는 방식과 발전 방식을 알 수 있는 충분한 정보를 지원합니다.

In all cases, the output consists in having a single value per line with all the information split into fields delimited by colons (':').
모든 경우에 출력은 모든 정보가 콜론(':')으로 구분된 필드로 분할된 라인당 단일 값을 갖는 것으로 구성됩니다.
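
For example, with "show info typed" on a hypothetical socket (the exact values obviously differ) :

  $ echo "show info typed" | socat - /var/run/haproxy.sock | head -3
  0.Name.1:POS:str:HAProxy
  1.Version.1:POS:str:2.6.0
  2.Release_date.1:POS:str:2022/05/31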

The first column designates the object or metric being dumped. Its format is specific to the command producing this output and will not be described in this section. Usually it will consist in a series of identifiers and field names.
첫 번째 열은 덤프되는 개체 또는 메트릭을 지정합니다. 형식은 이 출력을 생성하는 명령에 따라 다르며 이 섹션에서는 설명하지 않습니다. 일반적으로 일련의 식별자와 필드 이름으로 구성됩니다.

The second column contains 3 characters respectively indicating the origin, the nature and the scope of the value being reported. The first character (the origin) indicates where the value was extracted from. Possible characters are :
두 번째 열에는 보고되는 값의 출처, 특성, 범위를 각각 나타내는 3개의 문자가 들어 있습니다. 첫 번째 문자(출처)는 값이 어디에서 추출되었는지를 나타냅니다. 가능한 문자는 다음과 같습니다.

  • M   The value is a metric. It is valid at one instant and may change depending on its nature.
    값은 메트릭입니다. 그것은 한 순간에 유효하며 그 특성에 따라 변경될 수 있습니다.
  • S   The value is a status. It represents a discrete value which by definition cannot be aggregated. It may be the status of a server ("UP" or "DOWN"), the PID of the process, etc.
    값은 상태입니다. 이는 정의상 집계할 수 없는 개별 값을 나타냅니다. 서버의 상태("UP" 또는 "DOWN"), 프로세스의 PID 등일 수 있습니다.
  • K   The value is a sorting key. It represents an identifier which may be used to group some values together because it is unique among its class. All internal identifiers are keys. Some names can be listed as keys if they are unique (eg: a frontend name is unique). In general keys come from the configuration, even though some of them may automatically be assigned. For most purposes keys may be considered as equivalent to configuration.
    값은 정렬 키입니다. 같은 부류 안에서 고유하기 때문에 일부 값을 함께 그룹화하는 데 사용할 수 있는 식별자를 나타냅니다. 모든 내부 식별자는 키입니다. 일부 이름은 고유한 경우 키로 취급될 수 있습니다(예: 프런트엔드 이름은 고유함). 일반적으로 키는 구성에서 오지만, 일부는 자동으로 할당될 수 있습니다. 대부분의 경우 키는 구성과 동등한 것으로 간주될 수 있습니다.
  • C   The value comes from the configuration. Certain configuration values make sense on the output, for example a concurrent connection limit or a cookie name. By definition these values are the same in all processes started from the same configuration file.
    값은 구성에서 가져옵니다. 예를 들어 동시 연결 제한 또는 쿠키 이름과 같은 특정 구성 값은 출력에서 의미가 있습니다. 정의에 따라 이러한 값은 동일한 구성 파일에서 시작된 모든 프로세스에서 동일합니다.
  • P   The value comes from the product itself. There are very few such values, most common use is to report the product name, version and release date. These elements are also the same between all processes.
    값은 제품 자체에서 나옵니다. 그러한 값은 거의 없으며, 가장 일반적인 용도는 제품 이름, 버전, 릴리스 날짜를 보고하는 것입니다. 이러한 요소 역시 모든 프로세스 간에 동일합니다.

The second character (the nature) indicates the nature of the information carried by the field in order to let an aggregator decide on what operation to use to aggregate multiple values. Possible characters are :
두 번째 문자(성질)는 집계자가 여러 값을 집계하는 데 사용할 작업을 결정할 수 있도록 필드에 포함된 정보의 특성을 나타냅니다. 가능한 문자는 다음과 같습니다.

  • A   The value represents an age since a last event. This is a bit different from the duration in that an age is automatically computed based on the current date. A typical example is how long ago did the last session happen on a server. Ages are generally aggregated by taking the minimum value and do not need to be stored.
    값은 마지막 이벤트 이후의 기간을 나타냅니다. 현재 날짜를 기준으로 나이가 자동으로 계산된다는 점에서 기간과는 조금 다릅니다. 일반적인 예는 서버에서 마지막 세션이 얼마나 오래 전에 발생했는지입니다. 연령은 일반적으로 최소값을 취하여 집계되며 저장할 필요가 없습니다.
  • a   The value represents an already averaged value. The average response times and server weights are of this nature. Averages can typically be averaged between processes.
    값은 이미 평균화된 값을 나타냅니다. 평균 응답 시간과 서버 가중치는 이러한 특성을 가집니다. 평균은 일반적으로 프로세스 간에 평균화될 수 있습니다.
  • C   The value represents a cumulative counter. Such measures perpetually increase until they wrap around. Some monitoring protocols need to tell the difference between a counter and a gauge to report a different type. In general counters may simply be summed since they represent events or volumes. Examples of metrics of this nature are connection counts or byte counts.
    값은 누적 카운터를 나타냅니다. 이러한 값은 한 바퀴 되돌아갈(wrap around) 때까지 계속 증가합니다. 일부 모니터링 프로토콜은 카운터와 게이지를 구분하여 서로 다른 유형으로 보고해야 합니다. 일반적으로 카운터는 이벤트나 볼륨을 나타내므로 단순히 합산할 수 있습니다. 이러한 특성의 메트릭의 예로는 연결 수나 바이트 수가 있습니다.
  • D   The value represents a duration for a status. There are a few usages of this, most of them include the time taken by the last health check and the time a server has spent down. Durations are generally not summed, most of the time the maximum will be retained to compute an SLA.
    값은 상태의 기간을 나타냅니다. 여기에는 몇 가지 용도가 있으며, 대부분은 마지막 상태 확인에 소요된 시간과 서버가 다운된 시간을 포함합니다. 기간은 일반적으로 합산되지 않으며 대부분의 경우 SLA를 계산하기 위해 최대값이 유지됩니다.
  • G   The value represents a gauge. It's a measure at one instant. The memory usage or the current number of active connections are of this nature. Metrics of this type are typically summed during aggregation.
    값은 게이지를 나타냅니다. 한 순간에 측정입니다. 메모리 사용량 또는 현재 활성 연결 수는 이러한 특성을 가집니다. 이 유형의 메트릭은 일반적으로 집계 중에 합산됩니다.
  • L   The value represents a limit (generally a configured one). By nature, limits are harder to aggregate since they are specific to the point where they were retrieved. In certain situations they may be summed or be kept separate.
    값은 제한(일반적으로 구성된 제한)을 나타냅니다. 본질적으로 제한은 검색된 지점에 따라 다르기 때문에 집계하기가 더 어렵습니다. 특정 상황에서는 합산하거나 별도로 보관할 수 있습니다.
  • M   The value represents a maximum. In general it will apply to a gauge and keep the highest known value. An example of such a metric could be the maximum amount of concurrent connections that was encountered in the product's life time. To correctly aggregate maxima, you are supposed to output a range going from the maximum of all maxima to the sum of all of them. There is indeed no way to know if they were encountered simultaneously or not.
    값은 최대값을 나타냅니다. 일반적으로 게이지에 적용되며 알려진 가장 높은 값을 유지합니다. 이러한 메트릭의 예로는 제품 수명 동안 발생한 최대 동시 연결 수를 들 수 있습니다. 최대값을 올바르게 집계하려면 모든 최대값 중의 최대값부터 그 합계까지의 범위를 출력해야 합니다. 실제로 최대값들이 동시에 발생했는지 여부는 알 방법이 없기 때문입니다.
  • m   The value represents a minimum. In general it will apply to a gauge and keep the lowest known value. An example of such a metric could be the minimum amount of free memory pools that was encountered in the product's life time. To correctly aggregate minima, you are supposed to output a range going from the minimum of all minima and the sum of all of them. There is indeed no way to know if they were encountered simultaneously or not.
  • N   The value represents a name, so it is a string. It is used to report proxy names, server names and cookie names. Names have configuration or keys as their origin and are supposed to be the same among all processes.
  • O   The value represents a free text output. Outputs from various commands, returns from health checks, node descriptions are of such nature.
  • R   The value represents an event rate. It's a measure at one instant. It is quite similar to a gauge except that the recipient knows that this measure moves slowly and may decide not to keep all values. An example of such a metric is the measured amount of connections per second. Metrics of this type are typically summed during aggregation.
  • T   The value represents a date or time. A field emitting the current date would be of this type. The method to aggregate such information is left as an implementation choice. For now no field uses this type.

The third character (the scope) indicates what extent the value reflects. Some elements may be per process while others may be per configuration or per system. The distinction is important to know whether or not a single value should be kept during aggregation or if values have to be aggregated. The following characters are currently supported :

  • C   The value is valid for a whole cluster of nodes, which is the set of nodes communicating over the peers protocol. An example could be the amount of entries present in a stick table that is replicated with other peers. At the moment no metric uses this scope.
  • P   The value is valid only for the process reporting it. Most metrics use this scope.
  • S   The value is valid for the whole service, which is the set of processes started together from the same configuration file. All metrics originating from the configuration use this scope. Some other metrics may use it as well for some shared resources (eg: shared SSL cache statistics).
  • s   The value is valid for the whole system, such as the system's hostname, current date or resource usage. At the moment this scope is not used by any metric.

Consumers of this information will generally find these 3 characters sufficient to determine how to accurately report aggregated information across multiple processes.


After this column, the third column indicates the type of the field, among "s32" (signed 32-bit integer), "s64" (signed 64-bit integer), "u32" (unsigned 32-bit integer), "u64" (unsigned 64-bit integer), "str" (string). It is important to know the type before parsing the value in order to properly read it. For example a string containing only digits is still a string and not an integer (eg: an error code extracted by a check).


Then the fourth column is the value itself, encoded according to its type. Strings are dumped as-is immediately after the colon without any leading space. If a string contains a colon, it will appear normally. This means that the output should not be exclusively split around colons or some check outputs or server addresses might be truncated.

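For instance, a "show info typed" dump begins with lines like the following (the values here are illustrative only). In the field "POS", 'P' is the origin (product), 'O' the nature (free text output) and 'S' the scope (service), followed by the "str" type and the value :

    $ echo "show info typed" | socat stdio /var/run/haproxy.sock
    0.Name.1:POS:str:HAProxy
    1.Version.1:POS:str:2.6.0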

9.3. Unix Socket commands

The stats socket is not enabled by default. In order to enable it, it is necessary to add one line in the global section of the haproxy configuration. A second line is recommended to set a larger timeout, always appreciated when issuing commands by hand :

    global
        stats socket /var/run/haproxy.sock mode 600 level admin
        stats timeout 2m

It is also possible to add multiple instances of the stats socket by repeating the line, and make them listen to a TCP port instead of a UNIX socket. This is never done by default because this is dangerous, but can be handy in some situations :

    global
        stats socket /var/run/haproxy.sock mode 600 level admin
        stats socket ipv4@192.168.0.1:9999 level admin
        stats timeout 2m

To access the socket, an external utility such as "socat" is required. Socat is a swiss-army knife to connect anything to anything. We use it to connect terminals to the socket, or a couple of stdin/stdout pipes to it for scripts. The two main syntaxes we'll use are the following :

    # socat /var/run/haproxy.sock stdio
    # socat /var/run/haproxy.sock readline

The first one is used with scripts. It is possible to send the output of a script to haproxy, and pass haproxy's output to another script. That's useful for retrieving counters or attack traces for example.
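
For instance, a minimal monitoring sketch (the socket path and the retained field are to be adapted to your setup) could periodically extract a counter from "show info" :

    # report the current number of connections every 2 seconds
    while sleep 2; do
        echo "show info" | socat /var/run/haproxy.sock stdio | grep ^CurrConns
    done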

The second one is only useful for issuing commands by hand. It has the benefit that the terminal is handled by the readline library which supports line editing and history, which is very convenient when issuing repeated commands (eg: watch a counter).

The socket supports two operation modes :
- interactive
- non-interactive

The non-interactive mode is the default when socat connects to the socket. In this mode, a single line may be sent. It is processed as a whole, responses are sent back, and the connection closes after the end of the response. This is the mode that scripts and monitoring tools use. It is possible to send multiple commands in this mode, but they need to be delimited by a semi-colon (';'). For example :

    # echo "show info;show stat;show table" | socat /var/run/haproxy stdio

If a command needs to use a semi-colon or a backslash (eg: in a value), it must be preceded by a backslash ('\').
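
For instance, a hypothetical map value containing semi-colons would be entered as follows, the escaped semi-colons being part of the value rather than command delimiters :

    echo "add map #-1 mykey value\;with\;semi-colons" | \
        socat /var/run/haproxy.sock stdio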

The interactive mode displays a prompt ('>') and waits for commands to be entered on the line, then processes them, and displays the prompt again to wait for a new command. This mode is entered via the "prompt" command which must be sent on the first line in non-interactive mode. The mode is a flip switch: if "prompt" is sent in interactive mode, it is disabled and the connection closes after processing the last command of the same line.

For this reason, when debugging by hand, it's quite common to start with the "prompt" command :

   # socat /var/run/haproxy.sock readline
   prompt
   > show info
   ...
   >

Since multiple commands may be issued at once, haproxy uses the empty line as a delimiter to mark an end of output for each command, and takes care of ensuring that no command can emit an empty line on output. A script can thus easily parse the output even when multiple commands were pipelined on a single line.

Some commands may take an optional payload. To add one to a command, the first line needs to end with the "<<\n" pattern. The next lines will be treated as the payload and can contain as many lines as needed. To validate a command with a payload, it needs to end with an empty line.

Limitations do exist: the length of the whole buffer passed to the CLI must not be greater than tune.bufsize and the pattern "<<" must not be glued to the last word of the line.

When entering a payload while in interactive mode, the prompt will change from "> " to "+ ".

It is important to understand that when multiple haproxy processes are started on the same sockets, any process may pick up the request and will output its own stats.

The list of commands currently supported on the stats socket is provided below. If an unknown command is sent, haproxy displays the usage message, which lists all supported commands. Some commands support a more complex syntax; generally it will explain what part of the command is invalid when this happens.

Some commands require a higher level of privilege to work. If you do not have enough privilege, you will get an error "Permission denied". Please check the "level" option of the "bind" keyword lines in the configuration manual for more information.

abort ssl ca-file <cafile>
Abort and destroy a temporary CA file update transaction.

See also "set ssl ca-file" and "commit ssl ca-file".

abort ssl cert <filename>
Abort and destroy a temporary SSL certificate update transaction.

See also "set ssl cert" and "commit ssl cert".

abort ssl crl-file <crlfile>
Abort and destroy a temporary CRL file update transaction.

See also "set ssl crl-file" and "commit ssl crl-file".

add acl [@<ver>] <acl> <pattern>
Add an entry into the acl <acl>. <acl> is the #<id> or the <file> returned by "show acl". This command does not verify if the entry already exists. Entries are added to the current version of the ACL, unless a specific version is specified with "@<ver>". This version number must have previously been allocated by "prepare acl", and it will be comprised between the versions reported in "curr_ver" and "next_ver" on the output of "show acl". Entries added with a specific version number will not match until a "commit acl" operation is performed on them. They may however be consulted using the "show acl @<ver>" command, and cleared using a "clear acl @<ver>" command. This command cannot be used if the reference <acl> is a file also used with a map. In this case, the "add map" command must be used instead.
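
For example, an atomic rebuild of a hypothetical ACL could look like this, the version number being the one reported by "prepare acl" :

    $ echo "prepare acl #0" | socat /var/run/haproxy.sock stdio
    New version created: 2
    $ echo "add acl @2 #0 10.0.0.1" | socat /var/run/haproxy.sock stdio
    $ echo "commit acl @2 #0" | socat /var/run/haproxy.sock stdio
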
add map [@<ver>] <map> <key> <value>
add map [@<ver>] <map> <payload>
Add an entry into the map <map> to associate the value <value> to the key <key>. This command does not verify if the entry already exists. It is mainly used to fill a map after a "clear" or "prepare" operation. Entries are added to the current version of the map, unless a specific version is specified with "@<ver>". This version number must have previously been allocated by "prepare map", and it will be comprised between the versions reported in "curr_ver" and "next_ver" on the output of "show map". Entries added with a specific version number will not match until a "commit map" operation is performed on them. They may however be consulted using the "show map @<ver>" command, and cleared using a "clear map @<ver>" command. If the designated map is also used as an ACL, the ACL will only match the <key> part and will ignore the <value> part. Using the payload syntax it is possible to add multiple key/value pairs by entering them on separate lines. On each new line, the first word is the key and the rest of the line is considered to be the value, which can even contain spaces.

  Example:

    # socat /tmp/sock1 -
    prompt

    > add map #-1 <<
    + key1 value1
    + key2 value2 with spaces
    + key3 value3 also with spaces
    + key4 value4

    >

add server <backend>/<server> [args]*
Instantiate a new server attached to the backend <backend>.

The <server> name must not be already used in the backend. A special restriction is put on the backend, which must use a dynamic load-balancing algorithm. A subset of keywords from the server config file statement can be used to configure the server behavior. Also note that no settings will be reused from a hypothetical 'default-server' statement in the same backend.

Currently a dynamic server is statically initialized with the "none" init-addr method. This means that no resolution will be undertaken if a FQDN is specified as an address, even though the server creation will be validated.

To support the reload operations, it is expected that the server created via the CLI is also manually inserted in the relevant haproxy configuration file. A dynamic server not present in the configuration won't be restored after a reload operation.

A dynamic server may use the "track" keyword to follow the check status of another server from the configuration. However, it is not possible to track another dynamic server. This is to ensure that the tracking chain is kept consistent even in the case of dynamic servers deletion.

Use the "check" keyword to enable health-check support. Note that the health-check is disabled by default and must be enabled independently from the server using the "enable health" command. For agent checks, use the "agent-check" keyword and the "enable agent" command. Note that in this case the server may be activated via the agent depending on the status reported, without an explicit "enable server" command. This also means that extra care is required when removing a dynamic server with agent check. The agent should be first deactivated via "disable agent" to be able to put the server in the required maintenance mode before removal.

It may be possible to reach the fd limit when using a large number of dynamic servers. Please refer to the "ulimit-n" global keyword documentation in this case.

Here is the list of the currently supported keywords :

  - agent-addr
  - agent-check
  - agent-inter
  - agent-port
  - agent-send
  - allow-0rtt
  - alpn
  - addr
  - backup
  - ca-file
  - check
  - check-alpn
  - check-proto
  - check-send-proxy
  - check-sni
  - check-ssl
  - check-via-socks4
  - ciphers
  - ciphersuites
  - crl-file
  - crt
  - disabled
  - downinter
  - enabled
  - error-limit
  - fall
  - fastinter
  - force-sslv3/tlsv10/tlsv11/tlsv12/tlsv13
  - id
  - inter
  - maxconn
  - maxqueue
  - minconn
  - no-ssl-reuse
  - no-sslv3/tlsv10/tlsv11/tlsv12/tlsv13
  - no-tls-tickets
  - npn
  - observe
  - on-error
  - on-marked-down
  - on-marked-up
  - pool-low-conn
  - pool-max-conn
  - pool-purge-delay
  - port
  - proto
  - proxy-v2-options
  - rise
  - send-proxy
  - send-proxy-v2
  - send-proxy-v2-ssl
  - send-proxy-v2-ssl-cn
  - slowstart
  - sni
  - source
  - ssl
  - ssl-max-ver
  - ssl-min-ver
  - tfo
  - tls-tickets
  - track
  - usesrc
  - verify
  - verifyhost
  - weight
  - ws

Their syntax is similar to the server line from the configuration file, please refer to their individual documentation for details.
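
As a sketch, assuming a hypothetical backend "be_app" configured with a dynamic load-balancing algorithm such as roundrobin, a server could be added and activated like this :

    $ echo "add server be_app/srv3 192.168.0.3:80 check" | \
        socat /var/run/haproxy.sock stdio
    $ echo "enable health be_app/srv3" | socat /var/run/haproxy.sock stdio
    $ echo "enable server be_app/srv3" | socat /var/run/haproxy.sock stdio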

add ssl crt-list <crtlist> <certificate>
add ssl crt-list <crtlist> <payload>
Add a certificate in a crt-list. It can also be used for directories since directories are now loaded the same way as the crt-lists. This command allows you to pass a certificate name in parameter; to use SSL options or filters, a crt-list line must be sent as a payload instead. Only one crt-list line is supported in the payload. This command will load the certificate for every bind line using the crt-list. To push a new certificate to HAProxy the commands "new ssl cert" and "set ssl cert" must be used.

  Example:
    $ echo "new ssl cert foobar.pem" | socat /tmp/sock1 -
    $ echo -e "set ssl cert foobar.pem <<\n$(cat foobar.pem)\n" | socat
    /tmp/sock1 -
    $ echo "commit ssl cert foobar.pem" | socat /tmp/sock1 -
    $ echo "add ssl crt-list certlist1 foobar.pem" | socat /tmp/sock1 -

    $ echo -e 'add ssl crt-list certlist1 <<\nfoobar.pem [allow-0rtt] foo.bar.com
    !test1.com\n' | socat /tmp/sock1 -

clear counters
Clear the max values of the statistics counters in each proxy (frontend & backend) and in each server. The accumulated counters are not affected. The internal activity counters reported by "show activity" are also reset. This can be used to get clean counters after an incident, without having to restart nor to clear traffic counters. This command is restricted and can only be issued on sockets configured for levels "operator" or "admin".

clear counters all
Clear all statistics counters in each proxy (frontend & backend) and in each server. This has the same effect as restarting. This command is restricted and can only be issued on sockets configured for level "admin".

clear acl [@<ver>] <acl>
Remove all entries from the acl <acl>. <acl> is the #<id> or the <file> returned by "show acl". Note that if the reference <acl> is a file and is shared with a map, this map will also be cleared. By default only the current version of the ACL is cleared (the one being matched against). However it is possible to specify another version using '@' followed by this version.

clear map [@<ver>] <map>
Remove all entries from the map <map>. <map> is the #<id> or the <file> returned by "show map". Note that if the reference <map> is a file and is shared with an acl, this acl will also be cleared. By default only the current version of the map is cleared (the one being matched against). However it is possible to specify another version using '@' followed by this version.

clear table <table> [ data.<type> <operator> <value> ] | [ key <key> ]
Remove entries from the stick-table <table>.

This is typically used to unblock some users complaining they have been abusively denied access to a service, but this can also be used to clear some stickiness entries matching a server that is going to be replaced (see "show table" below for details). Note that sometimes, removal of an entry will be refused because it is currently tracked by a session. Retrying a few seconds later after the session ends is usually enough.

In the case where no option arguments are given, all entries will be removed.

When the "data." form is used entries matching a filter applied using the stored data (see "stick-table" in section 4.2) are removed. A stored data type must be specified in <type>, and this data type must be stored in the table otherwise an error is reported. The data is compared according to <operator> with the 64-bit integer <value>. Operators are the same as with the ACLs :

    - eq : match entries whose data is equal to this value
    - ne : match entries whose data is not equal to this value
    - le : match entries whose data is less than or equal to this value
    - ge : match entries whose data is greater than or equal to this value
    - lt : match entries whose data is less than this value
    - gt : match entries whose data is greater than this value

When the key form is used the entry <key> is removed. The key must be of the same type as the table, which currently is limited to IPv4, IPv6, integer and string.

  Example :
    $ echo "show table http_proxy" | socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:2
    >>> 0x80e6a4c: key=127.0.0.1 use=0 exp=3594729 gpc0=0 conn_rate(30000)=1  \
          bytes_out_rate(60000)=187
    >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
          bytes_out_rate(60000)=191

    $ echo "clear table http_proxy key 127.0.0.1" | socat stdio /tmp/sock1

    $ echo "show table http_proxy" | socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:1
    >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
          bytes_out_rate(60000)=191

    $ echo "clear table http_proxy data.gpc0 eq 1" | socat stdio /tmp/sock1

    $ echo "show table http_proxy" | socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:1

commit acl @<ver> <acl>
Commit all changes made to version <ver> of ACL <acl>, and delete all past versions. <acl> is the #<id> or the <file> returned by "show acl". The version number must be between "curr_ver"+1 and "next_ver" as reported in "show acl". The contents to be committed to the ACL can be consulted with "show acl @<ver> <acl>" if desired. The specified version number has normally been created with the "prepare acl" command. The replacement is atomic. It consists in atomically updating the current version to the specified version, which will instantly cause all entries in other versions to become invisible, and all entries in the new version to become visible. It is also possible to use this command to perform an atomic removal of all visible entries of an ACL by calling "prepare acl" first then committing without adding any entries. This command cannot be used if the reference <acl> is a file also used as a map. In this case, the "commit map" command must be used instead.

commit map @<ver> <map>
Commit all changes made to version <ver> of map <map>, and delete all past versions. <map> is the #<id> or the <file> returned by "show map". The version number must be between "curr_ver"+1 and "next_ver" as reported in "show map". The contents to be committed to the map can be consulted with "show map @<ver> <map>" if desired. The specified version number has normally been created with the "prepare map" command. The replacement is atomic. It consists in atomically updating the current version to the specified version, which will instantly cause all entries in other versions to become invisible, and all entries in the new version to become visible. It is also possible to use this command to perform an atomic removal of all visible entries of a map by calling "prepare map" first then committing without adding any entries.

commit ssl ca-file <cafile>
Commit a temporary SSL CA file update transaction.
In the case of an existing CA file (in a "Used" state in "show ssl ca-file"), the new CA file tree entry is inserted in the CA file tree and every instance that used the CA file entry is rebuilt, along with the SSL contexts it needs. All the contexts previously used by the rebuilt instances are removed. Upon success, the previous CA file entry is removed from the tree. Upon failure, nothing is removed or deleted, and all the original SSL contexts are kept and used. Once the temporary transaction is committed, it is destroyed.

In the case of a new CA file (after a "new ssl ca-file" and in an "Unused" state in "show ssl ca-file"), the CA file will be inserted in the CA file tree but it won't be used anywhere in HAProxy. To use it and generate SSL contexts that use it, you will need to add it to a crt-list with "add ssl crt-list".

See also "new ssl ca-file", "set ssl ca-file", "abort ssl ca-file" and "add ssl crt-list".

commit ssl cert <filename>
Commit a temporary SSL certificate update transaction.

In the case of an existing certificate (in a "Used" state in "show ssl cert"), generate every SSL context and SNI it needs, insert them, and remove the previous ones. Replace in memory the previous SSL certificates everywhere the <filename> was used in the configuration. Upon failure it doesn't remove or insert anything. Once the temporary transaction is committed, it is destroyed.

In the case of a new certificate (after a "new ssl cert" and in an "Unused" state in "show ssl cert"), the certificate will be committed in a certificate storage, but it won't be used anywhere in haproxy. To use it and generate its SNIs you will need to add it to a crt-list or a directory with "add ssl crt-list".

See also "new ssl cert", "set ssl cert", "abort ssl cert" and "add ssl crt-list".

commit ssl crl-file <crlfile>
Commit a temporary SSL CRL file update transaction.

In the case of an existing CRL file (in a "Used" state in "show ssl crl-file"), the new CRL file entry is inserted in the CA file tree (which holds both the CA files and the CRL files) and every instance that used the CRL file entry is rebuilt, along with the SSL contexts it needs. All the contexts previously used by the rebuilt instances are removed. Upon success, the previous CRL file entry is removed from the tree. Upon failure, nothing is removed or deleted, and all the original SSL contexts are kept and used. Once the temporary transaction is committed, it is destroyed.
In the case of a new CRL file (after a "new ssl crl-file" and in an "Unused" state in "show ssl crl-file"), the CRL file will be inserted in the CRL file tree but it won't be used anywhere in HAProxy. To use it and generate SSL contexts that use it, you will need to add it to a crt-list with "add ssl crt-list".

See also "new ssl crl-file", "set ssl crl-file", "abort ssl crl-file" and "add ssl crt-list".

debug dev <command> [args]*
Call a developer-specific command. Only supported on a CLI connection running in expert mode (see "expert-mode on"). Such commands are extremely dangerous and not forgiving, any misuse may result in a crash of the process. They are intended for experts only, and must really not be used unless told to do so. Some of them are only available when haproxy is built with DEBUG_DEV defined because they may have security implications. All of these commands require admin privileges, and are purposely not documented to avoid encouraging their use by people who are not at ease with the source code.

del acl <acl> [<key>|#<ref>]
Delete all the acl entries from the acl <acl> corresponding to the key <key>. <acl> is the #<id> or the <file> returned by "show acl". If <ref> is used, this command deletes only the listed reference. The reference can be found by listing the contents of the acl. Note that if the reference <acl> is a file and is shared with a map, the entry will also be deleted in the map.

del map <map> [<key>|#<ref>]
Delete all the map entries from the map <map> corresponding to the key <key>. <map> is the #<id> or the <file> returned by "show map". If <ref> is used, this command deletes only the listed reference. The reference can be found by listing the contents of the map. Note that if the reference <map> is a file and is shared with an acl, the entry will also be deleted in the acl.

del ssl ca-file <cafile>
Delete a CA file tree entry from HAProxy. The CA file must be unused and removed from any crt-list. "show ssl ca-file" displays the status of the CA files. The deletion doesn't work with a certificate referenced directly with the "ca-file" or "ca-verify-file" directives in the configuration.

del ssl cert <certfile>
Delete a certificate store from HAProxy. The certificate must be unused and removed from any crt-list or directory. "show ssl cert" displays the status of the certificate. The deletion doesn't work with a certificate referenced directly with the "crt" directive in the configuration.

del ssl crl-file <crlfile>
Delete a CRL file tree entry from HAProxy. The CRL file must be unused and removed from any crt-list. "show ssl crl-file" displays the status of the CRL files. The deletion doesn't work with a certificate referenced directly with the "crl-file" directive in the configuration.

del ssl crt-list <filename> <certfile[:line]>
Delete an entry in a crt-list. This will delete every SNIs used for this entry in the frontends. If a certificate is used several time in a crt-list, you will need to provide which line you want to delete. To display the line numbers, use "show ssl crt-list -n <crtlist>".
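
Continuing the earlier example, the certificate added to "certlist1" could be removed like this :

    $ echo "del ssl crt-list certlist1 foobar.pem" | socat /tmp/sock1 -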

del server <backend>/<server>
Remove a server attached to the backend <backend>. All servers are eligible, except servers which are referenced by other configuration elements. The server must be put in maintenance mode prior to its deletion. The operation is cancelled if the server still has active or idle connections or if its connection queue is not empty.
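
A typical removal sequence, reusing the hypothetical server added above, would thus be :

    $ echo "disable server be_app/srv3" | socat /var/run/haproxy.sock stdio
    $ echo "del server be_app/srv3" | socat /var/run/haproxy.sock stdio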

disable agent <backend>/<server>
Mark the auxiliary agent check as temporarily stopped.

In the case where an agent check is being run as an auxiliary check, due to the agent-check parameter of a server directive, new checks are only initialized when the agent is in the enabled state. Thus, "disable agent" will prevent any new agent checks from being initiated until the agent is re-enabled using "enable agent".

When an agent is disabled the processing of an auxiliary agent check that was initiated while the agent was set as enabled is as follows: all results that would alter the weight, specifically "drain" or a weight returned by the agent, are ignored. The processing of agent checks is otherwise unchanged.

The motivation for this feature is to allow the weight changing effects of the agent checks to be paused to allow the weight of a server to be configured using set weight without being overridden by the agent.

This command is restricted and can only be issued on sockets configured for level "admin".

disable dynamic-cookie backend <backend>
Disable the generation of dynamic cookies for the backend <backend>.

disable frontend <frontend>
Mark the frontend as temporarily stopped. This corresponds to the mode which is used during a soft restart : the frontend releases the port but can be enabled again if needed. This should be used with care as some non-Linux OSes are unable to enable it back. This is intended to be used in environments where stopping a proxy is not even imaginable but a misconfigured proxy must be fixed. That way it's possible to release the port and bind it into another process to restore operations. The frontend will appear with status "STOP" on the stats page.
The frontend may be specified either by its name or by its numeric ID, prefixed with a sharp ('#').

This command is restricted and can only be issued on sockets configured for level "admin".

disable health <backend>/<server>
Mark the primary health check as temporarily stopped. This will disable sending of health checks, and the last health check result will be ignored. The server will be in unchecked state and considered UP unless an auxiliary agent check forces it down.

This command is restricted and can only be issued on sockets configured for level "admin".

disable server <backend>/<server>
Mark the server DOWN for maintenance. In this mode, no more checks will be performed on the server until it leaves maintenance. If the server is tracked by other servers, those servers will be set to DOWN during the maintenance.

In the statistics page, a server DOWN for maintenance will appear with a "MAINT" status, its tracking servers with the "MAINT(via)" one.

Both the backend and the server may be specified either by their name or by their numeric ID, prefixed with a sharp ('#').

This command is restricted and can only be issued on sockets configured for level "admin".

enable agent <backend>/<server>
Resume auxiliary agent check that was temporarily stopped.

See "disable agent" for details of the effect of temporarily starting and stopping an auxiliary agent.

This command is restricted and can only be issued on sockets configured for level "admin".

enable dynamic-cookie backend <backend>
Enable the generation of dynamic cookies for the backend <backend>. A secret key must also be provided.
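
For instance, assuming a hypothetical backend "be_app", a random secret key could be set before enabling dynamic cookies (keep in mind that changing the key breaks existing sessions) :

    $ echo "set dynamic-cookie-key backend be_app $(openssl rand -hex 16)" | \
        socat /var/run/haproxy.sock stdio
    $ echo "enable dynamic-cookie backend be_app" | socat /var/run/haproxy.sock stdio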

enable frontend <frontend>
Resume a frontend which was temporarily stopped. It is possible that some of the listening ports won't be able to bind anymore (eg: if another process took them since the 'disable frontend' operation). If this happens, an error is displayed. Some operating systems might not be able to resume a frontend which was disabled.

The frontend may be specified either by its name or by its numeric ID, prefixed with a sharp ('#').

This command is restricted and can only be issued on sockets configured for level "admin".

enable health <backend>/<server>
Resume a primary health check that was temporarily stopped. This will enable sending of health checks again. Please see "disable health" for details.

This command is restricted and can only be issued on sockets configured for level "admin".

enable server <backend>/<server>
If the server was previously marked as DOWN for maintenance, this marks the server UP and checks are re-enabled.

Both the backend and the server may be specified either by their name or by their numeric ID, prefixed with a sharp ('#').

This command is restricted and can only be issued on sockets configured for level "admin".

experimental-mode [on|off]
Without options, this indicates whether the experimental mode is enabled or disabled on the current connection. When passed "on", it turns the experimental mode on for the current CLI connection only. With "off" it turns it off.

The experimental mode is used to access extra features still in development. These features are currently not stable and should be used with care. They may be subject to breaking changes across versions.

When used from the master CLI, this command shouldn't be prefixed, as it will set the mode for any worker when connecting to its CLI.

Example:
echo "@1; experimental-mode on; <experimental_cmd>..." | socat /var/run/haproxy.master - echo "experimental-mode on; @1 <experimental_cmd>..." | socat /var/run/haproxy.master -
expert-mode [on|off]
This command is similar to experimental-mode but is used to toggle the expert mode.

The expert mode enables displaying of expert commands that can be extremely dangerous for the process and which may occasionally help developers collect important information about complex bugs. Any misuse of these features will likely lead to a process crash. Do not use this option without being invited to do so. Note that this command is purposely not listed in the help message. This command is only accessible at the admin level. Changing to another level automatically resets the expert mode.

When used from the master CLI, this command shouldn't be prefixed, as it will set the mode for any worker when connecting to its CLI.

Example:
echo "@1; expert-mode on; debug dev exit 1" | socat /var/run/haproxy.master - echo "expert-mode on; @1 debug dev exit 1" | socat /var/run/haproxy.master -
get map <map> <value>
get acl <acl> <value>
Lookup the value <value> in the map <map> or in the ACL <acl>. <map> or <acl> are the #<id> or the <file> returned by "show map" or "show acl". This command returns all the matching patterns associated with this map. This is useful for debugging maps and ACLs. The output format is composed of one line per matching type. Each line is composed of space-delimited series of words.

The first two words are:

<match method>: The match method applied. It can be "found", "bool", "int", "ip", "bin", "len", "str", "beg", "sub", "dir", "dom", "end" or "reg".

<match result>: The result. Can be "match" or "no-match".

The following words are returned only if the pattern matches an entry.

<index type>: "tree" or "list". The internal lookup algorithm.

: "case-insensitive" or "case-sensitive". The interpretation of the case.

: match="". Return the matched pattern. It is useful with regular expressions.

The two last word are used to show the returned value and its type. With the "acl" case, the pattern doesn't exist.

return=nothing: No return because there are no "map".
return="<value>": The value returned in the string format.
return=cannot-display: The value cannot be converted as string.

type="<type>": The type of the returned sample.

get var <name>
Show the existence, type and contents of the process-wide variable 'name'. Only process-wide variables are readable, so the name must begin with 'proc.' otherwise no variable will be found. This command requires levels "operator" or "admin".

get weight <backend>/<server>
Report the current weight and the initial weight of server <server> in backend <backend> or an error if either doesn't exist. The initial weight is the one that appears in the configuration file. Both are normally equal unless the current weight has been changed. Both the backend and the server may be specified either by their name or by their numeric ID, prefixed with a sharp ('#').

help [<command>]
Print the list of known keywords and their basic usage, or commands matching the requested one. The same help screen is also displayed for unknown commands.

httpclient <method> <URI>
Launch an HTTP client request and print the response on the CLI. Only supported on a CLI connection running in expert mode (see "expert-mode on"). It's only meant for debugging. The httpclient is able to resolve a server name in the URL using the "default" resolvers section, which is populated with the DNS servers of your /etc/resolv.conf by default. However it won't be able to resolve a host from /etc/hosts if you don't use a local DNS daemon which can resolve those.

new ssl ca-file <cafile>
Create a new empty CA file tree entry to be filled with a set of CA certificates and added to a crt-list. This command should be used in combination with "set ssl ca-file" and "add ssl crt-list".

new ssl cert <filename>
Create a new empty SSL certificate store to be filled with a certificate and added to a directory or a crt-list. This command should be used in combination with "set ssl cert" and "add ssl crt-list".

new ssl crl-file <crlfile>
Create a new empty CRL file tree entry to be filled with a set of CRLs and added to a crt-list. This command should be used in combination with "set ssl crl-file" and "add ssl crt-list".

prepare acl <acl>
Allocate a new version number in ACL <acl> for atomic replacement. <acl> is the #<id> or the <file> returned by "show acl". The new version number is shown in response after "New version created:". This number will then be usable to prepare additions of new entries into the ACL which will then atomically replace the current ones once committed. It is reported as "next_ver" in "show acl". There is no impact of allocating new versions, as unused versions will automatically be removed once a more recent version is committed. Version numbers are unsigned 32-bit values which wrap at the end, so care must be taken when comparing them in an external program. This command cannot be used if the reference <acl> is a file also used as a map. In this case, the "prepare map" command must be used instead.

prepare map <map>
Allocate a new version number in map <map> for atomic replacement. <map> is the #<id> or the <file> returned by "show map". The new version number is shown in response after "New version created:". This number will then be usable to prepare additions of new entries into the map which will then atomically replace the current ones once committed. It is reported as "next_ver" in "show map". There is no impact of allocating new versions, as unused versions will automatically be removed once a more recent version is committed. Version numbers are unsigned 32-bit values which wrap at the end, so care must be taken when comparing them in an external program.

prompt
Toggle the prompt at the beginning of the line and enter or leave interactive mode. In interactive mode, the connection is not closed after a command completes. Instead, the prompt will appear again, indicating to the user that the interpreter is waiting for a new command. The prompt consists in a right angle bracket followed by a space "> ". This mode is particularly convenient when one wants to periodically check information such as stats or errors. It is also a good idea to enter interactive mode before issuing a "help" command.

quit
Close the connection when in interactive mode.

set dynamic-cookie-key backend <backend> <value>
Modify the secret key used to generate the dynamic persistent cookies. This will break the existing sessions.

set map <map> [<key>|#<ref>] <value>
Modify the value corresponding to each key <key> in a map <map>. <map> is the #<id> or <file> returned by "show map". If the <ref> is used in place of <key>, only the entry pointed by <ref> is changed. The new value is <value>.

set maxconn frontend <frontend> <value>
Dynamically change the specified frontend's maxconn setting. Any positive value is allowed including zero, but setting values larger than the global maxconn does not make much sense. If the limit is increased and connections were pending, they will immediately be accepted. If it is lowered to a value below the current number of connections, acceptance of new connections will be delayed until the threshold is reached. The frontend might be specified by either its name or its numeric ID prefixed with a sharp ('#').

set maxconn server <backend/server> <value>
Dynamically change the specified server's maxconn setting. Any positive value is allowed including zero, but setting values larger than the global maxconn does not make much sense.

set maxconn global <maxconn>
Dynamically change the global maxconn setting within the range defined by the initial global maxconn setting. If it is increased and connections were pending, they will immediately be accepted. If it is lowered to a value below the current number of connections, acceptance of new connections will be delayed until the threshold is reached. A value of zero restores the initial setting.
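
For example, to temporarily lower the limit of a hypothetical frontend "fe_http", then restore the initial global limit :

    $ echo "set maxconn frontend fe_http 1000" | socat /var/run/haproxy.sock stdio
    $ echo "set maxconn global 0" | socat /var/run/haproxy.sock stdio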

set profiling { tasks | memory } { auto | on | off }
Enables or disables CPU or memory profiling for the indicated subsystem. This is equivalent to setting or clearing the "profiling" settings in the "global" section of the configuration file. Please also see "show profiling". Note that manually setting the tasks profiling to "on" automatically resets the scheduler statistics, thus allows to check activity over a given interval. The memory profiling is limited to certain operating systems (known to work on the linux-glibc target), and requires USE_MEMORY_PROFILING to be set at compile time.

set rate-limit connections global <value>
Change the process-wide connection rate limit, which is set by the global 'maxconnrate' setting. A value of zero disables the limitation. This limit applies to all frontends and the change has an immediate effect. The value is passed in number of connections per second.

set rate-limit http-compression global <value>
Change the maximum input compression rate, which is set by the global 'maxcomprate' setting. A value of zero disables the limitation. The value is passed in number of kilobytes per second. The value is available in the "show info" on the line "CompressBpsRateLim" in bytes.

set rate-limit sessions global <value>
Change the process-wide session rate limit, which is set by the global 'maxsessrate' setting. A value of zero disables the limitation. This limit applies to all frontends and the change has an immediate effect. The value is passed in number of sessions per second.

set rate-limit ssl-sessions global <value>
Change the process-wide SSL session rate limit, which is set by the global 'maxsslrate' setting. A value of zero disables the limitation. This limit applies to all frontends and the change has an immediate effect. The value is passed in number of sessions per second sent to the SSL stack. It applies before the handshake in order to protect the stack against handshake abuses.

set server <backend>/<server> addr <ip4 or ip6 address> [port <port>]
Replace the current IP address of a server by the one provided. Optionally, the port can be changed using the 'port' parameter. Note that changing the port also supports switching from/to port mapping (notation with +X or -Y), only if a port is configured for the health check.

set server <backend>/<server> agent [ up | down ]
Force a server's agent to a new state. This can be useful to immediately switch a server's state regardless of some slow agent checks for example. Note that the change is propagated to tracking servers if any.

set server <backend>/<server> agent-addr <addr> [port <port>]
Change the address used for the server's agent checks. Allows to migrate agent-checks to another address at runtime. You can specify both IP and hostname, it will be resolved. Optionally, the agent port can be changed using the 'port' parameter.

set server <backend>/<server> agent-port <port>
Change the port used for agent checks.

set server <backend>/<server> agent-send <value>
Change agent string sent to agent check target. Allows to update string while changing server address to keep those two matching.

set server <backend>/<server> health [ up | stopping | down ]
Force a server's health to a new state. This can be useful to immediately switch a server's state regardless of some slow health checks for example. Note that the change is propagated to tracking servers if any.

set server <backend>/<server> check-addr <ip4 | ip6> [port <port>]
Change the IP address used for server health checks. Optionally, change the port used for server health checks.

set server <backend>/<server> check-port <port>
Change the port used for health checking to <port>.

set server <backend>/<server> state [ ready | drain | maint ]
Force a server's administrative state to a new state. This can be useful to disable load balancing and/or any traffic to a server. Setting the state to "ready" puts the server in normal mode, and the command is the equivalent of the "enable server" command. Setting the state to "maint" disables any traffic to the server as well as any health checks. This is the equivalent of the "disable server" command. Setting the mode to "drain" only removes the server from load balancing but still allows it to be checked and to accept new persistent connections. Changes are propagated to tracking servers if any.

set server <backend>/<server> weight <weight>[%]
Change a server's weight to the value passed in argument. This is the exact equivalent of the "set weight" command below.

set server <backend>/<server> fqdn <FQDN>
Change a server's FQDN to the value passed in argument. This requires the internal run-time DNS resolver to be configured and enabled for this server.

set server <backend>/<server> ssl [ on | off ] (deprecated)
This option configures SSL ciphering on outgoing connections to the server. When switched off, all traffic becomes plain text; the health check path is not changed.

This command is deprecated, create a new server dynamically with or without SSL instead, using the "add server" command.

set severity-output [ none | number | string ]
Change the severity output format of the stats socket connected to for the duration of the current session.

set ssl ca-file <cafile> <payload>
This command is part of a transaction system, the "commit ssl ca-file" and "abort ssl ca-file" commands could be required. If there is no on-going transaction, it will create a CA file tree entry into which the certificates contained in the payload will be stored. The CA file entry will not be stored in the CA file tree and will only be kept in a temporary transaction. If a transaction with the same filename already exists, the previous CA file entry will be deleted and replaced by the new one. Once the modifications are done, you have to commit the transaction through a "commit ssl ca-file" call.

  Example:
    echo -e "set ssl ca-file cafile.pem <<\n$(cat rootCA.crt)\n" | \
    socat /var/run/haproxy.stat -
    echo "commit ssl ca-file cafile.pem" | socat /var/run/haproxy.stat -	

set ssl cert <filename> <payload>
This command is part of a transaction system, the "commit ssl cert" and "abort ssl cert" commands could be required. This whole transaction system works on any certificate displayed by the "show ssl cert" command, so on any frontend or backend certificate. If there is no on-going transaction, it will duplicate the certificate <filename> in memory to a temporary transaction, then update this transaction with the PEM file in the payload. If a transaction exists with the same filename, it will update this transaction. It's also possible to update the files linked to a certificate (.issuer, .sctl, .oscp etc.) Once the modification are done, you have to "commit ssl cert" the transaction.

Injection of files over the CLI must be done with caution since an empty line is used to notify the end of the payload. It is recommended to inject a PEM file which has been sanitized. A simple method would be to remove every empty line and only leave what is in the PEM sections. It could be achieved with a sed command.

  Example:

   # With some simple sanitizing
    echo -e "set ssl cert localhost.pem <<\n$(sed -n '/^$/d;/-BEGIN/,/-END/p' 127.0.0.1.pem)\n" | \
    socat /var/run/haproxy.stat -

    # Complete example with commit
    echo -e "set ssl cert localhost.pem <<\n$(cat 127.0.0.1.pem)\n" | \
    socat /var/run/haproxy.stat -
    echo -e \
    "set ssl cert localhost.pem.issuer <<\n $(cat 127.0.0.1.pem.issuer)\n" | \
    socat /var/run/haproxy.stat -
    echo -e \
    "set ssl cert localhost.pem.ocsp <<\n$(base64 -w 1000 127.0.0.1.pem.ocsp)\n" | \
    socat /var/run/haproxy.stat -
    echo "commit ssl cert localhost.pem" | socat /var/run/haproxy.stat -

set ssl crl-file <crlfile> <payload>
This command is part of a transaction system, the "commit ssl crl-file" and "abort ssl crl-file" commands could be required. If there is no on-going transaction, it will create a CRL file tree entry into which the Revocation Lists contained in the payload will be stored. The CRL file entry will not be stored in the CRL file tree and will only be kept in a temporary transaction. If a transaction with the same filename already exists, the previous CRL file entry will be deleted and replaced by the new one. Once the modifications are done, you have to commit the transaction through a "commit ssl crl-file" call.

  Example:
    echo -e "set ssl crl-file crlfile.pem <<\n$(cat rootCRL.pem)\n" | \
    socat /var/run/haproxy.stat -
    echo "commit ssl crl-file crlfile.pem" | socat /var/run/haproxy.stat -

set ssl ocsp-response <response | payload>
This command is used to update an OCSP Response for a certificate (see "crt" on "bind" lines). Same controls are performed as during the initial loading of the response. The <response> must be passed as a base64 encoded string of the DER encoded response from the OCSP server. This command is not supported with BoringSSL.

  Example:
    openssl ocsp -issuer issuer.pem -cert server.pem \
                 -host ocsp.issuer.com:80 -respout resp.der
    echo "set ssl ocsp-response $(base64 -w 10000 resp.der)" | \
                 socat stdio /var/run/haproxy.stat

    using the payload syntax:
    echo -e "set ssl ocsp-response <\n$(base64 resp.der)\n" | \
                 socat stdio /var/run/haproxy.stat

set ssl tls-key <id> <tlskey>
Set the next TLS key for the <id> listener to <tlskey>. This key becomes the ultimate key, while the penultimate one is used for encryption (others just decrypt). The oldest TLS key present is overwritten. <id> is either a numeric #<id> or <file> returned by "show tls-keys". <tlskey> is a base64 encoded 48 or 80 bytes TLS ticket key (ex. openssl rand 80 | openssl base64 -A).

set table <table> key <key> [data.<data_type> <value>]*
Create or update a stick-table entry in the table. If the key is not present, an entry is inserted. See stick-table in section 4.2 to find all possible values for <data_type>. The most likely use consists in dynamically entering entries for source IP addresses, with a flag in gpc0 to dynamically block an IP address or affect its quality of service. It is possible to pass multiple data_types in a single call.
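
Reusing the "http_proxy" table from the "clear table" example above, an address could for instance be flagged like this :

    $ echo "set table http_proxy key 127.0.0.1 data.gpc0 1" | \
        socat stdio /tmp/sock1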

set timeout cli <delay>
Change the CLI interface timeout for current connection. This can be useful during long debugging sessions where the user needs to constantly inspect some indicators without being disconnected. The delay is passed in seconds.
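
For example, during an interactive session :

    $ socat /var/run/haproxy.sock readline
    prompt
    > set timeout cli 3600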

set var <name> <expression>
set var <name> expr <expression>
set var <name> fmt <format>
Allows to set or overwrite the process-wide variable 'name' with the result of expression <expression> or format string <format>. Only process-wide variables may be used, so the name must begin with 'proc.' otherwise no variable will be set. The <expression> and <format> may only involve "internal" sample fetch keywords and converters even though the most likely useful ones will be str('something'), int(), simple strings or references to other variables. Note that the command line parser doesn't know about quotes, so any space in the expression must be preceded by a backslash. This command requires levels "operator" or "admin". This command is only supported on a CLI connection running in experimental mode (see "experimental-mode on").
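
As an illustration, with a hypothetical variable named "proc.stage" (both commands are sent on the same line as "experimental-mode on" since the mode only applies to the current connection) :

    $ echo "experimental-mode on; set var proc.stage str(prod)" | \
        socat /var/run/haproxy.sock stdio
    $ echo "get var proc.stage" | socat /var/run/haproxy.sock stdio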

set weight <backend>/<server> <weight>[%]
Change a server's weight to the value passed in argument. If the value ends with the '%' sign, then the new weight will be relative to the initially configured weight. Absolute weights are permitted between 0 and 256. Relative weights must be positive, and the resulting absolute weight is capped at 256. Servers which are part of a farm running a static load-balancing algorithm have stricter limitations because the weight cannot change once set. Thus for these servers, the only accepted values are 0 and 100% (or 0 and the initial weight). Changes take effect immediately, though certain LB algorithms require a certain amount of requests to consider changes. A typical usage of this command is to disable a server during an update by setting its weight to zero, then to enable it again after the update by setting it back to 100%. This command is restricted and can only be issued on sockets configured for level "admin". Both the backend and the server may be specified either by their name or by their numeric ID, prefixed with a sharp ('#').
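
A typical rolling-update sequence on a hypothetical server would thus be :

    $ echo "set weight be_app/srv1 0" | socat /var/run/haproxy.sock stdio
    (update the server, then re-enable it)
    $ echo "set weight be_app/srv1 100%" | socat /var/run/haproxy.sock stdio
    $ echo "get weight be_app/srv1" | socat /var/run/haproxy.sock stdio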

show acl [[@<ver>] <acl>]
Dump info about acl converters. Without argument, the list of all available acls is returned. If an <acl> is specified, its contents are dumped. <acl> is the #<id> or <file>. By default the current version of the ACL is shown (the version currently being matched against and reported as 'curr_ver' in the ACL list). It is possible to instead dump other versions by prepending '@<ver>' before the ACL's identifier. The version works as a filter and non-existing versions will simply report no result. The dump format is the same as for the maps even for the sample values. The data returned are not a list of available ACLs, but the list of all patterns composing any ACL. Many of these patterns can be shared with maps. The 'entry_cnt' value represents the count of all the ACL entries, not just the active ones, which means that it also includes entries currently being added.

show backend
Dump the list of backends available in the running process.

show cli level
Display the CLI level of the current CLI session. The result could be 'admin', 'operator' or 'user'. See also the 'operator' and 'user' commands.

  Example :

    $ socat /tmp/sock1 readline
    prompt
    > operator
    > show cli level
    operator
    > user
    > show cli level
    user
    > operator
    Permission denied

operator
Decrease the CLI level of the current CLI session to operator. It can't be increased. It also drops expert and experimental mode. See also "show cli level".

user
Decrease the CLI level of the current CLI session to user. It can't be increased. It also drops expert and experimental mode. See also "show cli level".

show activity
Reports some counters about internal events that will help developers and more generally people who know haproxy well enough to narrow down the causes of reports of abnormal behaviours. A typical example would be a properly running process never sleeping and eating 100% of the CPU. The output fields will be made of one line per metric, and per-thread counters on the same line. These counters are 32-bit and will wrap during the process's life, which is not a problem since calls to this command will typically be performed twice. The fields are purposely not documented so that their exact meaning is verified in the code where the counters are fed. These values are also reset by the "clear counters" command.

show cli sockets
List CLI sockets. The output format is composed of 3 fields separated by spaces. The first field is the socket address; it can be a unix socket, an IPv4 address:port couple or an IPv6 one. Sockets of other types won't be dumped. The second field describes the level of the socket: 'admin', 'user' or 'operator'. The last field lists the processes on which the socket is bound, separated by commas; it can be numbers or 'all'.

  Example :

     $ echo 'show cli sockets' | socat stdio /tmp/sock1
     # socket lvl processes
     /tmp/sock1 admin all
     127.0.0.1:9999 user 2,3,4
     127.0.0.2:9969 user 2
     [::1]:9999 operator 2

show cache
List the configured caches and the objects stored in each cache tree.

  $ echo 'show cache' | socat stdio /tmp/sock1
  0x7f6ac6c5b03a: foobar (shctx:0x7f6ac6c5b000, available blocks:3918)
         1          2             3                             4

  1. pointer to the cache structure
  2. cache name
  3. pointer to the mmap area (shctx)
  4. number of blocks available for reuse in the shctx

  0x7f6ac6c5b4cc hash:286881868 vary:0x0011223344556677 size:39114 (39 blocks), refcount:9, expire:237
           1               2               3                    4        5            6           7

  1. pointer to the cache entry
  2. first 32 bits of the hash
  3. secondary hash of the entry in case of vary
  4. size of the object in bytes
  5. number of blocks used for the object
  6. number of transactions using the entry
  7. expiration time, can be negative if already expired

show env [<name>]
Dump one or all environment variables known by the process. Without any argument, all variables are dumped. With an argument, only the specified variable is dumped if it exists. Otherwise "Variable not found" is emitted. Variables are dumped in the same format as they are stored or returned by the "env" utility, that is, "<name>=<value>". This can be handy when debugging certain configuration files making heavy use of environment variables to ensure that they contain the expected values. This command is restricted and can only be issued on sockets configured for levels "operator" or "admin".
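
For example, to check a single variable referenced in the configuration (the variable name and socket path are illustrative) :

    $ echo "show env STATS_PORT" | socat stdio /tmp/sock1
    STATS_PORT=8404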

show errors [<iid>|<proxy>] [request|response]
Dump last known request and response errors collected by frontends and backends. If <iid> is specified, the dump is limited to errors concerning either the frontend or backend whose ID is <iid>. Proxy ID "-1" will cause all instances to be dumped. If a proxy name is specified instead, its ID will be used as the filter. If "request" or "response" is added after the proxy name or ID, only request or response errors will be dumped. This command is restricted and can only be issued on sockets configured for levels "operator" or "admin".

The errors which may be collected are the last request and response errors caused by protocol violations, often due to invalid characters in header names. The report precisely indicates what exact character violated the protocol. Other important information such as the exact date the error was detected, frontend and backend names, the server name (when known), the internal session ID and the source address which has initiated the session are reported too.

All characters are returned, and non-printable characters are encoded. The most common ones (\t = 9, \n = 10, \r = 13 and \e = 27) are encoded as one letter following a backslash. The backslash itself is encoded as '\\' to avoid confusion. Other non-printable characters are encoded '\xNN' where NN is the two-digit hexadecimal representation of the character's ASCII code.

Lines are prefixed with the position of their first character, starting at 0 for the beginning of the buffer. At most one input line is printed per line, and large lines will be broken into multiple consecutive output lines so that the output never goes beyond 79 characters wide. It is easy to detect if a line was broken, because it will not end with '\n' and the next line's offset will be followed by a '+' sign, indicating it is a continuation of the previous line.

  Example :
        $ echo "show errors -1 response" | socat stdio /tmp/sock1
    >>> [04/Mar/2009:15:46:56.081] backend http-in (#2) : invalid response
          src 127.0.0.1, session #54, frontend fe-eth0 (#1), server s2 (#1)
          response length 213 bytes, error at position 23:

          00000  HTTP/1.0 200 OK\r\n
          00017  header/bizarre:blah\r\n
          00038  Location: blah\r\n
          00054  Long-line: this is a very long line which should b
          00104+ e broken into multiple lines on the output buffer,
          00154+  otherwise it would be too large to print in a ter
          00204+ minal\r\n
          00211  \r\n

In the example above, we see that the backend "http-in" which has internal ID 2 has blocked an invalid response from its server s2 which has internal ID 1. The request was on session 54 initiated by source 127.0.0.1 and received by frontend fe-eth0 whose ID is 1. The total response length was 213 bytes when the error was detected, and the error was at byte 23. This is the slash ('/') in header name "header/bizarre", which is not a valid HTTP character for a header name.

show events [<sink>] [-w] [-n]
With no option, this lists all known event sinks and their types. With an option, it will dump all available events in the designated sink if it is of type buffer. If option "-w" is passed after the sink name, then once the end of the buffer is reached, the command will wait for new events and display them. It is possible to stop the operation by entering any input (which will be discarded) or by closing the session. Finally, option "-n" is used to directly seek to the end of the buffer, which is often convenient when combined with "-w" to only report new events. For convenience, "-wn" or "-nw" may be used to enable both options at once.
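
For example, to follow new events live from a ring buffer in an interactive session (the sink name "buf0" is illustrative and depends on the configuration) :

    $ socat /tmp/sock1 readline
    prompt
    > show events buf0 -nw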

show fd [<fd>]
Dump the list of all open file descriptors, or only the specified one. This is only aimed at developers who need to observe internal states in order to debug complex issues such as abnormal CPU usage. One fd is reported per line, and for each of them, its state in the poller using upper case letters for enabled flags and lower case for disabled flags, using "P" for "polled", "R" for "ready", "A" for "active", the events status using "H" for "hangup", "E" for "error", "O" for "output", "P" for "priority" and "I" for "input", a few other flags like "N" for "new" (just added into the fd cache), "U" for "updated" (received an update in the fd cache), "L" for "linger_risk", "C" for "cloned", then the cached entry position, the pointer to the internal owner, the pointer to the I/O callback and its name when known. When the owner is a connection, the connection flags and the target are reported (frontend, proxy or server). When the owner is a listener, the listener's state and its frontend are reported. There is no point in using this command without a good knowledge of the internals. It's worth noting that the output format may evolve over time so this output must not be parsed by tools designed to be durable. Some internal structure states may look suspicious to the function listing them, in this case the output line will be suffixed with an exclamation mark ('!'). This may help find a starting point when trying to diagnose an incident.

show info [typed|json] [desc] [float]
Dump info about haproxy status on current process. If "typed" is passed as an optional argument, field numbers, names and types are emitted as well so that external monitoring products can easily retrieve, possibly aggregate, then report information found in fields they don't know. Each field is dumped on its own line. If "json" is passed as an optional argument then information provided by "typed" output is provided in JSON format as a list of JSON objects. By default, the format contains only two columns delimited by a colon (':'). The left one is the field name and the right one is the value. It is very important to note that in typed output format, the dump for a single object is contiguous so that there is no need for a consumer to store everything at once. If "float" is passed as an optional argument, some fields usually emitted as integers may switch to floats for higher accuracy. It is purposely unspecified which ones are concerned as this might evolve over time. Using this option implies that the consumer is able to process floats. The output format used is sprintf("%f").

When using the typed output format, each line is made of 4 columns delimited by colons (':'). The first column is a dot-delimited series of 3 elements. The first element is the numeric position of the field in the list (starting at zero). This position shall not change over time, but holes are to be expected, depending on build options or if some fields are deleted in the future. The second element is the field name as it appears in the default "show info" output. The third element is the relative process number starting at 1.

The rest of the line starting after the first colon follows the "typed output format" described in the section above. In short, the second column (after the first ':') indicates the origin, nature and scope of the variable. The third column indicates the type of the field, among "s32", "s64", "u32", "u64" and "str". Then the fourth column is the value itself, which the consumer knows how to parse thanks to column 3 and how to process thanks to column 2.

  Thus the overall line format in typed mode is :

      <field_pos>.<field_name>.<process_num>:<tags>:<type>:<value>

When "desc" is appended to the command, one extra colon followed by a quoted string is appended with a description for the metric. At the time of writing, this is only supported for the "typed" and default output formats.

  Example :

      > show info
      Name: HAProxy
      Version: 1.7-dev1-de52ea-146
      Release_date: 2016/03/11
      Nbproc: 1
      Process_num: 1
      Pid: 28105
      Uptime: 0d 0h00m04s
      Uptime_sec: 4
      Memmax_MB: 0
      PoolAlloc_MB: 0
      PoolUsed_MB: 0
      PoolFailed: 0
      (...)

      > show info typed
      0.Name.1:POS:str:HAProxy
      1.Version.1:POS:str:1.7-dev1-de52ea-146
      2.Release_date.1:POS:str:2016/03/11
      3.Nbproc.1:CGS:u32:1
      4.Process_num.1:KGP:u32:1
      5.Pid.1:SGP:u32:28105
      6.Uptime.1:MDP:str:0d 0h00m08s
      7.Uptime_sec.1:MDP:u32:8
      8.Memmax_MB.1:CLP:u32:0
      9.PoolAlloc_MB.1:MGP:u32:0
      10.PoolUsed_MB.1:MGP:u32:0
      11.PoolFailed.1:MCP:u32:0
      (...)

In the typed format, the presence of the process ID at the end of the first column makes it very easy to visually aggregate outputs from multiple processes.

  Example :

      $ ( echo show info typed | socat /var/run/haproxy.sock1 - ;  \
          echo show info typed | socat /var/run/haproxy.sock2 - ) | \
        sort -t . -k 1,1n -k 2,2 -k 3,3n
      0.Name.1:POS:str:HAProxy
      0.Name.2:POS:str:HAProxy
      1.Version.1:POS:str:1.7-dev1-868ab3-148
      1.Version.2:POS:str:1.7-dev1-868ab3-148
      2.Release_date.1:POS:str:2016/03/11
      2.Release_date.2:POS:str:2016/03/11
      3.Nbproc.1:CGS:u32:2
      3.Nbproc.2:CGS:u32:2
      4.Process_num.1:KGP:u32:1
      4.Process_num.2:KGP:u32:2
      5.Pid.1:SGP:u32:30120
      5.Pid.2:SGP:u32:30121
      6.Uptime.1:MDP:str:0d 0h01m28s
      6.Uptime.2:MDP:str:0d 0h01m28s
      (...)

The format of JSON output is described in a schema which may be output using "show schema json".
The JSON output contains no extra whitespace in order to reduce the volume of output. For human consumption passing the output through a pretty printer may be helpful. Example :

  $ echo "show info json" | socat /var/run/haproxy.sock stdio | \
    python -m json.tool

show libs
Dump the list of loaded shared dynamic libraries and object files, on systems that support it. When available, for each shared object the range of virtual addresses will be indicated, the size and the path to the object. This can be used for example to try to estimate what library provides a function that appears in a dump. Note that on many systems, addresses will change upon each restart (address space randomization), so that this list would need to be retrieved upon startup if it is expected to be used to analyse a core file. This command may only be issued on sockets configured for levels "operator" or "admin". Note that the output format may vary between operating systems, architectures and even haproxy versions, and ought not to be relied on in scripts.

show map [[@<ver>] <map>]
Dump info about map converters. Without argument, the list of all available maps is returned. If a <map> is specified, its contents are dumped. <map> is the #<id> or <file>. By default the current version of the map is shown (the version currently being matched against and reported as 'curr_ver' in the map list). It is possible to instead dump other versions by prepending '@<ver>' before the map's identifier. The version works as a filter and non-existing versions will simply report no result. The 'entry_cnt' value represents the count of all the map entries, not just the active ones, which means that it also includes entries currently being added.

In the output, the first column is a unique entry identifier, which is usable as a reference for operations "del map" and "set map". The second column is the pattern and the third column is the sample if available. The data returned are not directly a list of available maps, but the list of all patterns composing any map. Many of these patterns can be shared with ACLs.
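
An illustrative dump (the socket path, map file and entries are assumptions) :

    $ echo "show map /etc/haproxy/hosts.map" | socat stdio /tmp/sock1
    0x55f1d8a2b3c0 www.example.com bk_www
    0x55f1d8a2b420 static.example.com bk_static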

show peers [dict|-] [<peers section>]
Dump info about the peers configured in "peers" sections. Without argument, the list of the peers belonging to all the "peers" sections is listed. If <peers section> is specified, only the information about the peers belonging to this "peers" section is dumped. When "dict" is specified before the peers section name, the entire Tx/Rx dictionary caches will also be dumped (very large). Passing "-" may be required to dump a peers section called "dict".

Here are two examples of outputs where hostA, hostB and hostC peers belong to "sharedlb" peers sections. Only hostA and hostB are connected. Only hostA has sent data to hostB.

  $ echo "show peers" | socat - /tmp/hostA
  0x55deb0224320: [15/Apr/2019:11:28:01] id=sharedlb state=0 flags=0x3 \
    resync_timeout=<PAST> task_calls=45122
      0x55deb022b540: id=hostC(remote) addr=127.0.0.12:10002 status=CONN \
        reconnect=4s confirm=0
        flags=0x0
      0x55deb022a440: id=hostA(local) addr=127.0.0.10:10000 status=NONE \
        reconnect=<NEVER> confirm=0
        flags=0x0
      0x55deb0227d70: id=hostB(remote) addr=127.0.0.11:10001 status=ESTA \
        reconnect=2s confirm=0
        flags=0x20000200 appctx:0x55deb028fba0 st0=7 st1=0 task_calls=14456 \
          state=EST
        xprt=RAW src=127.0.0.1:37257 addr=127.0.0.10:10000
        remote_table:0x55deb0224a10 id=stkt local_id=1 remote_id=1
        last_local_table:0x55deb0224a10 id=stkt local_id=1 remote_id=1
        shared tables:
          0x55deb0224a10 local_id=1 remote_id=1 flags=0x0 remote_data=0x65
            last_acked=0 last_pushed=3 last_get=0 teaching_origin=0 update=3
            table:0x55deb022d6a0 id=stkt update=3 localupdate=3 \
              commitupdate=3 syncing=0

  $ echo "show peers" | socat - /tmp/hostB
  0x55871b5ab320: [15/Apr/2019:11:28:03] id=sharedlb state=0 flags=0x3 \
    resync_timeout=<PAST> task_calls=3
      0x55871b5b2540: id=hostC(remote) addr=127.0.0.12:10002 status=CONN \
        reconnect=3s confirm=0
        flags=0x0
      0x55871b5b1440: id=hostB(local) addr=127.0.0.11:10001 status=NONE \
        reconnect=<NEVER> confirm=0
        flags=0x0
      0x55871b5aed70: id=hostA(remote) addr=127.0.0.10:10000 status=ESTA \
        reconnect=2s confirm=0
        flags=0x20000200 appctx:0x7fa46800ee00 st0=7 st1=0 task_calls=62356 \
          state=EST
        remote_table:0x55871b5ab960 id=stkt local_id=1 remote_id=1
        last_local_table:0x55871b5ab960 id=stkt local_id=1 remote_id=1
        shared tables:
          0x55871b5ab960 local_id=1 remote_id=1 flags=0x0 remote_data=0x65
            last_acked=3 last_pushed=0 last_get=3 teaching_origin=0 update=0
            table:0x55871b5b46a0 id=stkt update=1 localupdate=0 \
              commitupdate=0 syncing=0

show pools
Dump the status of internal memory pools. This is useful to track memory usage when suspecting a memory leak for example. It does exactly the same as the SIGQUIT when running in foreground except that it does not flush the pools.
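
A simple way to look for steadily growing pools is to snapshot the output twice and compare (the socket path and interval are illustrative) :

    $ echo "show pools" | socat stdio /tmp/sock1 > pools.1
    $ sleep 60
    $ echo "show pools" | socat stdio /tmp/sock1 > pools.2
    $ diff pools.1 pools.2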

show profiling [{all | status | tasks | memory}] [byaddr] [<max_lines>]
Dumps the current profiling settings, one per line, as well as the command needed to change them. When tasks profiling is enabled, some per-function statistics collected by the scheduler will also be emitted, with a summary covering the number of calls, total/avg CPU time and total/avg latency. When memory profiling is enabled, some information such as the number of allocations/releases and their sizes will be reported. It is possible to limit the dump to only the profiling status, the tasks, or the memory profiling by specifying the respective keywords; by default all profiling information is dumped. It is also possible to limit the number of lines of output of each category by specifying a numeric limit. It is possible to request that the output is sorted by address instead of usage, e.g. to ease comparisons between subsequent calls. Please note that profiling is essentially aimed at developers since it gives hints about where CPU cycles or memory are wasted in the code. There is nothing useful to monitor there.
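
For example, to limit the dump to the tasks category with at most 20 lines of output (the socket path is illustrative) :

    $ echo "show profiling tasks 20" | socat stdio /tmp/sock1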

show resolvers [<resolvers section id>]
Dump statistics for the given resolvers section, or all resolvers sections if no section is supplied.

  For each name server, the following counters are reported:
    sent: number of DNS requests sent to this server
    valid: number of valid DNS responses received from this server
    update: number of DNS responses used to update the server's IP address
    cname: number of CNAME responses
    cname_error: CNAME errors encountered with this server
    any_err: number of empty responses (i.e. the server does not support ANY type)
    nx: non-existent domain responses received from this server
    timeout: number of times this server did not answer in time
    refused: number of requests refused by this server
    other: any other DNS errors
    invalid: invalid DNS responses (from a protocol point of view)
    too_big: responses too large
    outdated: number of responses that arrived too late (after another name server)
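
Example invocation (the resolvers section name "mydns" is illustrative) :

    $ echo "show resolvers mydns" | socat stdio /tmp/sock1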

show servers conn [<backend>]
Dump the current and idle connections state of the servers belonging to the designated backend (or all backends if none specified). A backend name or identifier may be used.

The output consists of a header line showing the field titles, then one line per server with, for each, the backend name and ID, the server name and ID, the address, the port and a series of values. The number of fields varies depending on thread count.

Given the threaded nature of idle connections, it's important to understand that some values may change once read, and that as such, consistency within a line isn't guaranteed. This output is mostly provided as a debugging tool and is not meant to be routinely monitored nor graphed.

show servers state [<backend>]
Dump the state of the servers found in the running configuration. A backend name or identifier may be provided to limit the output to this backend only.

The dump has the following format :
  - first line contains the format version (1 in this specification);
  - second line contains the column headers, prefixed by a sharp ('#');
  - third line and next ones contain data;
  - each line starting with a sharp ('#') is considered as a comment.
Since multiple versions of the output may co-exist, below is the list of fields and their order per file format version :

  
   1:
     be_id:                       Backend unique id.
     be_name:                     Backend label.
     srv_id:                      Server unique id (in the backend).
     srv_name:                    Server label.
     srv_addr:                    Server IP address.
     srv_op_state:                Server operational state (UP/DOWN/...).
                                    0 = SRV_ST_STOPPED
                                      The server is down.
                                    1 = SRV_ST_STARTING
                                      The server is warming up (up but
                                      throttled).
                                    2 = SRV_ST_RUNNING
                                      The server is fully up.
                                    3 = SRV_ST_STOPPING
                                      The server is up but soft-stopping
                                      (eg: 404).
     srv_admin_state:             Server administrative state (MAINT/DRAIN/...).
                                  The state is actually a mask of values :
                                    0x01 = SRV_ADMF_FMAINT
                                      The server was explicitly forced into
                                      maintenance.
                                    0x02 = SRV_ADMF_IMAINT
                                      The server has inherited the maintenance
                                      status from a tracked server.
                                    0x04 = SRV_ADMF_CMAINT
                                      The server is in maintenance because of
                                      the configuration.
                                    0x08 = SRV_ADMF_FDRAIN
                                      The server was explicitly forced into
                                      drain state.
                                    0x10 = SRV_ADMF_IDRAIN
                                      The server has inherited the drain status
                                      from a tracked server.
                                    0x20 = SRV_ADMF_RMAINT
                                      The server is in maintenance because of an
                                      IP address resolution failure.
                                    0x40 = SRV_ADMF_HMAINT
                                      The server FQDN was set from stats socket.

     srv_uweight:                 User visible server's weight.
     srv_iweight:                 Server's initial weight.
     srv_time_since_last_change:  Time since last operational change.
     srv_check_status:            Last health check status.
     srv_check_result:            Last check result (FAILED/PASSED/...).
                                    0 = CHK_RES_UNKNOWN
                                      Initialized to this by default.
                                    1 = CHK_RES_NEUTRAL
                                      Valid check but no status information.
                                    2 = CHK_RES_FAILED
                                      Check failed.
                                    3 = CHK_RES_PASSED
                                      Check succeeded and server is fully up
                                      again.
                                    4 = CHK_RES_CONDPASS
                                      Check reports the server doesn't want new
                                      sessions.
     srv_check_health:            Checks rise / fall current counter.
     srv_check_state:             State of the check (ENABLED/PAUSED/...).
                                  The state is actually a mask of values :
                                    0x01 = CHK_ST_INPROGRESS
                                      A check is currently running.
                                    0x02 = CHK_ST_CONFIGURED
                                      This check is configured and may be
                                      enabled.
                                    0x04 = CHK_ST_ENABLED
                                      This check is currently administratively
                                      enabled.
                                    0x08 = CHK_ST_PAUSED
                                      Checks are paused because of maintenance
                                      (health only).
     srv_agent_state:             State of the agent check (ENABLED/PAUSED/...).
                                  This state uses the same mask values as
                                  "srv_check_state", adding this specific one :
                                    0x10 = CHK_ST_AGENT
                                      Check is an agent check (otherwise it's a
                                      health check).
     bk_f_forced_id:              Flag to know if the backend ID is forced by
                                  configuration.
     srv_f_forced_id:             Flag to know if the server's ID is forced by
                                  configuration.
     srv_fqdn:                    Server FQDN.
     srv_port:                    Server port.
     srvrecord:                   DNS SRV record associated with this server.
     srv_use_ssl:                 Use SSL for server connections.
     srv_check_port:              Server health check port.
     srv_check_addr:              Server health check address.
     srv_agent_addr:              Server health agent address.
     srv_agent_port:              Server health agent port.
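
This output is typically saved to the file designated by the global "server-state-file" directive so that server states can be reloaded at startup; for example (paths are illustrative) :

    $ echo "show servers state" | socat stdio /tmp/sock1 \
        > /var/lib/haproxy/server-state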

show sess
Dump all known sessions. Avoid doing this on slow connections as this can be huge. This command is restricted and can only be issued on sockets configured for levels "operator" or "admin". Note that on machines with quickly recycled connections, it is possible that this output reports fewer entries than really exist because it will dump all existing sessions up to the last one that was created before the command was entered; those which die in the meantime will not appear.

show sess <id>
Display a lot of internal information about the specified session identifier. This identifier is the first field at the beginning of the lines in the dumps of "show sess" (it corresponds to the session pointer). This information is useless to most users but may be used by haproxy developers to troubleshoot a complex bug. The output format is intentionally not documented so that it can freely evolve depending on demands. You may find a description of all fields returned in src/dumpstats.c.

The special id "all" dumps the states of all sessions, which must be avoided as much as possible as it is highly CPU intensive and can take a lot of time.

show stat [domain <dns|proxy>] [{<iid>|<proxy>} <type> <sid>] [typed|json] \
[desc] [up|no-maint]
Dump statistics. The domain is used to select which statistics to print; dns and proxy are available for now. By default, the CSV format is used; you can activate the extended typed output format described in the section above if "typed" is passed after the other arguments, or JSON if "json" is passed after the other arguments. By passing <iid>, <type> and <sid>, it is possible to dump only selected items :
  - <iid> is a proxy ID, -1 to dump everything. Alternatively, a proxy name <proxy> may be specified. In this case, this proxy's ID will be used as the ID selector.
  - <type> selects the type of dumpable objects : 1 for frontends, 2 for backends, 4 for servers, -1 for everything. These values can be ORed, for example :
      1 + 2     = 3  -> frontend + backend.
      1 + 2 + 4 = 7  -> frontend + backend + server.
  - <sid> is a server ID, -1 to dump everything from the selected proxy.

  Example :
        $ echo "show info;show stat" | socat stdio unix-connect:/tmp/sock1
    >>> Name: HAProxy
        Version: 1.4-dev2-49
        Release_date: 2009/09/23
        Nbproc: 1
        Process_num: 1
        (...)

        # pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,  (...)
        stats,FRONTEND,,,0,0,1000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,1,0, (...)
        stats,BACKEND,0,0,0,0,1000,0,0,0,0,0,,0,0,0,0,UP,0,0,0,,0,250,(...)
        (...)
        www1,BACKEND,0,0,0,0,1000,0,0,0,0,0,,0,0,0,0,UP,1,1,0,,0,250, (...)

        $

In this example, two commands have been issued at once. That way it's easy to find which process the stats apply to in multi-process mode. This is not needed in the typed output format as the process number is reported on each line. Notice the empty line after the information output which marks the end of the first block. A similar empty line appears at the end of the second block (stats) so that the reader knows the output has not been truncated.

When "typed" is specified, the output format is more suitable to monitoring tools because it provides numeric positions and indicates the type of each output field. Each value stands on its own line with process number, element number, nature, origin and scope. This same format is available via the HTTP stats by passing ";typed" after the URI. It is very important to note that in typed output format, the dump for a single object is contiguous so that there is no need for a consumer to store everything at once.

The "up" modifier will result in listing only servers which reportedly up or not checked. Those down, unresolved, or in maintenance will not be listed. This is analogous to the ";up" option on the HTTP stats. Similarly, the "no-maint" modifier will act like the ";no-maint" HTTP modifier and will result in disabled servers not to be listed. The difference is that those which are enabled but down will not be evicted.

When using the typed output format, each line is made of 4 columns delimited by colons (':'). The first column is a dot-delimited series of 6 elements. The first element is a letter indicating the type of the object being described. At the moment the following object types are known : 'F' for a frontend, 'B' for a backend, 'L' for a listener, and 'S' for a server. The second element is a positive integer representing the unique identifier of the proxy the object belongs to. It is equivalent to the "iid" column of the CSV output and matches the value in front of the optional "id" directive found in the frontend or backend section. The third element is a positive integer containing the unique object identifier inside the proxy, and corresponds to the "sid" column of the CSV output. ID 0 is reported when dumping a frontend or a backend. For a listener or a server, this corresponds to their respective ID inside the proxy. The fourth element is the numeric position of the field in the list (starting at zero). This position shall not change over time, but holes are to be expected, depending on build options or if some fields are deleted in the future. The fifth element is the field name as it appears in the CSV output. The sixth element is a positive integer and is the relative process number starting at 1.

The rest of the line starting after the first colon follows the "typed output format" described in the section above. In short, the second column (after the first ':') indicates the origin, nature and scope of the variable. The third column indicates the field type, among "s32", "s64", "u32", "u64", "flt" and "str". Then the fourth column is the value itself, which the consumer knows how to parse thanks to column 3 and how to process thanks to column 2.

When "desc" is appended to the command, one extra colon followed by a quoted string is appended with a description for the metric. At the time of writing, this is only supported for the "typed" output format.

Thus the overall line format in typed mode is :

      <obj>.<px_id>.<id>.<fpos>.<fname>.<process_num>:<tags>:<type>:<value>

  Here's an example of typed output format :

        $ echo "show stat typed" | socat stdio unix-connect:/tmp/sock1
        F.2.0.0.pxname.1:MGP:str:private-frontend
        F.2.0.1.svname.1:MGP:str:FRONTEND
        F.2.0.8.bin.1:MGP:u64:0
        F.2.0.9.bout.1:MGP:u64:0
        F.2.0.40.hrsp_2xx.1:MGP:u64:0
        L.2.1.0.pxname.1:MGP:str:private-frontend
        L.2.1.1.svname.1:MGP:str:sock-1
        L.2.1.17.status.1:MGP:str:OPEN
        L.2.1.73.addr.1:MGP:str:0.0.0.0:8001
        S.3.13.60.rtime.1:MCP:u32:0
        S.3.13.61.ttime.1:MCP:u32:0
        S.3.13.62.agent_status.1:MGP:str:L4TOUT
        S.3.13.64.agent_duration.1:MGP:u64:2001
        S.3.13.65.check_desc.1:MCP:str:Layer4 timeout
        S.3.13.66.agent_desc.1:MCP:str:Layer4 timeout
        S.3.13.67.check_rise.1:MCP:u32:2
        S.3.13.68.check_fall.1:MCP:u32:3
        S.3.13.69.check_health.1:SGP:u32:0
        S.3.13.70.agent_rise.1:MaP:u32:1
        S.3.13.71.agent_fall.1:SGP:u32:1
        S.3.13.72.agent_health.1:SGP:u32:1
        S.3.13.73.addr.1:MCP:str:1.255.255.255:8888
        S.3.13.75.mode.1:MAP:str:http
        B.3.0.0.pxname.1:MGP:str:private-backend
        B.3.0.1.svname.1:MGP:str:BACKEND
        B.3.0.2.qcur.1:MGP:u32:0
        B.3.0.3.qmax.1:MGP:u32:0
        B.3.0.4.scur.1:MGP:u32:0
        B.3.0.5.smax.1:MGP:u32:0
        B.3.0.6.slim.1:MGP:u32:1000
        B.3.0.55.lastsess.1:MMP:s32:-1
        (...)

In the typed format, the presence of the process ID at the end of the first column makes it very easy to visually aggregate outputs from multiple processes, as shown in the example below where each line appears for each process :

        $ ( echo show stat typed | socat /var/run/haproxy.sock1 - ; \
            echo show stat typed | socat /var/run/haproxy.sock2 - ) | \
          sort -t . -k 1,1 -k 2,2n -k 3,3n -k 4,4n -k 5,5 -k 6,6n
        B.3.0.0.pxname.1:MGP:str:private-backend
        B.3.0.0.pxname.2:MGP:str:private-backend
        B.3.0.1.svname.1:MGP:str:BACKEND
        B.3.0.1.svname.2:MGP:str:BACKEND
        B.3.0.2.qcur.1:MGP:u32:0
        B.3.0.2.qcur.2:MGP:u32:0
        B.3.0.3.qmax.1:MGP:u32:0
        B.3.0.3.qmax.2:MGP:u32:0
        B.3.0.4.scur.1:MGP:u32:0
        B.3.0.4.scur.2:MGP:u32:0
        B.3.0.5.smax.1:MGP:u32:0
        B.3.0.5.smax.2:MGP:u32:0
        B.3.0.6.slim.1:MGP:u32:1000
        B.3.0.6.slim.2:MGP:u32:1000
        (...)

The format of JSON output is described in a schema which may be output using "show schema json".

The JSON output contains no extra whitespace in order to reduce the volume of output. For human consumption passing the output through a pretty printer may be helpful. Example :

$ echo "show stat json" | socat /var/run/haproxy.sock stdio | \ python -m json.tool

show ssl ca-file [<cafile>[:<index>]]
Display the list of CA files used by HAProxy and their respective certificate counts. If a filename is prefixed by an asterisk, it is a transaction which is not committed yet. If a <cafile> is specified without <index>, it will show the status of the CA file ("Used"/"Unused") followed by details about all the certificates contained in the CA file. The details displayed for every certificate are the same as the ones displayed by a "show ssl cert" command. If a <cafile> is specified followed by an <index>, it will only display the details of the certificate having the specified index. Indexes start from 1. If the index is invalid (too big for instance), nothing will be displayed. This command can be useful to check if a CA file was properly updated. You can also display the details of an ongoing transaction by prefixing the filename by an asterisk.

  Example :

    $ echo "show ssl ca-file" | socat /var/run/haproxy.master -
    # transaction
    *cafile.crt - 2 certificate(s)
    # filename
    cafile.crt - 1 certificate(s)

    $ echo "show ssl ca-file cafile.crt" | socat /var/run/haproxy.master -
    Filename: /home/tricot/work/haproxy/reg-tests/ssl/set_cafile_ca2.crt
    Status: Used

    Certificate #1:
    Serial: 11A4D2200DC84376E7D233CAFF39DF44BF8D1211
    notBefore: Apr  1 07:40:53 2021 GMT
    notAfter: Aug 17 07:40:53 2048 GMT
    Subject Alternative Name:
    Algorithm: RSA4096
    SHA1 FingerPrint: A111EF0FEFCDE11D47FE3F33ADCA8435EBEA4864
    Subject: /C=FR/ST=Some-State/O=HAProxy Technologies/CN=HAProxy Technologies CA
    Issuer: /C=FR/ST=Some-State/O=HAProxy Technologies/CN=HAProxy Technologies CA

    $ echo "show ssl ca-file *cafile.crt:2" | socat /var/run/haproxy.master -
    Filename: */home/tricot/work/haproxy/reg-tests/ssl/set_cafile_ca2.crt
    Status: Unused

    Certificate #2:
    Serial: 587A1CE5ED855040A0C82BF255FF300ADB7C8136
    [...]

show ssl cert [<filename>]
Display the list of certificates used on frontends and backends. If a filename is prefixed by an asterisk, it is a transaction which is not committed yet. If a filename is specified, it will show details about the certificate. This command can be useful to check if a certificate was properly updated. You can also display details on a transaction by prefixing the filename by an asterisk. This command can also be used to display the details of a certificate's OCSP response by suffixing the filename with a ".ocsp" extension. It works for committed certificates as well as for ongoing transactions. On a committed certificate, this command is equivalent to calling "show ssl ocsp-response" with the certificate's corresponding OCSP response ID.

  Example :

    $ echo "@1 show ssl cert" | socat /var/run/haproxy.master -
    # transaction
    *test.local.pem
    # filename
    test.local.pem

    $ echo "@1 show ssl cert test.local.pem" | socat /var/run/haproxy.master -
    Filename: test.local.pem
    Serial: 03ECC19BA54B25E85ABA46EE561B9A10D26F
    notBefore: Sep 13 21:20:24 2019 GMT
    notAfter: Dec 12 21:20:24 2019 GMT
    Issuer: /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
    Subject: /CN=test.local
    Subject Alternative Name: DNS:test.local, DNS:imap.test.local
    Algorithm: RSA2048
    SHA1 FingerPrint: 417A11CAE25F607B24F638B4A8AEE51D1E211477

    $ echo "@1 show ssl cert *test.local.pem" | socat /var/run/haproxy.master -
    Filename: *test.local.pem
    [...]

show ssl crl-file [<crlfile>[:<index>]]
Display the list of CRL files used by HAProxy.
If a filename is prefixed by an asterisk, it is a transaction which is not committed yet. If a <crlfile> is specified without <index>, it will show the status of the CRL file ("Used"/"Unused") followed by details about all the Revocation Lists contained in the CRL file. The details displayed for every list are based on the output of "openssl crl -text -noout -in <file>". If a <crlfile> is specified followed by an <index>, it will only display the details of the list having the specified index. Indexes start from 1. If the index is invalid (too big for instance), nothing will be displayed. This command can be useful to check if a CRL file was properly updated. You can also display the details of an ongoing transaction by prefixing the filename by an asterisk.

  Example :

    $ echo "show ssl crl-file" | socat /var/run/haproxy.master -
    # transaction
    *crlfile.pem
    # filename
    crlfile.pem

    $ echo "show ssl crl-file crlfile.pem" | socat /var/run/haproxy.master -
    Filename: /home/tricot/work/haproxy/reg-tests/ssl/crlfile.pem
    Status: Used

    Certificate Revocation List #1:
    Version 1
    Signature Algorithm: sha256WithRSAEncryption
    Issuer: /C=FR/O=HAProxy Technologies/CN=Intermediate CA2
    Last Update: Apr 23 14:45:39 2021 GMT
    Next Update: Sep  8 14:45:39 2048 GMT
    Revoked Certificates:
        Serial Number: 1008
            Revocation Date: Apr 23 14:45:36 2021 GMT

    Certificate Revocation List #2:
    Version 1
    Signature Algorithm: sha256WithRSAEncryption
    Issuer: /C=FR/O=HAProxy Technologies/CN=Root CA
    Last Update: Apr 23 14:30:44 2021 GMT
    Next Update: Sep  8 14:30:44 2048 GMT
    No Revoked Certificates.

show ssl crt-list [-n] [<filename>]
Display the list of crt-list files and directories used in the HAProxy configuration. If a filename is specified, dump the content of a crt-list or a directory. Once dumped the output can be used as a crt-list file. The '-n' option can be used to display the line number, which is useful when combined with the 'del ssl crt-list' option when an entry is duplicated. The output with the '-n' option is not compatible with the crt-list format and not loadable by haproxy.

  Example:
    echo "show ssl crt-list -n localhost.crt-list" | socat /tmp/sock1 -
    # localhost.crt-list
    common.pem:1 !not.test1.com *.test1.com !localhost
    common.pem:2
    ecdsa.pem:3 [verify none allow-0rtt ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.3] localhost !www.test1.com
    ecdsa.pem:4 [verify none allow-0rtt ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.3]

show ssl ocsp-response [<id>]
Display the IDs of the OCSP tree entries corresponding to all the OCSP responses used in HAProxy, as well as the issuer's name and key hash and the serial number of the certificate for which the OCSP response was built. If a valid <id> is provided, display the contents of the corresponding OCSP response. The information displayed is the same as in an "openssl ocsp -respin -text" call.

  Example :

    $ echo "show ssl ocsp-response" | socat /var/run/haproxy.master -
    # Certificate IDs
      Certificate ID key : 303b300906052b0e03021a050004148a83e0060faff709ca7e9b95522a2e81635fda0a0414f652b0e435d5ea923851508f0adbe92d85de007a0202100a
        Certificate ID:
          Issuer Name Hash: 8A83E0060FAFF709CA7E9B95522A2E81635FDA0A
          Issuer Key Hash: F652B0E435D5EA923851508F0ADBE92D85DE007A
          Serial Number: 100A

    $ echo "show ssl ocsp-response 303b300906052b0e03021a050004148a83e0060faff709ca7e9b95522a2e81635fda0a0414f652b0e435d5ea923851508f0adbe92d85de007a0202100a" | socat /var/run/haproxy.master -
    OCSP Response Data:
      OCSP Response Status: successful (0x0)
      Response Type: Basic OCSP Response
      Version: 1 (0x0)
      Responder Id: C = FR, O = HAProxy Technologies, CN = ocsp.haproxy.com
      Produced At: May 27 15:43:38 2021 GMT
      Responses:
      Certificate ID:
        Hash Algorithm: sha1
        Issuer Name Hash: 8A83E0060FAFF709CA7E9B95522A2E81635FDA0A
        Issuer Key Hash: F652B0E435D5EA923851508F0ADBE92D85DE007A
        Serial Number: 100A
      Cert Status: good
      This Update: May 27 15:43:38 2021 GMT
      Next Update: Oct 12 15:43:38 2048 GMT
      [...]

show ssl providers
Display the names of the providers loaded by OpenSSL during init. Provider loading can indeed be configured via the OpenSSL configuration file and this option allows one to check that the right providers were loaded. This command is only available with OpenSSL v3.

  Example :
    $ echo "show ssl providers" | socat /var/run/haproxy.master -
    Loaded providers :
        - fips
        - base

show startup-logs
Dump all messages emitted during the startup of the current haproxy process. Each startup-logs buffer is unique to its haproxy worker.

show table
Dump general information on all known stick-tables. Their name is returned (the name of the proxy which holds them), their type (currently zero, always IP), their size in maximum possible number of entries, and the number of entries currently in use.

  Example :
        $ echo "show table" | socat stdio /tmp/sock1
    >>> # table: front_pub, type: ip, size:204800, used:171454
    >>> # table: back_rdp, type: ip, size:204800, used:0

show table <name> [ data.<type> <operator> <value> [data.<type> ...]] | [ key <key> ]
Dump contents of stick-table <name>. In this mode, a first line of generic information about the table is reported as with "show table", then all entries are dumped. Since this can be quite heavy, it is possible to specify a filter in order to select which entries to display.

When the "data." form is used the filter applies to the stored data (see "stick-table" in section 4.2). A stored data type must be specified in <type>, and this data type must be stored in the table otherwise an error is reported. The data is compared according to <operator> with the 64-bit integer <value>. Operators are the same as with the ACLs :

- eq : match entries whose data is equal to this value
- ne : match entries whose data is not equal to this value
- le : match entries whose data is less than or equal to this value
- ge : match entries whose data is greater than or equal to this value
- lt : match entries whose data is less than this value
- gt : match entries whose data is greater than this value

In this form, you can use multiple data filter entries, up to a maximum defined during build time (4 by default).

When the "key" form is used, the entry <key> is shown. The key must be of the same type as the table, which currently is limited to IPv4, IPv6, integer, and string.

  Example :
        $ echo "show table http_proxy" | socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:2
    >>> 0x80e6a4c: key=127.0.0.1 use=0 exp=3594729 gpc0=0 conn_rate(30000)=1  \
          bytes_out_rate(60000)=187
    >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
          bytes_out_rate(60000)=191

        $ echo "show table http_proxy data.gpc0 gt 0" | socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:2
    >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
          bytes_out_rate(60000)=191

        $ echo "show table http_proxy data.conn_rate gt 5" | \
            socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:2
    >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
          bytes_out_rate(60000)=191

        $ echo "show table http_proxy key 127.0.0.2" | \
            socat stdio /tmp/sock1
    >>> # table: http_proxy, type: ip, size:204800, used:2
    >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
          bytes_out_rate(60000)=191

When the data criterion applies to a dynamic value dependent on time such as a bytes rate, the value is dynamically computed during the evaluation of the entry in order to decide whether it has to be dumped or not. This means that such a filter could match for some time then not match anymore because as time goes, the average event rate drops.

It is possible to use this to extract lists of IP addresses abusing the service, in order to monitor them or even blacklist them in a firewall.

  Example :
        $ echo "show table http_proxy data.gpc0 gt 0" \
          | socat stdio /tmp/sock1 \
          | fgrep 'key=' | cut -d' ' -f2 | cut -d= -f2 > abusers-ip.txt
          ( or | awk '/key/{ print a[split($2,a,"=")]; }' )

show tasks
Dumps the number of tasks currently in the run queue, with the number of occurrences for each function, and their average latency when it's known (for pure tasks with task profiling enabled). The dump is a snapshot of the instant it's done, and there may be variations depending on what tasks are left in the queue at the moment it happens, especially in mono-thread mode as there's less chance that I/Os can refill the queue (unless the queue is full). This command takes exclusive access to the process and can cause minor but measurable latencies when issued on a highly loaded process, so it must not be abused by monitoring bots.

show threads
Dumps some internal states and structures for each thread, that may be useful to help developers understand a problem. The output tries to be readable by showing one block per thread. When haproxy is built with USE_THREAD_DUMP=1, an advanced dump mechanism involving thread signals is used so that each thread can dump its own state in turn. Without this option, the thread processing the command shows all its details but the other ones are less detailed. A star ('*') is displayed in front of the thread handling the command. A right angle bracket ('>') may also be displayed in front of threads which didn't make any progress since the last invocation of this command, indicating a bug in the code which must absolutely be reported. When this happens between two threads it usually indicates a deadlock. If a thread is alone, it's a different bug like a corrupted list. In all cases the process is not fully functional anymore and needs to be restarted.

The output format is purposely not documented so that it can easily evolve as new needs are identified, without having to maintain any form of backwards compatibility, and just like with "show activity", the values are meaningless without the code at hand.

show tls-keys [id|*]
Dump all loaded TLS ticket key references. The TLS ticket key reference ID and the file from which the keys have been loaded are shown. Both of those can be used to update the TLS keys using "set ssl tls-key". If an ID is specified as a parameter, the keys of this reference are dumped; with '*', every key from every reference is dumped.
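
Example (the socket path is illustrative) :

    $ echo "show tls-keys" | socat stdio /tmp/sock1
    $ echo "show tls-keys *" | socat stdio /tmp/sock1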

show schema json
Dump the schema used for the output of "show info json" and "show stat json".

The output contains no extra whitespace in order to reduce the volume of output. For human consumption passing the output through a pretty printer may be helpful. Example :

  $ echo "show schema json" | socat /var/run/haproxy.sock stdio | \
    python -m json.tool

The schema follows "JSON Schema" (json-schema.org) and accordingly verifiers may be used to verify the output of "show info json" and "show stat json" against the schema.

show trace [<source>]
Show the current trace status. For each source a line is displayed with a single-character status indicating if the trace is stopped, waiting, or running. The output sink used by the trace is indicated (or "none" if none was set), as well as the number of dropped events in this sink, followed by a brief description of the source. If a source name is specified, a detailed list of all events supported by the source is displayed, with their status for each action (report, start, pause, stop), indicated by a "+" if they are enabled, or a "-" otherwise. All these events are independent and an event might trigger a start without being reported and conversely.
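
Example (the "h2" source is an assumption; available sources depend on the build) :

    $ echo "show trace h2" | socat stdio /tmp/sock1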

show version
Show the version of the current HAProxy process. This is available from master and workers CLI.

  Example:

      $ echo "show version" | socat /var/run/haproxy.sock stdio
      2.4.9

      $ echo "show version" | socat /var/run/haproxy-master.sock stdio
      2.5.0

shutdown frontend <frontend>
Completely delete the specified frontend. All the ports it was bound to will be released. It will not be possible to enable the frontend anymore after this operation. This is intended to be used in environments where stopping a proxy is not even imaginable but a misconfigured proxy must be fixed. That way it's possible to release the port and bind it into another process to restore operations. The frontend will not appear at all on the stats page once it is terminated.

The frontend may be specified either by its name or by its numeric ID, prefixed with a sharp ('#').

This command is restricted and can only be issued on sockets configured for level "admin".
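
Example (the frontend name and socket path are illustrative) :

    $ echo "shutdown frontend fe-broken" | socat stdio /tmp/sock1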

shutdown session <id>
Immediately terminate the session matching the specified session identifier. This identifier is the first field at the beginning of the lines in the dumps of "show sess" (it corresponds to the session pointer). This can be used to terminate a long-running session without waiting for a timeout or when an endless transfer is ongoing. Such terminated sessions are reported with a 'K' flag in the logs.

shutdown sessions server <backend>/<server>
Immediately terminate all the sessions attached to the specified server. This can be used to terminate long-running sessions after a server is put into maintenance mode, for instance. Such terminated sessions are reported with a 'K' flag in the logs.
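
For example, after putting a server into maintenance (the names and socket path are illustrative) :

    $ echo "shutdown sessions server bk_app/srv1" | socat stdio /tmp/sock1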

trace
The "trace" command alone lists the trace sources, their current status, and their brief descriptions. It is only meant as a menu to enter next levels, see other "trace" commands below.

trace 0
Immediately stops all traces. This is made to be used as a quick solution to terminate a debugging session or as an emergency action to be used in case complex traces were enabled on multiple sources and impact the service.

trace <source> event [ [+|-|!]<name> ]
Without argument, this will list all the events supported by the designated source. They are prefixed with a "-" if they are not enabled, or a "+" if they are enabled. It is important to note that a single trace may be labelled with multiple events, and as long as any of the enabled events matches one of the events labelled on the trace, the event will be passed to the trace subsystem. For example, receiving an HTTP/2 frame of type HEADERS may trigger a frame event and a stream event since the frame creates a new stream. If either the frame event or the stream event are enabled for this source, the frame will be passed to the trace framework.

With an argument, it is possible to toggle the state of each event and individually enable or disable them. Two special keywords are supported, "none", which matches no event, and is used to disable all events at once, and "any" which matches all events, and is used to enable all events at once. Other events are specific to the event source. It is possible to enable one event by specifying its name, optionally prefixed with '+' for better readability. It is possible to disable one event by specifying its name prefixed by a '-' or a '!'.

One way to completely disable a trace source is to pass "event none", and this source will instantly be totally ignored.

trace <source> level [<level>]
Without argument, this will list all trace levels for this source, and the current one will be indicated by a star ('*') prepended in front of it. With an argument, this will change the trace level to the specified level. Detail levels are a form of filters that are applied before reporting the events. These filters are used to selectively include or exclude events depending on their level of importance. For example a developer might need to know precisely where in the code an HTTP header was considered invalid while the end user may not even care about this header's validity at all. There are currently 5 distinct levels for a trace :

  user       this will report information that is suitable for use by a regular haproxy user who wants to observe his traffic. Typically some HTTP requests and responses will be reported without much detail. Most sources will set this as the default level to ease operations.

  proto      in addition to what is reported at the "user" level, it also displays protocol-level updates. This can for example be the frame types or HTTP headers after decoding.

  state      in addition to what is reported at the "proto" level, it will also display state transitions (or failed transitions) which happen in parsers, so this will show attempts to perform an operation while the "proto" level only shows the final operation.

  data       in addition to what is reported at the "state" level, it will also include data transfers between the various layers.

  developer  it reports everything available, which can include advanced information such as "breaking out of this loop" that is only relevant to a developer trying to understand a bug that only happens once in a while in the field. Function names are only reported at this level.

It is highly recommended to always use the "user" level only and switch to other levels only if instructed to do so by a developer. Also it is a good idea to first configure the events before switching to higher levels, as it may save from dumping many lines if no filter is applied.

trace <source> lock [criterion]
Without argument, this will list all the criteria supported by this source for lock-on processing, and display the current choice by a star ('*') in front of it. Lock-on means that the source will focus on the first matching event and only stick to the criterion which triggered this event, and ignore all other ones until the trace stops. This allows for example to take a trace on a single connection or on a single stream. The following criteria are supported by some traces, though not necessarily all, since some of them might not be available to the source :

  backend      lock on the backend that started the trace
  connection   lock on the connection that started the trace
  frontend     lock on the frontend that started the trace
  listener     lock on the listener that started the trace
  nothing      do not lock on anything
  server       lock on the server that started the trace
  session      lock on the session that started the trace
  thread       lock on the thread that started the trace

In addition to this, each source may provide up to 4 specific criteria such as internal states or connection IDs. For example in HTTP/2 it is possible to lock on the H2 stream and ignore other streams once a trace starts.

When a criterion is passed in argument, this one is used instead of the other ones and any existing tracking is immediately terminated so that it can restart with the new criterion. The special keyword "nothing" is supported by all sources to permanently disable tracking.

trace <source> { pause | start | stop } [ [+|-|!]event]
Without argument, this will list the events enabled to automatically pause, start, or stop a trace for this source. These events are specific to each trace source. With an argument, this will either enable the event for the specified action (if optionally prefixed by a '+') or disable it (if prefixed by a '-' or '!'). The special keyword "now" is not an event and requests to take the action immediately. The keywords "none" and "any" are supported just like in "trace event".

The 3 supported actions are respectively "pause", "start" and "stop". The "pause" action enumerates events which will cause a running trace to stop and wait for a new start event to restart it. The "start" action enumerates the events which switch the trace into the waiting mode until one of the start events appears. And the "stop" action enumerates the events which definitely stop the trace until it is manually enabled again. In practice it makes sense to manually start a trace using "start now" without caring about events, and to stop it using "stop now". In order to capture more subtle event sequences, setting "start" to a normal event (like receiving an HTTP request) and "stop" to a very rare event like emitting a certain error, will ensure that the last captured events will match the desired criteria. And the pause event is useful to detect the end of a sequence, disable the lock-on and wait for another opportunity to take a capture. In this case it can make sense to enable lock-on to spot only one specific criterion (e.g. a stream), and have "start" set to anything that starts this criterion (e.g. all events which create a stream), "stop" set to the expected anomaly, and "pause" to anything that ends that criterion (e.g. any end of stream event). In this case the trace log will contain complete sequences of perfectly clean series affecting a single object, until the last sequence containing everything from the beginning to the anomaly.
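
For example, a simple manual capture matching the common case above would look like this (a sketch; event names accepted besides "now" are those listed by the bare command) :

    > trace h2 start now
    [... reproduce the traffic to observe ...]
    > trace h2 stop now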

trace <source> sink [<sink>]
Without argument, this will list all event sinks available for this source, and the currently configured one will have a star ('*') prepended in front of it. Sink "none" is always available and means that all events are simply dropped, though their processing is not ignored (e.g. lock-on does occur). Other sinks are available depending on configuration and build options, but typically "stdout" and "stderr" will be usable in debug mode, and in-memory ring buffers should be available as well. When a name is specified, the sink instantly changes for the specified source. Events are not changed during a sink change. In the worst case some may be lost if an invalid sink is used (or "none"), but operations do continue to a different destination.
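
A typical sequence (assuming the default in-memory ring "buf0" is available; the bare command lists what really exists on your build) directs events to the ring and reads them back with "show events" :

    > trace h2 sink              # list available sinks, current marked '*'
    > trace h2 sink buf0         # send events to the in-memory ring
    > show events buf0           # dump the ring contents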

trace <source> verbosity [<level>]
Without argument, this will list all verbosity levels for this source, and the current one will be indicated by a star ('*') prepended in front of it. With an argument, this will change the verbosity level to the specified one.

Verbosity levels indicate how far the trace decoder should go to provide detailed information. It depends on the trace source, since some sources will not even provide a specific decoder. Level "quiet" is always available and disables any decoding. It can be useful when trying to figure what's happening before trying to understand the details, since it will have a very low impact on performance and trace size. When no verbosity levels are declared by a source, level "default" is available and will cause a decoder to be called when specified in the traces. It is an opportunistic decoding. When the source declares some verbosity levels, these ones are listed with a description of what they correspond to. In this case the trace decoder provided by the source will be as accurate as possible based on the information available at the trace point. The first level above "quiet" is set by default.
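
For example ("quiet" is always available; other names depend on the source) :

    > trace h2 verbosity         # list levels, current marked '*'
    > trace h2 verbosity quiet   # disable decoding to minimize overhead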

9.4. Master CLI

The master CLI is a socket bound to the master process in master-worker mode. This CLI gives access to the unix socket commands in every running or leaving process and allows a basic supervision of those processes.

The master CLI is configurable only from the haproxy program arguments with the -S option. This option also takes bind options separated by commas.

Example:

   # haproxy -W -S 127.0.0.1:1234 -f test1.cfg
   # haproxy -Ws -S /tmp/master-socket,uid,1000,gid,1000,mode,600 -f test1.cfg
   # haproxy -W -S /tmp/master-socket,level,user -f test1.cfg

9.4.1. Master CLI commands

@<[!]pid>
The master CLI uses a special prefix notation to access the multiple processes. This notation is easily identifiable as it begins with a @.

A @ prefix can be followed by a relative process number or by an exclamation point and a PID (e.g. @1 or @!1271). A @ alone can be used to specify the master. Leaving processes are only accessible by their PID, as relative process numbers are only usable with the current processes.

  Examples:

    $ socat /var/run/haproxy-master.sock readline
    prompt
    master> @1 show info; @2 show info
    [...]
    Process_num: 1
    Pid: 1271
    [...]
    Process_num: 2
    Pid: 1272
    [...]
    master>

    $ echo '@!1271 show info; @!1272 show info' | socat /var/run/haproxy-master.sock -
    [...]

A prefix can also be used as a command, which will send every subsequent command to the specified process.

  Examples:

    $ socat /var/run/haproxy-master.sock readline
    prompt
    master> @1
    1271> show info
    [...]
    1271> show stat
    [...]
    1271> @
    master>

    $ echo '@1; show info; show stat; @2; show info; show stat' | socat /var/run/haproxy-master.sock -
    [...]

expert-mode [on|off]
This command activates the "expert-mode" for every worker accessed from the master CLI. Combined with "mcli-debug-mode" it also activates the command on the master. Display the flag "e" in the master CLI prompt.

See also "expert-mode" in Section 9.3 and "mcli-debug-mode" in 9.4.1.

experimental-mode [on|off]
This command activates the "experimental-mode" for every worker accessed from the master CLI. Combined with "mcli-debug-mode" it also activates the command on the master. Display the flag "x" in the master CLI prompt.

See also "experimental-mode" in Section 9.3 and "mcli-debug-mode" in 9.4.1.

mcli-debug-mode [on|off]
This keyword enables a special mode in the master CLI which makes every keyword that was meant for a worker CLI available on the master CLI, allowing the master process to be debugged. Once activated, the newly available keywords can be listed with "help". Combined with "experimental-mode" or "expert-mode" it enables even more keywords. It displays the flag "d" in the master CLI prompt.

prompt
When the prompt is enabled (via the "prompt" command), the context the CLI is working on is displayed in the prompt. The master is identified by the "master" string, and other processes are identified with their PID. In case the last reload failed, the master prompt will be changed to "master[ReloadFailed]>" so that it becomes visible that the process is still running on the previous configuration and that the new configuration is not operational.

The prompt of the master CLI is able to display several flags indicating the enabled modes: "d" for mcli-debug-mode, "e" for expert-mode, and "x" for experimental-mode.

  Example:
     $ socat /var/run/haproxy-master.sock -
     prompt
     master> expert-mode on
     master(e)> experimental-mode on
     master(xe)> mcli-debug-mode on
     master(xed)> @1
     95191(xed)>

reload
You can also reload the HAProxy master process with the "reload" command which does the same as a `kill -USR2` on the master process, provided that the user has at least "operator" or "admin" privileges.

  Example:

    $ echo "reload" | socat /var/run/haproxy-master.sock stdin

Note that a reload will close the connection to the master CLI.

show proc
The master CLI introduces a 'show proc' command to supervise the processes.

  Example:

    $ echo 'show proc' | socat /var/run/haproxy-master.sock -
    #<PID>          <type>          <reloads>       <uptime>        <version>
    1162            master          5 [failed: 0]   0d00h02m07s     2.5-dev13
    # workers
    1271            worker          1               0d00h00m00s     2.5-dev13
    # old workers
    1233            worker          3               0d00h00m43s     2.0-dev3-6019f6-289
    # programs
    1244            foo             0               0d00h00m00s     -
    1255            bar             0               0d00h00m00s     -

In this example, the master has been reloaded 5 times but one of the old workers is still running and has survived 3 reloads. You could access the CLI of this worker to understand what's going on.


10. Tricks for easier configuration management

It is very common that two HAProxy nodes constituting a cluster share exactly the same configuration modulo a few addresses. Instead of having to maintain a duplicate configuration for each node, which will inevitably diverge, it is possible to include environment variables in the configuration. Thus multiple configurations may share the exact same file with only a few different system-wide environment variables. This started in version 1.5 where only addresses were allowed to include environment variables, and 1.6 goes further by supporting environment variables everywhere. The syntax is the same as in the UNIX shell: a variable starts with a dollar sign ('$'), followed by an opening curly brace ('{'), then the variable name followed by the closing brace ('}'). Except for addresses, environment variables are only interpreted in arguments surrounded with double quotes (this was necessary not to break existing setups using regular expressions involving the dollar symbol).

Environment variables also make it convenient to write configurations which are expected to work on various sites where only the addresses change. They also make it possible to keep passwords out of some configs. Example below, where the file "site1.env" is sourced by the init script upon startup :

  $ cat site1.env
  LISTEN=192.168.1.1
  CACHE_PFX=192.168.11
  SERVER_PFX=192.168.22
  LOGGER=192.168.33.1
  STATSLP=admin:pa$$w0rd
  ABUSERS=/etc/haproxy/abuse.lst
  TIMEOUT=10s

  $ cat haproxy.cfg
  global
      log "${LOGGER}:514" local0

  defaults
      mode http
      timeout client "${TIMEOUT}"
      timeout server "${TIMEOUT}"
      timeout connect 5s

  frontend public
      bind "${LISTEN}:80"
      http-request reject if { src -f "${ABUSERS}" }
      stats uri /stats
      stats auth "${STATSLP}"
      use_backend cache if { path_end .jpg .css .ico }
      default_backend server

  backend cache
      server cache1 "${CACHE_PFX}.1:18080" check
      server cache2 "${CACHE_PFX}.2:18080" check

  backend server
      server server1 "${SERVER_PFX}.1:8080" check
      server server2 "${SERVER_PFX}.2:8080" check
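
One possible way for an init script or wrapper to export these variables before validating and starting haproxy (paths are illustrative) :

  $ set -a; . /etc/haproxy/site1.env; set +a
  $ haproxy -c -f /etc/haproxy/haproxy.cfg   # check the config with variables applied
  $ haproxy -W -f /etc/haproxy/haproxy.cfg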

11. Well-known traps to avoid

Once in a while, someone reports that after a system reboot, the haproxy service wasn't started, and that once they start it by hand it works. Most often, these people are running a clustered IP address mechanism such as keepalived, to assign the service IP address to the master node only, and while it used to work when they used to bind haproxy to address 0.0.0.0, it stopped working after they bound it to the virtual IP address. What happens here is that when the service starts, the virtual IP address is not yet owned by the local node, so when HAProxy wants to bind to it, the system rejects this because it is not a local IP address. The fix doesn't consist in delaying the haproxy service startup (since it wouldn't stand a restart), but instead in properly configuring the system to allow binding to non-local addresses. This is easily done on Linux by setting the net.ipv4.ip_nonlocal_bind sysctl to 1. This is also needed in order to transparently intercept the IP traffic that passes through HAProxy for a specific target address.
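
On Linux this can be applied at run time and made persistent across reboots as follows (the file name under /etc/sysctl.d is illustrative) :

  # sysctl -w net.ipv4.ip_nonlocal_bind=1
  # echo "net.ipv4.ip_nonlocal_bind = 1" > /etc/sysctl.d/90-haproxy.conf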

Multi-process configurations involving source port ranges may apparently seem to work but they will cause some random failures under high loads because more than one process may try to use the same source port to connect to the same server, which is not possible. The system will report an error and a retry will happen, picking another port. A high value in the "retries" parameter may hide the effect to a certain extent but this also comes with increased CPU usage and processing time. Logs will also report a certain number of retries. For this reason, port ranges should be avoided in multi-process configurations.

Since HAProxy uses SO_REUSEPORT and supports having multiple independent processes bound to the same IP:port, during troubleshooting it can happen that an old process was not stopped before a new one was started. This provides absurd test results which tend to indicate that any change to the configuration is ignored. The reason is that in fact, even if the new process is started with a new configuration, the old one also gets some incoming connections and processes them, returning unexpected results. When in doubt, just stop the new process and try again. If it still works, it very likely means that an old process remains alive and has to be stopped. Linux's "netstat -lntp" is of good help here.
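
For example (PIDs and addresses are illustrative), two haproxy processes listening on the same port immediately reveal the stale one :

  # netstat -lntp | grep ':80 '
  tcp   0   0 0.0.0.0:80   0.0.0.0:*   LISTEN   987/haproxy
  tcp   0   0 0.0.0.0:80   0.0.0.0:*   LISTEN   1271/haproxy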

When adding entries to an ACL from the command line (eg: when blacklisting a source address), it is important to keep in mind that these entries are not synchronized to the file and that if someone reloads the configuration, these updates will be lost. While this is often the desired effect (for blacklisting) it may not necessarily match expectations when the change was made as a fix for a problem. See the "add acl" action of the CLI interface.
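
For example, reusing the abuse list from the configuration shown in section 10 (the socket path is an assumption), the entry must be added both at run time and in the file if it is expected to survive a reload :

  $ echo "add acl /etc/haproxy/abuse.lst 192.0.2.17" | socat /var/run/haproxy.stat -
  $ echo "192.0.2.17" >> /etc/haproxy/abuse.lst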


12. Debugging and performance issues

When HAProxy is started with the "-d" option, it will stay in the foreground and will print one line per event, such as an incoming connection, the end of a connection, and each request or response header line seen. This debug output is emitted before the contents are processed, so it doesn't reflect local modifications. The main use is to show the request and response without having to run a network sniffer. The output is less readable when multiple connections are handled in parallel, though the "debug2ansi" and "debug2html" scripts found in the examples/ directory definitely help here by coloring the output.

If a request or response is rejected because HAProxy finds it is malformed, the best thing to do is to connect to the CLI and issue "show errors", which will report the last captured faulty request and response for each frontend and backend, with all the necessary information to indicate precisely the first character of the input stream that was rejected. This is sometimes needed to prove to customers or to developers that a bug is present in their code. In this case it is often possible to relax the checks (but still keep the captures) using "option accept-invalid-http-request" or its equivalent for responses coming from the server "option accept-invalid-http-response". Please see the configuration manual for more details.

Example :

  > show errors
  Total events captured on [13/Oct/2015:13:43:47.169] : 1

  [13/Oct/2015:13:43:40.918] frontend HAProxyLocalStats (#2): invalid request
    backend <NONE> (#-1), server <NONE> (#-1), event #0
    src 127.0.0.1:51981, session #0, session flags 0x00000080
    HTTP msg state 26, msg flags 0x00000000, tx flags 0x00000000
    HTTP chunk len 0 bytes, HTTP body len 0 bytes
    buffer flags 0x00808002, out 0 bytes, total 31 bytes
    pending 31 bytes, wrapping at 8040, error at position 13:

    00000  GET /invalid request HTTP/1.1\r\n

The output of "show info" on the CLI provides a number of useful information regarding the maximum connection rate ever reached, maximum SSL key rate ever reached, and in general all information which can help to explain temporary issues regarding CPU or memory usage. Example :

  > show info
  Name: HAProxy
  Version: 1.6-dev7-e32d18-17
  Release_date: 2015/10/12
  Nbproc: 1
  Process_num: 1
  Pid: 7949
  Uptime: 0d 0h02m39s
  Uptime_sec: 159
  Memmax_MB: 0
  Ulimit-n: 120032
  Maxsock: 120032
  Maxconn: 60000
  Hard_maxconn: 60000
  CurrConns: 0
  CumConns: 3
  CumReq: 3
  MaxSslConns: 0
  CurrSslConns: 0
  CumSslConns: 0
  Maxpipes: 0
  PipesUsed: 0
  PipesFree: 0
  ConnRate: 0
  ConnRateLimit: 0
  MaxConnRate: 1
  SessRate: 0
  SessRateLimit: 0
  MaxSessRate: 1
  SslRate: 0
  SslRateLimit: 0
  MaxSslRate: 0
  SslFrontendKeyRate: 0
  SslFrontendMaxKeyRate: 0
  SslFrontendSessionReuse_pct: 0
  SslBackendKeyRate: 0
  SslBackendMaxKeyRate: 0
  SslCacheLookups: 0
  SslCacheMisses: 0
  CompressBpsIn: 0
  CompressBpsOut: 0
  CompressBpsRateLim: 0
  ZlibMemUsage: 0
  MaxZlibMemUsage: 0
  Tasks: 5
  Run_queue: 1
  Idle_pct: 100
  node: wtap
  description:

When an issue seems to randomly appear on a new version of HAProxy (eg: every second request is aborted, occasional crash, etc), it is worth trying to enable memory poisoning so that each call to malloc() is immediately followed by the filling of the memory area with a configurable byte. By default this byte is 0x50 (ASCII for 'P'), but any other byte can be used, including zero (which will have the same effect as a calloc() and which may make issues disappear). Memory poisoning is enabled on the command line using the "-dM" option. It slightly hurts performance and is not recommended for use in production. If an issue happens all the time with it or never happens when poisoning uses byte zero, it clearly means you've found a bug and you definitely need to report it. Otherwise, if there's no clear change, the problem is likely unrelated to memory allocation.
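
For example (a sketch; the poisoning byte is passed immediately after "-dM") :

  # haproxy -d -dM -f /etc/haproxy/haproxy.cfg      # default byte 0x50 ('P')
  # haproxy -d -dM0x00 -f /etc/haproxy/haproxy.cfg  # zero, like calloc()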

When debugging some latency issues, it is important to use both strace and tcpdump on the local machine, and another tcpdump on the remote system. The reason for this is that there are delays everywhere in the processing chain and it is important to know which one is causing latency to know where to act. In practice, the local tcpdump will indicate when the input data come in. Strace will indicate when haproxy receives these data (using recv/recvfrom). Warning, openssl uses read()/write() syscalls instead of recv()/send(). Strace will also show when haproxy sends the data, and tcpdump will show when the system sends these data to the interface. Then the external tcpdump will show when the data sent are really received (since the local one only shows when the packets are queued). The benefit of sniffing on the local system is that strace and tcpdump will use the same reference clock. Strace should be used with "-tts200" to get complete timestamps and report large enough chunks of data to read them. Tcpdump should be used with "-nvvttSs0" to report full packets, real sequence numbers and complete timestamps.
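
A possible invocation on the local machine (the interface, the capture filter and the assumption of a single haproxy process are illustrative) :

  # strace -tts200 -o haproxy.strace -p $(pidof haproxy)
  # tcpdump -nvvttSs0 -i eth0 host 192.0.2.10 and port 80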

In practice, received data are almost always immediately received by haproxy (unless the machine has a saturated CPU or these data are invalid and not delivered). If these data are received but not sent, it generally is because the output buffer is saturated (ie: recipient doesn't consume the data fast enough). This can be confirmed by seeing that the polling doesn't notify of the ability to write on the output file descriptor for some time (it's often easier to spot in the strace output when the data finally leave and then roll back to see when the write event was notified). It generally matches an ACK received from the recipient, and detected by tcpdump. Once the data are sent, they may spend some time in the system doing nothing. Here again, the TCP congestion window may be limited and not allow these data to leave, waiting for an ACK to open the window. If the traffic is idle and the data take 40 ms or 200 ms to leave, it's a different issue (which is not an issue), it's the fact that the Nagle algorithm prevents empty packets from leaving immediately, in hope that they will be merged with subsequent data. HAProxy automatically disables Nagle in pure TCP mode and in tunnels. However it definitely remains enabled when forwarding an HTTP body (and this contributes to the performance improvement there by reducing the number of packets). Some HTTP non-compliant applications may be sensitive to the latency when delivering incomplete HTTP response messages. In this case you will have to enable "option http-no-delay" to disable Nagle in order to work around their design, keeping in mind that any other proxy in the chain may similarly be impacted. If tcpdump reports that data leave immediately but the other end doesn't see them quickly, it can mean there is a congested WAN link, a congested LAN with flow control enabled and preventing the data from leaving, or more commonly that HAProxy is in fact running in a virtual machine and that for whatever reason the hypervisor has decided that the data didn't need to be sent immediately. In virtualized environments, latency issues are almost always caused by the virtualization layer, so in order to save time, it's worth first comparing tcpdump in the VM and on the external components. Any difference has to be credited to the hypervisor and its accompanying drivers.

When some TCP SACK segments are seen in tcpdump traces (using -vv), it always means that the side sending them has got the proof of a lost packet. While not seeing them doesn't mean there are no losses, seeing them definitely means the network is lossy. Losses are normal on a network, but at a rate where SACKs are not noticeable at the naked eye. If they appear a lot in the traces, it is worth investigating exactly what happens and where the packets are lost. HTTP doesn't cope well with TCP losses, which introduce huge latencies.

The "netstat -i" command will report statistics per interface. An interface where the Rx-Ovr counter grows indicates that the system doesn't have enough resources to receive all incoming packets and that they're lost before being processed by the network driver. Rx-Drp indicates that some received packets were lost in the network stack because the application doesn't process them fast enough. This can happen during some attacks as well. Tx-Drp means that the output queues were full and packets had to be dropped. When using TCP it should be very rare, but will possibly indicate a saturated outgoing link.


13. Security considerations

HAProxy is designed to run with very limited privileges. The standard way to use it is to isolate it into a chroot jail and to drop its privileges to a non-root user without any permissions inside this jail so that if any future vulnerability were to be discovered, its compromise would not affect the rest of the system.

In order to perform a chroot, it first needs to be started as a root user. It is pointless to build hand-made chroots to start the process there: these are painful to build, are never properly maintained and always contain way more bugs than the main file-system. And in case of compromise, the intruder can use the purposely-built file-system. Unfortunately many administrators confuse "start as root" and "run as root", resulting in the uid change being done prior to starting haproxy, and reducing the effective security restrictions.

HAProxy will need to be started as root in order to :
  - adjust the file descriptor limits
  - bind to privileged port numbers
  - bind to a specific network interface
  - transparently listen to a foreign address
  - isolate itself inside the chroot jail
  - drop to another non-privileged UID

HAProxy may require to be run as root in order to :
  - bind to an interface for outgoing connections
  - bind to privileged source ports for outgoing connections
  - transparently bind to a foreign address for outgoing connections

Most users will never need the "run as root" case. But the "start as root"
covers most usages.

A safe configuration will have :

  - a chroot statement pointing to an empty location without any access
    permissions. This can be prepared this way on the UNIX command line :

      # mkdir /var/empty && chmod 0 /var/empty || echo "Failed"

    and referenced like this in the HAProxy configuration's global section :

      chroot /var/empty

  - both a uid/user and gid/group statements in the global section :

      user haproxy
      group haproxy

  - a stats socket whose mode, uid and gid are set to match the user and/or
    group allowed to access the CLI so that nobody may access it :

      stats socket /var/run/haproxy.stat uid hatop gid hatop mode 600
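
Putting these pieces together, a minimal global section following the above recommendations could look like this (user, group and socket owner names are the ones used above) :

  global
      chroot /var/empty
      user haproxy
      group haproxy
      stats socket /var/run/haproxy.stat uid hatop gid hatop mode 600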