WARNING: If you leave ziproxy unconfigured, bots may abuse it. Unfortunately, ziproxy does not appear to restrict access to localhost only by default (as of Feb. 2014). The following post was updated to fit Debian 7 with ziproxy 3.2.0.
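To close that hole, bind the proxy to the loopback interface or whitelist your own network before starting the daemon. The directive names below are taken from the default ziproxy.conf shipped with the 3.x series; check your local /etc/ziproxy/ziproxy.conf for the exact spelling and commented defaults:

```
## Listen only on the loopback interface so the proxy is not
## reachable from the open Internet:
Address = "127.0.0.1"
Port = 8080

## Or, if LAN clients must reach it, restrict the allowed
## client address range instead (example range, adjust to your LAN):
OnlyFrom = "192.168.1.0-192.168.1.255"
```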
Ziproxy is a forwarding, non-caching, compressing HTTP proxy server targeted at traffic optimization. It is regarded as lightweight in terms of memory and processing power consumption. It works by recompressing pictures, gzipping text, and optimizing HTML/JS/CSS data. Additionally, it reduces latency through preemptive name resolution.
How to set up ziproxy with authentication, compression and latency reduction
Ziproxy is a high-performance forwarding (non-caching) HTTP proxy that gzips text and HTML files and reduces the size of images by converting them to lower-quality JPEG or JPEG 2000. It is intended to increase speed on low-speed Internet connections (mobile, dial-up, and others) and is suitable for both home and professional use. Ziproxy is fully configurable and also features transparent proxy mode, HTML/JS/CSS optimization, operation in daemon mode, a detailed access log with compression statistics, basic authentication, and more.

License: GNU General Public License (GPL)

Changes: This release fixed an issue where non-processable data (htmlopt etc.) that arrived already gzipped was loaded into memory and recompressed. Such data is now streamed directly, unmodified, which should improve latency in certain cases.
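As a rough illustration of the text-side saving ziproxy aims for, the stdlib sketch below gzips a repetitive HTML snippet and prints the size reduction. This is purely illustrative; ziproxy applies zlib compression to HTTP response bodies internally and the numbers depend entirely on the content.

```python
import gzip

# Build a repetitive HTML fragment, the kind of markup that
# compresses very well (real-world ratios vary by content).
html = b"<ul>" + b"".join(
    b"<li class='item'>entry %d</li>" % i for i in range(200)
) + b"</ul>"

compressed = gzip.compress(html)
ratio = len(compressed) / len(html)
print(f"original: {len(html)} bytes, gzipped: {len(compressed)} bytes")
print(f"ratio: {ratio:.2f}")
```

Markup like this typically shrinks to a small fraction of its original size, which is why gzipping text/HTML pays off on slow links.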
A Performance Enhancing Proxy (PEP) optimizes the overall bandwidth usage of the system with functionalities that include data caching (e.g., a web proxy server), compression, and TCP acceleration.
Traffic flow priority is no longer considered here, as this was already handled in the Shaper. Service classes that require, e.g., low latency or a low loss rate will need to be mapped onto a link with similar characteristics, and a scheduling algorithm (e.g., weighted round robin) can prioritize certain service classes.
When the original data is encrypted, e.g., in VPN tunnels, data compression will have little or no effect. Data compression works by removing statistical redundancy, but encrypted data appears completely random, with no statistical redundancy left to remove.
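The point is easy to demonstrate with the stdlib: gzip a repetitive plaintext payload and an equally sized buffer of random bytes (used here as a stand-in for ciphertext). The plaintext shrinks substantially; the random buffer does not compress at all and even grows slightly due to container overhead.

```python
import gzip
import os

# Repetitive plaintext vs. random bytes standing in for
# encrypted payload data of the same length.
text = b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n" * 100
random_like = os.urandom(len(text))

for label, payload in (("plaintext", text),
                       ("encrypted-like", random_like)):
    out = gzip.compress(payload)
    print(f"{label}: {len(payload)} -> {len(out)} bytes")
```

This is exactly the situation the TO2 compression stage faces when it sits behind (rather than in front of) the encryption step.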
Comparison of two options for the relative positioning of the Shaper and TO2, which provides the data compression functionality, considering two users with the same SLA, Alice and Bob, who simultaneously send a compressible and an incompressible data traffic flow, respectively.
When encryption hides the TCP header (e.g., IPsec VPN), TCP acceleration becomes impossible. On the other hand, when the TCP header is not encrypted but its payload is, the TO1 proxy servers will not be able to function, as they, e.g., do not know the type of application in use. Furthermore, the compression performed in TO2 will have no effect on encrypted data, since encryption has removed any statistical redundancy. We do not elaborate on these aspects, but they are briefly mentioned here as they should be reckoned with when the system is deployed in the field.
Concerning provision of QoS in the TWCS, the functionality is logically split into the Marker, SLA Enforcer, Shaper and Scheduler. Firstly, the Marker marks packet traffic flows with a service class and priority by using the DiffServ architecture, according to the different services and their traffic flow characteristics. Next, the SLA Enforcer ensures that all traffic flows belonging to the same SLA comply with the SLA stipulations (e.g., maximum data rate, data volume). Then, the Shaper shapes all traffic flows to the available capacity on the wireless T2W link by dropping packets of traffic flows, with respect to the relative priority of the different traffic flows. Finally, the Scheduler needs to schedule all traffic flows on an appropriate link, considering the service class of each traffic flow (e.g., the low-latency requirement of Voice-over-IP (VoIP)). A 'backpressure' mechanism, based on queue occupation, is suggested for signaling the available capacity from the Scheduler to the Shaper.
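To make the Scheduler stage concrete, here is a toy weighted-round-robin pass over per-service-class queues. The class names and weights are illustrative choices for this sketch, not values from the design above; a real scheduler would also account for packet sizes and link state.

```python
from collections import deque

# One FIFO queue per service class (insertion order of the dict
# determines the serving order within a round, Python 3.7+).
queues = {
    "voip": deque(f"v{i}" for i in range(5)),  # low-latency class
    "web":  deque(f"w{i}" for i in range(5)),
    "bulk": deque(f"b{i}" for i in range(5)),
}
weights = {"voip": 3, "web": 2, "bulk": 1}

def wrr_round(queues, weights):
    """Serve up to `weight` packets from each class per round."""
    served = []
    for cls, q in queues.items():
        for _ in range(weights[cls]):
            if q:
                served.append(q.popleft())
    return served

# Each round, voip gets up to 3 slots, web 2, bulk 1.
print(wrr_round(queues, weights))
```

With these weights, the VoIP class drains three times as fast as bulk traffic, approximating the priority that a low-latency service class needs without starving the others.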
For overall bandwidth optimization, a PEP is inserted in the design, which consists of different modules: TOs and an Accelerator. The TOs include caching proxies and data compression. The Accelerator locally intercepts TCP connections to mitigate performance degradation over high-latency links. We propose a distributed design for the Accelerator (one component on board and one at the wayside), in order to avoid multiple competing TCP mechanisms over the same link. To maintain SLAs and fairness among the onboard devices, we have furthermore paid close attention to the correct ordering of the different QoS and PEP components.