auto-import squid-2.4.STABLE6-6.7.3 from squid-2.4.STABLE6-6.7.3.src.rpm

cvsdist 2004-09-09 12:40:04 +00:00
parent 973545ec5c
commit 9ea12eb339
4 changed files with 220 additions and 43 deletions

@ -1 +1,2 @@
msntauth-v2.0.3-squid.1.tar.gz
squid-2.4.STABLE6-src.tar.gz

FAQ.sgml

@ -15,8 +15,10 @@ Object Cache software.
<p>
You can download the FAQ as
<url url="FAQ.ps.gz" name="compressed Postscript">, and
<url url="FAQ.txt" name="plain text">.
<url url="FAQ.ps.gz" name="compressed Postscript">,
<url url="FAQ.txt" name="plain text">,
<url url="FAQ.sgml" name="linuxdoc SGML source"> or as a
<url url="FAQ.tar.gz" name="compressed tar of HTML">.
</p>
<!-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -->
@ -450,6 +452,8 @@ The following people have made contributions to this document:
<url url="mailto:chris@senet.com.au" name="Chris Foote">
<item>
<url url="mailto:elkner@wotan.cs.Uni-Magdeburg.DE" name="Jens Elkner">
<item>
<url url="mailto:simon@mtds.com" name="Simon White">
</itemize>
<P>
Please send corrections, updates, and comments to:
@ -583,6 +587,11 @@ Squid binaries for
<url url="ftp://ftp.netbsd.org/pub/NetBSD/packages/pkgsrc/www/squid/README.html"
name="NetBSD on everything">
<p>
Gurkan Sengun has some
<url url="http://www.linuks.mine.nu/solaris/" name="Sparc/Solaris packages">
available.
<sect1>How do I apply a patch or a diff?
<P>
You need the <tt/patch/ program. You should probably duplicate the
@ -659,7 +668,6 @@ Some options which are used often include:
--enable-kill-parent-hack
Kill parent on shutdown
--enable-snmp Enable SNMP monitoring
--enable-time-hack Update internal timestamp only once per second
--enable-cachemgr-hostname[=hostname]
Make cachemgr.cgi default to this host
--enable-arp-acl Enable use of ARP ACL lists (ether address)
@ -1421,23 +1429,22 @@ behind a firewall or if there is only one parent.
<P>
You can use the <tt/never_direct/ access list in
<em/squid.conf/ to specify which requests must be forwarded to
your parent cache outside the firewall. For example, if Squid
can connect directly to all servers that end with <em/mydomain.com/, but
your parent cache outside the firewall, and the <tt/always_direct/ access list
to specify which requests must not be forwarded. For example, if Squid
must connect directly to all servers that end with <em/mydomain.com/, but
must use the parent for all others, you would write:
<verb>
acl INSIDE dstdomain mydomain.com
never_direct deny INSIDE
acl INSIDE dstdomain .mydomain.com
always_direct allow INSIDE
never_direct allow all
</verb>
Note that the outside domains will not match the <em/INSIDE/
acl. When there are no matches, the default action is
the opposite of the last action. It's as if there is
an implicit <em/never_direct allow all/ as the final rule.
<p>
You could also specify internal servers by IP address
<verb>
acl INSIDE_IP dst 1.2.3.4/24
never_direct deny INSIDE
always_direct allow INSIDE
never_direct allow all
</verb>
Note, however, that when you use IP addresses, Squid must
perform a DNS lookup to convert URL hostnames to an
@ -2204,7 +2211,7 @@ easy for someone to see or grab your password.
by <url url="mailto:mark@rts.com.au" name="Mark Reynolds">
<P>
You may like to start by reading the
<url url="http://www.ietf.org/internet-drafts/draft-ietf-wrec-wpad-01.txt" name="Internet-Draft">
<url url="http://www.web-cache.com/Writings/Internet-Drafts/draft-ietf-wrec-wpad-01.txt" name="Expired Internet-Draft">
that describes WPAD.
<P>
@ -2352,15 +2359,6 @@ There are a few basic points common to all log files. The time stamps
logged into the log files are usually UTC seconds unless stated otherwise.
The initial time stamp usually contains a millisecond extension.
<P>
The frequent time lookups on busy caches may have a performance impact on
some systems. The compile time configuration option
<em/--enable-time-hack/ makes Squid only look up a new time in one
second intervals. The implementation uses Unix's <em/alarm()/
functionality. Note that the resolution of logged times is much coarser
afterwards, and may not suffice for some log file analysis programs.
Usually there is no need to fiddle with the timestamp hack.
<sect1><em/squid.out/
<P>
@ -2722,9 +2720,10 @@ The hierarchy information consists of three items:
forwarding it to a peer, or going straight to the source. Refer to
section <ref id="hier-codes"> for details on hierarchy codes and
removed hierarchy codes.
<item>The name of the host the object was requested from. This host may
be the origin site, a parent or any other peer. Also note that the
hostname may be numerical.
<item>The IP address or hostname where the request (if a miss) was forwarded.
For requests sent to origin servers, this is the origin server's IP address.
For requests sent to a neighbor cache, this is the neighbor's hostname.
NOTE: older versions of Squid would put the origin server hostname here.
</enum>
<tag/type/
@ -2953,8 +2952,8 @@ WEBDAV'' extensions.
PROPFIND rfc2518 ? retrieve properties of an object.
PROPPATCH rfc2518 ? change properties of an object.
MKCOL rfc2518 never create a new collection.
MOVE rfc2518 never create a duplicate of src in dst.
COPY rfc2518 never atomically move src to dst.
COPY rfc2518 never create a duplicate of src in dst.
MOVE rfc2518 never atomically move src to dst.
LOCK rfc2518 never lock an object against modifications.
UNLOCK rfc2518 never unlock an object.
</verb>
@ -3175,6 +3174,12 @@ only keep up to <em/logfile_rotate/ versions of each log file.
The logfile rotation procedure also writes a clean <em/swap.state/
file, but it does not leave numbered versions of the old files.
<p>
If you set <em/logfile_rotate/ to 0, Squid simply closes and then
re-opens the logs. This allows third-party logfile management systems,
such as <em/newsyslog/, to maintain the log files.
<P>
To rotate Squid's logs, simply use this command:
<verb>
@ -3216,6 +3221,12 @@ You need to <em/rotate/ your log files with a cron job. For example:
0 0 * * * /usr/local/squid/bin/squid -k rotate
</verb>
<sect1>I want to use another tool to maintain the log files.
<p>
If you set <em/logfile_rotate/ to 0, Squid simply closes and then
re-opens the logs. This allows third-party logfile management systems,
such as <em/newsyslog/, to maintain the log files.
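<p>
As a minimal sketch (the paths here are only illustrative and depend on
your installation), you would turn off Squid's own rotation in
<em/squid.conf/ and let the external tool rename the files before asking
Squid to re-open them:
<verb>
# in squid.conf
logfile_rotate 0

# run by your log management tool (e.g. from cron)
mv /usr/local/squid/logs/access.log /usr/local/squid/logs/access.log.0
/usr/local/squid/bin/squid -k rotate   # with logfile_rotate 0 this just re-opens the logs
</verb>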
<sect1>Managing log files
<P>
@ -3791,6 +3802,7 @@ and port numbers together (see the squid.conf comments).
<!-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -->
<sect>Memory
<label id="memorye">
<sect1>Why does Squid use so much memory!?
@ -3931,12 +3943,14 @@ your incoming request load. Reducing <em/cache_mem/ will usually
also reduce the process size, but not necessarily, and there are
other ways to reduce Squid's memory usage (see below).
<P>
See also <ref id="how-much-ram" name="How much memory do I need in my Squid server?">.
<sect1>How do I analyze memory usage from the cache manager output?
<label id="analyze-memory-usage">
<P>
<P>
<it>
Note: This information is specific to Squid-1.1 versions
</it>
@ -4303,6 +4317,34 @@ script:
% ./configure --enable-dlmalloc ...
</verb>
<sect1>How much memory do I need in my Squid server?
<label id="how-much-ram">
<P>
As a rule of thumb, Squid uses approximately 10 MB of RAM per GB of the
total of all cache_dirs (more on 64 bit servers such as Alpha), plus your
cache_mem setting and about an additional 10-20MB. It is recommended to
have at least twice this amount of physical RAM available on your Squid
server. For a more detailed discussion on Squid's memory usage see the
sections above.
<P>
The extra RAM recommended beyond what Squid itself uses is needed by the
operating system to improve disk I/O performance, and by other applications
or services running on the server. This will be true even of a server which
runs Squid as the only TCP service, since there is a minimum level of
memory needed for process management, logging, and other OS level
routines.
<P>
If you have a low memory server, and a large disk, then you will not
necessarily be able to use all the disk space, since as the cache fills
the memory available will be insufficient, forcing Squid to swap out
memory and affecting performance. A very large cache_dir total and
insufficient physical RAM + Swap could cause Squid to stop functioning
completely. The solution for larger caches is to get more physical RAM;
allocating more to Squid via cache_mem will not help.
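<P>
As a worked example of the rule of thumb above (the numbers are purely
illustrative): a cache with 20 GB total of <em/cache_dir/ and
<em/cache_mem/ set to 64 MB needs roughly 20 x 10 MB + 64 MB + 20 MB,
or about 284 MB, for Squid itself, so such a server should have somewhere
around 600 MB of physical RAM.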
<!-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -->
<sect>The Cache Manager
@ -5097,12 +5139,12 @@ so a url containing ``Cooking'' would not be denied.
Another way is to deny access to specific servers which are known
to hold recipes. For example:
<verb>
acl Cooking2 dstdomain gourmet-chef.com
acl Cooking2 dstdomain www.gourmet-chef.com
http_access deny Cooking2
http_access allow all
</verb>
The <em/dstdomain/ means to search the hostname in the URL for the
string ``gourmet-chef.com.''
string ``www.gourmet-chef.com.''
Note that when IP addresses are used in URLs (instead of domain names),
Squid-1.1 implements relaxed access controls. If a domain name
for the IP address has been saved in Squid's ``FQDN cache,'' then
@ -5388,6 +5430,7 @@ the neighbor ACL's first in the list of <em/http_access/ lines. For example:
<P>
<itemize>
<item><url url="http://members.lycos.co.uk/njadmin" name="Jasons Staudenmayer">
<item><url url="http://web.onda.com.br/orso/" name="Pedro Lineu Orso's List">
<item><url url="http://www.hklc.com/squidblock/" name="Linux Center Hong Kong's List">
<item>
@ -5400,6 +5443,11 @@ the neighbor ACL's first in the list of <em/http_access/ lines. For example:
</itemize>
<sect1>Squid doesn't match my subdomains
<P>NOTE: Current Squid versions (as of Squid-2.4) will warn you
when this kind of configuration is used. Also the configuration here uses
the dstdomain syntax of Squid-2.1 or earlier. (2.2 and later need to
have domains prefixed by a dot.)
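<p>
For example (a minimal illustration), with Squid-2.2 or later a domain
and all of its subdomains are matched by writing the ACL with a leading
dot:
<verb>
acl EXAMPLE dstdomain .example.com
</verb>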
<P>
There is a subtle problem with domain-name based access controls
when a single ACL element has an entry that is a subdomain of
@ -6255,7 +6303,7 @@ Memory usage is a complicated problem. There are a number
of things to consider.
<P>
First, examine the Cache Manager <em/Info/ output and look at these two lines:
Then, examine the Cache Manager <em/Info/ output and look at these two lines:
<verb>
Number of HTTP requests received: 121104
Page faults with physical i/o: 16720
@ -6275,6 +6323,9 @@ If the ratio is too high, you will need to make some changes to
<ref id="lower-mem-usage" name="lower the
amount of memory Squid uses">.
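<P>
As a worked example, for the sample <em/Info/ output shown above the ratio
is 16720 / 121104, or roughly 0.14 page faults per HTTP request.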
<P>
See also <ref id="how-much-ram" name="How much memory do I need in my Squid server?">.
<sect1>WARNING: Failed to start 'dnsserver'
<P>
@ -7180,7 +7231,7 @@ them completely and only use the proxy.pac for example.
<sect1>Requests for international domain names do not work
<p>
By Henrik Nordström
By Henrik Nordstr&ouml;m
<p>
Some people have asked why requests for domain names using national
symbols as "supported" by certain domain registrars do not work
@ -7202,9 +7253,104 @@ url="http://www.ietf.org/html.charters/idn-charter.html" name="IETF idn">
working group or its <url url="http://www.i-d-n.net/" name="dedicated
page">.
<sect1>Why do I sometimes get ``Zero Sized Reply''?
<p>
This happens when Squid makes a TCP connection to an origin server, but
for some reason, the connection is closed before Squid reads any data.
Depending on various factors, Squid may be able to retry the request.
If you see the ``Zero Sized Reply'' error message, it means that Squid
was unable to retry, or that all retry attempts also failed.
<p>
What causes a connection to close prematurely? It could be a number
of things, including:
<enum>
<item>An overloaded origin server.
<item>TCP implementation/interoperability bugs.
<item>Race conditions with HTTP persistent connections.
<item>Buggy or misconfigured NAT boxes, firewalls, and load-balancers.
<item>Denial of service attacks.
</enum>
<p>
You may be able to use <em/tcpdump/ to track down and observe the
problem.
<p>
Some users believe the problem is caused by very large cookies.
One user reports that his Zero Sized Reply problem went away
when he told Internet Explorer to not accept third-party
cookies.
<p>
Here are some things you can try to reduce the occurrence of the
Zero Sized Reply error:
<enum>
<item>Delete or rename your cookie file and configure your
browser to prompt you before accepting any new cookies.
<item>Disable HTTP persistent connections with the
<em/server_persistent_connections/ and <em/client_persistent_connections/
directives.
<item>Disable any advanced TCP features on the Squid system. Disable
ECN on Linux with <tt>echo 0 &gt; /proc/sys/net/ipv4/tcp_ecn</tt>.
</enum>
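<p>
As a minimal sketch of the persistent-connection suggestion above, the two
directives can simply be turned off in <em/squid.conf/:
<verb>
server_persistent_connections off
client_persistent_connections off
</verb>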
<p>
If this error causes serious problems for you,
Squid developers would be happy to help you uncover the problem. However,
we will require high-quality debugging information from you, such as
<em/tcpdump/ output, server IP addresses, operating system versions,
and <em/access.log/ entries with full HTTP headers.
<p>
If you want to make Squid give the ``Zero Sized Reply'' error
on demand, you can use the short C program below. Simply compile and
start the program on a system that doesn't already have a server
running on port 80. Then try to connect to this fake server through
Squid:
<verb>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <assert.h>
int
main(void)
{
    struct sockaddr_in S;
    int s, t, x;
    /* listen on TCP port 80 and close every connection without sending a reply */
    s = socket(PF_INET, SOCK_STREAM, 0);
    assert(s > 0);
    memset(&amp;S, '\0', sizeof(S));
    S.sin_family = AF_INET;
    S.sin_port = htons(80);
    x = bind(s, (struct sockaddr *) &amp;S, sizeof(S));
    assert(x == 0);
    x = listen(s, 10);
    assert(x == 0);
    while (1) {
        struct sockaddr_in F;
        socklen_t fl = sizeof(F);
        t = accept(s, (struct sockaddr *) &amp;F, &amp;fl);
        fprintf(stderr, "accepted FD %d from %s:%d\n",
            t, inet_ntoa(F.sin_addr), (int) ntohs(F.sin_port));
        close(t);       /* close immediately, before any data is sent */
        fprintf(stderr, "closed FD %d\n", t);
    }
    return 0;
}
</verb>
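<p>
For example (a sketch; the file name is just a placeholder, and binding
to port 80 normally requires root privileges), compile and start the
program on an otherwise idle host, then request any URL on that host
through your cache with a browser configured to use the proxy:
<verb>
cc -o zero-reply-server zero-reply-server.c
./zero-reply-server
</verb>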
<!-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -->
<sect>How does Squid work?
<label id="memory">
<sect1>What are cachable objects?
<P>
@ -8826,8 +8972,8 @@ diff -p -u -r1.40 -r1.41
* SUCH DAMAGE.
*
* @(#)uipc_socket.c 8.3 (Berkeley) 4/15/94
- * $Id: FAQ.sgml,v 1.4 2004/09/09 12:37:50 cvsdist Exp $
+ * $Id: FAQ.sgml,v 1.4 2004/09/09 12:37:50 cvsdist Exp $
- * $Id: FAQ.sgml,v 1.5 2004/09/09 12:40:04 cvsdist Exp $
+ * $Id: FAQ.sgml,v 1.5 2004/09/09 12:40:04 cvsdist Exp $
*/
#include <sys/param.h>
@ -9095,7 +9241,7 @@ or broken TCP/IP implementations.
To work around such broken sites you can disable ECN with
the following command:
<verb>
echo 0 >/proc/sys/net/ipv4/tcp_ecn
echo 0 &gt; /proc/sys/net/ipv4/tcp_ecn
</verb>
<p>
Found this on the FreeBSD mailing list:
@ -10045,10 +10191,10 @@ httpd_accel_uses_host_header on
any IP address, on port 80 - and deliver them to your cache
application. This is typically done with IP
filtering/forwarding features built into the kernel.
On linux they call this <em/ipfilter/ (kernel 2.4.x),
On linux they call this <em/iptables/ (kernel 2.4.x),
<em/ipchains/ (2.2.x) or <em/ipfwadm/ (2.0.x).
On FreeBSD and other
*BSD systems they call it <em/ip filter/ or <em/ipnat/; on many
On FreeBSD it's called <em/ipfw/. Other
BSD systems may use <em/ip filter/ or <em/ipnat/. On most
systems, it may require rebuilding the kernel or adding a new
loadable kernel module.
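<p>
As a rough illustration only (the interface name and Squid port here are
assumptions for a typical setup; refer to the operating-system specific
instructions for the complete configuration), a Linux 2.4 <em/iptables/
rule that redirects incoming port 80 traffic to a Squid listening on
port 3128 looks like this:
<verb>
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128
</verb>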
@ -10098,7 +10244,12 @@ users, which you can do with Squid in this configuration.
</itemize>
<sect1>Interception caching for Solaris, SunOS, and BSD systems
<p>
NOTE: You don't need to use IP Filter on FreeBSD. Use the built-in <em/ipfw/ feature
instead. See the FreeBSD subsection below.
<sect2>Install IP Filter
<P>
First, get and install the
<url url="http://coombs.anu.edu.au/ipfilter/"
@ -11071,8 +11222,8 @@ IOS releases:
<sect1>What about WCCPv2?
<p>
Cisco has published WCCPv2 as an <url url="http://www.ietf.org/internet-drafts/draft-wilson-wrec-wccp-v2-00.txt"
name="Internet Draft"> (expires Jan 2001).
Cisco has published WCCPv2 as an <url url="http://www.web-cache.com/Writings/Internet-Drafts/draft-wilson-wrec-wccp-v2-00.txt"
name="Internet Draft"> (expired Jan 2001).
At this point, Squid does not support WCCPv2, but anyone
is welcome to code it up and contribute to the Squid project.
@ -12824,7 +12975,7 @@ want to make a cron job that regularly verifies that your proxy blocks
access to port 25.
<verb>
$Id: FAQ.sgml,v 1.4 2004/09/09 12:37:50 cvsdist Exp $
$Id: FAQ.sgml,v 1.5 2004/09/09 12:40:04 cvsdist Exp $
</verb>
</article>
<!-- LocalWords: SSL MSIE Netmanage Chameleon WebSurfer unchecking remotehost

@ -1 +1,2 @@
0067e2732930853b0d6011589ac0aed8 msntauth-v2.0.3-squid.1.tar.gz
103fe9d03aca06f89218740f29730527 squid-2.4.STABLE6-src.tar.gz

@ -1,7 +1,7 @@
Summary: The Squid proxy caching server.
Name: squid
Version: 2.4.STABLE6
Release: 1.7.2
Release: 6.7.3
Serial: 7
License: GPL
Group: System Environment/Daemons
@ -10,10 +10,16 @@ Source1: http://www.squid-cache.org/Squid/FAQ/FAQ.sgml
Source2: squid.init
Source3: squid.logrotate
Source4: squid.sysconfig
Source10: msntauth-v2.0.3-squid.1.tar.gz
Patch0: squid-2.1-make.patch
Patch1: squid-2.4-config.patch
Patch2: squid-perlpath.patch
Patch3: squid-location.patch
Patch10: squid-2.4.STABLE6-deny_transfer_encoding.patch
Patch11: squid-2.4.STABLE6-ftp_directories.patch
Patch12: squid-2.4.STABLE6-ftp_sanitycheck.patch
Patch13: squid-2.4.STABLE6-gopher.patch
Patch14: squid-2.4.STABLE6-proxy_auth.patch
BuildRoot: %{_tmppath}/%{name}-%{version}-root
Prereq: /sbin/chkconfig logrotate shadow-utils
Requires: bash >= 2.0
@ -33,11 +39,20 @@ lookup program (dnsserver), a program for retrieving FTP data
(ftpget), and some management and client tools.
%prep
%setup -q
%setup -q -a 10
%patch0 -p1 -b .make
%patch1 -p1 -b .config
%patch2 -p1 -b .perlpath
%patch3 -p1
%patch10 -p1
%patch12 -p1
%patch11 -p1
%patch13 -p1
%patch14 -p1
rm -rf auth_modules/MSNT/*
mv msntauth-v2.0.3-squid.1/* auth_modules/MSNT/
rm -rf msntauth-v2.0.3-squid.1
%build
%configure \
@ -243,6 +258,15 @@ if [ "$1" -ge "1" ] ; then
fi
%changelog
* Tue Jun 25 2002 Bill Nottingham <notting@redhat.com>
- add various upstream bugfix patches
* Fri Jun 21 2002 Tim Powers <timp@redhat.com>
- automated rebuild
* Thu May 23 2002 Tim Powers <timp@redhat.com>
- automated rebuild
* Fri Mar 22 2002 Bill Nottingham <notting@redhat.com>
- 2.4.STABLE6
- turn off carp