<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=US-ASCII">
<title>HOWTOs and FAQ</title>
<link rel="previous" href="maint.html">
<link rel="ToC" href="index.html">
<link rel="up" href="index.html">
<link rel="next" href="troublefaq.html">
</head>
<body>
<p><a href="maint.html">Previous</a> | <a href="index.html">Contents</a> | <a href="troublefaq.html">Next</a></p>

<ul>
<li><a href="#howtos">Chapter 8: HOWTOs and FAQ</a>
<ul>
<li><a href="#imp">8.1 Package import</a></li>
<li><a href="#cache-overview">8.2 Cache overview</a></li>
<li><a href="#ssluse">8.3 Access to SSL/TLS remotes (HTTPS)</a></li>
<li><a href="#jigdo">8.4 JIGDO usage</a></li>
<li><a href="#prob-proxy">8.5 Avoid use of apt-cacher-ng for certain hosts</a></li>
<li><a href="#howto-dontcache1">8.6 Avoid caching for certain domains or certain file types</a></li>
<li><a href="#howto-faster">8.7 How to make big download series faster</a></li>
<li><a href="#howto-importiso">8.8 How to import DVDs or ISO images</a></li>
<li><a href="#howto-importdisk">8.9 How to integrate DVDs or ISO image data</a></li>
<li><a href="#howto-hooks">8.10 How to execute commands before and after going online?</a></li>
<li><a href="#howto-interfaces">8.11 Listen to only specific interfaces or IP protocols</a></li>
<li><a href="#howto-outproto">8.12 How to avoid use of IPv4 (or IPv6) where possible?</a></li>
<li><a href="#howto-acngfs">8.13 Use the proxy without storing all data twice</a></li>
<li><a href="#optproxy">8.14 Optional semi-automatic use of proxy</a></li>
<li><a href="#mirroring">8.15 Partial Mirroring</a></li>
</ul></li>
</ul>
<h1><a name="howtos"></a>Chapter 8: HOWTOs and FAQ</h1>
<h2><a name="imp"></a>8.1 Package import</h2>
<p>
Already existing packages can be imported into apt-cacher-ng's cache pool instead of downloading them. There are some restrictions:
</p>
<ol><li>
Don't try to import incomplete files. They will be refused since their contents cannot be checked against the archive metadata.
</li>
<li>
If possible, don't import symbolic links. If you do, they should not point to other files inside the cache and especially not to other files under the <code>_import</code> directory.
</li>
</ol>
<p>
HOWTO:
</p>
<ol><li><a name="p0"></a>
Make sure that apt-cacher-ng has valid index files in the cache. This is the tricky part. To get them right, a client needs to download them through apt-cacher-ng once. Therefore:
<ol><li>
Configure the server and one client before doing the import. See above for instructions.
</li>
<li>
Run "apt-get update" on client(s) once to teach ACNG about remote locations of (volatile) index files. In some cases this is not sufficient. See the note on APT below for a workaround. 
</li>
</ol>

</li>
<li>
Store copies of your .debs, .orig.tar.gz, ... somewhere in the <code>_import</code> subdirectory of the cache, i.e. in <code>/var/cache/apt-cacher-ng/_import/</code>. The files may be hard links or symlinks; it does not matter. When done, apt-cacher-ng will move those files to its own internal locations. Example:
<pre><code># hard-link package files from old apt-proxy and apt-cacher caches into _import
cd /var/cache
mkdir apt-cacher-ng/_import
cp -laf apt-proxy apt-cacher /var/cache/apt-cacher-ng/_import
chown -R apt-cacher-ng apt-cacher-ng/_import
</code></pre>

</li>
<li>
Visit the report page and trigger the import action there. Check the results, look for (red) error messages.
</li>
<li>
Check the <code>_import</code> directory again. All files that could be identified as referenced by archive metadata should have been moved away and should no longer be there. If some files have been left behind, check whether the client can actually use them, e.g. with "apt-cache policy ..." and/or by comparing checksums with the md5sum/sha1sum tools. Most likely they are no longer needed by anyone and apt-cacher-ng therefore just left them behind. If they are still needed, follow the instructions in <a href="#p0">1</a> (or do the equivalent for your distribution) and retry the import operation. Setting the verbosity flag (see the checkbox on the command-and-control page) can also help to discover why particular files were refused.
</li>
</ol>
<p>
NOTE: APT is pretty efficient at avoiding unnecessary downloads, which can make a proxy blind to some relevant files. ACNG makes some attempts to guess the remote locations of missed (not downloaded) files, but these heuristics may fail, especially on non-Debian systems. When some files are permanently ignored, check the process output for messages about the update of Packages/Sources files. When some relevant package sources are missing there, there is a brute-force method for Debian/Ubuntu users to force their download to the client side. To do that, run:
</p>
<pre><code>rm /var/cache/apt/*cache.bin
rm /var/lib/apt/lists/*Packages 
rm /var/lib/apt/lists/*Sources
</code></pre>
<p>
on the client to purge APT's internal cache, and then rerun "apt-get update" there.
</p>
<h2><a name="cache-overview"></a>8.2 Cache overview</h2>
<p>
To get a basic overview of the cache contents, the distkill.pl script may be used. See <a href="maint.html#distkill">section 7.2</a> for details and warnings.
</p>
<pre><code># /usr/lib/apt-cacher-ng/distkill.pl
Scanning /var/cache/apt-cacher-ng, please wait...
Found distributions:
	1. testing (6 index files)
	2. sid (63 index files)
	3. etch-unikl (30 index files)
	4. etch (30 index files)
	5. experimental (505 index files)
	6. lenny (57 index files)
	7. unstable (918 index files)
	8. stable (10 index files)

WARNING: The removal action would wipe out whole directories containing
         index files. Select d to see detailed list.

Which distribution to remove? (Number, 0 to exit, d for details): d

Directories to remove:
 1. testing: 
  /var/cache/apt-cacher-ng/debrep/dists/testing 
 2. sid: 
  /var/cache/apt-cacher-ng/localstuff/dists/sid 
  /var/cache/apt-cacher-ng/debrep/dists/sid 
 4. etch: 
  /var/cache/apt-cacher-ng/ftp.debian-unofficial.org/debian/dists/etch 
 5. experimental: 
  /var/cache/apt-cacher-ng/debrep/dists/experimental 
 6. lenny: 
  /var/cache/apt-cacher-ng/security.debian.org/dists/lenny 
  /var/cache/apt-cacher-ng/debrep/dists/lenny 
 7. unstable: 
  /var/cache/apt-cacher-ng/debrep/dists/unstable 
  /var/cache/apt-cacher-ng/localstuff/debian/dists/unstable 
 8. stable: 
  /var/cache/apt-cacher-ng/debrep/dists/stable 
Found distributions:

WARNING: The removal action would wipe out whole directories containing
         index files. Select d to see detailed list.

</code></pre>
<h2><a name="ssluse"></a>8.3 Access to SSL/TLS remotes (HTTPS)</h2>
<p>
It is possible to have encrypted access to remote sites via the HTTPS protocol with recent versions of apt-cacher-ng if OpenSSL support was enabled at compile time. However, this leads to certain side effects and complications; due to the nature of the HTTPS connection model, it is not possible to act as an intermediate server (e.g. caching proxy) by the same rules as with HTTP:
</p>
<ul><li>
SSL ensures strict verification of the remote host name. Therefore, you cannot add a URL to sources.list and make it point to the "hostname of the acng server": the download client would expect the remote site to present the certificate of the "hostname of the acng server", but the certificate of the remote server would be used.
</li>
<li>
This restriction also applies if the LAN admin somehow manages to trick the client into believing that it connects to the remote site while the real connection is silently rerouted to the local server: since the proxy in the middle cannot fake the real certificate, it would need to create a custom one, and that manipulation usually cannot go unnoticed by the client unless some local hack disables or manipulates the certificate verification. All of this should be considered a crude hack and won't be described here in detail. In fact, apt-cacher-ng does not implement local SSL server functionality (as of version 0.8.1), partly because of these considerations.
</li>
<li>
If the client assumes that it needs to access an HTTPS remote server through an HTTP proxy, it usually resorts to HTTP tunneling (see the <a href="http://en.wikipedia.org/wiki/HTTP_tunnel">Wikipedia article</a> for details). This is a well-known and widely-used method, but it comes with some disadvantages: the proxy has no control over the data flowing through it, so it cannot cache the data in any useful way; in environments with additional security restrictions, it is also hard to identify malicious users that abuse the service for illegitimate purposes.
</li>
</ul>
<p>
Considering these difficulties, there are three (and a half) methods to use SSL.
</p>
<ul><li>
First, the "half method" - not using the proxy at all, configuring each client to not use the HTTP proxy for HTTPS urls. This will obviously disable central caching and requires the client has separated configuration options to set this. For Debian based distros, this can be done by adding a configuration like this: <code>Acquire::https::proxy "DIRECT";</code> to apt.conf or one of the apt.conf.d files. See <a href="#prob-proxy">section 8.5</a> for further information.
</li>
<li>
The "backend configuration method": if the clients access the remote sites through URLs remapped on the server, the cacher admin can add https URLs to backend lists instead of http urls. Data will be cached just like usual.
</li>
<li>
The "laissez-faire method": in acng.conf (or related) configure the <code>PassThroughPattern</code> option to contain a regex like <code>.*</code> and configure the clients to use apt-cacher-ng as HTTP proxy and let the clients connect to https URLs "as usual". Some limited access control can be achieved through adjustment of the regular expression (.* permits access to any host and any port, including 443 for https). Data is not cached on the server.
</li>
<li>
The "tell-me-what-you-need method": on the client side, modify the access URLs and change https to http and put the string "HTTPS///" between http:// and the host name. With this trick, the user client will access the proxy like going for a usual HTTP download and the proxy will access the remote URL with the https protocol. Caching (and file merging to repositories) will work and there is still enough flexibility for the users. The disadvantages of this method are basically the same as with the access URLs rewriting (see <a href="config-servquick.html#config-client">Section 3.2</a>) but the method is still the preferred one by the apt-cacher-ng author. For apt's sources.list, the modification may look like the following example.
</li>
</ul>
<pre><code>deb http://HTTPS///get.docker.com/ubuntu docker main
# If apt-cacher-ng is configured as proxy in APT, this makes it
# switch internally to https://get.docker.com/ubuntu
deb http://acnghost:3142/HTTPS///get.docker.com/ubuntu docker main
# Basically the same just with access to apt-cacher-ng through
# URL rewriting instead of setting the HTTP proxy.
</code></pre>
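<p>For the "laissez-faire method" mentioned above, the server-side change is a single line in acng.conf. A minimal sketch; the value shown opens CONNECT tunneling to any host and port, so a tighter pattern is usually preferable in practice:</p>
<pre><code># very permissive example: allow CONNECT tunneling to any host and any port
PassThroughPattern: .*
</code></pre>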
<h2><a name="jigdo"></a>8.4 JIGDO usage</h2>
<p>
It's possible to use apt-cacher-ng source with the jigdo-lite utility. There are some limitations, though:
</p>
<ul><li>
since many mirrors do not distribute the jigdo files (or do not mirror anything from cdimage.debian.org at all), there is a high chance of being redirected to such a mirror when using the backend-mapped configuration. I.e. when the user follows the official documentation and edits wgetOpts in the jigdo configuration, it will fail in many cases.
</li>
<li>
apt-cacher-ng does not support .template files properly. They might be cached but will be expired (removed from the cache) sooner or later.
</li>
</ul>
<p>
But it's possible to feed jigdo-lite with the package contents from your mirror. To do that, first start jigdo-lite as usual, something like:
</p>
<p>
<code>jigdo-lite http://cdimage.debian.org/.../...-DVD-1.jigdo</code>
</p>
<p>
When asked about Debian mirror, enter something like:
</p>
<p>
<code>http://proxy.host:3142/ftp.de.debian.org/debian/</code>
</p>
<p>
i.e. construct the same URL as would appear in a typical apt-cacher-ng user's sources.list.
</p>
<p>
That's all; jigdo-lite will fetch the package files through the apt-cacher-ng proxy.
</p>
<h2><a name="prob-proxy"></a>8.5 Avoid use of apt-cacher-ng for certain hosts</h2>
<p>
Sometimes clients need to access certain remote hosts directly, e.g. for work that is not plain file transfer, instead of passing the data through the configured apt-cacher-ng proxy. Such remote hosts can be marked for direct access in the APT configuration, e.g. in <code>/etc/apt/apt.conf</code>:
</p>
<pre><code>Acquire::HTTP::Proxy::archive.example.org "DIRECT";
//or Acquire::HTTP::Proxy::archive.example.org  "other.proxy:port"
</code></pre>
<h2><a name="howto-dontcache1"></a>8.6 Avoid caching for certain domains or certain file types</h2>
<p>
Sometimes clients should download through apt-cacher-ng, but the data shall not be stored on the server's hard disk. To achieve this, use the DontCache directive (see the configuration examples for details) to define such files.
</p>
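<p>A minimal sketch of such a setting in acng.conf; the pattern is illustrative, and the shipped configuration examples document the exact matching rules:</p>
<pre><code># deliver files matching this pattern to clients but never store them in the cache
# (a single illustrative regular expression; adapt it to the domain or file type)
DontCache: .*\.example\.org/.*
</code></pre>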
<h2><a name="howto-faster"></a>8.7 How to make big download series faster</h2>
<p>
Symptom: a common situation is a periodic download of hundreds of files through apt-cacher-ng where only about half of them are already present in the cache. Although caching works fine, there are visible delays on some files during the download.
</p>
<p>
Possible cause and relief: the download from the real mirror gets interrupted while apt-cacher-ng delivers a set of files from the internal cache. While the connection is suspended, it times out and needs to be recreated when a miss occurs, i.e. when apt-cacher-ng has to fetch more data from the remote mirror. A workaround for this behaviour is simple, provided that the remote mirror can handle long request queues: set the pipelining depth to a very high value in the apt.conf file or one of the files in /etc/apt/apt.conf.d/. With something like:
</p>
<p>
<code>Acquire::http { Pipeline-Depth "200"; } </code>
</p>
<p>
there is a higher chance to get the server connection "preheated" before a stall occurs.
</p>
<h2><a name="howto-importiso"></a>8.8 How to import DVDs or ISO images</h2>
<p>
First, it should be clear what actually needs to be done. In order to integrate the packages from a DVD or ISO image, read on in <a href="#howto-importdisk">section 8.9</a>.
</p>
<p>
The situation with importing the ISO files themselves is more complicated. They are not supported by the cache and there is also no expiration mode for them. The feature might be considered for addition in some future release of apt-cacher-ng.
</p>
<p>
What is possible now is publishing a directory with ISO files through apt-cacher-ng's web server mode; see the <code>LocalDirs</code> config option for details.
</p>
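<p>A minimal sketch, assuming the virtual-name/path pair syntax shown in the shipped configuration examples; the name and the directory are illustrative:</p>
<pre><code># publish the contents of /srv/iso-store under http://proxy.host:3142/isos/
LocalDirs: isos /srv/iso-store
</code></pre>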
<h2><a name="howto-importdisk"></a>8.9 How to integrate DVDs or ISO image data</h2>
<p>
Integrating package files from DVD or ISO images is not much different from the usual import operation; see above for instructions.
</p>
<p>
One possible way to get the files into the <code>_import</code> directory is simply mounting the image there:
</p>
<pre><code> mount -o loop /dev/cdrom /var/cache/apt-cacher-ng/_import
</code></pre>
<p>
After running the import operation, the disk can be unmounted and removed.
</p>
<p>
A possible variation is import via symlinks. This can make sense where space consumption must be reduced and the ISO image should stay on the server for a long time. To achieve this, the image should be mounted at some mount point outside of the <code>_import</code> directory; the mounted state should be made permanent, e.g. via an /etc/fstab entry with the loop option; then a symbolic link tree pointing to the mount point should be created in the <code>_import</code> directory (something like <code>cp -as /mnt/image_jessie_01/pool /var/cache/apt-cacher-ng/_import/</code>). The subsequent "import" operation should pick up the symlinks and continue using them as links instead of file copies.
</p>
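<p>Put together, the symlink variant might look like this; the image name and the mount point are illustrative:</p>
<pre><code># /etc/fstab line keeping the image mounted permanently (illustrative paths)
/srv/iso/debian-8-DVD-1.iso  /mnt/image_jessie_01  iso9660  loop,ro  0  0

# create the symlink tree inside _import, then trigger the import from the report page
mount /mnt/image_jessie_01
cp -as /mnt/image_jessie_01/pool /var/cache/apt-cacher-ng/_import/
</code></pre>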
<h2><a name="howto-hooks"></a>8.10 How to execute commands before and after going online?</h2>
<p>
It is possible to configure custom commands which are executed before the internet connection attempt and after a certain period following the closing of the connection. The commands are bound to a remapping configuration, and the hook file is named after that remapping config, e.g. <code>debrep.hooks</code> for <code>Remap-debrep</code>. See <a href="config-serv.html#remap-trickz">section 4.3.2</a>, <code>conf/*.hooks</code> and the <code>/usr/share/doc/apt-cacher-ng/examples/*.hooks</code> files for details.
</p>
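<p>A sketch of such a hook file (here <code>debrep.hooks</code>), using the key names found in the shipped examples; the commands themselves are purely illustrative, and the shipped *.hooks files remain the authoritative reference for the syntax:</p>
<pre><code># executed before the first remote connection for this remapping config
PreUp=/usr/local/sbin/dialup-start
# executed once the connection has been idle for DownTimeout seconds
Down=/usr/local/sbin/dialup-stop
DownTimeout=30
</code></pre>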
<h2><a name="howto-interfaces"></a>8.11 Listen to only specific interfaces or IP protocols</h2>
<p>
Unless configured explicitly, the server listens on any interface with the IPv4 or IPv6 protocol. To restrict this, use the <code>BindAddress</code> option. It should contain a list of IP addresses associated with particular network interfaces, separated by spaces. When the option is set, the server won't listen on addresses or protocols not included there.
</p>
<p>
To limit the server to a specific IP protocol, list only addresses written in that protocol's syntax (like 192.0.43.10 for IPv4).
</p>
<p>
The usual wildcard addresses can also be used to match all interfaces configured for the specific protocol, like 0.0.0.0 for IPv4.
</p>
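<p>For example, to listen only on one LAN address and on localhost (the addresses are illustrative):</p>
<pre><code>BindAddress: 192.168.0.1 127.0.0.1
</code></pre>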
<h2><a name="howto-outproto"></a>8.12 How to avoid use of IPv4 (or IPv6) where possible?</h2>
<p>
Usually, outgoing hosts are accessed by the protocol and with the target IP reported as the first candidate by operating system facilities (getaddrinfo). It is possible to change this behaviour, i.e. to skip IPv6 or IPv4 entirely, or to try an IPv6 connection first and use IPv4 as the alternative (or vice versa). See the <em>ConnectProto</em> option in the configuration examples.
</p>
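<p>A sketch, using the value syntax shown in the shipped configuration examples: prefer IPv6 and fall back to IPv4.</p>
<pre><code>ConnectProto: v6 v4
</code></pre>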
<h2><a name="howto-acngfs"></a>8.13 Use the proxy without storing all data twice</h2>
<p>
There is a general use case where the data storing behaviour of APT is not so fortunate. Imagine an old laptop with a slow and small hard disk but a modern network connection (e.g. a CardBus-attached WLAN card). There is not enough space for APT to store the downloaded packages on the local disk, or not enough to perform the upgrade afterwards.
</p>
<p>
A plausible workaround in this case is moving the contents of the /var/cache/apt/archives directory to a mounted NFS share and replacing the original directory with a symlink (or a bind-mount of the mentioned share). However, this solution would transfer all data at least three times over the network. Another plausible workaround might be the use of curlftpfs, which would mount a remote FTP share that can then be specified as a file:// URL in sources.list. However, this solution won't work with a local HTTP proxy like apt-cacher-ng (and httpfs, http://sourceforge.net/projects/httpfs/, is not an alternative because it works only with a single file per mount).
</p>
<p>
As a real alternative, apt-cacher-ng comes with its own implementation of an HTTP file system called <code>acngfs</code>. It makes some assumptions about the proxy's behaviour in order to emulate a real directory structure. Directories can be entered but not browsed (i.e. content listing is disallowed because of HTTP protocol limitations). Anyhow, this solution is good enough for APT. When it checks the contents of a data source located on an acngfs share, it reads only the files required for the update, which makes the apt-cacher-ng server download them on-the-fly.
</p>
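<p>The exact invocation is described in the acngfs man page; as a rough sketch (repository URL, proxy address, mount point and even the argument order are illustrative and may differ between versions), it could look like this:</p>
<pre><code># mount a view of the remote repository through the local apt-cacher-ng
acngfs http://ftp.debian.org/debian http://localhost:3142 /var/local/acngfs_debian
# then point sources.list at the mount point, e.g.:
# deb file:/var/local/acngfs_debian unstable main
</code></pre>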
<p>
Finally, acngfs usage can be optimized for local access. This works best if the proxy daemon runs on the same machine as acngfs, there are hundreds of packages to update, and filesystem access costs are negligible. In that case the cache directory can be specified in the acngfs parameters, and acngfs then reads files directly from the cache if they are completely downloaded and do not have volatile contents.
</p>
<h2><a name="optproxy"></a>8.14 Optional semi-automatic use of proxy</h2>
<p>
The Apt-Cacher NG daemon has some optional operation modes regarding the use of an external proxy (configured with the "Proxy" setting). The default mode means use of that proxy for all remote connections.
</p>
<p>
However, in some environments, e.g. when running on a portable system, the "upstream" proxy server may not be reachable while outside of the home LAN. For this situation, there are automated ways to detect this and to switch to direct internet access without additional configuration work.
</p>
<p>
One way is to use a (short) timeout value (the OptProxyTimeout setting), which causes a connection attempt that fails after the timeout to be interpreted as a broken proxy. The proxy is then not used in the context of this connection until a new connection is to be established. If OptProxyCheckInterval is set (see below), this change remains effective for the configured time span.
</p>
<p>
Another way is through a custom (shell) command: when it returns successfully, the proxy is used; otherwise it is not, and the command will be rerun only after a specified period. The command may check IP routes, look for a specific router address in the MAC pool, or inspect similar traces of the current environment. This setting (OptProxyCheckCommand) is also affected by another option (OptProxyCheckInterval) that specifies how often the check command shall be rerun. In the meantime, the configured proxy is considered faulty.
</p>
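<p>A configuration sketch combining these options; the proxy address, the check command and the values are illustrative:</p>
<pre><code>Proxy: http://homeproxy.lan:3128
# treat a connection attempt that hangs longer than this as a broken proxy
OptProxyTimeout: 3
# consider the proxy usable only while the home router's subnet is routed
OptProxyCheckCommand: ip route | grep -q 192.168.7.
# rerun the check command only after this interval has passed
OptProxyCheckInterval: 600
</code></pre>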
<p>
Hint: these special modes can be combined with the non-caching behaviour (see above) and another ACNG proxy in the home LAN to get smart caching for laptop users.
</p>
<h2><a name="mirroring"></a>8.15 Partial Mirroring</h2>
<p>
It is possible to create a partial local mirror of a remote package repository. The method of doing this is usually known as pre-caching. Such a mirror would contain all files available to apt through <code>apt-cacher-ng</code>, making the cache server suitable for pure off-line use.
</p>
<p>
The configuration uses index files in the local cache to declare which remote files shall be mirrored. The choice of relevant index files decides which branch, which architecture or which source tree is to be mirrored. For convenience, it's possible to use glob expressions to create a semi-dynamic list. The format is shell-like and relative to the cache directory; a shell running in the cache directory can be helpful to verify correctness.
</p>
<p>
<em>Example:</em>
</p>
<p>
<code>PrecacheFor: debrep/dists/unstable/*/binary-amd64/Packages*</code>
</p>
<p>
<code>PrecacheFor: emacs.naquadah.org/unstable/*</code>
</p>
<p>
Assuming that the debrep repository is configured with a proper remapping setup (see above), this would download all Debian packages listed for the amd64 architecture in the unstable branch.
</p>
<p>
There is also support for faster file updates using deltas; see <a href="http://debdelta.debian.net/">Debdelta</a> for details. The delta_uri URL mentioned there needs to be added as the deltasrc option, see <a href="config-serv.html#remap-trickz">section 4.3.2</a> for details.
</p>
<p>
The operation is triggered via the web interface; various options and an estimation mode can also be configured there. The CGI URL generated by the browser can be called with other clients to repeat this job, for example in a daily executed script. Another possible command line client is the <code>expire-caller.pl</code> script shipped with this package (replacing the CGI parameters through environment variables, see <a href="maint.html#auto-cleanup">section 7.1.2</a>). For regular tools like wget or curl, remember the need for quoting and for keeping user/password data secret - command calls might expose them to local users.
</p>
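<p>A sketch of such a daily job; the placeholder must be replaced with the exact CGI URL generated by your web interface, and credentials should be kept out of world-readable scripts:</p>
<pre><code># call the pre-caching CGI non-interactively; note the quoting of the URL
wget -q -O /dev/null 'http://localhost:3142/acng-report.html?&lt;parameters copied from the browser&gt;'
</code></pre>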

<hr><address>Comments to <a href='mailto:blade@debian.org'>blade@debian.org</a>
<br>
[Eduard Bloch, Sun, 19 Apr 2015 10:25:49 +0200]</address></body>
</html>