
/usr/share/doc/python-medusa-doc/txt/proxy_notes.txt is in python-medusa-doc 1:0.5.4-7.

This file is owned by root:root, with mode 0o644.


# We can build 'promises' to produce external data.  Each producer
# holds a 'promise' to fetch external data (or an error message).
# writable() for that channel returns true only when the top-most
# producer is ready; the dns client can flag that state from its
# callback.

# So, say 5 proxy requests come in: we can send out DNS queries for
# them immediately.  If the replies come back before the promises
# reach the front of the queue, so much the better: no resolve
# delay. 8^)
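#
# The promise idea above might be sketched like this (the names here
# are illustrative, not medusa's actual API; producers are assumed to
# expose more(), as medusa's do):

```python
class promise_producer:
    """A producer whose data arrives later, e.g. via a DNS callback."""

    def __init__(self):
        self.data = None        # filled in by the callback
        self.error = None       # or an error message instead

    def ready(self):
        # the channel's writable() consults this for the front producer
        return self.data is not None or self.error is not None

    def fulfill(self, data):
        # called from the dns client's callback on success
        self.data = data

    def reject(self, message):
        # called from the dns client's callback on failure
        self.error = message

    def more(self):
        # only called once ready() is true; drains the producer
        result = self.data if self.error is None else self.error
        self.data = self.error = None
        return result or b''
```

# writable() on the channel would then check ready() on the fifo's
# front producer before reporting true.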
#
# ok, there's still another complication:
# how to maintain replies in order?
# say three requests come in (to different hosts?  can this happen?),
# yet the connections complete third, second, and first.  We can't
# buffer an entire reply!  We need to be able to specify how much to
# buffer.
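#
# One way to keep replies in request order while capping how much we
# buffer (a sketch under assumed names; this is not medusa code):

```python
class ordered_replies:
    """Queue per-request reply buffers so they drain in request order."""

    def __init__(self, max_buffer=4096):
        self.queue = []                 # pending replies, request order
        self.max_buffer = max_buffer    # per-reply buffering limit

    def new_request(self):
        reply = {'chunks': [], 'size': 0, 'done': False}
        self.queue.append(reply)
        return reply

    def feed(self, reply, data):
        # buffer data for a reply that may not be at the front yet
        if reply['size'] + len(data) > self.max_buffer:
            # can't buffer the whole reply: the caller must throttle,
            # e.g. stop reading from that server connection for now
            raise OverflowError('reply exceeds buffer limit')
        reply['chunks'].append(data)
        reply['size'] += len(data)

    def pop_ready(self):
        # drain only completed replies from the front, preserving order
        out = []
        while self.queue and self.queue[0]['done']:
            out.append(b''.join(self.queue[0]['chunks']))
            del self.queue[0]
        return out
```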
#
# ===========================================================================
#
# the current setup is a 'pull' model:  whenever the channel fires FD_WRITE,
# we 'pull' data from the producer fifo.  what we need is a 'push' option/mode,
# where
# 1) we only check for FD_WRITE when data is in the buffer
# 2) whoever is 'pushing' is responsible for calling 'refill_buffer()'
#
# what is necessary to support this 'mode'?
# 1) writable() only fires when data is in the buffer
# 2) refill_buffer() is only called by the 'pusher'.
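#
# The two rules above might look like this (an illustrative sketch; a
# real medusa channel derives from asyncore.dispatcher, omitted here
# to keep the example self-contained):

```python
class push_channel:
    """Channel in 'push' mode: writable only when data is buffered."""

    def __init__(self, sock):
        self.sock = sock
        self.buffer = b''

    def writable(self):
        # rule 1: report FD_WRITE interest only when data is buffered
        return len(self.buffer) > 0

    def push(self, data):
        # rule 2: the 'pusher' refills the buffer; the channel never
        # pulls from a producer fifo when FD_WRITE fires
        self.buffer += data

    def handle_write(self):
        sent = self.sock.send(self.buffer)
        self.buffer = self.buffer[sent:]
```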
# 
# how would such a mode affect things?  with this mode, could we
# support a true http/1.1 proxy?  [i.e., support <n> pipelined proxy
# requests, possibly to different hosts, possibly even mixed in with
# non-proxy requests?]  For example, it would be nice if the proxy
# could automatically apply 1.1 chunking to 1.0 close-on-eof replies
# when feeding them to the client.  This would let us keep our
# persistent connection.
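#
# Automatic chunking could be a producer that wraps the server-side
# reply (medusa's producers module has a chunked_producer along these
# lines; this simplified sketch assumes the wrapped producer returns
# b'' at eof):

```python
class chunking_producer:
    """Re-frame a 1.0 close-on-eof reply as 1.1 chunked encoding."""

    def __init__(self, producer):
        self.producer = producer
        self.done = False

    def more(self):
        if self.done:
            return b''
        data = self.producer.more()
        if data:
            # each chunk: hex length, CRLF, payload, CRLF
            return b'%x\r\n%s\r\n' % (len(data), data)
        # eof on the server side becomes the terminating zero chunk,
        # so the client connection can stay open
        self.done = True
        return b'0\r\n\r\n'
```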