A high-level cross-protocol url-grabber.

GENERAL ARGUMENTS (kwargs)

  Where possible, the module-level default is indicated, and legal
  values are provided.

  copy_local = 0   [0|1]

    ignored except for file:// urls, in which case it specifies
    whether urlgrab should still make a copy of the file, or simply
    point to the existing copy. The module level default for this
    option is 0.

  close_connection = 0   [0|1]

    tells URLGrabber to close the connection after a file has been
    transferred. This is ignored unless the download happens with the
    http keepalive handler (keepalive=1). Otherwise, the connection
    is left open for further use. The module level default for this
    option is 0 (keepalive connections will not be closed).

  keepalive = 1   [0|1]

    specifies whether keepalive should be used for HTTP/1.1 servers
    that support it. The module level default for this option is 1
    (keepalive is enabled).

  progress_obj = None

    a class instance that supports the following methods:
      po.start(filename, url, basename, size, now, text)
      # size will be None if unknown
      po.update(read) # read == bytes read so far
      po.end()

  multi_progress_obj = None

    a class instance that supports the following methods:
      mo.start(total_files, total_size)
      mo.newMeter() => meter
      mo.removeMeter(meter)
      mo.end()

    The 'meter' object is similar to progress_obj, but multiple
    instances may be created and updated at the same time.

    When downloading multiple files in parallel and multi_progress_obj
    is None, progress_obj is used in compatibility mode: finished
    files are shown but there's no in-progress display.

  curl_obj = None

    a pycurl.Curl instance to be used instead of the default
    module-level instance. Note that you don't have to configure the
    passed instance in any way; urlgrabber will do all the necessary
    work.

    This option exists primarily to allow using urlgrabber from
    multiple threads in your application, in which case you would
    want to instantiate a fresh Curl object for each thread, to avoid
    race conditions. See the curl documentation on thread safety for
    more information: https://curl.haxx.se/libcurl/c/threadsafe.html

    Note that connection reuse (keepalive=1) is limited to the Curl
    instance it was enabled on, so if you're using multiple instances
    in your application, connections won't be shared among them.

  text = None

    specifies alternative text to be passed to the progress meter
    object. If not given, the default progress meter will use the
    basename of the file.

  throttle = 1.0

    a number - if it's an int, it's the bytes/second throttle limit.
    If it's a float, it is first multiplied by bandwidth. If throttle
    == 0, throttling is disabled. If None, the module-level default
    (which can be set on default_grabber.throttle) is used. See
    BANDWIDTH THROTTLING for more information.

  timeout = 300

    a positive integer expressing the number of seconds to wait
    before timing out attempts to connect to a server. If the value
    is None or 0, connection attempts will not time out. The timeout
    is passed to the underlying pycurl object as its CONNECTTIMEOUT
    option, see the curl documentation on CURLOPT_CONNECTTIMEOUT for
    more information:
    http://curl.haxx.se/libcurl/c/curl_easy_setopt.html#CURLOPTCONNECTTIMEOUT
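    For illustration, these options can be combined per call; a
    minimal sketch (the URL, filename and numbers below are
    placeholders, not defaults):

      from urlgrabber.grabber import urlgrab

      urlgrab('http://example.com/big.iso', '/tmp/big.iso',
              throttle=0.5,       # float: fraction of 'bandwidth'
              bandwidth=1250000,  # nominal max, in bytes/second
              timeout=300,        # connect timeout, in seconds
              text='big.iso')     # label for the progress meter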
  minrate = 1000

    This sets the low speed threshold in bytes per second. If the
    server is sending data slower than this for at least `timeout'
    seconds, the library aborts the connection.

  bandwidth = 0

    the nominal max bandwidth in bytes/second. If throttle is a float
    and bandwidth == 0, throttling is disabled. If None, the
    module-level default (which can be set on
    default_grabber.bandwidth) is used. See BANDWIDTH THROTTLING for
    more information.

  range = None

    a tuple of the form (first_byte, last_byte) describing a byte
    range to retrieve. Either or both of the values may be set to
    None. If first_byte is None, byte offset 0 is assumed. If
    last_byte is None, the last byte available is assumed. Note that
    the range specification is python-like in that (0,10) will yield
    the first 10 bytes of the file.

    If set to None, no range will be used.

  reget = None   [None|'simple'|'check_timestamp']

    whether to attempt to reget a partially-downloaded file. Reget
    only applies to .urlgrab and (obviously) only if there is a
    partially downloaded file. Reget has two modes:

      'simple' -- the local file will always be trusted. If there are
        100 bytes in the local file, then the download will always
        begin 100 bytes into the requested file.

      'check_timestamp' -- the timestamp of the server file will be
        compared to the timestamp of the local file. ONLY if the
        local file is newer than or the same age as the server file
        will reget be used. If the server file is newer, or the
        timestamp is not returned, the entire file will be fetched.

    NOTE: urlgrabber can do very little to verify that the partial
    file on disk is identical to the beginning of the remote file.
    You may want to either employ a custom "checkfunc" or simply
    avoid using reget in situations where corruption is a concern.

  user_agent = 'urlgrabber/VERSION'

    a string, usually of the form 'AGENT/VERSION' that is provided to
    HTTP servers in the User-agent header. The module level default
    for this option is "urlgrabber/VERSION".

  http_headers = None

    a tuple of 2-tuples, each containing a header and value. These
    will be used for http and https requests only. For example, you
    can do
      http_headers = (('Pragma', 'no-cache'),)

  ftp_headers = None

    this is just like http_headers, but will be used for ftp requests.

  proxies = None

    a dictionary that maps protocol schemes to proxy hosts. For
    example, to use a proxy server on host "foo" port 3128 for http
    and https URLs:
      proxies={ 'http' : 'http://foo:3128', 'https' : 'http://foo:3128' }
    note that proxy authentication information may be provided using
    normal URL constructs:
      proxies={ 'http' : 'http://user:password@foo:3128' }

  libproxy = False

    Use the libproxy module (if installed) to find proxies. The
    libproxy code is only used if the proxies dictionary does not
    provide any proxies.

  no_cache = False

    When True, server-side cache will be disabled for http and https
    requests. This is equivalent to setting
      http_headers = (('Pragma', 'no-cache'),)

  prefix = None

    a url prefix that will be prepended to all requested urls. For
    example:
      g = URLGrabber(prefix='http://foo.com/mirror/')
      g.urlgrab('some/file.txt')
      ## this will fetch 'http://foo.com/mirror/some/file.txt'
    This option exists primarily to allow identical behavior to
    MirrorGroup (and derived) instances. Note: a '/' will be inserted
    if necessary, so you cannot specify a prefix that ends with a
    partial file or directory name.
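    For illustration, a sketch combining the transfer options above
    (all hosts, paths and sizes are placeholders):

      from urlgrabber.grabber import urlgrab

      # fetch only the first 1 MiB, through a proxy:
      urlgrab('http://example.com/big.iso', '/tmp/big.head',
              range=(0, 1048576),
              proxies={'http': 'http://proxy.example.com:3128'})

      # resume a partial download, trusting whatever is on disk:
      urlgrab('http://example.com/big.iso', '/tmp/big.iso',
              reget='simple')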
  opener = None

    No-op when using the curl backend (default).

  cache_openers = True

    No-op when using the curl backend (default).

  data = None

    Only relevant for the HTTP family (and ignored for other
    protocols), this allows HTTP POSTs. When the data kwarg is
    present (and not None), an HTTP request will automatically become
    a POST rather than GET. This is done by direct passthrough to
    urllib2. If you use this, you may also want to set the
    'Content-length' and 'Content-type' headers with the http_headers
    option. Note that python 2.2 handles the case of these badly and
    if you do not use the proper case (shown here), your values will
    be overridden with the defaults.

  urlparser = URLParser()

    The URLParser class handles pre-processing of URLs, including
    auth-handling for user/pass encoded in http urls, file handling
    (that is, filenames not sent as a URL), and URL quoting. If you
    want to override any of this behavior, you can pass in a
    replacement instance. See also the 'quote' option.

  quote = None

    Whether or not to quote the path portion of a url.
      quote = 1    ->  quote the URLs (they're not quoted yet)
      quote = 0    ->  do not quote them (they're already quoted)
      quote = None ->  guess what to do

    This option only affects proper urls like 'file:///etc/passwd';
    it does not affect 'raw' filenames like '/etc/passwd'. The latter
    will always be quoted as they are converted to URLs. Also, only
    the path part of a url is quoted. If you need more fine-grained
    control, you should probably subclass URLParser and pass it in
    via the 'urlparser' option.

  username = None

    username to use for simple http auth - is automatically quoted
    for special characters

  password = None

    password to use for simple http auth - is automatically quoted
    for special characters

  ssl_ca_cert = None

    this option can be used if M2Crypto is available and will be
    ignored otherwise. If provided, it will be used to create an SSL
    context. If both ssl_ca_cert and ssl_context are provided, then
    ssl_context will be ignored and a new context will be created
    from ssl_ca_cert.

  ssl_context = None

    No-op when using the curl backend (default).

  ssl_verify_peer = True

    Check the server's certificate to make sure it is valid with what
    our CA validates.

  ssl_verify_host = True

    Check the server's hostname to make sure it matches the
    certificate DN.

  ssl_key = None

    Path to the key the client should use to connect/authenticate
    with.

  ssl_key_type = 'PEM'

    PEM or DER - format of key.

  ssl_cert = None

    Path to the ssl certificate the client should use to authenticate
    with.

  ssl_cert_type = 'PEM'

    PEM or DER - format of certificate.

  ssl_key_pass = None

    password to access the ssl_key.

  size = None

    maximum size (in bytes) of the thing being downloaded. This is
    mostly to keep us from exploding with an endless datastream.

  max_header_size = 2097152

    Maximum size (in bytes) of the headers.

  ip_resolve = 'whatever'

    What type of name-to-IP resolving to use; the default is to do
    both IPV4 and IPV6.

  async = (key, limit)

    When this option is set, the urlgrab() is not processed
    immediately but queued. parallel_wait() then processes grabs in
    parallel, limiting the number of connections in each 'key' group
    to at most 'limit'. See the sketch after this list.

  max_connections

    The global connection limit.

  timedhosts

    The filename of the host download statistics. If defined,
    urlgrabber will update the stats at the end of every download.
    At the end of parallel_wait(), the updated stats are saved. If
    synchronous grabs are used, you should call th_save().

  default_speed, half_life

    These options only affect the async mirror selection code. The
    default_speed option sets the speed estimate for mirrors we have
    never downloaded from, and defaults to 1 MBps. The speed estimate
    also drifts exponentially from the speed actually measured to the
    default speed, with a default period of 30 days.
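    For illustration, a sketch of queued parallel grabs (the URLs and
    the key/limit values are placeholders; failures in async mode are
    reported through failfunc, described below):

      from urlgrabber.grabber import urlgrab, parallel_wait

      for name in ('a.rpm', 'b.rpm', 'c.rpm'):
          # queue the grab; at most 2 connections to the 'example.com' key
          # (note: 'async' is a reserved word in python 3; this module
          # is python 2)
          urlgrab('http://example.com/' + name, '/tmp/' + name,
                  async=('example.com', 2))

      parallel_wait()  # process the queue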
  ftp_disable_epsv = False   [False|True]

    This option disables Extended Passive Mode (the EPSV command),
    which does not work correctly on some buggy ftp servers.


RETRY RELATED ARGUMENTS

  retry = None

    the number of times to retry the grab before bailing. If this is
    zero, it will retry forever. This was intentional... really, it
    was :). If this value is not supplied or is supplied but is None,
    retrying does not occur.

  retrycodes = [-1,2,4,5,6,7]

    a sequence of errorcodes (values of e.errno) for which it should
    retry. See the doc on URLGrabError for more details on this. You
    might consider modifying a copy of the default codes rather than
    building yours from scratch so that if the list is extended in
    the future (or one code is split into two) you can still enjoy
    the benefits of the default list. You can do that with something
    like this:

      retrycodes = urlgrabber.grabber.URLGrabberOptions().retrycodes
      if 12 not in retrycodes:
          retrycodes.append(12)

  checkfunc = None

    a function to do additional checks. This defaults to None, which
    means no additional checking. The function should simply return
    on a successful check. It should raise URLGrabError on an
    unsuccessful check. Raising of any other exception will be
    considered immediate failure and no retries will occur.

    If it raises URLGrabError, the error code will determine the
    retry behavior. Negative error numbers are reserved for use by
    these passed in functions, so you can use many negative numbers
    for different types of failure. By default, -1 results in a
    retry, but this can be customized with retrycodes.

    If you simply pass in a function, it will be given exactly one
    argument: a CallbackObject instance with the .url attribute
    defined and either .filename (for urlgrab) or .data (for
    urlread). For urlgrab, .filename is the name of the local file.
    For urlread, .data is the actual string data. If you need other
    arguments passed to the callback (program state of some sort),
    you can do so like this:

      checkfunc=(function, ('arg1', 2), {'kwarg': 3})

    if the downloaded file has filename /tmp/stuff, then this will
    result in this call (for urlgrab):

      function(obj, 'arg1', 2, kwarg=3)
      # obj.filename = '/tmp/stuff'
      # obj.url = 'http://foo.com/stuff'

    NOTE: both the "args" tuple and "kwargs" dict must be present if
    you use this syntax, but either (or both) can be empty.

  failure_callback = None

    The callback that gets called during retries when an attempt to
    fetch a file fails. The syntax for specifying the callback is
    identical to checkfunc, except for the attributes defined in the
    CallbackObject instance. The attributes for failure_callback are:

      exception      = the raised exception
      url            = the url we're trying to fetch
      tries          = the number of tries so far (including this one)
      retry          = the value of the retry option
      retry_no_cache = the value of the retry_no_cache option

    The callback is present primarily to inform the calling program
    of the failure, but if it raises an exception (including the one
    it's passed) that exception will NOT be caught and will therefore
    cause future retries to be aborted.

    The callback is called for EVERY failure, including the last one.
    On the last try, the callback can raise an alternate exception,
    but it cannot (without severe trickiness) prevent the exception
    from being raised.
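    For illustration, a retry setup with a custom check; the URL, the
    size-checking helper and the -2 error code are placeholders:

      import os
      from urlgrabber.grabber import urlgrab, URLGrabError

      def check_size(obj, expected):
          if os.path.getsize(obj.filename) != expected:
              raise URLGrabError(-2, 'wrong size: %s' % obj.filename)

      urlgrab('http://example.com/pkg.rpm', '/tmp/pkg.rpm',
              retry=3,
              retrycodes=[-1, -2, 2, 4, 5, 6, 7],  # also retry our -2
              checkfunc=(check_size, (104857600,), {}))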
  failfunc = None

    The callback that gets called when a urlgrab request fails. If
    defined, urlgrab() calls it instead of raising URLGrabError.
    Callback syntax is identical to failure_callback. Contrary to
    failure_callback, it's called only once. Its primary purpose is
    to use urlgrab() without a try/except block.

  interrupt_callback = None

    This callback is called if KeyboardInterrupt is received at any
    point in the transfer. Basically, this callback can have three
    impacts on the fetch process based on the way it exits:

      1) raise no exception: the current fetch will be aborted, but
         any further retries will still take place

      2) raise a URLGrabError: if you're using a MirrorGroup, then
         this will prompt a failover to the next mirror according to
         the behavior of the MirrorGroup subclass. It is recommended
         that you raise URLGrabError with code 15, 'user abort'. If
         you are NOT using a MirrorGroup subclass, then this is the
         same as (3).

      3) raise some other exception (such as KeyboardInterrupt),
         which will not be caught at either the grabber or mirror
         levels. That is, it will be raised up all the way to the
         caller.

    This callback is very similar to failure_callback. They are
    passed the same arguments, so you could use the same function for
    both.

  retry_no_cache = False

    When True, automatically enable no_cache for future retries if
    checkfunc performs an unsuccessful check. This option is useful
    if your application expects a set of files from the same server
    to form an atomic unit and you write your checkfunc to ensure
    each file being downloaded belongs to such a unit. If transparent
    proxy caching is in effect, the files can become out-of-sync,
    disrupting the atomicity. Enabling this option will prevent that,
    while ensuring that you still enjoy the benefits of caching when
    possible.


BANDWIDTH THROTTLING

  urlgrabber supports throttling via two values: throttle and
  bandwidth. Between the two, you can either specify an absolute
  throttle threshold or specify a threshold as a fraction of maximum
  available bandwidth.

  throttle is a number - if it's an int, it's the bytes/second
  throttle limit. If it's a float, it is first multiplied by
  bandwidth. If throttle == 0, throttling is disabled. If None, the
  module-level default (which can be set with set_throttle) is used.

  bandwidth is the nominal max bandwidth in bytes/second. If throttle
  is a float and bandwidth == 0, throttling is disabled. If None, the
  module-level default (which can be set with set_bandwidth) is used.

  Note that when multiple downloads run simultaneously
  (multiprocessing or the parallel urlgrab() feature is used) the
  total bandwidth might exceed the throttle limit. You may want to
  also set max_connections=1 or scale your throttle option down
  accordingly.

  THROTTLING EXAMPLES:

  Let's say you have a 100 Mbps connection. This is (about) 10^8 bits
  per second, or 12,500,000 Bytes per second. You have a number of
  throttling options:

  *) set_bandwidth(12500000); set_throttle(0.5) # throttle is a float

     This will limit urlgrab to use half of your available bandwidth.

  *) set_throttle(6250000) # throttle is an int

     This will also limit urlgrab to use half of your available
     bandwidth, regardless of what bandwidth is set to.

  *) set_bandwidth(6250000); set_throttle(1.0) # float

     Use half your bandwidth.

  *) set_bandwidth(6250000); set_throttle(2.0) # float

     Use up to 12,500,000 Bytes per second (your nominal max
     bandwidth).

  *) set_bandwidth(6250000); set_throttle(0) # throttle = 0

     Disable throttling - this is more efficient than a very large
     throttle setting.

  *) set_bandwidth(0); set_throttle(1.0) # throttle is float, bandwidth = 0

     Disable throttling - this is the default when the module is
     loaded.
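  The same policy can also be set on the default grabber directly
  (the set_throttle/set_bandwidth helpers above are deprecated in
  favor of this); the URL is a placeholder:

    from urlgrabber.grabber import default_grabber, urlgrab

    default_grabber.bandwidth = 12500000  # nominal 100 Mbps link
    default_grabber.throttle = 0.5        # use at most half of it

    urlgrab('http://example.com/big.iso', '/tmp/big.iso')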
SUGGESTED AUTHOR IMPLEMENTATION (THROTTLING)

  While this is flexible, it's not extremely obvious to the user. I
  suggest you implement a float throttle as a percent to make the
  distinction between absolute and relative throttling very explicit.

  Also, you may want to convert the units to something more
  convenient than bytes/second, such as kbps or kB/s, etc.


[The remainder of this dump is compiled bytecode; only the embedded
docstrings are recoverable and are reproduced below.]

set_logger(DBOBJ)

  Set the DEBUG object. This is called by _init_default_logger when
  the environment variable URLGRABBER_DEBUG is set, but can also be
  called by a calling program. Basically, if the calling program uses
  the logging module and would like to incorporate urlgrabber
  logging, then it can do so this way. It's probably not necessary as
  most internal logging is only for debugging purposes.

  The passed-in object should be a logging.Logger instance. It will
  be pushed into the keepalive and byterange modules if they're being
  used. The mirror module pulls this object in on import, so you will
  need to manually push into it. In fact, you may find it tidier to
  simply push your logging object (or objects) into each of these
  modules independently.

_init_default_logger(logspec=None)

  Examines the environment variable URLGRABBER_DEBUG and creates a
  logging object (logging.logger) based on the contents. It takes the
  form

    URLGRABBER_DEBUG=level,filename

  where "level" can be either an integer or a log level from the
  logging module (DEBUG, INFO, etc). If the integer is zero or less,
  logging will be disabled. Filename is the filename where logs will
  be sent. If it is "-", then stdout will be used. If the filename is
  empty or missing, stderr will be used. If the variable cannot be
  processed or the logging module cannot be imported (python < 2.3)
  then logging will be disabled. Here are some examples:

    URLGRABBER_DEBUG=1,debug.txt   # log everything to debug.txt
    URLGRABBER_DEBUG=WARNING,-     # log warning and higher to stdout
    URLGRABBER_DEBUG=INFO          # log info and higher to stderr

  This function is called during module initialization. It is not
  intended to be called from outside. The only reason it is a
  function at all is to keep the module-level namespace tidy and to
  collect the code into a nice block.
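  For illustration, pushing an application logger into urlgrabber by
  hand (the logger name is a placeholder):

    import logging
    from urlgrabber.grabber import set_logger

    log = logging.getLogger('myapp.urlgrabber')
    log.setLevel(logging.DEBUG)
    log.addHandler(logging.StreamHandler())
    set_logger(log)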
_to_utf8(obj, errors='replace')

  convert 'unicode' to an encoded utf-8 byte string

class URLGrabError(IOError)

  URLGrabError error codes:

    URLGrabber error codes (0 -- 255)
      0    - everything looks good (you should never see this)
      1    - malformed url
      2    - local file doesn't exist
      3    - request for non-file local file (dir, etc)
      4    - IOError on fetch
      5    - OSError on fetch
      6    - no content length header when we expected one
      7    - HTTPException
      8    - Exceeded read limit (for urlread)
      9    - Requested byte range not satisfiable.
      10   - Byte range requested, but range support unavailable
      11   - Illegal reget mode
      12   - Socket timeout
      13   - malformed proxy url
      14   - HTTPError (includes .code and .exception attributes)
      15   - user abort
      16   - error writing to local file

    MirrorGroup error codes (256 -- 511)
      256  - No more mirrors left to try

    Custom (non-builtin) classes derived from MirrorGroup (512 -- 767)
      [ this range reserved for application-specific error codes ]

    Retry codes (< 0)
      -1   - retry the download, unknown reason

  Note: to test which group a code is in, you can simply do integer
  division by 256: e.errno / 256

  Negative codes are reserved for use by functions passed in to
  retrygrab with checkfunc. The value -1 is built in as a generic
  retry code and is already included in the retrycodes list.
  Therefore, you can create a custom check function that simply
  returns -1 and the fetch will be re-tried. For more customized
  retries, you can use other negative numbers and include them in
  retrycodes. This is nice for outputting useful messages about what
  failed.

  You can use these error codes like so:

    try: urlgrab(url)
    except URLGrabError, e:
        if e.errno == 3: ...
        # or
        print e.strerror
        # or simply
        print e  #### print '[Errno %i] %s' % (e.errno, e.strerror)

class CallbackObject

  Container for returned callback data.

  This is currently a dummy class into which urlgrabber can stuff
  information for passing to callbacks. This way, the prototype for
  all callbacks is the same, regardless of the data that will be
  passed back. Any function that accepts a callback function as an
  argument SHOULD document what it will define in this object.

  It is possible that this class will have some greater functionality
  in the future.
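  For illustration, handling these errors by group (the URL is a
  placeholder):

    from urlgrabber.grabber import urlgrab, URLGrabError

    try:
        urlgrab('http://example.com/file.txt', '/tmp/file.txt')
    except URLGrabError, e:
        if e.errno / 256 == 1:    # 256-511: MirrorGroup errors
            print 'mirror failure: %s' % e.strerror
        elif e.errno == 12:       # socket timeout
            print 'timed out'
        else:
            print '[Errno %i] %s' % (e.errno, e.strerror)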
urlgrab(url, filename=None, **kwargs)

  grab the file at <url> and make a local copy at <filename>. If
  filename is none, the basename of the url is used. urlgrab returns
  the filename of the local file, which may be different from the
  passed-in filename if the copy_local kwarg == 0.

  See module documentation for a description of possible kwargs.

urlopen(url, **kwargs)

  open the url and return a file object. If a progress object or
  throttle specifications exist, then a special file object will be
  returned that supports them. The file object can be treated like
  any other file object.

  See module documentation for a description of possible kwargs.

urlread(url, limit=None, **kwargs)

  read the url into a string, up to 'limit' bytes. If the limit is
  exceeded, an exception will be thrown. Note that urlread is NOT
  intended to be used as a way of saying "I want the first N bytes"
  but rather 'read the whole file into memory, but don't use too
  much'.

  See module documentation for a description of possible kwargs.

class URLParser

  Process the URLs before passing them to urllib2.

  This class does several things:
    * add any prefix
    * translate a "raw" file to a proper file: url
    * handle any http or https auth that's encoded within the url
    * quote the url

  Only the "parse" method is called directly, and it calls
  sub-methods.

  An instance of this class is held in the options object, which
  means that it's easy to change the behavior by sub-classing and
  passing the replacement in. It need only have a method like:

    url, parts = urlparser.parse(url, opts)

  URLParser.parse(url, opts)

    parse the url and return the (modified) url and its parts

    Note: a raw file WILL be quoted when it's converted to a URL.
    However, other urls (ones which come with a proper scheme) may or
    may not be quoted according to opts.quote:

      opts.quote = 1     --> quote it
      opts.quote = 0     --> do not quote it
      opts.quote = None  --> guess

  URLParser.quote(parts)

    quote the URL. This method quotes ONLY the path part. If you need
    to quote other parts, you should override this and pass in your
    derived class. The other alternative is to quote other parts
    before passing into urlgrabber.

  URLParser.guess_should_quote(parts)

    Guess whether we should quote a path. This amounts to guessing
    whether it's already quoted:

      find ' '   ->  1
      find '%'   ->  1
      find '%XX' ->  0
      else       ->  1
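  For illustration, a replacement parser that never re-quotes (the
  subclass name and URL are placeholders):

    from urlgrabber.grabber import urlgrab, URLParser

    class PreQuotedParser(URLParser):
        def guess_should_quote(self, parts):
            return 0  # our URLs are always quoted already

    urlgrab('http://example.com/some%20file', '/tmp/file',
            urlparser=PreQuotedParser())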
class URLGrabberOptions

  Class to ease kwargs handling.

  URLGrabberOptions.__init__(delegate=None, **kwargs)

    Initialize URLGrabberOptions object. Set default values for all
    options and then update options specified in kwargs.

  URLGrabberOptions.raw_throttle()

    Calculate raw throttle value from throttle and bandwidth values.

  URLGrabberOptions.find_proxy(url, scheme)

    Find the proxy to use for this URL. Use the proxies dictionary
    first, then libproxy.

  URLGrabberOptions.derive(**kwargs)

    Create a derived URLGrabberOptions instance. This method creates
    a new instance and overrides the options specified in kwargs.

  URLGrabberOptions._set_attributes(**kwargs)

    Update object attributes with those provided in kwargs.

  URLGrabberOptions._set_defaults()

    Set all options to their default values. When adding new options,
    make sure a default is provided here.
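  The delegation these docstrings describe can be sketched roughly as
  follows (a simplified stand-in, not the module's actual code):

    class Options(object):
        def __init__(self, delegate=None, **kwargs):
            self.delegate = delegate
            if delegate is None:
                self.retry = None   # module-wide defaults live here
                self.timeout = 300
            self.__dict__.update(kwargs)

        def __getattr__(self, name):
            # called only when normal lookup fails: fall back to the
            # delegate chain, which ends at the module-wide defaults
            if self.__dict__.get('delegate') is not None:
                return getattr(self.delegate, name)
            raise AttributeError(name)

        def derive(self, **kwargs):
            return Options(delegate=self, **kwargs)

    defaults = Options()
    per_call = defaults.derive(timeout=60)
    assert per_call.timeout == 60 and per_call.retry is None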
class URLGrabber

  Provides easy opening of URLs with a variety of options.

  All options are specified as kwargs. Options may be specified when
  the class is created and may be overridden on a per-request basis.

  New objects inherit default values from default_grabber.

  URLGrabber.urlopen(url, **kwargs)

    open the url and return a file object. If a progress object or
    throttle value was specified when this object was created, then a
    special file object will be returned that supports them. The file
    object can be treated like any other file object.

  URLGrabber.urlgrab(url, filename=None, **kwargs)

    grab the file at <url> and make a local copy at <filename>. If
    filename is none, the basename of the url is used. urlgrab
    returns the filename of the local file, which may be different
    from the passed-in filename if copy_local == 0.
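  For illustration, instance-wide defaults overridden per call (the
  URLs are placeholders):

    from urlgrabber.grabber import URLGrabber

    g = URLGrabber(retry=3, timeout=120, user_agent='myapp/1.0')
    g.urlgrab('http://example.com/a.txt', '/tmp/a.txt')
    g.urlgrab('http://example.com/b.txt', '/tmp/b.txt',
              retry=None)  # per-call override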
  URLGrabber.urlread(url, limit=None, **kwargs)

    read the url into a string, up to 'limit' bytes. If the limit is
    exceeded, an exception will be thrown. Note that urlread is NOT
    intended to be used as a way of saying "I want the first N bytes"
    but rather 'read the whole file into memory, but don't use too
    much'.

class PyCurlFileObject

  (No class docstring; this is the pycurl-backed file object used
  internally by urlopen() and urlgrab(). Its internals, including the
  curl option setup, header parsing and the curl-error to
  URLGrabError mapping, are compiled bytecode with only the following
  docstrings recoverable.)

  PyCurlFileObject.__getattr__(name)

    This effectively allows us to wrap at the instance level. Any
    attribute not found in _this_ object will be searched for in
    self.fo. This includes methods.
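  The wrapping trick __getattr__ describes can be sketched on its own
  (a simplified stand-in, not the module's actual code):

    import io

    class FileWrapper(object):
        def __init__(self, fo):
            self.fo = fo

        def read(self, amt=None):
            # buffering, progress and throttle hooks would go here
            return self.fo.read(amt) if amt is not None else self.fo.read()

        def __getattr__(self, name):
            # invoked only when normal lookup fails
            return getattr(self.fo, name)

    w = FileWrapper(io.BytesIO(b'hello'))
    assert w.read(2) == b'he'
    assert w.tell() == 2  # 'tell' resolved on the wrapped object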
  PyCurlFileObject._do_grab()

    dump the file to a filename or StringIO buffer.

  PyCurlFileObject._fill_buffer(amt=None)

    fill the buffer to contain at least 'amt' bytes by reading from
    the underlying file object. If amt is None, then it will read
    until it gets nothing more. It updates the progress meter and
    throttles after every self._rbufsize bytes.
  PyCurlFileObject.geturl()

    Provide the geturl() method, used to be got from
    urllib.addinfourl, via. urllib.URLopener.*

reset_curl_obj()

  To make sure curl has reread the network/dns info we force a
  reload.

set_throttle(new_throttle)

  Deprecated. Use: default_grabber.throttle = new_throttle

set_bandwidth(new_bandwidth)

  Deprecated. Use: default_grabber.bandwidth = new_bandwidth

set_progress_obj(new_progress_obj)

  Deprecated. Use: default_grabber.progress_obj = new_progress_obj

set_user_agent(new_user_agent)

  Deprecated. Use: default_grabber.user_agent = new_user_agent
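  The throttling _fill_buffer describes amounts to sleeping between
  reads so the average rate stays at or below raw_throttle(); a rough
  standalone sketch (not the module's actual code):

    import time

    def throttled_copy(src, dst, rate, bufsize=8192):
        """Copy src to dst, averaging at most `rate' bytes/second."""
        start = time.time()
        done = 0
        while True:
            if rate:
                expected = done / float(rate)  # seconds this much data should take
                delay = expected - (time.time() - start)
                if delay > 0:
                    time.sleep(delay)
            buf = src.read(bufsize)
            if not buf:
                break
            dst.write(buf)
            done += len(buf)
        return done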
retrygrab(url, filename=None, ...)

  Deprecated. Use: urlgrab() with the retry arg instead.

[The parallel download machinery is likewise compiled bytecode with
no docstrings to recover: the option (de)serialization helpers,
_ExternalDownloader and _ExternalDownloaderPool (which drive the
/usr/libexec/urlgrabber-ext-down helper process), and parallel_wait()
itself, including its mirror failover ("No more mirrors to try.") and
max_connections accounting.]
[The rest of the dump is also compiled bytecode without docstrings:
the timed-hosts statistics bookkeeping behind the timedhosts option,
and the self-test harness (_main_test, _retry_test,
_file_object_test and the _test_file_object_* helpers) that runs when
the module is executed directly.]