IronPython.Modules Copy the latest data from the memory buffer. This won't always contain data, because compressed data is only written after a block is filled. Add data to the input buffer. This manipulates the position of the stream to make it appear to the BZip2 stream that nothing has actually changed. The data to append to the buffer. Base class used for iterator wrappers. Error function on real values. Complementary error function on real values: erfc(x) = 1 - erf(x). Gamma function on real values. Natural log of the absolute value of the Gamma function. Provides helper functions which need to be called from generated code to implement various portions of modules. Checks whether the specific permissions provided by the mode parameter are available for the provided path. Permissions can be: F_OK: Check to see if the file exists. R_OK | W_OK | X_OK: Check for the specific permissions. Only W_OK is respected. A single instance of the environment dictionary is shared between multiple runtimes because the environment itself is shared by multiple runtimes. Spawns a new process. If mode is nt.P_WAIT then the call blocks until the process exits and the return value is the exit code. Otherwise the call returns a handle to the process. The caller must then call nt.waitpid(pid, options) to free the handle and get the exit code of the process. Failure to call nt.waitpid will result in a handle leak. Spawns a new process. If mode is nt.P_WAIT then the call blocks until the process exits and the return value is the exit code. Otherwise the call returns a handle to the process. The caller must then call nt.waitpid(pid, options) to free the handle and get the exit code of the process. Failure to call nt.waitpid will result in a handle leak. Copies elements from a Python mapping of environment variables to a StringDictionary. Converts a sequence of args to a string suitable for use when spawning a process. Python regular expression module. 
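The real-valued special functions listed above (erf, erfc, gamma, lgamma) correspond directly to CPython's math module, which makes the documented identities easy to check:

```python
import math

# erfc is the complement of erf: erfc(x) = 1 - erf(x)
x = 0.5
assert abs(math.erfc(x) - (1 - math.erf(x))) < 1e-12

# lgamma is the natural log of the absolute value of the Gamma function
y = 3.5
assert abs(math.lgamma(y) - math.log(abs(math.gamma(y)))) < 1e-12

# For positive integers, gamma(n) == factorial(n - 1)
assert math.gamma(5) == 24.0
```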
Compiled regex pattern. Preparses regular expression text, returning a ParsedRegex class that can be used for further regular expression processing. Implements a resource-based meta_path importer as described in PEP 302. Instantiates a new meta_path importer using an embedded ZIP resource file. Process a sequence of objects that are compatible with ObjectToSocket(). Return two things as out params: an in-order List of sockets that correspond to the original objects in the passed-in sequence, and a mapping of these socket objects to their original objects. The socketToOriginal mapping is generated because the CPython select module supports passing to select either file descriptor numbers or an object with a fileno() method. We try to be faithful to what was originally requested when we return. Return the System.Net.Sockets.Socket object that corresponds to the passed-in object. obj can be a System.Net.Sockets.Socket, a PythonSocket.SocketObj, a long integer (representing a socket handle), or a Python object with a fileno() method (whose result is used to look up an existing PythonSocket.SocketObj, which is in turn converted to a Socket). Represents the date components that we found while parsing the date. Used for zeroing out values which have different defaults from CPython. Currently we only know that we need to do this for the year. Returns the underlying .NET RegistryKey. Samples on how to subtype built-in types from C#. An int variable for demonstration purposes. An int variable for demonstration purposes. Creates an optimized encoding mapping that can be consumed by an optimized version of charmap_encode/charmap_decode. Encodes the input string with the specified optimized encoding map. Decodes the input string using the provided string mapping. Optimized encoding mapping that can be consumed by charmap_encode/EncodingMapEncoding. This implementation is not suitable for incremental encoding. This implementation is not suitable for incremental encoding. 
Walks the queue calling back to the specified delegate for each populated index in the queue. Returns the dialects from the code context. Provides support for interop with native code from Python code. The meta class for ctypes array instances. Converts an object into a function call parameter. Base class for all ctypes interop types. Creates a new CFuncPtr object from a tuple. The 1st element of the tuple is the ordinal or function name. The second is an object with a _handle property. The _handle property is the handle of the module from which the function will be loaded. Creates a new CFuncPtr which calls a COM method. Creates a new CFuncPtr with the specified address. Creates a new CFuncPtr with the specified address. We need to keep alive any methods which have arguments for the duration of the call. Otherwise they could be collected on the finalizer thread before we come back. 
Creates a method for calling with the specified signature. The returned method has a signature of the form: (IntPtr funcAddress, arg0, arg1, ..., object[] constantPool) where IntPtr is the address of the function to be called. The argument types are based upon the types that the ArgumentMarshaller requires. Base class for marshalling arguments from the user provided value to the call stub. This class provides the logic for creating the call stub and calling it. Emits the IL to get the argument for the call stub generated into a dynamic method. Gets the expression used to provide the argument. This is the expression from an incoming DynamicMetaObject. Gets an expression which keeps alive the argument for the duration of the call. Returns null if a keep alive is not necessary. Provides marshalling of primitive values when the function type has no type information or when the user has provided us with an explicit cdata instance. Provides marshalling for when the function type provides argument information. Provides marshalling for when the user provides a native argument object (usually obtained by byref or pointer) and the function type has no type information. The meta class for ctypes function pointer instances. Converts an object into a function call parameter. Fields are created when a Structure is defined and provide introspection of the structure. Called for fields which have been limited to a range of bits. Given the value for the full type this extracts the individual bits. Called for fields which have been limited to a range of bits. Sets the specified value into the bits for the field. Common functionality that all of the meta classes provide which is part of our implementation. This is used to implement the serialization/deserialization of values into/out of memory, emit the marshalling logic for call stubs, and provide common information (size/alignment) for the types. 
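The bit-field get/set operations described above amount to masking and shifting within the containing value. A hypothetical sketch (the function names are illustrative, not IronPython's):

```python
def extract_bits(full_value: int, bit_offset: int, bit_count: int) -> int:
    """Extract bit_count bits starting at bit_offset from the full value."""
    mask = (1 << bit_count) - 1
    return (full_value >> bit_offset) & mask

def set_bits(full_value: int, bit_offset: int, bit_count: int, new_bits: int) -> int:
    """Return full_value with the field's bits replaced by new_bits."""
    mask = ((1 << bit_count) - 1) << bit_offset
    return (full_value & ~mask) | ((new_bits << bit_offset) & mask)

# A 4-bit field at offset 4 survives a round trip without touching other bits
v = set_bits(0, 4, 4, 0b1010)
assert v == 0b10100000
assert extract_bits(v, 4, 4) == 0b1010
```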
Gets the native size of the type. Gets the required alignment for the type. Deserializes the value of this type from the given address at the given offset. Any new objects which are created will keep the provided MemoryHolder alive. raw determines if the cdata is returned or if the primitive value is returned. This is only applicable for subtypes of simple cdata types. Serializes the provided value into the specified address at the given offset. Gets the .NET type which is used when calling or returning the value from native code. Gets the .NET type which the native type is converted into when going to Python code. This is usually int, BigInt, double, object, or a CData type. Emits marshalling of an object from Python to native code. This produces the native type from the Python type. Emits marshalling from native code to Python code. This produces the Python type from the native type. This is used for return values and parameters to Python callable objects that are passed back out to native code. Returns a string which describes the type. Used for the _buffer_info implementation which only exists for testing purposes. The meta class for ctypes pointers. Converts an object into a function call parameter. Access an instance at the specified address. The meta class for ctypes simple data types. These include primitives like ints, floats, etc., char/wchar pointers, and untyped pointers. Converts an object into a function call parameter. Helper function for reading char/wchar's. This is used for reading from arrays and pointers to avoid creating lots of 1-char strings. The enum used for tracking the various ctypes primitive types. 'c' 'b' 'B' 'h' 'H' 'i' 'I' 'l' 'L' 'f' 'd', 'g' 'q' 'Q' 'O' 'P' 'z' 'Z' 'u' '?' 'v' 'X' Meta class for structures. Validates _fields_ on creation, provides factory methods for creating instances from addresses and translating to parameters. Converts an object into a function call parameter. Structures just return themselves. 
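The single-character codes listed above are the same codes CPython's ctypes exposes on its simple types via the _type_ attribute, which can be checked directly:

```python
import ctypes

# Each ctypes simple type carries a one-character type code in _type_,
# matching the primitive-type codes enumerated above.
expected = [
    (ctypes.c_char, 'c'),
    (ctypes.c_byte, 'b'),
    (ctypes.c_ubyte, 'B'),
    (ctypes.c_int, 'i'),
    (ctypes.c_double, 'd'),
    (ctypes.c_bool, '?'),
]
for ctype, code in expected:
    assert ctype._type_ == code
```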
If our size/alignment hasn't been initialized then grabs the size/alignment from all of our base classes. If later new _fields_ are added we'll be initialized and these values will be replaced. Base class for data structures. Subclasses can define _fields_ which specifies the in-memory layout of the values. Instances can then be created with the initial values provided as the array. The values can then be accessed from the instance by field name. The value can also be passed to a foreign C API and the type can be used in other structures. For example:

class MyStructure(Structure):
    _fields_ = [('a', c_int), ('b', c_int)]

MyStructure(1, 2).a
MyStructure()

class MyOtherStructure(Structure):
    _fields_ = [('c', MyStructure), ('b', c_int)]

MyOtherStructure((1, 2), 3)
MyOtherStructure(MyStructure(1, 2), 3)

The meta class for ctypes unions. Converts an object into a function call parameter. Gets a function which casts the specified memory. Because this is used only with the Python API we use a delegate as the return type instead of an actual address. Implementation of our cast function. data is marshalled as a void* so it ends up as an address. obj and type are marshalled as an object so we need to unmarshal them. Returns a new type which represents a pointer given the existing type. Converts an address acquired from PyObj_FromPtr or that has been marshaled as type 'O' back into an object. Converts an object into an opaque address which can be handed out to managed code. Decreases the ref count on an object which has been increased with Py_INCREF. Increases the ref count on an object, ensuring that it will not be collected. Returns the address of a C instance's internal buffer. It is the caller's responsibility to ensure that the provided instance stays alive if memory at the resulting address is to be used later. Gets the required alignment of the given type. Gets the required alignment of an object. 
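The Structure example above follows the same _fields_ protocol as CPython's ctypes, so most of it runs unchanged there (the tuple form of nested initialization is omitted here):

```python
from ctypes import Structure, c_int, sizeof

class MyStructure(Structure):
    _fields_ = [('a', c_int), ('b', c_int)]

class MyOtherStructure(Structure):
    _fields_ = [('c', MyStructure), ('b', c_int)]

s = MyStructure(1, 2)          # initial values provided in field order
empty = MyStructure()          # unspecified fields default to zero
outer = MyOtherStructure(MyStructure(1, 2), 3)

assert s.a == 1 and s.b == 2
assert empty.a == 0
assert outer.c.b == 2 and outer.b == 3
# The in-memory layout is two packed c_int fields
assert sizeof(MyStructure) == 2 * sizeof(c_int)
```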
Returns a pointer instance for the given CData. Gets the ModuleBuilder used to generate our unsafe call stubs into. Given a specific size, returns a .NET type of the equivalent size that we can use when marshalling these values across calls. Shared helper between struct and union for getting field info and validating it. Verifies that the provided bit field settings are valid for this type. Shared helper to get the _fields_ list for struct/union and validate it. Helper function for translating from memset to NT's FillMemory API. Helper function for translating from memset to NT's FillMemory API. Emits the marshalling code to create a CData object for reverse marshalling. Wrapper class for emitting locals/variables during marshalling code gen. A wrapper around allocated memory to ensure it gets released and isn't accessed when it could be finalized. Creates a new MemoryHolder and allocates a buffer of the specified size. Creates a new MemoryHolder at the specified address which is not tracked by us and which we will never free. Creates a new MemoryHolder at the specified address which will keep alive the parent memory holder. Gets the address of the held memory. The caller should ensure the MemoryHolder is kept alive as long as the address will continue to be accessed. Gets a list of objects which need to be kept alive for this MemoryHolder to remain valid. Used to track the lifetime of objects when one memory region depends upon another memory region. For example, if you have an array of objects that each have an element with its own lifetime, the array needs to keep the individual elements alive. The keys used here match CPython's keys as tested by CPython's test_ctypes. Typically they are a string which is the array index, "ffffffff" when from_buffer is used, or when it's a simple type there's just a string instead of the full dictionary - we store that under the key "str". Copies the data in data into this MemoryHolder. 
Copies memory from one location to another keeping the associated memory holders alive during the operation. Native functions used for exposing ctypes functionality. Allocates memory that's zero-filled. Helper function for implementing memset. Could be more efficient if we could P/Invoke or call some otherwise native code to do this. Used to check the type to see if we can do a comparison. Returns true if we can or false if we should return NotImplemented. May throw if the type's really wrong. Helper function for doing the comparisons. Returns a new callable object with the provided initial set of arguments bound to it. Calling the new function then appends the additional user-provided arguments. Creates a new partial object with the provided positional arguments. Creates a new partial object with the provided positional and keyword arguments. Gets the function which will be called. Gets the initially provided positional arguments. Gets the initially provided keyword arguments. Gets or sets the dictionary used for storing extra attributes on the partial object. Calls func with the previously provided arguments and more positional arguments. Calls func with the previously provided arguments and more positional arguments and keyword arguments. Operator method to set arbitrary members on the partial object. Operator method to get additional arbitrary members defined on the partial object. Operator method to delete arbitrary members defined in the partial object. Generator based on the .NET Core implementation of System.Random. handleToSocket allows us to translate from Python's idea of a socket resource (file descriptor numbers) to .NET's idea of a socket resource (System.Net.Sockets.Socket objects). In particular, this allows the select module to take file numbers (as returned by fileno()) and convert them to Socket objects so that it can do something useful with them. 
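The partial-object behavior described above (bound positional and keyword arguments, plus a dictionary for arbitrary extra attributes) matches CPython's functools.partial:

```python
from functools import partial

def power(base, exponent, *, scale=1):
    return scale * base ** exponent

# Bind a keyword argument up front; later calls supply the rest
square = partial(power, exponent=2)
assert square(3) == 9

# The initially provided function and arguments are exposed on the object
assert square.func is power
assert square.args == ()
assert square.keywords == {'exponent': 2}

# Arbitrary extra attributes can be stored on the partial's dictionary
square.label = "squaring helper"
assert square.label == "squaring helper"
```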
Return the internal System.Net.Sockets.Socket socket object associated with the given handle (as returned by GetHandle()), or null if no corresponding socket exists. This is primarily intended to be used by other modules (such as select) that implement networking primitives. User code should not normally need to call this function. Create a Python socket object from an existing .NET socket object (like one returned from Socket.Accept()) Perform initialization common to all constructors Convert an object to a 32-bit integer. This adds two features to Converter.ToInt32: 1. Sign is ignored. For example, 0xffff0000 converts to 4294901760, where Convert.ToInt32 would throw because 0xffff0000 is less than zero. 2. Overflow exceptions are thrown. Converter.ToInt32 throws TypeError if x is an integer, but is bigger than 32 bits. Instead, we throw OverflowException. Convert an object to a 16-bit integer. This adds two features to Converter.ToInt16: 1. Sign is ignored. For example, 0xff00 converts to 65280, where Convert.ToInt16 would throw because signed 0xff00 is -256. 2. Overflow exceptions are thrown. Converter.ToInt16 throws TypeError if x is an integer, but is bigger than 16 bits. Instead, we throw OverflowException. Return a standard socket exception (socket.error) whose message and error code come from a SocketException This will eventually be enhanced to generate the correct error type (error, herror, gaierror) based on the error code. Convert an IPv6 address byte array to a string in standard colon-hex notation. The .NET IPAddress.ToString() method uses dotted-quad for the last 32 bits, which differs from the normal Python implementation (but is allowed by the IETF); this method returns the standard (no dotted-quad) colon-hex form. Handle conversion of "" to INADDR_ANY and "<broadcast>" to INADDR_BROADCAST. Otherwise returns host unchanged. Return the IP address associated with host, with optional address family checking. 
host may be either a name or an IP address (in string form). If family is non-null, a gaierror will be thrown if the host's address family is not the same as the specified family. gaierror is also raised if the hostname cannot be converted to an IP address (e.g. through a name lookup failure). Return the IP address associated with host, with optional address family checking. host may be either a name or an IP address (in string form). If family is non-null, a gaierror will be thrown if the host's address family is not the same as the specified family. gaierror is also raised if the hostname cannot be converted to an IP address (e.g. through a name lookup failure). Return fqdn, but with its domain removed if it's on the same domain as the local machine. Convert a (host, port) tuple [IPv4] or a (host, port, flowinfo, scopeid) tuple [IPv6] to its corresponding IPEndPoint. Throws gaierror if host is not a valid address. Throws ArgumentTypeException if any of the following are true:
- address does not have exactly two elements
- address[0] is not a string
- address[1] is not an int
Convert an IPEndPoint to its corresponding (host, port) [IPv4] or (host, port, flowinfo, scopeid) [IPv6] tuple. Throws SocketException if the address family is other than IPv4 or IPv6. BER encoding of an integer value is the number of bytes required to represent the integer, followed by the bytes. Enum which specifies the format type for a compiled struct. Struct used to store the format and the number of times it should be repeated. 
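The format-and-repeat-count pairs described above mirror CPython's struct format strings, where a leading count repeats the following format code:

```python
import struct

# '<3i' = little-endian, the 'i' (int32) format repeated 3 times
packed = struct.pack('<3i', 1, 2, 3)
assert len(packed) == struct.calcsize('<3i') == 12
assert struct.unpack('<3i', packed) == (1, 2, 3)

# A repeat count before 'h' (int16) packs consecutive values of that format
assert struct.pack('<2h', 5, 6) == struct.pack('<h', 5) + struct.pack('<h', 6)
```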
Special equals because none of the special cases in Ops.Equals are applicable here, and the reference equality check breaks some tests. Gets the object or throws a reference exception. Gets the object or throws a reference exception. zip_searchorder defines how we search for a module in the Zip archive: we first search for a package __init__, then for non-package .pyc, .pyo and .py entries. The .pyc and .pyo entries are swapped by initzipimport() if we run in optimized mode. Also, '/' is replaced by SEP there. Given a path to a Zip file and a toc_entry, return the (uncompressed) data as a new reference. Return the code object for the module named by 'fullname' from the Zip archive as a new reference. Given a path to a Zip archive, build a dict mapping file names (local to the archive, using SEP as a separator) to toc entries. A toc_entry is a tuple:

(__file__,     # value to use for __file__, available for all files
 compress,     # compression kind; 0 for uncompressed
 data_size,    # size of compressed data on disk
 file_size,    # size of decompressed data
 file_offset,  # offset of file header from start of archive
 time,         # mod time of file (in dos format)
 date,         # mod date of file (in dos format)
 crc,          # crc checksum of the data
)

Directories can be recognized by the trailing SEP in the name; their data_size and file_offset are 0. Given a (sub)modulename, write the potential file path in the archive (without extension) to the path buffer. Determines the type of module we have (package or module, or not found). Delivers the remaining bits, left-aligned, in a byte. This is valid only if NumRemainingBits is less than 8; in other words it is valid only after a call to Flush(). Reset the BitWriter. This is useful when the BitWriter writes into a MemoryStream and is used by a BZip2Compressor, which itself is re-used for multiple distinct data blocks. Write some number of bits from the given value into the output. The nbits value should be a max of 25, for safety. 
For performance reasons, this method does not check! Write a full 8-bit byte into the output. Write four 8-bit bytes into the output. Write all available byte-aligned bytes. This method writes no new output, but flushes any accumulated bits. At completion, the accumulator may contain up to 7 bits. This is necessary when re-assembling output from N independent compressors, one for each of N blocks. The output of any particular compressor will in general have some fragment of a byte remaining. This fragment needs to be accumulated into the parent BZip2OutputStream. Writes all available bytes, and emits padding for the final byte as necessary. This must be the last method invoked on an instance of BitWriter. Knuth's increments seem to work better than Incerpi-Sedgewick here. Possibly because the number of elems to sort is usually small, typically <= 20. BZip2Compressor writes its compressed data out via a BitWriter. This is necessary because BZip2 does byte shredding. The number of uncompressed bytes being held in the buffer. I am thinking this may be useful in a Stream that uses this compressor class. In the Close() method on the stream it could check this value to see if anything has been written at all. You may think the stream could easily track the number of bytes it wrote, which would eliminate the need for this. But, there is the case where the stream writes a complete block, and it is full, and then writes no more. In that case the stream may want to check. Accept new bytes into the compressor data buffer This method does the first-level (cheap) run-length encoding, and stores the encoded data into the rle block. Process one input byte into the block. To "process" the byte means to do the run-length encoding. There are 3 possible return values: 0 - the byte was not written, in other words, not encoded into the block. This happens when the byte b would require the start of a new run, and the block has no more room for new runs. 
1 - the byte was written, and the block is not full. 2 - the byte was written, and the block is full. 0 if the byte was not written, non-zero if written. Append one run to the output block. This compressor does run-length encoding before the BWT and subsequent stages. This method simply appends a run to the output block. The append always succeeds. The return value indicates whether the block is full: false (not full) implies that at least one additional run could be processed. true if the block is now full; otherwise false. Compress the data that has been placed (run-length-encoded) into the block. The compressed data goes into the CompressedBytes array. Side effects: 1. fills the CompressedBytes array. 2. sets the AvailableBytesOut property. This is the most hammered method of this class.
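The first-level (cheap) run-length encoding described above can be sketched as follows. This is a simplified illustration of the scheme (a run of four or more identical bytes becomes four literal copies plus a count byte of run length minus four), not the compressor's actual code:

```python
def rle1_encode(data: bytes) -> bytes:
    """First-level RLE as used by bzip2: runs of 4..259 identical bytes
    become four literal copies followed by a count byte (run - 4)."""
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        run = 1
        # A single run is capped at 259 bytes (4 literals + count byte 255)
        while i + run < len(data) and data[i + run] == b and run < 259:
            run += 1
        if run >= 4:
            out.extend([b, b, b, b, run - 4])
        else:
            out.extend([b] * run)
        i += run
    return bytes(out)

assert rle1_encode(b"ABC") == b"ABC"        # short runs pass through
assert rle1_encode(b"A" * 7) == b"AAAA\x03" # 7 = 4 literals + count 3
```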

This is the version using unrolled loops.

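The compression and decompression pipeline these classes implement is the same one exposed by CPython's bz2 module, which makes for an easy round-trip check:

```python
import bz2

original = b"hello bzip2 " * 200

# One-shot round trip; compresslevel corresponds to the 1..9 blocksize knob
compressed = bz2.compress(original, compresslevel=9)
assert bz2.decompress(compressed) == original

# Incremental decompression, analogous to calling Read() on a
# decompressing input stream until the data is exhausted
decomp = bz2.BZ2Decompressor()
out = decomp.decompress(compressed)
assert out == original and decomp.eof
```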
Method "mainQSort3", file "blocksort.c", BZip2 1.0.2. Array instance identical to sfmap; both are used only temporarily and independently, so we do not need to allocate additional memory. A read-only decorator stream that performs BZip2 decompression on Read. Compressor state. Create a BZip2InputStream, wrapping it around the given input Stream. The input stream will be closed when the BZip2InputStream is closed. The stream from which to read compressed data. Create a BZip2InputStream with the given stream, specifying whether to leave the wrapped stream open when the BZip2InputStream is closed. The stream from which to read compressed data. Whether to leave the input stream open when the BZip2InputStream closes. This example reads a bzip2-compressed file, decompresses it, and writes the decompressed data into a newly created file.

var fname = "logfile.log.bz2";
using (var fs = File.OpenRead(fname))
{
    using (var decompressor = new Ionic.BZip2.BZip2InputStream(fs))
    {
        var outFname = fname + ".decompressed";
        using (var output = File.Create(outFname))
        {
            byte[] buffer = new byte[2048];
            int n;
            while ((n = decompressor.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, n);
            }
        }
    }
}

Read data from the stream. To decompress a BZip2 data stream, create a BZip2InputStream, providing a stream that reads compressed data. Then call Read() on that BZip2InputStream, and the data read will be decompressed as you read. A BZip2InputStream can be used only for Read(), not for Write(). The buffer into which the read data should be placed. The offset within that data array to put the first byte read. The number of bytes to read. The number of bytes actually read. Read a single byte from the stream. The byte read from the stream, or -1 if EOF. Indicates whether the stream can be read. The return value depends on whether the captive stream supports reading. Indicates whether the stream supports Seek operations. Always returns false. Indicates whether the stream can be written. 
The return value depends on whether the captive stream supports writing. Flush the stream. Reading this property always throws an exception. The position of the stream pointer. Setting this property always throws an exception. Reading will return the total number of uncompressed bytes read in. Calling this method always throws an exception. This is irrelevant, since it will always throw. This is irrelevant, since it will always throw. Irrelevant. Calling this method always throws an exception. This is irrelevant, since it will always throw. Calling this method always throws an exception. This parameter is never used. This parameter is never used. This parameter is never used. Dispose the stream. Indicates whether the Dispose method was invoked by user code. Close the stream. Read n bits from input, right justifying the result. For example, if you read 1 bit, the result is either 0 or 1. The number of bits to read, always between 1 and 32. Called by createHuffmanDecodingTables() exclusively. Called by recvDecodingTables() exclusively. Freq table collected to save a pass over the data during decompression. Initializes the tt array. This method is called when the required length of the array is known. I don't initialize it at construction time to avoid unnecessary memory allocation when compressing small files. Dump the current state of the decompressor, to restore it in case of an error. This allows the decompressor to be essentially "rewound" and retried when more data arrives. This is only used by IronPython. The current state. Restore the internal compressor state if an error occurred. The old state. A write-only decorator stream that compresses data as it is written using the BZip2 algorithm. Constructs a new BZip2OutputStream that sends its compressed output to the given output stream. The destination stream, to which compressed output will be sent. This example reads a file, compresses it with bzip2, and writes the compressed data into a newly created file. 
var fname = "logfile.log";
using (var fs = File.OpenRead(fname))
{
    var outFname = fname + ".bz2";
    using (var output = File.Create(outFname))
    {
        using (var compressor = new Ionic.BZip2.BZip2OutputStream(output))
        {
            byte[] buffer = new byte[2048];
            int n;
            while ((n = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                compressor.Write(buffer, 0, n);
            }
        }
    }
}

Constructs a new BZip2OutputStream with the specified blocksize. The destination stream. The blockSize in units of 100000 bytes. The valid range is 1..9. Constructs a new BZip2OutputStream. The destination stream. Whether to leave the captive stream open upon closing this stream. Constructs a new BZip2OutputStream with the specified blocksize, and explicitly specifies whether to leave the wrapped stream open. The destination stream. The blockSize in units of 100000 bytes. The valid range is 1..9. Whether to leave the captive stream open upon closing this stream. Close the stream. This may or may not close the underlying stream. Check the constructors that accept a bool value. Flush the stream. The blocksize parameter specified at construction time. Write data to the stream. Use the BZip2OutputStream to compress data while writing: create a BZip2OutputStream with a writable output stream. Then call Write() on that BZip2OutputStream, providing uncompressed data as input. The data sent to the output stream will be the compressed form of the input data. A BZip2OutputStream can be used only for Write(), not for Read(). The buffer holding data to write to the stream. The offset within that data array to find the first byte to write. The number of bytes to write. Indicates whether the stream can be read. The return value is always false. Indicates whether the stream supports Seek operations. Always returns false. Indicates whether the stream can be written. The return value should always be true, unless and until the object is disposed and closed. Reading this property always throws an exception. The position of the stream pointer. 
Setting this property always throws an exception. Reading will return the total number of uncompressed bytes written through. Calling this method always throws an exception. This is irrelevant, since it will always throw. This is irrelevant, since it will always throw. Irrelevant. Calling this method always throws an exception. This is irrelevant, since it will always throw. Calling this method always throws an exception. This parameter is never used. This parameter is never used. This parameter is never used. Never returns anything; always throws. A write-only decorator stream that compresses data as it is written using the BZip2 algorithm. This stream compresses by block using multiple threads. This class performs BZIP2 compression through writing. For more information on the BZIP2 algorithm, see the bzip2 documentation. This class is similar to the single-threaded BZip2OutputStream, except that this implementation uses an approach that employs multiple worker threads to perform the compression. On a multi-cpu or multi-core computer, the performance of this class can be significantly higher than the single-threaded BZip2OutputStream, particularly for larger streams. How large? Anything over 10 MB is a good candidate for parallel compression. The tradeoff is that this class uses more memory and more CPU than the vanilla BZip2OutputStream. Also, for small files, the ParallelBZip2OutputStream can be much slower than the vanilla BZip2OutputStream, because of the overhead associated with using the thread pool. Constructs a new ParallelBZip2OutputStream that sends its compressed output to the given output stream. The destination stream, to which compressed output will be sent. This example reads a file, compresses it with bzip2, and writes the compressed data into a newly created file. 
var fname = "logfile.log";
using (var fs = File.OpenRead(fname))
{
    var outFname = fname + ".bz2";
    using (var output = File.Create(outFname))
    {
        using (var compressor = new Ionic.BZip2.ParallelBZip2OutputStream(output))
        {
            byte[] buffer = new byte[2048];
            int n;
            while ((n = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                compressor.Write(buffer, 0, n);
            }
        }
    }
}

Constructs a new ParallelBZip2OutputStream with the specified blocksize. The destination stream. The blockSize in units of 100000 bytes. The valid range is 1..9. Constructs a new ParallelBZip2OutputStream. The destination stream. Whether to leave the captive stream open upon closing this stream. Constructs a new ParallelBZip2OutputStream with the specified blocksize, and explicitly specifies whether to leave the wrapped stream open. The destination stream. The blockSize in units of 100000 bytes. The valid range is 1..9. Whether to leave the captive stream open upon closing this stream. The maximum number of concurrent compression worker threads to use. This property sets an upper limit on the number of concurrent worker threads to employ for compression. The implementation of this stream employs multiple threads from the .NET thread pool, via ThreadPool.QueueUserWorkItem(), to compress the incoming data by block. As each block of data is compressed, this stream re-orders the compressed blocks and writes them to the output stream. A higher number of workers enables a higher degree of parallelism, which tends to increase the speed of compression on multi-cpu computers. On the other hand, a higher number of buffer pairs also implies a larger memory consumption, more active worker threads, and a higher cpu utilization for any compression. This property enables the application to limit its memory consumption and CPU utilization behavior depending on requirements. By default, DotNetZip allocates 4 workers per CPU core, subject to the upper limit specified in this property. 
For example, suppose the application sets this property to 16. Then, on a machine with 2 cores, DotNetZip will use 8 workers; that number does not exceed the upper limit specified by this property, so the actual number of workers used will be 4 * 2 = 8. On a machine with 4 cores, DotNetZip will use 16 workers; again, the limit does not apply. On a machine with 8 cores, DotNetZip will use 16 workers, because of the limit. For each compression "worker thread" that occurs in parallel, there is up to 2 MB of memory allocated, for buffering and processing. The actual number depends on the property. CPU utilization will also go up with additional workers, because a larger number of buffer pairs allows a larger number of background threads to compress in parallel. If you find that parallel compression is consuming too much memory or CPU, you can adjust this value downward. The default value is 16. Different values may deliver better or worse results, depending on your priorities and the dynamic performance characteristics of your storage and compute resources. The application can set this value at any time, but it is effective only before the first call to Write(), which is when the buffers are allocated. Close the stream. This may or may not close the underlying stream. Check the constructors that accept a bool value. Flush the stream. The blocksize parameter specified at construction time. Write data to the stream. Use the ParallelBZip2OutputStream to compress data while writing: create a ParallelBZip2OutputStream with a writable output stream. Then call Write() on that ParallelBZip2OutputStream, providing uncompressed data as input. The data sent to the output stream will be the compressed form of the input data. A ParallelBZip2OutputStream can be used only for Write(), not for Read(). The buffer holding data to write to the stream. the offset within that data array to find the first byte to write. the number of bytes to write. Indicates whether the stream can be read.
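The worker-count rule described above (4 workers per core, capped by this property) amounts to a simple minimum; a small sketch with the worked examples from the text (the function name is hypothetical, for illustration only):

```python
def effective_workers(cores, max_workers=16):
    # DotNetZip allocates 4 workers per CPU core, subject to the cap
    # set by the MaxWorkers-style property.
    return min(4 * cores, max_workers)

# Worked examples from the text, with the property set to 16:
assert effective_workers(2) == 8    # 4 * 2 = 8, under the cap
assert effective_workers(4) == 16   # 4 * 4 = 16, exactly at the cap
assert effective_workers(8) == 16   # 4 * 8 = 32, capped at 16
```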
The return value is always false. Indicates whether the stream supports Seek operations. Always returns false. Indicates whether the stream can be written. The return value depends on whether the captive stream supports writing. Reading this property always throws a . The position of the stream pointer. Setting this property always throws a . Reading will return the total number of uncompressed bytes written through. The total number of bytes written out by the stream. This value is meaningful only after a call to Close(). Calling this method always throws a . this is irrelevant, since it will always throw! this is irrelevant, since it will always throw! irrelevant! Calling this method always throws a . this is irrelevant, since it will always throw! Calling this method always throws a . this parameter is never used this parameter is never used this parameter is never used never returns anything; always throws Returns the "random" number at a specific index. the index the random number Computes a CRC-32. The CRC-32 algorithm is parameterized - you can set the polynomial and enable or disable bit reversal. This can be used for GZIP, BZip2, or ZIP. This type is used internally by DotNetZip; it is generally not used directly by applications wishing to create, read, or manipulate zip archive files. Indicates the total number of bytes applied to the CRC. Indicates the current CRC for all blocks slurped in. Returns the CRC32 for the specified stream. The stream over which to calculate the CRC32 the CRC32 calculation Returns the CRC32 for the specified stream, and writes the input into the output stream. The stream over which to calculate the CRC32 The stream into which to deflate the input the CRC32 calculation Get the CRC32 for the given (word,byte) combo. This is a computation defined by PKzip for PKZIP 2.0 (weak) encryption. The word to start with. The byte to combine it with. The CRC-ized result. Update the value for the running CRC32 using the given block of bytes. 
This is useful when using the CRC32() class in a Stream. block of bytes to slurp starting point in the block how many bytes within the block to slurp Process one byte in the CRC. the byte to include into the CRC. Process a run of N identical bytes into the CRC. This method serves as an optimization for updating the CRC when a run of identical bytes is found. Rather than passing in a buffer of length n, containing all identical bytes b, this method accepts the byte value and the length of the (virtual) buffer - the length of the run. the byte to include into the CRC. the number of times that byte should be repeated. Combines the given CRC32 value with the current running total. This is useful when using a divide-and-conquer approach to calculating a CRC. Multiple threads can each calculate a CRC32 on a segment of the data, and then combine the individual CRC32 values at the end. the crc value to be combined with this one the length of data the CRC value was calculated on Create an instance of the CRC32 class using the default settings: no bit reversal, and a polynomial of 0xEDB88320. Create an instance of the CRC32 class, specifying whether to reverse data bits or not. specify true if the instance should reverse data bits. In the CRC-32 used by BZip2, the bits are reversed. Therefore if you want a CRC32 with compatibility with BZip2, you should pass true here. In the CRC-32 used by GZIP and PKZIP, the bits are not reversed; therefore if you want a CRC32 with compatibility with those, you should pass false. Create an instance of the CRC32 class, specifying the polynomial and whether to reverse data bits or not. The polynomial to use for the CRC, expressed in the reversed (LSB) format: the highest ordered bit in the polynomial value is the coefficient of the 0th power; the second-highest order bit is the coefficient of the 1st power, and so on. Expressed this way, the polynomial for the CRC-32 used in IEEE 802.3 is 0xEDB88320.
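The running-CRC idea described above (updating the remainder over successive blocks rather than hashing one big buffer) can be illustrated with Python's zlib.crc32, which accepts the previous CRC as a starting value:

```python
import zlib

data = b"The quick brown fox jumps over the lazy dog"

# One-shot CRC over the whole buffer.
whole = zlib.crc32(data)

# Feed the same bytes block by block, chaining the running value,
# as a Stream wrapper like CrcCalculatorStream would do internally.
running = 0
for i in range(0, len(data), 8):
    running = zlib.crc32(data[i:i + 8], running)

assert running == whole
```

This incremental update is what makes a CRC-calculating decorator stream possible: each Read or Write slurps its block into the same running register.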
specify true if the instance should reverse data bits. In the CRC-32 used by BZip2, the bits are reversed. Therefore if you want a CRC32 with compatibility with BZip2, you should pass true here for the reverseBits parameter. In the CRC-32 used by GZIP and PKZIP, the bits are not reversed; Therefore if you want a CRC32 with compatibility with those, you should pass false for the reverseBits parameter. Reset the CRC-32 class - clear the CRC "remainder register." Use this when employing a single instance of this class to compute multiple, distinct CRCs on multiple, distinct data blocks. A Stream that calculates a CRC32 (a checksum) on all bytes read, or on all bytes written. This class can be used to verify the CRC of a ZipEntry when reading from a stream, or to calculate a CRC when writing to a stream. The stream should be used to either read, or write, but not both. If you intermix reads and writes, the results are not defined. This class is intended primarily for use internally by the DotNetZip library. The default constructor. Instances returned from this constructor will leave the underlying stream open upon Close(). The stream uses the default CRC32 algorithm, which implies a polynomial of 0xEDB88320. The underlying stream The constructor allows the caller to specify how to handle the underlying stream at close. The stream uses the default CRC32 algorithm, which implies a polynomial of 0xEDB88320. The underlying stream true to leave the underlying stream open upon close of the CrcCalculatorStream; false otherwise. A constructor allowing the specification of the length of the stream to read. The stream uses the default CRC32 algorithm, which implies a polynomial of 0xEDB88320. Instances returned from this constructor will leave the underlying stream open upon Close(). 
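A minimal table-driven sketch of the reflected (LSB-first) CRC-32 with the polynomial 0xEDB88320 discussed above; matching Python's zlib.crc32 (used here purely for verification) confirms the bit-reversed convention:

```python
import zlib

POLY = 0xEDB88320  # reversed (LSB-first) form of the IEEE 802.3 polynomial

# Build the 256-entry lookup table for the reflected algorithm.
TABLE = []
for n in range(256):
    c = n
    for _ in range(8):
        c = (c >> 1) ^ POLY if c & 1 else c >> 1
    TABLE.append(c)

def crc32(data, crc=0):
    crc ^= 0xFFFFFFFF                       # pre-condition the register
    for b in data:
        crc = (crc >> 8) ^ TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF                 # final XOR

# "123456789" is the conventional CRC check input.
assert crc32(b"123456789") == zlib.crc32(b"123456789")
```

The BZip2 variant the text mentions uses the same polynomial but shifts the register in the opposite (MSB-first) direction, which is what the reverseBits option selects.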
The underlying stream The length of the stream to slurp A constructor allowing the specification of the length of the stream to read, as well as whether to keep the underlying stream open upon Close(). The stream uses the default CRC32 algorithm, which implies a polynomial of 0xEDB88320. The underlying stream The length of the stream to slurp true to leave the underlying stream open upon close of the CrcCalculatorStream; false otherwise. A constructor allowing the specification of the length of the stream to read, as well as whether to keep the underlying stream open upon Close(), and the CRC32 instance to use. The stream uses the specified CRC32 instance, which allows the application to specify how the CRC gets calculated. The underlying stream The length of the stream to slurp true to leave the underlying stream open upon close of the CrcCalculatorStream; false otherwise. the CRC32 instance to use to calculate the CRC32 Gets the total number of bytes run through the CRC32 calculator. This is either the total number of bytes read, or the total number of bytes written, depending on the direction of this stream. Provides the current CRC for all blocks slurped in. The running total of the CRC is kept as data is written or read through the stream. read this property after all reads or writes to get an accurate CRC for the entire stream. Indicates whether the underlying stream will be left open when the CrcCalculatorStream is Closed. Set this at any point before calling . Read from the stream the buffer to read the offset at which to start the number of bytes to read the number of bytes actually read Write to the stream. the buffer from which to write the offset at which to start writing the number of bytes to write Indicates whether the stream supports reading. Indicates whether the stream supports seeking. Always returns false. Indicates whether the stream supports writing. Flush the stream. Returns the length of the underlying stream. 
The getter for this property returns the total bytes read. If you use the setter, it will throw . Seeking is not supported on this stream. This method always throws N/A N/A N/A This method always throws N/A Closes the stream. This class represents the Adler-32 checksum algorithm. This static method returns the Adler-32 checksum of the buffer data Implementation of the Deflate compression algorithm. Deflate algorithm configuration parameters class reduce lazy search above this match length do not perform lazy search above this match length quit search above this match length Constructor which initializes class inner fields Maximum memory level Default compression method Default memory level Deflate class configuration table block not completed, need more input or more output Block internalFlush performed Finish started, need only more output at next deflate finish done, accept no more input or output preset dictionary flag in zlib header The deflate compression method The size of the buffer repeat previous bit length 3-6 times (2 bits of repeat count) repeat a zero length 3-10 times (3 bits of repeat count) repeat a zero length 11-138 times (7 bits of repeat count) Gets or sets the Compression level. Gets or sets the Number of bytes in the pending buffer. Gets or sets the Output pending buffer. Gets or sets the next pending byte to output to the stream. Gets or sets a value indicating whether to suppress zlib header and adler32. Pointer back to this zlib stream As the name implies Size of Pending_buf UNKNOWN, BINARY or ASCII STORED (for zip only) or DEFLATED Value of internalFlush parameter for previous deflate call LZ77 Window size (32K by default) log2(w_size) (8..16) w_size - 1 Sliding Window. Input bytes are read into the second half of the Window, and move to the first half later to keep a dictionary of at least wSize bytes.
With this organization, matches are limited to a distance of wSize-MAX_MATCH bytes, but this ensures that IO is always performed with a length multiple of the block size. Also, it limits the Window size to 64K, which is quite useful on MSDOS. To do: use the user input buffer as sliding Window. Actual size of Window: 2*wSize, except when the user input buffer is directly used as sliding Window. Link to older string with same hash index. To limit the size of this array to 64K, this link is maintained only for the last 32K strings. An index in this array is thus a Window index modulo 32K. Heads of the hash chains or NIL. hash index of string to be inserted number of elements in hash table log2(hash_size) hash_size-1 Number of bits by which ins_h must be shifted at each input step. It must be such that after MIN_MATCH steps, the oldest byte no longer takes part in the hash key, that is: hash_shift * MIN_MATCH >= hash_bits Window position at the beginning of the current output block. Gets negative when the Window is moved backwards. length of best match previous match set if previous match exists start of string to insert start of matching string number of valid bytes ahead in Window Length of the best match at previous step. Matches not greater than this are discarded. This is used in the lazy match evaluation. To speed up deflation, hash chains are never searched beyond this length. A higher limit improves compression ratio but degrades the speed. Attempt to find a better match only when the current match is strictly smaller than this value. This mechanism is used only for compression levels >= 4. 
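Returning to the Adler-32 checksum class introduced above: the algorithm is simple enough to sketch directly, and comparing against Python's zlib.adler32 (used here only for verification) checks the sketch:

```python
import zlib

MOD_ADLER = 65521  # largest prime below 2**16

def adler32(data, value=1):
    # Adler-32 keeps two running sums: s1 over the bytes,
    # and s2 over the successive values of s1.
    s1 = value & 0xFFFF
    s2 = (value >> 16) & 0xFFFF
    for b in data:
        s1 = (s1 + b) % MOD_ADLER
        s2 = (s2 + s1) % MOD_ADLER
    return (s2 << 16) | s1

data = b"Wikipedia"
assert adler32(data) == zlib.adler32(data)
```

This is the checksum the zlib stream format stores after the compressed data, which inflate verifies at end of stream.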
favor or force Huffman coding Use a faster search when the previous match is longer than this Stop searching when current match exceeds this literal and length tree distance tree Huffman tree for bit lengths Desc for literal tree desc for distance tree desc for bit length tree number of codes at each bit length for an optimal tree heap used to build the Huffman trees number of elements in the heap element of largest frequency Depth of each subtree used as tie breaker for trees of equal frequency index for literals or lengths Size of match buffer for literals/lengths. There are 4 reasons for limiting lit_bufsize to 64K: - frequencies can be kept in 16 bit counters - if compression is not successful for the first block, all input data is still in the Window so we can still emit a stored block even when input comes from standard input. (This can also be done for all blocks if lit_bufsize is not greater than 32K.) - if compression is not successful for a file smaller than 64K, we can even emit a stored file instead of a stored block (saving 5 bytes). This is applicable only for zip (not gzip or zlib). - creating new Huffman trees less frequently may not provide fast adaptation to changes in the input data statistics. (Take for example a binary file with poorly compressible code followed by a highly compressible string table.) Smaller buffer sizes give fast adaptation but have of course the overhead of transmitting trees more frequently. - I can't count above 4 running index in l_buf index of pending_buf bit length of current block with optimal trees bit length of current block with static trees number of string matches in current block bit length of EOB code for last block Output buffer. bits are inserted starting at the bottom (least significant bits). Number of valid bits in bi_buf. All bits above the last valid bit are always zero. Default constructor Initialization Initialize the tree data structures for a new zlib stream.
Initializes block Restore the heap property by moving down the tree starting at node k, exchanging a node with the smallest of its two sons if necessary, stopping when the heap property is re-established (each father smaller than its two sons). Scan a literal or distance tree to determine the frequencies of the codes in the bit length tree. Construct the Huffman tree for the bit lengths and return the index in bl_order of the last bit length code to send. Send the header for a block using dynamic Huffman trees: the counts, the lengths of the bit length codes, the literal tree and the distance tree. IN assertion: lcodes >= 257, dcodes >= 1, blcodes >= 4. Send a literal or distance tree in compressed form, using the codes in bl_tree. Output a byte on the stream. IN assertion: there is enough room in Pending_buf. Adds a byte to the buffer Send one empty static block to give enough lookahead for inflate. This takes 10 bits, of which 7 may remain in the bit buffer. The current inflate code requires 9 bits of lookahead. If the last two codes for the previous block (real code plus EOB) were coded on 5 bits or less, inflate may have only 5+3 bits of lookahead to decode the last real code. In this case we send two empty static blocks instead of one. (There are no problems if the previous block is stored or fixed.) To simplify the code, we assume the worst case of last real code encoded on one bit only. Save the match info and tally the frequency counts. Return true if the current block must be flushed. Send the block data compressed using the given Huffman trees Set the data type to ASCII or BINARY, using a crude approximation: binary if more than 20% of the bytes are <= 6 or >= 128, ascii otherwise. IN assertion: the fields freq of dyn_ltree are set and the total of all frequencies does not exceed 64K (to fit in an int on 16 bit machines). Flush the bit buffer, keeping at most 7 bits in it. 
Flush the bit buffer and align the output on a byte boundary Copy a stored block, storing first the length and its one's complement if requested. Flushes block Copy without compression as much as possible from the input stream, return the current block state. This function does not insert new strings in the dictionary since uncompressible data is probably not useful. This function is used only for the level=0 compression option. NOTE: this function should be optimized to avoid extra copying from Window to Pending_buf. Send a stored block Determine the best encoding for the current block: dynamic trees, static trees or store, and output the encoded block to the zip file. Fill the Window when the lookahead becomes insufficient. Updates strstart and lookahead. IN assertion: lookahead less than MIN_LOOKAHEAD OUT assertions: strstart less than or equal to window_size-MIN_LOOKAHEAD At least one byte has been ReadPos, or _avail_in == 0; reads are performed for at least two bytes (required for the zip translate_eol option -- not supported here). Compress as much as possible from the input stream, return the current block state. This function does not perform lazy evaluation of matches and inserts new strings in the dictionary only for unmatched strings or for short matches. It is used only for the fast compression options. Same as above, but achieves better compression. We use a lazy evaluation for matches: a match is finally adopted only if there is no better match at the next Window position. 
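The "copy without compression" path described above corresponds to zlib's stored blocks, selected by compression level 0; a quick Python illustration of the small stored-block framing overhead:

```python
import zlib

data = b"incompressible-looking payload " * 32

# Level 0 emits stored (uncompressed) blocks: the output is the input
# plus a few bytes of zlib header, block framing, and Adler-32 trailer.
stored = zlib.compress(data, 0)

assert len(stored) > len(data)           # only framing overhead is added
assert zlib.decompress(stored) == data   # round-trips exactly
```

As the text notes, this path never inserts strings into the dictionary, since presumed-incompressible data is not useful for later matches.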
Finds the longest matching data part Deflate algorithm initialization ZStream object Compression level Window bits A result code Initializes deflate algorithm ZStream object Compression level Operation result result code Deflate algorithm initialization ZStream object Compression level Window bits Memory level Compression strategy Operation result code Resets the current state of deflate object Finish compression with deflate algorithm Sets deflate algorithm parameters Sets deflate dictionary Performs data compression with the deflate algorithm Static constructor initializes config_table current inflate_block mode if STORED, bytes left to copy table lengths (14 bits) index into blens (or border) bit lengths of codes bit length tree depth bit length decoding tree if CODES, current state true if this block is the last block single malloc for tree space need check check on output Gets or sets the sliding window. Gets or sets the one byte after sliding Window. Gets or sets the Window ReadPos pointer. Gets or sets the Window WritePos pointer. Gets or sets the bits in bit buffer. Gets or sets the bit buffer. Resets this InfBlocks class instance Block processing functions Frees inner buffers Sets dictionary Returns true if inflate is currently at the End of a block generated by Z_SYNC_FLUSH or Z_FULL_FLUSH. 
copy as much as possible from the sliding Window to the output area Inflate codes mode This class is used by the InfBlocks class current inflate_codes mode length pointer into tree current index of the tree ltree bits decoded per branch dtree bits decoded per branch literal/length/eob tree literal/length/eob tree index distance tree distance tree index Constructor which takes literal, distance trees, corresponding bits decoded for branches, corresponding indexes and a ZStream object Constructor which takes literal, distance trees, corresponding bits decoded for branches and a ZStream object Block processing method An instance of the InfBlocks class A ZStream object A result code Frees allocated resources Fast inflate procedure. Called with number of bytes left to write in the Window at least 258 (the maximum string length) and number of input bytes available at least ten. The ten bytes are six bytes for the longest length/distance pair plus four bytes for overloading the bit buffer. This enumeration contains modes of inflate processing waiting for method byte waiting for flag byte four dictionary check bytes to go three dictionary check bytes to go two dictionary check bytes to go one dictionary check byte to go waiting for inflateSetDictionary decompressing blocks four check bytes to go three check bytes to go two check bytes to go one check byte to go finished check, done got an error--stay here current inflate mode if FLAGS, method byte computed check value stream check value if BAD, inflateSync's marker bytes count flag for no wrapper log2(Window size) (8..15, defaults to 15) current inflate_blocks state Resets the Inflate algorithm A ZStream object A result code Finishes the inflate algorithm processing A ZStream object Operation result code Initializes the inflate algorithm A ZStream object Window size Operation result code Runs inflate algorithm A ZStream object Flush strategy Operation result code Sets dictionary for the inflate operation A ZStream object
An array of byte - dictionary Dictionary length Operation result code Inflate synchronization A ZStream object Operation result code Returns true if inflate is currently at the End of a block generated by Z_SYNC_FLUSH or Z_FULL_FLUSH. This function is used by one PPP implementation to provide an additional safety check. PPP uses Z_SYNC_FLUSH but removes the length bytes of the resulting empty stored block. When decompressing, PPP checks that at the End of input packet, inflate is waiting for these length bytes. Creates a header remover. As long as the header is not complete, calls to Remover.MoveNext() return true and adjust the state of z. Stream where gzip header will appear. Contains utility information for the InfTree class Given a list of code lengths and a maximum table size, make a set of tables to decode that set of codes. Return (int)ZLibResultCode.Z_OK on success, (int)ZLibResultCode.Z_BUF_ERROR if the given code set is incomplete (the tables are still built in this case), (int)ZLibResultCode.Z_DATA_ERROR if the input is invalid (an over-subscribed set of lengths), or (int)ZLibResultCode.Z_MEM_ERROR if not enough memory. Build trees Builds dynamic trees Build fixed trees Bit length codes must not exceed MAX_BL_BITS bits This class represents a tree and is used in the Deflate class The dynamic tree Largest code with non zero frequency the corresponding static tree The dynamic tree Largest code with non zero frequency the corresponding static tree Mapping from a distance to a distance code. dist is the distance - 1 and must not have side effects. _dist_code[256] and _dist_code[257] are never used. Compute the optimal bit lengths for a tree and update the total bit length for the current block. IN assertion: the fields freq and dad are set, heap[heap_max] and above are the tree nodes sorted by increasing frequency. OUT assertions: the field count is set to the optimal bit length, the array bl_count contains the frequencies for each bit length.
The length opt_len is updated; static_len is also updated if stree is not null. Construct one Huffman tree and assign the code bit strings and lengths. Update the total bit length for the current block. IN assertion: the field freq is set for all tree elements. OUT assertions: the fields count and code are set to the optimal bit length and corresponding code. The length opt_len is updated; static_len is also updated if stree is not null. The field max_code is set. Generate the codes for a given tree and bit counts (which need not be optimal). IN assertion: the array bl_count contains the bit length statistics for the given tree and the field count is set for all tree elements. OUT assertion: the field code is set for all tree elements of non zero code length. Reverse the first count bits of a code, using straightforward code (a faster method would use a table) Some constants for specifying compression levels. Methods which take a compression level as a parameter expect an integer value from 0 to 9. You can either specify an integer value or use constants for the most widely used compression levels. No compression should be used at all. Minimal compression, but greatest speed. Maximum compression, but slowest. Select default compression level (good compression, good speed). Compression strategies. The strategy parameter is used to tune the compression algorithm. The strategy parameter only affects the compression ratio but not the correctness of the compressed output even if it is not set appropriately. This strategy is designed for filtered data. Data which consists mostly of small values with a random distribution should use Z_FILTERED. With this strategy, less string matching is performed. Z_HUFFMAN_ONLY forces Huffman encoding only (no string match) The default strategy is the most commonly used. With this strategy, string matching and Huffman compression are balanced.
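The code bit-reversal step described above (reverse the first count bits of a code, bit by bit) is needed because Huffman codes are assigned MSB-first but emitted into the bit buffer LSB-first; a direct sketch:

```python
def bi_reverse(code, length):
    # Reverse the low `length` bits of `code`, one bit at a time.
    # (As the text notes, a table-driven version would be faster.)
    res = 0
    for _ in range(length):
        res = (res << 1) | (code & 1)
        code >>= 1
    return res

assert bi_reverse(0b0011, 4) == 0b1100
assert bi_reverse(0b101, 3) == 0b101   # palindromic codes are unchanged
```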
Flush strategies Do not internalFlush data, but just write data as normal to the output buffer. This is the normal way in which data is written to the output buffer. Obsolete. You should use Z_SYNC_FLUSH instead. All pending output is flushed to the output buffer and the output is aligned on a byte boundary, so that the decompressor can get all input data available so far. All output is flushed as with Z_SYNC_FLUSH, and the compression state is reset so that decompression can restart from this point if previous compressed data has been damaged or if random access is desired. Using Z_FULL_FLUSH too often can seriously degrade the compression. ZLib_InflateSync will locate points in the compressed stream where a full flush has been performed. Notifies the module that the input has now been exhausted. Pending input is processed, pending output is flushed and calls return with Z_STREAM_END if there was enough output space. Results of operations in ZLib library No failure was encountered, the operation completed without problem. No failure was encountered, and the input has been exhausted. A preset dictionary is required for decompression of the data. An internal error occurred The stream structure was inconsistent Input data has been corrupted (for decompression). Memory allocation failed. There was not enough space in the output buffer. The version supplied does not match that supported by the ZLib module. States of deflate operation Data block types, i.e.
binary or ascii text Helper class Copies large array which was passed as srcBuf to the Initialize method into the destination array which was passed as destBuff The number of bytes copied Max Window size preset dictionary flag in zlib header The size of the buffer Deflate compression method index This method returns the literal value received The literal to return The received value This method returns the literal value received The literal to return The received value This method returns the literal value received The literal to return The received value This method returns the literal value received The literal to return The received value Performs an unsigned bitwise right shift with the specified number Number to operate on Amount of bits to shift The resulting number from the shift operation Performs an unsigned bitwise right shift with the specified number Number to operate on Amount of bits to shift The resulting number from the shift operation Performs an unsigned bitwise right shift with the specified number Number to operate on Amount of bits to shift The resulting number from the shift operation Performs an unsigned bitwise right shift with the specified number Number to operate on Amount of bits to shift The resulting number from the shift operation Reads a number of characters from the current source Stream and writes the data to the target array at the specified index. The source Stream to ReadPos from. Contains the array of characters ReadPos from the source Stream. The starting index of the target array. The maximum number of characters to ReadPos from the source Stream. The number of characters ReadPos. The number will be less than or equal to count depending on the data available in the source Stream. Returns -1 if the End of the stream is reached. Reads a number of characters from the current source TextReader and writes the data to the target array at the specified index.
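The "unsigned bitwise right shift" helper described above compensates for languages whose >> operator is an arithmetic shift on signed integers; a hypothetical Python sketch for 32-bit values (the function name is illustrative, not the helper's actual name):

```python
def urshift32(value, bits):
    # Interpret `value` as an unsigned 32-bit quantity, then shift,
    # so the vacated high bits fill with zeros instead of the sign bit.
    return (value & 0xFFFFFFFF) >> bits

# -1 as an unsigned 32-bit value is 0xFFFFFFFF, so high bits become zero:
assert urshift32(-1, 28) == 0xF
assert urshift32(0x80000000, 31) == 1
```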
The source TextReader to ReadPos from Contains the array of characters ReadPos from the source TextReader. The starting index of the target array. The maximum number of characters to ReadPos from the source TextReader. The number of characters ReadPos. The number will be less than or equal to count depending on the data available in the source TextReader. Returns -1 if the End of the stream is reached. Converts a string to an array of bytes The string to be converted The new array of bytes Converts an array of bytes to an array of chars The array of bytes to convert The new array of chars see definition of array dist_code below ZStream is used to store user data to compress/decompress. Next input byte array Index of the first byte in the input array. Number of bytes available at _next_in total number of input bytes read so far Byte array for the next output block Index of the first byte in the _next_out array Remaining free space at _next_out Total number of bytes in output array A string to store operation result message (corresponding to result codes) A deflate object to perform data compression Inflate object to perform data decompression Adler-32 value for uncompressed data processed so far. Best guess about the data type: ascii or binary Gets/Sets the next input byte array. Index of the first byte in the input array. Gets/Sets the number of bytes available in the input buffer. Gets/Sets the total number of bytes in the input buffer. Gets/Sets the buffer for the next output data. Gets/Sets the index of the first byte in the byte array to write to. Gets/Sets the remaining free space in the buffer. Gets/Sets the total number of bytes in the output array. Gets/Sets the last error message that occurred during class operations. A deflate object to perform data compression Inflate object to perform data decompression Initializes the internal stream state for decompression. The fields , must be initialized before by the caller.
If is not null and is large enough (the exact value depends on the compression method), determines the compression method from the ZLib header and allocates all data structures accordingly; otherwise the allocation will be deferred to the first call of . inflateInit returns if success, if there was not enough memory, if the ZLib library version is incompatible with the version assumed by the caller. is set to null if there is no error message. does not perform any decompression apart from reading the ZLib header if present: this will be done by . (So and may be modified, but and are unchanged.) This is another version of with an extra parameter. The fields , must be initialized before by the caller. If is not null and is large enough (the exact value depends on the compression method), determines the compression method from the ZLib header and allocates all data structures accordingly; otherwise the allocation will be deferred to the first call of . The windowBits parameter is the base two logarithm of the maximum window size (the size of the history buffer). It should be in the range 8..15 for this version of the library. The default value is 15 if is used instead. If a compressed stream with a larger window size is given as input, will return with the error code instead of trying to allocate a larger window. inflateInit returns if success, if there was not enough memory, if a parameter is invalid (such as a negative memLevel). is set to null if there is no error message. does not perform any decompression apart from reading the ZLib header if present: this will be done by . (So and may be modified, but and are unchanged.) This method decompresses as much data as possible, and stops when the input buffer () becomes empty or the output buffer () becomes full. It may introduce some output latency (reading input without producing any output) except when forced to flush. The detailed semantics are as follows.
inflate performs one or both of the following actions: Decompress more input starting at next_in and update next_in and avail_in accordingly. If not all input can be processed (because there is not enough room in the output buffer), next_in is updated and processing will resume at this point for the next call of inflate. Provide more output starting at next_out and update next_out and avail_out accordingly. inflate provides as much output as possible, until there is no more input data or no more space in the output buffer (see below about the flush parameter). Flush strategy to use. Before the call of inflate, the application should ensure that at least one of the actions is possible, by providing more input and/or consuming more output, and updating the next_* and avail_* values accordingly. The application can consume the uncompressed output when it wants, for example when the output buffer is full (avail_out == 0), or after each call of inflate. If inflate returns Z_OK with zero avail_out, it must be called again after making room in the output buffer because there might be more output pending. If the flush parameter is set to Z_SYNC_FLUSH, inflate flushes as much output as possible to the output buffer. The flushing behavior of inflate is not specified for values of the flush parameter other than Z_SYNC_FLUSH and Z_FINISH, but the current implementation actually flushes as much output as possible anyway. inflate should normally be called until it returns Z_STREAM_END or an error. However if all decompression is to be performed in a single step (a single call of inflate), the flush parameter should be set to Z_FINISH. In this case all pending input is processed and all pending output is flushed; avail_out must be large enough to hold all the uncompressed data. (The size of the uncompressed data may have been saved by the compressor for this purpose.) The next operation on this stream must be inflateEnd to deallocate the decompression state. The use of Z_FINISH is never required, but can be used to inform inflate that a faster routine may be used for the single call.
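The inflate loop just described (feed input, drain output, repeat until stream end) corresponds at the Python level to `zlib.decompressobj`. This sketch feeds the compressed data in small slices, as an application with a bounded input buffer would:

```python
import zlib

payload = b"streaming example " * 500
compressed = zlib.compress(payload)

d = zlib.decompressobj()
out = bytearray()
# Feed 64-byte slices to mimic a small avail_in; collect whatever
# output each call produces, then finish the stream with flush().
for i in range(0, len(compressed), 64):
    out += d.decompress(compressed[i:i + 64])
out += d.flush()
assert bytes(out) == payload
```

The decompressor buffers any input it cannot yet consume, which is the Python-level counterpart of inflate leaving next_in/avail_in pointing at the unprocessed remainder.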
If a preset dictionary is needed at this point (see inflateSetDictionary), inflate sets strm->adler to the adler32 checksum of the dictionary chosen by the compressor and returns Z_NEED_DICT; otherwise it sets strm->adler to the adler32 checksum of all output produced so far (that is, total_out bytes) and returns Z_OK, Z_STREAM_END or an error code as described below. At the end of the stream, inflate checks that its computed adler32 checksum is equal to that saved by the compressor and returns Z_STREAM_END only if the checksum is correct. inflate returns Z_OK if some progress has been made (more input processed or more output produced), Z_STREAM_END if the end of the compressed data has been reached and all uncompressed output has been produced, Z_NEED_DICT if a preset dictionary is needed at this point, Z_DATA_ERROR if the input data was corrupted (input stream not conforming to the ZLib format or incorrect adler32 checksum), Z_STREAM_ERROR if the stream structure was inconsistent (for example if next_in or next_out was null), Z_MEM_ERROR if there was not enough memory, Z_BUF_ERROR if no progress is possible or if there was not enough room in the output buffer when Z_FINISH is used. In the Z_DATA_ERROR case, the application may then call inflateSync to look for a good compression block. All dynamically allocated data structures for this stream are freed. This function discards any unprocessed input and does not flush any pending output. inflateEnd returns Z_OK if success, Z_STREAM_ERROR if the stream state was inconsistent. In the error case, msg may be set but then points to a static string (which must not be deallocated). Skips invalid compressed data until a full flush point (see the description of deflate with Z_FULL_FLUSH) can be found, or until all available input is skipped. No output is provided. inflateSync returns Z_OK if a full flush point has been found, Z_BUF_ERROR if no more input was provided, Z_DATA_ERROR if no flush point has been found, or Z_STREAM_ERROR if the stream structure was inconsistent. In the success case, the application may save the current value of total_in which indicates where valid compressed data was found.
In the error case, the application may repeatedly call inflateSync, providing more input each time, until success or end of the input data. Initializes the decompression dictionary from the given uncompressed byte sequence. This function must be called immediately after a call of inflate if this call returned Z_NEED_DICT. The dictionary chosen by the compressor can be determined from the Adler32 value returned by this call of inflate. The compressor and decompressor must use exactly the same dictionary. A byte array - a dictionary. The length of the dictionary. inflateSetDictionary returns Z_OK if success, Z_STREAM_ERROR if a parameter is invalid (such as a null dictionary) or the stream state is inconsistent, Z_DATA_ERROR if the given dictionary doesn't match the expected one (incorrect Adler32 value). inflateSetDictionary does not perform any decompression: this will be done by subsequent calls of inflate. Initializes the internal stream state for compression. An integer value from 0 to 9 indicating the desired compression level. DeflateInit returns Z_OK if success, Z_MEM_ERROR if there was not enough memory, Z_STREAM_ERROR if level is not a valid compression level. msg is set to null if there is no error message. DeflateInit does not perform any compression: this will be done by deflate. Initializes the internal stream state for compression. An integer value from 0 to 9 indicating the desired compression level. The windowBits parameter is the base two logarithm of the window size (the size of the history buffer). It should be in the range 8..15 for this version of the library. Larger values of this parameter result in better compression at the expense of memory usage. The default value is 15 if DeflateInit is used instead. DeflateInit returns Z_OK if success, Z_MEM_ERROR if there was not enough memory, Z_STREAM_ERROR if level is not a valid compression level. msg is set to null if there is no error message. DeflateInit does not perform any compression: this will be done by deflate. Deflate compresses as much data as possible, and stops when the input buffer becomes empty or the output buffer becomes full.
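The 0..9 level parameter described for DeflateInit maps directly onto the level argument of Python's `zlib.compress`/`zlib.compressobj`. A quick comparison (exact sizes vary with the input, so only the round trips and the obvious relationships are checked):

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 100
stored = zlib.compress(data, 0)   # level 0: stored blocks, no compression
fast = zlib.compress(data, 1)     # level 1: fastest
best = zlib.compress(data, 9)     # level 9: best compression
assert zlib.decompress(fast) == data and zlib.decompress(best) == data
assert len(stored) > len(data)    # stored blocks add framing overhead
assert len(best) < len(data)
```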
It may introduce some output latency (reading input without producing any output) except when forced to flush. The detailed semantics are as follows. deflate performs one or both of the following actions: Compress more input starting at next_in and update next_in and avail_in accordingly. If not all input can be processed (because there is not enough room in the output buffer), next_in and avail_in are updated and processing will resume at this point for the next call of deflate. Provide more output starting at next_out and update next_out and avail_out accordingly. This action is forced if the parameter flush is non zero. Forcing flush frequently degrades the compression ratio, so this parameter should be set only when necessary (in interactive applications). Some output may be provided even if flush is not set. The flush strategy to use. Before the call of deflate, the application should ensure that at least one of the actions is possible, by providing more input and/or consuming more output, and updating avail_in or avail_out accordingly; avail_out should never be zero before the call. The application can consume the compressed output when it wants, for example when the output buffer is full (avail_out == 0), or after each call of deflate. If deflate returns Z_OK with zero avail_out, it must be called again after making room in the output buffer because there might be more output pending. If the parameter flush is set to Z_SYNC_FLUSH, all pending output is flushed to the output buffer and the output is aligned on a byte boundary, so that the decompressor can get all input data available so far. (In particular avail_in is zero after the call if enough output space has been provided before the call.) Flushing may degrade compression for some compression algorithms and so it should be used only when necessary. If flush is set to Z_FULL_FLUSH, all output is flushed as with Z_SYNC_FLUSH, and the compression state is reset so that decompression can restart from this point if previous compressed data has been damaged or if random access is desired. Using Z_FULL_FLUSH too often can seriously degrade the compression.
If deflate returns with avail_out == 0, this function must be called again with the same value of the flush parameter and more output space (updated avail_out), until the flush is complete (deflate returns with non-zero avail_out). If the parameter flush is set to Z_FINISH, pending input is processed, pending output is flushed and deflate returns with Z_STREAM_END if there was enough output space; if deflate returns with Z_OK, this function must be called again with Z_FINISH and more output space (updated avail_out) but no more input data, until it returns with Z_STREAM_END or an error. After deflate has returned Z_STREAM_END, the only possible operation on the stream is deflateEnd. Z_FINISH can be used immediately after DeflateInit if all the compression is to be done in a single step. In this case, avail_out must be at least 0.1% larger than avail_in plus 12 bytes. If deflate does not return Z_STREAM_END, then it must be called again as described above. deflate sets strm->adler to the adler32 checksum of all input read so far (that is, total_in bytes). deflate may update data_type if it can make a good guess about the input data type (Z_ASCII or Z_BINARY). In doubt, the data is considered binary. This field is only for information purposes and does not affect the compression algorithm in any manner. deflate returns Z_OK if some progress has been made (more input processed or more output produced), Z_STREAM_END if all input has been consumed and all output has been produced (only when flush is set to Z_FINISH), Z_STREAM_ERROR if the stream state was inconsistent (for example if next_in or next_out was null), Z_BUF_ERROR if no progress is possible (for example avail_in or avail_out was zero). All dynamically allocated data structures for this stream are freed. This function discards any unprocessed input and does not flush any pending output. deflateEnd returns Z_OK if success, Z_STREAM_ERROR if the stream state was inconsistent, Z_DATA_ERROR if the stream was freed prematurely (some input or output was discarded). In the error case, msg may be set but then points to a static string (which must not be deallocated). Dynamically update the compression level and compression strategy. The interpretation of level is as in DeflateInit.
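The flush behavior described above (Z_SYNC_FLUSH byte-aligns the output so the decompressor can read everything produced so far; Z_FINISH terminates the stream) can be observed through Python's `zlib.compressobj`:

```python
import zlib

c = zlib.compressobj()
chunk = c.compress(b"first record")
chunk += c.flush(zlib.Z_SYNC_FLUSH)   # byte-align; the stream stays open

d = zlib.decompressobj()
# After a sync flush the decompressor can recover all input so far.
assert d.decompress(chunk) == b"first record"

tail = c.compress(b"second record") + c.flush()  # default Z_FINISH ends the stream
assert d.decompress(tail) == b"second record"
```

Without the sync flush, some or all of the first record could still be sitting in the compressor's internal buffers, which is exactly the output latency the text warns about.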
This can be used to switch between compression and straight copy of the input data, or to switch to a different kind of input data requiring a different strategy. If the compression level is changed, the input available so far is compressed with the old level (and may be flushed); the new level will take effect only at the next call of deflate. An integer value indicating the desired compression level. A flush strategy to use. Before the call of deflateParams, the stream state must be set as for a call of deflate, since the currently available input may have to be compressed and flushed. In particular, avail_out must be non-zero. deflateParams returns Z_OK if success, Z_STREAM_ERROR if the source stream state was inconsistent or if a parameter was invalid, Z_BUF_ERROR if avail_out was zero. Initializes the compression dictionary from the given byte sequence without producing any compressed output. This function must be called immediately after DeflateInit, before any call of deflate. The compressor and decompressor must use exactly the same dictionary (see inflateSetDictionary). A byte array - a dictionary. The length of the dictionary byte array. The dictionary should consist of strings (byte sequences) that are likely to be encountered later in the data to be compressed, with the most commonly used strings preferably put towards the end of the dictionary. Using a dictionary is most useful when the data to be compressed is short and can be predicted with good accuracy; the data can then be compressed better than with the default empty dictionary. Depending on the size of the compression data structures selected by DeflateInit, a part of the dictionary may in effect be discarded, for example if the dictionary is larger than the window size in deflate. Thus the strings most likely to be useful should be put at the end of the dictionary, not at the front. Upon return of this function, adler is set to the Adler32 value of the dictionary; the decompressor may later use this value to determine which dictionary has been used by the compressor.
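The preset-dictionary mechanism (deflateSetDictionary/inflateSetDictionary) is exposed in Python via the `zdict` argument of `zlib.compressobj` and `zlib.decompressobj`; both sides must supply the identical dictionary, and short, predictable payloads benefit the most:

```python
import zlib

# Strings expected in the payload; per the note above, the most likely
# matches belong at the end of the dictionary.
zdict = b'{"status": "error", "status": "ok", "result": '
msg = b'{"status": "ok", "result": 42}'

c = zlib.compressobj(zdict=zdict)
with_dict = c.compress(msg) + c.flush()

d = zlib.decompressobj(zdict=zdict)
assert d.decompress(with_dict) == msg

c2 = zlib.compressobj()
without = c2.compress(msg) + c2.flush()
assert len(with_dict) < len(without)  # dictionary matches shrink the output
```

The dictionary bytes themselves never appear in the compressed stream; only the dictionary's Adler32 value is recorded so the decompressor can verify it was given the right one.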
(The Adler32 value applies to the whole dictionary even if only a subset of the dictionary is actually used by the compressor.) deflateSetDictionary returns Z_OK if success, or Z_STREAM_ERROR if a parameter is invalid (such as a null dictionary) or the stream state is inconsistent (for example if deflate has already been called for this stream or if the compression method is bsort). deflateSetDictionary does not perform any compression: this will be done by deflate. Flush as much pending output as possible. All output goes through this function so some applications may wish to modify it to avoid allocating a large buffer and copying into it. Read a new buffer from the current input stream, update the adler32 and total number of bytes read. All input goes through this function so some applications may wish to modify it to avoid allocating a large buffer and copying from it. Frees all inner buffers. Exceptions that occur in ZStream. Default constructor. Constructor which takes one parameter - an error message.
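The running adler32 that the read path maintains can be reproduced with `zlib.adler32`, which accepts the previous value so the checksum is updated incrementally, chunk by chunk:

```python
import zlib

whole = zlib.adler32(b"hello world")

# Update the checksum chunk by chunk; the running value ends up equal
# to the checksum of the concatenated data.
running = zlib.adler32(b"hello ")
running = zlib.adler32(b"world", running)
assert running == whole
```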