Documents are now decoded before they are parsed, which avoids problems with UTF-8 documents and gets rid of the "Parsing of undecoded UTF-8 will give garbage when decoding entities" warnings. Regular expressions are allowed in the suppression file (sketched below), and the program complains if the suppression file is not a proper file. Handling of HTTP and FTP servers that have problems responding to HEAD requests is now more robust. Problems are reported against the original URL. XHTML compliance was ensured.
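As an illustration of the regex-based suppression mentioned above, here is a minimal Python sketch; the file format and function names are assumptions for illustration, not Checkbot's actual implementation:

    import re

    def load_suppressions(path):
        # Hypothetical format: one regular expression per line,
        # skipping blank lines and "#" comments.
        patterns = []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    patterns.append(re.compile(line))
        return patterns

    def is_suppressed(url, patterns):
        # A problem is suppressed when any pattern matches its URL.
        return any(p.search(url) for p in patterns)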
This release no longer reports errors for links that cannot be expected to be valid all the time (e.g. the classid attribute of an object element). It has better fallbacks for some cases where the HEAD request does not work (sketched below). More classes and IDs have been added to allow more styling of the results pages, and an example CSS file is included. XHTML compliance is ensured. There are better checks for optional dependencies.
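A rough sketch of a HEAD-to-GET fallback like the one described above; the specific status codes retried here are an assumption, not Checkbot's exact behavior:

    import urllib.request
    import urllib.error

    def check_url(url, timeout=10):
        # Try a cheap HEAD request first.
        try:
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status
        except urllib.error.HTTPError as err:
            # Some servers mishandle HEAD; retry with GET before
            # declaring the link broken.
            if err.code in (403, 405, 501):
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.status
            raise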
A silly build-related problem that prevented checkbot 1.76 from running at all was fixed. The presence of a robots meta tag is now checked and acted upon.
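A minimal sketch of detecting a robots meta tag with Python's standard HTML parser; how the directives are then acted upon (skipping indexing or link extraction) is an assumption:

    from html.parser import HTMLParser

    class RobotsMetaParser(HTMLParser):
        # Records noindex/nofollow directives from <meta name="robots">.
        def __init__(self):
            super().__init__()
            self.noindex = False
            self.nofollow = False

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
                content = (attrs.get("content") or "").lower()
                self.noindex = "noindex" in content
                self.nofollow = "nofollow" in content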
Error reports now include the page title for easier identification. javascript: links are now ignored because they cannot be checked. The documentation has been updated.
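Skipping javascript: links comes down to a scheme test; any scheme listed beyond javascript here is illustrative, not necessarily one Checkbot ignores:

    from urllib.parse import urlparse

    UNCHECKABLE_SCHEMES = {"javascript", "data"}

    def should_check(link):
        # javascript: URLs run code in a browser and cannot be
        # fetched, so a checker drops them before queuing.
        return urlparse(link).scheme.lower() not in UNCHECKABLE_SCHEMES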
A --cookies option was added to allow cookies to be set while checking, along with a --noproxy option for indicating which domains should not be passed through the proxy. A new error code is generated for unknown schemes. Minor bugfixes and documentation updates were applied.
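The --noproxy decision can be pictured as a host-suffix match; the comma-separated format mirrors the common no_proxy convention and is an assumption about the option's exact syntax:

    from urllib.parse import urlparse

    def use_proxy(url, noproxy="localhost,.example.com"):
        host = urlparse(url).hostname or ""
        for domain in noproxy.split(","):
            domain = domain.strip().lstrip(".")
            # Bypass the proxy for the domain itself and any subdomain.
            if host == domain or host.endswith("." + domain):
                return False
        return True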