About

This subproject addresses conformance testing of XML processors against the XML 1.0 (2nd edition) specification. Here you will find test results, as well as information about generating such reports yourself using the related components.

Project Components

See this section of the first XML.com conformance article for background on this XML testing. Key components involved in that testing are:
These rely on unofficial revisions to the NIST/OASIS XML conformance test cases (2nd edition). The suite of test cases needs updates to address XML specification errata, as well as problems in the version provided by the NIST/OASIS team (evidently someone mangled a lot of test cases by misusing ZIP). There are also some additional tests contributed by this SourceForge project.

W3C finally got involved; see its new XML Conformance Test Suites page, which builds on the tests accumulated by OASIS.

An interesting thing about that categorized database of test cases is that it is itself in XML. The test harnesses rely on a bootstrap XML parser (which can of course be conformance tested, as a sanity and regression check), and XML tools can be used to massage the database; a sketch of such a database-driven harness appears at the end of this page.

Namespaces: NYET

There aren't yet any test cases addressing namespace support as such. XML 1.0 processors by definition do not conform to the XML Namespaces specification, since they accept colons in places where that specification disallows them; a small probe illustrating the divergence also appears at the end of this page. Properly categorized test cases must address such issues: there are both syntactic issues to test and structural ones requiring definitions of referenced prefixes.

Similarly, the test harnesses don't yet address namespace support. For SAX2 parsers, namespace support should be tested as part of the XML processor API. For DOM/COM based processor APIs, this requires DOM Level 2 support.

Helping Out

Associated with this SourceForge project are a forum and a developer's mailing list; check the project's SourceForge control panel to see the latest. You can run the tests against a parser and submit the results; ideally, project teams would do this whenever they publish a new parser release. Test results should be identified with the developer who reported them. If you have a database of test cases of your own (say, things that previously broke your parser but no longer do), please drop a line.
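As a sketch of how a database-driven harness might look, the Java program below reads a simplified test-case database with a bootstrap SAX parser and runs each referenced document through a parser under test. It assumes each TEST element carries ID, TYPE, and URI attributes and that the URIs are relative to the database file; the real OASIS/NIST database is richer (profiles, external entities, expected output), so treat this as illustrative only, not the project's actual harness.

    import java.io.File;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.SAXParseException;
    import org.xml.sax.helpers.DefaultHandler;

    public class MiniHarness {
        public static void main(String[] args) throws Exception {
            File database = new File(args[0]);       // path to the test-case database
            File baseDir = database.getParentFile();

            // Bootstrap parser: reads the database itself.  It could be the
            // same parser being tested, doubling as a sanity/regression check.
            SAXParser bootstrap = SAXParserFactory.newInstance().newSAXParser();
            bootstrap.parse(database, new DefaultHandler() {
                @Override
                public void startElement(String uri, String local, String qName,
                                         Attributes atts) {
                    if (!"TEST".equals(qName))
                        return;
                    String id = atts.getValue("ID");      // assumed attribute names
                    String type = atts.getValue("TYPE");  // "valid", "not-wf", ...
                    String doc = atts.getValue("URI");
                    boolean rejected = false;
                    try {
                        // Parser under test; here simply another JAXP SAX parser.
                        SAXParserFactory.newInstance().newSAXParser()
                            .parse(new File(baseDir, doc), new DefaultHandler());
                    } catch (SAXParseException e) {
                        rejected = true;                  // fatal error reported
                    } catch (Exception e) {
                        rejected = true;                  // treat other failures alike
                    }
                    // A "not-wf" case must be rejected; a "valid" case must not.
                    boolean pass = "not-wf".equals(type) ? rejected : !rejected;
                    System.out.println((pass ? "PASS " : "FAIL ") + id);
                }
            });
        }
    }

A hypothetical invocation would be "java MiniHarness path/to/xmlconf.xml"; the actual database name and layout depend on which revision of the suite you have.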
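And as a small illustration of the namespace point above, the probe below parses a document whose element name uses an undeclared prefix. Under XML 1.0 alone the colon is just another name character, so a plain (non-namespace-aware) processor accepts it; with namespace processing turned on, a conforming parser must reject the unbound prefix. This is generic JAXP/SAX2 code, not part of the project's harness.

    import java.io.StringReader;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.InputSource;
    import org.xml.sax.helpers.DefaultHandler;

    public class NamespaceProbe {

        // Well-formed under XML 1.0 (a colon is an ordinary name character),
        // but the "a" prefix is never declared, so Namespaces in XML is violated.
        private static final String DOC = "<a:b/>";

        public static void main(String[] args) throws Exception {
            for (boolean aware : new boolean[] { false, true }) {
                SAXParserFactory factory = SAXParserFactory.newInstance();
                factory.setNamespaceAware(aware);   // false = plain XML 1.0 view
                try {
                    factory.newSAXParser().parse(
                            new InputSource(new StringReader(DOC)),
                            new DefaultHandler());
                    System.out.println("namespaceAware=" + aware + ": accepted");
                } catch (Exception e) {
                    System.out.println("namespaceAware=" + aware + ": rejected ("
                            + e.getMessage() + ")");
                }
            }
        }
    }

Real namespace test cases would need the same kind of categorization as the XML 1.0 cases, covering both the purely syntactic issues and the structural ones that depend on where prefixes are declared.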