Machine readable principle hierarchy and technical language? #3081
Comments
@guygastineau if you are simply seeking a way of quickly looking up technique information based on the tool output, I suggest you check out https://www.w3.org/WAI/WCAG22/Techniques/. That would let you map the technique number (F68 in your example) to its name.

I don't know how other feedback from HTML Code Sniffer is provided, but I do know there are other free tools you could try that may give you output in a way you find more useful. You mentioned WAVE. My day job is at IBM, and the Equal Access Checker is entirely free to use and is open source. There is a whole list of tools published by the W3C. You may also want to check out the work of the Accessibility Conformance Testing (ACT) Task Force of the AG WG, who are working on actual test rules.

I am unaware of this information being available as self-contained structured data that encompasses all the interconnections between guidelines, requirements and techniques. Techniques can apply to more than one requirement, so it would be a little messy. If you grab a copy of the GitHub repo, there is a wcag.json file; however, I'm not sure how useful you would find it. For example, the material covering 1.3.1 runs to 500 lines in the JSON file.
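For illustration, here is a minimal Haskell sketch of the kind of lookup described above: it pulls the technique ID out of a Code Sniffer path and builds a link into the Techniques pages. The prefix-to-directory table and the WCAG22 URL base are assumptions drawn from the currently published URL scheme, not an official mapping, so verify them against the Techniques index before relying on this.

```haskell
module TechniqueLink where

import Data.Char (isAlpha)
import Data.List (isPrefixOf)

-- | Last dot-separated segment of a Code Sniffer path, e.g. "F68".
techniqueId :: String -> String
techniqueId = reverse . takeWhile (/= '.') . reverse

-- | Map a technique prefix to its directory in the Techniques URL space.
-- Only a few prefixes are covered here; extend as needed.
techniqueDir :: String -> Maybe String
techniqueDir tid
  | "ARIA" `isPrefixOf` tid = Just "aria"
  | "SCR"  `isPrefixOf` tid = Just "client-side-script"
  | otherwise = case takeWhile isAlpha tid of
      "F" -> Just "failures"
      "G" -> Just "general"
      "H" -> Just "html"
      "C" -> Just "css"
      _   -> Nothing

-- | Build a Techniques URL for a Code Sniffer path, if the prefix is known.
techniqueUrl :: String -> Maybe String
techniqueUrl path = do
  let tid = techniqueId path
  dir <- techniqueDir tid
  pure ("https://www.w3.org/WAI/WCAG22/Techniques/" <> dir <> "/" <> tid)

-- >>> techniqueUrl "WCAG2AA.Principle1.Guideline1_3.1_3_1.F68"
-- Just "https://www.w3.org/WAI/WCAG22/Techniques/failures/F68"
```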
I wanted this too. I figured that a JSON-LD version of the guidelines would help SEO and AI integration. I haven't fleshed it all out yet; it definitely needs to be tested and pushed a lot more, but the roots are there. These will download and not open up in GitHub, but should give you an idea of what I am thinking of:

Now, I don't really understand JSON-LD, but it's a W3C thing, so we should be able to find some folks to help. @guygastineau is this sorta what you were hoping for? @mbgower do you see any concerns with adding files like this?

I don't know that we could include the edu-resources.json officially, as it would point outside the W3C; I just fleshed that out as an example. Having an external reference would allow a government to point an LLM to W3C resources, as well as their own internal resources (or a list of trusted resources). I think this is how it could work, in any case.

You were looking for something to work from the HTML Code Sniffer tool, but starting from axe is probably better. In that case one could have an edu-resources.json file which included references like https://dequeuniversity.com/rules/axe/html/. I think this would minimize the load of finding good, customized answers using an LLM. Probably... needs to be tested.
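To make the idea slightly more concrete, here is a hedged Haskell sketch (assuming the aeson package) of what one record in a hypothetical edu-resources.json could contain: an axe rule ID, the success criteria it maps to, and a list of trusted links. The field names, the rule ID, and the file concept are all illustrative; nothing here is an existing W3C or Deque format.

```haskell
{-# LANGUAGE DeriveGeneric #-}
module EduResources where

import Data.Aeson (ToJSON, encode)
import qualified Data.ByteString.Lazy.Char8 as BL
import GHC.Generics (Generic)

-- | One illustrative record in a hypothetical edu-resources.json.
data EduResource = EduResource
  { ruleId          :: String   -- e.g. an axe-core rule ID (hypothetical mapping)
  , successCriteria :: [String] -- WCAG success criteria the rule relates to
  , resources       :: [String] -- trusted URLs a checker or LLM may point to
  } deriving (Show, Generic)

instance ToJSON EduResource

example :: EduResource
example = EduResource
  { ruleId          = "label"
  , successCriteria = ["4.1.2"]
  , resources       =
      [ "https://dequeuniversity.com/rules/axe/html/"
      , "https://www.w3.org/WAI/WCAG22/Understanding/name-role-value"
      ]
  }

-- Print the record as JSON to stdout.
main :: IO ()
main = BL.putStrLn (encode example)
```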
Worth noting that there is also:
And also for WCAG 2.0:
And from Tenon there is also:
@mbgower Thank you for your reply. Somehow I missed the notification back in February. Yes, the Common Failures, General Techniques, and HTML Techniques appear to be the information I need to associate with output from Code Sniffer. Thank you also for pointing me to the

I am also interested in implementing HTML accessibility verification in Haskell and PureScript, leveraging type logic/programming to achieve a high degree of certainty in meeting requirements. I know this wouldn't be able to handle all cases, but it could be used to implement a WYSIWYG editor for content creators that won't let them create whole classes of accessibility violations. I'll let you all know if I find time to work on that and get anything in a presentable state.

@mgifford thank you very much as well for chiming in; without your comments I wouldn't even have noticed the earlier responses. I will take a look at what you have linked when I get the time.
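As a rough illustration of the type-driven idea mentioned above, here is a minimal Haskell sketch using a smart constructor: because the `AltText` constructor is not exported, an `img` simply cannot be built without non-blank alternative text, so one class of 1.1.1 failures is unrepresentable. This is only a sketch of the approach, not part of any existing library, and attribute escaping is omitted for brevity.

```haskell
module SafeHtml
  ( AltText, mkAltText, Html, render, img ) where

import Data.Char (isSpace)

-- | Alternative text that is guaranteed non-blank by construction.
newtype AltText = AltText String

-- | Smart constructor: rejects empty or whitespace-only strings.
mkAltText :: String -> Maybe AltText
mkAltText s
  | all isSpace s = Nothing
  | otherwise     = Just (AltText s)

-- | Rendered markup (escaping omitted in this sketch).
newtype Html = Html { render :: String }

-- | An <img> cannot be constructed without passing the AltText gate.
img :: AltText -> String -> Html
img (AltText alt) src =
  Html ("<img src=\"" <> src <> "\" alt=\"" <> alt <> "\">")
```

The same pattern extends to other constructs: form controls that require a label value, headings that require non-empty text, and so on.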
Just to be clear on the LLM point, I think providing machine-readable / structured-data versions of various bits of our content is a good thing to do. However, anything we would provide would need to be deterministic, not statistical; i.e. it needs to be generated from the actual source, rather than based on a statistical approach such as an LLM. Using generated examples as a demo is fine, but we need to use the template system and source data to create the actual output.
Thanks @alastc, this was basically a thought experiment. I'd definitely want something that was more authoritative. The data is pretty good, I think, but your approach would be better. I put this into a PR for consideration: w3c/wai-website-data#204
For the purposes of high-level reports at my organization, I am trying to find a way to translate error codes from the HTML Code Sniffer tool into more obvious natural language. It provides a natural-language description for each of the errors it reports, but there is no concise name for the error types. Rather, it provides a WCAG standards path like `WCAG2AA.Principle1.Guideline1_3.1_3_1.F68`. In order to find more information about this, I have to visit the Guidelines, then Understanding 1.3.1, and finally at the bottom of the page I can see that `F68` means _Failure of Success Criterion 4.1.2 due to a user interface control not having a programmatically determined name_. This is an arduous process.

I recognize that my human intelligence and intervention is still required to produce a more general classification for our high-level (read: for non-technical leadership) reports, but I was really hoping I could find some structured data that would let me associate these standards paths with those technical descriptions programmatically. This would allow me (and others) to process this information programmatically up to the point of human intervention before continuing some sort of code generation.

I browsed this repository hoping to find structured data containing the information that I otherwise must access manually via the browser, but it appears that the content is scattered through files specific to the build system for the website presentation. Is there any place where this information can be found as self-contained structured data like `xml` or `json` for machine processing? Alternatively, a mapping from groups of issue paths to broad category names would suffice (it is ultimately what I need to produce anyway). WAVE obviously has some sort of mapping to such broad categories; it appears to be closed source, so I have not found a way to identify this mapping from their materials.

I seriously appreciate all the work your organization does on the WCAG standards and moving web accessibility forward! I am hoping to find a way to generate (at least partially) this mapping programmatically, because, well, I am a programmer, and we are lazy 🤣. Also, I am part of a very small team. We are trying to do the right thing while minimizing manual tasks. Anyway, I appreciate your time in addressing my issue. If such a mapping from codes to general categories (where possible) does not yet exist in the public domain, then we will release a version when we have finished it.
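As a rough sketch of the kind of mapping asked for here, the following Haskell snippet collapses a few Code Sniffer technique IDs into plain-language report categories. The category names and groupings are hypothetical examples; a real table would still need to be curated by hand.

```haskell
module ReportCategories where

import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

-- | Hand-curated mapping from technique ID to a broad report category.
-- The groupings shown here are illustrative only.
categoryTable :: Map String String
categoryTable = Map.fromList
  [ ("F68", "Form controls are missing labels")
  , ("H44", "Form controls are missing labels")
  , ("G18", "Text colour contrast is too low")
  , ("F77", "Duplicate IDs break assistive technology")
  ]

-- | Last dot-separated segment of a Code Sniffer path, e.g. "F68".
techniqueId :: String -> String
techniqueId = reverse . takeWhile (/= '.') . reverse

-- | Look up a broad category for a full Code Sniffer path.
categorise :: String -> Maybe String
categorise = flip Map.lookup categoryTable . techniqueId

-- >>> categorise "WCAG2AA.Principle1.Guideline1_3.1_3_1.F68"
-- Just "Form controls are missing labels"
```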